
Reducing Memory Footprint With Multiprocessing?

One of my applications runs about 100 workers. It started out as a threaded application, but it ran into performance (latency) issues, so I converted those workers to multiprocessing.

Solution 1:

My understanding is that with dynamic languages like Python, copy-on-write is less effective, because a lot more memory gets written to (and therefore copied) after forking. As the Python interpreter progresses through the program, there is far more going on than just your code. For example, reference counting: nearly every object will be written to fairly quickly, because the interpreter must write the updated reference count into the object's memory (triggering a copy of that page).
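A minimal sketch of why this happens, assuming CPython's standard object layout: the reference count lives in each object's header, so even creating a new name for an existing object writes to that object's memory. After a `fork()`, that write would dirty the page and trigger copy-on-write, even though your program "changed nothing".

```python
import sys

# CPython stores a reference count in every object's header.
# Binding a new name to an object increments that count, which
# writes to the object's memory page -- after fork(), this write
# triggers copy-on-write even though the object's value is unchanged.
data = [0] * 1000
before = sys.getrefcount(data)
alias = data                       # new reference -> refcount write
after = sys.getrefcount(data)
assert after == before + 1         # the object's header was modified
```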

With that in mind, you probably need a hybrid threading/processing approach: run multiple processes to take advantage of multiple cores, but have each one run multiple threads so you can reach the level of concurrency you need. You'll just need to experiment with the ratio of threads to processes.
