Python is popular among numeric communities because of its easy-to-use number-crunching modules such as NumPy, SciPy, TensorFlow, Theano, Dask, and Numba, to name but a few. These modules often use parallel processing to exploit all the resources of multi-core processors efficiently. However, when used together in the same application, or in an application that exposes parallelism itself, these Python modules can interfere with each other by requesting too many worker threads. This leads to inefficiency, or even failure of the code due to resource exhaustion. Last year, the Intel® Threading Building Blocks (Intel® TBB) module for Python introduced a new approach to tackling these issues. However, it is limited to a single process and to packages that can switch to the Intel® TBB library for multi-threading (e.g. NumPy, Dask, Joblib, and Numba). In this work, we address both limitations of the existing approach by introducing a way to compose parallelism implemented with the OpenMP* runtime and by supporting multiprocessing coordination for both the Intel® TBB and OpenMP threading runtimes.
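To make the interference concrete, the sketch below illustrates the oversubscription problem described above. It is a minimal, hypothetical model (the names `parallel_library_call` and the pool sizes are ours, not from any library): an application creates its own pool of one worker thread per core, and each worker calls into a parallel library that, uncoordinated, would also spawn one thread per core, so the total thread demand grows quadratically with the core count.

```python
import os
from concurrent.futures import ThreadPoolExecutor

ncores = os.cpu_count()

def parallel_library_call():
    # Stand-in for a library routine (e.g. a NumPy operation backed by
    # an OpenMP-based BLAS) that would, by default, use one worker
    # thread per core for its own parallel region.
    return ncores

# Application-level parallelism: one outer worker per core, each of
# which triggers the library's inner parallelism.
with ThreadPoolExecutor(max_workers=ncores) as pool:
    inner_threads = list(pool.map(lambda _: parallel_library_call(),
                                  range(ncores)))

# Without coordination between the two levels, the total number of
# threads demanded approaches ncores * ncores, oversubscribing the CPU.
total_threads = sum(inner_threads)
```

Coordinating runtimes (as the Intel® TBB module for Python does for TBB-based packages) collapses this multiplicative demand back to roughly one thread per core.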