Parallel and Asynchronous Programming in Data Science
In a data science project, one of the biggest bottlenecks (in terms of time) is the constant wait for data processing code to finish executing. Slow code and connectivity issues affect every step of a typical data science workflow, whether the workload is network I/O or computation. In this talk, I will share common bottlenecks in data processing within a typical data science workflow, and explore the use of parallel and asynchronous programming with the concurrent.futures module in Python to speed up your data processing pipelines so that you can focus on getting value out of your data. Through real-life analogies, you will learn about:
- Sequential vs parallel processing,
- Synchronous vs asynchronous execution,
- Network I/O operations vs computation-driven workloads in a data science workflow,
- When parallelism and asynchronous programming are a good idea,
- How to implement parallel and asynchronous programming with the concurrent.futures module to speed up your data processing pipelines
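As a taste of the last point, here is a minimal sketch of the kind of speedup concurrent.futures offers for I/O-bound work. The `fetch` function and the URLs are hypothetical stand-ins (a `time.sleep` simulates a network call); the same pattern applies to real requests.

```python
import time
from concurrent.futures import ThreadPoolExecutor, as_completed

def fetch(url):
    # Hypothetical I/O-bound task: simulate a network call with a short sleep.
    time.sleep(0.1)
    return f"data from {url}"

urls = [f"https://example.com/page/{i}" for i in range(8)]

start = time.perf_counter()
# Threads suit I/O-bound work: while one thread waits on the "network",
# the others keep running, so 8 tasks finish in roughly one sleep, not eight.
with ThreadPoolExecutor(max_workers=8) as executor:
    futures = {executor.submit(fetch, u): u for u in urls}
    results = [f.result() for f in as_completed(futures)]
elapsed = time.perf_counter() - start

print(len(results), elapsed)
```

For computation-driven workloads, the drop-in swap to `ProcessPoolExecutor` (same `submit`/`as_completed` interface) sidesteps the GIL by using separate processes; the talk covers when each executor is the right choice.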
This talk assumes a basic understanding of data pipelines and data science workflows. While the main target audience is data scientists and engineers building data pipelines, the talk is designed so that anyone with a basic understanding of Python can follow the illustrated concepts and use cases.