Constantly waiting for your data processing code to finish executing? Through real-life stories, we will explore how to leverage parallel and asynchronous programming in Python to speed up your data processing pipelines, so that you can focus more on getting value out of your data. While this talk assumes a basic understanding of data pipelines and data science workflows, anyone with a basic grasp of the Python language will be able to follow the concepts and use cases, which are illustrated with analogies.
In any data-intensive application, one of the biggest bottlenecks (in terms of time) is the constant wait for data processing code to finish executing. Slow code and connectivity issues affect every step of a typical data science workflow, whether the workload is I/O-bound or computation-bound.
In this talk, Chin Hwee will share common bottlenecks in data processing within a typical data science workflow, and explore how parallel and asynchronous programming in the Python Standard Library can speed up your data processing pipelines so that you can focus more on getting value out of your data. Through real-life analogies drawn from her experience in a young data science team working with real-world data, you will learn about:
1. Sequential vs parallel processing,
2. Synchronous vs asynchronous execution,
3. I/O operations vs computation-driven workloads in a data science workflow,
4. When parallelism and asynchronous programming are a good idea,
5. How to implement asynchronous programming using concurrent.futures to speed up your data processing pipelines.
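As a taste of point 5, here is a minimal sketch (not taken from the talk) of using `concurrent.futures` to overlap I/O-bound work. The `fetch` function and its URLs are hypothetical placeholders: `time.sleep` stands in for waiting on a network call, so the eight waits run concurrently in a thread pool instead of one after another. For computation-bound workloads, `ProcessPoolExecutor` would be the drop-in alternative.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def fetch(url):
    """Hypothetical I/O-bound task; time.sleep simulates waiting on a network call."""
    time.sleep(0.1)
    return f"data from {url}"

urls = [f"https://example.com/{i}" for i in range(8)]

start = time.perf_counter()
# executor.map submits all eight calls; the waits overlap across threads,
# so total time is roughly one 0.1 s wait rather than eight sequential ones.
with ThreadPoolExecutor(max_workers=8) as executor:
    results = list(executor.map(fetch, urls))
elapsed = time.perf_counter() - start
```

Swapping `ThreadPoolExecutor` for `ProcessPoolExecutor` keeps the same interface while sidestepping the GIL for CPU-heavy functions.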