Python makes it incredibly easy to build programs that do what you want. But what happens when you want to do the same thing with much more input? One of the easiest paths to better performance on large data sets is to make a program concurrent. But what's involved in that?
Right now, there are any number of ways to do this, and that can be confusing! How does asyncio work? What's the difference between a thread and a process? And what's this Hadoop thing everyone keeps talking about?
In this talk, we'll survey the concurrency models available to you as a Python developer, weigh the tradeoffs and advantages of each, and explain how to select the right one for your purpose.