PySpark in Practice

Description

PyData London 2016

In this talk we will share our best practices for using PySpark in numerous customer-facing data science engagements. Topics covered in this talk are:

  • Configuration
  • Unit testing with PySpark
  • Integration with SQL on Hadoop engines
  • Data pipeline management and workflows
  • Data structures (RDDs vs. DataFrames vs. Datasets)
  • When to use MLlib vs. scikit-learn
  • Operationalisation

At Pivotal Labs we have many data science engagements involving big data. Typical problems range from real-time sensor data collected by telecom operators to GPS data produced by vehicle-tracking systems. One widespread framework for solving these inherently difficult problems is Apache Spark. In this talk, we want to share our best practices with PySpark, Spark's Python API, highlighting our experience as well as dos and don'ts. In particular, we will focus on the whole data science pipeline, from data ingestion through data munging and wrangling to the actual model building.
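
As a rough illustration of what such a pipeline can look like in PySpark, here is a minimal sketch using the DataFrame API. The file name, column names, and model choice are illustrative assumptions, not taken from the talk:

    from pyspark.sql import SparkSession
    from pyspark.ml.feature import VectorAssembler
    from pyspark.ml.regression import LinearRegression

    spark = SparkSession.builder.appName("pipeline-sketch").getOrCreate()

    # Ingestion: read raw readings from a hypothetical CSV source.
    raw = spark.read.csv("sensor_readings.csv", header=True, inferSchema=True)

    # Munging/wrangling: drop incomplete rows and keep plausible values.
    clean = raw.dropna().filter(raw["speed"] >= 0)

    # Model building: assemble features and fit a simple MLlib model.
    assembler = VectorAssembler(inputCols=["speed", "heading"], outputCol="features")
    model = LinearRegression(featuresCol="features", labelCol="fuel_rate").fit(
        assembler.transform(clean)
    )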

Finally, many businesses have started to realise that there is no return on investment from data science if the models do not go into production. At Pivotal Labs, one of our core principles is "API first". Therefore, we will also talk about how we put our models into production, sharing our hands-on knowledge in this field and how this fits into test-driven development.
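
A minimal sketch of what test-driven PySpark code can look like with pytest (the fixture, the transformation under test, and the column names are illustrative assumptions; the linked repository below contains the actual worked example):

    import pytest
    from pyspark.sql import SparkSession


    def remove_negative_speeds(df):
        # Hypothetical transformation under test: keep only plausible rows.
        return df.filter(df["speed"] >= 0)


    @pytest.fixture(scope="session")
    def spark():
        # One local Spark session shared across the test session to keep tests fast.
        session = SparkSession.builder.master("local[2]").appName("tests").getOrCreate()
        yield session
        session.stop()


    def test_remove_negative_speeds(spark):
        df = spark.createDataFrame([(1, 10.0), (2, -5.0)], ["id", "speed"])
        result = remove_negative_speeds(df).collect()
        assert [row.id for row in result] == [1]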

Slides available here: http://pydata2016.cfapps.io/

GitHub Repo: https://github.com/datitran/spark-tdd-example
