Description
With the growth of AI, ever-growing parts of the products we build are shifting from the deterministic to the probabilistic. The accuracy of machine learning applications can deteriorate in the wild without strategies for testing, monitoring and introspection. You'll leave this talk knowing how to combine the best of software engineering and machine learning to build robust machine learning products.
Abstract
As machine learning becomes more prevalent, ever-growing parts of the systems we build are shifting from the deterministic to the probabilistic. The accuracy of machine learning applications can quickly deteriorate in the wild without strategies for testing models, instrumenting their behaviour, and introspecting and debugging incorrect predictions.
This session takes an applied view, drawing on my experience building production machine learning infrastructure at Ravelin. You'll learn useful practices and tips to help ensure your machine learning systems are robust. We'll go into:
Labels and data - can you trust them? Can you infer them?
Testing - how do you ensure your model handles the basics, up to the more complicated examples? (A small sketch follows below.)
Auditing and versioning - what's the provenance of your model? What data was it trained on? With which hyperparameters? Can you reproduce it?
Debugging and introspection when deployed - when you make an awful prediction, can you figure out why it happened and prevent it happening again?
And more, with the aim of helping you sleep a little better at night knowing your model is out there in the wild.
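To make the testing and provenance points a little more concrete, here is a minimal, illustrative sketch in Python. It is not taken from the talk or from Ravelin's systems: train_model, model_card, the synthetic dataset and the data_path argument are all hypothetical stand-ins, using scikit-learn and a pytest-style test only as an example toolchain.

# Illustrative sketch: a pytest-style sanity check that a trained model
# still gets the obvious cases right, plus a small provenance record.
# All names here (train_model, model_card, the toy data) are hypothetical.
import hashlib
import json

from sklearn.linear_model import LogisticRegression


def train_model(X, y, **hyperparams):
    # Hypothetical training entry point; stands in for your real pipeline.
    model = LogisticRegression(**hyperparams)
    model.fit(X, y)
    return model


def test_obvious_cases():
    # Tiny synthetic dataset: the single feature cleanly separates the classes.
    X = [[0.0], [0.1], [0.9], [1.0]]
    y = [0, 0, 1, 1]
    model = train_model(X, y, C=1.0)
    # The basics: a clearly-negative and a clearly-positive example.
    assert model.predict([[0.05]])[0] == 0
    assert model.predict([[0.95]])[0] == 1


def model_card(model, data_path, hyperparams):
    # Enough provenance to answer: what was this trained on, and how?
    with open(data_path, "rb") as f:
        data_hash = hashlib.sha256(f.read()).hexdigest()
    return json.dumps({
        "model_class": type(model).__name__,
        "hyperparams": hyperparams,
        "training_data_sha256": data_hash,
    }, indent=2)

The test runs under pytest, and the same pattern could extend from toy cases to a curated set of real historical predictions the model must never get wrong.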