"Trust Fall: Hidden Gems in MLFlow that Improve Model Credibility"

When it comes to machine learning projects, verifying and trusting model performance results is a particularly grueling challenge. This talk explores how we can use Python to instill confidence in model metrics, and how keeping models versioned increases transparency and accessibility across the team. The tactics demonstrated will help developers save precious development time by incorporating metric tracking early on.

Speaker: Krishi Sharma
Krishi is a software developer at KUNGFU.AI and has been developing with Python for 5 years. She has leveraged Python for machine learning and data science, and hopes to help other developers by sharing the knowledge she has gained through trial and error while working on consulting projects for clients across several different industries.