Gradient boosting is a very popular technique in machine learning, particularly for competitions such as those hosted by Kaggle. This talk will introduce the technique, give a short historical recap, and then focus on the Python libraries that implement this general family of algorithms. It will work through the advantages and disadvantages of specific libraries using worked examples. The code and slides will be on GitHub. By the end, attendees will have sufficient knowledge to understand when to use this technique and which Python library is most appropriate for their needs.
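As a taste of the kind of worked example the talk describes, here is a minimal sketch using scikit-learn's `GradientBoostingClassifier` on a built-in toy dataset. This is an illustrative assumption on my part, not necessarily one of the libraries or examples the talk itself uses.

```python
# Minimal gradient boosting sketch (assumed library choice: scikit-learn).
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

# Load a small binary classification dataset bundled with scikit-learn.
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# An ensemble of shallow trees, each fitted to correct the errors
# of the trees built before it -- the core idea of gradient boosting.
clf = GradientBoostingClassifier(n_estimators=100, learning_rate=0.1, max_depth=3)
clf.fit(X_train, y_train)
print(f"Test accuracy: {clf.score(X_test, y_test):.3f}")
```

Other libraries in the same family (e.g. XGBoost, LightGBM) expose a broadly similar fit/predict interface but differ in speed, memory use, and tuning options, which is the kind of trade-off the talk compares.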