Today, businesses use algorithmic decision-making in applications such as determining who receives a bank loan and evaluating a teacher's performance, decisions that greatly affect people's livelihoods. In these applications, understanding why a statistical model makes a particular prediction can be as important as the prediction's accuracy. However, these models are often complex black boxes that are difficult or impossible for humans to understand. For people whose lives are affected by these algorithms, this lack of interpretability is a serious problem: without an explanation of the decision, they cannot act to improve their outcomes. In this talk, I will discuss several definitions of global and local interpretability for machine learning models. I will then discuss methodologies for better understanding how a model arrived at its prediction for a particular test instance.