How to Interpret ML Models and Increase Transparency

About the talk

Discover the significance of understanding sophisticated models, delve into calculating feature importance, and explore the power of Shapley values. Learn how ML practitioners can unravel the impact of each feature in specific cases.

This talk will cover

  • Why interpretability matters so much that logistic regression is still used in many cases
  • The most common way to calculate feature importance
  • What the Shapley value from game theory is
  • What SHAP values are and how to use them to understand the impact of every feature
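To make the game-theory idea concrete before the talk: the Shapley value of a player is its marginal contribution to a coalition, averaged over all orderings of the players. Below is a minimal sketch of that definition in pure Python, using the classic "glove game" as a toy example; the `shapley_values` helper and the game itself are illustrative assumptions, not material from the talk.

```python
from itertools import permutations

def shapley_values(players, v):
    """Exact Shapley values: average each player's marginal contribution
    v(S ∪ {p}) − v(S) over every ordering of the players.
    Feasible only for a handful of players (n! orderings)."""
    orders = list(permutations(players))
    phi = {p: 0.0 for p in players}
    for order in orders:
        coalition = frozenset()
        for p in order:
            # Marginal contribution of p given who joined before it.
            phi[p] += v(coalition | {p}) - v(coalition)
            coalition = coalition | {p}
    return {p: total / len(orders) for p, total in phi.items()}

# Toy "glove game": a coalition is worth 1 only if it pairs the one
# left glove with at least one of the two right gloves.
def glove(coalition):
    return 1.0 if "L" in coalition and ({"R1", "R2"} & coalition) else 0.0

phi = shapley_values(["L", "R1", "R2"], glove)
print(phi)  # L gets 2/3; each right glove gets 1/6
```

SHAP values apply the same averaging idea to a model's prediction, treating features as the "players" so that each feature's contribution to a specific prediction sums up exactly to that prediction.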

About the speaker

Artem Evstafev

Artem is a data scientist with more than 10 years of experience. He has created around 100 different models at leading fintech companies. He loves math and implementing mathematical approaches in real business processes.

