Expert Insights | Earley Information Science

[Earley AI Podcast] Episode 27: Gordon Hart

Written by Earley Information Science Team | Mar 27, 2023 9:00:00 AM

Machine Learning and Algorithms

Guest: Gordon Hart


About this Episode:

Today’s guest is Gordon Hart, Co-Founder and Head of Product at Kolena. Gordon joins Seth Earley and Chris Featherstone to share how machine learning algorithms pose challenges from different perspectives. Gordon also discusses the core problem his company faced before they turned it around. Be sure to listen in as Gordon gives his advice on how to validate models in order to build a successful product!


Takeaways:

  • Gordon noticed that whether he was developing algorithms internally or buying models from vendors, he constantly ran into unexpected model behavior. It left him feeling he couldn’t trust the models to behave sensibly. 
  • Gordon started his company because he noticed that, time after time, he was getting blindsided. He thought to himself that there had to be a better way to develop models and validate what they were doing.
  • The key challenge Gordon and his team ran into was that when they looked at a single number, they were looking at an aggregate metric computed across their entire benchmark, which hid how the model behaved on individual slices of the data.
  • Gordon stresses the importance of walking through scenarios with your products. He found that breaking your evaluation down into different scenarios shows you not only how a model improves in aggregate over previous models, but also how its failures are distributed.
  • Gordon believes that testing data is more critical than training data, since your testing data is what you use to decide whether your new model has the behaviors it needs to have.
  • Testing the full pipeline, from pre-processing through post-processing, rather than just the model component, will often improve visibility into how your product is actually going to work once you put it out there.
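The scenario-based evaluation Gordon describes can be sketched in a few lines. This is a minimal, hypothetical illustration (not Kolena's actual implementation): the benchmark is split by a scenario label, and each model is scored both in aggregate and per scenario, so a regression in one scenario is visible even when the aggregate number improves. The `evaluate_by_scenario` function, the scenario names, and the toy data are all assumptions made for the example.

```python
from collections import defaultdict

def accuracy(results):
    """Fraction of correct predictions in a list of (prediction, label) pairs."""
    return sum(pred == label for pred, label in results) / len(results)

def evaluate_by_scenario(samples):
    """samples: list of (scenario, prediction, label) tuples.

    Returns the aggregate accuracy across the whole benchmark plus a
    per-scenario breakdown, instead of only the single aggregate number.
    """
    by_scenario = defaultdict(list)
    for scenario, pred, label in samples:
        by_scenario[scenario].append((pred, label))
    aggregate = accuracy([(p, l) for _, p, l in samples])
    per_scenario = {s: accuracy(pairs) for s, pairs in by_scenario.items()}
    return aggregate, per_scenario

# Toy benchmark: hypothetical "day" and "night" scenarios, label is always 1.
old_model = [("day", 1, 1), ("day", 1, 1), ("day", 0, 1), ("day", 0, 1),
             ("night", 1, 1), ("night", 1, 1)]
new_model = [("day", 1, 1), ("day", 1, 1), ("day", 1, 1), ("day", 1, 1),
             ("night", 1, 1), ("night", 0, 1)]

agg_old, per_old = evaluate_by_scenario(old_model)
agg_new, per_new = evaluate_by_scenario(new_model)
```

Here the new model beats the old one in aggregate, yet its "night" accuracy has regressed; the single aggregate metric would hide exactly the kind of blindside the episode warns about.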


Quote of the Show:

  • “Having your evaluation metrics align with the way that your system is going to be evaluated in the field is a key thing that you can do to get a better understanding of ‘is this model better for what I set out to do?’” (22:36)


