Sam Stone, Director of Product Management, Pricing & Data Products, Opendoor
Description
Stacking deep learning components is an increasingly attractive way to improve model accuracy, but it often comes at the cost of explainability. AI teams increasingly need to build explainability into their core model architecture or add it as an explicit post-processing layer. In this session, we'll discuss why AI teams should care about explainability and then dive into explainability strategies, detailing solutions tailored for end customers vs. operators vs. developers. We'll draw on real examples from the Opendoor systems used to buy and sell hundreds of thousands of homes algorithmically.
Takeaways
- How to measure the business value of AI explainability
- How to decide when to invest in AI explainability and when not to
- How to tailor AI explainability strategies to best serve different user types