Supercharging Search with LLMs

Discover how Instacart’s search journey has been transformed through the adoption of large language models (LLMs). By leveraging the power of LLMs, we have achieved significant enhancements to the search experience for Instacart users. Join us to explore real-world use cases, gain insights into our integration strategies, and see how LLMs have empowered us to overcome challenges, deliver personalized recommendations, and elevate the overall search experience at Instacart.

GenAI: Lessons Learned

In the rapidly evolving landscape of GenAI, large US enterprises face unique challenges when considering its implementation. Beyond the well-acknowledged concerns of data privacy, security, bias, and regulatory compliance, our journey deploying GenAI in mission-critical applications has revealed additional complexities. In this session, we will walk through several real examples of failed implementations and the lessons learned from them.

Getting Higher ROI on MLOps Initiatives

MLOps is hard, because there are so many “things” you might want to integrate and connect with: A/B testing, feature stores, model registries, data catalogs, lineage systems, Python dependencies, machine learning libraries, LLM APIs, orchestration systems, online vs. offline systems, speculative business ideas, etc. In this talk, I’ll cover five lessons I learned while building out the self-service MLOps platform for over 100 data scientists at Stitch Fix. This talk is for anyone building their own platform or buying one off the shelf. Either way, you’ll still want everything to fit together cohesively as a platform, and learning what to avoid and what to focus on will increase your ROI on MLOps initiatives.

Is It Too Much to Ask for a Stable Baseline?

Evaluation and monitoring are the heart of any reliable machine learning system. But finding a stable reference point, a reliable comparison baseline, or even a decent performance metric can be surprisingly difficult in a world beset by changing conditions, feedback loops, and shifting distributions. In this talk, we will look at some of the ways these conditions show up in more traditional settings like click-through prediction, and then see how they might reappear in the emerging world of productionized LLMs and generative models.