Earley AI Podcast - Episode 14: Recommendation Engines and the Future of Advice with Michael Schrage

Why Recommendation Engines Are Really Technologies of Advice - and Why Trust, Transparency, and Agency Are the Real Design Problems

Guest: Michael Schrage, Research Fellow, MIT Sloan School of Management Initiative on the Digital Economy; Author, "Recommendation Engines"

Hosts: Seth Earley, CEO at Earley Information Science

Chris Featherstone, Sr. Director of AI/Data Product/Program Management at Salesforce

Published on: April 8, 2022


In this episode, Seth Earley and Chris Featherstone speak with Michael Schrage, Research Fellow at MIT Sloan's Initiative on the Digital Economy and author of "Recommendation Engines" (MIT Press Essential Knowledge series). Drawing on a career that began as a freelance writer and technology columnist for Rolling Stone and the Washington Post - and led through decades at MIT's Media Lab - Michael argues that recommendation engines are not primarily technology products but technologies of advice, with roots that stretch from horoscopes and the I Ching to Spotify Discover and Amazon's book engine. He explores the philosophical origins of his interest in bounded rationality and cognitive bias, why Kahneman and Tversky's prospect theory changed his understanding of decision-making fundamentals, how machine learning colonized recommendation engines midway through writing the book, and why the most important design challenges are not algorithmic but ethical - centering on transparency, trust, agency, and the question of who the advice is actually for.

 

Key Takeaways:

  • Recommendation engines are fundamentally technologies of advice - from horoscopes and the I Ching to Netflix and Waze - and the core design question has always been how tools and technologies can produce better, more reliable, more trustworthy guidance for flawed, biased human decision-makers.
  • The principal-agent problem is the central ethical challenge in recommender design - Warren Buffett's maxim "never ask a barber if you need a haircut" captures the conflict of interest that makes transparency, interpretability, and explainability non-negotiable if organizations want users to trust their recommendations.
  • Click-through rate optimization is the wrong north star for recommendation systems - customer lifetime value is what matters, and the best way to upsell is simply to give people good advice, a point that most legacy organizations have not yet internalized.
  • Machine learning transformed recommendation engines from market basket correlation tools into systems that learn continuously, shifting the question from "what things go together" to "how do we learn to recommend better" - which makes data integrity, ontology, and semantic structure more important than ever, not less.
  • Recommendation engines should support human agency and autonomy rather than replace them - systems designed around the philosophy of "we want to know you better than you know yourself" are not recommendation engines but compliance engines, and that distinction matters enormously for trust and user experience.
  • The affective dimension of recommender design is as important as the cognitive one - good recommendations must not only offer choices a user would not have found on their own, but make users feel that those choices reflect and respect who they are and who they want to be.
  • Future recommender systems must account for multiple selves - the curious self, the cautious self, the adventurous self - because human beings are not singular, and the overlap and divergence between a person's different contextual identities represents a rich and largely unexplored design frontier.

 

Insightful Quotes:

"The notion of a perfect choice, the best choice, is a chimera - it's a unicorn, it's a myth. People are biased, human beings are flawed. What are the right choice architectures for people who are such flawed, biased, and poorly informed decision makers? That has been the theme haunting me my entire adult life." - Michael Schrage

"I want better recommendations to give me good choices - don't tell me the answer, give me the best options and let me choose. Don't try to turn me into a meat puppet. Make me feel like someone who is in control of my own destiny. Is that so wrong?" - Michael Schrage

"The rise of algorithmically innovative recommenders makes data, makes ontology, makes semantics that much more important. That is the antithesis of what we all grew up with, which is garbage in, garbage out." - Michael Schrage

Tune in to hear Michael Schrage explain why he realized Amazon was giving him better book recommendations than his friends, how Netflix has spent tens of millions of dollars trying to solve family-versus-individual recommendation, why he wants Alexa to ask "how sad?" when you say "play me a sad song," and what a meta-recommender for recommendation engines would look like - and why he believes ontology is what makes it possible.

 


Podcast Transcript: Recommendation Engines, the Ethics of Advice, and Why Trust Is the Real Design Problem

Transcript introduction

This transcript captures a wide-ranging conversation between Seth Earley, Chris Featherstone, and Michael Schrage about why recommendation engines are best understood as technologies of advice, tracing their intellectual roots from Kahneman and Tversky's prospect theory through the MIT Media Lab's early collaborative filtering experiments to the machine learning-driven systems of today. Michael explores the ethics of the principal-agent problem, the distinction between recommendation and compliance, the affective dimension of good design, and two research frontiers that will shape the next generation of recommender systems: learnable KPIs and the challenge of recommending across multiple selves.

Transcript

Seth Earley: Welcome to today's podcast. I am Seth Earley.

Chris Featherstone: And I'm Chris Featherstone. Good to be with you.

Seth Earley: Today's guest is a research fellow at the MIT Sloan School of Management Initiative on the Digital Economy and a sought-after expert on innovation, design, and network effects. His book "Recommendation Engines" is a fascinating look at the history, technology, business, and social impact of online recommendation engines made ubiquitous by the likes of Amazon, Spotify, and Netflix. Please welcome Michael Schrage.

Michael Schrage: I appreciate the opportunity - not just to talk about the book per se, but the real learning that comes from how people interpret or misinterpret things. One of the great things about doing this book, outside of the relief of having finished it, is the follow-up questions and conversations it catalyzes.

Seth Earley: Give us a little background - your origin story, how you ended up at MIT, and how you came to write the book.

Michael Schrage: I grew up as a faculty brat in Chicago's Hyde Park, the University of Chicago area, during the 60s. That had a big impact on my curiosity and my approach to learning. My areas of study in school were computer science and economics. I had no interest in graduate school whatsoever, so, as ambitious Midwesterners do, I picked up and moved to a coast.

I should mention one thing that got me into computers from the very beginning - a guy named Ted Nelson who wrote "Computer Lib/Dream Machines." My father actually gave Ted Nelson office space so he could finish that book, which then got me into the Itty Bitty Machine Company - one of those early stores that sold Apple computers - which of course went bankrupt.

I ended up in New York doing something you can no longer really do - making a living as a freelance writer. Because I had a technology background, I became the first video columnist, technology columnist, and personal computer columnist for Rolling Stone magazine. I was writing about video games, VCRs, and DVDs - the early consumer electronics beat. That got me to the Consumer Electronics Show, and eventually the Washington Post became interested in my work. I ended up writing for the business desk there.

The Washington Post at that time didn't try to be comprehensive about technology the way the Times or the Wall Street Journal did - it wanted to give the Washington elite and the Washington Literati insight into these things. So I could really indulge an extra day of reporting. I also covered defense technology - GPS, DARPA - and had a pretty broad set of interests.

Long story short, one of the stories I did while at the Washington Post was on this crazy startup at MIT - what would become the Media Lab, though at the time it was called the Architecture Machine Group, run by Nicholas Negroponte. His brother John Negroponte was running operations in Honduras and later headed the intelligence community, so the Post was very interested in these unusual family connections.

When I told Nicholas Negroponte I was interested in doing a book on collaboration and collaborative technologies - just as my father had provided office space for Ted Nelson - Negroponte invited me to be a fellow at the Media Lab to finish my book. The previous fellow who had done the same was Stewart Brand, who wrote "The Media Lab." Those were some big shoes to fill. I fell in love with MIT and have had a relationship with it, dotted line and solid, for decades. That was around 1985 or 1986.

Chris Featherstone: When did you start peering into behavioral economics and innovation, and how did that lead to the work on recommendation engines?

Michael Schrage: My father was a scientist - we got the Chronicle of Higher Education and Science magazine, both of which I read. There was an article by Kahneman and Tversky in Science on judgment under uncertainty - cognitive biases and prospect theory - and even in high school I understood it. It struck me because I was always interested in how people make decisions.

One of my majors in school was economics, and in a standard economics curriculum the assumption is that people are trying to make the optimal decision. Most of economics behaves as if psychology doesn't exist. But Herb Simon - one of the pioneers of AI and decision theory, and a Nobel Prize winner - introduced the concept of bounded rationality. He argued that limitations on your ability to compute and survey the territory meant that you were always trying to satisfice, not maximize. You were trying to make a good enough decision because you didn't know enough to make the best one.

This is what leads directly to my interest in recommendation engines. The notion of a perfect choice, the best choice, is a chimera - it's a unicorn, it's a myth. People are biased, human beings are flawed. What are the right kind of choice architectures for people who are such flawed, biased, and poorly informed decision makers? That has been a theme haunting me my entire adult life. When I was at the Washington Post, in the center of power, I was not blown away by our elected officials. I thought: these are pretty big decisions you're making off of such crappy information.

Seth Earley: What led you to write the book, and can you talk about recommendation versus personalization?

Michael Schrage: I noticed - and I actually wrote a piece in MIT Technology Review about this - that in things like music, travel, and especially books with Amazon, I was increasingly relying upon recommendation engines to expose me to things I otherwise wouldn't have come across. This was intriguing to me because I had a sobering realization: I was getting better reading recommendations from Amazon than from my friends. Love my friends, wonderful people - but the recommendations from Amazon were genuinely more useful.

I had also been exposed to this through the Media Lab. Pattie Maes's group did a project called Firefly, which was one of the first collaborative filters - it ultimately got sold to Microsoft, which shut it down, stupidly. I was doing some work with Tom Malone at the Sloan Center for Coordination Science, who was doing preliminary work on recommenders. So I had been exposed to this space for a long time.

When the editorial director of MIT Press asked what I wanted to write about for their Essential Knowledge series, I said recommendation engines. I knew the technology well enough to bluff - but I didn't really know it deeply, and so I used the book as an opportunity to learn.

The wild card was that while I was writing the book, machine learning began to colonize recommendation. It wasn't just how do we come up with better market basket rules or better correlations - it became how do we learn, how do we learn to recommend? Recommendation engines became learning-to-recommend engines. And in the book I made an argument that a couple of peer reviewers actually rejected, but my editor and I agreed belonged: if you take recommendation engines seriously, if you take the fundamentals seriously, what you are really writing about is advice - the technologies of advice. Casting a horoscope is an algorithm based on the stars. The I Ching with its hexagrams is the same thing. I was really writing a book about the nature of technology-mediated advice. How do we use tools and technologies to come up with better and more reliable advice? That was the narrative arc and the transcendent insight that drove the book.

And the future is enormous. There is probably at least $2 trillion in underrealized, underappreciated value sitting around in the recommendation and advisory experience globally. And if you think about augmented reality advisory augmentation - your phone or device overlaid on the physical world, advising you about where to go, what to do - that alone represents a $100 billion valuation opportunity.
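[Editor's note: the shift Michael describes - from static market-basket correlations to systems that learn to recommend - is easier to see against a concrete baseline. Below is a minimal sketch of the older, correlation-only approach: mining a "people who bought X also bought Y" rule from purchase baskets. The data and item names are invented for illustration; a learning-to-recommend system would instead update its suggestions continuously from user feedback rather than from fixed co-occurrence counts.]

```python
from collections import Counter
from itertools import combinations

# Toy purchase baskets (hypothetical data, for illustration only).
baskets = [
    {"book", "lamp"}, {"book", "mug"}, {"book", "mug", "lamp"},
    {"mug"}, {"book", "mug"},
]

pair_counts = Counter()
item_counts = Counter()
for basket in baskets:
    item_counts.update(basket)
    # Count each unordered item pair that co-occurs in a basket.
    pair_counts.update(combinations(sorted(basket), 2))

# Confidence of the rule "book -> mug": P(mug in basket | book in basket).
confidence = pair_counts[("book", "mug")] / item_counts["book"]
print(f"confidence(book -> mug) = {confidence:.2f}")  # 3 of 4 book baskets -> 0.75
```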

Chris Featherstone: Can recommendation engines ever get to the point where they feel like serendipity rather than surveillance? Right now people feel like someone is always looking through their window.

Michael Schrage: Let me deal with the ethical dimension directly. The famous issue in economics is the principal-agent problem - cui bono, who benefits? Warren Buffett was on the Washington Post board when I was there - I still marvel at having been able to talk to him. He had a wonderful phrase in his shareholder letters: never ask a barber if you need a haircut. Because there is an inherent conflict of interest in the advice.

The real ethical challenge is whether organizations can demonstrate that the recommendations they make truly reflect and incorporate the best interests of their customer, their client, their prospect. That is why interpretability, explainability, and transparency around recommendation algorithms are so critical. I want Amazon to make clear to me why they are recommending this book versus that one. I want Spotify to make clear why Discover Weekly is suggesting these genres. Same thing with LinkedIn, same thing with TikTok. Interpretability and transparency are the best and surest way of defusing the ethical question you are raising.

Now let me flip it to the user experience side. I believe recommender systems, when they offer better choices - choices you would not have come up with on your own - and you can see yourself in those choices, or you can see the kind of person you want to be in those choices, they become genuinely valuable. There has to be an affective as well as a cognitive component to recommender system design. You want people not just to say "these are great choices" but to feel that those choices reflect and respect the individual receiving them. That is not a subtle design problem - that is a huge design problem and that is why recommendation systems are so interesting.

Seth Earley: What about the distinction between recommendation and personalization?

Michael Schrage: In recommendation system terms, that is the difference between collaborative filtering - people similar to you valued these things - and content-based filtering, where you yourself have expressed interest in these kinds of things. What you are really doing is asking the economics question: what is the indifference curve, what is the trade-off between these approaches?

I believe it becomes much easier to be open to and experiment with choices and recommendations if you trust the source - whether that is a person or a device or an algorithm. Data integrity, lineage, accountability - that is the antithesis of garbage in garbage out. And the rise of algorithmically innovative recommenders makes data, makes ontology, makes semantics that much more important. That is one of the reasons I find this so fascinating - it is not making knowledge management and information architecture less important. It is making them more important.
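[Editor's note: the collaborative-versus-content-based distinction Michael draws can be sketched in a few lines. This is a toy illustration, not any production algorithm; the users, ratings, and feature vectors are hypothetical. Collaborative filtering looks sideways at similar users; content-based filtering looks at the items' own attributes.]

```python
import math

def cosine(u, v):
    # Cosine similarity between two numeric vectors.
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

# Collaborative filtering: recommend what similar *users* liked.
# Rows are users, columns are ratings for items A-D (toy data).
ratings = {
    "alice": [5, 4, 0, 0],
    "bob":   [4, 5, 0, 3],   # rates like alice, and has rated item D
    "carol": [0, 0, 5, 4],
}
sims = {u: cosine(ratings["alice"], r) for u, r in ratings.items() if u != "alice"}
neighbor = max(sims, key=sims.get)  # nearest neighbor supplies candidates

# Content-based filtering: recommend items whose *features* match the
# user's own profile, regardless of what other users did.
item_features = {"C": [1, 0], "D": [0, 1]}   # e.g. two genre dimensions
alice_profile = [0, 1]                        # built from items alice liked
best_item = max(item_features, key=lambda i: cosine(alice_profile, item_features[i]))
```

The trade-off Michael frames as an indifference curve shows up here directly: the first approach needs other users' behavior, the second needs item metadata, and real systems blend the two.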

Michael Schrage: Let me also push on one specific thing about the affective side. One of the things I wish Amazon would do with Alexa is import an affective dimension. I want to be able to say "Alexa, play me a sad song," and I want Alexa to respond: "How sad?" So that we really do get customization and bespoke experience. Because you want to build a kind of connection where the user trusts the system. You want to design interactions and use cases not just that offer better choices and good advice, but that build trust. Customer lifetime value - that is the north star. Click-through rate is the devil. It is Satan. The right way to upsell would be to give people good advice. Organizations that do not understand that are baffling to me.

Chris Featherstone: When do recommendation engines start to produce diminishing returns? At what point do they undermine the ability to think for oneself?

Michael Schrage: One of the things that creeped me out in researching the book is that all over the world - in Europe, mainland China, and the United States - there were entrepreneurs whose philosophy of recommender design was "we want to know you better than you know yourself." That crosses a line for me. Why are we giving people choices? We should just give them the single choice we know is right. That is not a recommendation engine anymore. That is a compliance engine - a do-it-or-else engine.

The question you are really asking is: what impact does good advice have on agency and autonomy? I am a fan of technologies that promote, empower, and enable agency. I want better systems to give me good choices. Do not tell me the answer - give me the best options and let me choose. Do not try to turn me into a meat puppet. Make me feel like someone who is in control of their own destiny. And to do that with integrity - transparency, interpretability, explainability - that is how you make it work. I want to be able to compare recommendation engines and say "if you like your advice structured this way, these are the kinds of systems you should go to." That is meta-recommendation - recommendation about recommendations - and that is where ontology enables something genuinely powerful.

Seth Earley: What are you working on next?

Michael Schrage: Two research paths that I ended the book pointing toward. The first relates to metrics - NPS, net promoter score, is a bad metric. It is the evil twin mirror image of all the recommender and advice data. I am very interested in the notion that when you have orders of magnitude more data and algorithms that can learn, what if our KPIs could learn? What would it mean to have a portfolio of KPIs that could learn to optimize itself? That is one direction.

The other I refer to in the book as "selves." I am interested in recommendations for me - but I am married, I have a wife, and I prefer to be happy rather than unhappy. How should I weight her mental model or her affect in the recommendations I receive about where to travel, what to eat, how much time to spend with relatives? Netflix has spent tens of millions of dollars trying to refine recommenders for families as opposed to individual accounts.

But more broadly: human beings are not singular. You are the sigma and sum of multiple selves. What does it mean to make a recommendation for your curious self versus your cautious self? What is the Jaccard similarity - the overlap - between advice you would give your adventurous self versus your typical self? Recommenders are ultimately about the future of introspection and understanding yourself. That is the research frontier I find most compelling.
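[Editor's note: the Jaccard similarity Michael invokes has a simple set-based definition - the size of the intersection over the size of the union. A minimal sketch, with hypothetical recommendation sets for two of one person's "selves":]

```python
def jaccard(a, b):
    # |A ∩ B| / |A ∪ B|: 1.0 when the two selves get identical advice,
    # 0.0 when the recommendation sets do not overlap at all.
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if (a | b) else 1.0

# Hypothetical recommendation sets for two contextual identities.
adventurous = {"street food tour", "night hike", "improv class"}
typical     = {"street food tour", "museum pass", "improv class"}
print(jaccard(adventurous, typical))  # 2 shared of 4 total -> 0.5
```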

Seth Earley: We are at the top of the hour. Michael, this has been such a pleasure - always enjoy our conversations. We definitely have to continue this.

Michael Schrage: I am so grateful for this wide-ranging and fun conversation. You have given me a lot to think about. Thank you so much.

Chris Featherstone: Let's get you back when both books have second editions - or even before.

Michael Schrage: We will always end on a passive-aggressive note. Take care, everyone.

Meet the Author
Earley Information Science Team

We're passionate about managing data, content, and organizational knowledge. For 25 years, we've supported business outcomes by making information findable, usable, and valuable.