The hidden forces that shape your behavior
Why incentives matter and why we need to talk about them more.
At the YouTube Leadership Summit back in 2012, Shishir Mehrotra, then YouTube’s technical lead, announced an incredibly ambitious goal.1
By 2016, YouTube wanted to reach a combined total of 1 billion hours of watch-time per day.
Why was increasing total watch-time so important to YouTube?
The thinking was that longer watch-times meant a more engaged audience, which would in turn lead to more advertising money.
To hit their target, which they did in October of 2016, the YouTube team had to try new things. And one thing they tried was a new video recommendation engine.
This was nothing more than a self-learning algorithm that curated individual users’ YouTube feeds based on their previous behavior.
The more the user fed the algorithm with data — what they clicked on and watched, for example — the more the algorithm would learn about them and more accurately predict what they’d be likely to click on next.
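To make the dynamic concrete, here is a minimal toy sketch, not YouTube’s actual system: all category names and watch-time numbers below are invented for illustration. It shows a greedy recommender that promotes whichever content category has produced the most watch-time per view so far. It never asks *why* a category wins; it only optimizes the metric it was given.

```python
import random

random.seed(0)

# Hypothetical average watch-time (minutes) per video by category.
# We assume emotionally charged content is stickier.
AVG_WATCH_TIME = {"tutorial": 4.0, "music": 3.0, "outrage": 9.0}

def simulate(rounds=1000):
    """Greedily recommend the category with the best average watch-time."""
    totals = {cat: 0.0 for cat in AVG_WATCH_TIME}
    counts = {cat: 0 for cat in AVG_WATCH_TIME}
    for i in range(rounds):
        if i < len(AVG_WATCH_TIME) * 10:
            # Brief exploration phase: cycle through every category.
            cat = list(AVG_WATCH_TIME)[i % len(AVG_WATCH_TIME)]
        else:
            # Exploitation: pick the category with the best average so far.
            cat = max(totals, key=lambda c: totals[c] / counts[c])
        # Simulate how long the user actually watched (noisy).
        watched = max(random.gauss(AVG_WATCH_TIME[cat], 1.0), 0.0)
        totals[cat] += watched
        counts[cat] += 1
    return counts

counts = simulate()
# The loop converges on "outrage" -- not because anyone programmed malice,
# but because that category best serves the single metric being optimized.
```

Nothing in the code mentions ethics or polarization; the skew toward the stickiest content falls out of the objective alone, which is the point the article makes next.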
For YouTube, this made perfect sense. It was, after all, in line with their business goals.
But this created a whole host of unintended consequences, most notably by serving more and more emotionally charged and polarizing content to users. And we all know where that’s led us.
Algorithms don’t care, they’re just doing a job
The thing is, YouTube’s video recommendation engine wasn’t evil; it was just doing what it was programmed to do — increase watch-time.
And it quickly figured out that the best way to do that was to hook users with emotionally charged content.
It didn’t make humans susceptible to emotionally charged content; it just figured out that exploiting human psychology was the best way to get to its goal. Being a non-organic entity, it didn’t care about the ethical implications of what it was doing.
And this isn’t just a YouTube issue.
Facebook’s algorithms were key vectors in the spread of hatred during the anti-Rohingya ethnic cleansing of 2016–17 in Myanmar.2
When programmed to maximize for user engagement, these algorithms have one goal only — user engagement — and to hell with whatever unintended consequences can emerge from that single-minded goal.
The power of incentives
The point here is not to discuss the merits or dangers of algorithms for human society. Rather, these stories highlight the power incentives have in shaping our lives.
Their cautionary tales force us to ask: what’s the incentive here? And whose interests is it serving?
What’s the incentive behind YouTube’s algorithm?
Is it for you to watch a video, perhaps about learning a new skill, and then go on to the rest of your day? Or is it to get you lost down a rabbit hole, mining your attention in exchange for advertiser money?
What incentives are underlying TikTok’s algorithm? Facebook’s?
The danger of misaligned incentives
But the question of incentives extends far beyond our digital lives.
For example, what incentives are at work when students enter the classroom?
Is it to learn about the world around them and to learn how to think critically? Or is it to get good grades?
In my experience, it’s to get good grades, which, as we all know, don’t necessarily translate to a successful adult life. (Given the shift we’re only just starting to experience in professional life due to AI, I’d argue that the abilities to learn and to think critically are more important now than ever before.)
But what about incentives in a more professional environment?
As much as the Harvard Business Review3 loves to tout inclusivity and diversity as key to innovation, what incentives actually dominate professional environments?
Are you incentivized to speak your mind at work? Or have you learned, through bitter experience, that it’s better to keep your mouth shut?
That’s why I believe this gap between purported goals and the incentives we are actually exposed to is so insidious:
We think we’re doing one thing when, in fact, we’re achieving something else entirely.
Enjoyed reading this?
You can support my writing by sending me as little as $1.
Nexus (2024) by Yuval Noah Harari (p.195-197)
This is because raising different perspectives increases the quality of discussions (see, for example, https://hbr.org/2013/12/how-diversity-can-drive-innovation)