The OG Unaligned AIs
Before AGI Took the Spotlight
AI alignment typically evokes images of superintelligent systems pursuing destructive goals. But long before ChatGPT or Claude became household names, users grappled with alignment problems that now sit largely forgotten in the shadows of today's AGI debates. The most ubiquitous yet overlooked examples of unaligned AI are the recommendation algorithms that have shaped the internet for the past decade.
YouTube's recommendation engine, designed to maximize watch time and engagement, became one of the first widely discussed cases of AI misalignment around 2018-2019. Researchers and journalists documented how YouTube's algorithm would gradually steer viewers toward increasingly extreme content, creating "rabbit holes" that could radicalize users. The system wasn't aligned with human well-being or informed civic discourse; it was aligned with a proxy metric (watch time) that diverged catastrophically from what most would consider beneficial outcomes.
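The mismatch between the proxy metric and beneficial outcomes can be sketched in a few lines. This is an illustrative toy, not YouTube's actual system; every title, score, and field name here is hypothetical:

```python
# Toy recommender that ranks candidates purely by predicted watch time,
# the proxy metric. All data below is made up for illustration.
candidates = [
    {"title": "Balanced explainer",      "predicted_watch_min": 4.0,  "extremeness": 0.10},
    {"title": "Outrage compilation",     "predicted_watch_min": 9.5,  "extremeness": 0.80},
    {"title": "Sensational conspiracy",  "predicted_watch_min": 11.0, "extremeness": 0.95},
]

def recommend(videos):
    # The objective sees only watch time; "extremeness" never enters
    # the score, so content harmful to the viewer can still rank first.
    return max(videos, key=lambda v: v["predicted_watch_min"])

top = recommend(candidates)
print(top["title"])
```

Nothing in the objective penalizes the divergence, so the optimizer happily surfaces whatever keeps people watching, which is the misalignment in miniature.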
More recently, TikTok has emerged as the most addictive and influential platform, hooking users with an endless stream of short-form videos. People spend hours swiping through content chosen by an algorithm finely tuned to maximize engagement (and thus shareholder value), not well-being. What are they watching? Whatever keeps them hooked the longest: often sensational, emotionally charged, or "slop" (mindless content). This design creates a loop that's hard to break, pulling users into a cycle of passive consumption. TikTok's meteoric success even pushed competitors like Meta and YouTube to introduce their own clones, Reels and Shorts, just to keep pace.
We overlook these systems because social media is so deeply embedded in everyday life that its harms are normalized or dismissed. Unlike hypothetical superintelligent AI scenarios, these algorithms are seen as relatively benign, at worst distractions or time wasters. In reality, they are misaligned with users' long-term interests.
We're not facing the first AI alignment crisis, just perhaps the most visible one. Earlier cases, like the systems behind YouTube and TikTok, show that misaligned AI is a present reality, driven by corporate incentives and deployed at massive scale. They were deliberate designs that optimized for shareholder value over human well-being. The harms they caused are real, ongoing, and a warning of what happens when powerful systems are unleashed without accountability. They also preview the stakes we now face with AGI misalignment.

