Essays. Computer programs. Customer service. Disturbingly realistic TikTok videos of Jake Paul wearing makeup and coming out as gay.
Artificial Intelligence is everywhere.
It’s actually hard to overstate just how ubiquitous A.I. has become. A Gallup poll from last year shows that 99% of U.S. adults use at least one A.I. tool every week, often without even realizing it. An analysis published by Visual Capitalist indicates that A.I. reached a 39% global adoption rate as early as 2024, meaning it spread as far in two years as personal computers did in 12. It may be surprising, then, to consider just how recent all of this truly is.
Let’s put things in perspective. When ChatGPT was first released in November 2022, I was just starting high school. (That’s right, seniors. We’re that old.) Over the course of my high school experience, I’ve watched A.I. turn from niche little websites that’d help you with your homework into the massive industrial behemoth it is today. Harvard economist Jason Furman has estimated that, without data centers, the U.S. economy would have grown by only 0.1% in the first half of 2025. The other side of that data is even more stunning: by Furman’s math, A.I. accounted for 92% of all U.S. economic growth over that same period.
But as A.I. has exploded over these past few years, so too have questions about its future. Plagiarism has skyrocketed: Adam Stirrat, Ladue’s Technology Coordinator, says teachers usually send him at least four suspicious-looking essays every day. Others object to the environmental cost of A.I., especially now that a massive data center is slated to be built in the St. Louis armory. Zooming out, the concerns get even more terrifying: A.I. pioneer Yoshua Bengio, the world’s most-cited computer scientist, has recently voiced fears that terrorists might use his creations to engineer indestructible COVID variants.
Which brings us to the age-old question: “Are the robots gonna kill us all?”
A.I. experts themselves are bitterly divided over how dangerous their programs may be. While many have come out with chilling hypotheticals, others point to the real-world economic growth A.I. has already spurred. As countless firms around the globe compete to build the fastest and smartest products, many CEOs and policymakers are unwilling to even engage with the question. Folks like us, in the meantime, are left with the disconcerting possibility that the entire job market might just disappear.
So, what gives? It’s hard to say, but past technologies may hold the answer.
Consider steam machinery. Over the course of the Industrial Revolution, craftsmen and artisans lost their jobs as production was redefined by steam engines and conveyor belts. Bucolic rural villages emptied out and their common lands were privatized as farmers sought jobs in the cities, and newly industrialized nations would go on to wage the most destructive wars in human history. Fast-forward to today, however, and the Industrial Revolution has ultimately improved living conditions and increased the human population beyond anything previously thought possible.
Or look at nuclear fission. What began as a small experiment to split the atom ultimately led to the total destruction of two entire cities during World War II, and humanity today still shudders under the threat of all-out thermonuclear obliteration. Even so, the atom has peaceful uses: nuclear power may well be the solution to our deepening climate crisis.
Did these inventions help or hurt us more? Will we survive learning the hard way again?
I don’t know. No one does, but maybe that’s not the point.
At the end of the day, technology itself has never done anything. Machines didn’t cause the World Wars — people did. Uranium didn’t bomb Japan — people did. The worst-case scenario for A.I. is literally that it’ll think like us — and even then, someone would have to use it that way to begin with.
The world might be a better place if we learned to fear ourselves instead.
