In Defense of Humanity
The debate about AI isn't about AI. It's about what we think we are.
My father told me artificial intelligence is Satan's tool. End of discussion.
He said it eliminates critical thinking. That it does everything for a person so they don't have to do anything. That if it's programmed by the wrong people, it'll go down the wrong path — and take us with it.
I love my dad. He's a smart man. And he's not entirely wrong.
But he's not entirely right, either.
On the other side of the aisle, I've listened to people talk about AI like it's the second coming of Prometheus — fire stolen from the gods and handed to the masses. They say it will democratize knowledge, eliminate inequality, liberate the human spirit from the chains of manual labor. If we just trust the technology, everything will be okay.
They're not entirely wrong, either. And they're not entirely right.
Both sides are having the wrong argument. They're debating what AI is. The real question is what we are.
The Conservative Fear
There's a version of the AI critique that I take seriously. It goes like this:
If a machine can think for you, and you let it, then over time you lose the ability to think for yourself. Dependence becomes default. The muscle atrophies. What was once a capable, self-reliant human being becomes a passive consumer of machine-generated reality — someone who can't write a paragraph, solve a problem, or form an opinion without asking a computer first.
This is not a fantasy. It's already happening. There are students submitting essays they didn't write and couldn't explain. There are professionals automating the parts of their job that actually required judgment, then wondering why their judgment is getting worse. There are people who have outsourced their curiosity to an algorithm and don't realize what they've lost.
My dad sees this, and it scares him. It should scare anyone who values the human mind.
But here's where the argument breaks.
The printing press was supposed to destroy memory. Why remember anything when you can write it down? Calculators were supposed to destroy mathematics. Why learn arithmetic when a machine can do it? Power tools were supposed to destroy craftsmanship. Why develop skill when a motor does the work?
None of those things happened. What happened instead is that the people who were already thinking used the new tools to think further. The printing press didn't kill memory — it freed scholars to memorize what mattered and offload what didn't. Calculators didn't kill math — they let engineers solve problems that were previously impossible. Power tools didn't kill craftsmanship — they let a skilled carpenter build in a day what used to take a week.
The ones who were never going to do the work didn't do it. The ones who were already doing the work did more of it, faster, better.
AI is no different.
The Progressive Fantasy
On the other side, there's a vision of AI as the great equalizer. The idea is seductive: if everyone has access to the same intelligence, the playing field levels. A kid in rural Indiana has the same access to knowledge as a kid at MIT. A first-generation immigrant can build tools that used to require a computer science degree. Barriers fall. Hierarchies dissolve. The future is open.
Some of this is true. I've seen it with my own eyes. I've built things with AI that would have taken me years to learn from scratch — not because the AI did it for me, but because it compressed the feedback loop. I asked questions. I got answers. I tested them. I failed. I asked better questions. I built something real.
But the fantasy falls apart when people stop at the first step. When "AI can help you learn" becomes "AI can learn for you." When "access to intelligence" gets confused with "possession of intelligence." When the tool becomes the identity.
And it falls apart in another way that my father, in his blunt fashion, puts his finger on: the tool reflects whoever built it. Bias in, bias out. If the data is skewed, the output is skewed. If the people designing the systems have blind spots, the systems inherit those blind spots. The idea that AI is neutral — that it's pure intelligence without perspective — is naive at best and dishonest at worst.
Every model is trained on choices. What to include. What to exclude. What to weight. What to ignore. Those choices are made by people, and people have agendas — sometimes conscious, sometimes not. My dad calls this programming by the wrong people. The tech industry calls it alignment research. They're talking about the same thing from opposite ends of the same table.
Neither side wants to admit they share the concern.
The Missing Middle
Here's what neither the fearful conservative nor the starry-eyed progressive is talking about: the person holding the tool.
A chainsaw sitting in a garage doesn't cut anything. It doesn't build a house. It doesn't destroy a forest. It sits there, inert, waiting for a hand and an intent.
Would you chop a tree down with an axe if a chainsaw was sitting right next to you? My father would say yes — because the struggle is the point. The sweat of your brow is what gives the work meaning. I understand that. There's a version of that I believe in deeply.
But I'd pick up the chainsaw. Not because I'm lazy. Because I've got thirty more trees to cut and a house to build before dark. The sweat is still there. The intent is still there. The calluses on my hands are still there. I just got more done.
The difference between the man who picks up the chainsaw and builds a cabin and the man who picks up the chainsaw and cuts off his own leg isn't the chainsaw. It's the man.
AI is the same. It's a force multiplier for whatever you already are.
If you're curious, it makes you more curious — it answers the question that leads to the next question that leads to the next question. If you're lazy, it makes you lazier — it gives you the answer so you never have to form the question. If you're building something, it hands you better lumber. If you're building nothing, it gives you a nicer couch to sit on.
What Makes Us Human
The fear underneath all of this — the fear my dad has, the fear the tech optimists are trying to medicate with enthusiasm — is that maybe we're not as special as we thought. That if a machine can write a poem, maybe poetry isn't sacred. If a machine can diagnose a disease, maybe doctors aren't healers. If a machine can paint a picture, compose a song, argue a case — then what exactly is left that's ours?
Everything.
A machine can generate a poem. It cannot need to write one. It cannot lie awake at 3 AM with a feeling it can't name, reach for a pen, and discover that the act of writing is the act of understanding. It cannot feel the specific grief of a son who disagrees with his father and loves him anyway and doesn't know how to say both things at once.
A machine can produce a song. It cannot hear a melody and feel homesick for a place it's never been.
A machine can optimize a decision. It cannot want something. It cannot choose to sacrifice efficiency for meaning, comfort for conviction, safety for love.
That's what's ours. Not the computation. Not the memorization. Not the labor. The wanting. The intent. The choice to pick up the tool or set it down, to build or destroy, to engage or walk away.
No model closes that gap. No algorithm crosses that line. The machine processes. The human decides.
The Real Danger
The real danger of AI isn't that it thinks for us. It's that we forget we were ever supposed to think for ourselves.
Not because the machine took that from us. Because we gave it away. Voluntarily. Happily. One convenient answer at a time.
The conservative is right to guard against passivity. The progressive is right to embrace expanded capability. The mistake is treating these as opposing positions. They're the same position, stated by people who are afraid of different things.
Guard your mind. Use the tool. Both. At the same time. That's not a contradiction — it's discipline.
A Night in February
I'll tell you what happened to me on a Sunday night in February.
I sat down at my desk not knowing what an API was. By the time I looked up, it was past midnight and I had built a desktop application that extracts structured data from a 96-page wargame rulebook, a Python pipeline that feeds that data into an AI model for classification, and a documentation system that maps the entire architecture.
The AI didn't build that. I did. It was my project, my vision, my problem to solve. I hit errors — dozens of them. Wrong file paths. Encoding failures. Authentication crashes. A bug that ate five dollars I couldn't afford to waste. Every time, I had to understand what went wrong, decide what to do, and push through.
My dad would say the AI did the work. With respect — he wasn't there. He didn't see the hours. He didn't see the decisions. He didn't see me choosing, over and over, to keep going when it would have been easier to quit.
That's not a machine's contribution. That's mine. That's human.
In Defense
I'm not here to defend artificial intelligence. I don't think it needs defending. It's a tool. It'll be here whether we like it or not, whether we use it or fear it, whether we understand it or project our anxieties onto it.
I'm here to defend humanity. Not from AI — from the two stories we keep telling ourselves about it.
The first story says we're too fragile to survive the machine. That our minds are so weak, our wills so thin, that the mere presence of a capable tool will reduce us to dependents. That story is an insult to every person who ever picked up a hammer and built something with their hands.
The second story says we don't need defending at all. That the machine will save us, optimize us, perfect us. That story is an insult to every person who ever chose the harder path because it was the right one.
The truth is simpler and harder than either story.
We are exactly as strong as we choose to be. The tool doesn't change that. It just makes the choice louder.
Pick up the chainsaw. Build the cabin. Don't cut off your leg.
And call your dad. Even when he's wrong, he's paying attention. That's more than most people do.