Why AI (investment) is really the death of humanity
Lede not buried: we spend billions to train machines to take human jobs while we cut funding to educate human children. Is this progress?
AI is not just coming for your job, it’s coming for your life. No singularity is necessary for this to happen, because AI won’t become conscious and won’t do the dirty deed itself. A few people with money and power have indicated very clearly that they don’t think we need this many pesky humans around, and are working hard to make them obsolete. Or more accurately, they’re not thinking. They are acting on an impulse that conscious thinking can’t deter them from, down a path that will eventually lead to the erosion of their own standing as wealthy and powerful. It’s not an AI problem, it’s a human economic problem.
Few things in human society are truly binary, but AI investment is close. Money and resources are flowing directly away from people, and directly to machines. A CEO was murdered quite literally because he listened to an algorithm that said he could make more money if he let more people suffer and die. And the profits from that human suffering will be split between a few wealthy people, and investment in the machine. In a nutshell, that is what our economy has become.
The AI investment strategy is senseless on so many levels. Ironically, it indicates how much we overestimate our own intelligence and how little we actually understand it. We humans are far too stupid to create intelligence, and we’re out to prove it. We’ve taught the machine everything we know, and that hasn’t been nearly enough to create intelligence.
This failure, this inability to see the current limits of our own intelligence, will likely be the fall guy in the next major economic crisis. We think we’re training machines, but we’re really preparing humanity for a spectacular collapse that hopefully doesn’t end us. It’s not about AI at all, really. It’s about how humans act when faced with prosperity. The result appears to be a series of hallucinatory hype bubbles that burst, until we inflate one big enough that the ensuing collapse shakes us back into survival mode, which our consciousness is much better suited to handle than prosperity. And the next round of building toward prosperity would theoretically begin a little further along than the last one. All the while, we think we’re doing something else entirely.
AI accelerates destruction of the planet we live on, which is currently the only place in our universe that we know we can survive. Our legacy is a few thousand billionaires in a few spaceships, wandering the galaxy for what — a year or two until they die? And stripped of the masses of people that give them power, those few thousand probably end up regressing and killing each other. Makes for good TV because fictional stories drive most of what Elon does. And maybe not just him. Maybe we’re all driven by fiction.
Economics is a social science. Without people, there is no wealth and power. Money spent by these supposedly superfluous humans creates the flow that ends in the pockets of billionaires. AI investment quite literally kills the cash cow. For a minute, Zuck was all in on universal basic income. The people need to be given money to spend to make this kooky machine work. In a monumental misunderstanding of economics, Zuck reasoned that since the government can print money, it should hand that money to people directly rather than making him send his own back down to the bottom. Zuck was right in assuming he would eventually get a good portion of that printed money, but that money would have to be taxed 1:1 to maintain stable value, and he would need to pay the tax. Billionaires are billionaires because they’ve established a terminal position in this flow, and cutting off the beginning of the flow turns the spigot off on their wealth and power. Yet, they’re doing it anyway. That’s not intelligent. Billionaires are like ranchers, raising livestock for the nutrients the livestock provide. Now they’ve essentially stopped planting grain to feed the livestock they would later harvest, and are instead replacing the livestock with machines they can’t eat. Fundamentally, not intelligent.
Let’s imagine. If more investment goes to machines and less to people, more people will be marginalized, contributing next to nothing to the economic flow. So the burden of being the livestock moves up the chain. Those at the top have fewer sources, and “the bottom” keeps moving up. Millionaires become the new poor, then cease being millionaires. Smaller billionaires are the next targets. Forcing a zero-sum game makes it, well, a zero-sum game. But the game never actually gets down to one player. Collapse and revolution would happen long before even millionaires were threatened, most likely, unless the aforementioned space odyssey comes to fruition, at which point we skip ahead to the big-B billionaires eating the small-B billionaires until the small Bs kill the big Bs. Same process. Back on earth, the leftovers will set to work building their own power strata, likely based on some Mad Max-influenced regime of basic resource hoarding and protection.
Have you seen TV commercials for AI? From Salesforce to Google to Apple, all of the commercials have one message: we think you’re stupid, and we’d love it if you used our tools to become stupider. It’s not a new message, just far more transparent now, because AI offers consumers so few tangible benefits. Tech businesses no longer create real value for consumers. That era is completely over. They’re drug dealers and we’re hooked on their products. Now they’re just turning the screws, squeezing out as much data as possible. We can’t actually stop them. I mean, technically we could, but we won’t.
Luckily, I guess, economic collapse will do what we can’t bring ourselves to do and stop all of this tomfoolery. Those in power will run the ship into the ground, and all of a sudden, Apple Intelligence in my phone will disappear like the puff of smoke that it is, and we’ll get back to the business of surviving. Many of us won’t. Individual humans can understand this, but as a group, we are becoming less than the sum of our parts. Hindsight is 20/20, and now, faced with our own stupidity, many of us are squinting back at history and saying, “d’oh, that’s what happened last time, too.”
It doesn’t matter what I think, but personally, I think all of this investment in AI would be much better spent on neuroscience, on understanding our own intelligence and our own consciousness. If we understood what we were working with — the actual mechanisms of our own consciousness — we would understand our own limitations, and push on that boundary. Consciousness is our model of the universe, and we would do well to understand both its accuracies and inaccuracies. I’m not talking about the bastardization of yoga nidra or Buddhism spread to hallucinating boomers by opportunistic Eastern-religion televangelists and cult leaders. I’m talking about really understanding the mechanisms and uses of consciousness for the human organism. I think we’ve fooled ourselves into believing conscious thought is something it’s not, and that AI hallucinates because we’ve taught it to.