Why do we want AI?

Pondering Skeptic
6 min read · Sep 17, 2023


AI is boring. We’ve done this already.

It feels like sacrilege to be working as a software engineer in Silicon Valley and say this, but AI is boring. Now, actually understanding intelligence is not boring, but outsourcing the attempts to understand it is just…bleh. I get it. Thinking is hard. But we’ve already tried outsourcing that before, and it has never worked out well for us.

Religion is outsourced thinking. It has given us centuries of obfuscation and war. It has taken ideas which might have been good and had meaning at one point, put them in a black box, and then handed that box to a few who use it to control the many. AI is really no different.

It is a black box that many can outsource their thinking to, controlled by a few (we call them Investors). Those in control don’t know what’s in the box, and those being controlled don’t know what’s in the box. There are a few, we’ll call them Mystics, who understand the origins of what goes into the box and have some idea of the functions happening inside it. But they don’t know what’s in the box either. At some point, all of the Mystics will die out, and we’ll just listen to the box. We called that the Dark Ages the first time.

Makes one wonder how we think we can create intelligence when we understand so little about it.

We each already have our own virtual reality, an abstraction over the “real” world, called perception. Now, do we intend to muffle and abstract that further, placing yet another black box between ourselves and real intelligence? It seems counterproductive to me. I don’t get it. I don’t know why we want it. It is a way to build hype and extract resources (money), which is what Investors are looking for. I understand why they want it. But why do the rest of us want it? Do we? Do you? Do we just want to be able to get worse and worse at writing, then wave a magic wand over it and make it good writing? That seems to be a primary use case initially. But what does that really get us? Worse at writing.

Do we really want to apply a tool that makes us worse at whatever we apply it to? That stops our own learning in favor of a machine’s learning? Yes, that’s pessimistic. But we’re just coming off of Social Media, which has made us more anti-social, and off of blockchain, which was intended to give us back control of transactions and has instead spawned just as much theft in (somewhat) new and novel ways. Do we really think it’s going to work this time?

And really, what is the promise of AI? It is so execs can fire workers, replace them with AI, and send those salaries to shareholders. To put people out of work and give that money to the rich. That’s it. That’s the whole reason for AI hype. Sure, it could be used for other things, good things, but this is the thing that Investors are shelling out big bucks for. And because this is where the money is going, this is the thing that will actually happen.

By the time we realize that AI maybe can’t replace people effectively, it will be too late. We’ll already be accustomed to things being generally shittier because AI can’t actually replace a human. We’ve been slowly becoming accustomed to shittier things anyway; ideas like planned obsolescence have already proven that we’ll put up with them. Big [insert any industry here] has already determined that the consumer experience of its products is pretty negligible. We’ll swallow anything and pay for it. So once the rails for AI making things shittier are in place, we’ll just accept more unemployed people and shittier things. So why, again, do we want this? And, more importantly, how can we say we don’t want it in a way that will make any difference, and before it becomes an irreversible drain on most of us?

Phew. That part was pure catharsis: a reaction to product managers and business “leaders” performing enthusiasm for AI to please their betters, when their betters see them as tools in the very best case and despise them in the more likely one. And to the sub-C, VP-level wannabe Cs playing human sponge to all of the shit beneath (and above) them so no smell reaches the Cs (they’re already immune to their own). But the catharsis is over, and something much better will emerge.

Actually understanding intelligence is interesting. Here are some off-the-cuff musings on that path.

Each human (or organism, really) is both an ecosystem and part of an ecosystem. You could say there are layers of ecosystems. As individuals, we formulate patterns of behavior that transfer resources from one level of the ecosystem (extrinsic) to another level of the ecosystem (intrinsic). Patterns that “work” are stored and passed on. But the intrinsic and extrinsic ecosystems constantly change. The patterns need to be updated.

Extrinsically, we get signals from our senses. Consciously, we’re aware of very little about these signals, but we do get highly processed versions of them that become what we see, hear, touch, smell, and taste. None of those final, processed signals is “real”, but they are the best approximation of our environment that we can be aware of. The raw signals have more nuance, and they are being processed by an intelligence that we aren’t aware of. Put a pin in that as the real intelligence we want to dive into. This is where I personally think humans should invest: in understanding what lies beneath the virtualization of our own perceptions.

New signals are continually matched against the existing intrinsic survival patterns of the internal ecosystem. If there is a good match, we feel good. If there is a poor match, we feel bad. We may feel those things in varying magnitude based on how good or poor the match is. As far as conscious perception goes, though, we’ve already abstracted much of the nuance away. We essentially get a dumbed-down view of both the external stimulus and the internal reaction to that stimulus, with which our conscious functions create another layer of abstracted patterns. What combination of extrinsic stimuli created the most positive intrinsic stimuli? How do we create scenarios with a greater chance of good feeling over bad?

We struggle with that daily. We plan to do things; we set goals for things that will give us more good intrinsic feedback. But we’re dealing with such an abstracted dataset that we often don’t understand why we fail or succeed at producing good or bad intrinsic feedback. But if we keep it at a coin flip, chance, 50/50, we can still keep ourselves at homeostasis. If we go too far out of balance either way (good or bad), we run the risk of death: our own death, and the death of our intrinsic ecosystem.
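
If it helps to see that loop spelled out, here is a toy sketch in code. None of it is a claim about how organisms actually do this; the stored patterns, the similarity measure, and the thresholds are all made up purely for illustration.

```python
# Toy model only: every name and number here is invented for illustration,
# not a claim about how organisms actually work.
import random

class Organism:
    def __init__(self, patterns):
        self.patterns = patterns   # stored patterns that have "worked" before
        self.balance = 0.0         # running good/bad balance; 0.0 is homeostasis

    def match(self, signal):
        # similarity to the closest stored pattern, clamped to [0, 1]
        best = min(abs(signal - p) for p in self.patterns)
        return max(0.0, 1.0 - best)

    def perceive(self, signal):
        similarity = self.match(signal)
        valence = similarity - 0.5   # good match feels good, poor match feels bad
        self.balance += valence
        # drifting too far from homeostasis in either direction is dangerous
        return "out of balance" if abs(self.balance) > 5.0 else "ok"

    def update_patterns(self, signal):
        # the environment keeps changing, so unfamiliar signals get stored too
        if self.match(signal) < 0.7:
            self.patterns.append(signal)

org = Organism(patterns=[0.5])
for _ in range(20):
    s = random.random()              # a new extrinsic signal
    print(org.perceive(s), round(org.balance, 2))
    org.update_patterns(s)
```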

The same sort of balance is happening in the extrinsic ecosystem. If that ecosystem dies, we die, just like all of the organisms in our intrinsic ecosystem die if we die. So it is in our best interest to affect our extrinsic ecosystem as well. On the other side of the coin, it is in the best interest of the organisms inside us to affect us, their extrinsic ecosystem. What part do those organisms play in the intrinsic feedback, the good/bad signals that we perceive? How do they act to affect us, their macro environment, as we attempt to affect our macro environment? What do the signals they receive and produce look like, and what does their interface with us look like? What processing are they doing on incoming signals? How are they determining whether their own survival is threatened or enhanced? Do they have continually updated internal patterns?

We haven’t done a lot here, but we’ve identified two levels of intelligence working for survival. Each level is trying to keep homeostasis internally and harmony externally in order to survive. At each level, there are actors with a certain amount of imperfect data available to them about the current state and the ideal state for survival, but they are all evaluating the current state against the ideal state and attempting to move whatever levers they can to effect a positive outcome.
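
And here is the same idea one level up, sketched as two coupled loops: actors at each level with only a noisy reading of the current state, comparing it to the state they need for survival and nudging what levers they have. Again, every number in it is arbitrary; only the shape of the loop matters.

```python
# Another toy sketch: two levels, each seeing only a blurred version of its
# own state, each pushing toward its survival point. All values are invented.
import random

def noisy(value, noise=0.2):
    # imperfect data: no actor sees the real state, only a blurred reading
    return value + random.uniform(-noise, noise)

inner_state, outer_state = 0.9, 0.1   # intrinsic and extrinsic ecosystems
inner_ideal, outer_ideal = 0.5, 0.5   # each level's survival point

for _ in range(50):
    # inner actors evaluate their blurred reading against their ideal and push
    inner_state += 0.1 * (inner_ideal - noisy(inner_state))
    # we do the same with our own blurred reading of our environment
    outer_state += 0.1 * (outer_ideal - noisy(outer_state))
    # each level is also part of the other's ecosystem, so they leak together
    inner_state += 0.05 * (outer_state - inner_state)
    outer_state += 0.05 * (inner_state - outer_state)

print(round(inner_state, 2), round(outer_state, 2))  # both drift toward balance
```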

I think that’s enough of a dive for today. We’ve taken a square root of intelligence, and there are exponential levels to go in each direction. More for later.

But anyway, our intelligence is already pretty artificial, and now we want to make it more so. Doesn’t make sense.

ADDENDUM: This paper outlines the similarities between bacterial cells/colonies and cell-to-cell communication in more complex organisms. It argues that bacteria may be a viable and more accessible subject for neuroscientists to study than more complex nervous systems. The implication is a little creepy but fascinating: we may each have one or more foreign brains in our bodies in the form of bacterial colonies.

SECOND ADDENDUM: There are as many microbial cells as human cells in the human body. We are each half ourselves and half something else. Crazy.
