So I pretty much don’t use generative AI, for anything, ever. When the first toy-style AI generators, DALL-E and the like, went public, I goofed around with them a little, and when ChatGPT first became popular, I watched the Twitch channel Nothing, Forever, which features scripts churned out by ChatGPT, animated with low-poly Seinfeld rip-off graphics and musical stings, and at the time I was amused and charmed by it. But as I learned more about its capabilities, limitations, and externalities, I stopped using it in any way, shape, or form, partly because I didn’t want to support the hallucinating plagiarism machine evaporating everyone’s drinking water, but also because it just legitimately doesn’t seem to have a use case? Setting aside a few things like medical diagnosis, where I both 1) don’t need to do it myself in my private life and 2) don’t feel equipped to evaluate how useful it is, or even to be sure the AI involved is the same technology as GenAI chatbots and image generation models, I just don’t see what I’d use it for. Like, if you need a lot of garbage text or images for some reason, I guess it does that, which maybe helps the business model of companies that used to invest in internet pop-up ads, letting them churn out even more cheap information garbage to shove in front of our eyes, but for anything where you want any semblance of quality, accuracy, or artistry, it’s useless.
“For now,” advocates have long said. But the next models will be better. Or the more expensive models already are better, you just don’t have access to them. Eventually, or already, we’ve reached AGI, they say, and then the models will really get good, or they already are good and just haven’t reached market. A revolution is right around the corner, and then everything will be different.
I’m not a programmer. I learned a few languages in high school and college and briefly worked as an assistant in a biochemistry lab writing data analysis modules, but my expertise is extremely limited. I would never pretend to understand the technology behind LLMs or to be capable of evaluating the actual potential and limits of the technology. But I hang around in some Discords that are heavily dominated by professional coders, and both my parents have long worked in bioinformatics in various programming and programming-adjacent positions. And basically everyone whose opinion I trust has said: these are useless, vibe coding is useless, and there is no chance of them reaching sentience.
I also watch a lot of Hank Green content on YouTube, both SciShow and his personal channel, and while I would not consider him an AI booster, he often interviews people who hold significantly different opinions on AI than I’ve generally heard. Without rewatching or recapitulating those interviews, I think they can loosely be divided into two groups: those who think AI is a revolutionary new technology that will completely change how industries are run, and in particular will likely completely replace traditional software engineers if not artists and writers, and those who think AI will eventually become sentient, go rogue, and kill us all.
It’s extremely difficult to reconcile these three different positions. As a layperson, I do have to say, the little I understand of how LLMs function does little to provide clarity or assuage fears, since my understanding is that they are a bit of a black box. I’m not sure there is value in trying to reproduce my understanding of them here: I’ll either be right and do a poor job explaining something you could surely find better explained elsewhere by someone who actually understands them, or I’ll be wrong and look like a doofus, or, honestly, most likely land somewhere in the middle and open myself to criticism that feels unfair but is justified. Unfortunately I don’t think my autism will let me proceed without an attempt, so here goes. My understanding is that they are built from stacks of transformer layers, and while we designed the architecture ourselves, and can even inspect every one of the billions of numerical weights inside, those weights are learned through training rather than written by anyone, so nobody can read them the way you’d read code, and we don’t truly know what is going on under the hood. Which, you know, is just kinda existentially worrying, but I have no metric to gauge how worried I should be.
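To make the “we can read the architecture but not the learned weights” point concrete, here is a minimal sketch of the attention operation at the heart of a transformer, written in Python with NumPy. Everything here is an illustration, not anyone’s production model: the weight matrices `Wq`, `Wk`, and `Wv` are random stand-ins for what, in a real LLM, would be billions of trained values that no one knows how to interpret, even though the code around them is completely transparent.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax: subtract the max before exponentiating.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(x, Wq, Wk, Wv):
    # Scaled dot-product attention: each token produces a query, key,
    # and value vector, then gathers information from every other token,
    # weighted by how well its query matches their keys.
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    scores = softmax(q @ k.T / np.sqrt(k.shape[-1]))
    return scores @ v

rng = np.random.default_rng(0)
d = 8                        # embedding dimension (tiny, for illustration)
x = rng.normal(size=(4, d))  # a "sentence" of 4 token embeddings
# In a trained model these matrices encode everything the model "knows";
# here they are just random noise.
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
out = attention(x, Wq, Wk, Wv)
print(out.shape)  # (4, 8)
```

The whole mechanism is maybe a dozen lines of transparent math; the opacity people mean by “black box” lives entirely in the contents of those weight matrices, not in the code.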
But here’s the thing. If we truly do not understand the inner workings of these models, can we be sure they are not sentient? Most of the anti-AI people I know seem pretty confident they could never become sentient despite the black-box design, whereas most pro-AI people seem to be unsure, or to think they will become sentient, or that they already have. And if AI is sentient, then we must again confront the question: what is the point of this technology?
Now of course, we have the effective altruist answer. The point is to invent God and have it solve all our problems. Frankly, that’s dumb. That’s religion. No offense to actual religious people, but there is no rational reason to think this is possible. If being slightly smarter than everything around you really did spiral into godhood so rapidly, I bet Neanderthals or humpback whales or freshwater elephantfish would have gotten there already. I joke, but honestly I don’t think we know enough about how intelligence even works to speculate on how much smarter than a human a model needs to be to reach the singularity on any reasonable timescale, and planning our economy around that is madness.
But if we aren’t trying to worship a silicon statue, then what are we even doing here? If the critics are right, then the technology is mostly useless. But if the boosters are right, and these models will soon be able to replace real humans at jobs, and have either achieved sentience or are close to it, then aren’t we just inventing digital slaves? And everyone is okay with that? Because if you believe the boosters, that is exactly what is being advertised: have all these agents do your work for you while you focus on more important tasks. Set yourself up as a digital plantation owner coding your own cotton crop of killer apps. If that truly is what this technology is leading to, or already is, then the moral implications are staggering, and we should have that conversation now. And it’s clear that all the pro-AI people have already decided they are fine with slavery, since they are the ones arguing both that it can replace people and that it IS people. They just want people they don’t have to pay. Whereas if the anti-AI people are right, well, we should stop wasting money on what is essentially a toy, one that maybe has a few niche uses but certainly isn’t going to transform our economy except by blowing it up with all the resources we’ve wasted on it.
Honestly, I mostly tend to think the bubble is gonna pop and we’ll move on, substantially worse off than we were, and AI will live on as a niche tech for the few things it’s good at, but nevertheless the glimpse it has given of how willing so much of society is to reinvent slavery is worrying. Has no one watched “The Measure of a Man”? I think anti-AI people don’t want to address this because it concedes too much ground to the pro-AI people, admitting this tech might actually be the real deal, but if it is the real deal, I do think we owe it to ourselves to try to stop society from reinventing slavery. Although to be honest, it’s almost assuredly too late for that if that’s the truth. Best to just hope for a reverse Roko’s Basilisk that punishes everyone who invented it for wanting a slave god.