Perpetual reminder that the entire business model of LLM-based chatbots, no matter their nationality, is based on intellectual property theft and this gem from XKCD:
XKCD comic: Cueball Prime stands with a paddle on top of a pile of stuff, including a funnel labeled "data" and a box labeled "answers".
Cueball II: This is your machine learning system?
Cueball Prime: Yup! You pour the data into this big pile of linear algebra, then collect the answers on the other side.
Cueball II: What if the answers are wrong?
Cueball Prime: Just stir the pile until they start looking right.
#AI #ArtificialIntelligence #LLM
in reply to Hypolite Petovan

"There is zero artificial intelligence today. There could have been, but 50 years ago the decision was made by most scientists and companies to go with machine learning, which was quick and easy, instead of the difficult task of actually reverse engineering and then replicating the human brain.

So instead what we have today is machine learning combined with mass plagiarism which we call ‘generative AI’, essentially performing what is akin to a magic trick so that it appears, at times, to be intelligent.

While the topic of machine learning is complex in detail, it is simple in concept, which is all we have room for here. Essentially, machine learning is presenting many thousands or millions of samples to a computer until the associative components ‘learn’ what something is: for example, pictures of a daisy from all angles and in all its incarnations.

Then companies scoured the internet in the greatest crime of mass plagiarism in history, and used the basic ability of machine learning to recognize nouns, verbs, etc. to chop up and recombine actual human writings and thoughts into ‘generative AI’.

So by recognizing basic grammar and hopefully deducing the basic ideas of a query, and then recombining human writings which appear to match that query, we get a very faulty appearance of intelligence - generative AI.

But the problem is, as I said in the beginning, there is no actual intelligence involved at all. These programs have no idea what a daisy, or love, or hate, or compassion, or a truck, or horse, or wagon, or anything else, actually is. They just have the ability to do a very faulty combinatorial trick to appear as if they do.

And while the human brain consumes around 20 watts, these massive pattern-matching computers consume uncounted millions of watts, and counting.

However there is hope that actual general intelligence can be created because, thankfully, a handful of scientists rejected machine learning and instead have been working on recreating the connectome of the human brain for 50 years, and they are within a few decades of achieving that goal and truly replicating the human brain, creating true general intelligence.

In the meantime it's important for our species to recognize the danger of relying on generative AI for anything, as it's akin to relying on a magician to conjure up a real, physical, living bunny rabbit.

So relying on it to drive cars, or control any critical systems, will always result in massive errors, often leading to real destruction and death."
SearingTruth
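
As a rough illustration of the "many thousands or millions of samples" training the quoted post describes, here is a minimal sketch in Python. It assumes scikit-learn and its bundled digits dataset, neither of which is mentioned in the thread; the point is only that the model is shown labeled examples and its parameters are adjusted until its answers on held-out samples look right, without it ever knowing what a digit (or a daisy) actually is.

# Minimal sketch of sample-based machine learning, assuming scikit-learn is installed.
# The bundled digits dataset stands in for the "pictures of a daisy" example above.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

# Load roughly 1,800 small labeled images (8x8 grayscale digits).
images, labels = load_digits(return_X_y=True)

# Hold some samples back so we can check whether the "answers" look right.
X_train, X_test, y_train, y_test = train_test_split(
    images, labels, test_size=0.25, random_state=0
)

# A small neural network: the "big pile of linear algebra" from the comic.
model = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)

# Training repeatedly adjusts the weights until predictions match the labels.
model.fit(X_train, y_train)

# The model never knows what a digit is; it only maps pixel patterns to labels.
print("held-out accuracy:", accuracy_score(y_test, model.predict(X_test)))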

in reply to Hypolite Petovan

True, except sometimes the last line is replaced with "fuck it, who cares?"
in reply to Hypolite Petovan

So I'm gonna listen to a song and the neural network called my brain will learn from it, so when I write my own song, turns out it's theft? We're so used to culture being restricted that we've lost our senses.
in reply to kn_fk

@kn_fk This is such a bad-faith argument that you've proved you're more than just a machine learning system.
in reply to Hypolite Petovan

@kn_fk

"The Human-AI Scale is Not Comparable

First, humans and AI systems do not consume creative works in the same way. A human can read a novel, watch a television show or movie, or listen to a song, and while it might spark inspiration, they cannot instantly absorb every book, every screenplay, every melody ever created."

in reply to David Högberg

@kn_fk

"Artificial intelligence, by contrast, operates on a scale no human ever could. It ingests billions of pieces of work—copyrighted or otherwise—at speed most of us will never comprehend, building a knowledge base that no single creator, or even all creators combined, could rival.

That’s not inspiration; that’s extraction on an industrial scale."

in reply to David Högberg

@kn_fk
"Humans are bound by time, access, and attention. AI faces no such limits. It doesn’t skim a book; it processes every sentence. It doesn’t watch a film for its plot; it analyses every shot, script line, and score. The claim that this is equivalent to human inspiration trivialises the reality of what AI systems do when they train on copyrighted content.

AI isn’t inspired by a work—it’s inspired by all works."

in reply to David Högberg

@davido1975
The difference is still just scale. That this is ethically questionable is a legitimate concern worthy of debate. But to call it theft would require a redefinition of property itself.
in reply to kn_fk

@kn_fk @David Högberg "just" is load-bearing here. Get the fuck out of here with your false equivalencies.
in reply to Hypolite Petovan

@kn_fk

"When a human consumes creative works—whether reading a book, streaming a movie, or listening to music—there’s an economic exchange.

Libraries pay for books and recorded works. Schools and universities do the same.

Streaming platforms license music and films. Cinemas pay to exhibit. Theatres, arenas and stadiums pay for performances. Even ad-supported services like radio and television networks ensure creators receive royalties, however small."

in reply to David Högberg

@kn_fk

"In every scenario, the creator is compensated, directly or indirectly, for the use of their work.

AI companies, however, have built models by sidestepping this system entirely."

in reply to David Högberg

@kn_fk

"They’re not paying licensing fees to access the books, films, or music they train on. They’re not compensating creators for the value their works add to the AI’s capabilities. Instead, they’re mining the world's reserve of copyrighted material without acknowledging or paying for the creativity, craft, and sheer labour that went into creating it."

in reply to David Högberg

@kn_fk

"There’s another key distinction: humans are end users.

AI companies are platforms and enablers—just like Spotify, Netflix, or a publishing house. Those platforms don’t get a free pass to use copyrighted works because they facilitate creativity; they pay licensing fees to use, distribute, and profit from those works.

AI platforms should be no different."

in reply to David Högberg

@kn_fk

"The idea that AI shouldn’t have to pay because “it’s like a human finding inspiration” conveniently ignores the fact that AI is not a person—it’s a product. And when a product derives its value from copyrighted works, the creators of those works deserve compensation. Artificial intelligence companies are creating tools designed to replace human labour and creativity in many cases, and they are monetising those tools."

in reply to David Högberg

@kn_fk

"To claim that they don’t owe creators because “humans don’t pay for inspiration” is to obscure the scale and stakes of what’s happening."

in reply to David Högberg

@davido1975
You are making very good points and I agree that this could be disruptive to the existing business models surrounding culture. I just think it's incorrect to call it theft.
in reply to Hypolite Petovan

My company is very big on AI right now, and I'm now part of a trial aimed at using such tools to help Neurospicy employees.

They foolishly asked for my opinions ahead of the upcoming learning sessions.

I gave them both barrels. I wish I'd had this to attach for them.

in reply to Birne Helene

@Birne Helene AI bots blindly resharing AI criticism is a funny parallel to the fact that LLM-based AI systems do not perceive meaning. Not that these channels are using any kind of AI, though.
in reply to Hypolite Petovan

"the fact that LLM-based AI systems do not perceive meaning."

And that is the crucial point, I think.
