AI is the snake that eats its tail



Posted by blindness on March 29, 2024 at 10:22:30

Didn't see that coming, in all honesty. An interesting piece on AI in today's NYT:

A new study this month examined scientists’ peer reviews — researchers’ official pronouncements on others’ work that form the bedrock of scientific progress — across a number of high-profile and prestigious scientific conferences studying A.I. At one such conference, those peer reviews used the word “meticulous” almost 3,400 percent more than reviews had the previous year. Use of “commendable” increased by about 900 percent and “intricate” by over 1,000 percent. Other major conferences showed similar patterns.

Such phrasings are, of course, some of the favorite buzzwords of modern large language models like ChatGPT. In other words, significant numbers of researchers at A.I. conferences were caught handing their peer reviews of others' work over to A.I., or, at minimum, writing them with lots of A.I. assistance. And the closer to the deadline the submitted reviews were received, the more A.I. usage was found in them.
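For the curious, the detection here boils down to simple counting: measure how often the marker words show up in each year's reviews and compare the rates. A toy sketch of that idea in Python (my own illustration, with a made-up word list and made-up corpora, not the study's actual pipeline):

from collections import Counter

# Words that LLMs like ChatGPT statistically over-produce.
# Illustrative list, not the study's actual marker set.
MARKER_WORDS = {"meticulous", "commendable", "intricate"}

def rate_per_million(reviews):
    # Occurrences of each marker word per million tokens of review text.
    tokens = [w.strip(".,;:!?").lower() for r in reviews for w in r.split()]
    counts = Counter(t for t in tokens if t in MARKER_WORDS)
    total = len(tokens) or 1
    return {w: counts[w] * 1_000_000 / total for w in MARKER_WORDS}

def percent_change(before, after):
    # Year-over-year change in each word's usage rate, as a percentage.
    return {w: (after[w] - before[w]) / before[w] * 100 if before[w]
               else float("inf")
            for w in MARKER_WORDS}

# Toy corpora standing in for two years of peer reviews.
reviews_2022 = (["The experiments are solid but the writing is unclear."] * 49
                + ["A meticulous but limited evaluation."])
reviews_2023 = (["This meticulous, commendable work offers an intricate analysis."] * 5
                + ["The experiments are solid but the writing is unclear."] * 45)

print(percent_change(rate_per_million(reviews_2022),
                     rate_per_million(reviews_2023)))

Crude, but when a word's rate jumps thirty-fold in a single year across thousands of reviews, you don't need anything fancier to smell the LLM.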

This ties in with a fear I had a while back, when I started seeing comments on LinkedIn about using AI to "get started" on business correspondence. Once you realize businesses will naturally rely on GPT models to summarize the correspondence they receive, it's a teeny weeny little step to all correspondence between companies being handled by AIs talking to one another, occasionally spitting out summaries tailored to what company officials want to see, with actual cognition no longer part of that chain at all, in the name of productivity and efficiency. I'm sure the productivity numbers skyrocket once human cognition is removed from the picture.

So now we have academics relying on AI to write about the very stuff they put all those years of hard work into understanding better. Brilliant.

There’s so much synthetic garbage on the internet now that A.I. companies and researchers are themselves worried, not about the health of the culture, but about what’s going to happen with their models. As A.I. capabilities ramped up in 2022, I wrote on the risk of culture becoming so inundated with A.I. creations that, when future A.I.s were trained, the previous A.I. output would leak into the training set, leading to a future of copies of copies of copies, as content became ever more stereotyped and predictable. In 2023 researchers introduced a technical term for how this risk affected A.I. training: model collapse. In a way, we and these companies are in the same boat, paddling through the same sludge streaming into our cultural ocean.
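The mechanism is easy to see even in a toy setting. A minimal sketch of model collapse (my own illustration, not from the piece): "train" by fitting a Gaussian to the data, "generate" by sampling from the fit, then train the next generation only on those samples. With a small sample each generation, the tails get lost and the spread tends to shrink, copies of copies of copies:

import random
import statistics

random.seed(0)

def fit(samples):
    # "Training": estimate the mean and spread of the data.
    return statistics.mean(samples), statistics.pstdev(samples)

def generate(mu, sigma, n):
    # "Model output": draw n samples from the fitted distribution.
    return [random.gauss(mu, sigma) for _ in range(n)]

# Generation 0: real, human-made data.
data = generate(0.0, 1.0, 20)

# Every later generation is trained only on the previous generation's output.
for gen in range(61):
    mu, sigma = fit(data)
    if gen % 10 == 0:
        print("gen %2d: mean=%+.3f  stdev=%.3f" % (gen, mu, sigma))
    data = generate(mu, sigma, 20)

A real model collapsing is obviously subtler than a two-parameter Gaussian, but the dynamic is the same: each generation can only reproduce what the previous one happened to sample, and the rare stuff quietly disappears first.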

So this would be what ... generative closure? Keep in mind all of this is happening with zero cognition or reasoning involved anywhere along the process. All we have is a complex matrix of word collocations, nothing more.

The author then goes on to talk about how all the AI garbage being generated is polluting our culture, which is a term I don't think fits quite right in this context, but I suppose "culture" has a more visceral punch than "information ecosystem," which is what I think is actually getting polluted. Culture is what we humans create out of those bits of information. Yeah. Nitpicking a little, whatchagonna do?

The next decade is going to be some weird psychedelic trip, with every person on the planet manufacturing their own reality with airtight proof ... but I'm getting ahead of myself.

The piece I linked has interesting bits in it.


