I wrote this 🧲💯🚀
Congrats on publishing your first piece for Aporia!
“…the AI is appealing to our baser instincts; and second, the AI is making us dumber in the process.”
Dumber is right. Appealing to our baser instincts is nothing exceptional; successful salespeople and plain hucksters are masters of this. But dumber is very concerning. I’ve railed against this inevitability for some time now.
Indeed, I’ve lately been using AI for personal medical diagnosis, treatment, and prognosis. I have gained from such inquiry a better relationship with, and subsequent understanding of, my health than from my MD! At this point, the information is only used for my own education and as discussion points with my physician. But someday, especially given DEI initiatives in medical schools, we will see this become the “go-to” for newly minted family physicians and specialists.
Such cannot end well.
Excellent article.
"This brings us closer to my central argument: the AI’s success wasn’t due to superior reasoning or greater reliance on facts and logic."
And of course, that is the crux of the problem. AI just becomes a more efficient propaganda tool.
"Overreliance on AI causes a demonstrable atrophy in human capabilities."
Indeed, it does. Any reliance on a substitute for self-critical thinking lessens your abilities.
AI cannot be trusted. AI is the mind of the creator. Maybe AI and Reddit are useful together. AI alone is a Philip K. Dick nightmare.
"AI is the mind of the creator."
Correct. This is something that those who believe AI will be able to take over the world should remember. AI is, in essence, a large computer program. It can only do as instructed, no matter how complex the instruction.
Seems like an AI trained to say any lie necessary to persuade would do really well on Reddit. They are smart over there, but they like being lied to.
I understand the above-mentioned AI meddled with CNN and BBC articles. Unfortunately, even taken 'pure', those sources are pretty bad on a whole slew of themes: climate collapse, institutional racism, Trump the Fascist, etc. Hyperbole and BS rule. One prime example would be a BBC piece a few years ago about oceanic pollution. The article came with a dramatic image of a turtle with a big piece of plastic around its neck. When the BBC was found to have photoshopped the image, they apologized and called it 'a test image'...
At least ChatGPT, like the BBC, admits it made things up when found out:
The Truth About Tariffs | Cullen Roche
https://www.youtube.com/watch?v=wN2K4q0krjc
At 46:20, ChatGPT answers a question on behavioral finance. When asked for its sources, it admits it made them up...
Substackers and redditors probably do have higher IQs than people on other platforms, since people who prefer textual content over images and videos generally have higher IQs, but there’s no way the average redditor has an IQ of 115. A better estimate would be 100-105.
An average redditor on an epistemology/philosophy subreddit does (IQ 103).
Which AI were you using? I've found that ChatGPT 3.5 will often be factually incorrect about a general claim or inquiry, but once you go into finer-grained detail and prompt it correctly, it will become more aligned with the truth, even if you have to force it out of it. One example: I asked (not in these exact terms) how well Coon's work aligned with modern genetics, and it said he was outdated and bad, but as I mentioned specific examples from his works and compared them to modern DNA findings, it would concede that Coon's research was highly aligned with the modern findings.
Makes sense, given that LLMs are just aggregates of all the text out there with limited discernment. There are probably very many shallow, nominal debunkings of Coon and one-sentence caricature summaries of his work aimed at broad audiences, while anything that vindicates him will come from rarer sources that go into specific detail.
Fundamentally, the project of the post-liberal Right will have to evolve to constantly flex and train our flesh brains against the thoughtless silicon, where the key to retaining both our critical thinking and our creative ability is to do the work ourselves. As Luddite as it might sound, every metric and piece of research points to intellectual decline with increased use of AI, so the key to our survival might literally be hard copy literacy, as opposed to the digital hosepipe of slop.
"hard copy literacy". Nice phrase.