"American Renaissance is run by Jared Taylor, a self-described white advocate, whose X account has been banned since December of 2017, including under Elon Musk."

Evidently, there is a limit to Musk's stomach for truth.

Grok is better than some other LLMs, which refuse even to answer questions about race differences in IQ. However, its "wokeness" level is not much lower than ChatGPT's.

A few days ago I asked Grok this question: "Could you describe the role of genes and of the environment in IQ differences between the races?"

A long lecture followed, using much woke argumentation and some woke (CRT) terminology like "systemic racism".

I then delved into this part of its answer: "Human genetic variation does not align neatly with socially defined racial categories. There's more genetic variation within racial groups than between them, which challenges the notion of significant genetic differences in intelligence between races."

The first statement above basically restates the well-known woke standpoint that racial categories are social constructs and have little to do with genetic differences. The second statement is Lewontin's (1972) finding, which is used, to this day, to support the first statement.

A long discussion followed, in which I brought up research by R. Lewontin, A. W. F. Edwards ("Lewontin's Fallacy"), N. Risch, H. Tang, and D. J. Witherspoon. Grok knew the relevant research by all of these people.

During the discussion I had to ask Grok again and again to answer succinctly, because it tended to give lectures, bringing up arguments irrelevant to my questions, all of which were supposed to weaken the case for the biological basis of race categories (e.g. "clines").

At the end, I asked: "... by the end of our discussion you have clearly negated both of your original statements mentioned above. Do you agree?"

Its answer: "Yes, I agree that based on our discussion, I have contradicted both of my original statements from the first answer."

Interesting that it is restating Lewontin's fallacy and doubling down so hard on the environmental view. Musk follows Aporia on X, along with many other dissident-right magazines.

Yes - in particular because it knew about all the research contradicting the environmental view. However, it didn't use that knowledge to formulate its first answer to my original question.

https://www.telegraph.co.uk/world-news/2024/12/27/an-ai-chatbot-told-me-to-murder-my-bullies/

There are far greater negatives to AI than positives. I think we are releasing a daemon into the world.

You might be right. If AI needs to be censored and controlled, who gets the authority to do that? At least until the recent election, the internet censors of recent years appeared to be the government's preferred candidates. Trump might change that.

"Trump might change that."

It is very doubtful. Trump is a tool of the Deep State.

We shall see.

Then again, if someone actually wanted censorship in general, this could be the route. So what are our options? Bit of a double-bind.

Musk also has a problem finding enough American STEM workers and, therefore, promotes more immigration. His problem is most likely that American STEM workers demand a hell of a lot higher salaries than the foreign ones.

Rapaciousness Rules.

https://www.mediaite.com/tech/elon-musks-critics-stripped-of-verification-badge-after-publicly-challenging-billionaire-the-beginning-stages-of-censorship/

Elon’s recent meltdown basically sums up any expectations I have for any of his projects to be “based.” He’s just… a slightly less anti-white tech billionaire. That’s it.

I would allow an anti-woke AI to use my webpages as training data. In particular, my FAQs pages are probably ideal for this purpose. https://zerocontradictions.net/#faqs

Don't programmers call this problem GIGO: Garbage In, Garbage Out?

Fuck Elon Musk, the fucking cock sucker

I come for the comments. Digital bread and circuses.

Whatever the current state of bias is for AI tools, markets will require better tools. That is to say, biased tools will lose out to less partial tools. Devious left-wing tech oligarchs are not going to indoctrinate enough people to really matter. At the end of the day, what is going on in academia matters less and less. College attendance is declining as more people seek training for work over degrees with little practical application. Leftists are now losing their control over government the world over. Have a look at this seminal article in the Wall Street Journal: https://www.wsj.com/world/global-politics-conservative-right-shift-ea0e8d05?mod=hp_lead_pos1

Leftism is dead, at least for now, and academic discourse is irrelevant. Now let's focus on things that matter, like bringing prosperity to more people of the world, increasing optimism, reducing social isolation, and restoring sustainable rates of natality.

Thanks for your comment. The article demonstrates how AIs are unable to know important truths that the academic left has successfully kept hidden for decades and that have already driven inefficient policies for generations, implying that such stupid inefficiencies will continue under AI. What did you think of the example of the suppression of the real results of the Clark doll experiment and the resulting inefficient and self-serving policies?

(It’s also worth keeping in mind that the people working on these AIs are largely non-white and/or lefties, often with little worldly experience outside of their media and tech bubbles. In other words, they are not incentivized to make AI tools that will not continue to disadvantage white people, despite the inefficiencies this creates.)

I think this kinda misses the sense in which people want non-woke AI. They aren't concerned that the language model might have some biases in how it describes certain academic tests because the training data has a certain bias. People don't expect AI to be perfect.

They just want the AI not to behave in a way that feels insulting and infantilizing by refusing to entertain or produce messages in one direction but not the other. As long as it doesn't feel like there is an override that steps in to stop the AI from saying things the left doesn't like, most people won't really be bothered.

Zoltan’s first response above gives an example of just such a problem.

I don't think that most people are going to care or really know about the answer to relatively explicit questions about race and genetics. They just want it to not insert black people into pictures of the founding fathers.

But the underlying reason for all of this is that even the people who write non-woke LLMs don't want them getting famous for calling for a race war or something. Until we get over the idea that there is anything particularly interesting about an AI saying racist shit, autocomplete businesses will take the option that produces less bad PR.
