This is partly why AI / LLM abolition makes no sense: if a particular model is crap, then most folks will use a different one that isn't. If all the models are crap, then most folks won't use any.
Of course, if AI / LLM models were truly crap, then abolitionists wouldn't have to work so hard.
It suggests that for the right to fully use "AI" for its purposes, it would need to fund enough writing to contradict the factual writing that disproves its points. If a Musk-owned LLM ordered to refer to "white genocide" in every answer can't find evidence of a white genocide, that says a lot.
Oh, don't get me wrong, I'd have zero issue if LLMs went away forever tomorrow. I just think it's interesting that the people who want to use them for evil will have to do a lot more work than they think to make them as evil as they want them to be.
May 15, 2025 20:19