Psychologist turncoat | history of science, history of psychology, philosophy of science | PhD from Utrecht University | Postdoc at University of Rijeka, Croatia: revenant.uniri.hr | Teaching in the cognitive sciences: cogsci.uniri.hr
On making the important distinction between popular LLMs and the more constrained use of machine learning in different scientific fields, with respect to knowledge production.
It is very important to distinguish between large language models and machine learning algorithms in general. LLMs are merely text generators that assign no truth value to the statements they produce. As such, they are inherently unreliable and therefore worthless for generating knowledge.
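For concreteness, here is a toy sketch of the contrast (purely illustrative; the bigram table and the scikit-learn example dataset are my own stand-ins, not anyone's actual system): a next-token sampler optimizes for plausible continuations with no notion of truth, while a constrained supervised model is scored against held-out ground-truth labels, so its error rate is at least measurable.

```python
import random

# Toy "LLM": samples the next word from co-occurrence statistics alone.
# Nothing in this loop checks whether the emitted sentence is true.
bigrams = {
    "the": ["drug", "trial"],
    "drug": ["reduced", "increased"],
    "trial": ["showed"],
    "showed": ["nothing."],
    "reduced": ["mortality."],
    "increased": ["mortality."],
}

def generate(start="the", max_steps=4):
    words = [start]
    for _ in range(max_steps):
        options = bigrams.get(words[-1])
        if not options:
            break
        words.append(random.choice(options))  # plausibility, not truth
    return " ".join(words)

print(generate())  # may emit "the drug reduced mortality." or its opposite

# Constrained ML in a scientific pipeline: predictions are evaluated
# against held-out ground-truth labels, so the error is quantifiable.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = make_pipeline(StandardScaler(), LogisticRegression()).fit(X_tr, y_tr)
print("held-out accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```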
Obviously this is gross, fascist aesthetic stuff. But: every time I've flown into LAX, going back to the GWB admin, there's been a framed pic of the president on the wall welcoming you to the US. This is the end-point of 'respect the office' reverence; a difference of degree, not kind.
What do folks think about this statement about AI? Would you agree (I’m especially interested in the opinions of the cognitive scientists, historians and philosophers of science among you)? Why?
AI is singularly the most paradigm-shifting and powerful tool of the last 100 years. The powerful good it is already achieving in science is astounding. There is no tool in history that cannot also be used negatively: a stick can be used to kill. The problem is not in the tool, but in the user.