Anthony Moser
(He/Him) Folk Technologist • anthony.moser@gmail.com • N4EJ • http://www.BetterDataPortal.com • baker in The FOIA Bakery • publicdatatools.com • quick network graphs bit.ly/qng • http://deseguys.com • bit.ly/IneffectiveByChoice
- People believe many things that are foolish, ignorant, or wrong, because people *believe things*. LLMs do not believe things that are wrong, because *they do not have beliefs*. They generate probable strings of words based on the corpus of text used to train them.
- This is why they can't fix it: it's not broken. It's *structurally indifferent to truth* (a toy sketch of what that means follows these notes).
- I don’t believe that attribution is an intractable problem. In fact, I jotted down a modification to the transformer architecture for this tonight. Need to train it now. I’ve got the perfect dataset for hallucination: I’m a mod in r/ArtificialSentience (fml). So now I just need to find the spoons.
- Truth is not the same as attribution
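To make "probable strings of words, indifferent to truth" concrete, here is a deliberately tiny sketch: a toy bigram sampler, not any real model's code or architecture. It emits whatever word tended to follow the previous one in its training text; nothing in the loop represents whether a statement is true.

```python
import random
from collections import defaultdict

# Toy "language model": count which word follows which in a tiny corpus,
# then sample continuations by frequency. Truth never enters the process.
corpus = "the moon is made of rock . the moon is made of cheese .".split()

counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def next_word(prev):
    candidates = counts[prev]
    words = list(candidates)
    weights = [candidates[w] for w in words]
    # Sampling is weighted only by how often the pair appeared in training text.
    return random.choices(words, weights=weights)[0]

word, out = "the", ["the"]
for _ in range(6):
    word = next_word(word)
    out.append(word)
print(" ".join(out))  # "the moon is made of rock ." or "... cheese ." -- equally probable
```

Here "rock" and "cheese" are equally likely continuations because they appeared equally often in training; the sampler has no mechanism that prefers the true one, which is the sense in which the generation process is indifferent to truth.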