I heard about that lawyer, leeneia; he turns up occasionally in articles in The Register as an Awful Warning about the perils of thinking that LLMs can think. Worse yet, these instances of Artificial Incompetence do a Donald* if they're challenged, and double down on their hallucinated answers: one chap went through endless trouble trying to convince ChatGPT that he hadn't died, and was met by a wall of obituaries for himself, faked up from whole cloth.

As I've remarked before hereabouts, LLMs are extremely expensive guess-the-next-word boxes. The only reason people believe they're worth anything is because they cost so much: "Garbage In, Gospel Out", an' all that.

Pattern-matching, now *that*'s a reasonable use for them, but only if someone does a full cost/benefit analysis, and there's a human in the loop whose instinct is to say "no".

* Aside: doubling down on losses at blackjack is a guaranteed losing strategy. As a one-time owner of casinos, Don the Con ought to know this.