Artificial Intelligence - what could go wrong?
Subject: RE: Artificial Intelligence - what could go wrong?
From: MaJoC the Filk
Date: 08 May 26 - 06:52 AM

Verily. Dawkins hasn't realised (or has carefully forgotten) that there is no such thing as a Turing Test: it's the Imitation Game, and the computer system has won.
Subject: RE: Artificial Intelligence - what could go wrong?
From: MaJoC the Filk
Date: 16 May 26 - 04:59 PM

Bingo: I've just reread Vernor Vinge's A Deepness in the Sky. One of the plot devices is that the Emergents have a mind-control process known as Focus: basically, they've worked out how to control a brain virus known as "mindrot", so that each specialist becomes so hyperfocussed on his or her speciality that the Emergents can treat Focussed people as machines. One thing that doesn't turn up in the Wikipedia page is that software hacked over by Focussed programmers becomes both unintelligible to normals and incredibly fragile.*

And *that*'s what could go wrong with using Artificial Incompetence to write software: the machine could say, with total accuracy and no condescension, "I could explain what this software unit does, and how it does it, but you'd be incapable of understanding the explanation, let alone finding any bugs in it."

This has already happened, even with pre-AI tech. A brute-force search was made for a solution to a certain chess-ending problem, and the answer was a specific number,† but no human can understand what's objectively different after (eg) move 198 from the starting position.

We have been warned. I commend the book to y'all. Good hard SF, with much to say.

* One of the recurring themes of the book is "excessive optimisation considered harmful".

† No, not 42. I think it's somewhere in The Science of Discworld, but the index is silent. Grrr.