|
Subject: RE: Artificial Intelligence - what could go wrong? From: MaJoC the Filk Date: 06 Feb 26 - 08:31 AM Related comments on an Elreg article about Raspberry Pi price rises (tyops in original):
.... which drew the response:
.... I'll leave it there awhile. |
|
Subject: RE: Artificial Intelligence - what could go wrong? From: MaJoC the Filk Date: 06 Feb 26 - 07:13 AM More straws in the wind. Just seen on BBC Red Button: Amazon shares fall as it joins Big Tech AI spending spree
The stock market's reaction: Amazon's shares drop over 11% in after-hours trading. Even the Wall Street gamblers are noticing the distinct lack of the Killer App we've all been promised, let alone a return on investment. |
|
Subject: RE: Artificial Intelligence - what could go wrong? From: Bill D Date: 02 Feb 26 - 09:46 AM That conversation is with Jon Stewart being serious. |
|
Subject: RE: Artificial Intelligence - what could go wrong? From: Bill D Date: 02 Feb 26 - 09:44 AM Long but fascinating conversation with founder of AI |
|
Subject: RE: Artificial Intelligence - what could go wrong? From: Stilly River Sage Date: 01 Feb 26 - 12:47 PM Artificial intelligence researchers hit by flood of ‘slop’ Conferences restrict use of LLMs after surge of low-quality AI-generated papers and reviews Artificial intelligence researchers are grappling with a problem core to their field: how to stop so-called “AI slop” from damaging confidence in the industry’s scientific work. AI conferences have rushed to restrict the use of large language models for writing and reviewing papers in recent months after being flooded with a wave of poor AI-written content. Further down the article is "A tell-tale sign is when papers contain hallucinated references in the bibliography, or figures that are wrong, said Dietterich. These users are then banned from submitting papers to arXiv for a while, he added." |
|
Subject: RE: Artificial Intelligence - what could go wrong? From: The Sandman Date: 01 Feb 26 - 05:17 AM "This interesting essay by James O'Sullivan was shared by an academic friend who writes about AI in the classroom." I disagree: I find the article opinionated and subjective. But we are all entitled to different opinions. |
|
Subject: RE: Artificial Intelligence - what could go wrong? From: Donuel Date: 30 Jan 26 - 05:39 PM The slop will still have better spelling than the flesh-and-blood moron activists. A minor point, but I just saw a Trump commercial on CNN that uses AI copy of Trump's voice. I'm surprised the fine print informed people of this. |
|
Subject: RE: Artificial Intelligence - what could go wrong? From: Stilly River Sage Date: 30 Jan 26 - 12:44 PM This interesting essay by James O'Sullivan was shared by an academic friend who writes about AI in the classroom. I am so f**king sick of AI slop Extracted from it: What is alarming, however, is the sheer willingness of people to put their own names to any old slop the machine produces. And further down: The internet was, in its most idealistic (and yes, maybe naive) conception, a sprawling parlour for human conversation and the exchange of genuine thought. That vision is effectively dead. Open LinkedIn or Reddit (or X, if you really want to wind yourself up) and you will see streams of the same beige, hallucinatory text bearing the chirpy, predictive cadence of ChatGPT, generated by users who could not be bothered to read the content they are putting their name to. They enter a prompt and paste the result, engaging in a pantomime of interaction that benefits no one but the platform’s engagement metrics. It is a hall of mirrors where machines talk to machines while humans look on, increasingly alienated from the very networks built to connect them. He concludes "I can appreciate the technology for what it is, but I am finding it increasingly difficult to forgive the laziness of the people using it." Amen. |
|
Subject: RE: Artificial Intelligence - what could go wrong? From: MaJoC the Filk Date: 23 Jan 26 - 10:03 AM > well, you do get pretty pictures!!! As it happens, Sandra, I saw an article some years ago about automated uglification using machine learning: teach the ML on pictures from one or more of the horror mags, then get it to "enhance" real-life photos, re-rendering them in that style. The most horrifying bit was that it was easy to tell that the image of a female English politician (Theresa May?) had been processed this way, but I found it difficult to tell that the one of Agent Orange wasn't the original. |
|
Subject: RE: Artificial Intelligence - what could go wrong? From: Donuel Date: 22 Jan 26 - 06:05 AM What's next? People might work. |
|
Subject: RE: Artificial Intelligence - what could go wrong? From: Sandra in Sydney Date: 22 Jan 26 - 02:06 AM well, you do get pretty pictures!!! Tour website's AI sends visitors to Tasmanian sites that do not exist An AI-generated article on a travel booking website has sent tourists to a remote location in Tasmania's north-east, looking for hot springs that do not exist. Australian Tours and Cruises has admitted the AI technology it uses to create content and articles to help drive bookings has "completely messed up". What's next? The company has said it will review all of its AI-generated content, which is produced by a third party ... Mr Hennessy said that while all posts were normally reviewed before being posted, some had been made public by mistake while he was out of the country ... |
|
Subject: RE: Artificial Intelligence - what could go wrong? From: MaJoC the Filk Date: 18 Jan 26 - 06:11 AM Bingo: the RISKS Digest site at Newcastle is back up to date again. Herewith two consecutive articles from RISKS 34:83: Capability Maturity Models and generative artificial intelligence (see above) The AI boom is based on a fundamental mistake (The Verge)
|
|
Subject: RE: Artificial Intelligence - what could go wrong? From: MaJoC the Filk Date: 17 Jan 26 - 01:16 PM Depends what you call "wisdom", and for that matter what "AI" means this week in *this* context for *that* person.
Only the last step is achievable by LLMs, imperfectly at that. When they mangle it, that's anthropomorphised as "hallucinations"; when they upchuck the original data, it's called "plagiarism". We haven't advanced much beyond the state of "I could eat alphabet soup and *shit* better lyrics". As for "AI", that's just a couple of letters that Marketing slap on anything which eats bits and |
|
Subject: RE: Artificial Intelligence - what could go wrong? From: Donuel Date: 15 Jan 26 - 09:34 AM How many years of human growth does it require to achieve wisdom? While the age at which it is achieved may vary, it is not consistently infallible. For anthropomorphic reasons, let's assume that with constant study it takes 50 years for a human to acquire wisdom, and apply the same standard to AI for the sake of discussion. Our civilizations have had many saviors of mankind and enlightened ones. Where will AI be in 50 years? It will still face the randomness of the quantum realm and risks beyond expectation. Is unquestioned alliance and allegiance ever a smart thing? Do deep thoughts like these even apply to AI? We have historically held certain questions to be in the domain of God or gods. Will we confer that status upon a 'wise' AI? |
|
Subject: RE: Artificial Intelligence - what could go wrong? From: MaJoC the Filk Date: 14 Jan 26 - 02:03 PM Meanwhile, firms are beginning to wonder aloud about the promised Return On Investment. From Comments on Zuck forms Meta Compute to pave the planet with 'hundreds of gigawatts' of AI datacenters:
|
|
Subject: RE: Artificial Intelligence - what could go wrong? From: MaJoC the Filk Date: 13 Jan 26 - 12:20 PM And the fly in the oinkment: The world is one bad decision away from a silicon ice age Venezuela today, Taiwan tomorrow? This might be the last good year for buying hardware
|
|
Subject: RE: Artificial Intelligence - what could go wrong? From: MaJoC the Filk Date: 13 Jan 26 - 11:46 AM I'm sorry Dave, I can't think that. |
|
Subject: RE: Artificial Intelligence - what could go wrong? From: MaJoC the Filk Date: 13 Jan 26 - 11:44 AM I strongly doubt there'll be any substantive advances till they know what they're doing (see Rob Slade's comment, quoted above), and I strongly doubt that they'll have time to get that right. As it is, the offending firms are all running at a heavy loss.* Once someone needs to pay some real bills in genuine money, end-user bills will go up†, the market will relax back to normal, and LLMs will go back to the simple pattern-matching tasks where they started (and where they do provably and reliably work). Don't make the category error of thinking that what LLMs do is thinking: all they've done is win at the Imitation Game. I read recently that LLMs simulate the memory part of the brain, and that the reasoning part of the brain is entirely elsewhere. Someone needs to go back to first principles. * A business lack-of-strategy pioneered by Netscape. † Orders of magnitude are being suggested. |
|
Subject: RE: Artificial Intelligence - what could go wrong? From: MaJoC the Filk Date: 13 Jan 26 - 11:09 AM Markets are inherently unstable,* run as they are on whim and panic. Every time there's a country-shaking implosion, laws are put in place to control the worst swings downward. Trouble is, said laws also tend to slightly impede upward swings as well, even where government policy is not actively counterproductive†; so the lobbyists push to have the safeguards relaxed because "it'll be different *this* time". Surprise: it won't be. * The techie in me keeps saying "chaotic relaxation oscillator". † But that's a different argument, which need not detain us here. |
|
Subject: RE: Artificial Intelligence - what could go wrong? From: Aethelric Date: 13 Jan 26 - 10:27 AM MaJ, I'm sure it does not work as advertised - yet. But it is getting better very fast. I do not think the market will implode, although there will be wobbles as users have to change suppliers, as some fail and others thrive. Hmm, I think that's what you said! |
|
Subject: RE: Artificial Intelligence - what could go wrong? From: MaJoC the Filk Date: 13 Jan 26 - 10:11 AM .... Such predictions assume, Aethelric, that Artificial Incompetence actually works as advertised, and to the extent it's hyped up to. Personally I doubt it can (but I've made that argument before). That aside, all it'll take is one *prangissimo* sufficiently large to unbalance the stock market, and sufficiently egregious to get through the cloud of Natural Incompetence in board rooms, and the market for this particular brand of snake oil will implode. But don't worry --- there'll be another brand along next week. |
|
Subject: RE: Artificial Intelligence - what could go wrong? From: Donuel Date: 13 Jan 26 - 09:57 AM Filk, you should write an article for the Atlantic! People want to know what they are sacrificing for this outrageous investment, at the cost they are expected to pay. We already know the foibles of AI mistakes and the great breakthroughs in AI medicine. |
|
Subject: RE: Artificial Intelligence - what could go wrong? From: Aethelric Date: 13 Jan 26 - 09:46 AM I really could not give a damn about the investors in AI losing their money. Life and AI will go on. But the ones who will suffer are the general public, due to the effect on the job market. Some well-paid jobs will be created, but their numbers will be dwarfed by the rise in unemployment in lower and mid-range jobs. How AI Will Replace White Collar Jobs by 2030: Timeline and Predictions |
|
Subject: RE: Artificial Intelligence - what could go wrong? From: MaJoC the Filk Date: 13 Jan 26 - 09:31 AM > How many years will it take to recoup the investment > in AI? There's two related ways of looking at this. One is that many (possibly most) of the LLM investments made are actually "circular financing": swapping shares between companies before the ink's dry. ("My parents don't give each other Christmas presents any more .... they just swap fivers.") One of the initial offenders, one Sam Altman, should be an expert in this financial levitation trick, as his previous (*ahem*) scheme was cryptocurrencies.* Everybody is now looking nervously at everybody else before cashing in; nobody wants to be first, everybody wants to be second, and whoever's third is wiped out. With luck, most of the alleged monies will evaporate like so much cryptocurrency speculation, and the hedge funds paying my pension won't be hurt too much. But .... The other way of looking at this was neatly encapsulated by Max in the zeroth episode of Dark Angel (quote approximate):
* Guess where the first batch of GPUs for LLMs came from? |
|
Subject: RE: Artificial Intelligence - what could go wrong? From: Donuel Date: 12 Jan 26 - 04:55 PM How many years will it take to recoup the investment in AI? Any profitability of even corrupt AI is probably desperately needed. |
|
Subject: RE: Artificial Intelligence - what could go wrong? From: MaJoC the Filk Date: 12 Jan 26 - 10:47 AM Punchline to Rob Slade's msg mentioned above (oops):
|
|
Subject: RE: Artificial Intelligence - what could go wrong? From: MaJoC the Filk Date: 12 Jan 26 - 10:26 AM Meanwhile, back at the subject, there's a seriously long article by Rob Slade, towards the end of Risks Digest 34.83*: Capability Maturity Models and generative artificial intelligence [ The first three steps (of five) in his framework are "chaos", "repeatable" and "documented". Slade argues, in detail, that LLMs aren't past the first step, in which "[w]e don't know what we're doing. Not really". ]
* Apologies for nonstandard site. The official RISKS site has been having hiccups for some months. |
|
Subject: RE: Artificial Intelligence - what could go wrong? From: MaJoC the Filk Date: 12 Jan 26 - 09:59 AM > So why use it when you KNOW it is liable to be incorrect. Because *shiny*.† I've seen recently (ElReg, I *think*, but I can't spot it atm) a report of a survey that suggests people knowingly use low-grade news outlets because they prefer the flavour. I hereby dub this phenomenon "the MacDonalds Effect". Even worse, from my trade: Most devs don't trust AI-generated code, but fail to check it anyway (Summary: checking is a higher-stress activity than coding, and said devs don't have time to do it properly. See also the Comments.) † The old way to pronounce that was "GIGO: Garbage In, Gospel Out", because, dammit, said garbage has been filtered through a very expensive machine. |
|
Subject: RE: Artificial Intelligence - what could go wrong? From: Donuel Date: 07 Jan 26 - 06:43 PM Excellent sarcasm Aethelric |
|
Subject: RE: Artificial Intelligence - what could go wrong? From: Stilly River Sage Date: 07 Jan 26 - 02:30 PM It drives me nuts to hear people preface information they're about to share with "XYZ AI may be incorrect, but this is what it tells me." So why use it when you KNOW it is liable to be incorrect? |
|
Subject: RE: Artificial Intelligence - what could go wrong? From: Aethelric Date: 07 Jan 26 - 11:45 AM I live in the UK. Nobody ever lies about anything - ever. |
|
Subject: RE: Artificial Intelligence - what could go wrong? From: Donuel Date: 07 Jan 26 - 05:29 AM I live in America, so I am accustomed to being lied to. |
|
Subject: RE: Artificial Intelligence - what could go wrong? From: Sandra in Sydney Date: 06 Jan 26 - 10:10 PM AI hallucinations and the dilemma of false or misleading information The strangest thing recently happened involving a lying AI chatbot. It was at the end of November when I was reporting on gamified cryptocurrency and the ethics of allowing kids to play. I needed a response from a company called Aavegotchi, given they were the crypto game in question. Normally a company will take at least a few hours to respond to questions, sometimes even a day or two. But with Aavegotchi, a company that appears to be based in Singapore, the response came back in under 10 seconds, signed off as Alex Rivera, the Community Liaison at Aavegotchi. The response was detailed and physically impossible to write so quickly. Not to mention the fact that it allowed no time for an executive to sign off on the response before pressing send. And so naturally, I asked Alex Rivera if they were an AI bot. This is what came back: (read on!) |
|
Subject: RE: Artificial Intelligence - what could go wrong? From: Donuel Date: 27 Dec 25 - 10:41 PM The NSA has the ability to record nearly every keystroke in the country but does not have the ability to analyze it all. With AI they could have eyeballs on everyone. |
|
Subject: RE: Artificial Intelligence - what could go wrong? From: Donuel Date: 08 Dec 25 - 07:25 PM Glitches are large and small, but the evolution into stage 2 of AI will be the advent of robots to do domestic or factory jobs. It will add another car payment to households. Even now Walmart sells a $20,000 Chinese robot. |
|
Subject: RE: Artificial Intelligence - what could go wrong? From: Stilly River Sage Date: 06 Dec 25 - 12:06 PM A friend uses the app Perplexity - he once sent me some text for a social media post, and I had trouble making it align with our site's wording. A search on the text itself only turned up AI responses, so I asked if he'd sent AI text. He said yes, but that he'd edited it. Even then, it was distinctive for its lack of citations or quotes, and I wasn't able to use it. As a writer I find this type of program an inferior substitute for having a real person write text. And right now Perplexity is being sued by the New York Times and the Chicago Tribune over the way in which it harvests stuff from their sites and sometimes makes up stuff it attributes to them. |
|
Subject: RE: Artificial Intelligence - what could go wrong? From: Aethelric Date: 06 Dec 25 - 11:55 AM I have used LibreOffice for years - originally in Windows and now on a MacBook. I use it mainly for docs and spreadsheets. It's completely free. I thoroughly recommend it. There are extensions you can add if you want AI help. I guess Collabora may be better if working with others. I do use AI quite a bit - it does web searches much faster than I can, but it often makes mistakes. It told me my car headlights were all H4 bulbs - I bought two, then found out that it should be two H7s and two H3s. To be fair, it did apologise, which is more than many humans do. |
|
Subject: RE: Artificial Intelligence - what could go wrong? From: Nigel Parsons Date: 06 Dec 25 - 07:08 AM Artificial Intelligence serves a real need. There's so little of the real stuff around ;) |
|
Subject: RE: Artificial Intelligence - what could go wrong? From: MaJoC the Filk Date: 06 Dec 25 - 07:00 AM From an Elreg commentard: French AI
|
|
Subject: RE: Artificial Intelligence - what could go wrong? From: Bill D Date: 02 Dec 25 - 05:34 PM I've used LibreOffice for several years. Free and compatible with most anything I need. Libreoffice |
|
Subject: RE: Artificial Intelligence - what could go wrong? From: Stilly River Sage Date: 02 Dec 25 - 02:17 PM I turn off AI features on sites I use, and if I have programs that use it, the settings are as shut down as possible. I threatened to cancel my Microsoft Office account in order to get the offer to subscribe at the old rate without the AI equivalent of Mr. Paperclip. Here is a review from ZDNet. I found a powerful Microsoft Office alternative that doesn't push AI - and it's free The rest is at the link. |
|
Subject: RE: Artificial Intelligence - what could go wrong? From: MaJoC the Filk Date: 02 Dec 25 - 01:07 PM From bigthink.com, via RISKS Digest 34:79: Why vibe physics is the ultimate example of AI slop
As usual, the late great Douglas Adams made a suspiciously similar point when he invented the Electric Monk, the function of which was to believe things on its users' behalf. |
|
Subject: RE: Artificial Intelligence - what could go wrong? From: Donuel Date: 01 Dec 25 - 06:01 AM Al, the race card has gone digital. |
|
Subject: RE: Artificial Intelligence - what could go wrong? From: Big Al Whittle Date: 30 Nov 25 - 04:05 PM Due to reasons beyond my competence, I swapped my car twice within four months. Of course the DVLA was out of its depth. I ended up on one of their chatlines, convinced by the opaque nature of its responses that I was trying to converse with a computer. I wrote: Could you please put me in contact with a sentient human being? The response came - I really resent that..... |
|
Subject: RE: Artificial Intelligence - what could go wrong? From: pattyClink Date: 29 Nov 25 - 10:55 PM I got 3/4 of the way through what I thought was a good blues piece on youtube before looking at who the artist was and finding some bullshit AI source instead of a performer. Sickening. Don't click on this stuff, it encourages it. |
|
Subject: RE: Artificial Intelligence - what could go wrong? From: Mary G Date: 29 Nov 25 - 07:14 PM More than the likes of us can imagine. Cyber crime wiping out our banking system, sexual harassment, impersonation, international warfare, threats, malpractice of every sort. We are not safe. On the bright side: incredible inventions, advancements in agriculture and medicine, potential for good (and bad) in education. Combined with robotics, better care of patients in nursing homes - getting them to toilets, into wheelchairs (or into exoskeletons where they can walk with assistance). Personalized medicine, personalized nutrition. Constant medical testing. Cures or help for very rare diseases. Looking at what other countries do in terms of herbal medicine. Psychiatric care. Cutting expenses on travel, food, etc. Growing food on demand in small quantities on porches and the like. Inventions that we cannot believe possible. But which will win? |
|
Subject: RE: Artificial Intelligence - what could go wrong? From: Donuel Date: 29 Nov 25 - 05:19 PM The cost of feeding and watering AI is enormous. In 2023, data centers globally used an estimated 140 billion liters (about 37 billion gallons) of water for cooling, with AI's demand a significant and growing contributor. Direct water usage by U.S. data centers alone was around 17.5 billion gallons, with an indirect water footprint from electricity generation estimated at 211 billion gallons. |
|
Subject: RE: Artificial Intelligence - what could go wrong? From: Sandra in Sydney Date: 24 Nov 25 - 09:14 AM Australia's Macquarie Dictionary (our version of the Oxford or Merriam Webster) has just released the word of the year - 'AI slop' crowned word of the year 2025 in Macquarie Dictionary's committee and people's choice categories ... The word refers to low-quality content created by generative AI which often contains errors and is not requested by the user. A technology innovation expert says AI slop is "making its way upstream into people’s media diets"... the article also has a link to 2024 word of the year! |
|
Subject: RE: Artificial Intelligence - what could go wrong? From: Donuel Date: 24 Nov 25 - 06:13 AM country music digital tune |
|
Subject: RE: Artificial Intelligence - what could go wrong? From: MaJoC the Filk Date: 23 Nov 25 - 06:15 PM .... then they came for the country-music musicians: AI music has finally beaten hat-act humans, but sounds nothing like victory Top of the slops signposts the undiscovered country for an industry
As usual, reading the comments is seriously worth it: there's considerable wise discussion of the enshittification of the alleged popular music charts in times past, which Artificial Banality has now merely turbocharged. Spot the accountancy firm. |
|