I’m now convinced that most holodeck malfunctions are the result of end users, who don’t know what they’re doing, using AI to generate poorly-written software they’re ill-equipped to debug or even really understand.
I mean 2 of the smartest people on the Enterprise D accidentally created an artificial lifeform and had no safeguards to prevent it from happening.
I guess ‘accidental, megalomaniacal sapience’ is technically a holodeck malfunction, lol. I wasn’t even thinking of that incident.
Really just one person, Geordi, through an accidental misphrasing of a request to the computer.
I would never have used that computer again. Or at least given it a complete overhaul. The sort of request you’d type into Dall-E shouldn’t be enough for the computer to create intelligent life.
Yeah, they could stand to at least add a, “This request will result in the creation of a sentient being, are you sure you wish to proceed?” warning.
Really, a lot of Star Trek problems could be averted with an “Are you sure you want to do this?” prompt before someone does something with the computer or the holodeck. Starfleet apparently never learns. That’s why in Prodigy-
spoiler
Janeway goes back to the Delta Quadrant in a different ship of a different but similar-looking class renamed Voyager.
Frankly, I would posit that present-day LLMs demonstrate exactly why Moriarty wasn’t necessarily even sapient.