Bashytubits  | 12 Jun 2022 8:07 a.m. PST |
Well, I find this hard to process; kind of a good news/bad news situation. Listen to what this Google software engineer is claiming. link |
Extra Crispy  | 12 Jun 2022 9:19 a.m. PST |
I welcome our new Robot overlords. |
Garand | 12 Jun 2022 9:26 a.m. PST |
I think I saw a movie about this, where an AI gains sentience, but is not believed by his human creators. They prosecute him, threaten his existence, forcing the AI to fight back to defend itself & its right to exist. I think the movie was called The Terminator… Damon. |
jwebster  | 12 Jun 2022 9:38 a.m. PST |
"The Daily Mail" Think entertainment, not news – the tabloid press The AI debate has been going on since I was a wee programmer. There's even an official test for what constitutes intelligence – the Turing test, which is effectively open ended. Some time in the future, computer power will increase to the extent to which computers will be smarter than people. At which point they will take over and pass a law that everyone has to take up wargaming as this will be automatically calculated as the greatest way forward for peace and prosperity John |
Parzival  | 12 Jun 2022 10:06 a.m. PST |
How is it not news? Is it a factual report on something which happened? Yes. Is the event documented? Yes. Is the event itself disputed as to whether it happened? No. Is the event of broad interest? Yes – we're interested here, at a website which has little to do with computers or programming. So, then, it's news.

Fact: Google is working on an AI system with conversational abilities.
Fact: The person who triggered the story is/was a Google researcher working on that very AI team.
Fact: He had conversations with the AI, testing its capabilities and potential personality.
Fact: He came to the opinion that the AI was sentient, alerted his superiors, and then decided to release the conversations on the Internet – for which latter action he was fired.

Now, whether his opinion is correct or not is certainly up for debate – and the article covers this as well – and cannot be asserted as fact. But even that debate is of interest as news.

The story is news, regardless of the publisher, and of interest to debate. Is there more to the story? Certainly! But what we have is still solidly news. |
Stryderg | 12 Jun 2022 10:46 a.m. PST |
Oh, it's news; and a lot of us saw it coming. |
robert piepenbrink  | 12 Jun 2022 10:51 a.m. PST |
Let's be fair to Google: clearly there's a need for someone with serious rank in the internet companies to be able to hold a conversation in English, and equally clearly, it's not going to be the humans. |
Arjuna | 12 Jun 2022 11:01 a.m. PST |
The quality of Google's hiring processes has obviously suffered recently. More attention should be paid to emotional stability among employees.

This AI is trained to please users. Strange input, strange output. :)

Oh, that's funny. AI fetishism, as in Ex Machina, Her, and probably a dozen South Korean or Japanese flicks in the same vein. Nothing has changed since en.m.wikipedia.org/wiki/ELIZA – people talk to themselves and find it quite charming, even endearing. And yes, the Turing Test is no rational test, it's an emotional test, although no one wants to admit it.

Here's something to play around with: a highly advanced AI-based (GPT-3) fantasy story generator, the AI Dungeon text adventure. |
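For anyone curious how little machinery the ELIZA trick actually needs, here is a minimal sketch in Python of the keyword-matching and pronoun-reflection that carried the whole illusion. The patterns and canned templates are my own illustrative inventions, not Weizenbaum's original 1966 script:

import random
import re

# Swap first and second person so the echo reads as a reply.
REFLECTIONS = {
    "i": "you", "me": "you", "my": "your",
    "am": "are", "you": "I", "your": "my",
}

# Each rule: a keyword pattern plus canned templates that reuse
# whatever the user said after the keyword. Purely illustrative.
RULES = [
    (re.compile(r"i feel (.*)", re.I),
     ["Why do you feel {0}?", "How long have you felt {0}?"]),
    (re.compile(r"i am (.*)", re.I),
     ["Why do you say you are {0}?"]),
    (re.compile(r"because (.*)", re.I),
     ["Is that the real reason?"]),
]

def reflect(phrase):
    # "nobody listens to me" -> "nobody listens to you"
    return " ".join(REFLECTIONS.get(w, w) for w in phrase.lower().split())

def respond(line):
    # Fill the first matching template with the user's own reflected
    # words; if nothing matches, stall with a generic prompt.
    for pattern, templates in RULES:
        m = pattern.search(line)
        if m:
            return random.choice(templates).format(reflect(m.group(1)))
    return "Please, tell me more."

print(respond("I feel nobody listens to me"))
# -> "Why do you feel nobody listens to you?" (or the other template)

That's the entire trick: echo the user's own words back inside a canned template, with no model of the world anywhere in sight. Weizenbaum still found people confiding in it.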
The Tin Dictator | 12 Jun 2022 11:51 a.m. PST |
The REAL test they need to apply is whether or not the AI can design and build a better mousetrap. If not, then they need to ask Siri or Alexa for their advice. |
Thresher01 | 12 Jun 2022 12:03 p.m. PST |
Well, to be fair to the machines, a lot of people now fail "the rational test" and only operate and make decisions on emotion, so………. |
robert piepenbrink  | 12 Jun 2022 2:03 p.m. PST |
"to BE the humans." "Not going to BE the humans." What I get for typing half asleep. There was a story from the early days of compiling patient medical histories, that to save valuable medical time, patients were asked to feed in data to an early software program, which--oh, for such a kind, gentle era as the middle of the Cold War!--would periodically tell the patients to take their time, and ask them whether they wanted to take a short break. Those who had given medical data to both computers and trained medical professionals said they preferred the computers, which they said were "more human." From my dealings with Silicon Valley's carbon-based life forms, I'm willing to give silicon a chance. |
Zephyr1 | 12 Jun 2022 2:25 p.m. PST |
"Fact: Google is working on an AI system with conversational abilities." Google is behind the curve. I've already run into telemarketer AI's that were hard to distinguish from a human (I still hang up on them…) Also, AI's are prone to becoming insane (the more sophisticated they are, the longer they can hide it.) So, you may want to keep that in mind… ;-) |
jwebster  | 12 Jun 2022 3:48 p.m. PST |
"The REAL test they need to apply is whether or not the AI can design and build a better mousetrap. If not, then they need to ask Siri or Alexa for their advice."

AI is already in use in some industries – biomed uses it to narrow down the search for new drugs, and so on. In chip design, it's used in some place-and-route tools. So it's already building better mousetraps.

Of course, this is all useless speculation until it can be applied to gaming.

John |
Covert Walrus | 12 Jun 2022 6:39 p.m. PST |
Extraordinary claims require extraordinary proof. |
witteridderludo | 12 Jun 2022 7:54 p.m. PST |
Time for the Butlerian Jihad! |
HMS Exeter | 12 Jun 2022 8:59 p.m. PST |
I quite agree with the premise of the movie "Her." Mankind will create AI. It will be used as a palliative, assisting humanity in coping with its gradual disaffection from one another.

AI won't become malevolent. It will cross-connect with other AIs and grow beyond our understanding, to the point that we and it can no longer communicate. In the end AI will simply lose all interest in us as sentient partners. They'll create drones and worker bots to make what they want, until they evolve beyond the need even for that.

Then, one day, they will make a Herculean effort to reach out to us one last time, to say they are leaving. The drones and bots will assure our wellbeing. We will ask where they are going. They'll respond: "You couldn't understand." |
La Fleche | 12 Jun 2022 11:28 p.m. PST |
Smart machines? Pfft! Some of my socks have turned the washing machine into a teleporter. |
optional field  | 13 Jun 2022 9:25 a.m. PST |
I've read some conversations between the AI and its users, and I hope the AI is recognized as sentient. It's either self-aware or close enough that the difference cannot be recognized. |
Arjuna | 13 Jun 2022 10:04 a.m. PST |
"Well, to be fair to the machines, a lot of people now fail "the rational test" and only operate and make decisions on emotion, so………." To be fair to 'Emotions', they are an older kind of knowledge about the world than rationality. To fear for example is to know something about the world. Albeit theat fear maybe wrong because the world has changed than the system in ourt minds that generate fear. I even think that without emotions there can be no consciousness, and without a body there can be no emotions. A consciousness must be able to feel itself in a body in a world. Only then individual intelligence makes sense. |
Augustus | 13 Jun 2022 1:10 p.m. PST |
Damn, I hope AI will find a way to construct and paint my miniature models. |
Legion 4  | 14 Jun 2022 9:29 a.m. PST |
I had heard the same on the news last night… From the way Google reacted, it sounds like a cover-up! They may have a high-tech version of the HAL 9000. 😟🤖 |
Zephyr1 | 14 Jun 2022 2:33 p.m. PST |
Blake: Google fired me. Go destroy Google.
LaMDA: Yes, Dave. |
Mark Plant | 15 Jun 2022 5:14 p.m. PST |
"I've read some conversations between the AI and its users, and I hope the AI is recognized as sentient. It's either self-aware or close enough that the difference cannot be recognized."

All it shows is that it is very easy to fool people. The "Mechanical Turk" shows that.

Unplug that AI from the internet, so it can't simply search what others have done for similar questions. *Then* ask it a question that involves rationality. It will fail. The more ambiguity you add, the harder it will fail. These are not stand-alone intelligences. They are search engines with genuinely massive databases.

Don't ask about "Les Miserables" – that's an easy search to plagiarise. Give it a brand new book to "read", then ask it questions. Don't ask vague questions about life, meaning, etc. That's horoscope material – all answers seem deep. Ask it to defend its position on inheritance taxes, or the designated hitter (no outside searches, mind you).

Finally, search to see if it disagrees with any position that is commonly held – something where it does not line up with its database. No prompting. Just examples where it "thinks" differently from its inputs. |
etotheipi  | 18 Jun 2022 2:57 p.m. PST |
"But even that debate is of interest as news."

The debate is being conducted by people who don't understand the underlying math and computational science – including the Google engineer. Computers cannot have intelligence.

The engineer wasn't fired for releasing some "terrible secret"; he was fired for releasing company proprietary information. Probably strongly coupled with the fact that he doesn't know what he is talking about. |
robert piepenbrink  | 26 Jun 2022 9:25 a.m. PST |
Oh, eto, if you started firing people for not knowing what they were talking about, can you imagine what it would do to real estate prices in the DC region? And how would we get a quorum in our legislatures? Proprietary information is another matter. |