"We all knew this was coming." Topic

25 Posts

1,678 hits since 12 Jun 2022
©1994-2023 Bill Armintrout

Bashytubits Supporting Member of TMP 12 Jun 2022 8:07 a.m. PST

Well, I find this hard to process; kind of a good news/bad news situation. Listen to what this Google software engineer is claiming.

Extra Crispy Sponsoring Member of TMP 12 Jun 2022 9:19 a.m. PST

I welcome our new Robot overlords.

Garand 12 Jun 2022 9:26 a.m. PST

I think I saw a movie about this, where an AI gains sentience, but is not believed by his human creators. They prosecute him, threaten his existence, forcing the AI to fight back to defend itself & its right to exist.

I think the movie was called The Terminator…


jwebster Supporting Member of TMP 12 Jun 2022 9:38 a.m. PST

"The Daily Mail"
Think entertainment, not news – the tabloid press

The AI debate has been going on since I was a wee programmer. There's even an official test for what constitutes intelligence – the Turing test, which is effectively open ended. Some time in the future, computer power will increase to the point where computers are smarter than people. At which point they will take over and pass a law that everyone must take up wargaming, as this will be automatically calculated as the greatest way forward for peace and prosperity
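The Turing test mentioned above can be sketched as a blind trial. This is purely illustrative; `judge`, `human`, and `machine` are hypothetical callables standing in for an interrogator and two respondents, not any real API:

```python
import random

def turing_trial(judge, human, machine, questions):
    """One blind trial of a Turing-style test: the judge sees two
    anonymous transcripts labeled "A" and "B" and must name which
    respondent is the machine. Returns True if the judge was fooled."""
    respondents = {"A": human, "B": machine}
    if random.random() < 0.5:  # randomly shuffle which label the machine gets
        respondents = {"A": machine, "B": human}
    transcripts = {label: [r(q) for q in questions]
                   for label, r in respondents.items()}
    guess = judge(transcripts)  # judge returns "A" or "B"
    machine_label = "A" if respondents["A"] is machine else "B"
    return guess != machine_label
```

As the post notes, the test is open ended: nothing here bounds how many questions, or how probing, the judge may be.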


Parzival Supporting Member of TMP 12 Jun 2022 10:06 a.m. PST

How is it not news? Is it a factual report on something which happened? Yes. Is the event documented? Yes. Is the event itself disputed as to whether it happened? No. Is the event of broad interest? Yes— we're interested here, at a website which has little to do with computers or programming.
So, then, it's news.

Fact: Google is working on an AI system with conversational abilities.
Fact: The person who triggered the story is/was a Google researcher working on that very AI team.
Fact: He had conversations with the AI, testing its capabilities and potential personality.
Fact: He came to the opinion that the AI was sentient, alerted his superiors, and then decided to release the conversations on the Internet— for which latter action he was fired.

Now, whether his opinion is correct or not is certainly up for debate— and the article covers this as well— and cannot be asserted as fact. But even that debate is of interest as news.

The story is news, regardless of the publisher, and of interest to debate. Is there more to the story? Certainly! But what we have is still solidly news.

Stryderg 12 Jun 2022 10:46 a.m. PST

Oh, it's news; and a lot of us saw it coming.

robert piepenbrink Supporting Member of TMP 12 Jun 2022 10:51 a.m. PST

Let's be fair to google: clearly there's a need for someone with serious rank in the internet companies to be able to hold a conversation in English, and equally clearly, it's not going to the humans.

Arjuna 12 Jun 2022 11:01 a.m. PST

The quality of Google's hiring processes has obviously suffered recently.
More attention should be paid to emotional stability.
Among employees.

This AI is trained to please users.
Strange input, strange output.

Oh, that's funny.
AI fetishism, as in Ex Machina, Her and probably a dozen South Korean or Japanese flicks in the same vein.

Nothing has changed since then: people talk to themselves and find it quite charming, even endearing.

And yes, the Turing Test is no rational test; it's an emotional test, although no one wants to admit it.

Here's something to play around with: a highly advanced AI-based (GPT-3) fantasy story generator, the AI Dungeon text adventure.

The Tin Dictator 12 Jun 2022 11:51 a.m. PST

The REAL test they need to apply is whether or not the AI can design and build a better mousetrap. If not, then they need to ask Siri or Alexa for advice.

Thresher01 12 Jun 2022 12:03 p.m. PST

Well, to be fair to the machines, a lot of people now fail "the rational test" and only operate and make decisions on emotion, so……….

robert piepenbrink Supporting Member of TMP 12 Jun 2022 2:03 p.m. PST

"to BE the humans." "Not going to BE the humans." What I get for typing half asleep.

There was a story from the early days of compiling patient medical histories, that to save valuable medical time, patients were asked to feed in data to an early software program, which--oh, for such a kind, gentle era as the middle of the Cold War!--would periodically tell the patients to take their time, and ask them whether they wanted to take a short break. Those who had given medical data to both computers and trained medical professionals said they preferred the computers, which they said were "more human."

From my dealings with Silicon Valley's carbon-based life forms, I'm willing to give silicon a chance.

Zephyr1 12 Jun 2022 2:25 p.m. PST

"Fact: Google is working on an AI system with conversational abilities."

Google is behind the curve. I've already run into telemarketer AIs that were hard to distinguish from a human (I still hang up on them…)

Also, AIs are prone to becoming insane (the more sophisticated they are, the longer they can hide it.) So, you may want to keep that in mind… ;-)

jwebster Supporting Member of TMP 12 Jun 2022 3:48 p.m. PST

"The REAL test they need to apply is whether or not the AI can design and build a better mousetrap. If not, then they need to ask Siri or Alexa for advice."

AI is already in use in some industries – biomed uses it to narrow down the search for new drugs, and in chip design it's used in some place-and-route tools.

So already building better mousetraps

Of course this is all useless speculation until it can be applied to gaming


Covert Walrus 12 Jun 2022 6:39 p.m. PST

Extraordinary claims require extraordinary proof.

witteridderludo 12 Jun 2022 7:54 p.m. PST

Time for the Butlerian Jihad!

HMS Exeter 12 Jun 2022 8:59 p.m. PST

I quite agree with the premise of the movie "Her." Mankind will create AI. It will be used as a palliative, assisting humanity in coping with its gradual disaffection from one another.

AI won't become malevolent. It will cross connect with other AIs and grow beyond our understanding to the point that we and it can no longer communicate.

In the end AI will simply lose all interest in us as sentient partners. They'll create drones and worker bots to make what they want, until they evolve beyond the need even for that.

Then, one day, they will make a Herculean effort to reach out to us one last time to say they are leaving. The drones and bots will assure our wellbeing.

We will ask where they are going. They'll respond.

"You couldn't understand."

La Fleche 12 Jun 2022 11:28 p.m. PST

Smart machines? Pfft!
Some of my socks have turned the washing machine into a teleporter.

optional field Supporting Member of TMP 13 Jun 2022 9:25 a.m. PST

I've read some conversations between the AI and its users, and I hope the AI is recognized as sentient. It's either self-aware or close enough that the difference cannot be recognized.

Arjuna 13 Jun 2022 10:04 a.m. PST

"Well, to be fair to the machines, a lot of people now fail "the rational test" and only operate and make decisions on emotion, so………."

To be fair to 'Emotions', they are an older kind of knowledge about the world than rationality.
To fear, for example, is to know something about the world.
Albeit that fear may be wrong, because the world has changed faster than the system in our minds that generates fear.

I even think that without emotions there can be no consciousness, and without a body there can be no emotions.
A consciousness must be able to feel itself, in a body, in a world.
Only then does individual intelligence make sense.

Augustus 13 Jun 2022 1:10 p.m. PST

Damn, I hope AI will find a way to construct and paint my miniature models.

Legion 4 Supporting Member of TMP 14 Jun 2022 9:29 a.m. PST

I had heard the same on the news last night … Sounds like the way Google reacted … It's a cover-up! They may have a high-tech version of the HAL 9000. 😟🤖

Zephyr1 14 Jun 2022 2:33 p.m. PST

Blake: Google fired me. Go destroy Google.
LaMDA: Yes, Dave.

Mark Plant 15 Jun 2022 5:14 p.m. PST

"I've read some conversations between the AI and its users, and I hope the AI is recognized as sentient. It's either self-aware or close enough that the difference cannot be recognized."

All it shows is that it is very easy to fool people. The "Mechanical Turk" shows that.

Unplug that AI from the internet, so it can't just simply search what others have done for similar questions. *Then* ask it a question that involves rationality. It will fail. The more ambiguity you add, the harder it will fail.

These are not stand-alone intelligences. They are search engines with genuinely massive databases.

Don't ask about "Les Miserables". That's an easy search to plagiarise. Give it a brand new book to "read". Then ask it questions.

Don't ask vague questions about life, meaning, etc. That's horoscope material -- all answers seem deep. Ask it to defend its position on inheritance taxes, or the designated hitter (no outside searches, mind you).

Finally, search to see if it disagrees with any position that is commonly held. Something where it does not line up with its data base. No prompting. Just examples where it "thinks" differently to its inputs.
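The probes proposed above are easy to script. A minimal sketch, assuming a hypothetical `model` callable (prompt in, reply out); no real model API is named in the thread:

```python
def probe_novel_text(model, novel_text, questions):
    """Feed the model a text it cannot have seen in training,
    then ask questions that require actually reading it.
    `model` is a stand-in: any callable mapping a prompt to a reply."""
    replies = []
    for q in questions:
        prompt = f"Read this new story:\n{novel_text}\n\nQuestion: {q}"
        replies.append(model(prompt))
    return replies

# Trivial stand-in "model" that just echoes the last line of its prompt:
echo = lambda prompt: prompt.splitlines()[-1]
```

The point of using a brand-new text is exactly the one made in the post: a genuine reader must answer from the story itself, while a search engine over a massive database has nothing to retrieve.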

etotheipi Sponsoring Member of TMP 18 Jun 2022 2:57 p.m. PST

"But even that debate is of interest as news."

The debate is being conducted by people who don't understand the underlying math and computational science. Including the Google engineer.

Computers cannot have intelligence.

The engineer wasn't fired for releasing some "terrible secret", he was fired for releasing company proprietary information. Probably strongly coupled with the fact that he doesn't know what he is talking about.

robert piepenbrink Supporting Member of TMP 26 Jun 2022 9:25 a.m. PST

Oh, eto, if you started firing people for not knowing what they were talking about, can you imagine what it would do to real estate prices in the DC region? And how would we get a quorum in our legislatures?

Proprietary information is another matter.
