"Bad News: Artificial Intelligence Is Racist, Too" Topic


12 Posts


Tango01 13 Apr 2017 12:55 p.m. PST

"When Microsoft released an artificially intelligent chatbot named Tay on Twitter last March, things took a predictably disastrous turn. Within 24 hours, the bot was spewing racist, neo-Nazi rants, much of which it picked up by incorporating the language of Twitter users who interacted with it.

Unfortunately, new research finds that Twitter trolls aren't the only way that AI devices can learn racist language. In fact, any artificial intelligence that learns from human language is likely to come away biased in the same ways that humans are, according to the scientists.

The researchers experimented with a widely used machine-learning system called the Global Vectors for Word Representation (GloVe) and found that every sort of human bias they tested showed up in the artificial system. [Super-Intelligent Machines: 7 Robotic Futures…"
Main page
link

Amicalement
Armand
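
[For a concrete sense of how such bias is measured in practice, here is a minimal Python sketch. It is illustrative only, not the researchers' code: the word lists and the vector file name (a common pre-trained GloVe distribution) are assumptions, and the actual study used a Word-Embedding Association Test. The core idea is the same, though: compare cosine similarities between target words and "pleasant"/"unpleasant" attribute words in the pre-trained vectors.]

```python
# Minimal sketch (not the study's code): probing word-association bias
# in pre-trained GloVe vectors with plain cosine similarity.
# Assumes a standard GloVe text file such as glove.6B.100d.txt is on disk;
# the word lists below are illustrative only.
import numpy as np

def load_glove(path, vocab):
    """Load vectors only for the words we need from a GloVe .txt file."""
    vectors = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            parts = line.rstrip().split(" ")
            if parts[0] in vocab:
                vectors[parts[0]] = np.asarray(parts[1:], dtype=float)
    return vectors

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def mean_association(word, attributes, vecs):
    """Average cosine similarity between one word and a set of attribute words."""
    return np.mean([cosine(vecs[word], vecs[a]) for a in attributes])

# Illustrative word sets, loosely modelled on the flower/insect vs.
# pleasant/unpleasant comparison used in the word-embedding bias literature.
targets_a = ["flower", "rose", "daisy"]
targets_b = ["insect", "spider", "maggot"]
pleasant = ["love", "peace", "wonderful"]
unpleasant = ["hate", "ugly", "terrible"]

vocab = set(targets_a + targets_b + pleasant + unpleasant)
vecs = load_glove("glove.6B.100d.txt", vocab)
missing = vocab - vecs.keys()
assert not missing, f"words not found in the vector file: {missing}"

# A positive score means the targets_a words sit closer to the "pleasant"
# attributes than the targets_b words do -- the kind of human-like
# association the article says showed up in the trained vectors.
def differential(word):
    return mean_association(word, pleasant, vecs) - mean_association(word, unpleasant, vecs)

score = np.mean([differential(w) for w in targets_a]) - np.mean([differential(w) for w in targets_b])
print(f"association score: {score:+.3f}")
```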

Cacique Caribe 13 Apr 2017 3:33 p.m. PST

Lol.

Dan

Bowman 13 Apr 2017 4:51 p.m. PST

Within 24 hours, the bot was spewing racist, neo-Nazi rants, much of which it picked up by incorporating the language of Twitter users who interacted with it.

Is anyone surprised at this?

Cacique Caribe 13 Apr 2017 6:23 p.m. PST

I doubt a robot would be able to distinguish between "Nazis" and those who claim to be against "fascism" these days.

Dan

Bowman 14 Apr 2017 5:46 a.m. PST

Of course not, but it's mimicking the speech and behaviour of others.

Martin From Canada 14 Apr 2017 10:55 a.m. PST

Cathy O'Neil wrote a very good book on this a few years ago called "Weapons of Math Destruction", showing how 'value neutral' algorithms happen to perpetuate the mores of the programmers.

Parzival Supporting Member of TMP 17 Apr 2017 10:45 a.m. PST

All programmers program to get the results they expect or will accept as valid.
Whether the results are actually valid is an entirely different matter.

Or, more succinctly: "Garbage in, garbage out."

Mithmee 17 Apr 2017 12:33 p.m. PST

No, it is not.

It's the scumbags who were using Twitter who are.

Bowman 18 Apr 2017 5:41 a.m. PST

I find myself in the odd position of agreeing with Mithmee and disagreeing with Parzival. The "chatbot" software consists of inductive algorithms that interact with and mimic the behavior of the people it is exposed to.

So GIGO usually implies that bad programming supplies the "garbage in". Here, however, the "garbage in" is the interaction with the Twitter twits.
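
[To make that distinction concrete, here is a toy Python sketch; it is an illustration only, not Tay's actual design. The bot class and its behaviour are assumptions made up for the example: the code itself is "correct", yet its output is exactly as good or as bad as what users feed it.]

```python
# Toy sketch of why the "garbage" here is the training input, not the code:
# a trivially simple bot that only ever echoes phrases users have taught it.
import random

class ParrotBot:
    def __init__(self):
        self.learned = []          # every phrase users have sent it

    def learn(self, user_message: str) -> None:
        """Store whatever users say, with no curation or filtering."""
        self.learned.append(user_message)

    def reply(self) -> str:
        # The bot can only reflect its input back.
        return random.choice(self.learned) if self.learned else "..."

bot = ParrotBot()
bot.learn("Have a nice day!")
bot.learn("<something hateful a troll typed>")
print(bot.reply())   # clean code, but output quality depends entirely on the inputs
```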

Col Durnford 18 Apr 2017 10:15 a.m. PST

GIGO refers to the user input data and output results, not the program.

Weasel 28 Apr 2017 9:13 a.m. PST

And people are surprised I don't use Twitter :)

Tango01 28 Apr 2017 12:05 p.m. PST

(smile)


Amicalement
Armand
