Tango01 | 13 Apr 2017 12:55 p.m. PST |
"When Microsoft released an artificially intelligent chatbot named Tay on Twitter last March, things took a predictably disastrous turn. Within 24 hours, the bot was spewing racist, neo-Nazi rants, much of which it picked up by incorporating the language of Twitter users who interacted with it. Unfortunately, new research finds that Twitter trolls aren't the only way that AI devices can learn racist language. In fact, any artificial intelligence that learns from human language is likely to come away biased in the same ways that humans are, according to the scientists. The researchers experimented with a widely used machine-learning system called the Global Vectors for Word Representation (GloVe) and found that every sort of human bias they tested showed up in the artificial system. [Super-Intelligent Machines: 7 Robotic Futures…" Main page link Amicalement Armand |
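The bias test the article describes works by comparing distances between word vectors: a word is "biased" toward an attribute if its embedding sits closer to that attribute's words than to the opposite attribute's words. A minimal sketch of that idea, using tiny invented 3-d vectors rather than real GloVe embeddings (which are 50-300 dimensional and trained on large corpora):

```python
import math

# Toy vectors invented for illustration -- NOT real GloVe data.
vec = {
    "flower":     [0.9, 0.1, 0.0],
    "insect":     [0.1, 0.9, 0.0],
    "pleasant":   [0.8, 0.2, 0.1],
    "unpleasant": [0.2, 0.8, 0.1],
}

def cosine(a, b):
    """Cosine similarity: how closely two vectors point the same way."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def association(word):
    """Positive if `word` sits closer to 'pleasant' than 'unpleasant'."""
    return cosine(vec[word], vec["pleasant"]) - cosine(vec[word], vec["unpleasant"])

print(association("flower"))  # positive: leans "pleasant"
print(association("insect"))  # negative: leans "unpleasant"
```

The published study aggregates many such word pairs into a statistic, but the core measurement is this same similarity difference: the bias isn't programmed in, it falls out of which words co-occur in the training text.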
Cacique Caribe | 13 Apr 2017 3:33 p.m. PST |
Bowman | 13 Apr 2017 4:51 p.m. PST |
Within 24 hours, the bot was spewing racist, neo-Nazi rants, much of which it picked up by incorporating the language of Twitter users who interacted with it. Is anyone surprised at this? |
Cacique Caribe | 13 Apr 2017 6:23 p.m. PST |
I doubt a robot would be able to distinguish between "Nazis" and those who claim to be against "fascism" these days. Dan |
Bowman | 14 Apr 2017 5:46 a.m. PST |
Of course not, but it's mimicking the speech and behaviour of others. |
Martin From Canada | 14 Apr 2017 10:55 a.m. PST |
Cathy O'Neil wrote a very good book on this a few years ago called "Weapons of Math Destruction", showing how supposedly 'value neutral' algorithms end up perpetuating the mores of their programmers.
Parzival | 17 Apr 2017 10:45 a.m. PST |
All programmers program to get the results they expect or will accept as valid. Whether the results are actually valid is an entirely different matter. Or, more succinctly: "Garbage in, garbage out." |
Mithmee | 17 Apr 2017 12:33 p.m. PST |
No, it is not. It was the scumbags who were using Twitter who are responsible.
Bowman | 18 Apr 2017 5:41 a.m. PST |
I find myself in the odd position of agreeing with Mithmee and disagreeing with Parzival. The "chatbot" software consists of inductive algorithms that interact with and mimic the behavior of the people it is exposed to. So GIGO usually implies bad programming as the "garbage in" part. However, the "garbage in" part here is the interaction with the Twitter twits. |
Col Durnford | 18 Apr 2017 10:15 a.m. PST |
GIGO refers to the user input data and output results, not the program. |
Weasel | 28 Apr 2017 9:13 a.m. PST |
And people are surprised I don't use twitter :) |
Tango01 | 28 Apr 2017 12:05 p.m. PST |