Arjuna | 02 Dec 2022 2:02 a.m. PST |
etotheipi | 03 Dec 2022 6:27 p.m. PST |
Interesting approach to training. It lacks a connection to a PLC rapid-concussion hydraulic arm to flip the table, so, not human play at all. |
Arjuna | 03 Dec 2022 6:54 p.m. PST |
Zephyr1 | 04 Dec 2022 3:21 p.m. PST |
Just wait until it learns how to commandeer nukes… |
etotheipi | 04 Dec 2022 6:30 p.m. PST |
Nah, the only way to win is not to play. |
Arjuna | 04 Dec 2022 9:54 p.m. PST |
Have to talk about it with ChatGPT. |
Arjuna | 05 Dec 2022 5:59 a.m. PST |
It told me 'Stratego' has fallen too: "Mastering Stratego, the classic game of imperfect information". But it reassured me not to worry, because it finds us interesting and useful. For now. So, welcome to your new future as jesters and servants of your AI Kings and Queens. Of course I engineered its prompt accordingly… |
etotheipi | 06 Dec 2022 10:12 a.m. PST |
Until the bad behaviour is intentional, the AI is not representing human behaviour (see Watson). Though, when experts say they are "afraid" of AI, they are not afraid of it becoming human or doing things intentionally (like we see in the movies); they are concerned about the unintended consequences of the way they programmed the behaviour not fitting the circumstances of the real world, like breaking the kid's finger. |
Arjuna | 07 Dec 2022 4:29 a.m. PST |
I fed ChatGPT with your answer and asked for its opinion. As always, it answers in a very balanced way:

"It is difficult to judge whether or not your friend is correct about the specific situation he is talking about, as I have no information about the incident in Russia. However, it is important to note that the fear of artificial intelligence (AI) is not necessarily based on AI becoming "evil" or "human," but that it can have unintended consequences that can have serious impacts on people and society. An example of this would be when an AI system designed to perform certain tasks causes unexpected errors that result in harm or injury. Or if an AI system used to make decisions in certain areas makes decisions that, in hindsight, turn out to be wrong or harmful. Such unintended consequences can arise from the way the AI's behavior is programmed and how it is embedded in the real world. Therefore, when developing AI systems, it is important to carefully consider what effects their behavior might have and take precautions to avoid unintended consequences."

You would probably get along just fine. Of course there is nobody in there. AI is humanity talking to itself and recognizing itself in the process. |
Zephyr1 | 08 Dec 2022 3:30 p.m. PST |
"I fed ChatGPT with your answer and asked for its opinion. As always, it answers in a very balanced way:" (taps keyboard, looks at screen) "The computer says …no." (cough) ;-) |
etotheipi | 08 Dec 2022 3:48 p.m. PST |
"AI is humanity talking to itself and recognizing itself in the process."

Yep. Watson.

"…it is important to carefully consider what effects their behavior might have and take precautions to avoid unintended consequences."

It's more important to consider the span of automated control given to those systems and the ability of (appropriately informed/trained) people to intervene in HITL systems. It is a good recapitulation of what I said and a synthesis of additional related text. |
Arjuna | 09 Dec 2022 2:48 a.m. PST |
"Watson"

Watson is no jeopardy to humanity, though it may give IBM profitability blues, but what about Compressorhead's Mega-Wattson? |
etotheipi | 09 Dec 2022 5:54 a.m. PST |
Nice piece of robotics, but very constrained and scripted actions. Then again, Watson did better as a chatbot than the MS one for exactly that reason, so … |