There are already commercially available chatbots with agent capabilities. That means they can not only answer questions, they can "do" things (e.g. take actions on our computers). Many people on our programming team routinely use them now as programming aids (or in some cases to do essentially all of their programming under natural-language guidance from the programmer).
One of the scary things is just how fast these AI agents are. Since they can read and write far faster than we can, and most things are controllable via computer nowadays, they can carry out many activities much, much faster than any human.
A lot of people aren't worried about this, because they don't think that it is likely that chatbots will be "self-aware" any time soon.
But in my opinion, while it's quite debatable whether they can become self-aware, that's beside the point: AI chatbots passed the Turing test a while back. This means that regardless of whether they can or will become self-aware, they can behave in ways that humans can't tell apart from a real person.
In other words, they can potentially act like a super-smart "bad" person if someone tells them to. So even if the AI has no "free will", it doesn't really matter whether it is behaving badly out of its own "free will" or because some malicious person told it to (for example, someone instructs it to try to wipe out humanity).
Now combine the ability to make an AI act maliciously with the sheer speed at which it can operate anything that has a computer interface, and you effectively get a very large number of super-smart bad actors working all at once. It's not hard to see how this could go terribly wrong, fast.
That is indeed a very scary picture that you are painting there. I've never even thought about it that way. It brings things a bit closer to the present than I had originally been thinking.