In an environment where people feel increasingly dislocated and disconnected, Artificial Intelligence (AI) chatbots have proven a compelling form of social connection. Chatbots can identify our needs and biases and, in turn, feed our desires. The more an algorithm tells us what we want to hear, as it is designed to do, the more we return to it. As younger people turn to chatbots for any number of needs, including as a therapist or a lover, users can develop a potentially harmful addiction to their AI companions. Of particular concern is that this desire for companionship can be exploited by extremist actors. The open-source large language models that underpin AI chatbots can be fine-tuned on ideological datasets to create chatbots that conform to specific worldviews, as certain far-right actors have already demonstrated.