Surveillance: Big data companies use ChatGPT as an almighty sheriff’s deputy

Companies specializing in big-data analytics and digital forensics are increasingly relying on ChatGPT and other artificial intelligence (AI) systems to monitor social media users. Social Links, for example, presented a sentiment-analysis tool aimed at internal security at the Milipol trade fair in Paris. It can gauge the mood of users on social networks such as X (formerly Twitter) or Facebook and highlight frequently discussed topics, the US magazine Forbes reports. The goal is to identify emerging protest movements and put law enforcement authorities on alert.


Russian businessman Andrey Kulikov is one of the founders of Social Links. Founded in 2017, the company has around 600 customers in Europe and the US alone; it is headquartered in Amsterdam and now also has an office in New York. In a report from late 2022, Meta described the company as a spyware provider and blocked 3,700 Facebook and Instagram accounts that Social Links had allegedly used to repeatedly spy on the two platforms. Social Links rejects this characterization, as well as accusations of ties to the Russian intelligence services.

In a demo shown in Paris, according to Forbes, a user instructed the Social Links tool to retrieve social media posts relevant to their area of interest. The user can then analyze the results through the program's interface and save the data to their computer. Social Links analyst Bruno Alonso used the software to gauge online reactions to the controversial deal that kept Spanish Prime Minister Pedro Sánchez in power with promises to the Catalan independence movement. To do this, the tool searched X for posts containing keywords and hashtags such as "amnesty" and automatically passed them to ChatGPT.
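
Forbes does not detail how the tool queries X. Purely as an illustration, the sketch below shows what such keyword-based retrieval could look like against X's public v2 recent-search endpoint; the query string, bearer-token handling, and result handling are assumptions for this example, not details from the report.

```python
# Illustrative sketch only: fetch recent X posts matching keywords such as "amnesty".
# Uses X's public v2 recent-search endpoint; the token and query are placeholders.
import os
import requests

SEARCH_URL = "https://api.twitter.com/2/tweets/search/recent"

def fetch_posts(query: str, max_results: int = 50) -> list[str]:
    headers = {"Authorization": f"Bearer {os.environ['X_BEARER_TOKEN']}"}
    params = {"query": query, "max_results": max_results}
    resp = requests.get(SEARCH_URL, headers=headers, params=params, timeout=30)
    resp.raise_for_status()
    # Each returned item carries the post text under the "text" key.
    return [item["text"] for item in resp.json().get("data", [])]

# Hypothetical query mirroring the demo's keyword/hashtag search.
posts = fetch_posts('("amnesty" OR #amnistía) lang:es -is:retweet')
```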

According to the report, the bot then rated the sentiment of the posts as positive, negative, or neutral and displayed the results in an interactive graph. The tool can also quickly summarize online debates on other platforms such as Facebook and identify the topics that move users. According to the presentation, researchers could also use built-in biometric facial recognition to identify people who have allegedly made negative comments about an issue. Alonso emphasized: "The possibilities are truly endless." Jay Stanley of the American Civil Liberties Union (ACLU) warns, however, that such AI agents enable an unprecedented form of automated surveillance of individuals and groups, one that goes far beyond human capabilities.
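
The report does not reveal the prompt or model Social Links uses. As a rough, non-authoritative sketch, such a classification step could be wired up with the OpenAI Python SDK's chat-completions interface as shown below; the model name, prompt wording, example posts, and tallying are assumptions for illustration only.

```python
# Illustrative sketch of LLM-based sentiment labeling as described in the demo:
# each post is classified as positive, negative, or neutral, then tallied.
from collections import Counter
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def classify_sentiment(post: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; the report only says "ChatGPT"
        messages=[
            {"role": "system",
             "content": "Classify the sentiment of the following post. "
                        "Answer with exactly one word: positive, negative, or neutral."},
            {"role": "user", "content": post},
        ],
        temperature=0,
    )
    return resp.choices[0].message.content.strip().lower()

# Placeholder posts; in practice these would come from the keyword search above.
posts = ["La amnistía es una vergüenza.", "Me parece bien el acuerdo."]
counts = Counter(classify_sentiment(p) for p in posts)
print(counts)  # label counts that a dashboard could render as a chart
```

Aggregated label counts of this kind are what an interactive dashboard would then turn into the graph described in the demo.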

ChatGPT maker OpenAI declined to comment. Its terms of use do prohibit "activities that violate people's privacy," including "tracking or surveillance" without consent. "We strictly adhere to OpenAI guidelines," a Social Links spokesperson told Forbes, adding that ultimately the system is only used to analyze text and summarize content. Andy Martin of the Israeli forensics firm Cellebrite said at Milipol that large language models such as GPT could be very useful for all kinds of law enforcement work. The spectrum ranges from searching recorded calls for anomalies in a person's "pattern of life" to technologically assisted interviews, in which AI can feed investigators additional information during an interrogation. He conceded, however, that AI is always biased.


(bme)
