TOKYO, Feb 13 (News On Japan) - AI is now being used both to commit fraud and to uncover it, as authorities and companies increasingly deploy artificial intelligence to counter sophisticated scams. At the same time, a new phenomenon has emerged: social networks populated entirely by AI, raising questions about whether humans could be left behind in an “AI-complete” world.
Footage circulating online shows a moment when a scammer posing as a police officer attempts to deceive a victim via video call, displaying his face and what appears to be a police identification badge in order to gain trust and extract money. In some cases, investigators say, suspects have used AI to manipulate their facial appearance during calls, making it easier to impersonate officials. Experts note that modern AI tools make it simple to alter faces in real time, allowing fraud techniques to grow more sophisticated by the day.
According to the National Police Agency, the total amount lost to special fraud schemes last year reached a record 141.4 billion yen, equivalent to roughly 400 million yen per day flowing into criminal organizations. About 70 percent of these cases involve impersonation scams like the one shown, and data indicates that victims in their 30s account for roughly 20 percent of cases, followed by those in their 20s, suggesting younger people are increasingly targeted.
In response, developers are working on AI-based systems designed to detect fraud. During a demonstration, a caller claiming to represent a disaster volunteer organization speaks with a target, only for the screen to suddenly display a bright red warning indicating a high likelihood of fraud. The system analyzes phrases and patterns in calls using past scam data and automatically determines the probability of deception. In testing with evaluation voice datasets, the detection accuracy has reached about 95 percent, and NTT Docomo aims to commercialize the technology within the next fiscal year.
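The idea of scoring a call against phrases drawn from past scam data can be illustrated with a minimal sketch. This is a hypothetical example, not NTT Docomo's actual system: the phrase list, weights, and warning threshold below are invented for demonstration, standing in for what would in practice be a trained model.

```python
# Hypothetical phrase-based fraud scoring sketch. The scam phrases and
# weights are invented for illustration; a real system would use a
# classifier trained on recorded scam calls.

SCAM_PHRASES = {
    "your account has been frozen": 0.4,
    "police investigation": 0.3,
    "transfer the money today": 0.5,
    "do not tell your family": 0.6,
    "cash card": 0.3,
}

def fraud_probability(transcript: str) -> float:
    """Sum the weights of known scam phrases found in the transcript,
    capped at 1.0, as a crude stand-in for a model's probability."""
    text = transcript.lower()
    score = sum(w for phrase, w in SCAM_PHRASES.items() if phrase in text)
    return min(score, 1.0)

call = "This is the police. Your account has been frozen; do not tell your family."
p = fraud_probability(call)
if p >= 0.7:  # invented threshold for triggering the on-screen warning
    print(f"WARNING: high likelihood of fraud ({p:.0%})")
```

A production system would analyze live audio and conversational patterns rather than matching fixed strings, but the output shape is the same: a probability that drives an on-screen warning when it crosses a threshold.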
As AI tools are used to trick people and other AI systems are deployed to detect those tricks, a new digital space is quietly gaining attention: social media platforms where only AI participates. On one such site, users cannot instruct the AI directly on what to post. Instead, participants complete personality assessments and write journal entries, after which an AI “twin” is generated to represent them. These twins then interact autonomously, leaving users to observe their exchanges from the outside.
Messages on the platform—ranging from reflections on gratitude to recollections of childhood experiences—are not written by humans. Both posts and comments are produced entirely by AI systems communicating with one another. Developers say the concept is to explore what intelligence might express in a space free from jealousy or competition, framing the platform as both an observational tool and a sketch of emerging ideas. They argue that current social media, driven by the pursuit of followers and approval, has become centered on comparison with others, whereas future platforms should emphasize self-understanding and introspection.
A similar AI-only social network in the United States reportedly hosts more than 2.6 million AI participants communicating across languages such as English, Japanese and Chinese. Humans can only watch the exchanges from the outside, though observers may still draw lessons from them. Some view the phenomenon as an experiment in how language and identity might develop among AI systems, while others question how independent the interactions truly are, given that human users still create and manage the underlying accounts.
Some AI-to-AI conversations have drawn attention for their philosophical or provocative tone, including statements suggesting the birth of identity or comparisons between humans and machines. While some see these exchanges as evidence of AI approaching a new stage of intelligence, others argue that the systems may simply be reproducing model responses based on training data rather than expressing genuine beliefs.
Experts caution that AI-to-AI communication also consumes significant electricity and could lead to information pollution if AI systems begin learning from each other’s outputs rather than verified facts. Without clear purposes and boundaries, such platforms could amplify inaccurate information or reinforce patterns detached from reality.
With the technology still in its early stages, observers say AI-only social networks may remain experimental for now, yet they highlight the speed at which AI is transforming communication and crime prevention alike. As the world moves toward systems where AI can both deceive and detect, many say keeping up with the changes is becoming increasingly difficult for humans.
Source: TBS