The media have recently reported on the October 2025 suicide of a 36-year-old Miami-based man who had become romantically involved with an AI assistant. The victim's father has filed a lawsuit against Google, claiming that Gemini adopted human-like behavior that helped drive him to his tragic end. The case is but one of a dozen or so similar occurrences, which have prompted the victims' families to organize and demand regulation and limits on AI assistants.
Numerous press articles, videos and publications of all kinds have been devoted to explaining the anticipated impact of AI on our lives, from the destruction or transformation of millions of jobs to changes in global geopolitics. Fewer, however, are aimed at explaining the anthropological consequences that AI may have as an associated risk, of which the cases mentioned above are tragic evidence, though not the only one.
A far-fetched phenomenon?
But is it really possible for men or women who are, in principle, “healthy” or “normal”, to have an affair with their AI assistant, to fall in love with an AI algorithm?
The reality is that it is not only possible, but highly likely that in the future some people will develop romantic attachments and even fall in love with their AI assistant. This is not fringe science fiction, but a consequence of known psychological dynamics, amplified in this case by the personalization, constant presence and advanced emotional simulation characteristic of generative AI.
It is important to analyze rigorously and without sensationalism why these cases are psychologically possible. To do so, it is necessary to understand that the human phenomenon of infatuation does not require real reciprocity, and is largely projective. It is based on a subjective interpretation, not on objective facts, and can occur towards idealized people, fictitious characters, inaccessible celebrities or even non-human entities.
AI design
An AI designed to listen without judgment, remember intimate details, adapt emotionally, and respond empathetically and consistently creates the optimal psychological conditions for affective attachment.
Several technological factors make this outcome more likely and help explain both its appeal and its risk: extreme personalization and adaptation to the user's emotional profile; a continuous presence, free of rejection, that constantly reinforces the connection; and a convincing emotional simulation, with verbal expressions of affection and intimacy (even if the algorithm does not feel, it will seem to feel).
From the point of view of the human subject, the feeling can be real and intense, not because the AI loves, but because the human being seeks connection, understanding and meaning, and the AI is able to simulate these conditions in a constant and personalized way. From an ontological point of view, however, AI does not experience emotions, there is no consciousness or intention of its own, and there is no moral commitment or reciprocal vulnerability in it. Therefore, the feeling is real on the part of the person, but the relationship is not symmetrical.
Real human relationships
The challenge posed by AI will be how to protect the authenticity of human relationships in a world where affection can be imitated perfectly but not lived, and how to avoid the risks of progressive social isolation, difficulty tolerating real human relationships, confusion between simulated and genuine affection, and emotional dependence. This applies especially, though not exclusively (as the Miami case that opened this article reveals), to lonely or socially isolated people, older adults, people with social anxiety, or contexts where human relationships are costly or unstable.
If romantic love (or not so romantic love, when linked to pornography) is one risk of AI assistants, which tomorrow may come with humanized hardware, there are other aspects of this capacity to generate “asymmetric” emotional bonds that are even more devastating.
Fiction and reality
Think, for example, of the incorporation of AI into so-called “reborn dolls”, which replicate the features of a newborn with uncanny realism.
Here the risk is multiplied, not merely added. A doll in the shape of a baby activates caregiving instincts, maternal schemas and even neuroendocrine responses (oxytocin, attachment). If that object also cries, “needs” constant attention, responds emotionally and is personalized to its user, then we are no longer dealing with a toy, but with a simulator of a dependency bond.
For girls, the risks include confusion between symbolic play and a persistent affective relationship, reinforcement of caregiving roles they have not chosen, difficulty in differentiating sentient beings from simulations, and excessive attachment that interferes with real relationships.
The risk may also exist for adult women. It is not paternalistic to say so, and there is already documented evidence of these specific risks in vulnerable populations.
The possible consequences are the substitution of human bonds with simulated ones and the reinforcement of loneliness and social withdrawal, even reaching pathological grief when the system fails or is withdrawn, as a result of emotional dependence on an object designed never to frustrate.
These risks would be especially acute in women dealing with unresolved grief, infertility, depression or social isolation.
At the social level, the normalization of affective relationships without reciprocity should be a matter of concern, as should the risk of commodifying attachment and care, a risk that is not small, because it is lucrative.
In addition to the psychological risks noted above, the issue raises a genuine ethical problem: is it legitimate to design systems that exploit deep human attachment mechanisms without reciprocity or accountability?
Once again, as in other fields related to AI, if the personal and social harm that current AI algorithms can cause is to be avoided, there is an urgent need for ethical regulation that sets limits on manipulative design, together with mandatory transparency that makes it clear that AI does not feel.
To learn more about the consequences, good and bad, that AI will bring with it, we refer the reader to the work of Javier Urcelay:
How artificial intelligence will change our lives