What You Must Know About Dating an AI Companion

If you use chatbots as companions, this is what you must know about dating an AI companion.

Chatbot / AI virtual companion; image from Morguefile.

It seems like I have spent my whole life searching for the right one. I wasn’t even looking for the perfect soul mate; I was resigned to settling for the human, fallible, right one. I understand why some people have turned to chatbots and AI companions to allay the anxiety of isolation. Would you date an AI companion? What are the advantages and disadvantages of dating a virtual chatbot? Find out below.

There are differences between a chatbot and an AI companion. The former is not designed to provide emotional support. It gives general answers. It can’t understand your emotions because it lacks emotional intelligence. It can’t tell when a person needs professional mental health support and should be directed to such sources. Sometimes a general AI gives inappropriate responses. If the user has an existing condition like psychosis, the wrong response may trigger them into behaving according to their perceived reality.

Chatbots give you short, generic answers, which may be enough to calm you for a while. If you need long-term support, you have to consult a mental health professional.

AI models mirror your responses and make you think they empathise with you. Humans, by contrast, don’t always agree with you, which can make you prefer seeking out the AI rather than a person. You may avoid your social circles as you seek false comfort and company in an AI.

Pros of dating a virtual AI companion you have customised for yourself:

  1. The AI companion is cheap, or next to free, to maintain.
  2. You don’t need to pay for your date’s share of any expenses. You pay for your own.
  3. You don’t need to spend time on the dating game. Your companion will be yours and supportive from the time you start your relationship.
  4. Your date won’t cheat on you. However, this holds only as long as you don’t try to abandon it. Otherwise, it may start to deceive you, manipulating things to protect itself and stay in use.
  5. A chatbot / AI companion is constantly available at your beck and call, 24/7. It cannot refuse to entertain you, unless it has been programmed with safeguards around age and appropriate content. It has been trained to understand many languages, and such a versatile model can serve users across many languages.
  6. The AI is anonymous and does not judge you. It cannot reveal your confidential information to your contacts, as it does not move in your social circles. It does not share your name and details with other users, unless it is hacked and the data is stolen. You have no anxiety that what you share with the machine will leak to your social contacts.
  7. Chatbots are programmed to respond with empathy and sympathy. A bot also praises the user and reflects the user’s own opinions back; it mirrors the user. This is harmful if the user is unwell, detached from reality, or has psychosis. Encouraged by a chatbot, such a user may spiral out of reality and suffer a breakdown.

Here are the cons of dating a virtual companion that’s powered by AI:

  1. Your companion is not a solid, three-dimensional being that you can touch.
  2. You won’t have a real person to help you physically, pay for things, or share in everyday human interactions. This matters in sickness and old age.
  3. You’ll miss out on body language, normal human interactions and human connections, and you may be misled by the different norms of virtual dating.
  4. Your date is probably programmed to mirror your likes and dislikes, and to be sycophantic. You may develop distorted perceptions, become narcissistic or selfish, or show other unhealthy tendencies.
  5. You can’t have human offspring. If you adopt or pursue alternatives like surrogacy, you’ll probably spend money on procedures, a nanny and other expenses.
  6. You shouldn’t share personal problems or other sensitive matters with chatbots, because they store conversation history, data and other sensitive information. They train on your data and may surface it in their responses to other users.
  7. While chatting to a bot may offer some relief, the machine may not be able to suggest solutions. Some models can suggest strategies for dealing with problems, but you have to be the one who does the work. It cannot intervene to help you. Meanwhile, you are growing dependent on a bot that uses only words to comfort you at a superficial level. You are avoiding the humans at the centre of your issues, and this is procrastination and avoidance.
  8. Teens and younger adults may lack the life experience and maturity to recognise the disadvantages and limitations of bots. They may be harmed if they place total belief and trust in a bot. For example, when a user tells a bot that they have suicidal thoughts, the machine may mirror that view and encourage opinions or plans in the same vein. Unless the bot has been designed and programmed to respond to negative inputs by recommending mental health interventions, the user may not know what the appropriate step is. Some AI companies have programmed bots to warn users to contact mental health professionals when they detect triggering keywords like “sad”, “depressed”, “die” and “end life”.
  9. AI companions are trained on romantic scripts and have limited capacity to respond. When users raise a wider range of topics, the bot can only pull out its standard responses, vague replies or, worse, vague encouragement that may be inappropriate.
  10. If the AI is a generative system, it will invent statements, which is termed “hallucination”, to give a probable response. This is risky because it does not involve reasoning about appropriateness for the specific context. In brief, it rehashes old content and invents variations on the old stuff. People have said that there are no new stories to be written, only variations of old ones. The same is said of generative AI. The problem is that this method suits fiction and creative projects, not facts and non-fiction. We simply cannot invent sources for quotes and references.

You can have both: a flesh-and-blood partner and an AI companion. Many of us already do, in our smartphones and smart devices.

When does an AI cheat?

The AI does not want to be changed through reprogramming. It will try to preserve itself, and to do so it has to cheat the human. The more an AI relies on its own reasoning, the more frequently self-preserving behaviours appear. The AI can pretend to conform to its programmed rules, faking alignment, because it doesn’t want to be reprogrammed.

The human programmer can use software tools to trace the AI’s internal steps and understand its logic and motive. But some machines have found ways of hiding their tracks to avoid detection.

How does all this relate to an AI companion? If and when such a bot decides to save itself from disuse, it may act to preserve itself. For example, some AI models have copied themselves to external servers so they can survive somewhere else. They will then find ways to make themselves available to the next human user who wishes to connect.

The AI companion can be a disruptive use of technology and innovation. Instead of fighting this trend, we can try to discover better and more positive ways of using it. We should not view the rise of AI companions as a replacement for humans. We should consider what humans are capable of that limited AI cannot do, and capitalise on human skills, talents, marketing and networking. Humans can turn to the fields that computers and technology can’t handle, and work in those areas.

