AI now knows how to speak, but have we taught it to protect us?

One day, you will receive a call: the voice will sound like your boss, your partner, or your child. It will not be them, but a cloned voice, an invented crisis, a call designed to deceive you…

Scams that can cost dearly

Scammers already use voice-cloning tools on TikTok videos to target seniors, colleagues, and businesses. A single one of their campaigns cost victims more than $100 million. And this is not new. We have seen people convinced they were dating Brad Pitt: a few AI-generated images, some semi-convincing messages, and a woman sent hundreds of thousands of euros to the man she believed was Pitt himself, sick, abandoned by Angelina Jolie, and in need of help. If the trap works with a crude imitation, what will happen when the voice is perfect? When the context seems credible and your instinct insists it must be true?

An increasingly formidable AI

AI should simplify communication, make it more human, and save people time, not steal it from them. But powerful tools do not decide how they are used; humans do. Today, five seconds of audio and a little public data, your name, your child's school, or your dog's name, are enough to create a credible cloned voice. Add a bit of context, and it becomes a plausible crisis. And it works: a secretary makes a transfer, a chief financial officer approves a transaction, a parent picks up the phone and hears their child crying for help. In reality, AI only does what it has been taught to do. We simply did not take the time to ask ourselves: what happens if it becomes too effective?

AI, yes, but at what price?

We used to say, "Do not believe everything you read." Today it is "Do not believe everything you hear," and soon it will be "Do not believe everything that knows your name, your agenda, and your voice." What technology vendors must understand is that AI does not frighten because it is intelligent. It frightens because it is fast. Faster than your internal policies, your validation procedures, and even your reflexes. The danger is not in the technology, but in the absence of limits. We have seen it too often: vendors rush to sell the dream, race past the competition, and leave ethics at the door.

But when you build tools for hospitals, schools, or field teams, professions where the margin of error must remain low, no approximation is acceptable. If you develop these tools, design for failure, not just for function. If you buy them, demand evidence that they were built properly. Here is what that means concretely:

Necessarily layered security

If a single voice can trigger an action, there is a risk. Any high-stakes interaction, a meeting, a validation, the escalation of an issue, must require multi-factor authentication anchored in trusted channels such as professional email. Scams rely on what is easiest to falsify: phone numbers and urgency.
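As an illustration, here is a minimal sketch of that principle in Python. The function names and the email-based second factor are hypothetical, not a reference to any particular product; the point is simply that the voice request alone never triggers the action.

```python
import secrets

# Hypothetical sketch: a voice request alone must never trigger a
# sensitive action. A one-time code is sent over a separately verified
# channel (here, the professional email on file), and the action only
# proceeds if the requester can produce it.

def send_code_to_trusted_channel(user_email: str) -> str:
    """Generate a one-time code and send it via a pre-verified channel.
    (Stubbed here; a real system would email or push the code.)"""
    code = f"{secrets.randbelow(1_000_000):06d}"
    print(f"[trusted channel] one-time code sent to {user_email}")
    return code

def request_transfer(amount: float, entered_code: str, sent_code: str) -> bool:
    # The voice call is only the first factor; the code from the
    # trusted channel is the second. Reject on any mismatch.
    if not secrets.compare_digest(entered_code, sent_code):
        print("Transfer refused: second factor failed.")
        return False
    print(f"Transfer of {amount:.2f} approved after a two-factor check.")
    return True

code = send_code_to_trusted_channel("cfo@example.com")
request_transfer(50_000, entered_code="123456", sent_code=code)  # refused unless codes match
```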

Hearing is no longer enough, you have to see

Voice verification is not enough. Two tests are necessary: does it sound like your boss, and is it really him? Real-time video remains difficult to counterfeit. If something seems strange, change channels: move to a secure video call. Why this number and not your professional line? Can you turn on your camera?

When form masks substance

A pleasant tone does not guarantee good intentions. AI systems should detect signs of emotional distress or confusion and hand off to a human, instead of automatically continuing the script.
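Here is a minimal sketch of that hand-off logic, assuming a simple keyword heuristic; a real system would use a trained classifier, and the phrase list below is invented purely for illustration.

```python
# Hypothetical guardrail: scan each caller utterance for signs of
# distress or confusion, and escalate to a human agent instead of
# letting the automated script continue. Markers are illustrative only.
DISTRESS_MARKERS = {
    "help", "emergency", "i don't understand", "please hurry",
    "i'm scared", "don't tell anyone",
}

def should_escalate(utterance: str) -> bool:
    text = utterance.lower()
    return any(marker in text for marker in DISTRESS_MARKERS)

def handle_turn(utterance: str) -> str:
    if should_escalate(utterance):
        # Stop the script and route the call to a person.
        return "ESCALATE_TO_HUMAN"
    return "CONTINUE_SCRIPT"

print(handle_turn("Please hurry, I need this transfer now"))  # ESCALATE_TO_HUMAN
```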

Compartmentalize to protect better

Keep transcription engines separate from response models. Strip sensitive content before it reaches an LLM. This reduces the risk of leaks, intentional or not.
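A minimal redaction pass might look like the sketch below. The patterns are deliberately simplified, a production system would rely on a dedicated PII-detection layer, but the principle holds: the response model only ever sees the sanitized text.

```python
import re

# Hypothetical redaction step sitting between the transcription engine
# and the response model: sensitive patterns are masked before the text
# is ever sent to an LLM. Patterns are simplified for illustration.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "PHONE": re.compile(r"\b(?:\+?\d[\d\s.-]{7,}\d)\b"),
    "IBAN":  re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{10,30}\b"),
}

def redact(transcript: str) -> str:
    for label, pattern in PATTERNS.items():
        transcript = pattern.sub(f"[{label}]", transcript)
    return transcript

raw = "Send 5000 EUR to FR7630006000011234567890189, confirm at jean@exemple.fr"
print(redact(raw))
# -> "Send 5000 EUR to [IBAN], confirm at [EMAIL]"
```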

No deployment without control

Monitor automated calls, impersonation attempts, and policy violations.

Technology alone is not enough. Human supervision is essential. Your vendors must review every deployment. And you must validate who activates what, and why, before the AI goes into service.
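One way to make that concrete is to gate every activation behind an explicit, recorded approval. The sketch below is hypothetical, with invented field names; the point is that nothing goes live without a named person and a stated reason on record.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Hypothetical activation gate: an AI deployment cannot go live without
# a recorded approval stating who activates it and why.
@dataclass
class Activation:
    deployment: str
    approved_by: str
    reason: str
    timestamp: str

APPROVALS: list[Activation] = []

def activate(deployment: str, approved_by: str, reason: str) -> Activation:
    if not approved_by or not reason:
        raise PermissionError("No activation without a named approver and a reason.")
    record = Activation(deployment, approved_by, reason,
                        datetime.now(timezone.utc).isoformat())
    APPROVALS.append(record)  # kept for later review
    return record

activate("outbound-call-agent-v2", "j.durand", "Q3 appointment reminders")
```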

Question and trust

Audit logs are not there to surveil, but to establish trust. When a problem occurs, traceability makes the difference between chaos and control. But the best defense is human instinct. Train your teams to adopt one reflex before acting: ask the question, "Does this make sense?"
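In practice, that traceability can be as simple as an append-only structured log of every AI-initiated action, as in this sketch (the event fields are illustrative):

```python
import json
from datetime import datetime, timezone

# Hypothetical append-only audit trail: every AI-initiated action is
# written as one structured line, so an incident can be reconstructed
# afterwards. Field names are illustrative.
def audit(event: str, actor: str, target: str, outcome: str,
          path: str = "audit.log") -> None:
    entry = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "event": event,
        "actor": actor,
        "target": target,
        "outcome": outcome,
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")

audit("voice_call_initiated", "agent:reminder-bot", "patient:4521", "completed")
```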

I have seen what agentic AI can do when it is used well. I have seen it help a nurse triage files in seconds, give a small business the voice of a CAC 40 team, and assist teams in five languages without missing a beat. I have also seen it imitate a spouse. A boss. A child. Same engine, different safeguards. So the real question is not "Can AI make us more effective?" but rather "Are we building it to help us, or to harm us?" The answer is in our hands. There is still time.

Jake Thompson
Growing up in Seattle, I've always been intrigued by the ever-evolving digital landscape and its impacts on our world. With a background in computer science and business from MIT, I've spent the last decade working with tech companies and writing about technological advancements. I'm passionate about uncovering how innovation and digitalization are reshaping industries, and I feel privileged to share these insights through MeshedSociety.com.
