AI is now omnipresent in our digital lives and is radically transforming online threats. Security no longer depends only on the behavior of Internet users…
Safer Internet Day has always been about protecting internet users. In 2026, this mission takes on increased urgency as artificial intelligence becomes deeply integrated into the way we work, learn, communicate and transact. AI is no longer a technology of the future: it already shapes what we see online, how decisions are made and how cybercriminals operate. As AI becomes an integral part of the Internet, online safety no longer depends only on user behavior, but also on how AI itself is intelligently and responsibly designed, governed and secured.
The challenge this year is not whether to use AI, but how to use it safely, responsibly and with awareness of the new risks it creates. While AI accelerates productivity and creativity, it also increases the attack surface of the Internet, affecting individuals, families, schools and organizations. Reacting after the fact is no longer enough in the face of lightning-fast threats.
AI has become an integral part of our daily digital lives
From writing assistance to image generation, recommendation engines and chatbots, AI is present in almost all of our digital interactions today. Businesses are rapidly adopting generative AI (GenAI), and individuals use it every day, often without fully understanding how their data is processed or stored. AI has become a true co-pilot in our daily digital lives, discreetly influencing our decisions, the content we consume and the trust signals we rely on.
These risks are not limited to businesses. When students, families, or individuals use AI tools for homework, for guidance, or to create content, the same behaviors—copying and pasting personal information, uploading images, or trusting results without verification—can expose them to privacy invasion, misinformation, or manipulation. Safe use of AI therefore depends on digital literacy, not restriction.
How AI is transforming the cyber threat landscape
Cybercrime has always evolved with technology, but AI is accelerating this evolution at unprecedented speed. Attackers now combine AI, spoofing, ransomware and social engineering into coordinated, multi-stage campaigns that move faster than traditional defenses. These attacks increasingly adapt in real time, learning from their failures and automatically refining their techniques, much like defensive AI does.
Three trends are particularly relevant to Safer Internet Day:
1. AI-assisted social engineering
AI makes phishing and scams more convincing and easier to execute at scale. Attackers can now generate multilingual, culturally appropriate messages that imitate trusted voices, institutions, and even family members. Email remains the primary vector for distributing malicious content, accounting for 82% of malicious files transmitted, but web and multi-channel attacks are growing rapidly. This reinforces the need for AI-based threat prevention that can detect intent and behavior, not just known signatures.
2. Large-scale ransomware
According to Check Point Research’s December cybersecurity statistics, 945 ransomware attacks were publicly reported in December 2025 alone, a 60% increase from December 2024. Ransomware groups are increasingly fragmented, automated and aggressive, often combining data theft with extortion and public pressure. Artificial intelligence is now being used to accelerate targeting, reconnaissance and extortion tactics.
3. Uncontrolled use of AI: an increased risk factor
AI systems themselves are becoming targets. A Check Point study found security vulnerabilities in 40% of the AI systems examined, demonstrating that AI infrastructure is now part of the attack surface. Securing AI pipelines, models and data flows is as crucial today as securing endpoints or networks.
Tips for staying safe online in an AI-dominated world
As Safer Internet Day reminds us, small habits can significantly reduce risks:
• Be careful before you trust: If AI-generated content asks you for urgent help, money, or private information, stop and verify.
• Limit the information you share with AI tools: Avoid entering personal, financial, or other identifying information unless absolutely necessary.
• Verify important information: Compare AI results with reliable, human-verified sources.
• Keep your systems up to date: Many attacks exploit known vulnerabilities rather than new ones.
• Talk openly about AI use: Especially with younger users, discuss both what AI can do and where it can mislead or expose them.
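To make the second tip concrete, here is a minimal sketch (an illustration, not a product feature) of a redaction pass you could run over text before pasting it into an AI tool. The patterns and placeholder labels are assumptions for illustration; robust PII detection requires far more sophisticated tooling.

```python
import re

# Rough patterns for common PII; real detection needs dedicated tools.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(text: str) -> str:
    """Replace likely PII with placeholder tags before sharing text."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

print(redact("Contact me at jane.doe@example.com or +1 555 123 4567."))
# -> Contact me at [EMAIL REDACTED] or [PHONE REDACTED].
```

Even a simple habit like this reinforces the underlying principle: decide what leaves your device before the AI tool ever sees it.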