
The Scammers Aren't Evolving. Neither Are We.

That viral 'rnicrosoft.com' phishing email is not a sign of sophisticated attackers. It is a sign of a collective security culture that has failed to keep pace with a 20-year-old technique.

A screenshot is circulating on social media this week. It shows a password reset email purportedly from Microsoft, with the sender address noreply@rnicrosoft.com and an arrow pointing at the domain. The caption above the post reads: "The scammers are evolving."

They are not evolving. The technique shown in that screenshot — substituting the letter 'm' with the two-character sequence 'rn' to create a visually identical domain — is a textbook lookalike domain attack. It has been documented in security research since at least 2001. It was catalogued by MITRE in ATT&CK technique T1036.005 (Match Legitimate Name or Location). It is taught in every entry-level security awareness programme. It appears in phishing simulation toolkits used by red teams worldwide. There is nothing new here.

A Brief History of the Technique

Lookalike domain attacks — also called homograph attacks, typosquatting, or visual spoofing — exploit the limitations of human visual perception rather than technical vulnerabilities. The earliest academic documentation of the homograph attack is "The Homograph Attack" by Evgeniy Gabrilovich and Alex Gontmakher, published in Communications of the ACM in 2002, which demonstrated how Unicode characters from different scripts could be used to register domains visually indistinguishable from legitimate ones. The ASCII variant — substituting character combinations like 'rn' for 'm', 'vv' for 'w', or '1' for 'l' — is even older, predating the formalisation of the technique in the academic literature.
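The ASCII substitutions above lend themselves to a simple detection sketch: collapse the confusable sequences to a canonical form and check whether the result matches a protected brand domain. The function names and the tiny confusables table here are illustrative, not a production detector — real tools also handle Unicode homographs and edit-distance typos:

```python
# Sketch: detect ASCII lookalike domains by collapsing confusable
# character sequences to a canonical form and comparing against a
# set of protected brand domains.

# Multi-character sequences that render like a single letter.
CONFUSABLES = {
    "rn": "m",   # rnicrosoft.com -> microsoft.com
    "vv": "w",   # vvindows.com  -> windows.com
}
# Single characters that stand in for another letter.
SUBSTITUTIONS = {"1": "l", "0": "o"}

def canonicalise(domain: str) -> str:
    """Collapse confusable sequences so lookalikes map to the brand."""
    d = domain.lower()
    for seq, letter in CONFUSABLES.items():
        d = d.replace(seq, letter)
    for char, letter in SUBSTITUTIONS.items():
        d = d.replace(char, letter)
    return d

def is_lookalike(domain: str, protected: set) -> bool:
    """True if the domain is not protected but collapses to a protected one."""
    return domain.lower() not in protected and canonicalise(domain) in protected

protected = {"microsoft.com"}
print(is_lookalike("rnicrosoft.com", protected))  # True
print(is_lookalike("microsoft.com", protected))   # False
```

The point of the canonicalisation step is that it makes the attack's visual trick machine-checkable: the attacker relies on 'rn' and 'm' being indistinguishable to the eye, but a string replacement makes them identical to the comparison as well.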

By the mid-2000s, lookalike domains were a standard component of phishing kits sold on criminal forums. By 2010, every major anti-phishing framework included guidance on detecting them. ICANN has published multiple advisory documents on the risks of visually similar domain registrations. Browser vendors implemented visual safeguards for Unicode homographs more than a decade ago. The technique has appeared in countless post-incident reports, security conference talks, and awareness campaigns. It is, by any reasonable measure, well-understood and thoroughly documented.

Why It Still Works

The uncomfortable reality exposed by the viral screenshot is not that attackers are becoming more sophisticated. It is that a meaningful portion of the population — including technically literate people who use corporate email daily — is still not reliably detecting a substitution that takes two seconds to spot when you know what to look for. The commenter in the original post admitted as much: 'on a Monday they would have gotten me.' This is an honest and important admission. It should not be read as a personal failing. It should be read as a systemic one.

◆ Key Takeaway

When a 20-year-old technique continues to have a meaningful success rate, the failure is not in the attackers' sophistication. It is in the collective baseline of security literacy — in organisations, in security awareness programmes, and in the design of the tools people use every day.

The Security Culture Gap

Security awareness training, as currently practised in most organisations, is a compliance activity. It is measured by completion rates, not by behaviour change. Annual phishing simulations, click-through e-learning modules, and once-a-year password hygiene reminders produce metrics that satisfy auditors. They do not reliably produce the kind of automatic, habitual vigilance that would cause someone to scrutinise a sender domain before acting on a password reset email on a Monday morning.

The research on security awareness effectiveness is not encouraging. A 2023 meta-analysis of phishing simulation studies found that click rates on simulated phishing emails return to near-baseline within 4-6 months of training without reinforcement. Contextual, repeated, scenario-based training — the kind that actually changes behaviour — is expensive, time-consuming, and difficult to scale. Most organisations do not do it. The result is a workforce that has been told what a phishing email looks like, but has not developed the reflexive scepticism that would make the knowledge actionable under pressure.

The Design Problem

Security awareness training is not the only lever. The persistence of lookalike domain attacks also reflects a design failure in the tools we use. Most email clients display the sender's display name prominently and the actual email address in smaller, secondary text — or not at all until the user explicitly expands the header. On mobile, many email clients show only the display name by default. An attacker who sets their display name to 'Microsoft' and sends from noreply@rnicrosoft.com is exploiting a deliberate UX decision that prioritises visual cleanliness over security-relevant information.

This is not a new observation. Security researchers have been making this argument to email client vendors for years. The changes have been incremental and inconsistent. Gmail, Outlook, and Apple Mail all handle sender address display differently, and none of them defaults to showing the full address prominently in all contexts. Until the tools themselves are redesigned to surface security-relevant information at the moment of decision — when the user is reading the email, not after they have already clicked — the training burden on individual users will remain unreasonably high.
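Until the clients change, the check they hide can at least be made explicit at the gateway or in tooling. A minimal sketch, using Python's standard email.utils.parseaddr and a hypothetical brand allow-list, that flags a 'Microsoft' display name sent from a non-Microsoft domain:

```python
# Sketch: surface the actual sender domain hiding behind a friendly
# display name -- the comparison most mobile clients skip by default.
# KNOWN_BRANDS is a hypothetical allow-list for illustration only.
from email.utils import parseaddr

KNOWN_BRANDS = {"microsoft": {"microsoft.com"}}

def check_sender(from_header: str) -> str:
    display, address = parseaddr(from_header)
    domain = address.rsplit("@", 1)[-1].lower()
    brand = display.strip().lower()
    if brand in KNOWN_BRANDS and domain not in KNOWN_BRANDS[brand]:
        return f"WARNING: display name '{display}' but sender domain '{domain}'"
    return f"OK: {address}"

# Flags the lookalike: the display name claims Microsoft, the domain does not.
print(check_sender("Microsoft <noreply@rnicrosoft.com>"))
```

The design point is where the check runs: performed by software at delivery time, the display-name/domain mismatch costs nothing; performed by a human on a Monday morning, it is exactly the step that gets skipped.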

What Organisations Should Actually Do

  • Implement DMARC in enforcement mode. A properly configured DMARC policy (p=reject) prevents exact-domain spoofing: mail that claims to come from your actual domain but fails authentication, provided receiving servers honour the policy. Be clear about what it does not do: rnicrosoft.com is a separate domain the attacker controls and can authenticate legitimately, so DMARC on microsoft.com cannot block it. Lookalike registrations exist precisely to sidestep DMARC, which is why the next measure matters. For your own domain, there is no excuse for not having DMARC enforcement in place in 2026.
  • Deploy lookalike domain monitoring. Tools that alert when domains visually similar to your brand are registered give you advance warning of impersonation campaigns before they hit your users. Several threat intelligence platforms offer this as a standard feature.
  • Train for the specific technique, repeatedly. A single annual module that mentions 'check the sender address' is not sufficient. Run phishing simulations that specifically use lookalike domains. Debrief the results. Repeat every quarter, not every year.
  • Redesign your internal email UX where you can. For high-sensitivity communications — password resets, financial approvals, security alerts — configure your email gateway to add a clear visual banner when email originates from outside your organisation. Make the 'external sender' signal impossible to miss.
  • Treat awareness as a continuous programme, not a compliance checkbox. The most effective security cultures embed security thinking into day-to-day workflows — through regular briefings, incident debriefs shared with the whole team, and visible leadership engagement with security topics.
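To make the first bullet concrete, here is a minimal sketch that parses a DMARC TXT record — the kind published in DNS at _dmarc.example.com — and reports whether the policy is in enforcement mode. The record string is illustrative:

```python
# Sketch: parse a DMARC TXT record and report whether the policy
# is in enforcement mode (p=quarantine or p=reject). A real check
# would fetch the record via a DNS TXT query for _dmarc.<domain>.
def dmarc_policy(record: str) -> str:
    """Return the p= tag of a DMARC record, or 'none' if absent."""
    tags = {}
    for part in record.split(";"):
        if "=" in part:
            key, _, value = part.strip().partition("=")
            tags[key.strip()] = value.strip()
    return tags.get("p", "none")

def is_enforcing(record: str) -> bool:
    return dmarc_policy(record) in {"quarantine", "reject"}

record = "v=DMARC1; p=reject; rua=mailto:dmarc-reports@example.com"
print(dmarc_policy(record))  # reject
print(is_enforcing(record))  # True
```

A record with p=none is monitoring-only: it requests aggregate reports but asks receivers to deliver failing mail anyway, which is why "we have DMARC" and "we have DMARC enforcement" are very different claims.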

A Note on Kindness

The viral post generated comments ranging from knowing amusement to mockery of people who might have fallen for it. This reaction is counterproductive. Security culture is not built by making people feel foolish for nearly clicking a link. It is built by creating environments where near-misses are reported and discussed without shame, where the person who almost got phished is encouraged to share what happened so others can learn, and where the response to a successful attack is a systems review rather than a blame assignment.

The Swiss NCSC's guidance on incident reporting reflects this principle: the value of reporting a near-miss is not in documenting a failure, but in generating intelligence that protects others. Every organisation that builds a culture where 'I almost clicked this' is a safe thing to say out loud is an organisation that will detect campaigns earlier and recover faster. That is the evolution that actually matters — not in the attackers' techniques, but in our collective response to them.