The Swiss Cyber AI Conference brought together CISOs, CEOs, lawyers, and researchers to debate a single uncomfortable question: is security ready for AI, and is AI ready for security? Over the course of a full day of panels, fireside chats, and keynotes — featuring voices from SV Group, Google Cloud, Microsoft, Swisscom, Akamai, the ISF, and beyond — a set of consistent themes emerged. Not consensus, exactly. But a shared map of the territory. These are my ten takeaways.
1. Identity Is Still the Foundation — But Context Is the New Perimeter
Sasha Maier, CISO of SV Group, opened with a formulation worth keeping: identity on steroids. The boundaries of what we protect are no longer defined by network topology but by context — who is accessing what, from where, under what conditions, with what level of assurance. This is not a new idea, but AI changes the stakes. When an agent acts on behalf of a user, the identity question doubles: whose context governs the agent's behaviour? Doruntina Jakupi of Google Cloud Security extended this directly: AI agents are personas, and personas need onboarding. You cannot deploy an agent the way you deploy a script. You need to define its identity, its scope, its trust boundaries, and its escalation paths — before it touches production.
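To make the onboarding requirement concrete, here is a minimal sketch of what an agent manifest could look like before production deployment. The format, field names, and checks are my own illustration, not a framework presented at the conference:

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical onboarding manifest for an AI agent, mirroring what we
# already require for a human joiner: identity, owner, scope, exit date.
@dataclass
class AgentManifest:
    agent_id: str                  # unique, non-reusable identity
    human_owner: str               # an accountable person, not a team alias
    allowed_resources: list[str]   # explicit allow-list, never "*"
    trust_boundary: str            # where the agent's outputs stop being trusted
    escalation_path: str           # who gets paged when it misbehaves
    review_date: date              # access expires unless re-approved

def ready_for_production(m: AgentManifest) -> bool:
    """Refuse deployment if any onboarding field is missing or too broad."""
    if not m.allowed_resources or "*" in m.allowed_resources:
        return False
    fields_present = all([m.agent_id, m.human_owner,
                          m.trust_boundary, m.escalation_path])
    return fields_present and m.review_date >= date.today()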
2. Secure by Design Still Matters — But Systems Are No Longer Deterministic
Sandro Nafzger of Bug Bounty Switzerland raised what may be the sharpest architectural challenge of the AI era: Secure by Design was built on the assumption that systems behave predictably. AI systems — especially large language models — do not. The attack surface question becomes genuinely complex: is it the same attack surface, or a different one? My position is that the attack surface is largely the same — the entry points, the interfaces, the credentials, the APIs — but the impact of a successful breach changes qualitatively. An attacker who compromises an AI system does not just exfiltrate data or encrypt files. They can drive the system to behave in ways that are unpredictable, persistent, and potentially invisible for extended periods. The blast radius of an AI compromise is harder to bound than a traditional one.
3. Treat AI Agents as Employees — Apply Least Privilege from Day One
Christian Serra of Cyberopex put it simply and correctly: treat AI agents as employees. This is not a metaphor — it is an operational framework. An employee who joins a company does not immediately get access to all systems. They go through onboarding. Their access is scoped to their role. Their actions are logged. Their behaviour is monitored. Their access is reviewed and revoked when the role changes. None of this is exotic. All of it applies directly to AI agents. The principle of least privilege, applied rigorously to agentic systems, is one of the most concrete things Swiss security teams can do right now. The basics still apply. Do not forget them because something is called AI.
◆ Key Takeaway
AI agents are not tools — they are principals. They need identities, scoped permissions, audit trails, and offboarding procedures. Applying least privilege to agentic systems is not optional; it is the minimum viable control for any organisation deploying AI in operational workflows.
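A minimal sketch of what the employee framing looks like in code, assuming a deny-by-default permission table. The agent name and permission strings are hypothetical:

```python
import logging

logging.basicConfig(level=logging.INFO)
audit = logging.getLogger("agent.audit")

# Illustrative permission table: each agent gets only the actions its
# role needs, exactly as a new employee's access is scoped to the job.
PERMISSIONS: dict[str, set[str]] = {
    "invoice-triage-agent": {"read:invoices", "flag:anomaly"},
}

def authorize(agent_id: str, action: str) -> bool:
    """Deny by default; log every decision so access can be reviewed."""
    allowed = action in PERMISSIONS.get(agent_id, set())
    audit.info("agent=%s action=%s allowed=%s", agent_id, action, allowed)
    return allowed

# An out-of-scope request is denied and leaves a trace in the audit log:
authorize("invoice-triage-agent", "approve:payment")  # -> False
```

None of this is novel machinery. That is the point: the controls already exist, and they transfer to agents unchanged.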
4. Attackers Using AI Are Already Faster and More Effective
Gianclaudio Moresi, CISO of Frodo Group, made a point that should be uncomfortable for every defender in the room: attackers using AI are further ahead and more effective, while defenders are often constrained by compliance frameworks and internal policies that limit how they can use the same tools. This asymmetry is not new — attackers have always had more operational freedom — but AI amplifies it. Phishing and social engineering attacks supported by AI are becoming genuinely difficult to detect. The traditional signals — unusual language, odd formatting, inconsistent metadata — are disappearing. Moresi's conclusion: identification may become too complex to rely on as a primary defence layer. Controls that assume humans can detect AI-generated content are fragile. Controls that do not require detection — segmentation, authentication, authorisation — are more durable.
5. Provide Good AI Tools to Employees — Or They Will Find Their Own
Balz Zurrer, CEO of ONline Group, made the shadow IT argument for the AI era with clarity: if you do not give employees good, controlled AI tools, they will use uncontrolled ones. He named DeepSeek explicitly — a model with known data residency and training data concerns — as an example of what employees reach for when the organisation provides nothing. This is not a new dynamic. BYOD, consumer cloud storage, unauthorised SaaS — the pattern repeats with every technology wave. The response is also familiar: provide sanctioned alternatives, make them genuinely useful, and make the unsanctioned path harder. The security argument for enterprise AI tooling is not just about productivity. It is about controlling the data that flows into models the organisation does not govern.
6. Human Biometrics Are No Longer a Reliable Authentication Layer
Voice cloning and deepfake capabilities have crossed a threshold. Zurrer was explicit: voice biometric authentication no longer works as a standalone control. The same applies to video verification in uncontrolled environments. The Swiss financial sector has specific exposure here — voice authentication for call centres and wealth management relationships has been a standard control for years. The NCSC and FINMA have not yet produced specific updated guidance, but the operational reality is ahead of the regulatory framework. Organisations still relying on voice biometrics as a meaningful authentication factor — rather than as a convenience layer backed by something stronger — should be revisiting that design now.
◆ Key Takeaway
Voice biometric authentication is no longer a reliable security control. Organisations using it as a primary or sole verification layer for sensitive operations — particularly in financial services — should treat it as deprecated and design for replacement. AI-generated voice cloning is accessible, convincing, and already being used in Swiss fraud cases.
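One way to demote voice to the convenience layer described above, sketched with hypothetical risk tiers and factor names. The voice match may reduce friction on low-risk calls, but it never decides a sensitive operation on its own:

```python
from enum import Enum

class Risk(Enum):
    LOW = 1        # e.g. balance enquiry
    SENSITIVE = 2  # e.g. payment release, beneficiary change

def verify_caller(voice_match: bool, otp_verified: bool, risk: Risk) -> bool:
    """Voice biometrics as convenience, never as the deciding factor."""
    if risk is Risk.LOW:
        return voice_match or otp_verified
    # Sensitive operations always require a factor that cannot be cloned
    # from a few seconds of recorded audio.
    return otp_verified
```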
7. The F1 Paradigm: Security Controls as Race Infrastructure
Rico Petrillo and Bogdan Carsten from Swisscom and Akamai delivered what was, for me, the most operationally useful framing of the day. The metaphor: Formula 1 as a model for AI-era security. A great engine — AI — cannot save a weak chassis. Security controls are not a constraint on AI; they are the infrastructure that makes it safe to run fast. The mapping is precise and worth repeating:
- Regulations are the track limits — FINMA, ISA, DORA define the boundaries within which you operate. You do not choose whether to comply; you design within them.
- Monitoring is the digital radio (APIs) — real-time telemetry from every system, feeding the pit wall with signal. Without it, you are driving blind.
- The guardrails are input/output validation — validating what goes into and comes out of AI systems against policy. This is the AI-specific firewall. It is not optional (a minimal sketch follows this list).
- The survival cell is sandboxing and microsegmentation — when something goes wrong, the blast radius must be contained. The survival cell does not prevent crashes; it limits what a crash destroys.
- Stress testing is continuous red teaming — you do not find out the chassis fails by crashing on race day. You test under load, repeatedly, before the race.
- The pit wall team is your MSP / SOC — its value is its signal-to-noise ratio. Effective managed security requires filtering signal from noise. A pit wall that cannot distinguish a tyre-wear alert from a false positive is not useful.
- The 1.8-second tyre change is AI-accelerated threat response — AI makes threats faster. Your response must be faster still. The competitive window for remediation is shrinking.
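To make the guardrail item concrete, here is a minimal sketch of validation wrapped around a model call. The denylist patterns are deliberately naive placeholders; a production policy engine would be far richer:

```python
import re

# Illustrative policy checks on either side of a model call. The point is
# architectural: validation wraps the model rather than living inside it.
INPUT_DENYLIST = [r"(?i)ignore (all|previous) instructions"]    # crude injection signal
OUTPUT_DENYLIST = [r"\b\d{4}[ -]?\d{4}[ -]?\d{4}[ -]?\d{4}\b"]  # card-number-shaped leak

def guarded_call(prompt: str, model_fn) -> str:
    """Validate the input against policy, call the model, validate the output."""
    if any(re.search(p, prompt) for p in INPUT_DENYLIST):
        raise ValueError("prompt rejected by input guardrail")
    answer = model_fn(prompt)
    if any(re.search(p, answer) for p in OUTPUT_DENYLIST):
        raise ValueError("response blocked by output guardrail")
    return answer
```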
Colin Chapman's dictum — "experience is the sum of the mistakes you survived" — was quoted from the stage. In security, survival increasingly depends on having built the right infrastructure before the incident, not on reacting well during it.
8. Existing Policies Are Expired — New Risks Arrive Daily, Not in Five-Year Cycles
Fabio Nigi, CISO of Philip Morris, made a point about policy obsolescence that resonated throughout the day: existing policies are not valid anymore in the AI era. This is not primarily about the content of the policies — it is about the rhythm. Traditional policy frameworks operate on annual review cycles, sometimes longer. AI risk is evolving on a weekly, sometimes daily basis. A policy written in January 2026 may be materially incomplete by April. Corrado Iorizzo from Microsoft reinforced this: generative AI and scaling are new, and the impacts on organisational risk are still being discovered. Failure is part of the game with generative LLMs — what matters is whether you have the controls and monitoring in place to detect failure quickly and limit its consequences. The implication for Swiss organisations: policy review cadences need to be restructured. Quarterly minimum for AI-related policies. Some elements may require continuous monitoring rather than periodic review.
9. Liability for AI Agent Failure Is Unresolved — and the Process Owner Bears the Risk
Rocco Talleri, appearing in the legal fireside chat, addressed the liability question directly: will insurance cover AI incidents? The short answer, currently, is no — or at least not reliably. Risk estimation for AI systems is genuinely difficult, and insurers are responding by excluding or heavily restricting AI-related risk from coverage. This creates a gap that will take years to close. In the absence of specific Swiss AI regulation (unlike the EU AI Act, Switzerland does not yet have equivalent national legislation), liability for AI agent failures falls to the process owner. This is my core takeaway from the legal discussion: defining processes is, once again, a foundational security topic. An organisation that cannot clearly articulate who owns an AI-driven process, what that process is authorised to do, and what the governance checkpoints are, will find itself fully exposed when something goes wrong. Accountability requires traceability, and traceability requires documented process ownership.
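A sketch of what documented process ownership could look like as a registry entry. The fields are my own illustration of the traceability requirement, not a legal template:

```python
from dataclasses import dataclass

# Illustrative registry entry: liability follows the process owner, so
# ownership, authorised actions, and checkpoints are written down before
# the agent runs, not reconstructed after an incident.
@dataclass(frozen=True)
class AIProcessRecord:
    process_name: str             # e.g. "credit pre-scoring"
    owner: str                    # the person who answers when it fails
    authorised_actions: tuple[str, ...]
    checkpoints: tuple[str, ...]  # human sign-off points in the flow
    evidence_log: str             # where every decision is traced
```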
10. Poisoned Agents: The Invisible Long-Game Attack
Alessandro Zoncu of Critical Case raised what may be the most underappreciated risk in the room. With AI compromise, your data and your agents may still be available and apparently functional — but poisoned, subtly influencing decisions in ways that are not immediately detectable. The attack vector is patience: an attacker who gains influence over an AI system used for critical decisions does not necessarily announce themselves. They wait. They nudge. They observe. Months may pass before the manipulation becomes visible — if it ever does. Zoncu's formulation: calculating the cost of a poisoned AI decision is extremely difficult, precisely because the causal chain between manipulation and outcome is long and indirect. For Swiss financial institutions relying on AI for credit decisions, risk scoring, fraud detection, or trading — this is not a theoretical concern. It is a design constraint. Any AI system involved in consequential decisions needs integrity monitoring, anomaly detection on its outputs, and human-in-the-loop checkpoints at appropriate intervals. Trust in AI outputs should be earned through continuous validation, not assumed at deployment.
◆ Key Takeaway
The most dangerous AI attack may not be a breach or a shutdown — it may be a slow, invisible corruption of the decisions your AI systems are making on your behalf. Integrity monitoring for AI outputs is not a luxury; it is a necessary control for any organisation where AI influences consequential decisions.
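What continuous validation could look like in its simplest form: a control-chart style monitor over a stream of decision scores, flagging slow shifts a human reviewer would miss. This illustrates the principle; it is not, on its own, a poisoning detector:

```python
from collections import deque
from statistics import mean, stdev

class OutputDriftMonitor:
    """Flag when the recent mean of a decision stream drifts from a trusted baseline."""

    def __init__(self, baseline: list[float], window: int = 200, z: float = 3.0):
        self.mu = mean(baseline)      # baseline from known-good conditions
        self.sigma = stdev(baseline)
        self.recent = deque(maxlen=window)
        self.z = z                    # alert threshold in standard errors

    def observe(self, score: float) -> bool:
        """Return True when the rolling window deviates significantly from baseline."""
        self.recent.append(score)
        if len(self.recent) < self.recent.maxlen:
            return False              # not enough data yet
        standard_error = self.sigma / (len(self.recent) ** 0.5)
        return abs(mean(self.recent) - self.mu) > self.z * standard_error
```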
What Swiss Security Teams Should Do This Week
Ten takeaways from a conference can feel abstract. Here is how to translate them into immediate action:
- Inventory every AI agent or automated workflow in your environment and confirm that each has a defined owner, a documented scope of access, and an audit trail. If any of those three are missing, that is a gap to close before the next incident.
- Review any authentication flows that rely on voice biometrics as a meaningful control and initiate a risk assessment of what replacement looks like.
- Assess your AI policy review cadence: if it is annual, it is already out of date.
- Map your AI-driven decision processes and identify where human-in-the-loop checkpoints exist — and where they do not.
- Read the F1 framework again and ask honestly where your guardrails are for AI input and output validation. For most Swiss organisations, the honest answer today is: incomplete.
The Swiss Cyber AI Conference was not a forum for easy answers. It was a room full of practitioners who are living with these questions daily, and who are mostly being honest about the fact that the answers are still being worked out. That honesty is itself useful. The security community that acknowledges uncertainty and builds cautiously is better positioned than the one that assumes the controls from the last era will hold in this one.