99% of organisations are reliant on digital technology. The need for digital resilience is paramount in both cyber risks and artificial intelligence (AI) risks.

Our insurer partner Zurich explains more about the cyber and AI risks facing organisations today:

Building resilience to cyber risks involves creating a) practices to protect computer systems, networks, and data from cyber threats and b) strategies to respond to and recover from cyber security incidents. Traditional cyber resilience measures mitigate some risks associated with using AI, but there are other risks associated with AI that are beyond traditional cyber risk management. Before building resilience to cyber and AI risks, it is important to understand the different risks.

Building Resilience to Cyber Risks

Resilience to cyber risks requires a multi-layered approach that is broader than just cyber security. It should include preventive security measures and strategies to ensure the ability to recover quickly from cyber incidents and adapt to changing threats. For example, in the CIA triad:

  1. maintaining confidentiality requires protective measures like encryption and access controls,
  2. confirming integrity can be achieved through technologies like digital signatures,
  3. providing availability involves using backups and disaster recovery plans to counteract downtime.

The capability to detect an issue and recover from an incident is key to cyber resilience. If an organisation has trained their employees well, has mature monitoring and detection capabilities and has practiced their incident response and recovery plans, they are on the path to cyber resilience.

Building Resilience to AI Risks

Many of the measures used to build resilience to cyber risks apply to AI risks, especially for confidentiality of data and availability of systems. For both you should start with a comprehensive risk assessment to understand all potential risks and impacts. However, there are a few specific recommendations for building resilience to AI risks.


  • Identify the parts of the AI system that have access to sensitive information. Bear in mind the data inputted by users is normally stored too. Separate this data from front facing elements of the system and protect it. Ensure data is not stored for longer than necessary.
  • Any personal information is subject to standard data protection rules, no matter how it’s used. This includes the extensive data collected for the purposes of training AI. Conduct a data protection impact assessment, gain permission from data subjects and ensure no more data is collected than necessary.


  • Conduct adversarial testing by trying adversarial inputs (like ones that could cause data poisoning or prompt injects) to identify vulnerabilities.
  • Employ techniques like fairness-aware machine learning and robustness testing help maintain the accuracy of AI models.
  • Hire people with the right skill sets or upskill employees. The human factor is always important, one of the best ways to be resilient to AI risks is to have the skills in to develop, deploy and monitor AI models which are accurate and transparent.
  • To have accurate reliable information, organisations should conduct due diligence on their AI suppliers to inform data lineage, labelling practices and model development. Without this the integrity of the data can’t be validated and any issues found cannot be traced to the root.


  • Organisations should ensure there is always a ‘human in the loop’ to provide assurance and oversight of the results of AI activity. This mitigates data integrity risk because the model does not purely self-learn.
  • To avoid the risks associated with bias, data for training purposes should be representative of all groups and users should have the opportunity to help development by challenging the outputs.
  • AI use should be transparent and generally understood, to the point where internal and external users are aware of any interactions with it, even if it is a small part of a process.
  • Use regulatory sandboxes for anti-bias experimentation and encourage inclusive design principles.
  • Maintain awareness of regulatory and legal frameworks. If your organisation becomes overly reliant on a method or algorithm that is then prohibited it could have a major impact on your resilience.
  • Build organisational resilience to the risks associated with using AI, involve different business units, such as ethics or governance, risk and compliance. Organisational AI policies should be created, managed and governed collaboratively.

Many AI risks are similar to traditional cyber risks. Building cyber resilience can enhance AI resilience. However, building resilience in AI is not a one-time effort; it’s an ongoing commitment to data quality, model validation, transparency, human oversight, security, and ethical considerations. Resilient AI systems not only perform effectively but also adapt to new challenges and reflect ethical values. By implementing these strategies and best practices, we can create AI technologies that are robust, trustworthy, and capable of navigating the complexities of our ever-evolving digital landscape. In doing so, we ensure AI’s continued positive impact on society.


Source: Zurich Insurance