99% of organisations are reliant on digital technology. The need for digital resilience is paramount in both cyber risks and artificial intelligence (AI) risks.
Our insurer partner Zurich explains more about the cyber and AI risks facing organisations today:
Building resilience to cyber risks involves creating a) practices to protect computer systems, networks, and data from cyber threats and b) strategies to respond to and recover from cyber security incidents. Traditional cyber resilience measures mitigate some risks associated with using AI, but there are other risks associated with AI that are beyond traditional cyber risk management. Before building resilience to cyber and AI risks, it is important to understand the different risks.
Building Resilience to Cyber Risks
Resilience to cyber risks requires a multi-layered approach that is broader than just cyber security. It should include preventive security measures and strategies to ensure the ability to recover quickly from cyber incidents and adapt to changing threats. For example, in the CIA triad:
- maintaining confidentiality requires protective measures like encryption and access controls,
- confirming integrity can be achieved through technologies like digital signatures,
- providing availability involves using backups and disaster recovery plans to counteract downtime.
The capability to detect an issue and recover from an incident is key to cyber resilience. If an organisation has trained their employees well, has mature monitoring and detection capabilities and has practiced their incident response and recovery plans, they are on the path to cyber resilience.
Building Resilience to AI Risks
Many of the measures used to build resilience to cyber risks apply to AI risks, especially for confidentiality of data and availability of systems. For both you should start with a comprehensive risk assessment to understand all potential risks and impacts. However, there are a few specific recommendations for building resilience to AI risks.
Confidentiality
- Identify the parts of the AI system that have access to sensitive information. Bear in mind the data inputted by users is normally stored too. Separate this data from front facing elements of the system and protect it. Ensure data is not stored for longer than necessary.
- Any personal information is subject to standard data protection rules, no matter how it’s used. This includes the extensive data collected for the purposes of training AI. Conduct a data protection impact assessment, gain permission from data subjects and ensure no more data is collected than necessary.
Integrity
- Conduct adversarial testing by trying adversarial inputs (like ones that could cause data poisoning or prompt injects) to identify vulnerabilities.
- Employ techniques like fairness-aware machine learning and robustness testing help maintain the accuracy of AI models.
- Hire people with the right skill sets or upskill employees. The human factor is always important, one of the best ways to be resilient to AI risks is to have the skills in to develop, deploy and monitor AI models which are accurate and transparent.
- To have accurate reliable information, organisations should conduct due diligence on their AI suppliers to inform data lineage, labelling practices and model development. Without this the integrity of the data can’t be validated and any issues found cannot be traced to the root.
Ethics
- Organisations should ensure there is always a ‘human in the loop’ to provide assurance and oversight of the results of AI activity. This mitigates data integrity risk because the model does not purely self-learn.
- To avoid the risks associated with bias, data for training purposes should be representative of all groups and users should have the opportunity to help development by challenging the outputs.
- AI use should be transparent and generally understood, to the point where internal and external users are aware of any interactions with it, even if it is a small part of a process.
- Use regulatory sandboxes for anti-bias experimentation and encourage inclusive design principles.
- Maintain awareness of regulatory and legal frameworks. If your organisation becomes overly reliant on a method or algorithm that is then prohibited it could have a major impact on your resilience.
- Build organisational resilience to the risks associated with using AI, involve different business units, such as ethics or governance, risk and compliance. Organisational AI policies should be created, managed and governed collaboratively.
Many AI risks are similar to traditional cyber risks. Building cyber resilience can enhance AI resilience. However, building resilience in AI is not a one-time effort; it’s an ongoing commitment to data quality, model validation, transparency, human oversight, security, and ethical considerations. Resilient AI systems not only perform effectively but also adapt to new challenges and reflect ethical values. By implementing these strategies and best practices, we can create AI technologies that are robust, trustworthy, and capable of navigating the complexities of our ever-evolving digital landscape. In doing so, we ensure AI’s continued positive impact on society.
Source: Zurich Insurance
Search Blog Posts
Recent Posts
Archives
- January 2025
- December 2024
- November 2024
- October 2024
- July 2024
- May 2024
- April 2024
- March 2024
- February 2024
- January 2024
- December 2023
- November 2023
- October 2023
- September 2023
- August 2023
- July 2023
- June 2023
- May 2023
- March 2023
- February 2023
- January 2023
- December 2022
- November 2022
- October 2022
- September 2022
- August 2022
- July 2022
- June 2022
- May 2022
- April 2022
- March 2022
- February 2022
- January 2022
- December 2021
- November 2021
- October 2021
- September 2021
- August 2021
- July 2021
- June 2021
- May 2021
- April 2021
- March 2021
- February 2021
- January 2021
- December 2020
- November 2020
- October 2020
- September 2020
- August 2020
- July 2020
- June 2020
- May 2020
- April 2020
- March 2020
- February 2020
- January 2020
- November 2019
- October 2019
- September 2019
- August 2019
- July 2019
- June 2019
- May 2019
- April 2019
- March 2019
- February 2019
- January 2019
- December 2018
- October 2018
- September 2018
- August 2018
- July 2018
- June 2018
- May 2018
- April 2018
- February 2018
- January 2018
- December 2017
- November 2017
- October 2017
- June 2017
- May 2017
- March 2017
- February 2017
- January 2017
- September 2016
- May 2016
- April 2016