Kroll AI Insights Hub - Key AI Security Risks | Redscan

Artificial intelligence (AI) security risks continue to evolve, presenting significant challenges for organisations in all industries.

From chatbots like ChatGPT to the large language models (LLMs) that power them, managing and mitigating potential AI vulnerabilities is an increasingly important aspect of effective cybersecurity.

Kroll’s new AI insights hub explores some of the key AI security challenges, informed by our expertise in helping businesses of all sizes across a wide range of sectors.

Some of the topics covered on the Kroll AI insights hub are outlined below. This resource will be regularly updated with the latest Kroll AI content, so check back for more insights.

Visit the Kroll AI insights hub


AI security risks and recommendations

An introduction to the types of AI security risks that security practitioners need to understand to mitigate the specific and niche security gaps associated with machine learning (ML), LLMs and AI products. These include model provenance, inference servers and model hosting, and lack of authentication.


Emerging chatbot security concerns

While AI chatbots such as ChatGPT, Microsoft Copilot, Meta AI and Google’s Gemini have numerous benefits and uses, they also have notable drawbacks. As the technology has developed, cyber threat actors have been quick to attempt to exploit this software for malicious purposes. Although chatbot providers have implemented safeguards and continue to address risks associated with the software, there is still significant potential for its misuse in cyberattacks.


AI risks and compliance strategies

While the use of AI in financial services offers market participants new opportunities, it may also subject them to a spectrum of risks and associated compliance challenges. Unmitigated and uncontrolled AI risks could expose investment advisers regulated by the U.S. Securities and Exchange Commission (SEC) to reputational, enforcement and examination liability.


AI business strategy considerations

To deploy AI successfully, organisations need to consider cost, security, personnel, compatibility and competitive position, as well as ethical and regulatory dilemmas. A robust AI business strategy can help them to understand and address these difficult questions. This type of strategy should include a decision tree and cost/benefit analysis, together with risks and countermeasures.


AI governance in financial services

Complex AI models are now commonly deployed to assist with practices such as financial crime detection, credit scoring, risk assessment and customer engagement, but their management is coming under increased scrutiny, making AI governance and oversight a critical consideration. AI has the potential to dramatically increase the scale, speed and depth of data that can be analysed. However, although this approach promises to transform traditional compliance methodologies, it is not without its challenges.
