Introduction
The UK’s National Cyber Security Centre (NCSC) has expressed concerns regarding the potential security risks associated with AI chatbots, especially in professional settings. These chatbots, including prevalent ones like OpenAI’s ChatGPT and Google’s Bard, could pose significant vulnerabilities.
Considering the recent introduction of AI tools aimed at the corporate sector by industry behemoths such as Google and OpenAI, the NCSC advises against the hurried implementation of these technologies. Due to the relatively new nature of these tools, particularly the large language models (LLMs) that underpin them, our grasp of their full capabilities and potential weaknesses remains limited. The NCSC characterises our current understanding of LLMs as being in a “beta” development phase.
Exploring The Risks
While AI has the potential to bring positive changes to society, it is also important to address the potential security risks associated with its use. According to security experts, AI chatbots are susceptible to manipulation through tactics such as SQL and prompt injection attacks. This could potentially result in unauthorised transactions being performed by a bank’s chatbot. Furthermore, the design of these chatbots poses an inherent challenge as they may have difficulty distinguishing between commands and data, thereby increasing the risk of manipulation.
Christine Lai, the AI Security Lead at CISA, emphasises the importance of designing AI with security in mind. According to Lai, manufacturers of AI technology should make the security of their customers a primary business concern rather than just a technical aspect. They should prioritise security at every stage of the product’s development, from its conception to planning for its eventual end-of-life. Additionally, AI systems should be secure and ready to use upon purchase, without the need for extra configuration or added expenses.
Attackers can lead AI models astray by making minor manipulations to input data, which are often invisible to the untrained eye. Malicious actors can employ tactics such as data poisoning, where they covertly integrate malicious data into AI’s training reservoir. This can cause the system to exhibit harmful behaviours, skewing its outputs and decision-making processes.
The automation capabilities of AI amplify traditional cybersecurity challenges. AI can execute cyber-attacks with unparalleled speed and volume, potentially overwhelming defences and outpacing human response times. Adding to the growing list of AI-infused threats is the rise of deepfakes. Malicious actors can now produce hyper-realistic but entirely fabricated audio or visual content. This capability not only challenges the very essence of truth in digital media but can also be weaponised to spread disinformation or manipulate stock markets.
The NCSC advises caution when implementing AI chatbots. They recommend treating these chatbots like any other beta software and conducting thorough testing to identify vulnerabilities and evaluate potential threats. It is important to employ tactics such as social engineering to ensure the safety and security of the system. It is worth noting that the NCSC’s alert comes at a crucial time, as OpenAI recently launched their ChatGPT-4 enterprise platform, and Google released a suite of 20 cutting-edge AI tools designed for large businesses.
Conclusion
According to a recent survey by Deloitte, over four million people in the UK (8% of respondents) have used Generative AI tools for work. However, it is important to be cautious when using these tools frequently as they may expose sensitive corporate data. Deloitte’s partner and head of technology, Paul Lee, highlights that Generative AI technology is still in its early stages, with ongoing improvements needed in user interfaces, regulatory environment, legal status, and accuracy. More investment and development in the coming months could address these challenges and lead to greater adoption of Generative AI tools.