With growing interest in generative artificial intelligence (AI) systems around the world, researchers at the University of Surrey have created software that can test how much information AI has gleaned from an organization’s digital database.
Surrey’s verification software can be used as part of a company’s online security protocol, helping an organization understand if AI has overlearned or even accessed sensitive data.
The software can also determine if the AI has detected and is able to exploit flaws in the software code. For example, in the context of online gaming, it can determine whether an AI has learned to always win online poker by exploiting a coding error.
“In many applications, AI systems interact with each other or with humans, such as self-driving cars on a highway or hospital robots. Working out what an intelligent AI data system knows is a long-standing problem, one that has taken us years to find a working solution for.
“Our verification software can deduce how much AI systems can learn from their interactions, whether they have enough knowledge to enable successful collaboration, and whether they have too much knowledge that would compromise privacy. By verifying what AI knows, we can give organizations the confidence to safely bring the power of AI into secure settings.”
Surrey’s study on software won the Best Paper Award at the 25th International Symposium on Formal Methods.
Professor Adrian Hilton, Director of the Human Centered AI Institute at the University of Surrey, said:
“The past few months have seen a surge in public and industry interest in generative AI models, fueled by advances in large language models such as ChatGPT. Building tools that can verify the performance of generative AI is essential for their safe and responsible deployment. This research is an important step towards maintaining the privacy and integrity of the datasets used in training.”
More information: https://openresearch.surrey.ac.uk/esploro/outputs/99723165702346