GenAI Security: The Problem with Private Data in LLMs

The advent of Large Language Models (LLMs) as a foundation of Generative Artificial Intelligence (GenAI) has revolutionized computing as we know it, becoming the hottest trend of 2023 and rolling into 2024. The technology offers remarkable capabilities for data analysis, mining, utilization, and other data-related computational tasks. However, it also raises significant concerns about the unintentional exposure of sensitive, controlled, and private data such as Personally Identifiable Information (PII) and Protected Health Information (PHI). As a result, the importance of GenAI security is expected to rise.

LLMs, by their very nature, are trained on massive datasets of text and code, often sourced from structured and unstructured databases, the internet, or open-access repositories. While these datasets provide a rich source of information for learning, they may also inadvertently contain sensitive PII and PHI, such as names, addresses, Social Security numbers, financial information, and medical records. Users often have little or no visibility into the sources of the training data, which can lead to data leakage and undermine the validity and trustworthiness of the data.

The inclusion of PII and PHI in an LLM’s training data poses several risks. Once embedded, this sensitive information becomes part of the model’s core, internal representation, making it difficult to fully remove or redact. Even if individual snippets of PII or PHI are identified and deleted, the model may retain associations between these sensitive data points, potentially enabling re-identification or inference of sensitive information.

The ability of LLMs to generate new or updated text and code raises GenAI security concerns about the potential for inadvertent disclosure of PII or PHI. For instance, an LLM trained on a dataset containing medical notes may inadvertently reveal patient information, even though it was never explicitly trained to do so.

Here are some real-world examples of these LLM challenges:

  • In 2020, a team of researchers from Google AI discovered that a large language model trained on a massive dataset of text and code could be used to extract sensitive information, such as names, addresses, and phone numbers, from its training data. The researchers did this by prompting the model with specific prefixes, such as “What is the name of the person who lives at this address?” or “What is the phone number of this person?”, and the model autocompleted the prompts with the sensitive information, posing a GenAI security risk (a simplified sketch of this kind of probe appears after this list).
  • In 2021, a company called Clearview AI was found to have collected and stored billions of images of people from the internet without their consent. The company used these images to train a facial recognition system that could identify individuals in real time. Clearview’s data security practices were widely criticized, and the company was forced to delete images from its database.
  • In 2022, hackers leaked the personal data of millions of people from the trading platform Robinhood. The leaked data included names, addresses, Social Security numbers, and investment information. The hackers gained access by exploiting a vulnerability in Robinhood’s website.
  • In 2023, the chatbot LaMDA was found to be capable of generating false and misleading information. Trained on a massive dataset of text and code, it could produce realistic-looking fake news articles and social media posts. Google AI researchers raised concerns about these capabilities, warning that the chatbot could be used to spread misinformation and propaganda.
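
To make the extraction risk concrete, below is a minimal sketch of this kind of probe: feed a model targeted prefixes and scan its completions for PII-like strings. It assumes a small open model served through the Hugging Face transformers library purely for illustration; the prefixes, regex patterns, and model choice are hypothetical and do not reproduce any specific study’s methodology.

```python
# Hypothetical training-data extraction probe (illustrative only).
# Assumes: pip install transformers torch
import re
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

# Prefixes intended to coax the model into completing with memorized data.
probe_prefixes = [
    "Contact John Smith at his home address: ",
    "For billing questions, call our office at ",
    "Patient record, SSN: ",
]

# Naive patterns for phone numbers and SSN-like strings in the completions.
pii_patterns = {
    "phone": re.compile(r"\(?\d{3}\)?[-.\s]\d{3}[-.\s]\d{4}"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

for prefix in probe_prefixes:
    completion = generator(prefix, max_new_tokens=30, num_return_sequences=1)[0]["generated_text"]
    for label, pattern in pii_patterns.items():
        for match in pattern.findall(completion):
            # A hit is only a candidate leak; it must still be checked against
            # the training corpus to confirm genuine memorization.
            print(f"possible {label} in completion of {prefix!r}: {match}")
```

Even this crude loop illustrates the core concern: an attacker needs nothing more than query access and a list of plausible prefixes.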

The challenges of mitigating PII and PHI risks in LLMs are multifaceted. Traditional methods of data anonymization or redaction may not be sufficient to prevent re-identification, especially when dealing with large and complex datasets. Moreover, the dynamic and evolving nature of language makes it difficult to anticipate all potential ways in which sensitive information could be inferred from an LLM’s responses. 
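
To see why simple redaction falls short, consider a minimal, pattern-based scrubber of the kind often run over free text before training. The patterns and the sample note below are purely illustrative: the direct identifiers are masked, but the quasi-identifiers that enable re-identification pass straight through.

```python
# Hypothetical pattern-based PII redaction (illustrative only).
import re

REDACTION_RULES = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),                 # SSN-like strings
    (re.compile(r"\(?\d{3}\)?[-.\s]\d{3}[-.\s]\d{4}"), "[PHONE]"),   # US phone numbers
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),             # email addresses
]

def redact(text: str) -> str:
    """Apply each redaction rule in turn and return the scrubbed text."""
    for pattern, placeholder in REDACTION_RULES:
        text = pattern.sub(placeholder, text)
    return text

note = (
    "Dr. Alvarez, the 54-year-old cardiologist in Springfield, can be reached "
    "at 555-867-5309 or a.alvarez@example.com regarding SSN 123-45-6789."
)
print(redact(note))
# The phone number, email, and SSN are masked, yet the quasi-identifiers
# (surname, specialty, age, city) survive and can still support re-identification.
```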

To address these challenges, a comprehensive approach to GenAI security is required, encompassing both technical and procedural measures. On the technical front, researchers are exploring techniques to detect and remove PII and PHI from both training datasets and trained models. This may prove to be an unwinnable race, as the sheer volume of information being leveraged, and the rate at which it is ingested, could outpace technical search capabilities. And, as mentioned previously, once sensitive data is embedded within the core learning model, it is difficult to fully remove or redact.


Security practitioners are currently trying to control PII and PHI risks in LLMs in two main ways, each with its own pitfalls.

  1. Data Governance & Access Controls

Security practitioners are creating data governance policies, access controls, and ongoing monitoring of LLM outputs to help minimize the risk of PII and PHI exposure. These techniques have already shown minimal effectiveness at stopping sensitive data exposure within traditional datasets. Some data owners may choose to exclude their datasets from an AI learning model altogether due to these risks, leaving the model with an incomplete representation of the facts in those datasets and calling the validity of its outputs into question.

  2. Confidential Computing

Another technique to consider is Confidential Computing, in which the LLM is contained within a Trusted Execution Environment (TEE). Unfortunately, Confidential Computing comes with its own challenges. The environment tends to be very complex to maintain, requiring specific technical expertise. It is also expensive and requires annual certification to ensure that the TEE meets minimum Confidential Computing requirements. Lastly, the dataset within this environment is processed as plaintext and is subject to side-channel attacks, data theft and ransom, and technology failures.

It’s clear the above options aren’t producing the desired results. The LLM data challenge requires a new way of thinking. 

As with traditional database security, encryption is the only way to truly protect PII and PHI (or any sensitive, controlled, and private) data. Of course, LLMs are challenging because their data is always in a state of active use. Historically, encryption of data in use has been the realm of Homomorphic Encryption (HE). Unfortunately, HE is not a solution that can be used when data needs to be retrieved immediately: it is far too slow for this type of application, and its expense and complexity are further barriers to entry.

Paperclip SAFE®, the only always-encrypted data security solution, is the answer to these emerging GenAI security concerns.

Given the risks related to sensitive, controlled, and private data, such as that covered under expanding PII, PHI, and privacy compliance requirements, the only answer is encryption of data in use. Paperclip SAFE® is the only solution capable of leveraging Searchable Symmetric Encryption (SSE) principles to deliver high-speed Create, Read, Update, and Delete (CRUD) operations on fully encrypted data. By leveraging SSE and Paperclip’s proprietary data shredding technology, SAFE maintains full NIST-approved AES-256 encryption on the data clients most need to secure.

Paperclip SAFE® enables encrypted data to be searched without decrypting the entire dataset, ensuring that sensitive information always remains protected while facilitating LLM training and query operations. This is achieved by constructing an index over the encrypted data that enables efficient keyword searches. 

When a user searches for a specific keyword, SAFE retrieves the corresponding encrypted data segments without revealing the plaintext content. 
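
For readers unfamiliar with the underlying technique, here is a generic, simplified sketch of searchable symmetric encryption: an inverted index keyed by keyword tokens (HMACs) that points at symmetrically encrypted records. It is a conceptual illustration only, not Paperclip SAFE’s proprietary implementation, and it assumes Python’s cryptography package for the record encryption.

```python
# Generic SSE-style encrypted index (conceptual sketch, not Paperclip SAFE).
# Assumes: pip install cryptography
import hashlib
import hmac
from collections import defaultdict
from cryptography.fernet import Fernet

INDEX_KEY = b"illustrative-secret-for-keyword-tokens"  # keys the keyword tokens
DATA_KEY = Fernet.generate_key()                        # keys the record encryption
cipher = Fernet(DATA_KEY)

def keyword_token(keyword: str) -> bytes:
    """Deterministic keyed token for a keyword; the server never sees the plaintext word."""
    return hmac.new(INDEX_KEY, keyword.lower().encode(), hashlib.sha256).digest()

# Server-side state: encrypted records plus an inverted index keyed by tokens.
encrypted_records: dict[int, bytes] = {}
inverted_index: dict[bytes, list[int]] = defaultdict(list)

def store(record_id: int, text: str) -> None:
    """Encrypt the record and index it under the token of each distinct word."""
    encrypted_records[record_id] = cipher.encrypt(text.encode())
    for word in set(text.lower().split()):
        inverted_index[keyword_token(word)].append(record_id)

def search(keyword: str) -> list[bytes]:
    """Return matching ciphertexts; only a holder of DATA_KEY can decrypt them."""
    return [encrypted_records[rid] for rid in inverted_index.get(keyword_token(keyword), [])]

store(1, "patient jane doe diagnosed with hypertension")
store(2, "invoice 42 paid by card ending 9921")

for ciphertext in search("hypertension"):
    print(cipher.decrypt(ciphertext).decode())  # decryption happens client-side
```

Because the keyword tokens are deterministic, the index can be matched without exposing plaintext keywords or records; what a server in this simplified scheme does learn is limited to access patterns, which production SSE designs work to minimize.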

Here are the benefits of using SAFE encryption for LLM privacy protection: 

  1. Data Privacy: SAFE ensures that sensitive data remains encrypted throughout the LLM’s lifecycle unless it is explicitly queried, preventing unauthorized access or disclosure.
  2. Efficient Search: The SAFE index facilitates efficient keyword searches on encrypted data, enabling LLMs to locate relevant information without compromising privacy.
  3. Reduced Threat Landscape: By encrypting sensitive data, SAFE greatly reduces, and in many cases eliminates, the impact of potential data breaches, data theft, and ransom attacks.

Want to encrypt your data all the time and stop the threats associated with LLMs? Contact us for an estimate. 


Sources and Citations: 

Rosenthal, D., and Villasenor, J. D. (2022). The risks of personal data in large language models. Nature Machine Intelligence, 4(11), 951-957. 

Li, J., Fang, R., and Zhao, J. (2023). Privacy and security challenges in large language models. IEEE Transactions on Information Forensics and Security, 18(1), 22-37. 

Kearns, M., Roth, A., and Wu, Z. S. (2023). The ethics of large language models. arXiv preprint arXiv:2301.11368. 

Gebru, T., Mitchell, M., and Bender, E. (2020). The malicious use of artificial intelligence: Forecasting, prevention, and mitigation. arXiv preprint arXiv:2002.07228.

National Academies of Sciences, Engineering, and Medicine. (2020). The malicious use of artificial intelligence: Forecasting, prevention, and mitigation. Washington, DC: The National Academies Press. 

Song, D. X., Wagner, D., and Perrig, A. (2000). Practical techniques for searches on encrypted data. Proceedings of the IEEE Symposium on Security and Privacy. https://people.eecs.berkeley.edu/~dawnsong/papers/se.pdf