By: Iman Jaffari

Over the past few years, generative AI systems like DALL-E, GPT-3, and Stable Diffusion have rapidly advanced in their ability to produce novel content. These systems can generate remarkably realistic images, text, audio, video, and more from short descriptive prompts. The creative potential of systems that can “imagine” any scenario or content on demand has sparked tremendous buzz. However, serious concerns have also emerged about data privacy in how these systems are developed and deployed. As generative models become more ubiquitous and influential, the need for thoughtful data governance is paramount. Without rigorous controls on the data used to train these systems, there is a substantial risk of AI misusing or spreading personal information without consent. Core principles of consent, transparency, accountability, and user control must be integrated into data practices to establish public trust in generative AI.

These principles are fundamental to ensuring ethical and responsible AI development. They are prominently emphasized in data protection regulations such as the General Data Protection Regulation (GDPR), which underscores the importance of obtaining informed consent, maintaining transparency in data processing, enforcing accountability measures, and granting individuals control over their personal data. Incorporating these principles is not only a legal requirement but also a crucial ethical consideration in the development and deployment of AI systems. [Source: GDPR – Articles 5, 6, 7, and 15-21]. In this environment of rapid change, organizations must proactively develop governance frameworks that allow them to maximize benefits while minimizing harm.

How Generative AI Models Work

Generative AI models are akin to very advanced, specialized brains made out of computer code. These models are built on a technology called transformers. Transformers can best be imagined as attentive students who can look at a large amount of information at once and understand how everything is connected. This helps them learn the context, or the bigger picture, of whatever they are studying.

Transformers do this through a process called “self-attention”. It is as if each piece of information could look at every other piece and figure out how they all relate. This lets the model build a detailed, context-rich understanding of its input, seeing how each piece fits into the whole.
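
To make this less abstract, here is a minimal sketch of the self-attention computation in plain Python with NumPy. It is a deliberately simplified, single-head illustration; the matrix names and dimensions are assumptions chosen for the toy example, not the internals of any particular model.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax: turns raw scores into weights that sum to 1.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    # X is a (sequence_length, d_model) matrix of token embeddings;
    # Wq, Wk, Wv are learned projection matrices (here: random stand-ins).
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])  # how much each token "looks at" every other token
    weights = softmax(scores)                # one attention distribution per token
    return weights @ V                       # each output row blends information from all tokens

# Toy example: 4 tokens with 8-dimensional embeddings.
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)  # (4, 8)
```

Each output row is a weighted blend of every input token, which is exactly the “each piece looks at every other piece” intuition described above.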

To learn, these models are fed huge amounts of data from the internet. They practice over and over, adjusting millions or even billions of tiny internal settings called parameters, until they become very good at mimicking the patterns naturally found in that data. Once trained, the models can create new outputs like text, images, or even music. They do this by predicting what comes next in a sequence, similar to how one might guess the next word in a sentence. This allows them to generate realistic and varied outputs based on the trove of data they have been fed.
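
The “guess the next word” idea can also be made concrete. The toy sketch below generates text autoregressively: at each step it asks for a probability distribution over a tiny vocabulary and samples the next word. The vocabulary and the stand-in probability function are invented purely for illustration; a real model computes this distribution from billions of learned parameters.

```python
import numpy as np

vocab = ["the", "cat", "sat", "on", "mat", "."]
rng = np.random.default_rng(42)

def toy_next_word_probs(context):
    # Stand-in for a trained network: a real model would compute these
    # probabilities from learned parameters; here we derive them from the
    # context so the demo is self-contained.
    local = np.random.default_rng(abs(hash(tuple(context))) % 2**32)
    logits = local.normal(size=len(vocab))
    return np.exp(logits) / np.exp(logits).sum()

# Autoregressive generation: repeatedly sample "what comes next" and append it.
context = ["the"]
for _ in range(5):
    probs = toy_next_word_probs(context)
    context.append(rng.choice(vocab, p=probs))

print(" ".join(context))
```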

Privacy Risks in Generative AI

As mentioned, the training process for cutting-edge generative models uses very large datasets containing billions of examples and reference points drawn from images, text, audio, and video, usually scraped from public internet sources. Despite filtering efforts, these massive corpora likely still contain identifiable personal information: names, conversations, photos, interests, writing samples, and other details tied to individuals.[1]

Models with billions of optimized parameters have immense capacity to implicitly profile people by discovering and memorizing these traces across data points. For instance, text generators like GPT-3 can disclose private information learned from training data, and image generators like DALL-E 2 can fabricate realistic photos of people from text prompts. Absent governance, more advanced models only increase the risk of abuse or misuse.[2]

Deepfake algorithms also leverage generative AI to impersonate real people, producing convincing, personalized audio and video content.[3]

Finally, even where companies screen out obvious identifiers, the interconnectedness of web data enables models to piece together “anonymous” data and infer private details about individuals.[4]
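
A classic illustration of this risk is record linkage: two datasets that are each “anonymous” on their own can be joined on quasi-identifiers such as ZIP code, birth date, and sex, re-attaching names to sensitive records. The sketch below uses entirely invented data to show the mechanic.

```python
# De-identified records (no names) and a public roll (no sensitive data).
# All values are fabricated for illustration.
medical = [
    {"zip": "02138", "dob": "1945-07-31", "sex": "F", "diagnosis": "hypertension"},
    {"zip": "90210", "dob": "1980-01-05", "sex": "M", "diagnosis": "asthma"},
]
voter_roll = [
    {"name": "J. Smith", "zip": "02138", "dob": "1945-07-31", "sex": "F"},
    {"name": "A. Jones", "zip": "60601", "dob": "1975-03-12", "sex": "M"},
]

QUASI_IDENTIFIERS = ("zip", "dob", "sex")
for med in medical:
    for voter in voter_roll:
        # A match on all quasi-identifiers re-identifies the "anonymous" record.
        if all(med[k] == voter[k] for k in QUASI_IDENTIFIERS):
            print(f"{voter['name']} -> {med['diagnosis']}")  # J. Smith -> hypertension
```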

Data Governance: The Key to AI Privacy

Against this backdrop, establishing robust data governance is crucial for upholding privacy and ethics in AI development. Data governance refers to the policies, processes, and tools used to manage data within an organization.[5]

More specifically, responsible data governance for generative AI involves extensive screening of training data to remove personal information to the greatest extent possible. It requires meticulous documentation of all data sources and collection methods.[6] Setting policies for data retention and use based on consent principles is critical, as are access controls, auditing, and technical safeguards aligned with privacy and ethics by design.

In practice, the starting point for organizations is proactive auditing of training datasets to identify and remove personal information to the maximum extent feasible. This requires techniques more refined than basic name or ID matching, since personal data can appear in subtler forms (a first-pass scan of this kind is sketched below). Audits should then recur periodically to detect new potential leakage as models and datasets evolve. Additionally, strict access controls and encryption must be enforced to prevent leaks or misuse of sensitive training data, allowing only authorized access based on least-privilege principles and backed by multi-factor authentication.
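
As a hypothetical illustration of that first-pass scan, the sketch below flags and redacts a few common PII patterns with regular expressions. Production pipelines rely on far more sophisticated methods (named-entity recognition, contextual classifiers); these patterns and names are illustrative assumptions only.

```python
import re

# A few illustrative PII patterns; real audits cover many more categories.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "us_phone": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def scan_record(text):
    # Return the PII categories detected in a single training record.
    return [name for name, pattern in PII_PATTERNS.items() if pattern.search(text)]

def redact_record(text):
    # Replace every detected match with a category placeholder.
    for name, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{name.upper()}_REDACTED]", text)
    return text

sample = "Contact Jane at jane.doe@example.com or 555-867-5309."
print(scan_record(sample))    # ['email', 'us_phone']
print(redact_record(sample))  # Contact Jane at [EMAIL_REDACTED] or [US_PHONE_REDACTED].
```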

In tandem with the above, detailed activity logs with audit trails should be maintained to enable monitoring for potential abuses or policy violations. Lastly, metadata should be captured for every training data source, detailing key attributes such as origin, collection method, and dates.
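
A minimal sketch of such a provenance record, with an attached audit trail, might look like the following. The field names are illustrative assumptions rather than a prescribed schema; the point is that origin, collection method, dates, and consent basis travel with the data, and every access is logged.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DataSourceRecord:
    # Provenance metadata for one training data source (illustrative fields).
    source_id: str
    origin_url: str
    collection_method: str  # e.g. "web_crawl", "licensed", "user_submitted"
    collected_at: datetime
    consent_basis: str      # e.g. "public_data", "opt_in_consent"
    audit_log: list = field(default_factory=list)

    def log_access(self, actor: str, action: str):
        # Append a timestamped entry so every read or change is traceable.
        self.audit_log.append({
            "actor": actor,
            "action": action,
            "at": datetime.now(timezone.utc).isoformat(),
        })

record = DataSourceRecord(
    source_id="ds-0001",
    origin_url="https://example.com/corpus",
    collection_method="web_crawl",
    collected_at=datetime(2023, 1, 15, tzinfo=timezone.utc),
    consent_basis="public_data",
)
record.log_access(actor="pii-audit-job", action="scan")
print(record.audit_log)
```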

With such rigorous policies around auditing, access controls, documentation, transparency, risk assessment, and privacy-preserving design, the significant risks introduced by generative models can be mitigated.

Individual Privacy Rights and AI Regulations

Individuals have certain core privacy rights and protections regarding their personal information that must be considered as generative AI proliferates. In the European Union, the General Data Protection Regulation (GDPR) grants users the rights to access, correct, and delete their data, while requiring informed consent for processing. It also mandates data minimization, purpose limitation, and privacy by design. In the United States, state regulations like the California Consumer Privacy Act (CCPA) likewise enhance user rights around their data.[7] However, the decentralized and opaque nature of training data sourcing for generative AI poses regulatory challenges. As datasets grow ever larger and more complex, oversight becomes difficult. New approaches that embed privacy protection and consent directly into data pipelines will be needed, rather than relying on downstream regulation alone.

Nonetheless, important policy developments are underway to embed ethics, fairness, transparency, and accountability in AI. For instance, proposed US federal laws like the Algorithmic Accountability Act would require impact assessments for high-risk AI systems.[8] Policy efforts of this kind will only grow in importance as generative AI scales.

Conclusion

Looking ahead, the advent of generative AI presents a fascinating technological frontier, offering unprecedented creative possibilities. However, as these systems advance and become integral to the digital landscape, they bring intricate challenges regarding data privacy and ethics. Harnessing the full potential of generative AI while safeguarding individual privacy requires a multifaceted approach: robust data governance frameworks, a deep commitment to transparency and user control, and the continuous evolution of regulation and legislation to match the dynamic nature of AI.

In essence, striking a balance between innovation and privacy in the world of generative AI is not just a technological imperative but a societal one, ensuring that the transformative power of AI is employed for the betterment of humanity while respecting the fundamental principles of individual privacy and ethical conduct.

Otherwise, the very promise of generative AI becomes a harbinger of a world where innovation gives way to invasion, ushering in an era of intrusion and infringement rather than the renaissance of creativity and productivity it was designed to bring about.

Disclaimer: The information provided in this article is for general informational purposes only and is not intended to be legal advice. The content does not create an attorney-client relationship, and nothing in this article should be considered a substitute for professional legal advice. The information is based on general principles of law and may not reflect the most current legal developments or interpretations in your jurisdiction. Laws and regulations vary by jurisdiction, and the application and impact of laws can vary widely based on the specific facts and circumstances involved. You should consult with a qualified legal professional for advice regarding your specific situation.


[1] Stanford University, Artificial Intelligence Index Report 2023, p. 340, https://aiindex.stanford.edu/wp-content/uploads/2023/04/HAI_AI-Index-Report_2023.pdf

[2] Stanford, Artificial Intelligence Index Report 2023, p. 344.

[3] Villasenor, John, Artificial Intelligence, Deepfakes, and the Uncertain Future of Truth, Brookings Institution, https://www.brookings.edu/articles/artificial-intelligence-deepfakes-and-the-uncertain-future-of-truth/

[4] Stanford, Artificial Intelligence Index Report 2023, p. 84.

[5] International Association of Privacy Professionals (IAPP), Key Terms for AI Governance, https://iapp.org/resources/article/key-terms-for-ai-governance/

[6] IAPP, Key Terms for AI Governance.

[7] Bradley, Tony, Enhancing Data Privacy And Transparency, Forbes, https://www.forbes.com/sites/tonybradley/2023/08/01/enhancing-data-privacy-and-transparency/

[8] Bradley, Enhancing Data Privacy And Transparency.