3 Comments

I recently read about a product: https://protopia.ai/stained-glass-transform/

What are your views on it?

Would this be a good way to handle PII or other sensitive data sent to LLMs?


Organizations implementing Gen-AI projects should also consider a threat modeling assessment to ensure security, privacy, and compliance risks are well understood. For example, the most common enterprise use case is RAG-based Gen-AI that generates insights from a corpus of internal documents. While the usual security risks associated with model training data, model extraction, and model data poisoning don't apply to RAG-based systems, organizations still need a clear threat model to identify risks such as prompt chaining, excessive agency, unauthorized data exposure, and sensitive data crossing trust boundaries (for example, via OpenAI's embeddings API).
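One mitigation for the trust-boundary risk mentioned above is redacting PII from document chunks before they leave the organization for an external embeddings API. Here is a minimal sketch; the regex patterns and the `redact` helper are illustrative assumptions, and a real deployment would use a dedicated PII-detection library with a reviewed pattern set rather than two hand-written regexes.

```python
import re

# Hypothetical, illustrative patterns -- a production system would use a
# vetted PII-detection library, not ad hoc regexes.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace detected PII with type placeholders before the text
    crosses a trust boundary (e.g. an external embeddings API)."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

chunk = "Contact jane.doe@example.com, SSN 123-45-6789, re: Q3 report."
print(redact(chunk))  # -> Contact [EMAIL], SSN [SSN], re: Q3 report.
```

The placeholder labels preserve enough context for retrieval to remain useful while keeping the raw identifiers inside the trust boundary.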


All fair points! We narrowly targeted this article on the risk of unintended training. But subscribe to Deploy Securely for regular content touching on all of those things!
