Introducing GPT-Guard
An open-source, lightweight solution for sanitizing data sent to AI tools.
There has been a lot of discussion at the intersection of generative artificial intelligence (AI) and cybersecurity.
Even after making some recommendations for how to use these tools securely, I continued to hear concerns about OpenAI’s use of data provided to it via its APIs or the ChatGPT interface.
Many large organizations have banned the tool outright, primarily out of fear of the unknown, but also likely due to reports that Amazon and Samsung employees may have leaked sensitive data by using it.
Since I don’t think banning new technology - even in the face of risk - is usually the wisest course of action, I decided to do something about it.
Thus, GPT-Guard was born.
After some initial experimentation with launching it as a SaaS product, I decided it would be best offered as an open-source Python package.
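To illustrate the general technique of sanitizing data before it leaves your environment, here is a minimal sketch of a regex-based redactor. This is a hypothetical example of the overall approach, not GPT-Guard's actual API: the names `sanitize`, `restore`, and `PATTERNS` are invented for illustration, and a real tool would cover far more data types.

```python
import re

# Hypothetical sketch of pre-send sanitization (NOT GPT-Guard's actual API):
# replace common sensitive patterns with placeholder tokens before text is
# sent to an AI service, and keep a mapping so responses can be restored.

PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def sanitize(text: str) -> tuple[str, dict[str, str]]:
    """Replace matches with numbered placeholders; return text and mapping."""
    mapping: dict[str, str] = {}
    for label, pattern in PATTERNS.items():
        def _sub(match, label=label):
            token = f"[{label}_{len(mapping)}]"
            mapping[token] = match.group(0)
            return token
        text = pattern.sub(_sub, text)
    return text, mapping

def restore(text: str, mapping: dict[str, str]) -> str:
    """Re-insert the original values into a model response."""
    for token, original in mapping.items():
        text = text.replace(token, original)
    return text
```

The key design point is that only the placeholder-bearing text ever crosses the network boundary; the mapping of tokens back to real values stays local.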
Check out this site for a demo video of its capabilities.
Or go right to the GitHub repo to start using it. Pull requests welcome!
And if you find GPT-Guard useful, please show your support by subscribing or pledging to Deploy Securely.