Artificial Intelligence: Filter sensitive information #767
base: main
Conversation
Pull Request Overview
Introduces a new Artificial Intelligence section to the documentation with guidance on filtering sensitive information before sending data to LLMs. This addresses a critical security concern when working with AI services by demonstrating how to use the TopSecret gem to sanitize user messages before they are processed by external AI APIs.
- Adds a new artificial-intelligence directory with documentation structure
- Provides a practical Ruby code example showing how to filter and restore sensitive data
- Demonstrates integration with OpenAI's API using filtered messages
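The filter-and-restore flow described above can be sketched as follows. This is a minimal, hypothetical stand-in, not the TopSecret gem's actual API: a regex-based filter replaces email addresses with numbered placeholders before the message is sent to the LLM, and the saved mapping restores the originals in the model's reply.

```ruby
# Minimal sketch of the filter -> send -> restore pattern.
# NOT the TopSecret gem's API; a hypothetical stand-in to
# illustrate the flow the PR documents.

EMAIL = /[\w.+-]+@[\w-]+\.[\w.]+/

# Replace each email address with a numbered placeholder and
# remember the mapping so the response can be restored later.
def filter(message)
  mapping = {}
  filtered = message.gsub(EMAIL) do |match|
    placeholder = "[EMAIL_#{mapping.size + 1}]"
    mapping[placeholder] = match
    placeholder
  end
  [filtered, mapping]
end

# Swap the placeholders in the model's reply back to the originals.
def restore(response, mapping)
  mapping.reduce(response) do |text, (placeholder, original)|
    text.gsub(placeholder, original)
  end
end

filtered, mapping = filter("Contact ralph@example.com about the invoice.")
# filtered => "Contact [EMAIL_1] about the invoice."
# Only `filtered` would be sent to the external AI API (e.g. OpenAI);
# the raw message never leaves the application.
reply = "I emailed [EMAIL_1] as requested."
restore(reply, mapping)
# => "I emailed ralph@example.com as requested."
```

The same pattern generalizes to names, account numbers, or any other sensitive tokens: filter before the API call, keep the mapping server-side, restore after the response arrives.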
Reviewed Changes
Copilot reviewed 2 out of 2 changed files in this pull request and generated 1 comment.
| File | Description |
|---|---|
| artificial-intelligence/README.md | Creates the main AI section with security guidance overview |
| artificial-intelligence/how-to/filter_sensitive_information.md | Provides detailed implementation example for filtering sensitive data before LLM processing |
Introduces new section for Artificial Intelligence, and demonstrates how to filter sensitive information from messages.
Force-pushed from 50de58f to d5af78e.
I agree with the guideline, but I think the example is too specific as this guide is more general than just Ruby.
This API might change in the future. A link to the repo is enough, IMO.
@stevepolitodesign I think this is more general than AI. This might fit the security section. This is related to any external service and private data (similarly to how we filter params, anonymize db dumps, etc).

I agree, although I do think there's value in highlighting this risk for AI specifically. I say this because, anecdotally, I get the sense that the tech industry doesn't treat interactions with AI with the same level of scrutiny as other services. I'm curious to hear what others think.