
Conversation

stevepolitodesign (Contributor)

Introduces a new section for Artificial Intelligence and demonstrates how to filter sensitive information from messages.

Copilot AI left a comment

Pull Request Overview

Introduces a new Artificial Intelligence section to the documentation with guidance on filtering sensitive information before sending data to LLMs. This addresses a critical security concern when working with AI services by demonstrating how to use the TopSecret gem to sanitize user messages before they are processed by external AI APIs.

  • Adds a new artificial-intelligence directory with documentation structure
  • Provides a practical Ruby code example showing how to filter and restore sensitive data
  • Demonstrates integration with OpenAI's API using filtered messages
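
For context, the pattern under review looks roughly like the sketch below. It assumes TopSecret's filter interface as documented around the time of this PR (a redacted `output` plus a `mapping` from placeholder labels back to the original values; as a reviewer notes below, this API may change) and the community ruby-openai client; the restore step is hand-rolled here for illustration.

```ruby
require "top_secret"
require "openai"

message = "Hi, I'm Ralph. You can reach me at ralph@example.com."

# Redact PII before the message leaves our infrastructure.
# Assumes the TopSecret interface documented at the time of this PR:
# `output` is the redacted text, `mapping` pairs each placeholder
# label with the original value it replaced.
filtered = TopSecret::Text.filter(message)
filtered.output  # redacted text with placeholders such as PERSON_1 / EMAIL_1
filtered.mapping # e.g. { PERSON_1: "Ralph", EMAIL_1: "ralph@example.com" }

# Send only the redacted text to the external API
# (community ruby-openai client shown here).
client = OpenAI::Client.new(access_token: ENV.fetch("OPENAI_ACCESS_TOKEN"))
response = client.chat(
  parameters: {
    model: "gpt-4o-mini",
    messages: [{ role: "user", content: filtered.output }]
  }
)
reply = response.dig("choices", 0, "message", "content")

# Manually swap the placeholders back so the user never sees them.
# (Illustrative only; the exact placeholder format follows the gem's mapping.)
restored = filtered.mapping.reduce(reply) do |text, (label, value)|
  text.gsub(label.to_s, value)
end
```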

Reviewed Changes

Copilot reviewed 2 out of 2 changed files in this pull request and generated 1 comment.

Reviewed files:
  • artificial-intelligence/README.md: creates the main AI section with a security guidance overview
  • artificial-intelligence/how-to/filter_sensitive_information.md: provides a detailed implementation example for filtering sensitive data before LLM processing


Contributor left a comment
I agree with the guideline, but I think the example is too specific, as this guide is more general than just Ruby.

This API might change in the future. A link to the repo is enough, IMO.

MatheusRich (Contributor) commented Aug 29, 2025

@stevepolitodesign I think this is more general than AI. This might fit the security section. This is related to any external service and private data (similar to how we filter params, anonymize db dumps, etc.).
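
For context, the params filtering referred to here is Rails' built-in parameter filtering, which redacts matching keys from request logs; a minimal config looks like:

```ruby
# config/initializers/filter_parameter_logging.rb
# Rails replaces the values of matching request parameters with
# "[FILTERED]" before they are written to the logs.
Rails.application.config.filter_parameters += [
  :password, :ssn, :credit_card
]
```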

stevepolitodesign (Contributor, Author)

@MatheusRich

> I think this is more general than AI. This might fit the security section.

I agree, although I do think there's value in highlighting this risk for AI specifically. I say this because, anecdotally, I get the sense that the tech industry doesn't treat interactions with AI with the same level of scrutiny as other services.

I'm curious to hear what others think.
