
Commit e112d44

Merge pull request #1 from Mirascope/update-readme
Update README
2 parents fdc7371 + ec7ed31 commit e112d44

File tree

1 file changed: +39 -2 lines changed


README.md

Lines changed: 39 additions & 2 deletions
@@ -1,2 +1,39 @@
-# chat-with-docs
-Chat with Mirascope Documentation
+# Chat with Docs
+
+A "Chat with Docs" solution for Mirascope, demonstrating how to build an AI application with best practices.
+
+## Project Philosophy
+
+Building AI can feel overwhelming, but it doesn't have to be. This project demonstrates how to incrementally build a real AI system using evaluation-driven development:
+
+- Run experiments to improve the system
+- Improve the fidelity/coverage of evaluation
+- Improve the automation of evaluation
+
+We'll focus on taking many small, quick steps instead of trying to do too much too fast. This approach avoids the common pitfall of adding too much complexity without proper process and methodology.
+
+## Development Workflow
+
+1. **Issue Creation**: An issue is opened on the GitHub project describing in detail what should be done, with references to the techniques planned for use
+2. **Feedback Collection**: The Mirascope team provides feedback on the item
+3. **Development Preparation**: After feedback is addressed, development will commence
+4. **Live Streaming**: Development will generally be livestreamed on a schedule (2 days a week for 2-3 hours at a time)
+5. **Next Issue Planning**: The next issue should be prepared by the end of the live stream day to allow time for feedback
+6. **Development Process**: Each live stream will begin by reviewing the issue and briefly discussing the techniques to be used
+
+## Implementation Strategy
+
+We'll follow this incremental approach:
+
+1. Create minimal scaffolding without any AI to verify instrumentation
+2. Start with a small set of test queries (about 3)
+3. Implement the simplest AI solution possible
+4. Run evaluations with the small test set
+5. Plan experiments focusing on simple interventions:
+   - Basic prompt engineering
+   - Retrieval-Augmented Generation (RAG)
+6. Expand evaluation fidelity with new queries
+7. Continue experimenting and iterating
+8. Scale evaluation with AI judges and active learning when needed
+
+Throughout the development process, we'll link to relevant "Effective AI Engineering" tips and best practices.
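
To make steps 2-4 of the Implementation Strategy concrete, here is a minimal sketch of what the "simplest AI solution possible" plus a hand-run evaluation over a small test set could look like. It assumes Mirascope's OpenAI call decorator (`mirascope.core.openai.call`, as in Mirascope v1) with an `OPENAI_API_KEY` set in the environment; the function name `answer_question`, the `TEST_QUERIES` list, and the model choice are illustrative, not part of this repository.

```python
# Minimal sketch (not the project's actual implementation): a single
# prompted LLM call over the Mirascope docs question, evaluated by hand
# against a tiny test set. Assumes Mirascope v1's OpenAI integration.
from mirascope.core import openai

# Step 2: a small set of test queries (about 3), names are illustrative.
TEST_QUERIES = [
    "How do I define an LLM call with Mirascope?",
    "Does Mirascope support streaming responses?",
    "How do I add tools (function calling) to a call?",
]


# Step 3: the simplest AI solution possible -- one prompted call, no retrieval yet.
@openai.call("gpt-4o-mini")
def answer_question(question: str) -> str:
    return (
        "You are a helpful assistant for the Mirascope documentation. "
        f"Answer the following question:\n\n{question}"
    )


if __name__ == "__main__":
    # Step 4: run the small test set and inspect the answers manually;
    # automated evaluation (e.g. AI judges) comes later in the plan.
    for query in TEST_QUERIES:
        response = answer_question(query)
        print(f"Q: {query}\nA: {response.content}\n")
```

Later experiments in the plan (prompt engineering, RAG, AI judges) would extend or replace the prompt and the evaluation loop rather than this scaffolding.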
