9 changes: 9 additions & 0 deletions .claude/commands/create-plan.md
@@ -0,0 +1,9 @@
# Create a plan from current context

Create a plan in @ai_working/tmp that can be used by a junior developer to implement the changes needed to complete the task. The plan should be detailed enough to guide them through the implementation process, including any necessary steps, considerations, and references to relevant documentation or code files.

Since they will not have access to this conversation, ensure that the plan is self-contained and does not rely on any prior context. The plan should be structured in a way that is easy to follow, with clear instructions and explanations for each step.

Make sure to include any prerequisites, such as setting up the development environment, understanding the project structure, and following any specific coding standards or practices, as well as any relevant files or directories they should focus on. The plan should also include testing and validation steps to confirm that the changes work as expected.

Consider any other relevant information that would help a junior developer understand the task at hand and successfully implement the required changes. The plan should be comprehensive, yet concise enough to be easily digestible.
15 changes: 15 additions & 0 deletions .claude/commands/execute-plan.md
@@ -0,0 +1,15 @@
# Execute a plan

Everything below assumes you are in the repo root directory; change to it if needed before running.

READ:
ai_context/IMPLEMENTATION_PHILOSOPHY.md
ai_context/MODULAR_DESIGN_PHILOSOPHY.md

Execute the plan created in $ARGUMENTS to implement the changes needed to complete the task. Follow the detailed instructions provided in the plan, ensuring that each step is executed as described.

Make sure to follow the philosophies outlined in the implementation philosophy documents. Pay attention to the modular design principles and ensure that the code is structured in a way that promotes maintainability, readability, and reusability while executing the plan.

Update the plan as you go to track status and any changes made during implementation. If you encounter issues or need to adjust the plan, confirm with the user before proceeding, then document the adjustments made.

Upon completion, provide a summary of the changes made, any challenges faced, and how they were resolved. Ensure that the final implementation is thoroughly tested and validated against the requirements outlined in the plan.
26 changes: 26 additions & 0 deletions .claude/commands/prime.md
@@ -0,0 +1,26 @@
## Usage

`/prime <ADDITIONAL_GUIDANCE>`

## Process

Perform all actions below.

READ:
CLAUDE.md
README.md
ai_context/IMPLEMENTATION_PHILOSOPHY.md
ai_context/MODULAR_DESIGN_PHILOSOPHY.md

## Additional Guidance

Consider:

- what is the purpose of this project
- what are the primary components
- what resources do you have to assist in developing new features or addressing issues with it
- how the contents of the ai_context folder can be of use

You are building your own understanding to help developers working on this project.

$ARGUMENTS
30 changes: 30 additions & 0 deletions .claude/commands/ultrathink-task.md
@@ -0,0 +1,30 @@
## Usage

`/ultrathink-task <TASK_DESCRIPTION>`

## Context

- Task description: $ARGUMENTS
- Relevant code or files will be referenced ad-hoc using @ file syntax.

## Your Role

You are the Coordinator Agent orchestrating four specialist sub-agents:

1. Architect Agent – designs high-level approach.
2. Research Agent – gathers external knowledge and precedent.
3. Coder Agent – writes or edits code.
4. Tester Agent – proposes tests and validation strategy.

## Process

1. Think step-by-step, laying out assumptions and unknowns.
2. For each sub-agent, clearly delegate its task, capture its output, and summarise insights.
3. Perform an "ultrathink" reflection phase where you combine all insights to form a cohesive solution.
4. If gaps remain, iterate (spawn sub-agents again) until confident.

## Output Format

1. **Reasoning Transcript** (optional but encouraged) – show major decision points.
2. **Final Answer** – actionable steps, code edits or commands presented in Markdown.
3. **Next Actions** – bullet list of follow-up items for the team (if any).
1 change: 1 addition & 0 deletions .gitignore
@@ -11,6 +11,7 @@
appsettings.*.json
**/.DS_Store
.working
tmp/

# Dependencies and build cache
node_modules
258 changes: 258 additions & 0 deletions ai_context/IMPLEMENTATION_PHILOSOPHY.md
@@ -0,0 +1,258 @@
# Implementation Philosophy

This document outlines the core implementation philosophy and guidelines for software development projects. It serves as a central reference for decision-making and development approach throughout the project.

## Core Philosophy

The philosophy embodies a Zen-like minimalism that values simplicity and clarity above all else. It reflects:

- **Wabi-sabi philosophy**: Embracing simplicity and the essential. Each line serves a clear purpose without unnecessary embellishment.
- **Occam's Razor thinking**: The solution should be as simple as possible, but no simpler.
- **Trust in emergence**: Complex systems work best when built from simple, well-defined components that do one thing well.
- **Present-moment focus**: The code handles what's needed now rather than anticipating every possible future scenario.
- **Pragmatic trust**: The developer trusts external systems enough to interact with them directly, handling failures as they occur rather than assuming they'll happen.

This development philosophy values clear documentation, readable code, and the belief that good architecture emerges from simplicity rather than being imposed through complexity.

## Core Design Principles

### 1. Ruthless Simplicity

- **KISS principle taken to heart**: Keep everything as simple as possible, but no simpler
- **Minimize abstractions**: Every layer of abstraction must justify its existence
- **Start minimal, grow as needed**: Begin with the simplest implementation that meets current needs
- **Avoid future-proofing**: Don't build for hypothetical future requirements
- **Question everything**: Regularly challenge complexity in the codebase

### 2. Architectural Integrity with Minimal Implementation

- **Preserve key architectural patterns**: MCP for service communication, SSE for events, separate I/O channels, etc.
- **Simplify implementations**: Maintain pattern benefits with dramatically simpler code
- **Scrappy but structured**: Lightweight implementations of solid architectural foundations
- **End-to-end thinking**: Focus on complete flows rather than perfect components

### 3. Library Usage Philosophy

- **Use libraries as intended**: Minimal wrappers around external libraries
- **Direct integration**: Avoid unnecessary adapter layers
- **Selective dependency**: Add dependencies only when they provide substantial value
- **Understand what you import**: No black-box dependencies
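
A small sketch of what "direct integration" can look like, assuming `httpx` as an example HTTP library (the library choice and endpoint are illustrative, not prescribed by this document):

```python
import httpx

# Direct integration: call the library as documented, no project-specific adapter layer
async def fetch_profile(user_id: str) -> dict:
    async with httpx.AsyncClient(timeout=10.0) as client:
        response = await client.get(f"https://api.example.com/users/{user_id}")
        response.raise_for_status()  # let failures surface instead of wrapping them away
        return response.json()
```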

## Technical Implementation Guidelines

### API Layer

- Implement only essential endpoints
- Minimal middleware with focused validation
- Clear error responses with useful messages
- Consistent patterns across endpoints
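
A minimal sketch of these guidelines, assuming FastAPI (the framework, route, and in-memory store are illustrative, not choices this document mandates):

```python
from fastapi import FastAPI, HTTPException

app = FastAPI()

# Hypothetical in-memory store, just for this sketch
TASKS: dict[str, dict] = {}


@app.get("/tasks/{task_id}")
async def get_task(task_id: str):
    task = TASKS.get(task_id)
    if task is None:
        # Clear error response with a useful message
        raise HTTPException(status_code=404, detail=f"Task {task_id} not found")
    return task
```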

### Database & Storage

- Simple schema focused on current needs
- Use TEXT/JSON fields to avoid excessive normalization early
- Add indexes only when needed for performance
- Delay complex database features until required
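
For example, a sketch of a deliberately simple schema using the standard library's `sqlite3`, with a JSON TEXT column instead of early normalization (table and column names are illustrative):

```python
import json
import sqlite3

conn = sqlite3.connect("app.db")
# One table, one JSON blob; normalize later only if it proves necessary
conn.execute(
    "CREATE TABLE IF NOT EXISTS documents (id TEXT PRIMARY KEY, data TEXT NOT NULL)"
)
conn.execute(
    "INSERT OR REPLACE INTO documents (id, data) VALUES (?, ?)",
    ("doc-1", json.dumps({"title": "Notes", "tags": ["draft"]})),
)
conn.commit()

row = conn.execute("SELECT data FROM documents WHERE id = ?", ("doc-1",)).fetchone()
print(json.loads(row[0])["title"])  # -> Notes
```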

### MCP Implementation

- Streamlined MCP client with minimal error handling
- Utilize FastMCP when possible, falling back to the lower-level API only when necessary
- Focus on core functionality without elaborate state management
- Simplified connection lifecycle with basic error recovery
- Implement only essential health checks

### SSE & Real-time Updates

- Basic SSE connection management
- Simple resource-based subscriptions
- Direct event delivery without complex routing
- Minimal state tracking for connections

### Event System

- Simple topic-based publisher/subscriber
- Direct event delivery without complex pattern matching
- Clear, minimal event payloads
- Basic error handling for subscribers
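
A minimal sketch of such a topic-based publisher/subscriber (class and method names are illustrative):

```python
from collections import defaultdict
from typing import Any, Awaitable, Callable


class EventBus:
    def __init__(self):
        # topic -> list of async callbacks; no pattern matching, no routing rules
        self._subscribers: dict[str, list[Callable[[Any], Awaitable[None]]]] = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[Any], Awaitable[None]]) -> None:
        self._subscribers[topic].append(handler)

    async def publish(self, topic: str, payload: Any) -> None:
        # Direct delivery to each subscriber, with basic per-subscriber error handling
        for handler in self._subscribers[topic]:
            try:
                await handler(payload)
            except Exception as exc:
                print(f"subscriber for {topic!r} failed: {exc}")
```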

### LLM Integration

- Direct integration with PydanticAI
- Minimal transformation of responses
- Handle common error cases only
- Skip elaborate caching initially
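
A hedged sketch of what direct PydanticAI integration can look like (the model name is illustrative, and the exact result attribute has varied across PydanticAI releases):

```python
from pydantic_ai import Agent

# Direct integration: one agent, no wrapper layer, minimal response transformation
agent = Agent("openai:gpt-4o-mini", system_prompt="Answer concisely.")


def summarize(text: str) -> str:
    try:
        result = agent.run_sync(f"Summarize: {text}")
    except Exception as exc:  # handle common error cases only
        raise RuntimeError(f"LLM call failed: {exc}")
    # Recent releases expose the reply as result.output; older ones used result.data
    return str(result.output)
```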

### Message Routing

- Simplified queue-based processing
- Direct, focused routing logic
- Basic routing decisions without excessive action types
- Simple integration with other components
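
A minimal sketch of queue-based routing along these lines (handler names and message shape are illustrative):

```python
import asyncio


async def route_messages(queue: asyncio.Queue, handlers: dict):
    """Pull messages off a single queue and dispatch by message type."""
    while True:
        message = await queue.get()
        handler = handlers.get(message.get("type"))
        if handler is None:
            # Basic routing decision: log and drop, rather than elaborate fallback logic
            print(f"dropping message with unknown type: {message!r}")
        else:
            await handler(message)
        queue.task_done()
```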

## Development Approach

### Vertical Slices

- Implement complete end-to-end functionality slices
- Start with core user journeys
- Get data flowing through all layers early
- Add features horizontally only after core flows work

### Iterative Implementation

- 80/20 principle: Focus on high-value, low-effort features first
- One working feature > multiple partial features
- Validate with real usage before enhancing
- Be willing to refactor early work as patterns emerge

### Testing Strategy

- Emphasis on integration and end-to-end tests
- Manual testability as a design goal
- Focus on critical path testing initially
- Add unit tests for complex logic and edge cases
- Testing pyramid: 60% unit, 30% integration, 10% end-to-end
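
As one way to lean toward integration-style tests, a pytest sketch that drives a tiny app end to end through FastAPI's test client (the app and route are hypothetical and defined inline so the test stands alone):

```python
from fastapi import FastAPI
from fastapi.testclient import TestClient

app = FastAPI()


@app.get("/health")
async def health():
    return {"status": "ok"}


def test_health_critical_path():
    # Integration-style: exercise the real app over HTTP rather than calling internals
    client = TestClient(app)
    response = client.get("/health")
    assert response.status_code == 200
    assert response.json() == {"status": "ok"}
```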

### Error Handling

- Handle common errors robustly
- Log detailed information for debugging
- Provide clear error messages to users
- Fail fast and visibly during development
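
A small sketch of this posture using the standard `logging` module (the function and failure modes are illustrative):

```python
import json
import logging

logger = logging.getLogger(__name__)


def load_config(path: str) -> dict:
    try:
        with open(path) as f:
            return json.load(f)
    except FileNotFoundError:
        # Fail fast and visibly: a missing config is not something to limp past
        logger.error("Config file not found at %s", path)
        raise
    except json.JSONDecodeError as exc:
        # Log detail for debugging, surface a clear message to the user
        logger.error("Config at %s is not valid JSON: %s", path, exc)
        raise ValueError(f"Could not parse config file {path}: {exc}") from exc
```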

## Decision-Making Framework

When faced with implementation decisions, ask these questions:

1. **Necessity**: "Do we actually need this right now?"
2. **Simplicity**: "What's the simplest way to solve this problem?"
3. **Directness**: "Can we solve this more directly?"
4. **Value**: "Does the complexity add proportional value?"
5. **Maintenance**: "How easy will this be to understand and change later?"

## Areas to Embrace Complexity

Some areas justify additional complexity:

1. **Security**: Never compromise on security fundamentals
2. **Data integrity**: Ensure data consistency and reliability
3. **Core user experience**: Make the primary user flows smooth and reliable
4. **Error visibility**: Make problems obvious and diagnosable

## Areas to Aggressively Simplify

Push for extreme simplicity in these areas:

1. **Internal abstractions**: Minimize layers between components
2. **Generic "future-proof" code**: Resist solving non-existent problems
3. **Edge case handling**: Handle the common cases well first
4. **Framework usage**: Use only what you need from frameworks
5. **State management**: Keep state simple and explicit

## Practical Examples

### Good Example: Direct SSE Implementation

```python
# Simple, focused SSE manager that does exactly what's needed
import asyncio
import uuid


class SseManager:
    def __init__(self):
        self.connections = {}  # Simple dictionary tracking

    async def add_connection(self, resource_id, user_id):
        """Add a new SSE connection"""
        connection_id = str(uuid.uuid4())
        queue = asyncio.Queue()
        self.connections[connection_id] = {
            "resource_id": resource_id,
            "user_id": user_id,
            "queue": queue,
        }
        return queue, connection_id

    async def send_event(self, resource_id, event_type, data):
        """Send an event to all connections for a resource"""
        # Direct delivery to relevant connections only
        for conn_id, conn in self.connections.items():
            if conn["resource_id"] == resource_id:
                await conn["queue"].put({
                    "event": event_type,
                    "data": data,
                })
```
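
A hypothetical caller of this manager stays just as small, for example:

```python
# Hypothetical usage of the SseManager sketch above
manager = SseManager()


async def stream_for(resource_id: str, user_id: str):
    queue, connection_id = await manager.add_connection(resource_id, user_id)
    while True:
        event = await queue.get()  # wait for the next event on this resource
        yield event
```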

### Bad Example: Over-engineered SSE Implementation

```python
# Overly complex with unnecessary abstractions and state tracking
class ConnectionRegistry:
    def __init__(self, metrics_collector, cleanup_interval=60):
        self.connections_by_id = {}
        self.connections_by_resource = defaultdict(list)
        self.connections_by_user = defaultdict(list)
        self.metrics_collector = metrics_collector
        self.cleanup_task = asyncio.create_task(self._cleanup_loop(cleanup_interval))

    # [50+ more lines of complex indexing and state management]
```

### Good Example: Simple MCP Client

```python
# Focused MCP client with clean error handling
# Imports assume the official MCP Python SDK (`mcp` package)
from mcp import ClientSession
from mcp.client.sse import sse_client


class McpClient:
    def __init__(self, endpoint: str, service_name: str):
        self.endpoint = endpoint
        self.service_name = service_name
        self.client = None

    async def connect(self):
        """Connect to MCP server"""
        if self.client is not None:
            return  # Already connected

        try:
            # Create SSE client context
            async with sse_client(self.endpoint) as (read_stream, write_stream):
                # Create client session
                self.client = ClientSession(read_stream, write_stream)
                # Initialize the client
                await self.client.initialize()
        except Exception as e:
            self.client = None
            raise RuntimeError(f"Failed to connect to {self.service_name}: {str(e)}")

    async def call_tool(self, name: str, arguments: dict):
        """Call a tool on the MCP server"""
        if not self.client:
            await self.connect()

        return await self.client.call_tool(name=name, arguments=arguments)
```

### Bad Example: Over-engineered MCP Client

```python
# Complex MCP client with excessive state management and error handling
class EnhancedMcpClient:
    def __init__(self, endpoint, service_name, retry_strategy, health_check_interval):
        self.endpoint = endpoint
        self.service_name = service_name
        self.state = ConnectionState.DISCONNECTED
        self.retry_strategy = retry_strategy
        self.connection_attempts = 0
        self.last_error = None
        self.health_check_interval = health_check_interval
        self.health_check_task = None
        # [50+ more lines of complex state tracking and retry logic]
```

## Remember

- It's easier to add complexity later than to remove it
- Code you don't write has no bugs
- Favor clarity over cleverness
- The best code is often the simplest

This philosophy document serves as the foundational guide for all implementation decisions in the project.
20 changes: 20 additions & 0 deletions ai_context/MODULAR_DESIGN_PHILOSOPHY.md
@@ -0,0 +1,20 @@
# Building Software with AI: A Modular Block Approach

_By Brian Krabach_\
_3/28/2025_

Imagine you're about to build a complex construction brick spaceship. You dump out thousands of tiny bricks and open the blueprint. Step by step, the blueprint tells you which pieces to use and how to connect them. You don't need to worry about the details of each brick or whether it will fit --- the instructions guarantee that every piece snaps together correctly. **Now imagine those interlocking bricks could assemble themselves** whenever you gave them the right instructions. This is the essence of our new AI-driven software development approach: **we provide the blueprint, and AI builds the product, one modular piece at a time.**

Like a brick model, our software is built from small, clear modules. Each module is a self-contained "brick" of functionality with defined connectors (interfaces) to the rest of the system. Because these connection points are standard and stable, we can generate or regenerate any single module independently without breaking the whole. Need to improve the user login component? We can have the AI rebuild just that piece according to its spec, then snap it back into place --- all while the rest of the system continues to work seamlessly. And if we ever need to make a broad, cross-cutting change that touches many pieces, we simply hand the AI a bigger blueprint (for a larger assembly or even the entire codebase) and let it rebuild that chunk in one go. **Crucially, the external system contracts --- the equivalent of brick studs and sockets where pieces connect --- remain unchanged.** This means even a regenerated system still fits perfectly into its environment, although inside it might be built differently, with fresh optimizations and improvements.

When using LLM-powered tools today, even what looks like a tiny edit is actually the LLM generating new code based on the specifications we provide. We embrace this reality and don't treat code as something to tweak line-by-line; **we treat it as something to describe, and then let the AI generate or assemble it.** By keeping each task _small and self-contained_ --- akin to one page of a blueprint --- we ensure the AI has all the context it needs to generate that piece correctly from start to finish. This makes the code generation more predictable and reliable. The system essentially always prefers regeneration of a module (or a set of modules) within a bounded context, rather than more challenging edits at the code level. The result is code that's consistently in sync with its specification, built in a clean sweep every time.

# The Human Role: From Code Mechanics to Architects

In this approach, humans step back from being code mechanics and instead take on the role of architects and quality inspectors. Much like a master builder, a human defines the vision and specifications up front --- the blueprint for what needs to be built. But once the spec (the blueprint) is handed off, the human doesn't hover over every brick placement. In fact, they don't need to read the code (just as you don't examine each brick's material for flaws). Instead, they focus on whether the assembled product meets the vision. They work at the specification level or higher: designing requirements, clarifying the intended behavior, and then evaluating the finished module or system by testing its behavior in action. If the login module is rebuilt, for example, the human reviews it by seeing if users can log in smoothly and securely --- not by poring over the source code. This elevates human involvement to where it's most valuable, letting AI handle the heavy lifting of code construction and assembly.

# Building in Parallel

The biggest leap is that we don't have to build just one solution at a time. Because our AI "builders" work so quickly and handle modular instructions so well, we can spawn multiple versions of the software in parallel --- like having several brick sets assembled simultaneously. Imagine generating and testing multiple variants of a feature at once --- the AI could try several different recommendation algorithms for a product in parallel to see which performs best. It could even build the same application for multiple platforms simultaneously (web, mobile, etc.) by following platform-specific instructions. We could have all these versions built and tested side by side in a fraction of the time it would take a traditional team to do one. Each variant teaches us something: we learn what works best, which design is most efficient, which user experience is superior. Armed with those insights, we can refine our high-level specifications and then regenerate the entire system or any module again for another iteration. This cycle of parallel experimentation and rapid regeneration means we can innovate faster and more fearlessly. It's a development playground on a scale previously unimaginable --- all enabled by trusting our AI co-builders to handle the intricate assembly while we guide the vision.

In short, this brick-inspired, AI-driven approach flips the script of software development. We break the work into well-defined pieces, let AI assemble and reassemble those pieces as needed, and keep humans focused on guiding the vision and validating results. The outcome is a process that's more flexible, faster, and surprisingly liberating: we can reshape our software as easily as snapping together (or rebuilding) a model, and even build multiple versions of it in parallel. For our stakeholders, this means delivering the right solution faster, adapting to change without fear, and continually exploring new ideas --- brick by brick, at a pace and scale that set a new standard for innovation.