This is a comprehensive development and testing toolkit for working with Ollama language models. The project provides tools for analyzing, simulating, and testing LLM behavior with advanced prompt engineering and response processing capabilities.
- Advanced LLM Analysis - Deep analysis of model responses and behavior patterns
- Response Processing Engine - Sophisticated engine for processing and analyzing LLM outputs
- Prompt Engineering Tools - Dynamic prompt generation using randomized linguistic features
- Model Configuration Management - Fine-tuned parameter control for optimal model performance
- Simulation Framework - Framework for running controlled LLM experiments and scenarios
- Rich Logging & Monitoring - Comprehensive logging with color-coded output and detailed metrics
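The dynamic prompt generation idea can be sketched roughly as follows. The feature pools and the `build_prompt` helper here are hypothetical illustrations, not the project's actual API (the real categories live in `langfeatures.py`):

```python
import random

# Hypothetical linguistic feature pools; the project keeps its real
# categories in langfeatures.py.
FEATURES = {
    "tone": ["formal", "casual", "terse"],
    "structure": ["bullet list", "single paragraph", "step-by-step guide"],
}

def build_prompt(task, rng=random):
    """Compose a prompt by sampling one value from each feature pool."""
    tone = rng.choice(FEATURES["tone"])
    structure = rng.choice(FEATURES["structure"])
    return f"In a {tone} tone, answer as a {structure}: {task}"

# A seeded RNG makes a given prompt variant reproducible across runs.
print(build_prompt("Explain context windows.", random.Random(42)))
```

Seeding the random source keeps experiments repeatable while still exercising many prompt variants.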
- `analyze.py` - Main analysis engine for processing LLM responses and extracting insights
- `sim.py` - Simulation framework for running controlled LLM experiments
- `config.py` - Model configuration and parameter management
- `instructions.py` - System prompts and instruction templates
- `langfeatures.py` - Language feature categorization for dynamic text generation
- `jam.py` - Audio/frequency processing utilities
- Supporting utilities - HTTP testing, model updates, and various helper scripts
- Python 3.8+
- Ollama installed and running
- Redis server (optional, for advanced features)
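A minimal sketch for verifying the prerequisites above before running the tools (the `check_prerequisites` helper is an illustration, not part of this repo; `11434` is Ollama's default local port):

```python
import sys
import urllib.request
import urllib.error

def check_prerequisites(ollama_url="http://localhost:11434"):
    """Return a dict of prerequisite checks; a missing server is non-fatal."""
    status = {"python_ok": sys.version_info >= (3, 8), "ollama_ok": False}
    try:
        # Ollama's root endpoint answers "Ollama is running" when up.
        with urllib.request.urlopen(ollama_url, timeout=2) as resp:
            status["ollama_ok"] = resp.status == 200
    except (urllib.error.URLError, OSError):
        pass  # server not reachable; leave ollama_ok as False
    return status

print(check_prerequisites())
```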
```bash
git clone https://github.com/kilitary/ollama-dev.git
cd ollama-dev
pip install -r requirements.txt
```

```bash
# Run main analysis
python analyze.py

# Run simulation experiments
python sim.py

# Test HTTP endpoints
python httptest.py
```

- Engine Results Documentation - Understanding LLM output processing
- Algorithm Reference - Sampling techniques and methods used
- Configuration Guide - Model parameters and settings
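For context on what the scripts above do, a single non-streaming generation request to a local Ollama server looks roughly like this. The endpoint (`POST /api/generate` on port 11434) and payload fields are Ollama's documented API; the model name and helper functions are assumptions for illustration:

```python
import json
import urllib.request

def build_request(prompt, model="llama3"):
    """Build the JSON payload for Ollama's /api/generate endpoint."""
    return {"model": model, "prompt": prompt, "stream": False}

def generate(prompt, model="llama3", url="http://localhost:11434/api/generate"):
    """Send one generation request and return the model's text response."""
    data = json.dumps(build_request(prompt, model)).encode()
    req = urllib.request.Request(
        url, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req, timeout=60) as resp:
        return json.loads(resp.read())["response"]

# Requires a running Ollama server with the model pulled, e.g.:
# print(generate("Say hello in five words."))
```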
Copyright (c) 2024-2025 [email protected]
