# OpenHands Playground

A playground repository for OpenHands development featuring a flexible LLM factory pattern implementation.
## Project Structure

```
openhands_playground/
├── src/
│   └── openhands_playground/
│       ├── __init__.py
│       └── llm/
│           ├── __init__.py
│           ├── base.py
│           ├── factory.py
│           └── llms/
│               ├── __init__.py
│               ├── mock_llm.py
│               └── openai_llm.py
├── test/
│   ├── __init__.py
│   └── test_llm.py
├── .env.example
├── pyproject.toml
└── README.md
```
## Setup

This project uses Poetry for dependency management. This repository uses pre-commit hooks to ensure code quality. The hooks include:

- ruff: A fast Python linter and formatter
- mypy: Static type checking for Python

1. Install Poetry if you haven't already:

   ```bash
   curl -sSL https://install.python-poetry.org | python3 -
   ```

2. Install dependencies:

   ```bash
   poetry install
   ```

3. Install pre-commit:

   ```bash
   pip install pre-commit
   ```

4. Install the git hooks:

   ```bash
   pre-commit install
   ```
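The hook set lives in `.pre-commit-config.yaml`. A minimal sketch of what such a config can look like — the `rev` values here are illustrative, so pin them to the versions the repository actually uses:

```yaml
repos:
  - repo: https://github.com/astral-sh/ruff-pre-commit
    rev: v0.6.9          # illustrative pin
    hooks:
      - id: ruff         # linter
      - id: ruff-format  # formatter
  - repo: https://github.com/pre-commit/mirrors-mypy
    rev: v1.11.2         # illustrative pin
    hooks:
      - id: mypy
```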
The hooks will run automatically on `git commit`. To run them manually:

```bash
pre-commit run --all-files
```

## Development Tools

This project includes the following development tools:
- pytest: For running tests
- ruff: For linting and code formatting
- mypy: For static type checking
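These tools are typically configured in `pyproject.toml`. The snippet below is an illustrative sketch only — the exact options are assumptions, so check the repository's actual `pyproject.toml`:

```toml
# Illustrative tool configuration; values are assumptions, not the repository's.
[tool.ruff]
line-length = 88

[tool.mypy]
strict = true
files = ["src"]

[tool.pytest.ini_options]
testpaths = ["test"]
```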
```bash
# Run all tests
poetry run pytest

# Run tests with coverage
poetry run pytest --cov=openhands_playground
```

```bash
# Run linting
poetry run ruff check .

# Format code
poetry run ruff format .

# Run type checking
poetry run mypy src/

# Run all quality checks
poetry run ruff check . && poetry run ruff format --check . && poetry run mypy src/ && poetry run pytest
```

## Usage

This package provides a factory pattern for creating and using different LLM providers.
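At its core, a factory like this keeps a registry mapping provider names to classes. The following is a hypothetical simplification to show the idea — it is not the repository's actual `factory.py`, and `MockLLM` here is a toy stand-in:

```python
class MockLLM:
    """Toy provider used only to demonstrate the registry (illustrative)."""

    def __init__(self, model_name: str = "mock-model") -> None:
        self.model_name = model_name

    def generate(self, prompt: str, **kwargs) -> str:
        return f"Mock response to: {prompt}"


class LLMFactory:
    """Minimal factory: providers register under a name, callers create by name."""

    _providers: dict[str, type] = {}

    @classmethod
    def register_provider(cls, name: str, provider_cls: type) -> None:
        cls._providers[name] = provider_cls

    @classmethod
    def get_available_providers(cls) -> list[str]:
        return sorted(cls._providers)

    @classmethod
    def create_llm(cls, name: str, **kwargs):
        if name not in cls._providers:
            raise ValueError(f"Unknown provider: {name}")
        return cls._providers[name](**kwargs)


LLMFactory.register_provider("mock", MockLLM)
llm = LLMFactory.create_llm("mock", model_name="test-model")
print(llm.generate("hi"))  # Mock response to: hi
```

On top of a registry like this, the package also exposes convenience constructors such as `create_mock_llm()` and `create_openai_llm()`, shown below.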
### Using the Mock LLM

```python
from openhands_playground.llm import LLMFactory

# Create a mock LLM for testing
llm = LLMFactory.create_mock_llm()

# Generate text
response = llm.generate("Tell me a joke")
print(response)

# Chat with messages
messages = [
    {"role": "user", "content": "Hello, how are you?"}
]
response = llm.chat(messages)
print(response)
```

### Using the OpenAI LLM

```python
from openhands_playground.llm import LLMFactory

# Create OpenAI LLM (requires API key)
llm = LLMFactory.create_openai_llm(
    model_name="gpt-3.5-turbo",
    api_key="your-api-key-here"
)

# Or use environment variable OPENAI_API_KEY
llm = LLMFactory.create_openai_llm()

# Generate text with parameters
response = llm.generate(
    "Explain quantum computing",
    max_tokens=100,
    temperature=0.7
)
print(response)
```

### Environment Configuration

Create a `.env` file in your project root:
```
OPENAI_API_KEY=your-openai-api-key-here
```

### Provider Selection

```python
from openhands_playground.llm import LLMFactory

# List available providers
providers = LLMFactory.get_available_providers()
print(providers)  # ['mock', 'openai']

# Create LLM by provider name
llm = LLMFactory.create_llm("mock", model_name="test-model")
```

### Custom Providers

You can register custom LLM implementations:
```python
from openhands_playground.llm import LLMFactory, BaseLLM

class CustomLLM(BaseLLM):
    def generate(self, prompt, **kwargs):
        return f"Custom response to: {prompt}"

    def chat(self, messages, **kwargs):
        return "Custom chat response"

# Register the custom provider
LLMFactory.register_provider("custom", CustomLLM)

# Use the custom provider
llm = LLMFactory.create_llm("custom", model_name="custom-model")
```
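A custom class only needs to satisfy the `BaseLLM` interface, i.e. implement `generate` and `chat`. As an illustration of what such an abstract base might look like — a sketch under assumptions, not the repository's actual `base.py` — along with a tiny echo provider:

```python
from abc import ABC, abstractmethod
from typing import Any


class BaseLLM(ABC):
    """Illustrative sketch of an abstract LLM interface (not the real base.py)."""

    def __init__(self, model_name: str = "default") -> None:
        self.model_name = model_name

    @abstractmethod
    def generate(self, prompt: str, **kwargs: Any) -> str:
        """Return a completion for a single prompt."""

    @abstractmethod
    def chat(self, messages: list[dict[str, str]], **kwargs: Any) -> str:
        """Return a reply for a list of chat messages."""


class EchoLLM(BaseLLM):
    """Minimal example provider: echoes its input back."""

    def generate(self, prompt: str, **kwargs: Any) -> str:
        return f"echo: {prompt}"

    def chat(self, messages: list[dict[str, str]], **kwargs: Any) -> str:
        return f"echo: {messages[-1]['content']}"


llm = EchoLLM(model_name="echo-1")
print(llm.generate("hi"))  # echo: hi
```

Because `generate` and `chat` are abstract, instantiating `BaseLLM` directly raises a `TypeError`, which keeps incomplete providers from being registered by accident.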