OpenHands Playground

A playground repository for OpenHands development featuring a flexible LLM factory pattern implementation.

Project Structure

openhands_playground/
├── src/
│   └── openhands_playground/
│       ├── __init__.py
│       └── llm/
│           ├── __init__.py
│           ├── base.py
│           ├── factory.py
│           └── llms/
│               ├── __init__.py
│               ├── mock_llm.py
│               └── openai_llm.py
├── test/
│   ├── __init__.py
│   └── test_llm.py
├── .env.example
├── pyproject.toml
└── README.md

Development Setup

This project uses Poetry for dependency management.

Pre-commit Hooks

This repository uses pre-commit hooks to ensure code quality. The hooks include:

  • ruff: A fast Python linter and formatter
  • mypy: Static type checking for Python
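A `.pre-commit-config.yaml` wiring up these two hooks typically looks like the following. This is an illustrative sketch, not this repository's actual config; the `rev` pins in particular are placeholders you should update to current releases:

```yaml
repos:
  - repo: https://github.com/astral-sh/ruff-pre-commit
    rev: v0.4.4  # illustrative pin, not taken from this repo
    hooks:
      - id: ruff          # linting
      - id: ruff-format   # formatting
  - repo: https://github.com/pre-commit/mirrors-mypy
    rev: v1.10.0  # illustrative pin, not taken from this repo
    hooks:
      - id: mypy          # static type checking
```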

Installation

  1. Install Poetry if you haven't already:

    curl -sSL https://install.python-poetry.org | python3 -
  2. Install dependencies:

    poetry install
  3. Install pre-commit:

    pip install pre-commit
  4. Install the git hooks:

    pre-commit install

Running the Hooks

The hooks will run automatically on git commit. To run them manually:

pre-commit run --all-files

Development Tools

This project includes the following development tools:

  • pytest: For running tests
  • ruff: For linting and code formatting
  • mypy: For static type checking

Running Tests

# Run all tests
poetry run pytest

# Run tests with coverage
poetry run pytest --cov=openhands_playground

Code Quality

# Run linting
poetry run ruff check .

# Format code
poetry run ruff format .

# Run type checking
poetry run mypy src/

Running All Quality Checks

# Run all quality checks
poetry run ruff check . && poetry run ruff format --check . && poetry run mypy src/ && poetry run pytest

LLM Usage

This package provides a factory pattern for creating and using different LLM providers.

Basic Usage

from openhands_playground.llm import LLMFactory

# Create a mock LLM for testing
llm = LLMFactory.create_mock_llm()

# Generate text
response = llm.generate("Tell me a joke")
print(response)

# Chat with messages
messages = [
    {"role": "user", "content": "Hello, how are you?"}
]
response = llm.chat(messages)
print(response)

OpenAI Integration

from openhands_playground.llm import LLMFactory

# Create OpenAI LLM (requires API key)
llm = LLMFactory.create_openai_llm(
    model_name="gpt-3.5-turbo",
    api_key="your-api-key-here"
)

# Or use environment variable OPENAI_API_KEY
llm = LLMFactory.create_openai_llm()

# Generate text with parameters
response = llm.generate(
    "Explain quantum computing",
    max_tokens=100,
    temperature=0.7
)
print(response)

Environment Variables

Create a .env file in your project root:

OPENAI_API_KEY=your-openai-api-key-here
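How the package loads this file is not shown here; for intuition, a minimal stdlib-only loader for dotenv-style `KEY=VALUE` files can be sketched as below. The `load_env_file` helper is hypothetical, purely for illustration (in practice a library such as `python-dotenv` is commonly used):

```python
import os
from pathlib import Path


def load_env_file(path: str = ".env") -> None:
    """Read KEY=VALUE lines from a dotenv-style file into os.environ.

    Illustrative helper, not this repo's code. Skips blank lines and
    comments, and never overwrites variables already set in the
    real environment.
    """
    env_path = Path(path)
    if not env_path.exists():
        return
    for line in env_path.read_text().splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        os.environ.setdefault(key.strip(), value.strip())
```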

Available Providers

from openhands_playground.llm import LLMFactory

# List available providers
providers = LLMFactory.get_available_providers()
print(providers)  # ['mock', 'openai']

# Create LLM by provider name
llm = LLMFactory.create_llm("mock", model_name="test-model")

Custom Providers

You can register custom LLM implementations:

from openhands_playground.llm import LLMFactory, BaseLLM

class CustomLLM(BaseLLM):
    def generate(self, prompt, **kwargs):
        return f"Custom response to: {prompt}"
    
    def chat(self, messages, **kwargs):
        return "Custom chat response"

# Register the custom provider
LLMFactory.register_provider("custom", CustomLLM)

# Use the custom provider
llm = LLMFactory.create_llm("custom", model_name="custom-model")
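For intuition, the registry-based factory pattern that `register_provider` and `create_llm` suggest can be sketched in a few lines. This is an illustration of the pattern under assumed signatures, not this repository's actual implementation:

```python
from abc import ABC, abstractmethod
from typing import Any, Dict, List, Type


class BaseLLM(ABC):
    """Minimal base class: each provider takes a model name and can generate."""

    def __init__(self, model_name: str) -> None:
        self.model_name = model_name

    @abstractmethod
    def generate(self, prompt: str, **kwargs: Any) -> str: ...


class LLMFactory:
    """Maps provider names to BaseLLM subclasses and instantiates them."""

    _providers: Dict[str, Type[BaseLLM]] = {}

    @classmethod
    def register_provider(cls, name: str, provider: Type[BaseLLM]) -> None:
        cls._providers[name] = provider

    @classmethod
    def create_llm(cls, name: str, **kwargs: Any) -> BaseLLM:
        if name not in cls._providers:
            raise ValueError(f"Unknown provider: {name}")
        # Forward constructor kwargs (e.g. model_name) to the provider class.
        return cls._providers[name](**kwargs)

    @classmethod
    def get_available_providers(cls) -> List[str]:
        return sorted(cls._providers)


class MockLLM(BaseLLM):
    def generate(self, prompt: str, **kwargs: Any) -> str:
        return f"mock response to: {prompt}"


LLMFactory.register_provider("mock", MockLLM)
```

The key design choice is that the factory holds only a name-to-class mapping, so new providers plug in without touching the factory itself.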
