2 changes: 2 additions & 0 deletions .gitignore
@@ -1,3 +1,5 @@
.*

# Local dashboard data
data/

109 changes: 82 additions & 27 deletions README.md
@@ -1,9 +1,28 @@
Preprocess observations and STOFS model data for viewing on a sealens-like dashboard.

# Installation
### Install with uv
This package uses `uv` for dependency management and `pyproject.toml` for configuration. It has been developed and tested with Python 3.12.

```bash
# Install uv if you haven't already
curl -LsSf https://astral.sh/uv/install.sh | sh
# or
pip install uv

# Clone repository
git clone https://github.com/oceanmodeling/stofs-event-dashboard.git
# or
git clone [email protected]:oceanmodeling/stofs-event-dashboard.git

# Install the package and dependencies
cd stofs-event-dashboard
uv sync
```

### Alternative: Set up conda environment (legacy method)
If you prefer to use conda:
```bash
# Download and set up conda:
wget "https://github.com/conda-forge/miniforge/releases/latest/download/Miniforge3-$(uname)-$(uname -m).sh"
bash "Miniforge3-$(uname)-$(uname -m).sh" -b -p "${HOME}/conda"
@@ -12,43 +31,79 @@
source "${HOME}/conda/etc/profile.d/mamba.sh"
# Set up a new virtual environment:
mamba create --name=py312 python=3.12
mamba activate py312

# Install the package in development mode
uv pip install -e .
```

### Development Installation
To install with development dependencies (includes pytest, coverage, black, isort, flake8, mypy):
```bash
uv sync --extra dev
```

# Usage
### Pre-process data
```bash
# Using uv
uv run process-event-data <path_to_config>

# Or with the full module path
uv run python -m stofs_event_dashboard.process_event_data <path_to_config>
```

### Run dashboard
If running on a remote machine (e.g., AWS, GCP), you need to open a tunnel from your local computer to be able to view the dashboard in a local browser window.
```bash
ssh -i ~/.ssh/id_rsa -L8849:localhost:8849 <First.Last>@<cluster_ip_address>
```

Whether running locally (on your own laptop) or on a remote machine, the command below will start the dashboard. If running remotely, the port number (also repeated at the end of both websocket origins) needs to be the same as in the ssh command above (`8849` in this case).

```bash
# Using uv
uv run python -m panel serve src/stofs_event_dashboard/dashboard_reactive.py --dev --address=127.0.0.1 --port=8849 --allow-websocket-origin=localhost:8849 --allow-websocket-origin=127.0.0.1:8849 --log-level debug

# open dashboard at:
# http://127.0.0.1:8849/dashboard_reactive
```
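
For orientation, `panel serve` works on any script that marks components `.servable()`. Below is a minimal, hypothetical Panel app, not this repository's actual dashboard code; the widget and function names are made up for illustration.

```python
# minimal_app.py -- a hypothetical stand-in for a servable Panel script.
# Serve it the same way: `uv run python -m panel serve minimal_app.py --port=8849 ...`
import panel as pn

pn.extension()

# Illustrative widget; the real dashboard's controls will differ.
station = pn.widgets.Select(name="Station", options=["A", "B", "C"])

def heading(selected: str) -> str:
    return f"## Showing data for station {selected}"

# Anything marked .servable() becomes the page served by `panel serve`.
pn.Column(station, pn.bind(heading, station)).servable()
```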

### Run tests
```bash
# Using uv (basic)
uv run pytest

# Using uv with verbose output
uv run pytest -v

# Using uv with coverage
uv run pytest --cov=src/stofs_event_dashboard --cov-report=term-missing

# Using uv with HTML coverage report
uv run pytest --cov=src/stofs_event_dashboard --cov-report=html

# Using the test runner script (recommended)
python run_tests.py --verbose
python run_tests.py --coverage
python run_tests.py --coverage --html
python run_tests.py --file tests/test_basic.py

# Using VS Code tasks
# Open Command Palette (Cmd+Shift+P / Ctrl+Shift+P)
# Type "Tasks: Run Task" and select "Run Tests" or "Run Tests with Coverage"
```

### Code formatting and linting
```bash
# Format code
uv run black src tests

# Sort imports
uv run isort src tests

# Check linting
uv run flake8 src tests

# Type checking
uv run mypy src
```
167 changes: 167 additions & 0 deletions TESTING.md
@@ -0,0 +1,167 @@
# Testing Guide for STOFS Event Dashboard

This document explains how to run tests for the STOFS Event Dashboard project.

## Prerequisites

1. Make sure you have `uv` installed
2. Install the development dependencies:
```bash
uv sync --extra dev
```

## Running Tests

### Basic Test Execution

```bash
# Run all tests
uv run pytest

# Run tests with verbose output
uv run pytest -v

# Run specific test file
uv run pytest tests/test_basic.py

# Run specific test function
uv run pytest tests/test_basic.py::test_version
```
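
As a point of reference, a version test like the one targeted above might look something like the following sketch; it assumes the package exposes `__version__`, which the actual `test_basic.py` may not rely on.

```python
# Hypothetical sketch of tests/test_basic.py::test_version;
# assumes the package defines __version__ (an assumption, not confirmed).
import stofs_event_dashboard

def test_version():
    """The package should expose a non-empty version string."""
    assert isinstance(stofs_event_dashboard.__version__, str)
    assert stofs_event_dashboard.__version__ != ""
```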

### Test Coverage

```bash
# Run tests with coverage report
uv run pytest --cov=src/stofs_event_dashboard --cov-report=term-missing

# Generate HTML coverage report
uv run pytest --cov=src/stofs_event_dashboard --cov-report=html

# Open HTML coverage report
open htmlcov/index.html # macOS
xdg-open htmlcov/index.html # Linux
```

### Using the Test Runner Script

The project includes a test runner script (`run_tests.py`) that provides a simpler interface:

```bash
# Basic usage
python run_tests.py

# With verbose output
python run_tests.py --verbose

# With coverage
python run_tests.py --coverage

# With HTML coverage report
python run_tests.py --coverage --html

# Run specific test file
python run_tests.py --file tests/test_basic.py

# Combine options
python run_tests.py --verbose --coverage --html
```
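
For context, a wrapper with this interface can be quite small. The sketch below assumes `run_tests.py` simply shells out to pytest via `uv`; the actual script may differ.

```python
# Hypothetical sketch of a run_tests.py-style wrapper; the real script may differ.
import argparse
import subprocess
import sys

def main() -> int:
    parser = argparse.ArgumentParser(description="Run the test suite via uv/pytest.")
    parser.add_argument("--verbose", action="store_true", help="pass -v to pytest")
    parser.add_argument("--coverage", action="store_true", help="collect coverage")
    parser.add_argument("--html", action="store_true", help="write an HTML coverage report")
    parser.add_argument("--file", help="run a single test file")
    args = parser.parse_args()

    cmd = ["uv", "run", "pytest"]
    if args.verbose:
        cmd.append("-v")
    if args.coverage:
        cmd.append("--cov=src/stofs_event_dashboard")
        cmd.append("--cov-report=html" if args.html else "--cov-report=term-missing")
    if args.file:
        cmd.append(args.file)
    # Propagate pytest's exit code so callers (and CI) can detect failures.
    return subprocess.run(cmd).returncode

if __name__ == "__main__":
    sys.exit(main())
```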

### Using VS Code Tasks

The project includes VS Code tasks for running tests:

1. Open Command Palette (`Cmd+Shift+P` / `Ctrl+Shift+P`)
2. Type "Tasks: Run Task"
3. Select either:
- "Run Tests" - Basic test execution
- "Run Tests with Coverage" - Tests with coverage report

## Test Structure

The tests are organized in the `tests/` directory:

- `test_basic.py` - Basic functionality tests including version and imports
- `test_dashboard.py` - Dashboard-specific functionality tests
- `test_process_event_data.py` - Data processing functionality tests

## Test Behavior

### Skipped Tests

Some tests may be skipped due to:
- Missing dependencies
- Compatibility issues with Python 3.13 and certain libraries
- Missing data files

This is expected behavior and doesn't indicate a problem with the testing setup.
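
One standard way to achieve this graceful skipping is pytest's `importorskip`; here is a sketch, assuming the suite uses this or an equivalent pattern:

```python
import pytest

# If stormevents cannot be imported (e.g. under a Python version it does
# not yet support), every test in this module is skipped instead of erroring.
stormevents = pytest.importorskip("stormevents")

def test_stormevents_importable():
    assert stormevents.__name__ == "stormevents"
```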

### Test Coverage

The project currently achieves about 18% test coverage. The main areas covered by tests are:
- Basic module imports and structure
- Dashboard utility functions
- Function signatures and basic behavior

## Troubleshooting

### Common Issues

1. **Import Errors**: Make sure all dependencies are installed with `uv sync --extra dev`

2. **Missing Data Directory**: Some tests depend on the `data/` directory; if it doesn't exist, they are expected to skip gracefully rather than fail.

3. **Compatibility Issues**: Some dependencies may have compatibility issues with Python 3.13. The tests are designed to skip these gracefully.

### Dependency Issues

If you encounter issues with the `stormevents` library or other dependencies:

1. Check the dependency versions in `pyproject.toml`
2. Try updating dependencies: `uv sync --extra dev`
3. If issues persist, the tests will skip the problematic imports

## Configuration

Test configuration is managed through `pyproject.toml`:

```toml
[tool.pytest.ini_options]
minversion = "6.0"
addopts = "-ra -q --strict-markers --strict-config"
testpaths = ["tests"]
python_files = ["test_*.py", "*_test.py"]
```

Coverage configuration:

```toml
[tool.coverage.run]
source = ["src/stofs_event_dashboard"]
branch = true

[tool.coverage.report]
show_missing = true
skip_covered = false
exclude_lines = [
"pragma: no cover",
"def __repr__",
"if self.debug:",
"if settings.DEBUG",
"raise AssertionError",
"raise NotImplementedError",
"if 0:",
"if __name__ == .__main__.:",
"class .*\\bProtocol\\):",
"@(abc\\.)?abstractmethod",
]
```

## Adding New Tests

When adding new tests:

1. Place them in the appropriate test file in `tests/`
2. Follow the existing pattern of handling import errors gracefully (see the sketch below)
3. Use descriptive test names and docstrings
4. Include both positive and negative test cases where appropriate
5. Update this documentation if adding new test categories
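
A minimal template for a new test module, following the import-handling pattern from item 2 above (names are illustrative, not taken from the existing suite):

```python
# tests/test_example.py -- hypothetical template; names are illustrative.
import pytest

try:
    import stofs_event_dashboard
except ImportError:  # degrade to skipped tests, not collection errors
    stofs_event_dashboard = None

requires_package = pytest.mark.skipif(
    stofs_event_dashboard is None, reason="package failed to import"
)

@requires_package
def test_package_name():
    """Positive case: the package imports and reports its own name."""
    assert stofs_event_dashboard.__name__ == "stofs_event_dashboard"

@requires_package
def test_missing_attribute_raises():
    """Negative case: accessing an undefined attribute raises AttributeError."""
    with pytest.raises(AttributeError):
        _ = stofs_event_dashboard.does_not_exist
```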