Transform CSV data into actionable insights delivered via Slack
Created by AJAL ODORA JONATHAN
Relayboard is a modern data pipeline automation platform that transforms CSV/Google Sheets data into actionable insights delivered directly to your team's Slack channels. It automates the entire data processing workflow from ingestion to delivery.
Watch Relayboard transform CSV data into Slack notifications in real-time:
Demo video: `Screen.Recording.Oct.16.2025.1.1.mov`
- CSV dataset registration via web interface
- One-click pipeline execution
- Real-time processing status updates
- Instant Slack notifications with data previews
- Smart data sampling for large datasets
Many teams struggle with:
- Manual data processing that's time-consuming and error-prone
- Data silos where insights don't reach the right people
- Complex data pipelines that require technical expertise
- Delayed insights that arrive too late to be actionable
Relayboard solves this by providing a "data-to-notification" system that automates the entire workflow.
CSV URL → MinIO Storage → PostgreSQL Staging → dbt Transform → PostgreSQL Warehouse → Slack
- Register CSV datasets via web interface
- Configure Slack webhook destinations
- Execute complete pipeline with single click
- Real-time feedback and error handling
- Frontend: Next.js 15 with Tailwind CSS
- API: NestJS with TypeScript
- Worker: Python/FastAPI for data processing
- Database: PostgreSQL with staging/warehouse schemas
- Storage: MinIO (S3-compatible)
- Transformations: dbt for data modeling
```mermaid
graph TB
    A[Web UI - Next.js] --> B[API - NestJS]
    B --> C[Worker - Python/FastAPI]
    B --> D[PostgreSQL Database]
    B --> E[MinIO Storage]
    C --> D
    C --> E
    C --> F[dbt Transformations]
    F --> D
    C --> G[Slack Webhook]
    H[CSV URL] --> C
    I[User] --> A
```
**Frontend (Next.js)**
- Location: `apps/web/`
- Port: 3000
- Features:
  - Modern, responsive UI with Tailwind CSS
  - Step-by-step pipeline configuration
  - Real-time loading states and feedback
  - Service status monitoring

**API (NestJS)**
- Location: `apps/api/`
- Port: 4000
- Features:
  - RESTful API endpoints
  - Database connection management
  - File storage integration
  - Pipeline orchestration

**Worker (Python/FastAPI)**
- Location: `apps/worker/`
- Port: 5055
- Features:
  - CSV processing and validation
  - PostgreSQL data loading
  - dbt model generation and execution
  - Slack webhook integration
- PostgreSQL: Port 5433 (staging + warehouse schemas)
- MinIO: Port 9000 (storage) + 9001 (console)
- Redis: Port 6379 (caching/queuing)
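
Once the dev stack is running (see the quick start below), you can probe each infrastructure service on its documented port. A minimal sketch; the MinIO route is its standard `/minio/health/live` health endpoint:

```bash
# Confirm the infrastructure containers are up
docker compose -f infra/docker/docker-compose.dev.yml ps

# Probe each service on its documented port
pg_isready -h 127.0.0.1 -p 5433                                   # PostgreSQL
curl -sf http://127.0.0.1:9000/minio/health/live && echo "MinIO OK"
redis-cli -p 6379 ping                                            # expects PONG
```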
- Node.js 18+ and pnpm
- Python 3.11+
- Docker and Docker Compose
- dbt CLI (optional, for local development)
```bash
# Start PostgreSQL, MinIO, and Redis
docker compose -f infra/docker/docker-compose.dev.yml up -d

# Install all workspace dependencies
pnpm install
```

Terminal 1 - API Server:
```bash
pnpm --filter @relayboard/api dev
```

Terminal 2 - Web Interface:
```bash
pnpm --filter @relayboard/web dev
```

Terminal 3 - Worker Service:
```bash
cd apps/worker
pip install -r requirements.txt
./start.sh
```

- Web UI: http://localhost:3000
- API: http://localhost:4000
- Worker: http://localhost:5055
- MinIO Console: http://localhost:9001
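
To confirm the application services came up, you can probe each URL. A hedged sketch: the endpoint list below documents `GET /health`, but whether both the API and the worker serve that route is an assumption:

```bash
curl -sf http://localhost:4000/health && echo "API OK"      # assumes the API serves /health
curl -sf http://localhost:5055/health && echo "Worker OK"   # assumes the worker serves /health
curl -sf http://localhost:3000 >/dev/null && echo "Web OK"
```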
Register a CSV dataset:
```
POST /v1/datasets/csv
{
  "name": "sales_data",
  "csvUrl": "https://example.com/data.csv"
}
```

Configure a Slack webhook:
```
POST /v1/destinations/slack
{
  "webhookUrl": "https://hooks.slack.com/services/YOUR/SLACK/WEBHOOK"
}
```

Run the complete pipeline:
```
POST /v1/pipelines/run
{
  "datasetName": "sales_data"
}
```

Health check:
```
GET /health
```

Data flow:

- User provides CSV URL via web interface
- API downloads CSV and stores in MinIO
- Dataset metadata saved to PostgreSQL
- API triggers worker with pipeline parameters
- Worker downloads CSV from MinIO
- Data loaded into PostgreSQL `staging` schema
- dbt models auto-generated based on CSV schema
- dbt transformations executed
- Results loaded into `warehouse` schema
- Worker queries transformed data
- Results formatted and sent to Slack
- Pipeline status updated in database
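
The same flow can be driven end to end from the command line using the endpoints documented above. A minimal sketch; the URLs assume the local quick-start setup, and the webhook URL is a placeholder:

```bash
# 1. Register a dataset
curl -X POST http://localhost:4000/v1/datasets/csv \
  -H 'Content-Type: application/json' \
  -d '{"name": "sales_data", "csvUrl": "https://example.com/data.csv"}'

# 2. Save a Slack destination
curl -X POST http://localhost:4000/v1/destinations/slack \
  -H 'Content-Type: application/json' \
  -d '{"webhookUrl": "https://hooks.slack.com/services/YOUR/SLACK/WEBHOOK"}'

# 3. Run the pipeline
curl -X POST http://localhost:4000/v1/pipelines/run \
  -H 'Content-Type: application/json' \
  -d '{"datasetName": "sales_data"}'
```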
```sql
-- Dataset registry
CREATE TABLE dataset (
  id SERIAL PRIMARY KEY,
  name TEXT UNIQUE NOT NULL,
  source_kind TEXT NOT NULL, -- 'csv'
  s3_key TEXT NOT NULL,
  created_at TIMESTAMPTZ DEFAULT NOW()
);

-- Destination configuration
CREATE TABLE destination (
  id SERIAL PRIMARY KEY,
  kind TEXT NOT NULL, -- 'slack'
  config_json JSONB NOT NULL,
  created_at TIMESTAMPTZ DEFAULT NOW()
);

-- Pipeline run tracking
CREATE TABLE run (
  id SERIAL PRIMARY KEY,
  dataset_id INT REFERENCES dataset(id),
  status TEXT NOT NULL,
  started_at TIMESTAMPTZ DEFAULT NOW(),
  finished_at TIMESTAMPTZ,
  error TEXT
);
```

Schemas:
- `staging`: raw CSV data loaded from MinIO
- `warehouse`: transformed data from dbt models
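
Pipeline history can be inspected directly from the `run` table. A sketch using `psql` with the dev credentials from the configuration section below:

```bash
# List the ten most recent pipeline runs with their dataset names
psql "postgresql://relayboard:relayboard@127.0.0.1:5433/relayboard" -c "
  SELECT r.id, d.name, r.status, r.started_at, r.finished_at, r.error
  FROM run r
  JOIN dataset d ON d.id = r.dataset_id
  ORDER BY r.started_at DESC
  LIMIT 10;"
```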
The worker automatically generates dbt models based on CSV schema:
```sql
-- Example: sales_data_clean.sql
select
  "date",
  "product_name",
  "quantity",
  "price"
from staging."sales_data"
```

```
dbt/relayboard/
├── dbt_project.yml
├── profiles.yml
└── models/
    ├── example.sql
    └── generated/            # Auto-generated models
        └── {dataset}_clean.sql
```
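
During local development you can also invoke the transformations by hand with the dbt CLI from the prerequisites. A sketch; the model name in the second command is illustrative:

```bash
# Run all models, including the auto-generated ones,
# against the project's own profiles.yml
dbt run --project-dir dbt/relayboard --profiles-dir dbt/relayboard

# Run only the generated model for a single dataset
dbt run --project-dir dbt/relayboard --profiles-dir dbt/relayboard \
  --select sales_data_clean
```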
Using the web interface:

1. Register a dataset:
   - Enter dataset name
   - Provide CSV URL
   - Click "Register CSV"
2. Configure the Slack destination:
   - Enter Slack webhook URL
   - Click "Save Slack Destination"
3. Run the pipeline:
   - Click "Run Pipeline"
   - Monitor progress with loading indicators
   - View success/error feedback

The service status panel shows:
- Real-time status of all services
- Connection indicators
- Service URLs and ports
For development:
```bash
# Start all services
docker compose -f infra/docker/docker-compose.dev.yml up -d
pnpm --filter @relayboard/api dev
pnpm --filter @relayboard/web dev
cd apps/worker && ./start.sh
```

For production:
- Use environment variables for configuration (see the sketch after this list)
- Set up proper SSL certificates
- Configure production PostgreSQL
- Use managed MinIO or AWS S3
- Set up monitoring and logging
- Implement proper security measures
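
A minimal sketch of environment-variable-driven configuration at deploy time; the hostnames, bucket name, and the `start` script are assumptions, not part of the repo:

```bash
# Point the API at managed services instead of the dev defaults
export PG_HOST=db.internal PG_PORT=5432
export S3_ENDPOINT=https://s3.amazonaws.com S3_BUCKET=relayboard-prod
export WORKER_BASE_URL=http://worker.internal:5055

# Hypothetical production start command
pnpm --filter @relayboard/api start
```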
API (`.env`):
```bash
# Database
PG_HOST=127.0.0.1
PG_PORT=5433
PG_USER=relayboard
PG_PASSWORD=relayboard
PG_DATABASE=relayboard

# Storage
S3_ENDPOINT=http://127.0.0.1:9000
S3_ACCESS_KEY=relayboard
S3_SECRET_KEY=relayboard123
S3_BUCKET=relayboard

# Services
WORKER_BASE_URL=http://127.0.0.1:5055
SLACK_WEBHOOK=https://hooks.slack.com/services/YOUR/SLACK/WEBHOOK
```

Web (`.env.local`):
```bash
NEXT_PUBLIC_API_BASE=http://localhost:4000
```

PostgreSQL Connection Error:
```bash
# Check if PostgreSQL is running
docker ps | grep postgres

# Check for port conflicts
lsof -i :5433
```

MinIO Connection Error:
```bash
# Check MinIO status
docker logs docker-minio-1

# Access the MinIO console
open http://localhost:9001
```

Worker Service Error:
```bash
# Check Python dependencies
cd apps/worker
pip install -r requirements.txt

# Check the dbt installation
dbt --version
```

Implemented:
- CSV data ingestion
- PostgreSQL integration
- dbt transformations
- Slack delivery
- Web interface
Planned:
- Google Sheets integration with OAuth
- Advanced dbt models with business logic
- Data preview with DuckDB
- Pipeline scheduling
- Error handling and retry logic
- User management and RBAC
- Audit logs and data lineage
- Advanced analytics and dashboards
- API rate limiting and security
- Multi-tenant support
- Horizontal scaling
- Advanced caching strategies
- Performance monitoring
- CI/CD pipelines
- Kubernetes deployment
1. Fork the repository
2. Create a feature branch
3. Make your changes
4. Add tests if applicable
5. Submit a pull request
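
In commands, a typical flow looks like this (the fork URL and branch name are placeholders; opening the PR via the `gh` CLI is optional):

```bash
git clone https://github.com/YOUR_USERNAME/relayboard.git
cd relayboard
git checkout -b feature/my-change

# ...make your changes and add tests...

git add -A
git commit -m "Describe your change"
git push -u origin feature/my-change
# Then open a pull request from your fork (or: gh pr create)
```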
- Frontend: ESLint + Prettier
- API: NestJS conventions
- Worker: Python PEP 8
- Database: PostgreSQL best practices
AJAL ODORA JONATHAN - @ODORA0
- GitHub: https://github.com/ODORA0
- LinkedIn: Available on GitHub profile
- Tech Stack: Java, TypeScript, JavaScript, Python, React, Node.js, Firebase, AWS
Experienced full-stack developer with expertise in:
- Backend: Java, Python, Node.js, NestJS
- Frontend: React, TypeScript, Next.js, Tailwind CSS
- Cloud: AWS, Firebase, Docker
- Data: PostgreSQL, dbt, data pipelines
- Healthcare: OpenMRS contributor and billing systems expert
This project is licensed under the MIT License - see the LICENSE file for details.
- Built with modern web technologies
- Inspired by data engineering best practices
- Designed for developer experience and ease of use
- Special thanks to the open-source community
Ready to transform your data into actionable insights? Start with Relayboard today!
Created with ❤️ by AJAL ODORA JONATHAN