How to run ONNX models in score
1. Drop an ai-model process in your timeline.
Let's try with the BlazePose Detector. It can read from a camera or an image input texture, run the model, and output results in real time.
2. Download and load a model
- BlazePose model from [Ailia](https://github.com/axinc-ai/ailia-models/tree/master/pose_estimation/blazepose)
  - Download link: https://storage.googleapis.com/ailia-models/blazepose-fullbody/pose_landmark_full.onnx
- YOLOv8 pose, download link: https://huggingface.co/Xenova/yolov8-pose-onnx/tree/main
(until we close [#1701], in progress: ossia/score#1710)
- DONE: add the models to the package manager
- DONE: filter by "AI" category
- TODO: add the rest of the models available in the latest release
3. Add an Input camera device (or video)
Choose a camera resolution that is as close to square ◾ as possible.
- For the BlazePose fullbody landmark model, change the (input) size to 256 × 256
Or drag & drop your video into score and pipe it into the ai-model process input.
- TODO: automatically detect the input size and pass it to the connected input camera device? (The sketch below shows how the size can be read from the model file.)
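Until that's automated, you can read the expected input size straight from the ONNX file outside of score. A minimal sketch using onnxruntime, assuming `pip install onnxruntime` and the pose_landmark_full.onnx file downloaded above; the exact shape layout depends on how the model was converted:

```python
# Sketch: print each model input's name, shape and type so you know what
# resolution to feed it (BlazePose fullbody expects 256 x 256).
import onnxruntime as ort

sess = ort.InferenceSession("pose_landmark_full.onnx")
for inp in sess.get_inputs():
    print(inp.name, inp.shape, inp.type)  # e.g. a shape like [1, 256, 256, 3]
```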
4. Add a "Window" device for output
Adjust the output size to match the model's output (a sketch for inspecting it follows).
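As with the inputs, the model's declared outputs can be listed with onnxruntime; a minimal sketch under the same assumptions as above (note that what score renders into the Window device may differ from these raw tensors):

```python
# Sketch: list the model's outputs to see their names and shapes.
import onnxruntime as ort

sess = ort.InferenceSession("pose_landmark_full.onnx")
for out in sess.get_outputs():
    print(out.name, out.shape, out.type)
```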
5. Play
Add a trigger if you want it to run forever.

6. Extract keypoints
To extract the keypoints, we have to understand the model's output. For example, here are the ordered keypoints output by this pose estimation model (BlazePose).
Here are the keypoints for RTMPose.
And then, with jq in the object filter, you can extract them like this (see the sketch after this list for the assumed message shape):
- Left wrist = .keypoints[15].position[]
- Right wrist = .keypoints[16].position[]
- Nose = .keypoints[0].position[]
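For reference, here is a minimal Python sketch of what those jq filters select. The message shape (a `keypoints` array of objects with a `position` field) follows from the filters above; the coordinate values and the 33-keypoint count (BlazePose's landmark set) are illustrative:

```python
# Sketch: the jq filters above, replayed in Python on a dummy message.
import json

raw = json.dumps({"keypoints": [{"position": [0.5, 0.5, 0.0]} for _ in range(33)]})
msg = json.loads(raw)

nose        = msg["keypoints"][0]["position"]    # jq: .keypoints[0].position[]
left_wrist  = msg["keypoints"][15]["position"]   # jq: .keypoints[15].position[]
right_wrist = msg["keypoints"][16]["position"]   # jq: .keypoints[16].position[]
print(nose, left_wrist, right_wrist)
```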
7. Use keypoints
Once your model is running in score, and you've got camera input connected, you can extract meaningful keypoints (e.g. wrists, nose) and use them to control any parameter.
You can also send them over OSC to external tools like Wekinator for gesture recognition or interactive control:
pipe the video input into the ai-model process, and the object filter's outputs to the Wekinator address wekinator:/wek/inputs.
To package those 3 keypoints into a 9-value OSC message that Wekinator accepts, we use the Array Combinor process (a sketch of the resulting message follows).
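Outside score, the equivalent message looks like this. A minimal Python sketch, assuming `pip install python-osc`, Wekinator listening on its default input port 6448, and an (x, y, z) layout per keypoint; all coordinate values are dummies:

```python
# Sketch: send 3 keypoints (x, y, z each) as one 9-float OSC message to
# Wekinator's default /wek/inputs address.
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("127.0.0.1", 6448)  # Wekinator's default input port

left_wrist  = [0.32, 0.61, 0.05]  # x, y, z (illustrative values)
right_wrist = [0.70, 0.63, 0.04]
nose        = [0.51, 0.20, 0.02]

# Wekinator reads a flat list of floats on /wek/inputs.
client.send_message("/wek/inputs", left_wrist + right_wrist + nose)
```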

General TODO
- Check text from v3.4.0 release https://github.com/ossia/score/releases/tag/v3.4.0
- How to add docs to https://ossia.io/score-docs/processes/ai-recognition.html#blazepose? Where does the content live?
- Link to the Wekinator docs (to be created)