sampling : add support for GPU sampling (wip) #17004
Draft · danbev wants to merge 6 commits into ggml-org:master from danbev:gpu-sampling · +2,278 −193
Conversation
One place this would be useful immediately is the diffusion-cli. I'm happy to test this when it's ready.
This is a work in progress to add support for GPU sampling.
The motivation for this feature is to enable sampling to be performed
directly on the GPU as part of the computation graph being executed,
allowing for some or all of the sampling to be done on the GPU.
For example, the GPU sampler chain might select/sample a token directly
in which case only the sampled token needs to be transferred from
device memory to host memory.
It is also possible for the GPU samplers to perform filtering of the
logits, or compute and filter the probability distribution, in which
case only the filtered logits or probabilities need to be transferred
back to system memory for further processing by CPU samplers.
Currently, GPU sampling works in a similar manner to pooling: it is
performed by a function that is called by build_graph:
```c++
// add GPU sampling layers (if any)
llm->build_sampling(*this, params);
```
GPU samplers can be configured by creating sampler chains, where each
sampler chain is associated with a specific sequence id:
```c++
struct llama_sampler_chain_params params = llama_sampler_chain_default_params();
struct llama_sampler * chain = llama_sampler_chain_init(params);
llama_sampler_chain_add(chain, llama_sampler_gpu_init_greedy());
std::vector<llama_sampler_seq_config> sampler_configs = {
    { 0, chain }
};
```
The struct is defined as:
```c++
struct llama_sampler_seq_config {
llama_seq_id seq_id;
struct llama_sampler * sampler;
};
```
These sampler configs are then passed as context params:
```c++
llama_context_params cparams = llama_context_default_params();
cparams.samplers = sampler_configs.data();
cparams.n_samplers = sampler_configs.size();
```
When the graph is built, the configured sampler's _apply function is
called which allows them to add operations/nodes to the computation
graph.
This enables the sampling to happen fully, or partially on the GPU. The
samplers could sample a single token in which case that is what will be
transferred from the device memory to host memory after llama_decode has
been called. The sampled token can then be retrieved using:
```c++
llama_token id = llama_get_sampled_token_ith(test_ctx.ctx, index);
```
It is also possible to run a GPU sampler that only filters the logits,
in which case only the filtered logits are transferred back to the host
and sampling can proceed on the CPU with the normal (CPU) sampler chain.
In this case the CPU samplers are configured as usual but they will now
operate on already filtered logits.
Similar to the above handling of logits, it is possible for a GPU
sampler to compute the full probability distribution and transfer that
to the host. The CPU samplers can then operate on those probabilities.
Building and running the tests:
Download a model for testing:
```console
$ cd models && wget https://huggingface.co/ggml-org/models/resolve/main/tinyllamas/stories15M-q4_0.gguf
```
Building the test:
```console
$ cmake --build build --target test-gpu-sampling -j8
```
Running all tests:
```console
$ env LLAMACPP_TEST_MODELFILE=../models/stories15M-q4_0.gguf \
ctest --test-dir build -R '^test-gpu-sampling$' -V
```
The following individual tests are available:
```console
$ ctest --test-dir build -N -R test-gpu-sampling-
Test 35: test-gpu-sampling-greedy
Test 36: test-gpu-sampling-temp
Test 37: test-gpu-sampling-softmax
Test 38: test-gpu-sampling-top_k
Test 39: test-gpu-sampling-top_p
Test 40: test-gpu-sampling-mul_seq
Total Tests: 6
```
These can be run individually, for example:
```console
$ env LLAMACPP_TEST_MODELFILE=../models/stories15M-q4_0.gguf \
ctest --test-dir build -R 'test-gpu-sampling-temp' -V
```
TODO:
- [ ] Allow GPU samplers to pre-allocate state tensors
- [ ] Integrate GPU samplers with llama-server
- [ ] Implement true top-p sampler on GPU
- [ ] Add missing GPU samplers (e.g. typical, mirostat, etc)
This commit updates the llama_sampler_gpu_top_p_apply_ggml function to use ggml_div_inplace instead of ggml_div, as the latter generated an error on WebGPU backends:
```console
/home/danbev/work/ai/llama.cpp-debug/ggml/src/ggml-webgpu/ggml-webgpu.cpp:2146: ggml_webgpu: Device error! Reason: 2, Message: Writable storage buffer binding aliasing found between [BindGroup "div_f32"] set at bind group index 0, binding index 1, and [BindGroup "div_f32"] set at bind group index 0, binding index 2, with overlapping ranges (offset: 0, size: 32) and (offset: 0, size: 32) in [Buffer "allocated_buffer"]. - While encoding [ComputePassEncoder (unlabeled)].DispatchWorkgroups(1, 1, 1). - While finishing [CommandEncoder (unlabeled)].
```
It also sets ggml_data-filtered_ids as an output tensor, as it might otherwise be reused before being read.
This commit adds a new cumulative sum (cumsum) operation to the ggml library. The motivation for this is to be able to implement a GPU distribution sampler. I notice that there is work underway to add cumsum in other PRs, so this commit can probably be removed once those are merged.
This commit adds support for performing distribution sampling on the GPU. It adds a function to the sampler interface for setting input tensors, which will be called after the computation graph has been built and scheduled. For the dist sampler this allows it to set a random uniform value that is used to sample from the cumulative distribution.
This commit adds a function to set the ggml_backend_sched_t and ggml_backend_t for GPU-based samplers. The motivation for this is that the tensors a GPU sampler creates (new tensors and operations) should be allocated on the same backend as the logits tensor produced by the model's graph. With this change the samplers can use the scheduler and backend to set the correct backend for the tensors they create. I'll try to find a nice way of enforcing this, as it would be easy to miss doing this step otherwise.
This commit adds checks to see if the target backend supports operations like argsort (used by top-k sampling) and cont. Currently these operations are not supported in all backends (e.g., the Metal backend) and will cause runtime errors. The checks in this commit avoid the error, but if you print the scheduler's debug table (GGML_SCHED_DEBUG=2) you can see that there will be a split in the graph to use the CPU backend for these operations, which defeats the purpose of GPU sampling. We should probably fix/add support for the operations that the GPU samplers use to have this work most effectively. Metal issues:
- Metal ARGSORT only supports ne[0] <= 1024; GPU samplers need to sort the full vocabulary.
- CUMSUM is not implemented for the Metal backend. It was just added by me in a recent commit, and there are other open PRs that also look to be in the process of adding support for it.
Basically I think we should have support in all backends for the operations that the GPU samplers use. At the moment this is not the case and currently if the target backend device (the same device that holds the logits tensor) does not support the operation a warning is printed similar to this: