Q4/Q8 Tiled Gemm Optimization. #16999

base: master

Conversation
This patch implements tiled GEMM for large blocks: we pack 64x64 blocks and perform the matmul on the packed tiles. It gives a 30~50% improvement in llama-bench and llama-batched-bench with Meta-Llama3-8B quantized models (Q4_0 and Q8_0).

Signed-off-by: Shalini Salomi Bodapati <[email protected]>
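For readers, a minimal sketch of the tiled loop structure the description refers to (the helper names `pack_A`, `pack_B`, and `gemm_kernel`, and the `kc` depth-tile size, are illustrative assumptions, not the patch's actual identifiers):

```cpp
// Illustrative tiling sketch: walk the output in mc x nc tiles, pack the
// A and B panels into contiguous buffers, then run the kernel on the
// packed data. The real patch operates on Q4_0/Q8_0 quantized blocks.
const int64_t mc = 64, nc = 64;   // 64x64 tiles per the description
const int64_t kc = 64;            // depth tile size (assumed)

for (int64_t j = 0; j < n; j += nc) {            // output column panels
    for (int64_t p = 0; p < k; p += kc) {        // depth slices
        pack_B(B, B_pack, p, j, kc, nc);         // pack once, reuse below
        for (int64_t i = 0; i < m; i += mc) {    // output row panels
            pack_A(A, A_pack, i, p, mc, kc);
            gemm_kernel(A_pack, B_pack, C, i, j, mc, nc, kc);
        }
    }
}
```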
@taronaeo Can you please review this PR?

@ggerganov Can you please review this PR?
```cpp
#include <pthread.h>

typedef vector unsigned char vec_t;
typedef __vector_quad acc_t;

// Per-thread scratch buffers for the packed tiles, kept alive across
// calls via a pthread TLS key and freed by thread_cleanup() on thread exit.
static pthread_key_t t_data_key;
typedef struct {
    vec_t * A_pack;
    vec_t * B_pack;
    int   * comparray;
} thread_scratchpad_t;

void thread_cleanup(void * arg) {
    thread_scratchpad_t * data = (thread_scratchpad_t *) arg;
    if (data) {
        delete[] data->A_pack;
        delete[] data->B_pack;
        delete[] data->comparray;
        delete data;
    }
}
static bool key_created = false;
```
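For context, a minimal sketch of how such a scratchpad is typically fetched on the hot path (this getter is not part of the hunk above; `get_scratchpad` and the `pthread_once` guard are illustrative assumptions):

```cpp
// Illustrative only: obtain the per-thread scratchpad before packing a
// tile, allocating it lazily on first use. mc/nc/kc are the tile sizes.
static pthread_once_t t_key_once = PTHREAD_ONCE_INIT;  // assumed once-guard

static void make_key(void) {
    pthread_key_create(&t_data_key, thread_cleanup);
}

static thread_scratchpad_t * get_scratchpad(int64_t mc, int64_t nc, int64_t kc) {
    pthread_once(&t_key_once, make_key);  // create the TLS key exactly once
    thread_scratchpad_t * data =
        (thread_scratchpad_t *) pthread_getspecific(t_data_key);
    if (data == NULL) {
        // First use on this thread: allocate once, then reuse for every tile.
        data            = new thread_scratchpad_t;
        data->A_pack    = new vec_t[mc * kc * 2];
        data->B_pack    = new vec_t[nc * kc * 2];
        data->comparray = new int[mc * kc];
        pthread_setspecific(t_data_key, data);
    }
    return data;
}
```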
It would be better to avoid dynamic allocations - none of the code currently uses those. The mechanism for this is to use the wdata from ggml_compute_params to store scratch data. You'll need to reserve the worst-case wsize for your case.
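For reference, a minimal sketch of what reserving the worst-case wsize could look like, given the per-thread layout used below (the helper name and the `n_threads` parameter are illustrative assumptions, not ggml's actual sizing code):

```cpp
// Hypothetical sizing helper: reserve enough wdata for every thread's
// packed A/B tiles plus the compensation array, with alignment slack.
static size_t tiled_gemm_wsize(int64_t mc, int64_t nc, int64_t kc, int n_threads) {
    const size_t ALIGN = 128;
    size_t per_thread = 0;
    per_thread += sizeof(vec_t) * mc * kc * 2;  // A_pack
    per_thread += sizeof(vec_t) * nc * kc * 2;  // B_pack
    per_thread += sizeof(int)   * mc * kc;      // comparray
    per_thread += 2 * ALIGN;                    // room for alignment fix-ups
    return per_thread * (size_t) n_threads;     // worst case over all threads
}
```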
@ggerganov Thank you for the input. I tried to remove the dynamic allocations, but I lose performance without the pthread-based code. Below is the thread-local scratchpad rewritten to use wdata, followed by the performance comparison.
```cpp
void matmul_tiled(const ggml_compute_params * params,
                  int64_t m, int64_t n, int64_t mc, int64_t nc, int64_t kc) {
    char * wdata = (char *) params->wdata;
    constexpr size_t ALIGN = 128;
    auto align_ptr = [&](char * ptr, size_t alignment) {
        return (char *) (((uintptr_t) ptr + alignment - 1) & ~(alignment - 1));
    };
    char * ptr = align_ptr(wdata, ALIGN);
    vec_t * A_pack = (vec_t *) ptr; ptr += sizeof(vec_t) * mc * kc * 2;
    vec_t * B_pack = (vec_t *) ptr; ptr += sizeof(vec_t) * nc * kc * 2;
    int * comparray = (int *) align_ptr(ptr, ALIGN); // integer part aligned too
    ptr += sizeof(int) * mc * kc;
    // rest of the original matmul_tiled() code unchanged
}
```
| Benchmark (llama-bench) | Baseline | pthread-based TLS | ggml wdata-based TLS |
|---|---|---|---|
| pp128 | 69 t/s | 89 t/s | 36 t/s |
| pp256 | 69 t/s | 94 t/s | 36 t/s |
This regression is likely due to:
- Loss of persistent per-thread cache locality: the pthread-based version reused the same buffers effectively across tiles.
- Higher memory-initialization cost or shared-buffer contention across threads (see the sketch below).
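To illustrate the contention point, a sketch (under assumptions, not a verified fix) of carving a private, aligned slice of wdata per thread via `params->ith`, so each thread keeps reusing its own region across tiles:

```cpp
// Illustrative: partition wdata into per-thread regions so the packed
// buffers stay thread-private across tiles. ith comes from
// ggml_compute_params; per_thread must match what was reserved in wsize.
static char * thread_scratch_base(const ggml_compute_params * params,
                                  int64_t mc, int64_t nc, int64_t kc) {
    const size_t ALIGN = 128;
    const size_t per_thread =
          sizeof(vec_t) * mc * kc * 2   // A_pack
        + sizeof(vec_t) * nc * kc * 2   // B_pack
        + sizeof(int)   * mc * kc       // comparray
        + 2 * ALIGN;                    // alignment slack
    return (char *) params->wdata + (size_t) params->ith * per_thread;
}
```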
I have also tried static allocation on the stack with just this code, but it suffers from a similar perf loss (38 t/s):

```cpp
vec_t A_pack[mc * kc * 2];
vec_t B_pack[nc * kc * 2];
int   comparray[mc * kc];
```

Can you please suggest?