Integrate Marlin Kernels for Int4 GPTQ inference #2497
Merged
Changes from 7 commits
Commits (52)
f30ed49
marlin
afd6547
Merge branch 'marlin' of github.com:neuralmagic/vllm into marlin
837d344
added marlin
7a43b29
trying to load packed weights turning out to be tricky
e034640
trying to load packed weights turning out to be tricky due to qkv
15e8f9c
integrated marlin for single gpu
d8286fb
Update llama.py
robertgshaw2-redhat 8bc625f
Fixes to Marlin quantization to allow execution via CUDA graphs captu…
alexm-redhat 2691e89
Integrate @efrantar's changes for CUDA graphs
alexm-redhat 92f7290
review comments based on zhyncs
alexm-redhat bc10e4b
(1) Integrate the latest changes from Elias that improve large batch …
alexm-redhat 47987da
add bug fix
alexm-redhat 43aa818
refactored some of alex's work to be consistent with the gptq config
5906a60
updated to load model based on hf_config from AutoGPTQ
8dfeaa2
Reduce Marlin's kernel limitation of thread_n from 256 to 64 (to avoi…
alexm-redhat c7fb928
Update checks related to MarlinConfig
alexm-redhat 1ea85f3
formatting
alexm-redhat c876b79
Merge branch 'main' into marlin
robertgshaw2-redhat a435c97
Update pybind.cpp
robertgshaw2-redhat 90e8b8f
Update ops.h
robertgshaw2-redhat b03af7d
Update ops.h
robertgshaw2-redhat 9192287
readded marlin
ce50dd4
Bug fix for determination of the scales size in marlin layer
alexm-redhat 5a305d3
Ensure marlin only compiles for GPU compute capability >= 8.0
alexm-redhat b1773aa
fix marlin compilation again
alexm-redhat 036e0ca
Merge branch 'vllm-project:main' into marlin
robertgshaw2-redhat d63627e
added marlin test
18981b1
added marlin test
828c621
updated skipping logic
4f1759b
updated skipping logic
f1714e9
added memory profiling
e3a4706
added memory profiling
efd886c
test wout memory utilization
70f5850
updating memory profiling
567fe38
adding more profiling
01f5e40
updating memory profiling
fc5310c
removed memory profiling
99ab19d
cleaned up
eabeea6
added newline
d064595
ran ./format.sh
721351e
Merge branch 'upstream-main' into marlin
9b1bc5f
merged into upstream main
013f10f
Update test_marlin.py
robertgshaw2-redhat 7f2165e
Update test_marlin.py
robertgshaw2-redhat 79081ff
Merge branch 'main' into marlin
robertgshaw2-redhat 7a9b828
updated retry testing to use pytest-flaky rather than implementing th…
c23902f
missed newline
e7aba66
formatting
2403f7d
removed silly print
aabaed2
added license
a67dc8d
format
8ff42c0
minor change for ruff
@@ -0,0 +1,188 @@
import numpy as np
from typing import Any, Dict, List, Optional

import torch
from torch.nn.parameter import Parameter

from vllm._C import ops
from vllm.model_executor.layers.linear import (LinearMethodBase,
                                               set_weight_attrs)
from vllm.model_executor.layers.quantization.base_config import (
    QuantizationConfig)

# Essentially all reasonable GPUs have fewer than 256 SMs, so this should be
# safe for now.
MAX_SMS = 256
# Tile size used by the Marlin kernels.
TILE_SIZE = 16
# 4-bit values packed into a 32-bit dtype.
PACK_FACTOR = 32 // 4


class MarlinConfig(QuantizationConfig):
    """Config class for Marlin.

    Reference: https://github.com/IST-DASLab/marlin/tree/master
    """

    def __init__(
        self,
        group_size: int,
    ) -> None:
        self.group_size = group_size
        # 4 bits packed into an int32.
        self.pack_factor = 32 // 4
        # Tile size of 16 used by Marlin.
        self.tile_size = 16

        # todo(rib-2): add channelwise support (-1).
        if self.group_size != 128:
            raise ValueError(
                "Currently, only group size 128 is supported for Marlin, "
                f"but got group_size = {self.group_size}.")

    def __repr__(self) -> str:
        return f"MarlinConfig(group_size={self.group_size})"

    @classmethod
    def get_name(cls) -> str:
        return "marlin"

    @classmethod
    def get_supported_act_dtypes(cls) -> List[torch.dtype]:
        return [torch.half]

    @classmethod
    # Need to figure it out
    def get_min_capability(cls) -> int:
        return 60

    @classmethod
    def get_config_filenames(cls) -> List[str]:
        return ["quantize_config.json"]

    @classmethod
    def from_config(cls, config: Dict[str, Any]) -> "MarlinConfig":
        group_size = cls.get_from_keys(config, ["group_size"])
        return cls(group_size)

    def get_linear_method(self) -> "MarlinLinearMethod":
        return MarlinLinearMethod(self)

    def get_scaled_act_names(self) -> List[str]:
        return []


class MarlinLinearMethod(LinearMethodBase):
    """Linear method for Marlin.

    Args:
        quant_config: The Marlin quantization config.
    """

    def __init__(self, quant_config: MarlinConfig):
        self.quant_config = quant_config
        self._perm_len = 1024

    def create_weights(
        self,
        input_size_per_partition: int,
        output_size_per_partition: int,
        input_size: int,
        output_size: int,
        params_dtype: torch.dtype,
    ) -> Dict[str, Any]:
        del output_size  # Unused.
        if params_dtype != torch.float16:
            raise ValueError(
                f"The params dtype must be float16, but got {params_dtype}")
        if input_size_per_partition % self.quant_config.group_size != 0:
            raise ValueError(
                "The input size is not aligned with the quantized "
                "weight shape. This can be caused by too large "
                "tensor parallel size.")
        if output_size_per_partition % self.quant_config.pack_factor != 0:
            raise ValueError(
                "The output size is not aligned with the quantized "
                "weight shape. This can be caused by too large "
                "tensor parallel size.")
        if input_size_per_partition % 128 != 0:
            raise ValueError(
                "The input_size_per_partition must be divisible by 128, "
                f"but got {input_size_per_partition}")
        if output_size_per_partition % 256 != 0:
            raise ValueError(
                "The output_size_per_partition must be divisible by 256, "
                f"but got {output_size_per_partition}")

        # Check that we have at least 4 tiles horizontally in the shard.
        num_tiles_per_perm = self._perm_len // (
            self.quant_config.tile_size**2)
        if output_size_per_partition % num_tiles_per_perm != 0:
            raise ValueError(
                "Each permutation group must reside on the same gpu.")

        # Quantized 4-bit weights packed into int32.
        qweight = Parameter(
            torch.empty(
                input_size_per_partition // self.quant_config.tile_size,
                output_size_per_partition * self.quant_config.tile_size //
                self.quant_config.pack_factor,
                device="cuda",
                dtype=torch.int32,
            ),
            requires_grad=False,
        )
        set_weight_attrs(
            qweight, {
                "input_dim": 0,
                "output_dim": 1,
                "packed_dim": 1,
                "pack_factor": self.quant_config.pack_factor,
                "tile_size": TILE_SIZE,
            })

        # Scales in float16.
        scales = Parameter(
            torch.empty(
                input_size_per_partition // self.quant_config.group_size,
                output_size_per_partition,
                device="cuda",
                dtype=params_dtype,
            ),
            requires_grad=False,
        )
        set_weight_attrs(
            scales, {
                "input_dim":
                None if input_size == input_size_per_partition else 0,
                "output_dim": 1,
            })

        # Workspace for the Marlin kernels.
        self.workspace = torch.empty(MAX_SMS, dtype=torch.int, device="cuda")

        return {
            "B": qweight,
            "s": scales,
        }

    def apply_weights(self,
                      weights: Dict[str, Any],
                      x: torch.Tensor,
                      bias: Optional[torch.Tensor] = None) -> torch.Tensor:
        qweight = weights["B"]
        scales = weights["s"]

        output = torch.empty(x.shape[:-1] + (scales.shape[1], ),
                             dtype=x.dtype,
                             device=x.device)
        ops.marlin_gemm(x.view(-1, x.shape[-1]), qweight,
                        output.view(-1, output.shape[-1]), scales,
                        self.workspace)

        if bias is not None:
            output = output + bias
        return output
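For reference, below is a minimal, self-contained sketch (not part of the PR) of the shape arithmetic that create_weights implies. The 4096x4096 shard dimensions are hypothetical, and the bit-packing loop at the end only illustrates PACK_FACTOR; it does not reproduce Marlin's 16x16 tile permutation, which the real repacking step applies on top of this layout.

import torch

# Hypothetical shard dimensions; only the relationships mirror the
# checks in create_weights above.
TILE_SIZE = 16         # Marlin tile size
PACK_FACTOR = 32 // 4  # eight 4-bit values per int32
GROUP_SIZE = 128       # the only group size MarlinConfig accepts
input_size_per_partition = 4096
output_size_per_partition = 4096

# Divisibility constraints enforced in create_weights.
assert input_size_per_partition % GROUP_SIZE == 0
assert input_size_per_partition % 128 == 0
assert output_size_per_partition % 256 == 0

# Shape of the tile-packed int32 weight tensor ("B"): (256, 8192).
qweight_shape = (input_size_per_partition // TILE_SIZE,
                 output_size_per_partition * TILE_SIZE // PACK_FACTOR)
# Shape of the float16 group scales ("s"): (32, 4096).
scales_shape = (input_size_per_partition // GROUP_SIZE,
                output_size_per_partition)

# The packed tensor holds exactly one 4-bit value per original weight.
assert (qweight_shape[0] * qweight_shape[1] * PACK_FACTOR ==
        input_size_per_partition * output_size_per_partition)

# Illustration of PACK_FACTOR: eight 4-bit values in one int32.
nibbles = torch.randint(0, 16, (PACK_FACTOR, ), dtype=torch.int32)
packed = torch.zeros((), dtype=torch.int32)
for i, v in enumerate(nibbles):
    packed |= v << (4 * i)

This is also why qweight is declared per 16x16 tile (K // 16 rows of N * 16 // 8 packed int32 columns) rather than as a plain (K, N // 8) matrix: the kernel consumes weights that have already been repacked into Marlin's tiled format.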