ValueError: Couldn't instantiate the backend tokenizer from one of: #101
Description
(gh_transformers-bloom-inference) amd00@MZ32-00:~/llm_dev/transformers-bloom-inference$ python bloom-inference-scripts/bloom-accelerate-inference.py --name /hf_model/bloom --batch_size 1 --benchmark
Using 0 gpus
Loading model /home/amd00/hf_model/bloom
Traceback (most recent call last):
File "/home/amd00/llm_dev/transformers-bloom-inference/bloom-inference-scripts/bloom-accelerate-inference.py", line 49, in <module>
tokenizer = AutoTokenizer.from_pretrained(model_name)
File "/home/amd00/anaconda3/envs/gh_transformers-bloom-inference/lib/python3.10/site-packages/transformers/models/auto/tokenization_auto.py", line 591, in from_pretrained
return tokenizer_class.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs)
File "/home/amd00/anaconda3/envs/gh_transformers-bloom-inference/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 1805, in from_pretrained
return cls._from_pretrained(
File "/home/amd00/anaconda3/envs/gh_transformers-bloom-inference/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 1950, in _from_pretrained
tokenizer = cls(*init_inputs, **init_kwargs)
File "/home/amd00/anaconda3/envs/gh_transformers-bloom-inference/lib/python3.10/site-packages/transformers/models/bloom/tokenization_bloom_fast.py", line 118, in __init__
super().__init__(
File "/home/amd00/anaconda3/envs/gh_transformers-bloom-inference/lib/python3.10/site-packages/transformers/tokenization_utils_fast.py", line 119, in __init__
raise ValueError(
ValueError: Couldn't instantiate the backend tokenizer from one of:
(1) a tokenizers library serialization file,
(2) a slow tokenizer instance to convert or
(3) an equivalent slow tokenizer class to instantiate and convert.
You need to have sentencepiece installed to convert a slow tokenizer to a fast one.
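The error means `AutoTokenizer` could not build a fast (Rust-backed) tokenizer from the files in the local model directory, and the fallback conversion path needs the `sentencepiece` package. A minimal pre-flight check, assuming the fix is simply installing the missing dependencies in the same conda environment (`pip install tokenizers sentencepiece`), could look like this:

```python
# Sketch: verify the fast-tokenizer prerequisites are importable before
# loading the model, instead of failing deep inside from_pretrained().
# Package names here are the standard PyPI ones; whether sentencepiece is
# actually the missing piece depends on the tokenizer files in /hf_model/bloom.
import importlib.util

def check_tokenizer_deps(packages=("tokenizers", "sentencepiece")):
    """Return the list of missing packages needed for fast-tokenizer support."""
    missing = [pkg for pkg in packages
               if importlib.util.find_spec(pkg) is None]
    for pkg in missing:
        print(f"missing: {pkg} -- install it with `pip install {pkg}`")
    return missing

if __name__ == "__main__":
    check_tokenizer_deps()
```

Another thing worth checking is that the directory passed via `--name` actually contains a `tokenizer.json` (the serialization file mentioned in case (1) of the error); if only slow-tokenizer files are present, the conversion path, and therefore `sentencepiece`, is required.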