Baichuan2-13B does not have max_position_embeddings in config (#2903)

* Baichuan2-13B does not have max_position_embeddings in its config; see
  https://huggingface.co/baichuan-inc/Baichuan2-13B-Chat/blob/main/config.json

Signed-off-by: Wang, Yi A <yi.a.wang@intel.com>

* Update server/text_generation_server/models/flash_causal_lm.py

Co-authored-by: Daniël de Kok <me@github.danieldk.eu>

* fmt

Signed-off-by: Wang, Yi A <yi.a.wang@intel.com>

---------

Signed-off-by: Wang, Yi A <yi.a.wang@intel.com>
Co-authored-by: Daniël de Kok <me@github.danieldk.eu>
commit cc8b9650bd (parent e07acc7f68)
Author: Wang, Yi <yi.a.wang@intel.com>
Date:   2025-01-15 22:56:52 +08:00

@@ -1595,7 +1595,9 @@ class FlashCausalLM(Model):
         if max_total_tokens is None:
             if get_support_chunking():
                 model_max_length = self.tokenizer.model_max_length
-                max_position_embeddings = self.config.max_position_embeddings
+                max_position_embeddings = getattr(
+                    self.config, "max_position_embeddings", model_max_length
+                )
                 max_total_tokens = min(
                     num_blocks * BLOCK_SIZE, model_max_length, max_position_embeddings
                 )
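
For context, a minimal standalone sketch of the same fallback outside of TGI
(assumptions: transformers is installed, the Hugging Face Hub is reachable, and
trust_remote_code=True is acceptable since Baichuan ships custom modeling code;
the variable names below are illustrative, not part of TGI):

    from transformers import AutoConfig, AutoTokenizer

    MODEL_ID = "baichuan-inc/Baichuan2-13B-Chat"

    # Baichuan2-13B's config.json omits max_position_embeddings, so a direct
    # attribute access (config.max_position_embeddings) raises AttributeError.
    config = AutoConfig.from_pretrained(MODEL_ID, trust_remote_code=True)
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID, trust_remote_code=True)

    model_max_length = tokenizer.model_max_length
    # getattr() with a default falls back to the tokenizer's limit when the
    # config does not define the field, mirroring the patched line above.
    max_position_embeddings = getattr(
        config, "max_position_embeddings", model_max_length
    )
    print(max_position_embeddings)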