text-generation-inference/integration-tests
Daniël de Kok 095775e05c
launcher: correctly get the head dimension for VLMs (#3116)
* launcher: correctly get the head dimension for VLMs

For most (?) VLMs, the head dimension is in the `text_config`
configuration section. However, since we only queried the top-level
`head_dim` (which typically doesn't exist in VLMs), we would never use
flashinfer. This change adds a method that reads the head dimension from
the top-level `Config` struct, falling back to `text_config` when that
fails.
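The fallback described above can be sketched roughly as follows. This is a minimal illustration, not the launcher's actual code: the struct and field names (`Config`, `TextConfig`, `head_dim`) are assumptions modeled on the commit message, and the real launcher deserializes these from the model's `config.json`.

```rust
// Hypothetical sketch: resolve the head dimension from the top-level
// config, falling back to the nested `text_config` (where most VLMs
// place it). Struct and field names are assumptions for illustration.

struct TextConfig {
    head_dim: Option<usize>,
}

struct Config {
    head_dim: Option<usize>,
    text_config: Option<TextConfig>,
}

impl Config {
    /// Return the head dimension, checking the top level first and
    /// then the nested `text_config` when the top-level value is absent.
    fn head_dim(&self) -> Option<usize> {
        self.head_dim
            .or_else(|| self.text_config.as_ref().and_then(|tc| tc.head_dim))
    }
}

fn main() {
    // Text-only model: `head_dim` lives at the top level.
    let llm = Config {
        head_dim: Some(128),
        text_config: None,
    };
    assert_eq!(llm.head_dim(), Some(128));

    // VLM: `head_dim` is nested under `text_config`.
    let vlm = Config {
        head_dim: None,
        text_config: Some(TextConfig { head_dim: Some(256) }),
    };
    assert_eq!(vlm.head_dim(), Some(256));
}
```

With a lookup like this, the flashinfer eligibility check sees a head dimension for VLMs instead of always getting `None` from the missing top-level field.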

* fix: bump org name in gemma3 test

---------

Co-authored-by: drbh <david.richard.holtz@gmail.com>
2025-03-17 18:19:37 +01:00
fixtures/neuron Avoid running neuron integration tests twice (#3054) 2025-02-26 12:15:01 +01:00
images Pali gemma modeling (#1895) 2024-05-16 06:58:47 +02:00
models launcher: correctly get the head dimension for VLMs (#3116) 2025-03-17 18:19:37 +01:00
neuron Update neuron backend (#3098) 2025-03-12 09:53:15 +01:00
conftest.py Pr 3003 ci branch (#3007) 2025-03-10 17:56:19 +01:00
pyproject.toml Fix tool call2 (#3076) 2025-03-07 19:45:57 +01:00
pytest.ini chore: add pre-commit (#1569) 2024-02-16 11:58:58 +01:00
requirements.txt Fix tool call2 (#3076) 2025-03-07 19:45:57 +01:00
uv.lock Having less logs in case of failure for checking CI more easily. (#3037) 2025-02-19 17:01:33 +01:00