text-generation-inference/integration-tests/models/__snapshots__/test_completion_prompts
drbh dc5f05f8e6
Pr 3003 ci branch (#3007)
* change ChatCompletionChunk to align with "OpenAI Chat Completions streaming API"

Moving after tool_calls2

Signed-off-by: Wang, Yi A <yi.a.wang@intel.com>

Add in buffering.

Signed-off-by: Wang, Yi A <yi.a.wang@intel.com>

fix: handle usage outside of stream state and add tests

Simplifying everything quite a bit.

Remove the unused model_dump.

Clippy.

Clippy?

Ruff.

Upgrade the flake for the latest transformers.

Upgrade after rebase.

Remove potential footgun.

Fix completion test.

* Clippy.

* Tweak for multi-prompt.

* Ruff.

* Update the snapshot a bit.

---------

Co-authored-by: Nicolas Patry <patry.nicolas@protonmail.com>
2025-03-10 17:56:19 +01:00
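The snapshots in this directory cover the chat and completion routes with and without token usage in the stream. As a point of reference for the usage handling described above, here is a minimal Python sketch of the OpenAI-style streaming contract the `*_usage` snapshots reflect: content chunks carry `choices` and no `usage`, and one final chunk carries `usage` with an empty `choices` list. This is not code from the PR; the base URL, API key, and model name are illustrative placeholders for an OpenAI-compatible TGI deployment.

```python
# Minimal sketch, assuming a locally running OpenAI-compatible TGI endpoint.
# base_url, api_key, and model are placeholders, not values from the PR.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8080/v1", api_key="-")

stream = client.chat.completions.create(
    model="tgi",  # placeholder model id
    messages=[{"role": "user", "content": "Say hello"}],
    stream=True,
    stream_options={"include_usage": True},  # opt in to usage in the stream
)

for chunk in stream:
    if chunk.usage is not None:
        # Final chunk: usage is set and choices is empty, per the OpenAI
        # streaming behavior the PR aligns with.
        print("\nusage:", chunk.usage)
    elif chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="")
```

Requesting usage is opt-in via `stream_options`, matching the upstream OpenAI behavior; without it, no usage chunk is emitted.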
Name                                                   Last commit                  Last modified
test_chat_hfhub_nousage.json                           Pr 3003 ci branch (#3007)    2025-03-10 17:56:19 +01:00
test_chat_hfhub_usage.json                             Pr 3003 ci branch (#3007)    2025-03-10 17:56:19 +01:00
test_chat_openai_nousage.json                          Pr 3003 ci branch (#3007)    2025-03-10 17:56:19 +01:00
test_chat_openai_usage.json                            Pr 3003 ci branch (#3007)    2025-03-10 17:56:19 +01:00
test_flash_llama_completion_many_prompts_stream.json   Pr 3003 ci branch (#3007)    2025-03-10 17:56:19 +01:00
test_flash_llama_completion_many_prompts.json          Pr 3003 ci branch (#3007)    2025-03-10 17:56:19 +01:00
test_flash_llama_completion_single_prompt.json         Pr 3003 ci branch (#3007)    2025-03-10 17:56:19 +01:00
test_flash_llama_completion_stream_usage.json          Pr 3003 ci branch (#3007)    2025-03-10 17:56:19 +01:00
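The many-prompts snapshots above exercise the legacy completions route, which accepts a list of prompts in a single request and returns one choice per prompt. A minimal sketch of that call, again assuming an OpenAI-compatible TGI deployment with placeholder base URL, API key, and model name:

```python
# Minimal sketch, not code from the PR: batch completion over /v1/completions.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8080/v1", api_key="-")

response = client.completions.create(
    model="tgi",                                   # placeholder model id
    prompt=["Say this is a test", "Hello there"],  # one choice per prompt
    max_tokens=10,
)

# `index` maps each returned choice back to the prompt that produced it.
for choice in response.choices:
    print(choice.index, choice.text)
```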