text-generation-inference/load_tests/long_prompt2.py
Nicolas Patry 5df8059037
Auto max prefill (#2797)
* Attempt at automatic max batch prefill.

* Taking into account number of shards.

* Adding more cards.

* Adding A100 + H100

* Adding a few more cards.

* Logprobs cost too much.

* h100 better name, and keep factor of 2

* Damn inflated sparse tflops.

* Typo in h100.

* Updated the flops calculation (checked with fvcore).

* chunking by default.

* Fix prefix caching for chat completion since we removed logprobs.

* More tests.

* Dropping all the prefill logprobs.

* Add a flag that enables users to get logprobs back.

* Repairing prompt token counting.

* Fixing a few tests.

* Remove some scaffolding.

* Attempting to reduce the issues (workarounds for now).
2024-12-06 05:52:00 +01:00
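Several of the commits above (shard count, dense vs. "inflated sparse" TFLOPS, the flops calculation cross-checked with fvcore) point at a compute-budget heuristic for picking the max prefill size. The sketch below is a toy illustration of that general idea, not TGI's actual implementation: the table values, the 100 ms budget, and every name here are assumptions.

```python
# Toy illustration (NOT TGI's code): derive a max prefill token budget
# from a card's advertised dense TFLOPS and the number of shards.
# Dense (non-sparse) BF16 figures, per the commit note about avoiding
# "inflated sparse tflops"; treat these numbers as approximate.
CARD_TFLOPS = {
    "a100": 312,
    "h100": 990,
}

def max_prefill_tokens(card: str, num_shards: int, flops_per_token: float) -> int:
    """Estimate how many prompt tokens fit a fixed per-step compute budget.

    flops_per_token is the model's forward-pass FLOPs per prompt token
    (roughly 2 * parameter count for a dense transformer).
    """
    total_tflops = CARD_TFLOPS[card] * num_shards
    # Hypothetical budget: spend ~100 ms worth of peak compute per prefill step.
    budget_flops = total_tflops * 1e12 * 0.1
    return int(budget_flops / flops_per_token)

# Example: 2 shards of a 312-TFLOPS card, 16 GFLOPs per token (~8B params)
print(max_prefill_tokens("a100", 2, flops_per_token=16e9))  # 3900
```

In practice the real heuristic would also have to account for attention's quadratic term and memory limits; this sketch only captures the linear per-token cost.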


# Load test for very long prompts: send ~130k tokens of text to a local
# TGI server through its OpenAI-compatible chat endpoint.
# Corpus: https://www.gutenberg.org/cache/epub/103/pg103.txt
from openai import OpenAI
import os
import requests

# Download the Project Gutenberg text once and cache it locally.
if not os.path.exists("pg103.txt"):
    response = requests.get("https://www.gutenberg.org/cache/epub/103/pg103.txt")
    with open("pg103.txt", "w") as f:
        f.write(response.text)

# Target prompt length in tokens; ~4 characters per token on average.
length = 130000

with open("pg103.txt", "r") as f:
    data = f.read()

messages = [{"role": "user", "content": data[: length * 4]}]

client = OpenAI(base_url="http://localhost:8000/v1", api_key="w")
completion = client.chat.completions.create(
    model="meta-llama/Llama-3.1-8B-Instruct", messages=messages, max_tokens=2
)
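The `data[: length * 4]` slice relies on the rough rule of thumb that English text averages about four characters per token, so a character budget of `length * 4` targets roughly `length` tokens. A minimal sketch of that estimate (the helper name is hypothetical, and real tokenizers will deviate from the 4:1 ratio):

```python
def estimate_tokens(text: str, chars_per_token: int = 4) -> int:
    # Rough heuristic: ~4 characters of English text per token, so a
    # character budget of length * 4 aims at roughly `length` tokens.
    return len(text) // chars_per_token

# Same character budget as the script above: 130000 * 4 characters.
prompt = "x" * (130000 * 4)
print(estimate_tokens(prompt))  # 130000
```

For an exact count you would tokenize with the model's own tokenizer instead; the heuristic is only good enough for sizing a load-test prompt.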