mirror of
https://github.com/huggingface/text-generation-inference.git
synced 2025-04-19 22:02:06 +00:00
* Attempt at automatic max batch prefill.
* Taking into account number of shards.
* Adding more cards.
* Adding A100 + H100
* Adding a few more cards.
* Logprobs cost too much.
* h100 better name, and keep factor of 2
* Damn inflated sparse tflops.
* Typo in h100.
* Updated the flops calculation (checked with fvcore).
* chunking by default.
* Fix prefix caching for chat completion since we removed logprobs.
* More tests.
* Dropping all the prefill logprobs.
* Add a flag that enables users to get logprobs back.
* Repairing prompt token counting.
* Fixing a few tests.
* Remove some scaffolding.
* Attempting to reduces the issues (workarounds for now).
20 lines
426 B
Python
import datasets
import json

# Load the GovReport summarization dataset from the Hugging Face Hub.
dataset = datasets.load_dataset("ccdv/govreport-summarization")

max_new_tokens = 50

conversations = []
for item in dataset["test"]:
    report = item["report"]
    # One single-turn, ShareGPT-style conversation per report.
    messages = [{"from": "human", "value": f"Summarize this report: ```{report}```"}]
    conversations.append({"conversations": messages})

# Write all conversations to a single JSON file.
with open("long.json", "w") as f:
    json.dump(conversations, f, indent=4)
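The script emits a list of records, each holding a `"conversations"` list of `{"from", "value"}` turns. A minimal sketch of that layout, round-tripped through JSON to show what consumers of `long.json` should expect (the sample `report` string here is illustrative, not from the dataset):

```python
import json

# Build one record in the same ShareGPT-style layout the script writes out.
report = "Example report body."
record = {
    "conversations": [
        {"from": "human", "value": f"Summarize this report: ```{report}```"}
    ]
}

# Serialize a list of records and read it back, mirroring long.json's shape.
payload = json.dumps([record], indent=4)
loaded = json.loads(payload)

assert isinstance(loaded, list)
assert loaded[0]["conversations"][0]["from"] == "human"
assert loaded[0]["conversations"][0]["value"].startswith("Summarize this report:")
```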