text-generation-inference/server/text_generation/models
OlivierDehaene 7fbfbb0dc5
feat: Add token streaming using ServerSideEvents support (#36)
Add token streaming using Server-Sent Events (SSE).

The SSE event payloads have the following signatures:

```rust
/// Generation metadata, sent with the final event of a stream.
struct Details {
    finish_reason: String,
    generated_tokens: u32,
    seed: Option<u64>,
}

/// One SSE event. `generated_text` and `details` are `None` on
/// intermediate token events and populated on the final event.
struct StreamResponse {
    token: Token,
    generated_text: Option<String>,
    details: Option<Details>,
}

/// Sent in place of a `StreamResponse` when generation fails.
struct ErrorResponse {
    error: String,
}
```
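A minimal sketch of how a client might consume these events, assuming the distinction above (intermediate events carry only a token; the final event also carries `generated_text` and `details`). The `Token` alias and the `handle_event` helper are hypothetical stand-ins, not part of the API; the real `Token` type is not shown in this listing.

```rust
// Hypothetical stand-in for the real `Token` type, which this listing elides.
type Token = String;

struct Details {
    finish_reason: String,
    generated_tokens: u32,
    seed: Option<u64>,
}

struct StreamResponse {
    token: Token,
    generated_text: Option<String>,
    details: Option<Details>,
}

/// Hypothetical helper: format one SSE event, distinguishing
/// intermediate token events from the final event.
fn handle_event(resp: &StreamResponse) -> String {
    match (&resp.generated_text, &resp.details) {
        // Final event: full generated text plus generation details.
        (Some(text), Some(d)) => format!(
            "done ({}, {} tokens): {}",
            d.finish_reason, d.generated_tokens, text
        ),
        // Intermediate event: only the newly generated token.
        _ => format!("token: {}", resp.token),
    }
}

fn main() {
    let mid = StreamResponse {
        token: "Hello".into(),
        generated_text: None,
        details: None,
    };
    let fin = StreamResponse {
        token: ".".into(),
        generated_text: Some("Hello world.".into()),
        details: Some(Details {
            finish_reason: "length".into(),
            generated_tokens: 3,
            seed: Some(42),
        }),
    };
    println!("{}", handle_event(&mid)); // → token: Hello
    println!("{}", handle_event(&fin)); // → done (length, 3 tokens): Hello world.
}
```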
2023-01-31 11:49:43 +01:00
| File | Last commit | Date |
| --- | --- | --- |
| __init__.py | feat(server): Support SantaCoder (#26) | 2023-01-20 12:24:39 +01:00 |
| bloom.py | feat: Support sampling seeding (#37) | 2023-01-30 15:36:16 +01:00 |
| causal_lm.py | feat: Add token streaming using ServerSideEvents support (#36) | 2023-01-31 11:49:43 +01:00 |
| galactica.py | feat: Support sampling seeding (#37) | 2023-01-30 15:36:16 +01:00 |
| model.py | fix(server): Minor refactorization using new_zeros (#24) | 2023-01-17 09:10:22 +01:00 |
| santacoder.py | feat: Support sampling seeding (#37) | 2023-01-30 15:36:16 +01:00 |
| seq2seq_lm.py | feat: Add token streaming using ServerSideEvents support (#36) | 2023-01-31 11:49:43 +01:00 |
| types.py | feat: Add token streaming using ServerSideEvents support (#36) | 2023-01-31 11:49:43 +01:00 |