drbh | d031919c8a | fix: remove compilation artifacts from logit processor | 2024-03-08 03:44:38 +00:00

drbh | 1f7be736d2 | feat: remove uncompiled grammar and improve logit processor logic | 2024-03-08 03:34:19 +00:00

drbh | c52a0f679e | feat: prefer precompiled grammar | 2024-03-07 17:12:46 +00:00

drbh | ad5f562aa5 | feat: support grammar compilation worker via PyO3 | 2024-03-05 17:44:16 +00:00

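The "prefer precompiled grammar" and compilation-worker commits both chase the same goal: not paying the grammar-to-FSM compilation cost on every request. A minimal sketch of that caching idea, where plain `re.compile` stands in for the real (much more expensive) regex-to-token-FSM compilation and `functools.lru_cache` stands in for the server's actual worker/caching machinery:

```python
# Sketch only: compile a grammar once and reuse it for every request that
# carries the same grammar string. `re.compile` is a cheap stand-in for the
# real regex-to-token-FSM compilation step.
import re
from functools import lru_cache


@lru_cache(maxsize=128)
def get_compiled_grammar(grammar: str) -> re.Pattern:
    # Expensive step in the real server; cached so it runs once per grammar.
    return re.compile(grammar)


# First call compiles, the second is a cache hit on the same object.
email = get_compiled_grammar(r"[\w-]+@([\w-]+\.)+[\w-]+")
assert email is get_compiled_grammar(r"[\w-]+@([\w-]+\.)+[\w-]+")
```
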
drbh | 7dbaf9e901 | fix: correctly index into mask when applying grammar (#1618) | 2024-03-01 18:22:01 +01:00
This PR fixes how the grammar mask is indexed when generating text and
adds a new test to ensure that grammars work with non-flash models.

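A toy illustration of what "indexing into the mask" means here, assuming (hypothetically) that each sequence in the batch carries its own grammar state and that the allowed-token mask must be looked up per sequence rather than with one shared index; the shapes and masks below are made up for the sketch:

```python
# Toy sketch: each sequence in the batch has its own grammar state, and the
# allowed-token mask is selected per sequence before banning disallowed tokens.
import torch

vocab_size = 8
# Hypothetical per-state masks: row i lists the tokens allowed in FSM state i.
state_masks = torch.tensor([
    [1, 1, 0, 0, 0, 0, 0, 0],  # state 0
    [0, 0, 1, 1, 1, 0, 0, 0],  # state 1
], dtype=torch.bool)

logits = torch.randn(3, vocab_size)      # batch of 3 sequences
fsm_states = torch.tensor([0, 1, 0])     # each sequence's current state

# Index the mask by each sequence's own state, then ban everything else.
allowed = state_masks[fsm_states]        # shape: (batch, vocab)
logits = logits.masked_fill(~allowed, float("-inf"))
```
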
OlivierDehaene | 4139054b82 | v1.4.1 (#1568) | 2024-02-16 17:50:57 +01:00

OlivierDehaene | 9946165ee0 | chore: add pre-commit (#1569) | 2024-02-16 11:58:58 +01:00

drbh | cef0553d59 | Outlines guided generation (#1539) | 2024-02-15 10:28:10 +01:00
This WIP PR starts to add grammar support via outlines. Currently it
supports only very simple regex grammars and does not optimize for
precompiling or caching grammar FSMs.
todo:
- [X] add simple outlines guidance to `NextTokenChooser`
- [X] update protos for grammar
- [X] update generation params API
- [X] constrain simple grammar
- [ ] support parsing more complex grammars into an FSM
- [ ] support all grammar types that outlines supports
- [ ] explore optimizations to avoid recompiling grammars

guided request
```bash
curl -s 'http://localhost:3000/generate' \
    --header 'Content-Type: application/json' \
    --data-raw '{
        "inputs": "make an email for david: \n",
        "parameters": {
            "max_new_tokens": 6,
            "grammar": "[\\w-]+@([\\w-]+\\.)+[\\w-]+"
        }
    }' | jq
```
response
```json
{
  "generated_text": "david@example.com"
}
```
unguided request
```bash
curl -s 'http://localhost:3000/generate' \
    --header 'Content-Type: application/json' \
    --data '{
        "inputs": "make an email for david: \n",
        "parameters": {
            "max_new_tokens": 6
        }
    }' | jq
```
response
```json
{
  "generated_text": " email = 'david"
}
```

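To make the mechanism behind this PR concrete, here is a deliberately tiny, self-contained sketch of FSM-guided decoding. A hand-written dict plays the role of the compiled grammar FSM and random logits stand in for the model, so none of this is outlines' or TGI's real API; it only shows the mask-then-sample loop that the `NextTokenChooser` change gestures at.

```python
# Toy sketch of FSM-guided decoding: at each step the grammar FSM says which
# token ids are legal, all other logits are masked to -inf, and the chosen
# token advances the FSM state. Vocabulary, FSM, and logits are made up.
import torch

vocab = ["a", "@", ".", "com", "<eos>"]

# Hand-written toy FSM for "a@a.com"-like strings: state -> {token_id: next_state}.
fsm = {
    0: {0: 1},          # must start with "a"
    1: {0: 1, 1: 2},    # more "a"s, or "@"
    2: {0: 3},          # domain label
    3: {0: 3, 2: 4},    # more label chars, or "."
    4: {3: 5},          # "com"
    5: {4: 5},          # only <eos> afterwards
}

state, out = 0, []
for _ in range(12):
    logits = torch.randn(len(vocab))            # stand-in for model logits
    allowed = list(fsm[state].keys())
    mask = torch.full_like(logits, float("-inf"))
    mask[allowed] = 0.0                          # keep only legal tokens
    next_id = int(torch.argmax(logits + mask))   # greedy over masked logits
    out.append(vocab[next_id])
    state = fsm[state][next_id]                  # advance grammar state
    if vocab[next_id] == "<eos>":
        break

# A real implementation would also track accepting states and max length.
print("".join(t for t in out if t != "<eos>"))
```
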
OlivierDehaene | 09b7c26bbd | feat(server): add frequency penalty (#1541) | 2024-02-08 18:41:25 +01:00

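Frequency penalty, in the usual OpenAI-style sense, scales with how often each token has already appeared in the output. A minimal sketch of that idea, not necessarily the exact formula this PR implements:

```python
# Minimal sketch of a frequency penalty: each token is penalized in proportion
# to how many times it already occurs in the generated ids. Whether the PR
# uses exactly this formula is an assumption.
import torch

def apply_frequency_penalty(logits: torch.Tensor,
                            generated_ids: torch.Tensor,
                            penalty: float) -> torch.Tensor:
    counts = torch.bincount(generated_ids, minlength=logits.shape[-1])
    return logits - penalty * counts.to(logits.dtype)

logits = torch.randn(10)
generated = torch.tensor([3, 3, 7])   # token 3 seen twice, token 7 once
penalized = apply_frequency_penalty(logits, generated, penalty=0.5)
```
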
Nick Hill | e4b26aa10b | fix(server): avoid errors for very small top_p values (#544) | 2023-07-04 20:11:33 +02:00
See https://github.com/huggingface/transformers/pull/24111
I didn't add validation to the `__init__` method since it's not done for
other values/warpers.

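The usual failure mode with a very small top_p is that the nucleus filter removes every candidate token. The standard guard is to always keep at least the top token; the sketch below uses that guard, which is an assumption about what the linked fix amounts to rather than a quote of it:

```python
# Sketch of top-p (nucleus) filtering with a min_tokens_to_keep guard so that
# even a tiny top_p never bans every token.
import torch

def top_p_filter(logits: torch.Tensor, top_p: float,
                 min_tokens_to_keep: int = 1) -> torch.Tensor:
    sorted_logits, sorted_idx = torch.sort(logits, descending=True)
    cum_probs = torch.softmax(sorted_logits, dim=-1).cumsum(dim=-1)
    # Mark tokens whose cumulative probability exceeds top_p...
    remove = cum_probs > top_p
    # ...shift right so the first token crossing the threshold survives,
    # and never remove the first `min_tokens_to_keep` tokens.
    remove[..., 1:] = remove[..., :-1].clone()
    remove[..., :min_tokens_to_keep] = False
    banned = remove.scatter(-1, sorted_idx, remove)
    return logits.masked_fill(banned, float("-inf"))

logits = torch.randn(10)
filtered = top_p_filter(logits, top_p=1e-9)   # still keeps the top token
```
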
OlivierDehaene | 53aa9194c8 | fix(server): fix warpers on CPU (#472) | 2023-06-20 11:06:10 +02:00
Closes #471

OlivierDehaene | 62f91f78ac | feat(server): support vectorized warpers in flash causal lm (#317) | 2023-05-26 12:30:27 +02:00
Co-authored-by: Joel Lamy-Poirier <joel.lamy-poirier@servicenow.com>

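"Vectorized warpers" here means applying per-request sampling parameters to the whole batch of logits in one tensor operation instead of looping over requests in Python. A small sketch of that idea using per-request temperatures; the class name and the choice of temperature as the example parameter are illustrative, not the PR's actual code:

```python
# Sketch of a vectorized warper: every request in the batch has its own
# temperature, applied in one broadcasted op rather than a per-request loop.
import torch

class HeterogeneousTemperature:
    def __init__(self, temperatures: list[float], device: str = "cpu"):
        # One temperature per request, kept as a column vector for broadcasting.
        self.temps = torch.tensor(temperatures, device=device).unsqueeze(1)

    def __call__(self, logits: torch.Tensor) -> torch.Tensor:
        # logits: (batch, vocab); each row is scaled by its own temperature.
        return logits / self.temps

warper = HeterogeneousTemperature([0.7, 1.0, 1.3])
warped = warper(torch.randn(3, 32000))
```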