Update docs

Roy Han 2024-06-26 14:30:28 -07:00
parent cb42e607c5
commit 02169f3e60


@@ -27,6 +27,11 @@ chat_completion = client.chat.completions.create(
    ],
    model='llama3',
)

completion = client.completions.create(
    model="llama3",
    prompt="Say this is a test",
)
```
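
If you want tokens as they are generated, the same endpoint can be streamed. A minimal sketch, assuming the `client` configured earlier in this document points at a local Ollama server:

```python
# Stream a completion chunk by chunk; assumes `client` is the OpenAI
# Python client from the setup above, pointed at Ollama's
# OpenAI-compatible endpoint.
stream = client.completions.create(
    model="llama3",
    prompt="Say this is a test",
    stream=True,
)
for chunk in stream:
    # Each chunk carries the next slice of generated text.
    print(chunk.choices[0].text, end="", flush=True)
```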
### OpenAI JavaScript library
@@ -45,6 +50,11 @@ const chatCompletion = await openai.chat.completions.create({
  messages: [{ role: 'user', content: 'Say this is a test' }],
  model: 'llama3',
})

const completion = await openai.completions.create({
  model: "llama3",
  prompt: "Say this is a test.",
})
```
### `curl`
@@ -65,6 +75,13 @@ curl http://localhost:11434/v1/chat/completions \
            }
        ]
    }'

curl http://localhost:11434/v1/completions \
    -H "Content-Type: application/json" \
    -d '{
        "model": "llama3",
        "prompt": "Say this is a test"
    }'
```
## Endpoints
@@ -107,6 +124,40 @@ curl http://localhost:11434/v1/chat/completions \
- `finish_reason` will always be `stop`
- `usage.prompt_tokens` will be 0 for completions where prompt evaluation is cached
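
Both notes can be checked directly on a response object. A small sketch, assuming the Python `client` from earlier in this document:

```python
# Inspect finish_reason and token usage on a chat completion; assumes
# `client` is the OpenAI Python client pointed at a local Ollama server.
resp = client.chat.completions.create(
    messages=[{'role': 'user', 'content': 'Say this is a test'}],
    model='llama3',
)
print(resp.choices[0].finish_reason)  # always "stop" per the note above
print(resp.usage.prompt_tokens)       # 0 if prompt evaluation was cached
```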
### `/v1/completions`
#### Supported features
- [x] Completions
- [x] Streaming
- [x] JSON mode
- [x] Reproducible outputs
- [ ] Logprobs
#### Supported request fields
- [x] `model`
- [x] `prompt`
- [x] `frequency_penalty`
- [x] `presence_penalty`
- [x] `seed`
- [x] `stop`
- [x] `stream`
- [x] `temperature`
- [x] `top_p`
- [x] `max_tokens`
- [ ] `best_of`
- [ ] `echo`
- [ ] `suffix`
- [ ] `logit_bias`
- [ ] `user`
- [ ] `n`
#### Notes
- `prompt` currently only accepts a string
- `usage.prompt_tokens` will be 0 for completions where prompt evaluation is cached
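
A short sketch exercising several of the supported fields above (illustrative values; assumes the Python `client` from earlier in this document):

```python
# Completion using several supported request fields; pairing `seed`
# with a fixed `temperature` is what makes outputs reproducible.
completion = client.completions.create(
    model="llama3",
    prompt="Say this is a test",
    temperature=0,
    seed=42,
    max_tokens=32,
    stop=["\n"],
)
print(completion.choices[0].text)
```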
## Models
Before using a model, pull it locally with `ollama pull`: