update docs

ParthSareen 2024-12-15 23:35:43 -08:00
parent a5e66a1163
commit da35ad878b


@@ -14,7 +14,7 @@
 - [Generate Embeddings](#generate-embeddings)
 - [List Running Models](#list-running-models)
 - [Tokenize Text](#tokenize-text)
-- [Detokenize Text](#detokenize-text)
+- [Detokenize Tokens](#detokenize-tokens)
 
 ## Conventions
@@ -1489,7 +1489,7 @@ A single JSON object will be returned.
 
 ## Tokenize Text
 
-Tokenize text using a model
+Tokenize text to an array of tokens using a specific model.
 
 ```shell
 POST /api/tokenize
@@ -1497,13 +1497,8 @@ POST /api/tokenize
 
 ##### Parameters
 
-- `model`: name of model to generate tokens from
-- `prompt`: text to generate tokens for
-
-##### Advanced parameters:
-
-- `options`: additional model parameters listed in the documentation for the [Modelfile](./modelfile.md#valid-parameters-and-values) such as `temperature`
-- `keep_alive`: controls how long the model will stay loaded into memory following the request (default: `5m`)
+- `model`: name of model to use for tokenization
+- `text`: text to tokenize
 
 ### Examples
@@ -1512,7 +1507,7 @@ POST /api/tokenize
 
 ```shell
 curl -X POST http://localhost:11434/api/tokenize -d '{
   "model": "llama3.1:8b",
-  "prompt": "Why the sky is blue?"
+  "text": "Why the sky is blue?"
 }'
 ```
@@ -1520,14 +1515,13 @@ curl -X POST http://localhost:11434/api/tokenize -d '{
 
 ```json
 {
-  "model": "llama3.1:8b",
   "tokens": [10445,279,13180,374,6437,30]
 }
 ```
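Aside (not part of the diff): the updated request/response shape above can be checked end to end. A minimal sketch, assuming a local server on the default port that implements `/api/tokenize` as documented in this diff, the `llama3.1:8b` model already pulled, and `jq` installed:

```shell
# Illustrative only: exercise the renamed "text" field and extract the
# token array from the response with jq.
curl -s -X POST http://localhost:11434/api/tokenize -d '{
  "model": "llama3.1:8b",
  "text": "Why the sky is blue?"
}' | jq -c '.tokens'
# Expected, per the response above: [10445,279,13180,374,6437,30]
```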
-## Detokenize Text
+## Detokenize Tokens
 
-Detokenize text using a model
+Detokenize tokens to text using a specific model.
 
 ```shell
 POST /api/detokenize
@@ -1535,13 +1529,8 @@ POST /api/detokenize
 
 #### Parameters
 
-- `model`: name of model to generate text from
-- `tokens`: list of tokens to generate text from
-
-##### Advanced parameters:
-
-- `options`: additional model parameters listed in the documentation for the [Modelfile](./modelfile.md#valid-parameters-and-values) such as `temperature`
-- `keep_alive`: controls how long the model will stay loaded into memory following the request (default: `5m`)
+- `model`: name of model to use for detokenization
+- `tokens`: list of tokens to detokenize
 
 ### Examples
@@ -1557,10 +1546,7 @@ curl -X POST http://localhost:11434/api/detokenize -d '{
 
 #### Response
 
 ```json
-{
-  "model": "llama3.1:8b",
-  "text": "Why the sky is blue?"
-}
+{"text":"Why the sky is blue?"}
 ```
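Aside (not part of the diff): the detokenize request example itself sits outside this hunk; only its opening `curl` line survives in the `@@` context. Under the same assumptions as the sketch above, and inferring the body shape from the documented `model` and `tokens` parameters, a round trip might look like the following; the exact example in the docs may differ:

```shell
# Sketch only: tokenize a string, then feed the resulting token array
# back through /api/detokenize. Body shape inferred from the parameter list.
TOKENS=$(curl -s -X POST http://localhost:11434/api/tokenize -d '{
  "model": "llama3.1:8b",
  "text": "Why the sky is blue?"
}' | jq -c '.tokens')

curl -s -X POST http://localhost:11434/api/detokenize -d "{
  \"model\": \"llama3.1:8b\",
  \"tokens\": $TOKENS
}"
# Per the response shown above, this should print: {"text":"Why the sky is blue?"}
```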