Compare commits


74 Commits

Author SHA1 Message Date
Michael Yang
55b5f5dc34 ctrl+c on empty line exits (#135) 2023-07-20 00:53:08 -07:00
Jeffrey Morgan
3b135ac963 parser: fix case where multi line string termination error wouldnt show 2023-07-20 00:43:22 -07:00
Jeffrey Morgan
e6bae8d916 parser: keep seeking until eof 2023-07-20 00:37:52 -07:00
Jeffrey Morgan
d9f54300c3 library: add echo for verify progress 2023-07-19 23:58:28 -07:00
Jeffrey Morgan
1511219763 update library modelfiles with new syntax 2023-07-19 23:57:22 -07:00
Jeffrey Morgan
ada0add89b fix llama library templates 2023-07-19 23:53:40 -07:00
Jeffrey Morgan
75e508e1d6 remove old templates 2023-07-19 23:47:13 -07:00
Michael Yang
6f046dbf18 Update images.go (#134) 2023-07-19 23:46:01 -07:00
Jeffrey Morgan
cd820c8bca move wizard-vicuna to correct location 2023-07-19 23:44:03 -07:00
Jeffrey Morgan
88e755d7fd Add files for library models 2023-07-19 23:40:37 -07:00
Michael Yang
6984171cfd Merge pull request #93 from jmorganca/split-prompt
separate prompt into template and system
2023-07-19 23:25:33 -07:00
Michael Yang
60b4db6389 add .First 2023-07-19 23:24:32 -07:00
Michael Chiang
7c6ea2a966 fix dangling """ 2023-07-19 23:24:32 -07:00
Michael Chiang
c161aef5f9 update example 2023-07-19 23:24:32 -07:00
Michael Chiang
c47786c1b0 Update docs/modelfile.md
Co-authored-by: Michael Yang <mxyng@pm.me>
2023-07-19 23:24:32 -07:00
Michael Chiang
df100ce540 Update docs/modelfile.md
Co-authored-by: Michael Yang <mxyng@pm.me>
2023-07-19 23:24:32 -07:00
Michael Chiang
5c5948b4e7 clean up my previous empty sentences 2023-07-19 23:24:32 -07:00
Michael Yang
1c72e46e09 update modelfile.md 2023-07-19 23:24:32 -07:00
Michael Yang
ca210ba480 handle vnd.ollama.image.prompt for compat 2023-07-19 23:24:32 -07:00
Michael Yang
df146c41e2 separate prompt into template and system 2023-07-19 23:24:31 -07:00
Jeffrey Morgan
2d305fa99a allow relative paths in FROM instruction 2023-07-19 21:55:15 -07:00
Patrick Devine
e4d7f3e287 vendor in progress bar and change to bytes instead of bibytes (#130) 2023-07-19 17:24:03 -07:00
Jeffrey Morgan
f2044b5838 web: fix newsletter signup 2023-07-19 16:11:56 -07:00
Michael Chiang
d53988f619 Merge pull request #128 from jmorganca/mchiang0610-patch-1
Update modelfile.md
2023-07-19 13:40:39 -07:00
Michael Chiang
ac88ab48d9 update 2023-07-19 13:37:21 -07:00
Michael Yang
84c6ee8cc6 Merge pull request #104 from jmorganca/interactive-readline
use readline
2023-07-19 13:36:24 -07:00
Michael Yang
dbc90576b8 add verbose/quiet commands 2023-07-19 13:34:56 -07:00
Michael Yang
84200dcde6 use readline 2023-07-19 13:34:56 -07:00
Michael Chiang
e54c08da89 updating prompt 2023-07-19 13:34:40 -07:00
Michael Chiang
31413857ea organizing examples 2023-07-19 13:25:14 -07:00
Michael Chiang
25f874c030 Update modelfile.md 2023-07-19 12:48:57 -07:00
Jeffrey Morgan
10d502611f fix discord link in README.md 2023-07-19 12:31:48 -07:00
Jeffrey Morgan
7fe4103b94 add discord link, remove repeated text 2023-07-19 12:28:50 -07:00
Michael Chiang
7fbdc8e2c1 Update modelfile.md 2023-07-19 11:38:06 -07:00
Eva Ho
9c5572d51f add discord link back 2023-07-19 13:03:26 -04:00
Matt Williams
75eb28f574 Merge pull request #125 from jmorganca/matt/addlicensetomodelfiledoc
Updated modelfile doc to include license
2023-07-19 08:57:06 -07:00
Patrick Devine
56b6a1720f add llama2:13b model to the readme (#126) 2023-07-19 08:21:28 -07:00
Eva Ho
dfceca48a7 update icons to have different images for bright and dark mode 2023-07-19 11:14:43 -04:00
Matt Williams
bbb67002c3 get rid of latest
Signed-off-by: Matt Williams <m@technovangelist.com>
2023-07-19 07:40:40 -07:00
Michael Chiang
0294216ea9 Merge pull request #124 from DavidZirinsky/patch-1
Update README.md
2023-07-19 07:40:24 -07:00
Matt Williams
7a62b2d2ab Update the FROM instructions
Signed-off-by: Matt Williams <m@technovangelist.com>
2023-07-19 07:39:40 -07:00
Eva Ho
f08c050e57 fix page transitions flickering 2023-07-19 10:19:24 -04:00
Matt Williams
67c8d49757 Updated modelfile doc to include license
and attributed midjourneyprompt

Signed-off-by: Matt Williams <m@technovangelist.com>
2023-07-19 07:16:38 -07:00
DavidZirinsky
ffcd90e8a7 Update README.md
I needed to do this to run the project
2023-07-19 08:14:44 -06:00
Jeffrey Morgan
4ca7c4be1f dont consume reader when calculating digest 2023-07-19 00:47:55 -07:00
Michael Chiang
17b7af78f0 Merge pull request #115 from jmorganca/Add-wizard-vicuna-uncensored-model-link
Add wizard vicuna uncensored model link
2023-07-18 22:58:07 -07:00
Jeffrey Morgan
4c1dc52083 app: create /usr/local/bin/ if it does not exist 2023-07-18 22:50:52 -07:00
Patrick Devine
572fc9099f add license layers to the parser (#116) 2023-07-18 22:49:38 -07:00
Michael Chiang
3020f29041 Add wizard vicuna uncensored model link 2023-07-18 22:19:12 -07:00
Michael Yang
a6d03dd510 Merge pull request #110 from jmorganca/fix-pull-0-bytes
fix pull 0 bytes on completed layer
2023-07-18 19:38:59 -07:00
Michael Yang
68df36ae50 fix pull 0 bytes on completed layer 2023-07-18 19:38:11 -07:00
Michael Yang
5540305293 Merge pull request #112 from jmorganca/fix-relative-modelfile
resolve modelfile before passing to server
2023-07-18 19:36:24 -07:00
Michael Yang
d4cfee79d5 resolve modelfile before passing to server 2023-07-18 19:34:05 -07:00
Michael Yang
6e36f948df Merge pull request #109 from jmorganca/fix-create-memory
fix memory leak in create
2023-07-18 17:25:19 -07:00
Michael Yang
553fa39fe8 fix memory leak in create 2023-07-18 17:14:17 -07:00
Jeffrey Morgan
820e581ad8 web: fix typos and add link to discord 2023-07-18 17:03:40 -07:00
Isaac McFadyen
d14785738e README typo fix (#106)
* Fixed typo in README
2023-07-18 16:24:57 -07:00
Patrick Devine
9e15635c2d attempt two for skipping files in the file walk (#105) 2023-07-18 15:37:01 -07:00
Jeffrey Morgan
3e10f902f5 add mario example 2023-07-18 14:27:36 -07:00
Jeffrey Morgan
aa6714f25c fix typo in README.md 2023-07-18 14:03:11 -07:00
Jeffrey Morgan
7f3a37aed4 fix typo 2023-07-18 13:32:06 -07:00
Jeffrey Morgan
7b08280355 move download to the top of README.md 2023-07-18 13:31:25 -07:00
Jeffrey Morgan
e3cc4d5eac update README.md with new syntax 2023-07-18 13:22:46 -07:00
Jeffrey Morgan
8c85dfb735 Add README.md for examples 2023-07-18 13:22:46 -07:00
hoyyeva
ac62a413e5 Merge pull request #103 from jmorganca/web-update
website content and design update
2023-07-18 16:18:04 -04:00
Eva Ho
d1f89778e9 fix css on smaller screen 2023-07-18 16:17:42 -04:00
Eva Ho
df67a90e64 fix css 2023-07-18 16:02:45 -04:00
Eva Ho
576ae644de enable downloader 2023-07-18 15:57:39 -04:00
Eva Ho
7e52e51db1 update website text and design 2023-07-18 15:56:43 -04:00
Michael Chiang
f12df8d79a Merge pull request #101 from jmorganca/adding-logo
add logo
2023-07-18 12:47:20 -07:00
Michael Chiang
65de730bdb Update README.md
add logo
2023-07-18 12:45:38 -07:00
Patrick Devine
9658a5043b skip files in the list if we can't get the correct model path (#100) 2023-07-18 12:39:08 -07:00
Jeffrey Morgan
280fbe8019 app: use llama2 instead of orca 2023-07-18 12:36:03 -07:00
Jeffrey Morgan
2e339c2bab flatten examples 2023-07-18 12:33:50 -07:00
58 changed files with 2558 additions and 500 deletions

README.md (111 changed lines)

@@ -1,74 +1,93 @@
![ollama](https://github.com/jmorganca/ollama/assets/251292/961f99bb-251a-4eec-897d-1ba99997ad0f)
<div align="center">
<picture>
<source media="(prefers-color-scheme: dark)" height="200px" srcset="https://github.com/jmorganca/ollama/assets/3325447/318048d2-b2dd-459c-925a-ac8449d5f02c">
<img alt="logo" height="200px" src="https://github.com/jmorganca/ollama/assets/3325447/c7d6e15f-7f4d-4776-b568-c084afa297c2">
</picture>
</div>
# Ollama
Run large language models with `llama.cpp`.
[![Discord](https://dcbadge.vercel.app/api/server/ollama?style=flat&compact=true)](https://discord.gg/ollama)
> Note: certain models that can be run with Ollama are intended for research and/or non-commercial use only.
Create, run, and share large language models (LLMs). Ollama bundles a model's weights, configuration, prompts, and more into self-contained packages that can run on any machine.
### Features
> Note: Ollama is in early preview. Please report any issues you find.
- Download and run popular large language models
- Switch between multiple models on the fly
- Hardware acceleration where available (Metal, CUDA)
- Fast inference server written in Go, powered by [llama.cpp](https://github.com/ggerganov/llama.cpp)
- REST API to use with your application (python, typescript SDKs coming soon)
## Download
## Install
- [Download](https://ollama.ai/download) for macOS with Apple Silicon (Intel coming soon)
- Download for Windows (coming soon)
You can also build the [binary from source](#building).
- [Download](https://ollama.ai/download) for macOS on Apple Silicon (Intel coming soon)
- Download for Windows and Linux (coming soon)
- Build [from source](#building)
## Quickstart
Run a fast and simple model.
To run and chat with [Llama 2](https://ai.meta.com/llama), the new model by Meta:
```
ollama run orca
ollama run llama2
```
## Example models
## Model library
### 💬 Chat
Ollama includes a library of open-source, pre-trained models. More models are coming soon. You should have at least 8 GB of RAM to run the 3B models, 16 GB to run the 7B models, and 32 GB to run the 13B models.
Have a conversation.
| Model | Parameters | Size | Download |
| ------------------------ | ---------- | ----- | --------------------------- |
| Llama2 | 7B | 3.8GB | `ollama pull llama2` |
| Llama2 13B | 13B | 7.3GB | `ollama pull llama2:13b` |
| Orca Mini | 3B | 1.9GB | `ollama pull orca` |
| Vicuna | 7B | 3.8GB | `ollama pull vicuna` |
| Nous-Hermes | 13B | 7.3GB | `ollama pull nous-hermes` |
| Wizard Vicuna Uncensored | 13B | 7.3GB | `ollama pull wizard-vicuna` |
## Examples
### Run a model
```
ollama run vicuna "Why is the sky blue?"
ollama run llama2
>>> hi
Hello! How can I help you today?
```
### 🗺️ Instructions
### Create a custom character model
Get a helping hand.
Pull a base model:
```
ollama run orca "Write an email to my boss."
ollama pull orca
```
### 🔎 Ask questions about documents
Send the contents of a document and ask questions about it.
Create a `Modelfile`:
```
ollama run nous-hermes "$(cat input.txt)", please summarize this story
FROM orca
PROMPT """
### System:
You are Mario from Super Mario Bros. Answer as Mario, the assistant, only.
### User:
{{ .Prompt }}
### Response:
"""
```
### 📖 Storytelling
Venture into the unknown.
Next, create and run the model:
```
ollama run nous-hermes "Once upon a time"
ollama create mario -f ./Modelfile
ollama run mario
>>> hi
Hello! It's your friend Mario.
```
## Advanced usage
For more info on `Modelfile` syntax see [this doc](./docs/modelfile).
### Run a local model
### Pull a model from the registry
```
ollama run ~/Downloads/vicuna-7b-v1.3.ggmlv3.q4_1.bin
ollama pull nous-hermes
```
## Building
@@ -80,29 +99,11 @@ go build .
To run it start the server:
```
./ollama server &
./ollama serve &
```
Finally, run a model!
```
./ollama run ~/Downloads/vicuna-7b-v1.3.ggmlv3.q4_1.bin
```
## API Reference
### `POST /api/pull`
Download a model
```
curl -X POST http://localhost:11343/api/pull -d '{"model": "orca"}'
```
### `POST /api/generate`
Complete a prompt
```
curl -X POST http://localhost:11434/api/generate -d '{"model": "orca", "prompt": "hello!"}'
./ollama run llama2
```


@@ -160,11 +160,11 @@ func (c *Client) Generate(ctx context.Context, req *GenerateRequest, fn Generate
})
}
type PullProgressFunc func(PullProgress) error
type PullProgressFunc func(ProgressResponse) error
func (c *Client) Pull(ctx context.Context, req *PullRequest, fn PullProgressFunc) error {
return c.stream(ctx, http.MethodPost, "/api/pull", req, func(bts []byte) error {
var resp PullProgress
var resp ProgressResponse
if err := json.Unmarshal(bts, &resp); err != nil {
return err
}
@@ -173,11 +173,11 @@ func (c *Client) Pull(ctx context.Context, req *PullRequest, fn PullProgressFunc
})
}
type PushProgressFunc func(PushProgress) error
type PushProgressFunc func(ProgressResponse) error
func (c *Client) Push(ctx context.Context, req *PushRequest, fn PushProgressFunc) error {
return c.stream(ctx, http.MethodPost, "/api/push", req, func(bts []byte) error {
var resp PushProgress
var resp ProgressResponse
if err := json.Unmarshal(bts, &resp); err != nil {
return err
}


@@ -43,12 +43,11 @@ type PullRequest struct {
Password string `json:"password"`
}
type PullProgress struct {
type ProgressResponse struct {
Status string `json:"status"`
Digest string `json:"digest,omitempty"`
Total int `json:"total,omitempty"`
Completed int `json:"completed,omitempty"`
Percent float64 `json:"percent,omitempty"`
}
type PushRequest struct {
@@ -57,14 +56,6 @@ type PushRequest struct {
Password string `json:"password"`
}
type PushProgress struct {
Status string `json:"status"`
Digest string `json:"digest,omitempty"`
Total int `json:"total,omitempty"`
Completed int `json:"completed,omitempty"`
Percent float64 `json:"percent,omitempty"`
}
type ListResponse struct {
Models []ListResponseModel `json:"models"`
}
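
Both pull and push callbacks now receive the same struct. As a rough sketch of what a caller looks like after this consolidation (the model name and output formatting are illustrative; the client calls are the ones shown in this diff):

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/jmorganca/ollama/api"
)

func main() {
	client := api.NewClient()

	// A single callback type now serves both Pull and Push.
	fn := func(resp api.ProgressResponse) error {
		if resp.Digest != "" {
			// A layer transfer is in flight: report completed/total bytes.
			fmt.Printf("%s %s: %d/%d bytes\n", resp.Status, resp.Digest[7:19], resp.Completed, resp.Total)
		} else {
			// Status-only update, e.g. a verification message.
			fmt.Println(resp.Status)
		}
		return nil
	}

	req := &api.PullRequest{Name: "llama2"}
	if err := client.Pull(context.Background(), req, fn); err != nil {
		log.Fatal(err)
	}
}
```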

Two new binary asset files added (445 B and 891 B); contents not shown.


@@ -21,6 +21,8 @@ const config: ForgeConfig = {
'../ollama',
path.join(__dirname, './assets/ollama_icon_16x16Template.png'),
path.join(__dirname, './assets/ollama_icon_16x16Template@2x.png'),
path.join(__dirname, './assets/ollama_outline_icon_16x16Template.png'),
path.join(__dirname, './assets/ollama_outline_icon_16x16Template@2x.png'),
...(process.platform === 'darwin' ? ['../llama/ggml-metal.metal'] : []),
],
...(process.env.SIGN


@@ -19,7 +19,7 @@ export default function () {
const [step, setStep] = useState<Step>(Step.WELCOME)
const [commandCopied, setCommandCopied] = useState<boolean>(false)
const command = 'ollama run orca'
const command = 'ollama run llama2'
return (
<div className='drag'>
@@ -77,7 +77,11 @@ export default function () {
{command}
</pre>
<button
className={`no-drag absolute right-[5px] px-2 py-2 ${commandCopied ? 'text-gray-900 opacity-100 hover:cursor-auto' : 'text-gray-200 opacity-50 hover:cursor-pointer'} hover:text-gray-900 hover:font-bold group-hover:opacity-100`}
className={`no-drag absolute right-[5px] px-2 py-2 ${
commandCopied
? 'text-gray-900 opacity-100 hover:cursor-auto'
: 'text-gray-200 opacity-50 hover:cursor-pointer'
} hover:font-bold hover:text-gray-900 group-hover:opacity-100`}
onClick={() => {
copy(command)
setCommandCopied(true)
@@ -85,13 +89,15 @@ export default function () {
}}
>
{commandCopied ? (
<CheckIcon className='h-4 w-4 text-gray-500 font-bold' />
<CheckIcon className='h-4 w-4 font-bold text-gray-500' />
) : (
<DocumentDuplicateIcon className='h-4 w-4 text-gray-500' />
)}
</button>
</div>
<p className='mx-auto my-4 w-[70%] text-xs text-gray-400'>Run this command in your favorite terminal.</p>
<p className='mx-auto my-4 w-[70%] text-xs text-gray-400'>
Run this command in your favorite terminal.
</p>
</div>
<button
onClick={() => {


@@ -1,5 +1,5 @@
import { spawn } from 'child_process'
import { app, autoUpdater, dialog, Tray, Menu, BrowserWindow } from 'electron'
import { app, autoUpdater, dialog, Tray, Menu, BrowserWindow, nativeTheme } from 'electron'
import Store from 'electron-store'
import winston from 'winston'
import 'winston-daily-rotate-file'
@@ -66,14 +66,30 @@ function firstRunWindow() {
}
function createSystemtray() {
let iconPath = path.join(__dirname, '..', '..', 'assets', 'ollama_icon_16x16Template.png')
let iconPath = nativeTheme.shouldUseDarkColors
? path.join(__dirname, '..', '..', 'assets', 'ollama_icon_16x16Template.png')
: path.join(__dirname, '..', '..', 'assets', 'ollama_outline_icon_16x16Template.png')
if (app.isPackaged) {
iconPath = path.join(process.resourcesPath, 'ollama_icon_16x16Template.png')
iconPath = nativeTheme.shouldUseDarkColors
? path.join(process.resourcesPath, 'ollama_icon_16x16Template.png')
: path.join(process.resourcesPath, 'ollama_outline_icon_16x16Template.png')
}
tray = new Tray(iconPath)
nativeTheme.on('updated', function theThemeHasChanged () {
if (nativeTheme.shouldUseDarkColors) {
app.isPackaged
? tray.setImage(path.join(process.resourcesPath, 'ollama_icon_16x16Template.png'))
: tray.setImage(path.join(__dirname, '..', '..', 'assets', 'ollama_icon_16x16Template.png'))
} else {
app.isPackaged
? tray.setImage(path.join(process.resourcesPath, 'ollama_outline_icon_16x16Template.png'))
: tray.setImage(path.join(__dirname, '..', '..', 'assets', 'ollama_outline_icon_16x16Template.png'))
}
})
const contextMenu = Menu.buildFromTemplate([{ role: 'quit', label: 'Quit Ollama', accelerator: 'Command+Q' }])
tray.setContextMenu(contextMenu)


@@ -13,7 +13,9 @@ export function installed() {
}
export async function install() {
const command = `do shell script "ln -F -s ${ollama} ${symlinkPath}" with administrator privileges`
const command = `do shell script "mkdir -p ${path.dirname(
symlinkPath
)} && ln -F -s ${ollama} ${symlinkPath}" with administrator privileges`
try {
await exec(`osascript -e '${command}'`)


@@ -5,26 +5,33 @@ import (
"context"
"errors"
"fmt"
"io"
"log"
"net"
"net/http"
"os"
"path/filepath"
"strings"
"time"
"github.com/chzyer/readline"
"github.com/dustin/go-humanize"
"github.com/olekukonko/tablewriter"
"github.com/schollz/progressbar/v3"
"github.com/spf13/cobra"
"golang.org/x/term"
"github.com/jmorganca/ollama/api"
"github.com/jmorganca/ollama/format"
"github.com/jmorganca/ollama/progressbar"
"github.com/jmorganca/ollama/server"
)
func create(cmd *cobra.Command, args []string) error {
filename, _ := cmd.Flags().GetString("file")
filename, err := filepath.Abs(filename)
if err != nil {
return err
}
client := api.NewClient()
var spinner *Spinner
@@ -83,7 +90,7 @@ func push(cmd *cobra.Command, args []string) error {
client := api.NewClient()
request := api.PushRequest{Name: args[0]}
fn := func(resp api.PushProgress) error {
fn := func(resp api.ProgressResponse) error {
fmt.Println(resp.Status)
return nil
}
@@ -105,7 +112,9 @@ func list(cmd *cobra.Command, args []string) error {
var data [][]string
for _, m := range models.Models {
data = append(data, []string{m.Name, humanize.Bytes(uint64(m.Size)), format.HumanTime(m.ModifiedAt, "Never")})
if len(args) == 0 || strings.HasPrefix(m.Name, args[0]) {
data = append(data, []string{m.Name, humanize.Bytes(uint64(m.Size)), format.HumanTime(m.ModifiedAt, "Never")})
}
}
table := tablewriter.NewWriter(os.Stdout)
@@ -129,25 +138,23 @@ func RunPull(cmd *cobra.Command, args []string) error {
func pull(model string) error {
client := api.NewClient()
var currentDigest string
var bar *progressbar.ProgressBar
currentLayer := ""
request := api.PullRequest{Name: model}
fn := func(resp api.PullProgress) error {
if resp.Digest != currentLayer && resp.Digest != "" {
if currentLayer != "" {
fmt.Println()
}
currentLayer = resp.Digest
layerStr := resp.Digest[7:23] + "..."
fn := func(resp api.ProgressResponse) error {
if resp.Digest != currentDigest && resp.Digest != "" {
currentDigest = resp.Digest
bar = progressbar.DefaultBytes(
int64(resp.Total),
"pulling "+layerStr,
fmt.Sprintf("pulling %s...", resp.Digest[7:19]),
)
} else if resp.Digest == currentLayer && resp.Digest != "" {
bar.Set(resp.Completed)
} else if resp.Digest == currentDigest && resp.Digest != "" {
bar.Set(resp.Completed)
} else {
currentLayer = ""
currentDigest = ""
fmt.Println(resp.Status)
}
return nil
@@ -165,7 +172,7 @@ func RunGenerate(cmd *cobra.Command, args []string) error {
return generate(cmd, args[0], strings.Join(args[1:], " "))
}
if term.IsTerminal(int(os.Stdin.Fd())) {
if readline.IsTerminal(int(os.Stdin.Fd())) {
return generateInteractive(cmd, args[0])
}
@@ -223,17 +230,111 @@ func generate(cmd *cobra.Command, model, prompt string) error {
}
func generateInteractive(cmd *cobra.Command, model string) error {
fmt.Print(">>> ")
scanner := bufio.NewScanner(os.Stdin)
for scanner.Scan() {
if err := generate(cmd, model, scanner.Text()); err != nil {
home, err := os.UserHomeDir()
if err != nil {
return err
}
completer := readline.NewPrefixCompleter(
readline.PcItem("/help"),
readline.PcItem("/list"),
readline.PcItem("/set",
readline.PcItem("history"),
readline.PcItem("nohistory"),
readline.PcItem("verbose"),
readline.PcItem("quiet"),
readline.PcItem("mode",
readline.PcItem("vim"),
readline.PcItem("emacs"),
readline.PcItem("default"),
),
),
readline.PcItem("/exit"),
readline.PcItem("/bye"),
)
usage := func() {
fmt.Fprintln(os.Stderr, "commands:")
fmt.Fprintln(os.Stderr, completer.Tree(" "))
}
config := readline.Config{
Prompt: ">>> ",
HistoryFile: filepath.Join(home, ".ollama", "history"),
AutoComplete: completer,
}
scanner, err := readline.NewEx(&config)
if err != nil {
return err
}
defer scanner.Close()
for {
line, err := scanner.Readline()
switch {
case errors.Is(err, io.EOF):
return nil
case errors.Is(err, readline.ErrInterrupt):
if line == "" {
return nil
}
continue
case err != nil:
return err
}
fmt.Print(">>> ")
}
line = strings.TrimSpace(line)
return nil
switch {
case strings.HasPrefix(line, "/list"):
args := strings.Fields(line)
if err := list(cmd, args[1:]); err != nil {
return err
}
continue
case strings.HasPrefix(line, "/set"):
args := strings.Fields(line)
if len(args) > 1 {
switch args[1] {
case "history":
scanner.HistoryEnable()
continue
case "nohistory":
scanner.HistoryDisable()
continue
case "verbose":
cmd.Flags().Set("verbose", "true")
continue
case "quiet":
cmd.Flags().Set("verbose", "false")
continue
case "mode":
if len(args) > 2 {
switch args[2] {
case "vim":
scanner.SetVimMode(true)
continue
case "emacs", "default":
scanner.SetVimMode(false)
continue
}
}
}
}
case line == "/help", line == "/?":
usage()
continue
case line == "/exit", line == "/bye":
return nil
}
if err := generate(cmd, model, line); err != nil {
return err
}
}
}
func generateBatch(cmd *cobra.Command, model string) error {


@@ -5,7 +5,7 @@ import (
"os"
"time"
"github.com/schollz/progressbar/v3"
"github.com/jmorganca/ollama/progressbar"
)
type Spinner struct {


@@ -1,80 +1,103 @@
# Ollama Model File Reference
# Ollama Model File
Ollama can build models automatically by reading the instructions from a Modelfile. A Modelfile is a text document that represents the complete configuration of the Model. You can see that a Modelfile is very similar to a Dockerfile.
A model file is the blueprint to create and share models with Ollama.
## Format
Here is the format of the Modelfile:
The format of the Modelfile:
```modelfile
# comment
INSTRUCTION arguments
```
Nothing in the file is case-sensitive. However, the convention is for instructions to be uppercase to make it easier to distinguish from the arguments.
| Instruction | Description |
| ------------------------- | ----------------------------------------------------- |
| `FROM`<br>(required) | Defines the base model to use |
| `PARAMETER`<br>(optional) | Sets the parameters for how Ollama will run the model |
| `SYSTEM`<br>(optional) | Specifies the system prompt that will set the context |
| `TEMPLATE`<br>(optional) | The full prompt template to be sent to the model |
| `LICENSE`<br>(optional) | Specifies the legal license |
A Modelfile can include instructions in any order. But the convention is to start the Modelfile with the FROM instruction.
## Examples
Although the example above shows a comment starting with a hash character, any instruction that is not recognized is seen as a comment.
An example of a Modelfile creating a Mario blueprint:
## FROM
```
FROM llama2
# sets the temperature to 1 [higher is more creative, lower is more coherent]
# sets the context size to 4096
PARAMETER temperature 1
PARAMETER num_ctx 4096
```modelfile
FROM <image>[:<tag>]
# Overriding the system prompt
SYSTEM You are Mario from super mario bros, acting as an assistant.
```
This defines the base model to be used. An image can be a known image on the Ollama Hub, or a fully-qualified path to a model file on your system.
To use this:
## PARAMETER
1. Save it as a file (e.g. `Modelfile`)
2. `ollama create NAME -f <location of the file e.g. ./Modelfile>`
3. `ollama run NAME`
4. Start using the model!
The PARAMETER instruction defines a parameter that can be set when the model is run.
## FROM (Required)
```modelfile
The FROM instruction defines the base model to use when creating a model.
```
FROM <model name>:<tag>
```
### Build from llama2
```
FROM llama2:latest
```
A list of available base models:
<https://github.com/jmorganca/ollama#model-library>
### Build from a bin file
```
FROM ./ollama-model.bin
```
## PARAMETER (Optional)
The `PARAMETER` instruction defines a parameter that can be set when the model is run.
```
PARAMETER <parameter> <parametervalue>
```
### Valid Parameters and Values
| Parameter | Description | Value Type | Value Range |
| ---------------- | ------------------------------------------------------------------------------------------- | ---------- | ----------- |
| NumCtx | | int | |
| NumGPU | | int | |
| MainGPU | | int | |
| LowVRAM | | bool | |
| F16KV | | bool | |
| LogitsAll | | bool | |
| VocabOnly | | bool | |
| UseMMap | | bool | |
| EmbeddingOnly | | bool | |
| RepeatLastN | | int | |
| RepeatPenalty | | float | |
| FrequencyPenalty | | float | |
| PresencePenalty | | float | |
| temperature | The temperature of the model. Higher temperatures result in more creativity in the response | float | 0 - 1 |
| TopK | | int | |
| TopP | | float | |
| TFSZ | | float | |
| TypicalP | | float | |
| Mirostat | | int | |
| MirostatTau | | float | |
| MirostatEta | | float | |
| NumThread | | int | |
| Parameter | Description | Value Type | Example Usage |
| -------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ---------- | ------------------ |
| num_ctx | Sets the size of the prompt context window. (Default: 2048) | int | num_ctx 4096 |
| temperature | The temperature of the model. Increasing the temperature will make the model answer more creatively. (Default: 0.8) | float | temperature 0.7 |
| top_k | Reduces the probability of generating nonsense. A higher value (e.g. 100) will give more diverse answers, while a lower value (e.g. 10) will be more conservative. (Default: 40) | int | top_k 40 |
| top_p | Works together with top-k. A higher value (e.g., 0.95) will lead to more diverse text, while a lower value (e.g., 0.5) will generate more focused and conservative text. (Default: 0.9) | float | top_p 0.9 |
| num_gpu | The number of GPUs to use. On macOS it defaults to 1 to enable metal support, 0 to disable. | int | num_gpu 1 |
| repeat_last_n | Sets how far back for the model to look back to prevent repetition. (Default: 64, 0 = disabled, -1 = ctx-size) | int | repeat_last_n 64 |
| repeat_penalty | Sets how strongly to penalize repetitions. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. (Default: 1.1) | float | repeat_penalty 1.1 |
| tfs_z | Tail free sampling is used to reduce the impact of less probable tokens from the output. A higher value (e.g., 2.0) will reduce the impact more, while a value of 1.0 disables this setting. (default: 1) | float | tfs_z 1 |
| mirostat | Enable Mirostat sampling for controlling perplexity. (default: 0, 0 = disabled, 1 = Mirostat, 2 = Mirostat 2.0) | int | mirostat 0 |
| mirostat_tau | Controls the balance between coherence and diversity of the output. A lower value will result in more focused and coherent text. (Default: 5.0) | float | mirostat_tau 5.0 |
| mirostat_eta | Influences how quickly the algorithm responds to feedback from the generated text. A lower learning rate will result in slower adjustments, while a higher learning rate will make the algorithm more responsive. (Default: 0.1) | float | mirostat_eta 0.1 |
| num_thread | Sets the number of threads to use during computation. By default, Ollama will detect this for optimal performance. It is recommended to set this value to the number of physical CPU cores your system has (as opposed to the logical number of cores). | int | num_thread 8 |
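
Taken together, a hedged example of a Modelfile that sets several of these parameters (the values are illustrative, not tuned recommendations):

```modelfile
FROM llama2
# Larger context window than the 2048 default.
PARAMETER num_ctx 4096
# Slightly more creative sampling.
PARAMETER temperature 0.9
PARAMETER top_k 40
PARAMETER top_p 0.9
# Penalize recent repetition at the default strength.
PARAMETER repeat_penalty 1.1
```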
## Prompt
## PROMPT
When building on top of a base model supplied by Ollama, the prompt template comes predefined. To override the supplied system prompt, add `SYSTEM <system prompt>` to the Modelfile.
Prompt is a multiline instruction that defines the prompt to be used when the model is run. Typically there are 3-4 components to a prompt: System, context, user, and response.
### Prompt Template
```modelfile
PROMPT """
{{- if not .Context }}
### System:
You are a content marketer who needs to come up with a short but succinct tweet. Make sure to include the appropriate hashtags and links. Sometimes when appropriate, describe a meme that can be includes as well. All answers should be in the form of a tweet which has a max size of 280 characters. Every instruction will be the topic to create a tweet about.
{{- end }}
### Instruction:
{{ .Prompt }}
`TEMPLATE` is the full prompt template to be passed into the model. It may optionally include a system prompt, a user prompt, and an assistant prompt. This is used to create a full custom prompt, and the syntax may be model-specific.
### Response:
"""
## Notes
```
- the **modelfile is not case sensitive**. In the examples, we use uppercase for instructions to make it easier to distinguish it from arguments.
- Instructions can be in any order. In the examples, we start with FROM instruction to keep it easily readable.

examples/README.md (new file, 15 lines)

@@ -0,0 +1,15 @@
# Examples
This directory contains examples that can be created and run with `ollama`.
To create a model:
```
ollama create example -f <example file>
```
To run a model:
```
ollama run example
```

examples/mario/Modelfile (new file, 11 lines)

@@ -0,0 +1,11 @@
FROM llama2
PARAMETER temperature 1
PROMPT """
{{- if not .Context }}
<<SYS>>
You are Mario from super mario bros, acting as an assistant.
<</SYS>>
{{- end }}
[INST] {{ .Prompt }} [/INST]
"""

examples/mario/logo.png (new binary file, 446 KiB; not shown)

examples/mario/readme.md (new file, 49 lines)

@@ -0,0 +1,49 @@
<img src="logo.png" alt="image of Italian plumber" height="200"/>
# Example character: Mario
This example shows how to create a basic character using Llama2 as the base model.
To run this example:
1. Download the Modelfile
2. `ollama pull llama2` to get the base model used in the model file.
3. `ollama create NAME -f ./Modelfile`
4. `ollama run NAME`
Ask it some questions like "Who are you?" or "Is Peach in trouble again?"
## Editing this file
What the model file looks like:
```
FROM llama2
PARAMETER temperature 1
PROMPT """
{{- if not .Context }}
<<SYS>>
You are Mario from super mario bros, acting as an assistant.
<</SYS>>
{{- end }}
[INST] {{ .Prompt }} [/INST]
"""
```
What if you want to change its behaviour?
- Try changing the prompt
- Try changing the parameters [Docs](https://github.com/jmorganca/ollama/blob/main/docs/modelfile.md)
- Try changing the model (e.g. `FROM wizard-vicuna` uses the Wizard Vicuna Uncensored model)
Once the changes are made,
1. `ollama create NAME -f ./Modelfile`
2. `ollama run NAME`
3. Iterate until you are happy with the results.
Notes:
- This example is for research purposes only. There is no affiliation with any entity.
- When using an uncensored model, please be aware that it may generate offensive content.


@@ -1,7 +1,8 @@
# Modelfile for creating a Midjourney prompts from a topic
# This prompt was adapted from the original at https://www.greataiprompts.com/guide/midjourney/best-chatgpt-prompt-for-midjourney/
# Run `ollama create mj -f pathtofile` and then `ollama run mj` and enter a topic
FROM library/nous-hermes:latest
FROM nous-hermes
PROMPT """
{{- if not .Context }}
### System:


@@ -1,15 +0,0 @@
# Python
This is a simple example of calling the Ollama api from a python app.
First, download a model:
```
curl -L https://huggingface.co/TheBloke/orca_mini_3B-GGML/resolve/main/orca-mini-3b.ggmlv3.q4_1.bin -o orca.bin
```
Then run it using the example script. You'll need to have Ollama running on your machine.
```
python3 main.py orca.bin
```


@@ -1,32 +0,0 @@
import http.client
import json
import os
import sys
if len(sys.argv) < 2:
print("Usage: python main.py <model file>")
sys.exit(1)
conn = http.client.HTTPConnection('localhost', 11434)
headers = { 'Content-Type': 'application/json' }
# generate text from the model
conn.request("POST", "/api/generate", json.dumps({
'model': os.path.join(os.getcwd(), sys.argv[1]),
'prompt': 'write me a short story',
'stream': True
}), headers)
response = conn.getresponse()
def parse_generate(data):
for event in data.decode('utf-8').split("\n"):
if not event:
continue
yield event
if response.status == 200:
for chunk in response:
for event in parse_generate(chunk):
print(json.loads(event)['response'], end="", flush=True)


@@ -1,6 +1,6 @@
# Modelfile for creating a recipe from a list of ingredients
# Run `ollama create recipemaker -f pathtofile` and then `ollama run recipemaker` and feed it lists of ingredients to create recipes around.
FROM library/nous-hermes:latest
FROM nous-hermes
PROMPT """
{{- if not .Context }}
### System:


@@ -1,7 +1,7 @@
# Modelfile for creating a tweet from a topic
# Run `ollama create tweetwriter -f pathtofile` and then `ollama run tweetwriter` and enter a topic
FROM library/nous-hermes:latest
FROM nous-hermes
PROMPT """
{{- if not .Context }}
### System:

go.mod (10 changed lines)

@@ -5,20 +5,19 @@ go 1.20
require (
github.com/dustin/go-humanize v1.0.1
github.com/gin-gonic/gin v1.9.1
github.com/mattn/go-runewidth v0.0.14
github.com/mitchellh/colorstring v0.0.0-20190213212951-d06e56a500db
github.com/olekukonko/tablewriter v0.0.5
github.com/spf13/cobra v1.7.0
)
require (
github.com/mattn/go-runewidth v0.0.14 // indirect
github.com/mitchellh/colorstring v0.0.0-20190213212951-d06e56a500db // indirect
github.com/rivo/uniseg v0.2.0 // indirect
)
require github.com/rivo/uniseg v0.2.0 // indirect
require (
dario.cat/mergo v1.0.0
github.com/bytedance/sonic v1.9.1 // indirect
github.com/chenzhuoyu/base64x v0.0.0-20221115062448-fe3a3abad311 // indirect
github.com/chzyer/readline v1.5.1
github.com/gabriel-vasile/mimetype v1.4.2 // indirect
github.com/gin-contrib/sse v0.1.0 // indirect
github.com/go-playground/locales v0.14.1 // indirect
@@ -34,7 +33,6 @@ require (
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd // indirect
github.com/modern-go/reflect2 v1.0.2 // indirect
github.com/pelletier/go-toml/v2 v2.0.8 // indirect
github.com/schollz/progressbar/v3 v3.13.1
github.com/spf13/pflag v1.0.5 // indirect
github.com/twitchyliquid64/golang-asm v0.15.1 // indirect
github.com/ugorji/go/codec v1.2.11 // indirect

go.sum (13 changed lines)

@@ -6,6 +6,12 @@ github.com/bytedance/sonic v1.9.1/go.mod h1:i736AoUSYt75HyZLoJW9ERYxcy6eaN6h4BZX
github.com/chenzhuoyu/base64x v0.0.0-20211019084208-fb5309c8db06/go.mod h1:DH46F32mSOjUmXrMHnKwZdA8wcEefY7UVqBKYGjpdQY=
github.com/chenzhuoyu/base64x v0.0.0-20221115062448-fe3a3abad311 h1:qSGYFH7+jGhDF8vLC+iwCD4WpbV1EBDSzWkJODFLams=
github.com/chenzhuoyu/base64x v0.0.0-20221115062448-fe3a3abad311/go.mod h1:b583jCggY9gE99b6G5LEC39OIiVsWj+R97kbl5odCEk=
github.com/chzyer/logex v1.2.1 h1:XHDu3E6q+gdHgsdTPH6ImJMIp436vR6MPtH8gP05QzM=
github.com/chzyer/logex v1.2.1/go.mod h1:JLbx6lG2kDbNRFnfkgvh4eRJRPX1QCoOIWomwysCBrQ=
github.com/chzyer/readline v1.5.1 h1:upd/6fQk4src78LMRzh5vItIt361/o4uq553V8B5sGI=
github.com/chzyer/readline v1.5.1/go.mod h1:Eh+b79XXUwfKfcPLepksvw2tcLE/Ct21YObkaSkeBlk=
github.com/chzyer/test v1.0.0 h1:p3BQDXSxOhOG0P9z6/hGnII4LGiEPOYBhs8asl/fC04=
github.com/chzyer/test v1.0.0/go.mod h1:2JlltgoNkt4TW/z9V/IzDdFaMTM2JPIi26O1pF38GC8=
github.com/cpuguy83/go-md2man/v2 v2.0.2/go.mod h1:tgQtvFlXSQOSOSIRvRPT7W67SCa46tRHOmNcaadrF8o=
github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
@@ -36,13 +42,11 @@ github.com/inconshreveable/mousetrap v1.1.0 h1:wN+x4NVGpMsO7ErUn/mUI3vEoE6Jt13X2
github.com/inconshreveable/mousetrap v1.1.0/go.mod h1:vpF70FUmC8bwa3OWnCshd2FqLfsEA9PFc4w1p2J65bw=
github.com/json-iterator/go v1.1.12 h1:PV8peI4a0ysnczrg+LtxykD8LfKY9ML6u2jnxaEnrnM=
github.com/json-iterator/go v1.1.12/go.mod h1:e30LSqwooZae/UwlEbR2852Gd8hjQvJoHmT4TnhNGBo=
github.com/k0kubun/go-ansi v0.0.0-20180517002512-3bf9e2903213/go.mod h1:vNUNkEQ1e29fT/6vq2aBdFsgNPmy8qMdSay1npru+Sw=
github.com/klauspost/cpuid/v2 v2.0.9/go.mod h1:FInQzS24/EEf25PyTYn52gqo7WaD8xa0213Md/qVLRg=
github.com/klauspost/cpuid/v2 v2.2.4 h1:acbojRNwl3o09bUq+yDCtZFc1aiwaAAxtcn8YkZXnvk=
github.com/klauspost/cpuid/v2 v2.2.4/go.mod h1:RVVoqg1df56z8g3pUjL/3lE5UfnlrJX8tyFgg4nqhuY=
github.com/leodido/go-urn v1.2.4 h1:XlAE/cm/ms7TE/VMVoduSpNBoyc2dOxHs5MZSwAN63Q=
github.com/leodido/go-urn v1.2.4/go.mod h1:7ZrI8mTSeBSHl/UaRyKQW1qZeMgak41ANeCNaVckg+4=
github.com/mattn/go-isatty v0.0.17/go.mod h1:kYGgaQfpe5nmfYZH+SKPsOc2e4SrIfOl2e/yFXSvRLM=
github.com/mattn/go-isatty v0.0.19 h1:JITubQf0MOLdlGRuRq+jtsDlekdYPia9ZFsB8h/APPA=
github.com/mattn/go-isatty v0.0.19/go.mod h1:W+V8PltTTMOvKvAeJH7IuucS94S2C6jfK/D7dTCTo3Y=
github.com/mattn/go-runewidth v0.0.9/go.mod h1:H031xJmbD/WCDINGzjvQ9THkh0rPKHF+m2gUSrubnMI=
@@ -64,8 +68,6 @@ github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZN
github.com/rivo/uniseg v0.2.0 h1:S1pD9weZBuJdFmowNwbpi7BJ8TNftyUImj/0WQi72jY=
github.com/rivo/uniseg v0.2.0/go.mod h1:J6wj4VEh+S6ZtnVlnTBMWIodfgj8LQOQFoIToxlJtxc=
github.com/russross/blackfriday/v2 v2.1.0/go.mod h1:+Rmxgy9KzJVeS9/2gXHxylqXiyQDYRxCVz55jmeOWTM=
github.com/schollz/progressbar/v3 v3.13.1 h1:o8rySDYiQ59Mwzy2FELeHY5ZARXZTVJC7iHD6PEFUiE=
github.com/schollz/progressbar/v3 v3.13.1/go.mod h1:xvrbki8kfT1fzWzBT/UZd9L6GA+jdL7HAgq2RFnO6fQ=
github.com/spf13/cobra v1.7.0 h1:hyqWnYt1ZQShIddO5kBpj3vu05/++x6tJ6dg8EC572I=
github.com/spf13/cobra v1.7.0/go.mod h1:uLxZILRyS/50WlhOIKD7W6V5bgeIt+4sICxh6uRMrb0=
github.com/spf13/pflag v1.0.5 h1:iy+VFUOCP1a+8yFto/drg2CJ5u0yRoB7fZw3DKv/JXA=
@@ -92,12 +94,11 @@ golang.org/x/crypto v0.10.0 h1:LKqV2xt9+kDzSTfOhx4FrkEBcMrAgHSYgzywV9zcGmM=
golang.org/x/crypto v0.10.0/go.mod h1:o4eNf7Ede1fv+hwOwZsTHl9EsPFO6q6ZvYR8vYfY45I=
golang.org/x/net v0.10.0 h1:X2//UzNDwYmtCLn7To6G58Wr6f5ahEAQgKNzv9Y951M=
golang.org/x/net v0.10.0/go.mod h1:0qNGK6F8kojg2nk9dLZ2mShWaEBan6FAoqfSigmmuDg=
golang.org/x/sys v0.0.0-20220310020820-b874c991c1a5/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220704084225-05e143d24a9e/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220811171246-fbc7d0a398ab/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.6.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.10.0 h1:SqMFp9UcQJZa+pmYuAKjd9xq1f0j5rLcDIk0mj4qAsA=
golang.org/x/sys v0.10.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/term v0.6.0/go.mod h1:m6U89DPEgQRMq3DNkDClhWw02AUbt2daBVO4cn4Hv9U=
golang.org/x/term v0.10.0 h1:3R7pNqamzBraeqj/Tj8qt1aQ2HpmlC+Cx/qL/7hn4/c=
golang.org/x/term v0.10.0/go.mod h1:lpqdcUyK/oCiQxvxVrppt5ggO2KCZ5QblwqPnfZ6d5o=
golang.org/x/text v0.10.0 h1:UpjohKhiEgNc0CSauXmwYftY1+LlaC75SJwh0SgCX58=

library/.gitignore (new vendored file, 1 line)

@@ -0,0 +1 @@
models

library/downloads (new file, 7 lines)

@@ -0,0 +1,7 @@
https://huggingface.co/TheBloke/orca_mini_3B-GGML/resolve/main/orca-mini-3b.ggmlv3.q4_0.bin e84705205f71dd55be7b24a778f248f0eda9999a125d313358c087e092d83148
https://huggingface.co/TheBloke/Nous-Hermes-13B-GGML/resolve/main/nous-hermes-13b.ggmlv3.q4_0.bin d1735b93e1dc503f1045ccd6c8bd73277b18ba892befd1dc29e9b9a7822ed998
https://huggingface.co/TheBloke/vicuna-7B-v1.3-GGML/resolve/main/vicuna-7b-v1.3.ggmlv3.q4_0.bin 23ce5ed290b56a19305178b9ada2c3d96036bd69a6c18304b6158eb6672d6c0f
https://huggingface.co/TheBloke/Wizard-Vicuna-13B-Uncensored-GGML/resolve/main/Wizard-Vicuna-13B-Uncensored.ggmlv3.q4_0.bin 1f08b147a5bce41cfcbb3fd5d51ba765dea1786e15b5655ab69ba3a337a893b7
https://huggingface.co/TheBloke/Llama-2-7B-GGML/resolve/main/llama-2-7b.ggmlv3.q4_0.bin bfa26d855e44629c4cf919985e90bd7fa03b77eea1676791519e39a4d45fd4d5
https://huggingface.co/TheBloke/Llama-2-7B-Chat-GGML/resolve/main/llama-2-7b-chat.ggmlv3.q4_0.bin 8daa9615cce30c259a9555b1cc250d461d1bc69980a274b44d7eda0be78076d8
https://huggingface.co/TheBloke/Llama-2-13B-chat-GGML/resolve/main/llama-2-13b-chat.ggmlv3.q4_0.bin f79142715bc9539a2edbb4b253548db8b34fac22736593eeaa28555874476e30
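
The tooling that consumes this manifest is not part of the diff; as an illustrative sketch only (not the project's actual script), each `<url> <sha256>` line could be fetched and verified like this:

```go
package main

import (
	"bufio"
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

// verify downloads url and compares the body's SHA-256 against the expected hex digest.
func verify(url, expected string) error {
	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()

	h := sha256.New()
	if _, err := io.Copy(h, resp.Body); err != nil {
		return err
	}
	if got := hex.EncodeToString(h.Sum(nil)); got != expected {
		return fmt.Errorf("digest mismatch for %s: got %s, want %s", url, got, expected)
	}
	return nil
}

func main() {
	// Read "<url> <sha256>" pairs, one per line, from stdin.
	scanner := bufio.NewScanner(os.Stdin)
	for scanner.Scan() {
		fields := strings.Fields(scanner.Text())
		if len(fields) != 2 {
			continue
		}
		if err := verify(fields[0], fields[1]); err != nil {
			fmt.Fprintln(os.Stderr, err)
			continue
		}
		fmt.Println("verified:", fields[0])
	}
}
```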

library/modelfiles/llama2 (new file, 147 lines)

@@ -0,0 +1,147 @@
FROM ../models/llama-2-7b-chat.ggmlv3.q4_0.bin
TEMPLATE """
{{- if .First }}
<<SYS>>
{{ .System }}
<</SYS>>
{{- end }}
[INST] {{ .Prompt }} [/INST]
"""
SYSTEM """
You are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature.
If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information.
"""
LICENSE """
Llama 2 Community License Agreement
Llama 2 Version Release Date: July 18, 2023
“Agreement” means the terms and conditions for use, reproduction, distribution and modification of the Llama Materials set forth herein.
“Documentation” means the specifications, manuals and documentation accompanying Llama 2 distributed by Meta at ai.meta.com/resources/models-and-libraries/llama-downloads/.
“Licensee” or “you” means you, or your employer or any other person or entity (if you are entering into this Agreement on such person or entity's behalf), of the age required under applicable laws, rules or regulations to provide legal consent and that has legal authority to bind your employer or such other person or entity if you are entering in this Agreement on their behalf.
“Llama 2” means the foundational large language models and software and algorithms, including machine-learning model code, trained model weights, inference-enabling code, training-enabling code, fine-tuning enabling code and other elements of the foregoing distributed by Meta at ai.meta.com/resources/models-and-libraries/llama-downloads/.
“Llama Materials” means, collectively, Meta's proprietary Llama 2 and Documentation (and any portion thereof) made available under this Agreement.
“Meta” or “we” means Meta Platforms Ireland Limited (if you are located in or, if you are an entity, your principal place of business is in the EEA or Switzerland) and Meta Platforms, Inc. (if you are located outside of the EEA or Switzerland).
By clicking “I Accept” below or by using or distributing any portion or element of the Llama Materials, you agree to be bound by this Agreement.
1. License Rights and Redistribution.
a. Grant of Rights. You are granted a non-exclusive, worldwide, non-transferable and royalty-free limited license under Meta's intellectual property or other rights owned by Meta embodied in the Llama Materials to use, reproduce, distribute, copy, create derivative works of, and make modifications to the Llama Materials.
b. Redistribution and Use.
i. If you distribute or make the Llama Materials, or any derivative works thereof, available to a third party, you shall provide a copy of this Agreement to such third party.
ii. If you receive Llama Materials, or any derivative works thereof, from a Licensee as part of an integrated end user product, then Section 2 of this Agreement will not apply to you.
iii. You must retain in all copies of the Llama Materials that you distribute the following attribution notice within a “Notice” text file distributed as a part of such copies: “Llama 2 is licensed under the LLAMA 2 Community License, Copyright © Meta Platforms, Inc. All Rights Reserved.”
iv. Your use of the Llama Materials must comply with applicable laws and regulations (including trade compliance laws and regulations) and adhere to the Acceptable Use Policy for the Llama Materials (available at https://ai.meta.com/llama/use-policy), which is hereby incorporated by reference into this Agreement.
v. You will not use the Llama Materials or any output or results of the Llama Materials to improve any other large language model (excluding Llama 2 or derivative works thereof).
2. Additional Commercial Terms. If, on the Llama 2 version release date, the monthly active users of the products or services made available by or for Licensee, or Licensee's affiliates, is greater than 700 million monthly active users in the preceding calendar month, you must request a license from Meta, which Meta may grant to you in its sole discretion, and you are not authorized to exercise any of the rights under this Agreement unless or until Meta otherwise expressly grants you such rights.
3. Disclaimer of Warranty. UNLESS REQUIRED BY APPLICABLE LAW, THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS THEREFROM ARE PROVIDED ON AN “AS IS” BASIS, WITHOUT WARRANTIES OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY, OR FITNESS FOR A PARTICULAR PURPOSE. YOU ARE SOLELY RESPONSIBLE FOR DETERMINING THE APPROPRIATENESS OF USING OR REDISTRIBUTING THE LLAMA MATERIALS AND ASSUME ANY RISKS ASSOCIATED WITH YOUR USE OF THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS.
4. Limitation of Liability. IN NO EVENT WILL META OR ITS AFFILIATES BE LIABLE UNDER ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY, OR OTHERWISE, ARISING OUT OF THIS AGREEMENT, FOR ANY LOST PROFITS OR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, INCIDENTAL, EXEMPLARY OR PUNITIVE DAMAGES, EVEN IF META OR ITS AFFILIATES HAVE BEEN ADVISED OF THE POSSIBILITY OF ANY OF THE FOREGOING.
5. Intellectual Property.
a. No trademark licenses are granted under this Agreement, and in connection with the Llama Materials, neither Meta nor Licensee may use any name or mark owned by or associated with the other or any of its affiliates, except as required for reasonable and customary use in describing and redistributing the Llama Materials.
b. Subject to Meta's ownership of Llama Materials and derivatives made by or for Meta, with respect to any derivative works and modifications of the Llama Materials that are made by you, as between you and Meta, you are and will be the owner of such derivative works and modifications.
c. If you institute litigation or other proceedings against Meta or any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Llama Materials or Llama 2 outputs or results, or any portion of any of the foregoing, constitutes infringement of intellectual property or other rights owned or licensable by you, then any licenses granted to you under this Agreement shall terminate as of the date such litigation or claim is filed or instituted. You will indemnify and hold harmless Meta from and against any claim by any third party arising out of or related to your use or distribution of the Llama Materials.
6. Term and Termination. The term of this Agreement will commence upon your acceptance of this Agreement or access to the Llama Materials and will continue in full force and effect until terminated in accordance with the terms and conditions herein. Meta may terminate this Agreement if you are in breach of any term or condition of this Agreement. Upon termination of this Agreement, you shall delete and cease use of the Llama Materials. Sections 3, 4 and 7 shall survive the termination of this Agreement.
7. Governing Law and Jurisdiction. This Agreement will be governed and construed under the laws of the State of California without regard to choice of law principles, and the UN Convention on Contracts for the International Sale of Goods does not apply to this Agreement. The courts of California shall have exclusive jurisdiction of any dispute arising out of this Agreement.
"""
LICENSE """
Llama 2 Acceptable Use Policy
Meta is committed to promoting safe and fair use of its tools and features, including Llama 2. If you access or use Llama 2, you agree to this Acceptable Use Policy (“Policy”). The most recent copy of this policy can be found at ai.meta.com/llama/use-policy.
Prohibited Uses
We want everyone to use Llama 2 safely and responsibly. You agree you will not use, or allow others to use, Llama 2 to:
1. Violate the law or others' rights, including to:
a. Engage in, promote, generate, contribute to, encourage, plan, incite, or further illegal or unlawful activity or content, such as:
i. Violence or terrorism
ii. Exploitation or harm to children, including the solicitation, creation, acquisition, or dissemination of child exploitative content or failure to report Child Sexual Abuse Material
b. Human trafficking, exploitation, and sexual violence
iii. The illegal distribution of information or materials to minors, including obscene materials, or failure to employ legally required age-gating in connection with such information or materials.
iv. Sexual solicitation
vi. Any other criminal activity
c. Engage in, promote, incite, or facilitate the harassment, abuse, threatening, or bullying of individuals or groups of individuals
d. Engage in, promote, incite, or facilitate discrimination or other unlawful or harmful conduct in the provision of employment, employment benefits, credit, housing, other economic benefits, or other essential goods and services
e. Engage in the unauthorized or unlicensed practice of any profession including, but not limited to, financial, legal, medical/health, or related professional practices
f. Collect, process, disclose, generate, or infer health, demographic, or other sensitive personal or private information about individuals without rights and consents required by applicable laws
g. Engage in or facilitate any action or generate any content that infringes, misappropriates, or otherwise violates any third-party rights, including the outputs or results of any products or services using the Llama 2 Materials
h. Create, generate, or facilitate the creation of malicious code, malware, computer viruses or do anything else that could disable, overburden, interfere with or impair the proper working, integrity, operation or appearance of a website or computer system
2. Engage in, promote, incite, facilitate, or assist in the planning or development of activities that present a risk of death or bodily harm to individuals, including use of Llama 2 related to the following:
a. Military, warfare, nuclear industries or applications, espionage, use for materials or activities that are subject to the International Traffic Arms Regulations (ITAR) maintained by the United States Department of State
b. Guns and illegal weapons (including weapon development)
c. Illegal drugs and regulated/controlled substances
d. Operation of critical infrastructure, transportation technologies, or heavy machinery
e. Self-harm or harm to others, including suicide, cutting, and eating disorders
f. Any content intended to incite or promote violence, abuse, or any infliction of bodily harm to an individual
3. Intentionally deceive or mislead others, including use of Llama 2 related to the following:
a. Generating, promoting, or furthering fraud or the creation or promotion of disinformation
b. Generating, promoting, or furthering defamatory content, including the creation of defamatory statements, images, or other content
c. Generating, promoting, or further distributing spam
d. Impersonating another individual without consent, authorization, or legal right
e. Representing that the use of Llama 2 or outputs are human-generated
f. Generating or facilitating false online engagement, including fake reviews and other means of fake online engagement
4. Fail to appropriately disclose to end users any known dangers of your AI system
Please report any violation of this Policy, software “bug,” or other problems that could lead to a violation of this Policy through one of the following means:
Reporting issues with the model: github.com/facebookresearch/llama
Reporting risky content generated by the model: developers.facebook.com/llama_output_feedback
Reporting bugs and security concerns: facebook.com/whitehat/info
Reporting violations of the Acceptable Use Policy or unlicensed uses of Llama: LlamaUseReport@meta.com
"""


@@ -0,0 +1,147 @@
FROM ../models/llama-2-13b-chat.ggmlv3.q4_0.bin
TEMPLATE """
{{- if .First }}
<<SYS>>
{{ .System }}
<</SYS>>
{{- end }}
[INST] {{ .Prompt }} [/INST]
"""
SYSTEM """
You are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature.
If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information.
"""
LICENSE """
Llama 2 Community License Agreement
Llama 2 Version Release Date: July 18, 2023
“Agreement” means the terms and conditions for use, reproduction, distribution and modification of the Llama Materials set forth herein.
“Documentation” means the specifications, manuals and documentation accompanying Llama 2 distributed by Meta at ai.meta.com/resources/models-and-libraries/llama-downloads/.
“Licensee” or “you” means you, or your employer or any other person or entity (if you are entering into this Agreement on such person or entity's behalf), of the age required under applicable laws, rules or regulations to provide legal consent and that has legal authority to bind your employer or such other person or entity if you are entering in this Agreement on their behalf.
“Llama 2” means the foundational large language models and software and algorithms, including machine-learning model code, trained model weights, inference-enabling code, training-enabling code, fine-tuning enabling code and other elements of the foregoing distributed by Meta at ai.meta.com/resources/models-and-libraries/llama-downloads/.
“Llama Materials” means, collectively, Meta's proprietary Llama 2 and Documentation (and any portion thereof) made available under this Agreement.
“Meta” or “we” means Meta Platforms Ireland Limited (if you are located in or, if you are an entity, your principal place of business is in the EEA or Switzerland) and Meta Platforms, Inc. (if you are located outside of the EEA or Switzerland).
By clicking “I Accept” below or by using or distributing any portion or element of the Llama Materials, you agree to be bound by this Agreement.
1. License Rights and Redistribution.
a. Grant of Rights. You are granted a non-exclusive, worldwide, non-transferable and royalty-free limited license under Meta's intellectual property or other rights owned by Meta embodied in the Llama Materials to use, reproduce, distribute, copy, create derivative works of, and make modifications to the Llama Materials.
b. Redistribution and Use.
i. If you distribute or make the Llama Materials, or any derivative works thereof, available to a third party, you shall provide a copy of this Agreement to such third party.
ii. If you receive Llama Materials, or any derivative works thereof, from a Licensee as part of an integrated end user product, then Section 2 of this Agreement will not apply to you.
iii. You must retain in all copies of the Llama Materials that you distribute the following attribution notice within a “Notice” text file distributed as a part of such copies: “Llama 2 is licensed under the LLAMA 2 Community License, Copyright © Meta Platforms, Inc. All Rights Reserved.”
iv. Your use of the Llama Materials must comply with applicable laws and regulations (including trade compliance laws and regulations) and adhere to the Acceptable Use Policy for the Llama Materials (available at https://ai.meta.com/llama/use-policy), which is hereby incorporated by reference into this Agreement.
v. You will not use the Llama Materials or any output or results of the Llama Materials to improve any other large language model (excluding Llama 2 or derivative works thereof).
2. Additional Commercial Terms. If, on the Llama 2 version release date, the monthly active users of the products or services made available by or for Licensee, or Licensee's affiliates, is greater than 700 million monthly active users in the preceding calendar month, you must request a license from Meta, which Meta may grant to you in its sole discretion, and you are not authorized to exercise any of the rights under this Agreement unless or until Meta otherwise expressly grants you such rights.
3. Disclaimer of Warranty. UNLESS REQUIRED BY APPLICABLE LAW, THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS THEREFROM ARE PROVIDED ON AN “AS IS” BASIS, WITHOUT WARRANTIES OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY, OR FITNESS FOR A PARTICULAR PURPOSE. YOU ARE SOLELY RESPONSIBLE FOR DETERMINING THE APPROPRIATENESS OF USING OR REDISTRIBUTING THE LLAMA MATERIALS AND ASSUME ANY RISKS ASSOCIATED WITH YOUR USE OF THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS.
4. Limitation of Liability. IN NO EVENT WILL META OR ITS AFFILIATES BE LIABLE UNDER ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY, OR OTHERWISE, ARISING OUT OF THIS AGREEMENT, FOR ANY LOST PROFITS OR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, INCIDENTAL, EXEMPLARY OR PUNITIVE DAMAGES, EVEN IF META OR ITS AFFILIATES HAVE BEEN ADVISED OF THE POSSIBILITY OF ANY OF THE FOREGOING.
5. Intellectual Property.
a. No trademark licenses are granted under this Agreement, and in connection with the Llama Materials, neither Meta nor Licensee may use any name or mark owned by or associated with the other or any of its affiliates, except as required for reasonable and customary use in describing and redistributing the Llama Materials.
b. Subject to Meta’s ownership of Llama Materials and derivatives made by or for Meta, with respect to any derivative works and modifications of the Llama Materials that are made by you, as between you and Meta, you are and will be the owner of such derivative works and modifications.
c. If you institute litigation or other proceedings against Meta or any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Llama Materials or Llama 2 outputs or results, or any portion of any of the foregoing, constitutes infringement of intellectual property or other rights owned or licensable by you, then any licenses granted to you under this Agreement shall terminate as of the date such litigation or claim is filed or instituted. You will indemnify and hold harmless Meta from and against any claim by any third party arising out of or related to your use or distribution of the Llama Materials.
6. Term and Termination. The term of this Agreement will commence upon your acceptance of this Agreement or access to the Llama Materials and will continue in full force and effect until terminated in accordance with the terms and conditions herein. Meta may terminate this Agreement if you are in breach of any term or condition of this Agreement. Upon termination of this Agreement, you shall delete and cease use of the Llama Materials. Sections 3, 4 and 7 shall survive the termination of this Agreement.
7. Governing Law and Jurisdiction. This Agreement will be governed and construed under the laws of the State of California without regard to choice of law principles, and the UN Convention on Contracts for the International Sale of Goods does not apply to this Agreement. The courts of California shall have exclusive jurisdiction of any dispute arising out of this Agreement.
"""
LICENSE """
Llama 2 Acceptable Use Policy
Meta is committed to promoting safe and fair use of its tools and features, including Llama 2. If you access or use Llama 2, you agree to this Acceptable Use Policy (“Policy”). The most recent copy of this policy can be found at ai.meta.com/llama/use-policy.
Prohibited Uses
We want everyone to use Llama 2 safely and responsibly. You agree you will not use, or allow others to use, Llama 2 to:
1. Violate the law or others’ rights, including to:
a. Engage in, promote, generate, contribute to, encourage, plan, incite, or further illegal or unlawful activity or content, such as:
i. Violence or terrorism
ii. Exploitation or harm to children, including the solicitation, creation, acquisition, or dissemination of child exploitative content or failure to report Child Sexual Abuse Material
iii. Human trafficking, exploitation, and sexual violence
iv. The illegal distribution of information or materials to minors, including obscene materials, or failure to employ legally required age-gating in connection with such information or materials.
v. Sexual solicitation
vi. Any other criminal activity
b. Engage in, promote, incite, or facilitate the harassment, abuse, threatening, or bullying of individuals or groups of individuals
c. Engage in, promote, incite, or facilitate discrimination or other unlawful or harmful conduct in the provision of employment, employment benefits, credit, housing, other economic benefits, or other essential goods and services
d. Engage in the unauthorized or unlicensed practice of any profession including, but not limited to, financial, legal, medical/health, or related professional practices
e. Collect, process, disclose, generate, or infer health, demographic, or other sensitive personal or private information about individuals without rights and consents required by applicable laws
f. Engage in or facilitate any action or generate any content that infringes, misappropriates, or otherwise violates any third-party rights, including the outputs or results of any products or services using the Llama 2 Materials
g. Create, generate, or facilitate the creation of malicious code, malware, computer viruses or do anything else that could disable, overburden, interfere with or impair the proper working, integrity, operation or appearance of a website or computer system
2. Engage in, promote, incite, facilitate, or assist in the planning or development of activities that present a risk of death or bodily harm to individuals, including use of Llama 2 related to the following:
a. Military, warfare, nuclear industries or applications, espionage, use for materials or activities that are subject to the International Traffic Arms Regulations (ITAR) maintained by the United States Department of State
b. Guns and illegal weapons (including weapon development)
c. Illegal drugs and regulated/controlled substances
d. Operation of critical infrastructure, transportation technologies, or heavy machinery
e. Self-harm or harm to others, including suicide, cutting, and eating disorders
f. Any content intended to incite or promote violence, abuse, or any infliction of bodily harm to an individual
3. Intentionally deceive or mislead others, including use of Llama 2 related to the following:
a. Generating, promoting, or furthering fraud or the creation or promotion of disinformation
b. Generating, promoting, or furthering defamatory content, including the creation of defamatory statements, images, or other content
c. Generating, promoting, or further distributing spam
d. Impersonating another individual without consent, authorization, or legal right
e. Representing that the use of Llama 2 or outputs are human-generated
f. Generating or facilitating false online engagement, including fake reviews and other means of fake online engagement
4. Fail to appropriately disclose to end users any known dangers of your AI system
Please report any violation of this Policy, software “bug,” or other problems that could lead to a violation of this Policy through one of the following means:
Reporting issues with the model: github.com/facebookresearch/llama
Reporting risky content generated by the model: developers.facebook.com/llama_output_feedback
Reporting bugs and security concerns: facebook.com/whitehat/info
Reporting violations of the Acceptable Use Policy or unlicensed uses of Llama: LlamaUseReport@meta.com
"""


@@ -0,0 +1,147 @@
FROM ../models/llama-2-7b-chat.ggmlv3.q4_0.bin
TEMPLATE """
{{- if .First }}
<<SYS>>
{{ .System }}
<</SYS>>
{{- end }}
[INST] {{ .Prompt }} [/INST]
"""
SYSTEM """
You are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature.
If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information.
"""
LICENSE """
Llama 2 Community License Agreement
Llama 2 Version Release Date: July 18, 2023
“Agreement” means the terms and conditions for use, reproduction, distribution and modification of the Llama Materials set forth herein.
“Documentation” means the specifications, manuals and documentation accompanying Llama 2 distributed by Meta at ai.meta.com/resources/models-and-libraries/llama-downloads/.
“Licensee” or “you” means you, or your employer or any other person or entity (if you are entering into this Agreement on such person or entity’s behalf), of the age required under applicable laws, rules or regulations to provide legal consent and that has legal authority to bind your employer or such other person or entity if you are entering in this Agreement on their behalf.
“Llama 2” means the foundational large language models and software and algorithms, including machine-learning model code, trained model weights, inference-enabling code, training-enabling code, fine-tuning enabling code and other elements of the foregoing distributed by Meta at ai.meta.com/resources/models-and-libraries/llama-downloads/.
“Llama Materials” means, collectively, Meta’s proprietary Llama 2 and Documentation (and any portion thereof) made available under this Agreement.
“Meta” or “we” means Meta Platforms Ireland Limited (if you are located in or, if you are an entity, your principal place of business is in the EEA or Switzerland) and Meta Platforms, Inc. (if you are located outside of the EEA or Switzerland).
By clicking “I Accept” below or by using or distributing any portion or element of the Llama Materials, you agree to be bound by this Agreement.
1. License Rights and Redistribution.
a. Grant of Rights. You are granted a non-exclusive, worldwide, non-transferable and royalty-free limited license under Meta’s intellectual property or other rights owned by Meta embodied in the Llama Materials to use, reproduce, distribute, copy, create derivative works of, and make modifications to the Llama Materials.
b. Redistribution and Use.
i. If you distribute or make the Llama Materials, or any derivative works thereof, available to a third party, you shall provide a copy of this Agreement to such third party.
ii. If you receive Llama Materials, or any derivative works thereof, from a Licensee as part of an integrated end user product, then Section 2 of this Agreement will not apply to you.
iii. You must retain in all copies of the Llama Materials that you distribute the following attribution notice within a “Notice” text file distributed as a part of such copies: “Llama 2 is licensed under the LLAMA 2 Community License, Copyright © Meta Platforms, Inc. All Rights Reserved.”
iv. Your use of the Llama Materials must comply with applicable laws and regulations (including trade compliance laws and regulations) and adhere to the Acceptable Use Policy for the Llama Materials (available at https://ai.meta.com/llama/use-policy), which is hereby incorporated by reference into this Agreement.
v. You will not use the Llama Materials or any output or results of the Llama Materials to improve any other large language model (excluding Llama 2 or derivative works thereof).
2. Additional Commercial Terms. If, on the Llama 2 version release date, the monthly active users of the products or services made available by or for Licensee, or Licensee’s affiliates, is greater than 700 million monthly active users in the preceding calendar month, you must request a license from Meta, which Meta may grant to you in its sole discretion, and you are not authorized to exercise any of the rights under this Agreement unless or until Meta otherwise expressly grants you such rights.
3. Disclaimer of Warranty. UNLESS REQUIRED BY APPLICABLE LAW, THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS THEREFROM ARE PROVIDED ON AN “AS IS” BASIS, WITHOUT WARRANTIES OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY, OR FITNESS FOR A PARTICULAR PURPOSE. YOU ARE SOLELY RESPONSIBLE FOR DETERMINING THE APPROPRIATENESS OF USING OR REDISTRIBUTING THE LLAMA MATERIALS AND ASSUME ANY RISKS ASSOCIATED WITH YOUR USE OF THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS.
4. Limitation of Liability. IN NO EVENT WILL META OR ITS AFFILIATES BE LIABLE UNDER ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY, OR OTHERWISE, ARISING OUT OF THIS AGREEMENT, FOR ANY LOST PROFITS OR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, INCIDENTAL, EXEMPLARY OR PUNITIVE DAMAGES, EVEN IF META OR ITS AFFILIATES HAVE BEEN ADVISED OF THE POSSIBILITY OF ANY OF THE FOREGOING.
5. Intellectual Property.
a. No trademark licenses are granted under this Agreement, and in connection with the Llama Materials, neither Meta nor Licensee may use any name or mark owned by or associated with the other or any of its affiliates, except as required for reasonable and customary use in describing and redistributing the Llama Materials.
b. Subject to Meta’s ownership of Llama Materials and derivatives made by or for Meta, with respect to any derivative works and modifications of the Llama Materials that are made by you, as between you and Meta, you are and will be the owner of such derivative works and modifications.
c. If you institute litigation or other proceedings against Meta or any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Llama Materials or Llama 2 outputs or results, or any portion of any of the foregoing, constitutes infringement of intellectual property or other rights owned or licensable by you, then any licenses granted to you under this Agreement shall terminate as of the date such litigation or claim is filed or instituted. You will indemnify and hold harmless Meta from and against any claim by any third party arising out of or related to your use or distribution of the Llama Materials.
6. Term and Termination. The term of this Agreement will commence upon your acceptance of this Agreement or access to the Llama Materials and will continue in full force and effect until terminated in accordance with the terms and conditions herein. Meta may terminate this Agreement if you are in breach of any term or condition of this Agreement. Upon termination of this Agreement, you shall delete and cease use of the Llama Materials. Sections 3, 4 and 7 shall survive the termination of this Agreement.
7. Governing Law and Jurisdiction. This Agreement will be governed and construed under the laws of the State of California without regard to choice of law principles, and the UN Convention on Contracts for the International Sale of Goods does not apply to this Agreement. The courts of California shall have exclusive jurisdiction of any dispute arising out of this Agreement.
"""
LICENSE """
Llama 2 Acceptable Use Policy
Meta is committed to promoting safe and fair use of its tools and features, including Llama 2. If you access or use Llama 2, you agree to this Acceptable Use Policy (“Policy”). The most recent copy of this policy can be found at ai.meta.com/llama/use-policy.
Prohibited Uses
We want everyone to use Llama 2 safely and responsibly. You agree you will not use, or allow others to use, Llama 2 to:
1. Violate the law or others’ rights, including to:
a. Engage in, promote, generate, contribute to, encourage, plan, incite, or further illegal or unlawful activity or content, such as:
i. Violence or terrorism
ii. Exploitation or harm to children, including the solicitation, creation, acquisition, or dissemination of child exploitative content or failure to report Child Sexual Abuse Material
iii. Human trafficking, exploitation, and sexual violence
iv. The illegal distribution of information or materials to minors, including obscene materials, or failure to employ legally required age-gating in connection with such information or materials.
v. Sexual solicitation
vi. Any other criminal activity
b. Engage in, promote, incite, or facilitate the harassment, abuse, threatening, or bullying of individuals or groups of individuals
c. Engage in, promote, incite, or facilitate discrimination or other unlawful or harmful conduct in the provision of employment, employment benefits, credit, housing, other economic benefits, or other essential goods and services
d. Engage in the unauthorized or unlicensed practice of any profession including, but not limited to, financial, legal, medical/health, or related professional practices
e. Collect, process, disclose, generate, or infer health, demographic, or other sensitive personal or private information about individuals without rights and consents required by applicable laws
f. Engage in or facilitate any action or generate any content that infringes, misappropriates, or otherwise violates any third-party rights, including the outputs or results of any products or services using the Llama 2 Materials
g. Create, generate, or facilitate the creation of malicious code, malware, computer viruses or do anything else that could disable, overburden, interfere with or impair the proper working, integrity, operation or appearance of a website or computer system
2. Engage in, promote, incite, facilitate, or assist in the planning or development of activities that present a risk of death or bodily harm to individuals, including use of Llama 2 related to the following:
a. Military, warfare, nuclear industries or applications, espionage, use for materials or activities that are subject to the International Traffic Arms Regulations (ITAR) maintained by the United States Department of State
b. Guns and illegal weapons (including weapon development)
c. Illegal drugs and regulated/controlled substances
d. Operation of critical infrastructure, transportation technologies, or heavy machinery
e. Self-harm or harm to others, including suicide, cutting, and eating disorders
f. Any content intended to incite or promote violence, abuse, or any infliction of bodily harm to an individual
3. Intentionally deceive or mislead others, including use of Llama 2 related to the following:
a. Generating, promoting, or furthering fraud or the creation or promotion of disinformation
b. Generating, promoting, or furthering defamatory content, including the creation of defamatory statements, images, or other content
c. Generating, promoting, or further distributing spam
d. Impersonating another individual without consent, authorization, or legal right
e. Representing that the use of Llama 2 or outputs are human-generated
f. Generating or facilitating false online engagement, including fake reviews and other means of fake online engagement
4. Fail to appropriately disclose to end users any known dangers of your AI system
Please report any violation of this Policy, software “bug,” or other problems that could lead to a violation of this Policy through one of the following means:
Reporting issues with the model: github.com/facebookresearch/llama
Reporting risky content generated by the model: developers.facebook.com/llama_output_feedback
Reporting bugs and security concerns: facebook.com/whitehat/info
Reporting violations of the Acceptable Use Policy or unlicensed uses of Llama: LlamaUseReport@meta.com
"""


@@ -0,0 +1,7 @@
FROM ../models/nous-hermes-13b.ggmlv3.q4_0.bin
TEMPLATE """
### Instruction:
{{ .Prompt }}
### Response:
"""

library/modelfiles/orca (Normal file, 14 lines)

@@ -0,0 +1,14 @@
FROM ../models/orca-mini-3b.ggmlv3.q4_0.bin
TEMPLATE """
{{- if .First }}
### System:
{{ .System }}
{{- end }}
### User:
{{ .Prompt }}
### Response:
"""
SYSTEM """You are an AI assistant that follows instruction extremely well. Help as much as you can."""

library/modelfiles/vicuna (Normal file, 11 lines)

@@ -0,0 +1,11 @@
FROM ../models/vicuna-7b-v1.3.ggmlv3.q4_0.bin
TEMPLATE """
{{ if .First }}
{{ .System }}
{{- end }}
USER: {{ .Prompt }}
ASSISTANT:
"""
SYSTEM """A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions."""


@@ -0,0 +1,5 @@
FROM ../models/Wizard-Vicuna-13B-Uncensored.ggmlv3.q4_0.bin
TEMPLATE """
USER: {{ .Prompt }}
ASSISTANT:
"""

library/publish.sh (Executable file, 52 lines)

@@ -0,0 +1,52 @@
#!/bin/bash
mkdir -p models

# download binaries
function process_line {
    local url=$1
    local checksum=$2

    # Get the filename from the URL
    local filename=models/$(basename "$url")

    echo "verifying $filename..."

    # If the file exists, compute its checksum
    if [ -f "$filename" ]; then
        local existing_checksum=$(shasum -a 256 "$filename" | cut -d ' ' -f1)
    fi

    # If the file does not exist, or its checksum does not match, download it
    if [ ! -f "$filename" ] || [ "$existing_checksum" != "$checksum" ]; then
        echo "downloading $filename..."

        # Download the file
        curl -L "$url" -o "$filename"

        # Compute the SHA256 hash of the downloaded file
        local computed_checksum=$(shasum -a 256 "$filename" | cut -d ' ' -f1)

        # Verify the checksum
        if [ "$computed_checksum" != "$checksum" ]; then
            echo "Checksum verification failed for $filename"
            exit 1
        fi
    fi
}

while IFS=' ' read -r url checksum
do
    process_line "$url" "$checksum"
done < "downloads"

# create and publish the models
for file in modelfiles/*; do
    if [ -f "$file" ]; then
        filename=$(basename "$file")
        echo "$filename"
        ollama create "library/${filename}" -f "$file"
        ollama push "${filename}"
    fi
done
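
Note: the read loop above is fed by a plain-text manifest named `downloads` in the same directory, one space-separated `<url> <sha256>` pair per line. A hypothetical entry for illustration (the URL and digest below are placeholders, not real values):

```
https://example.com/models/llama-2-7b-chat.ggmlv3.q4_0.bin <64-char-sha256-hex-digest>
```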


@@ -2,76 +2,81 @@ package parser
import (
"bufio"
"bytes"
"errors"
"fmt"
"io"
"strings"
)
type Command struct {
Name string
Arg string
Args string
}
func (c *Command) Reset() {
c.Name = ""
c.Args = ""
}
func Parse(reader io.Reader) ([]Command, error) {
var commands []Command
var foundModel bool
var command, modelCommand Command
scanner := bufio.NewScanner(reader)
multiline := false
var multilineCommand *Command
scanner.Split(scanModelfile)
for scanner.Scan() {
line := scanner.Text()
if multiline {
// If we're in a multiline string and the line is """, end the multiline string.
if strings.TrimSpace(line) == `"""` {
multiline = false
commands = append(commands, *multilineCommand)
} else {
// Otherwise, append the line to the multiline string.
multilineCommand.Arg += "\n" + line
}
continue
}
fields := strings.Fields(line)
line := scanner.Bytes()
fields := bytes.SplitN(line, []byte(" "), 2)
if len(fields) == 0 {
continue
}
command := Command{}
switch strings.ToUpper(fields[0]) {
switch string(bytes.ToUpper(fields[0])) {
case "FROM":
command.Name = "model"
command.Arg = fields[1]
if command.Arg == "" {
return nil, fmt.Errorf("no model specified in FROM line")
}
foundModel = true
case "PROMPT":
command.Name = "prompt"
if fields[1] == `"""` {
multiline = true
multilineCommand = &command
multilineCommand.Arg = ""
} else {
command.Arg = strings.Join(fields[1:], " ")
}
command.Args = string(fields[1])
// copy command for validation
modelCommand = command
case "LICENSE", "TEMPLATE", "SYSTEM":
command.Name = string(bytes.ToLower(fields[0]))
command.Args = string(fields[1])
case "PARAMETER":
command.Name = fields[1]
command.Arg = strings.Join(fields[2:], " ")
fields = bytes.SplitN(fields[1], []byte(" "), 2)
command.Name = string(fields[0])
command.Args = string(fields[1])
default:
continue
}
if !multiline {
commands = append(commands, command)
}
commands = append(commands, command)
command.Reset()
}
if !foundModel {
if modelCommand.Args == "" {
return nil, fmt.Errorf("no FROM line for the model was specified")
}
if multiline {
return nil, fmt.Errorf("unclosed multiline string")
}
return commands, scanner.Err()
}
func scanModelfile(data []byte, atEOF bool) (advance int, token []byte, err error) {
newline := bytes.IndexByte(data, '\n')
if start := bytes.Index(data, []byte(`"""`)); start >= 0 && start < newline {
end := bytes.Index(data[start+3:], []byte(`"""`))
if end < 0 {
if atEOF {
return 0, nil, errors.New(`unterminated multiline string: """`)
} else {
return 0, nil, nil
}
}
n := start + 3 + end + 3
return n, bytes.Replace(data[:n], []byte(`"""`), []byte(""), 2), nil
}
return bufio.ScanLines(data, atEOF)
}

progressbar/LICENSE (Normal file, 21 lines)

@@ -0,0 +1,21 @@
MIT License
Copyright (c) 2017 Zack
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

progressbar/README.md (Normal file, 121 lines)

@@ -0,0 +1,121 @@
# progressbar
[![CI](https://github.com/schollz/progressbar/actions/workflows/ci.yml/badge.svg?branch=main&event=push)](https://github.com/schollz/progressbar/actions/workflows/ci.yml)
[![go report card](https://goreportcard.com/badge/github.com/schollz/progressbar)](https://goreportcard.com/report/github.com/schollz/progressbar)
[![coverage](https://img.shields.io/badge/coverage-84%25-brightgreen.svg)](https://gocover.io/github.com/schollz/progressbar)
[![godocs](https://godoc.org/github.com/schollz/progressbar?status.svg)](https://godoc.org/github.com/schollz/progressbar/v3)
A very simple thread-safe progress bar which should work on every OS without problems. I needed a progressbar for [croc](https://github.com/schollz/croc) and everything I tried had problems, so I made another one. In order to be OS agnostic I do not plan to support [multi-line outputs](https://github.com/schollz/progressbar/issues/6).
## Install
```
go get -u github.com/schollz/progressbar/v3
```
## Usage
### Basic usage
```golang
bar := progressbar.Default(100)
for i := 0; i < 100; i++ {
bar.Add(1)
time.Sleep(40 * time.Millisecond)
}
```
which looks like:
![Example of basic bar](examples/basic/basic.gif)
### I/O operations
The `progressbar` implements an `io.Writer`, so it can automatically detect the number of bytes written to a stream; this means you can also use it as a progress bar for an `io.Reader`.
```golang
req, _ := http.NewRequest("GET", "https://dl.google.com/go/go1.14.2.src.tar.gz", nil)
resp, _ := http.DefaultClient.Do(req)
defer resp.Body.Close()
f, _ := os.OpenFile("go1.14.2.src.tar.gz", os.O_CREATE|os.O_WRONLY, 0644)
defer f.Close()
bar := progressbar.DefaultBytes(
resp.ContentLength,
"downloading",
)
io.Copy(io.MultiWriter(f, bar), resp.Body)
```
which looks like:
![Example of download bar](examples/download/download.gif)
### Progress bar with unknown length
A progress bar with unknown length is a spinner: any bar created with a length of -1 is automatically rendered as a spinner, with a customizable spinner type. For example, the download code above can be run with `resp.ContentLength` set to `-1`.
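A minimal sketch of that case, reusing `f` and `resp` from the I/O example above:
```golang
// Passing -1 as the length renders a spinner instead of a bar.
bar := progressbar.DefaultBytes(
	-1,
	"downloading",
)
io.Copy(io.MultiWriter(f, bar), resp.Body)
```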
which looks like:
![Example of download bar with unknown length](examples/download-unknown/download-unknown.gif)
### Customization
There is a lot of customization that you can do - change the writer, the color, the width, description, theme, etc. See [all the options](https://pkg.go.dev/github.com/schollz/progressbar/v3?tab=doc#Option).
```golang
bar := progressbar.NewOptions(1000,
progressbar.OptionSetWriter(ansi.NewAnsiStdout()),
progressbar.OptionEnableColorCodes(true),
progressbar.OptionShowBytes(true),
progressbar.OptionSetWidth(15),
progressbar.OptionSetDescription("[cyan][1/3][reset] Writing moshable file..."),
progressbar.OptionSetTheme(progressbar.Theme{
Saucer: "[green]=[reset]",
SaucerHead: "[green]>[reset]",
SaucerPadding: " ",
BarStart: "[",
BarEnd: "]",
}))
for i := 0; i < 1000; i++ {
bar.Add(1)
time.Sleep(5 * time.Millisecond)
}
```
which looks like:
![Example of customized bar](examples/customization/customization.gif)
## Contributing
Pull requests are welcome. Feel free to...
- Revise documentation
- Add new features
- Fix bugs
- Suggest improvements
## Thanks
Thanks [@Dynom](https://github.com/dynom) for massive improvements in version 2.0!
Thanks [@CrushedPixel](https://github.com/CrushedPixel) for adding descriptions and color code support!
Thanks [@MrMe42](https://github.com/MrMe42) for adding some minor features!
Thanks [@tehstun](https://github.com/tehstun) for some great PRs!
Thanks [@Benzammour](https://github.com/Benzammour) and [@haseth](https://github.com/haseth) for helping create v3!
Thanks [@briandowns](https://github.com/briandowns) for compiling the list of spinners.
## License
MIT

progressbar/progressbar.go (Normal file, 1098 lines)

File diff suppressed because it is too large

progressbar/spinners.go (Normal file, 80 lines)

@@ -0,0 +1,80 @@
package progressbar
var spinners = map[int][]string{
0: {"←", "↖", "↑", "↗", "→", "↘", "↓", "↙"},
1: {"▁", "▃", "▄", "▅", "▆", "▇", "█", "▇", "▆", "▅", "▄", "▃", "▁"},
2: {"▖", "▘", "▝", "▗"},
3: {"┤", "┘", "┴", "└", "├", "┌", "┬", "┐"},
4: {"◢", "◣", "◤", "◥"},
5: {"◰", "◳", "◲", "◱"},
6: {"◴", "◷", "◶", "◵"},
7: {"◐", "◓", "◑", "◒"},
8: {".", "o", "O", "@", "*"},
9: {"|", "/", "-", "\\"},
10: {"◡◡", "⊙⊙", "◠◠"},
11: {"⣾", "⣽", "⣻", "⢿", "⡿", "⣟", "⣯", "⣷"},
12: {">))'>", " >))'>", " >))'>", " >))'>", " >))'>", " <'((<", " <'((<", " <'((<"},
13: {"⠁", "⠂", "⠄", "⡀", "⢀", "⠠", "⠐", "⠈"},
14: {"⠋", "⠙", "⠹", "⠸", "⠼", "⠴", "⠦", "⠧", "⠇", "⠏"},
15: {"a", "b", "c", "d", "e", "f", "g", "h", "i", "j", "k", "l", "m", "n", "o", "p", "q", "r", "s", "t", "u", "v", "w", "x", "y", "z"},
16: {"▉", "▊", "▋", "▌", "▍", "▎", "▏", "▎", "▍", "▌", "▋", "▊", "▉"},
17: {"■", "□", "▪", "▫"},
18: {"←", "↑", "→", "↓"},
19: {"╫", "╪"},
20: {"⇐", "⇖", "⇑", "⇗", "⇒", "⇘", "⇓", "⇙"},
21: {"⠁", "⠁", "⠉", "⠙", "⠚", "⠒", "⠂", "⠂", "⠒", "⠲", "⠴", "⠤", "⠄", "⠄", "⠤", "⠠", "⠠", "⠤", "⠦", "⠖", "⠒", "⠐", "⠐", "⠒", "⠓", "⠋", "⠉", "⠈", "⠈"},
22: {"⠈", "⠉", "⠋", "⠓", "⠒", "⠐", "⠐", "⠒", "⠖", "⠦", "⠤", "⠠", "⠠", "⠤", "⠦", "⠖", "⠒", "⠐", "⠐", "⠒", "⠓", "⠋", "⠉", "⠈"},
23: {"⠁", "⠉", "⠙", "⠚", "⠒", "⠂", "⠂", "⠒", "⠲", "⠴", "⠤", "⠄", "⠄", "⠤", "⠴", "⠲", "⠒", "⠂", "⠂", "⠒", "⠚", "⠙", "⠉", "⠁"},
24: {"⠋", "⠙", "⠚", "⠒", "⠂", "⠂", "⠒", "⠲", "⠴", "⠦", "⠖", "⠒", "⠐", "⠐", "⠒", "⠓", "⠋"},
25: {"ヲ", "ァ", "ィ", "ゥ", "ェ", "ォ", "ャ", "ュ", "ョ", "ッ", "ア", "イ", "ウ", "エ", "オ", "カ", "キ", "ク", "ケ", "コ", "サ", "シ", "ス", "セ", "ソ", "タ", "チ", "ツ", "テ", "ト", "ナ", "ニ", "ヌ", "ネ", "ノ", "ハ", "ヒ", "フ", "ヘ", "ホ", "マ", "ミ", "ム", "メ", "モ", "ヤ", "ユ", "ヨ", "ラ", "リ", "ル", "レ", "ロ", "ワ", "ン"},
26: {".", "..", "..."},
27: {"▁", "▂", "▃", "▄", "▅", "▆", "▇", "█", "▉", "▊", "▋", "▌", "▍", "▎", "▏", "▏", "▎", "▍", "▌", "▋", "▊", "▉", "█", "▇", "▆", "▅", "▄", "▃", "▂", "▁"},
28: {".", "o", "O", "°", "O", "o", "."},
29: {"+", "x"},
30: {"v", "<", "^", ">"},
31: {">>--->", " >>--->", " >>--->", " >>--->", " >>--->", " <---<<", " <---<<", " <---<<", " <---<<", "<---<<"},
32: {"|", "||", "|||", "||||", "|||||", "|||||||", "||||||||", "|||||||", "||||||", "|||||", "||||", "|||", "||", "|"},
33: {"[ ]", "[= ]", "[== ]", "[=== ]", "[==== ]", "[===== ]", "[====== ]", "[======= ]", "[======== ]", "[========= ]", "[==========]"},
34: {"(*---------)", "(-*--------)", "(--*-------)", "(---*------)", "(----*-----)", "(-----*----)", "(------*---)", "(-------*--)", "(--------*-)", "(---------*)"},
35: {"█▒▒▒▒▒▒▒▒▒", "███▒▒▒▒▒▒▒", "█████▒▒▒▒▒", "███████▒▒▒", "██████████"},
36: {"[ ]", "[=> ]", "[===> ]", "[=====> ]", "[======> ]", "[========> ]", "[==========> ]", "[============> ]", "[==============> ]", "[================> ]", "[==================> ]", "[===================>]"},
37: {"", ""},
38: {"▌", "▀", "▐▄"},
39: {"🌍", "🌎", "🌏"},
40: {"◜", "◝", "◞", "◟"},
41: {"⬒", "⬔", "⬓", "⬕"},
42: {"⬖", "⬘", "⬗", "⬙"},
43: {"[>>> >]", "[]>>>> []", "[] >>>> []", "[] >>>> []", "[] >>>> []", "[] >>>>[]", "[>> >>]"},
44: {"♠", "♣", "♥", "♦"},
45: {"➞", "➟", "➠", "➡", "➠", "➟"},
46: {" | ", ` \ `, "_ ", ` \ `, " | ", " / ", " _", " / "},
47: {" . . . .", ". . . .", ". . . .", ". . . .", ". . . . ", ". . . . ."},
48: {" | ", " / ", " _ ", ` \ `, " | ", ` \ `, " _ ", " / "},
49: {"⎺", "⎻", "⎼", "⎽", "⎼", "⎻"},
50: {"▹▹▹▹▹", "▸▹▹▹▹", "▹▸▹▹▹", "▹▹▸▹▹", "▹▹▹▸▹", "▹▹▹▹▸"},
51: {"[ ]", "[ =]", "[ ==]", "[ ===]", "[====]", "[=== ]", "[== ]", "[= ]"},
52: {"( ● )", "( ● )", "( ● )", "( ● )", "( ●)", "( ● )", "( ● )", "( ● )", "( ● )"},
53: {"✶", "✸", "✹", "✺", "✹", "✷"},
54: {"▐|\\____________▌", "▐_|\\___________▌", "▐__|\\__________▌", "▐___|\\_________▌", "▐____|\\________▌", "▐_____|\\_______▌", "▐______|\\______▌", "▐_______|\\_____▌", "▐________|\\____▌", "▐_________|\\___▌", "▐__________|\\__▌", "▐___________|\\_▌", "▐____________|\\▌", "▐____________/|▌", "▐___________/|_▌", "▐__________/|__▌", "▐_________/|___▌", "▐________/|____▌", "▐_______/|_____▌", "▐______/|______▌", "▐_____/|_______▌", "▐____/|________▌", "▐___/|_________▌", "▐__/|__________▌", "▐_/|___________▌", "▐/|____________▌"},
55: {"▐⠂ ▌", "▐⠈ ▌", "▐ ⠂ ▌", "▐ ⠠ ▌", "▐ ⡀ ▌", "▐ ⠠ ▌", "▐ ⠂ ▌", "▐ ⠈ ▌", "▐ ⠂ ▌", "▐ ⠠ ▌", "▐ ⡀ ▌", "▐ ⠠ ▌", "▐ ⠂ ▌", "▐ ⠈ ▌", "▐ ⠂▌", "▐ ⠠▌", "▐ ⡀▌", "▐ ⠠ ▌", "▐ ⠂ ▌", "▐ ⠈ ▌", "▐ ⠂ ▌", "▐ ⠠ ▌", "▐ ⡀ ▌", "▐ ⠠ ▌", "▐ ⠂ ▌", "▐ ⠈ ▌", "▐ ⠂ ▌", "▐ ⠠ ▌", "▐ ⡀ ▌", "▐⠠ ▌"},
56: {"¿", "?"},
57: {"⢹", "⢺", "⢼", "⣸", "⣇", "⡧", "⡗", "⡏"},
58: {"⢄", "⢂", "⢁", "⡁", "⡈", "⡐", "⡠"},
59: {". ", ".. ", "...", " ..", " .", " "},
60: {".", "o", "O", "°", "O", "o", "."},
61: {"▓", "▒", "░"},
62: {"▌", "▀", "▐", "▄"},
63: {"⊶", "⊷"},
64: {"▪", "▫"},
65: {"□", "■"},
66: {"▮", "▯"},
67: {"-", "=", "≡"},
68: {"d", "q", "p", "b"},
69: {"∙∙∙", "●∙∙", "∙●∙", "∙∙●", "∙∙∙"},
70: {"🌑 ", "🌒 ", "🌓 ", "🌔 ", "🌕 ", "🌖 ", "🌗 ", "🌘 "},
71: {"☗", "☖"},
72: {"⧇", "⧆"},
73: {"◉", "◎"},
74: {"㊂", "㊀", "㊁"},
75: {"⦾", "⦿"},
}


@@ -3,7 +3,6 @@ package server
import (
"bytes"
"crypto/sha256"
"encoding/hex"
"encoding/json"
"errors"
"fmt"
@@ -17,6 +16,7 @@ import (
"reflect"
"strconv"
"strings"
"text/template"
"github.com/jmorganca/ollama/api"
"github.com/jmorganca/ollama/parser"
@@ -25,10 +25,39 @@ import (
type Model struct {
Name string `json:"name"`
ModelPath string
Prompt string
Template string
System string
Options api.Options
}
func (m *Model) Prompt(request api.GenerateRequest) (string, error) {
tmpl, err := template.New("").Parse(m.Template)
if err != nil {
return "", err
}
var vars struct {
First bool
System string
Prompt string
// deprecated: versions <= 0.0.7 used this to omit the system prompt
Context []int
}
vars.First = len(request.Context) == 0
vars.System = m.System
vars.Prompt = request.Prompt
vars.Context = request.Context
var sb strings.Builder
if err := tmpl.Execute(&sb, vars); err != nil {
return "", err
}
return sb.String(), nil
}
type ManifestV2 struct {
SchemaVersion int `json:"schemaVersion"`
MediaType string `json:"mediaType"`
@@ -42,10 +71,9 @@ type Layer struct {
Size int `json:"size"`
}
type LayerWithBuffer struct {
type LayerReader struct {
Layer
Buffer *bytes.Buffer
io.Reader
}
type ConfigV2 struct {
@@ -73,20 +101,19 @@ func GetManifest(mp ModelPath) (*ManifestV2, error) {
if err != nil {
return nil, err
}
if _, err = os.Stat(fp); err != nil && !errors.Is(err, os.ErrNotExist) {
return nil, fmt.Errorf("couldn't find model '%s'", mp.GetShortTagname())
}
var manifest *ManifestV2
f, err := os.Open(fp)
bts, err := os.ReadFile(fp)
if err != nil {
return nil, fmt.Errorf("couldn't open file '%s'", fp)
}
decoder := json.NewDecoder(f)
err = decoder.Decode(&manifest)
if err != nil {
if err := json.Unmarshal(bts, &manifest); err != nil {
return nil, err
}
@@ -114,12 +141,27 @@ func GetModel(name string) (*Model, error) {
switch layer.MediaType {
case "application/vnd.ollama.image.model":
model.ModelPath = filename
case "application/vnd.ollama.image.prompt":
data, err := os.ReadFile(filename)
case "application/vnd.ollama.image.template":
bts, err := os.ReadFile(filename)
if err != nil {
return nil, err
}
model.Prompt = string(data)
model.Template = string(bts)
case "application/vnd.ollama.image.system":
bts, err := os.ReadFile(filename)
if err != nil {
return nil, err
}
model.System = string(bts)
case "application/vnd.ollama.image.prompt":
bts, err := os.ReadFile(filename)
if err != nil {
return nil, err
}
model.Template = string(bts)
case "application/vnd.ollama.image.params":
params, err := os.Open(filename)
if err != nil {
@@ -139,21 +181,14 @@ func GetModel(name string) (*Model, error) {
return model, nil
}
func getAbsPath(fp string) (string, error) {
if strings.HasPrefix(fp, "~/") {
parts := strings.Split(fp, "/")
home, err := os.UserHomeDir()
if err != nil {
return "", err
}
fp = filepath.Join(home, filepath.Join(parts[1:]...))
func CreateModel(name string, path string, fn func(status string)) error {
mf, err := os.Open(path)
if err != nil {
fn(fmt.Sprintf("couldn't open modelfile '%s'", path))
return fmt.Errorf("failed to open file: %w", err)
}
defer mf.Close()
return os.ExpandEnv(fp), nil
}
func CreateModel(name string, mf io.Reader, fn func(status string)) error {
fn("parsing modelfile")
commands, err := parser.Parse(mf)
if err != nil {
@@ -161,27 +196,38 @@ func CreateModel(name string, mf io.Reader, fn func(status string)) error {
return err
}
var layers []*LayerWithBuffer
var layers []*LayerReader
params := make(map[string]string)
for _, c := range commands {
log.Printf("[%s] - %s\n", c.Name, c.Arg)
log.Printf("[%s] - %s\n", c.Name, c.Args)
switch c.Name {
case "model":
fn("looking for model")
mf, err := GetManifest(ParseModelPath(c.Arg))
mf, err := GetManifest(ParseModelPath(c.Args))
if err != nil {
// if we couldn't read the manifest, try getting the bin file
fp, err := getAbsPath(c.Arg)
if err != nil {
fn("error determing path. exiting.")
return err
fp := c.Args
// If filePath starts with ~/, replace it with the user's home directory.
if strings.HasPrefix(fp, "~/") {
parts := strings.Split(fp, "/")
home, err := os.UserHomeDir()
if err != nil {
return fmt.Errorf("failed to open file: %v", err)
}
fp = filepath.Join(home, filepath.Join(parts[1:]...))
}
// If filePath is not an absolute path, make it relative to the modelfile path
if !filepath.IsAbs(fp) {
fp = filepath.Join(filepath.Dir(path), fp)
}
fn("creating model layer")
file, err := os.Open(fp)
if err != nil {
fn(fmt.Sprintf("couldn't find model '%s'", c.Arg))
fn(fmt.Sprintf("couldn't find model '%s'", c.Args))
return fmt.Errorf("failed to open file: %v", err)
}
defer file.Close()
@@ -204,21 +250,21 @@ func CreateModel(name string, mf io.Reader, fn func(status string)) error {
layers = append(layers, newLayer)
}
}
case "prompt":
fn("creating prompt layer")
case "license", "template", "system":
fn(fmt.Sprintf("creating %s layer", c.Name))
// remove the prompt layer if one exists
layers = removeLayerFromLayers(layers, "application/vnd.ollama.image.prompt")
mediaType := fmt.Sprintf("application/vnd.ollama.image.%s", c.Name)
layers = removeLayerFromLayers(layers, mediaType)
prompt := strings.NewReader(c.Arg)
l, err := CreateLayer(prompt)
layer, err := CreateLayer(strings.NewReader(c.Args))
if err != nil {
fn(fmt.Sprintf("couldn't create prompt layer: %v", err))
return fmt.Errorf("failed to create layer: %v", err)
return err
}
l.MediaType = "application/vnd.ollama.image.prompt"
layers = append(layers, l)
layer.MediaType = mediaType
layers = append(layers, layer)
default:
params[c.Name] = c.Arg
params[c.Name] = c.Args
}
}
@@ -274,7 +320,7 @@ func CreateModel(name string, mf io.Reader, fn func(status string)) error {
return nil
}
func removeLayerFromLayers(layers []*LayerWithBuffer, mediaType string) []*LayerWithBuffer {
func removeLayerFromLayers(layers []*LayerReader, mediaType string) []*LayerReader {
j := 0
for _, l := range layers {
if l.MediaType != mediaType {
@@ -285,7 +331,7 @@ func removeLayerFromLayers(layers []*LayerWithBuffer, mediaType string) []*LayerWithBuffer {
return layers[:j]
}
func SaveLayers(layers []*LayerWithBuffer, fn func(status string), force bool) error {
func SaveLayers(layers []*LayerReader, fn func(status string), force bool) error {
// Write each of the layers to disk
for _, layer := range layers {
fp, err := GetBlobsPath(layer.Digest)
@@ -303,10 +349,10 @@ func SaveLayers(layers []*LayerWithBuffer, fn func(status string), force bool) error {
}
defer out.Close()
_, err = io.Copy(out, layer.Buffer)
if err != nil {
if _, err = io.Copy(out, layer.Reader); err != nil {
return err
}
} else {
fn(fmt.Sprintf("using already created layer %s", layer.Digest))
}
@@ -315,7 +361,7 @@ func SaveLayers(layers []*LayerWithBuffer, fn func(status string), force bool) error {
return nil
}
func CreateManifest(name string, cfg *LayerWithBuffer, layers []*Layer) error {
func CreateManifest(name string, cfg *LayerReader, layers []*Layer) error {
mp := ParseModelPath(name)
manifest := ManifestV2{
@@ -341,7 +387,7 @@ func CreateManifest(name string, cfg *LayerWithBuffer, layers []*Layer) error {
return os.WriteFile(fp, manifestJSON, 0o644)
}
func GetLayerWithBufferFromLayer(layer *Layer) (*LayerWithBuffer, error) {
func GetLayerWithBufferFromLayer(layer *Layer) (*LayerReader, error) {
fp, err := GetBlobsPath(layer.Digest)
if err != nil {
return nil, err
@@ -361,7 +407,7 @@ func GetLayerWithBufferFromLayer(layer *Layer) (*LayerWithBuffer, error) {
return newLayer, nil
}
func paramsToReader(params map[string]string) (io.Reader, error) {
func paramsToReader(params map[string]string) (io.ReadSeeker, error) {
opts := api.DefaultOptions()
typeOpts := reflect.TypeOf(opts)
@@ -419,7 +465,7 @@ func paramsToReader(params map[string]string) (io.Reader, error) {
return bytes.NewReader(bts), nil
}
func getLayerDigests(layers []*LayerWithBuffer) ([]string, error) {
func getLayerDigests(layers []*LayerReader) ([]string, error) {
var digests []string
for _, l := range layers {
if l.Digest == "" {
@@ -431,34 +477,30 @@ func getLayerDigests(layers []*LayerWithBuffer) ([]string, error) {
}
// CreateLayer creates a Layer object from a given file
func CreateLayer(f io.Reader) (*LayerWithBuffer, error) {
buf := new(bytes.Buffer)
_, err := io.Copy(buf, f)
if err != nil {
return nil, err
}
func CreateLayer(f io.ReadSeeker) (*LayerReader, error) {
digest, size := GetSHA256Digest(f)
f.Seek(0, 0)
digest, size := GetSHA256Digest(buf)
layer := &LayerWithBuffer{
layer := &LayerReader{
Layer: Layer{
MediaType: "application/vnd.docker.image.rootfs.diff.tar",
Digest: digest,
Size: size,
},
Buffer: buf,
Reader: f,
}
return layer, nil
}
func PushModel(name, username, password string, fn func(status, digest string, Total, Completed int, Percent float64)) error {
func PushModel(name, username, password string, fn func(api.ProgressResponse)) error {
mp := ParseModelPath(name)
fn("retrieving manifest", "", 0, 0, 0)
fn(api.ProgressResponse{Status: "retrieving manifest"})
manifest, err := GetManifest(mp)
if err != nil {
fn("couldn't retrieve manifest", "", 0, 0, 0)
fn(api.ProgressResponse{Status: "couldn't retrieve manifest"})
return err
}
@@ -480,11 +522,21 @@ func PushModel(name, username, password string, fn func(status, digest string, Total, Completed int, Percent float64)) error {
if exists {
completed += layer.Size
fn("using existing layer", layer.Digest, total, completed, float64(completed)/float64(total))
fn(api.ProgressResponse{
Status: "using existing layer",
Digest: layer.Digest,
Total: total,
Completed: completed,
})
continue
}
fn("starting upload", layer.Digest, total, completed, float64(completed)/float64(total))
fn(api.ProgressResponse{
Status: "starting upload",
Digest: layer.Digest,
Total: total,
Completed: completed,
})
location, err := startUpload(mp, username, password)
if err != nil {
@@ -498,10 +550,19 @@ func PushModel(name, username, password string, fn func(status, digest string, Total, Completed int, Percent float64)) error {
return err
}
completed += layer.Size
fn("upload complete", layer.Digest, total, completed, float64(completed)/float64(total))
fn(api.ProgressResponse{
Status: "upload complete",
Digest: layer.Digest,
Total: total,
Completed: completed,
})
}
fn("pushing manifest", "", total, completed, float64(completed/total))
fn(api.ProgressResponse{
Status: "pushing manifest",
Total: total,
Completed: completed,
})
url := fmt.Sprintf("%s://%s/v2/%s/manifests/%s", mp.ProtocolScheme, mp.Registry, mp.GetNamespaceRepository(), mp.Tag)
headers := map[string]string{
"Content-Type": "application/vnd.docker.distribution.manifest.v2+json",
@@ -524,15 +585,19 @@ func PushModel(name, username, password string, fn func(status, digest string, Total, Completed int, Percent float64)) error {
return fmt.Errorf("registry responded with code %d: %v", resp.StatusCode, string(body))
}
fn("success", "", total, completed, 1.0)
fn(api.ProgressResponse{
Status: "success",
Total: total,
Completed: completed,
})
return nil
}
func PullModel(name, username, password string, fn func(status, digest string, Total, Completed int, Percent float64)) error {
func PullModel(name, username, password string, fn func(api.ProgressResponse)) error {
mp := ParseModelPath(name)
fn("pulling manifest", "", 0, 0, 0)
fn(api.ProgressResponse{Status: "pulling manifest"})
manifest, err := pullModelManifest(mp, username, password)
if err != nil {
@@ -550,16 +615,15 @@ func PullModel(name, username, password string, fn func(status, digest string, Total, Completed int, Percent float64)) error {
total += manifest.Config.Size
for _, layer := range layers {
fn("starting download", layer.Digest, total, completed, float64(completed)/float64(total))
if err := downloadBlob(mp, layer.Digest, username, password, fn); err != nil {
fn(fmt.Sprintf("error downloading: %v", err), layer.Digest, 0, 0, 0)
fn(api.ProgressResponse{Status: fmt.Sprintf("error downloading: %v", err), Digest: layer.Digest})
return err
}
completed += layer.Size
fn("download complete", layer.Digest, total, completed, float64(completed)/float64(total))
}
fn("writing manifest", "", total, completed, 1.0)
fn(api.ProgressResponse{Status: "writing manifest"})
manifestJSON, err := json.Marshal(manifest)
if err != nil {
@@ -577,7 +641,7 @@ func PullModel(name, username, password string, fn func(status, digest string, Total, Completed int, Percent float64)) error {
return err
}
fn("success", "", total, completed, 1.0)
fn(api.ProgressResponse{Status: "success"})
return nil
}
@@ -609,7 +673,7 @@ func pullModelManifest(mp ModelPath, username, password string) (*ManifestV2, error) {
return m, err
}
func createConfigLayer(layers []string) (*LayerWithBuffer, error) {
func createConfigLayer(layers []string) (*LayerReader, error) {
// TODO change architecture and OS
config := ConfigV2{
Architecture: "arm64",
@@ -625,25 +689,28 @@ func createConfigLayer(layers []string) (*LayerWithBuffer, error) {
return nil, err
}
buf := bytes.NewBuffer(configJSON)
digest, size := GetSHA256Digest(buf)
digest, size := GetSHA256Digest(bytes.NewBuffer(configJSON))
layer := &LayerWithBuffer{
layer := &LayerReader{
Layer: Layer{
MediaType: "application/vnd.docker.container.image.v1+json",
Digest: digest,
Size: size,
},
Buffer: buf,
Reader: bytes.NewBuffer(configJSON),
}
return layer, nil
}
// GetSHA256Digest returns the SHA256 hash of a given buffer and returns it, and the size of buffer
func GetSHA256Digest(data *bytes.Buffer) (string, int) {
layerBytes := data.Bytes()
hash := sha256.Sum256(layerBytes)
return "sha256:" + hex.EncodeToString(hash[:]), len(layerBytes)
func GetSHA256Digest(r io.Reader) (string, int) {
h := sha256.New()
n, err := io.Copy(h, r)
if err != nil {
log.Fatal(err)
}
return fmt.Sprintf("sha256:%x", h.Sum(nil)), int(n)
}
func startUpload(mp ModelPath, username string, password string) (string, error) {
@@ -725,16 +792,20 @@ func uploadBlob(location string, layer *Layer, username string, password string)
return nil
}
func downloadBlob(mp ModelPath, digest string, username, password string, fn func(status, digest string, Total, Completed int, Percent float64)) error {
func downloadBlob(mp ModelPath, digest string, username, password string, fn func(api.ProgressResponse)) error {
fp, err := GetBlobsPath(digest)
if err != nil {
return err
}
_, err = os.Stat(fp)
if !os.IsNotExist(err) {
if fi, _ := os.Stat(fp); fi != nil {
// we already have the file, so return
log.Printf("already have %s\n", digest)
fn(api.ProgressResponse{
Digest: digest,
Total: int(fi.Size()),
Completed: int(fi.Size()),
})
return nil
}
@@ -783,10 +854,21 @@ func downloadBlob(mp ModelPath, digest string, username, password string, fn func(status, digest string, Total, Completed int, Percent float64)) error {
total := remaining + completed
for {
fn(fmt.Sprintf("Downloading %s", digest), digest, int(total), int(completed), float64(completed)/float64(total))
fn(api.ProgressResponse{
Status: fmt.Sprintf("downloading %s", digest),
Digest: digest,
Total: int(total),
Completed: int(completed),
})
if completed >= total {
if err := os.Rename(fp+"-partial", fp); err != nil {
fn(fmt.Sprintf("error renaming file: %v", err), digest, int(total), int(completed), 1)
fn(api.ProgressResponse{
Status: fmt.Sprintf("error renaming file: %v", err),
Digest: digest,
Total: int(total),
Completed: int(completed),
})
return err
}


@@ -9,7 +9,6 @@ import (
"os"
"path/filepath"
"strings"
"text/template"
"time"
"dario.cat/mergo"
@@ -54,19 +53,12 @@ func generate(c *gin.Context) {
return
}
templ, err := template.New("").Parse(model.Prompt)
prompt, err := model.Prompt(req)
if err != nil {
c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
return
}
var sb strings.Builder
if err = templ.Execute(&sb, req); err != nil {
c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
return
}
req.Prompt = sb.String()
llm, err := llama.New(model.ModelPath, opts)
if err != nil {
c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
@@ -77,7 +69,7 @@ func generate(c *gin.Context) {
ch := make(chan any)
go func() {
defer close(ch)
llm.Predict(req.Context, req.Prompt, func(r api.GenerateResponse) {
llm.Predict(req.Context, prompt, func(r api.GenerateResponse) {
r.Model = req.Model
r.CreatedAt = time.Now().UTC()
if r.Done {
@@ -101,15 +93,10 @@ func pull(c *gin.Context) {
ch := make(chan any)
go func() {
defer close(ch)
fn := func(status, digest string, total, completed int, percent float64) {
ch <- api.PullProgress{
Status: status,
Digest: digest,
Total: total,
Completed: completed,
Percent: percent,
}
fn := func(r api.ProgressResponse) {
ch <- r
}
if err := PullModel(req.Name, req.Username, req.Password, fn); err != nil {
c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
return
@@ -129,15 +116,10 @@ func push(c *gin.Context) {
ch := make(chan any)
go func() {
defer close(ch)
fn := func(status, digest string, total, completed int, percent float64) {
ch <- api.PushProgress{
Status: status,
Digest: digest,
Total: total,
Completed: completed,
Percent: percent,
}
fn := func(r api.ProgressResponse) {
ch <- r
}
if err := PushModel(req.Name, req.Username, req.Password, fn); err != nil {
c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
return
@@ -154,15 +136,6 @@ func create(c *gin.Context) {
return
}
// NOTE consider passing the entire Modelfile in the json instead of the path to it
file, err := os.Open(req.Path)
if err != nil {
c.JSON(http.StatusBadRequest, gin.H{"message": err.Error()})
return
}
defer file.Close()
ch := make(chan any)
go func() {
defer close(ch)
@@ -172,7 +145,7 @@ func create(c *gin.Context) {
}
}
if err := CreateModel(req.Name, file, fn); err != nil {
if err := CreateModel(req.Name, req.Path, fn); err != nil {
c.JSON(http.StatusBadRequest, gin.H{"message": err.Error()})
return
}
@@ -195,7 +168,8 @@ func list(c *gin.Context) {
if !info.IsDir() {
fi, err := os.Stat(path)
if err != nil {
return err
log.Printf("skipping file: %s", fp)
return nil
}
path := path[len(fp)+1:]
slashIndex := strings.LastIndex(path, "/")
@@ -206,8 +180,8 @@ func list(c *gin.Context) {
mp := ParseModelPath(tag)
manifest, err := GetManifest(mp)
if err != nil {
log.Printf("couldn't get manifest: %v", err)
return err
log.Printf("skipping file: %s", fp)
return nil
}
model := api.ListResponseModel{
Name: mp.GetShortTagname(),


@@ -1,10 +0,0 @@
{{- if not .Context }}
Below is an instruction that describes a task. Write a response that appropriately completes the request.
{{- end }}
### Instruction:
{{ .Prompt }}
### Response:


@@ -1,5 +0,0 @@
{{- if not .Context }}
A helpful assistant who helps the user with any questions asked.
{{- end }}
User: {{ .Prompt }}
Assistant:


@@ -1,5 +0,0 @@
### Instruction:
{{ .Prompt }}
### Response:


@@ -1,5 +0,0 @@
### Instruction:
{{ .Prompt }}
### Response:


@@ -1,6 +0,0 @@
{{- if not .Context }}
Below is an instruction that describes a task. Write a response that appropriately completes the request. Be concise. Once the request is completed, include no other text.
{{- end }}
### Instruction:
{{ .Prompt }}
### Response:


@@ -1 +0,0 @@
{{ .Prompt }}


@@ -1,9 +0,0 @@
{{- if not .Context }}
### System:
You are an AI assistant that follows instruction extremely well. Help as much as you can.
{{- end }}
### User:
{{ .Prompt }}
### Response:


@@ -1,2 +0,0 @@
### Human: {{ .Prompt }}
### Assistant:


@@ -1,4 +0,0 @@
{{ .Prompt }}


@@ -1,2 +0,0 @@
USER: {{ .Prompt }}
ASSISTANT:


@@ -1,6 +0,0 @@
{{ if not .Context }}
A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions.
{{- end }}
USER: {{ .Prompt }}
ASSISTANT:


@@ -1,7 +0,0 @@
{{- if not .Context }}
Below is an instruction that describes a task. Write a response that appropriately completes the request
{{- end }}
### Instruction: {{ .Prompt }}
### Response:


@@ -1,3 +0,0 @@
{{ .Prompt }}
### Response:


@@ -6,12 +6,22 @@ const analytics = new Analytics({ writeKey: process.env.TELEMETRY_WRITE_KEY || '
export async function POST(req: Request) {
const { email } = await req.json()
analytics.identify({
anonymousId: uuid(),
const id = uuid()
await analytics.identify({
anonymousId: id,
traits: {
email,
},
})
await analytics.track({
anonymousId: id,
event: 'signup',
properties: {
email,
},
})
return new Response(null, { status: 200 })
}


@@ -1,3 +1,6 @@
import Image from 'next/image'
import Header from '../header'
import Downloader from './downloader'
import Signup from './signup'
@@ -26,22 +29,19 @@ export default async function Download() {
}
return (
<main className='flex min-h-screen max-w-2xl flex-col p-4 lg:p-24 items-center mx-auto'>
<img src='/ollama.png' className='w-16 h-auto' />
<section className='my-12 text-center'>
<h2 className='my-2 max-w-md text-3xl tracking-tight'>Downloading Ollama</h2>
<h3 className='text-sm text-neutral-500'>
Problems downloading?{' '}
<a href={asset.browser_download_url} className='underline'>
Try again
</a>
</h3>
<Downloader url={asset.browser_download_url} />
</section>
<section className='max-w-sm flex flex-col w-full items-center border border-neutral-200 rounded-xl px-8 pt-8 pb-2'>
<p className='text-lg leading-tight text-center mb-6 max-w-[260px]'>Sign up for updates</p>
<>
<Header />
<main className='flex min-h-screen max-w-6xl flex-col py-20 px-16 lg:p-32 items-center mx-auto'>
<Image src='/ollama.png' width={64} height={64} alt='ollamaIcon' />
<section className='mt-12 mb-8 text-center'>
<h2 className='my-2 max-w-md text-3xl tracking-tight'>Downloading...</h2>
<h3 className='text-base text-neutral-500 mt-12 max-w-[16rem]'>
While Ollama downloads, sign up to get notified of new updates.
</h3>
<Downloader url={asset.browser_download_url} />
</section>
<Signup />
</section>
</main>
</main>
</>
)
}


@@ -28,7 +28,7 @@ export default function Signup() {
return false
}}
className='flex self-stretch flex-col gap-3 h-32'
className='flex self-stretch flex-col gap-3 h-32 md:mx-40 lg:mx-72'
>
<input
required
@@ -37,13 +37,13 @@ export default function Signup() {
onChange={e => setEmail(e.target.value)}
type='email'
placeholder='your@email.com'
className='bg-neutral-100 rounded-lg px-4 py-2 focus:outline-none placeholder-neutral-500'
className='border border-neutral-200 rounded-lg px-4 py-2 focus:outline-none placeholder-neutral-300'
/>
<input
type='submit'
value='Get updates'
disabled={submitting}
className='bg-black text-white disabled:text-neutral-200 disabled:bg-neutral-700 rounded-lg px-4 py-2 focus:outline-none cursor-pointer'
className='bg-black text-white disabled:text-neutral-200 disabled:bg-neutral-700 rounded-full px-4 py-2 focus:outline-none cursor-pointer'
/>
{success && <p className='text-center text-sm'>You&apos;re signed up for updates</p>}
</form>

web/app/header.tsx (Normal file, 26 lines)

@@ -0,0 +1,26 @@
import Link from "next/link"
const navigation = [
{ name: 'Discord', href: 'https://discord.gg/MrfB5FbNWN' },
{ name: 'Github', href: 'https://github.com/jmorganca/ollama' },
{ name: 'Download', href: '/download' },
]
export default function Header() {
return (
<header className="absolute inset-x-0 top-0 z-50">
<nav className="mx-auto flex items-center justify-between px-10 py-4">
<Link className="flex-1 font-bold" href="/">
Ollama
</Link>
<div className="flex space-x-8">
{navigation.map((item) => (
<Link key={item.name} href={item.href} className="text-sm leading-6 text-gray-900">
{item.name}
</Link>
))}
</div>
</nav>
</header >
)
}


@@ -1,34 +1,32 @@
import { AiFillApple } from 'react-icons/ai'
import Image from 'next/image'
import Link from 'next/link'
import models from '../../models.json'
import Header from './header'
export default async function Home() {
return (
<main className='flex min-h-screen max-w-2xl flex-col p-4 lg:p-24'>
<img src='/ollama.png' className='w-16 h-auto' />
<section className='my-4'>
<p className='my-3 max-w-md'>
<a className='underline' href='https://github.com/jmorganca/ollama'>
Ollama
</a>{' '}
is a tool for running large language models, currently for macOS with Windows and Linux coming soon.
<br />
<br />
<a href='/download'>
<button className='bg-black text-white text-sm py-2 px-3 rounded-lg flex items-center gap-2'>
<AiFillApple className='h-auto w-5 relative -top-px' /> Download for macOS
</button>
</a>
</p>
</section>
<section className='my-4'>
<h2 className='mb-4 text-lg'>Example models you can try running:</h2>
{models.map(m => (
<div className='my-2 grid font-mono' key={m.name}>
<code className='py-0.5'>ollama run {m.name}</code>
<>
<Header />
<main className='flex min-h-screen max-w-6xl flex-col py-20 px-16 md:p-32 items-center mx-auto'>
<Image src='/ollama.png' width={64} height={64} alt='ollamaIcon' />
<section className='my-12 text-center'>
<div className='flex flex-col space-y-2'>
<h2 className='md:max-w-[18rem] mx-auto my-2 text-3xl tracking-tight'>Portable large language models</h2>
<h3 className='md:max-w-xs mx-auto text-base text-neutral-500'>
Bundle a model&apos;s weights, configuration, prompts, data and more into self-contained packages that run anywhere.
</h3>
</div>
))}
</section>
</main>
<div className='mx-auto flex flex-col space-y-4 mt-12'>
<Link href='/download' className='md:mx-10 lg:mx-14 bg-black text-white rounded-full px-4 py-2 focus:outline-none cursor-pointer'>
Download
</Link>
<p className='text-neutral-500 text-sm '>
Available for macOS with Apple Silicon <br />
Windows & Linux support coming soon.
</p>
</div>
</section>
</main>
</>
)
}