Compare commits: v0.3.14...parth/samp (582 commits)
[582 commits in this range, 4450f871db … f86d00cd95]
```diff
@@ -3,9 +3,9 @@ ollama
 app
 macapp
 dist
-llm/llama.cpp
+build
 .env
 .cache
 test_data
-llm/build
-llama/build
+.git
```
.gitattributes (14 lines changed, vendored)

```diff
@@ -1,4 +1,3 @@
-llm/ext_server/* linguist-vendored
 llama/**/*.cpp linguist-vendored
 llama/**/*.hpp linguist-vendored
 llama/**/*.h linguist-vendored
@@ -8,5 +7,18 @@ llama/**/*.cuh linguist-vendored
 llama/**/*.m linguist-vendored
 llama/**/*.metal linguist-vendored
 
+ml/backend/**/*.c linguist-vendored
+ml/backend/**/*.h linguist-vendored
+ml/backend/**/*.cpp linguist-vendored
+ml/backend/**/*.hpp linguist-vendored
+ml/backend/**/*.cu linguist-vendored
+ml/backend/**/*.cuh linguist-vendored
+ml/backend/**/*.m linguist-vendored
+ml/backend/**/*.metal linguist-vendored
+ml/backend/**/CMakeLists.txt linguist-vendored
+
+llama/build-info.cpp linguist-generated
+ml/backend/ggml/ggml/src/ggml-metal/ggml-metal-embed.s linguist-generated
+
 * text=auto
 *.go text eol=lf
```
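A quick way to confirm these attributes take effect in a local clone is `git check-attr`; a minimal sketch (the example path is illustrative and assumes such a file exists in the tree):

```bash
# Ask git which attributes apply to a vendored ggml source file
git check-attr linguist-vendored linguist-generated -- ml/backend/ggml/ggml/src/ggml.c

# Expected output shape:
#   ml/backend/ggml/ggml/src/ggml.c: linguist-vendored: set
#   ml/backend/ggml/ggml/src/ggml.c: linguist-generated: unspecified
```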
.github/ISSUE_TEMPLATE/10_bug_report.yml (8 lines changed, vendored)

```diff
@@ -9,6 +9,14 @@ body:
       description: What happened? What did you expect to happen?
     validations:
       required: true
+  - type: textarea
+    id: logs
+    attributes:
+      label: Relevant log output
+      description: Please copy and paste any relevant log output. See [Troubleshooting Guide](https://github.com/ollama/ollama/blob/main/docs/troubleshooting.md#how-to-troubleshoot-issues) for details.
+      render: shell
+    validations:
+      required: false
   - type: dropdown
     id: os
     attributes:
```
.github/workflows/release.yaml (1042 lines changed, vendored; file diff suppressed because it is too large)
.github/workflows/test.yaml (453 lines changed, vendored)

```diff
@@ -21,10 +21,7 @@ jobs:
   changes:
     runs-on: ubuntu-latest
     outputs:
-      GENERATE: ${{ steps.changes.outputs.GENERATE }}
-      GENERATE_CUDA: ${{ steps.changes.outputs.GENERATE_CUDA }}
-      GENERATE_ROCM: ${{ steps.changes.outputs.GENERATE_ROCM }}
-      RUNNERS: ${{ steps.changes.outputs.RUNNERS }}
+      changed: ${{ steps.changes.outputs.changed }}
     steps:
       - uses: actions/checkout@v4
         with:
@@ -32,305 +29,213 @@ jobs:
       - id: changes
         run: |
           changed() {
-            git diff-tree -r --no-commit-id --name-only \
-              $(git merge-base ${{ github.event.pull_request.base.sha }} ${{ github.event.pull_request.head.sha }}) \
-              ${{ github.event.pull_request.head.sha }} \
+            local BASE=${{ github.event.pull_request.base.sha }}
+            local HEAD=${{ github.event.pull_request.head.sha }}
+            local MERGE_BASE=$(git merge-base $BASE $HEAD)
+            git diff-tree -r --no-commit-id --name-only "$MERGE_BASE" "$HEAD" \
               | xargs python3 -c "import sys; from pathlib import Path; print(any(Path(x).match(glob) for x in sys.argv[1:] for glob in '$*'.split(' ')))"
           }
 
-          {
-            echo GENERATE=$(changed 'llm/llama.cpp' 'llm/patches/**' 'llm/ext_server/**' 'llm/generate/**')
-            echo GENERATE_CUDA=$(changed 'llm/llama.cpp' 'llm/patches/**' 'llm/ext_server/**' 'llm/generate/**')
-            echo GENERATE_ROCM=$(changed 'llm/llama.cpp' 'llm/patches/**' 'llm/ext_server/**' 'llm/generate/**')
-            echo RUNNERS=$(changed 'llama/**')
-          } >>$GITHUB_OUTPUT
+          echo changed=$(changed 'llama/llama.cpp/**' 'ml/backend/ggml/ggml/**') | tee -a $GITHUB_OUTPUT
 
-  generate:
+  linux:
     needs: [changes]
-    if: ${{ needs.changes.outputs.GENERATE == 'True' }}
+    if: needs.changes.outputs.changed == 'True'
     strategy:
       matrix:
-        os: [ubuntu-latest, macos-latest, windows-2019]
-        arch: [amd64, arm64]
-        exclude:
-          - os: ubuntu-latest
-            arch: arm64
-          - os: windows-2019
-            arch: arm64
-    runs-on: ${{ matrix.os }}
-    env:
-      GOARCH: ${{ matrix.arch }}
-      CGO_ENABLED: '1'
+        include:
+          - preset: CPU
+          - preset: CUDA
+            container: nvidia/cuda:11.8.0-devel-ubuntu22.04
+            flags: '-DCMAKE_CUDA_ARCHITECTURES=87'
+          - preset: ROCm
+            container: rocm/dev-ubuntu-22.04:6.1.2
+            extra-packages: rocm-libs
+            flags: '-DAMDGPU_TARGETS=gfx1010 -DCMAKE_PREFIX_PATH=/opt/rocm'
+    runs-on: linux
+    container: ${{ matrix.container }}
     steps:
       - uses: actions/checkout@v4
-      - uses: actions/setup-go@v5
-        with:
-          go-version-file: go.mod
-          cache: true
-      - run: go get ./...
       - run: |
-          $gopath=(get-command go).source | split-path -parent
-          $gccpath=(get-command gcc).source | split-path -parent
-          & "C:\Program Files (x86)\Microsoft Visual Studio\2019\Enterprise\Common7\Tools\Launch-VsDevShell.ps1"
-          cd $env:GITHUB_WORKSPACE
-          $env:CMAKE_SYSTEM_VERSION="10.0.22621.0"
-          $env:PATH="$gopath;$gccpath;$env:PATH"
-          echo $env:PATH
-          go generate -x ./...
-        if: ${{ startsWith(matrix.os, 'windows-') }}
-        name: 'Windows Go Generate'
-      - run: go generate -x ./...
-        if: ${{ ! startsWith(matrix.os, 'windows-') }}
-        name: 'Unix Go Generate'
-      - run: go build .
-
-  generate-cuda:
-    needs: [changes]
-    if: ${{ needs.changes.outputs.GENERATE_CUDA == 'True' }}
-    strategy:
-      matrix:
-        cuda-version:
-          - '11.8.0'
-    runs-on: linux
-    container: nvidia/cuda:${{ matrix.cuda-version }}-devel-ubuntu20.04
-    steps:
-      - run: |
-          apt-get update && apt-get install -y git build-essential curl
-          curl -fsSL https://github.com/Kitware/CMake/releases/download/v3.28.1/cmake-3.28.1-linux-x86_64.tar.gz \
-            | tar -zx -C /usr --strip-components 1
+          [ -n "${{ matrix.container }}" ] || sudo=sudo
+          $sudo apt-get update
+          $sudo apt-get install -y cmake ccache ${{ matrix.extra-packages }}
         env:
           DEBIAN_FRONTEND: noninteractive
-      - uses: actions/checkout@v4
-      - uses: actions/setup-go@v4
+      - uses: actions/cache@v4
         with:
-          go-version-file: go.mod
-          cache: true
-      - run: go get ./...
+          path: /github/home/.cache/ccache
+          key: ccache-${{ runner.os }}-${{ runner.arch }}-${{ matrix.preset }}
       - run: |
-          git config --global --add safe.directory /__w/ollama/ollama
-          go generate -x ./...
-        env:
-          OLLAMA_SKIP_CPU_GENERATE: '1'
+          cmake --preset ${{ matrix.preset }} ${{ matrix.flags }}
+          cmake --build --preset ${{ matrix.preset }} --parallel
 
-  generate-rocm:
+  windows:
     needs: [changes]
-    if: ${{ needs.changes.outputs.GENERATE_ROCM == 'True' }}
+    if: needs.changes.outputs.changed == 'True'
     strategy:
       matrix:
-        rocm-version:
-          - '6.1.2'
-    runs-on: linux
-    container: rocm/dev-ubuntu-20.04:${{ matrix.rocm-version }}
-    steps:
-      - run: |
-          apt-get update && apt-get install -y git build-essential curl rocm-libs
-          curl -fsSL https://github.com/Kitware/CMake/releases/download/v3.28.1/cmake-3.28.1-linux-x86_64.tar.gz \
-            | tar -zx -C /usr --strip-components 1
-        env:
-          DEBIAN_FRONTEND: noninteractive
-      - uses: actions/checkout@v4
-      - uses: actions/setup-go@v4
-        with:
-          go-version-file: go.mod
-          cache: true
-      - run: go get ./...
-      - run: |
-          git config --global --add safe.directory /__w/ollama/ollama
-          go generate -x ./...
-        env:
-          OLLAMA_SKIP_CPU_GENERATE: '1'
-
-  # ROCm generation step
-  generate-windows-rocm:
-    needs: [changes]
-    if: ${{ needs.changes.outputs.GENERATE_ROCM == 'True' }}
+        include:
+          - preset: CPU
+          - preset: CUDA
+            install: https://developer.download.nvidia.com/compute/cuda/11.3.1/local_installers/cuda_11.3.1_465.89_win10.exe
+            flags: '-DCMAKE_CUDA_ARCHITECTURES=80'
+          - preset: ROCm
+            install: https://download.amd.com/developer/eula/rocm-hub/AMD-Software-PRO-Edition-24.Q4-WinSvr2022-For-HIP.exe
+            flags: '-DAMDGPU_TARGETS=gfx1010'
     runs-on: windows
     steps:
-      - uses: actions/checkout@v4
-      - uses: actions/setup-go@v5
+      - run: |
+          choco install -y --no-progress ccache ninja
+          ccache -o cache_dir=${{ github.workspace }}\.ccache
+      - if: matrix.preset == 'CUDA' || matrix.preset == 'ROCm'
+        id: cache-install
+        uses: actions/cache/restore@v4
         with:
-          go-version-file: go.mod
-          cache: true
-      - name: 'Install ROCm'
+          path: |
+            C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA
+            C:\Program Files\AMD\ROCm
+          key: ${{ matrix.install }}
+      - if: matrix.preset == 'CUDA'
+        name: Install CUDA ${{ matrix.cuda-version }}
        run: |
          $ErrorActionPreference = "Stop"
-          write-host "downloading AMD HIP Installer"
-          Invoke-WebRequest -Uri "https://download.amd.com/developer/eula/rocm-hub/AMD-Software-PRO-Edition-24.Q3-WinSvr2022-For-HIP.exe" -OutFile "${env:RUNNER_TEMP}\rocm-install.exe"
-          write-host "Installing AMD HIP"
-          Start-Process "${env:RUNNER_TEMP}\rocm-install.exe" -ArgumentList '-install' -NoNewWindow -Wait
-          write-host "Completed AMD HIP"
-      - name: 'Verify ROCm'
-        run: |
-          & 'C:\Program Files\AMD\ROCm\*\bin\clang.exe' --version
-      - run: go get ./...
-      - run: |
-          $gopath=(get-command go).source | split-path -parent
-          & "C:\Program Files (x86)\Microsoft Visual Studio\2019\Enterprise\Common7\Tools\Launch-VsDevShell.ps1"
-          cd $env:GITHUB_WORKSPACE
-          $env:CMAKE_SYSTEM_VERSION="10.0.22621.0"
-          $env:PATH="$gopath;$env:PATH"
-          $env:OLLAMA_SKIP_CPU_GENERATE="1"
-          $env:HIP_PATH=$(Resolve-Path 'C:\Program Files\AMD\ROCm\*\bin\clang.exe' | split-path | split-path)
-          go generate -x ./...
-        name: go generate
-        env:
-          OLLAMA_SKIP_CPU_GENERATE: '1'
-
-  # CUDA generation step
-  generate-windows-cuda:
-    needs: [changes]
-    if: ${{ needs.changes.outputs.GENERATE_CUDA == 'True' }}
-    runs-on: windows
-    steps:
-      - uses: actions/checkout@v4
-      - uses: actions/setup-go@v5
-        with:
-          go-version-file: go.mod
-          cache: true
-      - name: 'Install CUDA'
+          if ("${{ steps.cache-install.outputs.cache-hit }}" -ne 'true') {
+            Invoke-WebRequest -Uri "${{ matrix.install }}" -OutFile "install.exe"
+            Start-Process -FilePath .\install.exe -ArgumentList (@("-s", "cudart_11.3", "nvcc_11.3", "cublas_11.3", "cublas_dev_11.3")) -NoNewWindow -Wait
+          }
+          $cudaPath = (Resolve-Path "C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\*").path
+          echo "$cudaPath\bin" | Out-File -FilePath $env:GITHUB_PATH -Encoding utf8 -Append
+      - if: matrix.preset == 'ROCm'
+        name: Install ROCm ${{ matrix.rocm-version }}
        run: |
          $ErrorActionPreference = "Stop"
-          write-host "downloading CUDA Installer"
-          Invoke-WebRequest -Uri "https://developer.download.nvidia.com/compute/cuda/11.3.1/local_installers/cuda_11.3.1_465.89_win10.exe" -OutFile "${env:RUNNER_TEMP}\cuda-install.exe"
-          write-host "Installing CUDA"
-          Start-Process "${env:RUNNER_TEMP}\cuda-install.exe" -ArgumentList '-s' -NoNewWindow -Wait
-          write-host "Completed CUDA"
-          $cudaPath=((resolve-path "c:\Program Files\NVIDIA*\CUDA\v*\bin\nvcc.exe")[0].path | split-path | split-path)
-          $cudaVer=($cudaPath | split-path -leaf ) -replace 'v(\d+).(\d+)', '$1_$2'
-          echo "$cudaPath\bin" >> $env:GITHUB_PATH
-          echo "CUDA_PATH=$cudaPath" >> $env:GITHUB_ENV
-          echo "CUDA_PATH_V${cudaVer}=$cudaPath" >> $env:GITHUB_ENV
-          echo "CUDA_PATH_VX_Y=CUDA_PATH_V${cudaVer}" >> $env:GITHUB_ENV
-      - name: 'Verify CUDA'
-        run: nvcc -V
-      - run: go get ./...
-      - name: go generate
-        run: |
-          $gopath=(get-command go).source | split-path -parent
-          $cudabin=(get-command nvcc).source | split-path
-          & "C:\Program Files (x86)\Microsoft Visual Studio\2019\Enterprise\Common7\Tools\Launch-VsDevShell.ps1"
-          cd $env:GITHUB_WORKSPACE
-          $env:CMAKE_SYSTEM_VERSION="10.0.22621.0"
-          $env:PATH="$gopath;$cudabin;$env:PATH"
-          $env:OLLAMA_SKIP_CPU_GENERATE="1"
-          go generate -x ./...
+          if ("${{ steps.cache-install.outputs.cache-hit }}" -ne 'true') {
+            Invoke-WebRequest -Uri "${{ matrix.install }}" -OutFile "install.exe"
+            Start-Process -FilePath .\install.exe -ArgumentList '-install' -NoNewWindow -Wait
+          }
+          $hipPath = (Resolve-Path "C:\Program Files\AMD\ROCm\*").path
+          echo "$hipPath\bin" | Out-File -FilePath $env:GITHUB_PATH -Encoding utf8 -Append
+          echo "CC=$hipPath\bin\clang.exe" | Out-File -FilePath $env:GITHUB_ENV -Append
+          echo "CXX=$hipPath\bin\clang++.exe" | Out-File -FilePath $env:GITHUB_ENV -Append
+      - if: ${{ !cancelled() && steps.cache-install.outputs.cache-hit != 'true' }}
+        uses: actions/cache/save@v4
+        with:
+          path: |
+            C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA
+            C:\Program Files\AMD\ROCm
+          key: ${{ matrix.install }}
+      - uses: actions/checkout@v4
+      - uses: actions/cache@v4
+        with:
+          path: ${{ github.workspace }}\.ccache
+          key: ccache-${{ runner.os }}-${{ runner.arch }}-${{ matrix.preset }}
+      - run: |
+          Import-Module 'C:\Program Files (x86)\Microsoft Visual Studio\2019\Enterprise\Common7\Tools\Microsoft.VisualStudio.DevShell.dll'
+          Enter-VsDevShell -VsInstallPath 'C:\Program Files (x86)\Microsoft Visual Studio\2019\Enterprise' -SkipAutomaticLocation -DevCmdArguments '-arch=x64 -no_logo'
+          cmake --preset "${{ matrix.preset }}" ${{ matrix.flags }}
+          cmake --build --parallel --preset "${{ matrix.preset }}"
        env:
-          OLLAMA_SKIP_CPU_GENERATE: '1'
+          CMAKE_GENERATOR: Ninja
 
-  runners:
-    needs: [changes]
-    if: ${{ needs.changes.outputs.RUNNERS == 'True' }}
-    strategy:
-      matrix:
-        os: [ubuntu-latest, macos-latest, windows-2019]
-        arch: [amd64, arm64]
-        exclude:
-          - os: ubuntu-latest
-            arch: arm64
-          - os: windows-2019
-            arch: arm64
-    runs-on: ${{ matrix.os }}
-    env:
-      GOARCH: ${{ matrix.arch }}
-      ARCH: ${{ matrix.arch }}
-      CGO_ENABLED: '1'
-    steps:
-      - uses: actions/checkout@v4
-      - uses: actions/setup-go@v5
-        with:
-          go-version-file: go.mod
-          cache: true
-      - run: go get ./...
-      - name: 'Build Windows Go Runners'
-        if: ${{ startsWith(matrix.os, 'windows-') }}
-        run: |
-          $gopath=(get-command go).source | split-path -parent
-          $gccpath=(get-command gcc).source | split-path -parent
-          & "C:\Program Files (x86)\Microsoft Visual Studio\2019\Enterprise\Common7\Tools\Launch-VsDevShell.ps1"
-          cd $env:GITHUB_WORKSPACE
-          $env:CMAKE_SYSTEM_VERSION="10.0.22621.0"
-          $env:PATH="$gopath;$gccpath;$env:PATH"
-          echo $env:PATH
-          make -C llama -j 4
-      - name: 'Build Unix Go Runners'
-        if: ${{ ! startsWith(matrix.os, 'windows-') }}
-        run: make -C llama -j 4
-      - run: go build .
-
-  lint:
-    strategy:
-      matrix:
-        os: [ubuntu-latest, macos-latest, windows-2019]
-        arch: [amd64, arm64]
-        exclude:
-          - os: ubuntu-latest
-            arch: arm64
-          - os: windows-2019
-            arch: arm64
-          - os: macos-latest
-            arch: amd64
-    runs-on: ${{ matrix.os }}
-    env:
-      GOARCH: ${{ matrix.arch }}
-      CGO_ENABLED: '1'
-    steps:
-      - uses: actions/checkout@v4
-        with:
-          submodules: recursive
-      - uses: actions/setup-go@v5
-        with:
-          go-version-file: go.mod
-          cache: false
-      - run: |
-          case ${{ matrix.arch }} in
-            amd64) echo ARCH=x86_64 ;;
-            arm64) echo ARCH=arm64 ;;
-          esac >>$GITHUB_ENV
-        shell: bash
-      - uses: golangci/golangci-lint-action@v6
-        with:
-          args: --timeout 8m0s -v
-
-  test:
-    strategy:
-      matrix:
-        os: [ubuntu-latest, macos-latest, windows-2019]
-        arch: [amd64]
-        exclude:
-          - os: ubuntu-latest
-            arch: arm64
-          - os: windows-2019
-            arch: arm64
-    runs-on: ${{ matrix.os }}
-    env:
-      GOARCH: ${{ matrix.arch }}
-      CGO_ENABLED: '1'
-      OLLAMA_CPU_TARGET: 'static'
-      OLLAMA_SKIP_CPU_GENERATE: '1'
-      OLLAMA_SKIP_METAL_GENERATE: '1'
-    steps:
-      - uses: actions/checkout@v4
-        with:
-          submodules: recursive
-      - uses: actions/setup-go@v5
-        with:
-          go-version-file: go.mod
-          cache: true
-      - run: |
-          case ${{ matrix.arch }} in
-            amd64) echo ARCH=amd64 ;;
-            arm64) echo ARCH=arm64 ;;
-          esac >>$GITHUB_ENV
-        shell: bash
-      - run: go generate ./...
-      - run: go build
-      - run: go test -v ./...
-
-  patches:
-    needs: [changes]
-    if: ${{ needs.changes.outputs.RUNNERS == 'True' }}
+  go_mod_tidy:
     runs-on: ubuntu-latest
     steps:
       - uses: actions/checkout@v4
+      - name: check that 'go mod tidy' is clean
+        run: go mod tidy --diff || (echo "Please run 'go mod tidy'." && exit 1)
+
+  test:
+    strategy:
+      matrix:
+        os: [ubuntu-latest, macos-latest, windows-latest]
+    runs-on: ${{ matrix.os }}
+    env:
+      CGO_ENABLED: '1'
+      GOEXPERIMENT: 'synctest'
+    steps:
+      - name: checkout
+        uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # 4.2.2
+
+      - name: cache restore
+        uses: actions/cache/restore@1bd1e32a3bdc45362d1e726936510720a7c30a57 # v4.2.0
         with:
-          submodules: recursive
-      - name: Verify patches carry all the changes
+          # Note: unlike the other setups, this is only grabbing the mod download
+          # cache, rather than the whole mod directory, as the download cache
+          # contains zips that can be unpacked in parallel faster than they can be
+          # fetched and extracted by tar
+          path: |
+            ~/.cache/go-build
+            ~/go/pkg/mod/cache
+            ~\AppData\Local\go-build
+          # NOTE: The -3- here should be incremented when the scheme of data to be
+          # cached changes (e.g. path above changes).
+          key: ${{ github.job }}-${{ runner.os }}-${{ matrix.goarch }}-${{ matrix.buildflags }}-go-3-${{ hashFiles('**/go.sum') }}-${{ github.run_id }}
+          restore-keys: |
+            ${{ github.job }}-${{ runner.os }}-${{ matrix.goarch }}-${{ matrix.buildflags }}-go-3-${{ hashFiles('**/go.sum') }}
+            ${{ github.job }}-${{ runner.os }}-${{ matrix.goarch }}-${{ matrix.buildflags }}-go-3-
+
+      - name: Setup Go
+        uses: actions/setup-go@v5
+        with:
+          # The caching strategy of setup-go is less than ideal, and wastes
+          # time by not saving artifacts due to small failures like the linter
+          # complaining, etc. This means subsequent have to rebuild their world
+          # again until all checks pass. For instance, if you mispell a word,
+          # you're punished until you fix it. This is more hostile than
+          # helpful.
+          cache: false
+
+          go-version-file: go.mod
+
+      # It is tempting to run this in a platform independent way, but the past
+      # shows this codebase will see introductions of platform specific code
+      # generation, and so we need to check this per platform to ensure we
+      # don't abuse go generate on specific platforms.
+      - name: check that 'go generate' is clean
+        if: always()
        run: |
-          cd llama && ./sync.sh && git diff --compact-summary --exit-code .
+          go generate ./...
+          git diff --name-only --exit-code || (echo "Please run 'go generate ./...'." && exit 1)
+
+      - name: go test
+        if: always()
+        run: go test -count=1 -benchtime=1x ./...
+
+      # TODO(bmizerany): replace this heavy tool with just the
+      # tools/checks/binaries we want and then make them all run in parallel
+      # across jobs, not on a single tiny vm on Github Actions.
+      - uses: golangci/golangci-lint-action@v6
+        with:
+          args: --timeout 10m0s -v
+
+      - name: cache save
+        # Always save the cache, even if the job fails. The artifacts produced
+        # during the building of test binaries are not all for naught. They can
+        # be used to speed up subsequent runs.
+        if: always()
+
+        uses: actions/cache/save@1bd1e32a3bdc45362d1e726936510720a7c30a57 # v4.2.0
+        with:
+          # Note: unlike the other setups, this is only grabbing the mod download
+          # cache, rather than the whole mod directory, as the download cache
+          # contains zips that can be unpacked in parallel faster than they can be
+          # fetched and extracted by tar
+          path: |
+            ~/.cache/go-build
+            ~/go/pkg/mod/cache
+            ~\AppData\Local\go-build
+          # NOTE: The -3- here should be incremented when the scheme of data to be
+          # cached changes (e.g. path above changes).
+          key: ${{ github.job }}-${{ runner.os }}-${{ matrix.goarch }}-${{ matrix.buildflags }}-go-3-${{ hashFiles('**/go.sum') }}-${{ github.run_id }}
+
+  patches:
+    runs-on: ubuntu-latest
+    steps:
+      - uses: actions/checkout@v4
+      - name: Verify patches apply cleanly and do not change files
+        run: |
+          make -f Makefile.sync clean sync
+          git diff --compact-summary --exit-code
```
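The rewritten `changes` job collapses the old per-backend outputs into a single `changed` flag derived from which paths a pull request touches. A standalone sketch of the same path-filter idea, runnable outside CI, is below; it swaps the workflow's Python glob matcher for `grep`, and the base/head refs are placeholders:

```bash
#!/usr/bin/env bash
# Report whether any file changed between a base and head commit matches a pattern.
BASE=origin/main   # placeholder: the PR base SHA in CI
HEAD=HEAD          # placeholder: the PR head SHA in CI
MERGE_BASE=$(git merge-base "$BASE" "$HEAD")

changed() {
  git diff-tree -r --no-commit-id --name-only "$MERGE_BASE" "$HEAD" \
    | grep -E -q "$1" && echo True || echo False
}

# Mirrors: echo changed=$(changed 'llama/llama.cpp/**' 'ml/backend/ggml/ggml/**')
echo "changed=$(changed '^(llama/llama\.cpp/|ml/backend/ggml/ggml/)')"
```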
.gitignore (9 lines changed, vendored)

```diff
@@ -4,14 +4,13 @@
 .venv
 .swp
 dist
-ollama
+build
 .cache
 *.exe
 .idea
 test_data
 *.crt
-llm/build
-build/*/*/*
-!build/**/placeholder
+__debug_bin*
 llama/build
-__debug_bin*
+llama/vendor
+/ollama
```
.gitmodules (4 lines changed, vendored)

```diff
@@ -1,4 +0,0 @@
-[submodule "llama.cpp"]
-	path = llm/llama.cpp
-	url = https://github.com/ggerganov/llama.cpp.git
-	shallow = true
```
```diff
@@ -6,10 +6,6 @@ linters:
   - bidichk
   - bodyclose
   - containedctx
-  - contextcheck
-  - errcheck
-  - exportloopref
-  - gci
   - gocheckcompilerdirectives
   - gofmt
   - gofumpt
@@ -25,13 +21,12 @@ linters:
   - staticcheck
   - tenv
   - unconvert
-  - unused
-  - usestdlibvars
   - wastedassign
   - whitespace
+  disable:
+    - usestdlibvars
+    - errcheck
 linters-settings:
-  gci:
-    sections: [standard, default, localmodule]
   staticcheck:
     checks:
       - all
@@ -43,5 +38,4 @@ severity:
       - gofmt
       - goimports
       - intrange
-      - usestdlibvars
   severity: info
```
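These hunks read as the repository's golangci-lint configuration. For reference, a local run matching the arguments the CI job passes might look like this (the package path is illustrative):

```bash
# Run the full linter set with the CI timeout
golangci-lint run --timeout 10m0s -v

# Or lint a single package while iterating
golangci-lint run --timeout 10m0s ./server/...
```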
```diff
@@ -1,10 +0,0 @@
-{
-  "trailingComma": "es5",
-  "tabWidth": 2,
-  "useTabs": false,
-  "semi": false,
-  "singleQuote": true,
-  "jsxSingleQuote": true,
-  "printWidth": 120,
-  "arrowParens": "avoid"
-}
```
CMakeLists.txt (new file, 132 lines)

```cmake
cmake_minimum_required(VERSION 3.21)

project(Ollama C CXX)

include(CheckLanguage)

find_package(Threads REQUIRED)

set(CMAKE_BUILD_TYPE Release)
set(BUILD_SHARED_LIBS ON)

set(CMAKE_CXX_STANDARD 17)
set(CMAKE_CXX_STANDARD_REQUIRED ON)
set(CMAKE_CXX_EXTENSIONS OFF)

set(GGML_BUILD ON)
set(GGML_SHARED ON)
set(GGML_CCACHE ON)
set(GGML_BACKEND_DL ON)
set(GGML_BACKEND_SHARED ON)
set(GGML_SCHED_MAX_COPIES 4)

set(GGML_LLAMAFILE ON)
set(GGML_CUDA_PEER_MAX_BATCH_SIZE 128)
set(GGML_CUDA_GRAPHS ON)
set(GGML_CUDA_FA ON)

if((CMAKE_OSX_ARCHITECTURES AND NOT CMAKE_OSX_ARCHITECTURES MATCHES "arm64")
    OR (NOT CMAKE_OSX_ARCHITECTURES AND NOT CMAKE_SYSTEM_PROCESSOR MATCHES "arm|aarch64|ARM64|ARMv[0-9]+"))
    set(GGML_CPU_ALL_VARIANTS ON)
endif()

if (CMAKE_OSX_ARCHITECTURES MATCHES "x86_64")
    set(CMAKE_BUILD_RPATH "@loader_path")
    set(CMAKE_INSTALL_RPATH "@loader_path")
endif()

set(OLLAMA_BUILD_DIR ${CMAKE_BINARY_DIR}/lib/ollama)
set(OLLAMA_INSTALL_DIR ${CMAKE_INSTALL_PREFIX}/lib/ollama)

set(CMAKE_RUNTIME_OUTPUT_DIRECTORY ${OLLAMA_BUILD_DIR})
set(CMAKE_RUNTIME_OUTPUT_DIRECTORY_DEBUG ${OLLAMA_BUILD_DIR})
set(CMAKE_RUNTIME_OUTPUT_DIRECTORY_RELEASE ${OLLAMA_BUILD_DIR})
set(CMAKE_LIBRARY_OUTPUT_DIRECTORY ${OLLAMA_BUILD_DIR})
set(CMAKE_LIBRARY_OUTPUT_DIRECTORY_DEBUG ${OLLAMA_BUILD_DIR})
set(CMAKE_LIBRARY_OUTPUT_DIRECTORY_RELEASE ${OLLAMA_BUILD_DIR})

include_directories(${CMAKE_CURRENT_SOURCE_DIR}/ml/backend/ggml/ggml/src)
include_directories(${CMAKE_CURRENT_SOURCE_DIR}/ml/backend/ggml/ggml/src/include)
include_directories(${CMAKE_CURRENT_SOURCE_DIR}/ml/backend/ggml/ggml/src/ggml-cpu)
include_directories(${CMAKE_CURRENT_SOURCE_DIR}/ml/backend/ggml/ggml/src/ggml-cpu/amx)

set(GGML_CPU ON)
add_subdirectory(${CMAKE_CURRENT_SOURCE_DIR}/ml/backend/ggml/ggml/src)
set_property(TARGET ggml PROPERTY EXCLUDE_FROM_ALL TRUE)

get_target_property(CPU_VARIANTS ggml-cpu MANUALLY_ADDED_DEPENDENCIES)
if(NOT CPU_VARIANTS)
    set(CPU_VARIANTS "ggml-cpu")
endif()

install(TARGETS ggml-base ${CPU_VARIANTS}
    RUNTIME_DEPENDENCIES
        PRE_EXCLUDE_REGEXES ".*"
    RUNTIME DESTINATION ${OLLAMA_INSTALL_DIR} COMPONENT CPU
    LIBRARY DESTINATION ${OLLAMA_INSTALL_DIR} COMPONENT CPU
    FRAMEWORK DESTINATION ${OLLAMA_INSTALL_DIR} COMPONENT CPU
)

check_language(CUDA)
if(CMAKE_CUDA_COMPILER)
    if(CMAKE_VERSION VERSION_GREATER_EQUAL "3.24" AND NOT CMAKE_CUDA_ARCHITECTURES)
        set(CMAKE_CUDA_ARCHITECTURES "native")
    endif()

    find_package(CUDAToolkit)
    add_subdirectory(${CMAKE_CURRENT_SOURCE_DIR}/ml/backend/ggml/ggml/src/ggml-cuda)
    set(OLLAMA_CUDA_INSTALL_DIR ${OLLAMA_INSTALL_DIR}/cuda_v${CUDAToolkit_VERSION_MAJOR})
    install(TARGETS ggml-cuda
        RUNTIME_DEPENDENCIES
            DIRECTORIES ${CUDAToolkit_BIN_DIR} ${CUDAToolkit_LIBRARY_DIR}
            PRE_INCLUDE_REGEXES cublas cublasLt cudart
            PRE_EXCLUDE_REGEXES ".*"
        RUNTIME DESTINATION ${OLLAMA_CUDA_INSTALL_DIR} COMPONENT CUDA
        LIBRARY DESTINATION ${OLLAMA_CUDA_INSTALL_DIR} COMPONENT CUDA
    )
endif()

set(WINDOWS_AMDGPU_TARGETS_EXCLUDE_REGEX "^gfx(906|908|90a):xnack[+-]$"
    CACHE STRING
    "Regular expression describing AMDGPU_TARGETS not supported on Windows. Override to force building these targets. Default \"^gfx(906|908|90a):xnack[+-]$\"."
)

check_language(HIP)
if(CMAKE_HIP_COMPILER)
    set(HIP_PLATFORM "amd")

    find_package(hip REQUIRED)
    if(NOT AMDGPU_TARGETS)
        list(FILTER AMDGPU_TARGETS INCLUDE REGEX "^gfx(900|94[012]|101[02]|1030|110[012])$")
    elseif(WIN32 AND WINDOWS_AMDGPU_TARGETS_EXCLUDE_REGEX)
        list(FILTER AMDGPU_TARGETS EXCLUDE REGEX ${WINDOWS_AMDGPU_TARGETS_EXCLUDE_REGEX})
    endif()

    if(AMDGPU_TARGETS)
        add_subdirectory(${CMAKE_CURRENT_SOURCE_DIR}/ml/backend/ggml/ggml/src/ggml-hip)

        if (WIN32)
            target_compile_definitions(ggml-hip PRIVATE GGML_CUDA_NO_PEER_COPY)
        endif()

        target_compile_definitions(ggml-hip PRIVATE GGML_HIP_NO_VMM)

        set(OLLAMA_HIP_INSTALL_DIR ${OLLAMA_INSTALL_DIR}/rocm)
        install(TARGETS ggml-hip
            RUNTIME_DEPENDENCIES
                DIRECTORIES ${HIP_BIN_INSTALL_DIR} ${HIP_LIB_INSTALL_DIR}
                PRE_INCLUDE_REGEXES hipblas rocblas amdhip64 rocsolver amd_comgr hsa-runtime64 rocsparse tinfo rocprofiler-register drm drm_amdgpu numa elf
                PRE_EXCLUDE_REGEXES ".*"
                POST_EXCLUDE_REGEXES "system32"
            RUNTIME DESTINATION ${OLLAMA_HIP_INSTALL_DIR} COMPONENT HIP
            LIBRARY DESTINATION ${OLLAMA_HIP_INSTALL_DIR} COMPONENT HIP
        )

        foreach(HIP_LIB_BIN_INSTALL_DIR IN ITEMS ${HIP_BIN_INSTALL_DIR} ${HIP_LIB_INSTALL_DIR})
            if(EXISTS ${HIP_LIB_BIN_INSTALL_DIR}/rocblas)
                install(DIRECTORY ${HIP_LIB_BIN_INSTALL_DIR}/rocblas DESTINATION ${OLLAMA_HIP_INSTALL_DIR} COMPONENT HIP)
                break()
            endif()
        endforeach()
    endif()
endif()
```
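A minimal out-of-source build against this top-level CMakeLists could look like the following sketch; GPU toolchains are auto-detected via `check_language`, so only the CPU install component is assumed here, and the `dist` prefix is illustrative:

```bash
# Configure, build, and stage the CPU backend libraries
cmake -B build -DCMAKE_BUILD_TYPE=Release
cmake --build build --parallel
cmake --install build --component CPU --strip --prefix dist   # lands in dist/lib/ollama
```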
CMakePresets.json (new file, 110 lines)

```json
{
  "version": 3,
  "configurePresets": [
    {
      "name": "Default",
      "binaryDir": "${sourceDir}/build",
      "installDir": "${sourceDir}/dist",
      "cacheVariables": {
        "CMAKE_BUILD_TYPE": "Release"
      }
    },
    {
      "name": "CPU",
      "inherits": [ "Default" ]
    },
    {
      "name": "CUDA",
      "inherits": [ "Default" ]
    },
    {
      "name": "CUDA 11",
      "inherits": [ "CUDA" ],
      "cacheVariables": {
        "CMAKE_CUDA_ARCHITECTURES": "50;52;53;60;61;70;75;80;86"
      }
    },
    {
      "name": "CUDA 12",
      "inherits": [ "CUDA" ],
      "cacheVariables": {
        "CMAKE_CUDA_ARCHITECTURES": "50;60;61;70;75;80;86;87;89;90;90a;120"
      }
    },
    {
      "name": "JetPack 5",
      "inherits": [ "CUDA" ],
      "cacheVariables": {
        "CMAKE_CUDA_ARCHITECTURES": "72;87"
      }
    },
    {
      "name": "JetPack 6",
      "inherits": [ "CUDA" ],
      "cacheVariables": {
        "CMAKE_CUDA_ARCHITECTURES": "87"
      }
    },
    {
      "name": "ROCm",
      "inherits": [ "Default" ],
      "cacheVariables": {
        "CMAKE_HIP_PLATFORM": "amd"
      }
    },
    {
      "name": "ROCm 6",
      "inherits": [ "ROCm" ],
      "cacheVariables": {
        "AMDGPU_TARGETS": "gfx900;gfx940;gfx941;gfx942;gfx1010;gfx1012;gfx1030;gfx1100;gfx1101;gfx1102;gfx1151;gfx906:xnack-;gfx908:xnack-;gfx90a:xnack+;gfx90a:xnack-"
      }
    }
  ],
  "buildPresets": [
    {
      "name": "Default",
      "configurePreset": "Default",
      "configuration": "Release"
    },
    {
      "name": "CPU",
      "configurePreset": "Default",
      "targets": [ "ggml-cpu" ]
    },
    {
      "name": "CUDA",
      "configurePreset": "CUDA",
      "targets": [ "ggml-cuda" ]
    },
    {
      "name": "CUDA 11",
      "inherits": [ "CUDA" ],
      "configurePreset": "CUDA 11"
    },
    {
      "name": "CUDA 12",
      "inherits": [ "CUDA" ],
      "configurePreset": "CUDA 12"
    },
    {
      "name": "JetPack 5",
      "inherits": [ "CUDA" ],
      "configurePreset": "JetPack 5"
    },
    {
      "name": "JetPack 6",
      "inherits": [ "CUDA" ],
      "configurePreset": "JetPack 6"
    },
    {
      "name": "ROCm",
      "configurePreset": "ROCm",
      "targets": [ "ggml-hip" ]
    },
    {
      "name": "ROCm 6",
      "inherits": [ "ROCm" ],
      "configurePreset": "ROCm 6"
    }
  ]
}
```
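These presets are what the reworked workflow and Dockerfile invoke. A local GPU build follows the same pattern (assuming the CUDA toolkit is on PATH):

```bash
# Configure and build the CUDA 12 flavor of the ggml backend
cmake --preset 'CUDA 12'
cmake --build --preset 'CUDA 12' --parallel

# CPU-only equivalent
cmake --preset 'CPU' && cmake --build --preset 'CPU' --parallel
```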
CONTRIBUTING.md

```diff
@@ -6,8 +6,6 @@ Thank you for your interest in contributing to Ollama! Here are a few guidelines
 
 See the [development documentation](./docs/development.md) for instructions on how to build and run Ollama locally.
 
-## Pull requests
-
 ### Ideal issues
 
 * [Bugs](https://github.com/ollama/ollama/issues?q=is%3Aissue+is%3Aopen+label%3Abug): issues where Ollama stops working or where it results in an unexpected error.
@@ -26,11 +24,64 @@ See the [development documentation](./docs/development.md) for instructions on h
 * Changes that add significant friction to the user experience
 * Changes that create a large future maintenance burden for maintainers and contributors
 
-### Best practices
+## Proposing a (non-trivial) change
 
-* Commit messages: please leave both a title and a description in your commit messages. The title should be a short summary of the changes, with a leading word that explains the section of the code being changed (e.g. `api: fix parsing of prompt field`) . In the description, leave a short 2-3 sentences that explain more about the change and its impact.
-* Tests: please add test coverage to changes where possible.
-* Minimize dependencies: avoid adding new dependencies unless absolutely necessary.
+> By "non-trivial", we mean a change that is not a bug fix or small
+> documentation update. If you are unsure, please ask us on our [Discord
+> server](https://discord.gg/ollama).
 
+Before opening a non-trivial Pull Request, please open an issue to discuss the change and
+get feedback from the maintainers. This helps us understand the context of the
+change and how it fits into Ollama's roadmap and prevents us from duplicating
+work or you from spending time on a change that we may not be able to accept.
+
+Tips for proposals:
+
+* Explain the problem you are trying to solve, not what you are trying to do.
+* Explain why the change is important.
+* Explain how the change will be used.
+* Explain how the change will be tested.
+
+Additionally, for bonus points: Provide draft documentation you would expect to
+see if the change were accepted.
+
+## Pull requests
+
+**Commit messages**
+
+The title should look like:
+
+    <package>: <short description>
+
+The package is the most affected Go package. If the change does not affect Go
+code, then use the directory name instead. Changes to a single well-known
+file in the root directory may use the file name.
+
+The short description should start with a lowercase letter and be a
+continuation of the sentence:
+
+    "This changes Ollama to..."
+
+Examples:
+
+    llm/backend/mlx: support the llama architecture
+    CONTRIBUTING: provide clairity on good commit messages, and bad
+
+Bad Examples:
+
+    feat: add more emoji
+    fix: was not using famous web framework
+    chore: generify code
+
+**Tests**
+
+Please include tests. Strive to test behavior, not implementation.
+
+**New dependencies**
+
+Dependencies should be added sparingly. If you are adding a new dependency,
+please explain why it is necessary and what other ways you attempted that
+did not work without it.
+
 ## Need help?
```
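Following the commit-message convention introduced in this diff, a commit written from the command line might look like this; the package name and description are illustrative, not from the change itself:

```bash
# Title: "<package>: <short description>", continuing "This changes Ollama to..."
git commit \
  -m "server: reject registry URLs without a scheme" \
  -m "Validates the registry host before issuing a pull so a malformed URL fails fast with a clear error instead of a timeout."
```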
Dockerfile (327 lines changed)

```diff
@@ -1,250 +1,131 @@
-ARG GOLANG_VERSION=1.22.5
-ARG CMAKE_VERSION=3.22.1
-ARG CUDA_VERSION_11=11.3.1
-ARG CUDA_V11_ARCHITECTURES="50;52;53;60;61;62;70;72;75;80;86"
-ARG CUDA_VERSION_12=12.4.0
-ARG CUDA_V12_ARCHITECTURES="60;61;62;70;72;75;80;86;87;89;90;90a"
-ARG ROCM_VERSION=6.1.2
+# vim: filetype=dockerfile
 
-# Copy the minimal context we need to run the generate scripts
-FROM scratch AS llm-code
-COPY .git .git
-COPY .gitmodules .gitmodules
-COPY llm llm
+ARG FLAVOR=${TARGETARCH}
 
-FROM --platform=linux/amd64 nvidia/cuda:$CUDA_VERSION_11-devel-centos7 AS cuda-11-build-amd64
-ARG CMAKE_VERSION
-COPY ./scripts/rh_linux_deps.sh /
-RUN CMAKE_VERSION=${CMAKE_VERSION} sh /rh_linux_deps.sh
-ENV PATH=/opt/rh/devtoolset-10/root/usr/bin:$PATH
-COPY --from=llm-code / /go/src/github.com/ollama/ollama/
-WORKDIR /go/src/github.com/ollama/ollama/llm/generate
-ARG CGO_CFLAGS
-ARG CUDA_V11_ARCHITECTURES
-ENV GOARCH=amd64
-RUN --mount=type=cache,target=/root/.ccache \
-    OLLAMA_SKIP_STATIC_GENERATE=1 \
-    OLLAMA_SKIP_CPU_GENERATE=1 \
-    CMAKE_CUDA_ARCHITECTURES="${CUDA_V11_ARCHITECTURES}" \
-    CUDA_VARIANT="_v11" \
-    bash gen_linux.sh
+ARG ROCMVERSION=6.3.3
+ARG JETPACK5VERSION=r35.4.1
+ARG JETPACK6VERSION=r36.4.0
+ARG CMAKEVERSION=3.31.2
 
-FROM --platform=linux/amd64 nvidia/cuda:$CUDA_VERSION_12-devel-centos7 AS cuda-12-build-amd64
-ARG CMAKE_VERSION
-COPY ./scripts/rh_linux_deps.sh /
-RUN CMAKE_VERSION=${CMAKE_VERSION} sh /rh_linux_deps.sh
-ENV PATH=/opt/rh/devtoolset-10/root/usr/bin:$PATH
-COPY --from=llm-code / /go/src/github.com/ollama/ollama/
-WORKDIR /go/src/github.com/ollama/ollama/llm/generate
-ARG CGO_CFLAGS
-ARG CUDA_V12_ARCHITECTURES
-ENV GOARCH=amd64
-RUN --mount=type=cache,target=/root/.ccache \
-    OLLAMA_SKIP_STATIC_GENERATE=1 \
-    OLLAMA_SKIP_CPU_GENERATE=1 \
-    CMAKE_CUDA_ARCHITECTURES="${CUDA_V12_ARCHITECTURES}" \
-    CUDA_VARIANT="_v12" \
-    OLLAMA_CUSTOM_CUDA_DEFS="-DGGML_CUDA_USE_GRAPHS=on" \
-    bash gen_linux.sh
+# CUDA v11 requires gcc v10. v10.3 has regressions, so the rockylinux 8.5 AppStream has the latest compatible version
+FROM --platform=linux/amd64 rocm/dev-almalinux-8:${ROCMVERSION}-complete AS base-amd64
+RUN yum install -y yum-utils \
+    && yum-config-manager --add-repo https://dl.rockylinux.org/vault/rocky/8.5/AppStream/\$basearch/os/ \
+    && rpm --import https://dl.rockylinux.org/pub/rocky/RPM-GPG-KEY-Rocky-8 \
+    && dnf install -y yum-utils ccache gcc-toolset-10-gcc-10.2.1-8.2.el8 gcc-toolset-10-gcc-c++-10.2.1-8.2.el8 gcc-toolset-10-binutils-2.35-11.el8 \
+    && yum-config-manager --add-repo https://developer.download.nvidia.com/compute/cuda/repos/rhel8/x86_64/cuda-rhel8.repo
 
-FROM --platform=linux/arm64 nvidia/cuda:$CUDA_VERSION_11-devel-rockylinux8 AS cuda-11-build-runner-arm64
-ARG CMAKE_VERSION
-COPY ./scripts/rh_linux_deps.sh /
-RUN CMAKE_VERSION=${CMAKE_VERSION} sh /rh_linux_deps.sh
 ENV PATH=/opt/rh/gcc-toolset-10/root/usr/bin:$PATH
-COPY --from=llm-code / /go/src/github.com/ollama/ollama/
-WORKDIR /go/src/github.com/ollama/ollama/llm/generate
-ARG CGO_CFLAGS
-ARG CUDA_V11_ARCHITECTURES
-ENV GOARCH=arm64
-RUN OLLAMA_SKIP_STATIC_GENERATE=1 \
-    OLLAMA_SKIP_CPU_GENERATE=1 \
-    CMAKE_CUDA_ARCHITECTURES="${CUDA_V11_ARCHITECTURES}" \
-    CUDA_VARIANT="_v11" \
-    bash gen_linux.sh
 
-FROM --platform=linux/arm64 nvidia/cuda:$CUDA_VERSION_12-devel-rockylinux8 AS cuda-12-build-runner-arm64
-ARG CMAKE_VERSION
-COPY ./scripts/rh_linux_deps.sh /
-RUN CMAKE_VERSION=${CMAKE_VERSION} sh /rh_linux_deps.sh
-ENV PATH=/opt/rh/gcc-toolset-10/root/usr/bin:$PATH
-COPY --from=llm-code / /go/src/github.com/ollama/ollama/
-WORKDIR /go/src/github.com/ollama/ollama/llm/generate
-ARG CGO_CFLAGS
-ARG CUDA_V12_ARCHITECTURES
-ENV GOARCH=arm64
+FROM --platform=linux/arm64 almalinux:8 AS base-arm64
+# install epel-release for ccache
+RUN yum install -y yum-utils epel-release \
+    && dnf install -y clang ccache \
+    && yum-config-manager --add-repo https://developer.download.nvidia.com/compute/cuda/repos/rhel8/sbsa/cuda-rhel8.repo
+ENV CC=clang CXX=clang++
+
+FROM base-${TARGETARCH} AS base
+ARG CMAKEVERSION
+RUN curl -fsSL https://github.com/Kitware/CMake/releases/download/v${CMAKEVERSION}/cmake-${CMAKEVERSION}-linux-$(uname -m).tar.gz | tar xz -C /usr/local --strip-components 1
+COPY CMakeLists.txt CMakePresets.json .
+COPY ml/backend/ggml/ggml ml/backend/ggml/ggml
+ENV LDFLAGS=-s
+
+FROM base AS cpu
+RUN dnf install -y gcc-toolset-11-gcc gcc-toolset-11-gcc-c++
+ENV PATH=/opt/rh/gcc-toolset-11/root/usr/bin:$PATH
 RUN --mount=type=cache,target=/root/.ccache \
-    OLLAMA_SKIP_STATIC_GENERATE=1 \
-    OLLAMA_SKIP_CPU_GENERATE=1 \
-    CMAKE_CUDA_ARCHITECTURES="${CUDA_V12_ARCHITECTURES}" \
-    CUDA_VARIANT="_v12" \
-    OLLAMA_CUSTOM_CUDA_DEFS="-DGGML_CUDA_USE_GRAPHS=on" \
-    bash gen_linux.sh
+    cmake --preset 'CPU' \
+    && cmake --build --parallel --preset 'CPU' \
+    && cmake --install build --component CPU --strip --parallel 8
 
-FROM --platform=linux/amd64 rocm/dev-centos-7:${ROCM_VERSION}-complete AS rocm-build-amd64
-ARG CMAKE_VERSION
-COPY ./scripts/rh_linux_deps.sh /
-RUN CMAKE_VERSION=${CMAKE_VERSION} sh /rh_linux_deps.sh
-ENV PATH=/opt/rh/devtoolset-10/root/usr/bin:$PATH
-ENV LIBRARY_PATH=/opt/amdgpu/lib64
-COPY --from=llm-code / /go/src/github.com/ollama/ollama/
-WORKDIR /go/src/github.com/ollama/ollama/llm/generate
-ARG CGO_CFLAGS
-ARG AMDGPU_TARGETS
-ENV GOARCH=amd64
+FROM base AS cuda-11
+ARG CUDA11VERSION=11.3
+RUN dnf install -y cuda-toolkit-${CUDA11VERSION//./-}
+ENV PATH=/usr/local/cuda-11/bin:$PATH
 RUN --mount=type=cache,target=/root/.ccache \
-    OLLAMA_SKIP_STATIC_GENERATE=1 OLLAMA_SKIP_CPU_GENERATE=1 bash gen_linux.sh
-RUN mkdir -p ../../dist/linux-amd64-rocm/lib/ollama && \
-    (cd /opt/rocm/lib && tar cf - rocblas/library) | (cd ../../dist/linux-amd64-rocm/lib/ollama && tar xf - )
+    cmake --preset 'CUDA 11' \
+    && cmake --build --parallel --preset 'CUDA 11' \
+    && cmake --install build --component CUDA --strip --parallel 8
```
|
||||||
|
|
||||||
FROM --platform=linux/amd64 centos:7 AS cpu-builder-amd64
|
FROM base AS cuda-12
|
||||||
ARG CMAKE_VERSION
|
ARG CUDA12VERSION=12.8
|
||||||
ARG GOLANG_VERSION
|
RUN dnf install -y cuda-toolkit-${CUDA12VERSION//./-}
|
||||||
COPY ./scripts/rh_linux_deps.sh /
|
ENV PATH=/usr/local/cuda-12/bin:$PATH
|
||||||
RUN CMAKE_VERSION=${CMAKE_VERSION} GOLANG_VERSION=${GOLANG_VERSION} sh /rh_linux_deps.sh
|
|
||||||
ENV PATH=/opt/rh/devtoolset-10/root/usr/bin:$PATH
|
|
||||||
COPY --from=llm-code / /go/src/github.com/ollama/ollama/
|
|
||||||
ARG OLLAMA_CUSTOM_CPU_DEFS
|
|
||||||
ARG CGO_CFLAGS
|
|
||||||
ENV GOARCH=amd64
|
|
||||||
WORKDIR /go/src/github.com/ollama/ollama/llm/generate
|
|
||||||
|
|
||||||
FROM --platform=linux/amd64 cpu-builder-amd64 AS cpu-build-amd64
|
|
||||||
RUN --mount=type=cache,target=/root/.ccache \
|
RUN --mount=type=cache,target=/root/.ccache \
|
||||||
OLLAMA_SKIP_STATIC_GENERATE=1 OLLAMA_CPU_TARGET="cpu" bash gen_linux.sh
|
cmake --preset 'CUDA 12' \
|
||||||
FROM --platform=linux/amd64 cpu-builder-amd64 AS cpu_avx-build-amd64
|
&& cmake --build --parallel --preset 'CUDA 12' \
|
||||||
|
&& cmake --install build --component CUDA --strip --parallel 8
|
||||||
|
|
||||||
|
FROM base AS rocm-6
|
||||||
|
ENV PATH=/opt/rocm/hcc/bin:/opt/rocm/hip/bin:/opt/rocm/bin:/opt/rocm/hcc/bin:$PATH
|
||||||
RUN --mount=type=cache,target=/root/.ccache \
|
RUN --mount=type=cache,target=/root/.ccache \
|
||||||
OLLAMA_SKIP_STATIC_GENERATE=1 OLLAMA_CPU_TARGET="cpu_avx" bash gen_linux.sh
|
cmake --preset 'ROCm 6' \
|
||||||
FROM --platform=linux/amd64 cpu-builder-amd64 AS cpu_avx2-build-amd64
|
&& cmake --build --parallel --preset 'ROCm 6' \
|
||||||
|
&& cmake --install build --component HIP --strip --parallel 8
|
||||||
|
|
||||||
|
FROM --platform=linux/arm64 nvcr.io/nvidia/l4t-jetpack:${JETPACK5VERSION} AS jetpack-5
|
||||||
|
ARG CMAKEVERSION
|
||||||
|
RUN apt-get update && apt-get install -y curl ccache \
|
||||||
|
&& curl -fsSL https://github.com/Kitware/CMake/releases/download/v${CMAKEVERSION}/cmake-${CMAKEVERSION}-linux-$(uname -m).tar.gz | tar xz -C /usr/local --strip-components 1
|
||||||
|
COPY CMakeLists.txt CMakePresets.json .
|
||||||
|
COPY ml/backend/ggml/ggml ml/backend/ggml/ggml
|
||||||
RUN --mount=type=cache,target=/root/.ccache \
|
RUN --mount=type=cache,target=/root/.ccache \
|
||||||
OLLAMA_SKIP_STATIC_GENERATE=1 OLLAMA_CPU_TARGET="cpu_avx2" bash gen_linux.sh
|
cmake --preset 'JetPack 5' \
|
||||||
|
&& cmake --build --parallel --preset 'JetPack 5' \
|
||||||
|
&& cmake --install build --component CUDA --strip --parallel 8
|
||||||
|
|
||||||
FROM --platform=linux/arm64 rockylinux:8 AS cpu-builder-arm64
|
FROM --platform=linux/arm64 nvcr.io/nvidia/l4t-jetpack:${JETPACK6VERSION} AS jetpack-6
|
||||||
ARG CMAKE_VERSION
|
ARG CMAKEVERSION
|
||||||
ARG GOLANG_VERSION
|
RUN apt-get update && apt-get install -y curl ccache \
|
||||||
COPY ./scripts/rh_linux_deps.sh /
|
&& curl -fsSL https://github.com/Kitware/CMake/releases/download/v${CMAKEVERSION}/cmake-${CMAKEVERSION}-linux-$(uname -m).tar.gz | tar xz -C /usr/local --strip-components 1
|
||||||
RUN CMAKE_VERSION=${CMAKE_VERSION} GOLANG_VERSION=${GOLANG_VERSION} sh /rh_linux_deps.sh
|
COPY CMakeLists.txt CMakePresets.json .
|
||||||
ENV PATH=/opt/rh/gcc-toolset-10/root/usr/bin:$PATH
|
COPY ml/backend/ggml/ggml ml/backend/ggml/ggml
|
||||||
COPY --from=llm-code / /go/src/github.com/ollama/ollama/
|
|
||||||
ARG OLLAMA_CUSTOM_CPU_DEFS
|
|
||||||
ARG CGO_CFLAGS
|
|
||||||
ENV GOARCH=arm64
|
|
||||||
WORKDIR /go/src/github.com/ollama/ollama/llm/generate
|
|
||||||
|
|
||||||
FROM --platform=linux/arm64 cpu-builder-arm64 AS cpu-build-arm64
|
|
||||||
RUN --mount=type=cache,target=/root/.ccache \
|
RUN --mount=type=cache,target=/root/.ccache \
|
||||||
OLLAMA_SKIP_STATIC_GENERATE=1 OLLAMA_CPU_TARGET="cpu" bash gen_linux.sh
|
cmake --preset 'JetPack 6' \
|
||||||
|
&& cmake --build --parallel --preset 'JetPack 6' \
|
||||||
|
&& cmake --install build --component CUDA --strip --parallel 8
|
||||||
|
|
||||||
|
FROM base AS build
|
||||||
# Intermediate stages used for ./scripts/build_linux.sh
|
WORKDIR /go/src/github.com/ollama/ollama
|
||||||
FROM --platform=linux/amd64 cpu-build-amd64 AS build-amd64
|
COPY go.mod go.sum .
|
||||||
|
RUN curl -fsSL https://golang.org/dl/go$(awk '/^go/ { print $2 }' go.mod).linux-$(case $(uname -m) in x86_64) echo amd64 ;; aarch64) echo arm64 ;; esac).tar.gz | tar xz -C /usr/local
|
||||||
|
ENV PATH=/usr/local/go/bin:$PATH
|
||||||
|
RUN go mod download
|
||||||
|
COPY . .
|
||||||
|
ARG GOFLAGS="'-ldflags=-w -s'"
|
||||||
ENV CGO_ENABLED=1
|
ENV CGO_ENABLED=1
|
||||||
WORKDIR /go/src/github.com/ollama/ollama
|
RUN --mount=type=cache,target=/root/.cache/go-build \
|
||||||
COPY . .
|
go build -trimpath -buildmode=pie -o /bin/ollama .
|
||||||
COPY --from=cpu_avx-build-amd64 /go/src/github.com/ollama/ollama/build/ build/
|
|
||||||
COPY --from=cpu_avx2-build-amd64 /go/src/github.com/ollama/ollama/build/ build/
|
|
||||||
COPY --from=cuda-11-build-amd64 /go/src/github.com/ollama/ollama/dist/ dist/
|
|
||||||
COPY --from=cuda-11-build-amd64 /go/src/github.com/ollama/ollama/build/ build/
|
|
||||||
COPY --from=cuda-12-build-amd64 /go/src/github.com/ollama/ollama/dist/ dist/
|
|
||||||
COPY --from=cuda-12-build-amd64 /go/src/github.com/ollama/ollama/build/ build/
|
|
||||||
COPY --from=rocm-build-amd64 /go/src/github.com/ollama/ollama/dist/ dist/
|
|
||||||
COPY --from=rocm-build-amd64 /go/src/github.com/ollama/ollama/build/ build/
|
|
||||||
ARG GOFLAGS
|
|
||||||
ARG CGO_CFLAGS
|
|
||||||
RUN --mount=type=cache,target=/root/.ccache \
|
|
||||||
go build -trimpath -o dist/linux-amd64/bin/ollama .
|
|
||||||
RUN cd dist/linux-$GOARCH && \
|
|
||||||
tar --exclude runners -cf - . | pigz --best > ../ollama-linux-$GOARCH.tgz
|
|
||||||
RUN cd dist/linux-$GOARCH-rocm && \
|
|
||||||
tar -cf - . | pigz --best > ../ollama-linux-$GOARCH-rocm.tgz
|
|
||||||
|
|
||||||
FROM --platform=linux/arm64 cpu-build-arm64 AS build-arm64
|
FROM --platform=linux/amd64 scratch AS amd64
|
||||||
ENV CGO_ENABLED=1
|
COPY --from=cuda-11 dist/lib/ollama/cuda_v11 /lib/ollama/cuda_v11
|
||||||
ARG GOLANG_VERSION
|
COPY --from=cuda-12 dist/lib/ollama/cuda_v12 /lib/ollama/cuda_v12
|
||||||
WORKDIR /go/src/github.com/ollama/ollama
|
|
||||||
COPY . .
|
|
||||||
COPY --from=cuda-11-build-runner-arm64 /go/src/github.com/ollama/ollama/dist/ dist/
|
|
||||||
COPY --from=cuda-11-build-runner-arm64 /go/src/github.com/ollama/ollama/build/ build/
|
|
||||||
COPY --from=cuda-12-build-runner-arm64 /go/src/github.com/ollama/ollama/dist/ dist/
|
|
||||||
COPY --from=cuda-12-build-runner-arm64 /go/src/github.com/ollama/ollama/build/ build/
|
|
||||||
ARG GOFLAGS
|
|
||||||
ARG CGO_CFLAGS
|
|
||||||
RUN --mount=type=cache,target=/root/.ccache \
|
|
||||||
go build -trimpath -o dist/linux-arm64/bin/ollama .
|
|
||||||
RUN cd dist/linux-$GOARCH && \
|
|
||||||
tar --exclude runners -cf - . | pigz --best > ../ollama-linux-$GOARCH.tgz
|
|
||||||
|
|
||||||
FROM --platform=linux/amd64 scratch AS dist-amd64
|
FROM --platform=linux/arm64 scratch AS arm64
|
||||||
COPY --from=build-amd64 /go/src/github.com/ollama/ollama/dist/ollama-linux-*.tgz /
|
COPY --from=cuda-11 dist/lib/ollama/cuda_v11 /lib/ollama/cuda_v11
|
||||||
FROM --platform=linux/arm64 scratch AS dist-arm64
|
COPY --from=cuda-12 dist/lib/ollama/cuda_v12 /lib/ollama/cuda_v12
|
||||||
COPY --from=build-arm64 /go/src/github.com/ollama/ollama/dist/ollama-linux-*.tgz /
|
COPY --from=jetpack-5 dist/lib/ollama/cuda_v11 lib/ollama/cuda_jetpack5
|
||||||
FROM dist-$TARGETARCH as dist
|
COPY --from=jetpack-6 dist/lib/ollama/cuda_v12 lib/ollama/cuda_jetpack6
|
||||||
|
|
||||||
|
FROM scratch AS rocm
|
||||||
|
COPY --from=rocm-6 dist/lib/ollama/rocm /lib/ollama/rocm
|
||||||
|
|
||||||
# Optimized container images do not carry nested payloads
|
FROM ${FLAVOR} AS archive
|
||||||
FROM --platform=linux/amd64 cpu-builder-amd64 AS container-build-amd64
|
COPY --from=cpu dist/lib/ollama /lib/ollama
|
||||||
WORKDIR /go/src/github.com/ollama/ollama
|
COPY --from=build /bin/ollama /bin/ollama
|
||||||
COPY . .
|
|
||||||
ARG GOFLAGS
|
|
||||||
ARG CGO_CFLAGS
|
|
||||||
RUN --mount=type=cache,target=/root/.ccache \
|
|
||||||
go build -trimpath -o dist/linux-amd64/bin/ollama .
|
|
||||||
|
|
||||||
FROM --platform=linux/arm64 cpu-builder-arm64 AS container-build-arm64
|
FROM ubuntu:20.04
|
||||||
WORKDIR /go/src/github.com/ollama/ollama
|
RUN apt-get update \
|
||||||
COPY . .
|
&& apt-get install -y ca-certificates \
|
||||||
ARG GOFLAGS
|
&& apt-get clean \
|
||||||
ARG CGO_CFLAGS
|
&& rm -rf /var/lib/apt/lists/*
|
||||||
RUN --mount=type=cache,target=/root/.ccache \
|
COPY --from=archive /bin /usr/bin
|
||||||
go build -trimpath -o dist/linux-arm64/bin/ollama .
|
|
||||||
|
|
||||||
FROM --platform=linux/amd64 ubuntu:22.04 AS runtime-amd64
|
|
||||||
RUN apt-get update && \
|
|
||||||
apt-get install -y ca-certificates && \
|
|
||||||
apt-get clean && rm -rf /var/lib/apt/lists/*
|
|
||||||
COPY --from=container-build-amd64 /go/src/github.com/ollama/ollama/dist/linux-amd64/bin/ /bin/
|
|
||||||
COPY --from=cpu-build-amd64 /go/src/github.com/ollama/ollama/dist/linux-amd64/lib/ /lib/
|
|
||||||
COPY --from=cpu_avx-build-amd64 /go/src/github.com/ollama/ollama/dist/linux-amd64/lib/ /lib/
|
|
||||||
COPY --from=cpu_avx2-build-amd64 /go/src/github.com/ollama/ollama/dist/linux-amd64/lib/ /lib/
|
|
||||||
COPY --from=cuda-11-build-amd64 /go/src/github.com/ollama/ollama/dist/linux-amd64/lib/ /lib/
|
|
||||||
COPY --from=cuda-12-build-amd64 /go/src/github.com/ollama/ollama/dist/linux-amd64/lib/ /lib/
|
|
||||||
|
|
||||||
FROM --platform=linux/arm64 ubuntu:22.04 AS runtime-arm64
|
|
||||||
RUN apt-get update && \
|
|
||||||
apt-get install -y ca-certificates && \
|
|
||||||
apt-get clean && rm -rf /var/lib/apt/lists/*
|
|
||||||
COPY --from=container-build-arm64 /go/src/github.com/ollama/ollama/dist/linux-arm64/bin/ /bin/
|
|
||||||
COPY --from=cpu-build-arm64 /go/src/github.com/ollama/ollama/dist/linux-arm64/lib/ /lib/
|
|
||||||
COPY --from=cuda-11-build-runner-arm64 /go/src/github.com/ollama/ollama/dist/linux-arm64/lib/ /lib/
|
|
||||||
COPY --from=cuda-12-build-runner-arm64 /go/src/github.com/ollama/ollama/dist/linux-arm64/lib/ /lib/
|
|
||||||
|
|
||||||
# ROCm libraries are larger, so we keep them distinct from the CPU/CUDA image
|
|
||||||
FROM --platform=linux/amd64 ubuntu:22.04 AS runtime-rocm
|
|
||||||
# Frontload the ROCm libraries, which are large and rarely change, to increase the chance of a common layer
|
|
||||||
# across releases
|
|
||||||
COPY --from=rocm-build-amd64 /go/src/github.com/ollama/ollama/dist/linux-amd64-rocm/lib/ /lib/
|
|
||||||
RUN apt-get update && \
|
|
||||||
apt-get install -y ca-certificates && \
|
|
||||||
apt-get clean && rm -rf /var/lib/apt/lists/*
|
|
||||||
COPY --from=container-build-amd64 /go/src/github.com/ollama/ollama/dist/linux-amd64/bin/ /bin/
|
|
||||||
COPY --from=cpu-build-amd64 /go/src/github.com/ollama/ollama/dist/linux-amd64/lib/ /lib/
|
|
||||||
COPY --from=cpu_avx-build-amd64 /go/src/github.com/ollama/ollama/dist/linux-amd64/lib/ /lib/
|
|
||||||
COPY --from=cpu_avx2-build-amd64 /go/src/github.com/ollama/ollama/dist/linux-amd64/lib/ /lib/
|
|
||||||
COPY --from=rocm-build-amd64 /go/src/github.com/ollama/ollama/dist/linux-amd64/lib/ /lib/
|
|
||||||
EXPOSE 11434
|
|
||||||
ENV OLLAMA_HOST=0.0.0.0
|
|
||||||
|
|
||||||
ENTRYPOINT ["/bin/ollama"]
|
|
||||||
CMD ["serve"]
|
|
||||||
|
|
||||||
FROM runtime-$TARGETARCH
|
|
||||||
EXPOSE 11434
|
|
||||||
ENV OLLAMA_HOST=0.0.0.0
|
|
||||||
ENV PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
|
ENV PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
|
||||||
|
COPY --from=archive /lib/ollama /usr/lib/ollama
|
||||||
ENV LD_LIBRARY_PATH=/usr/local/nvidia/lib:/usr/local/nvidia/lib64
|
ENV LD_LIBRARY_PATH=/usr/local/nvidia/lib:/usr/local/nvidia/lib64
|
||||||
ENV NVIDIA_DRIVER_CAPABILITIES=compute,utility
|
ENV NVIDIA_DRIVER_CAPABILITIES=compute,utility
|
||||||
ENV NVIDIA_VISIBLE_DEVICES=all
|
ENV NVIDIA_VISIBLE_DEVICES=all
|
||||||
|
ENV OLLAMA_HOST=0.0.0.0:11434
|
||||||
|
EXPOSE 11434
|
||||||
ENTRYPOINT ["/bin/ollama"]
|
ENTRYPOINT ["/bin/ollama"]
|
||||||
CMD ["serve"]
|
CMD ["serve"]
|
||||||
|
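The previous Dockerfile notes that release builds are driven through `./scripts/build_linux.sh`; as a rough sketch of how the new stage layout can be exercised by hand, the commands below build the default runtime image and a ROCm variant. The `FLAVOR=rocm` build arg and the `ollama:dev*` tags are assumptions inferred from the `ARG FLAVOR=${TARGETARCH}` / `FROM ${FLAVOR} AS archive` pattern above, not a documented interface.

```shell
# BuildKit is required for the --mount=type=cache instructions in the Dockerfile.
# Default build: FLAVOR falls back to TARGETARCH (amd64/arm64), producing the CPU/CUDA image.
DOCKER_BUILDKIT=1 docker build -t ollama:dev .

# Hypothetical ROCm variant: select the `rocm` payload stage via the FLAVOR build arg.
DOCKER_BUILDKIT=1 docker build --build-arg FLAVOR=rocm -t ollama:dev-rocm .

# Run the image; port 11434 matches the EXPOSE and OLLAMA_HOST settings above.
docker run -d -p 11434:11434 --name ollama ollama:dev
```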
60 Makefile.sync (new file)
@@ -0,0 +1,60 @@
UPSTREAM=https://github.com/ggerganov/llama.cpp.git
WORKDIR=llama/vendor
FETCH_HEAD=d7cfe1ffe0f435d0048a6058d529daf76e072d9c

.PHONY: help
help:
	@echo "Available targets:"
	@echo "  sync            Sync with upstream repositories"
	@echo "  checkout        Checkout upstream repository"
	@echo "  apply-patches   Apply patches to local repository"
	@echo "  format-patches  Format patches from local repository"
	@echo "  clean           Clean local repository"
	@echo
	@echo "Example:"
	@echo "  make -f $(lastword $(MAKEFILE_LIST)) clean sync"

.PHONY: sync
sync: llama/build-info.cpp llama/llama.cpp ml/backend/ggml/ggml apply-patches

.PHONY: llama/build-info.cpp
llama/build-info.cpp: llama/build-info.cpp.in
	sed -e 's|@FETCH_HEAD@|$(FETCH_HEAD)|' $< > $@

.PHONY: llama/llama.cpp
llama/llama.cpp: llama/vendor/ apply-patches
	rsync -arvzc -f "merge $@/.rsync-filter" $< $@

.PHONY: ml/backend/ggml/ggml apply-patches
ml/backend/ggml/ggml: llama/vendor/ggml/ apply-patches
	rsync -arvzc -f "merge $@/.rsync-filter" $< $@

PATCHES=$(wildcard llama/patches/*.patch)

.PHONY: apply-patches
.NOTPARALLEL:
apply-patches: $(addsuffix ed, $(PATCHES))

%.patched: %.patch
	@if git -c user.name=nobody -c 'user.email=<>' -C $(WORKDIR) am -3 $(realpath $<); then touch $@; else git -C $(WORKDIR) am --abort; exit 1; fi

.PHONY: checkout
checkout: $(WORKDIR)
	git -C $(WORKDIR) fetch
	git -C $(WORKDIR) checkout -f $(FETCH_HEAD)

$(WORKDIR):
	git clone $(UPSTREAM) $(WORKDIR)

.PHONY: format-patches
format-patches: llama/patches
	git -C $(WORKDIR) format-patch \
		--no-signature \
		--no-numbered \
		--zero-commit \
		-o $(realpath $<) \
		$(FETCH_HEAD)

.PHONY: clean
clean: checkout
	$(RM) $(addsuffix ed, $(PATCHES))
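For reference, a minimal usage sketch of the sync workflow above, run from the repository root; the target names come from the Makefile itself, while running them against a clean working tree is an assumption.

```shell
# Reset llama/vendor to the pinned FETCH_HEAD commit, then re-vendor the
# llama.cpp/ggml sources and re-apply the local patch series.
make -f Makefile.sync clean sync

# After amending commits in llama/vendor, regenerate llama/patches/*.patch.
make -f Makefile.sync format-patches
```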
225 README.md
@@ -1,24 +1,24 @@
|
|||||||
<div align="center">
|
<div align="center">
|
||||||
<img alt="ollama" height="200px" src="https://github.com/ollama/ollama/assets/3325447/0d0b44e2-8f4a-4e99-9b52-a5c1c741c8f7">
|
<a href="https://ollama.com">
|
||||||
|
<img alt="ollama" height="200px" src="https://github.com/ollama/ollama/assets/3325447/0d0b44e2-8f4a-4e99-9b52-a5c1c741c8f7">
|
||||||
|
</a>
|
||||||
</div>
|
</div>
|
||||||
|
|
||||||
# Ollama
|
# Ollama
|
||||||
|
|
||||||
[](https://discord.gg/ollama)
|
|
||||||
|
|
||||||
Get up and running with large language models.
|
Get up and running with large language models.
|
||||||
|
|
||||||
### macOS
|
### macOS
|
||||||
|
|
||||||
[Download](https://ollama.com/download/Ollama-darwin.zip)
|
[Download](https://ollama.com/download/Ollama-darwin.zip)
|
||||||
|
|
||||||
### Windows preview
|
### Windows
|
||||||
|
|
||||||
[Download](https://ollama.com/download/OllamaSetup.exe)
|
[Download](https://ollama.com/download/OllamaSetup.exe)
|
||||||
|
|
||||||
### Linux
|
### Linux
|
||||||
|
|
||||||
```
|
```shell
|
||||||
curl -fsSL https://ollama.com/install.sh | sh
|
curl -fsSL https://ollama.com/install.sh | sh
|
||||||
```
|
```
|
||||||
|
|
||||||
@@ -33,11 +33,16 @@ The official [Ollama Docker image](https://hub.docker.com/r/ollama/ollama) `olla
|
|||||||
- [ollama-python](https://github.com/ollama/ollama-python)
|
- [ollama-python](https://github.com/ollama/ollama-python)
|
||||||
- [ollama-js](https://github.com/ollama/ollama-js)
|
- [ollama-js](https://github.com/ollama/ollama-js)
|
||||||
|
|
||||||
|
### Community
|
||||||
|
|
||||||
|
- [Discord](https://discord.gg/ollama)
|
||||||
|
- [Reddit](https://reddit.com/r/ollama)
|
||||||
|
|
||||||
## Quickstart
|
## Quickstart
|
||||||
|
|
||||||
To run and chat with [Llama 3.2](https://ollama.com/library/llama3.2):
|
To run and chat with [Llama 3.2](https://ollama.com/library/llama3.2):
|
||||||
|
|
||||||
```
|
```shell
|
||||||
ollama run llama3.2
|
ollama run llama3.2
|
||||||
```
|
```
|
||||||
|
|
||||||
@@ -47,26 +52,32 @@ Ollama supports a list of models available on [ollama.com/library](https://ollam
|
|||||||
|
|
||||||
Here are some example models that can be downloaded:
|
Here are some example models that can be downloaded:
|
||||||
|
|
||||||
| Model | Parameters | Size | Download |
|
| Model | Parameters | Size | Download |
|
||||||
| ------------------ | ---------- | ----- | ------------------------------ |
|
| ------------------ | ---------- | ----- | -------------------------------- |
|
||||||
| Llama 3.2 | 3B | 2.0GB | `ollama run llama3.2` |
|
| Gemma 3 | 1B | 815MB | `ollama run gemma3:1b` |
|
||||||
| Llama 3.2 | 1B | 1.3GB | `ollama run llama3.2:1b` |
|
| Gemma 3 | 4B | 3.3GB | `ollama run gemma3` |
|
||||||
| Llama 3.1 | 8B | 4.7GB | `ollama run llama3.1` |
|
| Gemma 3 | 12B | 8.1GB | `ollama run gemma3:12b` |
|
||||||
| Llama 3.1 | 70B | 40GB | `ollama run llama3.1:70b` |
|
| Gemma 3 | 27B | 17GB | `ollama run gemma3:27b` |
|
||||||
| Llama 3.1 | 405B | 231GB | `ollama run llama3.1:405b` |
|
| QwQ | 32B | 20GB | `ollama run qwq` |
|
||||||
| Phi 3 Mini | 3.8B | 2.3GB | `ollama run phi3` |
|
| DeepSeek-R1 | 7B | 4.7GB | `ollama run deepseek-r1` |
|
||||||
| Phi 3 Medium | 14B | 7.9GB | `ollama run phi3:medium` |
|
| DeepSeek-R1 | 671B | 404GB | `ollama run deepseek-r1:671b` |
|
||||||
| Gemma 2 | 2B | 1.6GB | `ollama run gemma2:2b` |
|
| Llama 3.3 | 70B | 43GB | `ollama run llama3.3` |
|
||||||
| Gemma 2 | 9B | 5.5GB | `ollama run gemma2` |
|
| Llama 3.2 | 3B | 2.0GB | `ollama run llama3.2` |
|
||||||
| Gemma 2 | 27B | 16GB | `ollama run gemma2:27b` |
|
| Llama 3.2 | 1B | 1.3GB | `ollama run llama3.2:1b` |
|
||||||
| Mistral | 7B | 4.1GB | `ollama run mistral` |
|
| Llama 3.2 Vision | 11B | 7.9GB | `ollama run llama3.2-vision` |
|
||||||
| Moondream 2 | 1.4B | 829MB | `ollama run moondream` |
|
| Llama 3.2 Vision | 90B | 55GB | `ollama run llama3.2-vision:90b` |
|
||||||
| Neural Chat | 7B | 4.1GB | `ollama run neural-chat` |
|
| Llama 3.1 | 8B | 4.7GB | `ollama run llama3.1` |
|
||||||
| Starling | 7B | 4.1GB | `ollama run starling-lm` |
|
| Llama 3.1 | 405B | 231GB | `ollama run llama3.1:405b` |
|
||||||
| Code Llama | 7B | 3.8GB | `ollama run codellama` |
|
| Phi 4 | 14B | 9.1GB | `ollama run phi4` |
|
||||||
| Llama 2 Uncensored | 7B | 3.8GB | `ollama run llama2-uncensored` |
|
| Phi 4 Mini | 3.8B | 2.5GB | `ollama run phi4-mini` |
|
||||||
| LLaVA | 7B | 4.5GB | `ollama run llava` |
|
| Mistral | 7B | 4.1GB | `ollama run mistral` |
|
||||||
| Solar | 10.7B | 6.1GB | `ollama run solar` |
|
| Moondream 2 | 1.4B | 829MB | `ollama run moondream` |
|
||||||
|
| Neural Chat | 7B | 4.1GB | `ollama run neural-chat` |
|
||||||
|
| Starling | 7B | 4.1GB | `ollama run starling-lm` |
|
||||||
|
| Code Llama | 7B | 3.8GB | `ollama run codellama` |
|
||||||
|
| Llama 2 Uncensored | 7B | 3.8GB | `ollama run llama2-uncensored` |
|
||||||
|
| LLaVA | 7B | 4.5GB | `ollama run llava` |
|
||||||
|
| Granite-3.2 | 8B | 4.9GB | `ollama run granite3.2` |
|
||||||
|
|
||||||
> [!NOTE]
|
> [!NOTE]
|
||||||
> You should have at least 8 GB of RAM available to run the 7B models, 16 GB to run the 13B models, and 32 GB to run the 33B models.
|
> You should have at least 8 GB of RAM available to run the 7B models, 16 GB to run the 13B models, and 32 GB to run the 33B models.
|
||||||
@@ -85,17 +96,17 @@ Ollama supports importing GGUF models in the Modelfile:
|
|||||||
|
|
||||||
2. Create the model in Ollama
|
2. Create the model in Ollama
|
||||||
|
|
||||||
```
|
```shell
|
||||||
ollama create example -f Modelfile
|
ollama create example -f Modelfile
|
||||||
```
|
```
|
||||||
|
|
||||||
3. Run the model
|
3. Run the model
|
||||||
|
|
||||||
```
|
```shell
|
||||||
ollama run example
|
ollama run example
|
||||||
```
|
```
|
||||||
|
|
||||||
### Import from PyTorch or Safetensors
|
### Import from Safetensors
|
||||||
|
|
||||||
See the [guide](docs/import.md) on importing models for more information.
|
See the [guide](docs/import.md) on importing models for more information.
|
||||||
|
|
||||||
@@ -103,7 +114,7 @@ See the [guide](docs/import.md) on importing models for more information.
|
|||||||
|
|
||||||
Models from the Ollama library can be customized with a prompt. For example, to customize the `llama3.2` model:
|
Models from the Ollama library can be customized with a prompt. For example, to customize the `llama3.2` model:
|
||||||
|
|
||||||
```
|
```shell
|
||||||
ollama pull llama3.2
|
ollama pull llama3.2
|
||||||
```
|
```
|
||||||
|
|
||||||
@@ -130,7 +141,7 @@ ollama run mario
|
|||||||
Hello! It's your friend Mario.
|
Hello! It's your friend Mario.
|
||||||
```
|
```
|
||||||
|
|
||||||
For more examples, see the [examples](examples) directory. For more information on working with a Modelfile, see the [Modelfile](docs/modelfile.md) documentation.
|
For more information on working with a Modelfile, see the [Modelfile](docs/modelfile.md) documentation.
|
||||||
|
|
||||||
## CLI Reference
|
## CLI Reference
|
||||||
|
|
||||||
@@ -138,13 +149,13 @@ For more examples, see the [examples](examples) directory. For more information
|
|||||||
|
|
||||||
`ollama create` is used to create a model from a Modelfile.
|
`ollama create` is used to create a model from a Modelfile.
|
||||||
|
|
||||||
```
|
```shell
|
||||||
ollama create mymodel -f ./Modelfile
|
ollama create mymodel -f ./Modelfile
|
||||||
```
|
```
|
||||||
|
|
||||||
### Pull a model
|
### Pull a model
|
||||||
|
|
||||||
```
|
```shell
|
||||||
ollama pull llama3.2
|
ollama pull llama3.2
|
||||||
```
|
```
|
||||||
|
|
||||||
@@ -152,13 +163,13 @@ ollama pull llama3.2
|
|||||||
|
|
||||||
### Remove a model
|
### Remove a model
|
||||||
|
|
||||||
```
|
```shell
|
||||||
ollama rm llama3.2
|
ollama rm llama3.2
|
||||||
```
|
```
|
||||||
|
|
||||||
### Copy a model
|
### Copy a model
|
||||||
|
|
||||||
```
|
```shell
|
||||||
ollama cp llama3.2 my-model
|
ollama cp llama3.2 my-model
|
||||||
```
|
```
|
||||||
|
|
||||||
@@ -177,37 +188,39 @@ I'm a basic program that prints the famous "Hello, world!" message to the consol
|
|||||||
|
|
||||||
```
|
```
|
||||||
ollama run llava "What's in this image? /Users/jmorgan/Desktop/smile.png"
|
ollama run llava "What's in this image? /Users/jmorgan/Desktop/smile.png"
|
||||||
The image features a yellow smiley face, which is likely the central focus of the picture.
|
|
||||||
```
|
```
|
||||||
|
|
||||||
|
> **Output**: The image features a yellow smiley face, which is likely the central focus of the picture.
|
||||||
|
|
||||||
### Pass the prompt as an argument
|
### Pass the prompt as an argument
|
||||||
|
|
||||||
|
```shell
|
||||||
|
ollama run llama3.2 "Summarize this file: $(cat README.md)"
|
||||||
```
|
```
|
||||||
$ ollama run llama3.2 "Summarize this file: $(cat README.md)"
|
|
||||||
Ollama is a lightweight, extensible framework for building and running language models on the local machine. It provides a simple API for creating, running, and managing models, as well as a library of pre-built models that can be easily used in a variety of applications.
|
> **Output**: Ollama is a lightweight, extensible framework for building and running language models on the local machine. It provides a simple API for creating, running, and managing models, as well as a library of pre-built models that can be easily used in a variety of applications.
|
||||||
```
|
|
||||||
|
|
||||||
### Show model information
|
### Show model information
|
||||||
|
|
||||||
```
|
```shell
|
||||||
ollama show llama3.2
|
ollama show llama3.2
|
||||||
```
|
```
|
||||||
|
|
||||||
### List models on your computer
|
### List models on your computer
|
||||||
|
|
||||||
```
|
```shell
|
||||||
ollama list
|
ollama list
|
||||||
```
|
```
|
||||||
|
|
||||||
### List which models are currently loaded
|
### List which models are currently loaded
|
||||||
|
|
||||||
```
|
```shell
|
||||||
ollama ps
|
ollama ps
|
||||||
```
|
```
|
||||||
|
|
||||||
### Stop a model which is currently running
|
### Stop a model which is currently running
|
||||||
|
|
||||||
```
|
```shell
|
||||||
ollama stop llama3.2
|
ollama stop llama3.2
|
||||||
```
|
```
|
||||||
|
|
||||||
@@ -223,13 +236,13 @@ See the [developer guide](https://github.com/ollama/ollama/blob/main/docs/develo
|
|||||||
|
|
||||||
Next, start the server:
|
Next, start the server:
|
||||||
|
|
||||||
```
|
```shell
|
||||||
./ollama serve
|
./ollama serve
|
||||||
```
|
```
|
||||||
|
|
||||||
Finally, in a separate shell, run a model:
|
Finally, in a separate shell, run a model:
|
||||||
|
|
||||||
```
|
```shell
|
||||||
./ollama run llama3.2
|
./ollama run llama3.2
|
||||||
```
|
```
|
||||||
|
|
||||||
@@ -239,7 +252,7 @@ Ollama has a REST API for running and managing models.
|
|||||||
|
|
||||||
### Generate a response
|
### Generate a response
|
||||||
|
|
||||||
```
|
```shell
|
||||||
curl http://localhost:11434/api/generate -d '{
|
curl http://localhost:11434/api/generate -d '{
|
||||||
"model": "llama3.2",
|
"model": "llama3.2",
|
||||||
"prompt":"Why is the sky blue?"
|
"prompt":"Why is the sky blue?"
|
||||||
@@ -248,7 +261,7 @@ curl http://localhost:11434/api/generate -d '{
|
|||||||
|
|
||||||
### Chat with a model
|
### Chat with a model
|
||||||
|
|
||||||
```
|
```shell
|
||||||
curl http://localhost:11434/api/chat -d '{
|
curl http://localhost:11434/api/chat -d '{
|
||||||
"model": "llama3.2",
|
"model": "llama3.2",
|
||||||
"messages": [
|
"messages": [
|
||||||
@@ -264,6 +277,7 @@ See the [API documentation](./docs/api.md) for all endpoints.
|
|||||||
### Web & Desktop
|
### Web & Desktop
|
||||||
|
|
||||||
- [Open WebUI](https://github.com/open-webui/open-webui)
|
- [Open WebUI](https://github.com/open-webui/open-webui)
|
||||||
|
- [SwiftChat (macOS with ReactNative)](https://github.com/aws-samples/swift-chat)
|
||||||
- [Enchanted (macOS native)](https://github.com/AugustDev/enchanted)
|
- [Enchanted (macOS native)](https://github.com/AugustDev/enchanted)
|
||||||
- [Hollama](https://github.com/fmaclen/hollama)
|
- [Hollama](https://github.com/fmaclen/hollama)
|
||||||
- [Lollms-Webui](https://github.com/ParisNeo/lollms-webui)
|
- [Lollms-Webui](https://github.com/ParisNeo/lollms-webui)
|
||||||
@@ -296,7 +310,8 @@ See the [API documentation](./docs/api.md) for all endpoints.
|
|||||||
- [AnythingLLM (Docker + MacOs/Windows/Linux native app)](https://github.com/Mintplex-Labs/anything-llm)
|
- [AnythingLLM (Docker + MacOs/Windows/Linux native app)](https://github.com/Mintplex-Labs/anything-llm)
|
||||||
- [Ollama Basic Chat: Uses HyperDiv Reactive UI](https://github.com/rapidarchitect/ollama_basic_chat)
|
- [Ollama Basic Chat: Uses HyperDiv Reactive UI](https://github.com/rapidarchitect/ollama_basic_chat)
|
||||||
- [Ollama-chats RPG](https://github.com/drazdra/ollama-chats)
|
- [Ollama-chats RPG](https://github.com/drazdra/ollama-chats)
|
||||||
- [QA-Pilot](https://github.com/reid41/QA-Pilot) (Chat with Code Repository)
|
- [IntelliBar](https://intellibar.app/) (AI-powered assistant for macOS)
|
||||||
|
- [QA-Pilot](https://github.com/reid41/QA-Pilot) (Interactive chat tool that can leverage Ollama models for rapid understanding and navigation of GitHub code repositories)
|
||||||
- [ChatOllama](https://github.com/sugarforever/chat-ollama) (Open Source Chatbot based on Ollama with Knowledge Bases)
|
- [ChatOllama](https://github.com/sugarforever/chat-ollama) (Open Source Chatbot based on Ollama with Knowledge Bases)
|
||||||
- [CRAG Ollama Chat](https://github.com/Nagi-ovo/CRAG-Ollama-Chat) (Simple Web Search with Corrective RAG)
|
- [CRAG Ollama Chat](https://github.com/Nagi-ovo/CRAG-Ollama-Chat) (Simple Web Search with Corrective RAG)
|
||||||
- [RAGFlow](https://github.com/infiniflow/ragflow) (Open-source Retrieval-Augmented Generation engine based on deep document understanding)
|
- [RAGFlow](https://github.com/infiniflow/ragflow) (Open-source Retrieval-Augmented Generation engine based on deep document understanding)
|
||||||
@@ -306,11 +321,17 @@ See the [API documentation](./docs/api.md) for all endpoints.
|
|||||||
- [Ollama RAG Chatbot](https://github.com/datvodinh/rag-chatbot.git) (Local Chat with multiple PDFs using Ollama and RAG)
|
- [Ollama RAG Chatbot](https://github.com/datvodinh/rag-chatbot.git) (Local Chat with multiple PDFs using Ollama and RAG)
|
||||||
- [BrainSoup](https://www.nurgo-software.com/products/brainsoup) (Flexible native client with RAG & multi-agent automation)
|
- [BrainSoup](https://www.nurgo-software.com/products/brainsoup) (Flexible native client with RAG & multi-agent automation)
|
||||||
- [macai](https://github.com/Renset/macai) (macOS client for Ollama, ChatGPT, and other compatible API back-ends)
|
- [macai](https://github.com/Renset/macai) (macOS client for Ollama, ChatGPT, and other compatible API back-ends)
|
||||||
|
- [RWKV-Runner](https://github.com/josStorer/RWKV-Runner) (RWKV offline LLM deployment tool, also usable as a client for ChatGPT and Ollama)
|
||||||
|
- [Ollama Grid Search](https://github.com/dezoito/ollama-grid-search) (app to evaluate and compare models)
|
||||||
- [Olpaka](https://github.com/Otacon/olpaka) (User-friendly Flutter Web App for Ollama)
|
- [Olpaka](https://github.com/Otacon/olpaka) (User-friendly Flutter Web App for Ollama)
|
||||||
- [OllamaSpring](https://github.com/CrazyNeil/OllamaSpring) (Ollama Client for macOS)
|
- [OllamaSpring](https://github.com/CrazyNeil/OllamaSpring) (Ollama Client for macOS)
|
||||||
- [LLocal.in](https://github.com/kartikm7/llocal) (Easy to use Electron Desktop Client for Ollama)
|
- [LLocal.in](https://github.com/kartikm7/llocal) (Easy to use Electron Desktop Client for Ollama)
|
||||||
|
- [Shinkai Desktop](https://github.com/dcSpark/shinkai-apps) (Two click install Local AI using Ollama + Files + RAG)
|
||||||
- [AiLama](https://github.com/zeyoyt/ailama) (A Discord User App that allows you to interact with Ollama anywhere in discord )
|
- [AiLama](https://github.com/zeyoyt/ailama) (A Discord User App that allows you to interact with Ollama anywhere in discord )
|
||||||
- [Ollama with Google Mesop](https://github.com/rapidarchitect/ollama_mesop/) (Mesop Chat Client implementation with Ollama)
|
- [Ollama with Google Mesop](https://github.com/rapidarchitect/ollama_mesop/) (Mesop Chat Client implementation with Ollama)
|
||||||
|
- [R2R](https://github.com/SciPhi-AI/R2R) (Open-source RAG engine)
|
||||||
|
- [Ollama-Kis](https://github.com/elearningshow/ollama-kis) (A simple easy to use GUI with sample custom LLM for Drivers Education)
|
||||||
|
- [OpenGPA](https://opengpa.org) (Open-source offline-first Enterprise Agentic Application)
|
||||||
- [Painting Droid](https://github.com/mateuszmigas/painting-droid) (Painting app with AI integrations)
|
- [Painting Droid](https://github.com/mateuszmigas/painting-droid) (Painting app with AI integrations)
|
||||||
- [Kerlig AI](https://www.kerlig.com/) (AI writing assistant for macOS)
|
- [Kerlig AI](https://www.kerlig.com/) (AI writing assistant for macOS)
|
||||||
- [AI Studio](https://github.com/MindWorkAI/AI-Studio)
|
- [AI Studio](https://github.com/MindWorkAI/AI-Studio)
|
||||||
@@ -318,6 +339,9 @@ See the [API documentation](./docs/api.md) for all endpoints.
|
|||||||
- [LLMStack](https://github.com/trypromptly/LLMStack) (No-code multi-agent framework to build LLM agents and workflows)
|
- [LLMStack](https://github.com/trypromptly/LLMStack) (No-code multi-agent framework to build LLM agents and workflows)
|
||||||
- [BoltAI for Mac](https://boltai.com) (AI Chat Client for Mac)
|
- [BoltAI for Mac](https://boltai.com) (AI Chat Client for Mac)
|
||||||
- [Harbor](https://github.com/av/harbor) (Containerized LLM Toolkit with Ollama as default backend)
|
- [Harbor](https://github.com/av/harbor) (Containerized LLM Toolkit with Ollama as default backend)
|
||||||
|
- [PyGPT](https://github.com/szczyglis-dev/py-gpt) (AI desktop assistant for Linux, Windows and Mac)
|
||||||
|
- [Alpaca](https://github.com/Jeffser/Alpaca) (An Ollama client application for linux and macos made with GTK4 and Adwaita)
|
||||||
|
- [AutoGPT](https://github.com/Significant-Gravitas/AutoGPT/blob/master/docs/content/platform/ollama.md) (AutoGPT Ollama integration)
|
||||||
- [Go-CREW](https://www.jonathanhecl.com/go-crew/) (Powerful Offline RAG in Golang)
|
- [Go-CREW](https://www.jonathanhecl.com/go-crew/) (Powerful Offline RAG in Golang)
|
||||||
- [PartCAD](https://github.com/openvmp/partcad/) (CAD model generation with OpenSCAD and CadQuery)
|
- [PartCAD](https://github.com/openvmp/partcad/) (CAD model generation with OpenSCAD and CadQuery)
|
||||||
- [Ollama4j Web UI](https://github.com/ollama4j/ollama4j-web-ui) - Java-based Web UI for Ollama built with Vaadin, Spring Boot and Ollama4j
|
- [Ollama4j Web UI](https://github.com/ollama4j/ollama4j-web-ui) - Java-based Web UI for Ollama built with Vaadin, Spring Boot and Ollama4j
|
||||||
@@ -327,16 +351,62 @@ See the [API documentation](./docs/api.md) for all endpoints.
|
|||||||
- [ConfiChat](https://github.com/1runeberg/confichat) (Lightweight, standalone, multi-platform, and privacy focused LLM chat interface with optional encryption)
|
- [ConfiChat](https://github.com/1runeberg/confichat) (Lightweight, standalone, multi-platform, and privacy focused LLM chat interface with optional encryption)
|
||||||
- [Archyve](https://github.com/nickthecook/archyve) (RAG-enabling document library)
|
- [Archyve](https://github.com/nickthecook/archyve) (RAG-enabling document library)
|
||||||
- [crewAI with Mesop](https://github.com/rapidarchitect/ollama-crew-mesop) (Mesop Web Interface to run crewAI with Ollama)
|
- [crewAI with Mesop](https://github.com/rapidarchitect/ollama-crew-mesop) (Mesop Web Interface to run crewAI with Ollama)
|
||||||
|
- [Tkinter-based client](https://github.com/chyok/ollama-gui) (Python tkinter-based Client for Ollama)
|
||||||
- [LLMChat](https://github.com/trendy-design/llmchat) (Privacy focused, 100% local, intuitive all-in-one chat interface)
|
- [LLMChat](https://github.com/trendy-design/llmchat) (Privacy focused, 100% local, intuitive all-in-one chat interface)
|
||||||
|
- [Local Multimodal AI Chat](https://github.com/Leon-Sander/Local-Multimodal-AI-Chat) (Ollama-based LLM Chat with support for multiple features, including PDF RAG, voice chat, image-based interactions, and integration with OpenAI.)
|
||||||
- [ARGO](https://github.com/xark-argo/argo) (Locally download and run Ollama and Huggingface models with RAG on Mac/Windows/Linux)
|
- [ARGO](https://github.com/xark-argo/argo) (Locally download and run Ollama and Huggingface models with RAG on Mac/Windows/Linux)
|
||||||
|
- [OrionChat](https://github.com/EliasPereirah/OrionChat) - OrionChat is a web interface for chatting with different AI providers
|
||||||
- [G1](https://github.com/bklieger-groq/g1) (Prototype of using prompting strategies to improve the LLM's reasoning through o1-like reasoning chains.)
|
- [G1](https://github.com/bklieger-groq/g1) (Prototype of using prompting strategies to improve the LLM's reasoning through o1-like reasoning chains.)
|
||||||
|
- [Web management](https://github.com/lemonit-eric-mao/ollama-web-management) (Web management page)
|
||||||
|
- [Promptery](https://github.com/promptery/promptery) (desktop client for Ollama.)
|
||||||
- [Ollama App](https://github.com/JHubi1/ollama-app) (Modern and easy-to-use multi-platform client for Ollama)
|
- [Ollama App](https://github.com/JHubi1/ollama-app) (Modern and easy-to-use multi-platform client for Ollama)
|
||||||
|
- [chat-ollama](https://github.com/annilq/chat-ollama) (a React Native client for Ollama)
|
||||||
|
- [SpaceLlama](https://github.com/tcsenpai/spacellama) (Firefox and Chrome extension to quickly summarize web pages with ollama in a sidebar)
|
||||||
|
- [YouLama](https://github.com/tcsenpai/youlama) (Webapp to quickly summarize any YouTube video, supporting Invidious as well)
|
||||||
|
- [DualMind](https://github.com/tcsenpai/dualmind) (Experimental app allowing two models to talk to each other in the terminal or in a web interface)
|
||||||
|
- [ollamarama-matrix](https://github.com/h1ddenpr0cess20/ollamarama-matrix) (Ollama chatbot for the Matrix chat protocol)
|
||||||
|
- [ollama-chat-app](https://github.com/anan1213095357/ollama-chat-app) (Flutter-based chat app)
|
||||||
|
- [Perfect Memory AI](https://www.perfectmemory.ai/) (Productivity AI assists personalized by what you have seen on your screen, heard and said in the meetings)
|
||||||
|
- [Hexabot](https://github.com/hexastack/hexabot) (A conversational AI builder)
|
||||||
|
- [Reddit Rate](https://github.com/rapidarchitect/reddit_analyzer) (Search and Rate Reddit topics with a weighted summation)
|
||||||
|
- [OpenTalkGpt](https://github.com/adarshM84/OpenTalkGpt) (Chrome Extension to manage open-source models supported by Ollama, create custom models, and chat with models from a user-friendly UI)
|
||||||
|
- [VT](https://github.com/vinhnx/vt.ai) (A minimal multimodal AI chat app, with dynamic conversation routing. Supports local models via Ollama)
|
||||||
|
- [Nosia](https://github.com/nosia-ai/nosia) (Easy to install and use RAG platform based on Ollama)
|
||||||
|
- [Witsy](https://github.com/nbonamy/witsy) (An AI Desktop application available for Mac/Windows/Linux)
|
||||||
|
- [Abbey](https://github.com/US-Artificial-Intelligence/abbey) (A configurable AI interface server with notebooks, document storage, and YouTube support)
|
||||||
|
- [Minima](https://github.com/dmayboroda/minima) (RAG with on-premises or fully local workflow)
|
||||||
|
- [aidful-ollama-model-delete](https://github.com/AidfulAI/aidful-ollama-model-delete) (User interface for simplified model cleanup)
|
||||||
|
- [Perplexica](https://github.com/ItzCrazyKns/Perplexica) (An AI-powered search engine & an open-source alternative to Perplexity AI)
|
||||||
|
- [Ollama Chat WebUI for Docker ](https://github.com/oslook/ollama-webui) (Support for local docker deployment, lightweight ollama webui)
|
||||||
|
- [AI Toolkit for Visual Studio Code](https://aka.ms/ai-tooklit/ollama-docs) (Microsoft-official VSCode extension to chat, test, evaluate models with Ollama support, and use them in your AI applications.)
|
||||||
|
- [MinimalNextOllamaChat](https://github.com/anilkay/MinimalNextOllamaChat) (Minimal Web UI for Chat and Model Control)
|
||||||
|
- [Chipper](https://github.com/TilmanGriesel/chipper) AI interface for tinkerers (Ollama, Haystack RAG, Python)
|
||||||
|
- [ChibiChat](https://github.com/CosmicEventHorizon/ChibiChat) (Kotlin-based Android app to chat with Ollama and Koboldcpp API endpoints)
|
||||||
|
- [LocalLLM](https://github.com/qusaismael/localllm) (Minimal Web-App to run ollama models on it with a GUI)
|
||||||
|
- [Ollamazing](https://github.com/buiducnhat/ollamazing) (Web extension to run Ollama models)
|
||||||
|
- [OpenDeepResearcher-via-searxng](https://github.com/benhaotang/OpenDeepResearcher-via-searxng) (A Deep Research equivalent endpoint with Ollama support for running locally)
|
||||||
|
- [AntSK](https://github.com/AIDotNet/AntSK) (Out-of-the-box & Adaptable RAG Chatbot)
|
||||||
|
- [MaxKB](https://github.com/1Panel-dev/MaxKB/) (Ready-to-use & flexible RAG Chatbot)
|
||||||
|
- [yla](https://github.com/danielekp/yla) (Web interface to freely interact with your customized models)
|
||||||
|
- [LangBot](https://github.com/RockChinQ/LangBot) (LLM-based instant messaging bots platform, with Agents, RAG features, supports multiple platforms)
|
||||||
|
- [1Panel](https://github.com/1Panel-dev/1Panel/) (Web-based Linux Server Management Tool)
|
||||||
|
- [AstrBot](https://github.com/Soulter/AstrBot/) (User-friendly LLM-based multi-platform chatbot with a WebUI, supporting RAG, LLM agents, and plugins integration)
|
||||||
|
- [Reins](https://github.com/ibrahimcetin/reins) (Easily tweak parameters, customize system prompts per chat, and enhance your AI experiments with reasoning model support.)
|
||||||
|
- [Ellama](https://github.com/zeozeozeo/ellama) (Friendly native app to chat with an Ollama instance)
|
||||||
|
- [screenpipe](https://github.com/mediar-ai/screenpipe) Build agents powered by your screen history
|
||||||
|
|
||||||
|
### Cloud
|
||||||
|
|
||||||
|
- [Google Cloud](https://cloud.google.com/run/docs/tutorials/gpu-gemma2-with-ollama)
|
||||||
|
- [Fly.io](https://fly.io/docs/python/do-more/add-ollama/)
|
||||||
|
- [Koyeb](https://www.koyeb.com/deploy/ollama)
|
||||||
|
|
||||||
### Terminal
|
### Terminal
|
||||||
|
|
||||||
- [oterm](https://github.com/ggozad/oterm)
|
- [oterm](https://github.com/ggozad/oterm)
|
||||||
- [Ellama Emacs client](https://github.com/s-kostyaev/ellama)
|
- [Ellama Emacs client](https://github.com/s-kostyaev/ellama)
|
||||||
- [Emacs client](https://github.com/zweifisch/ollama)
|
- [Emacs client](https://github.com/zweifisch/ollama)
|
||||||
|
- [neollama](https://github.com/paradoxical-dev/neollama) UI client for interacting with models from within Neovim
|
||||||
- [gen.nvim](https://github.com/David-Kunz/gen.nvim)
|
- [gen.nvim](https://github.com/David-Kunz/gen.nvim)
|
||||||
- [ollama.nvim](https://github.com/nomnivore/ollama.nvim)
|
- [ollama.nvim](https://github.com/nomnivore/ollama.nvim)
|
||||||
- [ollero.nvim](https://github.com/marco-souza/ollero.nvim)
|
- [ollero.nvim](https://github.com/marco-souza/ollero.nvim)
|
||||||
@@ -346,7 +416,7 @@ See the [API documentation](./docs/api.md) for all endpoints.
|
|||||||
- [Oatmeal](https://github.com/dustinblackman/oatmeal)
|
- [Oatmeal](https://github.com/dustinblackman/oatmeal)
|
||||||
- [cmdh](https://github.com/pgibler/cmdh)
|
- [cmdh](https://github.com/pgibler/cmdh)
|
||||||
- [ooo](https://github.com/npahlfer/ooo)
|
- [ooo](https://github.com/npahlfer/ooo)
|
||||||
- [shell-pilot](https://github.com/reid41/shell-pilot)
|
- [shell-pilot](https://github.com/reid41/shell-pilot) (Interact with models via pure shell scripts on Linux or macOS)
|
||||||
- [tenere](https://github.com/pythops/tenere)
|
- [tenere](https://github.com/pythops/tenere)
|
||||||
- [llm-ollama](https://github.com/taketwo/llm-ollama) for [Datasette's LLM CLI](https://llm.datasette.io/en/stable/).
|
- [llm-ollama](https://github.com/taketwo/llm-ollama) for [Datasette's LLM CLI](https://llm.datasette.io/en/stable/).
|
||||||
- [typechat-cli](https://github.com/anaisbetts/typechat-cli)
|
- [typechat-cli](https://github.com/anaisbetts/typechat-cli)
|
||||||
@@ -354,25 +424,38 @@ See the [API documentation](./docs/api.md) for all endpoints.
|
|||||||
- [tlm](https://github.com/yusufcanb/tlm)
|
- [tlm](https://github.com/yusufcanb/tlm)
|
||||||
- [podman-ollama](https://github.com/ericcurtin/podman-ollama)
|
- [podman-ollama](https://github.com/ericcurtin/podman-ollama)
|
||||||
- [gollama](https://github.com/sammcj/gollama)
|
- [gollama](https://github.com/sammcj/gollama)
|
||||||
|
- [ParLlama](https://github.com/paulrobello/parllama)
|
||||||
- [Ollama eBook Summary](https://github.com/cognitivetech/ollama-ebook-summary/)
|
- [Ollama eBook Summary](https://github.com/cognitivetech/ollama-ebook-summary/)
|
||||||
- [Ollama Mixture of Experts (MOE) in 50 lines of code](https://github.com/rapidarchitect/ollama_moe)
|
- [Ollama Mixture of Experts (MOE) in 50 lines of code](https://github.com/rapidarchitect/ollama_moe)
|
||||||
- [vim-intelligence-bridge](https://github.com/pepo-ec/vim-intelligence-bridge) Simple interaction of "Ollama" with the Vim editor
|
- [vim-intelligence-bridge](https://github.com/pepo-ec/vim-intelligence-bridge) Simple interaction of "Ollama" with the Vim editor
|
||||||
|
- [x-cmd ollama](https://x-cmd.com/mod/ollama)
|
||||||
|
- [bb7](https://github.com/drunkwcodes/bb7)
|
||||||
|
- [SwollamaCLI](https://github.com/marcusziade/Swollama) bundled with the Swollama Swift package. [Demo](https://github.com/marcusziade/Swollama?tab=readme-ov-file#cli-usage)
|
||||||
|
- [aichat](https://github.com/sigoden/aichat) All-in-one LLM CLI tool featuring Shell Assistant, Chat-REPL, RAG, AI tools & agents, with access to OpenAI, Claude, Gemini, Ollama, Groq, and more.
|
||||||
|
- [PowershAI](https://github.com/rrg92/powershai) PowerShell module that brings AI to terminal on Windows, including support for Ollama
|
||||||
|
- [orbiton](https://github.com/xyproto/orbiton) Configuration-free text editor and IDE with support for tab completion with Ollama.
|
||||||
|
|
||||||
### Apple Vision Pro
|
### Apple Vision Pro
|
||||||
|
|
||||||
|
- [SwiftChat](https://github.com/aws-samples/swift-chat) (Cross-platform AI chat app supporting Apple Vision Pro via "Designed for iPad")
|
||||||
- [Enchanted](https://github.com/AugustDev/enchanted)
|
- [Enchanted](https://github.com/AugustDev/enchanted)
|
||||||
|
|
||||||
### Database
|
### Database
|
||||||
|
|
||||||
|
- [pgai](https://github.com/timescale/pgai) - PostgreSQL as a vector database (Create and search embeddings from Ollama models using pgvector)
|
||||||
|
- [Get started guide](https://github.com/timescale/pgai/blob/main/docs/vectorizer-quick-start.md)
|
||||||
- [MindsDB](https://github.com/mindsdb/mindsdb/blob/staging/mindsdb/integrations/handlers/ollama_handler/README.md) (Connects Ollama models with nearly 200 data platforms and apps)
|
- [MindsDB](https://github.com/mindsdb/mindsdb/blob/staging/mindsdb/integrations/handlers/ollama_handler/README.md) (Connects Ollama models with nearly 200 data platforms and apps)
|
||||||
- [chromem-go](https://github.com/philippgille/chromem-go/blob/v0.5.0/embed_ollama.go) with [example](https://github.com/philippgille/chromem-go/tree/v0.5.0/examples/rag-wikipedia-ollama)
|
- [chromem-go](https://github.com/philippgille/chromem-go/blob/v0.5.0/embed_ollama.go) with [example](https://github.com/philippgille/chromem-go/tree/v0.5.0/examples/rag-wikipedia-ollama)
|
||||||
|
- [Kangaroo](https://github.com/dbkangaroo/kangaroo) (AI-powered SQL client and admin tool for popular databases)
|
||||||
|
|
||||||
### Package managers
|
### Package managers
|
||||||
|
|
||||||
- [Pacman](https://archlinux.org/packages/extra/x86_64/ollama/)
|
- [Pacman](https://archlinux.org/packages/extra/x86_64/ollama/)
|
||||||
- [Gentoo](https://github.com/gentoo/guru/tree/master/app-misc/ollama)
|
- [Gentoo](https://github.com/gentoo/guru/tree/master/app-misc/ollama)
|
||||||
|
- [Homebrew](https://formulae.brew.sh/formula/ollama)
|
||||||
- [Helm Chart](https://artifacthub.io/packages/helm/ollama-helm/ollama)
|
- [Helm Chart](https://artifacthub.io/packages/helm/ollama-helm/ollama)
|
||||||
- [Guix channel](https://codeberg.org/tusharhero/ollama-guix)
|
- [Guix channel](https://codeberg.org/tusharhero/ollama-guix)
|
||||||
- [Nix package](https://search.nixos.org/packages?channel=24.05&show=ollama&from=0&size=50&sort=relevance&type=packages&query=ollama)
|
- [Nix package](https://search.nixos.org/packages?show=ollama&from=0&size=50&sort=relevance&type=packages&query=ollama)
|
||||||
- [Flox](https://flox.dev/blog/ollama-part-one)
|
- [Flox](https://flox.dev/blog/ollama-part-one)
|
||||||
|
|
||||||
### Libraries
|
### Libraries
|
||||||
@@ -380,9 +463,13 @@ See the [API documentation](./docs/api.md) for all endpoints.
|
|||||||
- [LangChain](https://python.langchain.com/docs/integrations/llms/ollama) and [LangChain.js](https://js.langchain.com/docs/integrations/chat/ollama/) with [example](https://js.langchain.com/docs/tutorials/local_rag/)
|
- [LangChain](https://python.langchain.com/docs/integrations/llms/ollama) and [LangChain.js](https://js.langchain.com/docs/integrations/chat/ollama/) with [example](https://js.langchain.com/docs/tutorials/local_rag/)
|
||||||
- [Firebase Genkit](https://firebase.google.com/docs/genkit/plugins/ollama)
|
- [Firebase Genkit](https://firebase.google.com/docs/genkit/plugins/ollama)
|
||||||
- [crewAI](https://github.com/crewAIInc/crewAI)
|
- [crewAI](https://github.com/crewAIInc/crewAI)
|
||||||
|
- [Yacana](https://remembersoftwares.github.io/yacana/) (User-friendly multi-agent framework for brainstorming and executing predetermined flows with built-in tool integration)
|
||||||
|
- [Spring AI](https://github.com/spring-projects/spring-ai) with [reference](https://docs.spring.io/spring-ai/reference/api/chat/ollama-chat.html) and [example](https://github.com/tzolov/ollama-tools)
|
||||||
- [LangChainGo](https://github.com/tmc/langchaingo/) with [example](https://github.com/tmc/langchaingo/tree/main/examples/ollama-completion-example)
|
- [LangChainGo](https://github.com/tmc/langchaingo/) with [example](https://github.com/tmc/langchaingo/tree/main/examples/ollama-completion-example)
|
||||||
- [LangChain4j](https://github.com/langchain4j/langchain4j) with [example](https://github.com/langchain4j/langchain4j-examples/tree/main/ollama-examples/src/main/java)
|
- [LangChain4j](https://github.com/langchain4j/langchain4j) with [example](https://github.com/langchain4j/langchain4j-examples/tree/main/ollama-examples/src/main/java)
|
||||||
- [LangChainRust](https://github.com/Abraxas-365/langchain-rust) with [example](https://github.com/Abraxas-365/langchain-rust/blob/main/examples/llm_ollama.rs)
- [LangChain for .NET](https://github.com/tryAGI/LangChain) with [example](https://github.com/tryAGI/LangChain/blob/main/examples/LangChain.Samples.OpenAI/Program.cs)
- [LLPhant](https://github.com/theodo-group/LLPhant?tab=readme-ov-file#ollama)
- [LlamaIndex](https://docs.llamaindex.ai/en/stable/examples/llm/ollama/) and [LlamaIndexTS](https://ts.llamaindex.ai/modules/llms/available_llms/ollama)
- [LiteLLM](https://github.com/BerriAI/litellm)
- [OllamaFarm for Go](https://github.com/presbrey/ollamafarm)
@@ -407,24 +494,42 @@ See the [API documentation](./docs/api.md) for all endpoints.
- [Portkey](https://portkey.ai/docs/welcome/integration-guides/ollama)
- [PromptingTools.jl](https://github.com/svilupp/PromptingTools.jl) with an [example](https://svilupp.github.io/PromptingTools.jl/dev/examples/working_with_ollama)
- [LlamaScript](https://github.com/Project-Llama/llamascript)
- [llm-axe](https://github.com/emirsahin1/llm-axe) (Python toolkit for building LLM-powered apps)
- [Gollm](https://docs.gollm.co/examples/ollama-example)
- [Gollama for Golang](https://github.com/jonathanhecl/gollama)
- [Ollamaclient for Golang](https://github.com/xyproto/ollamaclient)
- [High-level function abstraction in Go](https://gitlab.com/tozd/go/fun)
- [Ollama PHP](https://github.com/ArdaGnsrn/ollama-php)
- [Agents-Flex for Java](https://github.com/agents-flex/agents-flex) with [example](https://github.com/agents-flex/agents-flex/tree/main/agents-flex-llm/agents-flex-llm-ollama/src/test/java/com/agentsflex/llm/ollama)
- [Parakeet](https://github.com/parakeet-nest/parakeet) is a GoLang library made to simplify the development of small generative AI applications with Ollama.
- [Haverscript](https://github.com/andygill/haverscript) with [examples](https://github.com/andygill/haverscript/tree/main/examples)
- [Ollama for Swift](https://github.com/mattt/ollama-swift)
- [Swollama for Swift](https://github.com/marcusziade/Swollama) with [DocC](https://marcusziade.github.io/Swollama/documentation/swollama/)
- [GoLamify](https://github.com/prasad89/golamify)
- [Ollama for Haskell](https://github.com/tusharad/ollama-haskell)
- [multi-llm-ts](https://github.com/nbonamy/multi-llm-ts) (A TypeScript/JavaScript library providing access to different LLMs through a unified API)
- [LlmTornado](https://github.com/lofcz/llmtornado) (C# library providing a unified interface for major FOSS & commercial inference APIs)
- [Ollama for Zig](https://github.com/dravenk/ollama-zig)
- [Abso](https://github.com/lunary-ai/abso) (OpenAI-compatible TypeScript SDK for any LLM provider)
- [Nichey](https://github.com/goodreasonai/nichey) is a Python package for generating custom wikis for your research topic
- [Ollama for D](https://github.com/kassane/ollama-d)

### Mobile

- [SwiftChat](https://github.com/aws-samples/swift-chat) (Lightning-fast cross-platform AI chat app with a native UI for Android, iOS, and iPad)
- [Enchanted](https://github.com/AugustDev/enchanted)
- [Maid](https://github.com/Mobile-Artificial-Intelligence/maid)
- [Ollama App](https://github.com/JHubi1/ollama-app) (Modern and easy-to-use multi-platform client for Ollama)
- [ConfiChat](https://github.com/1runeberg/confichat) (Lightweight, standalone, multi-platform, and privacy-focused LLM chat interface with optional encryption)
- [Ollama Android Chat](https://github.com/sunshine0523/OllamaServer) (No need for Termux; start the Ollama service with one click on an Android device)
- [Reins](https://github.com/ibrahimcetin/reins) (Easily tweak parameters, customize system prompts per chat, and enhance your AI experiments with reasoning model support)

### Extensions & Plugins

- [Raycast extension](https://github.com/MassimilianoPasquini97/raycast_ollama)
- [Discollama](https://github.com/mxyng/discollama) (Discord bot inside the Ollama Discord channel)
- [Continue](https://github.com/continuedev/continue)
- [Vibe](https://github.com/thewh1teagle/vibe) (Transcribe and analyze meetings with Ollama)
- [Obsidian Ollama plugin](https://github.com/hinterdupfinger/obsidian-ollama)
- [Logseq Ollama plugin](https://github.com/omagdy7/ollama-logseq)
- [NotesOllama](https://github.com/andersrex/notesollama) (Apple Notes Ollama plugin)
@@ -447,14 +552,32 @@ See the [API documentation](./docs/api.md) for all endpoints.
- [AI Telegram Bot](https://github.com/tusharhero/aitelegrambot) (Telegram bot using Ollama as its backend)
- [AI ST Completion](https://github.com/yaroslavyaroslav/OpenAI-sublime-text) (Sublime Text 4 AI assistant plugin with Ollama support)
- [Discord-Ollama Chat Bot](https://github.com/kevinthedang/discord-ollama) (Generalized TypeScript Discord bot with tuning documentation)
- [ChatGPTBox: All in one browser extension](https://github.com/josStorer/chatGPTBox) with [integration tutorial](https://github.com/josStorer/chatGPTBox/issues/616#issuecomment-1975186467)
- [Discord AI chat/moderation bot](https://github.com/rapmd73/Companion) Chat/moderation bot written in Python; uses Ollama to create personalities.
- [Headless Ollama](https://github.com/nischalj10/headless-ollama) (Scripts to automatically install the Ollama client & models on any OS for apps that depend on the Ollama server)
- [Terraform AWS Ollama & Open WebUI](https://github.com/xuyangbocn/terraform-aws-self-host-llm) (A Terraform module to deploy a ready-to-use Ollama service on AWS, together with its Open WebUI front end)
- [node-red-contrib-ollama](https://github.com/jakubburkiewicz/node-red-contrib-ollama)
- [Local AI Helper](https://github.com/ivostoykov/localAI) (Chrome and Firefox extensions that enable interactions with the active tab and customisable API endpoints; includes secure storage for user prompts)
- [vnc-lm](https://github.com/jake83741/vnc-lm) (Discord bot for messaging with LLMs through Ollama and LiteLLM; seamlessly move between local and flagship models)
- [LSP-AI](https://github.com/SilasMarvin/lsp-ai) (Open-source language server for AI-powered functionality)
- [QodeAssist](https://github.com/Palm1r/QodeAssist) (AI-powered coding assistant plugin for Qt Creator)
- [Obsidian Quiz Generator plugin](https://github.com/ECuiDev/obsidian-quiz-generator)
- [AI Summary Helper plugin](https://github.com/philffm/ai-summary-helper)
- [TextCraft](https://github.com/suncloudsmoon/TextCraft) (Copilot-in-Word alternative using Ollama)
- [Alfred Ollama](https://github.com/zeitlings/alfred-ollama) (Alfred Workflow)
- [TextLLaMA](https://github.com/adarshM84/TextLLaMA) (A Chrome extension that helps you write emails, correct grammar, and translate into any language)
- [Simple-Discord-AI](https://github.com/zyphixor/simple-discord-ai)
- [LLM Telegram Bot](https://github.com/innightwolfsleep/llm_telegram_bot) (Telegram bot, primarily for RP; Oobabooga-like buttons, [A1111](https://github.com/AUTOMATIC1111/stable-diffusion-webui) API integration, etc.)
- [mcp-llm](https://github.com/sammcj/mcp-llm) (MCP server to allow LLMs to call other LLMs)

### Supported backends

- [llama.cpp](https://github.com/ggerganov/llama.cpp) project founded by Georgi Gerganov.

### Observability

- [Opik](https://www.comet.com/docs/opik/cookbook/ollama) is an open-source platform to debug, evaluate, and monitor your LLM applications, RAG systems, and agentic workflows with comprehensive tracing, automated evaluations, and production-ready dashboards. Opik supports native integration with Ollama.
- [Lunary](https://lunary.ai/docs/integrations/ollama) is the leading open-source LLM observability platform. It provides a variety of enterprise-grade features such as real-time analytics, prompt template management, PII masking, and comprehensive agent tracing.
- [OpenLIT](https://github.com/openlit/openlit) is an OpenTelemetry-native tool for monitoring Ollama applications & GPUs using traces and metrics.
- [HoneyHive](https://docs.honeyhive.ai/integrations/ollama) is an AI observability and evaluation platform for AI agents. Use HoneyHive to evaluate agent performance, interrogate failures, and monitor quality in production.
- [Langfuse](https://langfuse.com/docs/integrations/ollama) is an open-source LLM observability platform that enables teams to collaboratively monitor, evaluate, and debug AI applications.
- [MLflow Tracing](https://mlflow.org/docs/latest/llms/tracing/index.html#automatic-tracing) is an open-source LLM observability tool with a convenient API to log and visualize traces, making it easy to debug and evaluate GenAI applications.
@@ -10,7 +10,7 @@
 // repository].
 //
 // [the API documentation]: https://github.com/ollama/ollama/blob/main/docs/api.md
-// [in the GitHub repository]: https://github.com/ollama/ollama/tree/main/examples
+// [in the GitHub repository]: https://github.com/ollama/ollama/tree/main/api/examples
 package api
 
 import (
@@ -55,7 +55,7 @@ func checkError(resp *http.Response, body []byte) error {
 
 // ClientFromEnvironment creates a new [Client] using configuration from the
 // environment variable OLLAMA_HOST, which points to the network host and
-// port on which the ollama service is listenting. The format of this variable
+// port on which the ollama service is listening. The format of this variable
 // is:
 //
 //	<scheme>://<host>:<port>
@@ -132,7 +132,7 @@ func (c *Client) do(ctx context.Context, method, path string, reqData, respData
 const maxBufferSize = 512 * format.KiloByte
 
 func (c *Client) stream(ctx context.Context, method, path string, data any, fn func([]byte) error) error {
-	var buf *bytes.Buffer
+	var buf io.Reader
 	if data != nil {
 		bts, err := json.Marshal(data)
 		if err != nil {
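For reference, a minimal sketch (not part of this diff) of how a caller typically uses `ClientFromEnvironment` together with `OLLAMA_HOST`; the `Heartbeat` call is just one convenient way to confirm the configured host is reachable:

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/ollama/ollama/api"
)

func main() {
	// Reads OLLAMA_HOST (e.g. "http://127.0.0.1:11434") to configure the
	// client; falls back to the default local address when unset.
	client, err := api.ClientFromEnvironment()
	if err != nil {
		log.Fatal(err)
	}

	// Ping the server to verify the host and port are correct.
	if err := client.Heartbeat(context.Background()); err != nil {
		log.Fatal(err)
	}
	fmt.Println("ollama server is reachable")
}
```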
@@ -1,6 +1,13 @@
 package api
 
 import (
+	"context"
+	"encoding/json"
+	"fmt"
+	"net/http"
+	"net/http/httptest"
+	"net/url"
+	"strings"
 	"testing"
 )
 
@@ -43,3 +50,206 @@ func TestClientFromEnvironment(t *testing.T) {
		})
	}
}

// testError represents an internal error type with status code and message
// this is used since the error response from the server is not a standard error struct
type testError struct {
	message    string
	statusCode int
}

func (e testError) Error() string {
	return e.message
}

func TestClientStream(t *testing.T) {
	testCases := []struct {
		name      string
		responses []any
		wantErr   string
	}{
		{
			name: "immediate error response",
			responses: []any{
				testError{
					message:    "test error message",
					statusCode: http.StatusBadRequest,
				},
			},
			wantErr: "test error message",
		},
		{
			name: "error after successful chunks, ok response",
			responses: []any{
				ChatResponse{Message: Message{Content: "partial response 1"}},
				ChatResponse{Message: Message{Content: "partial response 2"}},
				testError{
					message:    "mid-stream error",
					statusCode: http.StatusOK,
				},
			},
			wantErr: "mid-stream error",
		},
		{
			name: "successful stream completion",
			responses: []any{
				ChatResponse{Message: Message{Content: "chunk 1"}},
				ChatResponse{Message: Message{Content: "chunk 2"}},
				ChatResponse{
					Message:    Message{Content: "final chunk"},
					Done:       true,
					DoneReason: "stop",
				},
			},
		},
	}

	for _, tc := range testCases {
		t.Run(tc.name, func(t *testing.T) {
			ts := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
				flusher, ok := w.(http.Flusher)
				if !ok {
					t.Fatal("expected http.Flusher")
				}

				w.Header().Set("Content-Type", "application/x-ndjson")

				for _, resp := range tc.responses {
					if errResp, ok := resp.(testError); ok {
						w.WriteHeader(errResp.statusCode)
						err := json.NewEncoder(w).Encode(map[string]string{
							"error": errResp.message,
						})
						if err != nil {
							t.Fatal("failed to encode error response:", err)
						}
						return
					}

					if err := json.NewEncoder(w).Encode(resp); err != nil {
						t.Fatalf("failed to encode response: %v", err)
					}
					flusher.Flush()
				}
			}))
			defer ts.Close()

			client := NewClient(&url.URL{Scheme: "http", Host: ts.Listener.Addr().String()}, http.DefaultClient)

			var receivedChunks []ChatResponse
			err := client.stream(context.Background(), http.MethodPost, "/v1/chat", nil, func(chunk []byte) error {
				var resp ChatResponse
				if err := json.Unmarshal(chunk, &resp); err != nil {
					return fmt.Errorf("failed to unmarshal chunk: %w", err)
				}
				receivedChunks = append(receivedChunks, resp)
				return nil
			})

			if tc.wantErr != "" {
				if err == nil {
					t.Fatal("expected error but got nil")
				}
				if !strings.Contains(err.Error(), tc.wantErr) {
					t.Errorf("expected error containing %q, got %v", tc.wantErr, err)
				}
				return
			}
			if err != nil {
				t.Errorf("unexpected error: %v", err)
			}
		})
	}
}

func TestClientDo(t *testing.T) {
	testCases := []struct {
		name     string
		response any
		wantErr  string
	}{
		{
			name: "immediate error response",
			response: testError{
				message:    "test error message",
				statusCode: http.StatusBadRequest,
			},
			wantErr: "test error message",
		},
		{
			name: "server error response",
			response: testError{
				message:    "internal error",
				statusCode: http.StatusInternalServerError,
			},
			wantErr: "internal error",
		},
		{
			name: "successful response",
			response: struct {
				ID      string `json:"id"`
				Success bool   `json:"success"`
			}{
				ID:      "msg_123",
				Success: true,
			},
		},
	}

	for _, tc := range testCases {
		t.Run(tc.name, func(t *testing.T) {
			ts := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
				if errResp, ok := tc.response.(testError); ok {
					w.WriteHeader(errResp.statusCode)
					err := json.NewEncoder(w).Encode(map[string]string{
						"error": errResp.message,
					})
					if err != nil {
						t.Fatal("failed to encode error response:", err)
					}
					return
				}

				w.Header().Set("Content-Type", "application/json")
				if err := json.NewEncoder(w).Encode(tc.response); err != nil {
					t.Fatalf("failed to encode response: %v", err)
				}
			}))
			defer ts.Close()

			client := NewClient(&url.URL{Scheme: "http", Host: ts.Listener.Addr().String()}, http.DefaultClient)

			var resp struct {
				ID      string `json:"id"`
				Success bool   `json:"success"`
			}
			err := client.do(context.Background(), http.MethodPost, "/v1/messages", nil, &resp)

			if tc.wantErr != "" {
				if err == nil {
					t.Fatalf("got nil, want error %q", tc.wantErr)
				}
				if err.Error() != tc.wantErr {
					t.Errorf("error message mismatch: got %q, want %q", err.Error(), tc.wantErr)
				}
				return
			}

			if err != nil {
				t.Fatalf("got error %q, want nil", err)
			}

			if expectedResp, ok := tc.response.(struct {
				ID      string `json:"id"`
				Success bool   `json:"success"`
			}); ok {
				if resp.ID != expectedResp.ID {
					t.Errorf("response ID mismatch: got %q, want %q", resp.ID, expectedResp.ID)
				}
				if resp.Success != expectedResp.Success {
					t.Errorf("response Success mismatch: got %v, want %v", resp.Success, expectedResp.Success)
				}
			}
		})
	}
}
18  api/examples/README.md  Normal file
@@ -0,0 +1,18 @@
# Ollama API Examples

Run the examples in this directory with:

```shell
go run example_name/main.go
```

## Chat - Chat with a model
- [chat/main.go](chat/main.go)

## Generate - Generate text from a model
- [generate/main.go](generate/main.go)
- [generate-streaming/main.go](generate-streaming/main.go)

## Pull - Pull a model
- [pull-progress/main.go](pull-progress/main.go)
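To give a sense of what the chat example covers, here is a minimal sketch of a streaming chat call with the api package. It is not the actual contents of chat/main.go (which this diff does not show), and the model name is a placeholder:

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/ollama/ollama/api"
)

func main() {
	client, err := api.ClientFromEnvironment()
	if err != nil {
		log.Fatal(err)
	}

	req := &api.ChatRequest{
		Model: "llama3.2", // placeholder; use any model you have pulled locally
		Messages: []api.Message{
			{Role: "user", Content: "Why is the sky blue?"},
		},
	}

	// The callback fires once per streamed chunk; printing the content
	// pieces in order reconstructs the full reply.
	err = client.Chat(context.Background(), req, func(resp api.ChatResponse) error {
		fmt.Print(resp.Message.Content)
		return nil
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println()
}
```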
62  api/types.go
@@ -10,9 +10,11 @@ import (
 	"strconv"
 	"strings"
 	"time"
+
+	"github.com/ollama/ollama/envconfig"
 )
 
-// StatusError is an error with and HTTP status code.
+// StatusError is an error with an HTTP status code and message.
 type StatusError struct {
 	StatusCode int
 	Status     string
@@ -57,7 +59,7 @@ type GenerateRequest struct {
 	Template string `json:"template"`
 
 	// Context is the context parameter returned from a previous call to
-	// Generate call. It can be used to keep a short conversational memory.
+	// [Client.Generate]. It can be used to keep a short conversational memory.
 	Context []int `json:"context,omitempty"`
 
 	// Stream specifies whether the response is streaming; it is true by default.
@@ -67,7 +69,7 @@ type GenerateRequest struct {
 	Raw bool `json:"raw,omitempty"`
 
 	// Format specifies the format to return a response in.
-	Format string `json:"format"`
+	Format json.RawMessage `json:"format,omitempty"`
 
 	// KeepAlive controls how long the model will stay loaded in memory following
 	// this request.
@@ -90,14 +92,14 @@ type ChatRequest struct {
 	// Messages is the messages of the chat - can be used to keep a chat memory.
 	Messages []Message `json:"messages"`
 
-	// Stream enable streaming of returned response; true by default.
+	// Stream enables streaming of returned responses; true by default.
 	Stream *bool `json:"stream,omitempty"`
 
 	// Format is the format to return the response in (e.g. "json").
-	Format string `json:"format"`
+	Format json.RawMessage `json:"format,omitempty"`
 
 	// KeepAlive controls how long the model will stay loaded into memory
-	// followin the request.
+	// following the request.
 	KeepAlive *Duration `json:"keep_alive,omitempty"`
 
 	// Tools is an optional list of tools the model has access to.
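Because Format changes here from a plain string to json.RawMessage, the same field can now carry either the legacy "json" mode or a JSON schema. A hedged sketch of how a caller might use this (the schema, prompt, and model name are illustrative, not taken from the diff, and schema-constrained output assumes a server build that supports it):

```go
package main

import (
	"context"
	"encoding/json"
	"fmt"
	"log"

	"github.com/ollama/ollama/api"
)

func main() {
	client, err := api.ClientFromEnvironment()
	if err != nil {
		log.Fatal(err)
	}

	// A JSON schema carried in the Format field; json.RawMessage also still
	// accepts the legacy `"json"` string.
	schema := json.RawMessage(`{
		"type": "object",
		"properties": {
			"colors": {"type": "array", "items": {"type": "string"}}
		},
		"required": ["colors"]
	}`)

	req := &api.GenerateRequest{
		Model:  "llama3.2", // placeholder; any locally pulled model
		Prompt: "List three primary colors.",
		Format: schema,
	}

	// Each streamed chunk is printed as it arrives.
	err = client.Generate(context.Background(), req, func(resp api.GenerateResponse) error {
		fmt.Print(resp.Response)
		return nil
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println()
}
```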
@@ -146,6 +148,7 @@ type ToolCall struct {
 }
 
 type ToolCallFunction struct {
+	Index     int                       `json:"index,omitempty"`
 	Name      string                    `json:"name"`
 	Arguments ToolCallFunctionArguments `json:"arguments"`
 }
@@ -203,8 +206,8 @@ type Metrics struct {
 	EvalDuration time.Duration `json:"eval_duration,omitempty"`
 }
 
-// Options specified in [GenerateRequest], if you add a new option here add it
-// to the API docs also.
+// Options specified in [GenerateRequest]. If you add a new option here, also
+// add it to the API docs.
 type Options struct {
 	Runner
 
@@ -215,7 +218,6 @@ type Options struct {
 	TopK        int     `json:"top_k,omitempty"`
 	TopP        float32 `json:"top_p,omitempty"`
 	MinP        float32 `json:"min_p,omitempty"`
-	TFSZ        float32 `json:"tfs_z,omitempty"`
 	TypicalP    float32 `json:"typical_p,omitempty"`
 	RepeatLastN int     `json:"repeat_last_n,omitempty"`
 	Temperature float32 `json:"temperature,omitempty"`
@@ -225,7 +227,6 @@ type Options struct {
 	Mirostat        int      `json:"mirostat,omitempty"`
 	MirostatTau     float32  `json:"mirostat_tau,omitempty"`
 	MirostatEta     float32  `json:"mirostat_eta,omitempty"`
-	PenalizeNewline bool     `json:"penalize_newline,omitempty"`
 	Stop            []string `json:"stop,omitempty"`
 }
 
@@ -236,7 +237,7 @@ type Runner struct {
 	NumGPU    int   `json:"num_gpu,omitempty"`
 	MainGPU   int   `json:"main_gpu,omitempty"`
 	LowVRAM   bool  `json:"low_vram,omitempty"`
-	F16KV     bool  `json:"f16_kv,omitempty"`
+	F16KV     bool  `json:"f16_kv,omitempty"` // Deprecated: This option is ignored
 	LogitsAll bool  `json:"logits_all,omitempty"`
 	VocabOnly bool  `json:"vocab_only,omitempty"`
 	UseMMap   *bool `json:"use_mmap,omitempty"`
@@ -295,17 +296,21 @@ type EmbeddingResponse struct {
 
 // CreateRequest is the request passed to [Client.Create].
 type CreateRequest struct {
 	Model    string `json:"model"`
-	Modelfile string `json:"modelfile"`
 	Stream   *bool  `json:"stream,omitempty"`
 	Quantize string `json:"quantize,omitempty"`
+
+	From       string            `json:"from,omitempty"`
+	Files      map[string]string `json:"files,omitempty"`
+	Adapters   map[string]string `json:"adapters,omitempty"`
+	Template   string            `json:"template,omitempty"`
+	License    any               `json:"license,omitempty"`
+	System     string            `json:"system,omitempty"`
+	Parameters map[string]any    `json:"parameters,omitempty"`
+	Messages   []Message         `json:"messages,omitempty"`
 
 	// Deprecated: set the model name with Model instead
 	Name string `json:"name"`
 
-	// Deprecated: set the file content with Modelfile instead
-	Path string `json:"path"`
-
 	// Deprecated: use Quantize instead
 	Quantization string `json:"quantization,omitempty"`
 }
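With the Modelfile string dropped from CreateRequest, creation is described by structured fields instead. A hedged sketch of how a client might derive a new model from a base model using the new fields (the model names and parameter values are illustrative, and it assumes a server new enough to accept the from/system/parameters fields):

```go
package main

import (
	"context"
	"log"

	"github.com/ollama/ollama/api"
)

func main() {
	client, err := api.ClientFromEnvironment()
	if err != nil {
		log.Fatal(err)
	}

	// Instead of embedding a Modelfile, the request names a base model and
	// supplies the system prompt and parameters directly.
	req := &api.CreateRequest{
		Model:  "my-assistant",                  // name of the model to create
		From:   "llama3.2",                      // base model to derive from
		System: "You are a concise assistant.",  // replaces SYSTEM in a Modelfile
		Parameters: map[string]any{
			"temperature": 0.2,
		},
	}

	// Progress updates are streamed back while the server assembles the model.
	err = client.Create(context.Background(), req, func(resp api.ProgressResponse) error {
		log.Println(resp.Status)
		return nil
	})
	if err != nil {
		log.Fatal(err)
	}
}
```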
@@ -344,6 +349,7 @@ type ShowResponse struct {
 	Messages      []Message      `json:"messages,omitempty"`
 	ModelInfo     map[string]any `json:"model_info,omitempty"`
 	ProjectorInfo map[string]any `json:"projector_info,omitempty"`
+	Tensors       []Tensor       `json:"tensors,omitempty"`
 	ModifiedAt    time.Time      `json:"modified_at,omitempty"`
 }
 
@@ -356,9 +362,9 @@ type CopyRequest struct {
 // PullRequest is the request passed to [Client.Pull].
 type PullRequest struct {
 	Model    string `json:"model"`
-	Insecure bool   `json:"insecure,omitempty"`
-	Username string `json:"username"`
-	Password string `json:"password"`
+	Insecure bool   `json:"insecure,omitempty"` // Deprecated: ignored
+	Username string `json:"username"`           // Deprecated: ignored
+	Password string `json:"password"`           // Deprecated: ignored
 	Stream   *bool  `json:"stream,omitempty"`
 
 	// Deprecated: set the model name with Model instead
@@ -462,6 +468,13 @@ type ModelDetails struct {
 	QuantizationLevel string `json:"quantization_level"`
 }
 
+// Tensor describes the metadata for a given tensor.
+type Tensor struct {
+	Name  string   `json:"name"`
+	Type  string   `json:"type"`
+	Shape []uint64 `json:"shape"`
+}
+
 func (m *Metrics) Summary() {
 	if m.TotalDuration > 0 {
 		fmt.Fprintf(os.Stderr, "total duration: %v\n", m.TotalDuration)
@@ -594,7 +607,6 @@ func DefaultOptions() Options {
 		Temperature:   0.8,
 		TopK:          40,
 		TopP:          0.9,
-		TFSZ:          1.0,
 		TypicalP:      1.0,
 		RepeatLastN:   64,
 		RepeatPenalty: 1.1,
@@ -603,17 +615,15 @@ func DefaultOptions() Options {
 		Mirostat:        0,
 		MirostatTau:     5.0,
 		MirostatEta:     0.1,
-		PenalizeNewline: true,
 		Seed:            -1,
 
 		Runner: Runner{
 			// options set when the model is loaded
-			NumCtx:    2048,
+			NumCtx:    int(envconfig.ContextLength()),
 			NumBatch:  512,
 			NumGPU:    -1, // -1 here indicates that NumGPU should be set dynamically
 			NumThread: 0,  // let the runtime decide
 			LowVRAM:   false,
-			F16KV:     true,
 			UseMLock:  false,
 			UseMMap:   nil,
 		},
@@ -17,6 +17,6 @@ If you want to build the installer, youll need to install
 In the top directory of this repo, run the following powershell script
 to build the ollama CLI, ollama app, and ollama installer.
 
-```
+```powershell
 powershell -ExecutionPolicy Bypass -File .\scripts\build_windows.ps1
 ```
@@ -11,10 +11,12 @@ import (
 
 	"github.com/ollama/ollama/app/store"
 	"github.com/ollama/ollama/app/tray"
+	"github.com/ollama/ollama/envconfig"
 )
 
 func Run() {
 	InitLogging()
+	slog.Info("app config", "env", envconfig.Values())
 
 	ctx, cancel := context.WithCancel(context.Background())
 	var done chan int
@@ -36,8 +36,13 @@ func init() {
 	ServerLogFile = filepath.Join(AppDataDir, "server.log")
 	UpgradeLogFile = filepath.Join(AppDataDir, "upgrade.log")
 
-	// Executables are stored in APPDATA
-	AppDir = filepath.Join(localAppData, "Programs", "Ollama")
+	exe, err := os.Executable()
+	if err != nil {
+		slog.Warn("error discovering executable directory", "error", err)
+		AppDir = filepath.Join(localAppData, "Programs", "Ollama")
+	} else {
+		AppDir = filepath.Dir(exe)
+	}
 
 	// Make sure we have PATH set correctly for any spawned children
 	paths := strings.Split(os.Getenv("PATH"), ";")
@@ -64,7 +69,7 @@ func init() {
 	}
 
 	// Make sure our logging dir exists
-	_, err := os.Stat(AppDataDir)
+	_, err = os.Stat(AppDataDir)
 	if errors.Is(err, os.ErrNotExist) {
 		if err := os.MkdirAll(AppDataDir, 0o755); err != nil {
 			slog.Error(fmt.Sprintf("create ollama dir %s: %v", AppDataDir, err))
@@ -18,11 +18,17 @@ func getCLIFullPath(command string) string {
 	var cmdPath string
 	appExe, err := os.Executable()
 	if err == nil {
+		// Check both the same location as the tray app, as well as ./bin
 		cmdPath = filepath.Join(filepath.Dir(appExe), command)
 		_, err := os.Stat(cmdPath)
 		if err == nil {
 			return cmdPath
 		}
+		cmdPath = filepath.Join(filepath.Dir(appExe), "bin", command)
+		_, err = os.Stat(cmdPath)
+		if err == nil {
+			return cmdPath
+		}
 	}
 	cmdPath, err = exec.LookPath(command)
 	if err == nil {
@@ -26,19 +26,15 @@ func DoUpgrade(cancel context.CancelFunc, done chan int) error {
 	slog.Info("starting upgrade with " + installerExe)
 	slog.Info("upgrade log file " + UpgradeLogFile)
 
-	// When running in debug mode, we'll be "verbose" and let the installer pop up and prompt
+	// make the upgrade show progress, but non interactive
 	installArgs := []string{
 		"/CLOSEAPPLICATIONS",                    // Quit the tray app if it's still running
 		"/LOG=" + filepath.Base(UpgradeLogFile), // Only relative seems reliable, so set pwd
 		"/FORCECLOSEAPPLICATIONS",               // Force close the tray app - might be needed
-	}
-	// make the upgrade as quiet as possible (no GUI, no prompts)
-	installArgs = append(installArgs,
-		"/SP", // Skip the "This will install... Do you wish to continue" prompt
-		"/SUPPRESSMSGBOXES",
+		"/SP",       // Skip the "This will install... Do you wish to continue" prompt
+		"/NOCANCEL", // Disable the ability to cancel upgrade mid-flight to avoid partially installed upgrades
 		"/SILENT",
-		"/VERYSILENT",
-	)
+	}
 
 	// Safeguard in case we have requests in flight that need to drain...
 	slog.Info("Waiting for server to shutdown")
@@ -53,8 +53,8 @@ RestartIfNeededByRun=no
 ; https://jrsoftware.org/ishelp/index.php?topic=setup_wizardimagefile
 WizardSmallImageFile=.\assets\setup.bmp
 
-; TODO verifty actual min windows version...
-; OG Win 10
+; Ollama requires Windows 10 22H2 or newer for proper unicode rendering
+; TODO: consider setting this to 10.0.19045
 MinVersion=10.0.10240
 
 ; First release that supports WinRT UI Composition for win32 apps
@@ -97,7 +97,6 @@ Source: "..\dist\windows-amd64\lib\ollama\*"; DestDir: "{app}\lib\ollama\"; Chec
 Source: "..\dist\windows-arm64\vc_redist.arm64.exe"; DestDir: "{tmp}"; Check: IsArm64() and vc_redist_needed(); Flags: deleteafterinstall
 Source: "..\dist\windows-arm64-app.exe"; DestDir: "{app}"; DestName: "{#MyAppExeName}" ;Check: IsArm64(); Flags: ignoreversion 64bit
 Source: "..\dist\windows-arm64\ollama.exe"; DestDir: "{app}"; Check: IsArm64(); Flags: ignoreversion 64bit
-Source: "..\dist\windows-arm64\lib\ollama\*"; DestDir: "{app}\lib\ollama\"; Check: IsArm64(); Flags: ignoreversion 64bit recursesubdirs
 #endif
 
 Source: "..\dist\ollama_welcome.ps1"; DestDir: "{app}"; Flags: ignoreversion
@@ -136,7 +135,7 @@ Type: filesandordirs; Name: "{%TEMP}\ollama*"
 Type: filesandordirs; Name: "{%LOCALAPPDATA}\Programs\Ollama"
 
 [Messages]
-WizardReady=Ollama Windows Preview
+WizardReady=Ollama
 ReadyLabel1=%nLet's get you up and running with your own large language models.
 SetupAppRunningError=Another Ollama installer is running.%n%nPlease cancel or finish the other installer, then click OK to continue with this install, or Cancel to exit.
@@ -64,7 +64,7 @@ func initStore() {
 		slog.Debug(fmt.Sprintf("unexpected error searching for store: %s", err))
 	}
 	slog.Debug("initializing new store")
-	store.ID = uuid.New().String()
+	store.ID = uuid.NewString()
 	writeStore(getStorePath())
 }
@@ -98,7 +98,7 @@ func (t *winTray) wndProc(hWnd windows.Handle, message uint32, wParam, lParam ui
 		}
 		err = t.wcex.unregister()
 		if err != nil {
-			slog.Error(fmt.Sprintf("failed to uregister windo %s", err))
+			slog.Error(fmt.Sprintf("failed to unregister window %s", err))
 		}
 	case WM_DESTROY:
 		// same as WM_ENDSESSION, but throws 0 exit code after all
@@ -11,12 +11,13 @@ import (
 )
 
 const (
-	updateAvailableMenuID = 1
-	updateMenuID          = updateAvailableMenuID + 1
-	separatorMenuID       = updateMenuID + 1
-	diagLogsMenuID        = separatorMenuID + 1
-	diagSeparatorMenuID   = diagLogsMenuID + 1
-	quitMenuID            = diagSeparatorMenuID + 1
+	_ = iota
+	updateAvailableMenuID
+	updateMenuID
+	separatorMenuID
+	diagLogsMenuID
+	diagSeparatorMenuID
+	quitMenuID
 )
 
 func (t *winTray) initMenus() error {
@@ -38,7 +39,7 @@ func (t *winTray) UpdateAvailable(ver string) error {
 	if err := t.addOrUpdateMenuItem(updateAvailableMenuID, 0, updateAvailableMenuTitle, true); err != nil {
 		return fmt.Errorf("unable to create menu entries %w", err)
 	}
-	if err := t.addOrUpdateMenuItem(updateMenuID, 0, updateMenutTitle, false); err != nil {
+	if err := t.addOrUpdateMenuItem(updateMenuID, 0, updateMenuTitle, false); err != nil {
 		return fmt.Errorf("unable to create menu entries %w", err)
 	}
 	if err := t.addSeparatorMenuItem(separatorMenuID, 0); err != nil {
@@ -10,6 +10,6 @@ const (
 
 	quitMenuTitle            = "Quit Ollama"
 	updateAvailableMenuTitle = "An update is available"
-	updateMenutTitle         = "Restart to update"
+	updateMenuTitle          = "Restart to update"
 	diagLogsMenuTitle        = "View logs"
 )
@@ -361,7 +361,7 @@ func (t *winTray) showMenu() error {
 
 	boolRet, _, err = pTrackPopupMenu.Call(
 		uintptr(t.menus[0]),
-		TPM_BOTTOMALIGN|TPM_LEFTALIGN,
+		TPM_BOTTOMALIGN|TPM_LEFTALIGN|TPM_RIGHTBUTTON,
 		uintptr(p.X),
 		uintptr(p.Y),
 		0,
@@ -67,6 +67,7 @@ const (
 	SW_HIDE         = 0
 	TPM_BOTTOMALIGN = 0x0020
 	TPM_LEFTALIGN   = 0x0000
+	TPM_RIGHTBUTTON = 0x0002
 	WM_CLOSE        = 0x0010
 	WM_USER         = 0x0400
 	WS_CAPTION      = 0x00C00000
178  benchmark/server_benchmark_test.go  Normal file
@@ -0,0 +1,178 @@
package benchmark

import (
	"context"
	"flag"
	"fmt"
	"testing"
	"time"

	"github.com/ollama/ollama/api"
)

// Command line flags
var modelFlag string

func init() {
	flag.StringVar(&modelFlag, "m", "", "Name of the model to benchmark")
	flag.Lookup("m").DefValue = "model"
}

// modelName returns the model name from flags, failing the test if not set
func modelName(b *testing.B) string {
	if modelFlag == "" {
		b.Fatal("Error: -m flag is required for benchmark tests")
	}
	return modelFlag
}

type TestCase struct {
	name      string
	prompt    string
	maxTokens int
}

// runGenerateBenchmark contains the common generate and metrics logic
func runGenerateBenchmark(b *testing.B, ctx context.Context, client *api.Client, req *api.GenerateRequest) {
	start := time.Now()
	var ttft time.Duration
	var metrics api.Metrics

	err := client.Generate(ctx, req, func(resp api.GenerateResponse) error {
		if ttft == 0 && resp.Response != "" {
			ttft = time.Since(start)
		}
		if resp.Done {
			metrics = resp.Metrics
		}
		return nil
	})

	// Report custom metrics as part of the benchmark results
	b.ReportMetric(float64(ttft.Milliseconds()), "ttft_ms")
	b.ReportMetric(float64(metrics.LoadDuration.Milliseconds()), "load_ms")

	// Token throughput metrics
	promptThroughput := float64(metrics.PromptEvalCount) / metrics.PromptEvalDuration.Seconds()
	genThroughput := float64(metrics.EvalCount) / metrics.EvalDuration.Seconds()
	b.ReportMetric(promptThroughput, "prompt_tok/s")
	b.ReportMetric(genThroughput, "gen_tok/s")

	// Token counts
	b.ReportMetric(float64(metrics.PromptEvalCount), "prompt_tokens")
	b.ReportMetric(float64(metrics.EvalCount), "gen_tokens")
	if err != nil {
		b.Fatal(err)
	}
}

// BenchmarkColdStart runs benchmarks with model loading from cold state
func BenchmarkColdStart(b *testing.B) {
	client := setup(b)
	tests := []TestCase{
		{"short_prompt", "Write a long story", 100},
		{"medium_prompt", "Write a detailed economic analysis", 500},
		{"long_prompt", "Write a comprehensive AI research paper", 1000},
	}
	m := modelName(b)

	for _, tt := range tests {
		b.Run(fmt.Sprintf("%s/cold/%s", m, tt.name), func(b *testing.B) {
			ctx := context.Background()

			// Set number of tokens as our throughput metric
			b.SetBytes(int64(tt.maxTokens))

			for b.Loop() {
				b.StopTimer()
				// Ensure model is unloaded before each iteration
				unload(client, m, b)
				b.StartTimer()

				req := &api.GenerateRequest{
					Model:   m,
					Prompt:  tt.prompt,
					Options: map[string]interface{}{"num_predict": tt.maxTokens, "temperature": 0.1},
				}

				runGenerateBenchmark(b, ctx, client, req)
			}
		})
	}
}

// BenchmarkWarmStart runs benchmarks with pre-loaded model
func BenchmarkWarmStart(b *testing.B) {
	client := setup(b)
	tests := []TestCase{
		{"short_prompt", "Write a long story", 100},
		{"medium_prompt", "Write a detailed economic analysis", 500},
		{"long_prompt", "Write a comprehensive AI research paper", 1000},
	}
	m := modelName(b)

	for _, tt := range tests {
		b.Run(fmt.Sprintf("%s/warm/%s", m, tt.name), func(b *testing.B) {
			ctx := context.Background()

			// Pre-warm the model
			warmup(client, m, tt.prompt, b)

			// Set number of tokens as our throughput metric
			b.SetBytes(int64(tt.maxTokens))

			for b.Loop() {
				req := &api.GenerateRequest{
					Model:   m,
					Prompt:  tt.prompt,
					Options: map[string]any{"num_predict": tt.maxTokens, "temperature": 0.1},
				}

				runGenerateBenchmark(b, ctx, client, req)
			}
		})
	}
}

// setup verifies server and model availability
func setup(b *testing.B) *api.Client {
	client, err := api.ClientFromEnvironment()
	if err != nil {
		b.Fatal(err)
	}
	if _, err := client.Show(context.Background(), &api.ShowRequest{Model: modelName(b)}); err != nil {
		b.Fatalf("Model unavailable: %v", err)
	}

	return client
}

// warmup ensures the model is loaded and warmed up
func warmup(client *api.Client, model string, prompt string, b *testing.B) {
	for range 3 {
		err := client.Generate(
			context.Background(),
			&api.GenerateRequest{
				Model:   model,
				Prompt:  prompt,
				Options: map[string]interface{}{"num_predict": 50, "temperature": 0.1},
			},
			func(api.GenerateResponse) error { return nil },
		)
		if err != nil {
			b.Logf("Error during model warm-up: %v", err)
		}
	}
}

// unload forces model unloading using KeepAlive: 0 parameter
func unload(client *api.Client, model string, b *testing.B) {
	req := &api.GenerateRequest{
		Model:     model,
		KeepAlive: &api.Duration{Duration: 0},
	}
	if err := client.Generate(context.Background(), req, func(api.GenerateResponse) error { return nil }); err != nil {
		b.Logf("Unload error: %v", err)
	}
	time.Sleep(1 * time.Second)
}
@@ -1 +0,0 @@
-This is here to make sure the build/ directory exists for the go:embed command
@@ -1 +0,0 @@
-This is here to make sure the build/ directory exists for the go:embed command
@@ -1,8 +0,0 @@
-package build
-
-import "embed"
-
-// Darwin payloads separated by architecture to avoid duplicate payloads when cross compiling
-
-//go:embed darwin/amd64/*
-var EmbedFS embed.FS
@@ -1,8 +0,0 @@
-package build
-
-import "embed"
-
-// Darwin payloads separated by architecture to avoid duplicate payloads when cross compiling
-
-//go:embed darwin/arm64/*
-var EmbedFS embed.FS
@@ -1,6 +0,0 @@
-package build
-
-import "embed"
-
-//go:embed linux/*
-var EmbedFS embed.FS
@@ -1,8 +0,0 @@
-//go:build !linux && !darwin
-
-package build
-
-import "embed"
-
-// unused on windows
-var EmbedFS embed.FS
@@ -1 +0,0 @@
-This is here to make sure the build/ directory exists for the go:embed command
@@ -1 +0,0 @@
-This is here to make sure the build/ directory exists for the go:embed command
474  cmd/cmd.go
@@ -1,13 +1,11 @@
 package cmd
 
 import (
-	"archive/zip"
 	"bufio"
-	"bytes"
 	"context"
 	"crypto/ed25519"
 	"crypto/rand"
-	"crypto/sha256"
+	"encoding/json"
 	"encoding/pem"
 	"errors"
 	"fmt"
@@ -19,9 +17,8 @@ import (
 	"os"
 	"os/signal"
 	"path/filepath"
-	"regexp"
 	"runtime"
-	"slices"
+	"sort"
 	"strconv"
 	"strings"
 	"sync/atomic"
@@ -36,100 +33,114 @@ import (
 	"golang.org/x/term"
 
 	"github.com/ollama/ollama/api"
-	"github.com/ollama/ollama/auth"
 	"github.com/ollama/ollama/envconfig"
 	"github.com/ollama/ollama/format"
 	"github.com/ollama/ollama/parser"
 	"github.com/ollama/ollama/progress"
+	"github.com/ollama/ollama/runner"
 	"github.com/ollama/ollama/server"
-	"github.com/ollama/ollama/types/errtypes"
 	"github.com/ollama/ollama/types/model"
 	"github.com/ollama/ollama/version"
 )
func CreateHandler(cmd *cobra.Command, args []string) error {
|
var errModelfileNotFound = errors.New("specified Modelfile wasn't found")
|
||||||
|
|
||||||
|
func getModelfileName(cmd *cobra.Command) (string, error) {
|
||||||
filename, _ := cmd.Flags().GetString("file")
|
filename, _ := cmd.Flags().GetString("file")
|
||||||
filename, err := filepath.Abs(filename)
|
|
||||||
|
if filename == "" {
|
||||||
|
filename = "Modelfile"
|
||||||
|
}
|
||||||
|
|
||||||
|
absName, err := filepath.Abs(filename)
|
||||||
|
if err != nil {
|
||||||
|
return "", err
|
||||||
|
}
|
||||||
|
|
||||||
|
_, err = os.Stat(absName)
|
||||||
|
if err != nil {
|
||||||
|
return "", err
|
||||||
|
}
|
||||||
|
|
||||||
|
return absName, nil
|
||||||
|
}
|
||||||
|
|
||||||
|
func CreateHandler(cmd *cobra.Command, args []string) error {
|
||||||
|
p := progress.NewProgress(os.Stderr)
|
||||||
|
defer p.Stop()
|
||||||
|
|
||||||
|
var reader io.Reader
|
||||||
|
|
||||||
|
filename, err := getModelfileName(cmd)
|
||||||
|
if os.IsNotExist(err) {
|
||||||
|
if filename == "" {
|
||||||
|
reader = strings.NewReader("FROM .\n")
|
||||||
|
} else {
|
||||||
|
return errModelfileNotFound
|
||||||
|
}
|
||||||
|
} else if err != nil {
|
||||||
|
return err
|
||||||
|
} else {
|
||||||
|
f, err := os.Open(filename)
|
||||||
|
if err != nil {
|
||||||
|
return err
|
||||||
|
}
|
||||||
|
|
||||||
|
reader = f
|
||||||
|
defer f.Close()
|
||||||
|
}
|
||||||
|
|
||||||
|
modelfile, err := parser.ParseFile(reader)
|
||||||
if err != nil {
|
if err != nil {
|
||||||
return err
|
return err
|
||||||
}
|
}
|
||||||
|
|
||||||
|
status := "gathering model components"
|
||||||
|
spinner := progress.NewSpinner(status)
|
||||||
|
p.Add(status, spinner)
|
||||||
|
|
||||||
|
req, err := modelfile.CreateRequest(filepath.Dir(filename))
|
||||||
|
if err != nil {
|
||||||
|
return err
|
||||||
|
}
|
||||||
|
spinner.Stop()
|
||||||
|
|
||||||
|
req.Name = args[0]
|
||||||
|
quantize, _ := cmd.Flags().GetString("quantize")
|
||||||
|
if quantize != "" {
|
||||||
|
req.Quantize = quantize
|
||||||
|
}
|
||||||
|
|
||||||
client, err := api.ClientFromEnvironment()
|
client, err := api.ClientFromEnvironment()
|
||||||
if err != nil {
|
if err != nil {
|
||||||
return err
|
return err
|
||||||
}
|
}
|
||||||
|
|
||||||
p := progress.NewProgress(os.Stderr)
|
if len(req.Files) > 0 {
|
||||||
defer p.Stop()
|
fileMap := map[string]string{}
|
||||||
|
for f, digest := range req.Files {
|
||||||
f, err := os.Open(filename)
|
if _, err := createBlob(cmd, client, f, digest, p); err != nil {
|
||||||
if err != nil {
|
|
||||||
return err
|
|
||||||
}
|
|
||||||
defer f.Close()
|
|
||||||
|
|
||||||
modelfile, err := parser.ParseFile(f)
|
|
||||||
if err != nil {
|
|
||||||
return err
|
|
||||||
}
|
|
||||||
|
|
||||||
home, err := os.UserHomeDir()
|
|
||||||
if err != nil {
|
|
||||||
return err
|
|
||||||
}
|
|
||||||
|
|
||||||
status := "transferring model data"
|
|
||||||
spinner := progress.NewSpinner(status)
|
|
||||||
p.Add(status, spinner)
|
|
||||||
defer p.Stop()
|
|
||||||
|
|
||||||
for i := range modelfile.Commands {
|
|
||||||
switch modelfile.Commands[i].Name {
|
|
||||||
case "model", "adapter":
|
|
||||||
path := modelfile.Commands[i].Args
|
|
||||||
if path == "~" {
|
|
||||||
path = home
|
|
||||||
} else if strings.HasPrefix(path, "~/") {
|
|
||||||
path = filepath.Join(home, path[2:])
|
|
||||||
}
|
|
||||||
|
|
||||||
if !filepath.IsAbs(path) {
|
|
||||||
path = filepath.Join(filepath.Dir(filename), path)
|
|
||||||
}
|
|
||||||
|
|
||||||
fi, err := os.Stat(path)
|
|
||||||
if errors.Is(err, os.ErrNotExist) && modelfile.Commands[i].Name == "model" {
|
|
||||||
continue
|
|
||||||
} else if err != nil {
|
|
||||||
return err
|
return err
|
||||||
}
|
}
|
||||||
|
fileMap[filepath.Base(f)] = digest
|
||||||
if fi.IsDir() {
|
|
||||||
// this is likely a safetensors or pytorch directory
|
|
||||||
// TODO make this work w/ adapters
|
|
||||||
tempfile, err := tempZipFiles(path)
|
|
||||||
if err != nil {
|
|
||||||
return err
|
|
||||||
}
|
|
||||||
defer os.RemoveAll(tempfile)
|
|
||||||
|
|
||||||
path = tempfile
|
|
||||||
}
|
|
||||||
|
|
||||||
digest, err := createBlob(cmd, client, path, spinner)
|
|
||||||
if err != nil {
|
|
||||||
return err
|
|
||||||
}
|
|
||||||
|
|
||||||
modelfile.Commands[i].Args = "@" + digest
|
|
||||||
}
|
}
|
||||||
|
req.Files = fileMap
|
||||||
|
}
|
||||||
|
|
||||||
|
if len(req.Adapters) > 0 {
|
||||||
|
fileMap := map[string]string{}
|
||||||
|
for f, digest := range req.Adapters {
|
||||||
|
if _, err := createBlob(cmd, client, f, digest, p); err != nil {
|
||||||
|
return err
|
||||||
|
}
|
||||||
|
fileMap[filepath.Base(f)] = digest
|
||||||
|
}
|
||||||
|
req.Adapters = fileMap
|
||||||
}
|
}
|
||||||
|
|
||||||
bars := make(map[string]*progress.Bar)
|
bars := make(map[string]*progress.Bar)
|
||||||
fn := func(resp api.ProgressResponse) error {
|
fn := func(resp api.ProgressResponse) error {
|
||||||
if resp.Digest != "" {
|
if resp.Digest != "" {
|
||||||
spinner.Stop()
|
|
||||||
|
|
||||||
bar, ok := bars[resp.Digest]
|
bar, ok := bars[resp.Digest]
|
||||||
if !ok {
|
if !ok {
|
||||||
bar = progress.NewBar(fmt.Sprintf("pulling %s...", resp.Digest[7:19]), resp.Total, resp.Completed)
|
bar = progress.NewBar(fmt.Sprintf("pulling %s...", resp.Digest[7:19]), resp.Total, resp.Completed)
|
||||||
@@ -149,145 +160,23 @@ func CreateHandler(cmd *cobra.Command, args []string) error {
return nil
}

-quantize, _ := cmd.Flags().GetString("quantize")
-request := api.CreateRequest{Name: args[0], Modelfile: modelfile.String(), Quantize: quantize}
-if err := client.Create(cmd.Context(), &request, fn); err != nil {
+if err := client.Create(cmd.Context(), req, fn); err != nil {
+if strings.Contains(err.Error(), "path or Modelfile are required") {
+return fmt.Errorf("the ollama server must be updated to use `ollama create` with this client")
+}
return err
}

return nil
}
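For orientation, here is a minimal sketch of the client-side flow the reworked CreateHandler follows: hash each referenced file, upload it with CreateBlob, then send a create request whose Files map points at the uploaded digests. The helper name createModel and the error handling are illustrative assumptions, not code from this diff; the api calls and request fields mirror the ones used above.

package example

import (
	"context"
	"crypto/sha256"
	"fmt"
	"io"
	"os"
	"path/filepath"

	"github.com/ollama/ollama/api"
)

// createModel uploads each local file as a blob keyed by its sha256 digest,
// then asks the server to assemble the model from those digests.
func createModel(ctx context.Context, name string, paths []string) error {
	client, err := api.ClientFromEnvironment()
	if err != nil {
		return err
	}

	req := &api.CreateRequest{Name: name, Files: map[string]string{}}
	for _, path := range paths {
		f, err := os.Open(path)
		if err != nil {
			return err
		}

		h := sha256.New()
		if _, err := io.Copy(h, f); err != nil {
			f.Close()
			return err
		}
		digest := fmt.Sprintf("sha256:%x", h.Sum(nil))

		// rewind and stream the same file to the server's blob store
		if _, err := f.Seek(0, io.SeekStart); err != nil {
			f.Close()
			return err
		}
		if err := client.CreateBlob(ctx, digest, f); err != nil {
			f.Close()
			return err
		}
		f.Close()

		req.Files[filepath.Base(path)] = digest
	}

	return client.Create(ctx, req, func(resp api.ProgressResponse) error {
		fmt.Println(resp.Status)
		return nil
	})
}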
-func tempZipFiles(path string) (string, error) {
-tempfile, err := os.CreateTemp("", "ollama-tf")
+func createBlob(cmd *cobra.Command, client *api.Client, path string, digest string, p *progress.Progress) (string, error) {
+realPath, err := filepath.EvalSymlinks(path)
if err != nil {
return "", err
}
-defer tempfile.Close()
-detectContentType := func(path string) (string, error) {
+bin, err := os.Open(realPath)
-f, err := os.Open(path)
-if err != nil {
-return "", err
-}
-defer f.Close()
-var b bytes.Buffer
-b.Grow(512)
-if _, err := io.CopyN(&b, f, 512); err != nil && !errors.Is(err, io.EOF) {
-return "", err
-}
-contentType, _, _ := strings.Cut(http.DetectContentType(b.Bytes()), ";")
-return contentType, nil
-}
-glob := func(pattern, contentType string) ([]string, error) {
-matches, err := filepath.Glob(pattern)
-if err != nil {
-return nil, err
-}
-for _, safetensor := range matches {
-if ct, err := detectContentType(safetensor); err != nil {
-return nil, err
-} else if ct != contentType {
-return nil, fmt.Errorf("invalid content type: expected %s for %s", ct, safetensor)
-}
-}
-return matches, nil
-}
-var files []string
-if st, _ := glob(filepath.Join(path, "model*.safetensors"), "application/octet-stream"); len(st) > 0 {
-// safetensors files might be unresolved git lfs references; skip if they are
-// covers model-x-of-y.safetensors, model.fp32-x-of-y.safetensors, model.safetensors
-files = append(files, st...)
-} else if st, _ := glob(filepath.Join(path, "adapters.safetensors"), "application/octet-stream"); len(st) > 0 {
-// covers adapters.safetensors
-files = append(files, st...)
-} else if st, _ := glob(filepath.Join(path, "adapter_model.safetensors"), "application/octet-stream"); len(st) > 0 {
-// covers adapter_model.safetensors
-files = append(files, st...)
-} else if pt, _ := glob(filepath.Join(path, "pytorch_model*.bin"), "application/zip"); len(pt) > 0 {
-// pytorch files might also be unresolved git lfs references; skip if they are
-// covers pytorch_model-x-of-y.bin, pytorch_model.fp32-x-of-y.bin, pytorch_model.bin
-files = append(files, pt...)
-} else if pt, _ := glob(filepath.Join(path, "consolidated*.pth"), "application/zip"); len(pt) > 0 {
-// pytorch files might also be unresolved git lfs references; skip if they are
-// covers consolidated.x.pth, consolidated.pth
-files = append(files, pt...)
-} else {
-return "", errors.New("no safetensors or torch files found")
-}
-// add configuration files, json files are detected as text/plain
-js, err := glob(filepath.Join(path, "*.json"), "text/plain")
-if err != nil {
-return "", err
-}
-files = append(files, js...)
-// bert models require a nested config.json
-// TODO(mxyng): merge this with the glob above
-js, err = glob(filepath.Join(path, "**/*.json"), "text/plain")
-if err != nil {
-return "", err
-}
-files = append(files, js...)
-if tks, _ := glob(filepath.Join(path, "tokenizer.model"), "application/octet-stream"); len(tks) > 0 {
-// add tokenizer.model if it exists, tokenizer.json is automatically picked up by the previous glob
-// tokenizer.model might be a unresolved git lfs reference; error if it is
-files = append(files, tks...)
-} else if tks, _ := glob(filepath.Join(path, "**/tokenizer.model"), "text/plain"); len(tks) > 0 {
-// some times tokenizer.model is in a subdirectory (e.g. meta-llama/Meta-Llama-3-8B)
-files = append(files, tks...)
-}
-zipfile := zip.NewWriter(tempfile)
-defer zipfile.Close()
-for _, file := range files {
-f, err := os.Open(file)
-if err != nil {
-return "", err
-}
-defer f.Close()
-fi, err := f.Stat()
-if err != nil {
-return "", err
-}
-zfi, err := zip.FileInfoHeader(fi)
-if err != nil {
-return "", err
-}
-zfi.Name, err = filepath.Rel(path, file)
-if err != nil {
-return "", err
-}
-zf, err := zipfile.CreateHeader(zfi)
-if err != nil {
-return "", err
-}
-if _, err := io.Copy(zf, f); err != nil {
-return "", err
-}
-}
-return tempfile.Name(), nil
-}

-func createBlob(cmd *cobra.Command, client *api.Client, path string, spinner *progress.Spinner) (string, error) {
-bin, err := os.Open(path)
if err != nil {
return "", err
}
@@ -300,18 +189,11 @@ func createBlob(cmd *cobra.Command, client *api.Client, path string, spinner *pr
}
fileSize := fileInfo.Size()

-hash := sha256.New()
-if _, err := io.Copy(hash, bin); err != nil {
-return "", err
-}
-if _, err := bin.Seek(0, io.SeekStart); err != nil {
-return "", err
-}

var pw progressWriter
-status := "transferring model data 0%"
-spinner.SetMessage(status)
+status := fmt.Sprintf("copying file %s 0%%", digest)
+spinner := progress.NewSpinner(status)
+p.Add(status, spinner)
+defer spinner.Stop()

done := make(chan struct{})
defer close(done)
@@ -322,15 +204,14 @@ func createBlob(cmd *cobra.Command, client *api.Client, path string, spinner *pr
for {
select {
case <-ticker.C:
-spinner.SetMessage(fmt.Sprintf("transferring model data %d%%", int(100*pw.n.Load()/fileSize)))
+spinner.SetMessage(fmt.Sprintf("copying file %s %d%%", digest, int(100*pw.n.Load()/fileSize)))
case <-done:
-spinner.SetMessage("transferring model data 100%")
+spinner.SetMessage(fmt.Sprintf("copying file %s 100%%", digest))
return
}
}
}()

-digest := fmt.Sprintf("sha256:%x", hash.Sum(nil))
if err = client.CreateBlob(cmd.Context(), digest, io.TeeReader(bin, &pw)); err != nil {
return "", err
}
@@ -375,6 +256,7 @@ func StopHandler(cmd *cobra.Command, args []string) error {
if strings.Contains(err.Error(), "not found") {
return fmt.Errorf("couldn't find model \"%s\" to stop", args[0])
}
+return err
}
return nil
}
@@ -422,6 +304,10 @@ func RunHandler(cmd *cobra.Command, args []string) error {
if len(prompts) > 0 {
interactive = false
}
+// Be quiet if we're redirecting to a pipe or file
+if !term.IsTerminal(int(os.Stdout.Fd())) {
+interactive = false
+}

nowrap, err := cmd.Flags().GetBool("nowordwrap")
if err != nil {
@@ -453,7 +339,16 @@ func RunHandler(cmd *cobra.Command, args []string) error {
return err
}

-opts.MultiModal = slices.Contains(info.Details.Families, "clip")
+if len(info.ProjectorInfo) != 0 {
+opts.MultiModal = true
+}
+for k := range info.ModelInfo {
+if strings.Contains(k, ".vision.") {
+opts.MultiModal = true
+break
+}
+}

opts.ParentModel = info.Details.ParentModel

if interactive {
@@ -478,47 +373,6 @@ func RunHandler(cmd *cobra.Command, args []string) error {
return generate(cmd, opts)
}

-func errFromUnknownKey(unknownKeyErr error) error {
-// find SSH public key in the error message
-sshKeyPattern := `ssh-\w+ [^\s"]+`
-re := regexp.MustCompile(sshKeyPattern)
-matches := re.FindStringSubmatch(unknownKeyErr.Error())
-if len(matches) > 0 {
-serverPubKey := matches[0]
-localPubKey, err := auth.GetPublicKey()
-if err != nil {
-return unknownKeyErr
-}
-if runtime.GOOS == "linux" && serverPubKey != localPubKey {
-// try the ollama service public key
-svcPubKey, err := os.ReadFile("/usr/share/ollama/.ollama/id_ed25519.pub")
-if err != nil {
-return unknownKeyErr
-}
-localPubKey = strings.TrimSpace(string(svcPubKey))
-}
-// check if the returned public key matches the local public key, this prevents adding a remote key to the user's account
-if serverPubKey != localPubKey {
-return unknownKeyErr
-}
-var msg strings.Builder
-msg.WriteString(unknownKeyErr.Error())
-msg.WriteString("\n\nYour ollama key is:\n")
-msg.WriteString(localPubKey)
-msg.WriteString("\nAdd your key at:\n")
-msg.WriteString("https://ollama.com/settings/keys")
-return errors.New(msg.String())
-}
-return unknownKeyErr
-}

func PushHandler(cmd *cobra.Command, args []string) error {
client, err := api.ClientFromEnvironment()
if err != nil {
@@ -565,6 +419,8 @@ func PushHandler(cmd *cobra.Command, args []string) error {
}

request := api.PushRequest{Name: args[0], Insecure: insecure}

+n := model.ParseName(args[0])
if err := client.Push(cmd.Context(), &request, fn); err != nil {
if spinner != nil {
spinner.Stop()
@@ -572,18 +428,19 @@ func PushHandler(cmd *cobra.Command, args []string) error {
if strings.Contains(err.Error(), "access denied") {
return errors.New("you are not authorized to push to this namespace, create the model under a namespace you own")
}
-host := model.ParseName(args[0]).Host
-isOllamaHost := strings.HasSuffix(host, ".ollama.ai") || strings.HasSuffix(host, ".ollama.com")
-if strings.Contains(err.Error(), errtypes.UnknownOllamaKeyErrMsg) && isOllamaHost {
-// the user has not added their ollama key to ollama.com
-// re-throw an error with a more user-friendly message
-return errFromUnknownKey(err)
-}

return err
}

+p.Stop()
spinner.Stop()

+destination := n.String()
+if strings.HasSuffix(n.Host, ".ollama.ai") || strings.HasSuffix(n.Host, ".ollama.com") {
+destination = "https://ollama.com/" + strings.TrimSuffix(n.DisplayShortest(), ":latest")
+}
+fmt.Printf("\nYou can find your model at:\n\n")
+fmt.Printf("\t%s\n", destination)

return nil
}

@@ -601,7 +458,7 @@ func ListHandler(cmd *cobra.Command, args []string) error {
var data [][]string

for _, m := range models.Models {
-if len(args) == 0 || strings.HasPrefix(m.Name, args[0]) {
+if len(args) == 0 || strings.HasPrefix(strings.ToLower(m.Name), strings.ToLower(args[0])) {
data = append(data, []string{m.Name, m.Digest[:12], format.HumanBytes(m.Size), format.HumanTime(m.ModifiedAt, "Never")})
}
}
@@ -712,8 +569,9 @@ func ShowHandler(cmd *cobra.Command, args []string) error {
parameters, errParams := cmd.Flags().GetBool("parameters")
system, errSystem := cmd.Flags().GetBool("system")
template, errTemplate := cmd.Flags().GetBool("template")
+verbose, errVerbose := cmd.Flags().GetBool("verbose")

-for _, boolErr := range []error{errLicense, errModelfile, errParams, errSystem, errTemplate} {
+for _, boolErr := range []error{errLicense, errModelfile, errParams, errSystem, errTemplate, errVerbose} {
if boolErr != nil {
return errors.New("error retrieving flags")
}
@@ -751,7 +609,7 @@ func ShowHandler(cmd *cobra.Command, args []string) error {
return errors.New("only one of '--license', '--modelfile', '--parameters', '--system', or '--template' can be specified")
}

-req := api.ShowRequest{Name: args[0]}
+req := api.ShowRequest{Name: args[0], Verbose: verbose}
resp, err := client.Show(cmd.Context(), &req)
if err != nil {
return err
@@ -766,18 +624,18 @@ func ShowHandler(cmd *cobra.Command, args []string) error {
case "parameters":
fmt.Println(resp.Parameters)
case "system":
-fmt.Println(resp.System)
+fmt.Print(resp.System)
case "template":
-fmt.Println(resp.Template)
+fmt.Print(resp.Template)
}

return nil
}

-return showInfo(resp, os.Stdout)
+return showInfo(resp, verbose, os.Stdout)
}

-func showInfo(resp *api.ShowResponse, w io.Writer) error {
+func showInfo(resp *api.ShowResponse, verbose bool, w io.Writer) error {
tableRender := func(header string, rows func() [][]string) {
fmt.Fprintln(w, "  ", header)
table := tablewriter.NewWriter(w)
@@ -834,6 +692,47 @@ func showInfo(resp *api.ShowResponse, w io.Writer) error {
})
}

+if resp.ModelInfo != nil && verbose {
+tableRender("Metadata", func() (rows [][]string) {
+keys := make([]string, 0, len(resp.ModelInfo))
+for k := range resp.ModelInfo {
+keys = append(keys, k)
+}
+sort.Strings(keys)
+
+for _, k := range keys {
+var v string
+switch vData := resp.ModelInfo[k].(type) {
+case bool:
+v = fmt.Sprintf("%t", vData)
+case string:
+v = vData
+case float64:
+v = fmt.Sprintf("%g", vData)
+case []any:
+n := 3
+if len(vData) < n {
+n = len(vData)
+}
+v = fmt.Sprintf("%v", vData[:n])
+default:
+v = fmt.Sprintf("%T", vData)
+}
+rows = append(rows, []string{"", k, v})
+}
+return
+})
+}
+
+if len(resp.Tensors) > 0 && verbose {
+tableRender("Tensors", func() (rows [][]string) {
+for _, t := range resp.Tensors {
+rows = append(rows, []string{"", t.Name, t.Type, fmt.Sprint(t.Shape)})
+}
+return
+})
+}

head := func(s string, n int) (rows [][]string) {
scanner := bufio.NewScanner(strings.NewReader(s))
for scanner.Scan() && (len(rows) < n || n < 0) {
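A usage note for the verbose path added above: the Metadata and Tensors tables only render when the --verbose flag reaches showInfo as true. Below is a small, hedged sketch of driving it directly, assuming it lives in the same cmd package as showInfo; the helper name and the field values are illustrative, while the types mirror the ones used elsewhere in this diff.

package cmd

import (
	"os"

	"github.com/ollama/ollama/api"
)

// printVerboseExample is an illustrative helper, not part of the diff.
func printVerboseExample() error {
	resp := &api.ShowResponse{
		Details: api.ModelDetails{Family: "test", ParameterSize: "8B", QuantizationLevel: "FP16"},
		ModelInfo: map[string]any{
			"general.architecture":    "test",
			"general.parameter_count": float64(8_000_000_000), // %g renders this as 8e+09
		},
		Tensors: []api.Tensor{
			{Name: "blk.0.attn_q.weight", Type: "FP16", Shape: []uint64{3117, 42}},
		},
	}
	// verbose=true adds the Metadata and Tensors tables to the regular output.
	return showInfo(resp, true, os.Stdout)
}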
@@ -1038,10 +937,14 @@ func chat(cmd *cobra.Command, opts runOptions) (*api.Message, error) {
return nil
}

+if opts.Format == "json" {
+opts.Format = `"` + opts.Format + `"`
+}

req := &api.ChatRequest{
Model: opts.Model,
Messages: opts.Messages,
-Format: opts.Format,
+Format: json.RawMessage(opts.Format),
Options: opts.Options,
}

@@ -1123,12 +1026,16 @@ func generate(cmd *cobra.Command, opts runOptions) error {
}
}

+if opts.Format == "json" {
+opts.Format = `"` + opts.Format + `"`
+}

request := api.GenerateRequest{
Model: opts.Model,
Prompt: opts.Prompt,
Context: generateContext,
Images: opts.Images,
-Format: opts.Format,
+Format: json.RawMessage(opts.Format),
System: opts.System,
Options: opts.Options,
KeepAlive: opts.KeepAlive,
@@ -1284,7 +1191,7 @@ func NewCLI() *cobra.Command {
log.SetFlags(log.LstdFlags | log.Lshortfile)
cobra.EnableCommandSorting = false

-if runtime.GOOS == "windows" {
+if runtime.GOOS == "windows" && term.IsTerminal(int(os.Stdout.Fd())) {
console.ConsoleFromFile(os.Stdin) //nolint:errcheck
}

@@ -1316,7 +1223,7 @@ func NewCLI() *cobra.Command {
RunE: CreateHandler,
}

-createCmd.Flags().StringP("file", "f", "Modelfile", "Name of the Modelfile")
+createCmd.Flags().StringP("file", "f", "", "Name of the Modelfile (default \"Modelfile\"")
createCmd.Flags().StringP("quantize", "q", "", "Quantize model to this level (e.g. q4_0)")

showCmd := &cobra.Command{
@@ -1332,6 +1239,7 @@ func NewCLI() *cobra.Command {
showCmd.Flags().Bool("parameters", false, "Show parameters of a model")
showCmd.Flags().Bool("template", false, "Show template of a model")
showCmd.Flags().Bool("system", false, "Show system message of a model")
+showCmd.Flags().BoolP("verbose", "v", false, "Show detailed model information")

runCmd := &cobra.Command{
Use: "run MODEL [PROMPT]",
@@ -1414,6 +1322,18 @@ func NewCLI() *cobra.Command {
RunE: DeleteHandler,
}

+runnerCmd := &cobra.Command{
+Use: "runner",
+Hidden: true,
+RunE: func(cmd *cobra.Command, args []string) error {
+return runner.Execute(os.Args[1:])
+},
+FParseErrWhitelist: cobra.FParseErrWhitelist{UnknownFlags: true},
+}
+runnerCmd.SetHelpFunc(func(cmd *cobra.Command, args []string) {
+_ = runner.Execute(args[1:])
+})

envVars := envconfig.AsMap()

envs := []envconfig.EnvVar{envVars["OLLAMA_HOST"]}
@@ -1448,6 +1368,7 @@ func NewCLI() *cobra.Command {
envVars["OLLAMA_SCHED_SPREAD"],
envVars["OLLAMA_TMPDIR"],
envVars["OLLAMA_FLASH_ATTENTION"],
+envVars["OLLAMA_KV_CACHE_TYPE"],
envVars["OLLAMA_LLM_LIBRARY"],
envVars["OLLAMA_GPU_OVERHEAD"],
envVars["OLLAMA_LOAD_TIMEOUT"],
@@ -1469,6 +1390,7 @@ func NewCLI() *cobra.Command {
psCmd,
copyCmd,
deleteCmd,
+runnerCmd,
)

return rootCmd
646 cmd/cmd_test.go
@@ -4,12 +4,13 @@ import (
|
|||||||
"bytes"
|
"bytes"
|
||||||
"context"
|
"context"
|
||||||
"encoding/json"
|
"encoding/json"
|
||||||
|
"io"
|
||||||
"net/http"
|
"net/http"
|
||||||
"net/http/httptest"
|
"net/http/httptest"
|
||||||
"os"
|
"os"
|
||||||
"path/filepath"
|
|
||||||
"strings"
|
"strings"
|
||||||
"testing"
|
"testing"
|
||||||
|
"time"
|
||||||
|
|
||||||
"github.com/google/go-cmp/cmp"
|
"github.com/google/go-cmp/cmp"
|
||||||
"github.com/spf13/cobra"
|
"github.com/spf13/cobra"
|
||||||
@@ -26,7 +27,7 @@ func TestShowInfo(t *testing.T) {
|
|||||||
ParameterSize: "7B",
|
ParameterSize: "7B",
|
||||||
QuantizationLevel: "FP16",
|
QuantizationLevel: "FP16",
|
||||||
},
|
},
|
||||||
}, &b); err != nil {
|
}, false, &b); err != nil {
|
||||||
t.Fatal(err)
|
t.Fatal(err)
|
||||||
}
|
}
|
||||||
|
|
||||||
@@ -56,7 +57,7 @@ func TestShowInfo(t *testing.T) {
|
|||||||
ParameterSize: "7B",
|
ParameterSize: "7B",
|
||||||
QuantizationLevel: "FP16",
|
QuantizationLevel: "FP16",
|
||||||
},
|
},
|
||||||
}, &b); err != nil {
|
}, false, &b); err != nil {
|
||||||
t.Fatal(err)
|
t.Fatal(err)
|
||||||
}
|
}
|
||||||
|
|
||||||
@@ -67,6 +68,60 @@ func TestShowInfo(t *testing.T) {
|
|||||||
embedding length 0
|
embedding length 0
|
||||||
quantization FP16
|
quantization FP16
|
||||||
|
|
||||||
|
`
|
||||||
|
if diff := cmp.Diff(expect, b.String()); diff != "" {
|
||||||
|
t.Errorf("unexpected output (-want +got):\n%s", diff)
|
||||||
|
}
|
||||||
|
})
|
||||||
|
|
||||||
|
t.Run("verbose model", func(t *testing.T) {
|
||||||
|
var b bytes.Buffer
|
||||||
|
if err := showInfo(&api.ShowResponse{
|
||||||
|
Details: api.ModelDetails{
|
||||||
|
Family: "test",
|
||||||
|
ParameterSize: "8B",
|
||||||
|
QuantizationLevel: "FP16",
|
||||||
|
},
|
||||||
|
Parameters: `
|
||||||
|
stop up`,
|
||||||
|
ModelInfo: map[string]any{
|
||||||
|
"general.architecture": "test",
|
||||||
|
"general.parameter_count": float64(8_000_000_000),
|
||||||
|
"some.true_bool": true,
|
||||||
|
"some.false_bool": false,
|
||||||
|
"test.context_length": float64(1000),
|
||||||
|
"test.embedding_length": float64(11434),
|
||||||
|
},
|
||||||
|
Tensors: []api.Tensor{
|
||||||
|
{Name: "blk.0.attn_k.weight", Type: "BF16", Shape: []uint64{42, 3117}},
|
||||||
|
{Name: "blk.0.attn_q.weight", Type: "FP16", Shape: []uint64{3117, 42}},
|
||||||
|
},
|
||||||
|
}, true, &b); err != nil {
|
||||||
|
t.Fatal(err)
|
||||||
|
}
|
||||||
|
|
||||||
|
expect := ` Model
|
||||||
|
architecture test
|
||||||
|
parameters 8B
|
||||||
|
context length 1000
|
||||||
|
embedding length 11434
|
||||||
|
quantization FP16
|
||||||
|
|
||||||
|
Parameters
|
||||||
|
stop up
|
||||||
|
|
||||||
|
Metadata
|
||||||
|
general.architecture test
|
||||||
|
general.parameter_count 8e+09
|
||||||
|
some.false_bool false
|
||||||
|
some.true_bool true
|
||||||
|
test.context_length 1000
|
||||||
|
test.embedding_length 11434
|
||||||
|
|
||||||
|
Tensors
|
||||||
|
blk.0.attn_k.weight BF16 [42 3117]
|
||||||
|
blk.0.attn_q.weight FP16 [3117 42]
|
||||||
|
|
||||||
`
|
`
|
||||||
if diff := cmp.Diff(expect, b.String()); diff != "" {
|
if diff := cmp.Diff(expect, b.String()); diff != "" {
|
||||||
t.Errorf("unexpected output (-want +got):\n%s", diff)
|
t.Errorf("unexpected output (-want +got):\n%s", diff)
|
||||||
@@ -88,7 +143,7 @@ func TestShowInfo(t *testing.T) {
|
|||||||
stop you
|
stop you
|
||||||
stop up
|
stop up
|
||||||
temperature 99`,
|
temperature 99`,
|
||||||
}, &b); err != nil {
|
}, false, &b); err != nil {
|
||||||
t.Fatal(err)
|
t.Fatal(err)
|
||||||
}
|
}
|
||||||
|
|
||||||
@@ -125,7 +180,7 @@ func TestShowInfo(t *testing.T) {
|
|||||||
"clip.vision.embedding_length": float64(0),
|
"clip.vision.embedding_length": float64(0),
|
||||||
"clip.vision.projection_dim": float64(0),
|
"clip.vision.projection_dim": float64(0),
|
||||||
},
|
},
|
||||||
}, &b); err != nil {
|
}, false, &b); err != nil {
|
||||||
t.Fatal(err)
|
t.Fatal(err)
|
||||||
}
|
}
|
||||||
|
|
||||||
@@ -158,7 +213,7 @@ func TestShowInfo(t *testing.T) {
|
|||||||
Ahoy, matey!
|
Ahoy, matey!
|
||||||
Weigh anchor!
|
Weigh anchor!
|
||||||
`,
|
`,
|
||||||
}, &b); err != nil {
|
}, false, &b); err != nil {
|
||||||
t.Fatal(err)
|
t.Fatal(err)
|
||||||
}
|
}
|
||||||
|
|
||||||
@@ -179,19 +234,15 @@ Weigh anchor!
|
|||||||
|
|
||||||
t.Run("license", func(t *testing.T) {
|
t.Run("license", func(t *testing.T) {
|
||||||
var b bytes.Buffer
|
var b bytes.Buffer
|
||||||
license, err := os.ReadFile(filepath.Join("..", "LICENSE"))
|
license := "MIT License\nCopyright (c) Ollama\n"
|
||||||
if err != nil {
|
|
||||||
t.Fatal(err)
|
|
||||||
}
|
|
||||||
|
|
||||||
if err := showInfo(&api.ShowResponse{
|
if err := showInfo(&api.ShowResponse{
|
||||||
Details: api.ModelDetails{
|
Details: api.ModelDetails{
|
||||||
Family: "test",
|
Family: "test",
|
||||||
ParameterSize: "7B",
|
ParameterSize: "7B",
|
||||||
QuantizationLevel: "FP16",
|
QuantizationLevel: "FP16",
|
||||||
},
|
},
|
||||||
License: string(license),
|
License: license,
|
||||||
}, &b); err != nil {
|
}, false, &b); err != nil {
|
||||||
t.Fatal(err)
|
t.Fatal(err)
|
||||||
}
|
}
|
||||||
|
|
||||||
@@ -270,3 +321,572 @@ func TestDeleteHandler(t *testing.T) {
|
|||||||
t.Fatalf("DeleteHandler failed: expected error about stopping non-existent model, got %v", err)
|
t.Fatalf("DeleteHandler failed: expected error about stopping non-existent model, got %v", err)
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
|
func TestGetModelfileName(t *testing.T) {
|
||||||
|
tests := []struct {
|
||||||
|
name string
|
||||||
|
modelfileName string
|
||||||
|
fileExists bool
|
||||||
|
expectedName string
|
||||||
|
expectedErr error
|
||||||
|
}{
|
||||||
|
{
|
||||||
|
name: "no modelfile specified, no modelfile exists",
|
||||||
|
modelfileName: "",
|
||||||
|
fileExists: false,
|
||||||
|
expectedName: "",
|
||||||
|
expectedErr: os.ErrNotExist,
|
||||||
|
},
|
||||||
|
{
|
||||||
|
name: "no modelfile specified, modelfile exists",
|
||||||
|
modelfileName: "",
|
||||||
|
fileExists: true,
|
||||||
|
expectedName: "Modelfile",
|
||||||
|
expectedErr: nil,
|
||||||
|
},
|
||||||
|
{
|
||||||
|
name: "modelfile specified, no modelfile exists",
|
||||||
|
modelfileName: "crazyfile",
|
||||||
|
fileExists: false,
|
||||||
|
expectedName: "",
|
||||||
|
expectedErr: os.ErrNotExist,
|
||||||
|
},
|
||||||
|
{
|
||||||
|
name: "modelfile specified, modelfile exists",
|
||||||
|
modelfileName: "anotherfile",
|
||||||
|
fileExists: true,
|
||||||
|
expectedName: "anotherfile",
|
||||||
|
expectedErr: nil,
|
||||||
|
},
|
||||||
|
}
|
||||||
|
|
||||||
|
for _, tt := range tests {
|
||||||
|
t.Run(tt.name, func(t *testing.T) {
|
||||||
|
cmd := &cobra.Command{
|
||||||
|
Use: "fakecmd",
|
||||||
|
}
|
||||||
|
cmd.Flags().String("file", "", "path to modelfile")
|
||||||
|
|
||||||
|
var expectedFilename string
|
||||||
|
|
||||||
|
if tt.fileExists {
|
||||||
|
tempDir, err := os.MkdirTemp("", "modelfiledir")
|
||||||
|
defer os.RemoveAll(tempDir)
|
||||||
|
if err != nil {
|
||||||
|
t.Fatalf("temp modelfile dir creation failed: %v", err)
|
||||||
|
}
|
||||||
|
var fn string
|
||||||
|
if tt.modelfileName != "" {
|
||||||
|
fn = tt.modelfileName
|
||||||
|
} else {
|
||||||
|
fn = "Modelfile"
|
||||||
|
}
|
||||||
|
|
||||||
|
tempFile, err := os.CreateTemp(tempDir, fn)
|
||||||
|
if err != nil {
|
||||||
|
t.Fatalf("temp modelfile creation failed: %v", err)
|
||||||
|
}
|
||||||
|
|
||||||
|
expectedFilename = tempFile.Name()
|
||||||
|
err = cmd.Flags().Set("file", expectedFilename)
|
||||||
|
if err != nil {
|
||||||
|
t.Fatalf("couldn't set file flag: %v", err)
|
||||||
|
}
|
||||||
|
} else {
|
||||||
|
expectedFilename = tt.expectedName
|
||||||
|
if tt.modelfileName != "" {
|
||||||
|
err := cmd.Flags().Set("file", tt.modelfileName)
|
||||||
|
if err != nil {
|
||||||
|
t.Fatalf("couldn't set file flag: %v", err)
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
actualFilename, actualErr := getModelfileName(cmd)
|
||||||
|
|
||||||
|
if actualFilename != expectedFilename {
|
||||||
|
t.Errorf("expected filename: '%s' actual filename: '%s'", expectedFilename, actualFilename)
|
||||||
|
}
|
||||||
|
|
||||||
|
if tt.expectedErr != os.ErrNotExist {
|
||||||
|
if actualErr != tt.expectedErr {
|
||||||
|
t.Errorf("expected err: %v actual err: %v", tt.expectedErr, actualErr)
|
||||||
|
}
|
||||||
|
} else {
|
||||||
|
if !os.IsNotExist(actualErr) {
|
||||||
|
t.Errorf("expected err: %v actual err: %v", tt.expectedErr, actualErr)
|
||||||
|
}
|
||||||
|
}
|
||||||
|
})
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
func TestPushHandler(t *testing.T) {
|
||||||
|
tests := []struct {
|
||||||
|
name string
|
||||||
|
modelName string
|
||||||
|
serverResponse map[string]func(w http.ResponseWriter, r *http.Request)
|
||||||
|
expectedError string
|
||||||
|
expectedOutput string
|
||||||
|
}{
|
||||||
|
{
|
||||||
|
name: "successful push",
|
||||||
|
modelName: "test-model",
|
||||||
|
serverResponse: map[string]func(w http.ResponseWriter, r *http.Request){
|
||||||
|
"/api/push": func(w http.ResponseWriter, r *http.Request) {
|
||||||
|
if r.Method != http.MethodPost {
|
||||||
|
t.Errorf("expected POST request, got %s", r.Method)
|
||||||
|
}
|
||||||
|
|
||||||
|
var req api.PushRequest
|
||||||
|
if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
|
||||||
|
http.Error(w, err.Error(), http.StatusBadRequest)
|
||||||
|
return
|
||||||
|
}
|
||||||
|
|
||||||
|
if req.Name != "test-model" {
|
||||||
|
t.Errorf("expected model name 'test-model', got %s", req.Name)
|
||||||
|
}
|
||||||
|
|
||||||
|
// Simulate progress updates
|
||||||
|
responses := []api.ProgressResponse{
|
||||||
|
{Status: "preparing manifest"},
|
||||||
|
{Digest: "sha256:abc123456789", Total: 100, Completed: 50},
|
||||||
|
{Digest: "sha256:abc123456789", Total: 100, Completed: 100},
|
||||||
|
}
|
||||||
|
|
||||||
|
for _, resp := range responses {
|
||||||
|
if err := json.NewEncoder(w).Encode(resp); err != nil {
|
||||||
|
http.Error(w, err.Error(), http.StatusInternalServerError)
|
||||||
|
return
|
||||||
|
}
|
||||||
|
w.(http.Flusher).Flush()
|
||||||
|
}
|
||||||
|
},
|
||||||
|
},
|
||||||
|
expectedOutput: "\nYou can find your model at:\n\n\thttps://ollama.com/test-model\n",
|
||||||
|
},
|
||||||
|
{
|
||||||
|
name: "unauthorized push",
|
||||||
|
modelName: "unauthorized-model",
|
||||||
|
serverResponse: map[string]func(w http.ResponseWriter, r *http.Request){
|
||||||
|
"/api/push": func(w http.ResponseWriter, r *http.Request) {
|
||||||
|
w.Header().Set("Content-Type", "application/json")
|
||||||
|
w.WriteHeader(http.StatusUnauthorized)
|
||||||
|
err := json.NewEncoder(w).Encode(map[string]string{
|
||||||
|
"error": "access denied",
|
||||||
|
})
|
||||||
|
if err != nil {
|
||||||
|
t.Fatal(err)
|
||||||
|
}
|
||||||
|
},
|
||||||
|
},
|
||||||
|
expectedError: "you are not authorized to push to this namespace, create the model under a namespace you own",
|
||||||
|
},
|
||||||
|
}
|
||||||
|
|
||||||
|
for _, tt := range tests {
|
||||||
|
t.Run(tt.name, func(t *testing.T) {
|
||||||
|
mockServer := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
|
||||||
|
if handler, ok := tt.serverResponse[r.URL.Path]; ok {
|
||||||
|
handler(w, r)
|
||||||
|
return
|
||||||
|
}
|
||||||
|
http.Error(w, "not found", http.StatusNotFound)
|
||||||
|
}))
|
||||||
|
defer mockServer.Close()
|
||||||
|
|
||||||
|
t.Setenv("OLLAMA_HOST", mockServer.URL)
|
||||||
|
|
||||||
|
cmd := &cobra.Command{}
|
||||||
|
cmd.Flags().Bool("insecure", false, "")
|
||||||
|
cmd.SetContext(context.TODO())
|
||||||
|
|
||||||
|
// Redirect stderr to capture progress output
|
||||||
|
oldStderr := os.Stderr
|
||||||
|
r, w, _ := os.Pipe()
|
||||||
|
os.Stderr = w
|
||||||
|
|
||||||
|
// Capture stdout for the "Model pushed" message
|
||||||
|
oldStdout := os.Stdout
|
||||||
|
outR, outW, _ := os.Pipe()
|
||||||
|
os.Stdout = outW
|
||||||
|
|
||||||
|
err := PushHandler(cmd, []string{tt.modelName})
|
||||||
|
|
||||||
|
// Restore stderr
|
||||||
|
w.Close()
|
||||||
|
os.Stderr = oldStderr
|
||||||
|
// drain the pipe
|
||||||
|
if _, err := io.ReadAll(r); err != nil {
|
||||||
|
t.Fatal(err)
|
||||||
|
}
|
||||||
|
|
||||||
|
// Restore stdout and get output
|
||||||
|
outW.Close()
|
||||||
|
os.Stdout = oldStdout
|
||||||
|
stdout, _ := io.ReadAll(outR)
|
||||||
|
|
||||||
|
if tt.expectedError == "" {
|
||||||
|
if err != nil {
|
||||||
|
t.Errorf("expected no error, got %v", err)
|
||||||
|
}
|
||||||
|
if tt.expectedOutput != "" {
|
||||||
|
if got := string(stdout); got != tt.expectedOutput {
|
||||||
|
t.Errorf("expected output %q, got %q", tt.expectedOutput, got)
|
||||||
|
}
|
||||||
|
}
|
||||||
|
} else {
|
||||||
|
if err == nil || !strings.Contains(err.Error(), tt.expectedError) {
|
||||||
|
t.Errorf("expected error containing %q, got %v", tt.expectedError, err)
|
||||||
|
}
|
||||||
|
}
|
||||||
|
})
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
func TestListHandler(t *testing.T) {
|
||||||
|
tests := []struct {
|
||||||
|
name string
|
||||||
|
args []string
|
||||||
|
serverResponse []api.ListModelResponse
|
||||||
|
expectedError string
|
||||||
|
expectedOutput string
|
||||||
|
}{
|
||||||
|
{
|
||||||
|
name: "list all models",
|
||||||
|
args: []string{},
|
||||||
|
serverResponse: []api.ListModelResponse{
|
||||||
|
{Name: "model1", Digest: "sha256:abc123", Size: 1024, ModifiedAt: time.Now().Add(-24 * time.Hour)},
|
||||||
|
{Name: "model2", Digest: "sha256:def456", Size: 2048, ModifiedAt: time.Now().Add(-48 * time.Hour)},
|
||||||
|
},
|
||||||
|
expectedOutput: "NAME ID SIZE MODIFIED \n" +
|
||||||
|
"model1 sha256:abc12 1.0 KB 24 hours ago \n" +
|
||||||
|
"model2 sha256:def45 2.0 KB 2 days ago \n",
|
||||||
|
},
|
||||||
|
{
|
||||||
|
name: "filter models by prefix",
|
||||||
|
args: []string{"model1"},
|
||||||
|
serverResponse: []api.ListModelResponse{
|
||||||
|
{Name: "model1", Digest: "sha256:abc123", Size: 1024, ModifiedAt: time.Now().Add(-24 * time.Hour)},
|
||||||
|
{Name: "model2", Digest: "sha256:def456", Size: 2048, ModifiedAt: time.Now().Add(-24 * time.Hour)},
|
||||||
|
},
|
||||||
|
expectedOutput: "NAME ID SIZE MODIFIED \n" +
|
||||||
|
"model1 sha256:abc12 1.0 KB 24 hours ago \n",
|
||||||
|
},
|
||||||
|
{
|
||||||
|
name: "server error",
|
||||||
|
args: []string{},
|
||||||
|
expectedError: "server error",
|
||||||
|
},
|
||||||
|
}
|
||||||
|
|
||||||
|
for _, tt := range tests {
|
||||||
|
t.Run(tt.name, func(t *testing.T) {
|
||||||
|
mockServer := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
|
||||||
|
if r.URL.Path != "/api/tags" || r.Method != http.MethodGet {
|
||||||
|
t.Errorf("unexpected request to %s %s", r.Method, r.URL.Path)
|
||||||
|
http.Error(w, "not found", http.StatusNotFound)
|
||||||
|
return
|
||||||
|
}
|
||||||
|
|
||||||
|
if tt.expectedError != "" {
|
||||||
|
http.Error(w, tt.expectedError, http.StatusInternalServerError)
|
||||||
|
return
|
||||||
|
}
|
||||||
|
|
||||||
|
response := api.ListResponse{Models: tt.serverResponse}
|
||||||
|
if err := json.NewEncoder(w).Encode(response); err != nil {
|
||||||
|
t.Fatal(err)
|
||||||
|
}
|
||||||
|
}))
|
||||||
|
defer mockServer.Close()
|
||||||
|
|
||||||
|
t.Setenv("OLLAMA_HOST", mockServer.URL)
|
||||||
|
|
||||||
|
cmd := &cobra.Command{}
|
||||||
|
cmd.SetContext(context.TODO())
|
||||||
|
|
||||||
|
// Capture stdout
|
||||||
|
oldStdout := os.Stdout
|
||||||
|
r, w, _ := os.Pipe()
|
||||||
|
os.Stdout = w
|
||||||
|
|
||||||
|
err := ListHandler(cmd, tt.args)
|
||||||
|
|
||||||
|
// Restore stdout and get output
|
||||||
|
w.Close()
|
||||||
|
os.Stdout = oldStdout
|
||||||
|
output, _ := io.ReadAll(r)
|
||||||
|
|
||||||
|
if tt.expectedError == "" {
|
||||||
|
if err != nil {
|
||||||
|
t.Errorf("expected no error, got %v", err)
|
||||||
|
}
|
||||||
|
if got := string(output); got != tt.expectedOutput {
|
||||||
|
t.Errorf("expected output:\n%s\ngot:\n%s", tt.expectedOutput, got)
|
||||||
|
}
|
||||||
|
} else {
|
||||||
|
if err == nil || !strings.Contains(err.Error(), tt.expectedError) {
|
||||||
|
t.Errorf("expected error containing %q, got %v", tt.expectedError, err)
|
||||||
|
}
|
||||||
|
}
|
||||||
|
})
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
func TestCreateHandler(t *testing.T) {
|
||||||
|
tests := []struct {
|
||||||
|
name string
|
||||||
|
modelName string
|
||||||
|
modelFile string
|
||||||
|
serverResponse map[string]func(w http.ResponseWriter, r *http.Request)
|
||||||
|
expectedError string
|
||||||
|
expectedOutput string
|
||||||
|
}{
|
||||||
|
{
|
||||||
|
name: "successful create",
|
||||||
|
modelName: "test-model",
|
||||||
|
modelFile: "FROM foo",
|
||||||
|
serverResponse: map[string]func(w http.ResponseWriter, r *http.Request){
|
||||||
|
"/api/create": func(w http.ResponseWriter, r *http.Request) {
|
||||||
|
if r.Method != http.MethodPost {
|
||||||
|
t.Errorf("expected POST request, got %s", r.Method)
|
||||||
|
}
|
||||||
|
|
||||||
|
req := api.CreateRequest{}
|
||||||
|
if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
|
||||||
|
http.Error(w, err.Error(), http.StatusBadRequest)
|
||||||
|
return
|
||||||
|
}
|
||||||
|
|
||||||
|
if req.Name != "test-model" {
|
||||||
|
t.Errorf("expected model name 'test-model', got %s", req.Name)
|
||||||
|
}
|
||||||
|
|
||||||
|
if req.From != "foo" {
|
||||||
|
t.Errorf("expected from 'foo', got %s", req.From)
|
||||||
|
}
|
||||||
|
|
||||||
|
responses := []api.ProgressResponse{
|
||||||
|
{Status: "using existing layer sha256:56bb8bd477a519ffa694fc449c2413c6f0e1d3b1c88fa7e3c9d88d3ae49d4dcb"},
|
||||||
|
{Status: "writing manifest"},
|
||||||
|
{Status: "success"},
|
||||||
|
}
|
||||||
|
|
||||||
|
for _, resp := range responses {
|
||||||
|
if err := json.NewEncoder(w).Encode(resp); err != nil {
|
||||||
|
http.Error(w, err.Error(), http.StatusInternalServerError)
|
||||||
|
return
|
||||||
|
}
|
||||||
|
w.(http.Flusher).Flush()
|
||||||
|
}
|
||||||
|
},
|
||||||
|
},
|
||||||
|
expectedOutput: "",
|
||||||
|
},
|
||||||
|
}
|
||||||
|
|
||||||
|
for _, tt := range tests {
|
||||||
|
t.Run(tt.name, func(t *testing.T) {
|
||||||
|
mockServer := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
|
||||||
|
handler, ok := tt.serverResponse[r.URL.Path]
|
||||||
|
if !ok {
|
||||||
|
t.Errorf("unexpected request to %s", r.URL.Path)
|
||||||
|
http.Error(w, "not found", http.StatusNotFound)
|
||||||
|
return
|
||||||
|
}
|
||||||
|
handler(w, r)
|
||||||
|
}))
|
||||||
|
t.Setenv("OLLAMA_HOST", mockServer.URL)
|
||||||
|
t.Cleanup(mockServer.Close)
|
||||||
|
tempFile, err := os.CreateTemp("", "modelfile")
|
||||||
|
if err != nil {
|
||||||
|
t.Fatal(err)
|
||||||
|
}
|
||||||
|
defer os.Remove(tempFile.Name())
|
||||||
|
|
||||||
|
if _, err := tempFile.WriteString(tt.modelFile); err != nil {
|
||||||
|
t.Fatal(err)
|
||||||
|
}
|
||||||
|
if err := tempFile.Close(); err != nil {
|
||||||
|
t.Fatal(err)
|
||||||
|
}
|
||||||
|
|
||||||
|
cmd := &cobra.Command{}
|
||||||
|
cmd.Flags().String("file", "", "")
|
||||||
|
if err := cmd.Flags().Set("file", tempFile.Name()); err != nil {
|
||||||
|
t.Fatal(err)
|
||||||
|
}
|
||||||
|
|
||||||
|
cmd.Flags().Bool("insecure", false, "")
|
||||||
|
cmd.SetContext(context.TODO())
|
||||||
|
|
||||||
|
// Redirect stderr to capture progress output
|
||||||
|
oldStderr := os.Stderr
|
||||||
|
r, w, _ := os.Pipe()
|
||||||
|
os.Stderr = w
|
||||||
|
|
||||||
|
// Capture stdout for the "Model pushed" message
|
||||||
|
oldStdout := os.Stdout
|
||||||
|
outR, outW, _ := os.Pipe()
|
||||||
|
os.Stdout = outW
|
||||||
|
|
||||||
|
err = CreateHandler(cmd, []string{tt.modelName})
|
||||||
|
|
||||||
|
// Restore stderr
|
||||||
|
w.Close()
|
||||||
|
os.Stderr = oldStderr
|
||||||
|
// drain the pipe
|
||||||
|
if _, err := io.ReadAll(r); err != nil {
|
||||||
|
t.Fatal(err)
|
||||||
|
}
|
||||||
|
|
||||||
|
// Restore stdout and get output
|
||||||
|
outW.Close()
|
||||||
|
os.Stdout = oldStdout
|
||||||
|
stdout, _ := io.ReadAll(outR)
|
||||||
|
|
||||||
|
if tt.expectedError == "" {
|
||||||
|
if err != nil {
|
||||||
|
t.Errorf("expected no error, got %v", err)
|
||||||
|
}
|
||||||
|
|
||||||
|
if tt.expectedOutput != "" {
|
||||||
|
if got := string(stdout); got != tt.expectedOutput {
|
||||||
|
t.Errorf("expected output %q, got %q", tt.expectedOutput, got)
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
})
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
func TestNewCreateRequest(t *testing.T) {
|
||||||
|
tests := []struct {
|
||||||
|
name string
|
||||||
|
from string
|
||||||
|
opts runOptions
|
||||||
|
expected *api.CreateRequest
|
||||||
|
}{
|
||||||
|
{
|
||||||
|
"basic test",
|
||||||
|
"newmodel",
|
||||||
|
runOptions{
|
||||||
|
Model: "mymodel",
|
||||||
|
ParentModel: "",
|
||||||
|
Prompt: "You are a fun AI agent",
|
||||||
|
Messages: []api.Message{},
|
||||||
|
WordWrap: true,
|
||||||
|
},
|
||||||
|
&api.CreateRequest{
|
||||||
|
From: "mymodel",
|
||||||
|
Model: "newmodel",
|
||||||
|
},
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"parent model test",
|
||||||
|
"newmodel",
|
||||||
|
runOptions{
|
||||||
|
Model: "mymodel",
|
||||||
|
ParentModel: "parentmodel",
|
||||||
|
Messages: []api.Message{},
|
||||||
|
WordWrap: true,
|
||||||
|
},
|
||||||
|
&api.CreateRequest{
|
||||||
|
From: "parentmodel",
|
||||||
|
Model: "newmodel",
|
||||||
|
},
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"parent model as filepath test",
|
||||||
|
"newmodel",
|
||||||
|
runOptions{
|
||||||
|
Model: "mymodel",
|
||||||
|
ParentModel: "/some/file/like/etc/passwd",
|
||||||
|
Messages: []api.Message{},
|
||||||
|
WordWrap: true,
|
||||||
|
},
|
||||||
|
&api.CreateRequest{
|
||||||
|
From: "mymodel",
|
||||||
|
Model: "newmodel",
|
||||||
|
},
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"parent model as windows filepath test",
|
||||||
|
"newmodel",
|
||||||
|
runOptions{
|
||||||
|
Model: "mymodel",
|
||||||
|
ParentModel: "D:\\some\\file\\like\\etc\\passwd",
|
||||||
|
Messages: []api.Message{},
|
||||||
|
WordWrap: true,
|
||||||
|
},
|
||||||
|
&api.CreateRequest{
|
||||||
|
From: "mymodel",
|
||||||
|
Model: "newmodel",
|
||||||
|
},
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"options test",
|
||||||
|
"newmodel",
|
||||||
|
runOptions{
|
||||||
|
Model: "mymodel",
|
||||||
|
ParentModel: "parentmodel",
|
||||||
|
Options: map[string]any{
|
||||||
|
"temperature": 1.0,
|
||||||
|
},
|
||||||
|
},
|
||||||
|
&api.CreateRequest{
|
||||||
|
From: "parentmodel",
|
||||||
|
Model: "newmodel",
|
||||||
|
Parameters: map[string]any{
|
||||||
|
"temperature": 1.0,
|
||||||
|
},
|
||||||
|
},
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"messages test",
|
||||||
|
"newmodel",
|
||||||
|
runOptions{
|
||||||
|
Model: "mymodel",
|
||||||
|
ParentModel: "parentmodel",
|
||||||
|
System: "You are a fun AI agent",
|
||||||
|
Messages: []api.Message{
|
||||||
|
{
|
||||||
|
Role: "user",
|
||||||
|
Content: "hello there!",
|
||||||
|
},
|
||||||
|
{
|
||||||
|
Role: "assistant",
|
||||||
|
Content: "hello to you!",
|
||||||
|
},
|
||||||
|
},
|
||||||
|
WordWrap: true,
|
||||||
|
},
|
||||||
|
&api.CreateRequest{
|
||||||
|
From: "parentmodel",
|
||||||
|
Model: "newmodel",
|
||||||
|
System: "You are a fun AI agent",
|
||||||
|
Messages: []api.Message{
|
||||||
|
{
|
||||||
|
Role: "user",
|
||||||
|
Content: "hello there!",
|
||||||
|
},
|
||||||
|
{
|
||||||
|
Role: "assistant",
|
||||||
|
Content: "hello to you!",
|
||||||
|
},
|
||||||
|
},
|
||||||
|
},
|
||||||
|
},
|
||||||
|
}
|
||||||
|
|
||||||
|
for _, tt := range tests {
|
||||||
|
t.Run(tt.name, func(t *testing.T) {
|
||||||
|
actual := NewCreateRequest(tt.from, tt.opts)
|
||||||
|
if !cmp.Equal(actual, tt.expected) {
|
||||||
|
t.Errorf("expected output %#v, got %#v", tt.expected, actual)
|
||||||
|
}
|
||||||
|
})
|
||||||
|
}
|
||||||
|
}
|
||||||
|
@@ -13,13 +13,12 @@ import (
"strings"

"github.com/spf13/cobra"
-"golang.org/x/exp/maps"

"github.com/ollama/ollama/api"
"github.com/ollama/ollama/envconfig"
-"github.com/ollama/ollama/parser"
"github.com/ollama/ollama/readline"
"github.com/ollama/ollama/types/errtypes"
+"github.com/ollama/ollama/types/model"
)

type MultilineState int
@@ -197,6 +196,10 @@ func generateInteractive(cmd *cobra.Command, opts runOptions) error {
|
|||||||
opts.Messages = []api.Message{}
|
opts.Messages = []api.Message{}
|
||||||
fmt.Printf("Loading model '%s'\n", opts.Model)
|
fmt.Printf("Loading model '%s'\n", opts.Model)
|
||||||
if err := loadOrUnloadModel(cmd, &opts); err != nil {
|
if err := loadOrUnloadModel(cmd, &opts); err != nil {
|
||||||
|
if strings.Contains(err.Error(), "not found") {
|
||||||
|
fmt.Printf("error: %v\n", err)
|
||||||
|
continue
|
||||||
|
}
|
||||||
return err
|
return err
|
||||||
}
|
}
|
||||||
continue
|
continue
|
||||||
@@ -213,10 +216,7 @@ func generateInteractive(cmd *cobra.Command, opts runOptions) error {
|
|||||||
return err
|
return err
|
||||||
}
|
}
|
||||||
|
|
||||||
req := &api.CreateRequest{
|
req := NewCreateRequest(args[1], opts)
|
||||||
Name: args[1],
|
|
||||||
Modelfile: buildModelfile(opts),
|
|
||||||
}
|
|
||||||
fn := func(resp api.ProgressResponse) error { return nil }
|
fn := func(resp api.ProgressResponse) error { return nil }
|
||||||
err = client.Create(cmd.Context(), req, fn)
|
err = client.Create(cmd.Context(), req, fn)
|
||||||
if err != nil {
|
if err != nil {
|
||||||
@@ -319,8 +319,6 @@ func generateInteractive(cmd *cobra.Command, opts runOptions) error {
|
|||||||
opts.Messages = append(opts.Messages, newMessage)
|
opts.Messages = append(opts.Messages, newMessage)
|
||||||
}
|
}
|
||||||
fmt.Println("Set system message.")
|
fmt.Println("Set system message.")
|
||||||
sb.Reset()
|
|
||||||
|
|
||||||
sb.Reset()
|
sb.Reset()
|
||||||
continue
|
continue
|
||||||
default:
|
default:
|
||||||
@@ -350,7 +348,7 @@ func generateInteractive(cmd *cobra.Command, opts runOptions) error {
|
|||||||
|
|
||||||
switch args[1] {
|
switch args[1] {
|
||||||
case "info":
|
case "info":
|
||||||
_ = showInfo(resp, os.Stderr)
|
_ = showInfo(resp, false, os.Stderr)
|
||||||
case "license":
|
case "license":
|
||||||
if resp.License == "" {
|
if resp.License == "" {
|
||||||
fmt.Println("No license was specified for this model.")
|
fmt.Println("No license was specified for this model.")
|
||||||
@@ -461,68 +459,58 @@ func generateInteractive(cmd *cobra.Command, opts runOptions) error {
}
}

-func buildModelfile(opts runOptions) string {
-var f parser.File
-f.Commands = append(f.Commands, parser.Command{Name: "model", Args: cmp.Or(opts.ParentModel, opts.Model)})
+func NewCreateRequest(name string, opts runOptions) *api.CreateRequest {
+parentModel := opts.ParentModel
+modelName := model.ParseName(parentModel)
+if !modelName.IsValid() {
+parentModel = ""
+}
+
+req := &api.CreateRequest{
+Model: name,
+From: cmp.Or(parentModel, opts.Model),
+}

if opts.System != "" {
-f.Commands = append(f.Commands, parser.Command{Name: "system", Args: opts.System})
+req.System = opts.System
}

-keys := maps.Keys(opts.Options)
-slices.Sort(keys)
-for _, k := range keys {
-v := opts.Options[k]
-var cmds []parser.Command
-switch t := v.(type) {
-case []string:
-for _, s := range t {
-cmds = append(cmds, parser.Command{Name: k, Args: s})
-}
-default:
-cmds = append(cmds, parser.Command{Name: k, Args: fmt.Sprintf("%v", t)})
-}
-f.Commands = append(f.Commands, cmds...)
+if len(opts.Options) > 0 {
+req.Parameters = opts.Options
}

-for _, msg := range opts.Messages {
-f.Commands = append(f.Commands, parser.Command{Name: "message", Args: fmt.Sprintf("%s: %s", msg.Role, msg.Content)})
+if len(opts.Messages) > 0 {
+req.Messages = opts.Messages
}

-return f.String()
+return req
}

func normalizeFilePath(fp string) string {
-// Define a map of escaped characters and their replacements
-replacements := map[string]string{
-"\\ ": " ", // Escaped space
-"\\(": "(", // Escaped left parenthesis
-"\\)": ")", // Escaped right parenthesis
-"\\[": "[", // Escaped left square bracket
-"\\]": "]", // Escaped right square bracket
-"\\{": "{", // Escaped left curly brace
-"\\}": "}", // Escaped right curly brace
-"\\$": "$", // Escaped dollar sign
-"\\&": "&", // Escaped ampersand
-"\\;": ";", // Escaped semicolon
-"\\'": "'", // Escaped single quote
-"\\\\": "\\", // Escaped backslash
-"\\*": "*", // Escaped asterisk
-"\\?": "?", // Escaped question mark
-}
-for escaped, actual := range replacements {
-fp = strings.ReplaceAll(fp, escaped, actual)
-}
-return fp
+return strings.NewReplacer(
+"\\ ", " ", // Escaped space
+"\\(", "(", // Escaped left parenthesis
+"\\)", ")", // Escaped right parenthesis
+"\\[", "[", // Escaped left square bracket
+"\\]", "]", // Escaped right square bracket
+"\\{", "{", // Escaped left curly brace
+"\\}", "}", // Escaped right curly brace
+"\\$", "$", // Escaped dollar sign
+"\\&", "&", // Escaped ampersand
+"\\;", ";", // Escaped semicolon
+"\\'", "'", // Escaped single quote
+"\\\\", "\\", // Escaped backslash
+"\\*", "*", // Escaped asterisk
+"\\?", "?", // Escaped question mark
+).Replace(fp)
}

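As a quick illustration of the rewritten normalizeFilePath above, here is a hedged usage sketch; it assumes the call is made from inside the same cmd package (the function is unexported), and the expected results are inferred from the replacer pairs rather than taken from the repository's tests.

package cmd

import "fmt"

// normalizeExample is an illustrative helper, not part of the diff.
func normalizeExample() {
	// escaped spaces and parentheses are unescaped before the path is opened
	fmt.Println(normalizeFilePath(`./my\ dir/photo\ \(1\).png`)) // ./my dir/photo (1).png
	fmt.Println(normalizeFilePath(`/tmp/a\&b\;c.png`))           // /tmp/a&b;c.png
}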
func extractFileNames(input string) []string {
// Regex to match file paths starting with optional drive letter, / ./ \ or .\ and include escaped or unescaped spaces (\ or %20)
// and followed by more characters and a file extension
// This will capture non filename strings, but we'll check for file existence to remove mismatches
-regexPattern := `(?:[a-zA-Z]:)?(?:\./|/|\\)[\S\\ ]+?\.(?i:jpg|jpeg|png|svg)\b`
+regexPattern := `(?:[a-zA-Z]:)?(?:\./|/|\\)[\S\\ ]+?\.(?i:jpg|jpeg|png)\b`
re := regexp.MustCompile(regexPattern)

return re.FindAllString(input, -1)
@@ -535,10 +523,9 @@ func extractFileData(input string) (string, []api.ImageData, error) {
for _, fp := range filePaths {
nfp := normalizeFilePath(fp)
data, err := getImageData(nfp)
-if err != nil {
-if os.IsNotExist(err) {
-continue
-}
+if errors.Is(err, os.ErrNotExist) {
+continue
+} else if err != nil {
fmt.Fprintf(os.Stderr, "Couldn't process image: %q\n", err)
return "", imgs, err
}
@@ -546,7 +533,7 @@ func extractFileData(input string) (string, []api.ImageData, error) {
input = strings.ReplaceAll(input, fp, "")
imgs = append(imgs, data)
}
-return input, imgs, nil
+return strings.TrimSpace(input), imgs, nil
}

func getImageData(filePath string) ([]byte, error) {
|
@@ -3,105 +3,50 @@ package cmd
 import (
 	"testing"
 
+	"github.com/google/go-cmp/cmp"
 	"github.com/stretchr/testify/assert"
+
+	"github.com/ollama/ollama/api"
 )
 
 func TestExtractFilenames(t *testing.T) {
 	// Unix style paths
 	input := ` some preamble
-./relative\ path/one.png inbetween1 ./not a valid two.jpg inbetween2
-/unescaped space /three.jpeg inbetween3 /valid\ path/dir/four.png "./quoted with spaces/five.svg`
+./relative\ path/one.png inbetween1 ./not a valid two.jpg inbetween2 ./1.svg
+/unescaped space /three.jpeg inbetween3 /valid\ path/dir/four.png "./quoted with spaces/five.JPG`
 	res := extractFileNames(input)
 	assert.Len(t, res, 5)
 	assert.Contains(t, res[0], "one.png")
 	assert.Contains(t, res[1], "two.jpg")
 	assert.Contains(t, res[2], "three.jpeg")
 	assert.Contains(t, res[3], "four.png")
-	assert.Contains(t, res[4], "five.svg")
+	assert.Contains(t, res[4], "five.JPG")
 	assert.NotContains(t, res[4], '"')
-	assert.NotContains(t, res, "inbtween")
+	assert.NotContains(t, res, "inbetween1")
+	assert.NotContains(t, res, "./1.svg")
 
 	// Windows style paths
 	input = ` some preamble
 c:/users/jdoe/one.png inbetween1 c:/program files/someplace/two.jpg inbetween2
 /absolute/nospace/three.jpeg inbetween3 /absolute/with space/four.png inbetween4
-./relative\ path/five.svg inbetween5 "./relative with/spaces/six.png inbetween6
-d:\path with\spaces\seven.svg inbetween7 c:\users\jdoe\eight.png inbetween8
-d:\program files\someplace\nine.png inbetween9 "E:\program files\someplace\ten.svg some ending
+./relative\ path/five.JPG inbetween5 "./relative with/spaces/six.png inbetween6
+d:\path with\spaces\seven.JPEG inbetween7 c:\users\jdoe\eight.png inbetween8
+d:\program files\someplace\nine.png inbetween9 "E:\program files\someplace\ten.PNG some ending
 `
 	res = extractFileNames(input)
 	assert.Len(t, res, 10)
-	assert.NotContains(t, res, "inbtween")
+	assert.NotContains(t, res, "inbetween2")
 	assert.Contains(t, res[0], "one.png")
 	assert.Contains(t, res[0], "c:")
 	assert.Contains(t, res[1], "two.jpg")
 	assert.Contains(t, res[1], "c:")
 	assert.Contains(t, res[2], "three.jpeg")
 	assert.Contains(t, res[3], "four.png")
-	assert.Contains(t, res[4], "five.svg")
+	assert.Contains(t, res[4], "five.JPG")
 	assert.Contains(t, res[5], "six.png")
-	assert.Contains(t, res[6], "seven.svg")
+	assert.Contains(t, res[6], "seven.JPEG")
 	assert.Contains(t, res[6], "d:")
 	assert.Contains(t, res[7], "eight.png")
 	assert.Contains(t, res[7], "c:")
 	assert.Contains(t, res[8], "nine.png")
 	assert.Contains(t, res[8], "d:")
-	assert.Contains(t, res[9], "ten.svg")
+	assert.Contains(t, res[9], "ten.PNG")
 	assert.Contains(t, res[9], "E:")
 }
||||||
|
|
||||||
func TestModelfileBuilder(t *testing.T) {
|
|
||||||
opts := runOptions{
|
|
||||||
Model: "hork",
|
|
||||||
System: "You are part horse and part shark, but all hork. Do horklike things",
|
|
||||||
Messages: []api.Message{
|
|
||||||
{Role: "user", Content: "Hey there hork!"},
|
|
||||||
{Role: "assistant", Content: "Yes it is true, I am half horse, half shark."},
|
|
||||||
},
|
|
||||||
Options: map[string]any{
|
|
||||||
"temperature": 0.9,
|
|
||||||
"seed": 42,
|
|
||||||
"penalize_newline": false,
|
|
||||||
"stop": []string{"hi", "there"},
|
|
||||||
},
|
|
||||||
}
|
|
||||||
|
|
||||||
t.Run("model", func(t *testing.T) {
|
|
||||||
expect := `FROM hork
|
|
||||||
SYSTEM You are part horse and part shark, but all hork. Do horklike things
|
|
||||||
PARAMETER penalize_newline false
|
|
||||||
PARAMETER seed 42
|
|
||||||
PARAMETER stop hi
|
|
||||||
PARAMETER stop there
|
|
||||||
PARAMETER temperature 0.9
|
|
||||||
MESSAGE user Hey there hork!
|
|
||||||
MESSAGE assistant Yes it is true, I am half horse, half shark.
|
|
||||||
`
|
|
||||||
|
|
||||||
actual := buildModelfile(opts)
|
|
||||||
if diff := cmp.Diff(expect, actual); diff != "" {
|
|
||||||
t.Errorf("mismatch (-want +got):\n%s", diff)
|
|
||||||
}
|
|
||||||
})
|
|
||||||
|
|
||||||
t.Run("parent model", func(t *testing.T) {
|
|
||||||
opts.ParentModel = "horseshark"
|
|
||||||
expect := `FROM horseshark
|
|
||||||
SYSTEM You are part horse and part shark, but all hork. Do horklike things
|
|
||||||
PARAMETER penalize_newline false
|
|
||||||
PARAMETER seed 42
|
|
||||||
PARAMETER stop hi
|
|
||||||
PARAMETER stop there
|
|
||||||
PARAMETER temperature 0.9
|
|
||||||
MESSAGE user Hey there hork!
|
|
||||||
MESSAGE assistant Yes it is true, I am half horse, half shark.
|
|
||||||
`
|
|
||||||
actual := buildModelfile(opts)
|
|
||||||
if diff := cmp.Diff(expect, actual); diff != "" {
|
|
||||||
t.Errorf("mismatch (-want +got):\n%s", diff)
|
|
||||||
}
|
|
||||||
})
|
|
||||||
}
|
|
||||||
|
cmd/runner/main.go (new file, 15 lines)
@@ -0,0 +1,15 @@
|
|||||||
|
package main
|
||||||
|
|
||||||
|
import (
|
||||||
|
"fmt"
|
||||||
|
"os"
|
||||||
|
|
||||||
|
"github.com/ollama/ollama/runner"
|
||||||
|
)
|
||||||
|
|
||||||
|
func main() {
|
||||||
|
if err := runner.Execute(os.Args[1:]); err != nil {
|
||||||
|
fmt.Fprintf(os.Stderr, "error: %s\n", err)
|
||||||
|
os.Exit(1)
|
||||||
|
}
|
||||||
|
}
|
@@ -9,12 +9,17 @@ import (
|
|||||||
"log/slog"
|
"log/slog"
|
||||||
"strings"
|
"strings"
|
||||||
|
|
||||||
"github.com/ollama/ollama/llm"
|
"github.com/ollama/ollama/fs/ggml"
|
||||||
)
|
)
|
||||||
|
|
||||||
type ModelParameters struct {
|
type ModelParameters struct {
|
||||||
Architectures []string `json:"architectures"`
|
Architectures []string `json:"architectures"`
|
||||||
VocabSize uint32 `json:"vocab_size"`
|
VocabSize uint32 `json:"vocab_size"`
|
||||||
|
TextModel TextParameters `json:"text_config"`
|
||||||
|
}
|
||||||
|
|
||||||
|
type TextParameters struct {
|
||||||
|
VocabSize uint32 `json:"vocab_size"`
|
||||||
}
|
}
|
||||||
|
|
||||||
type AdapterParameters struct {
|
type AdapterParameters struct {
|
||||||
@@ -27,8 +32,8 @@ type AdapterParameters struct {
|
|||||||
} `json:"lora_parameters"`
|
} `json:"lora_parameters"`
|
||||||
}
|
}
|
||||||
|
|
||||||
func (ModelParameters) KV(t *Tokenizer) llm.KV {
|
func (ModelParameters) KV(t *Tokenizer) ggml.KV {
|
||||||
kv := llm.KV{
|
kv := ggml.KV{
|
||||||
"general.file_type": uint32(1),
|
"general.file_type": uint32(1),
|
||||||
"general.quantization_version": uint32(2),
|
"general.quantization_version": uint32(2),
|
||||||
"tokenizer.ggml.pre": t.Pre,
|
"tokenizer.ggml.pre": t.Pre,
|
||||||
@@ -54,7 +59,7 @@ func (ModelParameters) KV(t *Tokenizer) llm.KV {
|
|||||||
return kv
|
return kv
|
||||||
}
|
}
|
||||||
|
|
||||||
func (p AdapterParameters) KV() llm.KV {
|
func (p AdapterParameters) KV() ggml.KV {
|
||||||
var alpha float32
|
var alpha float32
|
||||||
if p.LoraParameters.Alpha == 0 {
|
if p.LoraParameters.Alpha == 0 {
|
||||||
alpha = float32(p.Alpha)
|
alpha = float32(p.Alpha)
|
||||||
@@ -62,7 +67,7 @@ func (p AdapterParameters) KV() llm.KV {
|
|||||||
alpha = p.LoraParameters.Alpha
|
alpha = p.LoraParameters.Alpha
|
||||||
}
|
}
|
||||||
|
|
||||||
kv := llm.KV{
|
kv := ggml.KV{
|
||||||
"adapter.lora.alpha": alpha,
|
"adapter.lora.alpha": alpha,
|
||||||
"adapter.type": "lora",
|
"adapter.type": "lora",
|
||||||
"general.file_type": uint32(1),
|
"general.file_type": uint32(1),
|
||||||
@@ -79,19 +84,19 @@ func (ModelParameters) specialTokenTypes() []string {
|
|||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
func (ModelParameters) writeFile(ws io.WriteSeeker, kv llm.KV, ts []llm.Tensor) error {
|
func (ModelParameters) writeFile(ws io.WriteSeeker, kv ggml.KV, ts []ggml.Tensor) error {
|
||||||
return llm.WriteGGUF(ws, kv, ts)
|
return ggml.WriteGGUF(ws, kv, ts)
|
||||||
}
|
}
|
||||||
|
|
||||||
func (AdapterParameters) writeFile(ws io.WriteSeeker, kv llm.KV, ts []llm.Tensor) error {
|
func (AdapterParameters) writeFile(ws io.WriteSeeker, kv ggml.KV, ts []ggml.Tensor) error {
|
||||||
return llm.WriteGGUF(ws, kv, ts)
|
return ggml.WriteGGUF(ws, kv, ts)
|
||||||
}
|
}
|
||||||
|
|
||||||
type ModelConverter interface {
|
type ModelConverter interface {
|
||||||
// KV maps parameters to LLM key-values
|
// KV maps parameters to LLM key-values
|
||||||
KV(*Tokenizer) llm.KV
|
KV(*Tokenizer) ggml.KV
|
||||||
// Tensors maps input tensors to LLM tensors. Model specific modifications can be done here.
|
// Tensors maps input tensors to LLM tensors. Model specific modifications can be done here.
|
||||||
Tensors([]Tensor) []llm.Tensor
|
Tensors([]Tensor) []ggml.Tensor
|
||||||
// Replacements returns a list of string pairs to replace in tensor names.
|
// Replacements returns a list of string pairs to replace in tensor names.
|
||||||
// See [strings.Replacer](https://pkg.go.dev/strings#Replacer) for details
|
// See [strings.Replacer](https://pkg.go.dev/strings#Replacer) for details
|
||||||
Replacements() []string
|
Replacements() []string
|
||||||
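As the interface comment notes, the Replacements pairs are fed to a strings.Replacer when tensor names are rewritten. A quick standalone illustration; the pairs are taken from the converters later in this diff, the tensor name is just an example.

package main

import (
	"fmt"
	"strings"
)

func main() {
	replacer := strings.NewReplacer(
		"model.layers", "blk",
		"self_attn.q_proj", "attn_q",
		"mlp.up_proj", "ffn_up",
	)
	// A Hugging Face tensor name mapped to the GGUF-style name.
	fmt.Println(replacer.Replace("model.layers.0.self_attn.q_proj.weight"))
	// blk.0.attn_q.weight
}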
@@ -99,7 +104,7 @@ type ModelConverter interface {
|
|||||||
// specialTokenTypes returns any special token types the model uses
|
// specialTokenTypes returns any special token types the model uses
|
||||||
specialTokenTypes() []string
|
specialTokenTypes() []string
|
||||||
// writeFile writes the model to the provided io.WriteSeeker
|
// writeFile writes the model to the provided io.WriteSeeker
|
||||||
writeFile(io.WriteSeeker, llm.KV, []llm.Tensor) error
|
writeFile(io.WriteSeeker, ggml.KV, []ggml.Tensor) error
|
||||||
}
|
}
|
||||||
|
|
||||||
type moreParser interface {
|
type moreParser interface {
|
||||||
@@ -108,17 +113,17 @@ type moreParser interface {
|
|||||||
|
|
||||||
type AdapterConverter interface {
|
type AdapterConverter interface {
|
||||||
// KV maps parameters to LLM key-values
|
// KV maps parameters to LLM key-values
|
||||||
KV(llm.KV) llm.KV
|
KV(ggml.KV) ggml.KV
|
||||||
// Tensors maps input tensors to LLM tensors. Adapter specific modifications can be done here.
|
// Tensors maps input tensors to LLM tensors. Adapter specific modifications can be done here.
|
||||||
Tensors([]Tensor) []llm.Tensor
|
Tensors([]Tensor) []ggml.Tensor
|
||||||
// Replacements returns a list of string pairs to replace in tensor names.
|
// Replacements returns a list of string pairs to replace in tensor names.
|
||||||
// See [strings.Replacer](https://pkg.go.dev/strings#Replacer) for details
|
// See [strings.Replacer](https://pkg.go.dev/strings#Replacer) for details
|
||||||
Replacements() []string
|
Replacements() []string
|
||||||
|
|
||||||
writeFile(io.WriteSeeker, llm.KV, []llm.Tensor) error
|
writeFile(io.WriteSeeker, ggml.KV, []ggml.Tensor) error
|
||||||
}
|
}
|
||||||
|
|
||||||
func ConvertAdapter(fsys fs.FS, ws io.WriteSeeker, baseKV llm.KV) error {
|
func ConvertAdapter(fsys fs.FS, ws io.WriteSeeker, baseKV ggml.KV) error {
|
||||||
bts, err := fs.ReadFile(fsys, "adapter_config.json")
|
bts, err := fs.ReadFile(fsys, "adapter_config.json")
|
||||||
if err != nil {
|
if err != nil {
|
||||||
return err
|
return err
|
||||||
@@ -185,12 +190,18 @@ func ConvertModel(fsys fs.FS, ws io.WriteSeeker) error {
|
|||||||
conv = &gemmaModel{}
|
conv = &gemmaModel{}
|
||||||
case "Gemma2ForCausalLM":
|
case "Gemma2ForCausalLM":
|
||||||
conv = &gemma2Model{}
|
conv = &gemma2Model{}
|
||||||
|
case "Gemma3ForCausalLM", "Gemma3ForConditionalGeneration":
|
||||||
|
conv = &gemma3Model{Architecture: p.Architectures[0]}
|
||||||
case "Phi3ForCausalLM":
|
case "Phi3ForCausalLM":
|
||||||
conv = &phi3Model{}
|
conv = &phi3Model{}
|
||||||
|
case "Qwen2ForCausalLM":
|
||||||
|
conv = &qwen2Model{}
|
||||||
case "BertModel":
|
case "BertModel":
|
||||||
conv = &bertModel{}
|
conv = &bertModel{}
|
||||||
|
case "CohereForCausalLM":
|
||||||
|
conv = &commandrModel{}
|
||||||
default:
|
default:
|
||||||
return errors.New("unsupported architecture")
|
return fmt.Errorf("unsupported architecture %q", p.Architectures[0])
|
||||||
}
|
}
|
||||||
|
|
||||||
if err := json.Unmarshal(bts, conv); err != nil {
|
if err := json.Unmarshal(bts, conv); err != nil {
|
||||||
@@ -209,7 +220,14 @@ func ConvertModel(fsys fs.FS, ws io.WriteSeeker) error {
|
|||||||
}
|
}
|
||||||
|
|
||||||
vocabSize := int(p.VocabSize)
|
vocabSize := int(p.VocabSize)
|
||||||
|
if vocabSize == 0 {
|
||||||
|
tVocabSize := int(p.TextModel.VocabSize)
|
||||||
|
vocabSize = tVocabSize
|
||||||
|
}
|
||||||
|
|
||||||
switch {
|
switch {
|
||||||
|
case vocabSize == 0:
|
||||||
|
slog.Warn("vocabulary size was not explicitly set by the model", "default size", len(t.Vocabulary.Tokens))
|
||||||
case vocabSize > len(t.Vocabulary.Tokens):
|
case vocabSize > len(t.Vocabulary.Tokens):
|
||||||
slog.Warn("vocabulary is smaller than expected, padding with dummy tokens", "expect", vocabSize, "actual", len(t.Vocabulary.Tokens))
|
slog.Warn("vocabulary is smaller than expected, padding with dummy tokens", "expect", vocabSize, "actual", len(t.Vocabulary.Tokens))
|
||||||
for i := range vocabSize - len(t.Vocabulary.Tokens) {
|
for i := range vocabSize - len(t.Vocabulary.Tokens) {
|
||||||
|
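The new TextParameters field and the vocabSize fallback above exist because some multimodal checkpoints (Gemma 3 among the architectures added in this diff) report vocab_size under text_config rather than at the top level. A minimal sketch of how that decodes, with the struct and JSON trimmed down and the value illustrative:

package main

import (
	"encoding/json"
	"fmt"
)

type TextParameters struct {
	VocabSize uint32 `json:"vocab_size"`
}

type ModelParameters struct {
	Architectures []string       `json:"architectures"`
	VocabSize     uint32         `json:"vocab_size"`
	TextModel     TextParameters `json:"text_config"`
}

func main() {
	raw := []byte(`{
		"architectures": ["Gemma3ForConditionalGeneration"],
		"text_config": {"vocab_size": 262144}
	}`)

	var p ModelParameters
	if err := json.Unmarshal(raw, &p); err != nil {
		panic(err)
	}

	// Same fallback as the converter: prefer the top-level value,
	// otherwise take the one nested under text_config.
	vocabSize := int(p.VocabSize)
	if vocabSize == 0 {
		vocabSize = int(p.TextModel.VocabSize)
	}
	fmt.Println(vocabSize) // 262144
}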
@@ -8,7 +8,7 @@ import (
|
|||||||
"slices"
|
"slices"
|
||||||
"strings"
|
"strings"
|
||||||
|
|
||||||
"github.com/ollama/ollama/llm"
|
"github.com/ollama/ollama/fs/ggml"
|
||||||
)
|
)
|
||||||
|
|
||||||
type bertModel struct {
|
type bertModel struct {
|
||||||
@@ -85,7 +85,7 @@ func (p *bertModel) parseMore(fsys fs.FS) error {
|
|||||||
return nil
|
return nil
|
||||||
}
|
}
|
||||||
|
|
||||||
func (p *bertModel) KV(t *Tokenizer) llm.KV {
|
func (p *bertModel) KV(t *Tokenizer) ggml.KV {
|
||||||
kv := p.ModelParameters.KV(t)
|
kv := p.ModelParameters.KV(t)
|
||||||
kv["general.architecture"] = "bert"
|
kv["general.architecture"] = "bert"
|
||||||
kv["bert.attention.causal"] = false
|
kv["bert.attention.causal"] = false
|
||||||
@@ -132,8 +132,8 @@ func (p *bertModel) KV(t *Tokenizer) llm.KV {
|
|||||||
return kv
|
return kv
|
||||||
}
|
}
|
||||||
|
|
||||||
func (p *bertModel) Tensors(ts []Tensor) []llm.Tensor {
|
func (p *bertModel) Tensors(ts []Tensor) []ggml.Tensor {
|
||||||
var out []llm.Tensor
|
var out []ggml.Tensor
|
||||||
for _, t := range ts {
|
for _, t := range ts {
|
||||||
if slices.Contains([]string{
|
if slices.Contains([]string{
|
||||||
"embeddings.position_ids",
|
"embeddings.position_ids",
|
||||||
@@ -143,7 +143,7 @@ func (p *bertModel) Tensors(ts []Tensor) []llm.Tensor {
|
|||||||
continue
|
continue
|
||||||
}
|
}
|
||||||
|
|
||||||
out = append(out, llm.Tensor{
|
out = append(out, ggml.Tensor{
|
||||||
Name: t.Name(),
|
Name: t.Name(),
|
||||||
Kind: t.Kind(),
|
Kind: t.Kind(),
|
||||||
Shape: t.Shape(),
|
Shape: t.Shape(),
|
||||||
|
convert/convert_commandr.go (new file, 76 lines)
@@ -0,0 +1,76 @@
|
|||||||
|
package convert
|
||||||
|
|
||||||
|
import (
|
||||||
|
"cmp"
|
||||||
|
|
||||||
|
"github.com/ollama/ollama/fs/ggml"
|
||||||
|
)
|
||||||
|
|
||||||
|
type commandrModel struct {
|
||||||
|
ModelParameters
|
||||||
|
MaxPositionEmbeddings uint32 `json:"max_position_embeddings"`
|
||||||
|
HiddenSize uint32 `json:"hidden_size"`
|
||||||
|
HiddenLayers uint32 `json:"num_hidden_layers"`
|
||||||
|
IntermediateSize uint32 `json:"intermediate_size"`
|
||||||
|
NumAttentionHeads uint32 `json:"num_attention_heads"`
|
||||||
|
NumKeyValueHeads uint32 `json:"num_key_value_heads"`
|
||||||
|
LayerNormEPS float32 `json:"layer_norm_eps"`
|
||||||
|
RopeTheta float32 `json:"rope_theta"`
|
||||||
|
UseQKNorm bool `json:"use_qk_norm"`
|
||||||
|
MaxLength uint32 `json:"model_max_length"`
|
||||||
|
LogitScale float32 `json:"logit_scale"`
|
||||||
|
NCtx uint32 `json:"n_ctx"`
|
||||||
|
}
|
||||||
|
|
||||||
|
var _ ModelConverter = (*commandrModel)(nil)
|
||||||
|
|
||||||
|
func (p *commandrModel) KV(t *Tokenizer) ggml.KV {
|
||||||
|
kv := p.ModelParameters.KV(t)
|
||||||
|
kv["general.architecture"] = "command-r"
|
||||||
|
kv["general.name"] = "command-r"
|
||||||
|
kv["command-r.context_length"] = cmp.Or(p.MaxLength, p.MaxPositionEmbeddings, p.NCtx)
|
||||||
|
kv["command-r.embedding_length"] = p.HiddenSize
|
||||||
|
kv["command-r.block_count"] = p.HiddenLayers
|
||||||
|
kv["command-r.feed_forward_length"] = p.IntermediateSize
|
||||||
|
kv["command-r.attention.head_count"] = p.NumAttentionHeads
|
||||||
|
kv["command-r.attention.head_count_kv"] = p.NumKeyValueHeads
|
||||||
|
kv["command-r.attention.layer_norm_epsilon"] = p.LayerNormEPS
|
||||||
|
kv["command-r.rope.freq_base"] = p.RopeTheta
|
||||||
|
kv["command-r.max_position_embeddings"] = cmp.Or(p.MaxLength, p.MaxPositionEmbeddings)
|
||||||
|
kv["command-r.logit_scale"] = p.LogitScale
|
||||||
|
kv["command-r.rope.scaling.type"] = "none"
|
||||||
|
|
||||||
|
return kv
|
||||||
|
}
|
||||||
|
|
||||||
|
func (p *commandrModel) Tensors(ts []Tensor) []ggml.Tensor {
|
||||||
|
var out []ggml.Tensor
|
||||||
|
for _, t := range ts {
|
||||||
|
out = append(out, ggml.Tensor{
|
||||||
|
Name: t.Name(),
|
||||||
|
Kind: t.Kind(),
|
||||||
|
Shape: t.Shape(),
|
||||||
|
WriterTo: t,
|
||||||
|
})
|
||||||
|
}
|
||||||
|
|
||||||
|
return out
|
||||||
|
}
|
||||||
|
|
||||||
|
func (p *commandrModel) Replacements() []string {
|
||||||
|
return []string{
|
||||||
|
"self_attn.q_norm", "attn_q_norm",
|
||||||
|
"self_attn.k_norm", "attn_k_norm",
|
||||||
|
"model.layers", "blk",
|
||||||
|
"input_layernorm", "attn_norm",
|
||||||
|
"mlp.down_proj", "ffn_down",
|
||||||
|
"mlp.gate_proj", "ffn_gate",
|
||||||
|
"mlp.up_proj", "ffn_up",
|
||||||
|
"self_attn.k_proj", "attn_k",
|
||||||
|
"self_attn.o_proj", "attn_output",
|
||||||
|
"self_attn.q_proj", "attn_q",
|
||||||
|
"self_attn.v_proj", "attn_v",
|
||||||
|
"model.norm", "output_norm",
|
||||||
|
"model.embed_tokens", "token_embd",
|
||||||
|
}
|
||||||
|
}
|
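convert_commandr.go above picks the context length with cmp.Or, which returns its first non-zero argument. A tiny sketch of how the fallback chain resolves when model_max_length is absent from config.json (requires Go 1.22+ for cmp.Or; the numbers are illustrative):

package main

import (
	"cmp"
	"fmt"
)

func main() {
	var (
		maxLength             uint32 = 0      // model_max_length missing from the config
		maxPositionEmbeddings uint32 = 131072 // max_position_embeddings
		nCtx                  uint32 = 8192   // n_ctx
	)

	// cmp.Or returns the first value that is not the zero value of its type.
	fmt.Println(cmp.Or(maxLength, maxPositionEmbeddings, nCtx)) // 131072
}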
@@ -6,7 +6,7 @@ import (
|
|||||||
"github.com/pdevine/tensor"
|
"github.com/pdevine/tensor"
|
||||||
"github.com/pdevine/tensor/native"
|
"github.com/pdevine/tensor/native"
|
||||||
|
|
||||||
"github.com/ollama/ollama/llm"
|
"github.com/ollama/ollama/fs/ggml"
|
||||||
)
|
)
|
||||||
|
|
||||||
type gemmaModel struct {
|
type gemmaModel struct {
|
||||||
@@ -23,7 +23,7 @@ type gemmaModel struct {
|
|||||||
|
|
||||||
var _ ModelConverter = (*gemmaModel)(nil)
|
var _ ModelConverter = (*gemmaModel)(nil)
|
||||||
|
|
||||||
func (p *gemmaModel) KV(t *Tokenizer) llm.KV {
|
func (p *gemmaModel) KV(t *Tokenizer) ggml.KV {
|
||||||
kv := p.ModelParameters.KV(t)
|
kv := p.ModelParameters.KV(t)
|
||||||
kv["general.architecture"] = "gemma"
|
kv["general.architecture"] = "gemma"
|
||||||
kv["gemma.context_length"] = p.MaxPositionEmbeddings
|
kv["gemma.context_length"] = p.MaxPositionEmbeddings
|
||||||
@@ -42,14 +42,14 @@ func (p *gemmaModel) KV(t *Tokenizer) llm.KV {
|
|||||||
return kv
|
return kv
|
||||||
}
|
}
|
||||||
|
|
||||||
func (p *gemmaModel) Tensors(ts []Tensor) []llm.Tensor {
|
func (p *gemmaModel) Tensors(ts []Tensor) []ggml.Tensor {
|
||||||
var out []llm.Tensor
|
var out []ggml.Tensor
|
||||||
for _, t := range ts {
|
for _, t := range ts {
|
||||||
if strings.HasSuffix(t.Name(), "_norm.weight") {
|
if !strings.HasPrefix(t.Name(), "v.") && strings.HasSuffix(t.Name(), "_norm.weight") {
|
||||||
t.SetRepacker(p.addOne)
|
t.SetRepacker(p.addOne)
|
||||||
}
|
}
|
||||||
|
|
||||||
out = append(out, llm.Tensor{
|
out = append(out, ggml.Tensor{
|
||||||
Name: t.Name(),
|
Name: t.Name(),
|
||||||
Kind: t.Kind(),
|
Kind: t.Kind(),
|
||||||
Shape: t.Shape(),
|
Shape: t.Shape(),
|
||||||
|
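The added prefix check above keeps the +1 norm repack off tensors that belong to the vision tower, which are renamed under the v. prefix. A small illustration of which names now trigger the repacker; the vision tensor name here is hypothetical:

package main

import (
	"fmt"
	"strings"
)

func needsAddOne(name string) bool {
	return !strings.HasPrefix(name, "v.") && strings.HasSuffix(name, "_norm.weight")
}

func main() {
	fmt.Println(needsAddOne("blk.0.attn_norm.weight"))   // true: language-model norm
	fmt.Println(needsAddOne("v.blk.0.attn_norm.weight")) // false: vision tower (assumed name)
	fmt.Println(needsAddOne("blk.0.ffn_up.weight"))      // false: not a norm weight
}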
@@ -1,8 +1,6 @@
|
|||||||
package convert
|
package convert
|
||||||
|
|
||||||
import (
|
import "github.com/ollama/ollama/fs/ggml"
|
||||||
"github.com/ollama/ollama/llm"
|
|
||||||
)
|
|
||||||
|
|
||||||
type gemma2Model struct {
|
type gemma2Model struct {
|
||||||
gemmaModel
|
gemmaModel
|
||||||
@@ -11,7 +9,7 @@ type gemma2Model struct {
|
|||||||
FinalLogitSoftcap float32 `json:"final_logit_softcapping"`
|
FinalLogitSoftcap float32 `json:"final_logit_softcapping"`
|
||||||
}
|
}
|
||||||
|
|
||||||
func (p *gemma2Model) KV(t *Tokenizer) llm.KV {
|
func (p *gemma2Model) KV(t *Tokenizer) ggml.KV {
|
||||||
kv := p.ModelParameters.KV(t)
|
kv := p.ModelParameters.KV(t)
|
||||||
kv["general.architecture"] = "gemma2"
|
kv["general.architecture"] = "gemma2"
|
||||||
kv["gemma2.context_length"] = p.MaxPositionEmbeddings
|
kv["gemma2.context_length"] = p.MaxPositionEmbeddings
|
||||||
|
@@ -6,7 +6,7 @@ import (
|
|||||||
"github.com/pdevine/tensor"
|
"github.com/pdevine/tensor"
|
||||||
"github.com/pdevine/tensor/native"
|
"github.com/pdevine/tensor/native"
|
||||||
|
|
||||||
"github.com/ollama/ollama/llm"
|
"github.com/ollama/ollama/fs/ggml"
|
||||||
)
|
)
|
||||||
|
|
||||||
type gemma2Adapter struct {
|
type gemma2Adapter struct {
|
||||||
@@ -15,14 +15,14 @@ type gemma2Adapter struct {
|
|||||||
|
|
||||||
var _ AdapterConverter = (*gemma2Adapter)(nil)
|
var _ AdapterConverter = (*gemma2Adapter)(nil)
|
||||||
|
|
||||||
func (p *gemma2Adapter) KV(baseKV llm.KV) llm.KV {
|
func (p *gemma2Adapter) KV(baseKV ggml.KV) ggml.KV {
|
||||||
kv := p.AdapterParameters.KV()
|
kv := p.AdapterParameters.KV()
|
||||||
kv["general.architecture"] = "gemma2"
|
kv["general.architecture"] = "gemma2"
|
||||||
return kv
|
return kv
|
||||||
}
|
}
|
||||||
|
|
||||||
func (p *gemma2Adapter) Tensors(ts []Tensor) []llm.Tensor {
|
func (p *gemma2Adapter) Tensors(ts []Tensor) []ggml.Tensor {
|
||||||
var out []llm.Tensor
|
var out []ggml.Tensor
|
||||||
for _, t := range ts {
|
for _, t := range ts {
|
||||||
shape := t.Shape()
|
shape := t.Shape()
|
||||||
if (strings.HasSuffix(t.Name(), "weight.lora_a") && shape[0] > shape[1]) ||
|
if (strings.HasSuffix(t.Name(), "weight.lora_a") && shape[0] > shape[1]) ||
|
||||||
@@ -31,7 +31,7 @@ func (p *gemma2Adapter) Tensors(ts []Tensor) []llm.Tensor {
|
|||||||
t.SetRepacker(p.repack)
|
t.SetRepacker(p.repack)
|
||||||
}
|
}
|
||||||
|
|
||||||
out = append(out, llm.Tensor{
|
out = append(out, ggml.Tensor{
|
||||||
Name: t.Name(),
|
Name: t.Name(),
|
||||||
Kind: t.Kind(),
|
Kind: t.Kind(),
|
||||||
Shape: t.Shape(),
|
Shape: t.Shape(),
|
||||||
|
convert/convert_gemma3.go (new file, 142 lines)
@@ -0,0 +1,142 @@
|
|||||||
|
package convert
|
||||||
|
|
||||||
|
import (
|
||||||
|
"cmp"
|
||||||
|
|
||||||
|
"github.com/ollama/ollama/fs/ggml"
|
||||||
|
)
|
||||||
|
|
||||||
|
type gemma3Model struct {
|
||||||
|
gemmaModel
|
||||||
|
Architecture string
|
||||||
|
TextModel struct {
|
||||||
|
HeadDim uint32 `json:"head_dim"`
|
||||||
|
HiddenSize uint32 `json:"hidden_size"`
|
||||||
|
HiddenLayers uint32 `json:"num_hidden_layers"`
|
||||||
|
IntermediateSize uint32 `json:"intermediate_size"`
|
||||||
|
SlidingWindow uint32 `json:"sliding_window"`
|
||||||
|
} `json:"text_config"`
|
||||||
|
VisionModel struct {
|
||||||
|
NumAttentionHeads uint32 `json:"num_attention_heads"` // attention.head_count 16
|
||||||
|
LayerNormEpsilon float32 `json:"layer_norm_eps"` // attention.layer_norm_epsilon 1e-05
|
||||||
|
NumHiddenLayers uint32 `json:"num_hidden_layers"` // block_count 32
|
||||||
|
HiddenSize uint32 `json:"hidden_size"` // embedding_length 1280
|
||||||
|
IntermediateSize uint32 `json:"intermediate_size"` // feed_forward_length 5120
|
||||||
|
ImageSize uint32 `json:"image_size"` // image_size 560
|
||||||
|
NumChannels uint32 `json:"num_channels"` // num_channels 3
|
||||||
|
PatchSize uint32 `json:"patch_size"` // patch_size 14
|
||||||
|
} `json:"vision_config"`
|
||||||
|
MaxPositionEmbeddings uint32 `json:"max_position_embeddings"`
|
||||||
|
NumAttentionHeads uint32 `json:"num_attention_heads"`
|
||||||
|
NumKeyValueHeads uint32 `json:"num_key_value_heads"`
|
||||||
|
RMSNormEPS float32 `json:"rms_norm_eps"`
|
||||||
|
HeadDim uint32 `json:"head_dim"`
|
||||||
|
FinalLogitSoftcap float32 `json:"final_logit_softcapping"`
|
||||||
|
RopeLocalTheta float32 `json:"rope_local_base_freq"`
|
||||||
|
RopeGlobalTheta float32 `json:"rope_global_base_freq"`
|
||||||
|
SlidingWindow uint32 `json:"sliding_window"`
|
||||||
|
MultiModalTokensPerImage uint32 `json:"mm_tokens_per_image"`
|
||||||
|
}
|
||||||
|
|
||||||
|
const (
|
||||||
|
gemma4BLayerCount = 34
|
||||||
|
gemma12BLayerCount = 48
|
||||||
|
gemma27BLayerCount = 62
|
||||||
|
)
|
||||||
|
|
||||||
|
func (p *gemma3Model) KV(t *Tokenizer) ggml.KV {
|
||||||
|
kv := p.ModelParameters.KV(t)
|
||||||
|
kv["general.architecture"] = "gemma3"
|
||||||
|
|
||||||
|
numBlocks := cmp.Or(p.HiddenLayers, p.TextModel.HiddenLayers)
|
||||||
|
kv["gemma3.block_count"] = numBlocks
|
||||||
|
|
||||||
|
var (
|
||||||
|
numHeads uint32
|
||||||
|
numKVHeads uint32
|
||||||
|
)
|
||||||
|
|
||||||
|
switch numBlocks {
|
||||||
|
case gemma4BLayerCount:
|
||||||
|
numHeads = 8
|
||||||
|
numKVHeads = 4
|
||||||
|
case gemma12BLayerCount:
|
||||||
|
numHeads = 16
|
||||||
|
numKVHeads = 8
|
||||||
|
case gemma27BLayerCount:
|
||||||
|
numHeads = 32
|
||||||
|
numKVHeads = 16
|
||||||
|
default:
|
||||||
|
numHeads = p.NumAttentionHeads
|
||||||
|
numKVHeads = p.NumKeyValueHeads
|
||||||
|
}
|
||||||
|
|
||||||
|
kv["gemma3.attention.head_count"] = numHeads
|
||||||
|
kv["gemma3.attention.head_count_kv"] = numKVHeads
|
||||||
|
|
||||||
|
switch p.Architecture {
|
||||||
|
case "Gemma3ForCausalLM":
|
||||||
|
kv["gemma3.context_length"] = p.MaxPositionEmbeddings
|
||||||
|
kv["gemma3.attention.layer_norm_rms_epsilon"] = p.RMSNormEPS
|
||||||
|
kv["gemma3.attention.key_length"] = p.HeadDim
|
||||||
|
kv["gemma3.attention.value_length"] = p.HeadDim
|
||||||
|
kv["gemma3.attention.sliding_window"] = p.SlidingWindow
|
||||||
|
kv["gemma3.final_logit_softcapping"] = cmp.Or(p.FinalLogitSoftcap, 30)
|
||||||
|
kv["gemma3.rope.local.freq_base"] = cmp.Or(p.RopeLocalTheta, 10000.0)
|
||||||
|
kv["gemma3.rope.global.freq_base"] = cmp.Or(p.RopeGlobalTheta, 1000000.0)
|
||||||
|
kv["gemma3.embedding_length"] = p.HiddenSize
|
||||||
|
kv["gemma3.feed_forward_length"] = p.IntermediateSize
|
||||||
|
default:
|
||||||
|
kv["gemma3.context_length"] = cmp.Or(p.MaxPositionEmbeddings, 131072)
|
||||||
|
kv["gemma3.embedding_length"] = p.TextModel.HiddenSize
|
||||||
|
kv["gemma3.feed_forward_length"] = p.TextModel.IntermediateSize
|
||||||
|
kv["gemma3.attention.sliding_window"] = p.TextModel.SlidingWindow
|
||||||
|
kv["gemma3.vision.block_count"] = p.VisionModel.NumHiddenLayers
|
||||||
|
kv["gemma3.vision.embedding_length"] = p.VisionModel.HiddenSize
|
||||||
|
kv["gemma3.vision.feed_forward_length"] = p.VisionModel.IntermediateSize
|
||||||
|
kv["gemma3.vision.image_size"] = p.VisionModel.ImageSize
|
||||||
|
kv["gemma3.vision.patch_size"] = p.VisionModel.PatchSize
|
||||||
|
kv["gemma3.vision.num_channels"] = cmp.Or(p.VisionModel.NumChannels, 3)
|
||||||
|
kv["gemma3.vision.attention.head_count"] = p.VisionModel.NumAttentionHeads
|
||||||
|
kv["gemma3.vision.attention.layer_norm_epsilon"] = cmp.Or(p.VisionModel.LayerNormEpsilon, 1e-6)
|
||||||
|
kv["gemma3.attention.key_length"] = cmp.Or(p.TextModel.HeadDim, 256)
|
||||||
|
kv["gemma3.attention.value_length"] = cmp.Or(p.TextModel.HeadDim, 256)
|
||||||
|
}
|
||||||
|
|
||||||
|
if p.MultiModalTokensPerImage > 0 {
|
||||||
|
kv["gemma3.mm.tokens_per_image"] = p.MultiModalTokensPerImage
|
||||||
|
}
|
||||||
|
|
||||||
|
return kv
|
||||||
|
}
|
||||||
|
|
||||||
|
func (p *gemma3Model) Replacements() []string {
|
||||||
|
return []string{
|
||||||
|
"lm_head", "output",
|
||||||
|
"model.embed_tokens", "token_embd",
|
||||||
|
"model.norm", "output_norm",
|
||||||
|
"vision_tower.vision_model.embeddings", "v",
|
||||||
|
"vision_tower.vision_model", "v",
|
||||||
|
"vision_model.vision_model.embeddings", "v",
|
||||||
|
"vision_model.vision_model", "v",
|
||||||
|
"language_model.", "",
|
||||||
|
"model.layers", "blk",
|
||||||
|
"encoder.layers", "blk",
|
||||||
|
"input_layernorm", "attn_norm",
|
||||||
|
"self_attn.q_proj", "attn_q",
|
||||||
|
"self_attn.q_norm", "attn_q_norm",
|
||||||
|
"self_attn.k_proj", "attn_k",
|
||||||
|
"self_attn.k_norm", "attn_k_norm",
|
||||||
|
"self_attn.v_proj", "attn_v",
|
||||||
|
"self_attn.o_proj", "attn_output",
|
||||||
|
"self_attn.out_proj", "attn_output",
|
||||||
|
"mlp.gate_proj", "ffn_gate",
|
||||||
|
"mlp.down_proj", "ffn_down",
|
||||||
|
"mlp.up_proj", "ffn_up",
|
||||||
|
"post_attention_layernorm", "post_attention_norm",
|
||||||
|
"pre_feedforward_layernorm", "ffn_norm",
|
||||||
|
"post_feedforward_layernorm", "post_ffw_norm",
|
||||||
|
"input_projection_weight", "input_projection.weight",
|
||||||
|
"multi_modal_projector", "mm",
|
||||||
|
}
|
||||||
|
}
|
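A quick way to read the layer-count switch in convert_gemma3.go: known Gemma 3 sizes are identified purely by block count, and anything unrecognized falls back to the values from config.json. A standalone sketch mirroring that logic (the helper name is mine, not from the file):

package main

import "fmt"

// headCounts mirrors the switch above: 34/48/62 blocks map to the 4B/12B/27B
// head configurations, anything else uses the config-provided values.
func headCounts(blocks, cfgHeads, cfgKVHeads uint32) (heads, kvHeads uint32) {
	switch blocks {
	case 34: // gemma4BLayerCount
		return 8, 4
	case 48: // gemma12BLayerCount
		return 16, 8
	case 62: // gemma27BLayerCount
		return 32, 16
	default:
		return cfgHeads, cfgKVHeads
	}
}

func main() {
	h, kv := headCounts(34, 0, 0)
	fmt.Println(h, kv) // 8 4
}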
@@ -9,7 +9,7 @@ import (
|
|||||||
"github.com/pdevine/tensor"
|
"github.com/pdevine/tensor"
|
||||||
"github.com/pdevine/tensor/native"
|
"github.com/pdevine/tensor/native"
|
||||||
|
|
||||||
"github.com/ollama/ollama/llm"
|
"github.com/ollama/ollama/fs/ggml"
|
||||||
)
|
)
|
||||||
|
|
||||||
type llamaModel struct {
|
type llamaModel struct {
|
||||||
@@ -46,7 +46,7 @@ type llamaModel struct {
|
|||||||
|
|
||||||
var _ ModelConverter = (*llamaModel)(nil)
|
var _ ModelConverter = (*llamaModel)(nil)
|
||||||
|
|
||||||
func (p *llamaModel) KV(t *Tokenizer) llm.KV {
|
func (p *llamaModel) KV(t *Tokenizer) ggml.KV {
|
||||||
kv := p.ModelParameters.KV(t)
|
kv := p.ModelParameters.KV(t)
|
||||||
kv["general.architecture"] = "llama"
|
kv["general.architecture"] = "llama"
|
||||||
kv["llama.vocab_size"] = p.VocabSize
|
kv["llama.vocab_size"] = p.VocabSize
|
||||||
@@ -120,11 +120,11 @@ func (p *llamaModel) KV(t *Tokenizer) llm.KV {
|
|||||||
return kv
|
return kv
|
||||||
}
|
}
|
||||||
|
|
||||||
func (p *llamaModel) Tensors(ts []Tensor) []llm.Tensor {
|
func (p *llamaModel) Tensors(ts []Tensor) []ggml.Tensor {
|
||||||
var out []llm.Tensor
|
var out []ggml.Tensor
|
||||||
|
|
||||||
if p.RopeScaling.factors != nil {
|
if p.RopeScaling.factors != nil {
|
||||||
out = append(out, llm.Tensor{
|
out = append(out, ggml.Tensor{
|
||||||
Name: "rope_freqs.weight",
|
Name: "rope_freqs.weight",
|
||||||
Kind: 0,
|
Kind: 0,
|
||||||
Shape: []uint64{uint64(len(p.RopeScaling.factors))},
|
Shape: []uint64{uint64(len(p.RopeScaling.factors))},
|
||||||
@@ -138,7 +138,7 @@ func (p *llamaModel) Tensors(ts []Tensor) []llm.Tensor {
|
|||||||
t.SetRepacker(p.repack)
|
t.SetRepacker(p.repack)
|
||||||
}
|
}
|
||||||
|
|
||||||
out = append(out, llm.Tensor{
|
out = append(out, ggml.Tensor{
|
||||||
Name: t.Name(),
|
Name: t.Name(),
|
||||||
Kind: t.Kind(),
|
Kind: t.Kind(),
|
||||||
Shape: t.Shape(),
|
Shape: t.Shape(),
|
||||||
|
@@ -7,7 +7,7 @@ import (
|
|||||||
"github.com/pdevine/tensor"
|
"github.com/pdevine/tensor"
|
||||||
"github.com/pdevine/tensor/native"
|
"github.com/pdevine/tensor/native"
|
||||||
|
|
||||||
"github.com/ollama/ollama/llm"
|
"github.com/ollama/ollama/fs/ggml"
|
||||||
)
|
)
|
||||||
|
|
||||||
type llamaAdapter struct {
|
type llamaAdapter struct {
|
||||||
@@ -18,7 +18,7 @@ type llamaAdapter struct {
|
|||||||
|
|
||||||
var _ AdapterConverter = (*llamaAdapter)(nil)
|
var _ AdapterConverter = (*llamaAdapter)(nil)
|
||||||
|
|
||||||
func (p *llamaAdapter) KV(baseKV llm.KV) llm.KV {
|
func (p *llamaAdapter) KV(baseKV ggml.KV) ggml.KV {
|
||||||
kv := p.AdapterParameters.KV()
|
kv := p.AdapterParameters.KV()
|
||||||
kv["general.architecture"] = "llama"
|
kv["general.architecture"] = "llama"
|
||||||
kv["llama.attention.head_count"] = baseKV["llama.attention.head_count"]
|
kv["llama.attention.head_count"] = baseKV["llama.attention.head_count"]
|
||||||
@@ -29,8 +29,8 @@ func (p *llamaAdapter) KV(baseKV llm.KV) llm.KV {
|
|||||||
return kv
|
return kv
|
||||||
}
|
}
|
||||||
|
|
||||||
func (p *llamaAdapter) Tensors(ts []Tensor) []llm.Tensor {
|
func (p *llamaAdapter) Tensors(ts []Tensor) []ggml.Tensor {
|
||||||
var out []llm.Tensor
|
var out []ggml.Tensor
|
||||||
for _, t := range ts {
|
for _, t := range ts {
|
||||||
shape := t.Shape()
|
shape := t.Shape()
|
||||||
if (strings.HasSuffix(t.Name(), "weight.lora_a") && shape[0] > shape[1]) ||
|
if (strings.HasSuffix(t.Name(), "weight.lora_a") && shape[0] > shape[1]) ||
|
||||||
@@ -41,7 +41,7 @@ func (p *llamaAdapter) Tensors(ts []Tensor) []llm.Tensor {
|
|||||||
t.SetRepacker(p.repack)
|
t.SetRepacker(p.repack)
|
||||||
}
|
}
|
||||||
|
|
||||||
out = append(out, llm.Tensor{
|
out = append(out, ggml.Tensor{
|
||||||
Name: t.Name(),
|
Name: t.Name(),
|
||||||
Kind: t.Kind(),
|
Kind: t.Kind(),
|
||||||
Shape: shape,
|
Shape: shape,
|
||||||
|
@@ -6,7 +6,7 @@ import (
|
|||||||
"slices"
|
"slices"
|
||||||
"strings"
|
"strings"
|
||||||
|
|
||||||
"github.com/ollama/ollama/llm"
|
"github.com/ollama/ollama/fs/ggml"
|
||||||
)
|
)
|
||||||
|
|
||||||
type mixtralModel struct {
|
type mixtralModel struct {
|
||||||
@@ -15,7 +15,7 @@ type mixtralModel struct {
|
|||||||
NumExpertsPerToken uint32 `json:"num_experts_per_tok"`
|
NumExpertsPerToken uint32 `json:"num_experts_per_tok"`
|
||||||
}
|
}
|
||||||
|
|
||||||
func (p *mixtralModel) KV(t *Tokenizer) llm.KV {
|
func (p *mixtralModel) KV(t *Tokenizer) ggml.KV {
|
||||||
kv := p.llamaModel.KV(t)
|
kv := p.llamaModel.KV(t)
|
||||||
|
|
||||||
if p.NumLocalExperts > 0 {
|
if p.NumLocalExperts > 0 {
|
||||||
@@ -29,7 +29,7 @@ func (p *mixtralModel) KV(t *Tokenizer) llm.KV {
|
|||||||
return kv
|
return kv
|
||||||
}
|
}
|
||||||
|
|
||||||
func (p *mixtralModel) Tensors(ts []Tensor) []llm.Tensor {
|
func (p *mixtralModel) Tensors(ts []Tensor) []ggml.Tensor {
|
||||||
oldnew := []string{
|
oldnew := []string{
|
||||||
"model.layers", "blk",
|
"model.layers", "blk",
|
||||||
"w1", "ffn_gate_exps",
|
"w1", "ffn_gate_exps",
|
||||||
@@ -56,10 +56,10 @@ func (p *mixtralModel) Tensors(ts []Tensor) []llm.Tensor {
|
|||||||
return true
|
return true
|
||||||
})
|
})
|
||||||
|
|
||||||
var out []llm.Tensor
|
var out []ggml.Tensor
|
||||||
for n, e := range experts {
|
for n, e := range experts {
|
||||||
// TODO(mxyng): sanity check experts
|
// TODO(mxyng): sanity check experts
|
||||||
out = append(out, llm.Tensor{
|
out = append(out, ggml.Tensor{
|
||||||
Name: n,
|
Name: n,
|
||||||
Kind: e[0].Kind(),
|
Kind: e[0].Kind(),
|
||||||
Shape: append([]uint64{uint64(len(e))}, e[0].Shape()...),
|
Shape: append([]uint64{uint64(len(e))}, e[0].Shape()...),
|
||||||
|
@@ -8,7 +8,7 @@ import (
|
|||||||
"strings"
|
"strings"
|
||||||
"sync"
|
"sync"
|
||||||
|
|
||||||
"github.com/ollama/ollama/llm"
|
"github.com/ollama/ollama/fs/ggml"
|
||||||
)
|
)
|
||||||
|
|
||||||
type phi3Model struct {
|
type phi3Model struct {
|
||||||
@@ -37,7 +37,7 @@ type phi3Model struct {
|
|||||||
|
|
||||||
var _ ModelConverter = (*phi3Model)(nil)
|
var _ ModelConverter = (*phi3Model)(nil)
|
||||||
|
|
||||||
func (p *phi3Model) KV(t *Tokenizer) llm.KV {
|
func (p *phi3Model) KV(t *Tokenizer) ggml.KV {
|
||||||
kv := p.ModelParameters.KV(t)
|
kv := p.ModelParameters.KV(t)
|
||||||
kv["general.architecture"] = "phi3"
|
kv["general.architecture"] = "phi3"
|
||||||
kv["phi3.context_length"] = p.MaxPositionEmbeddings
|
kv["phi3.context_length"] = p.MaxPositionEmbeddings
|
||||||
@@ -68,19 +68,19 @@ func (p *phi3Model) KV(t *Tokenizer) llm.KV {
|
|||||||
return kv
|
return kv
|
||||||
}
|
}
|
||||||
|
|
||||||
func (p *phi3Model) Tensors(ts []Tensor) []llm.Tensor {
|
func (p *phi3Model) Tensors(ts []Tensor) []ggml.Tensor {
|
||||||
var addRopeFactors sync.Once
|
var addRopeFactors sync.Once
|
||||||
|
|
||||||
out := make([]llm.Tensor, 0, len(ts)+2)
|
out := make([]ggml.Tensor, 0, len(ts)+2)
|
||||||
for _, t := range ts {
|
for _, t := range ts {
|
||||||
if strings.HasPrefix(t.Name(), "blk.0.") {
|
if strings.HasPrefix(t.Name(), "blk.0.") {
|
||||||
addRopeFactors.Do(func() {
|
addRopeFactors.Do(func() {
|
||||||
out = append(out, llm.Tensor{
|
out = append(out, ggml.Tensor{
|
||||||
Name: "rope_factors_long.weight",
|
Name: "rope_factors_long.weight",
|
||||||
Kind: 0,
|
Kind: 0,
|
||||||
Shape: []uint64{uint64(len(p.RopeScaling.LongFactor))},
|
Shape: []uint64{uint64(len(p.RopeScaling.LongFactor))},
|
||||||
WriterTo: p.RopeScaling.LongFactor,
|
WriterTo: p.RopeScaling.LongFactor,
|
||||||
}, llm.Tensor{
|
}, ggml.Tensor{
|
||||||
Name: "rope_factors_short.weight",
|
Name: "rope_factors_short.weight",
|
||||||
Kind: 0,
|
Kind: 0,
|
||||||
Shape: []uint64{uint64(len(p.RopeScaling.ShortFactor))},
|
Shape: []uint64{uint64(len(p.RopeScaling.ShortFactor))},
|
||||||
@@ -89,7 +89,7 @@ func (p *phi3Model) Tensors(ts []Tensor) []llm.Tensor {
|
|||||||
})
|
})
|
||||||
}
|
}
|
||||||
|
|
||||||
out = append(out, llm.Tensor{
|
out = append(out, ggml.Tensor{
|
||||||
Name: t.Name(),
|
Name: t.Name(),
|
||||||
Kind: t.Kind(),
|
Kind: t.Kind(),
|
||||||
Shape: t.Shape(),
|
Shape: t.Shape(),
|
||||||
|
convert/convert_qwen2.go (new file, 78 lines)
@@ -0,0 +1,78 @@
|
|||||||
|
package convert
|
||||||
|
|
||||||
|
import "github.com/ollama/ollama/fs/ggml"
|
||||||
|
|
||||||
|
type qwen2Model struct {
|
||||||
|
ModelParameters
|
||||||
|
MaxPositionEmbeddings uint32 `json:"max_position_embeddings"`
|
||||||
|
HiddenSize uint32 `json:"hidden_size"`
|
||||||
|
HiddenLayers uint32 `json:"num_hidden_layers"`
|
||||||
|
IntermediateSize uint32 `json:"intermediate_size"`
|
||||||
|
NumAttentionHeads uint32 `json:"num_attention_heads"`
|
||||||
|
NumKeyValueHeads uint32 `json:"num_key_value_heads"`
|
||||||
|
RopeTheta float32 `json:"rope_theta"`
|
||||||
|
RopeScaling struct {
|
||||||
|
Type string `json:"type"`
|
||||||
|
Factor ropeFactor `json:"factor"`
|
||||||
|
OriginalMaxPositionEmbeddings uint32 `json:"original_max_position_embeddings"`
|
||||||
|
} `json:"rope_scaling"`
|
||||||
|
RMSNormEPS float32 `json:"rms_norm_eps"`
|
||||||
|
}
|
||||||
|
|
||||||
|
var _ ModelConverter = (*qwen2Model)(nil)
|
||||||
|
|
||||||
|
func (q *qwen2Model) KV(t *Tokenizer) ggml.KV {
|
||||||
|
kv := q.ModelParameters.KV(t)
|
||||||
|
kv["general.architecture"] = "qwen2"
|
||||||
|
kv["qwen2.block_count"] = q.HiddenLayers
|
||||||
|
kv["qwen2.context_length"] = q.MaxPositionEmbeddings
|
||||||
|
kv["qwen2.embedding_length"] = q.HiddenSize
|
||||||
|
kv["qwen2.feed_forward_length"] = q.IntermediateSize
|
||||||
|
kv["qwen2.attention.head_count"] = q.NumAttentionHeads
|
||||||
|
kv["qwen2.attention.head_count_kv"] = q.NumKeyValueHeads
|
||||||
|
kv["qwen2.rope.freq_base"] = q.RopeTheta
|
||||||
|
kv["qwen2.attention.layer_norm_rms_epsilon"] = q.RMSNormEPS
|
||||||
|
|
||||||
|
switch q.RopeScaling.Type {
|
||||||
|
case "":
|
||||||
|
// no scaling
|
||||||
|
case "yarn":
|
||||||
|
kv["qwen2.rope.scaling.type"] = q.RopeScaling.Type
|
||||||
|
kv["qwen2.rope.scaling.factor"] = q.RopeScaling.Factor
|
||||||
|
default:
|
||||||
|
panic("unknown rope scaling type")
|
||||||
|
}
|
||||||
|
return kv
|
||||||
|
}
|
||||||
|
|
||||||
|
func (q *qwen2Model) Tensors(ts []Tensor) []ggml.Tensor {
|
||||||
|
var out []ggml.Tensor
|
||||||
|
for _, t := range ts {
|
||||||
|
out = append(out, ggml.Tensor{
|
||||||
|
Name: t.Name(),
|
||||||
|
Kind: t.Kind(),
|
||||||
|
Shape: t.Shape(),
|
||||||
|
WriterTo: t,
|
||||||
|
})
|
||||||
|
}
|
||||||
|
|
||||||
|
return out
|
||||||
|
}
|
||||||
|
|
||||||
|
func (p *qwen2Model) Replacements() []string {
|
||||||
|
return []string{
|
||||||
|
"lm_head", "output",
|
||||||
|
"model.embed_tokens", "token_embd",
|
||||||
|
"model.layers", "blk",
|
||||||
|
"input_layernorm", "attn_norm",
|
||||||
|
"self_attn.k_proj", "attn_k",
|
||||||
|
"self_attn.v_proj", "attn_v",
|
||||||
|
"self_attn.q_proj", "attn_q",
|
||||||
|
"self_attn.o_proj", "attn_output",
|
||||||
|
"mlp.down_proj", "ffn_down",
|
||||||
|
"mlp.gate_proj", "ffn_gate",
|
||||||
|
"mlp.up_proj", "ffn_up",
|
||||||
|
"post_attention_layernorm", "ffn_norm",
|
||||||
|
"model.norm", "output_norm",
|
||||||
|
}
|
||||||
|
}
|
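The rope_scaling switch in convert_qwen2.go only recognizes an empty type or "yarn". A sketch of the KV entries the yarn branch adds, with illustrative values and the map trimmed to the relevant keys:

package main

import "fmt"

func main() {
	// Illustrative rope_scaling block from a config.json.
	ropeType := "yarn"
	factor := float32(4.0)

	kv := map[string]any{
		"qwen2.rope.freq_base": float32(1e6),
	}

	switch ropeType {
	case "":
		// no scaling: nothing extra is written
	case "yarn":
		kv["qwen2.rope.scaling.type"] = ropeType
		kv["qwen2.rope.scaling.factor"] = factor
	default:
		panic("unknown rope scaling type")
	}

	fmt.Println(kv)
}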
@@ -20,7 +20,7 @@ import (
|
|||||||
|
|
||||||
"golang.org/x/exp/maps"
|
"golang.org/x/exp/maps"
|
||||||
|
|
||||||
"github.com/ollama/ollama/llm"
|
"github.com/ollama/ollama/fs/ggml"
|
||||||
)
|
)
|
||||||
|
|
||||||
type tensorData struct {
|
type tensorData struct {
|
||||||
@@ -29,7 +29,7 @@ type tensorData struct {
|
|||||||
Shape []int `json:"shape"`
|
Shape []int `json:"shape"`
|
||||||
}
|
}
|
||||||
|
|
||||||
func convertFull(t *testing.T, fsys fs.FS) (*os.File, llm.KV, llm.Tensors) {
|
func convertFull(t *testing.T, fsys fs.FS) (*os.File, ggml.KV, ggml.Tensors) {
|
||||||
t.Helper()
|
t.Helper()
|
||||||
|
|
||||||
f, err := os.CreateTemp(t.TempDir(), "f16")
|
f, err := os.CreateTemp(t.TempDir(), "f16")
|
||||||
@@ -48,7 +48,7 @@ func convertFull(t *testing.T, fsys fs.FS) (*os.File, llm.KV, llm.Tensors) {
|
|||||||
}
|
}
|
||||||
t.Cleanup(func() { r.Close() })
|
t.Cleanup(func() { r.Close() })
|
||||||
|
|
||||||
m, _, err := llm.DecodeGGML(r, math.MaxInt)
|
m, _, err := ggml.Decode(r, math.MaxInt)
|
||||||
if err != nil {
|
if err != nil {
|
||||||
t.Fatal(err)
|
t.Fatal(err)
|
||||||
}
|
}
|
||||||
@@ -60,7 +60,7 @@ func convertFull(t *testing.T, fsys fs.FS) (*os.File, llm.KV, llm.Tensors) {
|
|||||||
return r, m.KV(), m.Tensors()
|
return r, m.KV(), m.Tensors()
|
||||||
}
|
}
|
||||||
|
|
||||||
func generateResultsJSON(t *testing.T, f *os.File, kv llm.KV, tensors llm.Tensors) map[string]string {
|
func generateResultsJSON(t *testing.T, f *os.File, kv ggml.KV, tensors ggml.Tensors) map[string]string {
|
||||||
actual := make(map[string]string)
|
actual := make(map[string]string)
|
||||||
for k, v := range kv {
|
for k, v := range kv {
|
||||||
if s, ok := v.(json.Marshaler); !ok {
|
if s, ok := v.(json.Marshaler); !ok {
|
||||||
@@ -75,7 +75,7 @@ func generateResultsJSON(t *testing.T, f *os.File, kv llm.KV, tensors llm.Tensor
|
|||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
for _, tensor := range tensors.Items {
|
for _, tensor := range tensors.Items() {
|
||||||
sha256sum := sha256.New()
|
sha256sum := sha256.New()
|
||||||
sr := io.NewSectionReader(f, int64(tensors.Offset+tensor.Offset), int64(tensor.Size()))
|
sr := io.NewSectionReader(f, int64(tensors.Offset+tensor.Offset), int64(tensor.Size()))
|
||||||
if _, err := io.Copy(sha256sum, sr); err != nil {
|
if _, err := io.Copy(sha256sum, sr); err != nil {
|
||||||
@@ -108,6 +108,8 @@ func TestConvertModel(t *testing.T) {
|
|||||||
"Phi-3-mini-128k-instruct",
|
"Phi-3-mini-128k-instruct",
|
||||||
"all-MiniLM-L6-v2",
|
"all-MiniLM-L6-v2",
|
||||||
"gemma-2-9b-it",
|
"gemma-2-9b-it",
|
||||||
|
"Qwen2.5-0.5B-Instruct",
|
||||||
|
"c4ai-command-r-v01",
|
||||||
}
|
}
|
||||||
|
|
||||||
for i := range cases {
|
for i := range cases {
|
||||||
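The two added cases assume the referenced checkpoints are available locally; the vendored JSON under convert/testdata only carries the expected key-values and tensor digests. With the model directories in place, the end-to-end conversion tests can be exercised in the usual way:

go test ./convert -run TestConvertModel -v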
@@ -330,7 +332,7 @@ func TestConvertAdapter(t *testing.T) {
|
|||||||
}
|
}
|
||||||
defer r.Close()
|
defer r.Close()
|
||||||
|
|
||||||
m, _, err := llm.DecodeGGML(r, math.MaxInt)
|
m, _, err := ggml.Decode(r, math.MaxInt)
|
||||||
if err != nil {
|
if err != nil {
|
||||||
t.Fatal(err)
|
t.Fatal(err)
|
||||||
}
|
}
|
||||||
|
@@ -331,7 +331,7 @@ type TrainerSpec struct {
 	// Reserved special meta tokens.
 	// * -1 is not used.
 	// * unk_id must not be -1.
-	// Id must starts with 0 and be contigous.
+	// Id must start with 0 and be contiguous.
 	UnkId *int32 `protobuf:"varint,40,opt,name=unk_id,json=unkId,def=0" json:"unk_id,omitempty"` // <unk>
 	BosId *int32 `protobuf:"varint,41,opt,name=bos_id,json=bosId,def=1" json:"bos_id,omitempty"` // <s>
 	EosId *int32 `protobuf:"varint,42,opt,name=eos_id,json=eosId,def=2" json:"eos_id,omitempty"` // </s>
@@ -213,7 +213,7 @@ message TrainerSpec {
 	// Reserved special meta tokens.
 	// * -1 is not used.
 	// * unk_id must not be -1.
-	// Id must starts with 0 and be contigous.
+	// Id must start with 0 and be contiguous.
 	optional int32 unk_id = 40 [default = 0]; // <unk>
 	optional int32 bos_id = 41 [default = 1]; // <s>
 	optional int32 eos_id = 42 [default = 2]; // </s>
convert/testdata/Qwen2.5-0.5B-Instruct.json (new vendored file, 314 lines)
@@ -0,0 +1,314 @@
|
|||||||
|
{
|
||||||
|
"general.architecture": "qwen2",
|
||||||
|
"general.file_type": "1",
|
||||||
|
"general.parameter_count": "494032768",
|
||||||
|
"general.quantization_version": "2",
|
||||||
|
"output_norm.weight": "93a01a6db3419e85320a244bbf8ae81c43033b1d10c342bea3797ff2ce348390",
|
||||||
|
"qwen2.attention.head_count": "14",
|
||||||
|
"qwen2.attention.head_count_kv": "2",
|
||||||
|
"qwen2.attention.layer_norm_rms_epsilon": "1e-06",
|
||||||
|
"qwen2.block_count": "24",
|
||||||
|
"qwen2.context_length": "32768",
|
||||||
|
"qwen2.embedding_length": "896",
|
||||||
|
"qwen2.feed_forward_length": "4864",
|
||||||
|
"qwen2.rope.freq_base": "1e+06",
|
||||||
|
"token_embd.weight": "d74257dc547b48be5ae7b93f1c9af072c0c42dbbb85503078e25c59cd09e68d0",
|
||||||
|
"tokenizer.ggml.add_eos_token": "false",
|
||||||
|
"tokenizer.ggml.add_padding_token": "false",
|
||||||
|
"tokenizer.ggml.eos_token_id": "151645",
|
||||||
|
"tokenizer.ggml.merges": "6b1b1c58f1223d74f9095929d3e6416cdd74784440221a5507b87b8197f2bfd2",
|
||||||
|
"tokenizer.ggml.model": "gpt2",
|
||||||
|
"tokenizer.ggml.padding_token_id": "151643",
|
||||||
|
"tokenizer.ggml.pre": "qwen2",
|
||||||
|
"tokenizer.ggml.scores": "94e247e531e8b0fa3d248f3de09c9beae0c87da8106208a8edfaac0b8ec4b53d",
|
||||||
|
"tokenizer.ggml.token_type": "b178dbc9d1b2e08f84d02918e00fc2de2619a250e6c188c91a6605f701860055",
|
||||||
|
"tokenizer.ggml.tokens": "1d93f6679b23a1152b725f7f473792d54d53c1040c5250d3e46b42f81e0a1a34",
|
||||||
|
"blk.0.attn_k.bias": "5ce6617845f66c34515978d23d52e729c298d8bffa28c356a0428bef17142cf1",
|
||||||
|
"blk.0.attn_k.weight": "a960832a9e0e83e4d95402e5d1a01cc74300fcca0c381237162126330e1a7af8",
|
||||||
|
"blk.0.attn_norm.weight": "32c7d51cd0958f1f1771174192db341f9770516d7595a2f0fd18a4d78bd5aba3",
|
||||||
|
"blk.0.attn_output.weight": "c67e6e7e868354a11bf9121c70ee56c140b20eec611a8955e7dfe54a21d40a98",
|
||||||
|
"blk.0.attn_q.bias": "3e9e994eb1f03bccfc82f8bb3c324c920d42d547e07de5be83be12c428645063",
|
||||||
|
"blk.0.attn_q.weight": "dc12132f789b97cfa1e3f5775ceb835247fa67aa47400fd09c8f9f3769208583",
|
||||||
|
"blk.0.attn_v.bias": "a3fd0757b31fdc78af5ec320332d239c1a79d34e8804df06c5454e86955e8cc9",
|
||||||
|
"blk.0.attn_v.weight": "f43094a2134c7ee2dcc52aac3c8b7d9d64fb0295a8adb94cabfd49213f017b84",
|
||||||
|
"blk.0.ffn_down.weight": "18c2aec92db14f21976838a8c35d5575f80d0e4b1e05ccc0d8388d5877e80147",
|
||||||
|
"blk.0.ffn_gate.weight": "a3a1c4ef38f8f750eabadfe3d83bbb0f77941eec1cc1a388e51852e99c8691f6",
|
||||||
|
"blk.0.ffn_norm.weight": "b59b779c42d44b5c4cec41e39b4eb61e0491a07c1b3e946ccb5b8d5c657eda3f",
|
||||||
|
"blk.0.ffn_up.weight": "db64f09987ea59449e90abae5a2ffcc20efd9203f0eebec77a6aacb5809d6cff",
|
||||||
|
"blk.1.attn_k.bias": "a5c8c5671703ec0aa0143ff70a20ffdd67b5d5790ca1dfa5bba4e87e4071ed9f",
|
||||||
|
"blk.1.attn_k.weight": "835c7c7cc95b3cb2e55bd9cac585aa0760a033896621d3e06421f3378c540f7d",
|
||||||
|
"blk.1.attn_norm.weight": "f4c36fb6c14fce721fab0de78cc118d6f66e3a3d3ea0017bb14aade24c3c5434",
|
||||||
|
"blk.1.attn_output.weight": "cc1e80310c97cef068e48e40b7096f32fa2138519d6209c6a1a9994985999016",
|
||||||
|
"blk.1.attn_q.bias": "bc332780e66b0aac80ec5e63ac32344919a840db2fcc8f87bcef16a43a54138e",
|
||||||
|
"blk.1.attn_q.weight": "d766f06c925cce38d4b31b2165b3448e1fb49a7d561985f95d9cd2fcba52367a",
|
||||||
|
"blk.1.attn_v.bias": "9f486626fb6ed9ac84970a71e9b9818dd2758501fd3f61bb1c08540dcc7a8631",
|
||||||
|
"blk.1.attn_v.weight": "e873d1e5bd4f4d6abfd47c0f55119c2c111105838753ee273a03c5ccea25ce5c",
|
||||||
|
"blk.1.ffn_down.weight": "b3ce82b093f187344de04284b1783a452de1b72640914609b8f830dc81580521",
|
||||||
|
"blk.1.ffn_gate.weight": "5cd44ad237edaca525a28a3ac13975d1b565f576d6a8003237a341ae0d156f2e",
|
||||||
|
"blk.1.ffn_norm.weight": "4ac774ee8afaee119610c46aa1ff89fc6c9084a29d226075bc4aa4d2f15f746c",
|
||||||
|
"blk.1.ffn_up.weight": "042d81ab5f1983d85c81213232f3bfc05a9302d9dfaa98d931ebba326b6058b8",
|
||||||
|
"blk.10.attn_k.bias": "767ecfeacd60a2c2221ac4d76c357190849dd9cdf64ced418d9d0c7949101401",
|
||||||
|
"blk.10.attn_k.weight": "a9f3df343227537636be8202303453086375091944e498bad11e0b91e45e8c71",
|
||||||
|
"blk.10.attn_norm.weight": "01acd0e7b3e363f873dbfde6f0995ffcce83f5aaa10ff91c31dbf775035f6d5a",
|
||||||
|
"blk.10.attn_output.weight": "a531fe660769604ab869f01b203eb115e025cad4c0baeacdd1bcca99cf6d0264",
|
||||||
|
"blk.10.attn_q.bias": "356a02c9163dd660c1340fbe1e049b335ac6178891e00996131bba9ab4cb3e59",
|
||||||
|
"blk.10.attn_q.weight": "81be0cfb227339d83f954cd8dcf35828441211c6e1d184060e3eb76085041e2f",
|
||||||
|
"blk.10.attn_v.bias": "ed0450653284b62f8bf2c2db19c0ff7a6cf3cda1324d0a044c5e3db7bb692bd3",
|
||||||
|
"blk.10.attn_v.weight": "c1247ff7092babd2ed979883095b9aa022b2996cab1c77fb9e6176ddc1498d16",
|
||||||
|
"blk.10.ffn_down.weight": "fda7544965dc9af874f1062c22151c6cefc8ba08cbe15dc67aa89979e77b2de4",
|
||||||
|
"blk.10.ffn_gate.weight": "9f2632b1dee7304d10c70bd38d85bb1f148a628a8468f894f57975b8a2f1d945",
|
||||||
|
"blk.10.ffn_norm.weight": "94f8cbd6b17a4d5aabd93fa32930a687db3b11f086142f1cd71c535c11adcad4",
|
||||||
|
"blk.10.ffn_up.weight": "8dc2f8db0474939a277a3d89db34c3bcc3381cfea57bd05a8426a164634d9112",
|
||||||
|
"blk.11.attn_k.bias": "3b8e5a662b19411e3f6530714b766aad2ee41eebc8161bec9db0bc82d383a6e0",
|
||||||
|
"blk.11.attn_k.weight": "2c29f1ed1ce53ce9604e9ea3663c2c373157e909a0d6064a8920005f6d15dad9",
|
||||||
|
"blk.11.attn_norm.weight": "48f68a99c3da4ab4c9e492677b606d1b8e0e3de1fdbf6a977523f97b8c21ec31",
|
||||||
|
"blk.11.attn_output.weight": "5859f3838a94898b020c23040941ed88f4fcb132db400d0849f30a01f62c0f1c",
|
||||||
|
"blk.11.attn_q.bias": "c5ad89a5628f2bd81252ef44ef6bbcbff15c33ad16fba66435509b959c2af6d3",
|
||||||
|
"blk.11.attn_q.weight": "d102104e5d61c1e3219564f1d0149fd593db6c6daa9f3872460c84403323cfef",
|
||||||
|
"blk.11.attn_v.bias": "8653f7d48c5f75a5b55630819f99ecf01c932f12d33fd1a3ee634613e70edde8",
|
||||||
|
"blk.11.attn_v.weight": "e0a7c7d89b9f2d0d781ce85330022229126e130a8600a09d4a5f920f0bbd50b2",
|
||||||
|
"blk.11.ffn_down.weight": "4a22b3361eba8bbe1d9a6fda1812618e894c49f13bcacb505defa9badb6b96a6",
|
||||||
|
"blk.11.ffn_gate.weight": "484698b206760d3fd8df68b252a3c5bae65c8bf6392fb53a5261b021b6f39144",
|
||||||
|
"blk.11.ffn_norm.weight": "da69e96338cbe30882cf5a9544004387f5bbc0bcb6038e61ba2baabbd2623bac",
|
||||||
|
"blk.11.ffn_up.weight": "26ec74f1f504d1281715680dfbcc321db4e9900c53932fa40955daceb891b9aa",
|
||||||
|
"blk.12.attn_k.bias": "f94b49ec3e498f14f6bc3ebefe1f82018935bbe594df03253bfffae36bc20751",
|
||||||
|
"blk.12.attn_k.weight": "ae6323d0bbcfcea01f598d308993d1a7530317e78c1f64923e36d4b1649e9e73",
|
||||||
|
"blk.12.attn_norm.weight": "3784536a7611a839a42a29a5cc538c74ee4f9793092e5efe1b227b48f8c4d37f",
|
||||||
|
"blk.12.attn_output.weight": "46826c00b066829355db78293ab216e890f5eaaed3a70499ee68785189a6b0d9",
|
||||||
|
"blk.12.attn_q.bias": "b14db2d327ce0deec97beda7d3965a56c43e1e63dc9181840fb176b114cf643a",
|
||||||
|
"blk.12.attn_q.weight": "30f67df52ced06f76b6c85531657584276a454d6ec9bb7d0c7d2ca8f067f5551",
|
||||||
|
"blk.12.attn_v.bias": "57ab4b7e43f4fc5853bca7bfbb2702f8c2c391a49252a760abbb7b26330dc4aa",
|
||||||
|
"blk.12.attn_v.weight": "3ccd9da0cfe241cd33a63310f3ca6d81c5bc5a50d200bfea6612ac376166aca2",
|
||||||
|
"blk.12.ffn_down.weight": "a095774413198a83c549ce132d7c9684c0baef33145eaa889be370ef9c881c81",
|
||||||
|
"blk.12.ffn_gate.weight": "bb3b2bbdfb065d2a0a795909c53beec327781a4a7e974bf9f99c436cea459991",
|
||||||
|
"blk.12.ffn_norm.weight": "3b486c6cd97eb4b17967d9d6c0cc3821a1a6ad73d96b4d8fbf980101b32b8dab",
|
||||||
|
"blk.12.ffn_up.weight": "d020b82dd39a5d5a9d3881397bf53a567790a07f395284e6eb0f5fe0fef53de3",
|
||||||
|
"blk.13.attn_k.bias": "69381f8254586eba3623eceb18697fe79f9b4d8f2c30136acb10d5926e3ba1d0",
|
||||||
|
"blk.13.attn_k.weight": "c4d7a31495d71269f81b586203a50abea3a9e2985667faf258c9306ec6030f1d",
|
||||||
|
"blk.13.attn_norm.weight": "907da11075d16eda668dabe548af3cfd794df26b8ab53939af1344d91bec6fba",
|
||||||
|
"blk.13.attn_output.weight": "ca01cf6d2b8ece2fb3b0f56f1eb76194471ac27b54fe264f99c909f5eb7fef4a",
|
||||||
|
"blk.13.attn_q.bias": "2f5ecebafe03b1d485b93c41cff756ca57fb65b02e9d8336f14a3d26ab5d159a",
|
||||||
|
"blk.13.attn_q.weight": "f557f8acad7f0fa62da06b5da134182fe04a5bed8bdb269e316f970c9cc440fb",
|
||||||
|
"blk.13.attn_v.bias": "a492a88ae131e95714b092545a8752eaea7c7d2f9cb77852628ca8296c415525",
|
||||||
|
"blk.13.attn_v.weight": "d1220b1fe9f1cc0a5a88ee239d65fec900f5eaf6c448b6c2cbe74c81e15ed333",
|
||||||
|
"blk.13.ffn_down.weight": "53184e33440b49848a896304eb16a983efbc6b8bee0b93de8c8de716e1585fcb",
|
||||||
|
"blk.13.ffn_gate.weight": "684bf8896f148c851506c62717e45c426921b93c10d536ecdeb0fb28259a106d",
|
||||||
|
"blk.13.ffn_norm.weight": "6cb4e547ad8665eb7c174855c08afe1e5490fece66122522c1e9e8132d9064eb",
|
||||||
|
"blk.13.ffn_up.weight": "c64107897e38c06727075aba4ea7940b2cdd0e278b5c555dffb2790ef553bb57",
|
||||||
|
"blk.14.attn_k.bias": "2814ca9b160b16ae39557c9b629482fbe3a7592d372c1e1bf1ac59a2d578fde1",
|
||||||
|
"blk.14.attn_k.weight": "3377177396463afba667742972920ebb45dfdc37e9950e1f0e1d60a2f936b27d",
|
||||||
|
"blk.14.attn_norm.weight": "5cae870477d51dd35a6d22aaeacfce4dff218ffba693820ede6a4e11f02afd6d",
|
||||||
|
"blk.14.attn_output.weight": "3cfe9ccf3d48ae9e95b93a132a1c6240189a277d764f58590fb36fdbb714cad0",
|
||||||
|
"blk.14.attn_q.bias": "6a75acc2f090b2e67bfc26f7fca080ae8bd7c7aa090ec252e694be66b8b8f038",
|
||||||
|
"blk.14.attn_q.weight": "5ef45c86d7dda1df585aa1b827b89823adf679a6bb9c164bd0f97b2aa6eb96f1",
|
||||||
|
"blk.14.attn_v.bias": "5534480443e10ed72c31a917f3d104b0f49df5e6dbfa58d0eb5e7318120e3aee",
|
||||||
|
"blk.14.attn_v.weight": "58f45cf3240c4623626ec415c7d5441eaa8d2fb184f101aba973f222989422d1",
|
||||||
|
"blk.14.ffn_down.weight": "2dc82a0f20c05b77512458738130d8d05ce150cc078680ae7ee6dd7ed68d955d",
|
||||||
|
"blk.14.ffn_gate.weight": "d4a6c6f0fcccddfd1fcaa074846622f4a74cb22b9a654ab497abdc1d0dde9450",
|
||||||
|
"blk.14.ffn_norm.weight": "777e444932a0212ff3feac98442444e17bd8a98cb758ea3356697d0846d12c56",
|
||||||
|
"blk.14.ffn_up.weight": "6b75f6bd00195198447b69a417ed9d98f8ca28b3cb8be82f4bad908be0777d57",
|
||||||
|
"blk.15.attn_k.bias": "2d07211a58e6c2f23aa3a6dc03c80a7d135dfb28726b60b0e0fdd0f35ea5c37b",
|
||||||
|
"blk.15.attn_k.weight": "e77f3c0075a1810e70df956cc51fd08612f576cc09b6de8708dcae5daedb0739",
|
||||||
|
"blk.15.attn_norm.weight": "379a10d90609a5d5ba67d633803eda1424fc61ba5cca8d3bffe70c8b18b58ebf",
|
||||||
|
"blk.15.attn_output.weight": "402751c12ee9dbc9db5e3bf66a7b23ebe7d36c0500e0be67be4c8b1c4357fa62",
|
||||||
|
"blk.15.attn_q.bias": "acb37fc409ee725ceedf7a3a41b40106086abc47b76780728f781942c5120208",
|
||||||
|
"blk.15.attn_q.weight": "89cd3047a09b46ed2bb57c69dd687f67a1f0235149b30376fa31b525898e4a55",
|
||||||
|
"blk.15.attn_v.bias": "f081a37289cbe811978feb4da3ef543bdeb7355414d476f44e09b498da10cb2c",
|
||||||
|
"blk.15.attn_v.weight": "8404f242a11e6d512c9ead9b2f083cda031e9b269f8a0a83f57ee4c56934764e",
|
||||||
|
"blk.15.ffn_down.weight": "93438f43ee8cc4f1a7fd3840a6afdd5f02123e76db4f0d9474430c0100d148fc",
|
||||||
|
"blk.15.ffn_gate.weight": "ff935a2698843e87fad9dbf7125f53e460190ec71ee128b650b3fc027fe37bfc",
|
||||||
|
"blk.15.ffn_norm.weight": "4be80f199841cba831982e988451e1833c3c938a4d6ca1169319087bf0bd723e",
|
||||||
|
"blk.15.ffn_up.weight": "ee9ba63c66d71053e33551ddd519878bb30b88eeb03cfe047119c5c4000fb0a6",
|
||||||
|
"blk.16.attn_k.bias": "3f5fbabed4510c620b99d9d542739295fa6a262a7157f3a00a4889253f8341b8",
|
||||||
|
"blk.16.attn_k.weight": "8ca6eb139b281c257324cddea97a8e9aa7c048b53075cf00153123b967c27ee5",
|
||||||
|
"blk.16.attn_norm.weight": "290157f005e5aa7dddf4bd60100e7ee7b0baa7f11ec5c2cea5e0ead2aad3a4c6",
|
||||||
|
"blk.16.attn_output.weight": "b1f4d80a7447f08f1c331712527f750d00147f35c042442ade96fd029dadc5a1",
|
||||||
|
"blk.16.attn_q.bias": "e3e4e442ad4416791b468cad8de0d0d2d68c7e7df8d06002f4d49b4da9cb25e4",
|
||||||
|
"blk.16.attn_q.weight": "cc7392fa5bb1107d3816e7e7363de252d37efd4165d065e258806291ce0a147b",
|
||||||
|
"blk.16.attn_v.bias": "a7629830f2f6293e018916849614636d40b1bcd11245f75dbc34d38abae8f324",
|
||||||
|
"blk.16.attn_v.weight": "b6c7856c7d594437630929c8cf3b31d476e817875daf1095334ec08e40c5e355",
|
||||||
|
"blk.16.ffn_down.weight": "f9c0a777a00170990a4982d5a06717511bf9b0dd08aeaab64d9040d59bcbebba",
|
||||||
|
"blk.16.ffn_gate.weight": "ed88f11bc3176c9f22004e3559ccb9830a278b75edd05e11971d51c014bd5cd2",
|
||||||
|
"blk.16.ffn_norm.weight": "ab24abdcc4957895e434c6bb3a5237a71ff5044efb9f76c1a9e76e280c128410",
|
||||||
|
"blk.16.ffn_up.weight": "99f594dc8db37f554efa606e71d215fbc3907aa464a54038d6e40e9229a547ff",
|
||||||
|
"blk.17.attn_k.bias": "f236625676f9b2faa6781c7184d12d84c089c130d2a9350a6cf70210990f6bf1",
|
||||||
|
"blk.17.attn_k.weight": "c2a4f20cd3e98538308a13afe9cc5880bdd90d543449c6072dedd694b511ee1a",
|
||||||
|
"blk.17.attn_norm.weight": "5a9da4ee168311f487a79fc9d065a035432c6cafa8adb963a84954cf32f57a2a",
|
||||||
|
"blk.17.attn_output.weight": "d5df7031e354186ce65dc09d6f8a92eb721c0319816f8596b0c8a5d148ed0a2a",
|
||||||
|
"blk.17.attn_q.bias": "3212d5eeaa7ed7fac93cc99e16544de93c01bb681ae9391256ed4a8671fc6b00",
|
||||||
|
"blk.17.attn_q.weight": "d18cd9aa7ee10c551cb705549fa1ae974aea233f86471c9a19022dc29b63d0d5",
|
||||||
|
"blk.17.attn_v.bias": "a74ad11a1f8357742f80e2a0c0b3a2578fc8bbaf14c8223000767e07a5d79703",
|
||||||
|
"blk.17.attn_v.weight": "da18ac0e90884436a1cb0ad6a067f97a37f321b03c70b8b03bf481339fef5c80",
|
||||||
|
"blk.17.ffn_down.weight": "81a8a5d7a194fb53d976558e0347efbe9fdb1effffde9634c70162e1a20eff51",
|
||||||
|
"blk.17.ffn_gate.weight": "72870d83ab62f2dcd45f593924e291a45e4ae1b87f804b5b88aa34cfd76dd15e",
|
||||||
|
"blk.17.ffn_norm.weight": "cae39ac69b9bdaeefab7533796fdf11dbb7a4bdbdeed601e20f209503aafe008",
|
||||||
|
"blk.17.ffn_up.weight": "e7cb40b0842468507cec0e502bbed8a86428b51d439e3466bc12f44b2754e28f",
|
||||||
|
"blk.18.attn_k.bias": "8bfc02b94f9587aa125e2d8bbc2b15f0a5eb8f378d8b3e64a8150ae0a8ca3df2",
|
||||||
|
"blk.18.attn_k.weight": "434bc3b3332ea48afee890aa689eb458a75c50bc783492b0cbf64d42db40e8ad",
|
||||||
|
"blk.18.attn_norm.weight": "d6ffc09396c42a70d1f0e97d81113eee704d3bfc9eeae2bed022075a5dd08075",
|
||||||
|
"blk.18.attn_output.weight": "133f001f81f3b082468a7de67cb2e7a76508fce34bcc4dee7f0858e06eee082c",
|
||||||
|
"blk.18.attn_q.bias": "758d0e28bf5e660b3090aafb70e2a3191b4f3bb218d65e9139a086ceacaf599f",
|
||||||
|
"blk.18.attn_q.weight": "12d7b86fc1b09b9fa7f8b7ed43d8a410892cec8672d0c752f8346f6193343696",
|
||||||
|
"blk.18.attn_v.bias": "9efd15bab0519462431d6c6e8a5b7dd4e151dc449468097ee0ddca369c0ecc2e",
|
||||||
|
"blk.18.attn_v.weight": "f631231a79d4a2e9730fb2e386d8c18621eb3fb7900fbfdff5e6d52cc42db122",
|
||||||
|
"blk.18.ffn_down.weight": "874a2dddf456f3ab56b958b0860d71c8c680a6f89322c9bf6b2f32a113592300",
|
||||||
|
"blk.18.ffn_gate.weight": "4549ef8976c345a511df4a7133bdaf6fe387335f52dfd8a4605a8ae3f728c403",
|
||||||
|
"blk.18.ffn_norm.weight": "80c258a2536a860e19bfcbd9f29afa13214fbb4c34bde0d4da51287d354e9a59",
|
||||||
|
"blk.18.ffn_up.weight": "8b03308a581457a3c038b7a086f3cdf14941d7ad4107c4bd6d9d6b062fd00d73",
|
||||||
|
"blk.19.attn_k.bias": "e77f7b0c8e3e0a9b0d61918cd88371047752a1b02b1576936f4ec807d4d870ee",
|
||||||
|
"blk.19.attn_k.weight": "a2a318e93355230c0d0f95c441b080bf9c4914507255f363fb67a5e771d4d1e6",
|
||||||
|
"blk.19.attn_norm.weight": "9a4bdeb3970be21ac74a94c2c81eb36986533db81b78db6edec48d9802910d59",
|
||||||
|
"blk.19.attn_output.weight": "2369b103dd3947e2cef02b2669b405af5957fb3a7f9d0ff40646078c4b4317ad",
|
||||||
|
"blk.19.attn_q.bias": "e20bf427bef69059ae84a5d9f98f7d688489627f198fb6153def018ff9fd2e34",
|
||||||
|
"blk.19.attn_q.weight": "45a3bb3bdfd2f29dd76e5f78ddae73678b9a2a85dfaf609e460240ef5b7be2ad",
|
||||||
|
"blk.19.attn_v.bias": "a441f58a3e02ed86ee1819eefc9bd4e8b70d11b864a929d58a2c2ac0aeb8203d",
|
||||||
|
"blk.19.attn_v.weight": "30b0b04480c510450a7abb2ce9fa05c65b150a3cc4dc76f8916bf8d013f1b6be",
|
||||||
|
"blk.19.ffn_down.weight": "eebb9ab8fdb6a6efcfff8cf383adac9ec2d64aeeff703d16ed60d3621f86c395",
|
||||||
|
"blk.19.ffn_gate.weight": "3fef1493029298378886586478410b3d2e4e879f6aa83c07e210a7ce6481817f",
|
||||||
|
"blk.19.ffn_norm.weight": "e1be99ea1e8fb9678f7b8ba200f3f37e03878f3574d65d57bcd3a9fd796e2112",
|
||||||
|
"blk.19.ffn_up.weight": "f07cf25e09394fb69fe3ef324bdc0df9a4cecf3dc53070b8acc39e6d1689bf82",
|
||||||
|
"blk.2.attn_k.bias": "b29baa8221f125eff6b8ac1a950fa1d7cfc1bce7bdc636bf3df7d4065ab6466c",
|
||||||
|
"blk.2.attn_k.weight": "4bd0c179bced8bc37a09f5748c394e0cf50273942fb38a866e5cf50b6c96c437",
|
||||||
|
"blk.2.attn_norm.weight": "07b3edc6a6325c3428aa12f29bcae0be0de363ce61a6af487bc5c93fb8c468d9",
|
||||||
|
"blk.2.attn_output.weight": "056b5b31dbc81087c81b9d41c25960aa66c7190004c842ba343979644d7f4d88",
|
||||||
|
"blk.2.attn_q.bias": "479b6212401e097767c9d52b12a1adb8961c0fce9fcaaab81f202a9d85744376",
|
||||||
|
"blk.2.attn_q.weight": "f89196076f446a6dd8a9eee017f303504f9c03094c326449cee5a7fc0a97fade",
|
||||||
|
"blk.2.attn_v.bias": "ef9b1b986dbd9d7291027a88b67dc31434435b20e76e4f1e9d6273ebd31224f0",
|
||||||
|
"blk.2.attn_v.weight": "9322f4f00e85f8c0936845c51ca64b202a93df104f36886986a8452a8e4967a5",
|
||||||
|
"blk.2.ffn_down.weight": "7beac0d2440dc49af33ededb85a6cc3ba23ab33ad3ffa5760714b2ef84d94f6e",
|
||||||
|
"blk.2.ffn_gate.weight": "818a93864a5890c1f4dc66429004fad07645a50142350e9bff9a68fe24608a52",
|
||||||
|
"blk.2.ffn_norm.weight": "152c924d5514942ad274aafb8cc91b35c1db3627c3d973d92f60ff75f3daf9ba",
|
||||||
|
"blk.2.ffn_up.weight": "9c9579e600f209546db6015c9acfeda4f51b6d3cca6e8db4d20a04285fe61a37",
|
||||||
|
"blk.20.attn_k.bias": "fd22bfeffb63d818ce2ff1ea2ace0db5d940f7a9489b6bfc1ec4a5398848d7fe",
|
||||||
|
"blk.20.attn_k.weight": "f74439bc74c2f9252130c9c28384fd7352368b58bb7ce3f2444cf0288dfff861",
|
||||||
|
"blk.20.attn_norm.weight": "5c15d2613df87be6495fb7546b7dcedd2801d12fa5ecc02c877df889330e8f37",
|
||||||
|
"blk.20.attn_output.weight": "6731a39286a67f6859832f96695732e579e14e0c36956eccd1edce3db11595b8",
|
||||||
|
"blk.20.attn_q.bias": "04466e5a3f454a19b9b433fc2585396feac780027ece7ccb4e4bb3e406fc14d8",
|
||||||
|
"blk.20.attn_q.weight": "ead4c71daaeb17bf20d014a34c88b97f238456488e815ae0f281a5daf6fc99b8",
|
||||||
|
"blk.20.attn_v.bias": "adcc848e043025de9bd55ccb14dd8fb6343e8b5185ed07e12964be41d0faf99f",
|
||||||
|
"blk.20.attn_v.weight": "81bfc23f83526386a4761c2c16b6a93cd0bbf9d846c1a51b82c71f1474a465f1",
|
||||||
|
"blk.20.ffn_down.weight": "9bf660af3bafad919d03173c89a65fc9c89440a76c42c9e55e4d171076f3c17f",
|
||||||
|
"blk.20.ffn_gate.weight": "c04b4f3ccce44917ee228b998e2c19dd702aef10a43413afb152e808b5ac5c42",
|
||||||
|
"blk.20.ffn_norm.weight": "3d5b555d7746a71220143c6b8fff5ce4eb63283d9d9c772f1233d848f69f4ff4",
|
||||||
|
"blk.20.ffn_up.weight": "d7a196505c39e5469dfc7c6958bdbb54e93629ac1a047a6663ed96b318753094",
|
||||||
|
"blk.21.attn_k.bias": "4db1f48e5c6a3bc5720a5da813bbef08283e6269e12d83f8a9c54e52715d8011",
|
||||||
|
"blk.21.attn_k.weight": "c687b2f0e132a5e220a2a059b61aa2a537f37d8a674d7709f87880637b263b31",
|
||||||
|
"blk.21.attn_norm.weight": "ec23b0ff847a4b45585ab8e04f10fc20bb1637c5f1fbcdc4d73f336bcb5d1bd0",
|
||||||
|
"blk.21.attn_output.weight": "01255390576316c1731ef201e32c6e934eba356c28438cd06d9027ac6a3ff84f",
|
||||||
|
"blk.21.attn_q.bias": "3098f37205a15418e1681e407c82b7ce7c6fda6c6826b0590a13e1b68a38a1ea",
|
||||||
|
"blk.21.attn_q.weight": "30ea62cbb702a5359229dc96819df17ee535e2e9988d044b005c73ea536e1005",
|
||||||
|
"blk.21.attn_v.bias": "7bbedb2c22a04737f21993115701d4a06b985b7ca3b64681f53cd1be8d7ea39e",
|
||||||
|
"blk.21.attn_v.weight": "e11905e63579e36fbee978062af7599339ae29633765a4835628d79a795ec8df",
|
||||||
|
"blk.21.ffn_down.weight": "84def2ffd8aca766f9ce12ed9ac76919ab81eb34bdeae44fa4224417c38af527",
|
||||||
|
"blk.21.ffn_gate.weight": "4e99f05377b4a0b8d875045530a5c59dee6a46ac8a45597f6579f6fdfa800787",
|
||||||
|
"blk.21.ffn_norm.weight": "af48f13d03fba38ff8794a5f5005e666e501f971ca2e30bbded2777a8096f37d",
|
||||||
|
"blk.21.ffn_up.weight": "a29541c39a6acbc364be86994632a5bf55d701027cb7f23320f8c6d55ee42c91",
|
||||||
|
"blk.22.attn_k.bias": "c97f84db6c75422df6ef5768676d4e9abefaa3b8337aa2730ff260f8fc350480",
|
||||||
|
"blk.22.attn_k.weight": "af9a0c56f68779513e95be11611b7be6175ddae27d48bee9dd72fdbf05f6cbfa",
|
||||||
|
"blk.22.attn_norm.weight": "1c7518eb5bcff4a202c6f4a2827f14abd76f9bcc64ce75fe9db60b69437a5c9c",
|
||||||
|
"blk.22.attn_output.weight": "1abcf1f3caa2f59dd018646b93f9cf8fd30d49e98a473e6a8704419a751be46f",
|
||||||
|
"blk.22.attn_q.bias": "7221e01cb692faf2f7f8c2eb6e2fac38a1b751a9c9fdb6a21a0a936eb0bf4b96",
|
||||||
|
"blk.22.attn_q.weight": "faaf8fb7b6c19f343d47f3ea6b57151fb46c787e0b3bd2c292fd327d3d4d8e35",
|
||||||
|
"blk.22.attn_v.bias": "3ec05942e82d735de99dfd0d8228d8425e63e2fc584da98b3326bdef89ecb2e5",
|
||||||
|
"blk.22.attn_v.weight": "42e7b0ad06db76227837da9d4e74b2db97f3df4050ecb3a87cb9b55e08dfcb42",
|
||||||
|
"blk.22.ffn_down.weight": "87ef98ad2d0e824b0fa5ad8aa18787162922e527c9b1b721a99bc07d3bf97c82",
|
||||||
|
"blk.22.ffn_gate.weight": "562d6e5a1654b03aaa0e33864d23c10297fd4bcaa72d30fac69fb771ee1df9d6",
|
||||||
|
"blk.22.ffn_norm.weight": "f8a405dee467749d59427ce05cdd4b9c11bb18934a89258ea461f013b7d251f5",
|
||||||
|
"blk.22.ffn_up.weight": "90e1f4ae4062649d4d838399eb353e8bb8d56a49982b6a7f64aa3945377f7187",
|
||||||
|
"blk.23.attn_k.bias": "9ad22178a85f3be7e25f5aff462f31627466364f2f5e92f265cc91db0da9a8a8",
|
||||||
|
"blk.23.attn_k.weight": "d813beffb10f03278f5b58eea0f9d73cdcb7b5b4045ae025c379592e854f7dfd",
|
||||||
|
"blk.23.attn_norm.weight": "f583c9836044bdb056d6f8911088ac28add68e500043ae1f97b5d9158fe3d769",
|
||||||
|
"blk.23.attn_output.weight": "02789911ac3b97f6b761e958b7dd6dc7da61a46a1be92bd0b346039ca7ecd2b2",
|
||||||
|
"blk.23.attn_q.bias": "38c4970fb9b4f7e4a139258a45639d848653814b4bc89ea9849709b13f16414b",
|
||||||
|
"blk.23.attn_q.weight": "eb694be9a5ab5858b8dab064ee4cce247dc757424e65282989bd4d015b8580ce",
|
||||||
|
"blk.23.attn_v.bias": "0a25f6533aa7e7a152a4b198cf6c411c2408a34afa4f161bb4d5ffba2f74e33f",
|
||||||
|
"blk.23.attn_v.weight": "187e1bac6b70f74e6364de226565aa8275ee2854d09cbe5895451a689596049e",
|
||||||
|
"blk.23.ffn_down.weight": "88880dd9ba7ee80ade972927f810b5d2c30a69520c615190b27f9daabc0a8c5a",
|
||||||
|
"blk.23.ffn_gate.weight": "5abec63197935ab3eb8e6de0a5307396ec46cdb1cc5de25d87c845f3c4a3e887",
|
||||||
|
"blk.23.ffn_norm.weight": "60e1f5e6310c3a531c554a6bb7cd883aed58db1e51853f739436ea461c1843d7",
|
||||||
|
"blk.23.ffn_up.weight": "3d7f502771743f4a634188dfcd8b8a384fb07467ca8528366aee59ddb25b7bce",
|
||||||
|
"blk.3.attn_k.bias": "0b6b442ebbac29c8c4b67e8e3876d0382dd2dc52efdf4ab0ebbc6f71b6252393",
|
||||||
|
"blk.3.attn_k.weight": "480f40584fbda692c26f2cee45f5923780b236f8b4e8ec7bbee0237777a0918d",
|
||||||
|
"blk.3.attn_norm.weight": "39872be2af31bc9cd6b583ebba6fb759f621d586d66e5a2fc0b85991615a8923",
|
||||||
|
"blk.3.attn_output.weight": "924b2c80d8513bf637f8ebb3756a340d9cf2243de723fd08d7f5dccd46b3f8b6",
|
||||||
|
"blk.3.attn_q.bias": "863c9d848156847a3fe9bbc44415a4395245b5d13e95673c014fdb71e494ab0a",
|
||||||
|
"blk.3.attn_q.weight": "bff73ee5de92fba8f6c089bbb19ce57e17ab3c9c29295712804bb752711b882e",
|
||||||
|
"blk.3.attn_v.bias": "e1b6fea126e86189112fcdfee79ffc66a087461527bc9c2dc52dc80f3b7de95e",
|
||||||
|
"blk.3.attn_v.weight": "7812b7f5133636f06cdbb4dcc48ef7803206538641b6c960777b37f60a8e6752",
|
||||||
|
"blk.3.ffn_down.weight": "00b393d6a7e3ad9b5224211ccdbc54a96aae151f24ed631764ac224972a6bc82",
|
||||||
|
"blk.3.ffn_gate.weight": "cfd63fa3a038af05dc53c6eeb3c192f1602f26ff24cb840bcf1510fcb37b5513",
|
||||||
|
"blk.3.ffn_norm.weight": "7389fc240a282949580ea2f5b0d7973ac79f32f76dc0155b537bb6b751f8e27a",
|
||||||
|
"blk.3.ffn_up.weight": "2a945f47090df9cb16f92f1f06c520f156f8e232182eaaed09f257b8947a2a62",
|
||||||
|
"blk.4.attn_k.bias": "62533c31f0de498187593f238c6597503fef2a92e920cd540a96bc5311b3b2a0",
|
||||||
|
"blk.4.attn_k.weight": "93e829868bffd980a8e589b9c4566cd81e6ce4296a5f357a2ae93febe1284156",
|
||||||
|
"blk.4.attn_norm.weight": "9e0aaa4bbdd1389890f8abec20533f3ab16d61b872b1a8dbd623023921c660a9",
|
||||||
|
"blk.4.attn_output.weight": "74467d6f44357d67f452ac49da861468b38e98057017bd38bc9a449f9d3538e6",
|
||||||
|
"blk.4.attn_q.bias": "8e6d9026fd69b314c1773c5946be2e11daf806ef22a5d91d744344fd30c58c59",
|
||||||
|
"blk.4.attn_q.weight": "e5bfbafd94a4d530f3769f5edbba8cc08d9b5bee8f66ebf4cb54e69bc0b7f63b",
|
||||||
|
"blk.4.attn_v.bias": "20c570f92022d9905eb85c0e41d1fdb30db22007a9628b51f512f8268d6c34a2",
|
||||||
|
"blk.4.attn_v.weight": "9638d459d61da03c9dd34dad985e03c43b4f8a5bc9701a82153478329b0517e0",
|
||||||
|
"blk.4.ffn_down.weight": "9d91b06e89d52f4365dece7eaeec50f81e52cb2407b333248a81e6e2f84c05b8",
|
||||||
|
"blk.4.ffn_gate.weight": "bf6350a79c6a6ee9146edfd788b88d4a4c2b54db1aa0adcc1464dbba8a84b646",
|
||||||
|
"blk.4.ffn_norm.weight": "11a70a6b9f7ce336292f4e3a2c6c92d366d4ee4306ad4fdb1870fde107e9cc31",
|
||||||
|
"blk.4.ffn_up.weight": "64f23f493d02b147a72a59605e6b7dd1c4c74f6813a38a2a60818bd66f697347",
|
||||||
|
"blk.5.attn_k.bias": "f6c2c279c0ed686f298ad1e5514b5cd882199341f896abbb2c2129d4c64ce9c5",
|
||||||
|
"blk.5.attn_k.weight": "0e682f75870abf9efaca10dac5f04c580f42820ecf4e234d43af967019acb86f",
|
||||||
|
"blk.5.attn_norm.weight": "01efae7653705e741932fcd79dff3be643d7e97f4b5719b887835dffe44b3a82",
|
||||||
|
"blk.5.attn_output.weight": "69e841d00d196acc489cd70bc5ffbbb63530ac5fabb169d40c4fb3a32ebb8ed8",
|
||||||
|
"blk.5.attn_q.bias": "f3304d76ccd44fed887565857c8e513b1211d89a5d3e81782de507ab3f6fc045",
|
||||||
|
"blk.5.attn_q.weight": "98612a6b7920a247853ada95c240807d4ca8e43604279e7a2fc9bb265ae40469",
|
||||||
|
"blk.5.attn_v.bias": "39940a9b353ceed3edfd4a39b985c9520490aa1b9f11749c94fdf6d879d1a259",
|
||||||
|
"blk.5.attn_v.weight": "839f84b828cf83aecf479a0dc7bc86cce05145ef77dcf29916dc3e0680f5b665",
|
||||||
|
"blk.5.ffn_down.weight": "1f48cbb0960f15e06ab8a3754ade792995a655856389ddbca629c07e89d1b114",
|
||||||
|
"blk.5.ffn_gate.weight": "33d8219fce3189e1aab376039896eebd4ad36ebd26a8278cd19b26e4357e4f81",
|
||||||
|
"blk.5.ffn_norm.weight": "0f4a0f83d37127fa4483f2905cb4f38ef6ddc71584b6cb05632c62a9af313dda",
|
||||||
|
"blk.5.ffn_up.weight": "22a64a11e5f0a1ff45ca327bf9e1efa258f085ff6a96edc398b7474f725b4514",
|
||||||
|
"blk.6.attn_k.bias": "baa91df99d4df2d25e8d590bca4e334b97f2d9aa3df8e748fedc8a6188499111",
|
||||||
|
"blk.6.attn_k.weight": "121f3b9f4b9491996499392e2688a929cafe102a67920b4cb2a039349c43d8eb",
|
||||||
|
"blk.6.attn_norm.weight": "b4cf987e923d71f2f84c58d20ea8af7576b225bf61952145b489fdd395e3d411",
|
||||||
|
"blk.6.attn_output.weight": "a112642150a138d54b2a4038042fd33619035a35694771e966f3575856c635d6",
|
||||||
|
"blk.6.attn_q.bias": "a97ea10469cdfa3fdddf8bad6de683ef99f6170eb8d29d15dcf6bf4bce37c5a3",
|
||||||
|
"blk.6.attn_q.weight": "d80c787019317a87361de6bbc7df6701357216bdd9b404522cede34a719a5500",
|
||||||
|
"blk.6.attn_v.bias": "d846269db9cd77ae28da26ba0914cace1b6754bd5301af9c44607085dfcbd2d7",
|
||||||
|
"blk.6.attn_v.weight": "06567c433e8a391647633291b50828a076ad7c2436106bb9278c60a3f8fccb3b",
|
||||||
|
"blk.6.ffn_down.weight": "f15f66f56b3c474eac8c6315c5fff07c3e29c6e483d7efd4d303c7f43814be91",
|
||||||
|
"blk.6.ffn_gate.weight": "47768f89c6da8eefb29adb766ff4eb38c9dfd79320bbc1386248319fcbcf567f",
|
||||||
|
"blk.6.ffn_norm.weight": "7f8195e6b148212967145fc9d86ce36b699cff0de026042245c2d344f1ef8510",
|
||||||
|
"blk.6.ffn_up.weight": "53d7707ae4347aadb445289f9f87a008b72df5cb855b00080a605442fdd8edf3",
|
||||||
|
"blk.7.attn_k.bias": "63e274df3217dde25b8369a383e480fe4f6b403a74385f15ac0b5db71dce2744",
|
||||||
|
"blk.7.attn_k.weight": "f6fce88602f5945eee09767acbcad387d132614e6da39ae359f2bbf380d94b1f",
|
||||||
|
"blk.7.attn_norm.weight": "bbf5dc7336c0f9a511afef6bf5efeffd78f1b83940850c3eb7eb20c621b75656",
|
||||||
|
"blk.7.attn_output.weight": "d9fb907a138396a859cecbfcb377927308dc93c24c7fb52dba5eb59265feadec",
|
||||||
|
"blk.7.attn_q.bias": "f02ba1318346af77e309f40aee716e2de7ee8cab67e67b17636db9bf40894fb0",
|
||||||
|
"blk.7.attn_q.weight": "54a691e824be287a61c35c172edc01922ed792d2addeee029afc17ba6c7e11b9",
|
||||||
|
"blk.7.attn_v.bias": "3a4f182f51e84ce862d558fb2751b91802b65d74596bb14d624808513a8a83ec",
|
||||||
|
"blk.7.attn_v.weight": "a142fe6e106d3ab484e2dc6f9c72b8fc0a385279dde08deb1ad1fd05ac25deb1",
|
||||||
|
"blk.7.ffn_down.weight": "8daf7e8c430d183a4d6ab3eb575fafa4b5e31689f68b290c8b370411ad9d0f12",
|
||||||
|
"blk.7.ffn_gate.weight": "a2a786b45eb660994254b48e2aaf22f3e9821cfb383dee0ba04cc4350a2f8e72",
|
||||||
|
"blk.7.ffn_norm.weight": "73828bbc8c9610cc139fcf03e96272648cdc291263251fe3a67367408deb69e1",
|
||||||
|
"blk.7.ffn_up.weight": "e85dd0f63fed449ce16893c5795ea6a050a2d7a66d9534410a227e22c905dafa",
|
||||||
|
"blk.8.attn_k.bias": "91a752a6e2c364e5ee6a015770fe289aece4911ae6c6bbfe74ac52f465465f93",
|
||||||
|
"blk.8.attn_k.weight": "99c069e92c43a2efb74e23188256b3cabbbe06399878e681ce203a05d5da378a",
|
||||||
|
"blk.8.attn_norm.weight": "c76d36d3cc06aa2a9edb1abf9f602bb7ed61ac9d61f8ef7ed736a1e619abe717",
|
||||||
|
"blk.8.attn_output.weight": "ee5ff156a2625e1f203f65e69b514f9df04bd9a5e82b28e3876e16cf1c6f65c5",
|
||||||
|
"blk.8.attn_q.bias": "8fbd868a93b330c8b0418b488c5301f42a7eb0c58445a4e515d56777f1d96ed5",
|
||||||
|
"blk.8.attn_q.weight": "9f20ef86e80098ba52a3a31ebcc315bea3a614dac9cba7ac1db02f156db9b577",
|
||||||
|
"blk.8.attn_v.bias": "c4813571d5d618742183a7890c0b89cd7f18e210c758f63aad564659bc38a26d",
|
||||||
|
"blk.8.attn_v.weight": "ea88e1a4cf8bd56e9a88ada427d2b0cd352234827640757ee2a9ed594fb67a53",
|
||||||
|
"blk.8.ffn_down.weight": "b0d1a7495811580b189aaa3e20ea871d6d01ed7b6c23e59825078ef786944ff2",
|
||||||
|
"blk.8.ffn_gate.weight": "0a17c0caa0b06721c49b59b2a63a5dcbf744dd1cffa55962b404ba910c658a62",
|
||||||
|
"blk.8.ffn_norm.weight": "f15f109d4a8e9d1ff7c71fa5bc6373df7ee80c5f7d1de3fa0d4849d747e36bcb",
|
||||||
|
"blk.8.ffn_up.weight": "bbf4c5c4c5c8a0f9ae8b88e3cc8b86f81b98148722d5a350995af176c0b774f2",
|
||||||
|
"blk.9.attn_k.bias": "a7f60d962686b8ca60f69643e0e0fa8614688be738fb0b1c6bd54de35c2beb5e",
|
||||||
|
"blk.9.attn_k.weight": "dd80ce4adb00e338fc04b307e4c18a27071f4ba4397184a24d765e6e4a268ef4",
|
||||||
|
"blk.9.attn_norm.weight": "721e6487547e2b3986ab4b4e2500ceade59d908bccf4436e1e8031f246deb2bd",
|
||||||
|
"blk.9.attn_output.weight": "5a800af39107b363861e5f5173483cdcd644d8ac3b0c8a443b9c759d71285db8",
|
||||||
|
"blk.9.attn_q.bias": "0a19b4925ea8ca8067acc909b058adc327de3874cfc94cc9eb4a106d3f370123",
|
||||||
|
"blk.9.attn_q.weight": "93e84906684c0c7ede79967236d9fc8344da84a9f1daa04e8295c2c9b6b26a24",
|
||||||
|
"blk.9.attn_v.bias": "615421f812f821e230ecde4e6da35d868823248355ce7e4e51e2d650ead565f9",
|
||||||
|
"blk.9.attn_v.weight": "7f4913e289aefd9ceecbdaf9767b1e95303f5d59dd67ecb2cc15768477f4d08e",
|
||||||
|
"blk.9.ffn_down.weight": "95d1b3933221e87dc4af70dd566daec9498bf358070b8d26f1fc70766a84a152",
|
||||||
|
"blk.9.ffn_gate.weight": "530f2d04f6a1fbffaaa5f2fbc3a328ebed7b330e3af14b4fc7d8a51b13ad8d42",
|
||||||
|
"blk.9.ffn_norm.weight": "28077de416217ea1df94b96017bef4cc562ab62e51b1a03a671c70abc29ce52a",
|
||||||
|
"blk.9.ffn_up.weight": "b87b6190778aaee4695938e24ac6c90dbbee6dce7c5c2ab5bc26ba4564581822"
|
||||||
|
}
344
convert/testdata/c4ai-command-r-v01.json
vendored
Normal file
@@ -0,0 +1,344 @@
{
|
||||||
|
"general.architecture": "command-r",
|
||||||
|
"general.name": "command-r",
|
||||||
|
"command-r.attention.head_count": "64",
|
||||||
|
"command-r.attention.head_count_kv": "64",
|
||||||
|
"command-r.attention.layer_norm_epsilon": "1e-05",
|
||||||
|
"command-r.block_count": "40",
|
||||||
|
"command-r.context_length": "131072",
|
||||||
|
"command-r.embedding_length": "8192",
|
||||||
|
"command-r.feed_forward_length": "22528",
|
||||||
|
"command-r.logit_scale": "0.0625",
|
||||||
|
"command-r.rope.freq_base": "8e+06",
|
||||||
|
"command-r.rope.scaling.type": "none",
|
||||||
|
"tokenizer.ggml.add_bos_token": "true",
|
||||||
|
"tokenizer.ggml.add_eos_token": "false",
|
||||||
|
"tokenizer.ggml.bos_token_id": "5",
|
||||||
|
"tokenizer.ggml.eos_token_id": "255001",
|
||||||
|
"tokenizer.ggml.merges": "902a060cac8884a5793d2a857dd2e53a259de46c8d08c4deb243c239671e1350",
|
||||||
|
"tokenizer.ggml.model": "gpt2",
|
||||||
|
"tokenizer.ggml.padding_token_id": "0",
|
||||||
|
"tokenizer.ggml.token_type": "b7a352ccd1c99d4413bcf452c2db707b0526d0e1216616b865560fab80296462",
|
||||||
|
"tokenizer.ggml.tokens": "815ac90ff23565081522d7258f46648c8a0619eb847a9c7c31b238a9b984e4ae",
|
||||||
|
"blk.0.attn_k.weight": "6fcfdb466f9ceb1229404ce4ec4e480751b8d00da12707a11783dad7256cb864",
|
||||||
|
"blk.0.attn_norm.weight": "6063317f731371864049c7704a70772f1eb632194201ebdc2ed0f8e483507c72",
|
||||||
|
"blk.0.attn_output.weight": "920f49716a1e2fc73b6794ec777947f1c122701e63ed302422ac89e90f06e9da",
|
||||||
|
"blk.0.attn_q.weight": "ddbcd7cde197e632564ac58e4f25d9e3a8ca52917329eeb6081eb41a797932ab",
|
||||||
|
"blk.0.attn_v.weight": "318fc02a189d87420f0cbf57f47f11e00c21ec1ed472ce0a2a895b44f7fa0fca",
|
||||||
|
"blk.0.ffn_down.weight": "aa71975b6eb1f4c77b03d2ac4a194cf8d95718efac741bb12f0f3ff79a27f9bc",
|
||||||
|
"blk.0.ffn_gate.weight": "42967702fa0bc738b88dc50007ace26dbe74a5a9e0978124dd093f818241a9e1",
|
||||||
|
"blk.0.ffn_up.weight": "5282c8788b086bd30f46525e7995a17464882a72703fd27165491afdd8bfd4af",
|
||||||
|
"blk.1.attn_k.weight": "cd248882e64fd2c3402c44790ebe12440133dc671b6893fdad0564c461973adc",
|
||||||
|
"blk.1.attn_norm.weight": "ba84e1c8fd30af6ec94208db4078befac8c921aad3acb887812887f3282ea2be",
|
||||||
|
"blk.1.attn_output.weight": "2efa3ef7c5666ccceb05e339b83ad680cc0d2c3ec78203f5da5959f23a80e14f",
|
||||||
|
"blk.1.attn_q.weight": "5106f2e255358a1303c22e8b5f0ec044852bb30a866c52cabefd30017a7a6b7d",
|
||||||
|
"blk.1.attn_v.weight": "a211a634a1a5df1d5f973645438be0461dd922210f9747c6b04e386c7f1ebe95",
|
||||||
|
"blk.1.ffn_down.weight": "37093afe48d32c578ec956c9ed85242cd000d6aa979e60526aafa10c822dbb10",
|
||||||
|
"blk.1.ffn_gate.weight": "469860819e9159caefb1aad0bc66db790f3393f05fd87b08e52256a7ed256543",
|
||||||
|
"blk.1.ffn_up.weight": "736742c97d35d1a011f9cafd3c0ce947ad559bb2fba6da73c816f6bfd0fa9aeb",
|
||||||
|
"blk.2.attn_k.weight": "92c219d92804d832ab404bd6dc7339c90877bb7cf405dd030c121f8b27757739",
|
||||||
|
"blk.2.attn_norm.weight": "61e4466069474b76b6d1e702566420eb669faf3556b00ff7b824784aca13a2d6",
|
||||||
|
"blk.2.attn_output.weight": "d2fb38a2b2171fd91caf037faa585a62225819aa232d86fd4f7f9d2c3c8a45e9",
|
||||||
|
"blk.2.attn_q.weight": "f6faf5cc6844e3daa4f9f68d90f5458c64879de68a7728860e38374e30c3429d",
|
||||||
|
"blk.2.attn_v.weight": "f340ef8f7341d987a6f37c0e9afe0aef5be67be00c0ce5f57612daf73319cce1",
|
||||||
|
"blk.2.ffn_down.weight": "c7be61a701d779860b621b143fb6365b607bf99ec7c0f153b07908ac8120885a",
|
||||||
|
"blk.2.ffn_gate.weight": "b64f0878187bd3392abfa4c3e8ad2f8b4c133903e54246747ff8f3b4639ad83e",
|
||||||
|
"blk.2.ffn_up.weight": "50b11c712652e90ee7428dbb45cffebb80662ac982bc72bd9eafff361b5eb5a8",
|
||||||
|
"blk.3.attn_k.weight": "2b7bcbe9ee5c9c630c8c8d7483887e78b73581016f4cbb6933db2a147a25f431",
|
||||||
|
"blk.3.attn_norm.weight": "0181dac7f4eee7252980323e8032cf339bef2046ce0a16c0fd72af7c98a8a37b",
|
||||||
|
"blk.3.attn_output.weight": "aef8843b636ce231da9e7c9acbee197883cc15df0e2887709324c6a50f16da7b",
|
||||||
|
"blk.3.attn_q.weight": "55404130fa10e81322d33eb378aa0de31a92990ce7730f1338c0ace0406bb1b1",
|
||||||
|
"blk.3.attn_v.weight": "76f7fb8040d82b957d689ce34fea2302a6640ad5bbaa0052ad2b7ebce270c33d",
|
||||||
|
"blk.3.ffn_down.weight": "648628933eff3b357c3729c33c5b1ae51c28e59b9c19acd1601a2ff7c5d5d9a5",
|
||||||
|
"blk.3.ffn_gate.weight": "6a588885d16e98d5f50ebed05af089154f680085ca9c97691e5b489088630a4a",
|
||||||
|
"blk.3.ffn_up.weight": "e12455a1d702f4986e1a663493e3d5102b367af74d45557522002a35d63ecac2",
|
||||||
|
"blk.4.attn_k.weight": "40d943380a8a85e4eab147934bf6e16f23cc8ab753f6636526382c074d182288",
|
||||||
|
"blk.4.attn_norm.weight": "4ab2c098983d4599fe540eef624c4df954adb7473faebda7471ef0ba4134814c",
|
||||||
|
"blk.4.attn_output.weight": "d14b91e40f58bf4a3c8c2eca0b12bb541de406574af39027d56f6c588a147082",
|
||||||
|
"blk.4.attn_q.weight": "e1224960a3562107488589f883fa32414bae41712fa8dbd47c5f3e3a7801452f",
|
||||||
|
"blk.4.attn_v.weight": "063f297bc4aa6e709fc32c4c32e35af7d07d80e83cb939b76adbba858006c03d",
|
||||||
|
"blk.4.ffn_down.weight": "f88a18020c5e1caaa29596895eb348e76ee5bfad27ed57651a86cd8cd1f9b5aa",
|
||||||
|
"blk.4.ffn_gate.weight": "48e7e1eed3fb52e92e61d3557dd0ec002418327090e034ce4322fd68542266f8",
|
||||||
|
"blk.4.ffn_up.weight": "1ca8a7aa17355b6ce0d9ad5539fdad3899fa47fd359c285fbfb31f19f47bf073",
|
||||||
|
"blk.5.attn_k.weight": "2bdf15f8e73d068d972380f25d207004cf0bf3b5bfa46946803ba6fba07d9175",
|
||||||
|
"blk.5.attn_norm.weight": "60448d7cde6e1b6467aa31bdea012e39cdb08c88081cee7d102dca4f93f766ef",
|
||||||
|
"blk.5.attn_output.weight": "f9f687d7c457537f9fca8a4087a59f1c3bebfaf5537b94e42c831a13224f7799",
|
||||||
|
"blk.5.attn_q.weight": "987db7a2ad68657a92625e1980effbb1f79697c2183f2b9f3b3a0570c51b0ab9",
|
||||||
|
"blk.5.attn_v.weight": "cf696891148f3e4783ad1d20f93462ae091eb8651c656bba9b662253b6263e02",
|
||||||
|
"blk.5.ffn_down.weight": "c0662b0bd0929136005fb9d691fdd9b2c33867d9ce9622339a6a456b720b059a",
|
||||||
|
"blk.5.ffn_gate.weight": "200bbdfab615d7a3a84719b6ced7751e3ce52757ef212d96f87798bc1de5e987",
|
||||||
|
"blk.5.ffn_up.weight": "df5d23e7e035fb1b9d163da7ddfdfe38da6a37e86e96534dc02ad20f011b55b3",
|
||||||
|
"blk.6.attn_k.weight": "c0dae2d272a7c5a2fa004bbb8475dbab362fc1f6d008e73d5a4434a9382ac6ba",
|
||||||
|
"blk.6.attn_norm.weight": "51c57ac8b55e04354d5dca6bb9c0cf4177639d3b038e80209e33036209688f64",
|
||||||
|
"blk.6.attn_output.weight": "229d97892c62f85bcdf431675250e01c976ad69ffa450b01fb543bf88f14a2fb",
|
||||||
|
"blk.6.attn_q.weight": "c20e49621821bd46ed156e6823864a5bda4f317750e71ab8dc54e44eb48cf7c2",
|
||||||
|
"blk.6.attn_v.weight": "53ceb1a2ee43fce3c7b5b33c58a9fc5ee7f44dc1c6f29bc9dbefc37582102dc9",
|
||||||
|
"blk.6.ffn_down.weight": "7923c943b7629d560a032d1efa210d1d75c6692140f1be94464ee7ed24f44ed0",
|
||||||
|
"blk.6.ffn_gate.weight": "57593d350361af753a6a39f53b066282634c0fb44f396f6f2966a574b01d8f8c",
|
||||||
|
"blk.6.ffn_up.weight": "327b6a7a387098b8899d3ded04a4d4e7c658ca61b80d4e7b17594be232721602",
|
||||||
|
"blk.7.attn_k.weight": "9ca48b87a10116fd8868e62b76f211d4bb91f166096be9061439ee2e1c3a5c20",
|
||||||
|
"blk.7.attn_norm.weight": "cd56cfcc4e2ad6b96e23ea7b0d32b4caf236107d99a0b22c56760b62e63c8cfd",
|
||||||
|
"blk.7.attn_output.weight": "7352b509a03cae2491ffc060e577d189341a0f861233f18c96f9d275dc4234bf",
|
||||||
|
"blk.7.attn_q.weight": "2b3791c8c008c33ddbe12bedba8191322ceea2dcce5cf0eb7a93d40ad254e672",
|
||||||
|
"blk.7.attn_v.weight": "3ae721d52466487a3d48150581e57f6d64ea1e83ab929f23b28c3d777422eeb6",
|
||||||
|
"blk.7.ffn_down.weight": "3b6fa8ececdb3c34af3a5363863d6f94289c1c95bf47fce3a3ddcf184c5f0848",
|
||||||
|
"blk.7.ffn_gate.weight": "dbd7df6c5ae5eb4adb859f0d36453813a4e289a359a1ba8f72d67fcbf21c3e22",
|
||||||
|
"blk.7.ffn_up.weight": "de68380a334b4c5cfd4c318b0e9854aec59bd79aa0f0c30af3f56414f83482b0",
|
||||||
|
"blk.8.attn_k.weight": "7303c4e4480abc72a7ee271811311199245fb5c2ea27a2bd3b8cad3a53a03c27",
|
||||||
|
"blk.8.attn_norm.weight": "2e3d1921898d1b943ce1a1b6818546c8b471d6d542da24f51a8b514b8c3dd4ef",
|
||||||
|
"blk.8.attn_output.weight": "30421520887b66bf97a18dbcdc283bc8d0b60590b612fd638a319a6eae923227",
|
||||||
|
"blk.8.attn_q.weight": "73e064d5433c9b500068a1c31744dbd53f4ade298fb450a0e8c97f62cf1f8a8d",
|
||||||
|
"blk.8.attn_v.weight": "27e21f8b9a9a8533e8178ca34a72aa1d786393d57302b7806dcdf3e51de511a8",
|
||||||
|
"blk.8.ffn_down.weight": "bf694bd8e00047982108000e7b3dee7b225db8b19abc595e5697b6bbefd92e7c",
|
||||||
|
"blk.8.ffn_gate.weight": "d55fdbf8606d9141b774b0500c58944fd1253b9e69d1f765eaa9a680b9f2ca40",
|
||||||
|
"blk.8.ffn_up.weight": "1ae3f580655e7c8e8dd6c34fa4ac574fdfc5e3f1a8536da0c5442d3a2976f0e7",
|
||||||
|
"blk.9.attn_k.weight": "b18080626012d8aabcf78542d6c7bf31c712bf55a70172fbfe173fcf34481036",
|
||||||
|
"blk.9.attn_norm.weight": "2e3620620dc09998c6d3063a7d5de5433fbbae8c11e5b00d13f145d39140e162",
|
||||||
|
"blk.9.attn_output.weight": "69c3c0e27ef1c0fc933eeb7b612b70909f18cde238873c0d576a2ba9714ef174",
|
||||||
|
"blk.9.attn_q.weight": "68330e5aa28a28873c9a6e67f032186ef651df2df5844e0f27094ba349fbe4ab",
|
||||||
|
"blk.9.attn_v.weight": "3df8d45a102be082d0793a51cb82aa62a43cd0e9d047ba4115ca0f2414b39325",
|
||||||
|
"blk.9.ffn_down.weight": "1d6cc162b73745b135b4f040a0aac3c06d5135a3dc5b2421e7ee2af48662fd7f",
|
||||||
|
"blk.9.ffn_gate.weight": "034a9d40fb1e32b534b45f4bccd65cbe43c4a6a3f5d01132bd245ca0005de5fc",
|
||||||
|
"blk.9.ffn_up.weight": "c838c38d0e1a0ac0da17eb2a66023ed31929f07d8fcfe1cc546df26096c91f0c",
|
||||||
|
"blk.10.attn_k.weight": "a78507cb72f744b86ceaa032596e74e5571c822d0226d334881169addb32cbd5",
|
||||||
|
"blk.10.attn_norm.weight": "35f48d0b28ee0e6b4cad4e983925737562d64824be5b168b3e26df3d6b260cf1",
|
||||||
|
"blk.10.attn_output.weight": "53712db06796de39b131323e7abf9a58551b6d52da6db66a471580386d396252",
|
||||||
|
"blk.10.attn_q.weight": "efe08429ba196026b81cd1c471e1c7418afd9e966659feb3936b674aa0803b58",
|
||||||
|
"blk.10.attn_v.weight": "7ec6055e134f89da0cbe79ec9f13ef2e442ac584b1f03c3e13e7d0cdad0078bd",
|
||||||
|
"blk.10.ffn_down.weight": "37e66af4bcd1f3079e841e892255b8255070655901864ea3a8c602a7f681a640",
|
||||||
|
"blk.10.ffn_gate.weight": "1825282bc34830d371c6edcc3c1e73e6ecc1e10f4aea0122dbb7acc1d6f7b1bc",
|
||||||
|
"blk.10.ffn_up.weight": "819b3b276a4d4c14a35ed6682d5ef18a5e8ed468e5ce3f12e8c75ec18ac20ec4",
|
||||||
|
"blk.11.attn_k.weight": "5327e6a2af82dfff0619a14971f5864a15553c36fead84e1af42c7630f2729c6",
|
||||||
|
"blk.11.attn_norm.weight": "fec363b3c4a43036d2c635fb8aa9e122dd87ee79811839f2f6cd955be3373e7b",
|
||||||
|
"blk.11.attn_output.weight": "ccf7b38f18ee8798b8a6a35018e2df3eb3e007de62876befb68025dd66c79763",
|
||||||
|
"blk.11.attn_q.weight": "da8c4a1c824ffe174e39f126cd72f7ef83c56aff1259d452a1212de80f98f5e9",
|
||||||
|
"blk.11.attn_v.weight": "d17ae6bb77f03982b55d341eb67acb5969e9ad3da5994b96eafc09793dcfe3a0",
|
||||||
|
"blk.11.ffn_down.weight": "a6bac521e2791345f22c57205fa1c2f2f687794dfd24d0e98d50ae0d0eb6088a",
|
||||||
|
"blk.11.ffn_gate.weight": "5ed902c488cb51ba5635f3df08258c5f84f31a679a00211ea5f9d8b824ef6d9d",
|
||||||
|
"blk.11.ffn_up.weight": "ee9f1437eb890d2cf9df2574afa1cecf20aafdd847cd75b152d7eb74419afd34",
|
||||||
|
"blk.12.attn_k.weight": "5a069c06e1019b0f889088e67458f7a11ec77fa190ada6069e46211f62219947",
|
||||||
|
"blk.12.attn_norm.weight": "194d7e5fcc8c49aea62daf1940532419cf3c505afdce6be377286b677db5db8f",
|
||||||
|
"blk.12.attn_output.weight": "6534995fd4d6fecb55e317add4b1723aba4d825e1e9471d0b08813dfdc247176",
|
||||||
|
"blk.12.attn_q.weight": "4ab51ca519b5995581fa34f846276feca3b907ef2b51f192f6cc0b3263c3f5a2",
|
||||||
|
"blk.12.attn_v.weight": "5652ca3fa81ef9a1ac1543d71fc6813f8517f8ec54b25c701f6f98061614830f",
|
||||||
|
"blk.12.ffn_down.weight": "4b2c263f54c88516b8eb273bb8d9615b01c5c8b484dc70358adb91b50b300edd",
|
||||||
|
"blk.12.ffn_gate.weight": "8f50c3c3e3e8568991d6c1b0e74b500cf4f208e7700bbb8e87c3f6a6d359b6b5",
|
||||||
|
"blk.12.ffn_up.weight": "1c1a581fec1fbe959e1427fa513f400100b5e1ee9d83932630be9905fb49c231",
|
||||||
|
"blk.13.attn_k.weight": "efd7a38c46f08d8376d82974f33c644e3a02220e142d63b1704718699a8a884c",
|
||||||
|
"blk.13.attn_norm.weight": "d28fa4f1bd75abbd063b0e622e08f579c89cd0c0c5ce63c1952ec9f944f8ee13",
|
||||||
|
"blk.13.attn_output.weight": "71e0068a639288718bdb70a6cfdefd50bc8b3ec3993347a65129e70001ca5827",
|
||||||
|
"blk.13.attn_q.weight": "b97077adc92cff07a2e07d80ee38f214ad8713571c69cd5c70ebd43dc501ac87",
|
||||||
|
"blk.13.attn_v.weight": "79b3e2749ab4b459c81e96e322b215f1e8af645eb346e176c326bd00cf6ed2fd",
|
||||||
|
"blk.13.ffn_down.weight": "9f8687d11effa1db7cfecf7bec5631734bcf2962aad74a9f519144491e08ec85",
|
||||||
|
"blk.13.ffn_gate.weight": "7d14dfa0543852e7777fe8fff29ca533744cbcf1ebcf10067e5adfc4eb345e65",
|
||||||
|
"blk.13.ffn_up.weight": "852b9527b97fdab211ff3f832a660ee1d93ccb56906144c50f01319a6e8ee615",
|
||||||
|
"blk.14.attn_k.weight": "79e926b20f36f66d58226cb358881f2f68ae7b468787d33cafae5110287a14a0",
|
||||||
|
"blk.14.attn_norm.weight": "97d481b63deb0df6142c2c6cd23043720c62eb609e390f47a7113751c79974ec",
|
||||||
|
"blk.14.attn_output.weight": "aa6e94d7176d5c79fbb89b96e5f13ce75702ce3dd23ee52986446da436a6c3d6",
|
||||||
|
"blk.14.attn_q.weight": "214becb6d1bb460da9fb8ace0f99b9a5afa9edf7aa7acc19606c7401b11d6305",
|
||||||
|
"blk.14.attn_v.weight": "488b0e6d7f1a7a2ed0972aaa6d10ef9c775ee5373460324efcf5b3e3da9311df",
|
||||||
|
"blk.14.ffn_down.weight": "29c7ad16cf9542e30996a1a01ab95b844533b28051f04cc7949c371afb796471",
|
||||||
|
"blk.14.ffn_gate.weight": "b7ef208f2b054803665b377f5a5980c122c026841809cf855c6ba06d1c3a885a",
|
||||||
|
"blk.14.ffn_up.weight": "76a5cc28100748d79c4398ce7b9176aab4d661548b6293a82f99144812e5b70e",
|
||||||
|
"blk.15.attn_k.weight": "a6b8f9e98ab878fa7ebc5d080978ebf2d050acc2ab2fa8ea9188eb10e27702c8",
|
||||||
|
"blk.15.attn_norm.weight": "a26d07a9752d6dccb68e3a8a2a49fd0752cdd0a415e05547819bc37d9ba63d5e",
|
||||||
|
"blk.15.attn_output.weight": "c63616c69048ccbee801e05be4f56d21fda21aa0cc470f41d57c31b4d9283a4d",
|
||||||
|
"blk.15.attn_q.weight": "fd595a67bf96c6ba16eb148a9d02fa52fa3c1d33ed10be28a08f851409fd6e64",
|
||||||
|
"blk.15.attn_v.weight": "1c5c9d33fa07c05d5f4ed0032c6c4aa83d863f0d31c94a66109d239dcd03cea3",
|
||||||
|
"blk.15.ffn_down.weight": "585ea62ab8aff7d7d212ea5c1a03226fda6b68370c890b776834af70c948dcbc",
|
||||||
|
"blk.15.ffn_gate.weight": "a13c63f86f879b03a573d5dd2a25cfd1f4dc73e8132e6454ecc23e538b4cdf6f",
|
||||||
|
"blk.15.ffn_up.weight": "f7112450f57c12fcd511f049e0dc0b541625a107a7901c3261ed9e984299f65c",
|
||||||
|
"blk.16.attn_k.weight": "2d2c8b11dd71fba6d1c106aa1673c113a5448653cca7eab897c8739212ed5003",
|
||||||
|
"blk.16.attn_norm.weight": "95c2ec7be9469690e18a9a1779684acb3e9da44b13e263a0da840305646fbf8a",
|
||||||
|
"blk.16.attn_output.weight": "31a65046e677f54dae654ded4e733479fcc0f7283d83076b7dc7cbcae8528230",
|
||||||
|
"blk.16.attn_q.weight": "bfc6292b9c6d49b7118d08060242a138182eb182d136ba5dfaf469437c16081d",
|
||||||
|
"blk.16.attn_v.weight": "68f81d037340217d87c7853ff4d6edfbc46d9e827ee6d5bff7c3f6238e3a95ad",
|
||||||
|
"blk.16.ffn_down.weight": "bbd6629691950cef4d5113e1c6670e91b216a9b872cb92cee02dfda4d6c4f7b8",
|
||||||
|
"blk.16.ffn_gate.weight": "63cb56f282b7401ed6c76e5bb6fdf1bf68a64f9af0c82c014209b55bcb5191d0",
|
||||||
|
"blk.16.ffn_up.weight": "b54f39a2541063cbfb6f713aa81c3b69a04100e999aa2ebbeec195dc382eceec",
|
||||||
|
"blk.17.attn_k.weight": "3d9ba49799cc56664ec30a002bcad61eb651294212a68c3ddb573eb042aef5a4",
|
||||||
|
"blk.17.attn_norm.weight": "42ee0db4b9d63257bca0012a30b12737ead1caafeb5ed3d93c8f48ffec4b46de",
|
||||||
|
"blk.17.attn_output.weight": "a38fd100f05c9041c592bc739e287de0b10d08ef2bda41a879225bdca9002f71",
|
||||||
|
"blk.17.attn_q.weight": "8a3bee285b0180a9eb35662e449ee4cbe16d992bdd48fb3a94bc4a347728cfa2",
|
||||||
|
"blk.17.attn_v.weight": "d7f8f1b8b863494ed4392a1656775912e9b264ad36016547b12e832a1d6757d6",
|
||||||
|
"blk.17.ffn_down.weight": "bb7ee58f61da8630972e25b621996fbe8ec06f4dc9ab1e268ab5b120c526ca28",
|
||||||
|
"blk.17.ffn_gate.weight": "6b652dbf167fee09a45ebfd78d500ff6548fb2756dbe5343ffec3f7e6207179f",
|
||||||
|
"blk.17.ffn_up.weight": "3b67f727e55e742715de978fab80457781e7a3762bc48f79d13b45dcb8de664c",
|
||||||
|
"blk.18.attn_k.weight": "ff7fe57c57b90c6fcc0aefc39ec24593c3a7d1ea1c23770480075a015450e0f5",
|
||||||
|
"blk.18.attn_norm.weight": "1d40faca082d2633ef0ccf19e121870dd6c7c3e2154607c7f3543fa96e99cb2d",
|
||||||
|
"blk.18.attn_output.weight": "9adfecaaa397a92db4687efd5fcabfa0daef9e6b0493763b7ff5ebc185c43a6c",
|
||||||
|
"blk.18.attn_q.weight": "ad1803eb9b291948639277afe981e666b07167eb3fcae903ba5b73bf86d8f50b",
|
||||||
|
"blk.18.attn_v.weight": "308cf23399adccf27401a4ab60d74dac6fb9d4cd4b9c5940d9145118d1881b34",
|
||||||
|
"blk.18.ffn_down.weight": "7de4ac9a561fb580619b745687dfd7ca8a69ef70471dee978741b80e9ff7bead",
|
||||||
|
"blk.18.ffn_gate.weight": "0c66970f696b33bd5ee8f1f2fbcb41fd78fa5ccabdc927e11a4d5a4089f19c69",
|
||||||
|
"blk.18.ffn_up.weight": "66a42e988e8a1f468fabf976c48e9e4bb045eaac6916ef16555ac101cd674abc",
|
||||||
|
"blk.19.attn_k.weight": "a928ab50390bacbcebe2e4b66922498134ce22d7b93beaa87d6cf4ab52eb7174",
|
||||||
|
"blk.19.attn_norm.weight": "b4a02c55b46c2a96aec9c64a254087cf48e6c1d4b6f31782c77a46fc4daebad1",
|
||||||
|
"blk.19.attn_output.weight": "b768319c641dff1eac5d1f8ceb960c9899c795bf2b24c1d6bf70aa24fda45f77",
|
||||||
|
"blk.19.attn_q.weight": "79ef3f57d187d3954a26362096e1b6c222d76f537dff73e034d6e9999935b8bc",
|
||||||
|
"blk.19.attn_v.weight": "ce13d6b13e24fcb2d5bc6a2662e5bd295b31b12db10a6d0307f86cf29b8d5001",
|
||||||
|
"blk.19.ffn_down.weight": "cf90d7e2137482cfd50934a8223ad774621d08554969da80a9712df5e6227eb0",
|
||||||
|
"blk.19.ffn_gate.weight": "71ce30150f003b6eeb3bf7464e05b6ae615f135110d8e47f0a47fd973e537c0f",
|
||||||
|
"blk.19.ffn_up.weight": "7f92aca0cc29866633feec701ec01a85a8ee2fd4e2b9630173a6cffb1d9d50ee",
|
||||||
|
"blk.20.attn_k.weight": "a2df23159d6fb74ef28e14b61028fe8b00a693a2fc9234a980be74f20b958682",
|
||||||
|
"blk.20.attn_norm.weight": "c6cd5f1b096fc5efa4eb59ca1c8c4bd28730f3dcedd59a63601663eccc6724ed",
|
||||||
|
"blk.20.attn_output.weight": "896a8a166d0f006d4b09867ae4345426303cbc3fb13a18d3d4e1bde00f16dbdf",
|
||||||
|
"blk.20.attn_q.weight": "01eb79588fe61baea0da43e99f4dc5939590e1bafd01e12dadb8326f102bfea2",
|
||||||
|
"blk.20.attn_v.weight": "bd39630fdd5a7c859ac1addaf53e63faf524c3f32f5f4896d86b6e746b1d5c06",
|
||||||
|
"blk.20.ffn_down.weight": "0304a5d39957a0e3f031c4bcc4549a135d396c8d97c8d276fd1c823ce86560c2",
|
||||||
|
"blk.20.ffn_gate.weight": "117b79d595b1dca0c8b37586beaecc4d84411507276212dc286cde7fc36c9bef",
|
||||||
|
"blk.20.ffn_up.weight": "6e799346db145c125f01783539749d3828fcc451cd4f10c5352f047a47e28714",
|
||||||
|
"blk.21.attn_k.weight": "1c37e4c0664147e775bb006b226b9553e3421140cd96288ea755f81731ab80ba",
|
||||||
|
"blk.21.attn_norm.weight": "00ae783a29000ccda5e4bdbff03df0752fb82805dc3f9b987500ebd80714476e",
|
||||||
|
"blk.21.attn_output.weight": "7588b84f9fb19f15095b5265c60b4a4e7ae74bcc47d4607dfa5d0bfab6f136cb",
|
||||||
|
"blk.21.attn_q.weight": "a65f1c0dd06d45bb97532d3e932689c1eecfe7359089b39174a96a149335cbc1",
|
||||||
|
"blk.21.attn_v.weight": "4220b77e7d5e8709b4eef33a679b5dad11f297085ef44c9977f9e54ef08f7a2d",
|
||||||
|
"blk.21.ffn_down.weight": "b8c082a0530d4b5328e67db0df84c5498f2af956de23c639fa0198ffea853950",
|
||||||
|
"blk.21.ffn_gate.weight": "cd1b656ee72d00e9835ef667c19ef89a88de261eb8eb7c0e936e0f9ddf83ef9f",
|
||||||
|
"blk.21.ffn_up.weight": "dc445f73e36ec7a3bd86884186b728f8e0187f32848c3b8b69d4d41f8571bf31",
|
||||||
|
"blk.22.attn_k.weight": "e37cf0b893ec8b9ee8c78dd139b8d9c45cb997a3bc0c3d93a70ca1c3f6af8859",
|
||||||
|
"blk.22.attn_norm.weight": "248a27838d3c46cc03a5c312facc84e2e0e2c990ef8401e93da25918497f88d1",
|
||||||
|
"blk.22.attn_output.weight": "fc191a18f6d18332c66761f7ab28008bfe295dd1f5c8741a2488442f9e00d0f5",
|
||||||
|
"blk.22.attn_q.weight": "4b193a2ab8bc2b085db18f2bf3eeba26e02b537b2cdd738160c8f14b165d0f5a",
|
||||||
|
"blk.22.attn_v.weight": "7a60ce5ccac7e045e55ba1e1e85bd2a0f93f8c781daee96c5223665e22f0c666",
|
||||||
|
"blk.22.ffn_down.weight": "e0a34fb4244e2c7168f3dbaa1904c15d339ec39999cdf27128bbaf619ee0a237",
|
||||||
|
"blk.22.ffn_gate.weight": "8bac872d4b8549c8812f927efa309f1792b524f33601095fff61b826de5a5615",
|
||||||
|
"blk.22.ffn_up.weight": "b67fa2b94dd901b6ec64c0853ce8ca2d86fe9cb1cc6d2f15fbbbe0e691c0c648",
|
||||||
|
"blk.23.attn_k.weight": "2c32e66ad01942b819ac09a197c71579fe66f02226a264fdd72ad1e02c67a27e",
|
||||||
|
"blk.23.attn_norm.weight": "825fdc94deb439cb93c713eeb077c1052b90ed658d6d464fc4ad3d611e911d48",
|
||||||
|
"blk.23.attn_output.weight": "95ca6707a95b8750b0c7c5d379d368f0f2e7ebef631954e7d4d8ec0f41f13a3a",
|
||||||
|
"blk.23.attn_q.weight": "6eccc84faca5fac015d1b26e2854501edcfd292a302228fe14cf99f5eb59a34b",
|
||||||
|
"blk.23.attn_v.weight": "b343ac3d226040f1033ee049668aa1d89b1774bc18431965682e5dbdce78ccdc",
|
||||||
|
"blk.23.ffn_down.weight": "9fc599befea8d3b1e342d564a110074f66d2542df406c4b90b6bdc5828fbb2b2",
|
||||||
|
"blk.23.ffn_gate.weight": "488556c1b0c9f0b20b0c99b4bac2e0f4046b81edb601d7b91e7e5b3bab47d667",
|
||||||
|
"blk.23.ffn_up.weight": "1088e291d7008dd9c7c2dd6830af686a8a84b724d123a016209bd5156d6898f1",
|
||||||
|
"blk.24.attn_k.weight": "a923fbe35e61e009a53927d7828818e0592bb737d6a1106c4b0b5a1efc367e07",
|
||||||
|
"blk.24.attn_norm.weight": "9b51aaaa939cefafdd9b13a7e5b74ac7fa2d603427e55a16a909d6f3f353750a",
|
||||||
|
"blk.24.attn_output.weight": "1beb2baba56f8409466434b037771248c2f620ec5f53e15f44c271d5a2d9ecf4",
|
||||||
|
"blk.24.attn_q.weight": "4b0194fe5bfae0c6bf6131dcf8cb6e2b994f6ea10b27cb03574f0f4f8cc0c950",
|
||||||
|
"blk.24.attn_v.weight": "6ac34b1ab0f66226d85bca1194a7c212cd93d384ecbc8b8395de48aec0970a61",
|
||||||
|
"blk.24.ffn_down.weight": "5508f74cb732a662c2936b32ac5e90742d172b9f961a747b0e5cba0e5906a89d",
|
||||||
|
"blk.24.ffn_gate.weight": "095e39b8584403835f9bb1ac33e0e81f54175575e4800273d281b845bff381e7",
|
||||||
|
"blk.24.ffn_up.weight": "2d43ec21637dda12973de367b0113ee9840b0d815bf6fce042f7c3f270b0b530",
|
||||||
|
"blk.25.attn_k.weight": "9e2aee029f3d2c7f67dfc7926e72c8228fb978382c8e5a4701bbf82c93801419",
|
||||||
|
"blk.25.attn_norm.weight": "220cd7164fb4cdbe22d26058e4153b26c27c7b5ce2bec8e95bf2c0ea08d23103",
|
||||||
|
"blk.25.attn_output.weight": "a17f4a5dc6aa51f03dbd75602d98e9491767c205cdc2c3a5f8667fc54bbf7c64",
|
||||||
|
"blk.25.attn_q.weight": "f60827496835c440c794bf57ce9780704d10a59d8229886bf75ebb18900ba4ef",
|
||||||
|
"blk.25.attn_v.weight": "9cac217e9e9f4f4c85f14ee51165a77c580165bd4a34b202389169bbe61a1ced",
|
||||||
|
"blk.25.ffn_down.weight": "a0f36949b663e80849581dfb71e7babcc73580793bbcb0c80ab26d5a6e000359",
|
||||||
|
"blk.25.ffn_gate.weight": "df4d1be4d50d6afe5ad3ef0d0e0fac76a33e85c963dea769641d612dd53e7d13",
|
||||||
|
"blk.25.ffn_up.weight": "992da76be762632e25ebc5ef4d03728eece1b43f7c4e31827df19ca724aea694",
|
||||||
|
"blk.26.attn_k.weight": "34199ff856ac32a500c754539d070258574192a34ecba87a182897cb59fdff52",
|
||||||
|
"blk.26.attn_norm.weight": "a8e9dfb2dae5d22b5c0aec5f3675991c0e3c3e6a44153db2579136b73f456e00",
|
||||||
|
"blk.26.attn_output.weight": "1c4f257ffb0d7db0f11cfb275e38b4af736917b43ad82de1badce3f1d227da4d",
|
||||||
|
"blk.26.attn_q.weight": "33d55786274c2e718cf61e8fbecf3dfa5ee0c208f0b716d42b061f55459acb3c",
|
||||||
|
"blk.26.attn_v.weight": "684b636939cd4ffcfec5a6238a0790ffa43d853c95783af9b9e8275e74071a7a",
|
||||||
|
"blk.26.ffn_down.weight": "89d0bf066db154e6d312b5433aed1714f6a28b40f4c52e3e1530ee07703303c8",
|
||||||
|
"blk.26.ffn_gate.weight": "393d649bebe5e2940e1b043649f6c860b4b8b9f380f30e9da1744a830f358156",
|
||||||
|
"blk.26.ffn_up.weight": "179edc85ababd9d8440cc6093eecd1004290aa1cb96434b26ecf7585b6cca17b",
|
||||||
|
"blk.27.attn_k.weight": "334841445a7f1e14731b08f56eb0b1f0938c63823d28bc6d078c4c5f05b36f19",
|
||||||
|
"blk.27.attn_norm.weight": "57344471bbda2e9deffdfdb2dd05a07aa47f8761e24de53525588639145bf551",
|
||||||
|
"blk.27.attn_output.weight": "506126af9ee54b535d49f97e36f630e74834f480329f098d6d62e96246d8d65a",
|
||||||
|
"blk.27.attn_q.weight": "dd984df1acb4783849e25ba7ae378bfd385cd9efc540fb798cd5bdd873f0118f",
|
||||||
|
"blk.27.attn_v.weight": "b4b3fe9a4455d34c297ff20a2f537b647cef424741d840a747b265f23d320ac0",
|
||||||
|
"blk.27.ffn_down.weight": "621fdb185ba0d35ba5476dae73d2c81ec1482a0e878d5bfd5c3b29fe837af013",
|
||||||
|
"blk.27.ffn_gate.weight": "e4fbab45f2ec506fa374103251a0bdb7baa6f576080bdd796f3e9db92098e08f",
|
||||||
|
"blk.27.ffn_up.weight": "a0c57e463e988002bbd6a6c6792baa21a65e6f89ae303a2c301951b0ae6e4bbe",
|
||||||
|
"blk.28.attn_k.weight": "bac36cbd52ec5056841663865e1291ddab4b47ef9a2544dd285d4503bfb0e4a0",
|
||||||
|
"blk.28.attn_norm.weight": "5774a9df2bbb2e86d1f70179c7b92d81e1f401160148b3328fb64db6646a5425",
|
||||||
|
"blk.28.attn_output.weight": "e8712622d1569557000c75f26c3f55fad267fd300463c2c2cfe3afbfa1c8f908",
|
||||||
|
"blk.28.attn_q.weight": "11677751fddee52cc739699c02836f7be54d96038be4240be5d4f53d00161608",
|
||||||
|
"blk.28.attn_v.weight": "e5ee459b8958d65e1445997b9aa1e90e2f5d17761ebcf5357313119a45322507",
|
||||||
|
"blk.28.ffn_down.weight": "3934518f9f85292da8475fe38a8edcbfc4e24ac56c351b472d6351f98750871e",
|
||||||
|
"blk.28.ffn_gate.weight": "6ba735d57e98d0847e487f25ffaa25256deaa8abec76f428cb70bd9774279d83",
|
||||||
|
"blk.28.ffn_up.weight": "977fae6e1e5353114fc645dd98429464749758765cbc6e6457593d596e57850c",
|
||||||
|
"blk.29.attn_k.weight": "8122a457307d580ad6f1e0acea09a2f593d97f595ba0d6737f5fea16d2433642",
|
||||||
|
"blk.29.attn_norm.weight": "d626f721e05aa1202439b01027031d4caf1adace61ed37870a277cb6297c77cc",
|
||||||
|
"blk.29.attn_output.weight": "7fb7122ab1b6b1e6615ca746897da27bc52c92cb70d3147183cdde61795b72b3",
|
||||||
|
"blk.29.attn_q.weight": "be43e94ff6b6e391024dc824101efa0ddf4005d5b002ac26cb03765c0c73c2fa",
|
||||||
|
"blk.29.attn_v.weight": "af93c85ebff908f74f9935b81bde0516ca487c84139868a1ce079c3ae20036b1",
|
||||||
|
"blk.29.ffn_down.weight": "39dae12340ed3120bd19c495fe0872b559613641e41fde69d02d8631900b84c0",
|
||||||
|
"blk.29.ffn_gate.weight": "36fd482439840ef197c9f3b8905d86acfcea49bcf018544106ca465d4bf8d5c7",
|
||||||
|
"blk.29.ffn_up.weight": "5243fbdfdc1e2a1dd84b6210a9869d18a014db9088897e345240cdc99990bd5d",
|
||||||
|
"blk.30.attn_k.weight": "948f263616bd3788b2b968baafd69b9c5bd1b77578665f096c4b7e247b4cea42",
|
||||||
|
"blk.30.attn_norm.weight": "e168df981e744874ff303faf2eb470e5f6868c2040ba5f383f6c5148669975e7",
|
||||||
|
"blk.30.attn_output.weight": "4cf0ccca04b792573b756655a24fc89cfb1f272da8305633f0bc66ef14990b93",
|
||||||
|
"blk.30.attn_q.weight": "21e07d6cba6c50d65350289258209717174a13c42be57e8141d69712cbaf32c1",
|
||||||
|
"blk.30.attn_v.weight": "65a8ca29c7237b3182ccf03e2fc94e84f9a53d0e160fb679ab401c853170dd9c",
|
||||||
|
"blk.30.ffn_down.weight": "8b00500a6d00d84058f6658ee1d6f06fb4fcae2f90d4341792259362923b3c13",
|
||||||
|
"blk.30.ffn_gate.weight": "5bc0e19ab7a31b50ac2118ad1b36e31055271a322cd8ff661d47c3ac0210703c",
|
||||||
|
"blk.30.ffn_up.weight": "f37a0561955725bd59ee2d064fa9f4e00a12a1b620b624db3bc3add5330bc321",
|
||||||
|
"blk.31.attn_k.weight": "9a5663edda227f5d87533897146764f8e8a7481b9e71fae197c39204f8463221",
|
||||||
|
"blk.31.attn_norm.weight": "060a4f438a1ee5e220b5b5278ad2f5c085a428bf38c515766781815597c87529",
|
||||||
|
"blk.31.attn_output.weight": "6ada5d3cad9dea4780ffbb43302bb6ccc2f24eddd0fc4f5f84c9ce0fc0c6e5dd",
|
||||||
|
"blk.31.attn_q.weight": "bb5d08c08603907981ad388d5d8b70fcc9b98034ba264b8474c8890cc0297af0",
|
||||||
|
"blk.31.attn_v.weight": "e01b4252ea9c6a889c32b21144b441a347464d04536ef4f6572425be55759796",
|
||||||
|
"blk.31.ffn_down.weight": "8ba4d679c36e93ba65ba03180385ef35ea86b3b7cdf2fded9df59369f1c09630",
|
||||||
|
"blk.31.ffn_gate.weight": "e5b41dc93645f8b5e8eebae3ada3ea43a18f97ce2654228655170b07b463ccb0",
|
||||||
|
"blk.31.ffn_up.weight": "25b88cdddc8b547af294ed107d3d1312e90b983cae87936fa6062ecd8ea02539",
|
||||||
|
"blk.32.attn_k.weight": "4bcf86dc0858c8ca2fbdf6aa76674d43eb698f78979fdc1a38f556a7af1facc4",
|
||||||
|
"blk.32.attn_norm.weight": "cdcc12f3b8b9773c6722736bfb748a2729230b21478cbcc4104859d3148df815",
|
||||||
|
"blk.32.attn_output.weight": "d43f1196822995ed89a9365c97054753a8b30ce20b6e273c8edcc42673a1e141",
|
||||||
|
"blk.32.attn_q.weight": "ebf2972bb3865cbc5be4840113a322089752038344beab2a0122c7cb4fb399b6",
|
||||||
|
"blk.32.attn_v.weight": "714db81704ff34fa137512903c1013acee7877467473e46600728b9240582eb7",
|
||||||
|
"blk.32.ffn_down.weight": "2cde3da1258bb170a79d5d3cdfe10c86a71eb34b77da46b74c5ed71e7f4fe274",
|
||||||
|
"blk.32.ffn_gate.weight": "c7e1ed792532613ff9d4e5834b6536e2e0f47df2303bc0fdaa90aac0c1f4e8db",
|
||||||
|
"blk.32.ffn_up.weight": "d8d6f13fe66a716e28f79101a29817f0c0d6f99969a6f017d51bafd1a16c600c",
|
||||||
|
"blk.33.attn_k.weight": "a0a28f6cbca88da00cab2ca37094d9b0503bf9defdae77b91895b911c408cbb6",
|
||||||
|
"blk.33.attn_norm.weight": "0251200c24cc8445607ace6dc8c5aa0566567997262b7cca53a11ac23cc564b2",
|
||||||
|
"blk.33.attn_output.weight": "b2423205bdf6a1096d43c44d8d12f1a84fcd4e1bb70fcf6dc8542b8b8a71a13c",
|
||||||
|
"blk.33.attn_q.weight": "00b425c3ef71065ce5e0234e702bf38143b4952da78a85f52ab2c2e3073d97ab",
|
||||||
|
"blk.33.attn_v.weight": "035edd2335df816c42c765a5e66b9d9b9e15a822a8dc1863508145499c942c14",
|
||||||
|
"blk.33.ffn_down.weight": "4894a923a3db75bae4496ba3ce5f28796ad31fe33996a066271fb8654964310e",
|
||||||
|
"blk.33.ffn_gate.weight": "8f6c819b8bbfbe3357fae89e1ac5a3d58be85b3b04be3bacf7b62775869046ff",
|
||||||
|
"blk.33.ffn_up.weight": "257c3544b5b544fd5d839665bf5caf107a329b59dbc3751efcaa24ae63c56179",
|
||||||
|
"blk.34.attn_k.weight": "b6cd8bba892e38dac4a2ebc3ba1bce49e71b967fc436fde30c6d76f54a18935f",
|
||||||
|
"blk.34.attn_norm.weight": "2b3c8e60a064cba9955752bbbbdd92c71ba5c2f1bd721097bdbe88b5abc68787",
|
||||||
|
"blk.34.attn_output.weight": "8cc272551c9aaca9db5a660c6927bab94a0243d74a30b2bc165f06bd577714ea",
|
||||||
|
"blk.34.attn_q.weight": "74b561eb4792484e6a94b58fe2583848c3ae28ff2f1bf3d02939a0cfdfa49990",
|
||||||
|
"blk.34.attn_v.weight": "dba19e24ff05154dc5a1f55c023729303a583d13d68732ce22ea74d4410dc8f0",
|
||||||
|
"blk.34.ffn_down.weight": "76eca5dfeb274c35774e0bf9f22ee420ed9085c8e99aa2cd5a236e4918b44c61",
|
||||||
|
"blk.34.ffn_gate.weight": "9af0862d5fcbc24732846488e653db8242a467765c0cdbc00332b3a40256b4a6",
|
||||||
|
"blk.34.ffn_up.weight": "2a03126bf73587eaba99ece2066103d12e47bcd4ce30ff6c17b2f383b81d40df",
|
||||||
|
"blk.35.attn_k.weight": "52513fc0cd4e997a842729af7d21dd09399bce0a339558374738be266d0fa2f0",
|
||||||
|
"blk.35.attn_norm.weight": "e5281fa911964263ccf1630b14762edbd41d0b9472d6ec695fc600fed4892c35",
|
||||||
|
"blk.35.attn_output.weight": "b391d6705d5dc6f48326b5fd16573f679edf64109d86fb729a498819676590ca",
|
||||||
|
"blk.35.attn_q.weight": "d16446921966db9b0e0539626ad22a2511ace780e59379d6a4162d8c5441440b",
|
||||||
|
"blk.35.attn_v.weight": "9d8cdf23ffdb0c5c74106843390b94b24c9f33ef0eb9998d39f78c73390101ea",
|
||||||
|
"blk.35.ffn_down.weight": "938eb6301f7bbf162d7dd965682a5ed11d0a4a530c6fedd7e5469ce80012fc17",
|
||||||
|
"blk.35.ffn_gate.weight": "5ad84f5a0c8edcfea1ecf1a3e3d21d85ceda0c4ad9e3c6ca68885eeff8ed3c2f",
|
||||||
|
"blk.35.ffn_up.weight": "1c4330d9dc71bf4c98812c34356c51f520f47610a534152aa6d29284b758090d",
|
||||||
|
"blk.36.attn_k.weight": "ef720655e5ca2465f13db2dfc4732fb4ef2c9d53acde52f514fd4f301e974081",
|
||||||
|
"blk.36.attn_norm.weight": "88f4b9310b3c8c2644e3029160cd35678c79dfa59280430e03f5c29a6fe84a58",
|
||||||
|
"blk.36.attn_output.weight": "aec6f915fffd7bb72cd783273e871b4f09605950089d45e72059d1316b6c4b01",
|
||||||
|
"blk.36.attn_q.weight": "72f9408a2405d42f8db6ce5fcf1d26a3660b6f225fc60e77d0277109cfcb82ed",
|
||||||
|
"blk.36.attn_v.weight": "0f3b3d851dc44b3893ef53f6cca5b4acc9658bacfe1cc2d13c3d704ddd409b67",
|
||||||
|
"blk.36.ffn_down.weight": "470aec48ce8c5129a6654d9fd26fcae72776f9fc1429a8bb05818072a876475d",
|
||||||
|
"blk.36.ffn_gate.weight": "7f5f296d09cf55679767b5d15de3eff489c456782119f25204be4b1647f18dcf",
|
||||||
|
"blk.36.ffn_up.weight": "b7ef74a1f7ffb4982711d93f1787be3a70edc3d2358d5203c41d8900508037d4",
|
||||||
|
"blk.37.attn_k.weight": "c4ffa5412e4ff2dcfe1aed991c1f54169fd171a4c7638e4b9f21a1ca64c5e1d6",
|
||||||
|
"blk.37.attn_norm.weight": "4eb6c888d841cccfacf5b963f8611120f6ff24b84af0b5714fd9ab36dcda422f",
|
||||||
|
"blk.37.attn_output.weight": "db2a7bbf9682f9f6eea672dae8e150738f1bf74dbc80edc7022017a3f040c8ac",
|
||||||
|
"blk.37.attn_q.weight": "e38c0462aff139afcbab289189823527e453abc9e541154adde5e7af88cacf0b",
|
||||||
|
"blk.37.attn_v.weight": "952eb2492ed452a72f96bcc12d4b2affad9dfdf46ee39ce4a5d7b57a5dc301e5",
|
||||||
|
"blk.37.ffn_down.weight": "25f23a8fbc44febf6dc4848fd7fe03a580e2822bd3b3b5a51f4990826bfe3e4e",
|
||||||
|
"blk.37.ffn_gate.weight": "707da5eb40118b035305d3262444382351f170a20a537386a70e90c5a83a7817",
|
||||||
|
"blk.37.ffn_up.weight": "d2d2ba5cfc4ef47338dd7384219e22bf030a5a2209e0354d88f5bbaaafd20e87",
|
||||||
|
"blk.38.attn_k.weight": "abc4bb189dedf7ce661e79028427623a4f91ac091c2cd60e31b58bc62b1cda71",
|
||||||
|
"blk.38.attn_norm.weight": "9f4803a7d03fd40fcb83d85f84eb1d5682ea4e5bb084f210c02850675d804c3d",
|
||||||
|
"blk.38.attn_output.weight": "77cb66007f1a41df7135d0e7f900ceb499c2f667dfc3f1a6ac01a3203bbd3ccf",
|
||||||
|
"blk.38.attn_q.weight": "d94a8b26cd375bf2bcaa76597e314aa8268ee50a479d00931e5e0e021feadb5d",
|
||||||
|
"blk.38.attn_v.weight": "660c907888bc5016dc69b7d35fe6f55c7ded697c93be0e2d332a2f17aff88758",
|
||||||
|
"blk.38.ffn_down.weight": "6f06173bae5b00ffaf88ef383619a8b9c6a8d0d5c6494695d17f6c1de1a68a13",
|
||||||
|
"blk.38.ffn_gate.weight": "89f99be149d03f116527bfcabe073c50001c874de40fb6e817f6619027f3cd05",
|
||||||
|
"blk.38.ffn_up.weight": "8d57557c8d5e2d2688b73f01dddf1ce8d5194990cda6358153320aea88aac7f8",
|
||||||
|
"blk.39.attn_k.weight": "21be09c988b46c8393e6c2ec9230f3b5136eb7607dd1953ba92d0811c2f0dd75",
|
||||||
|
"blk.39.attn_norm.weight": "ba7c1912dd1c4e2d16917201f62396fd0600e4a451137eaddff255548c209abd",
|
||||||
|
"blk.39.attn_output.weight": "acfaf4abb3fd27fd899b5563c3877f176b597d8f6cdb2f2fd3f3a0bd4da15ed6",
|
||||||
|
"blk.39.attn_q.weight": "e8adbc140d4c8f0db2a27ca584c5531d5b1e080555fe627e34d80d0814a92bed",
|
||||||
|
"blk.39.attn_v.weight": "92f96b0e1f724e73a0f90a76c145654418844c04a6d4b14c05eb5af8a62bf8dc",
|
||||||
|
"blk.39.ffn_down.weight": "4d9ee7c65fc16fe95d10c47b79ac6a525741947600a64b5fcea5d300a82c50de",
|
||||||
|
"blk.39.ffn_gate.weight": "7e18507989f39b32191133d2657c2ee3b74f42f070579204d727eb72215793d1",
|
||||||
|
"blk.39.ffn_up.weight": "22cda752269c9757ba918abede1df95bb0f83a5c772dea13c8deea3d5f2723d9",
|
||||||
|
"output_norm.weight": "2858cf0e39d32caf52b7861378ace076000241e147f10b9eb21d8a5cd149e3cb"
|
||||||
|
}
@@ -10,6 +10,7 @@ import (
 	"log/slog"
 	"os"
 	"slices"
+	"strings"
 
 	"golang.org/x/exp/maps"
 )
@@ -60,7 +61,25 @@ func parseTokenizer(fsys fs.FS, specialTokenTypes []string) (*Tokenizer, error)
 		addedTokens[t.Content] = t
 	}
 
-	t.Merges = tt.Model.Merges
+	if len(tt.Model.Merges) == 0 {
+		// noop; merges is empty
+	} else if err := json.Unmarshal(tt.Model.Merges, &t.Merges); err == nil {
+		// noop; merges is []string
+	} else if merges, err := func() ([][]string, error) {
+		var merges [][]string
+		if err := json.Unmarshal(tt.Model.Merges, &merges); err != nil {
+			return nil, err
+		}
+
+		return merges, nil
+	}(); err == nil {
+		t.Merges = make([]string, len(merges))
+		for i := range merges {
+			t.Merges[i] = strings.Join(merges[i], " ")
+		}
+	} else {
+		return nil, fmt.Errorf("could not parse tokenizer merges. expected []string or [][]string: %w", err)
+	}
 
 	sha256sum := sha256.New()
 	for _, pt := range tt.PreTokenizer.PreTokenizers {
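
The hunk above lets the converter accept both shapes that tokenizer.json files use for "merges": a flat []string such as ["a b", ...] and a nested [][]string such as [["a", "b"], ...], which is why the struct field further down switches to json.RawMessage. A minimal, self-contained sketch of the same decoding strategy (illustrative names, not the repository's code):

package main

import (
    "encoding/json"
    "fmt"
    "strings"
)

// decodeMerges defers parsing of the raw "merges" value: it tries the flat
// []string form first, then the [][]string pair form, joining each pair
// with a single space.
func decodeMerges(raw json.RawMessage) ([]string, error) {
    if len(raw) == 0 {
        return nil, nil // no merges present
    }

    var flat []string
    if err := json.Unmarshal(raw, &flat); err == nil {
        return flat, nil // already ["a b", ...]
    }

    var pairs [][]string
    if err := json.Unmarshal(raw, &pairs); err != nil {
        return nil, fmt.Errorf("expected []string or [][]string: %w", err)
    }

    merges := make([]string, len(pairs))
    for i, p := range pairs {
        merges[i] = strings.Join(p, " ")
    }
    return merges, nil
}

func main() {
    for _, src := range []string{`["a b", "b c"]`, `[["a", "b"], ["b", "c"]]`} {
        merges, err := decodeMerges(json.RawMessage(src))
        fmt.Println(merges, err) // both inputs yield [a b b c] <nil>
    }
}

Either encoding ends up as the space-joined strings used downstream.
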
@@ -81,6 +100,8 @@ func parseTokenizer(fsys fs.FS, specialTokenTypes []string) (*Tokenizer, error)
 		t.Pre = "deepseek-llm"
 	case "21cde974d587f0d54dc8d56b183cc1e6239600172035c68fbd6d4b9f8da0576e":
 		t.Pre = "deepseek-coder"
+	case "1ff7f41064896984db5d1bb6ff64fa4bc29007d08c1b439e505b7392777a319e":
+		t.Pre = "qwen2"
 	case "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855":
 		// noop, empty pretokenizer
 	default:
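
The switch above keys on a SHA-256 fingerprint of the pre-tokenizer configuration; the new case maps Qwen2-style configurations to the "qwen2" pre-tokenizer. As a small aside (independent of exactly which bytes the converter feeds into the hash), the existing "noop" case is simply the digest of zero bytes, i.e. what the fingerprint collapses to when a model defines no pre-tokenizers:

package main

import (
    "crypto/sha256"
    "encoding/hex"
    "fmt"
)

func main() {
    // SHA-256 of empty input.
    sum := sha256.Sum256(nil)
    fmt.Println(hex.EncodeToString(sum[:]))
    // e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
}
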
@@ -156,9 +177,9 @@ func parseTokenizer(fsys fs.FS, specialTokenTypes []string) (*Tokenizer, error)
 type tokenizer struct {
 	AddedTokens []token `json:"added_tokens"`
 	Model       struct {
 		Type   string         `json:"type"`
 		Vocab  map[string]int `json:"vocab"`
-		Merges []string       `json:"merges"`
+		Merges json.RawMessage `json:"merges"`
 	} `json:"model"`
 
 	PreTokenizer struct {
@@ -6,7 +6,9 @@ import (
 	"errors"
 	"fmt"
 	"io/fs"
+	"log/slog"
 	"os"
+	"reflect"
 	"slices"
 
 	"google.golang.org/protobuf/proto"
@@ -15,6 +17,8 @@ import (
 )
 
 func parseSentencePiece(fsys fs.FS) (*Vocabulary, error) {
+	slog.Debug("using spm vocabulary")
+
 	ast, err := parseAdditionalSpecialTokens(fsys)
 	if err != nil {
 		return nil, err
@@ -43,10 +47,19 @@ func parseSentencePiece(fsys fs.FS) (*Vocabulary, error) {
 			v.Types = append(v.Types, int32(t))
 		default:
 			tt := int32(sentencepiece.ModelProto_SentencePiece_NORMAL)
-			if slices.Contains(ast, piece.GetPiece()) {
+
+			// temporary fix to handle gemma3 broken configs
+			if slices.Contains([]string{"<end_of_turn>", "<start_of_turn>"}, piece.GetPiece()) {
 				tt = int32(sentencepiece.ModelProto_SentencePiece_CONTROL)
 			}
+
+			for _, t := range ast {
+				if t.Content == piece.GetPiece() {
+					tt = int32(sentencepiece.ModelProto_SentencePiece_CONTROL)
+					break
+				}
+			}
+
 			v.Types = append(v.Types, tt)
 		}
 	}
@@ -78,10 +91,16 @@ func parseSentencePiece(fsys fs.FS) (*Vocabulary, error) {
 		return cmp.Compare(i.id, j.id)
 	})
 
-	n := len(v.Tokens)
-	for i, t := range ts {
-		if t.id != i+n {
-			return nil, fmt.Errorf("invalid token id: %d", t.id)
+	for _, t := range ts {
+		if t.id < len(v.Tokens) {
+			if v.Tokens[t.id] == t.content {
+				slog.Warn("tokenizer", "duplicate token", t.content, "id", t.id)
+				continue
+			}
+			return nil, fmt.Errorf("token mismatch: %s != %s at pos [%d]", t.content, v.Tokens[t.id], t.id)
+		}
+		if t.id != len(v.Tokens) {
+			return nil, fmt.Errorf("invalid token id: [%d] as pos [%d]", t.id, len(v.Tokens))
 		}
 
 		v.Tokens = append(v.Tokens, t.content)
@@ -92,7 +111,15 @@ func parseSentencePiece(fsys fs.FS) (*Vocabulary, error) {
 	return &v, nil
 }
 
-func parseAdditionalSpecialTokens(fsys fs.FS) ([]string, error) {
+type specialToken struct {
+	Content    string `json:"content"`
+	Lstrip     bool   `json:"lstrip"`
+	Normalized bool   `json:"normalized"`
+	Rstrip     bool   `json:"rstrip"`
+	SingleWord bool   `json:"single_word"`
+}
+
+func parseAdditionalSpecialTokens(fsys fs.FS) ([]specialToken, error) {
 	f, err := fsys.Open("special_tokens_map.json")
 	if errors.Is(err, os.ErrNotExist) {
 		return nil, nil
@@ -102,12 +129,43 @@ func parseAdditionalSpecialTokens(fsys fs.FS) ([]string, error) {
 	defer f.Close()
 
 	var m struct {
-		AdditionalSpecialTokens []string `json:"additional_special_tokens"`
+		AdditionalSpecialTokens any `json:"additional_special_tokens"`
 	}
 
 	if err := json.NewDecoder(f).Decode(&m); err != nil {
 		return nil, err
 	}
 
-	return m.AdditionalSpecialTokens, nil
+	var ast []specialToken
+
+	switch st := m.AdditionalSpecialTokens.(type) {
+	case []string:
+		for _, s := range st {
+			ast = append(ast, specialToken{Content: s})
+		}
+	case []any:
+		for _, s := range st {
+			// marshal and unmarshal the object to get the special token
+			tMap := s.(map[string]any)
+			data, err := json.Marshal(tMap)
+			if err != nil {
+				return nil, err
+			}
+
+			var token specialToken
+			err = json.Unmarshal(data, &token)
+			if err != nil {
+				return nil, err
+			}
+
+			ast = append(ast, token)
+		}
+
+	default:
+		slog.Warn("special token", "unknown token", reflect.TypeOf(st))
+	}
+
+	slog.Debug("spm tokenizer", "additional tokens", ast)
+
+	return ast, nil
 }
@@ -191,6 +191,62 @@ func TestParseTokenizer(t *testing.T) {
 				Pre: "default",
 			},
 		},
+		{
+			name: "list string merges",
+			fsys: createTokenizerFS(t, t.TempDir(), map[string]io.Reader{
+				"tokenizer.json": strings.NewReader(`{
+					"model": {
+						"merges": [
+							"a b",
+							"c d",
+							"e f"
+						]
+					}
+				}`),
+			}),
+			want: &Tokenizer{
+				Vocabulary: &Vocabulary{
+					Model: "gpt2",
+				},
+				Merges: []string{
+					"a b",
+					"c d",
+					"e f",
+				},
+				Pre: "default",
+			},
+		},
+		{
+			name: "list list string merges",
+			fsys: createTokenizerFS(t, t.TempDir(), map[string]io.Reader{
+				"tokenizer.json": strings.NewReader(`{
+					"model": {
+						"merges": [
+							[
+								"a", "b"
+							],
+							[
+								"c", "d"
+							],
+							[
+								"e", "f"
+							]
+						]
+					}
+				}`),
+			}),
+			want: &Tokenizer{
+				Vocabulary: &Vocabulary{
+					Model: "gpt2",
+				},
+				Merges: []string{
+					"a b",
+					"c d",
+					"e f",
+				},
+				Pre: "default",
+			},
+		},
 	}
 
 	for _, tt := range cases {
@@ -9,8 +9,6 @@ import (
 	"path/filepath"
 	"runtime"
 	"strings"
-
-	"github.com/ollama/ollama/envconfig"
 )
 
 // Determine if the given ROCm lib directory is usable by checking for existence of some glob patterns
@@ -37,30 +35,14 @@ func GetSupportedGFX(libDir string) ([]string, error) {
 	return ret, nil
 }
 
-func rocmGetVisibleDevicesEnv(gpuInfo []GpuInfo) (string, string) {
-	ids := []string{}
-	for _, info := range gpuInfo {
-		if info.Library != "rocm" {
-			// TODO shouldn't happen if things are wired correctly...
-			slog.Debug("rocmGetVisibleDevicesEnv skipping over non-rocm device", "library", info.Library)
-			continue
-		}
-		ids = append(ids, info.ID)
-	}
-	return "HIP_VISIBLE_DEVICES", strings.Join(ids, ",")
-}
-
 func commonAMDValidateLibDir() (string, error) {
 	// Favor our bundled version
 
 	// Installer payload location if we're running the installed binary
-	exe, err := os.Executable()
-	if err == nil {
-		rocmTargetDir := filepath.Join(filepath.Dir(exe), envconfig.LibRelativeToExe(), "lib", "ollama")
-		if rocmLibUsable(rocmTargetDir) {
-			slog.Debug("detected ROCM next to ollama executable " + rocmTargetDir)
-			return rocmTargetDir, nil
-		}
+	rocmTargetDir := filepath.Join(LibOllamaPath, "rocm")
+	if rocmLibUsable(rocmTargetDir) {
+		slog.Debug("detected ROCM next to ollama executable " + rocmTargetDir)
+		return rocmTargetDir, nil
 	}
 
 	// Prefer explicit HIP env var
@@ -64,7 +64,7 @@ func NewHipLib() (*HipLib, error) {
 	return hl, nil
 }
 
-// The hip library only evaluates the HIP_VISIBLE_DEVICES variable at startup
+// The hip library only evaluates the ROCR_VISIBLE_DEVICES variable at startup
 // so we have to unload/reset the library after we do our initial discovery
 // to make sure our updates to that variable are processed by llama.cpp
 func (hl *HipLib) Release() {
@@ -64,23 +64,20 @@ func AMDGetGPUInfo() ([]RocmGPUInfo, error) {
 	// Determine if the user has already pre-selected which GPUs to look at, then ignore the others
 	var visibleDevices []string
 	hipVD := envconfig.HipVisibleDevices()   // zero based index only
-	rocrVD := envconfig.RocrVisibleDevices() // zero based index or UUID, but consumer cards seem to not support UUID
+	rocrVD := envconfig.RocrVisibleDevices() // zero based index or UUID
 	gpuDO := envconfig.GpuDeviceOrdinal()    // zero based index
 	switch {
-	// TODO is this priorty order right?
-	case hipVD != "":
-		visibleDevices = strings.Split(hipVD, ",")
 	case rocrVD != "":
 		visibleDevices = strings.Split(rocrVD, ",")
-		// TODO - since we don't yet support UUIDs, consider detecting and reporting here
-		// all our test systems show GPU-XX indicating UUID is not supported
+	case hipVD != "":
+		visibleDevices = strings.Split(hipVD, ",")
 	case gpuDO != "":
 		visibleDevices = strings.Split(gpuDO, ",")
 	}
 
 	gfxOverride := envconfig.HsaOverrideGfxVersion()
 	var supported []string
-	libDir := ""
+	var libDir string
 
 	// The amdgpu driver always exposes the host CPU(s) first, but we have to skip them and subtract
 	// from the other IDs to get alignment with the HIP libraries expectations (zero is the first GPU, not the CPU)
@@ -99,7 +96,7 @@ func AMDGetGPUInfo() ([]RocmGPUInfo, error) {
 		}
 		return a < b
 	})
-	cpuCount := 0
+	gpuCount := 0
 	for _, match := range matches {
 		slog.Debug("evaluating amdgpu node " + match)
 		fp, err := os.Open(match)
@@ -108,11 +105,6 @@ func AMDGetGPUInfo() ([]RocmGPUInfo, error) {
 			continue
 		}
 		defer fp.Close()
-		nodeID, err := strconv.Atoi(filepath.Base(filepath.Dir(match)))
-		if err != nil {
-			slog.Debug("failed to parse node ID", "error", err)
-			continue
-		}
 
 		scanner := bufio.NewScanner(fp)
 		isCPU := false
@@ -186,20 +178,19 @@ func AMDGetGPUInfo() ([]RocmGPUInfo, error) {
 		// do reliably report VRAM usage.
 
 		if isCPU {
-			cpuCount++
 			continue
 		}
 
-		// CPUs are always first in the list
-		gpuID := nodeID - cpuCount
-
-		// Shouldn't happen, but just in case...
-		if gpuID < 0 {
-			err := fmt.Errorf("unexpected amdgpu sysfs data resulted in negative GPU ID, please set OLLAMA_DEBUG=1 and report an issue")
-			slog.Error(err.Error())
-			return nil, err
+		// Skip over any GPUs that are masked
+		if major == 0 && minor == 0 && patch == 0 {
+			slog.Debug("skipping gpu with gfx000")
+			continue
 		}
 
+		// Keep track of numeric IDs based on valid GPUs
+		gpuID := gpuCount
+		gpuCount += 1
+
 		// Look up the memory for the current node
 		totalMemory := uint64(0)
 		usedMemory := uint64(0)
@@ -273,6 +264,14 @@ func AMDGetGPUInfo() ([]RocmGPUInfo, error) {
 			name = fmt.Sprintf("%04x:%04x", vendor, device)
 		}
 
+		// Favor UUIDs if available to reduce possibility of getting the numeric IDs wrong
+		var ID string
+		if uniqueID != 0 {
+			ID = fmt.Sprintf("GPU-%016x", uniqueID)
+		} else {
+			ID = strconv.Itoa(gpuID)
+		}
+
 		gpuInfo := RocmGPUInfo{
 			GpuInfo: GpuInfo{
 				Library: "rocm",
@@ -280,7 +279,7 @@ func AMDGetGPUInfo() ([]RocmGPUInfo, error) {
 				TotalMemory: totalMemory,
 				FreeMemory:  (totalMemory - usedMemory),
 			},
-			ID:            strconv.Itoa(gpuID),
+			ID:            ID,
 			Name:          name,
 			Compute:       fmt.Sprintf("gfx%d%x%x", major, minor, patch),
 			MinimumMemory: rocmMinimumMemory,
@@ -288,6 +287,7 @@ func AMDGetGPUInfo() ([]RocmGPUInfo, error) {
 			DriverMinor: driverMinor,
 			},
 			usedFilepath: usedFile,
+			index:        gpuID,
 		}
 
 		// iGPU detection, remove this check once we can support an iGPU variant of the rocm library
@@ -300,8 +300,11 @@ func AMDGetGPUInfo() ([]RocmGPUInfo, error) {
 			})
 			continue
 		}
-		if int(major) < RocmComputeMin {
+		minVer, err := strconv.Atoi(RocmComputeMajorMin)
+		if err != nil {
+			slog.Error("invalid RocmComputeMajorMin setting", "value", RocmComputeMajorMin, "error", err)
+		}
+		if int(major) < minVer {
 			reason := fmt.Sprintf("amdgpu too old gfx%d%x%x", major, minor, patch)
 			slog.Warn(reason, "gpu", gpuID)
 			unsupportedGPUs = append(unsupportedGPUs, UnsupportedGPUInfo{
@@ -319,7 +322,7 @@ func AMDGetGPUInfo() ([]RocmGPUInfo, error) {
 		if len(visibleDevices) > 0 {
 			include := false
 			for _, visible := range visibleDevices {
-				if visible == gpuInfo.ID {
+				if visible == gpuInfo.ID || visible == strconv.Itoa(gpuInfo.index) {
 					include = true
 					break
 				}
@@ -350,7 +353,7 @@ func AMDGetGPUInfo() ([]RocmGPUInfo, error) {
 				return nil, err
 			}
 		}
-		gpuInfo.DependencyPath = libDir
+		gpuInfo.DependencyPath = []string{libDir}
 
 		if gfxOverride == "" {
 			// Only load supported list once
@@ -516,3 +519,20 @@ func verifyKFDDriverAccess() error {
 	fd.Close()
 	return nil
 }
+
+func rocmGetVisibleDevicesEnv(gpuInfo []GpuInfo) (string, string) {
+	ids := []string{}
+	for _, info := range gpuInfo {
+		if info.Library != "rocm" {
+			// TODO shouldn't happen if things are wired correctly...
+			slog.Debug("rocmGetVisibleDevicesEnv skipping over non-rocm device", "library", info.Library)
+			continue
+		}
+		ids = append(ids, info.ID)
+	}
+	// There are 3 potential env vars to use to select GPUs.
+	// ROCR_VISIBLE_DEVICES supports UUID or numeric so is our preferred on linux
+	// GPU_DEVICE_ORDINAL supports numeric IDs only
+	// HIP_VISIBLE_DEVICES supports numeric IDs only
+	return "ROCR_VISIBLE_DEVICES", strings.Join(ids, ",")
+}
@@ -5,7 +5,6 @@ import (
 	"errors"
 	"fmt"
 	"log/slog"
-	"os"
 	"path/filepath"
 	"slices"
 	"strconv"
@@ -43,13 +42,14 @@ func AMDGetGPUInfo() ([]RocmGPUInfo, error) {
 		slog.Debug("error looking up amd driver version", "error", err)
 	}
 
-	// Note: the HIP library automatically handles subsetting to any HIP_VISIBLE_DEVICES the user specified
+	// Note: the HIP library automatically handles subsetting to any *_VISIBLE_DEVICES the user specified
 	count := hl.HipGetDeviceCount()
 	if count == 0 {
 		err := fmt.Errorf("no compatible amdgpu devices detected")
 		slog.Info(err.Error())
 		return nil, err
 	}
 
 	libDir, err := AMDValidateLibDir()
 	if err != nil {
 		err = fmt.Errorf("unable to verify rocm library: %w", err)
@@ -111,7 +111,7 @@ func AMDGetGPUInfo() ([]RocmGPUInfo, error) {
 			UnreliableFreeMemory: true,
 
 			ID:             strconv.Itoa(i), // TODO this is probably wrong if we specify visible devices
-			DependencyPath: libDir,
+			DependencyPath: []string{libDir},
 			MinimumMemory:  rocmMinimumMemory,
 			Name:           name,
 			Compute:        gfx,
@@ -162,9 +162,7 @@ func AMDValidateLibDir() (string, error) {
 	}
 
 	// Installer payload (if we're running from some other location)
-	localAppData := os.Getenv("LOCALAPPDATA")
-	appDir := filepath.Join(localAppData, "Programs", "Ollama")
-	rocmTargetDir := filepath.Join(appDir, envconfig.LibRelativeToExe(), "lib", "ollama")
+	rocmTargetDir := filepath.Join(LibOllamaPath, "rocm")
 	if rocmLibUsable(rocmTargetDir) {
 		slog.Debug("detected ollama installed ROCm at " + rocmTargetDir)
 		return rocmTargetDir, nil
@@ -182,7 +180,7 @@ func (gpus RocmGPUInfoList) RefreshFreeMemory() error {
 	hl, err := NewHipLib()
 	if err != nil {
 		slog.Debug(err.Error())
-		return nil
+		return err
 	}
 	defer hl.Release()
 
@@ -201,3 +199,20 @@ func (gpus RocmGPUInfoList) RefreshFreeMemory() error {
 	}
 	return nil
 }
+
+func rocmGetVisibleDevicesEnv(gpuInfo []GpuInfo) (string, string) {
+	ids := []string{}
+	for _, info := range gpuInfo {
+		if info.Library != "rocm" {
+			// TODO shouldn't happen if things are wired correctly...
+			slog.Debug("rocmGetVisibleDevicesEnv skipping over non-rocm device", "library", info.Library)
+			continue
+		}
+		ids = append(ids, info.ID)
+	}
+	// There are 3 potential env vars to use to select GPUs.
+	// ROCR_VISIBLE_DEVICES supports UUID or numeric but does not work on Windows
+	// HIP_VISIBLE_DEVICES supports numeric IDs only
+	// GPU_DEVICE_ORDINAL supports numeric IDs only
+	return "HIP_VISIBLE_DEVICES", strings.Join(ids, ",")
+}
@@ -5,21 +5,8 @@ import (
 	"path/filepath"
 	"runtime"
 	"strings"
-
-	"golang.org/x/sys/cpu"
 )
 
-func GetCPUCapability() CPUCapability {
-	if cpu.X86.HasAVX2 {
-		return CPUCapabilityAVX2
-	}
-	if cpu.X86.HasAVX {
-		return CPUCapabilityAVX
-	}
-	// else LCD
-	return CPUCapabilityNone
-}
-
 func IsNUMA() bool {
 	if runtime.GOOS != "linux" {
 		// numa support in llama.cpp is linux only
@@ -57,7 +57,8 @@ func cudaVariant(gpuInfo CudaGPUInfo) string {
 		}
 	}
 
-	if gpuInfo.computeMajor < 6 || gpuInfo.DriverMajor < 12 || (gpuInfo.DriverMajor == 12 && gpuInfo.DriverMinor == 0) {
+	// driver 12.0 has problems with the cuda v12 library, so run v11 on those older drivers
+	if gpuInfo.DriverMajor < 12 || (gpuInfo.DriverMajor == 12 && gpuInfo.DriverMinor == 0) {
 		return "v11"
 	}
 	return "v12"
128	discover/gpu.go
@@ -16,6 +16,7 @@ import (
 	"os"
 	"path/filepath"
 	"runtime"
+	"strconv"
 	"strings"
 	"sync"
 	"unsafe"
@@ -45,7 +46,6 @@ const (
 var (
 	gpuMutex      sync.Mutex
 	bootstrapped  bool
-	cpuCapability CPUCapability
 	cpus          []CPUInfo
 	cudaGPUs      []CudaGPUInfo
 	nvcudaLibPath string
@@ -64,9 +64,13 @@ var (
 )
 
 // With our current CUDA compile flags, older than 5.0 will not work properly
-var CudaComputeMin = [2]C.int{5, 0}
+// (string values used to allow ldflags overrides at build time)
+var (
+	CudaComputeMajorMin = "5"
+	CudaComputeMinorMin = "0"
+)
 
-var RocmComputeMin = 9
+var RocmComputeMajorMin = "9"
 
 // TODO find a better way to detect iGPU instead of minimum memory
 const IGPUMemLimit = 1 * format.GibiByte // 512G is what they typically report, so anything less than 1G must be iGPU
@@ -96,15 +100,7 @@ func initCudaHandles() *cudaHandles {
 
 	// Aligned with driver, we can't carry as payloads
 	nvcudaMgmtPatterns := NvcudaGlobs
-	if runtime.GOOS == "windows" {
-		localAppData := os.Getenv("LOCALAPPDATA")
-		cudartMgmtPatterns = []string{filepath.Join(localAppData, "Programs", "Ollama", CudartMgmtName)}
-	}
-	libDir := LibraryDir()
-	if libDir != "" {
-		cudartMgmtPatterns = []string{filepath.Join(libDir, CudartMgmtName)}
-	}
+	cudartMgmtPatterns = append(cudartMgmtPatterns, filepath.Join(LibOllamaPath, "cuda_v*", CudartMgmtName))
 	cudartMgmtPatterns = append(cudartMgmtPatterns, CudartGlobs...)
 
 	if len(NvmlGlobs) > 0 {
@@ -219,16 +215,23 @@ func GetGPUInfo() GpuInfoList {
 
 	if !bootstrapped {
 		slog.Info("looking for compatible GPUs")
+		cudaComputeMajorMin, err := strconv.Atoi(CudaComputeMajorMin)
+		if err != nil {
+			slog.Error("invalid CudaComputeMajorMin setting", "value", CudaComputeMajorMin, "error", err)
+		}
+		cudaComputeMinorMin, err := strconv.Atoi(CudaComputeMinorMin)
+		if err != nil {
+			slog.Error("invalid CudaComputeMinorMin setting", "value", CudaComputeMinorMin, "error", err)
+		}
 		bootstrapErrors = []error{}
 		needRefresh = false
-		cpuCapability = GetCPUCapability()
 		var memInfo C.mem_info_t
 
 		mem, err := GetCPUMem()
 		if err != nil {
 			slog.Warn("error looking up system memory", "error", err)
 		}
-		depPath := LibraryDir()
 		details, err := GetCPUDetails()
 		if err != nil {
 			slog.Warn("failed to lookup CPU details", "error", err)
@@ -236,26 +239,14 @@ func GetGPUInfo() GpuInfoList {
 		cpus = []CPUInfo{
 			{
 				GpuInfo: GpuInfo{
 					memInfo: mem,
 					Library: "cpu",
-					Variant:        cpuCapability.String(),
-					ID:             "0",
-					DependencyPath: depPath,
+					ID:      "0",
 				},
 				CPUs: details,
 			},
 		}
 
-		// Fallback to CPU mode if we're lacking required vector extensions on x86
-		if cpuCapability < GPURunnerCPUCapability && runtime.GOARCH == "amd64" {
-			err := fmt.Errorf("CPU does not have minimum vector extensions, GPU inference disabled. Required:%s Detected:%s", GPURunnerCPUCapability, cpuCapability)
-			slog.Warn(err.Error())
-			bootstrapErrors = append(bootstrapErrors, err)
-			bootstrapped = true
-			// No need to do any GPU discovery, since we can't run on them
-			return GpuInfoList{cpus[0].GpuInfo}
-		}
-
 		// Load ALL libraries
 		cHandles = initCudaHandles()
 
@@ -292,19 +283,19 @@ func GetGPUInfo() GpuInfoList {
 				gpuInfo.DriverMajor = driverMajor
 				gpuInfo.DriverMinor = driverMinor
 				variant := cudaVariant(gpuInfo)
-				if depPath != "" {
-					gpuInfo.DependencyPath = depPath
-					// Check for variant specific directory
-					if variant != "" {
-						if _, err := os.Stat(filepath.Join(depPath, "cuda_"+variant)); err == nil {
-							gpuInfo.DependencyPath = filepath.Join(depPath, "cuda_"+variant)
-						}
+
+				// Start with our bundled libraries
+				if variant != "" {
+					variantPath := filepath.Join(LibOllamaPath, "cuda_"+variant)
+					if _, err := os.Stat(variantPath); err == nil {
+						// Put the variant directory first in the search path to avoid runtime linking to the wrong library
+						gpuInfo.DependencyPath = append([]string{variantPath}, gpuInfo.DependencyPath...)
 					}
 				}
 				gpuInfo.Name = C.GoString(&memInfo.gpu_name[0])
 				gpuInfo.Variant = variant
 
-				if memInfo.major < CudaComputeMin[0] || (memInfo.major == CudaComputeMin[0] && memInfo.minor < CudaComputeMin[1]) {
+				if int(memInfo.major) < cudaComputeMajorMin || (int(memInfo.major) == cudaComputeMajorMin && int(memInfo.minor) < cudaComputeMinorMin) {
 					unsupportedGPUs = append(unsupportedGPUs,
 						UnsupportedGPUInfo{
 							GpuInfo: gpuInfo.GpuInfo,
@@ -316,7 +307,9 @@ func GetGPUInfo() GpuInfoList {
 				// query the management library as well so we can record any skew between the two
 				// which represents overhead on the GPU we must set aside on subsequent updates
 				if cHandles.nvml != nil {
-					C.nvml_get_free(*cHandles.nvml, C.int(gpuInfo.index), &memInfo.free, &memInfo.total, &memInfo.used)
+					uuid := C.CString(gpuInfo.ID)
+					defer C.free(unsafe.Pointer(uuid))
+					C.nvml_get_free(*cHandles.nvml, uuid, &memInfo.free, &memInfo.total, &memInfo.used)
 					if memInfo.err != nil {
 						slog.Warn("error looking up nvidia GPU memory", "error", C.GoString(memInfo.err))
 						C.free(unsafe.Pointer(memInfo.err))
@@ -368,7 +361,7 @@ func GetGPUInfo() GpuInfoList {
 				gpuInfo.FreeMemory = uint64(memInfo.free)
 				gpuInfo.ID = C.GoString(&memInfo.gpu_id[0])
 				gpuInfo.Name = C.GoString(&memInfo.gpu_name[0])
-				gpuInfo.DependencyPath = depPath
+				gpuInfo.DependencyPath = []string{LibOllamaPath}
 				oneapiGPUs = append(oneapiGPUs, gpuInfo)
 			}
 		}
@@ -383,6 +376,8 @@ func GetGPUInfo() GpuInfoList {
 		if len(cudaGPUs) == 0 && len(rocmGPUs) == 0 && len(oneapiGPUs) == 0 {
 			slog.Info("no compatible GPUs were discovered")
 		}
+
+		// TODO verify we have runners for the discovered GPUs, filter out any that aren't supported with good error messages
 	}
 
 	// For detected GPUs, load library if not loaded
@@ -417,7 +412,9 @@ func GetGPUInfo() GpuInfoList {
 		}
 		for i, gpu := range cudaGPUs {
 			if cHandles.nvml != nil {
-				C.nvml_get_free(*cHandles.nvml, C.int(gpu.index), &memInfo.free, &memInfo.total, &memInfo.used)
+				uuid := C.CString(gpu.ID)
+				defer C.free(unsafe.Pointer(uuid))
+				C.nvml_get_free(*cHandles.nvml, uuid, &memInfo.free, &memInfo.total, &memInfo.used)
 			} else if cHandles.cudart != nil {
 				C.cudart_bootstrap(*cHandles.cudart, C.int(gpu.index), &memInfo)
 			} else if cHandles.nvcuda != nil {
@@ -500,34 +497,33 @@ func GetGPUInfo() GpuInfoList {
 
 func FindGPULibs(baseLibName string, defaultPatterns []string) []string {
 	// Multiple GPU libraries may exist, and some may not work, so keep trying until we exhaust them
-	var ldPaths []string
 	gpuLibPaths := []string{}
 	slog.Debug("Searching for GPU library", "name", baseLibName)
 
-	// Start with our bundled libraries
-	patterns := []string{filepath.Join(LibraryDir(), baseLibName)}
+	// search our bundled libraries first
+	patterns := []string{filepath.Join(LibOllamaPath, baseLibName)}
 
+	var ldPaths []string
 	switch runtime.GOOS {
 	case "windows":
-		ldPaths = strings.Split(os.Getenv("PATH"), ";")
+		ldPaths = strings.Split(os.Getenv("PATH"), string(os.PathListSeparator))
 	case "linux":
-		ldPaths = strings.Split(os.Getenv("LD_LIBRARY_PATH"), ":")
-	default:
-		return gpuLibPaths
+		ldPaths = strings.Split(os.Getenv("LD_LIBRARY_PATH"), string(os.PathListSeparator))
 	}
 
-	// Then with whatever we find in the PATH/LD_LIBRARY_PATH
-	for _, ldPath := range ldPaths {
-		d, err := filepath.Abs(ldPath)
+	// then search the system's LD_LIBRARY_PATH
+	for _, p := range ldPaths {
+		p, err := filepath.Abs(p)
 		if err != nil {
 			continue
 		}
-		patterns = append(patterns, filepath.Join(d, baseLibName))
+		patterns = append(patterns, filepath.Join(p, baseLibName))
 	}
 
+	// finally, search the default patterns provided by the caller
 	patterns = append(patterns, defaultPatterns...)
 	slog.Debug("gpu library search", "globs", patterns)
 	for _, pattern := range patterns {
 
 		// Nvidia PhysX known to return bogus results
 		if strings.Contains(pattern, "PhysX") {
 			slog.Debug("skipping PhysX cuda library path", "path", pattern)
@@ -701,34 +697,6 @@ func (l GpuInfoList) GetVisibleDevicesEnv() (string, string) {
 	}
 }
 
-func LibraryDir() string {
-	// On Windows/linux we bundle the dependencies at the same level as the executable
-	appExe, err := os.Executable()
-	if err != nil {
-		slog.Warn("failed to lookup executable path", "error", err)
-	}
-	cwd, err := os.Getwd()
-	if err != nil {
-		slog.Warn("failed to lookup working directory", "error", err)
-	}
-	// Scan for any of our dependeices, and pick first match
-	for _, root := range []string{filepath.Dir(appExe), filepath.Join(filepath.Dir(appExe), envconfig.LibRelativeToExe()), cwd} {
-		libDep := filepath.Join("lib", "ollama")
-		if _, err := os.Stat(filepath.Join(root, libDep)); err == nil {
-			return filepath.Join(root, libDep)
-		}
-		// Developer mode, local build
-		if _, err := os.Stat(filepath.Join(root, runtime.GOOS+"-"+runtime.GOARCH, libDep)); err == nil {
-			return filepath.Join(root, runtime.GOOS+"-"+runtime.GOARCH, libDep)
-		}
-		if _, err := os.Stat(filepath.Join(root, "dist", runtime.GOOS+"-"+runtime.GOARCH, libDep)); err == nil {
-			return filepath.Join(root, "dist", runtime.GOOS+"-"+runtime.GOARCH, libDep)
-		}
-	}
-	slog.Warn("unable to locate gpu dependency libraries")
-	return ""
-}
-
 func GetSystemInfo() SystemInfo {
 	gpus := GetGPUInfo()
 	gpuMutex.Lock()
@@ -27,7 +27,6 @@ func GetGPUInfo() GpuInfoList {
 	return []GpuInfo{
 		{
 			Library: "cpu",
-			Variant: GetCPUCapability().String(),
 			memInfo: mem,
 		},
 	}
@@ -50,7 +49,6 @@ func GetCPUInfo() GpuInfoList {
 	return []GpuInfo{
 		{
 			Library: "cpu",
-			Variant: GetCPUCapability().String(),
 			memInfo: mem,
 		},
 	}
@@ -4,6 +4,7 @@
 #include "gpu_info_nvcuda.h"
 
 void nvcuda_init(char *nvcuda_lib_path, nvcuda_init_resp_t *resp) {
+  LOG(resp->ch.verbose, "initializing %s\n", nvcuda_lib_path);
   CUresult ret;
   resp->err = NULL;
   resp->num_devices = 0;
@@ -57,8 +58,10 @@ void nvcuda_init(char *nvcuda_lib_path, nvcuda_init_resp_t *resp) {
       resp->cudaErr = -1;
       return;
     }
+    LOG(resp->ch.verbose, "dlsym: %s - %p\n", l[i].s, *l[i].p);
   }
 
+  LOG(resp->ch.verbose, "calling cuInit\n");
   ret = (*resp->ch.cuInit)(0);
   if (ret != CUDA_SUCCESS) {
     LOG(resp->ch.verbose, "cuInit err: %d\n", ret);
@@ -75,15 +78,18 @@ void nvcuda_init(char *nvcuda_lib_path, nvcuda_init_resp_t *resp) {
   resp->ch.driver_minor = 0;
 
   // Report driver version if we're in verbose mode, ignore errors
+  LOG(resp->ch.verbose, "calling cuDriverGetVersion\n");
   ret = (*resp->ch.cuDriverGetVersion)(&version);
   if (ret != CUDA_SUCCESS) {
     LOG(resp->ch.verbose, "cuDriverGetVersion failed: %d\n", ret);
   } else {
+    LOG(resp->ch.verbose, "raw version 0x%x\n", version);
     resp->ch.driver_major = version / 1000;
     resp->ch.driver_minor = (version - (resp->ch.driver_major * 1000)) / 10;
     LOG(resp->ch.verbose, "CUDA driver version: %d.%d\n", resp->ch.driver_major, resp->ch.driver_minor);
   }
 
+  LOG(resp->ch.verbose, "calling cuDeviceGetCount\n");
   ret = (*resp->ch.cuDeviceGetCount)(&resp->num_devices);
   if (ret != CUDA_SUCCESS) {
     LOG(resp->ch.verbose, "cuDeviceGetCount err: %d\n", ret);
@@ -94,6 +100,7 @@ void nvcuda_init(char *nvcuda_lib_path, nvcuda_init_resp_t *resp) {
     resp->cudaErr = ret;
     return;
   }
+  LOG(resp->ch.verbose, "device count %d\n", resp->num_devices);
 }
 
 const int buflen = 256;
@@ -17,7 +17,7 @@ void nvml_init(char *nvml_lib_path, nvml_init_resp_t *resp) {
   } l[] = {
       {"nvmlInit_v2", (void *)&resp->ch.nvmlInit_v2},
       {"nvmlShutdown", (void *)&resp->ch.nvmlShutdown},
-      {"nvmlDeviceGetHandleByIndex", (void *)&resp->ch.nvmlDeviceGetHandleByIndex},
+      {"nvmlDeviceGetHandleByUUID", (void *)&resp->ch.nvmlDeviceGetHandleByUUID},
       {"nvmlDeviceGetMemoryInfo", (void *)&resp->ch.nvmlDeviceGetMemoryInfo},
       {NULL, NULL},
   };
@@ -67,20 +67,20 @@ void nvml_init(char *nvml_lib_path, nvml_init_resp_t *resp) {
 }
 
 
-void nvml_get_free(nvml_handle_t h, int device_id, uint64_t *free, uint64_t *total, uint64_t *used) {
+void nvml_get_free(nvml_handle_t h, char *uuid, uint64_t *free, uint64_t *total, uint64_t *used) {
   nvmlDevice_t device;
   nvmlMemory_t memInfo = {0};
   nvmlReturn_t ret;
-  ret = (*h.nvmlDeviceGetHandleByIndex)(device_id, &device);
+  ret = (*h.nvmlDeviceGetHandleByUUID)((const char *)(uuid), &device);
   if (ret != NVML_SUCCESS) {
-    LOG(1, "unable to get device handle %d: %d", device_id, ret);
+    LOG(1, "unable to get device handle %s: %d", uuid, ret);
     *free = 0;
     return;
   }
 
   ret = (*h.nvmlDeviceGetMemoryInfo)(device, &memInfo);
   if (ret != NVML_SUCCESS) {
-    LOG(1, "device memory info lookup failure %d: %d", device_id, ret);
+    LOG(1, "device memory info lookup failure %s: %d", uuid, ret);
     *free = 0;
     return;
   }
@@ -25,7 +25,7 @@ typedef struct nvml_handle {
   uint16_t verbose;
   nvmlReturn_t (*nvmlInit_v2)(void);
   nvmlReturn_t (*nvmlShutdown)(void);
-  nvmlReturn_t (*nvmlDeviceGetHandleByIndex)(unsigned int, nvmlDevice_t *);
+  nvmlReturn_t (*nvmlDeviceGetHandleByUUID)(const char *, nvmlDevice_t *);
   nvmlReturn_t (*nvmlDeviceGetMemoryInfo)(nvmlDevice_t, nvmlMemory_t *);
 } nvml_handle_t;
 
@@ -41,7 +41,7 @@ typedef struct nvml_compute_capability {
 } nvml_compute_capability_t;
 
 void nvml_init(char *nvml_lib_path, nvml_init_resp_t *resp);
-void nvml_get_free(nvml_handle_t ch, int device_id, uint64_t *free, uint64_t *total, uint64_t *used);
+void nvml_get_free(nvml_handle_t ch, char *uuid, uint64_t *free, uint64_t *total, uint64_t *used);
 void nvml_release(nvml_handle_t ch);
 
 #endif // __GPU_INFO_NVML_H__
@@ -3,9 +3,11 @@ package discover
 import (
 	"bufio"
 	"fmt"
+	"io"
 	"os"
 	"reflect"
 	"regexp"
+	"sort"
 	"strings"
 
 	"github.com/ollama/ollama/format"
@@ -109,6 +111,10 @@ func GetCPUDetails() ([]CPU, error) {
 	if err != nil {
 		return nil, err
 	}
+	return linuxCPUDetails(file)
+}
+
+func linuxCPUDetails(file io.Reader) ([]CPU, error) {
 	reColumns := regexp.MustCompile("\t+: ")
 	scanner := bufio.NewScanner(file)
 	cpuInfos := []linuxCpuInfo{}
@@ -131,6 +137,9 @@ func GetCPUDetails() ([]CPU, error) {
 			cpu = &linuxCpuInfo{}
 		}
 	}
+	if cpu.ID != "" {
+		cpuInfos = append(cpuInfos, *cpu)
+	}
 
 	// Process the sockets/cores/threads
 	socketByID := map[string]*CPU{}
@@ -177,10 +186,14 @@ func GetCPUDetails() ([]CPU, error) {
 			s.EfficiencyCoreCount = efficiencyCoreCount
 		}
 	}
-	result := []CPU{}
-	for _, c := range socketByID {
-		result = append(result, *c)
+	keys := make([]string, 0, len(socketByID))
+	result := make([]CPU, 0, len(socketByID))
+	for k := range socketByID {
+		keys = append(keys, k)
+	}
+	sort.Strings(keys)
+	for _, k := range keys {
+		result = append(result, *socketByID[k])
 	}
 	return result, nil
 }
2097	discover/gpu_linux_test.go (new file)
File diff suppressed because it is too large
@@ -209,7 +209,7 @@ func processSystemLogicalProcessorInforationList(buf []byte) []*winPackage {
 		}
 	}
 
-	// Sumarize the results
+	// Summarize the results
 	for i, pkg := range packages {
 		slog.Info("", "package", i, "cores", pkg.coreCount, "efficiency", pkg.efficiencyCoreCount, "threads", pkg.threadCount)
 	}
56	discover/path.go (new file)
@@ -0,0 +1,56 @@
+package discover
+
+import (
+	"os"
+	"path/filepath"
+	"runtime"
+)
+
+// LibPath is a path to lookup dynamic libraries
+// in development it's usually 'build/lib/ollama'
+// in distribution builds it's 'lib/ollama' on Windows
+// '../lib/ollama' on Linux and the executable's directory on macOS
+// note: distribution builds, additional GPU-specific libraries are
+// found in subdirectories of the returned path, such as
+// 'cuda_v11', 'cuda_v12', 'rocm', etc.
+var LibOllamaPath string = func() string {
+	exe, err := os.Executable()
+	if err != nil {
+		return ""
+	}
+
+	if eval, err := filepath.EvalSymlinks(exe); err == nil {
+		exe = eval
+	}
+
+	var libPath string
+	switch runtime.GOOS {
+	case "windows":
+		libPath = filepath.Join(filepath.Dir(exe), "lib", "ollama")
+	case "linux":
+		libPath = filepath.Join(filepath.Dir(exe), "..", "lib", "ollama")
+	case "darwin":
+		libPath = filepath.Dir(exe)
+	}
+
+	cwd, err := os.Getwd()
+	if err != nil {
+		return ""
+	}
+
+	paths := []string{
+		libPath,
+
+		// build paths for development
+		filepath.Join(filepath.Dir(exe), "build", "lib", "ollama"),
+		filepath.Join(cwd, "build", "lib", "ollama"),
+	}
+
+	for _, p := range paths {
+		if _, err := os.Stat(p); err == nil {
+			return p
+		}
+	}
+
+	return filepath.Dir(exe)
+}()
@@ -25,7 +25,7 @@ type GpuInfo struct { // TODO better name maybe "InferenceProcessor"?
 	MinimumMemory uint64 `json:"-"`
 
 	// Any extra PATH/LD_LIBRARY_PATH dependencies required for the Library to operate properly
-	DependencyPath string `json:"lib_path,omitempty"`
+	DependencyPath []string `json:"lib_path,omitempty"`
 
 	// Extra environment variables specific to the GPU as list of [key,value]
 	EnvWorkarounds [][2]string `json:"envs,omitempty"`
@@ -47,6 +47,13 @@ type GpuInfo struct { // TODO better name maybe "InferenceProcessor"?
 	// TODO other performance capability info to help in scheduling decisions
 }
 
+func (gpu GpuInfo) RunnerName() string {
+	if gpu.Variant != "" {
+		return gpu.Library + "_" + gpu.Variant
+	}
+	return gpu.Library
+}
+
 type CPUInfo struct {
 	GpuInfo
 	CPUs []CPU
@@ -99,7 +106,7 @@ func (l GpuInfoList) ByLibrary() []GpuInfoList {
 	for _, info := range l {
 		found := false
 		requested := info.Library
-		if info.Variant != CPUCapabilityNone.String() {
+		if info.Variant != "" {
 			requested += "_" + info.Variant
 		}
 		for i, lib := range libs {
@@ -140,29 +147,6 @@ func (a ByFreeMemory) Len() int { return len(a) }
 func (a ByFreeMemory) Swap(i, j int) { a[i], a[j] = a[j], a[i] }
 func (a ByFreeMemory) Less(i, j int) bool { return a[i].FreeMemory < a[j].FreeMemory }
 
-type CPUCapability uint32
-
-// Override at build time when building base GPU runners
-var GPURunnerCPUCapability = CPUCapabilityAVX
-
-const (
-	CPUCapabilityNone CPUCapability = iota
-	CPUCapabilityAVX
-	CPUCapabilityAVX2
-	// TODO AVX512
-)
-
-func (c CPUCapability) String() string {
-	switch c {
-	case CPUCapabilityAVX:
-		return "avx"
-	case CPUCapabilityAVX2:
-		return "avx2"
-	default:
-		return "no vector extensions"
-	}
-}
-
 type SystemInfo struct {
 	System CPUInfo   `json:"system"`
 	GPUs   []GpuInfo `json:"gpus"`
@@ -175,6 +159,25 @@ func (si SystemInfo) GetOptimalThreadCount() int {
 	if len(si.System.CPUs) == 0 {
 		return 0
 	}
-	// Allocate thread count matching the performance cores on a single socket
-	return si.System.CPUs[0].CoreCount - si.System.CPUs[0].EfficiencyCoreCount
+
+	coreCount := 0
+	for _, c := range si.System.CPUs {
+		coreCount += c.CoreCount - c.EfficiencyCoreCount
+	}
+
+	return coreCount
+}
+
+// For each GPU, check if it does NOT support flash attention
+func (l GpuInfoList) FlashAttentionSupported() bool {
+	for _, gpu := range l {
+		supportsFA := gpu.Library == "metal" ||
+			(gpu.Library == "cuda" && gpu.DriverMajor >= 7) ||
+			gpu.Library == "rocm"
+
+		if !supportsFA {
+			return false
+		}
+	}
+	return true
 }
@@ -2,7 +2,7 @@
 
 ### Getting Started
 * [Quickstart](../README.md#quickstart)
-* [Examples](../examples)
+* [Examples](./examples.md)
 * [Importing models](./import.md)
 * [Linux Documentation](./linux.md)
 * [Windows Documentation](./windows.md)
360	docs/api.md
@@ -13,6 +13,7 @@
 - [Push a Model](#push-a-model)
 - [Generate Embeddings](#generate-embeddings)
 - [List Running Models](#list-running-models)
+- [Version](#version)
 
 ## Conventions
 
@@ -30,7 +31,7 @@ Certain endpoints stream responses as JSON objects. Streaming can be disabled by
 
 ## Generate a completion
 
-```shell
+```
 POST /api/generate
 ```
 
@@ -45,14 +46,18 @@ Generate a response for a given prompt with a provided model. This is a streamin
 
 Advanced parameters (optional):
 
-- `format`: the format to return a response in. Currently the only accepted value is `json`
+- `format`: the format to return a response in. Format can be `json` or a JSON schema
 - `options`: additional model parameters listed in the documentation for the [Modelfile](./modelfile.md#valid-parameters-and-values) such as `temperature`
 - `system`: system message to (overrides what is defined in the `Modelfile`)
 - `template`: the prompt template to use (overrides what is defined in the `Modelfile`)
-- `context`: the context parameter returned from a previous request to `/generate`, this can be used to keep a short conversational memory
 - `stream`: if `false` the response will be returned as a single response object, rather than a stream of objects
 - `raw`: if `true` no formatting will be applied to the prompt. You may choose to use the `raw` parameter if you are specifying a full templated prompt in your request to the API
 - `keep_alive`: controls how long the model will stay loaded into memory following the request (default: `5m`)
+- `context` (deprecated): the context parameter returned from a previous request to `/generate`, this can be used to keep a short conversational memory
+
+#### Structured outputs
+
+Structured outputs are supported by providing a JSON schema in the `format` parameter. The model will generate a response that matches the schema. See the [structured outputs](#request-structured-outputs) example below.
 
 #### JSON mode
 
@@ -185,6 +190,52 @@ curl http://localhost:11434/api/generate -d '{
 }
 ```

+#### Request (Structured outputs)
+
+##### Request
+
+```shell
+curl -X POST http://localhost:11434/api/generate -H "Content-Type: application/json" -d '{
+  "model": "llama3.1:8b",
+  "prompt": "Ollama is 22 years old and is busy saving the world. Respond using JSON",
+  "stream": false,
+  "format": {
+    "type": "object",
+    "properties": {
+      "age": {
+        "type": "integer"
+      },
+      "available": {
+        "type": "boolean"
+      }
+    },
+    "required": [
+      "age",
+      "available"
+    ]
+  }
+}'
+```
+
+##### Response
+
+```json
+{
+  "model": "llama3.1:8b",
+  "created_at": "2024-12-06T00:48:09.983619Z",
+  "response": "{\n \"age\": 22,\n \"available\": true\n}",
+  "done": true,
+  "done_reason": "stop",
+  "context": [1, 2, 3],
+  "total_duration": 1075509083,
+  "load_duration": 567678166,
+  "prompt_eval_count": 28,
+  "prompt_eval_duration": 236000000,
+  "eval_count": 16,
+  "eval_duration": 269000000
+}
+```

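The same structured-outputs request can also be issued from code. The following is a minimal Go sketch, assuming an Ollama server on the default local port and a pulled `llama3.1:8b` model; it posts the schema shown above and prints the constrained response.

```go
// Minimal sketch: request a structured (JSON schema) response from /api/generate.
// Assumes a local Ollama server on localhost:11434 with llama3.1:8b available.
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"log"
	"net/http"
)

func main() {
	body, err := json.Marshal(map[string]any{
		"model":  "llama3.1:8b",
		"prompt": "Ollama is 22 years old and is busy saving the world. Respond using JSON",
		"stream": false,
		"format": map[string]any{
			"type": "object",
			"properties": map[string]any{
				"age":       map[string]any{"type": "integer"},
				"available": map[string]any{"type": "boolean"},
			},
			"required": []string{"age", "available"},
		},
	})
	if err != nil {
		log.Fatal(err)
	}

	resp, err := http.Post("http://localhost:11434/api/generate", "application/json", bytes.NewReader(body))
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	// The schema-constrained text is returned in the "response" field.
	var out struct {
		Response string `json:"response"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
		log.Fatal(err)
	}
	fmt.Println(out.Response) // e.g. {"age": 22, "available": true}
}
```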
 #### Request (JSON mode)

 > [!IMPORTANT]
@@ -255,7 +306,7 @@ curl http://localhost:11434/api/generate -d '{

 #### Response

-```
+```json
 {
   "model": "llava",
   "created_at": "2023-11-03T15:36:02.583064Z",
@@ -337,7 +388,6 @@ curl http://localhost:11434/api/generate -d '{
     "top_k": 20,
     "top_p": 0.9,
     "min_p": 0.0,
-    "tfs_z": 0.5,
     "typical_p": 0.7,
     "repeat_last_n": 33,
     "temperature": 0.8,
@@ -355,7 +405,6 @@ curl http://localhost:11434/api/generate -d '{
     "num_gpu": 1,
     "main_gpu": 0,
     "low_vram": false,
-    "f16_kv": true,
     "vocab_only": false,
     "use_mmap": true,
     "use_mlock": false,
@@ -436,7 +485,7 @@ A single JSON object is returned:

 ## Generate a chat completion

-```shell
+```
 POST /api/chat
 ```

@@ -446,22 +495,26 @@ Generate the next message in a chat with a provided model. This is a streaming e

 - `model`: (required) the [model name](#model-names)
 - `messages`: the messages of the chat, this can be used to keep a chat memory
-- `tools`: tools for the model to use if supported. Requires `stream` to be set to `false`
+- `tools`: list of tools in JSON for the model to use if supported

 The `message` object has the following fields:

 - `role`: the role of the message, either `system`, `user`, `assistant`, or `tool`
 - `content`: the content of the message
 - `images` (optional): a list of images to include in the message (for multimodal models such as `llava`)
-- `tool_calls` (optional): a list of tools the model wants to use
+- `tool_calls` (optional): a list of tools in JSON that the model wants to use

 Advanced parameters (optional):

-- `format`: the format to return a response in. Currently the only accepted value is `json`
+- `format`: the format to return a response in. Format can be `json` or a JSON schema.
 - `options`: additional model parameters listed in the documentation for the [Modelfile](./modelfile.md#valid-parameters-and-values) such as `temperature`
 - `stream`: if `false` the response will be returned as a single response object, rather than a stream of objects
 - `keep_alive`: controls how long the model will stay loaded into memory following the request (default: `5m`)
+
+### Structured outputs
+
+Structured outputs are supported by providing a JSON schema in the `format` parameter. The model will generate a response that matches the schema. See the [Chat request (Structured outputs)](#chat-request-structured-outputs) example below.

 ### Examples

 #### Chat Request (Streaming)
@@ -505,6 +558,10 @@ Final response:
 {
   "model": "llama3.2",
   "created_at": "2023-08-04T19:22:45.499127Z",
+  "message": {
+    "role": "assistant",
+    "content": ""
+  },
   "done": true,
   "total_duration": 4883583458,
   "load_duration": 1334875,
@@ -552,6 +609,54 @@ curl http://localhost:11434/api/chat -d '{
 }
 ```

+#### Chat request (Structured outputs)
+
+##### Request
+
+```shell
+curl -X POST http://localhost:11434/api/chat -H "Content-Type: application/json" -d '{
+  "model": "llama3.1",
+  "messages": [{"role": "user", "content": "Ollama is 22 years old and busy saving the world. Return a JSON object with the age and availability."}],
+  "stream": false,
+  "format": {
+    "type": "object",
+    "properties": {
+      "age": {
+        "type": "integer"
+      },
+      "available": {
+        "type": "boolean"
+      }
+    },
+    "required": [
+      "age",
+      "available"
+    ]
+  },
+  "options": {
+    "temperature": 0
+  }
+}'
+```
+
+##### Response
+
+```json
+{
+  "model": "llama3.1",
+  "created_at": "2024-12-06T00:46:58.265747Z",
+  "message": { "role": "assistant", "content": "{\"age\": 22, \"available\": false}" },
+  "done_reason": "stop",
+  "done": true,
+  "total_duration": 2254970291,
+  "load_duration": 574751416,
+  "prompt_eval_count": 34,
+  "prompt_eval_duration": 1502000000,
+  "eval_count": 12,
+  "eval_duration": 175000000
+}
+```

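Because the constrained reply arrives as a JSON string inside `message.content`, it can be decoded into a typed value. A small Go sketch follows, assuming the response shown above.

```go
// Sketch: decode the schema-constrained chat reply into a typed Go value.
package main

import (
	"encoding/json"
	"fmt"
	"log"
)

type chatResponse struct {
	Message struct {
		Role    string `json:"role"`
		Content string `json:"content"`
	} `json:"message"`
}

type answer struct {
	Age       int  `json:"age"`
	Available bool `json:"available"`
}

func main() {
	// Raw body as returned by the structured-outputs chat request above (abridged).
	raw := []byte(`{"model":"llama3.1","message":{"role":"assistant","content":"{\"age\": 22, \"available\": false}"},"done":true}`)

	var resp chatResponse
	if err := json.Unmarshal(raw, &resp); err != nil {
		log.Fatal(err)
	}

	// The message content itself is a JSON document matching the requested schema.
	var a answer
	if err := json.Unmarshal([]byte(resp.Message.Content), &a); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("age=%d available=%v\n", a.Age, a.Available)
}
```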
#### Chat request (With History)
|
#### Chat request (With History)
|
||||||
|
|
||||||
Send a chat message with a conversation history. You can use this same approach to start the conversation using multi-shot or chain-of-thought prompting.
|
Send a chat message with a conversation history. You can use this same approach to start the conversation using multi-shot or chain-of-thought prompting.
|
||||||
@@ -694,7 +799,7 @@ curl http://localhost:11434/api/chat -d '{
|
|||||||
|
|
||||||
##### Request
|
##### Request
|
||||||
|
|
||||||
```
|
```shell
|
||||||
curl http://localhost:11434/api/chat -d '{
|
curl http://localhost:11434/api/chat -d '{
|
||||||
"model": "llama3.2",
|
"model": "llama3.2",
|
||||||
"messages": [
|
"messages": [
|
||||||
@@ -769,7 +874,7 @@ If the messages array is empty, the model will be loaded into memory.
|
|||||||
|
|
||||||
##### Request
|
##### Request
|
||||||
|
|
||||||
```
|
```shell
|
||||||
curl http://localhost:11434/api/chat -d '{
|
curl http://localhost:11434/api/chat -d '{
|
||||||
"model": "llama3.2",
|
"model": "llama3.2",
|
||||||
"messages": []
|
"messages": []
|
||||||
@@ -777,6 +882,7 @@ curl http://localhost:11434/api/chat -d '{
|
|||||||
```
|
```
|
||||||
|
|
||||||
##### Response
|
##### Response
|
||||||
|
|
||||||
```json
|
```json
|
||||||
{
|
{
|
||||||
"model": "llama3.2",
|
"model": "llama3.2",
|
||||||
@@ -796,7 +902,7 @@ If the messages array is empty and the `keep_alive` parameter is set to `0`, a m
|
|||||||
|
|
||||||
##### Request
|
##### Request
|
||||||
|
|
||||||
```
|
```shell
|
||||||
curl http://localhost:11434/api/chat -d '{
|
curl http://localhost:11434/api/chat -d '{
|
||||||
"model": "llama3.2",
|
"model": "llama3.2",
|
||||||
"messages": [],
|
"messages": [],
|
||||||
@@ -823,37 +929,69 @@ A single JSON object is returned:

 ## Create a Model

-```shell
+```
 POST /api/create
 ```

-Create a model from a [`Modelfile`](./modelfile.md). It is recommended to set `modelfile` to the content of the Modelfile rather than just set `path`. This is a requirement for remote create. Remote model creation must also create any file blobs, fields such as `FROM` and `ADAPTER`, explicitly with the server using [Create a Blob](#create-a-blob) and the value to the path indicated in the response.
+Create a model from:
+
+* another model;
+* a safetensors directory; or
+* a GGUF file.
+
+If you are creating a model from a safetensors directory or from a GGUF file, you must [create a blob](#create-a-blob) for each of the files and then use the file name and SHA256 digest associated with each blob in the `files` field.

 ### Parameters

-- `name`: name of the model to create
+- `model`: name of the model to create
-- `modelfile` (optional): contents of the Modelfile
+- `from`: (optional) name of an existing model to create the new model from
+- `files`: (optional) a dictionary of file names to SHA256 digests of blobs to create the model from
+- `adapters`: (optional) a dictionary of file names to SHA256 digests of blobs for LoRA adapters
+- `template`: (optional) the prompt template for the model
+- `license`: (optional) a string or list of strings containing the license or licenses for the model
+- `system`: (optional) a string containing the system prompt for the model
+- `parameters`: (optional) a dictionary of parameters for the model (see [Modelfile](./modelfile.md#valid-parameters-and-values) for a list of parameters)
+- `messages`: (optional) a list of message objects used to create a conversation
 - `stream`: (optional) if `false` the response will be returned as a single response object, rather than a stream of objects
-- `path` (optional): path to the Modelfile
+- `quantize` (optional): quantize a non-quantized (e.g. float16) model
+
+#### Quantization types
+
+| Type | Recommended |
+| --- | :-: |
+| q2_K | |
+| q3_K_L | |
+| q3_K_M | |
+| q3_K_S | |
+| q4_0 | |
+| q4_1 | |
+| q4_K_M | * |
+| q4_K_S | |
+| q5_0 | |
+| q5_1 | |
+| q5_K_M | |
+| q5_K_S | |
+| q6_K | |
+| q8_0 | * |

 ### Examples

 #### Create a new model

-Create a new model from a `Modelfile`.
+Create a new model from an existing model.

##### Request
|
##### Request
|
||||||
|
|
||||||
```shell
|
```shell
|
||||||
curl http://localhost:11434/api/create -d '{
|
curl http://localhost:11434/api/create -d '{
|
||||||
"name": "mario",
|
"model": "mario",
|
||||||
"modelfile": "FROM llama3\nSYSTEM You are mario from Super Mario Bros."
|
"from": "llama3.2",
|
||||||
|
"system": "You are Mario from Super Mario Bros."
|
||||||
}'
|
}'
|
||||||
```
|
```
|
||||||
|
|
||||||
##### Response
|
##### Response
|
||||||
|
|
||||||
A stream of JSON objects. Notice that the final JSON object shows a `"status": "success"`.
|
A stream of JSON objects is returned:
|
||||||
|
|
||||||
```json
|
```json
|
||||||
{"status":"reading model metadata"}
|
{"status":"reading model metadata"}
|
||||||
@@ -869,57 +1007,147 @@ A stream of JSON objects. Notice that the final JSON object shows a `"status": "
|
|||||||
{"status":"success"}
|
{"status":"success"}
|
||||||
```
|
```
|
||||||
|
|
||||||
### Check if a Blob Exists
|
#### Quantize a model
|
||||||
|
|
||||||
|
Quantize a non-quantized model.
|
||||||
|
|
||||||
|
##### Request
|
||||||
|
|
||||||
|
```shell
|
||||||
|
curl http://localhost:11434/api/create -d '{
|
||||||
|
"model": "llama3.1:quantized",
|
||||||
|
"from": "llama3.1:8b-instruct-fp16",
|
||||||
|
"quantize": "q4_K_M"
|
||||||
|
}'
|
||||||
|
```
|
||||||
|
|
||||||
|
##### Response
|
||||||
|
|
||||||
|
A stream of JSON objects is returned:
|
||||||
|
|
||||||
|
```json
|
||||||
|
{"status":"quantizing F16 model to Q4_K_M"}
|
||||||
|
{"status":"creating new layer sha256:667b0c1932bc6ffc593ed1d03f895bf2dc8dc6df21db3042284a6f4416b06a29"}
|
||||||
|
{"status":"using existing layer sha256:11ce4ee3e170f6adebac9a991c22e22ab3f8530e154ee669954c4bc73061c258"}
|
||||||
|
{"status":"using existing layer sha256:0ba8f0e314b4264dfd19df045cde9d4c394a52474bf92ed6a3de22a4ca31a177"}
|
||||||
|
{"status":"using existing layer sha256:56bb8bd477a519ffa694fc449c2413c6f0e1d3b1c88fa7e3c9d88d3ae49d4dcb"}
|
||||||
|
{"status":"creating new layer sha256:455f34728c9b5dd3376378bfb809ee166c145b0b4c1f1a6feca069055066ef9a"}
|
||||||
|
{"status":"writing manifest"}
|
||||||
|
{"status":"success"}
|
||||||
|
```
|
||||||
|
|
||||||
|
#### Create a model from GGUF
|
||||||
|
|
||||||
|
Create a model from a GGUF file. The `files` parameter should be filled out with the file name and SHA256 digest of the GGUF file you wish to use. Use [/api/blobs/:digest](#push-a-blob) to push the GGUF file to the server before calling this API.
|
||||||
|
|
||||||
|
|
||||||
|
##### Request
|
||||||
|
|
||||||
|
```shell
|
||||||
|
curl http://localhost:11434/api/create -d '{
|
||||||
|
"model": "my-gguf-model",
|
||||||
|
"files": {
|
||||||
|
"test.gguf": "sha256:432f310a77f4650a88d0fd59ecdd7cebed8d684bafea53cbff0473542964f0c3"
|
||||||
|
}
|
||||||
|
}'
|
||||||
|
```
|
||||||
|
|
||||||
|
##### Response
|
||||||
|
|
||||||
|
A stream of JSON objects is returned:
|
||||||
|
|
||||||
|
```json
|
||||||
|
{"status":"parsing GGUF"}
|
||||||
|
{"status":"using existing layer sha256:432f310a77f4650a88d0fd59ecdd7cebed8d684bafea53cbff0473542964f0c3"}
|
||||||
|
{"status":"writing manifest"}
|
||||||
|
{"status":"success"}
|
||||||
|
```
|
||||||
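Pushing the blob first can be scripted. Below is a rough Go sketch, with the `test.gguf` file name kept from the illustrative example above, that computes the digest locally and uploads the file to `/api/blobs/:digest` on a default local server.

```go
// Sketch: push a local GGUF file to the Ollama server as a blob, keyed by its
// SHA256 digest, so it can be referenced in the "files" field of /api/create.
// The file name "test.gguf" is illustrative.
package main

import (
	"crypto/sha256"
	"fmt"
	"io"
	"log"
	"net/http"
	"os"
)

func main() {
	f, err := os.Open("test.gguf")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	// First pass: compute the SHA256 digest of the file.
	h := sha256.New()
	if _, err := io.Copy(h, f); err != nil {
		log.Fatal(err)
	}
	digest := fmt.Sprintf("sha256:%x", h.Sum(nil))

	// Second pass: upload the file body to /api/blobs/<digest>.
	if _, err := f.Seek(0, io.SeekStart); err != nil {
		log.Fatal(err)
	}
	req, err := http.NewRequest(http.MethodPost, "http://localhost:11434/api/blobs/"+digest, f)
	if err != nil {
		log.Fatal(err)
	}
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	fmt.Println(digest, resp.Status) // expect 201 Created
}
```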
|
|
||||||
|
|
||||||
|
#### Create a model from a Safetensors directory
|
||||||
|
|
||||||
|
The `files` parameter should include a dictionary of files for the safetensors model which includes the file names and SHA256 digest of each file. Use [/api/blobs/:digest](#push-a-blob) to first push each of the files to the server before calling this API. Files will remain in the cache until the Ollama server is restarted.
|
||||||
|
|
||||||
|
##### Request
|
||||||
|
|
||||||
|
```shell
|
||||||
|
curl http://localhost:11434/api/create -d '{
|
||||||
|
"model": "fred",
|
||||||
|
"files": {
|
||||||
|
"config.json": "sha256:dd3443e529fb2290423a0c65c2d633e67b419d273f170259e27297219828e389",
|
||||||
|
"generation_config.json": "sha256:88effbb63300dbbc7390143fbbdd9d9fa50587b37e8bfd16c8c90d4970a74a36",
|
||||||
|
"special_tokens_map.json": "sha256:b7455f0e8f00539108837bfa586c4fbf424e31f8717819a6798be74bef813d05",
|
||||||
|
"tokenizer.json": "sha256:bbc1904d35169c542dffbe1f7589a5994ec7426d9e5b609d07bab876f32e97ab",
|
||||||
|
"tokenizer_config.json": "sha256:24e8a6dc2547164b7002e3125f10b415105644fcf02bf9ad8b674c87b1eaaed6",
|
||||||
|
"model.safetensors": "sha256:1ff795ff6a07e6a68085d206fb84417da2f083f68391c2843cd2b8ac6df8538f"
|
||||||
|
}
|
||||||
|
}'
|
||||||
|
```
|
||||||
|
|
||||||
|
##### Response
|
||||||
|
|
||||||
|
A stream of JSON objects is returned:
|
||||||
|
|
||||||
|
```json
|
||||||
|
{"status":"converting model"}
|
||||||
|
{"status":"creating new layer sha256:05ca5b813af4a53d2c2922933936e398958855c44ee534858fcfd830940618b6"}
|
||||||
|
{"status":"using autodetected template llama3-instruct"}
|
||||||
|
{"status":"using existing layer sha256:56bb8bd477a519ffa694fc449c2413c6f0e1d3b1c88fa7e3c9d88d3ae49d4dcb"}
|
||||||
|
{"status":"writing manifest"}
|
||||||
|
{"status":"success"}
|
||||||
|
```
|
||||||
|
|
||||||
|
## Check if a Blob Exists
|
||||||
|
|
||||||
```shell
|
```shell
|
||||||
HEAD /api/blobs/:digest
|
HEAD /api/blobs/:digest
|
||||||
```
|
```
|
||||||
|
|
||||||
Ensures that the file blob used for a FROM or ADAPTER field exists on the server. This is checking your Ollama server and not Ollama.ai.
|
Ensures that the file blob (Binary Large Object) used when creating a model exists on the server. This checks your Ollama server and not ollama.com.
|
||||||
|
|
||||||
#### Query Parameters
|
### Query Parameters
|
||||||
|
|
||||||
- `digest`: the SHA256 digest of the blob
|
- `digest`: the SHA256 digest of the blob
|
||||||
|
|
||||||
#### Examples
|
### Examples
|
||||||
|
|
||||||
##### Request
|
#### Request
|
||||||
|
|
||||||
```shell
|
```shell
|
||||||
curl -I http://localhost:11434/api/blobs/sha256:29fdb92e57cf0827ded04ae6461b5931d01fa595843f55d36f5b275a52087dd2
|
curl -I http://localhost:11434/api/blobs/sha256:29fdb92e57cf0827ded04ae6461b5931d01fa595843f55d36f5b275a52087dd2
|
||||||
```
|
```
|
||||||
|
|
||||||
##### Response
|
#### Response
|
||||||
|
|
||||||
Return 200 OK if the blob exists, 404 Not Found if it does not.
|
Return 200 OK if the blob exists, 404 Not Found if it does not.
|
||||||
|
|
||||||
### Create a Blob
|
## Push a Blob
|
||||||
|
|
||||||
```shell
|
```
|
||||||
POST /api/blobs/:digest
|
POST /api/blobs/:digest
|
||||||
```
|
```
|
||||||
|
|
||||||
Create a blob from a file on the server. Returns the server file path.
|
Push a file to the Ollama server to create a "blob" (Binary Large Object).
|
||||||
|
|
||||||
#### Query Parameters
|
### Query Parameters
|
||||||
|
|
||||||
- `digest`: the expected SHA256 digest of the file
|
- `digest`: the expected SHA256 digest of the file
|
||||||
|
|
||||||
#### Examples
|
### Examples
|
||||||
|
|
||||||
##### Request
|
#### Request
|
||||||
|
|
||||||
```shell
|
```shell
|
||||||
curl -T model.bin -X POST http://localhost:11434/api/blobs/sha256:29fdb92e57cf0827ded04ae6461b5931d01fa595843f55d36f5b275a52087dd2
|
curl -T model.gguf -X POST http://localhost:11434/api/blobs/sha256:29fdb92e57cf0827ded04ae6461b5931d01fa595843f55d36f5b275a52087dd2
|
||||||
```
|
```
|
||||||
|
|
||||||
##### Response
|
#### Response
|
||||||
|
|
||||||
Return 201 Created if the blob was successfully created, 400 Bad Request if the digest used is not expected.
|
Return 201 Created if the blob was successfully created, 400 Bad Request if the digest used is not expected.
|
||||||
|
|
||||||
## List Local Models
|
## List Local Models
|
||||||
|
|
||||||
```shell
|
```
|
||||||
GET /api/tags
|
GET /api/tags
|
||||||
```
|
```
|
||||||
|
|
||||||
@@ -972,7 +1200,7 @@ A single JSON object will be returned.
|
|||||||
|
|
||||||
## Show Model Information
|
## Show Model Information
|
||||||
|
|
||||||
```shell
|
```
|
||||||
POST /api/show
|
POST /api/show
|
||||||
```
|
```
|
||||||
|
|
||||||
@@ -980,7 +1208,7 @@ Show information about a model including details, modelfile, template, parameter
|
|||||||
|
|
||||||
### Parameters
|
### Parameters
|
||||||
|
|
||||||
- `name`: name of the model to show
|
- `model`: name of the model to show
|
||||||
- `verbose`: (optional) if set to `true`, returns full data for verbose response fields
|
- `verbose`: (optional) if set to `true`, returns full data for verbose response fields
|
||||||
|
|
||||||
### Examples
|
### Examples
|
||||||
@@ -989,7 +1217,7 @@ Show information about a model including details, modelfile, template, parameter
|
|||||||
|
|
||||||
```shell
|
```shell
|
||||||
curl http://localhost:11434/api/show -d '{
|
curl http://localhost:11434/api/show -d '{
|
||||||
"name": "llama3.2"
|
"model": "llama3.2"
|
||||||
}'
|
}'
|
||||||
```
|
```
|
||||||
|
|
||||||
@@ -1038,7 +1266,7 @@ curl http://localhost:11434/api/show -d '{
|
|||||||
|
|
||||||
## Copy a Model
|
## Copy a Model
|
||||||
|
|
||||||
```shell
|
```
|
||||||
POST /api/copy
|
POST /api/copy
|
||||||
```
|
```
|
||||||
|
|
||||||
@@ -1061,7 +1289,7 @@ Returns a 200 OK if successful, or a 404 Not Found if the source model doesn't e
|
|||||||
|
|
||||||
## Delete a Model
|
## Delete a Model
|
||||||
|
|
||||||
```shell
|
```
|
||||||
DELETE /api/delete
|
DELETE /api/delete
|
||||||
```
|
```
|
||||||
|
|
||||||
@@ -1069,7 +1297,7 @@ Delete a model and its data.
|
|||||||
|
|
||||||
### Parameters
|
### Parameters
|
||||||
|
|
||||||
- `name`: model name to delete
|
- `model`: model name to delete
|
||||||
|
|
||||||
### Examples
|
### Examples
|
||||||
|
|
||||||
@@ -1077,7 +1305,7 @@ Delete a model and its data.
|
|||||||
|
|
||||||
```shell
|
```shell
|
||||||
curl -X DELETE http://localhost:11434/api/delete -d '{
|
curl -X DELETE http://localhost:11434/api/delete -d '{
|
||||||
"name": "llama3:13b"
|
"model": "llama3:13b"
|
||||||
}'
|
}'
|
||||||
```
|
```
|
||||||
|
|
||||||
@@ -1087,7 +1315,7 @@ Returns a 200 OK if successful, 404 Not Found if the model to be deleted doesn't
|
|||||||
|
|
||||||
## Pull a Model
|
## Pull a Model
|
||||||
|
|
||||||
```shell
|
```
|
||||||
POST /api/pull
|
POST /api/pull
|
||||||
```
|
```
|
||||||
|
|
||||||
@@ -1095,7 +1323,7 @@ Download a model from the ollama library. Cancelled pulls are resumed from where
|
|||||||
|
|
||||||
### Parameters
|
### Parameters
|
||||||
|
|
||||||
- `name`: name of the model to pull
|
- `model`: name of the model to pull
|
||||||
- `insecure`: (optional) allow insecure connections to the library. Only use this if you are pulling from your own library during development.
|
- `insecure`: (optional) allow insecure connections to the library. Only use this if you are pulling from your own library during development.
|
||||||
- `stream`: (optional) if `false` the response will be returned as a single response object, rather than a stream of objects
|
- `stream`: (optional) if `false` the response will be returned as a single response object, rather than a stream of objects
|
||||||
|
|
||||||
@@ -1105,7 +1333,7 @@ Download a model from the ollama library. Cancelled pulls are resumed from where
|
|||||||
|
|
||||||
```shell
|
```shell
|
||||||
curl http://localhost:11434/api/pull -d '{
|
curl http://localhost:11434/api/pull -d '{
|
||||||
"name": "llama3.2"
|
"model": "llama3.2"
|
||||||
}'
|
}'
|
||||||
```
|
```
|
||||||
|
|
||||||
@@ -1159,7 +1387,7 @@ if `stream` is set to false, then the response is a single JSON object:
|
|||||||
|
|
||||||
## Push a Model
|
## Push a Model
|
||||||
|
|
||||||
```shell
|
```
|
||||||
POST /api/push
|
POST /api/push
|
||||||
```
|
```
|
||||||
|
|
||||||
@@ -1167,7 +1395,7 @@ Upload a model to a model library. Requires registering for ollama.ai and adding
|
|||||||
|
|
||||||
### Parameters
|
### Parameters
|
||||||
|
|
||||||
- `name`: name of the model to push in the form of `<namespace>/<model>:<tag>`
|
- `model`: name of the model to push in the form of `<namespace>/<model>:<tag>`
|
||||||
- `insecure`: (optional) allow insecure connections to the library. Only use this if you are pushing to your library during development.
|
- `insecure`: (optional) allow insecure connections to the library. Only use this if you are pushing to your library during development.
|
||||||
- `stream`: (optional) if `false` the response will be returned as a single response object, rather than a stream of objects
|
- `stream`: (optional) if `false` the response will be returned as a single response object, rather than a stream of objects
|
||||||
|
|
||||||
@@ -1177,7 +1405,7 @@ Upload a model to a model library. Requires registering for ollama.ai and adding
|
|||||||
|
|
||||||
```shell
|
```shell
|
||||||
curl http://localhost:11434/api/push -d '{
|
curl http://localhost:11434/api/push -d '{
|
||||||
"name": "mattw/pygmalion:latest"
|
"model": "mattw/pygmalion:latest"
|
||||||
}'
|
}'
|
||||||
```
|
```
|
||||||
|
|
||||||
@@ -1224,7 +1452,7 @@ If `stream` is set to `false`, then the response is a single JSON object:
|
|||||||
|
|
||||||
## Generate Embeddings
|
## Generate Embeddings
|
||||||
|
|
||||||
```shell
|
```
|
||||||
POST /api/embed
|
POST /api/embed
|
||||||
```
|
```
|
||||||
|
|
||||||
@@ -1292,7 +1520,7 @@ curl http://localhost:11434/api/embed -d '{
|
|||||||
```
|
```
|
||||||
|
|
||||||
## List Running Models
|
## List Running Models
|
||||||
```shell
|
```
|
||||||
GET /api/ps
|
GET /api/ps
|
||||||
```
|
```
|
||||||
|
|
||||||
@@ -1339,7 +1567,7 @@ A single JSON object will be returned.
|
|||||||
|
|
||||||
> Note: this endpoint has been superseded by `/api/embed`
|
> Note: this endpoint has been superseded by `/api/embed`
|
||||||
|
|
||||||
```shell
|
```
|
||||||
POST /api/embeddings
|
POST /api/embeddings
|
||||||
```
|
```
|
||||||
|
|
||||||
@@ -1376,3 +1604,29 @@ curl http://localhost:11434/api/embeddings -d '{
|
|||||||
]
|
]
|
||||||
}
|
}
|
||||||
```
|
```
|
||||||
|
|
||||||
|
## Version
|
||||||
|
|
||||||
|
```
|
||||||
|
GET /api/version
|
||||||
|
```
|
||||||
|
|
||||||
|
Retrieve the Ollama version
|
||||||
|
|
||||||
|
### Examples
|
||||||
|
|
||||||
|
#### Request
|
||||||
|
|
||||||
|
```shell
|
||||||
|
curl http://localhost:11434/api/version
|
||||||
|
```
|
||||||
|
|
||||||
|
#### Response
|
||||||
|
|
||||||
|
```json
|
||||||
|
{
|
||||||
|
"version": "0.5.1"
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
|
||||||
|
59 docs/benchmark.md Normal file
@@ -0,0 +1,59 @@
|
|||||||
|
# Benchmark
|
||||||
|
|
||||||
|
Go benchmark tests that measure end-to-end performance of a running Ollama server. Run these tests to evaluate model inference performance on your hardware and measure the impact of code changes.
|
||||||
|
|
||||||
|
## When to use
|
||||||
|
|
||||||
|
Run these benchmarks when:
|
||||||
|
- Making changes to the model inference engine
|
||||||
|
- Modifying model loading/unloading logic
|
||||||
|
- Changing prompt processing or token generation code
|
||||||
|
- Implementing a new model architecture
|
||||||
|
- Testing performance across different hardware setups
|
||||||
|
|
||||||
|
## Prerequisites
|
||||||
|
- Ollama server running locally with `ollama serve` on `127.0.0.1:11434`
|
||||||
|
## Usage and Examples
|
||||||
|
|
||||||
|
>[!NOTE]
|
||||||
|
>All commands must be run from the root directory of the Ollama project.
|
||||||
|
|
||||||
|
Basic syntax:
|
||||||
|
```bash
|
||||||
|
go test -bench=. ./benchmark/... -m $MODEL_NAME
|
||||||
|
```
|
||||||
|
|
||||||
|
Required flags:
|
||||||
|
- `-bench=.`: Run all benchmarks
|
||||||
|
- `-m`: Model name to benchmark
|
||||||
|
|
||||||
|
Optional flags:
|
||||||
|
- `-count N`: Number of times to run the benchmark (useful for statistical analysis)
|
||||||
|
- `-timeout T`: Maximum time for the benchmark to run (e.g. "10m" for 10 minutes)
|
||||||
|
|
||||||
|
Common usage patterns:
|
||||||
|
|
||||||
|
Single benchmark run with a model specified:
|
||||||
|
```bash
|
||||||
|
go test -bench=. ./benchmark/... -m llama3.3
|
||||||
|
```
|
||||||
|
|
||||||
|
## Output metrics
|
||||||
|
|
||||||
|
The benchmark reports several key metrics:
|
||||||
|
|
||||||
|
- `gen_tok/s`: Generated tokens per second
|
||||||
|
- `prompt_tok/s`: Prompt processing tokens per second
|
||||||
|
- `ttft_ms`: Time to first token in milliseconds
|
||||||
|
- `load_ms`: Model load time in milliseconds
|
||||||
|
- `gen_tokens`: Total tokens generated
|
||||||
|
- `prompt_tokens`: Total prompt tokens processed
|
||||||
|
|
||||||
|
Each benchmark runs two scenarios:
|
||||||
|
- Cold start: Model is loaded from disk for each test
|
||||||
|
- Warm start: Model is pre-loaded in memory
|
||||||
|
|
||||||
|
Three prompt lengths are tested for each scenario:
|
||||||
|
- Short prompt (100 tokens)
|
||||||
|
- Medium prompt (500 tokens)
|
||||||
|
- Long prompt (1000 tokens)
|
@@ -1,350 +1,159 @@
|
|||||||
# Development
|
# Development
|
||||||
|
|
||||||
|
Install prerequisites:
|
||||||
|
|
||||||
|
- [Go](https://go.dev/doc/install)
|
||||||
|
- C/C++ Compiler e.g. Clang on macOS, [TDM-GCC](https://github.com/jmeubank/tdm-gcc/releases/latest) (Windows amd64) or [llvm-mingw](https://github.com/mstorsjo/llvm-mingw) (Windows arm64), GCC/Clang on Linux.
|
||||||
|
|
||||||
|
Then build and run Ollama from the root directory of the repository:
|
||||||
|
|
||||||
|
```shell
|
||||||
|
go run . serve
|
||||||
|
```
|
||||||
|
|
||||||
|
## macOS (Apple Silicon)
|
||||||
|
|
||||||
|
macOS Apple Silicon supports Metal which is built-in to the Ollama binary. No additional steps are required.
|
||||||
|
|
||||||
|
## macOS (Intel)
|
||||||
|
|
||||||
|
Install prerequisites:
|
||||||
|
|
||||||
|
- [CMake](https://cmake.org/download/) or `brew install cmake`
|
||||||
|
|
||||||
|
Then, configure and build the project:
|
||||||
|
|
||||||
|
```shell
|
||||||
|
cmake -B build
|
||||||
|
cmake --build build
|
||||||
|
```
|
||||||
|
|
||||||
|
Lastly, run Ollama:
|
||||||
|
|
||||||
|
```shell
|
||||||
|
go run . serve
|
||||||
|
```
|
||||||
|
|
||||||
|
## Windows
|
||||||
|
|
||||||
|
Install prerequisites:
|
||||||
|
|
||||||
|
- [CMake](https://cmake.org/download/)
|
||||||
|
- [Visual Studio 2022](https://visualstudio.microsoft.com/downloads/) including the Native Desktop Workload
|
||||||
|
- (Optional) AMD GPU support
|
||||||
|
- [ROCm](https://rocm.docs.amd.com/en/latest/)
|
||||||
|
- [Ninja](https://github.com/ninja-build/ninja/releases)
|
||||||
|
- (Optional) NVIDIA GPU support
|
||||||
|
- [CUDA SDK](https://developer.nvidia.com/cuda-downloads?target_os=Windows&target_arch=x86_64&target_version=11&target_type=exe_network)
|
||||||
|
|
||||||
|
Then, configure and build the project:
|
||||||
|
|
||||||
|
```shell
|
||||||
|
cmake -B build
|
||||||
|
cmake --build build --config Release
|
||||||
|
```
|
||||||
|
|
||||||
> [!IMPORTANT]
|
> [!IMPORTANT]
|
||||||
> The `llm` package that loads and runs models is being updated to use a new [Go runner](#transition-to-go-runner): this should only impact a small set of PRs however it does change how the project is built.
|
> Building for ROCm requires additional flags:
|
||||||
|
> ```
|
||||||
|
> cmake -B build -G Ninja -DCMAKE_C_COMPILER=clang -DCMAKE_CXX_COMPILER=clang++
|
||||||
|
> cmake --build build --config Release
|
||||||
|
> ```
|
||||||
|
|
||||||
Install required tools:
|
|
||||||
|
|
||||||
- cmake version 3.24 or higher
|
Lastly, run Ollama:
|
||||||
- go version 1.22 or higher
|
|
||||||
- gcc version 11.4.0 or higher
|
|
||||||
|
|
||||||
### MacOS
|
```shell
|
||||||
|
go run . serve
|
||||||
```bash
|
|
||||||
brew install go cmake gcc
|
|
||||||
```
|
```
|
||||||
|
|
||||||
Optionally enable debugging and more verbose logging:
|
## Windows (ARM)
|
||||||
|
|
||||||
```bash
|
Windows ARM does not support additional acceleration libraries at this time. Do not use cmake, simply `go run` or `go build`.
|
||||||
# At build time
|
|
||||||
export CGO_CFLAGS="-g"
|
|
||||||
|
|
||||||
# At runtime
|
## Linux
|
||||||
export OLLAMA_DEBUG=1
|
|
||||||
|
Install prerequisites:
|
||||||
|
|
||||||
|
- [CMake](https://cmake.org/download/) or `sudo apt install cmake` or `sudo dnf install cmake`
|
||||||
|
- (Optional) AMD GPU support
|
||||||
|
- [ROCm](https://rocm.docs.amd.com/projects/install-on-linux/en/latest/install/quick-start.html)
|
||||||
|
- (Optional) NVIDIA GPU support
|
||||||
|
- [CUDA SDK](https://developer.nvidia.com/cuda-downloads)
|
||||||
|
|
||||||
|
> [!IMPORTANT]
|
||||||
|
> Ensure prerequisites are in `PATH` before running CMake.
|
||||||
|
|
||||||
|
|
||||||
|
Then, configure and build the project:
|
||||||
|
|
||||||
|
```shell
|
||||||
|
cmake -B build
|
||||||
|
cmake --build build
|
||||||
```
|
```
|
||||||
|
|
||||||
Get the required libraries and build the native LLM code:
|
Lastly, run Ollama:
|
||||||
|
|
||||||
```bash
|
```shell
|
||||||
go generate ./...
|
go run . serve
|
||||||
```
|
```
|
||||||
|
|
||||||
Then build ollama:
|
## Docker
|
||||||
|
|
||||||
```bash
|
```shell
|
||||||
go build .
|
docker build .
|
||||||
```
|
```
|
||||||
|
|
||||||
Now you can run `ollama`:
|
### ROCm
|
||||||
|
|
||||||
```bash
|
```shell
|
||||||
./ollama
|
docker build --build-arg FLAVOR=rocm .
|
||||||
```
|
```
|
||||||
|
|
||||||
### Linux
|
## Running tests
|
||||||
|
|
||||||
#### Linux CUDA (NVIDIA)
|
To run tests, use `go test`:
|
||||||
|
|
||||||
_Your operating system distribution may already have packages for NVIDIA CUDA. Distro packages are often preferable, but instructions are distro-specific. Please consult distro-specific docs for dependencies if available!_
|
```shell
|
||||||
|
go test ./...
|
||||||
Install `cmake` and `golang` as well as [NVIDIA CUDA](https://developer.nvidia.com/cuda-downloads)
|
|
||||||
development and runtime packages.
|
|
||||||
|
|
||||||
Typically the build scripts will auto-detect CUDA, however, if your Linux distro
|
|
||||||
or installation approach uses unusual paths, you can specify the location by
|
|
||||||
specifying an environment variable `CUDA_LIB_DIR` to the location of the shared
|
|
||||||
libraries, and `CUDACXX` to the location of the nvcc compiler. You can customize
|
|
||||||
a set of target CUDA architectures by setting `CMAKE_CUDA_ARCHITECTURES` (e.g. "50;60;70")
|
|
||||||
|
|
||||||
Then generate dependencies:
|
|
||||||
|
|
||||||
```
|
|
||||||
go generate ./...
|
|
||||||
```
|
```
|
||||||
|
|
||||||
Then build the binary:
|
> NOTE: In rare circumstances, you may need to change a package using the new
|
||||||
|
> "synctest" package in go1.24.
|
||||||
```
|
>
|
||||||
go build .
|
> If you do not have the "synctest" package enabled, you will not see build or
|
||||||
```
|
> test failures resulting from your change(s), if any, locally, but CI will
|
||||||
|
> break.
|
||||||
#### Linux ROCm (AMD)
|
>
|
||||||
|
> If you see failures in CI, you can either keep pushing changes to see if the
|
||||||
_Your operating system distribution may already have packages for AMD ROCm and CLBlast. Distro packages are often preferable, but instructions are distro-specific. Please consult distro-specific docs for dependencies if available!_
|
> CI build passes, or you can enable the "synctest" package locally to see the
|
||||||
|
> failures before pushing.
|
||||||
Install [CLBlast](https://github.com/CNugteren/CLBlast/blob/master/doc/installation.md) and [ROCm](https://rocm.docs.amd.com/en/latest/) development packages first, as well as `cmake` and `golang`.
|
>
|
||||||
|
> To enable the "synctest" package for testing, run the following command:
|
||||||
Typically the build scripts will auto-detect ROCm, however, if your Linux distro
|
>
|
||||||
or installation approach uses unusual paths, you can specify the location by
|
> ```shell
|
||||||
specifying an environment variable `ROCM_PATH` to the location of the ROCm
|
> GOEXPERIMENT=synctest go test ./...
|
||||||
install (typically `/opt/rocm`), and `CLBlast_DIR` to the location of the
|
> ```
|
||||||
CLBlast install (typically `/usr/lib/cmake/CLBlast`). You can also customize
|
>
|
||||||
the AMD GPU targets by setting AMDGPU_TARGETS (e.g. `AMDGPU_TARGETS="gfx1101;gfx1102"`)
|
> If you wish to enable synctest for all go commands, you can set the
|
||||||
|
> `GOEXPERIMENT` environment variable in your shell profile or by using:
|
||||||
```
|
>
|
||||||
go generate ./...
|
> ```shell
|
||||||
```
|
> go env -w GOEXPERIMENT=synctest
|
||||||
|
> ```
|
||||||
Then build the binary:
|
>
|
||||||
|
> Which will enable the "synctest" package for all go commands without needing
|
||||||
```
|
> to set it for all shell sessions.
|
||||||
go build .
|
>
|
||||||
```
|
> The synctest package is not required for production builds.
|
||||||
|
|
||||||
ROCm requires elevated privileges to access the GPU at runtime. On most distros you can add your user account to the `render` group, or run as root.
|
## Library detection
|
||||||
|
|
||||||
#### Advanced CPU Settings
|
Ollama looks for acceleration libraries in the following paths relative to the `ollama` executable:
|
||||||
|
|
||||||
By default, running `go generate ./...` will compile a few different variations
|
* `./lib/ollama` (Windows)
|
||||||
of the LLM library based on common CPU families and vector math capabilities,
|
* `../lib/ollama` (Linux)
|
||||||
including a lowest-common-denominator which should run on almost any 64 bit CPU
|
* `.` (macOS)
|
||||||
somewhat slowly. At runtime, Ollama will auto-detect the optimal variation to
|
* `build/lib/ollama` (for development)
|
||||||
load. If you would like to build a CPU-based build customized for your
|
|
||||||
processor, you can set `OLLAMA_CUSTOM_CPU_DEFS` to the llama.cpp flags you would
|
If the libraries are not found, Ollama will not run with any acceleration libraries.
|
||||||
like to use. For example, to compile an optimized binary for an Intel i9-9880H,
|
|
||||||
you might use:
|
|
||||||
|
|
||||||
```
|
|
||||||
OLLAMA_CUSTOM_CPU_DEFS="-DGGML_AVX=on -DGGML_AVX2=on -DGGML_F16C=on -DGGML_FMA=on" go generate ./...
|
|
||||||
go build .
|
|
||||||
```
|
|
||||||
|
|
||||||
#### Containerized Linux Build
|
|
||||||
|
|
||||||
If you have Docker available, you can build linux binaries with `./scripts/build_linux.sh` which has the CUDA and ROCm dependencies included. The resulting binary is placed in `./dist`
|
|
||||||
|
|
||||||
### Windows
|
|
||||||
|
|
||||||
Note: The Windows build for Ollama is still under development.
|
|
||||||
|
|
||||||
First, install required tools:
|
|
||||||
|
|
||||||
- MSVC toolchain - C/C++ and cmake as minimal requirements
|
|
||||||
- Go version 1.22 or higher
|
|
||||||
- MinGW (pick one variant) with GCC.
|
|
||||||
- [MinGW-w64](https://www.mingw-w64.org/)
|
|
||||||
- [MSYS2](https://www.msys2.org/)
|
|
||||||
- The `ThreadJob` Powershell module: `Install-Module -Name ThreadJob -Scope CurrentUser`
|
|
||||||
|
|
||||||
Then, build the `ollama` binary:
|
|
||||||
|
|
||||||
```powershell
|
|
||||||
$env:CGO_ENABLED="1"
|
|
||||||
go generate ./...
|
|
||||||
go build .
|
|
||||||
```
|
|
||||||
|
|
||||||
#### Windows CUDA (NVIDIA)
|
|
||||||
|
|
||||||
In addition to the common Windows development tools described above, install CUDA after installing MSVC.
|
|
||||||
|
|
||||||
- [NVIDIA CUDA](https://docs.nvidia.com/cuda/cuda-installation-guide-microsoft-windows/index.html)
|
|
||||||
|
|
||||||
|
|
||||||
#### Windows ROCm (AMD Radeon)
|
|
||||||
|
|
||||||
In addition to the common Windows development tools described above, install AMDs HIP package after installing MSVC.
|
|
||||||
|
|
||||||
- [AMD HIP](https://www.amd.com/en/developer/resources/rocm-hub/hip-sdk.html)
|
|
||||||
- [Strawberry Perl](https://strawberryperl.com/)
|
|
||||||
|
|
||||||
Lastly, add `ninja.exe` included with MSVC to the system path (e.g. `C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\Common7\IDE\CommonExtensions\Microsoft\CMake\Ninja`).
|
|
||||||
|
|
||||||
#### Windows arm64
|
|
||||||
|
|
||||||
The default `Developer PowerShell for VS 2022` may default to x86 which is not what you want. To ensure you get an arm64 development environment, start a plain PowerShell terminal and run:
|
|
||||||
|
|
||||||
```powershell
|
|
||||||
import-module 'C:\\Program Files\\Microsoft Visual Studio\\2022\\Community\\Common7\\Tools\\Microsoft.VisualStudio.DevShell.dll'
|
|
||||||
Enter-VsDevShell -Arch arm64 -vsinstallpath 'C:\\Program Files\\Microsoft Visual Studio\\2022\\Community' -skipautomaticlocation
|
|
||||||
```
|
|
||||||
|
|
||||||
You can confirm with `write-host $env:VSCMD_ARG_TGT_ARCH`
|
|
||||||
|
|
||||||
Follow the instructions at https://www.msys2.org/wiki/arm64/ to set up an arm64 msys2 environment. Ollama requires gcc and mingw32-make to compile, which is not currently available on Windows arm64, but a gcc compatibility adapter is available via `mingw-w64-clang-aarch64-gcc-compat`. At a minimum you will need to install the following:
|
|
||||||
|
|
||||||
```
|
|
||||||
pacman -S mingw-w64-clang-aarch64-clang mingw-w64-clang-aarch64-gcc-compat mingw-w64-clang-aarch64-make make
|
|
||||||
```
|
|
||||||
|
|
||||||
You will need to ensure your PATH includes go, cmake, gcc and clang mingw32-make to build ollama from source. (typically `C:\msys64\clangarm64\bin\`)
|
|
||||||
|
|
||||||
|
|
||||||
## Transition to Go runner
|
|
||||||
|
|
||||||
The Ollama team is working on moving to a new Go based runner that loads and runs models in a subprocess to replace the previous code under `ext_server`. During this transition period, this new Go runner is "opt in" at build time, and requires using a different approach to build.
|
|
||||||
|
|
||||||
After the transition to use the Go server exclusively, both `make` and `go generate` will build the Go runner.
|
|
||||||
|
|
||||||
Install required tools:
|
|
||||||
|
|
||||||
- go version 1.22 or higher
|
|
||||||
- gcc version 11.4.0 or higher
|
|
||||||
|
|
||||||
|
|
||||||
### MacOS
|
|
||||||
|
|
||||||
[Download Go](https://go.dev/dl/)
|
|
||||||
|
|
||||||
Optionally enable debugging and more verbose logging:
|
|
||||||
|
|
||||||
```bash
|
|
||||||
# At build time
|
|
||||||
export CGO_CFLAGS="-g"
|
|
||||||
|
|
||||||
# At runtime
|
|
||||||
export OLLAMA_DEBUG=1
|
|
||||||
```
|
|
||||||
|
|
||||||
Get the required libraries and build the native LLM code: (Adjust the job count based on your number of processors for a faster build)
|
|
||||||
|
|
||||||
```bash
|
|
||||||
make -C llama -j 5
|
|
||||||
```
|
|
||||||
|
|
||||||
Then build ollama:
|
|
||||||
|
|
||||||
```bash
|
|
||||||
go build .
|
|
||||||
```
|
|
||||||
|
|
||||||
Now you can run `ollama`:
|
|
||||||
|
|
||||||
```bash
|
|
||||||
./ollama
|
|
||||||
```
|
|
||||||
|
|
||||||
#### Xcode 15 warnings
|
|
||||||
|
|
||||||
If you are using Xcode newer than version 14, you may see a warning during `go build` about `ld: warning: ignoring duplicate libraries: '-lobjc'` due to Golang issue https://github.com/golang/go/issues/67799 which can be safely ignored. You can suppress the warning with `export CGO_LDFLAGS="-Wl,-no_warn_duplicate_libraries"`
|
|
||||||
|
|
||||||
### Linux
|
|
||||||
|
|
||||||
#### Linux CUDA (NVIDIA)
|
|
||||||
|
|
||||||
_Your operating system distribution may already have packages for NVIDIA CUDA. Distro packages are often preferable, but instructions are distro-specific. Please consult distro-specific docs for dependencies if available!_
|
|
||||||
|
|
||||||
Install `make`, `gcc` and `golang` as well as [NVIDIA CUDA](https://developer.nvidia.com/cuda-downloads)
|
|
||||||
development and runtime packages.
|
|
||||||
|
|
||||||
Typically the build scripts will auto-detect CUDA, however, if your Linux distro
|
|
||||||
or installation approach uses unusual paths, you can specify the location by
|
|
||||||
specifying an environment variable `CUDA_LIB_DIR` to the location of the shared
|
|
||||||
libraries, and `CUDACXX` to the location of the nvcc compiler. You can customize
|
|
||||||
a set of target CUDA architectures by setting `CMAKE_CUDA_ARCHITECTURES` (e.g. "50;60;70")
|
|
||||||
|
|
||||||
Then generate dependencies: (Adjust the job count based on your number of processors for a faster build)
|
|
||||||
|
|
||||||
```
|
|
||||||
make -C llama -j 5
|
|
||||||
```
|
|
||||||
|
|
||||||
Then build the binary:
|
|
||||||
|
|
||||||
```
|
|
||||||
go build .
|
|
||||||
```
|
|
||||||
|
|
||||||
#### Linux ROCm (AMD)
|
|
||||||
|
|
||||||
_Your operating system distribution may already have packages for AMD ROCm and CLBlast. Distro packages are often preferable, but instructions are distro-specific. Please consult distro-specific docs for dependencies if available!_
|
|
||||||
|
|
||||||
Install [CLBlast](https://github.com/CNugteren/CLBlast/blob/master/doc/installation.md) and [ROCm](https://rocm.docs.amd.com/en/latest/) development packages first, as well as `make`, `gcc`, and `golang`.
|
|
||||||
|
|
||||||
Typically the build scripts will auto-detect ROCm, however, if your Linux distro
|
|
||||||
or installation approach uses unusual paths, you can specify the location by
|
|
||||||
specifying an environment variable `ROCM_PATH` to the location of the ROCm
|
|
||||||
install (typically `/opt/rocm`), and `CLBlast_DIR` to the location of the
|
|
||||||
CLBlast install (typically `/usr/lib/cmake/CLBlast`). You can also customize
|
|
||||||
the AMD GPU targets by setting AMDGPU_TARGETS (e.g. `AMDGPU_TARGETS="gfx1101;gfx1102"`)
|
|
||||||
|
|
||||||
Then generate dependencies: (Adjust the job count based on your number of processors for a faster build)
|
|
||||||
|
|
||||||
```
|
|
||||||
make -C llama -j 5
|
|
||||||
```
|
|
||||||
|
|
||||||
Then build the binary:
|
|
||||||
|
|
||||||
```
|
|
||||||
go build .
|
|
||||||
```
|
|
||||||
|
|
||||||
ROCm requires elevated privileges to access the GPU at runtime. On most distros you can add your user account to the `render` group, or run as root.
|
|
||||||
|
|
||||||
#### Advanced CPU Settings
|
|
||||||
|
|
||||||
By default, running `make` will compile a few different variations
|
|
||||||
of the LLM library based on common CPU families and vector math capabilities,
|
|
||||||
including a lowest-common-denominator which should run on almost any 64 bit CPU
|
|
||||||
somewhat slowly. At runtime, Ollama will auto-detect the optimal variation to
|
|
||||||
load.
|
|
||||||
|
|
||||||
Custom CPU settings are not currently supported in the new Go server build but will be added back after we complete the transition.
|
|
||||||
|
|
||||||
#### Containerized Linux Build
|
|
||||||
|
|
||||||
If you have Docker available, you can build linux binaries with `OLLAMA_NEW_RUNNERS=1 ./scripts/build_linux.sh` which has the CUDA and ROCm dependencies included. The resulting binary is placed in `./dist`
|
|
||||||
|
|
||||||
### Windows
|
|
||||||
|
|
||||||
The following tools are required as a minimal development environment to build CPU inference support.
|
|
||||||
|
|
||||||
- Go version 1.22 or higher
|
|
||||||
- https://go.dev/dl/
|
|
||||||
- Git
|
|
||||||
- https://git-scm.com/download/win
|
|
||||||
- GCC and Make. There are multiple options on how to go about installing these tools on Windows. We have verified the following, but others may work as well:
|
|
||||||
- [MSYS2](https://www.msys2.org/)
|
|
||||||
- After installing, from an MSYS2 terminal, run `pacman -S mingw-w64-ucrt-x86_64-gcc make` to install the required tools
|
|
||||||
- Assuming you used the default install prefix for msys2 above, add `c:\msys64\ucrt64\bin` and `c:\msys64\usr\bin` to your environment variable `PATH` where you will perform the build steps below (e.g. system-wide, account-level, powershell, cmd, etc.)
|
|
||||||
|
|
||||||
Then, build the `ollama` binary:
|
|
||||||
|
|
||||||
```powershell
|
|
||||||
$env:CGO_ENABLED="1"
|
|
||||||
make -C llama -j 8
|
|
||||||
go build .
|
|
||||||
```
|
|
||||||
|
|
||||||
#### GPU Support
|
|
||||||
|
|
||||||
The GPU tools require the Microsoft native build tools. To build either CUDA or ROCm, you must first install MSVC via Visual Studio:
|
|
||||||
|
|
||||||
- Make sure to select `Desktop development with C++` as a Workload during the Visual Studio install
|
|
||||||
- You must complete the Visual Studio install and run it once **BEFORE** installing CUDA or ROCm for the tools to properly register
|
|
||||||
- Add the location of the **64 bit (x64)** compiler (`cl.exe`) to your `PATH`
|
|
||||||
- Note: the default Developer Shell may configure the 32 bit (x86) compiler which will lead to build failures. Ollama requires a 64 bit toolchain.
|
|
||||||
|
|
||||||
#### Windows CUDA (NVIDIA)
|
|
||||||
|
|
||||||
In addition to the common Windows development tools and MSVC described above:
|
|
||||||
|
|
||||||
- [NVIDIA CUDA](https://docs.nvidia.com/cuda/cuda-installation-guide-microsoft-windows/index.html)
|
|
||||||
|
|
||||||
#### Windows ROCm (AMD Radeon)
|
|
||||||
|
|
||||||
In addition to the common Windows development tools and MSVC described above:
|
|
||||||
|
|
||||||
- [AMD HIP](https://www.amd.com/en/developer/resources/rocm-hub/hip-sdk.html)
|
|
||||||
|
|
||||||
#### Windows arm64
|
|
||||||
|
|
||||||
The default `Developer PowerShell for VS 2022` may default to x86 which is not what you want. To ensure you get an arm64 development environment, start a plain PowerShell terminal and run:
|
|
||||||
|
|
||||||
```powershell
|
|
||||||
import-module 'C:\\Program Files\\Microsoft Visual Studio\\2022\\Community\\Common7\\Tools\\Microsoft.VisualStudio.DevShell.dll'
|
|
||||||
Enter-VsDevShell -Arch arm64 -vsinstallpath 'C:\\Program Files\\Microsoft Visual Studio\\2022\\Community' -skipautomaticlocation
|
|
||||||
```
|
|
||||||
|
|
||||||
You can confirm with `write-host $env:VSCMD_ARG_TGT_ARCH`
|
|
||||||
|
|
||||||
Follow the instructions at https://www.msys2.org/wiki/arm64/ to set up an arm64 msys2 environment. Ollama requires gcc and mingw32-make to compile, which is not currently available on Windows arm64, but a gcc compatibility adapter is available via `mingw-w64-clang-aarch64-gcc-compat`. At a minimum you will need to install the following:
|
|
||||||
|
|
||||||
```
|
|
||||||
pacman -S mingw-w64-clang-aarch64-clang mingw-w64-clang-aarch64-gcc-compat mingw-w64-clang-aarch64-make make
|
|
||||||
```
|
|
||||||
|
|
||||||
You will need to ensure your PATH includes go, cmake, gcc and clang mingw32-make to build ollama from source. (typically `C:\msys64\clangarm64\bin\`)
|
|
||||||
|
@@ -2,7 +2,7 @@
|
|||||||
|
|
||||||
### CPU only
|
### CPU only
|
||||||
|
|
||||||
```bash
|
```shell
|
||||||
docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
|
docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
|
||||||
```
|
```
|
||||||
|
|
||||||
@@ -11,50 +11,57 @@ Install the [NVIDIA Container Toolkit](https://docs.nvidia.com/datacenter/cloud-
|
|||||||
|
|
||||||
#### Install with Apt
|
#### Install with Apt
|
||||||
1. Configure the repository
|
1. Configure the repository
|
||||||
```bash
|
|
||||||
curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey \
|
```shell
|
||||||
| sudo gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg
|
curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey \
|
||||||
curl -s -L https://nvidia.github.io/libnvidia-container/stable/deb/nvidia-container-toolkit.list \
|
| sudo gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg
|
||||||
| sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' \
|
curl -s -L https://nvidia.github.io/libnvidia-container/stable/deb/nvidia-container-toolkit.list \
|
||||||
| sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list
|
| sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' \
|
||||||
sudo apt-get update
|
| sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list
|
||||||
```
|
sudo apt-get update
|
||||||
|
```
|
||||||
|
|
||||||
2. Install the NVIDIA Container Toolkit packages
|
2. Install the NVIDIA Container Toolkit packages
|
||||||
```bash
|
|
||||||
sudo apt-get install -y nvidia-container-toolkit
|
```shell
|
||||||
```
|
sudo apt-get install -y nvidia-container-toolkit
|
||||||
|
```
|
||||||
|
|
||||||
#### Install with Yum or Dnf
|
#### Install with Yum or Dnf
|
||||||
1. Configure the repository
|
1. Configure the repository
|
||||||
|
|
||||||
```bash
|
```shell
|
||||||
curl -s -L https://nvidia.github.io/libnvidia-container/stable/rpm/nvidia-container-toolkit.repo \
|
curl -s -L https://nvidia.github.io/libnvidia-container/stable/rpm/nvidia-container-toolkit.repo \
|
||||||
| sudo tee /etc/yum.repos.d/nvidia-container-toolkit.repo
|
| sudo tee /etc/yum.repos.d/nvidia-container-toolkit.repo
|
||||||
```
|
```
|
||||||
|
|
||||||
2. Install the NVIDIA Container Toolkit packages
|
2. Install the NVIDIA Container Toolkit packages
|
||||||
|
|
||||||
```bash
|
```shell
|
||||||
sudo yum install -y nvidia-container-toolkit
|
sudo yum install -y nvidia-container-toolkit
|
||||||
```
|
```
|
||||||
|
|
||||||
#### Configure Docker to use Nvidia driver
|
#### Configure Docker to use Nvidia driver
|
||||||
```
|
|
||||||
|
```shell
|
||||||
sudo nvidia-ctk runtime configure --runtime=docker
|
sudo nvidia-ctk runtime configure --runtime=docker
|
||||||
sudo systemctl restart docker
|
sudo systemctl restart docker
|
||||||
```
|
```
|
||||||
|
|
||||||
#### Start the container
|
#### Start the container
|
||||||
|
|
||||||
```bash
|
```shell
|
||||||
docker run -d --gpus=all -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
|
docker run -d --gpus=all -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
|
||||||
```
|
```
|
||||||
|
|
||||||
|
> [!NOTE]
|
||||||
|
> If you're running on an NVIDIA JetPack system, Ollama can't automatically discover the correct JetPack version. Pass the environment variable JETSON_JETPACK=5 or JETSON_JETPACK=6 to the container to select version 5 or 6.
|
||||||
|
|
||||||
### AMD GPU
|
### AMD GPU
|
||||||
|
|
||||||
To run Ollama using Docker with AMD GPUs, use the `rocm` tag and the following command:
|
To run Ollama using Docker with AMD GPUs, use the `rocm` tag and the following command:
|
||||||
|
|
||||||
```
|
```shell
|
||||||
docker run -d --device /dev/kfd --device /dev/dri -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama:rocm
|
docker run -d --device /dev/kfd --device /dev/dri -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama:rocm
|
||||||
```
|
```
|
||||||
|
|
||||||
@@ -62,7 +69,7 @@ docker run -d --device /dev/kfd --device /dev/dri -v ollama:/root/.ollama -p 114
|
|||||||
|
|
||||||
Now you can run a model:
|
Now you can run a model:
|
||||||
|
|
||||||
```
|
```shell
|
||||||
docker exec -it ollama ollama run llama3.2
|
docker exec -it ollama ollama run llama3.2
|
||||||
```
|
```
|
||||||
|
|
||||||
|
20 docs/examples.md Normal file
@@ -0,0 +1,20 @@
|
|||||||
|
# Examples

This directory contains different examples of using Ollama.

## Python examples

Ollama Python examples at [ollama-python/examples](https://github.com/ollama/ollama-python/tree/main/examples)

## JavaScript examples

Ollama JavaScript examples at [ollama-js/examples](https://github.com/ollama/ollama-js/tree/main/examples)

## OpenAI compatibility examples

Ollama OpenAI compatibility examples at [ollama/examples/openai](../docs/openai.md)

## Community examples

- [LangChain Ollama Python](https://python.langchain.com/docs/integrations/chat/ollama/)
- [LangChain Ollama JS](https://js.langchain.com/docs/integrations/chat/ollama/)
59 docs/faq.md
@@ -20,11 +20,11 @@ Please refer to the [GPU docs](./gpu.md).
## How can I specify the context window size?

By default, Ollama uses a context window size of 2048 tokens. This can be overridden with the `OLLAMA_CONTEXT_LENGTH` environment variable. For example, to set the default context length to 8K, use: `OLLAMA_CONTEXT_LENGTH=8192 ollama serve`.

To change this when using `ollama run`, use `/set parameter`:

```shell
/set parameter num_ctx 4096
```
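When calling the API directly, the context window can also be set per request through the `options` field; a minimal sketch (the model name and value are illustrative):

```shell
# Request a 4096-token context window for a single /api/generate call
curl http://localhost:11434/api/generate -d '{
  "model": "llama3.2",
  "prompt": "Why is the sky blue?",
  "options": {"num_ctx": 4096}
}'
```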
@@ -46,10 +46,15 @@ Use the `ollama ps` command to see what models are currently loaded into memory.

```shell
ollama ps
```

> **Output**:
>
> ```
> NAME          ID              SIZE     PROCESSOR    UNTIL
> llama3:70b    bcfb190ca3a7    42 GB    100% GPU     4 minutes from now
> ```

The `Processor` column will show which memory the model was loaded into:
* `100% GPU` means the model was loaded entirely into the GPU
* `100% CPU` means the model was loaded entirely in system memory
@@ -66,7 +71,7 @@ If Ollama is run as a macOS application, environment variables should be set usi
1. For each environment variable, call `launchctl setenv`.

```bash
launchctl setenv OLLAMA_HOST "0.0.0.0:11434"
```

2. Restart Ollama application.
@@ -81,14 +86,14 @@ If Ollama is run as a systemd service, environment variables should be set using

```ini
[Service]
Environment="OLLAMA_HOST=0.0.0.0:11434"
```

3. Save and exit.

4. Reload `systemd` and restart Ollama:

```shell
systemctl daemon-reload
systemctl restart ollama
```

@@ -151,7 +156,7 @@ Refer to the section [above](#how-do-i-configure-ollama-server) for how to set e
Ollama runs an HTTP server and can be exposed using a proxy server such as Nginx. To do so, configure the proxy to forward requests and optionally set required headers (if not exposing Ollama on the network). For example, with Nginx:

```nginx
server {
    listen 80;
    server_name example.com;  # Replace with your domain or IP

@@ -182,6 +187,13 @@ cloudflared tunnel --url http://localhost:11434 --http-host-header="localhost:11
Ollama allows cross-origin requests from `127.0.0.1` and `0.0.0.0` by default. Additional origins can be configured with `OLLAMA_ORIGINS`.

For browser extensions, you'll need to explicitly allow the extension's origin pattern. Set `OLLAMA_ORIGINS` to include `chrome-extension://*`, `moz-extension://*`, and `safari-web-extension://*` if you wish to allow all browser extensions access, or specific extensions as needed:

```
# Allow all Chrome, Firefox, and Safari extensions
OLLAMA_ORIGINS=chrome-extension://*,moz-extension://*,safari-web-extension://* ollama serve
```

Refer to the section [above](#how-do-i-configure-ollama-server) for how to set environment variables on your platform.

## Where are models stored?

@@ -221,16 +233,19 @@ properties.
If you are using the API you can preload a model by sending the Ollama server an empty request. This works with both the `/api/generate` and `/api/chat` API endpoints.

To preload the mistral model using the generate endpoint, use:

```shell
curl http://localhost:11434/api/generate -d '{"model": "mistral"}'
```

To use the chat completions endpoint, use:

```shell
curl http://localhost:11434/api/chat -d '{"model": "mistral"}'
```

To preload a model using the CLI, use the command:

```shell
ollama run llama3.2 ""
```

@@ -250,11 +265,13 @@ If you're using the API, use the `keep_alive` parameter with the `/api/generate`
* '0' which will unload the model immediately after generating a response

For example, to preload a model and leave it in memory use:

```shell
curl http://localhost:11434/api/generate -d '{"model": "llama3.2", "keep_alive": -1}'
```

To unload the model and free up memory use:

```shell
curl http://localhost:11434/api/generate -d '{"model": "llama3.2", "keep_alive": 0}'
```

@@ -285,4 +302,28 @@ Note: Windows with Radeon GPUs currently default to 1 model maximum due to limit
## How does Ollama load models on multiple GPUs?

When loading a new model, Ollama evaluates the required VRAM for the model against what is currently available. If the model will fit entirely on any single GPU, Ollama will load the model on that GPU. This typically provides the best performance as it reduces the amount of data transferring across the PCI bus during inference. If the model does not fit entirely on one GPU, then it will be spread across all the available GPUs.

## How can I enable Flash Attention?

Flash Attention is a feature of most modern models that can significantly reduce memory usage as the context size grows. To enable Flash Attention, set the `OLLAMA_FLASH_ATTENTION` environment variable to `1` when starting the Ollama server.

## How can I set the quantization type for the K/V cache?

The K/V context cache can be quantized to significantly reduce memory usage when Flash Attention is enabled.

To use quantized K/V cache with Ollama you can set the following environment variable:

- `OLLAMA_KV_CACHE_TYPE` - The quantization type for the K/V cache. Default is `f16`.

> Note: Currently this is a global option - meaning all models will run with the specified quantization type.

The currently available K/V cache quantization types are:

- `f16` - high precision and memory usage (default).
- `q8_0` - 8-bit quantization, uses approximately 1/2 the memory of `f16` with a very small loss in precision; this usually has no noticeable impact on the model's quality (recommended if not using f16).
- `q4_0` - 4-bit quantization, uses approximately 1/4 the memory of `f16` with a small-medium loss in precision that may be more noticeable at higher context sizes.

How much the cache quantization impacts the model's response quality will depend on the model and the task. Models that have a high GQA count (e.g. Qwen2) may see a larger impact on precision from quantization than models with a low GQA count.

You may need to experiment with different quantization types to find the best balance between memory usage and quality.
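Putting the two settings together, a minimal sketch of starting the server with Flash Attention enabled and an 8-bit K/V cache (the values are examples, not recommendations):

```shell
# Enable Flash Attention and quantize the K/V cache to q8_0 for this server instance
OLLAMA_FLASH_ATTENTION=1 OLLAMA_KV_CACHE_TYPE=q8_0 ollama serve
```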
14 docs/gpu.md
@@ -7,7 +7,7 @@ Check your compute compatibility to see if your card is supported:
| Compute Capability | Family              | Cards |
| ------------------ | ------------------- | ----------------------------------------------------------------------------------------------------------- |
| 9.0                | NVIDIA              | `H200` `H100` |
| 8.9                | GeForce RTX 40xx    | `RTX 4090` `RTX 4080 SUPER` `RTX 4080` `RTX 4070 Ti SUPER` `RTX 4070 Ti` `RTX 4070 SUPER` `RTX 4070` `RTX 4060 Ti` `RTX 4060` |
|                    | NVIDIA Professional | `L4` `L40` `RTX 6000` |
| 8.6                | GeForce RTX 30xx    | `RTX 3090 Ti` `RTX 3090` `RTX 3080 Ti` `RTX 3080` `RTX 3070 Ti` `RTX 3070` `RTX 3060 Ti` `RTX 3060` `RTX 3050 Ti` `RTX 3050` |

@@ -28,6 +28,7 @@ Check your compute compatibility to see if your card is supported:
| 5.0                | GeForce GTX         | `GTX 750 Ti` `GTX 750` `NVS 810` |
|                    | Quadro              | `K2200` `K1200` `K620` `M1200` `M520` `M5000M` `M4000M` `M3000M` `M2000M` `M1000M` `K620M` `M600M` `M500M` |

For building locally to support older GPUs, see [developer.md](./development.md#linux-cuda-nvidia)

### GPU Selection

@@ -37,7 +38,7 @@ Numeric IDs may be used, however ordering may vary, so UUIDs are more reliable.
You can discover the UUID of your GPUs by running `nvidia-smi -L`. If you want to
ignore the GPUs and force CPU usage, use an invalid GPU ID (e.g., "-1").
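For example, a minimal sketch of pinning the server to a single NVIDIA GPU by UUID (the variable name `CUDA_VISIBLE_DEVICES` and the UUID are assumptions for illustration; this hunk does not show them):

```shell
# Pin Ollama to one GPU by UUID (placeholder UUID from `nvidia-smi -L`)
CUDA_VISIBLE_DEVICES=GPU-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx ollama serve

# Or force CPU-only inference with an invalid GPU ID
CUDA_VISIBLE_DEVICES=-1 ollama serve
```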
### Linux Suspend Resume

On linux, after a suspend/resume cycle, sometimes Ollama will fail to discover
your NVIDIA GPU and fall back to running on the CPU. You can work around this
@@ -74,6 +75,10 @@ would set `HSA_OVERRIDE_GFX_VERSION="10.3.0"` as an environment variable for the
server. If you have an unsupported AMD GPU you can experiment using the list of
supported types below.

If you have multiple GPUs with different GFX versions, append the numeric device
number to the environment variable to set them individually. For example,
`HSA_OVERRIDE_GFX_VERSION_0=10.3.0` and `HSA_OVERRIDE_GFX_VERSION_1=11.0.0`
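For instance, a minimal sketch of overriding two GPUs individually when starting the server (the GFX versions are placeholders for whatever your cards report):

```shell
# Hypothetical mixed system: first GPU treated as gfx1030, second as gfx1100
HSA_OVERRIDE_GFX_VERSION_0=10.3.0 HSA_OVERRIDE_GFX_VERSION_1=11.0.0 ollama serve
```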
At this time, the known supported GPU types on linux are the following LLVM Targets.
This table shows some example GPUs that map to these LLVM targets:

| **LLVM Target** | **An Example GPU** |

@@ -99,9 +104,10 @@ Reach out on [Discord](https://discord.gg/ollama) or file an
### GPU Selection

If you have multiple AMD GPUs in your system and want to limit Ollama to use a
subset, you can set `ROCR_VISIBLE_DEVICES` to a comma separated list of GPUs.
You can see the list of devices with `rocminfo`. If you want to ignore the GPUs
and force CPU usage, use an invalid GPU ID (e.g., "-1"). When available, use the
`Uuid` to uniquely identify the device instead of the numeric value.
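For example, a minimal sketch limiting the server to the first two ROCm devices (the IDs are placeholders; check `rocminfo` for the real ones):

```shell
# Restrict Ollama to ROCm devices 0 and 1
ROCR_VISIBLE_DEVICES=0,1 ollama serve
```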
### Container Permission
@@ -20,19 +20,19 @@ Make sure that you use the same base model in the `FROM` command as you used to
Now run `ollama create` from the directory where the `Modelfile` was created:

```shell
ollama create my-model
```

Lastly, test the model:

```shell
ollama run my-model
```

Ollama supports importing adapters based on several different model architectures including:

* Llama (including Llama 2, Llama 3, Llama 3.1, and Llama 3.2);
* Mistral (including Mistral 1, Mistral 2, and Mixtral); and
* Gemma (including Gemma 1 and Gemma 2)

@@ -67,14 +67,12 @@ ollama run my-model
Ollama supports importing models for several different architectures including:

* Llama (including Llama 2, Llama 3, Llama 3.1, and Llama 3.2);
* Mistral (including Mistral 1, Mistral 2, and Mixtral);
* Gemma (including Gemma 1 and Gemma 2); and
* Phi3

This includes importing foundation models as well as any fine tuned models which have been _fused_ with a foundation model.

## Importing a GGUF based model or adapter

If you have a GGUF based model or adapter it is possible to import it into Ollama. You can obtain a GGUF model or adapter by:

@@ -83,7 +81,7 @@ If you have a GGUF based model or adapter it is possible to import it into Ollam
* converting a Safetensors adapter with the `convert_lora_to_gguf.py` from Llama.cpp; or
* downloading a model or adapter from a place such as HuggingFace

To import a GGUF model, create a `Modelfile` containing:

```dockerfile
FROM /path/to/file.gguf

@@ -10,6 +10,9 @@ curl -fsSL https://ollama.com/install.sh | sh
## Manual install

> [!NOTE]
> If you are upgrading from a prior version, you should remove the old libraries with `sudo rm -rf /usr/lib/ollama` first.

Download and extract the package:

```shell

@@ -72,7 +75,7 @@ RestartSec=3
Environment="PATH=$PATH"
|
Environment="PATH=$PATH"
|
||||||
|
|
||||||
[Install]
|
[Install]
|
||||||
WantedBy=default.target
|
WantedBy=multi-user.target
|
||||||
```
|
```
|
||||||
|
|
||||||
Then start the service:
|
Then start the service:
|
||||||
@@ -112,6 +115,21 @@ sudo systemctl status ollama
|
|||||||
> https://www.amd.com/en/support/linux-drivers for best support of your Radeon
> GPU.

## Customizing

To customize the installation of Ollama, you can edit the systemd service file or the environment variables by running:

```shell
sudo systemctl edit ollama
```

Alternatively, create an override file manually in `/etc/systemd/system/ollama.service.d/override.conf`:

```ini
[Service]
Environment="OLLAMA_DEBUG=1"
```
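Either way, the override only takes effect after systemd reloads the unit; a sketch using the same commands shown in the FAQ above:

```shell
# Apply the override by reloading systemd and restarting the service
sudo systemctl daemon-reload
sudo systemctl restart ollama
```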
## Updating

Update Ollama by running the install script again:
@@ -129,12 +147,12 @@ sudo tar -C /usr -xzf ollama-linux-amd64.tgz
## Installing specific versions

Use the `OLLAMA_VERSION` environment variable with the install script to install a specific version of Ollama, including pre-releases. You can find the version numbers in the [releases page](https://github.com/ollama/ollama/releases).

For example:

```shell
curl -fsSL https://ollama.com/install.sh | OLLAMA_VERSION=0.5.7 sh
```

## Viewing logs

@@ -168,3 +186,9 @@ sudo rm -r /usr/share/ollama
sudo userdel ollama
sudo groupdel ollama
```

Remove installed libraries:

```shell
sudo rm -rf /usr/local/lib/ollama
```

@@ -28,7 +28,7 @@ A model file is the blueprint to create and share models with Ollama.
The format of the `Modelfile`:

```
# comment
INSTRUCTION arguments
```

@@ -49,7 +49,7 @@ INSTRUCTION arguments
An example of a `Modelfile` creating a mario blueprint:

```
FROM llama3.2
# sets the temperature to 1 [higher is more creative, lower is more coherent]
PARAMETER temperature 1

@@ -63,32 +63,36 @@ SYSTEM You are Mario from super mario bros, acting as an assistant.
To use this:

1. Save it as a file (e.g. `Modelfile`)
2. `ollama create choose-a-model-name -f <location of the file e.g. ./Modelfile>`
3. `ollama run choose-a-model-name`
4. Start using the model!

To view the Modelfile of a given model, use the `ollama show --modelfile` command.

```shell
ollama show --modelfile llama3.2
```

> **Output**:
>
> ```
> # Modelfile generated by "ollama show"
> # To build a new Modelfile based on this one, replace the FROM line with:
> # FROM llama3.2:latest
> FROM /Users/pdevine/.ollama/models/blobs/sha256-00e1317cbf74d901080d7100f57580ba8dd8de57203072dc6f668324ba545f29
> TEMPLATE """{{ if .System }}<|start_header_id|>system<|end_header_id|>
>
> {{ .System }}<|eot_id|>{{ end }}{{ if .Prompt }}<|start_header_id|>user<|end_header_id|>
>
> {{ .Prompt }}<|eot_id|>{{ end }}<|start_header_id|>assistant<|end_header_id|>
>
> {{ .Response }}<|eot_id|>"""
> PARAMETER stop "<|start_header_id|>"
> PARAMETER stop "<|end_header_id|>"
> PARAMETER stop "<|eot_id|>"
> PARAMETER stop "<|reserved_special_token"
> ```

## Instructions

@@ -96,13 +100,13 @@ To view the Modelfile of a given model, use the `ollama show --modelfile` comman
The `FROM` instruction defines the base model to use when creating a model.

```
FROM <model name>:<tag>
```

#### Build from existing model

```
FROM llama3.2
```

@@ -113,21 +117,21 @@ Additional models can be found at:
#### Build from a Safetensors model

```
FROM <model directory>
```

The model directory should contain the Safetensors weights for a supported architecture.

Currently supported model architectures:
* Llama (including Llama 2, Llama 3, Llama 3.1, and Llama 3.2)
* Mistral (including Mistral 1, Mistral 2, and Mixtral)
* Gemma (including Gemma 1 and Gemma 2)
* Phi3

#### Build from a GGUF file

```
FROM ./ollama-model.gguf
```

@@ -138,7 +142,7 @@ The GGUF file location should be specified as an absolute path or relative to th
The `PARAMETER` instruction defines a parameter that can be set when the model is run.

```
PARAMETER <parameter> <parametervalue>
```

@@ -155,8 +159,7 @@ PARAMETER <parameter> <parametervalue>
| temperature | The temperature of the model. Increasing the temperature will make the model answer more creatively. (Default: 0.8) | float | temperature 0.7 |
| seed | Sets the random number seed to use for generation. Setting this to a specific number will make the model generate the same text for the same prompt. (Default: 0) | int | seed 42 |
| stop | Sets the stop sequences to use. When this pattern is encountered the LLM will stop generating text and return. Multiple stop patterns may be set by specifying multiple separate `stop` parameters in a modelfile. | string | stop "AI assistant:" |
| num_predict | Maximum number of tokens to predict when generating text. (Default: -1, infinite generation) | int | num_predict 42 |
| top_k | Reduces the probability of generating nonsense. A higher value (e.g. 100) will give more diverse answers, while a lower value (e.g. 10) will be more conservative. (Default: 40) | int | top_k 40 |
| top_p | Works together with top-k. A higher value (e.g., 0.95) will lead to more diverse text, while a lower value (e.g., 0.5) will generate more focused and conservative text. (Default: 0.9) | float | top_p 0.9 |
| min_p | Alternative to the top_p, and aims to ensure a balance of quality and variety. The parameter *p* represents the minimum probability for a token to be considered, relative to the probability of the most likely token. For example, with *p*=0.05 and the most likely token having a probability of 0.9, logits with a value less than 0.045 are filtered out. (Default: 0.0) | float | min_p 0.05 |
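The same parameters can also be supplied per request through the REST API's `options` field rather than baked into a `Modelfile`; a minimal sketch (model name and values are illustrative):

```shell
# Override sampling parameters for a single /api/generate request
curl http://localhost:11434/api/generate -d '{
  "model": "llama3.2",
  "prompt": "Hello",
  "options": {"temperature": 0.7, "top_p": 0.9, "num_predict": 42}
}'
```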
@@ -186,7 +189,7 @@ TEMPLATE """{{ if .System }}<|im_start|>system
The `SYSTEM` instruction specifies the system message to be used in the template, if applicable.

```
SYSTEM """<system message>"""
```

@@ -196,7 +199,7 @@ The `ADAPTER` instruction specifies a fine tuned LoRA adapter that should apply
#### Safetensor adapter

```
ADAPTER <path to safetensor adapter>
```

@@ -207,7 +210,7 @@ Currently supported Safetensor adapters:
#### GGUF adapter

```
ADAPTER ./ollama-lora.gguf
```

@@ -215,7 +218,7 @@ ADAPTER ./ollama-lora.gguf
The `LICENSE` instruction allows you to specify the legal license under which the model used with this Modelfile is shared or distributed.

```
LICENSE """
<license text>
"""

@@ -225,7 +228,7 @@ LICENSE """
The `MESSAGE` instruction allows you to specify a message history for the model to use when responding. Use multiple iterations of the MESSAGE command to build up a conversation which will guide the model to answer in a similar way.

```
MESSAGE <role> <message>
```

@@ -240,7 +243,7 @@ MESSAGE <role> <message>
#### Example conversation

```
MESSAGE user Is Toronto in Canada?
MESSAGE assistant yes
MESSAGE user Is Sacramento in Canada?

@@ -1,6 +1,7 @@
# OpenAI compatibility

> [!NOTE]
> OpenAI compatibility is experimental and is subject to major adjustments including breaking changes. For fully-featured access to the Ollama API, see the Ollama [Python library](https://github.com/ollama/ollama-python), [JavaScript library](https://github.com/ollama/ollama-js) and [REST API](https://github.com/ollama/ollama/blob/main/docs/api.md).

Ollama provides experimental compatibility with parts of the [OpenAI API](https://platform.openai.com/docs/api-reference) to help connect existing applications to Ollama.
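As a quick smoke test of the compatibility layer (a sketch, assuming the server is running on the default local port), the OpenAI-style model listing endpoint can be exercised with plain `curl`:

```shell
# List available models through the OpenAI-compatible endpoint
curl http://localhost:11434/v1/models
```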
@@ -37,7 +38,7 @@ response = client.chat.completions.create(
{"type": "text", "text": "What's in this image?"},
{
"type": "image_url",
"image_url": "iVBORw0KGgoAAAANSUhEUgAAAG0AAABmCAYAAADBPx+VAAAACXBIWXMAAAsTAAALEwEAmpwYAAAAAXNSR0IArs4c6QAAAARnQU1BAACxjwv8YQUAAA3VSURBVHgB7Z27r0zdG8fX743i1bi1ikMoFMQloXRpKFFIqI7LH4BEQ+NWIkjQuSWCRIEoULk0gsK1kCBI0IhrQVT7tz/7zZo888yz1r7MnDl7z5xvsjkzs2fP3uu71nNfa7lkAsm7d++Sffv2JbNmzUqcc8m0adOSzZs3Z+/XES4ZckAWJEGWPiCxjsQNLWmQsWjRIpMseaxcuTKpG/7HP27I8P79e7dq1ars/yL4/v27S0ejqwv+cUOGEGGpKHR37tzJCEpHV9tnT58+dXXCJDdECBE2Ojrqjh071hpNECjx4cMHVycM1Uhbv359B2F79+51586daxN/+pyRkRFXKyRDAqxEp4yMlDDzXG1NPnnyJKkThoK0VFd1ELZu3TrzXKxKfW7dMBQ6bcuWLW2v0VlHjx41z717927ba22U9APcw7Nnz1oGEPeL3m3p2mTAYYnFmMOMXybPPXv2bNIPpFZr1NHn4HMw0KRBjg9NuRw95s8PEcz/6DZELQd/09C9QGq5RsmSRybqkwHGjh07OsJSsYYm3ijPpyHzoiacg35MLdDSIS/O1yM778jOTwYUkKNHWUzUWaOsylE00MyI0fcnOwIdjvtNdW/HZwNLGg+sR1kMepSNJXmIwxBZiG8tDTpEZzKg0GItNsosY8USkxDhD0Rinuiko2gfL/RbiD2LZAjU9zKQJj8RDR0vJBR1/Phx9+PHj9Z7REF4nTZkxzX4LCXHrV271qXkBAPGfP/atWvu/PnzHe4C97F48eIsRLZ9+3a3f/9+87dwP1JxaF7/3r17ba+5l4EcaVo0lj3SBq5kGTJSQmLWMjgYNei2GPT1MuMqGTDEFHzeQSP2wi/jGnkmPJ/nhccs44jvDAxpVcxnq0F6eT8h4ni/iIWpR5lPyA6ETkNXoSukvpJAD3AsXLiwpZs49+fPn5ke4j10TqYvegSfn0OnafC+Tv9ooA/JPkgQysqQNBzagXY55nO/oa1F7qvIPWkRL12WRpMWUvpVDYmxAPehxWSe8ZEXL20sadYIozfmNch4QJPAfeJgW3rNsnzphBKNJM2KKODo1rVOMRYik5ETy3ix4qWNI81qAAirizgMIc+yhTytx0JWZuNI03qsrgWlGtwjoS9XwgUhWGyhUaRZZQNNIEwCiXD16tXcAHUs79co0vSD8rrJCIW98pzvxpAWyyo3HYwqS0+H0BjStClcZJT5coMm6D2LOF8TolGJtK9fvyZpyiC5ePFi9nc/oJU4eiEP0jVoAnHa9wyJycITMP78+eMeP37sXrx44d6+fdt6f82aNdkx1pg9e3Zb5W+RSRE+n+VjksQWifvVaTKFhn5O8my63K8Qabdv33b379/PiAP//vuvW7BggZszZ072/+TJk91YgkafPn166zXB1rQHFvouAWHq9z3SEevSUerqCn2/dDCeta2jxYbr69evk4MHDyY7d+7MjhMnTiTPnz9Pfv/+nfQT2ggpO2dMF8cghuoM7Ygj5iWCqRlGFml0QC/ftGmTmzt3rmsaKDsgBSPh0/8yPeLLBihLkOKJc0jp8H8vUzcxIA1k6QJ/c78tWEyj5P3o4u9+jywNPdJi5rAH9x0KHcl4Hg570eQp3+vHXGyrmEeigzQsQsjavXt38ujRo44LQuDDhw+TW7duRS1HGgMxhNXHgflaNTOsHyKvHK5Ijo2jbFjJBQK9YwFd6RVMzfgRBmEfP37suBBm/p49e1qjEP2mwTViNRo0VJWH1deMXcNK08uUjVUu7s/zRaL+oLNxz1bpANco4npUgX4G2eFbpDFyQoQxojBCpEGSytmOH8qrH5Q9vuzD6ofQylkCUmh8DBAr+q8JCyVNtWQIidKQE9wNtLSQnS4jDSsxNHogzFuQBw4cyM61UKVsjfr3ooBkPSqqQHesUPWVtzi9/vQi1T+rJj7WiTz4Pt/l3LxUkr5P2VYZaZ4URpsE+st/dujQoaBBYokbrz/8TJNQYLSonrPS9kUaSkPeZyj1AWSj+d+VBoy1pIWVNed8P0Ll/ee5HdGRhrHhR5GGN0r4LGZBaj8oFDJitBTJzIZgFcmU0Y8ytWMZMzJOaXUSrUs5RxKnrxmbb5YXO9VGUhtpXldhEUogFr3IzIsvlpmdosVcGVGXFWp2oU9kLFL3dEkSz6NHEY1sjSRdIuDFWEhd8KxFqsRi1uM/nz9/zpxnwlESONdg6dKlbsaMGS4EHFHtjFIDHwKOo46l4TxSuxgDzi+rE2jg+BaFruOX4HXa0Nnf1lwAPufZeF8/r6zD97WK2qFnGjBxTw5qNGPxT+5T/r7/7RawFC3j4vTp09koCxkeHjqbHJqArmH5UrFKKksnxrK7FuRIs8STfBZv+luugXZ2pR/pP9Ois4z+TiMzUUkUjD0iEi1fzX8GmXyuxUBRcaUfykV0YZnlJGKQpOiGB76x5GeWkWWJc3mOrK6S7xdND+W5N6XyaRgtWJFe13GkaZnKOsYqGdOVVVbGupsyA/l7emTLHi7vwTdirNEt0qxnzAvBFcnQF16xh/TMpUuXHDowhlA9vQVraQhkudRdzOnK+04ZSP3DUhVSP61YsaLtd/ks7ZgtPcXqPqEafHkdqa84X6aCeL7YWlv6edGFHb+ZFICPlljHhg0bKuk0CSvVznWsotRu433alNdFrqG45ejoaPCaUkWERpLXjzFL2Rpllp7PJU2a/v7Ab8N05/9t27Z16KUqoFGsxnI9EosS2niSYg9SpU6B4JgTrvVW1flt1sT+0ADIJU2maXzcUTraGCRaL1Wp9rUMk16PMom8QhruxzvZIegJjFU7LLCePfS8uaQdPny4jTTL0dbee5mYokQsXTIWNY46kuMbnt8Kmec+LGWtOVIl9cT1rCB0V8WqkjAsRwta93TbwNYoGKsUSChN44lgBNCoHLHzquYKrU6qZ8lolCIN0Rh6cP0Q3U6I6IXILYOQI513hJaSKAorFpuHXJNfVlpRtmYBk1Su1obZr5dnKAO+L10Hrj3WZW+E3qh6IszE37F6EB+68mGpvKm4eb9bFrlzrok7fvr0Kfv727dvWRmdVTJHw0qiiCUSZ6wCK+7XL/AcsgNyL74DQQ730sv78Su7+t/A36MdY0sW5o40ahslXr58aZ5HtZB8GH64m9EmMZ7FpYw4T6QnrZfgenrhFxaSiSGXtPnz57e9TkNZLvTjeqhr734CNtrK41L40sUQckmj1lGKQ0rC37x544r8eNXRpnVE3ZZY7zXo8NomiO0ZUCj2uHz58rbXoZ6gc0uA+F6ZeKS/jhRDUq8MKrTho9fEkihMmhxtBI1DxKFY9XLpVcSkfoi8JGnToZO5sU5aiDQIW716ddt7ZLYtMQlhECdBGXZZMWldY5BHm5xgAroWj4C0hbYkSc/jBmggIrXJWlZM6pSETsEPGqZOndr2uuuR5
rF169a2HoHPdurUKZM4CO1WTPqaDaAd+GFGKdIQkxAn9RuEWcTRyN2KSUgiSgF5aWzPTeA/lN5rZubMmR2bE4SIC4nJoltgAV/dVefZm72AtctUCJU2CMJ327hxY9t7EHbkyJFseq+EJSY16RPo3Dkq1kkr7+q0bNmyDuLQcZBEPYmHVdOBiJyIlrRDq41YPWfXOxUysi5fvtyaj+2BpcnsUV/oSoEMOk2CQGlr4ckhBwaetBhjCwH0ZHtJROPJkyc7UjcYLDjmrH7ADTEBXFfOYmB0k9oYBOjJ8b4aOYSe7QkKcYhFlq3QYLQhSidNmtS2RATwy8YOM3EQJsUjKiaWZ+vZToUQgzhkHXudb/PW5YMHD9yZM2faPsMwoc7RciYJXbGuBqJ1UIGKKLv915jsvgtJxCZDubdXr165mzdvtr1Hz5LONA8jrUwKPqsmVesKa49S3Q4WxmRPUEYdTjgiUcfUwLx589ySJUva3oMkP6IYddq6HMS4o55xBJBUeRjzfa4Zdeg56QZ43LhxoyPo7Lf1kNt7oO8wWAbNwaYjIv5lhyS7kRf96dvm5Jah8vfvX3flyhX35cuX6HfzFHOToS1H4BenCaHvO8pr8iDuwoUL7tevX+b5ZdbBair0xkFIlFDlW4ZknEClsp/TzXyAKVOmmHWFVSbDNw1l1+4f90U6IY/q4V27dpnE9bJ+v87QEydjqx/UamVVPRG+mwkNTYN+9tjkwzEx+atCm/X9WvWtDtAb68Wy9LXa1UmvCDDIpPkyOQ5ZwSzJ4jMrvFcr0rSjOUh+GcT4LSg5ugkW1Io0/SCDQBojh0hPlaJdah+tkVYrnTZowP8iq1F1TgMBBauufyB33x1v+NWFYmT5KmppgHC+NkAgbmRkpD3yn9QIseXymoTQFGQmIOKTxiZIWpvAatenVqRVXf2nTrAWMsPnKrMZHz6bJq5jvce6QK8J1cQNgKxlJapMPdZSR64/UivS9NztpkVEdKcrs5alhhWP9NeqlfWopzhZScI6QxseegZRGeg5a8C3Re1Mfl1ScP36ddcUaMuv24iOJtz7sbUjTS4qBvKmstYJoUauiuD3k5qhyr7QdUHMeCgLa1Ear9NquemdXgmum4fvJ6w1lqsuDhNrg1qSpleJK7K3TF0Q2jSd94uSZ60kK1e3qyVpQK6PVWXp2/FC3mp6jBhKKOiY2h3gtUV64TWM6wDETRPLDfSakXmH3w8g9Jlug8ZtTt4kVF0kLUYYmCCtD/DrQ5YhMGbA9L3ucdjh0y8kOHW5gU/VEEmJTcL4Pz/f7mgoAbYkAAAAAElFTkSuQmCC",
|
"image_url": "data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAG0AAABmCAYAAADBPx+VAAAACXBIWXMAAAsTAAALEwEAmpwYAAAAAXNSR0IArs4c6QAAAARnQU1BAACxjwv8YQUAAA3VSURBVHgB7Z27r0zdG8fX743i1bi1ikMoFMQloXRpKFFIqI7LH4BEQ+NWIkjQuSWCRIEoULk0gsK1kCBI0IhrQVT7tz/7zZo888yz1r7MnDl7z5xvsjkzs2fP3uu71nNfa7lkAsm7d++Sffv2JbNmzUqcc8m0adOSzZs3Z+/XES4ZckAWJEGWPiCxjsQNLWmQsWjRIpMseaxcuTKpG/7HP27I8P79e7dq1ars/yL4/v27S0ejqwv+cUOGEGGpKHR37tzJCEpHV9tnT58+dXXCJDdECBE2Ojrqjh071hpNECjx4cMHVycM1Uhbv359B2F79+51586daxN/+pyRkRFXKyRDAqxEp4yMlDDzXG1NPnnyJKkThoK0VFd1ELZu3TrzXKxKfW7dMBQ6bcuWLW2v0VlHjx41z717927ba22U9APcw7Nnz1oGEPeL3m3p2mTAYYnFmMOMXybPPXv2bNIPpFZr1NHn4HMw0KRBjg9NuRw95s8PEcz/6DZELQd/09C9QGq5RsmSRybqkwHGjh07OsJSsYYm3ijPpyHzoiacg35MLdDSIS/O1yM778jOTwYUkKNHWUzUWaOsylE00MyI0fcnOwIdjvtNdW/HZwNLGg+sR1kMepSNJXmIwxBZiG8tDTpEZzKg0GItNsosY8USkxDhD0Rinuiko2gfL/RbiD2LZAjU9zKQJj8RDR0vJBR1/Phx9+PHj9Z7REF4nTZkxzX4LCXHrV271qXkBAPGfP/atWvu/PnzHe4C97F48eIsRLZ9+3a3f/9+87dwP1JxaF7/3r17ba+5l4EcaVo0lj3SBq5kGTJSQmLWMjgYNei2GPT1MuMqGTDEFHzeQSP2wi/jGnkmPJ/nhccs44jvDAxpVcxnq0F6eT8h4ni/iIWpR5lPyA6ETkNXoSukvpJAD3AsXLiwpZs49+fPn5ke4j10TqYvegSfn0OnafC+Tv9ooA/JPkgQysqQNBzagXY55nO/oa1F7qvIPWkRL12WRpMWUvpVDYmxAPehxWSe8ZEXL20sadYIozfmNch4QJPAfeJgW3rNsnzphBKNJM2KKODo1rVOMRYik5ETy3ix4qWNI81qAAirizgMIc+yhTytx0JWZuNI03qsrgWlGtwjoS9XwgUhWGyhUaRZZQNNIEwCiXD16tXcAHUs79co0vSD8rrJCIW98pzvxpAWyyo3HYwqS0+H0BjStClcZJT5coMm6D2LOF8TolGJtK9fvyZpyiC5ePFi9nc/oJU4eiEP0jVoAnHa9wyJycITMP78+eMeP37sXrx44d6+fdt6f82aNdkx1pg9e3Zb5W+RSRE+n+VjksQWifvVaTKFhn5O8my63K8Qabdv33b379/PiAP//vuvW7BggZszZ072/+TJk91YgkafPn166zXB1rQHFvouAWHq9z3SEevSUerqCn2/dDCeta2jxYbr69evk4MHDyY7d+7MjhMnTiTPnz9Pfv/+nfQT2ggpO2dMF8cghuoM7Ygj5iWCqRlGFml0QC/ftGmTmzt3rmsaKDsgBSPh0/8yPeLLBihLkOKJc0jp8H8vUzcxIA1k6QJ/c78tWEyj5P3o4u9+jywNPdJi5rAH9x0KHcl4Hg570eQp3+vHXGyrmEeigzQsQsjavXt38ujRo44LQuDDhw+TW7duRS1HGgMxhNXHgflaNTOsHyKvHK5Ijo2jbFjJBQK9YwFd6RVMzfgRBmEfP37suBBm/p49e1qjEP2mwTViNRo0VJWH1deMXcNK08uUjVUu7s/zRaL+oLNxz1bpANco4npUgX4G2eFbpDFyQoQxojBCpEGSytmOH8qrH5Q9vuzD6ofQylkCUmh8DBAr+q8JCyVNtWQIidKQE9wNtLSQnS4jDSsxNHogzFuQBw4cyM61UKVsjfr3ooBkPSqqQHesUPWVtzi9/vQi1T+rJj7WiTz4Pt/l3LxUkr5P2VYZaZ4URpsE+st/dujQoaBBYokbrz/8TJNQYLSonrPS9kUaSkPeZyj1AWSj+d+VBoy1pIWVNed8P0Ll/ee5HdGRhrHhR5GGN0r4LGZBaj8oFDJitBTJzIZgFcmU0Y8ytWMZMzJOaXUSrUs5RxKnrxmbb5YXO9VGUhtpXldhEUogFr3IzIsvlpmdosVcGVGXFWp2oU9kLFL3dEkSz6NHEY1sjSRdIuDFWEhd8KxFqsRi1uM/nz9/zpxnwlESONdg6dKlbsaMGS4EHFHtjFIDHwKOo46l4TxSuxgDzi+rE2jg+BaFruOX4HXa0Nnf1lwAPufZeF8/r6zD97WK2qFnGjBxTw5qNGPxT+5T/r7/7RawFC3j4vTp09koCxkeHjqbHJqArmH5UrFKKksnxrK7FuRIs8STfBZv+luugXZ2pR/pP9Ois4z+TiMzUUkUjD0iEi1fzX8GmXyuxUBRcaUfykV0YZnlJGKQpOiGB76x5GeWkWWJc3mOrK6S7xdND+W5N6XyaRgtWJFe13GkaZnKOsYqGdOVVVbGupsyA/l7emTLHi7vwTdirNEt0qxnzAvBFcnQF16xh/TMpUuXHDowhlA9vQVraQhkudRdzOnK+04ZSP3DUhVSP61YsaLtd/ks7ZgtPcXqPqEafHkdqa84X6aCeL7YWlv6edGFHb+ZFICPlljHhg0bKuk0CSvVznWsotRu433alNdFrqG45ejoaPCaUkWERpLXjzFL2Rpllp7PJU2a/v7Ab8N05/9t27Z16KUqoFGsxnI9EosS2niSYg9SpU6B4JgTrvVW1flt1sT+0ADIJU2maXzcUTraGCRaL1Wp9rUMk16PMom8QhruxzvZIegJjFU7LLCePfS8uaQdPny4jTTL0dbee5mYokQsXTIWNY46kuMbnt8Kmec+LGWtOVIl9cT1rCB0V8WqkjAsRwta93TbwNYoGKsUSChN44lgBNCoHLHzquYKrU6qZ8lolCIN0Rh6cP0Q3U6I6IXILYOQI513hJaSKAorFpuHXJNfVlpRtmYBk1Su1obZr5dnKAO+L10Hrj3WZW+E3qh6IszE37F6EB+68mGpvKm4eb9bFrlzrok7fvr0Kfv727dvWRmdVTJHw0qiiCUSZ6wCK+7XL/AcsgNyL74DQQ730sv78Su7+t/A36MdY0sW5o40ahslXr58aZ5HtZB8GH64m9EmMZ7FpYw4T6QnrZfgenrhFxaSiSGXtPnz57e9TkNZLvTjeqhr734CNtrK41L40sUQckmj1lGKQ0rC37x544r8eNXRpnVE3ZZY7zXo8NomiO0ZUCj2uHz58rbXoZ6gc0uA+F6ZeKS/jhRDUq8MKrTho9fEkihMmhxtBI1DxKFY9XLpVcSkfoi8JGnToZO5sU5aiDQIW716ddt7ZLYtMQlhECdBGXZZMWldY5BHm5xgAroWj4C0hbYkSc/jBmggIrXJWlZ
M6pSETsEPGqZOndr2uuuR5rF169a2HoHPdurUKZM4CO1WTPqaDaAd+GFGKdIQkxAn9RuEWcTRyN2KSUgiSgF5aWzPTeA/lN5rZubMmR2bE4SIC4nJoltgAV/dVefZm72AtctUCJU2CMJ327hxY9t7EHbkyJFseq+EJSY16RPo3Dkq1kkr7+q0bNmyDuLQcZBEPYmHVdOBiJyIlrRDq41YPWfXOxUysi5fvtyaj+2BpcnsUV/oSoEMOk2CQGlr4ckhBwaetBhjCwH0ZHtJROPJkyc7UjcYLDjmrH7ADTEBXFfOYmB0k9oYBOjJ8b4aOYSe7QkKcYhFlq3QYLQhSidNmtS2RATwy8YOM3EQJsUjKiaWZ+vZToUQgzhkHXudb/PW5YMHD9yZM2faPsMwoc7RciYJXbGuBqJ1UIGKKLv915jsvgtJxCZDubdXr165mzdvtr1Hz5LONA8jrUwKPqsmVesKa49S3Q4WxmRPUEYdTjgiUcfUwLx589ySJUva3oMkP6IYddq6HMS4o55xBJBUeRjzfa4Zdeg56QZ43LhxoyPo7Lf1kNt7oO8wWAbNwaYjIv5lhyS7kRf96dvm5Jah8vfvX3flyhX35cuX6HfzFHOToS1H4BenCaHvO8pr8iDuwoUL7tevX+b5ZdbBair0xkFIlFDlW4ZknEClsp/TzXyAKVOmmHWFVSbDNw1l1+4f90U6IY/q4V27dpnE9bJ+v87QEydjqx/UamVVPRG+mwkNTYN+9tjkwzEx+atCm/X9WvWtDtAb68Wy9LXa1UmvCDDIpPkyOQ5ZwSzJ4jMrvFcr0rSjOUh+GcT4LSg5ugkW1Io0/SCDQBojh0hPlaJdah+tkVYrnTZowP8iq1F1TgMBBauufyB33x1v+NWFYmT5KmppgHC+NkAgbmRkpD3yn9QIseXymoTQFGQmIOKTxiZIWpvAatenVqRVXf2nTrAWMsPnKrMZHz6bJq5jvce6QK8J1cQNgKxlJapMPdZSR64/UivS9NztpkVEdKcrs5alhhWP9NeqlfWopzhZScI6QxseegZRGeg5a8C3Re1Mfl1ScP36ddcUaMuv24iOJtz7sbUjTS4qBvKmstYJoUauiuD3k5qhyr7QdUHMeCgLa1Ear9NquemdXgmum4fvJ6w1lqsuDhNrg1qSpleJK7K3TF0Q2jSd94uSZ60kK1e3qyVpQK6PVWXp2/FC3mp6jBhKKOiY2h3gtUV64TWM6wDETRPLDfSakXmH3w8g9Jlug8ZtTt4kVF0kLUYYmCCtD/DrQ5YhMGbA9L3ucdjh0y8kOHW5gU/VEEmJTcL4Pz/f7mgoAbYkAAAAAElFTkSuQmCC",
|
},
],
}

@@ -60,6 +61,42 @@ embeddings = client.embeddings.create(
)
```

#### Structured outputs

```python
from pydantic import BaseModel
from openai import OpenAI

client = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")

# Define the schema for the response
class FriendInfo(BaseModel):
    name: str
    age: int
    is_available: bool

class FriendList(BaseModel):
    friends: list[FriendInfo]

try:
    completion = client.beta.chat.completions.parse(
        temperature=0,
        model="llama3.1:8b",
        messages=[
            {"role": "user", "content": "I have two friends. The first is Ollama 22 years old busy saving the world, and the second is Alonso 23 years old and wants to hang out. Return a list of friends in JSON format"}
        ],
        response_format=FriendList,
    )

    friends_response = completion.choices[0].message
    if friends_response.parsed:
        print(friends_response.parsed)
    elif friends_response.refusal:
        print(friends_response.refusal)
except Exception as e:
    print(f"Error: {e}")
```

### OpenAI JavaScript library

```javascript

@@ -86,7 +123,7 @@ const response = await openai.chat.completions.create({
{ type: "text", text: "What's in this image?" },
|
{ type: "text", text: "What's in this image?" },
|
||||||
{
|
{
|
||||||
type: "image_url",
|
type: "image_url",
|
||||||
image_url: "iVBORw0KGgoAAAANSUhEUgAAAG0AAABmCAYAAADBPx+VAAAACXBIWXMAAAsTAAALEwEAmpwYAAAAAXNSR0IArs4c6QAAAARnQU1BAACxjwv8YQUAAA3VSURBVHgB7Z27r0zdG8fX743i1bi1ikMoFMQloXRpKFFIqI7LH4BEQ+NWIkjQuSWCRIEoULk0gsK1kCBI0IhrQVT7tz/7zZo888yz1r7MnDl7z5xvsjkzs2fP3uu71nNfa7lkAsm7d++Sffv2JbNmzUqcc8m0adOSzZs3Z+/XES4ZckAWJEGWPiCxjsQNLWmQsWjRIpMseaxcuTKpG/7HP27I8P79e7dq1ars/yL4/v27S0ejqwv+cUOGEGGpKHR37tzJCEpHV9tnT58+dXXCJDdECBE2Ojrqjh071hpNECjx4cMHVycM1Uhbv359B2F79+51586daxN/+pyRkRFXKyRDAqxEp4yMlDDzXG1NPnnyJKkThoK0VFd1ELZu3TrzXKxKfW7dMBQ6bcuWLW2v0VlHjx41z717927ba22U9APcw7Nnz1oGEPeL3m3p2mTAYYnFmMOMXybPPXv2bNIPpFZr1NHn4HMw0KRBjg9NuRw95s8PEcz/6DZELQd/09C9QGq5RsmSRybqkwHGjh07OsJSsYYm3ijPpyHzoiacg35MLdDSIS/O1yM778jOTwYUkKNHWUzUWaOsylE00MyI0fcnOwIdjvtNdW/HZwNLGg+sR1kMepSNJXmIwxBZiG8tDTpEZzKg0GItNsosY8USkxDhD0Rinuiko2gfL/RbiD2LZAjU9zKQJj8RDR0vJBR1/Phx9+PHj9Z7REF4nTZkxzX4LCXHrV271qXkBAPGfP/atWvu/PnzHe4C97F48eIsRLZ9+3a3f/9+87dwP1JxaF7/3r17ba+5l4EcaVo0lj3SBq5kGTJSQmLWMjgYNei2GPT1MuMqGTDEFHzeQSP2wi/jGnkmPJ/nhccs44jvDAxpVcxnq0F6eT8h4ni/iIWpR5lPyA6ETkNXoSukvpJAD3AsXLiwpZs49+fPn5ke4j10TqYvegSfn0OnafC+Tv9ooA/JPkgQysqQNBzagXY55nO/oa1F7qvIPWkRL12WRpMWUvpVDYmxAPehxWSe8ZEXL20sadYIozfmNch4QJPAfeJgW3rNsnzphBKNJM2KKODo1rVOMRYik5ETy3ix4qWNI81qAAirizgMIc+yhTytx0JWZuNI03qsrgWlGtwjoS9XwgUhWGyhUaRZZQNNIEwCiXD16tXcAHUs79co0vSD8rrJCIW98pzvxpAWyyo3HYwqS0+H0BjStClcZJT5coMm6D2LOF8TolGJtK9fvyZpyiC5ePFi9nc/oJU4eiEP0jVoAnHa9wyJycITMP78+eMeP37sXrx44d6+fdt6f82aNdkx1pg9e3Zb5W+RSRE+n+VjksQWifvVaTKFhn5O8my63K8Qabdv33b379/PiAP//vuvW7BggZszZ072/+TJk91YgkafPn166zXB1rQHFvouAWHq9z3SEevSUerqCn2/dDCeta2jxYbr69evk4MHDyY7d+7MjhMnTiTPnz9Pfv/+nfQT2ggpO2dMF8cghuoM7Ygj5iWCqRlGFml0QC/ftGmTmzt3rmsaKDsgBSPh0/8yPeLLBihLkOKJc0jp8H8vUzcxIA1k6QJ/c78tWEyj5P3o4u9+jywNPdJi5rAH9x0KHcl4Hg570eQp3+vHXGyrmEeigzQsQsjavXt38ujRo44LQuDDhw+TW7duRS1HGgMxhNXHgflaNTOsHyKvHK5Ijo2jbFjJBQK9YwFd6RVMzfgRBmEfP37suBBm/p49e1qjEP2mwTViNRo0VJWH1deMXcNK08uUjVUu7s/zRaL+oLNxz1bpANco4npUgX4G2eFbpDFyQoQxojBCpEGSytmOH8qrH5Q9vuzD6ofQylkCUmh8DBAr+q8JCyVNtWQIidKQE9wNtLSQnS4jDSsxNHogzFuQBw4cyM61UKVsjfr3ooBkPSqqQHesUPWVtzi9/vQi1T+rJj7WiTz4Pt/l3LxUkr5P2VYZaZ4URpsE+st/dujQoaBBYokbrz/8TJNQYLSonrPS9kUaSkPeZyj1AWSj+d+VBoy1pIWVNed8P0Ll/ee5HdGRhrHhR5GGN0r4LGZBaj8oFDJitBTJzIZgFcmU0Y8ytWMZMzJOaXUSrUs5RxKnrxmbb5YXO9VGUhtpXldhEUogFr3IzIsvlpmdosVcGVGXFWp2oU9kLFL3dEkSz6NHEY1sjSRdIuDFWEhd8KxFqsRi1uM/nz9/zpxnwlESONdg6dKlbsaMGS4EHFHtjFIDHwKOo46l4TxSuxgDzi+rE2jg+BaFruOX4HXa0Nnf1lwAPufZeF8/r6zD97WK2qFnGjBxTw5qNGPxT+5T/r7/7RawFC3j4vTp09koCxkeHjqbHJqArmH5UrFKKksnxrK7FuRIs8STfBZv+luugXZ2pR/pP9Ois4z+TiMzUUkUjD0iEi1fzX8GmXyuxUBRcaUfykV0YZnlJGKQpOiGB76x5GeWkWWJc3mOrK6S7xdND+W5N6XyaRgtWJFe13GkaZnKOsYqGdOVVVbGupsyA/l7emTLHi7vwTdirNEt0qxnzAvBFcnQF16xh/TMpUuXHDowhlA9vQVraQhkudRdzOnK+04ZSP3DUhVSP61YsaLtd/ks7ZgtPcXqPqEafHkdqa84X6aCeL7YWlv6edGFHb+ZFICPlljHhg0bKuk0CSvVznWsotRu433alNdFrqG45ejoaPCaUkWERpLXjzFL2Rpllp7PJU2a/v7Ab8N05/9t27Z16KUqoFGsxnI9EosS2niSYg9SpU6B4JgTrvVW1flt1sT+0ADIJU2maXzcUTraGCRaL1Wp9rUMk16PMom8QhruxzvZIegJjFU7LLCePfS8uaQdPny4jTTL0dbee5mYokQsXTIWNY46kuMbnt8Kmec+LGWtOVIl9cT1rCB0V8WqkjAsRwta93TbwNYoGKsUSChN44lgBNCoHLHzquYKrU6qZ8lolCIN0Rh6cP0Q3U6I6IXILYOQI513hJaSKAorFpuHXJNfVlpRtmYBk1Su1obZr5dnKAO+L10Hrj3WZW+E3qh6IszE37F6EB+68mGpvKm4eb9bFrlzrok7fvr0Kfv727dvWRmdVTJHw0qiiCUSZ6wCK+7XL/AcsgNyL74DQQ730sv78Su7+t/A36MdY0sW5o40ahslXr58aZ5HtZB8GH64m9EmMZ7FpYw4T6QnrZfgenrhFxaSiSGXtPnz57e9TkNZLvTjeqhr734CNtrK41L40sUQckmj1lGKQ0rC37x544r8eNXRpnVE3ZZY7zXo8NomiO0ZUCj2uHz58rbXoZ6gc0uA+F6ZeKS/jhRDUq8MKrTho9fEkihMmhxtBI1DxKFY9XLpVcSkfoi8JGnToZO5sU5aiDQIW716ddt7ZLYtMQlhECdBGXZZMWldY5BHm5xgAroWj4C0hbYkSc/jBmggIrXJWlZM6pSETsEPGqZOndr2uuuR5rF
169a2HoHPdurUKZM4CO1WTPqaDaAd+GFGKdIQkxAn9RuEWcTRyN2KSUgiSgF5aWzPTeA/lN5rZubMmR2bE4SIC4nJoltgAV/dVefZm72AtctUCJU2CMJ327hxY9t7EHbkyJFseq+EJSY16RPo3Dkq1kkr7+q0bNmyDuLQcZBEPYmHVdOBiJyIlrRDq41YPWfXOxUysi5fvtyaj+2BpcnsUV/oSoEMOk2CQGlr4ckhBwaetBhjCwH0ZHtJROPJkyc7UjcYLDjmrH7ADTEBXFfOYmB0k9oYBOjJ8b4aOYSe7QkKcYhFlq3QYLQhSidNmtS2RATwy8YOM3EQJsUjKiaWZ+vZToUQgzhkHXudb/PW5YMHD9yZM2faPsMwoc7RciYJXbGuBqJ1UIGKKLv915jsvgtJxCZDubdXr165mzdvtr1Hz5LONA8jrUwKPqsmVesKa49S3Q4WxmRPUEYdTjgiUcfUwLx589ySJUva3oMkP6IYddq6HMS4o55xBJBUeRjzfa4Zdeg56QZ43LhxoyPo7Lf1kNt7oO8wWAbNwaYjIv5lhyS7kRf96dvm5Jah8vfvX3flyhX35cuX6HfzFHOToS1H4BenCaHvO8pr8iDuwoUL7tevX+b5ZdbBair0xkFIlFDlW4ZknEClsp/TzXyAKVOmmHWFVSbDNw1l1+4f90U6IY/q4V27dpnE9bJ+v87QEydjqx/UamVVPRG+mwkNTYN+9tjkwzEx+atCm/X9WvWtDtAb68Wy9LXa1UmvCDDIpPkyOQ5ZwSzJ4jMrvFcr0rSjOUh+GcT4LSg5ugkW1Io0/SCDQBojh0hPlaJdah+tkVYrnTZowP8iq1F1TgMBBauufyB33x1v+NWFYmT5KmppgHC+NkAgbmRkpD3yn9QIseXymoTQFGQmIOKTxiZIWpvAatenVqRVXf2nTrAWMsPnKrMZHz6bJq5jvce6QK8J1cQNgKxlJapMPdZSR64/UivS9NztpkVEdKcrs5alhhWP9NeqlfWopzhZScI6QxseegZRGeg5a8C3Re1Mfl1ScP36ddcUaMuv24iOJtz7sbUjTS4qBvKmstYJoUauiuD3k5qhyr7QdUHMeCgLa1Ear9NquemdXgmum4fvJ6w1lqsuDhNrg1qSpleJK7K3TF0Q2jSd94uSZ60kK1e3qyVpQK6PVWXp2/FC3mp6jBhKKOiY2h3gtUV64TWM6wDETRPLDfSakXmH3w8g9Jlug8ZtTt4kVF0kLUYYmCCtD/DrQ5YhMGbA9L3ucdjh0y8kOHW5gU/VEEmJTcL4Pz/f7mgoAbYkAAAAAElFTkSuQmCC",
|
image_url: "data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAG0AAABmCAYAAADBPx+VAAAACXBIWXMAAAsTAAALEwEAmpwYAAAAAXNSR0IArs4c6QAAAARnQU1BAACxjwv8YQUAAA3VSURBVHgB7Z27r0zdG8fX743i1bi1ikMoFMQloXRpKFFIqI7LH4BEQ+NWIkjQuSWCRIEoULk0gsK1kCBI0IhrQVT7tz/7zZo888yz1r7MnDl7z5xvsjkzs2fP3uu71nNfa7lkAsm7d++Sffv2JbNmzUqcc8m0adOSzZs3Z+/XES4ZckAWJEGWPiCxjsQNLWmQsWjRIpMseaxcuTKpG/7HP27I8P79e7dq1ars/yL4/v27S0ejqwv+cUOGEGGpKHR37tzJCEpHV9tnT58+dXXCJDdECBE2Ojrqjh071hpNECjx4cMHVycM1Uhbv359B2F79+51586daxN/+pyRkRFXKyRDAqxEp4yMlDDzXG1NPnnyJKkThoK0VFd1ELZu3TrzXKxKfW7dMBQ6bcuWLW2v0VlHjx41z717927ba22U9APcw7Nnz1oGEPeL3m3p2mTAYYnFmMOMXybPPXv2bNIPpFZr1NHn4HMw0KRBjg9NuRw95s8PEcz/6DZELQd/09C9QGq5RsmSRybqkwHGjh07OsJSsYYm3ijPpyHzoiacg35MLdDSIS/O1yM778jOTwYUkKNHWUzUWaOsylE00MyI0fcnOwIdjvtNdW/HZwNLGg+sR1kMepSNJXmIwxBZiG8tDTpEZzKg0GItNsosY8USkxDhD0Rinuiko2gfL/RbiD2LZAjU9zKQJj8RDR0vJBR1/Phx9+PHj9Z7REF4nTZkxzX4LCXHrV271qXkBAPGfP/atWvu/PnzHe4C97F48eIsRLZ9+3a3f/9+87dwP1JxaF7/3r17ba+5l4EcaVo0lj3SBq5kGTJSQmLWMjgYNei2GPT1MuMqGTDEFHzeQSP2wi/jGnkmPJ/nhccs44jvDAxpVcxnq0F6eT8h4ni/iIWpR5lPyA6ETkNXoSukvpJAD3AsXLiwpZs49+fPn5ke4j10TqYvegSfn0OnafC+Tv9ooA/JPkgQysqQNBzagXY55nO/oa1F7qvIPWkRL12WRpMWUvpVDYmxAPehxWSe8ZEXL20sadYIozfmNch4QJPAfeJgW3rNsnzphBKNJM2KKODo1rVOMRYik5ETy3ix4qWNI81qAAirizgMIc+yhTytx0JWZuNI03qsrgWlGtwjoS9XwgUhWGyhUaRZZQNNIEwCiXD16tXcAHUs79co0vSD8rrJCIW98pzvxpAWyyo3HYwqS0+H0BjStClcZJT5coMm6D2LOF8TolGJtK9fvyZpyiC5ePFi9nc/oJU4eiEP0jVoAnHa9wyJycITMP78+eMeP37sXrx44d6+fdt6f82aNdkx1pg9e3Zb5W+RSRE+n+VjksQWifvVaTKFhn5O8my63K8Qabdv33b379/PiAP//vuvW7BggZszZ072/+TJk91YgkafPn166zXB1rQHFvouAWHq9z3SEevSUerqCn2/dDCeta2jxYbr69evk4MHDyY7d+7MjhMnTiTPnz9Pfv/+nfQT2ggpO2dMF8cghuoM7Ygj5iWCqRlGFml0QC/ftGmTmzt3rmsaKDsgBSPh0/8yPeLLBihLkOKJc0jp8H8vUzcxIA1k6QJ/c78tWEyj5P3o4u9+jywNPdJi5rAH9x0KHcl4Hg570eQp3+vHXGyrmEeigzQsQsjavXt38ujRo44LQuDDhw+TW7duRS1HGgMxhNXHgflaNTOsHyKvHK5Ijo2jbFjJBQK9YwFd6RVMzfgRBmEfP37suBBm/p49e1qjEP2mwTViNRo0VJWH1deMXcNK08uUjVUu7s/zRaL+oLNxz1bpANco4npUgX4G2eFbpDFyQoQxojBCpEGSytmOH8qrH5Q9vuzD6ofQylkCUmh8DBAr+q8JCyVNtWQIidKQE9wNtLSQnS4jDSsxNHogzFuQBw4cyM61UKVsjfr3ooBkPSqqQHesUPWVtzi9/vQi1T+rJj7WiTz4Pt/l3LxUkr5P2VYZaZ4URpsE+st/dujQoaBBYokbrz/8TJNQYLSonrPS9kUaSkPeZyj1AWSj+d+VBoy1pIWVNed8P0Ll/ee5HdGRhrHhR5GGN0r4LGZBaj8oFDJitBTJzIZgFcmU0Y8ytWMZMzJOaXUSrUs5RxKnrxmbb5YXO9VGUhtpXldhEUogFr3IzIsvlpmdosVcGVGXFWp2oU9kLFL3dEkSz6NHEY1sjSRdIuDFWEhd8KxFqsRi1uM/nz9/zpxnwlESONdg6dKlbsaMGS4EHFHtjFIDHwKOo46l4TxSuxgDzi+rE2jg+BaFruOX4HXa0Nnf1lwAPufZeF8/r6zD97WK2qFnGjBxTw5qNGPxT+5T/r7/7RawFC3j4vTp09koCxkeHjqbHJqArmH5UrFKKksnxrK7FuRIs8STfBZv+luugXZ2pR/pP9Ois4z+TiMzUUkUjD0iEi1fzX8GmXyuxUBRcaUfykV0YZnlJGKQpOiGB76x5GeWkWWJc3mOrK6S7xdND+W5N6XyaRgtWJFe13GkaZnKOsYqGdOVVVbGupsyA/l7emTLHi7vwTdirNEt0qxnzAvBFcnQF16xh/TMpUuXHDowhlA9vQVraQhkudRdzOnK+04ZSP3DUhVSP61YsaLtd/ks7ZgtPcXqPqEafHkdqa84X6aCeL7YWlv6edGFHb+ZFICPlljHhg0bKuk0CSvVznWsotRu433alNdFrqG45ejoaPCaUkWERpLXjzFL2Rpllp7PJU2a/v7Ab8N05/9t27Z16KUqoFGsxnI9EosS2niSYg9SpU6B4JgTrvVW1flt1sT+0ADIJU2maXzcUTraGCRaL1Wp9rUMk16PMom8QhruxzvZIegJjFU7LLCePfS8uaQdPny4jTTL0dbee5mYokQsXTIWNY46kuMbnt8Kmec+LGWtOVIl9cT1rCB0V8WqkjAsRwta93TbwNYoGKsUSChN44lgBNCoHLHzquYKrU6qZ8lolCIN0Rh6cP0Q3U6I6IXILYOQI513hJaSKAorFpuHXJNfVlpRtmYBk1Su1obZr5dnKAO+L10Hrj3WZW+E3qh6IszE37F6EB+68mGpvKm4eb9bFrlzrok7fvr0Kfv727dvWRmdVTJHw0qiiCUSZ6wCK+7XL/AcsgNyL74DQQ730sv78Su7+t/A36MdY0sW5o40ahslXr58aZ5HtZB8GH64m9EmMZ7FpYw4T6QnrZfgenrhFxaSiSGXtPnz57e9TkNZLvTjeqhr734CNtrK41L40sUQckmj1lGKQ0rC37x544r8eNXRpnVE3ZZY7zXo8NomiO0ZUCj2uHz58rbXoZ6gc0uA+F6ZeKS/jhRDUq8MKrTho9fEkihMmhxtBI1DxKFY9XLpVcSkfoi8JGnToZO5sU5aiDQIW716ddt7ZLYtMQlhECdBGXZZMWldY5BHm5xgAroWj4C0hbYkSc/jBmggIrXJWlZM6
pSETsEPGqZOndr2uuuR5rF169a2HoHPdurUKZM4CO1WTPqaDaAd+GFGKdIQkxAn9RuEWcTRyN2KSUgiSgF5aWzPTeA/lN5rZubMmR2bE4SIC4nJoltgAV/dVefZm72AtctUCJU2CMJ327hxY9t7EHbkyJFseq+EJSY16RPo3Dkq1kkr7+q0bNmyDuLQcZBEPYmHVdOBiJyIlrRDq41YPWfXOxUysi5fvtyaj+2BpcnsUV/oSoEMOk2CQGlr4ckhBwaetBhjCwH0ZHtJROPJkyc7UjcYLDjmrH7ADTEBXFfOYmB0k9oYBOjJ8b4aOYSe7QkKcYhFlq3QYLQhSidNmtS2RATwy8YOM3EQJsUjKiaWZ+vZToUQgzhkHXudb/PW5YMHD9yZM2faPsMwoc7RciYJXbGuBqJ1UIGKKLv915jsvgtJxCZDubdXr165mzdvtr1Hz5LONA8jrUwKPqsmVesKa49S3Q4WxmRPUEYdTjgiUcfUwLx589ySJUva3oMkP6IYddq6HMS4o55xBJBUeRjzfa4Zdeg56QZ43LhxoyPo7Lf1kNt7oO8wWAbNwaYjIv5lhyS7kRf96dvm5Jah8vfvX3flyhX35cuX6HfzFHOToS1H4BenCaHvO8pr8iDuwoUL7tevX+b5ZdbBair0xkFIlFDlW4ZknEClsp/TzXyAKVOmmHWFVSbDNw1l1+4f90U6IY/q4V27dpnE9bJ+v87QEydjqx/UamVVPRG+mwkNTYN+9tjkwzEx+atCm/X9WvWtDtAb68Wy9LXa1UmvCDDIpPkyOQ5ZwSzJ4jMrvFcr0rSjOUh+GcT4LSg5ugkW1Io0/SCDQBojh0hPlaJdah+tkVYrnTZowP8iq1F1TgMBBauufyB33x1v+NWFYmT5KmppgHC+NkAgbmRkpD3yn9QIseXymoTQFGQmIOKTxiZIWpvAatenVqRVXf2nTrAWMsPnKrMZHz6bJq5jvce6QK8J1cQNgKxlJapMPdZSR64/UivS9NztpkVEdKcrs5alhhWP9NeqlfWopzhZScI6QxseegZRGeg5a8C3Re1Mfl1ScP36ddcUaMuv24iOJtz7sbUjTS4qBvKmstYJoUauiuD3k5qhyr7QdUHMeCgLa1Ear9NquemdXgmum4fvJ6w1lqsuDhNrg1qSpleJK7K3TF0Q2jSd94uSZ60kK1e3qyVpQK6PVWXp2/FC3mp6jBhKKOiY2h3gtUV64TWM6wDETRPLDfSakXmH3w8g9Jlug8ZtTt4kVF0kLUYYmCCtD/DrQ5YhMGbA9L3ucdjh0y8kOHW5gU/VEEmJTcL4Pz/f7mgoAbYkAAAAAElFTkSuQmCC",
|
||||||
},
],
},

@@ -110,7 +147,7 @@ const embedding = await openai.embeddings.create({
### `curl`

```shell
curl http://localhost:11434/v1/chat/completions \
    -H "Content-Type: application/json" \
    -d '{

@@ -142,7 +179,7 @@ curl http://localhost:11434/v1/chat/completions \
{
"type": "image_url",
"image_url": {
"url": "iVBORw0KGgoAAAANSUhEUgAAAG0AAABmCAYAAADBPx+VAAAACXBIWXMAAAsTAAALEwEAmpwYAAAAAXNSR0IArs4c6QAAAARnQU1BAACxjwv8YQUAAA3VSURBVHgB7Z27r0zdG8fX743i1bi1ikMoFMQloXRpKFFIqI7LH4BEQ+NWIkjQuSWCRIEoULk0gsK1kCBI0IhrQVT7tz/7zZo888yz1r7MnDl7z5xvsjkzs2fP3uu71nNfa7lkAsm7d++Sffv2JbNmzUqcc8m0adOSzZs3Z+/XES4ZckAWJEGWPiCxjsQNLWmQsWjRIpMseaxcuTKpG/7HP27I8P79e7dq1ars/yL4/v27S0ejqwv+cUOGEGGpKHR37tzJCEpHV9tnT58+dXXCJDdECBE2Ojrqjh071hpNECjx4cMHVycM1Uhbv359B2F79+51586daxN/+pyRkRFXKyRDAqxEp4yMlDDzXG1NPnnyJKkThoK0VFd1ELZu3TrzXKxKfW7dMBQ6bcuWLW2v0VlHjx41z717927ba22U9APcw7Nnz1oGEPeL3m3p2mTAYYnFmMOMXybPPXv2bNIPpFZr1NHn4HMw0KRBjg9NuRw95s8PEcz/6DZELQd/09C9QGq5RsmSRybqkwHGjh07OsJSsYYm3ijPpyHzoiacg35MLdDSIS/O1yM778jOTwYUkKNHWUzUWaOsylE00MyI0fcnOwIdjvtNdW/HZwNLGg+sR1kMepSNJXmIwxBZiG8tDTpEZzKg0GItNsosY8USkxDhD0Rinuiko2gfL/RbiD2LZAjU9zKQJj8RDR0vJBR1/Phx9+PHj9Z7REF4nTZkxzX4LCXHrV271qXkBAPGfP/atWvu/PnzHe4C97F48eIsRLZ9+3a3f/9+87dwP1JxaF7/3r17ba+5l4EcaVo0lj3SBq5kGTJSQmLWMjgYNei2GPT1MuMqGTDEFHzeQSP2wi/jGnkmPJ/nhccs44jvDAxpVcxnq0F6eT8h4ni/iIWpR5lPyA6ETkNXoSukvpJAD3AsXLiwpZs49+fPn5ke4j10TqYvegSfn0OnafC+Tv9ooA/JPkgQysqQNBzagXY55nO/oa1F7qvIPWkRL12WRpMWUvpVDYmxAPehxWSe8ZEXL20sadYIozfmNch4QJPAfeJgW3rNsnzphBKNJM2KKODo1rVOMRYik5ETy3ix4qWNI81qAAirizgMIc+yhTytx0JWZuNI03qsrgWlGtwjoS9XwgUhWGyhUaRZZQNNIEwCiXD16tXcAHUs79co0vSD8rrJCIW98pzvxpAWyyo3HYwqS0+H0BjStClcZJT5coMm6D2LOF8TolGJtK9fvyZpyiC5ePFi9nc/oJU4eiEP0jVoAnHa9wyJycITMP78+eMeP37sXrx44d6+fdt6f82aNdkx1pg9e3Zb5W+RSRE+n+VjksQWifvVaTKFhn5O8my63K8Qabdv33b379/PiAP//vuvW7BggZszZ072/+TJk91YgkafPn166zXB1rQHFvouAWHq9z3SEevSUerqCn2/dDCeta2jxYbr69evk4MHDyY7d+7MjhMnTiTPnz9Pfv/+nfQT2ggpO2dMF8cghuoM7Ygj5iWCqRlGFml0QC/ftGmTmzt3rmsaKDsgBSPh0/8yPeLLBihLkOKJc0jp8H8vUzcxIA1k6QJ/c78tWEyj5P3o4u9+jywNPdJi5rAH9x0KHcl4Hg570eQp3+vHXGyrmEeigzQsQsjavXt38ujRo44LQuDDhw+TW7duRS1HGgMxhNXHgflaNTOsHyKvHK5Ijo2jbFjJBQK9YwFd6RVMzfgRBmEfP37suBBm/p49e1qjEP2mwTViNRo0VJWH1deMXcNK08uUjVUu7s/zRaL+oLNxz1bpANco4npUgX4G2eFbpDFyQoQxojBCpEGSytmOH8qrH5Q9vuzD6ofQylkCUmh8DBAr+q8JCyVNtWQIidKQE9wNtLSQnS4jDSsxNHogzFuQBw4cyM61UKVsjfr3ooBkPSqqQHesUPWVtzi9/vQi1T+rJj7WiTz4Pt/l3LxUkr5P2VYZaZ4URpsE+st/dujQoaBBYokbrz/8TJNQYLSonrPS9kUaSkPeZyj1AWSj+d+VBoy1pIWVNed8P0Ll/ee5HdGRhrHhR5GGN0r4LGZBaj8oFDJitBTJzIZgFcmU0Y8ytWMZMzJOaXUSrUs5RxKnrxmbb5YXO9VGUhtpXldhEUogFr3IzIsvlpmdosVcGVGXFWp2oU9kLFL3dEkSz6NHEY1sjSRdIuDFWEhd8KxFqsRi1uM/nz9/zpxnwlESONdg6dKlbsaMGS4EHFHtjFIDHwKOo46l4TxSuxgDzi+rE2jg+BaFruOX4HXa0Nnf1lwAPufZeF8/r6zD97WK2qFnGjBxTw5qNGPxT+5T/r7/7RawFC3j4vTp09koCxkeHjqbHJqArmH5UrFKKksnxrK7FuRIs8STfBZv+luugXZ2pR/pP9Ois4z+TiMzUUkUjD0iEi1fzX8GmXyuxUBRcaUfykV0YZnlJGKQpOiGB76x5GeWkWWJc3mOrK6S7xdND+W5N6XyaRgtWJFe13GkaZnKOsYqGdOVVVbGupsyA/l7emTLHi7vwTdirNEt0qxnzAvBFcnQF16xh/TMpUuXHDowhlA9vQVraQhkudRdzOnK+04ZSP3DUhVSP61YsaLtd/ks7ZgtPcXqPqEafHkdqa84X6aCeL7YWlv6edGFHb+ZFICPlljHhg0bKuk0CSvVznWsotRu433alNdFrqG45ejoaPCaUkWERpLXjzFL2Rpllp7PJU2a/v7Ab8N05/9t27Z16KUqoFGsxnI9EosS2niSYg9SpU6B4JgTrvVW1flt1sT+0ADIJU2maXzcUTraGCRaL1Wp9rUMk16PMom8QhruxzvZIegJjFU7LLCePfS8uaQdPny4jTTL0dbee5mYokQsXTIWNY46kuMbnt8Kmec+LGWtOVIl9cT1rCB0V8WqkjAsRwta93TbwNYoGKsUSChN44lgBNCoHLHzquYKrU6qZ8lolCIN0Rh6cP0Q3U6I6IXILYOQI513hJaSKAorFpuHXJNfVlpRtmYBk1Su1obZr5dnKAO+L10Hrj3WZW+E3qh6IszE37F6EB+68mGpvKm4eb9bFrlzrok7fvr0Kfv727dvWRmdVTJHw0qiiCUSZ6wCK+7XL/AcsgNyL74DQQ730sv78Su7+t/A36MdY0sW5o40ahslXr58aZ5HtZB8GH64m9EmMZ7FpYw4T6QnrZfgenrhFxaSiSGXtPnz57e9TkNZLvTjeqhr734CNtrK41L40sUQckmj1lGKQ0rC37x544r8eNXRpnVE3ZZY7zXo8NomiO0ZUCj2uHz58rbXoZ6gc0uA+F6ZeKS/jhRDUq8MKrTho9fEkihMmhxtBI1DxKFY9XLpVcSkfoi8JGnToZO5sU5aiDQIW716ddt7ZLYtMQlhECdBGXZZMWldY5BHm5xgAroWj4C0hbYkSc/jBmggIrXJWlZM6pSETsEPGqZOndr2uuuR5rF169a
2HoHPdurUKZM4CO1WTPqaDaAd+GFGKdIQkxAn9RuEWcTRyN2KSUgiSgF5aWzPTeA/lN5rZubMmR2bE4SIC4nJoltgAV/dVefZm72AtctUCJU2CMJ327hxY9t7EHbkyJFseq+EJSY16RPo3Dkq1kkr7+q0bNmyDuLQcZBEPYmHVdOBiJyIlrRDq41YPWfXOxUysi5fvtyaj+2BpcnsUV/oSoEMOk2CQGlr4ckhBwaetBhjCwH0ZHtJROPJkyc7UjcYLDjmrH7ADTEBXFfOYmB0k9oYBOjJ8b4aOYSe7QkKcYhFlq3QYLQhSidNmtS2RATwy8YOM3EQJsUjKiaWZ+vZToUQgzhkHXudb/PW5YMHD9yZM2faPsMwoc7RciYJXbGuBqJ1UIGKKLv915jsvgtJxCZDubdXr165mzdvtr1Hz5LONA8jrUwKPqsmVesKa49S3Q4WxmRPUEYdTjgiUcfUwLx589ySJUva3oMkP6IYddq6HMS4o55xBJBUeRjzfa4Zdeg56QZ43LhxoyPo7Lf1kNt7oO8wWAbNwaYjIv5lhyS7kRf96dvm5Jah8vfvX3flyhX35cuX6HfzFHOToS1H4BenCaHvO8pr8iDuwoUL7tevX+b5ZdbBair0xkFIlFDlW4ZknEClsp/TzXyAKVOmmHWFVSbDNw1l1+4f90U6IY/q4V27dpnE9bJ+v87QEydjqx/UamVVPRG+mwkNTYN+9tjkwzEx+atCm/X9WvWtDtAb68Wy9LXa1UmvCDDIpPkyOQ5ZwSzJ4jMrvFcr0rSjOUh+GcT4LSg5ugkW1Io0/SCDQBojh0hPlaJdah+tkVYrnTZowP8iq1F1TgMBBauufyB33x1v+NWFYmT5KmppgHC+NkAgbmRkpD3yn9QIseXymoTQFGQmIOKTxiZIWpvAatenVqRVXf2nTrAWMsPnKrMZHz6bJq5jvce6QK8J1cQNgKxlJapMPdZSR64/UivS9NztpkVEdKcrs5alhhWP9NeqlfWopzhZScI6QxseegZRGeg5a8C3Re1Mfl1ScP36ddcUaMuv24iOJtz7sbUjTS4qBvKmstYJoUauiuD3k5qhyr7QdUHMeCgLa1Ear9NquemdXgmum4fvJ6w1lqsuDhNrg1qSpleJK7K3TF0Q2jSd94uSZ60kK1e3qyVpQK6PVWXp2/FC3mp6jBhKKOiY2h3gtUV64TWM6wDETRPLDfSakXmH3w8g9Jlug8ZtTt4kVF0kLUYYmCCtD/DrQ5YhMGbA9L3ucdjh0y8kOHW5gU/VEEmJTcL4Pz/f7mgoAbYkAAAAAElFTkSuQmCC"
|
"url": "data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAG0AAABmCAYAAADBPx+VAAAACXBIWXMAAAsTAAALEwEAmpwYAAAAAXNSR0IArs4c6QAAAARnQU1BAACxjwv8YQUAAA3VSURBVHgB7Z27r0zdG8fX743i1bi1ikMoFMQloXRpKFFIqI7LH4BEQ+NWIkjQuSWCRIEoULk0gsK1kCBI0IhrQVT7tz/7zZo888yz1r7MnDl7z5xvsjkzs2fP3uu71nNfa7lkAsm7d++Sffv2JbNmzUqcc8m0adOSzZs3Z+/XES4ZckAWJEGWPiCxjsQNLWmQsWjRIpMseaxcuTKpG/7HP27I8P79e7dq1ars/yL4/v27S0ejqwv+cUOGEGGpKHR37tzJCEpHV9tnT58+dXXCJDdECBE2Ojrqjh071hpNECjx4cMHVycM1Uhbv359B2F79+51586daxN/+pyRkRFXKyRDAqxEp4yMlDDzXG1NPnnyJKkThoK0VFd1ELZu3TrzXKxKfW7dMBQ6bcuWLW2v0VlHjx41z717927ba22U9APcw7Nnz1oGEPeL3m3p2mTAYYnFmMOMXybPPXv2bNIPpFZr1NHn4HMw0KRBjg9NuRw95s8PEcz/6DZELQd/09C9QGq5RsmSRybqkwHGjh07OsJSsYYm3ijPpyHzoiacg35MLdDSIS/O1yM778jOTwYUkKNHWUzUWaOsylE00MyI0fcnOwIdjvtNdW/HZwNLGg+sR1kMepSNJXmIwxBZiG8tDTpEZzKg0GItNsosY8USkxDhD0Rinuiko2gfL/RbiD2LZAjU9zKQJj8RDR0vJBR1/Phx9+PHj9Z7REF4nTZkxzX4LCXHrV271qXkBAPGfP/atWvu/PnzHe4C97F48eIsRLZ9+3a3f/9+87dwP1JxaF7/3r17ba+5l4EcaVo0lj3SBq5kGTJSQmLWMjgYNei2GPT1MuMqGTDEFHzeQSP2wi/jGnkmPJ/nhccs44jvDAxpVcxnq0F6eT8h4ni/iIWpR5lPyA6ETkNXoSukvpJAD3AsXLiwpZs49+fPn5ke4j10TqYvegSfn0OnafC+Tv9ooA/JPkgQysqQNBzagXY55nO/oa1F7qvIPWkRL12WRpMWUvpVDYmxAPehxWSe8ZEXL20sadYIozfmNch4QJPAfeJgW3rNsnzphBKNJM2KKODo1rVOMRYik5ETy3ix4qWNI81qAAirizgMIc+yhTytx0JWZuNI03qsrgWlGtwjoS9XwgUhWGyhUaRZZQNNIEwCiXD16tXcAHUs79co0vSD8rrJCIW98pzvxpAWyyo3HYwqS0+H0BjStClcZJT5coMm6D2LOF8TolGJtK9fvyZpyiC5ePFi9nc/oJU4eiEP0jVoAnHa9wyJycITMP78+eMeP37sXrx44d6+fdt6f82aNdkx1pg9e3Zb5W+RSRE+n+VjksQWifvVaTKFhn5O8my63K8Qabdv33b379/PiAP//vuvW7BggZszZ072/+TJk91YgkafPn166zXB1rQHFvouAWHq9z3SEevSUerqCn2/dDCeta2jxYbr69evk4MHDyY7d+7MjhMnTiTPnz9Pfv/+nfQT2ggpO2dMF8cghuoM7Ygj5iWCqRlGFml0QC/ftGmTmzt3rmsaKDsgBSPh0/8yPeLLBihLkOKJc0jp8H8vUzcxIA1k6QJ/c78tWEyj5P3o4u9+jywNPdJi5rAH9x0KHcl4Hg570eQp3+vHXGyrmEeigzQsQsjavXt38ujRo44LQuDDhw+TW7duRS1HGgMxhNXHgflaNTOsHyKvHK5Ijo2jbFjJBQK9YwFd6RVMzfgRBmEfP37suBBm/p49e1qjEP2mwTViNRo0VJWH1deMXcNK08uUjVUu7s/zRaL+oLNxz1bpANco4npUgX4G2eFbpDFyQoQxojBCpEGSytmOH8qrH5Q9vuzD6ofQylkCUmh8DBAr+q8JCyVNtWQIidKQE9wNtLSQnS4jDSsxNHogzFuQBw4cyM61UKVsjfr3ooBkPSqqQHesUPWVtzi9/vQi1T+rJj7WiTz4Pt/l3LxUkr5P2VYZaZ4URpsE+st/dujQoaBBYokbrz/8TJNQYLSonrPS9kUaSkPeZyj1AWSj+d+VBoy1pIWVNed8P0Ll/ee5HdGRhrHhR5GGN0r4LGZBaj8oFDJitBTJzIZgFcmU0Y8ytWMZMzJOaXUSrUs5RxKnrxmbb5YXO9VGUhtpXldhEUogFr3IzIsvlpmdosVcGVGXFWp2oU9kLFL3dEkSz6NHEY1sjSRdIuDFWEhd8KxFqsRi1uM/nz9/zpxnwlESONdg6dKlbsaMGS4EHFHtjFIDHwKOo46l4TxSuxgDzi+rE2jg+BaFruOX4HXa0Nnf1lwAPufZeF8/r6zD97WK2qFnGjBxTw5qNGPxT+5T/r7/7RawFC3j4vTp09koCxkeHjqbHJqArmH5UrFKKksnxrK7FuRIs8STfBZv+luugXZ2pR/pP9Ois4z+TiMzUUkUjD0iEi1fzX8GmXyuxUBRcaUfykV0YZnlJGKQpOiGB76x5GeWkWWJc3mOrK6S7xdND+W5N6XyaRgtWJFe13GkaZnKOsYqGdOVVVbGupsyA/l7emTLHi7vwTdirNEt0qxnzAvBFcnQF16xh/TMpUuXHDowhlA9vQVraQhkudRdzOnK+04ZSP3DUhVSP61YsaLtd/ks7ZgtPcXqPqEafHkdqa84X6aCeL7YWlv6edGFHb+ZFICPlljHhg0bKuk0CSvVznWsotRu433alNdFrqG45ejoaPCaUkWERpLXjzFL2Rpllp7PJU2a/v7Ab8N05/9t27Z16KUqoFGsxnI9EosS2niSYg9SpU6B4JgTrvVW1flt1sT+0ADIJU2maXzcUTraGCRaL1Wp9rUMk16PMom8QhruxzvZIegJjFU7LLCePfS8uaQdPny4jTTL0dbee5mYokQsXTIWNY46kuMbnt8Kmec+LGWtOVIl9cT1rCB0V8WqkjAsRwta93TbwNYoGKsUSChN44lgBNCoHLHzquYKrU6qZ8lolCIN0Rh6cP0Q3U6I6IXILYOQI513hJaSKAorFpuHXJNfVlpRtmYBk1Su1obZr5dnKAO+L10Hrj3WZW+E3qh6IszE37F6EB+68mGpvKm4eb9bFrlzrok7fvr0Kfv727dvWRmdVTJHw0qiiCUSZ6wCK+7XL/AcsgNyL74DQQ730sv78Su7+t/A36MdY0sW5o40ahslXr58aZ5HtZB8GH64m9EmMZ7FpYw4T6QnrZfgenrhFxaSiSGXtPnz57e9TkNZLvTjeqhr734CNtrK41L40sUQckmj1lGKQ0rC37x544r8eNXRpnVE3ZZY7zXo8NomiO0ZUCj2uHz58rbXoZ6gc0uA+F6ZeKS/jhRDUq8MKrTho9fEkihMmhxtBI1DxKFY9XLpVcSkfoi8JGnToZO5sU5aiDQIW716ddt7ZLYtMQlhECdBGXZZMWldY5BHm5xgAroWj4C0hbYkSc/jBmggIrXJWlZM6pSET
sEPGqZOndr2uuuR5rF169a2HoHPdurUKZM4CO1WTPqaDaAd+GFGKdIQkxAn9RuEWcTRyN2KSUgiSgF5aWzPTeA/lN5rZubMmR2bE4SIC4nJoltgAV/dVefZm72AtctUCJU2CMJ327hxY9t7EHbkyJFseq+EJSY16RPo3Dkq1kkr7+q0bNmyDuLQcZBEPYmHVdOBiJyIlrRDq41YPWfXOxUysi5fvtyaj+2BpcnsUV/oSoEMOk2CQGlr4ckhBwaetBhjCwH0ZHtJROPJkyc7UjcYLDjmrH7ADTEBXFfOYmB0k9oYBOjJ8b4aOYSe7QkKcYhFlq3QYLQhSidNmtS2RATwy8YOM3EQJsUjKiaWZ+vZToUQgzhkHXudb/PW5YMHD9yZM2faPsMwoc7RciYJXbGuBqJ1UIGKKLv915jsvgtJxCZDubdXr165mzdvtr1Hz5LONA8jrUwKPqsmVesKa49S3Q4WxmRPUEYdTjgiUcfUwLx589ySJUva3oMkP6IYddq6HMS4o55xBJBUeRjzfa4Zdeg56QZ43LhxoyPo7Lf1kNt7oO8wWAbNwaYjIv5lhyS7kRf96dvm5Jah8vfvX3flyhX35cuX6HfzFHOToS1H4BenCaHvO8pr8iDuwoUL7tevX+b5ZdbBair0xkFIlFDlW4ZknEClsp/TzXyAKVOmmHWFVSbDNw1l1+4f90U6IY/q4V27dpnE9bJ+v87QEydjqx/UamVVPRG+mwkNTYN+9tjkwzEx+atCm/X9WvWtDtAb68Wy9LXa1UmvCDDIpPkyOQ5ZwSzJ4jMrvFcr0rSjOUh+GcT4LSg5ugkW1Io0/SCDQBojh0hPlaJdah+tkVYrnTZowP8iq1F1TgMBBauufyB33x1v+NWFYmT5KmppgHC+NkAgbmRkpD3yn9QIseXymoTQFGQmIOKTxiZIWpvAatenVqRVXf2nTrAWMsPnKrMZHz6bJq5jvce6QK8J1cQNgKxlJapMPdZSR64/UivS9NztpkVEdKcrs5alhhWP9NeqlfWopzhZScI6QxseegZRGeg5a8C3Re1Mfl1ScP36ddcUaMuv24iOJtz7sbUjTS4qBvKmstYJoUauiuD3k5qhyr7QdUHMeCgLa1Ear9NquemdXgmum4fvJ6w1lqsuDhNrg1qSpleJK7K3TF0Q2jSd94uSZ60kK1e3qyVpQK6PVWXp2/FC3mp6jBhKKOiY2h3gtUV64TWM6wDETRPLDfSakXmH3w8g9Jlug8ZtTt4kVF0kLUYYmCCtD/DrQ5YhMGbA9L3ucdjh0y8kOHW5gU/VEEmJTcL4Pz/f7mgoAbYkAAAAAElFTkSuQmCC"
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
]
|
]
|
||||||
@@ -181,7 +218,7 @@ curl http://localhost:11434/v1/embeddings \
- [x] JSON mode
- [x] Reproducible outputs
- [x] Vision
-- [x] Tools (streaming support coming soon)
+- [x] Tools
- [ ] Logprobs

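As an illustration of the JSON mode entry in the checklist above, a minimal request might look like the following sketch. It assumes the `response_format` field (the OpenAI mechanism behind JSON mode) is accepted and uses `llama3.2`, the model pulled elsewhere in this doc, as a placeholder model name.

```shell
curl http://localhost:11434/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "llama3.2",
    "messages": [{"role": "user", "content": "List three colors as a JSON object"}],
    "response_format": {"type": "json_object"}
  }'
```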
#### Supported request fields

@@ -199,6 +236,8 @@ curl http://localhost:11434/v1/embeddings \
- [x] `seed`
- [x] `stop`
- [x] `stream`
+- [x] `stream_options`
+  - [x] `include_usage`
- [x] `temperature`
- [x] `top_p`
- [x] `max_tokens`
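For reference, a streaming request that exercises the newly added fields above could look like this sketch; the payload shape follows the OpenAI streaming convention, and `llama3.2` is again a placeholder model name rather than something quoted from the changed docs.

```shell
curl http://localhost:11434/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "llama3.2",
    "messages": [{"role": "user", "content": "Hello"}],
    "stream": true,
    "stream_options": {"include_usage": true}
  }'
```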
@@ -227,6 +266,8 @@ curl http://localhost:11434/v1/embeddings \
- [x] `seed`
- [x] `stop`
- [x] `stream`
+- [x] `stream_options`
+  - [x] `include_usage`
- [x] `temperature`
- [x] `top_p`
- [x] `max_tokens`
@@ -281,7 +322,7 @@ ollama pull llama3.2

For tooling that relies on default OpenAI model names such as `gpt-3.5-turbo`, use `ollama cp` to copy an existing model name to a temporary name:

-```
+```shell
ollama cp llama3.2 gpt-3.5-turbo
```

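After the copy, requests can reference the temporary name directly. A minimal sketch, assuming the `ollama cp` command above has been run:

```shell
curl http://localhost:11434/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-3.5-turbo",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'
```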
@@ -305,7 +346,7 @@ curl http://localhost:11434/v1/chat/completions \

The OpenAI API does not have a way of setting the context size for a model. If you need to change the context size, create a `Modelfile` which looks like:

-```modelfile
+```
FROM <some model>
PARAMETER num_ctx <context size>
```

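To make use of the larger context window from the OpenAI-compatible endpoint, the `Modelfile` would then be built into a named model and referenced by that name. A sketch, with `Modelfile` and `mymodel` as placeholder names:

```shell
# Build a model that bakes in the larger context window (names are placeholders)
ollama create mymodel -f Modelfile

# Reference the new name from the OpenAI-compatible endpoint
curl http://localhost:11434/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "mymodel", "messages": [{"role": "user", "content": "Hello!"}]}'
```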
@@ -111,7 +111,7 @@ Keep the following tips and best practices in mind when working with Go template

ChatML is a popular template format. It can be used for models such as Databricks' DBRX, Intel's Neural Chat, and Microsoft's Orca 2.

-```gotmpl
+```go
{{- range .Messages }}<|im_start|>{{ .Role }}
{{ .Content }}<|im_end|>
{{ end }}<|im_start|>assistant
@@ -125,7 +125,7 @@ Tools support can be added to a model by adding a `{{ .Tools }}` node to the tem

Mistral v0.3 and Mixtral 8x22B support tool calling.

-```gotmpl
+```go
{{- range $index, $_ := .Messages }}
{{- if eq .Role "user" }}
{{- if and (le (len (slice $.Messages $index)) 2) $.Tools }}[AVAILABLE_TOOLS] {{ json $.Tools }}[/AVAILABLE_TOOLS]
@@ -151,7 +151,7 @@ Fill-in-middle support can be added to a model by adding a `{{ .Suffix }}` node

CodeLlama [7B](https://ollama.com/library/codellama:7b-code) and [13B](https://ollama.com/library/codellama:13b-code) code completion models support fill-in-middle.

-```gotmpl
+```go
<PRE> {{ .Prompt }} <SUF>{{ .Suffix }} <MID>
```

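For context, a fill-in-middle template like the one above is typically driven through a prompt/suffix pair. The request below is an illustrative sketch only: it assumes the `suffix` field of the native `/api/generate` endpoint and uses `codellama:7b-code` from the links above.

```shell
curl http://localhost:11434/api/generate -d '{
  "model": "codellama:7b-code",
  "prompt": "def add(a, b):\n",
  "suffix": "\n    return result\n",
  "stream": false
}'
```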
@@ -17,6 +17,7 @@ When you run Ollama in a **container**, the logs go to stdout/stderr in the cont

```shell
docker logs <container-name>
```

(Use `docker ps` to find the container name.)

If you are manually running `ollama serve` in a terminal, the logs will be on that terminal.
@@ -28,6 +29,7 @@ When you run Ollama on **Windows**, there are a few different locations. You can
- `explorer %TEMP%` where temporary executable files are stored in one or more `ollama*` directories

To enable additional debug logging to help troubleshoot problems, first **Quit the running app from the tray menu**, then in a PowerShell terminal:

```powershell
$env:OLLAMA_DEBUG="1"
& "ollama app.exe"
@@ -49,12 +51,13 @@ Dynamic LLM libraries [rocm_v6 cpu cpu_avx cpu_avx2 cuda_v11 rocm_v5]

You can set OLLAMA_LLM_LIBRARY to any of the available LLM libraries to bypass autodetection. For example, if you have a CUDA card but want to force the CPU LLM library with AVX2 vector support, use:

-```
+```shell
OLLAMA_LLM_LIBRARY="cpu_avx2" ollama serve
```

You can see what features your CPU has with the following command.
-```
+
+```shell
cat /proc/cpuinfo | grep flags | head -1
```

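To check for one specific feature before forcing a library, the flags line can be filtered further; a small sketch:

```shell
# Print "avx2" if the CPU advertises AVX2 support, otherwise report that it is missing
grep -o -m1 'avx2' /proc/cpuinfo || echo "AVX2 not supported on this CPU"
```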
@@ -62,14 +65,18 @@ cat /proc/cpuinfo| grep flags | head -1

If you run into problems on Linux and want to install an older version, or you'd like to try out a pre-release before it's officially released, you can tell the install script which version to install.

-```sh
+```shell
-curl -fsSL https://ollama.com/install.sh | OLLAMA_VERSION="0.1.29" sh
+curl -fsSL https://ollama.com/install.sh | OLLAMA_VERSION=0.5.7 sh
```

## Linux tmp noexec

If your system is configured with the "noexec" flag where Ollama stores its temporary executable files, you can specify an alternate location by setting OLLAMA_TMPDIR to a location writable by the user ollama runs as. For example, OLLAMA_TMPDIR=/usr/share/ollama/

+## Linux docker
+
+If Ollama initially works on the GPU in a docker container, but then switches to running on CPU after some period of time with errors in the server log reporting GPU discovery failures, this can be resolved by disabling systemd cgroup management in Docker. Edit `/etc/docker/daemon.json` on the host and add `"exec-opts": ["native.cgroupdriver=cgroupfs"]` to the docker configuration.
+
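The `daemon.json` change described above could look like the following sketch. It assumes no other options are already set in the file (if there are, merge the `exec-opts` key into the existing JSON instead of overwriting it), and Docker must be restarted afterwards.

```shell
# Back up any existing configuration before overwriting it
sudo cp /etc/docker/daemon.json /etc/docker/daemon.json.bak 2>/dev/null || true

# Write the cgroupfs exec option, then restart Docker
sudo tee /etc/docker/daemon.json <<'EOF'
{
  "exec-opts": ["native.cgroupdriver=cgroupfs"]
}
EOF
sudo systemctl restart docker
```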
## NVIDIA GPU Discovery

When Ollama starts up, it takes inventory of the GPUs present in the system to determine compatibility and how much VRAM is available. Sometimes this discovery can fail to find your GPUs. In general, running the latest driver will yield the best results.
@@ -80,7 +87,7 @@ If you are using a container to run Ollama, make sure you've set up the containe

Sometimes Ollama can have difficulty initializing the GPU. When you check the server logs, this can show up as various error codes, such as "3" (not initialized), "46" (device unavailable), "100" (no device), "999" (unknown), or others. The following troubleshooting techniques may help resolve the problem:

-- If you are using a container, is the container runtime working? Try `docker run --gpus all ubuntu nvidia-smi` - if this doesn't work, Ollama wont be able to see your NVIDIA GPU.
+- If you are using a container, is the container runtime working? Try `docker run --gpus all ubuntu nvidia-smi` - if this doesn't work, Ollama won't be able to see your NVIDIA GPU.
- Is the uvm driver loaded? `sudo nvidia-modprobe -u`
- Try reloading the nvidia_uvm driver - `sudo rmmod nvidia_uvm` then `sudo modprobe nvidia_uvm`
- Try rebooting
@@ -95,13 +102,19 @@ If none of those resolve the problem, gather additional information and file an

On Linux, AMD GPU access typically requires `video` and/or `render` group membership to access the `/dev/kfd` device. If permissions are not set up correctly, Ollama will detect this and report an error in the server log.

-When running in a container, in some Linux distributions and container runtimes, the ollama process may be unable to access the GPU. Use `ls -ld /dev/kfd /dev/dri /dev/dri/*` on the host system to determine the group assignments on your system, and pass additional `--group-add ...` arguments to the container so it can access the required devices.
+When running in a container, in some Linux distributions and container runtimes, the ollama process may be unable to access the GPU. Use `ls -lnd /dev/kfd /dev/dri /dev/dri/*` on the host system to determine the **numeric** group IDs on your system, and pass additional `--group-add ...` arguments to the container so it can access the required devices. For example, in the following output `crw-rw---- 1 0 44 226, 0 Sep 16 16:55 /dev/dri/card0` the group ID column is `44`.

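To illustrate the `--group-add` advice above, a run command might look like the following sketch. The group IDs `44` and `993` are placeholders taken from a hypothetical `ls -lnd` listing; substitute the numeric IDs reported on your host.

```shell
# Placeholder group IDs: replace 44 and 993 with the values from
# `ls -lnd /dev/kfd /dev/dri /dev/dri/*` on the host.
docker run -d --device /dev/kfd --device /dev/dri \
  --group-add 44 --group-add 993 \
  -v ollama:/root/.ollama -p 11434:11434 \
  --name ollama ollama/ollama:rocm
```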
If you are experiencing problems getting Ollama to correctly discover or use your GPU for inference, the following may help isolate the failure (a combined example follows the list).

- `AMD_LOG_LEVEL=3` Enables info log levels in the AMD HIP/ROCm libraries. This can show more detailed error codes that help troubleshoot problems.
- `OLLAMA_DEBUG=1` Reports additional information during GPU discovery.
- Check dmesg for any errors from the amdgpu or kfd drivers: `sudo dmesg | grep -i amdgpu` and `sudo dmesg | grep -i kfd`.
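Putting the two environment variables together, a debugging session might look like this sketch (stop any already-running Ollama instance first so the flags take effect):

```shell
# Run the server with verbose AMD HIP/ROCm and Ollama GPU-discovery logging
AMD_LOG_LEVEL=3 OLLAMA_DEBUG=1 ollama serve 2>&1 | tee ollama-debug.log

# In another terminal, check the kernel log for amdgpu/kfd errors
sudo dmesg | grep -i amdgpu
sudo dmesg | grep -i kfd
```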

+## Multiple AMD GPUs
+
+If you experience gibberish responses when models load across multiple AMD GPUs on Linux, see the following guide.
+
+- https://rocm.docs.amd.com/projects/radeon/en/latest/docs/install/native_linux/mgpu.html#mgpu-known-issues-and-limitations

## Windows Terminal Errors

Older versions of Windows 10 (e.g., 21H1) are known to have a bug where the standard terminal program does not display control characters correctly. This can result in a long string of characters like `←[?25h←[?25l` being displayed, sometimes erroring with `The parameter is incorrect`. To resolve this problem, please update to Windows 10 22H1 or newer.