
Commit 3c70add

Update changelog

1 parent 272dab5 commit 3c70add

2 files changed: 25 additions, 0 deletions

CHANGELOG.md

Lines changed: 9 additions & 0 deletions

@@ -4,6 +4,15 @@ All notable changes to this project will be documented in this file.
 
 The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
 and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
 
+## [1.2.0] - 2023-07-06
+
+### Added
+- Add server example to the build
+- Add documentation on how to use the webinterface
+
+### Fixed
+- Fix automatic update of the submodules
+
 ## [1.1.0] - 2023-07-03
 
 ### Added

README.md

Lines changed: 16 additions & 0 deletions

@@ -103,6 +103,22 @@ You can now chat with the model:
 --interactive
 ```
 
+### Chat via Webinterface
+
+You can start llama.cpp as a webserver:
+
+```PowerShell
+./vendor/llama.cpp/build/bin/Release/server `
+--model "./vendor/llama.cpp/models/open-llama-7B-open-instruct.ggmlv3.q4_K_M.bin" `
+--ctx-size 2048 `
+--threads 16 `
+--n-gpu-layers 32
+```
+
+And then access llama.cpp via the webinterface at:
+
+* http://localhost:8080/
+
 ### Measure model perplexity
 
 Execute the following to measure the perplexity of the GGML formatted model:
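Besides the browser UI, the llama.cpp server added above also speaks HTTP directly. The sketch below builds such a request in Python; it assumes the server example's `/completion` endpoint with `prompt` and `n_predict` JSON fields and the default port 8080 from the README snippet — check the server README of your llama.cpp version, as the API has changed over time.

```python
import json
import urllib.request


def build_completion_request(prompt: str, n_predict: int = 128,
                             host: str = "http://localhost:8080"):
    """Build a POST request for llama.cpp's /completion endpoint (assumed API)."""
    body = json.dumps({"prompt": prompt, "n_predict": n_predict}).encode("utf-8")
    return urllib.request.Request(
        f"{host}/completion",
        data=body,
        headers={"Content-Type": "application/json"},
    )


req = build_completion_request("Building a website can be done in 10 simple steps:")
print(req.full_url)  # http://localhost:8080/completion
# With the server running, urllib.request.urlopen(req) would return the
# generated completion as JSON.
```

The request is only constructed here, not sent, so the sketch runs without a live server; sending it requires the server started as shown in the README diff.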
