
Commit 9f214eb

Merge pull request #288 from janhq/287-bug-not-enough-threads-for-non-inference-tasks
feat: add more threads for core services
2 parents 3a47399 + 31cfa14
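
Context, from the source branch name and the diff below: issue #287 reported that not enough Drogon threads were left over for non-inference tasks; this merge reserves five threads (instead of one) for those core services.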

2 files changed: 2 additions and 2 deletions.

controllers/llamaCPP.cc

Lines changed: 1 addition & 1 deletion

@@ -435,7 +435,7 @@ bool llamaCPP::loadModelImpl(const Json::Value &jsonBody) {
   gpt_params params;
 
   // By default will setting based on number of handlers
-  int drogon_thread = drogon::app().getThreadNum() - 1;
+  int drogon_thread = drogon::app().getThreadNum() - 5;
   LOG_INFO << "Drogon thread is:" << drogon_thread;
   if (jsonBody) {
     if (!jsonBody["mmproj"].isNull()) {
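
Note: the 5 subtracted here is the same reservation added in main.cc below. Keeping the two constants in sync means llama.cpp still receives the originally configured number of threads, while 5 Drogon threads stay free for non-inference work.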

main.cc

Lines changed: 1 addition & 1 deletion

@@ -39,7 +39,7 @@ int main(int argc, char *argv[]) {
   LOG_INFO << "Server started, listening at: " << host << ":" << port;
   LOG_INFO << "Please load your model";
   drogon::app().addListener(host, port);
-  drogon::app().setThreadNum(thread_num + 1);
+  drogon::app().setThreadNum(thread_num + 5);
   LOG_INFO << "Number of thread is:" << drogon::app().getThreadNum();
 
   drogon::app().run();
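
Taken together, the two hunks over-provision the Drogon worker pool and then subtract the same reservation when sizing the inference pool: inference keeps the configured thread_num threads while 5 threads stay free for core services (previously only 1). A minimal sketch of the arithmetic, assuming a hypothetical thread_num of 8 and that getThreadNum() reflects the value set earlier by setThreadNum():

#include <drogon/drogon.h>

int main() {
  int thread_num = 8;  // hypothetical; the real value comes from main.cc's setup

  // main.cc after this commit: over-provision the HTTP worker pool by 5
  drogon::app().setThreadNum(thread_num + 5);  // 13 Drogon threads in total

  // controllers/llamaCPP.cc at model load: hand llama.cpp everything except
  // the 5 reserved threads, i.e. the originally configured thread_num again
  int drogon_thread = static_cast<int>(drogon::app().getThreadNum()) - 5;  // 13 - 5 = 8

  // With the old +1/-1 offsets, a saturated inference pool left only one
  // spare thread for all other endpoints (the bug tracked in #287).
  return drogon_thread == thread_num ? 0 : 1;
}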
