
Commit 0831470

Near to complete
1 parent 6e6743a commit 0831470

File tree: 7 files changed (+514 −25 lines)

README.md

Lines changed: 106 additions & 10 deletions
Resulting file:

# Simple AI UI

Simple AI UI is a minimalist and lightweight web GUI that allows you to host your own AI on your local network or the internet.

---

## Table of Contents

1. [Features](#features)
2. [Supported Platforms](#supported-platforms)
3. [Flavors](#flavors)
4. [APIs](#apis)
5. [Server Configuration](#server-configuration)
6. [Installation](#installation)
7. [Credits](#credits)
8. [License](#license)

---

## Features

- Robust API infrastructure
- Simple, minimalist, and lightweight UI
- Privacy: your data is stored on your computer
- Chat History [COMING SOON]
- Memory for AI [COMING SOON]
- Admin Panel [COMING SOON]
- More APIs [COMING SOON]

---

## Supported Platforms

- Windows
- Mac
- Linux

**NOTE:** Ollama must be installed on the system.

---

## Flavors

- **Python** - Code Name: Simple-AI-UI-EASY
- **Rust-lang** - Code Name: Simple-AI-UI-ThunderBOLT [COMING SOON]

---

## APIs

For detailed documentation, refer to `./docs/apis.md`.

---

## Server Configuration

For detailed documentation, refer to `./docs/server-config.md`.

---

## Installation

### For Python Lovers and Enthusiasts

1. **Clone the repository:**

   ```bash
   git clone https://github.com/junaidcodingmaster/SIMPLE-AI-UI
   cd SIMPLE-AI-UI
   ```

2. **Create a virtual environment:**

   ```bash
   python3 -m venv venv
   ```

3. **Activate the virtual environment:**

   **Windows:**

   ```bash
   .\venv\Scripts\activate
   ```

   **Linux or Mac:**

   ```bash
   source ./venv/bin/activate
   ```

4. **Install requirements:**

   ```bash
   pip install -r ./requirements.txt
   ```

5. **Run the application:**

   ```bash
   python app.py
   ```

### For Windows EXE - [COMING SOON]

### For Linux Binaries - [COMING SOON]

---

## Credits

Made by [Junaid](https://abujuni.dev)

---

## License

This project is licensed under the MIT License. See the `LICENSE` file for more details.

## Alert

- Currently, this project is not open source.
- It will be made open source soon.
app.py

Lines changed: 82 additions & 6 deletions
Resulting code (three hunks), with the bugs in the original additions fixed in place:

@@ -3,7 +3,55 @@

import threading
import queue
from ollamaClient import OllamaClient
from dotenv import load_dotenv

import hashlib
from uuid import uuid4  # needed by hashing(); missing from the original diff

# Load environment variables from the config file
load_dotenv("./config.env")

# Helper function to get configuration values
def get_config(key, default=None):
    value = os.getenv(key, default)
    if value is None:
        print(f"Error: {key} is not set in the environment.")
        exit(1)  # Exit with non-zero status to indicate an error occurred
    return value

# Configuration
config = {
    "host": get_config("SIMPLE_AI_UI_SERVER_HOST", "0.0.0.0"),
    "port": int(get_config("SIMPLE_AI_UI_SERVER_PORT", "5000")),
    "users": {
        pair.split(":")[0]: pair.split(":")[1]
        for pair in get_config("SIMPLE_AI_UI_AUTH_USERS", "joe:1234,apple:1234").split(",")
        if ":" in pair
    },
    "open_stats": get_config("SIMPLE_AI_UI_AUTH_SHOW_USER_STATS", "true").lower() == "true",
    "limit_of_requests": int(get_config("SIMPLE_AI_UI_API_LIMIT_OF_REQS_PER_DAY", "1000")),
    "limit_of_devices": int(get_config("SIMPLE_AI_UI_API_NO_OF_DEVICES", "1000")),
}

# Initialize Flask app and Ollama client
app = Flask(__name__)
ai = OllamaClient()

@@ -33,6 +81,39 @@

# Start background worker thread
threading.Thread(target=process_requests, daemon=True).start()

ALLOW_HASHS = []  # session hashes accepted after a successful login

def auth(username, password):
    # Check the whole user table before rejecting: the original loop
    # returned "BAD-12" as soon as it saw any non-matching username.
    users = config.get("users", {})
    if username not in users:
        return "BAD-12"  # unknown user
    if users[username] == password:
        return "OK-12"
    return "BAD-2"  # wrong password

def hashing(text):
    # uuid4 must be called; str(uuid4) would hash the function's repr.
    content = str(uuid4()) + text + str(uuid4())
    hash_object = hashlib.sha256(content.encode("utf-8"))
    return hash_object.hexdigest()

def middlewares(username, password):
    auth_status = auth(username, password)
    if auth_status == "OK-12":
        hash_obj = hashing(username + password)
        # Cookies cannot be added to the incoming request; the route
        # must attach this hash to its response instead, e.g.
        # response.set_cookie("auth", hash_obj).
        ALLOW_HASHS.append(hash_obj)
        return True
    return False

def api_chat_func():
    data = request.json
    response_queue: queue.Queue = queue.Queue()
    request_queue.put((data, response_queue))
    response = response_queue.get()  # Blocking wait for response
    status = 200 if "response" in response else 400
    return jsonify(response), status

@@ -86,12 +167,7 @@

@app.route("/api/chat", methods=["POST"])
def api_chat():
    # Gate the endpoint behind the auth middleware; the original called
    # middlewares() with no arguments and never returned a response.
    data = request.json or {}
    if not middlewares(data.get("username", ""), data.get("password", "")):
        return render_template("error.html", error="YOU ARE NOT ALLOWED - AUTH ERROR"), 401
    return api_chat_func()


if __name__ == "__main__":
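The queue hand-off that `api_chat_func` relies on can be sketched in isolation. This is a minimal sketch, not the project's actual worker: the echo reply stands in for the Ollama call, while the names `request_queue` and `process_requests` mirror those in `app.py`:

```python
import queue
import threading

request_queue: queue.Queue = queue.Queue()

def process_requests() -> None:
    # Worker: pull (data, response_queue) pairs and reply on the
    # per-request queue, so each caller sees only its own response.
    while True:
        data, response_queue = request_queue.get()
        response_queue.put({"response": f"echo: {data['prompt']}"})
        request_queue.task_done()

threading.Thread(target=process_requests, daemon=True).start()

# A caller enqueues its request together with a private response queue,
# then blocks until the worker answers.
response_q: queue.Queue = queue.Queue()
request_queue.put(({"prompt": "hi"}, response_q))
print(response_q.get())  # → {'response': 'echo: hi'}
```

The per-request response queue is what lets a single background worker serve many concurrent HTTP handlers without mixing up their replies.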

config.env

Lines changed: 12 additions & 0 deletions
Resulting file:

# Server Configuration
SIMPLE_AI_UI_SERVER_HOST=localhost
SIMPLE_AI_UI_SERVER_PORT=5000

# Users
SIMPLE_AI_UI_AUTH_USERS=joe:1234,apple:1234
SIMPLE_AI_UI_AUTH_NO_OF_USERS_PER_DAY=1000
SIMPLE_AI_UI_AUTH_SHOW_USER_STATS=true

# API Requests
SIMPLE_AI_UI_API_LIMIT_OF_REQS_PER_DAY=1000
SIMPLE_AI_UI_API_NO_OF_DEVICES=1000
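The `SIMPLE_AI_UI_AUTH_USERS` value is a comma-separated list of `username:password` pairs. A minimal sketch of how `app.py` turns it into a dict (`parse_users` is a hypothetical helper name; entries without a colon are skipped, mirroring the comprehension in `app.py`):

```python
def parse_users(raw: str) -> dict:
    # Split "user:pass,user:pass" into {user: pass};
    # malformed entries without a ":" are ignored.
    return {
        pair.split(":")[0]: pair.split(":")[1]
        for pair in raw.split(",")
        if ":" in pair
    }

print(parse_users("joe:1234,apple:1234"))  # → {'joe': '1234', 'apple': '1234'}
```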

docs/apis.md

Lines changed: 90 additions & 0 deletions
Resulting file:

# API Documentation

## Index

1. [Base URL](#base-url)
2. [Endpoints](#endpoints)
   - [Check Connection](#1-check-connection)
   - [Get Connection Stats](#2-get-connection-stats)
   - [Chat with AI](#3-chat-with-ai)
3. [Notes](#notes)

---

## Base URL

```
http://localhost:5000
```

---

## Endpoints

### 1. Check Connection

**Endpoint:** `GET /api/connection`

**Description:** Checks the connection to the local Ollama server.

**Response:**

- **200 OK** - Server is reachable.
- **404 ERROR** - Server is unreachable.

**Example Response:**

```json
{
  "host": "localhost",
  "port": 11434,
  "status": "OK"
}
```

---

### 2. Get Connection Stats

**Endpoint:** `GET /api/connection/stats`

**Description:** Returns available and active AI models.

**Response:**

- **200 OK** - Returns a list of models.
- **404 ERROR** - No models found.

**Example Response:**

```json
{
  "available": ["model1", "model2"],
  "active": ["model1"]
}
```

---

### 3. Chat with AI

**Endpoint:** `POST /api/chat`

**Description:** Sends a chat request to the AI model and returns a response. Ensure that the selected model is available before making a request.

**Request Body:**

```json
{
  "model": "model_name",
  "prompt": "Hello AI!"
}
```

**Response:**

- **200 OK** - Successful response from the AI.
- **400 ERROR** - Invalid request data.

**Example Response:**

```json
{
  "response": "Hello, how can I assist you today?"
}
```

---

## Notes

- The Ollama server is expected to be running on `http://localhost:11434`.
- Requests to `/api/chat` are processed asynchronously using a queue.
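The chat endpoint can be driven from Python's standard library. A minimal client sketch under the assumptions in this document (base URL `http://localhost:5000`, the documented request body); `build_chat_payload` and `chat` are hypothetical helper names, and `chat` requires a running server:

```python
import json
from urllib import request as urlreq

def build_chat_payload(model: str, prompt: str) -> dict:
    # Matches the documented request body for POST /api/chat.
    return {"model": model, "prompt": prompt}

def chat(prompt: str, model: str = "model_name",
         base_url: str = "http://localhost:5000") -> dict:
    # POST the JSON body and return the parsed JSON response.
    body = json.dumps(build_chat_payload(model, prompt)).encode("utf-8")
    req = urlreq.Request(
        f"{base_url}/api/chat",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urlreq.urlopen(req) as resp:
        return json.loads(resp.read().decode("utf-8"))

print(build_chat_payload("model_name", "Hello AI!"))
```

On a 200 response, the returned dict contains a `"response"` key as shown in the example above.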
