dappnode_package.json
{
  "name": "ollama.dnp.dappnode.eth",
  "version": "0.1.0",
  "upstream": [
    {
      "repo": "ollama/ollama",
      "version": "v0.17.7",
      "arg": "OLLAMA_VERSION"
    }
  ],
  "mainService": "ollama",
  "shortDescription": "Local LLM inference engine with GPU acceleration",
  "description": "Run large language models locally on your DAppNode with GPU acceleration. Ollama provides a fast and efficient LLM inference engine with AMD ROCm support.\n\n**Features:**\n- AMD GPU acceleration via ROCm\n- Complete privacy - all processing stays local\n- Support for multiple LLM models (Llama, Mistral, CodeLlama, etc.)\n\n**Requirements:**\n- AMD GPU with ROCm support\n- At least 8GB RAM (16GB+ recommended)\n- Sufficient storage for models (10GB+ recommended)",
  "type": "service",
  "author": "DAppNode Association <admin@dappnode.io> (https://github.com/dappnode)",
  "license": "GPL-3.0",
  "categories": [
    "AI"
  ],
  "links": {
    "Models library": "https://ollama.com/library"
  },
  "architectures": [
    "linux/amd64"
  ]
}