Ollama Hosting
Ollama makes running Large Language Models as easy as starting a Docker container. Download Llama 3, Mistral, Gemma, and many more models with a single command.
What is Ollama?
Ollama is an open-source tool for running Large Language Models on your own hardware. It bundles model weights, configuration, and a runtime behind a simple CLI and REST API, so you can download and chat with models such as Llama 3, Mistral, or Gemma using a single command.
Features & Benefits
One-command model download
Pull and run any model from the Ollama library with a single command – no manual handling of weights or runtimes.
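Downloading a model is a single command (for example `ollama pull llama3`), and the same operation is available over Ollama's REST API. A minimal Python sketch, assuming a default Ollama install listening on localhost:11434 – the request is only constructed here, not sent:

```python
import json

# Sketch of what `ollama pull llama3` does over the REST API.
# The host and the model name "llama3" are illustrative assumptions.
OLLAMA_HOST = "http://localhost:11434"

def build_pull_request(model: str):
    """Return (url, body) for Ollama's model-pull endpoint."""
    return f"{OLLAMA_HOST}/api/pull", {"model": model}

url, body = build_pull_request("llama3")
print(url)               # http://localhost:11434/api/pull
print(json.dumps(body))  # {"model": "llama3"}
```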
Llama 3, Mistral, Gemma & more
Choose from a growing library of open models – Llama 3, Mistral, Gemma, and many more are available out of the box.
OpenAI-compatible API
Ollama exposes an OpenAI-compatible endpoint, so existing OpenAI client libraries and tools work against your own server with little more than a changed base URL.
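Because the API follows the OpenAI chat-completions format, existing client code mostly just needs a new base URL. A hedged sketch that builds (but does not send) such a request – host, port, and model name are assumptions for illustration:

```python
import json

# Ollama serves an OpenAI-style chat endpoint under /v1.
# Host and model name are illustrative assumptions.
BASE_URL = "http://localhost:11434/v1"

def build_chat_request(model: str, prompt: str) -> dict:
    """Build a payload in the OpenAI chat-completions format."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,
    }

payload = build_chat_request("llama3", "Say hello in German.")
print(BASE_URL + "/chat/completions")  # the endpoint the payload targets
print(json.dumps(payload))
```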
GPU acceleration
With a supported GPU, Ollama accelerates inference automatically; without one, it falls back to the CPU.
Modelfile for customization
Define your own model variants with a Modelfile – set the base model, a system prompt, and parameters such as temperature.
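A Modelfile describes a custom variant of a base model. A minimal illustrative example – the base model, parameter value, and system prompt are assumptions, not a recommendation:

```
# Illustrative Modelfile: a Llama 3 variant with a fixed persona
FROM llama3
PARAMETER temperature 0.7
SYSTEM "You are a concise, helpful assistant."
```

Build it with `ollama create my-assistant -f Modelfile` and then run it like any other model.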
Resource-efficient
Quantized models keep memory usage low, so smaller models run comfortably even on CPU-only servers.
3 Steps to Ollama
From order to a running installation in just minutes
Order vServer
Choose a suitable vServer plan. For Ollama we recommend at least the XP (4 vCores, 16GB RAM). Your server is ready in 60 seconds.
Install Ollama with 1-Click
In the customer center, simply select Ollama as a template. The installation runs fully automatically.
Configure & Get Started
After installation, access Ollama directly through your browser. Set everything up to your liking.
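Once installed, a quick sanity check is Ollama's /api/tags endpoint, which lists the models installed on the server. A sketch of parsing its response in Python – the sample JSON is illustrative, not real output:

```python
import json

# /api/tags returns the locally installed models.
# This sample response body is illustrative only.
sample_response = '{"models": [{"name": "llama3:latest", "size": 4661224676}]}'

def installed_models(tags_json: str) -> list[str]:
    """Extract model names from an /api/tags response body."""
    return [m["name"] for m in json.loads(tags_json)["models"]]

print(installed_models(sample_response))  # ['llama3:latest']
```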
Recommended vServer for Ollama
All plans with 1-click Ollama installation, root access and unlimited traffic
| Plan | Tier | vCores | ECC RAM | NVMe SSD | Traffic | DDoS Protection | 1-Click Ollama |
|------|------|--------|---------|----------|---------|-----------------|----------------|
| Virtual NVMe XS | For Testing | 2 | 2 GB | 75 GB | Flatrate | Included | Included |
| Virtual NVMe XB | Standard | 4 | 8 GB | 150 GB | Flatrate | Included | Included |
| Virtual NVMe XP | Recommended | 4 | 16 GB | 256 GB | Flatrate | Included | Included |
| Virtual NVMe XE | Team & Enterprise | 6 | 32 GB | 512 GB | Flatrate | Included | Included |
Frequently Asked Questions
Everything you need to know about Ollama hosting
What is Ollama?
Ollama makes running Large Language Models as easy as starting a Docker container. Download Llama 3, Mistral, Gemma, and many more models with a single command.
Which vServer plan do I need for Ollama?
For Ollama we recommend at least the XP (4 vCores, 16 GB RAM). This plan provides enough resources for smooth operation. For higher usage or more users, we recommend upgrading to a larger plan.
How is Ollama installed?
With our 1-click installation, Ollama is automatically set up on your vServer. After ordering, simply select Ollama as a template in the customer center. All dependencies are automatically installed and configured.
Is Ollama free?
Yes, Ollama is open source and licensed under the MIT license. The software itself is completely free. You only pay for the vServer running Ollama – starting from just €2.95/month.
Is my data stored GDPR compliant?
Yes! Ollama runs exclusively on your own vServer in our datacenter in Stuttgart. Your data never leaves Germany and is stored fully GDPR compliant. We operate our own infrastructure – no third-party providers, no external hardware.
Can I upgrade my vServer later?
Yes, upgrading to a larger vServer is possible at any time. Simply contact our support and we'll take care of it – without data loss and without interruption.
Discover more 1-click apps
Ollama is just one of many apps you can install with 1-click on your vServer.
View all apps →
Our Promise
What sets us apart from other providers
Own Datacenter
We operate our entire infrastructure ourselves in Stuttgart – no resellers, no external hardware.
Cancel Monthly
No long contract terms. Cancel monthly – fair and flexible.
30-Day Money Back
Not satisfied? Get your money back within 30 days – no questions asked.
100% Germany
Your data never leaves Germany. GDPR compliant and under German law.
Green IT – Hosting with Responsibility
Climate protection and our environment matter to us. That's why we operate our entire infrastructure exclusively with electricity from renewable energy sources. We even generate part of it ourselves with our own solar power plant directly at the datacenter.