AI
-
DeepSeek R1-0528 Released: Open-Source AI Model Rivals GPT-4 and Gemini
DeepSeek has set the world buzzing again as it gains recognition as a powerful, open-source large language model (LLM) that challenges models from the major tech players. With their newest update, DeepSeek R1-0528, they have significantly improved…
-
5 AI Tools for Kubernetes that will 10X your Home Lab!
Kubernetes is the de facto standard for running containers in a way that is highly available and scalable. There are many other solutions out there, like running multiple Docker hosts, Docker Swarm, Nomad, and others, but Kubernetes is still the standard…
-
Still Worth It? Running AI Workloads on a GTX 1060 in 2025
If you want to get into running AI models on your local workstation or home lab server, you may assume that you need the latest video card or a video card that costs hundreds if not upwards…
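Before buying anything new, it is worth checking what an older card actually exposes. Below is a minimal Python sketch, assuming PyTorch with CUDA support is installed; the 6 GB threshold is only a rough rule of thumb for small quantized models, not a hard requirement.

```python
# Check whether an older CUDA GPU (e.g., a GTX 1060) is detected and how much
# VRAM it offers. Assumes PyTorch was installed with CUDA support.
import torch

if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)
    vram_gb = props.total_memory / (1024 ** 3)
    print(f"GPU: {props.name}, VRAM: {vram_gb:.1f} GiB")
    if vram_gb >= 6:
        # Rough rule of thumb only: many 7B models quantized to 4-bit fit in ~6 GB.
        print("Enough VRAM for many small quantized models.")
    else:
        print("Limited VRAM; consider smaller models or CPU offloading.")
else:
    print("No CUDA-capable GPU detected; inference will fall back to CPU.")
```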
-
Run Ollama with NVIDIA GPU in Proxmox VMs and LXC containers
I have been having tons of fun working with local LLMs in the home lab over the last few days, and I wanted to share a few steps and tweaks for running Ollama with an NVIDIA GPU…
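Once Ollama is up inside a VM or LXC container, a quick way to confirm it is reachable is to hit its REST API. This is a minimal sketch assuming Ollama's default port 11434 and that a model (here "llama3" as a placeholder) has already been pulled; adjust the host and model name for your setup.

```python
# Send a single prompt to an Ollama instance running in a Proxmox VM or LXC
# container and print the reply. Requires the "requests" package.
import requests

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default endpoint

payload = {
    "model": "llama3",                      # placeholder; use whatever model you pulled
    "prompt": "Say hello from the home lab.",
    "stream": False,                        # return one JSON object instead of a stream
}

resp = requests.post(OLLAMA_URL, json=payload, timeout=120)
resp.raise_for_status()
print(resp.json()["response"])
```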
-
Best Self-hosted GitHub Copilot AI Coding Alternatives
AI coding assistants are all the rage these days, from vibe coding to leveling the playing field for those who aren't developers. For those of us in the home lab, they are a great way to level up…
-
5 Best LLM Models You Can Run in Docker on Low-Power Hardware
I have been super interested in running LLMs at home with the help of the great open-source tools freely available out there. Large language models (LLMs) are no longer limited to large datacenters full of GPUs. It is now…
-
Meet kubectl-ai: Google Just Delivered the Best Tool for Kubernetes Management
There is no question that the world of Kubernetes is evolving fast. The AI revolution has also brought about some tremendously helpful tools for working with and on Kubernetes clusters. Most IDE coding tools now include AI chat bots…
-
Self-Hosting LLMs with Docker and Proxmox: How to Run Your Own GPT
One of the coolest things you can self-host is a large language model (LLM) like GPT. AI has transformed everything from writing content to coding. However, if you want to run your own self-hosted GPT, this is a great idea…
-
LM Studio with Phi 3.5 on Snapdragon X Elite: No NPU Support Yet
I upgraded from my Surface Laptop 4 to a Surface Pro 11 with the Snapdragon X Elite processor and have been very satisfied so far with the performance, battery life, and compatibility. I wanted to see if LM Studio…
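Since LM Studio can expose an OpenAI-compatible local server, you can script against it from Python. The sketch below assumes the server is enabled on its usual default of http://localhost:1234/v1 and that a Phi 3.5 model is loaded; the model identifier is a placeholder for whatever LM Studio reports.

```python
# Query LM Studio's OpenAI-compatible local server with the openai client.
# The base_url and model ID below are assumptions; match them to your setup.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

completion = client.chat.completions.create(
    model="phi-3.5-mini-instruct",  # placeholder; use the ID LM Studio shows
    messages=[{"role": "user", "content": "Summarize what an NPU is in one sentence."}],
)
print(completion.choices[0].message.content)
```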
-
Local LLM in a Private AI Server in WSL
We are in the age of AI and machine learning. It seems like everyone is using them. But are public services like OpenAI the only real way to use AI? No. We can run an LLM locally, which…