Nvidia

AI Capex between 2022-2025
·506 words·3 mins
Let's explore the data on AI infrastructure investment from NVIDIA, AMD, and hyperscalers GCP, AWS, and Azure.
Ollama Environment Configuration
·66 words·1 min
Key environment settings for running Ollama efficiently on Windows and Linux.
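A minimal sketch of the kind of settings the post covers, using documented Ollama environment variables; the paths and values here are illustrative assumptions, not recommendations from the post.

```shell
# Illustrative Ollama environment (Linux/bash); adjust values for your setup.
export OLLAMA_HOST=0.0.0.0:11434          # address:port the API server binds to
export OLLAMA_MODELS=/data/ollama/models  # where model blobs are stored
export OLLAMA_KEEP_ALIVE=5m               # how long a model stays loaded after a request
ollama serve
```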
Force Ollama to use only a single Nvidia GPU
·163 words·1 min
Selecting which GPUs are visible to Ollama on Windows.
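One common mechanism for this, sketched here as an assumption about the post's approach, is the standard `CUDA_VISIBLE_DEVICES` variable, which Ollama respects; the GPU index `0` below is an example taken from `nvidia-smi` output.

```shell
# Windows (PowerShell/cmd): persistently expose only GPU 0 to Ollama,
# then restart the Ollama service so it picks up the change.
setx CUDA_VISIBLE_DEVICES 0
```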
Base Mac Mini M4, The alternatives for Low-End NVIDIA Hardware for Inference
·741 words·4 mins
How the Mac Mini M4 enables affordable local LLM inference, making VRAM-starved NVIDIA cards obsolete for low-power, low-cost setups.