AI
LLM false metric generation
·654 words·4 mins
LLMs are generating a lot of synthetic data, and some of it includes false metrics.
The Dark Side of Abstraction: Supply Chain Attacks on Dependencies
·606 words·3 mins
The growing threat of AI-enhanced supply chain attacks targeting Python packages and FOSS software, and why writing your own isolated, network-free software might sometimes be our best defense.
Switching from LMStudio to Ollama + OpenWebUI
·375 words·2 mins
Why I moved from LMStudio to Ollama, and how I set up OpenWebUI as a frontend.
Is an AI agent a sophisticated version of a loop?
·637 words·3 mins
Looking at the current state of agents, it is possible to replicate much of their behavior with a simple loop.
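The claim in that post can be sketched in a few lines: an "agent" is a loop that repeatedly asks a model for the next action, runs it, and feeds the result back. Here `call_llm` and the tool table are placeholders I made up to stand in for a real model client and real tools, not any actual API:

```python
# A minimal agent-as-a-loop sketch. `call_llm` and TOOLS are
# placeholders standing in for a real model client and real tools.

def call_llm(history):
    # Stand-in for a model call: a real implementation would send
    # `history` to an LLM and parse its reply into (action, argument).
    # This fake "model" searches once, then finishes.
    if not any(action == "search" for action, _ in history):
        return ("search", "local llm inference")
    return ("finish", "done")

TOOLS = {
    "search": lambda q: f"results for: {q}",  # placeholder tool
}

def agent(task, max_steps=10):
    history = []
    for _ in range(max_steps):
        action, arg = call_llm(history)
        if action == "finish":
            return arg
        observation = TOOLS[action](arg)  # execute the chosen tool
        history.append((action, observation))
    return "step budget exhausted"

print(agent("find info"))  # → done
```

Everything an agent framework adds on top (memory, retries, tool schemas) is scaffolding around this loop.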
Ollama Environment Configuration
·66 words·1 min
Key environment settings for running Ollama efficiently on Windows and Linux.
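For reference, a few of the environment variables Ollama reads; the values shown are illustrative, not recommendations, and should be tuned per machine:

```shell
# Linux (shell) — illustrative values, tune for your hardware
export OLLAMA_HOST=0.0.0.0:11434      # bind address for the server
export OLLAMA_MODELS=/data/ollama     # where model blobs are stored
export OLLAMA_KEEP_ALIVE=10m          # how long models stay loaded in memory
export OLLAMA_NUM_PARALLEL=2          # concurrent requests per loaded model

# Windows (PowerShell) equivalent for one of the above:
# [Environment]::SetEnvironmentVariable("OLLAMA_KEEP_ALIVE", "10m", "User")
```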
Force Ollama to use only a single Nvidia GPU
·163 words·1 min
Selecting which GPUs are visible to Ollama on Windows.
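Ollama honors `CUDA_VISIBLE_DEVICES`, so pinning it to one card is a matter of setting that variable before the server starts (GPU indices come from `nvidia-smi -L`):

```shell
# Restrict Ollama to GPU 0 only.
# Windows (cmd):
set CUDA_VISIBLE_DEVICES=0
ollama serve

# Linux equivalent:
# CUDA_VISIBLE_DEVICES=0 ollama serve
```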
The Base Mac Mini M4: An Alternative to Low-End NVIDIA Hardware for Inference
·741 words·4 mins
How the Mac Mini M4 enables affordable local LLM inference, making VRAM-starved NVIDIA cards obsolete for low-power, low-cost setups.
Concurrency in LLMs: Why It Matters More Than Model Size
·321 words·2 mins
Understanding why handling multiple requests beats raw token speed for my local LLM deployments.
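A back-of-the-envelope illustration of that claim, with hypothetical numbers: a faster model that serves one request at a time can still give worse average latency than a slower model serving requests in parallel.

```python
# Hypothetical numbers: 4 clients, each needing 300 output tokens.
requests, tokens = 4, 300

# Fast model, sequential: 50 tok/s, one request at a time.
# Requests finish at 6 s, 12 s, 18 s, 24 s.
seq_speed = 50
seq_finish = [tokens / seq_speed * (i + 1) for i in range(requests)]

# Slower model, concurrent: 30 tok/s per stream, all 4 at once.
# Every request finishes at 10 s.
par_speed = 30
par_finish = [tokens / par_speed] * requests

print(sum(seq_finish) / requests)  # average wait, sequential → 15.0
print(sum(par_finish) / requests)  # average wait, concurrent → 10.0
```

The slower-per-stream deployment wins on average wait time, which is what users of a shared local server actually feel.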
Is AGI impossible?
·210 words·1 min
It is outright impossible.