desktop ai assistant
building a local ai assistant for desktop productivity. the idea was simple — i wanted something that could help with daily workflows without sending everything to a cloud api.
the backbone is ollama, running open-source llms locally. tried a few models — llama, mistral, codellama — and settled on different ones for different tasks. smaller models for quick lookups and text processing, larger ones when i needed more reasoning.
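the routing idea can be sketched as a small lookup plus a payload builder for ollama's /api/generate endpoint. the task categories and model tags below are illustrative stand-ins, not my exact setup:

```javascript
// pick a model by task type, then build the request body for
// POST http://localhost:11434/api/generate (ollama's generate endpoint).
// model names here are examples; swap in whatever tags you've pulled.
const MODEL_FOR_TASK = {
  lookup: "mistral",        // quick lookups: small and fast
  text: "mistral",          // text processing
  code: "codellama",        // code-related requests
  reasoning: "llama2:13b",  // heavier reasoning
};

function buildRequest(task, prompt) {
  const model = MODEL_FOR_TASK[task] || MODEL_FOR_TASK.lookup;
  return { model, prompt, stream: false };
}
```

the nice part of keeping this as a plain table is that adding a new task type is one line, and the fallback means an unrecognised task still gets a fast default.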
the interface is built with electron. nothing fancy, just a clean input area that stays out of the way until you need it. global hotkey to summon it, type or speak your request, get a response. voice commands through the web speech api — surprisingly usable once you get the wake word detection right.
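the wake-word check itself is just a filter on the transcripts the web speech api hands back; anything that doesn't start with the wake word is ambient speech and gets dropped. "hey desk" is a made-up example, not my actual wake word:

```javascript
// filter speech-recognition transcripts: only text after the wake word
// becomes a command; everything else is ignored.
const WAKE_WORD = "hey desk";

function extractCommand(transcript) {
  const t = transcript.trim().toLowerCase();
  if (!t.startsWith(WAKE_WORD)) return null; // ambient speech, ignore
  // strip the wake word plus any trailing comma/space before the command
  return t.slice(WAKE_WORD.length).replace(/^[\s,]+/, "") || null;
}
```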
the real value came from task automation. natural language to shell commands. "find all pdfs modified this week" becomes a find command. "summarise this file" reads and processes it locally. file management, quick calculations, text transformation — all through conversation.
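a simplified sketch of that translation layer: a handful of regex patterns short-circuit common requests straight to shell commands, and anything that doesn't match falls through to the model. the two patterns below are toy examples of the idea, not the full set:

```javascript
// common request shapes mapped directly to shell commands.
// unmatched requests return null and get sent to the llm instead.
const PATTERNS = [
  {
    match: /find all (\w+?)s? modified this week/i,
    build: (m) => `find . -name '*.${m[1].toLowerCase()}' -mtime -7`,
  },
  {
    match: /how much space .* using/i,
    build: () => "du -sh .",
  },
];

function toShellCommand(request) {
  for (const { match, build } of PATTERNS) {
    const m = request.match(match);
    if (m) return build(m);
  }
  return null; // no pattern matched, let the model handle it
}
```

short-circuiting like this is also where the speed argument pays off: the common cases never touch a model at all.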
why local instead of cloud apis? three reasons. privacy — i don't want my file contents and workflows going to external servers. cost — api calls add up fast when you're using it throughout the day. speed — for simple tasks, a local model responds faster than a round trip to an api endpoint.
integrated it with system tools. can open applications, manage clipboard content, interact with the file system. the system prompt is tuned to understand my specific setup and common patterns.
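one way to keep that integration safe is an explicit allowlist: the model can only request actions from a fixed table, each mapped to a handler. the handlers below are stubs standing in for the real ones (which would shell out or call electron apis), and the tool names are illustrative:

```javascript
// allowlisted system actions; anything the model asks for that isn't
// in this table is refused outright.
const TOOLS = {
  open_app: (args) => `opening ${args.name}`,
  read_clipboard: () => "clipboard contents",
  list_dir: (args) => `listing ${args.path}`,
};

function runTool(action, args = {}) {
  const handler = TOOLS[action];
  if (!handler) throw new Error(`unknown tool: ${action}`);
  return handler(args);
}
```

the allowlist is doing the real work here: a local model can still hallucinate a destructive command, so the dispatch layer, not the model, decides what's actually runnable.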
it's not as capable as gpt-4 or claude for complex reasoning. but for the 80% of tasks that are routine — drafting quick messages, reformatting text, searching through files, running common commands — it handles them well enough. and it runs entirely on my machine.