A desktop shell for private AI workflows that stay close to your system.
A Windows front end for the local Giblex Brain backend.
Giblex Assistant is currently an Avalonia desktop shell that calls the local Brain backend and works with a locally running model stack such as LM Studio.
It is aimed at private chat, retrieval, and continuity workflows that keep the core experience on your own machine rather than in a hosted service by default.
Most AI tools depend on remote systems, short-lived sessions, and weak continuity.
AI assistants today typically run in isolated sessions, send your data to third-party servers, and forget context between conversations. That makes them useful for one-off tasks but unreliable for deep, sustained, private workflows.
Giblex Assistant is being built to work differently: local-first, backend-connected, and structured around private workflows that can stay close to the machine you control.
Desktop shell plus local backend.
The current implementation depends on the local Brain backend and a local model runtime rather than defaulting to a hosted service.
Key features.
- Windows desktop shell built with Avalonia
- Local Brain backend integration
- Local model runtime support through LM Studio
- Private chat and retrieval-oriented workflows
- Room for deeper memory and workflow layers over time
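As a sketch of what "local model runtime support" means in practice: LM Studio exposes an OpenAI-compatible HTTP API on localhost (port 1234 by default), so a shell like this can send chat requests without any hosted service in the loop. The example below is a minimal Python sketch, not the shell's actual implementation (which is .NET/Avalonia); the endpoint is LM Studio's default, the model name is a placeholder, and how the Brain backend itself is called is not shown here.

```python
# Minimal sketch: chatting with a locally running model server.
# LM Studio serves an OpenAI-compatible API at http://localhost:1234/v1
# by default; "local-model" below is a placeholder model name.
import json
import urllib.request

LOCAL_SERVER = "http://localhost:1234/v1/chat/completions"

def build_chat_request(prompt: str, model: str = "local-model") -> urllib.request.Request:
    """Build a chat-completion request for a local OpenAI-compatible server."""
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode("utf-8")
    return urllib.request.Request(
        LOCAL_SERVER,
        data=body,
        headers={"Content-Type": "application/json"},
    )

def ask(prompt: str) -> str:
    """Send the prompt to the local server and return the reply text."""
    with urllib.request.urlopen(build_chat_request(prompt)) as resp:
        reply = json.load(resp)
    return reply["choices"][0]["message"]["content"]
```

Because everything stays on localhost, the prompt and reply never leave the machine, which is the property the feature list above is pointing at.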
Useful AI starts with a workflow you can actually keep close.
Giblex Assistant is built for people who want a local desktop experience around their own backend and model stack, with room to grow into stronger memory and retrieval over time.
Deeper memory can come on top of the current shell.
The current repo is a desktop shell for the Brain backend. Future work can add richer memory systems, broader local model support, and tighter integration with the rest of the Giblex ecosystem.
Private AI starts closer to home.
Join early access to Giblex Assistant.