Documentation

Experience the true power of AI, on your terms. With AEON, your intelligence operates locally, ensuring absolute privacy and anonymity. Your data stays yours, always. Embrace powerful, multi-modal AI without compromise.


Getting started

Starting with AEON

The entire system is powered by a local-first technical stack:

Llama.cpp Python Library: A highly efficient library that enables the entire system to run on a local machine's CPU, bypassing the need for powerful, specialized GPUs or cloud services.

GGUF Files: These are the model's brain and eyes in a highly compressed format: quantized versions of the original models that significantly reduce file size and memory footprint without a major loss in performance. A minimal loading example follows this list.
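
As a rough illustration of how this stack fits together, the sketch below loads a quantized GGUF file on the CPU using the llama-cpp-python package and runs a single chat turn. The model path, quantization level, and parameter values are assumptions for the example only; they are not AEON's actual configuration.

```python
from llama_cpp import Llama

# Hypothetical GGUF file; any quantized model in GGUF format loads the same way.
llm = Llama(
    model_path="models/aeon-model.Q4_K_M.gguf",
    n_ctx=4096,      # context window size
    n_threads=4,     # CPU threads to use; no GPU required
)

response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Hello!"}],
    max_tokens=128,
)
print(response["choices"][0]["message"]["content"])
```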

Docker [Recommended]

Dependencies: Docker

Installation on Linux

Dependencies: build-essential, Python 3.11, git.

Installation on Windows

Dependencies: Microsoft C++ Build Tools, Python 3.11, git.


Commands (CLI)

Powerful CLI

The command-line interface (CLI) is designed to be fast and minimalist, making it ideal for developers and power users who want to interact with the AI directly.

Some command parameters may differ in the web interface.

Commands

/help Show this screen.
/new Create a new chat.
/list List all chats.
/open [NUMBER] Open chat.
/load [PATH]/[FILE].zip Load ZIP backup.
/rename [NUMBER] [NEW_NAME] Rename chat by ID.
/delete [NUMBER] Delete selected chat.
/zip Backup contents to a zip file.
/ingest [PATH] Add documents (.json, .txt, .md) to RAG. See the sketch after this list.
/search [TERM] Run a web search with DuckDuckGo.
/restart Restart.
/quit, /exit, /bye Exit.
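
To make the /ingest command more concrete, here is a minimal, hypothetical sketch of what document ingestion for RAG typically involves: reading supported files, splitting them into chunks, and embedding each chunk with a local embedding model through llama-cpp-python. The embedding model path, the chunk size, and the naive chunking strategy are assumptions for illustration; AEON's actual implementation may differ.

```python
from pathlib import Path
from llama_cpp import Llama

# Hypothetical embedding model; any GGUF embedding model works the same way.
embedder = Llama(model_path="models/embedding-model.gguf", embedding=True)

def chunk(text, size=500):
    """Split text into fixed-size character chunks (a deliberately naive strategy)."""
    return [text[i:i + size] for i in range(0, len(text), size)]

def ingest(path):
    """Read .json/.txt/.md files under `path` and return (chunk, vector) pairs."""
    store = []
    for file in Path(path).rglob("*"):
        if file.suffix in {".json", ".txt", ".md"}:
            for piece in chunk(file.read_text(encoding="utf-8")):
                vector = embedder.create_embedding(piece)["data"][0]["embedding"]
                store.append((piece, vector))
    return store
```

The stored vectors can then be compared against the embedding of a user's question to retrieve the most relevant chunks at chat time, which is the core idea behind RAG.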