Introduction

Chatter is a Rust-powered terminal client for exploring large language models. It offers a familiar chat workflow for Google’s Gemini API and local Ollama models, complete with streaming output, session management, and an opt-in autonomous agent mode for filesystem tasks.

This book walks you through installing, configuring, and extending Chatter. It expands on the project README with deeper explanations, practical examples, and development notes to help you adapt the tool to your workflow.

Installation

You can install Chatter either with Homebrew or by building from source. Homebrew is the fastest path for macOS users, while the source build works on every platform supported by Rust.

Homebrew

brew tap tomatyss/chatter
brew install chatter

Upgrades follow the usual brew update && brew upgrade chatter flow.

From Source

git clone https://github.com/tomatyss/chatter.git
cd chatter
cargo build --release
sudo cp target/release/chatter /usr/local/bin/

You need the Rust toolchain (installed via rustup) and a C toolchain for compiling the native dependencies. Once the binary is on your $PATH, run chatter --help to confirm the installation.
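
If you prefer to skip the sudo copy, cargo can install the binary into ~/.cargo/bin instead. This is a sketch that assumes the repository is a standard binary crate, which the layout described in the Development Guide suggests it is:

cargo install --path .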

Configuration

Chatter stores its configuration on disk so you can reuse API keys, default providers, and Ollama settings. Configuration lives in platform-specific directories:

  • macOS: ~/Library/Application Support/chatter/config.json
  • Linux: ~/.config/chatter/config.json
  • Windows: %APPDATA%\chatter\config.json

Managing API Keys

Set a Gemini API key once and Chatter will reuse it for future sessions:

chatter config set-api-key

Alternatively, export the GEMINI_API_KEY environment variable before starting the CLI. Chatter currently stores the API key directly in the plaintext JSON configuration file, so treat config.json as sensitive and manage file permissions accordingly.
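
For example, to supply the key through the environment for a single session and tighten permissions on the stored configuration (the Linux path is shown; adjust per the table above):

# supply the key for this shell session only
export GEMINI_API_KEY="your-key-here"
chatter

# restrict read access to the file that stores the key
chmod 600 ~/.config/chatter/config.json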

Provider Defaults

Configuration fields worth knowing:

  • provider — active provider ("gemini" or "ollama")
  • default_model — fallback model when you omit --model
  • ollama.endpoint — base URL for the Ollama server (defaults to http://localhost:11434)

Edit these values through the CLI or by modifying the JSON file directly.
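
As a rough sketch, a minimal config.json might look like the following. The provider, default_model, and endpoint values come from the list above; the nesting of the Ollama settings, and the omitted field that holds the Gemini API key, are assumptions rather than the documented schema, so treat your own file as authoritative:

{
  "provider": "ollama",
  "default_model": "llama3.1",
  "ollama": {
    "endpoint": "http://localhost:11434"
  }
}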

Using Chatter

The default mode launches an interactive shell with streaming responses and an always-on history buffer. You can also invoke Chatter for one-shot prompts or scripted automation.

chatter

Inside the interface, type /help for a list of commands. Use /model or /provider to switch models, /save to persist the transcript, and /exit to leave the session.

For quick questions, pass the prompt as a positional argument:

chatter "Explain ownership in Rust"

Additional flags let you set the model, override the provider, and inject system instructions.
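
For example, a scripted one-shot call combining those flags. --provider and --model appear elsewhere in this book; --system is an assumed name for the system-instruction flag, so confirm the exact spelling with chatter --help:

chatter --provider ollama --model llama3.1 \
  --system "Answer in one short paragraph" \
  "Explain ownership in Rust"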

Interactive Chat

Interactive mode maintains conversation state, so later prompts have access to earlier context. Streaming output keeps the terminal responsive while Gemini or Ollama streams tokens back to the client.

Useful commands during a chat session:

  • /help — show command reference
  • /system — set the system prompt mid-conversation
  • /clear — reset the transcript without restarting the binary
  • /save — write the session to disk (defaults to session-<timestamp>.json in the sessions/ directory)
  • /load — load a previous session file

You can toggle providers on the fly with /provider gemini or /provider ollama, and pick a specific model with /model <name>.
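
A typical command sequence inside a session might look like this (model responses omitted; whether /system accepts the prompt inline is an assumption, so check /help for the exact syntax):

/provider ollama
/model llama3.1
/system You are a concise Rust tutor.
/save rust-notes.json
/exit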

Agent Mode

Agent mode grants the assistant controlled access to your filesystem. When enabled, Chatter exposes a curated set of tools (such as read_file, write_file, and search_files) that the model can invoke under supervision.

Enable agent mode from inside a chat session:

/agent on
/agent allow-path .

You can inspect history with /agent history, view available tools with /agent tools, and disable the feature with /agent off. The agent never leaves the directories you explicitly allow.

Use agent mode for repetitive local tasks: summarizing files, quick refactors, or generating reports. Keep an eye on the streamed tool output to ensure each action matches your expectations.
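
Putting the commands above together, a minimal supervised workflow might look like this:

/agent on
/agent allow-path ./notes
/agent tools
Summarize the TODO items in the files under ./notes.
/agent history
/agent off

The free-form line in the middle is an ordinary chat prompt; the surrounding /agent commands scope and audit what the model is allowed to touch.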

Sessions

Chatter stores conversations as JSON so you can pause and resume long-running threads. Saving a session preserves messages, system instructions, and model selections.

/save my-session.json

Reload the transcript later with /load my-session.json. Session files default to the sessions/ directory in the configuration path, but you can supply absolute or relative paths.

When sharing sessions, remove sensitive content manually; Chatter does not scrub secrets on export.
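
Before sharing a file, you can skim its contents from the shell. This sketch assumes the transcript lives under a messages array with content fields, which matches the description above but is not a documented schema, so adjust the jq filter to your files:

jq '.messages[].content' my-session.json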

Providers and Models

Chatter supports Google's Gemini models and any model you have pulled locally with Ollama.

Gemini

Gemini requires an API key from Google AI Studio. Chatter defaults to the gemini-2.5-flash model, but you can select other Gemini models with --model or /model in the UI.

Ollama

Install Ollama and run ollama serve. Chatter connects to http://localhost:11434 unless you override the endpoint via configuration. Once Ollama is running, pull any supported model, for example:

ollama pull llama3.1
chatter --provider ollama --model llama3.1

Tool calls are available in Ollama mode, enabling local workflows that need filesystem access coupled with language model reasoning.

Development Guide

Chatter is a Rust 2021 project organized as a single binary crate. Key directories:

  • src/cli — argument parsing and command wiring using clap
  • src/chat — interactive chat runtime and terminal presentation using crossterm and ratatui
  • src/api — provider-specific HTTP clients
  • src/agent — tool definitions and execution
  • src/config — persistent configuration management
  • src/templates — reusable output templates

Building and Testing

cargo fmt
cargo clippy -- -D warnings
cargo test

Use cargo build --release for production builds. The build.sh script wraps a release build plus Homebrew packaging steps.
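
During development it is handy to run the binary straight from the source tree without installing it; everything after the -- is forwarded to Chatter itself:

cargo run -- "Explain ownership in Rust"
cargo run --release -- --provider ollama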

Contributing

  1. Fork the repository
  2. Create a feature branch
  3. Implement the change and any relevant tests
  4. Run the command suite above
  5. Open a pull request with a summary of the change