# Model Providers
Taskter’s agent system is model‑agnostic. A provider layer adapts the neutral agent loop to a specific LLM API. Providers convert between Taskter’s message history and the provider’s wire format, and translate responses into either a text completion or a tool call.
All provider requests and responses are mirrored to `.taskter/api_responses.log` so that you can inspect the exact JSON being exchanged when debugging a new integration.
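Concretely, a provider implements a small trait. The sketch below is reconstructed from the example in "Add a New Provider" further down; the authoritative definitions live in `src/providers/mod.rs` and may differ in detail.

```rust
use anyhow::Result;
use serde_json::Value;
use crate::agent::Agent;

// Reconstructed sketch; see src/providers/mod.rs for the real definitions.
pub enum ModelAction {
    /// The model requested a tool invocation.
    ToolCall { name: String, args: Value },
    /// The model produced a final text completion.
    Text { content: String },
}

pub trait ModelProvider {
    fn name(&self) -> &'static str;
    fn api_key_env(&self) -> &'static str;
    /// Convert the agent's prompts into the provider's message format.
    fn build_history(&self, agent: &Agent, user_prompt: &str) -> Vec<Value>;
    /// Feed a tool's output back into the conversation history.
    fn append_tool_result(&self, history: &mut Vec<Value>, tool: &str, args: &Value, tool_response: &str);
    /// Advertise the agent's tools in the target API's schema.
    fn tools_payload(&self, agent: &Agent) -> Value;
    fn endpoint(&self, agent: &Agent) -> String;
    fn request_body(&self, history: &[Value], tools: &Value) -> Value;
    /// Interpret a raw response as either a tool call or a completion.
    fn parse_response(&self, v: &Value) -> Result<ModelAction>;
    fn headers(&self, api_key: &str) -> Vec<(String, String)>;
}
```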
## Built-in Providers
- Gemini (default): selected when `agent.model` starts with `gemini`.
  - Env var: `GEMINI_API_KEY`
  - Code: `src/providers/gemini.rs`
- OpenAI: selected when `agent.model` starts with `gpt-4`, `gpt-5`, `gpt-4o`, `gpt-4.1`, `o1`, `o3`, `o4`, or `omni`.
  - Env var: `OPENAI_API_KEY`
  - Code: `src/providers/openai.rs`
  - APIs:
    - Chat Completions: used for models like `gpt-4o` and `gpt-4o-mini`. Tools are passed as `{"type":"function","function":{...}}` and responses carry `choices[0].message.tool_calls[]`.
    - Responses API: used for `gpt-4.1`, `gpt-5`, o-series, and Omni models. Input is an item list; tool calls arrive as `{"type":"function_call", name, arguments, call_id}` in `output[]`, and you must append both the `function_call` and a `function_call_output` item with the same `call_id`.
  - Optional overrides (see the sketch after this list for how the base URL resolves):
    - `OPENAI_BASE_URL` to point at a proxy (`https://api.openai.com` by default)
    - `OPENAI_CHAT_ENDPOINT` / `OPENAI_RESPONSES_ENDPOINT` for full URL control
    - `OPENAI_REQUEST_STYLE=chat|responses` to force the request format
    - `OPENAI_RESPONSE_FORMAT` containing either a JSON blob (e.g. `{"type":"json_object"}`) or shorthand (`json_object`)
- Ollama: selected when `agent.model` starts with `ollama:`, `ollama/`, or `ollama-`.
  - Env var: `OLLAMA_BASE_URL` (defaults to `http://localhost:11434`)
  - Code: `src/providers/ollama.rs`
  - Uses the local `/api/chat` endpoint and mirrors the Chat Completions tool schema.
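As a concrete note on the OpenAI overrides above, the base-URL resolution can be pictured as a plain environment lookup with the documented default. This is an illustrative sketch, not the actual code in `src/providers/openai.rs` (which also honors the endpoint and request-style overrides).

```rust
use std::env;

// Illustrative only: resolve the OpenAI base URL from the override variable,
// falling back to the documented default.
fn openai_base_url() -> String {
    env::var("OPENAI_BASE_URL").unwrap_or_else(|_| "https://api.openai.com".to_string())
}
```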
## Configure a Provider
- Choose a model string when creating or updating an agent (e.g. `gemini-2.5-pro`, `gpt-4.1`, `o1-mini`, or `ollama:llama3`).
- Set the provider explicitly when running CLI commands by passing `--provider gemini|openai|ollama`. To clear a stored provider, use `taskter agent update --provider none …`; new agent creation does not accept `none`. When no provider is stored, Taskter falls back to model-name heuristics.
- Export the provider's API key environment variable before running agents:
  - Gemini: `export GEMINI_API_KEY=your_key_here`
  - OpenAI: `export OPENAI_API_KEY=your_key_here`
  - Ollama does not require an API key. Optionally set `OLLAMA_BASE_URL` if your daemon listens somewhere other than `http://localhost:11434`.
If no valid API key is present, Taskter falls back to an offline simulation. No real tool calls are made: agents that include the `send_email` tool return a stubbed success comment, while all other agents are marked as failed so you can spot the missing credentials.
## Add a New Provider

Implement the `ModelProvider` trait and register it in `select_provider`.
- Create a file under `src/providers/`, e.g. `my_provider.rs`:
```rust
use serde_json::{json, Value};
use anyhow::Result;

use crate::agent::Agent;
use super::{ModelAction, ModelProvider};

pub struct OpenAIProvider;

impl ModelProvider for OpenAIProvider {
    fn name(&self) -> &'static str {
        "openai"
    }

    fn api_key_env(&self) -> &'static str {
        "OPENAI_API_KEY"
    }

    fn build_history(&self, agent: &Agent, user_prompt: &str) -> Vec<Value> {
        // Fold the system prompt into the first user message.
        vec![json!({
            "role": "user",
            "content": [
                {"type": "text", "text": format!("System: {}\nUser: {}", agent.system_prompt, user_prompt)}
            ]
        })]
    }

    fn append_tool_result(&self, history: &mut Vec<Value>, tool: &str, _args: &Value, tool_response: &str) {
        history.push(json!({
            "role": "tool",
            "content": [{
                "type": "tool_result",
                "name": tool,
                "content": tool_response
            }]
        }));
    }

    fn tools_payload(&self, agent: &Agent) -> Value {
        // Map our FunctionDeclaration to the OpenAI tools schema.
        json!(agent.tools.iter().map(|t| {
            json!({
                "type": "function",
                "name": t.name,
                "description": t.description,
                "parameters": t.parameters,
                "strict": true
            })
        }).collect::<Vec<_>>())
    }

    fn endpoint(&self, _agent: &Agent) -> String {
        "https://api.openai.com/v1/responses".to_string()
    }

    fn request_body(&self, history: &[Value], tools: &Value) -> Value {
        json!({
            "model": "gpt-4.1",
            "input": history,
            "tools": tools
        })
    }

    fn parse_response(&self, v: &Value) -> Result<ModelAction> {
        // Responses API: tool calls arrive as `function_call` items in
        // `output[]`, with `arguments` serialized as a JSON string.
        if let Some(items) = v["output"].as_array() {
            for item in items {
                if item["type"] == "function_call" {
                    let name = item["name"].as_str().unwrap_or_default().to_string();
                    let args = serde_json::from_str(item["arguments"].as_str().unwrap_or("{}"))
                        .unwrap_or(Value::Null);
                    return Ok(ModelAction::ToolCall { name, args });
                }
            }
        }
        // Fall back to the aggregated text content.
        let text = v["output_text"].as_str().unwrap_or("").to_string();
        Ok(ModelAction::Text { content: text })
    }

    fn headers(&self, api_key: &str) -> Vec<(String, String)> {
        vec![
            ("Authorization".into(), format!("Bearer {}", api_key)),
            ("Content-Type".into(), "application/json".into()),
        ]
    }
}
```
- Register it in `select_provider` inside `src/providers/mod.rs`:
```rust
pub fn select_provider(agent: &Agent) -> Box<dyn ModelProvider + Send + Sync> {
    let model = agent.model.to_lowercase();
    if model.starts_with("gemini") {
        Box::new(gemini::GeminiProvider)
    } else if model.starts_with("gpt-") {
        Box::new(openai::OpenAIProvider)
    } else {
        // Add a branch above for your provider's model prefix;
        // anything unmatched falls back to Gemini.
        Box::new(gemini::GeminiProvider)
    }
}
```
- Set the API key and choose a matching model:

```sh
export OPENAI_API_KEY=your_key

# Example agent
taskter agent add --prompt "Be helpful" --tools run_bash --model my-model --provider openai
```
## Notes
- The agent loop is neutral: it asks a provider for one step, executes a tool if requested, and appends the result via the provider to maintain the correct message format (a sketch of this loop follows below).
- Providers must ensure tools are represented in the target API's expected schema and that responses are robustly parsed into `ModelAction`.
- See `src/providers/gemini.rs` and `src/providers/openai.rs` as complete reference implementations.
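For orientation, here is a minimal sketch of that neutral loop, assuming the trait shown earlier plus two hypothetical helpers (`post_json` for the HTTP round trip, `execute_tool` for tool dispatch). The real implementation also mirrors traffic to `.taskter/api_responses.log` and handles the offline fallback.

```rust
use anyhow::Result;
use serde_json::Value;

// Hypothetical stand-ins for Taskter's internals; stubbed in this sketch.
fn post_json(_url: &str, _headers: &[(String, String)], _body: &Value) -> Result<Value> {
    unimplemented!("HTTP client omitted in this sketch")
}
fn execute_tool(_name: &str, _args: &Value) -> Result<String> {
    unimplemented!("tool dispatch omitted in this sketch")
}

fn run_step_loop(
    provider: &dyn ModelProvider,
    agent: &Agent,
    api_key: &str,
    user_prompt: &str,
) -> Result<String> {
    let mut history = provider.build_history(agent, user_prompt);
    let tools = provider.tools_payload(agent);
    loop {
        let body = provider.request_body(&history, &tools);
        let response = post_json(&provider.endpoint(agent), &provider.headers(api_key), &body)?;
        match provider.parse_response(&response)? {
            // The model asked for a tool: run it, then feed the result back
            // in the provider's own message format.
            ModelAction::ToolCall { name, args } => {
                let output = execute_tool(&name, &args)?;
                provider.append_tool_result(&mut history, &name, &args, &output);
            }
            // A plain completion ends the loop.
            ModelAction::Text { content } => return Ok(content),
        }
    }
}
```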
## OpenAI Responses: Tool Calling Flow
The Responses API differs from Chat Completions. A typical multi‑turn flow:
- Send input as a list (start with the user message):

  ```json
  [
    {"role": "user", "content": [{"type": "input_text", "text": "Use run_bash to echo hello"}]}
  ]
  ```

- The model returns an `output` array which can include:

  ```json
  {"type": "function_call", "name": "run_bash", "arguments": "{\"command\":\"echo hello\"}", "call_id": "call_123"}
  ```

- Execute the tool, then append both items to your input list:

  ```json
  {"type": "function_call", "call_id": "call_123", "name": "run_bash", "arguments": "{\"command\":\"echo hello\"}"},
  {"type": "function_call_output", "call_id": "call_123", "output": "hello"}
  ```

- Call the Responses API again with the expanded `input` and the same `tools`. The model will produce a final `message` with `output_text`.
Taskter automates these steps inside the provider, including the `call_id` wiring and the multi‑turn loop.
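If you are implementing this wiring yourself, the append step can be sketched with `serde_json` as below. The item shapes mirror the flow above; `append_call_round` is an illustrative helper, not part of Taskter's API.

```rust
use serde_json::{json, Value};

// Append one completed tool round to the Responses `input` list.
// Both items must carry the same `call_id`, as described above.
fn append_call_round(input: &mut Vec<Value>, call_id: &str, name: &str, arguments: &str, output: &str) {
    input.push(json!({
        "type": "function_call",
        "call_id": call_id,
        "name": name,
        "arguments": arguments,
    }));
    input.push(json!({
        "type": "function_call_output",
        "call_id": call_id,
        "output": output,
    }));
    // e.g. append_call_round(&mut input, "call_123", "run_bash",
    //                        "{\"command\":\"echo hello\"}", "hello");
}
```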