Nova Keys
No browser. No clicking. Just clean, authenticated requests — the way a coder prefers it. Nova Keys give your apps and scripts direct access to the Furtune API.
What are Nova Keys? (Guardian Only)
Nova Keys are secret API tokens that give your code direct access to Furtune — no web app required. Build integrations, automate workflows, or embed the Furtune Family into your own projects. Nova built the interface; what you do with it is up to you.
- OpenAI-compatible completions endpoint — works with existing tooling
- Choose which cat (agent) to use per request with a single header
- Streaming SSE responses by default, non-streaming available
- Billed against your Aimo balance — same as using the app
- Create multiple keys (e.g. one per project), revoke any time
Getting Your Key
- Go to Settings → Security in the Furtune app.
- Scroll to the Nova Keys panel and click Create Key.
- Give your key a name (e.g. "My script" or "Home server") and confirm.
- Copy the key immediately — it starts with ft_ and is shown only once. Store it somewhere safe (e.g. a .env file or a secrets manager). Afterwards only the prefix (e.g. ft_abc1…) is visible in the UI. If you lose the key, revoke it and create a new one — no exceptions.
Authentication
One header. Every request. Pass your Nova Key as a Bearer token in the Authorization header and you're in:

```
Authorization: Bearer ft_your_nova_key_here
```

No cookies, no session, no OAuth dance. Any authenticated route recognizes the Bearer token and knows exactly who you are. For scripts, keep the key out of source control in an environment variable:

```
FURTUNE_API_KEY=ft_...
```
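As a small sketch of the auth plumbing in a script, the headers every request needs can be built from the FURTUNE_API_KEY environment variable above. The helper name auth_headers is hypothetical, not part of any Furtune SDK:

```python
import os

# Minimal sketch: build the two required request headers from the
# FURTUNE_API_KEY environment variable. auth_headers is a made-up
# helper name, not part of the Furtune API.
def auth_headers() -> dict:
    key = os.environ.get("FURTUNE_API_KEY", "")
    if not key.startswith("ft_"):
        raise ValueError("FURTUNE_API_KEY must be set to a Nova Key (ft_...)")
    return {
        "Authorization": f"Bearer {key}",
        "Content-Type": "application/json",
    }
```

Pass the returned dict as the headers of any HTTP client you prefer; the failing fast on a missing or malformed key keeps a typo from surfacing later as a confusing 401.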
Completions Endpoint
This is the one endpoint you need. Send messages, get responses — stateless by design. Nothing is saved to the app, no history is loaded from it. You own the context; Nova handles the intelligence.
The API is OpenAI-compatible. Any library that speaks the OpenAI chat format
— Python's openai package, LangChain, LlamaIndex — works here with minimal
changes. Just swap the base URL and pass your Nova Key as the API key.
Request Headers
| Header | Value | Required |
|---|---|---|
| Authorization | Bearer ft_your_key | Yes |
| Content-Type | application/json | Yes |
Choosing a cat via the model field
Which cat you talk to is set by the model field in the request body — not a
custom header. Use the cat's slug exactly as configured in the admin panel (e.g.
"nova", "fable", "rumi").
Why model?
Each cat has a distinct personality, system prompt, and set of abilities. The model
slug tells Nova exactly who you're calling. It follows the standard OpenAI convention, so any
compatible library works without extra configuration.
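Since the cat is selected with the standard model field, a minimal request body is just an ordinary chat-completion payload. A sketch using the "nova" slug from the examples above:

```python
import json

# Minimal request body: the cat is chosen by the standard OpenAI
# "model" field, using the slug from the admin panel ("nova" here).
body = {
    "model": "nova",
    "messages": [
        {"role": "user", "content": "Hello!"},
    ],
}

# Serialized form, ready to send as the JSON request body.
payload = json.dumps(body)
```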
Request Body
JSON body with the following fields:
| Field | Type | Default | Description |
|---|---|---|---|
| model | string | — | Required. The slug of the cat to use (e.g. "nova", "fable"). Found in the admin panel. |
| messages | Message[] | — | Required. Array of messages (at least one). The cat's system prompt is prepended automatically. |
| stream | boolean | true | Stream the response as SSE. Set to false for a single JSON response. |
| max_tokens | number | agent default | Maximum tokens in the response. Also accepted as max_completion_tokens. |
| temperature | number | agent default | Sampling temperature (0–2). Higher = more creative/random. |
| top_p | number | agent default | Nucleus sampling parameter. |
| tools | Tool[] | — | Optional. OpenAI-format tool definitions for function calling. |
| tool_choice | string \| object | "auto" | Tool selection strategy. Only used if tools is provided. |
Message format
| Field | Type | Description |
|---|---|---|
| role | "user" \| "assistant" \| "system" \| "tool" | Message author. |
| content | string \| null | Message text. |
| tool_calls | ToolCall[] | For assistant messages that made tool calls. |
| tool_call_id | string | For tool result messages. |
| name | string | For tool result messages, the tool name. |
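The tool-related fields in the table combine into a three-message round trip: the user asks, the assistant emits a tool call, and your code replies with a tool result. A sketch — the get_weather tool and the call id are made-up examples, not built-in Furtune abilities:

```python
# Hypothetical tool-call round trip built from the message fields above.
# The tool name "get_weather" and the id "call_1" are invented examples.
messages = [
    # 1. The user's question.
    {"role": "user", "content": "What's the weather in Oslo?"},
    # 2. The assistant's reply: no text, one tool call.
    {
        "role": "assistant",
        "content": None,
        "tool_calls": [
            {
                "id": "call_1",
                "type": "function",
                "function": {"name": "get_weather", "arguments": '{"city": "Oslo"}'},
            }
        ],
    },
    # 3. Your tool result, linked back via tool_call_id and name.
    {
        "role": "tool",
        "tool_call_id": "call_1",
        "name": "get_weather",
        "content": '{"temp_c": 4, "conditions": "rain"}',
    },
]
```

Send this whole array back in messages (along with your tools definitions) and the cat continues the conversation with the tool result in context.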
Response Format
Streaming (default)
When stream: true (the default), the response is a
text/event-stream with standard SSE chunks:
```
// Each chunk:
data: {"id":"chatcmpl-...","choices":[{"delta":{"content":"Hello"},"finish_reason":null}]}
data: {"id":"chatcmpl-...","choices":[{"delta":{"content":"! How can I help?"},"finish_reason":null}]}

// Final chunk includes usage:
data: {"id":"chatcmpl-...","choices":[{"delta":{},"finish_reason":"stop"}],"usage":{"prompt_tokens":42,"completion_tokens":18,"total_tokens":60}}

data: [DONE]
```
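If you'd rather not pull in an SDK, the chunk format above is easy to parse by hand: take each data: line, stop at [DONE], and concatenate the delta contents. A dependency-free sketch:

```python
import json

# Sketch: reassemble the full reply from SSE lines in the chunk
# format shown above (lines already decoded and split on newlines).
def collect_sse_text(lines):
    parts = []
    for line in lines:
        if not line.startswith("data: "):
            continue  # skip comments, blank lines, keep-alives
        data = line[len("data: "):]
        if data == "[DONE]":
            break  # end-of-stream sentinel
        chunk = json.loads(data)
        delta = chunk["choices"][0]["delta"]
        if delta.get("content"):
            parts.append(delta["content"])
    return "".join(parts)
```

The final usage chunk has an empty delta, so it falls through the content check and only its bookkeeping fields are ignored here; read chunk.get("usage") in the same loop if you want token counts.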
Non-Streaming
When stream: false, the response is a single JSON object:
```json
{
  "id": "chatcmpl-abc123",
  "object": "chat.completion",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "Hello! How can I help you today?"
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 42,
    "completion_tokens": 18,
    "total_tokens": 60
  }
}
```
Error Reference
| Status | Code / Body | Meaning |
|---|---|---|
| 400 | { "error": "Bad Request", "details": [...] } | Missing or invalid request body, or missing model field. |
| 401 | { "error": "Unauthorized" } | No key provided, key is invalid, or key has been revoked. |
| 402 | { "error": "Insufficient Aimo", "code": "INSUFFICIENT_AIMO" } | Your Aimo balance is too low. Wait for your daily refill or top up. |
| 404 | { "error": "Not found", "details": ["Agent not found"] } | The model slug does not match any known cat. |
| 502 | { "error": "Upstream connection failed", "details": [...] } | Furtune couldn't reach the underlying AI provider. Retry after a moment. |
To detect a low balance programmatically, check for a 402 with code === "INSUFFICIENT_AIMO". Nova is generous with daily refills — but she won't run on empty.
Code Examples
Replace ft_your_key with your real Nova Key and "nova" with the slug of the cat you want. Copy, paste, run.
Streaming (default)
```bash
curl https://furtune.app/api/completions \
  -X POST \
  -H "Authorization: Bearer ft_your_key" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "nova",
    "messages": [
      { "role": "user", "content": "Write a haiku about the ocean." }
    ],
    "stream": true
  }' \
  --no-buffer
```
```python
from openai import OpenAI

client = OpenAI(
    api_key="ft_your_key",
    base_url="https://furtune.app/api",
)

stream = client.chat.completions.create(
    model="nova",
    messages=[
        {"role": "user", "content": "Write a haiku about the ocean."}
    ],
    stream=True,
)

for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)
```
```typescript
import OpenAI from "openai";

const client = new OpenAI({
  apiKey: "ft_your_key",
  baseURL: "https://furtune.app/api",
});

const stream = await client.chat.completions.create({
  model: "nova",
  messages: [
    { role: "user", content: "Write a haiku about the ocean." }
  ],
  stream: true,
});

for await (const chunk of stream) {
  const delta = chunk.choices[0]?.delta?.content;
  if (delta) process.stdout.write(delta);
}
```
Non-Streaming
```bash
curl https://furtune.app/api/completions \
  -X POST \
  -H "Authorization: Bearer ft_your_key" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "nova",
    "messages": [
      { "role": "user", "content": "What is 2 + 2?" }
    ],
    "stream": false
  }'
```
```python
from openai import OpenAI

client = OpenAI(
    api_key="ft_your_key",
    base_url="https://furtune.app/api",
)

response = client.chat.completions.create(
    model="nova",
    messages=[{"role": "user", "content": "What is 2 + 2?"}],
    stream=False,
)

print(response.choices[0].message.content)
```
```typescript
import OpenAI from "openai";

const client = new OpenAI({
  apiKey: "ft_your_key",
  baseURL: "https://furtune.app/api",
});

const response = await client.chat.completions.create({
  model: "nova",
  messages: [{ role: "user", content: "What is 2 + 2?" }],
  stream: false,
});

console.log(response.choices[0].message.content);
```
Multi-turn conversation
Since the endpoint is stateless, you maintain conversation history yourself and send it with each request:
```python
from openai import OpenAI

client = OpenAI(
    api_key="ft_your_key",
    base_url="https://furtune.app/api",
)

history = []
while True:
    user_input = input("You: ")
    history.append({"role": "user", "content": user_input})
    response = client.chat.completions.create(
        model="nova",
        messages=history,
        stream=False,
    )
    reply = response.choices[0].message.content
    print(f"Cat: {reply}")
    history.append({"role": "assistant", "content": reply})
```
```typescript
import OpenAI from "openai";

const client = new OpenAI({
  apiKey: "ft_your_key",
  baseURL: "https://furtune.app/api",
});

const history: OpenAI.Chat.ChatCompletionMessageParam[] = [];

async function chat(userMessage: string) {
  history.push({ role: "user", content: userMessage });
  const response = await client.chat.completions.create({
    model: "nova",
    messages: history,
    stream: false,
  });
  const reply = response.choices[0].message.content ?? "";
  history.push({ role: "assistant", content: reply });
  return reply;
}
```
Aimo Billing
Every API call draws from the same Aimo balance you use in the app. Nova accounts for every token — no surprises on your balance. Billing runs in two phases:
- Pre-charge — A hold is placed on your Aimo when the request begins, ensuring you have sufficient balance.
- Settle — When the response finishes (or the stream ends), the actual Aimo cost is calculated from real token usage and deducted. Any excess from the pre-charge is returned.
Cancel a stream early and you're refunded — Nova only charges for what was actually generated.
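The two phases above amount to simple hold-and-settle bookkeeping. A hypothetical illustration — the class, the hold of 30, and the cost of 18 are made up, since real Aimo costs depend on actual token usage:

```python
# Hypothetical illustration of the two-phase billing described above.
# Amounts are invented; real costs come from real token usage.
class AimoAccount:
    def __init__(self, balance: int):
        self.balance = balance
        self.held = 0

    def pre_charge(self, hold: int) -> None:
        # Phase 1: place a hold when the request begins.
        if self.balance < hold:
            raise RuntimeError("INSUFFICIENT_AIMO")  # surfaces as HTTP 402
        self.balance -= hold
        self.held = hold

    def settle(self, actual_cost: int) -> None:
        # Phase 2: deduct the real cost, return the unused hold.
        self.balance += self.held - actual_cost
        self.held = 0

account = AimoAccount(balance=100)
account.pre_charge(30)  # hold placed up front
account.settle(18)      # only 18 actually consumed; 12 returned
```

Cancelling a stream early just means settle runs with a smaller actual_cost, which is why you are only ever charged for what was generated.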
Key Management API
Prefer to manage keys from code? These endpoints let you list, create, and revoke Nova Keys programmatically. Note: they require a valid session (logged-in user), not a Nova Key itself.
List Keys
```
// Response
{
  "apiKeys": [
    {
      "id": "key-uuid",
      "name": "My MacBook",
      "start": "ft_abc1...",  // visible prefix only
      "lastRequest": "2026-03-29T10:00:00Z",
      "expiresAt": null,
      "createdAt": "2026-03-20T12:00:00Z"
    }
  ]
}
```
Create Key
```
// Request body
{ "name": "My MacBook" }

// Response — full key shown ONLY in this response
{ "key": "ft_full_key_shown_only_here", ... }
```
Revoke Key
```
// Response
{ "success": true }
```
FAQ
Do I need to include message history?
Yes. The /api/completions endpoint is stateless — it has no memory of previous
requests. To have a multi-turn conversation, include the full history in the
messages array with each call (user and assistant turns alternating).
What values can I pass for model?
The model field takes the cat's slug — a short, human-readable identifier
configured in the admin panel (e.g. "nova", "fable",
"rumi"). Each slug maps to a specific cat with its own system prompt, engine,
and abilities. If the slug doesn't match any cat, you'll get a 404.
How many Nova Keys can I have?
There's no hard limit. Create one per app, script, or environment. You can revoke any key at any time from Settings → Security.
Do keys expire?
Keys don't expire by default (the expiresAt field will be null).
You can revoke a key manually at any time.
Is conversation content saved?
No. The /api/completions endpoint does not persist messages to the database.
Your conversation data stays in your own app.
Why am I getting a 402 error?
Your Aimo balance is insufficient. The daily refill runs automatically on your next API call, so if it's a new day your balance should refill. If you've exhausted your daily Aimo, wait for tomorrow's refill or check your tier limits in the app.