

Bring Your Own Key (BYOK) uses the Factory CLI customModels setting to add models that Factory does not manage for you. Configure the model ID, provider format, base URL, API key, and optional token or reasoning fields once, then choose the custom model from /model in the Factory CLI. For Factory-managed models and usage multipliers, see Available Models.
Store real API keys in environment variables and reference them with `${VAR_NAME}` interpolation. Do not commit API keys to project files.

Configure a custom model

Add custom models to ~/.factory/settings.json or ~/.factory/settings.local.json under the customModels array:
```json
{
  "customModels": [
    {
      "model": "provider-model-id",
      "baseUrl": "https://api.example.com/v1",
      "apiKey": "${PROVIDER_API_KEY}",
      "provider": "generic-chat-completion-api"
    }
  ]
}
```
Set the environment variable in the shell where you launch Droid:
```bash
export PROVIDER_API_KEY="YOUR_API_KEY"
```
Then run /model and select the entry from Custom models. Settings changes usually reload automatically; if the model does not appear, restart Droid and validate the settings JSON.
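Because `${PROVIDER_API_KEY}` is interpolated from the environment when Droid starts, a quick check in the same shell can catch a missing key before it surfaces as an authentication error. A minimal sketch, reusing the variable name from the example above:

```bash
# Set the key (as above) and confirm it is visible in this shell before
# launching Droid; ${PROVIDER_API_KEY} in settings.json interpolates
# from this environment.
export PROVIDER_API_KEY="YOUR_API_KEY"
[ -n "$PROVIDER_API_KEY" ] && echo "PROVIDER_API_KEY is set"
```

If the message does not print, the variable is not set in the shell that will launch Droid.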
Legacy ~/.factory/config.json custom model entries are still loaded for backwards compatibility. Use settings.json or settings.local.json for new BYOK configurations. Environment variable interpolation applies to settings files, not legacy config.json entries.

Provider matrix

Choose the provider value based on the API format exposed by your endpoint, not just the company name.
| Endpoint | provider | Base URL | Use/notes |
|---|---|---|---|
| Anthropic API | anthropic | https://api.anthropic.com | Anthropic Messages API. |
| OpenAI API | openai | https://api.openai.com/v1 | OpenAI Responses API. |
| Baseten | generic-chat-completion-api | https://inference.baseten.co/v1 | Use the deployment model ID. |
| DeepInfra | generic-chat-completion-api | https://api.deepinfra.com/v1/openai | Repository-style model IDs are common. |
| Fireworks AI | generic-chat-completion-api | https://api.fireworks.ai/inference/v1 | Model IDs may include an account path. |
| Groq | generic-chat-completion-api | https://api.groq.com/openai/v1 | Use the Groq console model ID. |
| Hugging Face Router | generic-chat-completion-api | https://router.huggingface.co/v1 | Accept required provider or model terms first. |
| OpenRouter | generic-chat-completion-api | https://openrouter.ai/api/v1 | Add required headers with extraHeaders. |
| Google Gemini | generic-chat-completion-api | https://generativelanguage.googleapis.com/v1beta/openai | OpenAI-compatible Gemini endpoint; Vertex or gateway URLs can also apply. |
| Local/self-hosted servers | generic-chat-completion-api | http://localhost:11434/v1 or https://example.com/v1 | Ollama, vLLM, or other self-hosted /v1 servers. Start the server first. |
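As a concrete sketch of the last row, a local Ollama server could be wired in like this. The model name `llama3.1` and the port are illustrative assumptions; match them to what your server actually exposes, and note that local servers typically ignore the API key even though the field is required:

```json
{
  "customModels": [
    {
      "model": "llama3.1",
      "displayName": "Llama 3.1 (local Ollama)",
      "baseUrl": "http://localhost:11434/v1",
      "apiKey": "ollama",
      "provider": "generic-chat-completion-api"
    }
  ]
}
```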
Custom model quality, latency, image support, prompt caching, and reasoning support depend on the provider and model you choose. Validate the model on your repository before relying on it for production engineering work.

Supported fields

Required fields

| Field | Description |
|---|---|
| model | Model identifier sent to the provider. |
| baseUrl | Provider API base URL. |
| apiKey | API key string or ${VAR_NAME} environment variable reference. |
| provider | API format: anthropic, openai, or generic-chat-completion-api. |

Optional fields

| Field | Description |
|---|---|
| displayName | Human-readable label shown in the model selector. Defaults to model. |
| id | Stable custom model ID. Droid generates one when omitted. |
| index | Numeric ordering index for the custom model. Droid assigns one when omitted. |
| maxOutputTokens | Maximum output tokens to request from the provider. |
| enableThinking | Enables provider thinking/reasoning configuration when supported by the selected API format. |
| thinkingMaxTokens | Maximum thinking tokens when thinking is enabled. |
| reasoningEffort | Reasoning effort hint such as low, medium, or high, when supported. |
| extraHeaders | Additional HTTP headers sent with each request. |
| extraArgs | Additional provider-specific request arguments. |
| noImageSupport | Set to true to disable image inputs. For non-OpenAI and non-Anthropic formats, Droid defaults this to true when omitted; set it to false only after verifying image support. |
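A sketch combining several of the optional fields above (the values are illustrative; keep only the fields your provider actually supports, and verify reasoning and header behavior against the provider's documentation):

```json
{
  "customModels": [
    {
      "model": "provider-model-id",
      "displayName": "My Custom Model",
      "baseUrl": "https://api.example.com/v1",
      "apiKey": "${PROVIDER_API_KEY}",
      "provider": "generic-chat-completion-api",
      "maxOutputTokens": 8192,
      "reasoningEffort": "medium",
      "extraHeaders": { "X-Example-Header": "value" },
      "noImageSupport": true
    }
  ]
}
```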

Troubleshooting

| Problem | Check |
|---|---|
| Model does not appear | Confirm customModels is valid JSON and includes model, baseUrl, apiKey, and provider. |
| Invalid provider | Use exactly anthropic, openai, or generic-chat-completion-api. |
| Authentication fails | Verify the environment variable is set in the shell where you launched Droid and that the provider account has available quota. |
| Requests fail immediately | Confirm the baseUrl matches the API format selected by provider. |
| Local model fails | Start the local server, confirm the model is pulled, and test the /v1 endpoint from the same machine. |
| Optional fields fail | Remove unsupported enableThinking, thinkingMaxTokens, reasoningEffort, extraHeaders, or extraArgs, then add them back only when the provider documents support for them. |
| Images fail | Set noImageSupport to true unless you have verified image input support for that endpoint. |
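The first troubleshooting row can be checked mechanically. A minimal sketch that writes a sample configuration to a temporary path and confirms it parses as JSON (the path and contents are illustrative; run the same `python3 -m json.tool` check against your real ~/.factory/settings.json):

```bash
# Write a sample customModels config; the quoted 'EOF' keeps
# ${PROVIDER_API_KEY} literal so Droid interpolates it at runtime.
cat > /tmp/factory-settings-sample.json <<'EOF'
{
  "customModels": [
    {
      "model": "provider-model-id",
      "baseUrl": "https://api.example.com/v1",
      "apiKey": "${PROVIDER_API_KEY}",
      "provider": "generic-chat-completion-api"
    }
  ]
}
EOF
# json.tool exits non-zero on a parse error, so this only prints on success.
python3 -m json.tool /tmp/factory-settings-sample.json > /dev/null && echo "valid JSON"
```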