alfredllm

An Alfred workflow that translates with an LLM

April 24, 2026

alfred-translate is a small Alfred 5 workflow I built for myself. Type tt, then whatever you want translated. Four translations come back, ranked in the order you configured: hit ↩ to copy the highlighted one, or ⌘1–⌘4 to grab any of the others.

Alfred showing the query "tt Hello there, butterfly." with translations to Dutch, French, Spanish and English, each selectable with a Cmd-number shortcut.

I live in the Netherlands and ping between English, Dutch and French in a single afternoon. The browser-based tools all want a tab, a paste, a click. This wants a keystroke.

What's under the hood

The workflow defaults to OpenRouter with Mistral Small 3.2 (24B). It's roughly 1.5 seconds end-to-end, costs pennies per thousand translations, and stays inside the EU. The model holds idiom and tone well enough that I rarely have to second-guess the result.

The full prompt is "translate this into the following languages, return JSON". Mistral Small honors structured output reliably, so the workflow gets back something like {"Dutch":"Hallo daar, vlinder.","French":"Bonjour, papillon.", ...} and Alfred renders one row per key.
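Because the response is a flat JSON object keyed by language, turning it into Alfred rows is a one-liner. A minimal sketch of that parsing step, using jq on the example response above (the response variable is illustrative, not the workflow's actual code):

```shell
# Hypothetical response in the shape the provider returns: one key per language.
response='{"Dutch":"Hallo daar, vlinder.","French":"Bonjour, papillon."}'

# One "<language>: <translation>" line per key, in the order the model returned them
# (jq preserves object key order by default, which matches the configured LANGS order).
echo "$response" | jq -r 'to_entries[] | "\(.key): \(.value)"'
```

The same to_entries walk is all it takes to emit Alfred's JSON items format instead of plain lines.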

Provider as a shell script

The interesting design choice is the provider layer. The Alfred workflow shells out to providers/$PROVIDER.sh with two arguments, text and a comma-separated language list, and expects JSON back on stdout. Two providers ship in the repo:

  • openrouter.sh posts to OpenRouter's chat completions endpoint. Fast.
  • claude.sh shells out to the Claude Code CLI with --model haiku. Useful if you already pay for Claude Code via subscription, but the CLI cold-start adds about five seconds, which is rough on a keystroke-driven workflow.

A codex.sh skeleton is also there if you want to wire up the Codex CLI. Adding your own provider is genuinely about ten lines, because the contract is "read $1 and $2, write JSON to stdout".
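To make the contract concrete, here is a sketch of what a custom provider could look like. This is not the repo's openrouter.sh, just an illustration of the same shape: read $1 and $2, post one request, write the model's JSON to stdout. It assumes OPENROUTER_API_KEY and OPENROUTER_MODEL arrive as environment variables from Alfred's configuration.

```shell
#!/usr/bin/env sh
# Hypothetical provider sketch. $1 = text to translate, $2 = comma-separated languages.
text="$1"
langs="$2"

# Build the request body with jq so quoting in the user's text can't break the JSON.
body=$(jq -n --arg model "$OPENROUTER_MODEL" --arg text "$text" --arg langs "$langs" \
  '{model: $model,
    messages: [{role: "user",
      content: "Translate this into the following languages (\($langs)). Return only a JSON object keyed by language name: \($text)"}]}')

# One HTTP call; print the model's reply (the JSON translation object) to stdout.
curl -s https://openrouter.ai/api/v1/chat/completions \
  -H "Authorization: Bearer $OPENROUTER_API_KEY" \
  -H "Content-Type: application/json" \
  -d "$body" \
  | jq -r '.choices[0].message.content'
```

Swapping in a different backend means changing only the curl call; the workflow never knows the difference.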

This abstraction is the reason the workflow exists at all. Claude Code is excellent for everything else, but five seconds of CLI init for a translation makes Alfred feel sluggish. OpenRouter brings the same Anthropic, Google, and OpenAI models down to one HTTP call, and the provider abstraction lets me drop back to the CLIs only when I actually need them.

Configuration

Everything is set in Alfred's Configure Workflow dialog, no file editing:

  • OPENROUTER_API_KEY (default: empty): Required. Get one at openrouter.ai/keys.
  • LANGS (default: Dutch,French,Spanish,English): Order matters. The first entry is the top result.
  • OPENROUTER_MODEL (default: mistralai/mistral-small-3.2-24b-instruct): Any OpenRouter model. Try google/gemini-2.5-flash for the fastest path.

The API key is marked variablesdontexport, so if you share the workflow it strips itself out automatically.

Install

Grab the latest translate.alfredworkflow from the Releases page, double-click to import, paste your OpenRouter key, done. Or clone the repo and ./build.sh to build it locally.

The whole thing is open source on GitHub. Issues and pull requests welcome, especially for new providers.
