Local-first grammar and translation workbench that keeps every request on your machine. The project pairs a Vite + React frontend with an Express backend that streams responses from Ollama models for quick feedback.
- Ships with sensible defaults (`gemma3`) but works with any chat-capable Ollama model, including hosted variants that end with `-cloud`.
- Ollama can run locally (`ollama serve`) or the server can start it automatically; simply add the model ID to `project.config.json`.
- Grammar fixes arrive through an inline diff viewer, while translation mode handles auto language detection, punctuation preferences, and paragraph chunking.
- Inline diff review before you copy a grammar fix.
- Translator mode with language detection, tone and strictness controls, and customizable punctuation.
- Persistent settings for preferred models, spelling style, and measurement units.
- Express server streaming incremental responses, exposing health/config endpoints, and optionally auto-starting Ollama.
- Configurable host, port, concurrency, and timeouts via environment variables or `server/config.json`.
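The incremental streaming mentioned above follows Ollama's newline-delimited JSON (NDJSON) chat format, where each line carries a `message.content` fragment and the final line sets `done: true`. As a rough sketch (the `accumulate` helper is hypothetical, not the app's actual code), reassembling a streamed response looks like this:

```javascript
// Sketch: reassemble an Ollama-style NDJSON chat stream into final text.
// Each non-empty line is a JSON chunk; `message.content` fragments are
// concatenated and `done: true` marks the last chunk.
function accumulate(ndjsonText) {
  let text = "";
  let done = false;
  for (const line of ndjsonText.split("\n")) {
    if (!line.trim()) continue; // skip blank lines between chunks
    const chunk = JSON.parse(line);
    if (chunk.message && chunk.message.content) text += chunk.message.content;
    if (chunk.done) done = true;
  }
  return { text, done };
}
```

In the real app the chunks arrive over HTTP and are forwarded to the browser as they come in, rather than being buffered like this.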
- Node.js 18+ (latest LTS recommended) and npm
- Ollama with at least one chat-capable model (defaults to `gemma3`)
```
git clone https://github.com/vadondaniel/local-grammar-translate
cd local-grammar-translate
npm install --prefix server
npm install --prefix client
```

Prefer a classic workflow? Run `npm install` in each directory instead of using `--prefix`.
```
ollama pull gemma3
```

Swap `gemma3` with any grammar- or translation-friendly model you plan to use.
```
# Terminal 1
cd server
node server.js
```

```
# Terminal 2
cd client
npm run dev
```

Visit http://localhost:5173. Both processes watch for file changes and hot-reload. Press Ctrl+C in each terminal to stop them.
- Sets the Express server port, Vite dev/preview port, and the model catalog shown in the UI (`id` must match the Ollama identifier; `name` is the label).
- Both server and client read this file during startup; restart your dev processes after editing it.
- The first listed model becomes the default for both Grammar and Translator modes; user choices persist in `localStorage`.
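A hypothetical `project.config.json` consistent with the description above might look like the following. Only `id`, `name`, `defaultModel`, and `translatorDefaultModel` are field names this README mentions; the port keys and model entries are illustrative assumptions:

```json
{
  "serverPort": 3001,
  "clientPort": 5173,
  "defaultModel": "gemma3",
  "translatorDefaultModel": "gemma3",
  "models": [
    { "id": "gemma3", "name": "Gemma 3" },
    { "id": "qwen2.5", "name": "Qwen 2.5" }
  ]
}
```

Check the actual file in the repository root for the exact schema.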
- Controls Ollama connectivity (`OLLAMA_HOST`, `OLLAMA_PORT`, autostart, concurrency, timeouts).
- Appears in the app under Settings > Server, where you can tweak values live and optionally persist them back to disk.
- Configuration precedence (later entries win):
  - Built-in defaults
  - Environment variables (`OLLAMA_HOST`, `OLLAMA_PORT`, `OLLAMA_AUTOSTART`, `OLLAMA_START_TIMEOUT_MS`, `OLLAMA_RUN_TIMEOUT_MS`, `OLLAMA_CONCURRENCY`)
  - `server/config.json`
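The precedence above amounts to a layered merge: start from built-in defaults, overlay any matching environment variables, then overlay the file. A minimal sketch (the helper name and default values are illustrative, not the server's actual code):

```javascript
// Sketch of layered config resolution: defaults < environment < config file.
// Later layers win, matching the precedence list above.
function resolveConfig(env, fileConfig) {
  const defaults = { OLLAMA_HOST: "127.0.0.1", OLLAMA_PORT: 11434 };
  const fromEnv = {};
  for (const key of Object.keys(defaults)) {
    if (env[key] !== undefined) fromEnv[key] = env[key];
  }
  return { ...defaults, ...fromEnv, ...fileConfig };
}
```

This ordering means a value saved to `server/config.json` from the Settings dialog sticks even when an environment variable is also set.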
Example `server/config.json`:

```json
{
  "OLLAMA_HOST": "127.0.0.1",
  "OLLAMA_PORT": 11434,
  "OLLAMA_AUTOSTART": true,
  "OLLAMA_START_TIMEOUT_MS": 15000,
  "OLLAMA_RUN_TIMEOUT_MS": 120000,
  "OLLAMA_CONCURRENCY": 3
}
```

Set `OLLAMA_AUTOSTART` to `false` if you prefer to run `ollama serve` yourself. The server validates model IDs on each request and falls back to the default if an unknown model is requested.
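The validation-with-fallback behavior described above can be sketched as follows (`resolveModel` is a hypothetical helper; the server's real implementation may differ):

```javascript
// Sketch: accept a requested model only if it appears in the configured
// catalog; otherwise fall back to the default model ID.
function resolveModel(requestedId, catalog, defaultId) {
  return catalog.some((m) => m.id === requestedId) ? requestedId : defaultId;
}
```

Keeping this check server-side means a stale or hand-edited `localStorage` value can never route a request to a model Ollama doesn't have.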
- Both modes share the same model list from `project.config.json`; Grammar uses `defaultModel`, Translator uses `translatorDefaultModel`.
- Changes made in Settings update `localStorage` so the app reopens with your last choice. Check Persist to push server-side changes (timeouts, ports) back to `server/config.json`.
- Lightweight (~4 GB) models usually handle grammar cleanup; larger models shine when translating long or multiple paragraphs together.
- Explore Ollama's catalog for alternatives: https://ollama.com/search. Hosted models ending in `-cloud` require at least a free Ollama account and have usage limits, but remove local hardware requirements.
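Restoring a persisted choice on startup needs the same guard as the server: the saved ID is only used if it still exists in the catalog. A sketch, with an injectable storage object for clarity (the `preferredModel` key and helper name are assumptions, not the app's actual code):

```javascript
// Sketch: prefer the model saved in localStorage if it is still in the
// catalog; otherwise fall back to the configured default.
function restoreModelChoice(storage, catalog, defaultId) {
  const saved = storage.getItem("preferredModel");
  const stillValid = saved && catalog.some((m) => m.id === saved);
  return stillValid ? saved : defaultId;
}
```

This keeps the UI consistent if a model is removed from `project.config.json` between sessions.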
- Paste text, pick Grammar Fixer or Translator, and click the action button.
- Translation mode lets you pick source/target languages, punctuation style, and chunking rules.
- The diff viewer shows original vs. revised text; use the copy button to grab results.
- The gear icon opens settings for models, tone/strictness, units, spelling style, and server connectivity.
```
npm run build --prefix client
```

Build artifacts land in `client/dist`. Serve them with your preferred host and keep `server/server.js` running behind the same or a proxied origin. Adjust CORS or reverse-proxy rules to suit your deployment target.
- `client/` - React + Vite frontend (TypeScript)
- `server/` - Express backend plus Ollama integration
- `project.config.json` - Shared ports and model catalog
- `README.md` - Project overview and usage guide
- Point the server at a different Ollama host/port via `server/config.json` or environment variables.
- Customize default models, tone, and translator preferences through the in-app settings dialog.
- Run `npm run lint --prefix client` before committing to catch TypeScript or JSX issues.