# =============================================================================
# REQUIRED SETTINGS
# =============================================================================
# Provider API key (required unless using --dry-run)
#
# Simple setup (shared with other tools):
OPENAI_API_KEY=your-openai-api-key-here
# Or use provider-agnostic key:
# POTOMATIC_API_KEY=your-api-key-here
#
# Multi-provider setup (provider auto-detected from key name):
# GEMINI_API_KEY=your-gemini-key # Auto-detects provider=gemini
# ANTHROPIC_API_KEY=your-anthropic-key # Auto-detects provider=anthropic
#
# Override for Potomatic-specific keys (rare):
# POTOMATIC_GEMINI_API_KEY=your-potomatic-specific-key # Auto-detects provider=gemini
#
# Provider auto-detection: If you set GEMINI_API_KEY, Potomatic automatically uses provider=gemini
# Key precedence: --api-key > POTOMATIC_<PROVIDER>_API_KEY > <PROVIDER>_API_KEY > POTOMATIC_API_KEY > API_KEY
# Provider precedence: --provider > PROVIDER env > auto-detected from key name > default (openai)
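
# Illustrative precedence walk-through (hypothetical values, not defaults):
# with GEMINI_API_KEY=key-a and POTOMATIC_API_KEY=key-b both set and no CLI
# flags, key-a is used (GEMINI_API_KEY outranks POTOMATIC_API_KEY) and the
# provider auto-detects to gemini; setting POTOMATIC_GEMINI_API_KEY=key-c
# would take precedence over key-a.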

# Target languages to translate to (comma-separated locale codes)
# Examples: fr_FR, es_ES, de_DE, ru_RU, zh_CN, ja_JP, ar_AR
TARGET_LANGUAGES=fr_FR,es_ES

# Path to the input .pot file containing source strings
POT_FILE_PATH=./translations.pot

# =============================================================================
# OPENAI SETTINGS
# =============================================================================

# OpenAI model to use for translation
# Options: gpt-4o-mini, gpt-4o, gpt-4-turbo, gpt-3.5-turbo
MODEL=gpt-4o-mini
# Creativity level for translations (0.0-2.0)
# Lower = more deterministic, higher = more creative
TEMPERATURE=0.7
# Maximum completion tokens for OpenAI responses (auto-calculated if not set)
# MAX_TOKENS=4096
# Source language code (language of the .pot file)
SOURCE_LANGUAGE=en

# =============================================================================
# DICTIONARY SYSTEM
# =============================================================================

# Directory containing dictionary files for consistent translations
# Dictionary files should be named: dictionary-{language}.json
DICTIONARY_DIR=./config/dictionaries
# Enable/disable the user dictionary system (default: enabled)
# Set to false to disable dictionary usage entirely
ENABLE_DICTIONARY=true
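
# Illustrative layout (assumed from the naming rule above): with the settings
# above, French terms would live in ./config/dictionaries/dictionary-fr_FR.json;
# see the project docs for the exact JSON schema.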

# =============================================================================
# FILE OUTPUT SETTINGS
# =============================================================================

# Directory to save generated .po files
OUTPUT_DIR=.
# Prefix for output .po files (e.g., "app-" creates "app-fr_FR.po")
# PO_FILE_PREFIX=
# Locale format for file naming
# Options: target_lang (default), wp_locale, iso_639_1, iso_639_2
LOCALE_FORMAT=target_lang
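
# Illustrative filenames (mapping assumed from the format names above) for
# target language fr_FR: target_lang -> fr_FR.po, iso_639_1 -> fr.po,
# iso_639_2 -> fra.po, wp_locale -> fr_FR.po (the WordPress locale code).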
# Path to existing .po file to merge with (optional)
# INPUT_PO_PATH=existing-translations.po
# Output format for results: console or json
OUTPUT_FORMAT=console
# File to save JSON output (if using json format)
# OUTPUT_FILE=results.json

# =============================================================================
# PERFORMANCE & LIMITS
# =============================================================================

# Number of strings per translation batch (1-100)
# Larger batches reduce cost but increase risk of API failures
BATCH_SIZE=20
# Maximum number of languages to translate in parallel (1-10)
CONCURRENT_JOBS=2
# Timeout for OpenAI API requests in seconds (10-300)
TIMEOUT=60
# Limit the number of strings translated per language (for testing)
# MAX_STRINGS_PER_JOB=50
# Limit total strings translated across all languages (forces sequential processing)
# MAX_TOTAL_STRINGS=150
# Limit total estimated translation cost in USD
# MAX_COST=5.00

# =============================================================================
# ERROR HANDLING & RETRIES
# =============================================================================

# Number of retry attempts per batch (0-10)
MAX_RETRIES=3
# Delay between retry attempts in milliseconds (500-30000)
RETRY_DELAY=2000
# Abort entire translation run if any batch fails all retry attempts
# ABORT_ON_FAILURE=false
# Skip current language on failure and continue with remaining languages
# SKIP_LANGUAGE_ON_FAILURE=false

# =============================================================================
# BEHAVIOR & DEBUGGING
# =============================================================================

# Re-translate all strings, ignoring existing translations
# FORCE_TRANSLATE=false
# Simulate translation without making actual OpenAI API calls
# DRY_RUN=false
# Verbosity level: 0=errors, 1=normal, 2=verbose, 3=debug
VERBOSE_LEVEL=1
# Save detailed request/response logs to timestamped files in debug/ directory
# SAVE_DEBUG_INFO=false

# =============================================================================
# TESTING OPTIONS
# =============================================================================

# Simulate OpenAI API failure rate (0.0-1.0) to test retry logic
# TEST_RETRY_FAILURE_RATE=0.1
# Allow complete failure of a batch (disables final fallback)
# TEST_ALLOW_COMPLETE_FAILURE=false