Commit 33e0208

bechols and claude committed
Add pre-production testing best practices guide
Co-Authored-By: Claude Opus 4.5 <[email protected]>
1 parent 567a46a commit 33e0208

File tree

3 files changed: +381 -0 lines changed

docs/best-practices/index.mdx

Lines changed: 3 additions & 0 deletions
@@ -54,3 +54,6 @@ This section is intended for:

- **[Worker Deployment and Performance](./worker.mdx)** Best practices for deploying and optimizing Temporal Workers for
  performance and reliability.

- **[Pre-Production Testing](./pre-production-testing.mdx)** Experience-driven testing practices covering failure
  injection, load testing, and operational validation.

docs/best-practices/pre-production-testing.mdx

Lines changed: 377 additions & 0 deletions
@@ -0,0 +1,377 @@
---
title: Pre-production testing
sidebar_label: Pre-Production Testing
description: Experience-driven testing practices for teams running Temporal applications, covering failure injection, load testing, and operational validation.
toc_max_heading_level: 4
keywords:
  - testing
  - pre-production
  - load testing
  - chaos engineering
  - best practices
tags:
  - Best Practices
  - Temporal Cloud
---

This guide collects practical, experience-driven testing practices for teams running Temporal applications.
The goal is not just to verify that things recover after failure, but to build confidence that *recovery*, *correctness*, *consistency*, and *operability* hold under real-world conditions.

The scenarios below assume familiarity with Temporal concepts such as Namespaces, Workers, Task Queues, History shards, Timers, and Workflow replay.
Start with [Understanding Temporal](/evaluate/understanding-temporal#durable-execution) if you need background.

Before starting any load testing in Temporal Cloud, we recommend connecting with your Temporal Account team and our Developer Success Engineering team.

## Guiding principles

Before diving into specific experiments, keep these principles in mind:

- **Failure is normal**: Temporal is designed to survive failures, but *your application logic* must be too.
- **Partial failure is often harder than total failure**: Systems that are "mostly working" expose the most flaws.
- **Recovery paths deserve as much testing as steady state**: Analyze how your application behaves while recovering as closely as you analyze how it behaves while failing.
- **Build observability before you break things**: Ensure metrics, logs, and visibility tools are in place before injecting failures.
- **Testing is a continual process**: Testing is never finished; it is an ongoing practice.

## Network-level testing (optional)

**Relevant best practices**: Idempotent Activities, bounded retries, appropriate timeouts

- [Activity timeouts](https://temporal.io/blog/activity-timeouts)
- [Idempotency and durable execution](https://temporal.io/blog/idempotency-and-durable-execution)

### Remove network connectivity to a Namespace

**What to test**

Temporarily block all network access between Workers and the Temporal service for a Namespace.

**Why it matters**

- Validates Worker retry behavior, Sticky Task Queue behavior, Worker recovery performance, backoff policies, and Workflow replay determinism under prolonged disconnection.
- Ensures no assumptions are made about "always-on" connectivity.

**Temporal failure modes exercised**

- Workflow Task timeouts vs retries
- Activity retry semantics
- Replay correctness after long gaps

**How to run this**

- **Kubernetes**: Apply a NetworkPolicy that denies egress from Worker pods to the Temporal APIs (see the sketch after this list).
- **[ToxiProxy](https://github.com/Shopify/toxiproxy)**: Route Worker-to-Temporal traffic through a proxy and inject connection drops or timeouts.
- **Chaos Mesh / Litmus**: NetworkChaos with full packet drop.
- **Local testing**: Block ports with iptables or firewall rules.
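
For the Kubernetes option, here is a minimal sketch of a deny-egress NetworkPolicy applied with `kubectl`. The `workers` namespace, the `app=temporal-worker` label, and the policy name are illustrative assumptions; adjust them to your deployment, and note that NetworkPolicies are only enforced if your cluster's network plugin supports them.

```bash
# Minimal sketch: deny all egress from Worker pods so they cannot reach the
# Temporal service. Namespace, label, and policy name are assumptions.
kubectl apply -n workers -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-worker-egress
spec:
  podSelector:
    matchLabels:
      app: temporal-worker
  policyTypes:
    - Egress
  egress: []            # no egress rules listed, so all egress is blocked
EOF

# Remove the policy to restore connectivity and observe recovery.
kubectl delete networkpolicy deny-worker-egress -n workers
```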

**Things to watch**

- Workflow failures (replay, timeout)
- Workflow Task retries
- Activity failures and their classification (retryable vs non-retryable)
- Worker CPU usage during reconnect storms

## Worker testing

**Relevant best practices**: Appropriate timeouts, managing Worker shutdown, idempotency

- [Worker shutdown](/encyclopedia/workers/worker-shutdown)

### Kill all Workers, then restart them

**What to test**

Abruptly terminate all Workers processing a Task Queue, then restart them.

**Why it matters**

- Validates at-least-once execution semantics.
- Ensures Activities are idempotent and Workflows replay cleanly.
- Validates Task timeouts and retries, and confirms that Workers can finish in-flight business processes after a restart.

**How to run this**

Depending on execution environment:

- **Kubernetes**: Set pod count to zero, then scale back up:
  ```bash
  # Scale the Worker deployment to zero, then restore it (replica count is an example)
  kubectl scale deployment <deployment-name> --replicas=0 -n <namespace>
  kubectl scale deployment <deployment-name> --replicas=3 -n <namespace>
  ```
- **Azure App Service**:
  ```bash
  # Abruptly restart the App Service hosting the Workers
  az webapp restart --name <app-name> --resource-group <resource-group>
  ```

**Things to watch**

- Duplicate/improper Activity results
- Workflow failures
- Workflow backlog growth and drain time (see the sketch after this list)
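
To watch the backlog drain after Workers come back, one option is polling the Task Queue with the Temporal CLI. A minimal sketch; the Task Queue name is a placeholder:

```bash
# Poll the Task Queue every few seconds and watch poller/backlog information
# change as Workers reconnect and drain work. "orders-tq" is a placeholder.
watch -n 5 temporal task-queue describe --task-queue orders-tq
```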

### Frequent Worker restart

**What to test**

Periodically restart a fixed or random percentage (e.g., 20-30%) of your Worker fleet every few minutes.

**Why it matters**

- Mimics failure modes where Workers restart due to high CPU utilization or out-of-memory errors from compute-intensive logic in Activities.
- Verifies that Temporal invalidates the affected Sticky Task Queues and reschedules their tasks on the associated regular (non-Sticky) Task Queue.

**How to run this**

- **Kubernetes**: Build a script using `kubectl` to randomly delete pods in a loop (see the sketch after this list).
- **Chaos Mesh**: [Simulate pod faults](https://chaos-mesh.org/docs/simulate-pod-chaos-on-kubernetes/).
- **App Services**: Scale down and up again.
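
For the Kubernetes option, here is a minimal sketch of such a loop. The `workers` namespace, the `app=temporal-worker` label, and the restart cadence are assumptions; adjust them to your fleet size.

```bash
#!/usr/bin/env bash
# Every 3 minutes, delete a random ~25% of Worker pods to force restarts.
# Namespace and label selector are assumptions -- adjust to your deployment.
set -euo pipefail

NAMESPACE="workers"
SELECTOR="app=temporal-worker"

while true; do
  pods=$(kubectl get pods -n "$NAMESPACE" -l "$SELECTOR" -o name)
  total=$(echo "$pods" | wc -l)
  kill_count=$(( (total + 3) / 4 ))   # roughly 25% of the fleet, at least 1
  echo "$pods" | shuf -n "$kill_count" | xargs -r kubectl delete -n "$NAMESPACE"
  sleep 180
done
```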

**Things to watch**

- Replay latency
- Drop in Workflow and Activity completion
- Duplicate/improper Activity results
- Workflow failures
- Workflow backlog growth and drain time

## Load testing

### Pre-load test setup: expectations for success

1. Have SDK metrics accessible (not just the Cloud metrics).
2. Understand and predict what you should see from these metrics:
   - Rate limiting (`temporal_cloud_v1_resource_exhausted_error_count`)
   - Workflow failures (`temporal_cloud_v1_workflow_failed_count`)
   - Workflow execution time (`workflow_endtoend_latency`)
   - High Cloud latency (`temporal_cloud_v1_service_latency_p95`)
   - [Worker metrics](/develop/worker-performance) (`workflow_task_schedule_to_start_latency` and `activity_schedule_to_start_latency`)
3. Determine throughput requirements ahead of time. Work with your account team to match them to the Namespace capacity and avoid rate limiting. Capacity increases are handled by Temporal support and can be requested short-term for a load test.
4. Automate how you run the load test so you can start and stop it at will, and decide how you will clean up Workflow Executions that exist only for the test (see the cleanup sketch after this list).
5. Define what "success" looks like for this test. Be specific, with metrics and numbers stated in business terms.
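
For cleaning up test Workflow Executions, one option is a batch terminate scoped by a List Filter, for example on Workflow Type. A minimal sketch with the Temporal CLI; the Workflow Type name is a placeholder, and it assumes your filter matches only load-test Workflows:

```bash
# Preview which Workflow Executions the filter matches before acting on them.
temporal workflow list --query 'WorkflowType = "LoadTestWorkflow" AND ExecutionStatus = "Running"'

# Terminate everything the same filter matches (batch operation; the CLI may
# prompt for confirmation).
temporal workflow terminate \
  --query 'WorkflowType = "LoadTestWorkflow" AND ExecutionStatus = "Running"' \
  --reason "load test cleanup"
```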

### Validate downstream load capacity

**Relevant best practices**: Idempotent Activities, bounded retries, appropriate timeouts and retry policies, understand behavior when limits are reached

**What to test**

- Schedule a large number of Actions and Requests by starting many Workflows
- Increase the number until you start overloading downstream systems

**Why it matters**

Validates the behavior of your Temporal application and its dependencies under high load.

**How to run this**

Start Workflows at a rate high enough to push downstream systems toward their limits (see the sketch below). Example: [temporal-ratelimit-tester-go](https://github.com/joshmsmith/temporal-ratelimit-tester-go)
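
If you don't want a dedicated load generator, a crude starting point is a loop around the Temporal CLI. A minimal sketch; the Workflow Type, Task Queue, duration, and target rate are placeholders, and it assumes the CLI environment already points at the target Namespace. A real test should ramp up gradually.

```bash
# Start roughly 20 Workflows per second for 5 minutes (20 x 300 = 6000 starts).
# Workflow type, task queue, and rate are placeholders -- tune for your test plan.
for i in $(seq 1 6000); do
  temporal workflow start \
    --type LoadTestWorkflow \
    --task-queue load-test-tq \
    --workflow-id "load-test-$i" \
    --input '{"iteration": '"$i"'}' &
  # Pause for a second after every batch of 20 starts.
  if (( i % 20 == 0 )); then sleep 1; fi
done
wait
```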

**Things to watch**

- Downstream service error rates (e.g., HTTP 5xx, database errors)
- Increased downstream service latency and saturation metrics
- Activity failure rates, specifically classifying between retryable and non-retryable errors
- Activity retry and backoff behavior against the overloaded system
- Workflow backlog growth and drain time
- Correctness and consistency of data (ensuring Activity idempotency holds under duress)
- Worker CPU/memory utilization

### Validate rate limiting behavior

**Relevant best practices**: [Manage Namespace capacity limits](/best-practices/managing-aps-limits), understand behavior when limits are reached

**What to test**

- Schedule a large number of Actions and Requests by starting many Workflows
- Increase the number until you get rate limited (triggering the metric [`temporal_cloud_v0_resource_exhausted_error_count`](/production-deployment/cloud/metrics/reference#temporal_cloud_v0_resource_exhausted_error_count))

**Why it matters**

Validates the behavior of the Temporal Cloud service under high load: "In Temporal Cloud, the effect of rate limiting is increased latency, not lost work. Workers might take longer to complete Workflows."

**How to run this**

1. (Optional) Decrease a test Namespace's rate limits to make it easier to hit them
2. Calculate your current APS (Actions Per Second) at current production throughput
3. Calculate the Workflow throughput needed to surpass those limits (see the worked example after this list)
4. Start Workflows at a rate that surpasses those limits, for example with [temporal-ratelimit-tester-go](https://github.com/joshmsmith/temporal-ratelimit-tester-go)
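
As a rough worked example (the numbers are illustrative, not real limits): if each Workflow Execution performs about 10 Actions and you currently start 50 Workflows per second, you are running at roughly 500 APS. To exceed a 2,000 APS Namespace limit you would need to sustain more than 200 Workflow starts per second, so size your load generator and Worker fleet accordingly.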

**Things to watch**

- Worker behavior when rate limited
- Client behavior when rate limited
- Temporal request and long_request failure rates
- Workflow success rates
- Workflow latency

## Failover and availability

**Relevant best practices**: Use [High Availability features](/cloud/high-availability) for critical workloads.

- [High Availability monitoring](/cloud/high-availability/monitoring)

### Test region failover

**What to test**

Trigger a [High Availability](/cloud/high-availability) failover event for a Namespace.

**Why it matters**

- Real outages are messy and rarely isolated.
- Ensures your operational playbooks and automation are resilient.
- Validates Worker and Namespace failover behavior.

**How to run this**

Execute a manual failover per the [manual failovers documentation](/cloud/high-availability/failovers#manual-failovers).

**Things to watch**

- Namespace availability
- Client and Worker connectivity to the failover region
- Workflow Task reassignments
- Human-in-the-loop recovery steps

## Dependency and downstream testing

### Break the things your Workflows call

**What to test**

Intentionally break or degrade downstream dependencies used by Activities:

- Make databases read-only or unavailable (see the sketch after this list)
- Inject high latency or error rates into external APIs
- Throttle or pause message queues and event streams
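
One low-tech way to make a dependency unreachable from a specific Worker host is to drop outbound traffic to it with iptables. A minimal sketch, assuming a database on port 5432 (PostgreSQL's default); run it on the Worker host or a privileged sidecar, and remove the rule to restore connectivity:

```bash
# Block outbound traffic from this host to the database (port 5432 is an assumption).
sudo iptables -A OUTPUT -p tcp --dport 5432 -j DROP

# ... run the experiment: watch Activity retries, timeouts, and Heartbeats ...

# Remove the rule to restore connectivity and observe recovery.
sudo iptables -D OUTPUT -p tcp --dport 5432 -j DROP
```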

**Why it matters**

- Temporal guarantees Workflow durability, not dependency availability.
- Validates that Activities are retryable, idempotent, and bounded by appropriate timeouts.
- Ensures Workflows make forward progress instead of livelocking on broken dependencies.

**Things to watch**

- Activity retry and backoff behavior
- Heartbeat effectiveness for long-running Activities
- Database connection exhaustion and retry storms
- API timeouts vs Activity timeouts
- Whether failures propagate as Signals, compensations, or Workflow-level errors

**Anti-patterns this reveals**

- Non-idempotent Activities
- Infinite retries without circuit breaking
- Using Workflow logic to "wait out" broken dependencies

## Deployment and code-level testing

### Deploy a Workflow change with versioning

**Relevant best practices**: Implement a versioning strategy.

- [Workflow Versioning Strategies - Developer Corner](https://community.temporal.io/t/workflow-versioning-strategies/6911)
- [Worker Versioning](/production-deployment/worker-deployments/worker-versioning)
- [Replay Testing](/evaluate/development-production-features/testing-suite)

**What to test**

- Deploy Workflow code that would introduce non-deterministic errors (NDEs), but use a versioning strategy so it deploys successfully
- Validate Workflow success and confirm the backlog of tasks clears

**Why it matters**

- Unplanned NDEs can be a painful surprise
- Tests versioning strategy and patching discipline to build production confidence

**Things to watch**

- Workflow Task failure reasons
- Effectiveness of versioning and patching patterns

### Deploy a version that causes NDEs, then recover

**Relevant best practices**: Implement a versioning strategy.

- [Workflow Versioning Strategies - Developer Corner](https://community.temporal.io/t/workflow-versioning-strategies/6911)
- [Worker Versioning](/production-deployment/worker-deployments/worker-versioning)
- [Replay Testing](/evaluate/development-production-features/testing-suite)

**What to test**

- Deploy Workflow code that introduces non-deterministic errors (NDEs)
- Attempt a rollback to a known-good version, or use versioning strategies to roll the new changes out successfully
- Clear or recover the backlog of tasks (see the reset sketch after this list)
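
Besides rolling the Worker build back, one recovery option is resetting affected Workflow Executions to a point before the breaking Workflow Task. A minimal sketch with the Temporal CLI; the Workflow Id and Event Id are placeholders that you would normally take from the Workflow's Event History:

```bash
# Reset a single affected Workflow to an earlier point in its Event History.
# Workflow Id and Event Id are placeholders from the Workflow's history.
temporal workflow reset \
  --workflow-id order-processing-42 \
  --event-id 12 \
  --reason "recover from non-deterministic change"
```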

**Why it matters**

- Unplanned NDEs can be a painful surprise
- Tests versioning strategy, patching discipline, and recovery tooling

**Things to watch**

- Workflow Task failure reasons
- Backlog growth and drain time
- Effectiveness of versioning and patching patterns

## Observability checklist

Before (and during) testing, ensure visibility into:

- Workflow Task and Activity failure rates
- Throughput limits and usage
- Workflow and Activity end-to-end latencies
- Task latency and backlog depth
- Workflow History size and event counts
- Worker CPU, memory, and restart counts
- gRPC error codes
- Retry behavior
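
A quick sanity check that Worker SDK metrics are actually flowing before you start breaking things: if your Workers are configured to expose a Prometheus scrape endpoint, you can confirm the key latency metrics exist. The port and metric name prefixes below are assumptions; they depend on how your SDK metrics are configured.

```bash
# Confirm the Worker's metrics endpoint is up and exporting the latency metrics
# you plan to watch. Port 9090 is an assumption for this sketch.
curl -s http://localhost:9090/metrics | \
  grep -E 'workflow_task_schedule_to_start_latency|activity_schedule_to_start_latency'
```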

## Game day runbook

Use this checklist when running tests during a scheduled game day or real incident simulation.

### Before you start

- Make sure people know you're testing and which scenarios you plan to run
- Let the teams that support the APIs you're calling know you're testing
- Reach out to the Temporal Cloud Support and Account teams to coordinate
- Dashboards for SDK and Cloud metrics
  - Task latency, backlog depth, Workflow failures, Activity failures
- Alerts muted or routed appropriately
- Known-good deployment artifact available
- Rollback and scale controls verified

### During testing

- Introduce *one variable at a time*
- Record start/stop times of each experiment
- Capture screenshots or logs of unexpected behavior
- Track backlog growth and drain rate

### Recovery validation

- Workflows resume without manual intervention
- No permanent Workflow Task failures (unless intentional)
- Activity retries behave as expected
- Backlogs drain in predictable time

### After action review

- Identify unclear alerts or missing metrics/alerts
- Update retry, timeout, or versioning policies
- Document surprises and operational debt

## Summary

Pre-production testing with Temporal is about more than proving durability - it's about proving *operability under stress*.
Go through these exercises now so that you know what to do before you're in production and have to do it for real.

If your system survives:

- Connectivity issues
- Repeated failovers
- Greater-than-expected load
- Mass Worker churn

...then you can have confidence it's ready for many kinds of production chaos.

sidebars.js

Lines changed: 1 addition & 0 deletions
@@ -649,6 +649,7 @@ module.exports = {
        'best-practices/cloud-access-control',
        'best-practices/security-controls',
        'best-practices/worker',
        'best-practices/pre-production-testing',
        'production-deployment/multi-tenant-patterns',
      ],
    },
