# Long Running Operations (LRO)
For operations that take more than a few seconds—like AI image generation, video processing, or complex computations—implement the Long Running Operations pattern.
## How It Works

- Client makes a request with payment
- Server returns `202 Accepted` with a status URL
- Client polls the status URL until completion
- Server returns the final result
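The client's poll-until-done loop can be sketched as a small helper. This is a sketch, not part of the spec: `getStatus` stands in for whatever HTTP call you make to `statusUrl`, and the defaults mirror the `retryAfterSeconds` hint described below.

```javascript
// Poll a status endpoint until the job leaves the "processing" state.
// `getStatus` is any async function returning the parsed status JSON;
// in production it would GET `statusUrl`, honoring `retryAfterSeconds`.
async function pollUntilDone(getStatus, { intervalMs = 2000, maxAttempts = 60 } = {}) {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const status = await getStatus();
    if (status.state !== 'processing') return status; // "succeeded" or "failed"
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  // Give up after maxAttempts polls
  return { state: 'failed', error: 'Polling timed out', code: 'generation_timeout' };
}
```

Injecting `getStatus` keeps the loop independent of any particular HTTP client (fetch, axios, etc.) and easy to test.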
## Initial Response (HTTP 202)

When your operation will take time, return `202 Accepted`:

```json
{
  "success": true,
  "jobId": "abc123",
  "statusUrl": "https://api.example.com/status/abc123",
  "retryAfterSeconds": 2,
  "message": "Your request is being processed..."
}
```
| Field | Type | Description |
|---|---|---|
| `jobId` | string | Unique identifier for this job |
| `statusUrl` | string | URL to poll for status updates |
| `retryAfterSeconds` | number | Recommended polling interval |
| `message` | string | Human-readable status message |
## Status Endpoint

Your status endpoint should return one of three states:

### Processing (HTTP 200)

Job is still running:

```json
{
  "state": "processing",
  "progress": 50
}
```
| Field | Type | Description |
|---|---|---|
| `state` | string | Always `"processing"` |
| `progress` | number | Optional percentage (0-100) |
### Succeeded (HTTP 200)

Job completed successfully:

```json
{
  "state": "succeeded",
  "artifactUrl": "https://storage.example.com/result.png",
  "response": "Your image has been generated!"
}
```
| Field | Type | Description |
|---|---|---|
| `state` | string | Always `"succeeded"` |
| `artifactUrl` | string | URL to the generated file (images, media, etc.) |
| `response` | string | Text response or description |
### Failed (HTTP 200)

Job failed:

```json
{
  "state": "failed",
  "error": "Generation timed out",
  "code": "generation_timeout"
}
```
| Field | Type | Description |
|---|---|---|
| `state` | string | Always `"failed"` |
| `error` | string | Human-readable error message |
| `code` | string | Machine-readable error code |
## Standard Error Codes

Use these standard codes for consistency:

| Code | Description |
|---|---|
| `generation_timeout` | Operation took too long |
| `generation_failed` | Processing failed |
| `model_unavailable` | AI model is unavailable |
| `content_filtered` | Content moderation triggered |
| `quota_exceeded` | Rate/quota limit hit |
| `invalid_input` | Bad input parameters |
| `internal_error` | Server error |
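One reason to keep the codes machine-readable is so clients can decide whether a failure is worth retrying. A minimal sketch, assuming a particular retryable/permanent split that is a policy choice on our part, not something the codes themselves mandate:

```javascript
// Classify a standard LRO error code as retryable or permanent.
// Which codes count as retryable is a policy decision; this split
// (transient infrastructure issues vs. bad requests) is one default.
const RETRYABLE_CODES = new Set([
  'generation_timeout',
  'model_unavailable',
  'quota_exceeded',
  'internal_error',
]);

function isRetryable(code) {
  return RETRYABLE_CODES.has(code);
}
```

Codes like `invalid_input` and `content_filtered` will fail the same way on every attempt, so retrying them only wastes the caller's money.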
## Implementation Example

### Express.js

```javascript
// POST /generate - Start the job
app.post('/generate', async (req, res) => {
  // Verify payment...

  // Create job record
  const jobId = crypto.randomUUID();
  await db.jobs.create({ id: jobId, status: 'pending' });

  // Start async processing (fire-and-forget; errors are logged)
  processJob(jobId, req.body).catch(console.error);

  // Return 202 immediately
  res.status(202).json({
    success: true,
    jobId,
    statusUrl: `https://api.example.com/status/${jobId}`,
    retryAfterSeconds: 2
  });
});

// GET /status/:jobId - Check status
app.get('/status/:jobId', async (req, res) => {
  const job = await db.jobs.findById(req.params.jobId);

  if (!job) {
    return res.json({
      state: 'failed',
      error: 'Job not found',
      code: 'not_found'
    });
  }

  if (job.status === 'completed') {
    return res.json({
      state: 'succeeded',
      artifactUrl: job.resultUrl,
      response: job.description
    });
  }

  if (job.status === 'failed') {
    return res.json({
      state: 'failed',
      error: job.errorMessage,
      code: job.errorCode
    });
  }

  // Still processing
  return res.json({
    state: 'processing',
    progress: job.progress || 0
  });
});
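The `processJob` worker referenced above might look like the sketch below. `db.jobs.update` and `generateImage` are hypothetical placeholders (assumed to be in scope) for your actual storage layer and long-running work; the important part is that every outcome writes a status the `/status/:jobId` handler can report.

```javascript
// Background worker for a single job. Updates the job record as it
// progresses so GET /status/:jobId always reflects the current state.
// `db` and `generateImage` are hypothetical; swap in your own.
async function processJob(jobId, input) {
  try {
    await db.jobs.update(jobId, { status: 'processing', progress: 0 });

    // The long-running work itself
    const result = await generateImage(input);

    await db.jobs.update(jobId, {
      status: 'completed',
      resultUrl: result.url,
      description: result.description
    });
  } catch (err) {
    // Record the failure so pollers see a terminal state, not a hang
    await db.jobs.update(jobId, {
      status: 'failed',
      errorMessage: err.message,
      errorCode: 'generation_failed'
    });
  }
}
```

The `catch` is essential: if the worker throws without updating the record, clients poll a job stuck in `processing` forever.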
## x402.jobs Integration

When x402.jobs executes your resource:

- It makes the initial request with payment
- If it receives `202`, it automatically polls `statusUrl`
- It respects `retryAfterSeconds` between polls
- When `state` is `succeeded`, it extracts `artifactUrl` and `response`
- These values flow to downstream nodes in the workflow
## Output Mapping

In workflows, users can map your outputs:

- `artifactUrl` → Image display, file downloads
- `response` → Text processing, further AI calls
## Best Practices

- **Use reasonable timeouts** - Don't let jobs run forever
- **Provide progress updates** - Users appreciate knowing the status
- **Clean up old jobs** - Expire status records after 24 hours
- **Return consistent errors** - Use the standard error codes
- **Set appropriate retry intervals** - 2-5 seconds is typical
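The 24-hour cleanup can be as simple as a periodic sweep. A sketch, assuming jobs carry a `createdAt` timestamp; the pure selection function is separated from the (hypothetical) delete call so it is easy to test:

```javascript
const JOB_TTL_MS = 24 * 60 * 60 * 1000; // 24 hours

// Return the IDs of jobs older than the TTL. Pure: takes the job list
// and the current time, so it has no hidden dependencies.
function expiredJobIds(jobs, now = Date.now()) {
  return jobs
    .filter((job) => now - job.createdAt > JOB_TTL_MS)
    .map((job) => job.id);
}
```

You might run this from `setInterval` or a cron task, passing the result to your storage layer's delete call, so status URLs for stale jobs start returning `not_found` instead of accumulating forever.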