Memeputer

Long Running Operations (LRO)

For operations that take more than a few seconds, such as AI image generation, video processing, or complex computations, implement the Long Running Operations (LRO) pattern.

How It Works

  1. Client makes a request with payment
  2. Server returns 202 Accepted with a status URL
  3. Client polls the status URL until completion
  4. Server returns the final result
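
The four steps above can be sketched as a small client helper. This is a minimal sketch, not an official client: `fetchJson` and `sleep` are injected parameters (stand-ins for `fetch` plus JSON parsing and a timer) so the loop is easy to test, and all names are illustrative.

```javascript
// Start a long-running operation and poll until it reaches a terminal state.
// fetchJson(url) is assumed to return the parsed JSON body of a GET/POST.
async function runLongOperation(
  startUrl,
  fetchJson,
  sleep = ms => new Promise(resolve => setTimeout(resolve, ms))
) {
  // Step 1: start the job (payment headers omitted for brevity)
  const initial = await fetchJson(startUrl);

  // Step 2: a 202 response carries statusUrl; without one, the result is already final
  if (!initial.statusUrl) return initial;

  // Step 3: poll statusUrl, honoring the server's recommended interval
  for (;;) {
    const status = await fetchJson(initial.statusUrl);
    // Step 4: terminal states carry the final result or error
    if (status.state === 'succeeded' || status.state === 'failed') return status;
    await sleep((initial.retryAfterSeconds || 2) * 1000);
  }
}
```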

Initial Response (HTTP 202)

When your operation will take time, return 202 Accepted:

{
  "success": true,
  "jobId": "abc123",
  "statusUrl": "https://api.example.com/status/abc123",
  "retryAfterSeconds": 2,
  "message": "Your request is being processed..."
}
Field              Type    Description
jobId              string  Unique identifier for this job
statusUrl          string  URL to poll for status updates
retryAfterSeconds  number  Recommended polling interval, in seconds
message            string  Human-readable status message

Status Endpoint

Your status endpoint should return one of three states:

Processing (HTTP 200)

Job is still running:

{
  "state": "processing",
  "progress": 50
}
Field     Type    Description
state     string  Always "processing"
progress  number  Optional percentage (0-100)

Succeeded (HTTP 200)

Job completed successfully:

{
  "state": "succeeded",
  "artifactUrl": "https://storage.example.com/result.png",
  "response": "Your image has been generated!"
}
Field        Type    Description
state        string  Always "succeeded"
artifactUrl  string  URL to the generated file (images, media, etc.)
response     string  Text response or description

Failed (HTTP 200)

Job failed:

{
  "state": "failed",
  "error": "Generation timed out",
  "code": "generation_timeout"
}
Field  Type    Description
state  string  Always "failed"
error  string  Human-readable error message
code   string  Machine-readable error code

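
A client consuming this endpoint can branch on the `state` field to cover all three shapes. This is an illustrative sketch of such a handler, not part of the protocol:

```javascript
// Turn a status-endpoint response into a display string.
// The exact message formats here are illustrative.
function handleStatus(status) {
  switch (status.state) {
    case 'processing':
      return `working... ${status.progress ?? 0}%`;
    case 'succeeded':
      return `done: ${status.artifactUrl}`;
    case 'failed':
      return `error (${status.code}): ${status.error}`;
    default:
      throw new Error(`unknown state: ${status.state}`);
  }
}
```
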
Standard Error Codes

Use these standard codes for consistency:

Code                Description
generation_timeout  Operation took too long
generation_failed   Processing failed
model_unavailable   AI model is unavailable
content_filtered    Content moderation triggered
quota_exceeded      Rate/quota limit hit
invalid_input       Bad input parameters
internal_error      Server error
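
Clients often want to know which of these codes are worth retrying. The grouping below is one reasonable client-side convention, not part of the specification:

```javascript
// Standard codes that plausibly indicate a transient failure.
// This grouping is a convention, not part of the spec: quota_exceeded
// may clear on retry, while invalid_input never will.
const RETRYABLE_CODES = new Set([
  'generation_timeout',
  'model_unavailable',
  'quota_exceeded',
  'internal_error'
]);

function shouldRetry(code) {
  return RETRYABLE_CODES.has(code);
}
```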

Implementation Example

Express.js

const express = require('express');
const crypto = require('crypto'); // randomUUID; also a global in Node 19+

const app = express();
app.use(express.json());

// POST /generate - Start the job
app.post('/generate', async (req, res) => {
  // Verify payment...
  
  // Create job record
  const jobId = crypto.randomUUID();
  await db.jobs.create({ id: jobId, status: 'pending' });
  
  // Start async processing
  processJob(jobId, req.body).catch(console.error);
  
  // Return 202 immediately
  res.status(202).json({
    success: true,
    jobId,
    statusUrl: `https://api.example.com/status/${jobId}`,
    retryAfterSeconds: 2
  });
});

// GET /status/:jobId - Check status
app.get('/status/:jobId', async (req, res) => {
  const job = await db.jobs.findById(req.params.jobId);
  
  if (!job) {
    return res.json({
      state: 'failed',
      error: 'Job not found',
      code: 'not_found'
    });
  }
  
  if (job.status === 'completed') {
    return res.json({
      state: 'succeeded',
      artifactUrl: job.resultUrl,
      response: job.description
    });
  }
  
  if (job.status === 'failed') {
    return res.json({
      state: 'failed',
      error: job.errorMessage,
      code: job.errorCode
    });
  }
  
  // Still processing
  return res.json({
    state: 'processing',
    progress: job.progress || 0
  });
});
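
The route above fires `processJob` without awaiting it. A minimal sketch of that worker follows, assuming the same `db.jobs` record shape the status endpoint reads (`status`, `resultUrl`, `description`, `errorMessage`, `errorCode`); the in-memory `db` and the `generateImage` helper are hypothetical stand-ins for your storage layer and generator.

```javascript
// In-memory stand-in for the db used in the route handlers above.
const db = {
  jobs: {
    records: new Map(),
    async create(job) { this.records.set(job.id, job); },
    async update(id, fields) { Object.assign(this.records.get(id), fields); },
    async findById(id) { return this.records.get(id); }
  }
};

// Hypothetical generator; a real implementation would call your model here.
async function generateImage(params) {
  return {
    url: 'https://storage.example.com/result.png',
    description: 'Your image has been generated!'
  };
}

// Run the job to completion, recording progress and a terminal state
// so the status endpoint always has something consistent to report.
async function processJob(jobId, params) {
  try {
    await db.jobs.update(jobId, { status: 'processing', progress: 0 });
    const result = await generateImage(params);
    await db.jobs.update(jobId, {
      status: 'completed',
      resultUrl: result.url,
      description: result.description
    });
  } catch (err) {
    await db.jobs.update(jobId, {
      status: 'failed',
      errorMessage: err.message,
      errorCode: err.code || 'generation_failed'
    });
  }
}
```

Note the catch-all: every failure path writes a terminal record, so clients never poll forever against a job that silently died.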

x402.jobs Integration

When x402.jobs executes your resource:

  1. It makes the initial request with payment
  2. If it receives 202, it automatically polls statusUrl
  3. It respects retryAfterSeconds between polls
  4. When state is succeeded, it extracts artifactUrl and response
  5. These values flow to downstream nodes in the workflow

Output Mapping

In workflows, users can map your outputs:

  • artifactUrl → Image display, file downloads
  • response → Text processing, further AI calls

Best Practices

  1. Use reasonable timeouts - Don't let jobs run forever
  2. Provide progress updates - Users appreciate knowing status
  3. Clean up old jobs - Expire status after 24 hours
  4. Return consistent errors - Use standard error codes
  5. Set appropriate retry intervals - 2-5 seconds is typical
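
Practices 1 and 3 can share a single periodic sweep. This is a sketch under assumed thresholds (a 5-minute job timeout, the 24-hour expiry from above) over an in-memory Map of jobs; the names and intervals are illustrative.

```javascript
const JOB_TIMEOUT_MS = 5 * 60 * 1000;    // assumed: fail jobs running longer than 5 minutes
const JOB_TTL_MS = 24 * 60 * 60 * 1000;  // expire finished jobs after 24 hours

// Sweep a Map of id -> { status, startedAt, ... } records:
// time out stuck jobs, then drop records past their TTL.
function sweepJobs(jobs, now = Date.now()) {
  for (const [id, job] of jobs) {
    if (job.status === 'processing' && now - job.startedAt > JOB_TIMEOUT_MS) {
      Object.assign(job, {
        status: 'failed',
        errorMessage: 'Generation timed out',
        errorCode: 'generation_timeout'
      });
    }
    if ((job.status === 'completed' || job.status === 'failed') &&
        now - job.startedAt > JOB_TTL_MS) {
      jobs.delete(id); // polling this id now yields the "Job not found" response
    }
  }
}

// Run the sweep periodically, e.g. once a minute:
// setInterval(() => sweepJobs(jobs), 60 * 1000);
```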