PUGUH

Background Jobs

PUGUH provides a managed job queue for asynchronous processing, scheduled tasks, and reliable delivery with dead letter queue support.

Overview

Background jobs handle work that shouldn't block API responses:

  • Email delivery — welcome emails, password resets, invitations
  • Webhook dispatch — sending event notifications to endpoints
  • Image processing — generating thumbnails and variants after upload
  • Data export — GDPR exports, bulk data downloads
  • Scheduled tasks — cron-based recurring jobs
  • Cleanup — purging expired tokens, soft-deleted records

Job Types

System Jobs

These run automatically as part of PUGUH's infrastructure:

Job              Trigger       Description
email.send       User actions  Send transactional emails
webhook.deliver  Events        Deliver webhook payloads
storage.process  File upload   Generate image variants
export.generate  API request   Generate data export files
audit.stream     Audit events  Forward events to streaming destinations

Scheduled Jobs (Cron)

Recurring jobs managed via the scheduling API:

Job               Default Schedule  Description
token.cleanup     Every hour        Remove expired refresh tokens
session.cleanup   Every 6 hours     Purge expired sessions
softdelete.purge  Daily at 03:00    Hard-delete records past retention
usage.aggregate   Every hour        Aggregate usage metrics for billing
invoice.generate  1st of month      Generate monthly invoices

Viewing Jobs

List Jobs

bash
curl "https://api-puguh.arsaka.io/jobs?status=running&page=1" \
  -H "Authorization: Bearer YOUR_TOKEN"

Response:

json
{
  "items": [
    {
      "id": "job_abc123",
      "type": "email.send",
      "status": "completed",
      "payload": {"to": "user@example.com", "template": "welcome"},
      "created_at": "2026-02-20T10:00:00Z",
      "started_at": "2026-02-20T10:00:01Z",
      "completed_at": "2026-02-20T10:00:03Z",
      "attempts": 1
    }
  ],
  "total": 1523,
  "page": 1,
  "page_size": 20,
  "has_next": true,
  "has_prev": false
}
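The has_next and page fields support straightforward page-by-page iteration. As a minimal sketch of that pattern (Python; the fetch_page callable is a placeholder for your HTTP client, so no particular library is assumed):

```python
def iter_jobs(fetch_page, status="running"):
    """Yield every job across pages, following the has_next flag.

    fetch_page(status, page) should return a dict shaped like the
    /jobs response above: at least "items" and "has_next".
    """
    page = 1
    while True:
        resp = fetch_page(status, page)
        yield from resp["items"]
        if not resp["has_next"]:
            break
        page += 1
```

With the requests library, fetch_page could be a small wrapper around GET /jobs that passes status and page as query parameters and returns the parsed JSON.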

Job Statuses

Status     Description
pending    Queued, waiting to be picked up
running    Currently being processed
completed  Finished successfully
failed     Failed after all retry attempts
dead       Moved to dead letter queue
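Read together with the DLQ section below, these statuses form a simple lifecycle: pending → running → completed or failed, failed → dead, and dead → pending again when a DLQ job is manually retried. A sketch of that state machine (the transition set is inferred from this page, not a published schema):

```python
# Allowed status transitions, inferred from the status table and DLQ docs.
TRANSITIONS = {
    "pending": {"running"},
    "running": {"completed", "failed"},
    "failed": {"dead"},      # retries exhausted -> dead letter queue
    "dead": {"pending"},     # manual DLQ retry re-enqueues the job
    "completed": set(),      # terminal
}

def can_transition(src: str, dst: str) -> bool:
    """True if a job may move from status `src` to status `dst`."""
    return dst in TRANSITIONS.get(src, set())
```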

Dead Letter Queue (DLQ)

Jobs that fail after all retry attempts are moved to the DLQ for manual inspection.

View DLQ

bash
curl https://api-puguh.arsaka.io/jobs/dlq \
  -H "Authorization: Bearer YOUR_TOKEN"

Retry a DLQ Job

bash
curl -X POST https://api-puguh.arsaka.io/jobs/dlq/job_xyz/retry \
  -H "Authorization: Bearer YOUR_TOKEN"

The job is moved back to the pending queue for reprocessing.
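To re-drive everything in the DLQ, you can list it and retry each job individually. A hedged sketch (the endpoint paths come from this page; the http_get/http_post callables are placeholders for your HTTP client, and the DLQ list is assumed to use the same "items" envelope as /jobs):

```python
BASE = "https://api-puguh.arsaka.io"

def retry_url(job_id: str) -> str:
    """Build the per-job DLQ retry endpoint shown above."""
    return f"{BASE}/jobs/dlq/{job_id}/retry"

def retry_all_dlq(http_get, http_post):
    """List the DLQ, then POST a retry for each dead job.

    http_get(url) -> parsed JSON; http_post(url) issues the retry.
    Returns the ids that were re-enqueued.
    """
    dead = http_get(f"{BASE}/jobs/dlq")["items"]
    retried = []
    for job in dead:
        http_post(retry_url(job["id"]))
        retried.append(job["id"])
    return retried
```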

Purge DLQ

bash
curl -X POST https://api-puguh.arsaka.io/jobs/dlq/purge \
  -H "Authorization: Bearer YOUR_TOKEN"

Warning

Purging the DLQ permanently deletes all dead jobs. This action cannot be undone.

Job Schedules

Create custom cron schedules for recurring tasks:

Create Schedule

bash
curl -X POST https://api-puguh.arsaka.io/jobs/schedules \
  -H "Authorization: Bearer YOUR_TOKEN" \
  -d '{
    "name": "Daily usage report",
    "cron_expression": "0 8 * * *",
    "job_type": "export.generate",
    "payload": {"type": "usage", "format": "csv"},
    "is_active": true
  }'

Cron Syntax

Field         Values  Example
Minute        0-59    0 (at minute 0)
Hour          0-23    8 (at 8 AM)
Day of month  1-31    * (every day)
Month         1-12    * (every month)
Day of week   0-6     1-5 (Mon-Fri)
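Given those five fields, checking whether a particular time matches an expression is mechanical. A minimal matcher sketch (supports only `*`, single numbers, and `a-b` ranges; step and list syntax are out of scope, and nothing here is PUGUH's own parser):

```python
from datetime import datetime

def _field_matches(field: str, value: int) -> bool:
    """Match one cron field against a value: '*', 'N', or 'A-B'."""
    if field == "*":
        return True
    if "-" in field:
        lo, hi = field.split("-")
        return int(lo) <= value <= int(hi)
    return int(field) == value

def cron_matches(expr: str, when: datetime) -> bool:
    """True if `when` satisfies a 5-field cron expression."""
    minute, hour, dom, month, dow = expr.split()
    return (_field_matches(minute, when.minute)
            and _field_matches(hour, when.hour)
            and _field_matches(dom, when.day)
            and _field_matches(month, when.month)
            # cron weekday: 0 = Sunday; Python weekday(): 0 = Monday
            and _field_matches(dow, (when.weekday() + 1) % 7))
```

For example, the "0 8 * * *" expression used in the Create Schedule call above matches any day at exactly 08:00.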

List Schedules

bash
curl https://api-puguh.arsaka.io/jobs/schedules \
  -H "Authorization: Bearer YOUR_TOKEN"

Update Schedule

bash
curl -X PATCH https://api-puguh.arsaka.io/jobs/schedules/sched_abc \
  -H "Authorization: Bearer YOUR_TOKEN" \
  -d '{"is_active": false}'

Delete Schedule

bash
curl -X DELETE https://api-puguh.arsaka.io/jobs/schedules/sched_abc \
  -H "Authorization: Bearer YOUR_TOKEN"

Retry Policy

Failed jobs are retried automatically with exponential backoff:

plaintext
Attempt 1: immediate
Attempt 2: after 30 seconds
Attempt 3: after 2 minutes
Attempt 4: after 10 minutes
Attempt 5: after 1 hour
→ If all fail: moved to DLQ
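The schedule above is a fixed ladder rather than a strict 2^n formula, so a client mirroring it would typically use a lookup table. A sketch, with the delays in seconds taken directly from the list above:

```python
# Delay before each attempt, in seconds (attempt 1 runs immediately).
RETRY_DELAYS = {1: 0, 2: 30, 3: 120, 4: 600, 5: 3600}
MAX_ATTEMPTS = 5

def next_retry_delay(attempt: int):
    """Seconds to wait before `attempt`, or None once the job is DLQ-bound."""
    if attempt > MAX_ATTEMPTS:
        return None  # out of retries: the job moves to the dead letter queue
    return RETRY_DELAYS[attempt]
```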

Limits by Plan

Plan        Max Concurrent  Schedules  DLQ Retention
Free        5               3          7 days
Pro         25              10         30 days
Business    100             50         90 days
Enterprise  Custom          Custom     Custom

Monitoring

Job metrics are available in the PUGUH dashboard:

  • Throughput: Jobs processed per minute
  • Error rate: Percentage of failed jobs
  • Queue depth: Number of pending jobs
  • DLQ size: Number of dead jobs
  • Average latency: Time from enqueue to completion
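The same figures can also be derived client-side from job listings when you need them outside the dashboard. A sketch computing error rate and queue depth from a list of job records (field names match the /jobs response shown earlier; whether you count dead jobs as errors is a choice, assumed here):

```python
def job_metrics(jobs):
    """Compute simple queue metrics from a list of job records."""
    total = len(jobs)
    failed = sum(1 for j in jobs if j["status"] in ("failed", "dead"))
    pending = sum(1 for j in jobs if j["status"] == "pending")
    return {
        "error_rate": (failed / total) if total else 0.0,  # fraction, 0..1
        "queue_depth": pending,
    }
```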

Related