Background Jobs
PUGUH provides a managed job queue for asynchronous processing, scheduled tasks, and reliable delivery with dead letter queue support.
Overview
Background jobs handle work that shouldn't block API responses:
- Email delivery — welcome emails, password resets, invitations
- Webhook dispatch — sending event notifications to endpoints
- Image processing — generating thumbnails and variants after upload
- Data export — GDPR exports, bulk data downloads
- Scheduled tasks — cron-based recurring jobs
- Cleanup — purging expired tokens, soft-deleted records
Job Types
System Jobs
These run automatically as part of PUGUH's infrastructure:
| Job | Trigger | Description |
|---|---|---|
| email.send | User actions | Send transactional emails |
| webhook.deliver | Events | Deliver webhook payloads |
| storage.process | File upload | Generate image variants |
| export.generate | API request | Generate data export files |
| audit.stream | Audit events | Forward events to streaming destinations |
Scheduled Jobs (Cron)
Recurring jobs managed via the scheduling API:
| Job | Default Schedule | Description |
|---|---|---|
| token.cleanup | Every hour | Remove expired refresh tokens |
| session.cleanup | Every 6 hours | Purge expired sessions |
| softdelete.purge | Daily at 03:00 | Hard-delete records past retention |
| usage.aggregate | Every hour | Aggregate usage metrics for billing |
| invoice.generate | 1st of month | Generate monthly invoices |
Viewing Jobs
List Jobs
```bash
curl "https://api-puguh.arsaka.io/jobs?status=running&page=1" \
  -H "Authorization: Bearer YOUR_TOKEN"
```

Note the quotes around the URL: an unquoted `&` would send the command to the background in most shells.

Response:
```json
{
  "items": [
    {
      "id": "job_abc123",
      "type": "email.send",
      "status": "completed",
      "payload": {"to": "user@example.com", "template": "welcome"},
      "created_at": "2026-02-20T10:00:00Z",
      "started_at": "2026-02-20T10:00:01Z",
      "completed_at": "2026-02-20T10:00:03Z",
      "attempts": 1
    }
  ],
  "total": 1523,
  "page": 1,
  "page_size": 20,
  "has_next": true,
  "has_prev": false
}
```

Job Statuses
| Status | Description |
|---|---|
| pending | Queued, waiting to be picked up |
| running | Currently being processed |
| completed | Finished successfully |
| failed | Failed after all retry attempts |
| dead | Moved to dead letter queue |
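Since the list endpoint is paginated, fetching every job means following the `page`/`has_next` fields from the response shown above. A minimal sketch of that loop, with a stub standing in for the real HTTP call (the `fake_fetch` helper and its data are illustrative, not part of the API):

```python
def iter_jobs(fetch_page):
    """Yield every job by walking pages until has_next is false."""
    page = 1
    while True:
        resp = fetch_page(page)
        yield from resp["items"]
        if not resp["has_next"]:
            break
        page += 1

# Stub standing in for GET /jobs?page=N; a real client would issue
# an authenticated HTTP request here and return the parsed JSON body.
def fake_fetch(page):
    data = {1: ["job_a", "job_b"], 2: ["job_c"]}
    return {
        "items": [{"id": jid, "status": "completed"} for jid in data[page]],
        "page": page,
        "has_next": page < 2,
    }

ids = [job["id"] for job in iter_jobs(fake_fetch)]
```

The same loop works for any of the paginated endpoints on this page, since they share the `items`/`has_next` envelope.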
Dead Letter Queue (DLQ)
Jobs that fail after all retry attempts are moved to the DLQ for manual inspection.
View DLQ
```bash
curl https://api-puguh.arsaka.io/jobs/dlq \
  -H "Authorization: Bearer YOUR_TOKEN"
```

Retry a DLQ Job
```bash
curl -X POST https://api-puguh.arsaka.io/jobs/dlq/job_xyz/retry \
  -H "Authorization: Bearer YOUR_TOKEN"
```

The job is moved back to the pending queue for reprocessing.
Purge DLQ
```bash
curl -X POST https://api-puguh.arsaka.io/jobs/dlq/purge \
  -H "Authorization: Bearer YOUR_TOKEN"
```

Warning
Purging the DLQ permanently deletes all dead jobs. This action cannot be undone.
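A common DLQ workflow is to retry only jobs of one type (for example, after fixing a broken webhook endpoint) and leave the rest for inspection. A sketch of the selection step, using the retry endpoint path from above; the sample DLQ contents are illustrative, and the actual POST calls are left out:

```python
def select_retryable(dlq_jobs, job_type):
    """Pick the IDs of DLQ jobs matching one type."""
    return [j["id"] for j in dlq_jobs if j["type"] == job_type]

# Illustrative DLQ contents, shaped like the job objects returned by GET /jobs/dlq.
dlq = [
    {"id": "job_x1", "type": "webhook.deliver", "attempts": 5},
    {"id": "job_x2", "type": "email.send", "attempts": 5},
    {"id": "job_x3", "type": "webhook.deliver", "attempts": 5},
]

to_retry = select_retryable(dlq, "webhook.deliver")
# Each selected ID would then be POSTed to the retry endpoint:
retry_urls = [f"https://api-puguh.arsaka.io/jobs/dlq/{jid}/retry" for jid in to_retry]
```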
Job Schedules
Create custom cron schedules for recurring tasks:
Create Schedule
```bash
curl -X POST https://api-puguh.arsaka.io/jobs/schedules \
  -H "Authorization: Bearer YOUR_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "Daily usage report",
    "cron_expression": "0 8 * * *",
    "job_type": "export.generate",
    "payload": {"type": "usage", "format": "csv"},
    "is_active": true
  }'
```

Cron Syntax
| Field | Values | Example |
|---|---|---|
| Minute | 0-59 | 0 (at minute 0) |
| Hour | 0-23 | 8 (at 8 AM) |
| Day of month | 1-31 | * (every day) |
| Month | 1-12 | * (every month) |
| Day of week | 0-6 | 1-5 (Mon-Fri) |
List Schedules
```bash
curl https://api-puguh.arsaka.io/jobs/schedules \
  -H "Authorization: Bearer YOUR_TOKEN"
```

Update Schedule
```bash
curl -X PATCH https://api-puguh.arsaka.io/jobs/schedules/sched_abc \
  -H "Authorization: Bearer YOUR_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"is_active": false}'
```

Delete Schedule
```bash
curl -X DELETE https://api-puguh.arsaka.io/jobs/schedules/sched_abc \
  -H "Authorization: Bearer YOUR_TOKEN"
```

Retry Policy
Failed jobs are retried automatically with exponential backoff:
```plaintext
Attempt 1: immediate
Attempt 2: after 30 seconds
Attempt 3: after 2 minutes
Attempt 4: after 10 minutes
Attempt 5: after 1 hour
→ If all fail: moved to DLQ
```

Limits by Plan
| Plan | Max Concurrent | Schedules | DLQ Retention |
|---|---|---|---|
| Free | 5 | 3 | 7 days |
| Pro | 25 | 10 | 30 days |
| Business | 100 | 50 | 90 days |
| Enterprise | Custom | Custom | Custom |
Monitoring
Job metrics are available in the PUGUH dashboard:
- Throughput: Jobs processed per minute
- Error rate: Percentage of failed jobs
- Queue depth: Number of pending jobs
- DLQ size: Number of dead jobs
- Average latency: Time from enqueue to completion
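The same figures can be derived locally from a page of job objects, e.g. for custom alerting outside the dashboard. A sketch computing error rate, queue depth, and average latency from the fields shown in the List Jobs response (the `Z`-suffix handling keeps `fromisoformat` happy on older Python versions; the sample jobs are illustrative):

```python
from datetime import datetime

def _ts(s):
    """Parse an ISO-8601 timestamp with a trailing Z."""
    return datetime.fromisoformat(s.replace("Z", "+00:00"))

def job_metrics(jobs):
    """Error rate, queue depth, and average enqueue-to-completion latency."""
    failed = sum(1 for j in jobs if j["status"] in ("failed", "dead"))
    pending = sum(1 for j in jobs if j["status"] == "pending")
    latencies = [
        (_ts(j["completed_at"]) - _ts(j["created_at"])).total_seconds()
        for j in jobs if j["status"] == "completed"
    ]
    return {
        "error_rate": failed / len(jobs) if jobs else 0.0,
        "queue_depth": pending,
        "avg_latency_s": sum(latencies) / len(latencies) if latencies else 0.0,
    }

# Illustrative sample shaped like GET /jobs items.
jobs = [
    {"status": "completed", "created_at": "2026-02-20T10:00:00Z",
     "completed_at": "2026-02-20T10:00:03Z"},
    {"status": "pending"},
    {"status": "failed"},
    {"status": "dead"},
]
m = job_metrics(jobs)
```

Note that error rate here counts both `failed` and `dead` jobs, matching the statuses table above.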