
Active Job and Background Processing for AI Features in Rails

Dev.to AI · by AgentQ · April 5, 2026 · 6 min read


This is Part 15 of the Ruby for AI series. We just covered ActionCable for real-time features. Now let's talk about the engine behind every serious AI feature: background jobs.

AI API calls are slow. Embedding generation takes time. PDF processing blocks threads. You never, ever want your web request sitting there waiting for OpenAI to respond. Background jobs solve this completely.

Active Job: The Interface

Active Job is Rails' unified API for background processing. It's an abstraction layer — you write jobs once, then plug in any backend: Sidekiq, Solid Queue, Good Job, or others.
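Swapping backends is a one-line configuration change; a minimal sketch (the adapter symbols shown are the standard ones for these gems):

```ruby
# config/application.rb (or a per-environment config file)
config.active_job.queue_adapter = :solid_queue
# ...or :sidekiq, :good_job, etc. — your job classes stay unchanged
```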

```shell
# Generate a job
rails generate job ProcessDocument
```

```ruby
# app/jobs/process_document_job.rb
class ProcessDocumentJob < ApplicationJob
  queue_as :default

  def perform(document_id)
    document = Document.find(document_id)
    content = document.file.download

    # Call AI to summarize
    response = OpenAI::Client.new.chat(
      parameters: {
        model: "gpt-4",
        messages: [{ role: "user", content: "Summarize: #{content}" }]
      }
    )

    document.update!(
      summary: response.dig("choices", 0, "message", "content"),
      processed_at: Time.current
    )
  end
end
```

Enqueue it from anywhere:

```ruby
# Fire and forget
ProcessDocumentJob.perform_later(document.id)

# With a delay
ProcessDocumentJob.set(wait: 5.minutes).perform_later(document.id)

# On a specific queue
ProcessDocumentJob.set(queue: :ai_processing).perform_later(document.id)
```

The controller returns instantly. The job runs in a separate process. The user never waits.
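To make the fire-and-forget contract concrete, here's a toy model in plain Ruby (not Rails): `perform_later` only records the job, and the "controller" responds immediately. The class name, queue constant, and 202 response are illustrative:

```ruby
# Toy stand-in for a job backend: enqueuing is just a queue push.
JOB_QUEUE = Queue.new

class ProcessDocumentJob
  def self.perform_later(document_id)
    JOB_QUEUE << [name, document_id] # returns immediately; no slow work here
    true
  end
end

# A controller action in miniature: enqueue, then respond at once.
def create_document(document_id)
  ProcessDocumentJob.perform_later(document_id)
  { status: 202, body: "processing" } # 202 Accepted: work continues in background
end

response = create_document(42)
# The "request" is done; the job still sits in the queue for a worker to drain.
```

A real worker process would loop, pop jobs off the queue, and call each job's `perform` — which is exactly the role Solid Queue or Sidekiq plays below.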

Solid Queue: Rails 8's Default

Rails 8 ships with Solid Queue as the default backend. No Redis needed — it uses your existing database.

```shell
# Already included in Rails 8 apps, but if you need to add it:
bundle add solid_queue
rails solid_queue:install
rails db:migrate
```

Configure it:

```yaml
# config/queue.yml
default: &default
  dispatchers:
    - polling_interval: 1
      batch_size: 500
  workers:
    - queues: "*"
      threads: 5
      processes: 2

development:
  <<: *default

production:
  <<: *default
  workers:
    - queues: "default,mailers"
      threads: 5
      processes: 2
    - queues: "ai_processing"
      threads: 3
      processes: 1
```

Start it:

```shell
# Development (runs with your Rails server)
bin/jobs

# Production (as a separate process)
bundle exec rake solid_queue:start
```

The beauty of Solid Queue: zero infrastructure overhead. Your database handles job storage. For most apps, this is plenty.

Sidekiq: The Heavy Hitter

When you need serious throughput — thousands of jobs per second, complex retry logic, scheduled jobs — Sidekiq is the standard.

```ruby
# Gemfile
gem "sidekiq"
```

```ruby
# config/application.rb
config.active_job.queue_adapter = :sidekiq
```

```yaml
# config/sidekiq.yml
:concurrency: 10
:queues:
  - [critical, 3]
  - [default, 2]
  - [ai_processing, 1]
```

```shell
# Start Sidekiq
bundle exec sidekiq
```

Sidekiq uses Redis and processes jobs in threads, making it extremely fast. The numbers after queue names are weights — critical gets 3x the attention of ai_processing.
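As back-of-envelope arithmetic (this illustrates the weighting idea, not Sidekiq's exact polling algorithm), those weights give each queue roughly this share of the workers' attention:

```ruby
weights = { "critical" => 3, "default" => 2, "ai_processing" => 1 }
total = weights.values.sum.to_f

# Rough share of polling attention per queue, in percent
share = weights.transform_values { |w| (w / total * 100).round }
# critical gets about half, ai_processing about a sixth
```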

Job Patterns for AI Work

Pattern 1: Chain of Jobs

AI workflows often have multiple steps. Chain them:

```ruby
class GenerateEmbeddingJob < ApplicationJob
  queue_as :ai_processing

  def perform(document_id)
    document = Document.find(document_id)

    embedding = OpenAI::Client.new.embeddings(
      parameters: {
        model: "text-embedding-3-small",
        input: document.content
      }
    )

    document.update!(
      embedding: embedding.dig("data", 0, "embedding")
    )

    # Chain: after embedding, find similar docs
    FindSimilarDocumentsJob.perform_later(document_id)
  end
end
```

Pattern 2: Progress Tracking

Users want to know what's happening. Track progress with ActionCable:

```ruby
class BulkProcessJob < ApplicationJob
  queue_as :ai_processing

  def perform(batch_id)
    batch = Batch.find(batch_id)
    items = batch.items.unprocessed

    items.each_with_index do |item, index|
      process_item(item)

      # Broadcast progress
      ActionCable.server.broadcast(
        "batch_#{batch_id}",
        {
          progress: ((index + 1).to_f / items.count * 100).round,
          processed: index + 1,
          total: items.count
        }
      )
    end

    batch.update!(completed_at: Time.current)
  end

  private

  def process_item(item)
    # Your AI processing here
  end
end
```
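The percentage math in that broadcast is worth isolating: the `to_f` matters, because integer division would floor every intermediate result to zero:

```ruby
total = 8

# Integer division loses the fraction entirely:
int_progress = 3 / total * 100            # => 0

# Coercing to Float first gives the intended percentage:
float_progress = (3.to_f / total * 100).round  # => 38
```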

Pattern 3: Retry with Backoff

AI APIs have rate limits. Handle them gracefully:

```ruby
class AiApiJob < ApplicationJob
  queue_as :ai_processing

  retry_on Faraday::TooManyRequestsError, wait: :polynomially_longer, attempts: 5
  retry_on Faraday::TimeoutError, wait: 10.seconds, attempts: 3
  discard_on ActiveRecord::RecordNotFound

  def perform(record_id)
    record = Record.find(record_id)
    # API call that might fail
  end
end
```

wait: :polynomially_longer spaces out retries: ~3s, ~18s, ~83s, ~293s. Perfect for rate limits. discard_on skips the job entirely if the record was deleted while waiting.
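Under the hood the schedule is roughly `executions ** 4 + 2` seconds, plus random jitter (which is why observed waits run a bit longer than the base values). A quick sketch of the base schedule, ignoring jitter:

```ruby
# Base wait (in seconds) before each retry attempt, jitter excluded
waits = (1..4).map { |executions| executions**4 + 2 }
# => [3, 18, 83, 258] — jitter pushes the later waits higher
```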

Pattern 4: Unique Jobs

Don't process the same document twice simultaneously:

```ruby
class ProcessDocumentJob < ApplicationJob
  queue_as :ai_processing

  before_enqueue do |job|
    document_id = job.arguments.first
    key = "processing_document_#{document_id}"

    throw(:abort) if Rails.cache.exist?(key)
    Rails.cache.write(key, true, expires_in: 30.minutes)
  end

  after_perform do |job|
    document_id = job.arguments.first
    Rails.cache.delete("processing_document_#{document_id}")
  end

  def perform(document_id)
    # Process document
  end
end
```

Queues: Separate Your Work

Keep AI processing separate from your regular app work:

```ruby
# Fast, user-facing stuff
class SendNotificationJob < ApplicationJob
  queue_as :default
end

# Slow AI calls
class GenerateSummaryJob < ApplicationJob
  queue_as :ai_processing
end

# Critical stuff that can't wait
class ChargePaymentJob < ApplicationJob
  queue_as :critical
end
```

Run separate workers per queue so a flood of AI jobs doesn't block your notifications.
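One way to wire that up, sketched as a hypothetical Procfile using Sidekiq's `-q` flag (the process names here are illustrative):

```shell
# Procfile
web: bin/rails server
worker_default: bundle exec sidekiq -q critical -q default
worker_ai: bundle exec sidekiq -q ai_processing
```

With Solid Queue, the equivalent isolation comes from the separate `workers` entries in `config/queue.yml` shown earlier.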

What's Next

Background jobs + ActionCable + Turbo = the complete real-time AI pipeline. Your user submits a prompt, a job picks it up, calls the AI API, and streams the result back — all without blocking a single web request.

Next up: Rails + OpenAI API — we'll build a full chat interface with streaming responses, putting everything we've learned together.

Part 15 of the Ruby for AI series. Code runs on Rails 8+ with Ruby 3.2+.
