Active Job and Background Processing for AI Features in Rails
This is Part 15 of the Ruby for AI series. We just covered ActionCable for real-time features. Now let's talk about the engine behind every serious AI feature: background jobs.
AI API calls are slow. Embedding generation takes time. PDF processing blocks threads. You never, ever want your web request sitting there waiting for OpenAI to respond. Background jobs solve this completely.
Active Job: The Interface
Active Job is Rails' unified API for background processing. It's an abstraction layer — you write jobs once, then plug in any backend: Sidekiq, Solid Queue, Good Job, or others.
```bash
# Generate a job
rails generate job ProcessDocument
```

```ruby
# app/jobs/process_document_job.rb
class ProcessDocumentJob < ApplicationJob
  queue_as :default

  def perform(document_id)
    document = Document.find(document_id)
    content = document.file.download

    # Call AI to summarize
    response = OpenAI::Client.new.chat(
      parameters: {
        model: "gpt-4",
        messages: [{ role: "user", content: "Summarize: #{content}" }]
      }
    )

    document.update!(
      summary: response.dig("choices", 0, "message", "content"),
      processed_at: Time.current
    )
  end
end
```
Enqueue it from anywhere:
```ruby
# Fire and forget
ProcessDocumentJob.perform_later(document.id)

# With a delay
ProcessDocumentJob.set(wait: 5.minutes).perform_later(document.id)

# On a specific queue
ProcessDocumentJob.set(queue: :ai_processing).perform_later(document.id)
```
The controller returns instantly. The job runs in a separate process. The user never waits.
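The handoff itself is nothing Rails-specific. Here is a toy illustration in plain Ruby (no Rails, no job backend — `Thread::Queue` stands in for the queue, a thread stands in for the worker process) of why the caller never blocks:

```ruby
# Toy sketch of the enqueue/worker handoff: the "web request" pushes a job id
# onto a queue and returns immediately; a separate worker does the slow work.
jobs = Thread::Queue.new
results = []

worker = Thread.new do
  # pop returns nil once the queue is closed and drained, ending the loop
  while (document_id = jobs.pop)
    sleep 0.01                      # stand-in for a slow AI API call
    results << "processed #{document_id}"
  end
end

jobs.push(42)                       # "perform_later" — returns instantly
jobs.push(43)
jobs.close                          # signal the worker that no more work is coming
worker.join

results # => ["processed 42", "processed 43"]
```

The real thing differs in durability (jobs survive restarts because they live in Redis or your database), but the shape — enqueue, return, process elsewhere — is the same.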
Solid Queue: Rails 8's Default
Rails 8 ships with Solid Queue as the default backend. No Redis needed — it uses your existing database.
```bash
# Already included in Rails 8 apps, but if you need to add it:
bundle add solid_queue
rails solid_queue:install
rails db:migrate
```
Configure it:
```yaml
# config/queue.yml
default: &default
  dispatchers:
    - polling_interval: 1
      batch_size: 500
  workers:
    - queues: "*"
      threads: 5
      processes: 2

development:
  <<: *default

production:
  <<: *default
  workers:
    - queues: "default,mailers"
      threads: 5
      processes: 2
    - queues: "ai_processing"
      threads: 3
      processes: 1
```
Start it:
```bash
# Development (runs with your Rails server)
bin/jobs

# Production (as a separate process)
bundle exec rake solid_queue:start
```
The beauty of Solid Queue: zero infrastructure overhead. Your database handles job storage. For most apps, this is plenty.
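Because jobs are just rows, you can poke at the queue from the Rails console using Solid Queue's own models (this assumes the default tables that `solid_queue:install` creates):

```ruby
# Rails console sketch — Solid Queue stores everything in ordinary tables,
# so inspecting the queue is just ActiveRecord.
SolidQueue::Job.where(finished_at: nil).count   # jobs still waiting or running
SolidQueue::FailedExecution.count               # jobs that raised and gave up
```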
Sidekiq: The Heavy Hitter
When you need serious throughput — thousands of jobs per second, complex retry logic, scheduled jobs — Sidekiq is the standard.
```ruby
# Gemfile
gem "sidekiq"
```
```ruby
# config/application.rb
config.active_job.queue_adapter = :sidekiq
```
```yaml
# config/sidekiq.yml
:concurrency: 10
:queues:
  - [critical, 3]
  - [default, 2]
  - [ai_processing, 1]
```
```bash
# Start Sidekiq
bundle exec sidekiq
```
Sidekiq uses Redis and processes jobs in threads, making it extremely fast. The numbers after queue names are weights — critical gets 3x the attention of ai_processing.
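The weights work like lottery tickets: each queue appears in a lookup list as many times as its weight, and each poll samples from that list. A quick back-of-the-envelope in plain Ruby (a sketch of the idea, not Sidekiq's actual internals):

```ruby
# Each queue appears weight-many times in the lookup list, so the chance a
# given poll checks a queue first is weight / total_weight.
weights = { "critical" => 3, "default" => 2, "ai_processing" => 1 }
lookup  = weights.flat_map { |queue, w| [queue] * w }

total = weights.values.sum.to_f
odds  = weights.transform_values { |w| (w / total).round(2) }

lookup # => ["critical", "critical", "critical", "default", "default", "ai_processing"]
odds   # => {"critical"=>0.5, "default"=>0.33, "ai_processing"=>0.17}
```

Note this is probabilistic prioritization, not strict: ai_processing still gets polled, it just gets polled less often, so low-priority queues can't be starved entirely.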
Job Patterns for AI Work
Pattern 1: Chain of Jobs
AI workflows often have multiple steps. Chain them:
```ruby
class GenerateEmbeddingJob < ApplicationJob
  queue_as :ai_processing

  def perform(document_id)
    document = Document.find(document_id)

    embedding = OpenAI::Client.new.embeddings(
      parameters: {
        model: "text-embedding-3-small",
        input: document.content
      }
    )

    document.update!(
      embedding: embedding.dig("data", 0, "embedding")
    )

    # Chain: after embedding, find similar docs
    FindSimilarDocumentsJob.perform_later(document_id)
  end
end
```
Pattern 2: Progress Tracking
Users want to know what's happening. Track progress with ActionCable:
```ruby
class BulkProcessJob < ApplicationJob
  queue_as :ai_processing

  def perform(batch_id)
    batch = Batch.find(batch_id)
    # Load to an array so the total stays fixed even as items get processed
    items = batch.items.unprocessed.to_a
    total = items.size

    items.each_with_index do |item, index|
      process_item(item)

      # Broadcast progress
      ActionCable.server.broadcast(
        "batch_#{batch_id}",
        {
          progress: ((index + 1).to_f / total * 100).round,
          processed: index + 1,
          total: total
        }
      )
    end

    batch.update!(completed_at: Time.current)
  end

  private

  def process_item(item)
    # Your AI processing here
  end
end
```
Pattern 3: Retry with Backoff
AI APIs have rate limits. Handle them gracefully:
```ruby
class AiApiJob < ApplicationJob
  queue_as :ai_processing

  retry_on Faraday::TooManyRequestsError, wait: :polynomially_longer, attempts: 5
  retry_on Faraday::TimeoutError, wait: 10.seconds, attempts: 3
  discard_on ActiveRecord::RecordNotFound

  def perform(record_id)
    record = Record.find(record_id)
    # API call that might fail
  end
end
```
wait: :polynomially_longer spaces out retries: ~3s, ~18s, ~83s, ~293s. Perfect for rate limits. discard_on skips the job entirely if the record was deleted while waiting.
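Those delays come from Active Job's polynomial backoff, which is roughly `attempt ** 4 + 2` seconds before random jitter is added on top (the exact formula lives inside Active Job, so treat this as an approximation):

```ruby
# Approximate :polynomially_longer delays per retry attempt.
# Active Job adds random jitter on top of these base values.
waits = (1..4).map { |attempt| attempt**4 + 2 }
waits # => [3, 18, 83, 258]
```

The quartic growth is the point: early retries are quick enough to ride out a transient blip, while later retries back off far enough to let a rate-limit window reset.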
Pattern 4: Unique Jobs
Don't process the same document twice simultaneously:
```ruby
class ProcessDocumentJob < ApplicationJob
  queue_as :ai_processing

  before_enqueue do |job|
    document_id = job.arguments.first
    key = "processing_document_#{document_id}"
    throw(:abort) if Rails.cache.exist?(key)
    Rails.cache.write(key, true, expires_in: 30.minutes)
  end

  after_perform do |job|
    document_id = job.arguments.first
    Rails.cache.delete("processing_document_#{document_id}")
  end

  def perform(document_id)
    # Process document
  end
end
```
Queues: Separate Your Work
Keep AI processing separate from your regular app work:
```ruby
# Fast, user-facing stuff
class SendNotificationJob < ApplicationJob
  queue_as :default
end

# Slow AI calls
class GenerateSummaryJob < ApplicationJob
  queue_as :ai_processing
end

# Critical stuff that can't wait
class ChargePaymentJob < ApplicationJob
  queue_as :critical
end
```
Run separate workers per queue so a flood of AI jobs doesn't block your notifications.
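With Sidekiq, that separation is just two processes started with different `-q` flags (the concurrency values here are illustrative, not recommendations):

```shell
# Worker 1: fast, user-facing queues
bundle exec sidekiq -q critical -q default -c 10

# Worker 2: slow AI calls only — a backlog here can't starve the queues above
bundle exec sidekiq -q ai_processing -c 3
```

With Solid Queue, the same isolation falls out of the production `workers:` section of config/queue.yml, where each worker entry lists its own queues.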
What's Next
Background jobs + ActionCable + Turbo = the complete real-time AI pipeline. Your user submits a prompt, a job picks it up, calls the AI API, and streams the result back — all without blocking a single web request.
Next up: Rails + OpenAI API — we'll build a full chat interface with streaming responses, putting everything we've learned together.
Part 15 of the Ruby for AI series. Code runs on Rails 8+ with Ruby 3.2+.