
Laravel Queues Are Not a Silver Bullet. Here Is What Happens When You Overuse Them.

March 19, 2026

There is a moment in every Laravel developer’s career where queues feel like the answer to everything.

Page slow? Throw it in a queue. Webhook sluggish? Queue it. Sending a single email? Queue. Logging a user action? You already know what I am going to say.

The first time you push a slow task to the background and your endpoint drops from 4 seconds to 80 milliseconds, something breaks in your brain. You start seeing queues everywhere. Every function call becomes a candidate. Every database write feels like it should be asynchronous. You have discovered fire and now you want to burn everything.

I have been there. I have also spent days debugging the fires that came after.

This article is not telling you to avoid queues. Queues are genuinely one of the best tools Laravel gives you. But tools have appropriate contexts, and queues have a specific failure mode that nobody talks about, because it only shows up after you have already committed to the architecture.

What Queues Are Actually For

The contract is simple. Laravel queues move work that does not need to happen right now out of the request-response cycle. The user clicks a button, your app acknowledges it, and the heavy lifting happens separately.

// This is the right mental model
public function store(Request $request)
{
    $order = Order::create($request->validated());

    // User does not need to wait for these
    SendOrderConfirmationEmail::dispatch($order);
    SyncOrderToWarehouse::dispatch($order);
    return response()->json($order, 201);
}

The problem starts when “does not need to happen right now” slowly becomes “could theoretically happen later.” That is a different thing entirely and it will cost you.

The Overuse Patterns I See Most

Queuing Things the User Is Waiting On

This is the subtlest mistake and the most painful one to debug.

// You think this is an optimization
public function updateUserPlan(Request $request)
{
    UpdateUserSubscription::dispatch(auth()->user(), $request->plan);

    return response()->json(['message' => 'Plan updated successfully']);
}

The user gets a 200. They see “Plan updated successfully.” They immediately try to access a feature that belongs to the new plan. The job has not run yet. They get a 403. They open a support ticket. You spend an hour explaining eventual consistency to a person who just paid you money.

If the user’s next action depends on this data existing, the queue bought you nothing except a race condition with a friendly success message on top.

The fix here is not a smarter queue strategy. The fix is to do the work synchronously and optimize the operation itself if it is genuinely slow.

// This is what you actually want
public function updateUserPlan(Request $request)
{
    $user = auth()->user();
    $user->subscription()->update(['plan' => $request->plan]);
    $user->syncPermissions($request->plan);
    return response()->json(['message' => 'Plan updated successfully']);
}

Queuing Cheap Synchronous Operations

A queue job is not free. When you dispatch a job, Laravel serializes the payload, writes it to your queue driver and a worker has to pick it up, deserialize it and run it. There is real overhead on both ends.

// This is what overuse looks like
class LogUserAction implements ShouldQueue
{
    use Dispatchable, InteractsWithQueue, Queueable, SerializesModels;

    public function __construct(
        public int $userId,
        public string $action,
        public array $metadata = []
    ) {}

    public function handle(): void
    {
        ActivityLog::create([
            'user_id'  => $this->userId,
            'action'   => $this->action,
            'metadata' => $this->metadata,
        ]);
    }
}
// Dispatched on every single request
LogUserAction::dispatch($user->id, 'viewed_dashboard', $meta);

That ActivityLog::create() takes about 2-3 milliseconds on a local database. The overhead of serializing the job, writing it to Redis and having a worker pick it up and execute it adds several more milliseconds on top. You have not optimized anything. You have added latency you cannot see.

Measure before you queue.

$start = microtime(true);
ActivityLog::create([...]);
$duration = (microtime(true) - $start) * 1000;
// If this is under 10ms, do not queue it

For high-volume logging specifically, consider a different approach entirely. Instead of queuing individual inserts, buffer them in memory and flush once at the end of the request using Laravel’s terminating() hook.

// Batch inserts beat queued single inserts at volume
class ActivityLogger
{
    private static array $buffer = [];

    public static function push(int $userId, string $action, array $meta = []): void
    {
        static::$buffer[] = [
            'user_id'    => $userId,
            'action'     => $action,
            'metadata'   => json_encode($meta),
            'created_at' => now()->toDateTimeString(),
            'updated_at' => now()->toDateTimeString(),
        ];
    }
    public static function flush(): void
    {
        if (empty(static::$buffer)) {
            return;
        }
        // insert() bypasses Eloquent model events intentionally.
        // For a logging table that is fine. For anything else, be deliberate.
        ActivityLog::insert(static::$buffer);
        static::$buffer = [];
    }
}
// Register in AppServiceProvider::boot()
$this->app->terminating(fn () => ActivityLogger::flush());

One batch insert at the end of the request. No queue, no worker overhead, no serialization.

One important caveat: static properties persist across requests in Laravel Octane because Octane reuses the same process. If you are running Octane, reset the buffer explicitly in the flushed event or use a request-scoped singleton instead of static state.
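One way to handle the Octane case is to flush and reset on Octane's RequestTerminated event. This is a sketch, assuming the ActivityLogger class above; FlushActivityLogBuffer is a hypothetical listener you would create:

```php
// config/octane.php — append a listener that empties the buffer
// after every request this worker process handles.
use Laravel\Octane\Events\RequestTerminated;

'listeners' => [
    RequestTerminated::class => [
        // ...Octane's default listeners...
        \App\Listeners\FlushActivityLogBuffer::class,
    ],
],
```

The listener itself just delegates to the flush method, which already resets the static buffer, so nothing leaks into the next request:

```php
namespace App\Listeners;

use App\Support\ActivityLogger;

class FlushActivityLogBuffer
{
    public function handle(object $event): void
    {
        ActivityLogger::flush();
    }
}
```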

Using Queues to Avoid Fixing the Real Problem

This one is the most expensive pattern in the long run.

// Slow endpoint. Developer's solution: queue it.
public function generateReport(Request $request)
{
    GenerateUserReport::dispatch($request->filters);

    return response()->json(['message' => 'Report is being generated']);
}
class GenerateUserReport implements ShouldQueue
{
    // The controller dispatches $request->filters, so the job
    // needs a constructor to actually receive them.
    public function __construct(public array $filters) {}

    public function handle(): void
    {
        // This query has an N+1 problem and missing indexes.
        // The queue hides it. The problem does not go away.
        $users = User::with('orders')
            ->whereHas('orders', fn ($q) => $q->where('status', 'completed'))
            ->get();
        foreach ($users as $user) {
            // Another query per user. Hidden behind the queue.
            $user->orders()
                ->where('created_at', '>=', now()->subDays(30))
                ->sum('total');
        }
    }
}

The queue hides the symptom. The actual problem stays in the codebase. As data grows, the job takes longer. At some point it starts hitting the timeout limit and gets marked as failed. Now you have a report that silently breaks at scale and you are chasing a queue problem that was always a database problem.

The right move is to fix the query first, pushing the aggregation into the database where it belongs:

class GenerateUserReport implements ShouldQueue
{
    public int $timeout = 120;

    public function __construct(public array $filters) {}
    public function handle(): void
    {
        // Aggregation in SQL. One query instead of one per user.
        $rows = DB::table('orders')
            ->select([
                'user_id',
                DB::raw('SUM(total) as total_30d'),
                DB::raw('COUNT(*) as order_count'),
            ])
            ->where('status', 'completed')
            ->where('created_at', '>=', now()->subDays(30))
            ->groupBy('user_id')
            ->get();
        // Fetch the related users separately in one query
        $userIds = $rows->pluck('user_id');
        $users   = User::whereIn('id', $userIds)->pluck('name', 'id');
        $report = $rows->map(fn ($row) => [
            'user'        => $users[$row->user_id] ?? 'Unknown',
            'total_30d'   => $row->total_30d,
            'order_count' => $row->order_count,
        ]);
        // Store or broadcast $report
    }
}

Now the job is fast enough that queuing it is genuinely optional. You queue it because reports are legitimately background work, not because you had no other choice.

Spawning Too Many Jobs From Inside Jobs

Chaining and batching are legitimate patterns. Dispatching hundreds of jobs from inside a single job because you want parallelism is where things get unpredictable.

// This looks efficient. It is not.
class ProcessImportedFile implements ShouldQueue
{
    public function __construct(public string $filePath) {}

    public function handle(): void
    {
        $file = new \SplFileObject($this->filePath);
        $file->setFlags(\SplFileObject::READ_CSV | \SplFileObject::SKIP_EMPTY);
        foreach ($file as $row) {
            // 10,000 row file = 10,000 jobs suddenly flooding the queue
            ProcessImportRow::dispatch($row);
        }
    }
}

A 10,000 row CSV means 10,000 jobs hitting your queue at once. They compete with every other job in the system. Jobs that should run in seconds are now waiting minutes. If this import runs a few times a day, your queue never properly drains.

Laravel’s batch processing exists precisely for this. But there is an important rule: you should not use $this inside batch then() and catch() closures. The callbacks are serialized and stored separately, and capturing $this will throw a serialization error. Capture the values you need explicitly instead.

use Illuminate\Bus\Batch;
use Illuminate\Support\Facades\Bus;
use Throwable;

class ProcessImportedFile implements ShouldQueue
{
    public function __construct(public string $filePath) {}
    public function handle(): void
    {
        $file = new \SplFileObject($this->filePath);
        $file->setFlags(\SplFileObject::READ_CSV | \SplFileObject::SKIP_EMPTY);
        $rows   = iterator_to_array($file);
        $chunks = array_chunk($rows, 100);
        $jobs = array_map(
            fn ($chunk) => new ProcessImportChunk($chunk),
            $chunks
        );
        // Capture $filePath explicitly - do NOT use $this inside these closures.
        // Laravel serializes batch callbacks separately and $this will cause
        // a "Serialization of Closure is not allowed" error at runtime.
        $filePath = $this->filePath;
        Bus::batch($jobs)
            ->then(fn (Batch $batch) => ImportCompleted::dispatch($filePath))
            ->catch(fn (Batch $batch, Throwable $e) => ImportFailed::dispatch($filePath, $e->getMessage()))
            ->onQueue('imports')
            ->dispatch();
    }
}

Now you control the chunk size, you have proper lifecycle hooks and you are not flooding the default queue with work that belongs on a dedicated one.

What the Failure Looks Like in Production

The interesting thing about queue overuse is that it does not fail immediately. It fails gradually, under load, in ways that are hard to attribute to the queue architecture.

The first sign is queue depth creeping upward. You add more workers. It stabilizes. You call it solved.

# Monitor queue depth across named queues
# The format is connection:queue — redis:default means the
# "default" queue on the "redis" connection
php artisan queue:monitor redis:default,redis:high,redis:imports --max=100
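queue:monitor only reports; on its own it will not page anyone. When a queue exceeds the --max threshold, Laravel fires a QueueBusy event you can listen for. A minimal sketch of wiring that up (the alerting channel is up to you):

```php
use Illuminate\Queue\Events\QueueBusy;
use Illuminate\Support\Facades\Event;
use Illuminate\Support\Facades\Log;

// In AppServiceProvider::boot() — fired once per queue that
// queue:monitor finds above its --max threshold.
Event::listen(function (QueueBusy $event) {
    Log::warning('Queue is backed up', [
        'connection' => $event->connection,
        'queue'      => $event->queue,
        'size'       => $event->size,
    ]);

    // Ping Slack, PagerDuty, whatever you use.
});
```

Schedule `queue:monitor` to run every minute and this listener becomes your early-warning system for the creeping queue depth described above.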

The second sign is jobs hitting their timeout and landing in failed_jobs. You check the table and there are hundreds of entries.

php artisan queue:failed
php artisan queue:retry all  # Only if you have diagnosed why they failed

The third sign is job execution time silently inflating. A job that normally takes 200ms is now taking 4 seconds. This is almost always a query that worked fine at small data volumes and stopped working at scale. The queue hid it until it could not anymore.

The Question You Should Ask Before Queuing Anything

Not “can this be queued” but “what happens if this job runs 30 seconds from now instead of right now.”

If the answer is “the user would be confused” or “the data would be inconsistent,” do not queue it.

Also ask whether the job is idempotent. If your job runs twice due to a retry, does it cause duplicate data?

// Not idempotent. Running this twice creates duplicate warehouse entries.
class SyncOrderToWarehouse implements ShouldQueue
{
    use Dispatchable, InteractsWithQueue, Queueable, SerializesModels;

    public int $tries = 3;
    public function __construct(public Order $order) {}
    public function handle(): void
    {
        WarehouseAPI::createOrder($this->order->toArray());
    }
}
// Idempotent. Running this twice is safe.
class SyncOrderToWarehouse implements ShouldQueue
{
    use Dispatchable, InteractsWithQueue, Queueable, SerializesModels;

    public int $tries   = 3;
    public int $backoff = 60;
    public function __construct(public Order $order) {}
    public function handle(): void
    {
        WarehouseAPI::upsertOrder(
            referenceId: $this->order->id,
            data: $this->order->toArray()
        );
    }
}

Every job you write should be safe to run more than once. If it is not, retries will make the failure worse than the original problem.
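Idempotent handlers are the foundation, but Laravel can also reduce how often duplicates happen in the first place. A sketch layering ShouldBeUnique and the WithoutOverlapping middleware (both real framework features) onto the idempotent job above — WarehouseAPI is still the hypothetical client from the earlier example:

```php
use Illuminate\Contracts\Queue\ShouldBeUnique;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Queue\Middleware\WithoutOverlapping;

class SyncOrderToWarehouse implements ShouldQueue, ShouldBeUnique
{
    use Dispatchable, InteractsWithQueue, Queueable, SerializesModels;

    public int $tries   = 3;
    public int $backoff = 60;

    public function __construct(public Order $order) {}

    // ShouldBeUnique: at most one instance of this job per order
    // may sit in the queue at a time, keyed by this ID.
    public function uniqueId(): string
    {
        return (string) $this->order->id;
    }

    // WithoutOverlapping: at most one instance per order may *run*
    // at a time, so a retry cannot race a still-executing attempt.
    public function middleware(): array
    {
        return [new WithoutOverlapping($this->order->id)];
    }

    public function handle(): void
    {
        WarehouseAPI::upsertOrder(
            referenceId: $this->order->id,
            data: $this->order->toArray()
        );
    }
}
```

These are belts on top of the suspenders. The upsert still matters, because uniqueness locks expire and retries after a crash can still run the handler twice.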

The Monitoring You Cannot Skip

If your production system uses queues and you do not have the following in place, you are flying blind.

Queue depth per queue name. Not just the default queue. Every named queue.

use Illuminate\Support\Facades\Queue;
use Illuminate\Support\Facades\Log;

// Run this in a scheduled Artisan command, e.g. every minute
$queues = ['default', 'high', 'imports', 'notifications'];
foreach ($queues as $queue) {
    $depth = Queue::size($queue);
    if ($depth > 500) {
        Log::warning('Queue depth exceeded threshold', [
            'queue' => $queue,
            'depth' => $depth,
        ]);
        // Ping Slack, PagerDuty, whatever you use
    }
}

Failed job alerting. You should know within minutes, not when a user reports it.

use Illuminate\Queue\Events\JobFailed;
use Illuminate\Support\Facades\Queue;
use Illuminate\Support\Facades\Log;

// In AppServiceProvider::boot()
Queue::failing(function (JobFailed $event) {
    Log::error('Queue job failed', [
        'job'       => $event->job->getName(),
        'queue'     => $event->job->getQueue(),
        'exception' => $event->exception->getMessage(),
    ]);
});

Job execution time. Track how long your critical jobs actually take.

class ProcessImportChunk implements ShouldQueue
{
    public function __construct(public array $rows) {}

    public function handle(): void
    {
        $start = microtime(true);
        // ... actual work ...
        $duration = (microtime(true) - $start) * 1000;
        Log::info('Job completed', [
            'job'         => static::class,
            'duration_ms' => round($duration, 2),
            'chunk_size'  => count($this->rows),
        ]);
    }
}

If you are on Redis, Laravel Horizon gives you all of this out of the box and then some. There is no reason not to run it.

composer require laravel/horizon
php artisan horizon:install
php artisan horizon

Open /horizon in your browser and you will see queue depth, throughput, failed jobs and execution time per job class. If you are running production queues without Horizon and you are on Redis, you are making your own life harder than it needs to be.
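Horizon also replaces hand-rolled worker management with a declarative config, which is where the dedicated imports queue from earlier pays off. A sketch of what that separation might look like in config/horizon.php — the supervisor names and process counts are illustrative, not a recommendation:

```php
// config/horizon.php — keep bulk import work on its own supervisor
// so a flood of import jobs cannot starve everything else.
'environments' => [
    'production' => [
        'supervisor-default' => [
            'connection'   => 'redis',
            'queue'        => ['high', 'default', 'notifications'],
            'balance'      => 'auto',
            'maxProcesses' => 8,
            'tries'        => 3,
        ],
        'supervisor-imports' => [
            'connection'   => 'redis',
            'queue'        => ['imports'],
            'balance'      => 'auto',
            'maxProcesses' => 4,
            'tries'        => 1,
        ],
    ],
],
```

With this split, the 10,000-row import from earlier can saturate its own workers without delaying password reset emails on the default queue.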

When Queues Are the Right Answer

Sending transactional emails. Always. Email sending is slow, unreliable and completely decoupled from whether the request succeeded.

Processing uploaded files. Images, documents, spreadsheets. The user does not need to wait for the resize or the parse to complete.

Third-party API calls where failure should not block the user. CRM syncs, Slack notifications, analytics events.

Report generation and bulk operations. Anything that takes more than a second or two should be asynchronous. No one should sit at a browser waiting for a 10,000 row CSV to build.

The pattern is consistent across all of these. The work is genuinely independent from the immediate user action. Failure is recoverable with a retry. The user experience is not harmed by a short delay. The job is idempotent.
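Even in these legitimate cases there is one dispatch-time detail worth getting right. If you dispatch from inside a database transaction, a fast worker can pick the job up before the transaction commits and fail to find the row it needs. A sketch of guarding against that with afterCommit(), reusing the order example from the top of the article:

```php
use Illuminate\Support\Facades\DB;

DB::transaction(function () use ($request) {
    $order = Order::create($request->validated());

    // Without afterCommit(), a worker could grab these jobs before
    // the transaction commits and see no Order row at all.
    SendOrderConfirmationEmail::dispatch($order)->afterCommit();
    SyncOrderToWarehouse::dispatch($order)->afterCommit();
});
```

If you want this behavior everywhere, the queue connection config supports an after_commit option so you do not have to remember it per dispatch.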

The Real Lesson

Queues solve one specific problem. They let you defer work that does not need to happen synchronously. They are not a performance optimization in the general sense. They are a deferral mechanism.

When you use them correctly they make your application faster, more resilient and easier to scale. When you use them as a substitute for understanding why something is slow, you end up with a distributed system that is harder to debug than the monolith you started with.

Before you queue something, know why you are queuing it. Know what happens when the job fails. Know what happens when the queue is backed up. Know whether the job is idempotent. Know whether the user’s next action depends on this work completing.

Answer those questions first. Then decide.

The queue will still be there.


Originally published on Medium by Hafiq Iqmal.