Because retrofitting one into a live production system is a nightmare you do not want.

Most systems get an audit trail added the same way most systems get tests added.
Late. Painfully. After something breaks in production and someone asks “who changed this and when?”
By then your data model is set, your relationships are tangled and you are staring at six months of historical records with no trace of who touched what. You patch something together. It sort of works. It haunts you forever.
This article is about not doing that. It does not matter if you are using Laravel, Django, Spring Boot or a Node.js API. The concepts are the same. Build it right from day one so it becomes a natural part of your system instead of a bolt-on afterthought.
What You Are Actually Building
An audit trail answers four questions:
- Who performed the action
- What changed
- When it happened
- From where: IP, device, server, request context
Most developers build a shallow version of this. A logs table with a description column and a user ID. That feels fine until a compliance officer, a court subpoena or an angry enterprise client asks you to prove exactly what happened to a specific record at a specific time.
Then you find out what you actually built was useless.
Record everything. Yes, everything. The PDPA or GDPR concern is real but it is a retention and access problem, not a collection problem. You can anonymise or purge old records. You cannot retroactively create records that never existed. Collect first. Manage retention later. Your future self will thank you.
The Schema: What to Store
Here is the extended audit table schema. Every field has a reason.
CREATE TABLE audits (
    id BIGINT UNSIGNED AUTO_INCREMENT,
    user_type VARCHAR(255) NULL, -- 'App\Models\User', 'App\Models\Admin', 'system'
    user_id BIGINT UNSIGNED NULL,
    event VARCHAR(255) NOT NULL, -- 'created', 'updated', 'deleted', 'restored', 'login', 'logout'
    auditable_type VARCHAR(255) NOT NULL, -- model class name or entity type
    auditable_id BIGINT UNSIGNED NOT NULL,
    old_values JSON NULL, -- what it looked like before
    new_values JSON NULL, -- what it looks like after
    snapshot JSON NULL, -- full record snapshot at time of change
    url TEXT NULL, -- full request URL
    ip_address VARCHAR(45) NULL, -- IPv4 or IPv6
    user_agent TEXT NULL, -- full browser or client string
    hostname VARCHAR(255) NULL, -- server hostname that wrote this record
    tags TEXT NULL, -- comma-separated tags for filtering
    checksum VARCHAR(64) NULL, -- SHA-256 of the row content
    created_at TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
    PRIMARY KEY (id, created_at), -- composite PK required for partitioning
    INDEX idx_auditable (auditable_type, auditable_id),
    INDEX idx_user (user_type, user_id),
    INDEX idx_event (event),
    INDEX idx_created (created_at)
)
PARTITION BY RANGE (UNIX_TIMESTAMP(created_at)) (
    PARTITION p2025 VALUES LESS THAN (UNIX_TIMESTAMP('2026-01-01 00:00:00')),
    PARTITION p2026 VALUES LESS THAN (UNIX_TIMESTAMP('2027-01-01 00:00:00')),
    PARTITION p2027 VALUES LESS THAN (UNIX_TIMESTAMP('2028-01-01 00:00:00')),
    PARTITION p_future VALUES LESS THAN MAXVALUE
);

Let me explain the columns that are not obvious.
hostname: When you have multiple app servers or containers behind a load balancer, you need to know which server wrote a given audit record. If server B is behaving abnormally or has a compromised process, you can isolate the records it produced. Use gethostname() in PHP or its equivalent in your language.
snapshot: This stores the full state of the record at the time of the change. Not just the diff. Not just the new values. The entire row as it existed at that moment. This matters because related records can be deleted later. If you only store a foreign key reference to a deleted record, that reference becomes meaningless. The snapshot makes the audit record self-contained and readable regardless of what happens to the source data later.
checksum: A SHA-256 hash of the critical fields in the row: user_id, user_type, event, auditable_type, auditable_id, old_values, new_values and created_at. If someone with database access directly updates or deletes an audit row, the checksum will no longer match. Run a scheduled job that recomputes and compares checksums. Any mismatch is an integrity alert. This is your defence against dishonest database changes by someone who has direct DB access but should not be covering their tracks.
tags: Use these to categorise records without filtering on auditable_type alone. Examples: billing, sensitive, security, pii. Useful when building admin UIs and compliance reports.
A few schema best practices worth following:
Never add an updated_at column to the audit table. Audit rows are immutable. The presence of updated_at implies the record can be changed. Remove the temptation entirely.
Use BIGINT UNSIGNED for id rather than INT. An audit table on any reasonably active system will exceed 2.1 billion rows faster than you expect, and once an INT primary key hits its ceiling, inserts start failing. BIGINT will not be an issue in your lifetime.
Store ip_address as VARCHAR(45) to support both IPv4 and IPv6. The maximum length of an IPv6 address in full notation is 39 characters. VARCHAR(45) gives you comfortable headroom.
If your system is multi-tenant, add a tenant_id column and index it. Filtering audit records by tenant without it becomes a full table scan on your largest and most write-heavy table.
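A sketch of that addition, with illustrative names (adjust the column position and index to your own query patterns):

```sql
-- Hypothetical multi-tenant extension of the audits table
ALTER TABLE audits
    ADD COLUMN tenant_id BIGINT UNSIGNED NULL AFTER user_id,
    ADD INDEX idx_tenant_created (tenant_id, created_at);
```

The composite (tenant_id, created_at) index serves the most common shape of tenant-scoped audit queries: everything for tenant X within a date range.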
On Partitioning
The audit table is append-only and grows forever. Without partitioning, querying six months of history on a table with 50 million rows is painful regardless of your indexes.
MySQL supports PARTITION BY RANGE on timestamp values using UNIX_TIMESTAMP(). Partition by year for most systems. Partition by month if your system generates extremely high audit volume.
The important thing to understand: in MySQL, every unique key on a partitioned table, including the primary key, must include the partition column. That is why the schema above uses a composite primary key of (id, created_at) instead of just id. This is a real constraint and not something you can ignore.
The major benefit of partitioning here is archival. When you need to drop or archive records older than two years, you run:
ALTER TABLE audits DROP PARTITION p2025;
That single statement removes an entire year of data near-instantly. Compare that to a DELETE WHERE created_at < '2026-01-01' on a 50-million-row table. You do not want to do that.
Add new year partitions every December before you need them. Do it with a scheduled task. Do not wait until January and find the p_future catch-all partition absorbing everything.
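Because p_future is a MAXVALUE catch-all, a plain ADD PARTITION will be rejected; the scheduled task needs REORGANIZE PARTITION to split the catch-all instead. A sketch of the statement it would run each December (year values illustrative):

```sql
-- Split p_future so 2028 rows land in their own partition
ALTER TABLE audits
    REORGANIZE PARTITION p_future INTO (
        PARTITION p2028 VALUES LESS THAN (UNIX_TIMESTAMP('2029-01-01 00:00:00')),
        PARTITION p_future VALUES LESS THAN MAXVALUE
    );
```

REORGANIZE also moves any rows already sitting in p_future, which is another reason to run it before the new year starts: an empty catch-all keeps the operation near-instant.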
A few partitioning best practices:
Never delete partitions without archiving first. Drop a partition and that data is gone permanently. Before running ALTER TABLE audits DROP PARTITION p2025, export the partition to cold storage or a separate archive table. Some compliance frameworks require audit records to be retained for five to seven years.
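A sketch of that archive-then-drop flow in MySQL (the archive table name is illustrative; verify row counts before dropping anything):

```sql
-- Copy the expiring partition out before removing it
CREATE TABLE audits_archive_2025 LIKE audits;
ALTER TABLE audits_archive_2025 REMOVE PARTITIONING;

INSERT INTO audits_archive_2025
SELECT * FROM audits PARTITION (p2025);

-- Only after confirming the copy is complete:
ALTER TABLE audits DROP PARTITION p2025;
```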
Do not over-partition. Monthly partitions sound appealing but they generate 12 partition objects per year and MySQL has to manage all of them at query time. For most systems, annual partitions hit the right balance between manageability and pruning granularity. Switch to monthly only if you are generating tens of millions of rows per month.
Test partition pruning with EXPLAIN. Run EXPLAIN SELECT * FROM audits WHERE created_at BETWEEN '2026-01-01' AND '2026-12-31' and verify the output shows only the relevant partition being scanned. If it shows all partitions, your query is not benefiting from partitioning and something is wrong with how the filter is written.
Polymorphic Relationship: SQL and NoSQL
The auditable_type and auditable_id columns implement a polymorphic relationship. One audit table covers every entity in your system.
In a relational database like MySQL or PostgreSQL, this means a single audits table that references App\Models\User, App\Models\Invoice, App\Models\Order and so on through the type column. No need for separate user_audits, invoice_audits tables.
In a document database like MongoDB, the same concept applies. Each audit document stores:
{
  "auditable_type": "User",
  "auditable_id": "64f2a...",
  "event": "updated",
  "old_values": { "email": "old@example.com" },
  "new_values": { "email": "new@example.com" },
  "snapshot": { ... },
  "hostname": "app-server-03",
  "checksum": "a3f9..."
}

The pattern is the same. The storage engine is different. The application code that queries it will differ but the design intention is identical.
One important note for SQL: auditable_id should be BIGINT UNSIGNED if your primary keys are integers or VARCHAR(36) if they are UUIDs. Match the column type to your actual primary key type. Mismatches cause silent failures that are very unpleasant to debug.
A few polymorphic best practices:
Index (auditable_type, auditable_id) together as a composite index, not separately. Querying the audit history of a specific record always filters on both columns simultaneously. A composite index on both serves this pattern far better than two separate single-column indexes.
Keep auditable_type values consistent. If you rename a model class in PHP, all existing audit records with the old class name become orphans unless you run a migration to update them. Consider storing a short canonical string like invoice or user instead of the full class name if you anticipate model refactoring over the lifetime of the system.
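In Laravel, the morph map is the built-in way to get those short canonical strings; a sketch, assuming User and Invoice models:

```php
// In a service provider's boot() method
use Illuminate\Database\Eloquent\Relations\Relation;

Relation::enforceMorphMap([
    'user' => \App\Models\User::class,
    'invoice' => \App\Models\Invoice::class,
]);
```

With this in place, auditable_type stores 'invoice' rather than the full class name, and a class rename only requires updating the map, not migrating historical audit rows. Using enforceMorphMap rather than morphMap makes Laravel throw for any unmapped model, so new models cannot silently leak class names into the table.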
For MongoDB, use a compound index on { auditable_type: 1, auditable_id: 1 } to mirror the same query pattern.
Using Laravel Auditing
If you are on Laravel, do not build this from scratch. Use owen-it/laravel-auditing. It handles the polymorphic relationship, the delta tracking and the resolver pattern out of the box.
As of early 2026, the current stable release is v14, which requires PHP 8.2 or higher and supports Laravel 11 and Laravel 12. If you are still on v13, that branch is now end-of-life. Upgrade.
Install it:
composer require owen-it/laravel-auditing
php artisan vendor:publish --provider="OwenIt\Auditing\AuditingServiceProvider" --tag="config"
php artisan vendor:publish --provider="OwenIt\Auditing\AuditingServiceProvider" --tag="migrations"
php artisan migrate
Add the trait and interface to any model you want to audit:
<?php

namespace App\Models;

use Illuminate\Database\Eloquent\Model;
use OwenIt\Auditing\Contracts\Auditable;

class Invoice extends Model implements Auditable
{
    use \OwenIt\Auditing\Auditable;

    // Exclude sensitive fields from audit
    protected $auditExclude = [
        'password',
        'remember_token',
        'two_factor_secret',
    ];

    // Tag this model for filtering
    public function generateTags(): array
    {
        return ['billing', 'sensitive'];
    }
}

The package stores user_type, user_id, event, auditable_type, auditable_id, old_values, new_values, url, ip_address, user_agent and tags by default.
You need to extend it with the extra columns we discussed. Add the missing columns to the published migration before running it:
$table->string('hostname')->nullable();
$table->json('snapshot')->nullable();
$table->string('checksum', 64)->nullable();

Then create custom resolvers for the fields the package does not handle natively. The resolver pattern in v14 is clean. Add a hostname resolver:
<?php

namespace App\Resolvers;

use OwenIt\Auditing\Contracts\Auditable;
use OwenIt\Auditing\Contracts\Resolver;

class HostnameResolver implements Resolver
{
    public static function resolve(Auditable $auditable): string
    {
        // gethostname() can return false on failure
        return gethostname() ?: 'unknown';
    }
}

Register it in config/audit.php:
'resolvers' => [
    'ip_address' => OwenIt\Auditing\Resolvers\IpAddressResolver::class,
    'user_agent' => OwenIt\Auditing\Resolvers\UserAgentResolver::class,
    'url' => OwenIt\Auditing\Resolvers\UrlResolver::class,
    'hostname' => App\Resolvers\HostnameResolver::class,
],

For the checksum and snapshot, hook into the auditing event to compute them before the record is saved:
// In AppServiceProvider::boot()
\OwenIt\Auditing\Models\Audit::creating(function ($audit) {
    // Store full snapshot
    if ($audit->auditable) {
        $audit->snapshot = $audit->auditable->toArray();
    }

    // Pin the timestamp now so a verification job can later rebuild
    // the exact same payload from the stored row
    $audit->created_at = $audit->created_at ?? now();

    // Compute checksum
    $payload = implode('|', [
        $audit->user_id,
        $audit->user_type,
        $audit->event,
        $audit->auditable_type,
        $audit->auditable_id,
        json_encode($audit->old_values),
        json_encode($audit->new_values),
        $audit->created_at->toIso8601String(),
    ]);
    $audit->checksum = hash('sha256', $payload);
});

Record Login and Logout
The owen-it/laravel-auditing package tracks model changes. Login and logout are not model changes. They are authentication events. Handle them separately using Laravel's built-in event system.
<?php

namespace App\Listeners;

use Illuminate\Auth\Events\Login;
use Illuminate\Auth\Events\Logout;
use Illuminate\Auth\Events\Failed;
use Illuminate\Http\Request;
use App\Models\Audit;

class RecordAuthEvent
{
    public function __construct(protected Request $request) {}

    public function handleLogin(Login $event): void
    {
        $this->record($event->user, 'login');
    }

    public function handleLogout(Logout $event): void
    {
        $this->record($event->user, 'logout');
    }

    public function handleFailed(Failed $event): void
    {
        // Record failed login attempts too. Yes, always.
        $this->record($event->user, 'login_failed');
    }

    private function record($user, string $event): void
    {
        $createdAt = now();

        $payload = [
            'user_type' => $user ? get_class($user) : null,
            'user_id' => $user?->id,
            'event' => $event,
            'auditable_type' => $user ? get_class($user) : 'auth',
            'auditable_id' => $user?->id ?? 0,
            'old_values' => null,
            'new_values' => null,
            'ip_address' => $this->request->ip(),
            'user_agent' => $this->request->userAgent(),
            'url' => $this->request->fullUrl(),
            'hostname' => gethostname() ?: 'unknown',
            'created_at' => $createdAt,
        ];

        // Hash the same canonical fields, in the same order, that the
        // verification job recomputes. Do not filter out nulls: dropping
        // fields changes the payload and breaks later verification.
        $payload['checksum'] = hash('sha256', implode('|', [
            $payload['user_id'],
            $payload['user_type'],
            $payload['event'],
            $payload['auditable_type'],
            $payload['auditable_id'],
            json_encode($payload['old_values']),
            json_encode($payload['new_values']),
            $createdAt->toIso8601String(),
        ]));

        Audit::create($payload);
    }
}

In Laravel 11+, the default EventServiceProvider no longer ships with the application skeleton. Register the listeners explicitly with the Event facade in a service provider:

// app/Providers/AppServiceProvider.php
use Illuminate\Support\Facades\Event;
use Illuminate\Auth\Events\Login;
use Illuminate\Auth\Events\Logout;
use Illuminate\Auth\Events\Failed;
use App\Listeners\RecordAuthEvent;

public function boot(): void
{
    Event::listen(Login::class, [RecordAuthEvent::class, 'handleLogin']);
    Event::listen(Logout::class, [RecordAuthEvent::class, 'handleLogout']);
    Event::listen(Failed::class, [RecordAuthEvent::class, 'handleFailed']);
}

Alternatively, Laravel 11+ auto-discovers listeners in app/Listeners: any class there with a handle or __invoke method whose parameter is type-hinted with the event class is registered automatically, with no manual wiring. Discovery only looks at those method names, so a listener with custom methods like handleLogin still needs the explicit Event::listen registration shown above.

If you are still on Laravel 10 or below, use the classic EventServiceProvider::$listen array.
Failed login attempts are important. They tell you when someone is brute-forcing an account. Record them every time.
A few auth audit best practices:
Record the email or username that was attempted on failed logins, even when no user record matches. An attacker probing your system with admin@yourcompany.com is useful information. Store it in new_values as { "attempted_email": "admin@yourcompany.com" }.
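Laravel's Failed event exposes the submitted credentials, so the attempted identifier is available even when $event->user is null. A sketch of how the handleFailed listener could capture it; the third record() parameter is a hypothetical extension of the earlier listener:

```php
public function handleFailed(Failed $event): void
{
    // $event->credentials holds whatever the login form submitted
    $this->record($event->user, 'login_failed', [
        'attempted_email' => $event->credentials['email'] ?? null,
    ]);
}
```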
Record password reset requests and two-factor events as well. These are security-relevant actions that most systems ignore. Add listeners for \Illuminate\Auth\Events\PasswordReset and your 2FA events.
Do not store the session token itself in the audit record. Store the session ID only. The token is a credential. The ID is a reference. You want the reference for correlation, not the credential.
Snapshot vs Relational Data
This is where most audit trail implementations make a decision too quickly and regret it later. You need to understand the difference properly before you pick an approach.
The Relational Approach
The relational approach stores only the delta and a reference to the changed record. Your audit row says “user 4291 updated invoice 887, field status changed from pending to paid”. To see the full context of that invoice, you join back to the invoices table using auditable_id.
This is storage-efficient and works well while your data is intact. The problem surfaces the moment records start disappearing. And they always do. Soft deletes get hard deleted during cleanup. Clients get GDPR erasure requests. Old records get pruned after a retention window. The moment the source record is gone, your audit entry becomes an orphan. It tells you something happened but cannot tell you what the record looked like at the time.
An audit entry that says “user 4291 deleted invoice 887” means nothing when invoice 887 no longer exists and you cannot reconstruct what was in it.
The Snapshot Approach
The snapshot approach stores a full copy of the record in the snapshot column at the time each audit event is written. Every row is self-contained. You do not need to join anything. The audit record carries everything you need to understand what the data looked like at that exact moment.
This is what makes forensic investigation actually possible. When a client disputes a transaction from 14 months ago, you do not need the original record to still exist. You pull the audit rows for that entity and each one shows you exactly what the data looked like at every point in its history.
The tradeoff is storage. If your model has 30 columns and you write 1,000 updates per day, each snapshot adds meaningful size to your audit table. On a system with large models and high write volume, this adds up. Size your database accordingly and factor snapshots into your partitioning and retention plan.
Relational Reference Data: The Problem Nobody Talks About
There is a third problem that neither approach fully solves by default: related data.
Imagine you audit a payment record. The payment belongs to an order. The order belongs to a user. Your snapshot captures the payment row. But if the order is later deleted or the user account is purged, the snapshot still references those IDs without the data behind them.
The solution is to include denormalised context in your snapshot. When you capture a payment snapshot, include the relevant bits of the parent order and user inline:
{
  "id": 5012,
  "amount": 250.00,
  "status": "paid",
  "payment_method": "card",
  "_context": {
    "order_id": 887,
    "order_reference": "ORD-2026-0887",
    "user_id": 4291,
    "user_email": "user@example.com",
    "user_name": "Ahmad Hafizuddin"
  }
}

The _context block is not part of the live model. It is assembled at audit time and embedded into the snapshot. It persists even after the parent records are gone. This is the pattern that makes your audit trail genuinely useful for compliance and dispute resolution rather than just technically complete.
In Laravel, you can populate this context in the Audit::creating observer:
\OwenIt\Auditing\Models\Audit::creating(function ($audit) {
    if ($audit->auditable) {
        $model = $audit->auditable;
        $snapshot = $model->toArray();

        // Embed relational context before reference data disappears
        if (method_exists($model, 'getAuditContext')) {
            $snapshot['_context'] = $model->getAuditContext();
        }

        $audit->snapshot = $snapshot;
    }
});

Then define getAuditContext() on models that have meaningful relational context:
// In App\Models\Payment
public function getAuditContext(): array
{
    return [
        'order_id' => $this->order?->id,
        'order_reference' => $this->order?->reference,
        'user_id' => $this->order?->user?->id,
        'user_email' => $this->order?->user?->email,
        'user_name' => $this->order?->user?->name,
    ];
}

Not every model needs this. Apply it where the audit record would be meaningless without parent context.
The Recommendation: Do Both
Store the auditable_type and auditable_id columns for live data queries where the source record still exists. Store the snapshot column with embedded context for forensic and compliance purposes.
The two serve different consumers. Your admin UI uses the relational reference to link back to live records. Your compliance reports and dispute resolution tools use the snapshot to reconstruct history. They are not redundant. They are complementary.
The storage cost of snapshots is real but predictable. The cost of not having them shows up at the worst possible time: when a client escalates a dispute, when a regulator asks for a point-in-time reconstruction, or when someone deletes a record they should not have and you need to know exactly what was in it.
Verifying Checksum Integrity
Collecting checksums is useless if you never verify them. Run a scheduled command that checks a sample of recent audit records and flags any mismatch.
<?php

namespace App\Console\Commands;

use Illuminate\Console\Command;
use App\Models\Audit;

class VerifyAuditChecksums extends Command
{
    protected $signature = 'audit:verify-checksums {--hours=24}';

    public function handle(): void
    {
        $hours = (int) $this->option('hours');

        Audit::where('created_at', '>=', now()->subHours($hours))
            ->whereNotNull('checksum')
            ->chunkById(500, function ($audits) {
                foreach ($audits as $audit) {
                    // Rebuild the exact payload the creating hook hashed:
                    // same fields, same order, no filtering
                    $payload = implode('|', [
                        $audit->user_id,
                        $audit->user_type,
                        $audit->event,
                        $audit->auditable_type,
                        $audit->auditable_id,
                        json_encode($audit->old_values),
                        json_encode($audit->new_values),
                        $audit->created_at->toIso8601String(),
                    ]);

                    $expected = hash('sha256', $payload);

                    if ($expected !== $audit->checksum) {
                        $this->error("Checksum mismatch on audit ID {$audit->id}");
                        // Send alert: Slack, email, PagerDuty, whatever you use
                    }
                }
            });

        $this->info('Checksum verification complete.');
    }
}

Schedule this daily at minimum. For regulated systems, run it hourly.
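In Laravel 11+, scheduling lives in routes/console.php; a sketch of wiring this command up:

```php
// routes/console.php
use Illuminate\Support\Facades\Schedule;

Schedule::command('audit:verify-checksums')->daily();

// For regulated systems, narrow the window and run hourly instead:
// Schedule::command('audit:verify-checksums --hours=2')->hourly();
```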
A few checksum best practices:
Use hash_hmac with a secret key instead of plain hash for stronger tamper protection. A plain SHA-256 hash can be recomputed by anyone who knows the field values. An HMAC hash requires the secret key to reproduce. Store the key in your environment file, not in the database.
$audit->checksum = hash_hmac('sha256', $payload, config('audit.checksum_secret'));

Log checksum mismatches to a separate alert channel immediately. Do not just write them to the Laravel log file where they might go unnoticed. Send them to Slack, email or your monitoring system. A checksum mismatch on an audit record is a serious event.
Never attempt to “fix” a mismatched checksum by recomputing and updating the audit row. If a mismatch is detected, treat the record as potentially compromised and escalate. The entire point of the checksum is that the row should not change after it is written.
Fields You Should Never Store in Plain Text
Some fields must be excluded or masked before they hit the audit table.
Never store in plain text:
- Passwords and hashes
- API keys and tokens
- OTP codes and two-factor secrets
- Credit card numbers (even partial)
- Bank account numbers
Use $auditExclude on the model level to block fields entirely. Use a custom Transformation class if you need to store a masked version like **** **** **** 4242.
Enforce this at the model level. Do not rely on developers remembering to exclude sensitive fields by hand every time they add a new model.
A few best practices for sensitive field handling:
Create a base AuditableModel class that extends Model, includes the Auditable trait and sets a default $auditExclude list covering the fields every model in your system should never log. All your auditable models extend this base class instead of Model directly. Sensitive field exclusions become opt-out instead of opt-in.
abstract class AuditableModel extends Model implements Auditable
{
    use \OwenIt\Auditing\Auditable;

    protected $auditExclude = [
        'password',
        'remember_token',
        'two_factor_secret',
        'two_factor_recovery_codes',
        'api_token',
    ];
}

For fields you want to audit but need to mask, use a Transformation. Do not store 4111111111111111. Store **** **** **** 1111. The Transformation runs before the value hits the database so the original never touches the audit table.
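In owen-it/laravel-auditing, masking is configured through attribute modifiers. A sketch, assuming a card_number attribute; check the redactor classes and their exact masking behaviour against your package version:

```php
use Illuminate\Database\Eloquent\Model;
use OwenIt\Auditing\Contracts\Auditable;
use OwenIt\Auditing\Redactors\LeftRedactor;

class Payment extends Model implements Auditable
{
    use \OwenIt\Auditing\Auditable;

    // Redact the value before it reaches old_values / new_values
    protected $attributeModifiers = [
        'card_number' => LeftRedactor::class,
    ];
}
```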
Review your $auditExclude list every time a new sensitive field is added to any model. Make this part of your code review checklist.
Separate the Audit Database for Critical Systems
If your system is critical and handles high traffic, the audit table should not live in the same database as your application data.
Reasons:
Write contention. Every insert, update and delete in your application generates at least one audit write. On a high-traffic system this puts serious write pressure on your main database. Separating the audit database removes this competition.
Access control. Your main application database needs read-write access for normal operations. Your audit database should be append-only for the application. No UPDATE. No DELETE. Only INSERT and SELECT. A database user that can only insert cannot cover their tracks.
Compliance isolation. In some regulated industries, audit logs must be tamper-evident and stored separately from operational data. Having a dedicated audit database makes this requirement straightforward to satisfy.
In Laravel, configure a second database connection in config/database.php:
'audit' => [
    'driver' => 'mysql',
    'host' => env('AUDIT_DB_HOST', '127.0.0.1'),
    'database' => env('AUDIT_DB_DATABASE', 'audit_db'),
    'username' => env('AUDIT_DB_USERNAME', 'audit_writer'),
    'password' => env('AUDIT_DB_PASSWORD', ''),
    // ...
],

Then set this connection in your Audit model:
class Audit extends \OwenIt\Auditing\Models\Audit
{
    protected $connection = 'audit';
}

The audit_writer database user should have INSERT and SELECT privileges only. Not UPDATE. Not DELETE. If your application code tries to delete an audit record, it should fail at the database level, not just at the application level.
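At the MySQL level, that restriction looks something like this (the user name, host pattern and database name are illustrative):

```sql
-- Append-only application user for the audit database
CREATE USER 'audit_writer'@'10.0.0.%' IDENTIFIED BY 'use-a-generated-secret';
GRANT INSERT, SELECT ON audit_db.* TO 'audit_writer'@'10.0.0.%';
-- Deliberately no UPDATE, DELETE, DROP or ALTER
```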
A few best practices for the separate audit database:
Use a dedicated read replica for audit queries in your admin UI. Audit history queries can be expensive, particularly when filtering across large date ranges or joining context data. Running them against your primary audit database adds unnecessary load. A replica keeps reads off the write path.
Enable binary logging on the audit database and ship logs off-server. Binary logs give you a secondary recovery path if the audit database itself is compromised or corrupted. Ship them to S3, Backblaze or any offsite storage your budget supports.
If you cannot afford a fully separate database server yet, at minimum use a separate database on the same server and a separate database user with restricted privileges. It is not as strong as physical separation but it raises the bar for accidental or malicious data manipulation significantly.
Access Control
Audit data contains sensitive information. Define a clear read policy and enforce it at the query layer.
A typical breakdown:
- Support staff can see events on records they are authorised to view but cannot see sensitive field values
- Operations managers can see full deltas for records within their scope
- System administrators can see everything
- API clients cannot access audit records at all
Do not implement this only in the frontend. A policy that hides columns in the UI but still returns them in an API response is not a policy. Use query scopes and policies to filter what each role can actually retrieve from the database.
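A sketch of what that enforcement could look like as a Laravel policy; the role values and the scope check are hypothetical and should map to your own authorisation model:

```php
// App\Policies\AuditPolicy
class AuditPolicy
{
    public function view(User $user, Audit $audit): bool
    {
        return match ($user->role) {
            'admin' => true,
            // inScopeOf() is a hypothetical helper for the manager's remit
            'ops_manager' => $user->inScopeOf($audit),
            'support' => ! str_contains($audit->tags ?? '', 'sensitive'),
            default => false,
        };
    }
}
```

Pair the policy with a query scope that applies the same rules to list endpoints, so the restriction lives in the data layer rather than the UI.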
A few access control best practices:
Log access to the audit trail itself. If someone reads the audit history of a sensitive record, that read should itself be recorded. Otherwise a bad actor can quietly review what the audit trail contains before deciding what to cover up. An admin reading your audit table is an auditable event.
Set up alerts for bulk reads. A single user querying thousands of audit records in a short window is unusual behaviour. It could be a legitimate export or it could be someone collecting intelligence before acting. Know when it happens.
Never expose raw auditable_type values that contain internal class names to end users. A customer does not need to see App\Models\PaymentTransaction in their audit history. Map these to human-readable labels at the presentation layer.
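A minimal presentation-layer mapping, with illustrative labels:

```php
// Map internal class names to human-readable labels before rendering
function auditableLabel(string $type): string
{
    return match ($type) {
        'App\Models\PaymentTransaction' => 'Payment',
        'App\Models\Invoice' => 'Invoice',
        'App\Models\User' => 'Account',
        default => 'Record',
    };
}
```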
The Payoff
When you build this from day one you get things that are genuinely difficult to add later.
You can answer “who deleted this record” in one query. You can show a user exactly what changed on their account and when. You can prove to a compliance auditor that a specific action happened at a specific time on a specific server. You can verify that no one has tampered with your audit records since they were written. You can reconstruct the state of any record at any point in history even if that record has since been deleted.
These are not nice-to-haves. In any system handling money, contracts, healthcare or government data, these are requirements. And in any system where something eventually goes wrong (which is every system), they are the difference between a ten-minute investigation and a two-day one.
Build it on day one. The cost is low. The alternative is painful.