Engineering

GORM Plugins: Why We Love Them

We built two GORM plugins to solve real problems in production


Rob Bruce

9 min read

As of this writing, we manage 102 database tables and 461 endpoints. At that scale, anything you have to do manually in every handler or repository method becomes a liability. Miss one spot and you've got inconsistent behavior. Decide to change something and you're grepping through hundreds of files.

GORM handles the basics well. Define your models, run your queries, call it a day. But the moment you need behavior that cuts across every database operation (audit trails, metrics, query logging, soft deletes with custom logic) you're left bolting things on at the repository layer and hoping you don't miss a spot.

Most ORMs don't give you much to work with here. You get the API they expose, and if you need something outside that box (custom metrics, request-scoped behavior, query interception) you're wrapping every call site or forking the library. GORM is different. Its plugin and callback system lets you hook into the query lifecycle at precisely the right moments, and once you understand how it works, you can solve these cross-cutting concerns cleanly and consistently. It's one of the reasons GORM stands out in the Go ecosystem: the core is minimal, but the extension points are genuinely powerful.

In this post, we’ll walk through GORM's extension points and share two plugins we built to solve real problems in production.

How GORM's Plugin System Works

GORM's extensibility comes down to two mechanisms: plugins and callbacks.

The Plugin Interface

A plugin is anything that implements a two-method interface:

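In gorm.io/gorm, that interface is gorm.Plugin:

```go
// The gorm.Plugin interface from gorm.io/gorm.
type Plugin interface {
	Name() string
	Initialize(*gorm.DB) error
}
```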

Name() returns a unique identifier (which GORM uses to prevent duplicate registration), and Initialize() is where you set up your plugin, typically by registering callbacks. You register a plugin with db.Use():

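A minimal registration sketch, assuming a Postgres driver; the plugin type here is a placeholder:

```go
import (
	"gorm.io/driver/postgres"
	"gorm.io/gorm"
)

func openDB(dsn string) (*gorm.DB, error) {
	db, err := gorm.Open(postgres.Open(dsn), &gorm.Config{})
	if err != nil {
		return nil, err
	}

	// Use calls the plugin's Initialize and fails if a plugin with the same
	// Name is already registered.
	if err := db.Use(&MyPlugin{}); err != nil {
		return nil, err
	}
	return db, nil
}
```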

Callbacks: The Real Power

Callbacks are functions that run at specific points in GORM's query lifecycle. There are six processors you can hook into:

  • Create for insert operations
  • Query for select operations
  • Update for update operations
  • Delete for delete operations
  • Row for raw row queries
  • Raw for raw SQL execution

Within each processor, you can register callbacks to run Before or After the main operation, or Replace it entirely. Here's the basic pattern:

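A minimal sketch; the plugin type and callback names are illustrative:

```go
func (p *MyPlugin) Initialize(db *gorm.DB) error {
	// Runs before GORM's built-in create callback executes the INSERT.
	if err := db.Callback().Create().Before("gorm:create").Register("my_plugin:before_create", p.beforeCreate); err != nil {
		return err
	}
	// Runs after GORM's built-in query callback has executed the SELECT.
	if err := db.Callback().Query().After("gorm:query").Register("my_plugin:after_query", p.afterQuery); err != nil {
		return err
	}
	return nil
}

func (p *MyPlugin) beforeCreate(db *gorm.DB) {
	// db.Statement describes the operation that's about to run.
}

func (p *MyPlugin) afterQuery(db *gorm.DB) {
	// db.Statement (and db.RowsAffected) now reflect the executed query.
}
```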

The string arguments to Before() and After() reference GORM's internal callback names, which control ordering. The most common ones you'll position around are gorm:create, gorm:query, gorm:update, and gorm:delete.

What You Get in a Callback

Inside a callback, db.Statement is your window into the current operation:

  • Statement.Context is the context passed to the query (critical for request-scoped data)
  • Statement.Model is the model type
  • Statement.Dest is the destination for results
  • Statement.Clauses contains the query clauses being built
  • Statement.Error can be set to abort the operation
  • Statement.RowsAffected is available in after-callbacks

You can also use db.Set() and db.Get() (or db.InstanceSet() / db.InstanceGet() for operation-scoped values) to pass data between callbacks.

With the fundamentals covered, let's look at two plugins that put this to work.

Plugin 1: Database Metrics and Slow Query Detection

Once you're running GORM in production, you need visibility. Which tables are getting hammered? Which queries are slow? Are preloads getting out of hand? We built a metrics plugin that instruments every database operation and ships the data to Datadog.

Here's what we wanted to capture:

  • Latency for every operation, tagged by table and operation type
  • Preload counts to spot N+1-adjacent problems
  • Slow query alerts with full SQL for debugging

The Implementation

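Here's a sketch of the plugin's skeleton. The metrics client sits behind a small interface (in our case it's backed by a Datadog statsd client), and the type, metric, and callback names are illustrative:

```go
package dbmetrics

import (
	"time"

	"gorm.io/gorm"
)

// StatsClient is the minimal metrics surface the plugin needs; a Datadog
// statsd client satisfies this shape.
type StatsClient interface {
	Timing(name string, value time.Duration, tags []string, rate float64) error
	Count(name string, value int64, tags []string, rate float64) error
}

// MetricsPlugin instruments every GORM operation with latency and preload
// metrics and logs slow queries with their full SQL.
type MetricsPlugin struct {
	Stats         StatsClient
	SlowThreshold time.Duration
}

func (p *MetricsPlugin) Name() string { return "metrics" }

func (p *MetricsPlugin) Initialize(db *gorm.DB) error {
	cb := db.Callback()

	// Register a before/after pair on each processor. Create, Query, Update,
	// and Delete are shown here; Row and Raw follow the same pattern.
	for _, r := range []struct {
		op     string
		before func(string, func(*gorm.DB)) error
		after  func(string, func(*gorm.DB)) error
	}{
		{"create", cb.Create().Before("gorm:create").Register, cb.Create().After("gorm:create").Register},
		{"query", cb.Query().Before("gorm:query").Register, cb.Query().After("gorm:query").Register},
		{"update", cb.Update().Before("gorm:update").Register, cb.Update().After("gorm:update").Register},
		{"delete", cb.Delete().Before("gorm:delete").Register, cb.Delete().After("gorm:delete").Register},
	} {
		op := r.op
		if err := r.before("metrics:before_"+op, p.before); err != nil {
			return err
		}
		if err := r.after("metrics:after_"+op, func(tx *gorm.DB) { p.after(tx, op) }); err != nil {
			return err
		}
	}
	return nil
}
```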

Passing Data Between Callbacks

The before callback's main job is capturing the start time. Since GORM creates a new *gorm.DB instance for each operation, we stash the timestamp in the context:

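A sketch, continuing the same package, with a private context key:

```go
// startTimeKey is a private context key for the operation's start time.
type startTimeKey struct{}

// before stamps the start time onto the statement's context. The same
// statement (and context) is handed to the after callback, so the value
// survives even though each operation gets its own *gorm.DB.
func (p *MetricsPlugin) before(tx *gorm.DB) {
	tx.Statement.Context = context.WithValue(tx.Statement.Context, startTimeKey{}, time.Now())
}
```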

This pattern, using context to ferry data from before to after callbacks, is the cleanest way to handle operation-scoped state.

Collecting Metrics in the After Callback

The after callback is where the real work happens:

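Roughly, with illustrative metric names:

```go
func (p *MetricsPlugin) after(tx *gorm.DB, operation string) {
	start, ok := tx.Statement.Context.Value(startTimeKey{}).(time.Time)
	if !ok {
		return // the before callback didn't run for this operation
	}
	elapsed := time.Since(start)

	// Schema is populated when GORM parses the model; fall back to the raw
	// table name for operations that don't have one.
	table := tx.Statement.Table
	if tx.Statement.Schema != nil {
		table = tx.Statement.Schema.Table
	}
	tags := []string{"table:" + table, "operation:" + operation}

	// Latency for every operation, tagged by table and operation type.
	_ = p.Stats.Timing("db.query.duration", elapsed, tags, 1)

	// Preload counts make expensive eager-loading chains visible.
	if n := len(tx.Statement.Preloads); n > 0 {
		_ = p.Stats.Count("db.query.preloads", int64(n), tags, 1)
	}

	// Slow queries get logged with the fully interpolated SQL.
	if elapsed > p.SlowThreshold {
		sql := tx.Dialector.Explain(tx.Statement.SQL.String(), tx.Statement.Vars...)
		log.Printf("slow query [%s %s] took %s (rows=%d): %s",
			operation, table, elapsed, tx.RowsAffected, sql)
	}
}
```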

What This Gets You

A few things worth highlighting:

  • tx.Statement.Schema gives you the table name and model metadata. This is populated by GORM when it parses your model, so it's available in callbacks without any extra work.
  • tx.Statement.Preloads is a map of eager-loaded associations. Tracking this helps identify queries where preload chains are getting expensive.
  • tx.Dialector.Explain() reconstructs the final SQL with interpolated values. This is invaluable for debugging slow queries since tx.Statement.SQL.String() only gives you the prepared statement with placeholders.

Usage

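Wiring it up looks something like this, assuming the datadog-go statsd client (which satisfies the StatsClient interface above):

```go
// github.com/DataDog/datadog-go/v5/statsd
statsClient, err := statsd.New("127.0.0.1:8125") // Datadog agent address
if err != nil {
	return err
}

if err := db.Use(&dbmetrics.MetricsPlugin{
	Stats:         statsClient,
	SlowThreshold: 50 * time.Millisecond,
}); err != nil {
	return err
}
```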

Every query now emits latency metrics, and anything over 50 milliseconds gets logged with full context. You can build Datadog dashboards to track P95 latency by table, alert on slow query spikes, and catch runaway result sets before they cause problems.

Plugin 2: Automatic Pagination from Request Context

Pagination is tedious. Every list endpoint needs to parse limit, offset, and page query params, validate them, apply them to the query, and figure out if there are more results. Multiply that across dozens of endpoints and you've got a lot of repetitive code.

We built a plugin that handles this automatically: a Gin middleware parses pagination parameters and stashes them in the request context, then a GORM plugin applies them to any query that opts in. The handler just calls .Scopes(WithOffsetPagination) and gets paginated results.

The Moving Parts

This solution has three components:

  • Gin middleware parses query params, stores pagination config in context
  • GORM plugin reads pagination from context, applies limit and offset
  • Scope function is the opt-in flag that tells the plugin to activate

The Gin Middleware

The middleware runs on every GET request and extracts pagination parameters:

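A sketch, assuming a default page size of 25 and a cap of 100 (the real values are configuration details):

```go
package pagination

import (
	"context"
	"net/http"
	"strconv"

	"github.com/gin-gonic/gin"
)

const (
	defaultLimit = 25
	maxLimit     = 100
)

// paramsKey is a private context key for the parsed pagination params.
type paramsKey struct{}

// Params is the pagination config parsed from the request.
type Params struct {
	Limit  int
	Offset int
	Page   int
}

// Middleware parses limit, offset, and page query params on GET requests and
// stores them in the request context for the GORM plugin to pick up.
func Middleware() gin.HandlerFunc {
	return func(c *gin.Context) {
		if c.Request.Method != http.MethodGet {
			c.Next()
			return
		}

		p := Params{Limit: defaultLimit, Page: 1}
		if v, err := strconv.Atoi(c.Query("limit")); err == nil && v > 0 {
			p.Limit = v
		}
		if p.Limit > maxLimit {
			p.Limit = maxLimit
		}
		if v, err := strconv.Atoi(c.Query("offset")); err == nil && v >= 0 {
			p.Offset = v
		}
		if v, err := strconv.Atoi(c.Query("page")); err == nil && v > 1 {
			p.Page = v
			p.Offset = (v - 1) * p.Limit
		}

		ctx := context.WithValue(c.Request.Context(), paramsKey{}, p)
		c.Request = c.Request.WithContext(ctx)
		c.Next()
	}
}

// FromContext returns the pagination params, if the middleware set them.
func FromContext(ctx context.Context) (Params, bool) {
	p, ok := ctx.Value(paramsKey{}).(Params)
	return p, ok
}
```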

Nothing fancy here; we’re just centralizing the param parsing and validation that would otherwise be duplicated in every handler.

The GORM Plugin

The plugin hooks into Query callbacks and applies pagination when a scope has flagged the query:

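A sketch along those lines, continuing the same package:

```go
// paginateFlag is the instance-scoped key set by the scope function below.
const paginateFlag = "pagination:enabled"

// Plugin applies limit/offset from the request context to queries that have
// opted in via the WithOffsetPagination scope.
type Plugin struct{}

func (Plugin) Name() string { return "pagination" }

func (Plugin) Initialize(db *gorm.DB) error {
	// Run before gorm:query so the LIMIT/OFFSET clauses make it into the
	// generated SQL.
	return db.Callback().Query().Before("gorm:query").Register("pagination:apply", apply)
}

func apply(tx *gorm.DB) {
	// Only queries that explicitly opted in get paginated.
	if _, ok := tx.InstanceGet(paginateFlag); !ok {
		return
	}

	params, ok := FromContext(tx.Statement.Context)
	if !ok {
		return
	}

	// Fetch one extra row: if it comes back, there's another page. The
	// response handler trims it off before writing the body.
	tx.Limit(params.Limit + 1).Offset(params.Offset)
}
```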

A few things to note:

tx.InstanceGet() checks for an operation-scoped flag. This is different from tx.Get(), which uses session scope. Instance values only live for the current operation, which is what we want. Pagination should be explicitly requested per query, not applied globally.

Limit + 1 is the trick for detecting whether more results exist. We fetch one extra row, then trim it off in the response handler. If we get the extra row, there's another page.

The Scope Function

The scope is just a one-liner that sets the instance flag:

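In the same package as the plugin:

```go
// WithOffsetPagination opts the current query into pagination. Use it with
// db.Scopes(pagination.WithOffsetPagination).
func WithOffsetPagination(tx *gorm.DB) *gorm.DB {
	return tx.InstanceSet(paginateFlag, true)
}
```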

This is the opt-in mechanism. Queries that don't call this scope won't be paginated, even if pagination params are in the request context.

Handling the Response

After the query runs, we need to trim the extra row and signal pagination state to the client. We do this via response headers:

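One way to write that helper; the header names here are illustrative:

```go
// WriteListResponse trims the sentinel row fetched by the plugin and reports
// pagination state via response headers, keeping the body a plain list.
func WriteListResponse[T any](c *gin.Context, results []T) {
	params, ok := FromContext(c.Request.Context())
	if !ok {
		c.JSON(http.StatusOK, results)
		return
	}

	hasMore := len(results) > params.Limit
	if hasMore {
		// Drop the extra row that was only fetched to detect the next page.
		results = results[:params.Limit]
	}

	c.Header("X-Pagination-Limit", strconv.Itoa(params.Limit))
	c.Header("X-Pagination-Offset", strconv.Itoa(params.Offset))
	c.Header("X-Pagination-Has-More", strconv.FormatBool(hasMore))
	c.JSON(http.StatusOK, results)
}
```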

The function trims the extra row if present, then sets headers so clients know if there's a next page and what the current pagination state is. This keeps the response body clean while still giving clients everything they need.

Putting It Together

Here's what a handler looks like with this system in place:

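Assuming the middleware is registered with router.Use(pagination.Middleware()) and the plugin with db.Use(pagination.Plugin{}), a list handler shrinks to something like this (handler and model names are illustrative):

```go
func (h *Handler) ListApps(c *gin.Context) {
	var apps []App

	// WithContext is what lets the plugin see the pagination params the Gin
	// middleware stored on the request context.
	err := h.db.WithContext(c.Request.Context()).
		Scopes(pagination.WithOffsetPagination).
		Find(&apps).Error
	if err != nil {
		c.JSON(http.StatusInternalServerError, gin.H{"error": "failed to list apps"})
		return
	}

	pagination.WriteListResponse(c, apps)
}
```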

The handler doesn't parse query params, manually apply limits, or figure out if there's a next page. It just calls the scope and gets paginated results. Add a new list endpoint? Same two lines.

Why This Works Well

This pattern plays to GORM's strengths:

Scopes as opt-in flags. The plugin only activates when explicitly requested. You're not fighting global behavior or worrying about pagination leaking into aggregate queries.

Context as the data bridge. The Gin middleware and GORM plugin don't know about each other directly. They communicate through context, which keeps them decoupled and testable.

Callbacks for cross-cutting concerns. Pagination logic runs in one place, not scattered across handlers. Change the max limit? One line. Add cursor-based pagination? Swap the plugin.

Write Your Own

These two plugins solve different problems in different ways. One instruments everything passively, the other modifies queries on demand. But they share the same foundation: a simple interface, a handful of lifecycle hooks, and access to everything GORM knows about the current operation.

That's what makes GORM's plugin system worth learning. It's not just about the plugins you can download; it's about the ones you can write. The next time you find yourself copying the same five lines into every repository method, or wishing GORM would just do the thing automatically, you probably have a plugin waiting to be built.

Hopefully this gives you a starting point. The callback system is more capable than it looks, and once you've written one plugin, you'll start seeing opportunities everywhere.

