Caching strategies for Rails 5 applications
One of the tremendous benefits of building with a high-level framework like Ruby on Rails is that you are afforded both mental space and an abundance of tools to optimize your application with a thoughtful caching strategy. Caching can be done at several levels in the stack and I wanted to provide an overview of the most common caching strategies for Rails applications and the tradeoffs inherent in each.
The Application Cache Store
Most Rails caching faculties write to and read from a cache store configured at the application level. You can see what cache store you are using interactively by reading the value of Rails.application.config.cache_store. For production environments you typically want a cache store that exposes its interface on the network, versus a file-based or in-process memory store, so that cached content can be shared among multiple application processes or servers. Both Redis and Memcached are popular choices. Subjectively, I am seeing Redis increasingly favored because it now offers comparable or better performance and more configuration options (such as a straightforward client authentication scheme).
Rails.cache
Rails offers a basic interface to the cache store via the Rails.cache object. You can write to and read from the cache, intuitively, via write/read methods:
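For example (a minimal sketch; the key name and value here are arbitrary):

```ruby
# Write an arbitrary value into the configured cache store,
# optionally with a time-to-live.
Rails.cache.write("quote_of_the_day", "Simplicity matters", expires_in: 12.hours)

# Read it back; read returns nil on a cache miss.
Rails.cache.read("quote_of_the_day")

# fetch combines the two: it returns the cached value if present,
# otherwise runs the block, caches its result, and returns it.
Rails.cache.fetch("quote_of_the_day", expires_in: 12.hours) do
  "Simplicity matters"
end
```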
Note that in Rails 5 caching is disabled by default in development and Rails.application.config.cache_store will be set to the no-op :null_store unless you first run rails dev:cache (which will drop a file called “caching-dev.txt” into your tmp directory to indicate to the framework that caching should be enabled).
Writing to and reading from the Rails.cache object directly via the write/read methods is often less a fit for performance optimization so much as a strategy for temporarily storing data that you may want to live across several requests or invocations of a job. I have, for instance, seen the Rails.cache write/read interface used effectively for storing transient airfare and hotel rates where this data must be split up among several pages for the user but where requests to the upstream API are slow, costly and return hundreds of records.
If you are using the cache store in this way, definitely make sure that you can still safely flush the application cache store at will without breaking application behavior. If not, you may want to interface directly with the underlying data store (such as via the redis gem) and scope your Rails.application.config.cache_store down with a namespace so that you can safely flush it (via Rails.cache.clear) without also flushing out this data.
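One way to sketch that namespacing, assuming Redis and the :redis_cache_store adapter that ships with Rails 5.2+ (the namespace name here is arbitrary):

```ruby
# config/environments/production.rb
# Scoping the store under a namespace means Rails.cache.clear
# deletes only the keys in that namespace, leaving other data
# held in the same Redis instance untouched.
config.cache_store = :redis_cache_store,
                     { url: ENV["REDIS_URL"], namespace: "app-cache" }
```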
Fragment Caching
Fragment Caching is Rails’ faculty for storing fully rendered snippets of HTML in the application cache store. Fragment Caching, in my experience, is one of the most common performance “quick wins” you’ll be able to achieve in a Rails codebase. Introducing Fragment Caching to a page that does not have it can easily bring a load time of 500ms down to below 50ms.
There are two basic elements to Rails’ Fragment Caching implementation: ActiveRecord::Base#cache_key and the cache view helper.
ActiveRecord::Base#cache_key composes a unique string from an ActiveRecord object’s class and its updated_at timestamp. The insight here is that a record’s updated_at timestamp is often a great proxy for when some piece of your UI should also be updated. With Fragment Caching we can effectively “freeze” a bit of dynamic content into the cache and only ever pay the price of re-rendering it when the record or records that it represents change.
For instance, we might fragment-cache a block of content displaying a user’s profile info:
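A sketch of such a block (the name and bio attributes are hypothetical):

```erb
<%# app/views/users/show.html.erb %>
<% cache @user do %>
  <div class="profile">
    <h2><%= @user.name %></h2>
    <p><%= @user.bio %></p>
  </div>
<% end %>
```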
In this example, the HTML within the cache block will be rendered out once, stored in the cache store, and then fetched from the cache store on every subsequent render. The snippet will automatically be re-rendered and re-cached any time the user is updated, since the cache helper by default will key the fragment on the cache_key of whatever record or records it is passed.
This is a nice savings, but where Fragment Caching really shines is in situations involving relational data. Suppose, for instance, that on a user’s profile page we want to display a series of posts that the user has created. Consider the following domain model:
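A minimal sketch of such a pairing:

```ruby
class User < ApplicationRecord
  has_many :posts
end

class Post < ApplicationRecord
  # touch: true bumps the parent User's updated_at timestamp
  # whenever a Post is created, updated or destroyed.
  belongs_to :user, touch: true
end
```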
The touch: true option instructs the framework to touch the parent record (bump its updated_at timestamp) when the child is modified. The significance of this with respect to Fragment Caching is that the User record’s cache_key can now reliably be used as a top-level cache key for the whole collection of Posts that it owns. The user profile page might then look as follows:
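One possible sketch of that template (the surrounding markup is illustrative):

```erb
<%# app/views/users/show.html.erb %>
<% cache @user do %>
  <h1><%= @user.name %></h1>
  <%= render @user.posts %>
<% end %>
```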
With a _post partial that also caches its own content:
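A sketch of the partial (title and body are hypothetical attributes):

```erb
<%# app/views/posts/_post.html.erb %>
<% cache post do %>
  <article>
    <h2><%= post.title %></h2>
    <p><%= post.body %></p>
  </article>
<% end %>
```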
On initial render, each of the rendered _post partials will be written to a fragment cache, in addition to the whole containing users/show template. On subsequent renders, the whole consolidated block can be read from a single cache hit (most likely with only one sub-millisecond database query to pull the User record by its ID). If either a post or the user is updated, the users/show template will be re-rendered, but everything except the updated Post (if any) can still be read from the cache rather than rendered.
Behind the scenes, the cache view helper also composes an MD5 hash of the template that it is used in (and any referencing templates) into the cache key, so that any template updates that you subsequently deploy will cause cache misses and re-renders in the appropriate places. You can find details about this here, but for practical purposes it means that most of the time you can trust any updates that you make to your templates to bust the appropriate caches once deployed.
Action Caching
Action Caching has actually been formally removed from the Rails framework, but in my opinion it deserves a mention here because it’s still very much in use in the wild and may still be a fit for your application. Action Caching now lives in the extracted actionpack-action_caching gem, which you must install if you want to use the feature.
Action Caching is similar to Fragment Caching, except that it only caches at the level of entire rendered controller actions. A cache hit, in the context of an Action Cache, will result in no view rendering occurring at all. In practice this usually means a marginal performance increase over Fragment Caching, but it introduces some complexity: you must ensure either that the whole page content is fully cacheable in a singular form for all users (which means being mindful about things like csrf-param and being sure you have a strategy to bust the Action Cache when you deploy any template or asset updates), or that the appropriate identifying information for the user you are caching a document for is factored into the cache key (via the :cache_path option to the caches_action macro).
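A sketch of what that might look like (the controller and the proc that builds the cache path are hypothetical; the gem spells the option cache_path:):

```ruby
class PostsController < ApplicationController
  # Requires the actionpack-action_caching gem. The cache_path proc
  # folds identifying details (here, just the requested post id)
  # into the cache key so each document is cached separately.
  caches_action :show,
                expires_in: 1.hour,
                cache_path: proc { |c| "posts/#{c.params[:id]}" }
end
```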
Action Caching has a close cousin in Page Caching (now in the actionpack-page_caching gem), which also functions at the level of entire documents but actually writes rendered documents to your web server’s “public” directory rather than the cache store, where these documents are subsequently served by your web server itself, skipping the Rails application stack entirely. A critical practical consideration here is that you will lose the ability to run any before_actions, authentication logic or the like.
ETags and Browser Caching
ETags are a part of the HTTP spec that allows browsers to make conditional GET requests when accessing a document more than once from the same host, pulling down the full document body only if the document has actually changed. If both host and client are configured to support conditional GET, the host will send an ETag header down with the document: a short identifying string for the version of the document it is sending. On subsequent requests to the same URI, the client will send an If-None-Match request header set to the value of the ETag; if the document hasn’t changed on the server, the server will send a 304 Not Modified response down with an empty body and the client will pull the content it was previously sent out of its own cache to display to the user.
Rails supports ETags and conditional GET via the stale? and fresh_when controller methods. For example, to render a users#show action with support for conditional GET, we might write:
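A sketch of such an action:

```ruby
class UsersController < ApplicationController
  def show
    @user = User.find(params[:id])
    # fresh_when derives the ETag from the record's cache_key;
    # if the client's If-None-Match header matches, Rails halts
    # and responds with 304 Not Modified instead of rendering.
    fresh_when @user
  end
end
```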
An important consideration when implementing ETags and conditional GET in Rails is that by default ETag caches will not be busted when you deploy template changes (unless you update the stock RAILS_CACHE_ID on deploy as well, but this also often means unnecessarily clearing out your fragment caches). You should look to a library like Nathan Kontny’s bust_rails_etags, which overrides the default Rails ETag methods to also take into account an ETAG_VERSION_ID environment variable that you can set in a way that suits your deployment scheme.
Reverse Proxy Caching
For cases where you might be considering Action Caching or Page Caching, I’d encourage you to also take a close look at reverse proxy caching. It has a similar profile with respect to the tradeoffs you’ll need to make in the content you want to cache (i.e. being mindful of user-specific data and the CSRF meta tags), but with potentially much better real-world performance (if you do it through a full-blown CDN with edge servers around the world) and other incidental benefits as well, such as being able to proxy your assets through the same host/CDN you are reverse-proxy-caching site content through, thereby removing one more disparate piece from your infrastructure.
At a high-level, reverse proxy caching places an HTTP-speaking intermediary between your application server (“origin server”, in the vernacular of reverse proxy caching and CDNs) and the browser seeking to access your content. In the old days this HTTP-speaking intermediary was likely something like Varnish or Squid Cache running atop a server co-located in the very same data center as your application servers. Today it is more often a globally-distributed hybrid reverse proxy/CDN service like Fastly or Cloudflare.
In both cases the application-level implementation is nearly identical. Commonly, with a globally distributed proxy like Fastly, you will set a long-lived TTL on content that you want to stick around in, and be served up directly by, edge servers in the network. Rails has a simple faculty for this in the expires_in method, which I’ll demonstrate by writing a general-purpose before_action suitable for content that you would like reverse-proxy-cached.
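A sketch of such a before_action (the controller and action names are hypothetical):

```ruby
class PagesController < ApplicationController
  before_action :set_proxy_cache_headers, only: %i[home pricing]

  private

  # expires_in sets a Cache-Control: max-age=3600, public header,
  # which permits the reverse proxy (and the browser) to cache and
  # serve this response for up to an hour without revisiting origin.
  def set_proxy_cache_headers
    expires_in 1.hour, public: true
  end
end
```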
The final ingredient to a reverse proxy caching strategy is purging. Once you have deployed a new version of your application, you will likely want to purge all of the content from your caching proxies as immediately as possible. In the case of Varnish, Fastly and Cloudflare, this can be done via a simple API call. You will likely want to make it a step of your deployment process to do an automated purge of your caching proxies as soon as your new code is live.
This is a good point to note why a traditional CDN like Amazon’s CloudFront, while perfectly suitable for caching fingerprinted assets, is actually a very poor choice for caching application content. Purging on CloudFront is eventual: it may, and often does, take as long as several minutes to purge content across the network. Varnish, Fastly and Cloudflare all offer practically instantaneous purging of cached content via simple programmatic interfaces (Fastly is in fact built atop a heavily-modified version of Varnish).
What Should I Do?
There is no universally optimal caching strategy for Rails applications. Every strategy has tradeoffs and what is optimal for your own application is likely a mix of several.
If 20-50 millisecond server times are acceptable to you and network latency isn’t a significant concern, you may wish to go no further than simply leveraging Fragment Caching (plus perhaps a traditional CDN for static assets). If you can afford it and don’t mind the slight increase in complexity, I’ve found that a combination of Fragment Caching across the whole application and selective reverse proxy caching through a service like Fastly, for pages like your homepage, landing pages and marketing pages that new users are likely to hit first, can be a sweet spot: maximizing user experience while keeping complexity relatively minimal.