How many Heroku dynos do you need, and which size?

I’m emailing you because you signed up to receive Heroku tips from me. If you’re no longer interested, hit the unsubscribe link below. No hard feelings. :-)

Hi there! I just published my first blog post in over a year, and I’m super excited to share it with you.

I’ve been working on this one for a while, and I’m quite proud of the result: How many Heroku dynos do you need, and which size? An opinionated guide.

One thing I didn’t mention in the article is what type of dyno I use myself to run Rails Autoscale. Don’t tell anyone, but I actually use Standard-1X dynos, which I recommend against in that post! 😬

Since I’ve kept the app extremely lightweight, it consumes less memory than any other Rails app I’ve worked on. This puts Rails Autoscale in the minority of Rails apps that can run on a 1X dyno with two Puma workers—a requirement I lay out in the post.
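For reference, a two-Puma-worker setup is typically configured in config/puma.rb along these lines (a sketch with common defaults, not necessarily what Rails Autoscale actually runs):

```ruby
# config/puma.rb (illustrative values, not my exact config)
workers Integer(ENV.fetch("WEB_CONCURRENCY", 2))

threads_count = Integer(ENV.fetch("RAILS_MAX_THREADS", 5))
threads threads_count, threads_count

# Boot the app before forking so workers share memory via copy-on-write,
# which is what makes two workers feasible in a 1X dyno's 512 MB.
preload_app!

port ENV.fetch("PORT", 3000)
```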

I felt it would be a bit distracting to include that in the article, but I needed to get it off my chest… 😆

I hope you give it a read, and let me know if you do! I plan to continue updating it, so I welcome any feedback or suggestions. If you’re feeling generous and want to share it on Twitter, Reddit, Slack, or wherever you talk tech, that would be awesome as well. ❤️

That’s it for now.

Thanks for following my journey. Take care of yourselves!


Pipelines for solo devs

Mastering Heroku newsletter

It's spring in Ohio! I'm still flying high after attending MicroConf a few weeks back, and I'm wrapping up the largest Rails Autoscale feature I've added since launch. Exciting times. 🔥

My emphasis this spring/summer will be on writing: on Twitter and in this newsletter. So here goes...

Let's get back to Heroku and talk about Pipelines. It's recently come to my attention that many folks who are using Heroku are NOT using Pipelines. 😱

Let me be clear about my position on this: I think EVERY Heroku app should be using Pipelines. I can't think of a single scenario where it doesn't provide some benefit.

If you're unfamiliar, a Heroku Pipeline is a group of apps that share the same codebase, such as a staging app and a production app. Pipelines provide tooling for promoting code between these environments and for automatically creating review apps for GitHub pull requests. I'm not going to talk about how to create and use Pipelines, because the docs do that very well.

Oh, did I mention Pipelines are FREE?!

The argument I hear against Pipelines is along the lines of "I'm a solo dev working on one feature at a time, so I don't need review apps". Fair enough. I'm also a solo dev on Rails Autoscale, but I still love Pipelines. Here's why...

  • I'm often pushing code and walking away from my computer. As a solo dev, it's important that I'm at my computer for a prod deploy in case anything goes wrong. I want prod deploys to be an intentional act, not a side effect of pushing to master. (I know this goes against continuous deployment. I'm a huge fan of frequent deployment, but I'm not quite ready for continuous.)

  • I want prod deployments to be instant. Waiting for a build process is too much friction. Promoting staging to production using Pipelines is incredibly fast because it doesn't rebuild your app—it just copies the compiled slug.

  • The Pipelines UI has a lovely "compare on GitHub" button to review the exact changes that will be deployed to production.

Here's my workflow in a nutshell:

  • Small changes are pushed directly to master. For larger features I'll work in a branch until I'm ready to merge.

  • All changes on master are automatically deployed to my staging app on Heroku. I use Codeship to run CI, but I do not wait for CI to pass before deploying to staging. Since it's just me, I'd rather have the speed of a faster staging deploy. If I break something on staging, I'll just fix it on the next commit. Not a big deal.

  • None of this impacts production. I can push complete garbage to master, and at worst I'm left with a messy commit history and a broken staging environment. I can live with this.

  • When I'm ready to deploy to prod, I promote staging through Slack using Heroku ChatOps. I could just as well do it through the CLI or Pipelines GUI.
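For reference, the CLI version of that promotion is a one-liner (app names here are placeholders, not my actual apps):

```
# Promote the staging slug to its downstream production app.
# No rebuild happens; the compiled slug is simply copied.
heroku pipelines:promote --app my-app-staging

# Or target a specific downstream app explicitly:
heroku pipelines:promote --app my-app-staging --to my-app-production
```

It finishes in seconds, which is exactly the low-friction deploy I described above.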

That's it. My complete solo dev workflow. It's low friction, low stress, and completely enabled by Pipelines.

Thanks for reading!

BTW, if you're on Heroku and not using Pipelines, I'd love for you to reply and let me know what I've overlooked. 😊

Winter hibernation

Happy Sunday!

You haven't heard from me in quite a while, so a quick reminder: This is Mastering Heroku, my personal email newsletter to help folks like you be awesome at running web apps on Heroku.

I took an unplanned hibernation over the winter, but I'd love to start sharing with you again. You can expect personal anecdotes, useful links, and sizzling hot tips.

Experienced people say a consistent schedule is critical for a newsletter, but that's not going to happen with me. You’ll only hear from me if I have something worthy of sharing, which I hope to be once or twice a month. Hopefully you can handle the unpredictability. ;-)

That's it for now. Just wanted to get us both excited for what's to come.

Not your cup of tea? Please unsubscribe below with no hard feelings.

If you're sticking around, what would you like to hear more about?

4 Ways to Scale on Heroku

Mastering Heroku — Issue #6

A few weeks ago I gave a presentation at the Columbus Ruby Brigade about my approach and mental model for scaling Heroku apps. My attempt to record the talk failed, so I rerecorded it as a screencast just for you. ❤️

Here’s what I cover in the video:

  • Scaling option #1: Horizontal—add more dynos. If you see increased request queue times in Scout or New Relic, you need to make your app faster or add more dynos. As soon as you’re using more than one dyno, automate it instead of playing a guessing game.

  • Scaling option #2: Vertical—increase dyno size. Because of Heroku’s random routing, you need concurrency within a single dyno. This means running more web processes, which consume more memory and may require a larger dyno type. Aim for at least three web processes.

  • Scaling option #3: Process types. You’re not limited to just “web” and “worker” process types in your Procfile. Consider multiple worker process types that pull from different job queues. These can be scaled independently for more control and flexibility.

  • Scaling option #4: App instances. Heroku Pipelines make it relatively easy to deploy a single codebase to multiple production apps. This can be helpful to isolate your main app traffic from traffic to your API or admin endpoints, for example. Heroku will route traffic to the correct app based on the request subdomain and the custom domains configured for each app.
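The Procfile for option #3 might look something like this (assuming Sidekiq for background jobs; the queue and process names are made up for illustration):

```
web: bundle exec puma -C config/puma.rb
worker: bundle exec sidekiq -q default -q mailers
heavy_worker: bundle exec sidekiq -q reports -c 2
```

Each process type then scales independently, e.g. `heroku ps:scale worker=2 heavy_worker=1`, and each can even run on a different dyno size.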

My general advice:

  • Start simple.

  • Configure multiple web processes per dyno, increasing dyno size if needed.

  • If you need more than one web dyno, autoscale it.

  • If certain background workers are resource hogs, they may require a larger dyno size. Split into their own process types with dedicated job queues so they can be scaled independently.

  • If you have dedicated sections of your web app such as an API or admin section, split them into their own subdomain so you can divert traffic to a separate app instance. Keep them on a single app instance until the additional complexity is absolutely necessary.

Did you find the video helpful? Anything you’d add or change? Let me know!

Happy scaling!
— Adam (@adamlogic)

Mastering Heroku — Issue #5

A reader asked me for some help this week:

Hi Adam, saw your post and decided to reach out. I used AutoScale in the past but am now on performance dyno. We keep running into Rack::Timeout::RequestTimeoutException errors. Wondering if you may have any suggestion.

I feel this pain. Request timeouts are the worst. If you're not a Rubyist or if you're just unfamiliar with the error above, Rack::Timeout is a library for timing out requests in Rack apps like Rails.

Why would you want to time out a request? Because if you don't, Heroku will:

Occasionally a web request may hang or take an excessive amount of time to process by your application. When this happens the router will terminate the request if it takes longer than 30 seconds to complete.

We’ve all seen these infamous H12 errors appear in our logs and in our Heroku metrics panel.

It's unfortunate when a user sees an error page, but what's worse is that your app has no idea when Heroku times out a request. Your app will continue processing to completion, whether it takes an additional five seconds or an additional five minutes.

While the router has returned a response to the client, your application will not know that the request it is processing has reached a time-out, and your application will continue to work on the request.

Libraries like Rack::Timeout allow you to halt processing of a long-running request before Heroku times out. This gives you more control over the error the user sees and prevents a hung request from bringing down your app server.

It's a Band-Aid, though, and this particular Band-Aid often introduces more problems than it solves.

Raising mid-flight in stateful applications is inherently unsafe. A request can be aborted at any moment in the code flow, and the application can be left in an inconsistent state.

This is straight from the Rack::Timeout docs, which do an excellent job of warning you about the risks and tradeoffs. I’ve personally seen all kinds of strange and frustrating behavior with Rack::Timeout. As far as I’m concerned, it’s just not worth it.

So what’s a safe alternative that prevents hung requests from bringing down your app? As usual, Nate Berkopec sums it up well.

Nate references the Ultimate Guide to Ruby Timeouts, which, if you’re a Rubyist, you should bookmark right now. By setting library-specific timeouts on database queries and network requests, you can gracefully handle unpreventable slowdowns—roll back transactions, show a meaningful error message, whatever you need to do.
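To make that concrete, here's a minimal sketch of a library-specific timeout on an outbound HTTP call. The method name, host, and values are mine, purely for illustration:

```ruby
require "net/http"

# Set explicit open/read timeouts so a hung upstream call fails fast
# instead of eating Heroku's entire 30-second window.
def fetch_with_timeouts(host, path)
  http = Net::HTTP.new(host, 443)
  http.use_ssl = true
  http.open_timeout = 5   # max seconds to establish the TCP connection
  http.read_timeout = 10  # max seconds to wait for response data
  http.get(path)
rescue Net::OpenTimeout, Net::ReadTimeout => e
  # Handle the slowdown gracefully: log it, roll back, render a
  # friendly error. The request itself is never forcibly aborted.
  warn "Upstream call timed out: #{e.class}"
  nil
end
```

The same idea applies to your database: Postgres, for example, supports a `statement_timeout` you can set per connection. The point is that each timeout fires at a safe boundary in a specific library, rather than raising mid-flight anywhere in your code the way Rack::Timeout does.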

This isn’t a perfect solution, of course. You could still have a single request with 1,000 database queries, none of which individually time out, but collectively are way over Heroku’s 30-second limit.

In these cases, I still don’t think it’s worth reaching for a “big, big hammer”. Instead, set up alerting for Heroku’s H12 errors. You can use Heroku’s threshold alerting for this, or set up alerting in your log management tool (I use both). Heroku add-ons like Logentries will alert you on H12 errors out of the box.

With these alerts in place, you can investigate your timeouts to fix the root cause instead of relying on a Band-Aid. The H12 error will tell you the exact URL that timed out, so use that along with an APM tool like Scout to determine what went wrong. Chances are, you either have an N+1 query or you’ve omitted a timeout on some I/O.

To recap:

  • Set library-specific timeouts for all I/O (database, network, etc.)

  • Avoid solutions that arbitrarily halt application processing in the middle of a request. It’ll lead to unpredictable and hard-to-debug behavior.

  • Monitor your H12 errors.

  • Use an APM tool to fix those slow endpoints.

And of course, use autoscaling to ensure a few slow requests don’t slow down your entire app. 😁

Happy scaling!
