Pipelines for solo devs

Mastering Heroku newsletter

It's spring in Ohio! I'm still flying high after attending MicroConf a few weeks back, and I'm wrapping up the [largest Rails Autoscale feature](https://twitter.com/adamlogic/status/1110142400566083585) I've added since launch. Exciting times. 🔥

My emphasis this spring/summer will be on writing—on Twitter, on railsautoscale.com, and in this newsletter. So here goes...

Let's get back to Heroku and talk about Pipelines. It's recently come to my attention that many folks who are using Heroku are NOT using Pipelines. 😱

Let me be clear about my position on this: I think EVERY Heroku app should be using Pipelines. I can't think of a single scenario where it doesn't provide some benefit.

If you're unfamiliar, a Heroku Pipeline is a group of apps that share the same codebase, such as a staging app and a production app. Pipelines provide tooling for promoting code between these environments and for automatically creating review apps for GitHub pull requests. I'm not going to cover how to create and use Pipelines, because the docs do that very well.

Oh, did I mention Pipelines are FREE?!

The argument I hear against Pipelines is along the lines of "I'm a solo dev working on one feature at a time, so I don't need review apps". Fair enough. I'm also a solo dev on Rails Autoscale, but I still love Pipelines. Here's why...

  • I'm often pushing code and walking away from my computer. As a solo dev, it's important that I'm at my computer for a prod deploy in case anything goes wrong. I want prod deploys to be an intentional act, not a side effect of pushing to master. (I know this goes against continuous deployment. I'm a huge fan of frequent deployment, but I'm not quite ready for continuous.)

  • I want prod deployments to be instant. Waiting for a build process is too much friction. Promoting staging to production using Pipelines is incredibly fast because it doesn't rebuild your app—it just copies the compiled slug.

  • The Pipelines UI has a lovely "compare on GitHub" button to review the exact changes that will be deployed to production.

Here's my workflow in a nutshell:

  • Small changes are pushed directly to master. For larger features I'll work in a branch until I'm ready to merge.

  • All changes on master are automatically deployed to my staging app on Heroku. I use Codeship to run CI, but I don't wait for CI to pass before deploying to staging. Since it's just me, I'd rather have the speed of a faster staging deploy. If I break something on staging, I'll fix it in the next commit. Not a big deal.

  • None of this impacts production. I can push complete garbage to master, and at worst I'm left with a messy commit history and a broken staging environment. I can live with this.

  • When I'm ready to deploy to prod, I promote staging through Slack using Heroku ChatOps. I could just as well do it through the CLI or Pipelines GUI.
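For reference, the promotion itself is a single CLI command (the app name here is a placeholder); the same action is available in the Pipelines GUI or via Heroku ChatOps in Slack:

```shell
# Promote the staging app's current slug to the downstream production app.
# No rebuild happens — the compiled slug is copied as-is, so it's nearly instant.
heroku pipelines:promote --app my-staging-app
```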

That's it. My complete solo dev workflow. It's low friction, low stress, and completely enabled by Pipelines.

Thanks for reading!

BTW, if you're on Heroku and not using Pipelines, I'd love for you to reply and let me know what I've overlooked. 😊

Winter hibernation

Happy Sunday!

You haven't heard from me in quite a while, so a quick reminder: This is Mastering Heroku, my personal email newsletter to help folks like you be awesome at running web apps on Heroku.

I took an unplanned hibernation over the winter, but I'd love to start sharing with you again. You can expect personal anecdotes, useful links, and sizzling hot tips.

Experienced people say a consistent schedule is critical for a newsletter, but that's not going to happen with me. You’ll only hear from me if I have something worthy of sharing, which I hope to be once or twice a month. Hopefully you can handle the unpredictability. ;-)

That's it for now. Just wanted to get us both excited for what's to come.

Not your cup of tea? Please unsubscribe below with no hard feelings.

If you're sticking around, what would you like to hear more about?

4 Ways to Scale on Heroku

Mastering Heroku — Issue #6

A few weeks ago I gave a presentation at the Columbus Ruby Brigade about my approach and mental model for scaling Heroku apps. My attempt to record the talk failed, so I rerecorded it as a screencast just for you. ❤️

Here’s what I cover in the video:

  • Scaling option #1: Horizontal—add more dynos. If you see increased request queue times in Scout or New Relic, you need to make your app faster or add more dynos. As soon as you’re using more than one dyno, automate it instead of playing a guessing game.

  • Scaling option #2: Vertical—increase dyno size. Because of Heroku’s random routing, you need concurrency within a single dyno. This means running more web processes, which consume more memory and may require a larger dyno type. Aim for at least three web processes per dyno.

  • Scaling option #3: Process types. You’re not limited to just “web” and “worker” process types in your Procfile. Consider multiple worker process types that pull from different job queues. These can be scaled independently for more control and flexibility.

  • Scaling option #4: App instances. Heroku Pipelines make it relatively easy to deploy a single codebase to multiple production apps. This can be helpful to isolate your main app traffic from traffic to your API or admin endpoints, for example. Heroku will route traffic to the correct app based on the request subdomain and the custom domains configured for each app.
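As a sketch of option #3, a Procfile might declare separate worker process types pulling from different queues. The commands and queue names below are illustrative, assuming a Puma/Sidekiq setup:

```
web: bundle exec puma -C config/puma.rb
worker: bundle exec sidekiq -q default
worker_reports: bundle exec sidekiq -q reports
```

Each type can then be scaled on its own, e.g. `heroku ps:scale worker=2 worker_reports=1`.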

My general advice:

  • Start simple.

  • Configure multiple web processes per dyno, increasing dyno size if needed.

  • If you need more than one web dyno, autoscale it.

  • If certain background workers are resource hogs, they may require a larger dyno size. Split into their own process types with dedicated job queues so they can be scaled independently.

  • If you have dedicated sections of your web app such as an API or admin section, split them into their own subdomain so you can divert traffic to a separate app instance. Keep them on a single app instance until the additional complexity is absolutely necessary.
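If you do eventually split by subdomain, the routing is just a matter of attaching each custom domain to the right app (app and domain names below are placeholders):

```shell
heroku domains:add www.example.com --app my-main-app
heroku domains:add api.example.com --app my-api-app
```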

Did you find the video helpful? Anything you’d add or change? Let me know!

Happy scaling!
— Adam (@adamlogic)

Mastering Heroku — Issue #5

A reader asked me for some help this week:

Hi Adam, saw your post and decided to reach out. I used AutoScale in the past but am now on performance dyno. We keep running into Rack::Timeout::RequestTimeoutException errors. Wondering if you may have any suggestion.

I feel this pain. Request timeouts are the worst. If you're not a Rubyist or you're just unfamiliar with the error above, Rack::Timeout is a library for timing out long-running requests in Rack apps, including Rails.

Why would you want to time out a request? Because if you don't, Heroku will:

Occasionally a web request may hang or take an excessive amount of time to process by your application. When this happens the router will terminate the request if it takes longer than 30 seconds to complete.

We’ve all seen these infamous H12 errors appear in our logs and in our Heroku metrics panel.

It's unfortunate when a user sees an error page, but what's worse is that your app has no idea when Heroku times out a request. Your app will continue processing to completion, whether it takes an additional five seconds or an additional five minutes.

While the router has returned a response to the client, your application will not know that the request it is processing has reached a time-out, and your application will continue to work on the request.

Libraries like Rack::Timeout allow you to halt processing of a long-running request before Heroku times out. This gives you more control over the error the user sees and prevents a hung request from bringing down your app server.

It's a Band-Aid, though, and this particular Band-Aid often introduces more problems than it solves.

Raising mid-flight in stateful applications is inherently unsafe. A request can be aborted at any moment in the code flow, and the application can be left in an inconsistent state.

This is straight from the Rack::Timeout docs, which do an excellent job of warning you about the risks and tradeoffs. I’ve personally seen all kinds of strange and frustrating behavior with Rack::Timeout. As far as I’m concerned, it’s just not worth it.

So what’s a safe alternative that prevents hung requests from bringing down your app? As usual, Nate Berkopec sums it up well.

Nate references the Ultimate Guide to Ruby Timeouts, which, if you’re a Rubyist, you should bookmark right now. By setting library-specific timeouts on database queries and network requests, you can gracefully handle unpreventable slowdowns—roll back transactions, show a meaningful error message, whatever you need to do.
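As a rough sketch of what library-specific timeouts look like in Ruby (hosts and values here are illustrative, not recommendations):

```ruby
require "net/http"

# Outbound HTTP: bound both connection setup and response time.
http = Net::HTTP.new("api.example.com", 443)
http.use_ssl = true
http.open_timeout = 2 # seconds to establish the connection
http.read_timeout = 5 # seconds to wait for each read

# Postgres: fail any single query running longer than 5 seconds.
# In config/database.yml:
#
#   production:
#     variables:
#       statement_timeout: 5000  # milliseconds
```

When a timeout fires, you get a library-specific exception at a known point in your code, which you can rescue and handle cleanly—unlike an exception raised at an arbitrary line mid-request.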

This isn’t a perfect solution, of course. You could still have a single request with 1,000 database queries, none of which individually time out, but collectively are way over Heroku’s 30-second limit.

In these cases, I still don’t think it’s worth reaching for a “big, big hammer”. Instead, set up alerting for Heroku’s H12 errors. You can use Heroku’s threshold alerting for this, or set up alerting in your log management tool (I use both). Heroku add-ons like Logentries will alert you on H12 errors out of the box.

With these alerts in place, you can investigate your timeouts to fix the root cause instead of relying on a Band-Aid. The H12 error will tell you the exact URL that timed out, so use that along with an APM tool like Scout to determine what went wrong. Chances are, you either have an N+1 query or you’ve omitted a timeout on some I/O.

To recap:

  • Set library-specific timeouts for all I/O (database, network, etc.)

  • Avoid solutions that arbitrarily halt application processing in the middle of a request. It’ll lead to unpredictable and hard-to-debug behavior.

  • Monitor your H12 errors.

  • Use an APM tool to fix those slow endpoints.

And of course, use autoscaling to ensure a few slow requests don’t slow down your entire app. 😁

Happy scaling!
Adam

Mastering Heroku — Issue #4

I recently had a consulting call with Jesse Hanley (creator of Bento) that got me thinking about an approach many of us are guilty of when our apps are struggling: We just turn the knobs up to 11.

Jesse Hanley (@jessethanley), September 17, 2018:

> If you're using @heroku and have questions about scale, you should hit up @adamlogic.
>
> Just had a really insightful call with him that took me from "I have no idea what I'm doing, app needs more Performance-L dynos lol" to confident to experiment finding a profitable balance.

On Heroku, "turning it to 11" translates to blindly adding dynos and increasing dyno size. This approach can work, but it comes with major strings attached:

  • It gets expensive fast.

  • You risk overwhelming your downstream dependencies, such as hitting a connection limit on Postgres (I touched on this last week).

  • You're masking or ignoring underlying root causes of performance issues.

I subscribe to this crazy idea that Heroku is not expensive. When optimally configured, it can be a steal.

Adam McCrea (@adamlogic), August 14, 2018:

> Yesterday @railsautoscale handled 1.64M requests. I pay ~$100/mo to host it on @heroku. It *can* get expensive, but that's avoidable. Small teams don't need a dev-ops engineer, they just need guidance. https://t.co/fNOKhXva9d https://t.co/AIycHgDghm

So how do you optimize your Heroku setup? Here are some tips that got Jesse on the right track:

  • If absolutely consistent performance is a hard requirement, you need performance dynos. The shared architecture of standard dynos means your performance will fluctuate due to factors completely out of your control.

  • Remove the guesswork from choosing the number of dynos. Use Heroku's own autoscaling, HireFire, or Rails Autoscale to automatically scale up and down as needed. Autoscaling should be a hard requirement for a production app.

  • Once you’re autoscaling, there's little reason to use a larger dyno type than necessary. A smaller dyno lets you autoscale at a finer granularity and save a whole lot more cash. This is another reason jumping straight to Perf-L dynos is usually a bad idea.

  • On the other hand, you do need a large enough dyno to run multiple web processes. Heroku's random routing architecture means you’ll stabilize your performance by adding web processes so your app server can intelligently route requests within a dyno. A good rule of thumb is to choose a dyno type with enough memory to run at least three web processes.

  • Do the math to ensure you don’t exceed your database connection limit: [connection pool] * [processes per dyno] * [max dynos], calculated for each process type (web, worker, release, etc.) and summed.
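To make that math concrete, here's a hypothetical example (all numbers are made up—plug in your own pool sizes and dyno counts):

```ruby
# Connections used = [connection pool] * [processes per dyno] * [max dynos],
# summed across every process type that talks to the database.
web    = 5 * 3 * 10  # pool of 5, 3 web processes per dyno, autoscaling up to 10 dynos
worker = 10 * 1 * 2  # pool of 10, 1 worker process per dyno, 2 dynos
total  = web + worker

puts total # 170 — compare this against your Postgres plan's connection limit
```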

With that basic setup, you can focus your efforts on the app itself. Use an APM like New Relic or Scout to measure and diagnose potential bottlenecks. Any improvements there will mean faster response times, less scaling up, and lower Heroku bills.

Happy scaling!
—Adam


