Rogish Reading Writing

Software, management, people.

Automatically Maintaining and Improving Code Quality

New software projects start the same way: clean, well-tested, and fresh in everyone’s minds. Over time, however, the knowledge fades as little-visited parts of the code aren’t touched, folks move to other projects, and new developers come on board. Test coverage drops. Mass hysteria.

Without automated, enforced quality and test coverage metrics, entropy wins and you get stuck in a vicious cycle.

We have some tools in our arsenal to help prevent code rust: we can institute code reviews, we can pair program, or folks can “lunch-n-learn” to demo code. All these are well and good, but require active effort to maintain quality. What if there were automated methods to help ensure code quality that didn’t require manual intervention?

I’m a big believer in automating all the things and code inspection is something you should strive to automate. Static code analysis allows us to enforce code standards (no spaces after parens! Two spaces, not tabs! and other holy wars), catch potential bugs or security problems, and improve code quality.

But if it’s outside of your normal day-to-day development routine you’ll forget to check it. And, like the bad old days of waterfall development, if your code gets thrown over a wall and the analysis runs long after you’ve written the code, it merely introduces more inefficiency and churn. If you integrate static code analysis into your automated testing, actual metrics prove your code is improving as you red/green/refactor.

In a Ruby/Rails project, you have several tools to help maintain code quality:

- Rails Best Practices
- RuboCop
- SimpleCov
- CodeClimate
- CircleCI

You should integrate these into your default Rake task so they run whenever you execute your tests:

task default: :all_specs

task all_specs: :environment do
  ['rubocop -R', 'rails_best_practices', 'rspec'].each do |task|
    sh task
  end
end

Rails Best Practices

Rails Best Practices performs typical Ruby checks and integrates them into a Rails context. For example, it will report an unreachable route, or a missing index on a foreign key. It's super valuable for onboarding new developers to Rails, too. Yes, they should read a Rails book, but if they inadvertently violate one of the many norms of Rails, this gem will catch it.
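The gem also reads an optional YAML config so you can tune which checks run. A minimal sketch of `config/rails_best_practices.yml` follows; the check names here are illustrative assumptions and vary by version, so consult the gem's README for the real list:

```yaml
# config/rails_best_practices.yml -- enable individual checks and their options
# (check names below are assumptions; see the rails_best_practices README)
AlwaysAddDbIndexCheck: {}
LawOfDemeterCheck: {}
LongLineCheck: { max_line_length: 100 }
```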

RuboCop

RuboCop is similar to RBP, but focused specifically on Ruby: it enforces conventions like two-space indentation (no tabs) and flags methods and classes that are too long or complex.
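As an illustration, here is the sort of rewrite RuboCop nudges you toward; the method and cop names are made up for this sketch, and exact cop names vary by RuboCop version:

```ruby
# Before: a dense one-liner RuboCop would flag (semicolons, single-line method body)
def order_total_terse(items); items.map { |i| i[:price] * i[:qty] }.reduce(0) { |sum, x| sum + x }; end

# After: two-space indentation, one expression per line
def order_total(items)
  items.sum { |item| item[:price] * item[:qty] }
end

puts order_total([{ price: 5, qty: 2 }, { price: 3, qty: 1 }])  # => 13
```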

SimpleCov

SimpleCov lets you enforce a minimum amount of test coverage and refuse any coverage drop. You should define a low-water mark to ensure coverage never falls below some threshold (90%?):

# At the very top of spec_helper.rb, before any application code is loaded
require 'simplecov'

SimpleCov.start 'rails'
SimpleCov.minimum_coverage 90
SimpleCov.refuse_coverage_drop  # fail the run if coverage drops from the last recorded run

CodeClimate

CodeClimate ties these all together and has its own flavor of Ruby/Rails linters, along with JavaScript/Node.js. CodeClimate can plug into your RSpec suite and, along with SimpleCov, report code coverage in the tool. More valuable, though, is its security monitoring, which reports vulnerabilities in your particular version of Ruby/Rails and flags security problems as you introduce them in your code.
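At the time of writing, CodeClimate's coverage reporting hooks in via a test-reporter gem. A sketch of the setup, assuming the `codeclimate-test-reporter` gem and its `CodeClimate::TestReporter.start` entry point (check CodeClimate's current docs, as this integration has changed over time):

```ruby
# Gemfile
gem 'codeclimate-test-reporter', group: :test, require: nil

# spec/spec_helper.rb -- start the reporter before any application code loads
require 'codeclimate-test-reporter'
CodeClimate::TestReporter.start
```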

CircleCI

CircleCI is a fantastic SaaS CI provider I've used for a few years now. Not only are the founders super responsive, but their massive parallelization functionality allows us to focus on writing code and tests rather than worrying about how long they take to run.

You can combine CodeClimate and CircleCI to get test coverage reports in CC and be notified of test-coverage regressions in Circle.
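Wiring the two together mostly means having CI run the same default Rake task defined above. A sketch of a `circle.yml` in CircleCI's config format of the era follows; the keys are from memory and may differ from the current format:

```yaml
# circle.yml -- run the default Rake task (lint checks + specs) on every push
machine:
  ruby:
    version: 2.1.5
test:
  override:
    - bundle exec rake
```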

Using all these tools, you too can have a non-trivial app that is well-tested, easy to maintain, and a joy to work with.

What tools do you use to improve the quality of your code?

30 Years of Mac

30 years ago, Steve Jobs and Apple unveiled the Macintosh. Most people have seen that video, but I bet most haven't seen the one he filmed eight years later, demoing the latest version of his new operating system, NeXTSTEP 3.0.

Fascinating to see the Mac heritage in NeXTSTEP carry through the acquisition and transformation into OS X. It’s hard to remember, but in 1992 Microsoft had just released Windows 3.1 – NeXT was light-years ahead of Microsoft.

Remember Your Target Audience

In the software business, most folks work in an office. Maybe a quiet, private one, or one of the dastardly open plan ones. But not usually, say, a sports arena. Or in a moving vehicle.

Sometimes, this fact is lost on the people making the software. They design buttons that wash out when a surveyor views them on a phone or tablet outside in the sun. Or they load too many external libraries in the HTML and end up swamping users on high-latency connections.

Most of the time, these bugs (either in design, specification, or implementation) merely create confusion and decreased user satisfaction.

Occasionally, however, they kill people.

Although the three deaths from the Therac-25 are ultimately classified as engineering failures, mainly resulting from lack of testing and other process deficiencies, it’s possible that they could have been detected earlier if someone had observed the users operating the system:

The fifth accident occurred at the same location as the fourth. As a result, someone besides the AECL engineers had knowledge that more than one possible accident transpired while using the Therac-25. A physicists [sic] from the hospital where the two accidents occurred investigated both accidents thoroughly, discovering that the accidents were due to the quick changes made to the setup parameters by the machine operators. Through a quick series of returns, the physicists could reproduce the “Malfunction 54” error, something that AECL never could do (Leveson and Turner, 1993).

Back in college, I interned at a company that made the software and hardware powering ready-mix concrete plants. We had front-office ticketing software that ran on a PC and batch-plant automation hardware that ran on a real-time OS.

These plants are noisy, exposed to the elements, and somewhat dangerous. Sitting in our comfortable chairs in the middle of central Ohio, it was too easy to forget that the users were in considerably different environments.

To that end, the head of engineering kept a framed picture on his desk of a typical ready-mix user, complete with hard-hat, operating the system. Below the picture was the description: “Remember your target audience!”

I carry that philosophy with me every day. In every company I've worked for, I've set up personas with pictures and bios of actual customers attached and posted them in prominent positions. Not only does this put a face to the name, it helps everyone on the product team invoke them in discussion ("Yeah, but how would Rosie use this?") – building empathy with and understanding of the customer.

If possible, I also organize “field-trips” to customer sites to actually see how users operate our software in the field – ensuring we never forget who uses our products, and how.