As developers, we have a plethora of modern tools to help us write better code. Unfortunately, with so many great tools, using them all can often be counter-productive due to varying flags and options, inconsistent usage, and the added time waiting for them to run.

Linters for your back-end code and front-end code. Automated dependency review tools. Static analysis for security, code smells, or poor code coverage. Documentation tooling. Local development, continuous integration, multiple teammates and text editors.

It’s a lot to keep track of, and it’s easy to get overwhelmed. Reviewer helps you leverage multiple automated code review tools with orders of magnitude less friction so you and your team can use them more frequently and consistently.

But let’s be honest, that’s kind of hand-wavy. It’s more helpful to see how it works. For the simplest example, instead of remembering and typing…

yarn audit --level moderate
bundle exec bundle-audit check --no-update
bundle exec test
bundle exec rubocop --parallel
bundle exec erblint --lint-all --enable-all-linters
yarn stylelint .
yarn eslint .

…you, your teammates, and even your continuous integration setup could use…

rvw
…and Reviewer will automatically run through each of the tools and suppress all of the output unless one of them fails. But that’s really an oversimplification and just the beginning of how Reviewer can help. So what makes Reviewer worth introducing yet another tool into your workflow?

Let’s look at a few more examples of how Reviewer saves you time while helping you stay in the zone.

Convenience & Speed

Reviewer’s primary goals revolve around saving time both by typing fewer characters and by reducing time spent looking up unique options and flags for each command. And by making it more convenient to run subsets of tools on subsets of files, it also reduces the time spent waiting for the tools to run.

And finally, it strives to reduce the time spent swimming through results so you can more efficiently find the actionable issues with orders of magnitude less effort and time.

More Signal and Less Noise

Imagine a project with a dozen code quality tools. Linters. Tests. Security scanners. Documentation tools. And maybe even some tools for detecting code smells.

If you run all of these tools at once, you’ll invariably receive limitless amounts of “screen vomit” even when the tools don’t find anything. Some tools support a --quiet flag, but more often than not “quiet” doesn’t equate to “silent”.

The end result? That one security issue scrolls by in a sea of text. In the best case, you see some red flash by, and you know to scroll up and look for it. In the worst case, it’s lost in the scroll, and you miss it entirely.

With Reviewer wrapping all of your commands, it follows the UNIX philosophy that silence equates to success. If you use Reviewer to run a single tool by itself, you get the full unfiltered output, but if you run Reviewer with multiple tools, it captures all of the output and only shows results when a tool fails.
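The silence-equals-success idea can be sketched in a few lines of Ruby. This is a hypothetical helper, not Reviewer’s actual implementation: it runs each command, captures its combined output, and only prints that output when the command fails.

```ruby
require "open3"

# Run each labeled command, capturing stdout and stderr together.
# Output is only shown for commands that fail; success stays silent.
# Returns true when every command succeeded.
def run_quietly(commands)
  failures = {}
  commands.each do |label, command|
    output, status = Open3.capture2e(command)
    failures[label] = output unless status.success?
  end
  failures.each do |label, output|
    puts "#{label} failed:"
    puts output
  end
  failures.empty?
end
```

Running a single tool directly would skip this wrapper entirely, which is how you’d still get full, unfiltered output on demand.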

Keeping Tools Current

Some tools like security scanners need regular updates to ensure they have the latest data on vulnerabilities. Naturally, these updates occur regularly, and you want to make sure you’re running the tool with the latest data. But the update process can often take longer than the actual scan.

The result is that you don’t want to update every time, but you also don’t want to forget the updates. Reviewer can automatically check for updates based on a frequency you specify, then the next time you run rvw bundle-audit, it will automatically fetch updates only when the current data is stale.

You get the speed of skipping the updates most of the time, and you get the convenience of not having to keep track of whether updates are needed.
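The staleness check described above boils down to comparing a recorded timestamp against a configured frequency. This is a minimal sketch; the six-hour frequency is an illustrative assumption, not Reviewer’s default.

```ruby
# Illustrative update frequency: only fetch new vulnerability data
# when the last refresh happened more than six hours ago.
UPDATE_FREQUENCY = 6 * 60 * 60 # seconds

# Returns true when updates should be fetched: either we've never
# updated, or the last update is older than the allowed frequency.
def stale?(last_updated_at, now: Time.now)
  last_updated_at.nil? || (now - last_updated_at) > UPDATE_FREQUENCY
end
```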


Consistency

When commands have dozens of configurable options, it’s not easy to ensure the right combination of options is applied every time. You can hard-code the options in a static script or the tool’s configuration file, but that’s not always enough. You can end up with options scattered all over the place with each tool having wildly different approaches to configuration.

So even if your configurations are under version control, there’s no guarantee that teammates will reliably apply the same options every time—especially if they’re running slight variations of the commands in order to save time.

Reviewer ensures that everyone is running the same commands with the same options every time. It even lets you use the same options in your continuous integration environment so you reduce those cases of “works on my machine.”

Performance Tracking

Reviewer keeps track of the execution time for each run of each tool so it can help you ensure that regular tasks are running as quickly as possible. For example, if your test suite has been running in under a minute, but Reviewer notices that recent runs have jumped closer to a minute and a half, it will let you know that a recent change may have reduced the performance of your test suite.
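One plausible way to detect that kind of slowdown is to compare the latest run against the average of earlier runs. This sketch assumes a simple history of timings in seconds and an invented 1.25x threshold:

```ruby
# Flag a slowdown when the most recent timing exceeds the average of
# all earlier timings by more than the given threshold multiplier.
# Timings are in seconds; the 1.25x default is an assumption.
def slower_than_usual?(timings, threshold: 1.25)
  return false if timings.size < 2
  *history, latest = timings
  baseline = history.sum.to_f / history.size
  latest > baseline * threshold
end
```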

Proactive Help

When you have a dozen linters and code review tools, you occasionally need to refer to documentation. Maybe you want to disable a specific check across your entire codebase, or maybe you want to simply ignore it in a specific context but can’t remember how to do that.

Reviewer keeps a list of key links to the project home page, the documentation, the repository, and other key links. Then, when a tool surfaces a failure, Reviewer lets you know about the failure but also presents relevant links to help you tune the tool to your needs.

Run a command, but the tool isn’t installed yet? Reviewer can install it for you or provide a link to the optimal installation instructions for your project.

Selectively Running Subsets of Tools

Say you made some copy changes to your front-end code. You don’t need to run the back-end test suite, but you do want to run the front-end test suite and the linters. But running each command individually gets old.

You’d probably rather just run rvw front-end and Reviewer will only run the commands you’ve tagged with front-end. You save time, and you’re not bombarded with feedback or suggestions from other tools.

Or maybe you want to run a very specific set of tools in CI. You could use rvw ci, and you’ll only run tools you’ve tagged with ci. What about running specific tools on pre-commit hooks? You can do that too.
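Tag-based selection is essentially a filter over the configured tools. Here’s a minimal sketch with hypothetical tools and tags, not Reviewer’s actual configuration format:

```ruby
# Hypothetical mapping of configured tools to their tags.
TOOLS = {
  "eslint"  => ["front-end", "ci"],
  "rubocop" => ["back-end", "ci"],
  "jest"    => ["front-end"]
}

# Returns only the tools carrying the requested tag, so `rvw front-end`
# would run just the front-end subset.
def tools_tagged(tag)
  TOOLS.select { |_tool, tags| tags.include?(tag) }.keys
end
```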

Review or Format

Some tools provide the option to either review your code and generate a report or automatically fix any issues. But what was the flag for auto-formatting? Is it -a, --fix, --format, --auto-correct or something else?

With Reviewer, you never have to remember. You can just run…

fmt rubocop

…and Reviewer will run Rubocop with your preferred auto-formatting options. If one of your configured tools doesn’t support auto-formatting, no worries, Reviewer will skip it in the context of the fmt command.
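Skipping tools without auto-formatting support could work like this sketch, where each tool either has a format command or nil. The command strings here are illustrative assumptions:

```ruby
# Hypothetical format commands: tools without auto-formatting
# support map to nil and get skipped by the fmt command.
FORMAT_COMMANDS = {
  "rubocop"  => "bundle exec rubocop --autocorrect",
  "brakeman" => nil # security scanner; nothing to auto-fix
}

# Only the tools that actually define a format command.
def formattable_tools
  FORMAT_COMMANDS.reject { |_tool, command| command.nil? }.keys
end
```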

Flaky/Flickering Test Failures

Despite your best efforts, your otherwise consistently-passing test suite fails due to a flaky test. Now you have to track down the seed used for the test run, reproduce and narrow down the failure, and then fix the test.

It starts simple enough by running…

bin/rails test

…but behind the scenes, Rails chooses a random seed, and this time it’s just the right seed for the tests to fail. Time to scroll through the test output to figure out the seed that was used and then rewrite the test command to ensure it reuses that seed.

After a few moments of spelunking and remembering the syntax, you know you need to run…

TESTOPTS=--seed=81156 bin/rails test

If you run it frequently enough, you might even remember that exact syntax off the top of your head, but you still waste time scrolling through the output and typing the updated command.

What if you replaced bin/rails test with…

rvw test

Now that Reviewer is handling things, if there’s a test failure, you have several easy-to-remember options at your disposal…

rvw failed # Rerun only the failed tests
rvw rerun # Rerun the test suite with the same seed value
rvw bisect # Use the same seed value to bisect the test suite

Reviewer does the rest. It remembers each run and knows a few tricks to help you move forward without having to look up infrequently-used and obscure flags and options.
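Remembering the seed comes down to parsing it out of the test output and rebuilding the command later. This sketch assumes the standard Rails/minitest “Run options: --seed NNNNN” line; the helpers are hypothetical:

```ruby
# Pull the random seed out of a Rails/minitest run's output, e.g.
# "Run options: --seed 81156" => 81156. Returns nil if absent.
def extract_seed(test_output)
  test_output[/--seed[= ](\d+)/, 1]&.to_i
end

# Rebuild the command that reruns the suite with the same seed.
def rerun_command(seed)
  "TESTOPTS=--seed=#{seed} bin/rails test"
end
```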

Running Multiple Tools on Subsets of Files

Let’s say you just modified a single file, and you know it’s isolated enough that you don’t need to run all of your code quality tools on the entire codebase. You just want to review that specific file.

You can run the entire test suite and all of the configured tools, but that might take a few minutes. Or you can spend a minute or two looking up the options for each tool and typing each of the commands.

Depending on your tools, you may end up with commands like…

bundle exec rubocop app/models/user.rb
bundle exec reek app/models/user.rb
bundle exec brakeman # Brakeman doesn't support file filtering
bin/rails test test/models/user_test.rb

…but it would sure be nice to just…

rvw app/models/user.rb

That way, Reviewer can figure it all out based on the file’s extension. It intelligently runs all of the tools for Ruby files and, for the tools that support file filtering, limits each command so it only runs against the specified file.

Less typing. More precision. Faster results.
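Extension-based dispatch could look like this sketch: map an extension to the tools that handle it, and only append the file path for tools that support filtering. The tool list and flags are hypothetical:

```ruby
# Hypothetical mapping from file extension to the tools that apply,
# with a flag for whether each tool supports file filtering.
TOOLS_BY_EXTENSION = {
  ".rb" => [
    { name: "rubocop", files?: true },
    { name: "reek", files?: true },
    { name: "brakeman", files?: false } # always scans the whole project
  ]
}

# Build the command list for a given file: filtered tools get the
# path appended, unfiltered tools run as-is, unknown extensions run nothing.
def commands_for(path)
  extension = File.extname(path)
  TOOLS_BY_EXTENSION.fetch(extension, []).map do |tool|
    tool[:files?] ? "#{tool[:name]} #{path}" : tool[:name]
  end
end
```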

Running Tools on Smart Subsets of Files

Sometimes, you might make a handful of changes to a few different files. You want to quickly run your code quality tools on those files, but you don’t want to have to add each file name to a bunch of commands. So you just run the whole thing and wait a few minutes.

But what if you could choose from a few more convenient options?

rvw staged # Automatically runs against only staged files
rvw recent # Automatically runs against only recently modified files

Your tools run faster because they’re focused on only a few files. You get faster feedback. And you can still run the full suite before you make the final push to open a pull request.
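Under the hood, keywords like staged and recent presumably resolve to file lists via git. This sketch only builds the git command strings rather than shelling out; the recent depth of five commits is an invented assumption:

```ruby
# Hypothetical mapping from a keyword to the git command that lists
# the relevant files. "recent" depth is an illustrative assumption.
GIT_FILE_SOURCES = {
  "staged" => "git diff --cached --name-only",
  "recent" => "git diff --name-only HEAD~5..HEAD"
}

def file_listing_command(keyword)
  GIT_FILE_SOURCES.fetch(keyword)
end
```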

Unifying Results

When you run multiple tools, you get multiple reports. But what if you want to see all of the results in a single place? You’re out of luck.

If a tool can output JSON or YAML results associated with your codebase’s files, Reviewer can pull them all together into a streamlined report with a holistic view and score for each file based on the weight you assign to each tool. That way, a single critical-severity issue gets bumped to the top of the list ahead of a batch of minor formatting issues.
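Weighted scoring like that could be sketched as severity times tool weight, summed per file. The weights, severities, and issue shape below are invented for illustration:

```ruby
# Hypothetical per-tool weights: a security finding from Brakeman
# counts for far more than a Rubocop style nit.
TOOL_WEIGHTS = { "brakeman" => 10, "rubocop" => 1 }

# Score each file by summing severity * tool weight across its issues,
# then sort so the highest-scoring (most urgent) files come first.
def ranked_files(issues)
  issues
    .group_by { |issue| issue[:file] }
    .map { |file, list| [file, list.sum { |i| i[:severity] * TOOL_WEIGHTS[i[:tool]] }] }
    .sort_by { |_file, score| -score }
end
```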

Reviewer is still a work in progress, and not all of these features are working just yet, but hopefully they paint a picture of what’s possible. Have other ideas or think you could use it? Please don’t hesitate to reach out.