We always want to do more. It’s human nature. We create space by creating better ways to do things. Then we immediately fill that available space with something new. When the net result is no empty space, was there really any improvement at all? If the new additions just create more complexity or chaos where there once was peace, was it really worth it?

Assuming that we all have about four hours of deep thought available on any given day, saving time doesn’t really help. The bottleneck isn’t finding those four hours; it’s what we get out of them.

We don’t need more time in the day to complete four hours of work. We need ways to improve the results from those four hours. We need to catch problems earlier and faster. We need to stay in the zone more easily. And we need to put small blocks of time to better use through better prioritization and direction.

We don’t need time. We need leverage.

So how do we create leverage to generate more and/or better results within that limited window of daily focused effort?

One way is better local automation in our development environments. The tools are out there, and they’re amazing in the right contexts. Unfortunately, it’s a mixed bag, and the more tools you add, the more mixed the bag becomes.

So what does that mixed bag look like?

More natural and integrated automated testing is a game changer, but testing is a skill unto itself, and without a lot of practice, there’s nothing stopping you from writing useless or slow tests. Automated tests are unquestionably good, but they’re only one piece of the puzzle.

Test coverage and documentation coverage tools can help identify missed opportunities. Or they can distract you into re-arranging the deck chairs.

Static analysis tools are amazing at proactively identifying potential security issues, poorly-written code, or even accessibility problems. But they also generate false positives that send you on a wild goose chase.

Linters can help enforce standards across a team with greater efficiency and accuracy than humans ever could, but they can also be draconian enforcers of pedantic details.

Version control processes and continuous integration make release management more predictable and reliable, but done poorly they can create friction and reduce your ability to iterate. And they don’t quite fit seamlessly into local development practices.

All of these automated tools can proactively point out better or more efficient ways to do things and make you more aware of new techniques, but they can also nag you into making changes that aren’t necessary.

The biggest upside is that all of this work can be offloaded to the machines and help find problems with incredible speed. They can do in seconds or minutes what would take people hours or days. The biggest downside is that the machines aren’t so good at exercising discretion. So you end up overwhelmed with screenfuls of irrelevant information. And sifting through it chews through the time savings and then some.

Then there are the configuration costs. Setting up these tools takes effort, and keeping them updated takes more. That can be distracting and create friction that blocks you from your primary work. For the most part, configuration is unavoidable, but opportunities to reduce it do exist.

Compounding complexity also gets bad fast. Using one or two tools isn’t too bad, but once you’re using more than a few, things get ugly. Different tools have different syntaxes or use different labels for the same functionality. All of this takes time to learn and takes up space in our heads.

Every tool adds more time spent waiting. Running every tool all the time isn’t tenable because things slow down quickly as projects grow. Remembering which tools to use when takes up headspace too. And typing half-a-dozen different commands with different options quite literally slows you down.

Draconian enforcement can be suffocating. Invariably, if you set up static analysis or linters, they will find false positives or attempt to impose their will in ways you don’t agree with. You either have to update the configuration or regularly add code to tell them to look the other way for special cases. Both of these create additional friction, especially when first adopting a new tool.
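For a concrete picture of what “look the other way” means in practice, here’s what RuboCop’s inline disable directives look like. The directives are a real RuboCop feature; the method itself is invented filler:

```ruby
# RuboCop's inline directives silence a specific cop for a specific span.
# The cop name (Metrics/MethodLength) is real; the method is made-up filler.

# rubocop:disable Metrics/MethodLength
def import_legacy_records(rows)
  # ...imagine a long, gnarly method you've decided is acceptable as-is...
  rows.map(&:to_s)
end
# rubocop:enable Metrics/MethodLength
```

Every one of these escape hatches is a small maintenance burden of its own, which is exactly the friction described above.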

They aren’t aware of their own relative value. If your static analysis tools find security issues, whitespace and formatting aren’t important, but your linters don’t know that. So they complain just as loudly when they really just need to be quiet. For all your linter knows, fixing the security issues may resolve its complaints or create new ones. Less critical tools need to know how to wait their turn.
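One way to imagine tools “waiting their turn” is a tiered runner that stops at the first severity level with failures, so style complaints never drown out security findings. This is a hypothetical sketch, not an existing tool; the tier names and commands are placeholders:

```ruby
# Hypothetical sketch: run tools in severity order and stop at the first
# tier that fails, so quieter tools hold their tongue until the loud
# problems are fixed. The `runner` lambda exists so the ordering logic
# can be exercised without the real commands installed.

def first_failing_tier(tiers, runner: ->(cmd) { system(cmd) })
  tiers.each do |tier, commands|
    return tier unless commands.all? { |cmd| runner.call(cmd) }
  end
  nil
end

TIERS = {
  "security" => ["brakeman --quiet"], # placeholder commands
  "tests"    => ["rspec"],
  "style"    => ["rubocop"],
}

if (tier = first_failing_tier(TIERS))
  puts "#{tier} checks failed; holding the lower-priority tools for now."
end
```

The point isn’t this particular code; it’s that severity ordering is a policy decision the tools can’t make individually, so something has to sit above them.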

Too much noise can hide the signal. Most of these tools help make the invisible visible, and that’s huge. But for every ounce of free space they create by saving time or brainpower, they remove a degree of blissful ignorance. Or they dump pages of screen vomit even when they’re successful. If the tools point out ten things you don’t care about for every one useful alert, the noise obfuscates problems.

They’re not great about context. If you’re about to commit some code, it’s handy to get a nudge about inconsistent formatting, but if you’re deploying a hot-fix for a critical bug, your linter shouldn’t block the release. For a hot-fix, as long as the tests pass and you didn’t introduce a gaping security hole, shipping with a few style complaints should occasionally be acceptable.

Adding them to a legacy project is a no-go. If you’ve ever inherited a legacy project and decided to start cleaning up, you know how depressing that first run of static analysis and linters can be. You either see how bad it is and commit a significant portion of time to cleaning it up, or you look the other way and give up on them because it feels hopeless.

They create too much friction for team members who only need the tools related to the things they have the skillset to address. If your front-end team members are pestered by back-end linters and static analysis, that’s not helpful. And telling a back-end developer about accessibility issues likely isn’t useful either.

Applying a batch of tools consistently is tedious. For the most part, you can configure the tools to apply the same set of default settings every time without having to type additional flags on the command line. But frequently, they have options that are only available via the command line. So you’d have to remember the syntax and add some extra keystrokes every time you wanted to run it.

Filtering files to save time and focus only on the area you’re working on right now is painful at best. Doing it for one tool and one file is tedious enough. Doing it for three tools and five files? Probably not going to happen. If it requires more keystrokes to review your code than it took to change it, it’s easier to just run all of your tools and wait. That kind of defeats the purpose.

They don’t naturally communicate with other tools like version control or release management. If you’re working locally, you can move faster with smaller commits by zeroing in on only the files you’re changing right now. Then you can run the full, slower, more complete suite of tools before you start the pull request.
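As a sketch of what zeroing in might look like, this hypothetical Ruby snippet asks git for the files that differ from a base branch and feeds only those to a linter. The branch name, file pattern, and `rubocop` command are all placeholders, not a real integration:

```ruby
# Hypothetical sketch: lint only what changed relative to a base branch.
# Branch name, file pattern, and tool are placeholders; wire in your own.

def changed_files(base_branch = "main", pattern = /\.rb\z/)
  # `git diff --name-only` lists paths that differ from the base branch;
  # outside a repo (or with an unknown branch) this yields an empty list.
  `git diff --name-only #{base_branch} 2>/dev/null`.split("\n").grep(pattern)
end

files = changed_files
if files.empty?
  puts "No changed Ruby files; nothing to lint."
else
  system("rubocop", *files) # swap in whichever tools you use
end
```

Even this tiny amount of glue is more than most of us will type by hand three tools and five files at a time, which is why it needs to be automated away.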

They don’t always work consistently or intelligently in both continuous integration and local development environments. Pushing and waiting for a build to break due to whitespace issues isn’t super-practical. Similarly, testing to see if you fixed a broken build shouldn’t require waiting for CI. It needs to be seamless to run the exact same tools and configurations in CI or locally so you can get back on track faster without waiting around. And CI should be able to stay focused on critical problems. It needs to be easy to review and fix whitespace and formatting locally so CI can run faster and avoid tripping over non-critical details.

So if the tools aren’t a net-positive, could they be? And if so, what would it take?

I’ve spent the last two years iterating and exploring ideas to address these issues. I’ve tried multiple low-effort approaches like aliases, simple ‘dumb’ scripts, and rake files, but they’ve all carried their own baggage.

Now I’m working on a more ambitious approach to tip the scales. It’s not ready yet, but I’ve never been more excited about anything I’ve ever worked on in my entire career.

It’s not going to solve all of these problems on day one. And it will likely never solve all of the issues completely. But it will provide an extensible way for all of these tools to work together with less configuration, less friction, and a much more judicious way of presenting problems so that humans can handle them. So far, I’ve found that it solves enough problems that I don’t think twice about adding an additional tool if it helps me ship better code.

Then, with the same amount of time and effort on your part, you’ll be able to catch issues earlier (when they’re cheaper to fix), improve your code quality, learn new techniques, streamline code reviews, and improve the results you achieve with those four hours you have every day.