How Codacy streamlines code reviews
Peer code reviews are essential to improving software quality, whether you're building a web-based application, an API, a microservice, a monolithic application, or a small automation script. As with writing in natural language, no one can find all the problems and mistakes on their own.
This happens for several reasons: being too close to the code, spending too much time on it, not stopping to consider alternative implementation approaches, or simply lacking experience. These aren't criticisms. We're all prone to these limitations from time to time.
So whether we're junior developers on our first professional project or senior developers with decades of experience, collaborative code reviews (or pair programming) help us do better.
Specifically, they can help us uncover issues such as:
- Logic errors
- Unnecessary code complexity
- Code style violations
- Insufficient code documentation
What’s more, code reviews:
- Provide the opportunity for senior developers to mentor junior developers.
- Ensure knowledge is shared among the team and not stored with a limited number of people.
Regular code reviews quickly expose weaknesses and shortcomings, and help the team grow in both skill and professionalism.
Online tools simplify code reviews
Fortunately, the major code hosting platforms, such as GitHub, GitLab, and Atlassian's Bitbucket, provide this functionality as a core part of their offering. And thanks to their well-designed UIs, they make code reviews straightforward to conduct.
In the screenshot above, you can see an example discussion around a documentation change in the ownCloud Android app manual that I was involved in recently on GitHub. One aspect of the change has been marked as requiring further work. Thanks to the review comments, it's easy to see where changes have been requested, so actioning the feedback is quite straightforward.
What’s more, whether we work together in an office, across a series of distributed offices, or from home offices and virtual locations around the globe, these tools are accessible 24/7. And any member of a team or the broader organization can review code changes and join in the conversation.
Manual code reviews have weaknesses
Where these tools fall down, however (or, more fairly, where they're limited), is that the reviews require one or more people to conduct them. Each review requires someone to read through the changes to ensure there are no regressions, code style violations, increases in logic complexity, and so on.
Depending on the size and skill of your team, manual code reviews can be quite time-consuming and quickly become a bottleneck to rapid, regular deployments. And the larger or more complex a codebase becomes, the greater the likelihood of delays before proper reviews can be conducted.
Then there are factors that further complicate manual reviews. If developers don't apply the same coding standards in their IDEs or text editors (different line endings, tabs versus spaces, and so on), it can be hard to notice every issue. For these and other reasons, issues can be masked or made difficult to find.
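One common way to reduce this class of inconsistency is to check an EditorConfig file into the repository root, which most major editors and IDEs respect. The values below are illustrative, not a recommendation; adjust them to your team's standard:

```
# .editorconfig -- shared editor settings (example values)
root = true

[*]
charset = utf-8
end_of_line = lf
indent_style = space
indent_size = 4
trim_trailing_whitespace = true
insert_final_newline = true
```

With a file like this in place, line-ending and whitespace noise largely disappears from diffs, so reviewers can focus on the substance of a change.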
Automated code review tools save time
With these tools, all developers in a team, even a team of one or two, can quickly check any code change against a consistent standard or a custom set of rules.
Because they're automatable, they can be integrated with your version control system, such as GitHub or Bitbucket (typically via git hooks), at a variety of points in the development lifecycle. These points can include after individual commits, before branches are merged, and before code deployments.
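As a minimal sketch of what such a hook-driven check might look like, the hypothetical function below fails when any given file contains trailing whitespace, standing in for a real linter or style checker:

```shell
#!/bin/sh
# Hypothetical style check suitable for a git pre-commit hook:
# report any file that contains trailing whitespace.
check_style() {
    status=0
    for f in "$@"; do
        # [[:blank:]] matches a space or tab at the end of a line.
        if grep -q '[[:blank:]]$' "$f"; then
            echo "trailing whitespace: $f" >&2
            status=1
        fi
    done
    return $status
}

# In .git/hooks/pre-commit you might run it over the staged files:
#   check_style $(git diff --cached --name-only --diff-filter=ACM) || exit 1
```

In a real project you would swap the `grep` for your linter of choice; the structure (gather changed files, run the check, block the commit on failure) stays the same.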
In addition to git hooks, a significant number of these tools can also be integrated with a number of the leading IDEs and text editors. These include the JetBrains/IntelliJ suites, which you can see in the screenshot above, Eclipse, Netbeans, Visual Studio, Atom, Vim, Emacs, Pico, and Sublime Text.
Code review tools have drawbacks
However, there is a catch: these tools have to be set up and configured by each individual developer. As a result, there's a lag before each developer is up and running. Since developer time is one of the most significant development costs, this can increase the cost of a software project.
Then there's the possibility that one or more setups won't match the rest. As a result, not all code reviews will be run to the same standard, potentially leading to a false sense of confidence in the quality of the released code and the efficacy of the process.
Use external code review services for consistency and efficiency
So that’s where external tools and services, such as Codacy, Team Foundation Server by Microsoft, or Upsource by JetBrains, make sense. Here are four key reasons why they’re worth considering.
First, because they're external services, the core service is always available and is supported by an independent team. No specific technical knowledge or direct investment is required to set it up or maintain it.
Second, they don't need to be set up and configured on a per-developer basis. As a result, developers can be up and running as quickly as possible, without the added overhead of local code analysis tools.
It's worth noting, however, that running code analysis tools locally can reduce the workload on the remote service. This both keeps developer costs down and provides a reliable level of quality.
Third, they integrate seamlessly with the hosted version control platforms we mentioned earlier: GitHub, Bitbucket, and GitLab. As a result, any change, no matter how small or large, can be automatically reviewed against the same benchmark.
You can see in the image above that the automated review has run against the changes. Now let's get more specific: with a service such as Codacy, you can apply a coding standard or a set of code analysis patterns to one or all of your codebases. And if one of the available code analysis patterns doesn't quite suit your objectives, you can create one from an existing standard that does just what you need.
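In practice, this kind of per-repository customization typically lives in a configuration file committed alongside the code. The sketch below shows the general shape such a file might take; the key names and tool choices are illustrative, so check Codacy's documentation for the exact schema your plan supports:

```
# Example repository-level analysis configuration (illustrative keys)
engines:
  pylint:
    enabled: true
  duplication:
    enabled: true
exclude_paths:
  - "tests/**"
  - "docs/**"
```

Keeping the configuration in the repository means every contributor, and every automated review, works from the same ruleset without any per-developer setup.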
In addition to reviewing a code commit, or pull request, external tools can also provide the ability to show trends over time.
These can help answer such questions as:
- How is the codebase evolving?
- What are the hotspots in my code?
- Are there areas that consistently cause problems?
- Are some developers more prone to cause issues than others?
Fourth, each highlighted issue comes with documentation about why it's an issue and was flagged for review. It's one thing to know that something is a problem, but it's essential to understand why. If no one knows why a section of code was marked for review, they can't learn from the experience and avoid repeating it in the future.
No developer, no matter how experienced or talented, is ever going to spot every weakness, bug, or shortcoming in their codebase. That’s why code reviews are essential. They help us collaboratively find and fix the bugs before they make it to production.
However, manual code reviews can be time-consuming and aren’t always applied consistently. That’s why automated code review tools are essential. They ensure a consistent standard of quality and find the majority of bugs and other issues, leaving the more involved issues up to human intervention.
But while these tools can be, and often are, excellent, they too have their limitations, most notably that they don't look at a project holistically or show trends over time. For these and other reasons, using an external, dedicated service will often provide the best value and the greatest return on investment for collaborative code reviews.
If you're keen to improve the quality of your code reviews and automate much of the manual work, why not give Codacy a try? You can sign up for the free plan using your Bitbucket, GitHub, or Google credentials and quickly integrate it with your hosted code platform.