What are the potential tradeoffs of lowering Cyclomatic Complexity?

High Cyclomatic Complexity is considered harmful, so it seems beneficial to lower it by extracting sub-functions. However, this can lead to long call chains and deeply nested function calls that other programmers find hard to read.

What would be a good solution, and where should we draw the line on call depth? Are there any metrics or tools to measure the call depth of a function?

Any help or ideas are appreciated.


Extracting code to sub-functions does not lower the cyclomatic complexity of a program. It may of course lower the cyclomatic complexity of an individual function if some complexity is extracted to a separate function, but this just moves the complexity around, which doesn't improve your program in itself.
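As a hypothetical sketch of what "moving complexity around" looks like (Java here, with invented method names): extracting a branch lowers each function's individual complexity, but both decision points still exist somewhere in the program.

```java
class Classify {
    // One method with two decision points: cyclomatic complexity 3.
    static String classify(int x) {
        if (x < 0) return "negative";
        if (x == 0) return "zero";
        return "positive";
    }

    // The same logic split in two. Each method now has complexity 2,
    // but the program still contains the same two branches -- the
    // complexity has moved, not disappeared.
    static String classifySplit(int x) {
        if (x < 0) return "negative";
        return classifyNonNegative(x);
    }

    static String classifyNonNegative(int x) {
        if (x == 0) return "zero";
        return "positive";
    }
}
```

Any per-function complexity metric will report an improvement after the split, even though the behavior and the total branching of the program are unchanged.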

A high cyclomatic complexity of a function can be a warning sign that the function is written in an overly convoluted way. If this is the case, you should consider whether the logic could be simplified.

Of course, code may be complex because the problem it solves is complex. What you should look for is accidental complexity: complexity that is not needed. A typical example:

bool isZero = false;
if (x == 0) {
   isZero = true;
} else if (x > 0 || x < 0) {
   isZero = false;
} else {
   Logger.LogError("Invalid value of x");
   throw new FileNotFoundException();
}

The way to fix this accidental complexity is not to extract it into a separate function, but rather to rewrite it as:

bool isZero = x==0;

Is high cyclomatic complexity harmful? You state this as if it were some self-evident fact, but it's important to keep in mind that in many cases, the real-world problem that you're modeling in your code is a complex problem.

The only time I've heard the claim that cyclomatic complexity is something to be minimized (or something people talk about at all, in fact,) is in the context of unit testing. If that's the right context for your question, James Coplien puts it better than I could:

I had a client in northern Europe where the developers were required to have 40% code coverage for Level 1 Software Maturity, 60% for Level 2 and 80% for Level 3, while some were aspiring to 100% code coverage. No problem! You’d think that a reasonably complex procedure with branches and loops would have provided a challenge, but it’s just a matter of divide et impera. Large functions for which 80% coverage was impossible were broken down into many small functions for which 80% coverage was trivial. This raised the overall corporate measure of maturity of its teams in one year, because you will certainly get what you reward. Of course, this also meant that functions no longer encapsulated algorithms. It was no longer possible to reason about the execution context of a line of code in terms of the lines that precede and follow it in execution, since those lines of code are no longer adjacent to the one you are concerned about. That sequence transition now took place across a polymorphic function call — a hyper-galactic GOTO. But if all you’re concerned about is branch coverage, it doesn’t matter.

  • If you find your testers splitting up functions to support the testing process, you’re destroying your system architecture and code comprehension along with it. Test at a coarser level of granularity.

-- Why Most Unit Testing Is Waste

In situations like this it's good to remember a quote famously attributed to Einstein: "Make everything as simple as possible, but not simpler." Prioritizing secondary metrics like cyclomatic complexity over more important things like code readability violates this guideline.

If you have a large method, simply breaking down the method into smaller units may not add any additional benefit other than tweaking metrics.

Complexity can be fixed by:

  • Refactoring
  • Redesigning

Sometimes refactoring is appropriate. If we have a complex method that does five things, it makes sense to refactor it into one main method that calls five sub-methods. This should make the code easier to understand and test. We may be able to eliminate some lines of code, but in the end we will probably have about the same number of lines we started with. These types of efforts look good at first glance and may produce better metrics for your code base, but the overall value added may be smaller than expected. You have to determine whether the breakdown is appropriate. Obviously, breaking a large method down into tens or hundreds of sub or nested calls may introduce a cobra effect, where your good intentions make things worse. You want to avoid that.
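A minimal sketch of that "one main method calling five sub-methods" shape, with invented step names (Java):

```java
import java.util.Locale;

class OrderPipeline {
    // The coordinating method reads like a table of contents: each of
    // the five things the original method did is now a named step.
    static String process(String raw) {
        String trimmed = trim(raw);
        String lowered = lower(trimmed);
        String deduped = collapseSpaces(lowered);
        String validated = rejectEmpty(deduped);
        return tag(validated);
    }

    static String trim(String s) { return s.trim(); }
    static String lower(String s) { return s.toLowerCase(Locale.ROOT); }
    static String collapseSpaces(String s) { return s.replaceAll("\\s+", " "); }
    static String rejectEmpty(String s) {
        if (s.isEmpty()) throw new IllegalArgumentException("empty order");
        return s;
    }
    static String tag(String s) { return "order:" + s; }
}
```

Each helper is trivially testable in isolation, but note that the total amount of logic is roughly what it was before; only its arrangement changed.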

Sometimes we have a method that is too complex for plain old refactoring or breaking down into smaller chunks. In that case, we may need a redesign. A redesign may introduce new design patterns and/or new classes that simplify the code, so that instead of 1000 lines of code, the end result of the redesign is 200 lines. These are much more difficult endeavors, but they provide more value both from a metrics standpoint and from an overall code standpoint: less code, less complexity.
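One possible shape of such a redesign, assuming a table-driven (strategy-style) pattern and invented discount rules: a long if/else-if chain with high cyclomatic complexity is replaced by data plus a single lookup.

```java
import java.util.Map;
import java.util.function.DoubleUnaryOperator;

class Pricing {
    // Before the redesign this would be a branch per category; after,
    // the rules are data and the method's complexity is one lookup
    // plus one guard. The categories and rates here are hypothetical.
    static final Map<String, DoubleUnaryOperator> DISCOUNTS = Map.of(
        "student", price -> price * 0.80,
        "senior",  price -> price * 0.85,
        "none",    price -> price
    );

    static double discounted(String category, double price) {
        DoubleUnaryOperator rule = DISCOUNTS.get(category);
        if (rule == null) {
            throw new IllegalArgumentException("unknown category: " + category);
        }
        return rule.applyAsDouble(price);
    }
}
```

Adding a new category now means adding a map entry, not another branch, which is why this kind of redesign can shrink both the line count and the complexity metrics at the same time.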

The tradeoffs are always time and effort. Sometimes the budget only allows for a small refactoring, as in the first example; other times it allows for a complete overhaul, as in the second.

Developers should always monitor the health of the code base by policing themselves, doing peer reviews, and holding official code reviews as code is added to the solution. This will keep complexity at a manageable level.

Category: tools Time: 2016-07-28 Views: 1
