
Suggestion for new lint: Cognitive Complexity #3793

Open · felix91gr opened this issue Feb 21, 2019 · 42 comments
Labels
  • C-an-interesting-project · Category: Interesting projects, that usually are more involved design/code wise.
  • C-enhancement · Category: Enhancement of lints, like adding more cases or adding help messages
  • E-hard · Call for participation: This is a hard problem and requires more experience or effort to work on
  • L-complexity · Lint: Belongs in the complexity lint group
  • S-needs-discussion · Status: Needs further discussion before merging or work can be started


@felix91gr (Contributor)

Hi all, I'd like to propose (and implement, if someone can mentor me a bit!) a lint for Cognitive Complexity. It's a way to measure the cognitive load that a particular function puts on the reader.

I think that having an upper bound on the allowed cognitive complexity of functions would help ensure that complex functions are split into simpler, more maintainable ones. It would also help to complement the role that the Cyclomatic Complexity lint currently has.

@oli-obk (Contributor) commented Feb 21, 2019

Our cyclomatic complexity lint already addresses a few of the issues mentioned in the cognitive complexity paper. E.g. a match has a CC of 2 regardless of the number of arms.

I wonder if it would make sense to simply rename our cyclomatic complexity lint to cognitive complexity and start integrating a few more of the suggestions from the linked paper.

@oli-obk added the C-enhancement and L-complexity labels on Feb 21, 2019
@Manishearth (Member)

Yeah I think that would be better.

@felix91gr (Contributor, Author)

Aside: shall we call Cyclomatic Complexity "CyC" and Cognitive Complexity "CoC"? Both of them can be abbreviated CC, but they are not the same thing x3

E.g. a match has a CC of 2 regardless of the number of arms.

Here I assume you meant CyC. If so... does the CyC lint in Clippy give all matches a score of 2? Shouldn't it depend on the number of branches of the match, since CyC measures...

"(...) the number of linearly independent paths through a program's source code."

(Quote from Wikipedia's first paragraph on CyC.)

I wonder if it would make sense to simply rename our cyclomatic complexity lint to cognitive complexity and start integrating a few more of the suggestions from the linked paper.

In general, for introduction purposes, I think it makes sense! You'd have the same lints as always, but the current CyC lint becomes more powerful. Users just get an upgrade of their already great tool.

But in the long term I wonder if it might be better to keep them separate. CyC is better as a metric of how hard a piece of code is to test, while CoC is better as a metric of how hard a piece of code is to understand. (Code Climate's article comparing the two illustrates this point very well.)

Also, I think we'd prefer (please correct me if I'm wrong!) to keep the lints smaller, because that would allow a user to use Clippy to tackle smaller, more approachable problems.

@Manishearth (Member) commented Feb 21, 2019 via email

@felix91gr (Contributor, Author)

it measures something in between CyC and CoC, which is what folks found to be more useful.

Ohhhhhhhhhhhhh, this makes much more sense to me now.

Moving it all the way towards CoC is fine by me. Coming from a linter, "hard to test" is far less useful and actionable advice.

If there's consensus on this, I'm willing to help :)

@oli-obk (Contributor) commented Feb 21, 2019

Yea, pure CyC has been found to be pretty useless in Rust, as you often use matches not to run different code but more as a lookup table for values.
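
For illustration (my own example, not from the thread), a match used purely as a value lookup:

```rust
// High cyclomatic complexity (one independent path per arm), but
// cognitively this reads like a single table lookup.
fn digit_name(d: u8) -> &'static str {
    match d {
        0 => "zero",
        1 => "one",
        2 => "two",
        3 => "three",
        _ => "many",
    }
}
```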

Yea, I think we can just go ahead with this. As a first step I would nuke the cfg function call and the math resulting from it, and just keep the numbers from the visitor. Then, change the visitor to match the CoC rules. If I understood the paper correctly, it's all syntactical, so we should probably move to a pre-expansion pass. IMO macro calls reduce complexity, even if macro definitions increase it.

@oli-obk (Contributor) commented Feb 21, 2019

Oh, btw, feel free to bug me about this here, on Discord or on IRC. I'm very interested in any musings you have on this.

@felix91gr (Contributor, Author)

Oliver (@oli-obk) and I are now working on this. If anyone wants to chip in, reach out ^^ We plan on building an MVP first and leaving room for later expansion into more fine-grained or complex scenarios as the use-cases start to appear.

@felix91gr (Contributor, Author) commented Feb 21, 2019

Draft of CoC implementation

Input

A Rust function.
(Option for the future: a Rust module, aggregating the CoC of its functions)

Output

A Cognitive Complexity score that loosely fits the Specification laid out in the paper.

How it works

From now on (I can change this later if people prefer that I copy some of the paper here!), this text assumes the reader is familiar with the paper.

Nesting Structures

The following structures increment the nesting level:

  • if, else, else if
  • if let
  • match
  • for, while, loop
  • Nested functions
  • Closures

(Nesting-Dependent) Increments

For each of the following constructs, there is an increment in the CoC score of (1 + nesting level).

  • if
  • match
  • for, while, loop

(Nesting-Independent) Increments

For each of the following constructs, there is a constant increment of 1 in the CoC score.

  • break, continue
  • Changes in sequences of binary logical operators (these 5: &&, ||, &, |, ^)
  • Recursive function calls
  • Macro calls

Exempt from Increment

The following constructs are explicitly exempt from increment, since they replace and simplify other constructs.

  • ? operator for error propagation
  • if let statement
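
A hypothetical worked example of the rules above (the per-line scores are my reading of the draft, not output from an implementation):

```rust
fn summarize(values: &[i32], verbose: bool) -> i32 {
    let mut total = 0;
    for &v in values {                // +1: `for` at nesting level 0; nesting becomes 1
        if v < 0 && verbose {         // +2: `if` at nesting level 1; +1: `&&` sequence
            continue;                 // +1: `continue` (nesting-independent)
        }
        total += v;
    }
    println!("total = {}", total);    // +1: macro call (nesting-independent)
    total                             // total CoC score: 6
}
```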

Pending Points

Here are things that we're explicitly postponing until we have the tools or the know-how necessary to implement them.

  • What to do about conditional compilation? Currently Clippy can't reach that information, but things like C's #if should at least add +1 to the nesting level.

I need help with Rust-specific features I might be missing. I'm still relatively new to the language, so any help is appreciated! 🙏

Changelog

  • 16/04: clarified some points regarding the Binary Logical Operators
  • 02/04: added Macro Calls to the list of (Nesting-Independent) Increments constructs. See Ignore cognitive complexity in macro expanded code #3900
  • 21/02: added Pending Points section, and moved Conditional Compilation to it.
  • 21/02: removed Ternary Operator, for it does not exist in Rust.
  • 21/02: added Exempt from Increment subsection, with 2 cases.
  • 21/02: first draft

@oli-obk (Contributor) commented Feb 21, 2019

* Ternary Operator (`a = q ? b : c;`)

doesn't exist in Rust ;)

* Nested functions

Are these a problem? How is it less complex to have the function outside?

* Is there conditional compilation in Rust? That should increment the nesting level as well.

we have cfg attributes, but clippy doesn't see them right now. There are a bunch of things we want to do with them, so ignore this until we figure out how to get them.

* Sequences of binary logical operators

both short circuiting and non-short-circuiting are relevant here imo

Also, if I read the paper correctly, only changes of the binary operator in a sequence cause an increment. So x && y && z is fine, but x && y || z will increment

I need help with Rust-specific features I might be missing. I'm still relatively new to the language, so any help is appreciated! 🙏

Since everything is an expression, I think having any nesting expression inside e.g. a binary operation (x + if cond { y } else { z }) or a function call (foo(x, y, match bar { X => 42, _ => 0 })) should also cause increments.

break from a loop seems fairly nonproblematic. Not sure if that should increase the CoC even further than the loop by itself did.

@felix91gr (Contributor, Author) commented Feb 21, 2019

doesn't exist in Rust ;)

Haha 😆 my bad, I'm definitely still learning.

  • Nested functions

Are these a problem? How is it less complex to have the function outside?

In the paper, nested functions are considered part of the body of the function itself. They add no increment, but they are like a new if-block in that they add nesting. I think that's because they are being treated as closures, capturing the surrounding context.

Nested functions in Rust can't do that, right? For all intents and purposes, they might as well be an external function that can only be seen from the scope of the containing function. Right? If so, I think we could for the most part treat nested functions entirely as external functions, each having its own score and not affecting the containing function's score at all.
(Edit: someone already told me the answer to the capturing questions)
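
For reference, a minimal illustration of that capture difference (my own sketch; the commented-out line is what the compiler rejects):

```rust
fn outer() -> i32 {
    let captured = 10;

    // A nested fn cannot capture `captured`; uncommenting this fails with
    // error[E0434]: can't capture dynamic environment in a fn item
    // fn inner() -> i32 { captured }

    let closure = || captured; // a closure, by contrast, captures its environment
    closure()
}
```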

we have cfg attributes, but clippy doesn't see them right now. There are a bunch of things we want to do with them, so ignore this until we figure out how to get them.

Oki doke!

both short circuiting and non-short-circuiting are relevant here imo

Same.

Also, if I read the paper correctly, only changes of the binary operator in a sequence cause an increment. So x && y && z is fine, but x && y || z will increment

Almost. x gives +0, x && y gives +1, and so does x && y && z. x && y || z gives +2. It's +1 as soon as a binary operator is introduced, and +1 every time the operator changes.
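
In code (my own annotation of the rule above, not lint output):

```rust
fn tallies(x: bool, y: bool, z: bool) {
    let _ = x;           // +0: no binary logical operator yet
    let _ = x && y;      // +1: an `&&` sequence is introduced
    let _ = x && y && z; // +1: still one unbroken `&&` sequence
    let _ = x && y || z; // +2: +1 for the sequence, +1 when it changes to `||`
}
```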

Since everything is an expression, I think having any nesting expression inside e.g. a binary operation or a function call should also cause increments.

Good point! We do have to consider this from the perspective of expressions. I think your examples are right. Those expressions introduce abstraction, and that should be accounted for :)

break from a loop seems fairly nonproblematic. Not sure if that should increase the CoC even further than the loop by itself did.

That's a fair point. Hmm. Since there can be many breaks inside a loop, maybe the loop should add just 0, putting its cognitive cost mostly in the breaks that stop it 🤔

bors added a commit that referenced this issue Mar 6, 2019
Cognitive Complexity (step 1 out of 3+): name changes

Following up on #3793

**Overall checklist:**

1. **Name changes**
2. MVP of functionality
3. Tests

After this PR, we will start working on the implementation itself.
@Centril (Contributor) commented Mar 9, 2019

break from a loop seems fairly nonproblematic. Not sure if that should increase the CoC even further than the loop by itself did.

I think loop { ... break ... } is often considerably less understandable than iterators, since the latter declaratively describe what happens while the former describes how it happens. loop + break invites spaghetti in most cases, in my view, and is rarely the right choice, so encouraging it further seems sub-optimal.

As for nested functions, I think they add to rightward drift, and that is often a key factor in making things harder to understand. Rightward drift is certainly something I try to fight as part of language design.

Another thing to consider is unsafe { ... }; I would try to assign it an extra penalty and additional penalties for both long and deep unsafe { ... } blocks as it risks making more behavior use relaxed rules as well as incurring more proof obligations to check.

@felix91gr (Contributor, Author)

Another thing to consider is unsafe { ... }

unsafe blocks certainly require much more care when reading the code. Thank you for reminding us! I will include it in the list of things left to discuss.

As for nested functions, I think they add to rightward drift, and that is often a key factor in making things harder to understand. Rightward drift is certainly something I try to fight as part of language design.

I'm implementing this atm. Do you think that the score of Nested Functions should be added to that of its mother function? Or should they only be scored separately?

I find NFs very useful. However, scoring them completely apart from their mother function doesn't seem like a great solution, since the complexity of a mother function plus one of its nested functions is much greater than that of two comparable functions that are 100% separate. This is because having a mother function and an NF means their designs are intertwined, and you need to understand both in order to understand the whole (they are, at the end of the day, one function).

At the same time, the amount of complexity in a function tends to be reduced in most cases after adding an NF, because the NF consolidates repeated code into one unit. Scoring the NF once (and giving its score a starting nesting bonus) should reflect that in most cases, unless the NF itself is very complex.

Thoughts? (all)

@felix91gr (Contributor, Author) commented Apr 5, 2019

More fodder for thoughts. Anyone who's interested in this topic, please feel free to reply :)

  1. How much harder, in terms of percentage, is an unsafe block to understand/work with for you than a normal block?
  2. Could you quantify how much more complex a function is for you if it has 1 lifetime parameter, compared to having none? What about 2, 3, 4, and more?

@Centril (Contributor) commented Apr 5, 2019

I'm implementing this atm. Do you think that the score of Nested Functions should be added to that of its mother function? Or should they only be scored separately?

I would treat NFs as if they secretly were closures in general. While breaking things into separate functions is great, NFs are not individually testable and so a free helper function can be preferable (depends... maybe the helper makes no sense as a separate logical unit...).

How much harder, in terms of percentage, is an unsafe block to understand/work with for you than a normal block?

I think there's a multiplicative cost here. So instead of giving maybe k + stuff_inside I would consider giving it k1 + (k2 * stuff_inside).
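
A minimal sketch of that formula, with assumed constants (nothing here is Clippy's actual implementation):

```rust
/// Hypothetical scoring helper for k1 + (k2 * stuff_inside).
/// K1 and K2 are assumed values chosen for illustration only.
fn unsafe_block_score(inner_score: u64) -> u64 {
    const K1: u64 = 2; // flat penalty for the `unsafe` block itself (assumed)
    const K2: u64 = 2; // multiplier on the block's contents (assumed)
    K1 + K2 * inner_score
}
```

With K2 > 1, a complex unsafe block is penalized proportionally more than a trivial one, matching the intuition above.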

@felix91gr (Contributor, Author)

I would treat NFs as if they secretly were closures in general.

The current draft implementation does exactly that ^^

I think there's a multiplicative cost here.

Nice. I was thinking the same thing :D I'm glad I'm not the only one.

But it makes a lot of sense: every expression in an unsafe block is just like a normal one, only a bit harder. You could factor that "bit" out and multiply it by the score that the entire unsafe block would have if it were a normal block. ^^

@Centril (Contributor) commented Apr 5, 2019

Could you quantify how much more complex a function is for you if it has 1 lifetime parameter, compared to having none? What about 2, 3, 4, and more?

This probably applies to type parameters as well, but less so. Still, I would try to avoid penalizing generic code over less generic code, since genericity is often nice for code reuse. Maybe start adding increments at 3? I'd give lifetimes higher increments than type parameters.

@felix91gr (Contributor, Author)

Maybe start adding increments at 3? I'd give lifetimes higher increments than type parameters.

Seems fair :)

@oli-obk (Contributor) commented Apr 8, 2019

It's not just the statements inside an unsafe block that increase the difficulty of understanding the code, but also statements outside the unsafe block. In fact, an unsafe block can essentially require you to understand the entire module.

@flip1995 (Member) commented Apr 8, 2019

I would like this lint to be highly configurable. Since there are many more things to consider (unsafe blocks, nested functions, macros, ...) than in the cyclomatic complexity lint, it is probably not very helpful to configure this lint with only one threshold. IMO the best case would be if every part of this lint got its own threshold / were configurable on its own.

For nested functions, for example, you should be able to write:

cognitive-complexity = [
    (nested-functions, 2),
    (lifetime-parameter, 2),
    .. // more configs
]

and then the lint would increase the cognitive complexity only for nested functions with a nesting level greater than or equal to 2.

I don't really know how to implement this with our current configuration system. We probably need our own configuration (sub-)parser for the cognitive complexity lint config.

After a configuration like this is possible, we can set the threshold of every option pretty low and wait for complaints to figure out sensible default values for each one.

@felix91gr (Contributor, Author)

It's not just the statements inside an unsafe block that increase the difficulty of understanding the code, but also statements outside the unsafe block. In fact, an unsafe block can essentially require you to understand the entire module.

Hmm. That is true. How would you suggest scoring them? Just a high constant (e.g. 20) if there is at least one unsafe block in the function, maybe?

@flip1995 you raise a fair point. We were thinking about having an MVP running and seeing where we could go from there. Maybe a finer-grained configuration for CoC can be added then after we understand it better in the field :D

@oli-obk (Contributor) commented Apr 11, 2019

I think for now you can just pessimize unsafe blocks by increasing the counter by 1 and we can revisit once the unsafe coding guidelines working group has some rules laid down.

@felix91gr (Contributor, Author)

I'm working on making the scores more fine-grained. Lemme show you what I mean.

The paper covers a number of syntactic language constructs. However, now that I've implemented all of them, around half of the ExprKinds in rustc's Abstract Syntax Tree are still left over. To decide whether or not to propose scoring them in some way, I've asked myself the following:

  1. Are they simple enough at this point to be covered in the MVP?
  2. Are they similar in cognitive complexity to any already-scored construct?
  3. Are they nesting-dependent in how complex they are to reason about?
  4. Do they have more expressions inside?

These are the answers that I've got:

  1. Yes, except maybe for InlineAsm (too complex; it deserves its own research), Async (unstable) and Box (unstable).

I would leave these three outside the scope of this work for now.

  2. I see two categories:
    • As complex: very similar to already documented cases: Ret and Yield are both similar to Break, and Tup and Array feel as complex as Call and MethodCall. In total, this is the list:
      • Ret
      • Yield
      • Cast
      • Struct
      • Tup
      • Array
      • Range
    • Less complex: of lower complexity than the already-documented cases: Paren surely adds some weight, but far less than a function call. Same for Index, Field, Unary, etc. This is the list:
      • Paren
      • AssignOp
      • Assign
      • Index
      • Field
      • AddrOf
      • Unary

I would score the as complex items similarly to the ones that appear in the draft, remembering to classify them as "Nesting-Dependent" or "Nesting-Independent". The less complex ones I'd give a much lower score (a fifth or so of the baseline), since they increase the complexity in smaller steps.

(Points 3 and 4 continue in the next comment.)
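
A minimal sketch of the categorization above (the enum is a stand-in for the relevant rustc ExprKind variants, and the one-fifth weight is the "a fifth or so" guess from the previous paragraph, not a settled value):

```rust
// Stand-in grouping for the AST expression kinds discussed above.
enum Category {
    AsComplex,   // Ret, Yield, Cast, Struct, Tup, Array, Range
    LessComplex, // Paren, AssignOp, Assign, Index, Field, AddrOf, Unary
}

fn base_increment(category: Category) -> f64 {
    const BASELINE: f64 = 1.0;
    match category {
        Category::AsComplex => BASELINE,
        Category::LessComplex => BASELINE / 5.0, // "a fifth or so" of baseline
    }
}
```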

@felix91gr (Contributor, Author)

any divergence from the paper should be documented

I agree very much. I want to document all of this, even if it takes me 2+ weeks to do it after the implementation is done :)

@felix91gr (Contributor, Author)

Hi all, sorry for the lack of activity. I have news :D

I've finally started my internship! It's at a local foundation here in Chile; they do super cool things and the people are awesome <3
However, that means my time will be much reduced over the following months. I should have at most around 30% of the time I had before. Progress will still happen, but at a slower pace. I hope y'all understand ^^

I really want to see this lint through, and for the moment my intent is to finish it.
Have a great week!
--Félix

@Manishearth (Member)

No problem, thanks so much for all your work so far! Congrats on your internship!

@flip1995 (Member) commented May 6, 2019

First off, congrats on your internship! 🎉

If you want help with specific tasks of this lint, I'd be happy to help, if you "mentor" me. Just delegate the tasks to me that are still open for implementation and you can't find the time for.

@felix91gr (Contributor, Author)

Hi you guys, here's an update. Thank you for not pinging me in the meantime ^^

I had a couple of really rough weeks IRL at the beginning of May, and had to ask for a break at work to sort things out. Now I'm doing better: I've come back to work and I think I've found a steady pace there. I wouldn't call the issue behind me yet, but I think the prospects are good.

If all goes well, I want to try coming back to this. I'll write below what's left before the next step, and @oli-obk will correct me if I forgot something :)

And @flip1995: I'd be glad to 🌈 tell me if there's something you're interested in from the list :3

Remaining tasks

  • Write the complete specification for what's been implemented (the Draft above should be a good base, and the original paper should be used as a reference). This should end up in the docs directory of Clippy, I think.
  • Decide if and how to score the constructs that aren't being scored at all right now (most of them were not considered in the paper, so maybe there could be a default, low value for non-specified kinds of expressions. Either do that, or just skip them for now and expand upon them in later versions).
  • Finish writing up test cases and tidy up the test folder for this lint.
  • Downgrade the lint to "Nursery" level in preparation for the next step.

After this step (not in scope for now, but considered the last step before publishing the MVP):

  1. Test the lint's default configuration and current scoring with popular (and representative of a target level of complexity) projects from the community.
  2. If the lint feels loose, adjust the parameters and goto 1.
  3. If it feels well tuned, go on to 4.
  4. Tidy it up.
  5. Upgrade it to Complexity level again and publish it.

@flip1995 added and then removed the T-async-await label on Jun 22, 2019
@iddm (Contributor) commented Aug 9, 2019

I just came here to say how amazing this is. Please finish the work on it; I can't wait any longer, I'm itching to try it asap.

@iddm (Contributor) commented Aug 9, 2019

The complexity threshold in Clippy can be set to zero, so even a main that prints a hello-world string triggers a warning. Can we check for the value of zero? Or does a zero threshold make any sense / have a use-case?

@flip1995 (Member) commented Aug 9, 2019

Yeah, that will be possible. You can configure the threshold as low or as high as you want. But a threshold of 0 would mean "all code is too complex, so please don't write any code", which wouldn't be that useful, would it? 😄
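
For reference, the threshold lives in clippy.toml; assuming the post-rename key name, the extreme setting under discussion would look like:

```toml
# clippy.toml: a threshold of 0 flags every function, per the comment above
cognitive-complexity-threshold = 0
```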

There's still a WIP PR (#3963) where you can contribute, if you want that feature sooner rather than later 😉

And @flip1995: I'd be glad to 🌈 tell me if there's something you're interested in from the list :3

Since I wrote that, I've become busy with my master's thesis and will be for a few months, so I don't have time to contribute to this project either :/

@felix91gr (Contributor, Author)

Hi everyone! It's been a while.

I've been researching this topic more and more in my free time. I also contacted an expert on static analysis to bug them about it, and they were very kind about it. Anyway, I digress.

This research has led me to conclude that this lint is, in two different ways, a mistake. I want to write about it more comprehensively than this, but if you'll take my word for it, I really think it is.

First: what do I mean by two different ways?

There are two paths that I would like to follow. The first is retracting my own proposal. The second is blocking this entire lint family until we have much better analysis tools and a much better understanding of code analysis than we have today, in both Clippy and the static analysis community.

Let's start with why I would like to retract this proposal.

Second: why do I want to reject my own proposal?

Let's take a look at the "paper". Here's what I found while digging into it:

  1. It's a whitepaper, not a paper. Which means it's much more about marketing and reporting than about science.
    a. It has no empirical base: you can see that it has no real-world testing, no user feedback, no implementation, nothing of the sort.
    b. It has no theoretical base: the algorithm is derived only from touchy-feely things. From its Basic Criteria and Methodology section:
      > 1. Ignore structures that allow multiple statements to be readably shorthanded into one
      What do you mean, so hiding complexity is okay?
      > 2. Increment (add one) for each break in the linear flow of the code
      Why one? Where does that come from? None of this comes from any formal analysis of complexity.
  2. SonarSource makes it sound awesome (https://blog.sonarsource.com/cognitive-complexity-because-testability-understandability), but the damned thing has almost identical implementations (https://github.com/search?q=topic%3Astatic-analysis+org%3ASonarSource+fork%3Atrue) across the different languages in which they offer it.

All in all, it's pretty clear to me that the lint that I was planning to emulate is bullshit.

But there's more to this story. Why would I want to not only retract my proposal, but also mark the very idea of Cognitive Complexity linting as a non-goal?

Third: why go beyond and reject the idea?

You'll have to read more to get me here, I'm sorry. First go here.

The first problem with any Cognitive Complexity lint is a lack of technology: in order to really assess how complex code will be to read, we would need to emulate how a person processes code, or hack around it at least. And how hard could that be?

Well, as mentioned in the article above, the information about a program's design is largely not present in its code. There are isomorphic programs that are completely different in cognitive complexity, just because of their context. What does x > 52 mean? Is it a buffer restriction, or an ASCII character boundary? We need to be able to distinguish between those two programs. Therefore, we at least need to be able to parse and understand basic documentation.
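
An illustrative pair of my own (not from the thread): two textually identical checks whose meaning, and thus cognitive load, lives entirely in their context:

```rust
/// Is the message too long for our (hypothetical) 52-byte buffer?
fn exceeds_buffer(len: usize) -> bool {
    len > 52 // a buffer restriction...
}

/// Is this byte past the ASCII digit '4'? (b'4' == 52)
fn past_digit_four(byte: u8) -> bool {
    byte as usize > 52 // ...or an ASCII boundary: same code, different meaning
}
```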

Can we do that today with Clippy? No, we cannot.

And we should not promise that, either. Clippy is awesome, probably the best static analyzer I've seen. But there are limits to how good an analysis Clippy itself can do. We should be humble and accept the reality of the technological hardness of Cognitive Complexity.


I want to thank @Centril and @flip1995 for being so kind to me <3,
@Manishearth for their patience with my git scrambling,
and @oli-obk for taking me seriously and discussing all sorts of things while guiding me through Clippy :3.


As a last point? Maybe? I want to draw your attention to this paper. It is a real paper about static analysis and, ultimately, about a more genuine attempt at measuring cognitive complexity. I would like to understand it better, because it seems to me that the shortcomings in Clippy could be reduced with those kinds of techniques.

@iddm (Contributor) commented Oct 16, 2019

> (quoting @felix91gr's retraction comment above in full)

Personally, I did find some real rationale behind this paper draft. I know it hasn't been published and does not refer to any research, but I agree with most of the points in it. Besides, cyclomatic complexity has been around for a long time, and the draft paper refers to it as well. Cyclomatic complexity does not imply readability or maintainability, so it can't serve that purpose anyway. So this draft paper and the "cognitive complexity" principle are still what we have now; there isn't anything better now, and I doubt there ever will be, unless someone from a university, or a scientist(?), decides to really work hard on this.

Given all of that said, by you and me, I still think we should let the lint live as it is. Once anything better comes along, we could implement it as well, or when cognitive complexity becomes officially recognized, we could change this lint or add a new one with a _v2 suffix at the end. I am using this lint and it is pretty useful to me. My code is checked and has become much cleaner. I like functional languages and functional style, and I prefer this to be checked automatically. What I've seen so far using this lint has only pleased me.

Also, nothing prevents us from discussing here all the ways to improve it even so, so that we end up implementing something even better than what the paper draft suggested.

@felix91gr (Contributor, Author)

there isn't anything better now, and I doubt there ever will be

I believe there will be. Give humanity 20 years and we will have automatic, human-like reasoning about code.

There is much stronger tech than what Clippy has in research circles already. Things can, and I think will, go beyond the current state of affairs.

cyclomatic complexity

Yeah. Cyclomatic Complexity is a really bad metric. It shouldn't be used for linting, except for setting up a test suite. It really has no meaning with regard to the understandability of code.

Also, nothing prevents us from discussing here all the ways to improve it even so.

I think you're right, but I don't think any of what's implemented here or proposed in the Draft comes close to improving it.

We aren't really measuring code complexity at all. If we want to enhance it and be honest with ourselves about it, we really need to start using something better than a structural pass over the code.

Given all of that said, by you and me, I still think we should let the lint live as it is. Once anything better comes along, we could implement it as well, or when cognitive complexity becomes officially recognized, we could change this lint or add a new one with a _v2 suffix at the end. I am using this lint and it is pretty useful to me.

That is awesome. If you find that your project is better off with this lint, great.

But even still, I think we should remain humble. This lint's name doesn't say what it is. It can't measure Cognitive Complexity. We should at least change it to "Structural Complexity" or something similar. That would help a lot, in my opinion. That and/or changing it to Pedantic.

@iddm (Contributor) commented Oct 16, 2019

I agree with all you've said, and

This lint's name doesn't say what it is. It can't measure Cognitive Complexity.

Also sounds reasonable. I can't think of any other name, though. And I thought this lint had already been marked as "pedantic".

@felix91gr (Contributor, Author)

Thank you for listening @vityafx 😊

Also: I checked it just now on the lint list, and CoC is listed under "Complexity" (as opposed to "Pedantic"): https://rust-lang.github.io/rust-clippy/master/#cognitive_complexity

@iddm (Contributor) commented Oct 21, 2019

Maybe we should create a working group to define cognitive complexity ourselves? Why not? Do we need to be scientists for this? We are experienced people, and we have all the cards to play this game.

@felix91gr (Contributor, Author)

I mean you can. Why not, right?

I will not accompany you though. Because:

We are experienced people, and we have all the cards to play this game.

I don't think we have all the cards. We need a much, much better understanding of the human mind in order to do this.

If you wanna dive into it, by all means do! But I have retired from this endeavor for the moment.

@phansch added the C-an-interesting-project, S-needs-discussion and E-hard labels on Apr 26, 2020