Let's face it: neither I nor any of the developers who will ever work on this project will have the time to implement linting rules for every single tool under the sun. Also, for closed-source tools or business-specific practice enforcement, users may want to build their own custom checks to enforce on their projects through their mllint configuration.
The `api.Linter` interface has only three methods (`Name`, `Rules` and `LintProject`), which all return serialisable data, so it should be pretty easy to allow a user to define a custom linter in the mllint config, with a simple console command standing in for `LintProject`. mllint will then provide the project details (and possibly the mllint config?) to the command via stdin. The command is then expected to print a YAML or JSON linter result to standard output. This linter result consists of what `LintProject` usually returns, i.e. a report and an error if there was one.
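To make this concrete, the printed linter result might look something like the sketch below. Only the report-plus-error structure comes from `LintProject`; the field names here are hypothetical:

```yaml
# Hypothetical shape of the result a custom linter command prints to stdout.
report:
  scores:
    custom/rule-1: 100          # score per rule slug
  details:
    custom/rule-1: "All good."  # markdown details per rule slug
error: null                     # or a message if the linter itself failed
```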
What is probably easier to configure, though, is to have each custom rule define a script to lint it, which must then only return a YAML or JSON score and details (e.g. `{ score: 100, details: "" }`). Any output that is not valid YAML or JSON is interpreted as an error, automatically failing the rule. A linter that checks a single rule differs from an `api.Linter` in mllint, as the latter may (and often should) check multiple rules in one run, but a single-rule script is probably easier to define.
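As an illustration, a per-rule check script could be as small as the sketch below. The README check and the assumption that the command runs from the project root are mine, not something mllint prescribes:

```python
#!/usr/bin/env python3
"""check_project.py -- hypothetical single-rule check script for mllint.

Prints a JSON object with a score and details to stdout, as described above.
"""
import json
import os

def check_project(project_dir: str) -> dict:
    # Example rule: the project root must contain a README.md.
    if os.path.isfile(os.path.join(project_dir, "README.md")):
        return {"score": 100, "details": "README.md found."}
    return {"score": 0, "details": "No README.md at the project root."}

if __name__ == "__main__":
    # Assumption: mllint runs the command from the project's root directory.
    print(json.dumps(check_project(os.getcwd())))
```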
Idea for how it will look in the config:
```yaml
rules:
  custom:
    - name: Project complies to custom rule 1
      slug: custom/rule-1
      details: |
        Go all out with a full `markdown` description of this rule,
        what it checks, why it's there and how to implement it for a project.
        Multiline too.
      weight: 1 # importance within the Custom category of rules.
      run: python check_project.py # can be any command that can be passed to Golang's `os/exec`
```
To implement it all:
- Determine and implement a proper config structure for defining custom linting rules.
- Create a Custom (`custom`) category and implement a linter that runs all these custom linting rules and aggregates the results (see the sketch after this list).
- Test this custom linter runner thing.
- Display the output of these custom linters nicely in the report.
- Write some documentation and perhaps some examples showing how users should set up these custom rules.
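For the aggregation step, the runner's core loop would essentially execute each rule's `run` command and parse its stdout. mllint would do this in Go via `os/exec`; the Python sketch below (JSON only, YAML parsing omitted for brevity) merely illustrates the flow, and all names in it are hypothetical:

```python
#!/usr/bin/env python3
"""Hypothetical sketch of the custom-rule runner's core loop."""
import json
import shlex
import subprocess

def run_custom_rule(command: str, project_dir: str) -> dict:
    """Run one configured rule command and parse its stdout as JSON."""
    result = subprocess.run(shlex.split(command), cwd=project_dir,
                            capture_output=True, text=True)
    try:
        # Expected output shape: {"score": 100, "details": "..."}
        return json.loads(result.stdout)
    except json.JSONDecodeError:
        # Non-YAML/JSON output is an error, automatically failing the rule.
        return {"score": 0, "details": f"invalid output: {result.stdout!r}"}

# Aggregate results per rule slug, as the Custom category linter would.
rules = {"custom/rule-1": "python check_project.py"}
report = {slug: run_custom_rule(cmd, ".") for slug, cmd in rules.items()}
print(json.dumps(report, indent=2))
```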