
Reviewing Subdomain Registry / Private Request Space #1793

Open
dnsguru opened this issue Jul 5, 2023 · 6 comments
Labels
  • ❌FAIL - DNS VALIDATION: Unable to confirm _PSL TXT = This PR # (also see #1439)
  • ❌FAIL - NON-ACCEPTANCE: See https://github.com/publicsuffix/list/wiki/Guidelines#validation-and-non-acceptance-factors
  • ❌FAIL - REBASE NEEDED: Got out of synch with the repo and needs a re-base on it
  • NOT IOS FB: Submitter attests PR is not #1245 related

Comments

@dnsguru (Member) commented Jul 5, 2023

Recently, a number of voxel.sh subdomain registry requests have been clogging the PR system. The noted PRs / issues are tied together: they need to be applied in chronological order, which will require a number of the subsequent PRs to be rebased before they can proceed or be resolved.

Due to volunteer resource constraints, there are delays in processing pull requests for private-section entries and updates. In this case, a subdomain registry spun up an entrepreneurial model to test out, introduced a number of subdomains, and then ghosted, either selling off or not renewing several of the subdomain apex domains within the short span it took to process the PSL requests.

So not only did this waste labor cycles for the PSL and everyone downstream, it also compounded into a mass of requests.

META: This will have an impact on processing considerations. Perhaps a new acceptance criterion for a pull request for foo.bar should be that the requested namespace demonstrates 2-3 years of functional operation as a subdomain space, plus a certain threshold of distinct, non-spam entries showing up under site:foo.bar in Google search results. This is a new conversation, but a necessary one: a reasonable amount of friction is needed to filter out 'throw at wall' mercenary experimentation namespaces that end with customers being abandoned, may introduce security issues, and, most notably, leave debris and cleanup at the expense of PSL volunteer cycles that could be going to other, more beneficial things.

@dnsguru added the NOT IOS FB, ❌FAIL - DNS VALIDATION, ❌FAIL - REBASE NEEDED, and ❌FAIL - NON-ACCEPTANCE labels on Jul 5, 2023
@dnsguru (Member, Author) commented Jul 5, 2023

#1786
#1741
#1755

@BenjaminEHowe (Contributor)

IMO part of the solution to this problem is automatically removing entries from the private section once they cease to qualify for inclusion (e.g. no SOA record). This would then remove the need for PRs such as #1753, as well as the type of PR referenced by @dnsguru above, freeing up volunteer time. Perhaps a script could regularly run and submit PRs with proposed removals and their rationale? I'm happy to make a first attempt at writing said script -- I'd propose using Python and GitHub Actions assuming that the maintainers are happy with those choices of technology.

@dnsguru (Member, Author) commented Jul 31, 2023

Thanks for the dialogue here. This one is a "large meal", as it cross-affects other requests and they layer over each other like acetates.

Personally, I think using a missing A/CNAME (NXDOMAIN) as a sensor is likely to produce false positives and should be avoided. Namespaces offering subdomains of a given name may opt not to make the domain itself resolve, and I have seen some domains configured IPv6-only, if you can imagine it. Example: kung.foo.tld or tube.bar.tld might be legit subdomains of foo.tld and bar.tld respectively, yet there may be no A/CNAME RRs for foo.tld or bar.tld, and that is perfectly all right as long as the _psl leaf is present as a TXT record.

The SOA, on the other hand, is going to be present for every existing domain name. A missing SOA would be indicative of a non-existent domain name. Or so I think...

I was trying to come up with a legitimate reason an SOA would not be present for a namespace that is otherwise legitimate, and the only situation I could come up with is intervention by the resolver, as some of the public resolvers perform affectation on queries as a feature.

The way to keep the automation from misbehaving here is to do diverse SOA lookups per domain against multiple public resolvers (1.1.1.1, 8.8.8.8, 9.9.9.9). This would mitigate resolver affectation and also make the automation more resilient to any latency or connectivity issues specific to where it is operating from.
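
A minimal sketch of that multi-resolver SOA check, assuming Python with the dnspython library (the resolver IPs are the three listed above; treating NoAnswer and timeouts as inconclusive rather than as removal evidence is an illustrative assumption, not agreed policy):

```python
# Sketch only: check whether a suffix still has an SOA record, querying several
# public resolvers so a single resolver's affectation cannot skew the result.
import dns.exception
import dns.resolver  # pip install dnspython

PUBLIC_RESOLVERS = {
    "cloudflare": "1.1.1.1",
    "google": "8.8.8.8",
    "quad9": "9.9.9.9",
}

def soa_verdicts(suffix: str) -> dict:
    """Per-resolver verdict: True = SOA found, False = NXDOMAIN, None = inconclusive."""
    verdicts = {}
    for name, ip in PUBLIC_RESOLVERS.items():
        resolver = dns.resolver.Resolver(configure=False)
        resolver.nameservers = [ip]
        resolver.lifetime = 5  # seconds per query
        try:
            resolver.resolve(suffix, "SOA")
            verdicts[name] = True
        except dns.resolver.NXDOMAIN:
            verdicts[name] = False   # the name does not exist per this resolver
        except (dns.resolver.NoAnswer, dns.resolver.NoNameservers,
                dns.exception.Timeout):
            verdicts[name] = None    # inconclusive; not evidence for removal
    return verdicts

if __name__ == "__main__":
    v = soa_verdicts("example.com")
    # Illustrative policy: only flag when every resolver agrees the name is gone.
    print(v, "flag for removal:", all(answer is False for answer in v.values()))
```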

I only have volunteer time to land my helicopter every once in a while and try to advance some PR reviews. Automation sounds delightful for that, but it needs some elegance so it does not actually stack the reviewer with MORE cycles.

@BenjaminEHowe (Contributor)

I've investigated checking for SOA records, but it looks like a number of prominent domains return some form of error from multiple independent public resolvers. For example:

cloudflare got NoAnswer when resolving elb.amazonaws.com
google got NoAnswer when resolving elb.amazonaws.com
quad9 got NoAnswer when resolving elb.amazonaws.com

[Screenshot: approximately 4.5 million Google results under the domain elb.amazonaws.com]

What about:

  • an initial review of TXT records at _psl.suffix, marking suffixes as "legacy" where they lack the record but were added before the need to maintain the record permanently was advertised.
  • going forward, automatically flagging non-"legacy" suffixes that no longer have a _psl.suffix record for removal, e.g. via an automated pull request that includes a copy-and-paste command for maintainers to verify that the script was correct (a sketch of such a check follows this list).
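
A minimal sketch of that _psl TXT check, again assuming Python with dnspython; the LEGACY_SUFFIXES set is a hypothetical placeholder for whatever the initial review would produce:

```python
# Sketch only: classify a suffix based on the presence of a _psl TXT record.
import dns.exception
import dns.resolver  # pip install dnspython

# Hypothetical output of the initial "legacy" review; not real PSL entries.
LEGACY_SUFFIXES = {"example.legacy.tld"}

def classify(suffix: str) -> str:
    try:
        dns.resolver.resolve(f"_psl.{suffix}", "TXT")
        return "ok"                      # record present, nothing to do
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        if suffix in LEGACY_SUFFIXES:
            return "legacy"              # predates the rule; never auto-remove
        return "flag-for-removal"        # candidate for an automated removal PR
    except dns.exception.DNSException:
        return "inconclusive"            # resolver trouble; retry later
```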

@bulk88 commented Nov 8, 2023

Requiring a TXT record at _psl.suffix going forward for new listings sounds reasonable. If the root faux-eTLD NS/DNS servers disappear or the domain expires, taking down all subdomains, the disappearance of the TXT record is a pretty good indicator that the old domain owner lost control of the domain (maybe with an X-day safety window for natural disasters, data-center problems, or financial problems at the NS operator). Certain external services, though not UI browsers (marketing/ads/CAs???), could voluntarily check the root domain for _psl.suffix and trigger special behaviors for subdomains without involving the PSL at all. The faux-eTLD owner would be announcing, in real time via DNS, that their subdomains are explicitly untrusted and controlled by sub-users/third parties. No PSL involvement. Perhaps something like _psl.suffix TXT "expires=1730940618" or "max-age=31536000", analogous to HSTS, should be mandatory, in case the root TXT record suddenly disappears by accident or malice.
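
For illustration only, a sketch of how a consumer might honor such an expires=/max-age= value; the record format is the proposal above, not anything the PSL currently defines:

```python
# Sketch only: interpret a hypothetical "expires="/"max-age=" value carried in a
# _psl TXT record, granting a grace period before treating the listing as gone.
import time

def still_within_grace(txt_value: str, last_seen: float) -> bool:
    """True if the proposed expires=/max-age= value still covers this moment."""
    now = time.time()
    key, _, value = txt_value.partition("=")
    if key == "expires":      # absolute Unix timestamp after which the listing lapses
        return now < int(value)
    if key == "max-age":      # seconds of validity counted from the last observation
        return now < last_seen + int(value)
    return False              # unknown format: claim no grace period

# Example: a record last confirmed a day ago with a one-year max-age is still valid.
print(still_within_grace("max-age=31536000", last_seen=time.time() - 86400))  # True
```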

@dnsguru (Member, Author) commented Nov 8, 2023

This "removal if no txt record" concept has been thoroughly discussed in a number of iterations.

In a world of ones and zeros, it is amazing as an idea.

While that approach is binary and clean for automation purposes, we have no means to communicate with the admins of 15 years' worth of existing entries. The reality is that there is a lot of organic legacy stuff with no TXT records, and we must not disrupt those entries through removal just because their admins didn't know they needed to do something.

So, IF this were done, it would have to apply "from x date forward", which would require a more robust means of tracking. We have a legacy-entry challenge with automating removals: there are 15 years' worth of entries that would need to be evergreened, no means to evergreen them, and zero or negative resources at this time to do any of this.

As an aside, HSTS is often mistakenly advanced as some utopian evolutionary model to aim for, but it is not a good solution for the PSL.

Due to its narrow scoping, HSTS works OK for what it was made for, but it is not without gaps and pitfalls.

What is different about the PSL is the myriad of diverse use cases that exist. Different segments of those use cases present narrow solutions from time to time that address the 30-50% of use cases they or their employer need solved, such as DBOUND or browser-need-only approaches.

For those same legacy disruption issues, some form of holistic evolution is the wisest path forward.
