doc: add experimental stages #46100
Conversation
I'm -1 on this approach. I think it creates more confusion and process where there is no need.
I think we should focus on documenting:
- what criteria are needed for a feature to exit experimental status
- when to use CLI flags for experimental features
- when to emit a warning
Those three things have been applied wildly inconsistently, and we should document some guidelines.
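On the warning point: Node.js core surfaces these via `process.emitWarning` with the `ExperimentalWarning` type. A minimal userland sketch of the same pattern (the `fooBar` feature name is hypothetical, not a real API):

```javascript
// Sketch of the once-only warning pattern used for experimental features.
// "fooBar" is an illustrative feature name, not an actual Node.js API.
let warned = false;

function fooBar() {
  if (!warned) {
    warned = true;
    process.emitWarning(
      'fooBar is an experimental feature and might change at any time',
      'ExperimentalWarning',
    );
  }
  return 'ok';
}

fooBar(); // emits an ExperimentalWarning to stderr the first time
fooBar(); // subsequent calls stay silent
```

The guard flag ensures the warning is emitted at most once per process, which is how core modules typically behave.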
I apologize if I'm off base here, as I haven't been able to follow the prior discussion as closely as I would ordinarily like to. With that out of the way....

With apologies to George Orwell: at the current time, all experimental features are equal, but some are more equal than others. That's...not good (IMO). In other words: the effective statuses of various Experimental features vary tremendously. Sometimes we require a CLI flag, but sometimes not. Sometimes we emit a warning and sometimes not. Sometimes we know that we can't actually introduce breaking changes because of widespread adoption, but we keep things Experimental anyway for unpublished reasons.

This PR attempts to take the first step in solving this issue by trying to document experimental statuses that are currently only implied. I like that approach, but it's certainly not the only approach. Going a different way, I also like Matteo's three suggestions for what to do. Those suggestions solve specific problems (which is good!), whereas this is an attempt to document (as best we can) current practice, which is inconsistent (or at least certainly seems inconsistent), as a way of laying groundwork for fixing the issues.

If this PR is likely to stall in review, maybe a good way forward is to take just one of the three suggestions Matteo made and open a PR trying to solve just that one. And then move to the next one. Or even divide up the work. For example, and purely hypothetically, maybe Geoffrey could try to get consensus on the exit-from-Experimental criteria, James could work on when a CLI flag can be omitted, and Matteo could work on the criteria for when a warning can be left out.
I'm OK with documenting what we do. However, this proposal implies attaching specific warnings and conditions to each stage, and I'm strongly opposed to that. To solve that specific mess, I think we should create a guide instead, as each feature is mostly its own case and risk level. My point of view is that Experimental means "breaking changes are possible," and we should keep it that way.
It looks like we need to discuss this among the TSC again, or have a breakout meeting. There are so many divergent opinions here that I'm not sure this PR can get resolved via a simple review process.

The point of this PR is as I wrote at the top: to better set user expectations around experimental features. We have been burned in both directions, by users considering experimental features either stable when they shouldn't be considered as such, or too rough to test when we really want users trying out our experiments before we graduate them to stable. By subdividing "experimental" into stages we can better focus our testers' efforts, and also avoid situations where we upset the user base by breaking or removing an experimental feature that they've come to depend on even though they shouldn't have.

These goals can't be achieved by staying silent about how ready or not we consider each stage to be, or how likely or not each stage is to have breaking changes. It's a simple fact that some experimental features are more stable than others, and both users and we are better served by knowing where each feature stands. They're not all the same, hence the need to divide into stages.

We're also at a disadvantage currently in landing new experimental features, in that most past experimental features have landed straight into near-stable state and then rarely changed (and often never got marked as stable), so user expectations right now are that experimental features are generally okay to use right away. That's wrong, of course, but the only ways to reeducate our users are either through a docs update like this or by burning them a few times by shipping rougher experimental features that do break or go away. I think a documentation update is preferable to that.

We should also write something in the contributor's guide about how to write experimental features and graduate them and so on, but that follows after first getting consensus around what the stages are.
I don't think we should do this for a few reasons:
- We should never discourage user testing. We've had the problem of not getting enough user testing in the past.
- We should not try to discourage things from getting popular. Yes, things could have gone better with async_hooks, but that is now a pretty old API and seems like more of an isolated incident than a big recurring problem. Also, Node is not the only JS runtime in town anymore. If we don't want features to get popular, I can think of at least two other runtimes that would love to have our users and give them features.
- If a feature is so immature that we don't want anyone to use it, then we should not ship it. We should maintain a fork with the feature as we have done in the past, or tell people to build from `main`. In your other proposal, you mentioned a build flag here, so we'd just be telling people to build from source anyway.
- Even though these are bullets under the Experimental stability index, we are essentially creating a stability index with six distinct stages and only really want people to use one of them.
- The text here doesn't provide any real guarantees. If something is listed as a release candidate, it can still be completely broken or removed at any time.
- Users should only be concerned with whether or not something is production ready. All of these shades of experimental grey are just likely to confuse people, or upset them because we don't provide any guarantees ("you completely removed this release candidate feature!").
- Have we done any surveying of how other languages/runtimes handle this? If not, we should do that first.
I'm not opposed to adding text stating that experimental features may be in various stages of development.
Force-pushed from acc2c61 to bb4e9c5.
I pushed an update with revised text that tries to resolve all the notes that people left. The various opinions are conflicting, so the perfect text isn’t possible, but hopefully this comes closer to a majority opinion and we can continue to revise. If you see something you don’t like in the new text, please suggest an alternative (and thanks to those who did so for the previous draft).
An example that deserves special mention is Oracle Java: https://blogs.oracle.com/javamagazine/post/the-role-of-preview-features-in-java-14-java-15-java-16-and-beyond. They have three categories of what they call “nonfinal” features: “preview,” “experimental” and “incubating.” The article goes into deep detail about what kind of feedback the core team wants from each stage and where the feedback should be submitted. Like the other examples I found, none of these features are enabled by default:
That “incubator” namespace is an interesting idea, and a version of it could be adopted for Node.

What these all have in common is that users must explicitly opt into enabling experimental features; not just by using the feature, but via a config setting/flag or by using a non-stable release/build. They’re never enabled by default in a stable release. I couldn’t find any project that enables them by default, though I encourage you all to look harder than I did 😄 This list includes all the projects I found that ship semver-exempt features; the list isn’t cherry-picked. There were lots of other projects I looked at that don’t follow this pattern; I would guess they probably don’t merge in or release features that aren’t ready to be considered stable.
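For illustration, the explicit opt-in pattern these projects share could be sketched in userland like this. Everything here is hypothetical: the env var name is illustrative, and real Node.js core features gate on CLI flags wired into the binary itself rather than environment variables:

```javascript
// Hypothetical sketch: an API that refuses to run unless the user has
// explicitly opted in. NODE_EXPERIMENTAL_FOO is an illustrative name,
// not an actual Node.js option.
function experimentalFoo() {
  if (process.env.NODE_EXPERIMENTAL_FOO !== '1') {
    throw new Error(
      'This API is experimental; set NODE_EXPERIMENTAL_FOO=1 to opt in',
    );
  }
  return 'foo';
}
```

The point of the pattern is that merely importing or calling the feature is not enough; the user must take a deliberate, out-of-band step to enable it, which is what keeps accidental production adoption in check.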
Thanks for the survey. I'll dismiss my objection.
Thanks. And with regard to “I can think of at least two other runtimes that would love to have our users and give them features,” yes, that’s absolutely true, but I would argue that Node.js’ value proposition is that it’s stable. Only the very brave are using Bun in production, and Deno may be past 1.0 but it’s still nowhere near as mature as Node.js. I think that a big part of our market share relative to those runtimes is that we’re considered battle-tested and proven and safe in production for big, important sites.

Reading that long article about how Java handles this made me think: whoa, Java is a platform even more mature than we are, and it really shows in the level of detail they have around this. Perhaps this is just necessary complexity when you reach the point that Java’s at, and that we’re possibly at, where we want to continue innovating while also needing to remain stable enough for big companies to run mission-critical applications.
I think a feature should be marked as stable after it’s been in 1.2 / “release candidate” for a while and no one’s reported any issues that would require breaking changes to fix. It should also require TSC approval to ensure that we’re aware.
In the TSC meeting, we discussed hashing that out as the follow-up to this PR. If we follow the proposal in #45900, flags would be required for 1.0 and 1.1, “early development” and “active development” (previously alpha and beta). I think that in general the primary factor in deciding whether a flag should be needed would be the relative maturity of the experimental feature, which corresponds to which stage it’s in.
Along the same lines as flags. In #45900 I suggested the warning be emitted for all experimental features, even RC ones; that gives contributors an incentive to get their features over the finish line and marked as stable. (If you search the codebase for
That’s what this PR aims to begin. I absolutely agree with you that we should set some standards around flags, warnings, and moving to “stable” status; and I think the most sensible way to do so is to establish experimental stages as a framework. The follow-up PR to this one can and should dive into all these issues.
I don't think I will ever agree to mandatory CLI flags and experimental warnings. I believe they will greatly damage our momentum in shipping new features, because they will make it harder for people to test them. This goes against what we have been doing for the last while, and against all the excitement we generated in the community about the features we shipped in the last year or so. This also adds another barrier to contribution, and we had dwindling numbers even there. Why make things harder? We should be working to increase our velocity, not slow it down.
Honestly, if we're going to start requiring a flag to opt in to experimental features, we shouldn't emit warnings for those. Going back to the survey of other runtimes, are there any that emit console warnings when an explicitly opted-in experimental feature is enabled? That said, I am in full agreement with @mcollina and I'm still generally -1 on this whole approach.
This PR never mentions flags or warnings, mandatory or otherwise. Is there anything in this PR that you find objectionable? Yes the intention is to follow this up with another PR that discusses what, if any, requirements we want to attach to each stage. But we haven’t even begun that discussion, much less made decisions about that. We agreed in the last TSC meeting to first settle on what we thought the stages were, and then move on to discussing what requirements or recommendations we would make for each stage. I would think that that would take the form of a section in the collaborator’s guide, as it’s not really information that the general public needs; and when we write it we can go into detail about what considerations to take. For example, features that are unlikely to ever be used in production like
This PR never mentions flags or warnings, mandatory or otherwise. We can discuss standards for when flags and warnings should be used in a follow-up PR.
I didn’t notice any mention of warnings in the various projects’ documentations; I think you would have to try each one. Deno doesn’t print a warning from what I can tell.
What are your objections to this PR? I’ve already revised per your notes above.
On this PR, I still do not believe the additional formality is needed or that it helps. I appreciate the changes that have been made already, but I still don't think it's necessary at all.
I'm OK with the text as currently written, and OK to land it as long as there are no planned flags or runtime checks attached to the levels. However, the whole premise of this work is adding runtime flags or build-time options for specific levels:
I cannot approve this approach. |
Thanks. I think we should keep this on the TSC agenda to discuss at our next meeting. I assume the next steps are:
I feel like there might be a lot of debate around the classifications of various features, so I want to separate that discussion into its own PR (or even a set of PRs) rather than adding all of that onto this one. We might even decide to revise this one as a result of reviewing all our existing features, for example if we decide we need a fourth stage or could maybe make do with only two. When I search for
I think most of those could be graduated to stable.
LGTM
FWIW, the companion PRs would probably need changes to some of the scripts in
@RaisinTen one option might be adding a line in the new text that says this subdivision is new and that some APIs may still just be tagged as experimental. The tools/doc change might make sense to have in place/part of this PR.
LGTM
I do have a slight concern about the border between 1.2 and 2: if we don't implement a warning recommendation at level 1.2, the ecosystem might eventually start treating 1.2 as 2 -- the language is similar, and if some features go seamlessly from 1.2 to 2, the ecosystem might stop seeing any difference.
This is not a blocker, but something that we should keep in mind later, when implementing recommendations and actually using these stages.
LGTM
We discussed again in the TSC meeting yesterday, and the next steps agreed by the 7 TSC members there were:
@nodejs/tsc since we had a smaller group in the discussion at the meeting.
Per #46100 (comment) I’ll land this tomorrow (Friday) unless anyone objects. It’s marked as “don’t backport.”
Landed in 1022c6f
Implements the first PR per #45900 (comment). This only defines the stages; follow-up PRs will classify all currently experimental features accordingly and add any stage requirements such as flags or warnings that we decide are appropriate. We could potentially land this PR at the same time as the one that updates the statuses for all experimental features, but I don’t want to start work on that one without reaching consensus on this one first.
Even just on its own, this PR solves a few problems:
It encourages more use and testing of experimental features that we’ve been referring to as in the “baking” phase, where we think they’re ready to be marked as stable. This should hopefully catch more bugs before features become stable.
It clarifies the refactoring risk that various experimental features have; “release candidate” ones should hopefully require no refactoring due to breaking changes, but “active development” ones very well might; and “early development” ones almost certainly will. Users therefore have a better idea of how much maintenance work they’re signing up for by choosing to use a subject-to-change feature.
Per #45900 (comment) I tried to come up with better names for the stages. Please let me know if there are better ideas for names that can reach consensus. 🚲
cc @nodejs/tsc @nodejs/documentation