
Functionality (implementation details) of each feature #5

Open

mistadikay opened this issue May 8, 2015 · 7 comments


@mistadikay

Hi,

First of all, great project, really appreciate your work, and I hope Babel adopts it on some level so we can exclude from transpiling the language features that are already widely supported.

What is missing here, though, is that usually we need not just to know that a feature exists, but some implementation details: is anything missing, are there any differences from the spec, etc. Take @kangax's ECMAScript compatibility table for instance: there are several tests for each feature, so you can see how fully each one is supported on different platforms. I understand that it's an early stage of the project, so maybe you've considered this already, but anyway.
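For illustration, here's a minimal sketch of that distinction in the style of runtime feature tests (the function names are made up for this example, not part of any existing API). The first test only proves that `let` parses; the second probes a spec nuance, fresh per-iteration bindings in `for` loops, that some early implementations got wrong:

```js
// Existence test: does `let` parse at all?
function letExists() {
  try {
    new Function("let x = 1;"); // throws a SyntaxError at compile if unsupported
    return true;
  }
  catch (err) {
    return false;
  }
}

// Compliance test: does each loop iteration get a fresh `let` binding,
// as the spec requires? (Some early implementations reused one binding.)
function letPerIterationBindings() {
  try {
    return new Function(
      "var fns = [];" +
      "for (let i = 0; i < 2; i++) { fns.push(function(){ return i; }); }" +
      "return fns[0]() === 0 && fns[1]() === 1;"
    )();
  }
  catch (err) {
    return false;
  }
}

console.log(letExists(), letPerIterationBindings());
```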

@getify
Owner

getify commented May 8, 2015

Ahh, you're talking about having more fine-grained test results. Yes, I think there's room for that. I'm not sure the entire set of tests on @kangax's table is useful to people for this service, so there's a balance to be struck.

But I definitely think the service should have whatever test results people are actually going to use in deciding what files to load.

If you have any specific fine-grained results you think we should start to consider, please let me know. :)

@mistadikay
Author

Well, I'm not sure about specific tests yet. What I have in mind, though, is that when reading, for example, your books or @rauschma's great articles, people learn lots of little details of the language, and some of those details may or may not be fully implemented on different platforms. So when running feature tests, it would just be safer to be aware of the situation, so we aren't caught out later by implementation bugs or inconsistencies.

@getify
Owner

getify commented May 8, 2015

All (major) implementations will eventually be fully compliant with the spec, or the spec will eventually have to change to match reality.

So what we're really interested in is the nuances people care about in the gap between what the spec says and when a browser does exactly the right thing -- or rather, when all the browsers you care about get to that point. That gap is where the service steps in.

In that gap, will there be a predominant and persistent enough problem, where some syntax/feature exists in several browsers but doesn't work correctly to spec, and the fix won't come soon? In those cases, you need a test result that lets you know this so you can decide. But in other cases it will be binary: either the browser implemented the feature fully and correctly, or it hasn't done so yet.

So that's what I mean by balancing the fine-grained detail of the @kangax tables with the realities of what people will use to decide what files to load. Somewhere in between nothing and everything. And that's the challenge here.
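For concreteness, one hedged sketch of what that middle ground could look like; the result shape, property names, and file names below are purely illustrative, not the service's actual API. Most entries stay binary, a known gap case carries a sub-result, and the loader branches on the combination:

```js
// Hypothetical test-result shape (illustrative only): binary where
// implementations are all-or-nothing, graded where a known spec gap exists.
var testResults = {
  arrowFunctions: true,           // binary: fully there or not at all
  letConst: {
    exists: true,
    perIterationBindings: false   // the "gap" case: present but not to spec
  }
};

function loadScript(url) {
  var s = document.createElement("script");
  s.src = url;
  document.head.appendChild(s);
}

// Only ship untranspiled code when every nuance the app relies on passes.
var useNative = testResults.arrowFunctions &&
    testResults.letConst.exists &&
    testResults.letConst.perIterationBindings;

loadScript(useNative ? "app.es6.js" : "app.transpiled.js");
```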

@getify
Owner

getify commented May 8, 2015

I should add: the reason we shouldn't just expose everything is maintenance overhead, as well as performance penalties (on several axes). That's why balance matters, and why being conservative about what to include and test for matters.

@mistadikay
Author

I see your point now, thanks for the explanation!

@mgol

mgol commented Jun 28, 2015

> So that's what I mean by balancing the fine-grained detail of the @kangax tables with the realities of what people will use to decide what files to load.

People will use various things, even unpopular ones. If someone uses such a feature and relies on featuretests.io to serve different versions of a script to different browsers, the effect will be that in a modern browser that still has some bugs in its implementation, the site will just stop working correctly... So I think it's important to be comprehensive, especially since with modular builds most people won't need to load a lot of code -- but it's crucial that they get the tests they need.
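A rough sketch of that modular-build idea (module names and layout are hypothetical, not how featuretests.io is actually packaged): if every test is its own tiny module, even a comprehensive suite costs a site only the handful of checks its code depends on:

```js
// Hypothetical per-test modules -- each one exports a single check.
// (Names are illustrative; this is not the project's real file layout.)
var letPerIteration = require("feature-tests/let-per-iteration");
var templateLiterals = require("feature-tests/template-literals");

// The site declares only the tests its own code relies on...
var needed = [letPerIteration, templateLiterals];

// ...and serves the native build only if all of them pass.
var allPass = needed.every(function(test){ return test() === true; });
console.log(allPass ? "serve native build" : "serve transpiled build");
```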

@getify
Copy link
Owner

getify commented Jun 29, 2015

It is not a goal of this project to run the entire ECMA-262 test suite (nor even the entire ES6 test suite), mostly because that would be unwieldy and impractical perf-wise. So there's going to be some sort of balance/compromise between "everything" and "nothing".

Moreover, I don't want to test too many things that we can't feasibly scan someone's JS files for automatically with testify. There are a few tests like that already, and that bothers me. If normal static analysis of the AST can't detect that the code needs a given test, it's going to be much harder for someone to maintain the list of tests their project needs.
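To make that scanning constraint concrete, here is a minimal sketch of this kind of static scan, assuming the `acorn` parser and `acorn-walk` (testify's actual implementation may differ). Syntax features show up as AST node types and map cleanly to tests; runtime nuances like per-iteration `let` bindings never appear as a node type, which is exactly why such tests are hard to scan for:

```js
var acorn = require("acorn");
var walk = require("acorn-walk");

// Map AST node types found in a source file to the feature tests it needs.
function neededTests(src) {
  var tests = new Set();
  var ast = acorn.parse(src, { ecmaVersion: 2015 });
  walk.simple(ast, {
    ArrowFunctionExpression: function(){ tests.add("arrowFunctions"); },
    TemplateLiteral: function(){ tests.add("templateLiterals"); },
    VariableDeclaration: function(node){
      if (node.kind === "let" || node.kind === "const") {
        tests.add("letConst");
      }
    }
  });
  return Array.from(tests);
}

// Logs the three test names this snippet depends on.
console.log(neededTests("let f = (x) => `val: ${x}`;"));
```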

I prefer to take this on a case-by-case basis. That is, if there's a demonstrated case where another test is necessary, we'll consider adding it. But I'm not inclined to blow up the test suite with a bunch of tests that may never get used.
