
Correction to the definition of "ocean sigma over z coordinate" in Appendix D #314

Closed
JonathanGregory opened this issue Mar 3, 2021 · 41 comments · Fixed by #317
Labels
enhancement Proposals to add new capabilities, improve existing ones in the conventions, improve style or format


JonathanGregory commented Mar 3, 2021


Technical Proposal Summary

The convention for this parametric vertical coordinate appears to be defective in its design. The correction cannot be treated as a simple remedy for a defect, because the original intention isn't clear, but a remedy is needed: at present there is no definition of what the coordinate variable should contain.

Associated pull request

#317

Detailed Proposal

With the ocean sigma over z coordinate, there are nsigma layers above the depth depth_c, and below this depth the levels are surfaces of constant height zlev (positive upwards) relative to the horizontal datum. The vertical coordinate z for level k at time n and point (j,i) is

for k <= nsigma:
z(n,k,j,i) = eta(n,j,i) + sigma(k)*(min(depth_c,depth(j,i))+eta(n,j,i))
for k > nsigma:
z(n,k,j,i) = zlev(k)

where z(n,k,j,i) is height (positive upwards) relative to the datum (e.g. mean sea level) at gridpoint (n,k,j,i), eta(n,j,i) is the height of the sea surface (positive upwards) relative to the datum at gridpoint (n,j,i), sigma(k) is the dimensionless coordinate at vertical gridpoint (k) for k <= nsigma, and depth(j,i) is the distance (a positive value) from the datum to the sea floor at horizontal gridpoint (j,i).
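As a concrete illustration of the definition above (not part of the conventions), here is a minimal Python/NumPy sketch that applies the two formulas, assuming top-down numbering with zero-based k, so that the test `k < nsigma` corresponds to the 1-based k <= nsigma; the function name is illustrative only:

```python
import numpy as np

def sigma_over_z_height(sigma, zlev, nsigma, eta, depth, depth_c):
    """Compute z(n,k,j,i) for the ocean sigma over z coordinate.

    Assumes top-down numbering (zero-based k, with k < nsigma being the
    sigma layers). Argument names mirror the formula terms; this helper
    is a sketch, not a CF-defined interface.
    """
    n_t = eta.shape[0]            # eta has shape (n, j, i)
    nlayer = sigma.shape[0]
    ny, nx = depth.shape          # depth has shape (j, i)
    z = np.empty((n_t, nlayer, ny, nx))
    for k in range(nlayer):
        if k < nsigma:
            # sigma layers follow the moving sea surface
            z[:, k] = eta + sigma[k] * (np.minimum(depth_c, depth) + eta)
        else:
            # fixed levels below depth_c
            z[:, k] = zlev[k]
    return z
```

With sigma running from 0 at the surface to -1 (as discussed later in this thread), the sigma-layer heights come out negative below the datum, matching z being positive upwards.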

In the above, "sigma(k) is the dimensionless coordinate for k <= nsigma" means that sigma is the value that appears for level k in the vertical coordinate variable, whose standard name is ocean_sigma_z_coordinate. However, sigma is not defined for k>nsigma, so it cannot supply the vertical coordinate value at these lower levels. I think this must be an oversight in the convention as it stands. The convention is therefore faulty in not defining a monotonic set of values for the vertical coordinate variable.

I propose that we should delete the statement "sigma(k) is the dimensionless coordinate for k <= nsigma", and add the following:

The vertical coordinate variable with standard name ocean_sigma_z_coordinate should contain sigma(k)*depth_c for k <= nsigma and zlev(k) for k>nsigma, with units of length. These comprise a set of indicative values, monotonically increasing with k, for the depths of the levels below the datum.
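A sketch of the proposed coordinate variable contents, assuming top-down numbering, zero-based k, NaN for missing entries, and sigma in [-1, 0] (so the values come out as heights, negative below the datum; the exact sign convention is a matter for the pull request). The function name is illustrative only:

```python
import numpy as np

def sigma_z_coordinate_values(sigma, zlev, nsigma, depth_c):
    # Indicative vertical coordinate values per the proposed wording:
    # sigma(k)*depth_c in the sigma layers, zlev(k) below.
    # Top-down, zero-based k assumed; NaN marks undefined entries.
    k = np.arange(len(sigma))
    return np.where(k < nsigma, sigma * depth_c, zlev)
```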

We should also add "The standard_name for sigma is ocean_sigma_coordinate" at the end of the entry, where the other standard names are defined.

It would be particularly useful to have comments from anyone who uses this vertical coordinate variable.

@JonathanGregory JonathanGregory added the enhancement Proposals to add new capabilities, improve existing ones in the conventions, improve style or format label Mar 3, 2021
@johnwilkin

I think there is a problem here with an implicit assumption about the numbering convention for whether k=N is the surface or the bottom. In ocean_sigma_coordinate it does not matter if a user numbers from the surface sigma(1) = 0 to bottom sigma(N) = -1, or, as in the ROMS model for example, sigma(1) = -1 at the bottom and sigma(N) = 0 at the surface (k increasing in the same direction as z). As written, the definition above requires that k=1 be sigma=0 and z=eta. The reverse order is not supported.

@JonathanGregory

Dear @johnwilkin

Thanks for your comment. Yes, that's a good point. I think it's a separate problem, but I agree the text should be reworded to avoid the implicit numbering convention. In fact I don't think the numbering needs to be stated at all. It could just describe the treatment of the sigma levels and depth levels without mentioning nsigma or the values of k.

Do you agree with the proposal for what should be stored in the vertical coordinate? If I have understood the definition properly, I think it must be the case that the highest z level is deeper than depth_c (where sigma=1). The depth and sigma levels should not overlap. Is that right?

Jonathan

@johnwilkin

We need auxiliary information for sigma(k) to make the definition generic.

Something like this ... check my math and whether it's < or <=

If k numbers from top to bottom (left side of diagram attached):
for 1 <= k <= Nsigma we have sigma(k) = -k/Nsigma
zlev(k) is undefined
for k > Nsigma, sigma(k) is undefined (_FillValue?)
zlev(k) has valid entries for Nsigma < k <= N

If k numbers from bottom to top (right side of diagram attached):
for 1 <= k <= N-Nsigma, sigma(k) is undefined
zlev(k) has valid entries for 1 <= k <= N-Nsigma
for k > N-Nsigma we have sigma(k) = (k-N)/Nsigma
zlev(k) is undefined for k > N-Nsigma

With sigma defined thus as part of the standard, and/or as a 1-D vector of sigma(k) saved in the output file (this is what we do in ROMS to accommodate multiple alternative stretchings in the "sigma" space for ocean_s_coordinate), and zlev included in the output as a 1-D vector, the rule is generic.

This will work for both numbering conventions:
If sigma is defined
z(n,k,j,i) = eta(n,j,i) + sigma(k)*(min(depth_c,depth(j,i))+eta(n,j,i))
If sigma is not defined
z = zlev(k)

I don't know how you write this in CF-speak. But it's a robust way to code a calculation of z(n,sigma(k),i,j)

[Diagram attached: sigma and z-level layering for top-down (left) and bottom-up (right) numbering]
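The dispatch rule quoted above ("if sigma is defined ... else z = zlev(k)") can be sketched directly, assuming missing entries are encoded as NaN. Because it never compares k against nsigma, it works for either numbering order; the helper name is illustrative only:

```python
import numpy as np

def z_from_terms(sigma, zlev, eta, depth, depth_c):
    # Per-layer dispatch: use the sigma formula where sigma(k) is
    # defined, and zlev(k) elsewhere. NaN is assumed to mark missing
    # entries; top-down and bottom-up orderings both work.
    nlayer = len(sigma)
    z = np.empty((nlayer,) + np.shape(eta))
    for k in range(nlayer):
        if np.isnan(sigma[k]):
            z[k] = zlev[k]
        else:
            z[k] = eta + sigma[k] * (np.minimum(depth_c, depth) + eta)
    return z
```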

@johnwilkin

One more point ... working from sigma(k) encoded in the file would set the scene to use ocean_s_coordinate or ocean_s_coordinate_g1 (g2) over z coordinate with minimal added hassle.

@JonathanGregory

Dear @johnwilkin

Thanks. That's very helpful. Yes, I agree with you. sigma should be stored in the file in a variable named by formula_terms, as zlev is. In fact that is what Appendix D already specifies. It would be unnecessarily restrictive to require the sigma levels to be equally spaced.

Cheers

Jonathan

@JonathanGregory

I have created pull request #317 to implement this. If @johnwilkin and @davidhassell (who originally discovered this problem) and anyone else could have a look at it, that would be most helpful. Thanks.

@JonathanGregory

@johnwilkin, thanks for your comment on the pull request, which I repeat here for the record:

I think this will work OK. Regardless of how the modeler orders the k index, it is a universal convention that sigma, or s, increases upward from -1 at the bottom (or at c_depth) to 0 at the sea surface.

@davidhassell

Hello @JonathanGregory and @johnwilkin,

Thanks. The PR looks good to me, but I'd like to check it with my cf-python implementation (or vice versa), and I won't be able to do that until after the weekend. I will get back to you next week.

David

@davidhassell

Hello,

In the proposed text we have:

The parameter sigma(k) is defined only for the nsigma layers nearest the ocean surface, while zlev(k) is defined for the deeper layers. If the layers are numbered top-down i.e. with k = 1 nearest the surface and increasing numbers for greater depth, then sigma(k) is defined for 1 <= k <= nsigma. If the layers are numbered bottom-up i.e. with k = 1 nearest the sea floor, then sigma(k) is defined for nlayer - nsigma <= k <= nlayer, where nlayer is the number of layers. It is recommended that sigma(k) and zlev(k) should contain missing data at layers for which they are not defined.

I'm still not sure how the user can know if the layers are numbered top-down or bottom-up.

@johnwilkin commented: Regardless of how the modeler orders the k index, it is a universal convention that sigma, or s, increases upward from -1 at the bottom (or at c_depth) to 0 at the sea surface.

Does that mean we should work it out from inspection of the sigma values - decreasing sigma values indicate top-down, and vice versa? If so, it would be good to state that...

Thanks,
David

@JonathanGregory

Dear @davidhassell

Thanks for raising this point. I agree with your suggestion (when we spoke) that it would be better to require sigma and zlev to contain missing data at levels to which they do not apply. In that case, if the first element of zlev contains missing data, or the first element of sigma does not contain missing data, you can infer that the layers are arranged in order of increasing depth, from the surface to the bottom; if the opposite, that they are arranged from the bottom upwards. This should be stated in the text. It also implies that the non-missing elements of both sigma and zlev are in monotonic order.

In writing this, I wondered why we assume that layers should be numbered from 1, rather than 0, or any other number. We don't have to make any such assumption, since layer number is not part of formula terms. What matters is only the order of the elements along the dimension.

Furthermore, if we require the missing data, nsigma is redundant, because you can count the number of non-missing sigma values. We could omit nsigma from formula_terms. Possibly that would be better, because there is a danger of nsigma not being updated if the data is processed to keep only a subset of the sigma layers; the formula_terms might be copied without modification, and thus the metadata could become incorrect and would lead to misinterpretation of the coordinates. However, I expect that existing software may rely on nsigma. We don't normally have conformance rules for formula_terms, but maybe we should include one to require nsigma to agree with the missing data in sigma and zlev, if we keep nsigma.
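A conformance check along these lines, that nsigma (if supplied) must agree with the missing data, and that at each level exactly one of sigma(k) and zlev(k) is missing, might look like this sketch (NaN assumed as the missing value; the function name is illustrative):

```python
import numpy as np

def check_sigma_z_terms(sigma, zlev, nsigma=None):
    # At each level exactly one of sigma(k), zlev(k) must be missing,
    # and nsigma, if present, must equal the count of non-missing
    # sigma values. NaN is assumed to encode missing data.
    sigma_ok = ~np.isnan(sigma)
    zlev_ok = ~np.isnan(zlev)
    exactly_one = bool(np.all(sigma_ok ^ zlev_ok))
    nsigma_ok = nsigma is None or nsigma == int(sigma_ok.sum())
    return exactly_one and nsigma_ok
```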

What do you and @johnwilkin think of this?

Jonathan

@johnwilkin

I don't think zlev should be required to have missing values where they are not used. A user might start with a solely z-based coordinate, say, zlev = -5000 (k=1) by 500 to 0 (k=11) and c_depth=0 (or n_sigma=0) but subsequently tinker with increasing values of c_depth and n_sigma. This eats into the entries in zlev and ceases to use some of them (zlev<c_depth) but why demand that unused entries be defined as "missing"?

That said, such "tinkering" is not going to be very flexible because it demands a change in the vertical dimension. The number of vertical coordinate surfaces is n_sigma plus the number of z-levels below c_depth. So, if anything, it might be better to require that zlev contain values strictly < c_depth so that the vertical dimension is always length(zlev) plus n_sigma.

@JonathanGregory

Dear @johnwilkin

Thanks for your comment. I think zlev and sigma both have to be dimensioned by nlayer, since they're formula terms of a coordinate variable of that dimension. That's also implied by the text, since the same k is used to index both of them. However, for any k, only one of them is used. The reason for putting missing data in the other is so that the data-reader or program can easily tell which way round (top down or bottom up) the levels have been arranged along the dimension.

If nsigma=0, the vertical coordinate is purely depth; if nsigma=nlayer, the vertical coordinate is sigma. In either case you wouldn't use sigma_over_z, would you?

Best wishes

Jonathan

@JonathanGregory

Quoting my earlier comment

The reason for putting missing data in the other is so that the data-reader or program can easily tell which way round (top down or bottom up) the levels have been arranged along the dimension.

There is another easy way to do this: we should be able to depend on the positive attribute, which must be present since the vertical coordinate is not pressure, in conjunction with inspecting the vertical coordinate variable, which contains a set of indicative values for the levels, monotonically increasing or decreasing. For example, if positive=up and the values are decreasing along the dimension, they must be arranged top-down, so the first nsigma levels are sigma levels. Is this sufficient, @davidhassell? If so, we don't need the requirement about missing data.
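This inference from the positive attribute can be sketched as follows (illustrative only; `coord` holds the indicative values of the vertical coordinate variable, assumed monotonic):

```python
def layers_are_top_down(coord, positive):
    # With positive="up", values decreasing along the dimension mean
    # the first elements are nearest the surface (top-down); with
    # positive="down", increasing values mean the same thing.
    decreasing = coord[-1] < coord[0]
    return decreasing if positive == "up" else not decreasing
```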

Jonathan

@johnwilkin

I think this suggestion is helpful. The sense of the numbering is readily determined by diff(zlev) or diff(sigma). But if we have sigma dimensioned by nlayer, do we need to enforce or recommend what sigma value to assign for unused layers? In the case of numbering from the surface down, for k >= nsigma should sigma be repeated as -1? Numbering from the bottom up, does sigma = -1 for all k <= nlayer-nsigma? I'm starting to come around to the idea that in layers out of scope that zlev and sigma be assigned a missing value. I can see how easy it would be for a user to notice variable zlev in the file and unwittingly assume it applies to all data. Or, likewise, assume the model is sigma throughout. A missing value would throw an error.

@davidhassell

Hello @johnwilkin and @JonathanGregory,

Thanks for these points.

I think that the use of the positive attribute to determine the direction is sufficient and elegant - that works for me.

I am now quite troubled by the use of nsigma, though! How it is applied is surely dependent on whether one assumes one-based or zero-based indices k, as @JonathanGregory pointed out. And the possibility of it becoming incorrect when the vertical dimension is subsetted is not good. These problems all disappear, I think, if we insist on missing values for the out-of-range portions of zlev and sigma...

It's just occurred to me that if we dispense with nsigma and use missing values instead, surely we don't need to know the direction at all, and could omit the new text that says:

"If the layers are numbered top-down i.e. with k = 1 nearest the surface and increasing numbers for greater depth, then sigma(k) is defined for 1 <= k <= nsigma. If the layers are numbered bottom-up i.e. with k = 1 nearest the sea floor, then sigma(k) is defined for nlayer - nsigma <= k <= nlayer, where nlayer is the number of layers."

With missing values, doesn't

"for levels k where sigma(k) has a defined value and zlev(k) is not defined:

for levels k where zlev(k) has a defined value and sigma(k) is not defined:"

give us all we need?

David

@johnwilkin

I am now quite troubled by the use of nsigma, though! How it is applied is surely dependent on whether one assumes one-based or zero-based indices k, as @JonathanGregory pointed out. And the possibility of it becoming incorrect when the vertical dimension is subsetted is not good. These problems all disappear, I think, if we insist on missing values for the out-of-range portions of zlev and sigma...

Quite right. I had to sketch this to convince myself. See below. Having vectors sigma and zlevel both dimensioned on nlayers with each undefined when they are out of scope makes the conversion unambiguous. In my diagram, it doesn't matter in which order (top to bottom, or bottom to top) the values are stored, and whether index k is 0-based or 1-based is at the discretion of whoever reads the file.

Here, the square symbols with the dot are the nominal layer (cell) centers for a typical vertical staggered grid where tracers are defined at the cell centers, and vertical velocity (w) at the layer interfaces dimensioned nlayers+1. To get z of layer interfaces we use the same equation but need a second vector of sigma_w values (here 6 defined values -1:0.2:0). The sigma_w = -1 has to be defined. If there is a counterpart zlevel_w vector then the interface corresponding with c_depth has to be undefined.

If the layers are not of uniform thickness, one can't infer zlevel_w from the center depths zlevel. This is a common failing of z-level model output files that don't provide layer thickness data. It confounds computing vertical integrals from -h to eta.

For the sigma-space the layer thicknesses are readily computed by differencing the z on the interfaces computed with sigma_w.

[Diagram attached: layer centers and interfaces on a vertically staggered grid, for both numbering orders]

@JonathanGregory

Dear @davidhassell and @johnwilkin

Thanks for your comments, and for the new diagram, John. Omitting the bounds is a problem for many types of vertical coordinate variable, not just this one. Bounds can be provided for both sigma and zlev in the usual way; the CF convention has a special provision for bounds of formula terms, since they are often needed.

It seems like we agree then. Unless you say otherwise, I'll revise the text to reduce or remove the explicit mention of which way up the coordinates are and the choice of base for k, and to require the missing data. I think that means we should make nsigma optional in formula_terms, and add a conformance rule to state that, if it is supplied, it must equal the number of entries of zlev which have missing data, and also that at each level either sigma or zlev must be missing, but not both or neither.

Best wishes

Jonathan

@johnwilkin

Sounds like we've converged and can CLOSE this.

And, yes, this doesn't actually require nsigma be a saved parameter - though it's useful supporting metadata.

My remarks on "w" grid vertical coordinates are not a bounds issue. It's another coordinate for variables defined at a different location, as in the vertical velocity in this ROMS model dataset
http://tds.marine.rutgers.edu/thredds/dodsC/roms/doppio/2017_da/avg/Averages_Best.html
that uses ocean_s_coordinate_g2

s_rho
long_name: S-coordinate at RHO-points
valid_min: -1.0
valid_max: 0.0
positive: up
standard_name: ocean_s_coordinate_g2
formula_terms: s: s_rho C: Cs_r eta: zeta depth: h depth_c: hc

and

long_name: S-coordinate at W-points
...
formula_terms: s: s_w C: Cs_w eta: zeta depth: h depth_c: hc

u velocity data are dimensioned on s_rho.
w velocity data are dimensioned on s_w.

For ocean_sigma_over_z_coordinate IF the file contains w data, THEN we would need another dimension (length nlayers+1) and coordinate.
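For reference, the formula_terms strings shown above pair term names with variable names; a minimal parser, assuming only the plain whitespace-separated `term: variable` syntax, could look like this sketch:

```python
def parse_formula_terms(s):
    # Split "term: variable" pairs from a CF formula_terms string,
    # e.g. "s: s_rho C: Cs_r eta: zeta depth: h depth_c: hc".
    # Assumes the simple whitespace-separated form only.
    tokens = s.split()
    return {tokens[i].rstrip(":"): tokens[i + 1]
            for i in range(0, len(tokens), 2)}
```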

@JonathanGregory

Dear @johnwilkin and @davidhassell

I have updated the pull request #317. Is it OK now?

Thanks for your help

Jonathan

@davidhassell

Thanks, @JonathanGregory.

A couple of comments:

  • Could we say "nlayer is the size of the dimension of the vertical coordinate variable", adding "size of the"?
  • Keeping an optional nsigma term variable does go against Guiding Principle (6): to avoid potential inconsistency within the metadata, the conventions should minimise redundancy. However, I can see a reason for keeping it if it is useful to know in circumstances when you don't want to, or can't, read the entire zlev coordinate array. Is that a real use case? In the case of cf-python, which will implement this, it would never need nsigma without zlev because it's only concerned with calculating z, and so has to read zlev anyway.

@JonathanGregory

Dear @davidhassell

I have changed the text about nlayer as you suggest. Thanks for the correction.

I propose that we should keep nsigma because removing it would be backward-incompatible, in the sense that existing data created with previous versions of the convention would be invalid in the new version. That sounds like an unnecessary nuisance, so the next best thing is to check that there isn't an inconsistency.

Best wishes

Jonathan

@davidhassell

Dear @JonathanGregory - good points. Keeping nsigma is fine by me.

@zklaus

zklaus commented Apr 28, 2021

Another option would be to declare nsigma deprecated. This way it would be around for some time to come, but new data would avoid using it and it would be clear that that is correct. In a future release, such backward-incompatible changes could be bundled. For reference, I link here to the Numpy deprecation policy and the Python deprecation policy as examples of how other projects handle this kind of issue.

@JonathanGregory

JonathanGregory commented Apr 28, 2021 via email

@davidhassell

That sounds good to me, too.

@JonathanGregory

Dear all

I have updated #317 again (at last, I'm beginning to feel that I know what I'm doing with git and github!) to deprecate nsigma, as well as requiring it (if present) to agree with the missing data. Are there any further concerns? If none, and presuming that those who have supported this already still do so, the proposal will be accepted on 20th May.

Best wishes and thanks for your engagement

Jonathan

@davidhassell

Thank you, @JonathanGregory. It all looks good to me.

@davidhassell

Hello - the 20th May is here, and no further comments have arisen. Thanks to all for the interesting discussion - and especially to @johnwilkin for the excellent diagrams.

Before we merge, however, it is noteworthy that this issue has identified and corrected a fundamental flaw in the conventions (as opposed to a typographical error, for example), which I'm not sure has happened before. The rules on this state: "The flawed version will be deprecated by a statement in the standard document and the conformance document. However, any data written with the flawed version will not be invalidated, although it may be problematic for users."

I am not sure what form such a statement should take, or where in the conventions document it should live. In any case, perhaps it should be considered as part of PR #317?

@JonathanGregory - do you agree the PR should be amended in this way? If so, would you like to propose some extra text?

Many thanks,
David

@JonathanGregory

Dear @davidhassell

According to my computer, the 20th is tomorrow, but I expect you are thinking ahead, or perhaps in Australia? 😄 Thank you for raising this point. I suggest that we insert the following statement just after the title for the "sigma over z" coordinate in Appendix D:

The description of this type of parametric vertical coordinate is defective in version 1.8 and earlier versions of the standard, in that it does not state what values the vertical coordinate variable should contain. Therefore, in accordance with the rules, all versions of the standard before 1.9 are deprecated for datasets that use the "ocean sigma over z" coordinate.

and in the conformance document we add another recommendation for Appendix D:

Versions of the standard before 1.9 should not be used for vertical coordinate variables with standard_name = "ocean_sigma_z_coordinate" because these versions are defective in their definition of this coordinate.

Best wishes

Jonathan

@davidhassell

Dear Jonathan,

Oh - what a difference a day makes!

Thanks for the text. I think it is fine, but it states that it is still OK to produce CF-1.8 datasets if they don't contain this particular formula - is that what we want? I would have thought that the creation of all new CF-1.8 datasets should be deprecated.

David

@pp-mo

pp-mo commented May 20, 2021

I would have thought that the creation of all new CF-1.8 datasets should be deprecated.

Just listening in here and heard something that affects us (i.e. Iris).

I think that disallowing any creation of datasets using older conventions would be problematic for us, likewise maybe anyone who writes generic CF handling code.

So, Iris now supports "most" of CF-1.7 for loading. We therefore aim to be CF-1.7 compliant, and that is the version level that we state in our output datafiles.
Although we still don't support all of CF-1.7 (even), that is the level we are currently notionally aspiring to.
In particular, it is the lowest convention version consistent with any output we may currently produce.

So for us, stating CF-1.7 for output data means "you won't find anything in here that is not described by CF-1.7".
For our users, i.e. from the point of view of an individual or software tool interpreting the data, this is the statement that most usefully describes what that data might contain.

So, I think it would be unhelpful for us to state compliance with a later version (e.g. 1.8), even though that is "consistent" with our output, if our outputs contain no CF-1.8-level features.
We would move to stating CF-1.8 only when our output code adopts some CF-1.8 concepts, and so requires that level to be correctly understood.

Sorry for hijacking your specific discussion with such a general query.
@JonathanGregory @davidhassell how do you think this relates to the discussion here ?

@JonathanGregory

Dear @davidhassell and Patrick @pp-mo

The rules state that we should deprecate flawed versions of CF, as David says. The aim of doing this is to discourage any more data from being written which will be problematic for users because of the defect which has been discovered. Therefore I'm inclined to think we don't need to deprecate all existing versions of CF to deal with this defect, because it's very limited in scope. We only need to discourage any data from being written with versions <1.9 for the "sigma over z" coordinate.

Patrick: The concern of this discussion, which we're resolving with this proposal, is that the text of "sigma over z" in 1.8 and previous versions doesn't describe a correct vertical coordinate variable. If your software supports writing this coordinate on output, the resulting data is flawed. That's not your fault; it's CF's mistake. It doesn't seem like a good idea to continue to write flawed data, does it? It would be better to withdraw support for writing "sigma over z" or to implement the new version. I think it would be fine to call it 1.7 even if you include the 1.9 version of "sigma over z" because it makes more sense than the 1.7 version we are deprecating. Maybe we should say that as well. What do you think?

Best wishes

Jonathan

@ethanrd

ethanrd commented May 20, 2021

Hi all - The paragraph in the rules document that mentions deprecation seems focused on recent (or even the most recent) changes/versions. The paragraph starts with

If the change, once implemented in the conventions, subsequently turns out to be materially flawed … a github issue should urgently be opened to discuss whether to revoke the change.

This “ocean sigma over z” deprecation would affect all pre-1.9 versions of CF. So there is no change to revoke. How to handle this situation seems unclear, though I lean towards agreement with @JonathanGregory and @pp-mo that it should be more granular than entire versions.

Given all that, I think Jonathan’s text looks good.

So as not to slow down the resolution of this issue, I propose we move further discussion of the rules around deprecation to a new issue. (I’ll create an issue shortly for further discussion.)

@JonathanGregory

Should we implement this change without the deprecation, and rely on Ethan's new issue #328 to take care of it later?

@zklaus

zklaus commented May 21, 2021

I was a bit confused by how the term deprecation was used by @davidhassell and @JonathanGregory, so I searched for it in the issue, finding that I myself introduced it here. Allow me to clarify how I understand it.

Deprecation doesn't apply to versions of an artifact, be it a software package or a standards document. Rather, it applies to specific features. What it says is: "We think this feature should not be used going forward. To allow for a transition period, we do not remove it at this point in time, so you can still use it for a bit, but we'd rather you don't, and we want to remove it in a later version." In my mind, it does not retroactively declare past versions wrong, and writing a new file today that declares that it follows the CF conventions version 1.6 is perfectly legal, if ill-advised.

Independent of any deprecation, we might want to have a recommendation to always use the latest version of CF available for new developments.

@JonathanGregory

I will reply to Klaus @zklaus in Ethan's new issue #328

@pp-mo

pp-mo commented May 21, 2021

Thanks for your clarifications @JonathanGregory and @zklaus.
I think I've understood this more clearly now.
And I'm happy to say, I'm agreeing with what you are both saying!

@ethanrd

ethanrd commented May 21, 2021

Should we implement this change without the deprecation, and rely on Ethan's new issue #328 to take care of it later?

I was suggesting moving forward with the deprecation language for this issue and revisiting it once item #328 comes to a conclusion. There are a few other deprecation items currently in the spec (I'll add a list of those in issue #328 shortly) that we'll have to review as well. So adding the deprecation language now would mean all the current deprecations are in one place. But I'm fine either way if waiting on this deprecation language seems a cleaner approach.

@JonathanGregory

I have updated the pull request #317 to include the proposed deprecation text. If there are no further comments, it can be accepted in three weeks beginning from five days ago when I proposed it. That makes 9th June.

@davidhassell

That's good for me, thanks @JonathanGregory

@JonathanGregory

The three weeks have passed without comment. Please could you merge the pull request, @davidhassell ?
