Initial v1beta2 (experimental) cut #1694

Merged
openshift-bot merged 3 commits into openshift:master from v1beta2 on Apr 16, 2015

Conversation

smarterclayton
Contributor

@ncdc review please

This enables v1beta2 as experimental, exposing only the new v1beta2 objects.

@smarterclayton smarterclayton force-pushed the v1beta2 branch 3 times, most recently from 6750858 to 40ea64a Compare April 10, 2015 22:06
@smarterclayton smarterclayton changed the title WIP - Initial v1beta2 (experimental) cut Initial v1beta2 (experimental) cut Apr 10, 2015
@smarterclayton
Contributor Author

@bparees @pweil- @ironcladlou this does not include builds, routes, or deployments, the latter two will need a "spec" / "status", and the former should include the issues discussed in other threads. Can you guys make sure there is a card / issue to track each of those sections into v1beta2? Thanks.

@smarterclayton
Contributor Author

@deads2k @liggitt see the last commit - in order to properly default values (using the scheme defaults) the internal version needs to go through a conversion. This makes defaults versionable (which they aren't today).
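
Roughly the shape of it (a minimal sketch with made-up types and defaults, not the actual config code):

```go
package main

import "fmt"

// Config is the internal representation; it carries no defaults of its own.
type Config struct {
	BindAddress string
}

// v1beta2Config is the versioned wire representation.
type v1beta2Config struct {
	BindAddress string
}

// Defaulting lives with the version, the way scheme defaults do.
func defaultV1Beta2(c *v1beta2Config) {
	if c.BindAddress == "" {
		c.BindAddress = "0.0.0.0:8443" // hypothetical v1beta2 default
	}
}

// Round-tripping internal -> v1beta2 -> internal applies v1beta2's
// defaults to the internal object; a later version is free to pick
// different ones without breaking configs already in the wild.
func applyV1Beta2Defaults(in *Config) {
	out := v1beta2Config{BindAddress: in.BindAddress}
	defaultV1Beta2(&out)
	in.BindAddress = out.BindAddress
}

func main() {
	c := Config{}
	applyV1Beta2Defaults(&c)
	fmt.Println(c.BindAddress) // 0.0.0.0:8443
}
```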

@deads2k
Contributor

deads2k commented Apr 13, 2015

I'm not clear why we would want to version our default values in a way that is separate from us building the config itself. I would have said that the buildSerializeableConfig methods were a giant defaulter intended for matched-set usage. Either we'd serialize out however we want into a fresh file or we'd use the memory copy directly. I can't envision a scenario where we'd want the config to be built in a stale scheme.

@smarterclayton
Contributor Author

If you have value defaulting, you can't change those defaults once real values are in the wild. Since we will ship config that we have to add things to, we will have to set default values for a particular version of API objects. So defaults have to come from a version. It's really no different than our other API objects.
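
Concretely (hypothetical versions and a made-up field, purely to illustrate why a default freezes per version):

```go
package main

import "fmt"

// Once v1beta2 configs exist in the wild, its default can never change;
// a different default has to ship as part of a new version.
func defaultReplicas(version string, replicas *int) {
	if *replicas != 0 {
		return // an explicit value always wins over the default
	}
	switch version {
	case "v1beta2":
		*replicas = 1 // the default v1beta2 shipped with
	case "v1beta3":
		*replicas = 2 // a later version may choose differently
	}
}

func main() {
	var r2, r3 int
	defaultReplicas("v1beta2", &r2)
	defaultReplicas("v1beta3", &r3)
	fmt.Println(r2, r3) // 1 2
}
```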


@deads2k
Contributor

deads2k commented Apr 13, 2015

Defaults that are set during read will happen during the existing conversion process. If it's a concern that there will be value drift, why not continue to build the config using the in-memory representation and round-trip it through conversion before usage?
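
Something like this (a toy sketch using JSON in place of the real codec and scheme):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Config is the in-memory representation.
type Config struct{ Port int }

// externalConfig stands in for the latest serialized version.
type externalConfig struct {
	Port int `json:"port,omitempty"`
}

// defaultExternal mimics the defaulting that already runs on read.
func defaultExternal(c *externalConfig) {
	if c.Port == 0 {
		c.Port = 8443 // hypothetical default
	}
}

// roundTrip pushes a built config through the external form so the
// same read-path defaulting applies before the config is used.
func roundTrip(in Config) (Config, error) {
	data, err := json.Marshal(externalConfig{Port: in.Port})
	if err != nil {
		return in, err
	}
	var out externalConfig
	if err := json.Unmarshal(data, &out); err != nil {
		return in, err
	}
	defaultExternal(&out)
	return Config{Port: out.Port}, nil
}

func main() {
	cfg, _ := roundTrip(Config{})
	fmt.Println(cfg.Port) // 8443
}
```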

@deads2k
Contributor

deads2k commented Apr 13, 2015

Wouldn't we always round trip through the latest version of the scheme? I don't see why we'd continue using the old scheme for building the config in code.

It seems like our config validation (against the in-memory package) is going to force any bad defaulting into the open. At the very least, it would prevent usage.

@smarterclayton
Contributor Author

So that we don't have to support round-tripping old config versions.


@smarterclayton
Contributor Author

The only risk is if we have behavior that can't be expressed in the external storage version (like objects internally that are read). If you think we have no expectation of doing that, we can round trip to latest.

However, it's looking likely that we will start using versioned objects internally for all the reasons mentioned and a few more. Validation is likely to move to external objects as well. We don't have to beat the rush, but we do need to be aware.


// Optional, if specified this stream is backed by a Docker repository on this server
DockerImageRepository string `json:"dockerImageRepository,omitempty"`
// Tags map arbitrary string values to specific image locators
Tags []NamedTagReference `json:"tags,omitempty"`
Contributor

I know we just created this as part of the ImageStream work, but I'm questioning if TagReference is the appropriate name for this type. It's entirely valid to have annotations for a tag without there being any "reference" part. Would TagSpec or something along those lines be more appropriate?
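
For illustration, a hypothetical shape for the rename (the field set here is assumed for the sketch, using the kapi alias from the surrounding diff, not taken verbatim from it):

```go
// TagSpec (hypothetical rename of NamedTagReference): metadata for a
// tag, valid even when the tag doesn't reference a concrete image.
type TagSpec struct {
	// Name is the tag, e.g. "latest".
	Name string `json:"name"`
	// Annotations apply to the tag with or without a reference.
	Annotations map[string]string `json:"annotations,omitempty"`
	// From optionally points at the image this tag tracks.
	From *kapi.ObjectReference `json:"from,omitempty"`
}
```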

Contributor Author

Sure.


@pweil-

pweil- commented Apr 13, 2015

> @bparees @pweil- @ironcladlou this does not include builds, routes, or deployments, the latter two will need a "spec" / "status", and the former should include the issues discussed in other threads. Can you guys make sure there is a card / issue to track each of those sections into v1beta2? Thanks.

Yep, I have a card for the multi-port changes that were in the new rebase

@smarterclayton
Contributor Author

@deads2k all comments addressed PTAL

@deads2k
Contributor

deads2k commented Apr 13, 2015

One comment, otherwise the last commit lgtm.

@derekwaynecarr
Member

[test]

@openshift-bot
Contributor

continuous-integration/openshift-jenkins/test SUCCESS (https://ci.openshift.redhat.com/jenkins/job/test_pull_requests_openshift3/1833/)

@smarterclayton
Contributor Author

[merge]

@openshift-bot
Contributor

continuous-integration/openshift-jenkins/merge SUCCESS (https://ci.openshift.redhat.com/jenkins/job/merge_pull_requests_openshift3/1591/) (Image: devenv-fedora_1286)

@openshift-bot
Contributor

Evaluated for origin up to 6a7364d

openshift-bot pushed a commit that referenced this pull request Apr 16, 2015
@openshift-bot openshift-bot merged commit 8c99516 into openshift:master Apr 16, 2015
@smarterclayton smarterclayton deleted the v1beta2 branch May 18, 2015 02:16
jboyd01 pushed a commit to jboyd01/origin that referenced this pull request Feb 15, 2018
…service-catalog/' changes from b69b4a6c80..b758460ba7

b758460ba7 origin build: modify hard coded path
871582f73a origin build: add origin tooling
9fa4e70 chart changes for v0.1.8 (openshift#1741)
cada49c handle instance deletion that occurs during async provisioning or async update (openshift#1587) (openshift#1708)
3032f01 phony output binaries (openshift#1729)
0c98a72 remove last vestiges of glide (openshift#1696)
8435935 Prune vendor (openshift#1739)
0f657ec allow setting go version, clean up alignment
08af73f Disable test-dep target temporarily
41984a5 Check for existing bindings only for instances with DeprovisionStatus == ServiceInstanceDeprovisionStatusRequired. (openshift#1640)
706e555 chart changes for v0.1.7 (openshift#1721)
23644db we inconsistently rm thing with and without docker (openshift#1713)
a38092d Chart changes for Release v0.1.6 (openshift#1718)
2fd4ecf Add PodPreset into settings api group (openshift#1694)
bac68f4 update docs of developer's guide (openshift#1716)
3200b16 add integration test for proper async binding retry (openshift#1688)
6d809c3 Add custom columns to OpenAPI schema (openshift#1597)
fcdefa6 Workaround spf13/viper stringarray bug (openshift#1700)
ebbeb8c undo 6bad71d358ad3ad39eb8c003f5807cca1ec1d1e7 (openshift#1714)
1ee9659 Load all client auth plugins in the cli (openshift#1695)
b9ad10d must run tests (openshift#1698)
c621cdc add stages to Travis
REVERT: b69b4a6c80 origin build: modify hard coded path
REVERT: 527fac4d02 origin build: add origin tooling

git-subtree-dir: cmd/service-catalog/go/src/github.com/kubernetes-incubator/service-catalog
git-subtree-split: b758460ba7a45d370da9d5d634e71c16e9eb282a