
'MANIFEST_INVALID' when uploading to Artifactory #534

Closed
jinglejengel opened this issue Jul 10, 2018 · 39 comments

@jinglejengel

Description of the issue: When running Jib as part of a Maven goal, I get the following:

[ERROR] Failed to execute goal com.google.cloud.tools:jib-maven-plugin:0.9.2:build (build-docker-image) on project jengel-service: Build image failed: Tried to push image manifest for $OUR_REGISTRY/sre/jengel-test:jib but failed because: manifest invalid (something went wrong) | If this is a bug, please file an issue at https://github.com/GoogleContainerTools/jib/issues/new: 400 Bad Request
[ERROR] {"errors":[{"code":"MANIFEST_INVALID","message":"manifest invalid","detail":{"description":"null"}}]}

Looking at the Artifactory logs, it seems like the manifest being uploaded is null:

2018-07-10 19:55:25,650 [http-nio-8081-exec-55] [INFO ] (o.a.e.UploadServiceImpl:360) - Deploy to 'et-docker-local:sre/jengel-test/_uploads/49928209-6f48-451f-8bd3-57a329be6880' Content-Length: unspecified
2018-07-10 19:55:25,748 [http-nio-8081-exec-66] [INFO ] (o.j.r.d.v.r.h.DockerV2LocalRepoHandler:257) - Deploying docker manifest for repo 'sre/jengel-test' and tag 'jib' into repo 'et-docker-local'
2018-07-10 19:55:25,761 [http-nio-8081-exec-66] [ERROR] (o.j.r.d.v.r.h.DockerV2LocalRepoHandler:298) - Error uploading manifest: 'null'

Expected behavior: The image should be uploaded to Artifactory :D

Steps to reproduce:

  • Add an Artifactory repo as your <to><image> target
  • Execute mvn to publish the image
  • Receive the error ):

Environment:

  • Currently trying on macOS 10.13.5
  • mvn 3.5.3
  • jib 0.9.2
  • Artifactory 6.0.1

jib-maven-plugin Configuration:

                    <plugin>
                        <groupId>com.google.cloud.tools</groupId>
                        <artifactId>jib-maven-plugin</artifactId>
                        <version>0.9.2</version>
                        <configuration>
                            <from>
                                <image>$OUR_REGISTRY/frolvlad/alpine-oraclejdk8:slim</image>
                            </from>
                            <to>
                                <image> $OUR_REGISTRY/sre/jengel-test:jib</image>
                            </to>
                            <container>
                                <jvmFlags>
                                    ...
                                </jvmFlags>
                                <format>OCI</format>
                            </container>
                        </configuration>
                        <executions>
                            <execution>
                                <id>build-docker-image</id>
                                <goals>
                                    <goal>build</goal>
                                </goals>
                                <phase>package</phase>
                            </execution>
                        </executions>
                    </plugin>
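One variable worth isolating (a suggestion, not a confirmed fix): the configuration above requests the OCI image format. Per the jib-maven-plugin documentation, <format> also accepts Docker (the V2.2 manifest format), which some registries handle better than OCI; switching it is a one-line change:

```xml
<container>
    <jvmFlags>
        ...
    </jvmFlags>
    <!-- Docker V2.2 manifest format instead of OCI -->
    <format>Docker</format>
</container>
```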

Log output: See description above for relevant logs

Additional Information:

Also tried running Maven with the -X flag, but nothing especially helpful showed up. I do actually see part of the image layers in an _uploads folder in Artifactory, but no manifest.json like I'd expect from an image. Happy to provide additional information if needed!

@loosebazooka
Member

Hey @Joeskyyy thanks for the report, we'll take a look and see what's wrong with our Artifactory interactions.

@Hi-Fi

Hi-Fi commented Jul 12, 2018

I got the same thing when I tried to push a Jib-built image from a local registry:2 (https://docs.docker.com/registry/deploying/#run-a-local-registry) to OpenShift (Red Hat Container Registry).

$ docker pull localhost:5000/jib-test-project:1.5.2 
...
$ docker tag localhost:5000/jib-test-project:1.5.2 openshift-redhat-registry/testProject/jib-test-project:1.5.2
$ docker push openshift-redhat-registry/testProject/jib-test-project:1.5.2
The push refers to a repository [openshift-redhat-registry/testProject/jib-test-project]
465ecf4ed5fd: Layer already exists 
141f70ae30db: Layer already exists 
a9dbec2c58b4: Layer already exists 
43e653f84b79: Layer already exists 
empty history when trying to create schema1 manifest

When checking with API calls, a Jib-generated image pushed to the local registry by Jib:

$ curl -X GET http://localhost:5000/v2/jib-test-project/tags/list
{"name":"jib-test-project","tags":["1.5.2","1.6.0-SNAPSHOT"]}
$ curl -X GET http://localhost:5000/v2/jib-test-project/manifests/1.5.2
{"errors":[{"code":"MANIFEST_INVALID","message":"manifest invalid","detail":{}}]}

A CentOS image from Docker Hub pushed to the local registry by Docker (renamed to keep things clearer):

$ curl -X GET http://localhost:5000/v2/centos-local/tags/list
{"name":"centos-local","tags":["latest"]}
$ curl -X GET http://localhost:5000/v2/centos-local/manifests/latest
{
   "schemaVersion": 1,
   "name": "centos-local",...
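One detail worth double-checking when probing manifests like this (an assumption on my part, not something verified in this thread): the Registry HTTP API lets clients state which manifest schema they accept via an Accept header, and without it a registry may attempt an on-the-fly conversion to schema 1, which can fail for images lacking a history field. A minimal Python sketch of the GET the curl commands above issue, with the schema 2 Accept header added (registry URL and repo name reuse the example values):

```python
import urllib.request

# Media type for Docker Image Manifest V2, Schema 2.
MANIFEST_V2 = "application/vnd.docker.distribution.manifest.v2+json"

def manifest_request(registry: str, repo: str, reference: str) -> urllib.request.Request:
    """Build a manifest GET request that advertises schema 2 support.

    Without the Accept header, a registry may fall back to converting
    the stored manifest to schema 1 on the fly.
    """
    url = f"{registry}/v2/{repo}/manifests/{reference}"
    request = urllib.request.Request(url)
    request.add_header("Accept", MANIFEST_V2)
    return request

req = manifest_request("http://localhost:5000", "jib-test-project", "1.5.2")
```

The request object is only built here, not sent; pass it to urllib.request.urlopen against a live registry to see which representation comes back.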

@coollog
Contributor

coollog commented Jul 12, 2018

Hmm, this might be that there is something in our manifest format that OpenShift and Artifactory find to be invalid. I wish those registries could give more detail as to why, rather than just giving back "manifest invalid". We will look more into this and try to fix it as a high-priority issue. @GoogleContainerTools/java-tools

@chanseokoh
Member

empty history when trying to create schema1 manifest

The manifest of the CentOS image, which works fine with OpenShift, seems to be schema 1 (although I could be totally off base).

$ curl -X GET http://localhost:5000/v2/centos-local/manifests/latest
{
   "schemaVersion": 1,

The OpenShift doc says

For this reason, the registry is configured by default not to store schema2. This ensures that any docker client will be able to pull from the registry any image pushed there regardless of client’s version.

Just a wild guess from the logs and the OpenShift doc above: I guess we are generating schema 2 manifests and the user's OpenShift repository cannot store them? Maybe the repository is trying to convert them to schema 1 on the fly?
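The two schema versions are easy to tell apart from their top-level fields: schema 1 manifests carry fsLayers and history entries, while schema 2 manifests carry a config descriptor and a layers list. A rough sketch of that distinction (assuming those fields are the only discriminators needed):

```python
def manifest_schema(manifest: dict) -> int:
    """Infer a Docker manifest's schema version from its fields.

    Schema 1 manifests have 'fsLayers' and 'history'; schema 2
    manifests have a 'config' descriptor and a 'layers' list,
    plus an explicit mediaType.
    """
    if manifest.get("schemaVersion") == 1 or "fsLayers" in manifest:
        return 1
    if "config" in manifest and "layers" in manifest:
        return 2
    raise ValueError("unrecognized manifest layout")

# Shapes mirroring the two manifests quoted in this thread.
centos_like = {"schemaVersion": 1, "name": "centos-local",
               "fsLayers": [], "history": []}
jib_like = {"schemaVersion": 2,
            "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
            "config": {}, "layers": []}
```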

@saturnism

/cc @gshipley @jbaruch

@marekjelen

/cc @mfojtik

@jainishshah17

@Joeskyyy Can you try creating a new local Docker registry in Artifactory with V1 API support and push the image again? You will see the flag below when creating a new local Docker registry in Artifactory.

[Screenshot from 2018-07-12: Artifactory's new local Docker repository settings, showing the API version flag]

@jorgemoralespou

jorgemoralespou commented Jul 13, 2018

The OpenShift registry can store manifest v2 schema 2; even though it is not enabled by default, it can easily be turned on: https://docs.openshift.com/container-platform/3.9/install_config/registry/extended_registry_configuration.html#docker-registry-configuration-reference-middleware, and it should accept schema 2 since 3.6.
Configuration in 1.5 vs Configuration in 3.6

@Hi-Fi can you verify what version of OpenShift you were testing against?

The reason is very well explained here: https://docs.openshift.com/container-platform/3.9/install_config/registry/extended_registry_configuration.html#middleware-repository-acceptschema2 and it's basically to support Docker versions prior to 1.10.

I would think that nowadays there are very few (close to none) users with older versions of the docker CLI, but it would be good to support both schema 1 and schema 2 in the plugin, defaulting to schema 2 but with the ability to override the value, and obviously a nicer error message explaining what to do in case the registry responds that schema 2 is not supported.

Edited by coollog: @jorgemoralespou Thanks for the suggestion! I'll file an issue for this: #601

@Hi-Fi

Hi-Fi commented Jul 13, 2018

@jorgemoralespou: The OpenShift we have is:
OpenShift Master: v3.6.173.0.49
Kubernetes Master: v1.6.1+5115d708d7

@jorgemoralespou

jorgemoralespou commented Jul 13, 2018

/cc @bparees Can you provide any more insight into what's going on here?

@bparees

bparees commented Jul 13, 2018

@jorgemoralespou the registry logs might provide more insight. But the theory that a v2 schema is being pushed and the registry is rejecting it seems plausible (I think the normal docker client managed to recognize that situation and fall back to schema 1).

@legionus and @dmage are the real experts here though.

@mzagar

mzagar commented Jul 18, 2018

Hi all, FYI I created a ticket with JFrog for Artifactory, trying to get more info on what is causing this issue:
https://www.jfrog.com/jira/browse/RTFACT-17134

@chanseokoh
Member

For the Artifactory issue: from the Artifactory server log in the above Artifactory service ticket (https://www.jfrog.com/jira/browse/RTFACT-17134) created by @mzagar, I suspect it's the Artifactory JSON manifest parser that is breaking. Also from the log, Artifactory does seem capable of handling manifest schema 2.

2018-07-18 08:09:06,855 [http-nio-8081-exec-200] [ERROR] (o.j.r.d.m.ManifestSchema2Deserializer:133) - ManifestSchema2Deserializer CIRCUIT BREAKER: 5000 Iterations ware performed breaking operation.

@Hi-Fi

Hi-Fi commented Aug 6, 2018

Seems that the files are uploaded OK to Artifactory, so only the manifest error is preventing containers from being usable from there:

[INFO] Retrieving registry credentials for test_artifactory_server:15000...
[INFO] Getting base image test_artifactory_server:15000/test_project/distroless-java...
[INFO] Building dependencies layer...
[INFO] Building snapshot dependencies layer...
[INFO] Building resources layer...
[INFO] Building classes layer...
[WARNING] Failed to connect to https://test_artifactory_server:15000/v2/ over HTTPS. Attempting again with HTTP: http://test_artifactory_server:15000/v2/
[WARNING] Failed to connect to https://test_artifactory_server:15000/v2/test_project/distroless-java/manifests/latest over HTTPS. Attempting again with HTTP: http://test_artifactory_server:15000/v2/test_project/distroless-java/manifests/latest
[INFO] The base image requires auth. Trying again for test_artifactory_server:15000/test_project/distroless-java...
[INFO] Retrieving registry credentials for test_artifactory_server:15000...
[WARNING] Failed to connect to https://test_artifactory_server:15000/v2/test_project/distroless-java/manifests/latest over HTTPS. Attempting again with HTTP: http://test_artifactory_server:15000/v2/test_project/distroless-java/manifests/latest
[WARNING] Failed to connect to https://test_artifactory_server:15000/v2/test_project/test_application/blobs/sha256:c6214c301930c8b579c746a317b3968520dda6e4850731b3aa26795892e9995f over HTTPS. Attempting again with HTTP: http://test_artifactory_server:15000/v2/test_project/test_application/blobs/sha256:c6214c301930c8b579c746a317b3968520dda6e4850731b3aa26795892e9995f
[WARNING] Failed to connect to https://test_artifactory_server:15000/v2/test_project/test_application/blobs/sha256:363596415e324a41b3cb785da315800d728221092766287806f50f950b5f5366 over HTTPS. Attempting again with HTTP: http://test_artifactory_server:15000/v2/test_project/test_application/blobs/sha256:363596415e324a41b3cb785da315800d728221092766287806f50f950b5f5366
[WARNING] Failed to connect to https://test_artifactory_server:15000/v2/test_project/test_application/blobs/sha256:ceea43ecb1e567009daa48f0ff3f987a49c01c4b0e20fded2e7dc13d26f7632e over HTTPS. Attempting again with HTTP: http://test_artifactory_server:15000/v2/test_project/test_application/blobs/sha256:ceea43ecb1e567009daa48f0ff3f987a49c01c4b0e20fded2e7dc13d26f7632e
[WARNING] Failed to connect to https://test_artifactory_server:15000/v2/test_project/distroless-java/blobs/sha256:be7cce24ab14c13de06435c05c9670afd73aebbb96dfd59c49fce9db826b364f over HTTPS. Attempting again with HTTP: http://test_artifactory_server:15000/v2/test_project/distroless-java/blobs/sha256:be7cce24ab14c13de06435c05c9670afd73aebbb96dfd59c49fce9db826b364f
[WARNING] Failed to connect to https://test_artifactory_server:15000/v2/test_project/test_application/blobs/uploads/?mount=sha256:363596415e324a41b3cb785da315800d728221092766287806f50f950b5f5366&from=test_project/distroless-java over HTTPS. Attempting again with HTTP: http://test_artifactory_server:15000/v2/test_project/test_application/blobs/uploads/?mount=sha256:363596415e324a41b3cb785da315800d728221092766287806f50f950b5f5366&from=test_project/distroless-java
[WARNING] Failed to connect to https://test_artifactory_server:15000/v2/test_project/test_application/blobs/uploads/?mount=sha256:c6214c301930c8b579c746a317b3968520dda6e4850731b3aa26795892e9995f&from=test_project/distroless-java over HTTPS. Attempting again with HTTP: http://test_artifactory_server:15000/v2/test_project/test_application/blobs/uploads/?mount=sha256:c6214c301930c8b579c746a317b3968520dda6e4850731b3aa26795892e9995f&from=test_project/distroless-java
[WARNING] Failed to connect to https://test_artifactory_server:15000/v2/test_project/test_application/blobs/sha256:fc7b4c7963f54d0db0563d722be2969ca2b2962bb2cd18ea92e1ff50da34b052 over HTTPS. Attempting again with HTTP: http://test_artifactory_server:15000/v2/test_project/test_application/blobs/sha256:fc7b4c7963f54d0db0563d722be2969ca2b2962bb2cd18ea92e1ff50da34b052
[WARNING] Failed to connect to https://test_artifactory_server:15000/v2/test_project/test_application/blobs/sha256:5ffca8b298d5abfc906ae13e87ec2627d482ae8cb8184ca031accb963243fc6a over HTTPS. Attempting again with HTTP: http://test_artifactory_server:15000/v2/test_project/test_application/blobs/sha256:5ffca8b298d5abfc906ae13e87ec2627d482ae8cb8184ca031accb963243fc6a
[WARNING] Failed to connect to https://test_artifactory_server:15000/v2/test_project/test_application/blobs/sha256:a1aef001875f598a7051ad8d2abbb89be66a29d13c608d65dba7c1c906c668a9 over HTTPS. Attempting again with HTTP: http://test_artifactory_server:15000/v2/test_project/test_application/blobs/sha256:a1aef001875f598a7051ad8d2abbb89be66a29d13c608d65dba7c1c906c668a9
[INFO] Finalizing...
[WARNING] Failed to connect to https://test_artifactory_server:15000/v2/test_project/test_application/blobs/sha256:7c46de080f2d1c5fb8c4bfbb3cca885c7570299597673f4d985549f850875828 over HTTPS. Attempting again with HTTP: http://test_artifactory_server:15000/v2/test_project/test_application/blobs/sha256:7c46de080f2d1c5fb8c4bfbb3cca885c7570299597673f4d985549f850875828
[WARNING] Failed to connect to https://test_artifactory_server:15000/v2/test_project/test_application/blobs/sha256:da61c0efb79b224fe7e9147408d60084d973160716de035dd4aec7ace4b672a3 over HTTPS. Attempting again with HTTP: http://test_artifactory_server:15000/v2/test_project/test_application/blobs/sha256:da61c0efb79b224fe7e9147408d60084d973160716de035dd4aec7ace4b672a3
[WARNING] Failed to connect to https://test_artifactory_server:15000/v2/test_project/test_application/blobs/uploads/?mount=sha256:da61c0efb79b224fe7e9147408d60084d973160716de035dd4aec7ace4b672a3&from=test_project/distroless-java over HTTPS. Attempting again with HTTP: http://test_artifactory_server:15000/v2/test_project/test_application/blobs/uploads/?mount=sha256:da61c0efb79b224fe7e9147408d60084d973160716de035dd4aec7ace4b672a3&from=test_project/distroless-java
[WARNING] Failed to connect to https://test_artifactory_server:15000/v2/test_project/test_application/manifests/latest over HTTPS. Attempting again with HTTP: http://test_artifactory_server:15000/v2/test_project/test_application/manifests/latest
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 14.037 s
[INFO] Finished at: 2018-08-06T09:29:14+02:00
[INFO] Final Memory: 64M/671M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal com.google.cloud.tools:jib-maven-plugin:0.9.8:build (default) on project test_application: Tried to push image manifest for test_artifactory_server:15000/test_project/test_application:latest but failed because: manifest invalid (something went wrong) | If this is a bug, please file an issue at https://github.com/GoogleContainerTools/jib/issues/new: 400 Bad Request
[ERROR] {"errors":[{"code":"MANIFEST_INVALID","message":"manifest invalid","detail":{"description":"Circuit Breaker Threshold Reached, Breaking Operation. see log output for manifest details."}}]}


@Hi-Fi

Hi-Fi commented Aug 6, 2018

With OpenShift and Artifactory it seems that the target/jib-cache/metadata-v2.json file is almost empty, its content being:
{"layers":[]}

When running successfully against a local registry:2, the file has layer information (including the files in each layer), but none of e.g. schemaVersion, mediaType, or config, which are present in the manifest downloaded from the Artifactory item.

@Hi-Fi

Hi-Fi commented Aug 9, 2018

Did some tcpdumping to see what's different. Results below (I haven't yet figured out what is causing the issue). I'm also not sure whether (without clearing those messages) the PUT should work a second time, or long after the other things are uploaded. But at least some differences can be seen.

Jib:

PUT /v2/test_project/test_application/manifests/latest HTTP/1.1
Accept:
Accept-Encoding: gzip
Authorization: Bearer removedValidToken
User-Agent: jib 0.9.9-SNAPSHOT jib-maven-plugin Google-HTTP-Java-Client/1.23.0 (gzip)
Transfer-Encoding: chunked
Content-Type: application/vnd.docker.distribution.manifest.v2+json
Host: test_artifactory_server:15000
Connection: Keep-Alive

{
	"schemaVersion":2,
	"mediaType":"application/vnd.docker.distribution.manifest.v2+json",
	"config":
		{
			"mediaType":"application/vnd.docker.container.image.v1+json",
			"digest":"sha256:b2a10b2a921637bbee627553275e6f7a849fa511b2bc125167a00494da6e090e",
			"size":963
		},
	"layers": [
		{
			"mediaType":"application/vnd.docker.image.rootfs.diff.tar.gzip",
			"digest":"sha256:fc7b4c7963f54d0db0563d722be2969ca2b2962bb2cd18ea92e1ff50da34b052",
			"size":7869666
		},
		{
			"mediaType":"application/vnd.docker.image.rootfs.diff.tar.gzip",
			"digest":"sha256:5ffca8b298d5abfc906ae13e87ec2627d482ae8cb8184ca031accb963243fc6a",
			"size":643663
		},
		{
			"mediaType":"application/vnd.docker.image.rootfs.diff.tar.gzip",
			"digest":"sha256:a1aef001875f598a7051ad8d2abbb89be66a29d13c608d65dba7c1c906c668a9",
			"size":38870533
		},
		{
			"mediaType":"application/vnd.docker.image.rootfs.diff.tar.gzip",
			"digest":"sha256:7c46de080f2d1c5fb8c4bfbb3cca885c7570299597673f4d985549f850875828",
			"size":58172233
		},
		{
			"mediaType":"application/vnd.docker.image.rootfs.diff.tar.gzip",
			"digest":"sha256:363596415e324a41b3cb785da315800d728221092766287806f50f950b5f5366",
			"size":315660
		},
		{
			"mediaType":"application/vnd.docker.image.rootfs.diff.tar.gzip",
			"digest":"sha256:f73f9ca393435c6e2f011779ee355e622205ac06d88ab956e480001764b4b1ad",
			"size":3112
		},
		{
			"mediaType":"application/vnd.docker.image.rootfs.diff.tar.gzip",
			"digest":"sha256:ceea43ecb1e567009daa48f0ff3f987a49c01c4b0e20fded2e7dc13d26f7632e",
			"size":19381
		}
	]
}


HTTP/1.1 400 Bad Request
Date: Thu, 09 Aug 2018 11:31:17 GMT
Content-Type: application/json
Transfer-Encoding: chunked
Connection: keep-alive
Server: Artifactory/6.0.1
X-Artifactory-Id: artifactory_identifier
Docker-Distribution-Api-Version: registry/2.0

{
	"errors":
	[
		{
			"code":"MANIFEST_INVALID",
			"message":"manifest invalid",
			"detail":
			{
				"description":"Circuit Breaker Threshold Reached, Breaking Operation. see log output for manifest detail"
			}
		}
	]
}


Docker-client

PUT /v2/test_project/test_application/manifests/latest HTTP/1.1
Host: test_artifactory_server:15000
User-Agent: docker/1.12.6 go/go1.7.4 kernel/3.10.0-514.10.2.el7.x86_64 os/linux arch/amd64 UpstreamClient(Docker-Client/1.12.6 \(linux\))
Content-Length: 1789
Authorization: Bearer removedValidToken
Content-Type: application/vnd.docker.distribution.manifest.v2
Accept-Encoding: gzip
Connection: keep-alive

{
	"schemaVersion": 2,
	"mediaType": "application/vnd.docker.distribution.manifest.v2+json",
	"config": 
		{
			"mediaType": "application/vnd.docker.container.image.v1+json",
			"size": 3473,
			"digest": "sha256:088d204168a91fa85bae7d7208de148c8733243a56f5f3410cb3435be858e499"
		},
	"layers": [
		{
			"mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip",
			"size": 7869666,
			"digest": "sha256:fc7b4c7963f54d0db0563d722be2969ca2b2962bb2cd18ea92e1ff50da34b052"
		},
		{
			"mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip",
			"size": 643663,
			"digest": "sha256:5ffca8b298d5abfc906ae13e87ec2627d482ae8cb8184ca031accb963243fc6a"
		},
		{
			"mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip",
			"size": 38870533,
			"digest": "sha256:a1aef001875f598a7051ad8d2abbb89be66a29d13c608d65dba7c1c906c668a9"
		},
		{
			"mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip",
			"size": 58191675,
			"digest": "sha256:a1d09f4bd8711a78772117d4adc9f0b420a3773cbfe15846f4c65b88b6255a31"
		},
		{
			"mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip",
			"size": 315927,
			"digest": "sha256:7c4d4cbb2a07735cba5bd8a9c7624ecd5170951ec59ac5b96cb5c10e9b51eaad"
		},
		{
			"mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip",
			"size": 3231,
			"digest": "sha256:747822417e29d78d147ceabef840d1b771a7e0d23378bfe3c81cf98e43d9b7b1"
		},
		{
			"mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip",
			"size": 19333,
			"digest": "sha256:5b04d502d1fa4e4b9e1fd9e56c12c966d7787155d0d3a80194963a81c8a997c1"
		}
	]
}

HTTP/1.1 201 Created
Date: Thu, 09 Aug 2018 11:28:25 GMT
Content-Type: text/plain
Content-Length: 0
Connection: keep-alive
Server: Artifactory/6.0.1
X-Artifactory-Id: artifactory_identifier
Docker-Distribution-Api-Version: registry/2.0
Docker-Content-Digest: sha256:9cbafa2ffc22c459065420626b40c5cc65195886289ce0c8969aa1c99cb9fe09

@Hi-Fi

Hi-Fi commented Aug 9, 2018

If I only remove "+json" from MANIFEST_MEDIA_TYPE, the error message is:

{
    "errors": [
        {
            "code": "MANIFEST_INVALID",
            "message": "manifest invalid",
            "detail": {
                "description": "null"
            }
        }
    ]
}

@chanseokoh
Member

@Hi-Fi as I pointed out earlier, this looks like a bug in Artifactory: #534 (comment)

@chanseokoh
Member

@Hi-Fi but thanks for the tcpdump. It is indeed very interesting to compare what Jib and Docker push. They don't seem much different, so I am intrigued.

@chanseokoh
Member

Looks like we are not sending Content-Length, and if so, we should fix that.

@Hi-Fi

Hi-Fi commented Aug 9, 2018

That's OK: the request uses chunked transfer encoding, so it doesn't need Content-Length.
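For completeness, either framing is valid HTTP/1.1; a client can also pre-serialize the manifest and send an explicit Content-Length instead of chunking. A hypothetical client-side sketch (not how Jib is implemented) that would rule chunked encoding in or out as a factor:

```python
import json

MANIFEST_V2 = "application/vnd.docker.distribution.manifest.v2+json"

def framed_manifest(manifest: dict) -> tuple[bytes, dict]:
    """Serialize a manifest once and derive Content-Length from the
    exact bytes that will be sent, avoiding chunked transfer encoding."""
    body = json.dumps(manifest, separators=(",", ":")).encode("utf-8")
    headers = {
        "Content-Type": MANIFEST_V2,
        "Content-Length": str(len(body)),
    }
    return body, headers

body, headers = framed_manifest({"schemaVersion": 2, "mediaType": MANIFEST_V2})
```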

@coollog
Contributor

coollog commented Aug 9, 2018

Hmm, sometimes MANIFEST_INVALID may actually mean that the container configuration is invalid rather than the manifest (or more specifically, that there is a mismatch between the container configuration and the manifest). @Hi-Fi if you don't mind, could you possibly tcpdump those requests (for sending container configuration)?

@Hi-Fi

Hi-Fi commented Aug 9, 2018

You mean the thing in the 'config' element? If so, I actually compared the ones in Artifactory. The Docker version is about 4 times bigger.

@coollog
Contributor

coollog commented Aug 9, 2018

@Hi-Fi Yep - I believe it is about 4 times bigger because Docker adds the history field that contains a bunch of metadata about each layer.

@chanseokoh
Member

@Hi-Fi so one theory we've been speculating on for a while is that the Artifactory parser is not conformant to the standard and crashes if there is no history field.

@coollog can we just put in an empty history? The Artifactory ticket seems stalled, so it'd be interesting to see if that makes Artifactory happy in the next Jib release.
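If the team does experiment with this, the transformation is small. A hypothetical sketch (with_padded_history is an illustrative name, not a Jib API) that pads a container configuration with one placeholder history entry per layer, in case the registry wants the counts to match:

```python
def with_padded_history(container_config: dict, layer_count: int) -> dict:
    """Return a copy of a container configuration with a placeholder
    history entry per layer, for registries that (possibly out of
    spec) choke on configurations lacking a history field.

    The epoch timestamp mirrors the reproducible-build timestamps
    seen elsewhere in this thread.
    """
    padded = dict(container_config)
    padded["history"] = [
        {"created": "1970-01-01T00:00:00Z", "created_by": "jib"}
        for _ in range(layer_count)
    ]
    return padded

original = {"architecture": "amd64", "os": "linux",
            "rootfs": {"type": "layers", "diff_ids": []}}
padded = with_padded_history(original, 7)
```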

@coollog
Contributor

coollog commented Aug 9, 2018

@chanseokoh We can try the solution and test it first, but I'd be hesitant to make it the default behavior, since it adds an unnecessary extra field, and some registries may require the number of items in history to match the number of layers.

@Hi-Fi

Hi-Fi commented Aug 10, 2018

Seems that there are also other differences besides the history part. The files that end up in Artifactory are below. (The Red Hat registry doesn't show those uploads as easily, so I haven't checked how it sees things. Probably the same way.)

From JIB:

{
	"created": "2018-08-09T12:51:57.725Z",
	"architecture": "amd64",
	"os": "linux",
	"config": {
		"Env": ["PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin", "SSL_CERT_FILE=/etc/ssl/certs/ca-certificates.crt"],
		"Entrypoint": ["java", "-Dspring.profiles.active=in-memory", "-cp", "/app/libs/*:/app/resources/:/app/classes/", "org.example.test.application.TestApplication"],
		"Cmd": [],
		"ExposedPorts": {
			"8080/tcp": {}
		}
	},
	"rootfs": {
		"type": "layers",
		"diff_ids": ["sha256:a9872a8d1d8497c269582f6ed3eab8507b258ed1865afa31f46c6f8b3adc88ec", "sha256:6189abe095d53c1c9f2bfc8f50128ee876b9a5d10f9eda1564e5f5357d6ffe61", "sha256:ce50d0a8644296d91457dc2206cd8d13b6253a16b18fc7f353bf5541c882facf", "sha256:b5bef9909a5bc0f2e8b14cdb80de2ac206148643db5714b8e1f3f92d5d17f2ba", "sha256:630535e23e69fc9bc784ade975cbbf078b8d2bd90ee662f6e58d9a2b687552ec", "sha256:1ea6b8dab7190bf60a79464f5f7fbea5c276c3005178009b7d7e317ac0c0a6b0", "sha256:b55ee0e95003a168ca12b7d7a0a8fb05e7fb14fe258156b1d1a85a81012aae08"]
	}
}

From Docker client:

{
	"architecture": "amd64",
	"config": {
		"Hostname": "1f39ec5d9a07",
		"Domainname": "",
		"User": "",
		"AttachStdin": false,
		"AttachStdout": false,
		"AttachStderr": false,
		"ExposedPorts": {
			"8080/tcp": {}
		},
		"Tty": false,
		"OpenStdin": false,
		"StdinOnce": false,
		"Env": ["PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin", "SSL_CERT_FILE=/etc/ssl/certs/ca-certificates.crt"],
		"Cmd": [],
		"Image": "sha256:bc109b38ecc6c38191f4d9d9e295ca1b52c28cc2c749d6e409b47ddcdd269ad6",
		"Volumes": null,
		"WorkingDir": "",
		"Entrypoint": ["java", "-Dspring.profiles.active=in-memory", "-cp", "/app/resources/:/app/classes/:/app/libs/*", "org.example.test.application.TestApplication"],
		"OnBuild": [],
		"Labels": {}
	},
	"container": "5e1acfe1e58c5727d1f59a4a7414af3afd0058893dac336afa9c6b16e4dbb836",
	"container_config": {
		"Hostname": "1f39ec5d9a07",
		"Domainname": "",
		"User": "",
		"AttachStdin": false,
		"AttachStdout": false,
		"AttachStderr": false,
		"ExposedPorts": {
			"8080/tcp": {}
		},
		"Tty": false,
		"OpenStdin": false,
		"StdinOnce": false,
		"Env": ["PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin", "SSL_CERT_FILE=/etc/ssl/certs/ca-certificates.crt"],
		"Cmd": ["/bin/sh", "-c", "#(nop) ", "CMD []"],
		"Image": "sha256:bc109b38ecc6c38191f4d9d9e295ca1b52c28cc2c749d6e409b47ddcdd269ad6",
		"Volumes": null,
		"WorkingDir": "",
		"Entrypoint": ["java", "-Dspring.profiles.active=in-memory", "-cp", "/app/resources/:/app/classes/:/app/libs/*", "org.example.test.application.TestApplication"],
		"OnBuild": [],
		"Labels": {}
	},
	"created": "2018-08-09T11:24:34.760605095Z",
	"docker_version": "1.12.6",
	"history": [{
			"created": "1970-01-01T00:00:00Z",
			"author": "Bazel",
			"created_by": "bazel build ..."
		}, {
			"created": "1970-01-01T00:00:00Z",
			"author": "Bazel",
			"created_by": "bazel build ..."
		}, {
			"created": "1970-01-01T00:00:00Z",
			"author": "Bazel",
			"created_by": "bazel build ..."
		}, {
			"created": "2018-08-09T11:24:14.500844147Z",
			"created_by": "/bin/sh -c #(nop) COPY dir:8c790ae2c9c23d8536de33a3d75c9d6972b7bd25e9743f1bb940c4ef1eb53b36 in /app/libs/ "
		}, {
			"created": "2018-08-09T11:24:17.973122753Z",
			"created_by": "/bin/sh -c #(nop) COPY dir:79bf607e2fc210c90e55b00e3f6849ed9e15ea4002cbe6e0050b1bd1ba96875e in /app/libs/ "
		}, {
			"created": "2018-08-09T11:24:21.866933058Z",
			"created_by": "/bin/sh -c #(nop) COPY dir:0535fdf23e0d29ca47b7ec967e624c25a15a0f20ba740bb4474b142e86f83150 in /app/resources/ "
		}, {
			"created": "2018-08-09T11:24:27.643603513Z",
			"created_by": "/bin/sh -c #(nop) COPY dir:be66b8b2c128d8d03036cc1727c2691b23be61a98eed726e1325e71b8c40e119 in /app/classes/ "
		}, {
			"created": "2018-08-09T11:24:29.880712091Z",
			"created_by": "/bin/sh -c #(nop)  EXPOSE 8080/tcp",
			"empty_layer": true
		}, {
			"created": "2018-08-09T11:24:32.1796031Z",
			"created_by": "/bin/sh -c #(nop)  ENTRYPOINT [\"java\" \"-Dspring.profiles.active=in-memory\" \"-cp\" \"/app/resources/:/app/classes/:/app/libs/*\" \"org.example.test.application.TestApplication\"]",
			"empty_layer": true
		}, {
			"created": "2018-08-09T11:24:34.760605095Z",
			"created_by": "/bin/sh -c #(nop)  CMD []",
			"empty_layer": true
		}
	],
	"os": "linux",
	"rootfs": {
		"type": "layers",
		"diff_ids": ["sha256:a9872a8d1d8497c269582f6ed3eab8507b258ed1865afa31f46c6f8b3adc88ec", "sha256:6189abe095d53c1c9f2bfc8f50128ee876b9a5d10f9eda1564e5f5357d6ffe61", "sha256:ce50d0a8644296d91457dc2206cd8d13b6253a16b18fc7f353bf5541c882facf", "sha256:ca6c65486d1b31ca342d09625ea909ff06bf61b876b35509ae9f9694e170d79e", "sha256:5a09a0404d0626eab14c41267836ecb023eb52d2f73241afb858bfde6336d0d1", "sha256:a0939605667deb59f16ab6e84b596231230e1e024117ae9007009f72244bf043", "sha256:c8853f54e4024a6957a9fda55e4ad25893c33c978897ce63d106f333de567f5e"]
	}
}

Also noticed a funny thing: the Docker client example I pasted was not working when I replayed it directly "as is". When I changed the Content-Type to "application/vnd.docker.distribution.manifest.v2+json" (adding "+json") and used chunked transfer encoding, I was able to remake the PUT request. So the Content-Type is not causing the issue; it's probably just surfacing it in a different way (or surfacing it correctly, since with Content-Length the error is just null).

@Hi-Fi

Hi-Fi commented Aug 10, 2018

At least an empty history didn't do the trick; same error about the circuit breaker threshold.

@coollog
Contributor

coollog commented Aug 10, 2018

Thanks for getting this info! The Docker client is adding a bunch of extra fields that are Docker V2.2-specific and not required by OCI format. I'm wondering if Artifactory is expecting one of these fields to be present.

@Hi-Fi

Hi-Fi commented Aug 15, 2018

Seems that JFrog's docker2artifactory importer checks only a few things for manifest validity: https://github.com/jfrog/docker2artifactory/blob/c1639b1747a4dbb8f348b31ca72adbc9fecae1c4/tests/util/ArtifactoryUtil.py#L51.

@briandealwis
Member

@Hi-Fi The log from RTFACT-17134 indicates that Artifactory is written in Java. That code seems to be from a test helper for a migration utility.

2018-07-18 08:09:06,843 [http-nio-8081-exec-200] [INFO ] (o.j.r.d.v.r.h.DockerV2LocalRepoHandler:256) - Deploying docker manifest for repo 'mzagar/jib-helloworld' and tag '1' into repo 'docker-local'
2018-07-18 08:09:06,855 [http-nio-8081-exec-200] [ERROR] (o.j.r.d.m.ManifestSchema2Deserializer:133) - ManifestSchema2Deserializer CIRCUIT BREAKER: 5000 Iterations ware performed breaking operation.
Manifest: {"schemaVersion":2,"mediaType":"application/vnd.docker.distribution.manifest.v2+json","config":{"mediaType":"application/vnd.docker.container.image.v1+json","digest":"sha256:840d5635c8968b0d1bfb2fa6b94eb755b8e973b09310b2bdf14ff39c608d3b73","size":694},"layers":[{"mediaType":"application/vnd.docker.image.rootfs.diff.tar.gzip","digest":"sha256:57752e7f9593cbfb7101af994b136a369ecc8174332866622db32a264f3fbefd","size":7695860},{"mediaType":"application/vnd.docker.image.rootfs.diff.tar.gzip","digest":"sha256:ba7c544469e514f1a9a4dec59ab640540d50992b288adbb34a1a63c45bf19a24","size":622796},{"mediaType":"application/vnd.docker.image.rootfs.diff.tar.gzip","digest":"sha256:079855b1a2fb11564a2bc8ea9f278bcb2bfa9a7a8edbc22653fd4e2afdf5a0c4","size":38708273},{"mediaType":"application/vnd.docker.image.rootfs.diff.tar.gzip","digest":"sha256:553e0277f162b20549b8e83df8dd07bd79bfa17ffa8d65b4b95e7258353fbe69","size":2476712},{"mediaType":"application/vnd.docker.image.rootfs.diff.tar.gzip","digest":"sha256:12590341691b549b4c86dbbe4fdccc66b0fa39104048252384ab5af1ed89d773","size":104},{"mediaType":"application/vnd.docker.image.rootfs.diff.tar.gzip","digest":"sha256:b5b5c7d33b898d4a1075ab166afc78607af99380e3b85705480c3940f51148e7","size":829}]}
jsonBytes:{"created":"1970-01-01T00:00:00Z","architecture":"amd64","os":"linux","config":{"Env":[],"Entrypoint":["java","-cp","/app/libs/*:/app/resources/:/app/classes/","example.HelloWorld"],"Cmd":[],"ExposedPorts":{}},"rootfs":{"type":"layers","diff_ids":["sha256:a9872a8d1d8497c269582f6ed3eab8507b258ed1865afa31f46c6f8b3adc88ec","sha256:6189abe095d53c1c9f2bfc8f50128ee876b9a5d10f9eda1564e5f5357d6ffe61","sha256:40930fb97dd9e5c822d4b2f9952679758b5272007754a3e2f624c72fa794a008","sha256:8928a2974aad3b02b2e7408678f5ddf1db8bbb24c3cb26dd4292cfc75f078a0f","sha256:06bdc65f4d36b5538573a682d7d5cfcb62e00fe82ef1c21a4ffe743ae9ffac01","sha256:afda9a7d383fa61a2580879372fdb09dbc397e1e80adbc53b9753c1ddd07a7e5"]}}
2018-07-18 08:09:06,855 [http-nio-8081-exec-200] [ERROR] (o.j.r.d.v.r.h.DockerV2LocalRepoHandler:297) - Error uploading manifest: 'Circuit Breaker Threshold Reached, Breaking Operation. see log output for manifest details.'

@Hi-Fi
Copy link

Hi-Fi commented Aug 17, 2018

@briandealwis: Yes, Artifactory is written in Java, but that docker2artifactory importer is a utility provided by JFrog, which (I think) should mean it validates manifests correctly before copying containers from another registry into Artifactory. My point was just that the importer checks only a few fields, which could indicate that Artifactory doesn't require anything else.

I would think it should be quite easy for the JFrog side to tell what parsing is being attempted when that error occurs, but sadly they seem to be quite unresponsive.

@briandealwis
Copy link
Member

@jainishshah17 @jbaruch

I used a method decompiler and walked through Artifactory's ManifestSchema2Deserializer class while attempting to push up an image from Jib. The problem occurs in ManifestSchema2Deserializer#applyAttributesFromContent().

Basically this code walks through the layers and creates blob infos for each layer. Although it attempts to support containers with no history (which is allowed by the spec, and which Jib does not produce), it does not increment layerIndex when there is no history. The code looks something like:

private static ManifestMetadata applyAttributesFromContent(...) {
    ...
    final JsonNode history = config.get("history");
    final JsonNode layers = manifest.get("layers");
    final int historySize = (history == null) ? 0 : history.size();
    final boolean foreignHasHistory = layers.size() == historySize;

    int historyIndex = 0;
    int layersIndex = 0;
    while (historyIndex < historySize || layersIndex < layers.size()) {
        final JsonNode historyLayer = (history == null) ? null : history.get(historyIndex);
        final JsonNode layer = layers.get(layersIndex);
        long size = 0L;
        String digest = null;
        // This condition appears to be problematic:
        //  - Jib images have no history, so `historyLayer` is always `null`,
        //    and `notEmptyHistoryLayer(null)` is always `false`
        //  - none of Jib's layers are foreign
        // so the branch below is never entered...
        if (notEmptyHistoryLayer(historyLayer) || (!foreignHasHistory && isForeignLayer(layer))) { // !!!
            size = layer.get("size").asLong();
            totalSize += size;
            digest = layer.get("digest").asText();
            ++layersIndex; // XXX ...and layersIndex is never incremented
        }
        // build a BlobInfo from digest, size, and created
        // add the blobInfo to the manifest metadata
        // circuit breaker checks
    }
    // populate remaining manifest metadata
}

...

private static boolean notEmptyHistoryLayer(final JsonNode historyLayer) {
    return historyLayer != null && historyLayer.get("empty_layer") == null;
}

It seems to me that the conditional marked with !!! should be expanded to handle the case where historyLayer == null.
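To make the failure mode concrete, here is a minimal standalone sketch (not Artifactory's actual code; the constants and names are illustrative) of the loop above with a Jib-style input: no history array and no foreign layers. The only statement that advances layersIndex sits inside the guarded branch, so the loop never makes progress and only an iteration cap, like the 5000-iteration "circuit breaker" in the log output, stops it.

```java
public class CircuitBreakerDemo {
    public static void main(String[] args) {
        int layerCount = 6;         // e.g. the 6 layers in the manifest logged above
        int historySize = 0;        // Jib emits no "history" array
        boolean branchTaken = false; // stands in for the `if` condition, which is
                                     // always false for Jib images (null history,
                                     // no foreign layers)
        int historyIndex = 0;
        int layersIndex = 0;
        int iterations = 0;
        final int CIRCUIT_BREAKER = 5000;

        while (historyIndex < historySize || layersIndex < layerCount) {
            if (branchTaken) {
                ++layersIndex; // the only place layersIndex advances -- never reached
            }
            if (++iterations >= CIRCUIT_BREAKER) {
                System.out.println("CIRCUIT BREAKER: " + iterations
                        + " iterations performed, breaking operation.");
                break;
            }
        }
        System.out.println("layersIndex advanced to: " + layersIndex);
    }
}
```

Running this prints the circuit-breaker message with layersIndex still at 0, matching the ManifestSchema2Deserializer error in the Artifactory log.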

@jainishshah17
Copy link

@briandealwis Let me check with Dev Team

@Hi-Fi
Copy link

Hi-Fi commented Aug 24, 2018

If history is the cause, #875 then hopefully solves also this one.

@Hi-Fi
Copy link

Hi-Fi commented Aug 24, 2018

Confirmed: adding as many history elements as there are layers makes the container push to Artifactory succeed. Red Hat Container Registry still throws an error in this quick try, so it might require something else.

But Artifactory works when there is an equal number of (non-empty-layer) history elements and layers.

For the test I only added static entries to jib-core/src/main/java/com/google/cloud/tools/jib/image/json/ContainerConfigurationTemplate.java:

private final List<History> history = new ArrayList<>();

private static class History implements JsonTemplate {
    private String author = "Bazel";
    private String created = "1970-01-01T00:00:00Z";
    private String created_by = "bazel_build ...";
}

public void addLayerDiffId(DescriptorDigest diffId) {
    history.add(new History());
    rootfs.diff_ids.add(diffId);
}
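The essence of the workaround above is keeping one history entry per layer so the counts the deserializer compares stay equal. Here is a hypothetical, self-contained sketch of that invariant (the History and addLayerDiffId names mirror the test patch; everything else is illustrative, not Jib's actual ContainerConfigurationTemplate):

```java
import java.util.ArrayList;
import java.util.List;

public class HistoryWorkaroundSketch {
    // Illustrative stand-in for a container config's history entries.
    static class History {
        String author = "Bazel";
        String created = "1970-01-01T00:00:00Z";
        String createdBy = "bazel_build ...";
    }

    private final List<History> history = new ArrayList<>();
    private final List<String> diffIds = new ArrayList<>();

    // Pair every layer diff ID with a synthetic history entry,
    // as in the test patch above.
    public void addLayerDiffId(String diffId) {
        history.add(new History());
        diffIds.add(diffId);
    }

    public static void main(String[] args) {
        HistoryWorkaroundSketch config = new HistoryWorkaroundSketch();
        config.addLayerDiffId("sha256:aaaa...");
        config.addLayerDiffId("sha256:bbbb...");
        // The invariant the Artifactory deserializer effectively relies on:
        System.out.println("history == layers: "
                + (config.history.size() == config.diffIds.size()));
    }
}
```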

@TadCordle TadCordle self-assigned this Aug 27, 2018
@coollog coollog added this to the v0.9.10 milestone Aug 29, 2018
@TadCordle
Copy link
Contributor

Closing this since it should be fixed by #877 (the fix will be in 0.9.10, which should be coming later today or tomorrow).

@zhzhm
Copy link

zhzhm commented Aug 30, 2018

Tested with Artifactory 6.0.2, works well. Big Thanks!

@coollog
Copy link
Contributor

coollog commented Aug 31, 2018

Version 0.9.10 has been released!
