Inability to unzip assets during build on Unix x64 #32805
Comments
Issues still occurring, impacting non-Mac builds now too.
Just happened again here: https://dev.azure.com/dnceng/public/_build/results?buildId=547852. I believe we started a mail thread about this. cc @safern?
Failures since March 3rd (since last update); evaluated 215 builds.
We're tracking this via https://github.com/dotnet/core-eng/issues/9100. If this is hitting often enough, I am willing to bump its priority and sit on a Sev2 IcM bridge; I'm not sure the % hit rate will convince them of that level of importance yet, though.
Closing as the zip disable has fixed this.
Reopening as I'm hitting this in some Windows legs: https://dev.azure.com/dnceng/public/_build/results?buildId=742988&view=logs&j=694d544e-ff71-5faf-b01a-5137c04e57c6&t=38d97292-8f3b-5b9c-e49b-40e136935136
@ViktorHofer this isn't an inability to unzip assets; it's an inability to download them. From your logs:
... but if you download that zip, you'll find it's > 380 MB, not 78 MB. Relevant issue: microsoft/azure-pipelines-tasks#13250
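The truncated-download symptom described above (a zip that arrives far smaller than the real artifact) can be caught before extraction ever starts. Below is a minimal sketch, not taken from this thread, of such a check in Python; the helper name `artifact_looks_complete` and the optional expected-size parameter are illustrative assumptions.

```python
import os
import sys
import zipfile
from typing import Optional


def artifact_looks_complete(path: str, expected_bytes: Optional[int] = None) -> bool:
    """Return True if the downloaded zip looks usable."""
    # A truncated download is usually not a valid zip archive at all.
    if not zipfile.is_zipfile(path):
        return False
    # If the expected size is known (a hypothetical input, e.g. taken from
    # artifact metadata), a short file is an immediate red flag.
    if expected_bytes is not None and os.path.getsize(path) < expected_bytes:
        return False
    # testzip() returns the name of the first corrupt member, or None if
    # every CRC checks out.
    with zipfile.ZipFile(path) as zf:
        return zf.testzip() is None


if __name__ == "__main__":
    target = sys.argv[1]
    ok = artifact_looks_complete(target)
    print(f"{target}: {'looks complete' if ok else 'incomplete or corrupt'}")
    sys.exit(0 if ok else 1)
```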
@safern previously added a workaround to our pipelines for this. Are we missing that workaround in some of the places where we call DownloadBuildArtifacts?
I don't think so. This should be disabled by setting a variable in the yml file, and we set that variable in a place that all pipelines use: runtime/eng/pipelines/common/xplat-setup.yml (lines 29 to 32 at 7242e14).
Yeah, but this was the issue tracking this failure, and the workaround I provided was meant to prevent that from happening.
I struggle to find an action item for this issue beyond continuing to ref-count when it happens again. Hence I'm inclined to move this to Future.
@MattGal the linked issue is closed. From what I understand, we implemented a workaround but this started to happen again. How should we proceed here?
The task we're discussing isn't one we own, and the linked IcM (https://portal.microsofticm.com/imp/v3/incidents/details/177158735/home) was archived without being mitigated back in May. I pinged my contact on that team, but aside from recreating the same IcM and being told again that they haven't figured out how to fix it, your best workaround is probably to download the artifacts yourself directly via AzDO API calls. I'll update once I hear back.
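A rough sketch of that suggested workaround, assuming the public Build Artifacts REST endpoint with `$format=zip` and a personal access token for auth. The organization, project, build id, artifact name, and the helper name `download_artifact` are assumptions for illustration, not anything prescribed in the thread.

```python
import base64
import urllib.request


def download_artifact(org: str, project: str, build_id: int,
                      artifact_name: str, pat: str, dest_path: str) -> None:
    """Fetch one build artifact as a zip via the AzDO REST API (sketch)."""
    url = (
        f"https://dev.azure.com/{org}/{project}/_apis/build/builds/"
        f"{build_id}/artifacts?artifactName={artifact_name}"
        f"&api-version=6.0&%24format=zip"
    )
    # The PAT is passed as the password half of basic auth.
    token = base64.b64encode(f":{pat}".encode()).decode()
    request = urllib.request.Request(url, headers={"Authorization": f"Basic {token}"})
    with urllib.request.urlopen(request) as response, open(dest_path, "wb") as out:
        # Stream in 1 MiB chunks so large artifacts don't sit in memory.
        while True:
            chunk = response.read(1 << 20)
            if not chunk:
                break
            out.write(chunk)
```

The downloaded file can then be validated with a check like the `artifact_looks_complete` sketch above before anything tries to unzip it.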
Just pinged @MattGal about this offline. Even if we are able to mitigate this by collapsing our legs and depending less on the affected task, we will probably not be able to completely get rid of it in the near future. We might want to create a new IcM for the issue.
At @markwilkie's suggestion I'll be tracking my efforts to reduce this problem via the linked core-eng issue.
This is a case where we can safely retry. Pretty much any infra-level issue which is identifiable and occurs before tests run is safe to retry. We don't have to wait for core-eng to provide the necessary logging infra for tests.
You can safely retry the leg, but the problem is that the task "succeeds" when this problem occurs, and runfo can't differentiate this from things like an actual malformed archive. It would definitely be preferable (and this unfortunately fell through the cracks back in July) for the task to actually fail on failure and use its built-in retry mechanisms.
Correct. At the same time, we have zero actual malformed archives, so the rate at which this is a false positive is presently zero 😄 Even in the case where we do have a false positive (an actual malformed archive), the risk is low: we retry the job, it fails again, and a developer has to investigate.
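A minimal sketch of that retry idea: re-attempt the download when the archive comes back incomplete, and only fail the leg after a few tries so a developer investigates. The callables reuse the hypothetical `download_artifact` and `artifact_looks_complete` sketches above; this is not the actual pipeline implementation.

```python
import time
from typing import Callable


def fetch_with_retry(download: Callable[[str], None],
                     validate: Callable[[str], bool],
                     dest_path: str,
                     attempts: int = 3) -> None:
    """Retry a flaky artifact download a few times before failing the leg."""
    for attempt in range(1, attempts + 1):
        download(dest_path)
        if validate(dest_path):
            return
        # Identifiable infra-level failure before tests run: safe to retry,
        # with a small backoff between attempts.
        time.sleep(10 * attempt)
    raise RuntimeError(
        f"artifact still incomplete after {attempts} attempts: {dest_path}"
    )
```

Wrapping the download step this way also addresses the runfo concern above: the leg now fails loudly when the archive never validates instead of reporting a false success.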
@jaredpar makes sense; I'll keep pushing on the actual task getting fixed too.
@MattGal we just hit this in a rolling build on OSX, but I see the dnceng issue is closed. Do we have a way to track AzDO adding retry for the Unix code paths?
I have been tracking this since "the surge" via https://github.com/dotnet/core-eng/issues/11551, and AFAIK we're just waiting for this PR to merge; feel free to ping it some more too: microsoft/azure-pipelines-tasks#14065. I suspect things are just slower than usual because folks are still getting back from holidays.
Thanks, @MattGal. I didn't know about https://github.com/dotnet/core-eng/issues/11551.
Seeing about ~3% of our builds failing with the following error:
Spot-checking the failures, these seem to be limited to OSX builds.
Runfo Tracking Issue: Runtime unable to unzip assets
Build Result Summary
Other tracking issue: https://mseng.visualstudio.com/AzureDevOps/_workitems/edit/1673333