level=info time=2024-03-20T00:07:09Z msg="Agent version associated with task model in boltdb 1.75.0 is bigger or equal to threshold 1.0.0. Skipping transformation."
level=critical time=2024-03-20T00:07:09Z msg="Error loading previously saved state: failed to load previous data from BoltDB: failed to load task engine state: did not find the task of container XXXX: arn:aws:ecs:REGION:1111111111:task/XXXX/06548beea8f34300a560e8aa2e660cb" module=agent.go
Is this issue still occurring, or were you able to get the agent to start? This is caused by a small edge case that corrupts task and container state while the agent is terminating. We are tracking this issue internally. As a temporary mitigation, you could try stopping the tasks on the instance before upgrading the agent. If the agent still does not start, I would suggest setting up the external instance with ECS from scratch, if that is feasible.
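The suggested mitigation (stopping all tasks on the instance before upgrading the agent) can be scripted with the AWS CLI. A minimal sketch; the cluster name and container-instance ARN below are placeholders you would substitute with your own values:

```shell
#!/usr/bin/env bash
# Drain a container instance by stopping its tasks before an agent upgrade.
# CLUSTER and INSTANCE_ARN are placeholders -- replace with your own values.
CLUSTER="my-cluster"
INSTANCE_ARN="arn:aws:ecs:REGION:ACCOUNT:container-instance/ID"

# List every task currently running on this container instance.
for TASK in $(aws ecs list-tasks \
    --cluster "$CLUSTER" \
    --container-instance "$INSTANCE_ARN" \
    --query 'taskArns[]' --output text); do
  # Stop each task; ECS will reschedule service-managed tasks elsewhere.
  aws ecs stop-task --cluster "$CLUSTER" --task "$TASK" \
      --reason "Draining before agent upgrade"
done
```

Once `aws ecs list-tasks` returns no task ARNs for the instance, the agent can be upgraded without live task state to checkpoint.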
Hi @hozkaya2000
Reinstalling the ECS agent (including deleting all related files) and registering the cluster again fixed the issue.
On other hosts, stopping all tasks before upgrading prevented it.
Thanks
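The clean-reinstall fix described above can be sketched as follows for an Ubuntu external (ECS Anywhere) instance. The paths and service name assume the default amazon-ecs-init layout, and the activation id/code are placeholders (they come from `aws ssm create-activation`); verify everything on your host before deleting anything:

```shell
#!/usr/bin/env bash
# Sketch: wipe the corrupted agent state and reinstall/re-register the agent.
# Assumes the default amazon-ecs-init layout on an ECS Anywhere instance.

# Stop the agent service.
sudo systemctl stop ecs

# Remove the agent's saved state, including the BoltDB checkpoint that the
# "failed to load previous data from BoltDB" error refers to.
sudo rm -rf /var/lib/ecs/data

# Re-run the ECS Anywhere install script to reinstall the agent and register
# the instance with the cluster again. REGION, CLUSTER, ACTIVATION_ID, and
# ACTIVATION_CODE are placeholders.
curl --proto "https" -o ecs-anywhere-install.sh \
    "https://amazon-ecs-agent.s3.amazonaws.com/ecs-anywhere-install-latest.sh"
sudo bash ecs-anywhere-install.sh \
    --region REGION --cluster CLUSTER \
    --activation-id ACTIVATION_ID --activation-code ACTIVATION_CODE
```

Deleting `/var/lib/ecs/data` discards all checkpointed task and container records, so only do this after the tasks on the host have been stopped.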
Summary
Upgraded the ECS agent on an external instance.
The ecs service keeps restarting.
The ECS agent fails after this error is logged (see the log snippets above):
Description
Upgraded the ECS agent, but the service keeps restarting.
Refer to the logs section.
Environment Details
Ubuntu 22.04.2 LTS
ecs agent version
docker info
df -h
Supporting Log Snippets