This incident was confirmed as an AWS outage. During our debugging and monitoring investigations, we confirmed that builds had resumed without issue.
Posted Jun 20, 2023 - 00:11 UTC
Resolved
This incident has been resolved.
Posted Jun 13, 2023 - 23:04 UTC
Monitoring
We are continuing to see builds successfully resume on our SaaS clusters. We are actively monitoring as our runtimes work through the build backlog created by this incident, and making adjustments as necessary to expedite recovery.
Posted Jun 13, 2023 - 21:15 UTC
Update
Volumes for customer builds can now be provisioned. Pending builds should slowly return to normal. We are continuing to monitor this incident.
Posted Jun 13, 2023 - 20:47 UTC
Identified
We have confirmed that AWS endpoints are timing out and we cannot provision the volumes used for builds. We are continuing to investigate what appears to be a widespread AWS issue in us-east-1.
Posted Jun 13, 2023 - 19:58 UTC
Investigating
We are currently investigating an issue with AWS (in particular us-east-1) which is impacting volume provisioning for some customers.
Posted Jun 13, 2023 - 19:29 UTC
This incident affected: Codefresh Systems (Codefresh Classic SLA, Codefresh GitOps SLA, Codefresh API, Codefresh Hosted GitOps Services, Codefresh Classic Pipeline Engine) and AWS (AWS ec2-us-east-1).