In a week already marked by controversy after Google Cloud accidentally deleted the entire account of Australian pension fund UniSuper, the tech giant has stumbled again.
At 15:22 PT last Thursday, a maintenance automation, intended to shut down an unused network control component at a single location, malfunctioned.
Instead of targeting only one site, the process inadvertently affected around 40 locations, taking many customers offline in the process.
Some Google Cloud customers were left without connection last week
The blunder led to nearly three hours of downtime, disrupting 33 Google Cloud services for affected users, including high-profile offerings like Compute Engine and Kubernetes Engine.
Issues included new VM instances being provisioned without network connectivity, migrated or restarted VMs losing their connections, and configuration changes such as firewall rules and network load balancers failing to update.
Operations reliant on Google Compute Engine VMs were severely impacted until the incident was resolved at 18:10 PT, just in time for the weekend.
Google attributed the outage to a bug in the automation tool used for the maintenance. A network status page reads: “Google engineers restarted the affected component, restoring normal operation.”
The cloud hosting company added: “We extend our sincerest apologies for the service interruption incurred as a result of this service outage.”
Just over a week before, services including Google BigQuery and Google Compute Engine were taken offline during what Google described as “an unplanned power event caused by a power failover due to a utility company outage.”
The previous week, Google also hit the headlines for accidentally deleting the account of UniSuper, a pension fund with an estimated $124 billion in funds under management as of last summer, in what was described as a “one-of-a-kind occurrence.”
For now, businesses and users alike remain on edge, hoping that Google Cloud puts in the work to prevent similar outages in the future.