Instability of production databases

Incident Report for Maxxton

Resolved

Good morning,

All systems are back up and running. We are closing this incident for now, but will continue to monitor closely and follow up with Google for further information. A thorough investigation and postmortem will follow — this does not reflect the standard of service we hold ourselves to.

Should you experience any further issues, please reach out through our regular support channels or via the 24/7 phone line for P1 incidents.
Posted Feb 28, 2026 - 08:34 CET

Update

We have verified all systems and restarted services and processes as a precaution. Spot checks of functionality confirm that everything is working again. The next team is taking over monitoring for the rest of the night, and we will perform another verification in the morning.

We will keep this incident open until tomorrow morning, just to be sure, but all statuses are back to operational.
Posted Feb 28, 2026 - 03:12 CET

Monitoring

Finally, a positive update! All servers are back online and the databases are running. This is a preliminary conclusion; we are verifying everything and restarting services that need a push to recover. More updates soon.
Posted Feb 28, 2026 - 02:27 CET

Update

The recovery of systems at Google Cloud is taking longer than we had hoped. The teams at Google and Maxxton will continue working overnight, but we will post updates only when there is actual news. We have every reason to believe that we will be fully recovered by the Dutch morning.
Posted Feb 27, 2026 - 23:27 CET

Update

Google Cloud is starting up systems in batches as a precaution. We are pushing to be first in line, but we are not the only affected customer. We will keep you posted on the progress. We still expect to be back to normal by tonight.
Posted Feb 27, 2026 - 21:31 CET

Update

After close communication with Google Netherlands, we learned that the issue has been mitigated and that a restart procedure will begin shortly. We remain cautiously optimistic and are awaiting further updates, which we will share as soon as possible.
Posted Feb 27, 2026 - 20:11 CET

Update

We are still waiting for more information about the situation. The disruption impacts multiple companies using the Google Bare Metal Solution. We are continuously following up on the latest status. Our sincere apologies for the situation.
Posted Feb 27, 2026 - 19:39 CET

Update

We are still holding off on the decision to fail over to our DR environment, in order to protect data consistency and limit the risks for the period after the switch.
We remain in close contact with our Google Cloud contacts and have escalated through every available channel.
Posted Feb 27, 2026 - 18:30 CET

Update

We are continuing to work on a fix for this issue.
Posted Feb 27, 2026 - 18:00 CET

Update

We have decided to put our Disaster Recovery process into motion and have switched over to Frankfurt. This results in a slight decrease in performance. We will keep you posted on the progress.
Posted Feb 27, 2026 - 17:46 CET

Update

We are in close contact with Google Cloud to identify the root cause.
Posted Feb 27, 2026 - 17:21 CET

Identified

All production database servers underwent a hard reboot, as did one of the acceptance servers. We are waiting for the boot process to finish so we can continue the investigation and start up the databases as soon as possible.
Posted Feb 27, 2026 - 17:15 CET

Investigating

We are currently experiencing issues with the production databases. This has our highest priority; more updates will follow soon.
Posted Feb 27, 2026 - 16:51 CET
This incident affected: Maxxton Software, API, Web Manager, Integrations, and Operations App.