Thanks to poor driver support, we had been running for who knows how long with
3 failing drives in the RAID10 array that housed the database. But that wasn't
actually what caused the outage... if a machine with an 8500 in it goes down
unexpectedly (think power failure), the controller can't trust the data on the
drives to be in sync, so it needs to rebuild the array. Unfortunately, one of
the drives it picked to be authoritative was failing, and decided it wasn't
going to give up its data.
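One lesson here is that the controller won't make failing drives obvious unless
something is watching for them. Here's a rough sketch of the sort of cron job we
should have had in place, assuming the 8500 is the 3ware card handled by
FreeBSD's twe(4) driver (the exact message strings it logs are a guess, so
treat this as an outline, not a drop-in script):

    #!/bin/sh
    # Hypothetical nightly RAID check: scan the kernel message buffer
    # for twe(4) controller errors and mail any hits to the admin.
    # The grep pattern and the address are placeholders.
    errors=$(dmesg | grep -Ei 'twe[0-9].*(error|degrad)')
    if [ -n "$errors" ]; then
        echo "$errors" | mail -s "RAID controller warnings" admin@example.com
    fi

Dropped into cron (or /etc/periodic/daily), something like this at least means
the next failing drive gets noticed before there are three of them.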
Sadly, we've been unable to recover the array. We tried SpinRite as a last
resort, but at the rate it was going, it would have taken something like a week
to recover the drive. This means that when we get back online, we'll be running
from a stats backup taken Nov. 6, about 4 days before the failure. Any changes
made to participant accounts or teams in the meantime will have been lost.
In an ironic twist of fate, we'd been working on getting a new machine into
production that would have let us replicate the user-modifiable tables (i.e.,
participant accounts and teams) to another machine. Had that been in place, we
would have lost very little, if any, of this data.
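For the curious, this kind of selective replication is simple to express.
Here's a rough sketch of the config on the receiving machine, assuming MySQL
(the database and table names below are placeholders, not our actual schema):

    # my.cnf on the second machine: only replicate the
    # user-modifiable tables. "stats", "participants" and
    # "teams" are placeholder names.
    server-id          = 2
    replicate-do-table = stats.participants
    replicate-do-table = stats.teams

The master just needs binary logging turned on (log-bin) and the second machine
pointed at it; the bulk stats tables stay local to each box.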
The current situation: we've bought 3 new drives and used them to rebuild the
array, and we've taken the opportunity to upgrade to FreeBSD 6.0. But now, any
time we try to access the array, the machine reboots. Once someone is on-site
to investigate, we'll hopefully know more.