Tech outage brings massive disruption worldwide including major air carriers to full stop

but from what I hear, the problem some have with that (apart from the obvious practical issues, like potentially needing physical access, etc.) is that if people are using BitLocker encryption, they need a recovery key, which makes things a bit more complex. And in some cases, apparently the admins can't access the recovery keys because they carefully stored them on a server that's now also inaccessible because of this problem. Whoops.

so in layman's terms, to fix it you need a key. The key is stored on a server you cannot access because of the original issue, and you have no way of getting into that server?
 
I'm flying international tomorrow morning, thank godness I didn't book for today - it sounds like airports around the world are just a mess right now. Hopefully it gets resolved without too much spillover apart from the re-booking.


Same same.. except I flew yesterday, or the day before yesterday, my days are all mixed up.. actually I flew out Wed morning, 27 hrs, Houston to Tokyo to Bangkok.. had delays but nothing on the level that this outage would have caused me, I'm hearing Asia has been particularly hard-hit.. Flying that long was so brutal, I can't even imagine if I'd gotten caught in this clusterfrick.. anyway I'm getting ready to take the nap of a lifetime, then going to find some good Thai grub.




thank godness


I like that, sort of a hybrid between ‘thank god’ and ‘thank goodness’, I hope it catches on.
 
so in layman's terms, to fix it you need a key. The key is stored on a server you cannot access because of the original issue, and you have no way of getting into that server?
Yes, although for someone to get into that situation, they've probably already made a big mistake themselves (because if, for example, a recovery key might be needed to fix that server first, that key definitely shouldn't have been stored on that server itself). But there can be other reasons why fixing that server could be difficult.

And then even with the keys, it's the scale of the problem. Companies have hundreds, or thousands, of potentially remote workers with BitLocker-protected laptops. I would not want the job of remotely talking hundreds of people through booting a BitLocker-protected system into recovery mode and deleting a system file.
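Even when the keys did get escrowed somewhere reachable (say, on-prem Active Directory rather than that dead server), the lookup itself is the easy part. Here's a rough Python/ldap3 sketch of what the help-desk side of that lookup could look like - the domain controller, credentials and computer DN are all made-up placeholders, and it assumes the org actually backs its recovery passwords up to AD:

```python
# Sketch only: look up a BitLocker recovery password for one machine in Active Directory.
# Assumes recovery keys are escrowed to AD, where they live as msFVE-RecoveryInformation
# objects underneath the computer object. Server, credentials and DN are placeholders.
from ldap3 import Server, Connection, SUBTREE

server = Server("dc01.example.com", use_ssl=True)            # hypothetical domain controller
conn = Connection(server, user="EXAMPLE\\helpdesk", password="...", auto_bind=True)

# The machine the stranded user is staring at a recovery prompt on (hypothetical DN)
computer_dn = "CN=LAPTOP-0042,OU=Workstations,DC=example,DC=com"

conn.search(
    search_base=computer_dn,
    search_filter="(objectClass=msFVE-RecoveryInformation)",
    search_scope=SUBTREE,
    attributes=["name", "msFVE-RecoveryPassword"],
)

for entry in conn.entries:
    # 'name' carries the backup timestamp and the password ID the recovery screen shows
    print(entry["name"], entry["msFVE-RecoveryPassword"])
```

The painful part isn't that query, it's doing it hundreds of times while reading 48-digit recovery passwords over the phone.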
 

I'm really curious to see what exactly happened. The failure is clearly a) very obvious and b) very widespread; it's not some subtle issue that only affects systems with an unusual configuration, so it's hard to imagine how it wasn't picked up in automated QA testing before the update was rolled out. But it's also hard to imagine that no-one involved does that testing. So it'll be revealing to find out how this happened (assuming we ever do).
 
Thankfully, I haven't been affected by this directly; the systems I manage in our cluster are almost entirely Linux, and the Windows systems managed by our central IT don't use CrowdStrike (I think they use Cisco).

But it's a real mess they've made. It's reportedly fixable by booting into recovery mode and removing a particular file (https://www.crowdstrike.com/blog/statement-on-falcon-content-update-for-windows-hosts/), but from what I hear, the problem some have with that (apart from the obvious practical issues, like potentially needing physical access, etc.) is that if people are using BitLocker encryption, they need a recovery key, which makes things a bit more complex. And in some cases, apparently the admins can't access the recovery keys because they carefully stored them on a server that's now also inaccessible because of this problem. Whoops.
You can beat it to the punch pre-boot if your RMM/remote access can load before CS does. We sent the delete command and it's able to kill the file before CS loads, but only wired devices are able to check in quick enough.
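For anyone doing it by hand instead, the delete itself is trivial once you can actually get at the filesystem (safe mode, WinRE, or pre-boot via your tooling like above). A rough Python sketch of what that delete amounts to, assuming the channel-file pattern from CrowdStrike's advisory (C-00000291*.sys under %WINDIR%\System32\drivers\CrowdStrike):

```python
# Rough sketch of the remediation step: remove the bad CrowdStrike channel file.
# Assumes the C-00000291*.sys pattern from CrowdStrike's advisory, that you're running
# with admin rights, and that the volume is already unlocked (safe mode, WinRE, or a
# pre-boot delete pushed by management tooling).
import os
from pathlib import Path

drivers_dir = Path(os.environ.get("WINDIR", r"C:\Windows")) / "System32" / "drivers" / "CrowdStrike"

for bad_file in drivers_dir.glob("C-00000291*.sys"):
    print(f"Deleting {bad_file}")
    bad_file.unlink()
```

Whether you script it or do it from a recovery prompt, it's the same one-file delete; the hard part is reaching the machine in the first place.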
 
Always fun to see half the servers in your environment go offline and come up to blue screens at around midnight. Who needs sleep anyway?
 
Sitting at Quest Diagnostics and their computers are down because of the incident... thank goodness that I brought my paper orders.
 
Always fun to see half the servers in your environment go offline and come up to blue screens at around midnight. Who needs sleep anyway?


I'll try to remember that next time I call IT because my mapped network drive is gone and I can't scan a document and lose my mind

;)

sorry (in advance) lol.
 
Just saw an alert that the CrowdStrike update was the cause of it - not a cyberattack.

an update.

Atlanta Hartsfield - 166 min delay lol (assuming you're flying out of Charlotte - 80 min delay). And it's a "cascading effect", so as more delays/cancellations happen, the worse the delay gets. But at some point the cancellations will allow airlines to catch up.

We were going to Colorado, but at the last minute moved from this weekend (Fri depart) to next weekend (Fri depart).

A !$@##@ Windows update lol

Atlanta is now up to 249 min delay for departures.
 
