A recent Reddit thread asked users to reveal what they can now share after an NDA they signed expired.
Stories ranged from Kraft introducing premium Mac n’ Cheese in the ’90s, to reality TV fakery, to… IT stories no one was supposed to repeat.
One user specifically claimed the big Sony hack originated “because of a 5 year old account they didn’t delete or monitor from an ex employee.” Ouch.
If that claim is true, a single stale, unmonitored ex-employee account was all it took.
If you read our blog, you know we’ve covered that kind of shadow IT horror story before. I share it because, while these stories lack corroboration, we regularly see the same kinds of issues. Grab your popcorn, because we’ve collected some of the best stories for you.
Companies Are Finding Out That 95% of Their Cloud Services Were Previously Unknown
Similar to the Sony hack, the thread described web-accessible software that remained wide open to ex-employees, leaving companies to depend on the kindness of people who no longer work for them:
My company forgot to remove my credentials to their investor’s website when I left. Only like 5 people in the company had access to the site because it had people names, addresses, SSNs, Credit Scores, etc. Over 400k people. Like 3 years later I was working for a competitor that had the same client.
A company I worked for in college was bought out years after I left. They just found a way to merge database entries and since my old job never deactivated my account I was suddenly able to access even more permissions that were automatically added to my new profile based solely on what rank my account had already.
A company I used to work for hasn’t changed their twitter password since 2012 at least, and it’s a pretty big account. It’s currently still on my tweetdeck from when I worked there and was given access, and I could tweet from it any time I wanted. The second I did it, I’m sure they’d delete the tweet and change the password finally, so there’s really no point. But I still kinda get a kick out of it. If anyone wants your Soundcloud advertised to like 65k people for as long as it takes someone to notice a rogue tweet, hit me up.
So while Microsoft has argued that password expiry policies are useless, these stories suggest such policies still seem helpful in an era of web-based software and distributed management responsibilities.
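The common thread in the stories above is credentials that outlived the employment relationship. As a minimal sketch (not anything Alpin-specific), a stale-account audit can be as simple as comparing each account's last activity against a cutoff; the function name and the sample data below are hypothetical:

```python
from datetime import datetime, timedelta

def flag_stale_accounts(accounts, now, max_age_days=90):
    """Return names of accounts whose last activity (login or
    password change) is older than max_age_days before `now`.

    `accounts` is a list of (name, last_activity) tuples.
    """
    cutoff = now - timedelta(days=max_age_days)
    return [name for name, last_activity in accounts if last_activity < cutoff]

# Hypothetical audit data: account name and last recorded activity.
accounts = [
    ("active.employee", datetime(2019, 5, 1)),
    ("ex.employee", datetime(2014, 3, 15)),  # untouched for ~5 years
]
print(flag_stale_accounts(accounts, now=datetime(2019, 6, 1)))
# flags "ex.employee"
```

A real audit would pull last-login data from each SaaS provider's admin API rather than a hand-built list, but the comparison logic is the same: anything past the cutoff gets reviewed, deprovisioned, or at least forced through a password reset.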
Another part of the thread highlights the false sense of security that may come from having security “teams.” Those teams are limited by process, policy, and power, which means they may find creative ways to view their jobs:
I can’t tell you how many times I’ve stopped a customer in the middle of confessing some egregious behavior, simply because it was far easier for me to ignore it than actually address it by the book. Willful ignorance.
[The security team’s] job is to protect their jobs by protecting their electronic infrastructure, and that’s it. A password written on a sticky note can be less of a threat to them than you’d think. Of course it’s not secure at all but it wouldn’t be their problem; worst case scenario they have some more work to do after a security breach but they still keep their jobs.
And this statement perfectly explains why a security team might want to maintain bad security policies:
If you are the employee who put your password on a sticky note and something happens, they aren’t gonna fire the security team dude who made you change your password too often, they are going to hold you accountable. No skin off the security team’s back, so why would they care? Hell, if there is a breach and it’s clearly not directly their fault, they’re not gonna think “Oh man, perhaps if I hadn’t made Jim change his password so often none of this would have happened!” No, they are gonna think “Phew! Bullet dodged, Jim was kinda chummy anyway.”
What Can These Stories Tell Us About Managing SaaS?
Based on the attitudes and behaviors we saw in the thread, let’s explore a concept we’ve been discussing recently – the tendency of IT teams to blanket-block or blanket-allow SaaS, even when those blanket policies are no longer effective.
By the same sticky-note logic mentioned above, how could the average security analyst not support blocking SaaS? If all SaaS is blocked, then any incident is simply the fault of whoever found a way around the policy. That makes the inevitable SaaS-related breach “safe” for the security team while leaving the company vulnerable.
Likewise, permitting SaaS with little oversight in the name of “trusting employees” can create similar cover from punishment without making the company any safer, especially if the policy comes from senior leadership.
If you’re looking to get more serious about SaaS security, we have over a dozen ways to discover what your employees use, monitor the underlying security implications, and take action. Contact us for a demo or start a 14-day trial, and you’ll see how Alpin can work for you. Get started by emailing firstname.lastname@example.org.