Do you remember your first day of work? Your very first day of full-time, non-intern, benefit-getting, 401(k)-matching, big kid work? For Reddit.com user "cscareerthrowaway567", the first day of their first job as a junior software developer, fresh from university graduation, meant panic, terror, and being fired by the company’s Chief Technology Officer. And you thought finding the restroom was bad.
As the Reddit post explains, "cscareerthrowaway567", who will be known as CS from this point on, used a company-provided training document containing step-by-step instructions to help him set up a local development environment. He followed the instructions very carefully. Unfortunately, the training document contained the actual URL, username, and password for the production environment. Because CS didn't replace the production credentials in the document with his own local ones, the setup script he ran unknowingly wiped out the entire company’s production database. Within an hour the company was in full restoration panic mode and the newly salaried developer was canned.
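The failure mode is easy to reproduce in miniature. Here is a minimal sketch (the config keys, host names, and guard clause are hypothetical, not the actual script from the post) of why destructive setup steps should refuse to run against anything that isn't a local environment:

```python
# Sketch of a defensive check a dev-environment setup script could
# perform before any destructive step. Config keys and host names
# here are illustrative assumptions.

LOCAL_HOSTS = {"localhost", "127.0.0.1"}

def reset_dev_database(config: dict) -> str:
    """Wipe and recreate a *local* development database.

    Refuses to run if the configured host is not local, so a training
    document that accidentally ships production credentials cannot
    wipe production.
    """
    host = config.get("db_host", "")
    if host not in LOCAL_HOSTS:
        raise RuntimeError(
            f"Refusing to reset database on non-local host {host!r}; "
            "replace the credentials in your config with local ones."
        )
    # ... here the real script would DROP and re-CREATE the schema ...
    return f"dev database on {host} reset"

# Following a training doc verbatim, prod credentials hit the guard:
prod_config = {"db_host": "db.prod.example.com", "db_user": "admin"}
try:
    reset_dev_database(prod_config)
except RuntimeError as err:
    print(err)

# Swapping in local credentials works as intended:
print(reset_dev_database({"db_host": "localhost", "db_user": "cs"}))
```

One guard clause like this, placed ahead of the destructive step, would have turned CS's mistake into an error message instead of a disaster.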
Unfortunately, we’ve seen this type of issue before. As recently as the end of May, highly classified intelligence data belonging to the US National Geospatial-Intelligence Agency was found on Amazon Web Services' S3 storage service. That oversight was linked to Booz Allen Hamilton, a defense and intelligence contractor. You may also recall the major Edward Snowden data leak; Snowden was a Booz Allen contractor too. These events can occur on purpose, by accident, or as a security oversight, and there is very little difference between those situations and that of Reddit user CS.
What happened? How does this keep happening? Who is at fault?
First of all, what Mensa candidate puts highly classified, sensitive credentials that can effectively change the way a business is run in a training document? This presumably printable, distributable Word doc must have passed through layers of management and been used by multiple new hires, right? And no one caught it. By the time someone did, it was too late.
After CS ran the script, the whole database was deleted. The entire production environment was wiped out by a single error. When a restore was initiated, it failed. Apparently, backups weren't a priority. Even if that was just an oversight, the company was now left with one fewer software developer to help reinstate the code.
Red flags can be seen throughout this story. Beyond the carelessly written training doc, the company fired CS for making a mistake, using him as a scapegoat for its own lack of a disaster recovery plan. The company failed and is at fault. It should have tested its recovery plan monthly to ensure that accidents like this could be quickly remedied. It could also have performed a risk analysis, which would have exposed the database vulnerability and flagged the broken recovery plan before it was needed.
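A recovery plan is only real once it has been exercised. As a sketch of what a monthly restore drill looks like, here is a minimal example using Python's built-in sqlite3 as a stand-in for whatever database the company actually ran (the table and row counts are illustrative assumptions): take a backup, restore it somewhere fresh, and verify the restored copy actually contains the data.

```python
import sqlite3

def restore_drill() -> bool:
    """Back up a database, restore it elsewhere, and verify the copy.

    In-memory SQLite databases stand in for a real production database
    and its restore target; a real drill would restore to a scratch
    server and run the same verification queries.
    """
    prod = sqlite3.connect(":memory:")
    prod.execute("CREATE TABLE customers (id INTEGER, name TEXT)")
    prod.executemany("INSERT INTO customers VALUES (?, ?)",
                     [(1, "Ada"), (2, "Grace")])
    prod.commit()

    # "Back up" production by copying it into a fresh database.
    restored = sqlite3.connect(":memory:")
    prod.backup(restored)

    # The drill passes only if the restored copy matches production
    # and is non-empty -- an empty "successful" restore is a failure.
    expected = prod.execute("SELECT COUNT(*) FROM customers").fetchone()[0]
    actual = restored.execute("SELECT COUNT(*) FROM customers").fetchone()[0]
    return actual == expected and actual > 0

print("restore drill passed:", restore_drill())
```

Had a check this simple run on a schedule, the company would have discovered its restore failed long before it mattered.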
Data breaches, human error, natural disasters. Expect the unexpected.
Numerous situations can put your data, information, and business at risk. As we’ve seen, no matter how protected and intelligent employees may be, accidents happen, and even the most capable people make mistakes. The best way to prepare for uncertainty is to carry out a risk assessment: an honest look at the state of security at your company and at ways to mitigate exposure. Hostway offers risk assessments and delivers dependable, reliable security as part of our customer promise; that is why our customers trust us to host their sensitive and mission-critical data.
Perhaps CS’s company and Booz Allen Hamilton failed because they didn’t know where to start, how to prepare, or how to plan. Hostway Solutions Engineers can offer guidance and security consultations to help reduce internal and external risk factors. Had CS’s company gone through a risk assessment with Hostway, it might not have stopped CS from wiping the production data, but the company would have had a specific, fast-acting recovery plan to get that data back quickly, and perhaps CS would still have a job.