The case for shaming security lapses
Public shaming isn’t always a bad thing and can sometimes contribute to the wider good, writes Nicola Whiting MBE, co-owner of Titania Group, listed in SC Magazine's top 30 most influential women and an Amazon best-selling author
Most people will agree that the word ‘shame’ is loaded. But despite sounding harsh, ‘public shaming’ isn’t always a bad thing.
Sometimes our generally accepted cybersecurity mantra of “be kind” and “everyone makes mistakes” isn’t impactful enough to drive real change.
Troy Hunt’s article, “The Effectiveness of Publicly Shaming Bad Security”, offers many great examples of how shame can drive change.
Take British Gas, for example, which changed its stance on allowing pasting in the password field after this exchange:
A Twitter user, @passy, posted to @BritishGasHelp that “Disallowing pasting and therefore password managers is NOT standard practice. It’s unnecessary and dangerous”. The British Gas help team responded that they could “lose their security certificate” if they allowed pasting, as it could leave them open to a “brute force” attack. The numerous replies calling that out as a ludicrous position, and agreeing with the initial poster, prompted a change of policy (and an improvement in secure practice) for all British Gas users.
This public shaming also chimed with other large organisations that hadn’t updated their systems to allow people to easily paste complex passwords. Just one tweet to British Gas helped drive industry-wide change and best practice (today most sites accept password managers and work seamlessly with them).
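The mechanism being criticised is usually a single event handler on the password field. As a minimal sketch (hypothetical code, not British Gas’s actual implementation), the anti-pattern and its fix can be simulated outside a browser:

```javascript
// Minimal sketch (hypothetical code, not British Gas's actual implementation)
// of the paste-blocking anti-pattern, simulated without a browser.

// A tiny stand-in for a DOM input that supports a single 'paste' listener.
function makeInput() {
  return {
    value: "",
    onPaste: null,
    addEventListener(type, handler) {
      if (type === "paste") this.onPaste = handler;
    },
    // Simulate a password manager pasting a credential; returns true if
    // the paste went through, false if a handler blocked it.
    paste(text) {
      let blocked = false;
      if (this.onPaste) {
        this.onPaste({ preventDefault: () => { blocked = true; } });
      }
      if (!blocked) this.value += text;
      return !blocked;
    },
  };
}

// Anti-pattern: swallowing paste events "for security".
const lockedField = makeInput();
lockedField.addEventListener("paste", (e) => e.preventDefault());

// Fix: simply don't register the blocking handler.
const openField = makeInput();

console.log(lockedField.paste("correct horse battery staple")); // false: blocked
console.log(openField.paste("correct horse battery staple"));   // true: allowed
```

In a real page the equivalent anti-pattern is an `onpaste` handler (or a call to `preventDefault()` on the `paste` event) attached to the password input; the fix is simply to remove it and let password managers do their job.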
Shaming can be a force for good – but is there another way?
Many people, myself included, lean in the first instance to a less public approach. A case in point: “bug bounties”.
For example, Microsoft’s bug bounty programmes offer awards of up to $100,000. To collect an award, Microsoft asks that when you find a vulnerability you “report it to us privately and give us the opportunity to correct the vulnerability and protect our customers before disclosing it publicly”.
This makes sense as the goal of bug bounties is to help address the vulnerabilities organisations may have missed in their development process.
Microsoft and its peers not only pay out bounties to encourage discreet notifications, but once the vulnerability is fixed, they give credit to the finder.
This seems to be the ideal: improved security and a reward for both parties – and in this instance direct shaming is rendered unnecessary. Bug bounties are generally accepted to be a good thing, and the UK government has even published guidelines on how to set up vulnerability disclosure programmes.
Unfortunately, despite all that good practice, there remains a place for “necessary shaming”.
Sometimes a security expert has done everything possible to highlight a breach or risk but receives only generic “we take security seriously” responses (or no response at all), leaving the researcher with no indication that the issue has been registered or acted upon.
They may then feel that their only option is public disclosure. Some organisations, such as Pen Test Partners, have made this part of their vulnerability disclosure policy, making it transparent that failing to engage, or to fix an issue that puts the public at risk, is not an option.
Vendors are given the opportunity to address the risk but “if the vendor’s proposed timeline is unacceptably long without very good reason, in line with common disclosure policies, Pen Test Partners will write and publish an advisory detailing the vulnerability.” (They use the commonly accepted timeline of 90 days for an organisation to address the vulnerability in question.)
When shame works
Public disclosure has repeatedly led to funds being unlocked by boards, and even governments, to drastically reduce the timelines for solving specific security risks.
For example, when security researchers Chris Valasek and Charlie Miller demonstrated in 2015 that they could remotely disable a Jeep Cherokee’s brakes and interfere with its steering while it was being driven, it instigated a worldwide focus on the risks associated with communication and access to the Internet of Things (IoT). As a result, 1.4 million vehicles were recalled to mitigate that risk and add extra security.
From this demonstration, and others like it, numerous start-ups sprang up as governments unlocked research funding to help address what was seen as a global problem. To drive IoT security, governments also began to legislate, through directives, for tighter security around IoT connectivity.
A middle way?
But despite its benefits, I would still hope that, ultimately, the security industry as a whole treats “shaming” with caution.
Many security professionals struggle with Imposter Syndrome. And the fear of making mistakes and being shamed for them is a commonly shared anxiety. It has been well documented that public shaming can sometimes impede innovation, resilience, and career growth.
Our industry has worked long and hard to dispel the belief that people in security work in a “culture of fear”, where it’s only a matter of time before you make a mistake and are “outed” for your lack of expertise.
Security professionals must work collaboratively. We must share best practices and support each other in times of need – this requires trust.
If we want to attract new professionals to cybersecurity, then the trusting, collaborative nature of our industry has to be overwhelmingly visible – not the more unfortunate side, where we throw frustrated rocks at perpetrators of poor security until they’re shamed into taking action.
Ultimately, the cybersecurity industry faces a battle of two wolves. One is “shame” – which aligns with anger, guilt, regret, arrogance, false pride, superiority and ego.
The other is “trust” – which chimes with hope, humility, kindness, empathy, generosity, truth and compassion.
We are all part of that battle’s outcome – and the wolf that wins is the one we choose to feed.