Have you ever received a call or email from somebody you know is trying to scam you, and decided to take them for a ride? If you have, then you are already familiar with the underlying goals of designing an “Attacker Experience.”

Most of you, in your leadership roles in IT, are looking to protect your resources and to allow only the “right” people in. But what about those “other people”? Those people deserve consideration. They deserve to have their own experience. In his talk last year at the Cloud Identity Summit conference, Bob Blakley laid it all out. We design our authentication, notification, and audit regimes as if there are only two people involved - the “right” user and the resource. But we know that often there is a third party at play, working around the edges, looking for any angle that can be gamed, in order to gain unsanctioned access. What if we started to design for that attacker, right within the corporate authentication ceremony?

Today, most companies give attackers a wonderful experience (from their perspective, not ours). We allow an attacker to try anything they want, giving them immediate feedback on the success or failure of their endeavors. Maybe we throttle a bit on that IP address if the attacker is too persistent, maybe we temporarily lock an account after too many tries; but usually, we turn the attacker away and forget they were even there.
“We are playing to delay, to enrage the opponent, to force away every margin call”
What would it take to heed Bob’s advice and create an experience that attackers despise? First, we need to know who the attackers are. In the identity world, we are working really hard at creating multiple active and passive indicators of user identity, and correlating a lot of data, as part of the authentication process. Is the user on a known device? Are they on a network with the right geographical characteristics? Have they validated the right credentials? Is their behavior consistent with past actions? All of these factors are used to create contextually rich confidence that the “right” user is behind the resource access. But there are times when we could have as much or more confidence that the “wrong” user is in front of us. We can, in effect, begin to “authenticate” an attacker presence, and if we work at it, we can get better and better at recognizing at least some of these bad actors, and messing with their heads.
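To make the idea concrete, here is a minimal sketch of what “authenticating an attacker” could look like in code. Every signal name, weight, and threshold below is an invented assumption for illustration; a real risk engine would use far richer telemetry and tuned models, not a hand-rolled score.

```python
# Hypothetical sketch: combine contextual signals into a confidence score
# that the "wrong" user is present. Signal names and weights are invented
# for illustration only, not drawn from any real product.

def attacker_confidence(signals: dict) -> float:
    """Return a 0.0-1.0 score that the session belongs to an attacker."""
    score = 0.0
    if not signals.get("known_device"):          # unfamiliar device
        score += 0.3
    if not signals.get("plausible_geo"):         # odd region, impossible travel
        score += 0.3
    if not signals.get("behavior_consistent"):   # navigation or timing anomalies
        score += 0.2
    if signals.get("credential_on_breach_list"): # password seen in a public dump
        score += 0.2
    return min(score, 1.0)

def route_session(signals: dict) -> str:
    """Decide whether to admit, challenge, or divert the session."""
    score = attacker_confidence(signals)
    if score >= 0.7:
        return "divert-to-decoy"   # high confidence it's an attacker: waste their time
    if score >= 0.4:
        return "step-up-auth"      # uncertain: demand another factor
    return "allow"
```

The interesting branch is the first one: instead of a hard “deny,” a high-confidence bad session gets routed somewhere designed to consume the attacker’s time.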
Imagine that an attacker steals a username/password combination, through some account harvesting technique. It happens all the time. In an unsophisticated organization, this attacker would be able to impersonate a user - the ultimate positive experience for the attacker. However, in a more sophisticated organization, perhaps other factors can correctly identify the attacker as a bad actor in a timely fashion, even though the password was a match. In that case, the fun can begin, as long as we are confident that an unauthorized user has logged in. What can be done? Anything that convinces the attacker to invest more time, when in reality you have already locked them out of any real access. This could be as simple as sending the attacker to error pages that look like a second factor of authentication, so that the attacker spends time trying to evaluate whether this second factor can also be compromised. Perhaps you send the attacker into a nest of page navigation that never gets anywhere, or use 501 or other errors to keep the user interested in the account for as long as possible. Perhaps you introduce the attacker to a honeypot or other more “professional” way to waste their time. While you are doing that, you are forcing your real user to reset the password, and saving the compromised username/password combination at a different location to identify the attacker. An attacker-only credential is a gift that allows you to perform evil experiments to your heart’s content.
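The “attacker-only credential” idea above can be sketched in a few lines. This is a hypothetical illustration, not a recommended implementation: the names are invented, and a production system would use a slow password hash (bcrypt or Argon2) rather than the plain SHA-256 used here to keep the sketch dependency-free.

```python
# Hypothetical sketch of a "canary credential" tripwire. After forcing the
# real user to reset, the old compromised password is kept (hashed) in a
# separate store. Anyone who presents it is, by definition, an attacker,
# and can be diverted to a decoy flow instead of simply being rejected.

import hashlib

def _hash(password: str, salt: str = "demo-salt") -> str:
    # Illustration only: real systems should use bcrypt/Argon2, per-user salts.
    return hashlib.sha256((salt + password).encode()).hexdigest()

# username -> hash of the credential known to be in attacker hands
CANARY_CREDENTIALS = {"alice": _hash("hunter2")}
# username -> hash of the real, post-reset credential
LIVE_CREDENTIALS = {"alice": _hash("correct-horse-battery-staple")}

def handle_login(username: str, password: str) -> str:
    h = _hash(password)
    if CANARY_CREDENTIALS.get(username) == h:
        return "decoy"   # known-stolen credential: send down the garden path
    if LIVE_CREDENTIALS.get(username) == h:
        return "grant"   # the real user
    return "deny"        # ordinary failure
```

The design point is that the canary check happens before the ordinary success/failure branch, so the attacker never sees a distinguishable error; they see what looks like a working login.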
Some of you might be asking, why spend valuable resources and time in such a juvenile way? Well, in the same way you have to construct your security and identity resources never knowing for sure whether the person trying to access your resources is there for good or for evil, wouldn’t it be nice and useful if a reciprocal doubt were introduced on the part of the attackers? What if, every time the attacker gets a hit on a harvested password, they are forced to take into account the idea that what *looks* like a successful resource penetration might instead be a trip down the garden path, designed purely to waste time and force additional qualification? You might be tempted to leave this type of skullduggery to the professionals, but I would argue the opposite: that what we need the most is an amateur guerrilla force, pushing attackers into creative, unpredictable, and varied experiences that confound their scripts and whittle away at the numbers game they play. We are not playing to win here. We are playing to delay, to enrage the opponent, to force away every margin call. Most of all, we are playing to learn. If an attacker can’t easily tell who is truly sucked in, and who is leading them on, it changes the economics of perpetrating the attacks, and maybe we force a change in their business model.
If nothing else, I hope you start to think a little more like the people who are driving you crazy. Take, for example, those pesky people who call you up trying to scam money from you. The worst possible experience you can give those con artists is to prevent them from moving on to a more receptive target. It takes a certain frame of mind, a certain amount of schadenfreude, to cheerfully engage with the enemy in this way. But if enough people do it, who knows, we might even be able to cause some disruption. I would surely like to try.