Simple Taxonomy for Abuse

When Mat and I started talking about the idea for Siegescape, we intuitively understood that the tools and techniques we wanted to build would be broadly applicable, but we weren't sure exactly how far they would reach. We started fleshing out the idea on a whiteboard, thinking through different angles, how the pieces fit together, and how we could leverage Siegescape technology to address problems we knew about. What emerged was a framework for thinking about abuse that allows us to, for example, talk in broad terms about the maturity of different solutions, regardless of the organization or even the market they operate in. This framework can help clarify your own abuse problem and how to approach it, so I thought it would be helpful to share.

Baseline

For the types of abuse we're talking about, there is always a process of identification and authentication between the service provider and the service user. Identification is the process by which the service user states who they are. Authentication is the process by which the service user "proves" that identity to the service provider. This might mean providing a username and a password in the case of a web service, or providing information about the last three purchases in the case of a credit card helpline. In most services, identification and authentication are closely related.

Identity Abuse

With authentication in place, we are ready to talk about the first type of abuse: identity abuse. Identity abuse refers to the type of abuse that occurs as a result of an attacker subverting the process of authentication, allowing them to pretend to be someone they aren't. This is a well-known type of abuse, and it comes in many forms. Here are some examples:

  • Corporate IT. This is the traditional view of a "hacker" - breaking into accounts, stealing secrets, disrupting normal business, etc.
  • Email. An email account that has been broken into can be mobilized as a spam account, or can be monitored for secrets that benefit the attacker, as in the FIN4 attacks identified a few years back.
  • Payments. Credit and debit cards are subject to identity abuse, particularly in card-not-present transactions.
  • Tax return submissions. Identity thieves submit fraudulent tax returns using the details of a real taxpayer's identity. In this case the IRS is the service provider and the SSN, address, birthdate and other information are the requirements for authentication.
  • Social account hacking. Having a prominent Twitter or Facebook account hijacked can result in loss of reputation, spam, and bad experiences for other network members.

Most public-facing services suffer from some form of identity abuse. When you have a visible front door, some jerk is always going to try to just waltz in and start eating your snacks. There are two complementary approaches to cutting down on account abuse: making authentication stronger, and monitoring behavior post-authentication.

In the former category, efforts like password strength requirements and strong multi-factor authentication go a long way, but they are hard to enforce, add friction for users, and are not foolproof. Non-web protocols exacerbate the problem, opening the door for brute-force attacks. Tools like Fail2ban can help there, but again are not perfect.
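To make the Fail2ban-style approach concrete, here is a minimal sketch of sliding-window ban logic. The thresholds, names, and data structures are illustrative assumptions, not taken from any real tool:

```python
import time
from collections import defaultdict, deque

# Illustrative thresholds -- real tools like Fail2ban make these configurable.
MAX_FAILURES = 5       # failures allowed...
WINDOW_SECONDS = 600   # ...within this sliding window
BAN_SECONDS = 3600     # how long a ban lasts

failures = defaultdict(deque)  # source IP -> timestamps of recent failures
banned_until = {}              # source IP -> time at which the ban expires

def record_failure(ip, now=None):
    """Record a failed login; return True if the IP just crossed the ban threshold."""
    now = time.time() if now is None else now
    window = failures[ip]
    window.append(now)
    # Drop failures that have aged out of the sliding window.
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()
    if len(window) >= MAX_FAILURES:
        banned_until[ip] = now + BAN_SECONDS
        return True
    return False

def is_banned(ip, now=None):
    """Check whether an IP is currently banned."""
    now = time.time() if now is None else now
    return banned_until.get(ip, 0) > now
```

Even this toy version shows why brute-force defenses are imperfect: an attacker who spreads attempts across many source IPs, or paces them slower than the window, slips under the threshold.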

Behavioral techniques leverage knowledge of how your users behave to build models that detect when someone isn't who they say they are. This approach is exemplified by credit card companies, who have been doing this kind of work for decades. Because behavioral approaches are more nuanced, they typically need more mitigation options and benefit from having humans in the loop.
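As a toy illustration of the behavioral idea, the sketch below flags an observation that deviates sharply from a user's history using a simple z-score. Real systems combine many features (amounts, merchants, times, locations) and far richer models; everything here is a hypothetical example:

```python
import statistics

def is_anomalous(history, new_value, threshold=3.0):
    """Flag a new observation that deviates from a user's historical behavior.

    `history` is a list of past values for one numeric feature (e.g. purchase
    amounts). A value more than `threshold` standard deviations from the
    user's mean is flagged for review.
    """
    if len(history) < 2:
        return False  # not enough history to judge
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    if stdev == 0:
        return new_value != mean
    return abs(new_value - mean) / stdev > threshold
```

The important design point is that the model is per-user: a $500 charge may be routine for one cardholder and wildly anomalous for another, which is exactly why this approach needs nuance and human review for borderline cases.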

If you caught every instance of identity abuse in your service, your job still wouldn't be done. That's because people who are, in fact, who they claim to be are ALSO eating your snacks. That brings us to behavioral abuse.

Behavioral Abuse

Behavioral abuse is when a legitimate user abuses the capabilities they have been authorized to use. It tends to be much more domain-specific than identity abuse, but it is also widespread. Here are some examples:

  • Corporate IT "insider threats". Disgruntled employees, rogue contractors, and the like can knowingly exfiltrate data, plant backdoors, and do all sorts of other horrible things.
  • Prescription drug fraud. "Pill mills" and other fraudulent prescription strategies take advantage of doctors' capabilities.
  • Insurance fraud. Policyholders lying to collect insurance benefits (sidenote: there are tons and tons of types of insurance fraud!).
  • Lotto and gambling. Everything from lottery administrators rigging jackpots to players manipulating casino games.
  • Finance and accounting. Rigging the books to reap personal reward is an age-old form of abuse.
  • Insider trading. Making use of secret information to which you are privy to trade related stocks.

Not every service suffers from behavioral abuse. This type of abuse arises when there is some benefit to acting in a way that is destructive or negative for other service users or for the service itself. If the actions of a user don't affect other users and the service is well-protected, there may not be much scope for behavioral abuse. For example, a to-do application won't see much behavioral abuse, because the authorized capabilities of each user don't overlap. If capabilities do overlap but there is little benefit to acting disruptively, there will likely be some abuse, but the problem may stay relatively small - some men just like to watch the world burn. The biggest scope for problems comes when you have both overlapping capabilities and a clear benefit to abusing them.

A good example is paid reviews on Amazon. Paid reviews are against the terms of service, and they make Amazon worse for other users, but there is a clear benefit to creating them. In situations like this, having mechanisms to protect against behavioral abuse is critical to keeping service value high.

As the name suggests, the only approach to stopping behavioral abuse is behavioral monitoring. As with identity abuse, applying behavioral mitigation techniques requires domain specificity. Unlike identity abuse, many behavioral problems have well-defined signatures and thus don't necessarily require the subtle modeling needed to find masquerading users. These signatures can be used in rule-based systems, but like all rule-based systems they can become brittle with age and difficult to maintain.
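A signature-driven rule system can be as simple as a list of named predicates over events. The rules, event fields, and thresholds below are hypothetical, purely to show the shape (and why such systems grow brittle - every new abuse pattern means another hand-written rule):

```python
# Each rule is a (name, predicate) pair over an event dict.
# Field names and thresholds are illustrative, not from any real product.
RULES = [
    ("review_burst",
     lambda e: e.get("type") == "review" and e.get("reviews_today", 0) > 20),
    ("self_review",
     lambda e: e.get("type") == "review"
               and e.get("reviewer_id") == e.get("seller_id")),
    ("bulk_export",
     lambda e: e.get("type") == "download" and e.get("records", 0) > 10_000),
]

def match_rules(event):
    """Return the names of all rules an event triggers."""
    return [name for name, predicate in RULES if predicate(event)]
```

Each matched rule name can feed a case queue for human review. The brittleness shows up in maintenance: thresholds drift, field names change, and abusers learn to act just under each limit.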

So what?

Why bother making this distinction? What is the benefit of thinking about fraud in these terms? By distinguishing the types of fraud you are trying to combat, you can be clear about the approaches that will be most effective. No amount of authentication will ever stop behavioral fraud. By quantifying the types of fraud and how much they are costing you, you can also start to make value-based decisions on where to invest in your anti-abuse efforts.

If you'd like to discuss your current fraud with someone who can help you understand it, please contact us to arrange a conversation.

Joseph Turner