Man in the Middle With You

I recently wrote about an attack against users of Office 365 which remains one of the most common causes of cyber-security-related losses. Most cyber-crime is motivated by money, so having successfully compromised an Office 365 user’s mailbox, the attacker needs to monetise a success which, so far, is only technical. In this piece I describe one of his favourite gambits: the diversion of planned payments to his own bank account.


This is one possible manifestation of phase 4 of the Office 365 phishing cycle and it proceeds through at least four observable phases:

  1. Mining – the attacker evaluates exfiltrated data with the intention of preparing a social engineering attack.
  2. Preparation – the attacker prepares his attack.
  3. Collection – the attacker launches his attack and, often quite quickly, takes his profit.
  4. Return – the attacker may return and try the same or a very similar attack again if the exfiltrated data suggests further opportunities.

1. Mining

The attacker has already downloaded a complete copy of the victim’s mailbox: sent items, received items, contacts and so on. He can learn a lot about an organisation from this and is particularly interested in:

  • Who requests payments
  • Who authorises payments
  • Who initiates payments
  • Any currently planned payments (the bigger, the better)

The attacker will take some time finding just the right opportunity among the several which may be present. This part of the process can take a few days; it can be a week or more after the initial attack before he decides how to act.


2. Preparation

The attack itself is simple social engineering. There is no malware, although the attacker does have an opportunity to seed some if he wants to. He will generally choose not to, because he doesn’t want an unexpectedly effective anti-malware system to get between him and his victim.

Let us say that the compromised mailbox belongs to Jonathan. Jonathan is a contract manager who regularly makes payments to contractors, usually at the request of his boss, Carol. Jonathan’s mailbox is full of emails from Carol and the attacker studies these carefully. He notes that:

  • Carol nearly always starts her emails with “Hi Jon”.
  • Carol never got around to changing her email footer when the company changed its branding three months ago and her emails still use the old logo.
  • There’s a typo in the second line of the address in Carol’s email footer.
  • Carol has, on occasion, asked Jonathan to expedite planned payments, where a contractor has chased her directly for an overdue payment.
  • Carol goes to the same pub quiz as Jonathan on most Friday nights.
  • There’s an ongoing conversation between Jonathan and a contractor about an overdue payment worth about £20,000.

This is all the attacker needs. He wants that £20,000 and he knows how to get it.

Jonathan’s (and Carol’s) business domain is “domain.example”. Email addresses are jonathan@domain.example and carol@domain.example.

The attacker registers a new domain, “domaln.example”. This looks superficially similar to the real domain, differing by a single substitution: a lowercase L where there should be a lowercase I in “domain”.

The domain is registered with a service which also provides email and implements sender authentication (SPF and DKIM). Email sent from this domain will therefore tend not to trigger spam filters, particularly if the content is sufficiently business-like.
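To illustrate just how close the two domains are, here is a minimal sketch in Python of the kind of check a mail gateway might apply to spot them. The function names and thresholds are assumptions for the purpose of illustration, not part of any particular product; the idea is simply to flag sender domains that sit within a single character edit of the legitimate domain:

    # Minimal sketch (assumed names, not a specific product): flag sender domains
    # that are one character edit away from the organisation's own domain.

    def edit_distance(a: str, b: str) -> int:
        """Classic Levenshtein distance between two strings."""
        prev = list(range(len(b) + 1))
        for i, ca in enumerate(a, 1):
            curr = [i]
            for j, cb in enumerate(b, 1):
                curr.append(min(prev[j] + 1,                 # deletion
                                curr[j - 1] + 1,             # insertion
                                prev[j - 1] + (ca != cb)))   # substitution
            prev = curr
        return prev[-1]

    def looks_like_our_domain(sender_domain: str, our_domain: str = "domain.example") -> bool:
        """True for domains that are close to, but not exactly, our own domain."""
        d = edit_distance(sender_domain.lower(), our_domain.lower())
        return 0 < d <= 1

    print(looks_like_our_domain("domaln.example"))    # True  - one substitution away
    print(looks_like_our_domain("domain.example"))    # False - it is our own domain
    print(looks_like_our_domain("supplier.example"))  # False - not similar at all

A check like this would flag “domaln.example” immediately, which is one reason such defences are possible in principle, even though few organisations deploy them in practice.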


3. Collection

The attacker then composes an email from “carol@domaln.example” (i.e. from the attacker at the fake domain) and sends it to “jonathan@domain.example” (i.e. Jonathan at the real domain). It says:

Hi Jon

I’ve just had Frank from ACME Contractors on the phone chasing his £20k. They’ve got another big client in the works at the moment and I think we might get some work out of it, so can you pay him please?

Frank’s bank is AnyBank, sort code 00-00-01, account 87651234.

Thanks

Carol

See you at the quiz later.

The email ends with a footer bearing the out-of-date logo and the same typo that appears in Carol’s real footer.

It’s Friday, quiz night, and Jonathan is keen to finish his work and get to the pub. This instruction from Carol appears real enough. He sees messages like it all the time, so he logs into on-line banking, queues up the payment and presses “send”.

Later, at the pub, he sees Carol who is also there for the quiz.

“How did you get on in round one?” she says, “I’d never have guessed Yasser Arafat.”

“Fine,” says Jonathan. “By the way, I paid ACME that twenty grand like you asked.”

“What twenty grand?”


4. Return

A few weeks have gone by and Jonathan is still on leave, suffering from depression. Edward is doing Jonathan’s job in the interim. Edward receives an email from “carol@domaln.example”:

Hi Ed…


So, what can be done?

The best defence against this attack is not to be vulnerable in the first place. The defences suggested by my previous article will help to accomplish this.

However, should an attacker successfully deliver a social engineering email to a vulnerable victim, some of those defences are worth repeating and others worth adding.

People

Awareness is a key factor in avoiding exploitation.

  • A purely local email (i.e. one sent by one user to another at the same domain) will generally not include the sender’s internet email address in its header when viewed on screen. In this case, a real email from Carol would just have had “Carol Smith” in the header. The forged email from the attacker would have had “Carol Smith <carol@domaln.example>” in the header, a clear sign that the email originated from outside and is not from the real Carol. Do users know this simple trick to distinguish between local emails and emails from remote senders? (A sketch of how the same check might be automated follows this list.)
  • Do they know that an unexpected email requiring urgent action, even if apparently from a previously trusted sender, should be questioned?
  • Do they feel confident to ask if they feel unsure?
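The header check described in the first bullet above can also be automated. The following is a minimal sketch in Python using only the standard library; the domain and the set of colleague names are assumptions for the purpose of illustration, and a real mail client or gateway would draw them from configuration and the company directory:

    # Minimal sketch (assumed domain and directory): warn when a display name
    # matches a colleague but the address behind it is not at our own domain.
    from email.utils import parseaddr

    OUR_DOMAIN = "domain.example"        # the organisation's real domain
    KNOWN_COLLEAGUES = {"Carol Smith"}   # assumed directory of display names

    def external_sender_warning(from_header):
        """Return a warning string if the From header impersonates an internal sender."""
        display_name, address = parseaddr(from_header)
        domain = address.rpartition("@")[2].lower()
        if display_name in KNOWN_COLLEAGUES and domain != OUR_DOMAIN:
            return f"Caution: '{display_name}' wrote from outside the organisation ({address})"
        return None

    print(external_sender_warning("Carol Smith <carol@domaln.example>"))  # flags the forgery
    print(external_sender_warning("Carol Smith <carol@domain.example>"))  # None - genuinely internal

Run against the forged message, a check like this produces a warning for the user; against a genuine internal message it stays silent.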

Process

  • Are duties effectively segregated? Business processes should not permit a user to initiate payments, or at least payments above a certain threshold, without independent corroboration of the request. This could be as simple as a telephone call to the manager concerned: “I am about to make this payment; please confirm that you requested it.”
  • Does your organisation have an effective incident response process? Know what you are going to do if compromise is suspected.

Technology

Purely technological defences against this type of attack are possible, but difficult and expensive to develop, train and maintain. Due attention given to the People and Process elements is a more sustainable way to protect against this type of social engineering.


Christopher Linfoot