seb's ramblings

Here is where I write down longer thoughts for everyone to see. If you'd like to have a conversation with me, you can find me [at]seb on ioc.exchange

Key Performance Indicators (KPIs) are what people in business generally use to measure the performance of a business function or team. In the cybersecurity world we are blessed (or cursed) with plenty of KPIs. Even in the cyber sub-field of phishing, there are still a lot of metrics that email security vendors force on us. Here are some examples:

  • Number of hard spoof emails blocked
  • Number of soft spoof emails blocked
  • Number of phishing emails quarantined
  • Number of malicious attachments quarantined
  • Number of phishing URLs blocked
  • Number of phishing emails reported by users
  • Number of phishing test failures
  • ...

Now, let's take a step back! Do any of the above metrics make sense to a normal (non-cyber) person? And who is giving you your cyber budget again..?

So what would be a good KPI to measure the performance of your phishing defenses?

To answer that question, we need to look at what it is that we are defending against. The adversary's goal with ordinary, run-of-the-mill phishing is usually to harvest valid credentials from your users. What the adversary does with those credentials afterwards is a completely different story. With phishing defenses you are trying to prevent successful credential harvesting. Full stop.

This goal now easily translates into the Ultimate Phishing KPI:

  • Number of compromised passwords

Report on that number on a weekly/monthly/quarterly basis and every report recipient knows exactly what you are talking about.

How to measure that?

Now comes the hard part: how the heck do I know how many credentials were stolen last week? Well, here you have to make the following assumption:

The adversary will test the credentials soon after they have been harvested!

And that credential testing is something you can monitor for. Use your authentication logs (incl. MFA logs) to look for logins that are blocked by the MFA challenge and come from unusual locations (what counts as unusual differs from user to user, so an ML-based tool that can profile your users is helpful here).
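
As a rough illustration of the idea, here is a minimal sketch of counting suspected credential-testing events. The log schema (`user`, `country`, `result` fields and the `mfa_blocked` value) is entirely made up for this example; real authentication logs and ML-based profiling tools look quite different.

```python
# Hypothetical sketch: count likely credential-testing events in auth logs.
# Field names and values are illustrative assumptions, not any product's schema.

def count_suspected_compromises(auth_events, known_locations):
    """Count users with a login that was blocked at the MFA step AND
    came from a country the user has never logged in from before."""
    suspected = set()
    for event in auth_events:
        user = event["user"]
        unusual = event["country"] not in known_locations.get(user, set())
        if event["result"] == "mfa_blocked" and unusual:
            suspected.add(user)  # one compromised password per user
    return len(suspected)

events = [
    {"user": "bob", "country": "NG", "result": "mfa_blocked"},
    {"user": "bob", "country": "US", "result": "success"},
    {"user": "alice", "country": "DE", "result": "success"},
]
profiles = {"bob": {"US", "IN"}, "alice": {"DE"}}
print(count_suspected_compromises(events, profiles))  # 1
```

The resulting count is exactly the "Number of compromised passwords" KPI for the reporting period.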

What else?

Besides watching the adversary use the harvested credentials, you can also observe when your users interact with phishing websites. A prerequisite for that is knowing which phishing websites to look for. So you need some kind of reporting mechanism that alerts you about a phishing URL that has been successfully delivered to your users' inboxes. One of these reporting mechanisms we all have at our disposal: our users. If you train them, they will report phishing emails to you. Then all you have to do is examine the reported email and look through your firewall and EDR logs (a SIEM is helpful here) for users who have interacted with that phishing website.
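
The log search step can be sketched roughly as follows. The proxy-log schema is an assumption for illustration; in practice this query would run in your SIEM against firewall/EDR/proxy data.

```python
# Hypothetical sketch: given a phishing URL from a reported email, find users
# whose web logs show a visit to that site. Log fields are assumptions.
from urllib.parse import urlparse

def users_who_visited(phishing_url, proxy_logs):
    """Match on the hostname, since the URL path often varies per victim."""
    target_host = urlparse(phishing_url).hostname
    return sorted({entry["user"] for entry in proxy_logs
                   if urlparse(entry["url"]).hostname == target_host})

logs = [
    {"user": "carol", "url": "https://login.evil.example/a1b2"},
    {"user": "dave",  "url": "https://intranet.example.com/home"},
]
print(users_who_visited("https://login.evil.example/x9z8", logs))  # ['carol']
```

Matching on the hostname rather than the full URL is a deliberate choice here: phishing kits frequently generate a unique path per recipient.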

What comes out of that exercise is another good KPI:

  • Number of potentially compromised passwords

Sometimes this particular KPI gets you questions about why you use the word “potentially”, but those questions can be answered easily, and the answers demonstrate that the cyber game is not as trivial as it sometimes sounds ;–)

Me on the Fediverse: https://ioc.exchange/@seb

Over the last couple of years there have been numerous debates on whether it is a good idea to get rid of password expiration. The arguments against password expiration are usually variations of the following:

  1. Forcing users to change their passwords on a regular basis leads to widespread use of weak passwords.
  2. Frequent password changes across many systems lead to password re-use (i.e. the user uses the same password everywhere).
  3. Putting the burden of security on the user is wrong, technology should do the heavy lifting.

What is interesting about most of the conversations I read is that they seem to ignore all the other password improvements that the advocates of getting rid of password expiration usually cite. When you get rid of password expiration, you are supposed to also do the following:

  A. Improve password length significantly (switch from passwords to passphrases)
  B. Get rid of some password complexity requirements (special characters are really hard to remember!)
  C. Introduce a password blacklist (e.g. block the word 'password' and its variations like 'p4ssw0rd')
  D. Monitor your user accounts for leaked credentials and force password changes once you detect a leaked credential!
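
To make point C concrete, here is a toy blacklist check. The substitution table and word list are purely illustrative; a production system would use a maintained list (e.g. the corpus behind haveibeenpwned.com) rather than three hard-coded words.

```python
# Hypothetical sketch of a password blacklist check with leetspeak
# normalization, so 'p4ssw0rd' is caught as a variation of 'password'.

LEET_MAP = str.maketrans("40135$!", "aolessi")  # 4->a, 0->o, 1->l, etc.
BLACKLIST = {"password", "letmein", "qwerty"}

def is_blacklisted(password):
    """Normalize common character substitutions before comparing."""
    normalized = password.lower().translate(LEET_MAP)
    return any(word in normalized for word in BLACKLIST)

print(is_blacklisted("P4ssw0rd2024"))                # True
print(is_blacklisted("correct horse battery staple"))  # False
```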

Now, A, B, and C are really easy to deliver – you can just install and configure some tech that will do the job for you. It is pretty much a set-and-forget kind of thing to implement. However, D is a completely different beast – see the paragraphs below.

The Role of CyberOps Maturity in all of this

OK, we understand now that under the new guidelines we are no longer supposed to expire passwords periodically, but we are supposed to force a password change when the password has been leaked/compromised. How would one do that?

In my experience there are three types of sources that can tell you when a password has been leaked/compromised:

  1. A Leaked Password Database (e.g. https://haveibeenpwned.com , Azure AD Identity Protection, Recorded Future, + other services)
  2. Your SOC tells you that someone has tried to login from an unusual location and was blocked by MFA
  3. The user tells you that they have given away their password to a phishing website

Number 1 is pretty straightforward – you feed these services your user accounts and they alert you about any credentials they see on the dark web, on pastebins, or in other nefarious places. However, those services need to be set up with your data and kept up to date as users join and leave. Therefore you need to drive your CyberOps maturity to a point where you can rely on a process that keeps those services current, so you can be sure they will alert you about any leaked credentials of yours.
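
As an aside, the Pwned Passwords side of haveibeenpwned.com uses a neat k-anonymity scheme: you send only the first five hex characters of the password's SHA-1 hash (to https://api.pwnedpasswords.com/range/&lt;prefix&gt;) and match the returned suffixes locally, so the service never sees the full hash. A sketch of the client-side logic, with a made-up API response for illustration:

```python
# Sketch of the k-anonymity matching used by the Pwned Passwords API.
# The sample response text below is fabricated for this example.
import hashlib

def hash_split(password):
    """Return the 5-char SHA-1 prefix sent to the API and the local suffix."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]

def breach_count(password, api_response):
    """api_response: text of 'SUFFIX:COUNT' lines returned for our prefix."""
    _, suffix = hash_split(password)
    for line in api_response.splitlines():
        candidate, _, count = line.partition(":")
        if candidate == suffix:
            return int(count)
    return 0

prefix, suffix = hash_split("password")
sample = f"{suffix}:3861493\n" + "F" * 35 + ":1"
print(prefix)                              # 5BAA6
print(breach_count("password", sample))    # 3861493
```

A nonzero count means the password has appeared in known breaches and should trigger a forced change.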

Number 2 is probably the hardest. First, you need a SOC – something not all companies/organizations can afford. For those that don't know: a SOC is basically a team of cybersecurity analysts who watch your cybersecurity tools on a 24x7 basis and respond to any detected attack. The expensive part is that you need at least 7-8 full-time cyber analysts to cover 24x7. Once you have a SOC, you need to give them a tool that is capable of detecting unusual login attempts. Those kinds of tools usually use machine learning to create login profiles for all of your users, so they can alert you when Bob suddenly logs in from Nigeria although he usually works out of the US and India. By now you have probably also realized that having MFA in place for all publicly available apps/services is a must – even with a SOC you cannot really afford a successful login by the adversary. All in all, you need a lot of money, time, and effort to put the tech (IdP, SSO, MFA) in place, and then a lot more to onboard and run an effective SOC that enables you to properly respond to leaked/compromised credentials every time.
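
The profiling idea behind "Bob suddenly logs in from Nigeria" can be boiled down to a toy baseline, assuming you have historical (user, country) login pairs. Real tools score far richer features (ASN, device, time of day) with ML rather than a simple set lookup.

```python
# Toy login-location baseline. Input format is an assumption for illustration.
from collections import defaultdict

def build_profiles(history):
    """Collect the set of countries each user has logged in from."""
    profiles = defaultdict(set)
    for user, country in history:
        profiles[user].add(country)
    return profiles

def is_unusual(profiles, user, country):
    """A login is unusual if the country is absent from the user's history."""
    return country not in profiles.get(user, set())

history = [("bob", "US"), ("bob", "IN"), ("bob", "US")]
profiles = build_profiles(history)
print(is_unusual(profiles, "bob", "NG"))  # True
print(is_unusual(profiles, "bob", "IN"))  # False
```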

Number 3 is fairly simple in comparison. All you need to do is train your users on the dangers and creativity of present-day phishing. In addition, you need to give your users a quick way to tell the SOC when they suspect they have just given away their credentials (a shared mailbox or ticket system usually does the trick).

Another thing to consider: Phishing

Another thing I haven't mentioned within the realm of CyberOps maturity yet: email defenses against phishing. These days phishing constitutes about 1% of the email traffic a company/organization receives. If your email defenses let 50% of that phishing volume through to users' inboxes, your users are pretty much giving away their credentials on a daily basis. Even a 24x7 SOC will be driven crazy by that amount of leaked/compromised credentials and will hit its capacity limits. Therefore you need to make sure that your bases (email defenses) are covered appropriately as well.

Icing on the Cake

Another thing a well-trained SOC can do, besides detecting MFA-blocked login attempts, is investigate every phishing email that is reported by a user or by any of the (after-delivery) email security tools. By that I mean only the phishing emails that actually get through to the users' inboxes. A good SOC should be able to figure out who else within your organization has received a similar phishing email (same subject, same sender, same sender IP, etc.) and which of those users have interacted with the phishing website. The results of such investigations lead to 'potentially compromised' credentials. I believe it is good practice to force a password change on 'potentially compromised' credentials as well.
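
The "who else received a similar email" step is essentially a pivot on shared indicators. A minimal sketch, with a made-up mail-log schema (in practice this would be a query against your email gateway's logs):

```python
# Hypothetical sketch: given one reported phishing email, find other delivered
# emails sharing an indicator (subject, sender address, or sender IP).

def similar_emails(reported, mail_log):
    keys = ("subject", "sender", "sender_ip")
    return [m for m in mail_log
            if m is not reported and any(m[k] == reported[k] for k in keys)]

reported = {"user": "erin", "subject": "Invoice due",
            "sender": "acct@evil.example", "sender_ip": "203.0.113.7"}
mail_log = [
    reported,
    {"user": "frank", "subject": "Invoice due",
     "sender": "other@evil.example", "sender_ip": "198.51.100.9"},
    {"user": "grace", "subject": "Lunch?",
     "sender": "pal@example.com", "sender_ip": "192.0.2.1"},
]
print([m["user"] for m in similar_emails(reported, mail_log)])  # ['frank']
```

The users surfaced this way are the candidates to check against web logs for interaction with the phishing site, feeding the 'potentially compromised' KPI.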

Quintessence

If your CyberOps is mature enough to execute the above procedures reliably and repeatedly, there is no reason to keep expiring passwords. You are ready to take this security burden off the shoulders of your users and hand it to the CyberOps team.

If you like what you read and want to chat about it, you can find me at https://ioc.exchange/@seb . If you don't have a Mastodon account yet, feel free to join https://ioc.exchange .

https://pages.nist.gov/800-63-3/
https://www.troyhunt.com/passwords-evolved-authentication-guidance-for-the-modern-era/
https://docs.microsoft.com/en-us/office365/admin/misc/password-policy-recommendations?view=o365-worldwide
https://blogs.technet.microsoft.com/secguide/2019/04/24/security-baseline-draft-for-windows-10-v1903-and-windows-server-v1903/

I have recently built a new Mastodon instance to create an additional home for InfoSec professionals on the Fediverse. Being a new Mastodon admin, I was evaluating the different ways to get more content on the Federation Timeline. This post summarizes my experience, findings, thoughts, and possible feature requests.

Challenge A – The New-User-Experience

When you start a new instance, the Local and Federated timelines are empty. Also, there seems to be no effective way to find people to follow using the search function on your own instance. The currently recommended workaround is to create an account on an established instance first, so you can find people to follow, and then move those connections over to your own instance.

Challenge B – Network Effects and Information Flow

Once you have managed to find other users whose content you find interesting, your Home and Federated timelines start to fill with content. However, since this mainly surfaces the content of users you already know, it creates a social-media bubble effect that leads you to believe that everyone has the same interests and worldviews as you. The media covered this effect a lot after the 2016 US presidential election, where the Facebook bubble that users created for themselves seems to have heavily influenced the outcome.

Challenge C – Resiliency

Through my research on Mastodon and the Fediverse, I stumbled across a very interesting study: https://arxiv.org/pdf/1909.05801.pdf In this paper the authors analyzed a large amount of Mastodon data to research the resiliency and other aspects of federated social media platforms. They found that by blocking/eliminating a couple of key instances, the available content of the Fediverse can be severely impacted. In my opinion this is something to improve upon, so that freely provided / non-censored content can resist a targeted attack by larger players (e.g. government-sponsored entities) in the cyber realm.

Existing Solution A – Bot based Following of interesting Mastodon Users

In the past, some instance admins seem to have utilized bots to follow users of specific instances that they found to be well maintained and interesting.

Existing Solution B – ActivityPub Relays

For a couple of Mastodon versions now, an admin can configure her instance to join ActivityPub relays. And while this seems to solve the problem, it creates some challenges due to its bi-directional nature: once you connect your instance to a relay, you are basically at the mercy of the relay admin, and your instance will 'blow up' if the relay gets a new member with a large amount of content.

Proposed Solution C – Mastodon Instance2Instance Subscriptions

If we could implement a feature that allows an instance admin to subscribe to other instances' content, one could easily create larger communities out of multiple smaller communities. And if that subscription had both a one-directional and a bi-directional option, it would even become possible for larger instances to promote content of smaller instances without forcing the smaller instance to eat all the content of the larger instance. A feature like this would make it much easier to create resiliency for the content of specific instances. It would also allow an instance community to curate their Federation feed in a meaningful way. (The admin could, for example, run polls with his users to decide which other instances to subscribe to.)

If you find this idea interesting and want to chat about it, please go to the Mastodon Discourse here: https://discourse.joinmastodon.org/t/mastodon-on-the-fediverse-current-limitations/2350/2

If you'd like to talk to me directly, you can find me here: https://ioc.exchange/@seb

Now that I have set up Mastodon (https://ioc.exchange) and WriteFreely (https://rfc.ioc.exchange) instances, I feel the need to explain why: working in InfoSec, I have always felt that it is hard to exchange ideas and compare notes in this field of work. Over time I have identified the following reasons:

  1. InfoSec practitioners are rare – There aren't that many yet.
  2. InfoSec practitioners are usually introverts – They don't talk much.
  3. It is hard to know what one can talk about and what one shouldn't talk about without breaching any agreements with one's employer.

IOC.exchange is supposed to make exchanging InfoSec ideas easier by providing online spaces that allow anonymous communication and aren't controlled by any tech companies or vendors.

My hope is that we can learn from each other and with that make the cyber world a safer place.