londons_explore

It's important to note the nature of the failure.

Opening a phishing email should not be considered a failure. The email client is specifically designed to be able to display untrusted mail.

Even clicking a hyperlink in a phishing email isn't too bad - web browsers are designed to be able to load untrusted content from the internet safely.

It's only entering credentials by hand into a phishing website, or downloading and executing something from a phishing site that is a real failure.

IT departments should probably enforce single sign-on and use a password-alert tool to catch a corporate password being typed into an outside webpage. They should also prevent downloads of executable files from non-whitelisted origins for most staff.
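The "password alert" control mentioned above can be sketched in a few lines. This is a hypothetical illustration of the approach used by tools like Google's Password Alert extension: only a salted hash of the corporate password is kept, and the tail of whatever the user types into a non-SSO web form is compared against it, so the password itself is never stored. All names and parameters here are illustrative.

```python
import hashlib
import hmac
import os

# Hypothetical "password alert" sketch: the endpoint stores only a
# salted hash of the corporate password and compares the trailing
# characters of typed form input against it.

ITERATIONS = 10_000  # kept low for the sketch; use far more in practice

def make_fingerprint(password, salt=None):
    """Store only (salt, hash), never the password itself."""
    salt = salt or os.urandom(16)
    return salt, hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)

def typed_matches(fingerprint, typed):
    """True if the tail of the typed text equals the protected password."""
    salt, digest = fingerprint
    # Check every suffix up to a sane maximum password length.
    for start in range(max(0, len(typed) - 64), len(typed)):
        trial = hashlib.pbkdf2_hmac("sha256", typed[start:].encode(), salt, ITERATIONS)
        if hmac.compare_digest(trial, digest):
            return True
    return False

fp = make_fingerprint("hunter2")  # hypothetical corporate password
assert typed_matches(fp, "my password is hunter2")   # the alert fires
assert not typed_matches(fp, "something unrelated")
```

A real deployment would run this check in a browser extension on every non-SSO login form, which is why it pairs naturally with the single-sign-on enforcement suggested above.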

yunesj

> It's important to note the nature of the failure.

Definitely! UCSF had a security firm send out a fishy-looking phishing email. My email client pointed out that the URL did not match the link text, whois told me it was a security company, and I opened the URL in a VM.

“You just got phished!” Eye roll.

I wouldn’t be surprised if most of those employees at GitLab were not so much phished as curious.
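The check that mail client performed (flagging links whose visible text is itself a URL pointing at a different host than the real href) can be sketched with the standard library. The domains below are made up for illustration:

```python
from html.parser import HTMLParser
from urllib.parse import urlparse

class LinkMismatchChecker(HTMLParser):
    """Flags <a> tags whose visible text looks like a URL on a
    different host than the actual href, a classic phishing tell."""
    def __init__(self):
        super().__init__()
        self.href = None
        self.text = ""
        self.mismatches = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.href = dict(attrs).get("href", "")
            self.text = ""

    def handle_data(self, data):
        if self.href is not None:
            self.text += data

    def handle_endtag(self, tag):
        if tag == "a" and self.href is not None:
            shown = self.text.strip()
            if shown.startswith(("http://", "https://")):
                if urlparse(shown).hostname != urlparse(self.href).hostname:
                    self.mismatches.append((shown, self.href))
            self.href = None

# Hypothetical phishing email body: the text claims one domain,
# the href points at another.
checker = LinkMismatchChecker()
checker.feed('<a href="https://evil.example.net/login">https://portal.ucsf.edu</a>')
assert checker.mismatches == [("https://portal.ucsf.edu", "https://evil.example.net/login")]
```

Real mail clients also have to handle redirectors and display text that only resembles a URL, which this sketch ignores.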

dyslexit

The article says 17 employees opened the link, and 10 of those typed in their credentials. The 20% the headline is talking about are those 10, not the 7 who clicked but entered nothing.

paholg

They did a test like this at a company I worked at. I ended up entering fake credentials because the thing seemed so shady, I was curious what its deal was.

yosito

Hi Gitlab! I'm available for hire if you need a replacement.

pdonis

> I wouldn’t be surprised if most of those employees at gitlab were not so much fished as curious.

"Curious" might get you to open the web page, but actually entering credentials moves you into "phished" no matter how curious you are.

TimTheTinker

Not if every component is intentionally faked.

zifnab06

I did the same thing at a previous job. Got signed up for an 8 hour security training because of it - somehow I refused to go and they didn't fire me.

leppr

VM escape exploits are actually used in the wild, so yes, if that was on your work machine, you failed the test.

minitoar

If your security model requires people to never open an untrusted link in their browser, you just cannot allow open Internet access.

bpodgursky

This is kinda ridiculous. You first need the email client to have a bug which enables some kind of cross-site scripting from just rendering an email, then a sandbox bug that lets a webpage leak into the underlying system, and THEN a bug for the VM to escape to the parent OS.

At that point, I think it's as likely that your airgapped email laptop can hack into your work machine through local network exploits.

If you think a hacker is going to manage all that, you might as well assume that the hacker can trick Gmail into opening the email for you. There's a point at which we have to realistically assume that some layer of security works, and go about our lives.

stepanhruda

You are confusing executing untrusted code in a VM with opening something in a browser in a VM - would really need to be a double VM escape.

arpa

Clearly your threat model adversary is Mossad.

furyofantares

Anyone can get phished if, on an off day when you're tired or distracted by personal issues or whatever, your guard is down and you happen to receive a phishing attempt that also pattern matches something you're kind of expecting, either because it's a targeted attempt or just randomly for a wide-net phishing attempt. That's my model of how phishing works, they just make lots of attempts and know they will get lucky some small percentage of the time.

With that as my model: the email getting to your inbox is of course the first failure and increases the chance of getting phished from zero to not zero. Opening the email is another failure that raises the chance. Clicking the link is another.

Each of the steps leading up to entering credentials or downloading and executing something from a phishing site is a real failure, in that it increases the chances of becoming compromised.

That's even true if you're suspicious the whole way through. If you know it's a phishing attempt and are investigating, fine. But if you are suspicious, that means you can still go either way. You can also get distracted and end up with the phishing link in some tab waiting for you to return to it with all the contextual clues missing.

phire

Someone once posted a link on Hacker News titled "new phishing attack uses google domain to look legit".

I opened it in a new tab along with several other links to read, I was expecting a nice blog post explaining an exploit.

After about 20 minutes of reading the other tabs I came across that tab again. I had forgotten the title of what I had clicked; I'm not sure I even remembered it was a Hacker News link that got me to that page.

"Oh, looks like Google has randomly logged me out, that doesn't happen often" I think as I instinctively enter my email and password and hit enter.

Followed half a second later by "oh shit, that wasn't a legitimate google login prompt."

I raced off to quickly change my password, kick off any unknown IPs and make sure nothing had changed in my email configuration.

I'm lucky I came to my senses quickly. I think it was the redirect to the generic Google home page that made it click, along with the memory of the phishing-related link I had clicked 20 minutes ago.

But yeah, it can happen to anyone on a bad day.

PowerfulWizard

There should really be a browser-managed 'tainted' flag on any tab opened from an email that prevents password input. Or, if it doesn't prevent it outright, at least shows a scary warning click-through like an unsigned certificate creates, one that displays the true full domain name.

Whenever I read about phishing, it seems insane that we have a system that requires human judgement for this task. If there isn't a deterministic strategy to detect it, how could the user ever reliably succeed? And if there is such a strategy, it should be done by the mail server, mail client, and browser.

Even an extension doing this might work in a corporate context. That makes me wonder whether companies build their own extensions to enhance the browser for their needs. If all your employees are using web browsers for multiple hours per day, it might really be worth it.

ripdog

This is why an auto-filling password manager is an essential security tool for every internet user. If your password manager doesn't autofill/offer to fill your passwords, the domain isn't legitimate.

Password managers are great for security and super convenient. It continues to shock me how many people type the same password into dozens of sites by hand, and then wonder why they fall for phishing.
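A password manager's anti-phishing property boils down to keying credentials by hostname and refusing to autofill anywhere else. A toy sketch of that behavior (hostnames invented; real managers also handle registrable-domain matching, which this omits):

```python
from urllib.parse import urlparse

# Toy model of a password manager's core anti-phishing property:
# credentials are keyed by exact hostname, and autofill is offered
# only on a match.
vault = {
    "accounts.stellar.org": ("alice", "correct horse battery staple"),
}

def offer_autofill(url: str):
    """Return stored credentials only if the hostname matches exactly;
    None is the warning sign that should make you re-read the URL."""
    return vault.get(urlparse(url).hostname)

assert offer_autofill("https://accounts.stellar.org/login") is not None
# A homograph lookalike (accented 's') gets nothing autofilled:
assert offer_autofill("https://accounts.śtellar.org/login") is None
```

The point is that the lookalike is rejected mechanically, with no human judgement involved, which is exactly what the comment above is arguing for.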

pdonis

> Anyone can get phished if, on an off day when you're tired or distracted by personal issues or whatever

It shouldn't matter how tired or distracted you are: you should never enter credentials into any place you get to from anything you receive in an email--or indeed by any channel that you did not initiate yourself. If you get an email that claims there is some work IT issue you need to resolve, you call your work IT department yourself to ask them what's going on; you don't enter credentials into a website you got to from a link in the email.

It's the same rule you should apply to attempted phone call scams: never give any information to someone who calls you; instead, end the call and initiate another call yourself to a number you know to see if there is actually a legitimate issue you need to deal with.

Rules like this should be ingrained in you to the point where you follow them even when you're tired or distracted, like muscle memory.

lucb1e

I just realized that this might happen to me. On my home PC my alarm bells would definitely go off when Firefox stops suggesting credentials for a supposedly known domain, but on my work computer we're a bit higher security and a password manager integrated into the browser (even with master password and quickly installing patches and whatnot) is just not up to scratch. So what I realized is that I may not notice a lookalike domain because I need to grab the creds from another program anyway.

Is there an add-on for Firefox that warns when you enter credentials on a new domain? Or puts a warning triangle in a password field when today is the first day you visited the domain or something? Firefox already tracks the latter, you can see it in the page info screen, so both should be easy to make but I'm not sure anyone thought of making this before.

hn_throwaway_99

This is exactly why security keys should become standard. They are essentially unphishable.

amelius

I really don't understand why every laptop/computer/keyboard/smartwatch doesn't come with NFC for exactly this purpose.

kjaftaedi

Clicking a hyperlink is certainly bad.

Browsers have vulnerabilities, and you're broadcasting valuable information about yourself to the attacker, including the fact that you're receiving, reading, and clicking links in their mails.

Also, the article states clearly that 1 in 5 fully entered their credentials.

zulln

> Clicking a hyperlink is certainly bad.

HN must be a boring place if you are not prepared to click on external links.

stingraycharles

There’s a fundamental difference between HN links and links in targeted emails. I cannot start phishing GitLab employees using HN posts; the threat model is just different.

ubercow13

The point is to recognise the email/situation as phishing or otherwise malicious before deciding to click the link. The chance of clicking a malicious link on HN is pretty low if you stick to the front page.

Rebelgecko

Usually I mouseover and see where the link would take me. If it's something like micr0soft.co, it raises some red flags. For something like a targeted phishing email, it's even more reasonable to be concerned about things like browser 0 days

yjftsjthsd-h

Eh; I'm 95% here for the comments.

unethical_ban

Emails and HN are different.

Then someone will point out watering hole attacks, where adversaries find where targets hang out socially, and attack that.

And then I'll point out that the inherent risk in HN links vs. unfamiliar emails are very different.

amelius

Some people never do ;)

underwater

In theory, sure. In practice everyone is clicking on links all day. If someone has a 0-day, employees manually checking domain names on emails is not going to stop them.

MiroF

Yeah good luck defending your company against a Chrome 0-day

kjaftaedi

It's not about defending against something specific.

It's about using strategies, like teaching people to check links before clicking them, that can prevent a number of different things (phishing, malware, etc.).

If you've already clicked a link, attackers know exactly what browser you are using, and that you're probably willing to click on the next link they send you too, allowing them to go from a blanket attack to a targeted attack.

mmxmb

I disagree that clicking a hyperlink is not bad. If you have a determined attacker with some 0-days up their sleeves, simply opening a hyperlink may result in arbitrary code execution.

See an example from last summer: https://blog.coinbase.com/responding-to-firefox-0-days-in-th...

hannob

My understanding of the text is that 10 of 50 actually entered credentials. So the 1/5th is really the number of people whose credentials a phisher would've stolen (although they say later that they use 2FA, which would've prevented a real attack; still bad enough, though, as you can expect these people use other accounts which may not even support 2FA).

oefrha

2FA (assuming TOTP, not hardware keys) prevents attacks using credentials leaked from side channels, but does not work in phishing attacks using a fake login form. The attacker just needs to channel the TOTP you entered into the real login form, and on average they have a bit more than 15 seconds to do so, which is more than enough.
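The "a bit more than 15 seconds on average" figure follows from TOTP's 30-second step: a code the victim types mid-window stays valid for the remainder of that window, plus any grace period the server allows. A standard-library sketch of RFC 6238 TOTP, for illustration (the secret below is an example, not a real one):

```python
import base64
import hmac
import struct

def totp(secret_b32: str, at: float, step: int = 30, digits: int = 6) -> str:
    """RFC 6238 TOTP: HMAC-SHA1 over the current time-step counter."""
    key = base64.b32decode(secret_b32)
    counter = struct.pack(">Q", int(at // step))
    mac = hmac.new(key, counter, "sha1").digest()
    offset = mac[-1] & 0x0F  # dynamic truncation (RFC 4226)
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % 10 ** digits
    return str(code).zfill(digits)

def seconds_left(at: float, step: int = 30) -> float:
    """How long a code typed at time `at` remains valid on the server."""
    return step - (at % step)

secret = "JBSWY3DPEHPK3PXP"  # example base32 secret
now = 1_700_000_000.0
window = seconds_left(now)
assert window == 10.0
# The phisher can relay the stolen code for the rest of the window:
assert totp(secret, now + window - 1) == totp(secret, now)
assert int((now + window) // 30) == int(now // 30) + 1  # then a new window starts
```

Nothing in the code is bound to the site it was typed into, which is the gap hardware keys close.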

datguacdoh

This is what makes security keys so great: you can't steal a token from one domain and use it on another. They completely remove this type of attack, which no amount of training will ever fully protect you from. You can't put the onus on the employee; you have to make it impossible for them to do the wrong thing in this case.

OJFord

When I worked somewhere large enough to have an IT dept. running these tests, it was obvious they were from IT, and people would open them for amusement.

So yeah, definitely some interaction should be required to consider it a failure, but also the test email should be as convincing and high-quality as possible.

Not just because it makes for a better test, but because it's more likely to be a valuable lesson for more people, people who thought they wouldn't fall for it.

chrisseaton

> The email client is specifically designed to be able to display untrusted mail.

Email clients often do things like load images, which can tell the sender you've read the email, which is an information leak.

Some email clients try not to do this, but that's actually somewhat recent, and I wouldn't say they're 'specifically designed to be able to display untrusted mail', rather 'they try to avoid common exploits when they become known'.

mewpmewp2

What can be done with this information?

Most companies have e-mail addresses that are completely predictable, so you can pretty much assume that the e-mail address exists. If this really were a security risk, shouldn't you have UUID emails for everyone?

Also, how do you as an attacker know that it was a user and not an e-mail server checking those images?

chrisseaton

> What can be done with this information?

It will reveal if they're working right now, what time they work otherwise, their IP address, their approximate physical location, their internet provider. A lot you can do with that.

> Most companies have e-mail addresses that are completely predictable

That's the point. Predict an email address, send it, find out if such a person works there.

If I email unusual.name@sis.gov.uk and they open it then guess what I've worked out?

> Also how do you as an attacker know that it was user not a e-mail server checking those images?

Agent signatures.

Kalium

> What can be done with this information?

Now you know who's curious enough to open a shady-looking email, and perhaps click a link out of curiosity. It means your list for the next round of attacks is much smaller and more targeted, making it easier to evade detection.

Aeolun

> Email clients often do things like load images, which can tell the sender you've read the email, which is an information leak.

That makes it less than ideal, but describing it as a ‘failure’ isn’t going to help any users pay more attention to phishing mails, because they get tons of legitimate emails with images in them.

jandrese

This is one thing I like about Outlook. It doesn't load embedded images unless you click on a button at the top. All email clients should do this. Not only is it safer, but it discourages people from putting a ton of images in emails which is just annoying in general anyway.

ttsda

Gmail used to do this, but it seems they've phased it out.

wolco

Thunderbird has always done this: blocking image loading, prompting before sending read receipts, you name it.

capableweb

Email clients started out without embedded images; images came after the initial email implementation. So one could say that displaying images in email clients is relatively new. Also, most if not all email clients have the option of disabling inline images.

Email clients, just like browsers, are made specifically to handle untrusted user content. That some clients then allow information to leak is another matter. Just like websockets in modern browsers.

brmgb

Sure, let's pretend images in email are a new development and should be stopped.

Meanwhile in the real world some of us have actual users. Pretending we should stop using widely used and useful technology while waving your arms and shouting "but security!" is not going to help anyone.

eganist

Discriminating between different failure modes is important. However, every situation you've described is still some form of failure mode.

1. A user opening a phishing email means the email made it into their inbox (a spam-filtering failure, unless whitelisted for the sake of a test) and the user was moved to open the email based on the subject line. This in itself is the lowest risk of the failure modes we're about to describe, but some risk exists, considering that e.g. malware has spread through the simple act of opening emails before.

2. Clicking a link in a phishing email is much higher risk and, regardless of how the phishing test was crafted, is considered with absolute certainty to be a failure mode of any phishing test or event, for three reasons: the user has definitively disclosed their presence within a company (email clients today may block trackers from loading, but clicking a link gives it away), the user has disclosed their receptivity to the message, and in a real-world attack, merely landing on the page may trigger an event such as the delivery of a malware payload via a functioning exploit against the browser and the underlying operating system.

3. Entering credentials is probably the most obvious one.

---

Rather than a "password alert" control that just alerts a user that their account was signed into, what would be more helpful is a second factor; a bare minimum would be a prompt on a user's phone indicating that a login attempt was detected and requesting confirmation before that attempt can succeed. This at least helps a user potentially preempt an attack against their own account (assuming they're trained on how this works) even if they never figure out that they've entered their credentials into a phishing site, and if the second-factor challenge is never met, an automatic alert could get the security team to triage the risky login.

Pardon typos. Voice to text.

mewpmewp2

What can be done with the info that a user has read and opened the email and clicked through to the website? Our company, for example, has completely predictable e-mail addresses: first letter of first name, then last name, @ company.com. You would have this knowledge even without having to send e-mails. I assume GitLab has it similarly.

Also, I assume GitLab already has 2FA.

eganist

> What can be done with the info that user has read, opened and clicked on the website?

Follow it up with a much more tailored spear phishing attack - https://www.knowbe4.com/spear-phishing/

Reworked to sales terms: it's the difference between a cold lead and a hot lead. A user who's clicked through has proven themselves to at least be warm or receptive to phishing campaigns in general.

As an adversary, I'd probably couple unique links (for tracking clicks) with heatmapping and other front-end tracking technologies to see what exactly the user is doing and how far they've gone before backing out, which helps me refine the attack. Most attackers probably wouldn't go that far (spear phishing the people who clicked would probably be the extent of it), but if someone is after something of particular value at your firm, there's no reason why they wouldn't put more effort into sharpening the attack.

StavrosK

I'm a web developer with a focus on security, and I nearly got phished multiple times. Once was a legitimate-looking email from Linode, which I opened and was fooled by (I didn't check the domain because I trusted my spam filter too much to consider that it might be fake). I was saved by my password manager not auto-filling the credentials because the domain didn't match, which made me look and see that I was on the wrong domain.

The second time, someone was about to steal $30k worth of cryptocurrency from me with a very convincing page on śtellar.org, where I nearly entered my wallet seed (did you notice the accent over the s? I didn't), and was saved by the fact that I keep my cryptocurrency in a hardware wallet, so I had no seed to enter.

Both times, what saved me from being phished wasn't that I'm trained or that I'm more observant (which my parents have no hope of ever being), but that I had used best practices so I didn't have to rely on being trained or observant.

I'm hoping WebAuthn takes off, which will really kill phishing for good, but you can take steps now: Use hardware U2F keys as second factors, use a password manager, don't use SMS auth. Make long, random passwords, etc.

WrtCdEvrydy

I was honestly almost phished a couple of times until one of my professors said something I had never though about before.

"If you have a password manager, use the password manager's 'take me to site' function instead of anything on the email. Just open the site from your password manager instead"

londons_explore

Except a good number of emails aren't directing you to their site in general, but to a specific page on their site, which might be very hard or impossible to find any other way.

WrtCdEvrydy

Right, but if you login to wellsfargo.com and click on a link to a specific page on wellsfargo.com, you will be logged in already...

hombre_fatal

Two years ago I was fooled by "colnbase.com" (L instead of i) to the point that I was annoyed that 1Password "wasn't working". Of course, 1Password didn't have a username/password for a phishing site. I almost opened it to copy the password in manually when I spotted the L. It's sobering.

As for WebAuthn and U2F, unfortunately they made every possible trade-off against practical usability. They're doomed. Go look up the impl/ux flow for WebAuthn right now, for example.

We need less of that and more good ideas that people would actually implement and use.

StavrosK

Really? What do you think is impractical about it? I just tap my USB key and I'm logged in.

Hell, it even supports a mode where you don't have to have a username or password at all (e.g. log in and try adding a key on https://pastery.net, you can then just log in with the key with no username/password at all).

tialaramex

Note that to do the latter ("Usernameless login") you need a FIDO2 key. A relatively modern Yubico product can do FIDO2, but cheaper alternatives mostly don't offer this.

The reason it's a cost upgrade? Those credentials have to live somewhere, and that means flash storage baked inside the FIDO2 key; ordinary FIDO keys don't have close to enough storage.

Next you might wonder: Wait, how does a FIDO key log me into Google if it isn't storing the keys?

Magic. Well, cryptography. When you registered, the key minted a key pair (elliptic curve, most likely) and obviously gave Google the public key, but it also provided Google a large random-looking "identifier" which Google must give back each time you authenticate. That identifier could, by the specification, just be some sort of hidden "serial number", but in reality what everybody does is encrypt the private key - or its moral equivalent - with an AEAD scheme using a device-specific secret key, and then use that as the identifier. So when Google gives back the "identifier", the FIDO device decrypts it to discover its own private key for the site, which it can use to log you in. The FIDO dongle doesn't actually even know you have a Google account, yet it works anyway. Magic!

FIDO2 is a much less clever trick, and that flash storage is too expensive to use it everywhere - but the UX is so seamless it makes username plus password look like they asked you to undergo a cavity search by comparison.
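The stateless key-handle trick described above can be illustrated with a toy HMAC-based variant (a simplification of derivation schemes Yubico has publicly described; a real token derives an elliptic-curve key pair and signs challenges, which this sketch omits):

```python
import hmac
import os

# Toy sketch of a stateless U2F key handle: the token keeps a single
# device secret and re-derives each site's private key from
# (app_id, nonce), so it needs no per-site storage at all.

DEVICE_SECRET = os.urandom(32)  # baked into the token at manufacture

def register(app_id: str):
    """Mint a key handle for a site. The relying party stores the
    handle (plus the matching public key, omitted in this toy)."""
    nonce = os.urandom(16)
    private_key = hmac.digest(DEVICE_SECRET, nonce + app_id.encode(), "sha256")
    # The MAC binds the handle to this device *and* this app_id:
    tag = hmac.digest(DEVICE_SECRET, private_key + app_id.encode(), "sha256")
    return nonce + tag, private_key

def authenticate(app_id: str, key_handle: bytes):
    """Re-derive the private key; reject handles minted elsewhere or
    presented by the wrong origin (the anti-phishing property)."""
    nonce, tag = key_handle[:16], key_handle[16:]
    private_key = hmac.digest(DEVICE_SECRET, nonce + app_id.encode(), "sha256")
    expected = hmac.digest(DEVICE_SECRET, private_key + app_id.encode(), "sha256")
    return private_key if hmac.compare_digest(tag, expected) else None

handle, priv = register("https://accounts.google.com")
assert authenticate("https://accounts.google.com", handle) == priv
# A phishing origin presenting the stolen handle gets nothing:
assert authenticate("https://accounts.googlé.com", handle) is None
```

Because the origin is mixed into the derivation by the browser, not the user, there is no judgement call for a tired employee to get wrong.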

tialaramex

On impl, which I take to mean implementation:

I finally have direct implementation experience (thanks COVID-19 I guess?) of WebAuthn now so I can speak confidently to this consideration.

I built a toy implementation on my vanity site and am gradually integrating it to a site friends built back when we all lived in the same city at the turn of the century. That site is old PHP (actually parts of it are terrifying Perl CGI code that looks like it was written before HTTP/1.1 existed) so my WebAuthn implementation is also PHP at the backend. This is neither the simplest, nor most capable technology, I have no doubt it can be done faster and better in your preferred language (it certainly can in mine).

I wrote <1 KLOC, no frameworks, no libraries beyond standard components, there's a little corner cutting in my PHP CBOR implementation but nothing likely to break in the real world for this purpose (we can treat all "I don't understand" cases as "Probably bogus, refuse entry" and be fine).

The JS is a little bit of Promises and some JSON processing, nothing every browser (that can do WebAuthn) doesn't offer already and I included it in my < 1 KLOC total.

Now you aren't going to get this done by thinking it's something else. Trying to do all the work on the client? Not going to make that happen. Hoping to hide all the WebAuthn credentials in a 64 character "password" field your database already has for each user? Not going to be like that.

But if a team has one person who understands in principle what this looks like, I'd say it's maybe a week for a backend person, a week for a frontend person and a week for a tester to spin up on what's going on and learn it. And that's the first time. And that's going to be markedly less for people who aren't learning the components (Web Crypto, public key crypto) as they go.

The pay off is huge. When you store passwords, that's a liability you've got there, it's like toxic waste you're storing. If somebody gets those passwords you can face fines, somebody might sue you, even at best you'll need a PR firm to help try to sell how sorry you are about it. But stored WebAuthn credentials aren't even secret. They make your preferred sock colour look like the crown jewels of PII by comparison, yet they're far stronger than a password as login credentials.

CapacitorSet

>a very convincing page on śtellar.org

If you rarely use IDNs, toggling `network.IDN_show_punycode` in about:config can help with that - you would have seen `xn--tellar-2ib.org`.
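Python's built-in idna codec performs the same unmasking that the about:config toggle provides, assuming the homograph domain from the comment above:

```python
# The homograph in the lookalike domain disappears once the name is
# shown in its ASCII (punycode) form, which is what the
# network.IDN_show_punycode toggle makes Firefox display.
legit = "stellar.org"
lookalike = "śtellar.org"

ascii_form = lookalike.encode("idna").decode("ascii")
assert ascii_form.startswith("xn--")   # the punycode label is obvious
assert ascii_form != legit
assert legit.encode("idna").decode("ascii") == legit  # real domain unchanged
```

The trade-off, as noted, is that legitimate internationalized domains also render in their xn-- form.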

StavrosK

Thanks. I had originally typed the URL in the comment with https://, and HN converted it to punycode, foiling the attack. I never use IDNs, even though I'm in Greece, so I've set that option, thank you.

supuun

Haha, at first I thought the accent on the s in śtellar.org was just a dust particle on my monitor.

StavrosK

I THOUGHT THE EXACT SAME THING! That's why I didn't notice it at the time :(

kchr

That's one of the reasons it is so effective :-)

ohah

This is similar to a correlation problem. I was complaining multiple times to a company, finally they called me back. They had this elaborate explanation and needed me to reset my password.

Then they asked for my password. I was pretty confused but almost gave it to them. It was just a coincidence that some con-artist had called me to try to phish me when I had been trying to reach out to the company.

Those assumptions where you know it’s real are dangerous because it can make you ignore red flags.

xondono

Any recommendations on hardware U2F keys? I’ve looked at the YubiKey a couple of times but didn’t go through with it.

tialaramex

Most important element is definitely finding a device that suits your needs in terms of connecting it, USB connectors, NFC and so on. The whole idea is these things are trivial to either plug in and leave in a machine that's with you everywhere, or carry on a keyring to use quickly, if it's a whole performance to use your key then you just won't.

I can vouch without hesitation for the Yubico Security Key (the newer version has a "2" printed clearly on it; it also does the FIDO2 protocol with resident credentials). This is a relatively expensive option for the purpose, but it's robust (lots of people put these on key rings and carry them everywhere) and simple, and the people who built it know what they're doing. But it's a USB-A device; if you need Bluetooth or USB-C or whatever then don't buy one hoping to like it.

That product skips all the fancy Yubico features other than being a Security Key, thus saving a big fraction of the cost. But there are much cheaper options that work if budget is tight, if you're just playing around, or for testing a potential deployment: I also have a "KEY-ID FIDO U2F Security Key", again USB-A, and it works nicely, though many people don't love the bright green LED (it's on all the time, not just when authenticating). It also clearly feels cheaper than the Yubico product; this is not an heirloom product.

StavrosK

I have a YubiKey 5C, but it might be a bit of a waste of money, since all I use it for is FIDO2/U2F, especially now that SSH supports that.

I'm excited about the new version of the SoloKeys (https://solokeys.com/) coming out next month, they aren't using secure elements like the Yubikeys are but I'm not really worried about someone stealing the key from me to extract the credentials with physical attacks, so they might be a good alternative.

Other than that, I eventually see password managers having built-in software FIDO2 implementations, so you just open your password manager and it automatically intercepts U2F requests and authenticates them, but that's a different thing.

Basically, anything you get that's U2F/FIDO2 compatible is fine, and much better than the second best thing (TOTP or whatever). Get something that's cheap enough for you to get two of, have one with you and the other at home as a backup, and that's it.

NicoJuicy

Use nextdns.io to block phishing domains and newly registered domains.

sytse

Maybe this article came about because of my tweet: https://twitter.com/sytses/status/1263216521175642112?s=20 “I'm grateful for the red team at GitLab doing an amazingly realistic phishing attack https://gitlab.com/gitlab-com/gl-security/gl-redteam/red-tea... with custom domains and realistic web pages. The outcome was that 20% of team-members gave credentials and 12% reported the attack.”

I think it is amazing that our red team made https://gitlab.com/gitlab-com/gl-security/gl-redteam/red-tea... public so other companies can learn from it, and that they were comfortable with sharing the results.

elliekelly

I’ve seen this a lot in my work where companies hesitate to conduct phishing exercises that are “too convincing” (or, put another way, too realistic) because they fear documenting poor results. Of course that means the exercise and the learning opportunities are much less impactful. I’ll concede it’s a little different with financial institutions because regulators and auditors will usually see the results at some point but I really admire Gitlab’s commitment to transparency.

I try to emphasize to clients that it’s not a test but a phishing exercise akin to a fire drill. You don’t pass or fail a fire drill; you use it to assess how prepared you are for a fire. And if you find that you’re totally unprepared, well, wouldn’t you prefer to figure that out before anything is actually on fire?

thomasdub

I love the lure, and I respect the GitLab team for making it public, but this is a tough read: it’s putting way too much responsibility on the end user. For example, I’m a huge fan of security teams using email headers to analyze suspicious messages, but isn’t it a step too far to expect a user to ever look at an email header? We can hardly get regular end users to hover over a link; encouraging them to open up email headers to see what service the mail was sent from, or to understand what a “Received” header vs. an X-Originating-IP header means, is counter-productive. Headers are hard to understand even for a security analyst; asking HR or Recruitment or Sales to analyze and understand them feels like the red team underestimating how little time everyone has and overestimating how technical most employees are!

Jugurtha

I'm intrigued. Why limit the experiment to 50 employees? Why not everyone except the Red Team?

LordGrey

My company regularly runs internal phishing tests like this, using an outside organization. We apparently have a near-constant 7% failure rate. Personally, I cheat: Long ago I discovered that the outside org puts some identifying headers into the email, so I wrote an email rule that adds "[PHISHME]" to the subject line.
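LordGrey's rule is only a few lines in practice. Here is a hedged Python sketch; the header name `X-PhishTest-Vendor` is hypothetical (each vendor uses its own identifying headers, so inspect a known test email to find yours):

```python
import email
from email import policy
from email.message import EmailMessage

# Hypothetical header left behind by the phishing-test vendor; the real
# header name varies by vendor, so inspect a known test email to find it.
VENDOR_HEADER = "X-PhishTest-Vendor"

def tag_phish_tests(raw_message: bytes) -> EmailMessage:
    """Prepend "[PHISHME]" to the subject when the vendor header is present."""
    msg = email.message_from_bytes(raw_message, policy=policy.default)
    if msg[VENDOR_HEADER] is not None:
        subject = msg["Subject"] or ""
        del msg["Subject"]  # EmailMessage forbids duplicate headers
        msg["Subject"] = "[PHISHME] " + subject
    return msg

raw = (b"From: ceo@example.com\r\n"
       b"Subject: Urgent: reset your password\r\n"
       b"X-PhishTest-Vendor: acme-security\r\n"
       b"\r\nClick here...\r\n")
print(tag_phish_tests(raw)["Subject"])  # [PHISHME] Urgent: reset your password
```

The same idea works as a Sieve or Outlook rule; the point is that any static vendor fingerprint defeats the test.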

The phishing emails are sometimes very good. They appear to be from senior management and address projects or other internal events everyone knows about. Some emails are very easy to spot, in the Nigerian prince category. It is very interesting that we have that 7% failure rate no matter how good or bad the phishing email is.

In general, I think internal phishing tests are a great way to educate the workforce.

JumpCrisscross

> My company regularly runs internal phishing tests like this... I think internal phishing tests are a great way to educate the workforce

Yes and no. I used to report phishing attempts to IT. Then we started running tests like every month, so I'd just delete suspicious messages and move on. Of course, that's when we got a real phishing message.

Frequent company-wide tests are, in my opinion, overboard. Once a year company-wide tests, followed up by more-frequent tests for sensitive groups and/or those who failed previous tests, makes more sense.

WrtCdEvrydy

That's the thing: reporting a phishing email in my org excludes you from the next month's worth of test emails... then two months... then four months... I spoke to the guy in charge and he checked (my account is set to not receive them for 2 years).

LordGrey

Our tests seem to be somewhat staggered. We may see phishing email tests twice in a month, then nothing for several months. Typically there is a two-month lag between the tests.

I should note that phishing tests are just one component of many company-wide education programs regarding physical, computer, data, and network security. My company deals with very sensitive data, so information security is a Big Deal.

The problem with targeting these tests is that new employees are constantly coming in and need to be educated/trained. Also, the persistent failures do not seem to be confined to only certain work groups; they're spread around the company fairly randomly, and they move.

Exactly how phishing tests are run probably depends quite a bit on what kind of company you have and what kind of employees work there. A workforce full of programmers would -- I would hope! -- be much less susceptible to phishing scams. The sales force, possibly more susceptible. That may be stereotyping, though.

thepete2

Just curious: Are there repercussions if you don't "pass" the phishing test (that would be seriously stupid), do you dislike them or simply "cheat" because it saves you time?

heipei

I'm not a huge fan of these phishing-test exercises. I run the service at https://urlscan.io which a lot of folks use regularly to check out suspicious links in mails / chat messages. I've been approached by some of these phishing-test companies asking me to prevent scanning their domains/IPs. They flat-out told me that they weren't happy about users using my service to check the link, which I always found odd, and I never got an explanation for it. Probably less spectacular findings for these companies if users can figure out a phishing test by themselves...

WrtCdEvrydy

> Probably less spectacular findings for these companies if users can figure out a phishing test by themselves...

It's the same issue as with ad companies... if you don't cook the numbers that show your expensive service is worth it, then people will switch to the service whose numbers look worse, and therefore more necessary ("this one has a 7% fail rate, but this one has a 50% fail rate").

tikkabhuna

Perhaps they should look at building an integration that shows how often urlscan.io is flagging the phishing-test companies?

duckmysick

What are the legitimate cases for excluding the domains from your scanning service?

heipei

Not many, I usually only do it when the domain or URL pattern in question is almost exclusively used for sessions/invites/sharing-links and basically every URL submitted leaks either a customer name and/or invite-token and/or PII. zoom.us is a good example, certain DocuSign URL patterns, the sort of thing where knowing the URL gets you a sensitive document, etc.

ilikebits

When I worked at Google, orange teams weren't allowed to use phishing tactics because they worked so reliably every single time that they provided no new information about the security of internal systems.

The reality is that humans are hard to secure, so defense in depth generally involves preventing compromised accounts from causing lots of damage, detecting them as early as possible, and controls for shutting them down.

chrisseaton

I don’t understand how working from home is relevant to this?

Do people working in offices have IT staff come by to update their laptops? Would people in an office not open this email if they’d do so at home?

When I worked in an office nobody touched my laptop but me.

unnouinceput

While in the office you're connected to the internal network, supposedly within the internal domain, and the IT dept. would have direct access to push updates automatically. When outside, you're supposed to connect via a VPN (best case) or communicate via something encrypted (email, FTP, etc.), but you'll need to enter your credentials somewhere.

Also, please remember: it's not your laptop, it's the company's laptop, merely given to you to do your work on. Anybody within the company with the correct credentials would have the right to touch that laptop.

chrisseaton

> While in office you're connected to internal network

Not all companies do it this way. Many use a clear network and make services encrypted.

> Also, please remember, it's not your laptop

It is if you work for a bring-your-own-device company.

unnouinceput

Bring your own device is bad for companies. Any company using this approach is just begging to have its talent pool drained. If I do work for a company on my own device, there's absolutely no difference between my personal research and the company's research, and in the eyes of the law these companies will always lose if they try to enforce some "secret sauce" clause to stop it going to their competition. Ever wonder why the FAANG companies never did this, the ones that will pinch every penny from whatever corner they can? Exactly because they know too well they'd lose badly. Just look at the guy who got bankrupted by Google after he went to Uber; HN had an article about it a few weeks back.

usr1106

> Also, please remember, it's not your laptop, it's company's laptop

Correct.

> Anybody within the company with correct credential would have the right to touch that laptop.

That is only partially correct. In many European countries people enjoy quite a bit of protection in work life as well. So, in order not to do anything illegal, the employer has to carefully control access rights to your PC, and the ones who have access rights cannot do whatever they like. Reading emails is typically illegal, yes, even emails on the work account! (Just to mention the legal concepts; of course in today's architectures, emails are rarely stored on your PC.)

I understand that in the US employees enjoy little protection while at work. I would guess video surveillance in the toilets could still be unacceptable. Just to make the point: even if the location, paper, and water are paid for by the employer, and more importantly the time is paid, it shouldn't be the case that the employer controls everything. (Although there have been reports that Amazon warehouse workers in the UK use bottles for their needs, because the employer does not provide for more humane arrangements in practice. Some employers are always worse than others, and that's why I have stopped ordering from that company.)

thomasdub

Most companies will have a firewall on their corporate network, so new domains or websites categorized as malicious will usually be blocked, which offers additional protection over working from home. You can obviously use an always-on VPN for WFH companies, or tools like Cisco Umbrella, Zscaler, or Netskope, but many companies haven't done that yet.

abdullahkhalids

Someone at my work (before lockdown) recently avoided a phishing attempt because they turned to their colleague and asked, "Why would the high-rank-officer email me?"

boomboomsubban

Gitlab is a remote-only company, I don't know why this article is choosing to highlight that fact so much though.

chrisseaton

I think there's some sort of anti-work-from-home agenda going on here. It's completely irrelevant to the story. If you were in an office you'd get exactly the same email and presumably respond to it in exactly the same way.

usrusr

It's relevant to the story because so many people are currently in their first months of WFH so a headline that mentions WFH will be more interesting to them than one that doesn't. Another way to put it would be "WFH pioneer gitlab phished its own staff", nothing wrong with that.

PeterisP

In offices you have the ability to monitor and filter the network connection, so it's plausible to detect and/or prevent the malicious connection after the phish succeeds.

mensetmanusman

Our company informed us 2 years ago that they will be attempting to phish us continuously (no frequency specified).

If you fail, the last page is corporate training on the topic.

I was so inspired to not have to do corporate training that I assume everything is a scam now.

blntechie

> If you fail, the last page is corporate training on the topic.

At my work, the policy is three strikes and you're gone. The first two fails mean training with tests, and the third fail is an instant fireable offense. As we work with clients and their data, this is strictly enforced too.

perl4ever

I've never gotten in trouble for missing a phishing test, but everywhere I've worked there are real emails that have all the hallmarks of phishing ones: misspellings, weird domains, etc. So I don't think it's reasonable to punish people, nor is it sufficient to raise awareness. The security people don't address the issue of real emails that look fake and condition people to click on similar things, because obviously it's outside of their area of responsibility and control.

Also, what do you do if you have a draconian policy and someone important clicks on one?

eertami

I guess that depends if failure is visiting the unique URL they've sent you or actually inputting credentials.

I got curious about an obvious internal phishing test and decided to copy the link to another machine to see how convincing it was... I hadn't clicked, it wasn't my work machine, and I didn't enter any details, but I instantly received an email informing me I'd failed.

Yeah right, I obviously haven't done the associated failure training and I will forever refuse to do so out of principle.

Igelau

Sounds like a hellhole. That policy is perfectly tailored for corruption and paranoia.

fargle

Concur. I do hope that the "well meaning" security team that thought this up is diligent in investigating and accounting for false positives: "Oh, I clicked the link in the phishing email IN A VM to see what the F* it was" and "I entered 'fakeceo' and 'mrpassword123'".

People have different methods of exploring and learning to decide whether something is legit or not. Nor should any "security policy" be a three-strikes, zero-tolerance policy. Everything needs context.

P.S. I'm pretty sure that the mental and behavioral damage done by this 3 strikes policy can easily be weaponized.

Shame.

blntechie

That’s the cost of a client-enforced security policy. I have not known or heard of anyone personally fired for this, but people definitely get warnings and/or get reassigned to other roles.

gruez

> That policy is perfectly tailored for corruption

Elaborate?

renewiltord

Christ, what a nightmare.

usr1106

> Hunt said GitLab has implemented multi-factor authentication and that would have protected employees had the attack not been a simulation.

"Protected employees" is a weird way to put it, to say the least. It's not about protecting employees; it's about protecting the GitLab company and its customers. And the protection would have failed. The attacker would have needed to use the credentials (including the one-time credential) in real time. That makes the attack-site logic a bit more difficult, but it would still have allowed them to break in. I doubt GitLab employees have to reauthenticate very often during a working day.

Well, unless they really use a challenge-response system. At least what I use as a GitLab customer is not; it's just standard OTP. I would provide a valid one-time password to a phishing site, should I fall for it.
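The real-time relay described above is easy to demonstrate with a stdlib-only RFC 6238 TOTP sketch (the keys below are example values, not real credentials): a code typed into a phishing page remains valid for the rest of its 30-second window, so the attacker only needs to forward it immediately.

```python
import base64, hashlib, hmac, struct

def totp(secret_b32: str, at: float, step: int = 30, digits: int = 6) -> str:
    """RFC 6238 TOTP: HMAC-SHA1 over the number of 30-second steps elapsed."""
    key = base64.b32decode(secret_b32)
    counter = struct.pack(">Q", int(at // step))
    mac = hmac.new(key, counter, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # dynamic truncation (RFC 4226)
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % 10 ** digits
    return str(code).zfill(digits)

# RFC 6238 test vector: at t = 59 s the reference key yields 287082
RFC_KEY = "GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ"  # base32 of b"12345678901234567890"
assert totp(RFC_KEY, 59) == "287082"

SECRET = "JBSWY3DPEHPK3PXP"          # example key, not a real credential
victim_code = totp(SECRET, 62)       # typed into the phishing page...
relayed_code = totp(SECRET, 70)      # ...and replayed by the attacker 8 s later
print(victim_code == relayed_code)   # True: same 30-second window, still valid
```

A challenge-response or WebAuthn flow binds the response to more than just time, which is what the replies below get into.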

(Edit: reworded. Commenting on the phone is never a good idea...)

tialaramex

Most challenge response systems don't help either, the attacker gets to forward the challenge to you, and then your response back to the real site. It's some extra work but you can get ready-made software to help perform this attack.

WebAuthn (and the older U2F) works, because it's recruiting the browser (which knows perfectly well which site this is) to mint site-specific credentials every time.

An attacker with a phishing site https://fake-gitlab.example/ has a few options, none of which work out for them:

* Just don't do WebAuthn, now they don't have a second factor and can't get in

* Ask the browser for legitimate WebAuthn credentials for fake-gitlab.example. But, of course GitLab won't accept those credentials, any more than it'd accept a made-up username so they're useless.

* Show the browser the "cookie" GitLab offered for GitLab WebAuthn credentials, the browser will cheerfully give a user's FIDO dongle this cookie and the fake-gitlab.example name, and the dongle will explain that it doesn't recognise the combination, maybe use a different dongle? No joy.

* Show the browser that cookie and tell it this is gitlab.com. But this is fake-gitlab.example not gitlab.com, so the browser will just raise a DOMException SecurityError in the fake site's JS code. The code can hide that easily, but it doesn't get any credentials.
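The origin binding behind those bullets can be sketched from the relying party's side. This is a simplified illustration using the WebAuthn clientDataJSON field names; a real verifier also checks the authenticator's signature over the authenticator data plus the hash of this exact JSON:

```python
import json

EXPECTED_ORIGIN = "https://gitlab.com"  # the relying party's registered origin

def check_client_data(client_data_json: bytes, expected_challenge: str) -> bool:
    """Relying-party check of a WebAuthn assertion: the browser, not the
    page's JS, fills in `origin`, and the authenticator's signature covers
    the hash of this exact JSON, so a phishing page can't forge or strip it."""
    data = json.loads(client_data_json)
    return (data.get("type") == "webauthn.get"
            and data.get("challenge") == expected_challenge
            and data.get("origin") == EXPECTED_ORIGIN)

genuine = json.dumps({"type": "webauthn.get", "challenge": "abc123",
                      "origin": "https://gitlab.com"}).encode()
phished = json.dumps({"type": "webauthn.get", "challenge": "abc123",
                      "origin": "https://fake-gitlab.example"}).encode()

print(check_client_data(genuine, "abc123"))  # True
print(check_client_data(phished, "abc123"))  # False: wrong origin, login refused
```

Because the signature covers the browser-supplied origin, fake-gitlab.example cannot present a credential that verifies for gitlab.com.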

usr1106

Thanks for mentioning https://en.m.wikipedia.org/wiki/WebAuthn. According to Wikipedia, Dropbox supports it. Any other widely used adopters? I need to check whether GitLab supports it when I'm at my computer. It might well be that they even mandate it for their employees, but the statement, or at least the part of it that made it to the article, was not that specific.

tialaramex

My understanding is that Google mandates U2F (the de facto predecessor to WebAuthn) for employee systems, certainly the Google employees I know have FIDO keys. One interesting thing is that some of them don't really understand how those keys work - and the U2F/WebAuthn design means that doesn't matter at all. I believe way more firms should do this and I've tried to gently encourage it at places I've worked.

Older sites tend to support U2F rather than WebAuthn. If you're on a greenfield install, you should just do WebAuthn, but it can be complicated in some scenarios to migrate from U2F especially if you're huge so it's understandable that not all have. In at least Chrome and Firefox the UX is identical anyway.

So, not differentiating them:

Facebook, GitHub and Google are three popular examples

You can also authenticate for some US Federal Government business on Login.gov (even if you aren't a US citizen)

And the UK's "Gov.uk verify" authentication can use Digidentity's offering which in turn relies on WebAuthn or U2F.

Edited to add:

AWS can do it, but for some crazy reason they won't let you register more than one FIDO dongle. So I would not advise securing an "admin" AWS account this way, only accounts whose users can go to someone with admin privs for a reset if they lose the dongle; it's good for a team of developers, I guess.

Not allowing multiple dongles goes against the intended security design, ignores a SHOULD in the WebAuthn standard, and also makes a bunch of the fairly complicated design pointless, I can't tell if Amazon are incompetent or had some particular weird reason to do it.

usr1106

> Need to check whether gitlab supports when I am at my computer.

They support U2F, of course completely opt-in for users/customers.

The question that remains is do they mandate it for employees.

Nullabillity

Google, GitHub, GitLab all support it, at least. Azure AD, notably, does not.

Nullabillity

Gitlab.com has used U2F/WebAuthn for years (not sure which, but they're both isolated by origin anyway).

usr1106

Right, according to https://en.m.wikipedia.org/wiki/Universal_2nd_Factor it's U2F. So I would not be surprised if gitlab requires their employees to use the dongle instead of simple OTP which they allow for customers/users. A shortcoming of the article not to mention whether that's the case or not.

emilycook

I work at GitLab and just stumbled across this. We use U2F, but we have an MR to add WebAuthn support: https://gitlab.com/gitlab-org/gitlab/-/merge_requests/26692

Jonnax

At a place I worked at they did something similar with the most obviously fake email possible.

Seemed like a pointless box ticking exercise.

Funnily enough, IT sent out an email about a Windows update rolling out (an upgrade to a new version like 1709) that looked even dodgier than their fake email. That had people reporting it as phishing.

jiofih

> Seemed like a pointless box ticking exercise.

Phishing emails often look pretty obvious - that’s part of the program! It filters out people you can’t trick and leaves you only with the most gullible ones.

Had the same at a previous company. If you use Gmail, IT needs to manually approve the mail to avoid it going into the spam folder. A huge warning saying “this message has been excluded from your spam filter by your IT department” shows up at the top. People still click through...

pinum

That might actually make it seem like the email has been explicitly sanctioned by IT. "Huh, this email is a bit weird, but IT says it's okay." click

zulln

> Phishing emails often look pretty obvious - that’s part of the program! It filters out people you can’t trick and leaves you only with the most gullible ones.

For frauds that require the attacker to spend time with the victim, sure. For a fully automated phishing attack? There is no reason to lose out on people early on.

And for a targeted attack against a company? Makes even less sense to make it obvious.

Rexxar

It could be a strategy to make people less careful : send one or two "obvious" fake phishing email and then the real one a little later when they are confident they can avoid phishing.

sergers

My company has sent phishing emails every few weeks for about the past 5 years.

You click a link or open an attachment, and you are automatically enrolled in training you must complete.

Very few people click anything remotely obscure, and they ask their manager whether an email is from a legitimate company we are dealing with.

Ex: I got a signup confirmation email from a legitimate website and asked our director about it. He looked into it and confirmed with IT that we had been signed up and infosec was fine with it.

We then relayed to the whole team that it was a legitimate email.

I would say it's been highly successful.

whydoyoucare

A better approach is to implement anti-phishing measures way up the chain, at the MTA level itself. Simpler ideas like stripping URLs from mail, stripping attachments if the email originates outside the organization, converting HTML email to plain text, or disallowing HTML email entirely yield substantial benefit in stopping phishing.
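The first of those ideas, stripping URLs, can be a single substitution. A hedged sketch; a real deployment would run inside the mail pipeline (e.g. as a milter) and would also have to handle HTML parts and encoded links:

```python
import re

# Match http(s) URLs and bare www. links; \S+ stops at whitespace.
URL_RE = re.compile(r"https?://\S+|www\.\S+", re.IGNORECASE)

def strip_urls(body: str) -> str:
    """Replace every URL in a plain-text body with a fixed placeholder, so a
    phishing link can no longer be clicked, only retyped deliberately."""
    return URL_RE.sub("[link removed by mail filter]", body)

mail = "Your account is locked! Verify at https://fake-gitlab.example/login now."
print(strip_urls(mail))
# Your account is locked! Verify at [link removed by mail filter] now.
```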

Basically, don't try to solve a problem by humans when it can be solved more efficiently by technology!

Phishing exercises are absolutely pointless in my experience and contribute zero to increasing awareness. Shaming does not address the underlying human weaknesses that make us fall for phishing; it simply makes the IT guys look cooler and increases CISOs' and red team budgets. :-(

somebodythere

The best security is multi-layered. The human layer is the weakest part of any security system, and both technical and human measures must be taken to achieve defense in depth.

Some technical measures used here were requiring 2FA for all internal services, and scoping keys/POLP to limit the damage from one compromised key.

The purpose of exercises like these is not to shame someone who "fell for it", but to educate workers about phishing attacks and strengthen the human security layer.

whydoyoucare

Two decades of experience suggests that "strengthening human security by training" ain't happening, no matter how hard/smart you try. The technical controls have to be beefed up to a point where that human-weak-link is eliminated.

These tests are nothing but CISOs (and red teams, and the whole industry around them) justifying their existence, and potentially doing a song-and-dance about it at the quarterly all-hands. Nothing more, nothing less. We can come back to this thread in a year/two years/five years/a decade, and I can bet dollars to doughnuts the industry will still be training humans and claiming these pointless statistics about phishing. ;-)

On this note, see #6 "Educating Users", in Marcus Ranum's excellent article "The Six Dumbest Ideas in Computer Security": https://www.ranum.com/security/computer_security/editorials/...


Gitlab phished its own work-from-home staff, and 1 in 5 fell for it - Hacker News