- cross-posted to:
- news@lemmy.world
Hope this isn’t a repeated submission. Funny how they’re trying to deflect blame after they tried to change the EULA post breach.
I’m seeing so much FUD and misinformation being spread about this that I wonder what’s the motivation behind the stories reporting this. These are as close to the facts as I can state from what I’ve read about the situation:
- 23andMe was not hacked or breached.
- Another site (as of yet undisclosed) was breached and a database of usernames, passwords/hashes, last known login location, personal info, and recent IP addresses was accessed and downloaded by an attacker.
- The attacker took the database dump to the dark web and attempted to sell the leaked info.
- Another attacker purchased the data and began testing the logins on 23andMe with a botnet, feeding in the retrieved username/password pairs and routing each attempt through nodes close to that account’s last known login location.
- None of the compromised accounts had MFA enabled.
- Any data visible to a compromised account, such as opted-into data sharing, was also visible to the attackers.
- No data was exposed beyond what users had opted in to share.
- 23andMe now requires MFA on all accounts (started once they were notified of a potential issue).
I agree with 23andMe. I don’t see how it’s their fault that users reused their passwords from other sites and didn’t turn on Multi-Factor Authentication. In my opinion, they should have forced MFA for people but not doing so doesn’t suddenly make them culpable for users’ poor security practices.
I think most internet users are straight up smooth-brained. I have to pull my wife’s hair to get her not to use my first name twice and the year we were married as a password, and even then I only succeed 30% of the time. And she had the nerve to bitch and moan when her Walmart account got hacked; she’s just lucky she didn’t have the CC attached to it.
And she makes 3 times as much as I do, there is no helping people.
These people remind me of my old roommate who “just wanted to live in a neighborhood where you don’t have to lock your doors.”
We lived kind of in the fucking woods outside of town, and some of our nearest neighbors had a fucking meth lab on their property.
I literally told him you can’t fucking will that want into reality, man.
You can’t just choose to leave your doors unlocked hoping that this will turn out to be that neighborhood.
I eventually moved the fuck out because I can’t deal with that kind of hippie dippie bullshit. Life isn’t fucking The Secret.
I have friends that occasionally bitch about the way things are but refuse to engage with whatever systems are set up to help solve whatever given problem they have. “it shouldn’t be like that! It should work like X”
Well, it doesn’t. We can try to change things for the better but refusal to engage with the current system isn’t an excuse for why your life is shit.
The bootlickers really come out of the woodwork here to suck on corporate boot. Edit: wrong thread.
What in the fuck are you talking about? You’re the one standing up for the corporation
Yeah that is my bad, responded to the wrong thread.
In this case, the corporation isn’t wrong that users aren’t doing due diligence.
Happens to the best of us
internet ~~users~~ people
I agree that, by all accounts, 23andMe didn’t do anything wrong. However, could they have done more?
For example the 14,000 compromised accounts.
- Did they all login from the same location?
- Did they all login around the same time?
- Did they exhibit strange login behavior like always logged in from California, suddenly logged in from Europe?
- Did these accounts, after logging in, perform actions that seemed automated?
- Did these accounts access more data than the average user?
In hindsight some of these questions might be easier to answer. It’s possible a company with even better security could have detected and shut down these compromised accounts before they collected the data of millions of accounts. It’s also possible they did everything right.
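For illustration, the kind of heuristic these questions point at (flagging a login from a country different from the account’s last known one) can be sketched in a few lines. This is a toy sketch with hypothetical field names, not anything 23andMe is known to run:

```python
from dataclasses import dataclass

@dataclass
class Login:
    account: str
    country: str      # assumed to be resolved upstream from the IP via GeoIP
    timestamp: float  # unix seconds

def flag_suspicious(history: dict[str, Login], attempt: Login) -> bool:
    """Flag a login whose country differs from the account's last seen one.
    A real system would also weigh velocity, device, and volume signals."""
    last = history.get(attempt.account)
    suspicious = last is not None and last.country != attempt.country
    history[attempt.account] = attempt  # update last-seen state
    return suspicious

history: dict[str, Login] = {}
flag_suspicious(history, Login("alice", "US", 1000.0))         # first login: no baseline
print(flag_suspicious(history, Login("alice", "US", 2000.0)))  # False: same country
print(flag_suspicious(history, Login("alice", "RO", 3000.0)))  # True: country change
```

Worth noting: per the rest of this thread, the attackers defeated exactly this signal by routing logins through botnet nodes near each account’s last known location.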
A full investigation makes sense.
I already said they could have done more. They could have forced MFA.
All the other bullet points were already addressed: they used a botnet that, combined with the “last login location” allowed them to use endpoints from the same country (and possibly even city) that matched that location over the course of several months. So, to put it simply - no, no, no, maybe but no way to tell, maybe but no way to tell.
A full investigation makes sense but the OP is about 23andMe’s statement that the crux is users reusing passwords and not enabling MFA and they’re right about that. They could have done more but, even then, there’s no guarantee that someone with the right username/password combo could be detected.
Those are my questions, too. It boggles my mind that so many accounts didn’t seem to raise a red flag. Did 23&Me have any sort of suspicious behavior detection?
And how did those breached accounts access that much data without it being observed as an obvious pattern?
If the accounts were logged into from geographically similar locations at normal volumes then it wouldn’t look too out of the ordinary.
The part that would probably look suspicious would be the increase in traffic from data exfiltration. However, that would probably be a low priority alert for most engineering orgs.
Even less likely when you have a bot network performing normal-looking logins with limited data exfiltration over the course of multiple months, normalizing any monitoring and analytics baselines and rendering such alerting inert, since the traffic would appear normal.
Setting up monitoring and analysis of user accounts, where they’re logging in from, and suspicious activity isn’t exactly easy. It’s so difficult that most companies tend to just defer to large players like Google and Microsoft to do it for them. And even if they had this set up, which I imagine they did, it was defeated.
Credential stuffing is a well-known attack that organizations like 23andme definitely should have in their threat model. There are mitigations, such as preventing compromised credentials from being used at registration, protecting against bots (as imperfect as that is), enforcing MFA, etc.
This is their breach indeed.
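One of the mitigations mentioned (rejecting known-compromised passwords at registration) is commonly done against the Pwned Passwords range API, which uses k-anonymity so the full hash never leaves the client. A minimal Python sketch follows; note this only catches credentials from public dumps, so it likely would not have caught this privately sold dataset:

```python
import hashlib
import urllib.request

def sha1_split(password: str) -> tuple[str, str]:
    """Split the uppercase SHA-1 hex digest into the 5-char prefix sent
    to the API and the suffix matched locally, so the full hash never
    leaves the client."""
    digest = hashlib.sha1(password.encode()).hexdigest().upper()
    return digest[:5], digest[5:]

def breach_count(password: str) -> int:
    """Query the Pwned Passwords range API and count appearances.
    Requires network access; returns 0 if the suffix is absent."""
    prefix, suffix = sha1_split(password)
    url = f"https://api.pwnedpasswords.com/range/{prefix}"
    with urllib.request.urlopen(url) as resp:
        for line in resp.read().decode().splitlines():
            candidate, _, count = line.partition(":")
            if candidate == suffix:
                return int(count)
    return 0

print(sha1_split("password"))  # ('5BAA6', '1E4C9B93F3F0682250B6CF8331B7EE68FD8')
```

A registration flow would call `breach_count` on the candidate password and reject anything with a nonzero count.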
They did. They had MFA available and these users chose not to enable it. Every 23andMe account is prompted to set up MFA when they start. If people chose not to enable it and then someone gets access to their username and password, that is not 23andMe’s fault.
Also, how do you go about “preventing compromised credentials” if you don’t know that the credentials are compromised ahead of time? The dataset in question was never publicly shared. It was being sold privately.
The fact that they did not enforce 2FA on everyone (mandatory, not just having the feature available) is their responsibility. You are handling super sensitive data, and credential stuffing is an attack with a super low level of complexity and a high likelihood.
Similarly, they probably did not enforce complexity requirements on passwords (making an educated guess here), or at least not sufficiently, which is also their fault.
Regarding the last bit, it might not have helped against this specific breach, but we don’t know that. There are companies who offer threat intelligence services and buy breached data specifically to offer this service.
Anyway, in general the point I want to make is simple: if the only defense you have against a known attack like this is the user choosing a strong and unique password, you don’t have sufficient controls.
I guess we just have different ideas of responsibility. It was 23andMe’s responsibility to offer MFA, and they did. It was the user’s responsibility to choose secure passwords and enable MFA and they didn’t. I would even play devil’s advocate and say that sharing your info with strangers was also the user’s responsibility but that 23andMe could have forced MFA on accounts who shared data with other accounts.
Many people hate MFA systems. It’s up to each user to determine how securely they want to protect their data. The users in question clearly didn’t if they reused passwords and didn’t enable MFA when prompted.
My idea is definitely biased by the fact that I am a security engineer by trade. I believe a company is ultimately responsible for the security of their users, even if the threat is the users’ own behavior. The company is the one able to afford a security department who is competent about the attacks their users are exposed to and able to mitigate them (to a certain extent), and that’s why you enforce things.
Very often companies use “ease” or “users don’t like it” to justify the absence of security measures such as enforced 2FA. But this is their choice: they prioritize not pissing off a (potentially) small % of users over more security for all users (especially the less proficient ones). It is a business choice that they need to be accountable for.

I also want to stress that, despite being mostly useless, various compliance standards also require measures that protect users who use simple or repeated passwords. That’s why complexity requirements are sometimes demanded, as is trivial brute-force protection with a lockout period (for example, most gambling licenses require both, and companies that don’t enforce them cannot operate in a given market). Preventing credential stuffing is no different, and if we look at OWASP recommendations, it’s clear that enforcing MFA is the way to go, even if in a way that doesn’t trigger all the time, which would have worked in this case.
It’s up to each user to determine how securely they want to protect their data.
Hard disagree. The company, i.e. the data processor, is the only one who has the full understanding of the data (sensitivity, amount, etc.) and a security department. That’s the entity who needs to understand what threat actors exist for the users and implement controls appropriately. Would you trust a bank that allowed you to login and make bank transfers using just a login/password with no requirements whatsoever on the password and no brute force prevention?
There are services that check provided credentials against a dictionary of compromised ones and reject them. Off the top of my head Microsoft Azure does this and so does Nextcloud.
This assumes that the compromised credentials were made public prior to the exfiltration. In this case, it wasn’t as the data was being sold privately on the dark web. HIBP, Azure, and Nextcloud would have done nothing to prevent this.
Yea, you’re right. Good point.
I actually saw someone on FB complaining that they were being forced to enable 2FA on FB.
deleted by creator
Laziness alone is a pretty big reason. MFA was available and users were prompted to set it up. The fact that they didn’t should tell you something.
deleted by creator
I agree. The people blaming the website are ridiculous here.
Would bet that you’re a crypto fan.
Would bet your password includes “password” or something anyone could guess in 10 minutes after viewing your Facebook profile.
Step 4 is where 23andme got hacked
By your logic I hack into every site I use by … checks notes presenting the correct username and password.
It’s called social hacking.
23andMe admitted that hackers had stolen the genetic and ancestry data of 6.9 million users
I’m honestly asking what the impact to the users is from this breach. Wasn’t 23andMe already free to sell or distribute this data to anybody they wanted, without notifying the users?
That’s not how this works. They are running internationally, and GDPR would hit them like a brick if they did that.
I would assume they had some deals with law enforcement to transmit data under narrow circumstances.
I’m honestly asking what the impact to the users is from this breach.
Well if you signed up there and did an ancestry inquiry, those hackers can now without a doubt link you to your ancestry. They might be able to doxx famous people and in the wrong hands this could lead to stalking, and even more dangerous situations. Basically everyone who is signed up there has lost their privacy and has their sensitive data at the mercy of a criminal.
This is different. This is a breach, and if you have a company taking care of such sensitive data, it’s your job to do the best you can to protect it. If they really do blame this on the users, they are in for a class action and a hefty fine from the EU, especially now that it has established even more guidelines for companies regarding the handling of sensitive data. This will hurt in some regard.
If they really do blame this on the users
It’s not that they said:
It’s your fault your data leaked
What they said was (paraphrasing):
A list of compromised emails/passwords from another site leaked, and people found some of those worked on 23andme. If a DNA relative that you volunteered to share information with was one of those people, then the info you volunteered to share was compromised to a 3rd party.
Which, honestly?
Completely valid. The only way to stop this would be for 23andme to monitor these “hack lists” and notify any email that also has an account on their website.
Side note:
Any tech company can provide info if asked by the police. The good ones require a warrant first, but as data owners they can provide it without a warrant.
That’s not 23andMe’s fault at all then. Basically it boils down to password reuse. All i would say is they should have provided 2fa if they didn’t.
All i would say is they should have provided 2fa if they didn’t.
At this point, every company not using 2FA is at fault for data hacks. Most people using the internet have logins to hundreds of sites. Knowing where to go to change all your passwords is nearly impossible even for a seasoned internet user.
A seasoned internet user has a password manager.
Not using one is your negligence, no one else’s.
One password to break them all, and in the dark web bind them.
The sad thing is you have to balance the costs of requiring your customer to use 2FA with the risk of losing business because of it and the risk of losing reputation because your customers got hacked and suffered loss.
The sad thing is some (actually most) people are brain dead; you will lose business if you make them use a complicated password or MFA, and that puts companies in the position of having to make a hard call.
They took the easy route and gave the customer the option to use MFA if they wished, and unfortunately a lot of people declined. Those people should not have the ability to claim damages (or vote, for that matter).
The only way to stop this would be for 23andme to monitor these “hack lists”
Unfortunately, from the information that I’ve seen, the hack lists didn’t have these credentials. HIBP is the most popular one and it’s claimed that the database used for these wasn’t posted publicly but was instead sold on the dark web. I’m sure there’s some overlap with previous lists if people used the same passwords but the specific dataset in this case wasn’t made public like others.
I would guess (hope?) that the data sets they sell are somewhat anonymized, like listing people by an i.d. number instead of the person’s name, and not including contact information like home address and telephone number. If so then the datasets sold to companies don’t contain the personal information that hackers got in this security breach.
I’m honestly asking what the impact to the users is from this breach.
The stolen info was used to build databases of people with Jewish ancestry that were sold on the dark web. I think there was a similar DB of people with Chinese ancestry. 23andMe’s poor security practices have directly helped violent white supremacists find targets.
If you’re so incompetent that you can’t stop white supremacists from getting identifiable information about people from minorities, there is a compelling public interest for your company to be shut down.
That is a whoooolllee lot of assumptions
Why do you think someone would buy illegally obtained lists of people with Jewish or Chinese ancestry? And who do you think would be buying it?
Scammers, that opens up a lot of scam potential.
Hi, I’m your new cousin.
Scammers would buy all info, not specifically targeted to people of Jewish or Chinese descent. That’s not what’s being sold.
Who do you think would want only information about people with Jewish or Chinese ancestry, and why?
OK you’re gonna have to give me a link to what you’re talking about. It feels like you are being specific, and I am being generic.
It’s the same incident, the OP article just didn’t mention it.
OP spreading disinformation.
Users used bad passwords. Their accounts were accessed using their legitimate, bad passwords.
Users cry about the consequences of their bad passwords.
Yeah, 23AndMe has some culpability here, but the lion’s share still lies with the users themselves.
Are you telling me a password of “23AndMe!” is bad? It meets all the requirements.
How am I spreading disinformation? I just contributed an article I found interesting for discussion.
From these 14,000 initial victims, however, the hackers were able to then access the personal data of the other 6.9 million victims because they had opted in to 23andMe’s DNA Relatives feature.
How exactly are these 6.9M users at fault? They opted in to a feature of the platform that had nothing to do with their passwords.
On top of that, the company should have enforced strong passwords and forced 2FA for all accounts. What they’re doing is victim blaming.
Yeah, 23AndMe has some culpability here, but the lion’s share still lies with the users themselves.
Tell me you didn’t read the article without telling me.
If 14,000 users who didn’t change a password on a single-use website they probably only ever logged into twice gives you 6.9 million users’ personal info, that’s the company’s fault.
You didn’t read it either. They gained access to shared information between the accounts because both accounts had enabled “share my info with my relatives” option.
Logging into someones Facebook and seeing their friends and all the stuff they posted as “friends only” and their private DM discussions isn’t a hack or a vulnerability, it’s how the website works.
Launching a feature that lets an inevitable attack access 500 other people’s info for every compromised account is a glaring security failure.
Accounting for foreseeable risks to users’ data is the company’s responsibility and they launched a feature that made a massive breach inevitable. It’s not the users’ fault for opting in to a feature that obviously should never have been launched.
Are you telling me a password of “23AndMe!” is bad? It meets all the requirements.
Bro just don’t have DNA.
If you were really on your sigma grindset, your DNA would have never existed.
More of a gamma grindset since if you get hit by enough of those rays, you might not have recognizable DNA anymore.
…this checks out. Gamma grindset origin story.
Too late man…
And I agree with them. I mean, 23andMe should have a brute-force-resistant login implementation and 2FA, but you know that when you create an account.
If you are reusing creds you should expect to be compromised pretty easily.
A successful breach of a family member’s account due to their bad security shouldn’t result in the breach of my account. That’s the problem.
Yep, it was 14,000 that were hacked; the other 6.9 million were from that DNA Relatives functionality they have. Unfortunately 23andMe’s response is what to expect, since companies will never put their customers’ safety ahead of their profits.
A successful breach of a family member’s account due to their bad security shouldn’t result in the breach of my account. That’s the problem
I mean…
You volunteered to share your info with that person.
And that person reused an email/password that was compromised.
How can 23andme prevent that?
It sucks, but it’s the fault of your relative that you entrusted with access to your information.
No different than if you handed them a hardcopy and they left it on a table at McDonald’s.
Quick edit:
It sounds like you think your account would be compromised, that’s not what happened. Only info you shared with the compromised relative becomes compromised. They don’t magically get your password.
But you still chose to make it accessible to that relative’s account by accepting their request to share.
Could I please have your personal information?
No.
See… it’s that easy.
Ok, who else would be able to give me your personal information. I’ll go get it from them instead.
Your mom has my contact information. You can ask her.
/pwn3d.
Oh, so you’re actually not consenting to have some personal information you’ve given to family given to me as well? Odd, you sure seemed ok when it was people having their information snagged from 23andMe.
afaik there was no breach of private data, only the kind of data shared to find relatives, which is opt-in and obviously not private to anyone who has seen how this service works. In other words, the only data “leaked” was the kind of data that was already shared with other 23andMe users.
Name, sex and ancestry were sold on the dark web, that’s a breach of private data.
The feature that lets a hacker see 500 other people’s personal information when they hack an account is obviously a massive security risk. Especially if you run a single use service - no one updates their password on a site they don’t use anymore.
Launching the feature in the first place made this inevitable.
It doesn’t. Sharing that info was opt-in only. In this scenario, no 23andMe accounts were breached. The users reused their credentials from other sites. It would be like you sharing your bank account access with a family member’s account and their account getting accessed because their banking password was “Password1” or their PIN was “1234”.
So if you enabled a setting that is opt-in only that allows sharing data between accounts and you are surprised that data was shared between accounts how is that not your fault?
You shouldn’t have shared your information with someone who is untrustworthy then. Data sharing is opt-in.
Credential stuffing attacks will always yield results on a single use website because no one changes passwords on a site they don’t use anymore.
Launching a feature that enables an inevitable attack to access 500 other people’s info is very clearly the fault of the company who launched the feature.
Even if you didn’t reuse a compromised password yourself, the fact that your relatives did indicates that you’re genetically predisposed to bad security practices. /s
Is it also the users’ fault for the other 6.9 million people who didn’t reuse a password and were still breached?
Yes, because you have to choose to share that data with other people. 23andMe isn’t responsible if grandma uses the same password for every site.
23andMe is responsible for sandboxing that data, however. Which they obviously didn’t do.
Users opted in to share that data.
Did you not read my comment? Users opt in to sharing data with other accounts, which means if one account is compromised, then every account that allowed it access would have their data compromised too. That’s not on the company, because the feature can’t work without allowing access.
They weren’t breached. The data they willingly shared with the compromised accounts was available to the people that compromised them.
Pretty sure nobody clicked a button that said “share my data with compromised accounts.”
There was a button that said “share my data with this account”. If that person went and shared that info publicly, how is that any different? The accounts were accessed with valid credentials through the normal login process. They weren’t “breached” or “hacked”.
I wonder if they can identify a genetic predisposition that these patients had that made them more prone to compromising their passwords? And then if so, was it REALLY their fault?
Should probably ask OP!
They seem to be in the same boat based on this submission…
This is the best summary I could come up with:
“Rather than acknowledge its role in this data security disaster, 23andMe has apparently decided to leave its customers out to dry while downplaying the seriousness of these events,” Hassan Zavareei, one of the lawyers representing the victims who received the letter from 23andMe, told TechCrunch in an email.
In December, 23andMe admitted that hackers had stolen the genetic and ancestry data of 6.9 million users, nearly half of all its customers.
The hackers broke into this first set of victims by brute-forcing accounts with passwords that were known to be associated with the targeted customers, a technique known as credential stuffing.
“The breach impacted millions of consumers whose data was exposed through the DNA Relatives feature on 23andMe’s platform, not because they used recycled passwords.
23andMe’s attempt to shirk responsibility by blaming its customers does nothing for these millions of consumers whose data was compromised through no fault of their own whatsoever,” said Zavareei.
Lawyers with experience representing data breach victims told TechCrunch that the changes were “cynical,” “self-serving,” and “a desperate attempt” to protect itself and deter customers from going after the company.
The original article contains 721 words, the summary contains 184 words. Saved 74%. I’m a bot and I’m open source!
From the article:
The data breach started with hackers accessing only around 14,000 user accounts. The hackers broke into this first set of victims by brute-forcing accounts with passwords that were known to be associated with the targeted customers, a technique known as credential stuffing.
From these 14,000 initial victims, however, the hackers were able to then access the personal data of the other 6.9 million victims because they had opted in to 23andMe’s DNA Relatives feature. This optional feature allows customers to automatically share some of their data with people who are considered their relatives on the platform.
I knew better than to give these companies my DNA, but of course I’ve had family give it to them. I suppose if I were wanted for an unsolved murder I’d be a bit concerned, but I’m still not happy that the DNA of anyone I’m associated with is compromised.
The question to me is what’s the play with that data. I’d assume they would have a use for it if they went to the trouble of stealing it. I suspect in the future this will be lucrative data, but what’s the play right now??
“users negligently recycled and failed to update their passwords following these past security incidents, which are unrelated to 23andMe…Therefore, the incident was not a result of 23andMe’s alleged failure to maintain reasonable security measures,”
This is a failure to design securely. Breaking into one account via cred stuffing should give you access to one account’s data, but because of their poor design hackers were able to leverage 14,000 compromised accounts into 500x that much data. What that tells me is that, by design, every account on 23andMe has access to the confidential data of many, many other accounts.
I don’t think so. Those users had opted in to share information within a certain group. They’ve already accepted the risk of sharing info with someone who might be untrustworthy.
Plenty of other systems do the same thing. I can share the list of games on my Steam account with my friends - the fact that a hacker might break into one of their accounts and access my data doesn’t mean that this sharing of information is broken by design.
If you choose to share your secrets with someone, you accept the risk that they may not protect them as well as you do.
There may be other reasons to criticise 23andMe’s security, but this isn’t a broken design.
And it’s your fault you have access to them. Stop doing bad things and keep your information secure.
You clearly have no familiarity with the principles of information security. 23andMe failed to follow a basic principle: defense in depth. The system should be designed such that compromises are limited in scope and cannot be leveraged into a greater scope.

Password breaches are going to happen. They happen every day, on every system on the internet. They happen to weak passwords, reused passwords, and strong passwords. They’re so common that if you don’t design your system assuming the occasional user account will be compromised, then you’re completely ignoring a threat vector, which is on you as a designer.

23andMe didn’t force two-factor auth (https://techcrunch.com/2023/11/07/23andme-ancestry-myheritage-two-factor-by-default/), and they made it so every account had access to information beyond what that account could control. These are two design decisions that enabled this attack to succeed, and then escalate.
Didn’t say /s…
Gentle reminder to plop your email address in here and see if you, much like 14,000 23andMe users, have had an account compromised somewhere. Enable two-factor where you can and don’t reuse passwords.
It’s saying I’ve been hacked on websites I’ve legitimately never even heard of, websites I have 100% never interacted with. Is this just a normal consequence of companies sharing all my data with other companies?
I can’t speak to how you ended up on the list. The way haveibeenpwned works is that they crawl publicly available credential dumps and grab the associated usernames/emails for each cred pair. However it got there, your email ended up in one of those dumps. Recommend you change your passwords, make sure you don’t repeat the same password across multiple sites and use a password manager so you don’t have to remember dozens of passwords yourself.
I mean, it is kinda their fault in the first place for using an optional corporate service that stores very private data of yours which could be used in malicious ways.
This is at least partly true. If you reuse the same credentials, you should expect to get pwned.
Blaming your customers is definitely a strategy. It’s not a good one, but it is a strategy.
BRB deleting my 23AndMe account
I’m just of the general opinion that any personal data you entrust to any corporation is going to be at risk, regardless of its assurances. There’s also the risk of that corporation being legitimately acquired by another, thus nullifying the previous TOS, etc. Or worst case, they sell all your info anyway. Connected technology is moving quickly. What might seem safe to share today could become the basis of an insurance claim denial when they discover a genetic predisposition they believe you were obligated to disclose.