The New York attorney general’s office is investigating Facebook for harvesting the email contacts of about 1.5 million users without their consent.
“Facebook has repeatedly demonstrated a lack of respect for consumers’ information while at the same time profiting from mining that data.” – New York Attorney General Letitia James

The social network confirmed in April that it collected the email contacts of its users, but said it did so unintentionally.
The attorney general’s office said in a press release that hundreds of millions of Facebook users could have been affected, because each user might have hundreds of email contacts stored. The investigation comes as other regulators and lawmakers crack down on Facebook for its privacy practices. Ireland’s Data Protection Commission, for example, is investigating whether Facebook properly safeguarded its users’ passwords, which could amount to a GDPR violation. In December, the DC attorney general sued Facebook for allegedly failing to safeguard its users’ data, and Canadian regulators have accused Facebook of violating local laws by mishandling user data, saying they could take the company to court over its privacy mishaps.
The privacy commissioner of Canada and the information and privacy commissioner for British Columbia started investigating Facebook last year after revelations surfaced that a UK political consultancy, Cambridge Analytica, harvested data from about 87 million users without their permission.
Cloudbleed (aka Cloudleak) is a bug in Cloudflare, which is a CDN service, a proxy service, and a DNS provider. To be honest, Cloudflare is a LOT of things these days. It provides a freemium set of services: you can run your site through their DNS and proxy/CDN service for free, or pay $20–$200 to get an interesting set of goodies. According to their own homepage:
“Cloudflare speeds up and protects millions of websites, APIs, SaaS services, and other properties connected to the Internet. Our Anycast technology enables our benefits to scale with every server we add to our growing footprint of data centers.”
They provide these services for roughly 6 million websites, and recently a researcher at Google found a critical flaw in Cloudflare’s in-house parser that may have leaked passwords and authentication tokens.
Tavis Ormandy, a self-described “Vulnerability researcher at Google” currently working for Google’s Project Zero security initiative, found the bug on February 18th. He posted an issue on February 19th and tweeted asking for anyone from Cloudflare security to get in touch with him:
Could someone from cloudflare security urgently contact me.
Cloudflare people got back to him right away, and they worked on solving the issue as fast as possible. Unfortunately, the bug may be as old as September 2016. Cloudflare released a statement explaining that the larger issue started on February 13th, when a code update meant roughly one in every 3,300,000 HTTP requests potentially resulted in memory leakage. That doesn’t sound like much until you realize the massive amount of information being passed through the Cloudflare network.
Tavis found that when they “fetched a few live samples, we observed encryption keys, cookies, passwords, chunks of POST data and even HTTPS requests for other major Cloudflare-hosted sites from other users”. There is just so much information going through the Cloudflare network that we won’t know what has and hasn’t been affected unless something surfaces showing an actual malicious leak.
Unfortunately, a lot of leaked data was cached by Google and other search engines and remained viewable as late as February 24th, 2017. Cloudflare has been working with Google, Bing, and others to remove that information before it can be maliciously used.
Ormandy’s original post:
On February 17th 2017, I was working on a corpus distillation project, when I encountered some data that didn’t match what I had been expecting. It’s not unusual to find garbage, corrupt data, mislabeled data or just crazy non-conforming data…but the format of the data this time was confusing enough that I spent some time trying to debug what had gone wrong, wondering if it was a bug in my code. In fact, the data was bizarre enough that some colleagues around the Project Zero office even got intrigued.
It became clear after a while we were looking at chunks of uninitialized memory interspersed with valid data. The program that this uninitialized data was coming from just happened to have the data I wanted in memory at the time. That solved the mystery, but some of the nearby memory had strings and objects that really seemed like they could be from a reverse proxy operated by cloudflare – a major cdn service.
A while later, we figured out how to reproduce the problem. It looked like that if an html page hosted behind cloudflare had a specific combination of unbalanced tags, the proxy would intersperse pages of uninitialized memory into the output (kinda like heartbleed, but cloudflare specific and worse for reasons I’ll explain later). My working theory was that this was related to their “ScrapeShield” feature which parses and obfuscates html – but because reverse proxies are shared between customers, it would affect *all* Cloudflare customers.
We fetched a few live samples, and we observed encryption keys, cookies, passwords, chunks of POST data and even HTTPS requests for other major cloudflare-hosted sites from other users. Once we understood what we were seeing and the implications, we immediately stopped and contacted cloudflare security.
This situation was unusual, PII was actively being downloaded by crawlers and users during normal usage, they just didn’t understand what they were seeing. Seconds mattered here, emails to support on a friday evening were not going to cut it. I don’t have any cloudflare contacts, so reached out for an urgent contact on twitter, and quickly reached the right people.
Cloudflare’s response to Cloudbleed
Cloudflare has shown there is a good reason millions of sites trust them: they stepped out in front and fixed the immediate issue within six hours of the report, and in the days since have been working on the larger fix and hunting down related bugs.
Some of the affected companies have done their own due diligence and told users to change their passwords right away, while others, like 1Password and OkCupid, came to a different conclusion and informed their users without forcing a password change.
Our investigation into the Cloudflare bug has revealed minimal exposure, if any. More details >> https://t.co/lYN7nq2oGq
Well, there isn’t a single easy answer to this. It’s like a parts advisory from a car manufacturer: it may mean that some day down the road your center console’s clips pop out from use, or they may not. This could be bad, or not; who knows at this point. With a password manager you shouldn’t be reusing the same password on any two sites as it is, but let’s be honest: with the number of signups a typical tech-oriented person has, it’s almost impossible that you didn’t reuse a password across two sites by accident or out of laziness. So if you want to be cautious, change your passwords. If you want to wait and see, then do so and follow what the individual sites recommend. Personally, I am rotating my passwords where possible and adding two-factor authentication such as Google Authenticator (TOTP), Authy, or Duo.
CVE-2016-5195 is a bug in the copy-on-write (COW) mechanism of the Linux kernel. Any user or user-owned process can gain write access to memory mappings that should be read-only for that user, which lets them tamper with otherwise root-only files. Should you worry about it? YES. You should patch your system(s) right away!
Who found CVE-2016-5195?
Who cares? IT’S BAD, PATCH NOW!! OK, just kidding: security researcher Phil Oester was the first to publicly release information about this exploit. He found it via an HTTP packet capture setup.
Is this related to SSL / OpenSSL?
No. Unlike Heartbleed, POODLE, etc., this is not related to SSL.
Where can I get some official info about this exploit?
Not sure what you mean by official, but check the advisories at MITRE and Red Hat.
How to find out if I am affected?
Ubuntu / Debian
As root (or with sudo), type: uname -rv
Sample outputs:

4.4.13-1-pve #1 SMP Tue Jun 28 10:16:33 CEST 2016
2.6.32-openvz-042stab104.1-amd64 #1 SMP Thu Jan 29 13:06:16 MSK 2015
4.4.0-42-generic #62-Ubuntu SMP Fri Oct 7 23:11:45 UTC 2016
3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u2 (2016-10-19)
If you are vulnerable, Red Hat’s detection script will report something like:
Your kernel is X.X.X.X.X.x86_64 which IS vulnerable. Red Hat recommends that you update your kernel. Alternatively, you can apply partial mitigation described at https://access.redhat.com/security/vulnerabilities/2706661 .
The fix, as Linus Torvalds explained it in the kernel commit message:
There’s no real guarantee that handle_mm_fault() will always be able to break a COW situation – if an update from another thread ends up modifying the page table some way, handle_mm_fault() may end up requiring us to re-try the operation.
That’s normally fine, but get_user_pages() ended up re-trying it as a read, and thus a write access could in theory end up losing the dirty bit or be done on a page that had not been properly COW’ed.
This makes get_user_pages() always retry write accesses as write accesses by making “follow_page()” require that a writable follow has the dirty bit set. That simplifies the code and solves the race: if the COW break fails for some reason, we’ll just loop around and try again.
This is an ancient bug that was actually attempted to be fixed once (badly) by me eleven years ago in commit 4ceb5db9757a (“Fix get_user_pages() race for write access”) but that was then undone due to problems on s390 by commit f33ea7f404e5 (“fix get_user_pages bug”).
In the meantime, the s390 situation has long been fixed, and we can now fix it by checking the pte_dirty() bit properly (and do it better). The s390 dirty bit was implemented in abf09bed3cce (“s390/mm: implement software dirty bits”) which made it into v3.9. Earlier kernels will have to look at the page state itself.
Also, the VM has become more scalable, and what used a purely theoretical race back then has become easier to trigger.
To fix it, we introduce a new internal FOLL_COW flag to mark the “yes, we already did a COW” rather than play racy games with FOLL_WRITE that is very fundamental, and then use the pte dirty flag to validate that the FOLL_COW flag is still valid.
VirusTotal is a web app that lets you upload files to check them for viruses before you install them. You can also scan a URL directly or search the VirusTotal database. The great thing about VirusTotal is that it checks the uploaded file against many commercial antivirus and malware detection engines, not just one, and then tells you which ones detected the file as malware. Consequently, lots of people, companies, websites, and tools have started to make use of this amazing service to bolster their virus and malware detection capabilities. If, for example, multiple highly rated engines flag a file as suspect, we can be fairly certain it requires further inspection.
The issue at hand is that many companies have taken this service for granted. They use the results provided by VirusTotal as-is, with little to no fact checking or due diligence on their part. In some cases their own detection engines are so lackluster that it is actually better for everyone involved that they don’t bother. However, this does cause a bit of an issue, as it is rather unfair: some companies and products are basically taking what other providers put on VirusTotal and checking results against it, but not putting their own engines on VirusTotal, so no one can benefit from that extra bit of checking. Don’t get me wrong, every one of these products pays for a VirusTotal API subscription, but that subscription relies on a lot of great people and companies making their engines available to VT, which in turn improves results and detection overall for the average Joe like me.
VirusTotal has now changed their policy to make some issues clearer and to make some things mandatory.
VirusTotal is not a replacement for a proper antivirus.
VirusTotal isn’t intended to be a replacement for a full AV product, as it doesn’t run a full antivirus environment, just the basic detection engine; hence it shouldn’t be used to rank or rate AV products or their engines.
Don’t use third-party names in your product without talking to, and getting permission from, those third parties, such as the engine developers who provide the results on VirusTotal.
Don’t use the VirusTotal logo, name, or trademark anywhere without VirusTotal’s prior permission.
The biggest change, which will certainly sink a few products out there:
all scanning companies will now be required to integrate their detection scanner in the public VT interface, in order to be eligible to receive antivirus results as part of their VirusTotal API services.
Additionally, new scanners joining the community will need to prove a certification and/or independent reviews from security testers according to best practices of Anti-Malware Testing Standards Organization (AMTSO)
Simply put, ALL scanning companies and endpoint product makers will now be forced to put their detection scanners and engines on VT’s public interface, i.e. the interface you and I use, before they can use the VirusTotal API. These companies can’t just take everyone’s hard work and build multi-million-dollar companies on top of it; they have to contribute to this community effort before they can benefit from it.
Any new players on the scene will have to be vetted and certified by a governing body, in this case the Anti-Malware Testing Standards Organization (AMTSO). In theory this ensures some standards are maintained.
Time will tell how well this change works out for everyone. We shall see!