Welcome to SumGuy's Ramblings!

I am Sum Guy. You may call me SumGuy. I like to post news, tutorials, tidbits, and tips, and sometimes I just ramble because I can! So make yourself at home.

CloudBleed: a Cloudflare flaw leaks customer data

Cloudbleed, aka Cloudleak, is a bug in Cloudflare, which is a CDN service, a proxy service, and a DNS provider… well, to be honest, Cloudflare is a LOT of things these days. It provides a freemium set of services: you can run your site using their DNS and proxy/CDN service for free, or pay $20-$200 to get an interesting set of goodies. According to their own homepage:

“Cloudflare speeds up and protects millions of websites, APIs, SaaS services, and other properties connected to the Internet. Our Anycast technology enables our benefits to scale with every server we add to our growing footprint of data centers.”

They provide these services for roughly 6 million websites, and recently a researcher at Google found a critical flaw in Cloudflare's in-house parser that may have leaked passwords and authentication tokens.

Tavis Ormandy, a self-described "Vulnerability researcher at Google" currently working for Google's Project Zero security initiative, found the bug on February 18th and posted an issue on February 19th. He tweeted looking for anyone from Cloudflare security to get in touch with him.

Cloudflare got back to him right away and they worked on solving the issue ASAP. Unfortunately, the issue may be as old as September 2016. Cloudflare released a statement letting us know that the larger problem started on February 13th, when a code update meant one in every 3,300,000 HTTP requests potentially resulted in memory leakage, which doesn't sound like much until you realize the massive amount of information passing through the Cloudflare network.

Tavis found that when they "fetched a few live samples", they "observed encryption keys, cookies, passwords, chunks of POST data and even HTTPS requests for other major cloudflare-hosted sites from other users". There is just so much information going through the Cloudflare network that we won't know what has and hasn't been affected until something surfaces showing an actual malicious leak.

Unfortunately, a lot of data was cached by Google and other search engines and was still available to view as late as February 24th 2017. Cloudflare has been working with Google, Bing and the other search engines to remove such information before it can be maliciously used.

Cloudbleed (aka Cloudleak) example from Tavis:

Ormandy’s original post :

On February 17th 2017, I was working on a corpus distillation project, when I encountered some data that didn’t match what I had been expecting. It’s not unusual to find garbage, corrupt data, mislabeled data or just crazy non-conforming data…but the format of the data this time was confusing enough that I spent some time trying to debug what had gone wrong, wondering if it was a bug in my code. In fact, the data was bizarre enough that some colleagues around the Project Zero office even got intrigued.

It became clear after a while we were looking at chunks of uninitialized memory interspersed with valid data. The program that this uninitialized data was coming from just happened to have the data I wanted in memory at the time. That solved the mystery, but some of the nearby memory had strings and objects that really seemed like they could be from a reverse proxy operated by cloudflare – a major cdn service.

A while later, we figured out how to reproduce the problem. It looked like that if an html page hosted behind cloudflare had a specific combination of unbalanced tags, the proxy would intersperse pages of uninitialized memory into the output (kinda like heartbleed, but cloudflare specific and worse for reasons I’ll explain later). My working theory was that this was related to their “ScrapeShield” feature which parses and obfuscates html – but because reverse proxies are shared between customers, it would affect *all* Cloudflare customers.

We fetched a few live samples, and we observed encryption keys, cookies, passwords, chunks of POST data and even HTTPS requests for other major cloudflare-hosted sites from other users. Once we understood what we were seeing and the implications, we immediately stopped and contacted cloudflare security.

This situation was unusual, PII was actively being downloaded by crawlers and users during normal usage, they just didn’t understand what they were seeing. Seconds mattered here, emails to support on a friday evening were not going to cut it. I don’t have any cloudflare contacts, so reached out for an urgent contact on twitter, and quickly reached the right people.

Cloudflare’s response to cloudbleed

Cloudflare has shown there is a good reason millions of sites trust them: they stepped out in front and fixed the immediate issue within 6 hours of the report, and over the past few days they have been working on fixing the issue at large and hunting down any related bugs.

Affected sites

Some of the companies affected have done their own due diligence and told users to change their passwords right away, while others like 1Password and OkCupid have come to a different conclusion and informed their users without forcing a password change.

LastPass, a competitor to 1Password (I personally use LastPass but have no vested interest in the company), was not hosted behind Cloudflare and has had no impact from Cloudbleed.

You can download a list of all sites currently known to be using Cloudflare's CDN/proxy, and therefore potentially affected by Cloudbleed, using the button below.

What should you do?

Well there isn’t a single easy answer to this, this is like a car part advisory / warning from a manufacturer, it may mean some day down the road your center console’s clips may pop out from use. or they may not. This could be bad. or not…. who knows at this point. with the use of password managers you shouldn’t be using the same password any two sites as it is, but let’s be honest with the amount of signups a typical tech oriented person has its impossible that you didn’t use the same password across two sites by accident or out of laziness. So if you want to be cautious? change your passwords. if you want to wait and see then do so and follow what the individual sites recommend. I personally am rotating my passwords where possible and adding 2factor authentication such as google totp, authy or duo etc.


Mooooo Linux: Dirty COW vulnerability CVE-2016-5195

What is Dirty Cow

CVE-2016-5195 is a bug in the copy-on-write (COW) mechanism of the kernel. Any user or user-owned process can gain write access to memory mappings which should be read-only for that user, which allows them to modify otherwise root-only files. Should you worry about it? YES, you should patch your system(s) right away!
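To make the impact concrete, here is a minimal illustration (deliberately not an exploit) of the boundary being broken; the file name is just an example:

# as a normal, non-root user, writing to a root-owned read-only file is refused:
echo test >> /etc/hosts
# bash: /etc/hosts: Permission denied
#
# Dirty COW races madvise(MADV_DONTNEED) against writes to /proc/self/mem on a
# private, read-only mapping of such a file, tricking the kernel into writing
# the change through to the underlying file anyway.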

Who found CVE-2016-5195?

Who cares? IT'S BAD, PATCH NOW!! OK, just kidding: security researcher Phil Oester was the first to publicly release info about this exploit. He found it via an HTTP packet capture setup.

Is this related to SSL / OpenSSL?

No. Unlike Heartbleed, POODLE, etc., this is not related to SSL.

Where can I get some official info about this exploit?

Not sure what you mean by official, but check MITRE and Red Hat.

How to find out if I am affected?

Ubuntu / Debian

Run the following (root is not required for uname):
uname -rv

Sample outputs:

4.4.13-1-pve #1 SMP Tue Jun 28 10:16:33 CEST 2016
2.6.32-openvz-042stab104.1-amd64 #1 SMP Thu Jan 29 13:06:16 MSK 2015
4.4.0-42-generic #62-Ubuntu SMP Fri Oct 7 23:11:45 UTC 2016
3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u2 (2016-10-19)

List of fixed kernel versions (you want to be running at least these for your release); a quick way to check what you have installed follows the list.

  • 4.8.0-26.28 for Ubuntu 16.10
  • 4.4.0-45.66 for Ubuntu 16.04 LTS
  • 3.13.0-100.147 for Ubuntu 14.04 LTS
  • 3.2.0-113.155 for Ubuntu 12.04 LTS
  • 3.16.36-1+deb8u2 for Debian 8
  • 3.2.82-1 for Debian 7
  • 4.7.8-1 for Debian unstable
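If you want to double-check what kernel packages are actually installed versus what your repositories currently offer, here is a quick sketch (assuming a Debian/Ubuntu system; the meta-package name varies by release):

# show the running kernel and every installed kernel image
uname -r
dpkg -l 'linux-image*' | grep ^ii
# see what the repositories currently provide
sudo apt-get update && apt-cache policy linux-image-generic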

Red Hat / CentOS / Fedora

wget the test script directly from Red Hat access:

wget https://access.redhat.com/sites/default/files/rh-cve-2016-5195_1.sh
chmod +x rh-cve-2016-5195_1.sh
bash rh-cve-2016-5195_1.sh

If you are vulnerable you will get a result such as:

Your kernel is X.X.X.X.X.x86_64 which IS vulnerable.
Red Hat recommends that you update your kernel. Alternatively, you can apply partial mitigation described at https://access.redhat.com/security/vulnerabilities/2706661 .

update your kernel and reboot.

How do I upgrade my kernel?

Debian: sudo apt-get update && sudo apt-get dist-upgrade
Redhat: sudo yum update kernel

Now sudo reboot and you are in happy land. If you are paranoid like me, just run uname -rv again and check.

 

 

Original and new git commit messages to the Linux kernel regarding this bug:

commit 4ceb5db9757aaeadcf8fbbf97d76bd42aa4df0d6
Author: Linus Torvalds <[email protected]>
Date: Mon Aug 1 11:14:49 2005 -0700

Fix get_user_pages() race for write access

There’s no real guarantee that handle_mm_fault() will always be able to
break a COW situation – if an update from another thread ends up
modifying the page table some way, handle_mm_fault() may end up
requiring us to re-try the operation.

That’s normally fine, but get_user_pages() ended up re-trying it as a
read, and thus a write access could in theory end up losing the dirty
bit or be done on a page that had not been properly COW’ed.

This makes get_user_pages() always retry write accesses as write
accesses by making “follow_page()” require that a writable follow has
the dirty bit set. That simplifies the code and solves the race: if the
COW break fails for some reason, we’ll just loop around and try again.

 

commit 19be0eaffa3ac7d8eb6784ad9bdbc7d67ed8e619
Author: Linus Torvalds <[email protected]>
Date: Thu Oct 13 20:07:36 2016 GMT

This is an ancient bug that was actually attempted to be fixed once
(badly) by me eleven years ago in commit 4ceb5db9757a (“Fix
get_user_pages() race for write access”) but that was then undone due to
problems on s390 by commit f33ea7f404e5 (“fix get_user_pages bug”).

In the meantime, the s390 situation has long been fixed, and we can now
fix it by checking the pte_dirty() bit properly (and do it better). The
s390 dirty bit was implemented in abf09bed3cce (“s390/mm: implement
software dirty bits”) which made it into v3.9. Earlier kernels will
have to look at the page state itself.

Also, the VM has become more scalable, and what used a purely
theoretical race back then has become easier to trigger.

To fix it, we introduce a new internal FOLL_COW flag to mark the “yes,
we already did a COW” rather than play racy games with FOLL_WRITE that
is very fundamental, and then use the pte dirty flag to validate that
the FOLL_COW flag is still valid.


Windows 10 User Experience & Telemetry service

Windows 10 was released long ago in internet time, but I still get asked questions about it randomly by various users, friends and clients. One of the most asked is about the "spying" that Windows 10 may be doing on the user. Initially a service called DiagTrack was present in Windows that provided these "spying capabilities". Since the end of 2015 it has been renamed to the "Connected User Experiences and Telemetry" service. I am not sure why they changed the name; maybe the word tracking was bothering some people and MS made it… "different".

Screenshot: DiagTrack vs. Connected User Experiences and Telemetry

Connected User Experiences and Telemetry service

Microsoft says telemetry is system data that is uploaded by the Connected User Experiences and Telemetry component. The telemetry data is used to keep Windows devices secure, and to help Microsoft improve the quality of Windows and Microsoft services. It is used to provide a service to the user as part of Windows.

What does that mean? Only MS truly knows, and if you care you should ask someone from MS for a real answer. I am simply here to tell you how to disable it IF you want to do so.

  1. Hold down the Windows key and tap the R key
  2. In the box that opens type services.msc and press the Enter key or click the OK button
  3. In the ‘Services (Local)’ section find the line with the name ‘Connected User Experiences and Telemetry’ and double-click it
  4. In the ‘Service status’ section click ‘Stop’ (highlighted blue in the screenshot below)
  5. Under the ‘Startup type’ drop down menu select ‘Disabled’ and then confirm this and close the window by clicking ‘OK’ (highlighted yellow in the screenshot below)

Connected User Experiences and Telemetry service

 

This should disable the service and the tracking for this install. Now, we know Microsoft has re-enabled this service under a different name once; will they do it again? Who knows, so maybe check on this periodically to see what state it's in. Working in an enterprise environment and want to know how to control the telemetry service? Microsoft has a TechNet article covering exactly that.
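If you prefer the command line, or want a quick way to re-check the service later, here is a sketch from an elevated Command Prompt. It assumes the service's internal name is still DiagTrack (on the builds I have seen, the display name "Connected User Experiences and Telemetry" still maps to that internal name):

REM check the current state and startup type of the service
sc query DiagTrack
sc qc DiagTrack
REM stop it and keep it from starting again (note the space after start=)
sc stop DiagTrack
sc config DiagTrack start= disabled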


Enable WebGL on Chrome or Firefox

WebGL on Chrome

Enable hardware acceleration :

  • browse to chrome://settings/advanced
  • scroll to the bottom and look for Use hardware acceleration when available
  • make sure Use hardware acceleration when available is checked 
  • if it tells you to then click the relaunch button

Check if webGL is enabled in Chrome

  • Copy paste the following in your browser’s address bar chrome://gpu
  •  Look for the WebGL item in the Graphics Feature Status list
  • The status will be one of the following:
    1. Hardware accelerated — WebGL is enabled and hardware-accelerated (running on the graphics card).
    2. Software only, hardware acceleration unavailable — WebGL is enabled, but running in software.
    3. Unavailable — WebGL is not available in hardware or software.
  • You are looking for the status to be #1 from the above list i.e. Hardware accelerated

 

WebGL on FireFox

Enable WebGL

  • Copy paste the following in your browser’s address bar about:config
    1. you will be asked to accept a scary warning; I am positive this will be OK unless you start going Godzilla or the Hulk on unrelated settings 🙂 so… don't do that.
  • Search for webgl.disabled
  • make sure that its value is set to false

Check WebGL status on FireFox browser

  • Copy paste the following in your browser’s address bar about:support
  • Inspect the WebGL Renderer row in the Graphics table
  • The status can be either of two things
    1. the name of a  graphics card manufacturer, model and driver i.e. Google Inc. — ANGLE (NVIDIA GeForce GTX 980 Ti Direct3D11 vs_5_0 ps_5_0)
    2. Something along the lines of Blocked due to version or Blocked due to unresolved issues.
  • Obviously you want #1 as the result i.e. a working webgl.


SwiftKey sharing users' data with strangers

SwiftKey was an amazing keyboard that usurped Swype as my default keyboard; I loved its predictions, its swipe-to-type tech and its overall layout and features. For a long time it worked great, then I grew enamored with other keyboards and moved on. Recently Microsoft bought the app for a cool $250 million. Awesome, right? Well, it seems that SwiftKey has been sharing users' data with strangers and just about anyone that asked… or didn't ask.

Multiple SwiftKey users found other people's phone numbers and emails, or text predictions in languages they had never used or installed. SwiftKey has announced that this is due to a synchronization "feature". They have now disabled this bug feature and are working on fixing it.

This week, a few of our customers noticed unexpected predictions where unfamiliar terms, and in some rare cases emails, appeared when using their mobile phone. We are working quickly to resolve this inconvenience.

While this did not pose a security issue for our customers, we have turned off the cloud sync service and have updated our applications to remove email address predictions. During this time, it will not be possible to back up your SwiftKey language model.  

The vast majority of SwiftKey users are not affected by this issue. If you have any reason to believe you are seeing unfamiliar predictions, please contact [email protected].    

We take users’ privacy and security very seriously and are committed to maintaining world-class standards for our community.   

We will continue to post further updates on our blog.   

The SwiftKey Team 

Users everywhere are finding out about this and displaying some displeasure!

The good news (sort of?) is that the sync really is disabled, as noticed by frustrated users complaining about it on Twitter.

Guess we have to wait and see how far this rabbit hole goes. I personally am making sure I don't install SwiftKey on any new devices, especially since I barely ever use it anymore.

SwiftKey Keyboard (Developer: SwiftKey, Price: Free)

Top alternative keyboards:

  • Gboard - the Google Keyboard
  • Swype Keyboard
  • Fleksy - Emoji & gif keyboard app


Nginx ProxMox Proxy using Letsencrypt SSL cert

Why use an Nginx ProxMox proxy with a Let's Encrypt SSL cert?

1st: why not?
2nd: Load balancing! Nginx is built to handle many concurrent connections at the same time from a multitude of clients. This makes it ideal as the point of contact for said clients. The server can pass requests to any number of backend servers to handle the bulk of the work, which spreads the load across your infrastructure. This design also gives you the flexibility to add backend servers, or take them down for maintenance, easily.
3rd: Security! Nginx can often be configured to block access to certain parts of the underlying application, so life doesn't throw you a curveball at 3 AM on December 24th 2006 (don't ask 🙁).
4th: Port / firewall constraints! Sometimes you need to access an application on port 34563 but the firewall doesn't allow access on random ports. You can allow incoming connections on port 80 via Nginx and proxy them to the app on 34563.
5th: seriously… why not…

Now you know why we may want Nginx as a frontend proxy for our underlying app, so let's get to setting it up for our use case: protecting Proxmox from bad actors and providing reliable access to it for ourselves. We are going to set up Nginx to forward all traffic from port 80 to port 443, where Let's Encrypt will provide us with SSL-encrypted access!

Install nginx-light instead of the full package, so you get a smaller set of utilities and a lighter install. You can install nginx or nginx-full instead if you wish.
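A sketch of the install, assuming the proxy host is Debian/Ubuntu based and you are running as root (otherwise prefix with sudo):

apt-get update
apt-get install nginx-light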

Remove the default Nginx config:
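Assuming the stock Debian/Ubuntu layout, that means removing the default site so it does not grab port 80:

rm /etc/nginx/sites-enabled/default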

Add a new Nginx config, copying the code below:
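The original config block did not survive the formatting, so here is a minimal sketch of what it likely contained: redirect port 80 to 443, terminate TLS with the Let's Encrypt files generated further down, and proxy everything (including the websocket connections the noVNC console uses) to the Proxmox web UI on port 8006. The filename, hostname and certificate paths are placeholders; adjust them for your setup.

# /etc/nginx/sites-enabled/proxmox (hypothetical filename)
server {
    listen 80;
    server_name proxmoddomain.com;
    # send all plain HTTP traffic to HTTPS
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl;
    server_name proxmoddomain.com;

    ssl_certificate     /etc/letsencrypt/live/proxmoddomain.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/proxmoddomain.com/privkey.pem;

    location / {
        # Proxmox listens on 8006 over HTTPS
        proxy_pass https://127.0.0.1:8006;
        proxy_http_version 1.1;
        # required for the noVNC console (websockets)
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
        proxy_buffering off;
    }
}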

 

Install git:
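Again assuming a Debian/Ubuntu base:

apt-get install git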

Grab a copy of the Let's Encrypt client:
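At the time this was written the client lived on GitHub under the letsencrypt name (it has since been renamed Certbot); a sketch, cloning into /opt:

cd /opt
git clone https://github.com/letsencrypt/letsencrypt
cd letsencrypt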

Get the certs:
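A sketch using the client's standalone mode, which spins up its own temporary webserver on port 80 (so stop Nginx first if it is already running); the domain is a placeholder:

service nginx stop
./letsencrypt-auto certonly --standalone -d proxmoddomain.com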

Specify your email when asked (this is only used to recover lost certs), then agree to the TOS.

you will get 4 files from this:

  • cert.pem: Your domain’s certificate
  • chain.pem: The Let’s Encrypt chain certificate
  • fullchain.pem: cert.pem and chain.pem combined
  • privkey.pem: Your certificate’s private key

these files are located in

  • /etc/letsencrypt/live/proxmoddomain.com

Now that your certs are in place, restart Nginx and you are live!
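The actual restart commands were lost in formatting; either of these should work, depending on whether your host uses sysvinit or systemd:

service nginx restart

or

systemctl restart nginx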


 
