Welcome to SumGuy's Ramblings!

I am SumGuy. You may call me SumGuy. I like to post news, tutorials, tidbits, and tips, and sometimes I just ramble because I can! So make yourself at home.
Restart service on no linux logs output

Sometimes I have apps that suddenly stop working, but they don’t have PID output, or I can’t manage them via systemd or upstart due to convoluted requirements. Other times the app is running but stops processing incoming queues for various reasons. I need a mechanism in place to restart the service as needed. I’m going to describe how you can watch a Linux log file: we check the log every so often, and if it hasn’t been written to in the past X seconds, we restart the app (or perform whatever function you like).

Restart service script

#!/bin/bash
# simple script to check logs; if no entries have been made in 45 seconds, restart
set -x

TS=$(date +%s)

# seconds since the log file was last modified -- make sure to set the log path here
timer1=$(( TS - $(date +%s -r /path/to/some/file.log) ))

if [ "$timer1" -gt 45 ]; then
        echo "$timer1"
        echo "restarting service due to no activity for 45 seconds"
        sudo service mycoolservice restart    # you can change this for something else, such as sending an email or even rebooting the machine
fi
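The key line is the age computation: current epoch time minus the log file's mtime, which GNU date exposes via the -r flag. Here's a self-contained sketch of the same check against a freshly created temp file (a stand-in for the real log path):

```shell
#!/bin/bash
# stand-in for /path/to/some/file.log -- a temp file we just created
logfile=$(mktemp)

# age in seconds = now minus the file's last-modification time
age=$(( $(date +%s) - $(date +%s -r "$logfile") ))
echo "age=$age seconds"

if [ "$age" -gt 45 ]; then
    echo "would restart the service"
else
    echo "log is fresh, nothing to do"
fi
rm -f "$logfile"
```

To run the real watchdog periodically, a cron entry like `* * * * * /usr/local/bin/log-watchdog.sh` works, keeping in mind cron's one-minute granularity (the script path here is hypothetical).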
Nginx ProxMox Proxy using Letsencrypt SSL cert

Why use an nginx ProxMox proxy with a Let's Encrypt SSL cert?

1st: why not?
2nd: Load balancing! Nginx is built to handle many concurrent connections at the same time from a multitude of clients. This makes it ideal for being the point of contact for said clients. The server can pass requests to any number of backend servers to handle the bulk of the work, which spreads the load across your infrastructure. This design also gives you the flexibility to easily add backend servers or take them down for maintenance.
3rd: Security! Nginx can often be locked down to block access to certain parts of the underlying application, so life doesn't throw you a curveball at 3AM on December 24th 2006 (don't ask 🙁 ).
4th: Port firewall constraints! Sometimes you need to access an application on port 34563 but firewall doesn’t allow access on random ports. You can allow incoming connections on port 80 via nginx but proxy them to the app on 34563.
5th: seriously… why not…..

Now you know why we may want nginx as a frontend proxy for our underlying app, so let's get to setting it up for our use case: protecting Proxmox from bad actors and providing reliable access to it for ourselves. We are going to set up nginx to forward all traffic from port 80 to port 443, where Let's Encrypt will provide us with SSL-encrypted access!

Install nginx-light instead of the full package, so you get a smaller set of utilities but also a lighter install. You can install nginx or nginx-full instead if you wish.

apt-get install nginx-light

remove the default nginx config

rm /etc/nginx/sites-enabled/default

add a new nginx config by copying the code below

nano /etc/nginx/sites-enabled/default

add the following in there

upstream proxmox {
    server "proxmoxdomain.com";
}

server {
    listen 80 default_server;
    location ~ /.well-known {
        root "/var/www/html";
        allow all;
    }
    rewrite ^(.*) https://$host$1 permanent;
}

server {
    listen 443 ssl;
    server_name _;
    ssl_certificate /etc/letsencrypt/live/proxmoxdomain.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/proxmoxdomain.com/privkey.pem;
    include ssl-params.conf;
    proxy_redirect off;

    location ~ /.well-known {
        root "/var/www/html";
        allow all;
    }

    location / {
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_pass https://localhost:8006;
        proxy_buffering off;
        client_max_body_size 0;
        proxy_connect_timeout 3600s;
        proxy_read_timeout 3600s;
        proxy_send_timeout 3600s;
        send_timeout 3600s;
    }
}
install git

apt-get -y install git

grab a copy of the letsencrypt client

git clone https://github.com/letsencrypt/letsencrypt /opt/letsencrypt

get the certs

cd /opt/letsencrypt
./letsencrypt-auto certonly -a webroot --webroot-path=/var/www/html -d proxmoxdomain.com

Specify your email when asked; this is only used to recover lost certs. Then agree to the TOS.

you will get 4 files from this:

  • cert.pem: Your domain’s certificate
  • chain.pem: The Let’s Encrypt chain certificate
  • fullchain.pem: cert.pem and chain.pem combined
  • privkey.pem: Your certificate’s private key

these files are located in

  • /etc/letsencrypt/live/proxmoxdomain.com
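Note that Let's Encrypt certs expire after 90 days, so you'll want to renew them automatically. A sketch of a weekly cron entry (add via crontab -e), using the client and webroot paths from the steps above:

```
0 3 * * 1 cd /opt/letsencrypt && ./letsencrypt-auto renew --quiet && service nginx reload
```

The renew subcommand only replaces certs that are close to expiry, so running it weekly is safe.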

Now that your certs are in place, restart nginx and you are live! Depending on your init system, use one of:

service nginx restart

systemctl restart nginx
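Whenever you edit the nginx config later, it's worth validating it before restarting, since a syntax error can take the proxy (and your Proxmox access) down with it; nginx has a built-in checker for this:

```shell
# test the configuration files for syntax errors without touching the running server
nginx -t
```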
Proxmox iso upload method

I just set up Proxmox and am testing out various features. I needed to upload an ISO so I could install an OS. It took me a bit, so I figured I'd throw it on here for future reference.

  1. Log in to the Proxmox web control panel.
  2. Go to Server View from the drop-down on the left hand side.
  3. Expand the Datacenter menu until you see “local”, then click it.
  4. On the right hand side, select the Content tab.
  5. Click the Upload button.
  6. Click Select File, find your ISO, and click Upload.

This should solve any proxmox iso upload questions that may arise 🙂
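If you'd rather skip the web UI, ISO images on the default “local” storage live under /var/lib/vz/template/iso on the Proxmox host, so you can also copy them straight over SSH (the hostname and ISO filename below are placeholders):

```shell
# copy an ISO directly into Proxmox's default "local" ISO storage
# (replace the host and ISO name with your own)
scp debian-9.iso root@proxmoxdomain.com:/var/lib/vz/template/iso/
```

The copied ISO should then show up in the same Content tab.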

Update: added a screenshot below.

Facebook investigated by New York AG’s office for harvesting email contacts

The New York attorney general’s office is investigating Facebook for harvesting the email contacts of about 1.5 million users without their consent.

“Facebook has repeatedly demonstrated a lack of respect for consumers’ information while at the same time profiting from mining that data.” – New York Attorney General Letitia James

The social network confirmed in April that it collected the email contacts of its users, but said it wasn’t on purpose.

The attorney general’s office said in a press release that hundreds of millions of Facebook users could have been affected, because users might have hundreds of email contacts stored. The investigation comes as other regulators and lawmakers crack down on Facebook for its privacy practices: Ireland’s Data Protection Commission is investigating whether Facebook safeguarded its users’ passwords properly, which could reveal violations of the GDPR. In December, the DC attorney general sued Facebook for allegedly failing to safeguard its users’ data, and Canadian regulators have accused Facebook of violating local laws by mishandling user data and said they could take the company to court for its privacy mishaps.

The privacy commissioner of Canada and the information and privacy commissioner for British Columbia started investigating Facebook last year after revelations surfaced that the UK political consultancy Cambridge Analytica harvested data from about 87 million users without their permission.

CloudBleed a Cloudflare flaw leaks customer data

Cloudbleed (aka Cloudleak) is a bug in Cloudflare, which is a CDN service, a proxy service, and a DNS provider... well, to be honest, Cloudflare is a LOT of things these days. It provides a freemium set of services: you can run your site using their DNS and proxy/CDN service for free, or pay $20-$200 to get an interesting set of goodies. According to their own homepage:

“Cloudflare speeds up and protects millions of websites, APIs, SaaS services, and other properties connected to the Internet. Our Anycast technology enables our benefits to scale with every server we add to our growing footprint of data centers.”

They provide these services for ~6 million websites, and recently a researcher at Google found a critical flaw in Cloudflare’s in-house parser that may have leaked passwords and authentication tokens.

Tavis Ormandy, a self-described “Vulnerability researcher at Google” currently working for Google’s Project Zero security initiative, found the bug on February 18th. He posted an issue on February 19th and tweeted looking for anyone from Cloudflare security to get in touch with him.

Cloudflare got back to him right away, and they worked on solving the issue ASAP. Unfortunately, the issue may be as old as September 2016. Cloudflare released a statement letting us know that the larger issue started on February 13th, when a code update meant one in every 3,300,000 HTTP requests potentially resulted in memory leakage, which may not sound like much until you realize the massive amount of information being passed through the Cloudflare network.

Tavis found that when they “fetched a few live samples, we observed encryption keys, cookies, passwords, chunks of POST data and even HTTPS requests for other major Cloudflare-hosted sites from other users”. There’s just so much information going through the Cloudflare network that we don’t know what has and hasn’t been affected until something surfaces showing an actual malicious leak.

Unfortunately, a lot of data was cached by Google and other search engines and was available to be viewed as late as February 24th 2017. Cloudflare has been working with Google, Bing, and others to remove such information before it can be maliciously used.

Ormandy’s original post:

On February 17th 2017, I was working on a corpus distillation project, when I encountered some data that didn’t match what I had been expecting. It’s not unusual to find garbage, corrupt data, mislabeled data or just crazy non-conforming data…but the format of the data this time was confusing enough that I spent some time trying to debug what had gone wrong, wondering if it was a bug in my code. In fact, the data was bizarre enough that some colleagues around the Project Zero office even got intrigued.

It became clear after a while we were looking at chunks of uninitialized memory interspersed with valid data. The program that this uninitialized data was coming from just happened to have the data I wanted in memory at the time. That solved the mystery, but some of the nearby memory had strings and objects that really seemed like they could be from a reverse proxy operated by cloudflare – a major cdn service.

A while later, we figured out how to reproduce the problem. It looked like that if an html page hosted behind cloudflare had a specific combination of unbalanced tags, the proxy would intersperse pages of uninitialized memory into the output (kinda like heartbleed, but cloudflare specific and worse for reasons I’ll explain later). My working theory was that this was related to their “ScrapeShield” feature which parses and obfuscates html – but because reverse proxies are shared between customers, it would affect *all* Cloudflare customers.

We fetched a few live samples, and we observed encryption keys, cookies, passwords, chunks of POST data and even HTTPS requests for other major cloudflare-hosted sites from other users. Once we understood what we were seeing and the implications, we immediately stopped and contacted cloudflare security.

This situation was unusual, PII was actively being downloaded by crawlers and users during normal usage, they just didn’t understand what they were seeing. Seconds mattered here, emails to support on a friday evening were not going to cut it. I don’t have any cloudflare contacts, so reached out for an urgent contact on twitter, and quickly reached the right people.

Cloudflare’s response to cloudbleed

Cloudflare has shown there is a good reason millions of sites trust them: they stepped out in front and fixed the immediate issue within 6 hours of the report, and have been working on fixing the issue at large and hunting down related bugs over the past few days.

Affected sites

Some of the companies affected have done their own due diligence and told users to change their passwords right away, while others like 1Password & OkCupid came to a different conclusion and informed their users without forcing a password change.

LastPass, a competitor to 1Password (I personally use LastPass but have no vested interest in the company), was not hosted behind Cloudflare and has had no impact from Cloudbleed.

A list of all sites currently known to be using the Cloudflare CDN/proxy that may be affected by Cloudbleed is available for download.


What should you do?

Well, there isn’t a single easy answer to this. It’s like a part advisory/warning from a car manufacturer: it may mean some day down the road your center console’s clips pop out from use, or they may not. This could be bad, or not... who knows at this point. With the use of password managers you shouldn’t be reusing the same password on any two sites as it is, but let’s be honest: with the number of signups a typical tech-oriented person has, it’s nearly impossible that you haven’t reused a password across two sites by accident or out of laziness. So if you want to be cautious, change your passwords. If you want to wait and see, do so and follow what the individual sites recommend. I personally am rotating my passwords where possible and adding two-factor authentication such as Google TOTP, Authy, or Duo.

Mooooo Linux Dirty COW vulnerability CVE-2016-5195

What is Dirty COW?

CVE-2016-5195 is a bug in the Copy-On-Write (COW) mechanism of the kernel. Any user or user-owned process can gain write access to memory mappings which should be read-only for that user, allowing them to modify otherwise root-only files. Should you worry about it? YES. You should patch your system(s) right away!

Who found CVE-2016-5195?

Who cares? IT'S BAD, PATCH NOW!! OK, just kidding: security researcher Phil Oester was the first one to publicly release info about this exploit. He found it via an HTTP packet capture setup.

Is this related to SSL / OpenSSL?

No, unlike heartbleed, poodle etc this is not related to SSL.

Where can I get some official info about this exploit?

Not sure what you mean by official, but check MITRE and Red Hat.

How to find out if I am affected?

Ubuntu / Debian

Check your running kernel version (root not required):
uname -rv

sample outputs :

4.4.13-1-pve #1 SMP Tue Jun 28 10:16:33 CEST 2016
2.6.32-openvz-042stab104.1-amd64 #1 SMP Thu Jan 29 13:06:16 MSK 2015
4.4.0-42-generic #62-Ubuntu SMP Fri Oct 7 23:11:45 UTC 2016
3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u2 (2016-10-19)

The following kernel versions contain the fix; anything older needs to be updated:

  • 4.8.0-26.28 for Ubuntu 16.10
  • 4.4.0-45.66 for Ubuntu 16.04 LTS
  • 3.13.0-100.147 for Ubuntu 14.04 LTS
  • 3.2.0-113.155 for Ubuntu 12.04 LTS
  • 3.16.36-1+deb8u2 for Debian 8
  • 3.2.82-1 for Debian 7
  • 4.7.8-1 for Debian unstable
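To compare your running kernel against the fixed versions above without eyeballing it, GNU sort's -V (version sort) can do the ordering. A sketch with a hard-coded example version (swap in the uname -r output for real use):

```shell
#!/bin/bash
# fixed version for Ubuntu 16.04 LTS, taken from the list above
fixed="4.4.0-45"
# stand-in for your running kernel; for real use: running=$(uname -r | cut -d- -f1,2)
running="4.4.0-42"

# sort -V orders version strings numerically; if ours sorts first and differs, it is older
lowest=$(printf '%s\n%s\n' "$running" "$fixed" | sort -V | head -n1)
if [ "$lowest" = "$running" ] && [ "$running" != "$fixed" ]; then
    echo "kernel $running is older than $fixed -- update needed"
else
    echo "kernel $running is at or past $fixed"
fi
```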

Redhat / Centos / Fedora

wget the test script directly from Red Hat:

wget https://access.redhat.com/sites/default/files/rh-cve-2016-5195_1.sh
chmod +x rh-cve-2016-5195_1.sh
bash rh-cve-2016-5195_1.sh

If you are vulnerable you will get a result such as :

Your kernel is X.X.X.X.X.x86_64 which IS vulnerable.
Red Hat recommends that you update your kernel. Alternatively, you can apply partial mitigation described at https://access.redhat.com/security/vulnerabilities/2706661 .

update your kernel and reboot.

How do I upgrade my kernel?

Debian/Ubuntu: sudo apt-get update && sudo apt-get dist-upgrade
Red Hat/CentOS: sudo yum update kernel

Now sudo reboot and you are in happy land. If you are paranoid like me, just run uname -rv again and check.


The original and new git commit messages to the Linux kernel regarding this exploit:

commit 4ceb5db9757aaeadcf8fbbf97d76bd42aa4df0d6
Author: Linus Torvalds <[email protected]>
Date: Mon Aug 1 11:14:49 2005 -0700

Fix get_user_pages() race for write access

There’s no real guarantee that handle_mm_fault() will always be able to
break a COW situation – if an update from another thread ends up
modifying the page table some way, handle_mm_fault() may end up
requiring us to re-try the operation.

That’s normally fine, but get_user_pages() ended up re-trying it as a
read, and thus a write access could in theory end up losing the dirty
bit or be done on a page that had not been properly COW’ed.

This makes get_user_pages() always retry write accesses as write
accesses by making “follow_page()” require that a writable follow has
the dirty bit set. That simplifies the code and solves the race: if the
COW break fails for some reason, we’ll just loop around and try again.

commit 19be0eaffa3ac7d8eb6784ad9bdbc7d67ed8e619
Author: Linus Torvalds <[email protected]>
Date: Thu Oct 13 20:07:36 2016 GMT

This is an ancient bug that was actually attempted to be fixed once
(badly) by me eleven years ago in commit 4ceb5db9757a (“Fix
get_user_pages() race for write access”) but that was then undone due to
problems on s390 by commit f33ea7f404e5 (“fix get_user_pages bug”).

In the meantime, the s390 situation has long been fixed, and we can now
fix it by checking the pte_dirty() bit properly (and do it better). The
s390 dirty bit was implemented in abf09bed3cce (“s390/mm: implement
software dirty bits”) which made it into v3.9. Earlier kernels will
have to look at the page state itself.

Also, the VM has become more scalable, and what used a purely
theoretical race back then has become easier to trigger.

To fix it, we introduce a new internal FOLL_COW flag to mark the “yes,
we already did a COW” rather than play racy games with FOLL_WRITE that
is very fundamental, and then use the pte dirty flag to validate that
the FOLL_COW flag is still valid.