Sometimes I have apps that suddenly stop working, but they have no PID output, or I can't start them via systemd or upstart due to convoluted requirements. Other times the app is running but stops processing its incoming queues for various reasons. I need a mechanism in place to restart the service as needed. I'm going to describe how to watch a Linux log file: if there is no new log entry for X seconds, we restart the app. We check the log every X seconds, and if there has been no movement in the past X seconds, we perform a given action.
Restart service script
#!/bin/bash
# simple script to check logs; if no entries have been made in 45 seconds, restart
now=$(date +%s)
last=$(date +%s -r /path/to/some/file.log) # make sure to set the log path here
timer1=$(( now - last ))
if [ "$timer1" -gt 45 ]; then
    echo "restarting service due to no activity for 45 seconds"
    sudo service mycoolservice restart # you can change this to something else, such as sending an email or even rebooting the machine
fi
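If you want to try the core check without touching a real service, here is a small standalone sketch: a temporary file stands in for your log, and `touch -d` (GNU coreutils) backdates it to simulate a log that went quiet a minute ago.

```shell
#!/bin/sh
# demo of the watchdog's core check; the temp file stands in for your real log
f=$(mktemp)
touch -d '60 seconds ago' "$f"                 # simulate a log that went quiet
age=$(( $(date +%s) - $(date +%s -r "$f") ))   # seconds since last write
if [ "$age" -gt 45 ]; then
    echo "stale for ${age}s - would restart the service here"
fi
rm -f "$f"
```

To run it on a schedule, either drop the real script into cron (once a minute is the finest granularity cron offers) or wrap it in a `while true; do …; sleep 45; done` loop under a process supervisor.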
Why use an nginx Proxmox proxy with Let's Encrypt SSL?
1st: why not? 2nd: Load balancing! Nginx is built to handle many concurrent connections at the same time from a multitude of clients, which makes it ideal as the point of contact for those clients. The server can pass requests to any number of backend servers to handle the bulk of the work, spreading the load across your infrastructure. This design also gives you the flexibility to easily add backend servers or take them down for maintenance. 3rd: Security! Nginx can often be configured to block access to certain parts of the underlying application, so life doesn't throw you a curveball at 3AM on December 24th 2006 (don't ask 🙁 ). 4th: Port firewall constraints! Sometimes you need to access an application on port 34563, but the firewall doesn't allow access on random ports. You can allow incoming connections on port 80 via nginx but proxy them to the app on 34563. 5th: seriously… why not…
Now you know why we may want nginx as a frontend proxy for our underlying app, so let's get to setting it up for our use case: protecting Proxmox from bad actors and providing reliable access to it for ourselves. We are going to set up nginx to forward all traffic from port 80 to port 443, where Let's Encrypt will provide us with SSL-encrypted access.
Install nginx-light instead of nginx-full, so you get a smaller set of utilities and a lighter install. You can also install nginx or nginx-full if you wish.
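To give a concrete picture of the end state, here is a minimal sketch of the proxy config. The hostname `proxmox.example.com`, the Let's Encrypt certificate paths, and the default Proxmox web port 8006 are assumptions; adjust them for your setup.

```nginx
# /etc/nginx/sites-available/proxmox -- minimal sketch, not a hardened config
server {
    listen 80;
    server_name proxmox.example.com;
    return 301 https://$host$request_uri;     # force everything to HTTPS
}

server {
    listen 443 ssl;
    server_name proxmox.example.com;

    # paths as issued by certbot; assumption, adjust to your cert location
    ssl_certificate     /etc/letsencrypt/live/proxmox.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/proxmox.example.com/privkey.pem;

    location / {
        proxy_pass https://127.0.0.1:8006;    # default Proxmox web UI port
        proxy_ssl_verify off;                 # Proxmox uses a self-signed cert locally
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header Upgrade $http_upgrade;   # noVNC console needs websockets
        proxy_set_header Connection "upgrade";
    }
}
```

Enable the site, run `nginx -t` to sanity-check the syntax, then reload nginx.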
The New York attorney general's office is investigating Facebook for harvesting the email contacts of about 1.5 million users without their consent.
“Facebook has repeatedly demonstrated a lack of respect for consumers’ information while at the same time profiting from mining that data.” – New York Attorney General Letitia James

The social network confirmed in April that it collected the email contacts of its users, but said it wasn’t on purpose.
The attorney general’s office said in a press release that hundreds of millions of Facebook users could have been affected, because users might have hundreds of email contacts stored. The investigation comes as other regulators and lawmakers are cracking down on Facebook for its privacy practices. For example, Ireland’s Data Protection Commission is investigating whether Facebook safeguarded its users’ passwords properly, which could constitute a violation of GDPR. In December, the DC attorney general sued Facebook for allegedly failing to safeguard the data of its users, and Canadian regulators have accused Facebook of violating local laws by mishandling user data, saying they could take the company to court over its privacy mishaps.
The privacy commissioner of Canada and the information and privacy commissioner for British Columbia started investigating Facebook last year after revelations surfaced that the UK political consultancy Cambridge Analytica harvested data from about 87 million users without their permission.
Cloudbleed (aka Cloudleak) is a bug in Cloudflare, which is a CDN service, a proxy service, and a DNS provider… well, to be honest, Cloudflare is a LOT of things these days and provides a freemium set of services: you can run your site using their DNS and proxy/CDN service for free, or pay $20-$200 to get some interesting goodies. According to their own homepage:
“Cloudflare speeds up and protects millions of websites, APIs, SaaS services, and other properties connected to the Internet. Our Anycast technology enables our benefits to scale with every server we add to our growing footprint of data centers.”
They provide these services for roughly 6 million websites, and recently a researcher at Google found a critical flaw in Cloudflare's in-house parser that may have leaked passwords and authentication tokens.
Tavis Ormandy, a self-described “Vulnerability researcher at Google” currently working for Google’s Project Zero security initiative, found the bug on February 18th. He posted an issue on February 19th and tweeted looking for anyone from Cloudflare security to get in touch with him:
Could someone from cloudflare security urgently contact me.
Cloudflare got back to him right away, and they worked on solving the issue ASAP. Unfortunately, the issue may be as old as September 2016. Cloudflare released a statement letting us know that the larger issue started on February 13th, when a code update meant one in every 3,300,000 HTTP requests potentially resulted in memory leakage, which doesn't sound like much until you realize the massive amount of information being passed through the Cloudflare network.
Tavis found that when they “fetched a few live samples, and we observed encryption keys, cookies, passwords, chunks of POST data and even HTTPS requests for other major Cloudflare-hosted sites from other users”. There is just so much information going through the Cloudflare network that we won't know what has and hasn't been affected unless something surfaces showing an actual malicious leak.
Unfortunately, a lot of data was cached by Google and other search engines and was still viewable as late as February 24th, 2017. Cloudflare has been working with Google, Bing, and others to remove such information before it can be maliciously used.
Ormandy’s original post:
On February 17th 2017, I was working on a corpus distillation project, when I encountered some data that didn’t match what I had been expecting. It’s not unusual to find garbage, corrupt data, mislabeled data or just crazy non-conforming data…but the format of the data this time was confusing enough that I spent some time trying to debug what had gone wrong, wondering if it was a bug in my code. In fact, the data was bizarre enough that some colleagues around the Project Zero office even got intrigued.
It became clear after a while we were looking at chunks of uninitialized memory interspersed with valid data. The program that this uninitialized data was coming from just happened to have the data I wanted in memory at the time. That solved the mystery, but some of the nearby memory had strings and objects that really seemed like they could be from a reverse proxy operated by cloudflare – a major cdn service.
A while later, we figured out how to reproduce the problem. It looked like that if an html page hosted behind cloudflare had a specific combination of unbalanced tags, the proxy would intersperse pages of uninitialized memory into the output (kinda like heartbleed, but cloudflare specific and worse for reasons I’ll explain later). My working theory was that this was related to their “ScrapeShield” feature which parses and obfuscates html – but because reverse proxies are shared between customers, it would affect *all* Cloudflare customers.
We fetched a few live samples, and we observed encryption keys, cookies, passwords, chunks of POST data and even HTTPS requests for other major cloudflare-hosted sites from other users. Once we understood what we were seeing and the implications, we immediately stopped and contacted cloudflare security.
This situation was unusual, PII was actively being downloaded by crawlers and users during normal usage, they just didn’t understand what they were seeing. Seconds mattered here, emails to support on a friday evening were not going to cut it. I don’t have any cloudflare contacts, so reached out for an urgent contact on twitter, and quickly reached the right people.
Cloudflare’s response to Cloudbleed
Cloudflare has shown there is a good reason millions of sites trust them: they stepped out in front and fixed the immediate issue within 6 hours of the report, and have spent the past few days working on the issue at large and hunting down any related bugs.
Some of the companies affected have done their own due diligence and told users to change their passwords right away, while others like 1Password and OkCupid have come to a different conclusion, informing their users but not forcing a password change.
Our investigation into the Cloudflare bug has revealed minimal exposure, if any. More details >> https://t.co/lYN7nq2oGq
Well, there isn't a single easy answer to this. It's like a parts advisory from a car manufacturer: someday down the road your center console's clips may pop out from use, or they may not. This could be bad, or not; who knows at this point. With a password manager you shouldn't be reusing the same password on any two sites as it is, but let's be honest: with the number of signups a typical tech-oriented person has, it's nearly impossible that you never reused a password across two sites by accident or out of laziness. So if you want to be cautious, change your passwords. If you want to wait and see, do so and follow what the individual sites recommend. I personally am rotating my passwords where possible and adding two-factor authentication such as Google TOTP, Authy, or Duo.
CVE-2016-5195 is a bug in the copy-on-write (COW) mechanism of the Linux kernel. Any user or user-owned process can gain write access to memory mappings that should be read-only for that user, allowing them to modify otherwise root-only files. Should you worry about it? YES, you should patch your system(s) right away!
Who found CVE-2016-5195?
Who cares? IT'S BAD, PATCH NOW!! OK, just kidding: security researcher Phil Oester was the first to publicly release info about this exploit. He found it via an HTTP packet capture setup.
Is this related to SSL / OpenSSL?
No. Unlike Heartbleed, POODLE, etc., this is not related to SSL.
Where can I get some official info about this exploit?
Not sure what you mean by official, but check MITRE and Red Hat.
How do I find out if I am affected?
Ubuntu / Debian
Type uname -rv (no root needed).
Sample outputs:

4.4.13-1-pve #1 SMP Tue Jun 28 10:16:33 CEST 2016
2.6.32-openvz-042stab104.1-amd64 #1 SMP Thu Jan 29 13:06:16 MSK 2015
4.4.0-42-generic #62-Ubuntu SMP Fri Oct 7 23:11:45 UTC 2016
3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u2 (2016-10-19)
Red Hat also provides a detection script; if you are vulnerable, running it will print a result such as:
Your kernel is X.X.X.X.X.x86_64 which IS vulnerable. Red Hat recommends that you update your kernel. Alternatively, you can apply partial mitigation described at https://access.redhat.com/security/vulnerabilities/2706661 .
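If you would rather script a quick check yourself, here is a rough heuristic, a sketch only: it compares the running kernel against 4.8.3, one of the upstream releases carrying the fix (4.4.26 and 4.7.9 in the stable series were also fixed). Distro kernels backport fixes without bumping the upstream version string, so treat its output as a prompt to read your distro's security advisory, not as a verdict.

```shell
#!/bin/sh
# rough heuristic only: distro kernels backport fixes without changing the
# upstream version string, so "possibly vulnerable" really means
# "go check your distro's security advisory"
kver=$(uname -r | cut -d- -f1)   # e.g. "4.4.0"
fixed="4.8.3"                    # a mainline release that includes the fix
oldest=$(printf '%s\n%s\n' "$fixed" "$kver" | sort -V | head -n1)
if [ "$oldest" = "$fixed" ]; then
    echo "kernel $kver is at or above $fixed: upstream fix included"
else
    echo "kernel $kver is below $fixed: possibly vulnerable, check your distro advisory"
fi
```

The `sort -V` version comparison is a GNU coreutils feature, which is fine on the Ubuntu/Debian systems this section targets.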
Linus Torvalds' commit message for the fix explains the bug and the repair:

There’s no real guarantee that handle_mm_fault() will always be able to break a COW situation – if an update from another thread ends up modifying the page table some way, handle_mm_fault() may end up requiring us to re-try the operation.
That’s normally fine, but get_user_pages() ended up re-trying it as a read, and thus a write access could in theory end up losing the dirty bit or be done on a page that had not been properly COW’ed.
This makes get_user_pages() always retry write accesses as write accesses by making “follow_page()” require that a writable follow has the dirty bit set. That simplifies the code and solves the race: if the COW break fails for some reason, we’ll just loop around and try again.
This is an ancient bug that was actually attempted to be fixed once (badly) by me eleven years ago in commit 4ceb5db9757a (“Fix get_user_pages() race for write access”) but that was then undone due to problems on s390 by commit f33ea7f404e5 (“fix get_user_pages bug”).
In the meantime, the s390 situation has long been fixed, and we can now fix it by checking the pte_dirty() bit properly (and do it better). The s390 dirty bit was implemented in abf09bed3cce (“s390/mm: implement software dirty bits”) which made it into v3.9. Earlier kernels will have to look at the page state itself.
Also, the VM has become more scalable, and what used to be a purely theoretical race back then has become easier to trigger.
To fix it, we introduce a new internal FOLL_COW flag to mark the “yes, we already did a COW” rather than play racy games with FOLL_WRITE that is very fundamental, and then use the pte dirty flag to validate that the FOLL_COW flag is still valid.