Feb 16 2022
 

A friend of mine, a small business owner, manages his own Linux server.  It's a simple web server serving plain-Jane HTML … and he loves to manage the server himself.  Except when he gets locked out, because his home IP is dynamic and he is using lfd/csf as a security layer.  He calls me, we chat, and the chats always end up with something like "hey, can you reset my IP when you get time."  I love talking to him … he is a great friend, but the third time is the charm … meaning, I love helping friends out, but if I have to fix the same problem more than once, I am going to find a more permanent solution.  And this is what I came up with …
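The full write-up is below the fold, but as a rough sketch of the general idea (not the exact script from the post): have the friend point a dynamic-DNS hostname at his home IP, then have cron on the server resolve that hostname and whitelist/unban the result with csf's own command-line flags.  The hostname below is a made-up placeholder.

#!/bin/bash
# Hedged sketch, not the exact fix from this post.
# Assumes the friend keeps a dynamic-DNS hostname pointed at his home IP.
DDNS_HOST="home.example.com"                     # placeholder hostname

IP=$(dig +short "$DDNS_HOST" | tail -n1)
[ -z "$IP" ] && exit 0                           # bail quietly if the lookup failed

csf -tr "$IP"                                    # clear any temporary lfd block for this IP
csf -dr "$IP"                                    # remove the IP from csf.deny if it landed there
grep -q "^$IP\b" /etc/csf/csf.allow || csf -a "$IP" "friend home IP"   # whitelist it

(csf also ships a DYNDNS option in csf.conf that resolves a hostname on an interval and allows it, which may be the cleaner route.)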

Continue reading »

Oct 22 2020
 

So I recently bought a Western Digital PR4100 NAS drive to back up my Linux desktop and laptop after losing two external USB drives in about two years.  I had always used rsnapshot to back them up.  While poking around the WD NAS via ssh, I realized it has a relatively recent rsnapshot binary installed.  I couldn't find any of its config files though, so I came up with this solution.

The first thing you need to do is set up the sshd user on the WD NAS so you have shell access.

The second thing you need to do is configure rsync over ssh from your computer to copy the rsnapshot config files into place.  This needs to be done daily because the WD NAS reboots at 3am EST every day.  That reboot only lasts a few minutes, so what I did was create a cron job that passes the sshd user's password via sshpass to get a non-interactive session for rsync.  You will also need to copy the entire /home/root/.ssh folder and all of its files to your local server, and push them back with rsync as well, because the My Cloud device also deletes those files on reboot.  It looks something like this:

Create file sync-mycloud-confs.sh containing:

#!/bin/bash
rsync --rsh='sshpass -p PASSWORD ssh -l sshd' /path/to/rsnapshot/*.conf MyCloud_hostname_or_IP_Address:/home/root/
rsync -a --rsh='sshpass -p PASSWORD ssh -l sshd' /path/to/rsnapshot/.ssh/ MyCloud_hostname_or_IP_Address:/home/root/.ssh/

In the above, you will need to replace PASSWORD with your sshd user's password, /path/to/rsnapshot/*.conf with the path to your config files, and MyCloud_hostname_or_IP_Address with whatever you use to ssh into your My Cloud device (I use the IP address, by the way).  I may try to find a better place to store these files so they don't get deleted on device reboots, but for tonight it works there, so that's what I am going with before I forget.
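To tie it together, an example crontab entry on the local machine could look like the following; the run time and script path here are assumptions, so adjust them to your setup.  Since the NAS reboots at 3am EST, running shortly after that keeps the configs in place.

# Re-push the rsnapshot configs and .ssh folder every morning after the 3am reboot (assumed path/time)
30 3 * * * /path/to/sync-mycloud-confs.sh >/dev/null 2>&1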

Continue reading »

Jun 22 2020
 

I had a server with plenty of disk space, like gigs free, but it was out of inodes.  This is a seldom-used server, so it didn't make any sense to me.  It was used for pentesting and had a minimal install of some vulnerable apps running under nginx.  Anywho, this is the command I used to find the two folders that together were using nearly 100% of the inodes:

find / -xdev -printf '%h\n' | sort | uniq -c | sort -k 1 -n
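As a side note (not from the original post), a quick way to confirm it really is inode exhaustion before hunting for the culprit is df's inode view; in the find output above, each line is a file count per directory, and the sort puts the worst offenders at the bottom.

df -i        # per-filesystem inode usage; an IUse% near 100% confirms the problem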

Continue reading »

Jun 22 2020
 

This was driving me crazy and then I figured it out … let's assume you need to edit an existing VPN connection, or maybe you imported a new .ovpn file and need to edit it, but you can't, because it's 2020 and everything else is going wrong too, so what the hell.  This is how to fix it …

Continue reading »

Feb 01 2020
 

I wrote this script originally in 2014, updated it in 2015, forgot about it, and needed it again recently … so it's now updated for 2020.  A few things about this script … I don't recommend you just "block Tor exit nodes" unless you have a good reason.  Why have I used this script in the past?  During DDoS attacks that seemed to be coming from Tor addresses, to block attackers from hiding behind Tor, etc.  When I have blocked Tor, it was only for short periods of time, maybe a few hours or a day or two.  If you are under attack via Tor, the attacker can just pivot somewhere else or use some other proxied method, so keep in mind this isn't a cure-all either.
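The updated script itself is below the fold.  Purely as a hedged illustration of the general technique (not the script from this post), blocking exit nodes usually boils down to pulling a published exit-node list and feeding it into ipset/iptables; the list URL here is an assumption, so check what the Tor Project currently publishes.

#!/bin/bash
# Hedged sketch of the general approach, not the script from this post.
EXIT_LIST_URL="https://check.torproject.org/torbulkexitlist"   # assumed source of exit IPs

ipset create tor-exits hash:ip -exist
ipset flush tor-exits
curl -s "$EXIT_LIST_URL" | grep -E '^[0-9.]+$' | while read -r ip; do
    ipset add tor-exits "$ip" -exist
done

# Insert the DROP rule once; -C checks whether it already exists
iptables -C INPUT -m set --match-set tor-exits src -j DROP 2>/dev/null || \
    iptables -I INPUT -m set --match-set tor-exits src -j DROP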

Continue reading »

Jul 09 2018
 

If you have worked with mysql/mariadb/galera … sooner or later you are going to have to do a restore.  Or, if you are setting up a new master-slave pair, the size of the database can greatly affect how long it takes.  mysqldump was at one time all that was available, and for it to be accurate you need to lock the tables (which can affect production environments), do the dump in another shell, record the master log file and position, transfer the files to another server, import the database, run CHANGE MASTER TO … very, very time consuming.  So here is a way I have found that doesn't lock the tables, doesn't need the master log file or position recorded by hand, and does the dump and import in parallel, greatly speeding things up.
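The full procedure is below the fold.  Purely as a hedged sketch of the general idea (not necessarily the exact method in this post), streaming a dump from the master straight into the new slave over ssh gives you the parallel dump/import and lets the dump record the binlog coordinates for you; the hostname is a placeholder and credentials are assumed to live in ~/.my.cnf on both hosts.

# Hedged sketch, not necessarily the method from this post.
# --single-transaction takes a consistent InnoDB snapshot without locking tables;
# --master-data=1 embeds the CHANGE MASTER TO statement (log file + position) in the stream.
mysqldump --single-transaction --master-data=1 --routines --all-databases \
  | ssh new-slave mysql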

Continue reading »

May 05 2018
 

We have all lost a hard drive at one time or another on a laptop or desktop computer, and it always seems to happen right after several weeks of not performing backups.  Last year, I lost about 15 years of research on an external drive that failed.  I had a system that had worked for as long as I can remember: I simply swapped the external drive for a new one every two years after copying the data over.  Where it failed me was that I became over-confident in this system and wiped the older drives to make room for something else; meanwhile, the current drive decided to barf after only about six months of use … literally within a couple of weeks of me wiping the previous drives clean.  I was pretty pissed, to say the least.  So, lesson learned, I decided to implement a better backup plan, one that would work and be simple.  Instead of a file server and transferring data over a wire, I wanted an external drive I could plug in and leave plugged in while working at home or in some motel.  I wanted full backups, and I wanted them to be incremental to save space.  This is how I accomplished those tasks …

Continue reading »

May 01 2017
 

This is another way to quickly analyze nginx logs.  It will spit out the top 25 IPs, domains, requests, and some other data.  You may need to change the array values to match the format of your nginx logs.

It uses perl, so it is very fast and takes just seconds to analyze hundreds of thousands of lines in a log file.  It can also be used for apache or any other whitespace-delimited, columnar log.
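The actual script is below the fold.  As a hedged, stripped-down illustration of the same idea (not the script itself), counting the top 25 client IPs from a whitespace-delimited access log can be done like this; the field index and log path are assumptions about your setup.

# $F[0] assumes the client IP is the first whitespace-separated column -- adjust for your format
perl -lane 'print $F[0]' /var/log/nginx/access.log | sort | uniq -c | sort -rn | head -25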

Continue reading »

May 01 2017
 

I've written about this before in Using Sed to search between dates and offered an ad-hoc solution, but the other day I came up with a much better solution using a little-known option of the 'date' command.  With this new method, you just pass the number of minutes prior to the current time.  For example, if you want the last hour, you would simply type './sed_time.sh 60' and it will spit out a correctly formatted sed command like this:

$ ./sed_time.sh 60
sed -n '/01\/May\/2017\:07\:16\:15/,/01\/May\/2017\:08\:16\:15/ p'
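The real script is behind the link; as a hedged guess at what something like sed_time.sh might look like (the log date format and GNU date's -d "N minutes ago" syntax are the assumptions here), the following produces output in the same shape as above.

#!/bin/bash
# Hedged sketch, not the original sed_time.sh.  $1 is how many minutes back to start from.
MINUTES="${1:-60}"
FMT='+%d\/%b\/%Y\:%H\:%M\:%S'                    # dd/Mon/yyyy:HH:MM:SS, pre-escaped for sed
START=$(date -d "$MINUTES minutes ago" "$FMT")   # requires GNU date
END=$(date "$FMT")
echo "sed -n '/$START/,/$END/ p'"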

Continue reading »