Oct 22 2020
 

So I recently bought a Western Digital PR4100 NAS drive to back up my Linux desktop and laptop after losing two external USB drives in about two years.  I had always used rsnapshot to back them up.  While poking around the WD NAS via SSH, I realized it has a relatively recent rsnapshot binary installed.  I couldn't find any of its config files, though, so I came up with this solution.

The first thing you need to do is set up the sshd user on the WD NAS so you have shell access.

The second thing you need to do is configure rsync over SSH from your computer to copy the rsnapshot config files into place.  This needs to be done daily because the WD NAS reboots at 3am EST every day.  That reboot only lasts a few minutes, so what I did was create a cron job that passes the sshd user's password via sshpass to create a non-interactive shell and rsync the files.  You will also need to copy the entire /home/root/.ssh folder and all its files to your local server and push them back with rsync as well, because the My Cloud device also deletes those files on reboot.  It looks something like this:

Create a file sync-mycloud-confs.sh containing:

#!/bin/bash
# re-push the rsnapshot configs and the root SSH files that the NAS wipes at its 3am reboot
rsync -a --rsh='sshpass -p PASSWORD ssh -l sshd' /path/to/rsnapshot/*.conf MyCloud_hostname_or_IP_Address:/home/root/
rsync -a --rsh='sshpass -p PASSWORD ssh -l sshd' /path/to/rsnapshot/.ssh/ MyCloud_hostname_or_IP_Address:/home/root/.ssh/

In the above, replace PASSWORD with your sshd user's password, /path/to/rsnapshot/*.conf with the actual path to your config files, and MyCloud_hostname_or_IP_Address with however you reach your My Cloud device over SSH (I use the IP address, btw).  I may try to find a better place to store these files so they don't get deleted on device reboots, but for tonight it works there, so that's what I am going with before I forget.
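
To make it happen daily, a crontab entry on the local machine along these lines should do it; the 3:30am time, script location, and log path below are just assumptions, picked to land shortly after the NAS's 3am reboot:

# adjust path and time to taste, just make sure it runs after the ~3am reboot
30 3 * * * /usr/local/bin/sync-mycloud-confs.sh >> $HOME/sync-mycloud-confs.log 2>&1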

Continue reading »

Jun 22 2020
 

I had a server with plenty of disk space, gigs of free space, but it was out of inodes.  This is a seldom-used server and it didn't make any sense to me.  It was used for pentesting and had a minimal install of some vulnerable apps running under nginx.  Anywho, this is the command I used to find the two folders using a combined 100% of the inodes:

find / -xdev -printf '%h\n' | sort | uniq -c | sort -k 1 -n
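
If you are hitting the same thing, a quick way to confirm it really is inode exhaustion rather than disk space (the mount point here is just an example):

df -h /    # plenty of free blocks
df -i /    # IUse% at or near 100% confirms you are out of inodes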

Continue reading »

Jun 22 2020
 

This was driving me crazy and then I figured it out … let's assume you need to edit an existing VPN connection, or maybe you imported a new .ovpn file and need to edit it, but you can't, because it's 2020 and everything else is going wrong, so what the hell.  This is how to fix it …

Continue reading »

Feb 01 2020
 

I wrote this script originally in 2014, updated it in 2015, forgot about it, and recently needed it again … so it's now updated for 2020.  A few things about this script: I don't recommend you just "block Tor exit nodes" unless you have a good reason.  Why have I used this script in the past?  During DDoS attacks that seemed to be coming from Tor addresses, to block hackers from using Tor, etc.  When I have blocked Tor, it was only for short periods of time, maybe a few hours or a day or two.  If you are under attack via Tor, the attacker can just pivot somewhere else or use some other proxied method, so keep in mind this isn't a solve-all either.
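
The full script is below the fold, but if you want a rough idea of the general shape of this kind of blocker before reading on, it looks something like the following.  This is only a minimal sketch, not the script itself; the ipset name and the use of the Tor Project's bulk exit list are my assumptions here.

#!/bin/bash
# sketch: pull the current Tor exit node list into an ipset and drop it with one iptables rule
ipset create tor-exits hash:ip -exist
curl -s https://check.torproject.org/torbulkexitlist | while read -r ip; do
    ipset add tor-exits "$ip" -exist
done
# insert the DROP rule only if it is not already present
iptables -C INPUT -m set --match-set tor-exits src -j DROP 2>/dev/null \
    || iptables -I INPUT -m set --match-set tor-exits src -j DROP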

Continue reading »

Feb 01 2020
 

How to trap a troll 101 (using human psychology)

1) get into a huge online argument with troll
2) create a free tier disposable web server online
3) set up logging to capture all the bits, but mostly x-forwarded-for (a quick sketch of this follows below)
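
For step 3, one way to do it under nginx, with the log format name and paths here being placeholders rather than a prescription, is to define a log format that records X-Forwarded-For and point the trap site's access log at it:

# drop a custom log format into nginx's http context
cat > /etc/nginx/conf.d/xff-log.conf <<'EOF'
log_format xff '$remote_addr - [$time_local] "$request" $status '
               '"$http_user_agent" xff="$http_x_forwarded_for"';
EOF
# then in the trap server block:  access_log /var/log/nginx/trap.log xff;
nginx -t && systemctl reload nginx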

Continue reading »

Jul 09 2018
 

If you have worked with MySQL/MariaDB/Galera … sooner or later you are going to have to do a restore.  Or if you are setting up a new master-slave pair, the size of the database can greatly affect how long it takes.  mysqldump was at one time all that was available, and for it to be accurate you need to lock tables, which can affect production environments, do the dump in another shell, record the master log and position, transfer the files to another server, import the database, change master to … very, very, very time consuming.  So here is a way I have found that doesn't lock the tables, doesn't need to record the master log file or position, and does the dump and import in parallel, greatly speeding things up.
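
The full walkthrough is below the fold, but for a taste of the idea, one common way to get a non-locking, position-free, streamed dump-and-import with plain mysqldump looks roughly like this; the hostnames and database name are placeholders, and this is a sketch of the general technique rather than the exact commands from the post:

# on the master: consistent InnoDB snapshot (--single-transaction avoids table locks),
# binlog coordinates embedded as a CHANGE MASTER TO statement (--master-data=1),
# streamed straight into the new slave so the dump and the import run in parallel
mysqldump --single-transaction --master-data=1 --routines --triggers dbname \
    | ssh user@newslave "mysql dbname"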

Continue reading »

May 05 2018
 

We have all lost a hard drive at one time or another on a laptop or desktop computer, and it always seems to happen right after several weeks of not performing backups.  Last year, I lost about 15 years of research on an external drive that failed.  I had a system that had worked for as long as I can remember: I simply swapped in a new external drive every two years after copying the data over.  Where it failed me was that I became over-confident in it and wiped the older drives to make room for something else, and then the current drive decided to barf after only about 6 months of use … literally within a couple of weeks of me wiping the previous drives clean.  I was pretty pissed, to say the least.  So, lesson learned, I decided to implement a better backup plan.  I wanted something that would work and be simple.  Instead of a file server and transferring data over a wire, I wanted an external drive I could plug in and leave plugged in while working, at home, or in some motel.  I wanted full backups and I wanted them to be incremental to save space.  This is how I accomplished those tasks …
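
The details are below the fold, but since the My Cloud post above mentions I have always used rsnapshot for this, the rough shape of an rsnapshot-to-external-drive setup, purely as an illustrative sketch with made-up paths, looks like this:

# excerpt of a hypothetical /etc/rsnapshot.conf (fields must be separated by TABs, not spaces)
snapshot_root	/mnt/backup-drive/rsnapshot/
retain	daily	7
retain	weekly	4
backup	/home/	localhost/
# and cron entries to actually take the snapshots:
0 2 * * *	/usr/bin/rsnapshot daily
0 4 * * 1	/usr/bin/rsnapshot weekly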

Continue reading »

Dec 04 2013
 

I often review various vulnerability scanners.  When I review them, I look at several different things:

  • were they able to find a vulnerability I previously missed?
  • are they accurate in their findings?
  • how quickly do they complete an audit compared to “insert some other vulnerability scanner here”?
  • sometimes I will also grab the tcpdumps of the audits for even further analysis (an example capture command follows this list)
  • how accessible and easy are they to use by “skiddies”?
  • based on the tcpdumps + noise generated in the server logs, are the audit signatures of wapiti easy to detect?
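
For the tcpdump point above, the capture itself is nothing fancy; something along these lines works, with the interface and addresses obviously being placeholders:

# full-size packets, written to a pcap for offline analysis of the scanner's traffic
tcpdump -i eth0 -s 0 -w scanner-audit.pcap host SCANNER_IP and host TARGET_IP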

Continue reading »

May 18 2013
 

A long time ago, I created a database to hold passwords and their respective hashes for some 16 different hash types.  It has approximately 310,261,848 passwords for each type and is growing nearly every day as more password lists become available.  I found a pretty quick way to generate the hashes for these wordlists and wanted to share how it is done.  These hashes only work with unsalted/unpeppered passwords.

First, let's look at my table schema, which is very simple and very effective.  It uses a unique index on the hash + password columns so there cannot be any two hash+password pairs that are the same.  The types table is a simple lookup table that maps a data.type value such as 1 to a name like DES, and its primary key is on the name column.  I don't claim to be a DB administrator, so if you spot any errors, let me know.
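
The actual schema is below the fold, but as a rough sketch of what that description translates to (the database name, column types, and sizes here are my guesses, not the real schema):

mysql -u root -p hashdb <<'EOF'
-- lookup table mapping a numeric type to a hash name like DES; primary key on name
CREATE TABLE types (
    type TINYINT UNSIGNED NOT NULL,
    name VARCHAR(32) NOT NULL,
    PRIMARY KEY (name)
);
-- one row per (type, password, hash); the unique key keeps hash+password pairs distinct
CREATE TABLE data (
    type TINYINT UNSIGNED NOT NULL,
    password VARCHAR(64) NOT NULL,
    hash CHAR(128) NOT NULL,
    UNIQUE KEY hash_password (hash, password)
);
EOF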

Continue reading »