ConfigServer Security & Firewall (CSF) is an advanced open-source firewall for Linux. If you're like me, you don't care much for the native firewalld that ships with RHEL 7 releases; for years I used APF, which is basically just a frontend for iptables.

Here's how to install it:

1. Disable firewalld

systemctl stop firewalld
systemctl disable firewalld
systemctl mask firewalld

2. Install iptables

yum -y install iptables-services
touch /etc/sysconfig/iptables
touch /etc/sysconfig/ip6tables

3. Start and enable the iptables service

systemctl start iptables
systemctl start ip6tables

systemctl enable iptables
systemctl enable ip6tables
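
Before moving on, you can confirm both services are running and set to start at boot:

systemctl status iptables ip6tables
systemctl is-enabled iptables ip6tables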

4. Install CSF and its dependencies

yum -y install perl perl-libwww-perl net-tools wget perl-GDGraph perl-LWP-Protocol-https
cd /opt
wget https://download.configserver.com/csf.tgz
tar xzf csf.tgz
cd /opt/csf
sh install.sh
cd /etc/csf
rm -rf /opt/csf

5. Test your kernel modules to ensure everything is OK

perl /usr/local/csf/bin/csftest.pl

That's it, you're done!

You now have a working CSF installation on your server. Next, you'll want to learn the basics of configuring it; a good place to start is the config file, located at /etc/csf/csf.conf
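
For example, two settings you'll almost certainly touch first are TESTING and TCP_IN (the values below are illustrative; read the comments in csf.conf before changing anything):

# /etc/csf/csf.conf
# Leave TESTING = "1" until you've confirmed you can still reach the box;
# while testing mode is on, a cron job flushes the rules periodically so
# you can't lock yourself out. Set to "0" when you're confident.
TESTING = "0"

# Comma-separated list of inbound TCP ports to allow
TCP_IN = "22,80,443"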

For more information, please read the full README file available from the vendor's website.

One of my favorite tools, and one I find myself using quite often, is called "ScreenCloud". It allows you to quickly select any area of your workspace, capture a screenshot of it, and then upload or export it to ScreenCloud's server, your Dropbox account, or an SFTP server.

If you've recently performed upgrades, either to Ubuntu 16.x or newer, Fedora 21 or newer, or the latest version of ScreenCloud, you may be experiencing the same pain that I just endured when launching the application.

In my case, this particular error relates to the SFTP plugin inside of ScreenCloud which automatically uses secure FTP to upload your screenshots to a remote server.

Traceback (most recent call last):
  File "<string>", line 1, in <module>
ImportError: No module named 'Screencloud'

First, kill any running instances, remove any custom repositories you may have installed it from, and then completely remove the screencloud application.

In Ubuntu/Mint:

sudo killall -9 screencloud
sudo rm -vf /etc/apt/sources.list.d/screencloud*
sudo apt-get remove screencloud -y

In Fedora:

sudo killall -9 screencloud
sudo rm -vf /etc/yum.repos.d/screencloud*
sudo yum remove screencloud -y

Now, remove its associated local plugins (sftp, dropbox, etc). These are easily attainable again via the application or GitHub.

rm -rf ~/.local/share/data/screencloud/ScreenCloud/plugins/

Next, we need to place the module in a directory that is actually included in the search paths for the application. This varies depending on systems and versions.

You can find a list of the paths within the ScreenCloud application by opening "Preferences", pressing CTRL+D (debug mode), and typing the following into the debug CLI:

py> import sys
py> print sys.path

In my case, ~/.local/share/data/screencloud/ScreenCloud/screencloud/modules was part of the path and the directory did not exist, so I simply created it and rsync'd the file over:

mkdir -p ~/.local/share/data/screencloud/ScreenCloud/screencloud/modules

rsync -av /usr/share/screencloud/modules/ ~/.local/share/data/screencloud/ScreenCloud/screencloud/modules/

Now, try to run the application again.

Did it work? Great!

No? If you received an error such as this:

from Crypto.PublicKey import DSA
ImportError: No module named Crypto.PublicKey

Make sure you have the latest Crypto and PyCrypto modules installed:

sudo pip install crypto pycrypto --upgrade
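
To confirm the modules are importable outside of ScreenCloud, here's a quick sanity check from the shell (it should print OK rather than a traceback):

python -c "from Crypto.PublicKey import DSA; print('OK')"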

Copy over the Crypto module from your system's distribution package into a path that screencloud is configured to look in.

As before, you can list the search paths from the debug CLI (open "Preferences", press CTRL+D, and run the same import sys / print sys.path commands).

And again, I simply rsync'd the entire Crypto/ module directory over to one of the included paths. Note: Debian-based systems like Ubuntu use different system paths for Python packages than RedHat-based systems. Note the differences below and know your system's layout before running commands.

In Ubuntu/Mint:

sudo rsync -avz /usr/lib/python2.7/dist-packages/Crypto ~/.local/share/screencloud/modules/

In Fedora:

sudo rsync -avz /usr/lib64/python2.7/site-packages/Crypto ~/.local/share/data/screencloud/ScreenCloud/screencloud/modules/

I hope this helps save someone else some time and frustration!

In a déjà vu scenario of a previous blog post I authored in 2012 called Source control != File System, I ranted about why binaries do not have any place in a source controlled repository. Fast forward nearly 4 years later, and I've once again encountered a repository that was filled with network device firmware image (.bin) files.

I knew something was terribly wrong when I went to clone a fresh copy of the repo to look at some basic device startup configs, and it took me nearly 10 minutes:

Cloning into 'network'...
remote: Counting objects: 1014, done.
remote: Compressing objects: 100% (925/925), done.
remote: Total 1014 (delta 499), reused 155 (delta 61)
Receiving objects: 100% (1014/1014), 1.63 GiB | 2.67 MiB/s, done.
Resolving deltas: 100% (499/499), done.
Checking connectivity... done.

real 9m9.360s
user 1m13.431s
sys 0m22.595s

After grabbing another coffee and enjoying a smoke, the cloning operation finally completed. A bit of poking around quickly revealed the 2.9GB "Firmware" directory inside an otherwise organized and newly restructured repository. The logical fix, to reclaim what would likely be hours of my life over a few quarters of working with this repo, was to just git rm -rf and move on. After pushing my changes, I quickly realized that the wasted space was still very much alive in git's history data. This wasn't much of a surprise, considering this functionality is required to fulfill one of my favorite and arguably most valuable purposes of source control: revision history.

So, how did I restore sanity to this repository?

No, you do not have to delete everything and start over - though this would be effective, it is a waste of time and energy.

However, I HIGHLY recommend that you backup or clone a copy of your bloated repo, just in case you do something dumb.

Next, you will need to find the files that you want removed from the repo. I stumbled upon a one liner that leveraged the git rev-list command, and piped it to some ugly perl to chomp and print the largest files (source). While I am not a big fan of perl, it is certainly not the ugliest perl I've ever encountered, and it's effective.

But, in the interest of my quest to avoid perl and to keep making fun of one of my perl-loving systems architects, I decided to find my own way. As with most perl, you can accomplish nearly identical results with a fraction of the code by using an alternative. In this case, I achieved the same thing with just a bit of sed|sort|head action:

git rev-list master | while read rev; do git ls-tree -lr $rev | cut -c54- | sed -r 's/^ +//g;'; done | sort -u | sort -rnk1 | head -n 20

The output lists the byte size and path/file of the largest nonsense to ever get pushed to your repo in any previous revision.

Using the results, you need to determine what files you want removed. In the command below, simply replace DUMBFILE with the files or directories that you want removed.

git filter-branch --tag-name-filter cat --index-filter 'git rm -r --cached --ignore-unmatch DUMBFILE' --prune-empty -f -- --all
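
For instance, to purge the 2.9GB "Firmware" directory mentioned earlier from every revision (the path is from this repo; substitute your own offenders):

git filter-branch --tag-name-filter cat --index-filter 'git rm -r --cached --ignore-unmatch Firmware' --prune-empty -f -- --all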

Next up, I needed to do a bit of garbage collection on the references and reclaim the lost space:

rm -rf .git/refs/original/
git reflog expire --expire=now --all
git gc --aggressive --prune=now

Lastly, I needed to push the history changes made back to the repository with the use of force.

Disclaimer: there are VERY few situations where the use of --force is recommended, and in my first hand experience I have seen it destroy repos if used incorrectly or simultaneously while others are pushing. USE CAUTION!

git push origin --force --all
git push origin --force --tags

And all was right with the world again...

After the successful push, demolish your bloated local repository and clone her slim and healthy self back home again...

Cloning into 'network'...
remote: Counting objects: 773, done.
remote: Compressing objects: 100% (317/317), done.
remote: Total 773 (delta 439), reused 773 (delta 439)
Receiving objects: 100% (773/773), 196.15 KiB | 0 bytes/s, done.
Resolving deltas: 100% (439/439), done.
Checking connectivity... done.

real 0m0.910s
user 0m0.088s
sys 0m0.032s

I recently recompiled PHP on a server I own to upgrade it to the latest version. This particular server runs cPanel, which usually makes this otherwise tedious process much simpler. However, when recompiling mod_python it errors out with this message:

touch connobject.slo
connobject.c: In function '_conn_read':
connobject.c:142: error: request for member 'next' in something not a structure or union
apxs:Error: Command failed with rc=65536
make[1]: *** [] Error 1

The entire process then fails and reverts itself to the last known working LAMP stack.

The problem is actually a known bug, and it's fixable with a bit of hackery. If you are not using a cPanel server, it's very easy: just change one line at your leisure prior to compiling.

If you are using a cPanel server, you need to time the patch correctly. Let me elaborate.

First, the bug itself is in connobject.c on line 142.

To fix, simply replace this on line 142:

!(b == APR_BRIGADE_SENTINEL(b) ||

with:

!(b == APR_BRIGADE_SENTINEL(bb) ||

(The sentinel macro needs the brigade, bb, not the bucket, b.)
Simple enough.

Now, back to cPanel. Since EasyApache downloads a fresh copy of the source for each build session, you need to apply the fix strategically AFTER it downloads the fresh copy, but BEFORE it attempts to compile it. There is a window of approximately 1-2 minutes to do this, depending on the speed of your server.

Here's a trick that I used to make sure I'm catching it.

First, before starting the build, remove the file entirely:

# rm -f /home/cpeasyapache/src/mod_python-3.3.1/src/connobject.c

Second, start the build process.

Now, do a simple while loop or just manually check for the file periodically. Once it appears, you know you're working with the fresh copy from the build and can perform your edit on line 142.

# while true ; do ls /home/cpeasyapache/src/mod_python-3.3.1/src/connobject.c ; sleep 5; done
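
Alternatively, if you'd rather not race the build by hand, here's a small sketch (run as root in a second terminal, and assuming the one-line sentinel fix described above) that waits for the fresh copy and patches it the moment it appears:

F=/home/cpeasyapache/src/mod_python-3.3.1/src/connobject.c
while [ ! -f "$F" ]; do sleep 1; done
# swap the bucket for the brigade in the sentinel macro on line 142
sed -i 's/APR_BRIGADE_SENTINEL(b)/APR_BRIGADE_SENTINEL(bb)/' "$F"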

Once you've edited the buggy line, save the file, sit back and wait for the build to complete (successfully).

Hope this helped!

Bug reported at:

So, those who know me know I have a few laptops. I'm no stranger to technology.

Imagine my chagrin when I tried piping a Linux command to grep and my output came out like so:

ps uawx > grep X

I know that the ps command thinks it is superior to grep, but seriously. WTF

Huzzah! The culprit is a mis-mapped key: the key that should produce backslash/pipe is sending the wrong character. You can use xmodmap to tell Linux what that key should actually do:

xmodmap -e "keycode 94 = backslash bar"
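
If your wayward key isn't keycode 94 on your hardware, you can confirm which keycode it actually sends with xev: run the command below, press the key, and read the keycode out of the KeyPress event:

xev | grep -A2 KeyPress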

Now suppose you don't want to actually have to hack that every time you use your system? Well, here's a quick fix until the real fix hits the repos...

echo 'xmodmap -e "keycode 94 = backslash bar"' > ~/fix-pipe-key.sh
chmod +x ~/fix-pipe-key.sh

(~/fix-pipe-key.sh is just an example path; name and place the script wherever you like.)

Note: ALL punctuation is necessary above (quotes and double quotes).

If using one of the common distros, open System -> Preferences -> Startup Applications and in the Startup Programs Tab, click Add.

Then put the path to your newly made executable script, give it a description and reboot. Your keyboard should be sane again.


I love my Dell XPS 13 because it's small, lightweight and fast. I also surprisingly enjoy using the touchscreen (much more than I thought I would before purchasing). However, because of its 13" screen and very high resolution, the scroll bars in Firefox and other applications are extremely tiny. Combine that with my fat fingers, and finding the scroll bar becomes even more difficult.

Feel my pain?


Edit the file ~/.gtkrc-2.0 (create it if it doesn't already exist)

Add the following lines and save:

style "scroll" {
    GtkScrollbar::slider-width = 40
}

class "*" style "scroll"

Now just restart Firefox, and enjoy.

There's a ton of outdated information floating around the web on how to simply and effectively install Node.js on CentOS, specifically on one of my legacy boxes that runs 5.11. I spent about 15-20 minutes toying around with various hacky ways to do this. To save you the time, here's the easiest way I found:

Perform (as root):

curl -sL https://rpm.nodesource.com/setup | bash -
yum install -y nodejs

If you're a security fanatic like myself, you'll likely want to first download the 'setup' script (the NodeSource installer above) and review it before piping it to bash. If you don't care, I can assure you I've reviewed the code and that it comes from a trusted source.

It will output each step as it moves along, prompting you to manually install any 3rd party dependencies.

Once done, verify your install with:

node -v

On one of my local Ubuntu workstations at home, I sometimes have the need to send mail out using mailutils/mailx inside of scripts or on the command line. I also don't necessarily want/need to set up an entire mail server on my workstation. In addition, Verizon FiOS doesn't take too kindly to this for purposes of preventing malicious activity, SPAM, etc. They actually block outbound connections on the default SMTP port (25).

If you're using Ubuntu 14.04 LTS, it comes with Postfix by default. If you're using a different version or flavor, chances are you've got Sendmail or Exim installed. These instructions assume you've uninstalled whatever MTA came with your system, and that you want to use Postfix (far superior to its counterparts in my eyes).

First, it's easiest to install postfix and Cyrus SASL packages from your operating system's repository. If you're compiling from source, be sure to make Postfix with the -DUSE_SASL_AUTH flag for SASL support and -DUSE_TLS for TLS support.

# apt-get install postfix libsasl2-2 -y

In Ubuntu/Debian/Mint, the SASL package is called libsasl2-2
In CentOS/RHEL/Fedora, the SASL packages are called cyrus-sasl and cyrus-sasl-plain

Next, edit the main Postfix configuration file at /etc/postfix/main.cf to include the following:

# Set this to your server's fully qualified domain name.
# If you don't have a internet domain name,
# use the default or your email addy's domain - it'll keep
# postfix from generating warnings all the time in the logs
mydomain = local.domain
myhostname = host.local.domain

# Set this to your email provider's smtp server.
# A lot of ISP's (ie. Verizon) block the default port 25
# to prevent spamming. So in this case we'll use port 587.
relayhost = [smtp.yourprovider.com]:587

smtpd_sasl_auth_enable = yes
smtpd_sasl_path = smtpd
smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
smtp_sasl_type = cyrus
smtp_sasl_auth_enable = yes

# optional: necessary if email provider uses load balancing and
# forwards emails to another smtp server
# for delivery (ie: smtp.provider.com --> smtp2.provider.com)
smtp_cname_overrides_servername = no

# optional: necessary if email provider
# requires passwords sent in clear text
smtp_sasl_security_options = noanonymous

Note: Your remote SMTP host must be configured to listen on the alternate port you specify in relayhost=

Next, you need to configure authentication with SASL, so edit /etc/postfix/sasl_passwd and provide the relay and credentials in this format: [smtp.yourprovider.com]:587 username:password

Note: The host and port must match the relayhost= setting in main.cf identically.

Generate a postfix .db file from the previous file

# postmap hash:/etc/postfix/sasl_passwd

For security, you'll want to make sure the sasl_passwd and sasl_passwd.db files are not readable:

# chmod 600 /etc/postfix/sasl_passwd /etc/postfix/sasl_passwd.db

That's it, restart the postfix service and test sending email.

# service postfix restart
# echo testing | mail -s "test" you@example.com

If you did everything correctly, you'll see your local host connect to the remote host and send the message. If something went wrong, you'll want to start digging through logs to figure out why.
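
On Ubuntu/Debian, the quickest way to watch the handoff is to tail the mail log while sending a test message (RedHat-family systems log to /var/log/maillog instead):

# tail -f /var/log/mail.log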


ownCloud is enterprise file sync and share that is self-hosted in your data center, on your servers, using your storage. ownCloud provides Universal File Access through a single front-end to all of your disparate systems. Users can access company files on any device, anytime, from anywhere while IT can manage, control and audit file sharing activity to ensure security and compliance measures are met.

WebDAV (Web Distributed Authoring and Versioning) allows you to "mount" your ownCloud content as a local mount point on your local Linux environment.

I prefer this over using the ownCloud client program, as it gives me much more flexibility with my data (e.g. I can rsync directly from CLI from my local storage to the ownCloud and vice versa - effortlessly).

Here's how I did it:


1. Install davfs2
Using your Linux distribution's package manager, install the 'davfs2' package, which is available in nearly all repositories regardless of Linux flavor.


In Ubuntu/Debian/Mint:

# apt-get install davfs2

In CentOS/RHEL/Fedora:

# yum install davfs2

2. Create and set permissions for a local mount point for your ownCloud data.
(I prefer mine in /owncloud -- replace all instances of 'youruser' with your non-root username)

# mkdir /owncloud
# chown youruser.youruser /owncloud
# mkdir ~youruser/.davfs2
# touch ~youruser/.davfs2/secrets
# chmod 600 ~youruser/.davfs2/secrets
# chown -R youruser.youruser ~youruser/.davfs2/

3. Edit ~youruser/.davfs2/secrets with your favorite editor to store your ownCloud credentials, one mount per line, in this format: /owncloud youruser yourpassword

4. Edit /etc/fstab and put the following entry in to tell the filesystem how to mount it (replace the URL with your own ownCloud server's WebDAV endpoint):

https://yourserver/owncloud/remote.php/webdav/ /owncloud davfs user,rw,auto 0 0

Note: if you prefer to not have the mountpoint auto mounted each time you log-in, change auto to noauto

5. Add youruser to the 'davfs2' group so it can use it.

# usermod -aG davfs2 youruser

Note: you will need to log out and back in to get your new group to take effect

6. Now all you need to do is mount your ownCloud webdav instance

# mount /owncloud
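
If it mounted cleanly, the mount point shows up like any other filesystem:

# df -h /owncloud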


On some operating systems, the mount.davfs binary does not have setuid privileges to run as a non-root user. If you see an error such as:

/sbin/mount.davfs: program is not setuid root

You are running an OS that needs the privileges granted. To do so, run this as root:

# chmod +s /usr/sbin/mount.davfs

Want faster rsyncs??

If you're transferring a large amount of files with rsync, you'll want to pass it some extra arguments.

Check the rsync man page for:

  • --size-only because most WebDAV implementations do not accept setting modification times
  • --no-whole-file to tell rsync it's handling a remote filesystem
  • --inplace to have rsync replace files directly, instead of uploading and then replacing (see the example below)
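
Putting those together, a typical sync from local storage into the WebDAV mount might look like this (paths are just examples):

rsync -av --size-only --no-whole-file --inplace ~/Documents/ /owncloud/Documents/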


On one of my personal laptops (Dell Inspiron 17R), my attempts at using the function key combinations to change the brightness did not yield any results on Fedora 20, Linux Mint 17, or Ubuntu 14.10. The LCD brightness was stuck at "can barely see, but must conserve battery because I'm stuck on an island mode" (not a real setting, but might as well be).

The system has an integrated Intel graphics card. If you're not sure what your system uses for the backlight/brightness, run the command below in a terminal:

ls /sys/class/backlight/

If you see dell_backlight and intel_backlight, this article may help you. If you do not, continue your search.

Fire up a terminal and edit the following configuration file with your preferred editor (disclaimer: yes, I'm an old school Linux user dating back to the 'pico' days, so when I'm not coding I use nano - judge if you will, but I'm about to brighten your life):

sudo nano /usr/share/X11/xorg.conf.d/20-intel.conf

Add the following lines to this file:

Section "Device"
    Identifier "card0"
    Driver "intel"
    Option "Backlight" "intel_backlight"
    BusID "PCI:0:2:0"
EndSection

Save it (CTRL+X in nano, some crazy 😡 in vim, etc)

Log out and back in. The brightness control should now work through the function keys.
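
You can also verify that the keys are actually changing the value the kernel sees; press a brightness key, re-run this, and the number should move:

cat /sys/class/backlight/intel_backlight/brightness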

Good day, world!