

Wednesday, June 11, 2014

Replace a RAID 5 disk that has failed (Linux / Ubuntu)

If you take a look at my last couple of blog entries, you'd know that I had a hard drive that was approaching imminent failure.  I got the new drive in the mail from Amazon; it was a different model, but the same size (2.0 TB): a Western Digital Green (WD20EZRX).  Once I decided that a SATA 3 drive was going to work on my SATA 2 bus (SATA is backwards compatible), I went for the purchase.

On to the replacement:
Using mdadm, mark the disk as failed so the array stops using it:
sudo mdadm --fail /dev/md0 /dev/sda1

If you are like me, and you don't know which one is which, use the Disk Manager tool and write down the serial number of the drive.  This will correlate to the number on the printed label of the physical drive.  Note: it is handy to tape a piece of paper to the inside of your computer listing all of your drive serial numbers and the associated partition for future reference.  I actually had forgotten that I did this the last time a drive failed, wrote down the serial number of my drive, and then realized the paper was in the computer.

Power down the machine, remove the faulty drive, and replace with the new one.

Once the drive is replaced, power on your computer.  You should see a /dev/md0 fail event upon startup.  Mine said something to the effect of 3 out of 4 devices available, 1 removed, etc.

Next, partition the new drive with fdisk:
sudo fdisk /dev/sda

This will bring you into the fdisk program.  Type m for the help menu and available input options.  Perform these in order:
p - print the current configuration and verify there is no partition already.  This is a quick idiot check to make sure you are configuring the correct drive.
n - new partition
p - make it a primary partition
<enter> - accept the default start sector (should be 2048)
<enter> - accept the default end sector (should be the end of the hard drive)
t - change the type of the partition
fd - make it a Linux RAID autodetect
p - verify all of your settings are correct
w - write your changes to the disk and exit

This will write the new partition table, exit fdisk, and return you to the command line.  Run sudo partprobe to ensure your system recognizes the new partition.
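For reference, the interactive keystrokes above can be collected into a single input stream and piped to fdisk. This is a hedged sketch: /dev/sdX is a placeholder, some fdisk versions also prompt for a partition number after p, and feeding the wrong disk is destructive, so the actual fdisk call is left commented out while the input stream is just printed for inspection:

```shell
# The answers from the steps above, one per line:
# n (new), p (primary), default start, default end, t (type), fd (RAID), p (print), w (write)
fdisk_input=$(printf '%s\n' n p '' '' t fd p w)
printf '%s\n' "$fdisk_input"

# To apply for real (double-check the device first!):
# printf '%s\n' n p '' '' t fd p w | sudo fdisk /dev/sdX
```

Rehearse the printed stream against the prompts fdisk actually shows on your version before piping it in.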

Tell mdadm that the drive is now available:
sudo mdadm --add /dev/md0 /dev/sda1

Your data from the other 3 drives will now be rebuilt onto the new sda1 partition.  This will take some time, but can be monitored:
watch cat /proc/mdstat

It is important to leave your machine on and uninterrupted until the rebuilding process is complete.
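While the rebuild runs, /proc/mdstat reports a percentage and an estimated finish time. The snippet below is a small sketch that pulls those out with grep; the mdstat text here is a hypothetical sample of what a 4-disk RAID 5 rebuild looks like, not live data:

```shell
# Hypothetical /proc/mdstat excerpt during a RAID 5 rebuild (sample data):
mdstat='md0 : active raid5 sda1[4] sdd1[3] sdc1[2] sdb1[1]
      5860535808 blocks level 5, 64k chunk, algorithm 2 [4/3] [_UUU]
      [==>..................]  recovery = 12.6% (246153216/1953511936) finish=289.5min speed=98304K/sec'

# On a live system you would read the real file instead:
# mdstat=$(cat /proc/mdstat)

# Extract the completion percentage and the estimated time remaining
pct=$(printf '%s\n' "$mdstat" | grep -o 'recovery = [0-9.]*%' | grep -o '[0-9.]*%')
eta=$(printf '%s\n' "$mdstat" | grep -o 'finish=[0-9.]*min')
echo "rebuild at $pct, $eta"
```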

Isn't a RAID 5 a beauty?  I love having automatic hardware-failure protection... assuming no more than 1 drive fails at a time.  I hope you found this useful.  If you have any questions or comments, feel free to post in the comments below.

Next up will be to create a RAID 1 using my existing system drive and a spare unused drive I've had sitting around.... without losing any data.  Should be fun!

Tuesday, June 3, 2014

Checking and repairing a RAID in Linux

Recently I've been having a weird issue where I will sit down at my computer after it has not been used in a while and it shows a black screen with a blinking "_" in the top left corner.  The only thing I can do to recover from this is to issue the Alt+Prt Sc+REISUB to force an emergency file system sync and reboot (click the link for details on all the inputs).  Once the machine was back up, I quickly started researching what caused the issue by checking out dmesg and kern.log. I also ran some smartctl tests and noticed there were some bad blocks on my RAID 5 (4x2TB).  I started down the rabbit hole of repairing bad blocks, only to find out I could be causing more harm than good.  I vaguely remember attempting this before on a non-RAID, and ending up with more unusable blocks than when I started.  Before doing too much damage to my RAID, I decided to do some more research.  Turns out, with a Linux software RAID (mdadm), I can easily find and repair my issues using one simple command:

echo check | sudo tee /sys/block/md0/md/sync_action

(The echo ... | sudo tee form is needed here: with sudo echo 'check' > ..., the redirection is performed by your unprivileged shell, not by sudo, and fails with "permission denied".)

Of course, my RAID is on md0, so change this to wherever your array lives if different.  It is wise to do this while the volume is not mounted (sudo umount /dev/md0), otherwise you risk damage.  This command will start the array check but will not keep you up to date on its progress.
To check up on the progress, issue:

watch cat /proc/mdstat

This will take a long time, depending on the size of your drives; mine started out with ~290 minutes to finish.  To quit watching, Ctrl+C.

To pause the check:

sudo /usr/share/mdadm/checkarray -x /dev/md0

To start it back up:

sudo /usr/share/mdadm/checkarray -a /dev/md0

Once it has completed, check the mismatch count:

cat /sys/block/md0/md/mismatch_cnt

If output returns 0, then you're all set and your RAID array should be as repaired as it can be.  If it returns something other than 0, you can synchronize the blocks by issuing:

echo repair | sudo tee /sys/block/md0/md/sync_action
watch cat /proc/mdstat

And, once the repair is complete, check it again:

echo check | sudo tee /sys/block/md0/md/sync_action
watch cat /proc/mdstat
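The check-then-report steps above can be wrapped in a tiny function. A sketch: the sysfs base directory is a parameter (defaulting to /sys/block) purely so the logic can be rehearsed against a fake directory tree before touching a real array:

```shell
# Report whether an md array needs a repair pass, based on mismatch_cnt.
# $1 = array name (e.g. md0); $2 = optional sysfs base dir (default /sys/block).
report_mismatches() {
    base=${2:-/sys/block}
    cnt=$(cat "$base/$1/md/mismatch_cnt")
    if [ "$cnt" -eq 0 ]; then
        echo "$1: clean"
    else
        echo "$1: $cnt mismatches - run a repair"
    fi
}

# Usage on a real system, after a check has completed:
# report_mismatches md0
```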

For more info, check out the Thomas Krenn Wiki.

Thursday, April 24, 2014

Make magnet links work in Xubuntu

When trying to open magnet links in Xubuntu, sometimes you will get an error.  For example, when I searched in Catfish and clicked a folder, I got this:
"Unable to detect the URI-scheme of /home/user/folder/folder".

You might also get this in Chrome when trying to open a magnet to a torrent file.  For some reason, Firefox works fine with magnet links (probably uses gnome-open instead of the system's opener by default).

To fix the problem, edit /usr/bin/xdg-open: sudo gedit /usr/bin/xdg-open

In there, find the lines that look like this:

if [ x"$DE" = x"" ]; then

And change them to this:

if [ x"$DE" = x"" ]; then
    #xdg-open workaround for bug #1173727:
    DE=gnome
This will force Xubuntu to think you are using the Gnome Desktop Environment, and will in turn use gnome-open instead of exo-open.  When xdg-open detects the XFCE desktop environment, it calls exo-open "$1", which is not capable of handling magnets.  This workaround will get you going until the bug has been fixed.
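The idea behind the workaround can be sketched as follows. This is a simplified illustration of the detection logic, not the actual xdg-open script (the environment variable names are common ones for these desktops):

```shell
# Simplified sketch of desktop detection with the workaround applied:
# if nothing is recognized, pretend to be gnome so that gnome-open
# (which understands magnet: URIs) gets used instead of exo-open.
detect_de() {
    DE=""
    [ -n "$KDE_FULL_SESSION" ] && DE=kde
    [ -n "$GNOME_DESKTOP_SESSION_ID" ] && DE=gnome
    # The workaround from above: default to gnome rather than "unknown"
    if [ x"$DE" = x"" ]; then
        DE=gnome
    fi
    echo "$DE"
}
```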

Monday, April 21, 2014

Replace Ubuntu One with Insync

Ubuntu announced the pending shutdown of their Ubuntu One services recently.  I depend on Ubuntu One to sync my ~/Documents folder between my desktop and laptop, and have even bragged to Windows users about how easy it is to get files from one computer to the next using Ubuntu One.  It seems that Canonical does not make enough money through these services and would rather focus on other projects, so they are shutting it down completely on June 1st.  This led me to find an alternative solution.

Google Drive allows one to upload up to 15 GB of data to the cloud for free (100 GB is only $2/month).  There are many services out there that do the same sort of thing, such as Dropbox or Amazon Cloud Drive, and a quick search will reveal other cloud storage solutions.  The major downfall to using something other than Google Drive is that the common amount of free space given for storage is somewhere between 2 and 5 GB, with steep prices if you wish to expand.  I am an avid Android user, so I already make use of my pictures getting automatically pushed to Drive, as well as awesome Gmail integration.  So for me, the decision was simple: find a way to sync my two computers using Drive.

Introducing InSync.  At the most basic level, it allows a user to automatically sync data between a specified folder on their computer (/home/user/Insync, or whatever you desire to name it) and Drive.  This is useful, but other services offer the same thing (Grive, SyncDrive, etc).  But where InSync prevails is all the other stuff it is capable of:

  • Automatic conversion of Google Docs to Office (LibreOffice/OpenOffice compatible)
  • Built-in sharing without a browser
  • Recent changes feed
  • Window manager integration (ie Nautilus) - right-click a file to sync
  • Symlink, junction and alias support (key feature; more on this below)
  • Multiple Google account support
  • Watch any folder for changes
  • Support for almost every platform
  • And many others
As you can see by their features, InSync has done a great job at integrating Google Drive into the desktop environment.  Back to my original issue, however, is that I needed to sync two computers to be mirror images of each other.  This process is not glaringly simple in InSync, so this is how you do it:

  1. Ensure all your files are synced between the two machines using Ubuntu One or other methods
  2. Download InSync on one of the computers (we'll say desktop, for ease of explanation).
  3. Complete the installation, choosing the "Advanced Setup" when prompted
  4. Authenticate InSync with your Google account and choose where to store your files (/home/user/Insync).
  5. Once InSync has finished installing, Nautilus users (Ubuntu) may want to install package insync-nautilus in order to have right-click menu integration.
    1. sudo apt-get install insync-nautilus
    2. Restart Nautilus by clicking the prompt once that completes, or by logging out and back in.
    3. Another method is to press Alt+F2 and run nautilus -q.  This will quit the file manager.  Start it back up by opening a folder from Ubuntu's side bar, or by pressing the Super key, typing home, and opening the folder.
  6. Now the fun part: create a symlink inside your InSync folder to your Documents folder:
    1. Right-click Documents in your home folder > Add to Insync > Your Google Account
    2. OR: ln -s ~/Documents ~/Insync/Documents
  7. This will sync all of your Documents to Google Drive.  Give it some time to finish.  Once the InSync icon in the toolbar shows the sync has completed, you are ready to proceed.
  8. On your other computer (laptop), prepare it for syncing by installing InSync, and choosing Advanced Setup again.  Do not put anything in the InSync folder, as your desktop and laptop are currently mirror images of each other via Ubuntu One.
    1. Don't forget to install the Nautilus integration and restart Nautilus (step 5)
    2. Go to InSync > Your Google Account Name > Settings > "Selectively sync your files & folders"
    3. Choose the Documents folder from your Google Drive, and Apply Changes
    4. Allow all of your Drive's documents to sync to your laptop
  9. On the laptop, rename ~/Documents to ~/Docs
    1. mv ~/Documents ~/Docs
  10. In the terminal on your laptop:
    1. ln -s ~/Insync/Documents ~/Documents
    2. This creates a symlink from Insync's Documents directory to a non-existent ~/Documents on your laptop (non-existent because you renamed the ~/Documents folder in step 9).
    3. Note that this symlink is opposite of step 6.2, because we want to create the illusion of a ~/Documents folder that is actually under ~/Insync/Documents, allowing all previous shortcuts to continue working.
That's it! Your files on your laptop and desktop will stay in sync with each other.  
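Steps 9 and 10 on the laptop can be rehearsed as a small script. A sketch, parameterized on a base directory so you can try it in a scratch folder first; on the real machine the base is $HOME:

```shell
# Rename the old Documents folder and point ~/Documents at Insync's copy.
# $1 = base directory ($HOME on the real laptop).
link_documents() {
    base=$1
    mv "$base/Documents" "$base/Docs"                  # step 9: keep the originals
    ln -s "$base/Insync/Documents" "$base/Documents"   # step 10: the symlink
}

# Real usage, once Insync has finished its initial sync:
# link_documents "$HOME"
```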

Now go disconnect Ubuntu One! 

Monday, January 6, 2014

Google Chrome profile error on Ubuntu

Lately Google Chrome has been acting strange for me.  Every few times I would open Chrome, I would get an error that "Your profile could not be opened correctly".  However, it was very inconsistent, so the cause was hard to nail down.  When this would happen, I would get about 10 error windows stacked on top of one another, I would press OK several times, and then have to sign in to my profile again using the settings menu.  After a while this started to annoy me, so I started to hunt down a solution.

I am using Chrome 31.0.1650.63 and Ubuntu 13.04 (Xubuntu variant), although the fixes below should apply to most versions until Google fixes the issue.  To check your versions:

Chrome: Go to Settings > About Chrome

Ubuntu: In terminal, lsb_release -a

The fix for me was to kill all the zombie processes that Chrome left behind from the last time(s) it ran.  The easiest way to do this is:
pgrep -l chrome

Make sure that all the processes shown are only chrome.  If they are, then:
pkill chrome

You may also try:
killall -9 chrome
The -9 sends SIGKILL, which a process cannot catch or ignore, so this can work even when pkill's default SIGTERM does not.

You should now check that all chrome processes were killed by issuing the same pgrep -l chrome command.  If so, then restart Chrome and see if you get the same error.  If there are still processes, you may have to go in more depth to make it go away:
ps -ef | grep chrome
Many results may show up, but look for the PID, which is the second column of each line:
benmctee 11841     1  6 07:30 ?        00:02:23 /opt/google/chrome/chrome

11841 is the process ID (PID).  Issue the kill command in any of the following equivalent forms (TERM, SIGTERM, and 15 all name the same signal):
kill -TERM 11841
kill -SIGTERM 11841
kill -15 11841

Repeat the above kill process for each PID that appeared in the ps -ef | grep chrome command.
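The PID lookup can be automated with awk, since the PID is always the second column of ps -ef output. A sketch using the sample line from above:

```shell
# Sample ps -ef line (the one shown above):
ps_line='benmctee 11841     1  6 07:30 ?        00:02:23 /opt/google/chrome/chrome'

# The PID is field 2; awk splits on runs of whitespace by default.
pid=$(printf '%s\n' "$ps_line" | awk '{print $2}')
echo "kill -TERM $pid"

# On a live system the whole loop collapses to one line; the [c]hrome
# pattern keeps grep from matching its own process:
# ps -ef | grep '[c]hrome' | awk '{print $2}' | xargs -r kill -TERM
```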

An alternative way to go about this is to use the top command:
top (opens the process manager)
s (goes to the sort screen)
The display will show all of the available sort fields.  The %CPU field is what top is currently sorting by.  I don't really care at this point which process is causing the most CPU usage, since I know that we want to kill the Chrome processes.  Scroll down to COMMAND, and press s to sort by command name instead.  Press q to get back to the main top screen.  Everything will be sorted now, but in reverse order; type uppercase R to sort everything alphabetically, and then scroll down to chrome.

Notice that there are a multitude of instances that get started when Chrome runs.  To kill the processes, type k, and then enter the PID of the process you desire to kill, and then press <enter>.  To quit top, type q.

Killing each process can take some time, which is why I prefer to use the pgrep and pkill commands from earlier.

If you still have troubles with Chrome, you can follow other steps as outlined in Saravanan Thirumuruganathan's article over on Wordpress.

Thursday, January 2, 2014

Garmin GPSMap 62S under Ubuntu

I recently picked up Geocaching, thanks to my wonderful parents. It is a highly addictive sport in which caches, mostly small containers with (at a minimum) a paper log sheet inside, are hidden all around the globe. Once a cache is hidden, the cache owner will post its coordinates to the geocaching website, at which point other cachers can use a GPS receiver (GPSr) or a GPS-enabled smartphone with the geocaching app installed to search for it. Once found, the cacher will log it as found on the app or website, sign the log, and move on to the next cache. The smartphone way is great for beginners, but oftentimes one must log a DNF (did not find) because phone accuracies are usually no better than 16 ft. This is where a handheld GPSr comes in handy. Models like the Garmin GPSMAP 62s have a geocaching feature, and users can log a find with it and then upload that data once back home and connected to a computer.

This tutorial is meant for users of a Garmin GPSr who also use Ubuntu.  It will install the Garmin plugin, as well as QLandkarte GT, a very useful GPS program written for the Linux OS.


sudo add-apt-repository ppa:andreas-diesner/garminplugin
sudo add-apt-repository ppa:mms-prodeia/qlandkarte
sudo apt-get update
sudo apt-get install garminplugin qlandkartegt qlandkartegt-garmin

For the most up-to-date install files, head on over to the PPA pages for the packages above.  You will still need to add the repositories, as in the above steps.

A quick run down of someone using QLandkarte GT for geocaching:

Open QLandkarte by searching for it within the Ubuntu window, or execute qlandkartegt from terminal.  We must first get a map to use within the program.  There are many ways to go about this, all based on your preference.  My first stop would be GPS File Depot.  They have many custom maps for Garmin that are ready for download.  If they don't have what you are looking for, another way to obtain maps is to export one directly from OpenStreetMap (OSM).  The problem with doing it directly from their website is that you are limited to a small number of tiles (I was unable to download the entire island of Oahu).

In order to download larger regions than what OSM allows, I recommend using this link to select the region you would like to download.  From there, you select the region / state / etc, and then go to the download page.  Since we will be using QLandkarte GT, download the "" file.

  1. How to load a map and caches into QLandkarte GT:
    1. Copy this URL:
      1. I originally started writing this article based around Google Maps, but after reading this, I decided that the best (and free) support is given by OpenStreetMap.
    2. Download your desired region from the link above or here.
    3. Once your zip file has downloaded, unzip the contents into your desired directory (/home/user/geocaching/maps).
    4. In QLandkarte: File > Load Map
    5. Select the file with .tdr as the extension.  Another file dialog will open, and it will ask for an img file, where you will choose the filename_mdr.img (ie 63240000_mdr.img).  Your map will now load into the software.
    6. Generate a Pocket Query on the geocaching website for the caches you wish to load to your GPSr
    7. Download the pocket query, unzip the GPX file(s), and load those into qlandkarte
      1. File > Load Geo Data > select the GPX file.
      2. To add a second GPX file: File > Add Geo Data.  You will have to choose 'Add Geo Data' vice 'Load Geo Data' when loading waypoints as the GPX data will replace the caches you loaded in the previous step.
    8. You should now have all of the caches that were created in the pocket query layered on top of the Map in QLandkarte GT
  2. Exporting Caches:
    1. File > Export Geo Data > name the file something.gpx (I used 20130311.gpx to indicate the date on which it was created).
    2. Ensure all waypoints are selected, and click OK.
  3. Exporting the Map:
    1. Ensure you are still on the map tab
    2. Choose Map (menu) > Select Sub map
    3. Select an area to export.
    4. Click the name of the selection in the Maps tab, and then click Export map
    5. Select the folder you wish to export to, create a name, select Garmin Custom Map, and click Export
    6. Connect your Garmin to your computer and copy the newly created img file to the Garmin folder on your GPSr's SD card.  Please note that you must have the same directory structure on the SD card as in the default Garmin internal storage.
      1. As you can see, I have two volumes mounted, GARMIN and 7.9 GB Volume.  The 7.9 is the SD card and the GARMIN is the GPSr internal storage.  I drop the .img files in the 7.9\Garmin root directory, and the cache file into the GPX directory.  The CustomMaps directory is empty, and is probably left over from previous experiments.  I had no success in being able to open the custom map on the GPSr when I copied the img file to that directory.
    7. While you are here, also upload the GPX file to the GPX folder.  Both the GPX folder and the Custom Maps folder are under the Garmin directory.

That's it!  Unmount the GPSr and try it out.
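Steps 3.6 and 3.7, copying the map and the GPX onto the card, can be wrapped in a small helper. A sketch; the mount point and filenames below are assumptions, so substitute your own:

```shell
# Copy an exported map (.img) and a pocket query (.gpx) onto the GPSr's
# SD card, creating the Garmin/GPX directory structure if it is missing.
# $1 = card mount point, $2 = map file, $3 = gpx file.
copy_to_gpsr() {
    card=$1
    mkdir -p "$card/Garmin/GPX"
    cp "$2" "$card/Garmin/"       # maps go in the Garmin root directory
    cp "$3" "$card/Garmin/GPX/"   # caches go in Garmin/GPX
}

# Hypothetical usage (paths and names are placeholders):
# copy_to_gpsr /media/user/SDCARD oahu.img 20130311.gpx
```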

If you run into problems with QLandkarte not doing something because of permissions, try running it as sudo from the terminal: sudo qlandkartegt

As always, please leave a comment below if you find any of these instructions incomplete or wrong.  I would greatly appreciate a more efficient way of going about updating the Garmin within Ubuntu, so if you have any tips, please let me know.  I keep looking at Open Cache Manager as a way to manage things the way the Windows folks do with GSAK, but it has no map export feature as of yet, so that's off the table.  However, it's a great cache manager tool, and you should check it out at their website.

Sunday, October 20, 2013

Reset Windows user password without original install disk

I rarely boot up into my Windows 7 partition, as most of my work is done under Ubuntu.  However, the occasional time arises where I must do the inevitable.  I had been using my fingerprint reader to login since my original password was long forgotten from when I originally set it.  Typically, one may reset a forgotten password by booting from the install CD or from a USB drive with Windows on it, but I recently came across a way to reset my user account's password without using the original install disk or a USB flash drive.  This method requires that you are already logged in to the computer.  If you have a fingerprint reader, this is easy, or if you have another account, just log in with that one.  If you do not have a way to log in, you must use the CD / USB method.

  1. Start > All Programs > Accessories > Right-click Command Prompt and Run as Administrator
    1. An alternate method is to search for cmd in the "Search programs and files" on the Start menu.
  2. This will open a terminal window, in which you will type:  set username
  3. Use the username it shows in the following command: net user username *
  4. You will prompted two times to enter a password.  Once the password is reset, it will say The command completed successfully.
  5. The password has now been successfully changed.

Monday, October 7, 2013

Ubuntu: Using VNC over SSH

Recently my inherent laziness caused me to start researching a remote desktop like environment from my laptop to my Xubuntu media server.  I wanted to do this so that while sitting in one part of the house, I could remote in to my media server in the back room and rearrange files, perform updates, etc. Essentially, I wanted something to work as a headless environment from the comfort of my couch.

In order to remotely control the desktop, I needed to set up some sort of secure environment.  We all know that VNC by default is not secure.  RDP (Remote Desktop Protocol), mainly used for accessing Windows machines, also has its flaws.  I started looking into VNC (Virtual Network Computing) over SSH (Secure Shell) tunneling.  My client computer (laptop) is Ubuntu 13.04 and the server I wish to connect to is Xubuntu 13.04.  Please do note that since I'm using Xubuntu, the following instructions may not work for you if you don't use lightdm as your display manager.

  1. Start by installing ssh server on the remote machine:
    1. sudo apt-get install openssh-server
    2. If you are using Webmin to configure your system (highly recommended, see here), it is a fairly simple setup after the install completes.  The default setup will work for now, but we need to lock it down.  More on that in a second.
    3. If you are running Ubuntu on your local machine, you already have the openssh client installed by default.  I cannot speak for Windows, OS X, or other flavors of Linux, but finding a package should be pretty simple using the great interwebs.
    4. In the terminal on your client computer (ie your laptop), generate ssh public and private keys:
      1. mkdir ~/.ssh
      2. chmod 700 ~/.ssh
      3. ssh-keygen -t rsa
      4. You will be prompted for a location to save the keys, and a passphrase for the keys. This passphrase protects your private key while it's stored on the hard drive, and will be required every time you use the keys to log in to a key-based system.
      5. If you do not wish to password-protect your key file (not recommended), just press enter without typing a password.  Remember, if your laptop is ever stolen, a brute force attempt may be made to unlock your key file, and then the server you connect to can be compromised.
      6. Note: Public keys are what you give out to servers.  It is what is used in conjunction with your private key - stored locally - to authenticate.  Under no circumstances should you give out your private key!
    5. Now that you have generated your public and private SSH keys, it's time to transfer your public key to the server:
      1. ssh-copy-id username@host
      2. Replace username with the username you login with on the server.  Use the server's local IP address (assuming you're doing this over a LAN) as the host.
      3. When prompted for the password, enter the password associated with the username you provided for that machine.
      4. For more details on steps 4 and 5 above, jump on over to SSH Keys on the Ubuntu Community website.
    6. Great!  You're on your way to having a secure shell environment that's actually secure.  Now, we must proceed with locking down openssh-server:
      1. In Webmin, login to the server and find the SSH Server section.
        1. If you find that SSH Server is in the "Unused" section, click Refresh Modules on the left, bottom.  Now logout and log back in.  You will now find it under the "Servers" section.
      2. Authentication:
        1. Allow authentication by password? No
        2. Permit logins with empty passwords? No
        3. Allow login by root? No
        4. Allow RSA (SSH 1) authentication? No
        5. Allow DSA (SSH 2) authentication? No
        6. Check permissions on key files? Yes
        7. Display /etc/motd at login? No
        8. Ignore users' known_hosts files? No
        9. User authorized keys file: Default
        10. Maximum login attempts per connection: 2
        11. Ignore .rhosts files? Yes
      3. Networking:
        1. Listen on addresses: All addresses
          1. If you have a dedicated IP address for your client, setting this to its IP would make it even more secure, allowing only connections from within the local network.
        2. Listen on port: 22
          1. 22 is the default port for SSH.  If you wish to change this for more security, connecting will become more difficult.
        3. Accept protocols: SSH v2
          1. With modern SSH clients there is no need to enable v1.  In fact, there are known vulnerabilities in older SSH servers, including a CRC32 Compensation Attack.
        4. Disconnect if client has crashed? Yes
        5. Time to wait for login? 120 seconds
        6. Allow TCP forwarding? Yes
          1. This sounded insecure to me at first.  But, after further research, it actually encapsulates any traffic based on TCP into the SSH tunnel, making insecure traffic (checking mail, surfing the web) secure.  However, if you're a LAN admin and have other security restrictions in place for network traffic, enabling TCP forwarding would allow one to bypass those restrictions.
        7. Allow connection to forwarded ports? No
      4. Client-Host Options:
        1. If you have not tweaked this section before, the only available option will be All Hosts
        2. Click the Add options for client host link at the bottom.
        3. Enter the host name or IP address of the server
          1. "*" can be used for host names.  ie * will allow SSH to anything on the domain.
        4. Compression level: Worst
          1. Setting it to anything else will consume unneeded CPU cycles on a fast network and actually slow down file transfers when using scp
        5. Use privileged source port? No
          1. By default SSH clients will use the privileged source port when connecting, which indicates to the server that it is a trusted program and thus can be relied on to provide correct information about the user running it. This is necessary for rlogin-style authentication to work, but unfortunately many networks have their firewalls configured to block connections with privileged source ports, which completely blocks SSH. To have the clients use a normal port instead, select No for the Use privileged source ports? field. Unless you are using host-based authentication, this will cause no harm.
        6. All other options can be left as default.
      5. Access Control:
        1. Select the users you want to allow to connect, or type them in using commas to separate. "?" can be used as a wildcard.  ie admin_? will allow any users starting with admin_ to connect.
      6. Once you have made all of these changes, Stop Server and Start Server from the module's index page to apply the changes.
    7. SSH-server is now configured and locked down using Webmin, you have generated and published your keys from the client to the server, and you are ready to move on to configuring VNC over SSH.  But first, we must verify that all the changes we just made didn't break our SSH connection:
      1. ssh user@host
      2. You will be prompted with something like:
      3. The authenticity of host '10.0.X.XX (10.0.X.XX)' can't be established.
        ECDSA key fingerprint is XX:XX...XX.XX.
        Are you sure you want to continue connecting (yes/no)?
      4. Note: Don't panic!  You are connecting to an "unknown" server using only your key for the first time.  Unless some hacker is really efficient at setting up a man-in-the-middle attack on your server between the time you installed the ssh-server to now, you are most likely connecting to what you intended to connect to.
      5. Type yes
      6. Warning: Permanently added '10.0.X.XX' (ECDSA) to the list of known hosts.
        Permission denied (publickey).
        1. I didn't expect this error.  Upon exiting and logging in again, all was well with the world and I did not receive the same error.
      7. Once you have established a connection successfully, we can move on.  Type exit until you have left the SSH session.
    8. We will log back into the remote shell, but this time we will use trusted X11 forwarding (-Y option) in order to use a graphical text editor.
      1. ssh -Y user@hostname 
      2. x11vnc -storepasswd
        1. Enter a secure password, and again to verify it.
        2. Store this password as /etc/x11vnc.pass (not the default location)
        3. sudo chmod 744 /etc/x11vnc.pass
      3. cd /etc/lightdm
      4. sudo gedit lightdm.conf
        1. Assuming gedit is installed on your machine.  If not, use whatever text editor is installed, or vi if you are comfortable with that.
      5. Append a line to the end of lightdm.conf that starts x11vnc with the password file you stored above (/etc/x11vnc.pass):
        1. Save and close
        2. Note: If this step were skipped, x11vnc will not start after rebooting your server and must be manually started after logging in locally.  Some may wish to do this as an extra layer of security.
      6. sudo service lightdm restart
      7. Now the service is running and will run each time your computer starts.
    9. Back on your local computer (outside the SSH session), you will want to bring the remote display to you:
      1. Install SSVNC from the Ubuntu software center for your local VNC viewer.
      2. In the VNC Host:Display field:
        1. username@host
        2. I found it to work best if you let the software decide the display port rather than attempting to connect to username@host:displayPort (ie benmctee@10.0.X.XX:0 as the instructions tell you to do).
        3. Enter your VNC password that you set above when prompted.
  2. Assuming you have not encountered any errors - as I did while writing this - you should be viewing your server remotely!
    1. To stretch / shrink the display:
      1. F8 > Scale Viewer > auto or fit depending on your preference.
    2. Full screen view:
      1. F9
    3. To close the remote viewer but leave you logged in on the server, simply close the window.  If you wish to log out, do that normally as well.
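For reference, the tunneling that SSVNC sets up under the hood can also be done by hand. VNC display N listens on TCP port 5900+N; the sketch below computes the ports and prints the tunnel command rather than running it, since the hostname 'mediaserver' is a placeholder:

```shell
# VNC display number -> TCP port (display 0 = 5900, display 1 = 5901, ...).
vnc_port() { echo $((5900 + $1)); }

# Manual tunnel: forward a local port to the server's VNC port over SSH,
# then point any VNC viewer at the local end of the tunnel.
echo "ssh -f -L $(vnc_port 1):localhost:$(vnc_port 0) user@mediaserver sleep 30"
echo "then connect your viewer to localhost:$(vnc_port 1)"
```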
As with all my tutorials, if you have questions or run into issues, please let me know in the comments section below.

Thursday, July 25, 2013

Updating your Verizon Galaxy Nexus ROM

This quick tutorial is for owners of a rooted Galaxy Nexus running the SlimROM custom ROM and are on Verizon (toro).  These steps may work for other devices running the same ROM, but I make no guarantees as this is written for a friend as a reference.

  1. Go to Settings > SlimCenter > SLIMOTA tab
    1. Download SlimBean by clicking the link (which will bring you to their website), select the most current OFFICIAL build (Slim-toro-4.?.?.build.?-OFFICIAL), and click the blue Download button.
    2. Go back to SlimCenter and click Download gapps.  On the web page, select AIO_Addons.4.?.?.build.?.?.   Again, click the blue Download button.
  2. Both of these files will show their download progress in the Notifications drawer.
    1. While these are downloading, open ES File Explorer
    2. Browse to the RootStuff folder I created.  If it isn't there, just create a new folder named RootStuff (or whatever you want to call it, as long as you remember the name).
    3. Delete any old Slim-toro or AIO_Addons zip files that are stored there.
  3. Once the 2 updates have finished downloading, they will be in your Download folder.  Select both of them by long-press, then cut and move them to the RootStuff folder.
  4. Open ROM Manager
    1. Reboot into Recovery
  5. In Recovery (use volume buttons to navigate and power button to select if you have not purchased the touch version):
    1. wipe cache partition
    2. advanced > wipe dalvik cache
    3. +++Go Back+++
    4. install zip from sdcard
    5. choose zip from sdcard
    6. 0/
    7. RootStuff
    8. Slim-toro-4.?.?.build.?
    9. Once that is complete, it should bring you back to the RootStuff folder. If not, navigate to that folder again.
    10. AIO_Addons.4.?.?.build.?.zip
    11. Again, you should still be in the RootStuff folder once that is complete.
    12. +++Go Back+++
    13. +++Go Back+++  (at the bottom of the folder list)
    14. +++Go Back+++
    15. +++Go Back+++
    16. reboot system now
  6. Once your phone is back up, it will say that it is optimizing your apps. 
  7. Once it has finished (~5 min), you are done.
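Before rebooting into recovery (step 4), it's worth confirming the zips weren't corrupted in transit by comparing their md5 sums against the ones published on the download page. A sketch of the check, using a throwaway stand-in file since the real Slim-toro and AIO_Addons filenames vary by build:

```shell
# Demo of an md5 integrity check on a stand-in file; substitute your
# actual ROM zip and paste the published checksum into PUBLISHED.
printf 'stand-in ROM contents' > Slim-demo.zip
PUBLISHED=$(md5sum Slim-demo.zip | awk '{print $1}')  # pretend this came from the site
if [ "$(md5sum Slim-demo.zip | awk '{print $1}')" = "$PUBLISHED" ]; then
  echo "checksum OK - safe to flash"
else
  echo "checksum MISMATCH - re-download"
fi
```

A mismatch almost always means an interrupted download; re-download rather than flashing a corrupt zip.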
A couple of quirks:
  • I always have to re-download Google Search from the Play Store to get the search bar on my home screen to work and to enable Google Now.  I have reported this bug on their forums with no reply.  It's an easy fix, though.
  • Some apps when trying to update will give an error (triangle with exclamation mark in the notifications drawer).  Simply click on the error, which brings up the app in Google Play Store, and update manually from the store.
  • If you use Google Voice for voicemail (which I highly recommend), you should open the app to reactivate it after upgrading your phone.  Otherwise, voicemail notifications may not arrive automatically.
Let me know if you have any issues and I can update this guide.

Saturday, June 22, 2013

Monitoring APC UPS Batteries using Webmin on Ubuntu

I've been using an APC Back-UPS XS 1300 since initially building my media server approximately 2 years ago.  Until now, however, I've been blindly trusting that it will do its job during a power outage, surge, or brown-out.  I have come across a method to monitor the battery pack using my favorite monitoring and admin tool, Webmin.  Here goes:
  1. Install apcupsd and the cgi tools to enable web monitoring
    1. sudo apt-get install apcupsd apcupsd-cgi
  2. Edit the config file:
    1. gksudo gedit /etc/apcupsd/apcupsd.conf
    2. # UPSCABLE <cable> section: change the value to
      UPSCABLE usb
    3. In the next section down, it currently reads DEVICE /dev/ttyS0.  A serial device will not work over a USB cable, so change the type to:
      UPSTYPE usb
      1. Leave DEVICE blank: keep the word DEVICE on its line, but remove /dev/ttyS0
    4. Close apcupsd.conf, saving changes
  3. Now, tell the service that your config file has been configured:
    1. gksudo gedit /etc/default/apcupsd
    2. Change ISCONFIGURED=no to ISCONFIGURED=yes
    3. Close, saving changes
  4. Next, restart the apcupsd service:
    1.  sudo service apcupsd restart
  5. Download the apcupsd Webmin module from their downloads page 
    1. At the time of writing, it was named "apcupsd-0.81-2.wbm.gz".
    2. Unzip it to a temp directory
  6. Install the apcupsd module in Webmin
    1. Login to Webmin
    2. Go to Webmin menu > Webmin Configuration > Webmin Modules
    3. Select the [...] button next to "From Local File", and locate where you unzipped it, and install.
    4. Once it is done, click Refresh Modules on the left pane above Logout
    5. Close Webmin in your browser, re-open it, and re-login
  7. Configure apcupsd in Webmin
    1. Go to Others > APC UPS Daemon > Configure Module
    2. Change to the following values:
      1. Configuration file for apcupsd: /etc/apcupsd/apcupsd.conf
      2. Time interval for update screens (in sec): 30
      3. Path to multimon.cgi: /usr/lib/cgi-bin/apcupsd/multimon.cgi
      4. Path to upsfstats.cgi: /usr/lib/cgi-bin/apcupsd/upsfstats.cgi
      5. Path to upsstats.cgi: /usr/lib/cgi-bin/apcupsd/upsstats.cgi
      6. Path to upsimage.cgi: /usr/lib/cgi-bin/apcupsd/upsimage.cgi
      7. Start apcupsd command: /etc/init.d/apcupsd start
      8. Stop apcupsd command: /etc/init.d/apcupsd stop
  8. That's it.  Just log out of Webmin and back in, go to Others > APC UPS Daemon, and your UPS status will be displayed.
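Incidentally, the apcupsd.conf edits from step 2 can be scripted instead of made in gedit.  A sketch with sed, demonstrated on a stand-in file mimicking the assumed stock defaults; on a real system you would run the sed command against /etc/apcupsd/apcupsd.conf with sudo after making a backup:

```shell
# Stand-in for the stock config stanza (assumed defaults):
cat > apcupsd.conf.demo <<'EOF'
UPSCABLE smart
UPSTYPE apcsmart
DEVICE /dev/ttyS0
EOF
# Apply the three edits from step 2: USB cable, USB type, blank DEVICE.
sed -i -e 's/^UPSCABLE .*/UPSCABLE usb/' \
       -e 's/^UPSTYPE .*/UPSTYPE usb/' \
       -e 's/^DEVICE .*/DEVICE/' apcupsd.conf.demo
cat apcupsd.conf.demo
```

Either way, remember to restart the service afterward so the new settings take effect.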
You can play around with your config file to make your computer respond to certain battery percentages (for example, shutdown with 10% remaining).  To view all options, use "man apcupsd" in the terminal.
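For instance, the shutdown behavior is governed by three thresholds in apcupsd.conf; the values below are illustrative, not recommendations:

```
# /etc/apcupsd/apcupsd.conf -- when to shut the machine down
BATTERYLEVEL 10   # shut down once charge falls to 10%
MINUTES 5         # ...or once estimated runtime falls to 5 minutes
TIMEOUT 0         # ...or after this many seconds on battery (0 = disabled)
```

Whichever threshold is reached first triggers the shutdown.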

Also, you may want to perform a trial run by unplugging your UPS from the wall and seeing what it does.  If you're interested in that sort of thing, there's a good tutorial over at

If you have any issues or find this tutorial is inaccurate, please comment below.