
Saturday, February 3, 2018

Upgrading to an SSD using Clonezilla

Samsung 860 PRO SSD

I've been contemplating upgrading my Xubuntu boot drive for quite some time. It was still running on the original 250GB HDD that I bought back in 2010, and from my experience that is quite a long time to trust a single hard drive. Plus, Samsung just released the 860 Pro SSD which I got a pretty good deal on. The SSD should allow the OS to run a bit snappier and boot time will be quicker.

I did some research to find out what I needed to do in order to move from the 250GB HDD to a 500GB SSD. All signs pointed to Clonezilla, so that's the route I took. I started out by installing Multiboot in Xubuntu so that I could create a bootable USB stick with Clonezilla on it, as well as a copy of Xubuntu 16.10, just in case I screwed things up and needed to get in to edit fstab or grub.

Installing and using Multiboot is fairly simple. Download it from the website, run the .sh install file, and it creates a new menu item for itself under Accessories. Once open, point it to the USB stick on which Grub and your multiple ISOs will be installed, and then drag the ISOs into the window (supposedly). Note: drag-and-drop did not work for me, but it has a file-finder option that worked just fine. It will prompt for your password, and then run the copy operation in a small terminal window. The first time around I became impatient because the process sat at 100% for about 10 minutes, so I just closed the window. When I tested it, the Grub menu didn't have any bootable options. I then formatted the USB stick, dropped the image back onto it, and just let it do its thing for a while. I think Xubuntu took about 15 minutes to finish, and Clonezilla only about 5, at which point it takes you back to the main Multiboot screen. From there, click the Boot tab at the top and test the USB drive. If it tests properly in the VM, you are ready to shut down your machine and install the new drive.

Like I said previously, I purchased the Samsung 500GB 860 Pro SSD. I also purchased an IcyDock Flex-Fit Trio cage for it, which can fit two 2.5" drives and one 3.5" drive simultaneously in a 5.25" (i.e. CD drive size) slot. This works out great because I've filled my media server's hard drive slots (8 total!), but have plenty of 5.25" slots remaining.

IcyDock Flex-Fit Trio
Once everything was installed with power and SATA cables connected, I was ready to boot into Clonezilla. I used the device-to-device option (the one not highlighted below):

If I wanted to copy my existing hard drive to an image on a USB drive for backup purposes, I would have used the first option.

Next, I chose my 250GB drive as the source disk. NOTE: the drive naming used by Clonezilla differed from that of the normal OS. For example, my /dev/sde hard drive contained sde1 (swap) and sde2 (/). Clonezilla named it sda, so look for the serial number of the correct hard drive if you are unsure.
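An easy way to have those serial numbers on hand is to list them from the normal OS before booting Clonezilla (a sketch; column support varies a bit by util-linux version):

```shell
# List each physical disk (-d = no partitions) with its size, model, and
# serial, so Clonezilla's device letters can be matched to the right hardware.
lsblk -d -o NAME,SIZE,MODEL,SERIAL
```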

Next step was to choose the new SSD as the target. After that, since I selected Advanced mode, I was presented with some options. I left them all at their defaults, because the -r flag to resize the partition was already set. On the final screen before cloning, I chose to create the partition table proportionally. Since I am moving up in size, this allows the entire 500GB to be utilized. I've read that if your swap partition sits at the end of the old HDD, the layout can cause problems when cloning to a larger drive, but I didn't do much research on it once I read about the proportional-partitioning magic that Clonezilla does. I think it's option -k1, but don't quote me...

Finally, it asks for confirmation two times, just in case the target volume was selected incorrectly (it wipes the entire drive!). And then the actual cloning process took about 20 minutes, which is impressively fast I think.

Now, I did run into issues. I left both the old drive and the new drive plugged in when I rebooted. Once in the OS, I noticed that it was still running on the old drive. So, I checked my BIOS boot order and ensured that it was changed to boot from the Samsung. Then I shut the computer down, unplugged the original drive, and tried to boot. Nothing. This is why I put Xubuntu on the thumb drive; I had a feeling some fstab editing would be happening soon...
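For reference, the fstab check I had in mind looks something like this from the live USB (a sketch - /dev/sdd2 and the mount point are assumptions from my setup, not universal names):

```shell
# Mount the cloned root and compare the UUIDs fstab expects against the
# UUIDs the new SSD actually has.
sudo mount /dev/sdd2 /mnt
sudo blkid /dev/sdd2           # the real UUID of the cloned root partition
grep '^UUID=' /mnt/etc/fstab   # the UUID(s) fstab will try to mount
# If they disagree, edit /mnt/etc/fstab to use blkid's value.
```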

After some research, I booted back into Clonezilla and chose "Super Grub2 Disk." From there, select "Detect any Operating System," and then the latest Linux kernel (usually the top in the list). Allow it to boot into your normal environment (wow SSDs are fast!!). After this, I did a couple things. First, I ran update-grub and tried to reboot. Still nothing. Then I installed and ran Boot Repair, with the following options:
Reinstall Grub
Unhide boot menu 10 seconds
OS to boot by default sdd2 (The OS now in use...)
Place GRUB into sdd
Place the boot flag on sdd2 (The OS now in use...)
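As an aside, what "Reinstall Grub" boils down to can also be done by hand from a chroot in a live session (a sketch, assuming the cloned root is /dev/sdd2 and the target disk is /dev/sdd):

```shell
# Reinstall GRUB onto the new SSD from a live session by chrooting
# into the cloned system.
sudo mount /dev/sdd2 /mnt
for d in dev proc sys; do sudo mount --bind /$d /mnt/$d; done
sudo chroot /mnt grub-install /dev/sdd
sudo chroot /mnt update-grub
```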

Applied those settings and got "GPT detected. Please create a BIOS-Boot partition (>1MB, unformatted filesystem, bios_grub flag). This can be performed via tools such as Gparted. Then try again. Alternatively, you can retry after activating the [Separate /boot/efi partition:] option."

So, I rebooted into my Xubuntu live install (USB stick) and ran GParted; however, I could not reliably resize the /boot partition to add a 1MB partition with a bios_grub flag - it warned of the potential of losing data. I then ran the same Boot Repair installation from the live session - which warned of conflicts with my RAID, so I had to install mdadm - and selected these options:
Reinstall GRUB
Use the standard EFI file
Unhide boot menu 10 seconds
[GRUB location]
Separate /boot/efi partition: sdd2
[GRUB options]
Purge GRUB before reinstalling it
[MBR options]
[Other options]
Place the boot flag on sdd2
(unchecked the defaults at the bottom to create pastebins and BootInfo summary, etc.)

Now then, after applying, I got: "The current session is in Legacy mode. Please reboot the computer, and use this software in an EFI session. This will enable this feature. For example, use a live-USB of Boot-Repair-Disk-64bit (url), after making sure your BIOS is set up to boot USB in EFI mode."

I enabled UEFI mode in my BIOS (it was previously disabled) and attempted to boot back into the live USB; however, apparently I can't boot to the USB when UEFI is enabled, so I disabled UEFI boot, went back into Xubuntu live on the USB, bit the bullet, and resized my partitions (hey, I have a backup already - the original disk!).

Using GParted, I moved the swap partition (at the beginning of the drive) up by 1MB at the start, created a new partition (/dev/sdd3) as ext4, gave it the bios_grub flag, and then committed the changes. I then ran the boot-repair utility again, this time with more success. I did have to finagle it a bit to remove the old grub(s) and reinstall the new one, but it finally got to a point where it could recognize my /boot/vmlinuz... and then: "Boot successfully repaired. You can now reboot your computer. Please do not forget to make your BIOS boot on sdd (ATA Samsung SSD 860) disk!" Thanks, boot-repair, that's fantastic news.
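The same bios_grub partition can be made from the command line with parted, if GParted isn't handy (a sketch; the disk name and partition number are from my setup and will likely differ on yours):

```shell
# Create a ~1MB partition in the freed-up gap and flag it bios_grub,
# which is exactly what Boot Repair's error message asked for.
sudo parted /dev/sdd --script mkpart primary 1MiB 2MiB
sudo parted /dev/sdd --script set 3 bios_grub on   # 3 = the new partition's number
sudo parted /dev/sdd --script print                # verify the flag took
```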


Annnnnnnd..... voilà!

Now, off to testing for that corrupted operating system...

Thursday, September 1, 2016

Issues with my Aperture Backup Plan

In a recent post, I explained how easy it is to transfer data from Amazon's S3 storage to Google Cloud Storage (GCS). I mean, this is cloud computing, so it should be simple, right? Well, in case my readers ran into problems (like I did), I didn't want to skip over the fact that issues can arise. And big issues they were...

First off, my transfer did not work as I described. In fact, I started writing the blog entry while the transfer was happening, and it was still transferring when I was done. It only appeared as if it were going to be successful. However, when all was said and done, I ended up with errors. To keep a long story short - and to not bore you with the research I had to do - I'll just link to my Stack Overflow post, and let you read it if you're interested.

At the end of the day, I got up to speed with gsutil, a very handy command line utility for talking with Google Storage from the local computer (remember - I'm running Xubuntu, but it should work fine for you Windows folks too). Some background though: when I started using S3, my intentions were to archive to Glacier to save money, and then only restore to S3 if a disaster ever happened. I would just sync my Macbook to the cloud, and then it would automagically archive to Amazon's cheap, long-term storage. Something went awry in the mix, though, and my files were neither in Glacier, nor classified as Standard storage in S3. The storage class, as viewed from S3, was Glacier - but I could not see the files in that service. I started down the path of restoring the files by meticulously right-clicking and restoring from Glacier within the S3 web console, but then I found out that the files would only be available for 3-5 days before going back to Glacier status. On top of that, I found out that the pricing would quickly escalate. So my dream of restoring from Glacier to S3 in bulk and having my Aperture library back up and running within 3 hours should a catastrophe happen was immediately squashed. I guess that's why they say you should test your backup plan before putting all of your eggs in that basket, right?
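For the record, those per-object restores can at least be scripted instead of right-clicked; the AWS CLI exposes the same operation (the bucket and key names below are made up for illustration):

```shell
# Ask S3 to temporarily restore one Glacier-class object for 5 days.
aws s3api restore-object \
  --bucket my-aperture-bucket \
  --key ApertureLibrary.aplibrary/Masters/IMG_0001.jpg \
  --restore-request '{"Days": 5, "GlacierJobParameters": {"Tier": "Standard"}}'
```

It still doesn't solve the cost or the per-object tedium at 220,000 files, which is why I moved on.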

At any rate, I got to learn some new command line interface (CLI) options for Linux, which always gets me going. Again, I'll save you from the boredom of explaining all of my research, but suffice it to say that the following command is what I needed to get my local files (from a USB thumb drive) to GCS:

gsutil -m rsync -r -d /media/benmctee/27F4-D3DE/ApertureLibrary.aplibrary gs://photo-archive-benmctee/ApertureLibrary.aplibrary

Let me explain what is going on here:
gsutil - that's the Google Storage Utility, part of the Google Cloud SDK. It's very useful, and more intuitive than one would think.
-m - Perform operations in parallel (multi-threaded/multi-processing). This allows multiple operations to go on at once when there are a lot of files to be processed. At over 220,000 files in my library, this really sped things up.
rsync - this shouldn't be new to any CLI users out there. But if it is, it's a very useful file-mirroring tool for Linux (not sure about Windows?). It will sync two directories to ensure a 100% backup.
-r - Recursive. This option allows us to dive deep into all the folders.
-d - Delete remote files that are not present locally (use with caution!)
/media/benmctee.... - This is my local directory on my thumb drive. Remember, always use the local directory first, and then the remote. Otherwise, serious deletions/file damage can occur!
gs://photo-archive-benmctee... - This is my GCS bucket, the "remote" location

If you want more details on gsutil rsync, check it out on Google's website.
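One habit worth adopting, especially with -d in play: gsutil rsync has a -n flag that does a dry run, listing what would be copied or deleted without touching anything. Using my paths from above:

```shell
# Dry run first: -n prints the planned copies/deletes but changes nothing.
gsutil -m rsync -n -r -d \
  /media/benmctee/27F4-D3DE/ApertureLibrary.aplibrary \
  gs://photo-archive-benmctee/ApertureLibrary.aplibrary
```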

This time, I did wait for a successful transfer before making this blog post. If you've never used the Glacier option before, then my first post will hopefully work for you, because that way is a lot easier and more straightforward. But if not, this should get you going. To install the Google Cloud SDK, which puts gsutil on your computer, head on over to the Google Cloud Platform website. Happy clouding!

Wednesday, August 31, 2016

Transfer Data from Amazon's AWS S3 Servers to Google Cloud

A while back I decided it would be a smart idea to archive my Aperture photos (now Apple Photos) since my Macbook, a 2008 model, is near end of life in terms of hardware longevity. Some quick research landed me at Amazon's cloud storage, S3 (Simple Storage Service), at just pennies per GB. I have around 100 GB of data within Aperture, so I needed something less expensive than using Google Drive.

As of this writing, Google Drive is $1.99/mo for 100 GB, and the next tier is 1 TB for $9.99/mo, with no scaling between the two tiers. S3, on the other hand, is $0.03/GB for standard storage ($3 per 100 GB, but scalable), and $0.007/GB for their long-term infrequent access "Glacier" storage. My plan was to upload all of my data from my Aperture library to S3, and then have it automatically archive to Glacier, costing me about 7 cents per month to safely stow away several years worth of photos. Sounds like a pretty good deal, right?

The problem with Glacier is that it is meant to be a long-term storage solution with very infrequent access. Once the files are in S3, I'd have to run a script to archive them to Glacier, followed by removing the files from S3 to save money. The next time I archived, I would have to transfer from Glacier back to S3 so that I could compare source/destination (computer to S3) for changes on upload, and then repeat the whole archiving sequence. However, I don't have that much time to dedicate to optimizing my backup plan. Rather, I'd just like a cloud storage solution that I can easily access whenever I want, without having to worry about where all the data is spread across the AWS platform.

I recently discovered Google Cloud, which I surprisingly had not heard of before, considering how Google-centric all my stuff is. I mean, I have a Nexus phone, Chromebook, this website is hosted on Google (including using Google's DNS services), I use Blogger, as well as Drive. I'm pretty much a Google fanboy at this point. But, it never crossed my mind to see if Google had a solution. So I started to compare AWS to Cloud, and, for my purposes, they are surprisingly similar, yet Google's services seem more intuitive.

There are 3 tiers with Google Cloud Storage: Standard ($0.026/GB), Durable Reduced Availability ($0.02/GB) and Nearline ($0.01/GB). Although the Nearline is more expensive than Glacier, the ease of use far beats that of AWS's option. What's more, Google has a transfer service that talks to Amazon's S3, so I can easily transfer over my bucket - which is what a storage node is called for both services. Google has key term explanations if this is all new to you. But, basically, you create your Project (Google Cloud account), create a bucket (Standard, DRA, or Nearline), and then put objects (or files) into that bucket.

Once you have your Project, or Google Cloud Storage account, to initiate a transfer from S3 to Google Cloud, follow these steps:

  1. Create a user access policy within AWS IAM (Identity and Access Management)
    1. Create a user in IAM (ie GoogleTransfer)
      1. Download the credentials 
      2. Copy the Access Key and Secret Access Key to somewhere you won't lose them (I created a Google Sheets file to keep track of users and their access keys). You will need both of these while creating the transfer later over on Google, and you will never have access to the Secret Key again once you leave this page.
    2. Give that user an Inline Policy
      1. Policy Generator
        1. Effect: Allow
        2. AWS Service: Amazon S3
        3. Actions: All Actions, minus the 5 Delete* options at the top of the list
          1. This is probably overkill, but I ran into access permission problems while trying to use Groups instead of an inline policy, so I just gave blanket permission.
        4. Amazon Resource Name: arn:aws:s3:::*
          1. This gives the user access to everything on your S3, including all your buckets. If you want to restrict it further, have a read here.
      2. Add Statement
      3. Next Step
      4. Apply Policy
  2. Create a Bucket in Google Cloud Storage
    1. Give it a unique name (ie aperture-backup-benmctee)
    2. Select your storage class (pricing and explanations)
    3. Select your storage location. I would stick to multi-regional unless you have a good reason not to.
  3. In your Google Cloud Console, create a new Transfer
    1. Amazon S3 Bucket: s3://bucket-name (this is your unique bucket name, ie benmctee-aperture-archive)
    2. Access Key ID: This is the public key generated in IAM.
    3. Secret Access Key: The secret key generated in IAM - you saved it, right??
    4. Continue
    5. Select the bucket you created
    6. If this is the first time you are transferring, you should not need to select further options. If you are trying it again because a transfer failed, you may want to select Overwrite destination with source, even when identical
    7. Continue
    8. Give it a unique name, if desired
    9. Choose Run Now, and Create
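For reference, the inline policy from step 1 comes out as JSON along these lines (a sketch - the action list here is a simplified read-only stand-in; my actual selection was "all actions minus Delete*"):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:Get*", "s3:List*"],
      "Resource": "arn:aws:s3:::*"
    }
  ]
}
```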
The beauty of cloud computing is that this will all happen without you having to stay on that page to monitor it. If you want to come back later and check the progress, just log back into your Google Cloud Console, go to Transfers, and click the job to see where you're at. From Amazon to Google should be relatively quick, depending on the volume of files ("objects") you are transferring.
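As an aside, gsutil itself can read s3:// URLs once your AWS keys are in ~/.boto, so the same bucket-to-bucket move can be scripted instead of using the Transfer page (bucket names below are examples, not my real ones):

```shell
# Copy everything from the S3 bucket into the GCS bucket in one shot.
gsutil -m rsync -r s3://benmctee-aperture-archive gs://aperture-backup-benmctee
```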

Thursday, July 14, 2016

2016 Toyota Entune Messaging Issues

Aloha everyone. It has been a while since I've posted a "how to" on something technologically based; I guess that means I've not run into issues that were not easily solved by a Google search. That changed over the last couple of days, however.

I recently purchased a 2016 Toyota Tacoma TRD Off Road that came with the Entune Premium Audio navigation and multimedia system. This is the standard head unit that comes with Toyota, and not the Entune Audio Plus version. From what I can tell, you have the standard version if you do not have the JBL logo at the bottom. Also, I believe the Entune Audio Plus version comes with apps like Pandora, Slacker, etc. So if you have those, you have the upgraded version. At any rate, this problem is probably across both systems.

Entune Premium Audio - Courtesy of PC Mag
Entune Audio Plus - Courtesy of Truck Trend

I connected my Android Nexus 6P, which I use on the Project Fi network, and it seemed to work without issue for phone and Bluetooth audio, as well as downloading all of my contacts. I could easily make a phone call using voice commands, play Pandora from my phone over the audio system, and browse all of my contacts to place a call. However, when I went to the messaging feature, it showed only some very old messages (the newest was from December 2015 - almost 7 months prior).

I was using the Google Hangouts app since I'm on Project Fi. It syncs all of my text messages, MMS, hangouts, and voicemail across all of my Android devices, as well as allowing me to make voice calls from any device over WiFi. To start the troubleshooting process, I opened the stock Messaging app on the phone, and it had all of my SMS and MMS messages in there, so I was curious why they weren't showing up on the head unit.

After some Googling, I found some users had success with deleting all of the messages on their phone, which made new ones show up on the Entune head unit. So I did that within the stock Messaging app, sent a test message from my wife's phone, and still no luck. I then changed my default SMS app on the phone to Messaging instead of Hangouts - no love. The next thing I did got it to work:

1) Hangouts app > Settings

2) Untick Enable Merged Conversations

3) Account Settings (tap your e-mail account on the same settings screen)

4) Disable the Project Fi calls and SMS "Messages" option, about halfway down

If you are not a Project Fi user, the first two steps might work for you. If it still doesn't work, try deleting your old messages in the default messaging app as well as making it the default messaging app.

I am probably a niche user within the Toyota realm by being on the Project Fi network, which is why I could not find a solution on the forums for the fix. Hopefully this helps someone else out. If you have any advice or ways to still use Hangouts on your Project Fi device, please post in the comments below.

Tuesday, November 3, 2015

Resizing a VirtualBox Windows 7 drive

I originally allocated 30 GB for use as a Windows 7 virtual drive using Oracle's VirtualBox software. I then installed the baseline software: Comodo Anti-Virus, Firefox, and various Windows updates (read: hours of time wasted while watching "Please wait while Windows installs 1 of 238248 updates", etc.). 

"Features" is probably used too liberally

Once I had a good working Windows 7, I created a snapshot to come back to, should something go awry in the future. Once that was done, I installed SketchUp for my woodworking projects and some common access card reader software for my work stuffs (Navy websites). All was working well until this morning, when I wanted to install the Home Remote Developer package to design a custom home automation interface. I was hit with "This program requires 2048 MB to install and you have [some random amount lower than this] remaining. Do you want to continue?" Nope, I would not like to continue. I would like to find out why Windows thinks it needs so much space (30 GB, really?) to operate with a handful of programs installed. I went into Add/Remove Programs to verify that, yep, I only had the aforementioned programs installed. So I went to disable Windows "features" and removed games, tablet services, and some other useless checkboxes, restarted, and then had even less space (originally 1.02 GB free on C:):

At any rate, rather than getting all spun-up over Windows (did I tell you about Xubuntu yet?), I researched how to make the C: drive bigger in my virtual machine. I quickly came up with this post: How to resize a virtual drive, which, it turns out, is exactly what I intended to do. I found out that file resizing does not work if snapshots have been taken (forgot to read inside the parentheses about versions prior to 4.3...), so I went into VirtualBox and created a clone, with no snapshots, of the current state.

Once that was complete, I continued with the guide by using the following script in terminal:

VBoxManage modifyhd <absolute path to file> --resize <size in MB>

Only I wanted to go from 30GB to 40GB, so the exact command was:

VBoxManage modifyhd "/mnt/Media/Virtual Machines/Windows 7 Clone/Windows 7 Clone.vdi" --resize 40960
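The 40960 comes from 40 × 1024, since --resize takes megabytes. A quick way to double-check the math and then confirm the new capacity (showmediuminfo is part of stock VBoxManage):

```shell
# --resize wants MB: 40 GB * 1024 = 40960.
echo $((40 * 1024))
# Confirm the VDI's new logical size afterwards.
VBoxManage showmediuminfo "/mnt/Media/Virtual Machines/Windows 7 Clone/Windows 7 Clone.vdi"
```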

That was the first step of the process: giving VirtualBox a bigger "hard drive" to work with. I then started up the Windows 7 Clone to extend the primary partition into the new space. After opening Disk Management, I was greeted with the unallocated space:

So here, just right click the (C:) > Extend Volume. Follow the menus to extend it out into the unallocated space, and then voila!

Like I said previously, I created a clone to work with, so I wasn't messing around with the original installation. If you are doing this on your working install, make sure you create a backup prior to attempting to resize the drive, as something could possibly get messed up. Also, since I'm running 4.3, I could have possibly done this procedure on the existing snapshot, but creating the clone meant that I could follow the original tutorial and also have a backup to work with.

If you have any comments, feel free to leave them below.

Saturday, October 17, 2015

Nexus 5 woes - Boot loop / power button sticking

I've been using the original Nexus 5 since it was first released by Google. It has been a great device and I've enjoyed a couple years' worth of use out of it. However, towards the end of August, it started a random boot loop, as if someone were holding down the power button. After a bit of Internetting, I found some people advising (on reddit) to tap the corner of the phone on a hard surface, near the power button. This seemed odd, but after many responses of "I can't believe this worked for me" and the like, I gave it a shot. It didn't work. I contacted Google in the hopes that they could shed some light. No luck there, but they did forward me over to LG, who were very happy to take my money for repair of the phone. So money I gave them, but a repaired phone in exchange I did not get. Here is some history, some of my troubleshooting efforts, and the logs that go with them.

August 25th: Contacted Google who then forwarded me to LG for repair
Sep 1st: LG sent a product received notification. A couple days later they asked if I wanted to repair the power button or completely refurbish the phone. I chose a complete refurbish for $179 since the battery was pushing 2 years, and my screen had a couple scratches.
Sep 14th: LG sent a product shipped notification. Their "Repair Results" in the e-mail stated "Re-Solder (Must input Part Location No.)". Not sure what that even means since when I got the phone back it seemed completely new.
Sep 25th: Received the phone, booted up, restored from Google backup, everything was running well. Received notification that a system update was available (~10 MB Android 5.1.1 update, likely LMY48M from LMY48I). Installed it, boot loop began.
Sep 29th: Contacted Google again, no help, contacted LG directly
Oct 2nd: LG received the phone again
Oct 6th: LG sent it back to me. Their "Repair Results" in the e-mail stated "S/W upgrade (download)". Seems they did a factory reset of some sort, no hardware work this time.
Oct 16th: received in the mail from FedEx
Oct 17th:
  • Booted up phone (no sim card)
  • Entered all Google account info, restored from backup
  • Phone ran fine, updated all apps from the app store once completely started up
  • Notification of new system update available, same as last time, chose to install (~10.1 MB), again probably the LMY48I to LMY48M update.
  • After install, the phone restarted, got to "optimizing apps" and powered off
  • Phone will not boot into OS. It will only go into fastboot or to the Google logo, then power off
  • Started troubleshooting sequence below

Found forum discussions with the same issue:

1) Plugged in USB cable, battery charging icon appeared on screen
2) Went into fastboot mode with key combo of VolDwn+Pwr
3) Installed marshmallow via the instructions at (
  • Unlocked device, acknowledged warning
  • Ran ./ (see update log at the bottom of this doc)
    • This package is MRA58K
  • On reboot, showed Android with spinning blue wireframe ball
  • Restarted again; Google logo w/unlock icon at the bottom of screen
  • Phone turned off, will not turn on with power button alone. Will go into fastboot via Vol Dwn+Pwr
    • New bootloader version: HHZ12k
    • New baseband version: M8974A-
    • Lock state: unlocked
    • (all else the same)
  • Attempt the "Start" option from fastboot
    • Does not get past the Google screen, instead, turns off
    • Unplugged USB cable and re-plugged to see if the phone indeed turned off (as confirmed by a battery charging icon) or if the screen just went dark. Nothing happened when the cable was plugged back in. Waiting 5 min to see if the phone does something
    • No response from phone, unplugged USB and turned on w/power button, go to Google screen and turned off
  • Plugged in USB, went into fastboot via key combo. Ran ./ again (see update log 2 at bottom)
    • During the automatic restart, Google screen showed, then turned off
    • Manually powered on again, same thing happened
  • Went into fastboot, attempted to flash LMY48B (last known stable version of 5.1.1, update log 3 at bottom)
    • New fastboot screen info: Bootloader Version...: HHZ12h; Baseband Version.....: M8974A-
    • Phone rebooted after flashing, then turned off. Showed battery charge icon
    • Manually turned on, got to Google screen, turned off
  • Back into fastboot, attempted flashing 5.0.1 (LRX22C, update log 4 at bottom)
    • New fastboot screen info: Bootloader Version...: HHZ12d; Baseband Version.....: M8974A-
    • Rebooted, got to Google logo, powered off, showed battery charge icon
  • Went into fastboot, locked device via fastboot oem lock.
    • Chose the Start option from fastboot, got to Google logo, turned off. This time it automatically turned back on, and then gets to the Google logo in a boot loop cycle (happened ~7 times)
    • Manually powered on, got to Google logo, stayed there for ~15 minutes. Powered off with all 3 buttons. Powered on, got to Google logo, turned off.
    • Rebooted into fastboot, attempted to go into Recovery Mode. Got to Google logo and powered off. Manually powered on, stuck in boot loop.
  • Ran fastboot oem unlock again
    • Once complete with erase and unlock, attempted to boot device. Got to Google logo, turned off
  • Back into fastboot, attempted flashing 4.4 (KRT16M, update log 5 at bottom)
    • New fastboot screen info: Bootloader Version...: HHZ11d; Baseband Version.....: M8974A-
    • Completed install, Google logo, reboot, Google logo, power off
  • Went into Recovery Mode, showed Google Logo, then showed Android guy with spinning blue wireframe ball
    • Sat there for about 2 minutes, then powered off
    • Automatically rebooted into Android guy again. Powered off after about 45 seconds
    • Powered on, got to Google logo, went into Android guy again
      • The blue bar at the bottom of the screen doesn't appear to be showing progress of any sort, it just has black vertical lines moving across it to the left
    • After about 45 seconds, the screen froze (no more animation) then 10 seconds later it powered off
  • Attempted to go back into recovery mode again, but it went into a boot loop at the Google logo
    • Again, android guy, reboot, android guy, power off
  • Flashed 6.0.0 again
    • Logo, reboot, logo, reboot, logo, power off
    • Recovery: android guy (no blue bar this time) - vol up+pwr tap doesn't go into factory reset screen. stayed on android guy for 1 minute, rebooted, logo, rebooted, logo,
  • Within fastboot, flashed TWRP bootloader via fastboot flash recovery recovery.img
    • (openrecovery-twrp-
    • Wiped everything
    • Attempted permissions repair
    • Ran fastboot -w then fastboot continue. It restarted into TWRP, formatted cache, and then powered off.
  • FINALLY!! I booted into fastboot, and this time just chose "Start" instead of recovery, and it got past the Google logo and into the new marshmallow animation screen.... progress.
    • The animation started at 14:15. At 19:35 I manually restarted the phone.
    • Back at square one (won't get past Google logo)
  • Tried fastboot -w again, no luck
  • Reflashed 6.0 again, booted into recovery, performed "Wipe data/factory reset". Boot loop. Notice a trend??
  • Hail Mary: Installed TWRP recovery, performed a factory reset (wipe /data and /cache).
    • Performed adb push Got half-way through the install, phone rebooted.
  • One more attempt: flashed the stock recovery, restarted bootloader, flashed TWRP recovery, restarted bootloader, entered recovery, wipe > advanced wipe > check everything > wipe.
    • Once complete, checked /system, and "repair system". Phone rebooted.
    • Reflashed stock recovery, reflashed TWRP
    • Won't let me into TWRP again.
    • Performed fastboot -w, got this:
      • erasing 'userdata'...
        FAILED (status read failed (Protocol error))
        finished. total time: 5.266s
      • Then phone turned off
  • I'm done. Sending this log to Google and LG, more to follow.
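For context, the "./" runs in the logs above refer to the script bundled with a Nexus factory image (presumably flash-all.sh), which boils down to a fastboot sequence roughly like this (image file names are placeholders, not the exact ones I used):

```shell
# Rough shape of a Nexus factory-image flash via fastboot.
fastboot oem unlock
fastboot flash bootloader bootloader-hammerhead-XXXX.img
fastboot reboot-bootloader
fastboot flash radio radio-hammerhead-XXXX.img
fastboot reboot-bootloader
fastboot -w update image-hammerhead-XXXX.zip   # -w wipes userdata
```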

UPDATE (Oct 18, 2015):
I called Google to see if there was anything they could do, since obviously LG doesn't properly repair their products during refurbishment. After being on the phone for close to an hour, I got the same "your device is out of warranty, let me forward you to LG" answer - and LG, by the way, was not open. Looks like I will be calling tomorrow to try and speak to someone who can promise me a different phone, or at least a full repair of this one.

UPDATE (Nov 4, 2015):
I received the phone back from LG repair and was very disappointed. The sheet they included with the box said that the primary complaint was that it wasn't charging. The fix: replace the charging receptacle. Did they not listen to a word I said?? Apparently not. The good news is that the phone booted to the welcome screen, and that's where I'm leaving it. My Nexus 6P is on the way (good bye, LG!), and this phone is going on the shelf.

Thursday, January 8, 2015

Resize a VirtualBox VDI Image

I recently wanted to hone my woodworking skills, not in the workshop, but in the planning phase.  My previous projects have all been either "plan as you go" or, at best, drawn out and dimensioned on paper using straight-edges.  As I am not a skilled craftsman, but a mere hobbyist, my projects never seemed to come out how I envisioned them in my head (read: mortise-and-tenon joints ended up being screwed together).  Anyway, I've known about Sketchup for a while, previously made by Google and now by Trimble, and decided to give it a whirl.  The only problem: it's for Windows and Mac only.  Of course.  I set out to find an alternative, but the community support for Sketchup on YouTube is just phenomenal (search for "Sketchup for woodworking" and you'll see what I mean), so I decided to run it in a virtual machine instead.

I installed VirtualBox, got Windows 7 up and running, and of course installed all of the updates, Comodo anti-virus, and Firefox.  Bare minimal installation.  Then I installed Sketchup and went to work.  Hours of watching and pausing YouTube, repeating the process in Sketchup, and I'm starting to get excited about the prospect of having actual, real dimensions printed out with my woodworking plans in multiple views, angles, etc.

Tenons for a mobile planer cart

The planer cart

Sander caddy with paper storage

Next up: Windows strikes again and is running out of space.

Out of space!

How is that even possible?  I dynamically allocated 25GB on my hard drive just for Windows.  It's a very minimal installation, yet here I am with a red bar below my C:\ drive stating I have 941MB left of my initial 25GB.  Seriously?  I am reminded once again why I switched to Linux years ago.

  1. Shut down Windows
  2. Close VirtualBox
  3. In a terminal:
    1. sudo VBoxManage modifyhd "/path/to/vdi/Windows 7.vdi" --resize <size in MB>
    2. In my case: sudo VBoxManage modifyhd "/mnt/Media/VBox/Windows 7.vdi" --resize 30000
    3. Note: the quotes are only required if there is a space in the path (or escape the space with a backslash, whatever suits you best).
  4. Open VirtualBox, but don't start Windows just yet
  5. Right Click the Windows instance > Settings
  6. Go to Storage in the left pane and click on your VDI file
  7. Verify the Virtual Size is what you requested
  8. Close out of settings and start Windows
  9. Since Windows won't automatically add the new space to your drive, open Disk Management (Control Panel > Administrative Tools > Create and format hard disk partitions)
  10. Right click on your Windows volume > Extend Volume
    Right click on Windows Volume
  11. Next
  12. Change options as you see fit.  I left everything as-is, because I want all 5GB additional space allocated to this volume.
    Extend Volume by preferred amount or leave as-is
  13. Next > Finish
  14. Your Disk Management should now show that you have increased the size of your Windows volume
    Disk Management shows increased size
  15. Open Windows Explorer and verify your hard drive now shows your new storage space
    1. For me, it still showed the old amount because I was in the My Computer view before resizing.  If this is your case, simply click on some other place in Windows Explorer, and go back to Computer view and it should update.
And voilà!  Back to the happy blue color.

I hope this helps out someone.  If you run into problems, review the documentation.  While researching, I ran across a post on the Ubuntu forums saying that if your VDI is a fixed disk instead of dynamically allocated - meaning the file was created at the disk's full size instead of growing as needed over time - this resize will not work.  There is, however, a VirtualBox utility that will supposedly copy your fixed VDI to a dynamic one while resizing it.  I believe it is a Windows utility, so I didn't pay much attention to it.
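For what it's worth, VBoxManage itself can reportedly do the fixed-to-dynamic conversion on Linux too, by cloning the disk with the Standard (dynamic) variant. Here's a sketch; the paths are just examples from this post, and the run() wrapper only prints each command so nothing is touched until you remove it:

```shell
# Print-only stand-in: commands are echoed, not executed.
# Remove the wrapper (and review each line) to run them for real.
run() { echo "+ $*"; }

# Clone the fixed-size VDI into a new, dynamically allocated one
# (newer VirtualBox versions call this subcommand "clonemedium"):
run VBoxManage clonehd "/mnt/Media/VBox/Windows 7.vdi" \
    "/mnt/Media/VBox/Windows7-dynamic.vdi" --variant Standard

# Then resize the dynamic copy as described in the steps above:
run VBoxManage modifyhd "/mnt/Media/VBox/Windows7-dynamic.vdi" --resize 30000
```

Afterwards you would point the VM's storage settings at the new dynamic VDI and keep the fixed one around until you've verified the guest boots.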

Good luck, and if there are any discrepancies please let me know in the comments.

Wednesday, June 11, 2014

Replace a RAID 5 disk that has failed (Linux / Ubuntu)

If you take a look at my last couple of blog entries, you'd know that I had a hard drive that was approaching imminent failure:
I got the new drive in the mail from Amazon; it was a different model, but the same size (2.0 TB) Western Digital Green (WD20EZRX).  Once I decided that using a SATA 3 drive on a SATA 2 bus was going to work, I went for the purchase.

On to the replacement:
Using mdadm, tell the Linux RAID to stop using the disk by marking it as failed:
sudo mdadm /dev/md0 --fail /dev/sda1

If you like, follow that with sudo mdadm /dev/md0 --remove /dev/sda1 to detach the failed disk from the array before shutting down.

If you are like me, and you don't know which one is which, use the Disk Manager tool and write down the serial number of the drive.  This will correlate to the number on the printed label of the physical drive.  Note: it is handy to tape a piece of paper to the inside of your computer listing all of your drive serial numbers and the associated partition for future reference.  I actually had forgotten that I did this the last time a drive failed, wrote down the serial number of my drive, and then realized the paper was in the computer.
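If you'd rather skip the GUI, lsblk can print the serials directly (a sketch; the SERIAL column needs a reasonably recent util-linux, so it's guarded here in case lsblk is missing):

```shell
# Print each whole disk (-d skips partitions) with its size and serial
# number; the serial matches the label printed on the physical drive.
if command -v lsblk >/dev/null 2>&1; then
    lsblk -d -o NAME,SIZE,SERIAL
else
    echo "lsblk not available"
fi
```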

Power down the machine, remove the faulty drive, and replace with the new one.

Once the drive is replaced, power on your computer.  You should see a /dev/md0 fail event upon startup.  Mine said something to the effect of "3 out of 4 devices available, 1 removed", etc.

Next, partition the new drive with fdisk:
sudo fdisk /dev/sda

This will bring you into the fdisk program.  Type m for the help menu and available input options.  Perform these in order:
p - print the current configuration and verify there is no partition already.  This is a quick idiot check to make sure you are configuring the correct drive.
n - new partition
p - make it a primary partition
<enter> - accept the default start sector (should be 2048)
<enter> - accept the default end sector (should be the end of the hard drive)
t - change the type of the partition
fd - make it a Linux RAID autodetect
p - verify all of your settings are correct before writing anything
w - write your changes to the file

This will write the new partition table, exit fdisk, and return you to the command line.  Execute sudo partprobe so your system recognizes the new partition without a reboot.
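If you ever need to script this instead of driving fdisk interactively, sfdisk can create the same single whole-disk RAID partition from a one-line description. A sketch follows; since writing a partition table to the wrong disk is unrecoverable, the run() wrapper below only prints each command:

```shell
# Print-only stand-in: each line below is echoed, not executed.
# Triple-check that /dev/sda really is the new, empty drive first.
run() { echo "+ $*"; }

# One partition spanning the whole disk, type fd (Linux RAID autodetect):
run "echo type=fd | sudo sfdisk /dev/sda"
# Re-read the partition table so the kernel sees /dev/sda1 immediately:
run "sudo partprobe /dev/sda"
```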

Tell mdadm that the drive is now available:
sudo mdadm --add /dev/md0 /dev/sda1

Your data from the other 3 drives will now be rebuilt onto the new sda1 partition.  This will take some time, but can be monitored:
watch cat /proc/mdstat

It is important to leave your machine on and uninterrupted until the rebuilding process is complete.
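If you just want the progress numbers rather than the whole status screen, the recovery line can be grepped out. Here's a self-contained sketch run against a sample of what /proc/mdstat looks like mid-rebuild (the figures in the sample are made up; in real use, read the live file instead):

```shell
# Hypothetical /proc/mdstat contents during a rebuild.
# In real use: mdstat=$(cat /proc/mdstat)
mdstat='md0 : active raid5 sda1[4] sdb1[1] sdc1[2] sdd1[3]
      [==>..................]  recovery = 12.6% (246123456/1953382400) finish=289.5min speed=98123K/sec'

# Extract the percent complete and the estimated time remaining:
progress=$(printf '%s\n' "$mdstat" | grep -o 'recovery = [0-9.]*%')
eta=$(printf '%s\n' "$mdstat" | grep -o 'finish=[0-9.]*min')
echo "$progress, $eta"
```

With the sample above, this prints `recovery = 12.6%, finish=289.5min`.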

Aren't RAID 5s a beauty?  I love having automatic drive-failure protection... assuming no more than 1 drive fails at a time.  I hope you found this useful.  If you have any questions, feel free to post in the comments below.

Next up will be to create a RAID 1 using my existing system drive and a spare unused drive I've had sitting around.... without losing any data.  Should be fun!

Tuesday, June 3, 2014

Checking and repairing a RAID in Linux

Recently I've been having a weird issue: after my computer has sat unused for a while, I come back to a black screen with a blinking "_" in the top left corner.  The only way I've found to recover is to issue Alt+Prt Sc+REISUB to force an emergency file system sync and reboot (click the link for details on all the inputs).  Once the machine was back up, I started researching the cause by checking out dmesg and kern.log.  I also ran some smartctl tests and noticed there were some bad blocks on my RAID 5 (4x2TB).  I started down the rabbit hole of repairing bad blocks, only to find out I could be causing more harm than good.  I vaguely remember attempting this before on a non-RAID disk and ending up with more unusable blocks than when I started.  Before doing too much damage to my RAID, I decided to do some more research.  It turns out that with a Linux software RAID (mdadm), I can easily find and repair my issues using one simple command:

echo check | sudo tee /sys/block/md0/md/sync_action

(A plain sudo echo 'check' > ... won't work here: the redirection into /sys is performed by your unprivileged shell, not by sudo, so tee is used to do the write as root.)

Of course, my RAID is on md0, so change this if your array lives on a different md device.  It is wise to do this while the volume is not mounted (sudo umount /dev/md0); otherwise you risk damage.  This command starts the check but does not report its progress.
To check up on the progress, issue:

watch cat /proc/mdstat

This will take a long time, depending on the size of your drives; mine started out with ~290 minutes to finish.  To quit watching, Ctrl+C.

To pause the check:

sudo /usr/share/mdadm/checkarray -x /dev/md0

And to resume it:

sudo /usr/share/mdadm/checkarray -a /dev/md0

Once it has completed, check the mismatch count:

cat /sys/block/md0/md/mismatch_cnt

If it returns 0, you're all set and your RAID array is as consistent as it can be.  If it returns something other than 0, you can synchronize the blocks by issuing:
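As a small sketch of that zero/non-zero decision (the count is hard-coded here so the logic can be tried without a RAID; in real use it comes from the sysfs file above):

```shell
# In real use: mismatch_cnt=$(cat /sys/block/md0/md/mismatch_cnt)
mismatch_cnt=0

# Zero means the check found no inconsistent blocks; anything else
# means a repair pass is worth running.
if [ "$mismatch_cnt" -eq 0 ]; then
    echo "array is consistent"
else
    echo "$mismatch_cnt mismatched blocks found - run a repair"
fi
```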

echo repair | sudo tee /sys/block/md0/md/sync_action
watch cat /proc/mdstat

And, once the repair is complete, run the check again:

echo check | sudo tee /sys/block/md0/md/sync_action
watch cat /proc/mdstat

For more info, check out the Thomas Krenn Wiki.

Thursday, April 24, 2014

Make magnet links work in Xubuntu

When trying to open magnet links in Xubuntu, you will sometimes get an error.  The same underlying problem shows up elsewhere; for example, when I searched in Catfish and clicked a folder, I got this:
"Unable to detect the URI-scheme of /home/user/folder/folder".

You might also get an error in Chrome when trying to open a magnet link to a torrent.  For some reason, Firefox handles magnet links fine (it probably uses gnome-open instead of the system's opener by default).

To fix the problem, edit /usr/bin/xdg-open: sudo gedit /usr/bin/xdg-open

In there, find the line that looks like this:

if [ x"$DE" = x"" ]; then

And add these two lines directly below it:

    #xdg-open workaround for bug #1173727:
    DE=gnome
This forces Xubuntu to think you are using the Gnome Desktop Environment, so it will use gnome-open instead of exo-open.  When Xubuntu detects XFCE, it calls exo-open "$1", which is not capable of handling magnets.  This workaround will get you going until the bug has been fixed.
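To see which application your desktop currently has registered for magnet links, you can query xdg-mime (guarded here in case xdg-utils isn't installed, and with a fallback message if nothing is registered):

```shell
# Print the .desktop file registered for the magnet URI scheme, if any.
if command -v xdg-mime >/dev/null 2>&1; then
    xdg-mime query default x-scheme-handler/magnet || echo "no handler registered"
else
    echo "xdg-mime not available"
fi
```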