Tag Archives: bash

Restore Thunderbird Email

Recently I was forced to restore one of my email accounts, one I access with Thunderbird as my main client. But first a bit of background is in order. The domain that held the account was on the same provider, 1and1, but in a separate package for billing. I simply wanted all my domains on the same account, and that required a transfer. I initiated all the steps and soon the transfer was under way. I really didn't think the email would be disturbed, since the transferee and the transferrer were one and the same: me. Boy, was I wrong. No sooner had the email arrived stating the transfer was complete than mail to the other account stopped. Worse, the switch erased all of the current content from Thunderbird and every other client. That is how I came to need email recovery. I thought the cloud was safe, and it probably is, unless you transfer your domain.

So the recovery process had begun. I navigated to the folders where Thunderbird stores the email, and was very happy to see this directory listing.

johnny@computer ~/.thunderbird/ifswkzqc.default/ImapMail/imap.1and1.com $ ls -l
total 72240
drwxr-xr-x 2 johnny johnny     4096 Aug 17 05:30 Archives-1.sbd
-rw-r--r-- 1 johnny johnny     1238 Aug 17 05:42 Archives.msf
-rw-r--r-- 1 johnny johnny     1137 Jan 15  2014 Drafts.msf
-rw-r--r-- 1 johnny johnny 58492214 Aug 20 16:01 INBOX
-rw-r--r-- 1 johnny johnny     9260 Aug 20 16:10 INBOX.msf
-rw-r--r-- 1 johnny johnny     1140 Jan 15  2014 Junk.msf
-rw-r--r-- 1 johnny johnny       25 Jun 15 04:06 msgFilterRules.dat
-rw-r--r-- 1 johnny johnny     2628 Aug 17 05:31 Sent Items.msf
-rw-r--r-- 1 johnny johnny     1233 Aug 17 05:42 Sent.msf
-rw------- 1 johnny johnny   843873 Aug 18 01:27 Spam
-rw-r--r-- 1 johnny johnny     3047 Aug 19 18:03 Spam.msf
-rw-r--r-- 1 johnny johnny     1241 Jan 15  2014 Templates.msf
-rw------- 1 johnny johnny 14448469 Aug 20 16:00 Trash
-rw-r--r-- 1 johnny johnny    24495 Aug 20 16:06 Trash.msf

Notice the INBOX file and its size of almost 59 MB. This is the current state of my folders as I write, yet only four messages are accessible in the INBOX. When the account was switched, I can only guess a permission flag was set, which is why the rest are not accessible. They probably should have been deleted too, so I was just glad to see a file large enough to contain something. I opened the file in Writer and the emails were indeed there, but if you have ever done this, you know they are not in the friendliest of formats. So some work was in order.
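Mbox files like this INBOX are just plain text, with each message starting on a line that begins with "From ". A quick way to see how many messages a file actually holds is to count those separator lines. This is only a sketch using a tiny stand-in file, not my real ~59 MB INBOX:

```shell
# Create a small stand-in mbox with two messages
# (the real file would be the INBOX shown above).
printf 'From alice@example.com Mon Aug 18 10:00:00 2014\nSubject: one\n\nbody\n' > INBOX.test
printf 'From bob@example.com Mon Aug 18 11:00:00 2014\nSubject: two\n\nbody\n' >> INBOX.test

# Each message in mbox format starts with a "From " line.
grep -c '^From ' INBOX.test
```

Run against a real mbox, the count tells you whether the file is worth recovering before you open it in an editor.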

Step one was to copy the originals to preserve anything of value. The next step was to try ImportExportTools 3.0.1, an extension for Thunderbird, to recover the email. The first try failed, since importing IMAP email is unsupported. That brings us to step two: I copied the files again, this time to the Local Folders directory.
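The preserving copy can be as simple as cp -a, which keeps timestamps and permissions intact. A sketch with throwaway example files and paths (substitute the real profile directory for your own setup):

```shell
# Stand-in for the real mbox file; names here are examples only.
mkdir -p mail-backup
printf 'From alice@example.com Mon Aug 18 10:00:00 2014\n\nbody\n' > INBOX

# -a preserves mode and timestamps; date-stamp the copy so
# repeated backups do not overwrite each other.
cp -a INBOX "mail-backup/INBOX.$(date +%F)"
ls mail-backup
```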

johnny@computer ~/.thunderbird/ifswkzqc.default/Mail/Local Folders $ ls -l
total 56668
drwxr-xr-x 2 johnny johnny     4096 Aug 17 06:16 Inbox
-rwxrwxrwx 1 johnny johnny 58492214 Aug 17 05:30 INBOX    (After the chmod command.)
-rw-r--r-- 1 johnny johnny     1806 Aug 20 00:08 Inbox.msf
drwxr-xr-x 2 johnny johnny     4096 Aug 20 00:05 Inbox.sbd
-rw-r--r-- 1 johnny johnny        0 Jan 15  2014 Junk
-rw-r--r-- 1 johnny johnny     2349 Aug 20 16:40 Junk.msf
-rw-r--r-- 1 johnny johnny       25 Aug 19 23:34 msgFilterRules.dat
-rw------- 1 johnny johnny        0 Aug 20 00:06 Trash
-rw-r--r-- 1 johnny johnny     2583 Aug 20 00:08 Trash.msf
-rw-r--r-- 1 johnny johnny        0 Jan 15  2014 Unsent Messages
-rw-r--r-- 1 johnny johnny     2866 Aug 19 23:44 Unsent Messages.msf

After the file copy I changed the file permissions, giving it the easy global 777. I wanted the file completely unencumbered by any permission; in hindsight, simple read access (644) would likely have been enough. Running the command below from the same directory changed the permissions on the file.

johnny@computer ~/.thunderbird/ifswkzqc.default/Mail/Local Folders $ chmod 777 INBOX

After I ran the command I re-opened Thunderbird, hoping to see the messages in Local Folders. But no messages were there. I was more than a bit disappointed, but thought I would give the extension one more try. To access the tools, right-click the account and you get this menu.

ImportExportTools 3.0.1 by Paolo "Kaosmos"


I chose Import mbox file as shown above, and this time the import was a success. A big thanks goes out to Paolo "Kaosmos" for making the extension available; I will be submitting a donation for this great extension. I assume it was moving the INBOX file out of the ImapMail folder structure and into a local folder that allowed this to work.

johnny@computer ~/.thunderbird/ifswkzqc.default/Mail/Local Folders $ ls -l

total 56668
drwxr-xr-x 2 johnny johnny     4096 Aug 17 06:16 Inbox
-rwxrwxrwx 1 johnny johnny 57937587 Aug 17 05:30 INBOX
-rw-r--r-- 1 johnny johnny     1806 Aug 20 00:08 Inbox.msf
drwxr-xr-x 2 johnny johnny     4096 Aug 20 00:05 Inbox.sbd
-rw-r--r-- 1 johnny johnny        0 Jan 15  2014 Junk
-rw-r--r-- 1 johnny johnny     2434 Aug 20 17:10 Junk.msf
-rw-r--r-- 1 johnny johnny       25 Aug 19 23:34 msgFilterRules.dat
-rw------- 1 johnny johnny        0 Aug 20 00:06 Trash
-rw-r--r-- 1 johnny johnny     2583 Aug 20 00:08 Trash.msf
-rw-r--r-- 1 johnny johnny        0 Jan 15  2014 Unsent Messages
-rw-r--r-- 1 johnny johnny     2866 Aug 19 23:44 Unsent Messages.msf

You may notice the file size is almost the same as the current file; I have trimmed it down a bit, but it is basically the same. That was all there was to recovering my email, and a lesson that was almost learned the hard way.


Upgraded to Linux Mint 17 Qiana

Last week I upgraded Mint to Qiana. I don't like to upgrade my main laptop that often, but when I saw that version 17 would be supported until 2019, I had to do it. I downloaded the ISO and prepared a flash drive for the install. I was ready to take the plunge, but during a quick inventory of installed programs I started thinking of all the little customizations I had done. Not to mention most of the software I use on a regular basis is not in the default Mint image.

I still wanted to upgrade, but not by the traditional method of installing clean and restoring backups. This upgrade was done in almost one command: apt-get dist-upgrade. First, though, a few commands are needed to prep the system for Qiana. For the upgrade to succeed, you need to point apt to the new repositories, which is done with the following commands run under sudo. Thanks go to this post for the clean instructions. It is another of the many reasons I love Linux: when you want to know something, almost every time someone else has done it too and written about it. The Linux community is world-wide and really does share its wealth of knowledge; you just have to find it. Each command is run as a single line, as sudo or su.

sudo sed -i 's/saucy/trusty/' /etc/apt/sources.list
sudo sed -i 's/petra/qiana/' /etc/apt/sources.list
sudo sed -i 's/saucy/trusty/' /etc/apt/sources.list.d/official-package-repositories.list
sudo sed -i 's/petra/qiana/' /etc/apt/sources.list.d/official-package-repositories.list
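If you would like sed to keep a safety copy of each file before rewriting it, -i accepts a backup suffix. A small demonstration on a throwaway stand-in file (the real targets are the sources.list files above):

```shell
# Throwaway stand-in for /etc/apt/sources.list.
printf 'deb http://archive.ubuntu.com/ubuntu/ saucy main\n' > sources.test

# -i.bak edits in place, but first saves the original as sources.test.bak.
sed -i.bak 's/saucy/trusty/' sources.test

cat sources.test      # now points at trusty
cat sources.test.bak  # original saucy line preserved
```

If the upgrade goes sideways, the .bak files let you restore the old repository lines in seconds.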

Once this is completed, you can proceed with the upgrade. But first, some words of caution:
  1. Ensure you have continuous power. This means for laptops to be plugged in.
  2. Set the computer to not sleep. Screen is okay but definitely not the main unit.
  3. Give yourself plenty of time for this to finish. I didn’t time it but it did take a couple of hours.
  4. Have backups ready just in case. You do this anyway, right? Are they tested?
  5. Knowing how to recover if something goes south will always help too.
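For point 4, even a quick date-stamped archive of the directories you care about is better than nothing. This is only a sketch with example names, not a full backup strategy:

```shell
# Example data standing in for a real home directory.
mkdir -p sample-home/Documents
echo 'important notes' > sample-home/Documents/notes.txt

# Create a date-stamped compressed archive of the directory.
tar -czf "home-backup-$(date +%F).tar.gz" sample-home

# Verify the archive is readable and lists the expected files
# (an untested backup is barely a backup at all).
tar -tzf "home-backup-$(date +%F).tar.gz"
```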

Once you are ready, from a terminal enter these commands in order.

sudo apt-get update
sudo apt-get dist-upgrade
sudo apt-get upgrade

This starts the upgrade process; once it finished, I ran the last command again. You may be wondering how the upgrade performed for me. Somewhere in the process it failed on one item and stopped. I rebooted and was given a nice green screen, but without the login section. I switched to a virtual terminal by pressing Ctrl+Alt+F2, logged in, and ran sudo apt-get clean, followed by sudo apt-get dist-upgrade. At that point the upgrade restarted, finishing in about half an hour. The next reboot brought me to Linux Mint 17 Qiana. I am glad I knew a few commands, and especially the alternate method of logging in.

Mint 17 is running without any problems but one. It is not a deal breaker, nor does it interfere with anything I do, but it is there and I haven't tackled removing it as of yet. On my panel I have two network indicators instead of the normal one. They both show identical information, and if you stop one, it stops the other. Strange, yes, but as I mentioned, not a big deal. If anyone knows of a solution, I am all ears.

Update: I was checking items in the Startup Applications menu under Preferences, and I noticed two networking items. Yes, I removed one, and when I rebooted the laptop the extra network item on the panel was gone and networking still functions. That was easy. :)

That's all there is to upgrading Linux Mint 16 Petra to Linux Mint 17 Qiana. The problems were indeed small, and I would not hesitate to go this route again. As long as a system is not experiencing problems before the process, I would not expect problems during the upgrade.


Linux Openssl Saves the Day

Recently one of my co-workers needed to create a third-party certificate for wireless authentication. Okay, that's easy enough, right? Well, it turned out not to be as easy for her as it should have been. She had been hitting the same error on every attempt at step 1 of the Cisco instructions for two days. Needless to say, she was a bit frustrated. The cert was for Cisco wireless controllers, so she was following their instructions and using a tool from them to generate the cert request.

When she showed me the error I almost instantly knew the answer. This is the error she showed me as best as I can remember.

Cannot find file /usr/bin/openssl/openssl.conf

Clearly we have a Linux or BSD path here, but she was on a Windows 7 machine, as per her instructions. Looking at the folder structure around the openssl executable, I could see it mimicked the Linux file structure, sans /usr. Either the file was missing or the program was not ported correctly. Since I carry Parted Magic with me all the time, I booted a laptop with it, and sure enough openssl was included.

I opened a terminal window, changed to the directory (cd /usr/bin), and ran openssl from there. I was able to create the two files she needed for the request, and once she processed that with GoDaddy, I created her certificate. Linux to the rescue once again, and all it took was noticing that the error had nothing to do with Windows, plus about five minutes of my time.
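For the curious, the two files (a private key and a certificate signing request) can be generated by hand with openssl's req command. This is a sketch with made-up subject values, not the Cisco procedure; the -config flag mentioned in the comment is how you would point openssl at a specific configuration file if it complains it cannot find one, as in the error above:

```shell
# Generate a new 2048-bit key and a certificate signing request in one step.
# -nodes leaves the key unencrypted; -subj avoids the interactive prompts.
# Add e.g. -config /path/to/openssl.cnf if openssl cannot find its config.
openssl req -new -newkey rsa:2048 -nodes \
    -keyout wlc.key -out wlc.csr \
    -subj "/C=US/O=Example/CN=wlc.example.com"

# Inspect the finished request before sending it to the CA.
openssl req -in wlc.csr -noout -subject
```

The .csr file is what goes to the certificate authority (GoDaddy in this case); the .key file stays on the device.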


Chown Command Primer

I won't attempt to cover everything there is to know about the command chown, but it is important to know the basics. In the past I changed distributions quite often, and in doing so found files and folders still owned by the former user account. Enter the command chown to the rescue. This command allows you to change the ownership of a file or folder, and you can change the group at the same time. It should go without saying that it should be run as root, or at least with sudo. Here is an example of its usage.

Scenario: You have a folder of images and not all sub-folders and images allow you access. As root or sudo run: chown -Rv owner:group /home/user/Pictures/*.jpg

Breaking it down, we have the command chown followed by two options: -R, for recursive, acting on files and folders including sub-folders, and -v, for verbose output on the screen. Next comes owner:group; replace these values with what matches your situation (if the owner part is omitted, the owner is not changed). This is followed by the path on which to act. In this case we run the command on the JPEG files, and any matching folders, inside the user's Pictures folder.
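Here is a safe way to try the syntax without root: chown to your own user and group, which is always permitted and effectively a no-op. The directory and file names are invented for the demo:

```shell
# Build a small tree to operate on.
mkdir -p pics/vacation
touch pics/a.jpg pics/vacation/b.jpg

# Recursively (re)assign owner and group; -v reports each file touched.
# Using our own user and group makes this runnable without root.
chown -Rv "$(id -un):$(id -gn)" pics
```

With root, the same command line works for handing a whole tree over to another user after a distribution change.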

This is a very simple example and not representative of all you can do with chown. Follow the link above for a man page or run man chown in a terminal to load the man page. Online pages may give more information and will have links to related or advanced usage.

When you are faced with the task of changing owners/groups on many files and or folders, ditch the graphical interface and use the chown command. Learning even just the basics will prove itself to be a valuable time saver.



Update and Install With the Command Line Using Apt

We have demonstrated GUI tools to install and update packages or software, but what about the command line? The command line holds the power of Linux; it gives life to most of the GUI tools for updating and installing packages, which are front ends to it. I like using apt and do so frequently, but I admit I still have to turn to the GUI at times, simply because I do not know the commands as well as I would like, nor all of the package names.

Updating the package list should come first most of the time, and it is run with super-user privileges. In Debian-based distributions:

sudo apt-get update

To upgrade all available updates run:

sudo apt-get upgrade (You will be asked to confirm Y/N.)

If you need to update the distribution (after an apt-get upgrade you may see some packages held back):

sudo apt-get dist-upgrade (You will be asked to confirm Y/N.)

Installing a single or multiple packages is easy with:

sudo apt-get install package-name(s) (Separate each package name with a space.)

These are the four commands I use the most, but there are more available. To see the help text run as sudo or normal user:

apt-get -h or apt-get --help

This is the output of the help file on my system:

apt for amd64 compiled on Mar 13 2013 21:25:25
Usage: apt-get [options] command
apt-get [options] install|remove pkg1 [pkg2 ...]
apt-get [options] source pkg1 [pkg2 ...]

apt-get is a simple command line interface for downloading and
installing packages. The most frequently used commands are update
and install.

update - Retrieve new lists of packages
upgrade - Perform an upgrade
install - Install new packages (pkg is libc6 not libc6.deb)
remove - Remove packages
autoremove - Remove automatically all unused packages
purge - Remove packages and config files
source - Download source archives
build-dep - Configure build-dependencies for source packages
dist-upgrade - Distribution upgrade, see apt-get(8)
dselect-upgrade - Follow dselect selections
clean - Erase downloaded archive files
autoclean - Erase old downloaded archive files
check - Verify that there are no broken dependencies
changelog - Download and display the changelog for the given package
download - Download the binary package into the current directory

-h This help text.
-q Loggable output - no progress indicator
-qq No output except for errors
-d Download only - do NOT install or unpack archives
-s No-act. Perform ordering simulation
-y Assume Yes to all queries and do not prompt
-f Attempt to correct a system with broken dependencies in place
-m Attempt to continue if archives are unlocatable
-u Show a list of upgraded packages as well
-b Build the source package after fetching it
-V Show verbose version numbers
-c=? Read this configuration file
-o=? Set an arbitrary configuration option, eg -o dir::cache=/tmp
See the apt-get(8), sources.list(5) and apt.conf(5) manual
pages for more information and options.
This APT has Super Cow Powers

As you can see, there are many options just in the help text. This simple command line tool is the basis for almost all Debian-based package management. I have never touched the full power of apt, but I am still learning.
This short post only scratches the surface of the power and flexibility of one tool. And there are several more, each with broad capabilities, just waiting for you.




How to View System Logs in Real-Time

If you are having an issue, especially with hardware or something else reproducible, here is a way to see just what is being written to the system log in real time. Perhaps this will aid in troubleshooting the issue. It has helped me in the past to identify a device's mount point when it would not show with the mount command. I ran across this command a few years ago in a forum and have saved it ever since. I want to share it with you here.

The command is tail, run in a separate terminal window if needed, as root or with escalated privileges via sudo.

# tail -f /var/log/syslog

That’s all there is to it. Of course adjust the path if your logs are in a different directory and read the man page if you want or need additional options. You can expect some output like this:

johnny@polarbear ~ $ sudo tail -f /var/log/syslog
[sudo] password for johnny:
Jan 17 20:29:25 polarbear rtkit-daemon[1811]: Successfully made thread 2111 of process 1809 (n/a) owned by '1000' RT at priority 5.
Jan 17 20:29:25 polarbear rtkit-daemon[1811]: Supervising 3 threads of 1 processes of 1 users.
Jan 17 20:29:29 polarbear NetworkManager[877]: <info> (eth1): IP6 addrconf timed out or failed.
Jan 17 20:29:29 polarbear NetworkManager[877]: <info> Activation (eth1) Stage 4 of 5 (IPv6 Configure Timeout) scheduled...
Jan 17 20:29:29 polarbear NetworkManager[877]: <info> Activation (eth1) Stage 4 of 5 (IPv6 Configure Timeout) started...
Jan 17 20:29:29 polarbear NetworkManager[877]: <info> Activation (eth1) Stage 4 of 5 (IPv6 Configure Timeout) complete.
Jan 17 21:17:01 polarbear CRON[2279]: (root) CMD (   cd / && run-parts --report /etc/cron.hourly)
Jan 17 21:48:19 polarbear goa[2395]: goa-daemon version 3.6.0 starting [main.c:112, main()]
Jan 17 22:17:01 polarbear CRON[2728]: (root) CMD (   cd / && run-parts --report /etc/cron.hourly)
Jan 17 23:17:01 polarbear CRON[3044]: (root) CMD (   cd / && run-parts --report /etc/cron.hourly)
Jan 17 23:31:20 polarbear anacron[3408]: Anacron 2.3 started on 2013-01-17
Jan 17 23:31:20 polarbear anacron[3408]: Normal exit (0 jobs run)
Jan 17 23:31:27 polarbear kernel: [10938.411516] usb 1-1.2: USB disconnect, device number 3
Jan 17 23:31:32 polarbear kernel: [10942.680548] usb 1-1.2: new full-speed USB device number 5 using ehci_hcd
Jan 17 23:31:32 polarbear kernel: [10942.775553] usb 1-1.2: New USB device found, idVendor=046d, idProduct=c52f
Jan 17 23:31:32 polarbear kernel: [10942.775563] usb 1-1.2: New USB device strings: Mfr=1, Product=2, SerialNumber=0
Jan 17 23:31:32 polarbear kernel: [10942.775569] usb 1-1.2: Product: USB Receiver
Jan 17 23:31:32 polarbear kernel: [10942.775573] usb 1-1.2: Manufacturer: Logitech
Jan 17 23:31:32 polarbear kernel: [10942.778371] input: Logitech USB Receiver as /devices/pci0000:00/0000:00:1a.0/usb1/1-1/1-1.2/1-1.2:1.0/input/input13
Jan 17 23:31:32 polarbear kernel: [10942.778814] hid-generic 0003:046D:C52F.0003: input,hidraw0: USB HID v1.11 Mouse [Logitech USB Receiver] on usb-0000:00:1a.0-1.2/input0
Jan 17 23:31:32 polarbear mtp-probe: checking bus 1, device 5: "/sys/devices/pci0000:00/0000:00:1a.0/usb1/1-1/1-1.2"
Jan 17 23:31:32 polarbear kernel: [10942.781280] input: Logitech USB Receiver as /devices/pci0000:00/0000:00:1a.0/usb1/1-1/1-1.2/1-1.2:1.1/input/input14
Jan 17 23:31:32 polarbear kernel: [10942.782247] hid-generic 0003:046D:C52F.0004: input,hiddev0,hidraw1: USB HID v1.11 Device [Logitech USB Receiver] on usb-0000:00:1a.0-1.2/input1
Jan 17 23:31:32 polarbear mtp-probe: bus: 1, device: 5 was not an MTP device
^C johnny@polarbear ~ $

You can see where I disconnected and reconnected both my power cable and the USB receiver for the mouse. Pressing Ctrl+C stops the output. Put this tip away for the one time you may need it, and if you ever do, please come back and tell us how you used it and whether it helped you solve a problem.
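Under the hood, -f simply keeps reading as the file grows, and you can prove that to yourself on any ordinary file. A small self-contained demo (timeout is only there so the follow ends on its own after a second):

```shell
# Build a small log file to follow.
printf 'line one\nline two\nline three\n' > demo.log

# -n 2 shows the last two lines, -f keeps following new writes;
# timeout stops the follow so this demo terminates by itself.
timeout 1 tail -n 2 -f demo.log || true  # non-zero exit just means the second elapsed
```

On a busy syslog you would typically pipe this through grep to watch only the lines you care about.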


How To Get Information About Your CPU Using Linux and Bash


So what do you do when you want to find out information on a CPU and you don’t want to wade through thousands of links? Using most Linux distributions whether they are installed or running live offers you some simple commands for identifying the model and more information.

The first example is extremely easy and provides a lot of information about the CPU. Open a terminal (I am using Bash) and type lscpu, and you will get something similar to this example.

lscpu command

As you can see, there is a lot of information here. You now know it is GenuineIntel in this case, whether it handles 32-bit or 64-bit, what speed it runs at, and more.

But what if you already know it is a dual core Intel but are not sure what model it is? Can you run a command for just this information? The answer is yes. Open a terminal window and type this command:

grep "model name" /proc/cpuinfo

and you will get output like this:

grep "model name" /proc/cpuinfo

Now you have the make, type, model, and rated speed. These commands work for virtually all CPUs and almost all Linux distributions. (I can't say all here, since I have not tested every CPU and distribution, but it should hold true.) There may not be many times this information is really needed, but when the need arises you now have a way that is much faster than searching for the model specs of the computer.
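The same /proc/cpuinfo file answers other quick questions too. For example, counting the processor stanzas gives the number of logical cores (this is Linux-only, since /proc/cpuinfo does not exist elsewhere):

```shell
# Each logical CPU gets its own "processor : N" stanza in /proc/cpuinfo,
# so counting those lines counts logical cores.
grep -c '^processor' /proc/cpuinfo
```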

Which leads us to another short command using grep. What if the model sticker on your computer or laptop is gone or rubbed away, and you need the model number to identify it and troubleshoot a hardware problem? The answer is easy, but this time it involves running a command as root or with sudo, with its output piped to grep. Open a terminal and switch to root, or use sudo as in this example.

sudo dmidecode | grep Product

Now you have the make and model of the computer, which can greatly speed up troubleshooting, finding parts, etc.

I don’t classify these commands as anything more than time savers but sometimes time is already working against you and you need the information fast. If you have some great one liners post one or many in the comments. I like reading about what others use for simple yet important tasks.


Bash Command Line Blunder Using rm Not to be Repeated

This is a post the veterans of Linux can laugh about and the newbies better pay attention. It started last week when I announced to my sister-in-law I was going to give her a laptop. Of course said laptop would be sporting Linux. I spent much thought on the specs and distro combination and finally arrived at PCLinuxOS LXDE edition.

The laptop is a not-so-shiny HP ZE5385US sporting a 2.66 GHz Pentium 4 and an entire 1 GB of RAM. Not what you would call a screamer, but it will still get the job done, and it has lasted nine years with only one motherboard change-out, due to some shoddy soldering by an inept craftsman. Yes, that was my blunder too, and it only cost me $85. Thanks, eBay, for coming through on this one.

So it was paramount that the laptop be outfitted with a usable OS, one still in the modern category and easy for a non-technical, never-used-Linux-before person. I chose PCLinuxOS LXDE since it comes with some major benefits: it is quick and low on resources, it is a rolling release so updates are easy, it has a large upstream as well as its own repositories, and it has a good community behind it for support.

So now the story of why has been told, and if you have read this far I'm sure you want to know about the blunder in the title of the post, so here it goes. I wasn't fully awake, still in the mid-morning Saturday fog, when my wife said, "She will be here in about 30 minutes," or something along those lines. Well, I did not have a CD in the house to burn, and I wanted to send her back with a live environment too, just in case support was needed. So I popped in a 2 GB thumb drive, only to find ArchBang Linux on it. A very cool distro, I might add. Instead of just letting Unetbootin (in most repositories for almost all distros) overwrite it, I thought I would head to the command line (CLI) and get the job of clearing the files done first. This is the command I used, as root of course:

desert-bear johnny # rm -R * /media/ARCHBANG/       #This is where the veterans laugh and the newbies need to pay attention.

Looks innocent enough, right? Mind you, I have read the man page for rm, and I have even read the warning about using wildcard characters with it. But in my haste I ran the command as-is and was promptly met with a message that my Dropbox folder was no longer in sync; would I like to re-sync? I knew instantly what I had done, and a click on Nautilus proved it. My entire home directory was gone. Vanished into computer oblivion. Unlike Windows, which prepares to delete when faced with such mass deletions, and prepares to delete, and prepares to delete, Linux and the rm command are swift and ruthless, cutting through my home folders like warm Swiss cheese.

And to add insult to injury take a look at this screen capture from a few minutes ago.

Thumb Drive with ArchBang Linux

Notice how ArchBang survived. Okay veterans of Linux may laugh again here too. Noobs too if you want to.

To break it down a bit, here is what happened and why. The rm command is short for remove, and it does exactly what it is told. If you own the files (folders are files in Linux), you can rm them at will. By adding -R I told the command to delete every folder and sub-folder it finds. It found quite a few, since I added a space followed by the wildcard *. Before rm ever ran, the shell expanded that * to every visible item in my current directory, my home folder, and handed the whole list to rm, which zipped by in the terminal window showing only errors. Linux will not show a completed command's output unless told to, usually with a -v switch. The errors were the lack of ownership on the thumb drive containing ArchBang.
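You can see the shell's role in this without deleting anything: let echo show you what * actually expands to. Note that the hidden file never appears. The names here are throwaway examples:

```shell
# A scratch directory with one visible and one hidden file.
mkdir -p glob-demo
cd glob-demo
touch visible.txt .hidden.txt

# The shell expands * before the command ever runs; dot-files are skipped.
echo *        # prints: visible.txt
```

Swap echo for rm in your head and it becomes clear why my home directory went first, and why the dot-files survived.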

So, to end the story, my sister-in-law did not get the laptop on time as promised. I spent the entire day and most of the night letting recovery programs do their work, and I now have thousands of small text files. I have backups for most of the important files, like financials, but I did lose some items, like recent pictures and some creative writing.

On a last note, the real saving grace is that the shell's * does not match dot-files or dot-folders. These hidden items have to be named explicitly, or matched with another pattern, before they are deleted. Lucky for me, too, since this is the meat of /home when it comes to preferences and things you set long ago but no longer think about. I also should have entered this command right before the rm command:

desert-bear johnny # cd /media/ARCHBANG

and everything would have been a bit better. I would still have lost some mount points, and for all I know maybe more. But I can assure you I am not going to test it without adding the -i option. I hope you enjoyed reading this, had a good laugh, and learned from my mistake. I laughed, and you should too.