Tag Archives: bash

Upgraded to Linux Mint 17 Qiana

Last week I upgraded Mint to Qiana. I don’t like to upgrade my main laptop that often, but when I saw that version 17 would be supported until 2019, I had to do it. I downloaded the ISO and prepared a flash drive for the install. I was ready to take the plunge, but during my quick inventory of installed programs, I started thinking of all the little customizations I had done. Not to mention, most of the software I use on a regular basis is not in the default Mint image.

I still wanted to upgrade, but not by the traditional method of installing clean and restoring backups. This upgrade was done in almost one command: apt-get dist-upgrade. First, though, a few commands were needed to prep the system for Qiana. For this to be successful, you need to point apt to the new repositories. This is done with the following commands, each run as a single line under sudo or su. My thanks go to this post for the clean instructions. This is another of the many reasons I love Linux: when you want to know something, almost every time someone else has already done it and written about it. The Linux community is world-wide and really does share its wealth of knowledge. You just have to find it.

sudo sed -i 's/saucy/trusty/' /etc/apt/sources.list
sudo sed -i 's/petra/qiana/' /etc/apt/sources.list
sudo sed -i 's/saucy/trusty/' /etc/apt/sources.list.d/official-package-repositories.list
sudo sed -i 's/petra/qiana/' /etc/apt/sources.list.d/official-package-repositories.list
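If you are nervous about running sed -i against system files (I was), you can rehearse the substitution on a scratch copy first. This is just a sketch; the sample repository line below is hypothetical, but the sed expression is the same one used above:

```shell
# Rehearse the codename swap on a throwaway file before touching /etc/apt
printf 'deb http://packages.linuxmint.com/ petra main upstream import\n' > /tmp/sources.list.test
sed -i 's/saucy/trusty/; s/petra/qiana/' /tmp/sources.list.test
cat /tmp/sources.list.test
```

As a bonus, sed’s -i flag accepts a suffix, so -i.bak would edit in place and keep a backup copy of the original for you.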
Once this is completed, you can proceed with the upgrade. But first some words of caution.
  1. Ensure you have continuous power. This means for laptops to be plugged in.
  2. Set the computer to not sleep. Letting the screen sleep is okay, but definitely not the main unit.
  3. Give yourself plenty of time for this to finish. I didn’t time it but it did take a couple of hours.
  4. Have backups ready just in case. You do this anyway, right? Are they tested?
  5. Knowing how to recover if something goes south will always help too.

Once you are ready, from a terminal enter these commands in order.

sudo apt-get update
sudo apt-get dist-upgrade
sudo apt-get upgrade

This starts the upgrade process; once it finished, I ran the last command again. You may be wondering just how the upgrade performed for me. Somewhere in the process my upgrade failed on one item and stopped. I rebooted and was given a nice green screen, but without the login section. I reached a terminal by pressing Ctrl+Alt+F2, logged in, and ran sudo apt-get clean, followed by sudo apt-get dist-upgrade. At this point the upgrade restarted, finishing in about half an hour. The next reboot brought me to Linux Mint 17 Qiana. I am glad I knew a few commands, and especially the alternate method of logging in.

Mint 17 is running without any problems but one. It is not a deal breaker, nor is it interfering with anything I do, but it is there and I haven’t tackled removing it as of yet. On my panel I have two network indicators instead of the normal one. They both show identical information, and if you stop one, it stops the other. Strange, yes. But like I mentioned, not a big deal. If anyone knows of a solution, I am all ears.

Update: I was checking items in the Startup Applications menu under Preferences, and I noticed two networking items. And yes, I removed one; when I rebooted the laptop, the extra network item on the panel was gone and networking still functioned. That was easy. :)

That’s all there is to upgrading Linux Mint 16 Petra to Linux Mint 17 Qiana. The small problems were indeed small, and I would not hesitate to go this route of upgrading again. As long as the system is not experiencing any problems before the process, I don’t expect problems during the upgrade.


Linux Openssl Saves the Day

Recently one of my co-workers needed to create a third-party certificate for wireless authentication. Okay, that’s easy enough, right? Well, it turns out this wasn’t as easy for her as it should have been. She had been running into the same error on every attempt at step 1 of the Cisco instructions for two days. Needless to say, she was a bit frustrated. The cert was for Cisco wireless controllers, so she was following their instructions and using a tool from them to generate the cert request.

When she showed me the error, I almost instantly knew the answer. This is the error she showed me, as best as I can remember:

Cannot find file /usr/bin/openssl/openssl.conf

Clearly we have a Linux or BSD path here, but she was on a Windows 7 machine, as per her instructions. Looking at the folder structure around the openssl exe file, I could see it mimicking the Linux file structure sans /usr. I knew either the file was missing or the exe was not ported correctly. Since I carry Parted Magic with me all the time, I booted a laptop with it and, sure enough, openssl was included.

I opened a terminal window, changed to the directory (cd /usr/bin), and ran openssl from there. I was able to create the two files she needed for the request, and once she processed that with GoDaddy, I created her certificate. Linux to the rescue once again, and all it took was noticing the error had nothing to do with Windows, plus about five minutes of my time.
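For anyone facing the same task, the key and request pair can be generated straight from the openssl command line. This is a sketch; the file names and subject fields below are hypothetical stand-ins, not the values from the Cisco guide:

```shell
# Generate a 2048-bit private key and a certificate signing request in one shot;
# -nodes leaves the key unencrypted, -subj skips the interactive prompts
openssl req -new -newkey rsa:2048 -nodes \
  -keyout /tmp/wlc.key -out /tmp/wlc.csr \
  -subj '/C=US/ST=State/L=City/O=Example Org/CN=wlc.example.com'

# Sanity-check the request before handing it to the certificate authority
openssl req -in /tmp/wlc.csr -noout -subject
```

The .csr file is what goes to the certificate authority; the .key file stays with you and should be protected.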


Chown Command Primer

I won’t attempt to cover everything there is to know about the command chown, but it is important to know the basics. I have in the past changed distributions quite often, and in doing so have found file and folder ownership still assigned to the former user account. Enter the command chown to the rescue. This command allows you to change the ownership of a file or folder, and you can change the group at the same time. It should go without saying that this should be run as root or at least with sudo. Here is an example of its usage.

Scenario: You have a folder of images and not all sub-folders and images allow you access. As root or sudo run: chown -Rv owner:group /home/user/Pictures/*.jpg

Breaking it down, we have the command chown followed by two options. The first, R, is recursive, acting on files and folders including sub-folders; the second, v, is verbose, showing output on the screen. Next we have owner:group; replace these values with what matches your situation. If the owner or group is omitted, that part does not change. This is followed by the path on which to execute the action. In this case we are running the command on everything matching *.jpg within the user’s Pictures folder.
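Here is the scenario played out in /tmp, where it is safe to experiment. Using your own user and group (pulled from id) means no sudo is required; the file names are made up for the demo:

```shell
# Build a small tree of "pictures", then take ownership of everything in it
mkdir -p /tmp/Pictures/vacation
touch /tmp/Pictures/beach.jpg /tmp/Pictures/vacation/sunset.jpg

# -R recurses into sub-folders, -v reports each change on screen
chown -Rv "$(id -un):$(id -gn)" /tmp/Pictures

# Every entry should now list your user and group as owner
ls -lR /tmp/Pictures
```

Running it against a whole directory (rather than a *.jpg glob) guarantees the hidden files and sub-folders are covered too.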

This is a very simple example and not representative of all you can do with chown. Follow the link above for a man page or run man chown in a terminal to load the man page. Online pages may give more information and will have links to related or advanced usage.

When you are faced with the task of changing owners/groups on many files and or folders, ditch the graphical interface and use the chown command. Learning even just the basics will prove itself to be a valuable time saver.



Update and Install With the Command Line Using Apt

We have demonstrated GUI tools to install and update packages, but what about the command line? The command line holds the power of Linux; most of the GUI tools for updating and installing packages are really just front ends to it. I like using apt and do so frequently, but I admit I still turn to the GUI at times, simply because I don’t know the commands as well as I’d like and I don’t know all of the package names.

Updating the package list should come first most of the time, and it is run with super-user privileges. In Debian-based distributions:

sudo apt-get update

To upgrade all available updates run:

sudo apt-get upgrade (You will be asked to confirm Y/N.)

If you need to update the distribution (after an apt-get upgrade you may see some packages held back):

sudo apt-get dist-upgrade (You will be asked to confirm Y/N.)

Installing a single or multiple packages is easy with:

sudo apt-get install package-name(s) (Separate each package name with a space.)

These are the four commands I use the most, but there are more available. To see the help text run as sudo or normal user:

apt-get -h or apt-get --help

This is the output of the help file on my system:

apt for amd64 compiled on Mar 13 2013 21:25:25
Usage: apt-get [options] command
apt-get [options] install|remove pkg1 [pkg2 ...]
apt-get [options] source pkg1 [pkg2 ...]

apt-get is a simple command line interface for downloading and
installing packages. The most frequently used commands are update
and install.

update - Retrieve new lists of packages
upgrade - Perform an upgrade
install - Install new packages (pkg is libc6 not libc6.deb)
remove - Remove packages
autoremove - Remove automatically all unused packages
purge - Remove packages and config files
source - Download source archives
build-dep - Configure build-dependencies for source packages
dist-upgrade - Distribution upgrade, see apt-get(8)
dselect-upgrade - Follow dselect selections
clean - Erase downloaded archive files
autoclean - Erase old downloaded archive files
check - Verify that there are no broken dependencies
changelog - Download and display the changelog for the given package
download - Download the binary package into the current directory

-h This help text.
-q Loggable output - no progress indicator
-qq No output except for errors
-d Download only – do NOT install or unpack archives
-s No-act. Perform ordering simulation
-y Assume Yes to all queries and do not prompt
-f Attempt to correct a system with broken dependencies in place
-m Attempt to continue if archives are unlocatable
-u Show a list of upgraded packages as well
-b Build the source package after fetching it
-V Show verbose version numbers
-c=? Read this configuration file
-o=? Set an arbitrary configuration option, eg -o dir::cache=/tmp
See the apt-get(8), sources.list(5) and apt.conf(5) manual
pages for more information and options.
This APT has Super Cow Powers

As you can see, there are many options just in the help text. This simple command line tool is the basis for almost all Debian-based package management. I have never touched the full power of apt, but I am still learning.
This short post only scratches the surface of the power and flexibility of one tool. And there are several more, each with broad capabilities, just waiting for you.




How to View System Logs in Real-Time

If you are having an issue, especially with hardware or something else reproducible, here is a way to see just what is getting written to the syslog in real time. Perhaps this will aid in troubleshooting the issue. I know it has helped me in the past to identify a device’s mount point when it would not show with the mount command. I ran across this string a few years ago in a forum and have saved it ever since. I want to share it with you here.

The command is tail, run in a separate terminal window if needed, as root or with escalated privileges via sudo.

# tail -f /var/log/syslog

That’s all there is to it. Of course adjust the path if your logs are in a different directory and read the man page if you want or need additional options. You can expect some output like this:

johnny@polarbear ~ $ sudo tail -f /var/log/syslog
[sudo] password for johnny:
Jan 17 20:29:25 polarbear rtkit-daemon[1811]: Successfully made thread 2111 of process 1809 (n/a) owned by '1000' RT at priority 5.
Jan 17 20:29:25 polarbear rtkit-daemon[1811]: Supervising 3 threads of 1 processes of 1 users.
Jan 17 20:29:29 polarbear NetworkManager[877]: <info> (eth1): IP6 addrconf timed out or failed.
Jan 17 20:29:29 polarbear NetworkManager[877]: <info> Activation (eth1) Stage 4 of 5 (IPv6 Configure Timeout) scheduled...
Jan 17 20:29:29 polarbear NetworkManager[877]: <info> Activation (eth1) Stage 4 of 5 (IPv6 Configure Timeout) started...
Jan 17 20:29:29 polarbear NetworkManager[877]: <info> Activation (eth1) Stage 4 of 5 (IPv6 Configure Timeout) complete.
Jan 17 21:17:01 polarbear CRON[2279]: (root) CMD (   cd / && run-parts --report /etc/cron.hourly)
Jan 17 21:48:19 polarbear goa[2395]: goa-daemon version 3.6.0 starting [main.c:112, main()]
Jan 17 22:17:01 polarbear CRON[2728]: (root) CMD (   cd / && run-parts --report /etc/cron.hourly)
Jan 17 23:17:01 polarbear CRON[3044]: (root) CMD (   cd / && run-parts --report /etc/cron.hourly)
Jan 17 23:31:20 polarbear anacron[3408]: Anacron 2.3 started on 2013-01-17
Jan 17 23:31:20 polarbear anacron[3408]: Normal exit (0 jobs run)
Jan 17 23:31:27 polarbear kernel: [10938.411516] usb 1-1.2: USB disconnect, device number 3
Jan 17 23:31:32 polarbear kernel: [10942.680548] usb 1-1.2: new full-speed USB device number 5 using ehci_hcd
Jan 17 23:31:32 polarbear kernel: [10942.775553] usb 1-1.2: New USB device found, idVendor=046d, idProduct=c52f
Jan 17 23:31:32 polarbear kernel: [10942.775563] usb 1-1.2: New USB device strings: Mfr=1, Product=2, SerialNumber=0
Jan 17 23:31:32 polarbear kernel: [10942.775569] usb 1-1.2: Product: USB Receiver
Jan 17 23:31:32 polarbear kernel: [10942.775573] usb 1-1.2: Manufacturer: Logitech
Jan 17 23:31:32 polarbear kernel: [10942.778371] input: Logitech USB Receiver as /devices/pci0000:00/0000:00:1a.0/usb1/1-1/1-1.2/1-1.2:1.0/input/input13
Jan 17 23:31:32 polarbear kernel: [10942.778814] hid-generic 0003:046D:C52F.0003: input,hidraw0: USB HID v1.11 Mouse [Logitech USB Receiver] on usb-0000:00:1a.0-1.2/input0
Jan 17 23:31:32 polarbear mtp-probe: checking bus 1, device 5: "/sys/devices/pci0000:00/0000:00:1a.0/usb1/1-1/1-1.2"
Jan 17 23:31:32 polarbear kernel: [10942.781280] input: Logitech USB Receiver as /devices/pci0000:00/0000:00:1a.0/usb1/1-1/1-1.2/1-1.2:1.1/input/input14
Jan 17 23:31:32 polarbear kernel: [10942.782247] hid-generic 0003:046D:C52F.0004: input,hiddev0,hidraw1: USB HID v1.11 Device [Logitech USB Receiver] on usb-0000:00:1a.0-1.2/input1
Jan 17 23:31:32 polarbear mtp-probe: bus: 1, device: 5 was not an MTP device
^C
johnny@polarbear ~ $

You can see where I disconnected and reconnected both my power cable and the USB receiver for the mouse. A Ctrl+C will stop the output. Put this tip away for the one time you may need it, and if you ever do, please come back and tell us how you used it and whether it helped you solve a problem.
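If the live log scrolls too fast, you can also pipe the stream through grep to watch only the lines you care about (for example, sudo tail -f /var/log/syslog | grep -i usb). Here is a toy demonstration of -f in /tmp, no sudo needed; the file name is made up, and a background job appends a line after tail has started:

```shell
# Start with one line in the file, append another a second later,
# and watch tail -f pick up the new line as it arrives
echo "first line" > /tmp/follow-demo.log
( sleep 1; echo "second line: usb device added" >> /tmp/follow-demo.log ) &
timeout 3 tail -f /tmp/follow-demo.log | grep --line-buffered 'usb'
```

The timeout is only there to end the demo; against a real syslog you would leave tail running and stop it with Ctrl+C.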


How To Get Information About Your CPU Using Linux and Bash

Generic CPU

So what do you do when you want to find out information on a CPU and you don’t want to wade through thousands of links? Most Linux distributions, whether installed or running live, offer some simple commands for identifying the model and more.

The first example is extremely easy and provides a lot of information about the CPU. Open a terminal (I am using Bash) and type lscpu; you will get something similar to this example.

lscpu command

As you can see, there is a lot of information here. You now know it is a Genuine Intel in this case, that it handles 32-bit or 64-bit, what speed it runs at, and more.

But what if you already know it is a dual core Intel but are not sure what model it is? Can you run a command for just this information? The answer is yes. Open a terminal window and type this command:


grep "model name" /proc/cpuinfo and you will get output like this:

grep "model name" /proc/cpuinfo

Now you have the make, type, model, and rated speed. These commands will work for virtually all CPUs and almost all Linux distributions. (I can’t say all here, since I have not tested every CPU and distribution, but it should hold true.) There may not be many times this information is really needed, but if the need arises, you now have a way that is much faster than searching for the model specs of the computer.

Which leads us to another short command using grep. What if the model sticker on your computer or laptop is gone or rubbed away, and you need the model number for identifying it and troubleshooting a hardware problem? The answer is easy, but this time it involves running the command as root or with sudo and piping the output to grep. Open a terminal and switch to root or use sudo, as in this example.

sudo dmidecode | grep Product

Now you have the make and model of the computer, which can greatly speed up troubleshooting, finding parts, etc.

I don’t classify these commands as anything more than time savers, but sometimes time is already working against you and you need the information fast. If you have some great one-liners, post one or many in the comments. I like reading about what others use for simple yet important tasks.


Bash Command Line Blunder Using rm Not to be Repeated

This is a post the veterans of Linux can laugh about, and the newbies had better pay attention to. It started last week when I announced to my sister-in-law I was going to give her a laptop. Of course said laptop would be sporting Linux. I gave much thought to the specs and distro combination and finally arrived at PCLinuxOS LXDE edition.

The laptop is a not-so-shiny HP ZE5385US sporting a 2.66 GHz Pentium 4 and an entire 1 GB of RAM. Not what you would call a screamer, but it will still get the job done, and it has lasted nine years with only one motherboard change, due to some shoddy soldering by an inept craftsman. Yes, that was my blunder too, and it only cost me $85. Thanks, eBay, for coming through on this one.

So it was paramount that the laptop be outfitted with a usable OS, one still in the modern category and easy for a non-technical, never-used-Linux-before person. I chose PCLinuxOS LXDE since it comes with some major benefits: it is quick and low on resources, it is a rolling OS so updates are easy, it has a large upstream as well as its own repositories, and it has a good community behind it for support.

So now the story of why has been told, and if you have read this far, I’m sure you want to know the blunder in the title of the post, ‘Bash Command Line Blunder Using rm Not to be Repeated’, so here it goes. I wasn’t fully awake, still in the mid-morning Saturday fog, when my wife said, “She will be here in about 30 minutes,” or something along those lines. Well, I did not have a CD in the house to burn, and I wanted to send her back with a live environment too, just in case support was needed. So I popped in a 2 GB thumb drive, only to find ArchBang Linux on it. A very cool distro, I might add. Instead of just letting Unetbootin (in most repositories for almost all distros) handle it, I thought I would head to the command line (cli) and first get the job done of clearing the files. This is the command I used, as root of course:

desert-bear johnny # rm -R * /media/ARCHBANG/       #This is where the veterans laugh and the newbies need to pay attention.

Looks innocent enough, right? Mind you, I have read the man page for rm, and I have even read the warning about using wildcard characters with it. But in my haste I still ran the command as is, and was promptly met with a message that my Dropbox folder was no longer in sync; would I like to re-sync? I knew instantly what I had done, and a click on Nautilus proved it. My entire home directory was gone. Vanished into computer oblivion. Unlike Windows, which prepares to delete when faced with such mass deletions, and prepares to delete, and prepares to delete, Linux and the rm command are swift and ruthless, cutting through my home folders like warm Swiss cheese.

And to add insult to injury take a look at this screen capture from a few minutes ago.

Thumb Drive with ArchBang Linux

Notice how ArchBang survived. Okay veterans of Linux may laugh again here too. Noobs too if you want to.

To break it down a bit, here is what happened and why. The rm command is short for remove, and it does exactly what it is told to do. If you own the files (folders are files in Linux), you can rm them at will. By adding -R I essentially told the command to delete every folder and sub-folder it finds. Well, it found quite a few, since I added a space followed by the wildcard *. That * expanded in my current directory, my home folder, not on the thumb drive, giving carte blanche to the rm command, which zipped by in the terminal window showing only errors. Linux will not show a completed command’s output unless specified, usually with a -v switch. The errors were from the lack of ownership on the thumb drive containing ArchBang.

So to end the story, my sister-in-law did not get the laptop as promised on time. I spent the entire day and most of the night letting recovery programs do their work. I now have thousands of small text files. I have backups for the most important files, like financials, but I did lose some items like recent pictures and some creative writing.

On a last note, the real saving grace is built into shell globbing itself. By default the * wildcard does not match dot-files (names beginning with a .); they have to be explicitly named or matched with another pattern. Lucky for me, too, since these hidden files are the meat of /home when it comes to preferences and things you set long ago but just don’t think about any longer. I also should have entered this command right before the rm command:

desert-bear johnny # cd /media/ARCHBANG

and everything would have been a bit better. I would still have lost some mount points, and for all I know maybe more. But I can assure you I am not going to test it without adding the option -i. I hope you enjoyed reading this, had a good laugh, and learned from my mistake. I laughed, and you should too.
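For the record, here is the pattern I should have used, sketched against a scratch directory in /tmp so you can try it safely. Substitute your real mount point, and add -i if you want rm to ask before each delete:

```shell
# Set up a pretend mount point with one file on it
target=/tmp/ARCHBANG-demo
mkdir -p "$target"
touch "$target/old-live-file"

cd "$target" || exit 1   # the || exit aborts if the cd fails, so rm can
pwd                      # never run from the wrong directory; pwd proves it
rm -Rv ./*               # ./* anchors the glob here, far away from $HOME
```

The cd-then-anchored-glob habit costs two extra keystrokes and would have saved my entire home directory.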

Bash Primer

As you get to know Linux, you will inevitably use the command line (cli) at some point. Every distribution I have ever installed includes at least one terminal and at least one command interpreter. Most often the interpreter of choice is Bash. I am not going to do a history lesson on Bash other than to say it is an acronym meaning Bourne-again shell. If you want to read the history, follow the Wiki link above. Bash is a very versatile shell and thus has become widely popular amongst the FOSS community. I cannot think of a distribution I have run that did not include it by default or at least make it available. Also, I should point out that the terminal window itself is not Bash, but rather a container for Bash to run in. This is why it might look different from one Linux distro to another.

There are two ways to run Bash: as a shell you log into as a user, using the cli exclusively, or as an interactive shell from within the graphical environment. Most of the time we are using the latter, but if you work on a server, use ssh, or are recovering a broken system, then you may find yourself at a Bash login prompt. I hope you are not facing that until you at least have the basics down. System recovery is hard enough on its own, let alone while trying to remember commands.

To get started using Bash I recommend reading some blogs or websites devoted to Bash. You can also find plenty of books on Bash as well and since the core fundamentals have not changed drastically, even older books found in the used aisles or bargain bins are still very relevant to new users.

To illustrate a simple example you can open a terminal and type pwd. This is the output:

johnny@desert-bear ~ $ pwd
/home/johnny
johnny@desert-bear ~ $

The command pwd tells you where you are in the file system. It can be helpful to verify this before running an rm command as root. Notice the prompt is a $ sign; this is the default prompt for an unprivileged user. As root the prompt turns into the hash symbol # by default. Both of these values, and many more, can be customized. This is yet another reason Bash is so popular: it can be tailored to the user very easily. Any customization values are stored in the file ~/.bashrc, which is read upon opening a terminal window running Bash in interactive mode.
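As a tiny taste of that tailoring, here is a hypothetical prompt tweak you could try in a terminal or drop into ~/.bashrc; the escape codes (\u, \h, \w) come from the PROMPTING section of the bash man page:

```shell
# user@host working-directory $  -- the \$ shows as # when you are root
PS1='\u@\h \w \$ '
echo "$PS1"
```

Set it interactively first; only once you like the result is it worth making permanent in ~/.bashrc.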

This versatility, and the fact it is found in almost every distro, is why you will often see help given in forums as cli commands. They are universal, with very few exceptions. In the various distributions’ graphical environments, the instructions can vary from one distro to another pretty quickly. That makes troubleshooting very difficult indeed.

So now we know what it is; the question is how to use it. Single commands can be run one at a time, and for most there is a help text available by typing -h or --help, or a manual page via man command (substitute command for the command you need to know more about). Commands can be connected using the pipe symbol |; the output of the command before the | becomes the input of the command after it. Commands can also be collected into scripts, and I’ll show an example later.
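To make the pipe concrete, here is a two-command chain you can paste into any terminal; the fruit list is an arbitrary example:

```shell
# sort receives the three lines as its input, head keeps only the first;
# the result is the alphabetically first fruit
printf 'cherry\napple\nbanana\n' | sort | head -n 1
# prints: apple
```

The same shape scales up: dmesg | grep usb, ps aux | sort, and so on, all without any temporary files.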

There are many websites and blogs devoted to Bash and the cli. I could fill an entire post with nothing but links, there are so many. But this is one I have referred to over and over as I progress in my understanding of Bash: LinuxCommand.org. One more site, sometimes a bit more advanced but full of practical uses of commands, is All commands | commandlinefu. The first of these is a basic primer moving into some more advanced techniques, and the second lets others post examples of commands they use for a variety of tasks.

Just today I ran across this example of a Bash script that can be used for the discovery of large files. I needed this at work about six months ago. :) Notice the start of every Bash script is the same: #!/bin/bash. The #! symbol identifies what follows as a script, and /bin/bash is the path to the interpreter that runs the commands. Open a text editor and paste the following code into it:

#!/bin/bash
# if nothing is passed to the script, show usage and exit
[[ -n "$1" ]] || { echo "Usage: findlarge [PATHNAME]"; exit 0 ; }
# a simple find; $1 is the first variable passed to the script
find "$1" -type f -size +100000k -exec ls -lh {} \; | awk '{ print $9 ": " $5 }'

Notice the two # symbols not followed by an !. These lines are comments in the script. To use this script, type:

chmod a+x findlarge.sh

This sets the file as executable; run the chmod in the same directory as the file, or include the path. To run the script as root, type:

./findlarge.sh / > largefiles.txt &

This will run the script and write its output to a text file in the current working directory. The & symbol at the end tells Bash to run the script in the background, since it can take a while. Of course, the path and file-size variables can be modified to suit your needs. Thanks goes out to Jarrod Goddard for sharing the script, and to Rackspace for sharing it with him.

Have a favorite example of your own or for more discussion, share in the comments.