GNU vs. Linux: A Name, Commands, Respect

GNU’s not Unix. Nor is it Linux – or is it? Slightly controversial in its own right, an article I wrote recently opened the door to a heated debate that has been ongoing in the free software community for quite some time. GNU vs. Linux has divided the IT nerd population and even after all this time, neither side appears to be close to a compromise.

I know I’m guilty of it. You probably are too. The crime in question is referring to a popular open source OS like Debian by the Linux tag. As the kernel, Linux is no doubt important. But just as the heart can’t pump life into the human body without its associated organs, the kernel is essentially useless minus the complementary software that makes up the entire operating system. Linux’s limited role in the complete picture is the angle GNU backers are using to beat a horse that has probably been dead since the first major distro went commercial.

The Birth of Controversy

Linux history lessons often start in 1991, when an ambitious programmer named Linus Torvalds created the kernel that, perhaps to the chagrin of Richard Stallman, is still the driving force of the open source movement. However, what usually gets skipped in the timeline happened in 1984, when the Free Software Foundation (FSF) began assembling the GNU OS as a free take on Unix. The project was nearly finished, lacking little more than a working kernel, by the time Torvalds rolled out his baby, leaving a rapidly growing Linux following with little to do other than gather up the remaining free software tools that would finally complete the Unix-like system.

The software layered on top of Linux typically includes desktop environments along with the GNU toolchain and utilities that make up much of the original GNU operating system. While not all distros are GNU-based, many are, including those distributed by mega companies like IBM and Red Hat.

So if it’s arguably the smallest piece of the puzzle, why does Linux get all the attention? How did a measly kernel become the icon of the free and open source software revolutions?

That’s the GNU beef and the reason why its supporters, at the very least, would like to see “GNU/Linux” used in place of Linux alone more often.

For the GNU community, the naming controversy is all about respect: getting Linux to remember where it came from. It’s sort of like the athlete who went from struggling to get by in a rough and rugged environment to living the life of luxury as a multi-million dollar celebrity figure. Even with the bright lights of super-stardom threatening to blind them, some manage to stay grateful and remember those humble beginnings. Call it petty (and some have), but the GNU crew is like the group of friends who want credit for supporting Linux before all the fame and notoriety.

Command Conundrum


Anyone looking to flaunt his or her nerd cred can use the GNU vs. Linux angle to pick apart the most trivial distinctions. Take the post I mentioned in the intro for example. One SpiceWorks commenter thought it should be known that a couple of items on my list were not Linux commands, but core utilities (coreutils) built into the GNU OS. In this context, commands are basically small standalone programs, or “utilities,” invoked from the shell, and I felt the commenter’s need for clarification warranted a closer look at the tools used in and around the terminal environment.

Whether it should be attributed to Linux, GNU, or GNU/Linux is another topic entirely. Either way, the coreutils family is made up of several tools that are universally embraced as commands for manipulating Unix-like systems. The coreutils package features three sets of utilities (file, text, and shell utilities) that can be used interactively at the terminal or from shell scripts. Below we examine each utility to determine how they relate to and/or differ from what one might call Linux commands.


The file utilities package equips system administrators with extensive file management capabilities. Commonly used programs in this package include:

chmod: Short for change mode, the chmod utility allows you to change user permissions for files and directories on the system. Admins tasked with managing access permissions for a large group of users can save a lot of time by getting familiar with this tool.
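As a minimal sketch (the file name here is invented for illustration), octal and symbolic modes can be mixed and matched:

```shell
# work in a throwaway directory so no real files are touched
cd "$(mktemp -d)"
touch report.txt

# octal form: owner read+write (6), group read (4), others none (0)
chmod 640 report.txt

# symbolic form adjusts permissions incrementally
chmod g+w report.txt   # add write access for the group
```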

cp: The “cp” utility makes copies of files and directories. cp can be used to copy the contents of one file to another, copy one or more files to a specific directory, or copy whole directories to a larger directory.
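For example (file and directory names invented), copying a file into a directory and then mirroring a whole directory tree looks like this:

```shell
cd "$(mktemp -d)"
echo "meeting notes" > notes.txt
mkdir backup

cp notes.txt backup/        # copy a file into a directory
cp -r backup backup_mirror  # -r copies an entire directory tree
```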

ls: The “ls” utility lists the files in a given directory. Executed as is, it lists files in the current directory, but can be tweaked to pull up lists from other specified directories as well.
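A quick sketch of both behaviors (sample file names are made up):

```shell
cd "$(mktemp -d)"
touch alpha.txt beta.txt

ls          # lists the current directory: alpha.txt beta.txt
ls -l /tmp  # long-format listing of a different directory
```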

rm: The “rm” utility removes files, directories, and other objects from the file system. Rather than erase the underlying data, rm removes the file system’s references to those objects; the data itself lingers on disk until it is overwritten.
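As an illustration (file names invented), removing a single file versus an entire directory:

```shell
cd "$(mktemp -d)"
touch old.log
mkdir -p cache/thumbnails

rm old.log    # remove a single file
rm -r cache   # -r removes a directory and everything inside it
```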

shred: The “shred” utility overwrites and deletes files on the system. This simple command acts as both a powerful privacy feature and security tool by making deleted files virtually impossible to recover.
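A minimal example (the file and its contents are invented):

```shell
cd "$(mktemp -d)"
echo "api-key-12345" > secret.txt

# overwrite the file multiple times, then truncate and unlink it (-u)
shred -u secret.txt
```

Worth noting: shred’s man page cautions that its overwriting guarantees are weaker on journaling file systems and SSDs, where old data can survive outside the file’s blocks.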


The text utilities facilitate advanced manipulation of the text-based aspects of files in the system. Commonly used programs in this package include:

cksum: Short for checksum, the “cksum” utility generates a cyclic redundancy check (CRC) value and byte count for files and blocks of data. Linux administrators regularly use cksum to verify data integrity, particularly for files transferred over unreliable channels. (A CRC catches accidental corruption; it is not designed to detect deliberate tampering.)
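A quick sketch of the output format (sample file contents invented): cksum prints the CRC value, the byte count, and the file name.

```shell
cd "$(mktemp -d)"
printf 'quarterly figures\n' > data.txt

# prints: <CRC> <byte count> data.txt
cksum data.txt
```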

cut: The “cut” utility extracts selected columns or fields from one or more files. For many administrators, this program is extremely handy for text parsing and other everyday command line tasks.
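For instance (the file is a made-up, passwd-style sample), pulling the first colon-delimited field from each line:

```shell
cd "$(mktemp -d)"
printf 'alice:x:1001\nbob:x:1002\n' > users.txt

# -d sets the delimiter, -f picks the field(s) to keep
cut -d: -f1 users.txt   # prints: alice, then bob
```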

paste: The “paste” utility joins corresponding lines of text files side by side. This command is useful for merging the lines of a single file as well as lines drawn from multiple individual files.
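A small sketch (file contents invented): corresponding lines are joined horizontally, separated by a tab by default.

```shell
cd "$(mktemp -d)"
printf 'red\ngreen\n' > colors.txt
printf '1\n2\n' > codes.txt

# line 1 of each file is joined, then line 2, and so on
paste colors.txt codes.txt
```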

split: The “split” utility breaks larger files into two or more smaller files. Its pieces are commonly reassembled with the cat utility, which concatenates them back into the original.
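To make that concrete (the input file is generated on the fly), a 10-line file split into 4-line chunks yields pieces named xaa, xab, and xac by default:

```shell
cd "$(mktemp -d)"
seq 1 10 > numbers.txt

# break the file into chunks of 4 lines each
split -l 4 numbers.txt

# the pieces can be stitched back together with cat
cat xaa xab xac > rebuilt.txt
```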

wc: Short for word count, the “wc” utility generates word, character, and line counts for individual text files. Gaining a quick snapshot of the total number of records across a given set of files is one of several ways administrators put the wc command to work.
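A minimal sketch (file contents invented) of the three most common counting flags:

```shell
cd "$(mktemp -d)"
printf 'one two three\nfour five\n' > note.txt

wc -l note.txt  # line count
wc -w note.txt  # word count
wc -c note.txt  # byte count
```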


The shell utilities enable system management beyond files and text. Several of the “core” tools in this package are designed to keep the OS environment running in tip-top condition, including:

chroot: The change root or “chroot” utility temporarily changes the apparent root directory for a running process and its children. By isolating an application in its own slice of the file system, chroot offers a handy way to test software without compromising the rest of the system.

du: Short for disk usage, the “du” utility estimates the amount of space used on a given directory or file system. This utility comes in handy for identifying directories and files that may be taking up large amounts of space on the hard drive, or connected storage media.
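For example (the directory and file are fabricated for the demo), summarizing a directory’s usage in kilobytes and in human-readable form:

```shell
cd "$(mktemp -d)"
mkdir logs
# create a ~100 KB file to have something to measure
dd if=/dev/zero of=logs/app.log bs=1024 count=100 2>/dev/null

du -sk logs   # summarized usage in kilobytes
du -sh logs   # human-readable units (e.g. 100K)
```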

pwd: Short for print working directory, the “pwd” utility is used to identify the path of the directory you’re currently working in. Often paired with the ls and cd utilities, it is one of the most commonly used commands in the family of Unix functions.
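A quick sketch, including the common scripting pattern of capturing pwd’s output:

```shell
cd /tmp
pwd   # prints the absolute path of the current directory: /tmp

# pwd output is often captured for use elsewhere in a script
here="$(pwd)"
ls "$here" > /dev/null
```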

sleep: The “sleep” utility pauses the execution of commands for a specific period of time. Commonly used for task scheduling, sleep commands can be used to delay called processes for minutes, hours, or even days.
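As a minimal illustration, GNU sleep takes a plain number of seconds and also accepts fractional values and unit suffixes:

```shell
# pause for two seconds before the next command runs
sleep 2

# fractional durations and suffixes (s, m, h, d) also work in GNU sleep
sleep 0.5
sleep 1s
```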

uptime: The uptime utility simply tells you how long your system has been running. Admins have been known to call on it to tout the reliability of their distro compared to Windows; uptime bragging rights are a real thing.

Technically Speaking …

From command line tools to desktop enhancers, Linux gets some of its best known qualities from the Free Software movement. The naming controversy seems a bit silly, but as a spirited truth seeker, I feel better for understanding the GNU vs. Linux drama and why one side just can’t let go.


Printer Security Risks and Tips for IT Service Providers

Printers suck. I’ve already written about why they suck. What I didn’t realize until recently is that they also pose a serious threat to IT security.

A white paper published by IT analyst firm Quocirca found that 63 percent of companies surveyed suffered at least one printer-related security breach. Even more disturbing (yet not all that surprising) is that only 22 percent of organizations consider printer security an area worth prioritizing.

Feature-rich with little to no out of the box protection, a printer can act as one of the biggest attack surfaces in a given IT infrastructure. Some companies have no idea that they’re vulnerable to common print security risks such as:

  • Theft. Forget network hackers – any documents that are not immediately secured can easily be swiped from the document tray by anyone in the office.
  • Printer attacks. From bringing the system to a halt with phony print jobs to using the device as a pawn in DDoS attacks, hackers can exploit printers to wreak major havoc on company resources.
  • Network vulnerability. An unsecured printer can put an entire network at risk. All it takes is a single open vector to provide access to any connected device.
  • Data breaches. Social security numbers, customer data, and confidential documents are regularly stored in printer caches. Left unguarded, this info is easy pickings for criminals armed with the simplest of tools.

Managed print services (MPS) are available to organizations looking to optimize, streamline, and simplify their printing environment. MSPs and VARs are in a position to guide their clients on the finer aspects of printer security. These pointers can help.

Devise a Printer Security Strategy

Companies build elaborate training programs to teach workers how to use mobile devices safely. Once you realize how dangerous printers can be, they deserve the same respect. An ideal printer security strategy is made up of standards, policies, and processes that govern how printing resources are used across the organization. Ideally, the strategy is layered: it covers all the basics today, yet can incorporate advanced security functionality as printing operations and business needs evolve.

Lock Down the Printer

Proper printer security starts at the device. Most modern network and multi-function units come with access control, authentication, and other built-in security features. IT administrators should check with the manufacturer for firmware updates and recommendations on default configurations that protect the device while on the network. It’s also a good idea to keep physical security in mind by housing printers in a safe area and incorporating locks, proximity badges, and smart cards that protect against physical removal.

Encrypt Printer Data

As a shared resource in most office spaces, print job data travels across a network of connected computers. While the original data may be protected on the source system, print jobs send it over the network in plaintext that can be easily intercepted and read by the most basic of hackers. Encryption offers a way to protect data as it travels across terminals, cables, and other points in the network. Supported on Linux distros and Windows alike, the Internet Printing Protocol (IPP) offers both encryption and authentication to keep prying eyes away from printer data.

Stay on Top of the Print Environment

For most companies, organizing documents is the extent of their print management strategy. As a vital part of the network, a printer represents its own infrastructure and should be handled accordingly. Luckily there are tools available to help IT administrators track virtually every key aspect of the infrastructure. Such utilities make it easy to track print jobs and keep up with the users running them. Armed with these monitoring capabilities, administrators can identify users who may be violating printing policies as well as opportunities that allow the company to reduce print jobs and save money.

Dispose of Old Printers the Right Way


Old hardware can be a headache in more ways than one. An old, broken printer, for example, can be a major liability if not properly disposed of. Many printers store small amounts of data on internal hard drives, and when old equipment is tossed, that information is discarded right along with it, yet it can be retrieved with relative ease. IT administrators should make sure all printer data has been wiped clean from the device before anything is junked or sent to a thrift store. In many cases, those hard drives can be removed, connected to a PC, and erased in a few quick clicks.

It’s past time we started viewing printers like the smartphones, tablets, and other devices we add to the network. Most organizations don’t want to go through the trouble themselves, so IT service providers who can shoulder the burden with managed print services that make security a non-issue will be treasured as valuable assets by their clients.


How Hardware Failure Can Help IT Service Providers Sell the Value of BDR

Some businesses will go their entire life span without contracting a single malware infection. Few are so lucky when it comes to failed hardware. In a survey of nearly 400 of its IT partners, StorageCraft learned that 99 percent of the 387 respondents had suffered hardware failure in the past. While it only offers a small sample size, this data supports the notion that failed equipment is the biggest threat to business continuity.

Common Causes of Hardware Failure

There are a number of reasons why a piece of IT equipment may go clunk in the night. Here’s a look at the most common causes of hardware failure:

Overheating: An insufficiently cooled, improperly ventilated server room is a breeding ground for the type of extreme heat that threatens to put IT operations out of commission. Recognizing that overheating is the leading cause of hardware failure, data center operators pour countless resources into cooling the IT environment. A report by Markets and Markets projects that the data center cooling industry will be valued at $11.5 billion by 2018.

Power surges: A power surge doesn’t guarantee failure, but the results can be devastating just the same. Surges are commonly caused by lightning, faulty electrical wiring, and other events that make the flow of energy abruptly stop and restart. I don’t know if you wanna call it a best case scenario, but losing all unsaved data in a system crash is probably the lesser of two evils compared to complete failure of the hardware, though either may be sparked by a surge.

Physical damage: Machines that house several tiny moving parts have a certain level of sensitivity to begin with. A computer has fewer moving parts than most other machinery, but those components are every bit as delicate. Any IT equipment exposed to bumps, drops, and other forms of physical force is prone to immediate failure or gradual failure over time. Whether it’s powered on or off, physical harm may spell the end of your hardware.

Water damage: Water is a hardware killer that threatens IT equipment in more ways than one. Having a desktop washed out in a flood or spilling a cup of coffee directly on a laptop is as close to certain doom as you can get. However, moisture caused by extreme humidity can also build up inside the equipment and cause critical parts to fail. Hardware design has come a long way, but water-resistant housing is something manufacturers still appear to be far from perfecting.

Malware infection: While malware is known to wreak havoc on software systems, rarely does it ever cause significant damage to the underlying hardware. But it’s possible. By providing backdoor access, a Trojan horse can give an intruder complete control of the target machine. That means they can put a strain on hardware resources such as the processor and memory, and even open and close the DVD drive tray at will. These annoyances contribute to failed hardware over time.

According to StorageCraft’s survey on hardware failure, 52.7 percent of respondents said that getting clients to recognize the value of BDR was the most challenging aspect of offering a disaster recovery solution. Here we’ve laid out why hardware is an ideal place to start. If organizations realize that their hard drives can be destroyed in the blink of an eye, they’re more likely to grasp why backup and disaster recovery is the only way to roll in today’s IT environment.