Command Line

SUSE Linux Enterprise is designed for the enterprise. Part of what it means to be “Enterprise-ready” is to have “rock solid” components in the distribution which have been fully tested and can be supported. Unstable and unsupportable components/packages just won’t do. BUT… every now and then it’s necessary to run “the latest” version of a component of the distribution. Perhaps you have a new application which requires the latest Java, or a new development library, etc., and you don’t want to wait until that latest version of the package gets fully tested and “officially supported”. You’ve got to have that new version now!

You could go to the source and compile your own package for SUSE Linux Enterprise – and while not difficult, it is still kind of a pain – and certainly a turn-off for many a new Linux user. A much better option is to simply visit the openSUSE Build Service and see if your desired package is already being built for SUSE Linux Enterprise. You’ll find builds for SUSE Linux Enterprise and openSUSE, plus several other Linux distributions as well: Fedora, Debian, Ubuntu, and more. So save some time and check whether the package you need has already been built by looking here.

Want more info on the openSUSE Build Service? Check out this good overview article and this blog entry, and of course the project site, which includes other great info.

From the article:

When it comes to file systems, Linux® is the Swiss Army knife of operating systems. Linux supports a large number of file systems, from journaling to clustering to cryptographic. Linux is a wonderful platform for using standard and more exotic file systems and also for developing file systems. This article explores the virtual file system (VFS)—sometimes called the virtual filesystem switch—in the Linux kernel and then reviews some of the major structures that tie file systems together.

More here.

While Intel and Atheros are doing a great job writing wireless drivers for Linux, there are still other wireless card vendors, most notably Broadcom, that either have no Linux drivers or have poor ones.

The purpose of this article is to explain how to configure ndiswrapper in SUSE Linux Enterprise Desktop 10 SP1. On my end I am using an old Dell C640 (with the embedded wireless card turned off in the BIOS) and a Linksys WUSB54GC USB wireless device.

1. Go into YaST and install ndiswrapper and the appropriate ndiswrapper kernel module.
- Hit Alt+F2 and enter yast2.
- Open the Software Management module.
- Search for ndiswrapper.
- Determine which flavor of the kernel you are running (bigsmp, default, smp) by opening a terminal and entering uname -r.
- Check off the “ndiswrapper” package as well as “ndiswrapper-kmp-<kernel flavor>” in YaST and click Accept to install.

2. Set up ndiswrapper
- Determine which chipset your wireless device is using. To do this, run hwinfo in a terminal. You can grep the results for wireless, e.g. hwinfo | grep -i wireless, or just manually scroll through the output and look for something that matches your wireless device.

In the case of my Linksys device, it uses a Ralink chipset. I found the Windows driver (rt73.inf) on the CD that came with the device. Find the .inf file for your card on your manufacturer’s website and download it. (Oftentimes you will have to unzip the .exe driver installer to find the .inf.)

- Enter the following commands:
ndiswrapper -i /path/to/driver.inf   # install the driver
modprobe ndiswrapper                 # load the module
ndiswrapper -m                       # ensure ndiswrapper always uses the same network interface name

3. Configure the wireless device in yast
- You should already have yast open from when you installed the ndiswrapper packages
- This time go into the “network card” module
- Verify that “NetworkManager” is selected and click next
- Click “Add”
- For Device Type choose “Wireless”
- Configuration Name: “0”
- Module Name: “ndiswrapper”
- Click Next, then Finish to complete the configuration.

I based this article on the documentation that can be found in /usr/share/doc/packages/ndiswrapper/README.SUSE after installing ndiswrapper.

Why Worry About It?

Backups are essential, and so is reducing the time needed to perform them. Many’s the time I have sat waiting for a backup to complete, only to remember that I had a link to a large set of files, or a bunch of ISO files in the ./download directory, and had to migrate those over to somewhere else and restart the backup.

If you are like me, data = “files that contain original or irreplaceable content”. I don’t want to back up ISO files, large sets of files that can be gotten from an install DVD, or things that are easy to download from a site somewhere.

I use a simple (yeah, it really is) script before every backup to find all the files over a particular size, which I can then do anything I want with. If I find anything that’s too large and expendable, I either use an exclude statement in the backup (usually in the case of ISOs) or quickly move the files elsewhere by re-running the script with an -exec statement tacked on.

The Script

Here is the script I use. It’s cobbled together from a bunch of different sources and uses a couple of useful tools to do its work:


#!/bin/bash

echo "Enter the fully-qualified start path"
read start_path
echo "Enter the lower size limit in Megabytes"
read lower_size
find "$start_path" \( -size +"$lower_size"M -fprintf ~/Desktop/bigfiles.txt '%kk %p\n' \)

Fables of the Deconstruction


The first line is where you declare what shell you want to run this script with. This string is known as the “shebang”; it’s not absolutely necessary, since the script defaults to the bash shell anyway, but it’s certainly good form.

echo "Enter the fully-qualified start path"
read start_path

Lines 3 and 4 work together, prompting you to enter the fully-qualified start path and then storing what you enter in the newly-created variable named start_path. This is expanded in line 7 by referring to its name, $start_path.

echo "Enter the lower size limit in Megabytes"
read lower_size

The same arrangement occurs with lines 5 and 6: you’re prompted to enter the smallest size in Megabytes you want to report on, which is stored in the newly-created variable named lower_size. This too is expanded in line 7 with the name $lower_size.

All Together Now

Line 7 is where all the fun stuff happens. First you are using the find command, not the easiest thing for newcomers, but well worth, ahem, “finding” out more about. Find requires several things, shown below:

find (path) (-option) (expression)

We’re using the start_path variable as the (path), then we include a function (sort of a macro) that looks for files of a size that is at least the value of the lower_size variable we set and populated earlier. Then, when it finds each file over that size, it will print out the file size in 1K blocks, followed by the letter k (so it’s obvious), and then the full path and name of the file that was found. This will all then be output to a file named bigfiles.txt in the current user’s Desktop folder.
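If the report gets long, a numeric sort makes the worst offenders jump out. This is just a sketch; the sample lines below stand in for a real bigfiles.txt:

```shell
# Fake report data in the same '%kk %p' format the script writes out
printf '1024k /home/rossb/small.iso\n51200k /home/rossb/big.iso\n' > /tmp/bigfiles.txt

# sort -rn reads the leading number of each line, largest first
sort -rn /tmp/bigfiles.txt | head -n 20
```

The leading “1K blocks” number is exactly what lets sort -rn order the lines numerically.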

Note: The use of the tilde (technical name: squiggle 8-> ) character in a command means to expand the current user’s $HOME variable from the executing shell, so if rossb is running the script, the full path of the bigfiles.txt file is /home/rossb/Desktop/bigfiles.txt.


Running the Script

Executing scripts that aren’t in your path (the variable, not the physical directory) is a little different on Linux/Unix. Either you’ll pass the script as a parameter to the bash shell (bigfiles.sh is a placeholder for whatever you named the script):

# /bin/bash bigfiles.sh

Or you’ll use the following command to set the script to be executable:

# chmod +x bigfiles.sh

Then, when it’s set to executable, you’ll either need to put it in your path (try /usr/local/bin) or execute it by preceding it with the characters “./”, which is necessary to execute something in the local directory if it’s not in the path:

# ./bigfiles.sh


There are so many other things you can do with find, such as tacking on an -exec statement to execute a command on each and every file found, or finding and acting on files that meet a particular permission set; the possibilities are nearly endless.
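For example, a sketch of that -exec idea: sweep every large ISO into a holding directory before the backup runs (the /tmp/demo paths are purely for illustration):

```shell
# Set up a demo directory with one fake 200 MB ISO (sparse, so it's instant)
mkdir -p /tmp/demo/downloads /tmp/demo/iso-hold
truncate -s 200M /tmp/demo/downloads/opensuse.iso

# Find ISOs over 100 MB and move each one; {} is replaced by the file name
find /tmp/demo/downloads -name '*.iso' -size +100M \
    -exec mv -v {} /tmp/demo/iso-hold/ \;
```

Swap the demo paths for your real download directory and a spare partition and this becomes the pre-backup sweep described above.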

Of particular help in my work over the years with the find command has been the find man page, with its useful examples, and Chapter 14, “Finding Files with Find”, of Unix Power Tools, 2nd Edition, from O’Reilly and Associates. (I know the 2nd Edition is out of print, but I don’t care much for the 3rd Edition’s updates.)

Let us know in the comments what cool find scripts you have come up with, the randomly drawn winner will get a very cool Novell-Candy-Apple-Red 9 LED flashlight. (Sorry, Continental U.S. only).



In my palatial estate in scenic Waltham, Massachusetts, aka my apartment, I have several computers. My two favorite computers to use are my Lenovo X60 (running SUSE Linux Enterprise Desktop 10 SP1) and my Apple Macbook Pro running OS X (10.4.10). I also have a whitebox machine from Intel that I use as my server running SLES 10 SP1.

The thought crossed my mind the other day that I would like a central way to store and access my music. This way I can save room on my laptop hard drive for “business” items and utilize the larger disk on my server to store higher bit rate songs. (True audiophiles will really appreciate this.)

To achieve this I scp’d all of my music files from my Mac over to my SLES server using OS X’s Terminal application (located in /Applications/Utilities). In this example the address of my server is 192.168.5.

scp -r /Users/username/music username@192.168.5:/Music
The ‘-r’ stands for recursive and allows me to copy over an entire directory.

Next I set up an NFS server on my SLES machine. NFS is a network file system protocol that allows a user on a client computer to access files over a network as easily as if they were attached to its local disks. This is perfect for our purposes.

To set up an NFS server:

  • Open up YaST: Alt+f2, enter yast2
  • Filter for “nfs server”
  • Check off “Start” under the NFS server section
  • Check off “open port in firewall” if you have a local firewall enabled
  • Hit next
  • Go to “Add directory”
  • Enter the path to your music folder.
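Behind the scenes, the YaST NFS Server module writes an entry like the following to /etc/exports (a sketch only; the export path and the client network range are assumptions you should adapt to your own setup):

```
# /etc/exports -- one line per exported directory
/Music,sync,no_subtree_check)
```

The address/mask part restricts which clients may mount the share; rw makes it writable.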

Next you need to mount the NFS volume on your local machine

  • On Linux enter (in a terminal as root): mount 192.168.5:/Music /music
  • On OS X enter (in a terminal): sudo /sbin/mount_nfs -P 192.168.5:/Music /music
  • I had to use the ‘-P’ option to get around an error that said something to the effect of “mount_nfs: Operation not permitted”

At this point you need to configure your desired music players to point to the appropriate directories.

On SLED 10 if you are using Banshee:

  • Open up Banshee
  • Go to Edit>Preferences
  • Make sure that “copy files to music folder when importing” is unchecked
  • Go to Music>Import Music
  • Choose Local folder and navigate to where you mounted the NFS share. (in this example in /music)

On OS X, if you are using iTunes:

  • Open up iTunes
  • Go to iTunes>Preferences
  • Go to the “Advanced” tab.
  • Make sure that the “Copy files to iTunes Music folder when adding to library” option is unchecked
  • Go to File>Import and browse to the location of your NFS mount (in this example /music).

In this example I do not set the machines to automatically mount the NFS share. Each time you reboot you will have to remount the NFS volume, but you shouldn’t have to re-import the music.
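If you do want the share to come back automatically after a reboot on the Linux side, an /etc/fstab line along these lines would do it (a sketch; the server address matches the example above):

```
# device            mountpoint  type  options   dump fsck
192.168.5:/Music    /music      nfs   defaults  0    0
```

With that in place, mount -a (or a reboot) brings the share back without retyping the mount command.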

When you register SUSE Linux Enterprise Desktop or SUSE Linux Enterprise Server, is automatically set as an update source. Many enterprise customers prefer to set up their own local update servers which mirror it. The utility available to do this is called YUP, the Yum Update Proxy.

YUP is not included in SUSE Linux Enterprise and can be downloaded from here (you must use the newest version of YUP if you want to mirror SP1 updates).

After installation, the configuration file resides in /etc/sysconfig/yup. You can either edit this file by hand or use the /etc/sysconfig YaST module.

Let’s take a look at some of the important parameters.

  • YUP_DEST_DIR=”/path/to/directory”
    • Specify the directory on your server where you want to save the updates.
    • Make sure you have a fair amount of disk space.
    • Yup will automatically set up the directory structure underneath this for all the different architectures and products you’re mirroring.
    • Configure your server to share these directories to clients. I use the “Installation Server” module to set up an HTTP share.
  • YUP_ID=”blablabla1234″ and YUP_PASS=”password”
    • Your ID and Password can be found in 2 places:
      • Novell Customer Center. (This is the preferred method.) In the Novell Customer Center, click on the “Products and Subscriptions” tab, then select a relevant subscription (SLED or SLES), and double-click a subscription (you may have several). You’ll see a link to generate mirror credentials at the bottom of the new page. This page will create credentials that you can use to access any and all catalogs that you own a subscription for.
      • /etc/zmd/secret and /etc/zmd/deviceid on a machine that has been registered with the Novell Customer Center. If you use the deviceid/secret from /etc/zmd you can only download updates for the architecture you registered with.
    • “” is the new name of the update server at Novell.
    • I just copy in the ID and password we discussed above.
  • YUP_ARCH=”i586″
    • Pick the architecture(s) that you want to mirror.
    • Multiple architectures are delimited by spaces.
    • Valid options: i586, ppc, s390x, ia64, x86_64
  • Product selection
    • Pick whether you want to mirror SLES or SLED.
    • Multiple products are delimited by spaces.
    • Valid options: SLES10 and/or SLED10
  • Subversion selection
    • Pick the subversion(s) you want to mirror.
    • Multiple subversions are delimited by spaces.
    • Valid options: “GA”, “SP1”
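Pulling the parameters above together, /etc/sysconfig/yup ends up looking something like this (a sketch; every value is a placeholder, and the product/subversion parameter names may differ slightly in your YUP version):

```shell
# /etc/sysconfig/yup -- sketch of a minimal mirror configuration
YUP_DEST_DIR="/srv/yup"     # where mirrored updates are stored
YUP_ID="blablabla1234"      # mirror credentials from the Novell Customer Center
YUP_PASS="password"
YUP_ARCH="i586 x86_64"      # space-delimited list of architectures
```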

Once you have finished configuring, simply run the command “yup” to pull down the updates. This process can take a long time depending on the speed of your connection and the number of products and architectures you want to pull down. You can create a cron job to update your mirror; after the first run, yup will only pull down updates that you don’t already have.
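A cron entry for that nightly mirror run might look like this (a sketch; the path to the yup binary and the log file location are assumptions):

```shell
# Write a root crontab fragment: run yup every night at 02:30 and log output
echo '30 2 * * * /usr/sbin/yup >> /var/log/yup.log 2>&1' > /tmp/yup.cron
cat /tmp/yup.cron
# Install it (as root) with: crontab /tmp/yup.cron
```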

After setting up the YUP server you have to configure clients to point to your server for updates.

From the command line use the rug command:

#assuming you’re using http, use this command to add the service.
rug sa --type=YUM http://ipaddress/install/sledyup/i586 repodata
#subscribe to the service
rug sub -a
# update your server
rug up

You can also use the ZMD applet

  • Right click on the ZMD applet
  • Choose configure
  • Add service
  • Choose ZYPP
  • Enter the URI and Service Name (repodata is the default name)

Good morning from beautiful and sunny Trenton, NJ all!

So I thought I’d share an interesting experience a colleague and I ran into last week. We were with a customer building a Linux desktop image for them when they were hit with the potential Excel 2007 calculation error (in case you hadn’t seen it, the bug made headlines that week). They asked us if there was an easy way to deploy (OOo) on a large number of desktops… in the next 40 minutes :)

We did some poking around and found an easy way to deploy OOo on Windows unattended. If you unpack the OOo zip file and run setup.exe /qn, the OOo installer runs silently and deploys the whole suite using the default options. Pretty cool!

OzzyJ (aka Jason Ganovsky)

In another of the “How do you do _____ with Linux?” series of questions, I have been doing presentations and having lots of discussions about how you can manage more than a few (10-25) Linux machines (typically servers), ahem, intelligently.

Styles of Managing Systems

It’s all about styles and methods. First think of the two main management “styles” that people typically use:

  • Command Line – Oh Sweet Mystery of…, er, ahem. Cough. Yes, it’s no secret that I use and love the command line a great deal, but it’s just so darned useful! When you do something the same way every day, repeatedly, over and over, it’s supposed to be turned into a script; the Universe demands that you know how to write one.
  • Graphical User Interface – Yes, there are times when the GUI is the best thing: for new users, or for something you haven’t done for 3 weeks and don’t expect to do again for another considerable stretch. That’s when a GUI wizard-thing is very handy.

These styles are just that, a style of managing Linux machines; you can use whichever makes the most sense for you, but almost more important are the “methods” of managing Linux machines.

Methods Of Managing Systems

SUSE Linux Enterprise offers three built-in methods for managing systems remotely:

  • 1 to 1 – This is you at a console session using ssh or VNC to remotely connect to a machine and run tools that are resident on that machine. This can work for up to about 25 or so production machines, but doing something repeatedly on box after box gets tiresome, and errors will likely occur.
  • 1 to many – This is you sitting at a single console running a management tool like Zenworks Linux Management (ZLM) or a competing product, which places an agent on the managed machines (physical and virtual) and lets you do remote console/GUI access, software management, imaging, and patching/fixing/upgrading, all from that single seat.
  • Orchestration – The ability to apply an overall management grid application that can provision, deploy, migrate and manage tasks to sets of virtual and physical machines that have an agent installed. The grid is fully and atomically configurable, ie: you can manage an individual machine’s tasks, or manage thousands of machines automatically.

I recommend you take a look at the Zenworks Linux Management product; it’s a must-have for anyone who has both SUSE Linux Enterprise and Red Hat systems, as it can manage both easily.


It’s an age-old issue, what text editor do you use on a Unix/Linux system? You might cry “EMACS” while I shout “VI”, and the games begin. No matter which one you use, a knowledge of the other is a good idea, so take a look at this well-done article about VIM (VI iMproved), and harness the beast that is VIM the text editor.

Update: How could I be so neglectful as to not promote my very own VIM chapter in my book? I can’t, so here is the link to it; it’s free for viewing. (Thanks to TC for reminding me about this, it’s been a long week!)

Oh, and just to start the flames a-burning, EMACS reportedly stands for “Even a Master of Arts Comes Simpler”…  What’s your favorite one?


The Setup

In another of my many “people are always asking me ______” moments, I thought I’d jot down the top reasons why we find customers wanting to switch from Red Hat Enterprise Linux to a SUSE Linux Enterprise environment. These points are gathered from countless discussions, presentations, questions and even osmosis. I hope that these points are useful for our customers who are SLES-curious, our partners who are representing SLE to customers and I welcome any feedback or suggestions you might have.

The List

Top 5 Reasons to Move from RHEL to SLE

  • Cost – We subscribe on a machine level: one cost covers unlimited virtualized machines and support for 32 hardware CPU sockets with any number of cores in them. Red Hat makes you pay 3x the price for unlimited virtual machines, artificially restricting customers to 4 VMs in the base product.
  • Management – Red Hat has about 40 individual tools (system-config-blahblah) that all have differing looks and feels; it’s a confusing environment. We have YaST, a single interface that’s well-organized, easy to use and very consistent. We also have Zenworks Linux Management (ZLM) where they have the Red Hat Network (RHN). ZLM is very easy to use and deploy, including the ability to provision, image, deploy software singly and in bundles, remote control and many other features. ZLM offers a single consistent console, manages both RHEL and SUSE Linux Enterprise and costs less than RHN.
  • Deployment – Red Hat has the Kickstart service, which is good for limited deployments, but it doesn’t support nearly as many options as AutoYaST (SLE’s equivalent) does. For example, it’s difficult to script the presence of multiple NICs with Kickstart; AutoYaST does it easily.
  • Interoperability – Novell started life in the pre-Open-Source days; it’s got a huge patent portfolio, years of closed-source product development and many customers who use those products. Red Hat was begun to be, and is, aggressively Open Source, even when it doesn’t make sense; they have to adhere to that ideal. Novell enters into and works hard on agreements that increase its interoperability with other environments and make it easy to just get things working. Novell’s agreement with Microsoft is a good example of two organizations that aggressively compete also setting aside differences to make the customer’s life easier.
  • Customer Satisfaction – We have many interactions with customers who are running either mostly RHEL or mixed RHEL and SLE environments who have experienced significant challenges with getting RHEL support for issues that have already been resolved satisfactorily on the SLE side, or haven’t occurred due to pro-active patching etc. by Novell.

Feedback on this is much appreciated, please let me know your changes, suggestions or corrections to these.


There are many great new features in the SP1 release of SUSE Linux Enterprise Desktop and Server. The purpose of this article is to outline the methods available to upgrade from the FCS release of SUSE Linux Enterprise to SP1.

Update From DVD or CD

This method is very easy and straightforward. Download the media from here:



Burn the ISO(s) to a DVD or CD and boot from it. After selecting your language etc. choose the “update” option rather than “New Installation”. Follow the prompts to complete your upgrade.

Upgrading from the Novell Customer Center

If you already have FCS installed and have registered for updates with the Novell Customer Center you can use the ZMD applet or rug to upgrade to SP1.

Update by using zen-updater

  • Start zen-updater from the system tray.
  • Select the “move-to-sles10-sp1” patch. Do not select any other patch at the same time.
  • Press the “Update” button.
  • Wait for the success message.
  • A small popup will appear informing you about changing the update server to
  • Later, a popup asking you to provide the root password will appear.
  • After installing the maintenance stack update, a window with patch selection will appear.
  • Select/unselect the required patches and press “Accept”.
  • After the update has finished, reboot the system.

Update using rug

  • Open a root shell
  • Run ‘rug in -y -t patch switch-update-server’. Do not select any other patch at the same time.
  • Run ‘/usr/bin/switch-update-server’
  • Check that your update server is now (run ‘rug sl’ to find out)
  • Run ‘rug sub SLES10-Updates’
  • Run ‘rug in -y -t patch move-to-sles10-sp1’ (or ‘rug in -y -t patch move-to-sled10-sp1’ accordingly)
  • Run ‘rug refresh’
  • Run ‘rug sub SLES10-SP1-Online’ (or ‘rug sub SLED10-SP1-Online’ accordingly)
  • Run ‘rug in -y -t patch slesp1o-liby2util-devel’ (or ‘rug in -y -t patch sledp1o-liby2util-devel’ accordingly)
  • Run ‘rczmd restart’
  • Run ‘rug up’
  • Run ‘rug in -y -t patch product-sles10-sp1’ (or ‘rug in -y -t patch product-sled10-sp1’ accordingly) to install the update stack patch
  • Reboot

Installation Server/Source

Many customers maintain their own YUP servers to mirror or If you are doing this you cannot use the move-to-sled10-sp1 patch to update to SP1, because the patch expects to be your update source, not your local YUP server. In order to update to SP1 from a non-Novell server you have to set up another installation source.

Setting up an HTTP installation server (Server Side)

  • Open up the “Installation Server” YaST module
  • Choose the appropriate protocol (in this case HTTP)
  • Select a directory where you want to keep your installation source.
  • Choose an alias for your directory
  • Click Finish
  • Copy the contents of the SLED or SLES ISO to the directory you just specified.
  • Check that you can browse to your source through Firefox

Setting up an HTTP installation source (Client Side)

  • Open up the “Installation Source” YaST module
  • Click the “Add” button
  • Choose the appropriate protocol (in this case HTTP)
  • Enter the server name or IP address
  • Enter the directory where the installation source resides
  • If you get an error, check that you can browse to the installation source from Firefox


Now that you have added the installation source, you will notice that you have a ton of updates. (DO NOT install the “move-to-sles10-sp1” patch.) After you install the selected updates, reboot your machine and you will have SP1!

More information about updating can be found here.



Talk about a controversial topic: file managers can get people fighting and arguing almost as much as discussing why VIM is so much better than EMACS. The default file managers that ship with GNOME (Nautilus) and KDE (Konqueror) are very usable, helpful and configurable; I use them both all the time. But it’s never enough to just use the defaults, or you wouldn’t be reading this post… Also, if you like your Norton Commander-style layout or pine for the days of Xtree Gold, then the last half of this article is just for you.

Nautilus Alternatives

First off, let’s talk about direct replacements for GNOME’s Nautilus, which you probably either use and don’t care much about, or don’t use and think was designed to drive you crazy. Either way, there are some great alternatives to Nautilus:

  • Thunar – Named after the Norse God Thor, this file manager is a component of the XFCE desktop, which is not quite on the same level as GNOME and KDE, but is gaining in popularity. Pluses include small size, responsiveness, lots of plugins and familiarity for new users.
  • Endeavor Mk II – Endeavor is cross-platform and has a ton of features, including multiple layout options, archive-management, image-viewer and management and front-ends for tools like zip and wget.
  • EmelFM – I used emelfm for a while and like its features, but I found I needed to use it and a couple of others. So try it out; the features and stability are excellent, but you may find you need additional options.
  • Rox-Filer – If you’re used to the Mac OS X finder, or Windows Explorer, then you’ll probably like Rox, it’s a component of the Rox Desktop, but is easily installable separately.
  • GFileRunner Velocity – This is a great tool, it’s almost a complete Windows Explorer work-alike, in a good way… Features include clickable, expandable trees for files and directories, configurable toolbars, address bar for quick navigation and an undo feature to keep you from goofing up too bad.  NOTE: Thanks to Eric Woods for the updated project name, location and status.
  • PCManFM – Probably one of the most useful competitors to Nautilus, it does a lot of things that have driven us crazy in Nautilus for years, such as Tabbed interface, loading large directories quickly, bookmarks, several great views and good stability.

Konqueror Alternatives

Next let’s cover the alternatives for the KDE Konqueror file manager, which a lot of people actually use as a replacement for Nautilus! I personally use the GNOME desktop and default to Konqueror for a lot of tasks; you just run it, it automatically loads the needed KDE libraries and it works.

One point, rather than focus on the same kind of replacements that Nautilus has, I want to show the file managers that provide either Xtree-like or Norton Commander-like alternatives. These alternatives to Konqueror include:

  • Midnight Commander – A part of the GNOME desktop, mc is authored by Miguel de Icaza, the founder of GNOME, and is a text-mode Norton Commander clone, and a serious file manager that I have used for years with great results.
  • XNC – Dubbed the “X Northern Captain”, this is probably the most Motif-looking of the alternatives to Konqueror, it is a well-done implementation of the NC options, highly configurable and a good alternative to consider.
  • Krusader – For fellow Saxon fans, this is my favorite name for a file manager, and Krusader doesn’t disappoint, it’s very configurable, has a two-pane NC-like interface, has ACL support (a first for a GUI file manager, I believe), smart renaming of files, a new and updated look and finally my favorite, a greatly-improved synchronization feature that helps keep large directory trees updated.
  • KCommander – This NC clone is well done, offers archive file management and is very speedy; it helps you upload via FTP (which some others also offer), but doesn’t offer much else over its competitors.


Finally, we get to the Xtree-like alternatives:

  • ytree – Obviously an Xtree alternative, ytree is a text-mode app that really does look very much like the Xtree Gold app that I used so often for a number of years in the old DOS days.
  • UnixTree – Excellent and stable implementation of Xtree for the Unix/Linux platform.
  • XTC – Unix clone of the DOS clones of Xtree, it’s not in very active development, and has some bugs.
  • linXtree – Another older and fairly well done Xtree clone.
  • utree – Lastly, another older version of the same type of clone of Xtree, last updated in 2005.


Obviously you can choose to run just the stock file manager, whatever desktop you have, but in true Unix/Linux fashion you can customize, replace and in general play around with the various choices until you find just the right one for you.

Enjoy, and as ever, if you have a favorite alternative, please leave a comment and I’ll add it and a shout-out to you for the suggestion.


The Novell Courseware Team has released Course 3068, Migrating to SUSE Linux Enterprise Server 10 for free.  You can download the kit and print the manuals out, but it’s not for reselling or further distribution.  This outstanding offering covers how to migrate from Red Hat Enterprise Linux to SUSE Linux Enterprise 10, and incorporates not only that team’s materials, but a lot of feedback from us in the field, all of which was taken into account, the result being a great course.

This is no puff piece that’s just out there so they could claim it existed, this is seriously useful material for the sysadmin in the trenches doing these tasks.  The list of topics the course covers are:

  • Installing SUSE Linux Enterprise Server 10
  • Using YaST
  • Configuring the Network
  • Managing the Linux File System
  • Managing System Initialization
  • Configuring Mail and Web Services
  • Using AppArmor
  • Managing Virtualization with Xen
  • Configuring iSCSI
  • Understanding Cluster File Systems

The course is available either as a free download, or you can use the course finder to locate an Instructor-Led version of the course.  If you are like me, you can study the download version and if you can make it to a class, then do so, but this is essentially a class-in-a-can, some assembly needed.  You can download the courseware kit, it includes:

  • Migrating from Red Hat to SUSE Linux Enterprise Server 10 Student Manual
  • Migrating from Red Hat to SUSE Linux Enterprise Server 10 Student Workbook
  • Course materials ISO file (for burning to a DVD)

The team thoughtfully includes a number of items on the DVD ISO, including the manuals, Acrobat Reader for Windows and Linux, various setup instructions for a bare-metal lab system and two VMware virtual machines for use with a virtualized lab system (i.e.: your spouse will shoot you if you blow away the kid’s Windows PC and install SLES 10 on it).

As a surprise bonus, they included a slightly older whitepaper by yours truly and a few team-mates as Appendix C.  It’s a whitepaper that I came up with as a way to show people how to do what this course now does, and includes some very useful tables and other side-by-side comparisons that will help you accomplish the migration.



Another in the series of questions that we get asked a lot is: “What Linux filesystem should I use, and which ones are available?” This question is not limited to newcomers to the Linux arena; there is such a baffling array of choices that it’s confusing even to old hands at times.

The Main Players:

Ext2/3 – Ext2 is a descendant of the FFS (Fast File System) and UFS (Unix File System) tree of filesystems, while Ext3 adds the all-important journalling, allowing for much-improved handling of crashes and markedly faster filesystem recovery times. Originally written by Stephen Tweedie and Remy Card, it’s currently the default for a goodly number of distributions, including our SLE 10 SP1 lineup (SLED and SLES). Pluses of the Ext3 file system include extreme compatibility with Ext2 file systems, ease of upgrade, familiarity for those already on Ext2, speed and many options. Minuses include its adherence to the past and the fact that the number of inodes (effectively the ability to have a file; each inode is a pointer to a file) is fixed at file system creation time.
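That inode ceiling is easy to see on a live system; df -i reports how many inodes each mounted file system was created with and how many are in use (a quick sketch — the numbers will vary per system, and mke2fs -N is the knob for changing the count at creation time):

```shell
# Show inode totals, usage and the free count for the root file system;
# on Ext2/Ext3 the "Inodes" total is fixed when the file system is made
df -i /
```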

Reiserfs – Named after its creator and majority fundraiser, Hans Reiser, the Reiser FS definitely set out some new features and ways to think about file systems. Its pluses include a more database-like approach to the file system, better speed under higher loads and the fact that it creates new inodes from an almost unlimited pool when needed, so it doesn’t run out of inodes the way Ext3 can, at least until it actually runs out of space.

XFS – One of the oldest file systems to feature journalling (logging of actions), Silicon Graphics’ XFS began life in 1993 and made it into the Linux kernel in the 2000 timeframe. XFS is a 64-bit journalling file system with guaranteed file system consistency. It supports a maximum file system size of 8 exabytes, though this is subject to block limits imposed by the host operating system; on 32-bit Linux systems, file and file system sizes are limited to 16 terabytes.

IBM has a great article about File Systems, Hierarchy and Devices, written by Ian Shields; it’s part of a series for prepping to take the LPIC Level 1 exams. A good read.

Rather than spend too much time editorializing about which one you should use, let me point you to a great comparison table on Wikipedia, one that I like to use a lot. It covers the following:

  • General Info (including the date released, company and name)
  • Limits (Minimum and Maximum File and File System Size, etc.)
  • Metadata (Permissions, ACLs and Extended Attributes)
  • Features (Journalling, Snapshots, Case-Sensitivity and Encryption)
  • Allocation and Advanced Features

As a side note, for those of you following Apple’s Leopard version of OS X and its announced Time Machine interface for viewing past versions of files, check out the Ext3Cow File System; it’s very interesting and shows promise.



Novell has done a lot of work to expand the use cases for SUSE Linux Enterprise Desktop. Today SLED can be deployed in a number of ways, from a fully locked-down kiosk to a full-blown laptop for general knowledge workers. Locked-down environments are particularly useful in thin-client computing models.

One of the most compelling reasons to deploy SLED over a proprietary desktop is the ability to lock it down at a very granular level. This means that you have the ability to lock down desktops so that EVERYTHING is locked down, or just a few things.

There are a number of tools included in SLED to lock down the desktop. In this article we’ll discuss how to manually lock down the desktop using:

  • GConf
  • Permissions and groups
  • Removal of programs and modules
  • Configuring files/settings

GConf is a system used by the GNOME desktop environment for storing configuration settings for the desktop and applications. Each user has a .gconf directory stored in their home directory that stores their individual settings. There is also a global gconf directory located in /etc/opt/gnome/gconf/. Administrators can mark settings as “default” or prevent users from changing the settings by marking them as “mandatory”.

There are several lockdown options stored in GConf. There are two great tools to configure GConf keys, gconf-editor and gconftool-2.

  • gconf-editor (/opt/gnome/bin/gconf-editor) is a graphical tool that allows you to change local gconf keys or set global mandatory/default keys.
    • To set a key as mandatory or default, open gconf-editor as root, navigate to the key you want to set, right click on it and choose to set as mandatory or default.
    • You can search for gconf keys by going to the edit menu and choosing “find”.
  • gconftool-2 (/opt/gnome/bin/gconftool-2) is a command-line tool which allows you to modify GConf settings. It can be used in a script to lock down desktops as part of an automated/scripted deployment.  Gconftool-2 is also very useful when writing scripts to build and lock down KIWI-based images.  Listed below is an example of the syntax for changing a key which holds a boolean value:
    • gconftool-2 --direct --config-source xml:readwrite:/etc/opt/gnome/gconf/gconf.xml.mandatory --type bool --set /apps/metacity/general/reduced_resources true
    • Here is the syntax for setting a string gconf key:
    • gconftool-2 --direct --config-source xml:readwrite:/etc/opt/gnome/gconf/gconf.xml.mandatory --type string --set /apps/metacity/window_keybindings/begin_resize disabled
    • Note how both keys being modified are in the gconf.xml.mandatory directory. To make a key default rather than mandatory switch gconf.xml.mandatory to gconf.xml.defaults.
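Those one-liners batch up nicely. Below is a minimal sketch of a lockdown script built around the same mandatory config source and keys shown above; the set_key helper echoes each command as a dry run, so remove the echo to actually apply the settings on a SLED box:

```shell
#!/bin/sh
# Mandatory settings live here on SLED 10; swap in gconf.xml.defaults
# to set defaults instead of mandatory values
CONF="xml:readwrite:/etc/opt/gnome/gconf/gconf.xml.mandatory"

# Dry-run helper: prints the gconftool-2 command instead of running it
set_key() {  # usage: set_key TYPE KEY VALUE
    echo gconftool-2 --direct --config-source "$CONF" \
        --type "$1" --set "$2" "$3"
}

set_key bool   /apps/metacity/general/reduced_resources       true
set_key bool   /desktop/gnome/lockdown/disable_command_line   true
set_key string /apps/metacity/window_keybindings/begin_resize disabled
```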

The GConf schema is broken down into four main categories: apps, desktop, schemas, and system. As far as lockdown is concerned, the main categories of interest are apps and desktop. Listed below are some important GConf keys which you can modify to customize and lock down your desktops. Remember that these keys can be set as default or mandatory for users.

  • /apps/gnome-screensaver/idle_activation_enabled –This will force the screen saver to come on when the session is idle
  • /apps/gnome-screensaver/idle_delay –The number of minutes of inactivity before the session is considered idle.
  • /apps/gnome-screensaver/lock_enabled –Set this to TRUE to lock the screen when the screensaver goes active.
  • /apps/nautilus/preferences/show_desktop –If set to true, then Nautilus will draw the icons on the desktop. If false the user will not be able to interact with the file system through the Desktop.
  • /apps/panel/global/locked_down –If true, the panel will not allow any changes to the configuration of the panel. Individual applets may need to be locked down separately however. The panel must be restarted for this to take effect.
  • /desktop/gnome/applications/main-menu/lock-down/search_area_visible –set to true if the search area should be visible and active.
  • /desktop/gnome/applications/main-menu/lock-down/user_modifiable_apps –set to true if the user is allowed to modify the list of user-specified or “Favorite” applications.
  • /desktop/gnome/background/picture_filename –File to use for the background image
  • /desktop/gnome/lockdown/disable_command_line –Prevent the user from accessing the terminal or specifying a command line to be executed. For example, this would disable access to the panel’s “Run Application” dialog.
  • /desktop/gnome/lockdown/disable_printing –Prevent the user from printing. For example, this would disable access to all applications’ “Print” dialogs.
  • /desktop/gnome/lockdown/disable_print_setup –Prevent the user from modifying print settings. For example, this would disable access to all applications’ “Print Setup” dialogs.
  • /desktop/gnome/lockdown/disable_save_to_disk –Prevent the user from saving files to disk. For example, this would disable access to all applications’ “Save as” dialogs.
  • /desktop/gnome/remote_access/ –There are a number of settings in this directory for configuring remote access through vnc.

There are many other useful keys and some new ones we have introduced in SLED 10 SP1. I suggest that you spend some time browsing through gconf with gconf-editor. Each key has a “description” associated with it that will give you some info on what it does.

Permissions and groups are another useful way of locking down desktops. You can modify permissions on particular applications so that only users in a specific group have access to them. In the example below I show you how to change permissions on Firefox and gnome-terminal so that user1 can use both firefox and gnome-terminal, but user2 can only use gnome-terminal.

#Here I create two groups
groupadd -g 203 gnometerminal
groupadd -g 204 firefox

#Here I assign local users to the appropriate group or groups
#(on SUSE, usermod -A adds supplementary groups; -G replaces the set)
usermod -A gnometerminal,firefox user1
usermod -G gnometerminal user2

#Here I change the ownership of the applications so that root and the group control them
chown root:firefox /usr/bin/firefox
chown root:gnometerminal /opt/gnome/bin/gnome-terminal

#Here I change the permissions so that users outside the group cannot execute them
chmod 754 /usr/bin/firefox
chmod 754 /opt/gnome/bin/gnome-terminal
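If the meaning of mode 754 isn’t second nature (7 = rwx for the owner, 5 = r-x for the group, 4 = r-- for everyone else), here’s a quick throwaway demonstration with a made-up script:

```shell
# Create a trivial script and apply the same mode used above
cat > demo.sh <<'EOF'
#!/bin/sh
echo ok
EOF
chmod 754 demo.sh

# The owner can still run it; non-group members can read but not execute
ls -l demo.sh          # shows -rwxr-xr--
./demo.sh              # prints ok
```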

Another way to lock down the system is by removing components. The easiest way to prevent users from using certain applications is by not installing them in the first place. You can remove applications by using the YaST software management module or by using the rpm -e command.

You can further lock down the system by removing certain kernel modules. By removing the following module you can prevent the system from recognizing USB mass storage devices (like flash drives, USB drives, iPods, etc.) while still allowing USB keyboards and mice.

/lib/modules/ (you can use the uname -r command to determine which version of the kernel you’re using).
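As an aside, the driver in question for USB mass storage is typically the usb-storage module; here’s a hedged way to see where it lives for your running kernel (the exact path varies by kernel release and distribution):

```shell
# Modules for the running kernel live under /lib/modules/$(uname -r);
# search there for the usb-storage driver (this may print nothing in a
# minimal environment such as a container)
uname -r
find "/lib/modules/$(uname -r)" -name 'usb-storage*' 2>/dev/null || true
```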

While you can use GConf to prevent users from getting to terminal applications installed on the system, you need to configure /etc/X11/xorg.conf to prevent access to virtual terminals. In the “ServerLayout” section, add the following lines to prevent users from switching to a virtual terminal and from killing X by typing Ctrl-Alt-Backspace:

Option "DontVTSwitch" "true"
Option "DontZap" "true"

This article only shows a small subset of the lockdown functionality of SUSE Linux Enterprise Desktop, but it should get you well on your way. Have a lot of fun!

In a world where even SSH seems like it’s not enough, enter SBD. Yeah, it’s the same initials as something that we all said as kids, but it really refers to System Back Door.

SBD is an ultra-secure service that relies on the SBD protocol, one-time pads and the HMAC authentication routine to verify what you’re sending to it.

Effectively, it allows you to encrypt a single command that is sent to the server based on completely random and identical files on both systems, making it easy to send a wake-up call to an SSH server or other service with an almost-unbreakable one-time encrypted command.

After using the service on demand, you can then disable it with another SBD-secured command, or have the service disable itself automatically via scripting. There’s a great article about this, including make instructions for those who find they need this additional security measure. The SourceForge project page, while, ahem, somewhat terse, is helpful too.



Often I find myself poring over data files, usually logs or large output from programs or data sets, trying to find any differences, if they exist. Years ago, the method of finding differences in similar files was to open a text editor and scroll them on the screen simultaneously, if possible. Not a very accurate method, and seriously hard on the eyes.

I’ll start with the simplest and most prosaic of comparison tools, the diff command. Comparing two files with diff is pretty easy; the command would be:

diff file1 file2

Most people are confused by the output of a diff compare, as any differences are shown rather cryptically with < and > arrows, sets of numbers and letters, etc. The output is not designed to be especially human-friendly; it’s designed to be used to patch files with updates to those files, and is really a set of instructions consumed by the patch command when a patch is applied. Explaining this in a posting of this length without putting people to sleep is not really possible, so for a more detailed view of these instructions, visit the GNU help pages for comparing and merging files.
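A tiny invented example makes the notation less cryptic: “2c2” below means line 2 of the first file was changed into line 2 of the second, with < lines coming from the first file and > lines from the second:

```shell
# Two small files that differ on one line
printf 'alpha\nbeta\ngamma\n' > file1
printf 'alpha\nBETA\ngamma\n' > file2

# diff exits non-zero when the files differ, hence the || true
diff file1 file2 || true

# The unified format (-u) is what the patch command most often consumes
diff -u file1 file2 || true
```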

A particularly useful offshoot of VIM (Vi IMproved) is my favorite method for comparing files, used by executing the vimdiff command with two or more filenames as arguments. For example, if I had file1 and file2 to compare, I would execute the following command:

vimdiff file1 file2

This opens a version of VIM with two windows, vertically separated, making it easy to visually compare the two files. If I scroll the first file, it locksteps the second file, moving them both so you can see the changes in real-time. You can switch between file1 and file2 by pressing Ctrl-w and then w again, and quitting all the files is easiest by hitting ESC and then typing :qall.

GUI tools abound, the most common of which seems to be Meld, which you can read more about in this article. Other options include Diffuse, a graphical tool that does similar things to Meld. Another tool of the same type and style is Directory Synchronise.

Of course this doesn’t include tools like uniq, which, given input whose lines have been sorted so that exact matches are grouped together, discards all but a single instance of each repeated line (uniq itself only collapses adjacent duplicates, so you normally sort first). The resulting output is sent to standard out, typically the console. This doesn’t tell you the differences between files, but it’s seriously useful.
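Since uniq only collapses adjacent duplicate lines, it’s almost always paired with sort; a quick illustration with invented input:

```shell
# Four lines, two distinct values: sort groups the duplicates,
# then uniq keeps a single copy of each
printf 'beta\nalpha\nbeta\nalpha\n' | sort | uniq

# uniq -c prefixes each surviving line with its repeat count
printf 'beta\nalpha\nbeta\nalpha\n' | sort | uniq -c
```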

Got a fave tool that you use to compare files or directories? Post a comment and if we update the story with it, you’ll get a shout-out/mention and some good karma, probably.


P.S.   thehoagie commented that the Trac tool is a great way to see differences in code, see a demo here.

The Easiest

The first and most obvious option for creating PDFs with Open Source is the excellent office suite, where creating a PDF is just a quick click on the Export to PDF icon away. You can always print from Firefox and other apps on a Linux system and convert the resulting .ps file to a PDF with the instructions on this page, or this page; for a whole suite of .ps-to-PDF (and vice versa) tools, check out PStill. An online source for .ps-to-PDF conversion is located here.

An excellent source for PDF-creation tools is the Wikipedia List of PDF Software page. The sections are broken up into Multi-platform Free and Open Source and then Multi-platform Proprietary, Linux/Unix, Mac OS X and Windows. Someone wrote an informative article that you might find useful, here.

Printer Driver Capture

After that, a popular method is to install a different printer driver, one that captures print jobs to a PDF, much like the driver that Adobe Acrobat installs on Windows machines. The first option I recommend is PDF Creator, which is a direct competitor to the Acrobat printer driver and runs only on Windows.

Another possibility in the print driver replacement side of things is CUPS (Common Unix Printing System), and in particular the CUPS-PDF module that effectively gives you a network printer that produces PDF’s on demand. Here is a link to the documentation that explains how this all works. Someone blogged about this too, nice helpful post.

Standalone Apps

Standalone apps to create PDFs include CutePDF, which has both free and for-pay editions and is probably the most popular free PDF-creation tool for Windows users. Another standalone option is Foxit PDF Creator, which is available for free, and they have a load of other apps that look very useful, including Foxit Reader for Desktop Linux and Embedded Linux, and an interesting search tool called Foxit PDF IFilter.

For-pay PDF-creation tools include PDF-Creator, which is free to try but costs money to unlock all the features. Another option is Vista PDF Creator, which has a reasonable set of features in comparison to others. Go2PDF is the smallest freeware tool to create PDFs, but you’ll find that if you want advanced features you’ll have to go elsewhere. PrimoPDF is an example of a great free app with a good feature set, including the ability to merge and append PDFs.

Online PDF Tools

Finally some online PDF-creation tools exist, one of which is PDF Online, which has a free PDF Creator tool online, along with a for-pay EasyPDF tool. Check out the PDF Online blog, very informative.

Hopefully this is helpful, leave a comment if you know of anything that I have missed. (Update: I missed something, a Cool Solutions article that andysp brought to my attention, thanks!)

Enjoy and Digg This Story!


Here’s a good question for all of you: What is the average daily CPU utilization of your Linux systems? If you know, you’re probably on the higher end of utilization, or have recently done a study with an eye to reducing costs, such as server consolidation. If you don’t know, why not? I always recommend that you gather data, even if it’s not that often, because it’s better to know than not know, and you can really get good information from your systems.

Over the years I have used a wide range of methods to get system information, including extremely complex and expensive tools from the higher-end vendors. However, as with most of the rest of Open Source, no one tool is the be-all-end-all; it takes a toolbox to really get what you want, so here is the set of tools that I think will be of help to you.

Command Line Tools:

top – One of the most basic tools, it’s quick, easy to use and has some surprising features, like toggle keys for turning features on and off while top is displaying its information (i to toggle off idle processes, b to bold the most active process and H to show threads as well as processes).

htop – An improved version of top, not included in most distros, but you can get it here. Check out the comparison of top and htop.

iftop – Like the top command for interfaces, this command shows a sorted view of the various network interfaces; toggles include s to ignore source, d to ignore destination and t to cycle through display options. Get iftop here.

iptraf – Where iftop is a little simplistic, iptraf offers a lot more complexity and options. Rather than try to explain it in detail, there’s a great article that explains a lot about iptraf. Get iptraf here.

uptime – Talk about simplistic: the uptime command displays various values, such as the time, the amount of uptime, the user count and load averages for the last 1, 5 and 15 minutes. (Included in SLE) Sample output is shown below:

13:30 up 2 days, 15:39, 2 users, load averages: 1.07 1.16 1.24
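On Linux those load averages come straight from the kernel via /proc/loadavg, whose first three fields are the 1-, 5- and 15-minute averages; handy when you want the numbers in a script rather than on screen:

```shell
# Raw kernel view of the same numbers uptime prints
cat /proc/loadavg

# Just the three averages, e.g. for feeding a monitoring script
cut -d' ' -f1-3 /proc/loadavg
```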

strace – Traces the system calls and signals that a particular command uses; you can save the output to a file, filter it through grep for keywords like “error”, etc. Very useful for a misbehaving program, or for errors that aren’t displayed in the interface. It can be used to trace either a program at execution (“strace /bin/date”) or a running process (“strace -p 4899“). (Included in SLE)

ltrace – Monitors dynamic library calls made by a program, either at execution or while running (same syntax as strace above). (Included in SLE)

sar – Possibly one of the most useful (and vexing) commands you can use to monitor a system, System Activity Reporter gathers the specified information from the system on a scheduled basis and builds logfiles that you can then report on or mine specific timeframes to see what was going on between a set of time ticks. You can monitor many things with sar, including: file access routines, buffer activity, system call activity, block device activity, paging and much more.

Hopefully this little roundup is helpful to the troops out there, Part II will cover the GUI tools for monitoring Linux systems.



Seems like every other week someone is posting an article about the Death of the Command Line, or how Linux Distributions are Headed in the Wrong Direction, or mourning how we’ve all become so graphically oriented we’re losing our command line skills.

Jem Matzan of the Jem Report opines that XGL and twirly-spinny cubes are distracting and slow down games, etc. I agree that it can be distracting and seemingly slow things down; that’s why these are packages that you have to install and enable, not part of the default install. No one is forcing anyone to use the XGL/Compiz cube; it’s an option for those who think they’ll use it and get something out of it.

I have a different perspective from all the “Whither goeth the Command Line?” sorts out there: the historical view of operating systems shows that we started out with the CLI on just about every OS that has come out; even Windows NT had a solely CLI interface in its first few incarnations. It was to Dave Cutler’s great sorrow (and rage; he was known to punch a wall or two…) that they forced the WFW GUI onto his project, the result being the ever-stellar Windows NT.

I usually explain (and have for over a decade in classes and presentations) that you can coexist with the CLI and GUI; just use the right one for the task at hand. The CLI is fast, it’s able to auto-complete commands and directories, and, most of all, it’s very scriptable: you can do something over and over again manually, or you can just script the steps and execute them on demand.

The GUI, on the other hand is excellent for beginners, to walk people through sets of steps that would be mind-numbingly difficult or repetitive on the CLI, and essentially for one-off, every-now-and-then types of tasks that you won’t remember the steps for or it would be a waste of time to script. Adding users with a GUI is fine if it happens a couple of times a day or week, but adding in a couple of hundred or several thousand users requires a command line interface, or bulk importing tools.
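To make the bulk-user point concrete, here’s a hedged sketch; users.csv and the account names are invented, and the loop echoes each useradd instead of running it (creating accounts requires root):

```shell
# Invented sample input: one "login,Full Name" pair per line
cat > users.csv <<'EOF'
jdoe,Jane Doe
bsmith,Bob Smith
EOF

# Dry run: print one useradd command per user; drop the echo to apply
while IFS=, read -r login fullname; do
    echo useradd -m -c "$fullname" "$login"
done < users.csv
```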

I have found that the SUSE products are designed to have 3 methods of doing just about any task:

  • CLI – Typing commands and editing files manually, fun, thought-provoking and often leads to troubleshooting
  • TUI – Characterized by the /sbin/yast text-based UI, often using the NCurses library, it’s a text version of a GUI, as it were, and YaST in the TUI mode matches screen for screen with the GUI mode
  • GUI – Characterized by the /sbin/yast2 X11 GUI interface, it’s run in X, is visually pleasing and very easy for people to understand and use

Usually when I hear someone complaining about CLI vs. TUI vs. GUI it’s because something they used to be able to do is not immediately obvious to them on a new distribution or they haven’t taken the time to figure it out, so they complain. If we ever want to achieve the amount of mass-market appeal that Novell once enjoyed, this time based on and running Linux, we have to provide tools that provide for a safe, sane and doable path for the various types of methods that people want to use. I think SUSE does this well, and not many others do.

How do you feel about CLI vs. GUI? Should we feature more articles about tools and the CLI? Comment and let me know.

