December 2000, Issue 60       Published by Linux Journal


Visit Our Sponsors:

Linux NetworX
Penguin Computing
Red Hat

Table of Contents:


Linux Gazette Staff and The Answer Gang

Editor: Michael Orr
Technical Editor: Heather Stern
Senior Contributing Editor: Jim Dennis
Contributing Editors: Michael "Alex" Williams, Don Marti, Ben Okopnik

TWDT 1 (gzipped text file)
TWDT 2 (HTML file)
are files containing the entire issue: one in text format, one in HTML. They are provided strictly as a way to save the contents as one file for later printing in the format of your choice; there is no guarantee of working links in the HTML version.
Linux Gazette[tm],
This page maintained by the Editor of Linux Gazette,

Copyright © 1996-2000 Specialized Systems Consultants, Inc.

"Linux Gazette...making Linux just a little more fun!"

 The Mailbag!

Write the Gazette at


Help Wanted -- Article Ideas

New submission address!

Send tech-support questions, answers and article ideas to The Answer Gang <>. Other mail (including questions or comments about the Gazette itself) should go to <>. All material sent to either of these addresses will be considered for publication in the next issue. Please send answers to the original querent too, so that s/he can get the answer without waiting for the next issue.

Unanswered questions appear here. Questions with answers--or answers only--appear in The Answer Gang, 2-Cent Tips, or here, depending on their content.

Before asking a question, please check the Linux Gazette FAQ to see if it has been answered there.

No unanswered 'help wanted' letters this month.

Gazette Matters

 Fri, 3 Nov 2000 09:39:41 -0000
From: Arthur G S Wilkinson <>
Subject: LG FTP listings have bogus "@" signs in them

I have noticed that the Linux Guides FTP site at returns the directory listing in a format which appears garbled in some versions of Microsoft Internet Explorer.

Using the Windows command-line FTP program, the Unix user and group IDs appear with @'s in them; this appears to confuse IE.

Can anything be done about this?

[This was an artifact of our upgrade of wu-ftpd from 2.6.0 to 2.6.1, following advisories against version 2.6.0. As near as I can tell, the "@" in the directory listings resulted from a defect in this new version of wu-ftpd. For this reason, and because wu-ftpd was now experiencing segvs (indicating possible buffer overflows or memory-allocation problems), we've retired it in favor of a relative newcomer, muddleftpd. From the net noise I've found so far, this daemon is well recommended. The configuration is very simple and covers our needs nicely. Take a look and give us some feedback if you like. -Dan.]

 Sun, 5 Nov 2000 11:19:15 EST
From: <>
Subject: suggestion

It would help if the contents had a concise statement of what each article holds. The sentence could appear only when the cursor passes over.

[Regarding the first part (a concise abstract for each article), we'll consider that the next time we revise the Gazette's layout. The current Table of Contents doesn't have room for it, and we really want all the article links visible with as little scrolling as possible.

What should the concise statement contain that isn't already in the title? I try to make the title as descriptive as possible, so that readers will not miss an article about something they're concerned about simply because they didn't realize the article would be about that.

Regarding the second part (making the sentence appear only when the cursor passes over it): that would require Javascript, and we have preferred to keep the site free of Javascript, style sheets, etc--anything which might cause problems for some browsers. Perhaps in the future we'll revisit the question of Javascript now that it has a browser-neutral standard (ECMAscript).]

 Thu, 9 Nov 2000 16:36:35 -0600
From: THE MAGE <>
Subject: Getting all the FTP files in one file

Dear editor, I would like to know if there is any way I could download all the issues in HTML format within a tar.gz or .zip file. I know that I could download each issue alone, but it would be very helpful if you could tell me a way to download all the magazine's issues together.

[There is no single file that contains all the issues. However, you can have a program download all the files at once without human intervention. With the standard ftp client, run the prompt command once or twice until it says "Interactive mode off" (this prevents it from asking whether to download each file), then run:
mget *
With ncftp, a simple
get *
will do the same. For other clients, I don't know the options...

I personally would use ncftp for a one-time download, or rsync to set up something which would run regularly via cron, or rsync on demand via a simple shell script. The beauty of rsync is that it downloads only the portions of files that have changed, saving time and bandwidth, especially if your Internet access is expensive. -Mike.]
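A minimal shell sketch along the lines Mike describes. The server name and module path are placeholders (not the Gazette's actual rsync address), and by default the script only prints the command it would run:

```shell
#!/bin/sh
# Mirror the Gazette's FTP tree with rsync -- only changed
# portions of files are transferred on later runs.
SERVER=${SERVER:-rsync.example.com}   # placeholder host
MODULE=${MODULE:-linux-gazette}       # placeholder path
DEST=${DEST:-$HOME/lg-mirror}

# -a archive mode, -v verbose, -z compress in transit,
# --delete drop local files that vanished upstream.
CMD="rsync -avz --delete rsync://$SERVER/$MODULE/ $DEST/"

if [ "${DRY_RUN:-yes}" = yes ]; then
    echo "$CMD"        # show what would run; set DRY_RUN=no to go
else
    mkdir -p "$DEST" && $CMD
fi
```

Dropped into a crontab with DRY_RUN=no, a script like this keeps a local copy fresh without any human intervention.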

 Fri, 10 Nov 2000 19:29:16 -0500
From: Andy Kinsey <>
Subject: Kudos

Just a note regarding one of your 2-cent tip submissions:

I attempted to perform the 2-cent tip, from the March 2000 Linux Gazette that places a weather screen on the desktop. I was having difficulty, so I e-mailed the author, Matthew Willis. Matt not only replied quickly to my question, but suggested a way to fix the problem, which worked. Thanks to Matt's assistance (which he did not have to do), I discovered the problem and learned something new in the process. Matt is a credit to Linux Gazette, and I'll be looking forward to many more tips from him and others like him.

 Sun, 12 Nov 2000 01:16:01 EST
From: Mike Cathcart <>
Subject: dmesg explained

I just finished reading the article 'dmesg explained'. Good article, although I thought you might like to know that some of the excerpts from dmesg that are shown are not visible in Konqueror. Basically, any excerpt that does not include a <BR> tag is not rendered. This can be fixed by adding a <BR> to the end of those excerpts, which will not change the appearance in other browsers. I'll be filing a bug report, but I thought you might want to 'fix' the page in the meantime.

Your Editor wrote:

You mean all the <PRE> blocks need a <BR> just before the </PRE>? Or they need it on every line?

Mike responded:

Actually, they just need a <BR> anywhere inside the <PRE>...</PRE>, it doesn't really matter where or how many. Kinda weird, but that seems to do it.

[I added a <BR> tag inside the manual page excerpt. Does it look all right in Konqueror?

I'm not interested in putting <BR> tags in other articles just for this browser bug. I suppose if it were Netscape or IE, I'd have to. -Mike.]
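For anyone who does want to patch a page by hand, a sed sketch of the workaround (the sample file here is made up; substitute the real article):

```shell
# Make a tiny sample page containing a <PRE> block.
printf '<PRE>\ndmesg output here\n</PRE>\n' > sample.html

# Insert a <BR> just before the closing </PRE>; browsers that
# don't have the bug simply ignore it.
sed 's|</PRE>|<BR></PRE>|g' sample.html > fixed.html
```

Since Mike notes that a single <BR> anywhere inside the block suffices, placing it just before </PRE> keeps the visible text untouched.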

 Sun, 12 Nov 2000 01:16:01 EST
From: BanDiDo <>
Subject: Kudos for LG

LG is awesome; if you charged for it I would subscribe. When I get some free time one of these days I hope to pen a few articles and such.

With appreciation for a fine publication

Your Editor writes:

Thanks. Linux Gazette was established as a free zine and we firmly intend to keep it that way. There are already paid magazines out there (we publish one of them :), but LG fills a unique niche. No other e-zine I know of (Linux or otherwise) is read, not just through a single point of access, but in large part via mirrors or off-line (via FTP files, CD-ROMs, etc).

Also, because LG's articles are written by our readers, you (readers) are truly writing your own magazine. I only put things together and insert a few comments here and there, and occasionally write an article. If it weren't for our volunteer authors, there would be no Linux Gazette. When I first took over editing in June 1999, I used to wonder every month whether there would be enough articles. But every month my mailbox magically fills with enough articles not just for a minimal zine (5-10 technical articles), but for a robust zine with 15+ articles covering a variety of content (for newbies and oldbies, technical articles and cartoons). A year ago, we never predicted there would be cartoons in the Gazette, but the authors just wrote in and offered them, and it's been a great addition. It is truly a privilege to work with such a responsive group of readers, and years from now when I'm retired (hi, Margie!), I'm sure I will remember fondly what an opportunity it was.

Our biggest thanks go to The Answer Gang, especially Heather and Jim, who each spend 20+ hours a month _unpaid_ compiling The Answer Gang, 2-Cent Tips and The Mailbag. This has really made things a lot easier for me.

We look forward to printing some articles with your name on them. See the Author Info section at

And you other readers who haven't contributed anything yet, get off your asses and send something in! Write a letter for the Mailbag, answer a tech-support question, join The Answer Gang, do a translation for our foreign-language sites, or write an article. What do *you* wish the Gazette had more of? *That's* what it needs from you.

BanDiDo wrote back:

Would be lovely if you guys established an EFNET irc channel :)

This page written and maintained by the Editors of the Linux Gazette. Copyright © 2000,
Copying license
Published in Issue 60 of Linux Gazette, December 2000


News Bytes


Selected and formatted by Michael Conry and Mike Orr

Submitters, send your News Bytes items in PLAIN TEXT format. Other formats may be rejected without reading. You have been warned! A one- or two-paragraph summary plus URL gets you a better announcement than an entire press release.

 December 2000 Linux Journal

The December issue of Linux Journal is on newsstands now. This issue focuses on System Administration. Click here to view the table of contents, or here to subscribe. All articles through December 1999 are available for public reading at Recent articles are available on-line for subscribers only at

Distro News

 ASP Linux

ASP Linux - the first Singapore-made Linux distribution for the PC - has been developed by SWsoft with the needs of Application Service Providers in mind. It combines a user-friendly Linux distribution (Red Hat 7 compatible) for home and office desktops with professional server software packages for ASPs, ISPs, NSPs and others. SWsoft are currently looking for mirrors located in the Asia Pacific region for their distribution, source code and applications.


OREM, UT-November 1, 2000: In a follow-up to last month's item, Caldera Systems announced that its Linux management solution (formerly code-named "Cosmos") has been named Caldera Volution. The product, currently in open beta, is still available for download from Caldera's Web site at Volution is a browser- and directory-based management product for Linux systems that utilizes the strengths of LDAP directories. Using Volution, network administrators can create policies and profiles to manage a half-dozen or thousands of Linux systems, without having to individually manage or touch each one.

OREM, UT-November 6, 2000: Caldera Systems, Inc. announces its upcoming Linux/Unix Power Solutions Tour 2000, which runs from November 14th through December 12th. The 12-city tour targets those who develop and deploy on Linux and Unix-including VARs, ASPs, ISVs, developers, resellers, consultants and corporate IT professionals. This tour presents Caldera's vision of the future for Linux and UNIX, along with Linux training. Each presentation on the tour includes two sessions: a morning business briefing and an afternoon Linux Essentials course with hands-on training, including for-sale software and solutions guides. You can get more details from, or call toll-free on 1-866-890-8388.


Neoware Systems showcases the first embedded Linux designed specifically for desktop computing appliances at Comdex Fall 2000. NeoLinux 2.0 is the latest version of its embedded Linux operating system. NeoLinux features a newly designed, customizable user interface designed specifically for desktop computing appliances, and made up of two main components. ezConnect provides a user interface to allow users and administrators to easily create connections to run a range of applications (e.g. MS Windows applications on servers, or UNIX applications over X). ezSnap allows new software features to be easily added to appliances across a network. As a stand-alone product NeoLinux 2.0 is available for $60 per appliance with one year of technical support and upgrades.


Storm sent us some links which may be of interest to those wanting to find out about this distribution...

News in General

 Upcoming conferences and events

Courtesy Linux Journal. For the latest updates, see LJ's Industry Events page.
USENIX Winter - LISA 2000
December 3-8, 2000
New Orleans, LA
Pluto Meeting 2000
December 9-11, 2000
Terni, Italy
LinuxWorld Conference & Expo
January 30 - February 2, 2001
New York, NY
February 5-8, 2001
Toronto, Canada
Internet World Spring
March 12-16, 2001
Los Angeles, CA
Game Developers Conference
March 20-24, 2001
San Jose, CA
March 22-28, 2001
Hannover, Germany
Linux Business Expo
April 2-5, 2001
Chicago, IL
Strictly e-Business Solutions Expo
May 23-24, 2001
Location unknown at present
USENIX Annual Technical Conference
June 25-30, 2001
Boston, MA
PC Expo
June 26-29, 2001
New York, NY
Internet World
July 10-12, 2001
Chicago, IL
O'Reilly Open Source Convention
July 23-26, 2001
San Diego, CA
LinuxWorld Conference & Expo
Conference: August 27-30, Exposition: August 28-30, 2001
San Francisco, CA
Linux Lunacy
Co-Produced by Linux Journal and Geek Cruises

October 21-28, 2001
Eastern Caribbean

 OEone and Tatung Join Forces

Toronto, ON - October 31, 2000: A joint agreement has been announced between Ottawa-based OEone and Tatung Co. of Canada. The two companies will be working together to bring fully-integrated, Linux-based Internet Computer solutions to leading OEM customers. The core of this deal is an exclusive arrangement between the two parties to fully integrate OEone's Linux-based Operating Environment and web applications with Tatung's All-In-One plus additional custom computer designs.

 SGI and ePeople Linux support

MOUNTAIN VIEW, Calif. Nov. 6, 2000: SGI and ePeople are bringing a new online technical support marketplace to Linux users. Linux users can receive support through a new SGI Online Helpdesk or through the ePeople marketplace web site.

The agreement also allows the more than 200 SGI open source operating system support technicians to join the ePeople marketplace to provide fee-based Linux support to anyone who needs it. SGI also offers Web-based service incident packs, called WebPacks, from its Online Helpdesk. WebPacks are prepaid service agreements available in quantities of 5, 10 or 20 incidents, (e.g. a 5-incident WebPack costs $449 (U.S. list)).

 Clarksville Linux Users Group

Kind of a local news item (Tennessee, USA), but LUGs are a very important part of the whole Linux movement: the CLlug meets the third Thursday of every month in the Claxton Bldg, third floor, at Austin Peay State University. CLlug has been around for almost a year and is actively recruiting new members. Mark, from CLlug, tells us that they are pushing for further Linux use in the University, in particular by linking in with staff who already use Open Source software. The group has also got the use of the College's projection facilities for meetings and classes.

Our Editor, Mike, recommends looking up GLUE (Groups of Linux Users Everywhere) for anyone interested in finding like-minded individuals in their area, or in publicising new groups.

 IBM/KDE League and Voice Technology

November 15, 2000 (Las Vegas, Nevada): Further to their existing support for Linux, IBM are now joining the KDE League and integrating their ViaVoice technology into KDE. IBM's ViaVoice is currently the only voice-recognition software commercially available for the Linux operating environment.

The KDE League is a group of industry leaders and KDE developers formed to focus on facilitating the promotion, distribution and development of KDE. The League will not be directly involved in developing the core KDE libraries and applications, but rather will focus on promoting the use of KDE and development of KDE software by third party developers.

 Linuxcare and Eazel partner

SAN FRANCISCO Nov. 7, 2000: Linuxcare and Eazel announced a partnership geared toward speeding Linux development.

Under the agreement, Linuxcare will provide email support services to customers of Eazel's Network User Environment which includes Eazel Services and the Nautilus client for Linux systems which can be downloaded at Linuxcare will also maintain a Linux knowledgebase support site at by capturing documentation and software updates, as well as managing and updating support FAQs. Linuxcare's services will support the preview of Eazel's Internet services and Nautilus client that is being integrated with the GNOME 1.4 windowing system.

 Training Pages (UK's Largest Online Training Directory) Reaches 10,000 Courses

Training Pages announced that it had just passed the landmark of ten thousand (10,000) distinct and separate courses.

Training Pages runs entirely on open source software, including the Linux operating system, the Apache web server, the MySQL database and the PHP scripting language.

 Announcing Release 0.9.2 of the Computer History Graphing Project

Version 0.9.2 of the Computer History Graphing Project has now been released. The project aims to graph all of computerdom in one large family tree. This version contains an updated version of the unified parser program, parsech. It can now optionally output a DBM hash containing the parsed data. The documentation has also been updated, in addition to the data trees. More specifically, the NeXT, Palm, Windows, and Apple Darwin trees have all been updated. The project's web site is located at

 Linux Links

Adobe beta tested a Linux version of FrameMaker, then decided not to release a product. Linux Weekly News speculates why.

Is the Internet in China, rather than heralding an age of open communication, actually solidifying Big Brother's control? Linux Journal author Bryan Pfaffenberger argues so in his web article The Internet in China.

Tips on getting that darned mouse wheel to scroll under X.

Links from The Duke of URL:

Links from Anchordesk and ZDnet:

MSNBC have a good report from COMDEX 2000, focusing on the rise of embedded Linux systems.

A look at Gnutella, and possible legal implications.

Open-source developer's agreement (clauses for the contract between developers and their employers)

Slashdot review of a book explaining the Open Source revolution to non-tekkies.

From Linuxworld, an article alleging that MS is using Linux code in the latest Windows versions to make their product more stable.

Traceroute Java Servlet sources (under Linux) are now available for free downloading from the

Software Announcements

 IBM Small Business Suite for Linux

Somers, NY, November 6, 2000 . . . IBM today announced the industry's first Linux-based integrated software solution for small businesses. It delivers the tools necessary to help customers with messaging and collaboration, productivity, Web site creation and design, and data management. IBM also includes a fully integrated install program.

"This offering provides small businesses, and the Value-Added Resellers (VARs) and Independent Software Vendors (ISVs) that serve them, everything they need to do serious e-business on Linux," said Scott Handy, director, Linux solutions marketing, IBM Software. "The IBM Small Business Suite is first-of-a-kind for Linux and delivers the three most requested servers: database, e-mail and Web application server software, delivering a great solution at a great price."

The suite is available for US$499 at Site licenses are also available. Supported distributions include Caldera, Red Hat, SuSE and TurboLinux. The installer program and desktop software are available in ten European and Asian languages.

The Small Business Suite for Linux includes the following software:

Lotus Domino Release 5.04

Is a leading messaging and collaboration solution that allows customers to get e-mail and Web sites up and running rapidly with a unified, easy-to-manage administrator interface. This solution provides desktop and mobile e-mail, Web access, calendaring, group scheduling, bulletin boards/newsgroups, workflow and database access. Domino sends and receives e-mail using standard Internet e-mail protocols, including native Internet addressing and SMTP routing, and supports a wide variety of clients and devices, including Web browsers, Lotus Notes clients, and POP3 and IMAP4 mail clients.

IBM WebSphere Application Server, Standard Edition Version 3

This delivers an open and flexible Web application runtime environment for internal and external Web pages by allowing Java servlets to run on top of an HTTP server including the Apache Web server or the included IBM HTTP server powered by Apache. The WebSphere Application Server makes it easy to build security-enhanced, individual Web sites providing Web access to credit, delivery, order processing or other business-critical information. This offering also provides interaction with IBM DB2 Universal Database for access to managed relational data directly from Web applications.

IBM DB2 Universal Database Version 7

This award-winning database is a powerful, easy-to-use multi-media ready relational database management system with the ability to handle the ad-hoc structured queries for a wide variety of data types including text, data, voice, image or any binary object. Reports can be customized and generated from the information to help run a small business more efficiently. Data from third party applications that support DB2, such as accounting information, can be stored, retrieved and reported on from DB2, routed or shared in a teamroom with Lotus Domino or made accessible to customers or suppliers over the internet with WebSphere Application Server.

IBM WebSphere Homepage Builder

Includes the necessary templates, tools and multimedia tutorials for creating and publishing Internet and intranet Web sites and pages in minutes. This easy-to-use software is designed to appeal to the ever-growing community using Linux as both a development platform and a Web server environment.

IBM Suites Installer

This tool assists in the distributed installation and configuration of the suite components and other applications, providing a quick and easy way for customers to install the software.

WebSphere Studio

Is a complete set of tools integrated and designed to support all Web development levels, permitting content authors, graphic artists, page scripters, Web programmers and webmasters to work on the same projects simultaneously. WebSphere Studio features automated Web site building, Java applications and a host of other design and publishing capabilities.

Domino Designer

This rapid Web site design and development tool is used to bring back office data to the Web and implement e-business processes using HTML authoring, site/page design, frameset design and application preview.

 Wolfram Announces Mathematica 4.1

November 27, 2000--Champaign, IL: Wolfram Research, Inc. announces the release of Mathematica 4.1, the latest version of their technical computing system. Mathematica now supports all major Linux platforms natively. With Mathematica 4.1 and Parallel Computing Toolkit, the Linux clusters popular in both academic and commercial settings can easily solve large-scale problems. There are currently more than 150 users of this powerful combination, including the Cornell Theory Center.

Product details are available at

 Omnis Studio 3.0 now brings business solutions to the web even faster

Omnis Software has announced the release of Omnis Studio 3.0, the latest version of their 4GL rapid application development (RAD) program. The new release incorporates extensive changes to their web server and Web Client(TM) technologies, significantly speeding up web-based business applications. It also includes a range of other enhancements to make the development environment more intuitive, easier to use, and more powerful.

Omnis Studio is a high-performance visual RAD tool that provides a component-based environment for building GUI interfaces within e-commerce, database and client/server applications. Development and deployment of Omnis Studio applications can occur simultaneously in Linux, Windows, and Mac OS environments without changing the application code.

A demonstration copy of Omnis Studio 3.0 can be downloaded from the web site: and more details of the new version are available at:

 Backup Utility Integrated into Linux NetworX Evolocity Clusters

SANDY, Utah, Nov. 8, 2000: Linux NetworX, Inc. announced today the integration of the BRU(TM) Backup & Restore Utility into its Evolocity(TM) cluster solutions. BRU is an award-winning backup software solution for Linux systems from Enhanced Software Technologies, Inc.

Tape device technology today is extremely reliable, but errors can still occur on a tape after the archive has been written. BRU can effectively detect and recover from such errors when reading a tape, allowing a restore to complete successfully.

Evolocity cluster systems include computational hardware, ClusterWorX(TM) management software, RapidFlow(TM) 10/100 and Gigabit Ethernet Switch, applications, and storage, including the BRU backup utility.

 Tribes(tm) 2 Coming to Linux: Beta Testers Needed

Tustin, California - November 9, 2000: Loki Software, Inc., publisher of commercial games for the Linux operating system, announces an agreement with Sierra Studios(tm) to bring the highly-anticipated Tribes(tm) 2 to Linux.

Loki is porting this first-person action game alongside the Windows development, and is now accepting beta tester applications for the Linux version. Interested participants should visit and complete an online registration form.

 Mahogany 0.60 GTK+/Win32 mail client with Python scripting

A new release of the `Mahogany' e-Mail and News client has been made. Mahogany is an OpenSource cross-platform mail and news client. It is available for X11/Unix and MS Windows platforms, supporting a wide range of protocols and standards, including POP3, IMAP and full MIME support as well as secure communications via SSL. Thanks to its built-in Python interpreter it can be extended far beyond its original functionality.

Source and binaries for a variety of Linux and Unix systems are available at and

Binaries for Win32 systems and Debian packages will also be made available shortly.

 Opera for Linux Beta2

The latest beta of Opera for Linux is available at

 Announcing GtkRadiant 1.1 Beta for Linux and Win32

Loki Software, Inc. and are pleased to release GtkRadiant 1.1 beta for Linux and Win32. GtkRadiant is a cross-platform version of the Quake III Arena level editor Q3Radiant. GtkRadiant offers several improvements over Q3Radiant and many new features.

For more information, please visit

 Open Source Development Toolkit from Epitera

LAS VEGAS - November 15, 2000: AbsoluteX, a new Open Source development toolkit, was officially launched at COMDEX 2000 in Las Vegas, Nevada, USA. AbsoluteX is an X-Window developer toolkit created by Epitera ( ) to streamline and facilitate the process of developing customized GUIs (graphical user interfaces) for Linux. It is available for free download at ( ). Epitera believes AbsoluteX will help get Linux out of the exclusive IT world and into the mainstream desktop world of home, work and novice users.

 Open Motif available on IA64 TurboLinux

Integrated Computer Solutions, Inc. have announced the first port of Motif to the upcoming IA64 platform from Intel. ICS says that this is important for the Linux community, because most of the existing Enterprise applications written for UNIX platforms (e.g., Suns, HP, SGI, etc.) use Motif as a GUI toolkit. Without the port of Motif to the IA64, it will be difficult and expensive for Enterprises to migrate to Linux.

A full press release is available. The software is also available for download from:

Copyright © 2000, Michael Conry and the Editors of Linux Gazette.
Copying license
Published in Issue 60 of Linux Gazette, December 2000

(?) The Answer Gang (!)

By Jim Dennis, Ben Okopnik, Dan Wilder, , the Editors of Linux Gazette... and You!
Send questions (or interesting answers) to


¶: Greetings From Heather Stern
(?)Caldera Names Linux Management Product Volution --or--
LDAP users: look to Caldera.
(?)Windows NT Event Log on a Linux Box
(?)Two OS
(?)Best Linux Distro For A Newbie...?
(?)newbie installation question
(?)PPP protocol stack modification
(!)What IS "The Internet" anyway?
(?)Classified Disk - Low-level Format
(?)GPM is interfering with x...
(?)Graphics Programming for Printing / Faxing
(?)networked machine goes to sleep
(?)Internet server specifications --or--
Web server/firewall hardware specifications, Apache and Zope
(?)'neighbour table overflow'
(!)DSL on Linux Information
(!)sticky notes

(¶) Greetings from Heather Stern

(?) LDAP users: look to Caldera.

From Caldera

As a followup to the LDAP discussions that have been answered here:
Caldera Systems' Linux management solution (formerly code-named "Cosmos") has been named Caldera Volution. The product, currently in open beta, is available for download from Caldera's Web site at
More details can be found in our News Bytes (Distribution section).

(?) Windows NT Event Log on a Linux Box

Answers by: Dmitriy M. Labutin, César A. K. Grossmann, Niek Rijnbout

You can dump the NT event log with the dumpel utility (it comes with the Windows NT Resource Kit) into a flat file.
(!) [Cesar] To do this I must "be" on the NT computer; it's not something I can schedule from a crontab on the Linux box. I was thinking of some utility I could use to dump the log remotely, from the Linux box, where I have some freedom and tools to do nasty things such as reporting unusual activity by the users...
(!) [Niek] See
...for a $25 application to send the NT log to a syslog host.

The app Niek mentions also appears to deal well with Win2000 and offers e-mail as well as syslog transfer of the events. -- Heather
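Once such a dump does land on the Linux box as a flat file, the kind of reporting Cesar has in mind is nearly a one-liner. A hedged sketch: the tab-separated layout below (event ID in field 5, user in field 7) is an assumption about dumpel-style output, so check a real dump and adjust the field numbers.

```shell
# Fake three dumpel-style records (tab-separated) for the demo;
# this column layout is assumed, not dumpel's documented format.
printf '%s\t%s\t%s\t%s\t%s\t%s\t%s\t%s\n' \
    11/09/2000 09:14:02 8 2 529 Security alice   PDC1  > events.txt
printf '%s\t%s\t%s\t%s\t%s\t%s\t%s\t%s\n' \
    11/09/2000 09:15:40 4 2 538 Security bob     PDC1 >> events.txt
printf '%s\t%s\t%s\t%s\t%s\t%s\t%s\t%s\n' \
    11/09/2000 09:16:11 8 2 529 Security mallory PDC1 >> events.txt

# Event ID 529 is NT's "failed logon" audit record:
# print the date, time and user of each failure.
awk -F'\t' '$5 == 529 { print $1, $2, $7 }' events.txt
```

Run from cron, the awk line could just as easily mail its report to the admin.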

(?) Two OS

From Juan Pryor on Tue, 7 Nov 2000
Answered by: Heather Stern

I'm pretty new to Linux and I was wondering if there is a way in which I can have two OSes working at the same time. I mean, I've had some trouble with the people at my house since they want to go back to Win98 and I only have one PC. Is there any win98 program that reboots and starts in Linux and then when the computer reboots it starts in win98 again? Any help will do.

(!) Juan,
It's very common for Linux users to have their systems set up as dual-boot: sometimes up in MSwin, sometimes running Linux. Some distributions even try to make it easy to turn a box which is completely Windows into a half-and-half setup (or other divisions as you like).
There is a DOS program named LOADLIN.EXE which can easily load up a Linux kernel kept as a file in the MSwin filesystem somewhere - my friends that do this like to keep their Linux parts under c:\linux so they can find them easily. Loadlin is commonly found in a tools directory on major distro CDs. Of course, you do have to let Windows know that Loadlin needs full CPU control. In that sense, it's no different than setting up a PIF for some really cool DOS game that takes over the box, screen and all. Anyways, there's even a nice GUI available to help you configure it, called Winux, which you can get at ... which, I'm pleased to add, comes in several languages.
It's also possible to set up LILO so that it always prefers to boot MSwin (the option is often called 'dos') instead of Linux... in fact, I recommend this too, unless you want to not be able to boot Linux from anything but a floppy if MSwin should happen to mangle its drive space too far.
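For the curious, a sketch of what that looks like in /etc/lilo.conf (the device names and kernel path are generic examples, not anyone's actual layout):

```
boot=/dev/hda        # install LILO in the master boot record
prompt
timeout=50           # wait 5 seconds at the boot: prompt
default=dos          # boot Windows unless told otherwise

image=/boot/vmlinuz  # the Linux kernel
    label=linux
    root=/dev/hda2   # the Linux root partition
    read-only

other=/dev/hda1      # the Windows 98 partition
    label=dos
```

Remember to re-run /sbin/lilo after any edit, or the change won't take effect.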
Now this is kind of different from "two OSes working at the same time"... It is possible to run VMware, and have a couple of different setups running together, but doing this might be rather confusing to family who are not used to anything but Windows. They might accidentally hit some key combination that switches to the other environment that's running, and think they broke something even if it's all running perfectly.
To finish off - it's also possible to find really friendly boot managers; I've been looking over one named GAG (don't laugh, it's just the initials of Spanish words meaning "Graphical Boot Manager") that looks like it might be fun, at ... It was just updated, too. Anyways, it can boot up to 9 different choices and has nice icons for a lot of different OSes you may have on a system. Unlike LILO and some other boot managers that only replace the DOS "master boot record", though, it takes over a fair chunk of track 0.

(?) Best Linux Distro For A Newbie...?

From Michael Lauzon to tag on Tue, 14 Nov 2000
Answers by: Dan Wilder, Ben Okopnik, Heather Stern

I am wondering what the best Linux distro is for a newbie to learn on. (I have been told never to ask this question or it would start a flame war; I of course don't care.) So, in your opinion: what is the best Linux distro for a newbie?

--- Michael Lauzon

(!) [Dan] <troll>
Slackware. Because by the time you really get it installed and running, you know a lot more about what's under Linux's hood than with any other common distribution!
Dan Wilder
Darn those trolls anyway. They're eating the dahlias now!
(!) [Ben] <Grumble> Sure, you don't care; we're the ones that need the asbestos raincoats! :)
(!) [Heather] Well yeah, but I usually put out the flame with a Halon canister labelled "waaay too much information." It does make me popular on the mailing lists, though.
(!) [Ben] Spoilsport. :)
(!) [Ben] To follow on in the spirit of Dan's contribution:
<Great Big Troll With Heavy Steel-Toed Boots>
Debian, of course. Not only do you get to learn all the Deep Wizardry, you get all the power tools and a super-easy package installer - just tell it which archive server you want to use, and it installs everything you want!
(The Linux Gazette - your best resource for Linux fun, info, and polite flame wars... :)
(!) [Heather] Of course it helps if you know which archive server you want to use, or that the way to tell it so is to add lines to /etc/apt/sources.list ...
(!) [Ben] Oooh, are you in for a pleasant surprise! (I was...) These days, "apt" (via dselect) asks you very politely which server you want to use, and handles the "sources.list" on its own. I still wish they'd let you append sources rather than having to rewrite the entire list (that's where knowing about "/etc/apt" comes in handy), but the whole "dselect" interface is pretty slick nowadays. It even allows you to specify CD-based (i.e., split) sources; I'm actually in the process of setting up Debian 2.2 right now, and my sources are a CD-ROM and DVD drive - on another one of my machines - and an FTP server for the "non-free" stuff. Being the type of guy who likes to read all the docs and play with the new toys, I used "tasksel" for the original selection, "dselect" for the gross uninstallation of all the extraneous stuff, and "apt-get" for all subsequent install stuff. It's worked flawlessly.
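For anyone who does end up editing it by hand, /etc/apt/sources.list is just a few "deb" lines, one per archive source (a sketch; the mirror hostnames and the CD label here are examples, not recommendations):

```
# /etc/apt/sources.list - one "deb" line per archive source.
deb http://http.us.debian.org/debian stable main contrib
deb http://non-us.debian.org/debian-non-US stable/non-US main
# apt-cdrom adds entries along these lines for CD-based sources:
deb cdrom:[Debian GNU/Linux 2.2 CD 1]/ stable main
```

Appending a line by hand and running `apt-get update` is all it takes to add a source without touching the rest of the list.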
(!) [Heather] I did write a big note on debian-laptops a while back about installing Debian by skipping the installer, but I think I'll let my notes about the handful of debian based distros stand.
(!) [Ben] I agree with your evaluation. It's one of the things I really like about Debian; I was able to throw an install onto a 40MB (!) HD on a junk machine which I then set up as a PostScript "server", thus saving the company untold $$$s in new PS-capable printers.
(!) [Heather] There is rpmfind to attempt to make rpm stuff more fun to install, but it's still a young package. I think the K guys have the right idea, writing a front end that deals with more than one package type.
(!) [Ben] Yep; "alien" in Debian works well, but I remember it being a "Catch-22" nightmare to get it going in RedHat. I've got package installation (whatever flavor) down to a science at this point, but it could be made easier.
(!) [Heather] It's really a matter of requirements analysis. Most of the flame wars arise from people stating their own preferences, and fussing over those instead of trying to figure out which would work best for you.
"Learning Linux" covers a lot of ground: some people mean learning the unixlike features they've never encountered before; some people mean learning to do the same things in Linux that they already know how to do on other systems. These are, to say the least, rather opposite needs...
If you want to goof off learning Linux but are very afraid of touching your hard drive's data, there are a few distributions designed to run off of a CD, or out of RAM. One pretty good one that runs directly from a RAMdisk is Tom's rootboot. While a lot of people use it merely as a rescue disk, Tom himself lives in it day to day. But it's not graphical. And it's libc5-based, so it's a little strange to get software for. It uses a different shell than most major distributions, but the same kernels. It's not exactly aimed at "just surfing the web and doing email", which I often hear newbies say they'd be happy with. Linux Weekly News has recently sorted their distributions list, so you could fairly easily find a CD-based distro there that meets these more mainstream desires.
If you want to learn about things from their raw parts, the way some kids like to learn about cars by putting one together themselves, there is a Linux From Scratch HOWTO stored at the LDP site.
If the newbie's native language isn't English, he or she probably wants a localized distro, that is, one that installs and whose menus, etc. are in their language. (I'm guessing that such a newbie wouldn't be you - your .sig links were to purely English websites.) You can find a bunch of those at LWN too, but you'll have to go looking at home pages to be sure what languages are covered.
Otherwise, you probably want a "normal" linux, in other words, a major distro. Newbies generally want to be able to ask their local gurus for help, rather than wonder if some random wizard on the internet will ever answer them. If your local techie pals have a favorite, try that - they'll be better at helping you with it than stuff they don't know as well. I could be wrong of course - some techie folks prefer to learn stuff the same time you do, and you can get a great sense of energy by sometimes figuring out a thing here and there faster than they do. But by and large, gaining from someone else's experience will make things smoother, a smooth start will generally be more fun, and enjoying your first experiences will make you more willing to experiment later.
If you like to learn from a book, there are a fair number of books that are about a specific distro, and have a CD of that distro in the back. These are good, but not usually aimed at people who want to dual boot. Just so you know.
The big commercial brands usually push how easy their install is. What they don't push so much is their particular specialty, the market they are aiming for. I've heard good things about Corel (especially for dual-boot plans), and I've seen good things from both SuSE and Storm. Mandrake and Debian have both been a little weird to install - not too bad, but I'm experienced, and I enjoy wandering around reading the little notes before doing things... if you want the computer to be bright enough to do it all by itself, these might not be for you. (Note: my Mandrake experience is a version old. Also, they compile everything Pentium-optimized, so if things go smoothly, it will usually be a noticeably faster system.) Several of the brands are now pushing a "graphical installer" which is supposed to be even easier. However, if you have a really bleeding-edge video card, that can make the distro a real pain to install. Storm and RedHat favor graphical over non-graphical installs. LibraNet has a non-graphical install that still gives Debian a somewhat friendlier setup. I hear that Slackware is fairly friendly to people who like to compile their own software, and I never hear anything about their installer, so maybe it is really incredibly easy. Or maybe my friends don't want to tell me about their install woes once they get going, I dunno ;)
If RedHat (6.2, I have to say I haven't tried 7 yet) is where you're going, and their graphical install is a bummer for you, use their "expert" mode. Their "text" mode is almost useless, and they really do have lots of help in expert mode, so it's not as bad as you would think.
In any case, I would recommend backing up your current system if there's anything on it you want to keep, not because the installs are hard - they're nothing like the days before the 1.0 kernel - but because this is the most likely time to really mangle something, and you'll just kick yourself if you need a backup after all and don't have one.
The next thing to consider is your philosophy. Do you want to be a minimalist, only adding stuff that makes sense to you (or that you've heard of), and then add more later? If so, you want a distro that makes it really easy to add more later. Debian and its derivatives are excellent for this - that includes Corel, Libranet, and Storm. SuSE's YaST also does pretty well for this, but they don't update as often... on the other hand, they don't get burned at the bleeding edge a lot, either. If most of the stuff you'll add later is likely to be commercial, RedHat or a derivative like Mandrake might be better - lots of companies ship RedHat compatible rpm's first, and get around to the other distros later, if at all.
If you have a scrap machine to play on, try several distros, one at a time; most of them are available as inexpensive eval disks from the online stores.
If you'd rather install the kitchen sink and take things back out later, any of the "power pack" type stuff, 3 CDs or more in the set, might work for you. Most of these are still based on major distros anyway, there's just a lot more stuff listed, and you swap a couple of CDs in. Umm, the first things you'll probably end up deleting are the packages to support languages you don't use...
A minimal but still graphical install should fit in a gigabyte or so - you might want 2 GB. A more thorough setup should go on 6 GB of disk or so (you can, of course, have more if you like). It's possible to have usable setups in 300 to 500 MB, but it's tricky... so I wouldn't recommend that a newbie impose such restrictions on himself.
To summarize, decide how much disk you want to use (if any!) and whether you want to go for a minimal, a mostly-normal, or a full-to-the-brim environment. Consider what sort of help you're going to depend on, and that might make your decision for you. But at the end, strive to have fun.
(!) [Ben] Heather, I have to say that this is about the most comprehensive answer to the "WITBLD" question yet, one that looks at a number of the different sides of it; color me impressed.

WITBLD = "What Is The Best Linux Distro"

(!) [Heather] The key thing here is that there are several aspects to a system. When one is "easiest" for you, it doesn't mean all the others are. So, you have to decide what parts you care the most about making easy, and what parts you consider worth some effort for the experience you'll get. Once you know that, you are less of a newbie already. I hope my huge note helped, anyway.

(?) Well, I bought Caldera OpenLinux eDesktop 2.4, so I am looking for people who have had experience with OpenLinux. I still haven't installed it on a computer yet, as I need to upgrade the computer; but once I do that I will install it (though I do plan on buying other distros to try out).

--- Michael Lauzon

(?) newbie installation question

From vinod kumar d
Answers by: Heather Stern, Ben Okopnik

Hello, I'm about to install RedHat Linux as a dual boot on my machine running Win98, which came preconfigured to use my 30 gigs all for Windows. For all the browsing I did through RedHat's online docs, I couldn't figure out one basic thing: should I have an unallocated partition to begin installation, or will Disk Druid/FIPS do the "non-destructive repartitioning" as part of the install?

(!) [Heather] I don't remember whether RedHat will do the right thing here or not. CorelLinux will (in fact, it made a great PR splash by being one of the first to make this pleasant). But CorelLinux is a Debian-type system, not an rpm-type system; I'm not sure what requirements led you to pick RedHat - maybe you need something a bit more similar.
(!) [Ben] Having recently done a couple of RH installations, I can give you the answer... and you're right, it's not the one you'd like to hear. :)
No, RedHat does not do non-destructive repartitioning. Yes, you do need to have another partition (or at least unallocated space on the drive) for the installation - in fact, you should have a minimum of two partitions for Linux, one for the data/programs/etc., and the other one for a swap partition (a max of 128MB for a typical home system.) There are reasons for splitting the disk into even more partitions... unfortunately, I haven't found any resources that explain it in any detail, and a number of these reasons aren't all that applicable to a home system anyway.
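As a concrete illustration of the minimum Ben describes, a dual-boot layout on a 30 GB drive might end up looking like this (the sizes and device names are made-up examples for illustration, not RedHat's defaults):

```
# One possible post-repartitioning layout (illustration only):
#   /dev/hda1   ~24 GB    FAT32   shrunken Win98 C: drive
#   /dev/hda2   ~5.9 GB   ext2    Linux root (/)
#   /dev/hda3    128 MB   swap    Linux swap
```

The Windows partition gets shrunk, and Linux gets one data partition plus one swap partition in the space freed up.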

(?) if i do need the unallocated partition, which is the best partition software to use cos i have stuff that i dont want to lose.

(!) [Heather] If you feel up to buying another commercial product, PartitionMagic is very highly regarded. Not just amongst us linux-ers, but also for people who wanted to make a new D:, give half a server to Novell, or something like that. It's very smart.
It's also what comes in CorelLinux...
If you're more into Linux than MSwin and comfortable with booting under a rescue environment, I'm pleased to note that parted (the GNU partition editor) deals well with FAT32 filesystems. Tuxtops uses that.
If you're feeling cheap, FIPS is a program that can do the drive division after booting from a DOS floppy, which you can easily make under the MSwin you already have. I'm pretty sure a copy of FIPS is on the redhat CD as a tool, so you could use that. It doesn't do anything but cut the C: partition into two parts. You'd still use disk druid later to partition the Linux stuff the way you want.
(Of course mentioning buying a preloaded dual boot from one of the Linux vendors like Tuxtops, VA Linux, Penguin, or others is a bit late. I'm sure you're fairly fond of your 30 Gb system with the exception of wanting to set it up just a bit more.)
None of these repartitioners will move your MS Windows swap file, though. In its initial setup, MS Windows is as likely to put the swap file near the beginning of the drive as near the end. I recommend that you use the Control Panel's advanced system options to turn off the swap file, run your favorite defragmenter, and then make a nice solid backup of your Windows stuff before going onwards.
This isn't because Linux installs might be worse than you think (though there's always a chance) but because Windows is fragile enough on its own, and frankly, backups under any OS are such a pain that some people don't do them very often, or test that they're good when they do. (I can hardly imagine something more horrible than to have a problem, pat yourself on the back for being good enough to do regular backups, and discover that the last two weeks of them simply are all bad. Eek!) So now, while you're thinking:
"cos i have stuff that i dont want to lose."
is a better time than most!
(!) [Ben] Following on to Heather's advice, here's a slightly different perspective: I've used Partition Magic, as well as a number of other utilities to do "live partition" adjustment (i.e., partitions with data on them.) At some point, all of these, with one exception, have played merry hell with boot sectors, etc. - thus reinforcing Heather's point about doing a backup NOW. The exception has turned out to be cheap old FIPS; in fact, that's all I use these days.
FIPS does indeed force you to do a few things manually (such as defragmenting your original partition); I've come to think that I would rather do that than let PM or others of its ilk do some Mysterious Something in the background, leaving me without a hint of where to look if something does go wrong. Make sure to follow the FIPS instructions about backing up your original boot sector; again, I've never had it fail on me, but best to "have it and not need it, rather than need it and not have it."
In regard to the Windows swap file, the best way I've found to deal with it is by running the defrag, rebooting into DOS, and deleting the swapfile from the root directory. Windows will rebuild it, without even complaining, the next time you start it.

(?) I really tried a lot of FAQs before asking you, so could you go easy if you're planning to: a) flame me about RTFM'ing first.

(!) [Heather] Oboy, a chance to soapbox about doing documentation :) I promise, no flame!
If we should do this we generally are at least kind enough to say which F'ing M's to R. Which brings another thought to mind. FAQs and HOWTOs are okay, but they are sort of... dry. Maybe you could do an article for the Linux Gazette about your experience, and "make linux a little more fun" (our motto) for others who are doing the dual boot install their first time out.
It's really sad that the FAQs and HOWTOs aren't as useful to everyone as they could be :(
If one of them was pretty close but just plain wasn't quite right, or wasn't obvious until you had already gone through it, take a shot at improving it a little, and send your notes back to the maintainer. If he or she doesn't answer you in a long time (say, a month or two), let us know; maybe get together with some friends and see if you can become its new maintainer.
To be the maintainer of a Linux project doesn't always mean to write everything in it, just sort of to try and make sure it stays with the times. Linus himself doesn't write every little fragment of code in the kernel - though maybe he reads most of it :D - he maintains it, and keeps it from falling apart in confusion. This is really important. Documents need this too.
These things are not meant to be set in stone; they're written to be useful, and yeah, sometimes it happens that the fella who first wrote a given doc has moved on to other things. Meanwhile, folks like you join the Linux bandwagon every month and still need them, but Linux changes and so do the distros.
But, it's ok if you personally can't go for that. It's enough if we can find out what important HOWTOs could stand some improvement, since maybe it will get some more people working on them.

(?) b) ignoring me totally.

(!) [Heather] Sadly, we do get hundreds and hundreds of letters a month, and don't answer nearly that many. But hopefully what I described above helped. If it isn't enough, ask us in more detail - there's a whole Gang of us here, and some of us have more experience than others.
(!) [Ben] Well, OK - you get off scot-free this time, but if you ever ask another question, we'll lock you in a room with a crazed hamster and two dozen Pokemon toys on crack. :) The Answer Gang in general seems to have taken its mandate from Jim Dennis, the original AnswerGuy: give the best possible answers to questions of general interest, be a good information resource to the Linux community, and eschew flames - incoming or outgoing. <Grin> I like being part of it.

(?) btw really liked your answers in the column (well here's hoping some old fashioned flattery might do the trick ;-P)
thanks in advance...

(!) [Heather] Thanks, vinod. It's for people like you (and others out there who find their answer and never write in at all) that we do this.
(!) [Ben] If you scratch us behind the ears, do we not purr? :) Thanks, Vinod; I'm sure we all like hearing that our efforts are producing useful dividends. As the folks on old-time TV used to say, "Keep those letters and postcards coming!"

(?) PPP protocol stack modification

From David Wojik
Answered by: Heather Stern, Paul MacKerras

I need to modify the PPP daemon code to enable dynamic requests to come in and renegotiate link parameters. I also need to make it gather packet statistics. Do you know of any textbooks or other documentation that explain the structure of the PPP protocol stack implementation? The HowTos only explain how to use Linux PPP, not how to modify it.



(!) [Heather] Once the ppp link is established, it's just IP packets like the rest of your ethernet, so you should be able to get some statistics via ifconfig or other tools which study ethernet traffic, I'd think.
Still, renegotiating the link sounds interesting (I'm not sure I see what circumstances should cause it ... your modem renegotiating a speed is not at all the same thing). Anyways, if for some reason the source code of the PPP daemon itself isn't enough, your best bet would probably be to start a conversation with Paul Mackerras, the ppp maintainer for Linux. After all, if you really need this feature, there are likely to be others out there who need it too. I've cc'd Paul, so we'll see what he has to say.
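To illustrate Heather's point that an established ppp link is just another network interface: per-interface packet counts can be read straight out of /proc/net/dev, which is where ifconfig gets them. A sketch (the interface name "ppp0" and the kernel 2.x column layout are assumptions):

```shell
# Print RX/TX packet counts for one interface from /proc/net/dev.
# Assumes the usual kernel 2.x layout: after the "iface:" prefix,
# RX packets is the 2nd numeric field and TX packets the 10th.
packet_counts () {    # usage: packet_counts IFACE < /proc/net/dev
    awk -v iface="$1" '
        index($0, iface ":") {
            sub(/^[^:]*:/, "")   # strip the "iface:" prefix
            split($0, f)
            print "RX=" f[2] " TX=" f[10]
        }'
}

if [ -r /proc/net/dev ]; then
    packet_counts ppp0 < /proc/net/dev
fi
```

This gets you gross packet totals for free; counting individual LCP packet types, as David wants, would still mean instrumenting pppd itself.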

(?) Hi Heather,

Thanks for responding so promptly. My problem is that the product I'm working on uses Linux PPP to communicate between routers, not modems. My software needs to be able to do things dynamically, like take down the link, start an echo test, or change the MRU.

(!) [Heather] It sounds like you want to create a router-handler to do that part, one that looks like a serial interface as far as the ppp functions are concerned. Then these can remain separated off.

(?) The PPP protocol provides for dynamic renegotiation of link parameters but since Linux PPP was written primarily for modems connecting to ISPs, the PPP daemon is designed to take all of the parameters on the command line when it is invoked; after that it locks out any new input. My software also needs to count all of the different LCP packet types (Config-Ack, Config-Nak, etc.) and provide an interface to retrieve them.

(!) [Heather] And logically the router-handler would do these too? (Sorry, I'm not up on whether these are internal to the PPP protocols, they look like higher level stuff to me.)

(?) The PPP Protocol Stack implementation consists of thousands of lines of code. So what I am hoping to find is some high level documentation that will help me to determine how to modify only the parts I need. Even better would be to find some software that already does this as you suggest.

(!) [Heather] Hmm. Well, best of luck, and we'll see if Paul can point us to something good.

(?) Thanks again,

(!) [Paul] David,
As you say, the Linux pppd doesn't currently let you change option values and initiate a renegotiation (not without stopping pppd and starting a new one). It should however respond correctly if the peer initiates a renegotiation. I have some plans for having pppd create a socket which other processes can connect to and issue commands which would then mean that pppd could do what you want. I don't know when I'll get that done however as I haven't been able to spend much time on pppd lately. As for counting the different packet types, that wouldn't be at all hard (you're the first person that has asked for that, though).
-- Paul Mackerras, Senior Open Source Researcher, Linuxcare, Inc.
Linuxcare. Support for the revolution.

(!) What IS "The Internet" anyway?

Between Bryan Henderson and Mike Orr

In answering a question about the role of an ISP in making one's cable-connected computer vulnerable to hackers, Mike Orr makes a misstatement about the Internet that could keep people from getting the big picture of what the Internet is:

(!) The cableco or telco connects you to your ISP through some non-Internet means (cable or DSL to the cableco/telco central office, then ATM or Frame Relay or whatever to the ISP), and then the ISP takes it from there. Your ISP is your gateway to the Internet: no gateway, no Internet.

(!) [Bryan] The copper wires running from my apartment to the telephone company's central office are part of the Internet. Together with the lines that connect the central office to my ISP, this forms one link of the Internet.
The Internet is a huge web of links of all different kinds. T3, T1, Frame Relay, PPP over V.34 modem, etc.
The network Mike describes, the one all the ISPs hook up to (well, except the ones that hook up to bigger ISPs), is the Internet backbone, the center of the Internet. But I can browse a website without involving the Internet backbone at all (if the web server belongs to a fellow customer of my ISP), and I'm still using the Internet.
I would agree that you're not on the Internet if you don't have some path to the Internet backbone, but that path is part of the Internet.

(!) [Mike] It depends on how you define what the Internet "is". My definition is, if a link isn't communicating via TCP/IP, it's not Internet. (IP isn't called "Internet Protocol" for nothing.) This doesn't mean the link can't function as a bridge between Internet sites and thus hold the Internet together.

Internet hops can be seen by doing a traceroute to your favorite site. The listing doesn't show you what happens between the hops: maybe it's a directly-connected cable, maybe it's a hyperspace matter-transporter, or maybe it goes a hundred hops through another network like ATM or Frame Relay or the voice phone network. Traceroute doesn't show those hops because they're not TCP/IP--the packet is carried "somehow" and reconstructed on the other side before it reaches the next TCP/IP router, as if it were a direct cable connection.

Of course communicating with another user at your ISP is "Internet communication", provided the ISP is using TCP/IP on its internal network (as they all do nowadays, not counting a parallel token ring network at an ISP I used to work at, where the mailservers were on the token ring). And of course, the distinction is perhaps nitpicky for those who don't care what precisely the network does as long as it works.

(!) [Bryan] I'm with you there. But the link between my house and my ISP (which is quite ordinary) is TCP/IP. I have an IP address, my ISP's router has an IP address and we talk TCP/IP to each other. In the normal case that my frame's ultimate destination is not the router, the router forwards it, typically to some router in the backbone. Traceroute shows the hop between my house and the ISP.
All of this is indistinguishable from the way frames get from one place to another even in the heart of the Internet.
The layers underneath IP might differ, as you say, but you seem to be singling out protocols used in the home-ISP connection as not real TCP/IP, whereas the links between ISPs are real TCP/IP. There's no material difference between them. If not for the speed and cost disadvantage, the Internet backbone could be built on PPP over 28.8 modems and POTS lines.
One way we used to see that the home-ISP connection really _wasn't_ the Internet was AOL. You would talk AOL language to an AOL computer which was on the Internet and functioned as a gateway. The AOL computer had an IP address but the home computer did not. But now even AOL sets up an IP link between the AOL computer and the home computer. It's via a special AOL protocol that shares the phone line with non-IP AOL communications, but it's an IP link all the same and the home computer is part of the Internet whenever AOL is logged on.

(?) Classified Disk - Low-level Format

From Shane Welton
Answered by: Ben Okopnik, Heather Stern, Mike Orr

As you know, the world has gone wild for Linux, and the company I work for is no exception. We work with classified data, which can be somewhat of a hassle to deal with. The only means of formatting a hard disk we have is the analyze/format command that comes with Solaris. That method has been approved as a declassification method.

(!) [Ben] Actually, real low-level formats for IDE hard drives aren't user-accessible any more: they are done once, at the factory, and the only format available is a high-level one. This does not impact security much, since complete data erasure can be assured in other ways - such as multiple-pass overwrites (if I remember correctly, a 7-pass overwrite with garbage data is recognized as being secure by the US Government - but it's been a while since I've looked into it.)

(?) I was hoping you could tell me if Linux offers a very similar low-level format that would ensure complete data loss. I have assumed that "dd if=/dev/zero of=/dev/hda" would work, but I need to be positive. Thanks.

(!) [Ben] Linux offers something that is significantly more secure than an "all zeroes" or "fixed pattern" overwrite: it offers a high-quality "randomness source" that generates output based on device driver noise, suitable for one-time pads and other high-security applications. See the man page for "random" or "urandom" for more info.
Based on what you've been using so far, here's something that would be even more secure:
dd if=/dev/urandom of=/dev/hda
If you're concerned about spies with superconducting quantum-interference detectors <grin>, you can always add a "for" loop for govt.-level security:
for n in `seq 7`; do dd if=/dev/urandom of=/dev/hda; done
This would, of course, take significantly longer than a single overwrite.
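For completeness: GNU fileutils ships a utility called shred that bundles the same multi-pass random-overwrite idea into one command, and it works on plain files or (run as root) whole devices. A sketch, demonstrated on a throwaway file rather than a real disk - check that your distro's fileutils is recent enough to include it:

```shell
# shred does N overwrite passes of pseudorandom data in one command.
# Demonstrated on a scratch file, NOT /dev/hda!
f=$(mktemp)
printf 'classified bits\n' > "$f"
shred -n 7 "$f"     # seven passes, like the dd loop
rm -f "$f"
```

Note that shred draws on its own pseudorandom generator rather than /dev/urandom's entropy pool, so the trade-offs discussed below still apply.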
(!) [Mike] Wow, seven-level security in a simple shell script!
(!) [Ben] <Grin> *I've* always contended that melting down the hard drive and dumping it in the Marianas Trench would add just that extra touch of protection, but would they listen to me?...
(!) [Heather] Sorry, can't do that, makes the Marianas Trench too much of a national security risk. Someone could claim that our data has been left unprotected in international waters. ;P
Or, why security is a moving target: what is impossible one year is a mere matter of technology a few years or a decade later.
(!) [Heather] You wish.
(!) [Mike] My point being, that a one-line shell script can do the job of expensive "secure delete" programs.
(!) [Heather] /dev/urandom uses "real" randomness, that is, quanta from various activities in the hardware, and it can run out of available randomness. We call its saved bits "entropy" which makes for a great way to make your favorite physics major cough. "We used up all our entropy, but it came back in a few minutes." :)
(!) [Ben] Hey! If we could just find the "/dev/random" for the Universe...
(!) [Heather] When it's dry, I don't recall what happens - maybe you get a device wait on it; that would be okay. But if you get non-randomness after that (funny how busy the disk controller is), you might not really get what you wanted...
(!) [Ben] That's actually the difference between "random" and "urandom". "random" will block until it has more 'randomness' to give you, while "urandom" will use up the entire entropy pool, then give you either pseudorandomness or a repeat (I'm not sure which, actually), but will not block.
(!) [Ben] You're welcome to experiment - by which I mean, try it and study the results, check that they're what you want or not (confirm or refute the hypothesis).
I'm not clear from the original request whether they're trying to clear the main drive on a system, or some secondary data drive. If it's the main one, I'd definitely want to boot from Tom's rootboot (a RAM-based distro) so there'd be no chance of the system resisting getting scribbled upon, or failing to finish the job. Also, continuing to multitask (Tom's has 4 virtual consoles; you can read some doc files or something) will give /dev/urandom more noise sources to gather randomness from.
/dev/random would be faster - not as random, but at 7 times, it's (wince now, you know what I'm going to say) good enough for government work. MSwin doesn't have a /dev/urandom, it only has pseudorandomness. At least, last I looked.
(!) [Ben] Again, the other way around: "urandom" would be faster but marginally less secure (after 7 overwrites? The infinitesimal difference croggles my mind...), while "random" is slower but has the true /gelt/. Given that "/dev/hda" was used in the original example, Tom's RootBoot would be an excellent idea.
(!) [Mike] I thought /dev/urandom was the faster but less random one.
(!) [Heather] I just looked in the kernel documentation (/usr/src/linux/Documentation) and you are correct. /dev/random (character major 1 minor 8) is listed as nondeterministic, and /dev/urandom (character major 1 minor 9) is listed as faster and less secure.
Anyways our readers will have to decide for themselves whether they want 7 layers of pseudo-random, or if their system will be busy enough in different ways to get a nice batch of true randomness out of the "better" source.
(!) [Heather] I hear that the i810 motherboard has a randomness chip, but I don't know how it works, so I don't know how far I'd trust it for this sort of thing.
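For readers who want to try the multi-pass overwrite being discussed, here is a minimal shell sketch. For safety, TARGET is a scratch file standing in for the disk; in real use you would point it at the actual, unmounted device (the original example used "/dev/hda"), ideally after booting from something like Tom's rootboot.

```shell
# Sketch of a seven-pass pseudo-random overwrite.
# TARGET is a scratch file here for safety; substitute your real,
# unmounted disk device at your own risk.
TARGET=/tmp/wipe-demo.img
dd if=/dev/zero of="$TARGET" bs=1k count=64 2>/dev/null   # stand-in "disk"

for pass in 1 2 3 4 5 6 7; do
    # conv=notrunc keeps the file size fixed, as a real device would be
    dd if=/dev/urandom of="$TARGET" bs=1k count=64 conv=notrunc 2>/dev/null
    echo "pass $pass done"
done
```

Note that /dev/urandom never blocks, so each pass runs at full speed; with /dev/random the loop would stall whenever the entropy pool ran dry.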

(?) Thanks for the help and the humor, I shall pass the information on to our FSO in hopes that this will suffice. Again, thanks.

Shane M. Walton

(?) GPM is interfering with x...

From Dave
Answered By: Ben Okopnik

Hello Answerguy,
Since installing Debian a few days ago, I've been more than pleased with it. However, I have run into a wee problem which I was hoping you could help me with. Yesterday, I realised I hadn't installed GPM. I immediately got round to installing using apt (a lovely painless procedure when compared to RPM). All went great until I started to run X, at which point my mouse went insane - just flying round the desktop at its own free will every time as I so much as breathed on the hardware that operated it. I immediately killed GPM using the GPM -k command, but to no avail. Then I shut down X, and restarted it with no GPM running - the mouse refused to move at all. I then proceded to uninstall GPM, and yet the pointer remains motionless :(. I'm using a PS/2 mouse.. Any suggestions?

I thank you for your time

(!) Yep; it's a bad idea to kill or uninstall GPM.
In the Ages Long, Long ago (say, 3 years back), it used to be standard practice to configure two different ways to "talk" to the mouse: GPM for the console, and the mouse mechanism built into X. Nowadays, the folks that do the default configuration for X in most distributions seem to have caught on to the nifty little "-R <name>" switch in GPM. This makes GPM pass the mouse data onto a so-called "FIFO" (a "first in - first out" interface, like rolling tennis balls down a pipe) called "/dev/gpmdata" - which is where X gets _its_ mouse info. By removing GPM, you've removed the only thing that pays any attention to what the mouse is doing.
So, what's to do? Well, you could configure X to actually read the raw mouse device - "/dev/psaux" in most computers today, perhaps "/dev/ttyS0" if you have a serial mouse on your first serial port (or even "/dev/mouse", which is usually a symlink to the actual mouse device.) My suggestion is, though, that you do not - for the same reason that the distro folks don't do it that way. Instead, reinstall GPM - in theory, your "/etc/gpm.conf" should still be there, and if it isn't, it's easy enough to configure - and make sure that it uses that "-R" switch (hint: read the GPM man page.)
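In case it helps, here is a sketch of the two halves of that setup (the PS/2 device and protocols are assumptions - check your own hardware and the gpm man page): gpm reads the real mouse and repeats the data on "/dev/gpmdata" in the 'msc' protocol, which is what "-R" emits by default, and X reads that FIFO using the MouseSystems protocol.

```
# Run gpm as a repeater (PS/2 mouse assumed):
gpm -m /dev/psaux -t ps2 -R

# Matching XF86Config "Pointer" section (XFree86 3.x style):
Section "Pointer"
    Protocol "MouseSystems"
    Device   "/dev/gpmdata"
EndSection
```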
Once you've done all that, you'll now need to solve the "jumping mouse" problem. In my experience, that's generally caused by the mouse type being set to the wrong value (usually "PS/2" instead of "Microsoft".) Here's the easy way to do it: from a console, run "XF86Setup"; tell it to use your current configuration when prompted. Once X starts up and you get the "Welcome" screen, tab to the "Mouse" button and press "Enter". Read the presented info page carefully: since you'll be using the keyboard to set the options, you'll need to know which keys do what. If you forget, "Tab" will get you around.
Make sure that the "Mouse Device" is set to "/dev/gpmdata", and try the various mouse protocols - these are obviously dependent on your mouse type, but the most common ones I've seen have been PS/2 and Microsoft. Remember to use the "Apply" button liberally: the changes you set won't take effect until you do.
Once you have the right protocol, the mouse should move smoothly. I suggest that, unless you have a 3-button mouse, you set the "Emulate3Buttons" option - you'll need it to copy and paste in X! Also, play with the resolution option a bit - this will set the mouse response. I've seen high resolution "lock up" a mouse - but by now you know how to use that "Tab" key... :)
Once you're done, click "Done" - and you're ready to fly your X-fighter.

(?) Graphics Programming for Printing / Faxing

From G David Sword
Answered By: Ben Okopnik, Mike Orr

(?) I have a text file full of data, which I would like to turn into a bunch of fax documents for automated faxing. I could simply parse the file in perl, and produce straight text files for each fax.

Instead of this, I would like to be able to build up something which resembles a proper purchase order, or remittance, containing logos, boxes for addresses etc. Could I have an expert opinion (or six) on what would be the best method to use to achieve this - I have read a bit about LaTeX and groff, but I am not sure if they are the best solution or not.

Thanks in advance
G. David Sword

(!) [Ben] Since you have already implied that you're competent in Perl, why not stick with what you know? Parse the data file (which you will have to do anyway no matter what formatting you apply to it afterwards), then push it out as HTML - Perl is excellent for that. I can't imagine an order form so complex that it would require anything more than that.
As a broader scope issue, learning LaTeX or groff is, shall we say, Non-Trivial. In my !humble opinion, neither is worth doing just to accomplish a single task of the sort that you're describing. SGML, on the other hand, is an excellent "base" format that can be converted to just about anything else - DVI, HTML, Info, LaTeX, PostScript, PDF, RTF, Texinfo, troff-enhanced text, or plaintext (as well as all the formats that _those_ can be converted into.) You can learn enough to produce well-formatted documents in under an hour (no fancy boxes, though) - "/usr/share/doc/sgml-tools/guide.txt.gz" (part of the "sgml-tools" package) will easily get you up to speed. If you want the fancy boxes, etc., check out Tom Gordon's QWERTZ DTD <>, or the LinuxDoc DTD (based on QWERTZ.) I haven't played with either one to any great extent, but they're supposed to do mathematical formulae, tables, figures, etc.
(!) [Mike] Let me second this. If you need to get the reports out the door yesterday, stick with what you know. Get them to print in any readable text format now and then worry about enhancements later. The code you use to extract the fields and calculate the totals will still be useful later, whether you plug it into the new system directly or convert it into a new language.
TeX and troff both have a learning curve, and you have to balance this against how useful they will be to your present and future purposes. At best, they make a better temporary "output format" nowadays than a document storage format. SGML or XML is a much better storage format because it's more flexible, given the unpredictable needs of the future.
Actually, your "true" storage format will probably remain your flat file or a database, and then you'll just convert it to SGML or XML and then to whichever print format you want (via a generic SGML-to-something tool or your own home-grown tool).
I would look at XML for the long term, even if you don't use it right away. Perhaps someday you'll want to store your data itself in XML files rather than in the text files you're using. This does allow convenient editing via any text editor, and for new data, a program can create an empty XML structure and invoke an editor on it. And as time goes on, more and more programs will be able to interpret and write XML files. On the other hand, it is darn convenient to have that data in a database like MySQL for quick ad-hoc queries...
If you just want to learn a little bit of formatting for a simple document, troff is probably easier to learn than TeX.
You can always use the "HTML cop-out" one of my typesetting friends (Hi, johnl!) recommends when people ask him for an easy way to write a formatted resume: write it in HTML and then use Netscape's print function to print it to PostScript.

(?) networked machine goes to sleep

From Bob Glass to tag on Fri, 03 Nov 2000

Hi, everyone. I'm a newbie and need help with a linux machine that goes to sleep and has to be smacked sharply to wake it up. I'm trying to run a proxying service for user authentication for remote databases for my college. That's all the machine is used for. The Redhat installation is a custom, basically complete, installation of Redhat Linux 6.2. The machine is a 9-month old Gateway PIII with 128MB of RAM. The network adapter is an Intel Pro100+. My local area network is Novell 5.x and my institution has 4 IP segments. I have not configured my linux installation beyond defining what's needed to make the machine available on the local network (machine name, hard-assigned IP address, default gateway etc).
The proxying software is EZProxy from The software is a URL rewriting style service, which uses port 2048 to listen for requests for access to subscription services. When ezproxy receives a request, it compares the IP address from the requestor to an approved list. If the IP is not approved, it asks for authentication info. When authentication info is approved, ezproxy creates a virtual web server which has an approved IP and then 'brokers' all requests to and from remote services for the duration of the session. Ezproxy is loaded at bootup and runs as a background process. The software runs just wonderfully. I've tested it from both on and off-campus.
The problem I'm unable to deal with is: my proxy machine disappears from the network or 'goes to sleep.' At that point, I can't use a web browser to contact the proxy service machine, I can't telnet to the machine, and I can't ping the machine. However, if I go across the room to the proxy machine, open the web browser, go to a weblink (i.e., send packets out from the machine), then go back to my computer and test a link, ezproxy responds and all is well. However, usually in an hour or so, the proxy machine is unreachable again. Then much later or overnight, it will begin to respond again, usually after a 5-7 second delay.
I have turned off power management in the BIOS. I have stopped loading the apm daemon. I have tried a different network adapter, 3Com509b. I have even migrated away from another computer to the machine described above. And still the machine goes to sleep ...!?$#@
We are all stumped here, including people magnitudes more knowledgeable and sophisticated than I am. If you need any further clarification, please call or write back. Any help you could give would be most appreciated.
Bob Glass

(?) networked machine goes to sleep

From Bob Glass (with a bonus question from Dan Wilder)
Answered by: Ben Okopnik

(?) Hi, everyone. I'm a newbie and need help with a linux machine that goes to sleep and has to be smacked sharply to wake it up. I'm trying to run a proxying service for user authentication for remote databases for my college. That's all the machine is used for. The Redhat installation is a custom, basically complete, installation of Redhat Linux 6.2. The machine is a 9-month old Gateway PIII with 128MB of RAM. The network adapter is an Intel Pro100+. My local area network is Novell 5.x and my institution has 4 IP segments. I have not configured my linux installation beyond defining what's needed to make the machine available on the local network (machine name, hard-assigned IP address, default gateway etc).


The problem I'm unable to deal with is: my proxy machine disappears from the network or 'goes to sleep.' At that point, I can't use a web browser to contact the proxy service machine, I can't telnet to the machine, and I can't ping the machine. However, if I go across the room to the proxy machine, open the web browser, go to a weblink (i.e., send packets out from the machine), then go back to my computer and test a link, ezproxy responds and all is well. However, usually in an hour or so, the proxy machine is unreachable again. Then much later or overnight, it will begin to respond again, usually after a 5-7 second delay.

(!) [Ben] First, an easy temporary fix: figure out the minimum time between failures and subtract a couple of minutes; run a "cron" job or a backgrounded script that pings a remote IP every time that period elapses. As much as I hate "band-aid fixes", that should at least keep you up and running.
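For the record, such a crontab entry might look like this (the quarter-hour interval and the host name are placeholders; the redirection is what keeps cron from mailing the output to root):

```
# m   h  dom mon dow  command
*/15  *  *   *   *    ping -c 3 some.remote.host > /dev/null 2>&1
```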
Second: I've encountered a similar problem twice before. Once with sucky PPP in an older kernel (2.0.34, if I remember correctly), and once with a flaky network card on a Novell network (I've sworn off everything but two or three brands of cards since.) Perhaps what I learned from troubleshooting those may prove useful.
(!) [Dan] If you don't mind saying, which brands have you had the best luck with under Linux?
(!) [Ben] Intel EE Pro 10/100Bs have been faultless. I've used a stack of those to replace NE2K clones, and a number of problems - some of which I would have sworn were unrelated to hardware - went away. I can't say the same for the various 3Coms I've tried; whether something in the driver software or in the cards themselves (under Linux and Windows both), I could not get consistent performance out of them. My experience with LinkSys has been rather positive, although I've never had the chance to really beat up on them; perhaps this has to do with the quality of Donald Becker's driver, as they have been very friendly to the Linux community from the start (this was the reason I decided to try playing with them in the first place.)
For consistently high throughput, by the way, I have not found anything to beat the Intels.
(!) [Ben] Note that I'm not trying to give you The One True Solution here; this seems to be one of those problems that will require an iterative approach. The way I'd heard this put before is "when you don't understand the problem, do the part that you do understand, then look again at what's left."
A good rule of thumb is that if the problem is happening at regular intervals, it's software; if it's irregular, it's hardware. Not a solution, but something to keep in mind.

(?) I have turned off power management in the BIOS. I have stopped loading the apm daemon. I have tried a different network adapter, 3Com509b. I have even migrated away from another computer to the machine described above. And still the machine goes to sleep ...!?$#@

(!) [Ben] When it goes to sleep, have you tried looking at the running processes (i.e., "ps ax")? Does PPP, perhaps, die, and the proxy server restart it when you send out a request? Assuming that you have two interfaces (i.e., one NIC that talks to the LAN and another that sees the great big outside world), are both of them still up and running ("ifconfig" / "ifconfig -a")?
What happens if you set this machine up as a plain workstation? No proxy server, minimum network services, not used by anyone, perhaps booted from a floppy with an absolutely minimal Linux system - with perhaps another machine pinging it every so often to make sure it's still up? If this configuration works, then add the services (including the proxy server) back, a couple at a time, until something breaks.
This is known as the "strip-down" method of troubleshooting. If it works OK initially, then the problem is in the software (most likely, that is: I've seen NICs that work fine under a light load fall apart in heavy traffic.) If it fails, then the problem is in the hardware: NICs have always been ugly, devious little animals... although I must admit they've become a lot better recently; I can't say that I've had any problems with Intel Pros, and I've abused them unmercifully. :)
(A related question: When you moved from one machine to the other, did you happen to bring the NICs along? This could be important...)
(!) [Ben] My bad, there; I missed the part about the different NIC in the original request for help, even though I quoted it (blame it on sleep-deprivation...) - ignore all the stuff about the Evil NICs; it's certainly starting to sound like software.

(?) On Tue, Nov 07, 2000 at 11:37:46AM -0500, Bob Glass wrote: Dear Mr. Okopnik,

Thanks so much for your suggestion about creating a cron job which pings a network device. I did just that, and now the problem is 'solved.' (finding a source which detailed how to set up a cron job to run every 15 minutes _and_ not e-mail the output to the root account was a bit of a challenge!) It's a measure of what a newbie I am that this didn't occur to me on my own!

I've talked to many people about this problem and have come to the conclusion that there's a weird mismatch between hardware and software at both the machine and network level (routers, switches, NICs, Linux, Novell who knows!@#$ I wish Novell would write network clients for Linux and Solaris. I have a Solaris machine which very occasionally has this same problem.) Having tussled with this for over a month and been shown a workaround which both works and causes no problems, I'm satisfied. And as director of my library, I've got to move on to other tasks.

Again, many thanks.
Bob Glass

(!) [Ben] You're certainly welcome; I like being able to "pay forward" at least some of the huge debt I owe to the people who helped me in my own early struggles with Linux.
Pinging the machine is a workable solution, and I'm glad that it mitigated the problem for you - but let me make a suggestion. If you do not have the time to actually fix it now (or even in the foreseeable future), at least write down a good description of the problem and the workaround that you have used. The concept here is that of a shipboard "deficiency log" - any problems aboard a ship that cannot be immediately resolved go into this log, thus providing a single point of reference for anyone who is about to do any kind of work. ("I'll just remove this piece of wire that doesn't look like it belongs here... hey, why are we sinking???") That way, if you - or another director/admin/etc. - have to work on a related problem, you can quickly refresh yourself on exactly why that cron job is there. A comment in "crontab" that points to the "log" file would be a Good Thing.
As I understand it, Caldera's OpenLinux promises full Novell compatibility/connectivity. I can't comment on it personally, since I have no experience with OpenLinux, but it sounds rather promising - Ray Noorda is the ex-CEO of Novell, and Caldera is one of his companies.

(?) Web server/firewall hardware specifications, Apache and Zope

From John Hinsley
Answered by: Mike Orr

(?) I want a web site, but it looks like I'll have to put together my own server and put it on someone's server farm because:

(!) What do you mean by server farm? You're going to colocate your server at an ISP? (Meaning, put the server in the ISP's office so you have direct access to the ISP's network?)

(?) I need to run Zope and MySQL as well as Apache (or whatever) in order to be able to use both data generated pages via Zope and "legacy" CGI stuff (and it's far easier to find a Perl monger when you want one rather than a Python one!). If this seems remotely sensible, we're then faced with the hardware spec of this splendid server.

(!) I set up one Zope application at Linux Journal ( It coexists fine with our Python and Perl CGI scripts.
<ADVOCACY LANGUAGE="python"> While it may be easier to find a Perl monger than a Pythoneer, us Python types are becoming more common. And anybody who knows any programming language will find Python a breeze to snap up. The programming concepts are all the same, the syntax is very comprehensible, and the standard tutorial is excellent. </ADVOCACY>

(?) So, proposed spec:

Athlon 700, 3 x 20 GB IDE hard drives, 2 of which are software-RAIDed together and the third of which is for incremental backup. 256 MB of RAM (at least), one 100 Mbps NIC. OpenSSH as a mode for remote administration, but otherwise a lean kernel with an internal firewall.

Does this sound like a remotely viable spec?

(!) You didn't say how many hits per month you expect this site to receive. Our server has less capacity than that, and it runs Linux Journal + Linux Gazette + some small sites just fine. And yes, our servers are colocated at an ISP. You get much better bandwidth for the price by colocating.
I discussed your spec with our sysadmin Dan Wilder (who will probably chime in himself) and concluded:
** An Athlon 700 processor is way overkill for what you need. (Again, assuming this is an "ordinary" web server.) An AMD K6-2 or K6-3 running at 233 MHz should be fine (although you probably can't get a new one with less than 500 MHz nowadays...) Web servers are more I/O intensive than they are CPU intensive. Especially since they don't run GUIs, or if they do, the GUI is idle at the login screen most of the time! And if you really want the fastest chip available, an Athlon 700 is already "slow".
** Your most difficult task will be finding a motherboard which supports the Athlon 700 adequately. One strategy is to go to the overclocking web pages (search "overclocking" at and see which motherboards overclock best with your CPU. Not that you should overclock, especially on a production server! But if a motherboard performs OK overclocking your CPU, it should do an adequate job running your CPU at its proper speed.
** 256 MB RAM may or may not be adequate. Since memory is the cheapest way to increase performance at high server load, why not add more?
** 3 x 20 GB IDE (1 primary, 1 for RAID, 1 for backup) should be fine capacity-wise. Are you using hardware RAID or software RAID? Software RAID is pretty unreliable on IDE. Will you have easy access to the computer when you have to work on it? Or does the ISP have good support quality, and will they handle RAID problems for you? One thing we want to try (but haven't tested yet) are the 3Ware RAID I cards.
** IDE vs SCSI. SCSI may give better performance when multitasking. Of course, it's partly a religious issue how much that performance gain is. Given that a web server is by nature a disk-intensive application, SCSI is at least worth looking into. Of course, SCSI is also a pain to install and maintain because you have to make sure the cables are good quality, ensure they are properly terminated, etc.
** 100 Mbps Ethernet card. Are you sure your ISP's network is 100 Mbps? 10 Mbps should be fine. If your server saturates a 10 Mbps line, you're probably running video imaging applications and paying over US$7000/month for bandwidth. Make sure your Ethernet card operates well at 100 Mbps; many 10/100 Mbps "auto-switching" cards don't auto-switch that well.
** OpenSSH for remote admin. Sure.
The biggest FTP site in the world,, runs on an ordinary PC with FreeBSD. And the Zopistas at the Python conference in January said Zope can handle a million hits per day on an ordinary PC.
The biggest problem we have with our servers is that the Linux 2.2 kernel can have only 256 file descriptors open globally (=for all processes combined). This is obviously a limitation for a high-traffic site where every web request directly or indirectly opens 1-5 files. Our sysadmin periodically grumbles about the lateness of Linux 2.4 and wonders whether we should install a pre-release and cross our fingers, just to get around this limitation.
* There are several ways to integrate Zope with Apache. We chose the "proxy server" way because it allows Zope's web server (Zserver) to multitask. You run Apache at port 80, Zserver at 8080, and use Apache's ProxyPass directive to relay the request to Zserver and back. You have to do some tricky things with mod_rewrite and install a scary Zope product, but it works.
(Scary because it involves modifying the access rules for the entire Zope site, which can lock you out of your database if you're not careful, and because it makes Zope think your hostname/port is what Apache publishes them as, rather than what they really are, and this can also lock you out of your database if Apache isn't running or the rewrites or proxying aren't working. I refused to implement virtual hosts on our Zope server--because they also require playing with access rules--until a safer way comes along. Why not let Apache handle the virtual hosting since Apache is good at it? You can use a separate Zope folder for each virtual site, or even run a separate Zope instance for each.)
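A minimal sketch of that proxy arrangement in Apache's httpd.conf (the port numbers come from the text above; the mod_rewrite tricks and Zope-side configuration just described are omitted here):

```
# Apache listens on port 80 and relays requests to Zserver on 8080
ProxyPass        / http://localhost:8080/
ProxyPassReverse / http://localhost:8080/
```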
In the end, we decided not to go ahead with wide-scale deployment of Zope applications. This was because:
1. Adequate Zope documentation was missing. Most documentation was geared for the through-the-web DTML content manager rather than the application programmer. It was a matter of knowing a method to do X must exist, then scouring the docs to find the method name, then guessing what the arguments must be.
2. Zope wants to do everything in its own private world. But text files and CGI scripts can handle 3/4 of the job we need.
3. Zope's main feature--the ability to delegate sections of a web site to semi-trusted content managers who will write and maintain articles using the web interface--was not really what we needed. Our content managers know vi and know how to scp a file into place. They aren't keen on adjusting to a new interface--and having to upload/download files into Zope's database--when it provides little additional benefit for them.
We decided what we really needed was better CGI tools and an Active Server Pages type interface. So we're now deploying PHP applications, while eagerly waiting for Python's toolset to come up with an equivalent solution.
Disclaimers: yes, Zope has some projects in development which address these areas (a big documentation push, Mozilla-enhanced administration interface, WebDAV [when vi supports it] for editing and configuring via XML, built-in support for virtual hosts, a "distributed database" that an ordinary filesystem directory can be a part of), but these are more or less still in the experimental stages (although deployed by some sites). And yes, Python has Poor Man's Zope and Python Server Pages and mod_python, but these are still way in alpha stage and not as optimized or tested as PHP is. I also want to look into AOLserver's embedded Python feature we read about in October (, but have not had the chance to yet.
(!) [Mike again] I forgot to mention MySQL.
Our web server runs MySQL alongside Apache and Zope. MySQL is called by CGI applications as well as Zope methods.
It took a while to get MySQLdb and the ZMySQLDA (the Zope database adapter) installed, but they're both working fine now. I spent a couple weeks corresponding with the maintainer, who was very responsive to my bug reports and gave me several unreleased versions to try. These issues should all be resolved now.
One problem that remained was that ZMySQLDA would not return DateTime objects for Date/DateTime/Timestamp fields. Instead it returned a string, which made it inconvenient to manipulate the date in Zope. Part of the problem, of course, is that Zope uses a DateTime module with the same name as, but incompatible with, the superior one the rest of Python uses (mxDateTime). I finally coded around it and just had the SQL SELECT statement return a pre-formatted date string and separate month and year integers.

(!) Dear Mike,

thank you so much for a really comprehensive answer to my questions. Of course, it raises a few more questions for me, but I think the view is a bit clearer now.

Yes, I did mean colocation (co-location?). It's a term I have some problems with as it seems to suggest putting something in two places at one time.

We might be fortunate in that the funding for this is unlikely to come through before 2.4 about which I hear "around Christmas, early New Year". And even more so in that we could probably get away with hiring some server space for a month or two while we played around with the new server and tried to break it. Of course, this might well mean doing without much in the way of interactivity, let alone a database driven solution, but we can probably survive on static pages for a while and get some kind of income dribble going.

My inclination would be to go with software Raid and IDE (hence the attempt to break it!) but I will consider the other alternatives.

Ultimately whether we go with Zope (and in what context vis-a-vis Apache, or Zap) is going to have to depend on whether I can get it up and running to my satisfaction at home, but it's good to be reminded that PHP is a good alternative.

Once again, many thanks.

(?) 'neighbour table overflow'

From Alex Kitainik to tag on Wed, 01 Nov 2000

I've found 'neighbour table overflow' question in your gazette. Explanation for this case seems to be not complete although. The most nasty case can happen when there are two computers with the same name in the LAN. In this case neighbours' search enters endless loop and thus 'neighbour table overflow' can occur.
PS. I apologize for my English (it isn't my mother tongue...)
Regards -- Alex.
Alex Kitainik

(?) 'neighbour table overflow'

From Heather to tag on Wed, 1 Nov 2000


I've found 'neighbour table overflow' question in your gazette. Explanation for this case seems to be not complete although. The most nasty case can happen when there are two computers with the same name in the LAN. In this case neighbours' search enters endless loop and thus 'neighbour table overflow' can occur.

(!) Actually, the arp cache doesn't care about names - it cares about MAC addresses (those things that look like a set of colon-separated hex values in your ifconfig output). But it is a good point - some cards are dip-switch configurable, and ifconfig can change an interface's 'hw ether' address if you ask it to.
Between arpwatch and tcpdump it should be possible to seriously track down if you have some sort of "twins" problem of either type, though. At the higher levels of protocol, having machines with the same name can cause annoying problems (e.g. half the samba packets going to the wrong machine) so it's still something you want to prevent.

(?) PS. I apologize for my English (it isn't my mother tongue...)

Regards -- Alex.

(!) Your English is fine.

(?) Networking

From Ben Okopnik to tag on Sat, 11 Nov 2000

On Sat, Nov 11, 2000 at 01:36:36PM +0000, [Kopf] wrote: Hi,

I want to set up a home network, with 2 machines - workstation & server. The problem is, I want to configure Linux so that if I use the workstation, nothing is saved on the local drive, everything is kept on the server, so that if I shut down the workstation, and I go up to the server, I can work away there, without any difference of environments between the 2 boxes.

Another problem is, I'm a bit strapped for cash, so I don't want to buy a server & networking equipment until I know what I want to do is possible.



(!) Not all that hard to do; in fact, the terms that you've used - workstation and server - point to a solution.
In the Windows world, for example, those terms have come to mean "basic desktop vs. big, powerful machine." With Linux, the meanings come back to their original sense: specifically, a server is a program that provides a service (and in terms of hardware, the machine that runs that program, usually one that is set up for only - or mainly - that purpose.)
In this case, one of a number of possible solutions that spring to mind is NFS - or better yet, Coda ( Either one of these will let you mount a remote filesystem locally; Coda, at least in theory (I've read the docs, haven't had any practice with it) will allow disconnected operation and continuous operation even during partial network failure, as well as bandwidth adaptation (vs. NFS, which is terrible over slow links.) Coda also uses encryption and authentication, while NFS security is, shall we say, problematic at best.
Here is how it works in practice, at least for NFS: you run an NFS server on the machine that you want to export from - the one you referred to as the "server". I seem to remember that most distributions come with an NFS module already available, so kernel recompilation will probably not be necessary. Read the "NFS-HOWTO": it literally takes you step-by-step through the entire process, including in-depth troubleshooting tips. Once you've set everything up, export the "/home/kopf" directory (i.e., your home directory) and mount it under "/home/kopf" on your client machine. If you have the exported directory listed in your "/etc/fstab" and append "auto" to the options, you won't even have to do anything different to accommodate the setup: you simply turn the machine on, and write your documents, etc. Your home directory will "travel" with you wherever you go.
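Concretely, the two files involved might look something like this (the hostnames "server" and "workstation" are placeholders for your own):

```
# On the server, /etc/exports -- export the home directory read-write:
/home/kopf    workstation(rw)

# On the workstation, /etc/fstab -- mount it at boot ("auto"):
server:/home/kopf    /home/kopf    nfs    rw,auto    0 0
```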
Since you mention being strapped for cash, there's always another option: put together a minimal machine (say, a 486 or a low-end Pentium) that does nothing more than boot Linux. Telnet to your "big" machine, work there - run a remote X session, if you like. Other advantages of this setup include the need for only one modem (on your X/file server), the necessity of securing only a single machine, and, of course, the total cost. I would suggest spending a little of the money you save on memory and a decent video card, though - not that X is that resource-intensive, but snappy performance is nice to have. 32-64MB should be plenty.
I also suggest reading the "Thinclient-HOWTO", which explains how to do the NFS "complete system export" and the X-client/server setup.
Ben Okopnik

(?) Hi! Thanks for all the great info!

What you've said has really enlightened me - I had never thought of remote mounting and stuff like that. Just one question: if I were to mount "/" on the server as "/" on the workstation, how much diskspace would I need on the workstation to start up Linux until it mounts all the drives? Or would I use a bootdisk to do this, and have absolutely no partition for Linux on the workstation?

(!) You could indeed boot from a floppy, but it's a longish process, and floppies are rather unreliable; I would think that scrounging around can get you a small HD for just a few dollars. One of the things I really appreciate about Debian is that you can do a "base setup" - a complete working Linux system with networking and tons of system tools - in about 10 minutes, on about 20MB worth of drive space. I seem to remember that Slackware does much the same thing.
As to how much diskspace: you really don't need any. You could even set your machine up as a terminal (a bit more of a hassle, but it eliminates the need for even a floppy.) An HD is nice to have - as I've said, booting from one is much more convenient - but start with the assumption that it's a luxury, not a necessity. From there, everything you do is just fun add-ons.
The point to this is that there are almost infinite possibilities with Linux; given the tremendous flexibility and quality of its software, the answer to networking questions that start with "Can I..." is almost always going to be "Yes."

(?) Also - I know the risks associated with allowing "(everyone)" to mount "/" or even "/home/" on Linux... Would I be able to restrict this to certain users, or even certain computers on the network?

Thanks for all the help!


(!) "Would I be able to..." qualifies; the answer is "Yes". The "NFS-Howto" addresses those and many other security issues in detail.
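As a taste of what the HOWTO describes, access control lives in the server's /etc/exports; each entry names who may mount it and how (the hostname and network address below are hypothetical):

```
# Only the host "workstation" may mount /home/kopf, read-write:
/home/kopf    workstation(rw)

# The whole local subnet may mount /home read-only, with remote
# root mapped to an unprivileged user (root_squash):
/home         192.168.1.0/255.255.255.0(ro,root_squash)
```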

(?) Ben,

by the way, you talked about putting in about 32mb of Video memory into one of the computers to enhance X performance.. Which computer would I put it in, the X Server or Client?


(!) Perhaps I didn't make myself clear; I believe I'd mentioned using a decent video card and 32MB of system memory. In any case, that's what I was recommending. Not that X is that hungry, but graphics are always more intensive than console use - and given the cost/performance gain of adding memory, I would have a minimum of 32MB in both machines. As to the video card, you'd have to go far, far down-market to get something that was less than decent these days. A quick look at CNet has Diamond Stealth cards for US$67 and Nvidia Riva TNT2 AGPs for US$89, and these cards are up in the "excellent" range - a buck buys a lot of video bang these days!

(?) Ok, well, you've answered all questions I had!

Now 'tis time to make it all work.

Thanks again!


(!) DSL on Linux Information

Answer by Robert A. Uhl

I've some brief information on DSL for Linux.
Several phone companies do not officially support Linux since they do not have software to support our favoured platform. Fortunately I have found that it is still possible to configure a DSL bridge and have had some success therewith.
Let me note ahead of time that my bridge is a Cisco 675. Others may vary and may, indeed, not work.
The programme which you will use in place of the Windows HyperTerm or the Mac OS ZTerm (an excellent programme, BTW; I used it extensively back in the day) is screen, a wonderful bit of software which was included with my distribution.
To configure the bridge, connect the maintenance cable to the serial port. First you must su to root, or in some other way be able to access the appropriate serial port (usually /dev/ttyS0 or /dev/ttyS1). Then use the command
screen /dev/ttySx
to start screen. It will connect and you will see a prompt of some sort. You may now perform all the tasks your ISP or telco request, just as you would from HyperTerm or ZTerm.
One quits screen simply by typing control-a, then \. Control-a ? is used to get help.
Hope this is some use to some of the other poor saps experiencing DSL problems.
-- Robert Uhl
If I have pinged farther than others, it is because I routed upon the T3s of giants. --Greg Adams

... so mike asked ...

(?) Hmm, I have a Cisco something-or-other and it's been doing DSL for Linux for almost two years. The external modems are fine, because there's nothing OS-specific about them, you just configure them in a generic manner.

(!) It's the configuration that can be trouble. When I've called the telco, they've wanted to start a session to get various settings. 'Pon being informed that I'm using Linux, it has generally been `Terribly sorry, sir, but we don't support that.'

(?) There's two ways to configure it: via the special serial cable that came with it or via the regular Ethernet cable using telnet. I tried telnet first but I couldn't figure out the device's IP number (it's different for different models and that information was hard to get ahold of). So I plugged in the serial cable and used minicom as if it were an ordinary null-modem cable. That worked fine.

(!) I had a deal of difficulty with minicom. Screen seems to be doing a right fine job, at least for the moment. Figured I'd let others know.
Enjoy your magazine.
-- Robert Uhl

(!) [Mike] I asked Robert why he didn't just use minicom or kermit which are designed for serial communication, rather than screen, which is meant for multitasking an interactive session. I said that I had used minicom on my Cisco 675 bridge at home.

Guess what. I had to configure a router at work last week. On DSL with a Cisco 675 bridge. Minicom didn't work. Screen did. And I never would have thought of using screen if it hadn't been for this TAG thread.

I pulled out the serial cable inside the box and reseated it before using screen, just in case it was loose, so perhaps it wasn't minicom's fault. But at least now I have more alternatives to try.

-- Mike Orr

(!) sticky notes

From Roy to gazette on Wed, 1 Nov 2000

Want to set a sticky note reminder on your screen? Create the tcl/tk script "memo"
button .b -textvariable argv -command exit
pack .b
and call it with
sh -c 'memo remember opera tickets for dinner date &'
Want to make a larger investment in script typing? Then make "memo" look like this:
if {[lindex $argv 0] == "-"} {
    set argv [lrange $argv 1 end]
    exec echo [exec date "+%x %H:%M"] $argv >>$env(HOME)/.memo
}
button .b -textvariable argv -command exit
.b config -fg black -bg yellow -wraplength 6i -justify left
.b config -activebackground yellow
.b config -activeforeground black
pack .b
and the memo will appear black on yellow. Also, if the first argument to memo is a dash, the memo will be logged in the .memo file. The simplicity of the script precludes funny characters in the message, as the shell will want to act on them.
In either case, left-click the button and the memo disappears.
Precede it with a DISPLAY variable,
DISPLAY=someterm:0 sh -c 'memo your time card is due &'
and the note will pop up on another display.

More 2¢ Tips!

Send Linux Tips and Tricks to

2C tip

Sat, 25 Nov 2000 14:13:28 +0100 (CET)
From: Richard Torkar (
I read your 2C tip regarding finding the rpm package a certain file belongs to in the latest edition of
I have no idea if this is what you mean instead of your script but here it goes:
# rpm -qf /usr/bin/afm2tfm
So the file /usr/bin/afm2tfm belongs to tetex-afm-1.0.7-7.
Is this what you meant?
Richard Torkar

on creating tty for virtual consoles...

Mon, 6 Nov 2000 15:43:04 -0800 (PST)
Carlos Torres (

Hi, I recently downloaded smalllinux (kernel 2.2.0) and have had slight trouble getting tiny X running. It tries to load onto /dev/tty5, but smalllinux only has four ttys for VTs. How do you use mknod? I can't make out what major and minor numbers are, and they are required to make the device.
Anyway hope you can help me...

The major number stays the same for all the virtual consoles, and the minor number increases by one for each. You should be able to see the pattern if you do
ls -al /dev/tty[0-9]
Under bash, the usual Linux shell, those brackets indicate that any single character from 0 through 9 can appear in that position.

Well, I found out what to do by probing the LDP, but thanx for reading these emails.
I believe it was..

# mknod -m 666 /dev/tty5 c 4 5

I think this is what I need for an X server anyway! ;-} Well, let me know if I'm on the right track!
Thanx VlaAd

Yep, you're right on target! -- Heather

Linux and Lexmark Printers

Wed, 1 Nov 2000 10:03:31 -0800
Allen Tate ( asking...
Tip From: Dan Wilder

Has anyone out there had any luck setting up Lexmark Printers on any Linux distribution?

We've been using networked Lexmarks for years at LJ.
The key (at least for the older ones we use) is configuring it as a network printer using port 9100, with something like:
for the lp line in your printcap entry.
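A printcap entry of that shape (the hostname and spool directory are hypothetical; the "host%port" form is the LPRng convention for a raw network printer) looks roughly like:

```
lp|lexmark:\
        :lp=printerhost%9100:\
        :sd=/var/spool/lpd/lexmark:\
        :mx#0:sh:
```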
We've a newer one on order, will post if the key is something very different.
--- Dan

Serial consoles (issue #59)

Sun, 5 Nov 2000 19:59:11 -0700
From: Michal Jaegermann (tag)
In your answer to Joseph Annino you forgot to mention two details. One - you will need to run some getty program on the serial console you want to log in on, so an entry in /etc/inittab will be needed. As modem controls are not used, 'mingetty' should work fine, but most anything ('agetty', 'getty_ps', 'mgetty', ...) will also do.
Also, Joseph wants to use that console for administration, hence, presumably, he wants to log in there as 'root'. If this is indeed the case, then an entry for the console port in /etc/securetty will also be needed, or logging in will run into problems.
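Putting both details together, the additions might look like this (ttyS0, the speed, and the terminal type are illustrative):

```
# /etc/inittab - respawn a getty on the first serial port
# (agetty's -L flag means a local line, no carrier-detect):
S0:2345:respawn:/sbin/agetty -L 38400 ttyS0 vt100

# /etc/securetty - add the port so root may log in there:
ttyS0
```

After editing /etc/inittab, "telinit q" tells init to re-read it.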
I also have a comment on Richard N. Turner's entry about cron jobs. I would be much more careful about sourcing things like /home/mydir/.bash_profile there. Cron jobs run unwatched and possibly with root privileges. Unless you can guarantee that something nasty will not show up in a sourced file now or at any time in the future, you can be in for a rude surprise. Setting a precisely controlled environment in a script meant to be run from cron is a much more appealing option. Depending on the whole computing setup, such arrangements with sourcing can be OK, although I prefer to err here on the side of caution; readers should at least be aware of a potentially big security hole.
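A minimal sketch of the controlled-environment approach being described (the paths and the echoed command are only placeholders for a real job):

```shell
#!/bin/sh
# Wrapper meant to be run from crontab. Instead of sourcing a
# user's .bash_profile, hand the job a clean, explicit environment:
env -i PATH=/bin:/usr/bin HOME=/var/empty \
    /bin/sh -c 'echo "PATH is $PATH"'
# A real crontab line would call this wrapper, replacing the echo
# with the actual command to run.
```

Run by hand, it prints exactly the PATH it was given; nothing from the calling user's environment leaks through.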

Gantt charts / Project

Mon, 06 Nov 2000 08:50:40 +0000
Clive de Salis ( asked...
Tip From: Ben Okopnik

On Mon, Nov 06, 2000 at 08:50:40AM +0000, Clive de Salis wrote: Dear All

I've converted my office in Birmingham in the UK to run entirely on Linux using Slackware and have successfully run the business for nearly 3 years now without my customers realising that I don't run Windows or use Microsoft Office.... Which just shows that it can be done.

I'm getting ready to convert the Monmouth office to the same using the Mandrake distribution. There is, however, one software application that I can't find for Linux ... and that is the equivalent to Microsoft Project. Do you know of a Gantt-chart-based project planning tool for Linux?

Good to hear yet another "Linux in business" success story! Project management software for Linux is not a huge field, although there seem to be at least several groups - some with rather serious money behind them - working on remedying the lack. There are several pieces of software already in existence that use Gantt charts; check out
for a good start on software in the Call Center, Bug Tracking and Project Management categories.
Good luck in your endeavours,

"Linux Gazette...making Linux just a little more fun!"

Super Computing 2000
Yet Another Super-Computing Conference

By Stephen Adler

I just love long stupid acronyms... -SA

Off-site links:

The conference

There is a strange background roar which permeates airplanes when they are in full flight. That's the roar which I hear, and feel, right now. I'm on American flight 736, on my way back to LaGuardia from Dallas, where Super Computing 2000 just finished. It's been over a year since I last wrote up a conference or talk, or composed an editorial for the Internet, largely because I've been really busy with my day job at Brookhaven Lab. The Relativistic Heavy Ion Collider (or RHIC, see came on line over the summer, and that basically meant 80-hour weeks starting this past February. After our first collisions were recorded and a long summer of data taking, I got rather burnt out. Well, I think I've recovered now... and after attending Super Computing 2000, I hope to be able to get something written up about it. At least the next 3 hours I'll spend on this airplane will give me a chance to get started.

I attended last year's Super Computing conference in Portland, Oregon (Nader country), which was quite delightful. The bit which I liked most was the presence of industry, academia and the national labs, all showing off what kind of super computing they are selling or what they are doing with their super computer toys. Both the technical sessions and the show floor were entertaining in the content they presented. This was pretty much the same with this year's conference. Unfortunately, there was some overlap between what I saw last year and this year, so the "Wow, this is soooo coool" feeling wasn't quite there the way it was last year. Be that as it may, there was some cool stuff which I want to pass on to those of you who don't have the money, time or both to get yourselves down to Dallas to attend the conference. (The conference fee is $700, which is quite on the steep side for the kind of conferences I attend. Although you end up paying at least twice that to go to the full COMDEX conference, and you get about 10 times more information out of the Super Computing conference than out of the more mainstream COMDEX shows.)

The show lasted 4 days, and I don't want to give you a fully detailed account of what I saw; you'd quickly be off surfing to other sites if I did. What I'll do is focus on what I considered to be the highlights of the conference. And as you all know, I'm a bit of a Linux/Open Source/Free Software enthusiast (maybe I should rephrase that "Free Software/Open Source/Linux enthusiast (FSOSLE)", to give proper credit to Richard Stallman), and thus I'll tend to concentrate on those topics.

Linux was everywhere. These guys had a nice stuffed Tux keeping an eye on things.
First of all, the bit which I found most exciting was that Linux showed a major presence at the show. This is a direct fallout of Donald Becker's work on the Beowulf clustering software he and others helped create. Super computers on the cheap were quite pervasive, and most of the major computer vendors had some kind of Linux box or other. These include SGI, IBM, Compaq, and a boat load of smaller vendors. Noticeably absent was VA Linux, although there was at least one rack of VA Linux PC's on the show floor. There were several Open Source oriented talks as well. Most notable was Dr. Sterling's lecture on COTS (Commercial Off the Shelf) super computers (I believe he's Donald Becker's professor, and seems to be the guy who gives the COTS Super Computer talks), and the Open Source panel discussion which occurred on the last day of the meeting. More on that later. It was really neat to walk up and down the show floor and see all the stuffed penguin dolls sitting on top of booths, nestled between racks of 1U 1GHz AMD PC's, and all the GNOME and KDE desktops which adorned many flat panel LCD consoles at the conference. One notable exception was Sun. No Linux there. I looked around their booth but didn't find any. But I did find some console displays boasting a nice movie animation of F14's or something of that sort. The application used to display the movie was Xanim, though. This wasn't the conference for them to be showing off Star Office, so I didn't bother trying to get one of the Sun guys to demo it.

The opening day of the conference was Tuesday, Nov 7th. Yup, voting day. The keynote speaker was Steven Wallach, the guy who helped design the Data General 32-bit Eclipse MV superminicomputer, and is now with the Center Point venture capital firm. His talk was titled "Petaflops in the Year 2009" and dealt with how he envisions the Petaflop computers of the future. The main point of his talk was that the basic core of the future Petaflop computer is being built right now to service the backbone of the Internet. I must say, Steve Wallach did convince me of his arguments. The basic problem right now is that the chip manufacturers or CPU designers or whatever you call these folk are starting to reach physical boundaries imposed by Mother Nature and her laws of physics which govern our universe. Moore's law only goes so far, and there is a barrier which is the speed of light. It could be that some time in the future, one will be able to use tachyons in some kind of semiconductor to operate transistors which effectively switch faster than the speed of light. (Think about it: with one of them in your PC, when you surf the Internet, you don't just click from one hyperlink to the next, you get to surf through space-time. Click here to go to a chat room 2 days from now... Click here to see the price of your stock 10 minutes from now...) Because of these limitations, the bottlenecks which are forming are the ones which limit the speed at which you can get data into and out of your CPU. This is where the work being done by Lucent and others comes into play. Lucent is trying to get terabytes of data per second through a routing node. One has to do this by being able to guide the different wavelengths of light from one input port on the router to an output port on the router without slowing down the data rate. This architecture of data in, data out at very high speeds is basically the inner core of the processor design needed for future super computer systems.
Remember, super computer systems will never be made up of one big, really fast CPU. They will be made up of many small nodes, interconnected through some kind of data mesh. Therefore Steve Wallach emphasizes that in order to break the last bottleneck in current CPU designs, one needs to push the data around between CPU's optically and not try to push it in and out electrically. The guys building the backbone of the Internet are doing this, and thus the guys building the next-generation CPU's should be talking to the guys over at Lucent. By the way, Steve mumbled something about how Linux would be running on this Petaflop computer. Look for the announcement on Slashdot sometime in the year 2009....

The next session after the keynote which I attended was the "Who wants to be a Billionaire" panel discussion. That's a stupid question, of course I want to be a billionaire. The panel discussion was headed up by the same guy who gave the key note, Steve Wallach. There were three guys on the panel. They were Scott Grout of Chorum, Matt Blanton of Startech and Jackie Kimzey of Sevin Rosen Funds. Scott Grout read his introductory comments and didn't say much else. Basically, he worked for some telecommunications company which went through the venture capital funding round and got itself established. Matt Blanton and Jackie Kimzey gave their remarks which again, I can't quite remember the details of. I'm too lazy to check my notes right now.

Photographed above is Steve Wallach, the Convex Computer guy, who also gave the key note address for SC2000. His day job is to review the project you submitted to Center Point Ventures, grill you on it and watch you sweat.
The bit which I want to impress upon you is the venture capital feeling I got from the panel discussion. This feeling is a bit hard to describe, but I'm going to give it a try. The talk started off basically with a bunch of comments from Steve and his panel. They wanted to get across to the audience what the venture capital process is all about. "I got this great idea you see, and I want to run with it. What do I do?" is the question they were trying to answer. Their answer was something like this. Write up your idea, get in touch with your local incubator, get a prototype going, then go to your local VC, show off your prototype to him, he'll give you money, and off you go, onto your IPO. After you IPO, you build up your product a bit more, and finally you get bought out by a major company like Microsoft, Red Hat or Cisco. (They never mentioned Red Hat, but hey, they have been going around buying up companies right and left...) I was sitting there just absorbing this information, typing into my laptop as much of this as I could. (Yes, I really do want to be a billionaire.) Then it started to hit me. These guys are the real thing. Jackie Kimzey of Sevin Rosen Funds has just raised 850 million bucks of venture capital to fund companies which could be the next Yahoo or AOL. And they have to spend all that money. The investors didn't give him the money so that he could buy 30-year bonds with it. These guys kept talking about the 16th floor. It dawned on me that the "16th floor" is a floor in some high-rise building downtown where the Dallas high tech VC boys hang out, Steve Wallach and Jackie Kimzey being two of these high tech VC boys. (Actually, Steve Wallach is from Brooklyn and his accent doesn't quite fit in with the Texas good ol' boy drawl, but you get my point...) And as with all things Texas, they got Dollars (yup, capital D Dollars, as in capital D Dallas) to spend (ehhmmm, I meant to say invest...). Jackie said flat out, don't bother to cold call them.
"We prefer referrals, like from our pal Matt of Startech". They review hundreds of proposals, most of which I'm sure end up in the trash. And if you are lucky enough to be considered for 1st round funding, you are rewarded with a review of your proposal. In other words, get ready to be grilled by the boys on the 16th floor. I'm sure they grill you, and if they don't see you sweat, they'll grill you even harder. I think that VC feeling hit me when Steve Wallach said something to the effect that they hire Nobel Laureates to come in and review your proposal. That, coupled with 850 million dollars ready to invest, made me realize that these guys mean business. They kept saying this throughout the presentation. It just took some time for me to really get a feeling for what they were saying. "Look, don't put a management team together made up of your cousin the accountant as the CFO, your best friend the hacker as the CTO, etc." said Steve W. The first thing they look at is the quality of your team. "We want to make you rich, and in doing so we will make ourselves rich." This was another Steve Wallach statement. This is business, high tech, high stakes business, sort of like a really bad poker game, with 850 million in the pot. Every hand is taken seriously. Think "big swinging", as in the boys from Goldman Sachs.

This concept is so foreign to government research - at least in the government laboratory environment which I work in. Our time is basically worthless and is seldom taken into account when we work on projects. A statement like "the quality of your team" rings rather hollow around here. I think we, as scientists, tend to devalue our time because of the tight job market for positions where one can freely do research with particle colliders. Thus you put up with the fact that Dr. So-and-so, who just received her Ph.D. in High Energy Physics, has to break out the RJ45 clamp and start cutting CAT-5 cable so that she can wire up the crate controllers for her experiment. That along with having to install and maintain her Linux cluster so that she can store and analyze her data. And forget that trip to the XYZ conference; overtime has to be paid to the electricians because if not, her experiment wouldn't be ready on day one when the accelerator turns on and delivers her beam. And believe me, the unionized electricians only work on her experiment if she pays overtime. And then comes the kicker. "Sorry, you spent too much time developing software and hardware and not enough time doing science. Look at your publication record, it stinks! No tenure for you. Go find a job somewhere else...." Don't think I'm kidding; this is why it's so hard to attract new talent into High Energy and Nuclear physics.

Then we have the other side of the spectrum, the VC side. Some president of some start-up at this panel discussion got up and recounted an anecdote regarding a board meeting he attended. He told the board that he managed to save 300,000 bucks or so because he was able to postpone hiring some people. He was expecting some congratulatory remarks; instead, he was scolded. "You have a plan to execute, therefore spend the $300K and execute the plan!" he was told. What I took away from this panel discussion is that when serious money is on the line (850 million bucks is serious money) you don't f..k around. You make sure the plan is right, hire the best of the best to verify this (i.e. hire Nobel laureates to review your circuit design and software flow charts), and make sure the guys to whom you are giving the money can stand up to a brutal review. If they can't, the VC's will be throwing their money away on that proposal. The upside of all this is that if you do get your 1st round of funding, then they will be with you to make sure your plan goes right. And don't expect to retain full ownership of your company; their commission is measured as a percentage ownership of the company you are building. If you don't like it, go down to your local savings and loan and pitch your idea to them; these VC's have another 300 business plans to choose from. You know, this may sound crazy, but after what I've been put through working for the government, I would give my left nut to work in that kind of environment....

Dr. Monty Denneau of IBM. He's helping design IBM's Petaflopper, called Blue Gene. And yes, he did so standing right there with no visual aids. He just stood there and talked about it for 45 minutes. No notes, no PowerPoint, no nut'in...
The next talk worth mentioning was given by Monty Denneau of IBM on their Blue Gene machine. Blue Gene is IBM's next Petaflop computer. Its cost? Monty gave the figure of $100 Million bucks. "All big IBM computers cost $100 million." It's sort of the canonical cost for the next generation computer IBM builds. There are a few key concepts of Blue Gene which I picked up on.

First of all, the Blue Gene research team started out by designing a RISC instruction set from scratch. They wanted to use something like the PPC but its instruction set just got too large. This was due to too many people coming and going from the PPC design team and all instructions had to be kept in order to keep backward compatibility. Thus the "typical" RISC architecture had 250 to 300 instructions of which only 50 were really used and some were never used. There were even other instructions, that if used, would break the performance of the chip, and so the instruction had to be turned off by the compiler. After that explanation, it was clear to me that it was a good idea to start the CPU design by tossing out the instruction set and starting from scratch.

The next key concept was to build many small CPU's on one fabrication die. The idea being that one "CPU chip" would have hundreds of CPU's, with floating point units scattered throughout the die along with secondary cache units. Coupled with this idea was the concept that if one of the CPU's didn't work, the OS would detect this and not use it. Therefore, if you have a large die of silicon from which you're going to build your "processor chip", a defect in the fabrication in the sequencer unit or instruction set memory or whatever would not cause you to throw out the chip. This is a big problem with today's current CPU manufacturing: 100 microns of bad silicon in the wrong spot and you have to throw out the CPU. Monty couldn't give exact figures, but he said that because of this ability to have the OS turn off just the bad CPUs, the production yields went from very low to very high. This is very much the same concept as bad blocks on a hard disk drive.

The next concept was that of a water cooled system. The amount of air flow needed to cool a Petaflop machine would require a couple of jet turbo engines providing hurricane equivalent wind forces. Therefore, one had to resort to using water to cool the system. As it turns out, there was great resistance to this idea, but Monty prevailed.

The final idea which I remember was how they were going to connect this Petaflop machine together. The idea was to build cubes of processors and then connect the cubes together with some kind of cabling. The problem being that there was a lot of cable to hook up and it needed to be done right. OSHA got in the way, because if you build something which humans must traverse, like a hallway or a conduit under IBM's Blue Gene, you need to provide space for a guy 7 feet tall to be able to run out in case there is a fire. No getting around this requirement. So they built Blue Gene over a special floor which was broken up into a grid. Each grid element could be raised and lowered. So you have to imagine this. A large floor area where you see hundreds of CPU cubes. The operator has to check the connector on one of the CPU cubes. He goes and clicks on some Java thing or other on his console, and grid point XY raises up to arm level. He then goes out there, checks the cable, and when done, goes back and clicks on his Java interface and the CPU cube is lowered back into the grid. Definitely Space Odyssey 2001 stuff.

I believe IBM is on the right track. With this design, they will get their Petaflop computer at about $100 million, give or take a factor of 2 or 3. But what really impressed me about Monty's talk is that he didn't bother to prepare a PowerPoint presentation like the rest of the speakers did. He just got out there in front of the audience and started talking away. I'm not sure if this is a good or a bad thing, but it was impressive to say the least. Sort of like watching a no-hitter.

The speaker after Monty was Keiji Tani, speaking about the 40 Teraflop machine which Japan is building. The bit which struck me about his talk was that for about $500 million bucks, Japan is building a 40 Teraflopper which will be housed in a building the size of a large basketball stadium and will have about 20,000 kilometers of cabling. The speaker before him described a Petaflopper which will be housed in a large auditorium for about $100 million bucks. The two will be ready in about 2 or 3 years. You do the math, but if I were reviewing the Japanese project, it would be hard for me to justify the cost..... My guess is that the Japanese need to build this machine to show the rest of the world that they are players in the HPC game. Just like the US spends hundreds of millions on their Giga- and Tera-floppers in the national labs scattered about the country. Forget about what's housed in the NSA research labs.

Pictured above is Dr. David Anderson, director of the seti@home project. He was able to harvest 20 Teraflops of computing power from the Internet to help analyze SETI data recorded at Arecibo, for about $800K. Now that's creativity.
The next speaker worth mentioning was David Anderson, the director of the seti@home project. (You can view photos of his presentation here.) I'm a real fan of this project because it shows what can be done with creativity. Actually, one can argue that if it weren't for the financial constraints which the project underwent, the seti@home concept wouldn't have been created. "Necessity is the mother of invention" is the rule which can be seen at work in this project. Basically, David gave an overview of the SETI project. SETI funding dried up in 1993. In order to continue their research efforts they required two things. One was to find a way to keep taking data, and the other was to find the computing resources to find the SETI signal in the data they collected. This is a compute-intensive task if there ever was one. The first job, that of recording data, was solved by becoming a parasitic experiment at Arecibo, the radio telescope in Puerto Rico. The way they did this is the following.

The telescope is basically fixed; the ability to point it is restricted to the positioning of the receiver which sits above the dish of the observatory, and to the sweeping of space as the earth turns. Therefore there is a rather elaborate mechanism to move the receiver around above the dish, which gives Arecibo its pointing ability. In order to make this movement of the receiver work, there is a counterbalance which is needed to stabilize the main receiver. So the SETI people were able to install a second receiver on the counterbalance. This made them the parasitic experiment. Those researchers who paid for prime time on the facility got to point the telescope in whatever direction they wished; the SETI people would then pick up whatever signal they could get from wherever their secondary receiver ended up pointing. Sort of like if the guys paying for time on the observatory were looking left, SETI was forced to look right. In the end, this situation worked out OK for them. The SETI researchers were able to scan the sky in a random walk, determined by the other experiments running at the time. David explained that they effectively covered the sky in about 6 months' time.

With that they solved their data collection problem. Next they needed to solve their number crunching problem, and for that they thought up the seti@home project. What really surprised them was the willingness of people to donate their idle computer time to the project. They were hoping for about 100K people to help out. When they posted their announcement to the Internet, they got over 400K people signing up to their mailing list. When they went online for the first time, they got over 200K users requesting data to be analyzed. They were so overwhelmed by the system overload of having 200K users requesting data to analyze that it took them 8 months to e-mail out an announcement to the original 400K who had signed up to their e-mail list. Basically they were totally swamped and had to work very hard to deal with their success. David talked a bit about the setup of their system, which reminded me of the many data acquisition talks I've given and heard. One of the interesting details of the seti@home project I found was that they got a lot of funding from private, non-science institutions like Paramount Pictures. If I remember correctly, of the $700K they got in funding, $200K was from these private sources. Paramount was interested in this project because they wanted to get Captain Picard to throw some big power switch which would start the whole experiment. That never happened, but the check did clear. Sun Microsystems donated lots of hardware. David was very grateful for this contribution and spent some time plugging them.

They had problems with making sure the data which was returned was actually processed by their client code. Since seti@home has been made a bit of a game with respect to processing the data, a lot of people have faked results so that they can climb up the "who has analyzed the most data" ladder. He also spoke about the Open Source controversy. As it turned out, there were some people "out there" in the Open Source community who were very angry that the client code was not open sourced. At some point, there were some web sites which wanted to boycott the project because of this, and others wanted to launch some kind of attack against the server unless the client code was open sourced. I was quite ashamed to hear this. He went on to talk about how some users were also angry that the client code was not optimized for the particular hardware it was running on. For example, AMD CPUs have some instructions which help speed up FFTs, as do the Intel Pentiums with the MMX instructions. In order to make the code portable, the seti@home guys didn't pay much attention to these issues. So there were some users out there who disassembled the client code, found the portions which did the FFT, and replaced that section of the code with their own FFT routines, optimized for their particular CPU's instruction set. Now that is hacking. After the talk, I asked David if he realized that if he open sourced his client code, people would have provided the optimization code for him instead of being forced to disassemble it. He told me that he worried about the integrity of the code and that he couldn't trust the scientific code put into the client. I understood where he was coming from. If I were to do something similar, say start a phenix@home project, then I would have to provide a way of verifying the results of the computations every time someone added in some code.
This verification process could break the @home usefulness of the project. Also, you would have to somehow guarantee that the code, once compiled, was really that same code and not some rogue client which someone put together in order to fake fast data processing times. As it stands now, seti@home has accumulated about 450,000 years of computing time, or an instantaneous computing rate of 20 teraflops. This is half the size of the computer the Japanese are building, and it cost the SETI research team about $40K per teraflop instead of the $12,500K per teraflop at which the Japanese are building their HPC system. Also, half of the data traffic out of the Berkeley domain belongs to the seti@home project. That's a cool factoid.
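A hypothetical phenix@home would need exactly the kind of verification David was worried about. The usual defense against faked results in volunteer computing is redundancy: hand the same work unit to several independent clients and only accept an answer that a quorum of them agree on. Here is a minimal sketch of that idea; the function and the 2-of-3 quorum are my own illustration, not seti@home's actual protocol:

```python
from collections import Counter

def accept_result(results, quorum=2):
    """Accept a work unit's result only if at least `quorum` independent
    clients returned the same answer; otherwise return None so the unit
    can be reissued. Results are compared as-is, so clients must produce
    bit-identical (or canonicalized) output for this to work."""
    value, count = Counter(results).most_common(1)[0]
    return value if count >= quorum else None

# Three clients report on the same work unit; one is a rogue client
# faking fast processing with a bogus answer. The majority wins.
print(accept_result(["signal@1420MHz", "signal@1420MHz", "no-signal"]))
# -> signal@1420MHz
```

The obvious cost is that every work unit gets crunched two or three times, which eats directly into the harvested teraflops, the very trade-off against "@home usefulness" described above.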

The left side of the Open Source panel discussion. Todd Needham from Microsoft is on the left, Susan Graham of UC Berkeley is next to him, and Jose Munoz, my DOE buddy, is next to Susan, looking to his left. The guy behind the podium is Robert Borchers of NSF. The guy from Sandia National Lab is not shown.

The final session I want to cover is the open source panel discussion which took place at the very end of the conference. The topic: how can the high performance computing (HPC) field take advantage of the open source movement, and how should the government funding agencies deal with this matter? As it turns out, there is a committee out there titled the "President's Information Technology Advisory Committee", or PITAC, and it was charged with investigating the matter for the HPC field. The result was the publication of a document titled "Developing Open Source Software to Advance High End Computing". The members of the PITAC who worked on this report were present on the panel. The first panel member, Susan Graham of UC Berkeley, basically gave a report on the report. The short version was that they recognized the potential of open source software, and that the government should take advantage of it and do so now; the government should not take its time on this issue. The next panelist to speak was Todd Needham from Microsoft. This was a first for me: the first time I got to hear a bona fide Microsofter speak about open source software. His general attitude was that Open Source was not pixie dust which you could sprinkle over software to suddenly make it all that much more powerful. Which is to say that in general, he was rather negative toward the movement. He had a rather angry and defensive attitude throughout the panel discussion which put me off. I guess it's the fallout of the antitrust lawsuit against Microsoft.

From my notes I was able to get the following from his introduction. He argued that Open Source is not a development methodology. In fact, he claimed that many projects are more cathedral than bazaar. (He gave the Linux kernel development as an example, with Linus sitting at the top.) He claimed that it is not a security model: many eyeballs are not a replacement for a formal design and review process. (It's interesting to hear that coming from a guy who works for a company which just had a major break-in that made headlines around the world...) Open Source does not mean open standards. He also emphasized that the absence of an open source license does not mean that you don't have access to the source code. He did like the idea of managed source code.
Note: You can find Todd's full presentation in this .pdf file.

In one of his transparencies he alluded to open source as a way of giving away your intellectual property rights and thus diluting the monetary value of your work. After the introductory talks, there was a question regarding this, and he was quite adamant about how bad it was to open source your code and thus lose the dollar value you put into it. He stated that Microsoft is a company which makes its money off of intellectual property, and thus the open source model just doesn't work for them. (If Todd had said otherwise, it would be a Slashdot headline for sure....) It must have been interesting to see how the report, which recommended the use and adoption of open source software, got out with Todd from Microsoft as one of the committee members.

The next guy who talked was Jose Munoz from DOE. He did a Dave Letterman by going through the top 10 reasons why Open Source software is bad, in reverse order. The last one, or rather item #1, was the question "Would you want to fly in an airplane whose complete flight system was developed using Open Source by the lowest bidder?", followed by a bullet reading "Whom do you sue when the thing goes wrong? (assuming you're a survivor)". It's unfortunate that a guy who works for the same government agency which provides my paycheck gave such a negative perspective on this issue. It was good to listen to one of the members of the audience state, at the end of the session, that if given a choice between a plane running open source software or something running under a Microsoft OS, he would much prefer the open source one, given the track record of Microsoft software. There were a couple of chuckles in the audience and a blushed smile from Todd of Microsoft.
Note: You can find Jose Munoz's full presentation in this .pdf file.

The last panelist to speak was from Sandia National Laboratories. His talk was basically in favor of the Open Source software license model. I asked two questions of the panelists. First, I pointed out to them that Linux and the Linux distributions have fostered a new generation of companies selling supercomputers. I told them that if you walk around the show floor, you see many small companies selling racks of Linux machines; I personally didn't see any companies selling racks of Windows NT/2000 machines. They responded that the big companies would sell you a rack of either Linux or Windows NT PCs, and that there was one demonstration booth which had a rack of Windows NT PCs running Beowulf applications. Personally I believe they missed the point I was trying to make, which was that Linux is fostering a new industry made up of young start-ups. The second question I asked them, actually more of a statement than a question, was that they should consider the Internet when they discuss issues relating to open source. "Who owns the Internet? The Internet wouldn't exist if it were owned by anyone." I remember a smile coming across Susan Graham from Berkeley once I finished my statement. Todd from Microsoft decided to answer my question. What I remember of his answer was that he thought AOL did a "damn good job" of hiding all that stuff from the user in creating the front end which their user community uses. Again, I believe he missed my point. To me, AOL was useless until they connected themselves to the Internet, first by providing e-mail and then by providing you with a PPP connection.

My "consider the Internet" statement was the last one given before the panel discussion ended. Of course I could have gone on a rant about it and kept the panel going for at least another 15 minutes by addressing some of the comments the panelists made, but it was the end of 4 days of conferencing and I had to catch my plane back to New York. Besides, no one wants to hear someone rant on and on and turn a discussion personal. Who knows, I could write up a rant, post it on my web site, and get many orders of magnitude more people to read it than the few dozen who were in the conference session at the time....

There were many, many more talks and events which happened at the conference, but it would take much too much time to write about the whole thing. I tried to touch upon the items which I thought were the most important. Other talks of interest were Dr. Sterling's talk on Commercial Off The Shelf (COTS) supercomputers, Eugene Spafford's talk on security issues on the Internet, and all the stuff which I saw on the show floor. That's left as a page full of captioned pictures.

That's me with my one big winning hand playing poker. This photo was taken at the SGI party. They gave you 2 grand worth of chips and you got to gamble it away. The chips didn't have any real value; it was basically fake money. It hurts not a bit to lose fake money, but it's fun nonetheless.
Congratulations! You made it to the end of my sc2k write-up. I want to thank you for your attention and hope that you got something out of your read. If you have any questions or comments, please e-mail them to me. I especially encourage people to report any corrections to the text you may have found. If the e-mail I receive has some interesting comment about the content of this write-up, I tend to post it at the end of the write-up for others to read. Also, if you enjoyed this write-up, I encourage you to sign up to my announcements mailing list, where you'll get an e-mail when a new write-up has been posted to my website. You can find more of my past write-ups here.

Many thanks go to Duane Clark, Marie Bennington, Tundran and James Burley for submitting e-mails pointing out lots of typos which they found in the text. Again, thank you very much.

I would further like to thank Lee Busby for converting Jose Munoz's and Todd Needham's power point presentations into the more universal format of PDF.


The following are e-mails which I've received with comments on the article. Thank you Frank and Barry for sending in your thoughts on the article.
[All response links are off-site. -Ed.]

Frank Love writes in to tell me about my warts. Actually, everyone has these kinds of warts.

Barry Stinson has comments on my DOE buddy, Jose Munoz and the Open Source panel discussion.

Carl Friedberg, a physicist, agrees with my description of what it's like to work for the government.

Andrew Weiss writes in to let me know that the system which I thought was going to Duke University may in fact be going to the U. of Delaware. Also, the bird is not an extreme Tux but YoUDee, the U. of Delaware mascot. Thanks, Andrew, for the clarification.

Brad Lucier was the first to write in, informing me that the 1U rack of CPUs belongs to API Networks. Thanks for the clarification, Brad.

David Kinney from NASA writes in to inform me that the aerial picture of the airport is of Moffett Field, home of NASA Ames Research Center. Thank you, David, for figuring out what the eye in the sky was looking at.

Rich Brueckner from Sun Microsystems writes in with some details of the Sun booth and the party they threw for the SC2000'ers.

Patrick J Melody from the Naval Research Laboratory's Center for Computational Science, e-mailed me to tell me that they are the guys who demoed the 1.5 Gigabit streaming video demo and the earth surface scan demo.

Andy Meyer has sent in the most detailed description of the aerial photo of the Moffett Federal Airfield so far. Good work Andy.

L. Busby of Lawrence Livermore National Laboratory has some comments regarding the Open Source panel which are worth the read. Thank you L. Busby for the e-mail.

Marc sent in some rather frank advice regarding Open Source panel discussions. I'll use his advice at the next opportunity. Maybe someone else has better advice as to how to react in a public forum to anti Open Source talk?

Todd Needham from Microsoft, who was on the Open Source panel discussion, e-mailed me some comments about this article. I think it's important that his views on the panel and this article be shared with the readers. I replied to Todd, who then replied back with further comments. You can read my second reply to Todd here.

Chris Torres writes in to thank me for taking the time to write the article. It's because of e-mail like yours, Chris, that I'm motivated to write these articles in the first place. I'm glad you enjoyed the read.

Steve Conway from Conway Communications sent in a reminder of a very important event which I missed at the show: an announcement on "progress on plans for new performance benchmarks for supercomputers and the hiring of DOE/NERSC to develop the new tests." Sorry for missing it and not writing about it in this article.

Casey King, from Australia, writes in with a comment or two about the SC2K NOC picture I took. It looks like the networking gurus do aim for that higher cabling standard in the sky... but it's just too high up there to reach.

Gerardo Cisneros of SGI wrote in to clarify one of the comments I made in the Open Source panel discussion regarding OSes used to fly airplanes. I knew what he was referring to, as did everyone in the audience, so I went ahead and filled in his blanks.

Louis H Turcotte, the SC2000 conference chair(!), read the article and has some interesting insights. As it turns out, the conference is organized by volunteers from around the country. He writes, "I would like to share with your readers that SC is a conference totally organized by volunteers - who work for 2-3 years to create the week's worth of conference activities." Quite an impressive effort, Louis.

Koen Holtman, from Caltech, wrote in to clarify Jose Munoz's presentation on the Open Source panel. As far as Koen could remember, Jose was playing the role of devil's advocate, hence the negative slant of his presentation. Thanks for the clarification, Koen.

Of all the people out there on the Internet who read this article (over 10,000 as of six days after the initial posting on,) it looks like Richard Stallman found some time to read it and write me some comments. He thanks me for recognizing the importance of the Free Software movement. Remember, it's GNU/Linux!

More photos available

One final note. The photographs you find on this web page, in the original article and in the photo gallery are only a small portion of what I took at the conference. Specifically, I took photos of most of the slides shown at the seti@home talk, Dr. Sterling's talk on COTS super computers, Dr. Spafford's talk on computer security, and the slides shown at the Open Source panel on the last day. If you are interested in seeing these photos, e-mail me and I'll see what I can do about getting them to you.

Copyright © 2000, Stephen Adler.
Copying license
Published in Issue 60 of Linux Gazette, December 2000

"Linux Gazette...making Linux just a little more fun!"

Heroes of Might And Magic III

By Marius Andreiana

[This article contains 937 KB of inline images. Click here to begin reading. Use your browser's Back button to return. -Ed.]

Copyright © 2000, Marius Andreiana.
Copying license
Published in Issue 60 of Linux Gazette, December 2000


Help Dex
By Shane Collinge


Courtesy Linux Today, where you can read all the latest Help Dex cartoons.

Copyright © 2000, Shane Collinge.
Copying license
Published in Issue 60 of Linux Gazette, December 2000


Joe Kaplenk and the OSes

By Fernando Ribeiro Corrêa

Joe Kaplenk is dedicated to teaching about UNIX-like operating systems. He is the author of several operating system administration books, including UNIX System Administrator's Interactive Workbook and Linux Network Administrator's Interactive Workbook.

OLinux: Tell us about your career, personal life (age, birthplace, hobbies, education...)

Joe Kaplenk: I was born in Middletown, NY. I'm 53. My current hobbies include reading and watching history. I'm particularly fascinated with population migrations and the development of various nations. World War II and the rise of the various political movements fascinate me. The other hobbies include computers, of course, teaching and reading on technical business trends.

My college background includes going to Rensselaer Polytechnic Institute in Troy, NY, where I majored in Math with a minor in Physics. From there I went to the University of Utah and graduated in Physics. My undergraduate interests included quantum mechanics. It was something I would only study for about an hour a week, and I did very well in it. I could recite much of the history of quantum mechanics while I was in high school.

My favorite instructor was Robert Resnick at RPI. He wrote the premier text series for undergraduate Physics. I was very fortunate to get into his class, since there was a long waiting list. He made Physics very real and more exciting for me. Isaac Asimov was my favorite author. Both of them influenced me to go into writing.

For graduate work I took courses without a major in Chemistry, Biology, Biochemistry and Journalism, and worked part-time as a science reporter for the Daily Utah Chronicle, the campus paper. I hoped to go on to graduate school in Biochemistry and Biophysics, and had been accepted at several colleges after this. One of my fascinations was studying the effects of radiation on genetics. I believed that it would be possible to find a way to selectively modify genes with radiation, given the right parameters, and was hoping to pursue this line of research. Several of my advisors advised me against it, saying it would never work, but I felt strongly this was worth pursuing.

However, after much thought, I left graduate school at the University of South Carolina in my first week. At that point I decided that it would be too much effort and the money wasn't there to support me. In the early seventies I spent several years in the southern United States helping in rural black communities. My religious beliefs as a Baha'i strongly influenced me in this.

My wife Ramona has been a really good support network for me. She's the love of my life. I have a daughter, Anisa, from a previous marriage. She has been an outstanding student and has received a number of commendations.

OLinux: What company do you work for, and what is your job nowadays?

Joe Kaplenk: I am currently working for Collective Technologies as a consultant. Some of my assignments have been working with Red Hat Linux, but most of them have been with Solaris. Previous to this, and until last March, I worked with IBM Global Services and did some Linux work there as well as supporting Solaris and AIX. In this position I was on the international team that did the IBM Redbooks on Linux. I looked within IBM for opportunities to do more Linux, but did not find anything that was satisfactory at that time.

Some of my spare time is spent teaching system admin, researching ways to teach, and developing new methods of teaching. Other time is spent playing with various software, doing installations and testing. The rest of the time is spent on family things.

OLinux: When did you start working with Linux? What was your initial motivation, and how do you see it nowadays?

Joe Kaplenk: My first exposure to Linux was around 1992. I was working as the main UNIX system administrator at Loyola University Chicago. There were several students who worked for me. We were all keeping up closely with USENET, the Internet news groups. They found something about Linux online. We had been playing with Minix, which was actually used in one of the Math classes. This was prior to release 1.0. The students were very excited when Linux 1.0 was released. This meant to them that it could now be more stable. It wasn't long after that that Yggdrasil Linux was released. We downloaded the code, did some installs and played with it.

I thought this was great, since it gave the students an opportunity to play with a UNIX-like operating system as root without causing havoc on production servers. We were running AT&T 3B2s at that time. These were the standard boxes for UNIX development then, so much of what they did on Linux could be done on UNIX also.

I see Linux becoming a major player in the operating system arena in a very short time. Linux will not kill all the other versions of UNIX, but I do see a reduction in the number of versions. With the GNOME Foundation being developed, and a common desktop being settled on for several versions of UNIX, Linux will become even more widely used. However, there are some things that proprietary operating systems can do better: they can focus on new apps, throw money at a problem, and bring together talent quickly to solve it. The Linux community is largely dependent on finding developers to do the projects, and they often do it for free or for the love of the project. Quick development and focus are not necessary attributes of this model. So both models will continue to be used.

OLinux: What role do you play in the Open Source world these days?

Joe Kaplenk: One of my major efforts at the moment is bringing Linux into the training and academic system administration training area. Recently I attended and gave a presentation at Tech Ed Chicago 2000. The presentation covered what I consider the major areas of difficulty in teaching system admin. I hope to have it on my website shortly at . I did it in StarOffice and want to make it available in other formats also.

This conference attracted educators and trainers from universities, colleges, companies and institutions in Illinois. At the conference it was strongly emphasized that there is an increasing shortage of system administrators. The need to develop training programs needs to be given a high priority.

The role I see myself playing is in helping to develop programs for training system admins. Because Linux can run in more places than any other operating system, it is a natural solution to the problem. Students can learn and develop skills that they might not otherwise have. The materials I developed over the years grew into my first two books, the UNIX System Administrator's Interactive Workbook and the Linux Network Administrator's Interactive Workbook. They also formed the start of the whole Prentice-Hall series of Interactive Workbooks.

OLinux: As an educator, what do you think about this proliferation of Linux certification services? Besides your books, how can you extend your Linux/UNIX knowledge to users?

Joe Kaplenk: Some employers are demanding Linux certification. My last assignment required me to have my Red Hat Certified Engineer (RHCE) certification, which I have. Personally, I think certification is overemphasized, and the important thing is what the admin has done and can do. The RHCE comes the closest to being a true test because it has three parts: the first is multiple choice, the second is debugging, and the third is installation. The other certifications that I am aware of do not have this; they are only multiple-choice questions. As an instructor who uses multiple-choice questions, I am very familiar with their failings, and I try to balance this with hands-on work.

I took the Sair Linux certification test right after passing the RHCE test. I passed 3 of the 4 sections, but took the networking test twice. I failed the first time, so I answered any suspect questions differently the second time. It made no difference in the final result. I teach networking, have been doing it for 16 years, and have written books on it. The pre-test material says that you only need several years' experience. This indicates to me that there is some failing that will need to be looked at. While someone can, and I'm sure has, passed it, they may have passed not because of knowledge but because they chose the answers that were being looked for. But I know that sometimes the only way to find out whether a test is good is to give it, so I'm sure with time the bugs will be ironed out. The best test is real-life experience.

As I solve problems, or during installs, I have started writing up docs that explain the process. My focus is usually on the process itself. The outcome is important, but I figure that if I can speed up, clarify, or simplify the steps, I have been a success. In one job I decreased the process time from two months to two weeks by analyzing and automating as much as possible. Eventually I'll have my own set of docs that people can refer to for these processes.

OLinux: How good are the Linux support services? Can you indicate some failure in these services?

Joe Kaplenk: I don't have a lot of experience with Linux support services other than providing them. Currently there are a lot of opportunities to do Linux support, and this will grow rapidly because of the growth of Linux. Someday the CIOs are going to wake up and see that they have production Linux boxes and their support guy just left. They will need to find someone to help them out.

The only failures might be in the lack of planning and training for what is becoming a tidal wave of demand for Linux. I have been a user of Solaris and AIX services and my observation is that Linux will be at those levels soon if it isn't already.

OLinux: What are the best and worst features of the Linux platform in comparison with the Windows platform?

Joe Kaplenk: My jobs have required me to work with DOS, Windows 3.1/95/98/NT, AIX, Solaris, HP-UX, AT&T UNIX and BSD. As a result I have come in contact with many of the features, good and bad, of these operating systems.

Linux is very scaleable. Ignoring hardware memory requirements, Linux can be put on wristwatches or IBM mainframes and run the same program.

The Linux source code is accessible, so a developer can figure out how to talk to the operating system. All the system calls are documented. This is not the case in Windows, where many system calls are hidden and only Microsoft knows about them. This gives MS a competitive edge. A Linux developer can know exactly what to expect, whereas oftentimes Windows developers are shooting blind, hoping they hit the target with enough ammo.

Windows does have some good points. It is widely used. There are many applications that only run on Windows, so the user is forced to use Windows. However, the open source community is coming along very quickly and providing equivalent functionality in Linux programs. Microsoft spends a lot of time and money testing applications on users to determine the best way to make something available to the user, and they have simplified the process greatly; I find some Linux apps confusing by comparison. As long as you do things the Microsoft way and buy Microsoft products, you won't have a problem.

But the problem is that there are many software manufacturers that write for Windows and really don't seem to have a clue. I've installed McAfee Office 2000, Norton Utilities and various Norton antivirus products over the years, and inevitably remove them. After the installs my boxes will slow to a crawl, crash more often, lose icons, and suffer various other insanities. I figure that for about 5 years I could count on spending 12 weeks a year trying to fix my MS boxes, and ultimately I would have to reinstall the whole mess. My final solution is to never install anything that gets too close to the operating system, like these utilities. Then the boxes run a lot better. But I lose out on the functionality of the software. Basically, if I leave it alone once it is running, then it works great. But this loses a lot of the fun.

With Linux, and UNIX in general, the operating system and the apps are practically always separate. So when you upgrade to another version of the various system monitoring tools the system runs without a problem. If there is a problem the developer, whose email address is available, can fix it very quickly.

Microsoft is in a difficult position. They are trying to control the process while giving a certain amount of flexibility to other companies. They realize that other developers create ideas more quickly than MS does. So if they let others develop the ideas, then Microsoft can buy these companies out, steal the ideas or put them out of business. This model won't last long. With MS pushing the UCITA laws, which passed in Virginia and which prevent reverse engineering, they will have closed one of the doors they use. I'm reminded of TI, which had the best 16-bit microprocessor in the late 80s. I think it was the 9600. But TI decided they could control the process and tried to design 96% of the software. Eventually people went elsewhere and the processor did not achieve its goals.

OLinux: What does the involvement of big companies like IBM mean for Linux? Is it really good for the Linux community?

Joe Kaplenk: The Linux community is tending to go in two directions. There is the Free Software Foundation or the GNU/Linux group that is devoted to the purity of the GNU GPL license. These people are very fanatical about keeping Linux in the direction that it started in. This is represented commercially by the Debian GNU/Linux distribution.

However, the other direction is that many companies such as IBM are getting involved. They are finding that they can make a lot of money on Linux services. Let's remember that Bill Gates got his start because IBM didn't want to develop an operating system for the PC; they figured the money was in the hardware. That same mentality is still there: the operating system can sell the hardware. If IBM can sell more boxes by using Linux, then they will. IBM is adding their apps to run on Linux. They are pushing Linux because they know the market is going to Linux, and they can sell their apps and services on Linux and make money that way. In IBM's world, Linux is one more product to support and make money from.

I don't see IBM creating their own distribution unless it is for some specialized application such as Point of Sale (POS) equipment used in stores or for ATMs. These have special requirements, and even in this case they would probably contract with someone else.

There are several manufacturers putting their own front ends on Linux or developing their own version of Linux. But if the libraries and kernel can continue to be compatible then I think Linux will be okay. There may be forks, but the good ideas will be brought back in.

I do see the GNU/Linux folks getting frustrated at some of the directions, and I would expect that this will give more impetus to the HURD kernel development. This is the GNU operating system that Richard M. Stallman was working on before Linux got fired up. If the Linux community doesn't have a place for them, then they may move on to their own kernel and distribution separate from the other Linux distributions. Fortunately the FSF has felt very strongly about their apps being able to run on as many operating systems as possible, so this shouldn't be too painful for the Linux community.

OLinux: In your opinion, what improvements and support are needed to make Linux a wide world platform for end users?

Joe Kaplenk: Usability is constantly emphasized in the Linux/business community, and I agree with this. When I can sit my mother-in-law down at the computer and she can use Linux as easily as Windows, then we'll be there. When she realizes that the box doesn't have to be rebooted for silly things the way Windows does, then it will be a solid sale. Most users don't care about the operating system. They want to use it. Windows has a lot of ease of use and wide usability built in. Linux is getting close. I try to use Linux whenever I can and am moving things over. I have two Windows boxes and a laptop running Windows. My Windows needs have decreased, and except for archived stuff, I don't use my two Windows boxes. My laptop runs Windows only because I use AOL for my dialup on the road and for some other apps.

OLinux: What was the last book release? Is there any new publication under way?

Joe Kaplenk: My last solo book was the Linux Network Administrator's Interactive Workbook. My last team effort was the IBM Redbook series on Linux, which was recently published by Prentice-Hall. This is a four-book series.

There are no publications currently underway. I have been gathering my thoughts and hope to publish a UNIX system administration book based on my research. I plan to merge my first two books and incorporate several unique concepts that I feel can make teaching and learning system admin much easier. I have a contract offer from Prentice-Hall that I am evaluating. Once I sign the contract, the writing will take up most of my spare time.

Joe Kaplenk: Three years ago my goal was a book a year. In two years I had two books published solo and four books as part of a team. I'm basically on track or ahead of schedule.

OLinux: What was your most successful book? How many copies were sold? Has it been translated into other languages?

Joe Kaplenk: I don't have numbers on the Redbooks, but the UNIX System Administrator's Interactive Workbook was the best seller for the solo books. It has sold at least 20,000 copies. But the numbers are usually up to nine months behind. The networking book was intentionally limited in content to allow the user to just build a network and so didn't sell as well.

There are no translations into other languages as far as I know.

OLinux: How do you evaluate the sharp fall of stocks such as VA Linux yesterday? Is it possible to make money as a Linux company? How do you address this problem?

Joe Kaplenk: It was inevitable, because new tech stocks in general have been the darlings of the stock market, and Linux fits this role perfectly. I also suspect that something unanticipated was going on in this process. As I interpret the situation, people were making after-hours bids for VA Linux stock before it went on sale. When investors and brokers saw the prices people were willing to pay, I suspect they set the opening price ridiculously high. As a result many people made quick fortunes. Since the stock was way overpriced, it quickly dropped.

I think the investors in the stock market IPOs have learned their lesson. The IPOs will not be the rockets they once were. Though there are occasional blips.

The biggest money to be made in Linux is in services and training. We will very quickly see this happening. Hardware does not make as much money and neither does the software. Though advanced software such as backup software does sell as well on Linux as on other platforms.

OLinux: What kind of relationship do you have with the Linux community? Do you currently work for any Linux organizations?

Joe Kaplenk: I don't have any formal relations with the Linux community other than being a member of several of the local Linux groups. I am also a member of Uniforum. My time has been so busy with my writing, research, teaching and working that I have avoided additional time commitments. I get over 100 emails a day that I have to deal with also.

I don't work for any Linux orgs, but I do occasionally get assignments that originate from Red Hat.

OLinux: Leave a message for our users.

Joe Kaplenk: Linux is going mainstream. This is an irreversible process. If you want to succeed career-wise and financially, you need to understand the obstacles and have wide experience with several operating systems. You also need to get down and dirty and play in the sandbox. This means tearing apart the boxes and the software and becoming involved (or should I say intimate?) with them.

Just like the early revolution with PCs and DOS this will move by very quickly. Ten years down the road it might be something else. It won't be MS and Windows and maybe not Linux. So take advantage of it while you can. Keep yourself open to new ideas so that you can again be there when it comes around.

My email is and I am always open to other ideas. Educators that are working on the same issues in training system admins as I am are especially encouraged to contact me.

Copyright © 2000, Fernando Ribeiro Corrêa.
Copying license
Published in Issue 60 of Linux Gazette, December 2000

"Linux Gazette...making Linux just a little more fun!"

Creating a Linux Certification Program, part 11:
Inviting the World to Participate

By Ray Ferrari

It has been a very busy year for the Linux Professional Institute. At shows and convention halls around the world, people are interested in Linux* and want to know more about where Linux is headed and the demands that will be placed upon the administrators of these systems. The spectrum of interested Linux participants runs the full gamut: from the outright beginner who doesn't have a clue what all the buzz over this new operating system is about, to the down-and-dirty, hands-on, faster-than-a-speeding-bullet kind of person. Everyone wants to be a player in the Linux game.

The audiences for the Linux Professional Institute have been enthusiastic and engaged. It seems everyone has a different opinion of what is truly needed within the Linux community. But one thing most seem to agree on is that as Linux grows in demand, more professionals will be needed, and a standard of competence is the best approach to ensuring qualified professionals.

Linux no longer appears to be a fringe operating system. The 47,000 square feet of representation at the COMDEX/Linux Business Expo in Las Vegas, and the participation of the COMDEX attendees, were a sight to behold. Tens of thousands of people toured the Linux-related companies, and I am sure the result will show up in sales. As sales continue to increase for these companies, a demand will be generated for Linux professionals. For the Linux professional, having the right knowledge, the right tools and the right qualifications will be everything.

And the Linux Professional Institute (LPI) has helped bridge the wide gap between the many different players within the Linux professional community and the ever-growing crowd of newbies. The Linux Professional Institute ( has been spreading the word about Linux and certification of Linux professionals for the last two years. Their web site and related information can now be read in French, German, Greek, Japanese, Polish, Spanish and English; Chinese is soon to be added, with more to come. All of this has been done with volunteers from around the world. LPI continues to be organized and run with the help of thousands of people from every part of the globe.

Japan just recently went online with Virtual University Enterprises (VUE) to deliver the Linux Professional Institute's certification tests for Linux administration. This was a milestone within the Linux community. The Japanese are embracing Linux in a big way, and they believe in the certification process. LPI views the interest in Linux in Japan as one more piece of evidence that Linux is here to stay. There is currently an LPI-China being started by a group of professionals and educators in China. Sponsors have been contacted, and enthusiasm runs high for this group as they ready themselves to embrace their Linux audiences.

Enthusiasm and a sense of community have taken hold at the Linux Professional Institute. A lot of work remains to be done, and anyone interested in participating or volunteering their expertise or time should contact, or The organization continues to work on the next set of exams to be administered. Level two testing is scheduled to begin in the first quarter of 2001. Stay tuned to the web site for further details.

The Linux Professional Institute invites all Linux enthusiasts and professionals to participate in their certification exams and become known as an LPI-1, LPI-2 or LPI-3 Professional. This certification is currently recognized by IBM, SGI, HP, SuSE and TurboLinux, among others. Since the first exam was taken just a few months ago in June, exams have been taken in almost every part of the globe. The top five countries (most tests taken, in descending order) were the U.S., Germany, Canada, the Netherlands and Japan, but there were also participants from Taiwan, Switzerland, Pakistan, Ethiopia, Brazil, Bulgaria, China, Ecuador and many more.

As has been the case for the last two years, 2001 appears to be even busier for LPI. January brings the Linux Professional Institute to Amsterdam and New York, then to Paris in February. The rest of the year will continue at this pace with appearances around the globe. To follow our movements, log on to To help us at shows and conventions, contact or

Come join the Linux Professional Institute and all their sponsors in challenging yourself as a Linux enthusiast. We invite you to participate in our testing procedure, which leads to certification of professional Linux administrators. Be part of a world-wide organization where you can make a difference. Join our many discussions through mailing lists, or help staff booths in different parts of the world. The Linux Professional Institute invites anyone interested in helping the organization through its next year of progress to log on to and click on "Getting Involved". We're looking forward to another great year, and we hope you will be with us for the ride.

*Linux is a trademark of Linus Torvalds; Linux Professional Institute is a trademark of Linux Professional Institute Inc.

Copyright © 2000, Ray Ferrari.
Copying license
Published in Issue 60 of Linux Gazette, December 2000

"Linux Gazette...making Linux just a little more fun!"

Tuxedo Tails

By Eric Kasten

bootfail.png wwwsanta.png

[Eric also draws the Sun Puppy comic strip at -Ed.]

Copyright © 2000, Eric Kasten.
Copying license
Published in Issue 60 of Linux Gazette, December 2000

"Linux Gazette...making Linux just a little more fun!"

Secure Communication with GnuPG on Linux

By Kapil Sharma


GnuPG is a tool for secure communication and data storage. It can be used to encrypt data and to create digital signatures. GnuPG is a complete and free replacement for PGP. Because it does not use the patented IDEA algorithm, it can be used without any restrictions. GnuPG uses public-key cryptography so that users may communicate securely. In a public-key system, each user has a pair of keys consisting of a private key and a public key. A user's private key is kept secret; it need never be revealed. The public key may be given to anyone with whom the user wants to communicate.


You can find all the software related to GnuPG at


Copy the gnupg source file to the /usr/local/ directory (or wherever you want to install it) and then cd to that directory.
[root@dragon local]# tar xvzf gnupg-1.0.4.tar.gz
[root@dragon local]# cd gnupg-1.0.4
[root@dragon gnupg-1.0.4]# ./configure
[root@dragon gnupg-1.0.4]# make
This will compile all source files into executable binaries.
[root@dragon gnupg-1.0.4]# make check
It will run any self-tests that come with the package.
[root@dragon gnupg-1.0.4]# make install
It will install the binaries and any supporting files into appropriate locations.
[root@dragon gnupg-1.0.4]# strip /usr/bin/gpg
The "strip" command removes debugging symbols, reducing the size of the "gpg" binary on disk.
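Once "make install" finishes, a quick sanity check (a generic one, not from the original article) confirms the binary is on your PATH and runs:

```shell
# Verify that the installed gpg binary runs; the exact version
# string will depend on your build.
gpg --version | head -n 1
```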

Common Commands

1: Generating a new keypair
We must create a new keypair (public and private) the first time we use GnuPG. The command-line option --gen-key is used to create a new primary keypair.

Step 1
[root@dragon /]# gpg --gen-key
gpg (GnuPG) 1.0.2; Copyright (C) 2000 Free Software Foundation, Inc.
This program comes with ABSOLUTELY NO WARRANTY.
This is free software, and you are welcome to redistribute it
under certain conditions. See the file COPYING for details.

gpg: /root/.gnupg: directory created
gpg: /root/.gnupg/options: new options file created
gpg: you have to start GnuPG again, so it can read the new options file

Step 2
Start GnuPG again with the following command:
[root@dragon /]# gpg --gen-key
gpg (GnuPG) 1.0.2; Copyright (C) 2000 Free Software Foundation, Inc.
This program comes with ABSOLUTELY NO WARRANTY.
This is free software, and you are welcome to redistribute it
under certain conditions. See the file COPYING for details.

gpg:/root/.gnupg/secring.gpg: keyring created
gpg: /root/.gnupg/pubring.gpg: keyring created
Please select what kind of key you want:
   (1) DSA and ElGamal (default)
   (2) DSA (sign only)
   (4) ElGamal (sign and encrypt)
Your selection?  1
DSA keypair will have 1024 bits.
About to generate a new ELG-E keypair.
              minimum keysize is  768 bits
              default keysize is 1024 bits
    highest suggested keysize is 2048 bits
What keysize do you want? (1024) 2048
Do you really need such a large keysize? y
Requested keysize is 2048 bits
Please specify how long the key should be valid.
         0 = key does not expire
      <n>   = key expires in n days
      <n> w = key expires in n weeks
      <n> m = key expires in n months
      <n> y = key expires in n years
Key is valid for? (0) 0
Key does not expire at all
Is this correct (y/n)? y

You need a User-ID to identify your key; the software constructs the user id
from Real Name, Comment and Email Address in this form: "

Real name: Kapil sharma
Email address:
Comment: Unix/Linux consultant
You selected this USER-ID:
    "Kapil Sharma (Unix/Linux consultant) <> "

Change (N)ame, (C)omment, (E)mail or (O)kay/(Q)uit? o
You need a Passphrase to protect your secret key.

Enter passphrase: [enter a passphrase]

We need to generate a lot of random bytes. It is a good idea to perform
some other action (type on the keyboard, move the mouse, utilize the
disks) during the prime generation; this gives the random number
generator a better chance to gain enough entropy. ++++++++++.+++++^^^
public and secret key created and signed.

Now I will explain the various inputs requested during generation of the keypair.

GnuPG can create different kinds of keypairs; there are three options. A DSA keypair is the primary keypair, usable only for making signatures; an ElGamal subordinate keypair is also created for encryption. Option 2 is similar but creates only a DSA keypair. Option 4 creates a single ElGamal keypair usable for both making signatures and performing encryption. For most users the default option is fine.
                     About to generate a new ELG-E keypair.
                                   minimum keysize is  768 bits
                                   default keysize is 1024 bits
                         highest suggested keysize is 2048 bits
                     What keysize do you want? (1024)

 There are advantages and disadvantages to choosing a longer key. The advantage: the longer the key, the more secure it is against brute-force attacks. The disadvantages: 1) encryption and decryption will be slower as the key size increases, and 2) a larger keysize may affect signature length.

  The default keysize is adequate for almost all purposes, and the keysize can never be changed after selection.

For most users a key that does not expire is adequate. The expiration time should be chosen with care, however, since although it is possible to change the expiration date after the key is created,
it may be difficult to communicate a change to users who have your public key.
               You need a User-ID to identify your key; the software constructs the user id
              from Real Name, Comment and Email Address in this form:
             "Kapil Sharma (Linux consultant) <> "

             Real name: Enter your name here
             Email address: Enter your email address
             Comment: Enter any comment here

              Enter passphrase:

There is no limit on the length of a passphrase, and it should be carefully chosen. From the perspective of security, the passphrase to unlock the private key is one of the weakest points in GnuPG
(and other public-key encryption systems as well) since it is the only protection you have if another individual gets your private key. Ideally, the passphrase should not use words from a
dictionary and should mix the case of alphabetic characters as well as use non-alphabetic characters. A good passphrase is crucial to the secure use of GnuPG.
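The interactive questions above can also be answered non-interactively. The following sketch assumes a modern GnuPG (2.1 or later, unlike the 1.0.x used in this article) and generates a keypair in batch mode inside a scratch keyring; the name, address and RSA key type are illustrative assumptions, not the article's choices:

```shell
#!/bin/sh
# Unattended key generation in a scratch keyring (GnuPG 2.1+).
set -e
export GNUPGHOME=$(mktemp -d)     # keep the demo keyring out of ~/.gnupg
chmod 700 "$GNUPGHOME"

# The parameter block answers the same questions the wizard asks:
# key type, key size, user ID and expiry.  %no-protection skips the
# passphrase -- fine for a throwaway demo key, never for a real one.
gpg --batch --gen-key <<'EOF'
Key-Type: RSA
Key-Length: 2048
Subkey-Type: RSA
Subkey-Length: 2048
Name-Real: Test User
Name-Email: test@example.invalid
Expire-Date: 0
%no-protection
%commit
EOF

gpg --list-keys                   # the new keypair should be listed
```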

2: Generating a revocation certificate

After your keypair is created you should immediately generate a revocation certificate for the primary public key using the option --gen-revoke. If you forget your passphrase or if your private
key is compromised or lost, this revocation certificate may be published to notify others that the public key should no longer be used.

 [root@dragon /]# gpg --output revoke.asc --gen-revoke mykey

Here mykey must be a key specifier: either the key ID of your primary keypair or any part of a user ID that identifies your keypair. The generated certificate will be left in the file revoke.asc. The certificate should not be stored where others can access it, since anybody can publish the revocation certificate and render the corresponding public key useless.

3: Listing Keys

 To list the keys on your public keyring use the command-line option --list-keys.

[root@dragon /]#  gpg --list-keys
pub  1024D/020C9884 2000-11-09 Kapil Sharma (Unix/Linux consultant) <>
sub  2048g/555286CA 2000-11-09

4: Exporting a public key

You can export your public key to use it on your home page, on a public key server on the Internet, or through any other channel. To send your public key to a correspondent you must first export it. The command-line option --export is used to do this. It takes an additional argument identifying the public key to export.

5: Importing a public key
Once your own keypair is created, you can add the keys of trusted third parties to your public keyring database so that you can use them for future encryption and authenticated communication. A public key may be added to your public keyring with the --import option.

 [root@dragon /]# gpg --import <filename>
Here "filename" is the name of the exported public key.
For example:
[root@dragon /]# gpg --import mandrake.asc
gpg: key 9B4A4024: public key imported
gpg: /root/.gnupg/trustdb.gpg: trustdb created
gpg: Total number processed: 1
gpg:              imported: 1

In the above example we imported the public key file "mandrake.asc" from the company MandrakeSoft, downloadable from the Mandrake web site, into our keyring.
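Sections 4 and 5 can be exercised end to end without touching your real keyring. This sketch assumes a modern GnuPG (2.1+) and uses two scratch home directories to stand in for two users; "Alice", "Bob" and the addresses are hypothetical placeholders:

```shell
#!/bin/sh
# Export a public key from one keyring and import it into another.
set -e
ALICE=$(mktemp -d); BOB=$(mktemp -d)
chmod 700 "$ALICE" "$BOB"

# Give "Alice" a throwaway keypair (batch mode, no passphrase).
gpg --homedir "$ALICE" --batch --gen-key <<'EOF'
Key-Type: RSA
Key-Length: 2048
Name-Real: Alice Example
Name-Email: alice@example.invalid
Expire-Date: 0
%no-protection
%commit
EOF

# Section 4: export the public key, ASCII-armored.
gpg --homedir "$ALICE" --yes --armor --output alice.asc --export alice@example.invalid

# Section 5: import it into "Bob's" keyring and list it.
gpg --homedir "$BOB" --import alice.asc
gpg --homedir "$BOB" --list-keys
```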

6: Validating the key
Once a key is imported it should be validated.  A key is validated by verifying the key's fingerprint and then signing the key to certify it as a valid key. A key's fingerprint can be quickly viewed with the --fingerprint command-line option.
[root@dragon /]# gpg --fingerprint <UID>
As an example:
[root@dragon /]# gpg --fingerprint mandrake
pub  1024D/9B4A4024 2000-01-06 MandrakeSoft (MandrakeSoft official keys) <>
     Key fingerprint = 63A2 8CBD A7A8 387E 1A53  2C1E 59E7 0DEE 9B4A 4024
sub  1024g/686FF394 2000-01-06

In the above example we verified the fingerprint of mandrake. A key's fingerprint is verified with the key's owner. This may be done in person or over the phone or through any other means as long as you can guarantee that you are communicating with the key's true owner. If the fingerprint you get is the same as the fingerprint the key's owner gets, then you can be sure that you have a correct copy of the key.

7: Key Signing
After importing and verifying the keys in your public database, you can start signing them. Signing a key certifies that you know the owner of the key. You should only sign a key when you are 100% sure of its authenticity.

8:  Checking Signatures
Once a key is signed, you can list the signatures on it and see the signature you have added. Every user ID on the key will have one or more self-signatures, as well as a signature for each user that has validated the key. We can check the signatures of keys with the gpg option "--check-sigs".
As an example:
[root@dragon /]# gpg --check-sigs mandrake
pub  1024D/9B4A4024 2000-01-06 MandrakeSoft (MandrakeSoft official keys) <>
sig!       9B4A4024 2000-01-06  MandrakeSoft (MandrakeSoft official keys) <>
sig!       020C9884 2000-11-09  Kapil Sharma (Unix/Linux consultant) <>
sub  1024g/686FF394 2000-01-06
sig!       9B4A4024 2000-01-06  MandrakeSoft (MandrakeSoft official keys) <>

9: Encrypting and decrypting
The procedure for encrypting and decrypting documents is very simple. If you want to encrypt a message to mandrake, you encrypt it using mandrake's public key, and then only mandrake can decrypt that file with his private key. If mandrake wants to send you a message, he encrypts it using your public key, and you decrypt it with your private key.

To encrypt and sign data for the user Mandrake that we have added on our keyring use the following command (You must have a public key of the recipient):
[root@dragon /]# gpg  -sear <UID of the public key> <file>

As an example:
[root@dragon /]# gpg -sear Mandrake document.txt
You need a passphrase to unlock the secret key for
user: "Kapil Sharma (Unix/Linux consultant) <> "
1024-bit DSA key, ID 020C9884, created 2000-11-09

Enter passphrase:

Here "s" is for signing, "e" is for encrypting, "a" creates ASCII-armored output (".asc", ready for sending by mail), "r" specifies the user ID of the recipient, and <file> is the data you want to encrypt.

To decrypt data, use:
[root@dragon /]# gpg -d <file>

As an example:
[root@dragon /]# gpg -d documentforkapil.asc
You need a passphrase to unlock the secret key for
user: "Kapil Sharma (Unix/Linux consultant) <> "
1024-bit DSA key, ID 020C9884, created 2000-11-09
Enter passphrase:

Here the parameter "d" is for decrypting the data and <file> is the file you want to decrypt.
[Note: you must have the public key of the sender of the message/data that you want to decrypt in your public keyring database.]
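The whole sign-encrypt-decrypt cycle can be rehearsed in one self-contained script. As above, this assumes a modern GnuPG (2.1+) and a throwaway, passphrase-less key; the recipient address is a placeholder:

```shell
#!/bin/sh
# Round trip: generate a key, sign+encrypt a file to ourselves,
# then decrypt it and compare with the original.
set -e
export GNUPGHOME=$(mktemp -d)
chmod 700 "$GNUPGHOME"

gpg --batch --gen-key <<'EOF'
Key-Type: RSA
Key-Length: 2048
Subkey-Type: RSA
Subkey-Length: 2048
Name-Real: Test User
Name-Email: test@example.invalid
Expire-Date: 0
%no-protection
%commit
EOF

echo "hello, world" > document.txt

# -s sign, -e encrypt, -a ASCII armor, -r recipient: the same flags
# the article combines as "-sear".
gpg --batch --yes -s -e -a -r test@example.invalid -o document.txt.asc document.txt

# Decryption also verifies our own signature.
gpg --batch -d document.txt.asc > decrypted.txt
cmp document.txt decrypted.txt && echo "round trip OK"
```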

10: Checking the signature
Once you have exported your public key and others have imported it, anybody can use GnuPG's --verify option to check whether data from you really carries your signature.

Some uses of GnuPG software

1: Send encrypted mail messages.
2: Encrypt files and documents.
3: Transmit encrypted files and important documents over a network.

Here is a list of some frontends and related software for GnuPG:

                       GPA aims to be the standard GnuPG graphical frontend. This has a very nice GUI interface.
                       GnomePGP is a GNOME desktop tool to control GnuPG.
                       Geheimniss is a KDE frontend for GnuPG.
                       pgp4pine is a Pine filter to handle PGP messages.
                       MagicPGP is yet another set of scripts to use GnuPG with Pine.
                       PinePGP is also a Pine filter for GnuPG.

More Information


Anybody who is cautious about security should use GnuPG. It is one of the best open-source programs providing encryption and decryption for all your sensitive data, and it can be used without restriction since it is under the GNU General Public License. It can be used to send encrypted mail messages, files and documents, and to transmit files and important documents securely over a network.

Copyright © 2000, Kapil Sharma.
Copying license
Published in Issue 60 of Linux Gazette, December 2000

"Linux Gazette...making Linux just a little more fun!"

Sharing an Encrypted Windows Partition With Linux
(and notes about Sendmail)

By Juraj Sipos

I published an article in the September issue of Linux Gazette (LG #57) titled Making a Simple Linux Network Including Windows 9x. I received questions regarding my encrypted Windows partition. People asked me questions like, "How did you do that?" So I'd like to answer: how did I do that? I would also like to describe my successful configuration of sendmail, which remained an open question in my previous article.

The above-mentioned article was about how to configure a simple network including Windows 9x, but at that time I was unsuccessful in configuring sendmail. First, let me say that I was not interested in having a standard mail server--one server from which I would fetch mail. I was interested in configuring sendmail so that I could send mail from machine one to machine two, and from machine two to machine one. This is not very usual; however, the information revealed here may also be useful for such a standard sendmail server configuration.

I am using the term "sendmail configuration", by which I do not mean "configuration of the file", but rather "making sendmail work". In other Linux documentation the term "sendmail configuration" is understood as manipulation of the sendmail configuration files in the /etc directory.

The following article will briefly describe how I configured this and how I successfully shared an encrypted Windows partition with Linux.

Normally, I use Linux at home, so I did not give my Linux workstation a network name - a host name. I found most of the programs people recommended in their answers ineffective (webadmin, configure sendmail). This was obviously due to the reasons below, including one fact I must strongly emphasize here: usually, sendmail is preconfigured, and no editing of its configuration file ( is necessary unless you want to do something special or at least something of your particular choice:

1. The first important thing was to give my Linux box a host name. I did this with the "hostname" command, where "" may be a name for your machine. If you do not have a real network name, it does not matter; just use the above-mentioned name and replace my name with yours, e.g. The article in the September issue clearly describes how to configure your network, so look there. The information in the article you are now reading also applies to configuring sendmail on a plip network. You can open Linuxconf (Red Hat) and change your hostname permanently:

hostname > Basic sendmail configuration > present your system as:

You should also do this on computer TWO, where you will put instead of

2. The file in the /etc directory must contain a line with the following text: in computer ONE, and in computer TWO. The file is preconfigured as empty and only contains the following commented text: # - include all aliases for your machine here.

3. DNS must be configured. DNS files are contained in the bind package. Just install bind and change its configuration files in /etc directory. Here I will give my DNS configuration files:

    ; a caching only nameserver config
    directory                             /etc/namedb
    cache           .                      root.cache
    primary   named.local
The content of my /etc/named.conf file is different from the standard Linux configuration. I changed it because I use FreeBSD and I back up the /etc directory regularly. For me it is more convenient to have all configuration files in /etc rather than a few in /var and the rest in /etc, but this is a matter of choice. The file root.cache contains the world's root DNS servers and is preconfigured, so I do not include its content here. You will only make use of this file if you are connected to the Internet; however, if you are not connected, it's OK to leave it as it is. I noticed the file does not interfere with our configuration.


   options {
           directory "/etc/namedb";
   };
   zone "." {
           type hint;
           file "root.cache";
   };
   zone "" {
           type master;
           file "named.local";
   };
   zone "" {
           type master;
           file "";
   };
   zone "0.0.10.IN-ADDR.ARPA" {
           type master;
           file "10.0.0";
   };


   $TTL    3600
   @               IN      SOA (
                           20000827 ; serial
                           3600 ; refresh
                           900  ; retry
                           3600000 ; expire
                           3600 )  ; Minimum
                   IN      NS
   1              IN      PTR
The periods at the end are not a mistake; they are important here. You can find more information in the DNS-HOWTO. If you don't understand something, just forget it and rest assured that this DNS configuration will work.


    $TTL    3600
    @               IN      SOA (
                            2000080801 ; serial
                            3600 ; refresh
                            900 ; retry
                            1209600 ; expire
                            43200 ; default_ttl
                    IN      NS
                    IN      MX    0
    localhost.      IN      A
    ;info on particular computers
    ns              IN      A
    one            IN      A
    www                   CNAME   one
    ftp                       CNAME   one
    two            IN      A
MX is a mail exchanger, NS is a nameserver, and CNAME is a canonical name or alias. Now follows the reverse zone:

/etc/namedb/10.0.0 (yes the name of the file is simply "10.0.0")

    $TTL    3600
    @               IN      SOA (
                            1997022700 ; serial
                            28800 ; refresh
                            14400 ; retry
                            3600000 ; expire
                            86400 ) ; default_ttl
                     IN      NS
    1               IN     PTR
    2               IN     PTR
    ; the above PTR is reverse mapping
SOA means Start of Authority. Notice the ";" at the beginning of some lines; it introduces a comment. The numbers represent times in seconds.

Now you can issue the command "ndc start". If your DNS server (BIND) is already running, try "ndc restart" instead. Then try the nslookup command, which should answer your queries. When you run nslookup, the shell command line will change and you will see something like this:

$ nslookup
  Default Name Server:

Now you can type a hostname at the nslookup prompt, and you should receive an answer giving the IP address of the computer you are asking for. If you type an IP address instead, the reply will be the corresponding hostname.

No DNS server should be running on the other computer (TWO). This is a detail, but newbies often configure a DNS server on several machines. In our network we have one DNS server, and we don't worry about a secondary DNS server. We are dealing here with a SIMPLE network; it's the only way to start understanding something more complicated.

4. Putting the "domain" line in the resolv.conf file tells the second computer (and all other ones, if we plan to include them in our network) which domain we are in. The choice of domain name is yours, but keep only one domain. It is possible to create more domains; a domain works something like a "Workgroup" in MS Windows, in that only computers in the same domain will be able to communicate with one another - computers in one domain will not talk to computers in another, even though they are all cabled into one network. And because we are using private IP addresses here, there will be no interference with the Internet; our DNS server will simply translate our hostnames into their private addresses. (However, for an Internet connection you need a router if you want to use any of the networked computers for dialing out. A router is a computer that serves as a gateway - a way out of the private intranet - and it gives you the possibility to share one modem among several computers. If you have a simple network with two or three computers and need to make an immediate dial-out connection, try dialing out from the DNS server. Otherwise, look for information elsewhere, or download the freesco mini dial-out router and install it; it is a preconfigured mini router with diald, which I tested from both Windows and Linux and which worked well. You will only need to configure your ISP settings. Find the software through search engines; freesco is a diskette-based mini distribution, so an old 386 without a hard disk might serve you well.)

The computer TWO will read the DNS configuration from computer ONE, so the nameserver address in resolv.conf on both machines is the address of computer ONE. The resolv.conf on computer ONE has the following syntax:

nameserver         # (this is maybe not necessary, but I have it there)
The resolv.conf on the computer TWO needs this:

Again, read my article from the September issue on how to configure the simple network. If you have a working network and the above-mentioned configuration ready, you will be able to send mail from root or user accounts either from computer ONE to computer TWO, or from computer TWO to computer ONE. If you connect to the net, the DNS server we just configured will also resolve Internet hostnames: when you execute nslookup and type any www address at its prompt, you will get back its numerical IP address. This information reaches you through the root DNS servers mentioned above.
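For reference, a minimal resolv.conf for computer TWO might look like the sketch below. The domain name example.lan is hypothetical, and the 10.0.0.1 address is an assumption based on the 0.0.10.IN-ADDR.ARPA reverse zone configured above; substitute your own values:

```
# Hypothetical sketch -- replace with your own domain name.
# "domain" sets the local domain appended to unqualified names;
# "nameserver" points at computer ONE (assumed to be 10.0.0.1,
# per the 10.0.0.x network implied by the reverse zone above).
domain example.lan
nameserver 10.0.0.1
```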

If there is anything wrong, try to run "ndc restart". If there is still a problem, check your network connection.

Linux and Windows

I haven't tested it yet, but it will certainly work. However, you must install a mail server for Windows, playing the role sendmail plays on Linux. One alternative is to try some freeware; another is to use professional software like WinRoute, which includes a mail server, DHCP server, etc. (WinRoute for MS Windows can also be used as a dial-up router). Here again it is DNS that will help you send mail. Let me repeat the most important information I have from this hard digging: no editing of the configuration file is necessary. The sendmail configuration file is preconfigured to work immediately.

Sharing Encrypted Windows Partition With Linux

Some five years ago I downloaded the PCGuardian Encryption Engine and used it. Although it is shareware with an expiration period, I managed to delete my C: drive several times, so I could reinstall it even after it had already been installed. Please understand that if you do what I did here, you do it at your own risk.

The PCGuardian Encryption Engine will totally encrypt a DOS FAT16 or Windows FAT32 partition, and you will have to enter your system through a password. If you boot from a diskette and look at drive C:, you will see garbage. If you later want to delete the encrypted partition, DOS fdisk will refuse, but Linux fdisk or cfdisk will not.

The problem here, if you have a boot manager, is that you must use one that does not interfere with the password boot manager. This is quite a complicated issue, but generally speaking, the password engine of the PCGuardian software behaves like a boot manager in that it is installed in the MBR. I used the BOSS boot manager from the FreeBSD distribution disks. BOSS was installed first, and the PCGuardian password manager damaged neither the BOSS boot manager nor the MBR. This means that first I received a password invitation, then the BOSS boot manager, and then I could easily boot the encrypted Windows partition or Linux. When I selected "Restart in MS-DOS Mode" from the Windows partition, I could also use the loadlin.exe file to boot Linux from the encrypted partition; the Linux partition, however, was obviously on a different disk. Other boot managers will not work with PCGuardian or other "MBR password" encryption managers; you will destroy either the MBR (for example, Boot Manager Menu, which also destroyed my whole encrypted disk) or all the data on the disk. So far I can say that the GAG boot manager may also work; you can download GAG from the Internet, and it is probably the best boot manager and it is free. If you want to download BOSS, follow FTP links from the FreeBSD sites. Having two MBR codes is a very dangerous thing; the best thing is not to try it. Obviously, you cannot mount such an encrypted Windows partition from Linux unless the manufacturer gives you a driver.

Copyright © 2000, Juraj Sipos.
Copying license
Published in Issue 60 of Linux Gazette, December 2000

"Linux Gazette...making Linux just a little more fun!"

Objects, Classes and Other Things

By Jason Steffler


    For those who haven't read the previous articles, be sure to read the statement of purpose first.  This month, we're going to discuss objects as well as classes, messages and encapsulation.  For those looking to read the whole series locally or info about upcoming articles, you can check the MST page.  For those looking for further information on learning Squeak, here are some good resources.
    I also need to cover another item before we get into this article, and it's important enough to put at the top as opposed to the Q&A section.  I had a number of people ask me how I knew what code to type, and where they can find what objects Smalltalk has.  I plan on getting to this in article 4.  I'm holding off on discussing this to simplify the presentation and concentrate on fundamental concepts first.  I've often thought that the message isn't the medium, but rather the volume of the medium.  This has a number of connotations; in this context I don't want to present too much too fast and overwhelm the folks who are coming in with no programming experience at all.
    As a side note, I find it humorous and sad to see technical books that are selling by the pound these days.  You see things like: '1000 pages of <technology X> for only $19.95!', and the Core Java 2 Fundamentals book that is 742 pages!  It's often been said that the syntax of Smalltalk is so simple that you can put it on a postcard, as there are 2 operators, 5 reserved words[1], and 9 reserved characters.  For those interested I'll add an interlude: all I learned about Smalltalk syntax I learned from a postcard.

Quote of the day

"Smalltalk is a wonderful language to work with - in fact, it's hard to imagine a serious programming language being more fun than Smalltalk.  Certainly, I've had more fun programming in Smalltalk than any other language I've worked with; so much fun that at times it's seemed incredible that I've also been paid to enjoy myself."
        -Alec Sharp, "Smalltalk by Example", pXIX

A first look at objects

    Last month, we left off with describing an object as anything you can think of that is a noun.  We implicitly extended this concept by talking about actions that objects can do when asked.  Let's extend this concept explicitly now by describing actions objects can do when asked as verbs.  For example, you could consider a Person as an object.  You could ask the Person object to do things like:
  1. Person, would you please sing?
  2. Person, would you please sing Mary Had A Little Lamb?
  3. Person, would you please sing Mary Had A Little Lamb, and do it loudly.
  4. Person, what is your height?
    Notice action 1 was just an activity with no constraints on it.  We don't tell the person what to sing, or how fast, or how loud, etc.  In our 2nd request, we specify the song to sing and in the 3rd request we also specify that it should be sung loudly.  Action 4 shows that we can not only ask the person to do something, but also ask them something about themselves.  That's right, objects have properties just like a real world thing would (we'll come back to this).  Assuming we had a Person object (we don't, at least not yet), the corresponding Smalltalk code would look like:

   (Person new) sing.
   (Person new) sing: 'MaryHadALittleLamb'.
   (Person new) sing: 'MaryHadALittleLamb' andDoIt: 'loudly'.
   (Person new) sing: 'MaryHadALittleLamb' andDoIt: 'quietly'.
   (Person new) whatIsMyHeight.

    Pretty easy stuff, eh?[2]  Notice how the Smalltalk code is very readable and is very similar to how I initially wrote the questions in English.  Each one of these requests would be what we Smalltalkers call a message that the Person responds to, and the method in which they respond is determined by what we Smalltalkers call a method.  Again, pretty easy and intuitive stuff.
    Note on the last message, I switched the perspective around to whatIsMyHeight as opposed to whatIsYourHeight.  We could easily have made a method called whatIsYourHeight, but it's common practice to name methods from the perspective of the object[3].
    Now, you'll notice that each request has (Person new) in it; you'd be correct in assuming we're asking 5 different people to do something - we're asking a new Person to do something each time.  What if we want to ask the same person to do everything?  There are a few ways we could do this; one of them is:

    | aPerson |
    aPerson := (Person new).
    aPerson sing.
    aPerson sing: 'MaryHadALittleLamb'.
    aPerson sing: 'MaryHadALittleLamb' andDoIt: 'loudly'.
    aPerson sing: 'MaryHadALittleLamb' andDoIt: 'quietly'.
    aPerson whatIsMyHeight.

    The first line is declaring a temporary variable.  Hmm, this is the first traditional computer term that we've used so far in our discussion; not too bad.  Since we don't have a name for the person, we'll just call the person aPerson.  Much better than the 'x', 'y', or 'i' that you often see in other programming languages.  Not that you couldn't call a variable x in Smalltalk; it's just that you're encouraged to name things descriptively.  The common convention is to run your words together, capitalizing each successive word (IMHO, this includes acronyms too).  For example, you could ask the Person to runToTheDmv.  So in the above code snippet, we're creating a new person and assigning (:=) that person to a temporary variable called aPerson.  Then we're asking aPerson to perform their various methods by sending them messages.
    So the question naturally arises: what is 'Person'?  Thinking in terms of nouns, a Person is a specific class or subset of nouns.  Well, in Smalltalk Person is an object too, but it's a special kind of object that is called a class.  You can think of a class as a blueprint object for making related objects.  When we ask a class to make a new instance of an object, it's called instantiating an object.  Now, coming back to the properties of an object, they are stored in what are called instance variables of the object[4].  When we were asking aPerson for their height, they probably responded with what they had stored in their instance variable (we don't know for sure, as we don't know how the person determines their height).
    Revisiting our conception of what an object is, we can now refine it: an object is a grouping of messages and data that its messages can operate on.  This brings us to our next subject: Encapsulation.


    Encapsulation is a fancy term to describe the grouping of messages and data within something we call an object, such that other objects can't see the data and can only get access to it via messages.  The reason for the emphasis on this topic is that this is a big difference from the way that procedural programming traditionally viewed programs.  Traditionally, the data and the methods for changing the data were two very separate beasties.  Often, when these are in two different parts of your program, they get out of synch and it's very hard to maintain the functions that manipulate the data when the structure of the data changes or vice versa.  This is one of the problems that OO programming tries to address, by keeping the data and the methods for changing the data close, it's easier to keep them in synch.  In fact, if you change how the data is stored in an object, or the method by which you change that data, any other objects are none the wiser.  This is a Good Thing, as when you make changes, you make them in one spot, as opposed to many spots.
    So, though we could guess at what aPerson's height is, we don't really know until we ask them whatIsYourHeight.  Now, the person could respond by remembering the last time they walked past a height marker in the local Quick-E-Mart.  After being asked their height a number of times, they realize that maybe they should give a better answer, so they change the method of their response by checking their height against a measuring tape.  To us, we had no idea that how they determined their answer changed, and that's good, as we really don't care how they determine it; we only care about the answer.
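As a sketch of this idea in Smalltalk (the Person class, its height selector, and the rememberedHeight variable are hypothetical here, just like the examples above), the internal change might look like this; callers keep sending the same message either way:

```smalltalk
"Hypothetical sketch: the first version of the height method
 simply answers a value remembered in an instance variable."
height
	^rememberedHeight

"Later, the body changes to measure properly.  Because the data
 and the method that exposes it live together in the object, no
 other object needs to change -- they still just send: aPerson height."
height
	^self measureWithTape
```

This is exactly the payoff of encapsulation: the change is made in one spot, and every sender of the height message is none the wiser.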


    A Very Key thing in OO programming, is considering the responsibilities that an object should have.  Just like a real world person, aPerson object also has responsibilities.  In our example, aPerson is rather lucky, as they only have the responsibility of singing or answering their height.  They don't have the changeTheStinkyBaby responsibility.
    Figuring out the appropriate responsibilities for objects is one of the key things in OO programming.  If you don't have appropriate responsibilities, you run into problems like object bloat.  This is when an object does too many things and is 'spread too thin'.  A jack of all trades does everything pretty poorly.  On the other hand, you need to strike a good balance: a specialist that is too specialized does only one thing very well, and it takes a huge number of specialists to do anything.
    Did I mention figuring out the appropriate responsibilities for objects is one of the key things in OO programming?

Putting it all together

    I've been holding off running any code thus far, as I wanted you to concentrate on the concepts.  In the same vein, I'm going to hold off describing how to make a Person class, to concentrate on other concepts first.  We'll get to stepping through creating a Person class in the article after next, as I want to cover inheritance, polymorphism and abstract classes first.  In the meantime though, I've included the source code below, if you're curious and want to peek, or if you want to compare the code against other languages you know.
    To load the code, we need to file it in to Squeak.  If you're reading this remotely, you need to first download the code from here, rename it to remove the ".txt" extension, and save it wherever you want.  Now open the file list (Menus>open...>file list), find your downloaded file, left click on the file in the upper right corner to select it, then middle click>fileIn.  For the read-along folks, the file browser looks like:

    Now, you can go back to the above code and execute it.  If I was really motivated/sadistic, for the singing parts I could have actually recorded myself singing the songs and have the commands play them.  However, I took the obvious shortcut and just opened windows with the song text in them.
    To execute code, highlight the code, middle click>do it (or you can hit Alt-d).  Try doing one line at a time, or multiple lines at a time.  You'll note with the second example, which uses a temporary variable, that you'll have to highlight multiple lines to get the temporary variable included in the execution.  Try commenting out parts with "double quotes" - double quotes are the Smalltalk comment indicator, i.e.: "This is a comment".  You'll notice that when we comment out code and re-execute, we don't need to recompile, which is nice... compiling is so passé and time consuming.
    For the read-along folk, when you execute, this is what you'll see:

(Person new) sing.

(Person new) sing: 'MaryHadALittleLamb'.

(Person new) sing: 'MaryHadALittleLamb' andDoIt: 'loudly'.

(Person new) sing: 'MaryHadALittleLamb' andDoIt: 'quietly'.

(Person new) whatIsMyHeight.


Looking forward

    The next article will cover inheritance, polymorphism, and abstract classes, as well as introducing the collection classes.  Note: this time around, the sweet squeak is going to do some explaining, so be sure to read that section.

All I learned about Smalltalk syntax I learned from a postcard

This is an aside really, if you're coming into Smalltalk cold and are confused by this table, don't worry about it as we'll be covering this as we go along.  This is here for the curious or the folks who have other programming experience to compare.

A Sweet Squeak

    This section typically won't explore/explain code or examples, but this time we'll make an exception.  This time, we're going to play with numbers, as that's a common thing for introductory programming articles/books to do, and it's an easy way to compare languages.  People with some programming experience will appreciate this more; people with no programming experience will wonder why all languages can't do this.
    Let's start off with factorials.  For those not familiar with a factorial, it's most easily described by examples:
        1 factorial = 1
        2 factorial = 2*1
        3 factorial = 3*2*1
        4 factorial = 4*3*2*1
    When you do the snippet below, you won't see anything happen.  That's because the code doesn't open any windows to report back its results.  To see its results, you can do two things instead of doing it: "Try printing this; you'll see the answer '120' printed in the workspace"
   5 factorial.

"Now print this, and you'll see a very large number as the result, since it's 1067 digits long, I'm not going to paste it in here.  Note, this takes 5.9 seconds to run on my P200, which is pretty respectable performance.
    Also note the size of the numbers you can work with - you don't have the usual predefined fixed limits such as an int that has the range from -2,147,483,648 to 2,147,483,647."
    1000 factorial

"If you want to and have the time, just for grins try 10000 factorial (I didn't have the patience to run this on my machine, even in another thread)"

"For the curious, no I didn't count the number of digits returned from 1000 factorial, since the message factorial returns a LargeInteger, we can just ask that LargeInteger what size it is."
    (1000 factorial) size

"If you want to check that the correct numbers are actually being computed, try this and it should give you an answer of 1000"
    1000 factorial // 999 factorial

"Looking for what kind of precision you can get?  Try:"
"The interesting thing you'll note is that it returns a Fraction!  No rounding off to the first 5 decimal places by default.  Instead of printing it, try inspecting this guy, you'll see a Fraction object, with a numerator and denominator just as you'd expect:"

"Of course, you can use floats too, in which case you do get a rounding off - to 14 places give or take depending on the flavour of Smalltalk you're using.  Try this, and you'll get the answer: 22.72426793416332"

"Finally, for those curious about how long things take, to time something in Smalltalk you can print this, which will print out the milliseconds it took to run.  These measurements are not even meant to be toy benchmarks, but are just presented for interest."
    Time millisecondsToRun: [100 factorial]
    Time millisecondsToRun: [1000 factorial]
"On my P200, the above lines took:
    0.020 seconds
    5.967 seconds"

    People with some programming experience will notice that we didn't have to fuss with what types of numbers we're working with, (integers, large integers, floats, large floats), or type mismatches, or predefined size limitations, or wrapping primitive types in objects then unwrapping them or any other of this type of nonsense ;-).  We just naturally typed in what we wanted to do without having to do any jumping through hoops for the sake of the computer.  This comes from the power of P&P:  Pure objects and Polymorphism (which we'll discuss next time).

Questions and Answers

These are the answers for questions on previous articles that I could work through in my limited time available.  I picked out the ones I thought most appropriate for the series.  If you want a faster response, or I didn't get to your question, try posting your question to the comp.lang.smalltalk newsgroup, or the Swiki.

Q: Can you show how your examples can be done in Java?
I'll try to answer this without getting on a soapbox (language questions are often equivalent to religious questions).  There are three parts to this answer:

  1. Over the years, I've programmed in a decent number of languages/environments[6].  I've been programming in Java off and on since '95, and like any language it has its pros and cons.  However, I don't find programming in Java very fun.  On my day job I'll work in Java, C, C++, etc. as needed, since they're just tools to get a job done, but for my hobby projects I use Smalltalk.
  2. I find Java is just redoing a lot of stuff that Smalltalk already had (garbage collection, virtual machine, JIT VM, write once run anywhere) with the baggage of trying to be similar to C/C++ (primitive types, large amount of syntax, encouraging functional programming, strong typing).  I don't want to play around with old news for hobby projects, I like to play with new and nifty stuff.
  3. I really like meta programming, which I just can't do in Java or Windoze.
Q: What is a good beginner's Smalltalk book?
A: This really depends on what your motivations are.  For Squeak, I don't know of any good beginner's books, as all the material I've seen on it has been free online resources (even better than a book, IMHO).  Be sure to check out The Squeak FAQ (this is also a Wiki, so the cool thing is that you can post your own questions to a living document).
    Personally, I've found many beginner Smalltalk books to be written at too simple of a level.  If pressed, I'd have to say my favourite introductory Smalltalk book is Smalltalk by Example, by Alec Sharp, ISBN 0-07-913036-4.  It's geared towards beginner->intermediate topics for VisualWorks Smalltalk.  If you want you can get the NonCommercial version of VisualWorks to play in, though many of his examples should work in Squeak.

Q: What is a skin?
A: A skin is an installable look-n-feel or theme.  In Squeak you can install a Windoze look-n-feel, MacOS Aqua look-n-feel, etc. (I'm not sure how many skins are out there or what state they're in).  I remember VisualWorks Smalltalk having the skins concept back in '94 (it wasn't called a skin back then) - it's one of the things about Smalltalk that first caught my eye.  At the time I had just spent a year doing a very painful port of OpenWindows to Motif for parts of a C based application; then I strolled past a coworker's desk and they showed me how they could switch the look-n-feel of their Smalltalk application from Windoze to Motif to MacOS with a click of the mouse.  Talk about a productivity boost!

Q: Can you have Smalltalk run in web browsers?
A: You certainly can, in fact I thought about setting up a Squeaklet that people could execute the snippets from this series in from the comfort of their web browsers... yeah, you can have a development environment in your web browser, not just runtime code.  However, it was just one more thing for me to do in my limited time and I decided to forgo it for now.  This is a possible future topic.  BTW - most flavours of Smalltalk have some mechanism to run thin clients in a web browser.

Q: Where is the 'main' function?
A: Smalltalk doesn't have a 'main' function; this can be confusing to Smalltalk newbies, as so many other languages have this notion.  Conceptually, Smalltalk is an always-running set of live objects, which is why there is no 'main' function - if your environment is always running, having a 'main' function is nonsensical, as you're not starting or ending anywhere.  When you want to start an application you've written, you merely ask it to open up the window that it uses as a starting point.  When you deliver an application, you merely open up your application's starting window and package your environment (this is a simplification).
    Realistically though, you have to have some starting point as you need to shut down your computer sometimes.  Well, Smalltalk does what is called saving an image.  It's called an image because what you're saving is really a snapshot in time of your environment.  When you start it up again, everything is exactly where you left it.  To do this, Smalltalk has some bootstrap code to get itself going again, which could technically be considered a 'main' function.  However, the point is that you do not have a 'main' function when writing an application.

Article Glossary

This is a glossary of terms that I've used for the first time in this series, or a term that I want to refine.  If you don't see a term defined here, try the ongoing glossary in the local location: [LL].
Class
        (def 1-simple)  You can think of a class as a blueprint object for making objects.
Encapsulation
        The grouping of messages and data within something we call an object, such that other objects can't see the data and can only get access to it via messages.
File it in
      The act of loading Smalltalk code into Squeak.
Instance Variable
        (def 1-simple)  The place where objects store their properties/characteristics.
Instantiate
        {In-stan-shee-ate}  When we ask a class to make a new instance of an object, we say that we're instantiating that object.
Message
        (def 1-simple)  A request you can ask of an object.
Method
        (def 1-simple)  Determines how an object will respond to a message.  The method in which an object responds is determined by a method.
Object
        (def 2)  An object is a grouping of messages and data that its messages can operate on.
Object bloat
        (def 1-simple)  When an object does too many things and is 'spread too thin'.  A jack of all trades does everything pretty poorly.
Responsibility
        The things that an object can/should do.
Toy benchmark
        A benchmark is a method of measuring the performance of something, and a toy benchmark is a trivial benchmark that doesn't give a good reflection of performance, as it's too simple or too narrow.
Variable
        (def 1-simple)  A holding reference for something, for example, a holding space for an object.  It gives you a handle to refer to that something that it is holding on to for you.


[1] As Eric Clayberg once pointed out in comp.lang.smalltalk, technically speaking Smalltalk has no reserved words, since you can create methods using these reserved names (though you sure wouldn't want to!)  Though I agree on this technicality, I include these words, as for practical purposes they are reserved.
[2] Does it show I'm a Canuck?
[3] Actually, in practice we'd probably just name the method height, with the whatIsYour or whatIsMy height implied.
[4] Properties of objects can be stored in other places too, but I'm not going into that now, this is a very common place to store things.
[5] See [1]
[6] Smalltalk (VisualWorks, VSE, VisualAge, Squeak), Java (VisualAge, J++), C, C++, Tcl/Tk/Expect, sh/ksh/csh scripting, Turbo Pascal, Fortran. / Solaris, SunOS, HP-UX, Linux, Windows NT, 95, & 3.1, Mac OS

Statement of purpose

    When I wrote the first Making Smalltalk with the Penguin article back in March of 2000 [LL], my target audience was experienced programmers who didn't have much exposure to OO programming or to Smalltalk.  The article's intent was to give an overview of my favourite programming language on my favourite operating system.  Since then, I've had a fair amount of email asking introductory type questions about Smalltalk and OO programming.  So I thought I'd try my hand at a small series.
    The target audience for this series are people new to OO or new to programming altogether.  The intent is to not only introduce OO programming, but to also spread the fun of Smalltalking.  Why do this format/effort when there's lots of good reference material out there?  Two reasons really:  1) Tutorials are great, but can be static and dated pretty quickly.  2) An ongoing series tends to be more engaging and digestible.
    To help address the second reason above, my intent is to keep the articles concise so they can be digested in under an hour.  Hopefully, as newbies follow along, they can refer back to the original article and make more sense of it.  I plan on having a touch of advanced stuff once in a while to add flavour and as before, the articles are going to be written for read-along or code-along people.
    Something new I'm going to try is to make the ongoing series viewable in a contiguous fashion and downloadable in one chunk for people who want to browse the series locally.  To do this, click on the TOC graphic at the top of the article.  The articles also have 2 sets of links: one set for www links, another set for local links, indicated as [LL] for downloaded local reading.

Why Smalltalk?

    I believe Smalltalk is the best environment to learn OO programming in because: In particular, I'm going to use Squeak as the playing vehicle.  You'll notice this is a different flavour of Smalltalk than I used in my first article.  I've never used Squeak before, so this'll be a learning experience for me too.  The reasons for this are:

Person Sample Smalltalk Code

This is a sample of what the Smalltalk code looks like, for the curious or for people who want to compare with known languages.  For people who are confused by the code below, don't worry, as we'll be stepping through how you create it and what it means in a future article.

"This is a Class definition"

Object subclass: #Person
 instanceVariableNames: ''
 classVariableNames: ''
 poolDictionaries: ''
 category: 'MakingSmalltalk-Article2'

"My Characteristics is a category of methods for the class (similar to an interface in Java, though it's not enforced)"

Person methodsFor: 'My Characteristics'
"The one method in the My Characteristics category"
whatIsMyHeight
 "Actually, in practice we'd probably just name this method 'height', with the 'whatIsMy' implied.
 This is a simple example to show how a query about my characteristic can be done.  Ah-ha - notice that the height is not being returned via an instance variable as we guessed at above, but is in fact hardcoded... A BAD PRACTICE, but fine for this example to keep things simple; it also shows how to embed a ' in a string"

 (Workspace new contents: 'My height is 5'' 7"') openLabel: 'This is my height'! !

"This is the Singing category; it has 6 methods"
Person methodsFor: 'Singing'
"And the methods for singing - method 1 of 6"
maryHadALittleLambLyrics

 ^'Mary had a little lamb, little lamb, little lamb, Mary had a little lamb whose fleece was white as snow.'

"singing method 2 of 6, we use the 'my' prefix convention to indicate a private method"
mySing: someLyrics inManner: anAdjective withTitle: aTitle
 "Using simple logic here for illustrative purposes - if the adjective is not 'loudly' or 'quietly' just ignore how we're being asked to sing"

 | tmpLyrics |
 tmpLyrics := someLyrics.  "default to the lyrics as-is when the adjective isn't recognized"
 anAdjective = 'loudly'
  ifTrue: [tmpLyrics := someLyrics asUppercase].
 anAdjective = 'quietly'
  ifTrue: [tmpLyrics := someLyrics asLowercase].
 self mySing: tmpLyrics withTitle: aTitle

"singing method 3 of 6"
mySing: someLyrics withTitle: aTitle

 (Workspace new contents: someLyrics) openLabel: aTitle

"singing method 4 of 6"
sing

 self mySing: 'Do be do be doooooo.' withTitle: 'A bad impression of Sinatra'

"singing method 5 of 6"
sing: aSong

 aSong = 'MaryHadALittleLamb'
  ifTrue: [self mySing: self maryHadALittleLambLyrics withTitle: 'Mary had a little lamb']
  ifFalse: [self sing].

"singing method 6 of 6"
sing: aSong andDoIt: anAdjective

 aSong = 'MaryHadALittleLamb'
  ifTrue: [self mySing: self maryHadALittleLambLyrics inManner: anAdjective withTitle: 'Mary had a little lamb']
  ifFalse: [self sing].
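
For code-along folks, here's a minimal sketch of how the Person class above might be exercised once it's filed in.  The selectors are taken straight from the listing; evaluating these lines in a Workspace ('do it') should open labelled Workspace windows containing the lyrics:

 | aPerson |
 aPerson := Person new.
 aPerson sing: 'MaryHadALittleLamb'.                    "opens the lyrics as-is"
 aPerson sing: 'MaryHadALittleLamb' andDoIt: 'loudly'.  "opens the lyrics in uppercase"
 aPerson sing: 'Yesterday'                              "an unknown song falls back to the Sinatra impression"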

Series Table of Contents

Previous Articles


Copyright © 2000, Jason Steffler.
Copying license
Published in Issue 60 of Linux Gazette, December 2000


The Back Page

About This Month's Authors

Steven Adler

While not building detectors in search of the quark-gluon plasma, Steve Adler spends his time either four-wheeling around the lab grounds or writing articles about the people behind the open source movement.

Marius Andreiana

Marius is 19 years old and a first-year student at Politehnica Bucharest, Romania. Besides Linux, he also loves music (from rock to dance), dancing, having fun, and spending time with friends. He is also interested in science in general (and that spooky quantum connection :) and says, "I like cooking (okay, maybe I don't like it a lot, but I have to cook because I live alone while I'm at my studies in Bucharest. Poor me :-) poor neighbours :-)"

Shane Collinge

Part computer programmer, part cartoonist, part Mars Bar. At night, he runs around in a pair of colorful tights fighting criminals. During the day... well, he just runs around. He eats when he's hungry and sleeps when he's sleepy.

Fernando Correa

Fernando is a computer analyst about to finish his degree at the Federal University of Rio de Janeiro. He and his staff have built the best Linux portal in Brazil, and they have further plans to improve services and content for their Internet users.

Ray Ferrari

I am a newbie to the Linux community, but I have been following its rise in popularity for almost two years now. I continue to learn as much as possible about the Linux operating system in an attempt to become proficient and knowledgeable in its use as an Internet platform. I have volunteered on the Debian mailing list, and I continue to assist the Linux Professional Institute (LPI) in promoting its agenda. I help staff the LPI booths at events and write articles about its achievements.

Eric Kasten

I'm a software developer by day and an artist, web developer, big dog, gardener and wine maker by night. This all leaves very little time for sleep, but always enough time for a nice glass of Michigan Pinot Gris. I have a BS double major in Computer Science and Mathematics and an MS in Computer Science. I've been using and modifying Linux since the 0.9x days. I can be reached via email at or through my website at

Kapil Sharma

Kapil is a Linux and Internet security consultant. He has been working on various Linux/Unix systems and Internet security for more than two years. He maintains a web site providing free as well as commercial support for web, Linux and Unix solutions.

Juraj Sipos

I live and work in Bratislava, Slovakia, as a library information worker, translator and research reader at the Institute for Child Psychology. I have published some of my poetry here and in the USA, and I have translated some books from English (e.g., Zen Flesh, Zen Bones by Paul Reps). Some of my stories and poetry are available online. Computers are my hobby.

Jason Steffler

Jason is a Software Architect for McHugh Software International.  His computer related interests include: OO programming & design, Smalltalking, the peopleware aspects of software, and noodl'n around with Linux.

Not Linux

Penguins in Brazil

NPR had a story this month about penguins arriving on the beaches of Rio de Janeiro and other parts of Brazil. Usually only five or so penguins show up, but this year hundreds of penguins that used to stick around the Falkland/Malvinas Islands have migrated to Brazil.

Scientists suspect it may be a long-term climatic change: the cold ocean currents the penguins follow to find their food may have shifted.

Some Brazilians have adopted penguins as pets, but many don't know how to care for them. The penguins don't do well when the weather turns hot, so some people put them in the freezer. Unfortunately, this gives the penguins hypothermia, because this variety is used to a temperate environment. One of Brazil's zoos is building a penguin rehabilitation center for the penguins it has acquired and the ex-pets that have been donated to it.

Can any readers in Brazil provide any more information?

Penguins in Australia

In January, Linux Journal published a short interview article about an oil spill near Australia's Phillip Island, where fairy penguins (aka little penguins, the kind that bit Linus) live, and how hackers (including LJ) sent money to help rehabilitate the birds. The article describes how part of the rehabilitation included sweaters for the birds, apparently made from socks. The site has pictures and whimsical drawings of penguins, and panels from the High Tide comic strip featuring penguins; it also features information about the Phillip Island Nature Park.


The Spam of the Month award goes to the real estate company in Pakistan that offers programmers and sysadmins (certified in MCSE, Oracle, VB, e-commerce, etc.) for 240 hours per month for US$100.


[Nov 30, 3:45pm] As I write, the N30 WTO II demonstrations have started in downtown Seattle. "Several hundred" protesters (600-800 according to the news) are marching to Westlake Park from separate demonstrations on Capitol Hill and the International District. The most ingenious of their plans is a giant cake they'll be presenting to Mayor Paul Schell and booster Pat Davis, to thank them for bringing the WTO trade ministerial last year.... so they [the activists] could expose what bastards they [the WTO] are. In other matters, nine Starbucks were hit yesterday and the day before.... Anyway, you'll know by the time you read this whether news about N30 makes it outside the region.

Michael Orr
Editor, Linux Gazette,

Copyright © 2000, the Editors of Linux Gazette.
Copying license
Published in Issue 60 of Linux Gazette, December 2000