Collection of Interesting Articles on OSS



Desktop Linux: An Average User Success Story

Mar 26th, 2010 by Gene.

I often see the sentiment expressed that desktop Linux is “too hard” for the average PC user. Yet the qualification for “too hard” is usually that it is too hard to install Linux or too hard to fix problems on Linux for the average user. These arguments seem to completely overlook the fact that an average PC user will never install his own operating system. Also overlooked is the fact that the average PC user will never diagnose and fix her own system. An average PC user is taking a “sick” PC to a local computer repair shop, or to Geek Squad at Best Buy or calling a geek friend to come fix it. An average PC user is buying a PC with an operating system preinstalled and not changing it for something else. Those average PC users would have zero problems using desktop Linux. I have proof.

I am no average computer user. I run a computer consulting and sales business and I steep my brain in computer related news, technical documents and computer trivia on a daily basis. I am the guy that people call on when they do have computer problems or are looking to buy a new PC customized just for them. The fact that I use desktop Linux every day to run my business and for personal use is not remarkable.

On the other hand my friend Chuck is an average computer user. Chuck needs to send and receive e-mail, use Flash-based web sites, connect and copy music to his MP3 player, create and print documents, use instant messaging to talk to friends and play a few games to pass the time. Chuck does all this on Mandriva Linux and has done so ever since I built him a PC with Mandrake Linux, now known as Mandriva, preinstalled in 2004. When Chuck needs to upgrade Mandriva he calls me and pays me to do it; he does not do it himself. When Chuck has hardware problems he calls me and pays me to fix the PC; he does not do that himself. This is what average PC users do.

Chuck is my average user desktop Linux success story. He has been so for about six years now. Chuck does not want to go back to Microsoft operating systems as he sees no benefit to that. He does see some negatives to going back though. He would have to go back to buying and installing anti-malware software and keeping that up to date. He would have to go back to worrying about malware infections through e-mail or cracked web sites. Certainly if Chuck were using a Microsoft operating system I would do all I could to secure his PC for him. But I could not guarantee Chuck would never get malware “owning” his PC in that case. I am not there to watch over Chuck every time he opens an e-mail or browses web sites. With desktop Linux Chuck and I both know that he does not have to worry about those problems. Chuck is happy to use Linux as an average PC user.

I asked Chuck today, after finishing upgrading his PC to Mandriva 2010, if he considers himself an average PC user. He did not understand the context so I explained what I meant. Chuck agreed that he would never attempt to install his own operating system nor would he attempt to solve problems on his PC himself. He would call an expert for those every time. Just like he calls on an expert when he needs his home sprayed to prevent infestations of termites. Just like he calls on an expert when his SUV needs an oil change, new tires or some repair done. Chuck is very much an average PC user. Yet, Chuck uses desktop Linux on his home PC every day to do the things he needs to do. I asked Chuck if using Linux is hard. The answer? “No”.



Amazing write-up. Linux is pretty easy, actually. The only hard part is to stop expecting it to work the Windows way. Other than that there is just the mental block.
Many people say they would rather live with viruses than use a "hi-fi", complex Linux that only hackers use.

Respect it as an OS and it is very easy. Try to find Windows in it everywhere and it will be very hard. Many things are actually better in Linux compared to Windows.
I think Linux has been constantly misunderstood as a shell-only OS, even though a lot has changed in the UI of Linux desktop environments over the last several years. The thing is that people don't want to try anything else: they are so fed up with their Windows XP/Vista crashes that they are now afraid to try anything new. Moreover, due to the lack of DirectX in Linux, Windows takes the advantage at gaming, and a huge gaming community is out of reach of the Linux community. However, the Linux platform has some really good games of its own, such as Urban Terror (UrT), Wolfenstein: Enemy Territory (W:ET), True Combat: Elite (TC:E), Nexuiz and TORCS, but there is still a long way to go.


12 More of the Best Free Linux Books
Part 1

Many computer users have an insatiable appetite to deepen their understanding of computer operating systems and computer software. Linux users are no different in that respect. At the same time as developing a huge range of open source software, the Linux community fortunately has also written a vast range of documentation in the form of books, guides, tutorials, HOWTOs, man pages, and other help to aid the learning process. Some of this documentation is intended specifically for a newcomer to Linux, or those that are seeking to move away from a proprietary world and embrace freedom.

There are literally thousands of Linux books which are available to purchase from any good (online) book shop. However, the focus of this article is to highlight champion Linux books which make an invaluable contribution to learning about Linux, and which are also available to download without charge.

We have tried to select a fairly diverse selection of books in this article so that there should be something of interest here for any type of user whatever their level of computing knowledge. This article should be read in conjunction with our previous article on free Linux books, entitled 20 of the Best Free Linux Books.

1. GNU/Linux Advanced Administration
Author Remo Suppi Boldrito, Josep Jorba Esteve
Format PDF
Pages 545

We start off this article with a book that is exhaustive in its treatment of system administration. This book examines many different areas involved in administering Linux systems, with each subject being accompanied by a tutorial to act as an aid in the learning process.

Topics covered include:
Introduction to Linux
Migration and coexistence with non-Linux systems
Basic tools for the administrator
The kernel
Local administration
Network administration
Server administration
Data administration
Security administration
Configuration, tuning and optimisation

2. Using Samba
Author Robert Eckstein, David Collier-Brown, Peter Kelly

Format PDF, HTML
Pages 416

Samba is a suite of tools for sharing resources such as printers and files across a network. Samba uses the Server Message Block (SMB) protocol, which is endorsed by Microsoft and IBM, to communicate low-level data between Windows clients and Unix servers on a TCP/IP network.

It is one of the most important pieces of software for bridging the open source and closed source worlds.

The book focuses on two different areas:
Installation including configuring Windows clients
Configuration and optimization, exploring areas such as disk shares, browsing, advanced disk shares, setting up users, printer and Windows Internet Naming Service (WINS) setup with Samba, and troubleshooting tips
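To give a flavour of the configuration chapters, a minimal smb.conf with a single disk share might look like this (the share name, path and user are illustrative, not taken from the book):

```
# Minimal Samba configuration with one writable disk share
[global]
   workgroup = WORKGROUP
   server string = Samba file server
   security = user

[shared]
   comment = Shared files
   path = /srv/samba/shared
   read only = no
   valid users = alice
```

Samba ships a checker, testparm, that validates the file for syntax errors before the daemons are restarted.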

3. Slackware Linux Basics
Author Daniël de Kok
Format PDF, HTML, Single page HTML
Pages 233

Slackware Linux Basics is a book that aims to provide an introduction to Slackware Linux. It targets people who have little or no GNU/Linux experience, covering the Slackware Linux installation, basic Linux commands and the configuration of Slackware Linux.

Slackware is one of the earliest Linux distributions with development commencing in 1993.

The book covers:
Installation including partitioning and custom installation
Basic essential information such as the shell, files and directories, text processing, process management, editing and typesetting, and electronic mail
System administration covering topics such as user management, printer configuration, X11, package management, building a kernel, system initialization, and security
Network administration focusing on network configuration, IPsec, the Internet super server, Apache, and BIND

4. Advanced Bash Scripting Guide
Author Mendel Cooper
Format PDF, HTML
Pages 945

Advanced Bash-Scripting Guide is an in-depth exploration of the art of scripting. Almost the complete set of commands, utilities, and tools is available for invocation by a shell script.

The book explains:
  • Basics such as special characters, quoting, exit and exit status
  • Beyond the Basics including loops and branches, command substitution, arithmetic expansion, recess time
  • Commands - Internal commands and builtins; External filters, programs and commands; System and Administrative Commands
  • Advanced topics: Regular Expressions, Here Documents, I/O Redirection, Subshells, Restricted Shells, Process Substitution, Functions, Aliases, List Constructs, Arrays, Indirect References, /dev and /proc, Of Zeros and Nulls, Debugging, Options, Gotchas, Scripting with Style
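A few of those chapter topics can be sketched in a handful of lines of Bash (the failing ls path is deliberately bogus, purely to produce a non-zero exit status):

```bash
#!/bin/bash
# Sketch of exit status, command substitution, arithmetic
# expansion and a here document, as covered by the guide.

# Exit status: 0 means success, anything else is an error code.
status=0
ls /nonexistent/path 2>/dev/null || status=$?
echo "ls exit status: $status"

# Command substitution captures a command's output in a variable.
kernel=$(uname -s)

# Arithmetic expansion does integer maths inside the shell itself.
answer=$(( 6 * 7 ))

# A here document feeds a multi-line block to a command's stdin.
cat <<EOF
kernel: $kernel
answer: $answer
EOF
```

The guide devotes a chapter to each of these constructs, so this is only a taste of what it covers.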



How it works: Linux audio explained

There's a problem with the state of Linux audio, and it's not that it doesn't always work. The issue is that it's overcomplicated. This soon becomes evident if you sit down with a piece of paper and try to draw the relationships between the technologies involved with taking audio from a music file to your speakers: the diagram soon turns into a plate of knotted spaghetti. This is a failure because there's nothing intrinsically more complicated about audio than any other technology. It enters your Linux box at one point and leaves at another.

If you've had enough of this mess and want to understand just how all the bits fit together, we're here to help - read on to learn exactly how Linux audio works!

If we were drawing the OSI model used to describe the networking framework that connects your machine to every other machine on the network, we'd find clear strata, each with its own domain of processes and functionality. There's very little overlap in layers, and you certainly don't find end-user processes in layer seven messing with the electrical impulses of the raw bitstreams in layer one.

Yet this is exactly what can happen with the Linux audio framework. There isn't even a clearly defined bottom level, with several audio technologies messing around with the kernel and your hardware independently. Linux's audio architecture is more like the layers of the Earth's crust than the network model, with lower levels occasionally erupting on to the surface, causing confusion and distress, and upper layers moving to displace the underlying technology that was originally hidden.

The Open Sound System (OSS), for example, used to be found at the kernel level talking to your hardware directly, but it's now a compatibility layer that sits on top of ALSA. ALSA itself has a kernel-level stack and a higher API for programmers to use, mixing drivers and hardware properties with the ability to play back surround sound or an MP3 codec. When most distributions stick PulseAudio and GStreamer on top, you end up with a melting pot of instability with as much potential for destruction as the San Andreas fault.


ALSA
INPUTS: PulseAudio, Jack, GStreamer, Xine, SDL, ESD
OUTPUTS: Hardware, OSS

As Maria von Trapp said, "Let's start at the very beginning." When it comes to modern Linux audio, the beginning is the Advanced Linux Sound Architecture, or ALSA. This connects to the Linux kernel and provides audio functionality to the rest of the system. But it's also far more ambitious than a normal kernel driver; it can mix, provide compatibility with other layers, create an API for programmers and work at such a low and stable latency that it can compete with the ASIO and CoreAudio equivalents on the Windows and OS X platforms.

ALSA was designed to replace OSS. However, OSS isn't really dead, thanks to a compatibility layer in ALSA designed to enable older, OSS-only applications to run. It's easiest to think of ALSA as the device driver layer of the Linux sound system. Your audio hardware needs a corresponding kernel module, prefixed with snd_, and this needs to be loaded and running for anything to happen. This is why you need an ALSA kernel driver for any sound to be heard on your system, and why your laptop was mute for so long before someone thought of creating a driver for it. Fortunately, most distros will configure your devices and modules automatically.
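On a running system you can see this driver layer directly, because every loaded ALSA module carries the snd prefix. A small sketch (it assumes lsmod is available, i.e. a Linux box; elsewhere it simply reports nothing):

```bash
#!/bin/sh
# List the loaded ALSA kernel modules; their names start with snd.
# An empty list is the usual explanation for a silent system.
if command -v lsmod >/dev/null 2>&1; then
    snd_modules=$(lsmod | awk '$1 ~ /^snd/ { print $1 }')
else
    snd_modules=""
fi
echo "loaded ALSA modules: ${snd_modules:-none found}"
```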

ALSA is responsible for translating your audio hardware's capabilities into a software API that the rest of your system uses to manipulate sound. It was designed to tackle many of the shortcomings of OSS (and most other sound drivers at the time), the most notable of which was that only one application could access the hardware at a time. This is why a software component in ALSA needs to manage audio requests and understand your hardware's capabilities.

If you want to play a game while listening to music from Amarok, for example, ALSA needs to be able to take both of these audio streams and mix them together in software, or use a hardware mixer on your soundcard to the same effect. ALSA can also manage up to eight audio devices and sometimes access the MIDI functionality on hardware, although this depends on the specifications of your hardware's audio driver and is becoming less important as computers get more powerful.

Where ALSA does differ from the typical kernel module/device driver is in the way it's partly user-configurable. This is where the complexity in Linux audio starts to appear, because you can alter almost anything about your ALSA configuration by creating your own config file - from how streams of audio are mixed together and which outputs they leave your system from, to the sample rate, bit-depth and real-time effects.
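As a sketch of that user-level configuration, a ~/.asoundrc file can redefine the default device; the example below routes all default playback through ALSA's dmix plugin so several applications can share the card (card 0 is an assumption - aplay -l lists the real devices):

```
# ~/.asoundrc - software-mix default playback through dmix
pcm.!default {
    type plug          # convert sample rates and formats as needed
    slave.pcm "dmix"   # dmix shares the device between applications
}
ctl.!default {
    type hw
    card 0             # mixer controls come from the first soundcard
}
```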

ALSA's relative transparency, efficiency and flexibility have helped to make it the standard for Linux audio, and the layer that almost every other audio framework has to go through in order to communicate with the audio hardware.


PulseAudio
INPUTS: GStreamer, Xine, ALSA

If you're thinking that things are going to get easier with ALSA safely behind us, you're sadly mistaken. ALSA covers most of the nuts and bolts of getting audio into and out of your machine, but you must navigate another layer of complexity. This is the domain of PulseAudio - an attempt to bridge the gap between hardware and software capabilities, local and remote machines, and the contents of audio streams. It does for networked audio what ALSA does for multiple soundcards, and has become something of a standard across many Linux distros because of its flexibility.

As with ALSA, this flexibility brings complexity, but the problem is compounded by PulseAudio because it's more user-facing. This means normal users are more likely to get tangled in its web. Most distros keep its configuration at arm's length; with the latest release of Ubuntu, for example, you might not even notice that PulseAudio is installed. If you click on the mixer applet to adjust your soundcard's audio level, you get the ALSA panel, but what you're really seeing is ALSA going to PulseAudio, then back to ALSA - a virtual device.

At first glance, PulseAudio doesn't appear to add anything new to Linux audio, which is why it faces so much hostility. It doesn't simplify what we have already or make audio more robust, but it does add several important features. It's also the catch-all layer for Linux audio applications, regardless of their individual capabilities or the specification of your hardware.

If all applications used PulseAudio, things would be simple. Developers wouldn't need to worry about the complexities of other systems, because PulseAudio brings cross-platform compatibility. But this is one of the main reasons why there are so many other audio solutions. Unlike ALSA, PulseAudio can run on multiple operating systems, including other POSIX platforms and Microsoft Windows. This means that if you build an application to use PulseAudio rather than ALSA, porting that application to a different platform should be easy.

But there's a symbiotic relationship between ALSA and PulseAudio because, on Linux systems, the latter needs the former to survive. PulseAudio configures itself as a virtual device connected to ALSA, like any other piece of hardware. This makes PulseAudio more like Jack, because it sits between ALSA and the desktop, piping data back and forth transparently. It also has its own terminology. Sinks, for instance, are the final destination. These could be another machine on the network or the audio outputs on your soundcard courtesy of the virtual ALSA. The parts of PulseAudio that fill these sinks are called 'sources' - typically audio-generating applications on your system, audio inputs from your soundcard, or a network audio stream being sent from another PulseAudio machine.

Unlike Jack, applications aren't directly responsible for adding and removing sources, and you get a finer degree of control over each stream. Using the PulseAudio mixer, for example, you can adjust the relative volume of every application passing through PulseAudio, regardless of whether that application features its own slider or not. This is a great way of curtailing noisy websites.
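The same per-application control is available from the command line through pactl, PulseAudio's own client. A hedged sketch: the sink-input index 0 is illustrative, and the commands only do real work when a PulseAudio daemon is actually running:

```bash
#!/bin/sh
# Adjust a single application's volume via PulseAudio's pactl.
if command -v pactl >/dev/null 2>&1 && pactl info >/dev/null 2>&1; then
    # every playing application shows up as a numbered "sink input"
    pactl list short sink-inputs || true
    # set sink input 0 to half volume, if such a stream exists
    pactl set-sink-input-volume 0 50% 2>/dev/null || true
    pa_state="PulseAudio daemon reachable"
else
    pa_state="no PulseAudio daemon running"
fi
echo "$pa_state"
```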


GStreamer
INPUTS: Phonon
OUTPUTS: ALSA, PulseAudio, Jack, ESD

With GStreamer, Linux audio starts to look even more confusing. This is because, like PulseAudio, GStreamer doesn't seem to add anything new to the mix. It's another multimedia framework and gained a reasonable following of developers in the years before PulseAudio, especially on the Gnome desktop. It's one of the few ways to install and use proprietary codecs easily on the Linux desktop. It's also the audio framework of choice for GTK developers, and you can even find a version handling audio on the Palm Pre.

GStreamer slots into the audio layers above PulseAudio (which it uses for sound output on most distributions), but below the application level. GStreamer is unique because it's not designed solely for audio - it supports several formats of streaming media, including video, through the use of plugins.

MP3 playback, for example, is normally added to your system through an additional codec download that attaches itself as a GStreamer plugin. The commercial Fluendo MP3 decoder, one of the only officially licensed codecs available for Linux, is supplied as a GStreamer plugin, as are its other proprietary codecs, including MPEG-2 and H.264.


Jack
INPUTS: PulseAudio, GStreamer, ALSA

Despite the advantages of open configurations such as PulseAudio, they all pipe audio between applications with the assumption that it will proceed directly to the outputs. Jack is the middle layer - the audio equivalent of remote procedure calls in programming, enabling audio applications to be built from a variety of components.

The best example is a virtual recording studio, where one application is responsible for grabbing the audio data and another for processing the audio with effects, before finally sending the resulting stream through a mastering processor to be readied for release. A real recording studio might use a web of cables, sometimes known as jacks, to build these connections. Jack does the same in software.

Jack is an acronym for 'Jack Audio Connection Kit'. It's built to be low-latency, which means there's no undue processing performed on the audio that might impede its progress. But for Jack to be useful, an audio application has to be specifically designed to handle Jack connections. As a result, it's not a simple replacement for the likes of ALSA and PulseAudio, and needs to be run on top of another system that will generate the sound and provide the physical inputs.

With most Jack-compatible applications, you're free to route the audio and inputs in whichever way you please. You could take the output from VLC, for example, and pipe it directly into Audacity to record the stream as it plays back.

Or you could send it through JackRack, an application that enables you to build a tower of real-time effects, including pinging delays, cavernous reverb and voluptuous vocoding.

This versatility is fantastic for digital audio workstations. Ardour uses Jack for internal and external connections, for instance, and the Jamin mastering processor can only be used as part of a chain of Jack processes. It's the equivalent of having full control over how your studio is wired. Its implementation has been so successful on the Linux desktop that you can find Jack being put to similar use on OS X.
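That studio-style wiring can also be scripted with Jack's command-line tools: jack_lsp lists every registered port and jack_connect patches an output into an input. A sketch, assuming jackd is running; the port names are illustrative, not real client names:

```bash
#!/bin/sh
# Patch one Jack client's output port into another's input port.
if command -v jack_lsp >/dev/null 2>&1 && jack_lsp >/dev/null 2>&1; then
    jack_lsp || true    # show every port currently registered
    # e.g. route a player's output into a recorder's input
    jack_connect "player:output_1" "recorder:input_1" 2>/dev/null || true
    jack_state="jackd is running"
else
    jack_state="jackd is not running"
fi
echo "$jack_state"
```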


FFADO
OUTPUTS: Audio hardware

In the world of professional and semi-professional audio, many audio interfaces connect to their host machine using a FireWire port. This approach has many advantages. FireWire is fast and devices can be bus powered. Many laptop and desktop machines have FireWire ports without any further modification, and the standard is stable and mostly mature. You can also take FireWire devices on the road for remote recording with a laptop and plug them back into your desktop machine when you get back to the studio.

But unlike USB, where there's a standard for handling audio without additional drivers, FireWire audio interfaces need their own drivers. The complexities of the FireWire protocol mean these can't easily create an ALSA interface, so they need their own layer. Originally, this work fell to a project called FreeBOB. This took advantage of the fact that many FireWire audio devices were based on the same hardware.

FFADO is the successor to FreeBOB, and opens the driver platform to many other types of FireWire audio interface. Version 2 was released at the end of 2009, and includes support for many units from the likes of Alesis, Apogee, ART, CME, Echo, Edirol, Focusrite, M-Audio, Mackie, Phonic and Terratec. Which devices do and don't work is rather random, so you need to check before investing in one, but many of these manufacturers have helped driver development by providing devices for the developers to use and test.

Another neat feature in FFADO is that some of the DSP mixing features of the hardware have been integrated into the driver, complete with a graphical mixer for controlling the balance of the various inputs and outputs. This is different to the ALSA mixer, because it means audio streams can be controlled on the hardware with zero latency, which is exactly what you need if you're recording a live performance.

Unlike other audio layers, FFADO will only shuffle audio between Jack and your audio hardware. There's no back door to PulseAudio or GStreamer, unless you run those against Jack. This means you can't use FFADO as a general audio layer for music playback or movies unless you're prepared to mess around with installation and Jack. But it also means that the driver isn't overwhelmed by support for various different protocols, especially because most serious audio applications include Jack support by default. This makes it one of the best choices for a studio environment.


Xine
INPUTS: Phonon

We're starting to get into the niche geology of Linux audio. Xine is a little like the chalk downs; it's what's left after many other audio layers have been washed away. Most users will recognise the name from the very capable DVD movie and media player that most distributions still bundle, despite its age, and this is the key to Xine's longevity.

When Xine was created, the developers split it into a back-end library to handle the media, and a front-end application for user interaction. It's the library that's persisted, thanks to its ability to play numerous containers, including AVI, Matroska and Ogg, and dozens of the formats they contain, such as AAC, Flac, MP3, Vorbis and WMA. It does this by harnessing the powers of many other libraries. As a result, Xine can act as a catch-all framework for developers who want to offer the best range of file compatibility without worrying about the legality of proprietary codecs and patents.

Xine can talk to ALSA and PulseAudio for its output, and there are still many applications that can talk to Xine directly. The most popular are the Gxine front-end and Totem, but Xine is also the default back-end for KDE's Phonon, so you can find it locked to everything from Amarok to Kaffeine.


Phonon
INPUTS: Qt and KDE applications
OUTPUTS: GStreamer, Xine

Phonon was designed to make life easier for developers and users by removing some of the system's increasing complexity. It started life as another level of audio abstraction for KDE 4 applications, but it was considered such a good idea that Qt developers made it their own, pulling it directly into the Qt framework that KDE itself is based on.

This had great advantages for developers of cross-platform applications. It made it possible to write a music player on Linux with Qt and simply recompile it for OS X and Windows without worrying about how the music would be played back, the capabilities of the sound hardware being used, or how the destination operating system would handle audio. This was all done automatically by Qt and Phonon, passing the audio to the CoreAudio API on OS X, for example, or DirectSound on Windows. On the Linux platform (and unlike the original KDE version of Phonon), Qt's Phonon passes the audio to GStreamer, mostly for its transparent codec support.

Phonon support is being quietly dropped from the Qt framework. There have been many criticisms of the system, the most common being that it's too simplistic and offers nothing new, although it's likely that KDE will hold on to the framework for the duration of the KDE 4 lifecycle.

The rest of the bunch

There are many other audio technologies, including ESD, SDL and PortAudio. ESD is the Enlightened Sound Daemon, and for a long time it was the default sound server for the Gnome desktop. Eventually, Gnome was ported to use libcanberra (which itself talks to ALSA, GStreamer, OSS and PulseAudio) and ESD was dropped as a requirement in April 2009. Then there's aRts, the KDE equivalent of ESD, although it wasn't as widely supported and seemed to cause more problems than it solved. Most people have now moved to KDE 4, so it's no longer an issue.

SDL, on the other hand, is still thriving as the audio output component in the SDL library, which is used to create hundreds of cross-platform games. It supports plenty of features, and is both mature and stable.

PortAudio is another cross-platform audio library that adds SGI, Unix and BeOS to the mix of possible destinations. The most notable application to use PortAudio is the Audacity audio editor, which may explain its sometimes unpredictable sound output and the quality of its Jack support.

And then there's OSS, the Open Sound System. It hasn't been a core Linux audio technology since version 2.4 of the kernel, but there's just no shaking it. This is partly because so many older applications are dependent on it and, unlike ALSA, it works on systems other than Linux. There's even a FreeBSD version. It was a good system for 1992, but ALSA is nearly always recommended as a replacement.

OSS defined how audio would work on Linux, and in particular, the way audio devices are accessed through the ioctl tree, as with /dev/dsp, for example. ALSA features an OSS compatibility layer to enable older applications to stick to OSS without abandoning the current ALSA standard.

The OSS project has experimented with open source and proprietary development, and is still being actively developed as a commercial endeavour by 4 Front Technologies. Build 2002 of OSS 4.2 was released in November 2009.


Linux's worst enemies? Linux fans

Do you know why Unix failed to take off as a mainstream operating system? It wasn't because it was too hard to use. Mac OS X, the universally acclaimed 'easy' operating system, is built on top of BSD Unix. It's certainly not because Windows is better. It wasn't and it isn't. No, I put most of the blame for Unix's failure on its internal wars: Unix International vs. Open Software Foundation, BSD vs. System V, etc., etc. For the most part, Linux has avoided this... for the most part.

That's not to say that Linux doesn't have its share of internal battles that don't do anyone any good. Free software founder Richard M. Stallman's insistence that Linux should be called GNU/Linux puzzles far more people than it brings to Linux - or GNU/Linux, if you insist. In the last few days though, another Linux family fight has erupted.

This time around, it's open-source developer and anti-patent political lobbyist Florian Mueller accusing IBM of breaking its promise to the FOSS (free and open-source software) community not to use patents against it. Mueller is ticked off that IBM has threatened TurboHercules, an open-source z/OS emulator company, over its possible misuse of IBM patents, including two that are covered by IBM's pledge not to sue open-source companies or groups using those patents.

I have several problems with this. First, as Pamela Jones of Groklaw points out, TurboHercules started the legal fight with IBM and the open-source software license it uses isn't compatible with the GPL--the license that covers Linux. Second, this is really just a standard-issue business fight that involves patents. It does not, as Mueller would have it, show that "After years of pretending to be a friend of Free and Open Source Software (FOSS), IBM now shows its true colors. IBM breaks the number one taboo of the FOSS community and shamelessly uses its patents against a well-respected FOSS project, the Hercules mainframe emulator."

Come on! IBM has been one of Linux's biggest supporters for over a decade. Why does it support Linux and open source? Ah, would that be because it has invested billions in it and made even more billions from it? I think so. Does a minor legal clash with an obscure company mean IBM is now a traitor to the cause and, more importantly, that it's abandoning a wildly successful business plan? I can't see it.

To me, this is all of a piece with Debian's moronic fight with Mozilla, which ended up with Debian renaming Firefox to IceWeasel in its Linux distribution; Red Hat's 'betrayal' of Linux by abandoning the consumer side of Linux for RHEL (Red Hat Enterprise Linux); and the never-ending Debian vs. Ubuntu fights.

There's a love of ideological purity that burns in the hearts of too many open-source fans and makes them require companies and groups to pass litmus tests before they can approve of them. No matter, for example, that Novell has carried the burden of fighting off SCO's anti-Linux claims: Novell partnered with Microsoft, therefore Novell must be boycotted!

I'm sorry, people. We don't live in a black-and-white world, nor, for that matter, one filled with shades of gray. It's a world filled with a multitude of colors, and business and ethical choices aren't binary.

I'm not the only one who sees it this way. I recently talked to analysts and executives about the IBM/TurboHercules patent mess and they agreed with me that these fights only end up hurting open source and Linux.

You know, we've been here before. The one real winner when the Unix companies slugged it out was Microsoft. Why would anyone think that turning Linux into dueling fiefdoms arguing over who has betrayed open-source last is going to help anyone except Microsoft and other pure proprietary companies?

Finally, don't you think it more than a little interesting that the other 'open-source' companies, which had attacked IBM on similar grounds in Europe, counted Microsoft among their shareholders? Coincidence? I think not. Might I suggest that those attacking IBM take a long, hard look at what they're really doing and which side of the open-source debates they're really on.

Last, but not least, might I suggest that anyone who thinks that extremism for one side or another in the various open-source debates is a virtue contemplate this classic John Cleese video.


^It had too many pictures, so I didn't post the article here, for Digit bandwidth reasons :D and not to forget my novice level when it comes to formatting :)


The trouble with Linux: it's just not sexy
iPad painfully illustrates this massive divide

Graham Morrison

If Linux is to gain some market share this year, then it will need to pull off some magic above and beyond the competition

There are three reasons why Linux isn't succeeding on the desktop, and none of them are to do with missing functionality, using the command line or the politics of free software.

The first is that there's too much momentum behind Microsoft Windows and too many preconceptions about the alternatives. Linux is perceived as having too much of a learning curve for relatively few advantages and an unknown heritage.

Migrating big business to a Linux desktop is akin to turning a TI-class supertanker around mid-Atlantic. The opposite direction may look brighter, but it's easier to chug onwards into the storm. You only have to look at the number of people clinging to Microsoft's venerable Office suite to see this point clearly.

For the vast majority, most of its functional fecundity is wasted. Many people could arguably be just as (un)productive with Notepad, Calculator and Paint, let alone with an open-source alternative such as Office's use seems to have more to do with keeping face when attaching files to an email than with any genuine operational advantage.

Most people will only consider an alternative when there are bigger issues, larger icebergs or uncertain territories on the horizon. Away from the desktop, Linux is faring better.

Smaller, more agile businesses quickly quantify the cost advantages to produce cheaper and more competitive products. This is why embedded Linux has been such a success on everything from Chinese mobile phones to almost every NAS box around. This may mean that success on the desktop is only a matter of time, or it may mean that the Linux desktop is too far removed from the Linux kernel.

The second reason for failure is that Linux lacks centralised marketing. This is because there's no real Linux Central. It's just a trademark owned by its creator, Linus, and a term normally reserved for just the kernel of the operating system – hardly the easiest product to sell.

There are plenty of people advertising their own Linux endeavours, all keen to push their own angle on its advantages. This divided effort compounds the problem. With the likes of Red Hat, Novell and Canonical all fighting for their own slice of the pie, there's no one left to push Linux as a distinctive brand. That's something Apple and Microsoft do extremely well, and something Linux leaves to Tux the penguin.

Many would argue that standards are the answer to this conundrum, and that would mean a single base distribution. This could then be the only distribution called 'Linux' - everything else would become 'Linux based'.

Mozilla manages this well with the use of the Firefox brand. It's freely distributable and modifiable, but it can only be called 'Firefox' in its untouched incarnation. Change anything and you need to change the name.

For example, Debian calls its Firefox build 'IceWeasel' because it needs to reserve the right to make modifications, thus breaking Mozilla's standards. This may cause confusion if you look for Firefox on your Debian desktop, but it also sets a precedent for the kind of experience Mozilla expects its users to have, and Debian hackers still have the code to mess around with if they need to. It's a compromise, but it might work in a world with hundreds of Linux distros.

The third reason is easy to see but harder to solve. It's the reason why you're not using Linux now. The solution would make all other problems redundant. The reason why you're not using Linux now is because there isn't a good enough reason to.

Sober advantages such as better security, improved performance, rock solid stability and low cost aren't going to win converts. These advantages aren't exciting enough; they're the equivalent of a spreadsheet of mortgage repayments. What we really want is a significant upgrade, something you'd normally pay for.

Perhaps we should focus on value. Recent analysis of the kernel by Jon Corbet showed that 75 per cent of the 2.8 million lines of code in recent contributions were written by paid developers. That puts Linux freedom in context.

But the biggest challenge is sexiness. There's very little of it in Linux unless you're an antisocial geek, and products like Apple's iPad illustrate this massive divide painfully. As Jim Zemlin, Executive Director of the Linux Foundation, puts it, "Linux can compete with the iPad on price, but where's the magic?"

Linux has the programmers, the managers, the community, the innovation, the time and the skill. But to succeed in 2010 and the coming decade, what it really needs is a magician or two.


Do you agree? Is Linux sexy enough for mainstream use, or does it still need more work? Perhaps a side issue is whether Linux needs to be sexy at all. Please post your views below for inclusion in our next podcast - don't forget to add a name other than Anonymous Penguin, and don't forget to provide some sort of explanation as to how you came about your answer. Pedants who happily answer that Linux is just a kernel might want to question whether they are indeed the "antisocial geeks" that Graham describes.


I hate computers: confessions of a sysadmin
Scott Merrill


I often wonder if plumbers reach a point in their career, after cleaning clogged drain after clogged drain, that they begin to hate plumbing. They hate pipes. They hate plumber’s putty. They hate all the tricks they’ve learned over the years, and they hate the need to have to learn tricks. It’s plumbing, for goodness sake: pipes fitting together and substances flowing through them. How complicated can it be?

I hate computers. No, really, I hate them. I love the communications they facilitate, I love the conveniences they provide to my life, and I love the escapism they sometimes afford; but I actually hate the computers themselves. Computers are fragile, unintuitive things — a hodge-podge of brittle hardware and opaque, restrictive software. Why?

I provide computer support all day every day to “users”. I am not one of these snotty IT guys who looks with scorn and derision on people who don’t know what an IRQ is. I recognize that users don’t care about computers. The computer is a means to an end for them: a presentation to solicit more grant money, or a program to investigate a new computational method, or just simply sending a nice note to their family. They don’t want to “use the computer” so much as do something that the computer itself facilitates. I’m the same with cars: I don’t want to know how an internal combustion engine works or know how to change my oil or in any other way become an automotive expert — I just want to drive to the grocery store!
But the damned computers get in the way of all the things the computers help us do. There’s this whole artificial paradigm about administrator accounts, and security, and permissions, and all other manner of things that people don’t care about. A host of ancillary software is required just to keep your computer running, but that software introduces more complexity and more points of failure, and ends up causing as much grief as it’s intended to resolve.

Computer error messages are worthless

What sparked this current round of ire was a user’s inability to check for Windows Updates. Windows Update, the program, starts up just fine. But clicking on “Check for Updates” results in an unhelpful message that Windows Update could not check for updates. A meaningless error code is presented to the user, as if he’ll know what to do with that. There’s even a helpful link that says “Learn more about common Windows Update problems”. The list of suggested problems includes a variety of other meaningless error codes, but not the one that this user received. The Windows Event Log, which I know how to access but the user does not, contains nothing instructive. For a normal user, this would be a dead-end with one of two options: ignore the problem and hope nothing bad happens in consequence; or try to repair the operating system using some half-baked recovery method provided by the computer manufacturer or the Windows install disk (assuming they have one).

Another user I support has had nothing but trouble with Adobe Acrobat. Trying to open PDFs from within his browser fails spectacularly. Either the links simply never open, or they open a completely blank page, or Internet Explorer renders an error page suggesting that there’s a network problem! The user can right-click and “Save As” the links to get the PDFs, and I’m thankful that this user understands how to right-click at all, such that he has a viable workaround until I can find the root cause. But many, many users do not know what the right mouse button is for.


I pick on Microsoft a lot, because I think they do a lot of things fundamentally wrong. But plenty of other companies are just as guilty of bad design, bad implementation, and bad communication with their users. Google’s Chrome browser is cute when it says “Aw snap!”, but that errs the other way in its uselessness: it doesn’t give the user any better idea of what might be wrong, and users are left feeling helpless, powerless, and stupid.
Even when things go right, users are left to feel powerless and stupid. Installing almost any program on a Windows based system involves an inordinate number of clicks, all of them just saying “Okay” “Okay” “Okay”. No one reads the click-through EULAs, no one changes the default installation location, and no one selects specific installation options. They just keep clicking “Okay” because that’s what they’ve been trained to do. And then they end up with four extra toolbars in their browser and a bunch of “helper” programs that don’t actually help in any way and which the user doesn’t actually want. And they don’t know how to get rid of them.

Computers don’t make sense​

There’s an awful lot to be said about the simplicity and usefulness of installing software on Mac or Linux. In the former case, you simply drag a file to your Applications folder, and you’re done. In the latter, the package manager does all the heavy lifting without any user intervention: if a Linux program requires additional libraries, the package manager finds them and installs them automatically. In both instances, I can install new applications in a fraction of the time it takes to install something on Windows.
Removing software is another cause of much consternation for users. Again, Mac and Linux make it pretty easy most of the time. Heck, on any Linux system I can enumerate all of the installed packages in seconds with a single command from the package manager (or a click of the appropriate button in a GUI for the package manager). But on any Windows machine — even a brand new one with top-of-the-line hardware — it takes long minutes to enumerate and display the installed software; and to make things worse, the “Add or Remove Programs” control panel item doesn’t actually show you all the installed applications. And removing any particular piece of software is not always a clean operation: cruft is left behind in the filesystem and the registry (don’t even get me started on my loathing of the Windows registry!).
Speaking of filesystems, why is it that a SQL database can find a specific record among millions in a fraction of a second, but finding a specific file on your hard drive takes minutes? I’m sure there’s some very real reason why filesystems are so unfriendly to users, but I’ll be darned if I can explain it to any of my users.
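The database half of that comparison is no mystery: databases win because they maintain an index. A minimal sketch using Python's built-in sqlite3 module (the table and file names here are invented for illustration):

```python
# Why a database lookup is fast: an index. SQLite keeps a B-tree over the
# indexed column, so a lookup walks a few tree levels instead of scanning
# every row -- the very scan a naive "search my hard drive" performs.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE files (id INTEGER, name TEXT)")
conn.executemany(
    "INSERT INTO files VALUES (?, ?)",
    ((i, f"file_{i}.txt") for i in range(100_000)),
)
conn.execute("CREATE INDEX idx_name ON files(name)")

# With the index, this lookup is a B-tree search, not a 100,000-row scan.
row = conn.execute(
    "SELECT id FROM files WHERE name = ?", ("file_54321.txt",)
).fetchone()
print(row)  # (54321,)
```

Without the `CREATE INDEX` line the same SELECT would scan every row; with it, SQLite descends a tree a few nodes deep. Desktop search tools of the era, like Beagle and Tracker, applied the same idea by indexing filenames and contents up front.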

Computers are too complex to use​

Average folk might take a “computer class” which instructs them on a few specific tasks — usually application specific (How to use Microsoft Word), as opposed to task specific (How to use a word processing program) — but when experiences diverge from those presented in the class, the user is not well equipped to deal with the situation. How does one interpret this new error message? How does one deal with a recurring application fault?
The pace of change in the computer industry works against users. The whole color-coded ports initiative was a great step toward end user convenience, but that’s not enough when users now need to know the difference between VGA, DVI, and DisplayPort. A lot of the computers that are coming into my office have all three video ports, and the monitors support multiple inputs, leaving users to wonder which one(s) they should use when setting up their PC. I’ve had multiple calls from really smart graduate students who couldn’t figure out how to connect the computer to the monitor. Sure, it’s an easy joke to make fun of these situations, but it’s a damning indictment of the computer industry as a whole, if you ask me.
Like Nicholas, I’ve never had a malware infection on any computer I own; but I’ve helped lots of people — users I support professionally, and family and friends — recover from malware infections. Can you imagine your mother-in-law being able to find and follow these instructions for removing malware? Or worse, knowing about and responding to a botched antivirus update from your AV software?

Computers fail spectacularly, taking all our data with them​

Hardware and software companies know that we use our computers to store information that is important to us. And yet backing up data to keep it safe is still a gigantic pain in the ass. Lots of “enterprise” backup software exists to try to protect us from computer failures (hardware, software, and user errors), and a host of “consumer” solutions vie for our consumer dollars; but frankly they all suck. Why do we need third-party software to protect the investment we’ve made in our computers? Users don’t buy backup software because they don’t expect their computers to fail.
It’s so easy to amass a huge amount of data today — digital photo archives, MP3 collections, and video — that it’s a real pain to reliably back up. Not only is it a pain, it’s expensive. You shell out a couple hundred bucks for a fancy new camera, and you’ll need to shell out a couple hundred more for an external hard drive onto which you can duplicate all your photos for safekeeping. And then, of course, it takes a long time to actually copy your data from your computer to your external hard drive, and you just don’t have the time or patience to commit to that regularly, so you start to neglect it, and then *bam* your computer blows up — hard drive failure, malware infection, whatever — and you lose weeks or months’ worth of irreplaceable data.

Sure, some computers come with redundant disks, but most consumer-level RAID is a fragile mix of hardware and software, further complicating the setup. Why haven’t reliable, low-cost RAID solutions reached the mainstream yet? Why don’t end users have better access to useful things like snapshots, or ZFS yet?

And what about all the little failures that end users can’t possibly begin to detect or diagnose, like bulged capacitors on their mainboard, or a faulty video card, or wonky RAM?

Computers are overwhelming


The mind-numbing number of computers available for purchase at any retail establishment right now is enough to cow even the most stalwart bargain shopper. How is a layperson to proceed in the face of row after row of meaningless statistics? Will that extra 0.2 GHz make a demonstrable difference in their use of the computer? Will it give them an extra six months, or even a year, of useful life? Why should a normal user even care about the number of bits in their operating system?
The Laptop Hunters ads tried to help people find the right laptop, but Sheila’s $2,000 HP isn’t necessarily the best pick of the available options, is it? Sure, AMD is simplifying its branding. But is that enough to really help people find the best product for their needs? Will a branding refresh make any difference at all when there are still five or ten seemingly identical systems on the shelf at the big-box retail computer store?

I hate computers.

I know my little rant here is like shouting at the storm: there’s a huge, lethargic industry making gobs of cash on the complexity of the computer era, and there’s little capitalistic incentive to change the status quo. These complaints aren’t new. Many of them have been made for the past quarter century. We try, in our little way, to highlight some of the deficiencies we perceive in the industry as a whole, but that’s about all we can do from here. What are you doing about these problems?
Maybe I’ll become a plumber…


So, how many of us hate computers? :D

PS: This is not OSS-related article but what the heck!!!!!!!!!!


Ubuntu, the family album

A few days before the release of the new Ubuntu, here’s a guided tour through the Ubuntu family album with some annotations telling my story with the different versions.
Ubuntu Warty WARTHOG (4.10)

The first version of the Ubuntu distribution. For the history: it was during a discussion between Robert Collins and Mark Shuttleworth that Mark came up with the name, which would be adopted and adapted for every future version of Ubuntu. Thus each version has a code name composed of an adjective and an animal name.
Ubuntu Hoary HEDGEHOG (5.04)

Ubuntu Breezy BADGER (5.10)

March 2006 – The first time I heard about Ubuntu: one of my students brings me a bag with two original CDs, an installation CD and a live CD, and tells me he got them free; they are sent to your home by mail at no charge. I take the live CD and go home, but I cannot find an equivalent of my very useful “Drakecenter” to configure my PC's settings. I give up and return to my sweet Mandriva.
Ubuntu Dapper DRAKE (6.06)

Many firsts for this version. The first version with long-term support. The first version to lag behind the initial release schedule (a release every six months). The first version to take the name of a bird :p All the following versions would follow in alphabetical order.
November 2006 – The first time I install Ubuntu: I am in training in France; I was given a PC and allowed to install my own OS. I take my Mandriva DVD and begin the installation, but cannot complete it … the system does not detect the PC's SATA disk. A colleague hands me a CD labelled “Ubuntu 6.06″ and asks me to test it. I install the system and presto, SATA disk detected, and I find myself with my new GNOME desktop, all in brown with orange icons. My colleague tells me that tomorrow he will bring me the CD of the new version, released just a few days ago, which he says brings a huge number of improvements.
Ubuntu Edgy EFT (6.10)

November 2006 (continued): My colleague brought me the CD, and presto, a new installation within 24 hours. During the installation my colleague introduced me to the spirit of this new distribution, its history, its advantages over others, etc. The installation complete, the hard work starts: how to configure my machine and install my applications without my famous “Drakecenter”? With an internet connection available, my first (good) reaction was to ask my best friend at that time (I mean Google) for help, and naturally I found the website of the Francophone community. There, a very big surprise… the documentation is freely available and forum registration is free (I say it was a surprise because at that time subscribing to the official Mandriva forum was not free). Long live the free spirit! I was fascinated by this community and joined it directly. Since that time I have not left my Ubuntu.
Ubuntu Feisty FAWN (7.04)

April 2007 – Community: I’m getting my bearings in this new distribution after years spent with Mandrake and Mandriva, and I no longer miss my “Drakecenter” (yes, I repeat :p). Thanks to the forum and the documentation of the Francophone community, all my questions (or almost all) are answered. I discovered the concept of community, and of course my first question was: is there a Tunisian community? Yes, there is one; it is two months old, but with no apparent activity (for now).
Ubuntu Gutsy GIBBON (7.10)

October 2007 – The community (continued): More and more e-mails circulate on our community's mailing list, and IRC meetings are held from time to time. A few days after the release of Ubuntu 7.10, the ubuntu-tn community participates in the biggest FOSS event in Tunisia, “Software Freedom Day – SFD”. Our community enters the landscape of FOSS communities in Tunisia and becomes one of its main actors.
Ubuntu Hardy HERON (8.04)

The second version with long-term support; like the first LTS version, its code name uses the name of a bird. For me it is the version with the most beautiful default wallpaper, but not only that.
April 2008 – A goal, approval of the community: our community grows more and more active, and we set a common goal: to become an approved local community. The goal was reached on July 22, 2008.
Ubuntu Intrepid IBEX (8.10)

Ubuntu Jaunty JACKALOPE (9.04)

The jackalope, an imaginary animal; they had to come up with that one!

Ubuntu Karmic KOALA (9.10)

Ubuntu in the clouds.
and the story will continue with:
Ubuntu Lucid LYNX (10.04)



The Hobbyists OS

Submitted by ThistleWeb on Tue, 04/27/2010 - 23:51

Microsoft's army of apologists likes to spread the word that Linux is a "hobbyists' OS", so this post is a look at what that means and why it's a label better suited to Windows. The attack is meant to draw attention to the fact that anyone can write code which appears in Linux, implying the quality of the code is dubious. Basically, it can't be good quality if people outside the corporation write it.

They try to paint the picture that while Windows just works, Linux needs a lot of tinkering to get anything done. It's pitched at both home users and businesses. For the home user it's "you have to learn all this stuff, and spend hours fixing it", while businesses get "you have your staff PCs down for X hours so they can't be productive, while also spending extra wages on skilled IT people to fix and configure things". The implication is that Windows is a better investment in man hours, productivity and cost; Linux costs you money.

How many man hours do you have to spend on Windows doing virus scans, spyware scans and the like? How many man hours do you have to spend Googling for how to remove an infection because your chosen protection tools can detect it but can't remove it? How can you assure your customers that their data is not compromised by some spyware your tools can't detect? How can you be assured that the site supposedly offering a solution to a particular virus is not itself a phishing scam waiting to sell you some software once you put your credit card details in, or a script-laden site ready to dump a whole new payload of malware on your plate?

The best malware (from the malware writer's point of view; the worst from yours) are the ones that sit undetected for a long time, giving their controllers as much information as possible before they're detected and removed. Just like any spy, once its cover is blown it's no longer of use. Your protection programs can only detect what they know about, and someone has to be the first to get hit. You only hear about the big ones, while the little ones are written NOT to draw any attention to themselves. This means you'll never know whether your network is clean, only what your software reports. Who knows how many spyware programs are in the wild and have not yet been detected by the companies selling the protection software, because they've been well written to avoid detection? You'll never know who else has access to your private data, which includes Microsoft themselves.

How much overtime have you had to pay out for your IT department to try to hunt down yet another malware infection running wild on your network? How much do you have to pay for the same problems to be fixed over and over and over again? What do your employees do when their PCs are down? They still get paid to twiddle their thumbs, right? Do they get paid overtime for staying behind to catch up? If not, how does that reflect on their feelings towards you as an employer? They didn't choose Windows. You did.

If your hobby is detecting and removing malware, then Windows is a great hobbyists OS. Linux and OSX don't get a look in on that score.

While Linux is behind the curve in a few very specific areas, as far as malware downtime is concerned, it's all but immune. While the Windows user in the next seat can't get anything done because it's being scanned, or trying to clean some infection, or just has to reboot to finish installing a patch, the Linux user in the next seat continues working without a hitch.

To me, it seems pretty obvious which OS is built to be used for its intended purpose, and which one you have to spend a lot of man hours just keeping functional. Remember, a PC is a tool which enables you to do other things; the more time you have to spend keeping it clean, the less you can devote to its intended function. It's not just man hours, it's also the products which always claim to solve a problem, for a price. How much is your time worth? Bosses have an instinctive understanding of this concept.

So when someone tries to spin you the "Linux is just for hobbyists" line, ask how much they factor malware into the comparison. It adds significantly to the other Microsoft-written line often used by Microsoft apologists: the TCO (Total Cost of Ownership).

Anything with a licence cost starts higher on that list, yet the constant fight against malware is NEVER mentioned as a cost, though it costs in very real terms. Training is a handy one: when Microsoft change their applications, then all but force people to upgrade by ending support for older versions, they force people to retrain anyway; so retraining on Linux is much the same. When you make an OS with this many exploits built in, the idea of making money from support services is a joke.

There's a whole industry built around fixing the holes Microsoft can't seem to fix; think about it. Does this sound like the work of a bunch of professional programmers? The fact that Windows admins have to lock down the systems they're in charge of, cutting off most of a PC's useful functions just to save themselves some work cleaning infections, speaks volumes. USB ports are a great thing for data portability; are they blocked on your network in case someone puts an infected USB thumbdrive in a PC and brings the network down, because Windows wants to blindly auto-run everything?

Regular home users generally don't have that level of knowledge. They have little choice but to brave the wild west, and expect to buy a new PC every couple of years because their current one is so badly infected with malware that it's barely able to boot up. Which of course means more money for Microsoft; which need never have been spent. To make matters even worse, Microsoft have a whole lobbying / bullying machine dedicated to ensuring customers have no choice but to buy a PC with Windows already installed.

For businesses this means you have to splash out on new PCs every few years under the assumption that you need the latest specification hardware. The difference in performance between the old and new is startling of course, because the old one is clogged down with malware, the new one is not; yet. Can you afford to throw away a large chunk of your capital when you don't need to? What benefit do you get from the newer versions? Are there some features not available in the older versions that your business needs? I doubt it. Why should you be forced to upgrade when it only suits Microsoft?

In short, if you want to spend time and money always fixing your PC, then Windows is the way to go; the rest of us can use Linux and actually use our PCs. Remember that Linux is free, both in terms of what you can do with it and (in the context of this post) cost. Installing Linux on one PC costs the same as installing it on 1,000,000 PCs. Windows has volume licensing, but it still costs money per PC, and this adds up very quickly. Add the same deal for Microsoft Office and you're wasting a fortune before you even get onto the training, support and malware issues.


Well yes, I've seen people who think the computer and Windows are one and the same... their paradigm of looking at computers is Windows, which I think needs to change fast.


Why GNU+Linux is > GNU/Linux and > just Linux

Let's get the obvious out of the way: we advocate for computer software users' freedom. The Free Software Foundation and the GNU Project are responsible for most of the software we use every day. At InaTux Computers, we of course use the GNU Compiler Collection (GCC) to compile custom Linux kernels (when customers want them), as well as other custom software modifications customers may want (such as hardware optimization). Our compiling is done in GNOME Terminal running GNU BASH, and most of the software we compile requires the GNU C Library, GTK+, gtkmm, etc.

GNU is most of the software we use; without it we wouldn't have a working operating system (unless you define an operating system as just the kernel, but a kernel alone is useless), or we would be using a proprietary operating system, as that would sadly be the norm. We would also not be the business we are if not for Free Software. So, naturally, we don't think it is justified to call the operating system "Linux" (unless the operating system is not using GNU, in which case it might be).

We give credit where credit is due, Linux and GNU (ha, sort of rhymes). So just "Linux" is not acceptable, and I will now explain why we [now] choose to use GNU+Linux instead of GNU/Linux.

Let's look at it mathematically: "GNU / Linux" means GNU divided by Linux. How does that make any sense? At best it implies Linux is dividing GNU, making it harder to work and communicate with contributors, which is simply not true. Linux was what united the GNU operating system and gave us the functioning system we have today. So GNU divided by Linux makes no sense.

"GNU + Linux", on the other hand, makes perfect mathematical sense: the GNU operating system plus the Linux kernel. Strictly, it should be something more like "GNU - Hurd/Mach + Linux = GNU+Linux", but it makes sense.

Let's look at it linguistically. Saying "GNU slash Linux" isn't the norm; typically, when a term uses a slash, the slash is not pronounced. Take "and/or": you would never say "and slash or", because it means one thing or the other at your choice; bacon and/or eggs.

So to read "GNU/Linux" the same way is to imply a choice, GNU or Linux, when choosing only one either way results in a nonfunctional operating system. Just GNU results in nonfunctional system tools and an incomplete micro-kernel, and just Linux results in a fully functional monolithic kernel without any system tools, not even a shell to perform simple tasks.

With "GNU + Linux" you would say "GNU plus Linux". It's a little shorter and doesn't sound as strange, as many things are named "something plus something". It's still not much better in terms of sounding "hip" or "cool", or even simple, but we see that as a small price to pay to give credit to those who made it possible to boot and run this powerful Free Software operating system: Richard Stallman's GNU Project and Linus Torvalds.

Now let's look at it in terms of visibility. "GNU/Linux" looks like one word (as if it were GNU|Linux or GNULinux); it doesn't separate the two, whereas "GNU+Linux" has a nice bit of space between GNU and Linux. Let's compare the two.

GNU/Linux GNU/Linux GNU/Linux GNU/Linux

With "GNU/Linux" there is only a small bit of space between the two names, so they will read as one word to those unfamiliar with the "GNU/Linux" terminology, especially at smaller font sizes.

GNU+Linux GNU+Linux GNU+Linux GNU+Linux

With "GNU+Linux" there is a noticeably larger space between the two, even at smaller font sizes. For a website it is important that readers skimming quickly see the two separated: if a reader is only familiar with "Linux", they will easily see that Linux is involved, instead of skimming past the "GNU/Linux" blob and not recognizing the "Linux" part of the word. Separating them better gives credit to both GNU and Linux, because both are better recognized within sentences.

That is why we now use the term GNU+Linux instead of GNU/Linux, and we ask those who already use GNU/Linux to consider doing the same. Feel free to discuss this in the comments.
