Wednesday, July 31, 2013

Transparency is a feature, not a bug

There is an escalating struggle between governments around the world and the people they serve. It is a struggle over the right to privacy online. It's important to note from the get-go that there is no such thing as true privacy online. Everything we do in public online persists for as long as the hosting resource exists. Once knowledge is made public, it propagates quickly, as far as people are willing to share it.

Governments have been happy to collect this information, you know, in case one of the people they serve turns out to be a terrorist or a political opponent. But governments have been neither happy nor enthusiastic about sharing with us how the information they collect is used and interpreted. Our executive branch has made wild interpretations of the laws it is supposed to carry out, interpretations that run far afield of the intent of Congress.

To counter this intrusion on privacy, businesses and their customers are becoming familiar with encryption. It is not what we want to do, but if we want privacy, encryption is the way to go. But encryption is not enough.

What code are you running on your computer? Do you know what it does? If you're not a programmer, that's OK. The programmers know what it does. Some programmers will tell you, and others will not. I'm going to make a very clear distinction here to help you understand this better. Most people use Windows. It has been widely reported that Microsoft has built back doors into Windows so that governments can access a computer remotely. This is not so well known for Apple products, but it wouldn't surprise me if they did that, too.

They can do this because the products they sell are closed source products. If you want privacy, you might consider using open source products. This is a very important distinction, but it may require some explaining so that you can understand it.

All software is written by humans. Software is complicated and cumbersome to write, so humans have devised tools to make the job easier. The humans who created computers know that computers understand machine language, you know, 1s and 0s. Humans can read machine language, but only with great difficulty. Still, humans started out programming computers with 1s and 0s.

So humans came up with assembly language. Less complicated and easier to read than 1s and 0s, but still very difficult to use. Assembly is specific to one processor, too. To make things easier for themselves and, it turns out, for others, programmers created programming languages that can be compiled into the machine code that machines understand. The most famous example is the work of a group of programmers at AT&T, starting in 1969, to create UNIX.

All programs are written in human-readable code. That human-readable code is not understood by machines, so we use another program called a compiler to convert it into machine code. When humans write code, they include comments to remind them of what the code does. Good programmers document their code so that others can support and maintain it, which makes the code easy to share. When the code is compiled, the compiler strips out the comments so that only executable code remains.

Once the code is compiled into machine code, humans cannot read it. It is possible to decompile the code, but the comments cannot be recovered because they were stripped out during compilation.
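
You can see the stripping of comments for yourself. Here's a small sketch, assuming gcc is installed: write a tiny C program with a conspicuous comment, compile it, and search the resulting binary for the comment text.

```shell
# Write a tiny C program containing a conspicuous comment.
cat > /tmp/hello.c <<'EOF'
/* SECRET-COMMENT: nothing in this file survives compilation except the code */
#include <stdio.h>
int main(void) { printf("hello\n"); return 0; }
EOF

# Compile it to machine code.
gcc -o /tmp/hello /tmp/hello.c

# The comment text appears in the source but not in the binary.
grep -c 'SECRET-COMMENT' /tmp/hello.c
grep -q 'SECRET-COMMENT' /tmp/hello || echo 'comment stripped'
```

The first grep finds the comment in the source; the second finds nothing in the compiled binary, which is exactly why decompiled code is so hard to read.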

So here is the difference that I'm driving at: Windows is closed source. You don't get access to the source code. Even if you could decompile the machine code, you wouldn't get the comments back, making the result very hard to read, and you would violate the terms of the license for Windows. Yes, the license prohibits you from decompiling Windows into source code.

With Linux and open source software, you get access to the source code. Most of us are not programmers, so we can't read the code. But someone out there can. And they are checking it to make sure there are no back doors. Even if someone tried to sneak a back door in, it's hard to get it past the group of humans who maintain free and open source software.

If you really want privacy, you're not going to get it with an operating system built by Apple or Microsoft. Android is open source. Android runs on Linux and Linux is open source, too. Anyone can look at the code to see what it's doing. Granted, Google and others (such as Samsung, Motorola, LG, etc.) who use Android in phones add a lot of stuff that we can't see, but it's a lot better than Windows or Mac in terms of privacy.

If you wanted to, you could use something like CyanogenMod, a free and open source version of Android, on your phone. Then you'd have a much better idea of how much of your data is private.

At home, if you wanted more privacy on your computer, you could use Linux. I happen to like Ubuntu GNOME and suggest it to anyone who wants to try it. There are many flavors of Linux available, called "distributions" or "distros", so you can always find a version that suits you best.

Notice that I used the phrase "more privacy" in the previous paragraph. That is because many of us use Gmail, Facebook, Twitter and so on. For complete privacy, don't use those services.

But we need community and we need to share information to be a part of a community. For now, take note that governments around the world are nervous. They are reading what we're writing and sharing. They don't want *us* banding together to form a new government that makes more sense, is nicer to the people it serves, and gets things done. They have been so busy serving the 1% that they forgot about the rest of us.

In open source software, transparency is a feature, not a bug. Governments would do well to follow the example of open source software. While open source software projects never forget that they are there to serve others, using open source software for the freedom it provides is a way to remind governments that they serve others, too.

Tuesday, July 30, 2013

Howto: Recovery from a Windows virus

Over the weekend, I was called into service to help a relative. Her machine had been infected with a virus and she didn't have the skills to clear it up, so she called me. I'm happy to help, and quite capable of it. I've been imaging computers for more than 14 years.

I have no experience with virus removal, and I don't think I need it. Here's why: viruses are unpredictable. Once a machine is infected, there is no way of knowing for certain that the virus is gone. I know this from years of reading about how virus writers cover their tracks, find unique hiding places to evade detection, and prepare to reinstall themselves on the next boot.

There is a story about Steve Ballmer and viruses, and no, this isn't about his love for Linux. His neighbor came to him with a computer one day. The computer was very slow and the neighbor asked Ballmer if he could repair the computer. Ballmer took the computer in and spent a couple of days trying to clean it up. Then he surrendered the computer to his IT staff and asked them to clean it up. But there was no possible way to clean it up as there were hundreds of viruses on the system.

So I began in earnest with a plan to help restore my relative's computer:

  • Back up the data.
  • Wipe the computer.
  • Install Windows.
  • Run updates until there are no more.
  • Install Office.
  • Run updates until there are no more.
  • Install antivirus.
  • Run updates until there are no more.
  • Restore personal data.
  • Image the disk so that we don't have to do this again.

There is a general rule of thumb I follow with any suspect machine: when backing up personal data, NEVER use the host operating system to do the backup. The reason is simple: viruses can infect USB drives. To back up the files, I always use a Linux live CD such as Ubuntu GNOME or Knoppix. This way, I can safely copy the files to another disk.
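
From the live CD, the copy itself is plain shell. Here's a minimal sketch of the idea; the paths are stand-ins I've made up for illustration (on a real rescue, the source would be the Windows disk mounted read-only and the destination would be the external drive):

```shell
# Stand-in paths for this sketch. On a real rescue these would be
# something like /mnt/windows (the suspect disk, mounted read-only)
# and /mnt/backup (the external drive).
SRC='/tmp/windows-disk/Documents and Settings'
DST='/tmp/external-drive/rescued-files'

# Fake a profile so the sketch is runnable end to end.
mkdir -p "$SRC/Her Profile/My Documents"
echo 'important letter' > "$SRC/Her Profile/My Documents/letter.txt"
mkdir -p "$DST"

# -a preserves timestamps and permissions; -v lists each file copied.
# Because Linux is doing the copy, nothing on the suspect disk executes.
cp -av "$SRC/." "$DST/"
```

Mounting the infected disk read-only is the key point: the virus never gets a chance to run or to touch the backup drive.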

Once the backup of personal data is done, I'm ready for the next step: wiping the disk. Viruses can hide almost anywhere on a disk, but they can't live on it without structure. That structure is the partition table and the file system. Viruses have been found in the partition table, in the file system tables and in the boot sector of a hard drive. The partition table tells the system where the partitions begin and end; the file system tells the operating system where to find files. Installing Windows again with the partition table, boot sector and file system intact is a big NO-NO. Wipe them out and the virus has no place left to hide.

Wiping the hard disk is not that hard to do. Boot the computer with Knoppix to get started. I use Knoppix because, unlike Ubuntu, Knoppix gives me ready access to the root account, so I can run the shred command from there. Once Knoppix is loaded, I open a command line and type the following commands:

su -                      # become root
shred -n3 -z -v /dev/sda  # wipe the hard drive

The su - command takes me right to root without a password. The shred command overwrites the drive with random data from beginning to end, three times (-n3), then finishes with a pass of zeros (-z). For re-installs, I only do one partial pass, but the command above is what I'd use to wipe a hard drive clean before letting anyone else have it. To deal with the virus, I let the shred process run over the first couple of gigabytes for good measure; then I'm done and can move on to the next step, installing Windows XP.
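
That partial pass can be expressed with shred's --size option. Here's a sketch against a scratch file standing in for the drive; on the real machine the target would be /dev/sda, and the size would be a couple of gigabytes rather than a couple of megabytes:

```shell
# Create a 10 MiB scratch file to stand in for a hard drive.
dd if=/dev/zero of=/tmp/fake-disk bs=1M count=10 status=none

# One random pass (-n1) over just the first 2 MiB (-s2M). On a real
# disk this is enough to destroy the partition table, boot sector and
# file system tables, which all live at the front of the drive.
# The real-hardware equivalent would be: shred -n1 -s2G /dev/sda
shred -n1 -s2M /tmp/fake-disk
```

Everything the installer needs is rebuilt from scratch anyway, so destroying the first stretch of the disk costs nothing and removes the hiding places.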

Some of you may already be familiar with this part. I've done this many times, more than I care to recount. But it needs to be done here. So I boot to a Windows XP installation CD. I learned the hard way that it's important to disconnect all external media from the machine before booting the installer. By mistake, I left my external drive connected, and Windows proceeded to install on drive E. No, it didn't erase my external drive; I had disconnected it before the boot process was complete. But the installation program had already assigned the letter E to the boot disk in the computer. I didn't realize this until it was too late and had to start over again.

The next time I started the install, I verified that the boot drive was assigned letter C and allowed it to proceed. About 40 minutes later, Windows XP was installed.

This was a Dell machine and the owner had saved all of the disks, so I could easily install all of the drivers. I started with chipset, video, audio and saved the network for last. Once the network drivers were installed, I began testing the network. 

For some reason, Comcast does something funny with DNS and it just doesn't work. So I use Google DNS ( and to ensure that the network adapter can see the internet.

Now I'm ready to run updates. But they don't work and I'm not sure why. I recall that Windows XP Service Pack 2 is no longer supported. I download and install Service Pack 3. Run updates again and they still don't work. I do some research from my phone to discover a tool offered by Microsoft to solve this problem. But when I navigate to the page hosting that tool with Internet Explorer 6, I am invited to install Internet Explorer 8. I download that and install it.

Now Windows Update works. I run updates for about an hour. There were 121 updates to download and install. I reboot. I run Windows Update again, and again with a few reboots, until no more updates are available.

Then I install Microsoft Office. Again, I run Windows Updates until there are no more available. I have to reboot here, too.

At this point, I'd like to point out a contrast between Windows and Linux. With Windows, I spent more than two hours updating an installation from CD. With Linux, this would not happen: one round of updates after an installation from CD brings everything current. With Windows 7, the update times are shorter, but they will grow longer over the years. With Linux, there is no need to install years of accumulated updates to the operating system; the developers keep the install media up to date.

Now that Windows and Office are installed, it's time for the antivirus. My relative has paid for Norton Antivirus, so I'm going to install that. The install is uneventful. Everything works fine. I see that Norton AV has changed a lot. They've gone black for that bad-ass look. I run updates until there are no more available.

Now I use Norton to scan the files I recovered from the computer before re-installing Windows. I plug in the external hard drive and run a scan. It's clean so I can restore the files to her user profile.

Now it's time for the image. I don't want to lose all of my hard work, so I make an image of the computer onto an external hard disk, the same one I used to store the data I recovered earlier. I like to use Clonezilla. It's very reliable and it's open source software. In less than 20 minutes, the imaging process is complete and I have a bit-for-bit image of the computer.
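
Clonezilla is menu-driven, so there's nothing to script, but the idea behind it can be sketched with dd. Here a small file stands in for the disk; on real hardware the input would be /dev/sda and the output would be a file on the external drive:

```shell
# A 4 MiB file standing in for /dev/sda.
dd if=/dev/urandom of=/tmp/disk bs=1M count=4 status=none

# A raw, bit-for-bit copy -- what an image boils down to. Clonezilla is
# smarter than this: it skips unused blocks and compresses the result,
# which is how it finishes a whole disk in under 20 minutes.
dd if=/tmp/disk of=/tmp/disk.img bs=1M status=none

# Verify the image matches the "disk" bit for bit.
cmp /tmp/disk /tmp/disk.img && echo 'image verified'
```

The verification step matters: an image you can't trust is no better than no image at all.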

If there is another virus, I can use this image to restore all that work from before with much less effort since it's been captured in the image.

This time, though, the odds of another infection have been slimmed down. Previously, my relative had been running her computer as a local administrator. This means that she had full run of the computer. Administrator rights mean you can install software or hardware, or even damage your computer, with no protest from the operating system.

If you're running as admin on your computer and you come into contact with a virus, you won't know that your computer has been infected. Most viruses install silently, without notice. The virus has all the rights you have. Even if you have antivirus installed, the antivirus software won't stop you from clicking "I agree" to install a virus on the computer.

For this install, my relative will run her computer as a normal user and use the admin account only to install software or hardware. 95% of the threats to Windows XP have no power unless you're an admin. That's a huge decrease in risk.

So that's how I like to install Windows after a virus infection. I use the nuclear option against the virus and re-install Windows. If you like this article, please share it with your friends and leave a comment below.

Monday, July 29, 2013

The NSA's contempt for Freedom of Expression

It is remarkable that the man I voted for to be president is so persistent and determined in his pursuit of whistleblowers, the people who uncover corruption in government and take it to the press. To see Obama now, with the help of Congress, pursuing Edward Snowden with so much energy and determination tells us that we don't really know the man we elected in 2008. It seems that his promise to protect whistleblowers doesn't mean much now.

There was much that I liked about Obama when I voted for him. Compared to John McCain, who appeared ready and willing to continue fighting two wars, Obama was the better man. But I had no clue that Obama would turn against the people by defending the NSA, its hoovering of personal information at every opportunity, and its secret interpretations of law. I am reminded of a song by the Who, "Won't Get Fooled Again".

Perhaps Obama was indeed idealistic during his run for president. Perhaps his hands are tied and he must do as he is told, unable to debate the other side of the story - the People's side. The People are not criminals. Most of us have committed no crime, have no plans to do so, and happen to enjoy living here in the United States.

This unfettered contempt for whistleblowers is also contempt for freedom of expression, a right protected by the First Amendment to our Constitution. This freedom is what allowed Obama to be elected. This freedom allows us to have elections in the first place. It is the first freedom that is guaranteed by our Constitution.

Yet, Obama's NSA insists on collecting call data on all calls to and from all Americans without a warrant, a violation of the Fourth Amendment to the Constitution. Who are these people, anyway? What do they hope to gain?

Obama's NSA does not represent the will of the majority of Americans. Chris Christie can moan and complain all he wants about the view he had of the twin towers burning during the 9/11 disaster. But nothing, nothing at all, can justify the violation of our rights for a little (more) security.

No, Obama's NSA doesn't represent *us*. It is a representation of the fear and loathing that exists in Lesterland. Lesterland is a fictional version of our country offered as an analogy by Larry Lessig, a Harvard law professor and prominent figure in the free culture movement. Lesterland is a place where the people who get elected to high office are selected not by the 1%, but by the 0.05% of the People, who are wealthy enough to fund 60% of the costs of running for high office across our country. These are the people who are calling the shots, choosing the people who get to run for office, and egging on the NSA to collect anything and everything they think they can store.

These are the people who want the NSA to make secret interpretations of the law so that they can justify their intelligence gathering. Sure, they may have a laudable intention. Sure, they want to help us and to prevent another terrorist attack. I get that.

But there is no telling what that information can be used for. For one, I don't trust the NSA to keep that information secure. For another, insiders who can buy access to the information collected by the NSA can use it against their adversaries. The 0.05% have enough money and resources to gain access to that information even if such access were illegal. And if they were caught, they have the money to pay for legal representation that would get them cleared without public humiliation, much less a trial or detention.

Even if it might be a good idea to collect information as the NSA does now, it is an enormous and dangerous concentration of power. The members of Congress who seek to limit the power of the NSA may already have this in mind. I say go for it if you can do it. But the cards are now stacked against you.

Edward Snowden is no enemy of the state as far as the People are concerned. He has informed us of the contempt that Obama's NSA has for the law. Those who are in power now have been embarrassed by Snowden and they seek retribution for his act of exposing the contempt that the powerful have for the rest of us. That is the real reason for pursuing Snowden to the ends of the Earth.

Saturday, July 27, 2013

Rediscovering my music collection, the digital way

I really enjoy listening to music. I have a fairly decent collection of about 7700 tracks in more than 600 compilations. I know these compilations as "albums". You know, those collections of songs we used to buy on vinyl?

When vinyl gave way to CDs, I let go of vinyl and carried on my collection with CDs. CDs were the perfect format for music for me. Long lasting, consistent playback in a physical format that I can hold and look at.

I like to listen to albums on CD. Albums were a collection of songs performed and compiled to give the listener a complete sense of an artist's state of mind. Sgt. Pepper's Lonely Hearts Club Band is by far the best example of what I mean by this. Each song can stand on its own, as John Lennon fiercely maintained when asked, insisting it was not a "concept" album. Yet the songs all seem to share a common theme. I find this in many CDs that I have listened to over the years, and so I prefer to listen to the entire compilation to gain the sense of depth an artist is expressing with his music.

In the past, my tendency was to just play my favorites. Some would naturally be played more often than others. I was playing favorites with my music collection, and rightly so. But I discovered that I was beginning to tire of this. So I turned to, and to Groove Salad and Lush on SomaFM. These music services gave me a sense of the random, unpredictable selections within a genre that I had come to enjoy on the radio, years earlier.

There is something about listening to a music service like Pandora or SomaFM that I really like. I like hearing a good set of tracks selected by someone else, like in an album. I like the random and unpredictable selection because I don't have to think about what to queue up next.

After a while, though, much as I like streaming music services, I want to listen to my own music collection again. My listening experience at home tends to be very consistent: some sort of rock music during workouts, and quiet or lounge electronic music at all other times, in consideration of my family. Quiet, non-intrusive music is particularly necessary in the early morning hours. I'm a morning person, I admit it. I just naturally rise at 4 or 5 in the morning, without an alarm clock. No coffee needed, only warm water with lemon juice and a little honey. During that time, letting other people sleep is important so that I can write.

I've built a nice quiet channel on Pandora for that quiet music I like to play while writing. I use Pandora because I can play quiet music without thinking about it. My only problem with this service is that from time to time, Pandora will play a forgotten hit from Motown, without any prompting. I have no idea why this happens and continues to happen when I've been rejecting every one of those songs on this channel.

To send a clear message to their servers at Pandora, I reject the Motown songs I don't want on this channel and then close the browser tab so that they know I'm displeased. I sometimes wonder if maybe they're just throwing an irritating track in there to check and see if I'm still listening. Could be.

Even though I like the streaming services, I still miss my own music. I have songs in my collection that I haven't heard in years and I want to know my music collection again. At home, I tend to play what I remember most, what I think is palatable to my wife and what is safe for my baby's sensitive ears. I can't just go nuts with the Devo or Del Amitri. Although I will play a good set of Rush tunes with the room cleared.

So I came up with a solution enabled by Google Play Music. Google Play Music allows me to stream my entire collection of music from any Android device, or a computer with a browser, assuming that an internet connection exists. To use Google Play Music, I install an application from Google that scans my music collection and then uploads the collection to their servers. Yes, they do support Linux, and they do so very nicely - that is yet another reason I like Google. Here's the most interesting part: the limit is 20,000 tracks. Never mind that some of my tracks are more than an hour long. Some of my tracks are in the FLAC format, too. Bigger, but better audio quality. The per track limit is 300 megabytes.

Once I uploaded my music to Google Play Music, I realized that with all my albums in the cloud, I could do something that I hadn't ever done before. I could play my albums from A-Z. This is better in my mind than playing every artist in sequence, A-Z. Playing alphabetically by album title rather than band title gives me just the right amount of randomness while keeping the themes of each album intact. It requires a little manual effort, but once started, it's easy to keep going.
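
For the local copy of a collection, the same A-to-Z ordering is a one-liner, assuming a hypothetical Artist/Album folder layout (the demo directory here is made up so the sketch is runnable; substitute your real music folder):

```shell
# Stand-in library for this sketch; substitute your real music folder.
MUSIC=/tmp/music-demo
mkdir -p "$MUSIC/Rush/2112" \
         "$MUSIC/The Beatles/Abbey Road" \
         "$MUSIC/Devo/Freedom of Choice"

# List album folders alphabetically by album title, not by artist.
find "$MUSIC" -mindepth 2 -maxdepth 2 -type d -printf '%f\n' | sort
```

Sorting on the album folder name rather than the artist folder is what produces that pleasant shuffle of artists while keeping each album intact.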

With Google Play Music, I can stream my collection through my phone in my car every day. My commutes are pretty short, so I only get to hear maybe two or three songs each day. But that is just enough to keep it interesting. For longer errands that I run on the weekends, I hear a bit more. Besides, some of the music in my collection just doesn't play well around the house in consideration of others. So driving alone in the car is the best time for me to listen to that music.

I've been doing this for more than a month now and I'm just nearing the end of album titles that start with the letter A. Already, I've heard tracks I haven't played in years, or tracks that I didn't even remember that I had. It's been quite a treat to run through them this way. At this rate, I should finish in a couple of years.

Friday, July 26, 2013

Reading in the Digital Age

I've been using the internet since about 1994. Since that time, I've noticed a gradual transition in my choices and manner of reading. I've found that with use of the internet, the activity of reading has been transformed from a rather passive activity to something more active.

I used to read lots of books before the dawn of the digital age. Many of you may not remember a time when there was no web, but I remember those days well. The days of pagers, pay phones and phones with a dial on them. You know, where you put your finger in the hole for the number you want and turn the dial. Yeah, those days.

My first modem connected to my Amiga 500 and GEnie. I also dialed into a number of BBSs. That's Bulletin Board System for those who don't remember. Back then, I played a lot of interactive games with characters represented by ANSI text. That is where the wedge between me and books started. The games were addictive and time consuming. Chatting and gaming were relative constants for a number of years back then.

I didn't discover the web until I got an Apple PowerBook. Back then, the web was slow, still very much in its infancy. The World Wide Web was sometimes referred to as the World Wide Wait. But as I began to do research on AltaVista, I slowly stopped reading books on a regular basis. I discovered other sources of information, sources that tended to stay fresh and up to date. I rather liked that, compared to the static nature of books.

Eventually, I succumbed (for about 10 years) to the Microsoft monopoly and bought a Windows laptop. I also bought a copy of Netscape to browse the web, not really understanding that I could use Internet Explorer for free. Back then, in some ways, computers were still confusing for me, but friends still asked me for help. I even coded my own web page and had a little fun with that.

I still read books, but not nearly as often as before. I clung to magazines for quite some time, namely, Car and Driver. God I loved that magazine. They made even boring cars interesting to read about.

But more and more, I was seeing news on the internet. I found blogs. I discovered that there were news sites that catered to my interests in science, technology and politics. At some point, I even dropped my subscription to Car and Driver because I spent so much time on the internet, that I didn't even have time for a magazine. This was a big change because I used to read every article in Car and Driver with each issue. Even if the subject was boring, I read it because their prose gave me inspiration to write.

As I grew a bit older, I noticed that reading books became a bit more difficult. I got bifocals. As an avid internet reader, I found ways to zoom the text, to make it bigger. That makes reading on a screen easier for me than reading on paper. Paper doesn't zoom like the humble web browser can.

Then there is the PDF. With PDF files, I can easily zoom the text or image in the electronic document in ways that I could not with paper. I can download entire books in PDFs. I gotta say that PDFs opened a whole new world of documents to me. I remember downloading my first forms from the IRS and just being amazed at the clarity of the document. But what I really loved about PDFs, is that no matter what operating system I'm on, PDFs will always print the same. Always.

The internet is fresh, constantly updated, a work in progress, documenting the constantly evolving collective consciousness of the human race. It is, in a manner of speaking, the greatest collection of information ever assembled by humans.

The internet is also a sort of river. I put a post on Twitter, and it goes down the river, like a bottle with a message in it. But everyone that follows me can see the bottle at the same time. On Facebook the experience is much the same. I post something and down it goes as posts from other people appear. Google+ also bears much the same experience, but with more scientists and geeks. They are all streams of consciousness for humanity.

It has been a fascinating ride, and I hope it continues, but I don't expect any part of it to remain the same for very long. 

Thursday, July 25, 2013

Life, in a package

When we go shopping for groceries, most everything is in a nice, neat little package. The mayonnaise, the cereal, the vitamins, even the meat - it all comes in a convenient little package. When we go to the local electronics store, like Best Buy, we search for solutions to problems we didn't have before the technology we're planning to buy even existed. Those solutions also come in bigger, nice and neat packages.

The tendency to try to put everything into nice little packages is pervasive. Watch the evening news and see how very complex stories are wrapped up into 30-60 second soundbites. Watch the commercials and see how solutions to problems that many of us have, have been reduced to buying this or that product. Buy a car and feel better. Buy an expectorant and cough less. Buy a hair spray to get total control of your hair.

The reality often results in disappointment or, "buyer's remorse". First impressions from advertising set us up for expectations that may or may not be realistic. One commercial comes to mind, the Charmin commercial with the bears in the woods unable to "go" without a roll of Charmin. Of course, they feel much better, more secure knowing that there is a roll of Charmin nearby.

While the Charmin commercial troubles me, the fact that people who make buying decisions based on that commercial may also vote, troubles me more. A fantasy about toilet paper is not a sound basis upon which to make a purchasing decision. Oh, the bears are cute in a somewhat nauseating way, but do they really tip the purchasing decision in favor of Charmin? If so, is it really necessary for the NSA to collect phone records on all Americans?

We all make decisions based upon impressions, often combined with intuition. There is simply no way to know everything about something. As soon as you think you've got it nailed down, some smart-ass is going to show you something you didn't know about that thing. Or you'll discover it yourself in a moment of serendipity. Scientists show us stuff we didn't know all the time, and they do it very well. That's their job and I would be very disappointed if they stopped doing all that studying and discovering stuff.

With almost every decision, we have to weigh the cost of an error against the cost of the time spent investigating the options before deciding. Should we read the entire credit card contract, or just sign it and get that shiny thing we wanted? Should we look into the health risks of genetically modified food, or just hope that the FDA is right? Should we vote for a man who tells us that a vote for him is a vote for change, or should we investigate the people who back him to see where his allegiances point?

At the end of the day, the best we can hope for with each decision is that we made the right decision and that if not, we can hope that it wasn't a fatal error and that we live to learn from it.

Wednesday, July 24, 2013

Who wants to run a million servers?

In recent tech news, Microsoft CEO Steve Ballmer has bragged that Microsoft is now running more than a million servers. More than a million? Really? That's a lot of Windows servers to maintain, man. How do you keep them all up to date? What if an update fails to install, as so often happens on Windows? Who's gonna fix it?

Google, on the other hand, doesn't make announcements about how many servers it runs. I've searched around and found estimates, but no hard and fast numbers. The best estimates I can find are ~900,000 (as of late 2011) and somewhere between 1 and 2 million (as of early 2012, with a projection suggesting 2.3 million by 2013). I don't think that Google needs to impress us with the number of servers they run. However, they do need to impress us with fast search results.

Google and Microsoft offer a study in contrasts. Google publishes general purpose best practices such as hard disk failure statistics as well as a wealth of other experience for others to follow should they choose to. Microsoft publishes best practices for all their servers, too. But that is just for Microsoft software. Google is publishing best practices for hardware and software. Google is also agnostic about what you run on your hardware. Their only hope is that you will use Google search.

After looking at pictures of Google's data centers, I'd say that Google has plenty of experience to lend. Everything is color-coded, organized, clean and efficiently run.

Does Microsoft's announcement tell us anything about Microsoft? Could Microsoft be trying to convince us that they are relevant to the internet? As if they ever were?

One thing that we know for sure is that Google is serving at least 25% of all internet traffic. Even Windows Updates on Patch Tuesday don't come close to that. The only organization that clearly surpasses Google in traffic handling is Netflix, with an estimated 30% of traffic. Well, OK, maybe the NSA can top Google.

I suspect that Microsoft really needs a million servers to get their job done since they are running Windows. It is quite possible that the same job could be done with far less hardware running Linux. But Microsoft makes a special effort to ensure that their products won't run on Linux. That may change, but the wind says "no".

One philosophical difference between Google and Microsoft concerns freedom. Google uses free software, contributes code to free software projects and makes no efforts to restrict your ability to use the hardware Google sells. For example, Google has published instructions for how to make a copy of the hard disk in a Chromebook or Chromebox to an external drive. There is nothing stopping you from installing your favorite distribution of Linux on that hardware.

Microsoft, on the other hand, seems to have hijacked the development of UEFI, the boot process that replaces the BIOS on new computers. UEFI is secured using a set of signing keys that, at the start, only Microsoft had. Anyone wishing to distribute a version of Linux that would boot with UEFI had to negotiate with Microsoft for keys that would work. Microsoft just doesn't want people buying Windows machines and loading them up with Linux. For all we know, UEFI is just security theater.

I believe that this philosophical difference between Microsoft and Google defines their opportunities for success. While Google lets everyone know that they are free to use another service, or other software on their hardware, Microsoft has been busy using every technical and legal means to limit user options to Windows.

Microsoft needs a million servers to do a fraction of the work that Google could do with the same number of servers. Why? The restrictions imposed by Windows on its customers also work against Microsoft. Linux users like Google have access to the source code; they can change it, learn from others and improve Linux. Microsoft can change its own source code, but cannot learn how to improve it from others; they have to do it alone.

Microsoft has tried to hold everyone else down below them, but to do so, they have to stay down too. Until the internet came around, Microsoft was largely successful at holding others down, because without the internet, choices were very limited or non-existent.

Google has promoted user freedom not just because they want to, but because doing so holds their own feet to the fire of their customers and spurs them to do better. If their engineers know that anyone could leave at any time, they will work diligently to keep their customers. Our choices for search engines and other services are evidence of Google's determination to offer a better search experience with fewer servers per customer than Microsoft needs.

If Microsoft thinks that the number of servers they're running is an indication of their value as a vendor, they need only look to Google for the values customers are looking for.

Sunday, July 21, 2013

What were they called? "Microsoft"?

It seems that the numbers are in. The long, slow decline of Microsoft coincides very nicely with the rise of the internet and the numbers show that Microsoft is no longer relevant. If you're in high school and you're still planning a career on Windows, think again. Change course. Work on something that ends in "nix".

What ends in "nix"? UNIX, Linux, and there is FreeBSD, but that doesn't end in "nix". Any version of UNIX or Linux will do. Why? Because that is what runs the internet.

Every major player on the internet is running Linux or some other *nix. Google, eBay, Amazon, Godaddy, Facebook and the list goes on. The biggest web hosting service in the world, Bluehost, runs Linux.

Allow me to put this in historical perspective. I started out with a Commodore 64, a very interesting command line based computer that sold more than 16 million units during its run. But I wanted something with a graphical user interface. The first version of Windows had been out since 1985, but I had seen that and I didn't want it.

So I got an Amiga 500 with a monitor and printer. That was my gift to myself for Christmas in 1988. It had the command line, but it also had a windowing interface that made it easy for me to run programs and create files. I used it for my accounting and word processing. It was a blast to own, behold and to use.

When Windows 95 came out and obliterated the competition with all their marketing, I still wanted choices. Microsoft, with their "take no prisoners" attitude, wiped out almost all of the choices for consumers. I didn't want to run Windows and I sure as hell didn't want to run a Mac.

But the Mac was my next computer in 1994 and that was the first computer I connected to the internet. I ran the Mac until 1997 when I bought my first Windows PC. It was a dog, but it worked. I connected it to the internet like the Mac before it. I did a lot of work on it, yet I still missed my Amiga.

In 2000 I got a new computer running Windows 2000. That was much better than Windows 95, and certainly more secure. Then in 2001, I discovered Red Hat Linux and played around with it on a spare computer. I even downloaded a few Linux boot CDs and played with them. But I had taken a college course in Linux where the instructor said that eventually, Windows would win out. As if *nix was dying. So I made a mistake and put my efforts into Windows. That was what they were running at work, so I needed to learn that.

In 2007, I rediscovered Linux, desperate for an escape from Windows and installed Ubuntu Linux on a spare computer. Eventually, that spare computer became my main computer and I never went back. I made an oath to myself that I would never use Windows in the house again. Alice, my wife, was still running Windows on her computer, but that was ok. If she wants to run Windows, I'm ok with that as long as we do no online banking with Windows.

In 2009, Alice said that when the antivirus expired, I could set her up with Linux. She didn't really need Windows anymore. She just needed to do a little writing, browse the internet, check her email and chat with her friends on Yahoo. Sure enough, when the antivirus expired, I backed up her documents, wiped out Windows and installed Ubuntu for her, too. She just gets her work done and never really complains about it. She seems to like it, but she's not really a computer person like I am. She just wants it to work, and it does.

I can recall that one minor irritation of Linux was the way that my email program, Evolution, worked with my Palm Pilot based phone. I could never get it to work exactly right. When the first Android phone came out, I got the G1 as soon as I was eligible for an upgrade. That was the last link to the Windows PC for me. The G1 phone exchanged data directly with Google's servers. My contacts, calendar and emails were perfect on it. I still use an Android phone to this day. My wife does now, too.

There is a legend that quotes Bill Gates as saying, "The internet is a fad." It is only legend, because Gates actually recognized the power of the internet and wrote about it in an email sent to every employee at Microsoft. Gates referred to that power as the "internet tidal wave". Contrast that with a very public statement in 1994, "I see little commercial potential for the Internet for at least 10 years", made just before Internet Explorer shipped with Windows 95. Gates is such a sly dog, isn't he?

But somehow, Microsoft managed to miss that internet tidal wave. I believe I know why they missed it. The internet represents freedom. Freedom to learn, to build, to share and to earn. Microsoft doesn't represent freedom. They are fine if you use their development tools, and you don't compete with them. But if you do compete with them, your business will be ground into dust.

Linux was promoted as a free (as in freedom) alternative to proprietary operating systems. Linux is licensed under the GPL, the General Public License, a license that requires the source code, including any improvements, to be shared when the software is distributed. It is the ultimate utility operating system. In order for Microsoft to maintain its monopoly, it has to eliminate enough freedom to prevent competitors from gaining an advantage. Microsoft fights that freedom, our freedom, every day when they fight Linux.

The numbers prove that, in the long run, Microsoft can't compete against freedom.

Friday, July 19, 2013

The Takers

I had a conversation with a conservative I happen to know. Without prompting, he went on to tell me about the problems posed by the "takers". You know, those 47%'ers of the population who are takers? He was talking about them.

Who are they? Some say that they are the people who pay no income taxes. Ah, but they do pay taxes on their income: the payroll taxes for Social Security and Medicare are taxes on income. Those taxes are paid one way or another.

More to the point, those taxes are not paid on capital gains or dividend income. They are not exacted upon corporate income, either. How do we know this? We have computers to collect the data and tabulate totals. The tax rates on capital gains, dividends and corporate income are all much lower than the rates on ordinary income. How did that happen? Did the takers do that?

When the term "takers" is used in political discourse, the speaker is usually referring to those lazy people on unemployment. You know, those people who paid taxes into an insurance program that helps to deal with the possibility that they may be unemployed someday? Yeah, those takers.

But there is a class of takers you're not going to hear about from conservatives. Maybe there is no name for them yet, so I offer a few examples.

Let's start with The SCO Group, a company that has been in litigation for more than ten years over copyrights. They sued IBM and talked up the lawsuit, hoping that IBM would just buy their company and everyone would be happy. Led by Darl McBride in Lindon, Utah, The SCO Group insisted that IBM had taken code from UNIX and put it into Linux, infringing SCO's copyrights.

Years later, in other litigation, Novell prevailed against SCO, proving that Novell, not SCO, owned the copyrights. During all this litigation, McBride worked out the finances so that the only people who ever got paid were the lawyers, declaring bankruptcy before their case against IBM went to trial. Over the years, SCO managed to burn through millions in cash to ensure that neither IBM nor Novell could collect on any damages from counterclaims against SCO.

When I think of "takers" I think of The SCO Group first.

Dean Baker is a very interesting free market economist and he's also a bit liberal. He makes a valid point that the directors of publicly held corporations meet maybe 3-4 times a year for a lot of money. Typical compensation for a member of the board of directors? $250,000 a year on average. What happens when the company does not report a profit? Do members of the board of directors take a hit in compensation? How about the CEO? Not very often.

To highlight this chasm between performance and compensation, a new website is in the works that will identify directors and CEOs who walk away with huge compensation packages even when they tank the company stock. It will be called DirectorWatch. Here's an interesting example of a taker from their Indiegogo site:
Home Depot dumped its CEO Bob Nardelli at the beginning of 2007, just over six years after he took the position.  During his tenure, the market value of Home Depot stock had fallen by 40 percent, according to one estimate.  Lowe’s, the main competitor for Home Depot, saw its stock price nearly double over the same period. Yet, Nardelli walked away with $240 million for his efforts.
When a CEO tanks the company stock and walks away with nearly a quarter billion, wouldn't you call *him* a taker? I would.

You would think that with today's technology, CEOs and members of the boards of directors would be more efficient, more effective, maybe even more productive. Apparently technology isn't really helping those CEOs do a better job.

Remember how the economy tanked in 2008? Who exactly was responsible for that? At least a few takers.

Thursday, July 18, 2013

The virtues I see in Samsung

I can remember when Sony was the name to get. Years ago, I spent some time working at The Great Indoors in Irvine to try my hand at sales. BTW, you won't find The Great Indoors anywhere now; they've long since closed down. While I was there, I was captivated by the top end Sony Bravia LCD TVs. Their Bravia TVs back in 2006 were fantastic for their time, and expensive. Today? That same tech is run of the mill. Most sets can match that experience.

But somehow, Sony lost their mojo. They lost their lead to Samsung and maybe even LG. Most headlines these days are for Samsung in the tech news. And Samsung isn't necessarily the name brand on the products you buy. Their parts are often inside.

For example, inside most iPhones and iPads, you will find Apple's A4 and A5 processors, which Samsung manufactures. Samsung also makes the screens for many of those devices. But Samsung no longer makes the screens for Apple. Why? Samsung makes Android phones. Steve Jobs mistakenly believed that Android was a "stolen product" and decided to go nuclear against Android. So Apple sued the lead Android cell phone manufacturer and began in earnest to find new suppliers and terminate all contracts with Samsung. One look at how Apple's stock price has been trending will tell you how that war is going. Very badly for Apple.

Samsung has been innovating in other ways. It is not widely known that Samsung makes disk drives. But these are no ordinary disk drives. They are solid state disk drives. No spinning platter, no moving parts. A typical SSD is about 20 times faster than a traditional spinning platter drive. This kind of speed leads to at least a 62% decrease in boot times for Windows, as tested more than two years ago.

In the past week or so, Samsung has made two more announcements. The first is for a new line of consumer SSD, the 840 EVO series. These 840 EVO drives will double or triple the write speed over the previous version of the 840 model series.

Samsung has also released a new enterprise drive that is about 6x faster than the fastest SSD previously announced: 3,000MB/s. That is an insane number to hit for hard disk performance. Note that a typical SATA connection doesn't have the bandwidth to handle that flow of data. No, this baby will require a PCIe slot for maximum throughput. Of course, 3,000MB/s is overkill for a desktop, but if you're running a very fast database for your internet based shopping cart, this should fit the bill.
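To see why SATA won't cut it, here's a quick back-of-the-envelope check in Python. The arithmetic is my own, assuming SATA III's nominal 6 Gb/s link rate and its 8b/10b encoding (10 bits on the wire per byte of data), which leaves roughly 600 MB/s of usable bandwidth:

```python
# Rough check: can a SATA III link carry 3,000 MB/s?
# Assumes the nominal 6 Gb/s link rate with 8b/10b encoding overhead.
sata3_usable_mb_s = 6_000 / 10   # 6 Gb/s, 10 wire bits per byte -> ~600 MB/s
drive_mb_s = 3_000               # the announced enterprise drive's throughput

print(drive_mb_s / sata3_usable_mb_s)  # prints 5.0 -- about 5x what SATA III offers
```

That factor of five is why the drive plugs into a PCIe slot instead of a SATA port.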

There is one very interesting aspect to Samsung's business which doesn't get much notice in the press. They don't unilaterally sue others for patent infringement. Apple has sued Samsung. Twice. Apple stock is in decline and they're losing a major supplier.

Microsoft sought and got a patent licensing agreement with Samsung to get them to make Windows smartphones. The proposition from Microsoft? "Hey, that's a really nice cell phone business you got there. I'd hate for anything bad to happen to it. Look, if you just sign this patent license agreement, the cost is less than the cost of litigation and nobody gets hurt. Once you sign this, you can make Windows Mobile phones and we'll look the other way." Microsoft's stock has been going sideways for years and their market share is rapidly eroding.

Samsung? For the last several years, their stock had been hitting new highs until recently. So what's the problem? Analysts are worried that Samsung is overwhelming customers with model choices for smart phones. But their stock is currently trading at over $800 a share, almost double the price of Apple's stock.

I have a Samsung TV. I have a Samsung DVD player. My wife has a Samsung Galaxy S3. I buy their products because Samsung is more interested in innovation than litigation. That is a rather peaceful attitude to have and that is something I want to promote with my purchasing dollars.

Wednesday, July 17, 2013

Infants and tech

Scientists have been unable to fix a precise point at which babies achieve consciousness, though some have determined that humans generally achieve it at about 5 months after birth. This is consistent with my observations of my daughter, Emily. For most of the first 5 months, she seemed to be unaware of her surroundings and very keen on getting her own immediate needs met. Food, sleep, diaper, mom. Not necessarily in that order, but it was repeated seemingly at random.

Five months (of irregular sleep for the parents) later, we see that Emily is not only aware of her surroundings, she is starting to see things that she wants. She likes shiny things that reflect light. She also likes translucent things with color, like a green Perrier bottle. She has also seen her parents watching screens. Big screens and little screens. She likes them, too.

We started by just showing her the weather report on our computer. Pretty innocent, right? So there's Sterling, the local weather guy on KUTV, giving us the news on the latest storm brewing across the Wasatch Front. I watch Emily and notice that she is apparently transfixed by the moving images. She hears the voices and seems to connect them to the events on the screen.

My wife, Alice, has found many YouTube videos that provide lullabies, alphabets and songs I used to sing as a kid. Emily loves that Twinkle, Twinkle Little Star video. You know, the one with the owl and the star? So cute and calming. Except that they don't tell little kids that owls are predators and that they hunt rodents at night.

Anyway, along with all that, Emily has developed a fascination with our smart phones. She sees us playing games with the phones, checking email, watching videos, checking the weather. What does she want to do? She wants to put the phone in her mouth, you know, like everything else.

My experience has prompted me to wonder how she will adapt to the tech that she lives with as she grows up. Probably much like the kids raised by my sibs: they use tech frequently, are often ahead of the curve relative to their parents and are quite comfortable with the tech.

One thing I can say for sure: Emily will not see Windows running in our home. Not even a Mac. There is nothing but Linux here and she will learn more about computers with Linux than she ever will with Mac or Windows. Why? Mac and Windows try to hide how the computer works from users. Linux encourages us to peek inside and find out what's happening.

There are Linux distributions for everything and more than a few of them are designed for education with variations for different age groups. The most famous is Edubuntu. These Linux distributions teach math, typing, reading, writing, science, music and the list goes on. There is some pretty amazing stuff out there.

For now, though, we can only imagine the technology that will be available when she becomes a teenager. What will she do with that tech? Who knows? But I will teach her that technology is not what makes us happy. Tech can facilitate happiness, but it doesn't make us happy.

Technology is not capable of love. A computer can't love you like a father, mother or brother or sister or a friend. Cell phones have love only for the carriers, like AT&T, Verizon and T-Mobile.

People are where the love is at. It is with people that we find humanity, fellowship and peace. Technology just helps us to connect to others in ways we could not without the tech.

These are just a few of the things I hope to teach Emily.

Sunday, July 14, 2013

Impressions of the Zimmerman Verdict

So, "Georgie" Zimmerman is a free man. I found it interesting that Zimmerman's friends and family called him "Georgie" during their testimony at the trial, as if he were some sort of big teddy bear. Do they really feel that way after he shot someone in "self defense"?

I imagine that people are going to be a lot more polite to Zimmerman now that he's been acquitted. "Yes, sir." "No, sir." "Is there anything else I can do for you, *sir*?" He's now going to get the respect that he's always wanted. After seeing his smug little smile in pictures taken after the verdict was read, I had the impression that he had nothing but contempt for his opponents.

Why did Zimmerman go free? For one, there was lots of conflicting testimony. Zimmerman is the only living witness who was there for the whole altercation between himself and Trayvon Martin; no one else can give a complete account of the incident. Isn't it interesting that Zimmerman didn't even testify? If he truly believed he was not guilty, then he should have had no trouble testifying on his own behalf.

There was some confusion as to which charge Zimmerman was accused of. Some say it was 2nd degree murder. Others say manslaughter. The prosecution tried both and still failed.

Even the technology at hand could not lend much certainty to the evidence. Many reports indicated that voice recognition could not determine whether the screams heard on the phone belonged to Martin or to Zimmerman. That could have been a deciding factor. But the expert witnesses qualified to make that determination were whittled down to only those favoring the defense.

But the biggest determining factor was the "stand your ground" law in force at the time of the shooting. The jury had to take into account Zimmerman's state of mind. Zimmerman claimed he was in fear for his life and felt justified in using his concealed gun. Some say that Zimmerman never would have left his car had he not had a gun; without it, the fight would have been more even. But Zimmerman left his car and followed Martin even after the 911 dispatcher told him not to.

Zimmerman may be a free man, but is he really free today? He's a celebrity. It's only a matter of time before he finds himself interviewed on Fox News gloating over his legal victory. If he shows camera presence, he might be a good candidate for a job on Fox News that will pay his legal bills. What legal bills?

The legal bills he will pay when someone, be it the DOJ or Martin's family, almost certainly somebody, comes looking for money with a civil lawsuit against Zimmerman.

Zimmerman may be free, but he may spend the rest of his days never really knowing for sure if he is safe.

Friday, July 12, 2013

Why I Am Anti-Mouse

For years, I've been anti-mouse. Upon reflection, I find that the only time that I really use the mouse is when I'm browsing the web. That's pretty much it.

Oh, I might have to mouse with some GUI to set audio from speakers to headset. Or maybe I want to move applications from one virtual desktop to another. Perhaps I'll play Mahjongg with a mouse. Or Solitaire. But the use case for the mouse in productivity applications is very slim for me.

Whenever I work on a new operating system or a new application, I am quick to find the keyboard shortcuts. Why?

I find that keyboard shortcuts are so much more satisfying than using a mouse. I am more prone to error with a mouse. I might miss the button or click the wrong button. I just have greater accuracy and speed with the keyboard. The mouse slows me down.

Here's one of my favorite examples: selecting text in a document. Try it with a mouse and see how much effort is expended for the desired precision. If you select text and move too far down the screen, you'll select too much. Move too little and you'll be waiting as the text crawls up during your selection. The mouse is unpredictable when it comes to selecting text.

Using a keyboard is relatively straightforward. Use the arrow keys to position the cursor at the beginning of the text you want to select. Then press the shift key with one hand and the down or right arrow keys with the other hand until the text you want to select is selected. While holding down the shift key, you can tap the arrow keys until your selection is exactly what you want. Easy. Far easier than using a mouse.

The one text-selection task that is easier with a mouse is double-clicking on a word to select it.

I can whip around a user interface with keyboard shortcuts faster than I can with a mouse and with greater accuracy, fewer errors. My confidence with the keyboard far exceeds that of the mouse.

Alt-tab allows me to switch applications easily and to choose between multiple applications with better precision. I use keyboard shortcuts for common application operations, too.  Ctrl-X, C and V for cut, copy and paste. Ctrl-Z to undo. Ctrl-Y to repeat formatting. Ctrl-O to open. Ctrl-P to print. Ctrl-N for a new document. Many of these keyboard shortcuts are cross-platform, too. If you find them on Windows, chances are very good they work in Linux. Not sure about Mac, but they are worth a try there, too.

I look for application menu shortcuts, too. Press the Alt key while running any application and the keyboard shortcuts for navigating the menus will appear. This works great until you try working with the MS Office Ribbon - what a monstrosity!

But with LibreOffice, our sanity is saved. When I press the Alt key, the menus show underlined characters indicating the keyboard combination to use. For example, Alt reveals that the letter F in the file menu is underlined. Enter the sequence Alt followed by F and the file menu drops down. Look for more underlined characters to select the menu option you want to use and just press that letter. It's that simple.

In Windows, navigating the file system in Windows Explorer is easy. To get to the C drive, press the Windows key to bring up the Start menu. Then type "c:" and press enter. Windows Explorer brings up the C drive for your review. If you type the first letter of a folder name, that folder will be selected. Type the letter "U". This will select the Users folder. Press enter to open the folder. Look for your folder (the one with your name on it, usually) and press the first letter of that folder name to select it. Then press enter to open it. To go back up one level, press the backspace key.

If you see a document, like a Word or Excel document in a folder, you can press the first letter of the file name and press enter. This will open that document.

File system navigation in Nautilus (a Linux file manager for Gnome) is very similar, but backspace to go up one level has been replaced by Alt-left arrow. One other difference: in Windows, repeatedly pressing the first letter of a file or folder name will cycle through all the files and folders whose names start with that letter. In Linux, once you enter the first letter, a small window pops up to show you what you've typed so far and waits for more characters to use for matching. As you enter succeeding characters, the closest match is selected. Then you can press enter to open that folder, or to open the file if the selected item is a document rather than a folder.

Keyboard shortcuts work very well for repeating complex tasks, too. I was asked to inventory the MAC addresses of all printers on a network. A MAC address is a unique address assigned to every piece of hardware that can connect to a network. The MAC address system ensures that no two devices will ever have the same MAC address.

The reason I was asked to get the MAC addresses was so that they could be used for DHCP reservations on a network that used automatic addressing for clients. DHCP stands for Dynamic Host Configuration Protocol, a system for automatically assigning addresses to devices on a network. Such devices include, but are not limited to, computers, networking equipment like switches and routers, and printers. DHCP assigns IP addresses to devices so that we don't have to manually assign addresses to everything and keep track of them. A DHCP reservation reserves an IP address based on the MAC address, so that each time that device connects to the network, it gets the same IP address. The reservation also prevents another device from being assigned that address.
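As a concrete (and purely hypothetical) sketch of what such a reservation looks like, here is a host entry in the style of ISC dhcpd's dhcpd.conf. The host name, MAC address and IP address below are made up for illustration; a Windows DHCP server expresses the same idea through its management console rather than a config file:

```
# Hypothetical dhcpd.conf entry: the printer with this MAC address
# always receives 192.168.1.50 from the DHCP server.
host accounting-printer {
  hardware ethernet 18:03:73:4b:47:76;
  fixed-address 192.168.1.50;
}
```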

In my survey, the printers were listed in a spreadsheet and the list provided the IP address of each printer. I started out using the keyboard to copy the IP address for each printer from the spreadsheet into a browser, Internet Explorer, to access the web page for each printer to get the MAC address. Once the MAC address was displayed I used the mouse to copy the MAC address to the spreadsheet.

But on some HP printers, I found that I could not copy the MAC address. No matter how I tried, Internet Explorer would not let me copy the address. So I tried memorizing it and then typing it on the spreadsheet. This was not easy to do, considering that a typical MAC address looks like this: 18:03:73:4b:47:76.

So I installed a telnet client on the client machine I was running. Then I copied the IP address from the spreadsheet for each printer, one at a time and pasted it after the telnet command to connect to the printer. Once connected, I often only needed to enter the "/" character to display the current status of the printer, and that included the display of the MAC address. Then I found the keyboard sequence to copy that text and paste it into the spreadsheet.

Once I figured out the keyboard steps, I could do the entire operation in a few seconds instead of a minute or more with a mouse. For repetitive procedures, there is no equal to the keyboard except for scripting.
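And scripting can take over where the keyboard leaves off. Here's a minimal Python sketch of the extraction step: the status text, regex and function names are my own illustration (real printer output varies by model and vendor), but it shows how a script could pull the MAC address out of whatever a printer's status page returns:

```python
import re

# Hypothetical status text, loosely modeled on what a telnet session
# to a printer might return; real output varies by model and vendor.
status_text = """
Printer status
IP Address........: 192.168.1.50
Hardware Address..: 18:03:73:4b:47:76
"""

# Match six colon-separated pairs of hex digits.
MAC_RE = re.compile(r"\b[0-9a-fA-F]{2}(?::[0-9a-fA-F]{2}){5}\b")

def extract_mac(text):
    """Return the first MAC-address-looking string in text, or None."""
    match = MAC_RE.search(text)
    return match.group(0) if match else None

print(extract_mac(status_text))  # prints 18:03:73:4b:47:76
```

A loop over the spreadsheet's IP addresses, fetching each printer's status and calling something like extract_mac, would turn the whole inventory into one unattended run.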

With web forms I always tab around the fields to complete them. But sometimes, the web programmer wasn't thinking of guys like me and forces me to use the mouse to get around. It's unfortunate but rare.

Keyboard shortcuts have made my life on the computer much easier, much saner. Sure, I have to memorize the shortcut, but after a while, it's not the shortcut I remember, it's the finger pattern, so I really don't mind.

If you want to know more about keyboard shortcuts, a search for keyboard shortcuts will yield plenty of results. I hope you find this article informative. Got a keyboard shortcut you like? Please share it with us below in the comments section. Thanks.

Thursday, July 11, 2013

Review: Ubuntu Gnome from a Live CD

For the last few days, I've been playing with Ubuntu Live CDs to run as a secured workstation. It's a very interesting way to work. Since I have not been assigned my own PC at work but need one, I use this setup to conduct work related research, log my activities and note important events. I have checked this out with my employer and they are OK with this setup, for anyone wanting to know.

There are some good reasons for doing this. One is that I didn't want to create any security problems on the company network. Part of the solution provided by my employer is direct network access to the internet, which keeps traffic from my computer off of the business network. It's probably through a router and that's nice, but Windows machines are notoriously easy to hack. An un-patched Windows machine directly connected to the internet won't last more than a minute before it's compromised. So, using the Linux boot CD is a security measure that protects the image on the hard drive while I access the internet.

Two, I can run on my preferred platform, Gnome 3 on Ubuntu. I am seriously anti-mouse; I won't use the mouse unless absolutely necessary and, as it turns out, there is good reason to avoid it. I just naturally prefer the keyboard, and Gnome 3 is particularly well suited to that kind of work. For example, when I want to launch an application, I just press the Windows key and type the first few letters of the application name. The application appears, usually as the first entry on the list, and I press Enter to run it. This makes it easy to find and launch applications, and it is much faster than the menu-driven approach used in KDE, Windows and Gnome 2.x.

Three, if they need this computer, I can always find another older computer to work from. I don't have to worry about re-installing a good setup on another machine since I only need to boot to a CD. Since the work I do stays in the cloud at Google Docs, I have no worries about any work product being left on the hard drive.

To summarize, this is what I'm doing:
  • Use an unassigned computer as my temporary workstation.
  • Boot that workstation with an Ubuntu Live CD.
  • Install Chrome temporarily for my work.
  • Browse the Internet to do my research.
I've tried using two different forms of live media: USB flash drive and CD/DVD.

I've found the USB drive to be painfully slow, but it does offer persistence, something that I don't have while running a live CD. Persistence allows me to keep settings and applications after shutting down and rebooting the computer. Persistence also allows me to save files to the USB drive that I can gather again later. I can even boot the USB drive on another computer and my settings will be there.

For example, on a USB drive, I can install Google Chrome and know that it will be there on the next boot. Space permitting, any applications that I install will remain on the USB drive. While this is convenient, it is still painfully slow. It appears that reading and writing cached data on the USB drive is slow compared to working from a hard drive. I wonder how an external SSD might work, but I may wait a while for prices to come down before I try one out.
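For the curious, persistence on an Ubuntu live USB is just a writable file at the root of the drive. Here is a hypothetical sketch, assuming the USB drive is mounted at /media/usb (a made-up mount point); Ubuntu's live boot system (casper) looks for a file named casper-rw there and uses it as the writable overlay:

```shell
# Sketch: add a 1 GB persistence file to an Ubuntu live USB.
# /media/usb is a placeholder mount point -- substitute your own.
dd if=/dev/zero of=/media/usb/casper-rw bs=1M count=1024  # create an empty 1 GB file
mkfs.ext3 -F /media/usb/casper-rw                         # format it as an ext3 filesystem
```

On the next boot, the live system finds casper-rw and stores any changes inside it, which is exactly why losing the drive means losing those settings along with it.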

On the other hand, running Ubuntu from a Live CD is much faster, much more responsive and a far more enjoyable experience. The only problem is that if I want to run Chrome, I have to re-install it over and over again with each boot. Considering the performance gains over the USB drive, that is not much of an inconvenience. Besides, I'll always be on the latest version of Chrome with each re-install.

Where are the programs installed? Good question. The CD is read-only, so no programs can be installed there. What the live CD does is set aside a chunk of the computer's memory as a temporary, writable overlay on top of the read-only system on the CD. Any programs installed are saved to that overlay. On reboot, the overlay is gone, so I have to re-install them to use them again.

It's not so bad to run this way. I don't mind re-installing the packages as they're small and don't take that long to install. In fact, running the live CD and installing what I need is much faster than waiting for the USB drive to load and run Chrome starting with multiple tabs.
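For reference, the re-install itself is only a couple of commands. This is a sketch, assuming the live session has network access and sudo rights; the URL is Google's long-standing direct download link for the stable Chrome .deb package:

```shell
# Fetch and install the current stable Chrome package (needs network + sudo).
wget https://dl.google.com/linux/direct/google-chrome-stable_current_amd64.deb
sudo dpkg -i google-chrome-stable_current_amd64.deb
sudo apt-get -f install -y   # pull in any dependencies dpkg reported as missing
```

Run again after each boot, this also guarantees the session always has the latest Chrome build.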

The Gnome interface is very snappy on the live CD. It's easy and quick to move from virtual desktop to virtual desktop. Applications load fairly quickly from a CD and run quickly once they're loaded.

The most interesting aspect of my live CD experience is noting how quickly and nimbly Google Apps runs from the live CD. Gmail, Blogger, Google Drive and Google Docs run great. I have a large text document that takes forever to load on the USB drive, but on the Live CD, it loads quickly. It feels very much like a desktop and I find that I'm very comfortable working this way.

There is a big difference in speed between the Live CD and the USB drive. The Live CD is actually faster than the USB drive, something that I did not expect. The USB drive was so slow that I just gave up on it in frustration and switched to the Live CD instead. I suspect the difference comes down to where changes are written: the live CD session keeps its working data in memory, while the USB session is constantly reading and writing flash storage that is much slower than a hard disk.

There is a security caveat that should be noted about both USB drive and Live CD sessions of Ubuntu. With USB drives, there is room for persistent files to be stored. The entire configuration of Google Chrome, for example, is stored in the space for persistence. Chrome is very good about syncing the settings of the browser from computer to computer. Chrome settings on my home computer sync wherever I happen to sign into Chrome on another computer. This is very convenient and presents no security risks as long as I decline to save the password and as long as I log out when I'm done with my session and close the browser. But on a USB drive, browser syncing presents certain security risks.

For example, any passwords saved in Chrome at home will transfer to Chrome running from a USB drive while working away from home. If the USB drive were lost and someone else picked it up, I'd have to change all the passwords affected by the loss. I'd need to do this before the person who found the USB drive figures out what to do with it, too. When it comes to security, I feel much more comfortable with a Live CD.

With a Live CD running Ubuntu, there is no worry about potential security problems with persistent settings. Once you shut down the computer, all the programs you've installed, and their settings, are gone forever from that machine. This is reassuring in that I won't have to worry about leaving my data on any drive anywhere. When the CD pops out, everything the session held in memory is gone.

The use of a Linux Live CD is not limited to work. I could use it while on vacation, away from the home computer. If I need to write extensive email correspondence while away from home the Live CD can help. If I need to check my online banking while away from home, the live CD comes in handy.

A few years ago, I read a story about a businessman who understood the security risks relating to Windows, so he used a Mac instead. He used his Mac for his business and personal computing. One day, he needed to access his bank account online, but he was away from his office, and so, didn't have immediate access to his Mac.

To save time, he used a Windows PC at a friend's house to access his online bank account. When he attempted to access his account later that day, he found that he had been cleaned out to the tune of more than $100,000. Turns out that his friend's computer was infected with malware that had installed a keylogger. The keylogger captured his keystrokes and sent them to a server where black hats could pick them up and hack into his bank account. He was not able to recover his funds from the bank or the thieves.

Note: I do not use the term "hacker" to identify the bad guys. Linux comes from a hacker community and they are definitely good guys. To identify the bad guys, I use the term "black hats". They are the bad guys who want your money.

A Linux Live CD can prevent losses like the one sustained by the businessman above. We're not going to see a Live CD for MacOS in our lifetimes, and Windows live CDs are not, in my opinion, secure. But Linux, an operating system built with security in mind from the get-go, has many flavors of live CDs to choose from and provides an escape from the monoculture that is Windows or Mac. That diversity makes live CDs especially hard for black hats to anticipate when trying to compromise secured communications or accounts. It also makes a Live CD ideal for accessing an online bank account or other secured resource when the computer I normally use is not available.

The Linux Live CD also makes a fine temporary workstation. Here are some examples of Live CDs you might like to try:

Linux Mint
Fedora Project
Ubuntu Gnome

If you're new to Linux, a live CD is a great way to put your toes in the waters of free software as well as a great temporary workstation.

Wednesday, July 10, 2013

As the tide goes out, it carries Microsoft

There is a definite sea change in the IT space. There was a time when you couldn't get fired for buying Microsoft products, but that time may be gone or on the way out. According to the latest news, Netflix has dumped Exchange and other onsite application servers for Google Apps and other cloud based infrastructure.

Netflix is making the change to simplify their operations, as many companies like to do. I think that Netflix's move is evidence of a sea change, a much greater trend. Netflix is not alone in moving away from Microsoft to the cloud. Most of the cloud runs on Linux. Why is that important?

Late last year, the Linux Foundation released a survey of Linux use among the Fortune 500, companies that gross better than $500 million in business or have 500 or more employees. The survey found that 8 of 10 respondents were planning on adding new Linux servers. There is also evidence that Windows 8 is driving the enterprise to Linux as well.

Google, IBM, Amazon, eBay, Twitter and Netflix all use Linux. There are many more, too numerous to name here, but suffice it to say that the biggest companies in the world use Linux. The vast majority of the cloud computing space runs on Linux.

Linux skills are also in very high demand. If you're a professional Linux user, then you know already that your skills will fetch a much higher salary than if you were an expert in Windows. Linux skills are hard to find, so if you have them, you're in good company. If you want them, training is not that hard to find; it's just a question of time and money. Dice is where techs can go to find jobs, and Dice reports that demand for Linux skills is higher than ever.

As someone who has been using computers for more than 30 years, and having watched Microsoft almost completely eliminate choice in the operating system market, I'm pleased to see these trends emerging. One big reason why Apple is still alive today is that Microsoft made a token investment in Apple to keep it alive - mainly to avoid further antitrust scrutiny. Linux arose because a small but determined group of people wanted an alternative to proprietary software. They wanted software freedom.

So, if you're still in high school or college and you have any interest in computers as a profession, learning Linux will put you years ahead of the competition. If you're an adult who's been working with Windows for most of your career, beware: if you don't know Linux in 5 years, you may be limiting your earning potential.

Tuesday, July 09, 2013

Testing for Download Speed

I've been on the Internet since about 1991, when I got my first email address and found myself corresponding with people across the world. I started out with a 14.4k modem, then upgraded to a 56k modem, and at the time I thought that was pretty nice. In 2001, I got my first cable modem connection running at about 1.5 megabits per second. I loved how fast Windows Updates would download then. Since then, things have changed.

That experience left me with an ongoing fascination with download speeds, so from time to time I like to test mine. Part of my desire to know the download speed arises out of curiosity, the "How cool is that!?!" factor, and part out of wanting to ensure that the ISP is doing its job.

Now I could jog on over to a speed test site, but those tend to be somewhat inconsistent, and different testing sites never seem to match each other. So maybe I could just download a really large file, like an Ubuntu ISO file, weighing in around 955 MB. During the download, I can click on the menu in Chrome (three little bars next to the last tab on top), select Downloads and observe the download in action. Chrome will report the speed there, as well. But even that can be inconsistent, probably due to some overhead.

In Linux, Mac and I believe Windows, there is a text based tool called wget. Wget is the tool to use to download pretty much whatever you want. You can download an entire website if you have the space and if the website permits you to do it. I like wget because it's a very clean download from the command line. There is no user interface overhead from Windows or Gnome, nor is there overhead from the browser. For example, I could type the following command:

wget <link to the Ubuntu Gnome ISO file>
And that would download the Gnome desktop version of Ubuntu. Notice the file extension is .iso; that is a CD/DVD image that can be burned to an optical disk. When I run the command, the progress, speed and ETA are displayed until the download is complete, like so...

What's really cool about wget (besides the 100 or so other options to the command) is that when it's done, it displays the download time and average download speed. In the image above, we can see that the download rate was more than 6 MB/s (megabytes per second). That is consistent with an advertised speed of 50 Mb/s (megabits per second), and the average here even runs a bit above it.
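The arithmetic behind that comparison is simple: ISPs advertise in megabits, wget reports in megabytes, and there are 8 bits in a byte. A quick sanity check at the shell (using awk, which should be on any Linux system):

```shell
# Convert an advertised 50 Mb/s line speed to the MB/s that wget reports.
awk 'BEGIN { printf "%.2f MB/s\n", 50 / 8 }'   # prints 6.25 MB/s
```

So any sustained rate at or above 6.25 MB/s means the ISP is delivering what it advertises.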

I have also found that I can write these commands in shell script. For example, this is what I would write in Linux:
#!/bin/bash
cd /media/scott/ISOs
# wget <link to the previously downloaded ISO>
Then I can run that file to do the download. Over time, I can add more commands to the file and keep a nice little list of places to go to download the file of choice. So I browse to the location where I want to download the ISO file. Then I copy the link to the ISO file and paste it into my script after the command, wget.

To keep from running the previous wget command (since I already have that ISO), I place a hash character (#) at the beginning of that line to mark it as a comment and keep it from running. One other option to consider for this script is to change the working directory. I have a collection of ISOs in a directory that is not on my main hard drive, so I added a command at the beginning of the script (after the "she-bang" at the very beginning) to change the working directory to the desired location. This way I don't have to copy it again to another location after downloading.

I have found that wget is faster and that storing the commands in a script makes for an easy reference for later. And I just like to watch the progress of the download.

I hope you have enjoyed this little tutorial.

Monday, July 08, 2013

The constant speed of Windows

Over the years, I've had a chance to see Windows in operation on many different CPUs. From around 1997 to the present, I've seen Windows on everything from the early Pentium processors up to an i5, and one thing remains relatively constant: user interface speed. I remember pundits saying long ago that the 486 was more than enough to handle word processing; after that, we didn't really notice any increase in speed.

While it is true that gamers have realized impressive gains in rendering speed, that is often because the game exploits the graphics processor, not the CPU. What I'm talking about is just plain Windows Explorer action on the desktop as well as ordinary productivity applications.

Here is a typical business upgrade to use for comparison: start with a Dell Optiplex 3010 with 4 GB of RAM, a Core i5 CPU and SATA III connecting the hard drive to the motherboard. These machines typically replace Dell Optiplex 755 machines with Core 2 Duo processors, 2 GB of RAM and SATA II running at 3 gigabits per second. What I find interesting is that the new machines have far better specs than the machines they replace, yet Windows still runs about the same as before.

Take the simple act of opening a folder on the hard drive. In the past 10 years, hard disk interface speeds have more than quadrupled, growing from 133 megabytes per second with PATA to 600 megabytes per second with SATA III, a very substantial increase in speed. Yet, with a directory filled with hundreds of files and folders, Windows will still give us the green progress bar as it attempts to inventory the folder.

As a point of comparison, I have a 5 year old computer running Ubuntu Linux. On this machine, I have a music folder with more than 600 objects at the root of that folder. Listing the contents of that folder is nearly instantaneous. On Windows, the computer would spend about a minute running inventory on all folders and files in the folder. The result would be the same whether the computer is older or newer. The subjective speed of Windows operating on the desktop is relatively the same.
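This is easy to check for yourself. A minimal sketch, assuming a Music folder under your home directory (the path is mine; substitute your own): time reports how long the listing takes, and wc counts the entries.

```shell
# Count the entries in a large folder and see how long the listing takes.
time ls -1 "$HOME/Music" | wc -l
```

On my Ubuntu machine the listing returns in a fraction of a second, even with hundreds of entries.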

This isn't the only difference. I have a wide range of choices in desktops to install on Linux. Some are definitely hardware intensive like Gnome-Shell and KDE. They require at least 2GB RAM to feel normal. If I want something small and light, there is XFCE desktop, a desktop interface that is designed for speed. XFCE works exceptionally well on older hardware, too. Each desktop has a particular focus; some are fast and light, some are very pretty with nice animations, and others provide an alternative to the familiar menu driven interface we know from Windows. There are at least 8 different base desktops to choose from in Linux.

One reason for the apparently constant speed of Windows could be traced to Microsoft's desire to sell more licenses. Windows has an interesting model for sales. Original Equipment Manufacturers (OEMs like Dell, Hewlett-Packard and Acer) buy licenses from Microsoft and apply that pretty little sticker to the box. The license does not permit transfer to another box. Adding RAM? No problem. New hard drive? Easy.

But if you want more speed, there is a limit to what your motherboard will support. Upgrading the hardware in OEM boxes is not that easy if you want a new CPU. OEM motherboards are customized for the box, and many of the parts, like CPU fans and fan shrouds, can be hard to source. If you want the latest and greatest CPU, you're going to have to buy a new box if your computer is a few years old.

The OEM Windows license is priced much lower than a shrink-wrapped box, too. Some anecdotal estimates suggest the cost is around $50 per license for consumer PCs. If you want to be able to move your copy of Windows around, you need to be ready to buy a full license for up to $299.

It is interesting to note that during the days prior to the release of Windows Vista, there were many articles discussing the fatter hardware requirements of Windows Vista. The purpose of these requirements was to get manufacturers on board with the prospect of selling new machines to support greater hardware requirements. At introduction, Vista was seen as bloated and high maintenance and was shunned by most businesses and many consumers.

While the Windows interface could probably run much faster, the need to sell more powerful hardware to satisfy the OEMs seems to be paramount. So if you're wondering why Windows seems to never change in speed, you now have some idea why. If you'd like an alternative to Windows, you may find one in a Linux desktop, a free desktop environment that is under constant improvement from a worldwide community of users and programmers.

Sunday, July 07, 2013

Information Overload

A few weeks ago, I signed up for Feedly and installed it as an app in Google Chrome. I had tried Google Reader and didn't really see much use for it. Yet, after reading about the impending demise of Google Reader, I decided to see if an RSS reader would be really useful.

For those who don't know, RSS stands for Really Simple Syndication. RSS is a worldwide standard that makes it easy for websites to publish their headlines and a snippet. As such, an RSS reader allows you to pull the headlines from nearly any news source and put them on one page. One very appealing feature of RSS readers is that they provide a handy way to scan the latest news on any website, without having to visit that website and look at any advertising. RSS readers also allow you to scan all of your favorite news sources in one place.

Feedly is a very slick and smooth application for RSS. Feedly runs in the background as I browse. When I come across a page that loads headlines, I can click on a translucent icon at the bottom right corner of the page to add it to my RSS reader feed. When I loaded Feedly for the first time, it prompted me to use a Gmail account, which I selected. Then it gathered the feeds from Google Reader for me and used that to start.

From there, I went to all the places I like to read news and added them to my Feedly application. As I added them, I created a few categories, and at first, I actually used them. But there is one category, "All" that I now use every day. The All category will display everything. I can set preferences so that the headlines are displayed in a way that is appealing to me.

Of the four display settings available, I like the Title Only setting, represented by 4 horizontal bars. This shows only headlines with no pictures and very short snippets. It is, in a sense, the beer-bong of news. The headlines are straight and fast. Feedly is very fast, too. So even if you refresh the page, you'll only be waiting a fraction of a second.

After using this application for a few weeks, I found that I developed a definite sense of information overload. There is only so much time in a day. When the sky is blue with puffy little clouds everywhere, I'd rather be out flying a kite than spending all day reading news on my computer. I get a little overwhelmed by all the headlines and really don't know where to start. Maybe I am tracking too many news sources. 

If you're a newshound, you're going to just love Feedly. If you have a life after computers, then you'll want to keep Feedly on a leash to be sure you'll have time to get out and play with your friends and family. The Internet is designed to route around damage and distribute information at the lowest possible cost, and does the job with amazing aplomb and success. Feedly only makes that more apparent, at the cost of having a life.

Friday, July 05, 2013

Achilles' Heel

So many electrons have carried the message that the NSA is vacuuming online correspondence with their PRISM program, that I choose not to bore you with the details here. Suffice it to say that in the realm of the 4th Amendment, the new boss is the same as the old boss. Overbearing, unrelenting, and unreasonable in his efforts to keep an eye on the people he serves.

When I voted for Obama, I had high hopes that I was voting for change, especially in the realm of intelligence gathering. I could not have been more wrong. After reading about Lesterland, at least now I have a clue why this surveillance state is gathering steam, even under Obama. Apparently, the top 0.05% of income earners in America, who finance 60% of electoral campaigns across this country, are very concerned that the rest of us might be up to something. You know, like organizing political opposition to their vision of what kind of country the United States should be.

The enormous data-gathering efforts taking place to "find a terrorist" may be well-intentioned. They may even be effective at finding and rooting out terrorists. But there are a few problems that merit discussion. First, someone has to read and interpret that data, even if computer algorithms can find a good chunk of the terrorist communications that the government just knows it is going to find.

There is the other problem of NSA employee integrity. Can employees be trusted to leave celebrity data alone? Can employees be trusted not to accept a bribe for intelligence information on a political rival or to divert an investigation to someone else? Can employees be trusted to make honest interpretations that actually lead to real terrorists rather than innocent people? It might be worth the effort and risk for a Homeland Security employee to find a way to finger a rival or adversary outside of the agency. Having the power to lock someone up without bail or access to an attorney seems like it would be mighty tempting for a grumbling employee.

While the problems and questions above could use more scrutiny and public debate, there is one question that seems absent from the mainstream press: Can the federal government keep the data they collect safe from adversaries like foreign governments? How about groups like Anonymous and Lulzsec?

Given the success of Anonymous in breaching security in a variety of contexts and with a wide range of groups and individuals, I'm going to place low odds that the Federal government will be able to protect their treasure trove of information resulting from intelligence gathering. The political, military and commercial value of that information provides an enormous temptation for foreign governments and loosely associated opposition groups. Anonymous seems to have a penchant for turning in really bad guys and fighting for freedom and justice so they may actually be an ally for the people in this struggle for personal freedom and privacy on the Internet.

Certain foreign governments would take delight in capturing information collected by the NSA. China has been caught breaching the security of various organizations and absconding with the information they collect in what can best be described as state sponsored computer warfare. I think it's only a matter of time before an organized campaign originating in a foreign country will succeed in breaching NSA security on a scale that cannot be hidden or denied.

Organizations like Anonymous have demonstrated ample skill at surveying and circumventing network security. They are clearly opposed to the surveillance state, so that could be a motivation for proving the government wrong. Worse, they tend to do data dumps that can embarrass a fair number of well-placed people.

Besides, it is a well-known axiom that no security system can protect against all threats. While it's not difficult to believe that the NSA has employed numerous security mechanisms to protect the information they collect, they most certainly cannot protect against all possible attacks. Over time, that risk will grow as technically superior opposition groups test that security and eventually find a hole. To put this in perspective, I have more trust in Amazon and Google to protect my data than the NSA.

Maybe it's just me, but I don't think the government is who we should be worried about when it comes to the use of the information stored in the NSA. I think we should be more concerned about whether or not the government can protect that information from unauthorized access.