The Geeks Daily

OP
sygeek
Technomancer
The Failed Experiment of Software Patents
I've noted before that we are witnessing a classic patent thicket in the realm of smartphones, with everyone and his or her dog suing everyone else (and their dog). But without doubt one of the more cynical applications of intellectual monopolies is Oracle's suit against Google. This smacked entirely of the lovely Larry Ellison spotting a chance to extract some money without needing to do much other than point his legal department in the right direction.

If that sounds harsh, take a read of this document from the case that turned up recently. It's Google's response to an Oracle expert witness's estimate of how much the former should be paying the latter:

Cockburn opines that Google, if found to infringe, would owe Oracle between 1.4 and 6.1 billion dollars -- a breathtaking figure that is out of proportion to any meaningful measure of the intellectual property at issue. Even the low end of Cockburn’s range is over 10 times the amount that Sun Microsystems, Inc. made each year for the entirety of its Java licensing program and 20 times what Sun made for Java-based mobile licensing. Cockburn’s theory is neatly tailored to enable Oracle to finance nearly all of its multi-billion dollar acquisition of Sun, even though the asserted patents and copyrights accounted for only a fraction of the value of Sun.

It does, indeed, sound rather as if Ellison is trying to get his entire purchase price back in a single swoop.

Now, I may be somewhat biased against this action, since it is causing all sorts of problems for the Linux-based Android, and I am certainly not a lawyer, but it does seem to me that the points of Google's lawyers are pretty spot on. For example:

First, Cockburn has no basis for including all of Google’s revenue from Android phones into the base of his royalty calculation. The accused product here is the Android software platform, which Google does not sell (and Google does not receive any payment, fee, royalty, or other remuneration for its contributions to Android). Cockburn seems to be arguing that Google’s advertising revenue from, e.g., mobile searches on Android devices should be included in the royalty base as a convoyed sale, though he never articulates or supports this justification and ignores the applicable principles under Uniloc and other cases. In fact, the value of the Android software and of Google’s ads are entirely separate: the software allows for phones to function, whether or not the user is viewing ads; and Google’s ads are viewable on any software and are not uniquely enabled by Android. Cockburn’s analysis effectively seeks disgorgement of Google’s profits even though “[t]he determination of a reasonable royalty . . . is based not on the infringer’s profit, but on the royalty to which a willing licensor and a willing licensee would have agreed at the time the infringement began.”

Oracle's expert seems to be adopting the old kitchen-sink approach, throwing in everything he can think of.

Second, Cockburn includes Oracle’s “lost profits and opportunities” in his purported royalty base. This is an obvious ploy to avoid the more demanding test for recovery of lost profits that Oracle cannot meet. ... Most audaciously, Cockburn tries to import into his royalty base the alleged harm Sun and Oracle would have suffered from so-called “fragmentation” of Java into myriad competing standards, opining that Oracle’s damages from the Android software includes theoretical downstream harm to a wholly different Oracle product. This is not a cognizable patent damages theory, and is unsupported by any precedent or analytical reasoning.

Even assuming that Google has willfully infringed on all the patents that Oracle claims - and that has still to be proved - it's hard to see how Oracle has really lost “opportunities” as a result. If anything, the huge success of Android, based as it is on Java, is likely to increase the demand for Java programmers, and generally make the entire Java ecosystem more valuable - greatly to Oracle's benefit.

So, irrespective of any royalties that may or may not be due, Oracle has in any case already gained from Google's action, and will continue to benefit from the rise of Android as the leading smartphone operating system. Moreover, as Android is used in other areas - tablets, set-top boxes, TVs etc. - Oracle will again benefit from the vastly increased size of the Java ecosystem over which it has substantial control.

Of course, I am totally unsurprised to find Oracle doing this. But to be fair to Larry Ellison and his company, this isn't just about Oracle: it is also about the inherent problems of software patents, which encourage this kind of behavior (not least by sometimes rewarding it handsomely).

Lest you think this is just my jaundiced viewpoint, let's turn to a recent paper from James Bessen, who is a Fellow of the Berkman Center for Internet and Society at Harvard, and a Lecturer at the Boston University School of Law. I've mentioned Bessen several times in this blog, in connection with his book "Patent Failure", which is a look at the US patent system in general. Here's the background to the current paper, entitled "A Generation of Software Patents":

In 1994, the Court of Appeals for the Federal Circuit decided in In re Alappat that an invention that had a novel software algorithm combined with a trivial physical step was eligible for patent protection. This ruling opened the way for a large scale increase in patenting of software. Alappat and his fellow inventors were granted patent 5,440,676, the patent at issue in the appeal, in 1995. That patent expired in 2008. In other words, we have now experienced a full generation of software patents.

The Alappat decision was controversial, not least because the software industry had been highly innovative without patent protection. In fact, there had long been industry opposition to patenting software. Since the 1960s, computer companies opposed patents on software, first, in their input to a report by a presidential commission in 1966 and then in amici briefs to the Supreme Court in Gottschalk v. Benson in 1972 (they later changed their views). Major software firms opposed software patents through the mid-1990s. Perhaps more surprising, software developers themselves have mostly been opposed to patents on software.

That's a useful reminder that the software industry was innovative before there were software patents, and didn't want them introduced. The key question that Bessen addresses in his paper is a good one: how have things panned out in the 15 or so years since software patents have been granted in the US?

Here's what he says happened:

To summarize the literature, in the 1990s, the number of software patents granted grew rapidly, but these were acquired primarily by firms outside the software industry and perhaps for reasons other than to protect innovations. Relatively few software firms obtained patents in the 1990s and so, it seems that most software firms did not benefit from software patents. More recently, the majority of venture-backed startups do seem to have obtained patents. The reasons for this, however, are not entirely clear and so it is hard to know whether these firms realized substantial positive incentives for investing in innovation from patents. On the other hand, software patents are distinctly implicated in the tripling of patent litigation since the early 1990s. This litigation implies that software patents imposed significant disincentives for investment in R&D for most industries including software.

It is hard to conclude from the above findings that software patents significantly increased R&D incentives in the software industry.

And yet this is one of the reasons that is often given to justify the existence of software patents despite their manifest problems.

Bessen then goes on to look at how things have changed more recently:

most software firms still do not patent, although the percentage has increased. And most software patents go to firms outside the software industry, despite the industry’s substantial role in software innovation. While the share of patents going to the software industry has increased, that increase is largely the result of patenting by a few large firms.

Again, this gives the lie to the claim that software patents are crucial for smaller companies in order to protect their investments; instead, the evidence is that large companies are simply building up bigger and bigger patent portfolios, largely for defensive purposes, as Bessen notes in his concluding remarks:

Has the patent system adapted to software patents so as to overcome initial problems of too little benefit for the software industry and too much litigation? The evidence makes it hard to conclude that these problems have been resolved. While more software firms now obtain patents, most still do not, hence most software firms do not directly benefit from software patents. Patenting in the software industry is largely the activity of a few large firms. These firms realize benefits from patents, but the incentives that patents provide them might well be limited because these firms likely have other ways of earning returns from their innovations, such as network effects and complementary services. Moreover, anecdotal evidence suggests that some of these firms patent for defensive reasons, rather than to realize rents on their innovations: Adobe, Oracle and others announced that patents were not necessary in order to promote innovation at USPTO hearings in 1994, yet they now patent heavily.

On the other hand, the number of lawsuits involving software patents has more than tripled since 1999. This represents a substantial increase in litigation risk and hence a disincentive to invest in innovation. The silver lining is that the probability that a software patent is in a lawsuit has stopped increasing and might have begun a declining trend. This occurred perhaps in response to a new attitude in the courts and several Supreme Court decisions that have reined in some of the worst excesses related to software patents.

These comments come from an academic who certainly has no animus against patents. They hardly represent a ringing endorsement, but emphasize, rather, that very little is gained by granting such intellectual monopolies. Careful academic work like this, taken together with the extraordinary circus we are witnessing in the smartphone arena, strengthens the case for calling a halt now to the failed experiment of software patents.
 
OP
sygeek
Technomancer
The PicoLisp Ticker
Around the end of May, I was playing with an algorithm I had received from Bengt Grahn many years ago: a small program (it was even part of the PicoLisp distribution, as "misc/crap.l", for many years) which, when given an arbitrary sample text in some language, produces an endless stream of pseudo-text that strongly resembles that language.
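
The details are in "misc/crap.l", but the basic technique behind generators like this is a Markov chain over the sample text: record which characters follow each short context, then walk that table at random. As a rough illustration only, here is a minimal sketch of an order-2 character chain in Python; I have borrowed the names 'learn' and 'crap' from the PicoLisp original, though its actual algorithm may well differ in detail:
Code:
import random
from collections import defaultdict

def learn(text, order=2):
    # Map each `order`-character context to the characters seen after it.
    table = defaultdict(list)
    for i in range(len(text) - order):
        table[text[i:i + order]].append(text[i + order])
    return table

def crap(table, n=300, order=2):
    # Emit n characters of pseudo-text resembling the training sample.
    state = random.choice(list(table))
    out = list(state)
    for _ in range(n):
        nxt = random.choice(table.get(state, [" "]))
        out.append(nxt)
        state = (state + nxt)[-order:]
    return "".join(out)

sample = open("misc/ticker.txt").read()   # any plain-text sample
print(crap(learn(sample)))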

It was fun, so I decided to set up a PicoLisp "Ticker" page, producing a stream of "news": *ticker.picolisp.com

The source for the server is simple:
PHP:
# Declare the URLs (and files) that clients may access
(allowed ()
   *Page "!start" "@lib.css" "ticker.zip" )

(load "@lib/http.l" "@lib/xhtml.l")  # HTTP server and HTML functions
(load "misc/crap.l")                 # the text generator

(one *Page)  # initialize the page counter to 1

(de start ()
   (seed (time))  # seed the random generator
   (html 0 "PicoLisp Ticker" "@lib.css" NIL
      (<h2> NIL "Page " *Page)
      (<div> 'em50
         (do 3 (<p> NIL (crap 4)))  # three paragraphs of generated text
         (<spread>
            (<href> "Sources" "ticker.zip")
            (<this> '*Page (inc *Page) "Next page") ) ) ) )  # link to page + 1

(de main ()
   (learn "misc/ticker.txt") )  # learn from the sample text at startup

(de go ()
   (server 21000 "!start") )  # listen on port 21000

The sample text for the learning phase, "misc/ticker.txt", is a plain text version of the PicoLisp FAQ. The complete source, including the text generator, can be downloaded via the "Sources" link as "ticker.zip".

Now look at the "Next page" link, appearing on the bottom right of the page. It always points to a page with a number one greater than the current page, providing an unlimited supply of ticker pages.

I went ahead, and installed and started the server. To get some logging, I inserted the line
PHP:
(out 2 (prinl (stamp) " {" *Url "} Page " *Page "  [" *Adr "] " *Agent))  # log to stderr

at the beginning of the 'start' function.

On June 18th I announced it on Twitter, and watched the log files. Immediately, within one or two seconds (!), Googlebot accessed it:
PHP:
   2011-06-18 11:22:04  Page 1  [66.249.71.139] Mozilla/5.0 (compatible; Googlebot/2.1; +*www.google.com/bot.html)

Wow, I thought, that was fast! Don't know if this was just by chance, or if Google always has such a close watch on Twitter.

Anyway, I was curious about what the search engine would do with such nonsense text, and how it would handle the infinite number of pages. During the next seconds and minutes, other bots and possibly human users accessed the ticker:
PHP:
   2011-06-18 11:22:08  Page 1  [65.52.23.76] Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.0)
   2011-06-18 11:22:10  Page 1  [65.52.4.133] Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.0)
   2011-06-18 11:22:20  Page 1  [50.16.239.111] Mozilla/5.0 (compatible; Birubot/1.0) Gecko/2009032608 Firefox/3.0.8
   2011-06-18 11:29:52  Page 1  [174.129.42.87] Python-urllib/2.6
   2011-06-18 11:30:34  Page 1  [174.129.42.87] Python-urllib/2.6
   2011-06-18 11:33:54  Page 1  [89.151.99.92] Mozilla/5.0 (compatible; MSIE 6.0b; Windows NT 5.0) Gecko/2009011913 Firefox/3.0.6 TweetmemeBot
   2011-06-18 11:33:54  Page 1  [89.151.99.92] Mozilla/5.0 (compatible; MSIE 6.0b; Windows NT 5.0) Gecko/2009011913 Firefox/3.0.6 TweetmemeBot
   2011-06-18 13:47:21  Page 1  [190.175.174.220] Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.2.17) Gecko/20110428 Fedora/3.6.17-1.fc14 Firefox/3.6.17
   2011-06-18 13:49:13  Page 2  [190.175.174.220] Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.2.17) Gecko/20110428 Fedora/3.6.17-1.fc14 Firefox/3.6.17
   2011-06-18 13:49:21  Page 3  [190.175.174.220] Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.2.17) Gecko/20110428 Fedora/3.6.17-1.fc14 Firefox/3.6.17
   2011-06-18 19:43:36  Page 1  [24.167.162.218] Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/534.30 (KHTML, like Gecko) Chrome/12.0.742.91 Safari/534.30
   2011-06-18 19:43:54  Page 2  [24.167.162.218] Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/534.30 (KHTML, like Gecko) Chrome/12.0.742.91 Safari/534.30
   2011-06-18 19:44:11  Page 3  [24.167.162.218] Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/534.30 (KHTML, like Gecko) Chrome/12.0.742.91 Safari/534.30
   2011-06-18 19:44:13  Page 4  [24.167.162.218] Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/534.30 (KHTML, like Gecko) Chrome/12.0.742.91 Safari/534.30
   2011-06-18 19:44:16  Page 5  [24.167.162.218] Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/534.30 (KHTML, like Gecko) Chrome/12.0.742.91 Safari/534.30
   2011-06-18 19:44:18  Page 6  [24.167.162.218] Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/534.30 (KHTML, like Gecko) Chrome/12.0.742.91 Safari/534.30
   2011-06-18 19:44:20  Page 7  [24.167.162.218] Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/534.30 (KHTML, like Gecko) Chrome/12.0.742.91 Safari/534.30

Mr. Google came back the following day:
PHP:
   2011-06-19 00:25:57  Page 2  [66.249.67.197] Mozilla/5.0 (compatible; Googlebot/2.1; +*www.google.com/bot.html)
   2011-06-19 01:03:13  Page 3  [66.249.67.197] Mozilla/5.0 (compatible; Googlebot/2.1; +*www.google.com/bot.html)
   2011-06-19 01:35:57  Page 4  [66.249.67.197] Mozilla/5.0 (compatible; Googlebot/2.1; +*www.google.com/bot.html)
   2011-06-19 02:39:19  Page 5  [66.249.67.197] Mozilla/5.0 (compatible; Googlebot/2.1; +*www.google.com/bot.html)
   2011-06-19 03:43:39  Page 6  [66.249.67.197] Mozilla/5.0 (compatible; Googlebot/2.1; +*www.google.com/bot.html)
   2011-06-19 04:17:02  Page 7  [66.249.67.197] Mozilla/5.0 (compatible; Googlebot/2.1; +*www.google.com/bot.html)

In between (not shown here) there were also some accesses, probably by non-bots, who usually gave up after a few pages.

Mr. Google, however, assiduously went through "all" pages. The page numbers increased sequentially, but he also re-visited page 1, going up again. Now there were several indexing threads, and by June 23rd the first one exceeded page 150.

I felt sorry for poor Googlebot, and installed a "robots.txt" the same day, disallowing the ticker page for robots. I could see that several other bots fetched "robots.txt". But not Google. Instead, it kept following the pages of the ticker.
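
For the record, a "robots.txt" that disallows everything for all robots (in effect what mine did, as this whole site is the ticker; see the "disallowed all" log entry below) is just two lines:
Code:
User-agent: *
Disallow: /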

Then, finally, on July 5th, Googlebot looked at "robots.txt":
PHP:
   "robots.txt" 2011-07-05 07:03:05 Mozilla/5.0 (compatible; Googlebot/2.1; +*www.google.com/bot.html) ticker.picolisp.com
   "robots.txt: disallowed all"

The indexing, however, went on. Excerpt:
PHP:
   2011-07-05 04:27:46 {!start} Page 500  [66.249.71.203] Mozilla/5.0 (compatible; Googlebot/2.1; +*www.google.com/bot.html)
   2011-07-05 04:58:50 {!start} Page 501  [66.249.71.203] Mozilla/5.0 (compatible; Googlebot/2.1; +*www.google.com/bot.html)
   2011-07-05 05:30:24 {!start} Page 502  [66.249.71.203] Mozilla/5.0 (compatible; Googlebot/2.1; +*www.google.com/bot.html)
   2011-07-05 06:02:10 {!start} Page 503  [66.249.71.203] Mozilla/5.0 (compatible; Googlebot/2.1; +*www.google.com/bot.html)
   2011-07-05 06:32:14 {!start} Page 504  [66.249.71.203] Mozilla/5.0 (compatible; Googlebot/2.1; +*www.google.com/bot.html)
   2011-07-05 07:02:41 {!start} Page 505  [66.249.71.203] Mozilla/5.0 (compatible; Googlebot/2.1; +*www.google.com/bot.html)
   2011-07-05 08:02:31 {!start} Page 506  [66.249.71.203] Mozilla/5.0 (compatible; Googlebot/2.1; +*www.google.com/bot.html)
   2011-07-05 08:45:52 {!start} Page 507  [66.249.71.203] Mozilla/5.0 (compatible; Googlebot/2.1; +*www.google.com/bot.html)
   2011-07-05 09:20:06 {!start} Page 508  [66.249.71.203] Mozilla/5.0 (compatible; Googlebot/2.1; +*www.google.com/bot.html)
   2011-07-05 09:51:49 {!start} Page 509  [66.249.71.203] Mozilla/5.0 (compatible; Googlebot/2.1; +*www.google.com/bot.html)

Strange. I would have expected the indexing to stop after Page 505.

In fact, all other robots seem to obey "robots.txt". Mr. Google, however, even started a new thread again five days later:
PHP:
   2011-07-10 02:22:52 {!start} Page 1  [66.249.71.203] Mozilla/5.0 (compatible; Googlebot/2.1; +*www.google.com/bot.html)

I suppose I should feel flattered that the PicoLisp news ticker is so interesting!

How will this go on? As of today, we have reached:
PHP:
   2011-07-15 09:42:36 {!start} Page 879  [66.249.71.203] Mozilla/5.0 (compatible; Googlebot/2.1; +*www.google.com/bot.html)

I'll stay tuned ...
 
OP
sygeek
Technomancer
Before Python
By Guido van Rossum
This morning I had a chat with the students at Google's CAPE program. Since I wrote up what I wanted to say I figured I might as well blog it here. Warning: this is pretty unedited (or else it would never be published :). I'm posting it in my "personal" blog instead of the "Python history" blog because it mostly touches on my career before Python. Here goes.

Have you ever written a computer program? Using which language?
  • HTML
  • Javascript
  • Java
  • Python
  • C++
  • C
  • Other - which?
[It turned out the students had used a mixture of Scratch, App Inventor, and Processing. A few students had also used Python or Java.]

Have you ever invented a programming language? :)

If you have programmed, you know some of the problems with programming languages. Have you ever thought about why programming isn't easier? Would it help if you could just talk to your computer? Have you tried speech recognition software? I have. It doesn't work very well yet. :)

How do you think programmers will write software 10 years from now? Or 30? 50?

Do you know how programmers worked 30 years ago?

I do.

I was born in Holland in 1956. Things were different.

I didn't know what a computer was until I was 18. However, I tinkered with electronics. I built a digital clock. My dream was to build my own calculator.

Then I went to university in Amsterdam to study mathematics and they had a computer that was free for students to use! (Not unlimited though. We were allowed to use something like one second of CPU time per day. :)

I had to learn how to use punch cards. There were machines to create them that had a keyboard. The machines were as big as a desk and made a terrible noise when you hit a key: a small hole was punched in the card with a huge force and great precision. If you made a mistake you had to start over.

I didn't get to see the actual computer for several more years. What we had in the basement of the math department was just an end point for a network that ran across the city. There were card readers and line printers and operators who controlled them. But the actual computer was elsewhere.

It was a huge, busy place, where programmers got together and discussed their problems, and I loved to hang out there. In fact, I loved it so much I nearly dropped out of university. But eventually I graduated.

Aside: Punch cards weren't invented for computers; they were invented for sorting census data and the like before WW2. [UPDATE: actually much earlier, though the IBM 80-column format I used did originate in 1928.] There were large mechanical machines for sorting stacks of cards. But punch cards are the reason that some software still limits you (or just defaults) to 80 characters per line.

My first program was a kind of "hello world" program written in Algol-60. That language was only popular in Europe, I believe. After another student gave me a few hints I learned the rest of the language straight from the official definition of the language, the "Revised Report on the Algorithmic Language Algol-60." That was not an easy report to read! The language was a bit cumbersome, but I didn't mind, I learned the basics of programming anyway: variables, expressions, functions, input/output.

Then a professor mentioned that there was a new programming language named Pascal. There was a Pascal compiler on our mainframe so I decided to learn it. I borrowed the book on Pascal from the departmental library (there was only one book, and only one copy, and I couldn't afford my own). After skimming it, I decided that the only thing I really needed were the "railroad diagrams" at the end of the book that summarized the language's syntax. I made photocopies of those and returned the book to the library.

Aside: Pascal really had only one new feature compared to Algol-60, pointers. These baffled me for the longest time. Eventually I learned assembly programming, which explained the memory model of a computer for the first time. I realized that a pointer was just an address. Then I finally understood them.

I guess this is how I got interested in programming languages. I learned the other languages of the day along the way: Fortran, Lisp, Basic, Cobol. With all this knowledge of programming, I managed to get a plum part-time job at the data center maintaining the mainframe's operating system. It was the most coveted job among programmers. It gave me access to unlimited computer time, the fastest terminals (still 80 x 24 though :), and most important, a stimulating environment where I got to learn from other programmers. I also got access to a Unix system, learned C and shell programming, and at some point we had an Apple II (mostly remembered for hours of playing space invaders). I even got to implement a new (but very crummy) programming language!

All this time, programming was one of the most fun things in my life. I thought of ideas for new programs to write all the time. But interestingly, I wasn't very interested in using computers for practical stuff! Nor even to solve mathematical puzzles (except that I invented a clever way of programming Conway's Game of Life that came from my understanding of using logic gates to build a binary addition circuit).

What I liked most, though, was writing programs to make the life of programmers better. One of my early creations was a text editor that was better than the system's standard text editor (which wasn't very hard :). I also wrote an archive program that helped conserve disk space; it was so popular and useful that the data center offered it to all its customers. I liked sharing programs, and my own principles for sharing were very similar to what later would become Open Source (except I didn't care about licenses -- still don't :).

As a term project I wrote a static analyzer for Pascal programs with another student. Looking back I think it was a horrible program, but our professor thought it was brilliant and we both got an A+. That's where I learned about parsers and such, and that you can do more with a parser than write a compiler.

I combined pleasure with a good cause when I helped a small left-wing political party in Holland automate their membership database. This was until then maintained by hand as a collection of metal plates into which letters were stamped using an antiquated machine not unlike a steam hammer :). In the end the project was not a great success, but my contributions (including an emulation of Unix's venerable "ed" editor program written in Cobol) piqued the attention of another volunteer, whose day job was as a computer science researcher at the Mathematical Center. (Now CWI.)

This was Lambert Meertens. It so happened that he was designing his own programming language, named B (later ABC), and when I graduated he offered me a job on his team of programmers who were implementing an interpreter for the language (what we would now call a virtual machine).

The rest I have written up earlier in my Python history blog.
 
OP
sygeek
Technomancer
When patents attack Android
By David Drummond
I have worked in the tech sector for over two decades. Microsoft and Apple have always been at each other’s throats, so when they get into bed together you have to start wondering what's going on. Here is what’s happening:

Android is on fire. More than 550,000 Android devices are activated every day, through a network of 39 manufacturers and 231 carriers. Android and other platforms are competing hard against each other, and that’s yielding cool new devices and amazing mobile apps for consumers.

But Android’s success has yielded something else: a hostile, organized campaign against Android by Microsoft, Oracle, Apple and other companies, waged through bogus patents.

They’re doing this by banding together to acquire Novell’s old patents (the “CPTN” group including Microsoft and Apple) and Nortel’s old patents (the “Rockstar” group including Microsoft and Apple), to make sure Google didn’t get them; seeking $15 licensing fees for every Android device; attempting to make it more expensive for phone manufacturers to license Android (which we provide free of charge) than Windows Mobile; and even suing Barnes & Noble, HTC, Motorola, and Samsung. Patents were meant to encourage innovation, but lately they are being used as a weapon to stop it.

A smartphone might involve as many as 250,000 (largely questionable) patent claims, and our competitors want to impose a “tax” for these dubious patents that makes Android devices more expensive for consumers. They want to make it harder for manufacturers to sell Android devices. Instead of competing by building new features or devices, they are fighting through litigation.

This anti-competitive strategy is also escalating the cost of patents way beyond what they’re really worth. Microsoft and Apple’s winning $4.5 billion bid for Nortel’s patent portfolio was nearly five times larger than the pre-auction estimate of $1 billion. Fortunately, the law frowns on the accumulation of dubious patents for anti-competitive means — which means these deals are likely to draw regulatory scrutiny, and this patent bubble will pop.

We’re not naive; technology is a tough and ever-changing industry and we work very hard to stay focused on our own business and make better products. But in this instance we thought it was important to speak out and make it clear that we’re determined to preserve Android as a competitive choice for consumers, by stopping those who are trying to strangle it.

We’re looking intensely at a number of ways to do that. We’re encouraged that the Department of Justice forced the group I mentioned earlier to license the former Novell patents on fair terms, and that it’s looking into whether Microsoft and Apple acquired the Nortel patents for anti-competitive means. We’re also looking at other ways to reduce the anti-competitive threats against Android by strengthening our own patent portfolio. Unless we act, consumers could face rising costs for Android devices — and fewer choices for their next phone.
 
OP
sygeek
Technomancer
We were raised by the Valley
By Pablo Villalba
My Romanian friend

Romanian and Spanish are quite different languages. I'm Spanish, and Romanian is almost impossible for me to understand, since I have never studied it.

Some years ago I met a group of Romanians who were in Spain for the first time. I was surprised to see that they could understand Spanish quite well, even if they could barely speak it. They had never studied Spanish before, and I was very puzzled. How could it be that they understood me while I couldn't understand them?

My friend explained: as a child, she had spent long hours watching Latin American soap operas on TV. These shows had Spanish audio and Romanian subtitles. Because she was a child, like many others, she just picked Spanish up naturally from hearing it – but she never learned how to speak it.

She was raised by her environment and close family, but while she watched those shows she learned about a completely different language.

This story is not about Romanian and Spanish, or a (perhaps rare) case of somebody learning a language by accident. I understood the meaning of this story much later, when I looked back at my first years with a computer.

A kid with a computer

Your parents and teachers are just some of your influences. You have also been raised in a way by Disney, Hollywood, TV series. I was, like many others, raised by the Valley.

As a kid I would play video games and enjoy the quirky humor from LucasArts. Then my father would set up QBasic for me and help me get started writing my own games. I'd hop onto IRC to learn and share with others. I'd play online and meet like-minded people, and learn their language and style by imitation. I'd read and learn about just about everything fun I could get my hands on. I'd read Slashdot and embrace the open-source anti-Microsoft ideas. I'd learn web design and try to build something like others were building out there. And I'd dream of growing up and having a game development startup.

Unknowingly, I was growing up into a culture that wasn't my immediate environment. And I felt incredibly at home. That doesn't mean I got disconnected from my surroundings; I just had a connection with that new world, with its trends and stories and memes.

The pilgrims

I was 24 the first time I went to the Valley. As I walked through San Francisco and met people there, I couldn't help having a déjà vu feeling. It was like I had already been there a long time ago, like the city had been waiting for me all those years.

As Nolan Bushnell gave me a ride in his car through the city, I was thinking about this: all the kids who grew up in their little towns hacking on their computers would someday make their pilgrimage and meet each other here. The Valley had raised us all, and we were finally coming back home.

 
OP
sygeek
Technomancer
I'm a phony. Are you?
By Scott Hanselman
pho·ny also pho·ney (fō'nē) adj. pho·ni·er, pho·ni·est

1.
a. Not genuine or real; counterfeit: a phony credit card.
b. False; spurious: a phony name.

2. Not honest or truthful; deceptive: a phony excuse.

3.
a. Insincere or hypocritical.
b. Giving a false impression of truth or authenticity; specious.


Along with my regular job at Microsoft I also mentor a number of developers and program managers. I spoke to a young man recently who is extremely thoughtful and talented and he confessed he was having a crisis of confidence. He was getting stuck on things he didn't think he should be getting stuck on, not moving projects forward, and it was starting to seep into his regular life.

He said:

"Deep down know I’m ok. Programming since 13, graduated top of CS degree, got into Microsoft – but [I feel like I'm] an imposter."

I told him, straight up, You Are Not Alone.

For example, I've got 30 domains and I've only done something awesome with 3 of them. Sometimes when I log into my DNS manager I just see 27 failures. I think to myself, there's 27 potential businesses, 27 potential cool open source projects just languishing. If you knew anything you'd have made those happen. What a phony.

I hit Zero Email a week ago, now I'm at 122 today in my Inbox and it's stressing me out. And I teach people how to manage their inboxes. What a phony.

When I was 21 I was untouchable. I thought I was a gift to the world and you couldn't tell me anything. The older I get the more I realize that I'm just never going to get it all, and I don't think as fast as I used to. What a phony.

I try to learn a new language each year and be a Polyglot Programmer but I can feel F# leaking out of my head as I type this and I still can't get my head around really poetic idiomatic Ruby. What a phony.

I used to speak Spanish really well and I still study Zulu with my wife but I spoke to a native Spanish speaker today and realize I'm lucky if I can order a burrito. I've all but forgotten my years of Amharic. My Arabic, Hindi and Chinese have atrophied into catch phrases at this point. What a phony. (Clarification: This one is not intended as a humblebrag. I was a linguist and languages were part of my identity and I'm losing that and it makes me sad.)

But here's the thing. We all feel like phonies sometimes. We are all phonies. That's how we grow. We get into situations that are just a little more than we can handle, or we get in a little over our heads. Then we can handle them, and we aren't phonies, and we move on to the next challenge.

The idea of the Imposter Syndrome is not a new one.

Despite external evidence of their competence, those with the syndrome remain convinced that they are frauds and do not deserve the success they have achieved. Proof of success is dismissed as luck, timing, or as a result of deceiving others into thinking they are more intelligent and competent than they believe themselves to be.

The opposite of this is even more interesting, the Dunning-Kruger effect. You may have had a manager or two with this issue. ;)

The Dunning–Kruger effect is a cognitive bias in which unskilled people make poor decisions and reach erroneous conclusions, but their incompetence denies them the metacognitive ability to recognize their mistakes.

It's a great read for a Wikipedia article, but here's the best line and the one you should remember.

...people with true ability tended to underestimate their relative competence.

I got an email from a podcast listener a few years ago. I remembered it when writing this post, found it in the archives and I'm including some of it here with emphasis mine.

I am a regular listener to your podcast and have great respect for you. With that in mind, I was quite shocked to hear you say on a recent podcast, "Everyone is lucky to have a job" and imply that you include yourself in this sentiment.

I have heard developers much lesser than your stature indicate a much more healthy (and accurate) attitude that they feel they are good enough that they can get a job whenever they want and so it's not worth letting their current job cause them stress. Do you seriously think that you would have a hard time getting a job or for that matter starting your own business? If you do, you have a self-image problem that you should seriously get help with.

But it's actually not you I'm really concerned about... it's your influence on your listeners. If they hear that you are worried about your job, they may be influenced to feel that surely they should be worried.


I really appreciated what this listener said and emailed him so. Perhaps my attitude is a Western Cultural thing, or a uniquely American one. I'd be interested in what you think, Dear Non-US Reader. I maintain that most of us feel this way sometimes. Perhaps we're unable to admit it. When I see programmers with blog titles like "I'm a freaking ninja" or "bad ass world's greatest programmer" I honestly wonder if they are delusional or psychotic. Maybe they just aren't very humble.

I stand by my original statement that I feel like a phony sometimes. Sometimes I joke, "Hey, it's a good day, my badge still works" or I answer "How are you?" with "I'm still working." I do that because it's true. I'm happy to have a job, while I could certainly work somewhere else. Do I need to work at Microsoft? Of course not. I could probably work anywhere if I put my mind to it, even the IT department at Little Debbie Snack Cakes. I use insecurity as a motivator to achieve and continue teaching.

I asked some friends if they felt this way and here's some of what they said.


  • Totally! Not. I've worked hard to develop and hone my craft, I try to be innovative, and deliver results.
  • Plenty of times! Most recently I started a new job where I've been doing a lot of work in a language I'm rusty in and all the "Woot I've been doing 10 years worth of X language" doesn't mean jack. Very eye opening, very humbling, very refreshing.
  • Quite often actually, especially on sites like Stack Overflow. It can be pretty intimidating and demotivating at times. Getting started in open source as well. I usually get over it and just tell myself that I just haven't encountered a particular topic before so I'm not an expert at it yet. I then dive in and learn all I can about it.
  • I always feel like a phony just biding my time until I'm found out. It definitely motivates me to excel further, hoping to outrun that sensation that I'm going to be called out for something I can't do.
  • Phony? I don't. If anything, I wish I was doing more stuff on a grander scale. But I'm content with where I am now (entrepreneurship and teaching).
  • I think you are only a phony when you reflect on your past work and don't feel comfortable about your own efforts and achievements.
  • Hell, no. I work my ass off. I own up to what I don't know, admit my mistakes, give credit freely to others when it's due and spend a lot of time always trying to learn more. I never feel like a phony.
  • Quite often. I don't truly think I'm a phony, but certainly there are crises of confidence that happen... particularly when I get stuck on something and start thrashing.

There are some folks who totally have self-confidence. Of the comment sample above, there are three "I don't feel like a phony" comments. But check this out: two of those folks aren't in IT. Perhaps IT people are more likely to have low self-confidence?

The important thing is to recognize this: If you are reading this or any blog, writing a blog of your own, or working in IT, you are probably in the top 1% of the wealth in the world. It may not feel like it, but you are very fortunate and likely very skilled. There are a thousand reasons why you are where you are, and your self-confidence and ability are just one factor. It's OK to feel like a phony sometimes. It's healthy if it moves you forward.

I'll leave you with this wonderful comment from Dave Ward:

I think the more you know, the more you realize just how much you don't know. So paradoxically, the deeper down the rabbit hole you go, the more you might tend to fixate on the growing collection of unlearned peripheral concepts that you become conscious of along the way.

That can manifest itself as feelings of fraudulence when people are calling you a "guru" or "expert" while you're internally overwhelmed by the ever-expanding volumes of things you're learning that you don't know.

However, I think it's important to tamp those insecurities down and continue on with confidence enough to continue learning. After all, you've got the advantage of having this long list of things you know you don't know, whereas most people haven't even taken the time to uncover that treasure map yet. What's more, no one else has it all figured out either. We're all just fumbling around in the adjacent possible, grasping at whatever good ideas and understanding we can manage to wrap our heads around.


Tell me your stories in the comments. We're also discussing this on this Google+ thread.

And remember, "Fake it 'til you make it."
 

Who

Guess Who's Back
I am sticking this thread as I personally feel the articles here are very good. I also request other people to contribute here. I would have moved it to the OSS & Programming section, but I think the articles fall into a broader category, so Community Discussion seems fine at the moment. Feel free to make any suggestions, thank you.
 

Nipun

Whompy Whomperson
I am sticking this thread as I personally feel the articles here are very good. I also request other people to contribute here. I would have moved it to the OSS & Programming section, but I think the articles fall into a broader category, so Community Discussion seems fine at the moment. Feel free to make any suggestions, thank you.
Thats great..! :D

The articles are really very good, sygeek! :)
 
OP
sygeek
Technomancer
What Netflix Could Have Said This Week
In the 14 years since we started Netflix, we’ve gained more than 25 million customers worldwide by providing the best DVD by mail service anywhere. Along the way, we’ve built an unrivaled streaming service that continues to grow every day. Today we want to tell you about some big changes at Netflix as we get ready for the future.

To begin, we’re adding a video games upgrade option to our DVD by mail service. Similar to our upgrade option for Blu-ray, DVD members can now rent Wii, PS3 and Xbox 360 games. This is something we’ve been asked to do for years, and we’re pleased to finally provide it.

As we worked on this new addition, we could no longer deny that the DVD and streaming services are growing apart very quickly. These are in fact two very different businesses, with different customer needs. We even have different offices for them! Providing an experience that simultaneously addresses these two very different worlds has become an increasing challenge for us as we grow the company and evolve the website.

That’s why today we’re announcing significant changes to our company. First, we are renaming the DVD by mail business to Netflix Classic. This is the same DVD rental service you’re used to, but it’s more than just a name: Netflix Classic is a new company, operating independently as a subsidiary of Netflix.

Moving forward, Netflix as a company will be dedicated to streaming media. This is a realization of our original vision, and of the company’s name: watching movies over the Internet. The Netflix.com website and mobile apps will exclusively service our streaming library. DVD members will manage their queues at classic.netflix.com.

If you subscribe to both services, you’ll see two charges on your credit card instead of one, but you’ll pay the same total amount per month you do now. This, along with our recent pricing changes, is just a necessary outcome from creating two separate companies. DVD members will of course still receive the same red Netflix envelope that has been familiar to them all these years.

Members can log into both sites using the same login. This will allow streaming-only members to add DVD by mail, and DVD-only members to upgrade to streaming, at any time. The websites, however, will remain separate, so that we can start giving these different worlds the unique attention they deserve.

We think the benefits are going to be huge. We’ll be rolling out the new websites in a few weeks, and you’ll see right away what we’re able to accomplish by providing a dedicated experience for each service. Until then, check out this video we made to see just a few examples of the new sites in action.

We want to thank you for supporting us for all these years, and we are very excited about all the new benefits Netflix and Netflix Classic are about to bring you. We can’t wait for you to try it out yourselves.
 
OP
sygeek
Technomancer
How Quake changed my life forever.

Recently I saw this article on Rock Paper Shotgun and realized: wow, I am not the only person in the world who has had his life changed by one video game. Even more interesting, this person had a bit of a life-altering experience thanks to the same game.

I realize how ridiculous it sounds to say "Quake changed my life," but it honestly did.

So let's go back to 1996, I am 24 years old and living with two really great friends, one is a software engineer just out of Case Western Reserve University, and the other is a successful entrepreneur/electronics buff/all around PC geek who rebuilds terminals, resells them, and makes a good living doing it.

What am I doing? Oh I am a guy who barely graduated high school, matter of fact I don't think I legitimately earned my high school diploma...I think someone at the school decided I should just be let out into the world. I am a glorified painter, who calls himself an artist, with no formal art education who gets to work on murals from time to time.

I can recall being in high school and sitting with my guidance counselor after a particularly lackluster semester's performance. I am sure she had good intentions, and was trying the best she could to motivate me. She told me that if I did not get better grades and buckle down, I might wind up homeless on the streets. I of course found this to be a bit shocking, and being so close to graduation I pretty much assumed I was going to be relegated to a job no one would like, doing my drawings when I had the spare time.

I always loved to draw, listen to hard rock/metal, read too many violent Dark Horse/Lobo Comics, and had an insane/nerdy fascination with Star Trek, Star Wars, Aliens, Predator, Terminator, Lord of the Rings, and just about any dark Sci-Fi/Fantasy style movie or book you could imagine. The kind of stuff that most people think is really cool now, but would immediately relegate you to punching bag status, and honestly not very cool with the chicks back then.

Following graduation I was an obnoxious guy, loud and just trying to have a good time, all the while working as a tradesman in residential painting. Most of my days included at least 8 hours with some of the most right-wing conservative Christian people you could possibly imagine. Using the word **** could get a "you need to spend more time with Jesus." reaction. It was an odd experience sometimes, but a few of these guys were very generous and provided me, a kid who had very little direction, with patience, a steady paycheck, solid work ethic, an understanding of quality craftsmanship, and the ability to live on my own and help my then fiancée, now my wife of 13 years, through her undergrad. I don't know that I helped her so much as I bought the beer and food; she is incredibly motivated and far more intelligent than I am, and she really didn't need any helping.

So I am living with my two buddies Nick and Pete, they both have PCs and I am saving up my cash for a PC so I can play Doom and Dune II guilt free, but I never seem to be able to put the coins together in a timely enough manner to purchase one. I feel guilty because I am always logging hours on their machines, and feel like I am annoying...but the pull of video games is so strong.

Sometimes the painting work dries up and I have to sit home for a few days, or a week in some cases and eat into my savings to pay the bills. Another time the car dies and I manage to convince myself to take on a payment for a pickup truck I could barely afford...in other words, the PC is not getting bought and spare cash is not readily available.

On a Friday night Nick comes home and says to Pete, hey, I have QTest, and they immediately go down the basement and magically "install" QTest on Pete's 486DX2, the fastest of their 2 machines at that time. I was hooked; from the very first moment I was given the keyboard and played Quake I was completely hooked. I say "magically installed" because at this point Windows 95 is just out, and everything has to be run through DOS commands, which baffles me because I understand nothing about file structure, paths, or how a PC even works. I learn enough to launch Quake and change my yawspeed (this is before I knew about +mlook.)

What I did know was that Quake was so atmospheric, moody, and scary that I quickly forgot about Doom. I would turn off the lights in the basement and play QTest for hours after my roommates had gone off to bed and my wife was upstairs diligently studying towards her Bachelors.

A few months went by and Quake finally arrived on store shelves. I still did not have my own PC, and to my shock and dismay a 486DX2 would not run Quake fullscreen MP very well. If I wanted to get the full Quake experience I was going to have to shell out more money than I could imagine to get a new Pentium class PC.

Thankfully my roommate was kind enough to allow me to play Quake on his new Pentium machine from time to time, and the 486DX2 was just fast enough to run Quake MP in a window about the size of a postage stamp...which is what I did.

I played Quake on that 486 with the window the size of a stamp for hours, I played SP and I played MP. It did not matter that the game world was tiny on that 15 inch monitor, I just needed that window into the world of Quake. My imagination filled in all the gaps, I was so engaged with the setting I came up with my own stories in my head of what might be going on in that world. id had left the story incredibly vague, so the world of Quake was this creepy place that I imagined my own stories around and layered ideas on top of.

After about a year or two the lease on our house was up, and one of my friends built himself a new house, while the other moved into an apartment. I decided to move back in with my parents to save some cash for my upcoming wedding, and eventually I was able to get myself a PC, but only by caving in and putting the cost on a credit card I should not have put the charge on. This of course only led me deeper down the rabbit hole of Quake and Ben Morris' Worldcraft level editor. (Some of you might know it as Hammer these days as Valve purchased the rights to it from Ben ages ago.)

I still recall the day I sat in my room with my younger brother and fired up a Quake level editor for the first time. I think it was either Qed, or Qoole... I sat there and stared at what looked like the most complicated user interface I had ever seen. Now you have to remember, this is the guy who barely made it through high school. I don't know that I ever took a math course more advanced than Algebra, and I forgot anything I learned the moment I left the room each day.

Suddenly I am sitting here staring at 4 windows, X, Y, Z, and something that looks like the player view in Quake. The only thing I could think at the time was, WTF is X, Y, Z?

Thank god my younger brother Josh, who shares my addiction to video games to this day, happened to be sitting there with me. Thank god Josh actually paid attention in school, took a geometry course, and thank god he had no fear of experimenting with the software at all. If he had not been there that day, I might have closed the editor and never opened it again, just figuring I wasn't smart enough for this computer games stuff, so I'll just go back to Deathmatch.

Instead the two of us sat there for hours, figuring out what a brush was, how to put a texture on it, how to place lights in the scene, and how to get a monster in the game (a wireframe box in the editor). I still recall the first time we compiled a box room and saw a big error message that said "Leaked." We thought for sure our box wouldn't work, but it did...so we ignored that "Leaked Crap" for now. At the start of it all, we did it together.

At some point late into the night, I got tired and had to work the next day; I was forced to go to sleep. I woke up at 7am the next morning to see my younger brother still sitting at the PC looking weary, tired, and incredibly addicted. He never went to bed, and had built what looked like a crazy spiral staircase to hell...with jumping Shamblers in a lava pit at the bottom.

We were both hooked...I was pissed I had to go to work and he was probably going to sit at my computer all day and build maps. I was pissed of course at my situation, not him.

For the next year or two I spent every moment of my free time in Worldcraft making Quake maps. My younger brother went off to school and his interest in Quake Mappery died off with his responsibility to classes and lack of a PC to work on. Some members of my family seemed to get annoyed with me, with the exception of my wife, over my new addiction. My wife was incredibly supportive and allowed me the time other women would demand of their partner to edit Quake Maps.

I found myself sketching levels, drawing out floor plans, coming up with ways to create new traps and trying to figure out how to properly trigger events and get solid gameplay going. Around the same time I got married, moved into an apartment with my new wife, and was getting very tired of the kinds of silliness going on at my painting job.

It seemed the more time I spent on the PC, interacting with people online, soaking up all of the Quake/PC knowledge I could find, the more lame my regular job that consumed 8 hours of my day felt. Each day was a struggle to get out of bed and pull myself away from what I was loving, only to go into work to do something that began to feel less and less valuable. I was also working for some incredibly wealthy people and it quickly became apparent to me that there were socio-economic/class issues I did not like about being a decorative artist/painter.

It took about two years for me to get the nerve to post one of my levels online. By this point Quake 2 was out, dial-up was in, and I was knee deep in the Quake 2 engine and assets. All the while feeling like the heart and soul of Quake 2 was missing and lacked something visceral and intense. I released my first Quake 2 level titled Retaliatory Strike, and expected harsh criticism. I was scared to death people would not like it, and feared the negative feedback I would receive.

My fears proved to be unfounded and I was pretty happy with all the kind words being said about my work. I found the positive reviews and emails people sent telling me how much they liked my maps to be incredibly rewarding. In fact it was far more rewarding than any paycheck after a week of filling nail holes and caulking cracks could be. This eventually led me to lose my fear entirely and polish up some of my Quake maps/ideas and put them out there for people to play. All the while slowly realizing that if I could, I would spend all day and all night working on my maps. I had to tear myself away in order to make sure to give my wife, friends and family the attention they deserved.

I still recall the day my wife came home and said, we have saved up a good chunk of money, maybe we should start looking at houses. I thought this could be cool, lots of our friends have houses and they seem happy with them. I of course had no idea how much a house actually costs. We decided to make an appointment with the bank and get an idea if we were even able to get a home loan. Up until this point the only loan I had was for my Pickup Truck, and I think I had some unrealistic expectations of how long it takes to pay for a house.

The day we went to the bank was an especially nasty afternoon at work. When the bank representative started talking about 30 year mortgage rates I had the realization then and there; I could not possibly spend the rest of my life doing something I woke up and dreaded going to do each and every day.

It still had not occurred to me I could actually do games for a living...I barely graduated high school. Even my guidance counselor had assured me, if I didn't get good grades I wouldn't be able to do anything with myself. I was destined to the trades, or a fast food restaurant. I just was not smart enough.

Thankfully my wife did not share my sentiments. She encouraged me to look into schools; people make video games for a living, so why on earth couldn't I do the same thing? Thousands of people were downloading my Quake levels, and a few of them were sending me emails to say how much they liked them. Looking back, it's a good thing she did not see the limitations I saw for myself; my wife picked me up and pushed me to try something I did not think I was capable of. Months later I was enrolled part time at a local community college, beginning a long 6 year journey to my eventual 5 year BFA from the Cleveland Institute of Art, which eventually led me to a Cleveland area post production facility, EA Chicago, Raven, and finally Epic Games.

So Quake really did change my life. Quake was an approachable piece of technology, and the tools were simple enough for an artist with enough persistence to struggle through and learn the ropes. The fact that the engine and resources were open gave me the ability to see assets created by the original creators, as well as all of the additional content being churned out online. I could experiment and bring my own ideas to life, and there was an entire sub-culture and community online which supported and surrounded this pursuit.

Quake really did help change my life. It taught me if I wanted something bad enough I had to get out there and do it myself. It taught me how to type so I could communicate online, it taught me how to seek out information and familiarized me with the inner workings of a PC. Most of all I learned not to give up, to push myself to learn more and more each day, and that I was smart enough to do something other than paint or work at a hardware store.

Most important of all, my wife taught me that I was smart enough, and that the limits I saw for myself, based on what I had been told when I was younger, simply were not true. My time in the trades as a painter taught me that hard work, a strong work ethic, and persistence are sometimes worth more than any 101 course taken at an early age, when I may not have known exactly what it was I wanted to be doing after college.

My counselor had been mistaken; a few years later I would be paying for an apartment in Chicago and a mortgage in Cleveland. I was not homeless after all. At that time I had two homes and was flying back and forth between them on weekends.

My path to being an Effects Artist at Epic Games was by no means a straight one; I originally wanted to be a level designer or environment artist. There were plenty of challenges along the way, but the lesson is obvious to me now: if I could make it from being a directionless, lost, obnoxious, nerdy artist who barely graduated high school to working for Epic Games, then anyone with enough talent, effort, and motivation can achieve their goals and dreams.

At the end of all this, it wasn't just Quake that really changed my life; my wife did. Quake gave me a direction to point in, and my wife picked me up and pushed me forward when I thought the road was closed to me.

In other words, to anyone wishing to work in the game industry: if I can do this, then with enough effort and persistence, so can you.
 
OP
sygeek

sygeek

Technomancer
What are the chances of your coming into being?​
A little while ago I had the privilege of attending TEDx San Francisco, organized by the incomparable Christine Mason McCaull. One of the talks was by Mel Robbins, a riotously funny self-help author and life coach with a syndicated radio show. In it, she mentioned that scientists calculate the probability of your existing as you, today, at about one in 400 trillion (4×10^14).

“That’s a pretty big number,” I thought to myself. If I had 400 trillion pennies to my name, I could probably retire.

Previously, I had heard the Buddhist version of the probability of ‘this precious incarnation’. Imagine there was one life preserver thrown somewhere in some ocean and there is exactly one turtle in all of these oceans, swimming underwater somewhere. The probability that you came about and exist today is the same as that turtle sticking its head out of the water — into the middle of that life preserver. On one try.

So I got curious: are either of these numbers correct? Which one’s bigger? Are they gross exaggerations? Or is it possible that they are underestimates of the true number?

First, let us figure out the probability of one turtle sticking its head out of the one life preserver we toss out somewhere in the ocean. That’s a pretty straightforward calculation.

According to WolframAlpha, the total area of oceans in the world is 3.409×10^8 square kilometers, or 340,900,000 km² (131.6 million square miles, for those benighted souls who still cling to user-hostile British measures). Let’s say a life preserver’s hole is about 80 cm in diameter, which would make the area inside

3.14 × (0.4)^2 = 0.5024 m²

which we will conveniently round to 0.5 square meters. If one square kilometer is a million square meters, then the probability of Mr Turtle sticking his head out of that life preserver is simply the area inside the life preserver divided by the total area of all oceans, or

0.5 m² / (3.409×10^8 × 10^6 m²) = 1.47×10^-15

or one in 6.82×10^14, or about 1 in 700 trillion.
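
If you want to check the turtle arithmetic yourself, it fits in a few lines. Below is a minimal sketch in Python using the article’s own numbers (the WolframAlpha ocean area and the 80 cm hole); it uses math.pi rather than the rounded 3.14, so the result lands a hair lower.

```python
import math

# Ocean area: 3.409e8 km^2, with a million m^2 per km^2
ocean_area_m2 = 3.409e8 * 1e6

# Area inside an 80 cm diameter life preserver (radius 0.4 m)
hole_area_m2 = math.pi * 0.4 ** 2     # ~0.503 m^2

p = hole_area_m2 / ocean_area_m2
print(f"p = {p:.3g}, i.e. one in {1 / p:.3g}")
# -> p = 1.47e-15, i.e. one in 6.78e+14 (about 700 trillion)
```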

One in 400 trillion vs one in 700 trillion? I gotta say, the two numbers are pretty darn close, for such a farfetched notion from two completely different sources: old-time Buddhist scholars and present-day scientists. They agree to within a factor of two!

So to the second question: how accurate is this number? What would we come up with ourselves starting with first principles, making some reasonable assumptions and putting them all together? That is, instead of making one big hand-waving gesture and pronouncing, “The answer is five hundred bazillion squintillion,” we make a series of sequentially-reasoned, smaller hand-waving gestures so as to make it all seem scientific. (This is also known as ‘consulting’ – especially if you show it all in a PowerPoint deck.)

Oh, this is going to be fun.

First, let’s talk about the probability of your parents meeting. If they met one new person of the opposite sex every day from age 15 to 40, that would be about 10,000 people. Let’s confine the pool of possible people they could meet to 1/10 of the world’s population twenty years ago (one tenth of 4 billion = 400 million) so it considers not just the population of the US but that of the places they could have visited. Half of those people, or 200 million, will be of the opposite sex. So let’s say the probability of your parents meeting, ever, is 10,000 divided by 200 million:

10^4 / (2×10^8) = 5×10^-5, or one in 20,000.

Probability of boy meeting girl: 1 in 20,000.

So far, so unlikely.

Now let’s say the chances of them actually talking to one another is one in 10. And the chances of that turning into another meeting is about one in 10 also. And the chances of that turning into a long-term relationship is also one in 10. And the chances of that lasting long enough to result in offspring is one in 2. So the probability of your parents’ chance meeting resulting in kids is about 1 in 2000.

Probability of same boy knocking up same girl: 1 in 2000.

So the combined probability is already around 1 in 40 million — long but not insurmountable odds. Now things start getting interesting. Why? Because we’re about to deal with eggs and sperm, which come in large numbers.

Each sperm and each egg is genetically unique because of the process of meiosis; you are the result of the fusion of one particular egg with one particular sperm. A fertile woman has 100,000 viable eggs on average. A man will produce about 12 trillion sperm over the course of his reproductive lifetime. Let’s say a third of those (4 trillion) are relevant to our calculation, since the sperm created after your mom hits menopause don’t count. So the probability of that one sperm with half your name on it hitting that one egg with the other half of your name on it is

1/((100,000)(4 trillion)) = 1/((10^5)(4×10^12)) = 1 in 4×10^17, or one in 400 quadrillion.

Probability of right sperm meeting right egg: 1 in 400 quadrillion.
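
As a sanity check on the running total, here is a short sketch multiplying the odds so far, using nothing but the article’s own round-number assumptions.

```python
meet = 1 / 20_000                 # probability your parents ever met
kids = 1 / 2_000                  # ...and that the meeting produced kids
gametes = 1 / (100_000 * 4e12)    # right egg (1 of 100,000) meets
                                  # right sperm (1 of 4 trillion)

p_so_far = meet * kids * gametes
print(f"one in {1 / p_so_far:.3g}")   # -> one in 1.6e+25
```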

To that, we could factor in the probability that the one sperm and the one egg even got the chance to meet at all (maybe she wasn’t in the mood that night), but let’s not split hairs here. The numbers are getting plenty huge as it is.

But we’re just getting started.

Because the existence of you here now on planet earth presupposes another supremely unlikely and utterly undeniable chain of events. Namely, that every one of your ancestors lived to reproductive age – going all the way back not just to the first Homo sapiens, first Homo erectus and Homo habilis, but all the way back to the first single-celled organism. You are a representative of an unbroken lineage of life going back 4 billion years.

Let’s not get carried away here; we’ll just deal with the human lineage. Say humans or humanoids have been around for about 3 million years, and that a generation is about 20 years. That’s 150,000 generations. Say that over the course of all human existence, the likelihood of any one human offspring to survive childhood and live to reproductive age and have at least one kid is 50:50 – 1 in 2. Then what would be the chance of your particular lineage to have remained unbroken for 150,000 generations?

Well then, that would be one in 2^150,000, which is about 1 in 10^45,000 – a number so staggeringly large that my head hurts just writing it down. That number is not just larger than the number of particles in the universe – it’s larger than the number of particles in the universe if each particle were itself a universe.

Probability of every one of your ancestors reproducing successfully: 1 in 10^45,000

But let’s think about this some more. Remember the sperm-meeting-egg argument for the creation of you, since each gamete is unique? Well, the right sperm also had to meet the right egg to create your grandparents. Otherwise they’d be different people, and so would their children, who would then have had children who were similar to you but not quite you. This is also true of your grandparents’ parents, and their grandparents, and so on till the beginning of time. If even once the wrong sperm met the wrong egg, you would not be sitting here noodling online reading fascinating articles like this one. It would be your cousin Jethro, and you never really liked him anyway.

That means that in every step of your lineage, the probability of the right sperm meeting the right egg, such that the exact right ancestor would be created who would end up creating you, is that same one in 400 quadrillion (4×10^17) we calculated above.

So now we must account for that for 150,000 generations by raising 400 quadrillion to the 150,000th power:

(4×10^17)^150,000 ≈ 10^2,640,000

That’s a one followed by 2,640,000 zeroes, which would fill 11 volumes of a book the size of mine with zeroes.

To get the final answer, technically we need to multiply that by the 10^45,000, 2,000 and 20,000 up there, but those numbers are so shrimpy in comparison that it almost doesn’t matter. For the sake of completeness:

(10^2,640,000)(10^45,000)(2,000)(20,000) = 4×10^2,685,007 ≈ 10^2,685,000

Probability of your existing at all: 1 in 10^2,685,000
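
A number like 10^2,685,000 overflows ordinary floating point, so if you want to verify the arithmetic you have to add exponents (base-10 logarithms) rather than multiply probabilities. Here is a sketch under the article’s round-number assumptions; the exact log sum lands a bit above 2,685,000 because the article rounds each intermediate exponent down.

```python
import math

log_meet     = math.log10(20_000)           # parents meeting
log_kids     = math.log10(2_000)            # meeting -> kids
log_gametes  = 150_000 * math.log10(4e17)   # right sperm/egg, every generation
log_survival = 150_000 * math.log10(2)      # every ancestor reproducing

total_exponent = log_meet + log_kids + log_gametes + log_survival
print(f"one in 10^{total_exponent:,.0f}")   # -> one in 10^2,685,471
```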

As a comparison, the number of atoms in the body of an average male (80 kg, 175 lb) is 10^27. The number of atoms making up the earth is about 10^50. The number of atoms in the known universe is estimated at 10^80.

So what’s the probability of your existing? It’s the probability of about 224,000 people getting together – roughly the population of a mid-sized city – each to play a game of dice with trillion-sided dice. They each roll their die, and they all come up with the exact same number – say, 550,343,279,001. (Each trillion-sided die contributes a factor of 10^12 to the odds, and 223,750 × 12 = 2,685,000, the exponent we arrived at above.)

A miracle is an event so unlikely as to be almost impossible. By that definition, I’ve just shown that you are a miracle.

Now go forth and feel and act like the miracle that you are.

Think about it,

Ali B

Thanks for visiting! You can find more of my writing here and here. I also wrote a book on how smart women can find more love, which turns out to be the highest-rated of its kind on Amazon (4.9/5 stars). The book on how smart men can be more successful with women is also alright.

PS: Update 9/26/11: To all you smartypants out there who just can’t wait to tell me “the probability of existing of something that exists is 100%” and “this is all just hand-waving” — yes, Einstein, I know, and you’re totally missing the point. The probability of sentient life is not something that can be measured accurately, and hundreds of steps have been deleted for simplicity. It’s all an exercise to get you thinking, but some of you are so damn smart and obsessed with being right that you’ve lost the mental capacity to wonder and instead harp on the numerical accuracy of the calculation. And no matter how you slice it, it’s pretty remarkable that you and I, self-absorbed scallywags that we are, stand at the end of an unbroken chain of life going all the way back to the primordial slime. That’s the point. Now if you have something interesting to say, I’ll approve the comment, otherwise into the slag-heap of trolls it goes.
 

nisargshah95

Your Ad here
The articles are really good dude. Don't stop posting them!
:+1:
 
OP
sygeek

sygeek

Technomancer
The Humble Programmer​
By Edsger W. Dijkstra
As a result of a long sequence of coincidences I entered the programming profession officially on the first spring morning of 1952 and as far as I have been able to trace, I was the first Dutchman to do so in my country. In retrospect the most amazing thing was the slowness with which, at least in my part of the world, the programming profession emerged, a slowness which is now hard to believe. But I am grateful for two vivid recollections from that period that establish that slowness beyond any doubt.

After having programmed for some three years, I had a discussion with A. van Wijngaarden, who was then my boss at the Mathematical Centre in Amsterdam, a discussion for which I shall remain grateful to him as long as I live. The point was that I was supposed to study theoretical physics at the University of Leiden simultaneously, and as I found the two activities harder and harder to combine, I had to make up my mind, either to stop programming and become a real, respectable theoretical physicist, or to carry my study of physics to a formal completion only, with a minimum of effort, and to become....., yes what? A programmer? But was that a respectable profession? For after all, what was programming? Where was the sound body of knowledge that could support it as an intellectually respectable discipline? I remember quite vividly how I envied my hardware colleagues, who, when asked about their professional competence, could at least point out that they knew everything about vacuum tubes, amplifiers and the rest, whereas I felt that, when faced with that question, I would stand empty-handed. Full of misgivings I knocked on van Wijngaarden's office door, asking him whether I could "speak to him for a moment"; when I left his office a number of hours later, I was another person. For after having listened to my problems patiently, he agreed that up till that moment there was not much of a programming discipline, but then he went on to explain quietly that automatic computers were here to stay, that we were just at the beginning and could not I be one of the persons called to make programming a respectable discipline in the years to come? This was a turning point in my life and I completed my study of physics formally as quickly as I could. One moral of the above story is, of course, that we must be very careful when we give advice to younger people; sometimes they follow it!

Another two years later, in 1957, I married and Dutch marriage rites require you to state your profession and I stated that I was a programmer. But the municipal authorities of the town of Amsterdam did not accept it on the grounds that there was no such profession. And, believe it or not, but under the heading "profession" my marriage act shows the ridiculous entry "theoretical physicist"!

So much for the slowness with which I saw the programming profession emerge in my own country. Since then I have seen more of the world, and it is my general impression that in other countries, apart from a possible shift of dates, the growth pattern has been very much the same.

Let me try to capture the situation in those old days in a little bit more detail, in the hope of getting a better understanding of the situation today. While we pursue our analysis, we shall see how many common misunderstandings about the true nature of the programming task can be traced back to that now distant past.

The first automatic electronic computers were all unique, single-copy machines and they were all to be found in an environment with the exciting flavour of an experimental laboratory. Once the vision of the automatic computer was there, its realisation was a tremendous challenge to the electronic technology then available, and one thing is certain: we cannot deny the courage of the groups that decided to try and build such a fantastic piece of equipment. For fantastic pieces of equipment they were: in retrospect one can only wonder that those first machines worked at all, at least sometimes. The overwhelming problem was to get and keep the machine in working order. The preoccupation with the physical aspects of automatic computing is still reflected in the names of the older scientific societies in the field, such as the Association for Computing Machinery or the British Computer Society, names in which explicit reference is made to the physical equipment.

What about the poor programmer? Well, to tell the honest truth: he was hardly noticed. For one thing, the first machines were so bulky that you could hardly move them and besides that, they required such extensive maintenance that it was quite natural that the place where people tried to use the machine was the same laboratory where the machine had been developed. Secondly, his somewhat invisible work was without any glamour: you could show the machine to visitors and that was several orders of magnitude more spectacular than some sheets of coding. But most important of all, the programmer himself had a very modest view of his own work: his work derived all its significance from the existence of that wonderful machine. Because that was a unique machine, he knew only too well that his programs had only local significance and also, because it was patently obvious that this machine would have a limited lifetime, he knew that very little of his work would have a lasting value. Finally, there is yet another circumstance that had a profound influence on the programmer's attitude to his work: on the one hand, besides being unreliable, his machine was usually too slow and its memory was usually too small, i.e. he was faced with a pinching shoe, while on the other hand its usually somewhat queer order code would cater for the most unexpected constructions. And in those days many a clever programmer derived an immense intellectual satisfaction from the cunning tricks by means of which he contrived to squeeze the impossible into the constraints of his equipment.

Two opinions about programming date from those days. I mention them now, I shall return to them later. The one opinion was that a really competent programmer should be puzzle-minded and very fond of clever tricks; the other opinion was that programming was nothing more than optimizing the efficiency of the computational process, in one direction or the other.

The latter opinion was the result of the frequent circumstance that, indeed, the available equipment was a painfully pinching shoe, and in those days one often encountered the naive expectation that, once more powerful machines were available, programming would no longer be a problem, for then the struggle to push the machine to its limits would no longer be necessary and that was all programming was about, wasn't it? But in the next decades something completely different happened: more powerful machines became available, not just an order of magnitude more powerful, even several orders of magnitude more powerful. But instead of finding ourselves in the state of eternal bliss of all programming problems solved, we found ourselves up to our necks in the software crisis! How come?

There is a minor cause: in one or two respects modern machinery is basically more difficult to handle than the old machinery. Firstly, we have got the I/O interrupts, occurring at unpredictable and irreproducible moments; compared with the old sequential machine that pretended to be a fully deterministic automaton, this has been a dramatic change and many a systems programmer's grey hair bears witness to the fact that we should not talk lightly about the logical problems created by that feature. Secondly, we have got machines equipped with multi-level stores, presenting us problems of management strategy that, in spite of the extensive literature on the subject, still remain rather elusive. So much for the added complication due to structural changes of the actual machines.

But I called this a minor cause; the major cause is... that the machines have become several orders of magnitude more powerful! To put it quite bluntly: as long as there were no machines, programming was no problem at all; when we had a few weak computers, programming became a mild problem, and now we have gigantic computers, programming has become an equally gigantic problem. In this sense the electronic industry has not solved a single problem, it has only created them, it has created the problem of using its products. To put it in another way: as the power of available machines grew by a factor of more than a thousand, society's ambition to apply these machines grew in proportion, and it was the poor programmer who found his job in this exploded field of tension between ends and means. The increased power of the hardware, together with the perhaps even more dramatic increase in its reliability, made solutions feasible that the programmer had not dared to dream about a few years before. And now, a few years later, he had to dream about them and, even worse, he had to transform such dreams into reality! Is it a wonder that we found ourselves in a software crisis? No, certainly not, and as you may guess, it was even predicted well in advance; but the trouble with minor prophets, of course, is that it is only five years later that you really know that they had been right.

Then, in the mid-sixties, something terrible happened: the computers of the so-called third generation made their appearance. The official literature tells us that their price/performance ratio has been one of the major design objectives. But if you take as "performance" the duty cycle of the machine's various components, little will prevent you from ending up with a design in which the major part of your performance goal is reached by internal housekeeping activities of doubtful necessity. And if your definition of price is the price to be paid for the hardware, little will prevent you from ending up with a design that is terribly hard to program for: for instance the order code might be such as to enforce, either upon the programmer or upon the system, early binding decisions presenting conflicts that really cannot be resolved. And to a large extent these unpleasant possibilities seem to have become reality.

When these machines were announced and their functional specifications became known, quite a few among us must have become quite miserable; at least I was. It was only reasonable to expect that such machines would flood the computing community, and it was therefore all the more important that their design should be as sound as possible. But the design embodied such serious flaws that I felt that with a single stroke the progress of computing science had been retarded by at least ten years: it was then that I had the blackest week in the whole of my professional life. Perhaps the most saddening thing now is that, even after all those years of frustrating experience, still so many people honestly believe that some law of nature tells us that machines have to be that way. They silence their doubts by observing how many of these machines have been sold, and derive from that observation the false sense of security that, after all, the design cannot have been that bad. But upon closer inspection, that line of defense has the same convincing strength as the argument that cigarette smoking must be healthy because so many people do it.

It is in this connection that I regret that it is not customary for scientific journals in the computing area to publish reviews of newly announced computers in much the same way as we review scientific publications: to review machines would be at least as important. And here I have a confession to make: in the early sixties I wrote such a review with the intention of submitting it to the CACM, but in spite of the fact that the few colleagues to whom the text was sent for their advice all urged me to do so, I did not dare to do it, fearing that the difficulties either for myself or for the editorial board would prove to be too great. This suppression was an act of cowardice on my side for which I blame myself more and more. The difficulties I foresaw were a consequence of the absence of generally accepted criteria, and although I was convinced of the validity of the criteria I had chosen to apply, I feared that my review would be refused or discarded as "a matter of personal taste". I still think that such reviews would be extremely useful and I am longing to see them appear, for their accepted appearance would be a sure sign of maturity of the computing community.

The reason that I have paid the above attention to the hardware scene is because I have the feeling that one of the most important aspects of any computing tool is its influence on the thinking habits of those that try to use it, and because I have reasons to believe that that influence is many times stronger than is commonly assumed. Let us now switch our attention to the software scene.

Here the diversity has been so large that I must confine myself to a few stepping stones. I am painfully aware of the arbitrariness of my choice and I beg you not to draw any conclusions with regard to my appreciation of the many efforts that will remain unmentioned.

In the beginning there was the EDSAC in Cambridge, England, and I think it quite impressive that right from the start the notion of a subroutine library played a central role in the design of that machine and of the way in which it should be used. It is now nearly 25 years later and the computing scene has changed dramatically, but the notion of basic software is still with us, and the notion of the closed subroutine is still one of the key concepts in programming. We should recognise the closed subroutine as one of the greatest software inventions; it has survived three generations of computers and it will survive a few more, because it caters for the implementation of one of our basic patterns of abstraction. Regrettably enough, its importance has been underestimated in the design of the third generation computers, in which the great number of explicitly named registers of the arithmetic unit implies a large overhead on the subroutine mechanism. But even that did not kill the concept of the subroutine, and we can only pray that the mutation won't prove to be hereditary.

The second major development on the software scene that I would like to mention is the birth of FORTRAN. At that time this was a project of great temerity and the people responsible for it deserve our great admiration. It would be absolutely unfair to blame them for shortcomings that only became apparent after a decade or so of extensive usage: groups with a successful look-ahead of ten years are quite rare! In retrospect we must rate FORTRAN as a successful coding technique, but with very few effective aids to conception, aids which are now so urgently needed that time has come to consider it out of date. The sooner we can forget that FORTRAN has ever existed, the better, for as a vehicle of thought it is no longer adequate: it wastes our brainpower, is too risky and therefore too expensive to use. FORTRAN's tragic fate has been its wide acceptance, mentally chaining thousands and thousands of programmers to our past mistakes. I pray daily that more of my fellow-programmers may find the means of freeing themselves from the curse of compatibility.

The third project I would not like to leave unmentioned is LISP, a fascinating enterprise of a completely different nature. With a few very basic principles at its foundation, it has shown a remarkable stability. Besides that, LISP has been the carrier for a considerable number of, in a sense, our most sophisticated computer applications. LISP has jokingly been described as "the most intelligent way to misuse a computer". I think that description a great compliment because it transmits the full flavour of liberation: it has assisted a number of our most gifted fellow humans in thinking previously impossible thoughts.

The fourth project to be mentioned is ALGOL 60. While up to the present day FORTRAN programmers still tend to understand their programming language in terms of the specific implementation they are working with —hence the prevalence of octal and hexadecimal dumps—, while the definition of LISP is still a curious mixture of what the language means and how the mechanism works, the famous Report on the Algorithmic Language ALGOL 60 is the fruit of a genuine effort to carry abstraction a vital step further and to define a programming language in an implementation-independent way. One could argue that in this respect its authors have been so successful that they have created serious doubts as to whether it could be implemented at all! The report gloriously demonstrated the power of the formal method BNF, now fairly known as Backus-Naur-Form, and the power of carefully phrased English, at least when used by someone as brilliant as Peter Naur. I think that it is fair to say that only very few documents as short as this have had an equally profound influence on the computing community. The ease with which in later years the names ALGOL and ALGOL-like have been used, as an unprotected trade mark, to lend some of its glory to a number of sometimes hardly related younger projects, is a somewhat shocking compliment to its standing. The strength of BNF as a defining device is responsible for what I regard as one of the weaknesses of the language: an over-elaborate and not too systematic syntax could now be crammed into the confines of very few pages. With a device as powerful as BNF, the Report on the Algorithmic Language ALGOL 60 should have been much shorter. Besides that I am getting very doubtful about ALGOL 60's parameter mechanism: it allows the programmer so much combinatorial freedom, that its confident use requires a strong discipline from the programmer. Besides being expensive to implement, it seems dangerous to use.
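
For readers who have never seen BNF, a small illustration may help show why it was such a powerful defining device. The grammar below is a toy invented for this sketch (it is not from the ALGOL 60 Report), and the accompanying Python recognizer shows how mechanically a BNF definition can be turned into a checker for the language it defines.

```python
# Toy grammar in BNF (invented for illustration):
#   <expr>  ::= <term> | <term> "+" <expr>
#   <term>  ::= <digit> | "(" <expr> ")"
#   <digit> ::= "0" | "1" | ... | "9"
# Each production maps directly onto one recognizing function.

def parse_expr(s, i=0):
    i = parse_term(s, i)
    if i < len(s) and s[i] == "+":
        i = parse_expr(s, i + 1)
    return i

def parse_term(s, i):
    if i < len(s) and s[i].isdigit():
        return i + 1
    if i < len(s) and s[i] == "(":
        i = parse_expr(s, i + 1)
        if i < len(s) and s[i] == ")":
            return i + 1
    raise SyntaxError(f"unexpected input at position {i}")

def accepts(s):
    try:
        return parse_expr(s) == len(s)
    except SyntaxError:
        return False

print(accepts("1+(2+3)"))   # True
print(accepts("1+("))       # False
```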

Finally, although the subject is not a pleasant one, I must mention PL/1, a programming language for which the defining documentation is of a frightening size and complexity. Using PL/1 must be like flying a plane with 7000 buttons, switches and handles to manipulate in the cockpit. I absolutely fail to see how we can keep our growing programs firmly within our intellectual grip when by its sheer baroqueness the programming language —our basic tool, mind you!— already escapes our intellectual control. And if I have to describe the influence PL/1 can have on its users, the closest metaphor that comes to my mind is that of a drug. I remember from a symposium on higher level programming languages a lecture given in defense of PL/1 by a man who described himself as one of its devoted users. But within a one-hour lecture in praise of PL/1, he managed to ask for the addition of about fifty new "features", little supposing that the main source of his problems could very well be that it contained already far too many "features". The speaker displayed all the depressing symptoms of addiction, reduced as he was to the state of mental stagnation in which he could only ask for more, more, more... When FORTRAN has been called an infantile disorder, full PL/1, with its growth characteristics of a dangerous tumor, could turn out to be a fatal disease.

So much for the past. But there is no point in making mistakes unless thereafter we are able to learn from them. As a matter of fact, I think that we have learned so much, that within a few years programming can be an activity vastly different from what it has been up till now, so different that we had better prepare ourselves for the shock. Let me sketch for you one of the possible futures. At first sight, this vision of programming in perhaps already the near future may strike you as utterly fantastic. Let me therefore also add the considerations that might lead one to the conclusion that this vision could be a very real possibility.

The vision is that, well before the seventies have run to completion, we shall be able to design and implement the kind of systems that are now straining our programming ability, at the expense of only a few percent in man-years of what they cost us now, and that besides that, these systems will be virtually free of bugs. These two improvements go hand in hand. In the latter respect software seems to be different from many other products, where as a rule a higher quality implies a higher price. Those who want really reliable software will discover that they must find means of avoiding the majority of bugs to start with, and as a result the programming process will become cheaper. If you want more effective programmers, you will discover that they should not waste their time debugging, they should not introduce the bugs to start with. In other words: both goals point to the same change.

Such a drastic change in such a short period of time would be a revolution, and to all persons that base their expectations for the future on smooth extrapolation of the recent past —appealing to some unwritten laws of social and cultural inertia— the chance that this drastic change will take place must seem negligible. But we all know that sometimes revolutions do take place! And what are the chances for this one?

There seem to be three major conditions that must be fulfilled. The world at large must recognize the need for the change; secondly the economic need for it must be sufficiently strong; and, thirdly, the change must be technically feasible. Let me discuss these three conditions in the above order.

With respect to the recognition of the need for greater reliability of software, I expect no disagreement anymore. Only a few years ago this was different: to talk about a software crisis was blasphemy. The turning point was the Conference on Software Engineering in Garmisch, October 1968, a conference that created a sensation as there occurred the first open admission of the software crisis. And by now it is generally recognized that the design of any large sophisticated system is going to be a very difficult job, and whenever one meets people responsible for such undertakings, one finds them very much concerned about the reliability issue, and rightly so. In short, our first condition seems to be satisfied.

Now for the economic need. Nowadays one often encounters the opinion that in the sixties programming has been an overpaid profession, and that in the coming years programmer salaries may be expected to go down. Usually this opinion is expressed in connection with the recession, but it could be a symptom of something different and quite healthy, viz. that perhaps the programmers of the past decade have not done so good a job as they should have done. Society is getting dissatisfied with the performance of programmers and of their products. But there is another factor of much greater weight. In the present situation it is quite usual that for a specific system, the price to be paid for the development of the software is of the same order of magnitude as the price of the hardware needed, and society more or less accepts that. But hardware manufacturers tell us that in the next decade hardware prices can be expected to drop by a factor of ten. If software development were to continue to be the same clumsy and expensive process as it is now, things would get completely out of balance. You cannot expect society to accept this, and therefore we must learn to program an order of magnitude more effectively. To put it in another way: as long as machines were the largest item on the budget, the programming profession could get away with its clumsy techniques, but that umbrella will fold rapidly. In short, also our second condition seems to be satisfied.

And now the third condition: is it technically feasible? I think it might be, and I shall give you six arguments in support of that opinion.

A study of program structure had revealed that programs —even alternative programs for the same task and with the same mathematical content— can differ tremendously in their intellectual manageability. A number of rules have been discovered, violation of which will either seriously impair or totally destroy the intellectual manageability of the program. These rules are of two kinds. Those of the first kind are easily imposed mechanically, viz. by a suitably chosen programming language. Examples are the exclusion of goto-statements and of procedures with more than one output parameter. For those of the second kind I at least —but that may be due to lack of competence on my side— see no way of imposing them mechanically, as it seems to need some sort of automatic theorem prover for which I have no existence proof. Therefore, for the time being and perhaps forever, the rules of the second kind present themselves as elements of discipline required from the programmer. Some of the rules I have in mind are so clear that they can be taught and that there never needs to be an argument as to whether a given program violates them or not. Examples are the requirements that no loop should be written down without providing a proof for termination nor without stating the relation whose invariance will not be destroyed by the execution of the repeatable statement.
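To make that last rule concrete, here is a minimal sketch of my own (in present-day Python, which the lecture of course predates): a loop written so that its invariant and its termination argument stand next to the code instead of being reconstructed afterwards.

# Illustrative sketch, not from the lecture: Euclid's algorithm by
# repeated subtraction, annotated in the disciplined style described above.
def gcd(a: int, b: int) -> int:
    """Greatest common divisor of two positive integers.

    Invariant:   gcd(x, y) == gcd(a, b) holds before and after each step.
    Termination: x + y is a positive integer that strictly decreases,
                 so the loop must end.
    """
    x, y = a, b
    while x != y:
        if x > y:
            x = x - y   # gcd(x - y, y) == gcd(x, y): invariant preserved
        else:
            y = y - x   # gcd(x, y - x) == gcd(x, y): invariant preserved
    return x            # x == y, and gcd(x, x) == x == gcd(a, b)

assert gcd(12, 18) == 6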

I now suggest that we confine ourselves to the design and implementation of intellectually manageable programs. If someone fears that this restriction is so severe that we cannot live with it, I can reassure him: the class of intellectually manageable programs is still sufficiently rich to contain many very realistic programs for any problem capable of algorithmic solution. We must not forget that it is not our business to make programs, it is our business to design classes of computations that will display a desired behaviour. The suggestion of confining ourselves to intellectually manageable programs is the basis for the first two of my announced six arguments.

Argument one is that, as the programmer only needs to consider intellectually manageable programs, the alternatives he is choosing between are much, much easier to cope with.

Argument two is that, as soon as we have decided to restrict ourselves to the subset of the intellectually manageable programs, we have achieved, once and for all, a drastic reduction of the solution space to be considered. And this argument is distinct from argument one.

Argument three is based on the constructive approach to the problem of program correctness. Today a usual technique is to make a program and then to test it. But: program testing can be a very effective way to show the presence of bugs, but is hopelessly inadequate for showing their absence. The only effective way to raise the confidence level of a program significantly is to give a convincing proof of its correctness. But one should not first make the program and then prove its correctness, because then the requirement of providing the proof would only increase the poor programmer's burden. On the contrary: the programmer should let correctness proof and program grow hand in hand. Argument three is essentially based on the following observation. If one first asks oneself what the structure of a convincing proof would be and, having found this, then constructs a program satisfying this proof's requirements, then these correctness concerns turn out to be a very effective heuristic guidance. By definition this approach is only applicable when we restrict ourselves to intellectually manageable programs, but it provides us with effective means for finding a satisfactory one among these.
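A worked miniature of that proof-first construction, again a sketch of mine rather than anything from the lecture: one first chooses the relation a convincing proof would rest on, then lets every statement in the program exist only to establish or preserve it.

# Illustrative sketch, not from the lecture: exponentiation derived from
# the invariant r * b**e == x**n, chosen before any code was written.
def power(x: float, n: int) -> float:
    """Compute x**n for integer n >= 0.

    Invariant:   r * b**e == x**n
    Termination: e >= 0 strictly decreases on every iteration.
    On exit e == 0, so the invariant forces r == x**n.
    """
    r, b, e = 1.0, x, n           # invariant holds trivially at the start
    while e > 0:
        if e % 2 == 0:
            b, e = b * b, e // 2  # (b*b)**(e//2) == b**e: invariant kept
        else:
            r, e = r * b, e - 1   # (r*b) * b**(e-1) == r * b**e: kept
    return r

assert power(2.0, 10) == 1024.0

The proof obligations dictated the program, not the other way round; that is the heuristic guidance the argument appeals to.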

Argument four has to do with the way in which the amount of intellectual effort needed to design a program depends on the program length. It has been suggested that there is some kind of law of nature telling us that the amount of intellectual effort needed grows with the square of program length. But, thank goodness, no one has been able to prove this law. And this is because it need not be true. We all know that the only mental tool by means of which a very finite piece of reasoning can cover a myriad cases is called "abstraction"; as a result the effective exploitation of his powers of abstraction must be regarded as one of the most vital activities of a competent programmer. In this connection it might be worth-while to point out that the purpose of abstracting is not to be vague, but to create a new semantic level in which one can be absolutely precise. Of course I have tried to find a fundamental cause that would prevent our abstraction mechanisms from being sufficiently effective. But no matter how hard I tried, I did not find such a cause. As a result I tend to the assumption —up till now not disproved by experience— that by suitable application of our powers of abstraction, the intellectual effort needed to conceive or to understand a program need not grow more than proportional to program length. But a by-product of these investigations may be of much greater practical significance, and is, in fact, the basis of my fourth argument. The by-product was the identification of a number of patterns of abstraction that play a vital role in the whole process of composing programs. Enough is now known about these patterns of abstraction that you could devote a lecture to each of them. What the familiarity and conscious knowledge of these patterns of abstraction imply dawned upon me when I realized that, had they been common knowledge fifteen years ago, the step from BNF to syntax-directed compilers, for instance, could have taken a few minutes instead of a few years. Therefore I present our recent knowledge of vital abstraction patterns as the fourth argument.
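To see why conscious knowledge of such a pattern shrinks the BNF-to-compiler step so dramatically, consider this hypothetical fragment (mine, not the lecture's): each grammar production maps almost mechanically onto one procedure, so the structure of the grammar simply becomes the structure of the program.

# Illustrative sketch, not from the lecture. Grammar in BNF:
#   expr ::= term { "+" term }
#   term ::= NUMBER
# One production, one procedure: the syntax-directed pattern.
def parse_term(tokens: list, i: int):
    return int(tokens[i]), i + 1            # NUMBER

def parse_expr(tokens: list, i: int = 0):
    value, i = parse_term(tokens, i)
    while i < len(tokens) and tokens[i] == "+":
        rhs, i = parse_term(tokens, i + 1)
        value += rhs                        # evaluate while parsing
    return value, i

assert parse_expr("1 + 2 + 3".split())[0] == 6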

Now for the fifth argument. It has to do with the influence of the tool we are trying to use upon our own thinking habits. I observe a cultural tradition, which in all probability has its roots in the Renaissance, to ignore this influence, to regard the human mind as the supreme and autonomous master of its artefacts. But if I start to analyse the thinking habits of myself and of my fellow human beings, I come, whether I like it or not, to a completely different conclusion, viz. that the tools we are trying to use and the language or notation we are using to express or record our thoughts, are the major factors determining what we can think or express at all! The analysis of the influence that programming languages have on the thinking habits of their users, and the recognition that, by now, brainpower is by far our scarcest resource, together give us a new collection of yardsticks for comparing the relative merits of various programming languages. The competent programmer is fully aware of the strictly limited size of his own skull; therefore he approaches the programming task in full humility, and among other things he avoids clever tricks like the plague.

In the case of a well-known conversational programming language I have been told from various sides that as soon as a programming community is equipped with a terminal for it, a specific phenomenon occurs that even has a well-established name: it is called "the one-liners". It takes one of two different forms: one programmer places a one-line program on the desk of another and either he proudly tells what it does and adds the question "Can you code this in less symbols?" —as if this were of any conceptual relevance!— or he just asks "Guess what it does!". From this observation we must conclude that this language as a tool is an open invitation for clever tricks; and while exactly this may be the explanation for some of its appeal, viz. to those who like to show how clever they are, I am sorry, but I must regard this as one of the most damning things that can be said about a programming language.

Another lesson we should have learned from the recent past is that the development of "richer" or "more powerful" programming languages was a mistake in the sense that these baroque monstrosities, these conglomerations of idiosyncrasies, are really unmanageable, both mechanically and mentally. I see a great future for very systematic and very modest programming languages. When I say "modest", I mean that, for instance, not only ALGOL 60's "for clause", but even FORTRAN's "DO loop" may find themselves thrown out as being too baroque. I have run a little programming experiment with really experienced volunteers, but something quite unintended and quite unexpected turned up. None of my volunteers found the obvious and most elegant solution. Upon closer analysis this turned out to have a common source: their notion of repetition was so tightly connected to the idea of an associated controlled variable to be stepped up, that they were mentally blocked from seeing the obvious. Their solutions were less efficient, needlessly hard to understand, and it took them a very long time to find them. It was a revealing, but also shocking experience for me.
Finally, in one respect one hopes that tomorrow's programming languages will differ greatly from what we are used to now: to a much greater extent than hitherto they should invite us to reflect in the structure of what we write down all abstractions needed to cope conceptually with the complexity of what we are designing. So much for the greater adequacy of our future tools, which was the basis of the fifth argument.

As an aside I would like to insert a warning to those who identify the difficulty of the programming task with the struggle against the inadequacies of our current tools, because they might conclude that, once our tools will be much more adequate, programming will no longer be a problem. Programming will remain very difficult, because once we have freed ourselves from the circumstantial cumbersomeness, we will find ourselves free to tackle the problems that are now well beyond our programming capacity.

You can quarrel with my sixth argument, for it is not so easy to collect experimental evidence for its support, a fact that will not prevent me from believing in its validity. Up till now I have not mentioned the word "hierarchy", but I think that it is fair to say that this is a key concept for all systems embodying a nicely factored solution. I could even go one step further and make an article of faith out of it, viz. that the only problems we can really solve in a satisfactory manner are those that finally admit a nicely factored solution. At first sight this view of human limitations may strike you as a rather depressing view of our predicament, but I don't feel it that way, on the contrary! The best way to learn to live with our limitations is to know them. By the time that we are sufficiently modest to try factored solutions only, because the other efforts escape our intellectual grip, we shall do our utmost best to avoid all those interfaces impairing our ability to factor the system in a helpful way. And I cannot but expect that this will repeatedly lead to the discovery that an initially intractable problem can be factored after all. Anyone who has seen how the majority of the troubles of the compiling phase called "code generation" can be tracked down to funny properties of the order code, will know a simple example of the kind of things I have in mind. The wider applicability of nicely factored solutions is my sixth and last argument for the technical feasibility of the revolution that might take place in the current decade.

In principle I leave it to you to decide for yourself how much weight you are going to give to my considerations, knowing only too well that I can force no one else to share my beliefs. Like every serious revolution, this one will provoke violent opposition, and one can ask oneself where to expect the conservative forces trying to counteract such a development. I don't expect them primarily in big business, not even in the computer business; I expect them rather in the educational institutions that provide today's training and in those conservative groups of computer users that think their old programs so important that they don't think it worth-while to rewrite and improve them. In this connection it is sad to observe that on many a university campus the choice of the central computing facility has too often been determined by the demands of a few established but expensive applications, with disregard for the question of how many thousands of "small users" willing to write their own programs were going to suffer from this choice. Too often, for instance, high-energy physics seems to have blackmailed the scientific community with the price of its remaining experimental equipment. The easiest answer, of course, is a flat denial of the technical feasibility, but I am afraid that you need pretty strong arguments for that. No reassurance, alas, can be obtained from the remark that the intellectual ceiling of today's average programmer will prevent the revolution from taking place: with others programming so much more effectively, he is liable to be edged out of the picture anyway.

There may also be political impediments. Even if we know how to educate tomorrow's professional programmer, it is not certain that the society we are living in will allow us to do so. The first effect of teaching a methodology —rather than disseminating knowledge— is that of enhancing the capacities of the already capable, thus magnifying the difference in intelligence. In a society in which the educational system is used as an instrument for the establishment of a homogenized culture, in which the cream is prevented from rising to the top, the education of competent programmers could be politically unpalatable.

Let me conclude. Automatic computers have now been with us for a quarter of a century. They have had a great impact on our society in their capacity of tools, but in that capacity their influence will be but a ripple on the surface of our culture, compared with the much more profound influence they will have in their capacity of intellectual challenge without precedent in the cultural history of mankind. Hierarchical systems seem to have the property that something considered as an undivided entity on one level, is considered as a composite object on the next lower level of greater detail; as a result the natural grain of space or time that is applicable at each level decreases by an order of magnitude when we shift our attention from one level to the next lower one. We understand walls in terms of bricks, bricks in terms of crystals, crystals in terms of molecules etc. As a result the number of levels that can be distinguished meaningfully in a hierarchical system is kind of proportional to the logarithm of the ratio between the largest and the smallest grain, and therefore, unless this ratio is very large, we cannot expect many levels. In computer programming our basic building block has an associated time grain of less than a microsecond, but our program may take hours of computation time. I do not know of any other technology covering a ratio of 10^10 or more: the computer, by virtue of its fantastic speed, seems to be the first to provide us with an environment where highly hierarchical artefacts are both possible and necessary. This challenge, viz. the confrontation with the programming task, is so unique that this novel experience can teach us a lot about ourselves. It should deepen our understanding of the processes of design and creation, it should give us better control over the task of organizing our thoughts. If it did not do so, to my taste we should not deserve the computer at all!
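The arithmetic behind that ratio is easy to check with assumed but representative figures (my illustration, not the lecture's):

import math

# Illustrative check with assumed numbers (not from the lecture):
# a basic step of about a microsecond versus a run of a few hours.
smallest_grain = 1e-6        # seconds: one basic machine step
largest_grain = 3 * 3600.0   # seconds: "hours of computation time"

ratio = largest_grain / smallest_grain
decades = math.log10(ratio)  # distinguishable levels ~ log of the ratio
print(f"ratio ~ 10^{decades:.0f}")  # prints: ratio ~ 10^10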

It has already taught us a few lessons, and the one I have chosen to stress in this talk is the following. We shall do a much better programming job, provided that we approach the task with a full appreciation of its tremendous difficulty, provided that we stick to modest and elegant programming languages, provided that we respect the intrinsic limitations of the human mind and approach the task as Very Humble Programmers.
 
OP
sygeek
Technomancer
I Think Your App Should Be Free​
By Joey Flores
*blog.earbits.com/online_radio/wp-content/uploads/2011/10/App-Pirates.png​

Well, you’ve done it. After $15,000 invested and six months of slaving away with 3 of your hacker buddies, you’ve launched your awesome-sauce new app in the Android app store. It is truly a thing of marvel. If the first day’s downloads are any indicator, your $0.99 app is going to make you and your friends a cool $50,000 the first year and some straggler dollars for years to come. You’ve got app idea #2 brewing and this is the beginning of something good. You all toast to your hard work, stay up late watching the first day’s download totals, and dollars, add up, and go to bed exhausted and happy.

What the hell? That’s Our App!

What, no champagne with breakfast? Clearly you should be riding high on the success of your application’s immense day two downloads! But no, you wake up to see that your numbers are flat. You search Google for the name of your app and, lo and behold, you find an app alright… your awesome FREE app, uploaded by another user to a shady black market app store with your app name in the description, and it’s getting 100 times the downloads that your paid app got, and climbing.

Your app has been cracked and uploaded for free. It ranks higher in Google than your app does. The whole world is linking to it.

You contact the app store furious. You manage to have it taken down, but every day, every ****ING DAY, there is another cracked and free version of your app in this slimy app store. There it is, again, available for free, and your paid app in the real retailer is a stagnant pile of code being ignored.

You are forced to play police every day. You find the next cracked version of your app on shady site #132 and report it. You scream to high heavens at the people from the app store. Why can’t they do a better job of making sure copycat apps don’t make it into the store? These thieves, err…sorry, pirates…errr…whatever, are getting all the downloads. Nobody is buying your app…and yet, there is such clear demand.

They’re doing the best they can, they say. Most of all, they’re complying with the law, they say.

But every day, your app is in the free store. Poor users don’t even know they’re downloading something they’re not supposed to. I mean, who’s to understand these unclear laws or know which sites are legal and which are not? Pooooor users.

Information Wants to Be Free!

After ranting endlessly on Hacker News and the like, finally the person who keeps stealing your app posts a reply.

They think all apps should be free.

It’s not stealing, they’re just giving away copies.

Your code is still there for you to do with as you please. Nobody has stolen it. You’ve got your original and can do whatever you want with it.

You plead with them. You spent your own money, and an investor’s, making this app. You want and need to recoup your expenses or nobody will invest in you again.

The reply? They don’t like your VC.

Your VC has a long history of screwing over entrepreneurs and they don’t want to see them make any money. Only a fraction goes to you anyway. It’s really the VC who’s losing out, and screw them. They’ve been known to patent troll and stop innovation. Your VC is evil.

Fine! Maybe the VC isn’t a friend to consumers or their own portfolio, but that isn’t your fault, and this is your app! You put everything you had into it and, look, the downloads are now in the millions. The FREE downloads.

Isn’t it better to be known for creating a cool app that you didn’t make money from than making a few bucks and remaining obscure, they ask.

You tell them that’s your choice to make, but they don’t think it is.

They tell you your business model is broken. You should make money some other way. Maybe you should sell t-shirts with your company’s name on them, or put on events of some kind and charge for tickets. That’s where the real money is. Paid apps are a thing of the past, they say.

Look to the future.
 
OP
sygeek
Technomancer
The Steve Jobs I Knew
By Walt Mossberg​
*allthingsd.com/files/2011/10/walt_and_steve-380x253.png​

That Steve Jobs was a genius, a giant influence on multiple industries and billions of lives, has been written many times since he retired as Apple’s CEO in August. He was a historical figure on the scale of a Thomas Edison or a Henry Ford, and set the mold for many other corporate leaders in many other industries.

He did what a CEO should: Hired and inspired great people; managed for the long term, not the quarter or the short-term stock price; made big bets and took big risks. He insisted on the highest product quality and on building things to delight and empower actual users, not intermediaries like corporate IT directors or wireless carriers. And he could sell. Man, he could sell.

As he liked to say, he lived at the intersection of technology and liberal arts.

But there was a more personal side of Steve Jobs, of course, and I was fortunate enough to see a bit of it, because I spent hours in conversation with him, over the 14 years he ran Apple. Since I am a product reviewer, and not a news reporter charged with covering the company’s business, he felt a bit more comfortable talking to me about things he might not have said to most other journalists.

Even in his death, I won’t violate the privacy of those conversations. But here are a few stories that illustrate the man as I knew him.

The Phone Calls

I never knew Steve when he was first at Apple. I wasn’t covering technology then. And I only met him once, briefly, between his stints at the company. But, within days of his return, in 1997, he began calling my house, on Sunday nights, for four or five straight weekends. As a veteran reporter, I understood that part of this was an attempt to flatter me, to get me on the side of a teetering company whose products I had once recommended, but had, more recently, advised readers to avoid.

Yet there was more to the calls than that. They turned into marathon, 90-minute, wide-ranging, off-the-record discussions that revealed to me the stunning breadth of the man. One minute he’d be talking about sweeping ideas for the digital revolution. The next about why Apple’s current products were awful, and how a color, or angle, or curve, or icon was embarrassing.

After the second such call, my wife became annoyed at the intrusion he was making in our weekend. I didn’t.

Later, he’d sometimes call to complain about some reviews, or parts of reviews — though, in truth, I felt very comfortable recommending most of his products for the average, non-techie consumers at whom I aim my columns. (That may have been because they were his target, too.) I knew he would be complaining because he’d start every call by saying “Hi, Walt. I’m not calling to complain about today’s column, but I have some comments, if that’s okay.” I usually disagreed with his comments, but that was okay, too.

The Product Unveilings

Sometimes, not always, he’d invite me in to see certain big products before he unveiled them to the world. He may have done the same with other journalists. We’d meet in a giant boardroom, with just a few of his aides present, and he’d insist — even in private — on covering the new gadgets with cloths and then uncovering them like the showman he was, a gleam in his eye and passion in his voice. We’d then often sit down for a long, long discussion of the present, the future, and general industry gossip.

I still remember the day he showed me the first iPod. I was amazed that a computer company would branch off into music players, but he explained, without giving any specifics away, that he saw Apple as a digital products company, not a computer company. It was the same with the iPhone, the iTunes music store, and later the iPad, which he asked me to his home to see, because he was too ill at the time to go to the office.

The Slides

To my knowledge, the only tech conference Steve Jobs regularly appeared at, the only event he didn’t somehow control, was our D: All Things Digital conference, where he appeared repeatedly for unrehearsed, onstage interviews. We had one rule that really bothered him: We never allowed slides, which were his main presentation tool.

One year, about an hour before his appearance, I was informed that he was backstage preparing dozens of slides, even though I had reminded him a week earlier of the no-slides policy. I asked two of his top aides to tell him he couldn’t use the slides, but they each said they couldn’t do it, that I had to. So, I went backstage and told him the slides were out. Famously prickly, he could have stormed out, refused to go on. And he did try to argue with me. But, when I insisted, he just said “Okay.” And he went on stage without them, and was, as usual, the audience’s favorite speaker.

Ice Water in Hell

For our fifth D conference, both Steve and his longtime rival, the brilliant Bill Gates, surprisingly agreed to a joint appearance, their first extended onstage joint interview ever. But it almost got derailed.

Earlier in the day, before Gates arrived, I did a solo onstage interview with Jobs, and asked him what it was like to be a major Windows developer, since Apple’s iTunes program was by then installed on hundreds of millions of Windows PCs.

He quipped: “It’s like giving a glass of ice water to someone in Hell.” When Gates later arrived and heard about the comment, he was, naturally, enraged, because my partner Kara Swisher and I had assured both men that we hoped to keep the joint session on a high plane.

In a pre-interview meeting, Gates said to Jobs: “So I guess I’m the representative from Hell.” Jobs merely handed Gates a cold bottle of water he was carrying. The tension was broken, and the interview was a triumph, with both men acting like statesmen. When it was over, the audience rose in a standing ovation, some of them in tears.

The Optimist

I have no way of knowing how Steve talked to his team during Apple’s darkest days in 1997 and 1998, when the company was on the brink and he was forced to turn to archrival Microsoft for a rescue. He certainly had a nasty, mercurial side to him, and I expect that, then and later, it emerged inside the company and in dealings with partners and vendors, who tell believable stories about how hard he was to deal with.

But I can honestly say that, in my many conversations with him, the dominant tone he struck was optimism and certainty, both for Apple and for the digital revolution as a whole. Even when he was telling me about his struggles to get the music industry to let him sell digital songs, or griping about competitors, at least in my presence, his tone was always marked by patience and a long-term view. This may have been for my benefit, knowing that I was a journalist, but it was striking nonetheless.

At times in our conversations, when I would criticize the decisions of record labels or phone carriers, he’d surprise me by forcefully disagreeing, explaining how the world looked from their point of view, how hard their jobs were in a time of digital disruption, and how they would come around.

This quality was on display when Apple opened its first retail store. It happened to be in the Washington, D.C., suburbs, near my home. He conducted a press tour for journalists, as proud of the store as a father is of his first child. I commented that, surely, there’d only be a few stores, and asked what Apple knew about retailing.

He looked at me like I was crazy, said there’d be many, many stores, and that the company had spent a year tweaking the layout of the stores, using a mockup at a secret location. I teased him by asking if he, personally, despite his hard duties as CEO, had approved tiny details like the translucency of the glass and the color of the wood.

He said he had, of course.

The Walk

After his liver transplant, while he was recuperating at home in Palo Alto, California, Steve invited me over to catch up on industry events that had transpired during his illness. It turned into a three-hour visit, punctuated by a walk to a nearby park that he insisted we take, despite my nervousness about his frail condition.

He explained that he walked each day, and that each day he set a farther goal for himself, and that, today, the neighborhood park was his goal. As we were walking and talking, he suddenly stopped, not looking well. I begged him to return to the house, noting that I didn’t know CPR and could visualize the headline: “Helpless Reporter Lets Steve Jobs Die on the Sidewalk.”

But he laughed, and refused, and, after a pause, kept heading for the park. We sat on a bench there, talking about life, our families, and our respective illnesses (I had had a heart attack some years earlier). He lectured me about staying healthy. And then we walked back.

Steve Jobs didn’t die that day, to my everlasting relief. But now he really is gone, much too young, and it is the world’s loss.

Editor’s Note: Here is a video of Walt talking about that walk with Jobs:

*i.eho.st/pp4xek92.png
 
OP
sygeek
Technomancer
Steve Jobs, Atari Employee Number 40
By Frank Cifaldi
*www.gamasutra.com/db_area/images/news2001/37762/stevejobsold.jpg​

Steve Jobs was called many things during his tragically short life -- innovator, entrepreneur, leader, father -- but back when he showed up at the Los Gatos doorstep of arcade game leader Atari in May of 1974, he was an unwashed, bearded college dropout more interested in scoring some acid than changing the world.

As Atari alumnus and Pong engineer Al Alcorn tells it, it was a pretty typical day at the company's then-modest warehouse digs -- walls lined with Pong and Pong-like cabinets, barefoot technicians reeking of pot after some early afternoon hot boxing -- when personnel handler Penny Chapler came into his office.

"We've got this kid in the lobby," Alcorn recalls her saying. "He's either got something or is a crackpot."

By this time Alcorn was used to unkempt guys wandering into the office looking to make some bread. In the greater Los Gatos area, engineers saw Atari as the cool place to work: there was no dress code, your bosses didn't care what you did in your offtime, and working on games was way better than the televisions and industrial equipment you might touch your soldering iron to at other companies.

"He was this real scuzzy kid," Alcorn once told video game historian Steven Kent. "I think I said, 'We should either call the cops or we should talk to him.' So I talked to him."

Jobs had no real engineering experience to bring to the table. He had a small amount of education from Reed College, but it was in a completely unrelated major, and he had dropped out early. But he had a way with words, seemed to have a passion for technology, and probably lied about having worked at Hewlett-Packard.

"I figured, this guy's gotta be cheap, man. He really doesn't have much skills at all," Alcorn remembers. "So I figured I'd hire him."

A Diet Of Air And Water

Jobs was hired as Atari employee #40, as a technician fixing up and tweaking circuit board designs. One of his first roles was finishing the technical design of Touch Me, a simple arcade memory game similar to Ralph Baer's later Simon toy. He more than likely helped out on other games that year, such as racer Gran Trak 20 and the odd experiment Puppy Pong.

But the young, abrasive Jobs didn't fit in. As the various stories go, complaints ranged from poor hygiene to an abrasive attitude to strange dietary habits.

"He says if I pass out, just push me onto the workbench. Don't call 911 or anything. I'm on this new diet of just air and water," Alcorn recently recalled (though the story sometimes involves a jar of cranberry juice).

Though he didn't have much personal interaction with him at the time, Atari co-founder Nolan Bushnell remembers the young Jobs as a "brilliant, curious and aggressive" young man, though very abrasive as well. Jobs would later be praised as a brash, firm leader, but at 18 this quality manifested itself negatively: he made several enemies at the company by openly mocking people and treating them like idiots. Despite this, he was a promising employee, so Atari found a way to keep him on board.

"I always felt to run a good company you had to have room for everybody -- you could always figure out a way to make room for smart people," Bushnell recently recalled. "So, we decided to have a night shift in engineering -- he was the only one in it."

Spiritual Research

After about five or six months of saving money and working the night shift (often inviting friend, collaborator and eventual Apple co-founder Steve Wozniak into the office to help him with engineering challenges), Jobs approached Alcorn to let him know he was quitting the company to go to India, meet his guru, and conduct what he referred to as "spiritual research."

Alcorn turned his trip into an opportunity for the company: Atari's German distributors were having trouble assembling the games due to a problem with the country's incompatible power supplies. It was a relatively simple fix, but Alcorn's attempts to troubleshoot long-distance were proving fruitless. He needed someone out there to show them how to fix the problem.

"I said Steve I'll cut you a deal. I'll give you a one-way ticket to Germany -- it's gotta be cheaper to get to India from Germany than it is from here -- if you'll do a day or two of work over in Germany for me," Alcorn recently recalled.

As it turned out it would have been cheaper to fly out of California, but they didn't know that at the time, so Jobs accepted. He flew out and though he was able to fix the problem, it wasn't a joyous business trip for either party involved: vegetarian Jobs struggled to eat in the "meat and potatoes" country, and Atari's German distributors didn't know what to make of the odd foreigner.

"He wasn't dressed appropriately, he didn't behave appropriately," Alcorn remembers. "The Germans were horrified at this."

From there, Jobs went on to India as planned (there do not appear to be any historical accounts of what exactly he did during his trip, though "backpacking" and "acid" are common words used by those who have recounted it). He returned to Atari several months later with a shaved head, saffron robes, and a copy of Be Here Now for Alcorn, asking for his old job back.

"Apparently, he had hepatitis or something and had to get out of India before he died," Alcorn told historian Steven Kent. "I put him to work again. That's when the famous story about Breakout took place."

Jobs and Woz Break Out

As the story goes, Atari suddenly found itself facing competition in the arcade video game industry it created, most of it from former Atari engineers who struck out on their own, stolen parts and plans in tow. No longer able to survive on various iterations of Pong, the company designed a single-player game called Breakout, which saw players bouncing a ball vertically to destroy a series of bricks at the top of the screen.

The game was prototyped, though the number of TTL chips used would have made manufacturing expensive. The company offered a bounty to whoever was up to the task of reducing its chip count: the exact numbers seem to have become muddled throughout history, but the general consensus among those who were there is that the company offered $100 for each chip successfully removed from the design, with a bonus if the total chip count went below a certain number. The young Jobs, who in retrospect comes across as an excellent liar, somehow won the bid for the project.

"Jobs never did a lick of engineering in his life. He had me snowed," Alcorn later recalled. "It took years before I figured out that he was getting Woz to 'come in the back door' and do all the work while he got the credit."

Jobs convinced Wozniak to work on the game during his day job at Hewlett-Packard, when he was meant to be designing calculators. At night the two would collaborate on building it at Atari: Wozniak as engineer, Jobs as breadboarder and tester.

Allegedly, Jobs told Wozniak that he could have half of a $700 bounty if they were able to get the chip count under 50 (typical games of the day tended to require around 100 chips). After four sleepless days that gave both of them a case of mono (an artificial time limit, it turns out: Jobs had a plane to catch, Atari wasn't in that much of a rush), the brilliantly gifted Wozniak delivered a working board with just 46 chips.

Jobs made good on his promise and gave Wozniak the $350. What he didn't tell him -- and what Wozniak didn't find out until several years later -- was that Jobs also pocketed a bonus somewhere in the neighborhood of $5,000. Though it's often reported that this caused a rift in their friendship, Wozniak seems to have no hard feelings.

"The money's irrelevant -- and it was then. I would have done it for free," he said in a recent interview. "I was happy to be able to design a video game that people would actually play. I think Steve needed money and just didn't tell me the truth. If he'd told me the truth, he'd have gotten it."

The Forbidden Fruit

As this was going on, Jobs and Wozniak were designing a personal home computer during their offtime, which would eventually become the Apple I. Even Alcorn himself got involved, unofficially.

"I helped them with parts, I helped them design it. It was a cool engineering project, but it seemed [like it would] make no money," he recalled.

"He offered the Apple II to Atari ... we said no. No thank you. But I liked him. He was a nice guy. So I introduced him to venture capitalists."

Jobs and Atari soon parted ways, and Apple Computer was formed on April 1, 1976. The rest, as they say, is history.
 

Vyom

The Power of x480
Staff member
Admin
What is it about Steve Jobs?

By Anisha Oommen | The Water Cooler

What is it that makes millions of people who have never met him mourn his passing? What makes them feel like they knew him? What made his death so personal?

In an outpouring from the far corners of the world, people appear to share a personal connection with him. A friend of mine called it the 'United States of Apple.' Grief creates a leveling platform, where country, language and economics become irrelevant, and shared loss reminds us again of how much we really have in common.

The word "inspiration" keeps re-surfacing. We see our potential in him. A post on Twitter captured it —

"Jobs was born out of wedlock, put up for adoption, dropped out of college, and still, he changed the world. What's your excuse?" -
- Twitter

Is that what we see reflected in him? Is that what unites digital titans and everyday gadget fans like you and me as we mourn him?

Messages from Apple fans across the world echo the same message — he created dreams for people, he made them believe in themselves, believe that they too could reach for greatness.

From Paris, Russia and Munich. From Sydney and Tokyo, from teachers to entrepreneurs, fans pay tribute to their hero. From Shanghai, voices express concern over the future of Apple, without the leadership of its iconic guide. From India, the Kanchi Dham temple in Uttarakhand pays its respects to the man who visited them at the young age of 18, in search of enlightenment.

From Hong Kong, nineteen-year-old Jonathon Mak's homage to Jobs went viral on the internet. His design incorporates Steve Jobs' silhouette into the bite of the Apple logo. A tribute of ingenious simplicity. He says, "I just wanted it to be a very quiet commemoration. It's just this quiet realization that Apple is now missing a piece. It's just kind of implying his absence."

Jobs' family recognized what he meant to the public, and that many will mourn with them. They are building a website where people can share their memories of Jobs. In the meantime, though, Apple is collecting thoughts, memories and condolences at an e-mail address it has set up: rememberingsteve@apple.com.

Jobs may be gone, but he will live online.
 