The Geeks Daily

Discussion in 'Chit-Chat' started by sygeek, May 30, 2011.

  1. sygeek

    sygeek Active Member

    Joined:
    Apr 16, 2010
    Messages:
    2,187
    Likes Received:
    23
    Trophy Points:
    38
    Location:
    Lucknow
    This thread is meant for sharing interesting articles related to technology and geekism in general -- ones that don't fit in the Technology News section, the Random News section, or the OSS article thread.

    Now, The Rules:
    1. Please don't copy-paste an entire article if the site's Terms and Conditions don't allow it. A link with a summary of the article in quotes is better.
    2. If the site doesn't have any Terms and Conditions, or it allows the article to be fully republished (with a link back to the site), then you are free to paste the entire article.
    3. When pasting a full article, keep it under SPOILER tags - [SPOILER][/SPOILER].
    4. Custom-written articles can be posted here too. Add a [Custom] tag to the titles of such posts.
    5. Please send trackbacks to the site whose article you are using in the post.
    6. Discussion of a posted article is allowed as long as it sticks to the topic.
    7. Off-topic posts, and posts not following the corresponding site's T&C, will be reported to the Mods immediately.
     
    Last edited: Jun 7, 2011
  2. OP
    sygeek

    CPU vs. The Human Brain


    The brain's waves drive computation, sort of, in a 5 million core, 9 Hz computer.

    Computer manufacturers have worked in recent years to wean us off the speed metric for their chips and systems. No longer do they scream out GHz values; they use chip brands like Atom, Core Duo, and quad-core, or just give up altogether and sell on other features. They don't really have much to crow about, since chip speed increases have slowed with the increasing difficulty of cramming more elements and heat into ever-smaller areas. The current state of the art is about 3 GHz (far below predictions from 2001) on four cores in one computer, meaning that computations are spread over four different processors, each running at about 0.3 nanoseconds per computation cycle.

    The division of CPUs into different cores hasn't been a matter of choice, and it hasn't been well supported by software, most of which continues to be conceived and written in linear fashion. Now that we typically have several things going on at once on our computers, the top-level system doles out whole programs to the different processors. Each program sends its instructions in linear order through one processor/core, in soda-straw fashion. Ever-higher clock speeds, allowing more rapid progress through the straw, thus remain critical for getting more work done.

    Our brains take a rather different approach to cores, clock speeds, and parallel processing, however. They operate at variable clock speeds between 5 and 500 Hertz. No Giga here, or Mega or even Kilo. Brain waves, whose relationship to computation remains somewhat mysterious, are very slow, ranging from the delta (sleep) waves of 0-4 Hz through theta, alpha, beta, and gamma waves at 30-100+ Hz which are energetically most costly and may correlate with attention / consciousness.

    On the other hand, the brain has about 1e15 synapses, making it analogous to five million contemporary 200-million-transistor chip "cores". Needless to say, the brain takes a massively parallel approach to computation. Signals run through millions of parallel nerve fibers from, say, the eye (1.2 million in each optic nerve) through massive brain regions, where each signal traverses only perhaps ten to twenty neurons in any serial path while branching out in millions of directions as the data is sliced, diced, and re-assembled into vision. If you are interested in visual pathways, I would recommend Christof Koch's Quest for Consciousness, whose treatment of visual pathways is better than its treatment of other topics.
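
    The arithmetic behind that five-million-core equivalence is easy to check; the figures are the article's own (1e15 synapses, 200 million transistors per chip):

```python
# The article's rough equivalence: one synapse ~ one transistor.
synapses_in_brain = 1e15       # ~10^15 synapses
transistors_per_chip = 200e6   # a contemporary 200-million-transistor chip "core"

equivalent_cores = synapses_in_brain / transistors_per_chip
print(f"{equivalent_cores:,.0f} chip-equivalents")  # 5,000,000
```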

    Unlike transistors, neurons are intrinsically rhythmic to various degrees due to their ion channel complements that govern firing and refractory/recovery times. So external "clocking" is not always needed to make them run, though the present articles deal with one such case. Neurons can spontaneously generate synchrony in large numbers due to their intrinsic rhythmicity.

    Nor are neurons passive input-output integrators of whatever hits their dendrites, as early theories had them. Instead, they spontaneously generate cycles and noise, which enhances their sensitivity to external signals, and their ability to act collectively. They are also subject to many other influences like hormones and local non-neural glial cells. A great deal of integration happens at the synapse and regional multi-synapse levels, long before the cell body or axon is activated. This is why the synapse count is a better analog to transistor counts on chips than the neuron count. If you are interested in the topics of noise and rhythmicity, I would recommend the outstanding and advanced book by Gyorgy Buzsaki, Rhythms of the Brain. Without buying a book, you can read Buzsaki's take on consciousness.

    Two recent articles (Brandon et al., Koenig et al.) provide a small advance in this field of figuring out how brain rhythms connect with computation. Two groups seem to have had the same idea and did very similar experiments to show that a specific type of spatial computation in a brain area called the medial entorhinal cortex (mEC) near the hippocampus depends on theta rhythm clocking from a loosely connected area called the medial septum (MS). (In-depth essay on alcohol, blackouts, memory formation, the medial septum, and hippocampus, with a helpful anatomical drawing).

    Damage to the MS (situated just below the corpus callosum that connects the two brain hemispheres) was known to have a variety of effects on functions not located in the MS, but in the hippocampus and mEC, like loss of spatial memory, slowed learning of simple aversive associations, and altered patterns of food and water intake.

    The hippocampus and allied areas like the mEC are among the best-investigated parts of the brain, along with the visual system. They mediate most short-term memory, especially spatial memory (i.e., rats running in mazes). The spatial system as understood so far has several types of cells:

    Head direction cells, which know which way the head is pointed (some of them fire when the head points at one angle, others fire at other angles).

    Grid cells, which are sensitive to an abstract grid in space covering the ambient environment. Some of these cells fire when the rat is on one of the grid boundaries. So we literally have a latitude/longitude-style map in our heads, which may be why map-making comes so naturally to humans.

    Border cells, which fire when the rat is close to a wall.

    Place cells, which respond to specific locations in the ambient space -- not periodically like grid cells, but typically to one place only.

    Spatial view cells, which fire when the rat is looking at a particular location, rather than when it is in that location. They also respond, as do the other cells above, when a location is being recalled rather than experienced.

    Clearly, once these cells all network together, a rather detailed self-orientation system is possible, based on high-level input from various senses (vestibular, whiskers, vision, touch). The role of rhythm is complicated in this system. For instance, the phase relation of place cell firing versus the underlying theta rhythm (leading or following it, in a sort of syncopation) indicates closely where the animal is within the place cell's region as movement occurs. Upon entry, firing begins at the peak of the theta wave, but then precesses to the trough of the theta wave as the animal reaches the exit. Combined over many adjacent and overlapping place fields, this could conceptually provide very high precision to the animal's sense of position.

    [Image: One rat's repeated tracks in a closed maze, mapped versus firing patterns of several of its place cells, each given a different color.]

    We are eavesdropping here on the unconscious processes of an animal, which it could not itself really articulate even if it wished and had language to do so. The grid and place fields are not conscious at all, but enormously intricate mechanisms that underlie implicit mapping. The animal has a "sense" of its position, (projecting a bit from our own experience), which is critical to many of its further decisions, but the details don't necessarily reach consciousness.

    The current papers deal not with place cells, which still fire in a place-specific way without the theta rhythm, but with grid cells, whose "gridness" appears to depend strongly on the theta rhythm. The real-life fields of rat grid cells have a honeycomb-like hexagonal shape with diameters ranging from 40 to 90 cm, ordered in systematic fashion from top to bottom within the mEC anatomy. The theta rhythm frequency they respond to also varies along the same axis, from 10 to 4 Hz. These values stretch and vary with the environment the animal finds itself in.

    [Image: Field size of grid cells, plotted against anatomical depth in the mEC.]

    The current papers ask a simple question: do the grid cells of the mEC depend on the theta rhythm supplied from the MS, as has long been suspected from work with mEC lesions, or do they work independently and generate their own rhythm(s)?

    This was investigated by the expedient of injecting anaesthetics into the MS to temporarily stop its theta wave generation, and then polling electrodes stuck into the mEC for their grid firing characteristics as the rats were freely moving around. The grid cells still fired, but lost their spatial coherence, firing without regard to where the rat was or was going physically (see bottom trajectory maps). Spatial mapping was lost when the clock-like rhythm was lost.

    [Image: One experimental sequence. Top is the schematic of what was done. Rate map shows the firing rate of the target grid cells in a sampled 3 cm square, with m = mean rate and p = peak rate. Spatial autocorrelation shows how spatially periodic the rate map data is, and at what interval. Gridness is an abstract metric of how spatially periodically the cells fire. Trajectory shows the rat's physical paths during free behavior, overlaid with the grid cell firing data.]

    "These data support the hypothesized role of theta rhythm oscillations in the generation of grid cell spatial periodicity or at least a role of MS input. The loss of grid cell spatial periodicity could contribute to the spatial memory impairments caused by lesions or inactivation of the MS."

    This is somewhat reminiscent of an artificial computer system, where computation ceases (here it becomes chaotic) when clocking ceases. Brain systems are clearly much more robust, breaking down more gracefully and not being as heavily dependent on clocking of this kind, not to mention being capable of generating most rhythms endogenously. But a similar phenomenon happens more generally, of course, during anesthesia, where the controlled long-range chaos of the gamma oscillation ceases along with attention and consciousness.

    It might be worth adding that brain waves have no particular connection with rhythmic sensory inputs like sound waves, some of which come in the same frequency range, at least at the very low end. The transduction of sound through the cochlea into neural impulses encodes them in a much more sophisticated way than simply reproducing their frequency in electrical form, and leads to wonders of computational processing such as perfect pitch, speech interpretation, and echolocation.

    Clearly, these are still early days in the effort to know how computation takes place in the brain. There is a highly mysterious bundling of widely varying timing/clocking rhythms with messy anatomy and complex content flowing through. But we also understand a lot- far more with each successive decade of work and with advancing technologies. For a few systems, (vision, position, some forms of emotion), we can track much of the circuitry from sensation to high-level processing, such as the level of face recognition. Consciousness remains unexplained, but scientists are definitely knocking at the door.

    You'd think it'd be easy to reboot a PC, wouldn't you? But then you'd also think that it'd be straightforward to convince people that at least making some effort to be nice to each other would be a mutually beneficial proposal, and look how well that's worked for us.

    Linux has a bunch of different ways to reset an x86. Some of them are 32-bit only and so I'm just going to ignore them because honestly just what are you doing with your life. Also, they're horrible. So, that leaves us with five of them.

    • kbd - reboot via the keyboard controller. The original IBM PC had the CPU reset line tied to the keyboard controller. Writing the appropriate magic value pulses the line and the machine resets. This is all very straightforward, except for the fact that modern machines don't have keyboard controllers (they're actually part of the embedded controller) and even more modern machines don't even pretend to have a keyboard controller. Now, embedded controllers run software. And, as we all know, software is dreadful. But, worse, the software on the embedded controller has been written by BIOS authors. So clearly any pretence that this ever works is some kind of elaborate fiction. Some machines are very picky about hardware being in the exact state that Windows would program. Some machines work 9 times out of 10 and then lock up due to some odd timing issue. And others simply don't work at all. Hurrah!
    • triple - attempt to generate a triple fault. This is done by loading an empty interrupt descriptor table and then calling int(3). The interrupt fails (there's no IDT), the fault handler fails (there's no IDT) and the CPU enters a condition which should, in theory, then trigger a reset. Except there doesn't seem to be a requirement that this happen and it just doesn't work on a bunch of machines.
    • pci - not actually pci. Traditional PCI config space access is achieved by writing a 32 bit value to io port 0xcf8 to identify the bus, device, function and config register. Port 0xcfc then contains the register in question. But if you write the appropriate pair of magic values to 0xcf9, the machine will reboot. Spectacular! And not standardised in any way (certainly not part of the PCI spec), so different chipsets may have different requirements. Booo.
    • efi - EFI runtime services provide an entry point to reboot the machine. It usually even works! As long as EFI runtime services are working at all, which may be a stretch.
    • acpi - Recent versions of the ACPI spec let you provide an address (typically memory or system IO space) and a value to write there. The idea is that writing the value to the address resets the system. It turns out that doing so often fails. It's also impossible to represent the PCI reboot method via ACPI, because the PCI reboot method requires a pair of values and ACPI only gives you one.


    Now, I'll admit that this all sounds pretty depressing. But people clearly sell computers with the expectation that they'll reboot correctly, so what's going on here?

    A while back I did some tests with Windows running on top of qemu. This is a great way to evaluate OS behaviour, because you've got complete control of what's handed to the OS and what the OS tries to do to the hardware. And what I discovered was a little surprising. In the absence of an ACPI reboot vector, Windows will hit the keyboard controller, wait a while, hit it again and then give up. If an ACPI reboot vector is present, Windows will poke it, try the keyboard controller, poke the ACPI vector again and try the keyboard controller one more time.

    This turns out to be important. The first thing it means is that it generates two writes to the ACPI reboot vector. The second is that it leaves a gap between them while it's fiddling with the keyboard controller. And, shockingly, it turns out that on most systems the ACPI reboot vector points at 0xcf9 in system IO space. Even though most implementations nominally require two different values be written, it seems that this isn't a strict requirement and the ACPI method works.
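
    The ordering described above amounts to a simple fallback sequence. A minimal sketch of that ordering, with hypothetical poke_* stubs standing in for the actual hardware writes (this is an illustration of the sequencing, not real kernel code):

```python
def attempt_reboot(acpi_vector_present, log):
    """Sketch of the Windows-style reboot ordering described above.
    The poke_* names are illustrative stubs, not real kernel APIs."""
    def poke_acpi():
        log.append("acpi")  # write the value to the ACPI reboot vector
    def poke_kbd():
        log.append("kbd")   # pulse the keyboard-controller reset line

    if acpi_vector_present:
        # Two ACPI writes, with keyboard-controller attempts in between --
        # the gap and the double write are what let 0xcf9-backed ACPI
        # vectors work in practice despite nominally needing two values.
        poke_acpi(); poke_kbd(); poke_acpi(); poke_kbd()
    else:
        # No ACPI vector: hit the keyboard controller, wait, hit it again.
        poke_kbd(); poke_kbd()

attempts = []
attempt_reboot(acpi_vector_present=True, log=attempts)
print(attempts)  # ['acpi', 'kbd', 'acpi', 'kbd']
```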

    3.0 will ship with this behaviour by default. It makes various machines work (some Apples, for instance), improves things on some others (some Thinkpads seem to sit around for extended periods of time otherwise) and hopefully avoids the need to add any more machine-specific quirks to the reboot code. There's still some divergence between us and Windows (mostly in how often we write to the keyboard controller), which can be cleaned up if it turns out to make a difference anywhere.

    Now. Back to EFI bugs.
     
    Last edited: Jun 4, 2011
  3. OP
    sygeek

    Ten Oddities And Secrets About JavaScript
    Visit link for full article

     
  4. OP
    sygeek

    By James Somers


    When Colin Hughes was about eleven years old his parents brought home a rather strange toy. It wasn't colorful or cartoonish; it didn't seem to have any lasers or wheels or flashing lights; the box it came in was decorated, not with the bust of a supervillain or gleaming protagonist, but bulleted text and a picture of a QWERTY keyboard. It called itself the "ORIC-1 Micro Computer." The package included two cassette tapes, a few cords and a 130-page programming manual.

    On the whole it looked like a pretty crappy gift for a young boy. But his parents insisted he take it for a spin, not least because they had just bought the thing for more than £129. And so he did. And so, he says, "I was sucked into a hole from which I would never escape."

    It's not hard to see why. Although this was 1983, and the ORIC-1 had about the same raw computing power as a modern alarm clock, there was something oddly compelling about it. When you turned it on all you saw was the word "Ready," and beneath that, a blinking cursor. It was an open invitation: type something, see what happens.

    In less than an hour, the ORIC-1 manual took you from printing the word "hello" to writing short programs in BASIC -- the Beginner's All-Purpose Symbolic Instruction Code -- that played digital music and drew wildly interesting pictures on the screen. Just when you got the urge to try something more complicated, the manual showed you how.

    In a way, the ORIC-1 was so mesmerizing because it stripped computing down to its most basic form: you typed some instructions; it did something cool. This was the computer's essential magic laid bare. Somehow ten or twenty lines of code became shapes and sounds; somehow the machine breathed life into a block of text.

    No wonder Colin got hooked. The ORIC-1 wasn't really a toy, but a toy maker. All it asked for was a special kind of blueprint.

    Once he learned the language, it wasn't long before he was writing his own simple computer games, and, soon after, teaching himself trigonometry, calculus and Newtonian mechanics to make them better. He learned how to model gravity, friction and viscosity. He learned how to make intelligent enemies.

    More than all that, though, he learned how to teach. Without quite knowing it, Colin had absorbed from his early days with the ORIC-1 and other such microcomputers a sense for how the right mix of accessibility and complexity, of constraints and open-endedness, could take a student from total ignorance to near mastery quicker than anyone -- including his own teachers -- thought possible.

    It was a sense that would come in handy, years later, when he gave birth to Project Euler, a peculiar website that has trained tens of thousands of new programmers, and that is in its own modest way the emblem of a nascent revolution in education.

    * * *

    Sometime between middle and high school, in the early 2000s, I got a hankering to write code. It was very much a "monkey see, monkey do" sort of impulse. I had been watching a lot of TechTV -- an obscure but much-loved cable channel focused on computing, gadgets, gaming and the Web -- and Hackers, the 1995 cult classic starring Angelina Jolie in which teenaged computer whizzes, accused of cybercrimes they didn't commit, have to hack their way to the truth.

    I wanted in. So I did what you might expect an over-enthusiastic suburban nitwit to do, and asked my mom to drive me to the mall to buy Ivor Horton's 1,181-page, 4.6-pound Beginning Visual C++ 6. I imagined myself working montage-like through the book, smoothly accruing expertise one chapter at a time.

    What happened instead is that I burned out after a week. The text itself was dense and unsmiling; the exercises were difficult. It was quite possibly the least fun I've ever had with a book, or, for that matter, with anything at all. I dropped it as quickly as I had picked it up.

    Remarkably I went through this cycle several times: I saw people programming and thought it looked cool, resolved myself to learn, sought out a book and crashed the moment it got hard.

    For a while I thought I didn't have the right kind of brain for programming. Maybe I needed to be better at math. Maybe I needed to be smarter.

    But it turns out that the people trying to teach me were just doing a bad job. Those books that dragged me through a series of structured principles were just bad books. I should have ignored them. I should have just played.

    Nobody misses that fact more egregiously than the American College Board, the folks responsible for setting the AP Computer Science high school curriculum. The AP curriculum ought to be a model for how to teach people to program. Instead it's an example of how something intrinsically amusing can be made into a lifeless slog.


    I imagine that the College Board approached the problem from the top down. I imagine a group of people sat in a room somewhere and asked themselves, "What should students know by the time they finish this course?"; listed some concepts, vocabulary terms, snippets of code and provisional test questions; arranged them into "modules," swaths of exposition followed by exercises; then handed off the course, ready-made, to teachers who had no choice but to follow it to the letter.

    Whatever the process, the product is a nightmare described eloquently by Paul Lockhart, a high school mathematics teacher, in his short booklet, A Mathematician's Lament, about the sorry state of high school mathematics. His argument applies almost beat for beat to computer programming.

    Lockhart illustrates our system's sickness by imagining a fun problem, then showing how it might be gutted by educators trying to "cover" more "material."

    Take a look at this picture:
    [Image: a triangle inside a rectangular box]

    It's sort of neat to wonder, How much of the box does the triangle take up? Two-thirds, maybe? Take a moment and try to figure it out.

    If you're having trouble, it could be because you don't have much training in real math, that is, in solving open-ended problems about simple shapes and objects. It's hard work. But it's also kind of fun -- it requires patience, creativity, an insight here and there. It feels more like working on a puzzle than one of those tedious drills at the back of a textbook.

    If you struggle for long enough you might strike upon the rather clever idea of chopping your rectangle into two pieces like so:

    [Image: the rectangle split into two smaller rectangles, each cut diagonally in half by a leg of the triangle]

    Now you have two rectangles, each cut diagonally in half by a leg of the triangle. So there is exactly as much space inside the triangle as outside, which means the triangle must take up exactly half the box!
    But this is not what math feels like in school. The creative process is inverted, vitiated.
    * * *
    My struggle to become a hacker finally saw a breakthrough late in my freshman year of college, when I stumbled on a simple question: "If we list all the natural numbers below 10 that are multiples of 3 or 5, we get 3, 5, 6 and 9. The sum of these multiples is 23. Find the sum of all the multiples of 3 or 5 below 1000."
    This was the puzzle that turned me into a programmer. This was Project Euler problem #1, written in 2001 by a then much older Colin Hughes, that student of the ORIC-1 who had gone on to become a math teacher at a small British grammar school and, not long after, the unseen professor to tens of thousands of fledglings like myself.

    The problem itself is a lot like Lockhart's triangle question -- simple enough to entice the freshest beginner, sufficiently complicated to require some thought.

    What's especially neat about it is that someone who has never programmed -- someone who doesn't even know what a program is -- can learn to write code that solves this problem in less than three hours. I've seen it happen. All it takes is a little hunger. You just have to want the answer.
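
    To make that concrete, problem #1 ("Find the sum of all the multiples of 3 or 5 below 1000") fits in a few lines of Python; this is just one of many possible approaches:

```python
def sum_multiples(limit):
    """Sum of all natural numbers below `limit` divisible by 3 or 5."""
    return sum(n for n in range(limit) if n % 3 == 0 or n % 5 == 0)

print(sum_multiples(10))    # 23 -- matches the worked example in the problem
print(sum_multiples(1000))  # the answer Project Euler asks for
```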

    That's the pedagogical ballgame: get your student to want to find something out. All that's left after that is to make yourself available for hints and questions. "That student is taught the best who is told the least."

    It's like sitting a kid down at the ORIC-1. Kids are naturally curious. They love blank slates: a sandbox, a bag of LEGOs. Once you show them a little of what the machine can do they'll clamor for more. They'll want to know how to make that circle a little smaller or how to make that song go a little faster. They'll imagine a game in their head and then relentlessly fight to build it.

    Along the way, of course, they'll start to pick up all the concepts you wanted to teach them in the first place. And those concepts will stick because they learned them not in a vacuum, but in the service of a problem they were itching to solve.

    Project Euler, named for the Swiss mathematician Leonhard Euler, is popular (more than 150,000 users have submitted 2,630,835 solutions) precisely because Colin Hughes -- and later, a team of eight or nine hand-picked helpers -- crafted problems that lots of people get the itch to solve. And it's an effective teacher because those problems are arranged like the programs in the ORIC-1's manual, in what Hughes calls an "inductive chain":

    The problems range in difficulty and for many the experience is inductive chain learning. That is, by solving one problem it will expose you to a new concept that allows you to undertake a previously inaccessible problem. So the determined participant will slowly but surely work his/her way through every problem.

    This is an idea that's long been familiar to video game designers, who know that players have the most fun when they're pushed always to the edge of their ability. The trick is to craft a ladder of increasingly difficult levels, each one building on the last. New skills are introduced with an easier version of a challenge -- a quick demonstration that's hard to screw up -- and certified with a harder version, the idea being to only let players move on when they've shown that they're ready. The result is a gradual ratcheting up of the learning curve.

    Project Euler is engaging in part because it's set up like a video game, with 340 fun, very carefully ordered problems. Each has its own page, like this one that asks you to discover the three most popular squares in a game of Monopoly played with 4-sided (instead of 6-sided) dice. At the bottom of the puzzle description is a box where you can enter your answer, usually just a whole number. The only "rule" is that the program you use to solve the problem should take no more than one minute of computer time to run.

    On top of this there is one brilliant feature: once you get the right answer you're given access to a forum where successful solvers share their approaches. It's the ideal time to pick up new ideas -- after you've wrapped your head around a problem enough to solve it.

    This is also why a lot of experienced programmers use Project Euler to learn a new language. Each problem's forum is a kind of Rosetta stone. For a single simple problem you might find annotated solutions in Python, C, Assembler, BASIC, Ruby, Java, J and FORTRAN.

    Even if you're not a programmer, it's worth solving a Project Euler problem just to see what happens in these forums. What you'll find there is something that educators, technologists and journalists have been talking about for decades. And for nine years it's been quietly thriving on this site. It's the global, distributed classroom, a nurturing community of self-motivated learners -- old, young, from more than two hundred countries -- all sharing in the pleasure of finding things out.

    * * *

    It's tempting to generalize: If programming is best learned in this playful, bottom-up way, why not everything else? Could there be a Project Euler for English or Biology?

    Maybe. But I think it helps to recognize that programming is actually a very unusual activity. Two features in particular stick out.

    The first is that it's naturally addictive. Computers are really fast; even in the '80s they were really fast. What that means is there is almost no time between changing your program and seeing the results. That short feedback loop is mentally very powerful. Every few minutes you get a little payoff -- perhaps a small hit of dopamine -- as you hack and tweak, hack and tweak, and see that your program is a little bit better, a little bit closer to what you had in mind.

    It's important because learning is all about solving hard problems, and solving hard problems is all about not giving up. So a machine that triggers hours-long bouts of frantic obsessive excitement is a pretty nifty learning tool.

    The second feature, by contrast, is something that at first glance looks totally immaterial. It's the simple fact that code is text.

    Let's say that your sink is broken, maybe clogged, and you're feeling bold -- instead of calling a plumber you decide to fix it yourself. It would be nice if you could take a picture of your pipes, plug it into Google, and instantly find a page where five or six other people explained in detail how they dealt with the same problem. It would be especially nice if once you found a solution you liked, you could somehow immediately apply it to your sink.

    Unfortunately that's not going to happen. You can't just copy and paste a Bob Vila video to fix your garage door.

    But the really crazy thing is that this is what programmers do all day, and the reason they can do it is because code is text.

    I think that goes a long way toward explaining why so many programmers are self-taught. Sharing solutions to programming problems is easy, perhaps easier than sharing solutions to anything else, because the medium of information exchange -- text -- is the medium of action. Code is its own description. There's no translation involved in making it go.

    Programmers take advantage of that fact every day. The Web is teeming with code because code is text and text is cheap, portable and searchable. Copying is encouraged, not frowned upon. The neophyte programmer never has to learn alone.

    * * *

    Garry Kasparov, a chess grandmaster who was famously bested by IBM's Deep Blue supercomputer, notes how machines have changed the way the game is learned:
    A student can now download a free program that plays better than any living human. He can use it as a sparring partner, a coach, an encyclopedia of important games and openings, or a highly technical analyst of individual positions. He can become an expert without ever leaving the house.

    Take that thought to its logical end. Imagine a future in which the best way to learn how to do something -- how to write prose, how to solve differential equations, how to fly a plane -- is to download software, not unlike today's chess engines, that takes you from zero to sixty by way of a delightfully addictive inductive chain.

    If the idea sounds far-fetched, consider that I was taught to program by a program whose programmer, more than twenty-five years earlier, was taught to program by a program.
     
    Last edited: Jun 4, 2011
  5. nisargshah95

    nisargshah95 Your Ad here

    Joined:
    Feb 13, 2010
    Messages:
    425
    Likes Received:
    2
    Trophy Points:
    0
    For those who want to know why it's like this, go to 14. Floating Point Arithmetic: Issues and Limitations — Python v2.7.1 documentation

    Great article buddy. Keep posting! I guess we should start a thread where we discuss Euler's problems :p What say?
     
    Last edited: Jun 4, 2011
  6. OP
    sygeek

    sygeek Active Member

    Joined:
    Apr 16, 2010
    Messages:
    2,187
    Likes Received:
    23
    Trophy Points:
    38
    Location:
    Lucknow
    Sure, but no one looks interested in it and so I didn't bother creating one. Also, Euler's forums already have a section dedicated to this, so it doesn't make much sense unless you guys want a more familiar community to discuss it in.
     
    Last edited: Jun 4, 2011
  7. nisargshah95

    nisargshah95 Your Ad here

    Joined:
    Feb 13, 2010
    Messages:
    425
    Likes Received:
    2
    Trophy Points:
    0
    Oh. Anyway, don't stop posting the articles. They're good.

    BTW Yay! I solved the first problem - If we list all the natural numbers below 10 that are multiples of 3 or 5, we get 3, 5, 6 and 9. The sum of these multiples is 23.
    Find the sum of all the multiples of 3 or 5 below 1000. Did it using JavaScript (and Python console for calculations).
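    For anyone curious, the whole computation fits in a couple of lines. A rough sketch in Python (rather than the JavaScript the post used), purely for illustration:

```python
# Project Euler problem 1: sum the multiples of 3 or 5 below a limit.
def sum_multiples(limit):
    return sum(n for n in range(limit) if n % 3 == 0 or n % 5 == 0)

print(sum_multiples(10))    # the worked example: 3 + 5 + 6 + 9 = 23
print(sum_multiples(1000))
```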
     
    Last edited: Jun 4, 2011
  8. OP
    sygeek

    sygeek Active Member

    Joined:
    Apr 16, 2010
    Messages:
    2,187
    Likes Received:
    23
    Trophy Points:
    38
    Location:
    Lucknow
    Hate Java? You’re fighting the wrong battle.


    One of the most interesting trends I've seen lately is the unpopularity of Java around blogs, DZone and others. It seems some people are even offended, some on a personal level, by the suggestion that Java is superior in any way to their favorite web 2.0 language.

    Java has been widely successful for a number of reasons:
    • It’s widely accepted in the established companies.
    • It’s one of the fastest languages.
    • It’s one of the most secure languages.
    • Synchronization primitives are built into the language.
    • It’s platform independent.
    • Hotspot is open source.
    • Thousands of vendors exist for a multitude of Java products.
    • Thousands of open source libraries exist for Java.
    • Community governance via the JCP (pre-Oracle).
    This is quite a resume for any language, and it shows, as Java has enjoyed a long streak as one of the most popular languages around.
    So why, in late 2010 and 2011, has Java suddenly become the hated demon it is?
    It’s popular to hate Java.
    • C-like syntax is no longer popular.
    • Hate for Oracle is being leveraged to promote individual interests.
    • People have been exposed to really bad code that's been written in Java.
    • … insert next hundred reasons here.
    Java, the actual language and API, does have quite a few real problems… too many to list here (a mix of primitive and object types, an abundance of abandoned APIs, inconsistent use of checked exceptions). But I'm offering an olive branch… Let's discuss the real problem and not throw the baby out with the bath water.

    So what is the real problem in this industry? Java, with its faults, has completely conquered web application programming. On the sidelines, charging hard, new languages are being invented at a mind-blowing rate, also aiming to conquer web application programming. The two are pitted together, and we're left with what looks like a bunch of preppy mall-kids battling for street territory by break dancing. And while everyone is bickering over whether PHP or Rails 3.1 runs faster and can serve more simultaneous requests, there lurks a silent elephant in the room, laughing quietly as we duke it out in childish arguments over syntax and runtimes.

    Tell me, what do the following have in common?
    • Paying with a credit card.
    • Going to the emergency room.
    • Adjusting your 401k.
    • Using your insurance card at the dentist.
    • Shopping around for the best car insurance.
    • A BNSF train pulling a Union Pacific coal car.
    • Transferring money between banks.
    • Filling a prescription.
    All the above industries are billion dollar players in our economy. All of the above industries write new COBOL and mainframe assembler programs. I’m not making this up, I work in the last industry, and I’ve interviewed and interned in the others.

    For god's sake, people: COBOL, invented in 1959, is still being written today, for real! We're not talking about maintaining a few lines here and there; we're talking thousands of new lines, every day, to implement new functionality and new requirements. These industries haven't even caught word that the breeze has shifted to the cloud. These industries are essential; they form the building blocks of our economy. Despite this, they do not innovate and they carry massive expenses with their legacy technology. The costs of running these businesses are enormous, and a good percentage of those are IT costs.

    How expensive? Let's talk about mainframe licensing, for instance. Let's say you buy the Enterprise version of MongoDB and put it on a box. You then proceed to peg out the CPU doing transaction after transaction to the database… The next week, you go on vacation, and leave MongoDB running without doing a thing. How much did MongoDB cost in both weeks? The same.

    Mainframe software is licensed much differently. Let's say you buy your mainframe for a couple million and buy a database product for it. You then spend all week pegging the CPU(s) with database requests. You check your mail, and you now have a million dollar bill from the database vendor. Wait, I bought the hardware, why am I paying another bill? The software on a mainframe is often billed by usage, or how many CPU cycles you spend using it. If you spend 2,000,000 CPU cycles running the database, you will end up owing the vendor $2mil. Bizarre? Absolutely!

    These invisible industries you utilize every day are full of bloat, legacy systems, and high costs. Java set out to conquer many fronts, and while it thoroughly took over the web application arena, it fizzled out in centralized computing. These industries are ripe for reducing costs and becoming more efficient, but honestly, we're embarrassing ourselves. These industries stick with their legacy systems because they don't think Ruby, Python, Scala, Lua, PHP, or Java could possibly handle the 'load', scalability, or uptime requirements that their legacy systems provide. This is so far from the truth, but again, there has been zero innovation in these arenas in the last 15 years, despite the progress of web technology making galaxy-sized leaps.

    So next week someone will invent another DSL that makes Twitter easier to use, but your bank will be writing new COBOL to more efficiently transfer funds to another bank. We're embarrassing ourselves with our petty arguments. There is an entire economy that needs to see the benefits of distributed computing, but if the friendly fire continues, we'll all lose. Let's stop these ridiculous arguments, pass the torch peacefully, and conquer some of these behemoths!
     
  9. tejaslok

    tejaslok New Member

    Joined:
    May 8, 2009
    Messages:
    115
    Likes Received:
    0
    Trophy Points:
    0
    Location:
    BANgalore
    thanks for posting this article "How I Failed, Failed, and Finally Succeeded at Learning How to Code". been going through it :)
     
  10. OP
    sygeek

    sygeek Active Member

    Joined:
    Apr 16, 2010
    Messages:
    2,187
    Likes Received:
    23
    Trophy Points:
    38
    Location:
    Lucknow
    By Alex Schiff, University of Michigan

    A month ago, I turned down a very good opportunity from a just-funded startup to continue my job for the rest of the summer. It was in an industry I was passionate about, I would have had a leadership position, and, having just received a raise, the pay would have been substantially higher than most jobs for 20-year-old college students. I had worked there for a year (full-time during last summer and part-time during the school year) and common sense should have pushed me to go back.

    But I didn’t.

    I’ve never been one to base my actions on others’ expectations. Just ask my dad, with whom I was having arguments about moral relativism by the time I was 13. That’s why I didn’t think twice about the implications of turning down an opportunity most people my age would kill for to start my own company. When you take a leap of faith of that magnitude, you can’t look back.

    That’s not how the rest of the world sees it, though. As a college student, I’m expected to spend my summers either gaining experience in an internship or working at some job (no matter how menial) to earn money. Every April, the “So where are you working this summer?” conversation descends on the University of Michigan campus like a storm cloud. When I told people I was foregoing a paycheck for at least the next several months to build a startup, the reactions were a mix of confusion and misinformed assumptions that I couldn’t land a “real job.”

    This sentiment surfaced recently in a conversation with a family member who asserted that I needed to “pay my dues to society” by joining the workforce. And most adults I know tell me I need to get a real job first before starting my own company. One common thought is, “Most of the world has to wait until they’re at least 40 before they can even think about doing something like that. Why should you be any different?” It almost feels like people assume we have some sort of secular “original sin” that demands I work for someone else before I do what makes me happy. Even when I talk to peers who don’t understand entrepreneurship, their reaction can be subtle condescension and comments like, “Oh that’s cool, but you’re going to get a real job next summer or when you graduate, right?”

    This is my real job. Building startups is what I want to do with my life, preferably as a founder. I’m really bad at working for other people. I have no deference to authority figures and have never been shy to voice my opinions, oftentimes to my detriment. I also can’t stand waiting on people that are in higher positions than me. It makes me feel like I should be in their place and really gets under my skin. All this makes me terrible at learning things from other people and taking advice. I need to learn by doing things and figuring out how to solve problems by myself. I’ll ask questions later.

    As a first-time founder, I can’t escape admitting that starting fetchnotes is an immense learning experience. I’m under no illusion that I have any idea what I’m doing. I’m thankful I had a job where I learned a lot of core skills on the fly — recruiting, business development, management, a little sales and a lot about culture creation. But what I learned — and what most people learn in generalist, non-specialized jobs available to people our age — was the tip of the iceberg.

    When you start something from scratch, you gain a much deeper understanding of these skills. Instead of being told, “We need Drupal developers. Go find Drupal developers here, here and here,” you need to brainstorm the best technical implementation of your idea, figure out what skills that requires and then figure out how to reach those people. Instead of being told, “Go reach out to these people for partnerships to do X, Y and Z,” you need to figure out what types of people and entities you’ll need to grow and how to convince them to do what you need them to do. When you’re an employee, you learn the “what”; when you’re a founder, you learn the “how” and “why.” You need to learn how to rally and motivate people and create a culture in a way that just isn’t remotely the same for a later-hired manager. There are at least 50 orders of magnitude in the difference between the strategic and innovative thinking required of a founder and that of even the most integral first employee.

    Besides, put yourself in an employer’s shoes. You’re interviewing two college graduates — one who started a company and can clearly articulate why it succeeded or failed, and one who had an internship from a “brand name” institution. If I’m interviewing with someone who chooses the latter candidate, they’re not a place I want to work for. It’s likely a “do what we tell you because you’re our employee” working environment. And if that sounds like someone you want to work for, this article is probably irrelevant to you anyway.

    That’s why I never understood the argument about needing to get a job or internship as a “learning experience” or to “pay your dues.” There’s no better learning experience than starting with nothing and figuring it out for yourself (or, thankfully for me, with a co-founder). And there’s no better time to start a company than as a student. When else will your bills, foregone wages and cost of failure be so low? If I fail right now, I’ll be out some money and some time. If I wait until I’m out of college, have a family to support and student loans to pay back, that cost could be being poor, hungry and homeless.

    Okay, maybe that’s a little bit of hyperbole, but you get my point. If you have a game-changing idea, don’t make yourself wait because society says you need an internship every summer to get ahead. To quote a former boss, “just **** it out.”

    Alex Schiff is a co-founder of The New Student Union.
     
  11. nisargshah95

    nisargshah95 Your Ad here

    Joined:
    Feb 13, 2010
    Messages:
    425
    Likes Received:
    2
    Trophy Points:
    0
    Waiting for another article buddy...
     
  12. OP
    sygeek

    sygeek Active Member

    Joined:
    Apr 16, 2010
    Messages:
    2,187
    Likes Received:
    23
    Trophy Points:
    38
    Location:
    Lucknow

    So the Sony saga continues. As if the whole thing about 77 million breached PlayStation Network accounts wasn’t bad enough, numerous other security breaches in other Sony services have followed in the ensuing weeks, most recently with SonyPictures.com.

    As bad guys often like to do, the culprits quickly stood up and put their handiwork on show. This time around it was a group going by the name of LulzSec. Here’s the interesting bit:
    Well actually, the really interesting bit is that they created a torrent of some of the breached accounts so that anyone could go and grab a copy. Ouch. Remember these are innocent customers’ usernames and passwords so we’re talking pretty serious data here. There’s no need to delve into everything Sony did wrong here, that’s both mostly obvious and not the objective of this post.

    I thought it would be interesting to take a look at password practices from a real data source. I spend a bit of time writing about how people and software manage passwords and often talk about things like entropy and reuse, but are these really discussion-worthy topics? I mean, do people generally get passwords right anyway and regularly use long, random, unique strings? We’ve got the data – let’s find out.

    What’s in the torrent

    The Sony Pictures torrent contains a number of text files with breached information and a few instructions:

    [​IMG]

    The interesting bits are in the “Sony Pictures” folder and in particular, three files with a whole bunch of accounts in them:

    [​IMG]

    After a little bit of cleansing, de-duping and an import into SQL Server for analysis, we end up with a total of 37,608 accounts. The LulzSec post earlier on did mention this was only a subset of the million they managed to obtain but it should be sufficient for our purposes here today.

    Analysis

    Here’s what I’m really interested in:
    1. Length
    2. Variety of character types
    3. Randomness
    4. Uniqueness
    These are pretty well accepted measures for password entropy and the more you have of each, the better. Preferably heaps of all of them.

    Length

    Firstly there’s length; the accepted principle is that as length increases, as does entropy. Longer password = stronger password (all things else being equal). How long is long enough? Well, part of the problem is that there’s no consensus and you end up with all sorts of opinions on the subject. Considering the usability versus security balance, around eight characters plus is a pretty generally accepted yardstick. Let’s see the Sony breakdown:

    [​IMG]

    We end up with 93% of accounts being between 6 and 10 characters long which is pretty predictable. Bang on 50% of these are less than eight characters. It’s interesting that seven character long passwords are a bit of an outlier – odd number discrimination, perhaps?

    I ended up grouping the instances of 20 or more characters together – there are literally only a small handful of them. In fact there’s really only a handful from the teens onwards, so what we’d consider a relatively secure length really just doesn’t feature.
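    A breakdown like this is straightforward to reproduce. A minimal Python sketch, using a tiny made-up list in place of the real 37,608-password dump:

```python
from collections import Counter

# Stand-in sample; the article's analysis ran over the full cleaned dump.
passwords = ["seinfeld", "password", "123456", "1qazZAQ!", "dallascowboys"]

# Bucket by length, grouping everything of 20+ characters together,
# matching the grouping described above.
length_counts = Counter(min(len(p), 20) for p in passwords)
for length in sorted(length_counts):
    label = "20+" if length == 20 else str(length)
    print(label, length_counts[length])
```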

    Character types

    Length only gives us so much; what’s really important is the diversity within that length. Let’s take a look at character types, which we’ll categorise as follows:
    1. Numbers
    2. Uppercase
    3. Lowercase
    4. Everything else
    Again, we’ve got this issue of usability and security to consider but good practice would normally be considered as having three or more character types. Let’s see what we’ve got:

    [​IMG]

    Or put another way, only 4% of passwords had three or more character types. But it’s the spread of character types which is also interesting, particularly when only a single type is used:

    [​IMG]

    In short, half of the passwords had only one character type and nine out of ten of those were all lowercase. But the really startling bit is the use of non-alphanumeric characters:

    [​IMG]

    Yep, less than 1% of passwords contained a non-alphanumeric character. Interestingly, this also reconciles with the analysis done on the Gawker database a little while back.
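    A rough sketch of how that categorisation might be computed, following the four classes listed above (this is illustrative, not the article's actual query):

```python
def char_types(password):
    # Count how many of the four classes (numbers, uppercase, lowercase,
    # everything else) appear at least once in the password.
    classes = [
        any(c.isdigit() for c in password),
        any(c.isupper() for c in password),
        any(c.islower() for c in password),
        any(not c.isalnum() for c in password),
    ]
    return sum(classes)

print(char_types("seinfeld"))   # 1: all lowercase
print(char_types("1qazZAQ!"))   # 4: all four classes present
```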

    Randomness

    So how about randomness? Well, one way to look at this is how many of the passwords are identical. The top 25 were:

    seinfeld, password, winner, 123456, purple, sweeps, contest, princess, maggie, 9452, peanut, shadow, ginger, michael, buster, sunshine, tigger, cookie, george, summer, taylor, bosco, abc123, ashley, bailey


    Many of the usual culprits are in there: “password”, “123456” and “abc123”. We saw all these back in the top 25 from the Gawker breach. We also see lots of passwords related to the fact this database was apparently related to a competition: “winner”, “sweeps” and “contest”. A few of these look very specific (9452, for example), but there may have been context to this in the signup process which led multiple people to choose the same password.

    However in the grand scheme of things, there weren’t a whole lot of instances of multiple people choosing the same password, in fact the 25 above boiled down to only 2.5%. Furthermore, 80% of passwords actually only occurred once so whilst poor password entropy is looking rampant, most people are making these poor choices independently and achieving different results.
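    Counting identical passwords is essentially a frequency count. A sketch over a tiny made-up sample (the real figures came from the full dump):

```python
from collections import Counter

# Tiny stand-in sample rather than the real 37,608 passwords.
passwords = ["seinfeld", "password", "password", "123456", "winner", "password"]

counts = Counter(passwords)
top = counts.most_common(1)
# Fraction of accounts covered by the single most common password,
# analogous to the 2.5% figure for the top 25 above.
share = top[0][1] / len(passwords)
print(top[0], share)
```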

    Another way of assessing the randomness is to compare the passwords to a password dictionary. Now, this doesn’t necessarily mean an English dictionary in the way we know it; rather, it’s a collection of words which may be used as passwords, so you’ll get things like obfuscated characters and letter / number combinations. I’ll use this one, which has about 1.7 million entries. Let’s see how many of the Sony passwords are in there:

    [​IMG]

    So more than one third of passwords conform to a relatively predictable pattern. That’s not to say they’re not long enough or don’t contain sufficient character types, in fact the passwords “1qazZAQ!” and “dallascowboys” were both matched so you’ve got four character types (even with a special character) and then a 13 character long password respectively. The thing is that they’re simply not random – they’ve obviously made appearances in password databases before.
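    The dictionary comparison amounts to a set-membership test. A hedged sketch (exact matching only, whereas real cracking dictionaries also include mangled variants):

```python
def dictionary_hits(passwords, wordlist):
    # How many breached passwords appear verbatim in the wordlist.
    words = set(wordlist)
    return sum(p in words for p in passwords)

hits = dictionary_hits(
    ["seinfeld", "xK9$vQ2!", "dallascowboys"],
    ["seinfeld", "dallascowboys", "letmein"],
)
print(hits)  # 2 of the 3 sample passwords are in the sample wordlist
```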

    Uniqueness

    This is the one that gets really interesting as it asks the question “are people creating unique passwords across multiple accounts?” The thing about this latest Sony exploit is that it included data from multiple apparently independent locations within the organisation and as we saw earlier on, the dump LulzSec provided consists of several different data sources.

    Of particular interest in those data sources are the “Beauty” and “Delboca” files as they contain almost all the accounts with a pretty even split between them. They also contain well over 2,000 accounts with the same email address, i.e. someone has registered on both databases.

    So how rampant is password reuse between these two systems? Let’s take a look:

    [​IMG]

    92% of passwords were reused across both systems. That’s a pretty damning indictment of the whole “unique password” mantra. Is the situation really this bad? Or are the figures skewed by folks perhaps thinking “Sony is Sony” and being a little relaxed with their reuse?
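    The reuse measurement boils down to joining the two files on email address and comparing passwords. A sketch with made-up records (the maps and addresses here are assumptions, not the dump's actual layout):

```python
# Hypothetical {email: password} maps built from the two breached files.
beauty = {"a@example.com": "seinfeld", "b@example.com": "purple"}
delboca = {"a@example.com": "seinfeld", "b@example.com": "Purple1!"}

shared = set(beauty) & set(delboca)            # emails present in both files
reused = sum(beauty[e] == delboca[e] for e in shared)
rate = reused / len(shared)                    # fraction reusing the same password
print(reused, rate)
```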

    Let’s make it really interesting and compare accounts against Gawker. The internet being what it is there will always be the full Gawker database floating around out there and a quick Google search easily discovers live torrents. Gnosis (the group behind the Gawker breach) was a bit more generous than LulzSec and provided over 188,000 accounts for us to take a look at.

    Although there were only 88 email addresses found in common with Sony (I had thought it might be a bit higher but then again, they’re pretty independent fields), the results are still very interesting:

    [​IMG]

    Two thirds of people with accounts at both Sony and Gawker reused their passwords. Now, I’m not sure how much crossover there was, timeframe-wise, in terms of when the Gawker accounts were created versus when the Sony ones were. It’s quite possible the Sony accounts came after the Gawker breach (remember this was six months ago now), and people got a little wise to the non-unique risk. But whichever way you look at it, there’s an awful lot of reuse going on here.

    What really strikes me in this case is that between these two systems we have a couple of hundred thousand email addresses, usernames (the Gawker dump included these) and passwords. Based on the finding above, there’s a statistically good chance that the majority of them will work with other websites. How many Gmail or eBay or Facebook accounts are we holding the keys to here? And of course “we” is a bit misleading because anyone can grab these off the net right now. Scary stuff.

    Putting it in an exploit context

    When an entire database is compromised and all the passwords are just sitting there in plain text, the only thing saving customers of the service is their password uniqueness. Forget about rainbow tables and brute force – we’ll come back to that – the one thing which stops the problem becoming any worse for them is that it’s the only place those credentials appear. Of course we know that both from the findings above and many other online examples, password reuse is the norm rather than the exception.

    But what if the passwords in the database were hashed? Not even salted, just hashed? How vulnerable would the passwords have been to a garden variety rainbow attack? It’s pretty easy to get your hands on a rainbow table of hashed passwords containing between one and nine lowercase and numeric characters (RainbowCrack is a good place to start), so how many of the Sony passwords would easily fall?

    [​IMG]

    82% of passwords would easily fall to a basic rainbow table attack. Not good, but you can see why the rainbow table approach can be so effective, not so much because of its ability to make smart use of the time-memory trade-off scenario, but simply because it only needs to work against a narrow character set of very limited length to achieve a high success rate.
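    Checking which passwords fall inside that search space is a simple pattern test. A sketch assuming the table covers one to nine characters of lowercase letters and digits, as described above:

```python
import re

def in_rainbow_space(password):
    # True if the password is 1-9 characters drawn only from
    # lowercase letters and digits.
    return re.fullmatch(r"[a-z0-9]{1,9}", password) is not None

print(in_rainbow_space("seinfeld"))       # True: 8 lowercase characters
print(in_rainbow_space("1qazZAQ!"))       # False: uppercase and a symbol
print(in_rainbow_space("dallascowboys"))  # False: lowercase, but 13 long
```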

    And if the passwords were salted before the hash is applied? Well, more than a third of the passwords were easily found in a common dictionary so it’s just a matter of having the compute power to brute force them and repeat the salt plus hash process. It may not be a trivial exercise, but there’s a very high probability of a significant portion of the passwords being exposed.
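    To make the salted case concrete, here is a sketch of a dictionary attack against salted hashes (the MD5-of-salt-plus-password scheme is an illustrative assumption, not what Sony or anyone else necessarily used):

```python
import hashlib

def salted_md5(password, salt):
    # Illustrative scheme: MD5 over salt + password.
    return hashlib.md5((salt + password).encode()).hexdigest()

def dictionary_crack(target_hash, salt, wordlist):
    # The salt is stored alongside the hash, so each candidate word
    # costs just one hash-and-compare.
    for word in wordlist:
        if salted_md5(word, salt) == target_hash:
            return word
    return None

stolen = salted_md5("seinfeld", "s0ny")  # what an attacker finds in the dump
print(dictionary_crack(stolen, "s0ny", ["password", "seinfeld"]))
```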

    Summary

    None of this is overly surprising, although it remains alarming. We know passwords are too short, too simple, too predictable and too much like the other ones the individual has created in other locations. The bit which did take me aback was the extent to which passwords conformed to very predictable patterns, namely only using alphanumeric characters, being 10 characters or less and having a much better than average chance of being the same as other passwords the user has created on totally independent systems.

    Sony has clearly screwed up big time here, no doubt. The usual process with these exploits is to berate the responsible organisation for only using MD5 or for not salting the password before hashing, but to not even attempt to obfuscate passwords and simply store them in the clear? Wow.

    But the bigger story here, at least to my eye, is that users continue to apply lousy password practices. Sony’s breach is Sony’s fault, no doubt, but a whole bunch of people have made the situation far worse than it needs to be through reuse. Next week when another Sony database is exposed (it’s a pretty safe bet based on recent form), even if an attempt has been made to secure passwords, there’s a damn good chance a significant portion of them will be exposed anyway. And that is simply the fault of the end users.

    Conclusion? Well, I’ll simply draw back to a previous post and say it again: The only secure password is the one you can’t remember.

    There are loads of pending articles ATM, I'll publish them whenever I'm free.
     
  13. Vyom

    Vyom waiting in Tomorrowland.. Staff Member

    Joined:
    May 16, 2009
    Messages:
    6,202
    Likes Received:
    35
    Trophy Points:
    48
    Location:
    Sometime in Delhi

    [​IMG]

    Time flies when you're having fun. But you're at work, and work sucks. So how is it 5:00 already?

    When we talk about "losing time," we aren't referring to that great night out, or that week of wonderful vacation, or the three-hour film that honestly didn't feel like more than an hour. No, when we fret about not having enough time, or wonder where exactly all those hours went, we're talking about mundane things. The workday. A lazy, unremarkable Sunday. Days when we gave time no apparent reason to fly, and it flew anyway.

    Why does that happen? And where did all the time go? The secret lies in your brain's ticking clock—an elusive, inexact, and easily ignorable clock.

    First of all, yes

    In understanding any complex issue, especially a psychological one, intuition doesn't usually get us too far. As often as you can scrabble together a theory about how the mind works, a man in a lab coat will adjust his glasses, tilt forward his brow, and deliver a carefully intoned, "Actually..."

    But not today. Most of what you think you know about the perception of time is true.

    Read More...
     
  14. OP
    sygeek

    sygeek Active Member

    Joined:
    Apr 16, 2010
    Messages:
    2,187
    Likes Received:
    23
    Trophy Points:
    38
    Location:
    Lucknow
    ^Nice post, though I've already read it a few months ago :).
     
  15. OP
    sygeek

    sygeek Active Member

    Joined:
    Apr 16, 2010
    Messages:
    2,187
    Likes Received:
    23
    Trophy Points:
    38
    Location:
    Lucknow
    The Internet Is My Religion



    Today, I was lucky enough to attend the second day of sessions at Personal Democracy Forum. I didn’t really know what I was getting myself into. As a social web / identity junkie, I was excited to see Vivek Kundra, Jay Rosen, Dan Gillmor, and Doc Searls. I hadn’t heard of many of the other presenters, including one whose talk would be the most inspiring I had ever seen on a live stage.

    As Jim Gilliam took the stage, his slightly nervous, ever-so-geeky sensibility betrayed no signs of the passion, earnestness, and magnificence with which he would deliver what can only be described as a modern epic: his life story.

    Watch it now:

    [​IMG]


    [Don't read on unless you have watched the video. The rest of this post probably won't make much sense.]


    Apologies for the long quote, but I find his closing words incredibly profound [my bolding]:
    The audience rose in a standing ovation, twice. A few of the reactions:
    As I walked back to the office from the Skirball Center this afternoon, I found myself thinking through what his message means to me, and why I was so moved by his words. Working at betaworks, I am confronted with and fascinated daily by the creative opportunities on the Web – opportunities to change the way that we connect, communicate, share, learn, discover, live, and grow. Technology is only as good as the people who wield it, so perhaps I’m a bit idealistic and naive in my boundless optimism, but I am consistently awestruck at the power of the Web as a creative force.

    I’m not a religious person, but I do believe there is something humbling about the act of creation – whether your form of creation is art, software, ideas, words, music – there is something about the act of creation that is worth striving for, worth sacrificing worth, worth living for. Regardless of your view of her politics, Ayn Rand spoke to this notion beautifully:
    The Web – at its simplest, an open and generally accessible medium for two-way connectivity – bridges creative energy irrespective of geography, socioeconomic status, field of study, and language. It enables and even encourages the collision of ideas, problem statements, inspirations, and solutions. As Steven Johnson offers in his fantastic book, Where Good Ideas Come From, “good ideas are not conjured out of thin air; they are built out of a collection of existing parts, the composition of which expands (and, occasionally, contracts) over time.” He might as well be describing the Web.

    The Internet is a medium capable of unlocking and combining the creative energies of Earth’s seven billion in a way never before imaginable. Through the near-infinite scale with which it powers human connectivity, the Internet has shown in just a few short years its ability to enable anything from a collection of the world’s information, to a revolution, to, in the case of Jim Gilliam, life itself.

    I’m so excited to be a small part of what can only be called a movement. I’m excited to build, I’m excited to change, and, perhaps most critically, I’m excited to defend.
     
    Last edited: Jun 17, 2011
  16. OP
    sygeek

    sygeek Active Member

    Joined:
    Apr 16, 2010
    Messages:
    2,187
    Likes Received:
    23
    Trophy Points:
    38
    Location:
    Lucknow
    Write​

    Yesterday was my 49th birthday. By fortuitous circumstance, I spotted an item on Hacker News explaining that reputation on Stack Overflow seems to rise with age. I don’t have very much Stack Overflow reputation, but I do have a little Hacker News karma and over the years I’ve written a few articles that made it to the front page of reddit, programming.reddit.com, and Hacker News.

    Somebody suggested that age was my secret for garnering reputation and writing well. I don’t think so. Here’s my secret, here’s what I think I do to get reputation, and what I think may work for you:

    Write.

    That’s it. That’s everything. Just write. If you need more words, the secret to internet reputation is to write more. If you aren’t writing now, start writing. If you are writing now, write more.

    Now some of you want more exposition, so for entertainment purposes only, I’ll explain why I think this is the case. But even if I’m wrong about why it’s the case, I’m sure I’m right that it is the case. So write.

    Now here’s why I think writing more is the right strategy. The wrong strategy is to write less often but increase the quality.

    This is a wrong strategy because it is based on a wrong assumption, namely that there’s a big tradeoff between quality and quantity. I agree that given more time, I can polish an essay. I can fix typos, tighten things up, clarify things. That’s very true, and if you are talking about the difference between one essay a day done well and three done poorly, I’ll buy that you are already writing enough if you write one a day, and you are better off getting the spelling right than writing two more unpolished essays.

    But in quantities of less than one essay a day or one essay a week, the choice between writing more essays and writing higher quality essays is a false dichotomy. In fact, you may find that practice writing improves your writing, so writing more often leads to writing with higher quality. You also get nearly instantaneous feedback on the Internet, so the more you write, the more you learn about what works and what doesn’t work when you write.

    Now that I’ve explained why I think writing less often is the wrong strategy, I will explain how writing for the Internet rewards writing more often. Writing on the Internet is nothing like writing on dead trees. For various legacy reasons, writing on dead trees involves writing books. The entire industry is built around long feedback cycles. It’s very expensive to get things wrong up front, so the process is optimized around doing it right the first time, with editors and proof-readers and what-not all conspiring to delay publishing your words where people can read them.

    Worse, the feedback loop is appalling. What are you supposed to do with a bad review on Amazon.com? Incorporate it into the second edition of your masterpiece?

    Speaking of masterpieces, that’s the other problem. Since books are what sell, if you want to write on dead trees, you have to write books. A book is a Big Thing, involving a lot of Planning. And structure. And organization. It demands a quality approach. Books are the “Big Design Up Front” poster children for writing.

    Essays, rants, opinions… If writing a book is Big Design Up Front, blogging and commenting are Cowboy Coding. A book is a Cathedral, a blog is a Bazaar. And in a good way! You get feedback faster. It’s the ultimate in Release Early, Release Often. You have an idea, you write it, you get feedback, you edit.

    I am unapologetic about editing my comments and essays. Some criticize me for retracting my words when faced with a good argument. I say, **** You, this is not a debate, this is a process for generating and refining good ideas. I lie, of course, I have never said that. I actually say “Thank You!” Or I try. When I fail to be gracious in accepting criticism, that is my failing. The process of releasing ideas and refining them in the spotlight is one I value and think is a win for everyone.

    Another problem with a book is that it’s One Big Thing. Very few book reviews say “Chapter two is a gem, buy the book for this and ignore chapter six, the author is confused.” Most just say “He’s an idiot, chapter six is proof of that.”

    A blog is not One Big Thing. Many people say my blog is worth reading. They are probably wrong: I have had many popular essays. But for every “hit,” I have had an outrageous number of misses. If you read everything I wrote starting in 2004 to now, you’d be amazed I get any work in this industry. What people mean is, my good stuff is worth reading.

    That’s the magic of the Internet. Thanks to Twitter and Hacker News and whatever else, if you write a good thing, it gets judged on its own. You can write 99 failures for every success, but you are judged by your best work, not your worst.

    And let me tell you something about my Best Work: I often think I am writing something Important, something They’ll Remember Me For. And it sinks without a trace. A recent essay on the value of planning and the unimportance of plans comes to mind.

    And then a day later I’ll dash off a rant based on a simple idea or insight, and the next thing I know it’s #1 on Hacker News. If I was writing a book, I’d do a terrible job, because my nose for what people want is broken. When I write essays, I don’t care, I write everything and I let Hacker News and Twitter sort out the wheat from my chaff.

    If you have a good nose, a great instinct, maybe you can write less. But even if you don’t, you write more and you crowd-source the nose for you. And thanks to the fine granularity of essays and the willingness of the crowd to ignore your misses and celebrate your hits, your reputation grows inexorably whenever you sit down and simply write.

    So write.

    (discuss)

    p.s. Here's an interesting counter-point.
     
  17. OP
    sygeek


    Some April morning last year I received a letter from the local police department, bureau of criminal investigation. “Whoops”, I thought. What could have happened there? Had I forgotten to pay a speeding ticket? I opened the letter. It said I was the main suspect in a case of “data destruction” and was supposed to visit the police department as soon as possible to give a statement.

    Wait. What is “data destruction”? Well, I had to translate it. I am from Austria, where there is a paragraph (§126a, StGB) that basically says the following: if you modify, delete or destroy data that is not yours, you may get a prison sentence of up to six months or a fine. There are probably similar laws in other countries.

    But how could I have done that? I wasn’t aware of any situation in which I could have deleted anyone’s data. I work as a sysadmin for a small consulting company, but it seemed implausible that they would charge me with the above-mentioned offense.

    What I supposedly did wrong

    So I went to the police department. I was terrified because I had absolutely no idea what I had done wrong. The police officer however was very friendly and asked me to take a seat. He wanted to know if I knew a person X from Tyrol. Of course I didn’t. That was more than 500 kilometers away. Turns out, I had supposedly hacked her Facebook profile.

    Here’s the summary of what I was being charged with:
    • Creating a fake e-mail address impersonating the victim
    • Using this e-mail address to hack into their Facebook account
    • Deleting all data from the Facebook profile and then changing the e-mail address and password
    • Deleting the fake e-mail address
    All that had happened one Sunday evening. I recall being at home with my girlfriend, watching TV. I like to keep a detailed schedule in my calendar, so I knew for certain. And I knew I was absolutely innocent. But how could they think it was me?

    How I became suspect

    Well, at that time I had an iPhone. I also had a mobile broadband contract with a major telephone company, let’s call them Company X. The police officer told me that upon investigation, they had positively identified the IP address from which the e-mail address had been created. It was the IP address assigned to my iPhone that evening.

    That seemed impossible. Several facts showed that I could never have done this:
    • We have no 3G reception in our apartment.
    • The e-mail address was deleted five minutes after being created. Nobody is that quick on an iPhone.
    • The e-mail provider doesn’t offer the feature to register an address on their mobile sites.
    You can’t change Facebook account details on their mobile interface either. I know, I could have used the non-mobile site, but I wouldn’t have been that fast.

    I told the police officer all of that. He said he understood and jotted down some notes. They would contact me, and I shouldn’t worry. At least he was on my side. But there I was, the main suspect in a case I never wanted to be part of. The real offender was still out there.

    What did I do next? I called the telephone company.

    Contacting the Telco

    Just like most of the time when you call your ISP/Telco, they don’t really care what you have to say. I probably talked to ten different people. Chances are you have more knowledge about computers and how the internet works than they do. That’s why it didn’t surprise me that I was told things like:

    • “That’s absolutely impossible”
    • “If they say it’s your IP, you’re guilty!”
    • “Let me get a supervisor” (hung up after a minute of elevator music)
    • “I really don’t know what this is all about”

    At that point I just gave up. I had already contacted a lawyer who was prepared to go to court with me if necessary. It didn’t help that, as a student without proper legal insurance, I had to pay him in advance just to get hold of the case files and take a look at them. I waited and waited, and then I got a phone call.

    How everything sorted itself out

    It was the legal department of the Telco. A lady was calling, and the first thing she did was to apologize deeply. She told me what had happened: normally, when the prosecutor asks for an IP address and the corresponding owner, they have to fill out a form containing both pieces of information, which is then sent to the authorities. In my case they had gotten the IP address from the e-mail provider, and the employee’s job was to match it against their records. The error could not have been simpler: she had swapped two digits in the IP address.
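    The failure mode is easy to demonstrate. The sketch below (in Python, using a made-up address rather than any address from the case) shows that transposing two digits yields a string that still parses as a perfectly valid IPv4 address, so an automated record lookup has nothing to flag; only a careful side-by-side comparison would catch it.

```python
import ipaddress

# Illustrative sketch only -- the address below is invented, not the one
# from the actual case. Swapping two adjacent digits in an IP address can
# produce a different but equally valid address, so a plain record lookup
# has no way to notice the transcription error.

def transpose(ip: str, i: int) -> str:
    """Return ip with the characters at positions i and i+1 swapped."""
    chars = list(ip)
    chars[i], chars[i + 1] = chars[i + 1], chars[i]
    return "".join(chars)

original = "86.32.154.77"          # hypothetical address from the provider
mistyped = transpose(original, 0)  # "68.32.154.77" -- someone else's address

# Both parse as valid IPv4 addresses, so nothing flags the mix-up.
ipaddress.ip_address(original)
ipaddress.ip_address(mistyped)
assert original != mistyped
```

    Which is one reason such lookups are safer as machine-to-machine transfers than as manual retyping of a number from one form into another.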

    As compensation they said I’d no longer have to pay the base fee – how generous! Luckily, they also agreed to pay my lawyer’s costs; I simply forwarded his invoice to them. I think they were just scared that I would take them to court for wrongfully incriminating me.

    A few weeks later the police officer contacted me. He confirmed that the real offender had been X’s ex-boyfriend, who probably just knew the password and wanted some payback.

    What we can learn from this

    What one can clearly see from such an example is that there are still holes in the security of current data-retention policies. While governments have an understandable interest in storing communication data to allow effective criminal prosecution, the following should not be forgotten: no matter how perfect a system is, there is always the possibility of a weak implementation. And once the human factor comes into play, we can’t rely on the principles of an automated system anymore (even if it were otherwise flawless). To err is human, it seems. Luckily, the error was cleared up in my case.

    So, should you ever get into a situation where you are wrongfully suspected, make sure to let people know that there is a possibility of an error, even if they tell you otherwise.
     
  18. nisargshah95

    nisargshah95 Your Ad here

    Joined:
    Feb 13, 2010
    Messages:
    425
    Likes Received:
    2
    Trophy Points:
    0
    :+1:. This one's good.
     
  19. Nipun

    Nipun Whompy Whomperson

    Joined:
    Mar 10, 2011
    Messages:
    1,514
    Likes Received:
    19
    Trophy Points:
    0
    Location:
    New Delhi
  20. OP
    sygeek

    By David Eagleman


    Advances in brain science are calling into question the volition behind many criminal acts. A leading neuroscientist describes how the foundations of our criminal-justice system are beginning to crumble, and proposes a new way forward for law and order.

    On the steamy first day of August 1966, Charles Whitman took an elevator to the top floor of the University of Texas Tower in Austin. The 25-year-old climbed the stairs to the observation deck, lugging with him a footlocker full of guns and ammunition. At the top, he killed a receptionist with the butt of his rifle. Two families of tourists came up the stairwell; he shot at them at point-blank range. Then he began to fire indiscriminately from the deck at people below. The first woman he shot was pregnant. As her boyfriend knelt to help her, Whitman shot him as well. He shot pedestrians in the street and an ambulance driver who came to rescue them.

    The evening before, Whitman had sat at his typewriter and composed a suicide note.

    By the time the police shot him dead, Whitman had killed 13 people and wounded 32 more. The story of his rampage dominated national headlines the next day. And when police went to investigate his home for clues, the story became even stranger: in the early hours of the morning on the day of the shooting, he had murdered his mother and stabbed his wife to death in her sleep.

    Along with the shock of the murders lay another, more hidden, surprise: the juxtaposition of his aberrant actions with his unremarkable personal life. Whitman was an Eagle Scout and a former marine, studied architectural engineering at the University of Texas, and briefly worked as a bank teller and volunteered as a scoutmaster for Austin’s Boy Scout Troop 5. As a child, he’d scored 138 on the Stanford-Binet IQ test, placing in the 99th percentile. So after his shooting spree from the University of Texas Tower, everyone wanted answers.

    For that matter, so did Whitman. He requested in his suicide note that an autopsy be performed to determine if something had changed in his brain—because he suspected it had.

    Whitman’s body was taken to the morgue, his skull was put under the bone saw, and the medical examiner lifted the brain from its vault. He discovered that Whitman’s brain harbored a tumor the diameter of a nickel. This tumor, called a glioblastoma, had blossomed from beneath a structure called the thalamus, impinged on the hypothalamus, and compressed a third region called the amygdala. The amygdala is involved in emotional regulation, especially of fear and aggression. By the late 1800s, researchers had discovered that damage to the amygdala caused emotional and social disturbances. In the 1930s, the researchers Heinrich Klüver and Paul Bucy demonstrated that damage to the amygdala in monkeys led to a constellation of symptoms, including lack of fear, blunting of emotion, and overreaction. Female monkeys with amygdala damage often neglected or physically abused their infants. In humans, activity in the amygdala increases when people are shown threatening faces, are put into frightening situations, or experience social phobias. Whitman’s intuition about himself—that something in his brain was changing his behavior—was spot-on.

    Stories like Whitman’s are not uncommon: legal cases involving brain damage crop up increasingly often. As we develop better technologies for probing the brain, we detect more problems, and link them more easily to aberrant behavior. Take the 2000 case of a 40-year-old man we’ll call Alex, whose sexual preferences suddenly began to transform. He developed an interest in child pornography—and not just a little interest, but an overwhelming one. He poured his time into child-pornography Web sites and magazines. He also solicited prostitution at a massage parlor, something he said he had never previously done. He reported later that he’d wanted to stop, but “the pleasure principle overrode” his restraint. He worked to hide his acts, but subtle sexual advances toward his prepubescent stepdaughter alarmed his wife, who soon discovered his collection of child pornography. He was removed from his house, found guilty of child molestation, and sentenced to rehabilitation in lieu of prison. In the rehabilitation program, he made inappropriate sexual advances toward the staff and other clients, and was expelled and routed toward prison.

    At the same time, Alex was complaining of worsening headaches. The night before he was to report for prison sentencing, he couldn’t stand the pain anymore, and took himself to the emergency room. He underwent a brain scan, which revealed a massive tumor in his orbitofrontal cortex. Neurosurgeons removed the tumor. Alex’s sexual appetite returned to normal.

    The year after the brain surgery, his pedophilic behavior began to return. The neuroradiologist discovered that a portion of the tumor had been missed in the surgery and was regrowing—and Alex went back under the knife. After the removal of the remaining tumor, his behavior again returned to normal.

    When your biology changes, so can your decision-making and your desires. The drives you take for granted (“I’m a heterosexual/homosexual,” “I’m attracted to children/adults,” “I’m aggressive/not aggressive,” and so on) depend on the intricate details of your neural machinery. Although acting on such drives is popularly thought to be a free choice, the most cursory examination of the evidence demonstrates the limits of that assumption.

    Alex’s sudden pedophilia illustrates that hidden drives and desires can lurk undetected behind the neural machinery of socialization. When the frontal lobes are compromised, people become disinhibited, and startling behaviors can emerge. Disinhibition is commonly seen in patients with frontotemporal dementia, a tragic disease in which the frontal and temporal lobes degenerate. With the loss of that brain tissue, patients lose the ability to control their hidden impulses. To the frustration of their loved ones, these patients violate social norms in endless ways: shoplifting in front of store managers, removing their clothes in public, running stop signs, breaking out in song at inappropriate times, eating food scraps found in public trash cans, being physically aggressive or sexually transgressive. Patients with frontotemporal dementia commonly end up in courtrooms, where their lawyers, doctors, and embarrassed adult children must explain to the judge that the violation was not the perpetrator’s fault, exactly: much of the brain has degenerated, and medicine offers no remedy. Fifty-seven percent of frontotemporal-dementia patients violate social norms, as compared with only 27 percent of Alzheimer’s patients.

    Changes in the balance of brain chemistry, even small ones, can also cause large and unexpected changes in behavior. Victims of Parkinson’s disease offer an example. In 2001, families and caretakers of Parkinson’s patients began to notice something strange. When patients were given a drug called pramipexole, some of them turned into gamblers. And not just casual gamblers, but pathological gamblers. These were people who had never gambled much before, and now they were flying off to Vegas. One 68-year-old man amassed losses of more than $200,000 in six months at a series of casinos. Some patients became consumed with Internet poker, racking up unpayable credit-card bills. For several, the new addiction reached beyond gambling, to compulsive eating, excessive alcohol consumption, and hypersexuality.

    What was going on? Parkinson’s involves the loss of brain cells that produce a neurotransmitter known as dopamine. Pramipexole works by impersonating dopamine. But it turns out that dopamine is a chemical doing double duty in the brain. Along with its role in motor commands, it also mediates the reward systems, guiding a person toward food, drink, mates, and other things useful for survival. Because of dopamine’s role in weighing the costs and benefits of decisions, imbalances in its levels can trigger gambling, overeating, and drug addiction—behaviors that result from a reward system gone awry. Physicians now watch for these behavioral changes as a possible side effect of drugs like pramipexole. Luckily, the negative effects of the drug are reversible—the physician simply lowers the dosage, and the compulsive gambling goes away.

    The lesson from all these stories is the same: human behavior cannot be separated from human biology. If we like to believe that people make free choices about their behavior (as in, “I don’t gamble, because I’m strong-willed”), cases like Alex the pedophile, the frontotemporal shoplifters, and the gambling Parkinson’s patients may encourage us to examine our views more carefully. Perhaps not everyone is equally “free” to make socially appropriate choices.

    Does the discovery of Charles Whitman’s brain tumor modify your feelings about the senseless murders he committed? Does it affect the sentence you would find appropriate for him, had he survived that day? Does the tumor change the degree to which you consider the killings “his fault”? Couldn’t you just as easily be unlucky enough to develop a tumor and lose control of your behavior?

    On the other hand, wouldn’t it be dangerous to conclude that people with a tumor are free of guilt, and that they should be let off the hook for their crimes?

    As our understanding of the human brain improves, juries are increasingly challenged with these sorts of questions. When a criminal stands in front of the judge’s bench today, the legal system wants to know whether he is blameworthy. Was it his fault, or his biology’s fault?

    I submit that this is the wrong question to be asking. The choices we make are inseparably yoked to our neural circuitry, and therefore we have no meaningful way to tease the two apart. The more we learn, the more the seemingly simple concept of blameworthiness becomes complicated, and the more the foundations of our legal system are strained.

    If I seem to be heading in an uncomfortable direction—toward letting criminals off the hook—please read on, because I’m going to show the logic of a new argument, piece by piece. The upshot is that we can build a legal system more deeply informed by science, in which we will continue to take criminals off the streets, but we will customize sentencing, leverage new opportunities for rehabilitation, and structure better incentives for good behavior. Discoveries in neuroscience suggest a new way forward for law and order—one that will lead to a more cost-effective, humane, and flexible system than the one we have today. When modern brain science is laid out clearly, it is difficult to justify how our legal system can continue to function without taking what we’ve learned into account.

    Many of us like to believe that all adults possess the same capacity to make sound choices. It’s a charitable idea, but demonstrably wrong. People’s brains are vastly different.

    Who you even have the possibility to be starts at conception. If you think genes don’t affect how people behave, consider this fact: if you are a carrier of a particular set of genes, the probability that you will commit a violent crime is four times as high as it would be if you lacked those genes. You’re three times as likely to commit robbery, five times as likely to commit aggravated assault, eight times as likely to be arrested for murder, and 13 times as likely to be arrested for a sexual offense. The overwhelming majority of prisoners carry these genes; 98.1 percent of death-row inmates do. These statistics alone indicate that we cannot presume that everyone is coming to the table equally equipped in terms of drives and behaviors.

    And this feeds into a larger lesson of biology: we are not the ones steering the boat of our behavior, at least not nearly as much as we believe. Who we are runs well below the surface of our conscious access, and the details reach back in time to before our birth, when the meeting of a sperm and an egg granted us certain attributes and not others. Who we can be starts with our molecular blueprints—a series of alien codes written in invisibly small strings of acids—well before we have anything to do with it. Each of us is, in part, a product of our inaccessible, microscopic history. By the way, as regards that dangerous set of genes, you’ve probably heard of them. They are summarized as the Y chromosome. If you’re a carrier, we call you a male.

    Genes are part of the story, but they’re not the whole story. We are likewise influenced by the environments in which we grow up. Substance abuse by a mother during pregnancy, maternal stress, and low birth weight all can influence how a baby will turn out as an adult. As a child grows, neglect, physical abuse, and head injury can impede mental development, as can the physical environment. (For example, the major public-health movement to eliminate lead-based paint grew out of an understanding that ingesting lead can cause brain damage, making children less intelligent and, in some cases, more impulsive and aggressive.) And every experience throughout our lives can modify genetic expression—activating certain genes or switching others off—which in turn can inaugurate new behaviors. In this way, genes and environments intertwine.

    When it comes to nature and nurture, the important point is that we choose neither one. We are each constructed from a genetic blueprint, and then born into a world of circumstances that we cannot control in our most-formative years. The complex interactions of genes and environment mean that all citizens—equal before the law—possess different perspectives, dissimilar personalities, and varied capacities for decision-making. The unique patterns of neurobiology inside each of our heads cannot qualify as choices; these are the cards we’re dealt.

    Because we did not choose the factors that affected the formation and structure of our brain, the concepts of free will and personal responsibility begin to sprout question marks. Is it meaningful to say that Alex made bad choices, even though his brain tumor was not his fault? Is it justifiable to say that the patients with frontotemporal dementia or Parkinson’s should be punished for their bad behavior?

    It is problematic to imagine yourself in the shoes of someone breaking the law and conclude, “Well, I wouldn’t have done that”—because if you weren’t exposed to in utero cocaine, lead poisoning, and physical abuse, and he was, then you and he are not directly comparable. You cannot walk a mile in his shoes.

    The legal system rests on the assumption that we are “practical reasoners,” a term of art that presumes, at bottom, the existence of free will. The idea is that we use conscious deliberation when deciding how to act—that is, in the absence of external duress, we make free decisions. This concept of the practical reasoner is intuitive but problematic.

    The existence of free will in human behavior is the subject of an ancient debate. Arguments in support of free will are typically based on direct subjective experience (“I feel like I made the decision to lift my finger just now”). But evaluating free will requires some nuance beyond our immediate intuitions.

    Consider a decision to move or speak. It feels as though free will leads you to stick out your tongue, or scrunch up your face, or call someone a name. But free will is not required to play any role in these acts. People with Tourette’s syndrome, for instance, suffer from involuntary movements and vocalizations. A typical Touretter may stick out his tongue, scrunch up his face, or call someone a name—all without choosing to do so.

    We immediately learn two things from the Tourette’s patient. First, actions can occur in the absence of free will. Second, the Tourette’s patient has no free won’t. He cannot use free will to override or control what subconscious parts of his brain have decided to do. What the lack of free will and the lack of free won’t have in common is the lack of “free.” Tourette’s syndrome provides a case in which the underlying neural machinery does its thing, and we all agree that the person is not responsible.

    This same phenomenon arises in people with a condition known as chorea, for whom actions of the hands, arms, legs, and face are involuntary, even though they certainly look voluntary: ask such a patient why she is moving her fingers up and down, and she will explain that she has no control over her hand. She cannot not do it. Similarly, some split-brain patients (who have had the two hemispheres of the brain surgically disconnected) develop alien-hand syndrome: while one hand buttons up a shirt, the other hand works to unbutton it. When one hand reaches for a pencil, the other bats it away. No matter how hard the patient tries, he cannot make his alien hand not do what it’s doing. The movements are not “his” to freely start or stop.

    Unconscious acts are not limited to unintended shouts or wayward hands; they can be surprisingly sophisticated. Consider Kenneth Parks, a 23-year-old Canadian with a wife, a five-month-old daughter, and a close relationship with his in-laws (his mother-in-law described him as a “gentle giant”). Suffering from financial difficulties, marital problems, and a gambling addiction, he made plans to go see his in-laws to talk about his troubles.

    In the wee hours of May 23, 1987, Kenneth arose from the couch on which he had fallen asleep, but he did not awaken. Sleepwalking, he climbed into his car and drove the 14 miles to his in-laws’ home. He broke in, stabbed his mother-in-law to death, and assaulted his father-in-law, who survived. Afterward, he drove himself to the police station. Once there, he said, “I think I have killed some people … My hands,” realizing for the first time that his own hands were severely cut.

    Over the next year, Kenneth’s testimony was remarkably consistent, even in the face of attempts to lead him astray: he remembered nothing of the incident. Moreover, while all parties agreed that Kenneth had undoubtedly committed the murder, they also agreed that he had no motive. His defense attorneys argued that this was a case of killing while sleepwalking, known as homicidal somnambulism.

    Although critics cried “Faker!,” sleepwalking is a verifiable phenomenon. On May 25, 1988, after lengthy consideration of electrical recordings from Kenneth’s brain, the jury concluded that his actions had indeed been involuntary, and declared him not guilty.

    As with Tourette’s sufferers, split-brain patients, and those with choreic movements, Kenneth’s case illustrates that high-level behaviors can take place in the absence of free will. Like your heartbeat, breathing, blinking, and swallowing, even your mental machinery can run on autopilot. The crux of the question is whether all of your actions are fundamentally on autopilot or whether some little bit of you is “free” to choose, independent of the rules of biology.

    This has always been the sticking point for philosophers and scientists alike. After all, there is no spot in the brain that is not densely interconnected with—and driven by—other brain parts. And that suggests that no part is independent and therefore “free.” In modern science, it is difficult to find the gap into which to slip free will—the uncaused causer—because there seems to be no part of the machinery that does not follow in a causal relationship from the other parts.

    Free will may exist (it may simply be beyond our current science), but one thing seems clear: if free will does exist, it has little room in which to operate. It can at best be a small factor riding on top of vast neural networks shaped by genes and environment. In fact, free will may end up being so small that we eventually think about bad decision-making in the same way we think about any physical process, such as diabetes or lung disease.

    The study of brains and behaviors is in the midst of a conceptual shift. Historically, clinicians and lawyers have agreed on an intuitive distinction between neurological disorders (“brain problems”) and psychiatric disorders (“mind problems”). As recently as a century ago, a common approach was to get psychiatric patients to “toughen up,” through deprivation, pleading, or torture. Not surprisingly, this approach was medically fruitless. After all, while psychiatric disorders tend to be the product of more-subtle forms of brain pathology, they, too, are based in the biological details of the brain.

    What accounts for the shift from blame to biology? Perhaps the largest driving force is the effectiveness of pharmaceutical treatments. No amount of threatening will chase away depression, but a little pill called fluoxetine often does the trick. Schizophrenic symptoms cannot be overcome by exorcism, but they can be controlled by risperidone. Mania responds not to talk or to ostracism, but to lithium. These successes, most of them introduced in the past 60 years, have underscored the idea that calling some disorders “brain problems” while consigning others to the ineffable realm of “the psychic” does not make sense. Instead, we have begun to approach mental problems in the same way we might approach a broken leg. The neuroscientist Robert Sapolsky invites us to contemplate this conceptual shift.

    Acts cannot be understood separately from the biology of the actors—and this recognition has legal implications, a point Tom Bingham, Britain’s former senior law lord, also made.

    The more we discover about the circuitry of the brain, the more we tip away from accusations of indulgence, lack of motivation, and poor discipline—and toward the details of biology. The shift from blame to science reflects our modern understanding that our perceptions and behaviors are steered by deeply embedded neural programs.

    Imagine a spectrum of culpability. On one end, we find people like Alex the pedophile, or a patient with frontotemporal dementia who exposes himself in public. In the eyes of the judge and jury, these are people who suffered brain damage at the hands of fate and did not choose their neural situation. On the other end of the spectrum—the blameworthy side of the “fault” line—we find the common criminal, whose brain receives little study, and about whom our current technology might be able to say little anyway. The overwhelming majority of lawbreakers are on this side of the line, because they don’t have any obvious, measurable biological problems. They are simply thought of as freely choosing actors.

    Such a spectrum captures the common intuition that juries hold regarding blameworthiness. But there is a deep problem with this intuition. Technology will continue to improve, and as we grow better at measuring problems in the brain, the fault line will drift into the territory of people we currently hold fully accountable for their crimes. Problems that are now opaque will open up to examination by new techniques, and we may someday find that many types of bad behavior have a basic biological explanation—as has happened with schizophrenia, epilepsy, depression, and mania.

    Today, neuroimaging is a crude technology, unable to explain the details of individual behavior. We can detect only large-scale problems, but within the coming decades, we will be able to detect patterns at unimaginably small levels of the microcircuitry that correlate with behavioral problems. Neuroscience will be better able to say why people are predisposed to act the way they do. As we become more skilled at specifying how behavior results from the microscopic details of the brain, more defense lawyers will point to biological mitigators of guilt, and more juries will place defendants on the not-blameworthy side of the line.

    This puts us in a strange situation. After all, a just legal system cannot define culpability simply by the limitations of current technology. Expert medical testimony generally reflects only whether we yet have names and measurements for a problem, not whether a problem exists. A legal system that declares a person culpable at the beginning of a decade and not culpable at the end is one in which culpability carries no clear meaning.

    The crux of the problem is that it no longer makes sense to ask, “To what extent was it his biology, and to what extent was it him?,” because we now understand that there is no meaningful distinction between a person’s biology and his decision-making. They are inseparable.

    While our current style of punishment rests on a bedrock of personal volition and blame, our modern understanding of the brain suggests a different approach. Blameworthiness should be removed from the legal argot. It is a backward-looking concept that demands the impossible task of untangling the hopelessly complex web of genetics and environment that constructs the trajectory of a human life.

    Instead of debating culpability, we should focus on what to do, moving forward, with an accused lawbreaker. I suggest that the legal system has to become forward-looking, primarily because it can no longer hope to do otherwise. As science complicates the question of culpability, our legal and social policy will need to shift toward a different set of questions: How is a person likely to behave in the future? Are criminal actions likely to be repeated? Can this person be helped toward pro-social behavior? How can incentives be realistically structured to deter crime?

    The important change will be in the way we respond to the vast range of criminal acts. Biological explanation will not exculpate criminals; we will still remove from the streets lawbreakers who prove overaggressive, underempathetic, and poor at controlling their impulses. Consider, for example, that the majority of known serial killers were abused as children. Does this make them less blameworthy? Who cares? It’s the wrong question. The knowledge that they were abused encourages us to support social programs to prevent child abuse, but it does nothing to change the way we deal with the particular serial murderer standing in front of the bench. We still need to keep him off the streets, irrespective of his past misfortunes. The child abuse cannot serve as an excuse to let him go; the judge must keep society safe.

    Those who break social contracts need to be confined, but in this framework, the future is more important than the past. Deeper biological insight into behavior will foster a better understanding of recidivism—and this offers a basis for empirically based sentencing. Some people will need to be taken off the streets for a longer time (even a lifetime), because their likelihood of reoffense is high; others, because of differences in neural constitution, are less likely to recidivate, and so can be released sooner.

    The law is already forward-looking in some respects: consider the leniency afforded a crime of passion versus a premeditated murder. Those who commit the former are less likely to recidivate than those who commit the latter, and their sentences sensibly reflect that. Likewise, American law draws a bright line between criminal acts committed by minors and those by adults, punishing the latter more harshly. This approach may be crude, but the intuition behind it is sound: adolescents command lesser skills in decision-making and impulse control than do adults; a teenager’s brain is simply not like an adult’s brain. Lighter sentences are appropriate for those whose impulse control is likely to improve naturally as adolescence gives way to adulthood.

    Taking a more scientific approach to sentencing, case by case, could move us beyond these limited examples. For instance, important changes are happening in the sentencing of sex offenders. In the past, researchers have asked psychiatrists and parole-board members how likely specific sex offenders were to relapse when let out of prison. Both groups had experience with sex offenders, so predicting who was going straight and who was coming back seemed simple. But surprisingly, the expert guesses showed almost no correlation with the actual outcomes. The psychiatrists and parole-board members had only slightly better predictive accuracy than coin-flippers. This astounded the legal community.

    So researchers tried a more actuarial approach. They set about recording dozens of characteristics of some 23,000 released sex offenders: whether the offender had unstable employment, had been sexually abused as a child, was addicted to drugs, showed remorse, had deviant sexual interests, and so on. Researchers then tracked the offenders for an average of five years after release to see who wound up back in prison. At the end of the study, they computed which factors best explained the reoffense rates, and from these and later data they were able to build actuarial tables to be used in sentencing.

    Which factors mattered? Take, for instance, low remorse, denial of the crime, and sexual abuse as a child. You might guess that these factors would correlate with sex offenders’ recidivism. But you would be wrong: those factors offer no predictive power. How about antisocial personality disorder and failure to complete treatment? These offer somewhat more predictive power. But among the strongest predictors of recidivism are prior sexual offenses and sexual interest in children. When you compare the predictive power of the actuarial approach with that of the parole boards and psychiatrists, there is no contest: numbers beat intuition. In courtrooms across the nation, these actuarial tests are now used in presentencing to modulate the length of prison terms.
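    To make the actuarial idea concrete, here is a minimal sketch of how such a weighted checklist works. The factor names echo the article, but the weights and the example profile are invented for illustration; real instruments derive their weights from outcome data on thousands of released offenders.

```python
# Toy actuarial risk score: a weighted checklist of predictors.
# Weights here are hypothetical, chosen only to mirror the article's
# point that some intuitive factors carry no predictive power.

WEIGHTS = {
    "prior_sexual_offenses": 3,            # among the strongest predictors
    "sexual_interest_in_children": 3,      # among the strongest predictors
    "antisocial_personality_disorder": 1,  # somewhat predictive
    "failed_treatment": 1,                 # somewhat predictive
    "low_remorse": 0,                      # no predictive power
    "denial_of_crime": 0,                  # no predictive power
    "abused_as_child": 0,                  # no predictive power
}

def risk_score(profile):
    """Sum the weights of the factors present in a profile."""
    return sum(WEIGHTS[factor] for factor, present in profile.items() if present)

profile = {
    "prior_sexual_offenses": True,
    "sexual_interest_in_children": False,
    "antisocial_personality_disorder": True,
    "failed_treatment": False,
    "low_remorse": True,
    "denial_of_crime": True,
    "abused_as_child": True,
}
print(risk_score(profile))  # prints 4: only the weighted factors contribute
```

    Note that the profile above ticks three intuitively damning boxes (low remorse, denial, childhood abuse) that add nothing to the score — which is exactly why the tables outperformed expert intuition.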

    We will never know with certainty what someone will do upon release from prison, because real life is complicated. But greater predictive power is hidden in the numbers than people generally expect. Statistically based sentencing is imperfect, but it nonetheless allows evidence to trump folk intuition, and it offers customization in place of the blunt guidelines that the legal system typically employs. The current actuarial approaches do not require a deep understanding of genes or brain chemistry, but as we introduce more science into these measures—for example, with neuroimaging studies—the predictive power will only improve. (To make such a system immune to government abuse, the data and equations that compose the sentencing guidelines must be transparent and available online for anyone to verify.)

    Beyond customized sentencing, a forward-thinking legal system informed by scientific insights into the brain will enable us to stop treating prison as a one-size-fits-all solution. To be clear, I’m not opposed to incarceration, and its purpose is not limited to the removal of dangerous people from the streets. The prospect of incarceration deters many crimes, and time actually spent in prison can steer some people away from further criminal acts upon their release. But that works only for those whose brains function normally. The problem is that prisons have become our de facto mental-health-care institutions—and inflicting punishment on the mentally ill usually has little influence on their future behavior. An encouraging trend is the establishment of mental-health courts around the nation: through such courts, people with mental illnesses can be helped while confined in a tailored environment. Cities such as Richmond, Virginia, are moving in this direction, for reasons of justice as well as cost-effectiveness. Sheriff C. T. Woody, who estimates that nearly 20 percent of Richmond’s prisoners are mentally ill, told CBS News, “The jail isn’t a place for them. They should be in a mental-health facility.” Similarly, many jurisdictions are opening drug courts and developing alternative sentences; they have realized that prisons are not as useful for solving addictions as are meaningful drug-rehabilitation programs.

    A forward-thinking legal system will also parlay biological understanding into customized rehabilitation, viewing criminal behavior the way we understand other medical conditions such as epilepsy, schizophrenia, and depression—conditions that now allow the seeking and giving of help. These and other brain disorders find themselves on the not-blameworthy side of the fault line, where they are now recognized as biological, not demonic, issues.

    Many people recognize the long-term cost-effectiveness of rehabilitating offenders instead of packing them into overcrowded prisons. The challenge has been the dearth of new ideas about how to rehabilitate them. A better understanding of the brain offers new ideas. For example, poor impulse control is characteristic of many prisoners. These people generally can express the difference between right and wrong actions, and they understand the disadvantages of punishment—but they are handicapped by poor control of their impulses. Whether as a result of anger or temptation, their actions override reasoned consideration of the future.

    If it seems difficult to empathize with people who have poor impulse control, just think of all the things you succumb to against your better judgment. Alcohol? Chocolate cake? Television? It’s not that we don’t know what’s best for us, it’s simply that the frontal-lobe circuits representing long-term considerations can’t always win against short-term desire when temptation is in front of us.

    With this understanding in mind, we can modify the justice system in several ways. One approach, advocated by Mark A. R. Kleiman, a professor of public policy at UCLA, is to ramp up the certainty and swiftness of punishment—for instance, by requiring drug offenders to undergo twice-weekly drug testing, with automatic, immediate consequences for failure—thereby not relying on distant abstraction alone. Similarly, economists have suggested that the drop in crime since the early 1990s has been due, in part, to the increased presence of police on the streets: their visibility shores up support for the parts of the brain that weigh long-term consequences.

    We may be on the cusp of finding new rehabilitative strategies as well, affording people better control of their behavior, even in the absence of external authority. To help a citizen reintegrate into society, the ethical goal is to change him as little as possible while bringing his behavior into line with society’s needs. My colleagues and I are proposing a new approach, one that grows from the understanding that the brain operates like a team of rivals, with different neural populations competing to control the single output channel of behavior. Because it’s a competition, the outcome can be tipped. I call the approach “the prefrontal workout.”

    The basic idea is to give the frontal lobes practice in squelching the short-term brain circuits. To this end, my colleagues Stephen LaConte and Pearl Chiu have begun providing real-time feedback to people during brain scanning. Imagine that you’d like to quit smoking cigarettes. In this experiment, you look at pictures of cigarettes during brain imaging, and the experimenters measure which regions of your brain are involved in the craving. Then they show you the activity in those networks, represented by a vertical bar on a computer screen, while you look at more cigarette pictures. The bar acts as a thermometer for your craving: if your craving networks are revving high, the bar is high; if you’re suppressing your craving, the bar is low. Your job is to make the bar go down. Perhaps you have insight into what you’re doing to resist the craving; perhaps the mechanism is inaccessible. In any case, you try out different mental avenues until the bar begins to slowly sink. When it goes all the way down, that means you’ve successfully recruited frontal circuitry to squelch the activity in the networks involved in impulsive craving. The goal is for the long term to trump the short term. Still looking at pictures of cigarettes, you practice making the bar go down over and over, until you’ve strengthened those frontal circuits. By this method, you’re able to visualize the activity in the parts of your brain that need modulation, and you can witness the effects of different mental approaches you might take.

    If this sounds like biofeedback from the 1970s, it is—but this time with vastly more sophistication, monitoring specific networks inside the head rather than a single electrode on the skin. This research is just beginning, so the method’s efficacy is not yet known—but if it works well, it will be a game changer. We will be able to take it to the incarcerated population, especially those approaching release, to try to help them avoid coming back through the revolving prison doors.

    This prefrontal workout is designed to better balance the debate between the long- and short-term parties of the brain, giving the option of reflection before action to those who lack it. And really, that’s all maturation is. The main difference between teenage and adult brains is the development of the frontal lobes. The human prefrontal cortex does not fully develop until the early 20s, and this fact underlies the impulsive behavior of teenagers. The frontal lobes are sometimes called the organ of socialization, because becoming socialized largely involves developing the circuitry to squelch our first impulses.

    This explains why damage to the frontal lobes unmasks unsocialized behavior that we would never have thought was hidden inside us. Recall the patients with frontotemporal dementia who shoplift, expose themselves, and burst into song at inappropriate times. The networks for those behaviors have been lurking under the surface all along, but they’ve been masked by normally functioning frontal lobes. The same sort of unmasking happens in people who go out and get rip-roaring drunk on a Saturday night: they’re disinhibiting normal frontal-lobe function and letting more-impulsive networks climb onto the main stage. After training at the prefrontal gym, a person might still crave a cigarette, but he’ll know how to beat the craving instead of letting it win. It’s not that we don’t want to enjoy our impulsive thoughts (Mmm, cake), it’s merely that we want to endow the frontal cortex with some control over whether we act upon them (I’ll pass). Similarly, if a person thinks about committing a criminal act, that’s permissible as long as he doesn’t take action.

    For the pedophile, we cannot hope to control whether he is attracted to children. That he never acts on the attraction may be the best we can hope for, especially as a society that respects individual rights and freedom of thought. Social policy can hope only to prevent impulsive thoughts from tipping into behavior without reflection. The goal is to give more control to the neural populations that care about long-term consequences—to inhibit impulsivity, to encourage reflection. If a person thinks about long-term consequences and still decides to move forward with an illegal act, then we’ll respond accordingly. The prefrontal workout leaves the brain intact—no drugs or surgery—and uses the natural mechanisms of brain plasticity to help the brain help itself. It’s a tune-up rather than a product recall.

    We have hope that this approach represents the correct model: it is grounded simultaneously in biology and in libertarian ethics, allowing a person to help himself by improving his long-term decision-making. Like any scientific attempt, it could fail for any number of unforeseen reasons. But at least we have reached a point where we can develop new ideas rather than assuming that repeated incarceration is the single practical solution for deterring crime.

    Along any axis that we use to measure human beings, we discover a wide-ranging distribution, whether in empathy, intelligence, impulse control, or aggression. People are not created equal. Although this variability is often imagined to be best swept under the rug, it is in fact the engine of evolution. In each generation, nature tries out as many varieties as it can produce, along all available dimensions.

    Variation gives rise to lushly diverse societies—but it serves as a source of trouble for the legal system, which is largely built on the premise that humans are all equal before the law. This myth of human equality suggests that people are equally capable of controlling impulses, making decisions, and comprehending consequences. While admirable in spirit, the notion of neural equality is simply not true.

    As brain science improves, we will better understand that people exist along continua of capabilities, rather than in simplistic categories. And we will be better able to tailor sentencing and rehabilitation for the individual, rather than maintain the pretense that all brains respond identically to complex challenges and that all people therefore deserve the same punishments. Some people wonder whether it’s unfair to take a scientific approach to sentencing—after all, where’s the humanity in that? But what’s the alternative? As it stands now, ugly people receive longer sentences than attractive people; psychiatrists have no capacity to guess which sex offenders will reoffend; and our prisons are overcrowded with drug addicts and the mentally ill, both of whom could be better helped by rehabilitation. So is current sentencing really superior to a scientifically informed approach?

    Neuroscience is beginning to touch on questions that were once only in the domain of philosophers and psychologists, questions about how people make decisions and the degree to which those decisions are truly “free.” These are not idle questions. Ultimately, they will shape the future of legal theory and create a more biologically informed jurisprudence.