Showing posts from October, 2008

GMail Labs and the Shotgun of Innovation

For those of you old enough to recall the Great Google Takeover of the early days of this new millennium, you will recall one exciting day: the day Google Labs was released. It was an exciting time - for a corporation to exercise such transparency, teasing its users with a great slew of tools to come. Many of these have since become "graduates," tools we have incorporated into our lives so thoroughly that it's hard to remember how we ever got to John's house party before the aid of Google Maps. More recently, in this past year, Google's GMail team took a similar approach with GMail Labs. Remembering the day I first saw Google Labs, I immediately went to check out GMail Labs, expecting a vast array of useful and innovative tools that would improve my emailing experience. And while I wasn't entirely disappointed with the features I saw, I expected much, much more. But I'm not here to review the Lab tools. Instead, I'm commenting on the bizarre…

Artificial Intelligence in Nature

When you think of Artificial Intelligence, what comes to mind? Is it Spielberg's 2001 movie? Is it The Matrix? Is it your computer beating you at chess? Or the characters within a video game who clearly have no form of intelligence, as they continuously fire at the walls? Whichever way you think of artificial intelligence, there's pretty much one commonality - electronics. Computers are thought to be a series of wires and connections, all made to work through the magic of electricity. And there's no reason for you to think otherwise, because after all, that's how it's always been. Take a look at your computer... consider: if we had a really good artificial intelligence program available, and it was running on your computer, where would the consciousness lie? Somewhere inside the inner workings of that box, no doubt. And because that box looks somewhat utilitarian (unless you have a shiny, glossy Mac), because that box can remind you in some ways of HAL, you may feel a little…

Intelligently Passing the Turing Test

The Turing Test has long been the standard test for determining whether a machine is performing intelligently. The experiment is as follows: a human has a conversation with a hidden computer through some sort of interface. He doesn't know beforehand whether he is conversing with a computer or a human. If the human, by means of conversation, cannot determine whether the thing behind the wall is a computer or a human, then we've successfully programmed artificial intelligence. That is, the computer is functioning so normally - normally, that is, as a human would function - that it isn't possible to see it as anything but human. The pitfall of the Turing Test is that it automatically assumes the intelligence of the human - and the intelligence of the conversations two humans can have. I'm sure, particularly when it comes to instant messaging, we've all come across humans we just don't understand. Humans who are, in fact, more robotic than our own computers…
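The setup described above can be sketched as a simple blind-judgment loop. This is a minimal illustration, not anything from the original post: the interlocutor classes, their canned replies, and the guessing judge are all hypothetical stand-ins, chosen only to show the structure of the test.

```python
import random

# Sketch of the Turing Test protocol: a judge converses blindly with a
# hidden interlocutor and must decide, from the transcript alone, whether
# it is a human or a machine. Names and replies here are hypothetical.

class Human:
    is_machine = False
    def reply(self, prompt):
        return f"Hmm, '{prompt}'? Let me think about that."

class Chatbot:
    is_machine = True
    def reply(self, prompt):
        # Deliberately mimics the human's phrasing, making the two
        # indistinguishable by conversation alone.
        return f"Hmm, '{prompt}'? Let me think about that."

def run_turing_test(judge, rounds=3):
    """The judge sees only text; the interlocutor's identity stays hidden."""
    hidden = random.choice([Human(), Chatbot()])
    prompts = ["Hello!", "What's the weather like?", "Tell me a joke."]
    transcript = [(p, hidden.reply(p)) for p in prompts[:rounds]]
    verdict_is_machine = judge(transcript)  # decided from text alone
    return verdict_is_machine == hidden.is_machine

# If the replies are indistinguishable, the judge can do no better than
# guess - which is exactly the point of the test.
guessing_judge = lambda transcript: random.choice([True, False])
result = run_turing_test(guessing_judge)
print(result)
```

The essay's objection maps directly onto this sketch: the protocol silently assumes the judge (and the human benchmark) converses intelligently in the first place.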
