By now most people working in the tech sector have heard about the bug in Apple’s implementation of SSL/TLS. What you may not know is that the bug has apparently been in the code since 2012, and that the code has been sitting in plain view as part of Apple’s open source releases. And it’s not in some obscure, deeply buried file. It’s in a file called sslKeyExchange.c, which is only about 2,000 lines long.

http://opensource.apple.com/source/Security/Security-55471/libsecurity_ssl/lib/sslKeyExchange.c

Over the years we’ve been led to believe that security through obscurity is bad, and that the best way to guarantee robust, bulletproof security is to publish everything out in the open so that the eyes of many experts can review it and quickly find holes, which will just as quickly get plugged.

So what went wrong? I’ll leave the answer to the open source proponents, but I have another embarrassing question to ask.

The code in question is shown below – a duplicated goto fail that executes unconditionally. It jumps to the fail label with err still set to 0 from the last successful hash update, skipping the actual signature verification entirely, so the function returns success for a signature that was never checked.

Shouldn’t the compiler have issued an “unreachable code” warning? I see that warning all the time when I temporarily put in statements of any kind to step around code during debugging. It’s possible that someone missed the extra goto when looking at the source (although it literally jumps off the page to me), but how is it possible that in a completely critical piece of code like this, nobody read the compiler warnings and checked out what unreachable code was being skipped?

    if ((err = SSLHashSHA1.update(&hashCtx, &signedParams)) != 0)
        goto fail;
        goto fail;
/* My note - everything between here and the fail label should have been flagged in an unreachable code warning */
    if ((err = SSLHashSHA1.final(&hashCtx, &hashOut)) != 0)
        goto fail;

    err = sslRawVerify(ctx,
                       ctx->peerPubKey,
                       dataToSign,          /* plaintext */
                       dataToSignLen,       /* plaintext length */
                       signature,
                       signatureLen);
    if (err) {
        sslErrorLog("SSLDecodeSignedServerKeyExchange: sslRawVerify "
                    "returned %d\n", (int)err);
        goto fail;
    }

fail:
    SSLFreeBuffer(&signedHashes);
    SSLFreeBuffer(&hashCtx);
    return err;
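
For anyone who wants to see that warning in action, here’s a minimal toy reproduction of the pattern (my own throwaway file and function names, not Apple’s code). Compiling it with Clang’s -Wunreachable-code flag points right at the skipped line – though notably, that warning is not part of -Wall, which may be a big part of the answer to my question:

    /* unreachable.c – a toy reproduction of the duplicated goto pattern.
       Build with: clang -Wunreachable-code unreachable.c && ./a.out */
    #include <stdio.h>

    static int check(int a, int b)
    {
        int err = 0;
        if ((err = a) != 0)
            goto fail;
            goto fail;      /* always taken - everything below is skipped */
        if ((err = b) != 0) /* clang: warning: code will never be executed */
            goto fail;
        err = -1;           /* stands in for the verification that never runs */
    fail:
        return err;
    }

    int main(void)
    {
        /* prints 0 - "success" even though the second check never ran */
        printf("%d\n", check(0, 0));
        return 0;
    }

As far as I know, GCC would have been no help here: its -Wunreachable-code option has been accepted but ignored since around GCC 4.5, so only the Clang flag (or a static analyzer) catches this.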

I see a lot of posts about automated or self-driving cars in my LinkedIn stream, probably because of all the QNX people I follow. One of the main areas the QNX operating system is used in is automotive systems, so that’s not surprising. And Google is also frequently in the news for their self-driving cars, so the topic is really starting to get a lot of attention.

I have no doubt that we’ll eventually have cars that can drive themselves. We’ve seen them demonstrated, and it’s a really cool idea, but I still see a lot of problems with them.

The biggest problem I see is that they will need to deal with unpredictable humans who don’t follow the rules of the road. If we program self-driving cars to ‘believe’ the turn signals on other cars, what will they do when they meet the stereotypical Sunday driver who forgets to turn off their turn signal?

What will they do when the car signals a lane change, but a rude driver won’t let them in?

I think it will be a long time before we allow computer-driven cars without a human override. What I think makes sense, and what is already happening, is that our cars will first incorporate safety overrides for the things we humans can’t do quickly, effectively or reliably enough ourselves. We’ll let them take over driving for us once there is no doubt that we’re in a situation we can’t deal with.

Call it ‘augmented driving’.

Two examples of that kind of thing are airbags and anti-lock brakes. Bracing for a collision is something we have always done, and an airbag is just a fancy robotic collision-bracing system that auto-deploys (hopefully) only when needed. Pumping the brakes manually in slippery conditions is something we’ve always done. An anti-lock braking system just does it better than we ever could, and only when needed.

So I think the evolution from here to fully self-driving cars will be very long, and will take place in incremental steps like that. We will start by instrumenting everything that’s important so that the cars can sense when important things happen, and we will equip them with specialized information displays, annunciators, actuators and software to take over at the right time and override us.

The rear cameras that warn people of obstacles when they are backing up are a great example of that, and so is the parallel-parking assistance being deployed in some cars. Collision-avoidance radar will eventually be turned into a system that actively veers to avoid a collision for you, if someone hasn’t already done that.

I’m personally trying to promote the idea that smart cars should sense when some living thing has been left in a hot car during a heatwave and actually do something to prevent harm. That’s pretty low-hanging fruit.

There is plenty more of that kind of assistive technology coming down the pipe very soon, but it’s a co-pilot we’re building in now, not a pilot.

It’s likely that for a while there will be special cases where roads are built for the exclusive use of completely self-driving cars, and we may develop an infrastructure for commercial traffic that uses them. We will need ‘smart roads’ for the concept of self-driving cars to be fully realized, and it will take a long time for them to be built. It will likely start out in small islands of experimentation before it gets rolled out and adopted for all roads.

In the meantime, we will continue to augment the reality within human-piloted vehicles with more and more information systems and technology. Big data will probably be used to collect information about how people in the real world actually drive and run it through some fancy AI – maybe IBM’s Watson – to come up with some kind of neural-net ‘driving brain’ that could allow a machine to drive in much the same way that a typical human does, even on regular non-smart roads.

I just hope we don’t train them to leave their blinkers on.

I was looking at my résumé and trying to figure out how to simplify and organize it better and it struck me that the most important thing on it is not the list of things I have done or could do, but the list of things that I want to do – the things that interest me.

A recruiter or a potential client certainly isn’t going to type in a laundry list of skills to find the right candidate. They’re going to look for people who are interested in and passionate about the kind of work that’s needed to solve a problem, and figure out from there whether the person can fit the role.

So for any given person, the ideal job is to be at the intersection of interesting things, and that’s different for each of us. Once I started thinking along those lines, cleaning up my cluttered résumé was easy. I can learn practically any technical skill that would be needed to do any job I would want to do, so there’s no reason to make the skills the main highlight.

And once I started making the areas I’m passionate about the center of attention, I was able to list things that I would love to do, and am certainly capable of doing, but have never done for lack of an opportunity.

I know from a recent experience as volunteer Producer for the 48 Hour Film Project in Ottawa that passion can be a much more important criterion than experience. I was fortunate to recruit a Twitter acquaintance named Kim Doel to help, and she has proven invaluable despite having very little experience coordinating events.

She’s a natural.

My role as City Producer is another example, in fact. I thought it would be interesting to make the 48 Hour Film Project happen here in Ottawa, and I’ll be darned if it’s not going to happen.

So here’s wishing any of you reading this that you always be at the intersection of interesting things, whatever that happens to mean to you.

So after my post about wearable computing the other day, a colleague from my time at QNX who had read it got in touch and asked if I wanted to try Google Glass myself. I was thrilled! Someone had actually read my post!

And yes.. it was a pretty cool chance to see Google Glass in person and try it out.

So I headed to Kanata to see my friend Bobby Chawla from RTeng.pro. He’s a consultant by day and a tinkerer in his spare time like myself, and he has a pair on loan from a friend in the US. He gave me a tour of Google Glass as we talked about what we’ve each been doing since we were working on the pre-BlackBerry QNX OS.

It turned out to be really easy to adapt to switching between focusing on what GLASS was displaying and looking at Bobby as we talked. The voice activation feature for the menu demonstrated itself: every time Bobby told me “to activate the menu, say OK GLASS,” GLASS would hear him saying it and bring up the menu.

It aggressively turns off the display to save power, which does get in the way of exploring it, so I found myself having to do the wake up head gesture often, which is basically tossing your head back and then forward to level again – kind of like sneezing. I’m sure that will lead to a new kind of game similar to “Bluetooth or Crazy” – perhaps “Allergies or Glasshole”?

It could also cause issues where a waiter asks if you want to try the raw monkey brains and you accidentally nod in agreement, or you bid on an expensive antique at an auction because you tried to access GLASS to find out more about it.

Between the voice activation, head gestures, and a touch/swipe sensitive frame, it’s pretty easy to activate and use the device but it certainly won’t be easy to hide the fact that you’re using GLASS from a mile away.

I didn’t have time to explore everything it had to offer in great detail, but what it has now is only the beginning. Clever folks like Bobby and others will come up with new apps for it and what I saw today is just a preview of what’s in store. In that sense, GLASS seems half empty at this point, until you realize that Google is handing it to developers and asking them to top it up with their own flavor of Glassware. If you have any ideas for something that would be a good application for it, I’m sure Bobby would love to hear from you.

I did get a chance to try out the GPS mapping feature, which I think relies on getting the data from his Android phone. We got in his car, he told me to ask it to find a Starbucks, and away we went with GPS guidance and the usual turn-by-turn navigation.

The most surprising thing about them to me was that they don’t come with lenses. There is of course the projection screen, and that little block of glass is what gives them their name. They don’t project anything onto the lens of a pair of glasses; they project an image into the block of glass itself, and because it’s focused at infinity, it appears to float in the air – kind of/sort of/maybe.

So they work at the same time as a regular pair of glasses, more or less. They have a novel pair of nose grips mounted on legs that are long enough to let the device peacefully, but uneasily, co-exist with a typical pair of regular glasses or sunglasses.

There are two cameras in it – one that faces forward, and another that looks at your eye for some reason – perhaps to send a retinal scan to the NSA! You never know these days. Actually, the sensor looking at your eye detects eye blinks to trigger taking a picture, among other things.

So would I get a pair of these and wear them around all the time – like that fad with the people who used to wear Bluetooth earpieces at the grocery store? No.. I don’t think so, but for certain people in certain roles, I can see them being invaluable.

Bouncers or security at nightclubs and other events could wear them, take photos of troublemakers, and share that information with the other security people at the event immediately, so someone who gets kicked out of one door can’t get back in another.

I’m sure we’ll see mall cops using them as a way to record things they might need to document later for legal purposes like vandalism and shoplifting. Insurance investigators and real estate folks will surely derive value from a device that can document and record a walk-through of a location without their having to carry a camera and audio recorder.

Any occupation that uses a still or video camera to gather documentary evidence is worth a look as a candidate for using GLASS, although it would be better if longer sections of video could be recorded. In some cases a real camera will still be needed, but as the saying goes with smartphones – the best camera is the one you have with you at the time.

GLASS doesn’t really do anything that a smartphone can’t already do. The main value proposition GLASS offers is a hands-free experience and instant access. Some of the functionality even still requires you to carry a phone. It’s definitely going to make selfies easier to take.

The penalty is that you look a bit like an idiot at this point, until fashion steps in and makes less obtrusive and obvious items with similar functionality.

My main takeaway on the experience is that if you ever want to piss off a GLASS user…. wait until just after they sneeze, and then say “OK GLASS.. google yahoo bing”

[EDIT] – I’ve since learned that GLASS does come with lenses; my friend’s relative just left them back in the US, since he also wears glasses. I also learned that you can get prescription lenses made for them or buy sunglass lenses.

Wearable computing as an idea has been around for ages. I was watching Toys with Robin Williams on Netflix or somewhere, and it seemed to me that the concept really hasn’t moved forward since his musical suit. I was also reading an article recently that tried to argue that everything is in place for it to catch on except the fashion part, but that still didn’t really seem right to me either.

And then it kind of struck me. Wearable computing will never catch on with the masses. Nobody wants to ‘wear’ a computer, and there is already a schism developing between those who choose to do so (Glassholes) and people who resent and dislike where that branch of wearables leads.

But what people do want, and what will catch on, is pretty much the same technology but we won’t call it that.

I like to call it ‘functional clothing’ and we don’t need to wait to see if that will catch on because it’s already all around us and completely accepted in every culture. The functionality just doesn’t include any of the new fancy electronic or wireless stuff yet.

“Uniforms” are functional clothing, and we can already see ‘wearable computing’ being incorporated into some very specialized uniforms – military and police being the prime examples. But firemen, UPS delivery folks, meter readers and many other occupations already lug around lots of equipment that they need to do their jobs.

Imagine what McDonald’s could do by wiring their kitchens with Bluetooth low-energy beacons and their staff with smart wearables woven into the uniforms. Forget that clunky headset the drive-thru attendant has to wear. Put Google Glass on the McDonald’s manager, and now no matter where they are in the kitchen, the display screen showing the orders is there for them to see. As the development costs come down, big companies will see the value in building wearables into the uniforms of their front-line staff.

For my part, I’m going to get ahead of the curve and start working on machine washable computing. I suspect dry cleaning is about to make a comeback.

I recently had a chance to work with some extremely accomplished 3D artists on a product for Android using Unity 3D called Dragon Strike Live, and it was quite an eye opener. These folks are used to working on feature films like Avatar, 2012, and Abraham Lincoln: Vampire Hunter, but they decided to try their hand at making a mobile game, and I got to help them with some of the coding.

It was pretty cool to work with people who can do serious 3D animation. Most game programmers have worked with humanoid characters and can do basic animation and modeling of humans, but arbitrary creatures are a different story, and a full-blown dragon is quite a complex rig.

Not only do these dragons look way awesome, but the animations are totally kickass and mind-blowing. I helped create the code that randomly sequences them, and even after watching these critters romp around the screen for days I can still waste lots of time just watching the animations.

Check out http://motionlogicstudios.com to see what kind of other stuff these folks have done, and if you have an Android, check out the live wallpaper app at https://play.google.com/store/apps/details?id=com.motionlogicstudios.dragons

I’m looking forward to working on future projects with these guys. Hopefully some of their talent will wear off on me!

Lots of people engage in fantasy sports leagues, but sports isn’t my thing. I love movies though, and I sometimes fantasize about doing remakes of movies using a different cast.

I’ve always had this fantasy about doing a Gilligan’s Island movie and my recent blog post about Vince Gilligan’s island didn’t just remind me of my fantasy. My fantasy Gilligan’s Island movie reminded me of something Vince Gilligan might write.

There’s nothing particularly inventive about my plot idea – it’s downright derivative. I guess that’s not something Vince Gilligan would aspire to, but I would blatantly rip off The Usual Suspects plotline and play on that. The whole movie idea is spoof-ish anyway, with lots of sight gags and homages to the original TV show, so ripping off a plotline is a minor sin. I think the Wayans brothers have lowered the bar on that so far that you don’t even trip over it anymore.

No, the creativity in my version is all about casting, and the idea would sink or swim on being able to assemble the ensemble cast that would have the right chemistry to make it work.

The core cast would be Adam Sandler as Gilligan and Will Ferrell as The Skipper. If you can’t close your eyes and picture those two doing the “Little Buddy” and “SkipppperrrRRR!” routine, then stop reading now and go read something else.

I sure can. I can picture Will Ferrell wringing the Skipper’s hat between his hands, fretting, resisting the urge to hit his shipmate, saying “doggone it” and the whole nine yards. And I can definitely picture Adam Sandler being chased by hungry cannibals while yelling “Skippperrrr!”.

They both have the right physiques, hair color and hair style for their respective roles – straight black hair for Gilligan and curlier, lighter hair for Will. They might be a bit older than ideal for the roles now, but they could probably still manage.

If I couldn’t get those two to play the main roles, I don’t think I would even think of going further. I’m pretty attached to most of my other casting choices as well, but the first two are definite showstoppers.

And who should play the Professor? We need someone intelligent, forthright, brutally honest, assertive.

Samuel L. <frickin> Jackson.

I’m not going for the G rated Gilligan’s Island here obviously.

Yes.. Samuel L. Jackson would make a fine Professor.

So my whole “Usual Castaways” plotline revolves around the Howells interrogating Gilligan about a failed rescue attempt that results in a burning rescue ship and most of the other castaways ending up dead or drowned. I’m not going for the bright and cheery version of Gilligan’s Island here either. Like I said – this is more like Vince Gilligan’s island.

So who should play the Howells? I like Sandra Bernhard and Eddie Izzard. I actually asked Sandra Bernhard on Twitter if she thought she could get into playing Mrs. Howell, and she loved the idea.

For Mary Ann, I like Halle Berry. For Ginger – Anne Hathaway. To be honest, these two characters wouldn’t play much of a central role in my plotline, and for cost reasons you could probably talk me into actresses with a lower price tag.

So that’s it. My fantasy cast for a Gilligan’s Island remake, along with essentially the elevator pitch of the plotline.

Anyone want to lend me.. oh.. $20 million to get started on it?

It occurred to me while binge watching Dexter that the series is about a monster becoming more human. Then it occurred to me that Breaking Bad was about a human becoming a monster. In a sense, both series are about the main themes of the Frankenstein story – Dexter being the monster and Walter White being the misguided scientist.

There must be something compelling about that ‘monster within all of us’ theme. Think of how many of the bigger cultural ‘hits’ involve vampires, werewolves, superheroes, mutants and other examples of humans imbued with an extreme amount of something inhuman, and you’ll see just how fascinated we are by our darker sides.

For a long time I resisted watching certain shows because I didn’t like what they were about on the surface. Dexter and Breaking Bad are both examples of that – I don’t generally like anything that involves gore, and I don’t really have any interest in the ‘meth’ culture as it were, but in the end I’m glad I got over those objections and watched both of these series.

Dexter does have a lot of gore in it, and Breaking Bad does deal with the meth culture, but that isn’t really what either of those series is about. They are about human beings and what they will do under extreme circumstances.

TV series today rarely deal with mundane people in mundane situations. Even The Beverly Hillbillies or Green Acres were ‘fish out of water’ stories that worked because they dealt with unusual characters in awkward situations.

We’ve gradually raised the bar over the years so that the stakes have to be much higher in order for us to want to stay along for the ride. Being able to write such a show so that it doesn’t cross over the line from drama and turn into comedy is a tremendous skill and the writers of these series deserve a lot of credit for nailing it.

And now I can’t stop wondering if Gilligan wasn’t secretly responsible for keeping everyone on that island for his own selfish reasons. After all, he always seemed to be around when one of their plans went wrong….

I was working on a product that is a simulation of something I won’t mention at this point, and I suddenly realized it was running inside a simulation of a device in a game editor, which was itself running inside a virtual machine (a simulation of a computer).

And it was performing just fine!

I frequently develop for iOS devices, and 9 out of 10 test runs are made in a simulator. The simulators in Xcode cover nearly every kind of device Apple makes, from the iPod touch to the iPad.

Most of my BlackBerry testing is done in a simulator. I test things for Android devices using a simulator.

What we all really need is a smartphone that can run all of these simulators.

So you used to hear about a four-day work week all the time, but lately there hasn’t been much talk about it, and I think I know why. I think we’ve been on a four-day work week for a while now, but the extra day has been quietly stolen and put to good use by the big software giants.

As a software developer, I estimate that I spend about 20% of my time at this point updating software. I have several computers that run different operating systems, and the operating systems on those computers need to update themselves periodically. When that happens, I need to close Photoshop and the 20 or 30 files that are open, close Visual Studio and the project I am working on, close all the browser windows showing vital documentation I need to work with, and so on.

I have to close all the windows my Mac has open that point at my Windows machine (or vice versa if I’m updating Mac OS X). Once the update is done, assuming everything went smoothly, I need to spend a bunch of time re-opening those files, relocating the project, etc.

And that is just the operating systems on the computers I use.

Then there are the tools I use. I use Xcode on the Mac, and that gets updated often – usually to match the updates to the operating systems on the devices I use Xcode to build software for. It’s a big update too – many GB.

So when Xcode gets updated, I usually need to get an update to Unity 3D, which produces the Xcode project that gets compiled and put on the devices. I have to update Unity when Windows changes too, or a major web browser, or the Xbox or PlayStation or what have you. Unity has to run on a lot of platforms, and I don’t envy them the task of keeping everything working on all of them.

And after getting these updates, I sometimes have to re-register my devices as development devices (after I set them up again, as I need to do with a major update like iOS 7).

The last month has been particularly unproductive because of the (three) iOS 7 update(s), but this hidden software-update tax has been going on for a while.

If you have an iOS device, you know how quickly the little number showing how many app updates are available can climb. Even if you’re not a developer, you probably spend an hour a week just updating the apps on your smartphone and cleaning things up.

If you are trying to support multiple mobile platforms, it’s crazy how much time you spend updating devices and development systems.

So is anyone up for a three-day work week?