By now most people working in the tech sector have heard about the bug in Apple’s implementation of SSL/TLS. What you may not know is that the bug has apparently been in the code since 2012, and that the code has been sitting in plain view as part of Apple’s open source releases. And it’s not in some obscure, deeply buried file. It’s in a file called sslKeyExchange.c, which is only about 2,000 lines long.

http://opensource.apple.com/source/Security/Security-55471/libsecurity_ssl/lib/sslKeyExchange.c

Over the years we’ve been led to believe that security through obscurity is bad, and that the best way to guarantee robust, bulletproof security is to publish everything out in the open so that the eyes of many experts can review it and quickly find holes which will just as quickly get plugged.

So what went wrong? I’ll leave the answer to the open source proponents, but I have another embarrassing question to ask.

The code in question is shown below – a duplicated goto fail that unconditionally jumps past the signature verification that follows it.

Shouldn’t the compiler have issued an “unreachable code” warning? I see that warning all the time when I temporarily put in statements of any kind to step around code during debugging. It’s possible that someone missed the extra goto when reading the source (although it literally jumps off the page to me), but how is it possible that, in a completely critical piece of code like this, nobody read the compiler warnings and checked what unreachable code was being skipped?

    if ((err = SSLHashSHA1.update(&hashCtx, &signedParams)) != 0)
        goto fail;
        goto fail;
    /* My note - everything between here and the fail label should have been
       flagged in an unreachable code warning */
    if ((err = SSLHashSHA1.final(&hashCtx, &hashOut)) != 0)
        goto fail;

    err = sslRawVerify(ctx,
                       ctx->peerPubKey,
                       dataToSign,       /* plaintext */
                       dataToSignLen,    /* plaintext length */
                       signature,
                       signatureLen);
    if (err) {
        sslErrorLog("SSLDecodeSignedServerKeyExchange: sslRawVerify "
                    "returned %d\n", (int)err);
        goto fail;
    }

fail:
    SSLFreeBuffer(&signedHashes);
    SSLFreeBuffer(&hashCtx);
    return err;
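
For the curious, here’s a minimal, self-contained sketch of the same pattern – not Apple’s code, just an illustration you can compile yourself. As far as I can tell, Clang will flag the skipped check if you build with -Wunreachable-code, but that warning isn’t enabled by -Wall or -Wextra, and GCC has treated the equivalent flag as a no-op for years – which may be part of the answer to my question. Simply bracing every if body would also have prevented the bug in the first place.

    /* sketch.c - illustrative only; try: clang -Wunreachable-code sketch.c */
    #include <stdio.h>

    static int check(int first, int second)
    {
        int err = 0;

        if ((err = first) != 0)
            goto fail;
            goto fail;              /* duplicated line: always jumps from here */

        if ((err = second) != 0)    /* never reached - this is the statement an
                                       unreachable-code warning should point at */
            goto fail;

    fail:
        return err;
    }

    int main(void)
    {
        /* the second check is skipped, so its failure is never reported */
        printf("%d\n", check(0, 1));    /* prints 0 instead of 1 */
        return 0;
    }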

So now that I’ve had a chance to see Google Glass, and gone out on a limb posting predictions about the future of wearable computing, in which sector do I see wearable computing making a huge impact that it isn’t making now?

Construction workers. These are highly skilled and highly paid workers who need to keep both hands free and who need ready access to all sorts of tools. They need brain tools just as much, and in my opinion they aren’t getting them fast enough.

On my smartphone I have an app that lets me take measurements of distant buildings using my GPS location, accelerometers, video camera and the like. A construction worker can certainly bring an iPhone on site and use it for those purposes, but nobody would argue that an iPhone is a rugged piece of construction equipment.

Clearly Google Glass could help here, but it’s the functionality that’s needed, not the current form. Google Glass is too delicate and not specialized enough for construction workers; it wouldn’t last a day in the field. The battery life alone makes it a non-starter for a 12+ hour a day profession.

But construction workers are already required to wear a specific piece of functional clothing: a helmet. The heads-up display portion of GLASS is still needed, but the helmet could provide a perfect platform for wireless antennas, solar power collectors, a micro weather station, and all kinds of other useful equipment.

But especially batteries – enough batteries to make the smart helmet a very practical idea. Construction helmets need to meet objective standards in order to comply with regulations, and that means they need to weigh a certain amount no matter how you cut it. That weight is substantial compared to the weight of rechargeable batteries, and with some clever materials science you could probably engineer batteries into a construction helmet that still met the regulations without increasing the current weight by much.

And safety glasses would make a dandy display for it. A laser sight onboard would allow the kind of measurement application I mentioned earlier to become very precise, and although it would not replace real surveyors with real surveying equipment, it would allow most workers to become information sources feeding ad-hoc measurement data into the backend scheduling and planning systems of the construction company running the project. Other devices like sonar range finders could be added to make them even more flexible.

And cost will certainly be a factor, but the benefits are real and the cost is incremental, given that many of the components of an integrated wearable construction worker’s suit are already there for safety reasons and already have to meet standards that a cheap T-shirt would never need to meet.

Video footage, photos, and even sound recordings could be quickly shared among the people who need them, when they need them, and ad hoc meetings to deal with issues as they arise could be held via video conferencing to keep projects running smoothly.

The workers’ equipment could eventually be wired to provide any necessary status and operating information via the smart helmet. A torque wrench could not only display the torque being applied to a lug nut, but also show the required torque for the job, and perhaps report and log that the nut had been tightened, for compliance purposes. An alert could tell the worker that the nut had been tightened a certain number of times and was due for replacement.
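
To make that concrete, here’s a very rough sketch of what such a compliance record might look like. Every name, field and threshold below is an illustrative placeholder, not any real tool’s API:

    #include <stdint.h>
    #include <stdio.h>
    #include <time.h>

    /* Hypothetical record a connected torque wrench might report to the helmet.
     * All field names and units are illustrative assumptions. */
    typedef struct {
        uint32_t tool_id;        /* serial number of the wrench */
        uint32_t fastener_id;    /* tag on the nut or bolt being tightened */
        float    applied_nm;     /* torque actually applied, newton-metres */
        float    required_nm;    /* torque the job spec calls for */
        uint16_t tighten_count;  /* how many times this fastener has been torqued */
        time_t   timestamp;      /* when the tightening happened */
    } torque_event_t;

    /* Illustrative helper: log the event and flag out-of-spec or worn fasteners. */
    static void report_torque_event(const torque_event_t *e)
    {
        printf("tool %u, fastener %u: %.1f Nm applied (spec %.1f Nm)\n",
               (unsigned)e->tool_id, (unsigned)e->fastener_id,
               e->applied_nm, e->required_nm);

        if (e->applied_nm < e->required_nm)
            printf("  ALERT: under-torqued, redo before sign-off\n");
        if (e->tighten_count >= 5)   /* replacement threshold is a placeholder */
            printf("  ALERT: fastener torqued %u times, due for replacement\n",
                   (unsigned)e->tighten_count);
    }

    int main(void)
    {
        torque_event_t e = { 1001, 42, 118.0f, 120.0f, 5, 0 };
        e.timestamp = time(NULL);
        report_torque_event(&e);
        return 0;
    }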

A network of Bluetooth Low Energy sensors around the construction site could track workers’ movements for safety and logistical purposes. Hands-free multi-band communication devices in the helmet could tie in with the various communications and data systems on a job site. The helmet could send out a help squawk if it detects a blow to the head, and the safety glasses could monitor vitals to detect whether the worker is injured.

Different types of workers could pair the smart helmet with different types of specialized uniforms. A welder’s kit could include a smart visor attachment that automatically switches on an LCD filter when the welder starts arc welding, and maybe a thermal imaging overlay so they could judge weld quality. A site foreman’s would focus on communication and logistics.

A crane operator could have an Oculus Rift type of visor that lets them rotate around an augmented view of the crane’s payload, synthesized from camera views and other data, even when they can’t see the payload directly.

And with Bluetooth Low Energy beacons becoming a cheap and practical reality, your basic carpenter would finally be able to answer the age-old question: where did I leave my @#%$%$ hammer?!

I see a lot of posts about automated or self-driving cars in my LinkedIn stream, probably because of all the QNX people I follow. Automotive systems are one of the main areas the QNX operating system is used in, so that’s not surprising. And Google is also frequently in the news for their self-driving cars, so the topic is really starting to get a lot of attention.

I have no doubt that we’ll eventually have cars that can drive for themselves. We’ve seen them demonstrated and it’s a really cool idea but I still see a lot of problems with them.

The biggest problem I see is that they will need to deal with unpredictable humans who don’t follow the rules of the road. If we program self-driving cars to ‘believe’ the turn signals on other cars, what will they do when they meet the stereotypical Sunday driver who forgets to turn off their turn signal?

What will they do when the car signals a lane change, but a rude driver won’t let them in?

I think it will be a long time before we allow computer-driven cars without a human override. What I think makes sense, and what is already happening, is that we will first allow our cars to incorporate safety overrides for things that make sense to automate because we humans can’t do them quickly, effectively, or reliably enough. We’ll let them take over driving for us once there is no doubt that we’re in a situation we can’t deal with.

Call it ‘augmented driving’.

Two examples of that kind of thing are airbags and anti-lock brakes. Bracing for a collision is something we have always done, and an airbag is just a fancy robotic collision-bracing system that deploys automatically (hopefully) only when needed. Pumping the brakes manually in slippery conditions is something we’ve always done; an anti-lock braking system just does it better than we ever could, and only when needed.

So I think the evolution from here to fully self-driving cars will be very long, and will take place in incremental steps like that. We will start by instrumenting everything that’s important so that cars can sense when important things happen, and we will equip them with specialized information displays, annunciators, actuators and software to take over at the right time and override us.

The rear cameras that warn people of obstacles when they are backing up are a great example of that, and so are the parallel parking systems being deployed in some cars. Collision avoidance radar will eventually be turned into a system that actively veers to avoid a collision for you, if someone hasn’t already done that.

I’m personally trying to promote the idea that smart cars should sense when some living thing has been left in a hot car during a heat wave and actually do something to prevent harm. That’s pretty low-hanging fruit.

There is plenty more of that kind of assistive technology coming down the pipe very soon, but it’s a co-pilot we’re building in now, not a pilot.

It’s likely that for a while, there will be special cases where roads are built for the exclusive use of completely self driving cars and we may develop an infrastructure for commercial traffic that uses them. We will need ‘smart roads’ for the concept of self driving cars to be fully realized and it will take a long time for them to be built. It will likely start out being done in small islands of experimentation before it gets rolled out and adopted for all roads.

In the meantime, we will continue to augment the reality within human-piloted vehicles with more and more information systems and technology. Big data will probably be used to collect information about how people in the real world actually drive and run it through some fancy AI – maybe IBM’s Watson – to come up with some kind of neural net ‘driving brain’ that could allow a machine to drive in much the same way that a typical human does, even on regular non-smart roads.

I just hope we don’t train them to leave their blinkers on.

I was looking at my résumé and trying to figure out how to simplify and organize it better and it struck me that the most important thing on it is not the list of things I have done or could do, but the list of things that I want to do – the things that interest me.

A recruiter or a potential client certainly isn’t going to type in a laundry list of skills to find the right candidate. They’re going to look for people who are interested in and passionate about the kind of work that’s needed to solve a problem, and work out from there whether the person can fit the role.

So for any given person, the ideal job is at the intersection of interesting things, and that’s different for each of us. Once I started thinking along those lines, cleaning up my cluttered résumé was easy. I can learn practically any technical skill needed to do any job I would want to do, so there’s no reason to make the skills the main highlight.

And once I started making the areas I’m passionate about the center of attention, I was able to list things that I would love to do, and am certainly capable of doing, but have never done for lack of an opportunity.

I know from a recent experience as volunteer Producer for the 48 Hour Film Project in Ottawa that passion can be a much more important criterion than experience. I was fortunate to recruit a Twitter acquaintance named Kim Doel to help, and she has proven invaluable despite not having much experience coordinating events.

She’s a natural.

My role as City Producer is another example, in fact. I thought it would be interesting to make the 48 Hour Film Project happen here in Ottawa, and I’ll be darned if it’s not going to happen.

So here’s wishing any of you reading this that you always be at the intersection of interesting things, whatever that happens to mean to you.

So after my post about wearable computing the other day, a colleague from my time at QNX who had read it got in touch and asked if I wanted to try Google Glass myself. I was thrilled! Someone had actually read my post!

And yes.. it was a pretty cool chance to see Google Glass in person and try it out.

So I headed to Kanata to see my friend Bobby Chawla from RTeng.pro. He’s a consultant by day and a tinkerer in his spare time like myself, and he has a pair on loan from a friend in the US. He gave me a tour of Google Glass as we talked about what we’ve each been doing since we worked together on the pre-BlackBerry QNX OS.

It turned out to be really easy to adapt to switching between focusing on what GLASS was displaying and looking at Bobby as we talked. The voice activation feature for the menu was self-explanatory, since every time Bobby told me “to activate the menu, say OK GLASS,” GLASS would hear him say it and bring up the menu.

It aggressively turns off the display to save power, which does get in the way of exploring it, so I found myself having to do the wake up head gesture often, which is basically tossing your head back and then forward to level again – kind of like sneezing. I’m sure that will lead to a new kind of game similar to “Bluetooth or Crazy” – perhaps “Allergies or Glasshole”?

It could also cause issues – say, a waiter asks if you want to try the raw monkey brains and you accidentally nod in agreement, or you bid on an expensive antique at an auction because you tried to access GLASS to find out more about it.

Between the voice activation, head gestures, and a touch/swipe sensitive frame, it’s pretty easy to activate and use the device but it certainly won’t be easy to hide the fact that you’re using GLASS from a mile away.

I didn’t have time to explore everything it had to offer in great detail, but what it has now is only the beginning. Clever folks like Bobby and others will come up with new apps for it and what I saw today is just a preview of what’s in store. In that sense, GLASS seems half empty at this point, until you realize that Google is handing it to developers and asking them to top it up with their own flavor of Glassware. If you have any ideas for something that would be a good application for it, I’m sure Bobby would love to hear from you.

I did get a chance to try out the GPS mapping feature, which I think relies on getting the data from his Android phone. We got in his car, he told me to ask it to find a Starbucks, and away we went with GPS guidance and the usual turn-by-turn navigation.

The most surprising thing about them to me was that they don’t come with lenses. There is of course the projection screen, and that little block of glass is what gives them their name. They don’t project anything onto the lens of a pair of glasses; they project an image into the block of glass itself, and because it’s focused at infinity, it appears to float in the air – kind of/sort of/maybe.

So they work at the same time as a regular pair of glasses, more or less. They have a novel pair of nose grips mounted on legs long enough to let the device peacefully, but uneasily, co-exist with a typical pair of regular glasses or sunglasses.

There are two cameras in it – one that faces forward, and another that looks at your eye for some reason – perhaps to send a retinal scan to the NSA! You never know these days. Actually, the sensor looking at your eye detects eye blinks to trigger taking a picture among other things.

So would I get a pair of these and wear them around all the time – like that fad with the people who used to wear Bluetooth phones in their ears at the grocery store? No.. I don’t think so, but for certain people in certain roles, I can see them being invaluable.

Bouncers or security at nightclubs and other events could wear them and take photos of trouble makers, and share that information with the other security people at the event immediately so they don’t get kicked out of one door and get back in another.

I’m sure we’ll see mall cops using them as a way to record things they might need to document later for legal purposes like vandalism and shoplifting. Insurance investigators and real estate folks will surely derive value from having a device that can document and record a walk through of a location without having to carry a camera and audio recorder.

Any occupation that uses a still or video camera to gather documentary evidence is worth a look as a candidate for using GLASS, although it would be better if longer sections of video could be recorded. In some cases a real camera will still be needed, but as the saying goes with smartphones – the best camera is the one you have with you at the time.

GLASS doesn’t really do anything that a smartphone can’t already do. The main value proposition GLASS offers is a hands-free experience and instant access. Some of the functionality still requires you to carry a phone. It’s definitely going to make selfies easier to take.

The penalty is that you look a bit like an idiot at this point, until fashion steps in and makes less obtrusive and obvious items with similar functionality.

My main takeaway on the experience is that if you ever want to piss off a GLASS user…. wait until just after they sneeze, and then say “OK GLASS.. google yahoo bing”


[EDIT] – I’ve since learned that GLASS does come with lenses; my friend’s relative just left them back in the US since he also wears glasses. I also learned that you can get prescription lenses made for them or buy sunglass lenses.

I mentioned in passing last month that my offer to volunteer as a City Producer and bring the 48 Hour Film Project to the City of Ottawa for the first time had been accepted, but I knew that it would get lost in the holiday noise. The 48 Hour Film Project headquarters in Washington, DC was also in holiday mode, doing their year-end wrap-up and focusing on producing the annual Filmapalooza event for all the city winners from 2013.

But things are now getting back to normal, a lot of the mechanisms to work me into the organization have worked themselves through, and the ball is definitely rolling to bring this incredibly fun event to our fair city in 2014.

What is the 48 Hour Film Project?

http://www.48hourfilm.com/en/ottawa/

It’s a hectic, creative whirlwind of a weekend for teams of filmmakers, actors, musicians, editors, writers and other creative people, and many more people get to enjoy the fruits of it later on at the screenings, awards and interviews with the creators.

I have several other volunteers helping me, and I would never have volunteered without knowing that I could call on them for help. The first person I turned to was Shawna Tregunna, the founder of ReSoMe – a social media marketing and promotion company. Shawna was the lead voice actress in one of my 48 Hour Film Project entries from earlier years, so she is familiar with the event and how much fun it is to take part.

The other person I have helping so far is Kim Doel – a long time social media acquaintance and event organizer. Kim has already proven invaluable by obtaining quotes for theater rentals and contributing many great ideas for saving or raising money.

We haven’t finalized the date yet, but we’re looking at mid-May. We haven’t nailed down the location either, but we hope to have that set soon.

We’re having a meeting at 6 PM next Tuesday, January 14th, at the Daily Grind on Somerset, but this is a meeting for volunteers who want to help organize and plan the event, not for people who just want to enter a team. If you want to help out, or you can help out, by all means let us know and show up next week!

We are also looking for sponsors. If your business caters to anyone involved in making films or videos, we will be getting our message in front of your audience, and we welcome the chance to promote your brand in exchange for prizes for award winners, stuff to put in bling bags for competitors, printing services, or whatever else might help us make this a successful event.

My personal goal is to create an event that outlasts my involvement and becomes a yearly part of the cultural fabric of Ottawa, like the Tulip Festival, the Fringe Festival, and the Animation Festival. If I can launch an event that people look forward to every year, that would go at the very top of my list of life accomplishments.

If anyone reading this would like to help, please contact me at ottawa@48hourfilm.com. I’ll be very glad to hear from you.


Wearable computing as an idea has been around for ages. I was watching Toys with Robin Williams on Netflix or somewhere, and it seemed to me that the concept really hasn’t moved forward since his musical suit. I was also reading an article recently that tried to argue that everything is in place for it to catch on except the fashion part, but that still didn’t really seem right to me either.

And then it kind of struck me. Wearable computing will never catch on with the masses. Nobody wants to ‘wear’ a computer, and there is already a schism developing between those who choose to do so (Glassholes) and people who resent and dislike where that branch of wearables leads.

But what people do want, and what will catch on, is pretty much the same technology but we won’t call it that.

I like to call it ‘functional clothing’ and we don’t need to wait to see if that will catch on because it’s already all around us and completely accepted in every culture. The functionality just doesn’t include any of the new fancy electronic or wireless stuff yet.

“Uniforms” are functional clothing, and we can already see that ‘wearable computing’ has been incorporated into some very specialized uniforms – military and police being the prime examples. But firemen, UPS delivery folks, meter readers and many other occupations already lug around lots of equipment that they need to do their jobs.

Imagine what McDonald’s could do by wiring their kitchens with Bluetooth Low Energy beacons and their staff with smart wearables woven into their uniforms. Forget that clunky headset the drive-thru attendant has to wear. Put Google Glass on the McDonald’s manager, and no matter where they are in the kitchen, the display screen showing the orders is right there for them to see. As the development costs come down, big companies will see the value in building wearables into the uniforms of their front-line staff.

For my part, I’m going to get ahead of the curve and start working on machine washable computing. I suspect dry cleaning is about to make a comeback.

So normally people make machinima using a game engine that was made to play a specific game. Machinima purists would argue that these are the only valid forms of machinima and all others are ‘CGI’ or computer animation.

There are however specific apps being made now that will create ‘machinima’, and there are also general purpose 3D worlds with user-generated content that are used to make machinima.

Well, I wrote a special app using Unity to make one specific machinima, and that’s the app’s only purpose. It’s now useless.

That may be a first. It’s certainly a first for me.

So without further comment, I present “Bass Xylophone”:

Bass Xylophone from CodeWarrior Carling on Vimeo.

Because I can. That’s the only reason.

This was made using a custom app I created with the Unity game engine. I had the idea for a Bass Xylophone and built it with a fish model and a simple script that scales copies of the fish to increasing sizes and places them along a track like xylophone bars.
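
The real thing was a Unity script, but the layout logic boils down to something like this little C sketch (the constants here are made-up placeholders, not the values the app actually used):

    #include <stdio.h>

    #define NUM_BARS 8

    int main(void)
    {
        /* Each 'bar' is the same fish model, scaled a little larger than the
         * previous one and spaced evenly along a track, the way xylophone bars
         * grow toward the low notes. */
        const float base_scale = 1.0f;   /* smallest (highest-pitched) fish */
        const float scale_step = 0.15f;  /* how much bigger each successive fish gets */
        const float spacing    = 2.0f;   /* distance between bar centres on the track */

        for (int i = 0; i < NUM_BARS; i++) {
            float scale = base_scale + scale_step * (float)i;
            float x     = spacing * (float)i;   /* position along the track */
            printf("bar %d: position x = %.1f, scale = %.2f\n", i, x, scale);
        }
        return 0;
    }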

You can play the xylophone with a mouse or on a touch device, but I’m not very good at playing the xylophone, so I got a Creative Commons MIDI file and the rest is history.

I recently had a chance to work with some extremely accomplished 3D artists on a product for Android using Unity 3D called Dragon Strike Live, and it was quite an eye opener. These folks are used to working on feature films like Avatar, 2012, and Abraham Lincoln: Vampire Hunter, but they decided to try their hand at making a mobile game and I got to help them with some of the coding.

It was pretty cool to work with people who can do serious 3D animation. Most game programmers have worked with humanoid characters and are able to do basic animation and modeling of humans, but arbitrary creatures are a different story, and a full-blown dragon is quite a complex rig.

Not only do these dragons look way awesome but the animations are totally kickass and mind blowing. I helped create the code that randomly sequences them and even after watching these critters romp around the screen for days I can still waste lots of time just watching the animations.

Check out http://motionlogicstudios.com to see what kind of other stuff these folks have done, and if you have an Android, check out the live wallpaper app at https://play.google.com/store/apps/details?id=com.motionlogicstudios.dragons

I’m looking forward to working on future projects with these guys. Hopefully some of their talent will wear off on me!

WOW!

In my 6th 48 Hour Competition for Machinima this month, I won 6 awards - Best Film, Best Writing, Best Directing, Best Sound Design and Best Use of both Prop and Line of Dialog.

If you have never heard of a 48 Hour Film Project, it’s quite a fun excuse for people who make films to get together and spend a weekend making films. There are some simple rules:

- each team is assigned a film genre drawn from a hat

- all teams must use several common elements including a specific character, prop, and line of dialog

- the teams have 48 hours to write, film, edit and submit a completed short film between 4 and 7 minutes long

There are a lot more details than that, but you get the idea. This year, the common elements were a character named Pat Runyan (male or female) who is a politician, the line of dialog “You just don’t get it do you?” and dice as the prop.

There were ten teams entered this year. The full set of entries can be viewed at http://aviewtv.com and will also be available on the 48 Hour Film Project website. As a ‘city’ winner, my entry will also be screened at the annual Filmapalooza, and if hell freezes over and the planets align, it could potentially be eligible for the grand prize, whatever it is this year – in prior years the top 12 entries were screened at the Cannes festival!

So during the kickoff, my team drew the Thriller/Suspense genre from the hat, and away we (I) went.

I pretty much did the entire submission on my own this year. I wasn’t sure I would even be able to enter and didn’t want to get a team all fired up if I had to withdraw at the last minute. In the end I had the time to enter, and I am certainly glad I did so now! My friend Susan helped me by recording the one or two lines of dialog I needed other than my own.

My original plan was to use Unity 3D, but in the end I went with iClone because my idea for the script involved a lot of facial closeups, camera switching and eye contact, and I know how to do those things well with iClone.

I don’t want to spoil the story for you if you haven’t watched it yet, but once I had the basic idea in my head, the rest was straightforward. The first thing I did was to write the script, reading it out loud frequently to get the flow and rhythm and also to time out four minutes worth of dialog.

With the script written, I worked backwards from the time limit we were given and laid out an iClone scene of the proper length with the characters I needed. I positioned them and animated their basic actions by puppeteering each character in realtime for the entire four-minute-plus segment, recording the actions using iClone’s MixMoves feature. I did just basic animation at that point, essentially puppeteering between several different idle movements for each character.

Next, I placed about ten different cameras in the scene, basically a medium shot and a close up shot for each character as well as an extreme close up for Pat and a couple of other cameras for the end shots.

Next, I recorded the audio and separated it into phrases that are thought and phrases that are spoken. The thought narrative that runs through the entire film is essentially the audio master track, so I imported that into iClone and used it to know where to import the spoken phrases as audio clips onto the characters, which makes them automatically move their lips correctly.

Once all of the audio was in place, I proceeded through the entire timeline, setting the points at which the cameras switch from one view to another to coincide with the main points in the script. After that, I made one pass through the timeline for each character, telling it when to strategically look at or away from another character, again to coincide with highlights in the script, all with the aim of adding suspense.

It was about then that I exported a draft and sent it in early Sunday morning about five hours before the close of the contest. I’m really glad I did that too because I got bogged down in some technical problems involving the dice and was not able to submit a more polished version in time. Apparently though, my efforts were good enough to garner five awards, including the triple crown of Best Film, Best Writing and Best Directing.

The version you see here has been modified from what I was able to submit. When I do these, I follow a tradition of creating a “Director’s Cut” – basically a version with a bit of extra work done on it, showing what I think I should have been able to do in the time allotted if everything had gone perfectly.

So I hope you enjoy.. “A Dicey Deal”