So now that I’ve had a chance to see Google Glass, and gone out on a limb posting predictions about the future of wearable computing, which sector do I see wearable computing making a huge impact in where it doesn’t today?

Construction workers. These are highly skilled and highly paid workers who need to keep both hands free and who need ready access to all sorts of tools – they need brain tools just as much, and in my opinion they aren’t getting them fast enough.

On my smartphone I have an app that lets me take measurements of distant buildings using my GPS location, accelerometers, video camera and the like. A construction worker can certainly bring an iPhone on site and use it for those purposes, but nobody would argue that an iPhone is a rugged piece of construction equipment.

Clearly Google Glass could help here, but it’s the functionality that’s needed, not the current form. Google Glass is too delicate, and it’s not specialized enough for construction workers. It wouldn’t last a day in the field, and the battery life alone makes it a non-starter for a 12+ hour a day profession.

But construction workers are already required to wear a specific piece of functional clothing: a helmet. The HUD portion of GLASS is still needed, but the helmet could provide a perfect platform for wireless antennas, solar power collectors, a micro weather station, and all kinds of other useful equipment.

But especially batteries – enough batteries to make the smart helmet a very practical idea. Construction helmets need to meet objective standards in order to comply with regulations, and that means they have to carry a certain amount of weight no matter how you cut it. That weight is substantial compared to the weight of rechargeable batteries, so with some clever materials science you could probably engineer batteries into a construction helmet that still met the regulations without increasing the current weight by much.

And safety glasses would make a dandy display for it. A laser sight onboard would allow the kind of measurement application I mentioned earlier to become very precise. It would not replace real surveyors with real surveying equipment, but it would allow most workers to become information sources, feeding ad-hoc measurement data into the back-end scheduling and planning systems of the construction company running the project. Other devices like sonar range finders could be added to make them even more flexible.

And cost will certainly be a factor, but the benefits are real and the cost is incremental, given that many of the components of an integrated wearable construction worker’s suit are already there for safety reasons and already have to meet standards that a cheap T-shirt never would.

Video footage and photos, and even sound recordings could be quickly shared among the people who need them when they need them, and ad hoc meetings to deal with issues as they arise could be held via video conferencing to keep projects running smoothly.

The workers’ equipment could eventually be wired to provide any necessary status and operating information via the smart helmet. A torque wrench could not only display the torque reading being applied to a lug nut, the display could also show the required torque for the job, and maybe allow reporting and logging that the nut had been tightened, for compliance purposes. An alert could tell the worker that the nut had been tightened a certain number of times and was due for replacement.

A network of Bluetooth Low Energy sensors around the construction site could track workers’ movements for safety and logistical purposes. Hands-free multi-band communication devices in the helmet could tie in with the various communications and data systems on a job site. The helmet could send out a help squawk if it detects a blow to the head, and the safety glasses could monitor vitals to detect if the worker is injured.
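To make that last idea a little more concrete, here is a minimal sketch of the kind of logic a smart helmet’s firmware might run to detect a blow and send out a help squawk. Everything in it – the class name, the threshold, the event mechanism – is an assumption for illustration, not a safety specification.

```csharp
using System;

// Hypothetical sketch of the "help squawk" idea: helmet firmware that
// flags a possible head impact when the accelerometer reading spikes.
// The threshold, units and alert mechanism are illustrative assumptions.
public class ImpactDetector
{
    // Readings in g; 8g is an arbitrary illustrative threshold,
    // not a regulatory figure.
    private const double ImpactThresholdG = 8.0;

    // Subscribers (radio, BLE beacon network, site dispatch) decide how
    // the squawk actually gets broadcast.
    public event Action<double> HelpSquawk;

    public void OnAccelerometerSample(double x, double y, double z)
    {
        // Magnitude of the acceleration vector from a 3-axis sensor.
        double magnitude = Math.Sqrt(x * x + y * y + z * z);

        if (magnitude >= ImpactThresholdG)
            HelpSquawk?.Invoke(magnitude);
    }
}
```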

Different types of workers could pair the smart helmet with different types of specialized uniforms. A welder’s kit could include a smart visor attachment that automatically switched on an LCD filter when the welder started arc welding, and maybe a thermal imaging camera overlay so they could judge weld quality. A site foreman’s kit would focus on communication and logistics.

A crane operator could have an Oculus Rift type of visor that allowed them to rotate around an augmented view of the crane’s payload, synthesized from camera views and other data, even when they couldn’t see the payload directly.

And with Bluetooth Low Energy beacons becoming a cheap and practical reality, your basic carpenter would finally be able to answer the age-old question: where did I leave my @#%$%$ hammer?

I see a lot of posts about automated or self driving cars in my LinkedIn stream, probably because of all the QNX people I follow. One of the main areas the QNX operating system is used in is automotive systems, so that’s not surprising. And Google is also frequently in the news for their self driving cars, so the topic is really starting to get a lot of attention.

I have no doubt that we’ll eventually have cars that can drive for themselves. We’ve seen them demonstrated and it’s a really cool idea but I still see a lot of problems with them.

The biggest problem I see is that they will need to deal with unpredictable humans who don’t follow the rules of the road. If we program self driving cars to ‘believe’ the turn signals on other cars, what will they do when they meet the stereotypical Sunday driver that forgets to turn off their turn signal?

What will they do when the car signals a lane change, but a rude driver won’t let them in?

I think it will be a long time before we allow computer driven cars without a human override. What I think makes sense, and what is already happening, is that we will first allow our cars to incorporate safety overrides for the things we humans can’t do quickly enough to do effectively or reliably. We’ll let them take over driving for us once there is no doubt that we’re in a situation we can’t deal with.

Call it ‘augmented driving’.

Two examples of that kind of thing are airbags and anti-lock brakes. Bracing for a collision is something we have always done, and an airbag is just a fancy robotic collision bracing system that auto-deploys (hopefully) only when needed. Pumping the brakes manually in slippery conditions is something we’ve always done; an anti-lock braking system just does it better than we ever could, and only when needed.

So I think the evolution from here to fully self driving cars will be very long, and will take place in incremental steps like that. We will start by instrumenting everything that’s important so that the cars can sense when important things happen, and we will equip them with specialized information displays, annunciators, actuators and software to take over at the right time and override us.

The rear cameras that warn people about obstacles when they are backing up are a great example of that, and so is the parallel parking mechanism being deployed in some cars. Collision avoidance radar will eventually be turned into a system that actively veers to avoid a collision for you, if someone hasn’t already done that.

I’m personally trying to promote the idea that smart cars should sense when a living thing has been left in a hot car during a heatwave and actually do something to prevent harm. That’s pretty low hanging fruit.

There is plenty more of that kind of assistive technology coming down the pipe very soon, but it’s a co-pilot we’re building in now, not a pilot.

It’s likely that for a while, there will be special cases where roads are built for the exclusive use of completely self driving cars and we may develop an infrastructure for commercial traffic that uses them. We will need ‘smart roads’ for the concept of self driving cars to be fully realized and it will take a long time for them to be built. It will likely start out being done in small islands of experimentation before it gets rolled out and adopted for all roads.

In the meantime, we will continue to augment the reality within human piloted vehicles with more and more information systems and technology. Big data will probably be used to collect information about how people in the real world actually drive and run it through some fancy AI – maybe IBM’s Watson – to come up with some kind of neural net ‘driving brain’ that could allow a machine to drive in much the same way that a typical human does, even on regular non-smart roads.

I just hope we don’t train them to leave their blinkers on.

So after my post about wearable computing the other day, a colleague from my time at QNX who had read it got in touch and asked if I wanted to try Google Glass myself. I was thrilled! Someone had actually read my post!

And yes.. it was a pretty cool chance to see Google Glass in person and try it out.

So I headed to Kanata to see my friend Bobby Chawla from RTeng.pro. He’s a consultant by day and a tinkerer in his spare time like myself, and he has a pair on loan from a friend in the US. He gave me a tour of Google Glass as we talked about what we’ve each been doing since we were working on the pre-BlackBerry QNX OS.

It turned out to be really easy to adapt to switching between focusing on what GLASS was displaying, and looking at Bobby as we talked. The voice activation feature for the menu was self-explanatory since every time Bobby told me “to activate the menu say OK GLASS” GLASS would hear him saying that and it would bring up the menu.

It aggressively turns off the display to save power, which does get in the way of exploring it, so I found myself having to do the wake up head gesture often, which is basically tossing your head back and then forward to level again – kind of like sneezing. I’m sure that will lead to a new kind of game similar to “Bluetooth or Crazy” – perhaps “Allergies or Glasshole”?

It could also cause issues where a waiter is asking if you want to try the raw monkey brains and you accidentally nod in agreement or bid on an expensive antique at an auction because you tried to access GLASS to find out more about it.

Between the voice activation, head gestures, and a touch/swipe sensitive frame, it’s pretty easy to activate and use the device but it certainly won’t be easy to hide the fact that you’re using GLASS from a mile away.

I didn’t have time to explore everything it had to offer in great detail, but what it has now is only the beginning. Clever folks like Bobby and others will come up with new apps for it and what I saw today is just a preview of what’s in store. In that sense, GLASS seems half empty at this point, until you realize that Google is handing it to developers and asking them to top it up with their own flavor of Glassware. If you have any ideas for something that would be a good application for it, I’m sure Bobby would love to hear from you.

I did get a chance to try out the GPS mapping feature, which I think relies on getting the data from his Android phone. We got in his car and he told me to ask it to find a Starbucks and away we went with GPS guidance and the usual turn by turn navigation.

The most surprising thing about them to me was that they don’t come with lenses. There is of course the projection screen, but that little block of glass is what gives them their name. They don’t project anything onto the lens of a pair of glasses from the block of glass, they project an image into the block of glass, and because it’s focused at infinity, it appears to float in the air – kind of/sort of/maybe.

So they work at the same time as a regular pair of glasses, more or less. They have a novel pair of nose grips mounted on legs that are long enough to allow the device to peacefully, but uneasily, co-exist with a typical pair of regular glasses or sunglasses.

There are two cameras in it – one that faces forward, and another that looks at your eye for some reason – perhaps to send a retinal scan to the NSA! You never know these days. Actually, the sensor looking at your eye detects eye blinks to trigger taking a picture among other things.

So would I get a pair of these and wear them around all the time – like that fad with the people who used to wear Bluetooth phones in their ears at the grocery store? No.. I don’t think so, but for certain people in certain roles, I can see them being invaluable.

Bouncers or security at nightclubs and other events could wear them to take photos of troublemakers and share that information with the other security people at the event immediately, so a troublemaker doesn’t get kicked out of one door only to come back in another.

I’m sure we’ll see mall cops using them as a way to record things they might need to document later for legal purposes like vandalism and shoplifting. Insurance investigators and real estate folks will surely derive value from having a device that can document and record a walk through of a location without having to carry a camera and audio recorder.

Any occupation that uses a still or video camera to gather documentary evidence is worth a look as a candidate for using GLASS, although it would be better if longer sections of video could be recorded. In some cases a real camera will still be needed, but as the saying goes with smartphones – the best camera is the one you have with you at the time.

GLASS doesn’t really do anything that a smartphone can’t already do. The main value proposition GLASS offers is a hands free experience and instant access. Some of the functionality even still requires you to carry a phone.  It’s definitely going to make selfies easier to take.

The penalty is that you look a bit like an idiot at this point, until fashion steps in and makes less obtrusive and obvious items with similar functionality.

My main takeaway on the experience is that if you ever want to piss off a GLASS user…. wait until just after they sneeze, and then say “OK GLASS.. google yahoo bing”

 

[EDIT] – I’ve since learned that GLASS does come with lenses; my friend’s relative just left them back in the US since he also wears glasses. I also learned that you can get prescription lenses made for them or buy sunglass lenses.

Wearable computing as an idea has been around for ages. I was watching Toys with Robin Williams on Netflix or somewhere, and it seemed to me that the concept really hasn’t moved forward since his musical suit. I was also reading an article recently that argued everything is in place for it to catch on except the fashion part, but that still didn’t really seem right to me either.

And then it kind of struck me. Wearable computing will never catch on with the masses. Nobody wants to ‘wear’ a computer, and there is already a schism developing between those who choose to do so (Glassholes) and people who resent and dislike where that branch of wearables leads.

But what people do want, and what will catch on, is pretty much the same technology but we won’t call it that.

I like to call it ‘functional clothing’ and we don’t need to wait to see if that will catch on because it’s already all around us and completely accepted in every culture. The functionality just doesn’t include any of the new fancy electronic or wireless stuff yet.

“Uniforms” are functional clothing, and we can already see ‘wearable computing’ being incorporated into some very specialized uniforms – military and police being the prime examples. But firemen, UPS delivery folks, meter readers and many other occupations also lug around lots of equipment that they need to do their jobs.

Imagine what McDonald’s could do by wiring their kitchens with Bluetooth Low Energy beacons and their staff with smart wearables woven into their uniforms. Forget that clunky headset the drive-thru attendant has to wear. Put Google Glass on the McDonald’s manager and, no matter where they are in the kitchen, the display screen showing the orders is right there for them to see. As the development costs come down, big companies will see the value in building wearables into the uniforms of their front line staff.

For my part, I’m going to get ahead of the curve and start working on machine washable computing. I suspect dry cleaning is about to make a comeback.

WOW!

In my 6th 48 Hour Competition for Machinima this month, I won six awards: Best Film, Best Writing, Best Directing, Best Sound Design, and Best Use of both the Prop and the Line of Dialog.

If you have never heard of a 48 Hour Film Project, it’s a fun excuse for people who make films to get together and spend a weekend doing exactly that. There are some simple rules:

- each team is assigned a film genre drawn from a hat

- all teams must use several common elements including a specific character, prop, and line of dialog

- the teams have 48 hours to write, film, edit and submit a completed short film of between 4 and 7 minutes

There are a lot more details than that, but you get the idea. This year, the common elements were a character named Pat Runyan (male or female) who is a politician, the line of dialog “You just don’t get it do you?”, and the prop, dice.

There were ten teams entered this year. The full set of entries can be viewed at http://aviewtv.com and will also be available on the 48 Hour Film Project website. As a ‘city’ winner, my entry will also be screened at the annual Filmapalooza, and if hell freezes over and the planets align, it could potentially be eligible for the grand prize, whatever it is this year – in prior years the top 12 entries were screened at the Cannes festival!

So during the kickoff, my team drew the Thriller/Suspense genre from the hat, and away we (I) went.

I pretty much did the entire submission on my own this year. I wasn’t sure I would even be able to enter and didn’t want to get a team all fired up if I had to withdraw at the last minute. In the end I had the time to enter, and I am certainly glad I did so now! My friend Susan helped me by recording the one or two lines of dialog I needed other than my own.

My original plan was to use Unity 3D, but in the end I went with iClone because my idea for the script involved a lot of facial closeups, camera switching and eye contact, and I know how to do those things well with iClone.

I don’t want to spoil the story for you if you haven’t watched it yet, but once I had the basic idea in my head, the rest was straightforward. The first thing I did was to write the script, reading it out loud frequently to get the flow and rhythm and also to time out four minutes worth of dialog.

With the script written, I worked backwards from the time limit we were given and laid out an iClone scene of the proper length with the characters I needed. I positioned them and animated their basic actions by puppeteering each character in realtime for the entire four-minute-plus segment, recording the actions using iClone’s MixMoves feature. I did just basic animation at that point, essentially puppeteering between several different idle movements for each character.

Next, I placed about ten different cameras in the scene, basically a medium shot and a close up shot for each character as well as an extreme close up for Pat and a couple of other cameras for the end shots.

Next, I recorded the audio and separated it into phrases that are thought and phrases that are spoken. The thought narrative that runs through the entire film is essentially the audio master track, so I imported that into iClone and used it to know where to import the spoken phrases as audio clips into the characters, which makes them automatically move their lips correctly.

Once all of the audio was in place, I proceeded through the entire timeline setting the points at which the cameras switch from one view to another to coincide with the main points in the script. After that, I made one pass through the timeline for each character, telling them when to strategically look at, or away from, another character – again to coincide with highlights in the script, all with the aim of adding suspense.

It was about then that I exported a draft and sent it in early Sunday morning, about five hours before the close of the contest. I’m really glad I did that too, because I got bogged down in some technical problems involving the dice and was not able to submit a more polished version in time. Apparently though, my efforts were good enough to garner six awards, including the triple crown of Best Film, Best Writing and Best Directing.

The version you see here has been modified from what I was able to submit. Whenever I do these, I follow a tradition of creating a “Director’s Cut” – basically a version with a bit of extra work done on it, so it is what I think I should have been able to do in the time allotted if everything had gone perfectly.

So I hope you enjoy.. “A Dicey Deal”

In any industry, each competitor seeks to distinguish themselves from the other competitors by providing something none of the others can offer. This is generally known as the ‘special sauce’ and I think I’ve found mine.

Most people have heard of the Arduino, but if you haven’t, it is a very cheap and very powerful electronic device that hooks up to a computer via a USB cable and allows software to read and control a huge range of real world devices: temperature, pressure, motion and other sensors, as well as motors, heaters, light dimmers and electrical switches.

My education is in process control, which is precisely the act of reading such sensors and controlling such devices so Arduinos are definitely my thing, but what is a Uniduino?

Well, Uniduino is a library of C# code that allows an Arduino’s functionality to be controlled from within the Unity game engine.

What that means is that I can create a game world that gets information from real world devices and uses that information to control how something in the game looks and acts. I can also take input from the person playing the game and send information back through Uniduino to the Arduino to make it do something in the real world.
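Here’s a minimal sketch of that two-way bridge inside a Unity script. I’ve used a hypothetical Firmata-style board interface as a stand-in for Uniduino’s actual API (the real library’s class and method names differ), so treat the Arduino calls as illustrative: a sensor reading drives something in the scene, and player input drives a pin in the real world.

```csharp
using UnityEngine;

// Hypothetical stand-in for Uniduino's Firmata-style wrapper. The real
// library exposes similar read/write calls, but the names here are
// assumptions for illustration.
public interface IArduinoBoard
{
    int AnalogRead(int pin);               // e.g. a sensor on analog pin 0
    void DigitalWrite(int pin, bool high); // e.g. a relay on digital pin 7
}

// Drives a Unity object from a real-world sensor, and a real-world
// actuator from player input - the two directions described above.
public class PhysicalBridge : MonoBehaviour
{
    public int sensorPin = 0;
    public int relayPin = 7;

    // In a real project this would be wired up during Uniduino's own
    // connection setup; here it's simply assumed to exist.
    public IArduinoBoard board;

    void Update()
    {
        if (board == null) return;

        // Real world -> game world: scale this object with the sensor value
        // (10-bit ADC, so readings run 0..1023).
        float reading = board.AnalogRead(sensorPin) / 1023f;
        transform.localScale = Vector3.one * Mathf.Lerp(0.5f, 2f, reading);

        // Game world -> real world: hold space to switch something on.
        if (Input.GetKeyDown(KeyCode.Space)) board.DigitalWrite(relayPin, true);
        if (Input.GetKeyUp(KeyCode.Space)) board.DigitalWrite(relayPin, false);
    }
}
```

The point of the sketch is that once the board object exists, real-world sensor data is just another input to Update(), and a pin write is just another output.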

The game world can be running on a remote server, and the person(s) in the game world could be logged in remotely, so this is the core of some interesting telepresence applications. Throw in some remote video cameras and you could do some serious things.

Remote health care is already being explored with video technology. How could it benefit from the added element of a virtual world? What kind of devices could we make for chronic care patients to allow them to live their own lives while being connected to and within reach of health care intervention when they need it?

Remote monitoring of many industrial systems is already commonplace. How could this be extended by adding virtual or augmented reality? Fixed cameras monitor a lot of installations – what about a 3D overlay (underlay) on the video to tell the operator where things *should* be when they are normal?

If you’ve played any recent first person shooter video games, you know how far we have come in recreating experiences virtually. Meanwhile, we manage to get by using non-immersive flat panels crammed with abstracted sensor information that tries to tell us what is going on somewhere else with our machines.

The next generation of workers will be able to walk around virtually inside those environments and see, touch, feel and hear what is going on.

As any old time engineer on a ship or a train will tell you, they can ‘feel’ when something isn’t right with the machine. We’re not going to ‘feel’ anything about our machines until we start making the man-machine interface much more immersive than a bunch of numbers on a touchscreen.

Many people today, like me, have multiple online personalities. Sometimes it’s because they have different roles to play at various times during the day, and a lot of it has to do with there being so many different kinds of social media – even if we play the same role or project the same identity in each of them, we still find ourselves managing more and more of them.

And let’s face it: we all have accounts for playing games or posting in forums about body piercings, or maybe we just want to like Justin Bieber openly without being judged and create a Facebook fan page for him.

Whatever. It’s not really anyone’s business, but on the one hand we’re told to be careful what we do online because we may be judged for it, and on the other hand we’re tempted by all kinds of freedoms that we might want to indulge in without being judged.

When I first got into Virtual Reality, a lot of people considered it un-serious stuff. Not only that, but people were free to invent their avatars as anything they wanted. And that was back in a time when the only real protection against identity theft was security through obscurity. To a large extent, it’s still true today that many people don’t want to use their real names online.

When Second Life came along five years later, you still weren’t allowed to use your real last name. The last name of my Twitter handle – CodeWarrior Carling – comes from a big long list of choices I had to pick from when I joined Second Life. Carling is both a famous street and a famous person where I live in Ottawa, so I combined it with the ‘nickname’ I had been using for flight simming, and CodeWarrior Carling was born.

[Image: Ideajuice Fullstop avatar – my female side]

And of course anyone who has ever built things with game engines or VR – particularly things involving avatars – knows that you need more than one account to test anything worth testing, and you also need a female avatar to test things like female clothing and hair. So Ideajuice Fullstop was created (again, I had to choose her last name from a limited list).

So including the earlier identities I had accumulated across the different Active Worlds universes I had been in (Active Worlds, Active Worlds Europe, Outer Worlds, Dreamland Park and several others), I was up to half a dozen online identities by 2010.

And I’m not really trying to hide anything I’m ashamed of.

So when I  stopped socializing online in Second Life and switched to Twitter, I came into Twitter as CodeWarrior Carling from Second Life and nearly all my first 1,000 or so followers knew me as that. It was a way of keeping in touch with that community even though I wasn’t really ‘going there’ anymore in a virtual sense.

After a while, I realized I was mixing in a lot of local people who had no idea who or what CodeWarrior Carling or Ideajuice or Second Life was, but with whom I shared interests by virtue of geography or technology, so I started another account called @OttawaPete to try to keep things a little saner for both them and me.

I have other identities online as well. @BrashWorksPete, @VennData, @VARWA. There’s nothing malicious about any of them and it’s kind of fun to wear many hats and sometimes it’s even necessary.

My point is that many of us have multiple accounts online, but it’s only multiple personality disorder if the personalities are not aware of each other.

In my case we all know about each other, and we’re all OK with it.

At least I think we all know about each other….