So now that I’ve had a chance to see Google Glass, and gone out on a limb posting predictions about the future of wearable computing, in which sector do I see wearable computing making a huge impact that it isn’t making now?

Construction workers. These are highly skilled and highly paid workers who need to keep both hands free and who need ready access to all sorts of tools – they need brain tools just as much, and in my opinion they aren’t getting them fast enough.

On my smartphone I have an app that lets me take measurements of distant buildings using my GPS location, accelerometers, video camera and the like. A construction worker can certainly bring an iPhone on site and use it for those purposes, but nobody would argue that an iPhone is a rugged piece of construction equipment.

Clearly Google Glass could help here, but it’s the functionality that’s needed, not the current form. Google Glass is too delicate and not specialized enough for construction workers. It wouldn’t last a day in the field, and the battery life alone makes it a non-starter for a 12+ hour a day profession.

But construction workers are already required to wear a specific piece of functional clothing: a helmet. The heads-up display portion of GLASS is still needed, but the helmet could provide a perfect platform for wireless antennas, solar power collectors, a micro weather station, and all kinds of other useful equipment.

But especially batteries. Enough batteries to make the smart helmet a very practical idea. Construction helmets need to meet objective standards to comply with regulations, and that means they weigh a certain amount no matter how you cut it. That weight is substantial compared to the weight of rechargeable batteries, so with some clever materials science you could probably engineer batteries into a construction helmet that still met the regulations without increasing its current weight by much.

And safety glasses would make a dandy display for it. An onboard laser sight would make the kind of measurement application I mentioned earlier very precise. It wouldn’t replace real surveyors with real surveying equipment, but it would let most workers become information sources, feeding ad-hoc measurement data into the backend scheduling and planning systems of the construction company running the project. Other devices like sonar range finders could be added to make them even more flexible.

And cost will certainly be a factor, but the benefits are real and the cost is incremental, given that many of the components of an integrated wearable construction worker’s suit are already there for safety reasons and already have to meet standards that a cheap T-shirt would never need to meet.

Video footage and photos, and even sound recordings could be quickly shared among the people who need them when they need them, and ad hoc meetings to deal with issues as they arise could be held via video conferencing to keep projects running smoothly.

The worker’s equipment could eventually be wired to provide any necessary status and operating information via the smart helmet. A torque wrench could not only display the torque being applied to a lug nut, it could also show the required torque for the job, and perhaps report and log that the nut had been tightened for compliance purposes. An alert could tell the worker that the nut had been tightened a certain number of times and was due for replacement.

A network of Bluetooth Low Energy sensors around the construction site could track workers’ movements for safety and logistical purposes. Hands-free multi-band communication devices in the helmet could tie in with the various communications and data systems on a job site. The helmet could send out a help squawk if it detects a blow to the head, and the safety glasses could monitor vitals to detect whether the worker is injured.
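
To give a sense of how simple the core of that help squawk could be, here’s a rough sketch of the detection logic. None of this is a real helmet API – the sensor values, the 8 g threshold and the squawk call are all stand-ins for whatever the hardware would actually expose – but the idea is just a threshold check on the accelerometer:

```csharp
using System;

// Hypothetical smart-helmet impact detector. A minimal sketch only; the accelerometer
// inputs and SendHelpSquawk() are stand-ins, not any real device API.
public class ImpactDetector
{
    // Roughly the acceleration (in g) above which we treat a reading as a blow to the head.
    private const double ImpactThresholdG = 8.0;

    public void CheckForImpact(double ax, double ay, double az, string workerId)
    {
        // Magnitude of the acceleration vector, in g.
        double magnitude = Math.Sqrt(ax * ax + ay * ay + az * az);

        if (magnitude > ImpactThresholdG)
        {
            // On a real helmet this would go out over the site radio or BLE network.
            SendHelpSquawk(workerId, magnitude);
        }
    }

    private void SendHelpSquawk(string workerId, double gForce)
    {
        Console.WriteLine($"HELP: worker {workerId} took a {gForce:F1} g impact");
    }
}
```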

Different types of workers could pair the smart helmet with different types of specialized uniforms. A welder’s kit could include a smart welding visor attachment that automatically switched on an LCD filter in the visor when the welder started arc welding, and maybe a thermal imaging camera overlay so they could judge weld quality. A site foreman’s would focus on communication and logistics.

A crane operator could have an Oculus Rift type of visor that let them rotate around an augmented view of the crane’s payload, synthesized from camera views and other data, even when they couldn’t see the payload directly.

And with Bluetooth Low Energy beacons becoming a cheap and practical reality, your basic carpenter would finally be able to answer the age-old question: where did I leave my @#%$%$ hammer!
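
For the curious, the hammer-finding part is mostly just math. BLE beacons advertise a calibrated transmit power, and the standard log-distance path loss model turns a received signal strength reading into a rough distance. The numbers below (a -59 dBm reference at one metre and an environment factor of 2.5) are assumptions you’d have to calibrate on a real job site:

```csharp
using System;

// Rough distance estimate from a BLE beacon's received signal strength (RSSI),
// using the log-distance path loss model: d = 10 ^ ((TxPower - RSSI) / (10 * n)).
// calibratedTxPower is the RSSI expected at 1 metre; environmentFactor (roughly 2-4)
// accounts for the walls, equipment and bodies in the way.
public static class BeaconRanging
{
    public static double EstimateDistanceMetres(int rssi,
                                                int calibratedTxPower = -59,
                                                double environmentFactor = 2.5)
    {
        return Math.Pow(10.0, (calibratedTxPower - rssi) / (10.0 * environmentFactor));
    }
}

// Example: BeaconRanging.EstimateDistanceMetres(-75) works out to roughly 4.4 metres,
// i.e. the hammer is a few steps away, not back in the truck.
```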

I see a lot of posts about automated or self driving cars in my LinkedIn stream, probably because of all the QNX people I follow. One of the main areas the QNX operating system is used in is automotive, so that’s not surprising. And Google is also frequently in the news for their self driving cars, so the topic is really starting to get a lot of attention.

I have no doubt that we’ll eventually have cars that can drive themselves. We’ve seen them demonstrated, and it’s a really cool idea, but I still see a lot of problems with them.

The biggest problem I see is that they will need to deal with unpredictable humans who don’t follow the rules of the road. If we program self driving cars to ‘believe’ the turn signals on other cars, what will they do when they meet the stereotypical Sunday driver that forgets to turn off their turn signal?

What will they do when the car signals a lane change, but a rude driver won’t let them in?

I think it will be a long time before we allow computer driven cars without a human override. What I think makes sense, and what is already happening, is that we will first allow our cars to incorporate safety overrides for the things it makes sense to automate and that we humans can’t do quickly enough to do effectively or reliably. We’ll let them take over driving for us once there is no doubt that we’re in a situation we can’t deal with.

Call it ‘augmented driving’.

Two examples of that kind of thing are airbags and anti-lock brakes. Bracing for a collision is something we have always done; an airbag is just a fancy robotic collision bracing system that deploys (hopefully) only when needed. Pumping the brakes manually in slippery conditions is something we’ve always done; an anti-lock braking system just does it better than we ever could, and only when needed.

So I think the evolution from here to fully self driving cars will be very long, and will take place in incremental steps like that. We will start by instrumenting everything that’s important so that cars can sense when important things happen, and we will equip them with specialized information displays, annunciators, actuators and software to take over at the right time and override us.

The rear cameras that warn drivers about an obstacle when they are backing up are a great example of that, and so are the parallel parking systems being deployed in some cars. Collision avoidance radar will eventually be turned into a system that actively veers to avoid a collision for you, if someone hasn’t already done that.

I’m personally trying to promote the idea that smart cars should sense when some living thing has been left in a hot car during a heatwave and actually do something to prevent harm. That’s pretty low-hanging fruit.
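
The decision logic really is low-hanging fruit. Here’s a sketch, with completely hypothetical sensor and actuator names, since no production car exposes anything like this today:

```csharp
// Hypothetical hot-car guardian: a sketch of the decision logic only. The inputs
// (cabin temperature, occupant detection, engine state) and the actions (SoundAlarm,
// VentCabin) are stand-ins for whatever a real vehicle platform would provide.
public class HotCarGuardian
{
    private const double DangerTemperatureC = 35.0; // assumed danger threshold

    public void Check(double cabinTemperatureC, bool occupantDetected, bool engineRunning)
    {
        // Only worry about a parked car with something alive still inside.
        if (engineRunning || !occupantDetected)
            return;

        if (cabinTemperatureC >= DangerTemperatureC)
        {
            SoundAlarm(); // honk, flash, notify the owner's phone
            VentCabin();  // crack the windows or run the fans
        }
    }

    private void SoundAlarm() { /* wired to the horn and telematics in a real car */ }
    private void VentCabin()  { /* wired to the windows or HVAC in a real car */ }
}
```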

There is plenty more of that kind of assistive technology coming down the pipe very soon, but it’s a co-pilot we’re building in now, not a pilot.

It’s likely that for a while, there will be special cases where roads are built for the exclusive use of completely self driving cars and we may develop an infrastructure for commercial traffic that uses them. We will need ‘smart roads’ for the concept of self driving cars to be fully realized and it will take a long time for them to be built. It will likely start out being done in small islands of experimentation before it gets rolled out and adopted for all roads.

In the meantime, we will continue to augment the reality within human piloted vehicles with more and more information systems and technology. Big data will probably be used to collect information about how people in the real world actually drive and run it through some fancy AI – maybe IBM’s Watson – to come up with some kind of neural net ‘driving brain’ that could allow a machine to drive in much the same way that a typical human does, even on regular non-smart roads.

I just hope we don’t train them to leave their blinkers on.

So after my post about wearable computing the other day, a colleague from my time at QNX who had read it got in touch and asked if I wanted to try Google Glass myself. I was thrilled! Someone had actually read my post!

And yes.. it was a pretty cool chance to see Google Glass in person and try it out.

So I headed to Kanata to see my friend Bobby Chawla from RTeng.pro. He’s a consultant by day and a tinkerer in his spare time like myself, and he has a pair on loan from a friend in the US. He gave me a tour of Google Glass as we talked about what we’ve each been doing since we worked on the pre-BlackBerry QNX OS.

It turned out to be really easy to adapt to switching between focusing on what GLASS was displaying and looking at Bobby as we talked. The voice activation feature for the menu was self-explanatory, since every time Bobby told me “to activate the menu, say OK GLASS”, GLASS would hear him saying it and bring up the menu.

It aggressively turns off the display to save power, which does get in the way of exploring it, so I found myself having to do the wake up head gesture often, which is basically tossing your head back and then forward to level again – kind of like sneezing. I’m sure that will lead to a new kind of game similar to “Bluetooth or Crazy” – perhaps “Allergies or Glasshole”?

It could also cause issues where a waiter is asking if you want to try the raw monkey brains and you accidentally nod in agreement or bid on an expensive antique at an auction because you tried to access GLASS to find out more about it.

Between the voice activation, head gestures, and a touch/swipe sensitive frame, it’s pretty easy to activate and use the device but it certainly won’t be easy to hide the fact that you’re using GLASS from a mile away.

I didn’t have time to explore everything it had to offer in great detail, but what it has now is only the beginning. Clever folks like Bobby and others will come up with new apps for it and what I saw today is just a preview of what’s in store. In that sense, GLASS seems half empty at this point, until you realize that Google is handing it to developers and asking them to top it up with their own flavor of Glassware. If you have any ideas for something that would be a good application for it, I’m sure Bobby would love to hear from you.

I did get a chance to try out the GPS mapping feature, which I think relies on getting the data from his Android phone. We got in his car and he told me to ask it to find a Starbucks and away we went with GPS guidance and the usual turn by turn navigation.

The most surprising thing about them to me was that they don’t come with lenses. There is of course the projection screen, but that little block of glass is what gives them their name. They don’t project anything onto the lens of a pair of glasses from the block of glass, they project an image into the block of glass, and because it’s focused at infinity, it appears to float in the air – kind of/sort of/maybe.

So they work at the same time as a regular pair of glasses, more or less. They have a novel pair of nose grips mounted on legs long enough to let the device peacefully, if uneasily, co-exist with a typical pair of regular glasses or sunglasses.

There are two cameras in it – one that faces forward, and another that looks at your eye for some reason – perhaps to send a retinal scan to the NSA! You never know these days. Actually, the sensor looking at your eye detects eye blinks to trigger taking a picture among other things.

So would I get a pair of these and wear them around all the time – like that fad with the people who used to wear Bluetooth phones in their ears at the grocery store? No.. I don’t think so, but for certain people in certain roles, I can see them being invaluable.

Bouncers or security at nightclubs and other events could wear them, take photos of troublemakers, and immediately share that information with the other security people at the event so the troublemakers don’t get kicked out of one door and come back in through another.

I’m sure we’ll see mall cops using them as a way to record things they might need to document later for legal purposes like vandalism and shoplifting. Insurance investigators and real estate folks will surely derive value from having a device that can document and record a walk through of a location without having to carry a camera and audio recorder.

Any occupation that uses a still or video camera to gather documentary evidence is worth a look as a candidate for using GLASS, although it would be better if longer sections of video could be recorded. In some cases a real camera will still be needed, but as the saying goes with smartphones – the best camera is the one you have with you at the time.

GLASS doesn’t really do anything that a smartphone can’t already do. The main value proposition GLASS offers is a hands free experience and instant access. Some of the functionality even still requires you to carry a phone.  It’s definitely going to make selfies easier to take.

The penalty is that you look a bit like an idiot at this point, until fashion steps in and makes less obtrusive and obvious items with similar functionality.

My main takeaway on the experience is that if you ever want to piss off a GLASS user…. wait until just after they sneeze, and then say “OK GLASS.. google yahoo bing”

 

[EDIT] – I’ve since learned that GLASS does come with lenses; my friend’s relative just left them back in the US since he also wears glasses. I also learned that you can get prescription lenses made for them or buy sunglass lenses.

Wearable computing as an idea has been around for ages. I was watching Toys with Robin Williams on Netflix or somewhere, and it seemed to me that the concept really hasn’t moved forward since his musical suit. I was also reading an article recently that tried to argue that everything is in place for it to catch on except the fashion part, but that still didn’t really seem right to me either.

And then it kind of struck me. Wearable computing will never catch on with the masses. Nobody wants to ‘wear’ a computer, and there is already a schism developing between those who choose to do so (Glassholes) and people who resent and dislike where that branch of wearables leads.

But what people do want, and what will catch on, is pretty much the same technology but we won’t call it that.

I like to call it ‘functional clothing’ and we don’t need to wait to see if that will catch on because it’s already all around us and completely accepted in every culture. The functionality just doesn’t include any of the new fancy electronic or wireless stuff yet.

“Uniforms” are functional clothing, and we can already see that ‘wearable computing’ has been incorporated into some very specialized uniforms – military and police being the prime examples. But firemen, UPS delivery folks, meter readers and many other occupations already lug around lots of equipment that they need to do their jobs.

Imagine what McDonald’s could do by wiring their kitchens with Bluetooth Low Energy beacons and their staff with smart wearables woven into the uniforms. Forget that clunky headset the drive-thru attendant has to wear. Put Google Glass on the McDonald’s manager and now, no matter where they are in the kitchen, the display screen showing the orders is right there for them to see. As development costs come down, big companies will see the value in building wearables into the uniforms of their front line staff.

For my part, I’m going to get ahead of the curve and start working on machine washable computing. I suspect dry cleaning is about to make a comeback.

In any industry, each competitor seeks to distinguish themselves from the other competitors by providing something none of the others can offer. This is generally known as the ‘special sauce’ and I think I’ve found mine.

Most people have heard of the Arduino, but if you haven’t: it’s a very cheap and very capable electronic board that hooks up to a computer via USB cable and lets software read and control a huge range of real world devices – temperature, pressure, motion and other sensors, as well as motors, heaters, light dimmers and electrical switches.

My education is in process control, which is precisely the act of reading such sensors and controlling such devices so Arduinos are definitely my thing, but what is a Uniduino?

Well, a Uniduino is a library of C# code that allows the Arduino’s functionality to be controlled from within the Unity game engine.

What that means is that I can create a game world that gets information from real world devices and uses that information to control how something in the game looks and acts, and I can use input from the person playing the game to send information back through the Uniduino to make something happen in the real world.
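
To make that concrete, here’s a rough sketch of the first direction: a real-world sensor driving something in the game. I’m deliberately using a plain serial connection here rather than reproducing the Uniduino API itself, and assuming the Arduino is printing one analog reading per line on COM3 (and that the Unity project’s API compatibility level gives you System.IO.Ports):

```csharp
using System.IO.Ports;
using UnityEngine;

// A sketch only: reads the 0-1023 sensor value an Arduino prints over serial,
// and uses it to set the height of the object this script is attached to.
// The port name, baud rate and scaling are assumptions for illustration.
public class SensorDrivenObject : MonoBehaviour
{
    private SerialPort port;

    void Start()
    {
        port = new SerialPort("COM3", 9600) { ReadTimeout = 50 };
        port.Open();
    }

    void Update()
    {
        try
        {
            // Arduino side: Serial.println(analogRead(A0));
            string line = port.ReadLine();
            if (int.TryParse(line.Trim(), out int raw))
            {
                float height = raw / 1023f * 5f; // map 0-1023 to 0-5 world units
                transform.position = new Vector3(transform.position.x, height,
                                                 transform.position.z);
            }
        }
        catch (System.TimeoutException)
        {
            // No new reading this frame; keep the last position.
        }
    }

    void OnDestroy()
    {
        if (port != null && port.IsOpen) port.Close();
    }
}
```

Going the other way – a player action in the game writing a pin state back out to switch a relay or dim a light – is the same plumbing with the data flowing in reverse, which is exactly what makes the telepresence ideas below interesting.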

The game world can be running on a remote server, and the person(s) in the game world could be logged in remotely, so this is the core of some interesting telepresence applications. Throw in some remote video cameras and you could do some serious things.

Remote health care is already being explored with video technology. How could it benefit from the added element of a virtual world? What kind of devices could we make for chronic care patients to allow them to live their own lives while being connected to and within reach of health care intervention when they need it?

Remote monitoring of many industrial systems is already commonplace. How could this be extended by adding virtual or augmented reality? Fixed cameras monitor a lot of installations – what about a 3D overlay (underlay) on the video to tell the operator where things *should* be when they are normal?

If you’ve played any recent first person shooter video games, you know how far we have come in recreating experiences virtually. We manage to get by using non immersive flat panels crammed with abstracted information from sensors that try to tell us what is going on somewhere else with our machines.

The next generation of workers will be able to walk around virtually inside those environments and see, touch, feel and hear what is going on.

As any old time engineer on a ship or a train will tell you, they can ‘feel’ when something isn’t right with the machine. We’re not going to ‘feel’ anything about our machines until we start making the man-machine interface much more immersive than a bunch of numbers on a touchscreen.

I was looking at some artwork through a magnifying visor (like YOU don’t need reading glasses too) and I noticed that, up close, the pixels on my flat panel monitors are very discrete and distinct from each other, and it made me think about when I used to tune television sets (briefly) for a living.

Until flat panel monitors came along, and to a large extent even now, the video signal that forms the picture didn’t consist of discrete pixels. It was an analog signal, and there were all kinds of flaws and limitations inherent in trying to jam so much information into so little bandwidth.

Take the herringbone effect for example, or oversaturation buzz. Because of how color, grayscale and audio information was all encoded into the same signal, if someone wore a black and white pinstripe pattern it would bloom into a rainbow-colored moiré effect, and could also cause crackling and buzzing effects to creep into the audio channels.

We all remember the ads with the giant bright white lettering on a yellow background that caused the TV to emit an annoying buzz when they flashed on. OK.. some of us remember it?

There are actually laws in place that require broadcasters to watch out for and eliminate some of these effects or face fines because if they are severe enough they can actually cause interference with other television channels or even different kinds of receivers like police or taxi radios etc.

We marvel at modern technology and our JPEG and MPEG and PNG file formats, but they are all rooted in the advances television broadcast engineers made in the 1950s, when they figured out how to squeeze color information into the existing black and white signal.

That color signal is also one of the earliest examples of people worrying about backward compatibility. A black and white television made before color television was ever invented was still capable of displaying a broadcast television signal right up until analog broadcasting was recently dropped.

To get back to my search for a point: tuning an old analog television set was a frustrating back-and-forth process that could involve a dozen or more adjustments, all of which interacted with each other and had to be repeated for several iterations, with more delicate adjustments each time through.

It also involved a fair bit of nerve: reaching inside live, plugged-in, turned-on television sets with a screwdriver while watching the picture on the front in a mirror, and trying not to think about the giant “NO USER SERVICEABLE PARTS INSIDE” and “DANGER – HIGH RISK OF ELECTRIC SHOCK” signs on the back cover of the television.

And it struck me that agile methods of web development are the modern equivalent of setting up the picture controls and alignment inside an old analog TV set.