Oracle AppsLab

Driving Innovation

Conference Recaps and Such

Fri, 2015-03-27 09:28

I’m currently in Washington D.C. at Oracle HCM World. It’s been a busy conference; on Wednesday, Thao and Ben ran a brainstorming session on wearables as part of the HCM product strategy council’s day of activities.


Then yesterday, the dynamic duo ran a focus group around emerging technologies and their impact on HCM, specifically wearables and Internet of Things (IoT). I haven't got a full download of the session yet, but I hear the discussion was lively. They didn't even get to IoT, sorry Noel (@noelportugal).

I’m still new to the user research side of our still-kinda-new house, so it was great to watch these two in action as a proverbial fly on the wall. They’ll be doing similar user research activities at Collaborate 15 and OHUG 15.

If you’re attending Collaborate and want to hang out with the OAUX team and participate in a user research or usability testing activity, hit this link. The OHUG 15 page isn’t up yet, but if you’re too excited to wait, contact Gozel Aamoth, gozel dot aamoth at oracle dot com.

Back to HCM World, in a short while, I’ll be presenting a session with Aylin Uysal called Oracle HCM Cloud User Experiences: Trends, Tailoring, and Strategy, and then it’s off to the airport.

Earlier this week, Noel was in Eindhoven for OBUG Experience 2015. From the pictures I’ve seen, it was a fun event. Jeremy (@jrwashley) not only gave the keynote, but he found time to hang out with some robot footballers.


Check out the highlights:

Busy week, right? Next week is more of the same as Noel and Tony head to Modern CX in Las Vegas.

Maybe we’ll run into you at one of these conferences? Drop a comment.

In other news, as promised last week, I updated the feed name. Doesn’t look like that affected anything, but tell your friends just in case.

Update: Nope, changing the name totally borks the old feed, so update your subscription if you want to keep getting AppsLab goodness delivered to your feed reader or inbox.

Time to Update the Feed

Thu, 2015-03-19 11:40

For those of you who enjoy our content via the feed (thank you), I have news.

Next week, I’ll be changing the feed’s name, so if you want to continue to receive AppsLab goodness in your feed reader of choice or in your inbox, you’ll need to come back here and subscribe again.

Or maybe it’s time to switch over to our Twitter (@theappslab) or Facebook Page, if that’s your thing. I did nuke the Google+ Page, but I doubt anyone will notice it’s gone.

Nothing else has changed.

OAUX Tidbits

Wed, 2015-03-18 10:32

Here come some rapid fire tidbits about upcoming and recently past Oracle Applications User Experience (@usableapps) events.

Events of the Near Past

Laurie Pattison’s (@lsptahoe) team (@InnovateOracle) has been organizing events focused around stimulating and fostering innovation for quite some time now.

I’ve always been a big fan of group-think-and-work exercises, e.g. design jams, hackathons, ShipIts, code sprints, etc.

Our team frequently participates in and supports these events, e.g. Tony O was on a team that won a couple awards at the Future of Information design jam back in early February and John and Julia served as mentors at the Visualizing Information design jam a few weeks ago.

You may recall Julia’s visualization analysis and thinking; John has an equally informative presentation, not yet shared here, but we can hope.

Watch Laurie’s blog for information about more innovation events.

Events of the Near Future

It’s conference season again, and we’ll be bouncing around the globe spreading our emerging technologies user experience goodness.

Fresh off a hot session at UTOUG (h/t OG Friend of the ‘Lab Floyd) and gadget-hounding at SXSW Interactive, Noel (@noelportugal) will be in Eindhoven, the Netherlands for the Oracle Benelux User Group Experience Day, March 23 and 24.

Our fearless leader, Jeremy Ashley (@jrwashley) will be there as well giving the opening keynote. Bob Rhubart (@OTNArchBeat) recorded a video to tell you all about that. Check it out here:

While Noel enjoys Europe, I’ll be in Washington D.C. speaking at Oracle HCM World, along with Thao and Ben.

After that, we’ll have boots on the ground at Oracle Modern CX and Collaborate 15 in Las Vegas. Stay tuned for more, or if you’ll be at any conferences during Conference Season 2015 and wonder if OAUX will be there, check out our Events page.

Update: Here's what OAUX will be doing at Collaborate 15. If you're attending, come by and say hello.

Three Weeks with the Nike+ Fuelband SE

Wed, 2015-03-11 11:42

I don’t like wearing stuff on my wrist, but in my ongoing quest to learn more about the wearables our users wear, I have embarked on a journey.

For science! And for better living through math, a.k.a. the quantified self.

And because I’ll be at HCM World later this month talking about wearables, and because wearables are a thing, and we have a Storify to prove it, and we need to understand them better, and the Apple Watch is coming (squee!) to save us all from our phones and restore good old face time (not that Facetime) and and and. Just keep reading.

Moving on, I just finished wearing the Nike+ Fuelband SE for three weeks, and today, I’m starting on a new wearable. It’s a surprise, just wait three weeks.

Now that I’ve compiled a fair amount of anecdotal data, I figured a loosely organized manifest of observations (not quite a review) was in order.

The band

The Fuelband isn’t my first fitness tracker; you might recall I wore the Misfit Shine for a few months. Unlike the minimalist Shine, the Fuelband has quite a few more bells and whistles, starting with its snazzy display.

Check out a teardown of the nifty little bracelet; there's some pretty impressive stuff inside, not bad for a shoe and apparel company.

I’ve always admired the design aspects of Nike’s wearables, dating back to 2012 when Noel (@noelportugal) first started wearing one. So, it was a bit sad to hear about a year ago that Nike was closing that division.

Turns out the Fuelband wasn’t dead, and when Nike finally dropped an Android version of the Nike+ Fuelband app, I sprang into action, quite literally.

Anyway, the band didn’t disappoint. It’s lightweight and can be resized using a nifty set of links that can be added or removed.


The fit wasn’t terribly tight, and the band is surprisingly rigid, which eventually caused a couple areas on my wrist to rub a little raw, no biggie.

The biggest surprise was the first pinch I got closing the clasp. After a while, it got easier to close and less pinchy, but man that first one was a zinger.

The battery life, something I initially worried about, was good, lasting about a week per full charge. Nike provides an adapter cord, but the band's clasp can be plugged directly into a USB port, which is a cool feature, albeit a bit awkward looking.

It’s water-resistant too, which is a nice plus.

Frankly, the band is very much the same one that Noel showed me in 2012, and the lack of advancement is one of the complaints users have had over the years.

The app and data

Entering into this, I fully expected to be sucked back into the statistical vortex that consumed me with the Misfit Shine, and yeah, that happened again. At least, I knew what to expect this time.

Initial setup of the band requires a computer and a software download, which isn’t ideal. Once that was out of the way, I could do everything using the mobile app.

The app worked flawlessly, and it looks good, more good design from Nike. I can’t remember any sync issues or crashes during the three-week period. Surprising, considering Nike resisted Android for so long. I guess I expected their foray into Android to be janky.

I did find one little annoyance. The app doesn’t support the Android Gallery for adding a profile picture, but that’s the only quibble I have.

Everything on the app is easily figured out; there’s a point system, NikeFuel. The band calculates steps and calories too, but NikeFuel is Nike’s attempt to normalize effort for specific activities, which also allows for measurement and competition among participants.

The default NikeFuel goal for each day is 2,000, a number that can be configured. I left it at 2,000 because I found that to be easy to reach.

The app includes Sessions too, which allow the wearer to specify the type of activity s/he is doing. I suppose this makes the NikeFuel calculation more accurate. I used Sessions as a way to categorize and compare workouts.

I tested a few Session types and was stunned to discover that the elliptical earned me less than half the NikeFuel that running on a treadmill earned for the same duration.


Update: Forgot to mention that the app communicates in real time with the band (vs. periodic syncing), so you can watch your NikeFuel increase during a workout, pretty cool.

Overall, the Android app and the web app at nikeplus.com are both well-done and intuitive. There’s a community aspect too, but that’s not for me. Although I did enjoy watching my progress vs. other men my age in the web app.

One feature the Fuelband is missing, at least compared to its competition, is sleep tracking. I didn't really miss this at first, but now that I have it again, with the surprise wearable I'm testing now, I'm realizing I want it.

Honestly, I was a bit sad to take off the Fuelband after investing three weeks into it. Turns out, I really liked wearing it. I even pondered continuing its testing and wearing multiple devices to do an apples-to-apples comparison, but Ultan (@ultan) makes that look good. I can’t.

So, stay tuned for more wearable reviews, and find me at HCM World if you’re attending.

Anything to add? Find the comments.

Development on Windows 8.1 Phone and Tablet

Mon, 2015-03-09 14:59

This is a follow up to my previous post ("Where are the Mobile Windows Devices?") in which I gave my initial impressions of mobile Windows devices. As part of our assessment of these devices we also developed a few apps, and this post details how that went.

Getting Started

Windows Phone 8.1 applications have to be developed on Windows 8.1.  I am using a Mac so I installed Windows 8.1 Enterprise Trial (90-day Free Trial) in a Parallels VM.  In order to run the Phone Emulator (which is also a VM and so I was running a VM in a VM), I had to enable Nested Virtualization in Parallels.

Development is done in Visual Studio; I don't think you can use any other IDE. You can download a version of Visual Studio Express for free.

Finally, you’ll need a developer license to develop and test a Windows Store app before the Store can certify it. When you run Visual Studio for the first time, it prompts you to obtain a developer license. Read the license terms and then click I Accept if you agree. In the User Account Control (UAC) dialog box, click Yes if you want to continue. It was $19 for a developer license.

Development

There are 2 distinct ways to develop applications on the Windows Platform.

Using the Windows Runtime (WinRT)

Applications built with WinRT are called "Windows Runtime apps," and there are 2 types of these:

  • “Windows Phone Store apps” are WinRT apps that run on the Windows Phone.
  • "Windows Store apps" are WinRT apps that run on a Windows device such as a PC or tablet.

What's really cool is that Visual Studio provides a universal Windows app template that lets you create a Windows Store app (for PCs, tablets, and laptops) and a Windows Phone Store app in the same project. When your work is finished, you can produce app packages for the Windows Store and Windows Phone Store with a single action to get your app out to customers on any Windows device. These applications can share a lot of their code, both business logic and presentation layer.

Even better, you can create Windows Runtime apps using the programming languages you're most familiar with, like JavaScript, C#, Visual Basic, or C++. You can even write components in one language and use them in an app that's written in another language. Windows Runtime apps can use the Windows Runtime, a native API built into the operating system. This API is implemented in C++, and bindings (called "projections") are created for JavaScript, C#, Visual Basic, and C++ in a way that feels natural for each language.
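To give a flavor of what calling the projected API looks like from JavaScript, here is a minimal sketch (not from our test apps) that pops a toast notification straight through the Windows.* namespace:

    // JavaScript Windows Runtime app: call the native WinRT API directly.
    // The Windows.* namespace is projected into JavaScript by the OS.
    var notifications = Windows.UI.Notifications;

    // Grab the simple one-line toast template and fill in its text node.
    var template = notifications.ToastTemplateType.toastText01;
    var content = notifications.ToastNotificationManager.getTemplateContent(template);
    content.getElementsByTagName("text")[0].appendChild(
        content.createTextNode("Hello from the Windows Runtime"));

    // Show the toast; no plugin or WebView bridge involved.
    var toast = new notifications.ToastNotification(content);
    notifications.ToastNotificationManager.createToastNotifier().show(toast);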

Note that this is very different from the PhoneGap/Cordova approach, which also lets you write apps in JavaScript. Universal Windows Apps do not run in a UIWebView/WebView; they are native applications for which (some of) the application logic gets run through the JavaScript engine. This means that they do not suffer from the challenges we face with PhoneGap/Cordova (you can't use cutting edge features, performance issues, etc.), yet you still get the benefits of using the language you are already familiar with.

This also allows you to use existing JavaScript libraries and CSS templates, no porting required. You can even write one app using multiple languages, leveraging the dynamic nature of JavaScript for app logic while leveraging languages like C# and C++ for more computationally intensive tasks.

Traditional (Not using the WinRT)

Applications that do not use the WinRT are called Windows desktop apps and are executables or browser plug-ins that run in the Windows desktop environment. These apps are typically written against the Win32 and COM, .NET, WPF, or Direct3D APIs. There are also Windows Phone Silverlight apps, which are Windows Phone apps that use the Windows Phone Silverlight UI Framework instead of the Windows Runtime and can be sold in the Windows Phone Store.

Deployment

To deploy to my device I had to first “developer unlock” my phone (instructions).

Deployment is a breeze from Visual Studio: just hook up your phone, select your device and hit deploy. The application gets saved to your phone and it opens. It appears in the apps list like all other apps. You can also "side-load" applications onto other Windows machines for testing purposes; just package your application up in Visual Studio, put it on a USB stick, stick it in the other tablet/PC and run the install script created by the packaging process.

I created 2 simple applications, one a C# Universal Application and one a JavaScript/CSS3/HTML5 Universal Application. I was able to deploy and run both on a tablet, desktop and phone without any problem. They were very simple applications, but I could not see any performance difference between the C# application and the JS application.

Additional Findings

For the best user experience when developing Universal Apps using JS/HTML5/CSS3, you should develop Single Page Applications (SPAs). This ensures there are no weird "page loads" in the middle of your app running. Users will not expect this from their applications; remember, these are universal apps and could be run by a user on his desktop.

State can be easily shared between devices by automatically roaming app settings and state, along with Windows settings, between trusted devices on which the user is logged in with the same Microsoft account.
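From JavaScript that is a one-liner against the same WinRT API; a minimal sketch (the key name is made up):

    // Roaming settings sync automatically across devices signed in with
    // the same Microsoft account; nothing else to wire up.
    var roaming = Windows.Storage.ApplicationData.current.roamingSettings;
    roaming.values["lastOpenedItem"] = "opportunity-42"; // written on one device
    var restored = roaming.values["lastOpenedItem"];     // readable on another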

Applications on the Windows App Store come with built-in crash analytics. This is one of the valuable services you get in exchange for your annual registration with the Store; no need to build it yourself.
Conclusion

As a JavaScript developer myself I am extremely excited by the fact that I can develop native applications on the Windows Platform using tools that I am already familiar with. Furthermore, with Windows 10 it seems that Microsoft is doubling down on Universal Apps, and with that OS upgrade, my JavaScript apps can soon also be deployed to the HoloLens, Surface Hub, and IoT devices like the Raspberry Pi 2!

Where are the Mobile Windows Devices?

Mon, 2015-03-09 09:27

That was one of the questions one of Oracle's executives asked when we presented our new Cloud UX Lab. The short answer was that there were none. As far as I am aware, we never did any testing of any of our prototypes and applications on Windows phones or tablets because, frankly, we thought it didn't matter. Windows phones (and tablets) are a distant third to the 2 behemoths in this space, Android and iOS, and even lost market share in 2014 (2.7%) compared to 2013 (3.3%), according to IDC. However, they are predicted to do better in the years ahead (although these predictions have been widely off in the past), and it seems that there is some pressure from our Enterprise Apps customers to look at the Windows Mobile platform, hence the question. Never afraid of a challenge, we ordered a Surface Pro 3 and a Nokia Lumia 1520, used them for a few weeks, ran some tests, wrote some apps and jotted down our findings, leading to this blog post.

Initial impressions Surface Pro 3

I'm going to be short about the Surface Pro 3: it's basically a PC without a physical keyboard (although you can get one if you want) but with a touch screen and a stylus. It even runs the same version of Windows 8.1 as your PC. I must admit that the Tiles seem more practical on the tablet than on a PC, but I could do without the constant reminders to "upgrade Windows" and "upgrade Defender," complete with mandatory reboots, just like on your PC. The most infuriating part is that the virtual keyboard does not automatically pop up when you tap on an input field, just like on your PC, which doesn't have the concept of a virtual keyboard. Instead you have to explicitly open it to be able to type anything.

Fortunately, there are some advantages too, e.g. anything that runs on your Windows PC probably will run fine on the Windows tablet, confirmed by our tests.  It has a USB 3.0 port that works just like … a USB port.  Plug in a USB Drive and you can instantly access it, just like on your PC, quite handy for when you have to side-load applications (more on that in a later post).

The whole package is also quite pricey, similar to a premium laptop. It's more of a competitor for the likes of Apple's MacBook Air than the iPad, I think. I'm thinking people who try to use their iPads as little laptops are probably better off with this.

Lumia 1520

The phone on the other hand is a different beast. The Windows 8.1 Phone OS, unlike the tablet version, is a smartphone OS. As such, it has none of the drawbacks that the tablet displayed. My first impression of the phone was that it is absolutely huge. It measures 6 inches across and dwarfs my iPhone 6, which I already thought was big. It's even bigger than the iPhone 6+ and the Samsung Galaxy Note 4. My thumb can reach less than 50% of the screen; this is not a phone you can handle with one hand.


iPhone 4S vs iPhone 6 vs Lumia 1520

Initial setup was relatively quick. It comes "preinstalled" with a bunch of apps, although they are not really installed on the phone yet; they get installed on first boot. It took about 10-15 minutes for all the "preinstalled" phone apps to be installed.

The screen is absolutely gorgeous, with bright colors and supremely fine detail, courtesy of a 367ppi AMOLED ClearBlack screen. It also performs very well outside, in bright light. It has an FM radio which uses your headphone cable as the antenna (no headphones, no radio), a USB port and a microSD port. It also has a dedicated, two-stage camera shutter button. There's no physical mute button though. The Tiles work really well on the phone. They are much easier to tap than the app icons on either Android or iOS, and you can resize them.

I tried installing the same apps as I have on my iPhone, but this was unfortunately where I hit my first giant snag.  I knew the ecosystem was underdeveloped compared to Android and iOS, but I didn’t know it was this bad.  Staples on my iPhone like Feedly, Flickr, VLC, Instapaper and Pocket don’t exist on the Windows Phone platform.  You also won’t find a dedicated app to listen to your Amazon Prime music or watch your movies.  If you want to watch the latest exploits of the Lannisters, you are also going to have to do it on another device, no HBO Go or XFinity on the Windows Phone.  There is also no version of Cisco VPN, which means it’s a non-starter for Oracle employees as that is the only way to access our intranet.  Weirder still, there is no Chrome or Firefox available on Windows Phones, which means I had to do all my testing on the version of IE that came with the phone (gulp!).

Impressions after a week of usage

I used the Lumia as my main phone for a week (poor pockets); I just popped the micro SIM card from my iPhone into the Lumia and it worked. I really got hooked on the constantly updating Live Tiles. News, stock prices, weather, calendar notifications, Facebook notifications, etc. get pushed straight to my main screen without having to open any apps. I can glance and drill down if I want to, or just ignore them. They are a little bit of a distraction with their constant flipping motion, but overall very cool.

The other thing that was very noticeable was that the top notification bar is actually transparent, so it doesn't seem like you lose that part of your screen. I liked that.

The Windows Store has a try-before-you-buy feature, something that would be a godsend on the iPhone: my kids love to buy games and then drop them within a day never to be used again.  You can also connect the Windows Phone to your XBox One and use it as an input device/remote control.

Another feature that I highly appreciated, especially as a newbie to the Windows Phone, was the smart learning notifications (not sure if that is the official name). Rather than dumping all the help information on you when you open the app for the first time, the phone seems to monitor what you do and how you do it. If there is a better/easier way of doing that task, after repeated use, it will let you know, in a completely non-condescending way, that "You are doing it wrong." This seems to be a much better approach, because if you tell me the first time I use the app how to use all its features, I will forget by the time I actually want to use a feature, or worse, I might never use that feature, so now you wasted my time telling me about it.

As for overall performance, there was some noticeable "jank" in the phone's animations; it just didn't feel as buttery smooth as the iPhone 6.

The camera

The camera really deserves its own chapter. The 1520 is the sister phone of the Lumia 1020, which has a whopping 41 megapixel image sensor. The 1520 has to make do with 20 megapixels, but that is still at least double what you find in most smartphones. Megapixel count isn't everything, but it does produce some wonderful pictures. One of the reasons that Nokia went with these large sensors is that they wanted to support better zooming. Because you can't optically zoom with a phone camera (you'd need a much bigger lens for that), a phone does digital zooming, which typically leads to a pixelated mess when you zoom in. Unless of course you start with a very high resolution image, which is what Nokia did.

One of the interesting features of the photo app is that it supports "lenses." These are plugins you can install in the photo app that add features not available out-of-the-box. There are dozens of these lenses; it's basically an app store in an app, adding features like (Instagram) filters, 360 shots, panoramic pictures, etc. One lens promises to make you look better in selfies (it didn't work on me). One really neat lens is Nokia's "Refocus" lens that brings a Lytro-like variable depth of field to your phone, and it works great too.


In the same lens app you can also filter out all colors except for the object you click on, called “color pop,” so you get this effect:


Color pop in action

In the app, you can keep clicking on other objects (e.g. the table) to pop their color.

Other than the 20 megapixel sensor, the phone is also equipped with a top notch Carl Zeiss lens.  The phone has a physical, dedicated, two-stage shutter button, half-press for focus and full press for taking the picture.  It also has a larger-than-usual degree of manual control. You’ll find the usual settings for flash mode, ISO, white balance and exposure compensation but also parameters for shutter speed and focus. The latter two are not usually available on mobile phones.  The camera also performs really well in low light conditions.

Summary

I like the phone and its OS, and I really like the camera. The Tiles also work really well on a phone. I dislike the performance, the size and the lack of applications; the latter is a deal-breaker for me. I had some trepidation about going cold turkey on the Windows Phone for a week, but it turned out alright. However, I was happy to switch back to my iPhone 6 at the end of the week.
I'm a bit more on the fence about the tablet. If you get the physical keyboard, it might work out better, but then you basically have a laptop, so I'm not sure what the point is. The fact that it runs Windows has its advantages (everything runs just as on Windows) and disadvantages (keyboard issues).

I can’t wait to get my hands on Windows 10 and a HoloLens :-)

Happy Coding!

Mark.

Automatic: Nice, but Not Necessary

Fri, 2015-02-20 14:35

Editor’s note: Here’s the first post from one of our newish team members, Ben. Ben is a usability engineer with a PhD in Cognitive Psychology, and by his own account, he’s also a below average driver. Those two factoids are not necessarily related; I just don’t know what his likes and dislikes are so I’m spit-balling.

Ben applied his research chops to himself and his driving using Automatic (@automatic), a doodad that measures your driving and claims to make you a better driver. So, right up his alley.

Aside from the pure research, I’m interested in this doodad as yet another data collector for the quantified self. As we generate mounds of data through sensors, we should be able to generate personal annual reports, a la Nicholas Felton, that have recommended actions and tangible benefits.

Better living through math.

Anyway, enjoy Ben’s review.

When I first heard about Automatic (@automatic), I was quite excited—some cool new technology that will help me become a better driver. The truth is, I’m actually not a big fan of driving. Which is partly because I know I’m not as good of a driver as I could be, so Automatic was a glimmer of hope that would lead me on the way to improving my skills.

Though I will eagerly adopt automated cars once they’re out and safe, the next best thing is to get better so I no longer mildly dread driving, especially when I’m conveying others. And one issue with trying to improve is knowing what and when you’re doing something wrong, so with that in mind (and for enterprise research purposes), I tried out Automatic.

Automatic is an app for your phone plus a gadget (called the Link) that plugs into your car's diagnostics port; together they give you feedback on your driving and provide various ways to look at your trip data.

Automatic Link

The diagnostics port the Link plugs into is the same one that your mechanic uses to see what might be wrong when your check engine light is ominously glaring on your dashboard. Most cars after 1996 have these, but not all data is available for all cars. Mine is a 2004 Honda Civic, which doesn’t put out gas tank level data, meaning that MPG calculations may not be as accurate as they could be. But it still calculates MPG, and it seems to be reasonably accurate. I don’t, however, get the benefit of “time to fuel up” notifications, though I do wonder how much of a difference those notifications make.

The Link has its own accelerometer, so, combined with the data from the port and paired with your phone via Bluetooth, it can tell you about your acceleration, distance driven, your speed, and your location. It can also tell you what your "Check Engine" light means, and send out some messages in the event of a crash.

It gives three points of driving feedback: if you accelerate too quickly, brake too hard, or go over 70 mph. Each driving sin is relayed to you with its own characteristic tones emitted from the Link. It’s a delightful PC speaker, taking you way back to the halcyon DOS days (for those of you who were actually alive at the time). It also lets you know when it links up with your phone, and when it doesn’t successfully connect it outputs a sound much like you just did something regrettable in a mid-’80s Nintendo game.

App screenshot

One of the main motivators for the driving feedback is to save gas—though you can change the top speed alert if you’d like. From their calculations, Automatic says 70 mph is about as fast as you want to go, given the gas-spent/time-it-will-take-to-get-there tradeoff.

Automatic web dashboard

Another cool feature is that it integrates with IFTTT (@ifttt), so you can set it up to do things like: when you get home, turn the lights on (if you have smart lights); or when you leave work, send a text to your spouse; or any other number of things—useful or not!

Is It Worth It?

The big question is, is it worth $99? It’s got a great interface, a sleek little device, and a good number of features, but for me, it hasn’t been that valuable (yet). For those with the check engine light coming up, it could conceivably save a lot of money if you can prevent unnecessary service on your car. Fortunately, my Civic has never shown me the light (knock on wood), though I’ll probably be glad I have something like Automatic when it does.

I had high hopes for the driver feedback, until I saw that it’s actually pretty limited. For the most part, the quick acceleration and braking are things I already avoided, and when it told me I did them, I usually had already realized it. (Or it was a situation out of my control that called for it.) A few times it beeped at me for accelerating where it didn’t feel all that fast, but perhaps it was.

I was hoping the feedback would be more nuanced and could allow me to improve further. The alerts would be great for new drivers, but don't offer a whole lot of value to more experienced drivers—even those of us who would consider ourselves below average in driving skill (putting me in an elite group of 7% of Americans).

The Enterprise Angle

Whether it’s Automatic, or what looks like might be a more promising platform, Mojio (@getmojio), there are a few potentially compelling business reasons to check out car data-port devices.

One of the more obvious ones is to track mileage for work purposes—it gives you nice readouts of all your trips, and allows you to easily keep records. But that’s just making it a little easier for an employee to do their expense reports.

The most intriguing possibility (for me) is for businesses that manage fleets of regularly driven vehicles. An Automatic-like device could conceivably track the efficiency of cars/trucks and drivers, and let a business know if a driver needs better training, or if a vehicle is underperforming or might have some other issues. This could be done through real-time fuel efficiency, or tracking driving behavior, like what Automatic already does: hard braking and rapid acceleration.
If a truck seems to be getting significantly less mpg than it should, they can see if it needs maintenance or if the driver is driving too aggressively. Though trucks probably get regular maintenance, this kind of data may allow for preventive care that could translate to savings.

This kind of tracking could also be interesting for driver training, examining the most efficient or effective drivers and adopting an “Identify, Codify, Modify” approach.

Overall

I’d say this technology has some interesting possibilities, but may not be all that useful yet for most people. It’s fun to have a bunch of data, and to get some gentle reminders on driving practices, but the driver improvement angle from Automatic hasn’t left me feeling like I’m a better driver. It really seems that this kind of technology (though not necessarily Automatic, per se) lends itself more to fleet management, improving things at a larger scale.

Stay tuned for a review of Mojio, which is similar to Automatic, but features a cellular connection and a development platform, and hence more possibilities.

Fun with an Android Wear Watch

Tue, 2015-02-03 15:46

A couple days ago, I was preparing to show some development work Luis (@lsgaleana) did for Android Wear using the Samsung Gear Live.

One of the interesting problems we’ve encountered lately is projecting our device work onto larger screens to show to an audience. I know, bit of a first world problem, which is why I said “interesting.”

At OpenWorld last year, I used an IPEVO camera to project two watches, the Gear Live and the Pebble, using a combination of jewelry felt displays. That worked OK, but the contrast differences between the watches made it a bit tough to see them equally well through the camera.

Plus, any slight movement of the table, and the image shook badly. Not ideal.

Lately, we haven’t been showing the Pebble much, which actually makes the whole process much easier because . . . it’s all Android. An Android Wear watch is just another Android device, so you can project its image to your screen using tools like Android Screen Monitor (ASM) or Android Projector.

Of course, as with any other Android device, you’ll have to put the watch into debugging mode first. If you’re developing for Android Wear, you already know all this, and for the rest of us, the Android Police have a comprehensive how-to hacking guide.

For my purposes, all I needed to do was get adb to recognize the watch. Here are the steps (h/t Android Police):

  • Tap on Wear’s watch face to get a menu of options. Be sure to hit the watch face instead of a notification card.
  • Scroll down the list of options and select Settings.
  • Open About, which is the last option in the list.
  • Find Build number and tap on it seven times, and you’ll get the “You are now a developer!” message.
  • Swipe right (to go back) to the Settings menu.
  • Open Developer options, which is now the last option in the list.
  • Find and set ADB debugging to Enabled.
  • Tap the checkmark button to confirm.

Now, when I need to show a tablet app driving the Wear watch, I can use adb and ASM to show both screens on my Mac, which I can then project. Like so.


Bonus points: the iPod Touch in that screen is projected using a new feature for QuickTime in OS X Yosemite that works with iOS 8 devices.


Stories Are the Best, Plus News on Nest!

Wed, 2015-01-28 14:21

Friend of the ‘Lab, Kathy, has been using Storify for a while now to compile easy-to-consume, erm, stories about the exploits of Oracle Applications User Experience (@usableapps).

You might remember Storify from past stories such as the In the U.K.: Special events and Apps 14 with UKOUG and Our OpenWorld 2014 Journey.

Anyway, Kathy has a new story, The Internet of Things and the Oracle user experience, which just so happens to feature some of our content. If you read here with any regularity or know Noel (@noelportugal), you'll know we love our internet-connected things.

So, check out Kathy’s story to get the bigger picture, and hey, why not read all the stories on the Usableapps Storify page.

And bonus content on IoT!

Google keeps making the Nest smarter and marginally, depending on your perspective, more useful. In December, a Google Now integration rolled out, pairing a couple of my favorite products.

It's more gimmick than useful feature, at least for me; I ran into issues with the NLP on commands, as you can see:


Saying "set the temperature to 70 degrees" frequently results in an interpretation of 270 degrees. It works fine if you don't say "to" there. As Google Now becomes a more effective assistant, this integration will be more useful, I've no doubt.

Then, at CES, Nest announced partnerships that form a loose alliance of household appliances. It may take a big player like Nest (ahem, Google) to standardize the home IoT ecosystem.

And just this week, Misfit announced a partnership with Nest to allow their fitness tracker, the one I used to wear, to control the Nest. I’m tempted to give the Shine another go, but I’m worried about falling back into a streak-spiral.

Thoughts on IoT? Nest? Ad-supported world domination? You know what to do.

BusinessTown

Fri, 2015-01-23 12:53

Maybe you remember Busytown, Richard Scarry’s famous town, from your childhood or from reading it to your kids.

Tony Ruth has created the Silicon Valley equivalent, BusinessTown, (h/t The Verge) populated by the archetypes we all know and sometimes love. What do the inhabitants of BusinessTown do? “What Value-Creating Winners Do All Day,” natch.


Who's up for a Silicon Valley marathon?

Mash up Oracle Cloud Application Web Services with Web APIs and HTML5 APIs

Thu, 2015-01-22 13:48

No more an “honorary” but now a full-blown member of the AppsLab team, I gave a presentation at the Chicago & Dubai Oracle Usability Advisory Board in November on REST and Web APIs and how they can facilitate the transition from on-premise software to cloud-based solutions (the content of which can be fodder for a future post).

As we all are transitioning from on-premise implementations to cloud-based solutions, there seems to be a growing fear among customers and partners (ISV, OEM) alike that they will lose the capability to extend these cloud-based applications.  After all, they do not have access to the server anymore to deploy and run their own reports/forms/scripts.

I knocked together a very simple JavaScript client-side application as part of my presentation to prove my point, which was that (well-designed) REST APIs and these JavaScript frameworks make it trivial to create new applications on top of existing backend infrastructure and add functionality that is not present in the original application.

My example application is based on existing Oracle Sales Cloud Web Services. I added the capability to tweet, send text messages (SMS) and make phone calls straight from my application, and to speech-enable the UI. Although you can debate the usefulness of how I am using some of these features, that was obviously not the purpose of this exercise.

Instead, I wanted to show that, with just a few lines of code, you can easily add these extremely complex features to an existing application. When was the last time you wrote a bridge to the Public Switched Telephone Network or a Speech synthesizer that can speak 25 different languages?

Here’s a 40,000 foot view of the architecture:


High level view of Demo APP Architecture

The application itself is written as a Single Page Application (SPA) in plain JavaScript.  It relies heavily on open source JavaScript libraries that are available for free to add functionality like declarative DOM binding and templating (knockout.js), ES6 style Promises (es6-promise.js), AMD loading (require.js) etc.  I didn’t have to do anything to add all this functionality (other than including the libraries).

It makes use of the HTML5 Speech Synthesis API, which is now available in most modern browsers to add Text-to-Speech functionality to my application.  I didn’t have to do anything to add all this functionality.

I also used the Twitter APIs to be able to send tweets from my application and the Twilio APIs to be able to make phone calls and send SMS text messages from my application.  I didn’t have to do anything to add all this functionality.  Can you see a theme emerging here?

Finally I used the Oracle Sales Cloud Web Services to display all the Business Objects I wanted in my application: Opportunities, Interactions and Customers. As with the other pieces of functionality, I didn't have to do anything to add this functionality!

You basically get access to all the functionality of your CRM system through these web services where available, i.e. not every piece of functionality is exposed through web services.

Note that I am not accessing the Web Services directly from my JS; I go through a proxy server in order to adhere to the browser's same-origin policy restrictions. The proxy also decorates the Oracle Applications SOAP Services as REST end-points. If you are interested in how to do this, you can have a look at mine; it's freely available.
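To give you an idea of its shape, here is a minimal sketch of such a proxy in Node with Express (not the actual implementation; the host and route names are made up, and the real version also translates SOAP responses into REST/JSON):

    // Minimal CORS-friendly pass-through proxy: the SPA calls /sales/...,
    // the proxy forwards the request to the Sales Cloud host.
    var express = require('express');
    var request = require('request');

    var app = express();
    var TARGET = 'https://your-sales-cloud-host'; // hypothetical host

    app.use(function (req, res, next) {
      res.header('Access-Control-Allow-Origin', '*'); // sidestep same-origin for the demo
      next();
    });

    app.get('/sales/*', function (req, res) {
      request(TARGET + req.url).pipe(res); // straight pass-through
    });

    app.listen(3000);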

For looks I am using some CSS that makes the application look like a regular ADF application. Of course you don't have to do this; you can, e.g., use Bootstrap if you prefer. The point is that you can make this application look however you want. As I am trying to present this as an extension to an Oracle Cloud Application, I would like it to look like any other Oracle Cloud Application.

With all these pieces in place, it is now relatively easy to create a new application that makes use of all this functionality.  I created a single index.html page that bootstraps the JS application on first load.  Depending on the menu item that is clicked, a list of Customers, Opportunities or Interactions is requested from Oracle Sales Cloud, and on return, those are laid out in a simple table.
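In knockout.js terms that is little more than an observableArray bound to a table; a minimal sketch (assuming the /sales/ proxy routes sketched above and an assumed payload shape):

    // Fetch a list from the proxy and hand it to knockout; the HTML table
    // renders it with a foreach binding over 'rows'.
    define(['knockout'], function (ko) {
      var model = { rows: ko.observableArray([]) };
      ko.applyBindings(model);

      function load(resource) { // 'opportunities', 'interactions' or 'customers'
        var xhr = new XMLHttpRequest();
        xhr.open('GET', '/sales/' + resource);
        xhr.onload = function () {
          model.rows(JSON.parse(xhr.responseText).items); // assumed payload shape
        };
        xhr.send();
      }
      return { load: load };
    });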

For demonstration purposes, I provided switches to enable or disable each feature. Whenever a feature is enabled and the user clicks on something in the table, I trigger the phone call, SMS sending, speech or tweet, whichever is enabled. For example, here is the code to do Text-to-Speech using the HTML5 Speech Synthesis API, currently available in WebKit browsers, so use Safari or Chrome (mobile or desktop). And yes, I have feature detection in the original code; I just left it out to keep the code simple:

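It boils down to a couple of lines; a minimal sketch (row stands in for whatever record was clicked):

    // HTML5 Speech Synthesis: read the clicked record out loud.
    var utterance = new SpeechSynthesisUtterance(row.name + ', ' + row.status);
    utterance.lang = 'en-US'; // any of the 25 supported languages
    window.speechSynthesis.speak(utterance);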

Ditto for the SMS sending using the Twilio API:

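Again a sketch rather than the production code: the twilio object here is a hypothetical thin wrapper that posts form-encoded parameters to Twilio's REST API through the proxy (so the account credentials stay server-side), and user holds the number being texted:

    // Thin wrapper: POST form-encoded params to a Twilio REST resource via
    // the proxy, which adds the account SID and auth token server-side.
    var twilio = {
      post: function (resource, params) {
        var xhr = new XMLHttpRequest();
        xhr.open('POST', '/twilio/' + resource); // proxy route, assumed
        xhr.setRequestHeader('Content-Type', 'application/x-www-form-urlencoded');
        xhr.send(params);
      }
    };
    var user = { phone: '+15551234567' }; // demo target: me

    // Send the SMS about the selected record.
    twilio.post('Messages.json',
      'To=' + encodeURIComponent(user.phone) +
      '&From=' + encodeURIComponent('+15557654321') + // your Twilio number
      '&Body=' + encodeURIComponent('Update on ' + row.name));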

And calling somebody, using the Phone Call API from Twilio, using the same user and twilio object from above:

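Same idea, different REST resource; Twilio dials the number and drives the call with the TwiML it fetches from Url (again a sketch, with a placeholder TwiML URL):

    // Place the call: Twilio rings 'To' and runs the TwiML found at 'Url'.
    twilio.post('Calls.json',
      'To=' + encodeURIComponent(user.phone) +
      '&From=' + encodeURIComponent('+15557654321') +
      '&Url=' + encodeURIComponent('https://example.com/twiml/connect.xml'));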

The tweeting is done by adding the tweet button to the HTML, dynamically filling in the tweet’s content with some text from the Opportunity or Interaction.

Here is a screencast of the application in action:

As I mentioned earlier, how I am using the APIs might not be particularly useful, but the point is to show how easy it is to integrate this functionality with Oracle Cloud Applications to extend the functionality beyond what is delivered out of the box. It probably makes more sense to use Twilio to actually call or text a contact attached to the opportunity or interaction, rather than me. Or to tweet when an opportunity moves to a "win" status. The possibilities are endless, but I leave that up to you.

Happy Coding!

Mark.

Dowsing for Smarties

Wed, 2015-01-21 17:15

Editor’s note: John and Noel (@noelportugal) need to chat about Google’s Physical Web gBeacons.

I have been a tad skeptical about the usefulness of smart watches, but my colleague Julia Blyumen has changed my thinking.

In her recent blog post, Julia noted that a smart watch could become both a detector and a universal remote control for all IoT "smart things". She backed this up with a link to an excellent academic paper (pdf), "User Interfaces for Smart Things: A Generative Approach with Semantic Interaction Descriptions."

I strongly encourage anyone interested in the Internet of Things to read this paper. In it the authors lay the foundations for a general purpose way of interacting with “smart things”, interactive components that can sense and report on current conditions (counters, thermometers), or respond to commands (light switches, volume knobs).

These smarties (as I like to call them) will have much to tell us and will be eager to accept our commands. But how will we interact with them? Will they adapt to us or must we adapt to them? How will we even find them?

The authors propose a brilliant solution: let each smartie emit a description of what it can show or do. Once we have that description, we can devise whatever graphical user interface (or voice command or gesture) we want. And we could display that interface anywhere: on a webpage or a smartphone – or a watch!

Another one of my AppsLab colleagues, Raymond Xie, immediately saw a logical division of labor: use a phone or tablet for complex interactions, use a watch for simple monitoring and short command bursts.

Another way a watch could work in concert with a phone would be as a “smartie detector.”  It will be a long time (if ever) before every thing is smart.  Until then it will often not be obvious whether the nearby refrigerator, copy machine, projector, or lamp is controllable.

Watches could fill this gap nicely.  Every time your watch comes within a few feet of a smartie it could vibrate or display an icon or show the object’s state or whatever.  You could then just glance at your wrist to see if the object is smart instead of pulling out your phone and using it as a dowsing rod.

One way of implementing this would be for objects or fixed locations (room doors, cubicles, etc.) to simply emit a short-range bluetooth ID beacon.  The watch or its paired phone could constantly scan for such signals (as long as its battery holds out).  If one was detected it would use local wifi to query for the ID and pull up an associated web page.  Embedded code in the web page would provide enough information to display a simple readout or controller. The watch could either display it automatically or just show an indicator to let the user know she could tap or speak for quick interactions or pull out her phone to play with a complete web interface.
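To make the flow concrete, here is a hypothetical sketch of what the watch (or its paired phone) might do; every name in it is invented:

    // Hypothetical flow: a beacon ID arrives from a BLE scan, we look up
    // the smartie's self-description over local wifi, then generate a UI.
    function onBeaconDetected(beaconId) {
      var xhr = new XMLHttpRequest();
      xhr.open('GET', 'http://things.local/' + beaconId + '/description.json');
      xhr.onload = function () {
        renderControls(JSON.parse(xhr.responseText));
      };
      xhr.send();
    }

    function renderControls(description) {
      // e.g. { capabilities: [{ name: 'setTemperature', type: 'slider' }] }
      description.capabilities.forEach(function (cap) {
        console.log('would render a ' + cap.type + ' for ' + cap.name);
      });
    }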

An example I would find useful would be meeting room scheduling.  I often arrive at a meeting room to find someone else is already using it.  It would be nice to wave my watch at the door and have it confirm who had reserved the room or when it would next be free. Ideally, I could reserve it myself just by tapping my watch. If I realized that I was in the wrong place or needed to find another room, I could then pull out my phone or tablet with a meeting room search-and-reserve interface already up and running.

But that’s just the beginning.

One of the possibilities that excites me the most about this idea is the ability to override all the confusing and aggravating UIs that currently assault me from every direction and replace them with my own UIs, customized to my tastes.  So whenever I am confronted with a mysterious copy machine or the ridiculously complicated internet phone we use at work, or a pile of TV remote controls with 80 buttons apiece, or a BART ticket machine with poorly marked slots and multiple OK buttons, or a rental car with diabolically hidden wiper controls, I could pull out my phone (or maybe even just glance at my watch) to see a more sane and sensible UI.

Designers could perfect and sell these replacement UIs, thus freeing users from the tyranny of having to rely on whatever built-in UI is provided.  This would democratize the user experience in a revolutionary way.  It would also be a boon for accessibility. Blind users or old people or children or the wheelchair-bound could replace any UI they encounter in the wild with one specially adapted for them.

Virtual interfaces could also end the tedium of waiting in lines. Lines tend to form in parking garages and conference registration because only one person can use a kiosk at a time. But if you could tap into a kiosk from your smart watch, dozens of people could complete their transactions at the same time.

Things get even more interesting if people start wearing their own beacons.  You could then use your watch to quickly capture contact information or create reminders; during a hallway conversation, a single tap could “set up meeting with Jake.” Even automatically displaying the full name of the person next to you would be helpful to those of us who sometimes have trouble remembering names.

If this capability was ubiquitous and the range was a bit wider you could see and interact with a whole roomful of people or even make friends during a plane ride. Even a watch could display avatars for nearby people and let you bring any one into focus. You could then take a quick action from the watch or pass the selected avatar to your phone/tablet/laptop to initiate something more complex like transferring a file.

Of course this could get creepy pretty fast.  People should have some control over the information they are willing to share and the kind of interactions they wish to permit. It’s an interesting design question: “What interaction UIs should a person emit?”

We are still at the dawn of the Internet of Things, of course, so it will be a while before all of this comes to pass. But after reading this paper I now look at the things (and people) around me with new eyes. What kind of interfaces could they emit? Suddenly the idea of using a watch to dowse for smarties seems pretty cool.

Dear Julia: SmartWatch Habits and Preferences

Tue, 2015-01-13 11:40

Julia’s recent post about her experiences with the Samsung Gear watches triggered a lively conversation here at the AppsLab. I’m going to share my response here and sprinkle in some of Julia’s replies.  I’ll also make a separate post about the interesting paper she referenced.

Dear Julia,

You embraced the idea of the smart watch as a fully functional replacement for the smart phone (nicely captured by your Fred Flintstone image). I am on the other end of the spectrum. I like my Pebble precisely because it is so simple and limited.

I wonder if gender-typical fashion and habit is a partial factor here. One reason I prefer my phone to my watch is that I always keep my phone in my hip pocket and can reliably pull it out in less than two seconds. My attitude might change if I had to fish around for it in a purse which may or may not be close at hand.

Julia’s response:

I don’t do much on the watch either. I use it on the go to:

  • read and send SMS
  • make and receive a call
  • read email headlines
  • receive alerts when meetings start
  • take small notes

and with Gear Live:

  • get driving directions
  • ask for factoids

I have two modes to my typical day. One is when I am moving around with hands busy. Second is when I have 5+ minutes of still time with my hands free. In the first mode I would prefer to use a watch instead of a phone. In the second mode I would prefer to use a tablet or a desktop instead of a phone. I understand that some people find it useful to have just one device – the phone – for both modes. From Raymond’s description of Gear S, it sounds like reading on a watch is also okay.

Another possible differentiator, correlated with gender, is finger size. For delicate tasks I sometimes ask my wife for help. Her small, nimble fingers can do some things more easily than my big man paws. Thus I am wary of depending too heavily on interactions with the small screen of a watch. Pinch-zooming a map is delightful on a phone but almost impossible on a watch. Even pushing a virtual button is awkward because my finger obscures almost the entire surface of the watch. I am comfortable swiping the surface of the watch, and tapping one or two button targets on it, but not much more. For this reason I actually prefer the analog side buttons of the Pebble.

Julia’s response:

Gear has a very usable interface. It is controlled by a tap, swipe, a single analog button, and voice. Pinch-zoom of images was enabled on the old Gear, but there were no interactions that depended on pinch-zoom.

How comfortable are you talking to your watch in public? I have become a big fan of dictation, and do ask Siri questions from time to time, but generally only when I am alone (in my car, on a walk, or after everyone else has gone to bed). I am a bit self-conscious about talking to gadgets in public spaces. When other people do it near me I sometimes wonder if they are talking to me or are crazy, which is distracting or alarming, so I don’t want to commit the same offense.

I can still remember watching Noel talking to his Google Glass at a meeting we were in. He stood in a corner of the room, facing the wall, so that other people wouldn’t be distracted or think he was talking to them. An interesting adaption to this problem, but I’m not sure I want a world in which people are literally driven into corners.

Julia’s Response:

I am not at all comfortable talking to my watch. We should teach lipreading to our devices (wouldn't that be a good Kickstarter project?) But I would speak to the watch out of safety or convenience. Speaking to a watch is not as bad as to glasses. I am holding the watch to my mouth, looking at it, and, in the case of the Gear Live, first say "Okay, Google." I don't think many think I am talking to them. I must say most look at me with curiosity and, yes, admiration.

What acrobatics did you have to go through to use your watch as a camera? Did you take it off your wrist? Or were you able to simultaneously point your watch at your subject while watching the image on the watch? Did tapping the watch to take the photo jiggle the camera? Using the watch to take pictures of wine bottles and books and what-not is a compelling use case but often means that you have to use your non-watch hand to hold the object. If you ever expand your evaluation, I would love it if you could have someone else video you (with their smart watch?) as you take photos of wine bottles and children with your watch.

Julia’s Response:

No acrobatics at all. The camera was positioned in the right place. As a piece of industrial design it looked awful; my husband called it the "carbuncle" (I suspect it might be the true reason for the camera's disappearance in the Gear Live). But it worked great. See my reflection in the mirror as I was taking the picture below? No acrobatics. The screen of the watch worked well as a viewfinder. I didn't have to hold these "objects" in my hands. Tapping didn't jiggle the screen.


Thanks again for a thought-provoking post, Julia.  I am also not sure how typical I am. But clearly there is a spectrum of how much smart watch interaction people are comfortable with.

John

An Interaction Designer’s Perspective: Samsung Gear vs. Samsung Gear Live

Mon, 2015-01-12 11:52

Editor’s note: In January of 2014, our team held a wearables summit of sorts, test-driving five popular watches, fitness bands and head-mounted displays to collect experiential evidence of each form factor, initial experience, device software and ecosystem and development capabilities.

Julia drew the original Samsung Galaxy Gear smartwatch, and she’s been using it ever since. A few months ago, she began using the new Android Wear hotness, the Samsung Gear Live, which several of us have.

What follows are Julia’s impressions and opinions of the two watches. Enjoy.

Original Galaxy Gear versus Gear Live

When I had to keep track of time, I used to wear my Skagen watch, and I loved my little Skagen. Last year it ran out of battery. Coincidentally, it happened when Thao (@thaobnguyen) ordered the then just-released Samsung Galaxy Gear for me to "test."

Life is busy, and it took me some ten months to get a new battery for my Skagen.

In the meantime, I wore Gear. When I got my Skagen back, I had a “Lucy of Prince Caspian” moment. I felt my watch was bewitched – I couldn’t talk to it (I tried), and it couldn’t talk back to me. Mute and dumb. That’s how I realized I am hooked on smart watches.

Back to Narnia, Lucy Pevensie tries to wake up a lethargic tree that forgot how to speak. Skagen watch doesn’t speak to me either.

This is just a preface; the write up is about the original Gear versus the Gear Live, which I've been testing for a few months. In a nutshell, I have mixed feelings about the Gear Live. Though there are some improvements over the original watch, I find many setbacks.

Typography

Left, original Gear, right, Gear Live, note the minimalistic typography of original Gear versus decorative typography of Android Wear.

The original Samsung Galaxy Gear featured clean, bold typography. I could read a notification at a glance, even when driving. In the Gear Live, the minimalistic typography of the Samsung Gear was replaced by the smaller fonts and decorative backgrounds of Android Wear. Not only are those decorations useless, they make the watch unusable in the very situations where it could have been most helpful. (And yes, I understand Samsung had to showcase the impressive display.)

Speaker

Left, original Gear; right, Gear Live. I can take a call AND talk on the original Gear. With the Gear Live I can take the call, but then, unless I am connected to car speakers, I need to pick up the phone to talk.

Taking calls on the Gear in awkward situations was my main use of it. As clunky as the placement of the speaker and mic was on the original Gear, I was still able to take calls safely while driving or while walking with my hands full. The Gear Live has no speaker. It can initiate a call hands-free, but what is the use if I still need to reach for my phone to speak?

Camera

Left, original Gear; right, Gear Live, which has no camera.

Location, voice-to-text, AND image-to-text are the three most logical input methods for a watch. I got very used to taking image notes with the original Gear. Did you know that Evernote can search for text in images? For me, the flagship demo application for the original Gear was Vivino. With Vivino, one can take a picture of a wine label at a store with the watch camera and get the rating and pricing back on the watch. This application was a great demonstration of the smart watch’s retail potential. The Gear Live has no camera, eliminating all such use cases.

Vivino application on the original Gear, no longer supported. Point the watch camera at a label, take a picture and submit it to the Vivino server, and receive the wine rating on the watch.
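That label-to-rating loop is simple enough to sketch. Here is a minimal sketch in Python, with a hypothetical endpoint and response fields standing in for Vivino’s real service:

```python
# A hypothetical sketch of the label-to-rating flow; the endpoint and
# response fields are invented stand-ins for Vivino's real service.
import requests

RATING_URL = "https://example.com/wine/rate-by-label"  # hypothetical

def rate_label(photo_bytes):
    """Submit a label photo; return a one-line summary for the watch."""
    response = requests.post(RATING_URL, files={"label": photo_bytes})
    result = response.json()
    return f"{result['wine']}: {result['rating']}/5, ${result['price']}"

# On the watch: photo = take_picture(); show(rate_label(photo))
```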

Google Speech Recognition


Google Speech Recognition is superbly usable technology, way beyond S-Voice or Siri. Big Data in real action! Voice Search, Voice Commands, and dictation work fabulously. The only issue I found is with recognizing email contacts from speech.

Smart Watch

Google Voice Search makes the Smart Watch smart. It brings the knowledge base of the world – the Internet – to the tip of your tongue, and it is MAGIC!


Google Now

I must confess I am annoyed by Google Now cards. I know it tries really hard, but the recommendations are wrong about 50% of the time, and the other 49% they are irrelevant. Given that, I feel Now should stay in the back rows; instead, it puts itself on center stage. Lesson learned: for a smart watch, the precision/recall balance needs to be skewed heavily toward precision.

Google Now on the Gear Live. Ah? I am at home, silly!
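To make the precision-first point concrete, here is a toy sketch with made-up cards and confidence scores (not Google’s actual logic): raising the bar for showing a card sacrifices recall, but on a tiny screen that is the right trade.

```python
# Toy illustration with made-up cards and confidence scores: raise the
# bar for showing a card, sacrificing recall for precision.
suggestions = [
    ("Leave now for your 3pm pickup", 0.95),
    ("Traffic to a store you visited once", 0.40),
    ("Directions to your own home", 0.30),
]

PRECISION_FIRST = 0.9  # show fewer cards, but be right when you do

for card, confidence in suggestions:
    if confidence >= PRECISION_FIRST:
        print(f"show card: {card}")  # only the 0.95 card surfaces
```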

Conclusions

These opinions are my own. At least half of my day is spent on the go – driving kids around, sitting in classrooms or at lessons, and doing family errands. I rarely have idle hands or idle time.

You be the judge of whether I am an atypical user. In addition, I do not subscribe to the school of thought that a smart watch is a phone satellite and a fetish. I believe it can be a useful gadget well beyond that.

Yes, it is a given that no one will use the watch to write or read a novel, or even a long email. Beyond that, I don’t see why a good smart watch cannot do everything a person on the go needs to do, replacing the phone and giving us back our other hand.

Therefore, I feel that a good smart watch should aspire to:

  • work at a glance
  • be precise
  • work hands-free
  • be self-contained


If that is your typical day, then this is your gadget.


Last Thought: Smart Watch and IoT

Last but not least, I believe that a smart watch naturally lends itself to becoming a universal remote control for all IoT “smart things” – it can be your ID, it can sense “smart things,” it can output small chunks of information as voice or text, and it can take commands. As you walk next to (your) refrigerator, the refrigerator can remind you via your watch to buy more milk, and you can adjust the refrigerator’s temperature from the watch. This assumes that a “smart thing” can beam a description of all the knobs and buttons you need to control it.
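A minimal sketch of that idea in Python, assuming a hypothetical description format (my illustration, not the schema from the paper cited below): the appliance beams a machine-readable list of its controls, and the watch renders whatever arrives.

```python
# A hypothetical control description a "smart thing" might beam to a
# watch; the schema is invented for illustration.
FRIDGE = {
    "device": "refrigerator",
    "controls": [
        {"name": "temperature", "type": "range", "min": 1, "max": 7},
        {"name": "eco_mode", "type": "toggle"},
    ],
    "alerts": ["buy more milk"],
}

def render_on_watch(description):
    """Generate watch UI elements from whatever description arrives."""
    for control in description["controls"]:
        if control["type"] == "range":
            print(f"slider: {control['name']} ({control['min']}-{control['max']})")
        elif control["type"] == "toggle":
            print(f"switch: {control['name']}")
    for alert in description.get("alerts", []):
        print(f"notify: {alert}")

render_on_watch(FRIDGE)
```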


I am surprised there is not much written on that, but here is a very good paper (PDF): “User Interfaces for Smart Things – A Generative Approach with Semantic Interaction Descriptions,” Simon Mayer, Andreas Tschofen, Anind K. Dey, and Friedemann Mattern, Institute for Pervasive Computing, ETH Zurich, and HCI Institute, Carnegie Mellon University, April 4, 2014.

2015 AT&T Developer Summit & Hackathon

Fri, 2015-01-09 12:16

Editor’s Note: Noel did it! After competing in 2013 and 2014, he broke through and won a prize at the annual AT&T Developer Summit Hackathon (@attdeveloper). Congrats to the whole team.


The whole team, minus Anthony, who was too sick to enjoy the moment.


This year, Anthony (@anthonyslai), Raymond, Osvaldo (@vaini11a), Luis (@lsgaleana), Tony and I (@noelportugal) participated in the AT&T Developer Summit & Hackathon.

From the beginning we realized we had too much brain power for just one project, so we decided to split the group. The first group would go for the first overall prize, and the second group would focus on a single accelerator prize from a sponsor.

“Your Voice” – First Overall Prize Entry:

We knew we only had three minutes to present, and we had to leave an impression on the judges. So we opted to build our solution around a specific use case with only one persona in mind. The use case was using voice to control AT&T Digital Life, AT&T WebRTC and the AT&T Speech APIs. The persona was an older gentleman going about his daily life around the house. We opted to use our latest toy, the Amazon Echo, as the way to interface with AT&T services. We went to work, found a couple of limitations, but in the end we overcame them and felt pretty confident in our solution.

Here is our use case:

Tony is an 89-year-old man who lives alone. He is pretty self-sufficient, but his daughter Cindy worries about his well-being, so she bought AT&T Digital Life to make sure her dad is safe and sound. Tony doesn’t want to be bothered learning all the mumbo-jumbo that comes with new technology – a mobile app, a fancy remote, etc. Instead he prefers to use “Your Voice” to make sure all the doors are locked, the garage door is closed, the lights are off or on, and so forth. “Your Voice” also works as a personal assistant that can take care of reminding Tony of important things, reading email, initiating video calls (WebRTC), etc.

So that’s it! We pre-programmed sequences to identify actions. When Tony said “Alexa, I’m tired. I’m going to bed,” the system kicked off a series of actions, not just one. When Tony said “Alexa, call my grandson,” the system automatically started the projector and opened a video conference.
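A minimal sketch of that pattern in Python (the action functions and phrases are hypothetical stand-ins, not our actual hackathon code): each recognized phrase fans out to a whole sequence of actions.

```python
# Hypothetical action functions and phrases, for illustration only.
def lock_doors():    print("Digital Life: locking all doors")
def close_garage():  print("Digital Life: closing garage door")
def lights_off():    print("Digital Life: lights off")
def call(contact):   print(f"WebRTC: video-calling {contact}")

ROUTINES = {
    "i'm tired. i'm going to bed": [lock_doors, close_garage, lights_off],
    "call my grandson": [lambda: call("grandson")],
}

def handle_utterance(text):
    """Fan one recognized phrase out to a whole sequence of actions."""
    for action in ROUTINES.get(text.lower().strip(), []):
        action()

handle_utterance("I'm tired. I'm going to bed")
```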

And finally we created a video introduction for our presentation:

“Sensus” – Accelerator Entry:

Raymond and Anthony decided to enter the “MediaTek Labs IoT and Wearables Challenge.” MediaTek (@MediaTekLabs) has a very nice multipurpose development board called the LinkIt ONE that includes an array of connectivity options (BLE, Wi-Fi, GSM, GPRS, etc.), plus access to a lot of plug-and-play sensors.

They built a sensor station to monitor environmental safety metrics (temperature, fire hazard) and environmental health metrics (noise, dust, UV). They used Android Wear as the wearable platform to notify users when things happen, following an IFTTT model.
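A minimal sketch of that IFTTT-style model in Python, with made-up thresholds and a stand-in notification hook (my illustration, not their LinkIt ONE firmware):

```python
# Made-up thresholds and a stand-in notifier, for illustration only.
RULES = [
    (lambda r: r["temp_c"] > 30,     "Room overheating"),
    (lambda r: r["noise_db"] > 85,   "Noise above safe level"),
    (lambda r: r["dust_ugm3"] > 150, "Dust level unhealthy"),
]

def notify_wearable(message):
    # Stand-in for pushing a card to an Android Wear watch.
    print(f"wearable notification: {message}")

def evaluate(reading):
    """IFTTT model: IF a sensor condition holds, THEN notify."""
    for condition, message in RULES:
        if condition(reading):
            notify_wearable(message)

evaluate({"temp_c": 33, "noise_db": 60, "dust_ugm3": 40})
```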

Their solution was end-to-end, using only the MediaTek LinkIt ONE and MediaTek’s cloud platform, which gave them an edge as a pure MediaTek solution. The pitch became concrete when the judges came to talk to them: our room had A/C issues and constantly overheated, so we had to chase the maintenance guys quite often to fix it. Raymond talked to the judges about solving that problem by giving a wearable device to the head of maintenance, so he would know what’s going on in the building just by glancing.

“Sensus” got first prize for the accelerator entry, and as a team we could not be happier!

Conclusion:

Hackathons and developer challenges are a great way to work as a team, learn new technologies and push the limits of what can be accomplished in such a short time. As a team we have proven to be consistently ahead of the curve with our solutions; e.g., last year we built a Smart Holster for law enforcement, and if you have been following CES 2015, some companies are now doing similar implementations.

There is no doubt that voice control will be huge this year. The technology is maturing at a very fast rate and we are bound to see a lot more great implementations.

Finally, winning is not everything at these events. The journey, and what we learned along the way, is what matters. I find it very apt to hold this competition in Las Vegas, since this place is full of chance, probability and, ultimately, pure luck.

Here Are Your First Links of 2015

Wed, 2015-01-07 20:16

Our team has been busy since the New Year, competing in the AT&T Developer Summit hackathon, which is Noel’s (@noelportugal) Everest, i.e. he tries to climb it every year, see 2013 and 2014.

If you follow our Twitter (@theappslab) or Facebook page, you might have seen the teaser. If not, here it is:

Image courtesy of AT&T Developer Program

Image courtesy of AT&T Developer Program’s Facebook page

Look for details later this week.

While you wait for that, enjoy these tidbits from our Oracle Applications User Experience colleagues.

Fit for Work: A Team Experience of Wearable Technology

Wearables are a thing, just look at the CES 2015 coverage, so Misha (@mishavaughan) decided to distribute Fitbits among her team to collect impressions.

Good idea: get everyone to use the same device and collect feedback, although it seems unfair, given Ultan (@ultan) is perhaps the fittest person I know. Luckily, this wasn’t a contest of fitness or of most wrist-worn gadgets. Rather, the goal was to gather as much anecdotal experience as possible.

Bonus, there’s a screenshot of the Oracle HCM Cloud Employee Wellness prototype.

¡Viva Mexico!

Fresh off a trip to Jolly Old England, the OAUX team will be in Santa Fe, Mexico in late February. Stay tuned for details.

Speaking of, one of our developers in Oracle’s Mexico Development Center, Sarahi Mireles (@sarahimireles), wrote up her impressions and thoughts on the Shape and ShipIt we held in November, in Spanish.

And finally, OAUX and the Oracle College Hire Program

Oracle has long had programs for new hires right out of college. Fun fact, I went through one myself many eons ago.

Anyway, we in OAUX have been graciously invited to speak to these new hires several times now, and this past October, Noel, several other OAUX luminaries and David (@dhaimes) were on a Morning Joe panel titled “Head in the Clouds,” focused loosely around emerging technologies, trends and the impact on our future lives.

Ackshaey Singh (from left to right), DJ Ursal (@djursal), Misha Vaughan (@mishavaughan), Joe Goldberg, Noel Portugal (@noelportugal), and David Haimes (@dhaimes)


Interesting discussion to be sure, and after attending three of these Morning Joe panels now, I’m happy to report that the attendance seems to grow with each iteration, as does the audience interaction.

Good times.

Another Echo Hack from Noel

Tue, 2015-01-06 10:44

Noel (@noelportugal) spent a lot of time during his holidays geeking out with his latest toy, Amazon Echo. Check out his initial review and his lights hack.

For a guy whose name means Christmas, it seems a logical leap to use Alexa to control his Christmas tree lights too.

Let’s take a minute to shame Noel for taking portrait video. Good, moving on. Oddly, I found out about this from a Wired UK article about Facebook’s acquisition of Wit.ai, an interesting nugget in its own right.

If you’re interested, check out Noel’s code on GitHub. Amazon is rolling out another batch of Echos to those who signed up back when the device was announced in November.

How do I know this? I just accepted my invitation and bought my very own Echo.

With all the connected home announcements coming out of CES 2015, I’m hoping to connect Alexa to some of the IoT gadgets in my home. Stretch goal for sure, given all the different ecosystems, but maybe this is finally the year that IoT pushes over the adoption hump.

Fingers crossed. The comments you must find.

Chromecast Guest Mode Rules

Mon, 2015-01-05 14:40

If you read here regularly, you’ll know I’m a huge fan of the Google Chromecast.

It’s helped me cut the cable, I gave it as a Christmas gift two years in a row (to different people), I have several in my home, and I carry one in my laptop bag to stream content on the road.

And if you’ve seen any of us on the road, you may have seen some cool stuff we’ve built for the Chromecast.

Back in June, Google announced a killer feature for the little HDMI gizmo, ultrasonic pairing, which promised to remove the necessity for a device to be connected to the same wifi network as the Chromecast to which it was casting.

That feature, guest mode, rolled out in December for Android devices running 4.3 and higher, and it is as awesome as expected.

It’s very easy to set up and use.

First, you need to enable guest mode for your Chromecast. I tried this initially in the Mac Chromecast app, but alas, it has not yet been updated to include this option; same with iOS. So, you’ll need to use the Android Chromecast mobile app, like so:


Once guest mode is enabled, the PIN is displayed on the Chromecast’s backdrop, and anyone in the room can cast to it via the PIN or by audio pairing.


When attempting to connect, the Chromecast first tries the audio method; the Chromecast app asks to use the device’s microphone, and the Chromecast broadcasts the PIN via an audio tone.


Failing that (or if you skip the audio pairing), the user is prompted by the Chromecast app to enter the PIN manually.

Easy stuff, right? In case you’re worried that someone not in the room could commandeer your Chromecast, they can’t, at least according to Google. Being a skeptic, I tested this myself, and sure enough, the audio method won’t work if there are walls separating the device from the Chromecast. The app fails to pair via audio and asks for the PIN, which you can only get from the TV screen itself.

Not entirely foolproof, but good enough.
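The flow is easy to picture in code. Here is a minimal sketch in Python, with hypothetical function names standing in for whatever Google actually does under the hood:

```python
# Hypothetical function names; Google's real handshake is not public.
def pair_via_audio_tone():
    """Listen for the tone carrying the PIN; return None if unheard."""
    return None  # e.g., a wall between the device and the Chromecast

def prompt_for_pin():
    return input("Enter the PIN shown on the TV: ")

def connect_guest():
    pin = pair_via_audio_tone()
    if pin is None:
        # Audio pairing failed or was skipped; fall back to manual PIN.
        pin = prompt_for_pin()
    print(f"casting with PIN {pin}")

connect_guest()
```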

So why is this a cool feature? In a word, collaboration. Guest mode allows people to share artifacts and collaborate (remember, Chromecast has a browser) on a big screen without requiring them all to join the same wifi network.

Plus, it’s a modern way to torture your friends and family with your boring vacation pictures and movies.

More and more apps now support Chromecast, making it all the more valuable, e.g. the Disney Movies app, a must-have for me. Bonus for that app: it’s among the first I know of to bridge the Google and Apple ecosystems, i.e. it consolidates all the Disney movies I’ve bought on iTunes and Google Play into a single app.

Thoughts? Find the comments.

Noel’s Amazon Echo Hack

Mon, 2015-01-05 12:32

Noel (@noelportugal) is one of a handful of early adopters to get his hands on the Amazon Echo, Amazon’s in-home personal assistant, and being the curious hacker that he is, of course he used an unpublished API to bend Alexa, the Echo’s personality, to his will.

Video, because it happened:

And look, Noel’s hack got picked up by Hackaday (@hackaday), kudos. You can grab his code on GitHub.
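For a flavor of what such a hack involves, here is a minimal sketch in Python; the endpoint, field names and polling approach are my assumptions for illustration, not Noel’s actual code or any official Amazon API.

```python
# Hypothetical endpoint and fields; not an official Amazon API.
import time
import requests

ITEMS_URL = "https://example.com/echo/items"  # hypothetical

def poll_for_commands(session, seen):
    """Fetch spoken items and act on any we haven't seen before."""
    for item in session.get(ITEMS_URL).json():
        if item["id"] not in seen:
            seen.add(item["id"])
            print(f"heard: {item['text']}")  # dispatch to lights, etc.

session, seen = requests.Session(), set()  # assume auth cookies set up
while True:
    poll_for_commands(session, seen)
    time.sleep(5)  # poll politely
```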

We’re hoping Amazon releases official APIs for the Echo soon; lots of great ideas are on deck.

Our Week at UKOUG

Fri, 2014-12-26 12:05

Earlier this month, Noel (@noelportugal) and I (@joybot12) represented the AppsLab crew at the UKOUG Apps 14 and Tech 14 conferences in Liverpool.

I conducted customer feedback sessions with users who fit the “C-level executive” profile to collect feedback on some of our new interactive data visualizations. Unfortunately, I can’t share any of these design concepts just yet, but I can share a bunch of pics of Noel, who gave several talks over the course of the three-day conference.

This first photo is a candid taken after Noel’s talk on Monday about “Wearables at Work.”


Photo by Joyce Ohgi

I was thrilled to see so many conference attendees sticking around afterwards to pepper Noel with questions; usually, people leave promptly to get to their next session, but in this case, they lingered to chat with Noel (and try on Google Glass for the first time).

Here’s another of Noel taken by Misha Vaughan (@mishavaughan) with his table of goodies.


Photo by Misha Vaughan

The next photo is from Tuesday, when Noel and Vivek Naryan hosted a roundtable panel on UX. Because this was a more intimate, roundtable-style talk, the conference attendees felt comfortable speaking up and adding to the conversation. They raised concerns about data privacy, shared their thoughts on where technology is headed, and generally chatted about the future of UX.


Photo by Joyce Ohgi

This last photo is from Monday afternoon, when I made Noel take a break from his grueling schedule to play table tennis with me. The ACC Liverpool conference center thoughtfully provided table tennis in its Demo Grounds as a way to relieve stress and get some exercise (it was a bit too cold to run around outside).

I put up a valiant effort, but Noel beat me handily. In my defense, I played the first half of the game in heels; once I took those off, my returns improved markedly. I’ll get him next time! :) A special thank-you to Gustavo Gonzalez (@ggonza4itc), CTO at IT Convergence, for the great action shot, and also for the excellent feedback and thoughtful input on the design concepts I showed him the following day.

Photo by Gustavo Gonzalez

All in all, we enjoyed the Apps 14 and Tech 14 conferences. It’s always great to get out among the users of our products and collect real feedback.

For more on the OAUX team’s activities at the 2014 editions of the UKOUG’s annual conferences, check out the Storify thread.