
Oracle AppsLab

Driving Innovation

QS15: Measurement with Meaning

Sun, 2015-06-28 10:02

Walking into something as a newcomer is always an adventure of reality interacting with expectations. Though I wasn’t quite sure what to expect at the Quantified Self conference, it wasn’t what I expected. But in a good way.

Tweet-painting robot at QS15

The conference was structured around three main activities: talks given on the main stage, breakout sessions, which took place at different smaller areas during the talks, and break times, where one might check out the vendors, grab a snack, or chat with fellow attendees.

The talks, about ten minutes each, were mostly about the speaker’s successes in changing some aspect of their life via quantifying and analyzing it. This is partly what I wasn’t expecting—the goal-focused and very positive nature of (most) everyone’s projects.

True, some of the presenters might be tallied on the obsessive side of the spectrum, but by and large, it was all about improving your life, and not recording everything as a method of self-preservation.

On this last point, one presenter even provided this quote from Nabokov, which generated a touch of controversy: “the collecting of daily details … is always a poor method of self-preservation.”

One important theme I saw, however, is the role of measuring itself—that the very act of quantifying your behaviors, whether it’s diet, exercise, TV watching, or your productivity, can change your behavior for the better.

Granted, there can also be profound personal insights from analyzing the data, especially when combining multiple sources, but it’s possible some of these benefits come from simply tracking. Especially when it’s done manually, which takes a great deal of persistence, with many people petering out after a few weeks at the most.

This presents an interesting question about technology’s increasing proficiency at passive tracking, and the aim to provide insights automatically. For instance, the Jawbone UP platform’s Smart Coach is supposed to look at your exercise and activity data alongside your sleep data and give you advice about how to get better sleep.

If someone had tracked this manually, and done the analysis themselves, they may not only be a lot more familiar with the facts about their own sleep and exercise, but any insights derived might be more likely to be absorbed and translate to genuine change.

When insights are automatically provided, will they lead to just as much adoption?

Probably not, but they could reach a lot more people who may not be able to keep up with measuring. So it’s probably still a good thing in the end.

The other important theme was something that I’ve also been encountering in other areas of my work—the importance of good questions.

For most of the QS projects, this took the form of achieving a personal goal, but sometimes it was simply a specific inquiry into a realm of one’s life. Just looking at data can be interesting, but without a good question motivating an analysis, it’s often not very useful.

In the worst case, you can find spurious connections and correlations within a large set of data that may send you off in the wrong direction.

And while at the beginning of the conference it was made clear that QS15 was not a tech conference, there was plenty of cool technology in the main hall to check out and discuss.

There are too many to cover in much detail, but here are a few that intrigued me:

  • Spire, a breath tracking device that says it can measure focus by analyzing your breathing pattern. If someone is interested in examining their productivity, this could be a promising device to check out. Also, it can let you know when you need a deep breath, which has various physiological and emotional benefits.
  • Faurecia manufactures seats for automobiles, and they were showing off a prototype that uses piezoelectric bands within the chair itself to measure heart rate and breathing patterns. This is great because it can do this through your clothing, and detect when you’re falling asleep, and possibly institute some countermeasures. The data could also sync up with your phone, say through Apple’s Healthkit, if you want to add it to your logs.
  • Oura is an activity and sleep tracker that uses a ring form factor, which for some people may be easier to sleep with than a wrist band. Their focus is on sleep and measuring how restorative your rest is. I look forward to seeing how this one develops.

The conference had a lot to offer—some inspiration, some cool technologies, surprisingly good lunches, and quite a bit to think about.

More About Me at QS15

Sat, 2015-06-27 10:30

I always thought of myself as a control freak, Type A, self-aware (flaws and all) person, but then I attended the Quantified Self Conference last week in San Francisco.

Image from QS15

There is so much more one can do to learn about one’s self. The possibilities for what I can quantify (measure about myself) are endless, and there are so many people capturing surprising things.

Quantified Self, if you haven’t heard, is “a collaboration of users and tool makers who share an interest in self knowledge through self-tracking,” as described by Gary Wolf and Kevin Kelly. I’ve also been an admirer of Nicholas Felton, who has beautiful visualizations of his data.

The two-day conference consisted of morning and afternoon plenary sessions, and in between, the day was filled with ten-minute talks on the main stage (where practitioners share their own QS work, tools, and personal data), with breakout sessions for group discussions and office hours for hands-on help happening concurrently. There were plenty of topics for a newbie QS-er like me or a longtime enthusiast.

My conference experience in numbers:

Videos and presentations should be posted in the coming weeks, but until then, here is a summary from Gary Wolf.

Beyond the numbers, I was surprised, inspired, and learned a few lessons. It is amazing what quantified self-ers are capturing, the extent of the effort they put in, and the life-changing impacts. There is plenty of fitness, diet, and health tracking happening, but others are tracking things such as:

The list goes on but this sampling gives you a sense of the range of self tracking.

While lots of recording was being done with commonly available sensors, devices, and apps, a lot of data was also being recorded manually in pen-and-paper journals and spreadsheets.

There are endless measures (and many low and high tech tools) but recording is not the end goal. The measures help inform our goals and the actions to achieve those goals. There were several talks about the importance of self-tracking to understand your numbers, your similarities and your differences to population normals.

In “Beyond Normal: A Conversation,” Dawn Nafus (@dawnnafus) and Anne Wright (@annerwright) discussed the importance of self-tracking to gain awareness on whether the standards, baselines, and conventions apply to you. Population normals are a good starting point but they shouldn’t define your target as you are unique and the normals may not be right for you (#resistemplotment).

Image from QS15

My takeaway: don’t worry about getting the perfect device or tool. Start with finding a goal or change that is important to you. Record, measure, and analyze – glean insights that move you along to being your best self. It is not about the Q but the S.

Mid-June Roundup

Tue, 2015-06-16 16:07

A busy June is half over now, but we still have miles to go before July.

We’ve been busy, which you know if you read here. Raymond went to Boston. Tony, Thao (@thaobnguyen), Ben and I were in Las Vegas at OHUG 15. John and Thao were in Minneapolis the week before that. Oh, and Anthony was at Google I/O.

The globetrotting continues this week, as John and Anthony (@anthonyslai) are in the UK giving a workshop on Visualizations at the OUAB meeting. Plus, Thao and Ben are attending the QS15 conference in San Francisco.

And next week, Noel (@noelportugal), Raymond, Mark (@mvilrokx) and I head to Hollywood, FL for Kscope15 (#kscope15).

Did you hear we’re collaborating with the awesome organizers (@odtug) to put on a super fun and cool Scavenger Hunt? If you’re going to Kscope15, you should register.

You can do it now, I’ll wait.

Back? Good. Check out the sweet infographic Tony C. on our team created for the big Hunt:


Coincidentally, one of the tasks is to attend our OAUX session on Tuesday at 2pm, “Smart Things All Around.” Jeremy Ashley (@jrwashley), our GVP, and Noel will talk about the Scavenger Hunt, IoT, new experiences, design philosophies, all that good stuff.

Speaking of philosophies, VoX has a post on glance-scan-commit, the design philosophy that informs our research and development, and more importantly, how glance-scan-commit trickles into product. You should read it.

And finally, Ultan (@ultan) and Mark collaborated on a post about partners, APIs, PaaS and IoT that you should also read, if only so you can drop a PaaS4SaaS into your next conversation.

If you’re attending any of these upcoming events, say hi to us, and look for updates here.

A Framework for Wearables, Glance

Fri, 2015-06-12 12:27

Not long ago, Ultan (@ultan) wrote about our framework for wearable, and other, devices. We’re calling it Glance to reflect the OAUX glance-scan-commit design philosophy.

Noel (@noelportugal) produced a video highlighting Glance on several smartwatches as well as in the car, on Android Auto.

It’s pretty sweet. Check it out:

Glance has been in the works for more than a year now, and it arose out of our collective frustration with the effort involved in developing for multiple device SDKs.

The goal of Glance is to do 75-80% of the overlapping work: calling Oracle Cloud Applications APIs, working with required cloud services like Apple Push Notifications and Google Cloud Messaging, deploying a companion mobile application, built in Oracle’s Mobile Application Framework, of course.

With all that done, we can build for and plug in new devices (ahem, Pebble Time) much more easily and with much less effort. Initially, we built Glance to support the original Pebble and Android Wear smartwatches, and the Apple Watch was our first proof-point for it.


We’re happy with the results so far, and Glance has made it much easier for us to build prototypes on new devices. Now, if only we could get access to CarPlay.

Some Wearable Tech Glances Beat Others

Fri, 2015-06-12 11:53

This idea of the glanceable user experience of wearable technology is now everywhere.

They’re all at it.

Testing out the Apple Watch in Oracle HQ’s Fitness Center spin class. On the road is another matter.

There is the OG Misfit Wearables Shine, Apple’s Glances, and of course our Oracle Applications Cloud User Experience (@usableapps) concept of glance on the smartwatch, part of our Glance, Scan, Commit design philosophy.

But, not all glances are equal. How well a glance works for the wearer depends on the user experience notion of context of use: the wearer, the type of device, what the wearer’s up to at the time, the information they need, the connectivity, et cetera.

Take road cycling for example. I find the Google Glass cards-based glance as applied to the Strava Glassware totally awesome when cycling. Still.

Strava glassware. Heads-up performance details on a card.

Glass is a heads-up device, so that means eyes on the road. Combined with the audio updates on my cycled segments and so on, it’s a fantastic UX. It’s convenient. It doesn’t distract me. And, it’s safe. I don’t have to look down at my wrist and take my eyes off the road even for a second to glance at the important stuff.

Ain’t no stoppin’ us now. Except to look at smartwatches.

Looking down at my wrist or changing hand position to glance at my progress on a smartwatch such as my Apple Watch Activity or Workout built-in apps, at my Fitbit Surge Bike stats, or at my Motorola Moto 360 Android Wear Google Fit analytics while hammering along on a bike at 30 mph on a public road is just too risky for me.


Glancing at these smartwatches’ UIs later, of course, is great, whether it’s for progress on miles, calories, duration, or even to ensure that important data’s actually being sent to the cloud where I can do more with it.

Fight for pink. Moto 360 glance notification of Strava bike ride data heading to the cloud.

I have the same opinion about heads-up glance on devices like Google Glass when I am running, though the durability of Google Glass, battery life, and still having to pair it with another device is a pain.

No sweat. Context of use for sporting devices needs to bear in mind personal weather conditions.

Running in cities requires you to keep your wits about you: be sharp and look ahead. Glancing down from the upcoming path even for a second might mean going home with an injury or worse. Generally, with my smartwatches, when I’m out running, I’ll glance at the data or analytics when I stop at a traffic signal, or rely on the audio update from my paired smartphone (although it ruins the music) on occasion.

Somewhere to run in London with the Apple Watch Glances (see what I did there?)

The ability to glance at performance statistics in heads-up mode, combined with those audio progress reports in your ear, is the way to go when cycling and running with wearable tech IMHO. Arguably, too, an audio component is “glance for the ears”. Glance should be multi-mode and not just about the visual, not least for accessibility reasons. We can’t all see as well as each other.

Activity wearable tech designers and developers, take note. Eyes on the prize, or the road in this case, please. It’s a good reminder about the importance of context of use when gathering user requirements.

Sweet Dreams at the EyeO Festival

Wed, 2015-06-10 15:17

 



I often tell people that you need both a left brain and a right brain to be a software designer: a left brain to analyze and understand, a right brain to dream and wonder. The EyeO Festival, which Thao and I just attended in Minneapolis, was food for our right brains.

EyeO is about the intersection of art and code: generative artists (who use data and algorithms to produce works of art), art installations (which often require sophisticated coding), and those who see coding itself as an art form. It is not so much about knowledge transfer as it is about building a community, meeting world-class data artists, and hearing their back stories.

I attended fourteen talks in all and saw many wonders.

The JPL crew controlling the Mars rover use Microsoft HoloLens goggles to create an augmented reality, allowing scientists in remote locations to stand together on the surface of the planet. Each person sees their own desk, chair and monitor sitting in a crater with the rover just a few feet away. As their eyes scan the area, little dots of light show where each person is looking; when they use their mouse to teleport to a nearby ridge, others see their avatars walk to the new location. They can even walk around the rover and point to where it should go next.

The design team at nervo.us (she’s a biologist, he’s a physicist) is interested in how complex forms arise in nature from cells growing at different rates. Using their own custom software, they create spectacular simulations and turn these into 3-D printed art objects. One of their most stunning creations is a kinematics dress, made supple using thousands of tiny interlocking plastic hinges perfectly fitted to the laser-scanned image of a customer’s body. With scary-hard math, they generalize a moving body from a single scan, compute not just how the dress will look but how it will hang and twirl, and even prefold it so that it will fit in today’s small 3-D printers.

Perhaps the most jaw-dropping demonstration was a sneak preview of “Connected Worlds,” an installation that will be opening soon at the New York Hall of Science. Three years in the making, it creates a Star Trek style holodeck with a 50-foot waterfall and six distinct biomes populated by whimsical plants and animals. Children move physical logs to redirect virtual water into the various biomes; if they make the right decisions wonderful trees will grow and attract ever more magical animals. The team at Design I/O described technical challenges and lessons learned, some of which might be applicable to future AppsLab motion-tracking projects.

One of the topics I found most stimulating was new and improved coding languages. I have used Processing, a language developed specifically for artists, to create some of the interactive visualizations we show in our cloud lab. It was a thrill to meet and talk with Processing’s co-inventors and hear their plans for new evolutions of the language, including P5.js, Processing.py, and the upcoming Processing 3.0.

But the most interesting talk about languages was by a guy named Ramsey Nassar. Ramsey is an uber-coder who creates new computer languages for fun. He argues that most coders are stuck using alienating, frustrating, brittle languages created decades ago for a world that no longer exists. He wants to create languages that facilitate “post-human creativity,” new forms of creativity not possible before computers. Some of his languages, like god.js (which makes code look like biblical text) and Emojinal (made completely out of emoji), are just for fun. Others, like Alb (the first entirely Arabic coding language), Arcadia (for Unity 3D VR game development), Zajal (inspired by Processing), and Rejoice (a stack language based on Joy), are practical and mind-expanding. I plan to talk more about why coding languages should matter to designers in a future blog post.

As with any conference there were countless small discoveries, too many to report in full. Here are just a few…

Amanda Cox of the New York Times talked about making data more relatable by using geocoding to default the initial view of a large geographical dataset to the user’s own locale. Another interesting technique was having users guess what a plotted curve would look like by drawing it before showing the actual curve.

One clever flourish I noticed was the use of tiny single-value pie charts placed beneath each year in the X axis of a time-driven line chart to add an extra dimension of data about each year without distracting from the main point of the chart.
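
To make that flourish concrete, here is a minimal sketch of how it might be coded in D3 (v4+ style). The `svg`, `xScale`, `height`, and per-year `share` value are assumptions for illustration, not details from the talk.

```javascript
// Hypothetical sketch: one tiny single-value pie under each year tick of a
// line chart's x axis. `svg`, `xScale`, `height`, and the 0..1 `share` field
// are assumed to already exist; they are not from the original talk.
const arc = d3.arc().innerRadius(0).outerRadius(6);

const pies = svg.selectAll('.year-pie')
  .data(data)                                   // e.g. [{year: 2010, share: 0.4}, ...]
  .enter().append('g')
    .attr('class', 'year-pie')
    .attr('transform', d => `translate(${xScale(d.year)}, ${height + 30})`);

// Light background circle representing the "empty" remainder of each pie.
pies.append('circle').attr('r', 6).attr('fill', '#eee');

// Filled wedge whose angle encodes the single value for that year.
pies.append('path')
  .attr('fill', 'steelblue')
  .attr('d', d => arc({ startAngle: 0, endAngle: d.share * 2 * Math.PI, padAngle: 0 }));
```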

Sprint, the telephone company, started out as a railroad company that used their existing right of way to plant cell towers. Sprint stands for Southern Pacific Railroad Internal Networking Telephony.

Reza Ali is an amazing generative artist who turns data and music into images, animations, and tangible objects. One of his secret weapons is ofxPro. Check out his music videos for the band OK Go.

Into LED arrays and Raspberry Pi? Check out Fadecandy.

Timescape is a visualization-based storytelling platform, currently in beta. Looks interesting.

How long does it take the New York Times team to create world-class infographics? As long as you have, plus one half hour.

What kind of music do coding language nerds listen to? The Lisps of course!

My right brain is full now. Time to dream!

Asteroid Explorer Launched

Wed, 2015-06-10 13:28

On Monday, we launched Asteroid Explorer at the Harvard-Smithsonian Center for Astrophysics.

Jeremy (@jrwashley), DJ, Kris Robison and I attended the launch event, which you can watch here. My part of the presentation begins at 36:36.

This event was the culmination of NASA’s Asteroid Hackathon event back in October. Remember that?

Here’s the event abstract:

Abstract: In October 2014, NASA’s Asteroid Hackathon event was hosted (with several other NASA partners) at the SETI Institute in Mountain View, California. Team NOVA’s overall winning solution for this hackathon allowed users to explore relationships among the Minor Planet Center’s asteroid data. The elegant interface was not just eye-catching; the repeated learning that hackathon participants experienced in the “science fair” portion of judging greatly impressed the judges. More than once, people discovered relationships among asteroid data parameters that they didn’t previously know about. A perfect outcome for one of the primary goals: to increase public knowledge of asteroids. Dr José Luis Galache (Acting Deputy Director, Minor Planet Center) and DJ Ursal (Director, Product Management at Oracle) teamed up through the Oracle Volunteering initiative to implement the winning entry from the Asteroid Hackathon on the Minor Planet Center website. On June 8th they will be launching the website as part of the Harvard-Smithsonian Center for Astrophysics’s Solar, Stellar and Planetary Sciences division seminar series. The team will be discussing this project as it relates to cooperation between the Minor Planet Center, NASA, Oracle Volunteering, and its goal to inform and involve the general public.

This volunteer effort was a great success, and the result was well received and appreciated by the astrophysicists attending the launch event.


Jeremy Ashley, GVP Oracle Applications User Experience, speaking at the launch of Asteroid Explorer

The NASA Grand Challenge program executive Jason Kessler (@soughin) was at the White House, talking up the Asteroid Hackathon and this volunteer work there, before calling into the event via Skype.


The event was live broadcast through the Minor Planet Center’s YouTube channel, and the audience at the Center was mostly astrophysicists.

On the roof-top of the Harvard-Smithsonian Center for Astrophysics, there are several telescopes, including the famous Harvard Great Refractor. But we liked this cute mini Astro Haven.


A bit about Asteroid Explorer: the main part of the web tool uses Crossfilter, D3.js, and Highcharts. I processed the asteroid data into the appropriate dimensions and groups to feed into Crossfilter, which renders the interactive filter bar charts and tables. I also created a bubble chart to render series of property data, so you can look into the correlation of any pair of properties, and it reacts dynamically to the range sliders on the filter bar charts.
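
As a rough illustration of the Crossfilter part of that pipeline, here is a minimal sketch; the field names and bin sizes are hypothetical placeholders, not the actual Minor Planet Center schema.

```javascript
// Hypothetical sketch of the Crossfilter wiring; field names and bin sizes
// are placeholders, not the real asteroid schema.
const cf = crossfilter(asteroids);            // asteroids: array of records

// One dimension + group per interactive filter bar chart.
const byDiameter = cf.dimension(d => d.diameter);
const diameterBins = byDiameter.group(km => Math.floor(km / 10) * 10);

const byAlbedo = cf.dimension(d => d.albedo);
const albedoBins = byAlbedo.group(a => Math.round(a * 20) / 20);

// Narrowing one dimension (e.g. from a range slider) updates every other
// group, and anything else fed from the same crossfilter, automatically.
byDiameter.filterRange([0, 50]);
console.log(albedoBins.top(5));               // counts reflect the active filter
```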



Google I/O 2015 & User Experience

Wed, 2015-06-03 12:47

Google I/O 2015 has just ended. There are lots of aspects to Google I/O; let’s have a taste of it from the user experience perspective.

Lots of features were announced at Google I/O, and you may find that a lot of the focus has been on user experience this year.

First, for the pre-show, there were planets and a whale flying across multiple screens surrounding the keynote room, showcasing the latest advancements in VR and animation. Below is the whale that someone recorded.

Next, here is a visualization of Android mobile adoption.

This visualization is pretty impressive. It uses the screen real estate efficiently, it is easy to understand, and it shows a lot of key information in an elegant way.

As Dave Burke pointed out, “the central theme of M is improving the core user experience of Android.” To name a few of those improvements, Android M now provides simpler and more granular control over app permissions, a better web experience, app links, and fingerprint support.

Before M, users had to allow all the permissions that an app requested in order to install it. I had a lot of issues with this approach in the past.

For example, I did wish to install the Facebook app on my phone, and I did at one point. To my dismay, the Facebook app immediately scanned through all my contacts and suggested friends to me from my contact list. But hey, I do not wish to become Facebook friends with my banker, please. With the new app permissions, I should no longer need to disclose my contacts to Facebook just to have the app installed.

Google Now, besides showing you your travel cards and parking cards, now drills into the app you are using to derive the current user context. Being able to retrieve context will not only allow Google to provide more relevant results, but more appropriate actions as well. Users will no longer need to switch to another app (such as the browser), look for some information, and get back to the previous app. Now they can simply go to Google Now and ask questions like “who is the author of this song,” and it will return the results they need without losing context. This is a huge time savings for users. Privacy is a potential concern for lots of people, but let us defer that discussion to another time.

IoT is certainly the next big thing, and everyone is looking for a winner. Google is doing the same and introduced Project Brillo and Weave as its step and direction in this area. With almost everything now moving to the cloud and becoming easily accessible, IoT development is easier than ever before. This is the era of IoT, and it is part of user experience as well. How can you claim a great user experience when users cannot even pick and choose the devices they like?

Last but not least, having Cardboard for education is such a great idea. I now wish that I had been born in this era and could experience and learn geography, biology, and chemistry in this new interactive way.

To close this off, here I am lying in a hammock, watching one of the completely packed sessions in the cafeteria.


Did you go to Google I/O? Please feel free to comment below if you have anything you would like to share.

Kscope15 Scavenger Hunt

Fri, 2015-05-29 10:17

Are you attending Kscope15 (#kscope15)?

If yes, then you should join the Kscope15 Scavenger Hunt. We have partnered with the Oracle Development Tools User Group (aka @odtug) and created a fun way to get points and win prizes!

Why? The Oracle AppsLab (@theappslab) team wanted to find a user-friendly way to start a conversation with the ODTUG membership about this whole “Internet of Things” thing. Plus, we love Kscope and ODTUG!

This scavenger hunt is a fun way to start dipping your toes into an emerging technology that is going to be a major focus area for Oracle on the road ahead.

So join us, and you’ll be able to go back to work on Monday with some cool IoT talking points. Like the fact that you now know what “IoT” stands for.

Visit http://kscope15.com/scavenger and register now!


Stay tuned to hear more on how we built this and how we are leveraging mobile, IoT and wearable technologies for this fun activity.

Oh, and be sure to attend our Tuesday session at 2pm, “Smart Things All Around,” to hear Jeremy Ashley (@jrwashley), our GVP, and me present a deep dive discussion and wax philosophical on what it all means.

But wait, there’s more. Make sure to stop by our AppsLab table to chat us up during the conference.

Busy Times Are Afoot

Wed, 2015-05-27 18:32

Lots going on here in AppsLab land and in Oracle Applications User Experience (@usableapps). This here is a recap post.

Showing the Oracle Applications User Experience Roadmap to Oracle’s Asia Partners

At the beginning of May, Anthony (@anthonyslai) and Raymond joined a large contingent of the OAUX team on a two-stop tour of Asia. The first stop was Singapore.

Here’s the dynamic duo in action, setting up our demos to show to a large group of Oracle partners.


For a full download of the event, be sure to read Misha’s (@mishavaughan) recap post.

Laying out the Oracle Applications User Experience Strategy for Partners in Beijing

After Singapore, the team headed to Beijing for more partner events, and as you can see, the turnout was phenomenal.


Misha has a full debrief of the Beijing leg of the tour as well.

IoT Hackathon in Guadalajara

Even though Noel (@noelportugal) was bummed he didn’t get to go to Singapore and China, his spirits brightened when Laurie (@lsptahoe) asked him to serve as a mentor for her Internet of Things (IoT) hackathon in Guadalajara.


That’s his happy face, as he sits in his mobile IoT lab. Check out his pet Amazon Echo there in the front. That case is full of goodies. Note the soldering iron.

Laurie has a full review of the event, and you can read about the AppsLab team’s entry here.

Coming Soon

We’ll get a brief respite, then come several conferences.

Next week, Thao (@thaobnguyen) and John will be attending Eyeo (@eyeofestival), as in the festival, not the Google conference (@googledevs). Anthony will be at that one, i.e. Google I/O, so look for his recap here next week.

On June 8, OHUG 2015 begins, and several of us will be attending, doing research and testing. Gozel (@gozelaamoth) has a full rundown.

And hey, I’ll be presenting with Aylin Uysal (@aylinuysal); our session is called Oracle HCM Cloud User Experiences: Designed for Work Styles across Devices, and it’s Tuesday, June 9 at 1 PM. So, come by if you’ll be at the show.

In mid-June, Anthony and John head to the UK for the OUAB meeting, specifically to present and demo some of the team’s visualizations work.

Near the end of the month comes KScope 15 (#kscope15), and several of us will be going.  We have something special planned for ODTUG’s (@odtug) annual get-together. Stay tuned for details.

Consider yourself current.

Nymi Band Impressions

Tue, 2015-05-26 13:35

Editor’s note: Here’s the first post from Osvaldo Villagrana (@vaini11a), one of our AppsLab Mexico team members. Enjoy.

Over the last week I’ve been playing with the Nymi Discovery Kit I got back at our AT&T hackathon, and here are my impressions, both from a developer’s point of view on the Nymi SDK and as a user.

For those who don’t know, this band is a wearable biometric identity device that lets you use your heart’s unique signature (a.k.a. electrocardiogram, or ECG) to authenticate and validate your identity.

The main problem they want to solve is freeing users from having to remember all the passwords, PINs, and security codes we use on a daily basis.

First off, the Discovery Kit includes the band, a Bluetooth dongle for Windows, and a USB cable for charging the band. The Bluetooth dongle is included because at the beginning the Nymi band could only be paired with Windows, but now it can be paired with OS X and Android as well.


The Nymi band’s material at first feels cheap, like it would be easy to bend and break, but it really fits very well on my wrist. The band’s connection terminals at both ends of the cord are quite exposed to water or dust, though they say it is water resistant (but not waterproof).


The band is adjustable and can accommodate wrist sizes up to 7.5” in circumference. A full charge takes approximately two hours using a wall outlet or computer, and the battery lasts three days.


Setting up the band requires some steps; the band must be enrolled and authenticated with your own ECG using the NCA (Nymi Companion App), available for Windows, OS X, and Android. I decided to use the Android app this time. I tried OS X and Windows, but it’s the same. Once the band is clasped on your wrist, it will confirm the charge level and immediately enter broadcast mode.

I found this step a bit confusing, as there’s no feedback once the band is already in broadcast mode, so you are not quite sure whether your band is ready to be discovered. The funny thing is there’s no way to turn it off.

After the band is clasped, the Android app asks you to put your finger over the sensor on the band. It takes about a minute for the app to analyze and save your ECG info. After that, you’re ready to pair your Nymi with any NEA (Nymi Enabled App, i.e. a third-party app). The band supports only up to 7 different app profiles (they say more will be supported in coming updates).


Any time the clasp is opened, the band must be authenticated once again, but with the same NCA app as before. If you want to use any other NCA app (OS X or Windows), the band has to be reset and the setup started over. This is not ideal.

NEAs must provision a unique key-value pair (profile) that is saved on the band for future use; this happens only once for each NEA. The NEA should store the provision returned from the band for future communication. On subsequent use, the NEA validates against the provisioned Nymi band. Once validation succeeds, the NEA can assume an authenticated user. All of those steps must be implemented by the developer using the SDKs for the different platforms.
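
To make that flow easier to follow, here is a pseudocode-style sketch of it; the `sdk.provision`/`sdk.validate` calls and the storage object are hypothetical stand-ins, not the actual Nymi SDK API.

```javascript
// Pseudocode sketch of the NEA flow described above. The sdk and storage
// objects and their methods are hypothetical stand-ins, NOT real Nymi SDK calls.
async function authenticateWithBand(sdk, storage) {
  let provision = await storage.get('nymiProvision');

  if (!provision) {
    // First run only: ask the band for a unique key-value (profile) and keep it.
    provision = await sdk.provision();
    await storage.set('nymiProvision', provision);
  }

  // Every subsequent run: validate against the provisioned band.
  const valid = await sdk.validate(provision);
  if (!valid) throw new Error('Band not validated; user is not authenticated');

  return true;  // the NEA can now assume an authenticated user
}
```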


To complete the exercise, I wrote an Android app that runs the provisioning and validation flow and finally authenticates the user if they are close enough to the device, in this case a phone or tablet. After I got authenticated, my wife wore the band and tried to get authenticated, but authentication failed every time, as expected.

The SDK is good but needs some enhancements. Even at Nymi, they are having a hard time with problems in their own NEAs, like the unlock app for Mac OS X, which currently is not working; I have posted a couple of issues and bugs I found.

As a first attempt in this new authentication-automation niche, I like it, and I think it is good enough.

I see a lot of potential and possible use cases for this band in the enterprise. I would definitely use it, but what I would really love is a band that can handle authentication, sport and motion tracking, notifications, and time in the same device. That’s probably too much for now, but I’m looking forward to seeing that device soon.

IoT Hackathon Field Report: Mexico Edition

Mon, 2015-05-25 12:02

I recently ventured down to Mexico to participate in an Internet of Things (IoT) hackathon organized by Laurie Pattison’s (@lsptahoe) Apps UX Innovation Events team with some of my fellow AppsLab members, Luis Galeana, Tony Orciuoli, and Osvaldo Villagrana.


Being the lone non-developer, I wasn’t sure how much I would be able to contribute—but I had done some research pertaining to our use case, so I felt I had at least that much to offer.

Our rather illustrious use case pertained to a perennial workplace problem—lines to use the bathroom. In MDC, there is a preponderance of men, and so apparently waiting can be an issue. Some of my research has found that elsewhere, where there are more women than men, lines to use the women’s bathroom in the office place can be a serious annoyance.

Thus was born what was originally playfully titled Bathroom Management (BM), though we ended up with Presence, which would work more generally as a presence management system that could also handle conference room reservations, among other things.


I had never been part of a hackathon, but I definitely discovered the appeal. As a lover of deadlines, with my own fond experience of coding at night (definitely the best time for coding), the format seems just right for me. With free snacks and beverages, food carts for lunch and dinner, and a beautiful view from the 9th floor of the MDC office, it was an excellent setting.

I was able to help intermittently with thinking through some of the logic of our scheduling system, and with our pitch at the end, so I did feel I added something, even if the lion’s share of the work was done by the other three. Being a two-day hackathon, we had one late night, which I stuck around for, and ended up reading about and playing with Python, in the hopes it might come in handy. It didn’t, but there’s always next time.

Our presentation of Presence garnered some good laughs, which we didn’t quite expect, but at least everyone was engaged. We had a great demo showing our scheduling system for bathroom stalls, which included proximity sensors, sounds, and displays in the stall, and a web interface for scheduling, as well as IM, phone, and watch notifications when the stall you reserved becomes free.

We came in third, after two other solid entries, and took home the People’s Choice award, perhaps because our solution filled a real need in the office! I did learn a lot from the other winners, particularly on how we could have pitched it better to highlight the enterprise applicability. So again, there’s always next time.

All in all, I found it a great experience, and I hope I have another chance to do it again in the future.


Another Take on Twilio Signal 2015

Fri, 2015-05-22 16:21

Editor’s note: Mark (@mvilrokx) and Raymond are at it again. Earlier in the week, they each provided a take on last weekend’s Bay Area Maker Faire, and this week, they both attended Twilio’s (@twilio) first developer conference, Signal. Mark’s take is here; now, it’s Raymond’s turn. Enjoy.

Twilio is no stranger to us at the AppsLab. We have embedded Twilio Voice and SMS in applications such as the Taleo Interview Evaluations, the IoT call at Maker Faire 2014, and the Daily Asteroid report. It is a simple yet powerful approach to adding some really useful communication to some interesting projects.

But I never imagined Twilio was so big: big enough to host a conference and draw thousands of enthusiastic attendees.

They have come a long way – at the conference, they announced a slew of new products, and some of them are timely and empowering. A few examples:

  • Twilio Authy – a perfect way to embed two-factor authentication into your next awesome and secure application.
  • Twilio Conference (basic, global, epic) – lets you bypass carrier lock-down and the nasty roaming.
  • Twilio Video – Twilio-flavored WebRTC that lets your application preserve context across voice, text, and video. That’s contextual communication, even when you jump from your application over to a communication channel.
  • Twilio IP Messaging – lets us embed advanced messaging in all types of mobile and web apps.

I think Twilio has played to its strengths to position itself really well. They strive to provide composable APIs as building blocks (just like Lego), make it easy for developers to embed communication capabilities, and keep things frictionless for users (no need to install anything).

In the current world, you pretty much have one app for each thing: one app for ordering pizza, one app for calling a taxi. Instead, let Magic, a service built on Twilio, help you: just text your desire to a number, and “hopefully” your wish is fulfilled :) That’s what frictionless looks like!

Another use case is Code for America: users can text a number and get their card balance. Such a quick, easy way to access information provides real “accessibility to information.”

One more use case is the American Red Cross, for disaster response, where they can form and coordinate ad-hoc groups of volunteers, even when the group is fluid.

In retrospect, our Taleo Interview Evaluation demo build can be thought of as a very good use case for providing easy access to information and transactions.

With Twilio’s new releases and capabilities, I look forward to building new contextual enterprise applications for easy access and interaction.

Now here is a fun bit:

As usual, the conference gave every attendee a backpack, and this time with a twist: it included littleBits to power an 8×8 LED panel that can be attached to the backpack.


And during the $Bash event, they had a cloudBit to give out as a prize. Mark and I were determined to win that cloudBit so that we could extend the LED panel display on the backpack to be controlled remotely over the Internet! We found the most efficient way to earn points, which was playing pinball. We worked together, and of course, we got what we aimed for.


And by the way, I became the pinball champion of the night on the floor, scoring over 430,000 points in one game. That was a nice surprise to me too; apparently I have a talent for pinball!

Twilio Signal Conference 2015

Fri, 2015-05-22 08:40

Editor’s note: If you read here, you know we heart Twilio, especially Noel (@noelportugal). Remember the Rock ’em Sock ’em robot build?

This week, Twilio (@twilio) held its first Signal conference, and Raymond and I were there to see what’s new in the world of web-enabled communications and the like.


For those of you not familiar with Twilio, here’s their spiel from their About page:

Twilio powers the future of business communications.  Enabling phones, VoIP, and messaging to be embedded into web, desktop, and mobile software.

For example, they provide REST APIs that can send and receive phone calls and text messages (SMS), allowing you, as a user of their services, to implement these extremely complex features in your mobile, web, or desktop applications with very little effort. They provide many more features and announced a bunch of new ones at the conference; see their website for more details.
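
To give a sense of how little effort that is, here is a minimal sketch using Twilio’s Node helper library; the account SID, auth token, and phone numbers are placeholders.

```javascript
// Minimal sketch of sending an SMS with Twilio's Node helper library.
// The SID, auth token, and phone numbers below are placeholders.
const twilio = require('twilio');
const client = twilio('ACxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx', 'your_auth_token');

client.messages
  .create({
    to: '+15551234567',            // recipient
    from: '+15557654321',          // a Twilio-provisioned number
    body: 'Hello from the AppsLab!'
  })
  .then(message => console.log('Sent SMS', message.sid))
  .catch(err => console.error(err));
```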

I had no idea that Twilio is as big as it is: there were 2,000 attendees at the conference and, apparently, Twilio is the second largest provider of phone numbers in the US, right behind T-Mobile.

The conference started off with a pretty impressive magician’s act in which actual Twilio APIs were used; very original, I thought. It then proceeded with a bunch of keynotes, led by the CEO of Twilio, Jeff Lawson. He stressed the importance of services, comparing them to Lego blocks that, in the right hands, allow you to build anything by composing these services, just like you would do with Lego.

Among the lineup of key speakers was Werner Vogels, CTO of Amazon, who gave a history of how Amazon moved from a monolithic architecture to a more Service Oriented Architecture, then towards microservices, and finally towards an architecture that now aggregates these services into useful components. They had to build an infrastructure to support these changes, which eventually led to what we now know as AWS. Very interesting talk.

One other interesting topic I remember from the opening presentations was Jeff Lawson mentioning that the next big evolution in communication will be for it to become context-aware, i.e., rather than you having to enter your 17-digit account number on your phone and then having to identify yourself again and again to the agent that you get transferred to with some weird question about the street you grew up in, this information should be available when a call gets made, leading to much better quality of service and a much higher throughput of calls.

The rest consisted of product announcements and partners getting to explain how they use Twilio in their businesses. We then attended a bunch of sessions, some more interesting than others; I’ll limit myself here to the more interesting ones.


Image from Twilio

I’m a huge fan of ngrok, so I was delighted to attend a session by the maker of this tool, Alan Shreve. Turns out that it was written in Go, and Alan gave a few examples of how this language made it easier to build these types of tools. He also mentioned that rewriting an existing tool in a new language is a great way to learn that new language, as you limit the scope and can focus purely on the language itself. He also stressed not to be discouraged if you discover that a tool already exists; competition is a good thing and it validates the business case.

Also very informative was a talk from Guillermo Rauch, the creator of socket.io, of which I am also a huge fan. The talk didn’t focus on socket.io itself, but on the challenges you will face when you start building realtime applications (something that socket.io allows you to do): conflict resolution, throughput, diffing, etc.
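
For readers who haven’t tried it, here is a minimal socket.io server sketch; the event name and port are arbitrary choices for illustration, and the hard parts Guillermo talked about (conflict resolution, diffing) are left to you.

```javascript
// Minimal socket.io sketch: relay every received update to all other
// connected clients in real time. Event name and port are arbitrary.
const http = require('http').createServer();
const io = require('socket.io')(http);

io.on('connection', socket => {
  socket.on('update', payload => {
    // Broadcast to everyone except the sender; conflict resolution,
    // throughput management, and diffing are application concerns.
    socket.broadcast.emit('update', payload);
  });
});

http.listen(3000);

// Browser side (for completeness):
//   const socket = io('http://localhost:3000');
//   socket.on('update', applyChange);
//   socket.emit('update', { id: 42, text: 'hello' });
```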

Kate Heddleston gave a talk about one-click deploy for service-oriented architectures, a project she worked on that allows you to deploy, with one click, a fully operational environment, including load balancers, db servers, etc., on Amazon EC2, using Docker. It seemed like an excellent alternative to the likes of Heroku, and I will definitely check it out more in the near future to see if it could be leveraged somewhere for our work in the AppsLab.

Probably the most interesting talk of the whole conference, for me at least, was by Neil Mansilla from Runscope about API testing and debugging. He didn’t just give a sales pitch about Runscope but laid out a whole bunch of tools that you can use to test APIs, from Apache Benchmark to Charles and Wireshark. I am definitely going to check out Runscope!

What I took away most from this conference, though, is that APIs are the future: IT infrastructure is turning into APIs (AWS), electronics is turning into APIs (littleBits), and telecommunication is turning into APIs (Twilio, of course, but also switch). I am convinced that enterprise apps will also evolve in this direction, and enterprise APIs will enable developers to compose and integrate easily with other, non-enterprise APIs, allowing them to build new and exciting applications, just as developers started doing with telecommunications when Twilio appeared.

Another Take on Maker Faire 2015

Wed, 2015-05-20 09:05

Editor’s note: Here’s another Maker Faire 2015 post, this one from Raymond. Check out Mark’s (@mvilrokx) recap too for AppsLab completeness.

I went to the Maker Faire 2015 Bay Area show over the weekend. A lot of similarity to last year, but a few new things.

In place of our spot from last year, there were HP Sprout demo stations. I guess HP was the main sponsor this year.


Sprout is an acquisition by HP: a large touchpad and projector built as an attachment to an HP computer. It is a kind of combination of projector, extended screen, touch screen, and working pad that seems to blend physical things with virtual computer objects, such as capturing objects into 3D graphics.

TechHive’s Mole-A-Whack was quite a good station too – it is a reversal of the classic Whack-A-Mole.


Here’s a video of it in action:

They use an Arduino-controlled mole to whack kids, who hide in the mole holes but need to raise their heads out of the hole cover (which is Arduino-monitored) and reach out to push a button (connected via a MaKey) to earn points.

The signals go into a Scratch program on a computer to tally the winner.

This pipe organ is an impressive build:


As usual, lots of 3D printers, CNC mills, etc. and lots of drones flying.

I also saw many college groups attending the event this year, bringing in all kinds of small builds for various applications.

Maker Faire 2015

Tue, 2015-05-19 09:17

This weekend the 10th Annual Maker Faire Bay Area took place in my backyard, and rather than fighting traffic for 2 days with the 130,000+ attendees, I decided, as I have for the last 9 years, to join them.

Unlike last year, Oracle had no presence at the Maker Faire itself, so I had plenty of time to walk around the grounds and attend sessions.  This post is an overview of what I saw and experienced in the 2 day madness that is called the Maker Faire.

For those of you who have never been to the Maker Faire, the easiest way to describe it is as a mix of Burning Man and a completely out of control hobbyist’s garage, where the hobbyist’s hobbies include, but are not limited to: everything tech related, everything food related, everything engineering related and everything art related, all wrapped up in a family friendly atmosphere, my kids love the Maker Faire.

You can find the tech giants of the world next to the one-person startup, beer brewers next to crazy knitting contraptions, bus-sized, fire-breathing rhinos next to giant cardboard robots, etc. And nobody takes themselves too seriously, e.g. Google was handing out Google Glasses to everybody … Google Safety Glasses that is :-)


My new Google Glasses :-)

The first thing I noticed was that the Faire expanded . . . again. A huge tent housing the Make:Labs was erected on what was a parking lot last year. I didn’t actually get to spend any time in there, but it contained an exploratorium, startup stuff, and a section for Young Makers.

Which brings me to the first trend I observed, makers are getting younger and younger and the faire is doubling down on these young folk.

Don’t get me wrong, the faire has always attracted young kids, and some of them were making stuff, but there seem to be more and more of them, the projects they bring are getting more and more impressive and the faire’s expansions all seem to be to cater to these younger makers.

One of the sessions I attended was called “Meet Some Amazing Young Makers,” where a 14-year-old girl showed off a semi-autonomous robot that could map the inside of caves. She was showing us the second iteration; she built the first version . . . when she was 8! Another young man, 13, built a contraption that solved a Rubik’s cube in under 90 seconds. It wasn’t just that they built these things; they gave solid presentations to a majority-adult audience talking about their builds and future plans.

Another trend that was hard to ignore is that the Internet of Things (IoT) is getting huge and it’s definitely here to stay. There weren’t just many, many vendors promoting their brands of IoT hardware; a whole ecosystem is developing around them.

From tools that let you visualize all the data collected by your “things” to remote configuration and customization. This trend will not just Cross the Chasm, it’s going to rocket right past it.

I attended a panel discussion with Dominic Pajak (Director IoT Segments, ARM), Paul Rothman (Director of R&D at littleBits Electronics), Andrew Witte (CTO, Pebble), Alasdair Allan (scientist, tinkerer) and Pierre Roux (Atmel) about the current state of IoT and the challenges that lay ahead.

One of the interesting points raised during the discussions is that there currently is no such thing as the Internet of Things! All these “things” have to be tethered to a phone or other internet-capable device (typically using BLE); they cannot connect to the internet directly.

Furthermore, they cannot communicate with each other directly. So it’s not really an IoT, but rather the regular “human internet” with regular computers/phones connecting to it, which in turn happen to have some sensors attached to them that use the internet as a communication vehicle. But that doesn’t really roll off the tongue that well.

There is no interoperability standard at the moment, so you can’t really have one device talk to a random other device. This is one of the challenges the panel felt has to be solved in the short term. This could happen with the adoption of IP in BLE or some other mechanism like Fog Computing.

Another challenge brought up was securing IoT devices, especially given that some of the devices could be broadcasting extremely personal information.  This will have to be solved at the manufacturing level as well as at the application level.

Finally, they also mentioned that lowering power consumption needs to be a top priority for these devices.  Even though they have already come a long way, there still is a lot of work to be done.  The ultimate goal would be self sufficient devices that need no external power at all but can harvest the energy they need from their environment.

One such example mentioned is a button/switch that, when pressed, uses the energy you put into pressing it to generate enough power to send an on/off signal to another device.

Massimo Banzi, co-founder of the Arduino Project, also gave a talk (as he does every year) about the State of Arduino.  It seems that a lot of that state is in legal limbo at the moment as there are now seemingly 2 arduino companies (arduino.cc and arduino.org) with different views of the future of the project.

As part of his vision, Massimo introduced a partnership with Adafruit to let them produce Arduinos in the USA. Also, as a result of the legal issues with the Arduino brand name, he introduced a new “sister” brand called Genuino (Get it? Genuine Arduino), which will allow them to keep producing, at least in the US.

Other announcements included the release of the Arduino Gemma, the smallest Arduino ever; the Modulino, an Arduino-like product designed and produced in their Bangalore, India office; and a focus on online tools to manage and program Arduinos.

I also attended a few sessions that talked about the BeagleBone board. I am interested in this board because it bridges the gap between the Raspberry Pi and the Arduino: on the one hand it has a Linux OS, but on the other hand it also has real-time GPIO pins, making it interesting for IoT projects that require this.

It can also be easily programmed using JavaScript (it comes with a Node server built in), which is something I am currently working with. I’ll probably write up another blog post about my findings with that board when I get some time to play with it (yes, I got one at the Maker Faire :-).
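
As a small taste of what that looks like, here is a minimal blink sketch, assuming the BoneScript library that ships with the board’s Node environment; treat the pin name and timing as illustrative.

```javascript
// Minimal BoneScript sketch (assumed to be the library bundled with the
// BeagleBone's Node environment): blink one of the on-board user LEDs.
var b = require('bonescript');

var state = b.LOW;
b.pinMode('USR3', b.OUTPUT);        // USR3 is one of the four user LEDs

setInterval(function () {
  state = (state === b.LOW) ? b.HIGH : b.LOW;
  b.digitalWrite('USR3', state);    // toggle the LED every half second
}, 500);
```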

And finally, some other things you can find at the Maker Faire:

Game of Drones:

Fire and Art:


Robots that solve Rubik’s cubes:

Cheers,

Mark.

Design Time @ Run Time: Apple Watch Put Through Its Paces in Beijing

Mon, 2015-05-18 10:50

Observations on UX research and road-testing wearable tech in the wild. The vehicle for today’s message is Ultan O’Broin (@usableapps), taking advantage of Oracle Applications User Experience events and outreach to evaluate the fitness and health option on the Apple Watch—and to continue his Fitbit Surge exploration—this time in China.

Emirates Apple Watch app used during the OAUX Asia trip.

The Watch Ethnography (say what?)

All the warnings about running in Beijing proved wrong: that my clothes would turn black; my skin would turn grey; I’d need a facemask; I wouldn’t see any other runners; I’d attract the attention of security personnel with my blue hair.

None of this happened.

I shoulda guessed. Running is one of the most “unasked-for-advice” activities out there, usually from non-runners or “joggers.”

Instead, I saw lots of other runners in Beijing’s parks and streets, mostly locals, with a small number of “ex-pats.” At times there were so many runners—and power walkers—early in the morning that I had to weave hard to get by them. On the long, straight streets of Beijing, I saw hardcore runners in action, percentage-wise more than, say, in Dublin.

Running in Beijing. Scene from Temple of Sun Park.

I saw lots of runners sporting colorful running gear; more than I’ve seen in San Francisco, though the styling was far short of the effortless funky co-ordination of the lemons, oranges, and blacks of the Nordic scene. Yes, I’m a running fashion snob. It was kinda hard to tell what fitness devices the Beijing crowd was packing, but I did see some Garmins: a sure sign of serious runners.

I did one run to the Forbidden City and Tiananmen Square, a 10 miler; hauling myself around the Central Business District and diplomatic zones on other days. The eyes of Chinese security guards swiveled to follow me as I strode by, but generally they seemed nonplussed with my blue hair and obvious Apple Watch. I was kinda disappointed I didn’t end up on CNN.

Running to the Forbidden City. Alas, selfie sticks were not forbidden.

The best time to run in Beijing is clearly in the early morning. Public parks were open by 5:30 AM and full of runners and walkers by the time I arrived. There is very bad air pollution in Beijing, but growing up in pre-smokeless-coal-carbon-fuel-ban Dublin, it really didn’t seem that menacing. However, I did detect a markedly poorer air quality later in the day. Your mileage may vary on that one, I guess.

The Device Findings

These runs in Beijing were another opportunity to test out the Fitbit Surge but really to try out the newer Apple Watch in another location. There are other comparisons between these two devices.

Both performed flawlessly, though I preferred the Apple Watch’s outstanding build quality, its configurable glances display, and its superior styling. Henry Ford’s “Any Color As Long As It’s Black” approach, as applied to smartwatches and fitness bands, is a #fashtech #fail by this stage.

Again, I was particularly impressed with the Surge’s rapid GPS acquisition and its ability to hold a signal. I’ve used it on three continents now, and I love its robustness and long battery life.

Fitbit Surge GPS recording from Tiananmen Square run (on iOS)

The Apple Watch’s built-in Workout app proved easy to use for my runs. It has indoor and outdoor options for other activities too, with target metrics for distance, time, or calories, or you can use it for an “open” hustle. I was a little disappointed that the watch app doesn’t let wearers recall more basic run details from the last activity, but being able to see real-time progress was great. I also enjoyed the Apple Watch’s built-in Activity app. Its simple, colorful progress analytics for exercise, moving, and standing were fun to glance at throughout the day, though the data is not for any serious runners or QS fanbois out there.

Using both of these Apple Watch apps together provided a compelling health and fitness experience.

Apple Watch Activity App

Being able to use both devices without carrying a smartphone on a run was pure UX joy. Being freed from dodgy Bluetooth pairing and GPS signal worries, and from the tricky music selection procedure a smartphone requires, saved me 5 minutes (about three quarters of a mile at my speeds) at the start of each run. Being able to see my performance in real time, on the go, without having to fish out a smartphone, was awesome.

That’s what a smartwatch glance UX is all about: being kept in the moment.

The Apple Watch’s battery didn’t last longer than 10 hours on days I ran, though without that kind of exertion it seemed to last most of my waking day, which is reasonable.

What’s Next?

I normally carry a smartphone when running as my music platform, but increasingly to take Instagram images during my journey. The Strava app GPS integration with Instagram is a fave running experience. I did carry my Apple iPhone 5 in Beijing, to take pictures—no, I don’t really carry a selfie stick—and to try out the Strava app for comparison. The Instagram integration seemed to be DOA though.

So, my thoughts on wearable tech super watch evolution, and on the emergence of the standalone wearable device as the way to go for smartwatches, were reinforced by my Beijing experience.

However, a super watch UX needs to be flexible and offer more capability. I’d like to see onboard music and image capture on the watches themselves somehow. Audio notifications for time, speed, distance, and geographic points would also enhance the experience immensely. What such enhancements would mean for the bane of wearable tech UX right now, battery life, let alone device size, remains just another challenge to be solved. And it will be.

And what UX research methodology lessons might be gleaned from running in Beijing with wearable tech? Firstly, don’t assume anything about your ethnographic experience upfront. Try it yourself on a dry run first to iron out any possible kinks. Run at different times of the day, over different distances and routes, in varying weather conditions, and, of course, with different devices along the way. Most importantly, find real native runners to follow around, and record what they do from start to finish, what they do offline as well as online, and with what tools, on their runs.

Running, just like user experience, is about the complete journey, a total contextual experience, not just where your rubber meets the road.

Amazon Echo Official SDK

Sun, 2015-05-17 16:03

Image from wired.com

Back in February I was invited to participate in a pre-beta release of the Amazon Echo SDK. I was under NDA, so I couldn’t share any of my findings here. Now that the NDA has expired, I can share some of the integrations I did with this interesting device.

First of all, I want to comment on the fact that none of the OS-level voice assistants on the market quite get it right when it comes to interacting with third-party integrations. Let me explain: neither Google Now, Siri, nor Amazon Echo will let you interact with a voice “app” unless you “open” or “start” that app first. For example, to start an app in any of the systems mentioned above, I have to say the following:

“[Ok Google], [Hey Siri], or [Alexa] open [name of application]”…”close” or “exit” [name of application]

Then I can start interacting with that application. This interaction paradigm belongs to a desktop model, where you are used to opening and closing programs. Furthermore, these actions are not even part of the mobile experience.

My proposed solution to this problem would be for these systems to adopt an “intent” model, where the user decides what to do with certain defined utterances. For example:

“[Ok Google], [Hey Siri], or [Alexa] do I have any new mail?”

In this case, the user should be able to decide which application handles “mail” by default, either through settings or on the app’s first run.

When you install an app for the first time, the system should ask:

“Would you like to use this app to handle your voice command for mail?”
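To make the idea more concrete, here is a rough sketch in Java of what such an intent registry might look like. This is purely illustrative; the class, interface, and method names are my own and don’t correspond to anything in the current Echo, Google Now, or Siri SDKs:

import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: the assistant keeps a user-chosen default handler per intent.
public class IntentRegistry {

    // A voice "app" that can answer utterances for an intent it registered for.
    public interface VoiceApp {
        String handle(String intent, String utterance);
    }

    private final Map<String, VoiceApp> defaultHandlers = new HashMap<>();

    // Stored when the user answers "yes" to
    // "Would you like to use this app to handle your voice command for mail?"
    public void setDefaultHandler(String intent, VoiceApp app) {
        defaultHandlers.put(intent, app);
    }

    // The assistant maps "do I have any new mail?" to the "mail" intent and
    // routes it straight to the chosen app, with no "open"/"close" step.
    public String route(String intent, String utterance) {
        VoiceApp app = defaultHandlers.get(intent);
        if (app == null) {
            return "You haven't picked a default app for \"" + intent + "\" yet.";
        }
        return app.handle(intent, utterance);
    }

    public static void main(String[] args) {
        IntentRegistry registry = new IntentRegistry();
        registry.setDefaultHandler("mail", (intent, utterance) -> "You have 3 new messages.");
        System.out.println(registry.route("mail", "do I have any new mail?"));
    }
}

The point is the routing table, not the plumbing: once a default handler exists for an intent, the open/close step disappears.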

Voice as the next user interface

Voice recognition and natural language processing (NLP) algorithms have advanced enormously. These systems are getting truly ready for prime time. The use cases are limited only by how far we can imagine interacting with our systems using just our voice.

This is where the Amazon Echo shines. The idea of picking up my phone and commanding it with my voice feels unnatural to me. The Amazon Echo just sits there on my desk and is always ready for my commands. One could argue that Google Now and Siri could do the same, but the Echo’s rich sound presence and visual cues (the RGB ring around the top) are enough to make for a better experience.

Demos

Without further ado, here are two demos of service integration I did with the Echo. I used Temboo libraries for the Facebook, Twitter, and Uber integrations. For IMAP mail, iCal, and Philips Hue, I created my own. All of this, of course, was done in Java; there’s a small sketch of the mail piece after the demos below.

Office Automation

Internet of Things demo
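For a flavor of what sits behind answering “do I have any new mail?”, here is a simplified sketch of the kind of IMAP check a mail integration can run with the standard JavaMail API. It’s not the exact code from the demo, and the host, account, and password are placeholders:

import java.util.Properties;
import javax.mail.Folder;
import javax.mail.Session;
import javax.mail.Store;

public class NewMailCheck {

    // Connects over IMAPS and returns the number of unread messages in the INBOX.
    public static int unreadCount(String host, String user, String password) throws Exception {
        Properties props = new Properties();
        props.put("mail.store.protocol", "imaps");

        Session session = Session.getInstance(props);
        Store store = session.getStore("imaps");
        store.connect(host, user, password);

        Folder inbox = store.getFolder("INBOX");
        inbox.open(Folder.READ_ONLY);
        int unread = inbox.getUnreadMessageCount();

        inbox.close(false);
        store.close();
        return unread;
    }

    public static void main(String[] args) throws Exception {
        int unread = unreadCount("imap.example.com", "user@example.com", "app-password");
        // This count becomes the spoken response, e.g. "You have 3 new messages."
        System.out.println("You have " + unread + " new messages.");
    }
}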

So would you get an Amazon Echo?

Four Weeks and a Day with the Jawbone UP24

Wed, 2015-05-13 12:46

After three weeks with the Nike+ Fuelband and four weeks with the Basis Peak, I moved on to the Jawbone UP24.

The UP24 has been out for quite a while now. Back in January 2014, Noel (@noelportugal) and Luis (@lsgaleana) did a cursory evaluation, and not much has changed in the Jawbone lineup since then.

At least, not until recently, when the new hotness arrived: the UP2, UP3, and soon the UP4, pushing the venerable UP24 into retirement. Honestly, I would have bought one of the new ones (because shiny objects), but they had yet to be released when I embarked on this journey of wearables discovery.

After starting out with a fitness band and moving to a super watch, going back to the comparatively feature-poor UP24 was a bit shocking initially. I had just become accustomed to having the time on my wrist and all that other stuff.

However, what it lacks in features, the UP24 more than makes up for in comfort. Makes sense: fewer features, smaller form factor. But even compared to the other fitness bands I’ve worn (the Fuelband and Misfit Shine), its rubbery industrial design makes it nice to wear.

Aside from comfort, surprisingly, one feature that made the UP24 sticky and enjoyable was the Smart Coach, which I expected to dislike. Jawbone has a very usable mobile app companion that all its devices share, and inevitably, that is what retains users, not the hardware on the wrist.

Overall, despite its relative age, I enjoyed wearing the UP24. I even decided to wear it a bit longer, hence the extra day.


Here are my observations.

The band

Yes, there’s yet another initial software install required to configure the UP24 for use the first time. Yes, that still annoys me, but I get why it’s needed.

As I’ve said, the band is comfortable to wear, mainly because of its flexible, rubber material. Smart Coach reminded me a few times to be gentle with the band, saying something about there being a bunch of electronics packed in there.

I’m not sure if this was a regular reminder or if the band somehow detected that I was being too rough, hoping for the former. The Coach also reminded me that the band isn’t waterproof. While I did get it wet, I wasn’t brave enough to submerge it.

These reminders made me curious about the sensors Jawbone packed inside the UP24, and while looking for a teardown, I found this cool X-ray of the band.


Image from Creative Electron

Impressive industrial design. One minor correction: the audio plug is 2.5 mm, not the standard 3.5 mm, something Noel and Luis found out quickly. In my use it didn’t really matter, since the UP24 comes with a custom USB-to-2.5 mm audio adapter for charging.


The UP24 uses a button to set specific modes, like Stopwatch (for exercise) and Sleep. These took a bit of learning, like anything new. I expected to have push-sequence failure, i.e. using the wrong push and hold combination, but no.

Aside from being red, which seemed to fade to orange, the band is unobtrusive. I found myself wearing it upside down to allow for scratch-free typing, a very nice plus.

The fit did seem to loosen over time, probably just the rubber losing some of its elasticity. Not a big deal for a month, but not a good long-term sign.

The battery life was nice, about nine days initially, but the app seems to misrepresent the remaining charge. One night, it reported five days charge left, and overnight, the band died. Same thing happened a week later when the app reported seven days of charge.

Because the UP24 isn’t constantly connected over Bluetooth (to save battery, I guess), maybe the charge wasn’t being reported accurately, although when the app opens, the band connects and dumps its data right away.

Bit of a mystery, but happily, I didn’t lose my sleep data, which tells me the band still had some charge. The sleep data it collected on those nights wasn’t as detailed as the other nights. Maybe the band has some intelligence to preserve its battery.

Sleep data from a low battery. Sleep data from a charged battery

The UP24 didn’t attract the same amount of curious attention that the Basis Peak did, thank you Apple Watch, but a few people did ask what Fitbit I had, which tells me a lot about their brand recognition.

Is Fitbit the Kleenex of facial tissue? The Reynolds Wrap of aluminum foil?

The app and data

Jawbone only provides the data collected by its bands, and the Smart Coach, through its mobile apps. The web app only manages account information, which is fine, and as a bonus, you can download your device data in CSV format from it.

There are, however, several different Jawbone UP mobile apps, so finding the right one was key.

The app is quite nice, both visually and informationally. I really like the stream approach (vs. a dashboard), and again, Smart Coach is nice. Each day, I checked my sleep data and read the tips provided, and yeah, some were interesting.

The stream is easily understood at a glance, so kudos to the UX. Orange shows activity, purple sleep. There are other things you can add, weight, mood, etc. I did those for a few days, but that didn’t last, too lazy.


Each item in the stream can be tapped for details.

Unlike the Fuelband and the Peak, the UP24 uses very minimal game mechanics. The Smart Coach did congratulate me on specific milestones and encourage me to do more, but beyond that, the entire experience was free from gamified elements.


Did I mention I liked the Smart Coach? Yeah, I did.

In addition to the stream, the UP24 provides historic data as days and aggregated into months and years, which is equally nice and easy to understand.


Jawbone has an integration with IFTTT among many other apps, making its ecosystem attractive to developers. I didn’t find any IFTTT recipes that made sense for me, but I like having the option.

There’s social stuff too, but meh.

Data sync between the band and app was snappy. As I mentioned above, the band isn’t always connected to Bluetooth, or at least, you won’t see it in the Bluetooth settings. Maybe it’s connected but not listed, dunno, but Noel would.

Minor downsides I noticed: sleep tracking is an absolute mystery. The UP24 lists both light and deep sleep, but who knows how it can tell. Not that I really need to know, but looking at its guts above, what combination of sensor data would track that?

Speaking of sensors, nearly every run I completed on a treadmill showed a wide variance, e.g. the treadmill said 3.25 miles, whereas the UP24 said 2.45 miles. I tried calibrating the band after each run, but that didn’t seem to help.

I saw the same variance with steps.

Not a big deal to me, and definitely a difficult nut to crack, but some people care deeply about the accuracy of these devices, like this guy who filed a lawsuit against Fitbit for overestimating sleep.

What I’m finding through personal experience and stories like that is that these little guys are very personal devices, much more so than a simple watch. I actually felt a little sad to take off my UP24.

I wonder why. Thoughts?

Find the comments.

Watch-First Design and Development

Tue, 2015-05-12 17:36

 

So as you might already know, it has been all about THE Watch these past days.

Launcher Home

Having this new toy on my wrist made me want to explore the possibilities, so I set out to push my skill boundaries and dove right into WatchKit development. To kick the tires, I spent this past weekend doing what I like to call “Noel’s Apple Watch weekend hackathon,” my favorite kind of event, because somehow I always end up as a finalist.

Detail Glance

As the title suggests, I focused on watch-first design (remember mobile-first? That’s so 2014!). My goal was to start with a Watch app as the main feature and not even worry about a mobile companion app. As it stands now, the Apple Watch, as well as Android Wear, relies on a “parent” mobile app.

The result of my weekend fun was an app I simply called “MyFamily.” The idea is to add simple reminders, tasks, goals, etc., for each individual member of my little family (whose names, btw, have been changed). The app includes an Apple Watch “Glance,” which is a sort of widget or live tile with very limited dynamic content and interaction.

Having such limited real estate and features really makes you think twice about how you want to present your data. The WatchKit interface objects are limited to a small subset of their parent iOS counterparts. Most of the layout can be done by grouping labels (WKInterfaceLabel), images (WKInterfaceImage), and the couple of other interface objects available (tables, separators, and buttons).

Xcode

Having no keyboard (thank goodness!), one needs to rely on voice input to enter new data. During my tests the voice recognition worked as advertised. Also during this exercise, I finally realized that apps can display a “contextual” menu when you “force touch” the screen. I opted to add a text hint (to delete an item), because even after a couple of weeks of wearing the watch I hadn’t realized this feature was available.

Speech Menu

After creating my storyboard layouts, it was almost trivial to add data to them. I created custom classes to bind to each interface controller, overrode the lifecycle events (awakeWithContext, willActivate, didDeactivate), created a “member” object and an “event” object, and finally added data to the tables with something like this:

// Populate the WKInterfaceTable with one row per family member.
- (void)setupTable
{
    _membersData = [Member membersList];
    // Tell the table how many rows to create and which row controller class to use.
    [tableView setNumberOfRows:_membersData.count withRowType:@"MemberRow"];
    for (NSInteger i = 0; i < tableView.numberOfRows; i++)
    {
        // Bind each row controller to one member's image, name, and event count.
        NSObject *row = [tableView rowControllerAtIndex:i];
        Member *member = _membersData[i];
        MemberRow *memberRow = (MemberRow *) row;
        [memberRow.memberImage setImage:[UIImage imageNamed:member.memberImage]];
        [memberRow.memberName setText:member.memberName];
        [memberRow.memberEventCount setText:member.memberEventCount];
    }
}

In conclusion, the WatchKit DX (development experience) is pretty smooth, thanks to the limited, minimalistic set of tools available to you. I suspect I will add more functionality to this app in the future, adding “mobile-second” and “web-third” designs. Oh, and maybe even going “public” and putting it in the App Store.


Photo Proof
