Editor’s note: Our team participated in many of the Summer of Innovation events held by Laurie (@lsptahoe) and her team. Here’s a report from Luis (@lsgaleana) on the Mexico Development Center (MDC) ShipIt held in mid-August. Enjoy.
Oracle Mexico Development Center (MDC) has grown to bring together some 700 employees, and it continues to grow weekly. However, there is only a single point of entry for every non-employee: the receptionist.
Every visitor who comes to MDC goes through roughly the same process. The visitor talks to the receptionist. The receptionist looks up the employee being visited in the corporate directory. The employee meets the visitor in the lobby. It's a pretty straightforward job, and it's similar for regular visitors, delivery personnel and interview candidates, which are the user roles we picked for our ShipIt project.
The days before the ShipIt event, Rafael Belloni (Rafa) gathered the Mariachis team, Osvaldo Villagrana (Os), Oscar Vargas, Juan Pablo Martinez (Juampi) and myself, and talked to us about his idea of a virtual kiosk that would serve as an entry point for every visitor. It would consist of a screen with a simple but elegant user interface, that in the background would take care of the tedious and repetitive job of finding people in the corporate directory, contacting them, printing badges, saving the information into a log and even entertaining visitors with a video about Oracle and all its wonders.
Technically, we wanted to have an Android app loaded onto a Nexus Player, which Oscar owned, connected to a touch-screen monitor, that Rafa had recently bought (with the ShipIt in mind, of course). This 3-part device would represent the kiosk. In the background, we would have web services to scrape the corporate directory, notify employees via e-mail, IM and text, print a badge for the visitor, and save all of the information into a log. One final part was a web panel, where HR personnel could schedule interviews and assign interviewers.
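The backend flow we had in mind can be sketched in a few lines. This is a minimal Python sketch, not our actual implementation; the directory, notifier, and log file are hypothetical stand-ins:

```python
import datetime

def handle_visitor(visitor_name, employee_name, directory, notify,
                   log_file="visit_log.csv"):
    """Hypothetical kiosk flow: look up the employee, notify them, log the visit."""
    # Stand-in for scraping the corporate directory.
    employee = directory.get(employee_name)
    if employee is None:
        return "Employee not found - please see the receptionist."
    # Notify the employee over whatever channel is available (e-mail, IM, text).
    notify(employee["email"], f"{visitor_name} is waiting for you in the lobby.")
    # Append the visit to a simple log.
    with open(log_file, "a") as log:
        log.write(f"{datetime.datetime.now().isoformat()},"
                  f"{visitor_name},{employee_name}\n")
    return f"{employee_name} has been notified."
```

In the real project, the directory lookup was a scraper over the corporate directory and the notification fanned out to e-mail and IM; the sketch only shows the shape of the flow.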
The day of the event, we started putting things together. However, as is common at these events, issues arose quickly.
To enable touch on the touch-screen monitor, a USB cable was needed between the monitor and the device controlling it. But the Nexus Player has no USB input. We debated our options and concluded that we would try to make it work (somehow). In the meantime, we would try to get Android running on a Raspberry Pi connected to the monitor. If all else failed, we would build a web app running on a laptop. In the end, Oscar came up with BlueStacks, an Android emulator for the PC, on which we could easily install an Android app. We went with that.
Rafa worked on the web panel for the interviews; Os worked on the web services, the pdf conversion, and IM and e-mail communication; Juampi did the design; and Oscar and I made the Android app. We each ran into our own problems, but we discussed them as a group. Everyone proposed solutions, and we always went with the easiest and quickest.
The day went by and we called it for the night around 3am.
The next day, after breakfast, the kiosk was coming together. The design was ready, we had screens to show, the web panel was working and most of the web services were in place. We just needed to put everything together and polish some features.
We rehearsed the presentation about an hour before the scheduled time. Rafa did all the talking, I controlled the demo, and Os showed the e-mails, IM messages and pdf. We were number 3 of 5 to present. All teams cast their votes. The time came to announce the winners: 3rd, UAE Tacklers; 2nd, Team Roller; 1st, Mariachis.
In the end, we trimmed the project. We decided that printing the badge was a bit too much – generating a pdf was cool enough. We also discarded the texting, the logging and the Oracle video. However, these features can easily be added in production.
This is how it looked:
Editor’s note: Our team participated in many of the Summer of Innovation events held by Laurie (@lsptahoe) and her team. Here’s Part 1 of Mark’s (@mvilrokx) project from the IoT Hackathon held at Oracle HQ, July 30-31. Enjoy.
Unfortunately our entry didn’t win any prizes, but of course that wasn’t the reason I entered (no really, it wasn’t!). I am trying to learn more about the Internet of Things, and what better way to improve my skills than to test them against my peers. In this post I’ll delve into what I learned and how I am planning to use this knowledge going forward.
A few weeks before the Hackathon, I was approached by Diane Boross with an idea for the Hackathon which intrigued me because it was simple, but could have far-reaching consequences. California is suffering its worst drought in over a century, and we are all asked to reduce our water usage by an average of 20% over previous years.
The problem is that, as a consumer, be that an individual or a company, you have no clue how much water each individual appliance is using; you only know the total amount of water your house or company is consuming. So how can you know what to use less of, or turn off completely, in order to save 20%?
The answer is you can’t; it’s a process of trial and error. Oh sure, there are lots of tips and suggestions from the state and the water utility companies, but those are very generic and not at all tailored to individual situations.
A very concrete example: I stopped watering my lawn a few months ago, and it is now very dead indeed. However, this turned out to be nowhere near enough to save 20%, as I thought it would be; in fact, it barely made a dent (I have a very small lawn). It’s quite possible I let it die for no good reason.
And now, I have to figure out where else to save water and I don’t know where to start.
Diane’s idea was to create a device that can measure the water usage of each individual appliance, something you can already do for electricity usage, but for some reason not for water.
It would be a smart device that connects to the internet and transmits this data in real time to a central hub that would collect this data and present it in a user friendly manner to the consumer. This would allow that person to then make much more informed decisions about how to save water.
Once you have this data, you can conjure up many other use cases, e.g. you could pit individuals or even whole neighborhoods against each other in a friendly competition of “Who can save the most water in August?”
You could also give much more tailored advice to consumers, e.g. if their lawn watering system is using 50% of their overall water usage you can tell them to turn it a bit lower, preserving the lawn but still saving a bunch of water. Or in my case they could have told me not to even bother and instead shorten my shower routine.
Unusual high water usage of an individual appliance, e.g. due to a leak, would also be much easier to detect, instead of being drowned in the noise – pardon the pun.
You could create a water budget per device and provide a feedback mechanism at that device that tells you, in real time, where you are relative to your budget. As you can see, once you have this data at your disposal, the possibilities are endless.
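A per-device budget check like that could be very simple. Here is a rough Python sketch with made-up thresholds and numbers, not something we actually built:

```python
def budget_status(used_liters, budget_liters):
    """Compare a device's cumulative usage to its water budget and return
    a simple feedback level the device could display."""
    ratio = used_liters / budget_liters
    if ratio < 0.75:
        return "ok"        # comfortably within budget
    elif ratio <= 1.0:
        return "warning"   # nearing the budget
    return "over"          # budget exceeded

# e.g. a lawn-watering system with a hypothetical 500 L weekly budget
status = budget_status(400, 500)
```

The real-time part would just be re-running this check as each usage reading arrives and lighting an indicator at the device accordingly.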
My initial thought was to use a Water Flow meter to measure the flow through the pipe, connect it to a Raspberry Pi which would in turn connect to the Internet and send the data to a backend server for processing and eventually a database for storage.
The problem with that solution however was that it required plumbing; each device would have to be installed by a plumber, inline with the water pipe that runs to the appliance you want to measure the flow of. I consulted with Joe Goldberg in the next door office who agreed that this would never scale and he proposed instead to “listen” for water running through the pipe using a simple and very cheap piezo.
A piezo measures vibrations, kinda like a microphone, and since water that runs through a pipe causes that pipe to vibrate, putting a piezo on that pipe should, in theory, allow us to not only verify that water is flowing through the pipe, but also how much water is flowing through it.
The latter turned out to be a bit harder than we theorized and was the subject of many experiments over the course of a few weeks leading up to the Hackathon. For his efforts, Joe was recruited by our team which by now consisted of Diane, Joe and myself.
During these experiments, we tried to correlate the vibrations of the piezo with the actual flow rate through the pipe, which we were measuring precisely with a flow meter, using some regression analysis.
Once we established this correlation, we could use it as a model to extrapolate flow rates using just a piezo stuck to a pipe, no plumbing needed. We assumed that we would probably need several models for different environmental circumstances, e.g. the type of pipe used, how long the pipe is, etc., but for the Hackathon, proving the concept would obviously be enough.
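Conceptually, the regression was nothing exotic: fit a line relating piezo amplitude to metered flow, then use that line to estimate flow from amplitude alone. A pure-Python sketch with synthetic numbers (our real readings came from the piezo and the flow meter):

```python
def fit_linear(xs, ys):
    """Ordinary least-squares fit of y = a*x + b."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return slope, intercept

# Synthetic example: piezo amplitude vs. flow rate measured by the flow meter.
amplitudes = [0.1, 0.2, 0.3, 0.4]     # piezo readings (arbitrary units)
flow_rates = [5.0, 10.0, 15.0, 20.0]  # liters per minute from the flow meter
a, b = fit_linear(amplitudes, flow_rates)

# With a model in hand, a bare piezo reading becomes an estimated flow rate:
estimated_lpm = a * 0.25 + b
```

The synthetic data here is perfectly linear, so the fit is exact; the whole problem in practice was that our real piezo readings never showed this kind of relationship.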
Unfortunately we never found a significant correlation using the cheap piezos, all we were able to do was determine that water was running through the pipe, or not, i.e. whether the device was on or off. It turns out that in practice, in most cases, this can actually be used to measure flow rate as well.
Most appliances that use water either use it all (on) or none (off). Think about it, when you flush your toilet, it fills up again as quickly as it can, basically at the maximum flow capacity the pipe can handle. The same is true for a dishwasher, a washing machine and even most showers. I cannot turn down my shower, it always flows at the same rate, all I can control is the temperature.
The only exception to this rule is a tap where you can manually control the flow. Considering this and our failure to establish a good model, we fell back on assuming that if we measure vibrations, the device is using the maximum amount of water that can flow through a 1/2 inch pipe, which was about 23 liters per minute (or for the decimally challenged, about 6 gallons per minute).
We would simply measure the duration the appliance is “on” and then calculate the actual flow, e.g. if the appliance would be on for a minute, we would calculate that it used 23 liters.
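The fallback calculation is trivial. A Python sketch of the idea, where the 23 L/min figure is the assumed maximum flow of a 1/2 inch pipe mentioned above and the vibration threshold is made up:

```python
MAX_FLOW_LPM = 23.0         # assumed max flow through a 1/2 inch pipe, L/min
VIBRATION_THRESHOLD = 0.05  # made-up piezo amplitude above which the device is "on"

def liters_used(samples, sample_interval_s):
    """Estimate water usage from piezo samples: total the time the
    device is 'on' and multiply by the assumed maximum flow rate."""
    on_samples = sum(1 for s in samples if s > VIBRATION_THRESHOLD)
    on_minutes = on_samples * sample_interval_s / 60.0
    return on_minutes * MAX_FLOW_LPM
```

So an appliance detected as "on" for one minute is booked at 23 liters, exactly as described above.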
We then had to figure out how to connect the piezo to the internet. Initially I was thinking of using the provided Raspberry Pis, but on the margins of the Scavenger Hunt that we ran for KScope15, Noel (@noelportugal) mentioned to me the existence of the ESP8266 chip.
This is essentially a dirt cheap wifi chip that comes with a bunch of GPIO pins and is powerful enough to be programmed, right on the chip.
There are a bunch of breakout boards that use this chip as their basis and make development even easier. I went for the NodeMCU board, partly because I thought it would allow me to code in node.js (which is incorrect, it uses Lua, a language that has a lot in common with JS, but is not JS).
More important, though, was the price: the breakout development boards cost ~$10, but the actual chip costs less than $3 (and dropping). Given that our piezo cost 20 cents, we would have a potential IoT product that costs less than $4 to produce.
Once we settled on the sensor and the board, we started to focus on our use case.
We contacted Oracle Facilities, explained what we were trying to do and asked them if they could give us some real numbers of water usage at Oracle to use in our presentation and as a base line for our product.
To our surprise, not only did they provide us the information we needed, they also wanted 1,500 devices from us to test themselves. We had to disappoint them at that point, explaining to them what a Hackathon is. However, this clearly demonstrated to us that this would be a useful product, regardless of what would happen during the Hackathon.
We also got another interesting use case from them, essentially A/B testing for water using appliances; they are in the process of installing low flow toilets and faucets on some floors and they immediately realized that this would be the perfect device to confirm whether these expensive upgrades are cost effective, compared to regular appliances.
This is very hard to do right now as, again, sometimes the savings get lost in the overall usage, which is all they can currently measure.
That concluded all the investigative work we did for the Hackathon. In the second part of this blog post I will drill down into the technical aspects of the solution we eventually presented to the judges.
“The wind was a torrent of darkness among the gusty trees,
The moon was a ghostly galleon tossed upon cloudy seas…”
—The Highwayman, Alfred Noyes
To Luis, Osvaldo, and me, this beautiful poetry calls to mind one thing. Quadcopters. We started flying these machines about a year ago. The Syma X1, for example, costs about $30 and I cannot recommend it enough. Guilty feelings come with this amount of fun and exhilaration, at this price. Relax. Order now!
Trust me here. You can justify the use cases and guarantee the safe, professional operation of a quadcopter in a corporate environment. And you can do it with less than 100 emails, discussions, and offhand comments executed strategically over the course of a year or so. A quadcopter seed will, if properly nurtured, sprout a seedling of official approval. Only minutes after approval, that seedling grows propellers.
The Parrot hatches. I hear its first peeps–beeps–and 2 seconds later it flies the nest. I close my front door behind me, the Parrot hovers outside. Surreal excitement, I fly. Flight! Flying! Crashing, fixing, caring, charging. Repeat, repeat,…
But there was another problem. We could barely control the Parrot. We did not shell out for a Parrot with GPS, and we naively thought that a magnetometer would provide the data needed to precisely control flight. But that’s not the way the world works, baby. How many times did our code raise the Parrot into the air only to flip it over on its back or careen it into a wall? So many times, it was truly disheartening. At first we didn’t even have a clue what the problem might be. In our minds, as software guys, you tell the drone to go forward for 500ms and it goes directly forward as commanded. But what’s “forward” mean? The compass was not accurate and our other efforts…
Will you believe me if I tell you it worked? It did! When carrying our red target, the Parrot would follow us around the room. Our simple code performed smoothly. Amazement, disbelief, and joy! Strangely, surreally, the Parrot seemed alive. All by itself, like a puppy coming to play, it knew how to get around.
Then it stopped working. We had split the code into parts: control, vision, sensors, UI. At one point, the code worked properly but then we got fancy. In our confidence we quickly added frills like a little dance, and a flip! Then the Parrot kept losing sight of the target. We only had two batteries and we did not have enough charge to debug. In retrospect, I think it started acting poorly when I tweaked the RGB color identifier function. We needed a video and had just enough charge for this…
See how jerky it acts? That’s me controlling it. When the code works, it runs much more smoothly. Given these complications, and some stiff competition, we did not win anything at the hackathon. I think Luis, Osvaldo, and I are OK with it though. The Parrot works well, and if we get some more time to mess with it, it will fly autonomously once again.
I’m very pleased to announce that our colleague and friend Kathy recently pushed live a page on oracle.com under the OAUX section, dedicated to our Emerging Technologies team and our work.
Big moment for us. I feel like a made guy.
Don’t worry. This little blog will continue to be our home and stream of consciousness, but now, we have another, more structured home, focused on our projects and thinking.
Like that’s our home, all neat and organized, and this is our garage, where we tinker and talk about interesting stuff, all disorganized, with exposed wires and cracked open cases.
Anyway, the content there won’t be as fluid as what you read here, but it won’t be a static collection. We’ll keep it updated as we progress on our projects and find new shiny objects to chase.
The theme was Internet of Things (IoT) and this marked the first time that they organized a hackathon specifically for an outside partner, eProseed. All the previous hackathons were internal Oracle events. Initially the plan was for us Oracle folks to go over and mentor teams as needed, but later on, the decision was made to place us as technical resources in a team and actually participate. After some initial hiccups with my own team, I ended up in a team with Lonneke Dikmans (@lonnekedikmans), Karel Bruckman, DJ Ursal and Antonio Aguilar. Here’s what happened next …
If you have ever been to the Netherlands, you probably noticed they like bikes … a lot! This is the first thing you see when you get off the train in Utrecht:
Not exactly organized.
Lonneke’s team’s idea was to solve this with some IoT ingenuity. We tried to solve the following issues:
- For the individual:
- Where can I park my bike, i.e. where are there free bike parking spots?
- Where did I park my bike?
- How do I pay?
- For the city/municipality/parking management company:
- Where are there free parking spots?
- What is the overall usage, for parking management/planning?
- How long has a bike occupied a parking spot, for payment?
The tools at our disposal were a Raspberry Pi with a fully loaded GrovePi kit and an Oracle Mobile Cloud Service (MCS) account. We were free to use any other tools as needed, but we decided to stick with these as they were freely available. Plus, we had direct access to domain experts on site.
We used sensors in the GrovePi kit to detect a bike’s presence in the bike rack. As soon as we detected a bike being put into the rack, we used a Raspberry Pi camera to take a picture of the (presumably) bike owner and identified the person using her/his own phone. Users of the parking system had to register themselves so we could identify and charge them, but this part we did not build as part of the hackathon. We then sent a notification to the person’s phone using MCS. This notification contained the picture we took, the location of the bike and the time it was parked.
The location of the bike could be traced using a phone app and a web application. This app could also be used to keep track of how long the bike had been parked and how much this was going to cost the user.
As soon as the bike was removed from the bike rack, another notification would be sent to the bike’s owner through MCS informing her/him of the usage time and how much the charge would be, and the system would automatically charge the user.
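Put together, the rack-side logic amounted to a small pair of event handlers. A Python sketch of the shape of it; the camera and notification callables are hypothetical stand-ins (the real versions used the Pi camera and MCS push notifications), and the parking rate is made up:

```python
import time

RATE_PER_HOUR = 0.50  # hypothetical parking rate

def on_bike_parked(rack_id, take_photo, notify, now=None):
    """Bike detected in the rack: photograph the (presumed) owner
    and notify their phone with photo, location and time."""
    parked_at = now if now is not None else time.time()
    photo = take_photo()
    notify(f"Bike parked at rack {rack_id}", photo, parked_at)
    return parked_at

def on_bike_removed(parked_at, notify, now=None):
    """Bike removed: compute duration and charge, notify the owner."""
    removed_at = now if now is not None else time.time()
    hours = (removed_at - parked_at) / 3600.0
    charge = round(hours * RATE_PER_HOUR, 2)
    notify(f"Parked {hours:.1f} h, charge: {charge:.2f}", None, removed_at)
    return charge
```

In our build, the "bike detected" trigger came from a GrovePi sensor in the rack and the notifications went out through MCS; the sketch just shows the park/remove event flow between them.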
Besides the app for end users, we also had a dashboard that could be used by the parking management company. This could be a municipality or a for-profit company. The dashboard web application gave an overview of bike distribution throughout the company’s territory, e.g. the city.
This would allow the company to direct cyclists to places where there were free bike racks. Over time, we would also collect bike rack usage data that could be used to enhance parking infrastructure and overall usage, e.g. with this data you can predict usage peaks and proactively redirect cyclists, plan where more parking is needed or inform city planners on how to avoid “parking hot spots.”
In the end, our entry was good for third place.
Mark.
Despite the amount of iOS and Apple Watch chatter here lately, we still have a dedicated Android user base on the team, including me.
If you read here, you’ll know that projecting an Android device screen to an audience has always been important for us, and usually we have more than one screen to project. Over the years, I’ve used several tools for this: BBQScreen, Android Screen Monitor and, lately, Chromecast’s screen mirroring feature, which is handy because it cuts out the laptop middleman.
The latest addition to that list is Vysor, a Chrome app from developer Koushik Dutta. You may remember him from such projects as CyanogenMod, AllCast, Helium, ROM Manager, etc. He’s kind of a big deal in the Android modding community because he’s constantly filling Android gaps and improving the Android experience.
And Vysor doesn’t disappoint. Just install the Chrome app from the Chrome Web Store, set the developer options on your Android device and plug it into your machine.
Vysor opens your device’s screen and, sit down for this one, allows you to control it via mouse and keyboard. You read that right. This is a huge feature that none of the other options I’ve tried offer.
As an aside, Chrome apps are the bee’s knees. They are truly cross-platform, can run in their own windows outside Chrome and can run when Chrome itself is closed. Seriously, I can quit Chrome and launch the Vysor app on its own. I just did this on OS X and again on Ubuntu.
Did you know that? Although I did, and I shared the enthusiasm of the tech press back in late 2013, I had forgotten. Google hasn’t done a very good job promoting this awesome capability of the Chrome ecosystem.
Not to wander too far off topic, but it is a shame to see Chrome take a back seat to Android when the two are equally useful.
Moving on, Vysor is in beta. It works really well with one device from what I’ve seen. However, once you get two devices connected, it gets confused.
I might have something set wrong, or maybe it’s that annoying Android File Transfer app that’s required (is it still?) for OS X.
Might be user error.
Anyway, if you’re an Android user, check out Vysor. It’s awesome.
Find the comments.
Editor’s note: If you read here, you might recall that we have dos hermanos en Guadalajara, Luis (@lsgaleana) y Osvaldo (@vaini11a). Last month, the Mexico Development Center (MDC) celebrated its fifth anniversary. Here’s to many more. Reposted from VoX.
As you may know, Oracle has a couple Development Centers around the globe, and one of them is in Guadalajara, México. The Oracle Mexico Development Center, aka Oracle MDC (where I work), was 5 years old on Aug. 18, and the celebration was just as tech-y and fun as it can be for a development center.
The celebration was split in two events, an open event called “Plug in” and a private event (just Oracle staff). Topics were related to what we love: Database, Cloud and, of course, User Experience. Some of the guest speakers were Hector García, who was chairman of the Computer Science Department at Stanford University; Javier Cordero, Managing Director of Oracle México; Jeremy Ashley (@jrwashley), Group Vice President, Applications User Experience, and Erik Peterson, General Manager of Oracle MDC.
Late in the afternoon, the real celebration started! We got to celebrate with all of our friends, colleagues, mates and the whole staff of Oracle MDC, and we all got to be in the anniversary picture of this awesome team, team Oracle!
The likes of Webvan pioneered grocery delivery back in the dot com days, and fun fact, Amazon owns and continues to operate what is left of Webvan. In those halcyon days, I used Webvan quite a bit. Sounds a bit silly, but being able to order groceries online and have them delivered within the hour felt like living in the future.
Alas, like jetpacks, the future had to wait a bit.
Well, the future is now, erm, again, at least in some cities. Amazon Prime Now launched in Manhattan in December, and I’ve followed its progress eagerly awaiting the happy day when Prime Now launched in Portland. That day came on August 26, and last week, I tried out Prime Now for the first time.
Prime Now offers Amazon Prime members in participating cities free two-hour delivery of various items, not just groceries, from Amazon and several other local stores. You can pay $7.99 for one-hour delivery.
Perfect for scratching that itch for (near) instant gratification, or (almost) last-minute gift ideas.
Prime Now only has a mobile app for now, which makes browsing a bit cumbersome, and item availability for delivery depends on the stores. Eventually, we found all the stuff we needed and upon checking out were presented with several delivery times, most of which were taken. Makes sense, it’s a very new service, so I’m sure the stores are measuring demand before committing too many resources.
We chose the 6-8 PM delivery window, and around 6, the person collecting the order texted us to say two items we’d chosen were out of stock. He recommended substitutions, and the order was on its way.
The order arrived easily within the 6-8 PM window, and everything was as expected. One nice feature, the Prime Now app allows for a tip on checkout, so no awkwardness at the door.
There were some unexpected hiccups, but we agreed that Prime Now was worth future tries. I expect that as the service gets more use, demand will prompt the ironing out of the hiccups we encountered.
Like Domino’s, Amazon is moving fast to remove friction between its goods and services and its customers with testing services like drone delivery, restaurant food delivery, the growing list of features the Echo offers, the Dash restocking buttons and its Fresh service.
These services overlap each other, which speaks to Amazon’s culture, but presumably, we, the consumers, will benefit.
Find the comments.
Editor’s Note: Reposted from Ultan’s (@ultan) Tumblr, a great read. Ultan knows his fitness (and fashion), so his rousing endorsement of the Peak is legit. Read my impressions of the Basis Peak for more. Since I wore it last, Basis has updated the watch’s firmware to add some pretty cool features. Enjoy.
“You’ve been running! Take a selfie, see how exercise changes you!” I smile when that message pops into the notifications list on my Android smartphone after using the Basis Peak. All part of what endears me to using it even more to track my activity and sleep patterns.
This “smile-o-meter” approach of the Basis Peak Photo Finish feature is a great mix of the analog and digital, leveraging well-familiar smart phone functionality to enable me to choose to add even more “in the moment” context that adds to creating a better user experience.
Not that I need encouragement to take selfies. But the qualitative self, fun, and motivational power of selfies, even if you do not want to share them, should not be underestimated in today’s fitness world. On the other hand, there is evidence of less than attractive dimensions to the phenomenon.
I’ve documented my earlier pains in setting up the Basis Peak, now resolved with Basis Help team support through Twitter and an onsite visit. Now up and running (#seewhatIdidthere), I really like the thing. I am glad I stuck with it. Would others persevere after my initial experience?
I love the look of the device itself, its shape, sleek finish, and the option to add other colored sporty straps (I have the green and white SportVent straps now). The device UI is compact, easy to use, glanceable, and supports simple gesture interaction. Although I think the lack of a color UI takes away from the #fashtech aesthetic, most people remark on how great the Basis Peak looks on.
The phrasing of some of the messages, that shouty “CHARGE ME!” in particular, seems out of step with the crafted look and a modern UX for the mobile, selfie, visual, fit’n’finish world. Nothing major though.
I also enjoy the visualizations presented from the sensed data of my activity, though some may be a little dense for some people to grasp. These visualizations allow me plenty of insight to track my progress and are an easy way to explore personal habits and data of all useful sorts.
The sleep analytics are awesome and for the first time enabled me to see a relationship between the nature of my sleep (or lack of it) and my fitness. I track everything diligently and rely on unfolding habits and patterns to progress things. I am never bored by the Basis Peak.
That sleep summary email at the end of the week reminds me of how well I am doing (or not!). A pause for thought as I enter another week.
Although most of my activity is running and walking, this ability to unlock more, diverse “habits”, to mix it up, to try new things, and to explore variants of activities is both motivationally challenging and rewarding. “Torch More Calories”? Hell, yeah! An “Evening Lap”? Go for it. “Run Club”? I’m there. My kind of gamification (personal exhortation, really).
I am sure the lack of built-in social and community features won’t work for everyone, and there are merits to sharing, but what the Basis Peak offers works for me. Fitness isn’t a Facebook-level activity for a lot of us. As my Basis Peak-using colleague Jake Kuramoto (@jkuramot) might say: “This is about quantified self, not quantified community.”
Other things I have noticed about my usage of the Basis Peak are that I am more inclined to rely on the mobile dashboard and activity stream than I was with other fitness bands, turning to the desktop dashboard only to obtain more data and analysis at the weekend. I dig that activity feed and glanceable style of the dashboard, and those little messages again:
In all, a great user experience. It’s my favourite piece of wearable tech right now. I wear my Basis Peak all the time (another “habit” unlocked) and the battery life works great for my global, mobile lifestyle.
Even my 11 year-old wants one. And in preference to… that other Watch. I prefer it too.
That says something of the appeal and potential of the Basis Peak.
The Year of Data continues for me, and yesterday, I finished a four-week relationship with the Garmin Vivosmart.
I use relationship purposefully here because if you use a wearable to track fitness and sleep, you’re wearing it a lot, and it actually becomes a little friend (or enemy) that’s almost always with you. Wearables are very personal devices.
If you’re scoring at home, 2015 has gone thusly for me:
- Three weeks with the Nike+ Fuelband
- Four weeks with the Basis Peak
- Four Weeks and a day with the Jawbone UP24
- Seven Weeks with the Fitbit Surge
- Four Weeks with Nothing
After that month of nothing, I nearly ended the experimentation. However, I already had two more wearables new and still in the box. So, next up was the Vivosmart.
I didn’t know Garmin made wearables at all until OHUG 2014, where I met a couple of people wearing Garmin devices. Turns out, Garmin makes an impressive array of wearable devices, running the gamut from casual to hardcore athlete.
I chose the Vivosmart, at the casual end of the spectrum, because of its display and notification capabilities.
Finally, a wearable that doesn’t require a laptop to configure. The setup was all mobile, download the app and pair, very nice for a change.
After the initial setup, however, I did need to tether the Vivosmart to my laptop, but I don’t think my case is common.
The firmware version that came out-of-the-box was 2.60, and after reading the Engadget review, I decided to update to the latest version. Specifically, I wanted the notification actions that came in 3.40. There didn’t seem to be a way to get this update over-the-air, so I had to install Garmin Express on my Mac and tether the Vivosmart to install the update, a very quick and painless process.
The tether must have been necessary only because I was jumping through several firmware versions at once, because later the Vivosmart got an over-the-air update without Garmin Express.
Like all the rest, the Vivosmart has a custom cable for charging and tethering, and this one looks like a mouthguard.
Looks aside, getting the contacts to line up just right was a learning process, but happily, I didn’t charge it very often.
The low power, touch display is pretty cool. The band feels rubbery, and the display is completely integrated with no visible bezel, pretty impressive bit of industrial design. The display is surprisingly bright, easily visible in full sunlight and useful as a flashlight in the dark.
There are several screens you swipe to access, and they can be configured from the mobile app, e.g. I quickly ended up hiding the music control, more on that in a minute. Long-pressing opens another set of options and menus.
The Vivosmart has sleep tracking, one thing I actually missed during my device cleanse. Like the Jawbone UP24, it provides a way to track sleep manually. I tried this and failed miserably because somehow during the night the sleep tracking ended.
The reason? The display activates when anything touches it. So, while I slept, the display touched the sheets, the pillow, etc. registering each touch as an interaction, which finally resulted in turning off sleep mode.
This is exactly how I discovered the find phone option. While using my laptop, I wore the Vivosmart upside down to prevent the metal Garmin clasp on the underside of the device from scratching the aluminum, a very common problem with wrist-worn accessories.
During a meeting my phone started blinking its camera flash and blaring a noise. A notification from Garmin Connect declared it had found my phone. I looked at the band, and sure enough, it was in one of the nested menus.
So, the screen is cool, but it tends to register everything it touches, even water activated it. Not to mention the rather unnerving experience of the display coming on in a dark room while partially awake, definitely not cool.
Luckily, I found the band and app auto-detect sleep, a huge save.
Functionally, the battery life was about five days, which is nice. When the battery got low, a low battery icon appeared on the time and date screen. You can see it in the picture. Once full, that icon disappeared, also nice.
The Vivosmart can control audio playing on the phone, a nice feature for running I guess. I run with Bluetooth headphones, and having two devices paired for audio confused my phone, causing it to play through its own speakers. So, I disabled the playback screen via the app.
Like most fitness bands, this one is water resistant to 5 ATM (50 meters), and I wore it in the shower with no ill effects, except for the random touches when water hit the device’s screen. I actually tested this by running water on it and using the water to navigate through the screens.
Syncing the band with the phone was an adventure. Sometimes, it was immediate. Other times, I had to toggle Bluetooth off/on. Could be my impatience, but the band would lose connectivity sometimes when it was clearly within range, so I don’t think it was me.
The Vivosmart has a move indicator which is nice as a reminder. However, I quickly disabled it because its times weren’t configurable, and it would go off while I was moving. Seriously, that happened a few times.
The App and Data
As with most fitness trackers, Garmin provides both a mobile app and a web app. Both are cleanly designed and easy to use, although I didn’t use the web app much at all. Garmin Connect has a nice array of features, to match the range of athletes to which they cater, I suppose.
I probably only used 25% of the total features, and I liked what I used.
I did find the mobile app a bit tree-based, meaning I found myself backing up to the main dashboard and then proceeding into another section.
Garmin tracks the usual activity data, steps, calories, miles, etc. There’s a wide array of activities you can choose from, but I’m a boring treadmill runner so I used none of that.
For sleep, it tracks deep and light sleep and awake time, and I found something called “Sleep Mood,” though I have no idea what that is.
One feature I don’t recall seeing anywhere else is the automatic goal setting for steps which increases incrementally as you meet your daily goal. The starting default was 7,500 steps, and each day, the goal rose a little, I assume based on how much I had surpassed it the previous day. It topped out at 13,610.
I passed the goal every day I wore the Vivosmart, so I don’t know what happens if you fail to meet it.
You can set the goal to be fixed, but I liked this daily challenge approach. There were days I worried I wouldn’t make the step number, and it actually did spur me to be more active. I guess I’m easily manipulated.
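Garmin hasn’t published the formula behind the auto-adjusting goal, but the behavior I observed could be approximated by something like this sketch; the floor, cap, and adjustment fraction are guesses, not Garmin’s actual numbers:

```python
def next_goal(goal, steps, floor=7500, cap=13610, fraction=0.1):
    """Hypothetical sketch of an auto-adjusting daily step goal.

    If today's steps meet the goal, raise tomorrow's goal by a fraction
    of the surplus; otherwise, lower it by a fraction of the shortfall.
    The result stays between a floor and a cap."""
    if steps >= goal:
        goal = goal + int((steps - goal) * fraction)
    else:
        goal = goal - int((goal - steps) * fraction)
    return max(floor, min(cap, goal))
```

With these made-up parameters, surpassing a 7,500-step goal by 2,500 steps would nudge the next goal to 7,750, and no amount of walking would push it past the cap.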
Possibly the biggest win for Garmin Connect is its notification capabilities. It supports call, text and calendar notifications, like some others do, but in addition, there is also a nice range of other apps from which you can get notifications.
And there’s the feature I mentioned earlier, taking actions from the band. I tried this with little success, but I only turned on notifications for text messages.
One possible reason why Garmin has such robust notifications may be its developer ecosystem. There’s a Garmin Connect API and a store for third party apps. I didn’t use any, mostly because I’m lazy.
That, and one of the kind volunteers for our guerrilla Apple Watch testing at OHUG warned me that some apps had borked his Garmin. He had the high-end fenix 3, quite a nice piece of technology in an Ultan-approved design.
Finally, Garmin Connect offers exports and integrations with other fitness services like RunKeeper, Strava, etc. They’re definitely developer-friendly, which we like.
Overall, I found the Vivosmart to be an average device, some stuff to like, some stuff to dislike. The bland black version I chose didn’t help; Ultan (@ultan) would hate it, but Garmin does offer some color options.
I like the apps and the ecosystem, and I think the wide range of devices Garmin offers should make them very sticky for people who move from casual running to higher level fitness.
If I end up going back to Garmin, I’ll probably get a different device. If only I could justify the fenix 3, I’m just not serious enough, would feel like a poseur.
Find the comments.
A few nights ago a Domino’s Pizza commercial got my attention. It is called “Sarah Loves Emoji.”
At the end, the fictional character Sarah finishes by simply saying “only Domino’s gets me.”
The idea of texting an emoji, tweeting, using a Smart TV, or a smartwatch to automagically order pizza fascinates me. What Domino’s is attempting to do here is to reduce user friction, which is defined as anything that prevents a user from accomplishing a goal. After researching Domino’s Anywhere user accounts, I found a negative post from a frustrated user, of course! This proves that even if a system is designed to reduce friction, the human element in the process is bound to fail at some point. Regardless, I think it is pretty cool that consumer-oriented companies are thinking “outside the box.”
As a longtime fan of building Instant Messaging (xmpp/jabber) and SMS (Twilio) bots, I understand how these technologies can actually increase productivity and reduce user friction. Even single-button devices (think Amazon Dash, or my Staples Easy Button hack) can serve some useful purpose.
I believe we will start to see more use cases where input is no longer tied to a single Web UI or mobile app. Instead, we will see how more ubiquitous input channels like text, Twitter, etc. can be used to start or complete a process. After all, it seems like email and text are here to stay for a while, but that’s content for a different post.
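The core of an emoji-ordering bot is tiny: map whatever the user texts to an intent and confirm it. Here’s a hypothetical sketch (the menu and replies are made up; a real version would sit behind an SMS gateway webhook such as Twilio’s):

```python
# Hypothetical emoji/text ordering bot, in the spirit of Domino's.
# The menu, item names, and replies are invented for illustration.
MENU = {"🍕": "pepperoni pizza", "pizza": "pepperoni pizza", "🥤": "soda"}

def handle_message(text):
    """Map an incoming text (or emoji) to an order confirmation."""
    item = MENU.get(text.strip().lower()) or MENU.get(text.strip())
    if item:
        return f"Got it! One {item} is on the way."
    return "Sorry, I didn't catch that. Try 🍕 to order a pizza."
```

The friction reduction comes from the lookup table doing all the work: the user never fills out a form, and anything unrecognized gets a gentle nudge back toward the happy path.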
I think we should all strive to have our customers ultimately say that we “get them.”
This article is a nice companion piece to our strategic approach to emerging technologies and how we apply the “Glance, Scan, Commit” design philosophy to our work.
I’m honored to be quoted in the article and proud to see our little team getting this level of recognition.
If you want to learn more about the R&D projects mentioned, you’re in luck. You can read about the Glance framework and approach and see a quick video of it in action on several smartwatches, including the Apple Watch.
If you want to see some of these emerging technologies projects in person, register to visit the OAUX Cloud Exchange at Oracle OpenWorld 2015, or come tour the new Cloud UX Lab at Oracle HQ.
Behind the Oracle user experience goals of designing for simplicity, mobility, and extensibility is a core design philosophy guiding the Oracle Applications User Experience (OAUX) team’s work in emerging technologies: “Glance, Scan, Commit.”
It nicely boils down a mountain of research and design experience into a philosophy that shapes the concepts you see from us.
The philosophy of “Glance, Scan, Commit” permeates all of our work in the Oracle Applications Cloud user experience, especially when investigating emerging technologies.
On your wrist
Consumers demand glance and scan interactions on their wearable devices. The Oracle user experiences provide just the right amount of information on wearable devices and enable the ability to commit to more detail via the accompanying smartphone app.
On your ‘Things’
How else does the OAUX team apply the “Glance, Scan, Commit” design philosophy?
Let’s look at another example: The “Things” in the Internet of Things (IoT) represent a very broad category of Internet-connected devices, and generally speaking, consumers can’t rely on these things to have large screens, or even screens at all. This reduces the experience down to the lightest “glance” of proximity, and in some cases a sonic “glance.”
Sometimes we tap into the user’s context such as micro-location, provided by Bluetooth beacons or Near Field Communications (NFC) tags, to capture a small chunk of information. The “glance” here is the lightest touch of a beacon coming within range or a near field tag brushing up against a sensor.
In some cases, we use the philosophy to build sound “glances,” by capturing chunks of information that are then dictated by a personal assistant, like Amazon Echo. These are simple, small, discrete tasks powered by the human voice and Internet-connected devices.
For the eyes
We are also actively exploring and building visualizations to provide “glance” and “scan” experiences that allow users to consume report data quickly and easily, without poring over tables of information.
Video Storytelling, for example, permits complicated and detailed reports to be animated and delivered via audio and video. Think about the intricacies of a quarterly financial statement; video storytelling does the thinking for you by producing the information in very scannable, organized buckets of audio and video.
The “Glance, Scan, Commit” philosophy becomes even more important when building new experiences. As users are exposed to new experiences, data from the Oracle Applications Cloud provides a constant that helps them embrace these new technologies. Delivering the data in a particular way, using designs shaped by “Glance, Scan, Commit,” increases that consistency.
If the Oracle user experience can provide customers with the information they need to do work every day, in a meaningful way, then new technologies are tools to increase user participation, not barriers.
In the not-so-distant past, “walk up and use” was the bar for experiences, meaning that the interactions should be easy enough to support use without any prior knowledge or training. The user would simply walk up and use it.
The rise of smartphones, ubiquitous connectivity, and IoT — and the emerging technology that enables their use — make our new goal as close to simply “walk up” as possible. Workers can use the system without interacting with it directly, because context collected from phones, combined with smart things around them and enterprise data in the cloud, allow the environment to pass useful information to users without any interactions. This removes more barriers and also works to increase user participation. The more users are engaging with an enterprise system, the more data goes in – and the more value our customers can get out of their investment.
And that, in the end, is the overarching goal of the Oracle user experience.
See it for yourself
If you want to get hands-on with what we do, we will be at Oracle OpenWorld participating in the OAUX Cloud Exchange. Attendance requires a non-disclosure agreement, so please register early.
Yesterday I received an Amazon Dash for ordering IZZE juice.
I think it is a great device. Not that I would order tons of IZZE from Amazon, but at $5 it has a wifi module + a microcontroller + an LED light + a battery + a nice enclosure, and it’s usually in deep sleep, which means the device can last for years. That’s a bargain – a similar device would have cost $20 – $40, at least before the ESP8266 became available.
The first thing I tried was to repurpose it to toggle the poor man’s Nest screen on and off, because the PiTFT screen is quite bright and gets warm; I wanted to turn it off without unplugging the cord, and turn it back on instantly if needed. And here it is. Note how the IZZE sticker color coordinates well with the Nest’s warm yellow.
The signal cycle goes through PubNub, which has MQTT at its core; the response time is less than 0.5 seconds. So it is remotely controlled – if I leave the poor man’s Nest at the office, I can push the button at home to turn it off.
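Stripped of the PubNub transport, the toggle logic amounts to a tiny state machine. This is a hypothetical sketch, not the actual project code; the message shape and GPIO path are assumptions:

```python
# Sketch of the button-to-screen toggle. The real version rides on
# PubNub (MQTT underneath); here the transport is abstracted away and
# only the toggle state machine is shown.
class ScreenToggle:
    def __init__(self):
        self.on = True  # PiTFT backlight state

    def handle(self, message):
        """React to a published message from the Dash button.

        On a real Pi you would also write 0/1 to the PiTFT backlight
        GPIO here (the exact sysfs path varies by setup)."""
        if message.get("event") == "button_press":
            self.on = not self.on
        return self.on
```

Each button press flips the state; any other message on the channel leaves the screen alone, which keeps stray publishes from blanking the display.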
While my daughter was excited about the IZZE toggle button controlling the PiTFT screen, my wife asked me to do something more meaningful. So I made a second attempt and turned the IZZE button into a “dinner time” call button.
Every time dinner is ready, I have to shout upstairs to get my kids, and more often than not, I can’t reach them because they have headphones on.
So I modified the code a little to listen for the IZZE button waking up and trying to connect to my router, then used that signal to ask the Philips Hue lights to blink 3 times.
Now my wife can just press the IZZE button in the kitchen, the Hue lights at my kids’ desks start to blink, and that’s the dinner time call.
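For the curious, the Hue half of the dinner bell is just a PUT to the bridge’s REST API. This sketch builds that request with the standard library; the bridge IP, API username, and light ID are placeholders for your own setup, and detecting the Dash button’s wake-up (e.g. by sniffing for its probe on the network) is left out:

```python
import json
import urllib.request

# Placeholder values -- substitute your own bridge and credentials.
BRIDGE = "192.168.1.10"
USER = "hue-api-username"

def hue_blink_request(light_id, bridge=BRIDGE, user=USER):
    """Build the PUT request that asks a Hue light to blink.

    The Hue API's "select" alert blinks the light once; sending the
    request a few times approximates the three blinks from the post."""
    url = f"http://{bridge}/api/{user}/lights/{light_id}/state"
    body = json.dumps({"alert": "select"}).encode()
    req = urllib.request.Request(url, data=body, method="PUT")
    req.add_header("Content-Type", "application/json")
    return req
```

Passing the returned request to `urllib.request.urlopen` would actually fire it; looping three times with a short sleep gives the triple blink.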
I guess that is more meaningful; at least I don’t have to shout upstairs again.
We are always on the hunt for interesting new uses of the Apple Watch, so when my colleague Ben Bendig alerted me to AstroPad’s new iPhone/Apple Watch app, I downloaded it immediately.
The app, AstroPad Mini, is intended to let you use your iPhone as a graphics tablet and controls Photoshop nicely right out of the box. But it will work with any Mac app; it lets you map any area of your Mac screen to the iPhone and map up to eight keyboard commands to buttons in an Apple Watch app. I reprogrammed it to work with NodeBox.
Although you can zoom and pan the Mac screen from your iPhone, this seems awkward for precision work (the iPad app would work better for that). It was more useful to map a small control area of the screen to the iPhone instead. For Photoshop you could arrange palettes (tools, layers, history) and dialogs (e.g. color picker) into a corner somewhere (maybe on a second monitor), map the iPhone to that, and use the iPhone as an auxiliary screen so you don’t have to keep moving your mouse back and forth. This worked particularly well for the color picker.
For NodeBox I mapped the node pane, a small area which displays properties of the currently selected node. I could then select any node on my ginormous screen using a mouse or trackpad and then scrub its properties from the phone (without having to relocate the mouse).
Even more fun: I mapped common actions to Apple Watch buttons: Save, Full Screen, Escape, New Node, Undo, Redo, Play, and Rewind. When creating animations, it’s pleasant to lean back in my chair, put the display in full screen, and play and rewind to my heart’s content, all from my watch.
I was also able to focus the iPhone on the slider of my transforming table (running as a web app) and could then stand back from the display and move the slider back and forth from my phone. You could do the same thing by just running the table app on the phone and mirroring it via AirPlay, but AstroPad let me focus the entire iPhone screen on the slider so that it was easier to manipulate.
The app did occasionally lose its wifi connection for a few moments, but otherwise worked fine.
I think with a little thought and practice this setup could speed my workflow somewhat. The benefits are marginal, though, not revolutionary. One tip: if you use the Apple Watch, be sure to set “Activate on Wrist Raise” to “Resume Previous Activity” instead of “Show Watch Face” so that you don’t have to keep relaunching the AstroPad app.
We could conceivably use this app in some of the concept demos our group does. It would be a quick and dirty way of controlling some features from an iPhone or Apple Watch without having to write any special code. The catch is that the demo would have to run on a Mac. One advantage: they have an option for controlling the Mac via USB cable instead of wifi, a handy workaround at HQ or demo grounds when sharing a local wifi router is problematic.
Hmmm. I wonder if I could aim and fire my USB Rocket Launcher from my watch. Now THAT might be a killer app.
The Intel Compute Stick provides a full desktop experience in an ultra-portable HDMI dongle form factor. It’s like Google Chromecast, but an entire PC instead of just a web browser. I tested both the $150 Windows 8 version and the new $110 Ubuntu version.
TAP TAP TAP. IS THIS THING ON?
The HDMI end goes into a display, the power goes into an outlet, and a blue light comes on but the Stick does not boot. Either tap or long press the power button, then switch the display input source after a few seconds. Just by looking at the Stick you cannot tell if it’s off, on, or booting. Long press the power button and you may end up at the boot menu or the blue light may go off—I suppose making the Stick even more off than previously.
It boots. This is where you need to find a keyboard, then a little later find a mouse. See, there’s only one USB port on the Stick so we ended up swapping peripherals during the setup. This gets old instantly so either get a USB hub or some bluetooth peripherals. Unsurprisingly, the Microsoft bluetooth keyboard we got from our local StaplesMax Depot did not like the Ubuntu version of the Stick so we needed a hub.
You will want to plug the included HDMI extension cable into the Stick or your wifi will be—at least in my experience—absent. Use the micro-USB charger that came with the Stick if you want it to boot at all. It’s a proprietary charger masquerading as non-proprietary. It’s better to find this out now rather than on the road. All of these non-moving parts make for something…squiddy.
GETTING THAT FOR WHICH YOU PAID
I type this now with the intestines. The quick brown fox jumps over the lazy dog. It performs well for the tasks that most people perform most often, but then again we live at a time where my $35 MP3 player has a word processor, plays chess, and even runs the game DOOM.
Web pages load with a small delay but I have little complaint there. YouTube, for example, runs smoothly and overall the Stick is fine for common tasks.
Lag is lag. While typing I get periodic freezes. No words appear and then all of a sudden abracadabra. Opening a folder in the file manager sometimes takes a few beats. There’s a 64-bit quad-core Atom processor inside® but it sometimes feels like Mac OS 8 or Windows 3.1 on 20 year old hardware. Fun fact: the Ubuntu Stick has 1GB RAM / 8GB storage while the Windows Stick has 2GB RAM / 32GB storage. The internet says you can install Linux on the Windows version.
Let’s push things a bit. The Leap Motion is a cool USB device which tracks your hands’ motions and provides an API to do things with that data. Even though the Stick doe—oops, freeze-up—does not meet the minimum requirements, why not give it a try? I’m sure it’s fine and no harm will come of it.
PART NUMBER II
The Leap Motion did not work so I tried rebooting the Stick. And tried. And tried. And tried… Sure, I had not mastered the power button but this was different. The Stick would show the Ubuntu splash screen and then go endlessly dark. Luckily, others had faced the same issue. I simply had to hold the power button for just less than 4 seconds—not 4 seconds, mind you—to get into the boot menu, then choose to recover the BIOS. BIOS recovery did not fix the black screen. Update the BIOS then. That went smoothly, but did not fix it. There were other trials too.
At this point, I just wanted the beginnings of this very blog post off of the Stick. I decided to make a bootable USB drive so I could at least grab the document. I’ve only made “live” CDs/DVDs before and making a live USB stick was more challenging and time-consuming than I had anticipated. I was able to get GParted installed but then decided Puppy Linux with persistency would be easier. I tried doing this on my Linux machine at home but in the end the easiest thing I found was LinuxLive USB Creator on Windows. Prepared, I hoped that the next day I would be able to grab those words up there from the borked Stick.
When I got to work I decided to try the Stick again: same problem, of course. I had a meeting so I left it plugged in. When I got back I tried rebooting, ever the naïve optimist.
It boots!!! And an error message popped up, perhaps the cause of all of this: The volume “Filesystem root” has only 156.5 MB disk space remaining. Where had I seen that before?
My confidence restored along with the bootability, I am continuing this blog post on the Stick. It is behaving well with little lag although Firefox crashed a couple times with one tab open. I’m not entirely sure if this is “normal” or if the trials and tribulations took their toll.
If the self-healing mini-miracle had not happened, would I have been able to boot from the USB stick? No. There’s a Catch-22 because of the sole USB port. The keyboard needs that port to use the boot menu. Using the hub or switching to the hub when at that menu ends all input from then on. There is a micro SD slot, and if I wasn’t exhausted from all of this I would try to boot from it.
The End
If not, here’s a recap of some of the tinkering going on in OAUX Emerging Technologies land.
Meanwhile, Raymond (@yuhuaxie) has been busy building a smart thermostat.
And finally, completely unrelated to IoT tinkering, earlier this month the Oracle Mexico Development Center (MDC) in Guadalajara celebrated its fifth anniversary. As you know, we have two dudes in that office, Os (@vaini11a) and Luis (@lsgaleana), as well as an extended OAUX family. Congratulations.
If you read here, you’ll recall that Noel (@noelportugal) and I have been supporters of the Raspberry Pi for a long time, Noel on the build side, many, many times, me on the talking-about-how-cool-and-useful-it-is side.
And we’ve been spreading the love through internal hackdays and lots of projects.
So, yeah, we love us some Raspi.
The little guy has become our go-to choice to power all our Internet of Things (IoT) projects.
Given all that we do with Raspis, I’ve always meant to get one for my own tinkering. However, Noel scared me off long ago with stories about how long it took to get one functional and the risks involved.
For example, I remember reading a long post early in the Pi’s history about how choosing a Micro USB charger was critical; amperage too high burned out the board, too low and it wouldn’t run.
The information was out there, contributed by a huge and generous community. I just never had the time to invest.
Recently, I’ve been talking to the good people at the Oracle Education Foundation (@ORCLcitizenship) about ways our team can continue to help them with their workshops, and one of their focus areas is the Raspberry Pi.
After all, the mission of the Raspi creators was to teach kids about computers, so yeah.
I figured it was finally time to overcome my fears and get dirty, and thanks to Noel, I found a kit that included everything I would need, this Starter Kit from Vilros.
Armed with this kit, I took a day and hoped that would be enough to get the little guy running. About an hour after starting, I was done.
Going from zero to functional is now ridiculously easy, thanks to these kits that include all the necessities.
So, now I have a functioning Pi running Raspbian. All I need is a project, any ideas?
Coda: Happy coincidence, as I wrote this post, I got a DM from Kellyn Pot’Vin-Gorman (@dbakevlar) asking if I knew of any ways for her to use her Raspberry Pi skills in an educational capacity. Yay kismet.
Here’s the first of many OpenWorld-related posts, this one cross-posted from our colleagues and friends at VoX, the Voice of Experience for Oracle Cloud Applications. Enjoy.
Are you all set for Oracle OpenWorld 2015 (@oracleopenworld)? Even if you think you’re already booked for the event, you’ll want to squeeze in a chance to experience the future of the Oracle Applications User Experience (OAUX) — and maybe even make a few UX buddies along the way — with these sessions, demos, and speakers. We loved OOW 2014, and couldn’t wait to get ready for this year.
Save the Date: Oracle Applications Cloud User Experience Strategy & Roadmap Day
The OAUX team is hosting a one-day interactive seminar ahead of Oracle OpenWorld 2015 to get select partners and customers ready for the main event. This session will focus on Oracle’s forward-looking investment in the Oracle Applications Cloud user experience.
You’ll get the opportunity to share feedback about the Oracle Applications Cloud UX in the real world. How is our vision lining up with what needs to happen in your market?
Speaking of our vision, we’ll start the session with the big-picture perspective on trends and emerging technologies we are watching and describe their anticipated effect on your end-user experiences. Attendees will take a deeper dive into specific focus areas of the Oracle Applications Cloud and learn about our impending investments in the user experience including HCM Cloud, CX Cloud, and ERP Cloud.
The team will also share with you the plans for Cloud user experience tools, including extensibility and user experience in the Platform-as-a-Service (PaaS4SaaS) world (get the latest here). We’ll close out the day with a “this-town-ain’t-big-enough” event that was extremely popular last year: the ACE Director Speaker Showdown.
Want to go?
When: 9 a.m. to 5 p.m. Wednesday, Oct. 21, 2015
Where: Oracle Conference Center, Room 202, 350 Oracle Pkwy, Redwood City, CA 94065
Who: Applications Cloud partners and customers (especially HCM, CX, or ERP Cloud), Oracle ACE Directors, and Oracle-internal Cloud thought leaders in product development, sales, or Worldwide Alliances and Channels
Active confidential disclosure agreement required.
Save the Date: Oracle Applications User Experience Cloud Exchange
Speakers and discussions are all well and good, but what is the future of the Oracle Applications UX really like? The OAUX team is providing a daylong, demo-intensive networking event at Oracle OpenWorld 2015 to show you what the results of Oracle’s UX strategy will look like.
User experience is a key differentiator for the Oracle Applications Cloud, and Oracle is investing heavily in its future. Come see what our recently released and near-release user experiences look like, and check out our research and development user experience concepts, then let us know what you think.
These experience experiments for the modern user will delve even deeper into the OAUX team’s guiding principles of simplicity, mobility, and extensibility and come from many different product areas. This is cutting-edge stuff, folks. And, since we know you’re worn out from these long, interactive days, this event will also feature refreshments.
Want to go?
When: Monday, October 26, 2015
Where: InterContinental Hotel, San Francisco
Who: Oracle Applications Cloud Partners, Customers, Oracle ACEs and ACE Directors, Analysts, Oracle-internal Cloud thought leaders in product development, sales, or Worldwide Alliances and Channels.
Active confidential disclosure agreement required.
A couple of months ago, IFTTT added a much-needed feature: a custom channel for generic URLs. They called it the Maker Channel. If you noticed my previous post, I used it to power an IoT Staples Easy Button.
On closer look, this is a very powerful feature. Now you can basically make and receive web requests (webhooks) from any connected device to any accessible web service (API, public server, etc.). It is important to highlight that requests “may” be rate limited, so don’t start going crazy with Big Data-style pushing.
You can also trigger any of the existing Channels with the Maker Channel. Either you can have IFTTT fire an existing Channel when you POST/GET to the Maker Channel:
Or you could have IFTTT POST/GET/PUT to your server when any of the existing Channels are triggered.
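For reference, a Maker Channel trigger is just an HTTP request to a well-known URL, with up to three optional values passed through to the recipe. This sketch builds one with the standard library; the event name and key below are placeholders:

```python
import json
import urllib.request

# The Maker Channel trigger URL pattern; "event" names your recipe
# trigger, and "key" is the secret from your Maker Channel settings.
MAKER_URL = "https://maker.ifttt.com/trigger/{event}/with/key/{key}"

def maker_trigger(event, key, value1=None):
    """Build the POST request that triggers an IFTTT Maker event.

    IFTTT passes value1 (and optionally value2/value3) through to
    whatever Channel the recipe connects to."""
    url = MAKER_URL.format(event=event, key=key)
    body = json.dumps({"value1": value1}).encode()
    req = urllib.request.Request(url, data=body)
    req.add_header("Content-Type", "application/json")
    return req  # urllib.request.urlopen(req) would actually fire it
```

That’s the whole trick: anything that can make an HTTP request, from an Easy Button to a cron job, can now kick off any recipe.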
Do you use IFTTT? Do you find it useful? Let me know in the comments.