
Growth in machine-generated data

DBMS2 - Fri, 2015-01-30 13:31

In one of my favorite posts, namely When I am a VC Overlord, I wrote:

I will not fund any entrepreneur who mentions “market projections” in other than ironic terms. Nobody who talks of market projections with a straight face should be trusted.

Even so, today I got talked into putting on the record a prediction that machine-generated data will grow at more than 40% per year for a while.

My reasons for this opinion are little more than:

  • Moore’s Law suggests that the same expenditure will buy 40% or so more machine-generated data each year (see the arithmetic sketch just after this list).
  • Budgets spent on producing machine-generated data seem to be going up.
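
To spell out the first bullet’s arithmetic, here is a minimal sketch in Python, assuming a Moore’s-Law-style doubling period; the specific periods tried are my illustrative assumptions, not figures from the post.

```python
# Back-of-the-envelope: if price/performance doubles every N years, the same budget
# buys 2 ** (1 / N) times as much machine-generated data each year.
for doubling_years in (1.5, 2.0, 2.5):
    annual_growth = 2 ** (1 / doubling_years) - 1
    print(f"Doubling every {doubling_years} years -> ~{annual_growth:.0%} more per year")
# A 2-year doubling period works out to roughly 41% per year, i.e. the >40% figure above.
```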

I was referring to the creation of such data, but the growth rates of new creation and of persistent storage are likely, at least at this back-of-the-envelope level, to be similar.

Anecdotal evidence actually suggests 50-60%+ growth rates, so >40% seemed like a responsible claim.

Categories: Other

Soft robots, Part 2 — implications

DBMS2 - Tue, 2015-01-27 06:31

What will soft, mobile robots be able to do that previous generations cannot? A lot. But I’m particularly intrigued by two large categories:

  • Inspection, maintenance and repair.
  • Health care/family care assistance.

There are still many things that are hard for humans to keep in good working order, including:

  • Power lines.
  • Anything that’s underwater (cables, drilling platforms, etc.)
  • Pipelines, ducts, and water mains (especially from the inside).
  • Any kind of geographically remote power station or other installation.

Sometimes the issue is (hopefully minor) repairs. Sometimes it’s cleaning or lubrication. In some cases one might want to upgrade a structure with fixed sensors, and the “repair” is mainly putting those sensors in place. In all these cases, it seems that soft robots could eventually offer a solution. Further examples, I’m sure, could be found in factories, mines, or farms.

Of course, if there’s a maintenance/repair need, inspection is at least part of the challenge; in some cases it’s almost the whole thing. And so this technology will help lead us toward the point that substantially all major objects will be associated with consistent flows of data. Opportunities for data analysis will abound.

One other point about data flows — suppose you have two kinds of machines that can do a task, one of which is flexible, the other rigid. The flexible one will naturally have much more variance in what happens from one instance of the task to the next one. That’s just another way in which soft robots will induce greater quantities of machine-generated data.

Let’s now consider health care, whose basic characteristics include:

  • It’s done to people …
  • … especially ones who don’t feel very good.

People who are sick, elderly or whatever can often use help with simple tasks — e.g., taking themselves to the bathroom, or fetching a glass of water. So can their caretakers — e.g., turning a patient over in bed. That’s even before we get to more medical tasks such as checking and re-bandaging an awkwardly-placed wound. And on the healthier side, I wouldn’t mind having a robot around the house that could, for example, spot me with free weights. Fully general forms of this seem rather futuristic. But even limited forms might augment skilled-nurse labor, or let people stay in their own homes who at the moment can’t quite make it there.

And, once again, any of these use cases would likely be associated with its own stream(s) of observational and introspective data.

Categories: Other

Soft robots, Part 1 — introduction

DBMS2 - Tue, 2015-01-27 06:29

There may be no other subject on which I’m so potentially biased as robotics, given that:

  • I don’t spend a lot of time on the area, but …
  • … one of the better robotics engineers in the world (Kevin Albert) just happens to be in my family …
  • … and thus he’s overwhelmingly my main source on the general subject of robots.

Still, I’m solely responsible for my own posts and opinions, while Kevin is busy running his startup (Pneubotics) and raising my grandson. And by the way — I’ve been watching the robotics industry slightly longer than Kevin has been alive. ;)

My overview messages about all this are:

  • Historically, robots have been very limited in their scope of motion and action. Indeed, most successful robots to date have been immobile, metallic programmable machines, serving on classic assembly lines.
  • Next-generation robots should and will be much more able to safely and effectively navigate through and work within general human-centric environments.
  • This will affect a variety of application areas in ways that readers of this blog may care about.

Examples of the first point may be found in any number of automobile factory videos, such as:

A famous example of the second point is a 5-year-old video of Kevin’s work on prototype robot locomotion, namely:

Walking robots (such as Big Dog) and general soft robots (such as those from Pneubotics) rely on real-time adaptation to physical feedback. Robots have long enjoyed machine vision,* but their touch capabilities have been very limited. Current research/development proposes to solve that problem, hence allowing robots that can navigate uneven real-world surfaces, grip and lift objects of unpredictable weight or position, and minimize consequences when unwanted collisions do occur. (See, for example, the point in the video where Big Dog is kicked sideways across a nasty patch of ice.)

*Little-remembered fact — Symantec spun out ~30 years ago from a vision company called Machine Intelligence, back when “artificial intelligence” was viewed as a meaningful product category. Symantec’s first product — which explains the company name — was in natural language query.

Pneubotics and others take this further, by making robots out of soft, light, flexible materials. Benefits will/could include:

  • Safety (obviously).
  • Cost-effectiveness (better weight/strength ratios -> less power needed -> less lugging of batteries or whatever -> much more capability for actual work).
  • Operation in varied environments (underwater, outer space, etc.).
  • Better locomotion even on dry land (because of weight and safety).

Above all, soft robots will have more effective senses of touch, as they literally bend and conform to contact with real-world surfaces and objects.

Now let’s turn to some of the implications of soft and mobile robotic technology.

Related links

  • This series partially fulfills an IOU left in my recent post on IT innovation.
  • Business Week is one of several publications that have written about soft robots.
  • Kevin shared links to three more videos on robot locomotion.
  • What I wrote about analyst bias back in 2006 still applies.
Categories: Other

Where the innovation is

DBMS2 - Mon, 2015-01-19 02:27

I hoped to write a reasonable overview of current- to medium-term future IT innovation. Yeah, right. :) But if we abandon any hope that this post could be comprehensive, I can at least say:

1. Back in 2011, I ranted against the term Big Data, but expressed more fondness for the V words — Volume, Velocity, Variety and Variability. That said, when it comes to data management and movement, solutions to the V problems have generally been sketched out.

  • Volume has been solved. There are Hadoop installations with 100s of petabytes of data, analytic RDBMS with 10s of petabytes, general-purpose Exadata sites with petabytes, and 10s/100s of petabytes of analytic Accumulo at the NSA. Further examples abound.
  • Velocity is being solved. My recent post on Hadoop-based streaming suggests how. In other use cases, velocity is addressed via memory-centric RDBMS.
  • Variety and Variability have been solved. MongoDB, Cassandra and perhaps others are strong NoSQL choices. Schema-on-need is in earlier days, but may help too.

2. Even so, there’s much room for innovation around data movement and management. I’d start with:

  • Product maturity is a huge issue for all the above, and will remain one for years.
  • Hadoop and Spark show that application execution engines:
    • Have a lot of innovation ahead of them.
    • Are tightly entwined with data management, and with data movement as well.
  • Hadoop is due for another refactoring, focused on both in-memory and persistent storage.
  • There are many issues in storage that can affect data technologies as well, including but not limited to:
    • Solid-state (flash or post-flash) vs. spinning disk.
    • Networked vs. direct-attached.
    • Virtualized vs. identifiable-physical.
    • Object/file/block.
  • Graph analytics and data management are still confused.

3. As I suggested last year, data transformation is an important area for innovation. 

  • MapReduce was invented for data transformation, which is still a large part of what goes on in Hadoop.
  • The smart data preparation crowd is deservedly getting attention.
  • The more different data models — NoSQL and so on — that are used, the greater are the demands on data transformation.

4. There’s a lot going on in investigative analytics. Besides the “platform” technologies already mentioned, in areas such as fast-query, data preparation, and general execution engines, there’s also great innovation higher in the stack. Most recently I’ve written about multiple examples in predictive modeling, such as:

Beyond that:

  • Event-series analytics is another exciting area. (At least on the BI side, I frankly expected it to sweep through the relevant vertical markets more quickly than it has.)
  • I’ve long been disappointed in the progress in text analytics. But sentiment analysis is doing fairly well, many more languages are analyzed than before, and I occasionally hear rumblings of text analytic sophistication inching back towards that already available in the previous decade.
  • While I don’t write about it much, modern BI navigation is an impressive and wonderful thing.

5. Back in 2013, in what was perhaps my previous most comprehensive post on innovation, I drew a link between innovation and refactoring, where what was being refactored was “everything”. Even so, I’ve been ignoring a biggie. Security is a mess, and I don’t see how it can ever be solved unless systems are much more modular from the ground up. By that I mean:

  • “Fencing” processes and resources away from each other improves system quality, in that it defends against both deliberate attacks and inadvertent error.
  • Fencing is costly, both in terms of context-switching and general non-optimization. Nonetheless, I suspect that …
  • … the cost of such process isolation may need to be borne.
  • Object-oriented programming and its associated contracts are good things in this context. But it’s obvious they’re not getting the job done on their own.

More specifically,

  • It is cheap to give single-purpose intelligent devices more computing power than they know what to do with. There is really no excuse for allowing them to be insecure.
  • It is rare for a modern PC to go much above 25% CPU usage, simply because most PC programs are still single-core. This illustrates that — assuming some offsetting improvements in multi-core parallelism — desktop software could take a security performance hit without much pain to users’ wallets.
  • On servers, we may in many cases be talking about lightweight virtual machines.

And to be clear:

  • What I’m talking about would do little to help the authentication/authorization aspects of security, but …
  • … those will never be perfect in any case (because they depend upon fallible humans) …
  • … which is exactly why other forms of security will always be needed.

6. You’ve probably noticed the fuss around an open letter about artificial intelligence, with some press coverage suggesting that AI is a Terminator-level threat to humanity. Underlying all that is a fairly interesting paper summarizing some needs for future research and innovation in AI. In particular, reading the paper reminded me of the previous point about security.

7. Three areas of software innovation that, even though they’re pretty much in my wheelhouse, I have little to say about right now are:

  • Application development technology, languages, frameworks, etc.
  • The integration of analytics into old-style operational apps.
  • The never-ending attempts to make large-enterprise-class application functionality available to outfits with small-enterprise sophistication and budgets.

8. There is, of course, tremendous innovation in robots and other kinds of device. But this post is already long enough, so I’ll address those areas some other time.

Related link

In many cases, I think that innovations will prove more valuable — or at least much easier to monetize — when presented to particular vertical markets.

Categories: Other

Migration

DBMS2 - Sat, 2015-01-10 00:45

There is much confusion about migration, by which I mean applications or investment being moved from one “platform” technology — hardware, operating system, DBMS, Hadoop, appliance, cluster, cloud, etc. — to another. Let’s sort some of that out. For starters:

  • There are several fundamentally different kinds of “migration”.
    • You can re-host an existing application.
    • You can replace an existing application with another one that does similar (and hopefully also new) things. This new application may be on a different platform than the old one.
    • You can build or buy a wholly new application.
    • There’s also the in-between case in which you extend an old application with significant new capabilities — which may not be well-suited for the existing platform.
  • Motives for migration generally fall into a few buckets. The main ones are:
    • You want to use a new app, and it only runs on certain platforms.
    • The new platform may be cheaper to buy, rent or lease.
    • The new platform may have lower operating costs in other ways, such as administration.
    • Your employees may like the new platform’s “cool” aspect. (If the employee is sufficiently high-ranking, substitute “strategic” for “cool”.)
  • Different apps may be much easier or harder to re-host. At two extremes:
    • It can be forbiddingly difficult to re-host an OLTP (OnLine Transaction Processing) app that is heavily tuned, tightly integrated with your other apps, and built using your DBMS vendor’s proprietary stored-procedure language.
    • It might be trivial to migrate a few long-running SQL queries to a new engine, and pretty easy to handle the data connectivity part of the move as well.
  • Certain organizations, usually packaged software companies, design portability into their products from the get-go, with at least partial success.

I mixed together true migration and new-app platforms in a post last year about DBMS architecture choices, when I wrote:

  • Sometimes something isn’t broken, and doesn’t need fixing.
  • Sometimes something is broken, and still doesn’t need fixing. Legacy decisions that you now regret may not be worth the trouble to change.
  • Sometimes — especially but not only at smaller enterprises — choices are made for you. If you operate on SaaS, plus perhaps some generic web hosting technology, the whole DBMS discussion may be moot.

In particular, migration away from legacy DBMS raises many issues:

  • Feature incompatibility (especially in stored-procedure languages and/or other vendor-specific SQL).
  • Your staff’s programming and administrative skill-sets.
  • Your investment in DBMS-related tools.
  • Your supply of hockey tickets from the vendor’s salesman.

Except for the first, those concerns can apply to new applications as well. So if you’re going to use something other than your enterprise-standard RDBMS, you need a good reason.

I then argued that such reasons are likely to exist for NoSQL DBMS, but less commonly for NewSQL. My views on that haven’t changed in the interim.

More generally, my pro-con thoughts on migration start:

  • Pure application re-hosting is rarely worthwhile. Migration risks and costs outweigh the benefits, except in a few cases, one of which is the migration of ELT (Extract/Load/Transform) from expensive analytic RDBMS to Hadoop.
  • Moving from in-house to co-located data centers can offer straightforward cost savings, because it’s not accompanied by much in the way of programming costs, risks, or delays. Hence Rackspace’s refocus on colo at the expense of cloud. (But it can be hard on your data center employees.)
  • Moving to an in-house cluster can be straightforward, and is common. VMware is the most famous such example. Exadata consolidation is another.
  • Much of new application/new functionality development is in areas where application lifespans are short — e.g. analytics, or customer-facing internet. Platform changes are then more practical as well.
  • New apps or app functionality often should and do go where the data already is. This is especially true in the case of cloud/colo/on-premises decisions. Whether it’s important in a single location may depend upon the challenges of data integration.

I’m also often asked for predictions about migration. In light of the above, I’d say:

  • Successful DBMS aren’t going away.
    • OLTP workloads can usually be lost only as fast as applications are replaced, and that tends to be a slow process. Claims to the contrary are rarely persuasive.
    • Analytic DBMS can lose workloads more easily — but their remaining workloads often grow quickly, creating an offset.
  • A large fraction of new apps are up for grabs. Analytic applications go well on new data platforms. So do internet apps of many kinds. The underlying data for these apps often starts out in the cloud. SaaS (Software as a Service) is coming on strong. Etc.
  • I stand by my previous view that most computing will wind up on appliances, clusters or clouds.
  • New relational DBMS will be slow to capture old workloads, even if they are slathered with in-memory fairy dust.

And for a final prediction — discussion of migration isn’t going to go away either. :)

Categories: Other

Notes on machine-generated data, year-end 2014

DBMS2 - Wed, 2014-12-31 21:49

Most IT innovation these days is focused on machine-generated data (sometimes just called “machine data”), rather than human-generated. So as I find myself in the mood for another survey post, I can’t think of any better idea for a unifying theme.

1. There are many kinds of machine-generated data. Important categories include:

  • Web, network and other IT logs.
  • Game and mobile app event data.
  • CDRs (telecom Call Detail Records).
  • “Phone-home” data from large numbers of identical electronic products (for example set-top boxes).
  • Sensor network output (for example from a pipeline or other utility network).
  • Vehicle telemetry.
  • Health care data, in hospitals.
  • Digital health data from consumer devices.
  • Images from public-safety camera networks.
  • Stock tickers (if you regard them as being machine-generated, which I do).

That’s far from a complete list, but if you think about those categories you’ll probably capture most of the issues surrounding other kinds of machine-generated data as well.

2. Technology for better information and analysis is also technology for privacy intrusion. Public awareness of privacy issues is focused in a few areas, mainly:

  • Government snooping on the contents of communications.
  • Communication traffic analysis.
  • Photos and videos (airport scanners, public cameras, etc.)
  • Commercial ad targeting.
  • Traditional medical records.

Other areas, however, continue to be overlooked, with the two biggies in my opinion being:

  • The potential to apply marketing-like psychographic analysis in other areas, such as hiring decisions or criminal justice.
  • The ability to track people’s movements in great detail, which will increase greatly yet again as the market for consumer digital health matures (and some think that will happen soon).

My core arguments about privacy and surveillance seem as valid as ever.

3. The natural database structures for machine-generated data vary wildly. Weblog data structure is often remarkably complex. Log data from complex organizations (e.g. IT shops or hospitals) might comprise many streams, each with a different (even if individually simple) organization. But in the majority of my example categories, record structure is very simple and repeatable. Thus, there are many kinds of machine-generated data that can, at least in principle, be handled well by a relational DBMS …

4. … at least to some extent. In a further complication, much machine-generated data arrives as a kind of time series. Many (but not all) time series call for a strong commitment to event-series styles of analytics. Event series analytics are a challenge for relational DBMS, but Vertica and others have tried to step up with various kinds of temporal predicates or datatypes. Event series are also a challenge for business intelligence vendors, and a potentially significant driver for competitive rebalancing in the BI market.
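
To make the event-series point concrete, below is a minimal Python/pandas sketch of one common event-series operation, sessionizing a per-user event stream; the column names and the 30-minute gap threshold are illustrative assumptions, not any particular product’s behavior.

```python
import pandas as pd

# Sessionize a per-user event stream: a new session starts after a 30-minute gap.
events = pd.DataFrame({
    "user_id": ["a", "a", "a", "b", "b"],
    "ts": pd.to_datetime([
        "2014-12-31 10:00", "2014-12-31 10:05", "2014-12-31 11:30",
        "2014-12-31 10:02", "2014-12-31 10:03",
    ]),
}).sort_values(["user_id", "ts"])

gap = events.groupby("user_id")["ts"].diff() > pd.Timedelta(minutes=30)
events["session_id"] = gap.astype(int).groupby(events["user_id"]).cumsum()
print(events)  # user a gets sessions 0, 0, 1; user b gets 0, 0
```

The same logic in SQL needs window functions and careful ordering at minimum, which is part of why temporal predicates in analytic RDBMS and event-series-aware BI are worth watching.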

5. Event series even aside, I wish I understood more about business intelligence for non-tabular data. I plan to fix that.

6. Streaming and memory-centric processing are closely related subjects. What I wrote recently about them for Hadoop still applies: Spark, Kafka, etc. still form the base streaming case going forward; Storm is still around as an alternative; Tachyon or something like it will change the game somewhat. But not all streaming machine-generated data needs to land in Hadoop at all. As noted above, relational data stores (especially memory-centric ones) can suffice. So can NoSQL. So can Splunk.

Not all these considerations are important in all use cases. For one thing, latency requirements vary greatly. For example:

  • High-frequency trading is an extreme race; microseconds matter.
  • Internet interaction applications increasingly require data freshness to the last click or other user action. Computational latency requirements can go down to the single-digit milliseconds. Real-time ad auctions have a race aspect that may drive latency lower yet.
  • Minute-plus response can be fine for individual remote systems. Sometimes they ping home more rarely than that.

There’s also still plenty of true batch mode, but — and I say this as part of a conversation that’s been underway for over 40 years — interactive computing is preferable whenever feasible.

7. My views about predictive analytics are still somewhat confused. For starters:

  • The math and technology of predictive modeling both still seem pretty simple …
  • … but sometimes achieve mind-blowing results even so.
  • There’s a lot of recent innovation in predictive modeling, but adoption of the innovative stuff is still fairly tepid.
  • Adoption of the simple stuff is strong in certain market sectors, especially ones connected to customer understanding, such as marketing or anti-fraud.

So I’ll mainly just link to some of my past posts on the subject, and otherwise leave discussion of predictive analytics to another day.

Finally, back in 2011 I tried to broadly categorize analytics use cases. Based on that and also on some points I just raised above, I’d say that a ripe area for breakthroughs is problem and anomaly detection and diagnosis, specifically for machines and physical installations, rather than in the marketing/fraud/credit score areas that are already going strong. That’s an old discipline; the concept of statistical process control dates back before World War II. Perhaps such breakthroughs are already underway; the Conviva retraining example listed above is certainly imaginative. But I’d like to see a lot more in the area.

Even more important, of course, could be some kind of revolution in predictive modeling for medicine.

Categories: Other

“Innovation in Managing the Chaos of Everyday Project Management” is now on YouTube

If you missed Fishbowl’s recent webinar on our new Enterprise Information Portal for Project Management, you can now view a recording of it on YouTube.

 

Innovation in Managing the Chaos of Everyday Project Management discusses our strategy for leveraging the content management and collaboration features of Oracle WebCenter to enable project-centric organizations to build and deploy a project management portal. This solution was designed especially for groups like engineering and construction (E&C) firms and oil and gas companies, which need applications to be combined into one portal for simple access.

If you’d like to learn more about the Enterprise Information Portal for Project Management, visit our website or email our sales team at sales@fishbowlsolutions.com.

Categories: Fusion Middleware, Other

WibiData’s approach to predictive modeling and experimentation

DBMS2 - Tue, 2014-12-16 06:29

A conversation I have too often with vendors goes something like:

  • “That confidential thing you told me is interesting, and wouldn’t harm you if revealed; probably quite the contrary.”
  • “Well, I guess we could let you mention a small subset of it.”
  • “I’m sorry, that’s not enough to make for an interesting post.”

That was the genesis of some tidbits I recently dropped about WibiData and predictive modeling, especially but not only in the area of experimentation. However, Wibi just reversed course and said it would be OK for me to tell more or less the full story, as long as I note that we’re talking about something that’s still in beta test, with all the limitations (to the product and my information alike) that beta implies.

As you may recall:

With that as background, WibiData’s approach to predictive modeling as of its next release will go something like this:

  • There is still a strong element of classical modeling by data scientists/statisticians, with the models re-scored in batch, perhaps nightly.
  • But of course at least some scoring should be done as real-time as possible, to accommodate fresh data such as:
    • User interactions earlier in today’s session.
    • Technology for today’s session (device, connection speed, etc.)
    • Today’s weather.
  • WibiData Express is/incorporates a Scala-based language for modeling and query.
  • WibiData believes Express plus a small algorithm library gives better results than more mature modeling libraries.
    • There is some confirming evidence of this …
    • … but WibiData’s customers have by no means switched over yet to doing the bulk of their modeling in Wibi.
  • WibiData will allow line-of-business folks to experiment with augmentations to the base models.
  • Supporting technology for predictive experimentation in WibiData will include:
    • Automated multi-armed bandit testing (in previous versions even A/B testing has been manual).
    • A facility for allowing fairly arbitrary code to be included into otherwise conventional model-scoring algorithms, where conventional scoring models can come:
      • Straight from WibiData Express.
      • Via PMML (Predictive Modeling Markup Language) generated by other modeling tools.
    • An appropriate user interface for the line-of-business folks to do certain kinds of injecting.

Let’s talk more about predictive experimentation. WibiData’s paradigm for that is:

  • Models are worked out in the usual way.
  • Businesspeople have reasons for tweaking the choices the models would otherwise dictate.
  • They enter those tweaks as rules.
  • The resulting combination — models plus rules — are executed and hence tested.

If those reasons for tweaking are in the form of hypotheses, then the experiment is a test of those hypotheses. However, WibiData has no provision at this time to automagically incorporate successful tweaks back into the base model.
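
For readers unfamiliar with the automated multi-armed bandit testing mentioned above, here is a minimal epsilon-greedy sketch in Python. The arm names and conversion rates are made up, and this is emphatically not a description of WibiData’s actual implementation.

```python
import random

# Two "arms": the base model's choice vs. a rule-tweaked variant, with hidden,
# made-up conversion rates. Epsilon-greedy mostly exploits whichever arm is
# performing better while still exploring occasionally, so traffic shifts toward
# the winner automatically instead of waiting for a manual A/B readout.
TRUE_RATES = {"base_model": 0.050, "tweaked_model": 0.065}
counts = {arm: 0 for arm in TRUE_RATES}
wins = {arm: 0 for arm in TRUE_RATES}
EPSILON = 0.1

def choose_arm():
    if min(counts.values()) == 0 or random.random() < EPSILON:
        return random.choice(list(TRUE_RATES))              # explore
    return max(counts, key=lambda a: wins[a] / counts[a])   # exploit

for _ in range(10_000):                                     # simulated visitors
    arm = choose_arm()
    counts[arm] += 1
    wins[arm] += random.random() < TRUE_RATES[arm]          # 1 if this visitor converted

for arm in TRUE_RATES:
    print(arm, counts[arm], round(wins[arm] / counts[arm], 4))
```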

What might those hypotheses be like? It’s a little tough to say, because I don’t know in fine detail what is already captured in the usual modeling process. WibiData gave me only one real-life example, in which somebody hypothesized that shoppers would be in more of a hurry at some times of day than others, and hence would want more streamlined experiences when they could spare less time. Tests confirmed that was correct.

That said, I did grow up around retailing, and so I’ll add:

  • Way back in the 1970s, Wal-Mart figured out that in large college towns, clothing in the football team’s colors was wildly popular. I’d hypothesize such a rule at any vendor selling clothing suitable for being worn in stadiums.
  • A news event, blockbuster movie or whatever might trigger a sudden change in/addition to fashion. An alert merchant might guess that before the models pick it up. Even better, she might guess which psychographic groups among her customers were most likely to be paying attention.
  • Similarly, if a news event caused a sudden shift in buyers’ optimism/pessimism/fear of disaster, I’d test a response to that immediately.

Finally, data scientists seem to still be a few years away from neatly solving the problem of multiple shopping personas — are you shopping in your business capacity, or for yourself, or for a gift for somebody else (and what can we infer about that person)? Experimentation could help fill the gap.

Categories: Other

Notes and links, December 12, 2014

DBMS2 - Fri, 2014-12-12 05:05

1. A couple years ago I wrote skeptically about integrating predictive modeling and business intelligence. I’m less skeptical now.

For starters:

  • The predictive experimentation I wrote about over Thanksgiving calls naturally for some BI/dashboarding to monitor how it’s going.
  • If you think about Nutonian’s pitch, it can be approximated as “Root-cause analysis so easy a business analyst can do it.” That could be interesting to jump to after BI has turned up anomalies. And it should be pretty easy to whip up a UI for choosing a data set and objective function to model on, since those are both things that the BI tool would know how to get to anyway.

I’ve also heard a couple of ideas about how predictive modeling can support BI. One is via my client Omer Trajman, whose startup ScalingData is still semi-stealthy, but says they’re “working at the intersection of big data and IT operations”. The idea goes something like this:

  • Suppose we have lots of logs about lots of things.* Machine learning can help:
    • Notice what’s an anomaly.
    • Group* together things that seem to be experiencing similar anomalies.
  • That can inform a BI-plus interface for a human to figure out what is happening.

Makes sense to me.

* The word “cluster” could have been used here in a couple of different ways, so I decided to avoid it altogether.
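
Here is a minimal Python sketch of the anomaly-flagging step described above; the metric, threshold, and data are my illustrative assumptions, not ScalingData’s method.

```python
import numpy as np

# Toy version: per-host error counts, flag hosts whose counts sit far from the rest.
# The flagged hosts (and others anomalous in similar ways) are what would then be
# grouped and surfaced in a BI-plus interface for a human to diagnose.
rng = np.random.default_rng(0)
hosts = [f"host{i:02d}" for i in range(50)]
errors = rng.poisson(lam=5, size=50).astype(float)
errors[[3, 17]] = [60.0, 55.0]            # two misbehaving hosts

z_scores = (errors - errors.mean()) / errors.std()
anomalous = [(h, int(e)) for h, e, z in zip(hosts, errors, z_scores) if z > 3]
print("anomalous hosts:", anomalous)      # [('host03', 60), ('host17', 55)]
```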

Finally, I’m hearing a variety of “smart ETL/data preparation” and “we recommend what columns you should join” stories. I don’t know how much machine learning there’s been in those to date, but it’s usually at least on the roadmap to make the systems (yet) smarter in the future. The end benefit is usually to facilitate BI.

2. Discussion of graph DBMS can get confusing. For example:

  • Use cases run the gamut from short-request to highly analytic; no graph DBMS is well-suited for all graph use cases.
  • Graph DBMS have huge problems scaling, because graphs are very hard to partition usefully; hence some of the more analytic use cases may not benefit from a graph DBMS at all.
  • The term “graph” has meanings in computer science that have little to do with the problems graph DBMS try to solve, notably directed acyclic graphs for program execution, which famously are at the heart of both Spark and Tez.
  • My clients at Neo Technology/Neo4j call one of their major use cases MDM (Master Data Management), without getting much acknowledgement of that from the mainstream MDM community.

I mention this in part because that “MDM” use case actually has some merit. The idea is that hierarchies such as organization charts, product hierarchies and so on often aren’t actually strict hierarchies. And even when they are, they’re usually strict only at specific points in time; if you care about their past state as well as their present one, a hierarchical model might have trouble describing them. Thus, LDAP (Lightweight Directory Access Protocol) engines may not be an ideal way to manage and reference such “hierarchies”; a graph DBMS might do better.
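
As a minimal sketch of that point, consider a reporting structure in which one person has two managers. The names are made up, and I’m using plain Python/networkx rather than any particular graph DBMS:

```python
import networkx as nx

# Edges point from employee to manager. Because "bob" reports to two managers,
# this is a directed acyclic graph rather than a strict tree, which is exactly
# the case where an LDAP-style hierarchy model gets awkward.
reports_to = nx.DiGraph()
reports_to.add_edges_from([
    ("alice", "carol"),
    ("bob", "carol"),
    ("bob", "dave"),        # second manager, so no longer a strict hierarchy
    ("carol", "erin"),
    ("dave", "erin"),
])

# "Everyone above bob" is a simple reachability query on the graph.
print(nx.descendants(reports_to, "bob"))   # {'carol', 'dave', 'erin'}
```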

3. There is a surprising degree of controversy among predictive modelers as to whether more data yields better results. Besides, the most common predictive modeling stacks have difficulty scaling. And so it is common to model against samples of a data set rather than the whole thing.*

*Strictly speaking, almost the whole thing — you’ll often want to hold at least a sample of the data back for model testing.

Well, WibiData’s couple of Very Famous Department Store customers have tested WibiData’s ability to model against an entire database vs. their alternative predictive modeling stacks’ need to sample data. WibiData says that both report significantly better results from training over the whole data set than from using just samples.

4. Scaling Data is on the bandwagon for Spark Streaming and Kafka.

5. Derrick Harris and Pivotal turn out to have been earlier than me in posting about Tachyon bullishness.

6. With the Hortonworks deal now officially priced, Derrick was also free to post more about/from Hortonworks’ pitch. Of course, Hortonworks is saying Hadoop will be Big Big Big, and suggesting we should thus not be dismayed by Hortonworks’ financial performance so far. However, Derrick did not cite Hortonworks actually giving any reasons why its competitive position among Hadoop distribution vendors should improve.

Beyond that, Hortonworks says YARN is a big deal, but doesn’t seem to like Spark Streaming.

Categories: Other

A few numbers from MapR

DBMS2 - Wed, 2014-12-10 00:55

MapR put out a press release aggregating some customer information; unfortunately, the release is a monument to vagueness. Let me start by saying:

  • I don’t know for sure, but I’m guessing Derrick Harris was incorrect in suspecting that this release was a reaction to my recent post about Hortonworks’ numbers. For one thing, press releases usually don’t happen that quickly.
  • And as should be obvious from the previous point — notwithstanding that MapR is a client, I had no direct involvement in this release.
  • In general, I advise clients and other vendors to put out the kind of aggregate of customer success stories found in this release. However, I would like to see more substance than MapR offered.

Anyhow, the key statement in the MapR release is:

… the number of companies that have a paid subscription for MapR now exceeds 700.

Unfortunately, that includes OEM customers as well as direct ones; I imagine MapR’s direct customer count is much lower.

In one gesture to numerical conservatism, MapR did indicate by email that it counts by overall customer organization, not by department/cluster/contract (i.e., not the way Hortonworks does).

The MapR press release also said:

As of November 2014, MapR has one or more customers in eight vertical markets that have purchased more than one million dollars of MapR software and services.  These vertical markets are advertising/media, financial services, healthcare, internet, information technology, retail, security, and telecom.

Since the word “each” isn’t in that quote, we don’t even know whether MapR is referring to individual big customers or just general sector penetration. We also don’t know whether the revenue is predominantly subscription or some other kind of relationship.

MapR also indicated that the average customer more than doubled its annualized subscription rate vs. a year ago; the comparable figure — albeit with heavy disclaimers — from Hortonworks was 25%.

Categories: Other

Hadoop’s next refactoring?

DBMS2 - Sun, 2014-12-07 08:59

I believe in all of the following trends:

  • Hadoop is a Big Deal, and here to stay.
  • Spark, for most practical purposes, is becoming a big part of Hadoop.
  • Most servers will be operated away from user premises, whether via SaaS (Software as a Service), co-location, or “true” cloud computing.

Trickier is the meme that Hadoop is “the new OS”. My thoughts on that start:

  • People would like this to be true, although in most cases only as one of several cluster computing platforms.
  • Hadoop, when viewed as an operating system, is extremely primitive.
  • Even so, the greatest awkwardness I’m seeing when different software shares a Hadoop cluster isn’t actually in scheduling, but rather in data interchange.

There is also a minor issue that if you distribute your Hadoop work among extra nodes you might have to pay a bit more to your Hadoop distro support vendor. Fortunately, the software industry routinely solves more difficult pricing problems than that.

Recall now that Hadoop — like much else in IT — has always been about two things: data storage and program execution. The evolution of Hadoop program execution to date has been approximately:

  • Originally, MapReduce and JobTracker were the way to execute programs in Hadoop, period, at least if we leave HBase out of the discussion.
  • In a major refactoring, YARN replaced a lot of what JobTracker did, with the result that different program execution frameworks became easier to support.
  • Most of the relevant program execution frameworks — such as MapReduce, Spark or Tez — have data movement and temporary storage near their core.

Meanwhile, Hadoop data storage is mainly about HDFS (Hadoop Distributed File System). Its evolution, besides general enhancement, has included the addition of file types suitable for specific kinds of processing (e.g. Parquet and ORC to accelerate analytic database queries). Also, there have long been hacks that more or less bypassed central Hadoop data management, and let data be moved in parallel on a node-by-node basis. But several signs suggest that Hadoop data storage should and will be refactored too. Three efforts in particular point in that direction:

The part of all this I find most overlooked is inter-program data exchange. If two programs both running on Hadoop want to exchange data, what do they do, other than reading and writing to HDFS, or invoking some kind of a custom connector? What’s missing is a nice, flexible distributed memory layer, which:

  • Works well with Hadoop execution engines (Spark, Tez, Impala …).
  • Works well with other software people might want to put on their Hadoop nodes.
  • Interfaces nicely to HDFS, Isilon, object storage, et al.
  • Is fully parallel any time it needs to talk with persistent or external storage.
  • Can be fully parallel any time it needs to talk with any other software on the Hadoop cluster.

Tachyon could, I imagine, become that. HDFS caching probably could not.

In the past, I’ve been skeptical of in-memory data grids. But now I think that such a grid could take Hadoop to the next level of generality and adoption.
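
As a purely hypothetical sketch of the kind of interface such a distributed memory layer might expose to multiple engines sharing a cluster (these names do not correspond to Tachyon’s or anyone else’s actual API):

```python
# Hypothetical interface only; the point is the shape of the contract, not the code.
class SharedMemoryLayer:
    """A cluster-wide, memory-first store that multiple execution engines can share."""

    def __init__(self):
        self._partitions = {}            # a real layer would spread these across node RAM

    def put(self, dataset, partitioned_data):
        # A producer (say, a Spark job) publishes a dataset as named partitions.
        self._partitions[dataset] = partitioned_data

    def get(self, dataset):
        # A consumer (say, an Impala or Tez job) reads the same partitions,
        # without a round trip through HDFS or a custom connector.
        return self._partitions[dataset]

    def persist(self, dataset, storage_path):
        # Spilling to HDFS/Isilon/object storage should itself be fully parallel.
        raise NotImplementedError
```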

Categories: Other

Notes on the Hortonworks IPO S-1 filing

DBMS2 - Sun, 2014-12-07 07:53

Given my stock research experience, perhaps I should post about Hortonworks’ initial public offering S-1 filing. :) For starters, let me say:

  • Hortonworks’ subscription revenues for the 9 months ended last September 30 appear to be:
    • $11.7 million from everybody but Microsoft, …
    • … plus $7.5 million from Microsoft, …
    • … for a total of $19.2 million.
  • Hortonworks states subscription customer counts (as per Page 55 this includes multiple “customers” within the same organization) of:
    • 2 on April 30, 2012.
    • 9 on December 31, 2012.
    • 25 on April 30, 2013.
    • 54 on September 30, 2013.
    • 95 on December 31, 2013.
    • 233 on September 30, 2014.
  • Per Page 70, Hortonworks’ total September 30, 2014 customer count was 292, including professional services customers.
  • Non-Microsoft subscription revenue in the quarter ended September 30, 2014 seems to have been $5.6 million, or $22.5 million annualized. This suggests Hortonworks’ average subscription revenue per non-Microsoft customer is a little over $100K/year.
  • This IPO looks to be a sharply “down round” vs. Hortonworks’ Series D financing earlier this year.
    • In March and June, 2014, Hortonworks sold stock that subsequently was converted into 1/2 a Hortonworks share each at $12.1871 per share.
    • The tentative top of the offering’s price range is $14/share.
    • That’s also slightly down from the Series C price in mid-2013.

And, perhaps of interest only to me — there are approximately 50 references to YARN in the Hortonworks S-1, but only 1 mention of Tez.

Overall, the Hortonworks S-1 is about 180 pages long, and — as is typical — most of it is boilerplate, minutiae or drivel. As is also typical, two of the most informative sections of the Hortonworks S-1 are:

The clearest financial statements in the Hortonworks S-1 are probably the quarterly figures on Page 62, along with the tables on Pages F3, F4, and F7.

Special difficulties in interpreting Hortonworks’ numbers include:

  • A large fraction of revenue has come from a few large customers, most notably Microsoft. Details about those revenues are further confused by:
    • Difficulty in some cases getting a fix on the subscription/professional services split. (It does seem clear that Microsoft revenues are 100% subscription.)
    • Some revenue deductions associated with stock deals, called “contra-revenue”.
  • Hortonworks changed the end of its fiscal year from April to December, leading to comparisons of a couple of eight-month periods.
  • There was a $6 million lawsuit settlement (some kind of employee poaching/trade secrets case), discussed on Page F-21.
  • There is some counter-intuitive treatment of Windows-related development (cost of revenue rather than R&D).

One weirdness is that cost of professional services revenue far exceeds 100% of such revenue in every period Hortonworks reports. Hortonworks suggests that this is because:

  • Professional services revenue is commonly bundled with support contracts.
  • Such revenue is recognized ratably over the life of the contract, as opposed to a more natural policy of recognizing professional services revenue when the services are actually performed.

I’m struggling to come up with a benign explanation for this.

In the interest of space, I won’t quote Hortonworks’ S-1 verbatim; instead, I’ll just note where some of the more specifically informative parts may be found.

  • Page 53 describes Hortonworks’ typical sales cycles (they’re long).
  • Page 54 says the average customer has increased subscription payments 25% year over year, but emphasizes that the sample size is too small to be reliable.
  • Pages 55-63 have a lot of revenue and expense breakdowns.
  • Deferred revenue numbers (which are a proxy for billings and thus signed contracts) are on Page 65.
  • Pages II 2-3 list all (I think) Hortonworks financings in a concise manner.

And finally, Hortonworks’ dealings with its largest customers and strategic partners are cited in a number of places. In particular:

  • Pages 52-3 cover dealings with Yahoo, Teradata, Microsoft, and AT&T.
  • Pages 82-3 discuss OEM revenue from Hewlett-Packard, Red Hat, and Teradata, none of which amounts to very much.
  • Page 109 covers the Teradata agreement. It seems that there’s less going on than originally envisioned, in that Teradata made a nonrefundable prepayment far greater than turns out to have been necessary for subsequent work actually done. That could produce a sudden revenue spike or else positive revenue restatement as of February, 2015.
  • Page F-10 has a table showing revenue from Hortonworks’ biggest customers (Company A is Microsoft and Company B is Yahoo).
  • Pages F37-38 further cover Hortonworks’ relationships with Yahoo, Teradata and AT&T.

Correction notice: Some of the page numbers in this post were originally wrong, surely because Hortonworks posted an original and amended version of this filing, and I got the two documents mixed up.  A huge Thank You goes to Merv Adrian for calling my attention to this, and I think I’ve now fixed them. I apologize for the errors!

Categories: Other

Reminder: Fishbowl Solutions Webinar Tomorrow at 1 PM CST

There’s still time to register for the webinar that Fishbowl Solutions and Oracle will be holding tomorrow from 1 PM-2 PM CST! Innovation in Managing the Chaos of Everyday Project Management will feature Fishbowl’s AEC Practice Director Cole Orndorff. Orndorff, who has a great deal of experience with enterprise information portals, said the following about the webinar:

“According to Psychology Today, the average employee can lose up to 40% of their productivity switching from task to task. The number of tasks executed across a disparate set of systems over the lifecycle of a complex project is overwhelming, and in most cases, 20% of each solution is utilized 80% of the time.

I am thrilled to have the opportunity to present on how improving workforce effectiveness can enhance your margins. This can be accomplished by providing a consistent, intuitive user experience across the diverse systems project teams use and by reusing the intellectual assets that already exist in your organization.”

To register for the webinar, visit Oracle’s website. To learn more about Fishbowl’s new Enterprise Information Portal for Project Management, visit our website.

Categories: Fusion Middleware, Other

Thoughts and notes, Thanksgiving weekend 2014

DBMS2 - Sun, 2014-11-30 19:48

I’m taking a few weeks defocused from work, as a kind of grandpaternity leave. That said, the venue for my Dances of Infant Calming is a small-but-nice apartment in San Francisco, so a certain amount of thinking about tech industries is inevitable. I even found time last Tuesday to meet or speak with my clients at WibiData, MemSQL, Cloudera, Citus Data, and MongoDB. And thus:

1. I’ve been sloppy in my terminology around “geo-distribution”, in that I don’t always make it easy to distinguish between:

  • Storing different parts of a database in different geographies, often for reasons of data privacy regulatory compliance.
  • Replicating an entire database into different geographies, often for reasons of latency and/or availability/disaster recovery.

The latter case can be subdivided further depending on whether multiple copies of the data can accept first writes (aka active-active, multi-master, or multi-active), or whether there’s a clear single master for each part of the database.

What made me think of this was a phone call with MongoDB in which I learned that the limit on number of replicas had been raised from 12 to 50, to support the full-replication/latency-reduction use case.

2. Three years ago I posted about agile (predictive) analytics. One of the points was:

… if you change your offers, prices, ad placement, ad text, ad appearance, call center scripts, or anything else, you immediately gain new information that isn’t well-reflected in your previous models.

Subsequently I’ve been hearing more about predictive experimentation such as bandit testing. WibiData, whose views are influenced by a couple of Very Famous Department Store clients (one of which is Macy’s), thinks experimentation is quite important. And it could be argued that experimentation is one of the simplest and most direct ways to increase the value of your data.

3. I’d further say that a number of developments, trends or possibilities I’m seeing are or could be connected. These include agile and experimental predictive analytics in general, as noted in the previous point, along with: 

Also, the flashiest application I know of for only-moderately-successful KXEN came when one or more large retailers decided to run separate models for each of thousands of stores.

4. MongoDB, the product, has been refactored to support pluggable storage engines. In connection with that, MongoDB does/will ship with two storage engines – the traditional one and a new one from WiredTiger (but not TokuMX). Both will be equally supported by MongoDB, the company, although there surely are some tiers of support that will get bounced back to WiredTiger.

WiredTiger has the same techie principals as Sleepycat – get the wordplay?! – which was Mike Olson’s company before Cloudera. When asked, Mike spoke of those techies in remarkably glowing terms.

I wouldn’t be shocked if WiredTiger wound up playing the role for MongoDB that InnoDB played for MySQL. What I mean is that there were a lot of use cases for which the MySQL/MyISAM combination was insufficiently serious, but InnoDB turned MySQL into a respectable DBMS.

5. Hadoop’s traditional data distribution story goes something like:

  • Data lives on every non-special Hadoop node that does processing.
  • This gives the advantage of parallel data scans.
  • Sometimes data locality works well; sometimes it doesn’t.
  • Of course, if the output of every MapReduce step is persisted to disk, as is the case with Hadoop MapReduce 1, you might create some of your own data locality …
  • … but Hadoop is getting away from that kind of strict, I/O-intensive processing model.

However, Cloudera has noticed that some large enterprises really, really like to have storage separate from processing. Hence its recent partnership to work with EMC Isilon. Other storage partnerships, as well as a better fit with S3/object storage kinds of environments, are sure to follow, but I have no details to offer at this time.

6. Cloudera’s count of Spark users in its customer base is currently around 60. That includes everything from playing around to full production.

7. Things still seem to be going well at MemSQL, but I didn’t press for any details that I would be free to report.

8. Speaking of MemSQL, one would think that at some point something newer would replace Oracle et al. in the general-purpose RDBMS world, much as Unix and Linux grew to overshadow the powerful, secure, reliable, cumbersome IBM mainframe operating systems. On the other hand:

  • IBM blew away its mainframe competitors and had pretty close to a monopoly. But Oracle has some close and somewhat newer competitors in DB2 and Microsoft SQL Server. Therefore …
  • … upstarts have three behemoths to outdo, not just one.
  • MySQL, PostgreSQL and to some extent Sybase are still around as well.

Also, perhaps no replacement will be needed. If we subdivide the database management world into multiple categories including:

  • General-purpose RDBMS.
  • Analytic RDBMS.
  • NoSQL.
  • Non-relational analytic data stores (perhaps Hadoop-based).

it’s not obvious that the general-purpose RDBMS category on its own requires any new entrants to ever supplant the current leaders.

All that said – if any of the current new entrants do pull off the feat, SAP HANA is probably the best (longshot) guess to do so, and MemSQL the second-best.

9. If you’re a PostgreSQL user with performance or scalability concerns, you might want to check what Citus Data is doing.

Categories: Other

Upcoming Webinar: Innovation in Managing the Chaos of Everyday Project Management

On Thursday, December 4th from 1 PM-2 PM CST, Fishbowl Solutions will hold a webinar in conjunction with Oracle about our new solution for enterprise project management. This solution transforms how project-based tools, like Oracle Primavera, and project assets, such as documents and diagrams, are accessed and shared.

With this solution:

  • Project teams will have access to the most accurate and up to date project assets based on their role within a specific project
  • Through a single dashboard, project managers will gain new real-time insight to the overall status of even the most complex projects
  • The new mobile workforce will now have direct access to the same insight and project assets through an intuitive mobile application

With real-time insight and enhanced information sharing and access, this solution can help project teams increase the ability to deliver on time and on budget. To learn more about our Enterprise Information Portal for Project Management, visit Fishbowl’s website.

Fishbowl’s Cole Orndorff, who has 10+ years in the engineering and construction industry, will keynote and share how a mobile-ready portal can integrate project information from Oracle Primavera and other sources to serve information up to users in a personalized, intuitive user experience.

Register here

Categories: Fusion Middleware, Other

Technical differentiation

DBMS2 - Sat, 2014-11-15 06:00

I commonly write about real or apparent technical differentiation, in a broad variety of domains. But actually, computers only do a couple of kinds of things:

  • Accept instructions.
  • Execute them.

And hence almost all IT product differentiation fits into two buckets:

  • Easier instruction-giving, whether that’s in the form of a user interface, a language, or an API.
  • Better execution, where “better” usually boils down to “faster”, “more reliable” or “more reliably fast”.

As examples of this reductionism, please consider:

  • Application development is of course a matter of giving instructions to a computer.
  • Database management systems accept and execute data manipulation instructions.
  • Data integration tools accept and execute data integration instructions.
  • System management software accepts and executes system management instructions.
  • Business intelligence tools accept and execute instructions for data retrieval, navigation, aggregation and display.

Similar stories are true about application software, or about anything that has an API (Application Programming Interface) or SDK (Software Development Kit).

Yes, all my examples are in software. That’s what I focus on. If I wanted to be more balanced in including hardware or data centers, I might phrase the discussion a little differently — but the core points would still remain true.

What I’ve said so far should make more sense if we combine it with the observation that differentiation is usually restricted to particular domains. I mean several different things by that last bit. First, most software only purports to do a limited class of things — manage data, display query results, optimize analytic models, manage a cluster, run a payroll, whatever. Even beyond that, any inherent superiority is usually restricted to a subset of potential use cases. For example:

  • Relational DBMS presuppose that data fits well (enough) into tabular structures. Further, most RDBMS differentiation is restricted to a further subset of such cases; there are many applications that don’t require — for example — columnar query selectivity or declarative referential integrity or Oracle’s elite set of security certifications.
  • Some BI tools are great for ad-hoc navigation. Some excel at high-volume report displays, perhaps with a particular flair for mobile devices. Some are learning how to query non-tabular data.
  • Hadoop, especially in its early days, presupposed data volumes big enough to cluster and application models that fit well with MapReduce.
  • A lot of distributed computing aids presuppose particular kinds of topologies.

A third reason for technical superiority to be domain-specific is that advantages are commonly coupled with drawbacks. Common causes of that include:

  • Many otherwise-advantageous choices strain hardware budgets. Examples include:
    • Robust data protection features (most famously RAID and two-phase commit)
    • Various kinds of translation or interpretation overhead.
  • Yet other choices are good for some purposes but bad for others. It’s fastest to write data in the exact way it comes in, but then it would be slow to retrieve later on.
  • Innovative technical strategies are likely to be found in new products that haven’t had time to become mature yet.

And that brings us to the main message of this post: Your spiffy innovation is important in fewer situations than you would like to believe. Many, many other smart organizations are solving the same kinds of problems as you; their solutions just happen to be effective in somewhat different scenarios than yours. This is especially true when your product and company are young. You may eventually grow to cover a broad variety of use cases, but to get there you’ll have to more or less match the effects of many other innovations that have come along before yours.

When advising vendors, I tend to think in terms of the layered messaging model, and ask the questions:

  • Which of your architectural features gives you sustainable advantages in features or performance?
  • Which of your sustainable advantages in features or performance provides substantial business value in which use cases?

Closely connected are the questions:

  • What lingering disadvantages, if any, does your architecture create?
  • What maturity advantages do your competitors have, and when (if ever) will you be able to catch up with them?
  • In which use cases are your disadvantages important?

Buyers and analysts should think in such terms as well.

Related links

Daniel Abadi, who is now connected to Teradata via their acquisition of Hadapt, put up a post promoting some interesting new features of theirs. Then he tweeted that this was an example of what I call Bottleneck Whack-A-Mole. He’s right. But since much of his theme was general praise of Teradata’s mature DBMS technology, it would also have been accurate to reference my post about The Cardinal Rules of DBMS Development.

Categories: Other

Enterprise Libraries: The Next Iteration of WebCenter Folders

This blog post was written by Matt Rudd, Enterprise Support Team Lead at Fishbowl Solutions. Matt has participated in multiple WebCenter 11g upgrades during his time with Fishbowl, and recently developed a solution for an issue he ran into frequently while performing upgrades.

With the release of the ADF Content UI for WebCenter Content, it has become clear that the long-term road map for folder-based storage within WebCenter Content is based on enterprise libraries. The new Content UI only allows you to browse content contained within these libraries, which are top-level “buckets” of logically grouped content for your enterprise. However, content (i.e. files) cannot be added directly under an enterprise library. One or more folders must be added under an enterprise library, and then files can be directly added to the folders. The enterprise libraries container can also be viewed via the legacy WebCenter Content UI by navigating to Browse Content->Folders, as shown below.

[WebCenter Content screenshot]

In order to use the ADF Content UI with existing folders and content, they need to be migrated to enterprise libraries via the Move command on a folder’s action menu.

[WebCenter 11g screenshot]

For customers that have been using folder-based storage within WebCenter Content for a number of years, this migration can be especially difficult. Changes involving special characters and double spaces have presented problems, as have other, thornier issues. The most challenging has been the nondescript error message “Unable to update the content item information for MyContent.” This error pops up repeatedly for content that is not in workflow, is in Released status, has no other errors of any kind, and to which the moving user has full admin permissions. In addition, such content can be moved individually without issue, but not as part of a Framework Folders to enterprise libraries migration.

While working through these errors toward a successful enterprise libraries migration, we discovered that if we copied the folders but moved the content, we could migrate the majority of the content successfully while still cleaning up special-character issues as necessary. To do this efficiently for large folder structures, the process needed to be automated.

Rather than building a custom component, we opted to build a custom RIDC application that recursively copies all, or any portion of, a folder structure from one parent to another while moving the content into the newly copied destination. This flexibility, along with ensuring that duplicate folders were not created in the destination folder structure, allowed us to run the application as many times as necessary. If a folder failed to move due to an issue (e.g. a disallowed special character), the folder name could be changed and the application re-run to process only that folder recursively.

The number of content items under a particular level of the folder structure was verified with database queries to ensure all content had been moved before deleting the old, now-empty folder structure. This iterative process allowed us to migrate approximately 50,000 folders containing 400,000 content items in about 15 hours. However, that was only after rigorously testing the content migration in development and staging environments to resolve as many content and folder issues as possible prior to the go-live migration. The RIDC application used no custom services of any kind and relied solely on those provided by core WebCenter Content and the Framework Folders component. A simplified sketch of the approach appears below.
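
The following is a minimal Java sketch of that kind of RIDC application, not Fishbowl’s actual code. The class and method names are invented, and the Framework Folders service and parameter names (FLD_BROWSE, FLD_CREATE_FOLDER, FLD_MOVE, fFolderGUID, fParentGUID, fTargetGUID, item1) are assumptions that should be verified against the services reference for your WebCenter Content version; error handling, duplicate-folder detection, and result-set paging are omitted.

```java
// Minimal sketch (not Fishbowl's code): recursively copy a folder tree while
// moving its files. Service/parameter names are assumptions to verify against
// your WebCenter Content services reference.
import oracle.stellent.ridc.IdcClient;
import oracle.stellent.ridc.IdcClientManager;
import oracle.stellent.ridc.IdcContext;
import oracle.stellent.ridc.model.DataBinder;
import oracle.stellent.ridc.model.DataObject;
import oracle.stellent.ridc.model.DataResultSet;

public class FolderCopyContentMove {

    private final IdcClient client;
    private final IdcContext context;

    public FolderCopyContentMove(String idcUrl, String adminUser) throws Exception {
        this.client = new IdcClientManager().createClient(idcUrl); // e.g. "idc://content-host:4444"
        this.context = new IdcContext(adminUser);                  // admin user running the migration
    }

    /** Copy the source folder under destParentGuid, move its files there, then recurse. */
    public void copyFoldersMoveContent(String sourceFolderGuid, String folderName,
                                       String destParentGuid) throws Exception {
        // 1. Create the matching folder under the destination parent. Real code would
        //    first look for an existing folder of the same name so re-runs don't duplicate.
        DataBinder create = client.createBinder();
        create.putLocal("IdcService", "FLD_CREATE_FOLDER");
        create.putLocal("fParentGUID", destParentGuid);
        create.putLocal("fFolderName", folderName);
        DataBinder created = client.sendRequest(context, create).getResponseAsBinder();
        String newFolderGuid = created.getLocal("fFolderGUID");

        // 2. Browse the source folder for its files and subfolders.
        DataBinder browse = client.createBinder();
        browse.putLocal("IdcService", "FLD_BROWSE");
        browse.putLocal("fFolderGUID", sourceFolderGuid);
        DataBinder listing = client.sendRequest(context, browse).getResponseAsBinder();

        // 3. Move each file into the newly created destination folder.
        DataResultSet files = listing.getResultSet("ChildFiles");
        if (files != null) {
            for (DataObject file : files.getRows()) {
                DataBinder move = client.createBinder();
                move.putLocal("IdcService", "FLD_MOVE");
                move.putLocal("item1", "fFileGUID:" + file.get("fFileGUID"));
                move.putLocal("fTargetGUID", newFolderGuid);
                client.sendRequest(context, move).getResponseAsBinder();
            }
        }

        // 4. Recurse into subfolders: copy the structure, move the content.
        DataResultSet folders = listing.getResultSet("ChildFolders");
        if (folders != null) {
            for (DataObject child : folders.getRows()) {
                copyFoldersMoveContent(child.get("fFolderGUID"), child.get("fFolderName"),
                                       newFolderGuid);
            }
        }
    }
}
```

Because the real application reuses an existing destination folder rather than creating a duplicate, the job can be re-run until database counts confirm that every content item has been moved.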

The post Enterprise Libraries: The Next Iteration of WebCenter Folders appeared first on Fishbowl Solutions' C4 Blog.

Categories: Fusion Middleware, Other

Notes on predictive modeling, November 2, 2014

DBMS2 - Sun, 2014-11-02 05:49

Following up on my notes on predictive modeling post from three weeks ago, I’d like to tackle some areas of recurring confusion.

Why are we modeling?

Ultimately, there are two reasons to model some aspect of your business:

  • You generally want insight and understanding.
    • This is analogous to why you might want to do business intelligence.
    • It commonly includes a search for causality, whether or not “root cause analysis” is exactly the right phrase to describe the process.
  • You want to do calculations from the model to drive wholly or partially automated decisions.
    • A big set of examples can be found in website recommenders and personalizers.
    • Another big set of examples can be found in marketing campaigns.
    • For an example of partial automation, consider a tool that advises call center workers.

How precise do models need to be?

Use cases vary greatly with respect to the importance of modeling precision. If you’re doing an expensive mass mailing, 1% additional accuracy is a big deal. But if you’re doing root cause analysis, a 10% error may be immaterial.

Who is doing the work?

It is traditional to have a modeling department of “data scientists” or SAS programmers, as the case may be. While it seems cool to put predictive modeling straight in the hands of business users — some business users, at least — it’s rare for them to use predictive modeling tools more sophisticated than Excel. For example, KXEN never did all that well.

That said, I support the idea of putting more modeling in the hands of business users. Just be aware that doing so is still a small business at this time.

“Operationalizing” predictive models

The topic of “operationalizing” models arises often, and it turns out to be rather complex. Usually, to operationalize a model, you need:

  • A program that generates scores, based on the model.
  • A program that consumes scores (for example a recommender or fraud alerter).

In some cases, the two programs might be viewed as different modules of the same system.

While it is not actually necessary for there to be a numerical score — or scores — in the process, in practice there usually is one. Certainly the score calculations can create a boundary for loose coupling between model evaluation and the rest of the system, as in the sketch after the list below.

That said:

  • Sometimes the scoring is done on the fly. In that case, the two programs mentioned above are closely integrated.
  • Sometimes the scoring is done in batch. In that case, loose coupling seems likely. Often, there will be ETL (Extract/Transform/Load) to make the scores available to the program that will eventually use them.
  • PMML (Predictive Model Markup Language) is good for some kinds of scoring but not others. (I’m not clear on the details.)
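
As a minimal, hypothetical illustration of that score boundary, here is a Java sketch in which one component computes scores from a toy logistic model and another consumes them on the fly; in a batch setup, the scorer would instead persist (id, score) pairs for the consumer to pick up later. The class names, features, weights, and threshold are all invented for illustration and do not describe any particular product.

```java
// Hypothetical sketch of score-based loose coupling: one component computes
// scores, another consumes them. The toy model and numbers are invented.
import java.util.LinkedHashMap;
import java.util.Map;

public class ScoringPipeline {

    /** The boundary: the consumer only ever sees (entity id -> score). */
    interface Scorer {
        double score(Map<String, Double> features);
    }

    /** A toy logistic-regression scorer standing in for a real model. */
    static class LogisticScorer implements Scorer {
        private final Map<String, Double> weights;
        private final double intercept;

        LogisticScorer(Map<String, Double> weights, double intercept) {
            this.weights = weights;
            this.intercept = intercept;
        }

        @Override
        public double score(Map<String, Double> features) {
            double z = intercept;
            for (Map.Entry<String, Double> w : weights.entrySet()) {
                z += w.getValue() * features.getOrDefault(w.getKey(), 0.0);
            }
            return 1.0 / (1.0 + Math.exp(-z));   // probability-like score
        }
    }

    /** The consumer: e.g. a fraud alerter that only cares about the score. */
    static void alertIfSuspicious(String entityId, double score, double threshold) {
        if (score >= threshold) {
            System.out.println("ALERT: " + entityId + " scored " + score);
        }
    }

    public static void main(String[] args) {
        Scorer scorer = new LogisticScorer(Map.of("amount", 0.002, "velocity", 1.1), -4.0);

        // Score a small batch of rows, handing each score to the consumer on the fly.
        Map<String, Map<String, Double>> rows = new LinkedHashMap<>();
        rows.put("txn-1", Map.of("amount", 120.0, "velocity", 0.4));
        rows.put("txn-2", Map.of("amount", 4800.0, "velocity", 3.2));

        for (Map.Entry<String, Map<String, Double>> row : rows.entrySet()) {
            alertIfSuspicious(row.getKey(), scorer.score(row.getValue()), 0.5);
        }
    }
}
```

The point of the interface is that the consumer never needs to know how the score was produced, which is what lets the model be validated, refreshed, or even swapped out independently.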

In any case, operationalizing a predictive model can or should include:

  • A process for creating the model.
  • A process for validating and refreshing the model.
  • A flow of derived data.
  • A program that consumes the model’s outputs.

Traditional IT considerations, such as testing and versioning, apply.

What do we call it anyway?

The term “predictive analytics” was coined by SPSS. It basically won. However, some folks — including whoever named PMML — like the term “predictive modeling” better. I’m in that camp, since “modeling” seems to be a somewhat more accurate description of what’s going on, but I’m fine with either phrase.

Some marketers now use the term “prescriptive analytics”. In theory that makes sense, since:

  • “Prescriptive” can be taken to mean “operationalized predictive”, saving precious syllables and pixels.
  • What’s going on is usually more directly about prescription than prediction anyway.

Edit: Ack! I left the final paragraph out of the post, namely:

In practice, however, the term “prescriptive analytics” is a strong indicator of marketing nonsense. Predictive modeling has long been used to — as it were — prescribe business decisions; marketers who use the term “prescriptive analytics” are usually trying to deny that very obvious fact.

Categories: Other

Analytics for lots and lots of business users

DBMS2 - Sun, 2014-11-02 05:45

A common marketing theme in the 2010s decade has been to claim that you make analytics available to many business users, as opposed to your competition, who only make analytics available to (pick one):

  • Specialists (with “PhD”s).
  • Fewer business users (a thinner part of the horizontally segmented pyramid — perhaps inverted — on your marketing slide, not to be confused with the horizontally segmented pyramids — perhaps inverted — on your competition’s marketing slides).

Versions of this claim were also common in the 1970s, 1980s, 1990s and 2000s.

Some of that is real. In particular:

  • Early adoption of analytic technology is often in line-of-business departments.
  • Business users on average really do get more numerate over time, my three favorite examples of that being:
    • Statistics is taught much more in business schools than it used to be.
    • Statistics is taught much more in high schools than it used to be.
    • Many people use Excel.

Even so, for most analytic tools, power users tend to be:

  • People with titles or roles like “business analyst”.
  • More junior folks pulling things together for their bosses.
  • A hardcore minority who fall into neither of the first two categories.

Asserting otherwise is rarely more than marketing hype.

Related link

Categories: Other

Datameer at the time of Datameer 5.0

DBMS2 - Sun, 2014-10-26 02:42

Datameer checked in, having recently announced general availability of Datameer 5.0. So far as I understood, Datameer is still clearly in the investigative analytics business, in that:

  • Datameer does business intelligence, but not at human real-time speeds. Datameer query durations are sometimes sub-minute, but surely not sub-second.
  • Datameer also does lightweight predictive analytics/machine learning — k-means clustering, decision trees, and so on.

Key aspects include:

  • Datameer runs straight against Hadoop.
  • Like many other analytic offerings, Datameer is meant to be “self-service”, for line-of-business business analysts, and includes some “data preparation”. Datameer also has had some data profiling since Datameer 4.0.
  • The main way of interacting with Datameer seems to be visual analytic programming. However, Datameer has evolved somewhat away from its original spreadsheet metaphor.
  • Datameer’s primitives resemble those you’d find in SQL (e.g. JOINs, GROUPBYs). More precisely, that would be SQL with a sessionization extension; e.g., there’s a function called GROUPBYGAP. (A generic sketch of gap-based sessionization appears after this list.)
  • Datameer lets you write derived data back into Hadoop.
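
To illustrate what a gap-based sessionization primitive does, here is a generic Java sketch. It is not Datameer’s implementation or the semantics of GROUPBYGAP, just the usual idea of starting a new session whenever the time between a user’s consecutive events exceeds a threshold.

```java
// Generic gap-based sessionization: assign a session id per user, incrementing
// it whenever the gap between consecutive events exceeds maxGapMillis.
import java.util.ArrayList;
import java.util.List;

public class GapSessionizer {

    record Event(String userId, long timestampMillis) {}

    record SessionizedEvent(String userId, long timestampMillis, int sessionId) {}

    /** Events must already be sorted by (userId, timestamp). */
    static List<SessionizedEvent> sessionize(List<Event> events, long maxGapMillis) {
        List<SessionizedEvent> out = new ArrayList<>();
        String currentUser = null;
        long lastTs = 0;
        int sessionId = 0;

        for (Event e : events) {
            boolean newUser = !e.userId().equals(currentUser);
            boolean gapExceeded = !newUser && (e.timestampMillis() - lastTs) > maxGapMillis;
            if (newUser) {
                sessionId = 0;          // restart session numbering for each user
            } else if (gapExceeded) {
                sessionId++;            // same user, but too long since the last event
            }
            out.add(new SessionizedEvent(e.userId(), e.timestampMillis(), sessionId));
            currentUser = e.userId();
            lastTs = e.timestampMillis();
        }
        return out;
    }

    public static void main(String[] args) {
        List<Event> events = List.of(
            new Event("u1", 0),
            new Event("u1", 10_000),      // 10 seconds later: same session
            new Event("u1", 2_000_000),   // ~33 minutes later: new session
            new Event("u2", 5_000));
        sessionize(events, 30 * 60 * 1000L)
            .forEach(e -> System.out.println(e.userId() + " session " + e.sessionId()));
    }
}
```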

Datameer use cases sound like the usual mix, consisting mainly of a lot of customer analytics, a bit of anti-fraud, and some operational analytics/internet-of-things. Datameer claims 200 customers and 240 installations, the majority of which are low-end/single-node users, but at least one of which is a multi-million dollar relationship. I don’t think those figures include OEM sell-through. I forgot to ask for any company size metrics, such as headcount.

In a chargeable add-on, Datameer 5.0 has an interesting approach to execution. (The lower-cost version just uses MapReduce.)

  • An overall task can of course be regarded as a DAG (Directed Acyclic Graph).
  • Datameer automagically picks an execution strategy for each node. Administrator hints are allowed.
  • There are currently three choices for execution: MapReduce, clustered in-memory, or single-node. This all works over Tez and YARN.
  • Spark is a likely future option.

Datameer calls this “Smart Execution”. Notes on Smart Execution include:

  • Datameer sees a lot of tasks that look at 10-100 megabytes of data, especially in malware/anomaly detection. Datameer believes there can be a huge speed-up from running those on a single node rather than in a clustered mode that requires data to be (re)distributed, with at least one customer reporting a >20X speedup on at least one job.
  • Yes, each step of the overall DAG might look to the underlying execution engine as a DAG of its own.
  • Tez can fire up processes ahead of when they’re needed, so you don’t have to wait for all the process start-up delays in series.
  • Datameer had a sampling/preview engine from the get-go that ran outside of Hadoop MapReduce. That’s the basis for the non-MapReduce options now.
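
For readers who want to picture the per-node choice described above, here is a generic Java sketch of the idea. The thresholds, names, and hint mechanism are invented for illustration; they are not Datameer’s actual rules or APIs.

```java
// Generic sketch of per-node engine selection: pick an execution strategy for
// each DAG node from its estimated input size, honoring an optional admin hint.
import java.util.Optional;

public class EngineChooser {

    enum Engine { SINGLE_NODE, CLUSTER_IN_MEMORY, MAP_REDUCE }

    record DagNode(String name, long estimatedInputBytes, Optional<Engine> adminHint) {}

    static final long SINGLE_NODE_LIMIT = 100L * 1024 * 1024;        // ~100 MB (illustrative)
    static final long IN_MEMORY_LIMIT   = 50L * 1024 * 1024 * 1024;  // ~50 GB (illustrative)

    static Engine choose(DagNode node) {
        if (node.adminHint().isPresent()) {
            return node.adminHint().get();               // administrator hint wins
        }
        if (node.estimatedInputBytes() <= SINGLE_NODE_LIMIT) {
            return Engine.SINGLE_NODE;                   // avoid cluster and shuffle overhead
        }
        if (node.estimatedInputBytes() <= IN_MEMORY_LIMIT) {
            return Engine.CLUSTER_IN_MEMORY;
        }
        return Engine.MAP_REDUCE;                        // fall back to batch MapReduce
    }

    public static void main(String[] args) {
        System.out.println(choose(new DagNode("malware-scan", 40L * 1024 * 1024, Optional.empty())));
        System.out.println(choose(new DagNode("big-join", 200L * 1024 * 1024 * 1024, Optional.empty())));
    }
}
```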

Strictly from a BI standpoint, Datameer seems clunky.

  • Datameer doesn’t have drilldown.
  • Datameer certainly doesn’t let you navigate from one visualization to the next ala QlikView/Tableau/et al. (Note to self: I really need to settle on a name for that feature.)
  • While Datameer does have a bit in the way of event series visualization, it seems limited.
  • Of course, Datameer doesn’t have streaming-oriented visualizations.
  • I’m not aware of any kind of text search navigation.

Datameer does let you publish BI artifacts, but doesn’t seem to have any collaboration features beyond that.

Last and also least: In an earlier positioning, Datameer made a big fuss about an online app store. Since analytic app stores never amount to much, I scoffed.* That said, they do have one, so I asked which apps got the most uptake. Most of them seem to be apps that boil down to connectors, access to outside data sets, and/or tutorials. Also mentioned were two more substantive apps, one for path-oriented clickstream analysis and one for funnel analysis combining several event series.

*I once had a conversation with a client that ended:

  • “This app store you’re proposing will not be a significant success.”
  • “Are you sure?”
  • “Almost certain. It really just sounds like StreamBase’s.”
  • “I’m not familiar with StreamBase’s app store.”
  • “My point exactly.”

Categories: Other