Lessons from Apple WWDC & Data Management – Part 2: Ubiquity

I paid for it, so I should be able to have access to it when and where I want it.  Moving from one computer to another, or to a mobile device, shouldn’t be a barrier.  The interface should be intuitive and the content accurate.  There should also be a mechanism to handle exceptions.  Yes, I am talking about iCloud’s music support, but I could easily be talking about an organization’s business data, and the same principles should apply.

Consumer technologies have made IT’s job both easier and harder at the same time.  Computing power that used to cost tens or hundreds of thousands of dollars is now available for a fraction of the cost.  Specialized devices are being replaced by mobile apps (have you seen Square on iPhone/iPad/Android?  Why maintain complex credit card machinery when you can take payments anytime and almost anywhere?).  Application designs are also becoming more targeted.  The limited screen real estate pushes developers and users alike to focus on the most important information.  Push alerts notify users when there is an update, instead of making them run the weekly report for a comparison or wait on an email.  Enterprise workflow can now be accessible, practical, and useful at a smartphone near you.

What about information content and quality?  Apple’s new offering states that if they know you have rights to certain data, it will be available to you on any of your registered devices.  If you have additional personal data (e.g. ripped CDs), you also have the option, for a modest and transparently stated fee, to make that available on the same platform.  Can your enterprise apps do that, or does any data not centrally governed come with a large price tag?

The information age is challenging many paradigms.  I believe one is that of “acceptable data quality”.  In the past, we had more time to check and understand the quality of the information.  Now, given the pace of new data creation, we have far less time.  So we need to shift our thinking from traditional per-incident assessments to reusable and scalable exception handling processes.
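To make that shift concrete, here is a minimal sketch (in Python, with hypothetical rule and record names chosen purely for illustration) of what a reusable exception-handling process might look like: quality rules are defined once, and every record that fails one is routed to a shared exception queue for triage, rather than triggering a one-off, per-incident investigation.

    from dataclasses import dataclass, field
    from typing import Callable

    # A quality rule is just a named predicate that any record can be checked against.
    @dataclass
    class Rule:
        name: str
        check: Callable[[dict], bool]  # returns True when the record passes

    @dataclass
    class ExceptionQueue:
        items: list = field(default_factory=list)

        def add(self, record: dict, rule: Rule) -> None:
            # Every failure lands here with enough context to be triaged later,
            # instead of being handled as a one-off incident.
            self.items.append({"record": record, "failed_rule": rule.name})

    def run_quality_checks(records, rules, queue):
        """Apply the same reusable rules to every record; return the clean ones."""
        clean = []
        for record in records:
            failures = [r for r in rules if not r.check(record)]
            for rule in failures:
                queue.add(record, rule)
            if not failures:
                clean.append(record)
        return clean

    # Hypothetical rules and data, purely for illustration.
    rules = [
        Rule("customer_id_present", lambda r: bool(r.get("customer_id"))),
        Rule("amount_non_negative", lambda r: r.get("amount", 0) >= 0),
    ]
    records = [
        {"customer_id": "C-001", "amount": 120.0},
        {"customer_id": "", "amount": -5.0},
    ]

    queue = ExceptionQueue()
    good = run_quality_checks(records, rules, queue)
    print(f"{len(good)} clean record(s), {len(queue.items)} exception(s) queued")

The specifics don’t matter; the point is that the rules and the queue are defined once and keep working as data volume grows.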

The world is an interesting place.  This can be an adventure and a curse.  Wishing you a productive and fun journey.

Cheers

Lessons from Apple WWDC for Data Management

Yes, I really will attempt to draw parallels, even though I wasn’t at the event. I have, however, become a Mac user over the years, and having seen how Apple’s approach is driving user and even corporate behavior, I thought seeking parallels would be a useful exercise.

“PC free” was one of the key points of Steve Jobs’ presentation. With iCloud, Apple continues to cannibalize its own market (and likely others’ as well). iPods, except for the ultra-portable ones still favored for their battery life, are no longer the hot sellers. Gone too, with the success of smartphones, are the Flip and other technologies and vendors.

Lesson 1. Be aware of what’s coming, for others will
From tool vendors to IT professionals, I have seen many parties succeed or slow down based on their willingness to change and adopt. After the .com boom, the data profession lamented how quick development had killed many data efforts. Many of the same groups also missed trends that others embraced, putting themselves in a tougher position. (Luck and executive support are still factors; more on that later.)

Apple didn’t give up the music store business when fewer iPods were being sold. They reinvented it, and maintained and even expanded its relevance through other services people could relate to.

What would it take for data governance to move from maintaining standards to business-results-focused service delivery? I mean data discovery and advanced analytical support. I refer to embracing agile techniques and EII/EDM-like solutions. These would still rely on data modeling and data standards, and make them even more relevant. It would also make their value easier to describe and maintain.

More lessons and parallels coming soon

Cheers

Saying Yes!! (or at least Maybe) to NO SQL

Since the beginning of this year, the amount of chatter around the “NO SQL” (Not Only SQL) topic has increased. The EDW conference seems to have made more people aware, and whether on LinkedIn or other forums, there is a healthy amount of information exchange going on.

  • Some are excited about the possibility of a new way to analyze data
  • Others think this would make data quality management even more difficult

I think both groups are right. I also think it wouldn’t necessarily make data quality management more difficult, but it would make the challenge more obvious.

Where we all seem to agree is on the need to understand what the different tools are and the core strengths of different approaches. I think, as a data profession, we need to be personally accountable for understanding what can offer value to our colleagues and customers, even if we don’t have a lot of time for research. I fundamentally believe that even if I don’t make the time to be aware of how a new technology or process may be of use, if the value (or marketing 🙂) is good enough, others will experiment, and when things stick, we may be playing catch-up. I think it would be an error for the data profession to repeat the same mistakes that were made when XML was first becoming popular, leaving those data structure definitions to the developers.
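As a small, purely illustrative sketch (Python, with a hypothetical document shape I made up), this is the kind of lightweight structure definition a data team could own for a document store, rather than leaving it implicit in whichever application happens to write the documents:

    # A hypothetical "customer" document shape, owned by the data team.
    CUSTOMER_SHAPE = {
        "customer_id": str,
        "name": str,
        "email": str,
    }

    def validate_document(doc: dict, shape: dict) -> list:
        """Return a list of problems; an empty list means the document conforms."""
        problems = []
        for field_name, expected_type in shape.items():
            if field_name not in doc:
                problems.append(f"missing field: {field_name}")
            elif not isinstance(doc[field_name], expected_type):
                problems.append(f"{field_name} should be {expected_type.__name__}")
        return problems

    doc = {"customer_id": "C-001", "name": "Ada", "email": 42}  # wrong type for email
    print(validate_document(doc, CUSTOMER_SHAPE))  # -> ['email should be str']

Nothing about this requires SQL or a relational engine; it simply carries the discipline of agreed-upon structure into a schemaless world.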

So saying Yes, No, or at least Maybe to NO SQL or any other innovation is up to us. In many instances, our experience with past technologies (relational, IMS…) can carry forward into how new technologies can make older approaches more scalable and viable. For truly innovative thinking, it should at least be interesting enough to do some reading on it.

We live in “interesting times”, with the excitement and challenges of it all.