After most Gartner Data & Analytics Summits, Andrew White, Distinguished VP Analyst at Gartner, posts a short blog reflecting on key topics from each gathering – and sometimes an almost hour-by-hour rundown of his time. Recently, White blogged about his experience at this year’s annual Summit in Sydney, Australia, which bears some reflection leading up to the US version of the conference in Orlando, FL from March 17-21.
Before we dive into White’s observations, it’s best to set the stage and briefly describe the event. Simply put, the event – or rather events, as they are held in multiple locations all over the world – is perhaps the largest annual gathering of data professionals. Managers and executives researching or engaged in projects ranging from Master Data Management to BI & Analytics attend to hear from Gartner analysts, technology vendors and, of course, other leaders who have undertaken such projects at their own companies.
In 2017, Gartner made the decision to merge the previously separate BI & Analytics Summit with the Enterprise Information & Master Data Management Summit, resulting in a much larger crowd. The decision was likely a boon for Gartner with increased attendance, but it also represented an opportunity for attendees to gain knowledge about a wider range of subjects and network with other data geeks responsible for different, and often fascinating, projects. (Having attended a few of the MDM Summits before the merger and one in the “new” format, I can personally attest to the benefits of focusing on the entire information lifecycle.)
Gartner analysts like White are understandably the busiest people at these events, rushing from presentations to many, many meetings with Gartner customers asking questions on a full range of topics, such as these (the subjects White was asked about most frequently in Sydney):
- Governing data; master data management; data quality; trust in data
- Coping with silos and modernizing information infrastructure
- Developing a data, and/or analytics, BI and AI strategy (what to use, where and when)
- Data Literacy: the language, posture and methods used to explain how data drives better business outcomes
None of these topics is particularly surprising to me, but a couple stood out. I will address the other in a later blog, but first things first. The topic that stood out in big capital letters was this:
TRUST IN DATA
Over the decade-plus that I have worked in the data management space, the topic of trust is one that consistently bubbles up past the buzzword of the year, and for good reason. If business users, or worse yet customers, don’t trust the information they are receiving, then they will look to alternative sources. This may mean more silos within your organization, lack of adoption of your analytics investments, or lost business from customers.
What instills trust in your enterprise data and how can you ensure that trust is earned?
- Data Modeling – As we explored recently, a successful data management project starts with a robust data model that incorporates all of the information your business users and customers require. The same is true for trust in your data overall. Great care should be taken to ensure that every necessary element and attribute is reflected in your data model; on the flip side, the model should not contain extraneous or otherwise unneeded data. Stakeholders should be interviewed, source systems and business applications surveyed, and a clear model planned and agreed upon before implementing any technology. And if you have already implemented a data management platform and need to investigate where trust has broken down, we suggest you start by interrogating the data model.
- Data Quality – We use the cliché “garbage in, garbage out” often because it is apt for data management. The fastest way to make users and customers lose faith in your data is to consistently deliver information that is inaccurate or out of date. Once a data model is set, processes and tools, like ETL, should be put in place to standardize existing and new data and make sure it adheres to the model. But that is only one step: quality requires ongoing oversight. Tools exist, and should be used, to measure the completeness of data records or to grade the quality of source systems and vendors in order to encourage improvement. And workflows should be created for data stewards to remediate poor-quality data.
- Data Governance – We have also addressed several Data Governance topics lately, but it bears repeating in this context: a Data Governance program that is well executed and well maintained will reinforce the work you have done to instill trust in your data.
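As a toy illustration of the modeling point above – agree on the required attributes and keep extraneous data out of the model – here is a minimal sketch in Python. The entity and attribute names are hypothetical, not drawn from any particular platform:

```python
from dataclasses import dataclass, fields

# Illustrative only: a minimal, agreed-upon model for a customer
# master record. Required attributes are explicit; anything else
# is rejected rather than silently absorbed into the model.
@dataclass(frozen=True)
class Customer:
    customer_id: str
    legal_name: str
    country: str

def load(raw: dict) -> Customer:
    """Build a Customer, rejecting attributes outside the agreed model."""
    allowed = {f.name for f in fields(Customer)}
    extras = set(raw) - allowed
    if extras:
        raise ValueError(f"extraneous attributes not in the model: {extras}")
    return Customer(**raw)
```

In a real program you would enforce this in the data model of your MDM or integration platform, but the principle is the same: the model itself defines what belongs, so untrusted, unplanned data cannot creep in.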
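The quality checks described above – measuring record completeness and routing weak records to a steward – might be sketched as follows. The record structure, field names, and threshold are illustrative assumptions, not a prescription:

```python
# Hypothetical required fields for a customer record; in practice
# these would come from your agreed-upon data model.
REQUIRED_FIELDS = ["name", "email", "country"]

def completeness(record: dict) -> float:
    """Fraction of required fields that are present and non-empty."""
    filled = sum(1 for f in REQUIRED_FIELDS if record.get(f))
    return filled / len(REQUIRED_FIELDS)

def needs_stewardship(records: list[dict], threshold: float = 1.0) -> list[dict]:
    """Records below the completeness threshold: candidates for a
    data steward's remediation queue."""
    return [r for r in records if completeness(r) < threshold]

records = [
    {"name": "Acme Corp", "email": "info@acme.example", "country": "US"},
    {"name": "Globex", "email": "", "country": "DE"},  # missing email
]
print(needs_stewardship(records))  # only the Globex record is flagged
```

Real data quality tools grade source systems and vendors on dimensions beyond completeness (accuracy, timeliness, consistency), but even a simple score like this gives stewards a concrete worklist.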
If you are attending the Gartner Data & Analytics Summit in Orlando and are interested in how to improve trust in your enterprise data, you may consider the following sessions:
MONDAY, MARCH 18, 2019 / 10:30 AM – 11:15 AM
The Foundation of Master Data Management
Simon Walker, Sr Principal Analyst, Gartner
MONDAY, MARCH 18, 2019 / 11:30 AM – 12:15 PM
The Future of Master Data Management
Michael Moran, VP Analyst, Gartner
MONDAY, MARCH 18, 2019 / 11:30 AM – 12:15 PM
Adopt a Data Hub Strategy for Controlled and Streamlined Data Sharing
Ted Friedman, Distinguished VP Analyst, Gartner
MONDAY, MARCH 18, 2019 / 04:15 PM – 05:00 PM
How to Evolve Data Quality Programs for Best Business Impacts
Melody Chien, Sr Director Analyst, Gartner
Still want more information? We will also be in Orlando for the event. Schedule a meeting with us during the Summit!