Your company’s data strategy is out of control, and you might not know it yet
In this special guest article, Nick Bonfiglio, CEO of Syncari, presents key lessons from a recent cross-functional leadership panel: Data interoperability is the key to effective operational data. Nick is a CEO, Founder and Author with over 25 years of technology experience writing on data ecosystems, SaaS, and product development. He spent nearly seven years as Executive Vice President of Products at Marketo and is now CEO and Founder of Syncari, the company behind the codeless data automation platform.
Today’s sprawling multi-cloud, on-premise environment offers businesses many great options for managing resources and data. However, according to the IDG Cloud Computing study, 46% of IT managers surveyed said it has also increased management complexity.
The average business operates dozens or even hundreds of disparate SaaS applications; that’s a lot of crucial business information scattered across data silos, each with the potential to store what should be identical customer information in slightly different ways. For example: does marketing or customer service know when a particular customer last interacted with the business?
This is a major headache for operational teams, as a precise understanding of a customer’s journey through the various touchpoints of any business is essential if you want to develop and deliver a tailored experience. The situation is particularly evident in the departments in contact with customers: marketing, sales, customer success, support and revenue operations.
To tackle this data silo problem, organizations are turning to a mix of integration technologies, APIs, modern database architectures, and ETL approaches to unify all of their customer data into one cloud data warehouse. To help them, Snowflake, Amazon Redshift, and Google BigQuery have made it incredibly easy to centralize huge amounts of data.
And as the popularity of the cloud data warehouse has grown, its primary use cases have shifted from data storage and reporting to a more general notion of a data lake, where raw data can be stored, transformed, and unified for analysis.
Sounds good, doesn’t it? But when we look at it from an operational point of view, the picture becomes more complicated. How are companies achieving this magical data transformation within their cloud data warehouse? Who owns this strategy? More importantly, how can companies leverage this information when it is not available in the systems where customer-facing teams actually work?
A cross-functional leadership panel recently discussed the new rules for enterprise data in 2021. The panel looked at the operational issues faced by functional leaders: from how companies prioritize the right data to why integration solutions have struggled to give companies a single source of truth. We looked at the role of the data warehouse in managing the volumes of inbound data from an ever-growing list of SaaS providers. And, in particular, we discussed issues of data variety, ownership of data strategy, and the role that operational executives can play in shaping an organization’s data strategy.
Let’s start with the different varieties of data that come from the different operational systems. Ilya Kirnos, product manager for Google Analytics and now CTO and founding partner of SignalFire, identified the need for better solutions to standardize data so that it can be processed.
“When I talk to people about Big Data, I divide it into three categories: volume of data, velocity of data, and variety of data. For the first two, we now have great tools to store a lot of data and process it quickly. The third, having to manage data from many different sources and then canonicalize and unify it, remains an unresolved issue. A new entrant, the cloud data warehouse, offers this notion that you can extract and load data into your data warehouse and, once it’s there, magically transform it inside.”
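The "extract, load, then transform inside the warehouse" pattern can be illustrated with a minimal sketch. Here sqlite3 stands in for a cloud warehouse, and the table, field names, and sample records are hypothetical; the point is that raw, inconsistent rows land first, and canonicalization happens afterward in SQL:

```python
import sqlite3

# Raw records from two source systems: the same customer, stored
# slightly differently (casing, stray whitespace, different dates).
raw_rows = [
    ("crm",       "Pat@Example.com ", "2021-03-01"),
    ("marketing", "pat@example.com",  "2021-03-04"),
]

# "Load" step: dump the raw data into the warehouse as-is.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE raw_contacts (source TEXT, email TEXT, last_touch TEXT)")
conn.executemany("INSERT INTO raw_contacts VALUES (?, ?, ?)", raw_rows)

# "Transform inside the warehouse" step: canonicalize the key and
# unify to one row per real customer.
unified = conn.execute("""
    SELECT LOWER(TRIM(email)) AS canonical_email,
           MAX(last_touch)    AS last_touch
    FROM raw_contacts
    GROUP BY canonical_email
""").fetchall()

print(unified)  # one row per customer, with the latest touchpoint
```

In a real pipeline the same idea plays out at scale: the warehouse ingests raw feeds from each SaaS tool, and SQL (or a transformation layer on top of it) produces the unified view.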
As we saw earlier, it is easy, if not too easy, to bring data into a data warehouse. But once it’s there, what’s the best way to define ownership of the data policy within the organization? Ross Mason, founder of MuleSoft and Dig Ventures, recommends an approach that keeps ownership with the people who truly understand the business impact of the data:
“Too often when people start unifying their data in a data warehouse or data lake, they dump too much stuff in one place, hoping to glean future insight from that information store. Unfortunately, it ends up creating a data swamp that no one goes near because it’s too complicated; no one understands how to reach in and get what they need. The key is to break things down into manageable pieces and keep the domain of your data warehouse tight, based on the users who will be interacting with it.”
The panel also addressed the practical issue of prioritizing how to apply data and analysis to solve problems and glean information. Of course, the operational teams must be closely involved in these decisions. Eileen Treanor, CFO at Inkling, observed the important role leaders need to play in data strategy decision making so they can get the information they need:
“Sometimes we think of data warehouses as a panacea. A big part of my role is to orient the business around the five or six pieces of data that we need to consolidate that will give us more insight into how the business is doing. There has to be a strategic lens to any data initiative that can answer the C-level question of ‘what are we going to do with this data?’ And let’s not try to do 500 things; let’s do five things. I still think it’s best to start small and build on that.”
Eileen also observed how difficult it is to get everyone on the same page when it comes to data, even with all of these investments in integration, warehousing, and BI. And with the resulting indecision comes inaction. Before you know it, your data warehouse unintentionally becomes another data silo, where the resulting information is only available to executives or is passed on to customer-facing teams too late to be useful. In other words, data warehouses may have become information silos.
A key point of the panel: data interoperability is the key to effective operational data. Cloud data warehouses are here to stay, so rather than dedicating them to business intelligence reporting, companies should view their warehouse as one part of an overall solution built around a unified data model.
To maximize the value of these efforts, you need to ensure that the insights gleaned from your data warehouse are quickly fed back to the operational systems where teams can act on them; otherwise, they quickly lose their value. And while this is possible today using integration tools, it often means stitching together disparate solutions to get data into your warehouse, transform and unify that data, and then deliver the resulting information to where it is needed.
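Feeding warehouse insights back into operational tools is sometimes called "reverse ETL." A minimal sketch of the idea follows; the query function and the CRM client are hypothetical stand-ins (a real pipeline would use a warehouse driver and the operational tool's actual API):

```python
def fetch_warehouse_insights():
    """Stand-in for a SQL query against the warehouse that computes
    a per-customer metric, e.g. an account health score."""
    return [
        {"email": "pat@example.com", "health_score": 82},
        {"email": "lee@example.com", "health_score": 35},
    ]

class FakeCrm:
    """Illustrative stand-in for a CRM API client."""
    def __init__(self):
        self.records = {}

    def upsert_contact(self, email, fields):
        # Create the contact if missing, otherwise merge in new fields.
        self.records.setdefault(email, {}).update(fields)

def sync_insights_to_crm(crm):
    # Push each computed score to where customer-facing teams work,
    # so they can act on it without querying the warehouse themselves.
    for row in fetch_warehouse_insights():
        crm.upsert_contact(row["email"], {"health_score": row["health_score"]})

crm = FakeCrm()
sync_insights_to_crm(crm)
print(crm.records["lee@example.com"])
```

The design point is the direction of flow: the warehouse remains the place where data is unified, but its outputs are written back into the systems of record rather than sitting in a dashboard.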
How can we make it easier for revenue managers to get the information they need? Over the past 10 years, we have focused on solving big data problems, developing increasingly sophisticated technologies to perform data unification, standardization, and transformation operations within data warehouses.
But for 95% of B2B companies, better big data management is not the key to unlocking business growth. What matters is alignment between departments on key customer and financial data, so that go-to-market activities, plans, and strategies begin with a common understanding of what is happening in the business.
Today’s operations teams need unified, consistent, and reliable data that is available on demand across the enterprise. None of the integration solutions on the market today can manage data across the growing number of SaaS products typically used by businesses, nor can they align data across these multiple systems. In fact, the various data-integration products create a spaghetti-like jumble of integrations, effectively scrambling data as it moves through the enterprise.
What was missing was data interoperability: a way to give business users confidence in their data by eliminating the worry of whether the data they are seeing is reliable. One approach to solving problems resulting from uncoordinated point-to-point connections between systems is to apply stateful multidirectional synchronization. This is a major change from the current “we connect to everything” approach which is considered state of the art by some data scientists or IT teams.
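To make the contrast concrete, here is a toy illustration of stateful multidirectional synchronization: a hub keeps the last-known state of each record, merges updates from any connected system against that state, and fans the unified record back out, instead of wiring every system to every other. All class and field names here are hypothetical; a real platform would add schemas, conflict policies, and change detection.

```python
class SyncHub:
    """Toy hub for stateful multidirectional sync (illustrative only)."""

    def __init__(self):
        self.state = {}    # record_id -> {"fields": {...}, "version": int}
        self.systems = {}  # system name -> its local copy of each record

    def register(self, name):
        self.systems[name] = {}

    def update(self, source, record_id, fields):
        # Merge the incoming change into the hub's stored state
        # rather than forwarding it point-to-point.
        rec = self.state.setdefault(record_id, {"fields": {}, "version": 0})
        rec["fields"].update(fields)
        rec["version"] += 1
        # Fan the unified record out to every system, including the
        # source, so all copies converge on the same state.
        for copies in self.systems.values():
            copies[record_id] = dict(rec["fields"])

hub = SyncHub()
for name in ("crm", "marketing", "support"):
    hub.register(name)

hub.update("crm", "cust-1", {"name": "Pat", "tier": "gold"})
hub.update("support", "cust-1", {"open_tickets": 2})

# Every system now sees the same unified record.
print(hub.systems["marketing"]["cust-1"])
```

The "stateful" part is the hub's stored copy of each record: because it remembers what every system last saw, it can merge partial updates from different sources instead of blindly overwriting, which is what uncoordinated point-to-point connections tend to do.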
With this approach, non-technical business users can assemble disparate systems into a single, unified data model. They can then use this unified data model to strengthen data consistency across the technology stack, applying codeless functions to transform, manage, and synchronize data so that the resulting “unified customer view” is not trapped in a non-operational data silo.
Sign up for the free insideBIGDATA newsletter.
Join us on Twitter: @InsideBigData1 – https://twitter.com/InsideBigData1