Three Fundamentals for Building a Solid Data Governance Program

Time and again, we talk with clients who are neglecting perhaps the most important element of a solid data strategy: data governance. With the explosion of data from the growing adoption of digital initiatives, and with the world now undeniably data-driven, it is more important than ever for organizations to protect data as a key asset. From regulatory challenges in the U.S. driving a need for better data governance programs and a trend toward hiring chief data officers, to the imminent General Data Protection Regulation (GDPR) in the European Union, pressure is growing on organizations across all industries to mature the way they manage and govern their data assets.

Data governance as a practice has been around for some time, but many organizations continue to struggle to incorporate basic data governance processes into their overarching data strategies. Those who fail do not always do so for lack of effort. Where to start, and how to build a data governance plan, remains a significant challenge for most companies, and we have seen many firms suffer multiple false starts before gaining the needed traction.

During a recent webinar we hosted, we asked the audience – primarily IT, audit, finance, and risk and compliance professionals – to weigh in on how well their organizations are doing with data governance. A full 39 percent of this group told us they have no idea whether their data governance programs are effective. Even more startling, just short of 20 percent admitted their enterprise has no data governance program in place.

These numbers may appear surprising, but they are typical of what we see across all industries. Certain groups, such as financial services, do show higher maturity in data governance, driven by specific regulatory and compliance requirements such as anti-money laundering (AML) rules and Dodd-Frank, and by the fact that many banks have a global presence that makes them subject to GDPR for their EU clients. Many organizations recognize the need for strong governance but often find it takes years to work through the complexities involved in establishing workable governance functions.

We understand the situation. We also know there is a way for organizations to build an outstanding data governance program that fits their needs, without the frustration. Here are just three tips to help get a data governance program started:

  1. Begin with an assessment of the organization’s current state. At Protiviti, we leverage multiple assessment models, including the Enterprise Data Management (EDM) Council’s Data Management Capability Assessment Model (DCAM) for financial services companies, and the Data Management Association (DAMA) International’s Guide to the Data Management Body of Knowledge (DMBOK®) for other industries. The DCAM framework includes eight core components, ranging from data management strategy, data and technology architecture, and data quality to the rules of engagement for data governance programs. Whichever model is used, it should be matched to the organization’s needs and not applied generically.
  2. Establish a pragmatic operating model. Data governance programs must combine functional expertise, industry knowledge and technology in a well-organized and coordinated way that is planned, holistic, actionable, simple and efficient. We call that our PHASE approach, and it sets a solid foundation for future data governance by bringing together these three key components and identifying tactical steps to execute and operationalize data governance.
  3. Have simple guiding principles. We recommend that organizations:
    • Establish clear goals and purpose
    • Only put governance where needed
    • Keep the plan simple
    • Design from the top down, but implement from the bottom up
    • Be flexible
    • Communicate, communicate, communicate.

One of the most critical success factors in establishing a data governance program is to identify the value it will deliver to the organization. There is a risk this focus on value may get lost in compliance situations, where meeting a specific requirement is unquestionably the goal. Therefore, it is important for organizations to also ask: What real business problem are we addressing through our governance strategy? How will the organization be better off tomorrow than today as a result of our governance work? What are our data problems costing us, both in opportunity costs (not being able to pursue something) and in real monetary costs? And how can we do all of this with a smaller spend while showing quick value?

As chief data officers join the executive suite in increasing numbers, the importance of maturing data governance is confirmed. Ensuring that the data governance team has a seat at the table for all major business decisions and key projects – both business and technology – is proving to be a best practice and a critical success factor for the future of the organization’s data strategy. Data governance is a process, not a project. By making it a core competency, organizations will be ready to take on the data-driven future.

Matt McGivern

Josh Hewitt

Categories: Data Governance

What’s New in SAP S/4 HANA Implementations? A Report from GRC 2018

Note: Several of our colleagues from Protiviti’s Technology Consulting practice attended the SAPInsider 2018 GRC and Financials Conferences. Their blogs on SAP-related topics are shared here. Mithilesh Kotwal, Director, discusses the importance of proactively addressing implementation risks during S/4HANA migrations.

Ronan O’Shea, our ERP Solutions global lead, delivered an insightful session reviewing the different responsibilities of the business during a system implementation. As he pointed out, systems must be designed from the outset to support the business. Organizations cannot expect system integrators (SIs) to develop these designs alone, as SIs are technical experts, not business process experts. This is why the business should be responsible for defining the vision and operational expectations for the future state of each business process that the new system will impact.

During his session, Ronan shared key system implementation statistics, including:

  • 74.1% of ERP projects exceed budget
  • 40% report major operational disruptions after go-live

What can you do to ensure your implementation does not become part of statistics like these?

The role that the business plays in an ERP system implementation is at least as critical as those played by IT and the system integrator (SI). The business owns the top four risks on an ERP implementation:

  • Program Governance
    • Misconception: The SI will manage the governance of the entire ERP implementation.
    • The truth: Typically, it is beyond the scope of the SI to provide the level of management needed to oversee the implementation end-to-end.
    • What should companies do? Establish a comprehensive PMO structure that manages the program beyond just the SI deliverables, including:
      • Oversight of business and IT resources
      • Management of other vendors
      • Open engagement with company leadership on the risks and issues within the program
      • Unrelenting commitment to the transformation goals of the program.

These implementations are complex and have impact across many functions; the incentives of different parties must be checked and balanced.

  • Business Process Design
    • Misconception: The SI will guide us to adopt leading design practices baked into the software.
    • The truth: The requirements and design of the future solution emerge over time (if at all), leading to rework, changes, delays and missed user expectations both pre- and post-go-live. The SI is primarily a technical expert, not a business process expert.
    • What should companies do? The business retains responsibility for defining the vision of what it expects, operationally, from the new system for each business process. This vision can take the form of:
      • Future-state end-to-end process flows that outline the automation level expected
      • Governing business rules (e.g., customer price calculations, cost allocations, tax computations)
      • Data requirements and event triggers for integrations to other systems
      • Controls and contingency or exception workflows
      • Who takes action

Take your time to define this vision so that you have a baseline against which to evaluate the technical solution delivered by the SI and confirm that you are meeting your transformation objectives. Assess process owners’ awareness and understanding of the expected outcomes of key design decisions.

  • Data Conversion
    • Misconception: Data conversion is a technical task with no business involvement, and we can simply move the data from the legacy system to the new one.
    • The truth: Companies leave this activity until too late and without business involvement, resulting in incorrect data mapping and poor data quality that cause delays in the implementation and undermine the operational effectiveness of the new system.
    • What should companies do?
      • Review the plans and designs for the overall information strategy, data governance and data conversion, and their ability to ensure complete and accurate data will be available at go-live
      • Perform project-specific quality assurance procedures
      • Provide recommendations for longer-term initiatives to maintain data quality

Data is key: the business should treat data conversion design and data cleansing as a top-priority work stream and take operational and audit considerations into account. The business must also establish strong data governance that extends beyond the successful rollout of the new system.

  • Organizational Change
    • Misconception: Organizational change is training, right?
    • The truth: Users and business process owners are often unprepared to participate effectively in the project: in business requirements, design, testing, training and adoption. A lack of focus on building user and management support, adoption and readiness leads to ineffective and inefficient processes and post-go-live disruptions, regardless of the quality of the system implemented.
    • What should companies do?
      • Examine user adoption and enablement plans for the system and processes, including ongoing user support and training, process-related organizational change, and process performance measurement.
      • Plan for the business to develop policies and procedures, define new roles and responsibilities, and deliver practical training.

Prepare the organization well for the transformation project you are undertaking, and engage users frequently to prepare them for the change and increase adoption.

These four key risk areas, along with others, are explored in detail in this white paper.

Mithilesh Kotwal, Director
Technology Consulting
mithilesh.kotwal@protiviti.com

Categories: S/4HANA

ICYMI: Protiviti’s Brian Jordan Talks Data Mining

In case you missed it, click here to listen to a recent episode of the “Coffee Break with Game Changers” radio show, presented by SAP.

In this episode, Protiviti Managing Director Brian Jordan joined Marc Kinast from Celonis and SAP’s John Santic to discuss “Digital Footprints: Mining the Data in Your Operations.” Tune in to learn why Brian’s favorite movie quote is from Clint Eastwood: “A man’s got to know his limitations.”

You’ll also learn why process mining is one of the hot trends in business intelligence today.

Brian Jordan

Tech Trends at BI/HANA 2018

Protiviti’s Narjit Aujla was a first-time attendee at the 2018 BI/HANA conference. He shares his observations here.

The morning begins with Spanish guitars before the keynote session at the BI & HANA 2018 conference in Las Vegas. Taking the stage is Ivo Bauermann, SAP’s Global Vice President, SAP Analytics, Head of Business Development & Global Center of Excellence. He tells an amusing story about how people once shunned the newly invented automobile in favor of the horse-drawn carriage, a choice that, conveniently, let them keep drinking. To Ivo’s point, people are resistant to change: what seems farfetched now becomes the de facto standard tomorrow, much like the way we solve that same problem today with Uber.

Today’s analytics market is growing rapidly. SAP’s data warehousing solution, SAP BW, is still commonly used, with numerous conference sessions supporting it. Businesses are seeking enhancements to their data solutions, such as SAP HANA® in-memory computing and cloud agility. The good news is that SAP users have a wide range of options, including BW on HANA, BW/4HANA and S/4HANA®. However, the ability to choose the right system architecture to meet business needs will be critical to managing data effectively going forward.

Analytical toolsets are more varied than ever. While products such as Tableau and Microsoft Power BI vie for the self-service spotlight, SAP Analytics Cloud (SAC) is also a serious contender. Riding the bleeding edge of real-time and predictive capabilities, SAC is looking to reinvent the way we approach analytics. Simple workflows and powerful natural language queries might be the difference between a successful Business Intelligence (BI) strategy and an outdated software graveyard. Regardless of tool selection, what we continue to see is the criticality of established data foundations and governance practices.

Read more

About Narjit Aujla

Narjit Aujla is a Manager in the Data & Analytics practice at Protiviti. Working with companies across all industries, he seeks to understand a business’s primary concerns and delivers end-to-end business intelligence solutions.

Narjit specializes in Data and Analytics with a focus on Enterprise Information Management (EIM), front-end dashboard development, data modeling, and architecture strategy. At Protiviti, he supports the SAP BusinessObjects Enterprise suite of tools, including SAP Design Studio and SAP Web Intelligence, as well as other analytical tools such as SAP HANA Studio and SAP Lumira. He has also helped businesses refine their Data Governance strategies and identify gaps in business processes using data profiling tools such as SAP Information Steward.

Categories: Industry Trends

Saving Analytical Data Without Violating GDPR – Part 2: Aggregation and Anonymization

In a previous post, we reviewed two options for de-identifying analytical data under GDPR – minimization and masking. In this installment, we discuss two additional options: aggregation and anonymization.

Aggregation

Another way to comply with GDPR is to group data in such a way that individual records no longer exist and cannot be distinguished from other records in the same grouping. This may be accomplished through a single aggregation of the data into the most commonly consumed set or, more commonly, by creating multiple aggregations of the data for different use cases.

For this strategy to work, the data set needs to remove data elements that can directly (national number identifier, name, passport ID, etc.) or indirectly (region, area code, etc.) allow the identity of a record to be derived. This can be somewhat complicated, as the indirect identification needs to take into consideration things like the set size and dimensionality of the data as well as background or publicly available data. For thousands of daily sales records across a country, this may easily be sufficient, but for mobile telephone location data in a large metro area it would be very ineffective.

The potential downside of this strategy is that the usefulness of the data for broad analytical purposes may need to be reduced to provide adequate anonymization. For a more technical explanation of this type of aggregation, take a look at the following publication on l-diversity and privacy-centric data mining algorithms, A Comprehensive Review on Privacy Preserving Data Mining.
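
To make this concrete, the sketch below shows one way such an aggregation might be implemented in Python with pandas. The table, column names and group-size threshold are hypothetical; the point is simply to drop direct identifiers, aggregate on the remaining dimensions, and suppress any group too small to hide an individual record.

```python
# Minimal sketch (hypothetical table and column names): aggregate away individual
# records and suppress groups that are too small to provide anonymity.
import pandas as pd

MIN_GROUP_SIZE = 3  # illustrative threshold; choose based on set size and dimensionality

sales = pd.DataFrame({
    "customer_id": [101, 102, 103, 104, 105, 106],          # direct identifier
    "region":      ["North", "North", "North", "North", "South", "South"],
    "sale_date":   ["2018-05-01"] * 6,
    "amount":      [120.0, 80.0, 45.5, 300.0, 99.9, 150.0],
})

# Remove the direct identifier, then aggregate on the remaining dimensions.
aggregated = (
    sales.drop(columns=["customer_id"])
         .groupby(["region", "sale_date"], as_index=False)
         .agg(record_count=("amount", "size"), total_amount=("amount", "sum"))
)

# Suppress any group whose size falls below the threshold (here, the two "South" records).
aggregated = aggregated[aggregated["record_count"] >= MIN_GROUP_SIZE]
print(aggregated)
```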

Anonymization

If data must be maintained at a detail level, then anonymization of the personal data may be the best solution available. Anonymization is generally achieved through encryption or a one-way hash algorithm. If the organization creates a hash of all the key values of the record along with the personal data contained in the record, it can produce a hash key that allows for dynamic reporting and aggregation on the data set without exposing the personal data.

When using an anonymization strategy of this type, the company will need to hash all of the personal data concatenated as a single field to effectively prevent rainbow table solutions. In cases where surrogate keys are used, hashing them into the string as well introduces elements that are more difficult to derive and will further degrade the effectiveness of rainbow table type attacks. Creating a hash on just one field (credit card number or social security number) is not effective due to the small number of possible combinations, producing a set that is trivial for rainbow tables to solve.

When selecting a hash, organizations need to have “due regard to the state of the art,” so careful consideration should be given to selecting an algorithm that is computationally infeasible to invert (i.e., one with high preimage resistance). Algorithms that meet these criteria include SHA-256, SHA-512, BLAKE2s and BLAKE2b, for example. MD5 can also be used for small input sets (strings of under 50 characters or so), though it may become vulnerable to being broken on very advanced hardware within the next couple of years.
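
The following is a minimal sketch of the concatenate-then-hash approach described above, using SHA-256 from Python’s standard library. The record structure and field names are hypothetical; the key idea is that the surrogate key and all personal data elements are combined into a single long string before hashing, which is what blunts rainbow-table attacks compared with hashing one short field on its own.

```python
# Minimal sketch (hypothetical field names): hash the surrogate key together with
# all personal data elements as one concatenated string, then drop the originals.
import hashlib

def anonymize_record(record, fields, separator="|"):
    """Return a SHA-256 hex digest over the concatenated field values."""
    combined = separator.join(str(record[field]) for field in fields)
    return hashlib.sha256(combined.encode("utf-8")).hexdigest()

record = {
    "customer_key": 48213,             # surrogate key
    "name": "Jane Q. Public",
    "national_id": "999-99-2479",
    "email": "jane@example.com",
    "order_total": 152.40,             # non-personal data retained for analytics
}

# Build the hash key, then remove the personal fields from the analytical record.
record["person_hash"] = anonymize_record(
    record, ["customer_key", "name", "national_id", "email"]
)
for field in ("name", "national_id", "email"):
    del record[field]

# person_hash still lets records belonging to the same individual be grouped
# and counted in reports without exposing the underlying personal data.
print(record)
```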

Closing Notes

GDPR is a complex subject area with wide-ranging impacts across the business environment. In this and our previous post, we addressed only one small part of the landscape (analytical data) and a subset of the GDPR requirements, specifically de-identification or anonymization. For more information on GDPR and other helpful resources, see this post or visit our website.

No single software program, vendor, or strategy will make an organization GDPR compliant on its own. Companies should consult with their legal and information systems teams to verify that whatever measures are taken align with the organization’s overall GDPR strategy.

Authors

Don Loden

Ernie Phelps

Categories: GDPR

Saving Analytical Data Without Violating GDPR – Part 1: Data Minimization and Masking

With an effective date less than four months away, the General Data Protection Regulation (GDPR), known officially as “REGULATION (EU) 2016/679 OF THE EUROPEAN PARLIAMENT AND OF THE COUNCIL of 27 April 2016,” is becoming a pressing concern for companies inside and outside the European Union (EU). Broadly, the regulation specifies that personal data protection of natural persons residing in the EU (aka EU data subjects) is a fundamental right. Personal data has a broad definition in the EU, applying to typical personal identifiers (national number identifier, passport number, etc.) as well as broader categories like location data and online identifiers (IP address, cookies). GDPR goes on to outline severe measures for non-compliance, including fines up to the greater of 20 million euros or 4 percent of total worldwide annual revenue for the preceding financial year.

The GDPR spells out a number of restrictions on the use, storage and removal of, and access to, personal data. This can have potentially significant effects on analytical data (enterprise data warehouses, data marts, data lakes, reporting systems, etc.), as data removal and rectification requests can change historical reporting, introduce data gaps and complicate backup and ETL processes (“ETL” refers to three database functions – extract, transform and load – that are combined into a single tool designed to pull data out of one database and place it into another).

Mitigation Techniques

There are several possible strategies for reducing the impact of GDPR on a company’s analytical data. Since compliance will be required for a large number of companies by May 25, the best methods are those that either utilize processes already in place or can be implemented with minimal effort. Each company will need to look at the strategies below and decide which strategy to apply and to which data elements to apply it. Below, we discuss two of those techniques – minimization and masking.

Minimization

The simplest way to comply with GDPR is to remove any non-essential personal data from analytical systems. The fewer data elements that identify a unique individual, the easier it is to deal with any remaining elements. The viability of this strategy will vary widely, but in many cases companies have taken the approach that it is better to have data and not need it than to need it and not have it. GDPR turns this axiom on its head, but it also provides an opportunity to take a hard look at what the company is storing and what the use case is for keeping it in an increasingly privacy-centric international environment.

Minimization will likely not be a standalone solution. Most companies cannot simply remove all personal data and still use the data for the business purposes it was originally designed to satisfy. However, minimization will reduce the number of data elements that need to be addressed by other strategies and thus should be strongly considered as a first priority.
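
As a simple illustration of the idea (the column names are hypothetical), minimization can be enforced at extract time with an explicit allow-list of fields, so that personal data with no documented analytical use case never reaches the warehouse in the first place.

```python
# Minimal sketch (hypothetical column names): retain only the fields with a
# documented analytical use case; everything else is dropped at extract time.
import pandas as pd

RETAINED_COLUMNS = ["order_id", "order_date", "product_code", "quantity", "net_amount"]

source = pd.DataFrame({
    "order_id":       [1, 2],
    "order_date":     ["2018-05-01", "2018-05-02"],
    "customer_name":  ["Jane Q. Public", "John Doe"],         # personal data, not needed
    "customer_email": ["jane@example.com", "john@example.com"],
    "product_code":   ["A100", "B200"],
    "quantity":       [3, 1],
    "net_amount":     [59.97, 19.99],
})

# Only the allow-listed columns are loaded into the analytical system.
analytical_extract = source[RETAINED_COLUMNS]
print(analytical_extract)
```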

Masking

Masking is replacing some or all of the characters in a data field with data that is not tied to the original string. The replacement values can be random or static, depending on the situation (e.g., 999-99-2479), but should always remove the ability to uniquely identify the record, even when combined with other elements from the company’s records.
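
Below is a minimal sketch of static masking along the lines of the 999-99-2479 example above; the nine-digit format and the decision to retain the last four digits are assumptions made for illustration only.

```python
# Minimal sketch: statically mask the leading groups of an SSN-style identifier,
# as in the 999-99-2479 example. The nine-digit format is an assumption here.
import re

def mask_national_id(value):
    """Replace the first five digits with a static filler, keeping the last four."""
    digits = re.sub(r"\D", "", value)   # strip any separators
    if len(digits) != 9:
        raise ValueError("expected a nine-digit identifier")
    return "999-99-" + digits[-4:]

print(mask_national_id("123-45-2479"))  # -> 999-99-2479
```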

Masking is probably the least desirable solution from a security standpoint, since in many cases it does not sufficiently de-identify the record. If a phone number, for example, has its area or city code digits masked but is associated with a person’s place of residence, one would only need to know the area or city code(s) of the place of residence to unmask the identity of the person. Even if the entity masks some of the non-area digits, the number of possible exchanges may still be low enough that an automated hacking algorithm can uncover the number.

There are some cases when masking can still be useful or can augment other strategies. If the company has transactional data sets that must be retained for statutory, business or other exception cases, masking can help control data access by limiting the data shown based on existing access control mechanisms. In other cases with more possible combinations (credit card number, street address, etc.), masking can be used situationally to satisfy GDPR requirements.

In Part II of this topic, we will review data aggregation and anonymization.

Authors

Don Loden

Ernie Phelps

Categories: GDPR

Want to Increase User Adoption? Try This Simple FRA2MEwork

For as long as Business Intelligence (BI) has existed, organizations have made significant investments in high-performing platforms – only to find that no one will use the solution. Why? For one, end users cannot find information quickly, or at all. Two, the information they do find isn’t relevant. Three, users expect their BI systems to work as effortlessly as popular search engines and social media, which yield results within seconds of a query – and their BI systems often don’t. So users drift away, the system goes stale, and the effort the organization has put into building it goes to waste.

Getting the right information to the right people at the right time is intrinsically valuable to any organization. The ROI is not in how you drive your BI program, but in how effortlessly your organization can achieve a “nirvana-like” state where collaboration really happens.

To alleviate the user adoption issue, the Protiviti Data and Analytics group has devised a simple, six-step process that can be easily put in place to ensure organizations can maximize use of their data. The FRA2ME methodology adds the foundational elements organizations need to ensure that end-user adoption is not lost in the hubbub of building a state-of-the-art BI solution.

The FRA2ME Methodology

FRA2ME focuses on the importance of understanding end-user workflow and use cases to drive relevance, in turn ensuring usefulness and adoption. To understand the methodology, it helps to explain what the FRA2ME acronym represents:

Foundation

  • Creating a BI program that is trustworthy, performs well and is accessible when and where the end user needs it is essential to user adoption. Strong foundational elements, such as governance, speed, security and reliability, create user trust in the data.

Relevancy

  • A BI solution should focus on the business user, the use case and the desired outcome. The final solution put in place must be relevant for the purpose it was built to serve, or it will fall out of use.

Agility

  • We have learned that organizations need to build, adapt and perform outreach to achieve that nirvana state of collaboration. With an eye to the cadence of change, continuous improvement should be delivered incrementally to support end-user engagement. And, the technology required to support an agile BI team must be agile, too.

Advocacy

  • Gaining and promoting advocacy is a very important step in the FRA2ME methodology, accomplished through creative, well-defined efforts. One client internally branded their BI program to gain visibility, in turn generating advocates while growing adoption of the system. The client’s success was solidified by activities such as social media posts, competitions among users, and other promotions that encouraged users to try the new system. The goal: A scenario in which users say, “I can’t imagine my life before this solution” or “I can’t imagine living without this solution.”

Monitoring/Measuring

  • Keeping an eye on user activity and data usage is essential to establishing a positive track record for reliable data, in turn building the trust of business users.

Education

  • Training on new solutions should be situational, contextual and personal, which means using the kinds of training tools users relate to best.

At the end of the day, what organizations need is to delineate between information and insights. Information is, by its very definition, informative, and some information might be useful. But insights are actionable, adaptive and help achieve the desired objectives.

There are many areas where a methodology like FRA2ME can help organizations achieve insight, including:

  • Process optimization (“How will we anticipate and reduce costs?”)
  • Operational efficiency (“How can we increase sales and improve customer satisfaction?”)
  • Financial visibility (“How can we better understand and improve profitability?”)
  • Sales effectiveness (“What steps are needed to increase sales and improve supplier service level agreements?”)
  • Consumer behavior (“How can we engage our customers more effectively? What consumer trends are developing in our industry?”)

One Size Does Not Fit All

What BI solution is right for one organization may not be appropriate for another, and that’s where the FRA2ME methodology is particularly useful, as it helps pinpoint where to focus. One of our clients, for example, used the methodology to cut through the distractions of an upcoming IPO to quickly implement a real-time, interactive and highly intuitive dashboard providing visibility across 50 metrics and their related tolerances, all while launching a new manufacturing facility. The client saw 100 percent effectiveness in its first 90 days of operation at the new plant.

Another client, the fastest growing optical retailer in the U.S., needed to understand how to best segment and target customers while also determining when and where to open new markets. The FRA2ME methodology allowed us to identify how this client could effectively build a trusted data platform and implement customer analysis models that provided greater visibility into customer behavior for targeted sales and marketing campaigns, improved customer retention and optimized site selection for new stores.

A healthy, profitable company is in a constant state of change. And the cadence of change, at least from a BI perspective, is this: Build, adapt, outreach. Build the solution that is best for current needs and resources. Adapt the solution and the organization, as monitoring and measuring will show how well the solution is working and how the organization is responding to it. Outreach, by developing those advocates or “raving fans” who drive user adoption at the grassroots level.

To learn more about FRA2ME, download our white paper.

About Steve Freeman

Steve is a Managing Director in Protiviti’s Data Management and Advanced Analytics practice. He developed the FRA2ME methodology to help clients generate “raving fans” among end users. Steve is also responsible for Protiviti’s SAP Analytics practice and serves on the firm’s Financial Services practice leadership team. He has held numerous sales and executive management roles in the Business Intelligence and Analytics space, including at SAP BusinessObjects, Oracle, Verint and Cognos. A thought leader in analytics and end-user adoption, Steve’s expertise also centers on Customer Insight Analytics, Sales & Financial Forecasting, and Organizational Optimization.

Categories: Data Strategy