I come across many situations where customers run Salesforce and ask me whether they can combine their Salesforce data with other corporate data. A recent request was to combine SAP BW CO-PA data (profitability analysis) with Salesforce data on clients with payment risks. Today this can easily be done using the data blending techniques of the SAP BusinessObjects BI Suite. More specifically, I used the SAP Lumira Salesforce connector to connect to the customer's Salesforce data and blend it with their SAP BW data.
But customers these days require more than just combining their Salesforce data with other data: they want the full SAP BusinessObjects BI Suite running embedded in their Salesforce environment. With Protiviti SalesForce Connect for SAP Analytics, this can be done.
Read the rest of Iver van de Zand’s article here!
In the traditional EIM process, there are typically separate ETL, replication, and Smart Data Access (SDA) processes and functions. These traditional processes introduce latency into the data warehouse, along with other challenges to getting data into one location seamlessly. Today's world moves faster, and the need for clean, real-time data is imperative.
Chances are, if you’re reading this blog, you’re no stranger to the power of SAP HANA. But to recap: the SAP HANA platform removes the burden of maintaining separate legacy systems and siloed data, and it holds capabilities to transform and cleanse data in real time from multiple sources.
SAP HANA also has features that enable you to both manage data and improve data quality, offering real-time transformation possibilities that were unthinkable until recently: you can transform data in real time with Smart Data Integration (SDI), or conform and cleanse data with Smart Data Quality (SDQ).
To download full PDF and Continue Reading…
Don Loden is an information management and information governance professional with experience in multiple verticals. He is an SAP-certified applications associate on SAP EIM products. He has more than 15 years of information technology experience in the following areas: ETL architecture, development, and tuning; logical and physical data modeling; and mentoring on data warehouse, data quality, information governance, and ETL concepts. Don speaks globally and mentors on information management, governance, and quality. He authored the book SAP Information Steward: Monitoring Data in Real-Time and is the co-author of two other books, Implementing SAP HANA and Creating SAP HANA Information Views. Don has also authored numerous articles for publications such as SAPinsider magazine, TechTarget, and Information Management magazine.
The ability to attract and retain a loyal employee base, and to understand the root causes of employee disengagement and disloyalty, is a key strategic objective for every organization, big or small. If you want to improve employee productivity and/or decrease the cost of attracting and retaining employees, you need to move along the analytics maturity curve and start leveraging people analytics.
What is People Analytics?
Put simply, people analytics is a predictive, data-driven approach to managing people at work: analytics centered on your employees. It is used to address people-related issues such as talent acquisition, performance evaluation, leadership positioning, hiring and promotion, job and team design, and employee compensation.
Increasing Employee Loyalty Using People Analytics
People analytics helps you merge employee data, company data, and market data to predict and interpret the behaviors of valuable employees, as well as to surface operations-level insights, giving you a competitive edge in developing your retention strategies.
To download full PDF and Continue Reading...
John Harris, Senior Manager of Predictive Modeling and Advanced Analytics, has over 16 years of industry experience applying strategic thinking and advanced analytical skills to optimize resources, improve processes, and develop quantitative models that turn data into decision-aiding information for all levels of leadership. Airline and energy utility employers have attempted to patent his deliverables related to predictive and optimization modeling.
What are Repository Roles?
Repository roles are roles that are created as development artifacts within the SAP HANA system. They start as design-time objects and become runtime objects upon activation. They can be utilized within a security model just like database roles. However, they offer numerous advantages over catalog or database roles.
Why Should I Use Repository Roles?
- The definition of the repository role is stored in the SAP HANA system repository. Using package security, we can control access to individual repository roles for a diverse group of security administrators. For example, super administrators can be given access to change the definition of all repository roles while a department level administrator can be given limited access to roles that affect only his/her users.
- They are owned and granted by the _SYS_REPO system user, which cannot be deleted from your system. This prevents granted privileges from being dropped when a grantor's user account is deleted, unlike database roles, which can be removed from the system when their creator's database user account is deleted.
- Repository roles can be transported from one HANA system to another using HANA Application Lifecycle Management (HALM) or using the Delivery Units (DU) Export and Import process.
- Through auditing, we can track the true grantor of the repository roles.
- When a user is copied in SAP HANA Studio, all of the repository roles are also copied.
- Because they are development artifacts, the standard SAP HANA Version management tools are supported.
- They are granted and revoked via special stored procedures; users only need EXECUTE access to these procedures to grant and revoke rights. Database roles, by contrast, can only be revoked by the original grantor.
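As a sketch of that last point, repository roles are granted and revoked through stored procedures owned by _SYS_REPO. The procedure names below are the standard ones; the role package path and user name are hypothetical:

```sql
-- Grant an activated repository role to a user.
-- 'acme.security.roles::ReportingUser' and DEPT_USER01 are hypothetical names.
CALL _SYS_REPO.GRANT_ACTIVATED_ROLE(
  'acme.security.roles::ReportingUser', 'DEPT_USER01');

-- Revoke it again. Any user with EXECUTE on this procedure can do so --
-- unlike a database role, which only the original grantor could revoke.
CALL _SYS_REPO.REVOKE_ACTIVATED_ROLE(
  'acme.security.roles::ReportingUser', 'DEPT_USER01');
```

Because the grant is always executed as _SYS_REPO, the auditing point above also follows: the audit log records which user called the procedure, while the grantor of record remains the undeletable system account.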
To download full PDF and Continue Reading…
Jonathan Haun is a Director in the Data and Analytics practice at Protiviti. He has over 15 years of BI and IT experience and currently focuses exclusively on business intelligence tools, technologies, and EIM processes. He has helped hundreds of companies implement BI tools and strategies over the past 10 years.
End-to-End Security for Your SAP Environment
From designing and configuring your SAP application security to implementing SAP BI and HANA security, our Webinar Series gives you tips, tricks and best practices from our top SAP experts.
Join us on Thursdays at 2:00 pm Eastern, beginning April 21. This series gives you real-life examples of how to properly secure and manage your SAP landscape.
Register for one or all three!
Securing Your SAP Application
Speaker: Michele Makuch
Thursday, April 21 @ 11:00 am Pacific | 1:00 pm Central | 2:00 pm Eastern
Designing and maintaining a best-in-class SAP security environment is a challenging endeavor, given the complexity of the parameters and general IT controls that must be properly configured and monitored.
Join Michele Makuch, SAP security expert, to understand the key steps to design and maintain an optimized SAP security environment, helping you avoid the need for a redesign later.
Key Learning Points
- Improve your SAP role design strategy
- Learn key general IT controls to implement and monitor
- See how to establish an adequate SoD management framework
- Understand how to institute a comprehensive SAP security program, including cyber-security risks
Implementing SAP BI Security Best Practices
Speaker: Bob Zenker
Thursday, April 28 @ 11:00 am Pacific | 1:00 pm Central | 2:00 pm Eastern
BI has become pervasive for decision-making at all levels. With that pervasiveness come risks: a centralized architecture houses potentially sensitive data in one place, where it can be accessed by many people.
Join Bob Zenker, BI expert, to learn how to properly implement SAP BI security based on practical experience and real-world case studies. This session will provide detailed tips and tricks on implementing security for ad hoc users and discuss the challenges of securing data and BI content.
Key Learning Points
- Learn best practices for integrating third-party authentication such as SAP, Windows AD, and LDAP
- Gain an overview on data security in Universes, Crystal Reports and SSO to databases
- Review deploying mobile access to BI content utilizing SSL or VPN
- Examine mobile application security setup on devices within the BI 4.1 architecture
Securing Your SAP HANA Environment
Speaker: Jonathan Haun
Thursday, May 5 @ 11:00 am Pacific | 1:00 pm Central | 2:00 pm Eastern
Although all systems have to deal with authentication, authorization, and user provisioning, SAP HANA deviates from typical database platforms in the amount of security configuration that is done inside the database itself.
Join Jonathan Haun, co-author of “Implementing SAP HANA”, for expert recommendations on configuring SAP HANA and setting up the proper security models for your SAP HANA system.
Key Learning Points
- Get best practice strategies to properly provision users, manage repository roles, and implement a manageable security model in SAP HANA
- Gain an overview of the 4 different types of privileges in SAP HANA
- Explore how third-party authentication can integrate with SAP HANA
- Find out how to provide selective access to data using analytic privileges in SAP HANA
It has been some time since I last posted a blog about SAP BusinessObjects. In part, that was due to a lack of major changes in the SAP BusinessObjects platform: a few Support Packs were released, but only a handful of changes or enhancements caught my attention. I am also excited to now be part of the Protiviti family. In November 2015, the assets of Decision First Technologies were acquired by Protiviti, and I am now acting as a Director within the Protiviti Data and Analytics practice. I look forward to all the benefits we can now offer our customers, though the acquisition transition required some of my time. Now let’s get to the good parts…
With the release of SAP BusinessObjects 4.2 SP2, SAP has introduced a treasure trove of new enhancements. It contains a proverbial wish list of…
In Part 1 of this blog series, I discussed Information Views and provided some tips on how to use them more effectively. In this part, I will be discussing two design strategies and the pros and cons of each.
We’ll look at two methodologies:
- A “real-time centric” design will excel at operational reporting, but performance may be a challenge with increasing data volumes and significant transformation.
- A “storage centric” design will scale well and offer exceptional performance, but true real-time reporting becomes harder.
Real-time Centric Design
In terms of SAP HANA, this design style uses the natural OLTP tables as the basis for creating real-time OLAP multi-dimensional models. When these tables are the source, all join, calculation, and transformation logic has to be implemented in the SAP HANA information views, and those operations are performed at runtime each time a query is executed. Generally, this is a very resource-expensive way to design a solution, because the work is repeated over and over; it also tends to consume a lot of memory on the SAP HANA system, and in some cases more memory is required to compute the model than to store the data. This approach is sometimes called ELT (Extract, Load, Transform), and it is the required design technique to achieve true zero-latency real-time analytics.
- Products like SAP HANA Live ERP reporting utilize this concept heavily
- Effective for real-time as you can report directly against normalized source tables or replicated source tables in an SAP HANA ‘Sidecar’ format
- No physical ETL is required, but all transformations and processing occur at each query’s runtime
- Great for moderately complex operational reporting
- Excels for reporting scenarios where real-time is the highest concern and you are facing only one transaction table joined to several descriptive tables
- Use with care as the Calculation Engine is the slowest engine in SAP HANA
- Extreme table sizes + heavy transformations = slow performance
- May not scale well
- More difficult to do true business intelligence reporting using this design paradigm
- Many BI concepts require components and concepts that are difficult to model using only Calculation Views. This is due mostly to the number of projections that are required to manipulate multiple datasets for complex comparison activities
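The core idea can be sketched in plain SQL (table and column names below are hypothetical, and a real implementation would use SAP HANA information views rather than a plain view): every join and every calculated measure is evaluated at query time, so each query pays the full transformation cost.

```sql
-- Hypothetical OLTP tables: SALES_ITEM (transactions), with PRODUCT and
-- CUSTOMER as descriptive tables. Joins and calculations run per query.
CREATE VIEW RT_SALES_ANALYSIS AS
SELECT
  c.REGION,
  p.CATEGORY,
  SUM(s.QUANTITY * s.UNIT_PRICE)              AS GROSS_REVENUE,
  SUM(s.QUANTITY * s.UNIT_PRICE - s.DISCOUNT) AS NET_REVENUE
FROM SALES_ITEM s
JOIN PRODUCT  p ON p.PRODUCT_ID  = s.PRODUCT_ID
JOIN CUSTOMER c ON c.CUSTOMER_ID = s.CUSTOMER_ID
GROUP BY c.REGION, p.CATEGORY;
```

Because nothing is pre-computed, results are always current, which is exactly the zero-latency benefit; the cost is that runtime work grows with data volume and transformation complexity.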
Storage Centric Design
In terms of SAP HANA, this design style converts the natural OLTP tables into conformed dimension and fact tables. To produce these new tables, an ETL process is executed, and most of the calculation and transformation logic is performed in that process. The SAP HANA OLAP models (or information views) are then responsible only for aggregating and joining the dimensions to the facts. Because most of the calculation and transformation logic is computed only a few times each day by the ETL process, the SAP HANA models are far less expensive to execute; generally, they use considerably less memory and CPU power. Because the OLTP tables have to be copied into new structures, we call this a storage-heavy model: it re-persists some of the more complex transformations into new tables.
- Exceptional performance
- Scales incredibly well
- The majority of processing is in the OLAP engine; this is the fastest engine in SAP HANA
- ETL/ELT is required for physical transformations
- Data is physically modeled into a star schema design using ETL/ELT processing
- Analytic Views are used extensively
- Real-time is more difficult and complex but still possible with SAP HANA Smart Data Integration
- Multiple sources will likely need a latency based ETL approach. This will render some sources to be batch based in nature. Many customers just run batches every 5 – 15 minutes if near real-time is needed
For pure performance and scalability, stay in the OLAP engine. A storage heavy design paradigm ensures this is maintained and scaling is very linear.
ETL/ELT processing and physically moving data are required for a storage-heavy design paradigm. This is technically a drawback, but keep in mind that the ETL/ELT becomes simpler, since you are really only transforming data into basic structures; any transformation not required to support the basic star schema can be left to SAP HANA.
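The storage-centric pattern can be sketched in plain SQL as a minimal star schema (all names are hypothetical): a batch ETL process populates the fact and dimension tables, so the query-time model only joins and aggregates.

```sql
-- Hypothetical star schema populated by a batch ETL process.
CREATE COLUMN TABLE DIM_CUSTOMER (
  CUSTOMER_KEY INTEGER PRIMARY KEY,
  REGION       NVARCHAR(40)
);

CREATE COLUMN TABLE FACT_SALES (
  CUSTOMER_KEY INTEGER,       -- surrogate key resolved during ETL
  NET_REVENUE  DECIMAL(15,2)  -- already calculated by the ETL process
);

-- At query time the model only joins and aggregates, keeping the work
-- in SAP HANA's fast OLAP (aggregation) engine.
SELECT d.REGION, SUM(f.NET_REVENUE) AS NET_REVENUE
FROM FACT_SALES f
JOIN DIM_CUSTOMER d ON d.CUSTOMER_KEY = f.CUSTOMER_KEY
GROUP BY d.REGION;
```

The trade-off relative to the real-time-centric sketch is latency: NET_REVENUE is only as fresh as the last ETL run, which is why near-real-time variants fall back to frequent small batches or SDI-based replication.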
You can read more about modeling techniques in my article “Mastering SAP HANA Data Modeling for Maximum Performance.”
Don Loden is an information management and information governance professional with experience in multiple verticals. He is an SAP-certified applications associate on SAP EIM products. He has more than 15 years of information technology experience in the following areas: ETL architecture, development, and tuning; logical and physical data modeling; and mentoring on data warehouse, data quality, information governance, and ETL concepts. Don speaks globally and mentors on information management, governance, and quality. He authored the book SAP Information Steward: Monitoring Data in Real-Time and is the co-author of two other books, Implementing SAP HANA and Creating SAP HANA Information Views. Don has also authored numerous articles for publications such as SAPinsider magazine, TechTarget, and Information Management magazine.