Hi network, since September I have been working as an SAP Analytics consultant for Cubis. I first had two months to hone my knowledge and skills through trainings and evening sessions. After that, my fellow junior colleagues and I were deployed at different clients.
The client I’m currently working for made the switch to SAP on HANA some time ago, even before I joined the project.
We are always looking for ways to optimize flows and reduce query runtimes. The client therefore requested an audit by SAP BPPO (Business Process Performance Optimization) on one of their systems, to ensure that long-running queries would be executed more efficiently.
A couple of weeks ago I was asked to investigate the analysis report produced by SAP BPPO. The documentation provides insight into the software and hardware specifics of the system and the database we work on. More importantly for me, it listed the queries and process chains that took a long time to execute, together with possible solutions. That was something for me, as a consultant, to take a closer look at.
After reading through the document a first time, I noticed that the queries were all based on six InfoCubes that were not HANA-optimized. As a result, query processing was not pushed down to the HANA database, which contributed to the long runtimes. So that’s where I started.
With transaction code RSMIGRHANADB I was able to enter the technical names of the cubes and convert them to HANA-optimized cubes.
I started testing on the development system and saw no issues at all. Because this conversion is non-transportable, I repeated exactly the same steps on the test system. There I ran into errors in one of the cubes.
To understand this error, I used transaction code RSRV to run an analysis of the cube.
By inspecting the specific table from the error text via SE16, we saw that, for an unknown reason, the zero record was indeed missing from the dimension table. We resolved this by going into the debugger and inserting the zero record manually. That fixed the problem.
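For readers who run into the same RSRV error, the sketch below shows in ABAP what the manual fix amounts to. The dimension table name /BIC/DZSALES1 is hypothetical; take the actual table name from your RSRV error text and verify its structure in SE11 first. We performed this step through the debugger rather than a report, so treat this purely as an illustration.

```abap
* Hedged sketch: re-insert the missing initial ("zero") record that
* RSRV expects in a dimension table.
* /BIC/DZSALES1 is a hypothetical table name - use the one from
* your own RSRV error message.
DATA ls_dim TYPE /bic/dzsales1.

CLEAR ls_dim.                 " all SID fields initial (= 0)
ls_dim-dimid = 0.             " key of the initial record
INSERT /bic/dzsales1 FROM ls_dim.
IF sy-subrc = 0.
  COMMIT WORK.
ENDIF.
```

Whatever route you take, check the result by rerunning the RSRV check for the cube afterwards.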
Now that all cubes were HANA-optimized, we could take a look at the queries. For these it was nothing more than opening the query monitor (transaction code RSRT), entering the technical name of the query, going to Properties and checking the “Use Selection of Structured Elements” checkbox.
At least that is what we thought at that moment…
It was a bit more than that: because the cubes became HANA-optimized, their internal table structure (the star schema) changed. To be more precise, a HANA-optimized cube no longer needs its projection/database view, so those internal tables no longer exist. And we had some old custom ABAP code in a user exit that still referenced them. The next step was therefore locating and changing that custom ABAP to use the new table structure of the cubes.
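To give an idea of the kind of change involved, here is a hedged before/after ABAP sketch. All names in it are hypothetical (cube ZSALES, the old view /BIC/VZSALESF, the fact table /BIC/FZSALES, and the field names); which tables disappear and which remain depends on your BW release, so verify the post-migration table structure in SE11 before touching any user exit.

```abap
* Before (classic star schema): the user exit read through a view
* of the cube that no longer exists after HANA optimization.
* SELECT SUM( amount ) FROM /bic/vzsalesf INTO lv_amount
*   WHERE key_zsales1 = lv_dimid.

* After: read the remaining fact table directly. HANA-optimized cubes
* store SIDs in the fact table, so the detour over dimension tables
* is no longer needed (all names below are illustrative).
SELECT SUM( amount )
  FROM /bic/fzsales
  INTO @DATA(lv_amount)
  WHERE sid_zcustomer = @lv_sid_customer.
```

In our case the fix was exactly this pattern: locate every custom SELECT against the old tables and redirect it to the tables that still exist after the conversion.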
Once that was done, every error was resolved and we could execute the new optimized queries without a problem.
Concerning the process chain optimization, for one chain they advised raising the number of parallel processes from three to five. This depends, however, on the capacity of your machine and the number of tasks currently running.
For the other chain they advised switching to ODP and implementing an SAP Note, but we haven’t done this yet. Once we implement it, I might follow up with another post.
If you are interested in future articles or have some thoughts to share on this one, connect with me or follow Cubis.