Organizing a Business Intelligence department in a world of self-service

27 November 2017

What do you get when you ask 10 Cubis customers to fill in a survey and then put them together in one room to analyze the results? Well, apart from some passionate discussion, interesting insights and daring ideas, you also get the following results.

Every organization struggles with more or less the same challenges when it comes to business analytics: business departments working in isolation, requests for more self-service, choosing the optimal reporting tools, or the emerging need for data scientists. Whether you manage a BI/Analytics organization or are a business stakeholder, these topics should sound very familiar. And although we really would have wanted one, there is no straightforward answer or solution to all of these questions; it all depends on your company's culture, structure and politics.

A lot of the discussions start when asking what kind of data BI teams are offering. Only very few see their team as real insights providers. Currently, the focus lies on making sure that data and content are available for business users. It should also be no surprise that creating analytical content is often performed by business or key users (with or without the help of a supporting IT backbone), because they are the ones who take the content and try to build business cases out of the available data. Direct involvement of business departments in BI teams comes in many forms. Some give business users a reporting playground to come up with new ideas or reporting use cases. This makes business the driver of the analytics; they act as evangelists who spread the BI word throughout the organization. Other companies take this even further and have business involved in first-line support of their own BI reports. The common thread here: having business own the BI reports is key for adoption.

Giving power to business users prevents them from starting their own shadow IT department. The less shadow IT, the easier it should be to manage one version of the truth in a central data warehouse. And even though the majority of companies agree with this statement, more than half of the BI teams do not feel in control of their own data. On top of that, only 10% actually maintain a centralized KPI library to structure their data definitions. We all agree that scoping BI projects is challenging and that requirements often change throughout a project. So where is this gap coming from? Is it the complexity of the models or the ever-changing requirements that make managing this single version of the truth such a challenge? One thing is striking: the need for a Chief Data Officer, as well as the definition of that role, is a topic on which no consensus can be reached. Some don't see the value of this person being on executive level, mostly because of conflicting responsibilities with either the CFO or the CIO. On the other hand, having such a role in the organization might be very beneficial for enabling BI-related topics at the top level of the organization, like data monetization.

Selling your company's data is not the first thing that comes to mind. However, we think that data monetization is a topic that can be beneficial for a lot of companies. Selling data can come in many forms, from highly aggregated reports to detailed row-level files delivered through web APIs, and the technical challenges of making content externally available are negligible. Many examples already exist, from public transport data to the use of agricultural fertilizers. Yet only a fraction of companies are doing this. And since the market is not yet mature, it's a matter of testing the water regarding what price or value to put on your content. But the initial question remains: what kind of data would you be able to sell?
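The two ends of that spectrum, a detailed row-level file versus a highly aggregated report, can be sketched in a few lines. This is a minimal illustration only; the passenger-count data and field names are invented for the example:

```python
import json


# Invented sample data: daily passenger counts per public-transport stop.
rows = [
    {"stop": "Central", "date": "2017-11-01", "passengers": 1200},
    {"stop": "Central", "date": "2017-11-02", "passengers": 1350},
    {"stop": "Harbour", "date": "2017-11-01", "passengers": 400},
]


def row_level_export(rows):
    """Detailed data product: every record, e.g. sold as a file or paged API."""
    return json.dumps(rows)


def aggregated_report(rows):
    """Aggregated data product: one summary per stop, safe to share externally."""
    per_stop = {}
    for r in rows:
        per_stop.setdefault(r["stop"], []).append(r["passengers"])
    return {
        stop: {"avg_passengers": sum(counts) / len(counts), "days": len(counts)}
        for stop, counts in per_stop.items()
    }
```

The same source data yields two sellable products at different price points and different privacy/competitive-risk profiles, which is exactly the trade-off to test while the market matures.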

If there are data scientists in the company, chances are they sit somewhere other than the IT department. Data science is an area driven by business cases that are either unclear or still need to be defined. Questions like "can we predict asset failure?" typically start from the maintenance department trying to improve its processes. It's the availability of sufficient budget and business knowledge that draws data scientists towards business departments. Only in rare cases does IT proactively try to convince business with possible data science use cases.

This article started by stating that there is not just one version of the truth or one go-to strategy when it comes to organizing a BI department in a world of self-service. However, this doesn't mean that listening to other companies is a waste of time. On the contrary: at Cubis we organize round-table discussions around specific topics so that our clients can interact and learn from each other; the only thing we do is moderate. For more information about these topics, contact info@cubis.be.

  • CubiShare
  • BI Organization
  • Self-Service BI