Apache Zeppelin Dashboard with Kylin

This demo shows the effective combination of Apache Kylin, an analytical engine on top of a Hadoop cluster, and Apache Zeppelin, a tool for visualizing Big Data sources in an intuitive and simple way.

The data describe the historical academic performance of a large university, with more than 100 million rows.

With this data source we have created a dashboard using some of the charts offered by Zeppelin; you can explore it in detail below.


In the use case we present here, we use Apache Kylin and Zeppelin to allow interactive analysis through a dashboard over data with the typical Big Data characteristics (volume, velocity, variety).

The data cover the last 15 years of a large university. We have designed a multidimensional model to analyze academic performance: about 100 million rows, with metrics such as credits and passed, failed, and enrolled subjects. The analysis of these facts is based on dimensions such as sex, qualification, date, time, and academic year.
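As an illustration, a typical aggregate query against such a star model might look like the following (the table and column names here are hypothetical, not taken from the actual demo schema; Kylin answers this kind of query from its precomputed cube):

```sql
-- Hypothetical star schema: a fact table joined to date and student dimensions.
SELECT d.academic_year,
       s.sex,
       SUM(f.enrolled_credits) AS enrolled_credits,
       SUM(f.approved_credits) AS approved_credits
FROM   fact_performance f
JOIN   dim_date    d ON f.date_id    = d.date_id
JOIN   dim_student s ON f.student_id = s.student_id
GROUP  BY d.academic_year, s.sex
ORDER  BY d.academic_year;
```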

Given such a large volume of data, traditional OLAP systems (ROLAP and MOLAP) do not meet the required performance. For this reason we are testing Apache Kylin, which delivers response times of a few seconds in the worst case for volumes of more than 10 billion rows.


The last component in the chain is a Zeppelin server, used to visualize queries and build custom dashboards. We configure a Kylin interpreter in Zeppelin, and then we are ready to execute queries from Zeppelin.

A Zeppelin dashboard is a notebook that can hold one or more panels. In these panels we write our queries and design the charts we prefer. We can also set the notebook layout by changing the size and order of its panels. Zeppelin makes the notebook available through a URL that we can embed in a web page.


Developed by eBay and later released as an Apache open source project, Kylin is an analytical middleware that supports OLAP analysis of large volumes of information with Big Data characteristics (volume, velocity, variety).

Until Kylin appeared on the market, OLAP technologies were limited to relational databases or, in some cases, to storage optimized for multidimensional data, with serious limitations on Big Data.

Apache Kylin, built on top of several technologies of the Hadoop ecosystem, offers an SQL interface that allows querying data sets for multidimensional analysis, achieving response times of a few seconds over more than 10 billion rows.

There are two key technologies underpinning Kylin: Apache Hive and Apache HBase.
The Data Warehouse is based on a star model stored in Apache Hive.
Using this model and a metadata model definition, Kylin builds a multidimensional MOLAP cube in HBase.
Once the cube is built, users can query it using an SQL-based language through Kylin's JDBC driver.

When Kylin receives an SQL query, it decides whether the query can be resolved using the MOLAP cube in HBase (in milliseconds) or not. In the latter case, Kylin builds its own query and executes it against the Apache Hive storage; this path is rarely taken.

Since Kylin has a JDBC driver, we can connect it to the most popular BI tools, such as Tableau, or to any framework that uses JDBC.
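Besides JDBC, Kylin also exposes a REST API for queries. A minimal sketch in Python follows; the host, project name, and credentials are placeholders, and while the endpoint and payload follow the /kylin/api/query format described in Kylin's documentation, you should verify them against your Kylin version:

```python
import base64
import json
import urllib.request

KYLIN_HOST = "http://localhost:7070"  # placeholder: your Kylin server
PROJECT = "university"                # placeholder: your Kylin project


def build_query_request(sql, project, limit=50000):
    """Build the JSON payload Kylin's /kylin/api/query endpoint expects."""
    return {"sql": sql, "project": project, "limit": limit}


def query_kylin(sql, user="ADMIN", password="KYLIN"):
    """Send a SQL query to Kylin over REST (requires a running server)."""
    payload = json.dumps(build_query_request(sql, PROJECT)).encode("utf-8")
    auth = base64.b64encode(f"{user}:{password}".encode()).decode()
    req = urllib.request.Request(
        KYLIN_HOST + "/kylin/api/query",
        data=payload,
        headers={
            "Content-Type": "application/json",
            "Authorization": "Basic " + auth,
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


if __name__ == "__main__":
    # The actual call only works against a live Kylin instance:
    # print(query_kylin("SELECT COUNT(*) FROM fact_performance"))
    print(build_query_request("SELECT 1", PROJECT))
```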


Apache Zeppelin is a server that offers the creation, visualization, and sharing of dashboards. It is designed to work with Big Data sources such as Kylin, Spark, Hive, Cassandra, HBase, and Elasticsearch.

To connect to different data sources, Zeppelin has several interpreters. After configuring one, we can use its language to run queries, visualize charts, and build simple dashboards by creating a notebook. A notebook is basically a set of panels arranged in our preferred way. Each panel has a query area and a visualization area: in the first we write and test our query, while in the second we choose a chart and display the data.
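For instance, once the Kylin interpreter is configured, a notebook paragraph is just the interpreter binding followed by SQL (the table and column names below are hypothetical):

```
%kylin
SELECT academic_year, SUM(approved_credits) AS approved
FROM fact_performance
GROUP BY academic_year
```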

We can share a dashboard or a panel with other users via URL. Zeppelin also has an API to interact with the server programmatically.


As a Big Data source, we have generated academic data for the last 15 years of a university, with more than a million students.

In the Data Warehouse we have 100 million rows, with metrics such as the sum of credits and the number of passed, failed, and enrolled subjects.

There are also derived metrics, such as the performance rate and the success rate, calculated from the ratio between approved credits and enrolled credits.
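As a sketch of how such a derived metric works (the formula here is the generic approved/enrolled ratio described above, not the demo's exact definition):

```python
def performance_rate(approved_credits, enrolled_credits):
    """Performance rate as the ratio of approved to enrolled credits."""
    if enrolled_credits == 0:
        return 0.0  # avoid division by zero for students with no enrollments
    return approved_credits / enrolled_credits


# A student who passed 48 of 60 enrolled credits has a rate of 0.8.
print(performance_rate(48, 60))  # → 0.8
```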

Big Data R&D&I

At StrateBI we believe in the value of Big Data technologies for data processing and in the possibility of obtaining knowledge from them, with the goal of easing decision-making in any industry. Our team does a great job on Big Data R&D&I.


We keep up to date with news and scientific articles published about Big Data technologies.

This covers emerging technologies that we think have great potential, as well as consolidated ones.

In this way, we detect new features that can improve the behavior or performance of our solutions.


We put into practice the results of the research phase.

We deploy the improvements and validate their application in real use cases, similar to the ones we show in this demo.


Once we have tested the usefulness and robustness of the improvements or new features, we introduce them into our solutions in different projects.

In this way StrateBI guarantees the use of cutting-edge Big Data technologies, previously tested and improved by our Big Data R&D&I.

Used Technologies


Apache Hadoop is the most popular Big Data environment; it allows distributed computing on clusters of low-cost commodity hardware.

The basic, default configuration of a Hadoop cluster includes distributed storage of data (HDFS), a resource manager (YARN, Yet Another Resource Negotiator), and, running on top of it, the MapReduce framework, which performs the distributed processing of data.
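The MapReduce model can be sketched with the classic word count, written here as plain Python functions rather than a real Hadoop job (on a cluster these would run as distributed map and reduce tasks, e.g. via Hadoop Streaming):

```python
from itertools import groupby
from operator import itemgetter


def map_phase(lines):
    """Map: emit a (word, 1) pair for every word in the input."""
    for line in lines:
        for word in line.split():
            yield (word.lower(), 1)


def reduce_phase(pairs):
    """Reduce: sum the counts for each word (pairs must be grouped by key)."""
    for word, group in groupby(sorted(pairs), key=itemgetter(0)):
        yield (word, sum(count for _, count in group))


counts = dict(reduce_phase(map_phase(["big data", "big cluster"])))
print(counts)  # → {'big': 2, 'cluster': 1, 'data': 1}
```

In real Hadoop, the sort-and-group step between map and reduce is performed by the framework's shuffle phase across the cluster.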

Besides these components, there is another set of higher-level tools for storing and processing data, such as Hive or Spark. They offer abstractions that simplify development in this environment.

As mentioned before, Hadoop is the most popular Big Data environment because it offers a wide range of technologies and a very high level of robustness. It is ideal for the new Data Lake concept and subsequent analytics with powerful BI tools.


Flume is a distributed, reliable system for the efficient collection, aggregation, and processing of streaming data.


Kafka is a distributed messaging system that uses the publish-subscribe pattern. It is fault tolerant, horizontally scalable, and ideal for stream data processing.
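The publish-subscribe pattern Kafka is built around can be sketched in a few lines of plain Python (this is only an in-memory illustration of the pattern, not a Kafka client):

```python
from collections import defaultdict


class PubSubBroker:
    """Minimal in-memory publish-subscribe broker (illustration only)."""

    def __init__(self):
        self.subscribers = defaultdict(list)  # topic -> list of callbacks

    def subscribe(self, topic, callback):
        self.subscribers[topic].append(callback)

    def publish(self, topic, message):
        # Every subscriber of the topic receives every published message.
        for callback in self.subscribers[topic]:
            callback(message)


broker = PubSubBroker()
received = []
broker.subscribe("enrollments", received.append)
broker.publish("enrollments", {"student": 42, "credits": 6})
print(received)  # → [{'student': 42, 'credits': 6}]
```

In Kafka, the broker additionally persists each topic as a partitioned, replicated log, which is what provides the fault tolerance and horizontal scalability mentioned above.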

Hortonworks and Cloudera

To ease the management, installation, and maintenance of a Hadoop cluster, we work with the two main Hadoop distributions.

A Hadoop distribution is a software package that includes the basic components of Hadoop plus other technologies, frameworks, and tools, with the possibility of installing them through a web application.

For this reason, at StrateBI we recommend using a Hadoop distribution, Hortonworks and Cloudera being the current market leaders. Our demo therefore runs on both a Cloudera distribution and a Hortonworks distribution.

Spark and Spark Streaming

Spark implements the MapReduce programming paradigm while making intensive use of RAM instead of disk.

Using Spark, we can improve the performance of MapReduce applications that implement iterative algorithms, machine learning (MLlib), statistical analysis (the R module), or real-time analytics (Spark Streaming); all of this is included in our demo.