Twitter

Dashboard panels: Retweets, Users, Relevance, Most relevant Tweet, Tweets count, Tweets by Geolocation, Last 10 Tweets


A user or the API sends filter words over a WebSocket connection; the server then establishes a connection to the client by calling the "Stream Holder" component, which manages the requested connections.

The "Stream Holder" component asks for credentials by calling the "Credential Pool" component. Using the returned credentials, the system opens a connection to the public Twitter API and sends a query with the specified filter terms; as a result, the "Message Receiver" receives real-time tweets related to the search.
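The credential handling described above can be sketched as a simple round-robin pool. All names here (`CredentialPool`, `acquire`, the key fields) are hypothetical illustrations, not the actual component's code:

```python
import itertools
import threading

class CredentialPool:
    """Hands out Twitter API credentials in round-robin order (illustrative sketch)."""

    def __init__(self, credentials):
        self._cycle = itertools.cycle(credentials)
        self._lock = threading.Lock()  # several stream holders may ask concurrently

    def acquire(self):
        """Return the next credential set; thread-safe round-robin."""
        with self._lock:
            return next(self._cycle)

pool = CredentialPool([
    {"api_key": "key-1", "api_secret": "secret-1"},
    {"api_key": "key-2", "api_secret": "secret-2"},
])
first = pool.acquire()   # first credential set
second = pool.acquire()  # second credential set, then it wraps around
```

Rotating credentials this way spreads requests across accounts, which matters when a public API enforces per-credential rate limits.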

The "Message Receiver" is a subject inside the observer pattern: when the Twitter connection receives a tuple, it notifies the "Message Receiver" component, which, to avoid blocking the thread, uses a message queue to communicate with the "Server Socket": it puts the messages into the queue and the "Server Socket" pops them from there.

This reduces the blocking time to O(1), the computational complexity of inserting into a queue.
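The decoupling between the "Message Receiver" and the "Server Socket" can be sketched with Python's standard `queue` module; the function and variable names below are illustrative, not the real component code:

```python
import queue
import threading

messages = queue.Queue()  # thread-safe FIFO; put() is O(1)

def message_receiver(tweets):
    """Observer callback: enqueue each incoming tweet without blocking on I/O."""
    for tweet in tweets:
        messages.put(tweet)  # constant-time insert, so the receiver thread never waits

received = []

def server_socket_worker():
    """Pops tweets from the queue and forwards them to the client."""
    while True:
        tweet = messages.get()
        if tweet is None:  # sentinel: stream closed
            break
        received.append(tweet)  # stand-in for sending over the WebSocket

worker = threading.Thread(target=server_socket_worker)
worker.start()
message_receiver(["tweet-1", "tweet-2"])
messages.put(None)
worker.join()
```

The queue is the only shared state, so the receiver and the socket worker never block each other beyond the constant-time enqueue/dequeue operations.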

This solution scales to a larger number of nodes by using a Kafka cluster, as we show in our Kafka demo.

R&D&I in Big Data

At StrateBI we believe in the value of Big Data technologies for data processing and in the possibility of extracting knowledge from data, with the goal of making decision-making easier in any industry. Our team does a great job of R&D&I in Big Data.


We keep up to date with news and scientific articles published about Big Data technologies.

This covers emerging technologies that we think have great potential, as well as consolidated ones.

In this way, we detect new features that can improve the behavior or performance of our solutions.


We put the results of the research phase into practice.

We deploy the improvements and validate their application in real use cases, similar to the ones we show in this demo.


Once we have tested the usefulness and robustness of the improvements or new features, we incorporate them into our solutions across different projects.

In this way, StrateBI guarantees the use of cutting-edge Big Data technologies, previously tested and improved by our R&D&I in Big Data.

Used Technologies


Apache Hadoop is the most popular Big Data environment; it enables distributed computing on clusters of low-cost commodity hardware.

The basic, default configuration of a Hadoop cluster includes distributed data storage (HDFS), a resource manager (YARN, Yet Another Resource Negotiator), and, running on top of it, the MapReduce framework, which performs the distributed processing of data.
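As an illustration of the MapReduce model (a pure-Python sketch, not Hadoop's actual Java API), the classic word count can be expressed as a map phase emitting (word, 1) pairs, a shuffle grouping by key, and a reduce phase summing per key:

```python
from collections import defaultdict

def map_phase(lines):
    """Map: emit a (word, 1) pair for every word in the input."""
    for line in lines:
        for word in line.split():
            yield word, 1

def shuffle(pairs):
    """Shuffle: group all values by key, as Hadoop does between map and reduce."""
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return grouped

def reduce_phase(grouped):
    """Reduce: sum the counts for each word."""
    return {word: sum(counts) for word, counts in grouped.items()}

lines = ["big data big cluster", "data lake"]
counts = reduce_phase(shuffle(map_phase(lines)))
# counts == {"big": 2, "data": 2, "cluster": 1, "lake": 1}
```

In a real cluster, each phase runs in parallel across nodes and the shuffle moves data over the network; the three-step structure is the same.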

Besides these components, there is another set of higher-level tools for storing and processing data, such as Hive or Spark. They offer abstractions that simplify development for this environment.

As mentioned before, Hadoop is the most popular Big Data environment because it offers a wide range of technologies and a very high level of robustness. It is ideal for the new Data Lake concept and subsequent analytics with powerful BI tools.


Flume is a distributed, reliable system for the efficient collection, aggregation, and processing of streaming data.


Kafka is a distributed messaging system that uses the publish-subscribe pattern; it is fault tolerant, horizontally scalable, and ideal for stream data processing.
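The publish-subscribe pattern Kafka is built on can be illustrated with a minimal in-memory broker. This is a sketch of the pattern only, not Kafka's client API, and all names (`Broker`, `subscribe`, `publish`) are hypothetical:

```python
from collections import defaultdict

class Broker:
    """Minimal in-memory publish-subscribe broker (illustration of the pattern)."""

    def __init__(self):
        self._subscribers = defaultdict(list)  # topic -> list of callbacks

    def subscribe(self, topic, callback):
        """Register a consumer callback for a topic."""
        self._subscribers[topic].append(callback)

    def publish(self, topic, message):
        """Deliver a message to every subscriber of the topic."""
        for callback in self._subscribers[topic]:
            callback(message)

broker = Broker()
seen = []
broker.subscribe("tweets", seen.append)              # consumer 1: store as-is
broker.subscribe("tweets", lambda m: seen.append(m.upper()))  # consumer 2: transform
broker.publish("tweets", "hello")
# seen == ["hello", "HELLO"]
```

The key property, which Kafka adds durability, partitioning, and replication on top of, is that producers and consumers never reference each other directly: they only share a topic name.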

Hortonworks & Cloudera

To make the management, installation, and maintenance of a Hadoop cluster easier, we work with the two main Hadoop distributions.

A Hadoop distribution is a software package that includes the basic Hadoop components plus other technologies, frameworks, and tools, with the option of installing everything through a web application.

For this reason, at StrateBI we recommend using a Hadoop distribution. Hortonworks and Cloudera are currently the leading distributions on the market, so our demo runs on both a Cloudera distribution and a Hortonworks distribution.

Spark & Spark Streaming

Spark implements the MapReduce programming paradigm, making intensive use of RAM instead of disk.

Using Spark, we can improve the performance of MapReduce applications by implementing iterative algorithms, machine learning (MLlib), statistical analysis (the R module), or real-time analytics (Spark Streaming), all of which is included in our demo.
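Spark's programming style, chained transformations over a dataset held in memory, can be sketched in plain Python. This `LocalRDD` class is only an in-memory analogy of the idea; real PySpark code would use a `SparkContext` and distribute the data across a cluster:

```python
from functools import reduce

class LocalRDD:
    """In-memory stand-in for an RDD: transformations chain, data stays in RAM."""

    def __init__(self, data):
        self._data = list(data)

    def map(self, fn):
        return LocalRDD(fn(x) for x in self._data)

    def filter(self, pred):
        return LocalRDD(x for x in self._data if pred(x))

    def reduce(self, fn):
        return reduce(fn, self._data)

# Sum of the squares of the even numbers in 1..10, Spark-style:
total = (LocalRDD(range(1, 11))
         .filter(lambda x: x % 2 == 0)   # keep 2, 4, 6, 8, 10
         .map(lambda x: x * x)           # square each one
         .reduce(lambda a, b: a + b))    # 4 + 16 + 36 + 64 + 100 = 220
```

Because each intermediate result lives in memory rather than being written to disk between stages, iterative algorithms that revisit the same dataset, common in machine learning, run much faster than in classic MapReduce.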