Tugdual Grall
Multi-Nodes Redis Cluster With Docker
Read this article on my new blog
As part of my onboarding/training at Redis Labs I continue to play with the product, and today I decided to install a local three-node cluster of Redis Enterprise Server (RS) and show how easy it is to move from a single-node/single-shard database to a multi-node, highly available one.
Once your cluster is up & running, you will kill some containers to see how…
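To make the failover test concrete, here is a minimal Java sketch (my own, not code from the post) of a client loop you could leave running while you kill containers: it keeps writing and reports failures. The Jedis client and the database endpoint on localhost:12000 are assumptions; use whatever endpoint your RS database exposes.

```java
// Hypothetical companion to the failover test: keep writing while nodes die.
// Assumes the Jedis client and an RS database endpoint on localhost:12000.
import redis.clients.jedis.Jedis;

public class FailoverProbe {
    public static void main(String[] args) throws InterruptedException {
        long i = 0;
        while (true) {
            // Open a fresh connection each time so the loop survives node restarts.
            try (Jedis jedis = new Jedis("localhost", 12000)) {
                jedis.set("counter", String.valueOf(++i));
                System.out.println("write " + i + " ok");
            } catch (Exception e) {
                System.out.println("write failed, retrying: " + e.getMessage());
            }
            Thread.sleep(500);
        }
    }
}
```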
Getting Started With Redis Streams & Java
Read this article on my new blog
As you may have seen, I joined Redis Labs a month ago; one of the first tasks as a new hire is to learn more about Redis. So I learned, and I am still learning.
This is when I discovered Redis Streams. I am a big fan of streaming-based applications, so it is natural that I start with a small blog post explaining how to use Redis Streams and Java.
Go to…
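To give a flavor of the API before you read the full post, here is a minimal sketch of XADD and XRANGE from Java, assuming the Jedis 3.x client and a Redis 5+ server on localhost; the stream name and fields are illustrative.

```java
// A minimal Redis Streams sketch, assuming Jedis 3.x and Redis 5+ on localhost.
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import redis.clients.jedis.Jedis;
import redis.clients.jedis.StreamEntry;
import redis.clients.jedis.StreamEntryID;

public class StreamsDemo {
    public static void main(String[] args) {
        try (Jedis jedis = new Jedis("localhost", 6379)) {
            // XADD: append an entry, letting Redis generate the entry ID.
            Map<String, String> fields = new HashMap<>();
            fields.put("sensor", "thermometer-1");
            fields.put("value", "21.5");
            StreamEntryID id = jedis.xadd("weather", StreamEntryID.NEW_ENTRY, fields);
            System.out.println("Added entry " + id);

            // XRANGE: read back up to 10 entries from the start of the stream
            // (null start/end mean the minimum and maximum IDs).
            List<StreamEntry> entries = jedis.xrange("weather", null, null, 10);
            entries.forEach(e -> System.out.println(e.getID() + " -> " + e.getFields()));
        }
    }
}
```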
Getting started with MapR-DB Table Replication
Read & comment this article on my new blog
Introduction
MapR-DB Table Replication allows data to be replicated to another table that can be on the same cluster or in another cluster. This is different from the automatic intra-cluster replication that copies the data to different physical nodes for high availability and to prevent data loss.
This tutorial focuses on the…
Getting Started With Kafka REST Proxy for MapR Streams
Read & comment this article on my new blog
Introduction
MapR Ecosystem Package 2.0 (MEP) is coming with some new features related to MapR Streams:
Kafka REST Proxy for MapR Streams provides a RESTful interface to MapR Streams and Kafka clusters to consume and produce messages and to perform administrative operations.
Kafka Connect for MapR Streams is a utility for streaming data…
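As a taste of what the REST interface looks like, here is a hedged Java sketch that posts one JSON message through a Kafka REST proxy. The proxy address (localhost:8082), the topic name, and the v2 embedded-JSON content type are assumptions; check the proxy documentation shipped with your MEP version for the exact endpoint and media type.

```java
// Hypothetical example: produce one JSON message via a Kafka REST proxy.
// Host/port, topic, and the v2 content type are assumptions, not MEP specifics.
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class RestProxyProducer {
    public static void main(String[] args) throws Exception {
        String body = "{\"records\":[{\"value\":{\"hello\":\"world\"}}]}";
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:8082/topics/my-topic"))
                .header("Content-Type", "application/vnd.kafka.json.v2+json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + " " + response.body());
    }
}
```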
Getting Started with MQTT and Java
Read & comment this article on my new blog
MQTT (MQ Telemetry Transport) is a lightweight publish/subscribe messaging protocol.
MQTT is used a lot in Internet of Things applications, since it has been designed to run in remote locations on systems with a small footprint.
MQTT 3.1 is an OASIS standard, and you can find all the information at http://mqtt.org/
This article will…
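To show the publish/subscribe flow in miniature, here is a sketch using the Eclipse Paho Java client (the usual choice for MQTT in Java, though the post's own code may differ), assuming a broker such as Mosquitto on localhost:1883.

```java
// A minimal MQTT publish/subscribe sketch with the Eclipse Paho Java client.
// Assumes a broker (e.g., Mosquitto) on localhost:1883; topic is illustrative.
import org.eclipse.paho.client.mqttv3.MqttClient;
import org.eclipse.paho.client.mqttv3.MqttMessage;

public class MqttDemo {
    public static void main(String[] args) throws Exception {
        MqttClient client = new MqttClient("tcp://localhost:1883",
                MqttClient.generateClientId());
        client.connect();

        // Subscribe: the callback fires for every message on the topic.
        client.subscribe("house/temperature", (topic, msg) ->
                System.out.println(topic + ": " + new String(msg.getPayload())));

        // Publish a message to the same topic.
        client.publish("house/temperature", new MqttMessage("21.5".getBytes()));

        Thread.sleep(1000);   // give the subscriber callback a chance to run
        client.disconnect();
    }
}
```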
Getting started with Apache Flink and Kafka
Read this article on my new blog
Introduction
Apache Flink is an open source platform for distributed stream and batch data processing. Flink is a streaming dataflow engine with several APIs for creating data-stream-oriented applications.
It is very common for Flink applications to use Apache Kafka for data input and output. This article will guide you through the steps to use Apache…
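The shape of such an application is roughly the sketch below: a Kafka source feeding a Flink data stream. It assumes the flink-connector-kafka dependency; the connector class is named FlinkKafkaConsumer09 in the Flink version the post targets and FlinkKafkaConsumer in recent ones, and the topic and group id are illustrative.

```java
// A minimal Flink-reads-Kafka sketch; assumes the flink-connector-kafka
// dependency. The connector class name varies by Flink version.
import java.util.Properties;
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer;

public class FlinkKafkaDemo {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();

        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "localhost:9092");
        props.setProperty("group.id", "flink-demo");

        // Consume the topic as a stream of strings and print every record.
        env.addSource(new FlinkKafkaConsumer<>("flink-demo",
                        new SimpleStringSchema(), props))
           .print();

        env.execute("Kafka to stdout");
    }
}
```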
Streaming Analytics in a Digitally Industrialized World
Read this article on my new blog
Get an introduction to streaming analytics, which gives you real-time insight from captured events and big data. There are applications across industries, from finance to wine making, though there are two primary challenges to be addressed.
Did you know that a plane flying from Texas to London can generate 30 million data points per flight? As Jim…
Setting up Spark Dynamic Allocation on MapR
Read this article on my new blog
Apache Spark can use various cluster managers to execute applications (Standalone, YARN, Apache Mesos). When you install Apache Spark on MapR, you can submit applications in Standalone mode or using YARN.
This article focuses on YARN and Dynamic Allocation, a feature that lets Spark add or remove executors dynamically based on the workload. You can…
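For a sense of what "enabling" it means, here is a sketch of the relevant configuration keys set programmatically on a SparkConf; the same keys usually go into spark-defaults.conf or --conf flags on spark-submit, and the min/max executor counts here are only illustrative.

```java
// Illustrative Dynamic Allocation settings on a SparkConf; the min/max
// values are examples. Submit with spark-submit --master yarn.
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;

public class DynamicAllocationDemo {
    public static void main(String[] args) {
        SparkConf conf = new SparkConf()
                .setAppName("dynamic-allocation-demo")
                // Let Spark grow and shrink the executor pool with the workload.
                .set("spark.dynamicAllocation.enabled", "true")
                // External shuffle service: required so executors can be removed
                // without losing their shuffle files.
                .set("spark.shuffle.service.enabled", "true")
                .set("spark.dynamicAllocation.minExecutors", "1")
                .set("spark.dynamicAllocation.maxExecutors", "10");

        JavaSparkContext sc = new JavaSparkContext(conf);
        System.out.println("executors will scale between 1 and 10");
        sc.stop();
    }
}
```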
Save MapR Streams messages into MapR DB JSON
Read this article on my new blog
In this article you will learn how to create a MapR Streams Consumer that saves all the messages into a MapR-DB JSON Table.
Install and Run the sample MapR Streams application
The steps to install and run the applications are the same as the ones described in the following article:
MapR Streams application
Once you have the default producer and…
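The heart of such a consumer looks roughly like the sketch below: the standard Kafka consumer API (which MapR Streams implements) polling messages, and the MapR-DB JSON API writing each one as a document. The stream, topic, and table paths are illustrative, and each message value is assumed to be a JSON document carrying an _id field.

```java
// A hedged sketch of a MapR Streams consumer writing to a MapR-DB JSON table.
// Stream/topic and table paths are illustrative; with MapR Streams the stream
// path in the topic name replaces Kafka's bootstrap.servers setting.
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import com.mapr.db.MapRDB;
import com.mapr.db.Table;

public class StreamToTable {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("group.id", "mapr-db-writer");
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");

        Table table = MapRDB.getTable("/apps/messages_table");
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("/apps/pump:sensor"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(1000);
                for (ConsumerRecord<String, String> record : records) {
                    // Each value is expected to be a JSON document with an _id.
                    table.insertOrReplace(MapRDB.newDocument(record.value()));
                }
                table.flush();
            }
        }
    }
}
```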
Getting Started with MapR Streams
Read this article on my new blog
You can find a new tutorial that explains how to deploy an Apache Kafka application to MapR Streams; the tutorial is available here:
Getting Started with MapR Streams
MapR Streams is a new distributed messaging system for streaming event data at scale, and it’s integrated into the MapR converged platform.
MapR Streams uses the Apache Kafka API, so…
Getting Started With Sample Programs for Apache Kafka 0.9
Read this article on my new blog
Ted Dunning and I have worked on a tutorial that explains how to write your first Kafka application. In this tutorial you will learn how to:
Install and start Kafka
Create and run a producer and a consumer
You can find the tutorial on the MapR blog:
Getting Started with Sample Programs for Apache Kafka 0.9
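As a preview of what the tutorial builds, here is a minimal Kafka 0.9 producer in Java; the broker address and the "fast-messages" topic are illustrative (the tutorial itself also covers the matching consumer).

```java
// A minimal Kafka 0.9 producer; broker address and topic are illustrative.
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class SimpleProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");

        // try-with-resources closes (and flushes) the producer on exit.
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            for (int i = 0; i < 5; i++) {
                producer.send(new ProducerRecord<>("fast-messages", "message " + i));
            }
        }
    }
}
```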
Using Apache Drill REST API to Build ASCII Dashboard With Node
Read this article on my new blog
Apache Drill has a hidden gem: an easy-to-use REST interface. This API can be used to query, profile, and configure the Drill engine.
In this blog post I will explain how to use the Drill REST API to create ASCII dashboards using Blessed Contrib.
The ASCII dashboard looks like this:
Prerequisites
Node.js
Apache Drill 1.2
For this post, you will use the SFO…
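The post drives this API from Node.js; to show how simple the endpoint is, here is the same kind of query call sketched in Java. The POST /query.json endpoint on port 8047 is Drill's standard REST query interface; the sample query against the bundled cp.`employee.json` file is illustrative.

```java
// Querying Drill's REST API (POST /query.json on port 8047) from Java;
// the post itself uses Node.js. The sample query is illustrative.
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class DrillRestQuery {
    public static void main(String[] args) throws Exception {
        String body = "{\"queryType\":\"SQL\","
                + "\"query\":\"SELECT * FROM cp.`employee.json` LIMIT 5\"}";
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:8047/query.json"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body());   // JSON result: columns + rows
    }
}
```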
Convert CSV file to Apache Parquet... with Drill
Read this article on my new blog
A very common use case when working with Hadoop is to store and query simple files (CSV, TSV, ...), and then, to get better performance and more efficient storage, to convert these files into a more efficient format, for example Apache Parquet.
Apache Parquet is a columnar storage format available to any project in the Hadoop ecosystem. Apache Parquet has the following…
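The conversion itself is a single CREATE TABLE AS SELECT statement in Drill; here is a hedged sketch running it through Drill's JDBC driver from Java. The file paths, column mapping, and the dfs.tmp workspace are illustrative.

```java
// CSV-to-Parquet via Drill's CTAS, executed through the Drill JDBC driver.
// Paths, columns, and the dfs.tmp workspace are illustrative.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class CsvToParquet {
    public static void main(String[] args) throws Exception {
        try (Connection conn =
                     DriverManager.getConnection("jdbc:drill:drillbit=localhost");
             Statement stmt = conn.createStatement()) {
            // Make sure Drill writes new tables as Parquet (the default format).
            stmt.execute("ALTER SESSION SET `store.format` = 'parquet'");
            // CTAS: read the CSV (columns[] is Drill's header-less CSV syntax)
            // and write it back as Parquet files.
            stmt.execute("CREATE TABLE dfs.tmp.`sales_parquet` AS "
                    + "SELECT columns[0] AS id, columns[1] AS amount "
                    + "FROM dfs.`/data/sales.csv`");
        }
    }
}
```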
Apache Drill : How to Create a New Function?
Read this article on my new blog
Apache Drill allows users to explore any type of data using ANSI SQL. This is great, but Drill goes even further and allows you to create custom functions to extend the query engine. These custom functions have all the performance of Drill's primitive operations, but achieving that performance makes writing these functions a little trickier…
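For a feel of the programming model, here is a simplified sketch of a Drill simple function; a real UDF also needs the packaging (drill-module.conf) described in the post, and the double_it function here is purely illustrative.

```java
// A simplified Drill UDF following the DrillSimpleFunc pattern; the
// double_it function is illustrative. Real UDFs also need packaging
// (drill-module.conf) covered in the post.
import org.apache.drill.exec.expr.DrillSimpleFunc;
import org.apache.drill.exec.expr.annotations.FunctionTemplate;
import org.apache.drill.exec.expr.annotations.Output;
import org.apache.drill.exec.expr.annotations.Param;
import org.apache.drill.exec.expr.holders.Float8Holder;

@FunctionTemplate(
        name = "double_it",
        scope = FunctionTemplate.FunctionScope.SIMPLE,
        nulls = FunctionTemplate.NullHandling.NULL_IF_NULL)
public class DoubleItFunction implements DrillSimpleFunc {

    @Param Float8Holder input;   // the function argument
    @Output Float8Holder out;    // the value returned to the query

    public void setup() { }      // one-time initialization, unused here

    public void eval() {
        // Usable in SQL as: SELECT double_it(price) FROM ...
        out.value = input.value * 2;
    }
}
```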
Introduction to MongoDB Security
View it on my new blog
Last week at the Paris MUG, I had a quick chat about security and MongoDB, and I decided to create this post explaining how to configure the out-of-the-box security features available in MongoDB.
You can find all the information about MongoDB Security in the following documentation chapter:
http://docs.mongodb.org/manual/security/
In this post, I won't go into detail about…
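Once access control is enabled (for example by starting mongod with --auth and creating a user in the admin database), the client side boils down to authenticating on connect. Here is a sketch with the MongoDB Java sync driver; the user, password, and authSource are illustrative.

```java
// Connecting to MongoDB with authentication enabled; user, password, and
// authSource are illustrative.
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;

public class SecureConnect {
    public static void main(String[] args) {
        // Credentials live in the connection string; authSource names the
        // database where the user was created.
        try (MongoClient client = MongoClients.create(
                "mongodb://appUser:secret@localhost:27017/?authSource=admin")) {
            client.getDatabase("test").listCollectionNames()
                  .forEach(System.out::println);
        }
    }
}
```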
Moving My Beers From Couchbase to MongoDB
See it on my new blog: here
A few days ago I posted a joke on Twitter:
Moving my Java from Couchbase to MongoDB pic.twitter.com/Wnn3pXfMGi
— Tugdual Grall (@tgrall) January 26, 2015
So I decided to move it from a simple picture to a real project. Let's look at the two phases of this so-called project:
Moving the data from Couchbase to MongoDB
Updating the application code to use…
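For the data-moving phase, the per-document logic is roughly the following heavily hedged sketch (my own, not the project's code), assuming the Couchbase Java SDK 2.x, the MongoDB sync driver, the beer-sample bucket, and an illustrative document id.

```java
// Hypothetical per-document move from Couchbase to MongoDB; the bucket,
// document id, and target database/collection are illustrative.
import com.couchbase.client.java.Bucket;
import com.couchbase.client.java.Cluster;
import com.couchbase.client.java.CouchbaseCluster;
import com.couchbase.client.java.document.JsonDocument;
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import org.bson.Document;

public class BeerMover {
    public static void main(String[] args) {
        Cluster cluster = CouchbaseCluster.create("localhost");
        Bucket bucket = cluster.openBucket("beer-sample");
        try (MongoClient mongo = MongoClients.create("mongodb://localhost")) {
            // Read the JSON document from Couchbase...
            JsonDocument beer = bucket.get("21st_amendment_brewery_cafe");
            // ...and insert it into MongoDB, keeping the Couchbase key as _id.
            Document doc = Document.parse(beer.content().toString());
            doc.put("_id", beer.id());
            mongo.getDatabase("beers").getCollection("beer").insertOne(doc);
        } finally {
            cluster.disconnect();
        }
    }
}
```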
Everybody Says “Hackathon”!
TL;DR:
MongoDB & Sage organized an internal Hackathon
We use the new X3 Platform based on MongoDB, Node.js and HTML to add cool features to the ERP
This shows that “any” enterprise can (should) do it to:
look differently at software development
build strong team spirit
have fun!
Introduction
I have, like many of you, participated in multiple hackathons where developers, designers and…
Nantes MUG : Event #2
Last night the Nantes MUG (MongoDB Users Group) had its second event. More than 45 people signed up and joined us at the Epitech school (thanks for this!). We were lucky to have 2 talks from local community members:
How “MyScript Cloud” uses MongoDB by Mathieu Ruellan
Aggregation Framework by Sebastien Prunier
How “MyScript Cloud” uses MongoDB
First of all, if you do not know MyScript, I…
How to create a pub/sub application with MongoDB?
Introduction
In this article we will see how to create a pub/sub application (messaging, chat, notification) fully based on MongoDB (without any message broker like RabbitMQ, JMS, ...).
So, what needs to be done to achieve such a thing:
an application "publishes" a message; in our case, we simply save a document into MongoDB
another application, or thread, subscribes to these events and will receive…
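The core trick is a capped collection plus a tailable cursor: inserts are the "publish", and a blocking cursor is the "subscribe". Here is a sketch with the MongoDB Java sync driver; the database, collection, and document shape are illustrative (the post may use a different driver).

```java
// Pub/sub on MongoDB: a capped collection plus a tailable cursor.
// Database, collection, and document fields are illustrative.
import com.mongodb.CursorType;
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import com.mongodb.client.model.CreateCollectionOptions;
import org.bson.Document;

public class MongoPubSub {
    public static void main(String[] args) {
        try (MongoClient client = MongoClients.create("mongodb://localhost")) {
            // Tailable cursors require a capped (fixed-size) collection;
            // this call fails if the collection already exists.
            client.getDatabase("demo").createCollection("events",
                    new CreateCollectionOptions().capped(true)
                            .sizeInBytes(1024 * 1024));
            MongoCollection<Document> events =
                    client.getDatabase("demo").getCollection("events");

            // "Publish": simply insert a document.
            events.insertOne(new Document("type", "greeting").append("msg", "hello"));

            // "Subscribe": a tailable cursor blocks and waits for new documents.
            events.find().cursorType(CursorType.TailableAwait)
                  .forEach(doc -> System.out.println("received: " + doc.toJson()));
        }
    }
}
```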
Big Data... Is Hadoop the good way to start?
In the past 2 years, I have met many developers and architects working on "big data" projects. This sounds amazing, but quite often the truth is not that amazing.
TL;DR
You believe that you have a big data project?
Do not start with the installation of a Hadoop cluster -- the "how"
Start by talking to business people to understand their problem -- the "why"
Understand the data you must…