Duration: 3 Days
Training Fee: RM 2650.00
About this course
In this Big Data training, attendees will gain a practical skill set on Hadoop, covering both its core and ecosystem components in detail. The course takes a case-study approach to learning the various tools, delivering industry-relevant training with a strong blend of analytics and technology.
Candidates will be awarded a Big Data Analytics using Hadoop Attendance Certificate upon successful completion of the projects provided as part of the training.
After the completion of this course, you will be able to:
- Understand what Big Data is
- Identify Big Data opportunities for your organisation
- Work with the concepts of HDFS and the MapReduce framework
- Learn data loading techniques
- Perform data analytics using Hive and YARN
- Implement Spark applications on YARN (Hadoop)
- Stream data using the Streaming API
- Analyze Hive and Spark SQL architecture
- Implement Spark SQL queries to perform several computations
Who should attend
- Software engineers and programmers who want to understand Big Data and its concepts within the larger Hadoop ecosystem, and use it to store, analyze, and vend "big data" at scale.
- Project, program, or product managers who want to understand the lingo and high-level architecture of Hadoop.
- Data analysts and database administrators who are curious about Hadoop and how it relates to their work.
- System architects who need to understand the components available in the Hadoop ecosystem, and how they fit together.
Prerequisites
All participants must be familiar with the fundamentals of programming and web technologies; basic familiarity with the Linux command line will also be very helpful.
Introduction and Understanding Big Data
- What is Big Data? Definition
- The Network System and Tick Bandwidth
- LAN and WAN: data flow, data packets and blocks
- Server Clustering
The Hadoop & Ecosystem Installation
- Install Hadoop
- Hadoop Overview and History
- The Ecosystem
Hadoop Core Components: HDFS
- HDFS: What it is, and how it works
- Load the sample dataset into HDFS using the UI
- Load the sample dataset into HDFS using the command line
Hadoop Core Components: MapReduce (YARN)
- MapReduce: What it is, and how it works
- How MapReduce distributes processing
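The map-shuffle-reduce flow named above can be sketched locally. The snippet below is a plain-Python simulation of a word-count job, not actual Hadoop code: in a real cluster the framework partitions the input, runs mappers and reducers on different nodes, and performs the shuffle itself. The function names here are illustrative only.

```python
from collections import defaultdict

def map_phase(line):
    # Mapper: emit a (word, 1) pair for every word in the input split
    for word in line.lower().split():
        yield (word, 1)

def reduce_phase(word, counts):
    # Reducer: sum all counts shuffled to this key
    return (word, sum(counts))

def run_job(lines):
    # Shuffle: group mapper output by key
    # (in real Hadoop this is done by the framework across the cluster)
    grouped = defaultdict(list)
    for line in lines:
        for word, one in map_phase(line):
            grouped[word].append(one)
    return dict(reduce_phase(w, c) for w, c in grouped.items())

result = run_job(["big data big ideas", "big clusters"])
print(result)  # {'big': 3, 'data': 1, 'ideas': 1, 'clusters': 1}
```

The key point for the module: mappers and reducers never share state, which is what lets Hadoop distribute the processing across many machines.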
Hadoop Data Analysis Tools: Hadoop-Spark
- Why Spark? The Resilient Distributed Dataset (RDD)
- Datasets and Spark
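An RDD pipeline chains lazy transformations (such as map and filter) that only execute when an action (such as reduce or collect) is called. The sketch below is a minimal plain-Python analogue of that pattern under that assumption; the class name MiniRDD and its methods are hypothetical stand-ins, not the real PySpark API.

```python
from functools import reduce

class MiniRDD:
    """Toy stand-in for a Spark RDD: lazy transformations, eager actions."""

    def __init__(self, data):
        # In Spark, the data would be partitioned across the cluster
        self._data = data

    def map(self, fn):
        # Transformation: returns a new dataset, nothing is computed yet
        return MiniRDD(fn(x) for x in self._data)

    def filter(self, pred):
        # Transformation: also lazy
        return MiniRDD(x for x in self._data if pred(x))

    def reduce(self, fn):
        # Action: forces evaluation of the whole pipeline
        return reduce(fn, self._data)

    def collect(self):
        # Action: materialise the results
        return list(self._data)

# Sum of squares of the even numbers 0..9
rdd = MiniRDD(range(10))
total = rdd.filter(lambda x: x % 2 == 0).map(lambda x: x * x).reduce(lambda a, b: a + b)
print(total)  # 0 + 4 + 16 + 36 + 64 = 120
```

Real Spark adds the "resilient" and "distributed" parts: each transformation records its lineage, so lost partitions can be recomputed on other nodes.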
Hadoop Data Analysis Tools: Hadoop-Hive
- What is Hive, and how does it work?
- MySQL and Hadoop Integration
- Hadoop Projects and Exercise
Public Class Schedule
- 9 - 10 Jan 2020
- 12 - 13 Mar 2020
- 14 - 15 May 2020
- 9 - 10 Jul 2020
- 10 - 11 Sep 2020
- 12 - 13 Nov 2020
Please contact us if you need more information about Private or In-House Class Training.
Call Us : 03-21165778