Apache Spark Training Overview
This course provides a brief introduction to Hadoop and helps participants solve real-world problems using Apache Spark. This four-day Apache Spark training course enables participants to build complete, unified big data applications combining batch, streaming, and interactive analytics on all their data.
With Spark, developers can write sophisticated parallel applications that support faster, better decisions and real-time actions across a wide variety of use cases, architectures, and industries.
Apache Spark is a powerful, open-source processing engine for data in a Hadoop cluster, optimized for speed, ease of use, and sophisticated analytics.
The Spark framework supports streaming data processing and complex, iterative algorithms, enabling applications to run up to 100x faster than traditional Hadoop MapReduce programs.
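To give a rough sense of the style of code covered in the course, the classic word-count example below is a minimal Scala sketch (illustrative only, not course material); the input path is a placeholder, and in-memory caching is what lets iterative jobs avoid re-reading data from disk.

```scala
import org.apache.spark.sql.SparkSession

object WordCountSketch {
  def main(args: Array[String]): Unit = {
    // Entry point for a Spark application; the app name is arbitrary.
    val spark = SparkSession.builder.appName("WordCountSketch").getOrCreate()
    val sc = spark.sparkContext

    // Read a text file into an RDD, split lines into words, and count occurrences.
    // "input.txt" is a placeholder path, not part of the course material.
    val counts = sc.textFile("input.txt")
      .flatMap(_.split("\\s+"))
      .map(word => (word, 1))
      .reduceByKey(_ + _)

    // cache() keeps the RDD in memory across actions, which is the main reason
    // iterative workloads run much faster than re-reading data from disk each pass.
    counts.cache()
    counts.take(10).foreach(println)

    spark.stop()
  }
}
```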
Apache Spark Training Learning Objectives
Through instructor-led discussion and interactive, hands-on exercises, participants will navigate the Hadoop ecosystem, learning topics such as:
- Using the Spark shell for interactive data analysis
- The features of Spark’s Resilient Distributed Datasets
- How Spark runs on a cluster
- Parallel programming with Spark
- Writing Spark applications
- Processing streaming data with Spark (see the brief sketch after this list)
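As a taste of the streaming topic above, here is a minimal Spark Streaming word count in Scala. It is a hypothetical sketch rather than course material: the hostname and port are placeholders, typically paired with a local `nc -lk 9999` listener while experimenting.

```scala
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

object StreamingWordCountSketch {
  def main(args: Array[String]): Unit = {
    // Two local threads: one to receive data, one to process it.
    val conf = new SparkConf().setAppName("StreamingWordCountSketch").setMaster("local[2]")
    val ssc = new StreamingContext(conf, Seconds(5))

    // Read lines of text from a TCP socket; host and port are placeholders.
    val lines = ssc.socketTextStream("localhost", 9999)

    // Count the words arriving in each 5-second micro-batch and print the results.
    val counts = lines.flatMap(_.split("\\s+"))
      .map(word => (word, 1))
      .reduceByKey(_ + _)
    counts.print()

    ssc.start()
    ssc.awaitTermination()
  }
}
```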
Apache Spark Training Audience
- Developers and engineers, to whom the course is best suited
- Developers working with large-scale, high-volume websites
- Application architects or data architects who need to understand the available options for high-performance, decentralized, high-volume data processing
Apache Spark Training Prerequisites
- Course examples and exercises are presented in Python and Scala, so knowledge of one of these programming languages is required.
- Basic knowledge of Linux is assumed.
- Prior knowledge of Hadoop is not required.
- Knowledge of Java programming would be useful.
Apache Spark Training Course Duration
4 days