Introduction to Spark Programming
Big Data Training Overview

This course introduces the Apache Spark distributed computing engine, and is suitable for developers, data analysts, architects, technical managers, and anyone who needs to use Spark in a hands-on manner. It is based on the Spark 2.x release.

The course provides a solid technical introduction to the Spark architecture and how Spark works. It covers the basic building blocks of Spark (e.g. RDDs and the distributed compute engine) as well as higher-level constructs that provide a simpler and more capable interface (e.g. DataSets/DataFrames and Spark SQL). It covers Spark SQL, DataFrames, and DataSets - now the preferred programming API - in depth, exploring possible performance issues and strategies for optimization.

The course also covers more advanced capabilities, such as using Spark Streaming to process streaming data and integrating with the Kafka server.

The course is very hands-on, with many labs. Participants will interact with Spark through the Spark shell (for interactive, ad-hoc processing) as well as through programs written against the Spark API. After taking this course, you will be ready to work with Spark in an informed and productive manner. Labs currently support Scala; contact us for Python/Java support.

Big Data Training Skills Gained

  • Understand the need for Spark in data processing
  • Understand the Spark architecture and how it distributes computations to cluster nodes
  • Be familiar with basic installation/setup/layout of Spark
  • Use the Spark shell for interactive and ad-hoc operations
  • Understand RDDs (Resilient Distributed Datasets), data partitioning, pipelining, and computations
  • Understand and use RDD operations such as map() and filter()
  • Understand and use Spark SQL and the DataFrame/DataSet API
  • Understand DataSet/DataFrame capabilities, including the Catalyst query optimizer and Tungsten memory/CPU optimizations
  • Be familiar with performance issues, and use the DataSet/DataFrame API and Spark SQL for efficient computations
  • Understand Spark's data caching and use it for efficient data transfer
  • Write/run standalone Spark programs with the Spark API
  • Use Spark Streaming / Structured Streaming to process streaming (real-time) data
  • Ingest streaming data from Kafka, and process via Spark Structured Streaming
  • Understand performance implications and optimizations when using Spark

Hands-On

Minimum 50% hands-on

Supported Platforms

Spark 2.1+

Big Data Training Prerequisites

Reasonable programming experience. An overview of Scala is provided for those who don't know it.

Big Data Training Course Duration

4 Days

Big Data Training Course Outline

Session 1 (Optional): Scala Ramp Up

  • Scala Introduction, Variables, Data Types, Control Flow
  • The Scala Interpreter
  • Collections and their Standard Methods (e.g. map())
  • Functions, Methods, Function Literals
  • Class, Object, Trait, Case Class
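
As a taste of the ramp-up material, the following sketch (names and values are illustrative) can be typed directly into the Scala interpreter; it shows a case class, function literals, and the standard collection methods:

    // A case class gives an immutable data type with equality and
    // pattern matching for free.
    case class Person(name: String, age: Int)

    val people = List(Person("Ana", 34), Person("Bo", 19), Person("Cy", 51))

    // map() transforms every element; filter() keeps matching elements.
    val names  = people.map(_.name)         // List(Ana, Bo, Cy)
    val adults = people.filter(_.age >= 21) // keeps Ana and Cy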

Session 2: Introduction to Spark

  • Overview, Motivations, Spark Systems
  • Spark Ecosystem
  • Spark vs. Hadoop
  • Acquiring and Installing Spark
  • The Spark Shell, SparkContext
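
For orientation, a first interactive session might look like the sketch below, typed at the spark-shell prompt, where the spark (SparkSession) and sc (SparkContext) objects are pre-created:

    val data = sc.parallelize(1 to 1000)   // distribute a local collection
    data.count()                           // action: returns 1000
    data.take(5)                           // action: Array(1, 2, 3, 4, 5)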

Session 3: RDDs and Spark Architecture

  • RDD Concepts, Lifecycle, Lazy Evaluation
  • RDD Partitioning and Transformations
  • Working with RDDs - Creating and Transforming (map, filter, etc.)
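
A minimal sketch of the RDD lifecycle, assuming an illustrative input file and again typed in the spark-shell:

    // Transformations are lazy: they only describe the computation.
    val lines    = sc.textFile("data/sample.txt")   // illustrative path
    val lengths  = lines.map(_.length)              // transformation
    val longOnes = lengths.filter(_ > 80)           // transformation

    // Nothing executes until an action is invoked.
    println(longOnes.count())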

Session 4: Spark SQL, DataFrames, and DataSets

  • Overview
  • SparkSession, Loading/Saving Data, Data Formats (JSON, CSV, Parquet, text ...)
  • Introducing DataFrames and DataSets (Creation and Schema Inference)
  • Supported Data Formats (JSON, Text, CSV, Parquet)
  • Working with the DataFrame (untyped) Query DSL (Column, Filtering, Grouping, Aggregation)
  • SQL-based Queries
  • Working with the DataSet (typed) API
  • Mapping and Splitting (flatMap(), explode(), and split())
  • DataSets vs. DataFrames vs. RDDs
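
The flavor of the three query styles can be seen in the sketch below, assuming an illustrative people.json file with name and age fields (spark is pre-created in the spark-shell):

    import spark.implicits._

    case class Person(name: String, age: Long)

    // DataFrame: the schema is inferred from the JSON input.
    val df = spark.read.json("data/people.json")
    df.filter($"age" > 21).groupBy("age").count().show()

    // The same data through a SQL-based query.
    df.createOrReplaceTempView("people")
    spark.sql("SELECT name FROM people WHERE age > 21").show()

    // The typed DataSet API works with the Person case class.
    val ds = df.as[Person]
    ds.filter(_.age > 21).map(_.name).show()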

Session 5: Shuffling Transformations and Performance

  • Grouping, Reducing, Joining
  • Shuffling, Narrow vs. Wide Dependencies, and Performance Implications
  • Exploring the Catalyst Query Optimizer (explain(), Query Plans, Issues with lambdas)
  • The Tungsten Optimizer (Binary Format, Cache Awareness, Whole-Stage Code Gen)
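
Both the shuffle and the optimizer can be observed from the shell; a small sketch (the bucket column is illustrative):

    import spark.implicits._

    // groupBy() creates a wide dependency: rows with the same key
    // must be shuffled to the same partition.
    val df  = spark.range(0, 1000000).withColumn("bucket", $"id" % 10)
    val agg = df.groupBy("bucket").count()

    // Catalyst's physical plan shows the Exchange (shuffle) step.
    agg.explain()
    agg.show()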

Session 6: Performance Tuning

  • Caching - Concepts, Storage Type, Guidelines
  • Minimizing Shuffling for Increased Performance
  • Using Broadcast Variables and Accumulators
  • General Performance Guidelines
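
A sketch of the caching and broadcast-join techniques covered here, assuming illustrative Parquet inputs that share a countryCode column:

    import org.apache.spark.sql.functions.broadcast

    // Cache a DataFrame that several actions will reuse.
    val events = spark.read.parquet("data/events.parquet")   // illustrative path
    events.cache()
    events.count()   // the first action materializes the cache

    // Broadcasting the small side of a join avoids shuffling the large side.
    val lookup = spark.read.parquet("data/countries.parquet")
    val joined = events.join(broadcast(lookup), "countryCode")
    joined.explain() // the plan should show a broadcast join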

Session 7: Creating Standalone Applications

  • Core API, SparkSession.Builder
  • Configuring and Creating a SparkSession
  • Building and Running Applications - sbt/build.sbt and spark-submit
  • Application Lifecycle (Driver, Executors, and Tasks)
  • Cluster Managers (Standalone, YARN, Mesos)
  • Logging and Debugging
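
Putting the pieces together, a minimal standalone application might look like the following sketch (the object name and input path are illustrative):

    import org.apache.spark.sql.SparkSession

    object WordCountApp {
      def main(args: Array[String]): Unit = {
        // The master URL is normally supplied by spark-submit, not hard-coded.
        val spark = SparkSession.builder
          .appName("WordCountApp")
          .getOrCreate()

        val counts = spark.sparkContext
          .textFile("data/input.txt")       // illustrative path
          .flatMap(_.split("\\s+"))
          .map(word => (word, 1))
          .reduceByKey(_ + _)

        counts.take(10).foreach(println)
        spark.stop()
      }
    }

After building the jar with sbt package, it could be launched with, for example, spark-submit --class WordCountApp --master "local[*]" <path-to-jar>.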

Session 8: Spark Streaming

  • Introduction and Streaming Basics
  • Spark Streaming (Spark 1.0+)
    • DStreams, Receivers, Batching
    • Stateless Transformation
    • Windowed Transformation
    • Stateful Transformation
  • Structured Streaming (Spark 2+)
    • Continuous Applications
    • Table Paradigm, Result Table
    • Steps for Structured Streaming
    • Sources and Sinks
  • Consuming Kafka Data
    • Kafka Overview
    • Structured Streaming - "kafka" format
    • Processing the Stream
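
A sketch of the Kafka-to-console pipeline this session builds toward (the broker address and topic name are illustrative; the spark-sql-kafka package must be on the classpath):

    // Read a stream of records from a Kafka topic.
    val stream = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "localhost:9092")   // illustrative broker
      .option("subscribe", "events")                         // illustrative topic
      .load()

    // Kafka keys/values arrive as binary; cast the value to a string.
    val messages = stream.selectExpr("CAST(value AS STRING) AS message")

    // The console sink prints each micro-batch - handy while experimenting.
    val query = messages.writeStream
      .format("console")
      .outputMode("append")
      .start()

    query.awaitTermination()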

Please contact your training representative for more details on having this course delivered onsite or online.
