
Course Catalog: Big Data Analysis with Scala and Spark Training

Course Outline:

Big Data Analysis with Scala and Spark Training


WEEK 1

Getting Started + Spark Basics

Get up and running with Scala on your computer.

Complete an example assignment to familiarize yourself with our unique way of submitting assignments.
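As a rough sketch only (not the official course template), an sbt build that pulls in Spark for local experimentation might look like the following; the exact Scala and Spark versions are assumptions, so use whatever the course specifies.

// build.sbt -- illustrative sketch, not the official course setup.
scalaVersion := "2.12.18"

libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-core" % "3.5.0",
  "org.apache.spark" %% "spark-sql"  % "3.5.0"
)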

This week, we'll bridge the gap between data parallelism in the shared-memory scenario (covered in the prerequisite Parallel Programming course) and the distributed scenario. We'll look at important concerns that arise in distributed systems, like latency and failure. We'll go on to cover the basics of Spark, a functionally-oriented framework for big data processing in Scala. We'll end the first week by exercising what we learned about Spark, immediately getting our hands dirty analyzing a real-world data set.
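For a flavour of what such a program looks like, here is a minimal sketch: it runs Spark locally and counts lines in a hypothetical input file (articles.txt, not a course data set) that match a predicate, using the same functional combinators you know from Scala collections.

import org.apache.spark.{SparkConf, SparkContext}

object Week1Demo {
  def main(args: Array[String]): Unit = {
    // Run Spark locally, using all available cores.
    val conf = new SparkConf().setAppName("week1-demo").setMaster("local[*]")
    val sc = new SparkContext(conf)

    // Hypothetical input: a plain-text file with one record per line.
    val lines = sc.textFile("articles.txt")

    // Familiar functional combinators, now evaluated over a distributed RDD.
    val scalaMentions = lines.filter(line => line.contains("Scala")).count()
    println(s"Lines mentioning Scala: $scalaMentions")

    sc.stop()
  }
}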

WEEK 2

Reduction Operations & Distributed Key-Value Pairs

This week, we'll look at a special kind of RDD called pair RDDs. With this specialized kind of RDD in hand, we'll cover essential operations on large data sets, such as reductions and joins.
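A small sketch of these operations on pair RDDs follows; the (key, value) data is made up for illustration, and an existing SparkContext sc (as in the Week 1 sketch) is assumed.

// Hypothetical data: (userId, amount) and (userId, country).
val purchases = sc.parallelize(Seq((1, 20.0), (2, 15.0), (1, 5.0)))
val countries = sc.parallelize(Seq((1, "CH"), (2, "US")))

// A reduction over a pair RDD: total amount per user.
val totals = purchases.reduceByKey(_ + _)   // RDD[(Int, Double)]

// A join combines two pair RDDs on their keys.
val joined = totals.join(countries)          // RDD[(Int, (Double, String))]

joined.collect().foreach(println)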

WEEK 3

Partitioning and Shuffling

This week we'll look at some of the performance implications of using operations like joins. Is it possible to get the same result without having to pay for the overhead of moving data over the network? We'll answer this question by delving into how we can partition our data to achieve better data locality, in turn optimizing some of our Spark jobs.
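As a sketch of the idea, building on the hypothetical pair RDDs from the Week 2 example: pre-partitioning both sides of a join with the same partitioner co-locates keys, so the join can run without a full shuffle.

import org.apache.spark.HashPartitioner

// Partition both RDDs the same way and cache them, so matching keys
// end up on the same nodes before the join.
val partitioner = new HashPartitioner(8)
val purchasesByUser = purchases.partitionBy(partitioner).persist()
val countriesByUser = countries.partitionBy(partitioner).persist()

// reduceByKey preserves the partitioner, so this join avoids moving
// data across the network again.
val joined = purchasesByUser.reduceByKey(_ + _).join(countriesByUser)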

WEEK 4

Structured Data: SQL, DataFrames, and Datasets

With our newfound understanding of the cost of data movement in a Spark job, and some experience optimizing jobs for data locality last week, this week we'll focus on how we can more easily achieve similar optimizations. Can structured data help us? We'll look at Spark SQL and its powerful optimizer, which uses structure to apply impressive optimizations. We'll move on to cover DataFrames and Datasets, which give us a way to mix RDDs with the powerful automatic optimizations behind Spark SQL.
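A brief sketch of the Dataset, DataFrame, and SQL APIs, again with made-up data; the Purchase case class and the local SparkSession are illustrative assumptions, not part of the course materials.

import org.apache.spark.sql.SparkSession

// Illustrative schema only.
case class Purchase(userId: Int, amount: Double)

val spark = SparkSession.builder().appName("week4-demo").master("local[*]").getOrCreate()
import spark.implicits._

// A Dataset keeps the static type of our case class...
val purchases = Seq(Purchase(1, 20.0), Purchase(2, 15.0), Purchase(1, 5.0)).toDS()

// ...while the Catalyst optimizer plans the query, whether we use the
// DataFrame API or plain SQL over the same data.
val totals = purchases.groupBy($"userId").sum("amount")

purchases.createOrReplaceTempView("purchases")
val sqlTotals = spark.sql("SELECT userId, SUM(amount) AS total FROM purchases GROUP BY userId")

totals.show()
sqlTotals.show()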