CDIS / Computer Sciences / Courses / CS544 / s25 · main · Commits

Commit 67240c87, authored 1 month ago by TYLER CARAZA-HARTER

    lec 22 setup

Parent: cab26261
Changes: 3 files, with 8 additions and 8 deletions

- lec/22-spark/Dockerfile (+4, −4)
- lec/22-spark/docker-compose.yml (+2, −2)
- lec/22-spark/nb/starter.ipynb (+2, −2)
lec/22-spark/Dockerfile (+4, −4)

```diff
 FROM ubuntu:24.04

-RUN apt-get update; apt-get install -y wget curl openjdk-11-jdk python3-pip net-tools unzip
+RUN apt-get update; apt-get install -y wget curl openjdk-11-jdk python3-pip nano

 # SPARK
-RUN wget https://dlcdn.apache.org/spark/spark-3.5.3/spark-3.5.3-bin-hadoop3.tgz && tar -xf spark-3.5.3-bin-hadoop3.tgz && rm spark-3.5.3-bin-hadoop3.tgz
+RUN wget https://archive.apache.org/dist/spark/spark-3.5.5/spark-3.5.5-bin-hadoop3.tgz && tar -xf spark-3.5.5-bin-hadoop3.tgz && rm spark-3.5.5-bin-hadoop3.tgz

 # HDFS
 RUN wget https://dlcdn.apache.org/hadoop/common/hadoop-3.3.6/hadoop-3.3.6.tar.gz && tar -xf hadoop-3.3.6.tar.gz && rm hadoop-3.3.6.tar.gz

 # Jupyter
-RUN pip3 install jupyterlab==4.0.3 pandas==2.2.3 pyspark==3.5.3 matplotlib --break-system-packages
+RUN pip3 install jupyterlab==4.3.5 pandas==2.2.3 pyspark==3.5.5 matplotlib==3.10.1 --break-system-packages

 ENV JAVA_HOME=/usr/lib/jvm/java-11-openjdk-amd64
 ENV PATH="${PATH}:/hadoop-3.3.6/bin"
```
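One property of this change worth keeping in mind: the Spark tarball unpacked into the image and the pip-installed pyspark wheel should stay in lockstep (both are 3.5.5 after this commit), since mismatched driver/executor versions can fail at runtime. A minimal sketch of that consistency check in plain Python (`SPARK_TARBALL`, `PYSPARK_PIN`, and `tarball_version` are illustrative names, with values copied from the Dockerfile above):

```python
# Values from the Dockerfile above; the names and helper are illustrative.
SPARK_TARBALL = "spark-3.5.5-bin-hadoop3.tgz"   # wget'd Spark distribution
PYSPARK_PIN = "3.5.5"                           # pip3 install pyspark==...

def tarball_version(name: str) -> str:
    """Extract '3.5.5' from 'spark-3.5.5-bin-hadoop3.tgz'."""
    return name.split("-")[1]

# The unpacked Spark distribution and the pyspark wheel agree.
assert tarball_version(SPARK_TARBALL) == PYSPARK_PIN
```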
lec/22-spark/docker-compose.yml (+2, −2)

```diff
@@ -20,10 +20,10 @@ services:
   spark-boss:
     image: spark-demo
     hostname: boss
-    command: sh -c "/spark-3.5.3-bin-hadoop3/sbin/start-master.sh && sleep infinity"
+    command: sh -c "/spark-3.5.5-bin-hadoop3/sbin/start-master.sh && sleep infinity"
   spark-worker:
     image: spark-demo
-    command: sh -c "/spark-3.5.3-bin-hadoop3/sbin/start-worker.sh spark://boss:7077 -c 1 -m 512M && sleep infinity"
+    command: sh -c "/spark-3.5.5-bin-hadoop3/sbin/start-worker.sh spark://boss:7077 -c 2 -m 2g && sleep infinity"
     deploy:
       replicas: 2
```
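Besides the version bump, this change grows each worker from 1 core / 512M to 2 cores / 2g. With `replicas: 2`, the standalone cluster's total capacity roughly quadruples. Illustrative arithmetic only (the variable names are mine; the values come from the compose diff):

```python
# Values from the docker-compose.yml change above; names are illustrative.
replicas = 2           # deploy.replicas
cores_per_worker = 2   # start-worker.sh -c 2
mem_per_worker_gb = 2  # start-worker.sh -m 2g

total_cores = replicas * cores_per_worker    # cluster-wide cores
total_mem_gb = replicas * mem_per_worker_gb  # cluster-wide worker memory
print(total_cores, total_mem_gb)  # 4 4
```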
lec/22-spark/nb/starter.ipynb (+2, −2)

```diff
@@ -20,7 +20,7 @@
 "from pyspark.sql import SparkSession\n",
 "spark = (SparkSession.builder.appName(\"cs544\")\n",
 "         .master(\"spark://boss:7077\")\n",
-"         .config(\"spark.executor.memory\", \"512M\")\n",
+"         .config(\"spark.executor.memory\", \"2G\")\n",
 "         .config(\"spark.sql.warehouse.dir\", \"hdfs://nn:9000/user/hive/warehouse\")\n",
 "         .enableHiveSupport()\n",
 "         .getOrCreate())"
@@ -115,7 +115,7 @@
 "name": "python",
 "nbconvert_exporter": "python",
 "pygments_lexer": "ipython3",
-"version": "3.10.12"
+"version": "3.12.3"
 }
 },
 "nbformat": 4,
```
%% Cell type:code id:c8dca847-54af-4284-97d8-0682e88a6e8d tags:
``` python
from pyspark.sql import SparkSession
spark = (SparkSession.builder.appName("cs544")
         .master("spark://boss:7077")
         .config("spark.executor.memory", "2G")  # raised from "512M" in this commit
         .config("spark.sql.warehouse.dir", "hdfs://nn:9000/user/hive/warehouse")
         .enableHiveSupport()
         .getOrCreate())
```
%% Output
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
23/10/27 01:41:45 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
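In standalone mode, an executor's `spark.executor.memory` request must fit within a worker's `-m` allotment, or the worker cannot launch executors for the app. With workers now at 2g, the new "2G" request fits exactly (as the old "512M" did, with room to spare). A minimal sketch of that sizing check in plain Python (`to_mb` is an illustrative helper, not a Spark API):

```python
# Illustrative helper, not a Spark API: parse size strings like "512M"/"2g"
# into MiB so executor and worker allotments can be compared.
UNITS = {"k": 1 / 1024, "m": 1, "g": 1024}

def to_mb(size: str) -> float:
    return float(size[:-1]) * UNITS[size[-1].lower()]

assert to_mb("2G") <= to_mb("2g")    # new executor request fits the worker
assert to_mb("512M") <= to_mb("2g")  # the old request did too
```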
%% Cell type:code id:2294e4e0-ab19-496c-980f-31df757e7837 tags:
``` python
!hdfs dfs -cp sf.csv hdfs://nn:9000/sf.csv
```
%% Cell type:code id:cb54bacc-b52a-4c25-93d2-2ba0f61de9b0 tags:
``` python
df = (spark.read.format("csv")
      .option("header", True)
      .option("inferSchema", True)
      .load("hdfs://nn:9000/sf.csv"))
```
%% Output
%% Cell type:code id:c1298818-83f6-444b-b8a0-4be5b16fd6fb tags:
``` python
from pyspark.sql.functions import col, expr
cols = [col(c).alias(c.replace(" ", "_")) for c in df.columns]
df.select(cols).write.format("parquet").save("hdfs://nn:9000/sf.parquet")
```
%% Output
23/10/27 01:43:57 WARN SparkStringUtils: Truncated the string representation of a plan since it was too large. This behavior can be adjusted by setting 'spark.sql.debug.maxToStringFields'.
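The alias step above exists because Spark's Parquet writer rejects column names containing spaces (and certain other characters), so they must be renamed before saving. The same rename logic in plain Python, applied to hypothetical column names (the real sf.csv header is not shown in this diff):

```python
# Hypothetical column names for illustration; the actual sf.csv header
# does not appear in this commit.
columns = ["Call Number", "Call Type", "Received DtTm"]
renamed = [c.replace(" ", "_") for c in columns]
print(renamed)  # ['Call_Number', 'Call_Type', 'Received_DtTm']
```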
%% Cell type:code id:37d1ded3-ed8a-4e39-94cb-dd3a3272af91 tags:
``` python
!hdfs dfs -rm hdfs://nn:9000/sf.csv
```
%% Cell type:code id:abea48b5-e012-4ae2-a53a-e40350f94e20 tags:
``` python
df = spark.read.format("parquet").load("hdfs://nn:9000/sf.parquet")
```