To learn Spark, I set up four virtual machines and installed Spark 3.0.0 on them.
The underlying Hadoop cluster is already in place; see the earlier post on installing and deploying a 4-node Hadoop 3.2.1 distributed cluster learning environment on OL7.7.
First, download the matching package from http://spark.apache.org/downloads.html
![](/d/20211017/aa556b713d9f213f472063c50e6cc0c2.gif)
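If you would rather fetch the tarball from the command line, a wget against the Apache release archive should also work (the URL below is my assumption of where the 3.0.0 "without hadoop" build lives; adjust it if you use a mirror):
[hadoop@master ~]$ wget https://archive.apache.org/dist/spark/spark-3.0.0/spark-3.0.0-bin-without-hadoop.tgz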
Extract the archive:
[hadoop@master ~]$ sudo tar -zxf spark-3.0.0-bin-without-hadoop.tgz -C /usr/local
[hadoop@master ~]$ cd /usr/local
[hadoop@master /usr/local]$ sudo mv ./spark-3.0.0-bin-without-hadoop/ spark
[hadoop@master /usr/local]$ sudo chown -R hadoop: ./spark
Add the environment variables on all four nodes:
export SPARK_HOME=/usr/local/spark
export PATH=$PATH:$SPARK_HOME/bin:$SPARK_HOME/sbin
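A minimal sketch of applying this on one node, assuming the variables go into the hadoop user's ~/.bashrc (repeat on each of the four machines):
[hadoop@master ~]$ echo 'export SPARK_HOME=/usr/local/spark' >> ~/.bashrc
[hadoop@master ~]$ echo 'export PATH=$PATH:$SPARK_HOME/bin:$SPARK_HOME/sbin' >> ~/.bashrc
[hadoop@master ~]$ source ~/.bashrc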
Configure Spark
From the spark directory, run cp ./conf/spark-env.sh.template ./conf/spark-env.sh and append the following to spark-env.sh:
export SPARK_MASTER_IP=192.168.168.11
export HADOOP_CONF_DIR=/usr/local/hadoop/etc/hadoop
export SPARK_LOCAL_DIRS=/usr/local/hadoop
export SPARK_DIST_CLASSPATH=$(/usr/local/hadoop/bin/hadoop classpath)
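SPARK_DIST_CLASSPATH is what lets the "without hadoop" build find the Hadoop jars, so it is worth confirming the command substitution resolves before going further; if the following prints a long list of jars rather than an error, the setting is fine:
[hadoop@master /usr/local/spark]$ /usr/local/hadoop/bin/hadoop classpath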
Then configure the worker nodes: run cp ./conf/slaves.template ./conf/slaves and change its contents to the host list below (a command sketch follows the list):
master
slave1
slave2
slave3
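As referenced above, one way to produce that file is to copy the template and then overwrite it with a heredoc (the hostnames assume the same master/slave1-3 naming used in the Hadoop setup):
[hadoop@master /usr/local/spark]$ cp ./conf/slaves.template ./conf/slaves
[hadoop@master /usr/local/spark]$ cat > ./conf/slaves <<'EOF'
master
slave1
slave2
slave3
EOF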
Hard-code JAVA_HOME by appending the following to the end of sbin/spark-config.sh:
export JAVA_HOME=/usr/lib/jvm/jdk1.8.0_191
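Appending it from the shell works too, assuming the JDK really is installed at that path on every node:
[hadoop@master /usr/local/spark]$ echo 'export JAVA_HOME=/usr/lib/jvm/jdk1.8.0_191' >> sbin/spark-config.sh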
Copy the spark directory to the other nodes:
sudo scp -r /usr/local/spark/ slave1:/usr/local/
sudo scp -r /usr/local/spark/ slave2:/usr/local/
sudo scp -r /usr/local/spark/ slave3:/usr/local/
sudo chown -R hadoop ./spark/
...
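The ownership fix has to be repeated on every slave. A loop like the sketch below would cover it, assuming passwordless ssh from master to slave1-3 for the hadoop user and sudo rights on the slaves; otherwise just run the chown on each node by hand:
[hadoop@master ~]$ for h in slave1 slave2 slave3; do ssh -t $h "sudo chown -R hadoop /usr/local/spark"; done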
Start the cluster
First start the Hadoop cluster: /usr/local/hadoop/sbin/start-all.sh
Then start the Spark cluster.
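The Spark start scripts live in $SPARK_HOME/sbin; in 3.0.0 they are start-master.sh, start-slaves.sh and start-all.sh. Using the full path avoids confusion with Hadoop's own start-all.sh, which is also on the PATH here. On master:
[hadoop@master ~]$ /usr/local/spark/sbin/start-all.sh
[hadoop@master ~]$ jps    # master should now show a Master process, the slaves a Worker process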
![](/d/20211017/641d79a9c4f386cec0d554e069c8f535.gif)
![](/d/20211017/8192db3fa3b31c7d8d24f987914594e2.gif)
Monitor the cluster through port 8080 on master.
![](/d/20211017/eec4897646ca8fd220d43e0185e7e728.gif)
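As a final smoke test, submitting the bundled SparkPi example against the standalone master should print an approximation of pi. The spark:// URL assumes the hostname master resolves to 192.168.168.11 with the default port 7077, and the examples jar name below is the usual one for the 3.0.0 / Scala 2.12 build; adjust if yours differs:
[hadoop@master ~]$ spark-submit --class org.apache.spark.examples.SparkPi \
    --master spark://master:7077 \
    /usr/local/spark/examples/jars/spark-examples_2.12-3.0.0.jar 10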
Installation complete.
This concludes this detailed tutorial on installing and deploying a 4-node Spark 3.0.0 distributed cluster on OL7.7. For more on deploying Spark clusters on OL7.7, please search 腳本之家's earlier articles, and we hope you will continue to support 腳本之家!