BlueXIII's Blog

Love technology, keep learning


Notes on Installing Hadoop 3.0.0 on Ubuntu 16.04

Install JDK 8

Install Oracle JDK via PPA

sudo apt-add-repository ppa:webupd8team/java
sudo apt-get update
sudo apt-get install oracle-java8-installer
export JAVA_HOME=/usr/lib/jvm/java-8-oracle

Or install OpenJDK instead:

sudo apt-get install default-jdk

Create a hadoop user

sudo useradd -m hadoop -s /bin/bash
sudo passwd hadoop
sudo adduser hadoop sudo

Install OpenSSH Server

sudo apt-get install openssh-server

Set up passwordless SSH login:

cd ~/.ssh/
ssh-keygen -t rsa
cat ./id_rsa.pub >> ./authorized_keys
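The three commands above can also be run non-interactively; a sketch, assuming no key exists yet at ~/.ssh/id_rsa:

```shell
# Create ~/.ssh if missing, generate a key with an empty passphrase,
# and authorize it for localhost logins (assumes no existing id_rsa)
mkdir -p ~/.ssh && chmod 700 ~/.ssh
ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys
```

Afterwards `ssh localhost` should log in without asking for a password.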

Download Hadoop

http://hadoop.apache.org/releases.html


Download the binary package of the 3.0 stable release and extract it.

Install Hadoop

tar -xzvf hadoop-3.0.0.tar.gz
sudo mv hadoop-3.0.0 /opt/hadoop

Add Hadoop to the PATH

export PATH=$PATH:/opt/hadoop/sbin:/opt/hadoop/bin
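That export only lasts for the current shell; to make it permanent, append it to ~/.bashrc. A sketch, assuming the /opt/hadoop layout used above:

```shell
# Persist the Hadoop environment across shells (paths from this post)
cat >> ~/.bashrc <<'EOF'
export HADOOP_HOME=/opt/hadoop
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
EOF
. ~/.bashrc
```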

Set the JDK environment variable

readlink -f /usr/bin/java | sed "s:bin/java::"
/usr/lib/jvm/java-8-oracle/jre/
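The sed expression just strips the trailing bin/java from the resolved path; a quick local demo with a sample path:

```shell
# No JDK needed for the demo: sed uses ':' as the delimiter
# and replaces "bin/java" with nothing
echo /usr/lib/jvm/java-8-oracle/jre/bin/java | sed "s:bin/java::"
# → /usr/lib/jvm/java-8-oracle/jre/
```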

sudo vi ./etc/hadoop/hadoop-env.sh
export JAVA_HOME=/usr/lib/jvm/java-8-oracle/jre/

Run Hadoop (standalone mode)

./bin/hadoop

mkdir ~/input
cp /opt/hadoop/etc/hadoop/*.xml ~/input

./bin/hadoop jar ./share/hadoop/mapreduce/hadoop-mapreduce-examples-3.0.0.jar grep ~/input ~/grep_example 'principal[.]*'
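The examples jar's grep job counts lines matching the given regex. Plain grep previews what 'principal[.]*' matches locally; note that since [.]* also matches zero dots, any line containing "principal" qualifies:

```shell
# Preview the job's regex with ordinary grep; -c counts matching lines
printf 'principal\ndfs.replication\n' | grep -c 'principal[.]*'
# → 1
```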

Pseudo-distributed configuration

vi /opt/hadoop/etc/hadoop/core-site.xml
<configuration>
<property>
<name>hadoop.tmp.dir</name>
<value>file:/opt/hadoop/tmp</value>
<description>Abase for other temporary directories.</description>
</property>
<property>
<name>fs.defaultFS</name>
<value>hdfs://localhost:9000</value>
</property>
</configuration>
vi /opt/hadoop/etc/hadoop/hdfs-site.xml
<configuration>
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
<property>
<name>dfs.namenode.name.dir</name>
<value>file:/opt/hadoop/tmp/dfs/name</value>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>file:/opt/hadoop/tmp/dfs/data</value>
</property>
</configuration>

Format the NameNode:
./bin/hdfs namenode -format

Start the NameNode and DataNode daemons (and stop them with the second command):
./sbin/start-dfs.sh
./sbin/stop-dfs.sh

Run jps to check that the daemons are up.

Web console (Hadoop 3 moved the NameNode web UI from port 50070 to 9870):
http://localhost:9870

Run a pseudo-distributed example

Create a user directory in HDFS:
./bin/hdfs dfs -mkdir -p /user/hadoop

Copy the sample XML files into the distributed filesystem as input:
./bin/hdfs dfs -mkdir input
./bin/hdfs dfs -put /opt/hadoop/etc/hadoop/*.xml input

List the files:
./bin/hdfs dfs -ls input

Run a MapReduce job in pseudo-distributed mode:
./bin/hadoop jar /opt/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-*.jar grep input output 'dfs[a-z.]+'

View the results:
./bin/hdfs dfs -cat output/*

Copy the output back to the local filesystem:
./bin/hdfs dfs -get output /opt/hadoop/output

Start YARN

vi mapred-site.xml

<configuration>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
</configuration>

vi yarn-site.xml

<configuration>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
</configuration>

Start YARN:

./sbin/start-yarn.sh      # start YARN
./sbin/mr-jobhistory-daemon.sh start historyserver # start the history server so job status shows up in the web UI

Stop YARN:

./sbin/stop-yarn.sh
./sbin/mr-jobhistory-daemon.sh stop historyserver

References

https://www.digitalocean.com/community/tutorials/how-to-install-hadoop-in-stand-alone-mode-on-ubuntu-16-04
http://www.powerxing.com/install-hadoop/
http://www.powerxing.com/hadoop-build-project-using-eclipse/