Preparation Work Before Installing Hadoop
1 Add a user with sudo privileges on the freshly installed Ubuntu system

root@nodeA:~# sudo adduser zyx
Adding user `zyx' ...
Adding new group `zyx' (1001) ...
Adding new user `zyx' (1001) with group `zyx' ...
Creating home directory `/home/zyx' ...
Copying files from `/etc/skel' ...
Enter new UNIX password:
Retype new UNIX password:
passwd: password updated successfully
Changing the user information for zyx
Enter the new value, or press ENTER for the default
        Full Name []: ^C
adduser: `/usr/bin/chfn zyx' exited from signal 2. Exiting.
root@nodeA:~#
root@nodeA:~# sudo usermod -G admin -a zyx
root@nodeA:~#
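The adduser run above interrupts the optional "Full Name" prompt with Ctrl-C, which is harmless. It is worth confirming that zyx really has sudo rights before continuing; a minimal check, assuming the admin group used above (on newer Ubuntu releases the sudo group plays this role instead):

root@nodeA:~# su - zyx
zyx@nodeA:~$ groups          # the list should include admin
zyx@nodeA:~$ sudo -l         # should list the commands zyx may run as root
zyx@nodeA:~$ exit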
2 Set up passwordless SSH login

(1) Passwordless login to the local machine on the namenode

zyx@nodeA:~$ ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa
Generating public/private dsa key pair.
Created directory '/home/zyx/.ssh'.
Your identification has been saved in /home/zyx/.ssh/id_dsa.
Your public key has been saved in /home/zyx/.ssh/id_dsa.pub.
The key fingerprint is:
65:2e:e0:df:2e:61:a5:19:6a:ab:0e:38:45:a9:6a:2b zyx@nodeA
The key's randomart image is:
(randomart image omitted)
zyx@nodeA:~$ cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
zyx@nodeA:~$
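If the first local login still asks for a password, the usual cause is over-permissive file modes on ~/.ssh. A quick check on the namenode, using the same zyx account:

zyx@nodeA:~$ chmod 700 ~/.ssh
zyx@nodeA:~$ chmod 600 ~/.ssh/authorized_keys
zyx@nodeA:~$ ssh localhost     # should log in without prompting for a password
zyx@nodeA:~$ exit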
(2) Passwordless login from the namenode to the other datanodes

Copy the namenode's public key to the datanode and append it to that node's authorized_keys (here the hadoop account is used on both machines). After this, ssh from the namenode to the datanode should no longer ask for a password.

hadoop@nodeB:~$ scp hadoop@nodea:/home/hadoop/.ssh/id_dsa.pub /home/hadoop
hadoop@nodea's password:
id_dsa.pub                        100%  602     0.6KB/s   00:00
hadoop@nodeB:~$ cat id_dsa.pub >> .ssh/authorized_keys
hadoop@nodeB:~$ sudo ufw disable
(ufw disable turns off the Ubuntu firewall so the Hadoop ports are reachable between nodes.)

3 Copy the JDK file (jdk-6u20-linux-i586.bin) to Linux

Use the F-Secure SSH File Transfer Trial tool and simply drag and drop the file.
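If a graphical SFTP client such as F-Secure is not at hand, the same copy can be done with scp from whichever machine holds the installer; the user@desktop prompt below stands for that machine and the paths are only examples:

zyx@nodeA:~$ mkdir -p ~/jdk                        # target directory used in step 4
user@desktop:~$ scp jdk-6u20-linux-i586.bin zyx@nodeA:/home/zyx/jdk/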
4 Installing and configuring jdk-6u20-linux-i586.bin

(1) Installation

zyx@nodeA:~$ ls
Examples  jdk
zyx@nodeA:~$ cd jdk
zyx@nodeA:~/jdk$ ls
jdk-6u20-linux-i586.bin
zyx@nodeA:~/jdk$ chmod a+x jdk*
zyx@nodeA:~/jdk$ ./jdk*

The license agreement is displayed next; choose yes, then press Enter, and the installation finishes.

zyx@nodeA:~/jdk$ ls
jdk1.6.0_20  jdk-6u20-linux-i586.bin

(2) Configuration

Open the file with root@nodeA:/home/zyx# vi .bashrc and append the following lines at the end:

export JAVA_HOME=/home/zyx/jdk/jdk1.6.0_20
export JRE_HOME=/home/zyx/jdk/jdk1.6.0_20/jre
export CLASSPATH=$CLASSPATH:$JAVA_HOME/lib:$JRE_HOME/lib
export PATH=$JAVA_HOME/bin:$JRE_HOME/bin:$PATH:$HOME/bin
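After re-reading .bashrc, the new JDK should be the one found on the PATH. A quick sanity check (the version string shown is simply what a 1.6.0_20 install is expected to report):

zyx@nodeA:~$ source ~/.bashrc
zyx@nodeA:~$ echo $JAVA_HOME      # should print /home/zyx/jdk/jdk1.6.0_20
zyx@nodeA:~$ java -version        # should report java version "1.6.0_20"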
5 Installing Hadoop

Download address: http:/ . Put the archive under /home/zyx/hadoop, then unpack it:

zyx@nodeB:~/hadoop$ tar -zvxf hadoop-0.20.2.tar.gz

Set the environment variables by adding the following to /home/zyx/.bashrc:

zyx@nodeA:~$ vi .bashrc
export HADOOP_HOME=/home/zyx/hadoop/hadoop-0.20.2
export PATH=$HADOOP_HOME/bin:$PATH

6 Configuring Hadoop

(1) Configure the Java environment in conf/hadoop-env.sh:

export JAVA_HOME=/home/zyx/jdk/jdk1.6.0_20

(2) Configure the conf/masters and conf/slaves files; this only needs to be done on the namenode (a sketch of their contents is given below).
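The contents of conf/masters and conf/slaves are not shown in the source; as a sketch for the single-node layout implied by the report in section 7 (one datanode at 192.168.1.103), each file simply lists one host per line, for example:

conf/masters   (host that runs the secondary namenode)
192.168.1.103

conf/slaves    (hosts that run a datanode and tasktracker)
192.168.1.103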
(3) Configure core-site.xml, hdfs-site.xml and mapred-site.xml:

zyx@nodeC:~/hadoop-0.20.2/conf$ more core-site.xml
<?xml version="1.0"?>
<configuration>
  <property>
    <name>fs.default.name</name>
    <!-- hdfs://192.168.1.103:54310 -->
    <value>hdfs://192.168.1.103:9000</value>
  </property>
</configuration>

zyx@nodeC:~/hadoop-0.20.2/conf$ more hdfs-site.xml
<?xml version="1.0"?>
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>

zyx@nodeC:~/hadoop-0.20.2/conf$ more mapred-site.xml
<?xml version="1.0"?>
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <!-- hdfs://192.168.1.103:54320 -->
    <value>hdfs://192.168.1.103:9001</value>
  </property>
</configuration>

7 Running Hadoop

(0) Format the namenode:

zyx@nodeC:~/hadoop-0.20.2/bin$ hadoop namenode -format
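Between formatting and the jps listing in (1) the daemons have to be started; a minimal sketch using the start script bundled with Hadoop 0.20.2:

zyx@nodeC:~/hadoop-0.20.2/bin$ start-all.sh
# start-all.sh launches the NameNode, DataNode, SecondaryNameNode, JobTracker
# and TaskTracker, using conf/masters and conf/slaves over SSH on a real cluster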
(1) Check the processes with jps:

zyx@nodeC:~/hadoop-0.20.2/bin$ jps
31030 NameNode
31488 TaskTracker
31283 SecondaryNameNode
31372 JobTracker
31145 DataNode
31599 Jps

(2) Check the cluster status:

zyx@nodeC:~/hadoop-0.20.2/bin$ hadoop dfsadmin -report
Configured Capacity: 304716488704 (283.79 GB)
Present Capacity: 270065557519 (251.52 GB)
DFS Remaining: 270065532928 (251.52 GB)
DFS Used: 24591 (24.01 KB)
DFS Used%: 0%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0

-------------------------------------------------
Datanodes available: 1 (1 total, 0 dead)

Name: 192.168.1.103:50010
Decommission Status : Normal
Configured Capacity: 304716488704 (283.79 GB)
DFS Used: 24591 (24.01 KB)
Non DFS Used: 34650931185 (32.27 GB)
DFS Remaining: 270065532928 (251.52 GB)
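With all five daemons up and the datanode reporting in, a short end-to-end HDFS check can be run from the same bin directory; the paths used here are only examples:

zyx@nodeC:~/hadoop-0.20.2/bin$ hadoop fs -mkdir /test
zyx@nodeC:~/hadoop-0.20.2/bin$ hadoop fs -put ../conf/core-site.xml /test
zyx@nodeC:~/hadoop-0.20.2/bin$ hadoop fs -ls /test     # the uploaded file should appear here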