
Hadoop Study Notes 13: Script Analysis of ./bin/hadoop namenode -format

2022-07-06 Wednesday / 0 comments / 0 likes / 83 reads / 8371 characters


Online videos and books all say: you must format HDFS before starting it. This section explains the principle behind the format command!

First, let's look at the execution result!

Now let's dig into the principle!

=====================

First, we need to study how the ./bin/hadoop script runs.

THIS="$0"while [ -h "$THIS" ]; do  ls=`ls -ld "$THIS"`  link=`expr "$ls" : '.*-> /(.*/)$'`  if expr "$link" : '.*/.*' > /dev/null; then    THIS="$link"  else    THIS=`dirname "$THIS"`/"$link"  fidone意思是判断./bin/hadoop是否是一个符号链接,这里不是符号链接,所以不用太关注这段代码!$0的值是 ./bin/hadoop

------------

# if no args specified, show usage
if [ $# = 0 ]; then
  echo "Usage: hadoop COMMAND"
  echo "where COMMAND is one of:"
  echo "  namenode -format  format the DFS filesystem"
  echo "  namenode          run the DFS namenode"
  echo "  datanode          run a DFS datanode"
  echo "  dfs               run a DFS admin client"
  echo "  fsck              run a DFS filesystem checking utility"
  echo "  jobtracker        run the MapReduce job Tracker node"
  echo "  tasktracker       run a MapReduce task Tracker node"
  echo "  job               manipulate MapReduce jobs"
  echo "  jar <jar>         run a jar file"
  echo " or"
  echo "  CLASSNAME         run the class named CLASSNAME"
  echo "Most commands print help when invoked w/o parameters."
  exit 1
fi

Next the script checks how many arguments follow ./bin/hadoop. If there are none, the user presumably doesn't know how to use the command yet, so the usage text is printed. Very simple!
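As a quick illustration of the $# check, a hypothetical two-line script (not part of Hadoop):

# demo.sh -- prints usage when called with no arguments
if [ $# = 0 ]; then echo "Usage: demo.sh ARG..."; exit 1; fi
echo "got $# argument(s): $@"

# sh demo.sh        ->  Usage: demo.sh ARG...
# sh demo.sh a b    ->  got 2 argument(s): a b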

---------

 

# get arguments
COMMAND=$1
shift

Here the command word is extracted from the arguments. If you run ./bin/hadoop namenode -format, COMMAND becomes namenode, and once shift executes, $1 becomes -format.
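A minimal sketch of how shift renumbers the positional parameters (args.sh is an invented name):

# args.sh -- run as: sh args.sh namenode -format
COMMAND=$1                # COMMAND=namenode
shift                     # drop the old $1; -format slides into $1
echo "COMMAND=$COMMAND, remaining: $@"
# prints: COMMAND=namenode, remaining: -format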


---------- 

# some directories
THIS_DIR=`dirname "$THIS"`
HADOOP_HOME=`cd "$THIS_DIR/.." ; pwd`

Next, two directory variables are computed: THIS_DIR is ./bin, and HADOOP_HOME is /usr/local/hadoop-0.1.0.
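Note that the cd inside the backticks runs in a subshell, so the script's own working directory is untouched while HADOOP_HOME still comes out absolute; a quick sketch with invented /tmp paths:

mkdir -p /tmp/demo/bin       # illustrative layout, not part of Hadoop
cd /tmp
HOME_DIR=`cd /tmp/demo/bin/.. ; pwd`
echo "$HOME_DIR"             # /tmp/demo -- resolved to an absolute path
pwd                          # /tmp -- the cd ran in a subshell, caller unaffected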

---

# Allow alternate conf dir location.
HADOOP_CONF_DIR="${HADOOP_CONF_DIR:-$HADOOP_HOME/conf}"

Next HADOOP_CONF_DIR is set; here its value is /usr/local/hadoop-0.1.0/conf.
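The ${VAR:-default} expansion uses VAR when it is set and non-empty, and falls back to the default otherwise; a minimal sketch (the /etc/hadoop override is invented):

unset HADOOP_CONF_DIR
HADOOP_HOME=/usr/local/hadoop-0.1.0
echo "${HADOOP_CONF_DIR:-$HADOOP_HOME/conf}"   # /usr/local/hadoop-0.1.0/conf

HADOOP_CONF_DIR=/etc/hadoop                    # illustrative override
echo "${HADOOP_CONF_DIR:-$HADOOP_HOME/conf}"   # /etc/hadoop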

=============

if [ -f "${HADOOP_CONF_DIR}/hadoop-env.sh" ]; then  source "${HADOOP_CONF_DIR}/hadoop-env.sh"fi导入hadoop-env.sh里的环境变量,可见后续如果有需求的话,可以修改hadoop-env.sh这个文件就可以了而不是修改hadoop文件,很方便!备注:针对hadoop-env.sh这个文件,我只添加了export JAVA_HOME=/usr/lib/jvm/jdk1.7.0_21

===

 

# some Java parameters
if [ "$JAVA_HOME" != "" ]; then
  #echo "run java in $JAVA_HOME"
  JAVA_HOME=$JAVA_HOME
fi

This assigns JAVA_HOME (to itself, so effectively a no-op); the result is naturally /usr/lib/jvm/jdk1.7.0_21.

==========

if [ "$JAVA_HOME" = "" ]; then  echo "Error: JAVA_HOME is not set."  exit 1fi这个就是校验JAVA_HOME是否配置了

=======

JAVA=$JAVA_HOME/bin/java
JAVA_HEAP_MAX=-Xmx1000m

Two variables are set; their values are /usr/lib/jvm/jdk1.7.0_21/bin/java and -Xmx1000m respectively.

---

 

# check envvars which might override default args
if [ "$HADOOP_HEAPSIZE" != "" ]; then
  #echo "run with heapsize $HADOOP_HEAPSIZE"
  JAVA_HEAP_MAX="-Xmx""$HADOOP_HEAPSIZE""m"
  #echo $JAVA_HEAP_MAX
fi

Here, an echo check shows that $HADOOP_HEAPSIZE is empty, so this branch does not execute!
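If HADOOP_HEAPSIZE were set (for example in hadoop-env.sh), the heap flag would change accordingly; a sketch with an invented value of 2000:

JAVA_HEAP_MAX=-Xmx1000m                      # default
HADOOP_HEAPSIZE=2000                         # illustrative override, in MB
if [ "$HADOOP_HEAPSIZE" != "" ]; then
  JAVA_HEAP_MAX="-Xmx""$HADOOP_HEAPSIZE""m"  # plain string concatenation
fi
echo "$JAVA_HEAP_MAX"                        # -Xmx2000m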

===

 

# CLASSPATH initially contains $HADOOP_CONF_DIR
CLASSPATH="${HADOOP_CONF_DIR}"
CLASSPATH=${CLASSPATH}:$JAVA_HOME/lib/tools.jar

At this point CLASSPATH is /usr/local/hadoop-0.1.0/conf:/usr/lib/jvm/jdk1.7.0_21/lib/tools.jar.

# for developers, add Hadoop classes to CLASSPATH
if [ -d "$HADOOP_HOME/build/classes" ]; then
  CLASSPATH=${CLASSPATH}:$HADOOP_HOME/build/classes
fi
if [ -d "$HADOOP_HOME/build/webapps" ]; then
  CLASSPATH=${CLASSPATH}:$HADOOP_HOME/build
fi
if [ -d "$HADOOP_HOME/build/test/classes" ]; then
  CLASSPATH=${CLASSPATH}:$HADOOP_HOME/build/test/classes
fi

After these additions, CLASSPATH becomes /usr/local/hadoop-0.1.0/conf:/usr/lib/jvm/jdk1.7.0_21/lib/tools.jar:/usr/local/hadoop-0.1.0/build/classes:/usr/local/hadoop-0.1.0/build:/usr/local/hadoop-0.1.0/build/test/classes. This pulls in all the relevant classes!


===

# so that filenames w/ spaces are handled correctly in loops below
IFS=

This one is simple: emptying IFS disables word splitting, so filenames containing spaces survive the for loops below intact.
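To see why this matters, a sketch using an invented directory whose name contains a space:

mkdir -p '/tmp/my dir'
touch '/tmp/my dir/a.jar' '/tmp/my dir/b.jar'
DIR='/tmp/my dir'

IFS=                          # disable word splitting
for f in $DIR/*.jar; do       # $DIR stays one word despite the space
  echo "jar: $f"
done
# jar: /tmp/my dir/a.jar
# jar: /tmp/my dir/b.jar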

---

 

# for releases, add hadoop jars & webapps to CLASSPATH
if [ -d "$HADOOP_HOME/webapps" ]; then
  CLASSPATH=${CLASSPATH}:$HADOOP_HOME
fi
for f in $HADOOP_HOME/hadoop-*.jar; do
  CLASSPATH=${CLASSPATH}:$f;
done
# add libs to CLASSPATH
for f in $HADOOP_HOME/lib/*.jar; do
  CLASSPATH=${CLASSPATH}:$f;
done
for f in $HADOOP_HOME/lib/jetty-ext/*.jar; do
  CLASSPATH=${CLASSPATH}:$f;
done

The final CLASSPATH value is /usr/local/hadoop-0.1.0/conf:/usr/lib/jvm/jdk1.7.0_21/lib/tools.jar:/usr/local/hadoop-0.1.0/build/classes:/usr/local/hadoop-0.1.0/build:/usr/local/hadoop-0.1.0/build/test/classes:/usr/local/hadoop-0.1.0:/usr/local/hadoop-0.1.0/hadoop-0.1.0-examples.jar:/usr/local/hadoop-0.1.0/hadoop-0.1.0.jar:/usr/local/hadoop-0.1.0/lib/commons-logging-api-1.0.4.jar:/usr/local/hadoop-0.1.0/lib/jetty-5.1.4.jar:/usr/local/hadoop-0.1.0/lib/junit-3.8.1.jar:/usr/local/hadoop-0.1.0/lib/lucene-core-1.9.1.jar:/usr/local/hadoop-0.1.0/lib/servlet-api.jar:/usr/local/hadoop-0.1.0/lib/jetty-ext/ant.jar:/usr/local/hadoop-0.1.0/lib/jetty-ext/commons-el.jar:/usr/local/hadoop-0.1.0/lib/jetty-ext/jasper-compiler.jar:/usr/local/hadoop-0.1.0/lib/jetty-ext/jasper-runtime.jar:/usr/local/hadoop-0.1.0/lib/jetty-ext/jsp-api.jar
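This glob-and-append pattern is a common way to build a Java classpath in shell; a minimal sketch using the article's install path:

CLASSPATH=conf
for f in /usr/local/hadoop-0.1.0/lib/*.jar; do   # one iteration per matching jar
  CLASSPATH=${CLASSPATH}:$f
done
echo "$CLASSPATH"   # conf:/usr/local/hadoop-0.1.0/lib/commons-logging-api-1.0.4.jar:...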

===

# figure out which class to run
if [ "$COMMAND" = "namenode" ] ; then
  CLASS='org.apache.hadoop.dfs.NameNode'
elif [ "$COMMAND" = "datanode" ] ; then
  CLASS='org.apache.hadoop.dfs.DataNode'
elif [ "$COMMAND" = "dfs" ] ; then
  CLASS=org.apache.hadoop.dfs.DFSShell
elif [ "$COMMAND" = "fsck" ] ; then
  CLASS=org.apache.hadoop.dfs.DFSck
elif [ "$COMMAND" = "jobtracker" ] ; then
  CLASS=org.apache.hadoop.mapred.JobTracker
elif [ "$COMMAND" = "tasktracker" ] ; then
  CLASS=org.apache.hadoop.mapred.TaskTracker
elif [ "$COMMAND" = "job" ] ; then
  CLASS=org.apache.hadoop.mapred.JobClient
elif [ "$COMMAND" = "jar" ] ; then
  JAR="$1"
  shift
  CLASS=`"$0" org.apache.hadoop.util.PrintJarMainClass "$JAR"`
  if [ $? != 0 ]; then
    echo "Error: Could not find main class in jar file $JAR"
    exit 1
  fi
  CLASSPATH=${CLASSPATH}:${JAR}
else
  CLASS=$COMMAND
fi

Clearly, when running ./bin/hadoop namenode -format, CLASS ends up as org.apache.hadoop.dfs.NameNode.
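A side effect of the final else branch is that COMMAND may itself be a fully qualified class name, so the following two invocations should be equivalent (the second form is implied by the script, not something tested in this article):

./bin/hadoop namenode -format
./bin/hadoop org.apache.hadoop.dfs.NameNode -format   # else branch: CLASS=$COMMAND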

===

 

# cygwin path translation
if expr `uname` : 'CYGWIN*' > /dev/null; then
  CLASSPATH=`cygpath -p -w "$CLASSPATH"`
fi

I'm running on Linux, so this doesn't apply!

---

Finally comes the actual execution!

# run it
exec "$JAVA" $JAVA_HEAP_MAX $HADOOP_OPTS -classpath "$CLASSPATH" $CLASS "$@"

The values involved are, in order:

/usr/lib/jvm/jdk1.7.0_21/bin/java
-Xmx1000m
-classpath
/usr/local/hadoop-0.1.0/conf:/usr/lib/jvm/jdk1.7.0_21/lib/tools.jar:/usr/local/hadoop-0.1.0/build/classes:/usr/local/hadoop-0.1.0/build:/usr/local/hadoop-0.1.0/build/test/classes:/usr/local/hadoop-0.1.0:/usr/local/hadoop-0.1.0/hadoop-0.1.0-examples.jar:/usr/local/hadoop-0.1.0/hadoop-0.1.0.jar:/usr/local/hadoop-0.1.0/lib/commons-logging-api-1.0.4.jar:/usr/local/hadoop-0.1.0/lib/jetty-5.1.4.jar:/usr/local/hadoop-0.1.0/lib/junit-3.8.1.jar:/usr/local/hadoop-0.1.0/lib/lucene-core-1.9.1.jar:/usr/local/hadoop-0.1.0/lib/servlet-api.jar:/usr/local/hadoop-0.1.0/lib/jetty-ext/ant.jar:/usr/local/hadoop-0.1.0/lib/jetty-ext/commons-el.jar:/usr/local/hadoop-0.1.0/lib/jetty-ext/jasper-compiler.jar:/usr/local/hadoop-0.1.0/lib/jetty-ext/jasper-runtime.jar:/usr/local/hadoop-0.1.0/lib/jetty-ext/jsp-api.jar
org.apache.hadoop.dfs.NameNode
-format
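exec replaces the current shell process with the java process rather than forking a child, so nothing after this line ever runs; a tiny illustrative sketch (not from the Hadoop script):

#!/bin/sh
echo "before exec: pid=$$"
exec sleep 1           # the shell process *becomes* sleep; same PID, no return
echo "never printed"   # unreachable: exec does not come back on success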

And with that, it's up and running!

Very simple!

 
