How can I run a MapReduce task on multiple nodes?

Pulling the question from Integrating Hadoop with CephFS into its own thread:

@nothand, which charms do you have deployed? Please run the export-bundle command if you are unsure:

$ juju export-bundle

I'm sorry, I'm just new to Linux. My previous major was chemistry, so I know little about Linux commands and terminology.
When I run export-bundle:

[root@node002 .ssh]# export-bundle
-bash: export-bundle: command not found
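
(For anyone else reading: export-bundle is a subcommand of the juju client, so it has to be invoked through juju, and only on a machine where juju is installed; since this cluster appears to have been set up by hand rather than deployed with juju charms, the bare command will not exist here at all:)

$ juju export-bundle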

I did not install hadoop-cephfs.jar via git clone. I just downloaded it from http://ceph.com/download/hadoop-cephfs.jar and put it in /usr/share/java.
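
To double-check that the downloaded jar really contains the CephFS classes, listing its contents should show org/apache/hadoop/fs/ceph/CephFileSystem (this assumes the JDK's jar tool is on your PATH):

$ jar tf /usr/share/java/hadoop-cephfs.jar | grep -i ceph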

I am trying your steps now~ Thank you for listening to me. I have said this to many people, but no one answered me except you. Thank you very much!

[root@node001 java]# ls -lih
total 600K
1059664 -rw-r--r-- 1 root root 89K Jun 10 2014 easymock2-2.4.jar
1059665 lrwxrwxrwx 1 root root 17 Jan 9 23:52 easymock2-2.5.2.jar -> easymock2-2.4.jar
1059687 -rw-r--r-- 1 root root 18K Jan 10 00:42 hadoop-cephfs.jar
1197881 drwxr-xr-x 2 root root 4.0K Jan 9 23:52 hamcrest
1059681 lrwxrwxrwx 1 root root 25 Jan 9 23:52 junit4.jar -> /usr/share/java/junit.jar
1059680 -rw-r--r-- 1 root root 284K Jun 10 2014 junit.jar
1059685 -rw-r--r-- 1 root root 12K Jan 10 00:42 libcephfs.jar
1059684 -rw-r--r-- 1 root root 12K Apr 11 2019 libcephfs-test.jar
1059669 -rw-r--r-- 1 root root 176K Nov 5 2016 qdox.jar

As for libcephfs_jni.so and the other Ceph libraries, I ran this:

yum install cephfs-java libcephfs1-devel python-cephfs libcephfs_jni1-devel
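
To confirm where yum actually put the JNI library, you can query rpm with the package name from the command above, or just look in /usr/lib64 directly:

$ rpm -ql libcephfs_jni1-devel | grep '\.so'
$ ls -l /usr/lib64/libcephfs_jni.so*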

then symlinked libcephfs_jni.so into /usr/local/hadoop-2.7.7/lib/native:

[root@node001 native]# pwd && ls -lih
/usr/local/hadoop-2.7.7/lib/native
total 5.0M
1059686 lrwxrwxrwx 1 root root 27 Jan 9 23:54 libcephfs_jni.so -> /usr/lib64/libcephfs_jni.so
1059143 -rw-r--r-- 1 1000 ftp 1.5M Jul 19 2018 libhadoop.a
1059137 -rw-r--r-- 1 1000 ftp 1.6M Jul 19 2018 libhadooppipes.a
1059140 lrwxrwxrwx 1 1000 ftp 18 Jul 19 2018 libhadoop.so -> libhadoop.so.1.0.0
1059141 -rwxr-xr-x 1 1000 ftp 845K Jul 19 2018 libhadoop.so.1.0.0
1059142 -rw-r--r-- 1 1000 ftp 465K Jul 19 2018 libhadooputils.a
1059139 -rw-r--r-- 1 1000 ftp 436K Jul 19 2018 libhdfs.a
1059136 lrwxrwxrwx 1 1000 ftp 16 Jul 19 2018 libhdfs.so -> libhdfs.so.0.0.0
1059138 -rwxr-xr-x 1 1000 ftp 275K Jul 19 2018 libhdfs.so.0.0.0
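
Since libcephfs_jni.so here is only a symlink into /usr/lib64, it is worth confirming that the link target exists and that its own dependencies resolve; a broken JNI load would otherwise only surface when a task actually runs:

$ ldd /usr/lib64/libcephfs_jni.so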

core-site.xml -->

<property>
  <name>hadoop.tmp.dir</name>
  <value>/usr/local/hadoop-2.7.7/tmp</value>
</property>
<property>
  <name>fs.defaultFS</name>
  <value>ceph://ceph-node001:6789</value>
</property>

<property>
  <name>fs.default.name</name>
  <value>ceph://ceph-node001:6789</value>
</property>

<property>
  <name>dfs.data.dir</name>
  <value>/usr/local/hadoop-2.7.7/data</value>
</property>

<property>
  <name>ceph.conf.file</name>
  <value>/etc/ceph/ceph.conf</value>
</property>

<property>
  <name>ceph.auth.id</name>
  <value>root</value>
</property>

<property>
  <name>ceph.auth.keyring</name>
  <value>/etc/ceph/ceph.client.admin.keyring</value>
</property>

<property>
  <name>ceph.data.pools</name>
  <value>hadoop1</value>
</property>

<property>
  <name>fs.ceph.impl</name>
  <value>org.apache.hadoop.fs.ceph.CephFileSystem</value>
</property>
<property>
  <name>fs.AbstractFileSystem.ceph.impl</name>
  <value>org.apache.hadoop.fs.ceph.CephFs</value>
</property>
<property>
  <name>ceph.object.size</name>
  <value>134217728</value>
</property>
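
As a client-side smoke test of the CephFS binding, listing the root of the filesystem with the same URI as fs.defaultFS above should succeed before any MapReduce job is attempted:

$ hadoop fs -ls ceph://ceph-node001:6789/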

hadoop-env.sh -->

export HADOOP_CLASSPATH=/usr/share/java/libcephfs.jar
export HADOOP_CLASSPATH=/usr/share/java/hadoop-cephfs.jar:$HADOOP_CLASSPATH
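
One thing that often breaks multi-node setups: hadoop-env.sh and the jars it points at are read locally on each node, so every slave needs the same jars and the same classpath, not just the master. A rough way to push them out (node002 and node003 are placeholders for your actual slave hostnames):

for h in node002 node003; do
  scp /usr/share/java/hadoop-cephfs.jar /usr/share/java/libcephfs.jar $h:/usr/share/java/
  scp /usr/local/hadoop-2.7.7/etc/hadoop/hadoop-env.sh $h:/usr/local/hadoop-2.7.7/etc/hadoop/
done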

I ran this teragen job, but only one slave runs the tasks.

hadoop jar /usr/local/hadoop-2.7.7/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.7.jar teragen -D mapreduce.job.maps=30 -D mapreduce.job.reduces=0 1073741824 /user/input/terasort/100G-input
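
Assuming the job runs on YARN (the default in Hadoop 2.7), a first diagnostic is to check how many NodeManagers have registered with the ResourceManager; if only one node is listed, only that node will be given map containers no matter what mapreduce.job.maps is set to:

$ yarn node -list -all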

After following all your steps, nothing is different T - T .
It's so hard; I cannot figure out what is wrong.
My teacher says:

The main problem is probably an error in your Ceph configuration file, which keeps your master node from communicating with the other nodes, so only one node is running tasks. If you really cannot find the problem, set up a single-node cluster and compare against it.

But I tried ssh, and every node can connect to every other node.
What's more, every Hadoop node can run hadoop dfs -*** commands and operate on the CephFS.
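
Note that ssh working only proves the nodes can reach each other on port 22. It may also be worth checking that every node can reach the Ceph monitor port 6789 directly (monitor address taken from mon_host below; node001 and node002 are placeholders):

for h in node001 node002; do
  ssh $h "nc -z -w3 172.16.67.255 6789 && echo $h reachable"
done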

This is my ceph.conf; I cannot figure out what is wrong T-T.

[global]
fsid = 3e2b7630-dd34-44a1-aa97-1cd6e253978b
mon_initial_members = ceph-node001, ceph-node002
mon_host = 172.16.67.255,172.16.68.0
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx

osd_pool_default_size = 1
osd_pool_default_min_size = 1
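
One quick sanity check on the Ceph side is to confirm that both monitors are in quorum and reachable at the addresses listed in mon_host; note also that 172.16.67.255 and 172.16.68.0 would be the broadcast and network addresses in a /24 subnet, so it is worth double-checking that they are valid host addresses in yours:

$ ceph -s
$ ceph mon dump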