Wednesday, May 7, 2014

MapR commands - 13 Restart Cluster

  • Shutdown Procedure 

1.  Before shutting down the cluster, you will need a list of NFS nodes.
Determine which nodes are running the NFS gateway:
# /opt/mapr/bin/maprcli node list -filter "[rp==/*]and[svc==nfs]" -columns id,h,hn,svc,rp
id                   service                                                         hostname  health  ip                                                    
4277269757083023248  tasktracker,webserver,cldb,fileserver,nfs,hoststats,jobtracker  mdw       2       172.28.4.250,172.28.8.250,172.28.12.250  
3528082726925061986  tasktracker,fileserver,nfs,hoststats                            sdw1      2       172.28.4.1,172.28.8.1,172.28.12.1                     
5521777324064226112  fileserver,tasktracker,nfs,hoststats                            sdw3      0       172.28.8.3,172.28.12.3,172.28.4.3                     
3482126520576246764  fileserver,tasktracker,nfs,hoststats                            sdw5      0       172.28.4.5,172.28.8.5,172.28.12.5                     
4667932985226440135  fileserver,tasktracker,nfs,hoststats                            sdw7      0       172.28.8.7,172.28.12.7,172.28.4.7 
Determine which nodes are running the CLDB:
# /opt/mapr/bin/maprcli node list -filter "[rp==/*]and[svc==cldb]" -columns id,h,hn,svc,rp
id                   service                                                         hostname  health  ip                                                    
4277269757083023248  tasktracker,webserver,cldb,fileserver,nfs,hoststats,jobtracker  mdw       2       172.28.4.250,172.28.8.250,172.28.12.250 
List all non-CLDB nodes:
# /opt/mapr/bin/maprcli node list -filter "[rp==/*]and[svc!=cldb]" -columns id,h,hn,svc,rp
id                   service                               hostname  health  ip                                 
3528082726925061986  tasktracker,fileserver,nfs,hoststats  sdw1      2       172.28.4.1,172.28.8.1,172.28.12.1  
5521777324064226112  fileserver,tasktracker,nfs,hoststats  sdw3      0       172.28.8.3,172.28.12.3,172.28.4.3  
3482126520576246764  fileserver,tasktracker,nfs,hoststats  sdw5      0       172.28.4.5,172.28.8.5,172.28.12.5  
4667932985226440135  fileserver,tasktracker,nfs,hoststats  sdw7      0       172.28.8.7,172.28.12.7,172.28.4.7 
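The hostnames in this listing are needed again in step 2. As a convenience, here is a small sketch that collects the NFS hostnames into a shell variable; it assumes the column layout shown above, with the hostname in the third field:
NFS_NODES=$(/opt/mapr/bin/maprcli node list -filter "[rp==/*]and[svc==nfs]" -columns id,h,hn,svc,rp | tail -n +2 | awk '{print $3}')
echo $NFS_NODES
mdw sdw1 sdw3 sdw5 sdw7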
2.  Shut down all NFS instances:
/opt/mapr/bin/maprcli node services -nfs stop -nodes mdw sdw1 sdw3 sdw5 sdw7
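To confirm that every NFS gateway has stopped, you can rerun the filter from step 1; once the service is down it should return no rows:
/opt/mapr/bin/maprcli node list -filter "[rp==/*]and[svc==nfs]" -columns id,h,hn,svc,rp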
3.  SSH into each CLDB node and stop the warden:
/etc/init.d/mapr-warden stop
4.  SSH into each of the remaining nodes and stop the warden:
/etc/init.d/mapr-warden stop
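Steps 3 and 4 can also be scripted from the admin node. A minimal sketch, assuming passwordless SSH as root and the hostnames from the listing above:
for node in mdw; do ssh $node /etc/init.d/mapr-warden stop; done                  # CLDB node(s) first
for node in sdw1 sdw3 sdw5 sdw7; do ssh $node /etc/init.d/mapr-warden stop; done  # then the remaining nodes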
5.  Stop ZooKeeper on all nodes where it is installed:
/etc/init.d/mapr-zookeeper stop
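To double-check that ZooKeeper is down on a node (assuming the stock mapr-zookeeper init script, which also accepts a status action):
/etc/init.d/mapr-zookeeper status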
  • Startup Procedure

1.  Start ZooKeeper on all nodes where it is installed:
/etc/init.d/mapr-zookeeper start
Verify that a quorum has been established:
service mapr-zookeeper qstatus
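On a healthy ensemble, qstatus reports Mode: leader on exactly one node and Mode: follower on the others. The output looks roughly like this (config path and ZooKeeper version will vary):
JMX enabled by default
Using config: /opt/mapr/zookeeper/zookeeper-3.4.5/conf/zoo.cfg
Mode: follower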
2.  On the CLDB nodes, start the warden:
/etc/init.d/mapr-warden start
Verify CLDB master:
maprcli node cldbmaster
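The output names the node currently holding the CLDB master role. For this cluster it should be mdw, whose ServerID matches the id column from step 1 of the shutdown procedure; roughly:
cldbmaster
ServerID: 4277269757083023248 HostName: mdw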
3.  Start the warden on the remaining nodes:
service mapr-warden start
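Give the wardens a few minutes to bring services up, then rerun the listing from step 1 of the shutdown procedure; the NFS gateways should reappear, since the warden restarts the services it manages:
/opt/mapr/bin/maprcli node list -filter "[rp==/*]and[svc==nfs]" -columns id,h,hn,svc,rp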
4.  Give full control permission to the administrative user:
maprcli acl edit -type cluster -user <user>:fc
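For example, to grant full control to the mapr user (the usual administrative account; substitute your own) and then confirm the cluster ACL:
maprcli acl edit -type cluster -user mapr:fc
maprcli acl show -type cluster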
