Cloud Scheduler for next-generation data centers.
INSTALLATION GUIDE:
tar -xzvf cloud-scheduler-<version>.tar.gz
cd cloud-scheduler-<version>
sudo ./install.sh
CONFIGURATION PARAMETERS: To run the Overlord agent, the configuration parameters must be specified in the file /etc/overlord-agent/config.json.
The following parameters are required (a complete example config.json assembling them is sketched after the optional parameters below):
"Overlord-Ip": "localhost"
"Overlord-Port": "6000"
"Id": "1"
"Base-Priority": "1"
"Mesos-HDFS-Master": "192.168.0.128"
"SSH-Private-Key": "/Users/chemistry_sourabh/.ssh/id_rsa"
"Key-Name": "Sourabh-OSX"
"Username": "Enter here"
"Password": "Enter here"
"Mesos-Master-Ip": "129.10.3.91"
"Mesos-Master-Port": "5050"
"Framework-Priorities": [{"name":"PageRank","priority":2},{"name":"Hadoop","priority":1}]
"Node-Name-Prefix": "Mesos-Slave"
"Node-Name-Suffix": ".cloud",
Below are some optional parameters:
"Poll-Interval": "5000"
"Port": "4500"
"Min-Nodes": "2"
"No-Delete-Slaves": ["Mesos-Slave-1-1.cloud","Mesos-Slave-1-2.cloud"]
"Cluster-Security-Group": "Cluster-1"
"Cluster-Network-Id": "87286d17-9092-47ee-a284-4056065ae508"
"New-Node-Flavor": "dcc95f79-1f29-49c3-a44c-2f915c4cf44e"
"Image-Name": "0418168d-724a-4517-b96c-9d627c64b17d"
"Scale-Up-Cluster-Load": 0.85
"Scale-Down-Cluster-Load": 0.8
"Scale-Up-Cluster-Memory": 0.0001
"Scale-Down-Cluster-Memory": 0.3
"Scale-Up-Slave-Load": 0.85
"Scale-Down-Slave-Load": 0.3
"Scale-Up-Slave-Memory": 0.1
"Scale-Down-Slave-Memory": 0.7
"Manager-Plugin": "./Release/MesosElasticityPlugin.jar"
"Collector-Plugin": "./Release/MesosCollectorPlugin.jar"
"Cluster-Scaler-Plugin": "./Release/OpenStackClusterScalerPlugin.jar"
"DB-Executor-Plugin": "./Release/SQLiteDBExecutorPlugin.jar"
"Policy-Info-Plugin": "./Release/AgingPolicyInfoPlugin.jar"
CLUSTER SETUP: For our testing we use a specific cluster layout so that we can exercise various scenarios. Every cluster has one master; the remaining nodes act as slaves. The master's security group must allow the following ingress rules: TCP on ports 5050 and 8080 for the Mesos master, and TCP on port 22 for SSH. While setting up the cluster, make sure that the master and all of its slave nodes are on the same network.
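As a minimal sketch of these ingress rules, assuming the cluster runs on OpenStack (as the OpenStackClusterScalerPlugin above suggests) and that the openstack CLI is available, the security group named in Cluster-Security-Group could be prepared as follows (the group name Cluster-1 is taken from the optional parameters above):
# Create the security group used for the cluster (name from Cluster-Security-Group)
openstack security group create Cluster-1
# Ingress TCP on 5050 and 8080 for the Mesos master
openstack security group rule create --ingress --protocol tcp --dst-port 5050 Cluster-1
openstack security group rule create --ingress --protocol tcp --dst-port 8080 Cluster-1
# Ingress TCP on 22 for SSH
openstack security group rule create --ingress --protocol tcp --dst-port 22 Cluster-1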
The following command starts the Mesos master:
sudo mesos-master --ip=<master-ip> --work_dir=/var/lib/mesos
Here <master-ip> is the IP address of the VM that acts as the master.
Each slave machine should start the Mesos slave as follows:
mesos-slave --master=<master-ip>:5050
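As a concrete illustration, assuming the master VM owns the Mesos-Master-Ip used in the configuration above (129.10.3.91), the two commands would be:
sudo mesos-master --ip=129.10.3.91 --work_dir=/var/lib/mesos
mesos-slave --master=129.10.3.91:5050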
If Hadoop is being set up on this cluster, the master should be configured with the following steps:
- hadoop namenode -format
- hadoop-daemon.sh start namenode
On each slave machine:
- hadoop-daemon.sh start datanode
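Before loading data, it can be useful to confirm that the datanodes have registered with the namenode; assuming the standard Hadoop tools are on the path, a cluster report can be requested from the master with:
hadoop dfsadmin -report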
Files can then be put into HDFS using the command below:
hadoop dfs -put <local-path> <hdfs-path>
Any job run with Mesos will be distributed across the cluster using dynamic partitioning. More details on setting up an HDFS cluster and running Hadoop jobs on a Mesos cluster are given here: http://kovit-nisar-it.blogspot.com/
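As a quick illustration (the file and directory names here are hypothetical), copying a local input file into HDFS and listing it back might look like:
hadoop dfs -put input.txt /user/hadoop/input.txt
hadoop dfs -ls /user/hadoop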
RUNNING THE OVERLORD: sudo service overlord start
RUNNING THE CLUSTER ELASTICITY AGENT: sudo service overlord-agent start
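Assuming the init scripts installed by install.sh implement the usual status action, both daemons can be checked with:
sudo service overlord status
sudo service overlord-agent status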