scala - Spark cluster submission can't bind to slave address


After text "itemprop =" text ">
  15/03/16 04:08:50 ERROR netty.NettyTransport: failed to bind to spark.master/172.28.128.3:0, shutting down Netty transport
  15/03/16 04:08:50 WARN util.Utils: Service 'Driver' could not bind on port 0. Attempting port 1.

This is the error I am getting in my slave log when I submit my job with spark-submit. It does not make sense to me, because the slave is able to connect to the master, as the web UI shows. I thought I had set the correct ports in the configuration on all of my machines.

spark-env.sh

  ... | grep 'state UP' -A2 | tail -n1 | awk '{print $2}' | cut -f1 -d'/')
  export SPARK_MASTER_IP=spark.master
  export HADOOP_CONF_DIR=/usr/local/hadoop/etc/hadoop
  export SPARK_WORKER_PORT=9919
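The first assignment above is truncated in the original post; it apparently extracts the machine's own IP address from the network configuration. Purely as an illustration, here is a minimal sketch of what such a pipeline typically looks like, assuming ip addr as the source command and SPARK_LOCAL_IP as the variable being set (both are guesses, not taken from the post):

  # Hypothetical reconstruction of the truncated export:
  # pick the interface reported as 'state UP', take the line two below it
  # (the 'inet 172.28.128.3/24 ...' line), keep field 2 and strip the /prefix.
  export SPARK_LOCAL_IP=$(ip addr | grep 'state UP' -A2 | tail -n1 | awk '{print $2}' | cut -f1 -d'/')
  echo "$SPARK_LOCAL_IP"    # e.g. 172.28.128.3

Note that if more than one interface is up, tail -n1 keeps only the last match, so the exported address is not necessarily the one Spark is expected to bind to.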

spark-defaults.conf

  spark.master                        spark://spark.master:7077
  spark.eventLog.enabled              true
  spark.eventLog.dir                  hdfs://spark.master:8020/spark-log
  spark.yarn.submit.file.replication  3
  spark.app.name                      Quant
  spark.ui.port                       4040
  spark.driver.port                   9929
  spark.executor.port                 9939
  spark.driver.host                   spark.slave

These settings are on both my slave and master nodes. When I submit a job, I use the following bash command:

  /usr/local/spark/bin/spark-submit --class dev.quant.App --deploy-mode cluster hdfs:///spark/my-app.jar

Both spark-env.sh and spark-defaults.conf are chmod 775, so they should get executed.

My master logs the following:

  15/03/16 04:08:51 INFO master.Master: Removing driver: driver-20150316040848-0002
  15/03/16 04:08:54 INFO master.Master: akka.tcp://driverClient@spark.master:55303 got disassociated, removing it.
  15/03/16 04:08:54 WARN remote.ReliableDeliverySupervisor: Association with remote system [akka.tcp://driverClient@spark.master:55303] has failed, address is now gated for [5000] ms. Reason is: [Disassociated].
  15/03/16 04:08:54 INFO master.Master: akka.tcp://driverClient@spark.master:55303 got disassociated, removing it.
  15/03/16 04:08:54 INFO actor.LocalActorRef: Message [akka.remote.transport.ActorTransportAdapter$DisassociateUnderlying] from Actor[akka://sparkMaster/deadLetters] to Actor[akka://sparkMaster/system/transports/akkaprotocolmanager.tcp0/akkaProtocol-tcp%3A%2F%2FsparkMaster%40172.28.128.3%3A32995-5#678153583] was not delivered. [3] dead letters encountered. This logging can be turned off or adjusted with configuration settings 'akka.log-dead-letters' and 'akka.log-dead-letters-during-shutdown'.

From what I have heard, launching in cluster mode is not supported, which I do not understand, because this is supposed to be exactly the standalone Spark cluster scenario. So I also tried launching in client mode, which gave me the classic ClassNotFoundException for dev.quant.App. That I do not understand either, because my jar clearly has all the dependencies packed together in the assembly. I have been trying to fix this stupid thing for a very long time and it would be nice to finally crack it. Lastly, I have Scala 2.10.5 installed and my app is built against 2.10.5, in case it matters.
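For reference, the client-mode attempt described above amounts to the same submission with only the deploy mode switched; a sketch, assuming the jar stays on HDFS (in client mode a local path would also work):

  # Same job, but the driver runs on the submitting machine instead of on a worker,
  # so spark.driver.host / spark.driver.port now describe this machine.
  /usr/local/spark/bin/spark-submit --class dev.quant.App --deploy-mode client hdfs:///spark/my-app.jar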

spark.driver.host looks suspicious to me. I think it should be set to spark.master and not spark.slave, or that parameter should be removed completely.
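A minimal sketch of what that change to spark-defaults.conf could look like, keeping everything else as posted; which of the two options is right depends on where the driver actually ends up running:

  # Option 1: remove (or comment out) the explicit driver host and let Spark resolve it
  #spark.driver.host   spark.slave

  # Option 2: pin it to the node that really runs the driver
  spark.driver.host    spark.master

If the driver lands on a machine that does not own the address configured in spark.driver.host, binding fails no matter which port is tried, which would match the "could not bind on port 0" warning in the slave log above.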

