
Initial job has not accepted #1

Closed
JuanDavidGonzalez opened this issue Apr 25, 2019 · 3 comments

@JuanDavidGonzalez

I'm trying to create my own image using your Dockerfile. It builds without problems, and I create the master deployment and the worker deployment, and the pods are running, but when I execute the example with pyspark I get this error:

$ kubectl exec spark-master-2-7dd86dc9d7-tftnr -it pyspark

Python 2.7.9 (default, Jun 29 2016, 13:08:31)
[GCC 4.9.2] on linux2
Type "help", "copyright", "credits" or "license" for more information.
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /__ / .__/\_,_/_/ /_/\_\   version 2.4.0
      /_/

Using Python version 2.7.9 (default, Jun 29 2016 13:08:31)
SparkSession available as 'spark'.
>>> words = 'the quick brown fox jumps over the lazy dog the quick brown fox jumps over the lazy dog'
>>> seq = words.split()
>>> data = sc.parallelize(seq)
>>> counts = data.map(lambda word: (word, 1)).reduceByKey(lambda a, b: a + b).collect()
[Stage 0:>                                                          (0 + 0) / 2]2019-04-25 15:53:10 WARN  TaskSchedulerImpl:66 - Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources

Any idea what is the problem?
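
Note: this warning usually means the application is requesting more cores or memory than any registered worker is offering. A quick sanity check (assuming the standalone master is reachable at spark://spark-master:7077, as in the session above) is to start the shell with deliberately small resource requests:

pyspark --master spark://spark-master:7077 \
  --conf spark.cores.max=1 \
  --conf spark.executor.memory=512m

If the job runs with these limits, the original failure was a resource mismatch rather than a networking problem.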

@mjhea0
Contributor

mjhea0 commented Apr 28, 2019

@mjhea0 mjhea0 closed this as completed Apr 28, 2019
@JuanDavidGonzalez
Author

Thanks for your answer, but I don't think that's the problem, because when I download your image from Docker Hub I don't have this issue. However, I really need to build the image from the Dockerfile. Any other suggestions?

@leonardas103

leonardas103 commented Apr 6, 2020

Running Spark on Kubernetes, I am getting the same error:

WARN TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources

I have narrowed it down to the worker not knowing the IP of spark-master; however, the worker pod logs are empty, so I can't investigate further.

Also, I believe that the following two are not the same

"./sbin/start-slave.sh spark://10.X.X.X:7077"
"./sbin/start-slave.sh spark://spark-master:7077"

Edit: This was caused by the pod network [See Issue]. Fixed with:

sudo update-alternatives --set iptables /usr/sbin/iptables-legacy
sudo update-alternatives --set ip6tables /usr/sbin/ip6tables-legacy
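
For context: the legacy switch usually matters on distributions that default to the nftables backend of iptables, which kube-proxy does not always handle correctly. You can check which backend is currently selected with:

update-alternatives --display iptables

After switching to the legacy backend, the nodes typically need a reboot (or at least a kube-proxy restart) for pod networking to recover.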
