2017-09-12

I am new to Pentaho. I am using the MapR distribution, and when I submit a Spark job I get the error below. Please help me with this. I have done the necessary configuration for the Spark/Pentaho integration. Please find attached screenshots of the Pentaho Spark Submit job. Spark Submit throwing an error in Pentaho Spoon:

2017/09/12 12:41:44 - Spoon - Starting job... 
2017/09/12 12:41:44 - spark_submit - Start of job execution 
2017/09/12 12:41:44 - spark_submit - Starting entry [Spark Submit] 
2017/09/12 12:41:44 - Spark Submit - Submitting Spark Script 
2017/09/12 12:41:45 - Spark Submit - Warning: Master yarn-cluster is deprecated since 2.0. Please use master "yarn" with specified deploy mode instead. 
2017/09/12 12:41:45 - Spark Submit - SLF4J: Class path contains multiple SLF4J bindings. 
2017/09/12 12:41:45 - Spark Submit - SLF4J: Found binding in [jar:file:/opt/mapr/lib/slf4j-log4j12-1.7.12.jar!/org/slf4j/impl/StaticLoggerBinder.class] 
2017/09/12 12:41:45 - Spark Submit - SLF4J: Found binding in [jar:file:/opt/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class] 
2017/09/12 12:41:45 - Spark Submit - SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation. 
2017/09/12 12:41:45 - Spark Submit - SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory] 
2017/09/12 12:41:45 - Spark Submit - 17/09/12 12:41:45 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable 
2017/09/12 12:41:46 - Spark Submit - 17/09/12 12:41:46 ERROR MapRZKRMFinderUtils: Zookeeper address not configured in Yarn configuration. Please check yarn-site.xml. 
2017/09/12 12:41:46 - Spark Submit - 17/09/12 12:41:46 ERROR MapRZKRMFinderUtils: Unable to determine ResourceManager service address from Zookeeper. 
2017/09/12 12:41:46 - Spark Submit - 17/09/12 12:41:46 ERROR MapRZKBasedRMFailoverProxyProvider: Unable to create proxy to the ResourceManager null 
2017/09/12 12:41:46 - Spark Submit - Exception in thread "main" java.lang.RuntimeException: Unable to create proxy to the ResourceManager null 
2017/09/12 12:41:46 - Spark Submit - at org.apache.hadoop.yarn.client.MapRZKBasedRMFailoverProxyProvider.getProxy(MapRZKBasedRMFailoverProxyProvider.java:135) 
2017/09/12 12:41:46 - Spark Submit - at org.apache.hadoop.io.retry.RetryInvocationHandler$ProxyDescriptor.<init>(RetryInvocationHandler.java:195) 
2017/09/12 12:41:46 - Spark Submit - at org.apache.hadoop.io.retry.RetryInvocationHandler.<init>(RetryInvocationHandler.java:304) 
2017/09/12 12:41:46 - Spark Submit - at org.apache.hadoop.io.retry.RetryInvocationHandler.<init>(RetryInvocationHandler.java:298) 
2017/09/12 12:41:46 - Spark Submit - at org.apache.hadoop.io.retry.RetryProxy.create(RetryProxy.java:58) 
2017/09/12 12:41:46 - Spark Submit - at org.apache.hadoop.yarn.client.RMProxy.createRMProxy(RMProxy.java:95) 
2017/09/12 12:41:46 - Spark Submit - at org.apache.hadoop.yarn.client.ClientRMProxy.createRMProxy(ClientRMProxy.java:73) 
2017/09/12 12:41:46 - Spark Submit - at org.apache.hadoop.yarn.client.api.impl.YarnClientImpl.serviceStart(YarnClientImpl.java:193) 
2017/09/12 12:41:46 - Spark Submit - at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193) 
2017/09/12 12:41:46 - Spark Submit - at org.apache.spark.deploy.yarn.Client.submitApplication(Client.scala:152) 
2017/09/12 12:41:46 - Spark Submit - at org.apache.spark.deploy.yarn.Client.run(Client.scala:1154) 
2017/09/12 12:41:46 - Spark Submit - at org.apache.spark.deploy.yarn.Client$.main(Client.scala:1213) 
2017/09/12 12:41:46 - Spark Submit - at org.apache.spark.deploy.yarn.Client.main(Client.scala) 
2017/09/12 12:41:46 - Spark Submit - at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
2017/09/12 12:41:46 - Spark Submit - at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
2017/09/12 12:41:46 - Spark Submit - at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) 
2017/09/12 12:41:46 - Spark Submit - at java.lang.reflect.Method.invoke(Method.java:498) 
2017/09/12 12:41:46 - Spark Submit - at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:733) 
2017/09/12 12:41:46 - Spark Submit - at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:177) 
2017/09/12 12:41:46 - Spark Submit - at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:202) 
2017/09/12 12:41:46 - Spark Submit - at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:116) 
2017/09/12 12:41:46 - Spark Submit - at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala) 
2017/09/12 12:41:46 - Spark Submit - Caused by: java.lang.RuntimeException: Zookeeper address not found from MapR Filesystem and is also not configured in Yarn configuration. 
2017/09/12 12:41:46 - Spark Submit - at org.apache.hadoop.yarn.client.MapRZKRMFinderUtils.mapRZkBasedRMFinder(MapRZKRMFinderUtils.java:99) 
2017/09/12 12:41:46 - Spark Submit - at org.apache.hadoop.yarn.client.MapRZKBasedRMFailoverProxyProvider.updateCurrentRMAddress(MapRZKBasedRMFailoverProxyProvider.java:64) 
2017/09/12 12:41:46 - Spark Submit - at org.apache.hadoop.yarn.client.MapRZKBasedRMFailoverProxyProvider.getProxy(MapRZKBasedRMFailoverProxyProvider.java:131) 
2017/09/12 12:41:46 - Spark Submit - ... 21 more 
2017/09/12 12:41:46 - spark_submit - Finished job entry [Spark Submit] (result=[false]) 
2017/09/12 12:41:46 - spark_submit - Starting entry [Spark Submit] 
2017/09/12 12:41:46 - Spark Submit - Submitting Spark Script 

[Screenshots: Pentaho Spark Submit job details, Pentaho Spark argument details]
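For reference, the fatal ERROR lines above ("Zookeeper address not configured in Yarn configuration. Please check yarn-site.xml.") indicate that the MapR ResourceManager failover provider could not discover the ZooKeeper quorum. A minimal sketch of what `yarn-site.xml` might need, assuming `yarn.resourcemanager.zk-address` is the key the provider consults; the hostnames and the MapR default ZooKeeper port 5181 are placeholders, not taken from the original post:

```xml
<!-- Sketch only: placeholder hostnames, verify against your cluster. -->
<property>
  <name>yarn.resourcemanager.zk-address</name>
  <!-- Comma-separated ZooKeeper quorum, host:port -->
  <value>zk1.example.com:5181,zk2.example.com:5181,zk3.example.com:5181</value>
</property>
```

On a MapR cluster this file typically lives under /opt/mapr/hadoop/hadoop-&lt;version&gt;/etc/hadoop/; the exact path depends on the installation.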



In my opinion, PDI is telling you the problem on line 6: Class path contains multiple SLF4J bindings, with a reference to a detailed explanation four lines further down: http://www.slf4j.org/codes.html#multiple_bindings. The ambiguous class is StaticLoggerBinder, which can be loaded either from opt/mapr/lib/slf4j-log4j12-1.7.12.jar or from opt/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar.

Remove one of them and restart.
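The removal step above can be sketched as follows. This runs in a throwaway scratch directory that imitates the two jar locations from the log, since touching a live cluster in an example would be unsafe; on a real node you would run the `find` and `mv` against the actual /opt paths. Renaming with a .bak suffix instead of deleting keeps the rollback trivial.

```shell
# Recreate the two conflicting locations from the log in a scratch
# directory (the scratch prefix is illustrative, paths mirror the log).
root=$(mktemp -d)
mkdir -p "$root/opt/mapr/lib" "$root/opt/hadoop/share/hadoop/common/lib"
touch "$root/opt/mapr/lib/slf4j-log4j12-1.7.12.jar"
touch "$root/opt/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar"

# List every slf4j-log4j12 binding under the class path roots.
find "$root/opt" -name 'slf4j-log4j12-*.jar'

# Keep the MapR copy and park the Hadoop one out of the way
# (renaming is safer than deleting; restore it if anything breaks).
mv "$root/opt/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar" \
   "$root/opt/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar.bak"

# Only the MapR jar should remain visible to the class path now.
find "$root/opt" -name 'slf4j-log4j12-*.jar'
rm -rf "$root"
```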


Hi Alain, thanks for your reply. After removing one of the jars we ran into the issue below. 2017/09/12 14:48:38 - Spoon - Starting job... 2017/09/12 14:48:38 - spark_submit - Start of job execution 2017/09/12 14:48:38 - spark_submit - Starting entry [Spark Submit] 2017/09/12 14:48:38 - Spark Submit - Submitting Spark Script 2017/09/12 14:48:39 - Spark Submit - Warning: Master yarn-cluster is deprecated since 2.0. Please use master "yarn" with specified deploy mode instead. – Khumar
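The deprecation warning in the comment above maps an old flag to a new pair of flags. A sketch of the two invocations, where the class name and jar path are placeholders and not taken from the original post:

```shell
# Deprecated since Spark 2.0:
spark-submit --master yarn-cluster \
  --class com.example.MyJob /path/to/my-job.jar

# Preferred equivalent: master and deploy mode are now separate flags.
spark-submit --master yarn --deploy-mode cluster \
  --class com.example.MyJob /path/to/my-job.jar
```

In the Pentaho Spark Submit job entry, this corresponds to choosing "yarn" as the master and "cluster" as the deploy mode rather than the combined "yarn-cluster" value. It is only a warning, not the cause of the ResourceManager failure.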