I was working in Hadoop and, as soon as I built my image-processing program into a runnable jar, this error appeared. It is related to the OpenCV native library path: no libopencv_core.so in java.library.path under Hadoop.
While using Eclipse I can load the library with:
System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
But when I run the runnable jar with hadoop, it throws the error below. Can anyone help fix this?
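One common difference between Eclipse and a Hadoop task JVM is that `System.loadLibrary` searches `java.library.path`, which the task containers do not inherit from your desktop session. A minimal sketch of a workaround is to load the library by absolute path with `System.load` in the mapper's setup. The path below is an assumption; it must point to wherever libopencv_core.so actually lives on every cluster node.

```java
import java.io.File;

public class NativeLoader {
    // Assumed location of the OpenCV native library on the cluster nodes;
    // adjust this to the real path on your machines.
    static final String NATIVE_LIB_PATH = "/usr/local/lib/libopencv_core.so";

    // Loads the shared library by absolute path. Unlike System.loadLibrary,
    // System.load does not consult java.library.path at all.
    static boolean loadOpenCv(String path) {
        File lib = new File(path);
        if (!lib.exists()) {
            // Library is missing on this node; the task would fail anyway,
            // so report it instead of throwing UnsatisfiedLinkError later.
            return false;
        }
        System.load(lib.getAbsolutePath());
        return true;
    }

    public static void main(String[] args) {
        // In a real job this call would go in Mapper#setup().
        System.out.println("loaded=" + loadOpenCv(NATIVE_LIB_PATH));
    }
}
```

Calling `loadOpenCv` from `Mapper#setup()` (rather than a static initializer in the driver class) ensures the load happens inside the task JVM on the worker node, which is where the error above is actually raised.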
[email protected]:/home/mnh/Desktop$ hadoop jar opencv19.jar /usr/local/hadoop/input/cars.mp4 /usr/local/hadoop/cars89
17/06/07 16:15:37 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
17/06/07 16:15:39 INFO client.RMProxy: Connecting to ResourceManager at master/192.168.137.52:8050
17/06/07 16:15:40 WARN mapreduce.JobSubmitter: Hadoop command-line option parsing not performed. Implement the Tool interface and execute your application with ToolRunner to remedy this.
17/06/07 16:16:08 INFO input.FileInputFormat: Total input paths to process : 1
17/06/07 16:16:08 INFO mapreduce.JobSubmitter: number of splits:1
17/06/07 16:16:09 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1496831815466_0009
17/06/07 16:16:09 INFO impl.YarnClientImpl: Submitted application application_1496831815466_0009
17/06/07 16:16:09 INFO mapreduce.Job: The url to track the job: http://master:8088/proxy/application_1496831815466_0009/
17/06/07 16:16:09 INFO mapreduce.Job: Running job: job_1496831815466_0009
17/06/07 16:16:20 INFO mapreduce.Job: Job job_1496831815466_0009 running in uber mode : false
17/06/07 16:16:20 INFO mapreduce.Job: map 0% reduce 0%
17/06/07 16:16:29 INFO mapreduce.Job: Task Id : attempt_1496831815466_0009_m_000000_0, Status : FAILED
Error: no libopencv_core.so in java.library.path
17/06/07 16:16:37 INFO mapreduce.Job: Task Id : attempt_1496831815466_0009_m_000000_1, Status : FAILED
Error: no libopencv_core.so in java.library.path
17/06/07 16:16:45 INFO mapreduce.Job: Task Id : attempt_1496831815466_0009_m_000000_2, Status : FAILED
Error: no libopencv_core.so in java.library.path
17/06/07 16:16:54 INFO mapreduce.Job: map 100% reduce 100%
17/06/07 16:16:55 INFO mapreduce.Job: Job job_1496831815466_0009 failed with state FAILED due to: Task failed task_1496831815466_0009_m_000000
Job failed as tasks failed. failedMaps:1 failedReduces:0
17/06/07 16:16:56 INFO mapreduce.Job: Counters: 12
Job Counters
Failed map tasks=4
Launched map tasks=4
Other local map tasks=3
Data-local map tasks=1
Total time spent by all maps in occupied slots (ms)=26582
Total time spent by all reduces in occupied slots (ms)=0
Total time spent by all map tasks (ms)=26582
Total vcore-seconds taken by all map tasks=26582
Total megabyte-seconds taken by all map tasks=27219968
Map-Reduce Framework
CPU time spent (ms)=0
Physical memory (bytes) snapshot=0
Virtual memory (bytes) snapshot=0
You will have to undo the previous suggestion I made. Remove the file from the Hadoop natives path and put it somewhere else. – Serhiy
By the way, the previous suggestion somehow solved the problem, but one thing happened: the mapreduce job slowed down, and my job now gives a timeout error at map 100% and reduce 0%. @Serhiy –
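Another approach, sketched below under assumptions, is to pass `java.library.path` to the task JVMs on the command line instead of copying the library into Hadoop's native directory. The directory `/usr/local/opencv/lib` is a placeholder for wherever libopencv_core.so resides on every node. Note the warning in the log above: these generic `-D` options are only parsed if the job implements the Tool interface and is launched via ToolRunner.

```shell
# Hypothetical launch command: point the map and reduce JVMs at the
# directory containing libopencv_core.so. The path is an assumption and
# must exist on every cluster node.
hadoop jar opencv19.jar \
  -D mapreduce.map.java.opts="-Djava.library.path=/usr/local/opencv/lib" \
  -D mapreduce.reduce.java.opts="-Djava.library.path=/usr/local/opencv/lib" \
  /usr/local/hadoop/input/cars.mp4 /usr/local/hadoop/cars89
```

This keeps the OpenCV library out of Hadoop's own native-library directory, which may also help with the slowdown mentioned above, since the containers no longer scan a modified Hadoop natives path.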