Hadoop MapReduce is a software framework for processing vast amounts of data in parallel on large clusters
Hadoop MapReduce is a software framework for easily writing applications which process vast amounts of data (multi-terabyte data-sets) in parallel on large clusters (thousands of nodes) of commodity hardware in a reliable, fault-tolerant manner.
A MapReduce job usually splits the input data-set into independent chunks which are processed by the map tasks in a completely parallel manner. The framework sorts the outputs of the maps, which are then input to the reduce tasks. Typically both the input and the output of the job are stored in a file-system. The framework takes care of scheduling tasks, monitoring them, and re-executing failed tasks.
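To make the map and reduce phases concrete, here is a minimal word-count sketch against Hadoop's org.apache.hadoop.mapreduce Java API; the class names TokenizerMapper and IntSumReducer are illustrative, not part of the framework:

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;

// Map phase: emits a (word, 1) pair for every token in the input split.
class TokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {
  private static final IntWritable ONE = new IntWritable(1);
  private final Text word = new Text();

  @Override
  public void map(Object key, Text value, Context context)
      throws IOException, InterruptedException {
    StringTokenizer itr = new StringTokenizer(value.toString());
    while (itr.hasMoreTokens()) {
      word.set(itr.nextToken());
      context.write(word, ONE);   // one record per word occurrence
    }
  }
}

// Reduce phase: receives (word, [1, 1, ...]) after the framework's sort
// and emits (word, total count).
class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
  private final IntWritable result = new IntWritable();

  @Override
  public void reduce(Text key, Iterable<IntWritable> values, Context context)
      throws IOException, InterruptedException {
    int sum = 0;
    for (IntWritable val : values) {
      sum += val.get();
    }
    result.set(sum);
    context.write(key, result);
  }
}

The framework splits the input, runs one map task per split, sorts the emitted pairs by key, and feeds each word together with its list of counts to a single reduce call.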
Typically the compute nodes and the storage nodes are the same; that is, the MapReduce framework and the Hadoop Distributed File System (HDFS) run on the same set of nodes. This configuration allows the framework to effectively schedule tasks on the nodes where data is already present, resulting in very high aggregate bandwidth across the cluster.
The MapReduce framework consists of a single master JobTracker and one slave TaskTracker per cluster node. The master is responsible for scheduling the jobs' component tasks on the slaves, monitoring them, and re-executing failed tasks; the slaves execute the tasks as directed by the master.

Minimally, applications specify the input/output locations and supply map and reduce functions via implementations of the appropriate interfaces and/or abstract classes. These, and other job parameters, comprise the job configuration. The Hadoop job client then submits the job (jar/executable etc.) and configuration to the JobTracker, which assumes responsibility for distributing the software/configuration to the slaves, scheduling and monitoring tasks, and providing status and diagnostic information to the job client (see the driver sketch below).

Although the Hadoop framework is implemented in Java, MapReduce applications need not be written in Java. Hadoop Streaming is a utility which allows users to create and run jobs with any executables (e.g. shell utilities) as the mapper and/or the reducer. Hadoop Pipes is a SWIG-compatible C++ API for implementing MapReduce applications (non-JNI based).
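As a rough sketch of what such a job configuration and submission look like in Java, the driver below wires up the illustrative TokenizerMapper and IntSumReducer classes from the earlier sketch, with input and output paths supplied on the command line:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Job job = new Job(conf, "word count");   // Hadoop 1.x-era constructor
    job.setJarByClass(WordCount.class);

    // The job configuration: map/reduce implementations plus output key/value types.
    job.setMapperClass(TokenizerMapper.class);
    job.setCombinerClass(IntSumReducer.class);  // optional local pre-aggregation
    job.setReducerClass(IntSumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);

    // Input/output locations, typically HDFS paths.
    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));

    // Submit the jar and configuration to the cluster and wait for completion.
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}

Running this class via hadoop jar hands the jar and configuration to the job client, which submits them to the JobTracker as described above.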
References
- http://hadoop.apache.org/common/docs/current/mapred_tutorial.html#Inputs+and+Outputs

Tags: Blob, Chunk, Cloud, Cloud Computing, Computers & Internet, File-System, HBase, HDFS, Hadoop Distributed File System, Hadoop Mascot, Hadoop Pipe, Hadoop Streaming, JNI, Java, JobTracker, MapReduce, NoSQL, Parallel Processing, SWIG, TaskTracker