Packages that use org.apache.hadoop.mapreduce

| Package | Description |
|---|---|
| org.apache.hadoop.examples | Hadoop example code. |
| org.apache.hadoop.examples.dancing | A distributed implementation of Knuth's dancing links algorithm that can run under Hadoop. |
| org.apache.hadoop.examples.pi | A map/reduce application, distbbp, which computes exact binary digits of the mathematical constant π. |
| org.apache.hadoop.examples.terasort | Three map/reduce applications for Hadoop to compete in the annual terabyte sort competition. |
| org.apache.hadoop.mapred | A software framework for easily writing applications which process vast amounts of data (multi-terabyte datasets) in parallel on large clusters (thousands of nodes) of commodity hardware in a reliable, fault-tolerant manner. |
| org.apache.hadoop.mapred.gridmix | |
| org.apache.hadoop.mapred.lib | Library of generally useful mappers, reducers, and partitioners. |
| org.apache.hadoop.mapred.lib.db | The org.apache.hadoop.mapred.lib.db package. |
| org.apache.hadoop.mapreduce | |
| org.apache.hadoop.mapreduce.jobhistory | |
| org.apache.hadoop.mapreduce.lib.aggregate | Classes for performing various counting and aggregations. |
| org.apache.hadoop.mapreduce.lib.chain | |
| org.apache.hadoop.mapreduce.lib.db | The org.apache.hadoop.mapreduce.lib.db package. |
| org.apache.hadoop.mapreduce.lib.fieldsel | |
| org.apache.hadoop.mapreduce.lib.input | |
| org.apache.hadoop.mapreduce.lib.jobcontrol | Utilities for managing dependent jobs. |
| org.apache.hadoop.mapreduce.lib.join | Given a set of sorted datasets keyed with the same class and yielding equal partitions, it is possible to effect a join of those datasets prior to the map. |
| org.apache.hadoop.mapreduce.lib.map | |
| org.apache.hadoop.mapreduce.lib.output | |
| org.apache.hadoop.mapreduce.lib.partition | |
| org.apache.hadoop.mapreduce.lib.reduce | |
| org.apache.hadoop.mapreduce.split | |
| org.apache.hadoop.mapreduce.task | |
| org.apache.hadoop.mapreduce.tools | |
Classes in org.apache.hadoop.mapreduce used by org.apache.hadoop.examples

| Class | Description |
|---|---|
| InputFormat | InputFormat describes the input-specification for a Map-Reduce job. |
| InputSplit | InputSplit represents the data to be processed by an individual Mapper. |
| Job | The job submitter's view of the Job. |
| JobContext | A read-only view of the job that is provided to the tasks while they are running. |
| Mapper | Maps input key/value pairs to a set of intermediate key/value pairs. |
| Mapper.Context | The Context passed on to the Mapper implementations. |
| Partitioner | Partitions the key space. |
| RecordReader | The record reader breaks the data into key/value pairs for input to the Mapper. |
| Reducer | Reduces a set of intermediate values which share a key to a smaller set of values. |
| Reducer.Context | The Context passed on to the Reducer implementations. |
| TaskAttemptContext | The context for task attempts. |
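The classes above (Mapper, Reducer, Partitioner, RecordReader) together define the MapReduce programming model used by the example jobs. As an illustrative sketch only, with no cluster, no Hadoop types, and hypothetical class and method names, the map and reduce contracts for a word count can be mimicked in plain Java:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class WordCountSketch {
    // "Map" phase: emit a (word, 1) pair per token, mirroring Mapper's
    // contract of turning input records into intermediate key/value pairs.
    static List<Map.Entry<String, Integer>> map(String line) {
        List<Map.Entry<String, Integer>> out = new ArrayList<>();
        for (String token : line.split("\\s+")) {
            if (!token.isEmpty()) {
                out.add(Map.entry(token, 1));
            }
        }
        return out;
    }

    // "Reduce" phase: combine all values sharing a key, mirroring Reducer's
    // contract of receiving (key, values) groups after the shuffle.
    static Map<String, Integer> reduce(List<Map.Entry<String, Integer>> pairs) {
        Map<String, Integer> counts = new HashMap<>();
        for (Map.Entry<String, Integer> p : pairs) {
            counts.merge(p.getKey(), p.getValue(), Integer::sum);
        }
        return counts;
    }

    public static void main(String[] args) {
        Map<String, Integer> counts = reduce(map("the quick the lazy the"));
        System.out.println(counts.get("the")); // prints 3
    }
}
```

In a real job these two methods would instead be `Mapper.map(...)` and `Reducer.reduce(...)` overrides, with the framework performing the grouping between them.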
Classes in org.apache.hadoop.mapreduce used by org.apache.hadoop.examples.dancing

| Class | Description |
|---|---|
| Mapper | Maps input key/value pairs to a set of intermediate key/value pairs. |
| Mapper.Context | The Context passed on to the Mapper implementations. |
Classes in org.apache.hadoop.mapreduce used by org.apache.hadoop.examples.pi

| Class | Description |
|---|---|
| InputFormat | InputFormat describes the input-specification for a Map-Reduce job. |
| InputSplit | InputSplit represents the data to be processed by an individual Mapper. |
| Job | The job submitter's view of the Job. |
| JobContext | A read-only view of the job that is provided to the tasks while they are running. |
| Mapper | Maps input key/value pairs to a set of intermediate key/value pairs. |
| Mapper.Context | The Context passed on to the Mapper implementations. |
| Partitioner | Partitions the key space. |
| RecordReader | The record reader breaks the data into key/value pairs for input to the Mapper. |
| Reducer | Reduces a set of intermediate values which share a key to a smaller set of values. |
| Reducer.Context | The Context passed on to the Reducer implementations. |
| TaskAttemptContext | The context for task attempts. |
Classes in org.apache.hadoop.mapreduce used by org.apache.hadoop.examples.terasort

| Class | Description |
|---|---|
| InputFormat | InputFormat describes the input-specification for a Map-Reduce job. |
| InputSplit | InputSplit represents the data to be processed by an individual Mapper. |
| Job | The job submitter's view of the Job. |
| JobContext | A read-only view of the job that is provided to the tasks while they are running. |
| Mapper | Maps input key/value pairs to a set of intermediate key/value pairs. |
| Mapper.Context | The Context passed on to the Mapper implementations. |
| OutputCommitter | OutputCommitter describes the commit of task output for a Map-Reduce job. |
| OutputFormat | OutputFormat describes the output-specification for a Map-Reduce job. |
| Partitioner | Partitions the key space. |
| RecordReader | The record reader breaks the data into key/value pairs for input to the Mapper. |
| RecordWriter | RecordWriter writes the output <key, value> pairs to an output file. |
| TaskAttemptContext | The context for task attempts. |
Classes in org.apache.hadoop.mapreduce used by org.apache.hadoop.mapred

| Class | Description |
|---|---|
| Cluster | Provides a way to access information about the map/reduce cluster. |
| Counter | A named counter that tracks the progress of a map/reduce job. |
| Counters | |
| ID | A general identifier, which internally stores the id as an integer. |
| InputSplit | InputSplit represents the data to be processed by an individual Mapper. |
| JobContext | A read-only view of the job that is provided to the tasks while they are running. |
| JobID | JobID represents the immutable and unique identifier for the job. |
| JobStatus | Describes the current status of a job. |
| JobStatus.State | Current state of the job. |
| OutputCommitter | OutputCommitter describes the commit of task output for a Map-Reduce job. |
| QueueInfo | Class that contains the information regarding the Job Queues which are maintained by the Hadoop Map/Reduce framework. |
| TaskAttemptContext | The context for task attempts. |
| TaskAttemptID | TaskAttemptID represents the immutable and unique identifier for a task attempt. |
| TaskCompletionEvent | This is used to track task completion events on the job tracker. |
| TaskID | TaskID represents the immutable and unique identifier for a Map or Reduce Task. |
| TaskType | Enum for map, reduce, job-setup, job-cleanup, task-cleanup task types. |
Classes in org.apache.hadoop.mapreduce used by org.apache.hadoop.mapred.gridmix

| Class | Description |
|---|---|
| Job | The job submitter's view of the Job. |
Classes in org.apache.hadoop.mapreduce used by org.apache.hadoop.mapred.lib

| Class | Description |
|---|---|
| InputSplit | InputSplit represents the data to be processed by an individual Mapper. |
| RecordReader | The record reader breaks the data into key/value pairs for input to the Mapper. |
| TaskAttemptContext | The context for task attempts. |
Classes in org.apache.hadoop.mapreduce used by org.apache.hadoop.mapred.lib.db

| Class | Description |
|---|---|
| RecordReader | The record reader breaks the data into key/value pairs for input to the Mapper. |
Classes in org.apache.hadoop.mapreduce used by org.apache.hadoop.mapreduce

| Class | Description |
|---|---|
| Cluster | Provides a way to access information about the map/reduce cluster. |
| ClusterMetrics | Status information on the current state of the Map-Reduce cluster. |
| Counter | A named counter that tracks the progress of a map/reduce job. |
| CounterGroup | A group of Counters that logically belong together. |
| Counters | |
| ID | A general identifier, which internally stores the id as an integer. |
| InputFormat | InputFormat describes the input-specification for a Map-Reduce job. |
| InputSplit | InputSplit represents the data to be processed by an individual Mapper. |
| Job | The job submitter's view of the Job. |
| Job.JobState | |
| Job.TaskStatusFilter | |
| JobContext | A read-only view of the job that is provided to the tasks while they are running. |
| JobCounter | |
| JobID | JobID represents the immutable and unique identifier for the job. |
| JobPriority | Used to describe the priority of the running job. |
| JobStatus | Describes the current status of a job. |
| JobStatus.State | Current state of the job. |
| MapContext | The context that is given to the Mapper. |
| Mapper | Maps input key/value pairs to a set of intermediate key/value pairs. |
| Mapper.Context | The Context passed on to the Mapper implementations. |
| OutputCommitter | OutputCommitter describes the commit of task output for a Map-Reduce job. |
| OutputFormat | OutputFormat describes the output-specification for a Map-Reduce job. |
| Partitioner | Partitions the key space. |
| QueueAclsInfo | Class to encapsulate Queue ACLs for a particular user. |
| QueueInfo | Class that contains the information regarding the Job Queues which are maintained by the Hadoop Map/Reduce framework. |
| QueueState | Enum representing queue state. |
| RecordReader | The record reader breaks the data into key/value pairs for input to the Mapper. |
| RecordWriter | RecordWriter writes the output <key, value> pairs to an output file. |
| ReduceContext | The context passed to the Reducer. |
| Reducer | Reduces a set of intermediate values which share a key to a smaller set of values. |
| Reducer.Context | The Context passed on to the Reducer implementations. |
| TaskAttemptContext | The context for task attempts. |
| TaskAttemptID | TaskAttemptID represents the immutable and unique identifier for a task attempt. |
| TaskCompletionEvent | This is used to track task completion events on the job tracker. |
| TaskCompletionEvent.Status | |
| TaskCounter | |
| TaskID | TaskID represents the immutable and unique identifier for a Map or Reduce Task. |
| TaskInputOutputContext | A context object that allows input and output from the task. |
| TaskTrackerInfo | Information about TaskTracker. |
| TaskType | Enum for map, reduce, job-setup, job-cleanup, task-cleanup task types. |
Classes in org.apache.hadoop.mapreduce used by org.apache.hadoop.mapreduce.jobhistory

| Class | Description |
|---|---|
| Counters | |
| JobID | JobID represents the immutable and unique identifier for the job. |
| TaskAttemptID | TaskAttemptID represents the immutable and unique identifier for a task attempt. |
| TaskID | TaskID represents the immutable and unique identifier for a Map or Reduce Task. |
| TaskType | Enum for map, reduce, job-setup, job-cleanup, task-cleanup task types. |
Classes in org.apache.hadoop.mapreduce used by org.apache.hadoop.mapreduce.lib.aggregate

| Class | Description |
|---|---|
| Job | The job submitter's view of the Job. |
| Mapper | Maps input key/value pairs to a set of intermediate key/value pairs. |
| Mapper.Context | The Context passed on to the Mapper implementations. |
| Reducer | Reduces a set of intermediate values which share a key to a smaller set of values. |
| Reducer.Context | The Context passed on to the Reducer implementations. |
Classes in org.apache.hadoop.mapreduce used by org.apache.hadoop.mapreduce.lib.chain

| Class | Description |
|---|---|
| Job | The job submitter's view of the Job. |
| Mapper | Maps input key/value pairs to a set of intermediate key/value pairs. |
| Mapper.Context | The Context passed on to the Mapper implementations. |
| Reducer | Reduces a set of intermediate values which share a key to a smaller set of values. |
| Reducer.Context | The Context passed on to the Reducer implementations. |
Classes in org.apache.hadoop.mapreduce used by org.apache.hadoop.mapreduce.lib.db

| Class | Description |
|---|---|
| InputFormat | InputFormat describes the input-specification for a Map-Reduce job. |
| InputSplit | InputSplit represents the data to be processed by an individual Mapper. |
| Job | The job submitter's view of the Job. |
| JobContext | A read-only view of the job that is provided to the tasks while they are running. |
| OutputCommitter | OutputCommitter describes the commit of task output for a Map-Reduce job. |
| OutputFormat | OutputFormat describes the output-specification for a Map-Reduce job. |
| RecordReader | The record reader breaks the data into key/value pairs for input to the Mapper. |
| RecordWriter | RecordWriter writes the output <key, value> pairs to an output file. |
| TaskAttemptContext | The context for task attempts. |
Classes in org.apache.hadoop.mapreduce used by org.apache.hadoop.mapreduce.lib.fieldsel

| Class | Description |
|---|---|
| Mapper | Maps input key/value pairs to a set of intermediate key/value pairs. |
| Mapper.Context | The Context passed on to the Mapper implementations. |
| Reducer | Reduces a set of intermediate values which share a key to a smaller set of values. |
| Reducer.Context | The Context passed on to the Reducer implementations. |
Classes in org.apache.hadoop.mapreduce used by org.apache.hadoop.mapreduce.lib.input

| Class | Description |
|---|---|
| InputFormat | InputFormat describes the input-specification for a Map-Reduce job. |
| InputSplit | InputSplit represents the data to be processed by an individual Mapper. |
| Job | The job submitter's view of the Job. |
| JobContext | A read-only view of the job that is provided to the tasks while they are running. |
| Mapper | Maps input key/value pairs to a set of intermediate key/value pairs. |
| RecordReader | The record reader breaks the data into key/value pairs for input to the Mapper. |
| TaskAttemptContext | The context for task attempts. |
Classes in org.apache.hadoop.mapreduce used by org.apache.hadoop.mapreduce.lib.jobcontrol

| Class | Description |
|---|---|
| Job | The job submitter's view of the Job. |
| JobID | JobID represents the immutable and unique identifier for the job. |
Classes in org.apache.hadoop.mapreduce used by org.apache.hadoop.mapreduce.lib.join

| Class | Description |
|---|---|
| InputFormat | InputFormat describes the input-specification for a Map-Reduce job. |
| InputSplit | InputSplit represents the data to be processed by an individual Mapper. |
| JobContext | A read-only view of the job that is provided to the tasks while they are running. |
| RecordReader | The record reader breaks the data into key/value pairs for input to the Mapper. |
| TaskAttemptContext | The context for task attempts. |
Classes in org.apache.hadoop.mapreduce used by org.apache.hadoop.mapreduce.lib.map

| Class | Description |
|---|---|
| Counter | A named counter that tracks the progress of a map/reduce job. |
| InputFormat | InputFormat describes the input-specification for a Map-Reduce job. |
| InputSplit | InputSplit represents the data to be processed by an individual Mapper. |
| Job | The job submitter's view of the Job. |
| JobContext | A read-only view of the job that is provided to the tasks while they are running. |
| JobID | JobID represents the immutable and unique identifier for the job. |
| MapContext | The context that is given to the Mapper. |
| Mapper | Maps input key/value pairs to a set of intermediate key/value pairs. |
| Mapper.Context | The Context passed on to the Mapper implementations. |
| OutputCommitter | OutputCommitter describes the commit of task output for a Map-Reduce job. |
| OutputFormat | OutputFormat describes the output-specification for a Map-Reduce job. |
| Partitioner | Partitions the key space. |
| Reducer | Reduces a set of intermediate values which share a key to a smaller set of values. |
| TaskAttemptContext | The context for task attempts. |
| TaskAttemptID | TaskAttemptID represents the immutable and unique identifier for a task attempt. |
| TaskInputOutputContext | A context object that allows input and output from the task. |
Classes in org.apache.hadoop.mapreduce used by org.apache.hadoop.mapreduce.lib.output

| Class | Description |
|---|---|
| Job | The job submitter's view of the Job. |
| JobContext | A read-only view of the job that is provided to the tasks while they are running. |
| JobStatus.State | Current state of the job. |
| OutputCommitter | OutputCommitter describes the commit of task output for a Map-Reduce job. |
| OutputFormat | OutputFormat describes the output-specification for a Map-Reduce job. |
| Partitioner | Partitions the key space. |
| RecordWriter | RecordWriter writes the output <key, value> pairs to an output file. |
| TaskAttemptContext | The context for task attempts. |
| TaskInputOutputContext | A context object that allows input and output from the task. |
Classes in org.apache.hadoop.mapreduce used by org.apache.hadoop.mapreduce.lib.partition

| Class | Description |
|---|---|
| InputFormat | InputFormat describes the input-specification for a Map-Reduce job. |
| Job | The job submitter's view of the Job. |
| JobContext | A read-only view of the job that is provided to the tasks while they are running. |
| Partitioner | Partitions the key space. |
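A Partitioner maps each intermediate key to one of the job's reduce tasks. As an illustrative sketch (a hypothetical standalone class, not the Hadoop type itself), the modulo-of-hash scheme used by Hadoop's default HashPartitioner can be written in plain Java:

```java
public class HashPartitionSketch {
    // Mask off the sign bit so negative hash codes still yield a
    // non-negative index, then take the remainder modulo the number
    // of partitions (i.e., reduce tasks).
    static int getPartition(Object key, int numPartitions) {
        return (key.hashCode() & Integer.MAX_VALUE) % numPartitions;
    }

    public static void main(String[] args) {
        // Every key deterministically lands in [0, numPartitions),
        // so all values for a key reach the same reducer.
        System.out.println(getPartition("hello", 4));
    }
}
```

The determinism of this mapping is what guarantees that all pairs sharing a key are grouped at a single reducer.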
Classes in org.apache.hadoop.mapreduce used by org.apache.hadoop.mapreduce.lib.reduce

| Class | Description |
|---|---|
| Counter | A named counter that tracks the progress of a map/reduce job. |
| InputFormat | InputFormat describes the input-specification for a Map-Reduce job. |
| JobContext | A read-only view of the job that is provided to the tasks while they are running. |
| JobID | JobID represents the immutable and unique identifier for the job. |
| Mapper | Maps input key/value pairs to a set of intermediate key/value pairs. |
| OutputCommitter | OutputCommitter describes the commit of task output for a Map-Reduce job. |
| OutputFormat | OutputFormat describes the output-specification for a Map-Reduce job. |
| Partitioner | Partitions the key space. |
| ReduceContext | The context passed to the Reducer. |
| Reducer | Reduces a set of intermediate values which share a key to a smaller set of values. |
| Reducer.Context | The Context passed on to the Reducer implementations. |
| TaskAttemptContext | The context for task attempts. |
| TaskAttemptID | TaskAttemptID represents the immutable and unique identifier for a task attempt. |
| TaskInputOutputContext | A context object that allows input and output from the task. |
Classes in org.apache.hadoop.mapreduce used by org.apache.hadoop.mapreduce.split

| Class | Description |
|---|---|
| InputSplit | InputSplit represents the data to be processed by an individual Mapper. |
Classes in org.apache.hadoop.mapreduce used by org.apache.hadoop.mapreduce.task

| Class | Description |
|---|---|
| JobContext | A read-only view of the job that is provided to the tasks while they are running. |
| ReduceContext.ValueIterator | Iterator to iterate over values for a given group of records. |
Classes in org.apache.hadoop.mapreduce used by org.apache.hadoop.mapreduce.tools

| Class | Description |
|---|---|
| Counters | |
| Job | The job submitter's view of the Job. |
| TaskAttemptID | TaskAttemptID represents the immutable and unique identifier for a task attempt. |