Packages that use TaskAttemptContext | |
---|---|
org.apache.hadoop.examples | Hadoop example code. |
org.apache.hadoop.examples.pi | This package consists of a map/reduce application, distbbp, which computes exact binary digits of the mathematical constant π. |
org.apache.hadoop.examples.terasort | This package consists of 3 map/reduce applications for Hadoop to compete in the annual terabyte sort competition. |
org.apache.hadoop.mapred | A software framework for easily writing applications that process vast amounts of data (multi-terabyte data-sets) in parallel on large clusters (thousands of nodes) of commodity hardware in a reliable, fault-tolerant manner. |
org.apache.hadoop.mapred.lib | Library of generally useful mappers, reducers, and partitioners. |
org.apache.hadoop.mapreduce | |
org.apache.hadoop.mapreduce.lib.db | The new-API counterpart of the org.apache.hadoop.mapred.lib.db package. |
org.apache.hadoop.mapreduce.lib.input | |
org.apache.hadoop.mapreduce.lib.join | Given a set of sorted datasets keyed with the same class and yielding equal partitions, it is possible to effect a join of those datasets prior to the map. |
org.apache.hadoop.mapreduce.lib.map | |
org.apache.hadoop.mapreduce.lib.output | |
org.apache.hadoop.mapreduce.lib.reduce | |
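The join package's summary above relies on two datasets being sorted by the same key class and partitioned identically. A minimal sketch of the resulting map-side merge join, using plain sorted lists as stand-ins for the package's RecordReaders (this is an illustration of the idea, not the CompositeInputFormat implementation):

```java
import java.util.*;

// Simplified merge join over two datasets sorted by the same key and
// partitioned identically -- the precondition the
// org.apache.hadoop.mapreduce.lib.join package states. Plain lists stand in
// for the package's ComposableRecordReaders.
public class MergeJoinSketch {
    // Join two key-sorted lists of (key, value) pairs on equal keys.
    public static List<String> mergeJoin(List<Map.Entry<Integer, String>> left,
                                         List<Map.Entry<Integer, String>> right) {
        List<String> out = new ArrayList<>();
        int i = 0, j = 0;
        while (i < left.size() && j < right.size()) {
            int cmp = left.get(i).getKey().compareTo(right.get(j).getKey());
            if (cmp < 0) i++;          // key only on the left: advance left
            else if (cmp > 0) j++;     // key only on the right: advance right
            else {                     // matching keys: emit the joined tuple
                out.add(left.get(i).getKey() + ":" + left.get(i).getValue()
                        + "," + right.get(j).getValue());
                i++; j++;
            }
        }
        return out;
    }

    public static void main(String[] args) {
        List<Map.Entry<Integer, String>> a = List.of(
            Map.entry(1, "a"), Map.entry(3, "b"), Map.entry(5, "c"));
        List<Map.Entry<Integer, String>> b = List.of(
            Map.entry(3, "x"), Map.entry(5, "y"), Map.entry(7, "z"));
        System.out.println(mergeJoin(a, b)); // [3:b,x, 5:c,y]
    }
}
```

Because both inputs are already sorted and equally partitioned, each mapper can perform this join with a single linear pass and no shuffle.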
Uses of TaskAttemptContext in org.apache.hadoop.examples |
---|
Methods in org.apache.hadoop.examples with parameters of type TaskAttemptContext | |
---|---|
RecordReader<MultiFileWordCount.WordOffset,org.apache.hadoop.io.Text> |
MultiFileWordCount.MyInputFormat.createRecordReader(InputSplit split,
TaskAttemptContext context)
|
RecordReader<org.apache.hadoop.io.LongWritable,org.apache.hadoop.io.IntWritable> |
BaileyBorweinPlouffe.BbpInputFormat.createRecordReader(InputSplit generic,
TaskAttemptContext context)
Create a record reader for a given split. |
void |
MultiFileWordCount.CombineFileLineRecordReader.initialize(InputSplit split,
TaskAttemptContext context)
|
Constructors in org.apache.hadoop.examples with parameters of type TaskAttemptContext | |
---|---|
MultiFileWordCount.CombineFileLineRecordReader(CombineFileSplit split,
TaskAttemptContext context,
Integer index)
|
Uses of TaskAttemptContext in org.apache.hadoop.examples.pi |
---|
Methods in org.apache.hadoop.examples.pi with parameters of type TaskAttemptContext | |
---|---|
RecordReader<org.apache.hadoop.io.NullWritable,SummationWritable> |
DistSum.Machine.AbstractInputFormat.createRecordReader(InputSplit generic,
TaskAttemptContext context)
Specify how to read the records. |
Uses of TaskAttemptContext in org.apache.hadoop.examples.terasort |
---|
Methods in org.apache.hadoop.examples.terasort with parameters of type TaskAttemptContext | |
---|---|
RecordReader<org.apache.hadoop.io.Text,org.apache.hadoop.io.Text> |
TeraInputFormat.createRecordReader(InputSplit split,
TaskAttemptContext context)
|
OutputCommitter |
TeraOutputFormat.getOutputCommitter(TaskAttemptContext context)
|
RecordWriter<org.apache.hadoop.io.Text,org.apache.hadoop.io.Text> |
TeraOutputFormat.getRecordWriter(TaskAttemptContext job)
|
void |
TeraOutputFormat.TeraOutputCommitter.setupTask(TaskAttemptContext taskContext)
|
Constructors in org.apache.hadoop.examples.terasort with parameters of type TaskAttemptContext | |
---|---|
TeraOutputFormat.TeraOutputCommitter(org.apache.hadoop.fs.Path outputPath,
TaskAttemptContext context)
|
Uses of TaskAttemptContext in org.apache.hadoop.mapred |
---|
Subinterfaces of TaskAttemptContext in org.apache.hadoop.mapred | |
---|---|
interface |
TaskAttemptContext
|
Methods in org.apache.hadoop.mapred with parameters of type TaskAttemptContext | |
---|---|
void |
OutputCommitter.abortTask(TaskAttemptContext taskContext)
This method implements the new interface by calling the old method. |
void |
OutputCommitter.commitTask(TaskAttemptContext taskContext)
This method implements the new interface by calling the old method. |
boolean |
OutputCommitter.needsTaskCommit(TaskAttemptContext taskContext)
This method implements the new interface by calling the old method. |
void |
OutputCommitter.setupTask(TaskAttemptContext taskContext)
This method implements the new interface by calling the old method. |
Uses of TaskAttemptContext in org.apache.hadoop.mapred.lib |
---|
Methods in org.apache.hadoop.mapred.lib with parameters of type TaskAttemptContext | |
---|---|
RecordReader<K,V> |
CombineFileInputFormat.createRecordReader(InputSplit split,
TaskAttemptContext context)
|
Uses of TaskAttemptContext in org.apache.hadoop.mapreduce |
---|
Subinterfaces of TaskAttemptContext in org.apache.hadoop.mapreduce | |
---|---|
interface |
MapContext<KEYIN,VALUEIN,KEYOUT,VALUEOUT>
The context that is given to the Mapper. |
interface |
ReduceContext<KEYIN,VALUEIN,KEYOUT,VALUEOUT>
The context passed to the Reducer. |
interface |
TaskInputOutputContext<KEYIN,VALUEIN,KEYOUT,VALUEOUT>
A context object that allows input and output from the task. |
Classes in org.apache.hadoop.mapreduce that implement TaskAttemptContext | |
---|---|
class |
Mapper.Context
The Context passed on to the Mapper implementations. |
class |
Reducer.Context
The Context passed on to the Reducer implementations. |
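The two tables above describe an interface hierarchy: TaskInputOutputContext extends TaskAttemptContext, MapContext and ReduceContext extend TaskInputOutputContext, and Mapper.Context / Reducer.Context implement them. A minimal sketch of that shape using stub interfaces (method bodies and the real API's type parameters are omitted; `getTaskAttemptID` here is a placeholder member, not the full Hadoop signature):

```java
// Stub sketch of the TaskAttemptContext hierarchy listed above. These are
// stand-in interfaces, not the org.apache.hadoop.mapreduce types.
public class ContextHierarchySketch {
    interface TaskAttemptContext { String getTaskAttemptID(); }
    interface TaskInputOutputContext extends TaskAttemptContext { }
    interface MapContext extends TaskInputOutputContext { }
    interface ReduceContext extends TaskInputOutputContext { }

    // Plays the role of Mapper.Context: implements MapContext, and therefore
    // can be passed anywhere a TaskAttemptContext is expected.
    static class MapperContext implements MapContext {
        public String getTaskAttemptID() { return "attempt_0_m_0"; }
    }

    public static String idOf(TaskAttemptContext ctx) {
        return ctx.getTaskAttemptID();
    }

    public static void main(String[] args) {
        // A MapContext implementation is usable as a TaskAttemptContext.
        System.out.println(idOf(new MapperContext())); // attempt_0_m_0
    }
}
```

This is why every method in the tables below can take a TaskAttemptContext: any Mapper.Context or Reducer.Context satisfies the parameter type.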
Methods in org.apache.hadoop.mapreduce with parameters of type TaskAttemptContext | |
---|---|
abstract void |
OutputCommitter.abortTask(TaskAttemptContext taskContext)
Discard the task output. |
abstract void |
RecordWriter.close(TaskAttemptContext context)
Close this RecordWriter to future operations. |
abstract void |
OutputCommitter.commitTask(TaskAttemptContext taskContext)
Promote the task's temporary output to the final output location: the task's output is moved to the job's output directory. |
abstract RecordReader<K,V> |
InputFormat.createRecordReader(InputSplit split,
TaskAttemptContext context)
Create a record reader for a given split. |
abstract OutputCommitter |
OutputFormat.getOutputCommitter(TaskAttemptContext context)
Get the output committer for this output format. |
abstract RecordWriter<K,V> |
OutputFormat.getRecordWriter(TaskAttemptContext context)
Get the RecordWriter for the given task. |
abstract void |
RecordReader.initialize(InputSplit split,
TaskAttemptContext context)
Called once at initialization. |
abstract boolean |
OutputCommitter.needsTaskCommit(TaskAttemptContext taskContext)
Check whether the task needs a commit. |
abstract void |
OutputCommitter.setupTask(TaskAttemptContext taskContext)
Sets up output for the task. |
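The four OutputCommitter methods above (setupTask, needsTaskCommit, commitTask, abortTask) form the task-commit protocol. A hedged sketch of that lifecycle, with a stand-in context and committer rather than the real Hadoop classes:

```java
import java.util.*;

// Sketch of the task-commit protocol behind OutputCommitter's
// setupTask / needsTaskCommit / commitTask / abortTask. TaskContext and
// SimpleCommitter are simplified stand-ins, not the Hadoop API.
public class CommitProtocolSketch {
    // Minimal stand-in for TaskAttemptContext: tracks the task's files.
    static class TaskContext {
        final List<String> workFiles = new ArrayList<>();
        final List<String> committedFiles = new ArrayList<>();
    }

    static class SimpleCommitter {
        void setupTask(TaskContext ctx) { /* no setup required, like FileOutputCommitter */ }
        boolean needsTaskCommit(TaskContext ctx) { return !ctx.workFiles.isEmpty(); }
        void commitTask(TaskContext ctx) {
            // "Move" work-directory files to the job output directory.
            ctx.committedFiles.addAll(ctx.workFiles);
            ctx.workFiles.clear();
        }
        void abortTask(TaskContext ctx) { ctx.workFiles.clear(); } // discard output
    }

    // Run one task attempt through the protocol; returns the committed files.
    public static List<String> runAttempt(List<String> outputs, boolean fail) {
        TaskContext ctx = new TaskContext();
        SimpleCommitter committer = new SimpleCommitter();
        committer.setupTask(ctx);
        ctx.workFiles.addAll(outputs);          // the task writes to its work dir
        if (fail) committer.abortTask(ctx);     // failed attempt: output discarded
        else if (committer.needsTaskCommit(ctx)) committer.commitTask(ctx);
        return ctx.committedFiles;
    }

    public static void main(String[] args) {
        System.out.println(runAttempt(List.of("part-00000"), false)); // [part-00000]
        System.out.println(runAttempt(List.of("part-00000"), true));  // []
    }
}
```

The point of the protocol is that output only becomes visible at commit time, so a failed or speculative duplicate attempt can be aborted without leaving partial results in the job's output directory.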
Uses of TaskAttemptContext in org.apache.hadoop.mapreduce.lib.db |
---|
Methods in org.apache.hadoop.mapreduce.lib.db with parameters of type TaskAttemptContext | |
---|---|
void |
DBOutputFormat.DBRecordWriter.close(TaskAttemptContext context)
Close this RecordWriter to future operations. |
RecordReader<org.apache.hadoop.io.LongWritable,T> |
DBInputFormat.createRecordReader(InputSplit split,
TaskAttemptContext context)
Create a record reader for a given split. |
OutputCommitter |
DBOutputFormat.getOutputCommitter(TaskAttemptContext context)
|
RecordWriter<K,V> |
DBOutputFormat.getRecordWriter(TaskAttemptContext context)
Get the RecordWriter for the given task. |
void |
DBRecordReader.initialize(InputSplit split,
TaskAttemptContext context)
|
Uses of TaskAttemptContext in org.apache.hadoop.mapreduce.lib.input |
---|
Fields in org.apache.hadoop.mapreduce.lib.input declared as TaskAttemptContext | |
---|---|
protected TaskAttemptContext |
CombineFileRecordReader.context
|
Methods in org.apache.hadoop.mapreduce.lib.input with parameters of type TaskAttemptContext | |
---|---|
RecordReader<org.apache.hadoop.io.LongWritable,org.apache.hadoop.io.Text> |
NLineInputFormat.createRecordReader(InputSplit genericSplit,
TaskAttemptContext context)
|
RecordReader<K,V> |
SequenceFileInputFormat.createRecordReader(InputSplit split,
TaskAttemptContext context)
|
RecordReader<org.apache.hadoop.io.Text,org.apache.hadoop.io.Text> |
SequenceFileAsTextInputFormat.createRecordReader(InputSplit split,
TaskAttemptContext context)
|
abstract RecordReader<K,V> |
CombineFileInputFormat.createRecordReader(InputSplit split,
TaskAttemptContext context)
This is not implemented yet. |
RecordReader<org.apache.hadoop.io.LongWritable,org.apache.hadoop.io.Text> |
TextInputFormat.createRecordReader(InputSplit split,
TaskAttemptContext context)
|
RecordReader<K,V> |
SequenceFileInputFilter.createRecordReader(InputSplit split,
TaskAttemptContext context)
Create a record reader for the given split. |
RecordReader<org.apache.hadoop.io.Text,org.apache.hadoop.io.Text> |
KeyValueTextInputFormat.createRecordReader(InputSplit genericSplit,
TaskAttemptContext context)
|
RecordReader<org.apache.hadoop.io.BytesWritable,org.apache.hadoop.io.BytesWritable> |
SequenceFileAsBinaryInputFormat.createRecordReader(InputSplit split,
TaskAttemptContext context)
|
void |
KeyValueLineRecordReader.initialize(InputSplit genericSplit,
TaskAttemptContext context)
|
void |
SequenceFileRecordReader.initialize(InputSplit split,
TaskAttemptContext context)
|
void |
CombineFileRecordReader.initialize(InputSplit split,
TaskAttemptContext context)
|
void |
SequenceFileAsTextRecordReader.initialize(InputSplit split,
TaskAttemptContext context)
|
void |
SequenceFileAsBinaryInputFormat.SequenceFileAsBinaryRecordReader.initialize(InputSplit split,
TaskAttemptContext context)
|
Constructors in org.apache.hadoop.mapreduce.lib.input with parameters of type TaskAttemptContext | |
---|---|
CombineFileRecordReader(CombineFileSplit split,
TaskAttemptContext context,
Class<? extends RecordReader<K,V>> rrClass)
A generic RecordReader that can hand out different recordReaders for each chunk in the CombineFileSplit. |
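The CombineFileRecordReader constructor above takes the split, the context, and the class of an underlying reader, then hands out one such reader per chunk. A hedged sketch of that hand-off pattern, using a plain Iterator over lists of records as a stand-in for the Hadoop types:

```java
import java.util.*;

// Sketch of the pattern CombineFileRecordReader implements: a single reader
// that walks the chunks of a combined split, initializing a fresh per-chunk
// reader (here, a plain Iterator) as each chunk is exhausted. The types are
// simplified stand-ins, not the Hadoop API.
public class CombineReaderSketch implements Iterator<String> {
    private final Iterator<List<String>> chunks;  // stands in for CombineFileSplit
    private Iterator<String> current = Collections.emptyIterator();

    public CombineReaderSketch(List<List<String>> split) {
        this.chunks = split.iterator();
    }

    @Override public boolean hasNext() {
        // Skip exhausted (or empty) chunks until a record is available.
        while (!current.hasNext() && chunks.hasNext()) {
            current = chunks.next().iterator();   // "initialize" the next chunk's reader
        }
        return current.hasNext();
    }

    @Override public String next() {
        if (!hasNext()) throw new NoSuchElementException();
        return current.next();
    }

    // Drain every chunk of the split through one combined reader.
    public static List<String> readAll(List<List<String>> split) {
        List<String> out = new ArrayList<>();
        new CombineReaderSketch(split).forEachRemaining(out::add);
        return out;
    }

    public static void main(String[] args) {
        System.out.println(readAll(List.of(
            List.of("a", "b"), List.<String>of(), List.of("c")))); // [a, b, c]
    }
}
```

The caller sees one continuous stream of records even though each chunk (in Hadoop, each file block in the CombineFileSplit) is read by its own short-lived reader.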
Uses of TaskAttemptContext in org.apache.hadoop.mapreduce.lib.join |
---|
Methods in org.apache.hadoop.mapreduce.lib.join with parameters of type TaskAttemptContext | |
---|---|
RecordReader<K,TupleWritable> |
CompositeInputFormat.createRecordReader(InputSplit split,
TaskAttemptContext taskContext)
Construct a CompositeRecordReader for the children of this InputFormat as defined in the init expression. |
abstract ComposableRecordReader<K,V> |
ComposableInputFormat.createRecordReader(InputSplit split,
TaskAttemptContext context)
|
void |
WrappedRecordReader.initialize(InputSplit split,
TaskAttemptContext context)
|
void |
MultiFilterRecordReader.initialize(InputSplit split,
TaskAttemptContext context)
|
void |
CompositeRecordReader.initialize(InputSplit split,
TaskAttemptContext context)
|
Uses of TaskAttemptContext in org.apache.hadoop.mapreduce.lib.map |
---|
Classes in org.apache.hadoop.mapreduce.lib.map that implement TaskAttemptContext | |
---|---|
class |
WrappedMapper.Context
|
Uses of TaskAttemptContext in org.apache.hadoop.mapreduce.lib.output |
---|
Methods in org.apache.hadoop.mapreduce.lib.output with parameters of type TaskAttemptContext | |
---|---|
void |
FileOutputCommitter.abortTask(TaskAttemptContext context)
Delete the work directory. |
void |
TextOutputFormat.LineRecordWriter.close(TaskAttemptContext context)
|
void |
FilterOutputFormat.FilterRecordWriter.close(TaskAttemptContext context)
|
void |
FileOutputCommitter.commitTask(TaskAttemptContext context)
Move the files from the work directory to the job output directory. |
org.apache.hadoop.fs.Path |
FileOutputFormat.getDefaultWorkFile(TaskAttemptContext context,
String extension)
Get the default path and filename for the output format. |
OutputCommitter |
FileOutputFormat.getOutputCommitter(TaskAttemptContext context)
|
OutputCommitter |
FilterOutputFormat.getOutputCommitter(TaskAttemptContext context)
|
OutputCommitter |
NullOutputFormat.getOutputCommitter(TaskAttemptContext context)
|
OutputCommitter |
LazyOutputFormat.getOutputCommitter(TaskAttemptContext context)
|
RecordWriter<K,V> |
TextOutputFormat.getRecordWriter(TaskAttemptContext job)
|
abstract RecordWriter<K,V> |
FileOutputFormat.getRecordWriter(TaskAttemptContext job)
|
RecordWriter<K,V> |
FilterOutputFormat.getRecordWriter(TaskAttemptContext context)
|
RecordWriter<org.apache.hadoop.io.BytesWritable,org.apache.hadoop.io.BytesWritable> |
SequenceFileAsBinaryOutputFormat.getRecordWriter(TaskAttemptContext context)
|
RecordWriter<K,V> |
SequenceFileOutputFormat.getRecordWriter(TaskAttemptContext context)
|
RecordWriter<K,V> |
NullOutputFormat.getRecordWriter(TaskAttemptContext context)
|
RecordWriter<K,V> |
LazyOutputFormat.getRecordWriter(TaskAttemptContext context)
|
RecordWriter<org.apache.hadoop.io.WritableComparable<?>,org.apache.hadoop.io.Writable> |
MapFileOutputFormat.getRecordWriter(TaskAttemptContext context)
|
protected org.apache.hadoop.io.SequenceFile.Writer |
SequenceFileOutputFormat.getSequenceWriter(TaskAttemptContext context,
Class<?> keyClass,
Class<?> valueClass)
|
static String |
FileOutputFormat.getUniqueFile(TaskAttemptContext context,
String name,
String extension)
Generate a unique filename, based on the task id, name, and extension. |
boolean |
FileOutputCommitter.needsTaskCommit(TaskAttemptContext context)
Did this task write any files in the work directory? |
void |
FileOutputCommitter.setupTask(TaskAttemptContext context)
No task setup required. |
Constructors in org.apache.hadoop.mapreduce.lib.output with parameters of type TaskAttemptContext | |
---|---|
FileOutputCommitter(org.apache.hadoop.fs.Path outputPath,
TaskAttemptContext context)
Create a file output committer. |
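getUniqueFile above generates a per-task filename from a name, the task id, and an extension, which is what lets concurrent tasks write to the same output directory without colliding. A minimal sketch of that naming idea; the exact format string here is an assumption for illustration, not a reproduction of Hadoop's internal format:

```java
// Sketch of a getUniqueFile-style naming scheme: base name + task id +
// optional extension. The "%s-%05d%s" layout is an illustrative assumption,
// not Hadoop's actual format.
public class UniqueFileSketch {
    public static String uniqueFile(String name, int taskId, String extension) {
        // Zero-pad the task id so lexicographic and numeric order agree.
        return String.format("%s-%05d%s", name, taskId, extension);
    }

    public static void main(String[] args) {
        System.out.println(uniqueFile("part", 3, ".txt")); // part-00003.txt
    }
}
```

Because the task id is embedded in the name, two attempts of different tasks can never overwrite each other's files, and FileOutputCommitter only has to move the surviving attempt's files at commit time.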
Uses of TaskAttemptContext in org.apache.hadoop.mapreduce.lib.reduce |
---|
Classes in org.apache.hadoop.mapreduce.lib.reduce that implement TaskAttemptContext | |
---|---|
class |
WrappedReducer.Context
|