Uses of Class org.apache.hadoop.mapreduce.InputSplit
Packages that use InputSplit

Package | Description
---|---
org.apache.hadoop.examples | Hadoop example code. |
org.apache.hadoop.examples.pi | This package consists of a map/reduce application, distbbp, which computes exact binary digits of the mathematical constant π. |
org.apache.hadoop.examples.terasort | This package consists of 3 map/reduce applications for Hadoop to compete in the annual terabyte sort competition. |
org.apache.hadoop.mapred | A software framework for easily writing applications that process vast amounts of data (multi-terabyte datasets) in parallel on large clusters (thousands of nodes) of commodity hardware in a reliable, fault-tolerant manner. |
org.apache.hadoop.mapred.lib | Library of generally useful mappers, reducers, and partitioners. |
org.apache.hadoop.mapreduce | |
org.apache.hadoop.mapreduce.lib.db | Input and output formats for reading from and writing to SQL databases. |
org.apache.hadoop.mapreduce.lib.input | |
org.apache.hadoop.mapreduce.lib.join | Given a set of sorted datasets keyed with the same class and yielding equal partitions, it is possible to effect a join of those datasets prior to the map. |
org.apache.hadoop.mapreduce.lib.map | |
org.apache.hadoop.mapreduce.split | |
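The join package's precondition above (sorted datasets keyed with the same class and yielding equal partitions) exists so that a simple sorted merge can pair records before the map. A minimal pure-Java sketch of that merge step, assuming unique keys; the class and method names are illustrative, not Hadoop's:

```java
import java.util.*;

// Illustrative sketch: inner merge-join of two datasets that are already
// sorted by key, the precondition the join library imposes on its inputs.
// Assumes keys are unique within each input.
class SortedMergeJoin {
    // Returns joined (key, leftValue, rightValue) triples as string arrays.
    static List<String[]> join(List<Map.Entry<Integer, String>> left,
                               List<Map.Entry<Integer, String>> right) {
        List<String[]> out = new ArrayList<>();
        int i = 0, j = 0;
        while (i < left.size() && j < right.size()) {
            int cmp = left.get(i).getKey().compareTo(right.get(j).getKey());
            if (cmp < 0) i++;               // left key has no partner yet
            else if (cmp > 0) j++;          // right key has no partner yet
            else {                          // keys match: emit a joined row
                out.add(new String[] {
                    String.valueOf(left.get(i).getKey()),
                    left.get(i).getValue(), right.get(j).getValue() });
                i++; j++;
            }
        }
        return out;
    }
}
```

Because both inputs are sorted and identically partitioned, each map task can run this merge over its own pair of splits without any shuffle.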
Uses of InputSplit in org.apache.hadoop.examples

Subclasses of InputSplit in org.apache.hadoop.examples

Modifier and Type | Class and Description
---|---
static class | BaileyBorweinPlouffe.BbpSplit: Input split for the BaileyBorweinPlouffe.BbpInputFormat.

Methods in org.apache.hadoop.examples that return types with arguments of type InputSplit

Modifier and Type | Method and Description
---|---
List<InputSplit> | BaileyBorweinPlouffe.BbpInputFormat.getSplits(JobContext context): Logically split the set of input files for the job.

Methods in org.apache.hadoop.examples with parameters of type InputSplit

Modifier and Type | Method and Description
---|---
RecordReader<MultiFileWordCount.WordOffset,org.apache.hadoop.io.Text> | MultiFileWordCount.MyInputFormat.createRecordReader(InputSplit split, TaskAttemptContext context)
RecordReader<org.apache.hadoop.io.LongWritable,org.apache.hadoop.io.IntWritable> | BaileyBorweinPlouffe.BbpInputFormat.createRecordReader(InputSplit generic, TaskAttemptContext context): Create a record reader for a given split.
void | MultiFileWordCount.CombineFileLineRecordReader.initialize(InputSplit split, TaskAttemptContext context)
Uses of InputSplit in org.apache.hadoop.examples.pi

Subclasses of InputSplit in org.apache.hadoop.examples.pi

Modifier and Type | Class and Description
---|---
static class | DistSum.Machine.SummationSplit: Split for the summations.

Methods in org.apache.hadoop.examples.pi that return types with arguments of type InputSplit

Modifier and Type | Method and Description
---|---
List<InputSplit> | DistSum.MapSide.PartitionInputFormat.getSplits(JobContext context): Partitions the summation into parts and returns them as splits.
List<InputSplit> | DistSum.ReduceSide.SummationInputFormat.getSplits(JobContext context)

Methods in org.apache.hadoop.examples.pi with parameters of type InputSplit

Modifier and Type | Method and Description
---|---
RecordReader<org.apache.hadoop.io.NullWritable,SummationWritable> | DistSum.Machine.AbstractInputFormat.createRecordReader(InputSplit generic, TaskAttemptContext context): Specifies how to read the records.
Uses of InputSplit in org.apache.hadoop.examples.terasort

Methods in org.apache.hadoop.examples.terasort that return types with arguments of type InputSplit

Modifier and Type | Method and Description
---|---
List<InputSplit> | TeraInputFormat.getSplits(JobContext job)

Methods in org.apache.hadoop.examples.terasort with parameters of type InputSplit

Modifier and Type | Method and Description
---|---
RecordReader<org.apache.hadoop.io.Text,org.apache.hadoop.io.Text> | TeraInputFormat.createRecordReader(InputSplit split, TaskAttemptContext context)
Uses of InputSplit in org.apache.hadoop.mapred

Subclasses of InputSplit in org.apache.hadoop.mapred

Modifier and Type | Class and Description
---|---
class | FileSplit: A section of an input file.
class | MultiFileSplit: A sub-collection of input files.
Uses of InputSplit in org.apache.hadoop.mapred.lib

Methods in org.apache.hadoop.mapred.lib with parameters of type InputSplit

Modifier and Type | Method and Description
---|---
RecordReader<K,V> | CombineFileInputFormat.createRecordReader(InputSplit split, TaskAttemptContext context)
Uses of InputSplit in org.apache.hadoop.mapreduce

Methods in org.apache.hadoop.mapreduce that return InputSplit

Modifier and Type | Method and Description
---|---
InputSplit | MapContext.getInputSplit(): Get the input split for this map.

Methods in org.apache.hadoop.mapreduce that return types with arguments of type InputSplit

Modifier and Type | Method and Description
---|---
abstract List<InputSplit> | InputFormat.getSplits(JobContext context): Logically split the set of input files for the job.

Methods in org.apache.hadoop.mapreduce with parameters of type InputSplit

Modifier and Type | Method and Description
---|---
abstract RecordReader<K,V> | InputFormat.createRecordReader(InputSplit split, TaskAttemptContext context): Create a record reader for a given split.
abstract void | RecordReader.initialize(InputSplit split, TaskAttemptContext context): Called once at initialization.
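The InputFormat.getSplits contract above is, for file-based formats, mostly byte-range arithmetic: carve the input into (offset, length) ranges that createRecordReader later opens. A simplified pure-Java sketch of that arithmetic; real FileInputFormat additionally lets the final split grow up to 10% past the target size (SPLIT_SLOP), which this sketch omits, and the class name here is illustrative:

```java
import java.util.ArrayList;
import java.util.List;

// Simplified sketch of the split arithmetic a FileInputFormat-style
// getSplits() performs: carve a file of totalLen bytes into
// (offset, length) ranges of at most splitSize bytes each.
class SplitCalculator {
    static List<long[]> computeSplits(long totalLen, long splitSize) {
        List<long[]> splits = new ArrayList<>();
        long offset = 0;
        long remaining = totalLen;
        while (remaining > 0) {
            long len = Math.min(splitSize, remaining);
            splits.add(new long[] { offset, len });  // one logical split
            offset += len;
            remaining -= len;
        }
        return splits;
    }
}
```

A 100-byte file with a 40-byte target yields splits at offsets 0, 40, and 80, the last covering the 20-byte remainder.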
Uses of InputSplit in org.apache.hadoop.mapreduce.lib.db

Subclasses of InputSplit in org.apache.hadoop.mapreduce.lib.db

Modifier and Type | Class and Description
---|---
static class | DataDrivenDBInputFormat.DataDrivenDBInputSplit: An InputSplit that spans a set of rows.
static class | DBInputFormat.DBInputSplit: An InputSplit that spans a set of rows.

Methods in org.apache.hadoop.mapreduce.lib.db that return types with arguments of type InputSplit

Modifier and Type | Method and Description
---|---
List<InputSplit> | DBInputFormat.getSplits(JobContext job): Logically split the set of input files for the job.
List<InputSplit> | DataDrivenDBInputFormat.getSplits(JobContext job): Logically split the set of input files for the job.
List<InputSplit> | DateSplitter.split(org.apache.hadoop.conf.Configuration conf, ResultSet results, String colName)
List<InputSplit> | BigDecimalSplitter.split(org.apache.hadoop.conf.Configuration conf, ResultSet results, String colName)
List<InputSplit> | IntegerSplitter.split(org.apache.hadoop.conf.Configuration conf, ResultSet results, String colName)
List<InputSplit> | FloatSplitter.split(org.apache.hadoop.conf.Configuration conf, ResultSet results, String colName)
List<InputSplit> | DBSplitter.split(org.apache.hadoop.conf.Configuration conf, ResultSet results, String colName): Given a ResultSet containing one record (already advanced to that record) with two columns (a low value and a high value of the same type), determine a set of splits that span the given values.
List<InputSplit> | BooleanSplitter.split(org.apache.hadoop.conf.Configuration conf, ResultSet results, String colName)
List<InputSplit> | TextSplitter.split(org.apache.hadoop.conf.Configuration conf, ResultSet results, String colName): Determines the splits between two user-provided strings.

Methods in org.apache.hadoop.mapreduce.lib.db with parameters of type InputSplit

Modifier and Type | Method and Description
---|---
RecordReader<org.apache.hadoop.io.LongWritable,T> | DBInputFormat.createRecordReader(InputSplit split, TaskAttemptContext context): Create a record reader for a given split.
void | DBRecordReader.initialize(InputSplit split, TaskAttemptContext context)
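The DBSplitter.split contract above receives a low and a high column value and must produce splits that cover the whole range. The core arithmetic, in the spirit of IntegerSplitter but not Hadoop's actual code (class and method names are illustrative):

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of IntegerSplitter-style logic: divide the
// inclusive range [low, high] into numSplits contiguous sub-ranges,
// spreading any remainder across the first few splits.
class IntegerRangeSplitter {
    static List<long[]> split(long low, long high, int numSplits) {
        List<long[]> out = new ArrayList<>();
        long span = high - low + 1;
        long base = span / numSplits;
        long rem = span % numSplits;      // leftover values to distribute
        long cur = low;
        for (int i = 0; i < numSplits; i++) {
            long size = base + (i < rem ? 1 : 0);
            if (size == 0) break;         // more splits requested than values
            out.add(new long[] { cur, cur + size - 1 });  // inclusive bounds
            cur += size;
        }
        return out;
    }
}
```

Splitting [1, 10] three ways yields [1, 4], [5, 7], and [8, 10]; a real splitter would then turn each range into a WHERE clause over colName.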
Uses of InputSplit in org.apache.hadoop.mapreduce.lib.input

Subclasses of InputSplit in org.apache.hadoop.mapreduce.lib.input

Modifier and Type | Class and Description
---|---
class | CombineFileSplit: A sub-collection of input files.

Methods in org.apache.hadoop.mapreduce.lib.input that return types with arguments of type InputSplit

Modifier and Type | Method and Description
---|---
List<InputSplit> | NLineInputFormat.getSplits(JobContext job): Logically splits the set of input files for the job, making N lines of the input one split.
List<InputSplit> | CombineFileInputFormat.getSplits(JobContext job)
List<InputSplit> | FileInputFormat.getSplits(JobContext job): Generate the list of files and make them into FileSplits.
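NLineInputFormat's rule above (N lines of input per split) can be sketched in plain Java. This illustrative grouper stands in for the real implementation, which tracks byte offsets into the file rather than materializing lines:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of NLineInputFormat's grouping rule: every N consecutive
// input lines become one split; the final split may hold fewer.
class NLineGrouper {
    static List<List<String>> group(List<String> lines, int n) {
        List<List<String>> splits = new ArrayList<>();
        for (int i = 0; i < lines.size(); i += n) {
            // copy the sublist so each split owns its lines
            splits.add(new ArrayList<>(
                lines.subList(i, Math.min(i + n, lines.size()))));
        }
        return splits;
    }
}
```

Five lines with N = 2 produce three splits of sizes 2, 2, and 1, which is why this format is useful when each map task should process a fixed, small number of records.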
Methods in org.apache.hadoop.mapreduce.lib.input with parameters of type InputSplit

Modifier and Type | Method and Description
---|---
RecordReader<org.apache.hadoop.io.LongWritable,org.apache.hadoop.io.Text> | NLineInputFormat.createRecordReader(InputSplit genericSplit, TaskAttemptContext context)
RecordReader<K,V> | SequenceFileInputFormat.createRecordReader(InputSplit split, TaskAttemptContext context)
RecordReader<org.apache.hadoop.io.Text,org.apache.hadoop.io.Text> | SequenceFileAsTextInputFormat.createRecordReader(InputSplit split, TaskAttemptContext context)
abstract RecordReader<K,V> | CombineFileInputFormat.createRecordReader(InputSplit split, TaskAttemptContext context): This is not implemented yet.
RecordReader<org.apache.hadoop.io.LongWritable,org.apache.hadoop.io.Text> | TextInputFormat.createRecordReader(InputSplit split, TaskAttemptContext context)
RecordReader<K,V> | SequenceFileInputFilter.createRecordReader(InputSplit split, TaskAttemptContext context): Create a record reader for the given split.
RecordReader<org.apache.hadoop.io.Text,org.apache.hadoop.io.Text> | KeyValueTextInputFormat.createRecordReader(InputSplit genericSplit, TaskAttemptContext context)
RecordReader<org.apache.hadoop.io.BytesWritable,org.apache.hadoop.io.BytesWritable> | SequenceFileAsBinaryInputFormat.createRecordReader(InputSplit split, TaskAttemptContext context)
void | KeyValueLineRecordReader.initialize(InputSplit genericSplit, TaskAttemptContext context)
void | SequenceFileRecordReader.initialize(InputSplit split, TaskAttemptContext context)
void | CombineFileRecordReader.initialize(InputSplit split, TaskAttemptContext context)
void | SequenceFileAsTextRecordReader.initialize(InputSplit split, TaskAttemptContext context)
void | SequenceFileAsBinaryInputFormat.SequenceFileAsBinaryRecordReader.initialize(InputSplit split, TaskAttemptContext context)
Uses of InputSplit in org.apache.hadoop.mapreduce.lib.join

Subclasses of InputSplit in org.apache.hadoop.mapreduce.lib.join

Modifier and Type | Class and Description
---|---
class | CompositeInputSplit: This InputSplit contains a set of child InputSplits.

Methods in org.apache.hadoop.mapreduce.lib.join that return InputSplit

Modifier and Type | Method and Description
---|---
InputSplit | CompositeInputSplit.get(int i): Get the ith child InputSplit.

Methods in org.apache.hadoop.mapreduce.lib.join that return types with arguments of type InputSplit

Modifier and Type | Method and Description
---|---
List<InputSplit> | CompositeInputFormat.getSplits(JobContext job): Build a CompositeInputSplit from the child InputFormats by assigning the ith split from each child to the ith composite split.

Methods in org.apache.hadoop.mapreduce.lib.join with parameters of type InputSplit

Modifier and Type | Method and Description
---|---
void | CompositeInputSplit.add(InputSplit s): Add an InputSplit to this collection.
RecordReader<K,TupleWritable> | CompositeInputFormat.createRecordReader(InputSplit split, TaskAttemptContext taskContext): Construct a CompositeRecordReader for the children of this InputFormat as defined in the init expression.
abstract ComposableRecordReader<K,V> | ComposableInputFormat.createRecordReader(InputSplit split, TaskAttemptContext context)
void | WrappedRecordReader.initialize(InputSplit split, TaskAttemptContext context)
void | MultiFilterRecordReader.initialize(InputSplit split, TaskAttemptContext context)
void | CompositeRecordReader.initialize(InputSplit split, TaskAttemptContext context)
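CompositeInputFormat.getSplits pairs the ith split of every child InputFormat, as described above. A pure-Java sketch of that pairing, with strings standing in for child InputSplits (illustrative, not Hadoop's implementation):

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of CompositeInputFormat.getSplits's pairing rule: the i-th
// split from every child InputFormat is bundled into the i-th
// composite split. All children must yield the same number of splits,
// which is why the join requires identically partitioned inputs.
class CompositeZipper {
    static List<List<String>> zip(List<List<String>> childSplits) {
        int n = childSplits.get(0).size();
        for (List<String> child : childSplits) {
            if (child.size() != n)
                throw new IllegalArgumentException("unequal split counts");
        }
        List<List<String>> composite = new ArrayList<>();
        for (int i = 0; i < n; i++) {
            List<String> bundle = new ArrayList<>();
            for (List<String> child : childSplits) bundle.add(child.get(i));
            composite.add(bundle);
        }
        return composite;
    }
}
```

Each composite bundle then feeds one CompositeRecordReader, which merges its child readers record by record.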
Uses of InputSplit in org.apache.hadoop.mapreduce.lib.map

Methods in org.apache.hadoop.mapreduce.lib.map that return InputSplit

Modifier and Type | Method and Description
---|---
InputSplit | WrappedMapper.Context.getInputSplit(): Get the input split for this map.
Uses of InputSplit in org.apache.hadoop.mapreduce.split

Constructors in org.apache.hadoop.mapreduce.split with parameters of type InputSplit

Constructor and Description
---
JobSplit.SplitMetaInfo(InputSplit split, long startOffset)
JobSplit.TaskSplitMetaInfo(InputSplit split, long startOffset)