Uses of Interface org.apache.hadoop.mapreduce.JobContext
Packages that use JobContext

| Package | Description |
|---|---|
| org.apache.hadoop.examples | Hadoop example code. |
| org.apache.hadoop.examples.pi | This package consists of a map/reduce application, distbbp, which computes exact binary digits of the mathematical constant π. |
| org.apache.hadoop.examples.terasort | This package consists of 3 map/reduce applications for Hadoop to compete in the annual terabyte sort competition. |
| org.apache.hadoop.mapred | A software framework for easily writing applications which process vast amounts of data (multi-terabyte data-sets) in parallel on large clusters (thousands of nodes) built of commodity hardware in a reliable, fault-tolerant manner. |
| org.apache.hadoop.mapreduce | |
| org.apache.hadoop.mapreduce.lib.db | org.apache.hadoop.mapred.lib.db Package |
| org.apache.hadoop.mapreduce.lib.input | |
| org.apache.hadoop.mapreduce.lib.join | Given a set of sorted datasets keyed with the same class and yielding equal partitions, it is possible to effect a join of those datasets prior to the map. |
| org.apache.hadoop.mapreduce.lib.map | |
| org.apache.hadoop.mapreduce.lib.output | |
| org.apache.hadoop.mapreduce.lib.partition | |
| org.apache.hadoop.mapreduce.lib.reduce | |
| org.apache.hadoop.mapreduce.task | |
Uses of JobContext in org.apache.hadoop.examples

Methods in org.apache.hadoop.examples with parameters of type JobContext:

| Return type | Method | Description |
|---|---|---|
| List<InputSplit> | BaileyBorweinPlouffe.BbpInputFormat.getSplits(JobContext context) | Logically split the set of input files for the job. |
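BbpInputFormat is an InputFormat whose getSplits(JobContext) derives its splits from the job configuration rather than from input files. As a rough illustration of that pattern, here is a minimal sketch; RangeInputFormat, RangeSplit, and the range.* configuration keys are all hypothetical, not part of Hadoop or the examples package.

```java
import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Writable;
import org.apache.hadoop.mapreduce.InputFormat;
import org.apache.hadoop.mapreduce.InputSplit;
import org.apache.hadoop.mapreduce.JobContext;
import org.apache.hadoop.mapreduce.RecordReader;
import org.apache.hadoop.mapreduce.TaskAttemptContext;

// Hypothetical InputFormat that computes splits from configuration values
// instead of listing input files.
public class RangeInputFormat extends InputFormat<LongWritable, NullWritable> {

  @Override
  public List<InputSplit> getSplits(JobContext context) {
    // JobContext provides read-only access to the job's Configuration;
    // "range.total" and "range.splits" are made-up keys for this sketch.
    long total = context.getConfiguration().getLong("range.total", 1000L);
    int parts = context.getConfiguration().getInt("range.splits", 4);
    List<InputSplit> splits = new ArrayList<InputSplit>();
    for (int i = 0; i < parts; i++) {
      splits.add(new RangeSplit(i * total / parts, (i + 1) * total / parts));
    }
    return splits;
  }

  @Override
  public RecordReader<LongWritable, NullWritable> createRecordReader(
      InputSplit split, TaskAttemptContext context) throws IOException {
    throw new UnsupportedOperationException("omitted in this sketch");
  }

  // Minimal custom split describing a numeric [start, end) range.
  public static class RangeSplit extends InputSplit implements Writable {
    private long start, end;

    public RangeSplit() {} // no-arg constructor needed for deserialization

    RangeSplit(long start, long end) {
      this.start = start;
      this.end = end;
    }

    @Override
    public long getLength() {
      return end - start; // split "size" used by the scheduler for ordering
    }

    @Override
    public String[] getLocations() {
      return new String[0]; // no locality preference
    }

    @Override
    public void write(DataOutput out) throws IOException {
      out.writeLong(start);
      out.writeLong(end);
    }

    @Override
    public void readFields(DataInput in) throws IOException {
      start = in.readLong();
      end = in.readLong();
    }
  }
}
```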
Uses of JobContext in org.apache.hadoop.examples.pi

Methods in org.apache.hadoop.examples.pi with parameters of type JobContext:

| Return type | Method | Description |
|---|---|---|
| List<InputSplit> | DistSum.MapSide.PartitionInputFormat.getSplits(JobContext context) | Partitions the summation into parts and returns them as splits. |
| List<InputSplit> | DistSum.ReduceSide.SummationInputFormat.getSplits(JobContext context) | |
Uses of JobContext in org.apache.hadoop.examples.terasort

Methods in org.apache.hadoop.examples.terasort with parameters of type JobContext:

| Return type | Method | Description |
|---|---|---|
| void | TeraOutputFormat.checkOutputSpecs(JobContext job) | |
| void | TeraOutputFormat.TeraOutputCommitter.commitJob(JobContext jobContext) | |
| static boolean | TeraOutputFormat.getFinalSync(JobContext job) | Does the user want a final sync at close? |
| static int | TeraSort.getOutputReplication(JobContext job) | |
| List<InputSplit> | TeraInputFormat.getSplits(JobContext job) | |
| static boolean | TeraSort.getUseSimplePartitioner(JobContext job) | |
| void | TeraOutputFormat.TeraOutputCommitter.setupJob(JobContext jobContext) | |
| static void | TeraInputFormat.writePartitionFile(JobContext job, org.apache.hadoop.fs.Path partFile) | Use the input splits to take samples of the input and generate sample keys. |
Uses of JobContext in org.apache.hadoop.mapred

Subinterfaces of JobContext in org.apache.hadoop.mapred:

| Modifier | Interface | Description |
|---|---|---|
| interface | JobContext | |

Methods in org.apache.hadoop.mapred with parameters of type JobContext:

| Return type | Method | Description |
|---|---|---|
| void | OutputCommitter.abortJob(JobContext context, JobStatus.State runState) | This method implements the new interface by calling the old method. |
| void | OutputCommitter.cleanupJob(JobContext context) | Deprecated. Use OutputCommitter.commitJob(org.apache.hadoop.mapreduce.JobContext) or OutputCommitter.abortJob(org.apache.hadoop.mapreduce.JobContext, org.apache.hadoop.mapreduce.JobStatus.State) instead. |
| void | OutputCommitter.commitJob(JobContext context) | This method implements the new interface by calling the old method. |
| void | OutputCommitter.setupJob(JobContext jobContext) | This method implements the new interface by calling the old method. |
Uses of JobContext in org.apache.hadoop.mapreduce

Subinterfaces of JobContext in org.apache.hadoop.mapreduce:

| Modifier | Interface | Description |
|---|---|---|
| interface | MapContext<KEYIN,VALUEIN,KEYOUT,VALUEOUT> | The context that is given to the Mapper. |
| interface | ReduceContext<KEYIN,VALUEIN,KEYOUT,VALUEOUT> | The context passed to the Reducer. |
| interface | TaskAttemptContext | The context for task attempts. |
| interface | TaskInputOutputContext<KEYIN,VALUEIN,KEYOUT,VALUEOUT> | A context object that allows input and output from the task. |

Classes in org.apache.hadoop.mapreduce that implement JobContext:

| Modifier | Class | Description |
|---|---|---|
| class | Job | The job submitter's view of the Job. |
| class | Mapper.Context | The Context passed on to the Mapper implementations. |
| class | Reducer.Context | The Context passed on to the Reducer implementations. |

Methods in org.apache.hadoop.mapreduce with parameters of type JobContext:

| Return type | Method | Description |
|---|---|---|
| void | OutputCommitter.abortJob(JobContext jobContext, JobStatus.State state) | For aborting an unsuccessful job's output. |
| abstract void | OutputFormat.checkOutputSpecs(JobContext context) | Check for validity of the output-specification for the job. |
| void | OutputCommitter.cleanupJob(JobContext jobContext) | Deprecated. Use OutputCommitter.commitJob(JobContext) or OutputCommitter.abortJob(JobContext, JobStatus.State) instead. |
| void | OutputCommitter.commitJob(JobContext jobContext) | For committing job's output after successful job completion. |
| abstract List<InputSplit> | InputFormat.getSplits(JobContext context) | Logically split the set of input files for the job. |
| abstract void | OutputCommitter.setupJob(JobContext jobContext) | For the framework to setup the job output during initialization. |
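To make the OutputCommitter lifecycle above concrete, here is a minimal sketch of a committer whose job-level hooks use the JobContext they receive. The scratch-directory scheme and the sketch.scratch.dir key are invented for illustration; FileOutputCommitter (covered below) is the standard implementation you would normally use.

```java
import java.io.IOException;

import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.JobContext;
import org.apache.hadoop.mapreduce.JobStatus;
import org.apache.hadoop.mapreduce.OutputCommitter;
import org.apache.hadoop.mapreduce.TaskAttemptContext;

// Hypothetical committer that stages work under a scratch directory
// taken from the job configuration ("sketch.scratch.dir" is made up).
public class ScratchDirCommitter extends OutputCommitter {

  private Path scratch(JobContext context) {
    return new Path(
        context.getConfiguration().get("sketch.scratch.dir", "/tmp/scratch"));
  }

  @Override
  public void setupJob(JobContext context) throws IOException {
    // Called once by the framework before any tasks run.
    FileSystem fs = scratch(context).getFileSystem(context.getConfiguration());
    fs.mkdirs(scratch(context));
  }

  @Override
  public void commitJob(JobContext context) throws IOException {
    // Called once after all tasks have committed successfully.
    FileSystem fs = scratch(context).getFileSystem(context.getConfiguration());
    fs.delete(scratch(context), true);
  }

  @Override
  public void abortJob(JobContext context, JobStatus.State state)
      throws IOException {
    // Called when the job fails or is killed; clean up the scratch space.
    FileSystem fs = scratch(context).getFileSystem(context.getConfiguration());
    fs.delete(scratch(context), true);
  }

  // Per-task hooks take a TaskAttemptContext (a subinterface of JobContext).
  @Override public void setupTask(TaskAttemptContext context) {}
  @Override public boolean needsTaskCommit(TaskAttemptContext context) { return false; }
  @Override public void commitTask(TaskAttemptContext context) {}
  @Override public void abortTask(TaskAttemptContext context) {}
}
```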
Uses of JobContext in org.apache.hadoop.mapreduce.lib.db

Methods in org.apache.hadoop.mapreduce.lib.db with parameters of type JobContext:

| Return type | Method | Description |
|---|---|---|
| void | DBOutputFormat.checkOutputSpecs(JobContext context) | |
| List<InputSplit> | DBInputFormat.getSplits(JobContext job) | Logically split the set of input files for the job. |
| List<InputSplit> | DataDrivenDBInputFormat.getSplits(JobContext job) | Logically split the set of input files for the job. |
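As a sketch of where the information read back by DBInputFormat.getSplits(JobContext) comes from, the usual setup is DBConfiguration.configureDB plus DBInputFormat.setInput. The JDBC driver, URL, and the users table here are placeholders.

```java
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.db.DBConfiguration;
import org.apache.hadoop.mapreduce.lib.db.DBInputFormat;
import org.apache.hadoop.mapreduce.lib.db.DBWritable;

public class DbInputSketch {

  // Minimal DBWritable for a hypothetical "users(id, name)" table.
  public static class UserRecord implements DBWritable {
    long id;
    String name;

    public void readFields(ResultSet rs) throws SQLException {
      id = rs.getLong(1);
      name = rs.getString(2);
    }

    public void write(PreparedStatement ps) throws SQLException {
      ps.setLong(1, id);
      ps.setString(2, name);
    }
  }

  public static void main(String[] args) throws Exception {
    Job job = Job.getInstance(new Configuration(), "db-input-sketch");

    // Driver class and JDBC URL are placeholders.
    DBConfiguration.configureDB(job.getConfiguration(),
        "com.mysql.jdbc.Driver", "jdbc:mysql://localhost/testdb");

    // getSplits(JobContext) later uses this table/field information
    // (plus a row count) to carve the table into splits.
    DBInputFormat.setInput(job, UserRecord.class,
        "users", null /* conditions */, "id" /* orderBy */, "id", "name");
    job.setInputFormatClass(DBInputFormat.class);
  }
}
```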
Uses of JobContext in org.apache.hadoop.mapreduce.lib.input

Methods in org.apache.hadoop.mapreduce.lib.input with parameters of type JobContext:

| Return type | Method | Description |
|---|---|---|
| static org.apache.hadoop.fs.PathFilter | FileInputFormat.getInputPathFilter(JobContext context) | Get a PathFilter instance of the filter set for the input paths. |
| static org.apache.hadoop.fs.Path[] | FileInputFormat.getInputPaths(JobContext context) | Get the list of input Paths for the map-reduce job. |
| static long | FileInputFormat.getMaxSplitSize(JobContext context) | Get the maximum split size. |
| static long | FileInputFormat.getMinSplitSize(JobContext job) | Get the minimum split size. |
| static int | NLineInputFormat.getNumLinesPerSplit(JobContext job) | Get the number of lines per split. |
| List<InputSplit> | NLineInputFormat.getSplits(JobContext job) | Logically splits the set of input files for the job; splits N lines of the input as one split. |
| List<InputSplit> | CombineFileInputFormat.getSplits(JobContext job) | |
| List<InputSplit> | FileInputFormat.getSplits(JobContext job) | Generate the list of files and make them into FileSplits. |
| protected boolean | TextInputFormat.isSplitable(JobContext context, org.apache.hadoop.fs.Path file) | |
| protected boolean | KeyValueTextInputFormat.isSplitable(JobContext context, org.apache.hadoop.fs.Path file) | |
| protected boolean | FileInputFormat.isSplitable(JobContext context, org.apache.hadoop.fs.Path filename) | Is the given filename splitable? Usually true, but if the file is stream compressed, it will not be. |
| protected List<org.apache.hadoop.fs.FileStatus> | SequenceFileInputFormat.listStatus(JobContext job) | |
| protected List<org.apache.hadoop.fs.FileStatus> | FileInputFormat.listStatus(JobContext job) | List input directories. |
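These static getters pair with corresponding setters on FileInputFormat: configuration goes onto a Job (which implements JobContext), and getSplits reads it back through the JobContext it is handed. A minimal sketch, with placeholder paths and assuming the Hadoop 2.x-style Job.getInstance factory:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;

public class SplitConfigSketch {
  public static void main(String[] args) throws Exception {
    Job job = Job.getInstance(new Configuration(), "split-config-sketch");

    // Setters store values in the job's Configuration.
    FileInputFormat.addInputPath(job, new Path("/data/in"));       // placeholder path
    FileInputFormat.setMinInputSplitSize(job, 64L * 1024 * 1024);  // 64 MB floor
    FileInputFormat.setMaxInputSplitSize(job, 256L * 1024 * 1024); // 256 MB ceiling

    // The corresponding getters take any JobContext; Job implements it,
    // so the same accessors work here and inside InputFormat.getSplits.
    for (Path p : FileInputFormat.getInputPaths(job)) {
      System.out.println("input path: " + p);
    }
    System.out.println("min split: " + FileInputFormat.getMinSplitSize(job));
    System.out.println("max split: " + FileInputFormat.getMaxSplitSize(job));
  }
}
```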
Uses of JobContext in org.apache.hadoop.mapreduce.lib.join

Methods in org.apache.hadoop.mapreduce.lib.join with parameters of type JobContext:

| Return type | Method | Description |
|---|---|---|
| List<InputSplit> | CompositeInputFormat.getSplits(JobContext job) | Build a CompositeInputSplit from the child InputFormats by assigning the ith split from each child to the ith composite split. |
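A rough sketch of how such a map-side join is typically wired up: CompositeInputFormat.compose builds a join expression over the child inputs, which getSplits(JobContext) then evaluates to pair up the ith split of each child. The paths are placeholders, and the JOIN_EXPR configuration key is stated from memory, so treat it as an assumption.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.SequenceFileInputFormat;
import org.apache.hadoop.mapreduce.lib.join.CompositeInputFormat;

public class MapSideJoinSketch {
  public static void main(String[] args) throws Exception {
    Job job = Job.getInstance(new Configuration(), "map-side-join-sketch");

    // Both inputs must be sorted and identically partitioned; /data/a and
    // /data/b are placeholder paths for two such datasets.
    String expr = CompositeInputFormat.compose(
        "inner", SequenceFileInputFormat.class,
        new Path("/data/a"), new Path("/data/b"));

    // The join expression drives how getSplits(JobContext) pairs up the
    // ith split of each child input into one composite split.
    job.getConfiguration().set(CompositeInputFormat.JOIN_EXPR, expr);
    job.setInputFormatClass(CompositeInputFormat.class);
  }
}
```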
Uses of JobContext in org.apache.hadoop.mapreduce.lib.map

Classes in org.apache.hadoop.mapreduce.lib.map that implement JobContext:

| Modifier | Class | Description |
|---|---|---|
| class | WrappedMapper.Context | |

Methods in org.apache.hadoop.mapreduce.lib.map with parameters of type JobContext:

| Return type | Method | Description |
|---|---|---|
| static <K1,V1,K2,V2> Class<Mapper<K1,V1,K2,V2>> | MultithreadedMapper.getMapperClass(JobContext job) | Get the application's mapper class. |
| static int | MultithreadedMapper.getNumberOfThreads(JobContext job) | The number of threads in the thread pool that will run the map function. |
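A brief sketch of the setter side of these accessors: MultithreadedMapper runs as the job's mapper and reads the application mapper class and thread count back through a JobContext. SlowIoMapper is a hypothetical mapper, assumed to be thread-safe, which is a requirement for anything run under MultithreadedMapper.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.lib.map.MultithreadedMapper;

public class MultithreadedSketch {

  // Placeholder mapper; the identity behavior of the base class is kept.
  public static class SlowIoMapper
      extends Mapper<LongWritable, Text, Text, LongWritable> {
  }

  public static void main(String[] args) throws Exception {
    Job job = Job.getInstance(new Configuration(), "multithreaded-sketch");

    // MultithreadedMapper is the mapper the framework actually runs; the
    // application's mapper and thread count are stored in the configuration.
    job.setMapperClass(MultithreadedMapper.class);
    MultithreadedMapper.setMapperClass(job, SlowIoMapper.class);
    MultithreadedMapper.setNumberOfThreads(job, 8);

    // At runtime, the framework reads these back through a JobContext.
    System.out.println("threads: " + MultithreadedMapper.getNumberOfThreads(job));
  }
}
```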
Uses of JobContext in org.apache.hadoop.mapreduce.lib.output

Methods in org.apache.hadoop.mapreduce.lib.output with parameters of type JobContext:

| Return type | Method | Description |
|---|---|---|
| void | FileOutputCommitter.abortJob(JobContext context, JobStatus.State state) | Delete the temporary directory, including all of the work directories. |
| void | FileOutputFormat.checkOutputSpecs(JobContext job) | |
| void | FilterOutputFormat.checkOutputSpecs(JobContext context) | |
| void | SequenceFileAsBinaryOutputFormat.checkOutputSpecs(JobContext job) | |
| void | NullOutputFormat.checkOutputSpecs(JobContext context) | |
| void | LazyOutputFormat.checkOutputSpecs(JobContext context) | |
| void | FileOutputCommitter.cleanupJob(JobContext context) | Deprecated. |
| void | FileOutputCommitter.commitJob(JobContext context) | Delete the temporary directory, including all of the work directories. |
| static boolean | FileOutputFormat.getCompressOutput(JobContext job) | Is the job output compressed? |
| static boolean | MultipleOutputs.getCountersEnabled(JobContext job) | Returns whether the counters for the named outputs are enabled. |
| static org.apache.hadoop.io.SequenceFile.CompressionType | SequenceFileOutputFormat.getOutputCompressionType(JobContext job) | Get the SequenceFile.CompressionType for the output SequenceFile. |
| static Class<? extends org.apache.hadoop.io.compress.CompressionCodec> | FileOutputFormat.getOutputCompressorClass(JobContext job, Class<? extends org.apache.hadoop.io.compress.CompressionCodec> defaultValue) | Get the CompressionCodec for compressing the job outputs. |
| protected static String | FileOutputFormat.getOutputName(JobContext job) | Get the base output name for the output file. |
| static org.apache.hadoop.fs.Path | FileOutputFormat.getOutputPath(JobContext job) | Get the Path to the output directory for the map-reduce job. |
| static Class<? extends org.apache.hadoop.io.WritableComparable> | SequenceFileAsBinaryOutputFormat.getSequenceFileOutputKeyClass(JobContext job) | Get the key class for the SequenceFile. |
| static Class<? extends org.apache.hadoop.io.Writable> | SequenceFileAsBinaryOutputFormat.getSequenceFileOutputValueClass(JobContext job) | Get the value class for the SequenceFile. |
| protected static void | FileOutputFormat.setOutputName(JobContext job, String name) | Set the base output name for output files to be created. |
| void | FileOutputCommitter.setupJob(JobContext context) | Create the temporary directory that is the root of all of the task work directories. |
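As with the input-side accessors, these getters pair with setters on FileOutputFormat. A minimal sketch with a placeholder output path:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.compress.GzipCodec;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class OutputConfigSketch {
  public static void main(String[] args) throws Exception {
    Job job = Job.getInstance(new Configuration(), "output-config-sketch");

    // Setters write into the job's Configuration; /data/out is a placeholder.
    FileOutputFormat.setOutputPath(job, new Path("/data/out"));
    FileOutputFormat.setCompressOutput(job, true);
    FileOutputFormat.setOutputCompressorClass(job, GzipCodec.class);

    // The getters on this page take a JobContext, so checkOutputSpecs and
    // the record writers can read the same settings back at runtime.
    System.out.println("output path: " + FileOutputFormat.getOutputPath(job));
    System.out.println("compressed:  " + FileOutputFormat.getCompressOutput(job));
    System.out.println("codec:       "
        + FileOutputFormat.getOutputCompressorClass(job, GzipCodec.class));
  }
}
```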
Uses of JobContext in org.apache.hadoop.mapreduce.lib.partition

Methods in org.apache.hadoop.mapreduce.lib.partition with parameters of type JobContext:

| Return type | Method | Description |
|---|---|---|
| static String | KeyFieldBasedComparator.getKeyFieldComparatorOption(JobContext job) | Get the KeyFieldBasedComparator options. |
| String | KeyFieldBasedPartitioner.getKeyFieldPartitionerOption(JobContext job) | Get the KeyFieldBasedPartitioner options. |
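A short sketch of where these option strings come from: the comparator options are set through a static method, while the partitioner's setter and getter are instance methods (as the non-static return type above suggests). The -k specs follow Unix sort(1)-style field syntax; the setter signatures here are stated from memory, so treat them as assumptions.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.partition.KeyFieldBasedComparator;
import org.apache.hadoop.mapreduce.lib.partition.KeyFieldBasedPartitioner;

public class KeyFieldSketch {
  public static void main(String[] args) throws Exception {
    Job job = Job.getInstance(new Configuration(), "key-field-sketch");

    // Sort on the second key field, numerically.
    KeyFieldBasedComparator.setKeyFieldComparatorOptions(job, "-k2,2n");
    job.setSortComparatorClass(KeyFieldBasedComparator.class);

    // Partition on the first key field only.
    KeyFieldBasedPartitioner<Text, Text> partitioner =
        new KeyFieldBasedPartitioner<Text, Text>();
    partitioner.setKeyFieldPartitionerOptions(job, "-k1,1");
    job.setPartitionerClass(KeyFieldBasedPartitioner.class);

    // Both getters accept a JobContext and read the stored option strings.
    System.out.println(KeyFieldBasedComparator.getKeyFieldComparatorOption(job));
    System.out.println(partitioner.getKeyFieldPartitionerOption(job));
  }
}
```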
Uses of JobContext in org.apache.hadoop.mapreduce.lib.reduce

Classes in org.apache.hadoop.mapreduce.lib.reduce that implement JobContext:

| Modifier | Class | Description |
|---|---|---|
| class | WrappedReducer.Context | |
Uses of JobContext in org.apache.hadoop.mapreduce.task

Classes in org.apache.hadoop.mapreduce.task that implement JobContext:

| Modifier | Class | Description |
|---|---|---|
| class | org.apache.hadoop.mapreduce.task.JobContextImpl | A read-only view of the job that is provided to the tasks while they are running. |