Packages that use DatanodeInfo

| Package | Description |
|---|---|
| org.apache.hadoop.hdfs | A distributed implementation of FileSystem. |
| org.apache.hadoop.hdfs.protocol | |
| org.apache.hadoop.hdfs.server.common | |
| org.apache.hadoop.hdfs.server.namenode | |
| org.apache.hadoop.hdfs.server.protocol | |
Uses of DatanodeInfo in org.apache.hadoop.hdfs

Methods in org.apache.hadoop.hdfs that return DatanodeInfo:

| Type | Method | Description |
|---|---|---|
| DatanodeInfo[] | DFSClient.datanodeReport(FSConstants.DatanodeReportType type) | |
| DatanodeInfo | DFSClient.DFSDataInputStream.getCurrentDatanode() | Returns the datanode from which the stream is currently reading. |
| DatanodeInfo[] | DistributedFileSystem.getDataNodeStats() | Return statistics for each datanode. |
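As a usage sketch, getDataNodeStats() gives one DatanodeInfo per datanode known to the NameNode. The NameNode URI below is a placeholder, and a running cluster with the Hadoop client libraries on the classpath is assumed:

```java
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.hdfs.DistributedFileSystem;
import org.apache.hadoop.hdfs.protocol.DatanodeInfo;

public class ListDatanodes {
    public static void main(String[] args) throws Exception {
        // Illustrative NameNode address; replace with your cluster's URI.
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(URI.create("hdfs://namenode:8020/"), conf);
        if (fs instanceof DistributedFileSystem) {
            DistributedFileSystem dfs = (DistributedFileSystem) fs;
            // One entry per datanode, with capacity/usage statistics.
            for (DatanodeInfo dn : dfs.getDataNodeStats()) {
                System.out.println(dn.getName()
                        + " capacity=" + dn.getCapacity()
                        + " remaining=" + dn.getRemaining());
            }
        }
        fs.close();
    }
}
```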
Uses of DatanodeInfo in org.apache.hadoop.hdfs.protocol

Methods in org.apache.hadoop.hdfs.protocol that return DatanodeInfo:

| Type | Method | Description |
|---|---|---|
| DatanodeInfo[] | ClientProtocol.getDatanodeReport(FSConstants.DatanodeReportType type) | Get a report on the system's current datanodes. |
| DatanodeInfo[] | LocatedBlock.getLocations() | |
| static DatanodeInfo | DatanodeInfo.read(DataInput in) | Read a DatanodeInfo. |
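Since DatanodeInfo is a Hadoop Writable, the static read(DataInput) pairs with the instance's write(DataOutput) from the Writable contract. A minimal round-trip sketch (the public write method is assumed from that contract):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import org.apache.hadoop.hdfs.protocol.DatanodeInfo;

public class DatanodeInfoRoundTrip {
    // Serialize a DatanodeInfo with write(DataOutput), then rebuild an
    // equivalent copy from the bytes with the static read(DataInput).
    static DatanodeInfo roundTrip(DatanodeInfo info) throws IOException {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        info.write(new DataOutputStream(buf));
        return DatanodeInfo.read(
                new DataInputStream(new ByteArrayInputStream(buf.toByteArray())));
    }
}
```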
Methods in org.apache.hadoop.hdfs.protocol with parameters of type DatanodeInfo:

| Type | Method | Description |
|---|---|---|
| LocatedBlock | ClientProtocol.addBlock(String src, String clientName, Block previous, DatanodeInfo[] excludedNodes) | A client that wants to write an additional block to the indicated filename (which must currently be open for writing) should call addBlock(). |
| protected abstract void | DataTransferProtocol.Receiver.opReplaceBlock(DataInputStream in, long blockId, long blockGs, String sourceId, DatanodeInfo src, BlockAccessToken accesstoken) | Abstract OP_REPLACE_BLOCK method. |
| static void | DataTransferProtocol.Sender.opReplaceBlock(DataOutputStream out, long blockId, long blockGs, String storageId, DatanodeInfo src, BlockAccessToken accesstoken) | Send OP_REPLACE_BLOCK. |
| protected abstract void | DataTransferProtocol.Receiver.opWriteBlock(DataInputStream in, long blockId, long blockGs, int pipelineSize, DataTransferProtocol.BlockConstructionStage stage, long newGs, long minBytesRcvd, long maxBytesRcvd, String client, DatanodeInfo src, DatanodeInfo[] targets, BlockAccessToken accesstoken) | Abstract OP_WRITE_BLOCK method. |
| static void | DataTransferProtocol.Sender.opWriteBlock(DataOutputStream out, long blockId, long blockGs, int pipelineSize, DataTransferProtocol.BlockConstructionStage stage, long newGs, long minBytesRcvd, long maxBytesRcvd, String client, DatanodeInfo src, DatanodeInfo[] targets, BlockAccessToken accesstoken) | Send OP_WRITE_BLOCK. |
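A minimal sketch of the Sender side of OP_REPLACE_BLOCK, assuming an open socket to the target datanode and a valid access token. Connection setup, handshake, and error handling are omitted, and the exact package locations of these classes (notably BlockAccessToken) vary across Hadoop versions:

```java
import java.io.BufferedOutputStream;
import java.io.DataOutputStream;
import java.net.Socket;
import org.apache.hadoop.hdfs.protocol.DataTransferProtocol;
import org.apache.hadoop.hdfs.protocol.DatanodeInfo;
import org.apache.hadoop.hdfs.security.BlockAccessToken; // package path is version-dependent

public class ReplaceBlockSketch {
    // Ask the datanode behind socket 's' to replace a block replica,
    // copying the data from 'source' (as the balancer does when moving blocks).
    static void requestReplace(Socket s, long blockId, long blockGs,
                               String storageId, DatanodeInfo source,
                               BlockAccessToken token) throws Exception {
        DataOutputStream out =
                new DataOutputStream(new BufferedOutputStream(s.getOutputStream()));
        // Writes the OP_REPLACE_BLOCK opcode and its operands in wire order.
        DataTransferProtocol.Sender.opReplaceBlock(
                out, blockId, blockGs, storageId, source, token);
        out.flush();
    }
}
```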
Constructors in org.apache.hadoop.hdfs.protocol with parameters of type DatanodeInfo:

| Constructor | Description |
|---|---|
| DatanodeInfo(DatanodeInfo from) | |
| LocatedBlock(Block b, DatanodeInfo[] locs) | |
| LocatedBlock(Block b, DatanodeInfo[] locs, long startOffset) | |
| LocatedBlock(Block b, DatanodeInfo[] locs, long startOffset, boolean corrupt) | |
| UnregisteredNodeException(DatanodeID nodeID, DatanodeInfo storedNode) | The exception is thrown if a different data-node claims the same storage id as the existing one. |
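A sketch of building a LocatedBlock by hand, as the NameNode does when answering a client's block-location query. The block id, lengths, and datanode names below are hypothetical, and the Block and DatanodeID constructor shapes differ slightly across Hadoop versions:

```java
import org.apache.hadoop.hdfs.protocol.Block;
import org.apache.hadoop.hdfs.protocol.DatanodeID;
import org.apache.hadoop.hdfs.protocol.DatanodeInfo;
import org.apache.hadoop.hdfs.protocol.LocatedBlock;

public class LocatedBlockSketch {
    public static void main(String[] args) {
        // Hypothetical block: id 42, 128 MB long, generation stamp 1001.
        Block b = new Block(42L, 134217728L, 1001L);
        // Hypothetical replica locations (host:port names are placeholders).
        DatanodeInfo[] locs = {
            new DatanodeInfo(new DatanodeID("dn1.example.com:50010")),
            new DatanodeInfo(new DatanodeID("dn2.example.com:50010"))
        };
        // The block starts at file offset 0 and is not marked corrupt.
        LocatedBlock lb = new LocatedBlock(b, locs, 0L, false);
        System.out.println(lb.getLocations().length + " replicas");
    }
}
```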
Uses of DatanodeInfo in org.apache.hadoop.hdfs.server.common

Methods in org.apache.hadoop.hdfs.server.common that return DatanodeInfo:

| Type | Method | Description |
|---|---|---|
| static DatanodeInfo | JspHelper.bestNode(LocatedBlock blk) | |
Uses of DatanodeInfo in org.apache.hadoop.hdfs.server.namenode

Subclasses of DatanodeInfo in org.apache.hadoop.hdfs.server.namenode:

| Class | Description |
|---|---|
| DatanodeDescriptor | DatanodeDescriptor tracks stats on a given DataNode, such as available storage capacity, last update time, etc., and maintains a set of blocks stored on the datanode. |

Methods in org.apache.hadoop.hdfs.server.namenode that return DatanodeInfo:

| Type | Method | Description |
|---|---|---|
| DatanodeInfo[] | FSNamesystem.datanodeReport(FSConstants.DatanodeReportType type) | |
| DatanodeInfo | FSNamesystem.getDataNodeInfo(String name) | |
| DatanodeInfo[] | NameNode.getDatanodeReport(FSConstants.DatanodeReportType type) | |
Methods in org.apache.hadoop.hdfs.server.namenode with parameters of type DatanodeInfo:

| Type | Method | Description |
|---|---|---|
| LocatedBlock | NameNode.addBlock(String src, String clientName, Block previous, DatanodeInfo[] excludedNodes) | |
| BlocksWithLocations | NameNode.getBlocks(DatanodeInfo datanode, long size) | |
| BlocksWithLocations | BackupNode.getBlocks(DatanodeInfo datanode, long size) | |
| void | FSNamesystem.markBlockAsCorrupt(Block blk, DatanodeInfo dn) | Mark the block belonging to datanode as corrupt. |
Uses of DatanodeInfo in org.apache.hadoop.hdfs.server.protocol

Methods in org.apache.hadoop.hdfs.server.protocol that return DatanodeInfo:

| Type | Method | Description |
|---|---|---|
| DatanodeInfo[][] | BlockCommand.getTargets() | |

Methods in org.apache.hadoop.hdfs.server.protocol with parameters of type DatanodeInfo:

| Type | Method | Description |
|---|---|---|
| BlocksWithLocations | NamenodeProtocol.getBlocks(DatanodeInfo datanode, long size) | Get a list of blocks belonging to datanode whose total size equals size. |

Constructors in org.apache.hadoop.hdfs.server.protocol with parameters of type DatanodeInfo:

| Constructor | Description |
|---|---|
| BlockRecoveryCommand.RecoveringBlock(Block b, DatanodeInfo[] locs, long newGS) | Create RecoveringBlock. |
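NamenodeProtocol.getBlocks is what a balancer-style tool uses to ask the NameNode for blocks stored on a particular datanode, e.g. when picking blocks to move off an overfull node. A sketch, assuming a NamenodeProtocol proxy has already been obtained (RPC proxy setup varies by Hadoop version and is omitted):

```java
import org.apache.hadoop.hdfs.protocol.DatanodeInfo;
import org.apache.hadoop.hdfs.server.protocol.BlocksWithLocations;
import org.apache.hadoop.hdfs.server.protocol.NamenodeProtocol;

public class BlockFetchSketch {
    // Fetch up to roughly 'size' bytes worth of blocks residing on
    // 'datanode', together with the locations of their other replicas.
    static BlocksWithLocations blocksOn(NamenodeProtocol namenode,
                                        DatanodeInfo datanode,
                                        long size) throws Exception {
        return namenode.getBlocks(datanode, size);
    }
}
```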