@InterfaceAudience.Private public interface FSDatasetInterface
This is an interface for the underlying storage that stores blocks for a data node. Examples are FSDataset (which stores blocks in directories on disk) and SimulatedFSDataset (which simulates data).
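The typical lifecycle a DataNode drives through this interface is: create a replica, write its bytes, finalize it, then serve reads from it. The sketch below mirrors that flow with a hypothetical in-memory stand-in; the simplified types and keyed-by-id maps here are illustrative, not the real Hadoop classes.

```java
import java.io.ByteArrayInputStream;
import java.io.InputStream;
import java.util.HashMap;
import java.util.Map;

// Hypothetical, simplified stand-in for the FSDatasetInterface lifecycle:
// createRbw -> write bytes -> finalizeBlock -> getBlockInputStream.
class InMemoryDataset {
    private final Map<Long, byte[]> rbw = new HashMap<>();       // replicas being written
    private final Map<Long, byte[]> finalized = new HashMap<>(); // finalized replicas

    void createRbw(long blockId) { rbw.put(blockId, new byte[0]); }

    void write(long blockId, byte[] data) { rbw.put(blockId, data); }

    void finalizeBlock(long blockId) {
        finalized.put(blockId, rbw.remove(blockId)); // move RBW -> finalized
    }

    boolean isValidBlock(long blockId) { return finalized.containsKey(blockId); }

    long getLength(long blockId) { return finalized.get(blockId).length; }

    InputStream getBlockInputStream(long blockId) {
        return new ByteArrayInputStream(finalized.get(blockId));
    }
}
```

Note how a block only becomes "valid" (readable) after finalizeBlock moves it out of the RBW state, matching the isValidBlock/finalizeBlock contract above.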
Nested Class Summary

- static class FSDatasetInterface.BlockInputStreams
  Contains the input streams for the data and checksum of a block.
- static class FSDatasetInterface.BlockWriteStreams
  Contains the output streams for the data and checksum of a block.
- static class FSDatasetInterface.MetaDataInputStream
  Provides the input stream and length of the metadata of a block.
Method Summary

- void adjustCrcChannelPosition(Block b, FSDatasetInterface.BlockWriteStreams stream, int checksumSize)
  Sets the file pointer of the checksum stream so that the last checksum will be overwritten.
- org.apache.hadoop.hdfs.server.datanode.ReplicaInPipelineInterface append(Block b, long newGS, long expectedBlockLen)
  Appends to a finalized replica and returns the meta info of the replica.
- void checkDataDir()
  Checks that all the data directories are healthy.
- org.apache.hadoop.hdfs.server.datanode.ReplicaInPipelineInterface createRbw(Block b)
  Creates an RBW replica and returns the meta info of the replica.
- org.apache.hadoop.hdfs.server.datanode.ReplicaInPipelineInterface createTemporary(Block b)
  Creates a temporary replica and returns the meta info of the replica.
- void finalizeBlock(Block b)
  Finalizes the block previously opened for writing using writeToBlock.
- InputStream getBlockInputStream(Block b)
  Returns an input stream to read the contents of the specified block.
- InputStream getBlockInputStream(Block b, long seekOffset)
  Returns an input stream at the specified offset of the specified block.
- BlockListAsLongs getBlockReport()
  Returns the block report - the full list of blocks stored.
- long getLength(Block b)
  Returns the specified block's on-disk length (excluding metadata).
- FSDatasetInterface.MetaDataInputStream getMetaDataInputStream(Block b)
  Returns the metadata of block b as an input stream (and its length).
- long getMetaDataLength(Block b)
  Returns the length of the metadata file of the specified block.
- Replica getReplica(long blockId)
  Deprecated.
- long getReplicaVisibleLength(Block block)
  Gets the visible length of the specified replica.
- Block getStoredBlock(long blkid)
- FSDatasetInterface.BlockInputStreams getTmpInputStreams(Block b, long blkoff, long ckoff)
  Returns input streams at the specified offsets of the specified block. The block is still in the tmp directory and is not finalized.
- boolean hasEnoughResource()
  Checks whether the DataNode still has enough valid storage volumes.
- ReplicaRecoveryInfo initReplicaRecovery(BlockRecoveryCommand.RecoveringBlock rBlock)
  Initializes a replica recovery.
- void invalidate(Block[] invalidBlks)
  Invalidates the specified blocks.
- boolean isValidBlock(Block b)
  Is the block valid?
- boolean metaFileExists(Block b)
  Does the meta file exist for this block?
- org.apache.hadoop.hdfs.server.datanode.ReplicaInPipelineInterface recoverAppend(Block b, long newGS, long expectedBlockLen)
  Recovers a failed append to a finalized replica and returns the meta info of the replica.
- void recoverClose(Block b, long newGS, long expectedBlockLen)
  Recovers a failed pipeline close. Bumps the replica's generation stamp and, if the replica is RBW, finalizes it.
- org.apache.hadoop.hdfs.server.datanode.ReplicaInPipelineInterface recoverRbw(Block b, long newGS, long minBytesRcvd, long maxBytesRcvd)
  Recovers an RBW replica and returns the meta info of the replica.
- void shutdown()
  Shuts down the FSDataset.
- String toString()
  Stringifies the name of the storage.
- void unfinalizeBlock(Block b)
  Unfinalizes the block previously opened for writing using writeToBlock.
- ReplicaInfo updateReplicaUnderRecovery(Block oldBlock, long recoveryId, long newLength)
  Updates the replica's generation stamp and length and finalizes it.

Methods inherited from interface org.apache.hadoop.hdfs.server.datanode.metrics.FSDatasetMBean:
getCapacity, getDfsUsed, getRemaining, getStorageInfo
Method Detail

long getMetaDataLength(Block b) throws IOException
  Parameters: b - the block for which the metadata length is desired
  Throws: IOException

FSDatasetInterface.MetaDataInputStream getMetaDataInputStream(Block b) throws IOException
  Parameters: b - the block
  Throws: IOException

boolean metaFileExists(Block b) throws IOException
  Parameters: b - the block
  Throws: IOException

long getLength(Block b) throws IOException
  Parameters: b
  Throws: IOException

@Deprecated Replica getReplica(long blockId)
  Deprecated. See FSDataset.
  Parameters: blockId

Block getStoredBlock(long blkid) throws IOException
  Throws: IOException

InputStream getBlockInputStream(Block b) throws IOException
  Parameters: b
  Throws: IOException

InputStream getBlockInputStream(Block b, long seekOffset) throws IOException
  Parameters: b, seekOffset
  Throws: IOException
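Callers of the offset variant expect the returned stream to already be positioned at seekOffset. With a plain java.io stream, the same positioning can be sketched with a loop around InputStream.skip, which is allowed to skip fewer bytes than requested (SeekDemo is a hypothetical helper, not part of the Hadoop API):

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

class SeekDemo {
    // Skip exactly seekOffset bytes, looping because InputStream.skip
    // may legitimately skip fewer bytes than asked for.
    static void seekTo(InputStream in, long seekOffset) throws IOException {
        long remaining = seekOffset;
        while (remaining > 0) {
            long skipped = in.skip(remaining);
            if (skipped <= 0) {
                throw new IOException("cannot seek to offset " + seekOffset);
            }
            remaining -= skipped;
        }
    }
}
```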
FSDatasetInterface.BlockInputStreams getTmpInputStreams(Block b, long blkoff, long ckoff) throws IOException
  Parameters: b, blkoff, ckoff
  Throws: IOException

org.apache.hadoop.hdfs.server.datanode.ReplicaInPipelineInterface createTemporary(Block b) throws IOException
  Parameters: b - block
  Throws: IOException - if an error occurs

org.apache.hadoop.hdfs.server.datanode.ReplicaInPipelineInterface createRbw(Block b) throws IOException
  Parameters: b - block
  Throws: IOException - if an error occurs

org.apache.hadoop.hdfs.server.datanode.ReplicaInPipelineInterface recoverRbw(Block b, long newGS, long minBytesRcvd, long maxBytesRcvd) throws IOException
  Parameters:
    b - block
    newGS - the new generation stamp for the replica
    minBytesRcvd - the minimum number of bytes that the replica could have
    maxBytesRcvd - the maximum number of bytes that the replica could have
  Throws: IOException - if an error occurs

org.apache.hadoop.hdfs.server.datanode.ReplicaInPipelineInterface append(Block b, long newGS, long expectedBlockLen) throws IOException
  Parameters:
    b - block
    newGS - the new generation stamp for the replica
    expectedBlockLen - the number of bytes the replica is expected to have
  Throws: IOException

org.apache.hadoop.hdfs.server.datanode.ReplicaInPipelineInterface recoverAppend(Block b, long newGS, long expectedBlockLen) throws IOException
  Parameters:
    b - block
    newGS - the new generation stamp for the replica
    expectedBlockLen - the number of bytes the replica is expected to have
  Throws: IOException

void recoverClose(Block b, long newGS, long expectedBlockLen) throws IOException
  Parameters:
    b - block
    newGS - the new generation stamp for the replica
    expectedBlockLen - the number of bytes the replica is expected to have
  Throws: IOException

void finalizeBlock(Block b) throws IOException
  Parameters: b
  Throws: IOException

void unfinalizeBlock(Block b) throws IOException
  Parameters: b
  Throws: IOException

BlockListAsLongs getBlockReport()
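BlockListAsLongs packs the full block report into a compact long array rather than a list of objects. A hypothetical three-longs-per-block encoding (id, on-disk length, generation stamp) illustrates the idea; the real wire layout used by Hadoop may differ:

```java
// Illustrative codec in the spirit of BlockListAsLongs: each block occupies
// three consecutive slots in one flat long[]. Not the actual Hadoop encoding.
class BlockReportCodec {
    static long[] encode(long[][] blocks) { // each row: {id, length, genStamp}
        long[] out = new long[blocks.length * 3];
        for (int i = 0; i < blocks.length; i++) {
            out[3 * i]     = blocks[i][0]; // block id
            out[3 * i + 1] = blocks[i][1]; // on-disk length
            out[3 * i + 2] = blocks[i][2]; // generation stamp
        }
        return out;
    }

    static long getBlockId(long[] report, int index)       { return report[3 * index]; }
    static long getBlockLen(long[] report, int index)      { return report[3 * index + 1]; }
    static long getBlockGenStamp(long[] report, int index) { return report[3 * index + 2]; }
}
```

The flat-array shape keeps a report of millions of blocks cheap to serialize and garbage-collection friendly, which is the point of returning BlockListAsLongs instead of a collection.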
boolean isValidBlock(Block b)
  Parameters: b

void invalidate(Block[] invalidBlks) throws IOException
  Parameters: invalidBlks - the blocks to be invalidated
  Throws: IOException

void checkDataDir() throws org.apache.hadoop.util.DiskChecker.DiskErrorException
  Throws: org.apache.hadoop.util.DiskChecker.DiskErrorException

String toString()
  Overrides: toString in class Object

void shutdown()

void adjustCrcChannelPosition(Block b, FSDatasetInterface.BlockWriteStreams stream, int checksumSize) throws IOException
  Parameters:
    b - block
    stream - the stream for the data file and checksum file
    checksumSize - number of bytes each checksum has
  Throws: IOException
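Overwriting the last checksum amounts to positioning the checksum channel exactly checksumSize bytes before the end of the meta file. A minimal sketch of that arithmetic, using a hypothetical helper rather than the Hadoop implementation:

```java
// Hypothetical helper: where the last checksum record starts in the
// checksum (meta) file, i.e. checksumSize bytes before end-of-file.
class CrcPosition {
    static long lastChecksumOffset(long metaFileLength, int checksumSize) {
        if (checksumSize <= 0 || metaFileLength < checksumSize) {
            throw new IllegalArgumentException("meta file shorter than one checksum");
        }
        return metaFileLength - checksumSize;
    }
}
```

Seeking the checksum channel to this offset before the next write means the final (possibly partial-chunk) checksum gets replaced rather than appended to.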
boolean hasEnoughResource()

long getReplicaVisibleLength(Block block) throws IOException
  Throws: IOException

ReplicaRecoveryInfo initReplicaRecovery(BlockRecoveryCommand.RecoveringBlock rBlock) throws IOException
  Throws: IOException

ReplicaInfo updateReplicaUnderRecovery(Block oldBlock, long recoveryId, long newLength) throws IOException
  Throws: IOException
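Logically, updateReplicaUnderRecovery adopts the recovery id as the replica's new generation stamp and cuts the replica down to the length agreed on during recovery, then finalizes it. A hypothetical value-type sketch of that state transition (simplified types, not the real ReplicaInfo):

```java
// Illustrative stand-in for a replica's recoverable state; the real
// ReplicaInfo carries much more (storage location, state, etc.).
class RecoveredReplica {
    final long blockId;
    final long generationStamp;
    final long length;

    RecoveredReplica(long blockId, long generationStamp, long length) {
        this.blockId = blockId;
        this.generationStamp = generationStamp;
        this.length = length;
    }

    // Sketch of the update: new generation stamp = recoveryId,
    // length truncated to the agreed newLength.
    static RecoveredReplica update(RecoveredReplica old, long recoveryId, long newLength) {
        if (newLength > old.length) {
            throw new IllegalArgumentException("recovery cannot extend a replica");
        }
        return new RecoveredReplica(old.blockId, recoveryId, newLength);
    }
}
```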