java.lang.Object
  org.apache.hadoop.hdfs.server.datanode.FSDataset
@InterfaceAudience.Private
public class FSDataset
extends Object
implements FSConstants, FSDatasetInterface
FSDataset manages a set of data blocks. Each block has a unique name and an extent on disk.
Nested Class Summary

Nested classes/interfaces inherited from interface org.apache.hadoop.hdfs.protocol.FSConstants:
    FSConstants.DatanodeReportType, FSConstants.SafeModeAction, FSConstants.UpgradeAction

Nested classes/interfaces inherited from interface org.apache.hadoop.hdfs.server.datanode.FSDatasetInterface:
    FSDatasetInterface.BlockInputStreams, FSDatasetInterface.BlockWriteStreams, FSDatasetInterface.MetaDataInputStream
Field Summary

static String  METADATA_EXTENSION
static short   METADATA_VERSION
Constructor Summary

FSDataset(DataStorage storage, org.apache.hadoop.conf.Configuration conf)
    An FSDataset has a directory where it loads its data files.
Method Summary

void adjustCrcChannelPosition(Block b, FSDatasetInterface.BlockWriteStreams streams, int checksumSize)
    Sets the offset in the meta file so that the last checksum will be overwritten.

org.apache.hadoop.hdfs.server.datanode.ReplicaInPipelineInterface append(Block b, long newGS, long expectedBlockLen)
    Appends to a finalized replica and returns the meta info of the replica.

void checkAndUpdate(long blockId, File diskFile, File diskMetaFile, org.apache.hadoop.hdfs.server.datanode.FSDataset.FSVolume vol)
    Reconciles the difference between blocks on the disk and blocks in volumeMap by checking the given block for inconsistencies.

void checkDataDir()
    Checks if a data directory is healthy; if some volumes have failed, removes all the blocks that belong to those volumes.

org.apache.hadoop.hdfs.server.datanode.ReplicaInPipelineInterface createRbw(Block b)
    Creates an RBW replica and returns the meta info of the replica.

org.apache.hadoop.hdfs.server.datanode.ReplicaInPipelineInterface createTemporary(Block b)
    Creates a temporary replica and returns the meta information of the replica.

void finalizeBlock(Block b)
    Completes the block write.

File findBlockFile(long blockId)
    Returns the block file for the given ID.

File getBlockFile(Block b)
    Gets the file name for a given block.

InputStream getBlockInputStream(Block b)
    Returns an input stream to read the contents of the specified block.

InputStream getBlockInputStream(Block b, long seekOffset)
    Returns an input stream positioned at the specified offset of the specified block.

BlockListAsLongs getBlockReport()
    Generates a block report from the in-memory block map.

long getCapacity()
    Returns the total capacity (used and unused).

long getDfsUsed()
    Returns the total space used by the dfs datanode.

File getFile(Block b)
    Turns the block identifier into a filename; the generation stamp is ignored.

long getLength(Block b)
    Finds the block's on-disk length.

FSDatasetInterface.MetaDataInputStream getMetaDataInputStream(Block b)
    Returns the metadata of block b as an input stream (and its length).

long getMetaDataLength(Block b)
    Returns the length of the metadata file of the specified block.

protected File getMetaFile(Block b)

long getRemaining()
    Returns how many bytes can still be stored in the FSDataset.

ReplicaInfo getReplica(long blockId)
    Deprecated. Use fetchReplicaInfo(long) instead.

long getReplicaVisibleLength(Block block)
    Gets the visible length of the specified replica.

String getStorageInfo()
    Returns the storage id of the underlying storage.

Block getStoredBlock(long blkid)

FSDatasetInterface.BlockInputStreams getTmpInputStreams(Block b, long blkOffset, long ckoff)
    Returns handles to the block file and its metadata file.

boolean hasEnoughResource()
    Returns true if there are still valid volumes on the DataNode.

ReplicaRecoveryInfo initReplicaRecovery(BlockRecoveryCommand.RecoveringBlock rBlock)
    Initializes a replica recovery.

void invalidate(Block[] invalidBlks)
    We're informed that a block is no longer valid.

boolean isValidBlock(Block b)
    Checks whether the given block is a valid one.

boolean metaFileExists(Block b)
    Does the meta file exist for this block?

org.apache.hadoop.hdfs.server.datanode.ReplicaInPipelineInterface recoverAppend(Block b, long newGS, long expectedBlockLen)
    Recovers a failed append to a finalized replica and returns the meta info of the replica.

void recoverClose(Block b, long newGS, long expectedBlockLen)
    Recovers a failed pipeline close: bumps the replica's generation stamp and finalizes it if it is an RBW replica.

org.apache.hadoop.hdfs.server.datanode.ReplicaInPipelineInterface recoverRbw(Block b, long newGS, long minBytesRcvd, long maxBytesRcvd)
    Recovers an RBW replica and returns the meta info of the replica.

void shutdown()
    Shuts down the FSDataset.

String toString()
    Stringifies the name of the storage.

void unfinalizeBlock(Block b)
    Removes the temporary block file (if any).

boolean unlinkBlock(Block block, int numLinks)
    Makes a copy of the block if this block is linked to an existing snapshot.

ReplicaInfo updateReplicaUnderRecovery(Block oldBlock, long recoveryId, long newlength)
    Updates the replica's generation stamp and length and finalizes it.
Methods inherited from class java.lang.Object

clone, equals, finalize, getClass, hashCode, notify, notifyAll, wait, wait, wait
Field Detail

public static final String METADATA_EXTENSION

public static final short METADATA_VERSION
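METADATA_EXTENSION names the suffix of the checksum (meta) file that sits next to each block file. As a rough sketch of the on-disk naming convention of this era of HDFS (the "blk_&lt;id&gt;" and "_&lt;generationStamp&gt;.meta" layout shown here is illustrative, not part of FSDataset's public API):

```java
// Illustrative helper: derive block-file and meta-file names.
// METADATA_EXTENSION in FSDataset is ".meta"; the block file is named
// "blk_<blockId>" and the meta file appends "_<generationStamp>.meta".
class MetaFileNames {
    static final String METADATA_EXTENSION = ".meta";

    static String blockFileName(long blockId) {
        return "blk_" + blockId;
    }

    static String metaFileName(long blockId, long generationStamp) {
        return blockFileName(blockId) + "_" + generationStamp + METADATA_EXTENSION;
    }
}
```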
Constructor Detail

public FSDataset(DataStorage storage, org.apache.hadoop.conf.Configuration conf) throws IOException
    An FSDataset has a directory where it loads its data files.
    Throws: IOException
Method Detail

protected File getMetaFile(Block b) throws IOException
    Throws: IOException

public File findBlockFile(long blockId)

public Block getStoredBlock(long blkid) throws IOException
    Specified by: getStoredBlock in interface FSDatasetInterface
    Throws: IOException

public boolean metaFileExists(Block b) throws IOException
    Description copied from interface: FSDatasetInterface
    Specified by: metaFileExists in interface FSDatasetInterface
    Parameters: b - the block
    Throws: IOException

public long getMetaDataLength(Block b) throws IOException
    Description copied from interface: FSDatasetInterface
    Specified by: getMetaDataLength in interface FSDatasetInterface
    Parameters: b - the block for which the metadata length is desired
    Throws: IOException

public FSDatasetInterface.MetaDataInputStream getMetaDataInputStream(Block b) throws IOException
    Description copied from interface: FSDatasetInterface
    Specified by: getMetaDataInputStream in interface FSDatasetInterface
    Parameters: b - the block
    Throws: IOException
public long getDfsUsed() throws IOException
    Specified by: getDfsUsed in interface FSDatasetMBean
    Throws: IOException

public boolean hasEnoughResource()
    Specified by: hasEnoughResource in interface FSDatasetInterface

public long getCapacity() throws IOException
    Specified by: getCapacity in interface FSDatasetMBean
    Throws: IOException

public long getRemaining() throws IOException
    Specified by: getRemaining in interface FSDatasetMBean
    Throws: IOException

public long getLength(Block b) throws IOException
    Specified by: getLength in interface FSDatasetInterface
    Throws: IOException

public File getBlockFile(Block b) throws IOException
    Throws: IOException

public InputStream getBlockInputStream(Block b) throws IOException
    Description copied from interface: FSDatasetInterface
    Specified by: getBlockInputStream in interface FSDatasetInterface
    Throws: IOException

public InputStream getBlockInputStream(Block b, long seekOffset) throws IOException
    Description copied from interface: FSDatasetInterface
    Specified by: getBlockInputStream in interface FSDatasetInterface
    Throws: IOException

public FSDatasetInterface.BlockInputStreams getTmpInputStreams(Block b, long blkOffset, long ckoff) throws IOException
    Specified by: getTmpInputStreams in interface FSDatasetInterface
    Throws: IOException
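getBlockInputStream(Block, long) hands back a stream already positioned at seekOffset within the block. A self-contained sketch of the equivalent positioning on an ordinary file (FSDataset resolves the block file internally; the plain-file handling below is illustrative only):

```java
import java.io.*;

class SeekStream {
    // Open an InputStream positioned 'seekOffset' bytes into 'file',
    // mirroring the contract of getBlockInputStream(Block, long).
    static InputStream openAt(File file, long seekOffset) throws IOException {
        FileInputStream in = new FileInputStream(file);
        long skipped = 0;
        while (skipped < seekOffset) {
            long n = in.skip(seekOffset - skipped);
            if (n <= 0) { // skip() can return 0 or stop at EOF
                in.close();
                throw new EOFException("offset past end of file");
            }
            skipped += n;
        }
        return in;
    }
}
```

Note the loop around skip(): a single call may skip fewer bytes than requested, so the sketch keeps skipping until the offset is reached.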
public boolean unlinkBlock(Block block, int numLinks) throws IOException
    Parameters:
        block - the block
        numLinks - unlink if the number of links exceeds this value
    Throws: IOException
public org.apache.hadoop.hdfs.server.datanode.ReplicaInPipelineInterface append(Block b, long newGS, long expectedBlockLen) throws IOException
    Description copied from interface: FSDatasetInterface
    Specified by: append in interface FSDatasetInterface
    Parameters:
        b - the block
        newGS - the new generation stamp for the replica
        expectedBlockLen - the number of bytes the replica is expected to have
    Throws: IOException

public org.apache.hadoop.hdfs.server.datanode.ReplicaInPipelineInterface recoverAppend(Block b, long newGS, long expectedBlockLen) throws IOException
    Description copied from interface: FSDatasetInterface
    Specified by: recoverAppend in interface FSDatasetInterface
    Parameters:
        b - the block
        newGS - the new generation stamp for the replica
        expectedBlockLen - the number of bytes the replica is expected to have
    Throws: IOException

public void recoverClose(Block b, long newGS, long expectedBlockLen) throws IOException
    Description copied from interface: FSDatasetInterface
    Specified by: recoverClose in interface FSDatasetInterface
    Parameters:
        b - the block
        newGS - the new generation stamp for the replica
        expectedBlockLen - the number of bytes the replica is expected to have
    Throws: IOException
public org.apache.hadoop.hdfs.server.datanode.ReplicaInPipelineInterface createRbw(Block b) throws IOException
    Description copied from interface: FSDatasetInterface
    Specified by: createRbw in interface FSDatasetInterface
    Parameters: b - the block
    Throws: IOException - if an error occurs

public org.apache.hadoop.hdfs.server.datanode.ReplicaInPipelineInterface recoverRbw(Block b, long newGS, long minBytesRcvd, long maxBytesRcvd) throws IOException
    Description copied from interface: FSDatasetInterface
    Specified by: recoverRbw in interface FSDatasetInterface
    Parameters:
        b - the block
        newGS - the new generation stamp for the replica
        minBytesRcvd - the minimum number of bytes that the replica could have
        maxBytesRcvd - the maximum number of bytes that the replica could have
    Throws: IOException - if an error occurs

public org.apache.hadoop.hdfs.server.datanode.ReplicaInPipelineInterface createTemporary(Block b) throws IOException
    Description copied from interface: FSDatasetInterface
    Specified by: createTemporary in interface FSDatasetInterface
    Parameters: b - the block
    Throws: IOException - if an error occurs

public void adjustCrcChannelPosition(Block b, FSDatasetInterface.BlockWriteStreams streams, int checksumSize) throws IOException
    Specified by: adjustCrcChannelPosition in interface FSDatasetInterface
    Parameters:
        b - the block
        streams - the streams for the data file and checksum file
        checksumSize - number of bytes each checksum has
    Throws: IOException
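adjustCrcChannelPosition rewinds the meta file so the next write overwrites the last checksum. The position arithmetic is simply "current length minus one checksum"; a sketch on a plain file, with RandomAccessFile standing in for the BlockWriteStreams checksum channel:

```java
import java.io.*;

class CrcRewind {
    // Position the meta-file channel so the next write overwrites the
    // final checksum, as adjustCrcChannelPosition does for the checksum
    // stream in BlockWriteStreams.
    static void seekToLastChecksum(RandomAccessFile metaFile, int checksumSize)
            throws IOException {
        long pos = metaFile.length() - checksumSize;
        if (pos < 0)
            throw new IOException("meta file shorter than one checksum");
        metaFile.seek(pos);
    }
}
```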
public void finalizeBlock(Block b) throws IOException
    Specified by: finalizeBlock in interface FSDatasetInterface
    Throws: IOException

public void unfinalizeBlock(Block b) throws IOException
    Specified by: unfinalizeBlock in interface FSDatasetInterface
    Throws: IOException

public BlockListAsLongs getBlockReport()
    Specified by: getBlockReport in interface FSDatasetInterface

public boolean isValidBlock(Block b)
    Specified by: isValidBlock in interface FSDatasetInterface

public void invalidate(Block[] invalidBlks) throws IOException
    Specified by: invalidate in interface FSDatasetInterface
    Parameters: invalidBlks - the blocks to be invalidated
    Throws: IOException

public File getFile(Block b)

public void checkDataDir() throws org.apache.hadoop.util.DiskChecker.DiskErrorException
    Specified by: checkDataDir in interface FSDatasetInterface
    Throws: org.apache.hadoop.util.DiskChecker.DiskErrorException
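getBlockReport builds a BlockListAsLongs from the in-memory block map. Conceptually, each block is reported as a (blockId, length, generationStamp) triple packed into a flat long[]; the packing below is a simplified illustration of that idea only, not the exact BlockListAsLongs wire layout:

```java
class BlockReportSketch {
    // Pack (id, len, genStamp) triples into one long[] - the idea behind
    // BlockListAsLongs: a compact block report the DataNode can ship to
    // the NameNode without per-block object overhead.
    static long[] pack(long[][] blocks) {
        long[] out = new long[blocks.length * 3];
        for (int i = 0; i < blocks.length; i++) {
            out[3 * i]     = blocks[i][0]; // block id
            out[3 * i + 1] = blocks[i][1]; // on-disk length
            out[3 * i + 2] = blocks[i][2]; // generation stamp
        }
        return out;
    }
}
```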
public String toString()
    Description copied from interface: FSDatasetInterface
    Specified by: toString in interface FSDatasetInterface
    Overrides: toString in class Object

public void shutdown()
    Description copied from interface: FSDatasetInterface
    Specified by: shutdown in interface FSDatasetInterface

public String getStorageInfo()
    Description copied from interface: FSDatasetMBean
    Specified by: getStorageInfo in interface FSDatasetMBean
public void checkAndUpdate(long blockId, File diskFile, File diskMetaFile, org.apache.hadoop.hdfs.server.datanode.FSDataset.FSVolume vol)
    Checks the given block for inconsistencies; if the ReplicaInfo does not match the file on
    the disk, updates the ReplicaInfo with the correct file.
    Parameters:
        blockId - the block that differs
        diskFile - the block file on the disk
        diskMetaFile - the metadata file on the disk
        vol - the volume of the block file

@Deprecated
public ReplicaInfo getReplica(long blockId)
    Deprecated. Use fetchReplicaInfo(long) instead.
    Description copied from interface: FSDatasetInterface
    Specified by: getReplica in interface FSDatasetInterface
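checkAndUpdate reconciles the in-memory volumeMap against what is actually on disk. The possible outcomes reduce to a small decision table; the enum-returning sketch below only names the cases for clarity (the real method mutates volumeMap in place rather than returning a value, and its exact branching is an assumption summarizing the documented behavior):

```java
class Reconcile {
    enum Action { NONE, ADD_TO_MAP, REMOVE_FROM_MAP, UPDATE_FILE_REF }

    // inMap:       block present in volumeMap
    // onDisk:      block file exists on disk
    // fileMatches: the file recorded in ReplicaInfo equals the file found on disk
    static Action decide(boolean inMap, boolean onDisk, boolean fileMatches) {
        if (!inMap && onDisk)  return Action.ADD_TO_MAP;      // disk has a block memory missed
        if (inMap && !onDisk)  return Action.REMOVE_FROM_MAP; // file vanished; drop the replica
        if (inMap && !fileMatches)
            return Action.UPDATE_FILE_REF;                    // ReplicaInfo points at the wrong file
        return Action.NONE;                                   // everything consistent
    }
}
```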
public ReplicaRecoveryInfo initReplicaRecovery(BlockRecoveryCommand.RecoveringBlock rBlock) throws IOException
    Description copied from interface: FSDatasetInterface
    Specified by: initReplicaRecovery in interface FSDatasetInterface
    Throws: IOException

public ReplicaInfo updateReplicaUnderRecovery(Block oldBlock, long recoveryId, long newlength) throws IOException
    Description copied from interface: FSDatasetInterface
    Specified by: updateReplicaUnderRecovery in interface FSDatasetInterface
    Throws: IOException

public long getReplicaVisibleLength(Block block) throws IOException
    Description copied from interface: FSDatasetInterface
    Specified by: getReplicaVisibleLength in interface FSDatasetInterface
    Throws: IOException
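updateReplicaUnderRecovery brings a replica to the agreed recovery length before finalizing it with the new generation stamp. The file-level effect can be sketched on a plain file; real recovery also updates volumeMap and the meta file, which is omitted here:

```java
import java.io.*;

class RecoverySketch {
    // Truncate 'replicaFile' to 'newLength', as update-under-recovery does
    // before the replica is finalized with the new generation stamp.
    static void truncateReplica(File replicaFile, long newLength) throws IOException {
        try (RandomAccessFile raf = new RandomAccessFile(replicaFile, "rw")) {
            if (raf.length() < newLength)
                throw new IOException("replica shorter than recovery length");
            raf.setLength(newLength); // discard bytes past the recovery length
        }
    }
}
```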