Uses of Class
org.apache.hadoop.hdfs.protocol.Block

Packages that use Block
org.apache.hadoop.hdfs A distributed implementation of FileSystem.
org.apache.hadoop.hdfs.protocol   
org.apache.hadoop.hdfs.server.datanode   
org.apache.hadoop.hdfs.server.namenode   
org.apache.hadoop.hdfs.server.protocol   
 

Uses of Block in org.apache.hadoop.hdfs
 

Methods in org.apache.hadoop.hdfs that return Block
 Block DFSClient.DFSDataInputStream.getCurrentBlock()
          Returns the block containing the target position.
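
As a hedged illustration of getCurrentBlock(), the sketch below opens a file and reports which block backs the current read position. The path, the instanceof cast, and the cluster configuration are assumptions of this sketch, not part of the API contract.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataInputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.hdfs.DFSClient;
    import org.apache.hadoop.hdfs.protocol.Block;

    public class CurrentBlockExample {
      public static void main(String[] args) throws Exception {
        // Assumes fs.default.name points at a reachable HDFS cluster.
        FileSystem fs = FileSystem.get(new Configuration());
        FSDataInputStream in = fs.open(new Path("/tmp/example.txt")); // hypothetical path
        in.read(); // advance the stream into the first block
        if (in instanceof DFSClient.DFSDataInputStream) {
          Block current = ((DFSClient.DFSDataInputStream) in).getCurrentBlock();
          System.out.println("block id=" + current.getBlockId()
              + " genstamp=" + current.getGenerationStamp());
        }
        in.close();
      }
    }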
 

Uses of Block in org.apache.hadoop.hdfs.protocol
 

Methods in org.apache.hadoop.hdfs.protocol that return Block
 Block LocatedBlock.getBlock()
           
 Block BlockListAsLongs.BlockReportIterator.next()
           
 

Methods in org.apache.hadoop.hdfs.protocol that return types with arguments of type Block
 Iterator<Block> BlockListAsLongs.iterator()
          Returns an iterator over blocks in the block report.
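
A minimal sketch of draining a block report through this iterator; it assumes a BlockListAsLongs named report is already in hand (for example, one built by a datanode before sending its block report):

    import java.util.Iterator;
    import org.apache.hadoop.hdfs.protocol.Block;
    import org.apache.hadoop.hdfs.protocol.BlockListAsLongs;

    // Fragment: print every block id and length in the report.
    static void printReport(BlockListAsLongs report) {
      Iterator<Block> it = report.iterator();
      while (it.hasNext()) {
        Block b = it.next();
        System.out.println(b.getBlockId() + " len=" + b.getNumBytes());
      }
    }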
 

Methods in org.apache.hadoop.hdfs.protocol with parameters of type Block
 void ClientProtocol.abandonBlock(Block b, String src, String holder)
          The client can give up on a block by calling abandonBlock().
 LocatedBlock ClientProtocol.addBlock(String src, String clientName, Block previous, DatanodeInfo[] excludedNodes)
          A client that wants to write an additional block to the indicated filename (which must currently be open for writing) should call addBlock().
 int Block.compareTo(Block b)
           
 boolean ClientProtocol.complete(String src, String clientName, Block last)
          The client is done writing data to the given filename, and would like to complete it.
 long ClientDatanodeProtocol.getReplicaVisibleLength(Block b)
          Return the visible length of a replica.
 LocatedBlock ClientProtocol.updateBlockForPipeline(Block block, String clientName)
          Get a new generation stamp together with an access token for a block under construction. This method is called only when a client needs to recover a failed pipeline or set up a pipeline for appending to a block.
 void ClientProtocol.updatePipeline(String clientName, Block oldBlock, Block newBlock, DatanodeID[] newNodes)
          Update a pipeline for a block under construction.
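
Taken together, addBlock(), abandonBlock(), and complete() form the client-side write lifecycle. A hedged sketch, assuming namenode is a ClientProtocol proxy, src is a path already open for writing by clientName, and previous is the last block written (null for the first block):

    // Ask the namenode to allocate the next block and its target datanodes.
    LocatedBlock lb = namenode.addBlock(src, clientName, previous, null);
    // ... stream the block's bytes to the datanodes in lb.getLocations();
    // on an unrecoverable pipeline failure the client may instead call:
    //   namenode.abandonBlock(lb.getBlock(), src, clientName);
    // Once the last block is written, ask the namenode to close the file.
    boolean closed = namenode.complete(src, clientName, lb.getBlock());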
 

Constructors in org.apache.hadoop.hdfs.protocol with parameters of type Block
Block(Block blk)
           
LocatedBlock(Block b, DatanodeInfo[] locs)
           
LocatedBlock(Block b, DatanodeInfo[] locs, long startOffset)
           
LocatedBlock(Block b, DatanodeInfo[] locs, long startOffset, boolean corrupt)
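
For illustration, a Block and a LocatedBlock can be constructed directly. The numeric values and the three-argument Block(blockId, numBytes, generationStamp) constructor are assumptions of this sketch:

    Block original = new Block(4242L, 1024L, 1001L); // id, length, generation stamp (assumed ctor)
    Block copy = new Block(original);                // copy constructor listed above
    DatanodeInfo[] locs = new DatanodeInfo[0];       // locations are normally filled in by the namenode
    LocatedBlock lb = new LocatedBlock(copy, locs, 0L, false);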
           
 

Constructor parameters in org.apache.hadoop.hdfs.protocol with type arguments of type Block
BlockListAsLongs(List<? extends Block> finalized, List<ReplicaInfo> uc)
          Create block report from finalized and under construction lists of blocks.
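
A short sketch of assembling such a report; the block values are illustrative and the under-construction list is left empty:

    import java.util.Arrays;
    import java.util.Collections;
    import java.util.List;
    import org.apache.hadoop.hdfs.protocol.Block;
    import org.apache.hadoop.hdfs.protocol.BlockListAsLongs;
    import org.apache.hadoop.hdfs.server.datanode.ReplicaInfo;

    List<Block> finalized = Arrays.asList(new Block(1L, 512L, 1000L));
    List<ReplicaInfo> uc = Collections.emptyList();
    BlockListAsLongs report = new BlockListAsLongs(finalized, uc);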
 

Uses of Block in org.apache.hadoop.hdfs.server.datanode
 

Subclasses of Block in org.apache.hadoop.hdfs.server.datanode
 class ReplicaInfo
          This class is used by datanodes to maintain the metadata of their replicas.
 

Methods in org.apache.hadoop.hdfs.server.datanode that return Block
 Block FSDatasetInterface.getStoredBlock(long blkid)
           
 Block FSDataset.getStoredBlock(long blkid)
          
 Block DataNode.updateReplicaUnderRecovery(Block oldBlock, long recoveryId, long newLength)
          Update replica with the new generation stamp and length.
 

Methods in org.apache.hadoop.hdfs.server.datanode with parameters of type Block
 void FSDatasetInterface.adjustCrcChannelPosition(Block b, FSDatasetInterface.BlockWriteStreams stream, int checksumSize)
          Sets the file pointer of the checksum stream so that the last checksum will be overwritten.
 void FSDataset.adjustCrcChannelPosition(Block b, FSDatasetInterface.BlockWriteStreams streams, int checksumSize)
          Sets the offset in the meta file so that the last checksum will be overwritten.
 org.apache.hadoop.hdfs.server.datanode.ReplicaInPipelineInterface FSDatasetInterface.append(Block b, long newGS, long expectedBlockLen)
          Appends to a finalized replica and returns the meta info of the replica.
 org.apache.hadoop.hdfs.server.datanode.ReplicaInPipelineInterface FSDataset.append(Block b, long newGS, long expectedBlockLen)
           
 org.apache.hadoop.hdfs.server.datanode.ReplicaInPipelineInterface FSDatasetInterface.createRbw(Block b)
          Creates an RBW replica and returns the meta info of the replica.
 org.apache.hadoop.hdfs.server.datanode.ReplicaInPipelineInterface FSDataset.createRbw(Block b)
           
 org.apache.hadoop.hdfs.server.datanode.ReplicaInPipelineInterface FSDatasetInterface.createTemporary(Block b)
          Creates a temporary replica and returns the meta information of the replica.
 org.apache.hadoop.hdfs.server.datanode.ReplicaInPipelineInterface FSDataset.createTemporary(Block b)
           
 void FSDatasetInterface.finalizeBlock(Block b)
          Finalizes the block previously opened for writing using writeToBlock.
 void FSDataset.finalizeBlock(Block b)
          Completes the block write.
 File FSDataset.getBlockFile(Block b)
          Get File name for a given block.
 InputStream FSDatasetInterface.getBlockInputStream(Block b)
          Returns an input stream to read the contents of the specified block.
 InputStream FSDataset.getBlockInputStream(Block b)
           
 InputStream FSDatasetInterface.getBlockInputStream(Block b, long seekOffset)
          Returns an input stream at the specified offset of the specified block.
 InputStream FSDataset.getBlockInputStream(Block b, long seekOffset)
           
 File FSDataset.getFile(Block b)
          Turns the block identifier into a filename; the generation stamp is ignored.
 long FSDatasetInterface.getLength(Block b)
          Returns the specified block's on-disk length (excluding metadata).
 long FSDataset.getLength(Block b)
          Finds the block's on-disk length.
 FSDatasetInterface.MetaDataInputStream FSDatasetInterface.getMetaDataInputStream(Block b)
          Returns the metadata of block b as an input stream (and its length).
 FSDatasetInterface.MetaDataInputStream FSDataset.getMetaDataInputStream(Block b)
           
 long FSDatasetInterface.getMetaDataLength(Block b)
          Returns the length of the metadata file of the specified block.
 long FSDataset.getMetaDataLength(Block b)
           
protected  File FSDataset.getMetaFile(Block b)
           
 long FSDatasetInterface.getReplicaVisibleLength(Block block)
          Get the visible length of the specified replica.
 long FSDataset.getReplicaVisibleLength(Block block)
           
 long DataNode.getReplicaVisibleLength(Block block)
          Return the visible length of a replica.
 FSDatasetInterface.BlockInputStreams FSDatasetInterface.getTmpInputStreams(Block b, long blkoff, long ckoff)
          Returns an input stream at the specified offset of the specified block. The block is still in the tmp directory and is not finalized.
 FSDatasetInterface.BlockInputStreams FSDataset.getTmpInputStreams(Block b, long blkOffset, long ckoff)
          Returns handles to the block file and its metadata file.
 void FSDatasetInterface.invalidate(Block[] invalidBlks)
          Invalidates the specified blocks.
 void FSDataset.invalidate(Block[] invalidBlks)
          We're informed that a block is no longer valid.
 boolean FSDatasetInterface.isValidBlock(Block b)
          Is the block valid?
 boolean FSDataset.isValidBlock(Block b)
          Check whether the given block is a valid one.
 boolean FSDatasetInterface.metaFileExists(Block b)
          Does the meta file exist for this block?
 boolean FSDataset.metaFileExists(Block b)
           
protected  void DataNode.notifyNamenodeReceivedBlock(Block block, String delHint)
           
 org.apache.hadoop.hdfs.server.datanode.ReplicaInPipelineInterface FSDatasetInterface.recoverAppend(Block b, long newGS, long expectedBlockLen)
          Recovers a failed append to a finalized replica and returns the meta info of the replica.
 org.apache.hadoop.hdfs.server.datanode.ReplicaInPipelineInterface FSDataset.recoverAppend(Block b, long newGS, long expectedBlockLen)
           
 void FSDatasetInterface.recoverClose(Block b, long newGS, long expectedBlockLen)
          Recovers a failed pipeline close. It bumps the replica's generation stamp and finalizes the replica if it is an RBW replica.
 void FSDataset.recoverClose(Block b, long newGS, long expectedBlockLen)
           
 org.apache.hadoop.hdfs.server.datanode.ReplicaInPipelineInterface FSDatasetInterface.recoverRbw(Block b, long newGS, long minBytesRcvd, long maxBytesRcvd)
          Recovers an RBW replica and returns the meta info of the replica.
 org.apache.hadoop.hdfs.server.datanode.ReplicaInPipelineInterface FSDataset.recoverRbw(Block b, long newGS, long minBytesRcvd, long maxBytesRcvd)
           
 void FSDatasetInterface.unfinalizeBlock(Block b)
          Unfinalizes the block previously opened for writing using writeToBlock.
 void FSDataset.unfinalizeBlock(Block b)
          Removes the temporary block file (if any).
 boolean FSDataset.unlinkBlock(Block block, int numLinks)
          Make a copy of the block if this block is linked to an existing snapshot.
 ReplicaInfo FSDatasetInterface.updateReplicaUnderRecovery(Block oldBlock, long recoveryId, long newLength)
          Update replica's generation stamp and length and finalize it.
 ReplicaInfo FSDataset.updateReplicaUnderRecovery(Block oldBlock, long recoveryId, long newlength)
           
 Block DataNode.updateReplicaUnderRecovery(Block oldBlock, long recoveryId, long newLength)
          Update replica with the new generation stamp and length.
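
A hedged sketch of reading replica data through FSDatasetInterface; it assumes code running inside a datanode with a dataset reference in hand, and a block id and generation stamp that exist on that node (the three-argument Block constructor is an assumption of the sketch):

    import java.io.InputStream;
    import org.apache.hadoop.hdfs.protocol.Block;
    import org.apache.hadoop.hdfs.server.datanode.FSDatasetInterface;

    // Fragment: dump the first bytes of a stored block.
    static void dumpBlock(FSDatasetInterface dataset, long blockId, long genStamp) throws Exception {
      Block b = new Block(blockId, 0L, genStamp);
      if (!dataset.isValidBlock(b)) {
        return; // block is unknown or not finalized on this datanode
      }
      long len = dataset.getLength(b);                     // on-disk length, excluding metadata
      InputStream in = dataset.getBlockInputStream(b, 0L); // read from offset 0
      try {
        byte[] buf = new byte[(int) Math.min(len, 4096L)];
        int n = in.read(buf);
        System.out.println("read " + n + " of " + len + " bytes from block " + blockId);
      } finally {
        in.close();
      }
    }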
 

Uses of Block in org.apache.hadoop.hdfs.server.namenode
 

Fields in org.apache.hadoop.hdfs.server.namenode declared as Block
 Block DatanodeDescriptor.BlockTargetPair.block
           
 

Methods in org.apache.hadoop.hdfs.server.namenode with parameters of type Block
 void NameNode.abandonBlock(Block b, String src, String holder)
          The client needs to give up on the block.
 boolean FSNamesystem.abandonBlock(Block b, String src, String holder)
          The client would like to let go of the given block.
 LocatedBlock NameNode.addBlock(String src, String clientName, Block previous, DatanodeInfo[] excludedNodes)
           
 void CorruptReplicasMap.addToCorruptReplicasMap(Block blk, DatanodeDescriptor dn)
          Mark the block belonging to the datanode as corrupt.
 void FSNamesystem.blockReceived(DatanodeID nodeID, Block block, String delHint)
          The given node is reporting that it received a certain block.
 void NameNode.blockReceived(DatanodeRegistration nodeReg, Block[] blocks, String[] delHints)
           
abstract  DatanodeDescriptor BlockPlacementPolicy.chooseReplicaToDelete(FSInodeInfo srcInode, Block block, short replicationFactor, Collection<DatanodeDescriptor> existingReplicas, Collection<DatanodeDescriptor> moreExistingReplicas)
          Decide whether deleting the specified replica of the block still makes the block conform to the configured block placement policy.
 DatanodeDescriptor BlockPlacementPolicyDefault.chooseReplicaToDelete(FSInodeInfo inode, Block block, short replicationFactor, Collection<DatanodeDescriptor> first, Collection<DatanodeDescriptor> second)
          Decide whether deleting the specified replica of the block still makes the block conform to the configured block placement policy.
 void NameNode.commitBlockSynchronization(Block block, long newgenerationstamp, long newlength, boolean closeFile, boolean deleteblock, DatanodeID[] newtargets)
          Commit block synchronization in lease recovery.
 boolean NameNode.complete(String src, String clientName, Block last)
          The client is done writing data to the given filename, and would like to complete it.
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.CompleteFileStatus FSNamesystem.completeFile(String src, String holder, Block last)
           
 LocatedBlock FSNamesystem.getAdditionalBlock(String src, String clientName, Block previous, HashMap<org.apache.hadoop.net.Node,org.apache.hadoop.net.Node> excludedNodes)
          The client would like to obtain an additional block for the indicated filename (which is being written to).
 void FSNamesystem.markBlockAsCorrupt(Block blk, DatanodeInfo dn)
          Mark the block belonging to the datanode as corrupt.
 int FSNamesystem.numCorruptReplicas(Block blk)
           
 int CorruptReplicasMap.numCorruptReplicas(Block blk)
           
 LocatedBlock NameNode.updateBlockForPipeline(Block block, String clientName)
          Get a new generation stamp together with an access token for a block under construction. This method is called only when a client needs to recover a failed pipeline or set up a pipeline for appending to a block.
 void NameNode.updatePipeline(String clientName, Block oldBlock, Block newBlock, DatanodeID[] newNodes)
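
As an illustrative fragment only (FSNamesystem is internal to the namenode), marking a replica as corrupt and querying the corrupt-replica count might look as follows, assuming namesystem and a DatanodeInfo dn are already in scope and the block values are made up:

    Block blk = new Block(4242L, 512L, 1001L); // illustrative identifiers
    namesystem.markBlockAsCorrupt(blk, dn);    // record dn's replica of blk as corrupt
    int corrupt = namesystem.numCorruptReplicas(blk);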
           
 

Uses of Block in org.apache.hadoop.hdfs.server.protocol
 

Subclasses of Block in org.apache.hadoop.hdfs.server.protocol
 class ReplicaRecoveryInfo
          Replica recovery information.
 

Methods in org.apache.hadoop.hdfs.server.protocol that return Block
 Block BlocksWithLocations.BlockWithLocations.getBlock()
          Get the block.
 Block[] BlockCommand.getBlocks()
           
 Block InterDatanodeProtocol.updateReplicaUnderRecovery(Block oldBlock, long recoveryId, long newLength)
          Update replica with the new generation stamp and length.
 

Methods in org.apache.hadoop.hdfs.server.protocol with parameters of type Block
 void DatanodeProtocol.blockReceived(DatanodeRegistration registration, Block[] blocks, String[] delHints)
          blockReceived() allows the DataNode to tell the NameNode about recently-received block data, with a hint about which replica should preferably be deleted when there are excess replicas.
 void DatanodeProtocol.commitBlockSynchronization(Block block, long newgenerationstamp, long newlength, boolean closeFile, boolean deleteblock, DatanodeID[] newtargets)
          Commit block synchronization in lease recovery.
 Block InterDatanodeProtocol.updateReplicaUnderRecovery(Block oldBlock, long recoveryId, long newLength)
          Update replica with the new generation stamp and length.
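
A sketch of the inter-datanode recovery step above, assuming idp is an InterDatanodeProtocol proxy to the datanode holding the replica and that recovery has already settled on recoveryId and newLength:

    Block updated = idp.updateReplicaUnderRecovery(oldBlock, recoveryId, newLength);
    // The returned Block carries the new generation stamp and length.
    System.out.println("recovered to genstamp " + updated.getGenerationStamp()
        + ", length " + updated.getNumBytes());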
 

Constructors in org.apache.hadoop.hdfs.server.protocol with parameters of type Block
BlockCommand(int action, Block[] blocks)
          Create a BlockCommand for the given action.
BlockRecoveryCommand.RecoveringBlock(Block b, DatanodeInfo[] locs, long newGS)
          Create RecoveringBlock.
BlocksWithLocations.BlockWithLocations(Block b, String[] datanodes)
          Constructor.
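
For example, an invalidation command could be assembled as below. The block values are illustrative, and DNA_INVALIDATE is assumed to be the DatanodeProtocol action code for block invalidation in this version of the API:

    Block[] toInvalidate = {
      new Block(1L, 0L, 1000L),
      new Block(2L, 0L, 1000L),
    };
    BlockCommand cmd = new BlockCommand(DatanodeProtocol.DNA_INVALIDATE, toInvalidate);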
 



Copyright © 2009 The Apache Software Foundation