java.lang.Object
  org.apache.hadoop.conf.Configured
    org.apache.hadoop.hdfs.server.datanode.DataNode
@InterfaceAudience.Private
public class DataNode
extends Configured
implements InterDatanodeProtocol, ClientDatanodeProtocol, FSConstants, Runnable
DataNode is a class (and program) that stores a set of blocks for a DFS deployment. A single deployment can have one or many DataNodes. Each DataNode communicates regularly with a single NameNode. It also communicates with client code and other DataNodes from time to time.

DataNodes store a series of named blocks. The DataNode allows client code to read these blocks, or to write new block data. The DataNode may also, in response to instructions from its NameNode, delete blocks or copy blocks to/from other DataNodes.

The DataNode maintains just one critical table: block -> stream of bytes (of BLOCK_SIZE or less). This info is stored on a local disk. The DataNode reports the table's contents to the NameNode upon startup and every so often afterwards.

DataNodes spend their lives in an endless loop of asking the NameNode for something to do. A NameNode cannot connect to a DataNode directly; a NameNode simply returns values from functions invoked by a DataNode.

DataNodes maintain an open server socket so that client code or other DataNodes can read/write data. The host/port for this server is reported to the NameNode, which then sends that information to clients or other DataNodes that might be interested.
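The "endless loop" described above, together with run()'s documented behavior of retrying offerService() on any exception, can be sketched with a self-contained model. Everything here (NameNodeStub, the command strings, the retry bound) is an illustrative stand-in, not part of the Hadoop API:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.List;
import java.util.Queue;

// Self-contained sketch of the DataNode service loop; all names are
// hypothetical stand-ins for the real Hadoop types.
public class DataNodeLoopSketch {

    // Stand-in for the namenode side of DatanodeProtocol: the DataNode
    // always initiates the call; the NameNode only returns values and
    // never connects back to the DataNode.
    interface NameNodeStub {
        // Returns the next command for this DataNode, or "SHUTDOWN".
        String heartbeat();
    }

    // Models offerService(): poll the NameNode for work until told to stop.
    static List<String> offerService(NameNodeStub namenode) {
        List<String> executed = new ArrayList<>();
        while (true) {
            String cmd = namenode.heartbeat();
            if ("SHUTDOWN".equals(cmd)) {
                return executed;
            }
            executed.add(cmd); // e.g. delete a block, copy a block
        }
    }

    // Models run(): whatever exception offerService() throws, retry it
    // (bounded here so the sketch always terminates).
    static List<String> run(NameNodeStub namenode, int maxRetries) {
        for (int attempt = 0; ; attempt++) {
            try {
                return offerService(namenode);
            } catch (RuntimeException e) {
                if (attempt >= maxRetries) {
                    throw e;
                }
                // log and retry offerService(), as DataNode.run() does
            }
        }
    }

    public static void main(String[] args) {
        Queue<String> pending =
                new ArrayDeque<>(List.of("DELETE blk_1", "COPY blk_2"));
        NameNodeStub namenode =
                () -> pending.isEmpty() ? "SHUTDOWN" : pending.poll();
        System.out.println(run(namenode, 3)); // prints [DELETE blk_1, COPY blk_2]
    }
}
```

The inversion of control is the point: the NameNode queues work, and the DataNode pulls it via heartbeats, which is why a firewalled DataNode can still serve a reachable NameNode.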
Nested Class Summary

Nested classes/interfaces inherited from interface org.apache.hadoop.hdfs.protocol.FSConstants:
FSConstants.DatanodeReportType, FSConstants.SafeModeAction, FSConstants.UpgradeAction
Field Summary

| Modifier and Type | Field | Description |
|---|---|---|
| org.apache.hadoop.hdfs.server.datanode.DataBlockScanner | blockScanner | |
| org.apache.hadoop.util.Daemon | blockScannerThread | |
| FSDatasetInterface | data | |
| static String | DN_CLIENTTRACE_FORMAT | |
| DatanodeRegistration | dnRegistration | |
| static String | EMPTY_DEL_HINT | |
| org.apache.hadoop.ipc.Server | ipcServer | |
| static org.apache.commons.logging.Log | LOG | |
| DatanodeProtocol | namenode | |
| static int | PKT_HEADER_LEN | Header size for a packet |
Fields inherited from interface org.apache.hadoop.hdfs.server.protocol.InterDatanodeProtocol:
versionID

Fields inherited from interface org.apache.hadoop.hdfs.protocol.ClientDatanodeProtocol:
versionID
Method Summary

| Modifier and Type | Method | Description |
|---|---|---|
| protected void | checkDiskError() | Check if there is a disk failure and, if so, handle the error |
| protected void | checkDiskError(Exception e) | Check if the disk is out of space |
| static DataNode | createDataNode(String[] args, org.apache.hadoop.conf.Configuration conf) | Instantiate and start a single datanode daemon and wait for it to finish. |
| static InterDatanodeProtocol | createInterDataNodeProtocolProxy(DatanodeID datanodeid, org.apache.hadoop.conf.Configuration conf) | |
| static InetSocketAddress | createSocketAddr(String target) | Deprecated. |
| static DataNode | getDataNode() | Return the DataNode object |
| DatanodeRegistration | getDatanodeRegistration() | Return DatanodeRegistration |
| FSDatasetInterface | getFSDataset() | This method is used for testing. |
| String | getNamenode() | Return the namenode's identifier |
| InetSocketAddress | getNameNodeAddr() | |
| long | getProtocolVersion(String protocol, long clientVersion) | |
| long | getReplicaVisibleLength(Block block) | Return the visible length of a replica. |
| InetSocketAddress | getSelfAddr() | |
| ReplicaRecoveryInfo | initReplicaRecovery(BlockRecoveryCommand.RecoveringBlock rBlock) | Initialize a replica recovery. |
| static DataNode | instantiateDataNode(String[] args, org.apache.hadoop.conf.Configuration conf) | Instantiate a single datanode object. |
| static void | main(String[] args) | |
| protected Socket | newSocket() | Creates either an NIO or a regular socket, depending on socketWriteTimeout. |
| protected void | notifyNamenodeReceivedBlock(Block block, String delHint) | |
| void | offerService() | Main loop for the DataNode. |
| org.apache.hadoop.util.Daemon | recoverBlocks(Collection<BlockRecoveryCommand.RecoveringBlock> blocks) | |
| void | run() | No matter what kind of exception we get, keep retrying offerService(). |
| static void | runDatanodeDaemon(DataNode dn) | Start a single datanode daemon and wait for it to finish. |
| void | scheduleBlockReport(long delay) | This method arranges for the data node to send the block report at the next heartbeat. |
| static void | setNewStorageID(DatanodeRegistration dnReg) | |
| void | shutdown() | Shut down this instance of the datanode. |
| String | toString() | |
| Block | updateReplicaUnderRecovery(Block oldBlock, long recoveryId, long newLength) | Update replica with the new generation stamp and length. |
Methods inherited from class org.apache.hadoop.conf.Configured:
getConf, setConf

Methods inherited from class java.lang.Object:
clone, equals, finalize, getClass, hashCode, notify, notifyAll, wait, wait, wait
Field Detail
public static final org.apache.commons.logging.Log LOG
public static final String DN_CLIENTTRACE_FORMAT
public DatanodeProtocol namenode
public FSDatasetInterface data
public DatanodeRegistration dnRegistration
public static final String EMPTY_DEL_HINT
public org.apache.hadoop.hdfs.server.datanode.DataBlockScanner blockScanner
public org.apache.hadoop.util.Daemon blockScannerThread
public org.apache.hadoop.ipc.Server ipcServer
public static final int PKT_HEADER_LEN
Method Detail
@Deprecated
public static InetSocketAddress createSocketAddr(String target) throws IOException

Deprecated. Use NetUtils.createSocketAddr(String) instead.

Throws: IOException

protected Socket newSocket() throws IOException

Throws: IOException

public static DataNode getDataNode()

public static InterDatanodeProtocol createInterDataNodeProtocolProxy(DatanodeID datanodeid, org.apache.hadoop.conf.Configuration conf) throws IOException

Throws: IOException
public InetSocketAddress getNameNodeAddr()
public InetSocketAddress getSelfAddr()
public DatanodeRegistration getDatanodeRegistration()
public String getNamenode()
public static void setNewStorageID(DatanodeRegistration dnReg)
public void shutdown()
protected void checkDiskError(Exception e) throws IOException

Parameters: e - the exception that caused this checkDiskError call

Throws: IOException

protected void checkDiskError()

public void offerService() throws Exception

Throws: Exception
protected void notifyNamenodeReceivedBlock(Block block, String delHint)
public void run()

Specified by: run in interface Runnable
public static void runDatanodeDaemon(DataNode dn) throws IOException

Throws: IOException

public static DataNode instantiateDataNode(String[] args, org.apache.hadoop.conf.Configuration conf) throws IOException

Instantiate a single datanode object. The instance must then be started by calling runDatanodeDaemon(DataNode).

Throws: IOException
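The startup API above is two-phase: instantiateDataNode builds the object without starting it, runDatanodeDaemon starts a previously built instance, and createDataNode does both in one call. A minimal self-contained model of that contract (DaemonStub and every method name below are hypothetical stand-ins, not the real Hadoop classes):

```java
// Toy model of the documented two-phase startup contract; all names
// here are illustrative stand-ins, not Hadoop types.
public class StartupSketch {

    static class DaemonStub {
        private boolean running;
        boolean isRunning() { return running; }
    }

    // Mirrors instantiateDataNode: construct only; the caller must
    // start the daemon with runDaemon() afterwards.
    static DaemonStub instantiate() {
        return new DaemonStub();
    }

    // Mirrors runDatanodeDaemon: start a previously built daemon.
    static void runDaemon(DaemonStub dn) {
        dn.running = true;
    }

    // Mirrors createDataNode: instantiate and start in one call.
    static DaemonStub create() {
        DaemonStub dn = instantiate();
        runDaemon(dn);
        return dn;
    }

    public static void main(String[] args) {
        DaemonStub dn = instantiate();
        System.out.println(dn.isRunning()); // prints false: built, not started
        runDaemon(dn);
        System.out.println(dn.isRunning()); // prints true

        System.out.println(create().isRunning()); // prints true: one-shot path
    }
}
```

Splitting construction from startup lets a caller (a test harness, say) configure or inspect the instance before its service threads begin; createDataNode is the convenience path for the common case.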
public static DataNode createDataNode(String[] args, org.apache.hadoop.conf.Configuration conf) throws IOException

Throws: IOException

public String toString()

Overrides: toString in class Object
public void scheduleBlockReport(long delay)
public FSDatasetInterface getFSDataset()
public static void main(String[] args)
public org.apache.hadoop.util.Daemon recoverBlocks(Collection<BlockRecoveryCommand.RecoveringBlock> blocks)
public ReplicaRecoveryInfo initReplicaRecovery(BlockRecoveryCommand.RecoveringBlock rBlock) throws IOException

Specified by: initReplicaRecovery in interface InterDatanodeProtocol

Throws: IOException

public Block updateReplicaUnderRecovery(Block oldBlock, long recoveryId, long newLength) throws IOException

Specified by: updateReplicaUnderRecovery in interface InterDatanodeProtocol

Throws: IOException

public long getProtocolVersion(String protocol, long clientVersion) throws IOException

Specified by: getProtocolVersion in interface org.apache.hadoop.ipc.VersionedProtocol

Throws: IOException

public long getReplicaVisibleLength(Block block) throws IOException

Specified by: getReplicaVisibleLength in interface ClientDatanodeProtocol

Throws: IOException