org.apache.hadoop.hdfs.server.namenode
Class NameNode

java.lang.Object
  extended by org.apache.hadoop.hdfs.server.namenode.NameNode
All Implemented Interfaces:
ClientProtocol, FSConstants, DatanodeProtocol, NamenodeProtocol, NamenodeProtocols, org.apache.hadoop.ipc.VersionedProtocol, org.apache.hadoop.security.authorize.RefreshAuthorizationPolicyProtocol, org.apache.hadoop.security.RefreshUserToGroupMappingsProtocol
Direct Known Subclasses:
BackupNode

@InterfaceAudience.Private
public class NameNode
extends Object
implements NamenodeProtocols, FSConstants

NameNode serves as both directory namespace manager and "inode table" for the Hadoop DFS. There is a single NameNode running in any DFS deployment. (Well, except when there is a second backup/failover NameNode.)

The NameNode controls two critical tables:
1) filename -> blocksequence (namespace)
2) block -> machinelist ("inodes")

The first table is stored on disk and is very precious. The second table is rebuilt every time the NameNode comes up.

'NameNode' refers to both this class as well as the 'NameNode server'. The 'FSNamesystem' class actually performs most of the filesystem management. The majority of the 'NameNode' class itself is concerned with exposing the IPC interface and the HTTP server to the outside world, plus some configuration management.

NameNode implements the ClientProtocol interface, which allows clients to ask for DFS services. ClientProtocol is not designed for direct use by authors of DFS client code; end-users should instead use the org.apache.hadoop.fs.FileSystem class.

NameNode also implements the DatanodeProtocol interface, used by DataNode programs that actually store DFS data blocks. These methods are invoked repeatedly and automatically by all the DataNodes in a DFS deployment.

NameNode also implements the NamenodeProtocol interface, used by secondary namenodes or rebalancing processes to retrieve partial name-node state, for example a partial blocksMap.
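As a minimal sketch of the recommended end-user path, a client writes through org.apache.hadoop.fs.FileSystem and never touches ClientProtocol directly; the filesystem URI and path below are illustrative assumptions:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class FileSystemClientSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Assumption: a NameNode is reachable at this address.
    conf.set("fs.default.name", "hdfs://namenode.example.com:8020");
    FileSystem fs = FileSystem.get(conf);
    FSDataOutputStream out = fs.create(new Path("/tmp/example.txt"));
    out.writeUTF("hello");
    out.close();   // issues complete() to the NameNode under the covers
    fs.close();
  }
}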


Nested Class Summary
 
Nested classes/interfaces inherited from interface org.apache.hadoop.hdfs.protocol.FSConstants
FSConstants.DatanodeReportType, FSConstants.SafeModeAction, FSConstants.UpgradeAction
 
Field Summary
static int DEFAULT_PORT
           
protected  InetSocketAddress httpAddress
          HTTP server address
protected  org.apache.hadoop.http.HttpServer httpServer
          httpServer
static org.apache.commons.logging.Log LOG
           
protected  FSNamesystem namesystem
           
protected  NamenodeRegistration nodeRegistration
          Registration information of this name-node
protected  HdfsConstants.NamenodeRole role
           
protected  InetSocketAddress rpcAddress
          RPC server address
protected  org.apache.hadoop.ipc.Server server
          RPC server
static org.apache.commons.logging.Log stateChangeLog
           
protected  boolean stopRequested
          only used for testing purposes
 
Fields inherited from interface org.apache.hadoop.hdfs.protocol.ClientProtocol
GET_STATS_CAPACITY_IDX, GET_STATS_CORRUPT_BLOCKS_IDX, GET_STATS_MISSING_BLOCKS_IDX, GET_STATS_REMAINING_IDX, GET_STATS_UNDER_REPLICATED_IDX, GET_STATS_USED_IDX, versionID
 
Fields inherited from interface org.apache.hadoop.hdfs.server.protocol.DatanodeProtocol
DISK_ERROR, DNA_ACCESSKEYUPDATE, DNA_FINALIZE, DNA_INVALIDATE, DNA_RECOVERBLOCK, DNA_REGISTER, DNA_SHUTDOWN, DNA_TRANSFER, DNA_UNKNOWN, FATAL_DISK_ERROR, INVALID_BLOCK, NOTIFY, versionID
 
Fields inherited from interface org.apache.hadoop.hdfs.server.protocol.NamenodeProtocol
ACT_CHECKPOINT, ACT_SHUTDOWN, ACT_UNKNOWN, FATAL, JA_CHECKPOINT_TIME, JA_IS_ALIVE, JA_JOURNAL, JA_JSPOOL_START, NOTIFY, versionID
 
Fields inherited from interface org.apache.hadoop.security.authorize.RefreshAuthorizationPolicyProtocol
versionID
 
Fields inherited from interface org.apache.hadoop.security.RefreshUserToGroupMappingsProtocol
versionID
 
Fields inherited from interface org.apache.hadoop.hdfs.protocol.FSConstants
BLOCK_INVALIDATE_CHUNK, BLOCKREPORT_INITIAL_DELAY, BLOCKREPORT_INTERVAL, BUFFER_SIZE, DEFAULT_BLOCK_SIZE, DEFAULT_BYTES_PER_CHECKSUM, DEFAULT_DATA_SOCKET_SIZE, DEFAULT_FILE_BUFFER_SIZE, DEFAULT_REPLICATION_FACTOR, DEFAULT_WRITE_PACKET_SIZE, HDFS_URI_SCHEME, HEARTBEAT_INTERVAL, LAYOUT_VERSION, LEASE_HARDLIMIT_PERIOD, LEASE_RECOVER_PERIOD, LEASE_SOFTLIMIT_PERIOD, MAX_PATH_DEPTH, MAX_PATH_LENGTH, MIN_BLOCKS_FOR_WRITE, QUOTA_DONT_SET, QUOTA_RESET, SIZE_OF_INTEGER, SMALL_BUFFER_SIZE
 
Constructor Summary
  NameNode(org.apache.hadoop.conf.Configuration conf)
          Start NameNode.
protected NameNode(org.apache.hadoop.conf.Configuration conf, HdfsConstants.NamenodeRole role)
           
 
Method Summary
 void abandonBlock(Block b, String src, String holder)
          The client needs to give up on the block.
 LocatedBlock addBlock(String src, String clientName, Block previous, DatanodeInfo[] excludedNodes)
          A client that wants to write an additional block to the indicated filename (which must currently be open for writing) should call addBlock().
 LocatedBlock append(String src, String clientName)
          Append to the end of the file.
 void blockReceived(DatanodeRegistration nodeReg, Block[] blocks, String[] delHints)
          blockReceived() allows the DataNode to tell the NameNode about recently-received block data, with a hint for the preferred replica to be deleted when there are excess replicas of a block.
 DatanodeCommand blockReport(DatanodeRegistration nodeReg, long[] blocks)
          blockReport() tells the NameNode about all the locally-stored blocks.
 void cancelDelegationToken(org.apache.hadoop.security.token.Token<DelegationTokenIdentifier> token)
          Cancel an existing delegation token.
 void commitBlockSynchronization(Block block, long newgenerationstamp, long newlength, boolean closeFile, boolean deleteblock, DatanodeID[] newtargets)
          Commit block synchronization in lease recovery
 boolean complete(String src, String clientName, Block last)
          The client is done writing data to the given filename, and would like to complete it.
 void concat(String trg, String[] src)
          Moves the blocks from the source files to trg and deletes the sources.
 void create(String src, org.apache.hadoop.fs.permission.FsPermission masked, String clientName, org.apache.hadoop.io.EnumSetWritable<org.apache.hadoop.fs.CreateFlag> flag, boolean createParent, short replication, long blockSize)
          Create a new file entry in the namespace.
static NameNode createNameNode(String[] argv, org.apache.hadoop.conf.Configuration conf)
           
 void createSymlink(String target, String link, org.apache.hadoop.fs.permission.FsPermission dirPerms, boolean createParent)
          Create a symbolic link to a file or directory.
 boolean delete(String src)
          Deprecated. 
 boolean delete(String src, boolean recursive)
          Delete the given file or directory from the file system.
 UpgradeStatusReport distributedUpgradeProgress(FSConstants.UpgradeAction action)
          Report distributed upgrade progress or force current upgrade to proceed.
 void endCheckpoint(NamenodeRegistration registration, CheckpointSignature sig)
          A request to the active name-node to finalize previously started checkpoint.
 void errorReport(DatanodeRegistration nodeReg, int errorCode, String msg)
          errorReport() tells the NameNode about something that has gone awry.
 void errorReport(NamenodeRegistration registration, int errorCode, String msg)
          Report to the active name-node an error occurred on a subordinate node.
 void finalizeUpgrade()
          Finalize previous upgrade.
static void format(org.apache.hadoop.conf.Configuration conf)
          Format a new filesystem.
 void fsync(String src, String clientName)
          Write all metadata for this file into persistent storage.
 ExportedAccessKeys getAccessKeys()
          Get the current access keys
static InetSocketAddress getAddress(org.apache.hadoop.conf.Configuration conf)
           
static InetSocketAddress getAddress(String address)
           
 LocatedBlocks getBlockLocations(String src, long offset, long length)
          Get locations of the blocks of the specified file within the specified range.
 BlocksWithLocations getBlocks(DatanodeInfo datanode, long size)
          Get a list of blocks belonging to datanode whose total size equals size.
 org.apache.hadoop.fs.ContentSummary getContentSummary(String path)
          Get ContentSummary rooted at the specified directory.
 org.apache.hadoop.fs.FileStatus[] getCorruptFiles()
          
 DatanodeInfo[] getDatanodeReport(FSConstants.DatanodeReportType type)
          Get a report on the system's current datanodes.
 org.apache.hadoop.security.token.Token<DelegationTokenIdentifier> getDelegationToken(org.apache.hadoop.io.Text renewer)
          Get a valid Delegation Token.
 long getEditLogSize()
          Deprecated. 
 HdfsFileStatus getFileInfo(String src)
          Get the file info for a specific file.
 HdfsFileStatus getFileLinkInfo(String src)
          Get the file info for a specific file.
 FSImage getFSImage()
           
 File getFsImageName()
          Returns the name of the fsImage file
 File[] getFsImageNameCheckpoint()
          Returns the name of the fsImage file uploaded by periodic checkpointing
static String getHostPortString(InetSocketAddress addr)
          Compose a "host:port" string from the address.
 InetSocketAddress getHttpAddress()
          Returns the address of the NameNode's HTTP server, which is used to access the name-node web UI.
protected  InetSocketAddress getHttpServerAddress(org.apache.hadoop.conf.Configuration conf)
           
 String getLinkTarget(String path)
          Resolve the first symbolic link on the specified path.
 DirectoryListing getListing(String src, byte[] startAfter)
          Get a partial listing of the indicated directory
 InetSocketAddress getNameNodeAddress()
          Returns the address on which the NameNode's RPC server is listening.
static NameNodeMetrics getNameNodeMetrics()
           
 long getPreferredBlockSize(String filename)
          Get the block size for the given file.
 long getProtocolVersion(String protocol, long clientVersion)
           
 HdfsConstants.NamenodeRole getRole()
           
protected  InetSocketAddress getRpcServerAddress(org.apache.hadoop.conf.Configuration conf)
           
 org.apache.hadoop.fs.FsServerDefaults getServerDefaults()
          Get server default values for a number of configuration params.
 long[] getStats()
          Get a set of statistics about the filesystem.
static URI getUri(InetSocketAddress namenode)
           
protected  void initialize(org.apache.hadoop.conf.Configuration conf)
          Initialize name-node.
 boolean isInSafeMode()
          Is the cluster currently in safe mode?
 void join()
          Wait for service to finish.
 void journal(NamenodeRegistration registration, int jAction, int length, byte[] args)
          Journal edit records.
 long journalSize(NamenodeRegistration registration)
          Get the size of the active name-node journal (edit log) in bytes.
protected  void loadNamesystem(org.apache.hadoop.conf.Configuration conf)
           
static void main(String[] argv)
           
 void metaSave(String filename)
          Dumps namenode state into specified file
 boolean mkdirs(String src, org.apache.hadoop.fs.permission.FsPermission masked, boolean createParent)
          Create a directory (or hierarchy of directories) with the given name and permission.
 UpgradeCommand processUpgradeCommand(UpgradeCommand comm)
          This is a very general way to send a command to the name-node during distributed upgrade process.
 void refreshNodes()
          Refresh the list of datanodes that the namenode should allow to connect.
 void refreshServiceAcl()
           
 void refreshUserToGroupsMappings(org.apache.hadoop.conf.Configuration conf)
           
 NamenodeRegistration register(NamenodeRegistration registration)
          Register a subordinate name-node like backup node.
 DatanodeRegistration registerDatanode(DatanodeRegistration nodeReg)
          Register Datanode.
 boolean rename(String src, String dst)
          Deprecated. 
 void rename(String src, String dst, org.apache.hadoop.fs.Options.Rename... options)
          Rename src to dst.
 long renewDelegationToken(org.apache.hadoop.security.token.Token<DelegationTokenIdentifier> token)
          Renew an existing delegation token.
 void renewLease(String clientName)
          Client programs can cause stateful changes in the NameNode that affect other clients.
 void reportBadBlocks(LocatedBlock[] blocks)
          The client has detected an error on the specified located blocks and is reporting them to the server.
 boolean restoreFailedStorage(String arg)
          Enable or disable the restore of failed storage replicas.
 CheckpointSignature rollEditLog()
          Deprecated. 
 void rollFsImage()
          Deprecated. 
 void saveNamespace()
          Save namespace image.
 DatanodeCommand[] sendHeartbeat(DatanodeRegistration nodeReg, long capacity, long dfsUsed, long remaining, int xmitsInProgress, int xceiverCount)
          Notifies the name node that the data node is alive, and returns an array of block-oriented commands for the datanode to execute.
protected  void setHttpServerAddress(org.apache.hadoop.conf.Configuration conf)
           
 void setOwner(String src, String username, String groupname)
          Set the owner of a path (i.e. a file or a directory).
 void setPermission(String src, org.apache.hadoop.fs.permission.FsPermission permissions)
          Set permissions for an existing file/directory.
 void setQuota(String path, long namespaceQuota, long diskspaceQuota)
          Set the quota for a directory.
 boolean setReplication(String src, short replication)
          Set replication for an existing file.
protected  void setRpcServerAddress(org.apache.hadoop.conf.Configuration conf)
           
 boolean setSafeMode(FSConstants.SafeModeAction action)
          Enter, leave or get safe mode.
 void setTimes(String src, long mtime, long atime)
          Sets the modification and access time of the file to the specified time.
 NamenodeCommand startCheckpoint(NamenodeRegistration registration)
          A request to the active name-node to start a checkpoint.
 void stop()
          Stop all NameNode threads and wait for all to finish.
 LocatedBlock updateBlockForPipeline(Block block, String clientName)
          Get a new generation stamp together with an access token for a block under construction. This method is called only when a client needs to recover a failed pipeline or set up a pipeline for appending to a block.
 void updatePipeline(String clientName, Block oldBlock, Block newBlock, DatanodeID[] newNodes)
          Update a pipeline for a block under construction
 void verifyRequest(NodeRegistration nodeReg)
          Verify request.
 void verifyVersion(int version)
          Verify version.
 NamespaceInfo versionRequest()
          Request name-node version and storage information.
 
Methods inherited from class java.lang.Object
clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
 

Field Detail

DEFAULT_PORT

public static final int DEFAULT_PORT
See Also:
Constant Field Values

LOG

public static final org.apache.commons.logging.Log LOG

stateChangeLog

public static final org.apache.commons.logging.Log stateChangeLog

namesystem

protected FSNamesystem namesystem

role

protected HdfsConstants.NamenodeRole role

server

protected org.apache.hadoop.ipc.Server server
RPC server


rpcAddress

protected InetSocketAddress rpcAddress
RPC server address


httpServer

protected org.apache.hadoop.http.HttpServer httpServer
httpServer


httpAddress

protected InetSocketAddress httpAddress
HTTP server address


stopRequested

protected boolean stopRequested
only used for testing purposes


nodeRegistration

protected NamenodeRegistration nodeRegistration
Registration information of this name-node

Constructor Detail

NameNode

public NameNode(org.apache.hadoop.conf.Configuration conf)
         throws IOException
Start NameNode.

The name-node can be started with one of the following startup options: REGULAR (normal startup), FORMAT (format the name-node), BACKUP (start a backup node), CHECKPOINT (start a checkpoint node), UPGRADE (start the cluster with a new software version), ROLLBACK (roll the cluster back to the previous version), FINALIZE (finalize the previous upgrade), or IMPORT (import a checkpoint image).

The option is passed via the configuration field dfs.namenode.startup. If the user passes port zero in the conf, the conf will be modified to reflect the actual ports on which the NameNode is up and running.

Parameters:
conf - the configuration
Throws:
IOException
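
A hedged sketch of starting a NameNode in-process, for example in a test; the configuration key and address are illustrative assumptions, and format() is shown only because a brand-new storage location needs it:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.server.namenode.NameNode;

public class NameNodeStartSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Port 0 asks for a free port; per the javadoc above, conf is updated
    // with the actual port once the servers are running.
    conf.set("fs.default.name", "hdfs://127.0.0.1:0");
    NameNode.format(conf);            // destroys any existing filesystem here
    NameNode namenode = new NameNode(conf);
    // ... serve requests ...
    namenode.stop();                  // stop all NameNode threads
    namenode.join();                  // wait for them to finish
  }
}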

NameNode

protected NameNode(org.apache.hadoop.conf.Configuration conf,
                   HdfsConstants.NamenodeRole role)
            throws IOException
Throws:
IOException
Method Detail

getProtocolVersion

public long getProtocolVersion(String protocol,
                               long clientVersion)
                        throws IOException
Specified by:
getProtocolVersion in interface org.apache.hadoop.ipc.VersionedProtocol
Throws:
IOException

format

public static void format(org.apache.hadoop.conf.Configuration conf)
                   throws IOException
Format a new filesystem. Destroys any filesystem that may already exist at this location.

Throws:
IOException

getNameNodeMetrics

public static NameNodeMetrics getNameNodeMetrics()

getAddress

public static InetSocketAddress getAddress(String address)

getAddress

public static InetSocketAddress getAddress(org.apache.hadoop.conf.Configuration conf)

getUri

public static URI getUri(InetSocketAddress namenode)

getHostPortString

public static String getHostPortString(InetSocketAddress addr)
Compose a "host:port" string from the address.


getRole

public HdfsConstants.NamenodeRole getRole()

getRpcServerAddress

protected InetSocketAddress getRpcServerAddress(org.apache.hadoop.conf.Configuration conf)
                                         throws IOException
Throws:
IOException

setRpcServerAddress

protected void setRpcServerAddress(org.apache.hadoop.conf.Configuration conf)

getHttpServerAddress

protected InetSocketAddress getHttpServerAddress(org.apache.hadoop.conf.Configuration conf)

setHttpServerAddress

protected void setHttpServerAddress(org.apache.hadoop.conf.Configuration conf)

loadNamesystem

protected void loadNamesystem(org.apache.hadoop.conf.Configuration conf)
                       throws IOException
Throws:
IOException

initialize

protected void initialize(org.apache.hadoop.conf.Configuration conf)
                   throws IOException
Initialize name-node.

Parameters:
conf - the configuration
Throws:
IOException

join

public void join()
Wait for service to finish. (Normally, it runs forever.)


stop

public void stop()
Stop all NameNode threads and wait for all to finish.


getBlocks

public BlocksWithLocations getBlocks(DatanodeInfo datanode,
                                     long size)
                              throws IOException
Description copied from interface: NamenodeProtocol
Get a list of blocks belonging to datanode whose total size equals size.

Specified by:
getBlocks in interface NamenodeProtocol
Parameters:
datanode - a data node
size - requested size
Returns:
a list of blocks & their locations
Throws:
IOException
See Also:
Balancer

getAccessKeys

public ExportedAccessKeys getAccessKeys()
                                 throws IOException
Get the current access keys

Specified by:
getAccessKeys in interface NamenodeProtocol
Returns:
ExportedAccessKeys containing current access keys
Throws:
IOException

errorReport

public void errorReport(NamenodeRegistration registration,
                        int errorCode,
                        String msg)
                 throws IOException
Description copied from interface: NamenodeProtocol
Report to the active name-node an error occurred on a subordinate node. Depending on the error code the active node may decide to unregister the reporting node.

Specified by:
errorReport in interface NamenodeProtocol
Parameters:
registration - requesting node.
errorCode - indicates the error
msg - free text description of the error
Throws:
IOException

register

public NamenodeRegistration register(NamenodeRegistration registration)
                              throws IOException
Description copied from interface: NamenodeProtocol
Register a subordinate name-node like backup node.

Specified by:
register in interface NamenodeProtocol
Returns:
NamenodeRegistration of the node, which this node has just registered with.
Throws:
IOException

startCheckpoint

public NamenodeCommand startCheckpoint(NamenodeRegistration registration)
                                throws IOException
Description copied from interface: NamenodeProtocol
A request to the active name-node to start a checkpoint. The name-node should decide whether to admit it or reject. The name-node also decides what should be done with the backup node image before and after the checkpoint.

Specified by:
startCheckpoint in interface NamenodeProtocol
Parameters:
registration - the requesting node
Returns:
CheckpointCommand if checkpoint is allowed.
Throws:
IOException
See Also:
CheckpointCommand, NamenodeCommand, NamenodeProtocol.ACT_SHUTDOWN

endCheckpoint

public void endCheckpoint(NamenodeRegistration registration,
                          CheckpointSignature sig)
                   throws IOException
Description copied from interface: NamenodeProtocol
A request to the active name-node to finalize previously started checkpoint.

Specified by:
endCheckpoint in interface NamenodeProtocol
Parameters:
registration - the requesting node
sig - CheckpointSignature which identifies the checkpoint.
Throws:
IOException

journalSize

public long journalSize(NamenodeRegistration registration)
                 throws IOException
Description copied from interface: NamenodeProtocol
Get the size of the active name-node journal (edit log) in bytes.

Specified by:
journalSize in interface NamenodeProtocol
Parameters:
registration - the requesting node
Returns:
The number of bytes in the journal.
Throws:
IOException

journal

public void journal(NamenodeRegistration registration,
                    int jAction,
                    int length,
                    byte[] args)
             throws IOException
Description copied from interface: NamenodeProtocol
Journal edit records. This message is sent by the active name-node to the backup node via EditLogBackupOutputStream in order to synchronize meta-data changes with the backup namespace image.

Specified by:
journal in interface NamenodeProtocol
Parameters:
registration - active node registration
jAction - journal action
length - length of the byte array
args - byte array containing serialized journal records
Throws:
IOException

getDelegationToken

public org.apache.hadoop.security.token.Token<DelegationTokenIdentifier> getDelegationToken(org.apache.hadoop.io.Text renewer)
                                                                                     throws IOException
Description copied from interface: ClientProtocol
Get a valid Delegation Token.

Specified by:
getDelegationToken in interface ClientProtocol
Parameters:
renewer - the designated renewer for the token
Returns:
Token
Throws:
IOException

renewDelegationToken

public long renewDelegationToken(org.apache.hadoop.security.token.Token<DelegationTokenIdentifier> token)
                          throws org.apache.hadoop.security.token.SecretManager.InvalidToken,
                                 IOException
Description copied from interface: ClientProtocol
Renew an existing delegation token.

Specified by:
renewDelegationToken in interface ClientProtocol
Parameters:
token - delegation token obtained earlier
Returns:
the new expiration time
Throws:
IOException
org.apache.hadoop.security.token.SecretManager.InvalidToken

cancelDelegationToken

public void cancelDelegationToken(org.apache.hadoop.security.token.Token<DelegationTokenIdentifier> token)
                           throws IOException
Description copied from interface: ClientProtocol
Cancel an existing delegation token.

Specified by:
cancelDelegationToken in interface ClientProtocol
Parameters:
token - delegation token
Throws:
IOException
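
Taken together, getDelegationToken, renewDelegationToken, and cancelDelegationToken form a token lifecycle. A minimal sketch, assuming namenode is an already-obtained ClientProtocol proxy on a security-enabled cluster and the renewer name is illustrative:

import java.io.IOException;
import org.apache.hadoop.hdfs.protocol.ClientProtocol;
import org.apache.hadoop.hdfs.security.token.delegation.DelegationTokenIdentifier;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.security.token.Token;

public class TokenLifecycleSketch {
  static void tokenLifecycle(ClientProtocol namenode) throws IOException {
    // Obtain a token naming the party allowed to renew it on our behalf.
    Token<DelegationTokenIdentifier> token =
        namenode.getDelegationToken(new Text("jobtracker"));
    long newExpiry = namenode.renewDelegationToken(token);  // extend lifetime
    namenode.cancelDelegationToken(token);                  // revoke it
  }
}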

getBlockLocations

public LocatedBlocks getBlockLocations(String src,
                                       long offset,
                                       long length)
                                throws IOException
Get locations of the blocks of the specified file within the specified range. DataNode locations for each block are sorted by the proximity to the client.

Return LocatedBlocks which contains file length, blocks and their locations. DataNode locations for each block are sorted by the distance to the client's address.

The client will then have to contact one of the indicated DataNodes to obtain the actual data.

Specified by:
getBlockLocations in interface ClientProtocol
Parameters:
src - file name
offset - range start offset
length - range length
Returns:
file length and array of blocks with their locations
Throws:
IOException
org.apache.hadoop.fs.UnresolvedLinkException - if the path contains a symlink.
FileNotFoundException - if the path does not exist.
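
A sketch of how a caller might map a file's blocks to datanodes with this method, assuming namenode is a ClientProtocol proxy and the path is illustrative:

import org.apache.hadoop.hdfs.protocol.ClientProtocol;
import org.apache.hadoop.hdfs.protocol.DatanodeInfo;
import org.apache.hadoop.hdfs.protocol.LocatedBlock;
import org.apache.hadoop.hdfs.protocol.LocatedBlocks;

public class BlockLocationsSketch {
  static void printBlockLocations(ClientProtocol namenode) throws Exception {
    LocatedBlocks located =
        namenode.getBlockLocations("/user/alice/data.bin", 0, Long.MAX_VALUE);
    for (LocatedBlock lb : located.getLocatedBlocks()) {
      DatanodeInfo[] nodes = lb.getLocations();
      if (nodes.length > 0) {
        // Locations are sorted by proximity to the client; [0] is closest.
        System.out.println(lb.getBlock() + " -> " + nodes[0].getName());
      }
    }
  }
}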

getServerDefaults

public org.apache.hadoop.fs.FsServerDefaults getServerDefaults()
                                                        throws IOException
Get server default values for a number of configuration params.

Specified by:
getServerDefaults in interface ClientProtocol
Returns:
a set of server default configuration values
Throws:
IOException

create

public void create(String src,
                   org.apache.hadoop.fs.permission.FsPermission masked,
                   String clientName,
                   org.apache.hadoop.io.EnumSetWritable<org.apache.hadoop.fs.CreateFlag> flag,
                   boolean createParent,
                   short replication,
                   long blockSize)
            throws IOException
Create a new file entry in the namespace.

This will create an empty file specified by the source path. The path should reflect a full path originated at the root. The name-node does not have a notion of "current" directory for a client.

Once created, the file is visible and available for read to other clients. However, other clients cannot delete, re-create, or rename it until the file is completed or its lease expires.

Blocks have a maximum size. Clients that intend to create multi-block files must also use ClientProtocol.addBlock(String, String, Block, DatanodeInfo[]).

Specified by:
create in interface ClientProtocol
Parameters:
src - path of the file being created.
masked - masked permission.
clientName - name of the current client.
flag - indicates whether the file should be created, overwritten if it already exists, or appended to.
createParent - create missing parent directory if true
replication - block replication factor.
blockSize - maximum block size.
Throws:
org.apache.hadoop.security.AccessControlException - if permission to create the file is denied by the system. As usual, on the client side the exception will be wrapped in a RemoteException.
QuotaExceededException - if the file creation violates any quota restriction
IOException - if other errors occur.
org.apache.hadoop.fs.UnresolvedLinkException - if the path contains a symlink.
AlreadyBeingCreatedException - if the file is already being created by another client.
NSQuotaExceededException - if the namespace quota is exceeded.
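
The create/addBlock/complete sequence can be sketched end-to-end at the protocol level. This is a sketch only: streaming the actual bytes to the returned datanodes (the data transfer protocol) is elided, namenode is assumed to be a ClientProtocol proxy, and the path, client name, and null arguments are illustrative assumptions:

import java.util.EnumSet;
import org.apache.hadoop.fs.CreateFlag;
import org.apache.hadoop.fs.permission.FsPermission;
import org.apache.hadoop.hdfs.protocol.Block;
import org.apache.hadoop.hdfs.protocol.ClientProtocol;
import org.apache.hadoop.hdfs.protocol.LocatedBlock;
import org.apache.hadoop.io.EnumSetWritable;

public class WriteSequenceSketch {
  static void writeOneBlockFile(ClientProtocol namenode) throws Exception {
    String src = "/tmp/sketch.dat";
    String client = "sketch-client";
    namenode.create(src, FsPermission.getDefault(), client,
        new EnumSetWritable<CreateFlag>(EnumSet.of(CreateFlag.CREATE)),
        true /* createParent */, (short) 3, 64L * 1024 * 1024);
    // null previous: this is the first block; null exclusions assumed OK.
    LocatedBlock lb = namenode.addBlock(src, client, null, null);
    // ... stream the block's bytes to lb.getLocations() here ...
    Block last = lb.getBlock();
    while (!namenode.complete(src, client, last)) {
      Thread.sleep(400);  // retry until minimally replicated, per the javadoc
    }
  }
}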

append

public LocatedBlock append(String src,
                           String clientName)
                    throws IOException
Append to the end of the file.

Specified by:
append in interface ClientProtocol
Parameters:
src - path of the file being created.
clientName - name of the current client.
Returns:
information about the last partial block if any.
Throws:
org.apache.hadoop.security.AccessControlException - if permission to append to the file is denied by the system. As usual, on the client side the exception will be wrapped in a RemoteException. Appending is allowed only if the server is configured with dfs.support.append set to true; otherwise an IOException is thrown.
IOException - if other errors occur.
org.apache.hadoop.fs.UnresolvedLinkException - if the path contains a symlink.

setReplication

public boolean setReplication(String src,
                              short replication)
                       throws IOException
Set replication for an existing file.

The NameNode sets replication to the new value and returns. The actual block replication is not expected to be performed during this method call. The blocks will be populated or removed in the background as the result of the routine block maintenance procedures.

Specified by:
setReplication in interface ClientProtocol
Parameters:
src - file name
replication - new replication
Returns:
true if successful; false if file does not exist or is a directory
Throws:
IOException
org.apache.hadoop.fs.UnresolvedLinkException - if the path contains a symlink.

setPermission

public void setPermission(String src,
                          org.apache.hadoop.fs.permission.FsPermission permissions)
                   throws IOException
Set permissions for an existing file/directory.

Specified by:
setPermission in interface ClientProtocol
Throws:
org.apache.hadoop.fs.UnresolvedLinkException - if the path contains a symlink.
IOException

setOwner

public void setOwner(String src,
                     String username,
                     String groupname)
              throws IOException
Set the owner of a path (i.e. a file or a directory). The parameters username and groupname cannot both be null.

Specified by:
setOwner in interface ClientProtocol
Parameters:
username - If it is null, the original username remains unchanged.
groupname - If it is null, the original groupname remains unchanged.
Throws:
org.apache.hadoop.fs.UnresolvedLinkException - if the path contains a symlink.
IOException

addBlock

public LocatedBlock addBlock(String src,
                             String clientName,
                             Block previous,
                             DatanodeInfo[] excludedNodes)
                      throws IOException
Description copied from interface: ClientProtocol
A client that wants to write an additional block to the indicated filename (which must currently be open for writing) should call addBlock(). addBlock() allocates a new block and the set of datanodes that the block data should be replicated to. addBlock() also commits the previous block by reporting to the name-node the actual generation stamp and the length of the block that the client has transmitted to data-nodes.

Specified by:
addBlock in interface ClientProtocol
Parameters:
src - the file being created
clientName - the name of the client that adds the block
previous - previous block
excludedNodes - a list of nodes that should not be allocated for the current block
Returns:
LocatedBlock allocated block information.
Throws:
org.apache.hadoop.fs.UnresolvedLinkException - if the path contains a symlink.
DSQuotaExceededException - if the directory's quota is exceeded.
IOException

abandonBlock

public void abandonBlock(Block b,
                         String src,
                         String holder)
                  throws IOException
The client needs to give up on the block.

Specified by:
abandonBlock in interface ClientProtocol
Throws:
org.apache.hadoop.fs.UnresolvedLinkException - if the path contains a symlink.
IOException

complete

public boolean complete(String src,
                        String clientName,
                        Block last)
                 throws IOException
The client is done writing data to the given filename, and would like to complete it. The function returns whether the file has been closed successfully. If the function returns false, the caller should try again. complete() also commits the last block of the file by reporting to the name-node the actual generation stamp and the length of the block that the client has transmitted to data-nodes. A call to complete() will not return true until all the file's blocks have been replicated the minimum number of times. Thus, DataNode failures may cause a client to call complete() several times before succeeding.

Specified by:
complete in interface ClientProtocol
Throws:
org.apache.hadoop.fs.UnresolvedLinkException - if the path contains a symlink.
IOException

reportBadBlocks

public void reportBadBlocks(LocatedBlock[] blocks)
                     throws IOException
The client has detected an error on the specified located blocks and is reporting them to the server. For now, the namenode will mark the blocks as corrupt. In the future we might check whether the blocks are actually corrupt.

Specified by:
reportBadBlocks in interface ClientProtocol
Specified by:
reportBadBlocks in interface DatanodeProtocol
Parameters:
blocks - Array of located blocks to report
Throws:
IOException

updateBlockForPipeline

public LocatedBlock updateBlockForPipeline(Block block,
                                           String clientName)
                                    throws IOException
Get a new generation stamp together with an access token for a block under construction. This method is called only when a client needs to recover a failed pipeline or set up a pipeline for appending to a block.

Specified by:
updateBlockForPipeline in interface ClientProtocol
Parameters:
block - a block
clientName - the name of the client
Returns:
a located block with a new generation stamp and an access token
Throws:
IOException - if any error occurs

updatePipeline

public void updatePipeline(String clientName,
                           Block oldBlock,
                           Block newBlock,
                           DatanodeID[] newNodes)
                    throws IOException
Description copied from interface: ClientProtocol
Update a pipeline for a block under construction

Specified by:
updatePipeline in interface ClientProtocol
Parameters:
clientName - the name of the client
oldBlock - the old block
newBlock - the new block containing new generation stamp and length
newNodes - datanodes in the pipeline
Throws:
IOException - if any error occurs

commitBlockSynchronization

public void commitBlockSynchronization(Block block,
                                       long newgenerationstamp,
                                       long newlength,
                                       boolean closeFile,
                                       boolean deleteblock,
                                       DatanodeID[] newtargets)
                                throws IOException
Commit block synchronization in lease recovery

Specified by:
commitBlockSynchronization in interface DatanodeProtocol
Throws:
IOException

getPreferredBlockSize

public long getPreferredBlockSize(String filename)
                           throws IOException
Description copied from interface: ClientProtocol
Get the block size for the given file.

Specified by:
getPreferredBlockSize in interface ClientProtocol
Parameters:
filename - The name of the file
Returns:
The number of bytes in each block
Throws:
IOException
org.apache.hadoop.fs.UnresolvedLinkException - if the path contains a symlink.

rename

@Deprecated
public boolean rename(String src,
                                 String dst)
               throws IOException
Deprecated. 

Rename an item in the file system namespace.

Specified by:
rename in interface ClientProtocol
Parameters:
src - existing file or directory name.
dst - new name.
Returns:
true if successful, or false if the old name does not exist or if the new name already belongs to the namespace.
Throws:
IOException - if the new name is invalid.
org.apache.hadoop.fs.UnresolvedLinkException - if the path contains a symlink.
QuotaExceededException - if the rename would violate any quota restriction

concat

public void concat(String trg,
                   String[] src)
            throws IOException
Moves the blocks from the source files to trg and deletes the sources.

Specified by:
concat in interface ClientProtocol
Parameters:
trg - existing file
src - list of existing files (same block size, same replication)
Throws:
IOException - if some arguments are invalid
org.apache.hadoop.fs.UnresolvedLinkException - if the path contains a symlink.
QuotaExceededException - if the operation would violate any quota restriction

rename

public void rename(String src,
                   String dst,
                   org.apache.hadoop.fs.Options.Rename... options)
            throws IOException
Rename src to dst.

Without OVERWRITE option, rename fails if the dst already exists. With OVERWRITE option, rename overwrites the dst, if it is a file or an empty directory. Rename fails if dst is a non-empty directory.

This implementation of rename is atomic.

Specified by:
rename in interface ClientProtocol
Parameters:
src - existing file or directory name.
dst - new name.
options - Rename options
Throws:
IOException - if rename failed
org.apache.hadoop.fs.UnresolvedLinkException - if the path contains a symlink.
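
A one-line sketch of the overwrite semantics described above, assuming namenode is a ClientProtocol proxy and the paths are illustrative:

import org.apache.hadoop.fs.Options;
import org.apache.hadoop.hdfs.protocol.ClientProtocol;

public class RenameOverwriteSketch {
  static void replaceAtomically(ClientProtocol namenode) throws Exception {
    // Without OVERWRITE this fails if /data/current already exists; with it,
    // an existing file or empty directory at the destination is replaced.
    namenode.rename("/data/incoming", "/data/current",
        Options.Rename.OVERWRITE);
  }
}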

delete

@Deprecated
public boolean delete(String src)
               throws IOException
Deprecated. 

Description copied from interface: ClientProtocol
Delete the given file or directory from the file system.

Any blocks belonging to the deleted files will be garbage-collected.

Specified by:
delete in interface ClientProtocol
Parameters:
src - existing name.
Returns:
true only if the existing file or directory was actually removed from the file system.
Throws:
org.apache.hadoop.fs.UnresolvedLinkException - if the path contains a symlink.
IOException

delete

public boolean delete(String src,
                      boolean recursive)
               throws IOException
Delete the given file or directory from the file system.

Same as delete(String), but provides a way to avoid accidentally deleting non-empty directories programmatically.

Specified by:
delete in interface ClientProtocol
Parameters:
src - existing name
recursive - if true deletes a non empty directory recursively, else throws an exception.
Returns:
true only if the existing file or directory was actually removed from the file system.
Throws:
org.apache.hadoop.fs.UnresolvedLinkException - if the path contains a symlink.
IOException

mkdirs

public boolean mkdirs(String src,
                      org.apache.hadoop.fs.permission.FsPermission masked,
                      boolean createParent)
               throws IOException
Create a directory (or hierarchy of directories) with the given name and permission.

Specified by:
mkdirs in interface ClientProtocol
Parameters:
src - The path of the directory being created
masked - The masked permission of the directory being created
createParent - create missing parent directory if true
Returns:
true if the operation succeeds.
Throws:
org.apache.hadoop.fs.UnresolvedLinkException - if the path contains a symlink.
QuotaExceededException - if the operation would violate any quota restriction.
IOException

renewLease

public void renewLease(String clientName)
                throws IOException
Description copied from interface: ClientProtocol
Client programs can cause stateful changes in the NameNode that affect other clients. A client may obtain a file and neither abandon nor complete it. A client might hold a series of locks that prevent other clients from proceeding. Clearly, it would be bad if a client held a bunch of locks that it never gave up. This can happen easily if the client dies unexpectedly.

So, the NameNode will revoke the locks and live file-creates for clients that it thinks have died. A client tells the NameNode that it is still alive by periodically calling renewLease(). If a certain amount of time passes since the last call to renewLease(), the NameNode assumes the client has died.

Specified by:
renewLease in interface ClientProtocol
Throws:
org.apache.hadoop.fs.UnresolvedLinkException - if the path contains a symlink.
IOException
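
A sketch of the renewal loop this implies on the client side, mirroring what the DFS client does internally with a background thread; the 30-second interval is an assumption chosen to stay well inside the lease soft limit:

import org.apache.hadoop.hdfs.protocol.ClientProtocol;

public class LeaseRenewerSketch {
  static Thread startLeaseRenewer(final ClientProtocol namenode,
                                  final String clientName) {
    Thread t = new Thread(new Runnable() {
      public void run() {
        try {
          while (!Thread.currentThread().isInterrupted()) {
            namenode.renewLease(clientName);  // "still alive" heartbeat
            Thread.sleep(30 * 1000L);
          }
        } catch (Exception e) {
          // stop renewing; the NameNode will eventually recover our leases
        }
      }
    });
    t.setDaemon(true);
    t.start();
    return t;
  }
}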

getListing

public DirectoryListing getListing(String src,
                                   byte[] startAfter)
                            throws IOException
Description copied from interface: ClientProtocol
Get a partial listing of the indicated directory

Specified by:
getListing in interface ClientProtocol
Parameters:
src - the directory name
startAfter - the name to start listing after, encoded in Java UTF-8
Returns:
a partial listing starting after startAfter
Throws:
org.apache.hadoop.fs.UnresolvedLinkException - if the path contains a symlink.
IOException
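
Because the listing is partial, callers are expected to page through it. A sketch under the assumption that namenode is a ClientProtocol proxy and that HdfsFileStatus.EMPTY_NAME is the empty start-after cookie:

import org.apache.hadoop.hdfs.protocol.ClientProtocol;
import org.apache.hadoop.hdfs.protocol.DirectoryListing;
import org.apache.hadoop.hdfs.protocol.HdfsFileStatus;

public class ListingPagerSketch {
  static void listAll(ClientProtocol namenode, String dir) throws Exception {
    byte[] startAfter = HdfsFileStatus.EMPTY_NAME;  // start at the beginning
    DirectoryListing page;
    do {
      page = namenode.getListing(dir, startAfter);
      for (HdfsFileStatus status : page.getPartialListing()) {
        System.out.println(status.getLocalName());
      }
      startAfter = page.getLastName();  // cookie for the next page
    } while (page.hasMore());
  }
}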

getFileInfo

public HdfsFileStatus getFileInfo(String src)
                           throws IOException
Get the file info for a specific file.

Specified by:
getFileInfo in interface ClientProtocol
Parameters:
src - The string representation of the path to the file
Returns:
object containing information regarding the file or null if file not found
Throws:
org.apache.hadoop.fs.UnresolvedLinkException - if the path contains a symlink.
IOException - if permission to access the file is denied by the system.

getFileLinkInfo

public HdfsFileStatus getFileLinkInfo(String src)
                               throws IOException
Get the file info for a specific file. If the path refers to a symlink then the FileStatus of the symlink is returned.

Specified by:
getFileLinkInfo in interface ClientProtocol
Parameters:
src - The string representation of the path to the file
Returns:
object containing information regarding the file or null if file not found
Throws:
org.apache.hadoop.fs.UnresolvedLinkException - if the path contains a symlink.
IOException - if permission to access the file is denied by the system.

getStats

public long[] getStats()
Description copied from interface: ClientProtocol
Get a set of statistics about the filesystem. Use public constants like ClientProtocol.GET_STATS_CAPACITY_IDX in place of raw numbers to index into the array.

Specified by:
getStats in interface ClientProtocol
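
A sketch of indexing the returned array with the public constants, as the description advises; namenode is assumed to be a ClientProtocol proxy:

import org.apache.hadoop.hdfs.protocol.ClientProtocol;

public class StatsSketch {
  static void printUsage(ClientProtocol namenode) throws Exception {
    long[] stats = namenode.getStats();
    long capacity  = stats[ClientProtocol.GET_STATS_CAPACITY_IDX];
    long used      = stats[ClientProtocol.GET_STATS_USED_IDX];
    long remaining = stats[ClientProtocol.GET_STATS_REMAINING_IDX];
    System.out.println("used " + used + " of " + capacity
        + " bytes (" + remaining + " remaining)");
  }
}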

getDatanodeReport

public DatanodeInfo[] getDatanodeReport(FSConstants.DatanodeReportType type)
                                 throws IOException
Description copied from interface: ClientProtocol
Get a report on the system's current datanodes. One DatanodeInfo object is returned for each DataNode. Return live datanodes if type is LIVE; dead datanodes if type is DEAD; otherwise all datanodes if type is ALL.

Specified by:
getDatanodeReport in interface ClientProtocol
Throws:
IOException

setSafeMode

public boolean setSafeMode(FSConstants.SafeModeAction action)
                    throws IOException
Description copied from interface: ClientProtocol
Enter, leave or get safe mode.

Safe mode is a name node state when it

  1. does not accept changes to name space (read-only), and
  2. does not replicate or delete blocks.

Safe mode is entered automatically at name node startup. Safe mode can also be entered manually using setSafeMode(SafeModeAction.SAFEMODE_ENTER).

At startup the name node accepts data node reports, collecting information about block locations. In order to leave safe mode it needs to collect a configurable percentage, called the threshold, of blocks that satisfy the minimal replication condition. The minimal replication condition is that each block must have at least dfs.namenode.replication.min replicas. When the threshold is reached the name node extends safe mode for a configurable amount of time to let the remaining data nodes check in before it starts replicating missing blocks. Then the name node leaves safe mode.

If safe mode is turned on manually using setSafeMode(SafeModeAction.SAFEMODE_ENTER) then the name node stays in safe mode until it is manually turned off using setSafeMode(SafeModeAction.SAFEMODE_LEAVE). The current state of the name node can be queried using setSafeMode(SafeModeAction.SAFEMODE_GET).

Configuration parameters:

dfs.safemode.threshold.pct is the threshold parameter.
dfs.safemode.extension is the safe mode extension parameter.
dfs.namenode.replication.min is the minimal replication parameter.

Special cases:

The name node does not enter safe mode at startup if the threshold is set to 0 or if the name space is empty.
If the threshold is set to 1 then all blocks need to have at least minimal replication.
If the threshold value is greater than 1 then the name node will not be able to turn off safe mode automatically.
Safe mode can always be turned off manually.

Specified by:
setSafeMode in interface ClientProtocol
Parameters:
action -
  • 0 leave safe mode;
  • 1 enter safe mode;
  • 2 get safe mode state.
Returns:
  • 0 if the safe mode is OFF or
  • 1 if the safe mode is ON.
Throws:
IOException
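
An administrative round trip through the three actions, as a sketch; namenode is assumed to be a ClientProtocol proxy:

import org.apache.hadoop.hdfs.protocol.ClientProtocol;
import org.apache.hadoop.hdfs.protocol.FSConstants;

public class SafeModeSketch {
  static void safeModeRoundTrip(ClientProtocol namenode) throws Exception {
    // Enter manually; the node stays in safe mode until explicitly left.
    namenode.setSafeMode(FSConstants.SafeModeAction.SAFEMODE_ENTER);
    boolean on = namenode.setSafeMode(FSConstants.SafeModeAction.SAFEMODE_GET);
    System.out.println("safe mode is " + (on ? "ON" : "OFF"));
    namenode.setSafeMode(FSConstants.SafeModeAction.SAFEMODE_LEAVE);
  }
}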

isInSafeMode

public boolean isInSafeMode()
Is the cluster currently in safe mode?


restoreFailedStorage

public boolean restoreFailedStorage(String arg)
                             throws org.apache.hadoop.security.AccessControlException
Description copied from interface: ClientProtocol
Enable or disable the restore of failed storage replicas.

Sets a flag that enables or disables automatic restore of failed storage replicas.

Specified by:
restoreFailedStorage in interface ClientProtocol
Throws:
org.apache.hadoop.security.AccessControlException

saveNamespace

public void saveNamespace()
                   throws IOException
Description copied from interface: ClientProtocol
Save namespace image.

Saves the current namespace into the storage directories and resets the edit log. Requires superuser privilege and safe mode.

Specified by:
saveNamespace in interface ClientProtocol
Throws:
org.apache.hadoop.security.AccessControlException - if the superuser privilege is violated.
IOException - if image creation failed.

refreshNodes

public void refreshNodes()
                  throws IOException
Refresh the list of datanodes that the namenode should allow to connect. Re-reads conf by creating new HdfsConfiguration object and uses the files list in the configuration to update the list.

Specified by:
refreshNodes in interface ClientProtocol
Throws:
IOException

getEditLogSize

@Deprecated
public long getEditLogSize()
                    throws IOException
Deprecated. 

Returns the size of the current edit log.

Specified by:
getEditLogSize in interface NamenodeProtocol
Returns:
The number of bytes in the current edit log.
Throws:
IOException

rollEditLog

@Deprecated
public CheckpointSignature rollEditLog()
                                throws IOException
Deprecated. 

Roll the edit log.

Specified by:
rollEditLog in interface NamenodeProtocol
Returns:
a unique token to identify this transaction.
Throws:
IOException

rollFsImage

@Deprecated
public void rollFsImage()
                 throws IOException
Deprecated. 

Roll the image

Specified by:
rollFsImage in interface NamenodeProtocol
Throws:
IOException

finalizeUpgrade

public void finalizeUpgrade()
                     throws IOException
Description copied from interface: ClientProtocol
Finalize previous upgrade. Remove file system state saved during the upgrade. The upgrade will become irreversible.

Specified by:
finalizeUpgrade in interface ClientProtocol
Throws:
IOException

distributedUpgradeProgress

public UpgradeStatusReport distributedUpgradeProgress(FSConstants.UpgradeAction action)
                                               throws IOException
Description copied from interface: ClientProtocol
Report distributed upgrade progress or force current upgrade to proceed.

Specified by:
distributedUpgradeProgress in interface ClientProtocol
Parameters:
action - FSConstants.UpgradeAction to perform
Returns:
upgrade status information or null if no upgrades are in progress
Throws:
IOException

metaSave

public void metaSave(String filename)
              throws IOException
Dumps namenode state into specified file

Specified by:
metaSave in interface ClientProtocol
Throws:
IOException

getCorruptFiles

public org.apache.hadoop.fs.FileStatus[] getCorruptFiles()
                                                  throws org.apache.hadoop.security.AccessControlException,
                                                         IOException

Specified by:
getCorruptFiles in interface ClientProtocol
Returns:
Array of FileStatus objects referring to corrupted files. The server could return all or a few of the files that are corrupt.
Throws:
org.apache.hadoop.security.AccessControlException
IOException

getContentSummary

public org.apache.hadoop.fs.ContentSummary getContentSummary(String path)
                                                      throws IOException
Get ContentSummary rooted at the specified directory.

Specified by:
getContentSummary in interface ClientProtocol
Parameters:
path - The string representation of the path
Throws:
org.apache.hadoop.fs.UnresolvedLinkException - if the path contains a symlink.
IOException

setQuota

public void setQuota(String path,
                     long namespaceQuota,
                     long diskspaceQuota)
              throws IOException
Set the quota for a directory.

Specified by:
setQuota in interface ClientProtocol
Parameters:
path - The string representation of the path to the directory
namespaceQuota - Limit on the number of names in the tree rooted at the directory
diskspaceQuota - Limit on the disk space occupied by all the files under this directory.

The quota can have three types of values : (1) 0 or more will set the quota to that value, (2) FSConstants.QUOTA_DONT_SET implies the quota will not be changed, and (3) FSConstants.QUOTA_RESET implies the quota will be reset. Any other value is a runtime error.
Throws:
org.apache.hadoop.fs.UnresolvedLinkException - if the path contains a symlink.
FileNotFoundException - if the path is a file or does not exist
QuotaExceededException - if the directory size is greater than the given quota
IOException
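
A sketch of the three quota value types in use, assuming namenode is a ClientProtocol proxy and the directory is illustrative:

import org.apache.hadoop.hdfs.protocol.ClientProtocol;
import org.apache.hadoop.hdfs.protocol.FSConstants;

public class QuotaSketch {
  static void manageQuota(ClientProtocol namenode) throws Exception {
    // Cap the tree at 100000 names; leave the diskspace quota unchanged.
    namenode.setQuota("/projects/alpha", 100000L, FSConstants.QUOTA_DONT_SET);
    // Later: clear the namespace quota, still leaving diskspace alone.
    namenode.setQuota("/projects/alpha", FSConstants.QUOTA_RESET,
        FSConstants.QUOTA_DONT_SET);
  }
}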

fsync

public void fsync(String src,
                  String clientName)
           throws IOException
Write all metadata for this file into persistent storage. The file must be currently open for writing.

Specified by:
fsync in interface ClientProtocol
Parameters:
src - The string representation of the path
clientName - The string representation of the client
Throws:
org.apache.hadoop.fs.UnresolvedLinkException - if the path contains a symlink.
IOException

setTimes

public void setTimes(String src,
                     long mtime,
                     long atime)
              throws IOException
Description copied from interface: ClientProtocol
Sets the modification and access time of the file to the specified time.

Specified by:
setTimes in interface ClientProtocol
Parameters:
src - The string representation of the path
mtime - The number of milliseconds since Jan 1, 1970. Setting mtime to -1 means that modification time should not be set by this call.
atime - The number of milliseconds since Jan 1, 1970. Setting atime to -1 means that access time should not be set by this call.
Throws:
org.apache.hadoop.fs.UnresolvedLinkException - if the path contains a symlink.
IOException

createSymlink

public void createSymlink(String target,
                          String link,
                          org.apache.hadoop.fs.permission.FsPermission dirPerms,
                          boolean createParent)
                   throws IOException
Description copied from interface: ClientProtocol
Create a symbolic link to a file or directory.

Specified by:
createSymlink in interface ClientProtocol
Parameters:
target - The pathname of the destination that the link points to.
link - The pathname of the link being created.
dirPerms - permissions to use when creating parent directories
createParent - - if true then missing parent dirs are created if false then parent must exist
Throws:
IOException
org.apache.hadoop.fs.UnresolvedLinkException - if the path contains a symlink.

getLinkTarget

public String getLinkTarget(String path)
                     throws IOException
Description copied from interface: ClientProtocol
Resolve the first symbolic link on the specified path.

Specified by:
getLinkTarget in interface ClientProtocol
Parameters:
path - The pathname that needs to be resolved
Returns:
The pathname after resolving the first symbolic link if any.
Throws:
IOException

registerDatanode

public DatanodeRegistration registerDatanode(DatanodeRegistration nodeReg)
                                      throws IOException
Description copied from interface: DatanodeProtocol
Register Datanode.

Specified by:
registerDatanode in interface DatanodeProtocol
Returns:
updated DatanodeRegistration, which contains new storageID if the datanode did not have one and registration ID for further communication.
Throws:
IOException
See Also:
DataNode.dnRegistration, FSNamesystem.registerDatanode(DatanodeRegistration)

sendHeartbeat

public DatanodeCommand[] sendHeartbeat(DatanodeRegistration nodeReg,
                                       long capacity,
                                       long dfsUsed,
                                       long remaining,
                                       int xmitsInProgress,
                                       int xceiverCount)
                                throws IOException
Notifies the name node that the data node is alive, and returns an array of block-oriented commands for the datanode to execute. This will be either a transfer or a delete operation.

Specified by:
sendHeartbeat in interface DatanodeProtocol
Throws:
IOException

blockReport

public DatanodeCommand blockReport(DatanodeRegistration nodeReg,
                                   long[] blocks)
                            throws IOException
Description copied from interface: DatanodeProtocol
blockReport() tells the NameNode about all the locally-stored blocks. The NameNode returns an array of Blocks that have become obsolete and should be deleted. This function is meant to upload *all* the locally-stored blocks. It's invoked upon startup and then infrequently afterwards.

Specified by:
blockReport in interface DatanodeProtocol
Parameters:
blocks - the block list as an array of longs. Each block is represented as 2 longs. This is done instead of Block[] to reduce the memory used by block reports.
Returns:
- the next command for DN to process.
Throws:
IOException

blockReceived

public void blockReceived(DatanodeRegistration nodeReg,
                          Block[] blocks,
                          String[] delHints)
                   throws IOException
Description copied from interface: DatanodeProtocol
blockReceived() allows the DataNode to tell the NameNode about recently-received block data, with a hint for the preferred replica to be deleted when there are excess replicas of a block. For example, whenever client code writes a new Block here, or another DataNode copies a Block to this DataNode, it will call blockReceived().

Specified by:
blockReceived in interface DatanodeProtocol
Throws:
IOException

errorReport

public void errorReport(DatanodeRegistration nodeReg,
                        int errorCode,
                        String msg)
                 throws IOException
Description copied from interface: DatanodeProtocol
errorReport() tells the NameNode about something that has gone awry. Useful for debugging.

Specified by:
errorReport in interface DatanodeProtocol
Throws:
IOException

versionRequest

public NamespaceInfo versionRequest()
                             throws IOException
Description copied from interface: NamenodeProtocol
Request name-node version and storage information.

Specified by:
versionRequest in interface DatanodeProtocol
Specified by:
versionRequest in interface NamenodeProtocol
Returns:
NamespaceInfo identifying versions and storage information of the name-node
Throws:
IOException

processUpgradeCommand

public UpgradeCommand processUpgradeCommand(UpgradeCommand comm)
                                     throws IOException
Description copied from interface: DatanodeProtocol
This is a very general way to send a command to the name-node during the distributed upgrade process. The generality is needed because the variety of upgrade commands is unpredictable. The reply from the name-node is also received in the form of an upgrade command.

Specified by:
processUpgradeCommand in interface DatanodeProtocol
Returns:
a reply in the form of an upgrade command
Throws:
IOException

verifyRequest

public void verifyRequest(NodeRegistration nodeReg)
                   throws IOException
Verify request. Verifies the correctness of the datanode version and registration ID, and checks that the datanode is not scheduled for shutdown.

Parameters:
nodeReg - data node registration
Throws:
IOException

verifyVersion

public void verifyVersion(int version)
                   throws IOException
Verify version.

Parameters:
version -
Throws:
IOException

getFsImageName

public File getFsImageName()
                    throws IOException
Returns the name of the fsImage file

Throws:
IOException

getFSImage

public FSImage getFSImage()

getFsImageNameCheckpoint

public File[] getFsImageNameCheckpoint()
                                throws IOException
Returns the name of the fsImage file uploaded by periodic checkpointing

Throws:
IOException

getNameNodeAddress

public InetSocketAddress getNameNodeAddress()
Returns the address on which the NameNode's RPC server is listening.

Returns:
the address on which the NameNode's RPC server is listening.

getHttpAddress

public InetSocketAddress getHttpAddress()
Returns the address of the NameNode's HTTP server, which is used to access the name-node web UI.

Returns:
the http address.

refreshServiceAcl

public void refreshServiceAcl()
                       throws IOException
Specified by:
refreshServiceAcl in interface org.apache.hadoop.security.authorize.RefreshAuthorizationPolicyProtocol
Throws:
IOException

refreshUserToGroupsMappings

public void refreshUserToGroupsMappings(org.apache.hadoop.conf.Configuration conf)
                                 throws IOException
Specified by:
refreshUserToGroupsMappings in interface org.apache.hadoop.security.RefreshUserToGroupMappingsProtocol
Throws:
IOException

createNameNode

public static NameNode createNameNode(String[] argv,
                                      org.apache.hadoop.conf.Configuration conf)
                               throws IOException
Throws:
IOException

main

public static void main(String[] argv)
                 throws Exception
Throws:
Exception


Copyright © 2009 The Apache Software Foundation