org.apache.hadoop.hdfs.protocol
Interface ClientProtocol

All Superinterfaces:
org.apache.hadoop.ipc.VersionedProtocol
All Known Subinterfaces:
NamenodeProtocols
All Known Implementing Classes:
BackupNode, NameNode

@InterfaceAudience.Private
@InterfaceStability.Evolving
public interface ClientProtocol
extends org.apache.hadoop.ipc.VersionedProtocol

ClientProtocol is used by user code, via the DistributedFileSystem class, to communicate with the NameNode. User code can manipulate the directory namespace, open and close file streams, and so on.
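
For illustration, user code normally reaches this protocol indirectly through the FileSystem API; a minimal sketch, where the NameNode URI hdfs://namenode:8020 is an assumed placeholder:

import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsHello {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // FileSystem.get returns a DistributedFileSystem for hdfs:// URIs; that
    // class translates calls such as mkdirs() into ClientProtocol RPCs.
    FileSystem fs = FileSystem.get(URI.create("hdfs://namenode:8020"), conf);
    fs.mkdirs(new Path("/user/alice/data"));
  }
}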


Field Summary
static int GET_STATS_CAPACITY_IDX
           
static int GET_STATS_CORRUPT_BLOCKS_IDX
           
static int GET_STATS_MISSING_BLOCKS_IDX
           
static int GET_STATS_REMAINING_IDX
           
static int GET_STATS_UNDER_REPLICATED_IDX
           
static int GET_STATS_USED_IDX
           
static long versionID
          Compared to the previous version the following changes have been introduced (only the latest change is reflected).
 
Method Summary
 void abandonBlock(Block b, String src, String holder)
          The client can give up on a block by calling abandonBlock().
 LocatedBlock addBlock(String src, String clientName, Block previous, DatanodeInfo[] excludedNodes)
          A client that wants to write an additional block to the indicated filename (which must currently be open for writing) should call addBlock().
 LocatedBlock append(String src, String clientName)
          Append to the end of the file.
 void cancelDelegationToken(org.apache.hadoop.security.token.Token<DelegationTokenIdentifier> token)
          Cancel an existing delegation token.
 boolean complete(String src, String clientName, Block last)
          The client is done writing data to the given filename, and would like to complete it.
 void concat(String trg, String[] srcs)
          Moves blocks from srcs to trg and deletes srcs.
 void create(String src, org.apache.hadoop.fs.permission.FsPermission masked, String clientName, org.apache.hadoop.io.EnumSetWritable<org.apache.hadoop.fs.CreateFlag> flag, boolean createParent, short replication, long blockSize)
          Create a new file entry in the namespace.
 void createSymlink(String target, String link, org.apache.hadoop.fs.permission.FsPermission dirPerm, boolean createParent)
          Create a symbolic link to a file or directory.
 boolean delete(String src)
          Deprecated. Use delete(String, boolean) instead.
 boolean delete(String src, boolean recursive)
          Delete the given file or directory from the file system.
 UpgradeStatusReport distributedUpgradeProgress(FSConstants.UpgradeAction action)
          Report distributed upgrade progress or force current upgrade to proceed.
 void finalizeUpgrade()
          Finalize previous upgrade.
 void fsync(String src, String client)
          Write all metadata for this file into persistent storage.
 LocatedBlocks getBlockLocations(String src, long offset, long length)
          Get locations of the blocks of the specified file within the specified range.
 org.apache.hadoop.fs.ContentSummary getContentSummary(String path)
          Get ContentSummary rooted at the specified directory.
 org.apache.hadoop.fs.FileStatus[] getCorruptFiles()
          Get a list of the files whose blocks are corrupted.
 DatanodeInfo[] getDatanodeReport(FSConstants.DatanodeReportType type)
          Get a report on the system's current datanodes.
 org.apache.hadoop.security.token.Token<DelegationTokenIdentifier> getDelegationToken(org.apache.hadoop.io.Text renewer)
          Get a valid Delegation Token.
 HdfsFileStatus getFileInfo(String src)
          Get the file info for a specific file or directory.
 HdfsFileStatus getFileLinkInfo(String src)
          Get the file info for a specific file or directory; if the path refers to a symlink, the info of the link itself is returned.
 String getLinkTarget(String path)
          Resolve the first symbolic link on the specified path.
 DirectoryListing getListing(String src, byte[] startAfter)
          Get a partial listing of the indicated directory.
 long getPreferredBlockSize(String filename)
          Get the block size for the given file.
 org.apache.hadoop.fs.FsServerDefaults getServerDefaults()
          Get server default values for a number of configuration params.
 long[] getStats()
          Get a set of statistics about the filesystem.
 void metaSave(String filename)
          Dumps namenode data structures into specified file.
 boolean mkdirs(String src, org.apache.hadoop.fs.permission.FsPermission masked, boolean createParent)
          Create a directory (or hierarchy of directories) with the given name and permission.
 void refreshNodes()
          Tells the namenode to reread the hosts and exclude files.
 boolean rename(String src, String dst)
          Deprecated. Use rename(String, String, Options.Rename...) instead.
 void rename(String src, String dst, org.apache.hadoop.fs.Options.Rename... options)
          Rename src to dst.
 long renewDelegationToken(org.apache.hadoop.security.token.Token<DelegationTokenIdentifier> token)
          Renew an existing delegation token.
 void renewLease(String clientName)
          Client programs can cause stateful changes in the NameNode that affect other clients.
 void reportBadBlocks(LocatedBlock[] blocks)
          The client wants to report corrupted blocks (blocks with specified locations on datanodes).
 boolean restoreFailedStorage(String arg)
          Enable/Disable restore failed storage.
 void saveNamespace()
          Save namespace image.
 void setOwner(String src, String username, String groupname)
          Set owner of a path (i.e. a file or a directory).
 void setPermission(String src, org.apache.hadoop.fs.permission.FsPermission permission)
          Set permissions for an existing file/directory.
 void setQuota(String path, long namespaceQuota, long diskspaceQuota)
          Set the quota for a directory.
 boolean setReplication(String src, short replication)
          Set replication for an existing file.
 boolean setSafeMode(FSConstants.SafeModeAction action)
          Enter, leave or get safe mode.
 void setTimes(String src, long mtime, long atime)
          Sets the modification and access time of the file to the specified time.
 LocatedBlock updateBlockForPipeline(Block block, String clientName)
          Get a new generation stamp together with an access token for a block under construction. This method is called only when a client needs to recover a failed pipeline or set up a pipeline for appending to a block.
 void updatePipeline(String clientName, Block oldBlock, Block newBlock, DatanodeID[] newNodes)
          Update a pipeline for a block under construction.
 
Methods inherited from interface org.apache.hadoop.ipc.VersionedProtocol
getProtocolVersion
 

Field Detail

versionID

static final long versionID
Compared to the previous version the following changes have been introduced: (Only the latest change is reflected. The log of historical changes can be retrieved from the svn). 60: Replace full getListing with iterative getListing.

See Also:
Constant Field Values

GET_STATS_CAPACITY_IDX

static final int GET_STATS_CAPACITY_IDX
See Also:
Constant Field Values

GET_STATS_USED_IDX

static final int GET_STATS_USED_IDX
See Also:
Constant Field Values

GET_STATS_REMAINING_IDX

static final int GET_STATS_REMAINING_IDX
See Also:
Constant Field Values

GET_STATS_UNDER_REPLICATED_IDX

static final int GET_STATS_UNDER_REPLICATED_IDX
See Also:
Constant Field Values

GET_STATS_CORRUPT_BLOCKS_IDX

static final int GET_STATS_CORRUPT_BLOCKS_IDX
See Also:
Constant Field Values

GET_STATS_MISSING_BLOCKS_IDX

static final int GET_STATS_MISSING_BLOCKS_IDX
See Also:
Constant Field Values
Method Detail

getBlockLocations

@Nullable
LocatedBlocks getBlockLocations(String src,
                                         long offset,
                                         long length)
                                throws IOException,
                                       org.apache.hadoop.fs.UnresolvedLinkException,
                                       FileNotFoundException
Get locations of the blocks of the specified file within the specified range. DataNode locations for each block are sorted by the proximity to the client.

Return LocatedBlocks which contains file length, blocks and their locations. DataNode locations for each block are sorted by the distance to the client's address.

The client will then have to contact one of the indicated DataNodes to obtain the actual data.

Parameters:
src - file name
offset - range start offset
length - range length
Returns:
file length and array of blocks with their locations
Throws:
IOException
org.apache.hadoop.fs.UnresolvedLinkException - if the path contains a symlink.
FileNotFoundException - if the path does not exist.
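
As an illustration, a minimal read-side sketch, assuming namenode is an already-obtained ClientProtocol proxy (real clients go through DFSClient, which adds retries and symlink resolution):

import java.io.IOException;
import org.apache.hadoop.hdfs.protocol.*;

static void printBlockHosts(ClientProtocol namenode, String src) throws IOException {
  // ask for every block in the file: offset 0, maximal length
  LocatedBlocks located = namenode.getBlockLocations(src, 0, Long.MAX_VALUE);
  for (LocatedBlock lb : located.getLocatedBlocks()) {
    System.out.println("block " + lb.getBlock() + " at offset " + lb.getStartOffset());
    for (DatanodeInfo dn : lb.getLocations()) {
      System.out.println("  replica on " + dn.getName());  // sorted nearest-first
    }
  }
}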

getServerDefaults

org.apache.hadoop.fs.FsServerDefaults getServerDefaults()
                                                        throws IOException
Get server default values for a number of configuration params.

Returns:
a set of server default configuration values
Throws:
IOException

create

void create(String src,
            org.apache.hadoop.fs.permission.FsPermission masked,
            String clientName,
            org.apache.hadoop.io.EnumSetWritable<org.apache.hadoop.fs.CreateFlag> flag,
            boolean createParent,
            short replication,
            long blockSize)
            throws IOException,
                   org.apache.hadoop.fs.UnresolvedLinkException,
                   AlreadyBeingCreatedException,
                   NSQuotaExceededException
Create a new file entry in the namespace.

This will create an empty file specified by the source path. The path should reflect a full path originated at the root. The name-node does not have a notion of "current" directory for a client.

Once created, the file is visible and available for read to other clients. However, other clients cannot delete(String, boolean), re-create or rename(String, String) it until the file is completed, abandoned explicitly, or released as a result of lease expiration.

Blocks have a maximum size. Clients that intend to create multi-block files must also use addBlock(String, String, Block, DatanodeInfo[]).

Parameters:
src - path of the file being created.
masked - masked permission.
clientName - name of the current client.
flag - indicates whether the file should be created if it does not exist, overwritten if it already exists, or opened for append.
createParent - create missing parent directory if true
replication - block replication factor.
blockSize - maximum block size.
Throws:
org.apache.hadoop.security.AccessControlException - if permission to create the file is denied by the system. As usual, on the client side the exception will be wrapped in a RemoteException.
QuotaExceededException - if the file creation violates any quota restriction
IOException - if other errors occur.
org.apache.hadoop.fs.UnresolvedLinkException - if the path contains a symlink.
AlreadyBeingCreatedException - if the file is already being created by another client.
NSQuotaExceededException - if the namespace quota is exceeded.

append

LocatedBlock append(String src,
                    String clientName)
                    throws IOException,
                           org.apache.hadoop.fs.UnresolvedLinkException
Append to the end of the file. Appending is allowed only if the server is configured with the parameter dfs.support.append set to true; otherwise an IOException is thrown.

Parameters:
src - path of the file being appended to.
clientName - name of the current client.
Returns:
information about the last partial block if any.
Throws:
org.apache.hadoop.security.AccessControlException - if permission to append to the file is denied by the system. As usual, on the client side the exception will be wrapped in a RemoteException.
IOException - if other errors occur.
org.apache.hadoop.fs.UnresolvedLinkException - if the path contains a symlink.

setReplication

boolean setReplication(String src,
                       short replication)
                       throws IOException,
                              org.apache.hadoop.fs.UnresolvedLinkException
Set replication for an existing file.

The NameNode sets replication to the new value and returns. The actual block replication is not expected to be performed during this method call. The blocks will be populated or removed in the background as the result of the routine block maintenance procedures.

Parameters:
src - file name
replication - new replication
Returns:
true if successful; false if file does not exist or is a directory
Throws:
IOException
org.apache.hadoop.fs.UnresolvedLinkException - if the path contains a symlink.

setPermission

void setPermission(String src,
                   org.apache.hadoop.fs.permission.FsPermission permission)
                   throws IOException,
                          org.apache.hadoop.fs.UnresolvedLinkException,
                          SafeModeException
Set permissions for an existing file/directory.

Throws:
org.apache.hadoop.fs.UnresolvedLinkException - if the path contains a symlink.
IOException
SafeModeException

setOwner

void setOwner(String src,
              String username,
              String groupname)
              throws IOException,
                     org.apache.hadoop.fs.UnresolvedLinkException
Set Owner of a path (i.e. a file or a directory). The parameters username and groupname cannot both be null.

Parameters:
src -
username - If it is null, the original username remains unchanged.
groupname - If it is null, the original groupname remains unchanged.
Throws:
org.apache.hadoop.fs.UnresolvedLinkException - if the path contains a symlink.
IOException

abandonBlock

void abandonBlock(Block b,
                  String src,
                  String holder)
                  throws IOException,
                         org.apache.hadoop.fs.UnresolvedLinkException
The client can give up on a block by calling abandonBlock(). The client can then either obtain a new block, or complete or abandon the file. Any partial writes to the block will be discarded.

Throws:
org.apache.hadoop.fs.UnresolvedLinkException - if the path contains a symlink.
IOException

addBlock

LocatedBlock addBlock(String src,
                      String clientName,
                      @Nullable
                      Block previous,
                      @Nullable
                      DatanodeInfo[] excludedNodes)
                      throws IOException,
                             org.apache.hadoop.fs.UnresolvedLinkException,
                             DSQuotaExceededException
A client that wants to write an additional block to the indicated filename (which must currently be open for writing) should call addBlock(). addBlock() allocates a new block and selects the datanodes the block data should be replicated to. addBlock() also commits the previous block by reporting to the name-node the actual generation stamp and the length of the block that the client has transmitted to data-nodes.

Parameters:
src - the file being created
clientName - the name of the client that adds the block
previous - previous block
excludedNodes - a list of nodes that should not be allocated for the current block
Returns:
LocatedBlock allocated block information.
Throws:
org.apache.hadoop.fs.UnresolvedLinkException - if the path contains a symlink.
DSQuotaExceededException - if the directory's quota is exceeded.
IOException

complete

boolean complete(String src,
                 String clientName,
                 Block last)
                 throws IOException,
                        org.apache.hadoop.fs.UnresolvedLinkException
The client is done writing data to the given filename, and would like to complete it. The function returns whether the file has been closed successfully. If the function returns false, the caller should try again. complete() also commits the last block of the file by reporting to the name-node the actual generation stamp and the length of the block that the client has transmitted to data-nodes. A call to complete() will not return true until all the file's blocks have been replicated the minimum number of times. Thus, DataNode failures may cause a client to call complete() several times before succeeding.

Throws:
org.apache.hadoop.fs.UnresolvedLinkException - if the path contains a symlink.
IOException
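
Putting create(), addBlock() and complete() together, a minimal sketch of the write-path RPC sequence, assuming namenode is a ClientProtocol proxy; the actual streaming of block bytes to the returned datanodes (normally done by DFSOutputStream) is elided:

import java.io.IOException;
import java.util.EnumSet;
import org.apache.hadoop.fs.CreateFlag;
import org.apache.hadoop.fs.permission.FsPermission;
import org.apache.hadoop.io.EnumSetWritable;
import org.apache.hadoop.hdfs.protocol.*;

static void writeBlocks(ClientProtocol namenode, String src, String clientName,
                        int numBlocks) throws IOException {
  namenode.create(src, FsPermission.getDefault(), clientName,
      new EnumSetWritable<CreateFlag>(EnumSet.of(CreateFlag.CREATE)),
      true, (short) 3, 64L * 1024 * 1024);
  Block previous = null;
  for (int i = 0; i < numBlocks; i++) {
    // allocates the next block and commits `previous`
    LocatedBlock lb = namenode.addBlock(src, clientName, previous, null);
    // ... stream this block's bytes to lb.getLocations() here ...
    previous = lb.getBlock();
  }
  // complete() returns false until the last block reaches minimal replication
  while (!namenode.complete(src, clientName, previous)) {
    try { Thread.sleep(400); } catch (InterruptedException e) {
      throw new IOException("interrupted while completing " + src);
    }
  }
}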

reportBadBlocks

void reportBadBlocks(LocatedBlock[] blocks)
                     throws IOException
The client wants to report corrupted blocks (blocks with specified locations on datanodes).

Parameters:
blocks - Array of located blocks to report
Throws:
IOException

rename

@Deprecated
boolean rename(String src,
                          String dst)
               throws IOException,
                      org.apache.hadoop.fs.UnresolvedLinkException
Deprecated. Use rename(String, String, Options.Rename...) instead.

Rename an item in the file system namespace.

Parameters:
src - existing file or directory name.
dst - new name.
Returns:
true if successful, or false if the old name does not exist or if the new name already belongs to the namespace.
Throws:
IOException - if the new name is invalid.
org.apache.hadoop.fs.UnresolvedLinkException - if the path contains a symlink.
QuotaExceededException - if the rename would violate any quota restriction

concat

void concat(String trg,
            String[] srcs)
            throws IOException,
                   org.apache.hadoop.fs.UnresolvedLinkException
Moves blocks from srcs to trg and deletes srcs.

Parameters:
trg - existing file
srcs - - list of existing files (same block size, same replication)
Throws:
IOException - if some arguments are invalid
org.apache.hadoop.fs.UnresolvedLinkException - if the path contains a symlink.
QuotaExceededException - if the operation would violate any quota restriction

rename

void rename(String src,
            String dst,
            org.apache.hadoop.fs.Options.Rename... options)
            throws IOException,
                   org.apache.hadoop.fs.UnresolvedLinkException
Rename src to dst.

Without OVERWRITE option, rename fails if the dst already exists. With OVERWRITE option, rename overwrites the dst, if it is a file or an empty directory. Rename fails if dst is a non-empty directory.

This implementation of rename is atomic.

Parameters:
src - existing file or directory name.
dst - new name.
options - Rename options
Throws:
IOException - if rename failed
org.apache.hadoop.fs.UnresolvedLinkException - if the path contains a symlink.
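
A hedged sketch of the two modes, assuming namenode is a ClientProtocol proxy and the paths are illustrative:

import java.io.IOException;
import org.apache.hadoop.fs.Options;
import org.apache.hadoop.hdfs.protocol.ClientProtocol;

static void publish(ClientProtocol namenode, String src, String dst)
    throws IOException {
  // With Options.Rename.NONE the call fails if dst already exists.
  // With Options.Rename.OVERWRITE an existing file or empty directory
  // at dst is atomically replaced; a non-empty directory still fails.
  namenode.rename(src, dst, Options.Rename.OVERWRITE);
}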

delete

@Deprecated
boolean delete(String src)
               throws IOException,
                      org.apache.hadoop.fs.UnresolvedLinkException
Deprecated. Use delete(String, boolean) instead.

Delete the given file or directory from the file system.

Any blocks belonging to the deleted files will be garbage-collected.

Parameters:
src - existing name.
Returns:
true only if the existing file or directory was actually removed from the file system.
Throws:
org.apache.hadoop.fs.UnresolvedLinkException - if the path contains a symlink.
IOException

delete

boolean delete(String src,
               boolean recursive)
               throws IOException,
                      org.apache.hadoop.fs.UnresolvedLinkException
Delete the given file or directory from the file system.

Same as delete(String), but provides a way to avoid accidentally deleting non-empty directories programmatically.

Parameters:
src - existing name
recursive - if true, deletes a non-empty directory recursively; otherwise throws an exception if the directory is not empty.
Returns:
true only if the existing file or directory was actually removed from the file system.
Throws:
org.apache.hadoop.fs.UnresolvedLinkException - if the path contains a symlink.
IOException
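
A short sketch of the guard this flag provides (namenode is an assumed ClientProtocol proxy):

import java.io.IOException;
import org.apache.hadoop.hdfs.protocol.ClientProtocol;

static void removeDir(ClientProtocol namenode, String dir) throws IOException {
  // recursive == false throws an exception if `dir` is non-empty,
  // guarding against accidental subtree deletion
  namenode.delete(dir, false);
  // recursive == true would remove the directory and everything beneath it:
  //   namenode.delete(dir, true);
}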

mkdirs

boolean mkdirs(String src,
               org.apache.hadoop.fs.permission.FsPermission masked,
               boolean createParent)
               throws IOException,
                      org.apache.hadoop.fs.UnresolvedLinkException,
                      NSQuotaExceededException
Create a directory (or hierarchy of directories) with the given name and permission.

Parameters:
src - The path of the directory being created
masked - The masked permission of the directory being created
createParent - create missing parent directory if true
Returns:
true if the operation succeeds.
Throws:
org.apache.hadoop.fs.UnresolvedLinkException - if the path contains a symlink.
org.apache.hadoop.security.AccessControlException - if permission to create the directory is denied by the system. As usual, on the client side the exception will be wrapped in a RemoteException.
QuotaExceededException - if the operation would violate any quota restriction.
IOException
NSQuotaExceededException

getListing

DirectoryListing getListing(String src,
                            byte[] startAfter)
                            throws IOException,
                                   org.apache.hadoop.fs.UnresolvedLinkException
Get a partial listing of the indicated directory.

Parameters:
src - the directory name
startAfter - the name to start listing after, encoded in Java UTF-8
Returns:
a partial listing starting after startAfter
Throws:
org.apache.hadoop.fs.UnresolvedLinkException - if the path contains a symlink.
IOException
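
The versionID note above mentions that listing became iterative; a minimal paging loop, assuming namenode is a ClientProtocol proxy and that HdfsFileStatus.EMPTY_NAME starts the listing from the beginning:

import java.io.IOException;
import org.apache.hadoop.hdfs.protocol.*;

static void listAll(ClientProtocol namenode, String dir) throws IOException {
  byte[] startAfter = HdfsFileStatus.EMPTY_NAME;
  DirectoryListing page;
  do {
    page = namenode.getListing(dir, startAfter);
    if (page == null) {
      return;  // defensive: treat a missing directory as an empty listing
    }
    for (HdfsFileStatus entry : page.getPartialListing()) {
      System.out.println(entry.getLocalName());
    }
    startAfter = page.getLastName();  // resume after the last returned name
  } while (page.hasMore());
}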

renewLease

void renewLease(String clientName)
                throws IOException,
                       org.apache.hadoop.fs.UnresolvedLinkException
Client programs can cause stateful changes in the NameNode that affect other clients. A client may obtain a file and neither abandon nor complete it. A client might hold a series of locks that prevent other clients from proceeding. Clearly, it would be bad if a client held a bunch of locks that it never gave up. This can happen easily if the client dies unexpectedly.

So, the NameNode will revoke the locks and live file-creates for clients that it thinks have died. A client tells the NameNode that it is still alive by periodically calling renewLease(). If a certain amount of time passes since the last call to renewLease(), the NameNode assumes the client has died.

Throws:
org.apache.hadoop.fs.UnresolvedLinkException - if the path contains a symlink.
IOException
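
A minimal keep-alive sketch, assuming namenode is a ClientProtocol proxy, clientName identifies the writer, and a 30-second period stays safely under the NameNode's lease soft limit (the real DFSClient runs an equivalent renewer thread internally):

import java.io.IOException;
import org.apache.hadoop.hdfs.protocol.ClientProtocol;

static Thread startLeaseRenewer(final ClientProtocol namenode,
                                final String clientName) {
  Thread renewer = new Thread(new Runnable() {
    public void run() {
      while (!Thread.currentThread().isInterrupted()) {
        try {
          namenode.renewLease(clientName);
        } catch (IOException e) {
          // transient RPC failure: retry on the next cycle
        }
        try {
          Thread.sleep(30 * 1000);
        } catch (InterruptedException e) {
          return;  // writer finished; stop renewing
        }
      }
    }
  });
  renewer.setDaemon(true);
  renewer.start();
  return renewer;
}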

getStats

long[] getStats()
                throws IOException
Get a set of statistics about the filesystem. Use the public GET_STATS_* constants (for example GET_STATS_CAPACITY_IDX) in place of literal numbers to index into the returned array.

Throws:
IOException
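
A sketch of reading the array with the index constants defined above (namenode is an assumed ClientProtocol proxy):

import java.io.IOException;
import org.apache.hadoop.hdfs.protocol.ClientProtocol;

static void printStats(ClientProtocol namenode) throws IOException {
  long[] stats = namenode.getStats();
  System.out.println("capacity         " + stats[ClientProtocol.GET_STATS_CAPACITY_IDX]);
  System.out.println("used             " + stats[ClientProtocol.GET_STATS_USED_IDX]);
  System.out.println("remaining        " + stats[ClientProtocol.GET_STATS_REMAINING_IDX]);
  System.out.println("under-replicated " + stats[ClientProtocol.GET_STATS_UNDER_REPLICATED_IDX]);
  System.out.println("corrupt blocks   " + stats[ClientProtocol.GET_STATS_CORRUPT_BLOCKS_IDX]);
  System.out.println("missing blocks   " + stats[ClientProtocol.GET_STATS_MISSING_BLOCKS_IDX]);
}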

getDatanodeReport

DatanodeInfo[] getDatanodeReport(FSConstants.DatanodeReportType type)
                                 throws IOException
Get a report on the system's current datanodes. One DatanodeInfo object is returned for each DataNode. Return live datanodes if type is LIVE; dead datanodes if type is DEAD; otherwise all datanodes if type is ALL.

Throws:
IOException

getPreferredBlockSize

long getPreferredBlockSize(String filename)
                           throws IOException,
                                  org.apache.hadoop.fs.UnresolvedLinkException
Get the block size for the given file.

Parameters:
filename - The name of the file
Returns:
The number of bytes in each block
Throws:
IOException
org.apache.hadoop.fs.UnresolvedLinkException - if the path contains a symlink.

setSafeMode

boolean setSafeMode(FSConstants.SafeModeAction action)
                    throws IOException
Enter, leave or get safe mode.

Safe mode is a name node state when it

  1. does not accept changes to name space (read-only), and
  2. does not replicate or delete blocks.

Safe mode is entered automatically at name node startup. Safe mode can also be entered manually using setSafeMode(SafeModeAction.SAFEMODE_ENTER).

At startup the name node accepts data node reports, collecting information about block locations. In order to leave safe mode it needs to collect a configurable percentage, called the threshold, of blocks that satisfy the minimal replication condition. The minimal replication condition is that each block must have at least dfs.namenode.replication.min replicas. When the threshold is reached the name node extends safe mode for a configurable amount of time to let the remaining data nodes check in before it starts replicating missing blocks. Then the name node leaves safe mode.

If safe mode is turned on manually using setSafeMode(SafeModeAction.SAFEMODE_ENTER) then the name node stays in safe mode until it is manually turned off using setSafeMode(SafeModeAction.SAFEMODE_LEAVE). The current state of the name node can be verified using setSafeMode(SafeModeAction.SAFEMODE_GET).

Configuration parameters:

  • dfs.safemode.threshold.pct is the threshold parameter.
  • dfs.safemode.extension is the safe mode extension parameter.
  • dfs.namenode.replication.min is the minimal replication parameter.

Special cases:

  • The name node does not enter safe mode at startup if the threshold is set to 0 or if the name space is empty.
  • If the threshold is set to 1 then all blocks need to have at least minimal replication.
  • If the threshold value is greater than 1 then the name node will not be able to turn off safe mode automatically.
  • Safe mode can always be turned off manually.

Parameters:
action -
  • 0 leave safe mode;
  • 1 enter safe mode;
  • 2 get safe mode state.
Returns:
  • 0 if the safe mode is OFF or
  • 1 if the safe mode is ON.
Throws:
IOException
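
For example, an administrative sketch that checks the state and leaves safe mode if it is on (namenode is an assumed ClientProtocol proxy with superuser credentials):

import java.io.IOException;
import org.apache.hadoop.hdfs.protocol.ClientProtocol;
import org.apache.hadoop.hdfs.protocol.FSConstants;

static void leaveSafeModeIfOn(ClientProtocol namenode) throws IOException {
  // SAFEMODE_GET only queries the state; it does not change it
  boolean on = namenode.setSafeMode(FSConstants.SafeModeAction.SAFEMODE_GET);
  if (on) {
    namenode.setSafeMode(FSConstants.SafeModeAction.SAFEMODE_LEAVE);
  }
}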

saveNamespace

void saveNamespace()
                   throws IOException
Save namespace image.

Saves the current namespace into the storage directories and resets the edits log. Requires superuser privilege and safe mode.

Throws:
org.apache.hadoop.security.AccessControlException - if the superuser privilege is violated.
IOException - if image creation failed.

restoreFailedStorage

boolean restoreFailedStorage(String arg)
                             throws org.apache.hadoop.security.AccessControlException
Enable/Disable restore failed storage.

Sets a flag to enable or disable the restoration of failed storage replicas.

Throws:
org.apache.hadoop.security.AccessControlException - if the superuser privilege is violated.

refreshNodes

void refreshNodes()
                  throws IOException
Tells the namenode to reread the hosts and exclude files.

Throws:
IOException

finalizeUpgrade

void finalizeUpgrade()
                     throws IOException
Finalize previous upgrade. Remove file system state saved during the upgrade. The upgrade will become irreversible.

Throws:
IOException

distributedUpgradeProgress

@Nullable
UpgradeStatusReport distributedUpgradeProgress(FSConstants.UpgradeAction action)
                                               throws IOException
Report distributed upgrade progress or force current upgrade to proceed.

Parameters:
action - FSConstants.UpgradeAction to perform
Returns:
upgrade status information or null if no upgrades are in progress
Throws:
IOException

metaSave

void metaSave(String filename)
              throws IOException
Dumps namenode data structures into the specified file. If the file already exists, the output is appended.

Throws:
IOException

getCorruptFiles

org.apache.hadoop.fs.FileStatus[] getCorruptFiles()
                                                  throws org.apache.hadoop.security.AccessControlException,
                                                         IOException
Get a list of the files whose blocks are corrupted.

Returns:
Array of FileStatus objects referring to corrupted files. The server may return all or only some of the files that are corrupt.
Throws:
org.apache.hadoop.security.AccessControlException
IOException

getFileInfo

@Nullable
HdfsFileStatus getFileInfo(String src)
                           throws IOException,
                                  org.apache.hadoop.fs.UnresolvedLinkException
Get the file info for a specific file or directory.

Parameters:
src - The string representation of the path to the file
Returns:
object containing information regarding the file or null if file not found
Throws:
org.apache.hadoop.fs.UnresolvedLinkException - if the path contains a symlink.
IOException - if permission to access the file is denied by the system

getFileLinkInfo

HdfsFileStatus getFileLinkInfo(String src)
                               throws IOException,
                                      org.apache.hadoop.fs.UnresolvedLinkException
Get the file info for a specific file or directory. If the path refers to a symlink then the FileStatus of the symlink is returned.

Parameters:
src - The string representation of the path to the file
Returns:
object containing information regarding the file or null if file not found
Throws:
org.apache.hadoop.fs.UnresolvedLinkException - if the path contains a symlink.
IOException - if permission to access the file is denied by the system

getContentSummary

org.apache.hadoop.fs.ContentSummary getContentSummary(String path)
                                                      throws IOException,
                                                             org.apache.hadoop.fs.UnresolvedLinkException
Get ContentSummary rooted at the specified directory.

Parameters:
path - The string representation of the path
Throws:
org.apache.hadoop.fs.UnresolvedLinkException - if the path contains a symlink.
IOException

setQuota

void setQuota(String path,
              long namespaceQuota,
              long diskspaceQuota)
              throws IOException,
                     org.apache.hadoop.fs.UnresolvedLinkException,
                     FileNotFoundException,
                     SafeModeException
Set the quota for a directory.

Parameters:
path - The string representation of the path to the directory
namespaceQuota - Limit on the number of names in the tree rooted at the directory
diskspaceQuota - Limit on the disk space occupied by all the files under this directory.

The quota can have three types of values: (1) 0 or more sets the quota to that value, (2) FSConstants.QUOTA_DONT_SET leaves the quota unchanged, and (3) FSConstants.QUOTA_RESET clears the quota. Any other value is a runtime error.
Throws:
org.apache.hadoop.fs.UnresolvedLinkException - if the path contains a symlink.
FileNotFoundException - if the path is a file or does not exist
QuotaExceededException - if the directory size is greater than the given quota
IOException
SafeModeException
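
A sketch of the three kinds of quota values (namenode is an assumed ClientProtocol proxy; the path is illustrative):

import java.io.IOException;
import org.apache.hadoop.hdfs.protocol.ClientProtocol;
import org.apache.hadoop.hdfs.protocol.FSConstants;

static void adjustQuota(ClientProtocol namenode) throws IOException {
  // cap the tree at 10000 names; leave the diskspace quota unchanged
  namenode.setQuota("/user/alice", 10000, FSConstants.QUOTA_DONT_SET);
  // clear both quotas entirely
  namenode.setQuota("/user/alice", FSConstants.QUOTA_RESET, FSConstants.QUOTA_RESET);
}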

fsync

void fsync(String src,
           String client)
           throws IOException,
                  org.apache.hadoop.fs.UnresolvedLinkException
Write all metadata for this file into persistent storage. The file must be currently open for writing.

Parameters:
src - The string representation of the path
client - The string representation of the client
Throws:
org.apache.hadoop.fs.UnresolvedLinkException - if the path contains a symlink.
IOException

setTimes

void setTimes(String src,
              long mtime,
              long atime)
              throws IOException,
                     org.apache.hadoop.fs.UnresolvedLinkException
Sets the modification and access time of the file to the specified time.

Parameters:
src - The string representation of the path
mtime - The number of milliseconds since Jan 1, 1970. Setting mtime to -1 means that modification time should not be set by this call.
atime - The number of milliseconds since Jan 1, 1970. Setting atime to -1 means that access time should not be set by this call.
Throws:
org.apache.hadoop.fs.UnresolvedLinkException - if the path contains a symlink.
IOException
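
For instance, updating only the access time while leaving the modification time alone (namenode is an assumed ClientProtocol proxy):

import java.io.IOException;
import org.apache.hadoop.hdfs.protocol.ClientProtocol;

static void touchAccessTime(ClientProtocol namenode, String src) throws IOException {
  long now = System.currentTimeMillis();
  // -1 means "do not change"; here the modification time is preserved
  namenode.setTimes(src, -1, now);
}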

createSymlink

void createSymlink(String target,
                   String link,
                   org.apache.hadoop.fs.permission.FsPermission dirPerm,
                   boolean createParent)
                   throws IOException,
                          org.apache.hadoop.fs.UnresolvedLinkException
Create a symbolic link to a file or directory.

Parameters:
target - The pathname of the destination that the link points to.
link - The pathname of the link being created.
dirPerm - permissions to use when creating parent directories
createParent - if true, missing parent directories are created; if false, the parent must already exist
Throws:
IOException
org.apache.hadoop.fs.UnresolvedLinkException - if the path contains a symlink.

getLinkTarget

String getLinkTarget(String path)
                     throws IOException
Resolve the first symbolic link on the specified path.

Parameters:
path - The pathname that needs to be resolved
Returns:
The pathname after resolving the first symbolic link if any.
Throws:
IOException

updateBlockForPipeline

LocatedBlock updateBlockForPipeline(Block block,
                                    String clientName)
                                    throws IOException
Get a new generation stamp together with an access token for a block under construction. This method is called only when a client needs to recover a failed pipeline or set up a pipeline for appending to a block.

Parameters:
block - a block
clientName - the name of the client
Returns:
a located block with a new generation stamp and an access token
Throws:
IOException - if any error occurs

updatePipeline

void updatePipeline(String clientName,
                    Block oldBlock,
                    Block newBlock,
                    DatanodeID[] newNodes)
                    throws IOException
Update a pipeline for a block under construction.

Parameters:
clientName - the name of the client
oldBlock - the old block
newBlock - the new block containing new generation stamp and length
newNodes - datanodes in the pipeline
Throws:
IOException - if any error occurs
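
Taken together with updateBlockForPipeline above, a hedged sketch of the recovery sequence; real recovery (in DFSOutputStream) also rebuilds the datanode streams, which is elided here:

import java.io.IOException;
import org.apache.hadoop.hdfs.protocol.*;

static void recoverPipeline(ClientProtocol namenode, String clientName,
                            Block failedBlock, DatanodeInfo[] survivors)
    throws IOException {
  // 1. obtain a new generation stamp (plus access token) for the block
  LocatedBlock updated = namenode.updateBlockForPipeline(failedBlock, clientName);
  Block newBlock = updated.getBlock();
  // 2. ... re-open the write pipeline to `survivors` with the new stamp ...
  // 3. record the new generation stamp and pipeline members at the name-node
  namenode.updatePipeline(clientName, failedBlock, newBlock, survivors);
}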

getDelegationToken

org.apache.hadoop.security.token.Token<DelegationTokenIdentifier> getDelegationToken(org.apache.hadoop.io.Text renewer)
                                                                                     throws IOException
Get a valid Delegation Token.

Parameters:
renewer - the designated renewer for the token
Returns:
Token
Throws:
IOException

renewDelegationToken

long renewDelegationToken(org.apache.hadoop.security.token.Token<DelegationTokenIdentifier> token)
                          throws IOException
Renew an existing delegation token.

Parameters:
token - delegation token obtained earlier
Returns:
the new expiration time
Throws:
IOException

cancelDelegationToken

void cancelDelegationToken(org.apache.hadoop.security.token.Token<DelegationTokenIdentifier> token)
                           throws IOException
Cancel an existing delegation token.

Parameters:
token - delegation token
Throws:
IOException
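
The three token calls form a simple lifecycle; a minimal sketch, assuming namenode is a ClientProtocol proxy and "mapreduce" is a hypothetical renewer principal:

import java.io.IOException;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.security.token.Token;
import org.apache.hadoop.hdfs.protocol.ClientProtocol;
import org.apache.hadoop.hdfs.security.token.delegation.DelegationTokenIdentifier;

static void tokenLifecycle(ClientProtocol namenode) throws IOException {
  Token<DelegationTokenIdentifier> token =
      namenode.getDelegationToken(new Text("mapreduce"));
  long expiresAt = namenode.renewDelegationToken(token);  // pushes expiry forward
  // ... hand the token to tasks that authenticate with it ...
  namenode.cancelDelegationToken(token);  // invalidate once no longer needed
}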


Copyright © 2009 The Apache Software Foundation