@InterfaceAudience.Private @InterfaceStability.Evolving public interface ClientProtocol
ClientProtocol is used by user code, via the DistributedFileSystem class, to communicate with the NameNode. User code can manipulate the directory namespace, as well as open/close file streams, etc.
Field Summary

static int GET_STATS_CAPACITY_IDX
static int GET_STATS_CORRUPT_BLOCKS_IDX
static int GET_STATS_MISSING_BLOCKS_IDX
static int GET_STATS_REMAINING_IDX
static int GET_STATS_UNDER_REPLICATED_IDX
static int GET_STATS_USED_IDX
static long versionID
    Compared to the previous version the following changes have been introduced: (Only the latest change is reflected.)
Method Summary

void abandonBlock(Block b, String src, String holder)
    The client can give up on a block by calling abandonBlock().

LocatedBlock addBlock(String src, String clientName, Block previous, DatanodeInfo[] excludedNodes)
    A client that wants to write an additional block to the indicated filename (which must currently be open for writing) should call addBlock().

LocatedBlock append(String src, String clientName)
    Append to the end of the file.

void cancelDelegationToken(org.apache.hadoop.security.token.Token<DelegationTokenIdentifier> token)
    Cancel an existing delegation token.

boolean complete(String src, String clientName, Block last)
    The client is done writing data to the given filename, and would like to complete it.

void concat(String trg, String[] srcs)
    Moves blocks from srcs to trg and deletes srcs.

void create(String src, org.apache.hadoop.fs.permission.FsPermission masked, String clientName, org.apache.hadoop.io.EnumSetWritable<org.apache.hadoop.fs.CreateFlag> flag, boolean createParent, short replication, long blockSize)
    Create a new file entry in the namespace.

void createSymlink(String target, String link, org.apache.hadoop.fs.permission.FsPermission dirPerm, boolean createParent)
    Create a symbolic link to a file or directory.

boolean delete(String src)
    Deprecated. Use delete(String, boolean) instead.

boolean delete(String src, boolean recursive)
    Delete the given file or directory from the file system.

UpgradeStatusReport distributedUpgradeProgress(FSConstants.UpgradeAction action)
    Report distributed upgrade progress or force current upgrade to proceed.

void finalizeUpgrade()
    Finalize previous upgrade.

void fsync(String src, String client)
    Write all metadata for this file into persistent storage.

LocatedBlocks getBlockLocations(String src, long offset, long length)
    Get locations of the blocks of the specified file within the specified range.

org.apache.hadoop.fs.ContentSummary getContentSummary(String path)
    Get ContentSummary rooted at the specified directory.

org.apache.hadoop.fs.FileStatus[] getCorruptFiles()

DatanodeInfo[] getDatanodeReport(FSConstants.DatanodeReportType type)
    Get a report on the system's current datanodes.

org.apache.hadoop.security.token.Token<DelegationTokenIdentifier> getDelegationToken(org.apache.hadoop.io.Text renewer)
    Get a valid delegation token.

HdfsFileStatus getFileInfo(String src)
    Get the file info for a specific file or directory.

HdfsFileStatus getFileLinkInfo(String src)
    Get the file info for a specific file or directory.

String getLinkTarget(String path)
    Resolve the first symbolic link on the specified path.

DirectoryListing getListing(String src, byte[] startAfter)
    Get a partial listing of the indicated directory.

long getPreferredBlockSize(String filename)
    Get the block size for the given file.

org.apache.hadoop.fs.FsServerDefaults getServerDefaults()
    Get server default values for a number of configuration params.

long[] getStats()
    Get a set of statistics about the filesystem.

void metaSave(String filename)
    Dumps namenode data structures into the specified file.

boolean mkdirs(String src, org.apache.hadoop.fs.permission.FsPermission masked, boolean createParent)
    Create a directory (or hierarchy of directories) with the given name and permission.

void refreshNodes()
    Tells the namenode to reread the hosts and exclude files.

boolean rename(String src, String dst)
    Deprecated. Use rename(String, String, Options.Rename...) instead.

void rename(String src, String dst, org.apache.hadoop.fs.Options.Rename... options)
    Rename src to dst.

long renewDelegationToken(org.apache.hadoop.security.token.Token<DelegationTokenIdentifier> token)
    Renew an existing delegation token.

void renewLease(String clientName)
    Client programs can cause stateful changes in the NameNode that affect other clients.

void reportBadBlocks(LocatedBlock[] blocks)
    The client wants to report corrupted blocks (blocks with specified locations on datanodes).

boolean restoreFailedStorage(String arg)
    Enable/disable restore of failed storage.

void saveNamespace()
    Save namespace image.

void setOwner(String src, String username, String groupname)
    Set owner of a path (i.e. a file or a directory).

void setPermission(String src, org.apache.hadoop.fs.permission.FsPermission permission)
    Set permissions for an existing file/directory.

void setQuota(String path, long namespaceQuota, long diskspaceQuota)
    Set the quota for a directory.

boolean setReplication(String src, short replication)
    Set replication for an existing file.

boolean setSafeMode(FSConstants.SafeModeAction action)
    Enter, leave or get safe mode.

void setTimes(String src, long mtime, long atime)
    Sets the modification and access time of the file to the specified time.

LocatedBlock updateBlockForPipeline(Block block, String clientName)
    Get a new generation stamp together with an access token for a block under construction. This method is called only when a client needs to recover a failed pipeline or set up a pipeline for appending to a block.

void updatePipeline(String clientName, Block oldBlock, Block newBlock, DatanodeID[] newNodes)
    Update a pipeline for a block under construction.
Methods inherited from interface org.apache.hadoop.ipc.VersionedProtocol:
getProtocolVersion

Field Detail
static final long versionID
static final int GET_STATS_CAPACITY_IDX
static final int GET_STATS_USED_IDX
static final int GET_STATS_REMAINING_IDX
static final int GET_STATS_UNDER_REPLICATED_IDX
static final int GET_STATS_CORRUPT_BLOCKS_IDX
static final int GET_STATS_MISSING_BLOCKS_IDX
Method Detail
@Nullable LocatedBlocks getBlockLocations(String src, long offset, long length) throws IOException, org.apache.hadoop.fs.UnresolvedLinkException, FileNotFoundException
Get locations of the blocks of the specified file within the specified range. Returns LocatedBlocks, which contains the file length, the blocks, and their locations. The DataNode locations for each block are sorted by distance from the client's address. The client will then have to contact one of the indicated DataNodes to obtain the actual data.
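As a rough illustration of the range semantics (a standalone sketch, not HDFS code; the class and method names are made up for the example): with a fixed block size, the block indices touched by a byte range [offset, offset + length) can be computed as follows.

```java
// Sketch: which block indices of a file does a byte range cover?
// Assumes a constant block size, as HDFS uses for every block of a file
// except possibly the last one. Not part of the ClientProtocol API.
public class BlockRange {
    // Index of the block containing the given byte offset.
    static long blockIndex(long offset, long blockSize) {
        return offset / blockSize;
    }

    // Number of blocks a getBlockLocations(src, offset, length)-style
    // range would touch.
    static long blocksSpanned(long offset, long length, long blockSize) {
        if (length <= 0) return 0;
        long first = blockIndex(offset, blockSize);
        long last = blockIndex(offset + length - 1, blockSize);
        return last - first + 1;
    }

    public static void main(String[] args) {
        long blockSize = 64L * 1024 * 1024; // 64 MB, a common default
        // A 1-byte read inside one block touches one block.
        System.out.println(blocksSpanned(0, 1, blockSize));             // 1
        // A read straddling a block boundary touches two blocks.
        System.out.println(blocksSpanned(blockSize - 1, 2, blockSize)); // 2
    }
}
```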
Parameters:
src - file name
offset - range start offset
length - range length
Throws:
IOException
org.apache.hadoop.fs.UnresolvedLinkException - if the path contains a symlink.
FileNotFoundException - if the path does not exist.

org.apache.hadoop.fs.FsServerDefaults getServerDefaults() throws IOException
Get server default values for a number of configuration params.
Throws:
IOException
void create(String src, org.apache.hadoop.fs.permission.FsPermission masked, String clientName, org.apache.hadoop.io.EnumSetWritable<org.apache.hadoop.fs.CreateFlag> flag, boolean createParent, short replication, long blockSize) throws IOException, org.apache.hadoop.fs.UnresolvedLinkException, AlreadyBeingCreatedException, NSQuotaExceededException
Create a new file entry in the namespace. This will create an empty file specified by the source path. The path should reflect a full path originating at the root. The name-node does not have a notion of "current" directory for a client.
Once created, the file is visible and available for read to other clients. Other clients, however, cannot delete(String, boolean), re-create, or rename(String, String) it until the file is completed or its lease expires.
Blocks have a maximum size. Clients that intend to create multi-block files must also use addBlock(String, String, Block, DatanodeInfo[]).
Parameters:
src - path of the file being created.
masked - masked permission.
clientName - name of the current client.
flag - indicates whether the file should be overwritten if it already exists, created if it does not exist, or appended to.
createParent - create missing parent directory if true.
replication - block replication factor.
blockSize - maximum block size.
Throws:
org.apache.hadoop.security.AccessControlException - if permission to create the file is denied by the system. On the client side the exception will usually be wrapped into a RemoteException.
QuotaExceededException - if the file creation violates any quota restriction.
IOException - if other errors occur.
org.apache.hadoop.fs.UnresolvedLinkException - if the path contains a symlink.
AlreadyBeingCreatedException - if the file is already being created.
NSQuotaExceededException - if the namespace quota is exceeded.

LocatedBlock append(String src, String clientName) throws IOException, org.apache.hadoop.fs.UnresolvedLinkException
Append to the end of the file. Appending to an existing file is allowed only if the server is configured with the parameter dfs.support.append set to true; otherwise an IOException is thrown.
Parameters:
src - path of the file being appended to.
clientName - name of the current client.
Throws:
org.apache.hadoop.security.AccessControlException - if permission to append to the file is denied by the system. On the client side the exception will usually be wrapped into a RemoteException.
IOException - if other errors occur.
org.apache.hadoop.fs.UnresolvedLinkException - if the path contains a symlink.

boolean setReplication(String src, short replication) throws IOException, org.apache.hadoop.fs.UnresolvedLinkException
The NameNode sets replication to the new value and returns. The actual block replication is not expected to be performed during this method call. The blocks will be populated or removed in the background as the result of the routine block maintenance procedures.
Parameters:
src - file name
replication - new replication
Throws:
IOException
org.apache.hadoop.fs.UnresolvedLinkException - if the path contains a symlink.

void setPermission(String src, org.apache.hadoop.fs.permission.FsPermission permission) throws IOException, org.apache.hadoop.fs.UnresolvedLinkException, SafeModeException
Throws:
org.apache.hadoop.fs.UnresolvedLinkException - if the path contains a symlink.
IOException
SafeModeException
void setOwner(String src, String username, String groupname) throws IOException, org.apache.hadoop.fs.UnresolvedLinkException
Parameters:
src - the path whose owner is being set
username - If it is null, the original username remains unchanged.
groupname - If it is null, the original groupname remains unchanged.
Throws:
org.apache.hadoop.fs.UnresolvedLinkException - if the path contains a symlink.
IOException
void abandonBlock(Block b, String src, String holder) throws IOException, org.apache.hadoop.fs.UnresolvedLinkException
Throws:
org.apache.hadoop.fs.UnresolvedLinkException - if the path contains a symlink.
IOException
LocatedBlock addBlock(String src, String clientName, @Nullable Block previous, @Nullable DatanodeInfo[] excludedNodes) throws IOException, org.apache.hadoop.fs.UnresolvedLinkException, DSQuotaExceededException
Parameters:
src - the file being created
clientName - the name of the client that adds the block
previous - previous block
excludedNodes - a list of nodes that should not be allocated for the current block
Throws:
org.apache.hadoop.fs.UnresolvedLinkException - if the path contains a symlink.
DSQuotaExceededException - if the directory's quota is exceeded.
IOException
boolean complete(String src, String clientName, Block last) throws IOException, org.apache.hadoop.fs.UnresolvedLinkException
Throws:
org.apache.hadoop.fs.UnresolvedLinkException - if the path contains a symlink.
IOException
void reportBadBlocks(LocatedBlock[] blocks) throws IOException
Parameters:
blocks - Array of located blocks to report
Throws:
IOException
@Deprecated boolean rename(String src, String dst) throws IOException, org.apache.hadoop.fs.UnresolvedLinkException
Deprecated. Use rename(String, String, Options.Rename...) instead.
Parameters:
src - existing file or directory name.
dst - new name.
Throws:
IOException - if the new name is invalid.
org.apache.hadoop.fs.UnresolvedLinkException - if the path contains a symlink.
QuotaExceededException - if the rename would violate any quota restriction.

void concat(String trg, String[] srcs) throws IOException, org.apache.hadoop.fs.UnresolvedLinkException
Moves blocks from srcs to trg and deletes srcs.
Parameters:
trg - existing file
srcs - list of existing files (same block size, same replication)
Throws:
IOException - if some arguments are invalid
org.apache.hadoop.fs.UnresolvedLinkException - if the path contains a symlink.
QuotaExceededException - if the concat would violate any quota restriction.

void rename(String src, String dst, org.apache.hadoop.fs.Options.Rename... options) throws IOException, org.apache.hadoop.fs.UnresolvedLinkException
Rename src to dst.
Without OVERWRITE option, rename fails if the dst already exists. With OVERWRITE option, rename overwrites the dst, if it is a file or an empty directory. Rename fails if dst is a non-empty directory.
This implementation of rename is atomic.
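The OVERWRITE behavior is analogous to java.nio.file.Files.move with REPLACE_EXISTING on a local file system. The following sketch (local files only, not HDFS) illustrates the two cases; it is an analogy, not the NameNode implementation:

```java
import java.io.IOException;
import java.nio.file.*;

// Local-filesystem analogy for rename(src, dst) vs. rename with
// Rename.OVERWRITE. Uses java.nio.file, not HDFS; illustration only.
public class RenameDemo {
    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("rename-demo");
        Path src = Files.write(dir.resolve("src.txt"), "new".getBytes());
        Path dst = Files.write(dir.resolve("dst.txt"), "old".getBytes());

        // Without an overwrite option, renaming onto an existing file fails.
        try {
            Files.move(src, dst);
        } catch (FileAlreadyExistsException e) {
            System.out.println("move without OVERWRITE failed as expected");
        }

        // With REPLACE_EXISTING (analogous to Rename.OVERWRITE), dst is replaced.
        Files.move(src, dst, StandardCopyOption.REPLACE_EXISTING);
        System.out.println(new String(Files.readAllBytes(dst))); // prints "new"
    }
}
```

Note that, as described above, HDFS additionally refuses to overwrite a non-empty directory even with OVERWRITE, which this local analogy does not model.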
Parameters:
src - existing file or directory name.
dst - new name.
options - Rename options
Throws:
IOException - if rename failed
org.apache.hadoop.fs.UnresolvedLinkException - if the path contains a symlink.

@Deprecated boolean delete(String src) throws IOException, org.apache.hadoop.fs.UnresolvedLinkException
Deprecated. Use delete(String, boolean) instead.
Delete the given file or directory from the file system.
Any blocks belonging to the deleted files will be garbage-collected.
Parameters:
src - existing name.
Throws:
org.apache.hadoop.fs.UnresolvedLinkException - if the path contains a symlink.
IOException
boolean delete(String src, boolean recursive) throws IOException, org.apache.hadoop.fs.UnresolvedLinkException
Same as delete(String) but provides a way to avoid accidentally deleting non-empty directories programmatically.
Parameters:
src - existing name
recursive - if true, deletes a non-empty directory recursively; otherwise throws an exception.
Throws:
org.apache.hadoop.fs.UnresolvedLinkException - if the path contains a symlink.
IOException
boolean mkdirs(String src, org.apache.hadoop.fs.permission.FsPermission masked, boolean createParent) throws IOException, org.apache.hadoop.fs.UnresolvedLinkException, NSQuotaExceededException
Parameters:
src - The path of the directory being created
masked - The masked permission of the directory being created
createParent - create missing parent directory if true
Throws:
org.apache.hadoop.fs.UnresolvedLinkException - if the path contains a symlink.
AccessControlException - if permission to create the directory is denied by the system. On the client side the exception will usually be wrapped into a RemoteException.
QuotaExceededException - if the operation would violate any quota restriction.
IOException
NSQuotaExceededException
DirectoryListing getListing(String src, byte[] startAfter) throws IOException, org.apache.hadoop.fs.UnresolvedLinkException
Parameters:
src - the directory name
startAfter - the name to start listing after, encoded in Java UTF-8
Throws:
org.apache.hadoop.fs.UnresolvedLinkException - if the path contains a symlink.
IOException
void renewLease(String clientName) throws IOException, org.apache.hadoop.fs.UnresolvedLinkException
Client programs can cause stateful changes in the NameNode that affect other clients. A client tells the NameNode that it is still alive by periodically calling renewLease(). If a certain amount of time passes since the last call to renewLease(), the NameNode assumes the client has died and revokes its locks and live file-creates.
Throws:
org.apache.hadoop.fs.UnresolvedLinkException - if the path contains a symlink.
IOException
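The lease-renewal rule above can be sketched as a small state machine (a standalone illustration; the class name and the 60-second limit are made up, not HDFS code):

```java
// Sketch of the lease-renewal rule: the NameNode treats a client as dead
// once the time since its last renewLease() call exceeds a limit.
// Names and the limit value are illustrative, not HDFS internals.
public class LeaseClock {
    private final long expiryMillis;
    private long lastRenewedMillis;

    LeaseClock(long expiryMillis, long nowMillis) {
        this.expiryMillis = expiryMillis;
        this.lastRenewedMillis = nowMillis;
    }

    // Called on each renewLease() from the client.
    void renew(long nowMillis) { lastRenewedMillis = nowMillis; }

    // True once the client has been silent longer than the expiry limit.
    boolean isExpired(long nowMillis) {
        return nowMillis - lastRenewedMillis > expiryMillis;
    }

    public static void main(String[] args) {
        LeaseClock lease = new LeaseClock(60_000, 0); // 60 s limit (illustrative)
        System.out.println(lease.isExpired(30_000));  // false: renewed recently
        lease.renew(30_000);
        System.out.println(lease.isExpired(80_000));  // false: 50 s since renewal
        System.out.println(lease.isExpired(100_000)); // true: 70 s since renewal
    }
}
```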
long[] getStats() throws IOException
Get a set of statistics about the filesystem. Use the public constants such as GET_STATS_CAPACITY_IDX in place of actual numbers to index into the array.
Throws:
IOException
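For example, indexing the returned long[] with named constants instead of bare numbers might look like this. The index values below mirror the declaration order shown in the Field Detail section, which is an assumption; real code should reference the interface's own constants:

```java
// Sketch of indexing a getStats()-style array with named constants.
// The constant values assume declaration order 0..5 as listed in the
// Field Detail section; use ClientProtocol's constants in real code.
public class StatsDemo {
    static final int GET_STATS_CAPACITY_IDX = 0;
    static final int GET_STATS_USED_IDX = 1;
    static final int GET_STATS_REMAINING_IDX = 2;
    static final int GET_STATS_UNDER_REPLICATED_IDX = 3;
    static final int GET_STATS_CORRUPT_BLOCKS_IDX = 4;
    static final int GET_STATS_MISSING_BLOCKS_IDX = 5;

    static long used(long[] stats) { return stats[GET_STATS_USED_IDX]; }
    static long remaining(long[] stats) { return stats[GET_STATS_REMAINING_IDX]; }

    public static void main(String[] args) {
        // A made-up stats array standing in for ClientProtocol.getStats().
        long[] stats = {1000L, 400L, 600L, 2L, 0L, 0L};
        System.out.println(used(stats));      // 400
        System.out.println(remaining(stats)); // 600
    }
}
```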
DatanodeInfo[] getDatanodeReport(FSConstants.DatanodeReportType type) throws IOException
IOException
long getPreferredBlockSize(String filename) throws IOException, org.apache.hadoop.fs.UnresolvedLinkException
Parameters:
filename - The name of the file
Throws:
IOException
org.apache.hadoop.fs.UnresolvedLinkException - if the path contains a symlink.

boolean setSafeMode(FSConstants.SafeModeAction action) throws IOException
Enter, leave or get safe mode. Safe mode is a name node state in which the name node does not accept changes to the name space (read-only) and does not replicate or delete blocks.
Safe mode is entered automatically at name node startup. It can also be entered manually using setSafeMode(SafeModeAction.SAFEMODE_ENTER).
At startup the name node accepts data node reports, collecting information about block locations. In order to leave safe mode it needs to collect a configurable percentage of blocks, called the threshold, that satisfy the minimal replication condition. The minimal replication condition is that each block must have at least dfs.namenode.replication.min replicas. When the threshold is reached, the name node extends safe mode for a configurable amount of time to let the remaining data nodes check in before it starts replicating missing blocks. Then the name node leaves safe mode.
If safe mode is turned on manually using setSafeMode(SafeModeAction.SAFEMODE_ENTER), then the name node stays in safe mode until it is manually turned off using setSafeMode(SafeModeAction.SAFEMODE_LEAVE).
The current state of the name node can be verified using setSafeMode(SafeModeAction.SAFEMODE_GET).
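The threshold condition for leaving safe mode can be sketched as a simple fraction check (illustrative only; the real logic lives inside the NameNode, and the 0.999 default for the threshold is an assumption to verify against your configuration):

```java
// Sketch of the safe-mode exit condition: the name node may leave safe
// mode once the fraction of blocks meeting the minimal replication
// condition reaches the configured threshold. Illustrative, not HDFS code.
public class SafeModeCheck {
    static boolean thresholdReached(long safeBlocks, long totalBlocks, double threshold) {
        if (totalBlocks == 0) return true; // nothing to wait for
        return (double) safeBlocks / totalBlocks >= threshold;
    }

    public static void main(String[] args) {
        // 0.999 is assumed here as a typical threshold value.
        System.out.println(thresholdReached(998, 1000, 0.999)); // false
        System.out.println(thresholdReached(999, 1000, 0.999)); // true
    }
}
```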
Parameters:
action - safe mode action to perform
Throws:
IOException
void saveNamespace() throws IOException
Saves the current namespace into the storage directories and resets the edits log. Requires superuser privilege and safe mode.
Throws:
org.apache.hadoop.security.AccessControlException - if the superuser privilege is violated.
IOException - if image creation failed.

boolean restoreFailedStorage(String arg) throws org.apache.hadoop.security.AccessControlException
Enable/disable restore of failed storage replicas by setting a flag.
Throws:
org.apache.hadoop.security.AccessControlException - if the superuser privilege is violated.

void refreshNodes() throws IOException
IOException
void finalizeUpgrade() throws IOException
IOException
@Nullable UpgradeStatusReport distributedUpgradeProgress(FSConstants.UpgradeAction action) throws IOException
Parameters:
action - FSConstants.UpgradeAction to perform
Throws:
IOException
void metaSave(String filename) throws IOException
IOException
org.apache.hadoop.fs.FileStatus[] getCorruptFiles() throws org.apache.hadoop.security.AccessControlException, IOException
org.apache.hadoop.security.AccessControlException
IOException
@Nullable HdfsFileStatus getFileInfo(String src) throws IOException, org.apache.hadoop.fs.UnresolvedLinkException
Parameters:
src - The string representation of the path to the file
Throws:
org.apache.hadoop.fs.UnresolvedLinkException - if the path contains symlinks.
IOException - if permission to access the file is denied by the system.
HdfsFileStatus getFileLinkInfo(String src) throws IOException, org.apache.hadoop.fs.UnresolvedLinkException
Parameters:
src - The string representation of the path to the file
Throws:
org.apache.hadoop.fs.UnresolvedLinkException - if the path contains symlinks.
IOException - if permission to access the file is denied by the system.
org.apache.hadoop.fs.ContentSummary getContentSummary(String path) throws IOException, org.apache.hadoop.fs.UnresolvedLinkException
Get the ContentSummary rooted at the specified directory.
Parameters:
path - The string representation of the path
Throws:
org.apache.hadoop.fs.UnresolvedLinkException - if the path contains a symlink.
IOException
void setQuota(String path, long namespaceQuota, long diskspaceQuota) throws IOException, org.apache.hadoop.fs.UnresolvedLinkException, FileNotFoundException, SafeModeException
Parameters:
path - The string representation of the path to the directory
namespaceQuota - Limit on the number of names in the tree rooted at the directory
diskspaceQuota - Limit on disk space occupied by all the files under this directory.
The quota can take three types of values: (1) 0 or more sets the quota to that value, (2) FSConstants.QUOTA_DONT_SET implies the quota will not be changed, and (3) FSConstants.QUOTA_RESET implies the quota will be reset. Any other value is a runtime error.
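The three-way sentinel semantics can be sketched as follows. The sentinel values restated here (QUOTA_DONT_SET = Long.MAX_VALUE, QUOTA_RESET = -1) are an assumption mirroring FSConstants; real code should use the FSConstants fields directly:

```java
// Sketch of the three-way quota value semantics of setQuota.
// Sentinel values are assumed to mirror FSConstants; the NO_QUOTA
// marker and applyQuota helper are illustrative, not HDFS code.
public class QuotaDemo {
    static final long QUOTA_DONT_SET = Long.MAX_VALUE; // assumed sentinel
    static final long QUOTA_RESET = -1L;               // assumed sentinel
    static final long NO_QUOTA = Long.MAX_VALUE;       // illustrative "unlimited"

    // Returns the quota that would be in effect after a setQuota call.
    static long applyQuota(long current, long requested) {
        if (requested == QUOTA_DONT_SET) return current; // leave unchanged
        if (requested == QUOTA_RESET) return NO_QUOTA;   // clear the quota
        if (requested >= 0) return requested;            // set to that value
        throw new IllegalArgumentException("invalid quota: " + requested);
    }

    public static void main(String[] args) {
        System.out.println(applyQuota(100, QUOTA_DONT_SET)); // 100
        System.out.println(applyQuota(100, 50));             // 50
    }
}
```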
Throws:
org.apache.hadoop.fs.UnresolvedLinkException - if the path contains a symlink.
FileNotFoundException - if the path is a file or does not exist.
QuotaExceededException - if the directory size is greater than the given quota.
IOException
SafeModeException
void fsync(String src, String client) throws IOException, org.apache.hadoop.fs.UnresolvedLinkException
Parameters:
src - The string representation of the path
client - The string representation of the client
Throws:
org.apache.hadoop.fs.UnresolvedLinkException - if the path contains a symlink.
IOException
void setTimes(String src, long mtime, long atime) throws IOException, org.apache.hadoop.fs.UnresolvedLinkException
Parameters:
src - The string representation of the path
mtime - The number of milliseconds since Jan 1, 1970. Setting mtime to -1 means that the modification time should not be set by this call.
atime - The number of milliseconds since Jan 1, 1970. Setting atime to -1 means that the access time should not be set by this call.
Throws:
org.apache.hadoop.fs.UnresolvedLinkException - if the path contains a symlink.
IOException
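The -1 sentinel semantics above can be sketched with a tiny helper (illustrative only; the method name and long[] representation are made up for the example):

```java
// Sketch of setTimes' -1 sentinel: -1 for mtime or atime means
// "leave that timestamp unchanged". Illustrative, not HDFS code.
public class SetTimesDemo {
    // Returns {newMtime, newAtime} after applying the sentinel rule.
    static long[] applyTimes(long curMtime, long curAtime, long mtime, long atime) {
        long newMtime = (mtime == -1) ? curMtime : mtime;
        long newAtime = (atime == -1) ? curAtime : atime;
        return new long[] { newMtime, newAtime };
    }

    public static void main(String[] args) {
        // Update only atime; mtime = -1 keeps the old modification time.
        long[] t = applyTimes(1000L, 2000L, -1L, 5000L);
        System.out.println(t[0] + " " + t[1]); // 1000 5000
    }
}
```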
void createSymlink(String target, String link, org.apache.hadoop.fs.permission.FsPermission dirPerm, boolean createParent) throws IOException, org.apache.hadoop.fs.UnresolvedLinkException
Parameters:
target - The pathname of the destination that the link points to.
link - The pathname of the link being created.
dirPerm - permissions to use when creating parent directories
createParent - if true, missing parent directories are created; if false, the parent must exist
Throws:
IOException
org.apache.hadoop.fs.UnresolvedLinkException - if the path contains a symlink.

String getLinkTarget(String path) throws IOException
Parameters:
path - The pathname that needs to be resolved
Throws:
IOException
LocatedBlock updateBlockForPipeline(Block block, String clientName) throws IOException
Parameters:
block - a block
clientName - the name of the client
Throws:
IOException - if any error occurs

void updatePipeline(String clientName, Block oldBlock, Block newBlock, DatanodeID[] newNodes) throws IOException
Parameters:
clientName - the name of the client
oldBlock - the old block
newBlock - the new block containing new generation stamp and length
newNodes - datanodes in the pipeline
Throws:
IOException - if any error occurs

org.apache.hadoop.security.token.Token<DelegationTokenIdentifier> getDelegationToken(org.apache.hadoop.io.Text renewer) throws IOException
Parameters:
renewer - the designated renewer for the token
Throws:
IOException
long renewDelegationToken(org.apache.hadoop.security.token.Token<DelegationTokenIdentifier> token) throws IOException
Parameters:
token - delegation token obtained earlier
Throws:
IOException
void cancelDelegationToken(org.apache.hadoop.security.token.Token<DelegationTokenIdentifier> token) throws IOException
Parameters:
token - delegation token
Throws:
IOException