org.apache.hadoop.hdfs
Class DFSClient

java.lang.Object
  extended by org.apache.hadoop.hdfs.DFSClient
All Implemented Interfaces:
Closeable, FSConstants

@InterfaceAudience.Private
public class DFSClient
extends Object
implements FSConstants, Closeable

DFSClient can connect to a Hadoop Filesystem and perform basic file tasks. It uses the ClientProtocol to communicate with a NameNode daemon, and connects directly to DataNodes to read/write block data. Hadoop DFS users should obtain an instance of DistributedFileSystem, which uses DFSClient to handle filesystem tasks.
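A minimal usage sketch (the NameNode host, port, and the use of `getDefaultBlockSize` here are illustrative assumptions; a reachable NameNode and the Hadoop client jars on the classpath are assumed):

```java
import java.net.InetSocketAddress;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.DFSClient;

public class DFSClientExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Hypothetical NameNode address; adjust for your cluster.
        InetSocketAddress nnAddr = new InetSocketAddress("namenode.example.com", 8020);
        DFSClient client = new DFSClient(nnAddr, conf);
        try {
            System.out.println("Default block size: " + client.getDefaultBlockSize());
        } finally {
            client.close(); // releases leases and namenode connections
        }
    }
}
```

Note that applications normally go through DistributedFileSystem (obtained via FileSystem.get) rather than constructing a DFSClient directly, per the class description above.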


Nested Class Summary
static class DFSClient.DFSDataInputStream
          The Hdfs implementation of FSDataInputStream
 
Nested classes/interfaces inherited from interface org.apache.hadoop.hdfs.protocol.FSConstants
FSConstants.DatanodeReportType, FSConstants.SafeModeAction, FSConstants.UpgradeAction
 
Field Summary
static org.apache.commons.logging.Log LOG
           
static int MAX_BLOCK_ACQUIRE_FAILURES
           
static long SERVER_DEFAULTS_VALIDITY_PERIOD
           
 
Fields inherited from interface org.apache.hadoop.hdfs.protocol.FSConstants
BLOCK_INVALIDATE_CHUNK, BLOCKREPORT_INITIAL_DELAY, BLOCKREPORT_INTERVAL, BUFFER_SIZE, DEFAULT_BLOCK_SIZE, DEFAULT_BYTES_PER_CHECKSUM, DEFAULT_DATA_SOCKET_SIZE, DEFAULT_FILE_BUFFER_SIZE, DEFAULT_REPLICATION_FACTOR, DEFAULT_WRITE_PACKET_SIZE, HDFS_URI_SCHEME, HEARTBEAT_INTERVAL, LAYOUT_VERSION, LEASE_HARDLIMIT_PERIOD, LEASE_RECOVER_PERIOD, LEASE_SOFTLIMIT_PERIOD, MAX_PATH_DEPTH, MAX_PATH_LENGTH, MIN_BLOCKS_FOR_WRITE, QUOTA_DONT_SET, QUOTA_RESET, SIZE_OF_INTEGER, SMALL_BUFFER_SIZE
 
Constructor Summary
DFSClient(org.apache.hadoop.conf.Configuration conf)
          Deprecated. Deprecated at 0.21
DFSClient(InetSocketAddress nameNodeAddr, org.apache.hadoop.conf.Configuration conf)
          Same as this(nameNodeAddr, conf, null);
DFSClient(InetSocketAddress nameNodeAddr, org.apache.hadoop.conf.Configuration conf, org.apache.hadoop.fs.FileSystem.Statistics stats)
          Same as this(nameNodeAddr, null, conf, stats);
 
Method Summary
 void cancelDelegationToken(org.apache.hadoop.security.token.Token<DelegationTokenIdentifier> token)
           
 void close()
          Close the file system, abandoning all leases and files being created, and close connections to the namenode.
 void concat(String trg, String[] srcs)
          Move blocks from srcs to trg and delete srcs. See ClientProtocol.concat(String, String[]).
 OutputStream create(String src, boolean overwrite)
          Create a new dfs file and return an output stream for writing into it.
 OutputStream create(String src, boolean overwrite, org.apache.hadoop.util.Progressable progress)
          Create a new dfs file and return an output stream for writing into it with write-progress reporting.
 OutputStream create(String src, boolean overwrite, short replication, long blockSize)
          Create a new dfs file with the specified block replication and return an output stream for writing into the file.
 OutputStream create(String src, boolean overwrite, short replication, long blockSize, org.apache.hadoop.util.Progressable progress)
          Create a new dfs file with the specified block replication and write-progress reporting, and return an output stream for writing into the file.
 OutputStream create(String src, boolean overwrite, short replication, long blockSize, org.apache.hadoop.util.Progressable progress, int buffersize)
          Call create(String,FsPermission,EnumSet,short,long,Progressable,int) with default permission.
 OutputStream create(String src, org.apache.hadoop.fs.permission.FsPermission permission, EnumSet<org.apache.hadoop.fs.CreateFlag> flag, boolean createParent, short replication, long blockSize, org.apache.hadoop.util.Progressable progress, int buffersize)
          Create a new dfs file with the specified block replication and write-progress reporting, and return an output stream for writing into the file.
 OutputStream create(String src, org.apache.hadoop.fs.permission.FsPermission permission, EnumSet<org.apache.hadoop.fs.CreateFlag> flag, short replication, long blockSize, org.apache.hadoop.util.Progressable progress, int buffersize)
          Call create(String,FsPermission,EnumSet,boolean,short,long,Progressable,int) with createParent set to true.
static ClientProtocol createNamenode(org.apache.hadoop.conf.Configuration conf)
          The locking hierarchy is to first acquire lock on DFSClient object, followed by lock on leasechecker, followed by lock on an individual DFSOutputStream.
static ClientProtocol createNamenode(InetSocketAddress nameNodeAddr, org.apache.hadoop.conf.Configuration conf)
           
 void createSymlink(String target, String link, boolean createParent)
          Creates a symbolic link.
 DatanodeInfo[] datanodeReport(FSConstants.DatanodeReportType type)
           
 boolean delete(String src)
          Deprecated. 
 boolean delete(String src, boolean recursive)
          Delete a file or directory.
 UpgradeStatusReport distributedUpgradeProgress(FSConstants.UpgradeAction action)
           
 boolean exists(String src)
          Implemented using getFileInfo(src)
 void finalizeUpgrade()
           
 org.apache.hadoop.fs.BlockLocation[] getBlockLocations(String src, long start, long length)
          Get block location info about a file: getBlockLocations() returns a list of hostnames that store data for a specific file region.
 long getBlockSize(String f)
           
 long getCorruptBlocksCount()
          Returns count of blocks with at least one replica marked corrupt.
 long getDefaultBlockSize()
          Get the default block size for this cluster
 short getDefaultReplication()
           
 org.apache.hadoop.security.token.Token<DelegationTokenIdentifier> getDelegationToken(org.apache.hadoop.io.Text renewer)
           
 org.apache.hadoop.fs.FsStatus getDiskStatus()
           
 org.apache.hadoop.fs.MD5MD5CRC32FileChecksum getFileChecksum(String src)
          Get the checksum of a file.
static org.apache.hadoop.fs.MD5MD5CRC32FileChecksum getFileChecksum(String src, ClientProtocol namenode, SocketFactory socketFactory, int socketTimeout)
          Get the checksum of a file.
 HdfsFileStatus getFileInfo(String src)
           
 HdfsFileStatus getFileLinkInfo(String src)
          Get the file info for a specific file or directory.
 String getLinkTarget(String path)
          Resolve the *first* symlink, if any, in the path.
 long getMissingBlocksCount()
          Returns count of blocks with no good replicas left.
 ClientProtocol getNamenode()
          Get the namenode associated with this DFSClient object
 org.apache.hadoop.fs.FsServerDefaults getServerDefaults()
          Get server default values for a number of configuration params.
 long getUnderReplicatedBlocksCount()
          Returns a count of blocks with one or more replicas missing.
 DirectoryListing listPaths(String src, byte[] startAfter)
          Get a partial listing of the indicated directory. It is recommended to use HdfsFileStatus.EMPTY_NAME as startAfter if the application wants to fetch the listing starting from the first entry in the directory.
 void metaSave(String pathname)
          Dumps DFS data structures into specified file.
 boolean mkdirs(String src)
          Deprecated. 
 boolean mkdirs(String src, org.apache.hadoop.fs.permission.FsPermission permission, boolean createParent)
          Create a directory (or hierarchy of directories) with the given name and permission.
 org.apache.hadoop.hdfs.DFSInputStream open(String src)
           
 org.apache.hadoop.hdfs.DFSInputStream open(String src, int buffersize, boolean verifyChecksum)
          Create an input stream that obtains a list of block locations from the namenode, and then reads from the appropriate datanodes.
 org.apache.hadoop.hdfs.DFSInputStream open(String src, int buffersize, boolean verifyChecksum, org.apache.hadoop.fs.FileSystem.Statistics stats)
          Deprecated. Use open(String, int, boolean) instead.
 OutputStream primitiveCreate(String src, org.apache.hadoop.fs.permission.FsPermission absPermission, EnumSet<org.apache.hadoop.fs.CreateFlag> flag, boolean createParent, short replication, long blockSize, org.apache.hadoop.util.Progressable progress, int buffersize, int bytesPerChecksum)
          Same as create(String, FsPermission, EnumSet, short, long, Progressable, int), except that the permission is absolute (i.e. it has already been masked with the umask).
 boolean primitiveMkdir(String src, org.apache.hadoop.fs.permission.FsPermission absPermission)
          Same as mkdirs(String, FsPermission, boolean), except that the permission has already been masked against the umask.
 void refreshNodes()
          Refresh the hosts and exclude files.
 boolean rename(String src, String dst)
          Deprecated. Use rename(String, String, Options.Rename...) instead.
 void rename(String src, String dst, org.apache.hadoop.fs.Options.Rename... options)
          Rename file or directory.
 long renewDelegationToken(org.apache.hadoop.security.token.Token<DelegationTokenIdentifier> token)
           
 void reportBadBlocks(LocatedBlock[] blocks)
          Report corrupt blocks that were discovered by the client.
 void setOwner(String src, String username, String groupname)
          Set file or directory owner.
 void setPermission(String src, org.apache.hadoop.fs.permission.FsPermission permission)
          Set permissions to a file or directory.
 boolean setReplication(String src, short replication)
          Set replication for an existing file.
 boolean setSafeMode(FSConstants.SafeModeAction action)
          Enter, leave or get safe mode.
 void setTimes(String src, long mtime, long atime)
          Set the modification and access times of a file.
 String toString()
          
 
Methods inherited from class java.lang.Object
clone, equals, finalize, getClass, hashCode, notify, notifyAll, wait, wait, wait
 

Field Detail

LOG

public static final org.apache.commons.logging.Log LOG

SERVER_DEFAULTS_VALIDITY_PERIOD

public static final long SERVER_DEFAULTS_VALIDITY_PERIOD
See Also:
Constant Field Values

MAX_BLOCK_ACQUIRE_FAILURES

public static final int MAX_BLOCK_ACQUIRE_FAILURES
See Also:
Constant Field Values
Constructor Detail

DFSClient

@Deprecated
public DFSClient(org.apache.hadoop.conf.Configuration conf)
          throws IOException
Deprecated. Deprecated at 0.21

Same as this(NameNode.getAddress(conf), conf);

Throws:
IOException
See Also:
DFSClient(InetSocketAddress, Configuration)

DFSClient

public DFSClient(InetSocketAddress nameNodeAddr,
                 org.apache.hadoop.conf.Configuration conf)
          throws IOException
Same as this(nameNodeAddr, conf, null);

Throws:
IOException
See Also:
DFSClient(InetSocketAddress, Configuration, org.apache.hadoop.fs.FileSystem.Statistics)

DFSClient

public DFSClient(InetSocketAddress nameNodeAddr,
                 org.apache.hadoop.conf.Configuration conf,
                 org.apache.hadoop.fs.FileSystem.Statistics stats)
          throws IOException
Same as this(nameNodeAddr, null, conf, stats);

Throws:
IOException
See Also:
DFSClient(InetSocketAddress, ClientProtocol, Configuration, org.apache.hadoop.fs.FileSystem.Statistics)
Method Detail

createNamenode

public static ClientProtocol createNamenode(org.apache.hadoop.conf.Configuration conf)
                                     throws IOException
The locking hierarchy is to first acquire lock on DFSClient object, followed by lock on leasechecker, followed by lock on an individual DFSOutputStream.

Throws:
IOException

createNamenode

public static ClientProtocol createNamenode(InetSocketAddress nameNodeAddr,
                                            org.apache.hadoop.conf.Configuration conf)
                                     throws IOException
Throws:
IOException

close

public void close()
           throws IOException
Close the file system, abandoning all leases and files being created, and close connections to the namenode.

Specified by:
close in interface Closeable
Throws:
IOException

getDefaultBlockSize

public long getDefaultBlockSize()
Get the default block size for this cluster

Returns:
the default block size in bytes

getBlockSize

public long getBlockSize(String f)
                  throws IOException
Throws:
IOException

getServerDefaults

public org.apache.hadoop.fs.FsServerDefaults getServerDefaults()
                                                        throws IOException
Get server default values for a number of configuration params.

Throws:
IOException

getDelegationToken

public org.apache.hadoop.security.token.Token<DelegationTokenIdentifier> getDelegationToken(org.apache.hadoop.io.Text renewer)
                                                                                     throws IOException
Throws:
IOException

renewDelegationToken

public long renewDelegationToken(org.apache.hadoop.security.token.Token<DelegationTokenIdentifier> token)
                          throws org.apache.hadoop.security.token.SecretManager.InvalidToken,
                                 IOException
Throws:
org.apache.hadoop.security.token.SecretManager.InvalidToken
IOException

cancelDelegationToken

public void cancelDelegationToken(org.apache.hadoop.security.token.Token<DelegationTokenIdentifier> token)
                           throws org.apache.hadoop.security.token.SecretManager.InvalidToken,
                                  IOException
Throws:
org.apache.hadoop.security.token.SecretManager.InvalidToken
IOException

reportBadBlocks

public void reportBadBlocks(LocatedBlock[] blocks)
                     throws IOException
Report corrupt blocks that were discovered by the client.

Throws:
IOException

getDefaultReplication

public short getDefaultReplication()

getBlockLocations

public org.apache.hadoop.fs.BlockLocation[] getBlockLocations(String src,
                                                              long start,
                                                              long length)
                                                       throws IOException,
                                                              org.apache.hadoop.fs.UnresolvedLinkException
Get block location info about a file. getBlockLocations() returns a list of hostnames that store data for a specific file region, i.e. a set of hostnames for every block within the indicated region. This function is very useful when writing code that takes data placement into account; for example, the MapReduce system tries to schedule tasks on the same machines as the data blocks the tasks process.

Throws:
IOException
org.apache.hadoop.fs.UnresolvedLinkException
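A sketch of inspecting data placement with this method (the helper name and path are hypothetical; an open DFSClient is assumed):

```java
// Hypothetical helper: print the hosts storing each block of a file.
static void printLocations(org.apache.hadoop.hdfs.DFSClient client)
        throws java.io.IOException {
    org.apache.hadoop.fs.BlockLocation[] blocks =
        client.getBlockLocations("/data/input.txt", 0, Long.MAX_VALUE);
    for (org.apache.hadoop.fs.BlockLocation b : blocks) {
        System.out.println("offset=" + b.getOffset()
            + " length=" + b.getLength()
            + " hosts=" + java.util.Arrays.toString(b.getHosts()));
    }
}
```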

open

public org.apache.hadoop.hdfs.DFSInputStream open(String src)
                                           throws IOException,
                                                  org.apache.hadoop.fs.UnresolvedLinkException
Throws:
IOException
org.apache.hadoop.fs.UnresolvedLinkException

open

@Deprecated
public org.apache.hadoop.hdfs.DFSInputStream open(String src,
                                                  int buffersize,
                                                  boolean verifyChecksum,
                                                  org.apache.hadoop.fs.FileSystem.Statistics stats)
                                           throws IOException,
                                                  org.apache.hadoop.fs.UnresolvedLinkException
Deprecated. Use open(String, int, boolean) instead.

Create an input stream that obtains a list of block locations from the namenode, and then reads from the appropriate datanodes. Creates an inner subclass of InputStream that performs the required out-of-band work.

Throws:
IOException
org.apache.hadoop.fs.UnresolvedLinkException

open

public org.apache.hadoop.hdfs.DFSInputStream open(String src,
                                                  int buffersize,
                                                  boolean verifyChecksum)
                                           throws IOException,
                                                  org.apache.hadoop.fs.UnresolvedLinkException
Create an input stream that obtains a list of block locations from the namenode, and then reads from the appropriate datanodes. Creates an inner subclass of InputStream that performs the required out-of-band work.

Throws:
IOException
org.apache.hadoop.fs.UnresolvedLinkException
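A sketch of reading a file through this method (the helper name is hypothetical; an open DFSClient is assumed):

```java
// Hypothetical helper: copy an HDFS file's contents to standard output.
static void cat(org.apache.hadoop.hdfs.DFSClient client, String src)
        throws java.io.IOException {
    java.io.InputStream in = client.open(src);
    try {
        byte[] buf = new byte[8192];
        int n;
        while ((n = in.read(buf)) > 0) {
            System.out.write(buf, 0, n);
        }
    } finally {
        in.close();
    }
}
```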

create

public OutputStream create(String src,
                           boolean overwrite)
                    throws IOException,
                           org.apache.hadoop.fs.UnresolvedLinkException
Create a new dfs file and return an output stream for writing into it.

Parameters:
src - stream name
overwrite - do not check for file existence if true
Returns:
output stream
Throws:
org.apache.hadoop.fs.UnresolvedLinkException - if a symlink is encountered in src.
IOException
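A sketch of creating and writing a file with this overload (the helper name, path, and contents are hypothetical; an open DFSClient is assumed):

```java
// Hypothetical helper: write a small file, overwriting any existing one.
static void writeGreeting(org.apache.hadoop.hdfs.DFSClient client)
        throws java.io.IOException {
    java.io.OutputStream out = client.create("/data/hello.txt", true);
    try {
        out.write("hello, hdfs\n".getBytes("UTF-8"));
    } finally {
        out.close(); // close() completes the file at the namenode
    }
}
```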

create

public OutputStream create(String src,
                           boolean overwrite,
                           org.apache.hadoop.util.Progressable progress)
                    throws IOException,
                           org.apache.hadoop.fs.UnresolvedLinkException
Create a new dfs file and return an output stream for writing into it with write-progress reporting.

Parameters:
src - stream name
overwrite - do not check for file existence if true
Returns:
output stream
Throws:
org.apache.hadoop.fs.UnresolvedLinkException - if a symlink is encountered in src.
IOException

create

public OutputStream create(String src,
                           boolean overwrite,
                           short replication,
                           long blockSize)
                    throws IOException,
                           org.apache.hadoop.fs.UnresolvedLinkException
Create a new dfs file with the specified block replication and return an output stream for writing into the file.

Parameters:
src - stream name
overwrite - do not check for file existence if true
replication - block replication
Returns:
output stream
Throws:
org.apache.hadoop.fs.UnresolvedLinkException - if a symlink is encountered in src.
IOException

getNamenode

public ClientProtocol getNamenode()
Get the namenode associated with this DFSClient object

Returns:
the namenode associated with this DFSClient object

create

public OutputStream create(String src,
                           boolean overwrite,
                           short replication,
                           long blockSize,
                           org.apache.hadoop.util.Progressable progress)
                    throws IOException,
                           org.apache.hadoop.fs.UnresolvedLinkException
Create a new dfs file with the specified block replication and write-progress reporting, and return an output stream for writing into the file.

Parameters:
src - stream name
overwrite - do not check for file existence if true
replication - block replication
Returns:
output stream
Throws:
org.apache.hadoop.fs.UnresolvedLinkException - if a symlink is encountered in src.
IOException

create

public OutputStream create(String src,
                           boolean overwrite,
                           short replication,
                           long blockSize,
                           org.apache.hadoop.util.Progressable progress,
                           int buffersize)
                    throws IOException,
                           org.apache.hadoop.fs.UnresolvedLinkException
Call create(String,FsPermission,EnumSet,short,long,Progressable,int) with default permission.

Throws:
IOException
org.apache.hadoop.fs.UnresolvedLinkException
See Also:
FsPermission.getDefault()

create

public OutputStream create(String src,
                           org.apache.hadoop.fs.permission.FsPermission permission,
                           EnumSet<org.apache.hadoop.fs.CreateFlag> flag,
                           short replication,
                           long blockSize,
                           org.apache.hadoop.util.Progressable progress,
                           int buffersize)
                    throws IOException,
                           org.apache.hadoop.fs.UnresolvedLinkException
Call create(String,FsPermission,EnumSet,boolean,short,long,Progressable,int) with createParent set to true.

Throws:
IOException
org.apache.hadoop.fs.UnresolvedLinkException

create

public OutputStream create(String src,
                           org.apache.hadoop.fs.permission.FsPermission permission,
                           EnumSet<org.apache.hadoop.fs.CreateFlag> flag,
                           boolean createParent,
                           short replication,
                           long blockSize,
                           org.apache.hadoop.util.Progressable progress,
                           int buffersize)
                    throws IOException,
                           org.apache.hadoop.fs.UnresolvedLinkException
Create a new dfs file with the specified block replication and write-progress reporting, and return an output stream for writing into the file.

Parameters:
src - stream name
permission - The permission of the directory being created. If permission == null, use FsPermission.getDefault().
flag - flags indicating how the file should be created (e.g. create, overwrite, or append)
createParent - create missing parent directory if true
replication - block replication
Returns:
output stream
Throws:
IOException
org.apache.hadoop.fs.UnresolvedLinkException - if src contains a symlink.
See Also:
ClientProtocol.create(String, FsPermission, String, EnumSetWritable, boolean, short, long)

primitiveCreate

public OutputStream primitiveCreate(String src,
                                    org.apache.hadoop.fs.permission.FsPermission absPermission,
                                    EnumSet<org.apache.hadoop.fs.CreateFlag> flag,
                                    boolean createParent,
                                    short replication,
                                    long blockSize,
                                    org.apache.hadoop.util.Progressable progress,
                                    int buffersize,
                                    int bytesPerChecksum)
                             throws IOException,
                                    org.apache.hadoop.fs.UnresolvedLinkException
Same as create(String, FsPermission, EnumSet, short, long, Progressable, int), except that the permission is absolute (i.e. it has already been masked with the umask).

Throws:
IOException
org.apache.hadoop.fs.UnresolvedLinkException

createSymlink

public void createSymlink(String target,
                          String link,
                          boolean createParent)
                   throws IOException,
                          org.apache.hadoop.fs.UnresolvedLinkException
Creates a symbolic link.

Throws:
IOException
org.apache.hadoop.fs.UnresolvedLinkException
See Also:
ClientProtocol.createSymlink(String, String,FsPermission, boolean)

getLinkTarget

public String getLinkTarget(String path)
                     throws IOException
Resolve the *first* symlink, if any, in the path.

Throws:
IOException
See Also:
ClientProtocol.getLinkTarget(String)

setReplication

public boolean setReplication(String src,
                              short replication)
                       throws IOException,
                              org.apache.hadoop.fs.UnresolvedLinkException
Set replication for an existing file.

Parameters:
replication -
Returns:
true if successful, or false if the file does not exist
Throws:
IOException
org.apache.hadoop.fs.UnresolvedLinkException
See Also:
ClientProtocol.setReplication(String, short)

rename

@Deprecated
public boolean rename(String src,
                      String dst)
               throws IOException,
                      org.apache.hadoop.fs.UnresolvedLinkException
Deprecated. Use rename(String, String, Options.Rename...) instead.

Rename file or directory. See ClientProtocol.rename(String, String).

Throws:
IOException
org.apache.hadoop.fs.UnresolvedLinkException

concat

public void concat(String trg,
                   String[] srcs)
            throws IOException,
                   org.apache.hadoop.fs.UnresolvedLinkException
Move blocks from srcs to trg and delete srcs. See ClientProtocol.concat(String, String[]).

Throws:
IOException
org.apache.hadoop.fs.UnresolvedLinkException

rename

public void rename(String src,
                   String dst,
                   org.apache.hadoop.fs.Options.Rename... options)
            throws IOException,
                   org.apache.hadoop.fs.UnresolvedLinkException
Rename file or directory. See ClientProtocol.rename(String, String, Options.Rename...)

Throws:
IOException
org.apache.hadoop.fs.UnresolvedLinkException

delete

@Deprecated
public boolean delete(String src)
               throws IOException,
                      org.apache.hadoop.fs.UnresolvedLinkException
Deprecated. 

Delete file or directory. See ClientProtocol.delete(String).

Throws:
IOException
org.apache.hadoop.fs.UnresolvedLinkException

delete

public boolean delete(String src,
                      boolean recursive)
               throws IOException,
                      org.apache.hadoop.fs.UnresolvedLinkException
Delete a file or directory. Deletes the contents of the directory if it is non-empty and recursive is set to true.

Throws:
IOException
org.apache.hadoop.fs.UnresolvedLinkException

exists

public boolean exists(String src)
               throws IOException
Implemented using getFileInfo(src)

Throws:
IOException

listPaths

public DirectoryListing listPaths(String src,
                                  byte[] startAfter)
                           throws IOException,
                                  org.apache.hadoop.fs.UnresolvedLinkException
Get a partial listing of the indicated directory. It is recommended to use HdfsFileStatus.EMPTY_NAME as startAfter if the application wants to fetch the listing starting from the first entry in the directory.

Parameters:
src - the directory name
startAfter - the name to start listing after encoded in java UTF8
Returns:
a partial listing starting after startAfter
Throws:
IOException
org.apache.hadoop.fs.UnresolvedLinkException
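A sketch of paging through a full directory listing with this method (the helper name is hypothetical; an open DFSClient and an existing directory are assumed):

```java
// Hypothetical helper: list all entries of a directory, page by page,
// using getLastName() as the startAfter cursor for the next page.
static void listAll(org.apache.hadoop.hdfs.DFSClient client, String dir)
        throws java.io.IOException {
    byte[] startAfter = org.apache.hadoop.hdfs.protocol.HdfsFileStatus.EMPTY_NAME;
    org.apache.hadoop.hdfs.protocol.DirectoryListing page;
    do {
        page = client.listPaths(dir, startAfter);
        for (org.apache.hadoop.hdfs.protocol.HdfsFileStatus status
                : page.getPartialListing()) {
            System.out.println(status.getLocalName());
        }
        startAfter = page.getLastName();
    } while (page.hasMore());
}
```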

getFileInfo

public HdfsFileStatus getFileInfo(String src)
                           throws IOException,
                                  org.apache.hadoop.fs.UnresolvedLinkException
Throws:
IOException
org.apache.hadoop.fs.UnresolvedLinkException

getFileLinkInfo

public HdfsFileStatus getFileLinkInfo(String src)
                               throws IOException,
                                      org.apache.hadoop.fs.UnresolvedLinkException
Get the file info for a specific file or directory. If src refers to a symlink then the FileStatus of the link is returned.

Parameters:
src - path to a file or directory.
Returns:
FileStatus describing src.
Throws:
IOException
org.apache.hadoop.fs.UnresolvedLinkException - if the path contains symlinks

getFileChecksum

public org.apache.hadoop.fs.MD5MD5CRC32FileChecksum getFileChecksum(String src)
                                                             throws IOException
Get the checksum of a file.

Parameters:
src - The file path
Returns:
The checksum
Throws:
IOException
See Also:
DistributedFileSystem.getFileChecksum(Path)
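A sketch of using file checksums to compare two files (the helper name and paths are hypothetical; an open DFSClient is assumed). Note that checksums are only meaningfully comparable when both files were written with the same bytes-per-checksum and block size settings:

```java
// Hypothetical helper: compare two HDFS files by their checksums.
static boolean sameContents(org.apache.hadoop.hdfs.DFSClient client,
                            String a, String b) throws java.io.IOException {
    return client.getFileChecksum(a).equals(client.getFileChecksum(b));
}
```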

getFileChecksum

public static org.apache.hadoop.fs.MD5MD5CRC32FileChecksum getFileChecksum(String src,
                                                                           ClientProtocol namenode,
                                                                           SocketFactory socketFactory,
                                                                           int socketTimeout)
                                                                    throws IOException
Get the checksum of a file.

Parameters:
src - The file path
Returns:
The checksum
Throws:
IOException

setPermission

public void setPermission(String src,
                          org.apache.hadoop.fs.permission.FsPermission permission)
                   throws IOException,
                          org.apache.hadoop.fs.UnresolvedLinkException
Set permissions to a file or directory.

Parameters:
src - path name.
permission -
Throws:
FileNotFoundException - if the file does not exist.
org.apache.hadoop.fs.UnresolvedLinkException - if the path contains a symlink.
IOException

setOwner

public void setOwner(String src,
                     String username,
                     String groupname)
              throws IOException,
                     org.apache.hadoop.fs.UnresolvedLinkException
Set file or directory owner.

Parameters:
src - path name.
username - user id.
groupname - user group.
Throws:
FileNotFoundException - if the file does not exist.
org.apache.hadoop.fs.UnresolvedLinkException - if the path contains a symlink.
IOException

getDiskStatus

public org.apache.hadoop.fs.FsStatus getDiskStatus()
                                            throws IOException
Throws:
IOException

getMissingBlocksCount

public long getMissingBlocksCount()
                           throws IOException
Returns count of blocks with no good replicas left. Normally should be zero.

Throws:
IOException

getUnderReplicatedBlocksCount

public long getUnderReplicatedBlocksCount()
                                   throws IOException
Returns a count of blocks with one or more replicas missing.

Throws:
IOException

getCorruptBlocksCount

public long getCorruptBlocksCount()
                           throws IOException
Returns count of blocks with at least one replica marked corrupt.

Throws:
IOException

datanodeReport

public DatanodeInfo[] datanodeReport(FSConstants.DatanodeReportType type)
                              throws IOException
Throws:
IOException

setSafeMode

public boolean setSafeMode(FSConstants.SafeModeAction action)
                    throws IOException
Enter, leave or get safe mode. See ClientProtocol.setSafeMode(FSConstants.SafeModeAction) for more details.

Throws:
IOException
See Also:
ClientProtocol.setSafeMode(FSConstants.SafeModeAction)
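A sketch of querying and leaving safe mode with this method (the helper name is hypothetical; an open DFSClient and superuser privileges are assumed):

```java
// Hypothetical helper: leave safe mode if the namenode is currently in it.
// SAFEMODE_GET only queries the current state; SAFEMODE_LEAVE changes it.
static void ensureWritable(org.apache.hadoop.hdfs.DFSClient client)
        throws java.io.IOException {
    boolean inSafeMode = client.setSafeMode(
        org.apache.hadoop.hdfs.protocol.FSConstants.SafeModeAction.SAFEMODE_GET);
    if (inSafeMode) {
        client.setSafeMode(
            org.apache.hadoop.hdfs.protocol.FSConstants.SafeModeAction.SAFEMODE_LEAVE);
    }
}
```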

refreshNodes

public void refreshNodes()
                  throws IOException
Refresh the hosts and exclude files. (Rereads them.) See ClientProtocol.refreshNodes() for more details.

Throws:
IOException
See Also:
ClientProtocol.refreshNodes()

metaSave

public void metaSave(String pathname)
              throws IOException
Dumps DFS data structures into specified file. See ClientProtocol.metaSave(String) for more details.

Throws:
IOException
See Also:
ClientProtocol.metaSave(String)

finalizeUpgrade

public void finalizeUpgrade()
                     throws IOException
Throws:
IOException
See Also:
ClientProtocol.finalizeUpgrade()

distributedUpgradeProgress

public UpgradeStatusReport distributedUpgradeProgress(FSConstants.UpgradeAction action)
                                               throws IOException
Throws:
IOException
See Also:
ClientProtocol.distributedUpgradeProgress(FSConstants.UpgradeAction)

mkdirs

@Deprecated
public boolean mkdirs(String src)
               throws IOException
Deprecated. 

Throws:
IOException

mkdirs

public boolean mkdirs(String src,
                      org.apache.hadoop.fs.permission.FsPermission permission,
                      boolean createParent)
               throws IOException,
                      org.apache.hadoop.fs.UnresolvedLinkException
Create a directory (or hierarchy of directories) with the given name and permission.

Parameters:
src - The path of the directory being created
permission - The permission of the directory being created. If permission == null, use FsPermission.getDefault().
createParent - create missing parent directory if true
Returns:
True if the operation succeeds.
Throws:
org.apache.hadoop.fs.UnresolvedLinkException - if the path contains a symlink.
IOException
See Also:
ClientProtocol.mkdirs(String, FsPermission, boolean)

primitiveMkdir

public boolean primitiveMkdir(String src,
                              org.apache.hadoop.fs.permission.FsPermission absPermission)
                       throws IOException,
                              org.apache.hadoop.fs.UnresolvedLinkException
Same as mkdirs(String, FsPermission, boolean), except that the permission has already been masked against the umask.

Throws:
org.apache.hadoop.fs.UnresolvedLinkException - if the path contains a symlink.
IOException

setTimes

public void setTimes(String src,
                     long mtime,
                     long atime)
              throws IOException,
                     org.apache.hadoop.fs.UnresolvedLinkException
Set the modification and access times of a file.

Throws:
FileNotFoundException - if the path is not a file
IOException
org.apache.hadoop.fs.UnresolvedLinkException

toString

public String toString()

Overrides:
toString in class Object


Copyright © 2009 The Apache Software Foundation