name | value | description
hadoop.hdfs.configuration.version | 1 | Version of this configuration file.
dfs.namenode.logging.level | info | The logging level for dfs namenode. Other values are "dir" (trace namespace mutations), "block" (trace block under/over replications and block creations/deletions), or "all".
dfs.namenode.secondary.http-address | 0.0.0.0:50090 | The secondary namenode http server address and port. If the port is 0 then the server will start on a free port.
dfs.datanode.address | 0.0.0.0:50010 | The address on which the datanode server will listen. If the port is 0 then the server will start on a free port.
dfs.datanode.http.address | 0.0.0.0:50075 | The datanode http server address and port. If the port is 0 then the server will start on a free port.
dfs.datanode.ipc.address | 0.0.0.0:50020 | The datanode ipc server address and port. If the port is 0 then the server will start on a free port.
dfs.datanode.handler.count | 3 | The number of server threads for the datanode.
dfs.namenode.http-address | 0.0.0.0:50070 | The address and the base port on which the dfs namenode web ui will listen. If the port is 0 then the server will start on a free port.
dfs.https.enable | false | Decides whether HTTPS (SSL) is supported on HDFS.
dfs.client.https.need-auth | false | Whether SSL client certificate authentication is required.
dfs.https.server.keystore.resource | ssl-server.xml | Resource file from which ssl server keystore information will be extracted.
dfs.client.https.keystore.resource | ssl-client.xml | Resource file from which ssl client keystore information will be extracted.
dfs.datanode.https.address | 0.0.0.0:50475 | The datanode https server address and port.
dfs.namenode.https-address | 0.0.0.0:50470 | The namenode https server address and port.
dfs.datanode.dns.interface | default | The name of the network interface from which a datanode should report its IP address.
dfs.datanode.dns.nameserver | default | The host name or IP address of the name server (DNS) which a DataNode should use to determine the host name used by the NameNode for communication and display purposes.
dfs.namenode.backup.address | 0.0.0.0:50100 | The backup node server address and port. If the port is 0 then the server will start on a free port.
dfs.namenode.backup.http-address | 0.0.0.0:50105 | The backup node http server address and port. If the port is 0 then the server will start on a free port.
dfs.namenode.replication.considerLoad | true | Decides whether chooseTarget considers the target's load.
dfs.default.chunk.view.size | 32768 | The number of bytes to view for a file on the browser.
dfs.datanode.du.reserved | 0 | Reserved space in bytes per volume. Always leave this much space free for non-dfs use.
dfs.namenode.name.dir | file://${hadoop.tmp.dir}/dfs/name | Determines where on the local filesystem the DFS name node should store the name table (fsimage). If this is a comma-delimited list of directories then the name table is replicated in all of the directories, for redundancy.
dfs.namenode.edits.dir | ${dfs.namenode.name.dir} | Determines where on the local filesystem the DFS name node should store the transaction (edits) file. If this is a comma-delimited list of directories then the transaction file is replicated in all of the directories, for redundancy. Default value is the same as dfs.namenode.name.dir.
dfs.web.ugi | webuser,webgroup | The user account used by the web interface. Syntax: USERNAME,GROUP1,GROUP2, ...
dfs.permissions.enabled | true | If "true", enable permission checking in HDFS. If "false", permission checking is turned off, but all other behavior is unchanged. Switching from one parameter value to the other does not change the mode, owner or group of files or directories.
dfs.permissions.superusergroup | supergroup | The name of the group of super-users.
dfs.block.access.token.enable | false | If "true", access tokens are used as capabilities for accessing datanodes. If "false", no access tokens are checked on accessing datanodes.
dfs.block.access.key.update.interval | 600 | Interval in minutes at which the namenode updates its access keys.
dfs.block.access.token.lifetime | 600 | The lifetime of access tokens in minutes.
dfs.datanode.data.dir | file://${hadoop.tmp.dir}/dfs/data | Determines where on the local filesystem a DFS data node should store its blocks. If this is a comma-delimited list of directories, then data will be stored in all named directories, typically on different devices. Directories that do not exist are ignored.
dfs.datanode.data.dir.perm | 755 | Permissions for the directories on the local filesystem where the DFS data node stores its blocks. The permissions can either be octal or symbolic.
dfs.replication | 3 | Default block replication. The actual number of replications can be specified when the file is created. The default is used if replication is not specified at create time.
dfs.replication.max | 512 | Maximal block replication.
dfs.namenode.replication.min | 1 | Minimal block replication.
dfs.blocksize | 67108864 | The default block size for new files.
dfs.client.block.write.retries | 3 | The number of retries for writing blocks to the data nodes, before we signal failure to the application.
dfs.blockreport.intervalMsec | 21600000 | Determines block reporting interval in milliseconds.
dfs.blockreport.initialDelay | 0 | Delay for first block report in seconds.
dfs.datanode.directoryscan.interval | 21600 | Interval in seconds for the datanode to scan data directories and reconcile the difference between blocks in memory and on the disk.
dfs.datanode.directoryscan.threads | 1 | The number of threads in the thread pool used to compile reports for volumes in parallel.
dfs.heartbeat.interval | 3 | Determines datanode heartbeat interval in seconds.
dfs.namenode.handler.count | 10 | The number of server threads for the namenode.
dfs.namenode.safemode.threshold-pct | 0.999f | Specifies the percentage of blocks that should satisfy the minimal replication requirement defined by dfs.namenode.replication.min. Values less than or equal to 0 mean not to start in safe mode. Values greater than 1 will make safe mode permanent.
dfs.namenode.safemode.extension | 30000 | Determines extension of safe mode in milliseconds after the threshold level is reached.
dfs.datanode.balance.bandwidthPerSec | 1048576 | Specifies the maximum amount of bandwidth that each datanode can utilize for balancing, in terms of the number of bytes per second.
dfs.hosts | | Names a file that contains a list of hosts that are permitted to connect to the namenode. The full pathname of the file must be specified. If the value is empty, all hosts are permitted.
dfs.hosts.exclude | | Names a file that contains a list of hosts that are not permitted to connect to the namenode. The full pathname of the file must be specified. If the value is empty, no hosts are excluded.
dfs.namenode.max.objects | 0 | The maximum number of files, directories and blocks dfs supports. A value of zero indicates no limit to the number of objects that dfs supports.
dfs.namenode.decommission.interval | 30 | Namenode periodicity in seconds to check if decommission is complete.
dfs.namenode.decommission.nodes.per.interval | 5 | The number of nodes the namenode checks per dfs.namenode.decommission.interval to see whether decommission is complete.
dfs.namenode.replication.interval | 3 | The periodicity in seconds with which the namenode computes replication work for datanodes.
dfs.namenode.accesstime.precision | 3600000 | The access time for an HDFS file is precise up to this value. The default value is 1 hour. Setting a value of 0 disables access times for HDFS.
dfs.datanode.plugins | | Comma-separated list of datanode plug-ins to be activated.
dfs.namenode.plugins | | Comma-separated list of namenode plug-ins to be activated.
dfs.stream-buffer-size | 4096 | The size of the buffer to stream files. The size of this buffer should probably be a multiple of hardware page size (4096 on Intel x86), and it determines how much data is buffered during read and write operations.
dfs.bytes-per-checksum | 512 | The number of bytes per checksum. Must not be larger than dfs.stream-buffer-size.
dfs.client-write-packet-size | 65536 | Packet size for clients to write.
dfs.namenode.checkpoint.dir | file://${hadoop.tmp.dir}/dfs/namesecondary | Determines where on the local filesystem the DFS secondary name node should store the temporary images to merge. If this is a comma-delimited list of directories then the image is replicated in all of the directories for redundancy.
dfs.namenode.checkpoint.edits.dir | ${dfs.namenode.checkpoint.dir} | Determines where on the local filesystem the DFS secondary name node should store the temporary edits to merge. If this is a comma-delimited list of directories then the edits are replicated in all of the directories for redundancy. Default value is the same as dfs.namenode.checkpoint.dir.
dfs.namenode.checkpoint.period | 3600 | The number of seconds between two periodic checkpoints.
dfs.namenode.checkpoint.size | 67108864 | The size of the current edit log (in bytes) that triggers a periodic checkpoint even if dfs.namenode.checkpoint.period hasn't expired.
dfs.namenode.delegation.key.update-interval | 86400 | The update frequency of the master key for delegation tokens in the namenode.
dfs.namenode.delegation.token.max-lifetime | 604800 | The maximum lifetime for which a delegation token is valid.
dfs.namenode.delegation.token.renew-interval | 86400 | The frequency of renewal of delegation tokens.
dfs.datanode.failed.volumes.tolerated | 0 | The number of volumes that are allowed to fail before a datanode stops offering service. By default any volume failure will cause a datanode to shut down.
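The values above are the built-in defaults; a cluster normally overrides only the handful that matter for its hardware and workload in hdfs-site.xml. As a minimal sketch, assuming a site that keeps metadata on two local disks and wants a larger block size, an override file could look like the following. The property names come from the table above, but the paths and values are illustrative placeholders, not recommendations.

<?xml version="1.0"?>
<!-- hdfs-site.xml: site-specific overrides of the defaults listed above.
     The directory paths and values here are illustrative placeholders only. -->
<configuration>
  <property>
    <name>dfs.namenode.name.dir</name>
    <!-- Comma-delimited list: the fsimage is replicated to both directories. -->
    <value>file:///data/1/dfs/name,file:///data/2/dfs/name</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <!-- Blocks are spread across all listed directories, typically one per disk. -->
    <value>file:///data/1/dfs/data,file:///data/2/dfs/data</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <!-- Default replication for new files; may still be overridden per file at create time. -->
    <value>2</value>
  </property>
  <property>
    <name>dfs.blocksize</name>
    <!-- 134217728 bytes = 128 MB, instead of the 64 MB default shown in the table. -->
    <value>134217728</value>
  </property>
</configuration>

Any property not set in hdfs-site.xml falls back to the default listed in this table.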