Before Hadoop 2 came into the picture, Hadoop clusters had to live with the fact that the NameNode placed limits on the degree to which they could scale: few clusters were able to scale beyond 3,000 or 4,000 nodes. The NameNode's need to maintain records for every block of data stored in the cluster turned out to be the most significant factor limiting greater cluster growth. When there are too many blocks, it becomes increasingly difficult for the NameNode to scale up as the Hadoop cluster scales out.
The solution for expanding Hadoop clusters indefinitely is to federate the NameNode. Specifically, we set things up so that multiple NameNode instances run on their own dedicated master nodes, and then make each NameNode responsible only for the file blocks in its own namespace.
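As a rough illustration, here is a minimal sketch of what federation looks like in hdfs-site.xml; the nameservice IDs (ns1, ns2) and host names are made-up placeholders for this example, not values from this article:

    <property>
      <name>dfs.nameservices</name>
      <value>ns1,ns2</value>  <!-- two independent namespaces, each with its own NameNode -->
    </property>
    <property>
      <name>dfs.namenode.rpc-address.ns1</name>
      <value>master1.example.com:8020</value>  <!-- NameNode serving namespace ns1 -->
    </property>
    <property>
      <name>dfs.namenode.rpc-address.ns2</name>
      <value>master2.example.com:8020</value>  <!-- NameNode serving namespace ns2 -->
    </property>

Every DataNode in the cluster still reports to all of the federated NameNodes, but each NameNode manages block metadata only for its own namespace, so the per-NameNode memory burden is divided.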
In Hadoop's infancy, a great deal of discussion focused on the NameNode being a single point of failure (SPOF). Hadoop as a whole has always had a robust, fault-tolerant architecture, with the exception of this one key area. As we already know, without the NameNode, there is no Hadoop cluster.
With Hadoop 2, we can now configure HDFS so that there are two kinds of NameNodes:
1. An Active NameNode, and
2. A Standby NameNode
The Standby NameNode needs to run on a dedicated master node that's configured very similarly to the master node used by the Active NameNode.
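As a hedged sketch of how this pair is declared (the nameservice ID mycluster and the host names are placeholders assumed for illustration), the Active and Standby NameNodes are defined in hdfs-site.xml roughly like this:

    <property>
      <name>dfs.nameservices</name>
      <value>mycluster</value>
    </property>
    <property>
      <name>dfs.ha.namenodes.mycluster</name>
      <value>nn1,nn2</value>  <!-- logical IDs for the two NameNodes; either can be Active -->
    </property>
    <property>
      <name>dfs.namenode.rpc-address.mycluster.nn1</name>
      <value>master1.example.com:8020</value>
    </property>
    <property>
      <name>dfs.namenode.rpc-address.mycluster.nn2</name>
      <value>master2.example.com:8020</value>
    </property>
    <property>
      <name>dfs.client.failover.proxy.provider.mycluster</name>
      <!-- lets HDFS clients discover which NameNode is currently Active -->
      <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
    </property>

Note that neither NameNode is hard-coded as Active or Standby; the roles are decided at runtime, which is what makes failover possible.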
The Standby NameNode's job is not to sit idle while the Active NameNode handles all the block address requests. The Standby NameNode, charged with keeping the state of the block locations and block metadata in memory, also handles the HDFS checkpointing duties. The Active NameNode writes journal entries on file changes to a majority of the JournalNode services, which run on the master nodes. (Note: this HDFS high-availability solution requires at least three master nodes, and if there are more, there can be only an odd number of them.) If a failure occurs, the Standby NameNode first reads all of the completed journal entries (in simple words, entries that a majority of JournalNodes have recorded) to make sure the new Active NameNode is fully consistent with the state of the cluster.
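To connect this to configuration, a minimal sketch of the quorum journal setup might look like the following; the three master host names and the local edits directory are assumed placeholders:

    <property>
      <name>dfs.namenode.shared.edits.dir</name>
      <!-- the Active NameNode writes its edit log to this quorum of JournalNodes -->
      <value>qjournal://master1.example.com:8485;master2.example.com:8485;master3.example.com:8485/mycluster</value>
    </property>
    <property>
      <name>dfs.journalnode.edits.dir</name>
      <!-- local directory where each JournalNode stores the edits it receives -->
      <value>/var/hadoop/hdfs/journal</value>
    </property>

The odd-number requirement follows from the majority rule: with three JournalNodes, a write is durable once two of them have it, and the cluster can tolerate the loss of one.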
ZooKeeper is used to monitor the Active NameNode and to manage the failover logistics if the Active NameNode becomes unavailable. Both the Active and Standby NameNodes have dedicated ZooKeeper Failover Controllers (ZKFCs) that handle the monitoring and failover tasks. In the event of a failure, the ZKFC informs the ZooKeeper instances on the cluster, which then elect a new Active NameNode.
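As a final hedged sketch of wiring this up (the ZooKeeper host names are assumed placeholders), automatic failover is switched on in hdfs-site.xml and the ZooKeeper quorum is listed in core-site.xml:

    <!-- hdfs-site.xml -->
    <property>
      <name>dfs.ha.automatic-failover.enabled</name>
      <value>true</value>  <!-- run a ZKFC alongside each NameNode -->
    </property>

    <!-- core-site.xml -->
    <property>
      <name>ha.zookeeper.quorum</name>
      <value>zk1.example.com:2181,zk2.example.com:2181,zk3.example.com:2181</value>
    </property>

After that, the ZooKeeper znode used for leader election is initialized once with hdfs zkfc -formatZK, and a ZKFC process is started next to each NameNode.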