Comments on "HDFS: Data warehousing at Facebook" (Dhruba Borthakur)

luo li (2011-07-04):
This comment has been removed by the author.

luo li (2011-07-04):
Hi Dhruba,

We encountered a situation like this: we use the short-circuit first block report to shorten namenode restart time, and we have used it for a long time. Recently, though, we hit this problem: after the namenode started and all of the datanodes' block reports had been processed, safe mode could not be exited automatically because the reported-block count could not reach the default threshold of 0.999; it stopped at 0.998. We ran fsck over the whole HDFS and it reported the filesystem as healthy. I noticed that the short-circuit first report skips reportDiff and just calls addStoredBlock into the blocks map. After reviewing our HDFS code I don't think this is the reason, but is there any possibility that this is the cause of our problem? Or have you ever encountered it? I think Facebook's experience would be a great help to us.

Thank you very much.

Dhruba Borthakur (2011-07-04):
The default fsck does not analyze files that were being written when the namenode was killed. Please try running:

bin/hadoop fsck / -files -blocks -locations -openforwrite

This will print the files that have missing blocks (and the existence of missing blocks means that the NN will not exit safemode). You can manually exit safemode via:

bin/hadoop dfsadmin -safemode leave

hanum (2012-06-05):
This comment has been removed by a blog administrator.

David talpur (2014-03-12):
I must say, I thought this was a pretty interesting read on this topic. Liked the material. Jay Catlin (http://www.amsfulfillment.com)
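For reference, the 0.999 figure discussed in the thread is the namenode's safe-mode block threshold, which is configurable in hdfs-site.xml. A minimal sketch for the Hadoop releases of that era (the property was later renamed dfs.namenode.safemode.threshold-pct in newer versions; the 0.995 value below is only an illustrative example, not a recommendation):

```xml
<!-- hdfs-site.xml: fraction of blocks that must be reported
     before the namenode leaves safe mode (default 0.999f).
     Example value only; lowering it trades safety for faster exit. -->
<property>
  <name>dfs.safemode.threshold.pct</name>
  <value>0.995f</value>
</property>
```

Lowering the threshold only masks the symptom luo li describes; the underlying question of why 0.1% of blocks were never counted would still need the fsck -openforwrite diagnosis Dhruba suggests.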