java - "Not sufficiently replicated yet" when appending to a file in HDFS -
I have a Hadoop 2.4.1 cluster running with 1 namenode and 4 datanodes (the namenode being one of them). A Java program appends data to a file in HDFS every second. The file size is about 100 GB and the replication factor is 2.
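For context, a simplified sketch of what the program does (not the exact code; the path is the real one from the error below, but the record format and sync call are my simplifications):

    import java.io.IOException;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class HdfsAppender {
        public static void main(String[] args) throws IOException, InterruptedException {
            Configuration conf = new Configuration();
            FileSystem fs = FileSystem.get(conf);
            Path file = new Path("/user/hduser/siridata.txt");

            while (true) {
                // Re-open for append, write one record, then close so the
                // last block is finalized before the next append.
                try (FSDataOutputStream out = fs.append(file)) {
                    out.writeBytes(System.currentTimeMillis() + "\n");
                    out.hsync(); // push the write through the datanode pipeline
                }
                Thread.sleep(1000L);
            }
        }
    }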
I have lately started having problems appending to the file. Every time the program tries to append, it throws this exception: org.apache.hadoop.ipc.RemoteException(java.io.IOException): append: lastBlock=blk_1073742660_2323939 of src=/user/hduser/siridata.txt is not sufficiently replicated yet.
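Since the message suggests the last block has not yet reached the target replication when the next append arrives, one workaround I am considering is retrying the append with a backoff instead of failing immediately. A rough sketch, added to the appender class above (the retry count and delays are placeholders, not tested values):

    // Retry fs.append() while the namenode reports the last block as
    // under-replicated; rethrow anything else unchanged.
    private static FSDataOutputStream appendWithRetry(FileSystem fs, Path file)
            throws IOException, InterruptedException {
        IOException last = null;
        for (int attempt = 1; attempt <= 10; attempt++) {
            try {
                return fs.append(file);
            } catch (org.apache.hadoop.ipc.RemoteException e) {
                if (e.getMessage() == null
                        || !e.getMessage().contains("not sufficiently replicated")) {
                    throw e; // unrelated failure, surface it
                }
                last = e;
                Thread.sleep(1000L * attempt); // give replication time to catch up
            }
        }
        throw last;
    }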
If I run fsck, it says the file is corrupt, with 1 missing block. The filesystem gets healthy again after some random combination of running the balancer, namenode recovery, and hdfs dfs -setrep. After appending for a while, the original problem reappears. Once, I removed the node that had the corrupt data, and the system got 100% healthy without problems - for a while.
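To see whether the file is drifting toward this state before the exception fires, I could also check block replication from Java before each append. A small sketch (the helper is my own idea, not something the program currently does; target would be 2 here):

    import org.apache.hadoop.fs.BlockLocation;
    import org.apache.hadoop.fs.FileStatus;

    // Returns true only if every block of the file is reported on at
    // least `target` datanodes.
    static boolean fullyReplicated(FileSystem fs, Path file, int target)
            throws IOException {
        FileStatus st = fs.getFileStatus(file);
        for (BlockLocation loc : fs.getFileBlockLocations(st, 0, st.getLen())) {
            if (loc.getHosts().length < target) {
                return false; // at least one block is under-replicated
            }
        }
        return true;
    }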
I also tried starting over with a new data file, but the same problem persists.
Any ideas what's wrong? Thanks!
java hadoop hdfs