razorslinky Posted December 7, 2014 (Author)

Personally, from all the crashes and trouble I've seen people have with BTRFS, I'd convert to XFS. I don't doubt that BTRFS will grow into a fine filesystem, but that will take more time; it just doesn't seem to be as mature as XFS. After reading all of the blogs, reviews, and so on, BTRFS looks amazing on paper (some people compare it to ZFS, just less mature and less feature-rich), but in practice people seem to be hitting some weird issues.
WeeboTech Posted December 7, 2014

Go XFS. I really want to use BTRFS myself, but with all the trouble people have reported over the past few months, it's not ready to store my important data.
razorslinky Posted December 7, 2014 (Author)

> Go XFS, I really want to use BTRFS myself, but with all the trouble people have reported over the past few months it's not ready to store my important data.

That's more than enough validation for me. This is going to be a long road of syncing, deleting, verifying, and so on, but I'm feeling hopeful that this will resolve all of my "mover / filesystem" issues.
WeeboTech Posted December 7, 2014

It would be a good time to create those md5 hashes of the whole filesystem too.
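For anyone following along, building that md5 manifest needs nothing beyond standard tools. A minimal sketch, using a throwaway directory to stand in for a real disk mount such as /mnt/disk1 (the file name and paths here are examples only, not part of anyone's actual setup):

```shell
# Throwaway directory standing in for a disk mount point like /mnt/disk1.
DISK="$(mktemp -d)"
echo "sample" > "$DISK/movie.mkv"
MANIFEST="$DISK.md5"

# Hash every file on the disk into a manifest file.
find "$DISK" -type f -print0 | xargs -0 md5sum > "$MANIFEST"

# Later (for example, after a filesystem migration) replay the manifest
# to confirm every file still matches its recorded hash.
md5sum -c "$MANIFEST"
```

In practice you would point the `find` at each /mnt/diskN in turn and keep the manifests somewhere off the array (the flash drive, for instance) so they survive a reformat.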
razorslinky Posted December 7, 2014 (Author)

> It would be a good time to create those md5 hashes of the whole filesystem too.

Oh, good call... I'll add that to my todo list. I also found out that when stopping the array, unRAID froze with a kernel panic on the "syncing filesystem" line, which means there's still some underlying issue with the ReiserFS filesystem on one of the remaining disks. 2 disks converted to XFS, 11 more to go.
razorslinky Posted December 13, 2014 (Author)

Finally migrated ALL 30TB (13 disks) from ReiserFS to XFS with NO write errors reported. I am running Mover right now, and if that's successful I will run the bitrot utility and see how it works. I will keep everyone updated!
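For readers wanting to repeat this, the per-disk conversion loop amounts to copying a ReiserFS disk's contents onto a spare XFS-formatted disk, re-checking the copy, then reformatting the emptied disk. A dry-run sketch of one pass; the mount points and device name are placeholders for illustration, the `DRY_RUN=echo` guard only prints the commands, and on unRAID the actual reformat is normally done through the webGui rather than by hand:

```shell
SRC=/mnt/disk3      # example: ReiserFS disk being emptied
DST=/mnt/disk14     # example: spare disk already formatted XFS
DEV=/dev/sdX1       # example placeholder for the partition behind SRC
DRY_RUN=echo        # prints each command instead of executing it

$DRY_RUN rsync -avPX "$SRC/" "$DST/"             # copy everything off the old disk
$DRY_RUN rsync -avPX --checksum "$SRC/" "$DST/"  # second pass re-reads and compares by checksum
$DRY_RUN mkfs.xfs -f "$DEV"                      # reformat the now-empty disk as XFS
```

Removing the `DRY_RUN` guard (and substituting real mount points and device names) would execute the copy for real, so the checksum pass and the md5 manifests discussed above are worth running before any reformat.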
razorslinky Posted December 13, 2014 (Author)

Mover moved 100GB worth of data without any issues, and bitrot is now running and hasn't hit any problems so far. If bitrot completes successfully, then my issue has been resolved and can be blamed on ReiserFS.
razorslinky Posted December 13, 2014 (Author)

So far, everything is looking amazing. I've been manually copying over 50GB of data to the cache drive and manually running the Mover script; no issues or lockups have occurred. The bitrot script is running against 15TB of data and has gone through 10% so far. I'm going to be very optimistic here and say that the issues have been resolved. I'll wait until the bitrot script is finished before changing my topic to [SOLVED].
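At its core, a bitrot sweep of this kind just replays a stored hash manifest and flags any file whose current hash no longer matches. A small self-contained sketch (throwaway files, not the actual bitrot script) showing how `md5sum -c` reports a file that silently changed underneath:

```shell
# Build a tiny manifest of two files, then corrupt one and re-check.
DIR="$(mktemp -d)"
printf 'good data' > "$DIR/keep.bin"
printf 'good data' > "$DIR/rot.bin"
( cd "$DIR" && md5sum keep.bin rot.bin > manifest.md5 )

printf 'bad data!' > "$DIR/rot.bin"   # simulate silent corruption

# md5sum -c prints OK/FAILED per file and exits non-zero on any mismatch.
( cd "$DIR" && md5sum -c manifest.md5 ) || echo "corruption detected"
```

On a real array the manifest would cover millions of files, which is why a full sweep of 15TB takes a while: every byte has to be re-read and re-hashed.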
sureguy Posted December 13, 2014

> So far... everything is looking amazing. [...] I'll wait until the bitrot script is finished to change my topic to [sOLVED].

Razorslinky, thanks so much for your commitment to sorting this out. I'd recommend changing the topic to [WORK-AROUND] if bitrot runs fine, though. The problem still exists, and since ReiserFS is the default filesystem, any new users will be plagued by it. Hopefully LimeTech can resolve the problem, or prompt the ReiserFS maintainer to resolve it; if not, this workaround should help anyone who encounters the problem we're having. Marking it as solved makes it seem like it's no longer an issue, which could result in people not reading the entirety of the post. Now I have to buy another 3TB drive, I guess.
itimpi Posted December 13, 2014

> I'd recommend changing the topic to [WORK-AROUND] if bitrot runs fine though. The problem still exists, and since ReiserFS is the default filesystem, any new users will be plagued by this problem.

This is no longer true. With v6, XFS is now the default format.

> Hopefully LimeTech can resolve the problem or prompt the ReiserFS maintainer to resolve it [...]

That is one of the problems: there is no proper maintainer for ReiserFS (the original developer is in prison for murder!). This is one of the reasons why ReiserFS is no longer seen as a viable format going forward.
sureguy Posted December 13, 2014

> This is no longer true. With v6 XFS is now the default format.

I didn't realize that XFS was the default format in v6, but this is still a workaround. Anyone who has been running unRAID (in any stable release) with the same setup as me or slinky will be bitten by this bug. I don't know what the current adoption rate for unRAID is, but I don't imagine 100% of new users default to the beta version. Since moving to version 6 doesn't convert your existing drives to XFS (and I understand it cannot), anyone upgrading from an old version still very much needs this workaround. Thanks for the info, itimpi; as always, it's appreciated!
razorslinky Posted December 16, 2014 (Author)

> I'd recommend changing the topic to [WORK-AROUND] if bitrot runs fine though.

I've updated the topic to [WORK-AROUND] for now. So far, I've been running the Mover script every day at 7am. I rebooted my server, so I had to stop the bitrot script, but it was at 25% when I stopped it. I'm feeling very confident that MY issue has been resolved by moving the filesystem to XFS. I would really like to thank EVERYONE for helping and troubleshooting my issue. I know this is not the place for a suggestion, but I think Limetech should get rid of ReiserFS completely in the future. I really don't trust that filesystem after reading about other people having the same issue.
razorslinky Posted December 22, 2014 (Author)

Just wanted to update everyone since moving from ReiserFS to XFS. It's been a few weeks and everything has been running smoothly. Here's my uptime, which is unbelievable for me:

09:17:36 up 8 days, 2:04, 2 users, load average: 4.27, 4.34, 3.91

I'm officially calling this and will mark this topic as [SOLVED-WORK-AROUND].
SmallwoodDR82 Posted December 25, 2014

Wanted to thank razorslinky and all the others for all the hard work on this. I've had this issue for a long time and I'm happy to see there is light at the end of the tunnel. FYI: I was getting this error with unRAID 5.0.5 and had never gone to 6.0 until yesterday. I was not getting it nearly as often as you, but I have started the process of formatting all drives to XFS. Thank you again! My post, for what it's worth: http://lime-technology.com/forum/index.php?topic=37311.0
RobJ Posted December 25, 2014

Linking this thread to the related Defect Report thread: Kernel crashes w potential reiserfs corruption: 6.0-beta12-x86_64
SmallwoodDR82 Posted December 26, 2014

Figured I would add to this. With 5.0.5 I was seeing this STALL CPU error maybe once a month. For some reason 6.0beta12 has given this issue the business, because I cannot get my system to stay up for more than 24 hours. I'm still in the process of moving all drives over to XFS.
WeeboTech Posted December 27, 2014

> With 5.0.5 I was seeing this STALL CPU error maybe once a month.

Was unRAID 5.0.5 failing when these messages popped up, or did it stay up and fully operational?
SmallwoodDR82 Posted December 27, 2014

> Was unRAID 5.0.5 failing when these messages popped up, or did it stay up and fully operational?

It would fail. My only option was a hard power-off through ESXi. I couldn't telnet, use SMB, or anything; my console within ESXi wouldn't even respond.
RobJ Posted December 27, 2014

It's time to look for commonalities among all of the users reporting this issue. Please report:

* motherboard
* RAM and RAM type (ECC or not?)
* CPU
* added disk controllers
* any virtualization features used: Dockers, ESXi, KVM, Xen, VMware, etc.
* plugins and addons
* all unRAID versions where you have experienced this

More may be added as common components are spotted.
SmallwoodDR82 Posted December 27, 2014

Happy to help. I have a thread in the unRAID-as-a-guest section with all the info and syslog; since this thread's title mentions mover, it might put some people off, and mine happens randomly. Never had it happen during mover, though. My thread: http://lime-technology.com/forum/index.php?topic=37311.0

Case: Norco 4224
Mb: Supermicro X9SCL-F-O
CPU: Xeon 3.4GHz E3-1240v2
RAM: Kingston 32GB ECC
Controller: Intel M1015 (IT Mode)
Expander Card: RES2SV240

ESXi 5.1 - 5.0.5 with Plex plugin only
ESXi 5.1 - 6.0beta12 with Plex docker only
ESXi 5.5 - 6.0beta12 with Plex docker only (currently running)

I saw the STALL CPU on all 3 setups. If I think of anything else I'll edit. Thanks!
SmallwoodDR82 Posted January 2, 2015

Still in the process of moving to an XFS array... had another CPU stall. Syslog attached: syslog.zip
jonp Posted January 2, 2015

> That is one of the problems - there is no proper maintainer for Reiserfs (the original developer is in prison for murder!).

Just an FYI: there is a maintainer for reiserfs. His name is Jeff Mahoney.
Tezmo Posted August 9, 2015

Has there been any movement on a fix for this issue in ReiserFS? I'm getting the very same problem on a bare metal install of 6.0.1, having upgraded from a perfectly working install of 5, and the prospect of changing the filesystem to XFS for 76TB over 22 drives does not sound great. Thank you all!
jonp Posted August 9, 2015

This issue was fixed. Please post a new issue under general support. Sounds like something else might be amiss here.
Tezmo Posted August 10, 2015

Good to know, Jon, thank you. I was sure this was it because the symptoms sound identical, but I will begin gathering log files the next time it happens!