razorslinky

Everything posted by razorslinky

  1. Updated both the .ui_info and ui.properties and still having issues connecting. Here are some logs from the CrashPlan docker: Thanks for this! Have you been able to find a direct download link for the Windows 64-bit version as well? Just tried to download the x64 version and it appears that they haven't posted it yet. https://download2.code42.com/installs/win/install/CrashPlan/jre/CrashPlan-x64_4.3.0_Win.exe https://download2.code42.com/installs/win/install/CrashPlan/jre/CrashPlan-x64_4.4.0_Win.exe
  2. Figured it out... I reviewed the logs in C:\ProgramData\CrashPlan\log\ui_Administrator.log.0 and updated .ui_info to look like this: port, guid, unRAID IP. And now the GUI connects to the CrashPlan headless docker.
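     For anyone hitting the same thing, a minimal sketch of what that edited .ui_info might look like (the GUID and IP below are placeholders, not values from this thread; 4243 is CrashPlan's usual service port, and the real GUID has to be copied from the server side's own .ui_info):
        # C:\ProgramData\CrashPlan\.ui_info -- a single comma-separated line: port,guid,ip
        4243,a1b2c3d4-e5f6-7890-abcd-1234567890ef,192.168.1.50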
  3. Just downloaded and installed the Windows client: https://download.code42.com/installs/win/install/CrashPlan/jre/CrashPlan_4.4.0_Win.exe The CrashPlan (gfjardim/crashplan:latest) version is showing as 4.4.1. Updated both the .ui_info and ui.properties and still having issues connecting. Here are some logs from the CrashPlan docker:
  4. Just wanted to update everyone since moving from ReiserFS to XFS. It's been a few weeks and everything has been running smoothly. Here's my uptime, which is unbelievable for me: 09:17:36 up 8 days, 2:04, 2 users, load average: 4.27, 4.34, 3.91. I'm officially calling this resolved and will mark this topic as [SOLVED-WORK-AROUND].
  5. Razorslinky, thanks so much for your commitment to sorting this out. I'd recommend changing the topic to [WORK-AROUND] if bitrot runs fine, though. The problem still exists, and since ReiserFS is the default filesystem, any new users will be plagued by it. Hopefully LimeTech can resolve the problem or prompt the ReiserFS maintainer to resolve it, but if not, this workaround should help anyone who encounters the problem we're having. Marking it as solved makes it seem like it's no longer an issue, which could result in people not reading the entirety of the post. Now I have to buy another 3TB drive, I guess.
     I've updated the topic to [WORK-AROUND] for now. So far, I've been running the Mover script every day at 7am. I rebooted my server so I had to stop the bitrot script, but it was at 25% when I stopped it. I'm feeling very confident that MY issue has been resolved by moving the filesystem to XFS. I would really like to thank EVERYONE for helping and troubleshooting my issue. I know this is not the place for a suggestion, but I think that in the future LimeTech should get rid of ReiserFS completely. I really don't trust that filesystem after reading about other people having the same issue.
  6. So far... everything is looking amazing. I've been manually copying over 50GB of data to the cache drive and manually running the Mover script. No issues or lockups have occurred. The bitrot script is running against 15TB of data and so far it's gone through 10%. I'm going to be very optimistic here and say that the issues have been resolved. I'll wait until the bitrot script is finished to change my topic to [SOLVED].
  7. Mover moved 100GB worth of data without any issues and now bitrot is running and hasn't had any issues so far. If bitrot successfully completes then my issue has been resolved and can be blamed on ReiserFS.
  8. Finally migrated ALL 30TB (13 disks) from ReiserFS to XFS with NO write errors reported. I am running Mover right now and if that's successful I will attempt to run the bitrot utility and see how it works. I will keep everyone updated!
  9. This is exactly what I was going to say. Did you ever run Unraid 6.0 beta6 and/or beta7? Those two versions had the ReiserFS bug (in the kernel, not from unRAID) which toasted the filesystem. I've had data corruption and kernel lockups all the time while running Mover or even writing data to /mnt/user. I'm slowly moving all of my data from ReiserFS to XFS and feel pretty confident that it's a ReiserFS issue, since I've never had a single issue copying data to any newly formatted XFS drive. So far I've copied about 15TB from ReiserFS to XFS using the following command under screen (so I can close the PuTTY session): "rsync -arv --stats --remove-source-files --progress /mnt/disk5/ /mnt/disk7/" (some of the arguments might be redundant, but I've had really good success this way). Here's my post with a ton of information determining that it's a ReiserFS issue: http://lime-technology.com/forum/index.php?topic=35788.0
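     For reference, roughly how that rsync-under-screen run looks end to end (the disk numbers are just the ones from the command above; --remove-source-files deletes a file from the ReiserFS source only after it has transferred cleanly):
        screen -S migrate        # detachable session, so the PuTTY window can be closed
        rsync -arv --stats --remove-source-files --progress /mnt/disk5/ /mnt/disk7/
        # Ctrl-a d detaches; "screen -r migrate" reattaches later to check progress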
  10. Oh, good call... I'll add that to my todo list. Also found out that when stopping the array, unRAID froze and had a kernel panic on the "syncing filesystem" line, which means there's still some underlying issue with a ReiserFS filesystem on one of the disks. 2 disks converted to XFS, 11 more to go.
  11. That's more than enough validation for me. This is going to be a long road of syncing, deleting, verifying, etc., but I'm feeling hopeful that this will resolve all of my "mover / filesystem" issues.
  12. After reading all of the blogs, reviews, etc., it seems like BTRFS is amazing on paper (some people compare it to ZFS, just not as mature or feature-rich), but when it comes down to it, people seem to be running into some weird issues.
  13. Go XFS for now. That's the plan. Just wanted to see if I was missing anything with BTRFS.
  14. Alright... well, here's my progress so far: I moved everything off disk3 and onto another computer and formatted the disk to XFS. After formatting it XFS, I ran the scrub command for ~30 minutes just to see if it would crash, and it actually kept going. When running scrub with ReiserFS, Unraid would crash within 2-3 minutes. So I rsynced another drive to disk3, and it's written 1.45TB (2,801,338 writes) so far and is going strong, averaging between 50-90 MB/s. I think all of my crashing has to do with the ReiserFS corruption issue, so I'm now in the process of converting every drive I have to XFS. I'm very tempted to do BTRFS instead, but I'm a little hesitant since it's still considered "experimental." I love living on the bleeding edge, but I don't think I want to deal with any BTRFS issues anytime soon. Anyone here want to chime in about XFS vs BTRFS?
  15. Not sure if you want to test this but what happens if you run bitrot against the actual disk... /mnt/disk1/$SHARE? This will bypass the shfs mounting method and write straight to the disk itself.
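     Something like this, using the same flags that show up later in the thread (the share name is only an example):
        # hash files on the disk mount directly, skipping the /mnt/user (shfs) layer
        bitrot.sh -a -p /mnt/disk1/TV -m '*.mkv'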
  16. In the same array? I would not rsync it to another ReiserFS disk. I would acquire a new empty disk, format it as XFS, and rsync from ReiserFS to XFS, not back and forth. Who knows how deep the corruption lies.
     Oh yes! I used beta7/beta8, probably downloaded 300GB+ a week, and of course used Mover on a daily basis. The drives that I am using are outside the array, on an actual physical computer not attached to Unraid. I'm not going to risk anything by syncing inside the Unraid array for now.
  17. I suppose you could try doing the reiserfsck on the drive in question, but no one really knows how bad the corruption is. Before I did anything, I would try to copy/rsync at least 1 disk to a newer XFS disk. At that point you can migrate the other drives to XFS, or attempt to fix the disk giving you issues with reiserfsck. That's a learning experience in itself. It's like, do I **** or get off the potty? Given the maintenance and the future of ReiserFS, this may be the impetus for you to get off ReiserFS completely. Frankly, we don't know where the corruption came from: beta7, beta8, or other ESX-related crashes causing issues on the filesystem in question. I.e., did this problem start with ReiserFS corruption, or did ESX cause unRAID to fail abnormally, thus causing the ReiserFS corruption? I've seen any form of power failure cause corruption on filesystems. Usually they fix what they can during an fsck. The only way to 'safely' determine this is to copy what you can to another drive (do not write anything if you do not have to), go bare metal or go unRAID 5, and test the questionable filesystem.
     In dealing with all of the recent issues with ReiserFS, I've been wanting to move to XFS, and this is my push to do so. I've been baffled by the choice of ReiserFS given that it's no longer "officially" supported. No amount of people can really replace the dude (and company) who created it and knows it best. In regards to the corruption, I never had any major issues until after beta6. This I know for a fact, because I never had a single lockup or freeze until after beta6. My ESXi box has an APC attached and has only had a power failure two or three times in the past 5-6 months, due to the power going out in my apartment for 3+ hours and my damn script not shutting down Unraid properly. I have 3 other workstations on my ESXi box (RDP, media server, and an Arch Linux box) that have never had any corruption issues. ESXi has been rock solid and I've pretty much set it up just like I've done in the enterprise world (minus SAN/iSCSI... that's coming much later in 2015). I will be rsyncing disk3 over to another drive, formatting it to XFS, and rsyncing it back. I will then do another scrub test just to verify that it's working as it should. I'm happy with the results from this forum and thank everyone for reading/following along. I also don't want to come across like I'm blaming Unraid for the data corruption, or suggest that nothing was done to resolve it, seeing as it's a widespread kernel issue that was fixed months ago: https://bugzilla.kernel.org/show_bug.cgi?id=83121
  18. Well, not-so-funny story... My parity drive died and I haven't had the money or time (newborn) to replace it, so I'm running on hope that nothing else dies or gets corrupted while I get a replacement. I don't have any md5sums for each drive to verify integrity, and I haven't tested this under Unraid 5 or Unraid 6 beta6. Every time I start working on rebooting Unraid to bare metal, something comes up and I don't have the time to do it... but I am hoping to do that tonight. I'm also going to try running the scrub software under Arch Linux and see what happens. If I find out which drives are corrupted by running the scrub command, should I try to convert those over to XFS? Or is there another way to fix the silent corruption issue? Also, I REALLY appreciate all of the help that you are providing. Thank you!
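     A rough sketch of one way to build the per-disk md5 lists mentioned above before migrating anything (paths are illustrative; it will take a long time on full disks, so run it inside screen):
        mkdir -p /boot/md5
        # record an md5 for every file on the source disk
        cd /mnt/disk3 && find . -type f -print0 | xargs -0 md5sum > /boot/md5/disk3.md5
        # after copying to the new XFS disk, verify the copies against that list
        cd /mnt/disk7 && md5sum -c /boot/md5/disk3.md5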
  19. Right after I ran "scrub -X /mnt/disk3/tmp -p verify" I received this error message. Unraid locked up and I couldn't do anything, which forced me to hard reboot. Status: Disk1: scrub passed, no issues. Disk2: currently running. Disk3: scrub froze, with ReiserFS errors in the syslog.
  20. Just compiled scrub from https://github.com/chaos/scrub and I have attached it here. Running "scrub -X /mnt/disk(1-13)/tmp -p verify" and will post the results when they are finished. I'm getting a little tired of the troubleshooting and freezing. Next step is purchasing a 4TB drive and converting everything over to XFS and I REALLY don't want to do that. scrub_x86_64.zip
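     Roughly what that build-and-run looks like, assuming the stock autotools build in the chaos/scrub repo (the loop just repeats the same command for each data disk):
        git clone https://github.com/chaos/scrub.git
        cd scrub
        ./autogen.sh && ./configure && make      # binary lands in src/scrub
        # fill a temp directory on each disk with the "verify" pattern (write, then read back)
        for d in /mnt/disk{1..13}; do
            ./src/scrub -X "$d/tmp" -p verify
        done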
  21. Just in case someone searching finds this thread with the same type of issue, I wanted to let you know that there are a few of us experiencing it. Here's a thread that I started: http://lime-technology.com/forum/index.php?topic=35788.45
  22. That's actually my next troubleshooting step tonight. Well, that sounds very interesting... Do you happen to have the command lines? I'll try this right after I try booting bare-metal unRAID tonight.
  23. Running ESXi 5.5.0, 2068190. Intel S2400C motherboard, 2x Intel Xeon E5-2407 2.20GHz, 32GB RAM. I have the unRAID server set up as a VM with 8GB allocated, 2 CPUs, and 2x SAS cards in passthrough mode (IBM ServeRAID M1015).
     We have a few things in common: ESXi 5.5.0 2068190 and 2 CPUs, but I only have 4GB of RAM and 1 M1015 flashed to LSI2008 IT firmware. Can you test something for me: would it be possible for you to move 50-100GB+ between /mnt/disks using MC via SSH/Telnet? I want to see if bypassing shfs works for you. If it doesn't lock up and works just fine, we might be onto something.
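     For example, straight from the shell instead of MC (disk numbers and share name are placeholders; the point is that both paths are /mnt/diskN, so shfs never gets touched):
        # disk-to-disk copy that bypasses the /mnt/user (shfs) mount entirely
        rsync -av --progress /mnt/disk2/Movies/ /mnt/disk5/Movies/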
  24. Just went through the changelogs and noted only the changes to "shfs". Since the past few stalls have been pointing towards shfs, and it only freezes when the Mover (which uses /mnt/user and /mnt/user0) or the bitrot script (when using /mnt/user/TV) is running, it has to be something with shfs. When I manually copy hundreds of GB of data via MC or run the bitrot script against /mnt/disk(1-13) it doesn't have any issues; hell, I can even watch shows via XBMC, copy data, and run bitrot without issues. Please correct me if I'm wrong, as I'm trying not to jump to conclusions, but I believe I've narrowed it down to shfs... /etc/mtab: Latest crash: The only change that sticks out is from beta6 to beta7, since beta7 is where we started having the issues.
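     A quick way to see the shfs mounts in question on a running box:
        grep shfs /etc/mtab      # lists the fuse mounts behind /mnt/user and /mnt/user0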
  25. Decided to boot into Arch and start Unraid-6.0b12 under Xen 4.4.1 and try the following: ran the Mover script, which moved 10GB of data; ran the bitrot script "bitrot.sh -a -p /mnt/disk4/TV/ -m *.mkv"; and updated a docker image which lives on the cache drive. So far so good. I know that when I ran "bitrot.sh -a -p /mnt/user/TV" it gave me a kernel panic the minute it started putting the SHA256 into the file itself. I am monitoring the syslog both via the browser and "tail -f /var/log/syslog", and so far so good. This is the longest Unraid and the bitrot script have ever lasted. The bitrot script was able to process 94GB successfully under /mnt/disk4/TV and another 50GB under /mnt/disk12/Anime. Here comes the real test: "bitrot.sh -a -p /mnt/user/TV/ -m *.mkv". Let's see if Unraid has a (CPU) panic with this.
     So right when the bitrot script started accessing /mnt/user/TV, while I was watching XBMC at the same time, it decided to panic:
     Nov 30 22:45:24 Tower kernel: INFO: rcu_sched self-detected stall on CPU { 1} (t=6000 jiffies g=107510 c=107509 q=91452)
     Nov 30 22:45:24 Tower kernel: Task dump for CPU 1:
     Nov 30 22:45:24 Tower kernel: shfs R running task 0 14976 1 0x00000008
     Nov 30 22:45:24 Tower kernel: 0000000000000000 ffff88012ae83c88 ffffffff8105cc09 0000000000000001
     Nov 30 22:45:24 Tower kernel: 0000000000000001 ffff88012ae83ca0 ffffffff8105f2c4 ffffffff81822d00
     Nov 30 22:45:24 Tower kernel: ffff88012ae83cd0 ffffffff810766a5 ffffffff81822d00 ffff88012ae8e0c0
     Nov 30 22:45:24 Tower kernel: Call Trace:
     Nov 30 22:45:24 Tower kernel: [] sched_show_task+0xbe/0xc3
     Nov 30 22:45:24 Tower kernel: [] dump_cpu_task+0x34/0x38
     Nov 30 22:45:24 Tower kernel: [] rcu_dump_cpu_stacks+0x6a/0x8c
     Nov 30 22:45:24 Tower kernel: [] rcu_check_callbacks+0x1e1/0x4ff
     Nov 30 22:45:24 Tower kernel: [] ? tick_sched_handle+0x34/0x34
     Nov 30 22:45:24 Tower kernel: [] update_process_times+0x38/0x60
     Nov 30 22:45:24 Tower kernel: [] tick_sched_handle+0x32/0x34
     Nov 30 22:45:24 Tower kernel: [] tick_sched_timer+0x35/0x53
     Nov 30 22:45:24 Tower kernel: [] __run_hrtimer.isra.29+0x57/0xb0
     Nov 30 22:45:24 Tower kernel: [] hrtimer_interrupt+0xd9/0x1c0
     Nov 30 22:45:24 Tower kernel: [] xen_timer_interrupt+0x2b/0x108
     Nov 30 22:45:24 Tower kernel: [] handle_irq_event_percpu+0x26/0xec
     Nov 30 22:45:24 Tower kernel: [] handle_percpu_irq+0x39/0x4d
     Nov 30 22:45:24 Tower kernel: [] generic_handle_irq+0x19/0x25
     Nov 30 22:45:24 Tower kernel: [] evtchn_fifo_handle_events+0x12d/0x156
     Nov 30 22:45:24 Tower kernel: [] __xen_evtchn_do_upcall+0x48/0x75
     Nov 30 22:45:24 Tower kernel: [] xen_evtchn_do_upcall+0x2e/0x3f
     Nov 30 22:45:24 Tower kernel: [] xen_do_hypervisor_callback+0x1e/0x30
     Nov 30 22:45:24 Tower kernel: [] ? __discard_prealloc+0x71/0xb1
     Nov 30 22:45:24 Tower kernel: [] ? reiserfs_discard_all_prealloc+0x43/0x4c
     Nov 30 22:45:24 Tower kernel: [] ? do_journal_end+0x4e1/0xc57
     Nov 30 22:45:24 Tower kernel: [] ? journal_end+0xad/0xb4
     Nov 30 22:45:24 Tower kernel: [] ? reiserfs_xattr_set+0xd2/0x114
     Nov 30 22:45:24 Tower kernel: [] ? user_set+0x3f/0x4d
     Nov 30 22:45:24 Tower kernel: [] ? reiserfs_setxattr+0x9b/0xa9
     Nov 30 22:45:24 Tower kernel: [] ? __vfs_setxattr_noperm+0x69/0xd5
     Nov 30 22:45:24 Tower kernel: [] ? vfs_setxattr+0x7c/0x99
     Nov 30 22:45:24 Tower kernel: [] ? setxattr+0x118/0x162
     Nov 30 22:45:24 Tower kernel: [] ? final_putname+0x2f/0x32
     Nov 30 22:45:24 Tower kernel: [] ? user_path_at_empty+0x60/0x87
     Nov 30 22:45:24 Tower kernel: [] ? __sb_start_write+0x9a/0xce
     Nov 30 22:45:24 Tower kernel: [] ? __mnt_want_write+0x43/0x4a
     Nov 30 22:45:24 Tower kernel: [] ? SyS_lsetxattr+0x66/0xa8
     Nov 30 22:45:24 Tower kernel: [] ? system_call_fastpath+0x16/0x1b
     If I browse \\tower I see all of the unRAID shares, but I am unable to browse to \\tower\TV as it freezes Explorer. Browsing via SSH has the same issue: I can see /mnt/user/*, but going to /mnt/user/TV freezes PuTTY. XBMC is unable to find the files to play. I notice that it "stalled" on shfs. Any way to debug or troubleshoot whether it's shfs causing this issue? I'm guessing that's why it didn't freeze when moving files between /mnt/disk(1-13), since that bypasses shfs.
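     One way to get more detail the next time it stalls, using standard kernel facilities rather than anything unRAID-specific (sysrq has to be enabled, and /proc/<pid>/stack needs a kernel built with stack tracing, which most are):
        # dump the kernel stacks of all blocked (D-state) tasks into the syslog
        echo 1 > /proc/sys/kernel/sysrq
        echo w > /proc/sysrq-trigger
        # show where each shfs process is sitting in the kernel
        for pid in $(pidof shfs); do
            echo "== shfs pid $pid =="
            cat /proc/$pid/stack
        done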