wishie

Members
  • Posts: 108

Everything posted by wishie

  1. That's better.. 140MB/sec roughly.. Thanks!
  2. I'm running the script from that post, following the "Shrink Array" wiki page.. but the clear script is reporting that it's clearing at 1.2MB/s... surely it shouldn't be that slow?
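     A crawl like that usually means the zeroing loop is writing with dd's tiny default block size. As a rough illustration only (this is not the wiki script itself, and /dev/md3 is just a placeholder for the md device of the disk being cleared), a zeroing pass with an explicit 1 MiB block size looks like:

       # Illustration only -- double-check the target device before running anything like this.
       dd if=/dev/zero of=/dev/md3 bs=1M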
  3. I've swapped out the cables and will see if it happens again.. if it does, I'll grab some diagnostic logs
  4. Recently, I got an alert that I had a failed disk, and that it was disabled. Since the drive was fairly new (under 6 months old) I thought this was odd, and did an extended SMART test, which showed no errors. I removed the drive, put it in another PC and checked, and it was fine there also. Anyway, to be cautious, I replaced the drive with a brand new 3TB NAS drive and rebuilt it from parity. Within a few days, this drive is now showing that it's failed, and was disabled. Both disks pass all tests without issue, so I'm now suspecting a cabling issue. I use an LSI SAS controller for all of my drives, and wonder if anyone else has experienced any cable issues with the SFF-8087 to 4 x SATA cables? Or do you think it could be something else? A bad power connector maybe?
  5. So, a bit of an update.. The 'parity swap' to the larger parity drive, and the subsequent data drive rebuild, worked perfectly. No errors at all, everything back working 100%. To be honest, when I had a failed data drive and no suitable replacement drives (all my replacements were larger), I assumed I was looking at data loss. This 'parity swap' feature (and unRAID in general) has saved my bacon. Now, my next steps are going to be replacing all my reiserfs drives with XFS-formatted ones.. there is a way to do this without a parity rebuild each time, right?
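     For the reiserfs-to-XFS migration, the usual approach is to copy each data disk's contents onto an empty XFS-formatted disk inside the array and rotate through the disks, which keeps parity valid and avoids a rebuild per disk. A minimal sketch, assuming /mnt/disk1 is the reiserfs source and /mnt/disk2 is a freshly XFS-formatted, empty target (both placeholders):

       # Copy disk-to-disk, preserving attributes; verify before wiping the source.
       rsync -avPX /mnt/disk1/ /mnt/disk2/
       # Once verified, the emptied source disk can be reformatted as XFS and
       # becomes the target for the next disk in the chain.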
  6. That's what I am doing.. the 'parity swap' to a larger drive, and obviously using the old parity drive to replace the failed one.
  7. OK, I've found a tutorial on how to upgrade the parity, so I'm going to do that first.. then I will 'shrink' the array by removing disk 7, put it in the slot of the failed drive, and rebuild it from parity. After that, I will install another new 3TB drive to replace one of the oldest disks (yet still working perfectly, a 1TB SAMSUNG drive). It seems the read errors on disk 3 were to do with it being an older reiserfs-formatted disk, and it being very full. Removing some data from it seems to have made the errors go away, and there are no SMART errors for that drive.
  8. So, I'm in a bit of a tight spot.. I've got an array of 7 disks, but with one (disk 1) in a failed state, and another (disk 3) starting to show read errors.. Now, I want to replace the failed disk and rebuild parity, but the issue is, my new replacement disks are larger than my current parity disk. I am at a loss as to which order to attack things. My parity drive (and therefore also my largest data drive) is 2TB. I do have a 'spare' 2TB drive, with no data on it anymore, BUT it's part of the array (disk 7). Is there a way to remove disk 7 from the array, reassign it to slot 1, rebuild parity, then replace the parity disk, then replace the disk with read errors (disk 3)? Or, what do you think my best course of action is in this situation?
  9. Hmm, so the "Minimum Free Space" setting for a user share, such as 'Movies', takes into account the space on the cache drive? The help on the WebUI says: I always assumed this was the minimum free space on any DATA drive belonging to the share, NOT including the cache drive. That would make sense to me, because while you don't want your DATA disks to sit at full capacity for a long period of time, running the cache disk 'close to the metal' for a few hours would be considered perfectly fine. So if my assumptions above are correct, I propose that the "Minimum Free Space" of the share should apply only to DATA disks, not the CACHE. Thoughts?
  10. I went to "General Share Settings" -> "Cache Drive" and saw a 20GB limit.. I lowered that to 0KB, but still have the issue, even after a reboot. My config files in the original post should reflect that. That said, this morning, after the mover had moved 78GB of stuff, I can successfully use the cache drive.. I wonder if there is a setting somewhere that isn't being updated properly? I had that yesterday with a plugin too.. a 'corrupt' config file that wouldn't update, but appeared to work fine from the WebUI.
  11. I went to "General Share Settings" -> "Cache Drive" and saw a 20GB limit.. I lowered that to 0KB, but still have the issue, even after a reboot. My config files in the original post should reflect that.
  12. Here is the full diagnostics zip from the WebUI "Tools->Diagnostics" section: wishie-diagnostics-20151103-2327.zip
  13. I recently swapped out an old 750GB WD Green hard disk for a 120GB Kingston SSD as my cache drive. After moving all my Docker app data etc. over to the new cache drive, I booted up, assigned the new disk, and assumed everything was OK. I will try to explain this as best I can.
      1) If I try to create a new folder on my 'Movies' share, I get an error in the syslog ("shfs/user: share cache full") and the new folder is created directly in the array, NOT on the cache drive.
      2) If I SSH into the server and try to create a new folder in /mnt/user/Movies/ I see the same error, and the folder is instead created in the array, on one of the data drives.
      3) If I SSH into the server and manually create /mnt/cache/Movies/newfolder/ it shows up as expected in the 'Movies' share.
      In short, it seems that 'shfs' thinks the cache disk is full and refuses to write to it. All other methods of writing to the disk are fine, however. Attached are the output of 'df' and 'ls -la' on the /mnt dir, and my complete syslog: df.txt ls.txt syslog.tar.gz
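     A quick way to see whether this is the "Minimum Free Space" floor at work is to compare the real free space on the cache device with the floors unRAID enforces. A hedged sketch, using the standard unRAID 6 config locations (worth double-checking on your version) and the 'Movies' share from the post as the example:

       # Real free space on the cache device
       df -h /mnt/cache
       # Global cache floor from General Share Settings
       grep -i floor /boot/config/share.cfg
       # Per-share minimum free space
       grep -i floor /boot/config/shares/Movies.cfg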
  14. Even though it's not used any more (by most people), I thought I should point out a few possible issues with it, and a solution. I don't agree with setting a hard-coded 'sleep 30', as that may not be enough time, and some people don't even have the array set to start on boot. Also, matching the devices using /dev/md* could cause issues if there is ever any other similarly named device, like /dev/mdblah or something. Below is the code I have used to get around these issues: That code should be run from the go script, and it will sit there sleeping until the block device /dev/md1 exists. At that point, it will run the blockdev command on all device nodes matching /dev/mdNN, where NN is a number from 0 to 99 inclusive.
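     The snippet itself isn't quoted above, so the following is only a sketch along the lines of that description; in particular the blockdev options aren't stated in the post, and --setra 2048 below is just an assumed example of the kind of call made here:

       # Wait for the array's first md device instead of a fixed 'sleep 30'.
       while [ ! -b /dev/md1 ]; do
         sleep 5
       done
       # Only touch /dev/md0 .. /dev/md99, so names like /dev/mdblah are ignored.
       for dev in /dev/md[0-9] /dev/md[0-9][0-9]; do
         [ -b "$dev" ] && blockdev --setra 2048 "$dev"
       done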
  15. 32 minutes to go on this parity sync! But yeah, I'll grab some more RAM and the M1015.. that should sort out a few issues. Thanks
  16. The only plugin is "Dynamix webGui", which is a default one, isn't it? I do have Docker and 2 containers, but I disabled that to save memory. So far after the preclear, the Parity-Sync has been running for 2hrs without crashing.. this is the longest time yet.. I hope it keeps it up. I've decided to replace this mess of onboard and PCI controllers with an M1015 (LSI 9211) controller too.. that's unrelated, but yeah
  17. Hi guys, I've been using unRAID for quite a few years, and have suffered drive failures before and replaced them without issue. Recently though, my parity drive reported as 0B in size and then stopped working altogether. I removed the drive, put in the new replacement, and started a Parity-Sync... I came back a bit later to check on the progress and found the computer had basically locked up. There is no monitor or keyboard attached, but I couldn't access the Web UI anymore, or SSH in. I connected a monitor and keyboard to the machine and tried again, this time running 'top' to watch things like CPU/RAM usage. Once again, the machine locked up (around 48 minutes into the Parity-Sync). Now, this machine has 2GB of RAM, but 512MB of that is assigned to the GPU and there is no way to change that in the UEFI/BIOS, sadly. So unRAID has 1.5GB of RAM available. To try and rule out out-of-memory issues (because we all know Linux does some comical things when it runs out of memory), I created a 4GB swap file on my cache drive. I booted the machine, clicked "Start Array" to mount all the disks, SSHed in and did 'swapon /mnt/cache/4Gb.swapfile', and ran top to confirm it's there. There is now swap, but unRAID is using NONE of it. Sometimes I've seen it use 1MB or so, but barely any, while the RAM is very, very full. I checked the 'swappiness', which is at the default of 60 like most other distros. So, basically, I have 2 questions:
      1) Why isn't the swap being used?
      2) Do you think I'm on the right track with an out-of-memory issue causing the Parity-Sync to fail?
      Note, I have just spent almost 21hrs doing a 'preclear' of this drive, and it has 0 errors. Should I perhaps boot the machine up again, run 'top' on one tty, and 'tail -f /var/log/syslog' on another tty?
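     For reference, a sketch of the swap-file setup being described; the path /mnt/cache/4Gb.swapfile comes from the post, but the rest is the standard procedure rather than the exact commands used:

       # Create a 4GB file, restrict permissions, then format and enable it as swap.
       dd if=/dev/zero of=/mnt/cache/4Gb.swapfile bs=1M count=4096
       chmod 600 /mnt/cache/4Gb.swapfile
       mkswap /mnt/cache/4Gb.swapfile
       swapon /mnt/cache/4Gb.swapfile
       # Confirm the kernel sees it, and check how eager it is to use it.
       swapon -s
       cat /proc/sys/vm/swappiness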
  18. I recently updated from 5.x to unRAID 6.0beta12, and found that my NFS shares were only exporting as read-only, even though I was specifying read/write. After spending a few hours trying various things, I went through the changelogs for unRAID and noticed that 'sec=sys' was being set as a new default for beta4... after doing some research, this seemed like a possible solution, but I assumed that beta12 still used this default, as there was no mention of it changing in the changelogs. Eventually I tried it anyway, modifying my rule to include 'sec=sys', and suddenly all my NFS shares are working read/write again. Can we please have this back as the default, or if not, let us know what the default is? (I'm assuming sec=krb5 like Debian does, although I think that's a stupid default.) Thanks.
  19. Try the following as the rule: *(sec=sys,rw,all_squash,insecure)
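     For reference, that rule in standard /etc/exports syntax would look something like the line below; the share path /mnt/user/Movies is only a placeholder, since unRAID builds the exports file from the per-share rule field:

       /mnt/user/Movies  *(sec=sys,rw,all_squash,insecure)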
  20. A clean install of 5.0 does NOT include the 'Enhanced UI', but it gives you the option to install it via Settings. I don't think the enhanced UI causes any issues by itself.
  21. It's not a VM, it's an Asus E35M1-M (AMD E-350) with 4GB RAM. Attached is the syslog, and the output from dmesg. Note the errors at the end of the dmesg output. syslog.txt dmesg.txt
  22. I know one of the controllers freaks out and complains about an IRQ issue.. This does slow down any drives connected to that controller, but not THAT much.. I do have an IBM M1015 on the way to rectify this problem. Perhaps I should wait until it arrives before taking more drastic action.
  23. Yeah, I confirmed that the settings were OK before.. but just to be sure, I checked again. All shares are configured to use the cache disk. I'm not too sure what is going on. I've also had some other strange behaviour lately, and trying to update to 5.0 FINAL didn't work well either.. unRAID said it was on rc15 or something when I updated the files and booted it.. Is there a way I can back up my array/shares config and start from scratch?
  24. If I create a file (say, 1024MB) and use scp to copy it to /mnt/user/Share/ it goes VERY slowly (down to under 1MB/sec over a 1Gbit link). If I scp the same file to /mnt/cache/Share/ I get ~40MB/sec over the same link.. Now, am I wrong to be copying to /mnt/user/*? I thought this was meant to be the view of the array including the cache, so all data copied here should be added to the cache drive... I also get very slow copies using the GUI (in OSX), so it's not just scp having this issue. I have not, however, copied directly to the cache drive from the GUI.. perhaps I can try that, too. What am I doing wrong? Where should I be copying new data (so it ends up on the cache drive)?
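     One way to separate the network from the filesystem layer is to write the same amount of data locally to the cache disk and to the user-share path, and compare the speeds dd reports. A rough sketch, with 'Share' as the placeholder share name from the post:

       # Write 1GB to the cache disk directly, then through the user share (shfs).
       dd if=/dev/zero of=/mnt/cache/Share/test_cache bs=1M count=1024 conv=fsync
       dd if=/dev/zero of=/mnt/user/Share/test_user bs=1M count=1024 conv=fsync
       rm /mnt/cache/Share/test_cache /mnt/user/Share/test_user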
  25. So for a few months now, I've been having speed issues while writing to the array. Moving from SMB to NFS made things a little better (so it seemed), but now it's just as bad. I decided to do a few tests over the network with 'dd'. I found that I could write to a share of mine which DOESN'T use the cache drive faster than to a share that DOES use the cache drive. I thought this to be rather odd. I ssh'd into my server and tried hdparm -Tt on a drive from my array (sda), and then my cache drive (sdh). The results are below:

      hdparm -Tt /dev/sda
      /dev/sda:
       Timing cached reads:   1618 MB in 2.00 seconds = 809.34 MB/sec
       Timing buffered disk reads:  44 MB in 3.03 seconds = 14.54 MB/sec

      hdparm -Tt /dev/sdh
      /dev/sdh:
       Timing cached reads:   2 MB in 2.01 seconds = 1019.94 kB/sec
       Timing buffered disk reads:  4 MB in 4.12 seconds = 994.20 kB/sec

      Does this indicate what I think it does? That my cache drive has something seriously wrong with it? Any clues would be handy.. Thanks, wishie