Everything posted by sdamaged

  1. OK, so it's either a new disk with CRC errors, or this is the disk that was moved to the port the previously erroring drive was using. I see your Supermicro X8DTL is using an older BIOS (2.0); it looks like 2.1b is available now, so it might be worth upgrading to that as well.
  2. OK, do you have a different SATA port, or a different drive to try? I assume you're using SATA from the motherboard? Also, maybe try swapping the cable/port from a good drive to the suspect one and see if the issue still happens (process of elimination!).
  3. Could you try a different SATA cable on that drive? If it's the same cable you've been using, then that's the only constant (other than the drive...!)
  4. Hmm, I think maybe moving the server has knocked the cable or the drive. Seeing some UDMA CRC errors on the drive - SAMSUNG_HD204UI_S2H7J90B713375-20200226-0751 disk4 (sde) ata4: SError: { UnrecovData HostInt 10B8B BadCRC } I would turn it off, open it up and check all cables and connections. The drive is also 3.8 years old, so possibly on its way out - just something to think about.
  5. Lol, the single diagnostics zip would have sufficed... it contains all of the above. Did you say you were copying files when this happened? I've raised a bug report for the same thing: the system locks up and freezes, and dockers stop working, when copying files to the server cache. Not sure if related... EDIT - Lots of CRC errors on ata4: SError: { UnrecovData 10B8B BadCRC } Possible bad drive or cable.
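To spot the kind of CRC errors mentioned in the two posts above, you can check the drive's SMART attribute 199 (`UDMA_CRC_Error_Count`) with `smartctl -A /dev/sde` from smartmontools. A minimal sketch of parsing that output (the sample table below is illustrative, not from this thread):

```python
# Minimal sketch: pull the UDMA CRC error count out of `smartctl -A` output.
# On a real system you would feed in the output of: smartctl -A /dev/sde
def udma_crc_count(smartctl_output: str) -> int:
    """Return the raw UDMA_CRC_Error_Count value, or 0 if not reported."""
    for line in smartctl_output.splitlines():
        fields = line.split()
        # SMART attribute table rows look like:
        # 199 UDMA_CRC_Error_Count 0x003e 200 200 000 Old_age Always - 7
        if len(fields) >= 10 and fields[1] == "UDMA_CRC_Error_Count":
            return int(fields[9])
    return 0

sample = """\
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
199 UDMA_CRC_Error_Count    0x003e   200   200   000    Old_age   Always       -       7
"""
print(udma_crc_count(sample))  # → 7
```

A non-zero value that keeps climbing usually points at the cable or port rather than the platters, which is why swapping cables is the first thing to try.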
  6. If copying a large file to a server share (Windows 10 1909, SMB protocol) which is using cache, the entire CPU will be maxed out on every thread and the server locks up. DNS will then fail as the Pi-Hole docker will bomb out (all other dockers will fail too), and it only recovers if you let the copy complete or cancel the file transfer. Disabling cache prevents this issue. I have read that other people are seeing the same issue here. Screenshot of glances running and unRAID CPU usage whilst a movie is copying over the network (gigabit LAN). Marked as urgent, as dockers all crashing out means Plex stops working (angry wife and kids...!) and internet stops working across the entire network as Pi-Hole fails (no DNS). I could disable cache, but then why have it? Samsung 850 Pro 2TB SSD for cache, with appdata, a running Windows Server 2016 VM and the docker image. EDIT - Just updating after finding out more information regarding this issue. This seems to be due to using a Samsung SSD with a BTRFS file system; if you switch the file system to XFS it resolves the problem. Obviously if you want to use a cache pool with Samsung SSDs you're currently SOL until this is fixed. goku-diagnostics-20200225-1917.zip
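If you want to confirm which filesystem your cache pool is currently on before deciding whether to reformat, the mount table tells you. A small sketch reading `/proc/mounts`-style lines, assuming the Unraid cache mount point of `/mnt/cache` (the sample line is illustrative):

```python
from typing import Optional

# Sketch: report which filesystem a mount point is using, from /proc/mounts.
# On Unraid the cache pool is mounted at /mnt/cache.
def cache_fs_type(mounts_text: str, mountpoint: str = "/mnt/cache") -> Optional[str]:
    for line in mounts_text.splitlines():
        fields = line.split()
        # /proc/mounts format: device mountpoint fstype options dump pass
        if len(fields) >= 3 and fields[1] == mountpoint:
            return fields[2]
    return None

# Illustrative line; on a real box: open("/proc/mounts").read()
sample = "/dev/sdb1 /mnt/cache btrfs rw,noatime 0 0"
print(cache_fs_type(sample))  # → btrfs
```

Note that a single-device cache can be XFS, but multi-device cache pools require BTRFS, which is why the workaround above is only an option for single-SSD setups.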
  7. No worries dude. The community here is pretty helpful - I use them myself. No thanks necessary.
  8. Found these bits of info online too: "If you're using a cheapo third party USB-C cable, it might only support USB 2.0 High Speed transfer rates (if it doesn't have wires for all the USB 3.1 pins)." "Type C defines a connector. It doesn't define speed. This is like assuming all M.2 slots are faster than SATA. M.2 can also use SATA based drives in addition to NVMe." I would say mystery solved...!
  9. Also, I wonder why the drives are showing as JMicron generic instead of the actual drive serials. This suggests to me that perhaps they aren't connected bare metal? I still think it's an issue with USB 2.0 though, as said above, especially as the speeds seem about right for USB 2.0. Probably stating the obvious here, but make sure you're connected to one of the blue USB ports.
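The "speeds seem about right for USB 2.0" hunch is easy to sanity-check with back-of-envelope arithmetic. USB 2.0 High Speed signals at 480 Mbit/s, but protocol overhead means bulk transfers in practice reach only a fraction of that; the 70% efficiency figure below is a rough assumption for the sketch, and real numbers vary by controller:

```python
# Back-of-envelope: why ~35-40 MB/s smells like a USB 2.0 link.
LINE_RATE_MBIT = 480          # USB 2.0 High Speed signaling rate
PRACTICAL_EFFICIENCY = 0.70   # rough, assumed bulk-transfer efficiency

theoretical_mb_s = LINE_RATE_MBIT / 8                      # 60 MB/s on the wire
practical_mb_s = theoretical_mb_s * PRACTICAL_EFFICIENCY   # ~42 MB/s usable
print(f"theoretical: {theoretical_mb_s:.0f} MB/s, practical: ~{practical_mb_s:.0f} MB/s")
```

A USB 3.0 (5 Gbit/s) link would be an order of magnitude faster, which is why drives plateauing around 35-40 MB/s on a "USB-C" cable suggest a 2.0 fallback.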
  10. Nothing wrong with 3rd Gen Ryzen on unRAID for the OS itself. I've not had a single crash yet since switching over from Intel. I would have zero hesitation recommending AMD for unRAID; in fact I would choose them over Intel any day of the week. I've used a 3600 and also a 3950X on my system, and it's absolutely fantastic. Threadripper I'm not quite sure about - I've heard a few people have had issues, but I don't think it would stop me from going down that route. However, as Ryzen is now so good, unless you need masses of PCI-E lanes, X570 is plenty imo. Of course, if you're also virtualising an OS, then that could bring other issues that may not be the fault of unRAID itself.
  11. EDIT - Just googled and can see the Orico 8 bay (NS800U3, if that's the model) is USB 3.0. I would have assumed that multiple drives sharing a single USB link could saturate it, but from looking it states up to 5Gbps. I'm confused by what you're saying about the rebuilds. unRAID parity will repair the one or two disks that are missing: "It seemed the total read speed of the array stays constant at about 30-35Mb/s, so each drive will read at about 6Mb/s and the data drive would write at 6Mb/s. Once the 2Tbs are out of the way the remaining drives go up to 8-9Mb/s and once the 4Tbs are done the 10Tbs go at 18-19Mb/s. It got to 99.6% complete and my son threw a toy behind the TV and it unplugged again" I'm a bit confused about the above. I would have assumed that when the array was unplugged and one of the drives was disabled, that one drive would then be rebuilt from parity? Also, are the drives in the enclosure each classed as being passed through, i.e. with bare-metal access by the OS? Or are they in their own kind of array? Sorry, trying to get my head around this...!
  12. 6.8.2, Ryzen 3600. It is an ASRock board though, which is the only brand I would ever use on unRAID. As mentioned above though, this is using the nct6775 driver.
  13. Found it, thanks man. Viewing other people's signatures was disabled by default (never seen that before...).
  14. Sorted the issue with the people at linuxserver. You need to map the container path using a prefix of /sync - example below. Sorted now. I found another bug whereby if you remove the /sync share completely the container won't start; they have raised it as a bug.
  15. Ah yes, I have tried that, and I believe it will move data from disk to disk, but I didn't think it could do it en masse. I'll check it again though, as I've not used it for a while. Thanks a lot! EDIT - Just checked, and it does indeed do this using "scatter" - amazing!
  16. Also, do you know of an easy way to move data to the first 6 volumes? At the moment I'm literally moving folders within Krusader. It would be epic if I could somehow instruct the data to be shifted to the first disks a bit more easily.
  17. Does anyone have an idea on the best way to accomplish this scenario? I have 12 x 14TB data disks, and my current data usage is at 80TB. I want all my current data to fill the first six disks, and the last 6 to be empty so they can stay spun down; of course, as the first 6 fill, then move onto the 7th. I'm currently using High Water, but as I understand it, that would fill each volume to 50% first? Do I need to switch to fill-up mode? Purely thinking about power savings! EDIT - I assume I could also exclude the last 6 disks from all shares, but that's a bit of a faff, and I would have to remember to re-enable them as the disks began to near capacity.
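The fill-up behaviour being asked about can be sketched in a few lines. This is a hypothetical illustration of the allocation idea, not Unraid's actual implementation: write to the lowest-numbered disk until it drops to a minimum-free floor, then move to the next, leaving later disks untouched (and free to stay spun down). The 0.5 TB floor and the free-space figures are made-up numbers:

```python
# Hypothetical sketch of "fill-up" allocation across an array of disks.
def pick_disk(free_tb, min_free_tb=0.5):
    """Return the index of the first disk with more than min_free_tb free."""
    for i, free in enumerate(free_tb):
        if free > min_free_tb:
            return i
    raise RuntimeError("array full")

# 12 x 14TB disks, ~80TB already packed onto the first six (illustrative numbers)
free = [0.4, 0.4, 0.4, 0.4, 0.4, 4.0] + [14.0] * 6
print(pick_disk(free))  # → 5, i.e. disk6 takes new writes; disks 7-12 stay idle
```

High Water differs in that its threshold starts at half the largest disk's capacity and halves each round, so early on it spreads data across more disks than strict fill-up would.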
  18. That would also be pretty damn good!
  19. Easier said than done - it's not easy to back up 100+TB of data. My important stuff is backed up according to the 3-2-1 strategy, but movies and TV shows, which take up a huge amount of space, can't easily be backed up. Triple parity would at least give some additional peace of mind when swapping faulty disks. (It took some nagging to persuade my wife to let me buy 14 x 14TB external drives!)
  20. Sorry if this has been asked before. Dual parity just isn't enough with large arrays. I've had to replace 2 disks a couple of times now, and having the extra peace of mind of that one extra disk would be amazing. I know it's likely a big CPU overhead with the maths calculations, but if you could give the choice to people who have the hardware to do it, that would be amazing. Once you hit 14+ drives, the chance of an additional failure whilst rebuilding must skyrocket. Thanks!
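The "must skyrocket" intuition can be put into rough numbers. Assuming independent drive failures at an annualised failure rate (AFR), the chance of at least one more drive dying during a rebuild window grows with the number of surviving drives; the 1.5% AFR and 2-day rebuild below are made-up illustrative figures, not measurements:

```python
# Rough, hedged estimate of hitting another failure mid-rebuild.
def p_extra_failure(n_remaining, afr=0.015, rebuild_days=2.0):
    """P(at least one of n_remaining drives fails during the rebuild window)."""
    p_one = 1 - (1 - afr) ** (rebuild_days / 365)   # per-drive failure chance
    return 1 - (1 - p_one) ** n_remaining           # at least one of n fails

for n in (8, 14, 24):
    print(f"{n} remaining drives: {p_extra_failure(n):.4%}")
```

For small per-drive probabilities the result grows roughly linearly with drive count, so a 24-drive array is about three times as exposed during a rebuild as an 8-drive one. Real-world failures also correlate (same batch, same heat, same vibration), which only makes the case for the extra parity disk stronger.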
  21. Really hope they offer this at some point. I would jump on it. EDIT - Just asked!