Everything posted by JorgeB

  1. Try changing cache slots to just 1. If it still doesn't mount, the best bet is to completely wipe the SSDs previously used as cache with blkdiscard and redo the pool (see the sketch after this list); if you need to save the data first, you can use this FAQ entry.
  2. Yep, that's a problem with rsync, especially on the initial sync, as it immediately creates all the folders on the first available disk.
  3. Better than USB (for a single drive at least), but I still wouldn't recommend it: eSATA cables are known to be flaky, and port multipliers are definitely not recommended. Glad yours is working well, but it's likely the exception; I keep finding users with errors related to port multipliers.
  4. I wouldn't bother; either use larger internal disks or go SAS.
  5. Start by running memtest for a few hours, ideally 24.
  6. It's usually an enclosure problem, but as most have already told you, USB is a bad idea for array use: even if SMART is passed through, they usually have very bad error handling. If you need external storage, use a SAS enclosure.
  7. Just got the first two, which will be used as parities since they are larger than the current ones, so I will need to do a double parity swap. Not surprisingly they look exactly the same except for the sticker, and they also weigh the same; I was hoping for a small difference to confirm they actually have different internals, though obviously weighing the same doesn't prove they don't, and my scale's resolution is 5g, so there might be a very small difference unmeasurable by me. I guess time will tell if they have better vibration protection, though Toshiba recommends these disks for 1 to 8 bays and they will be used on a 21-disk server, but if nothing else they will have an extra year of warranty. On that note, some good news: one of the two failed 4TB WDs is still under warranty, so not a total loss; before checking the serials I was thinking they were both out of warranty, since they were some of the first 4TB disks on that server.
  8. In my case it can't be 24x7 use, as these servers are archive only and on for just a few hours every week. I always liked the WD green/blue/reds as they are low power and very low noise, and I still have a lot of them without issues, some older 2TB plus 4 and 6TB ones on servers with fewer disks, but those on the biggest servers, especially the 3 and 4TB models, are failing at an alarming rate. The problem always starts the same: they start getting slow sectors (I notice low performance on transfers since I always use turbo write), the SMART attribute for Raw Read Error Rate starts increasing, and after just a few more hours of use they start having read errors. Since I need to replace those last two disks and I've been having good luck with Toshiba disks (and they are very competitively priced), I'm going to do a dual parity swap and upgrade to larger disks, but I'll be using one X300 desktop drive and one N300 NAS drive, and when I need to upgrade or replace more disks on this server I'll do them two by two, one from each, so after a few years maybe I'll be able to gather whether one is really better than the other for larger server use.
  9. Thoughts? I use mostly desktop drives on my servers, and lately I've been having disk failures with some regularity; my 3 and 4TB WD green/blues especially are failing at a rate of at least one a month. Just this weekend I had a double disk failure on one of my servers, on two fairly recent 4TB WD green drives with very low power-on hours. I'm starting to think that at least these drives don't handle vibration well, but I also have a lot of Samsung, Toshiba, 2TB WD green, and a few Seagate desktop drives, and those have been fairly reliable.
  10. Both disks have the same UUID:
      Apr 11 21:47:50 NASBackup kernel: XFS (md8): Filesystem has duplicate UUID 6e5537c6-38fe-4f06-8d46-587f6c2185fe - can't mount
      Likely one was rebuilt from the other at some point in the past, so only the first one will mount. You can change the UUID on either one (see the sketch after this list):
      xfs_admin -U generate /dev/sdX1
      Note the 1 at the end.
  11. That's a bad idea; if it keeps increasing, there's still a problem.
  12. Apr 6 09:21:07 drogo kernel: XFS (md1): Internal error XFS_WANT_CORRUPTED_GOTO at line 1423 of file fs/xfs/libxfs/xfs_ialloc.c. Caller xfs_dialloc_ag+0xdd/0x23f [xfs]
      Yes
  13. There's filesystem corruption on disk1; check the filesystem (see the sketch after this list): https://lime-technology.com/wiki/Check_Disk_Filesystems#Checking_and_fixing_drives_in_the_webGui
  14. Disk dropped offline; it's on a Marvell controller and these are known to drop disks in some cases. Disk4 is on the same controller; has it been there for long without issues?
  15. This type of call trace happens to several users on different unRAID releases; it seems to be mostly harmless, but I don't know the cause.
  16. If the number of errors keeps increasing, there's a problem.
  17. The fact that it's new doesn't mean there isn't a problem, but wait; as long as you don't get more, you're OK.
  18. A single error is not a problem, but since it happened on all disks you might get more, and that would mean trouble. I would say 4 bad SATA cables is not likely, so maybe the controller, or an enclosure if they share one.
  19. You just need to copy/paste all the files from one to the other, make sure the label is UNRAID, and run makebootable (see the sketch after this list).
  20. When the stop array button is available. That's likely a corrupt docker image; I would need the diags to confirm, and if so just delete and recreate it.
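
A minimal sketch for item 1 (wiping the old cache SSDs with blkdiscard before redoing the pool). It assumes the data has already been backed up and that /dev/sdX and /dev/sdY are the SSDs previously used as cache; these device names are placeholders, so substitute your own, as blkdiscard destroys everything on the target device.

    # With the array stopped, confirm which devices are the old cache SSDs
    lsblk -o NAME,SIZE,MODEL,SERIAL

    # Discard (wipe) each SSD previously used as cache - destructive!
    blkdiscard /dev/sdX
    blkdiscard /dev/sdY

    # Then set the desired number of cache slots, assign the wiped SSDs to the
    # pool in the webGui and start the array so the new pool can be formatted.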
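
A minimal sketch for item 10 (fixing the duplicate XFS UUID). It assumes the filesystems are unmounted and that /dev/sdX1 and /dev/sdY1 are the two partitions involved (placeholder names); only one of them needs a new UUID.

    # Print the current UUID of both filesystems to confirm the duplicate
    xfs_admin -u /dev/sdX1
    xfs_admin -u /dev/sdY1

    # Generate a new random UUID on one of them (note the partition number 1)
    xfs_admin -U generate /dev/sdX1

    # If xfs_admin complains about a dirty log, mount and cleanly unmount the
    # filesystem (or repair it) first, then run the command again.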
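
A minimal sketch for item 13, the command-line equivalent of the webGui filesystem check linked there. It assumes disk1 is XFS and the array is started in maintenance mode; the device name /dev/md1 is an example and may differ between unRAID releases, so the webGui procedure from the link is the safer route.

    # Read-only check first (-n means no modifications are made)
    xfs_repair -n /dev/md1

    # If corruption is reported, run the actual repair; using the md device
    # keeps parity in sync
    xfs_repair /dev/md1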
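
A minimal sketch for item 19 (recreating the boot flash on a Linux machine). The make_bootable_linux script name, the fatlabel command, and all paths/device names here are assumptions for illustration, not a verified procedure; on Windows the usual equivalent is running make_bootable.bat as administrator.

    # Copy all files from the old flash (or a backup) to the new one
    cp -r /path/to/old_flash_backup/* /mnt/new_flash/

    # Make sure the FAT partition is labelled UNRAID
    fatlabel /dev/sdX1 UNRAID

    # Make the new drive bootable (run as root from the flash root)
    cd /mnt/new_flash
    bash ./make_bootable_linux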