Everything posted by JorgeB

  1. Happy for you, still unrelated. Let me rephrase that: almost certainly unrelated, and it wouldn't make any sense, but stranger things have happened.
  2. Yes, a drive that fails the SMART test is a failed drive:
     Num  Test_Description    Status                    Remaining  LifeTime(hours)  LBA_of_first_error
     # 1  Extended offline    Completed: read failure   90%        26               -
     # 2  Short offline       Completed: read failure   10%        26
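     This kind of log comes from smartctl's self-test log; a minimal sketch for running and checking the test from the console, assuming the disk is /dev/sdX (substitute your device):
        # start an extended (long) self-test in the background; it can take several hours
        smartctl -t long /dev/sdX
        # once it finishes, check the self-test log; "Completed: read failure" = failed drive
        smartctl -l selftest /dev/sdX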
  3. Old SAS1 models support up to 2TB; all SAS2 (SAS2xxx) and SAS3 (SAS3xxx) models support any size.
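     If you're not sure which generation your controller is, one way to check from the console (a sketch; the exact output varies by card):
        # list PCI storage controllers; SAS2008/2308 = SAS2, SAS3008 = SAS3
        lspci | grep -i -E 'sas|lsi|raid'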
  4. Your CPU should handle encryption just fine; only CPUs without AES-NI could have some trouble.
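     One quick way to confirm hardware AES support, assuming a Linux console on the server:
        # prints "aes" if the CPU exposes the AES-NI instructions
        grep -o -w -m1 aes /proc/cpuinfo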
  5. Yep, and while the older Seagate Archive shingled drives work well with Unraid, the newer Barracuda models appear to be crap, at least some of them.
  6. Those ST4000DM004 disks are SMR and from the same family as the ST8000DM004; various users with those disks have reported very poor writes, so that's likely your problem. If you have a different disk, add it to the array and test writing to it with turbo write enabled; parity is not SMR, so it won't be a problem.
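     A rough way to run that write test from the console; the mdcmd call and /mnt/diskX path are typical Unraid examples, adjust to your setup:
        # enable turbo/reconstruct write (can also be changed in Settings -> Disk Settings)
        mdcmd set md_write_method 1
        # write a 10GB test file straight to the new disk and watch the reported speed
        dd if=/dev/zero of=/mnt/diskX/test.bin bs=1M count=10240 oflag=direct status=progress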
  7. It's OK as long as it fits on the disk it's currently on, since on Unraid a file can't span more than one disk; depending on the VM config it might also be limited to 2TB max.
  8. This is a general support issue, not a bug, but while it's not moved by a mod: disk5 dropped offline, so there's no SMART report, but it's more likely a connection issue; you should check or post the SMART report after checking connections and rebooting. You'll then need to restart the disk6 rebuild. Also, parity2 had some media errors, so I recommend running an extended SMART test on it.
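     To grab the report once the disk is visible again, a sketch assuming it comes back as /dev/sdX (saving to the flash drive so it can be attached to a post):
        # full SMART report: attributes, error log and self-test log
        smartctl -a /dev/sdX > /boot/disk5_smart.txt
     For parity2, the extended test is the same smartctl -t long command shown in post 2.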
  9. No, you just don't get a warning in Windows about CRC errors. No. No, though you should avoid Marvell-based controllers; use Asmedia for 2 ports or LSI for more than 2. 99 times out of 100 CRC errors are caused by the SATA cable, and Samsung SSDs are particularly picky, requiring high-quality cables, so likely you're using less-than-optimal ones.
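     You can watch the CRC counter to confirm a cable swap fixed it (a sketch, assuming the SSD is /dev/sdX):
        # attribute 199 (UDMA_CRC_Error_Count) never resets, but it should stop
        # increasing once the cable problem is solved
        smartctl -A /dev/sdX | grep -i crc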
  10. I was going to reply to this but forgot. Yes, that's the disadvantage of the second method, snapshot then rsync; with the first method, btrfs send/receive, you can move/rename folders on the source and only the metadata changes will be sent. Unraid's independent array filesystems have many advantages, but in this case they make send/receive impractical unless the backup server's disk config mirrors the first server's. I still use send/receive for some of my smaller servers, which use only the cache pool in raid5/6.
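     The send/receive flow, as a minimal sketch (the paths, snapshot names and the "backup" host are placeholders; both ends must be btrfs and the source must be a subvolume):
        # initial full copy: make a read-only snapshot and send it to the backup server
        btrfs subvolume snapshot -r /mnt/cache/data /mnt/cache/data_snap1
        btrfs send /mnt/cache/data_snap1 | ssh backup 'btrfs receive /mnt/backup_pool'
        # later, send only the differences between two snapshots; moved/renamed
        # folders only transfer metadata changes
        btrfs subvolume snapshot -r /mnt/cache/data /mnt/cache/data_snap2
        btrfs send -p /mnt/cache/data_snap1 /mnt/cache/data_snap2 | ssh backup 'btrfs receive /mnt/backup_pool'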
  11. Yeah, this happens with various OSes; I saw the same with my FreeNAS server when I had it. SMB reads are noticeably slower than writes over 10GbE, and I see the same with Unraid. I never really worried about it since it's fast enough, and it looks like it's a Samba "feature", at least on some hardware configs.
  12. No, my 10GbE NICs are Mellanox and the gigabit ones are Intel, though apparently using an Intel NIC fixed the issue for the OP.
  13. It is, but those X9 models don't support more control than that; the X9 dual-socket and most (all?) X10/X11 boards do. Full is self-explanatory; standard and optimal look similar to me, but it might depend on the fans used, so try each one and check the RPMs (see below for a quick way to do that).
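     A quick way to compare the presets, assuming ipmitool is installed on the server (the IPMI web interface shows the same readings):
        # read the current fan RPMs after switching to each preset
        ipmitool sensor | grep -i fan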
  14. You mean XFS? Then you missed a step, as XFS works for a single cache device.
  15. You can only set them to the 3 predefined settings: full, optimal, or standard.
  16. I'll do some more testing when I have the time, but I did the first test today expecting to get full gigabit reads and writes, which should be between 110 and 114MB/s, and I'm definitely not getting that on reads. It's not the hardware, since I can get much faster read speeds with 10GbE. Different hardware can make a bigger difference, as the OP is seeing, but there really does appear to be an issue with read speed over SMB, at least in some cases, and it's not the first time: I remember some releases ago I could only get fast speeds with some of my servers by forcing SMB to 2.02 (example below), though that doesn't appear to help now.
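     For anyone who wants to try the protocol downgrade, a sketch of what I mean; on Unraid this usually goes in the Samba extra configuration box under Settings -> SMB, and whether it helps depends on the client and hardware:
        [global]
           # force Samba to negotiate at most SMB 2.0.2
           server max protocol = SMB2_02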
  17. There is if it's a single-device cache; pools can only be btrfs, though I doubt very much that btrfs has anything to do with those crashes.
  18. Did some tests with v6.7-rc2 and I do see some difference between write and read performance, though nowhere near as much difference as you see, but there might be something there:
     [screenshots attached: read from SSD cache; read from NVMe unassigned device; write to NVMe device (writing to cache is the same)]
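     For anyone trying to reproduce this, it's worth ruling out the raw network path before blaming SMB; a sketch assuming iperf3 is available on both ends (SERVER_IP is a placeholder):
        # on the server
        iperf3 -s
        # on the client: a normal run tests client->server (writes), -R reverses the
        # direction so it tests server->client (reads)
        iperf3 -c SERVER_IP
        iperf3 -c SERVER_IP -R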