Everything posted by JorgeB

  1. https://wiki.unraid.net/Check_Disk_Filesystems#Checking_and_fixing_drives_in_the_webGui Fix the filesystem on both disks; run it without -n or nothing will be done.
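     A minimal command-line sketch of the same check, assuming an XFS disk and the array started in maintenance mode (the md device number here is just an example):
     xfs_repair -nv /dev/md1   # -n = check only, reports problems but changes nothing
     xfs_repair -v /dev/md1    # run again without -n to actually repair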
  2. If you started the array with a disk unassigned, that disk will no longer be 100% in sync with parity; just mounting and unmounting the disks is enough for that. So you need to rebuild it (or do a new config to re-enable it, but that requires a correcting parity check, which takes just as long).
  3. If you have any VM vdisks on that disk they will be sparse. Sparse means that if you, for example, select 500G for the vdisk size but the OS only uses 50G, it will only occupy 50G on the disk and can then grow as needed; if it gets copied/moved to another disk without using the flag, it will occupy 500G on the destination instead of just 50G. The rsync flag for this is --sparse, as in the sketch below.
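     A hedged example of copying vdisks while preserving sparseness (paths and file name are illustrative only):
     rsync -av --sparse /mnt/disk1/domains/ /mnt/disk2/domains/
     # verify: allocated size vs. apparent (logical) size of the copy
     du -h /mnt/disk2/domains/vdisk1.img                   # e.g. 50G actually allocated
     du -h --apparent-size /mnt/disk2/domains/vdisk1.img   # e.g. 500G logical size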
  4. You're welcome, you can take a look here for some recommended models:
  5. Problem with the onboard SATA controller:
     Jun 1 14:00:59 SRVUNR1 kernel: ahci 0000:03:00.1: AMD-Vi: Event logged [IO_PAGE_FAULT domain=0x000f address=0xe7d60000 flags=0x0000]
     Unfortunately this is quite common with some Ryzen boards. A BIOS update might help, or a newer Unraid release when available, due to the newer kernel; failing that, your best bet is an add-on controller (or a different model board).
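     One quick way to check whether the errors are still occurring (standard Unraid syslog path):
     grep -i 'IO_PAGE_FAULT' /var/log/syslog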
  6. You can create multiple single-device "pools", with some limitations: all existing data in a share using those pools would still be visible when browsing that share, but you'd need to work with disk shares to copy data to every pool except the one that share is set to use, since currently any share can only be configured to use one pool.
  7. Yes, you can also use the existing one, just need a second adapter and more cables.
  8. Click on the share, then "Compute" under Size; it will show usage by disk/pool. This works with the VM off.
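     A rough command-line equivalent, if you prefer (the share name is a placeholder):
     du -sh /mnt/disk*/MyShare /mnt/cache/MyShare 2>/dev/null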
  9. Formatting is never part of a rebuild, and since the emulated disk is mounting correctly you just need to re-enable the drive; but if the replacement controller is going to arrive soon, it's best to wait for it before doing that.
  10. Shares with cache=no aren't moved to the array; only cache=yes shares are. Yes, set the share(s) to cache=prefer and run the mover; the VMs and the VM service should be disabled, since mover won't move open files.
  11. The array can have 30 devices max (28 data + 2 parity), but you can have multiple additional pools plus unassigned devices, hence unlimited devices, just not unlimited array devices.
  12. Ahh, good catch; maybe that's why everything looks normal there.
  13. https://forums.unraid.net/topic/46802-faq-for-unraid-v6/?do=findComment&comment=781601 Though that looks more like a hardware issue, so possibly nothing will be logged.
  14. Yes. Which one? That's it, SFF-8088 to SFF-8088. Any one will work; they are transparent to the OS. It's not powered; it looks like this: In that case I had a second PSU to power the disks and the expander on the second chassis.
  15. This was the result (PCIe 3.0 x4):
     ndk_sh t u v w x y z aa
     /dev/sdt: 409.95 MB/s
     /dev/sdu: 409.91 MB/s
     /dev/sdv: 409.88 MB/s
     /dev/sdw: 410.22 MB/s
     /dev/sdx: 410.31 MB/s
     /dev/sdy: 410.54 MB/s
     /dev/sdz: 412.00 MB/s
     /dev/sdaa: 410.20 MB/s
     Total = 3283.01 MB/s
     Ran it 3 times and this was the best of the 3, so strangely a little slower than an Unraid read check. Do you mind sharing that one also? I have 4 NVMe devices in a bifurcated x16 slot but no good way of testing them, since an Unraid read check produces much slower than expected speeds with those. That's a good result, but I expect NVMe devices will be a little more efficient: consider that with NVMe the controller on the device goes directly to the PCIe bus, while with SAS/SATA you have the SAS/SATA controller on each device, then the HBA, and only then the PCIe bus, so I believe it can never be as fast as NVMe. I have no problem now acknowledging that the PCIe 3.0 bus itself can reach around 7GB/s with an x8 link, but IMHO those speeds will only be possible with NVMe devices; I still believe that with a SAS/SATA HBA it will always be a little slower, around 6.6-6.8GB/s. P.S. What adapter are you using for 5 NVMe devices in one slot? I mostly find 4-device adapters for use on bifurcated slots; I guess yours is more expensive due to the PCIe bridge, but it still might be worth it to get one more NVMe device in a single slot while maintaining good performance.
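     For the NVMe devices, a minimal sketch of a parallel sequential-read test with dd (device names are examples; direct I/O bypasses the page cache):
     for dev in /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1; do
       dd if="$dev" of=/dev/null bs=1M count=8192 iflag=direct 2>&1 | tail -n1 &
     done
     wait   # each dd prints its own MB/s summary line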
  16. Interesting, I feel the other way, i.e. that they did a good job with it. I have an older HP expander that is SAS2 but SATA2 only, so with SATA devices it can only do 1100MB/s per wide link. Without Databolt and the PMC equivalent we'd only be able to get 2200MB/s per wide link with a SAS3 HBA+expander, so 4000MB/s (and around 5500MB/s with dual link) seems good to me; I think it's unrealistic to expect a 6G link to have the exact same performance as a native 12G link. It's not exclusive to SATA, it's link-speed related: using SAS2 devices with a SAS3 HBA+expander will be the same, since they also link at 6G, and because there's no 12Gb/s SATA nothing much can really be done about it; SAS2 users can upgrade to SAS3 devices if they really want max performance. Some more interesting info I found on this: note that the read speed aligns well with my results with dual link and the expander. Write speeds are even better, basically the same as native 12G; that is something I can never replicate, since the SSDs I have for testing are fast at reading but much slower at writing. It can be seen, for example, with lsscsi -v:
     [1:0:12:0] enclosu LSI SAS3x28 0601 -
     dir: /sys/bus/scsi/devices/1:0:12:0 [/sys/devices/pci0000:00/0000:00:01.0/0000:01:00.0/host1/port-1:0/expander-1:0/port-1:0:12/end_device-1:0:12/target1:0:12/1:0:12:0]
  17. Again, that's not an Unraid issue; it's usually board/BIOS related. Those diags are from after rebooting; if you need help diagnosing the unclean shutdown, post the diags that are automatically saved to the flash drive (in the logs folder) after one occurs.
  18. It doesn't, but if there were any sparse files on the source (like vdisks) they won't be sparse on the destination unless you use the appropriate flag (rsync --sparse, as in #3 above).
  19. It's not clear what you mean: do you want to use the new DL380 as a new server or as an enclosure?
  20. Start by checking network bandwidth with a single-stream iperf test.
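     For example, with iperf3 (the server IP is a placeholder):
     iperf3 -s                # on the server
     iperf3 -c 192.168.1.10   # on the client; a single stream is the default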
  21. Yep, so that it's moved to the array.