Everything posted by JorgeB

  1. Interesting, I measured 6W from both of these, idling with 8 devices connected.
  2. You can use the consold8 script to do that automatically for you; I used it to consolidate TV shows by season according to a new split level with great success.
  3. No HDD will get you close to 600MB/s; the fastest disks ATM are about 225MB/s. With SSDs you can get up to 550MB/s.
  4. Performance will be the same as the Intel onboard controller.
  5. Expanders can be cascaded. Obviously there will be less bandwidth than if using 2 HBAs, but it's worth trying to see if there's a performance improvement; if there isn't, you could be CPU limited and there's no point in investing in a 2nd HBA.
  6. I have no experience with those backplanes, but from the support response it looks like at least the front backplane supports dual link. You could try option 2 from the response, or get another HBA so you could use one dual-linked to the front backplane and the other single-linked to the rear backplane (single link is enough for 6 disks, see the rough numbers below). But IIRC your parity check speed was CPU limited, so you may not see any improvement.
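     Rough numbers, assuming 6Gb/s SAS2 links: each lane carries about 550MB/s of payload after encoding overhead, so a single 4-lane link is roughly 4 x 550 = 2,200MB/s and a dual link roughly 4,400MB/s. Six HDDs at ~225MB/s each only need about 1,350MB/s, which is why a single link is enough for the rear backplane.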
  7. I hadn't checked the logs, but that last check was non-correcting: Apr 30 08:12:28 Tower kernel: mdcmd (51): check nocorrect
  8. There are many reports of unresponsive v6 servers when using reiserfs; usually converting all disks to xfs fixes the problem, and IMO you should convert anyway, since reiserfs is no longer properly maintained and can have terrible performance in some situations. It's normal for the correcting check to find the same errors, but they are now corrected and the next check should find 0 errors.
  9. Sounds like an alarm, possibly high CPU temp, as it's rather higher than normal; check the cooling. As for the unresponsive array, you're using reiserfs on all disks, and that would be my #1 suspect. You can confirm by converting only your cache to XFS and disabling the mover temporarily so that all writes are limited to the cache; test for a few days, and if it works convert the remaining disks. PS: the mover ran during the parity check, this should be avoided as it will slow down both operations considerably.
  10. Currently the automatic parity check after an unclean shutdown is non-correcting, so stop it and start a correcting check.
  11. VM FAQ

     How do I keep my sparse vdisk as small as possible? How do I enable trim on my Windows 8/10 or Windows Server 2012/2016 VM?

     NOTE: according to this post by @aim60, virtio devices also support discard on recent versions of qemu, so you just need to add the discard='unmap' option to the XML. Still going to leave the older info here for now, just in case.

     By default vdisks are sparse, i.e., you can choose a 30GB capacity but it will only allocate the space actually required and use more as needed; you can see the current capacity vs. allocated size by clicking on the VM name. The problem is that over time, as files are written and deleted, updates installed, etc., the vdisk grows and never recovers the space from deleted files. This has two consequences: space is wasted, and if the vdisk is on an SSD that unused space is not trimmed. It's possible to "re-sparsify" the vdisk, e.g., by copying it to another file (cp --sparse=always), but that's not very practical and there's a better way: use the virtio-scsi controller together with discard='unmap'. This allows Windows 8/10 to detect the vdisk as a "thin provisioned drive", so any files deleted on the vdisk are immediately recovered as free space on the host (might not work if the vdisk is on an HDD), and it also allows fstrim to trim those now-free sectors when the vdisk is on an SSD. On an existing vdisk it's also possible to run Windows defrag to recover all unused space after changing to that controller.

     Steps to change an existing Windows 8/10 VM (also works for Windows Server 2012/2016):

     1) First we need to install the SCSI controller: shut down the VM (for Windows 8/10 I recommend disabling Windows Fast Startup -> Control Panel\All Control Panel Items\Power Options\System Settings before shutdown, or else the VM might crash on first boot after changing the controller). Then edit the VM in form view (the toggle between form and XML views is on the upper right side) and change an existing device other than your main vdisk or the virtio driver cdrom to SCSI, for example your OS installation device if you still have it; if not, you can add a second small vdisk and choose SCSI as the vdisk bus. Save the changes.

     2) Start the VM and install the driver for the new "SCSI controller"; look for it on the virtio driver ISO (e.g., vioscsi\w10).

     3) Shut down the VM and edit it again, first using the form view to change the main vdisk bus to "SCSI", then switching to the XML view to add discard='unmap' to the vdisk's driver line, right after cache='writeback' (see the sketch after this post for a before/after example).

     4) Start the VM (if you added a 2nd vdisk you can remove it now before starting); it should boot normally and you can re-enable Windows Fast Startup.

     5) Run Windows "Defragment and Optimize Drives", check that the disk is now detected as a "Thin provisioned drive" and run optimize to recover all previously unused space. From now on all deleted files on the vdisk should be trimmed immediately.

     Note: if you later edit the VM using the GUI editor these changes will be lost and will need to be redone.
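     A minimal sketch of the before/after change on the vdisk's driver element in the libvirt XML, assuming a raw vdisk already on the SCSI bus; the source file path and target dev name are placeholders, yours will differ:

     Before:
     <disk type='file' device='disk'>
       <driver name='qemu' type='raw' cache='writeback'/>
       <source file='/mnt/user/domains/Windows10/vdisk1.img'/>
       <target dev='hdc' bus='scsi'/>
     </disk>

     After:
     <disk type='file' device='disk'>
       <driver name='qemu' type='raw' cache='writeback' discard='unmap'/>
       <source file='/mnt/user/domains/Windows10/vdisk1.img'/>
       <target dev='hdc' bus='scsi'/>
     </disk>

     Only the driver line changes; everything else in the disk definition stays as the GUI generated it.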
  12. It's an old bug: speed is reported according to parity size, so when the cleared (or rebuilt) disk is smaller the reported speed is incorrect.
  13. Yes, with SSDs PCIe 2.0 can easily become a bottleneck during simultaneous access.
  14. Only if using SAS3 enterprise SSDs; speed will be the same with SATA SSDs. Also make sure it works with unRAID, as not all newer LSI models are supported.
  15. It's not an unRAID problem, it's a Linux problem with most Marvell chipsets, and 4-port controllers are mostly Marvell based. If you want a new controller you can get 2 or 8 ports: for 2 ports go with ASMedia; for 8 ports there are newer LSI controllers that work out of the box, e.g., the LSI 9207-8i.
  16. Works fine for some, but the important thing is to try a different one, or to restart in safe mode.
  17. This is a common issue, one or the other solution from the FAQ should fix it:
  18. It's not in the FAQ; maybe it should be?
  19. Your cache disk has been acting up, probably because of the pending sector.
  20. NVMe devices are supported since v6.2.
  21. As long as the disk isn't going to be used as an array disk you can partition it with the UD plugin.