
About billington.mark

  • Birthday 07/30/1985


  • Location
    United Kingdom


Posts
  1. Hi guys, I'm late to the SAS party and recently snagged two ST6000NM0014 drives for a very small sum. I noticed the drives wouldn't spin down, so I found this plugin. The drives do now spin down, but the error count in the WebUI for those disks slowly started to creep up. Manually spinning the drives down in the WebUI also swamps the syslog with IO errors, which led to me having to rebuild the array from parity. Removing the plugin leaves the drives spun up, but obviously it would be better if these played nice!

     System details: Unraid 6.9.1 (downgraded from 6.9.2 due to the other spin-down issue); LSI SAS2008 controller with 2x SAS ST6000NM0014s; the rest are SATA (which spin down fine).

     The OP mentions that the issue can be caused by a combination of controller/disk, rather than the non-standard implementation of power management across different brands, but the thread seems to lean heavily towards the latter?

     My main questions are:
     • Should I hang onto my ST6000NM0014s?
     • Is there a reason the Constellation ES.3 is currently #'d from the exclusion list if it's still misbehaving?
     • Would things be better with a different SAS controller?
     • Can the OP be updated with a list of SAS drives that are known to play nice with this plugin?
     • Is this being addressed at a core Unraid OS level for 6.10?

     Apologies if these have been answered already, but the last few pages of this thread are hard to follow with the 6.9.2 issue added to the mix!
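For anyone poking at this by hand, the underlying difference between the two drive types is the command involved: SATA disks respond to hdparm's ATA STANDBY IMMEDIATE, while SAS disks ignore it and need a SCSI START STOP UNIT (e.g. sg_start from sg3_utils). A minimal sketch of choosing the right one; the device names and the transport argument here are illustrative, not anything Unraid or the plugin provides:

```python
import subprocess

def spin_down_command(device: str, transport: str) -> list:
    """Return the spin-down command for a disk, keyed by transport type.

    SATA disks accept hdparm's ATA STANDBY IMMEDIATE; SAS disks need a
    SCSI START STOP UNIT, e.g. via sg_start from the sg3_utils package.
    """
    if transport == "sata":
        return ["hdparm", "-y", device]                        # ATA STANDBY IMMEDIATE
    if transport == "sas":
        return ["sg_start", "--readonly", "--stop", device]    # SCSI START STOP UNIT
    raise ValueError("unknown transport: " + transport)

# Usage (this would really spin the disk down; device name is illustrative):
# subprocess.run(spin_down_command("/dev/sdb", "sas"), check=True)
```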
  2. I tried 19 times to decode this but then gave up.
  3. +1 to this. TOTP 2FA code implementation would be a welcome feature addition.
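For reference, TOTP itself is small enough to sketch with nothing but the standard library; this is a generic RFC 6238 implementation (6 digits, 30-second step, HMAC-SHA1), not anything from Unraid:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, for_time=None, digits: int = 6, step: int = 30) -> str:
    """RFC 6238 TOTP: HMAC-SHA1 over the number of 30-second steps since epoch."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time() if for_time is None else for_time) // step
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                     # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

The two RFC 4226 test-vector secrets/codes make it easy to sanity-check an implementation like this against known values.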
  4. This sums up my stance too. I can understand LimeTech's view as to why they didn't feel the need to communicate this to the other parties involved (as they never officially asked the community to develop the solution that was put in place). But on the flip side, the appeal of Unraid is the community spirit and the drive to implement things which make the platform more useful. It wouldn't have taken a lot to give certain members of the community a heads-up that this was coming, and to give credit where credit is due in the release notes. Something along the lines of:

     "After seeing the appeal and demand around 3rd-party driver integration with Unraid as it's matured over recent years, we've introduced a mechanism to bring this into the core OS distribution. We want to thank specific members of the community such as @CHBMB and @bass_rock, who've unofficially supported this in recent times, which drove the need to implement this in a way that allows better support for the OS as a whole for the entire community and removes the need for users to run unofficial builds."

     It would also have been good for them to be given a heads-up and at least involved in the implementation stages at an advisory or review level. Anyway, I hope this communication mishap can be resolved. It was obviously not intentional, and 2020 as a whole has meant we're all stressed and overworked (I am anyway!), which makes situations like this a lot easier to trigger. Hopefully lessons can be learned here, and changes similar to this can be managed with a little more community involvement going forward.
  5. I upgraded to this mATX X570 motherboard, which has 8x SATA ports and 2x NVMe slots. I had the H310 to make up the shortfall on a B450 board I had previously, and after upgrading it was no longer needed. I run it with a Ryzen APU and it happily does everything I need it to, without any expansion cards or a separate GPU: X570M Pro4
  6. Had the error with a disk not decrypting after shutting down to install an additional NVMe. Not sure that has any relevance, but it's a change to the system from the previous boot. If memory serves me right, I got the same behaviour when removing an old SSD last time this happened, so it could have something to do with a hardware change from the previous boot, or a disk addition/removal in particular? On the first attempt at starting the array, disk1 didn't want to decrypt, although all other disks did. Stopped the array and tried again (no reboot), and disk5 didn't decrypt the second time; all other disks were fine though. Rebooted and everything is happy again. Logs attached.
  7. Not too late. It's on eBay but no one has snapped it up yet. If you want it, it's yours. Ping me a PM. Mark
  8. Re-updated and it's behaved this time. I'll grab logs if it does it again.
  9. Had to roll back to beta22. The array wouldn't decrypt some drives on starting (got an incorrect passkey error). The odd thing is that each time I stopped the array and tried again, it would struggle to mount different disks. I needed to get the array back up ASAP, so it slipped my mind to take logs. Is this a known issue?
  10. X570M Pro4. Recently upgraded, so now going spare. Includes manual and I/O plate. £50 including postage. Payment via PayPal. Will be putting this on eBay in a week or so if there's no interest on here. Sold
  11. As it says on the tin. Also includes the cables to connect 8 HDDs. Recently upgraded and no longer have a use for this card. Flashed a while ago to be used as an HBA, and has had 8 drives running flawlessly. £25 inc. postage. Payment via PayPal. Will be putting this on eBay in a week or so if there's no interest on here. Sold
  12. Stuff to try: set up the VM using Q35 instead of i440fx. The virtual PCIe lanes are set up much differently, which could yield better results (this will most likely need an OS reinstall inside the VM). For Windows not booting/installing: set the VM to only have 1 CPU, install/boot as normal, do a graceful shutdown, then update the VM config to the desired CPU setup. This also fixes things if the VM misbehaves after feature updates.
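The Q35 suggestion maps to the machine type in the VM's libvirt XML. A rough sketch of the relevant parts; the machine version and CPU topology shown here are example values, not requirements:

```xml
<domain type='kvm'>
  <os>
    <!-- Q35 exposes a PCIe topology; i440fx is legacy PCI -->
    <type arch='x86_64' machine='pc-q35-5.1'>hvm</type>
  </os>
  <vcpu placement='static'>8</vcpu>
  <cpu mode='host-passthrough'>
    <topology sockets='1' cores='4' threads='2'/>
  </cpu>
</domain>
```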
  13. Sorry, I've not been online here in a while and only just seen that I've been tagged in a few posts. I no longer use a VM for my workstation, for various reasons which are off topic! (Also, I can't remember all the commands, and I'm not fully up to speed with best practice for stubbing and passing devices through in 2020, so I'm happy to be corrected if any of this advice is out of date!)

     I know there's a new way of passing devices to VMs nowadays, but the premise that you have to pass all devices in an IOMMU group will still apply. You need to stub all the devices associated with the card, so 2c:00.0 (GPU), 2c:00.1 (Audio), 2c:00.2 (USB) and 2c:00.3 (Serial). The GPU and Audio are probably stubbed by Unraid in the background automagically (so they'll not have a driver assigned on boot), which is why they're selectable. The Serial and USB won't have been, so Unraid will be assigning a driver and grabbing the device as its own. Stubbing then rebooting should be enough for the USB and Serial devices to show up in the VM config GUI for you to select. After that, it should pass through and just work.

     If they're not selectable, you need to check whether a driver has been applied to the device (lspci should tell you). I think a virtio driver is assigned to stubbed devices. If anything else has grabbed it, you've not stubbed it properly. If they do have a virtio driver assigned, it's probably just a GUI limitation: you'll need to go into the XML, find where the GPU and Audio devices are assigned, and duplicate what's been set up automatically for the GPU and Audio, for the USB and Serial devices.

     Hopefully that's enough to work with.
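The XML duplication described above looks roughly like this: one `<hostdev>` entry per function of the card's IOMMU group. The bus/slot values match the example addresses in the post, and the function numbers for the USB and serial controllers are assumptions about a typical GPU layout:

```xml
<!-- The GPU (function 0x0) and audio (0x1) entries usually exist already;
     the USB and serial ones are copies with only the function changed. -->
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x2c' slot='0x00' function='0x2'/> <!-- USB -->
  </source>
</hostdev>
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x2c' slot='0x00' function='0x3'/> <!-- Serial -->
  </source>
</hostdev>
```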
  14. I had this a while ago and couldn't get anywhere near bare metal performance. You can get quite complex with this and start looking into IOThread pinning to help, but there's only so much you can do with a virtual disk controller vs a hardware one. You'll probably notice a bit of a difference if you use the emulatorpin option to take the workload off CPU0 (which will be competing with Unraid's own tasks). If you have a few cores to spare, give it a hyperthreaded pair (one that you're not already using in the VM). In the end I got a PCIe riser for an NVMe drive and passed that through to the VM. You get about 90% of the way there performance-wise compared to bare metal, as the controller is part of the NVMe drive itself. Good luck.
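For what it's worth, emulatorpin lives in the `<cputune>` block of the VM's libvirt XML. A sketch of the pairing described above; the cpuset numbers are purely illustrative and depend on your CPU's core/thread layout:

```xml
<cputune>
  <!-- the VM's own pinned cores -->
  <vcpupin vcpu='0' cpuset='4'/>
  <vcpupin vcpu='1' cpuset='12'/>
  <!-- move QEMU's emulator threads off CPU0 onto a spare hyperthreaded pair -->
  <emulatorpin cpuset='2,10'/>
</cputune>
```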