ju_media

Everything posted by ju_media

  1. Ah, I see. Thank you. Maybe that’s where I went wrong the first time: rebooting instead of power cycling. Which board/chipset are you using with the 980 devices?
  2. @JorgeB would you be so kind as to suggest the NVMe drives you’d recommend for use as cache drive(s), ones you know from experience are stable on Unraid? Like I said, I never had issues with the standard Sabrent Rocket drive, but it didn’t have the best sustained write performance once I hit around the 500GB mark. The Samsung just seems to keep going, not slowing down even when I write 1TB+ to it, but this issue with the frequent drop-outs is a complete deal breaker. If you have a personal recommendation with similar sustained write/internal cache performance that is 100% stable, I’d love to hear it, as I think I’ll just swap this one out. Not worth the hassle anymore.
  3. In February 2023 I upgraded my NVMe cache drive from a Sabrent Rocket 1TB to a Samsung 980 Pro 2TB. I never had a single issue with the Sabrent drive - I just needed more capacity to deal with large one-off writes, and I went with the ‘best’ drive I could afford, which was the 980 Pro based on many reviews and benchmarks of continuous write/cache performance. But I started having the exact same issue described in this thread with my Unraid server, I would say around 4 months ago now, probably July 2023 onwards. It happened maybe once a month, but has started happening more frequently in the last month or so, having gone ‘down’ 3 times in the last 30 days. The only way I can get the cache drive to appear again is by shutting down Unraid, removing the SSD from the M.2 slot, re-seating it, and booting back up. This is causing unnecessary additional parity checks to run on my system too, as I guess Unraid thinks a new device has been added or something. My 980 Pro is on the 5B2QGXA7 firmware (which it shipped with; I haven’t changed it). This is the firmware known to fix some other issues with 980 Pros bricking into a read-only state. Unraid is on 6.11.0, but I can’t remember whether the Unraid version changed before or after installing the 980 Pro in my system, so I don’t know if there is any correlation there. Regardless, I have added the line of code shared in this thread to my flash syslinux config (see the sketch after this list) and will report back if this resolves the issue. It’s infuriating coming home late at night for none of my motion sensors/automations to kick in, and immediately knowing that the SSD has gone down again, killing half of my smart home stuff in the process 🙃
  4. Oh, that’s a good point. So, if I’ve understood correctly, I should add the new parity drive first, so that I have increased protection in the event that I face any drive issues during the ‘add a new storage drive’ stage.
  5. OK - I’ll be sure to do that when replacing the cache - thanks for the tips! For adding the 2nd parity vs. adding the new storage drive, is there any difference if I do one before the other? I.e. downtime, rebuild time, etc.?
  6. Nice one, thank you - so I guess that would include setting all shares to ‘Cache: No’? And then invoking the mover so anything currently on ‘Cache: Yes/Prefer’ gets moved to the array? I’m not sure of the best practice for this with VMs and appdata folders; I guess they can be moved to the array temporarily while I swap the cache drive (see the rsync sketch after this list).
  7. Yes, the existing parity drive is 16TB. I wanted to go with 18TB drives for these 2 new ones, but then I remembered I wouldn’t get ‘full use’ of 4TB of those, since data drives can’t be larger than the smallest parity drive and the existing parity is only 16TB. Also, the rebuild time would probably be horrendous.
  8. I was hoping to gain some clarity from the community on the best way (if there is one) to go about the following upgrades to my Unraid server. I’m very low on space, so I have picked up 2x new 16TB Toshiba enterprise drives and plan to configure one as a new, 2nd parity drive. The other drive will be used to expand the array. So far I’ve run the ‘unassigned devices preclear’ on one of the drives, which completed with no errors in around 65 hours, and the second one is running now. I know this doesn’t need to be done for a new drive in the array, but I wanted to stress test the drives regardless. I also need to replace my existing cache, as it’s just not large enough any more to handle the kind of writes I’m doing on a daily basis; I went with a Samsung 980 Pro 2TB after doing a fair bit of research on the sustained write speeds of the various NVMe drives on the market. I couldn’t justify the bump in price to the Sabrent Rocket 4 Plus 4TB variant. Is there a specific order I should do these upgrades in? I.e. add the new parity first, then add the new storage drive to the array, then replace the cache? (I’m not sure what is involved in fully replacing a cache drive at this stage.) There is quite a lot that lives on the cache currently: appdata, downloads, a few VM images, etc. Any guidance or insight much appreciated 🙂
  9. Curious to know which version of Unraid you are all using? I just read a thread discussing the Conbee passthrough breaking specifically when upgrading from 6.9.2 to 6.10.x. Are you all running 6.10.x, or did any of you have trouble passing the Conbee through to the VM while running 6.9.2? (A typical USB passthrough stanza is sketched after this list.)
  10. +1 - trying to do the same with Surfshark on my Unraid server. Would appreciate an update if the OP ever managed to find a solution to this.
  11. +1; did this ever get implemented? I would love to be able to direct-attach my Mac to my Unraid server over TB3 (or even TB2 with an adapter). Anything is going to be better than my current 1GbE, and I don’t want to go the route of paying for all the hardware I’d need for 10GbE (switch, NIC for the server, adapter for the Mac). I would rather just buy a TB2/TB3 NIC for the Unraid server and direct-attach to that... Is this possible now that recent Linux kernels supposedly support Thunderbolt networking? (See the Thunderbolt sketch after this list.)
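
Regarding the syslinux change mentioned in post 3: the exact line from that thread isn’t quoted here, but the workaround most commonly suggested for Samsung NVMe drop-outs on Linux is disabling the drive’s deeper APST power-saving states with a kernel parameter. On Unraid that usually means editing the append line in /boot/syslinux/syslinux.cfg (or the Syslinux configuration section when clicking the flash device on the Main page). A minimal sketch, assuming the standard default boot entry - treat the parameter as something to verify against the original thread:

      label Unraid OS
        menu default
        kernel /bzimage
        append nvme_core.default_ps_max_latency_us=0 initrd=/bzroot

A reboot is needed for the change to take effect; checking /proc/cmdline afterwards confirms the parameter was picked up.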
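
On emptying the cache before the swap (post 6): the usual approach is to stop the Docker and VM services first so appdata and vdisk files aren’t in use, then let the mover (or a manual copy) push everything to the array. A minimal sketch of the manual route, assuming the default /mnt/cache and /mnt/disk1 mount points and the default appdata/domains share names - adjust to your own shares:

      # Docker and VM services stopped first, so nothing on the cache is in use
      rsync -avh --progress /mnt/cache/appdata/ /mnt/disk1/appdata/   # container configs
      rsync -avh --progress /mnt/cache/domains/ /mnt/disk1/domains/   # VM vdisks
      # verify the copies, then remove the originals from the cache before shutting down

After the new cache drive is installed and formatted, the same copy in reverse (or setting the shares back to cache-preferred and running the mover) puts everything back.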
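
For the Conbee passthrough question (post 9): when the stick is passed through as a plain USB device, the VM’s libvirt XML carries a hostdev entry keyed on the USB vendor/product IDs. A minimal sketch - the IDs shown are illustrative and should be taken from lsusb output (the USB device checkboxes in the Unraid VM template generate an equivalent entry):

      <hostdev mode='subsystem' type='usb'>
        <source>
          <vendor id='0x1cf1'/>
          <product id='0x0030'/>
        </source>
      </hostdev>

If the passthrough broke across an Unraid upgrade, comparing this stanza (and the IDs reported by lsusb) before and after is a reasonable first check.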
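
On the Thunderbolt direct-attach idea (post 11): recent Linux kernels do include a Thunderbolt networking driver (thunderbolt-net), but whether the stock Unraid kernel ships and enables it is something to confirm on your own build. A rough sketch, assuming the module is present and the interface appears as thunderbolt0 (check ip link for the real name):

      modprobe thunderbolt-net              # load the Thunderbolt networking driver
      ip link                               # look for the new interface once the cable is connected
      ip addr add 10.10.10.1/24 dev thunderbolt0
      ip link set thunderbolt0 up

On the Mac side, macOS exposes the link as a “Thunderbolt Bridge” network service, so giving it a static address in the same subnet (e.g. 10.10.10.2/24) completes the point-to-point link.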