57thStIncident

Everything posted by 57thStIncident

  1. Your situation has some parallels with my own. I am also contemplating how to make good use of still-decent drives that are a little too small, while making sensible incremental additions. My main desktop has a 1TB FireCuda as cache and a single-disk 6TB array, currently with no parity. I've been using it more as a dual-GPU VM host than for data storage, as I still have an aging 4-bay Synology DS415+ with assorted disks (2+4+4+6) for backups. Both machines are getting a little on the full side, though. I also have a few loose, currently unused 2TB disks but no bays to put them in at the moment. A nice thing about both unRAID and the Synology is that each can accommodate and make pretty good use of varying drive sizes, so I hate to waste the smaller disks.

     Unfortunately my desktop case is poor for HDDs -- when I bought it I don't think I'd yet decided to try unRAID on it, so it's more of a standard modern gaming case, with much better support for SSDs than HDDs (there are only 2x3.5" bays). So I'm wrestling with when I should bite the bullet and change or add a case... and of course the Synology won't last forever -- it's quite old now, so when it dies I'll likely replace it with something homebuilt.

     What I'm thinking of doing is getting two larger 3.5" disks (maybe 12TB?) and using those for my unRAID array, which would finally be parity protected and gain a decent amount of usable space (rough math in the sketch below). I think I might have an unused 2TB 2.5" disk that could also be added, though obviously it's a lower-performing disk; maybe I can distribute the data so that it or one of the other disks can spin down sometimes. I'd then move the 6TB disk from unRAID to replace the 2TB disk in the Synology, which would add 4TB to that unit as well. I have a 4-bay USB3 JBOD enclosure that could be used for the random smaller drives, but I'm not sure attached USB is particularly well integrated or advisable for the Synology or unRAID, so I'm not sure what I'll do with those until I decide on a case that can handle more drives.

     I'm also considering adding a second SSD. A 1TB FireCuda seems generous for a cache drive; mine is really being used more for VM disk images than for array caching. I'm not sure about your usage pattern, but I imagine the job could be done with less. I don't think you're alone in wondering what to do with 2x SSD: whether to combine them with BTRFS in a single cache pool, or to use one for cache and the other as an unassigned device, backing the necessary contents up to the array as needed. The latter approach is a little less flexible, as you don't get the full combined capacity in one pool (and there may be some performance benefit to combining them?), but it is probably a little safer should one of the SSDs fail. If you make a point of backing their contents up to the array often, I'm not sure it matters *that* much which of the two you choose.

     I believe your existing disks can be moved with their data intact without too much trouble, but you'll want to read up on the procedure to avoid any pitfalls. Please confirm elsewhere, but I don't think there's any configuration you need, and I'd think the order might not matter -- I'm assuming you're just moving data disks and that you'll regenerate parity on the other hardware. If you're trying to move multiple disks plus parity, I'd think that would require a bit of extra care. Again, I haven't done enough of that sort of activity to be an authority, so perhaps read the docs and look for other tutorials/experts.
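     For what it's worth, here's a rough back-of-the-envelope sketch of the capacity math in Python. It assumes single-parity unRAID (usable space is the sum of the data disks, with parity at least as large as the largest of them) and Synology SHR-1 with mixed sizes (usable space is roughly the total minus the largest disk); sizes are marketing TB, so real figures will differ a bit:

         # unRAID with single parity: usable = sum of the data disks;
         # the parity disk just has to match the largest data disk.
         def unraid_usable(data_disks_tb):
             return sum(data_disks_tb)

         # SHR-1 with mixed sizes: usable is roughly total minus the largest disk.
         def shr1_usable(disks_tb):
             return sum(disks_tb) - max(disks_tb)

         # Today: one 6TB unRAID data disk (no parity); Synology 2+4+4+6.
         print(unraid_usable([6]), shr1_usable([2, 4, 4, 6]))   # -> 6 10
         # Planned: 12TB data + 12TB parity; the 6TB replaces the Synology's 2TB.
         print(unraid_usable([12]), shr1_usable([4, 4, 6, 6]))  # -> 12 14

     By that count the Synology gains 4TB (10 -> 14) and the unRAID array gains 6TB of usable space (6 -> 12), on top of the parity protection.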
  2. Same problem here after a BIOS update (MSI X570 Unify). I was able to fix it by re-enabling SVM in the BIOS CPU settings.
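     In case it helps anyone debugging the same thing: one quick way to confirm from the Linux side that SVM is actually exposed again is to look for the "svm" flag in /proc/cpuinfo. A minimal sketch (the flag vanishes when SVM is disabled in the BIOS, and KVM-based VMs then refuse to start):

         # Scan the CPU flags line(s) in /proc/cpuinfo for the "svm" flag,
         # which indicates AMD-V is enabled and visible to the OS.
         with open("/proc/cpuinfo") as f:
             has_svm = any("svm" in line.split()
                           for line in f if line.startswith("flags"))
         print("SVM enabled:", has_svm)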
  3. I did the same thing shortly after my previous post, and the procedure wasn't too troublesome. I'm not sure it has really solved my problems, though. I suspect support for the RX580 may be more solid than for the Vega 56, but that's conjecture; I haven't stressed it much or tried very hard at this point to figure out what's going on when it doesn't work.
  4. It's my understanding that installing the vendor-reset module can fix this. I can't yet tell you how to do that. I've updated to 6.9.1, and now I'm looking into how to install vendor-reset.
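     In the meantime, here's a minimal way to check whether the module is present once it is installed -- a sketch assuming it registers under the name "vendor_reset":

         # Loaded kernel modules are listed in /proc/modules,
         # one per line, with the module name as the first field.
         with open("/proc/modules") as f:
             loaded = any(line.split()[0] == "vendor_reset" for line in f)
         print("vendor-reset loaded:", loaded)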