harley-c
Members, 8 posts

  1. I ended up dropping back to single parity and using the spare disk to replace the failed one, so that I could get back into a good state quickly. I'll now swap that single parity disk out for a larger one and then add another larger one, so that I've got my larger parity set, a spare 8TB in case a data drive fails, and the ability to expand in the future. Thanks guys
  2. Thanks. I've just read through that, and there doesn't seem to be any mention of dual parity, so I'm not 100% clear that it will work as described. Does anyone have experience doing this with a setup like mine, with dual parity? If I do what it says, I'll end up with one large parity drive and one small one, so I'm not sure what that means or what I'll need to do next to get to a final state where everything can use the bigger drives.
  3. Hi all. I currently have an array of 12 x 8TB drives running dual parity, so 10 data drives + 2 parity. I understand parity vs backup, and I do actually have a full backup of all the data on another server. One of my data drives has failed, so it needs replacing. I will also need to expand soon, and I don't have additional bays, so I will have to start replacing drives with bigger ones. I understand that the parity drives are the limiting factor on the usable space of any other drive here, so they need to be expanded as a priority, but I would guess I ideally need to get the array back to a good state first. Will the following work? (See the capacity sketch after this list.)
     - Replace the dead drive with a new, bigger drive (say 18TB, for example). It will rebuild and the array will go green, but I will only be able to use 8TB of it due to the 8TB parity disks.
     - Replace one of the parity disks with an 18TB drive and let it rebuild. The other 8TB parity drive is still the limiting factor at this point.
     - Replace the final parity drive with an 18TB drive and let it rebuild. Now both parity drives and the replaced data drive are 18TB.
     At this point, will it start to use the full capacity on those 18TB drives automatically? Or will the data drive I replaced still be stuck at the 8TB it could use when it was first replaced? Will any further disk replacement after this just automatically use the full 18TB? I want to make sure I understand this properly and am not wasting my time or doing things in the wrong order. Thanks
  4. It just crashed again, and I managed to capture this in the VM log before the host rebooted. Searching around keeps pointing me to info on AMD GPU reset bugs, but I'm using an NVIDIA 3060 Ti... has anyone seen the same sort of issue with NVIDIA? (See the log-scan sketch after this list.)
  5. tower-diagnostics-20221116-2043.zip Another diagnostics file attached. As far as I've seen so far, it only seems to be my VM with the GPU passed through: the server seems stable when this VM is not running, and stable when other regular VMs are running. The syslog in the diag file seemed to be truncated, so I also included the full file from the flash drive. There didn't really seem to be any messages before the flood of messages after the reboot, but see if anything stands out. Thanks. syslog-from-flash.txt
  6. Actually... with the constant parity checks, I seem to have had a disk failure. I've replaced it and will let it recover before any further testing, so it will probably be tomorrow before I get any more logs.
  7. I'm afraid not. I'm turning on the local syslog server now, so I will get some more logs ASAP (see the syslog listener sketch after this list).
  8. Everything was running fine on 6.11.0 before the upgrade, but now my 'gaming' VM with NVIDIA 3060 Ti GPU passthrough seems to hang regularly. When this happens, the entire Unraid server reboots after a couple of seconds. I had the same result when upgrading to the previous version too, but I just rolled back then without logging a ticket. tower-diagnostics-20221110-2231.zip
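
The sizing question in post 3 can be made concrete with a small sketch. This is a minimal illustration, assuming the general rule that a data disk's usable capacity is capped by the smallest parity disk; the usable_per_drive helper and the step numbering are illustrative assumptions, not an Unraid API.

```python
# Illustrative sketch (not Unraid code): usable capacity of each data
# drive under the assumed rule that the smallest parity disk caps it.

def usable_per_drive(data_tb, parity_tb):
    """Per-drive usable TB, capped by the smallest parity disk (assumed rule)."""
    cap = min(parity_tb)
    return [min(d, cap) for d in data_tb]

array = [18] + [8] * 9          # replaced data drive is 18TB, rest 8TB

# Step 1: parity still 2 x 8TB -> the 18TB drive is capped at 8TB.
print(usable_per_drive(array, [8, 8]))    # [8, 8, 8, ...]

# Step 2: one parity disk upgraded -> the remaining 8TB one still caps.
print(usable_per_drive(array, [18, 8]))   # [8, 8, 8, ...]

# Step 3: both parity disks at 18TB -> the 18TB data drive's full size
# becomes usable.
print(usable_per_drive(array, [18, 18]))  # [18, 8, 8, ...]
```

Under this assumed rule, the cap is recomputed from the current parity set, which is why the order of the two parity upgrades does not matter, but both must be done before the 18TB data drive's extra space becomes usable.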
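For the crash in post 4, one quick way to look for reset-related messages on the host is to scan the kernel log. A hedged sketch follows; the keyword list is a guess at relevant terms, not an authoritative diagnostic, and dmesg may require root.

```python
# Hypothetical log-scan helper: print kernel-log lines that mention
# passthrough/reset-related keywords. The keyword list is an assumption.
import subprocess

KEYWORDS = ("vfio", "reset", "nvidia", "iommu")

def scan_dmesg(keywords=KEYWORDS):
    # Runs the real `dmesg` command; may require root on some hosts.
    out = subprocess.run(["dmesg"], capture_output=True, text=True, check=True)
    for line in out.stdout.splitlines():
        if any(k in line.lower() for k in keywords):
            print(line)

if __name__ == "__main__":
    scan_dmesg()
```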
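Post 7 mentions turning on the local syslog server; Unraid's built-in syslog server is configured in the GUI, but as an illustration of the idea, here is a minimal stand-alone sketch: a UDP listener on the conventional syslog port that appends every message to a file, so logs survive the sudden host reboot. Port 514 and the output filename are assumptions.

```python
# Minimal ad-hoc syslog capture sketch (illustrative, not Unraid's
# built-in syslog server): listen on UDP 514 and append raw messages.
import socket

HOST, PORT = "0.0.0.0", 514   # 514 is the conventional syslog port

def run_listener(logfile="captured-syslog.txt"):  # hypothetical filename
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind((HOST, PORT))   # binding below port 1024 usually needs root
    with open(logfile, "a", encoding="utf-8", errors="replace") as f:
        while True:
            data, addr = sock.recvfrom(8192)
            f.write(f"{addr[0]} {data.decode(errors='replace')}\n")
            f.flush()         # flush each line so a crash loses little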