glennbrown

Members
  • Posts: 10
  • Joined
  • Last visited

  1. Just wanted to report back: the system is back up and everything is happy. Parity is rebuilding. It was nice and painless. The only real issue is completely unrelated to Unraid: the Plex DBs corrupted on me yet again. I am not sure why, but they seem temperamental about being rsync'd; this happened when I converted back to my Ubuntu setup too. Thank god for backups.
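     A minimal sketch of a less corruption-prone way to copy the Plex databases, assuming Plex runs in a Docker container named plex and its appdata sits under /mnt/cache1_nvme/appdata/plex (both names are assumptions); SQLite files copied while Plex is still writing to them are a common source of this kind of corruption:
        # stop Plex first so the SQLite databases are quiescent during the copy (container name is an assumption)
        docker stop plex
        # -a preserves permissions/ownership/timestamps; --delete keeps the backup an exact mirror (paths are assumptions)
        rsync -a --delete /mnt/cache1_nvme/appdata/plex/ /mnt/user/backups/plex/
        # optional sanity check on the copied library DB; the exact path under appdata varies by container image
        # sqlite3 /path/to/copied/com.plexapp.plugins.library.db "PRAGMA integrity_check;"
        docker start plex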
  2. I was going to put them on a cache pool, but I just wasn't sure if I should create empty folders on the array as well. Tomorrow I'm going to boot back into Unraid and will see how it goes.
  3. I re-formatted the parity drive for use in Snapraid anyway, so I was accounting for that. Should I re-create empty shares on the array drives for appdata, domains, and system before I boot from the USB drive?
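     If it helps, a share in Unraid is just a top-level folder on an array disk or pool, so pre-creating them is a one-liner; a sketch assuming disk1 is the array disk you want them on (not strictly required, since Unraid will also create these shares itself once the Docker and VM services are configured):
        # create the top-level folders that become the appdata, domains, and system shares (disk1 is an assumption)
        mkdir -p /mnt/disk1/appdata /mnt/disk1/domains /mnt/disk1/system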
  4. So a little backstory: I was running the trial of Unraid, then decided to go back to my Ubuntu + Snapraid/MergerFS setup since I wasn't sure I wanted to pay for Unraid. I think I have finally hit the point where, after dealing with the annoying little idiosyncrasies of the Ubuntu setup, I want to just pay and move on with my life. When I converted the system back, I left the data disks as XFS, laid out the way Unraid had them. I did delete/recreate the two cache pools. My question: if I take the USB stick, which is still formatted, will I be able to just pick up where I left off on the array side and re-create the cache pools? (I know I lost the docker.img and libvirt.img files.) Below is the tree layout:
     ➜ tmp tree -L 1 /mnt/disk{1,2,3}
     /mnt/disk1
     ├── downloads
     ├── isos
     ├── Movies
     ├── Music
     ├── Photos
     ├── Software
     └── TV Shows
     /mnt/disk2
     ├── Movies
     ├── Photos
     ├── TV Shows
     └── Videos
     /mnt/disk3
     ├── downloads
     ├── isos
     ├── Movies
     ├── Time Capsule
     ├── TV Shows
     └── Videos
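     A quick way to confirm the data disks are still clean XFS before booting the Unraid stick and reassigning them; device names are assumptions, and the check must run against unmounted filesystems:
        # report the filesystem type on each data-disk partition (device names are assumptions)
        blkid /dev/sdc1 /dev/sdd1 /dev/sde1
        # read-only XFS check, makes no changes; run it with the filesystem unmounted
        xfs_repair -n /dev/sdc1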
  5. So I figured it out: there is an option in Mover Tuning that delays moving "Yes" shares until a certain used percentage is hit. But it doesn't seem to be obeying the 5% rule, since both cache pools were above 5% used:
     root@odin:/var/log# df -h /mnt/cache*
     Filesystem      Size  Used Avail Use% Mounted on
     /dev/nvme0n1p1  466G  117G  349G  26% /mnt/cache1_nvme
     /dev/sdb1       466G   45G  420G  10% /mnt/cache2_ssd
     I disabled the option and fired off Mover, and it is now moving files to the array for the shares that are set to Yes.
  6. So I'm pretty new to Unraid. I understand that with a share set to "Prefer", data will generally stay on the cache pool. However, I thought that when set to "Yes" it would write to the cache and, when Mover runs, the data would be moved to the array. It does not appear to be doing that right now. I did install the CA Mover Tuning plugin but did not modify anything in it.
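     One way to see what Mover still has to do for a cache "Yes" share is to compare the pool copies against the array copies; a sketch assuming a share named Movies (the share name is taken from the tree above, the rest is an assumption):
        # files still sitting on the pools, waiting for Mover
        du -sh /mnt/cache1_nvme/Movies /mnt/cache2_ssd/Movies 2>/dev/null
        # files already moved to the array disks
        du -sh /mnt/disk*/Movies 2>/dev/null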
  7. Ok, thank you. I ended up deleting it and recreating it. All good now.
  8. So I created two different cache pools:
     cache1_nvme - single 500GB NVMe drive
     cache2_ssd - two 500GB SATA SSDs
     The SSD-based cache was created properly and shows the space I would expect. The NVMe pool, on the other hand, is only 537MB, not the full 500GB. This drive was previously the boot volume for this server when it was running Ubuntu (I just switched to Unraid), so I'm wondering if that caused the hiccup. The question is: can I fix it without having to delete and re-create the pool?
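     If the 537MB is the old Ubuntu EFI/boot partition still sitting on the drive, one fix is to wipe the leftover signatures and re-add the disk to the pool. A sketch assuming the device is /dev/nvme0n1 and the pool has been removed or the array stopped; this erases everything on that drive:
        # list the leftover partitions from the old Ubuntu install
        fdisk -l /dev/nvme0n1
        # remove all partition-table and filesystem signatures (destroys all data on the drive)
        wipefs -a /dev/nvme0n1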
  9. Ok, I double-checked and it is in fact a parity-sync/data rebuild. I don't suppose I can stop it and then remove the parity drive for now so I can continue the data migration?
  10. So a little back story: I'm moving from a setup where I was using Ubuntu with Snapraid/MergerFS. I cleared off one of my 12TB data drives and set up the Unraid array with my old Snapraid 12TB parity drive and the other 12TB data drive. I then used Unassigned Devices and Krusader to start moving data over from my two 10TB drives. I finished the first 10TB drive and am ready to bring that drive into the array. However, it is giving me an error about not being able to add/remove disks. I had seen a few threads saying that before you can add more drives you need to let a parity check finish; I had it paused while I was migrating data. Can someone confirm that is the case?
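     For reference, the same migration can also be driven from the command line instead of Krusader; a sketch assuming the old 10TB drive is mounted by Unassigned Devices at /mnt/disks/old10tb and the target is disk2 (both paths are assumptions):
        # copy from the Unassigned Devices mount onto an array disk, preserving attributes and showing progress
        rsync -avX --progress /mnt/disks/old10tb/ /mnt/disk2/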