Fiala06

Members

  • Posts: 118
  • Joined
  • Last visited

Fiala06's Achievements

Apprentice (3/14)

Reputation: 5

  1. Might want to hold off if it's already tight for your budget.
  2. Welp, that was the fix! Thank you, I'm just going to leave it uninstalled.
  3. Noticed my cache drive was filling up. I haven't had a problem with this before, but all my new media seems to be staying on the cache and not moving to the array. I've enabled logging for mover and all I see is: Any ideas? Screenshots unraid-diagnostics-20231027-1412.zip
  4. So I started off by re-flashing my flash drive, then copying the config folder back to it, as I couldn't log in. Once done I can now get back in, but only using http://unraid.local/login. Trying the direct IP never lets me log in. Almost like a cache issue, but it does the same thing on both my machines. Disks 7 and 12 decided to say emulated after the reboot. They seem to work fine, and I've moved them around making sure it wasn't a power/HBA/connection issue. Any harm in setting Disks 7 and 12 to no device, moving them to say 15 and 17, and starting it up? Disks 16 and 18 are supposed to be rebuilt once I start up the array. Thanks diag.zip
  5. Thanks again for all your help and quick replies! I managed to format both new drives and just finished restoring everything.
  6. Well they both now say Unmountable: Wrong or no file system
  7. Rebooted and now the array didn't auto start.
  8. Just ran it after backing everything up:

         root@UNRAID:~# blkdiscard -f /dev/nvme0n1
         blkdiscard: Operation forced, data will be lost!
         root@UNRAID:~# ls -lh^C
         root@UNRAID:~# btrfs fi show
         Label: none  uuid: 8259fa8c-4b09-4304-aed8-6fe58d49323c
                 Total devices 2 FS bytes used 583.02GiB
                 devid 1 size 931.51GiB used 593.03GiB path /dev/nvme0n1p1
                 devid 4 size 0 used 0 path /dev/sdq1 MISSING
         Label: none  uuid: cd6041b9-7b7e-4203-8bd5-38bc9167f1e4
                 Total devices 2 FS bytes used 4.11GiB
                 devid 1 size 465.76GiB used 6.03GiB path /dev/sdt1
                 devid 3 size 931.51GiB used 6.03GiB path /dev/sdh1
         Label: none  uuid: c854bdcc-ee55-4a4f-bd72-a29f5222a437
                 Total devices 2 FS bytes used 834.35GiB
                 devid 1 size 931.51GiB used 839.03GiB path /dev/sdm1
                 devid 3 size 1.82TiB used 839.03GiB path /dev/sdy1
         Label: none  uuid: 3e52a2e4-0036-41a0-bdd7-4e133ef6acb1
                 Total devices 1 FS bytes used 22.14GiB
                 devid 1 size 80.00GiB used 26.02GiB path /dev/loop2
  9. I've never used that before so blkdiscard /dev/nvme0n1? Since that's the drive I will no longer be using?
  10. Maybe from me trying to get them to work, starting and stopping the array and moving them around? I'm really not sure. Edit: Since it's working now with a single cache drive, could I just do a new config? Then add the 2nd cache? Would that reset the IDs?
  11. This was the original pool: I replaced the 870 EVO (sds) with the CT1000MX500SSD1 (sdq) about a week ago. Everything went well. Fast forward: two days ago the nvme0n1 started throwing all sorts of errors (tested and it is failing). Ordered a new drive, attempted to put it in yesterday, and here we are.
  12. root@UNRAID:~# btrfs fi show

         Label: none  uuid: 8259fa8c-4b09-4304-aed8-6fe58d49323c
                 Total devices 2 FS bytes used 581.62GiB
                 devid 1 size 931.51GiB used 589.03GiB path /dev/nvme0n1p1
                 devid 4 size 0 used 0 path /dev/sdq1 MISSING
         Label: none  uuid: cd6041b9-7b7e-4203-8bd5-38bc9167f1e4
                 Total devices 2 FS bytes used 4.11GiB
                 devid 1 size 465.76GiB used 7.03GiB path /dev/sdt1
                 devid 3 size 931.51GiB used 7.03GiB path /dev/sdh1
         Label: none  uuid: c854bdcc-ee55-4a4f-bd72-a29f5222a437
                 Total devices 2 FS bytes used 834.26GiB
                 devid 1 size 931.51GiB used 839.03GiB path /dev/sdm1
                 devid 3 size 1.82TiB used 839.03GiB path /dev/sdy1
         Label: none  uuid: 3e52a2e4-0036-41a0-bdd7-4e133ef6acb1
                 Total devices 1 FS bytes used 22.38GiB
                 devid 1 size 80.00GiB used 26.02GiB path /dev/loop2

      The nvme0n1p1 is the drive I was replacing. The other drive in the cache pool, sdq, I also replaced about a week ago. Here you can see the cache pool is working with the single disk (sdq).
  13. So if I remove the cache pool and create a new one with a single drive, it works fine. As soon as I add a new 2nd drive I get the same error. Even tried formatting the new ssd before adding it to the cache.
  14. One of my cache drives (2 total) is failing. So I went to replace it and now I'm getting "Unmountable: Invalid pool config". I've tried setting the failed disk to not installed and starting and stopping the array, but it doesn't seem to matter what I do, I can't get this to start again. Any ideas? unraid-diagnostics-20230320-1639.zip
  15. I'm running your sonarr and syncthing containers. Is there a way to prevent the .stfolder from showing in Sonarr/Radarr? I've accidentally removed it a few times now, and then of course you have to reset the sync in Syncthing.
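
For reference, the blkdiscard step discussed in posts 8 and 9 above can be sketched with a dry-run guard. This is an illustration only, not a recommendation: the wipe_ssd and DRY_RUN names are invented for the example, and blkdiscard irreversibly discards every block on the target device.

```shell
# Sketch: wipe an SSD with blkdiscard, behind a dry-run guard.
# With DRY_RUN unset or set to 1 (the default here), the command is
# only printed; set DRY_RUN=0 to actually run the destructive wipe.
wipe_ssd() {
  dev="$1"
  if [ "${DRY_RUN:-1}" = "1" ]; then
    echo "would run: blkdiscard -f $dev"
  else
    blkdiscard -f "$dev"   # discards ALL data on $dev
  fi
}

wipe_ssd /dev/nvme0n1
```

Running the block as-is only prints the command, since the guard defaults to dry-run.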
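
A quick way to spot which pool has the missing member in output like posts 8 and 12 above is to filter btrfs fi show for the MISSING marker. A small sketch (the find_missing_pools name is invented here); it reads the command's output from stdin:

```shell
# Print the "Label: ... uuid: ..." header line of any btrfs filesystem
# whose `btrfs fi show` listing contains a device marked MISSING.
find_missing_pools() {
  awk '/^Label:/ {label=$0} /MISSING/ {print label}'
}

# Example usage:
#   btrfs fi show | find_missing_pools
```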