glennbrown

Everything posted by glennbrown

  1. Native Docker Compose support, with it properly recognizing when a stack/container has been updated manually or via Watchtower.
  2. I did call out the system share too. He did not mention using VMs, but yes, if you do run VMs, that one applies as well. I see little benefit to having ISOs sitting on a pool device full-time; I personally just have mine set to Yes for isos.
  3. Stop the array. Add a pool and add the NVMe drive to that pool. Start the array. In the appdata share settings change the following: "Use cache pool (for new files/directories):" - set this to Prefer. "Select Cache Pool:" - set this to the name of the pool you created. Do the above for the "system" share too. You will want to stop Docker and then run mover; that should move all the data to the pool.
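After mover finishes, a quick sanity check is to confirm nothing from those shares is still left on the array disks. A minimal sketch, assuming Unraid's standard /mnt/diskN layout and the default appdata/system share names (adjust for your system):

```shell
#!/bin/sh
# count_files DIR -> number of regular files under DIR (0 if DIR is missing)
count_files() {
    find "$1" -type f 2>/dev/null | wc -l
}

# Check each array disk's copy of the appdata and system shares;
# every count should be 0 once mover has finished.
for share in appdata system; do
    total=0
    for d in /mnt/disk[0-9]*; do
        n=$(count_files "$d/$share")
        total=$((total + n))
    done
    echo "$share: $total file(s) still on the array"
done
```

If a count stays non-zero, something (usually a running container holding a file open) prevented mover from relocating it.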
  4. @ich777 I submitted a pull request against the smartctl exporter textfile collector. There was a spelling mistake in one of the nested if conditionals when checking node_exporter's settings.cfg, causing it to always return null. I also did some minor bash syntax cleanup in that nested if. Also, I was wondering: do you have any plans to update node-exporter to 1.5.0, and would you be open to a pull request that enables the CPU info collector by default?
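For readers wondering what that class of bug looks like: a misspelled key (or variable) in a settings lookup never matches, so the check always comes back empty. This is a hypothetical illustration only; the real file paths, key names, and fix in the plugin's PR may differ:

```shell
#!/bin/sh
# read_setting FILE KEY -> value of KEY="value" in a settings.cfg-style file
read_setting() {
    grep -m1 "^$2=" "$1" 2>/dev/null | cut -d'"' -f2
}

# Assumed path for illustration -- not necessarily the plugin's real location.
cfg=/boot/config/plugins/node_exporter/settings.cfg

# Buggy: the misspelled key never matches, so $smart is always empty and
# the nested if always falls through to the "disabled" branch:
smart=$(read_setting "$cfg" SMARTCTL_EXPORTR)   # typo: missing E

# Fixed:
smart=$(read_setting "$cfg" SMARTCTL_EXPORTER)
if [ "$smart" = "enabled" ]; then
    echo "textfile collector on"
fi
```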
  5. Has anyone encountered problems with this NIC? Every time I reboot my server it comes back up at 100Mb full duplex; if I unplug/replug the cable it negotiates to 1Gb full. I found lots of posts describing similar issues on Windows and other Linux systems, but no concrete "this is how you fix it" answers.
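One common workaround (not a root-cause fix) is to force the link to renegotiate after boot, for example from Unraid's `/boot/config/go` script. A sketch, assuming the interface is `eth0`:

```shell
#!/bin/sh
# Workaround sketch: kick the NIC into renegotiating after boot.
IFACE=eth0   # assumed interface name -- check yours with `ip link`

renegotiate() {
    # -r restarts autonegotiation; if the link still trains at 100Mb,
    # pinning speed/duplex is the blunter fallback.
    ethtool -r "$1" 2>/dev/null || \
        ethtool -s "$1" speed 1000 duplex full autoneg on 2>/dev/null
}

if command -v ethtool >/dev/null 2>&1; then
    renegotiate "$IFACE" || echo "renegotiation failed (needs root?)" >&2
else
    echo "ethtool not installed; skipping" >&2
fi
```

If forcing renegotiation works reliably, the underlying cause is often a cable/auto-MDIX quirk or the switch port coming up before the NIC driver finishes initializing.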
  6. Disregard: 6.11.0 fixed the issue with the module loading. Now the problem is that none of my fans show up in sensors.
  7. I am getting an error when attempting to load the nct6775 driver for sensors and the System Temp plugin. The error is related to this bug: https://github.com/lm-sensors/lm-sensors/issues/197 so I need to append the kernel boot parameter acpi_enforce_resources=lax. Should it be as simple as this: kernel /bzimage append initrd=/bzroot acpi_enforce_resources=lax
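For context, on Unraid that parameter lives in /boot/syslinux/syslinux.cfg (also editable from the webGUI under Main → Flash → Syslinux Configuration). A typical stock boot stanza with the flag appended looks roughly like this (exact labels and entries vary by version):

```
label Unraid OS
  menu default
  kernel /bzimage
  append initrd=/bzroot acpi_enforce_resources=lax
```

A reboot is required for the new kernel parameter to take effect.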
  8. Nice build. I saw you mentioned an external GPU for transcoding. If you have not set it up yet, you should set up the iGPU in Plex instead; Intel Quick Sync is hard to beat in performance-per-watt compared to a dedicated GPU for Plex usage.
  9. System was originally built in 2020 to just run Ubuntu Linux, with my storage held by a Synology. After a series of hard drive failures I got a different case and switched my setup to SnapRAID + MergerFS on Ubuntu. After recommending Unraid to numerous people, including my father, I decided to walk the talk, so to speak, and switched to Unraid myself about 10 months ago. Today I did a motherboard and RAM upgrade, plus added some new fans, a CPU cooler, and an additional 1TB NVMe.
     Specs:
     CPU: Intel i5-10400
     Motherboard: MSI Z590-A Pro (was originally a Gigabyte B460M-DS3H)
     Memory: 96GB Team Group Vulcan Z DDR4-3200 (2x32GB kit and a 2x16GB kit)
     SSD: 500GB WD SN750 Black M.2, 1TB WD SN750 Black M.2, and 2x Samsung 840 SSDs (going to be retiring the Samsungs)
     HDD: 2x WD 10TB, 2x WD 12TB (all were shucked Easystores)
     HBA: LSI 9207-8i
     PSU: Cooler Master 450W
     Case: Silverstone CS380
     Miscellaneous: Arctic P12 fans and an Arctic Freezer i35 CPU cooler
     Some potential future upgrades: a power supply (the current budget one has held up extremely well, but I would kind of like a Seasonic) and a CPU (I am hoping that with Raptor Lake out, the 11700K will see some good sales at Microcenter).
  10. Following the procedure to just reset the root password seems to have fixed the issue. Not really sure why that was necessary when SSH was working.
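For anyone landing here with the same webGUI lockout: the reset procedure referenced above amounts to removing the password files from the flash drive and rebooting, after which the webGUI prompts for a new root password. A dry-run sketch (file names per Unraid's documented reset procedure; confirm against the current docs for your version before deleting anything):

```shell
#!/bin/sh
# reset_files FLASHDIR -> list the password files that would be removed.
# Swap the echo for rm only after double-checking against current docs.
reset_files() {
    flash="$1"
    for f in passwd shadow smbpasswd; do
        if [ -e "$flash/$f" ]; then
            echo "would remove $flash/$f"
        else
            echo "$flash/$f not present, skipping"
        fi
    done
}

reset_files /boot/config   # Unraid keeps these on the flash device
```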
  11. 6.10.3. Nope, using root as the username. Tried two different laptops/browsers as well as my iPad.
  12. Just started tonight: the webGUI says invalid password until it puts me into the attempt-limit timeout. SSH works just fine. I tried restarting nginx and php-fpm but it is still not working. Any suggestions?
  13. Just wanted to report back: the system is back up and everything is happy. Parity is rebuilding. It was nice and painless. The only real issue is completely unrelated to Unraid: the Plex DBs corrupted on me yet again. I am not sure why, but they seem temperamental about being rsync'd; this happened when I converted back to my Ubuntu setup too. Thank god for backups.
  14. I was going to put them on a cache pool but just wasn't sure if I should create empty folders on the array as well. Tomorrow I am going to boot back up into Unraid and will see how it goes.
  15. I re-formatted the parity drive for use in SnapRAID anyway, so I was accounting for that. Should I re-create empty shares on the array drives for appdata, domains, and system before I boot from the USB drive?
  16. So, a little backstory: I was running the trial of Unraid and decided to go back to an Ubuntu + SnapRAID/MergerFS setup since I wasn't sure I wanted to pay for Unraid. I think I have finally hit the point where, dealing with the annoying little idiosyncrasies of the Ubuntu setup, I want to just pay and move on with my life. When I converted the system back, I left the data disks as XFS, laid out the way Unraid had them. I did delete/recreate the two cache pools. My question is: if I take the USB stick, which is still formatted, will I be able to just pick up where I left off on the array side and re-create the cache pools? (I know I lost the docker.img and libvirt.img files.) Below is the tree layout:
      ➜ tmp tree -L 1 /mnt/disk{1,2,3}
      /mnt/disk1
      ├── downloads
      ├── isos
      ├── Movies
      ├── Music
      ├── Photos
      ├── Software
      └── TV Shows
      /mnt/disk2
      ├── Movies
      ├── Photos
      ├── TV Shows
      └── Videos
      /mnt/disk3
      ├── downloads
      ├── isos
      ├── Movies
      ├── Time Capsule
      ├── TV Shows
      └── Videos
  17. So I figured it out: there is an option in Mover Tuning that delays moving "Yes" shares until a certain used-percentage is hit. But it doesn't seem to be obeying the 5% rule, since the cache pools were above 5% used:
      root@odin:/var/log# df -h /mnt/cache*
      Filesystem      Size  Used Avail Use% Mounted on
      /dev/nvme0n1p1  466G  117G  349G  26% /mnt/cache1_nvme
      /dev/sdb1       466G   45G  420G  10% /mnt/cache2_ssd
      I disabled the option and fired off mover, and it is now moving files to the array for the shares that are set to Yes.
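The gate the plugin applies can be sketched in a few lines of shell. This is an assumed model of the behavior (skip moving a pool until its use% exceeds a threshold), not the plugin's actual code; the pool mount points come from the df output above:

```shell
#!/bin/sh
# above_threshold PCT THRESHOLD -> succeeds when PCT > THRESHOLD
above_threshold() {
    [ "$1" -gt "$2" ]
}

for pool in /mnt/cache1_nvme /mnt/cache2_ssd; do
    # Strip the filesystem use% down to a bare number (e.g. "26%" -> 26).
    pct=$(df --output=pcent "$pool" 2>/dev/null | tail -1 | tr -dc '0-9')
    pct=${pct:-0}
    if above_threshold "$pct" 5; then
        echo "$pool at ${pct}% -- mover should run"
    else
        echo "$pool at ${pct}% -- below threshold, mover skips"
    fi
done
```

Given the df output in the post (26% and 10%, both above 5%), both pools should have passed the gate, which is what makes the observed skipping look like a plugin bug.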
  18. So, pretty new to Unraid. I understand that with a share set to "Prefer", data will generally stay on the cache pool. However, I thought that when set to Yes, writes would go to the cache and, when mover runs, the data would be moved to the array. It does not appear to be doing that right now. I did install the CA Mover Tuning plugin but did not modify anything in it.
  19. Ok, thank you. Ended up deleting it and recreating it. All good now.
  20. So I created two different cache pools: cache1_nvme (a single 500GB NVMe drive) and cache2_ssd (two 500GB SATA SSDs). The SSD-based cache was created properly and shows the space I would expect. The NVMe pool, on the other hand, is only 537MB, not the full 500GB. This drive was previously the boot volume for my server when it was running Ubuntu (I just switched to Unraid), and I am wondering if that caused this hiccup. The question is: can I fix it without having to delete and re-create it?
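A plausible explanation: a drive that booted Ubuntu typically still carries its old partition table (an EFI partition plus the root partition), and a small leftover partition getting picked up would produce exactly this tiny pool. The usual fix is to clear the old signatures before re-adding the drive. A dry-run sketch, and note this is destructive when armed; the device name is an example, so verify with lsblk first:

```shell
#!/bin/sh
# wipe_signatures DEV [--force] -> clear old partition/filesystem signatures.
# Without --force it only prints what it would do.
wipe_signatures() {
    dev="$1"
    mode="$2"
    if [ "$mode" = "--force" ]; then
        wipefs -a "$dev"   # destroys ALL partition/filesystem signatures
    else
        echo "dry run: would run 'wipefs -a $dev' (pass --force to execute)"
    fi
}

DEV=/dev/nvme0n1           # EXAMPLE device -- confirm with lsblk before use
wipe_signatures "$DEV"     # defaults to dry run
```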
  21. Ok, double checked: it is in fact a parity-sync/data rebuild. I don't suppose I can stop it and then remove the parity drive for now so I can continue the data migration?
  22. So, a little backstory: I am moving from a setup where I was using Ubuntu with SnapRAID/MergerFS. I cleared off one of my 12TB data drives and set up the Unraid array with my old SnapRAID 12TB parity drive and the other 12TB data drive. Then I used Unassigned Devices and Krusader to start moving data over from my two 10TB drives. I finished up the first 10TB drive and am ready to bring that drive into the array. However, it is giving me an error about not being able to add/remove disks. I had seen a few threads saying that before you can add more drives you need to let a parity check finish; I had it paused while I was migrating data. Can someone confirm that is the case?