glennbrown

Posts posted by glennbrown

  1. 12 minutes ago, trurl said:

    You want more than just appdata. Do all the default shares you have in that wiki link

     I did call out system too ;) 

    He did not mention using VMs, but yes, if you do run VMs then that share as well. I see little benefit to having ISOs sitting on a pool device full-time; I personally just have the isos share set to Yes.

  2. Stop the array.

    Add a pool, add the NVMe drive to that pool.

    Start the array.

    In the appdata share settings, change the following:

    "Use cache pool (for new files/directories):" - Set this to Prefer
    "Select Cache Pool:" - Set this to the name of the pool you created.

    Do the above for the "system" share too.

     

    You will want to stop Docker and then run mover; that should move all the data to the pool.
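
    For reference, that last step can also be done from an SSH session; a minimal sketch, assuming the stock Unraid locations for the Docker service script and mover:

    # Stop the Docker service so appdata/system files are no longer held open
    /etc/rc.d/rc.docker stop

    # Run mover manually; with the shares set to Prefer it should pull
    # appdata and system from the array onto the new pool
    /usr/local/sbin/mover

    # Start Docker back up once mover finishes
    /etc/rc.d/rc.docker start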

  3. @ich777 I submitted a pull request against the smartctl exporter textfile collector. There was a spelling mistake in one of the nested if conditionals that checks node_exporter's settings.cfg, causing it to always return null. I also did some minor bash syntax cleanup in that nested if.
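
    For anyone curious, it was the classic misspelled-variable trap; a hypothetical illustration of the bug class (invented names, not the actual collector code):

    # Hypothetical illustration only, not the real collector code
    smart_enabled="yes"                 # read earlier from settings.cfg
    if [ -n "$smart_enabld" ]; then     # typo: the unset name expands to ""
        echo "textfile collector enabled"
    else
        echo "this branch always runs"  # which is why it always returned null
    fi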

     

    Also wondering, did you have any plans to update node-exporter to 1.5.0, and would you be open to a pull request to enable the CPU info collector by default?

     

     

  4. Has anyone encountered problems with this NIC? Every time I reboot my server it comes back up at 100Mb full duplex; if I unplug/replug the cable it negotiates to 1Gb full duplex.

     

    I found lots of posts describing similar issues on Windows and other Linux systems, but no concrete “this is how you fix it” answers.
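
    A software equivalent of the unplug/replug that I may script as a workaround; a sketch, assuming the interface shows up as eth0:

    # Check what the NIC negotiated after boot
    ethtool eth0 | grep -iE 'speed|duplex'

    # Restart auto-negotiation, the same effect as pulling the cable
    ethtool -r eth0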

  5. Nice build! I saw you mentioned an external GPU for transcoding; if you have not set it up yet, you should set up the iGPU in Plex instead. Intel Quick Sync is hard to beat in terms of performance-to-power versus a dedicated GPU for Plex usage.
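
    If it helps, the setup boils down to exposing the iGPU's render device to the container; a minimal sketch, assuming a linuxserver-style Plex container (in the Unraid template it is just a device mapping):

    # Pass the iGPU into the Plex container by mapping /dev/dri
    docker run -d --name plex \
      --device=/dev/dri:/dev/dri \
      -v /mnt/user/appdata/plex:/config \
      lscr.io/linuxserver/plex

    Hardware transcoding then gets switched on in Plex's transcoder settings (it does require Plex Pass).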

  6. System was originally built in 2020 to just run Ubuntu Linux, with my storage held by a Synology. After a series of hard drive failures I got a different case and switched my setup to SnapRAID + MergerFS on Ubuntu. After recommending Unraid to numerous people, including my father, I decided to walk the talk, so to speak, and switched to Unraid myself maybe about 10 months ago now. Today I did a motherboard and RAM upgrade, plus added some new fans, a CPU cooler, and an additional 1TB NVMe.

     

    Specs

    CPU: Intel i5-10400

    Motherboard: MSI Z590-A Pro (was originally a Gigabyte B460M-DS3H)

    Memory: 96GB Team Group Vulcan Z DDR4-3200 (a 2x32GB kit and a 2x16GB kit)

    SSD: 500GB WD SN750 Black M.2, 1TB WD SN750 Black M.2 & 2x Samsung 840 SSDs (going to be retiring the Samsungs)

    HDD: 2x WD 10TB, 2x WD 12TB (all were shucked Easystores)

    HBA: LSI 9207-8i

    PSU: Cooler Master 450W

    CASE: Silverstone CS380

    Miscellaneous: Arctic P12 fans and Arctic Freezer i35 CPU cooler

     

    Some potential future upgrades:

    Power supply: the current budget one has held up extremely well, but I would kind of like a Seasonic.

    CPU: I am hoping that with Raptor Lake out, the 11700K will see some good sales at Micro Center.


  7. Just wanted to report back, system is back up and everything is happy. Parity is rebuilding. Was nice and painless.

    Only real issue is completely unrelated to Unraid: the Plex DBs corrupted on me yet again. I am not sure why, but they seem temperamental about being rsync'd; this happened when I converted back to my Ubuntu setup too. Thank god for backups.
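
    My working theory is that copying Plex's SQLite databases while Plex is mid-write is what corrupts them, so next time I will quiesce it first; a sketch, with the paths being assumptions from a typical appdata layout:

    # Stop Plex so its SQLite databases are not being written during the copy
    docker stop plex

    # Copy appdata; source/destination paths are assumptions, adjust to taste
    rsync -avh /mnt/user/appdata/plex/ /mnt/backup/plex/

    docker start plex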

  8. 20 hours ago, trurl said:

    Typically you want those particular shares on cache or another pool and not in the parity array, so your dockers/VMs will perform better and won't keep array disks spun up, since those files are always open.

     

    I was going to put them on a cache pool but just wasn't sure if I should create empty folders on the array as well. 

     

    Tomorrow I am going to boot back up into Unraid and will see how it goes.

  9. So a little backstory: I was running the trial of Unraid, then decided to go back to the Ubuntu + SnapRAID/MergerFS setup since I wasn't sure I wanted to pay for Unraid. I think I have finally hit the point where, rather than keep dealing with the annoying little idiosyncrasies of the Ubuntu setup, I want to just pay and move on with my life.

     

    My question: when I converted the system back, I just left the data disks as XFS with the way Unraid had laid them out, though I did delete/re-create the two cache pools. If I boot from the USB stick, which is still configured, will I be able to just pick up where I left off on the array side and re-create the cache pools? (I know I lost the docker.img and libvirt.img files.)

     

    Below is the tree layout:

     

    ➜  tmp tree -L 1 /mnt/disk{1,2,3}
    /mnt/disk1
    ├── downloads
    ├── isos
    ├── Movies
    ├── Music
    ├── Photos
    ├── Software
    └── TV Shows
    /mnt/disk2
    ├── Movies
    ├── Photos
    ├── TV Shows
    └── Videos
    /mnt/disk3
    ├── downloads
    ├── isos
    ├── Movies
    ├── Time Capsule
    ├── TV Shows
    └── Videos
    

     

  10. So I figured it out: there is an option in Mover Tuning that delays moving "Yes" shares until a certain usage percentage is hit. But it doesn't seem to be obeying the 5% rule, since the cache pools were above 5% used.

     

    root@odin:/var/log# df -h /mnt/cache*
    Filesystem      Size  Used Avail Use% Mounted on
    /dev/nvme0n1p1  466G  117G  349G  26% /mnt/cache1_nvme
    /dev/sdb1       466G   45G  420G  10% /mnt/cache2_ssd

     

    I disabled the option and fired off Mover, and it is now moving files to the array for the shares that are set to Yes.


  11. So, pretty new to Unraid here. I understand that with a share set to "Prefer", data will generally stay on the cache pool. However, I thought that when set to Yes it would write to the cache and that when Mover runs the data would be moved to the array. It does not appear to be doing that right now. I did install the CA Mover Tuning plugin but did not modify anything in it.
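
    For anyone sanity-checking the same thing, the per-share setting can be read off the flash drive; a sketch, assuming the stock config location and a share named downloads:

    # Each share's settings live in a .cfg file on the flash drive
    cat /boot/config/shares/downloads.cfg
    # shareUseCache="yes"    -> write to cache, mover moves files to the array
    # shareUseCache="prefer" -> mover pulls files back onto the cache instead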

  12. So I created two different cache pools.

     

    cache1_nvme - Single 500GB NVMe drive

    cache2_ssd - Two 500GB SATA SSD's

     

    The SSD-based cache created properly and is the size I would expect. The NVMe pool, on the other hand, is only showing 537MB, not the full 500GB. This drive was previously the boot volume for my server when it was running Ubuntu (I just switched to Unraid), and I am wondering if that caused the hiccup.

     

    Question is: can I fix it without having to delete and re-create it?
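
    In case it helps anyone answer, a quick way to check for leftovers from the old Ubuntu install; a diagnostic sketch, assuming the NVMe shows up as /dev/nvme0n1:

    # List partitions; an old Ubuntu EFI/boot layout would show up here
    lsblk -o NAME,SIZE,FSTYPE /dev/nvme0n1

    # List any stale filesystem/RAID signatures (read-only without -a)
    wipefs /dev/nvme0n1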

     

     


  13. 25 minutes ago, JorgeB said:

    You need to let a parity sync finish; a parity check you can just cancel and then add the drives.

     

    OK, double-checked: it is in fact a parity-sync/data rebuild. I don't suppose I can stop it and then remove the parity drive for now so I can continue the data migration?

  14. So a little backstory: I am moving from a setup where I was using Ubuntu with SnapRAID/MergerFS. I cleared off one of my 12TB data drives and set up the Unraid array with my old SnapRAID 12TB parity drive and the other 12TB data drive. I then used Unassigned Devices and Krusader to start moving data over from my two 10TB drives. I finished up the first 10TB drive and am ready to bring it into the array.

     

    However, it is giving me an error about not being able to add/remove disks. I had seen a few threads saying that before you can add more drives you need to let a parity check finish; I had the check paused while I was migrating data.

     

    Can someone confirm that is the case?