Posts posted by Diego Spinola

  1. Hi there @jonathanm, thanks for the reply. Just now I found an offsite (B2) "flash backup" generated last night, which should contain all the config files needed for this to be repaired (I don't really know where this cache pool info is stored, but I'm sure it's in there, right?). Do you think the safest way is still to do what you said, even though I have some "cache only" shares on my server? Am I overthinking it and there's no real risk?

    Using your plan I would:

    1. Turn VMs and Dockers off
    2. Remove all devices from the cache pool
    3. Start and stop the array
    4. Create a new cache pool with the same drives, in the same order as before

     

    •  Plan B: Try to recover the configs from yesterday's backup
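
    Here's roughly how I'd peek inside that flash backup first to see which config files it actually carries (the archive path is just a placeholder for wherever I pull it down from B2, and I'm only assuming the drive/pool assignments live somewhere under config/, e.g. super.dat / disk.cfg, I'm not 100% sure of that):

    # List the config portion of the flash backup to confirm what's in it
    unzip -l /mnt/user/backups/flash-backup-latest.zip | grep 'config/'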

       Thanks 
  2. Hi there guys, I'm migrating one of my servers to a new motherboard:
     

    1. Turned off array auto-start
    2. Took a screenshot of the disk order
    3. Transported all HDDs and NVMe drives to the new system
    4. Booted
    5. Noticed that one of the cache NVMe drives was missing (never started the array)
    6. Powered off and re-seated the missing NVMe device, moving it from the PCIe slot that seems to be the issue into another one
    7. Booted again; every drive was recognized, but the cache pool didn't populate with the missing drive automatically
    8. If I try to assign the missing drive to its original place, it shows the following warning: "All existing data on this device will be OVERWRITTEN when array is Started"


    At no point have I started the array, so I think I'm still safe... My initial instinct was to rebuild the flash from the online backup... BUT it seems to have been overwritten just as I booted (even without starting the array... doh...)

    Seems to me that there is probably a very simple solution here, but I'm not seeing it... maybe because of the scary warning.
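
    In the meantime, here's roughly what I plan to run from the console to confirm the data on the missing device is still intact before I touch the assignment (device names are just examples, I still need to check which one it actually came up as):

    # List the NVMe devices and what lsblk thinks is on them
    lsblk -o NAME,SIZE,FSTYPE,LABEL,UUID | grep -i nvme

    # If the cache pool is btrfs, this should show the pool and both member devices
    btrfs filesystem show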


    Can anyone give me an insight?

    Thanks


     

  3. OK, something very weird happened:

    • I pulled the image on one of the machines that was working fine with it
    • Ran it and created a small file in it (just a "touch" to change its hash)
    • Then committed the container to mynamespace/test:latest and pushed it to my registry
    • Tried to pull it again on the unRAID CLI:
     docker pull localhost:5000/mynamespace/test
    Using default tag: latest
    latest: Pulling from mynamespace/test
    7413c47ba209: Pull complete 
    0fe7e7cbb2e8: Pull complete 
    1d425c982345: Pull complete 
    344da5c95cec: Pull complete 
    0a8244e1fdbf: Pull complete 
    9af8d374b4e3: Pull complete 
    Digest: sha256:ba9e72e1b975455825376c5b0edebbbd4e6b74d21968dc4f5e08f9bc10292c44
    Status: Downloaded newer image for localhost:5000/mynamespace/test:latest


    I've no idea why this works now; I'll try to replicate the initial conditions.
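
    For the record, the commit-and-push side on the working machine was roughly this (the registry host below is a placeholder for my actual one):

    # Run the image once just to drop a marker file in it ("touch" to change the top layer)
    docker run --name test-tmp myregistry.local:5000/mynamespace/test:latest touch /tmp/marker

    # Commit the stopped container back to the same tag and push it to the registry
    docker commit test-tmp myregistry.local:5000/mynamespace/test:latest
    docker push myregistry.local:5000/mynamespace/test:latest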

  4. Hey there folks,

     

    I've been using a private registry (no HTTPS; it's already added to "/etc/docker/daemon.json" as an insecure-registries entry) to pull images on several machines in my network (most of them running Ubuntu), and it works great...
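
    For reference, the entry on those machines is just the standard insecure-registries block (the host below is a placeholder for my registry's actual address):

    # /etc/docker/daemon.json on the Ubuntu clients
    cat /etc/docker/daemon.json
    {
      "insecure-registries": ["myregistry.local:5000"]
    }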

     

    However, whenever I try to pull an image on the unRAID server itself, I get the following:
     

    root@arda:~# docker -v
    Docker version 18.09.6, build 481bc77
    root@arda:~# docker pull "localhost:5000/mynamespace/test:latest" 
    latest: Pulling from mynamespace/test
    7413c47ba209: Downloading 
    0fe7e7cbb2e8: Downloading 
    1d425c982345: Downloading 
    344da5c95cec: Downloading 
    0a8244e1fdbf: Download complete 
    unknown blob


    Does anyone have any idea/hint of what might be going on here? If I pull the same test image elsewhere (any of my other machines), it works fine...

    Thanks

  5. WARNING: anyone stumbling on this post in the future and thinking about doing this, BE VERY CAREFUL. This should ONLY be considered on parity-less arrays; this operation WILL invalidate your parity if you use it on an array containing one or more parity drives... consider yourself warned.

    18 hours ago, Diego Spinola said:

    I can also imagine a workaround of stopping the whole array, mounting each drive manually, and trying to TRIM them to see whether the "is it an array drive" check could be fooled this way. That would also work for me, since my workstation could be stopped every weekend (this I can actually test rather easily, will try)...

     

     

    Well I tried it like this and it works! 😃

     

    #!/bin/bash
    
    # Grab the current CSRF token from emhttp and ask it to stop the array
    CSRF=$(grep -oP 'csrf_token="\K[^"]+' /var/local/emhttp/var.ini)
    curl -k --data "startState=STARTED&file=&csrf_token=${CSRF}&cmdStop=Stop" http://localhost/update.htm
    
    echo "Stopping array"
    sleep 5 # do we need to poll it? is the curl stop cmd async?
    
    echo "Mounting..."
    mount /dev/nvme2n1p1 /root/test/
    echo "Trimming..."
    fstrim -v /root/test/
    echo "Unmounting..."
    umount /root/test
    echo "pause for array start"
    sleep 5
    
    # The token may have changed, so grab it again and start the array back up
    CSRF=$(grep -oP 'csrf_token="\K[^"]+' /var/local/emhttp/var.ini)
    curl -k --data "startState=STOPPED&file=&csrf_token=${CSRF}&cmdStart=Start" http://localhost/update.htm
    
    echo "array should be starting"
    
    
    Stopping array
    Mounting...
    Trimming...
    /root/test/: 910.1 GiB (977167618048 bytes) trimmed
    Unmounting...
    pause for array start

    This is just a proof of concept, but yeah, worst-case scenario I could run a script similar to this once a week to stop the array and TRIM my array drives.
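
    Something like this in root's crontab would do it, with the script saved as /boot/scripts/trim_array.sh (path and schedule are just examples; on unRAID I'd probably wire it up through the User Scripts plugin instead so it survives a reboot):

    # Run the stop/trim/start script every Sunday at 03:00
    0 3 * * 0 /bin/bash /boot/scripts/trim_array.sh >> /var/log/trim_array.log 2>&1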


    Start/Stop array snippet from:

     

    So aside from not working with parity, and not working on a started array (I'll grant you the aqueduct...), can anyone see a problem with this method?

  6. On 7/13/2019 at 8:42 AM, trurl said:

    The main thing you lose by not having disks in the array is the ability to span folders, but it sounds like that isn't important since you want to work directly with the different disks anyway.

    I like the idea of the JBOD using UD... it's my failsafe option if I fail in my quest to TRIM parity-less array drives.

    Currently I'm working with the following drives:

    • 1x NVMe 480GB Intel Optane PCIe SSD (very fast)
    • 2x NVMe 256GB Samsung M.2 EVO 960 SSDs (not as fast as the Optane, but quite fast)
    • 1x NVMe 1TB Crucial M.2 2280 SSD (still faster than most SSDs)
    • 2x 480GB WD SATA SSDs (WDC_WDS480G2G0A)
    • 1x 480GB Kingston SATA SSD (slower and older)
    • 1x 1TB mechanical HDD I just don't use anymore

    I'm thinking that if I had the option of using SSDs in the array, I'd keep the Optane as a cache drive (it would store all VM and container roots) and just throw everything else (except the HDD) into the array, managing shares (exclusions) according to each disk's performance.


     

  7. On 7/13/2019 at 8:52 AM, johnnie.black said:

    You can but trim won't work, since it's been disabled for all array devices, even if there's no parity.

    Do you know why it won't work? Is it just a quirk/check in unRAID's implementation of the fstrim utility, or is it some lower-level check? If it's the former, could I just recompile a version without it? I'm guessing it's GPL, isn't it (unless they completely rewrote the thing)? I can also imagine a workaround of stopping the whole array, mounting each drive manually, and trying to TRIM them to see whether the "is it an array drive" check could be fooled this way. That would also work for me, since my workstation could be stopped every weekend (this I can actually test rather easily, will try)...
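
    The manual test I have in mind is basically just this, run while the array is stopped (the device name is only an example):

    # With the array stopped, mount one of the array SSDs directly...
    mkdir -p /root/test
    mount /dev/nvme2n1p1 /root/test

    # ...and see whether fstrim is allowed to run against it
    fstrim -v /root/test
    umount /root/test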

  8. Hey all,

    I've been an unRAID user for a while now (2 servers in my office; I really like unRAID's blend of NAS and "hypervisor"), and lately I've been toying with the idea of using it for my 64GB, 16-core Threadripper workstation (I tested the whole GPU passthrough last weekend and it seems to work great), and I could really use the VMs/containers for my particular development needs...

    I hit a little snag in planning and would love some input from you guys, here it goes:

    I currently have several SSDs in this machine (an Optane PCIe drive (currently used for boot), a couple of M.2s, and some SATA drives), plus a single mechanical drive... all unmatched... I was weighing my options:
     

    • A RAID1 btrfs cache pool with all drives seemed like a waste (since any VMs would be backed up daily to my other servers, and rcloned to my remote backup), and these drives have huge differences in performance
    • A RAID0 btrfs cache pool doesn't seem right either (since the drives have different performance...)
    • Maybe a combo of the Optane as cache drive and the others as UD... but this would limit me in the way I wanted to use unRAID shares
       

    But then it hit me... since I don't need parity, could I just create an SSD array with my unmatched drives? Since there's no parity to be messed up, could I then TRIM these disks somehow so that I don't lose performance over time?
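
    Just to rule out the hardware side, I'd first check that the drives actually advertise discard support, with something like this (non-zero DISC-GRAN/DISC-MAX means the device accepts TRIM):

    # Show discard/TRIM capabilities for every block device
    lsblk --discard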


    Am I overthinking this? Did I miss a more obvious solution? How would you do it?

    Thanks


    DS
