Diego Spinola

Members
  • Posts: 19
Everything posted by Diego Spinola

  1. Hi @Xcage, I'm someone from the future who stumbled onto your post. I'm attempting the same thing using Disk2VHD and had basically given up, but I'm going to try the settings from your last post and see if they work! edit: No luck so far...
  2. Does anyone know if it's really coming? This would be AWESOME for my new workstation... my power-hungry ZFS pool (10x 6TB SAS) could use some rest.
  3. I was debugging the same issue and found the two /17 rules. I'm about to give up on trying to make sense of why the address space has been split, but not knowing makes me fearful of missing some hidden "gotcha" in using the /16 address for my VLAN... for now I'll keep the two rules "just in case" (see the sketch below for how two /17s line up with the /16).
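     A minimal sketch of the equivalence, assuming plain routing rules; the addresses and the br0.10 interface name are placeholders, not the actual ones from this thread:
     # two adjacent /17s cover exactly the same range as the parent /16
     ip route add 10.10.0.0/17   dev br0.10   # 10.10.0.0   - 10.10.127.255
     ip route add 10.10.128.0/17 dev br0.10   # 10.10.128.0 - 10.10.255.255
     # equivalent coverage with a single rule
     ip route add 10.10.0.0/16   dev br0.10   # 10.10.0.0   - 10.10.255.255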
  4. Hi there @neverendingtech, I managed to get it running by replacing the uid/gid settings from the template with "--user 99:100" as an extra parameter in the Docker advanced options (roughly what's sketched below)... I guess better late than never.
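     For reference, a minimal sketch of what that ends up running; the image name and volume path are placeholders, only the --user flag comes from the post (99:100 is nobody:users, Unraid's default uid/gid):
     docker run -d --name=someapp \
       --user 99:100 \
       -v /mnt/user/appdata/someapp:/config \
       somerepo/someapp:latest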
  5. Same thing just happened to me (cache full due to a massive operation). Now that we can have multiple pools, I'm thinking of adding a separate pool just for docker/VM data so this won't happen again.
  6. Thanks @jonathanm, it ended up working perfectly. Strange behavior though; I'll report it as soon as I'm finished with the upgrades.
  7. Something really weird just happened: I rebooted the server (again without starting the array) and it seems to have "remembered" the disk all by itself... is there any way to check that it won't destroy my data before starting the array? (A possible read-only check is sketched below.)
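     A minimal sketch of a read-only sanity check, assuming the cache pool is btrfs (as multi-device Unraid pools are); the nvme device name is a placeholder:
     # list btrfs filesystems and their member devices without mounting anything
     btrfs filesystem show
     # confirm the re-seated device still carries the expected filesystem signature
     blkid /dev/nvme0n1p1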
  8. Hi there @jonathanm, thanks for the reply. I just found an offsite (B2) "flash backup" generated last night, which should contain all the config files needed to repair this (I don't actually know where the cache pool info lives, but I'm sure it's in there, right?). Do you think the safest way is to do what you said, even though I have some "cache only" shares on my server? Am I overthinking it and there's no real risk? Using your plan I would:
     Turn VMs and Dockers off
     Remove all devices from the cache pool
     Start and stop the array
     Create a new cache pool with the same drives, in the same order as before
     Plan B: try to recover the configs from yesterday's backup (a rough sketch follows below)
     Thanks
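     A minimal sketch of what Plan B might look like, assuming the B2 flash backup is a zip of the flash drive's contents and that the disk/pool assignments live in config/super.dat; the backup path is a placeholder:
     mkdir -p /tmp/flashbackup
     unzip /path/to/flash-backup.zip -d /tmp/flashbackup
     # compare the backed-up assignments against what is currently on the flash drive
     md5sum /tmp/flashbackup/config/super.dat /boot/config/super.dat   # matching hashes = assignments unchanged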
  9. Here's a couple of screenshots... still haven't dared to start the array out of fear:
     Now: (screenshot)
     Before: (screenshot)
  10. Hi there guys, I'm migrating one of my servers to a new motherboard:
      Turned off array auto-start
      Took a screenshot of the disk order
      Transported all HDDs and NVMes to the new system and booted
      Noticed that one of the cache NVMes was missing (never started the array)
      Powered off and re-seated the missing NVMe from the PCIe slot that seemed to be the issue into another one
      Booted again; every drive was recognized, but the cache pool didn't populate with the missing drive automatically
      If I try to assign the missing drive to its original place, it shows the following warning: "All existing data on this device will be OVERWRITTEN when array is Started"
      At no point have I started the array, so I think I'm still safe... My initial instinct was to rebuild the flash from the online backup... BUT it seems to have been overwritten just as I booted (even without starting the array... doh...). It seems to me that there's probably a very simple solution here and I'm just not seeing it, maybe because of the scary warning. Can anyone give me an insight? Thanks
  11. OK, something very weird happened:
      I pulled the image on one of the machines that was working fine
      Ran it and created a small file in it (just a "touch" to change its hash)
      Then committed the container to mynamespace/test:latest and pushed it to my registry
      Tried to pull it again on the unraid CLI (roughly the sequence sketched below):
      docker pull localhost:5000/mynamespace/test
      Using default tag: latest
      latest: Pulling from mynamespace/test
      7413c47ba209: Pull complete
      0fe7e7cbb2e8: Pull complete
      1d425c982345: Pull complete
      344da5c95cec: Pull complete
      0a8244e1fdbf: Pull complete
      9af8d374b4e3: Pull complete
      Digest: sha256:ba9e72e1b975455825376c5b0edebbbd4e6b74d21968dc4f5e08f9bc10292c44
      Status: Downloaded newer image for localhost:5000/mynamespace/test:latest
      I've no idea why this works now; I'll try to replicate the initial conditions.
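      A minimal sketch of that sequence for anyone trying to reproduce it; REGISTRY is a placeholder for whatever host:port reaches the private registry from each machine (localhost:5000 on the unraid box itself):
      REGISTRY=registry-host:5000
      docker run --name scratch ${REGISTRY}/mynamespace/test:latest touch /marker   # change the content
      docker commit scratch ${REGISTRY}/mynamespace/test:latest                     # new layer, new digest
      docker push ${REGISTRY}/mynamespace/test:latest
      # then, from the unraid CLI:
      docker pull localhost:5000/mynamespace/test:latest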
  12. Hey there folks, I've been using a private registry (no HTTPS; already added it to "/etc/docker/daemon.json" as an insecure-registries entry, sketched below) to pull images to several machines on my network (most of them running Ubuntu) and it works great... however, whenever I try to pull an image on the unraid server I get the following:
      root@arda:~# docker -v
      Docker version 18.09.6, build 481bc77
      root@arda:~# docker pull "localhost:5000/mynamespace/test:latest"
      latest: Pulling from mynamespace/test
      7413c47ba209: Downloading
      0fe7e7cbb2e8: Downloading
      1d425c982345: Downloading
      344da5c95cec: Downloading
      0a8244e1fdbf: Download complete
      unknown blob
      Does anyone have any idea/hint of what might be going on here? If I pull the same test image anywhere else (any of my other machines) it works fine... Thanks
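      For context, a minimal sketch of the insecure-registry setup mentioned above; the registry address is a placeholder for the real host:port:
      cat <<'EOF' > /etc/docker/daemon.json
      {
        "insecure-registries": ["registry-host:5000"]
      }
      EOF
      # restart the Docker daemon afterwards so the setting takes effect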
  13. Did you manage to solve this, @Leeuujay? I'm seeing the same thing on my new install...
  14. WARNING: anyone stumbling on this post in the future and thinking about doing this, BE VERY CAREFUL. This SHOULD ONLY be considered on parity-less arrays; this operation WILL invalidate your parity if you use it on an array containing one or more parity drives... consider yourself warned.
      Well, I tried it like this and it works! 😃
      #!/bin/bash
      # grab the current CSRF token and ask emhttp to stop the array
      CSRF=$(cat /var/local/emhttp/var.ini | grep -oP 'csrf_token="\K[^"]+')
      curl -k --data "startState=STARTED&file=&csrf_token=${CSRF}&cmdStop=Stop" http://localhost/update.htm
      echo "Stopping array"
      sleep 5 # do we need to poll it? is the curl stop cmd async?
      echo "Mounting..."
      # mount the array disk's partition directly while the array is stopped, trim it, then unmount
      mount /dev/nvme2n1p1 /root/test/
      echo "Trimming..."
      fstrim -v /root/test/
      echo "Unmounting..."
      umount /root/test
      echo "pause for array start"
      sleep 5
      # the token may have changed after the stop, so grab it again and start the array back up
      CSRF=$(cat /var/local/emhttp/var.ini | grep -oP 'csrf_token="\K[^"]+')
      curl -k --data "startState=STOPPED&file=&csrf_token=${CSRF}&cmdStart=Start" http://localhost/update.htm
      echo "array should be starting"
      Output:
      Stopping array
      Mounting...
      Trimming...
      /root/test/: 910.1 GiB (977167618048 bytes) trimmed
      Unmounting...
      pause for array start
      This is just a proof of concept, but yeah, worst case scenario I could run a script like this once a week to stop the array and TRIM my array drives. Start/stop array snippet from:
      So aside from not working with parity, and not working on a started array (I'll grant you the aqueduct...), can anyone see a problem with this method?
  15. I like the idea of the JBOD using UD... it's my failsafe option if I fail in my quest to TRIM parity-less array drives. Currently I'm working with the following drives:
      1x 480GB Intel Optane PCIe NVMe SSD (very fast)
      2x 256GB Samsung EVO 960 M.2 NVMe SSDs (not as fast as the Optane, but quite fast)
      1x 1TB Crucial M.2 2280 NVMe SSD (still faster than most SSDs)
      2x 480GB WD SATA SSDs (WDC_WDS480G2G0A)
      1x 480GB Kingston SATA SSD (slower and older)
      1x 1TB mechanical HDD I just don't use anymore
      I'm thinking that if I had the option of using SSDs in the array, I'd keep the Optane as the cache drive (storing all VM and container roots) and just throw everything else (except the HDD) into the array, managing shares (exclusions) according to each disk's performance.
  16. Do you know why it won't work? Is this just a quirk/check of unraid's implementation of the fstrim utility, or is it some lower-level check? If it's the former, could I just recompile a version without it? I'm guessing it's GPL, isn't it (unless they completely rewrote the thing)? I can also imagine a workaround: stopping the whole array, mounting each drive manually and trying to TRIM it, to see whether the "is it an array drive" check can be fooled that way. That would also work for me, since my workstation could be stopped every weekend (this I can actually test rather easily, will try)...
  17. Hey all, I've been an unRAID user for a while now (2 servers in my office; I really like unraid's blend of NAS and "hypervisor") and lately I've been toying with the idea of using it for my 64GB, 16-core Threadripper workstation (I tested the whole GPU passthrough last weekend and it seems to work great), and I could really use the VMs/containers for my particular development needs... I hit a little snag in planning and would love some input from you guys, here it goes:
      I currently have several SSDs in this machine (an Optane PCIe drive currently used for boot, a couple of M.2s, and some SATA drives) and a single mechanical drive... all unmatched... I was thinking through my options:
      A RAID1 btrfs cache pool with all drives seems like a waste (since any VMs would be backed up daily to my other servers, and rcloned to my remote backup), and these drives have huge differences in performance
      A RAID0 btrfs cache pool doesn't seem right either (since the drives have different performance...)
      Maybe a combo of the Optane as cache drive and the others as UD... but this would limit the way I wanted to use unraid shares
      But then it hit me... since I don't need parity, could I just create an SSD array with my unmatched drives? Since there's no parity to be messed up, could I then TRIM these disks somehow so that I don't lose performance over time? Am I overthinking this? Did I miss a more obvious solution? How would you do it?
      Thanks
      DS