Diego Spinola

Members

  • Posts: 19
  • Joined
  • Last visited


Reputation: 3

  1. Hi @Xcage, I'm someone from the future who stumbled onto your post. I'm attempting the same thing using Disk2VHD... I had basically given up... I'm going to try the settings from your last post and see if it works! Edit: no luck so far...
  2. Does anyone know if it's really coming? This would be AWESOME for my new workstation... my power-hungry ZFS pool (10x 6TB SAS) could use some rest.
  3. I was debugging the same issue and found the two /17 rules. I'm about to give up on trying to make sense of why the address space has been split... but not knowing makes me fearful of missing some hidden "gotcha" of using the /16 address for my VLAN... for now I'll keep the two /17 rules "just in case" (there's a quick coverage note after this list).
  4. Hi there @neverendingtech, I managed to get it running by replacing the uid/gid from the template and adding "--user 99:100" as an extra parameter in the Docker advanced options (a minimal docker run sketch is at the end of this list)... I guess better late than never.
  5. The same thing just happened to me (cache filled up during a massive operation). Now that we can have multiple pools, I'm thinking of adding a separate pool just for docker/VM data so this won't happen again.
  6. Thanks @jonathanm, it ended up working perfectly. Strange behavior though; I'll report it as soon as I'm finished with the upgrades.
  7. Something really weird just happened: I rebooted the server (again without starting the array) and it seems to have "remembered" the disk all by itself... is there any way to check that it won't destroy my data before starting the array?
  8. Hi there @jonathanm, thanks for the reply. Just now I found an offsite (B2) "flash backup" generated last night, which should contain all the config files needed to repair this (I don't know exactly where the cache pool info is stored, but I'm sure it's in there, right? see the backup-check sketch after this list). Do you think the safest way is still to do what you said, even though I have some "cache only" shares on my server? Am I overthinking it and there's no real risk? Using your plan I would:
     - Turn VMs and Docker off
     - Remove all devices from the cache pool
     - Start and stop the array
     - Create a new cache pool with the same drives, in the same order as before
     Plan B: try to recover the configs from yesterday's backup. Thanks
  9. Here are a couple of screenshots (the "Now" and "Before" states)... I still haven't dared to start the array out of fear.
  10. Hi there guys, I'm migrating one of my servers to a new motherboard:
     - Turned off array auto-start
     - Took a screenshot of the disk order
     - Moved all HDDs and NVMes to the new system
     - Booted and noticed that one of the cache NVMes was missing (never started the array)
     - Powered off and re-seated the missing NVMe, moving it from the PCIe slot that seemed to be the problem into another one
     - Booted again; every drive was recognized, but the cache pool didn't repopulate with the missing drive automatically
     - If I try to assign the missing drive to its original slot, it shows the warning "All existing data on this device will be OVERWRITTEN when array is Started"
     At no point have I started the array, so I think I'm still safe... My initial instinct was to rebuild the flash from the online backup... BUT... it seems to have been overwritten just as I booted (even without starting the array... doh...). It seems to me there's probably a very simple solution here, but I'm not seeing it... maybe because of the scary warning. Can anyone give me an insight? Thanks
  11. OK, something very weird happened: I pulled the image on one of the machines that was working fine, ran it, and created a small file in it (just a "touch" to change its hash). Then I committed the container to mynamespace/test:latest, pushed it to my registry, and tried to pull it again on the Unraid CLI (the full command sequence is sketched after this list):
     docker pull localhost:5000/mynamespace/test
     Using default tag: latest
     latest: Pulling from mynamespace/test
     7413c47ba209: Pull complete
     0fe7e7cbb2e8: Pull complete
     1d425c982345: Pull complete
     344da5c95cec: Pull complete
     0a8244e1fdbf: Pull complete
     9af8d374b4e3: Pull complete
     Digest: sha256:ba9e72e1b975455825376c5b0edebbbd4e6b74d21968dc4f5e08f9bc10292c44
     Status: Downloaded newer image for localhost:5000/mynamespace/test:latest
     I've no idea why this works now; I'll try to replicate the initial conditions.
  12. Hey there folks, I've been using a private registry (no HTTPS; it's already added to "/etc/docker/daemon.json" as an insecure-registries entry, see the config sketch after this list) to pull images to several machines on my network (most of them running Ubuntu) and it works great... however, whenever I try to pull an image on the Unraid server I get the following:
     root@arda:~# docker -v
     Docker version 18.09.6, build 481bc77
     root@arda:~# docker pull "localhost:5000/mynamespace/test:latest"
     latest: Pulling from mynamespace/test
     7413c47ba209: Downloading
     0fe7e7cbb2e8: Downloading
     1d425c982345: Downloading
     344da5c95cec: Downloading
     0a8244e1fdbf: Download complete
     unknown blob
     Does anyone have any idea/hint of what might be going on here? If I pull the same test image anywhere else (any of my other machines), it works fine... Thanks
  13. Did you manage to solve this, @Leeuujay? I'm seeing the same thing on my new install...
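
A quick coverage note on the two /17 rules in post 3: a /16 splits exactly into two /17 halves, so two /17 rules together cover the same range as a single /16 rule. The 192.168.x.x addresses below are placeholders, not the actual VLAN range from that thread:

    192.168.0.0/16   covers 192.168.0.0   - 192.168.255.255
    192.168.0.0/17   covers 192.168.0.0   - 192.168.127.255
    192.168.128.0/17 covers 192.168.128.0 - 192.168.255.255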
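
On the "--user 99:100" extra parameter from post 4, here's a minimal docker run sketch of what the template change amounts to. The container name, image, and appdata path are placeholders; 99:100 is the nobody:users uid/gid convention on Unraid:

    docker run -d \
      --name someapp \
      --user 99:100 \
      -v /mnt/user/appdata/someapp:/config \
      somerepo/someapp:latest

In the Unraid template UI the equivalent is putting "--user 99:100" in the Extra Parameters field (visible in the advanced view).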
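
On the B2 flash backup from post 8: assuming the backup is a zip of the USB flash drive, the disk and pool assignments should live somewhere under its config/ folder, so it's worth listing the archive before deciding how to restore. The filename here is a placeholder:

    unzip -l flash-backup.zip | grep -i 'config/'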
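
The test from post 11, spelled out as the individual commands (registry address and namespace are the ones from the post; the container name is a placeholder, and this assumes the image ships a touch binary):

    docker pull localhost:5000/mynamespace/test:latest
    docker run --name probe localhost:5000/mynamespace/test:latest touch /marker
    docker commit probe localhost:5000/mynamespace/test:latest
    docker push localhost:5000/mynamespace/test:latest
    # then, on the Unraid box:
    docker pull localhost:5000/mynamespace/test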
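
For reference, the insecure-registries entry mentioned in post 12 lives in /etc/docker/daemon.json (the Docker daemon needs a restart after editing it). The address below matches the localhost:5000 registry used in the pull commands; adjust it to the real host:port:

    {
      "insecure-registries": ["localhost:5000"]
    }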