xl3b4n0nx

Members

  • Posts: 29
  • Joined
  • Last visited

Converted

  • Gender: Male


xl3b4n0nx's Achievements

Noob (1/14)

Reputation: 1

  1. I'm not sure where the best place for this is, but I have an idea for a feature request. I think it would be quite beneficial to anyone on 6.12+ using ZFS for the cache pool where their appdata directory lives. The feature would take advantage of ZFS snapshots to back up the appdata directory and XML without shutting down the containers. The procedure would be something like this:
     • Check for Unraid 6.12+.
     • Check whether appdata is on a ZFS filesystem AND in its own dataset.
     • If so, take a snapshot and give it a unique name generated by the script.
     • Substitute the snapshot path in for the appdata location, then run the script as normal, but without shutting down the containers.
     At the end of the procedure, the MVP could delete the snapshot so they don't pile up, but a more advanced version would incorporate some basic management of the automatically created snapshots. If I can get some time I would love to take a crack at this, but that's in short supply for me now. I figured I would throw this out there in case someone else wants to work on it.
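The procedure described in the post above could be sketched roughly as follows. This is a hypothetical sketch, not a working plugin: the dataset name (`cache/appdata`), the backup destination, and the mount layout are all assumptions to be adjusted to the actual pool.

```shell
#!/bin/sh
# Hypothetical sketch of the snapshot-based appdata backup idea above.
# APPDATA_DATASET and the backup path are placeholders for illustration.
APPDATA_DATASET="cache/appdata"                       # assumed dedicated dataset
SNAP_NAME="appdata-backup-$(date +%Y%m%d-%H%M%S)"     # unique, script-generated name

# Check whether appdata really is its own ZFS dataset
if zfs list -H -o name "$APPDATA_DATASET" >/dev/null 2>&1; then
    # Take the snapshot; containers keep running
    zfs snapshot "${APPDATA_DATASET}@${SNAP_NAME}"

    # Back up from the read-only snapshot path instead of the live data
    SNAP_PATH="/mnt/${APPDATA_DATASET}/.zfs/snapshot/${SNAP_NAME}"
    rsync -a "$SNAP_PATH/" /mnt/user/backups/appdata/

    # MVP cleanup: delete the snapshot so they don't pile up
    zfs destroy "${APPDATA_DATASET}@${SNAP_NAME}"
else
    echo "appdata is not a dedicated ZFS dataset; falling back to normal backup"
fi
```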
  2. You said you were able to get MariaDB working with Mattermost. Could you post an example config of that? I created the DB the same way I did with Nextcloud, and I tried to input everything the same way it is stubbed out in the Docker template, but the container won't start.
  3. How do I get the Mattermost container running? I tried setting up a database in MariaDB using mysql and I can't get the container to start. Is there a guide to getting it running and connected to a reverse proxy?
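For the database-creation step mentioned in the two posts above, a common pattern is to create a dedicated database and user for Mattermost in MariaDB. The names and password below are placeholders, and the container name in the usage line is an assumption; this is a sketch, not the template's official setup.

```shell
# Hypothetical SQL for preparing a MariaDB database for Mattermost,
# mirroring the Nextcloud-style setup mentioned above.
# 'mattermost', 'mmuser', and 'changeme' are placeholders.
cat > /tmp/mattermost-db.sql <<'SQL'
CREATE DATABASE IF NOT EXISTS mattermost;
CREATE USER IF NOT EXISTS 'mmuser'@'%' IDENTIFIED BY 'changeme';
GRANT ALL PRIVILEGES ON mattermost.* TO 'mmuser'@'%';
FLUSH PRIVILEGES;
SQL
# Usage (container name assumed): docker exec -i MariaDB mysql -u root -p < /tmp/mattermost-db.sql
```

The Mattermost template's DB host, name, user, and password fields would then need to match these values exactly.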
  4. That is exactly what I am after, because once I get a 10GbE connection set up I want NVMe drives to be able to service that speed, with SATA SSDs as a next tier that still maintains decent speed (4-5 Gb/s) if the NVMe gets filled. And if I can, I'd like a separate cache pool for Dockers and system data running XFS for maximum stability; I won't mind running btrfs for my data cache so I can have redundancy.
  5. I think that is a GREAT feature and I will absolutely be using it. I can't stand it when my cache fills and my Dockers go nuts because the drive is out of space. However, that is not what I am referring to. That is a set of lateral cache pools; I am talking about vertical pools. Ex:
     500 GB NVMe
       ↓
     2 TB SATA SSD
       ↓
     UnRAID Array
     Is this a possibility with this new multi-pool feature?
  6. Would it be possible to create two cache pools and tier them with this setup? 500GB NVMe to a 2TB SATA to the UnRAID array.
  7. Turns out the created directory for the mount got into a weird state. I unmounted it and then removed the directory and let the mount script recreate it and it was fine.
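The fix described in the post above (unmount, remove the stale directory, let the mount script recreate it) can be sketched as a small helper. This is a hypothetical sketch, not part of any official script; the function name is invented and the path in the usage comment follows the posts below.

```shell
#!/bin/sh
# Hypothetical helper reproducing the recovery steps above for a stale
# rclone FUSE mountpoint. reset_mountpoint is an invented name.
reset_mountpoint() {
    mp="$1"
    # Lazy unmount; a plain 'fusermount -u' can fail on a stale mount
    fusermount -uz "$mp" 2>/dev/null || umount -l "$mp" 2>/dev/null
    # Remove the (now empty) directory so the mount script recreates it cleanly
    rmdir "$mp" 2>/dev/null
    mkdir -p "$mp"
}
# Usage (per the posts below): reset_mountpoint /mnt/disks/secure
```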
  8. It claims that /mnt/user/secure (the one causing the most problems) isn't mounted when I try that.
  9. I have Google Drive and an encrypted folder mounted as google and secure. I set this up following Spaceinvaderone's tutorial. The problem I am having is with the unmount script: 'fusermount -u /mnt/disks/secure' returns 'Invalid argument' and can't properly unmount the secure folder. How can I fix this error?
  10. Has anyone gotten this to work with an Asus RT-AC66U or similar router? I can't figure out a way to set up my router to point machines to this Docker.
  11. I will try this later. This looks like the most promising solution yet. I will reply with results. Edit: This worked! Thank you! The VM framework doesn't crash when I boot a VM with a GPU. Now the VNC view just barely works.
  12. So I have been attempting to set up a VM on my new build with GPU passthrough. On my previous build (FX-6300 on an AMD 970 board) this was a simple task; on my new TR 1920X and ASRock X399 Fatal1ty Professional Gaming it has proven to be quite a challenge. It seems that Unraid 6.8.1 binds all of my GPUs on boot and they cannot be passed through at all. If I attempt to unbind them ("echo <pci_device_id> > /sys/bus/pci/drivers/nvidia/unbind") it hard locks the Nvidia driver (Unraid Nvidia build). I get the same result when I start a VM with any GPU in it. The diagnostic files are attached. Does anyone know how to prevent Unraid from claiming all of my GPUs? I have tried the method in Spaceinvader One's (@SpaceinvaderOne) video on GPU passthrough, but it doesn't work because I get stuck at the unbind step. tower-diagnostics-20200216-2223.zip
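One common alternative to unbinding at runtime, as attempted in the post above, is to stub the cards with vfio-pci at boot so the host driver never binds them in the first place. The sketch below is an assumption about how that is usually done on Unraid, not a confirmed fix for this system; the sed pattern and the syslinux path should be verified locally.

```shell
#!/bin/sh
# Hypothetical sketch: collect the vendor:device IDs of NVIDIA cards so
# they can be stubbed with vfio-pci at boot instead of unbound at runtime.
GPU_IDS=$(lspci -nn 2>/dev/null | grep -i 'nvidia' \
    | sed -n 's/.*\[\([0-9a-f]\{4\}:[0-9a-f]\{4\}\)\].*/\1/p' \
    | paste -sd, -)

# Append the printed option to the 'append' line in
# /boot/syslinux/syslinux.cfg, then reboot so vfio-pci claims the cards
# before the nvidia driver can bind them.
echo "vfio-pci.ids=${GPU_IDS}"
```

Note that identical cards share a vendor:device ID, so this stubs all of them; with three different cards, as in the posts below, only the one to be passed through would be listed.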
  13. I tried your suggestion and got the same problem. It seems Unraid has bound all of my GPUs even though two are not in use at all.
  14. I will give that a shot. I never thought to try that.
  15. I am trying to set up an Ubuntu VM with an Nvidia GPU passed through. The GPU is not the primary GPU in the system, so passthrough shouldn't be an issue. I am running a Threadripper 1920X on the ASRock X399 Fatal1ty Professional Gaming. The GPUs in the system are a Quadro K4000 (primary), a GT 730, and a Quadro K2000; I am trying to pass through the Quadro K2000. When I start the VM, the VM and Docker managers hang, then the whole system hangs; a clean shutdown is not possible. The error in the system log is "libvirtd tainted" followed by a stack trace which clearly indicates the GPU is the problem. For reference, I have other VMs running without GPUs with no issue at all. Unraid Nvidia version 6.8.2. Diagnostics are attached.
     VM settings:
     CPU Mode: Host Passthrough
     Machine: Q35-4.2
     USB Controller: 2.0
     GPU 1: VNC
     GPU 2: Quadro K2000
     Sound Card: Quadro audio
     tower-diagnostics-20200209-1621.zip