escapement

Everything posted by escapement

  1. I'm a long-time Unraid user, but no guru for sure. I currently have a traditional Unraid server (HP Z820, dual Xeon, 512GB RAM, 4x14TB HDDs in the array, 4TB NVMe cache, 10Gb networking, dual graphics cards for passthrough). I use it mostly for media storage, Docker (starr stack, Nextcloud, etc.) and as a VM host for a mix of Linux and Windows VMs that I use for my daily driver (Linux), some retro gaming and not-too-demanding Steam games. Xeons, registered DDR3 RAM and PCIe 3 aren't great for gaming, but they serve for this purpose.

I'm currently setting up a new server which I intend to use for running Windows and Linux development VMs, database serving and video transcoding. It will have an AMD graphics card to pass through to VMs and an Nvidia Tesla P40 for number crunching. I'm trying to leverage technology that has arisen since I set up my last server, namely cheap NVMe drives and ZFS. I originally planned to use dual NVMe drives for the array and 4 HDDs in a raidz ZFS pool, but after doing research I found NVMe drives aren't great in pools due to trim/wear issues. The host is a Dell T7910 workstation: dual 20-core Xeons, 128GB RAM, expandable to 2TB (probably going to bring it up to 512GB, which is affordable). This machine supports PCIe bifurcation and has a lot of slots, so I can jam a bunch of NVMe drives into it. After thinking about it, this is the new layout I came up with:

* A single 512GB NVMe for the array, no parity. The array won't be used much, so I'm not too worried about trim. Since there is no parity, I think I can take it offline and trim it if I need to... well, at least I think I can, but I may be wrong about this... not much of an issue either way.
* A cache pool made up of 3x4TB NVMe drives set up in raidz, giving me 8TB of storage with redundancy.
* A 4x10TB HDD zpool set up in raidz for main storage.

The system, domains and appdata shares will be stored on the cache pool with secondary storage on the zpool. I doubt I will fill up the cache, so they should never be written to the zpool, but I can use plugins or ZFS replication to back them up to the zpool. I can create shares either on the zpool or on cache/zpool.

It seems like this has advantages over my traditional setup. I have redundancy in the cache pool, and the main storage is striped, so it should be faster than an Unraid array. It's also expandable: I can move the cache shares to the zpool to empty the cache pool, destroy the cache pool, create a new one with more NVMe drives if I need to in the future, then move the shares back. A disadvantage I can see is that zpools are always spun up, so it will use more power, and I don't have fine-grained control over which share is stored on which drive.

Is this a reasonable setup? Are there any pitfalls I'm missing? As I said, I'm not an Unraid expert, but a long-time user, and I would appreciate some feedback from those of you out there who are.
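The ZFS replication idea mentioned above can be sketched with snapshots and send/recv. This is only an illustration of the mechanism, assuming hypothetical pool names (nvmecache, tank) and a hypothetical appdata dataset, not anything Unraid creates for you:

```shell
# Snapshot the dataset on the NVMe cache pool, then replicate it to the HDD pool.
# 'nvmecache' and 'tank' are assumed pool names; 'appdata' an assumed dataset.
zfs snapshot nvmecache/appdata@backup1
zfs send nvmecache/appdata@backup1 | zfs receive tank/backups/appdata

# Later runs can send only the delta between two snapshots (incremental send):
zfs snapshot nvmecache/appdata@backup2
zfs send -i @backup1 nvmecache/appdata@backup2 | zfs receive tank/backups/appdata
```

Because incremental sends only transfer changed blocks, routine backups of a mostly static appdata dataset stay cheap even with the HDD pool spun up.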
  2. I would love to see InfiniBand support in Unraid... Obviously bridging to a VM isn't possible, because there is no virtio driver for it, but support for file sharing via IPoIB would rock. Also, SR-IOV can be used to pass the IB card through to VMs; my ConnectX-3 card supports 16 virtual connections plus the host connection. I just set up a server with Proxmox/ZFS because I wanted 56Gb InfiniBand. Took me 3 weeks to figure out how to get it all running, but it's finally running.

ConnectX-3 IB/EN cards can be had for less than $75 for dual-port cards; single-port cards are even cheaper. IB switches can be had for less than $200... I got my 12-port Mellanox unmanaged switch for less than $100. You can't touch a 40Gb QSFP Ethernet switch for that price. I've seen 36-port managed switches for less than $200... sure, they sound like a jet engine, but they can be modded with quiet fans. QSFP cables can be had for cheap as well; I picked up a 25-meter optical cable to run down to my data center in the garage for less than $100. I will look up your feature request and second it...
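For what it's worth, host-side IPoIB is mostly a kernel module plus ordinary network configuration, which is why it feels within reach for Unraid. A rough sketch on a generic Linux host (the interface name ib0 and the address are assumptions; the HCA driver, e.g. mlx4 for ConnectX-3, must already be loaded):

```shell
# Load the IPoIB module; the card then shows up as a normal netdev
# (typically ib0) that SMB/NFS can bind to like any other interface.
modprobe ib_ipoib

ip link set ib0 up
ip addr add 10.10.10.2/24 dev ib0   # example address, pick your own subnet

# Connected mode raises the MTU cap (up to 65520) for better throughput:
echo connected > /sys/class/net/ib0/mode
ip link set ib0 mtu 65520
```

A subnet manager (e.g. opensm, or a managed switch) still has to be running somewhere on the fabric for the links to come up.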
  3. I had an issue with mover... it wasn't moving anything. I searched the forums and someone said to change Use cache from Yes to No and then back... that worked. I was looking at the code for the mover script and have one question: it looks like if I change a share from use cache to don't use cache, mover doesn't do anything with it. Shouldn't it either delete the share from the cache, or try a move on it if there are files on the cache? It's a difficult strategy... my cache drive filled up because mover wasn't doing its job. I set my downloads share to no cache... I was hoping it would just migrate all the cached files off to the array. I even ran the mover manually and it did nothing. Am I missing something? TN
  4. You are my hero... found this on my first search... Worked a charm! Thanks John!
  5. Tried to install the docker image. The first time I used custom settings and got an error. Next, I tried to install with default settings and got the same error:

root@localhost:# /usr/local/emhttp/plugins/dynamix.docker.manager/scripts/docker run -d --name='binhex-krusader' --net='bridge' --privileged=true -e TZ="America/Chicago" -e HOST_OS="unRAID" -e 'TEMP_FOLDER'='/config/krusader/tmp' -e 'WEBPAGE_TITLE'='Tower' -e 'VNC_PASSWORD'='' -e 'UMASK'='000' -e 'PUID'='99' -e 'PGID'='100' -p '6080:6080/tcp' -v '/mnt/user':'/media':'rw' -v '/mnt/user/appdata/binhex-krusader':'/config':'rw' 'binhex/arch-krusader'

Unable to find image 'binhex/arch-krusader:latest' locally
/usr/bin/docker: Error response from daemon: Get https://registry-1.docker.io/v2/binhex/arch-krusader/manifests/latest: received unexpected HTTP status: 503 Service Unavailable.
See '/usr/bin/docker run --help'.
The command failed.

Am I doing something wrong, or is there a temporary issue with the server? I just upgraded to Unraid 6.5.3... Is this an issue? Thanks
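A 503 from registry-1.docker.io is a transient server-side failure at Docker Hub, not a local misconfiguration, so a simple retry loop usually rides it out. A sketch with a hypothetical retry helper (not part of Docker or Unraid):

```shell
#!/bin/sh
# retry <max_attempts> <command...> -- rerun a command until it succeeds
# or the attempt limit is reached, pausing briefly between tries.
retry() {
  max=$1; shift
  n=1
  until "$@"; do
    if [ "$n" -ge "$max" ]; then
      echo "giving up after $n attempts" >&2
      return 1
    fi
    n=$((n + 1))
    sleep 1   # brief pause; a real script might back off more aggressively
  done
}

# Usage on the server (image name taken from the post above):
# retry 5 docker pull binhex/arch-krusader:latest
```

If the pull still fails after several attempts spread over a few minutes, the problem is more likely local (DNS, proxy, or the Unraid upgrade) than the registry.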