Arbadacarba

Everything posted by Arbadacarba

  1. I don't mean to hijack the thread, but I thought if I made the question a little more comprehensive we might get a response. I'm running:
     • One 7-drive array with a single parity disk (containing stored media, mixed disk sizes) [xfs]
     • One 2TB NVMe cache drive [btrfs]
     • One system pool of two 4TB SSDs, striped (holding Dockers, VMs, and the system folder) [btrfs]
     Is there any benefit to switching these to ZFS? The system pool is where I wanted the best performance, and I would have used the NVMe drive, but I got terrible performance from the SSDs when mirrored, and an 8TB cache seemed wasteful. I'm not worried about the HOW TO, as I'm sure I can figure that out... I just don't understand why everyone is so excited about ZFS.
  2. I figured mine out... I'm using FancyZones to split my 32" monitor into four zones, all the same width (half the screen; 4K, so 1920 px each) but varying heights. I generally have my homelab window in the top left, and it had dropped to 2 columns in 6.12. I played with it and set the border (space around zones) to -4 (I had it at -3 before) and voilà, 3 columns. pfSense has the option of defining the number of columns for the dashboard; that would be helpful here. Now if only the Docker and VM folders app worked.
  3. Same question, and while I understand they made the columns all equal in order to make them interchangeable, I'm hoping there is a way to edit it, or that Limetech would consider including more control.
  4. So YAML files edit best in Notepad++, and I have my appdata folder accessible in a Windows share. Create the additional paths (my appdata folder is on an SSD pool): [screenshot] Not happy with this yet; I should probably switch these to read-only. Did you create the icons folder? .../appdata/homepage/public/icons/
     Then run the container. While the container is running, open the WebUI; opening the page will generate the configuration files. You may need to set the permissions of the folders to be able to edit the files. Click on the Homepage icon, click on Console, and enter:
     chmod -R u-x,go-rwx,go+u,ugo+X /app/config
     chmod -R u-x,go-rwx,go+u,ugo+X /app/public/icons
     chown -R nobody:users /app/config
     chown -R nobody:users /app/public/icons
     The three sections I've modified are: widgets.yaml, services (de-keyed).yaml, and bookmarks (de-keyed).yaml. For the pfSense integration I used this package: https://github.com/jaredhendrickson13/pfsense-api which was... fun... I ended up copying it to my server and installing it locally. Final product (reworked a bit as I wrote this): [screenshot]
     Notes:
     • Most Fields lines can be left blank and they will show all 4... if you only want some, or you want different ones, you can pick and choose.
     • No trailing slash at the end of the URL.
     • I get my icons by dragging them from the dashboard to the folder.
     • My Pi-hole is not functional (at the moment).
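     If you'd rather not click through the container console, the same permission fix should work from the Unraid terminal with docker exec - a minimal sketch, assuming the container is actually named Homepage in your template:

         docker exec Homepage chmod -R u-x,go-rwx,go+u,ugo+X /app/config /app/public/icons
         docker exec Homepage chown -R nobody:users /app/config /app/public/icons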
  5. Did you make any progress with this? I have it up and running, with the final piece of the puzzle for me being getting it to run on its own IP without the need for a port number (running on port 80). I really like the program, but the name makes it impossible to find information about it... We all made fun of the goofy spellings Web 2.0 used, but it sure was easier to research "imgr" than if they had called it "imager"... homepage? Sheesh.
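     For anyone searching later, a rough sketch of the two routes to getting Homepage off port 3000 (the container name, IP, and image path below are examples rather than what any particular template uses, and the PORT variable is an assumption worth verifying against your image version):

         # Bridge networking: remap the host port; the container still listens on 3000.
         docker run -d --name Homepage -p 80:3000 ghcr.io/gethomepage/homepage

         # Dedicated IP on an Unraid custom network (br0): port mappings are ignored
         # there, so the app itself must listen on 80 - assuming the image honors PORT:
         docker run -d --name Homepage --network br0 --ip 10.40.70.12 -e PORT=80 ghcr.io/gethomepage/homepage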
  6. Anyone had any luck getting Homepage to use a different port than 3000? I'm trying to get it to use 443 or 80 on a different IP.
  7. Ya, that's my general technique as well: the bare-drive VMs boot the random hard drives I get presented with, to pull files from and find things on. I just think it would be useful to skip a step and get right into the drive itself. It seems obvious that Unassigned Devices should be able to mount VM disks.
  8. Is there any way to mount a VM drive, like a qcow2 volume? Either to mount it in Unassigned Devices and copy the files, or to access the whole drive and modify it using a partition manager. My use case is that I have cloned a broken drive to a qcow2 volume on my cache drive, and it was successfully cloned. But now I need to do some partition management tasks on the drive, and then I plan to clone it back to hardware... I can't (easily) clone it back first because it is a 2TB disk that refuses to fit on a 2TB SSD, and in fact it only contains under 200GB of data, so if I could access the drive I could shrink the partition and fit it on a smaller SSD. I do a lot of drive conversions and use a dual drive slot in my Unraid server tower to let me avoid working on the patient machines directly.
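     One approach: qemu-nbd can expose a qcow2 image as a regular block device from the Unraid terminal - a minimal sketch, assuming the nbd kernel module is available on your build (the image path here is a made-up example):

         modprobe nbd max_part=8                               # load the network block device module
         qemu-nbd -c /dev/nbd0 /mnt/cache/domains/clone.qcow2  # attach the image
         fdisk -l /dev/nbd0                                    # partitions appear as /dev/nbd0p1, /dev/nbd0p2, ...
         mkdir -p /mnt/rescue
         mount /dev/nbd0p1 /mnt/rescue                         # copy files off, or point a partition tool at /dev/nbd0
         umount /mnt/rescue                                    # detach cleanly when finished
         qemu-nbd -d /dev/nbd0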
  9. OpenSpeedTest installed on the Unraid server at 10.40.70.11:3000.
     Connection from laptop, Wi-Fi at my desk (my APs are NOT nearby): [screenshot]
     SSID: Paradise
     Protocol: Wi-Fi 6 (802.11ax)
     Security type: WPA2-Personal
     Network band: 2.4 GHz
     Network channel: 11
     Link speed (Receive/Transmit): 86/155 (Mbps)
     IPv4 address: 10.40.70.192
     IPv4 DNS servers: 10.40.70.1
     Manufacturer: Intel Corporation
     Description: Intel(R) Wi-Fi 6 AX200 160MHz
     Driver version: 22.200.0.6
     Physical address (MAC): 78-2B-46-C7-0D-21
     Cabled at my desk: [screenshot]
     Connection from Chrome docker: [screenshot] What the Hell?
     Connection from VM (Goat): [screenshot]
     Connection from Gaming VM: [screenshot]
  10. So I have an Unraid server serving several VMs, including my pfSense router... The pfSense box has a pair of NICs, one of which, on the LAN side, is connected to a 1G router... My Unraid server has a single 2.5G card connected to the same 1G router... My VMs are running e1000 1G NICs. What is the potential connection speed between my VM and the shares on the Unraid server? Would that change if I were using a different virtual NIC? How would I best test this? Thanks
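     iperf3 is the usual way to answer the "how would I test this" part without disks muddying the numbers - a sketch, assuming iperf3 is installed on both ends (it isn't stock on Unraid; the Nerd Tools plugin is one source). Worth knowing: VM-to-host traffic crosses the virtual bridge and never touches the physical 1G router, so a virtio NIC can run well past 1 Gb/s, while the emulated e1000 carries more overhead.

         # On the Unraid host:
         iperf3 -s

         # From inside the VM, against the server's address:
         iperf3 -c 10.40.70.11 -t 30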
  11. That was my general understanding as well... But I've seen several references (by people who are fairly experienced with Unraid) to being able to use the card in Dockers and VMs... not simultaneously, but alternately, without a reboot. On the upside, my experimentation required that I play games for 3 hours last night... So it's not a total loss.
  12. And you DID install the NVIDIA Driver App in Unraid?
  13. So I removed the VFIO binding for my video card and rebooted the server... Then I reset the video card in my VM (just in case) and it booted to the 4080 just fine... I did not expect that... I had always thought the binding was required... I guess next I will try loading the NVIDIA driver and trying it again...
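     Two quick checks from the Unraid terminal make the hand-off visible - a sketch, assuming an Nvidia card:

         lspci -nnk | grep -A 3 -i nvidia   # "Kernel driver in use" shows vfio-pci vs. nvidia
         nvidia-smi                         # once the Nvidia Driver plugin is installed, the card should list here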
  14. Can you run that command in the go file? /boot/config/go - just nano it and enter the line... maybe with a comment above it?
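     Something like this - a sketch of the stock go file with the extra line added ahead of the emhttp start (your-command-here is a placeholder):

         #!/bin/bash
         # /boot/config/go

         # Note why the command is here, for future reference:
         your-command-here

         # Start the Management Utility
         /usr/local/sbin/emhttp &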
  15. Try Netdata... I've seen more than one update in a single day.
  16. Is it that you (OP) don't want to update, or is it just that you don't like the banner and want it to just do it and quit bugging you?
  17. That stopped me too... I may have to give it a shot. The question is: does the VM booting cause the dockers to fail, or do they just lose the resource? Do they recover when the VM is shut down? As for how I got my GPU passed through in the first place, I'm afraid I just did as little as I could. What hardware are you running? Intel CPU, Nvidia GPU? Have you got the video card bound to VFIO at boot?
  18. I've read little references in a few other posts to optional ways of passing a GPU through to a VM... I've had an Nvidia GPU passed through to my gaming VM (passed-through NVMe as well) and been very happy with the result when I'm playing games on the VM. But that is not all that often... sadly... I'm wondering if there is a better way to pass a GPU through that would let me use the GPU for other things, in Dockers or even other VMs, when I'm not gaming?
  19. I went looking for rc3 in the change log as well... It was appreciated last cycle
  20. Have you checked the speed on that cache? I found I lost a lot of performance when I went to Raid1 for my Domains and such...
  21. I'm a little confused about the icons to the left of the shares... They tell you the status of your shares with regard to data redundancy, don't they?
     My media shares are on the main array with 6 drives and a parity drive... so they are good. (But I assume they should show a warning if I move a file onto the share and mover hasn't moved it off of the cache yet?) <-- Yes, that works.
     My cache is a single NVMe drive (wow, things get exciting when you have mover completely fill the cache drive), and if I have a share set to ONLY it shows up with a caution icon that the data is not protected.
     My pool (Systems - the best name I could come up with) is a RAID 0 btrfs array comprised of two 4TB SSD drives. My system folder is on the Systems pool, and it is showing as protected... It shouldn't be, should it? If anything it's worse than being on a single drive? I have a backup, and if I lost that pool I would be busy fixing it but not remotely upset about it. (My wife would be irritated, though.)
  22. Dual SSD cache (configured in RAID 0):
     rsync --progress --stats -v Test.mkv /mnt/cash/Backup
     25,775,133,173 100% 1.11GB/s 0:00:21 (xfr#1, to-chk=0/1)
     1,145,841,157.42 bytes/sec
     NVMe (again):
     rsync --progress --stats -v Test.mkv /mnt/cache/Backup
     25,775,133,173 100% 1.81GB/s 0:00:13 (xfr#1, to-chk=0/1)
     1,778,029,382.21 bytes/sec
     So we are back to a noticeable improvement in speed for the SSDs... So now... Do I set the NVMe drive up as cache and the larger (but also unprotected) pool as my working folder, and just make sure it is backed up to the array in case of drive failure? Or do the opposite? I think I would be better off using the much larger pool, where I will have far more content... my system folder, my domains (VM drives), and maybe a Steam library?
     Just to be funny - copying the file from one RAM drive to another:
     rsync --progress --stats -v Test.mkv /tmp
     25,775,133,173 100% 1.98GB/s 0:00:12 (xfr#1, to-chk=0/1)
     2,062,514,083.36 bytes/sec
     This is kind of an unexpected result... am I spoiling the other tests in some way with my methodology?
     Arbadacarba
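     Quite possibly: the RAM-to-RAM copy topping out near 2 GB/s suggests rsync's own overhead is the ceiling, which would explain the NVMe run landing suspiciously close to it, and reads of a freshly written test file can be served from the page cache rather than the disk. A sketch of a cleaner run - flush the cache between tests and take rsync out of the picture with direct I/O (same Test.mkv and paths as above):

         sync; echo 3 > /proc/sys/vm/drop_caches   # drop cached file data between runs
         dd if=Test.mkv of=/mnt/cache/Backup/Test.mkv bs=1M oflag=direct status=progress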
  23. Found this, and it works, but I expect it will break in 6.12... at least at first:
  24. Debating if anything is changing in 6.12... May try again
  25. Dual SSD cache (no longer used as cache...):
     rsync --progress --stats -v Test.mkv /mnt/systems/Backup
     25,775,133,173 100% 341.82MB/s 0:01:11 (xfr#1, to-chk=0/1)
     360,579,385.22 bytes/sec
     NVMe (just for a sanity check):
     rsync --progress --stats -v Test.mkv /mnt/cache/Backup
     25,775,133,173 100% 1.80GB/s 0:00:13 (xfr#1, to-chk=0/1)
     1,778,029,382.28 bytes/sec
     So I've halved the speed of my SSD array... Hmm... I didn't expect that.