Arbadacarba


Posts posted by Arbadacarba

  1. I figured mine out... I'm using FancyZones to split my 32" monitor into four zones, all the same width (half the screen, so 1920 px each at 4K) but varying heights.

     

    I generally have my homelab window in the top left, and it had dropped to 2 columns in 6.12.

     

    I played with it and set the border (space around zones) to -4 (I had it at -3 before) and voilà, 3 columns.

     

    [screenshot]

     

    pfSense has the option of defining the number of columns for the dashboard. That would be helpful here.

     

    Now if only the Docker and VM folders app worked.

  2. So: YAML files edit best in Notepad++, and I have my appdata folder accessible via a Windows share.

     

    Create the additional Paths:

     

    [screenshot]

    My appdata folder is on an SSD pool.

     

    [screenshot]

    Not happy with this yet.

     

    [screenshot]

    I should probably switch these to read-only.

     

    Did you create the icons folder? .../appdata/homepage/public/icons/ and then:

     

    • Run the container.
    • While the container is running, open the WebUI; opening the page will generate the configuration files.
    • You may need to set the permissions of the folders before you can edit the files: click the Homepage icon, click Console, and enter:

    # normalize permissions (files end up 666, directories 777)
    chmod -R u-x,go-rwx,go+u,ugo+X /app/config
    chmod -R u-x,go-rwx,go+u,ugo+X /app/public/icons
    # hand ownership to Unraid's default user/group so the share stays editable
    chown -R nobody:users /app/config
    chown -R nobody:users /app/public/icons

     

    The three sections I've modified are:

     

    widgets.yaml

     

    services (dekeyed).yaml

     

    bookmarks(dekeyed).yaml

     

    For the pfSense integration I used this package:

     

    https://github.com/jaredhendrickson13/pfsense-api

     

    which was... fun... I ended up copying it to my server and installing it locally.

     

    Final Product (reworked a bit as I wrote this):

     

    [screenshot]

     

    Notes:

    • Most "Fields" lines can be left blank and the widget will show all 4... if you only want some, or different ones, you can pick and choose.
    • No trailing slash at the end of the URL.
    • I get my icons by dragging them from the dashboard to the folder.
    • My Pi-hole is not functional (at the moment).
  3. Did you make any progress with this? I have it up and running; the final piece of the puzzle for me was getting it to run on its own IP without the need for a port number (running on port 80).

     

    [screenshot]

     

    I really like the program, but the name makes it impossible to find information about it... We all made fun of the goofy spellings Web 2.0 used, but it sure was easier to research "imgur" than if they had called it "imager"... homepage? Sheesh.

  4. 10 hours ago, JonathanM said:

    Or, my favorite, set up a VM specifically for recovery and maintenance tasks, and add the damaged image as a secondary drive.

     

    Yeah, that's my general technique as well:

     

    [screenshot]

     

    The bare-drive VMs boot the random hard drives I get presented with, to pull files from and find things on.

     

    I just think it would be useful to skip a step and get right into the drive itself. It seems obvious that Unassigned Devices should be able to mount VM disks.

  5. Is there any way to mount a VM drive, like a qcow2 volume? Either mounting it in Unassigned Devices to copy files off, or accessing the whole drive to modify it with a partition manager.

     

    My use case is that I cloned a broken drive to a qcow2 volume on my cache drive, and the clone succeeded. But now I need to do some partition management on the drive, and then I plan to clone it back to hardware... I can't (easily) clone it back first, because it is a 2 TB disk that refuses to fit on a 2 TB SSD yet contains under 200 GB of data; if I could access the drive, I could shrink the partition and fit it on a smaller SSD.

     

    I do a lot of drive conversions and use a dual drive slot in my Unraid server tower to let me avoid working on the patient machines directly.

  6. OpenSpeedTest installed on the Unraid Server at 10.40.70.11:3000

     

     

    Connection from Laptop:

     

    Wi-Fi at my desk (my APs are NOT nearby):

    [screenshot]

     

    SSID: Paradise
    Protocol: Wi-Fi 6 (802.11ax)
    Security type: WPA2-Personal
    Network band: 2.4 GHz
    Network channel: 11
    Link speed (Receive/Transmit): 86/155 Mbps
    IPv4 address: 10.40.70.192
    IPv4 DNS servers: 10.40.70.1
    Manufacturer: Intel Corporation
    Description: Intel(R) Wi-Fi 6 AX200 160MHz
    Driver version: 22.200.0.6
    Physical address (MAC): 78-2B-46-C7-0D-21

     

    Cabled at my Desk:

    [screenshot]

     

    Connection from Chrome Docker:

    [screenshot]

    What the Hell?

     

    Connection from VM (Goat):

    [screenshot]

     

    Connection from Gaming VM:

    [screenshot]

  7. So I have an Unraid server serving several VMs, including my pfSense router...

     

    The pfSense box has a pair of NICs, one of which, on the LAN side, is connected to a 1G router...

     

    My Unraid Server has a single 2.5G card connected to the same 1G router...

     

    My VMs are running e1000 1G NICs.

     

    What is the potential connection speed between my VM and the shares on the Unraid server?

     

    Would that change if I were using a different virtual NIC?

     

    How would I best test this?

     

    Thanks

  8. That was my general understanding as well... But I've seen several references (by people who are fairly experienced with Unraid) to being able to use the card in Dockers and VMs... not simultaneously, but switching back and forth without a reboot.

     

    On the upside, my experimentation required that I play games for 3 hours last night... So it's not a total loss.

  9. So I removed the VFIO binding for my video card and rebooted the server... then reset the video card in my VM (just in case), and it booted to the 4080 just fine... I did not expect that... I had always thought the binding was required...

     

    I guess next I will try loading the NVIDIA driver and trying again...

     

  10. That stopped me too...

     

    I may have to give it a shot.

     

    Question is: does the VM booting cause the dockers to fail, or do they just lose the resource? Do they recover when the VM is shut down?

     

    As for how I got my GPU passed through in the first place, I'm afraid I just did as little as I could.

     

    What hardware are you running? Intel CPU, Nvidia GPU?

     

    Have you got the video card bound to VFIO at boot?

     

     

  11. I've read passing references in a few other posts to alternative ways of passing a GPU through to a VM... I've had an Nvidia GPU passed through to my gaming VM (passed-through NVMe as well) and been very happy with the result when I'm playing games on the VM.

     

    But that is not all that often... Sadly...

     

    I'm wondering if there is a better way to pass a GPU through that would let me use it for other things, in Dockers or even other VMs, when I'm not gaming?

  12. I'm a little confused about the icons to the left of the shares... They tell you the status of your shares with regard to data redundancy, don't they?

     

    My Media Shares are on the main array with 6 drives and a parity drive... So they are good. (But I assume they should show a warning if I move a file onto the share and mover hasn't moved it off of the cache yet?) <-- Yes, that works

     

    My cache is a single NVMe drive (wow, things get exciting when mover completely fills the cache drive), and if I have a share set to ONLY it shows up with a caution icon that the data is not protected.

     

    My pool (Systems, the best name I could come up with) is a RAID0 btrfs pool composed of two 4TB SSDs.

     

    My system folder is on the Systems pool, and it is showing as protected... It shouldn't be, should it? If anything it's worse than being on a single drive.

     

    I have a backup and if I lost that Pool I would be busy fixing it but not remotely upset about it. (My wife would be irritated though)

     

    [screenshot]

     

  13. Dual SSD cache (configured in RAID 0):

    rsync --progress --stats -v Test.mkv /mnt/cash/Backup

     25,775,133,173 100%    1.11GB/s    0:00:21 (xfr#1, to-chk=0/1)

    1,145,841,157.42 bytes/sec

     

    NVME (Again)

    rsync --progress --stats -v Test.mkv /mnt/cache/Backup

     25,775,133,173 100%    1.81GB/s    0:00:13 (xfr#1, to-chk=0/1)

    1,778,029,382.21 bytes/sec

     

    So we are back to a noticeable improvement in speed for the SSDs... 

     

    So now... do I set the NVMe drive up as cache and the larger (but also unprotected) pool as my working folder, and just make sure it is backed up to the array in case of drive failure? Or the opposite?

     

    I think I would be better off using the much larger pool where I will have far more content... my system folder, my domains (VM drives), and maybe a Steam library?

     

    Just to be funny, copying the file from one RAM drive to another:

    rsync --progress --stats -v Test.mkv /tmp
     25,775,133,173 100%    1.98GB/s    0:00:12 (xfr#1, to-chk=0/1)

    2,062,514,083.36 bytes/sec

     

    This is kind of an unexpected result... am I spoiling the other tests in some way with my methodology?

     

    Arbadacarba

  14. Dual SSD cache (no longer used as cache...):

    rsync --progress --stats -v Test.mkv /mnt/systems/Backup
     25,775,133,173 100%  341.82MB/s    0:01:11 (xfr#1, to-chk=0/1)

    360,579,385.22 bytes/sec

     

    NVME (Just for a sanity check)

    rsync --progress --stats -v Test.mkv /mnt/cache/Backup
     25,775,133,173 100%    1.80GB/s    0:00:13 (xfr#1, to-chk=0/1)

    1,778,029,382.28 bytes/sec

     

    So I've halved the speed of my SSD array... Hmm... I didn't expect that.

     

     

  15. Systems it is.

     

    I think I can live with that... my VMs are all on there, so technically there is more than one system on it...

     

    At least that is what I'm telling myself.

     

    So now I have an NVMe cache, plus SSD storage for VMs and system files.

     

    Thanks

  16. So I can't have a share with the same name as the pool?

    My goal was to have a pool of two 2TB SSDs for the system folder on my Unraid server... somewhere to keep the Docker folders, appdata, the domains folder, the libvirt file... I have them all on a pool called config (because I was nervous about naming the pool System when I created it) and am about to shuffle things around, so I thought I might try making the name more accurate.

     

    Arbadacarba