Evolze

Members
  • Posts: 21

About Evolze

  • Birthday November 11

  • Personal Text
    Hi, I'm Evolze! I'm a Junior Linux Sysadmin and someone who enjoys running performance benchmarks and tinkering with various homelab projects (maybe a bit too much). 😆

    Unraid Journey Started on 11/13/2022

Evolze's Achievements

Noob (1/14)

1 Reputation

  1. Understood, sounds good. Thanks again for taking a look at this, especially during this time of the year! I deleted my original temp workaround a few days ago, but I did manage to save a copy of the Arch Linux bug report and the Tianocore GitHub discussion (Oct 2022) I had mentioned, if that's what you were referring to? I also had a chance to look at your edk2-unraid repo, including the compile shell script, and I see what you mean. Hopefully nothing too drastic was changed between the newer versions.

     On a completely separate note, out of curiosity about your compile.sh script: where are these variables stored or fetched from? Are they set in GitHub, on your personal system, or via the tianocore repo? I'm able to follow most of it, but I'm a bit lost on where the directory locations are being set and read from. 😀

     ${DATA_DIR}
     ${LAT_V}
     ${LAT_V##*-}
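For readers puzzled by the `${LAT_V##*-}` syntax asked about in the post above: it is plain shell parameter expansion, not anything GitHub- or repo-specific. A minimal sketch, assuming an illustrative value for LAT_V (the name comes from the post; the value here is made up, not taken from the actual compile.sh):

```shell
#!/bin/sh
# Hypothetical value; in the real script LAT_V would be fetched elsewhere.
LAT_V="edk2-stable202211"

# ${LAT_V##*-} strips the longest prefix matching "*-", i.e. everything
# up to and including the last "-", leaving only the final component.
echo "${LAT_V##*-}"   # -> stable202211

# The variable itself is untouched by the expansion:
echo "${LAT_V}"       # -> edk2-stable202211
```

So `${DATA_DIR}` and `${LAT_V}` are ordinary variables that must be assigned somewhere earlier in the script (or inherited from the calling environment, e.g. a CI job); the `##*-` form is just a transformation applied at expansion time.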
  2. @ich777 Removed my temporary workaround guide above. I had no idea it could cause issues with the libvirt.img, so I appreciate you mentioning this! Out of curiosity, what would happen to the libvirt image using the temp fix I mentioned? I ask because I don't believe I encountered any issues, but then again, I only tested this with one GPU passthrough test VM. 😀

     I haven't tried these commands yet, but I am wondering: does this override the default 6.12.x OVMF files in /usr/share/qemu? Or does it create a separate directory, so that all affected VMs would need to be updated to point to those edk2 OVMF files? I ask because in my workaround I mentioned copying the 6.11.x files to that directory, since this issue only seemed to impact GPU-based VMs (others using VNC didn't seem to be affected). That way, both OVMF file versions could co-exist at once (if that makes sense). 😀 Nonetheless, thanks again for this! Looking forward to trying it out in the coming days.

     @SimonF Thanks for this too! I've only installed plugins and extensions via the CA, so I was curious how manual plugin installs would persist across reboots.
  3. I know this post is old, but thank you so much @ishtangli! This was the exact setting I needed to get this working on my system. For background/reference: the iGPU is used for Unraid so I can watch the boot process and have an accessible console to log into, if needed. I recently bought an Nvidia GTX 1070 from a co-worker and wanted to add a dedicated GPU to my Unraid server. After struggling for a couple of days, enabling this iGPU Multi-Monitor setting in the BIOS (Asus motherboard) finally did the trick! 🎉
  4. Thanks so much! I'll take a look and make some adjustments. 😀
  5. Hey all, sorry to revive an older post, but I came across an issue with this configuration, at least on macOS. Is anyone else having this problem? From Windows and Linux, I'm able to connect to any Unraid SMB share without an issue. However, if I attempt to mount or even connect to them on macOS, I get the following errors. From Finder >> Go >> Connect to Server... (Command + K):

     Attempt #1: smb://Tower/data
     There was a problem connecting to the server "Tower". URLs with the type "smb:" are not supported.

     Attempt #2: smb://10.0.0.X/data
     There was a problem connecting to the server "10.0.0.X". URLs with the type "smb:" are not supported.

     I have tried both the IP address and the hostname via DNS as well - no go. I did some digging, and there does not seem to be much helpful info online. For reference, this is the SMB Extra configuration I am currently using:

     server min protocol = SMB3_11
     client ipc min protocol = SMB3_11
     client signing = mandatory
     server signing = mandatory
     client ipc signing = mandatory
     client NTLMv2 auth = yes
     smb encrypt = required
     restrict anonymous = 2
     null passwords = no
     raw NTLMv2 auth = no

     I did some troubleshooting, and if I remove this 'extra' configuration from SMB, everything on macOS works. I am currently running macOS Catalina 10.15.7, and since my MacBook Pro is a Mid 2012 model, this is the only "official" OS option I have.

     @wgstarks Could you possibly post your SMB extra configuration file for reference? I ask because your Unraid Forum bio mentions you use an all-Mac network, so it must be working for you in some way, shape, or form. 😀
  6. Thanks so much @JorgeB! Since my domain share is on there, I'm assuming I should disable the VM service prior to making the change?
  7. Hey all, I have a quick question about switching pre-existing cache drives and the best way to go about it. When I originally built and spec'd out my Unraid server a few months ago, I thought I would need more storage space for backups of my files, VMs, and other applications. However, my needs have changed, and the focus is now on having more VM storage rather than backup space. I have had my file and VM backups in place for a while now, and after re-evaluating my storage needs, a larger VM cache drive is the way I'd like to go.

     For this scenario, I currently have two cache drives (three drives in the system overall):

     1x 512GB Solidigm NVMe
     1x 1TB Samsung 980 Pro NVMe

     My cache pools are currently configured this way:

     Backup_cache = 1TB Samsung NVMe (share on drive: backups)
     VM_cache = 512GB Solidigm NVMe (share on drive: domains)

     I would like to change the assignments to:

     Backup_cache = 512GB Solidigm NVMe
     VM_cache = 1TB Samsung NVMe

     I already have all the data from both shares (backups & domains) backed up to the array, and if needed, both drives can be formatted/switched at any time. So, my question: what is the best way to switch the cache drive positions? I tried just stopping the array and changing the disk assignments, but I kept getting a pretty stern "Wrong", and I don't want to start the array in case it accidentally overwrites my Docker cache drive (screenshot below).

     TL;DR: I have 2x cache drives, and the data on both is already backed up. I want to move Cache Drive A from Pool 1 to Pool 2, and Cache Drive B from Pool 2 to Pool 1.
     Current config: Pool 1 = Cache Drive A, Pool 2 = Cache Drive B
     New config: Pool 1 = Cache Drive B, Pool 2 = Cache Drive A

     Thanks all for any help or insight you can provide, and sorry for the long post! 😄
  8. Ah okay, understood. The cache drive is set to Cache Only, and even then, I had no idea these filesystem-specific features existed within UrBackup. It's a good option to have if I decide to go down this path. Once again, thanks for your help -- Happy New Year! 😀
  9. Fantastic, thank you so much for your help! This seems to have gotten rid of the error, but now it appears to have created a few other ones. 😆 It now seems to be complaining that it is not on a btrfs filesystem and that a dataset is not configured. Any ideas here? For background, everything Docker-related on my Unraid host is set up on a cache drive formatted with btrfs, with the Docker image being a btrfs vDisk (if that matters):

     2022-12-31 03:53:28,764 DEBG 'urbackup' stdout output: MOUNT TEST OK
     2022-12-31 03:53:28,767 DEBG 'urbackup' stdout output: Testing for btrfs...
     2022-12-31 03:53:28,772 DEBG 'urbackup' stderr output: ERROR: not a btrfs filesystem: /media/testA54hj5luZtlorr494
     2022-12-31 03:53:28,772 DEBG 'urbackup' stdout output: TEST FAILED: Creating test btrfs subvolume failed
     2022-12-31 03:53:28,772 DEBG 'urbackup' stdout output: Testing for zfs...
     TEST FAILED: Dataset is not set via /etc/urbackup/dataset

     I did a bit of research, and it looks like this is called via the "urbackup_snapshot_helper test" command. I'm not sure why it thinks there isn't a btrfs filesystem there (unless the Docker container does not know about the host filesystem type)? Nonetheless, thank you again for your help these past couple of days! 😀
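The guess at the end of the post above can be checked directly. This is generic shell, not an official UrBackup diagnostic: GNU coreutils `stat -f` reports the filesystem type a path lives on as the current mount namespace sees it, so running it inside the container against the backup path would show whether Docker is presenting btrfs or something else (e.g. an overlay filesystem) to the snapshot helper. A minimal sketch:

```shell
#!/bin/sh
# Print the filesystem type a given path lives on. Inside a container
# this often differs from the host's on-disk filesystem (e.g. "overlayfs"
# instead of "btrfs"), which would explain the failed subvolume test.
fs_type() {
    stat -f -c %T "$1"
}

fs_type /                                   # container root fs type
fs_type /media 2>/dev/null || echo "/media not present here"
```

If `/media` reports anything other than `btrfs`, the helper's "not a btrfs filesystem" error is accurate from the container's point of view, regardless of how the host cache drive is formatted; bind-mounting a path that is natively btrfs (rather than a path inside the Docker image's own layered filesystem) is the usual way containers get real btrfs semantics.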
  10. Could you please tell me how you did this from within the container's filesystem itself? Did you just make a directory at /etc/urbackup/backupfolder, or did you create a symlink to it? I've tried just about everything and cannot seem to get rid of this error. It shouldn't nag me this much, but it does. 😅
  11. An edit/addition to my post above from yesterday. After clearing out everything (appdata), I re-downloaded the container. I believe I found out why it says:

     2022-12-28 04:00:53,987 DEBG 'urbackup' stdout output: Backupfolder not set

     From within the container, I was looking at the start.sh script in the root user's home directory (/root/start.sh) and found something interesting. It contains this, which I believe is not behaving correctly:

     # set default location for backup storage to /media
     echo "/media" > /var/urbackup/backupfolder

     I checked, and /var/urbackup/backupfolder is not a directory, @binhex. It simply creates a file called backupfolder with "/media" written inside it. I'm not sure if it is supposed to behave this way, but it does not create a directory. Might the container itself need an update to fix this? I'm still fairly new to Linux, but maybe a symlink instead?

     Anyway, I hope I'm not the only one getting this 'Backupfolder not set' message. I just want everything sorted out before I start using it full-time. 😀 Especially this message, as it appears to be the most concerning one when turning off the container:

     Dec 28 04:16:08 Tower kernel: urbackupsrv[4176]: segfault at 0 ip 0000000000000000 sp 00007ffdb1ce84b8 error 14
     Dec 28 04:16:08 Tower kernel: Code: Unable to access opcode bytes at RIP 0xffffffffffffffd6.
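To make the observed behavior concrete: the `>` redirection in that start.sh line can only ever produce a regular file, because it writes a string into a file; a file whose contents name the storage path is a plausible way for a server to record a setting, rather than a bug by itself. A tiny reproduction in a scratch directory (the paths here are throwaway, not the container's real ones):

```shell
#!/bin/sh
# Reproduce, in a temp dir, the same shape as:
#   echo "/media" > /var/urbackup/backupfolder
tmp=$(mktemp -d)
echo "/media" > "$tmp/backupfolder"

# The result is a regular file whose *contents* name the backup path;
# it is not a directory or a symlink.
[ -f "$tmp/backupfolder" ] && echo "regular file"
cat "$tmp/backupfolder"    # -> /media

rm -rf "$tmp"
```

If that is the intended design, the "Backupfolder not set" message would presumably point at the server failing to read the file, or at the path written inside it not existing as a usable directory, rather than at the file-vs-directory distinction itself; only the container maintainer can confirm which.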
  12. Hi everyone, I'm still fairly new to Unraid and definitely still trying to get the hang of Docker too. I recently got this container installed, and everything appears to be running fine. However, there are some frightening error messages, and they're making me hold off on using it fully. Looking at Unraid's syslog, the container starts up fine. However, the container logs show the following:

     2022-12-28 04:00:53,971 DEBG 'urbackup' stderr output: Raising maximum file descriptor to 65535 failed. This may cause problems with many clients. (errno=1)
     Raising nice-ceiling to 35 failed. (errno=1)
     2022-12-28 04:00:53,983 DEBG 'urbackup' stdout output: Backupfolder not set
     2022-12-28 04:00:53,987 DEBG 'urbackup' stdout output: Backupfolder not set

     In addition, here is what I see in the Unraid syslog when turning the container off via the Web GUI. It seems very concerning. Is this expected behavior for anyone else?

     Dec 28 04:16:08 Tower kernel: urbackupsrv[4176]: segfault at 0 ip 0000000000000000 sp 00007ffdb1ce84b8 error 14
     Dec 28 04:16:08 Tower kernel: Code: Unable to access opcode bytes at RIP 0xffffffffffffffd6.

     Here is a screenshot of my Docker configuration. Any assistance or ideas on fixing these?
  13. That makes sense, thanks so much for the information.
  14. @JorgeB Thanks so much! Even though I do not have a multi-device pool in this instance, out of curiosity, why would COW be considered required? Is it because the two drives assist one another in creating and managing the btrfs snapshots or data copies for redundancy?
  15. Hi everyone, I have a quick question regarding the btrfs Copy-on-Write option available in the share creation menu. It is a little lengthy, but I hope it makes sense. 🤞

     Background: I'm in the process of moving all of my files and homelab projects to Unraid. The pool, called backup_cache, is formatted as btrfs. I want to create a new share called backups, which will be configured with the "prefer" cache pool setting, as shown here. The goal of both the cache pool and the share is to grab data from my other Unraid shares (i.e., appdata, domains, files, projects, system, etc.) and create a backup of them on this pool & share. Once the data has been backed up (one copy on the array, one copy on the cache drive/share), I plan on using Duplicati or a similar utility to upload everything on the backups share to OneDrive. From there, the schedule would repeat once a week, with OneDrive keeping about 2-3 months of backups. I've attached a rough depiction of what I am attempting to accomplish.

     The files that will be copied/duplicated to this backups share include virtual machine vDisks backed up & compressed as .zst (via the VM Manager plugin), Docker app data, and operating system backups (i.e., Windows Backup & Restore, Linux OS via rsync). It will also include some other files, such as Office documents, photos, and videos, that I will be reading and writing to fairly often.

     Question: As mentioned earlier, the cache pool is formatted as btrfs, so the option for enabling copy-on-write (COW) is available. Given that I plan to use this share for backups only, and then upload to OneDrive, should I enable Copy-on-Write for it? I ask because I know the vDisks/domains and Docker/system shares are recommended to have COW turned off, but in my case, these files will be copied/backed up in a compressed format.

     I feel like I have a good understanding of btrfs, but I want to be extra sure, as switching an individual file between the COW and NOCOW property can't be done easily. I've been scouring the forums for the last few hours to see if anyone had a similar configuration or question and could not find anything. If this has been answered before, my apologies in advance! I'm super excited to begin using my newly built Unraid host for some benchmarking and homelab projects. Any insight or additional recommendations on the best way to configure this share would be greatly appreciated. 😀

     TL;DR: I am creating a backups share with both compressed and uncompressed files from other Unraid shares. Data will exist on the array, the backup_cache pool, and then OneDrive. Given that usage, should btrfs's copy-on-write feature be enabled for this share?
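The per-file switching concern in the post above comes from how btrfs implements NOCOW: the `chattr +C` attribute only takes reliable effect on a file that is still empty, so in practice it is applied to a directory, and files created inside it afterwards inherit the flag. A hedged sketch (it only performs the real steps when run on an actual btrfs filesystem; elsewhere it just reports that it skipped):

```shell
#!/bin/sh
# Demonstrate the NOCOW (+C) inheritance pattern. chattr +C on an
# existing, non-empty file is not supported reliably by btrfs, which is
# why flipping COW per file after the fact is awkward.
dir=$(mktemp -d)
if [ "$(stat -f -c %T "$dir")" = "btrfs" ]; then
    chattr +C "$dir"          # future files created in $dir are NOCOW
    touch "$dir/vdisk.img"    # new empty file inherits the C attribute
    lsattr "$dir/vdisk.img"   # the listing includes "C"
else
    echo "skipped: $dir is not on btrfs"
fi
rm -rf "$dir"
```

For a share that mostly receives already-compressed backup archives written once and then uploaded, leaving COW on costs little, since the fragmentation concern behind the vDisk/Docker NOCOW advice applies to files that are rewritten in place, not to write-once archives.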