Leaderboard

Popular Content

Showing content with the highest reputation on 07/08/20 in all areas

  1. 6.9.0-beta24 vs. -beta22

Summary:
- fixed several bugs
- added some out-of-tree drivers
- added the ability to use xfs-formatted loopbacks, or no loopback at all, for docker image layers. Refer to the Docker section below for more details.
(-beta23 was an internal release)

Important: Beta code is not fully tested and not feature-complete. We recommend running on test servers only!

Multiple Pools
This feature permits you to define up to 35 named pools, of up to 30 storage devices per pool. The current "cache pool" is now simply a pool named "cache". Pools are created and managed via the Main page.
Note: When you upgrade a server which has a cache pool defined, a backup of config/disk.cfg will be saved to config/disk.cfg.bak, and then the cache device assignment settings are moved out of disk.cfg and into a new file, config/pools/cache.cfg. If you later revert back to a pre-6.9 Unraid OS release you will lose your cache device assignments and will have to manually re-assign devices to cache. As long as you reassign the correct devices, data should remain intact.
When you create a user share, or edit an existing user share, you can specify which pool should be associated with that share. The assigned pool functions identically to current cache pool operation.
Something to be aware of: when a directory listing is obtained for a share, the unRAID array disk volumes and all pools which contain that share are merged in this order:
- pool assigned to the share
- disk1 : disk28
- all the other pools in strverscmp() order
As with the current "cache pool", a single-device pool may be formatted with either xfs, btrfs, or reiserfs. A multiple-device pool may only be formatted with btrfs. A future release will include support for multiple "unRAID array" pools. We are also considering zfs support.
Something else to be aware of: let's say you have a 2-device btrfs pool. This will be what btrfs calls "raid1", and what most people would understand to be "mirrored disks". This is mostly true, in that the same data exists on both disks, but not necessarily at the block level. Now let's say you create another pool, and you unassign one of the devices from the existing 2-device btrfs pool and assign it to this new pool. You now have two 1-device btrfs pools. Upon array Start a user might understandably assume there are now two pools with exactly the same data. However this is not the case. Instead, when Unraid OS sees that a btrfs device has been removed from an existing multi-device pool, upon array Start it will run 'wipefs' on that device so that upon mount it will not be included in the old pool. This of course effectively deletes all the data on the moved device.

Language Translation
A huge amount of work and effort has been put in by @bonienl to provide multiple-language support in the Unraid OS Management Utility, aka the webGUI. There are several language packs now available, and several more in the works. Thanks to @Squid, language packs are installed via the Community Applications plugin - look for a new category entitled Language. Note: Community Applications must be up to date to install languages. See also here. Each language pack exists in a public Unraid organization github repo. Interested users are encouraged to clone them and issue Pull Requests to correct translation errors. Language translations and PR merging are managed by @SpencerJ.

Linux Kernel
Upgraded to 5.7.
These out-of-tree drivers are currently included:
- QLogic QLGE 10Gb Ethernet Driver Support (from staging)
- RealTek r8125: version 9.003.05 (included for newer r8125)
- HighPoint rr272x_1x: version v1.10.6-19_12_05 (per user request)
Note that as we update the Linux kernel, if an out-of-tree driver no longer builds, it will be omitted. These drivers are currently omitted:
- Highpoint RocketRaid r750 (does not build)
- Highpoint RocketRaid rr3740a (does not build)
- Tehuti Networks tn40xx (does not build)
If you require one of these drivers, please create a Bug Report and we'll spend some time looking for alternatives. Better yet, pester the manufacturer of the controller and get them to update their drivers.

Base Packages
All updated to the latest versions. In addition, Linux PAM has been integrated. This will permit us to install 2-factor authentication packages in a future release.

Docker
Updated to version 19.03.11. We also made some changes to add flexibility in assigning storage for the Docker engine. First, 'rc.docker' will detect the filesystem type of /var/lib/docker. We now support either btrfs or xfs, and the docker storage driver is set appropriately.
Next, 'mount_image' is modified to support a loopback formatted with either btrfs or xfs, depending on the suffix of the loopback file name. For example, if the file name ends with ".img", as in "docker.img", then we use mkfs.btrfs. If the file name ends with "-xfs.img", as in "docker-xfs.img", then we use mkfs.xfs.
We also added the ability to bind-mount a directory instead of using a loopback. If the file name does not end with ".img", the code assumes it is the name of a directory (presumably on a share) which is bind-mounted onto /var/lib/docker. For example, for "/mnt/user/system/docker/docker" we first create, if necessary, the directory "/mnt/user/system/docker/docker". If this path is on a user share we then "dereference" the path to get the disk path, which is then bind-mounted onto /var/lib/docker. For example, if "/mnt/user/system/docker/docker" is on "disk1", then we would bind-mount "/mnt/disk1/system/docker/docker". Caution: the share should be cache-only or cache-no so that 'mover' will not attempt to move the directory, but the script does not check this. (A brief shell sketch of this selection logic appears at the end of these notes.)
In this release, however, you must edit the 'config/docker.cfg' file directly to specify a directory, for example:
DOCKER_IMAGE_FILE="/mnt/user/system/docker/docker"
Finally, it's now possible to select different icons for multiple containers of the same type. This change necessitates a re-download of the icons for all your installed docker applications. Expect a delay when initially loading either the Dashboard or the Docker tab while this happens, before the containers show up.

Virtualization
libvirt updated to version 6.4.0
qemu updated to version 5.0.0
In addition, we integrated changes to the System Devices page by user @Skitals, with modifications by user @ljm42. You can now select PCI devices to isolate from Linux upon boot simply by checking some boxes. This makes it easier to reserve those devices for assignment to VMs.
Note: If you had the VFIO-PCI Config plugin installed, you should remove it, as that functionality is now built in to Unraid OS 6.9. Refer also to @ljm42's excellent guide.
In a future release we will include the NVIDIA and AMD GPU drivers natively in Unraid OS. The primary use case is to facilitate accelerated transcoding in docker containers. For this we require Linux to detect and auto-install the appropriate driver.
However, in order to reliably pass through an NVIDIA or AMD GPU to a VM, it's necessary to prevent Linux from auto-installing a GPU driver for those devices upon boot. This can now easily be done through the System Devices page. Users passing GPUs to VMs are encouraged to set this up now.

"unexpected GSO errors"
If your system log is being flooded with errors such as:
Jun 20 09:09:21 Tower kernel: tun: unexpected GSO type: 0x0, gso_size 31, hdr_len 66
you need to edit each VM and change the model type for the Ethernet bridge from "virtio" to "virtio-net". In most cases this can be accomplished simply by clicking Update in "Form View" on the VM Edit page. For other network configs it may be necessary to directly edit the xml. Example:
<interface type='bridge'>
  <mac address='xx:xx:xx:xx:xx:xx'/>
  <source bridge='br0'/>
  <model type='virtio-net'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
</interface>

Other
AFP support has been removed.
Numerous other Unraid OS and webGUI bug fixes and improvements.

Version 6.9.0-beta24 2020-07-08

Bug fixes:
- fix emhttpd crash expanding number of slots for an existing pool
- fix share protected/not protected status
- fix btrfs free space reporting
- fix pool spinning state incorrect

Base distro:
- curl: version 7.71.0
- fuse3: version 3.9.2
- file: version 5.39
- gnutls: version 3.6.14
- harfbuzz: version 2.6.8
- haveged: version 1.9.12
- kernel-firmware: version 20200619_3890db3
- libarchive: version 3.4.3
- libjpeg-turbo: version 2.0.5
- lcms2: version 2.11
- libzip: version 1.7.1
- nginx: version 1.19.0 (CVE-2019-9511, CVE-2019-9513, CVE-2019-9516)
- ntp: version 4.2.8p15
- openssh: version 8.3p1
- pam: version 1.4.0
- rsync: version 3.2.1
- samba: version 4.12.5 (CVE-2020-10730, CVE-2020-10745, CVE-2020-10760, CVE-2020-14303)
- shadow: version 4.8.1
- sqlite: version 3.32.3
- sudo: version 1.9.1
- sysvinit-scripts: version 2.1
- ttyd: version 20200624
- util-linux: version 2.35.2
- xinit: version 1.4.1
- zstd: version 1.4.5

Linux kernel:
- version 5.7.7
- out-of-tree driver: QLogic QLGE 10Gb Ethernet Driver Support (from staging)
- out-of-tree driver: RealTek r8125: version 9.003.05
- out-of-tree driver: HighPoint rr272x_1x: version v1.10.6-19_12_05

Management:
- cleanup passwd, shadow
- docker: support both btrfs and xfs backing filesystems
- loopbacks: permit xfs or btrfs based on filename
- mount_image: support bind-mount
- mount all btrfs volumes using 'space_cache=v2' option
- mount loopbacks with 'noatime' option; enable 'direct-io'
- non-rotational device partitions aligned on 1MiB boundary by default
- ssh: require passwords, disable non-root tunneling
- web terminal: inhibit warning pop-up when closing window
- webgui: Add log viewer for vfio-pci
- webgui: Allow different image types to upload with 512K max
- webgui: other misc. improvements
- webgui: vm manager: Preserve VNC port settings
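As referenced in the Docker section above, here is a minimal shell sketch of how the storage selection behaves; it is not the actual rc.docker/mount_image code, and the path is only an example:

  #!/bin/bash
  # Hedged sketch of the loopback/bind-mount selection described in the beta24 notes.
  IMAGE="/mnt/user/system/docker/docker-xfs.img"   # whatever DOCKER_IMAGE_FILE is set to
  case "$IMAGE" in
    *-xfs.img) echo "loopback file: format with mkfs.xfs" ;;
    *.img)     echo "loopback file: format with mkfs.btrfs" ;;
    *)         echo "directory: bind-mount onto /var/lib/docker" ;;
  esac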
    7 points
  2. 3 points
  3. Just a few comments on the ability to use a folder / share for docker.
If you're one of those users who continually has a problem with the docker image filling up, this is the solution, as the "image" will be able to expand (and shrink) to the size of the cache drive. Just be aware that this new feature is technically experimental. (I have, however, been running this on an XFS-formatted cache drive for a while now and don't see any problems at all.)
I would recommend that you use a share that is dedicated to the docker files, and not a folder from another existing share (like system, as shown in the OP). My reasoning is that if you ever need to run the New Permissions tool against the share that you've placed the docker folder into, that tool will cause the entire docker system to not run. The folder will have to be removed (via the command line) and then recreated.
All of the folders contained within the docker folder are not compatible with being exported over SMB, and you cannot gain access to them that way. Using a separate share will also allow you to not export it without impacting the other shares' exporting. (And there are no "user-modifiable" files in there anyway. If you do need to modify a file within that folder (ie: a config file for a container, and that config isn't available within appdata), you should be doing it by going to the container's shell.)
You definitely want the share to be cache-only (although cache-prefer should probably be OK). Setting it to cache:yes will undoubtedly cause you problems if mover winds up relocating files to the array for you.
On this beta (until the GUI properly supports this new feature), you also cannot use Settings - Docker to stop / start the service if you've made the change to the .cfg file to utilize this feature. (You can stop the service, but in order to restart it you have to enable it via the config file and then stop / start the array.)
I did have some "weirdness" with using an Unassigned Device as the drive for the docker folder. This may, however, have been a glitch in my system.
Fix Common Problems (and the Docker Safe New Permissions Tool) will wind up getting updated (once the GUI properly supports these changes) to let you know of any problems that it detects with how you've configured the folder.
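For reference, pointing the beta at a dedicated share boils down to one line in config/docker.cfg on the flash drive; the share name below is made up, use whichever cache-only share you created:

  DOCKER_IMAGE_FILE="/mnt/user/docker-folder/docker"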
    3 points
  4. Added several options for dealing with this issue in 6.9.0-beta24.
    3 points
  5. NOTE: I do not have time to keep the table of contents up to date, so there are going to be other scripts within this thread that are not listed here.
Just a thread to contain any/all additional scripts created by users for use within the user.scripts plugin. I'm going to be using this thread for anything that pops into my head that may be of use but is either too simple for a plugin format, or just not worth the time for something that may only get run once.
The ideal format to post any contributed scripts would be a zip file containing the script and description (stored within an already-named folder for ease of adding to the plugin), and additionally a code block of the script itself for complete openness. (A made-up example of this layout follows the list below.) See the user.scripts thread for details on how to add these scripts (or any others).

Default Scripts Included in the plugin
- Fix Files Stored on the Array for cache-only shares and the reverse
- Clean Docker Logs
- Backup MySQL Folder
- Run mover at a certain utilization automatically
- Record Disk Assignments
- Enable / Disable Turbo Write Mode
- Auto set turbo mode based on drives spun up
- Run Mover At A Threshold, optional to skip moving if parity check in progress
- Clear An unRaid Data Drive
- A script to have a file with the folders containing movies and tvshows
- Send Server Status To Phone
- Backup vm xml files and ovmf nvram files
- Automatically download from repo and install custom VM icons to vm manager
- Run A Custom Script At Parity Check / Rebuild Start And Stop
- Catalog Drive Contents
- Move a folder when disk utilization exceeded
- Very simple script which will resume paused/suspended vms or start shut off vms
- Scheduled Scrubs
- Scheduled checks for Out Of Memory Errors
- Play PacMan On Your Server
- USB Hotplug for Virtual Machines with no passthrough and a revision HERE
- Enable / Disable Nested VM https://forums.unraid.net/topic/48707-additional-scripts-for-userscripts-plugin/?page=4#comment-547492
- RemoveSpacesFromFile
- FolderfromFilename
- Automatically save syslog onto flash drive
- Check Plugin Integrity
- Allow unRaid to utilize the full width of the browser instead of limited to 1920px
- Get size of running containers
- Script to spin up all drives at certain times of day
- unRaid GUI Bleeding Edge Toolkit
- Enable Hardware Decoding In Plex
- Convert files from dos to linux format
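As a made-up example of that contribution format (folder and file names are assumptions; check the user.scripts thread for the exact layout the plugin expects), a contributed script would be a named folder holding a one-line description file plus the script itself:

  my_log_cache_free/
    description   e.g. "Logs free space on the cache pool to the syslog"
    script        the script itself, for example:

  #!/bin/bash
  # Example only: write the cache pool's free space to the system log
  logger "cache free space: $(df -h /mnt/cache | awk 'NR==2 {print $4}')"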
    1 point
  6. 1 point
  7. one word, nice! 🙂 https://www.bbc.co.uk/news/technology-53322755
    1 point
  8. Whooo, the RTL8125BG 2.5Gb NIC works now, using https://www.gigabyte.com/Motherboard/Z490-AORUS-ELITE-rev-10#kf Thanks
    1 point
  9. I'd be interested to see if anyone has been able to get NUT or APCUPSD to change / register different AVR voltages. I have a CyberPower OR1500LCDRM1U UPS and just "figured" that it worked automatically, with no idea of what it considered a low voltage for AVR. I just "figured" that it regulated any deviance from 120v. According to APCUPSD and NUT, both report the default setting as 90v with no obvious way to change that value. Does that really mean that AVR does not kick in until the voltage drops to 90v? That would seem to defeat the purpose of having AVR. From CyberPower: The OR1500LCDRM1U uses Automatic Voltage Regulation (AVR) to correct minor power fluctuations without switching to battery power, which extends battery life. AVR is essential in areas where power fluctuations occur frequently. A 30v drop is not a minor power fluctuation. Or is this setting just not what controls AVR, or is NUT/apcupsd just reporting something wrong? UPS pros, let's get to the bottom of this!
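For anyone comparing notes, both tools can show the configured transfer thresholds from the command line. The commands below are standard NUT/apcupsd usage, but whether a given UPS actually lets you change the values is driver- and model-dependent, so treat the write example as a sketch ("myups" is a placeholder UPS name):

  # NUT: read the low/high transfer voltages
  upsc myups input.transfer.low
  upsc myups input.transfer.high
  # If the driver exposes them as writable, something like this may work:
  upsrw -s input.transfer.low=100 -u admin myups

  # apcupsd: the LOTRANS/HITRANS fields show the same thresholds
  apcaccess status | grep -E 'LOTRANS|HITRANS'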
    1 point
  10. I want to provide a vanilla experience of ZFS on unRAID until it’s natively supported [emoji16] I don’t want to add some some functions that might or might not benefit user use-cases. If you want the pool to automatically import export it’s a great idea to add zpool export -a and zpool import -a on stopping/starting the array via the awesome user scripts plugin. Considering this is a plug-in for advanced users I don’t think the target audience for this will have a problem adding these commands if preferred. Now we just wait and see if native zfs support comes in the next betas Ps I have built Sanoid/Syncoid for myself and was wondering if there was any demand for a plug-in?
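For anyone who wants to wire that up, the two user scripts really are just the commands mentioned above, triggered on whatever array stop/start events the user scripts plugin offers:

  #!/bin/bash
  # run when the array is stopping
  zpool export -a

  #!/bin/bash
  # run when the array is starting
  zpool import -a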
    1 point
  11. Yeah, ZFS is really portable, but you have to be sure that you don't have feature flags enabled on the pool that are not supported on FreeBSD/FreeNAS. See this link: https://openzfs.org/wiki/Feature_Flags Sent from my iPhone using Tapatalk
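A quick way to check this before moving a pool is to list which feature flags are active on it and compare against what the other system supports ("tank" is a placeholder pool name):

  # show the pool's feature flags and their state (active/enabled/disabled)
  zpool get all tank | grep feature@
  # list the feature flags this system's ZFS implementation supports
  zpool upgrade -v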
    1 point
  12. Thank you SirReal63, very helpful. I don't use Plex; I watch my movies using a Windows 7 HTPC and they are saved on the NAS in DVD format. I think this means the CPU of my NAS has an easier life than yours, i.e. no transcoding?
    1 point
  13. @LateNight Mine is showing the exact same information other than the time stamps and ID.
    1 point
  14. 1 point
  15. Or to just compile it themselves using the other customized compiler tool.
    1 point
  16. @StevenD Your storage hardware makes me smile.
    1 point
  17. You can rename a pool by stopping the array and then clicking on the pool name on the Main page, which brings up the pool settings. There, clicking on the name opens a window to rename the pool. Renaming a pool does not change any internal references. For example, if the path of your docker image contains a direct reference to the pool name, e.g. /mnt/cache/system/docker.img, you will need to update this reference manually.
    1 point
  18. I believe it's not currently possible, at least not with the GUI; maybe manually. But it doesn't affect the shares, since those will remain the same; you'll need to correct any internal paths though. "/mnt/pool/share" is shared as "\\tower\share"; if you e.g. change the pool name, "/mnt/new_pool_name/share" is still shared as "\\tower\share".
    1 point
  19. In fact the whole of the parity disk is always used, so if it had a utilisation it would always say 100%.
    1 point
  20. New release of Recycle Bin. If you are using the Recycle Bin on 6.9 Beta 24, you will notice that deleted files are logged to the syslog as well as the smb log used by Recycle Bin to show deleted files. This is due to a change in the syslog settings. If you are doing a lot of deleting on a SMB share, you might want to exclude that share from the Recycle Bin so the Syslog doesn't fill with file deleted messages.
    1 point
  21. If you use the "folder" method as described in the OP, then yes Before this release, the docker.img was always BTRFS, regardless if the drive it sat on was BTRFS, XFS, or ReiserFS. To make the image xfs, you change the filename in Settings - Docker to be docker.xfs.img instead of docker.img My post detailing some items in the folder option gives one huge advantage-> If you've constantly struggled with the docker.img filling up. Performance wise, you won't see any significant difference between the options now available (But, the folder method will be faster, if only synthetically because of below) The main reason for these changes however is to lessen the excess writes to the cache drive. The new way of mounting the image should give lesser amount of writes. The absolute least amount of writes however will come via the folder method. But, the GUI doesn't natively support it yet without the change itemized in the OP.
    1 point
  22. I compiled my kernel for beta 24 with NVIDIA and it works. Great news: no more need for the FLR patch for audio and USB passthrough on X570 Ryzen boards. It's implemented in the 5.7.7 stock kernel [emoji16]. Sent from my HD1913 using Tapatalk
    1 point
  23. This appears to be fixed, but the used space is now wrong. Maybe there's another way to get it, like whatever df uses. This is an empty raid5 pool with 4 x 32GB devices; free space now correctly accounts for parity, unlike before, but note the used space:
    1 point
  24. Just upgrade to 6.8.3. It doesn't appear that 6.5.3 still exists on the download server.
    1 point
  25. Since it's a very old release you need to do a manual update, 6.5.3 was deleted from the cloud some time ago.
    1 point
  26. Thank you very much. Sent from my HD1913 using Tapatalk
    1 point
  27. Container updated and now fully compatible with v6.9.0beta24 (prebuilt images will be updated soon). EDIT: Prebuilt images are now online.
    1 point
  28. While it is not the most mainstream feature, it could help in certain cases. However, I am afraid that the WiFi drivers might add quite a new layer of mess.
    1 point
  29. See Q4 here: https://github.com/binhex/documentation/blob/master/docker/faq/delugevpn.md Alternatively you can enable the plugin then stop/start the backend server from within the deluge UI. I forget where to find it exactly, it’s in the deluge menu somewhere. Sent from my iPhone using Tapatalk
    1 point
  30. Perfect, thanks for picking that up! Appears to be working now.
    1 point
  31. eheh, you are always ahead of me by 5-6 hours... damn time zone. Beginning the download now; better to try everything, to be prepared and take into account all the issues. For macOS Big Sur, so far I have found:
- issue with TextEdit: several crashes when opening existing txt files (crash complaining about layout, or something similar)
- preference panel icons of third-party applications: not working well, icons disappear sometimes
- System Preferences --> Network: it takes a long time to load the network panel
- Control Center: somewhat slow when clicking on the Control Center icon (btw I hate the Control Center... and I can't wait to find a way to delete it)
- Control Center: if you set icons to hide in Control Center from System Preferences, it will not work; the icons are still there
- Siri: though audio works, Siri always replies with "I don't understand that. Can you please repeat?", as if Siri can't identify the microphone.
    1 point
  32. Any LSI with a SAS2008/2308/3008/3408 chipset in IT mode, e.g., 9201-8i, 9211-8i, 9207-8i, 9300-8i, 9400-8i, etc and clones, like the Dell H200/H310 and IBM M1015, these latter ones need to be crossflashed.
    1 point
  33. Updated to beta 2 by downloading the beta 2 installer, making a dmg, and running it. It took a long 4 reboots where I had to manually select the Mac installer entry in OpenCore to keep it going. I thought it froze twice, but it eventually finished. So be very patient... it's still going.
    1 point
  34. Using drives that you don't trust to hold data is a very bad idea. Unraid's drive recovery uses all the capacity of all the drives to maintain parity, and when one drive fails, all the remaining drives must be perfect to rebuild the failed drive. So, be sure to fully test any drive that will be included in the parity array, and discard any that show signs of failure.
    1 point
  35. Yep, with QSV my handbrake encodes are using CPU in the 65-75% range. It all depends on audio, video and subtitle encoding needs. Without QSV, it will use CPU in 90-95% range. Sent from my iPhone using Tapatalk
    1 point
  36. After I added a 2nd GPU card I needed to do some testing... all was good. Then I removed it and kept only one, and the same issue started again with freezes. I read through your notes; some custom patch has to be applied (for version 6.8.3). I was already on 6.9.1b22 so I was not able to revert two versions back. Anyway, I'm not really sure how I was able to run it before without any patch, but I assume this is the way. I wasn't able to start the VM module; as soon as I tried, it froze with the error below. So what to do:
1. In the BIOS, disable IOMMU.
2. Start Unraid.
3. Start the VM module. Set all VMs that include "AMD Starship/Matisse PCIe Dummy Function | Non-Essential Instrumentation (0c:00.0)" to disabled autostart, then restart Unraid.
4. Enable IOMMU in the BIOS.
5. Unraid should boot and the VM module should be visible. Edit the VMs and look for "AMD Starship/Matisse PCIe Dummy Function | Non-Essential Instrumentation (0c:00.0)" added to your VM image - you should UNTICK IT, then SAVE... the next time you edit the VM the device is no longer present.
6. Start the VM and all should be running fine.
I added limetech to my reply to include the patch... as it seems all users with the new Ryzen 3xxx series have the same problem. "AMD Starship/Matisse PCIe Dummy Function | Non-Essential Instrumentation (0c:00.0)" is the source of the issues:
Jul 5 13:02:30 unRAIDTower kernel: vfio-pci 0000:0c:00.0: not ready 1023ms after FLR; waiting
Jul 5 13:02:32 unRAIDTower kernel: vfio-pci 0000:0c:00.0: not ready 2047ms after FLR; waiting
Jul 5 13:02:35 unRAIDTower kernel: vfio-pci 0000:0c:00.0: not ready 4095ms after FLR; waiting
Jul 5 13:02:40 unRAIDTower kernel: vfio-pci 0000:0c:00.0: not ready 8191ms after FLR; waiting
Jul 5 13:02:50 unRAIDTower kernel: vfio-pci 0000:0c:00.0: not ready 16383ms after FLR; waiting
Jul 5 13:03:07 unRAIDTower kernel: vfio-pci 0000:0c:00.0: not ready 32767ms after FLR; waiting
Jul 5 13:03:42 unRAIDTower kernel: vfio-pci 0000:0c:00.0: not ready 65535ms after FLR; giving up
Jul 5 13:03:43 unRAIDTower kernel: clocksource: timekeeping watchdog on CPU10: Marking clocksource 'tsc' as unstable because the skew is too large:
Jul 5 13:03:43 unRAIDTower kernel: clocksource: 'hpet' wd_now: b4700ed2 wd_last: b3954a18 mask: ffffffff
Jul 5 13:03:43 unRAIDTower kernel: clocksource: 'tsc' cs_now: 1d337ecfa60 cs_last: 1d337dd658c mask: ffffffffffffffff
Jul 5 13:03:43 unRAIDTower kernel: tsc: Marking TSC unstable due to clocksource watchdog
Jul 5 13:03:43 unRAIDTower kernel: TSC found unstable after boot, most likely due to broken BIOS. Use 'tsc=unstable'.
Jul 5 13:03:43 unRAIDTower kernel: sched_clock: Marking unstable (510899129422, -8570651)<-(510996221197, -105679272)
Jul 5 13:03:45 unRAIDTower kernel: clocksource: Switched to clocksource hpet
    1 point
  37. One request: Leave the license model as is... Linking the license to the hardware of the machine would be more than just an "iceberg problem" - that would be more of a volcano... The GUID model is not optimal, but it is still easier to use than anything else. My little Lexar USB stick has been working since 11.2012 and is still working (tap on wood)... 😉
    1 point
  38. I have both Starship USB controllers passed through; the USB controllers on the CPU (Matisse) don't work yet AFAIK. As mentioned above, I had to add pcie_no_flr=1022:148 to my config to fix the FLR issue. They work perfectly now.
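For anyone searching for where that setting lives: it goes on the kernel append line in /boot/syslinux/syslinux.cfg on the flash drive (editable from the Flash device page in the webGUI). A sketch assuming the stock boot entry, using the value quoted above:

  label Unraid OS
    menu default
    kernel /bzimage
    append pcie_no_flr=1022:148 initrd=/bzroot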
    1 point
  39. Create a qcow2 image of 256 MB (from the Unraid terminal):
qemu-img create -f qcow2 /path/to/the/image/test.qcow2 256M
This will create a qcow2 image of 256 MB.
Enable NBD on the host and connect the qcow2 image as a network block device (from the Unraid terminal):
modprobe nbd max_part=8
qemu-nbd --connect=/dev/nbd0 /path/to/the/image/test.qcow2
Create the EFI partition and format it.
Create GPT (from the Unraid terminal):
gdisk /dev/nbd0
w
y
Create the EFI partition (from the Unraid terminal):
gdisk /dev/nbd0
n
1
<press Enter to accept default value>
<press Enter to accept default value>
EF00
w
y
Format the EFI partition as FAT32 (from the Unraid terminal):
mkfs.fat -F 32 /dev/nbd0p1
Create a folder for the mount point (advice is to create a folder in your shares, so you can write files into the EFI partition from your vm too):
mkdir /path/to/the/mount/point/
Mount the EFI partition (from the Unraid terminal):
mount /dev/nbd0p1 /path/to/the/mount/point/
Copy your EFI files into /path/to/the/mount/point/
Unmount the EFI partition (from the Unraid terminal):
umount /path/to/the/mount/point/
Disconnect nbd0 (from the Unraid terminal):
qemu-nbd --disconnect /dev/nbd0
rmmod nbd
(Optional) Rename the EFI partition from "NO NAME" to "EFI". From the virtual machine you can mount the new EFI partition and rename it with the command:
sudo diskutil rename /dev/disk0s1 "EFI"
where disk0s1 is the EFI partition.
In the vm xml point to the qcow2 image and add <boot order='1'/> (see the example entry below).
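For that last step, the disk entry in the VM xml would look roughly like this; the target dev, bus and driver options are assumptions to adapt to your VM:

  <disk type='file' device='disk'>
    <driver name='qemu' type='qcow2'/>
    <source file='/path/to/the/image/test.qcow2'/>
    <target dev='hdc' bus='sata'/>
    <boot order='1'/>
  </disk>

Note that per-device <boot order> elements require removing any <boot dev='...'/> line from the <os> section of the xml.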
    1 point
  40. I was having problems getting this all to work but I figured it out after about an hour. I was able to connect to the vpn but was not able to connect to anything on my network or get an internet connection on my phone. It turned out to be a DNS issue and adding the address of my home router as the DNS server to the wireguard app on my phone fixed all of my problems. Overall, easier to setup than openvpn but still took a while to troubleshoot. I will probably keep openvpn as a backup to wireguard.
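For anyone hitting the same wall: in the WireGuard app this is a single DNS line in the tunnel's [Interface] section. The excerpt below is only a sketch; keys and addresses are examples:

  [Interface]
  PrivateKey = <client private key>
  Address = 10.253.0.2/32
  # point DNS at the home router's LAN address so local names and internet lookups resolve
  DNS = 192.168.1.1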
    1 point
  41. So I had the same issue, but the solution made me feel like an absolute idiot... I spent a good 20-30 minutes trying to figure this out but eventually got it:
1. Enable advanced config options on your UPS via the on-screen display.
2. Open the configuration options and scroll down until you find ModBus.
3. Change ModBus to Enabled. It is disabled by default on all APC UPS units.
4. In the unRAID UPS Settings menu, set "UPS Cable" to USB and set "UPS Type" to ModBus. No additional settings like /dev/tty** are required. It may take a minute or two for the info to load, but it will.
***Make sure you use the USB-A to USB-B cable to connect your UPS to the server, not the RJ-45 to USB-A. ModBus does not seem to work with the RJ-45 port on the UPS***
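For reference, the GUI settings in step 4 correspond to these apcupsd.conf directives (the Unraid GUI manages the file for you; this is only a sketch of what gets configured, and as far as I understand, leaving DEVICE blank with UPSTYPE modbus makes apcupsd talk MODBUS over the USB connection):

  UPSCABLE usb
  UPSTYPE modbus
  DEVICE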
    1 point