jpowell8672

Members
  • Posts: 377
  • Joined
  • Last visited
  • Days Won: 1

jpowell8672 last won the day on October 8 2019

jpowell8672 had the most liked content!

1 Follower


jpowell8672's Achievements

Contributor (5/14)

Reputation: 69

Community Answers: 1

  1. Upgraded from 6.12.6 to 6.12.8 without issue. Thank you, I appreciate your hard work!
  2. Updated from 6.12.5 -> 6.12.6 no issues. Thanks again for your hard work & quick fix.
  3. 6.12.4 -> 6.12.5 no issues. Thank you
  4. Welcome to the UNRAID family @Adam-M
  5. Run Tools > Update Assistant first, then follow the release notes upgrade procedure:

     Version 6.12.4 2023-08-31

     Upgrade notes

     Known issues
     Please see the 6.12.0 release notes for general known issues.

     Rolling back
     Before rolling back to an earlier version, it is important to ensure Bridging is enabled:
     Settings > Network Settings > eth0 > Enable Bridging = Yes
     Then Start the array (along with the Docker and VM services) to update your Docker containers, VMs, and WireGuard tunnels back to their previous settings, which should work in older releases. Once in the older version, confirm these settings are correct for your setup:
     Settings > Docker > Host access to custom networks
     Settings > Docker > Docker custom network type
     If rolling back earlier than 6.12.0, also see the 6.12.0 release notes.

     Fix for macvlan call traces
     The big news in this release is that we have resolved issues related to macvlan call traces and crashes! The root of the problem is that macvlan used for custom Docker networks is unreliable when the parent interface is a bridge (like br0); it works best on a physical interface (like eth0) or a bond (like bond0). We believe this to be a longstanding kernel issue and have posted a bug report.
     If you are getting call traces related to macvlan, as a first step we recommend navigating to Settings > Docker, switching to advanced view, and changing the "Docker custom network type" from macvlan to ipvlan. This is the default configuration that Unraid has shipped with since version 6.11.5 and should work for most systems. If you are happy with this setting, then you are done! You will have no more call traces related to macvlan and can skip ahead to the next section.
     However, some users have reported issues with port forwarding from certain routers (Fritzbox) and reduced functionality with advanced network management tools (Ubiquiti) when in ipvlan mode. For those users, we have a new method that reworks networking to avoid issues with macvlan. Tweak a few settings and your Docker containers, VMs, and WireGuard tunnels should automatically adjust to use them:
     Settings > Network Settings > eth0 > Enable Bonding = Yes or No, either works with this solution
     Settings > Network Settings > eth0 > Enable Bridging = No (this will automatically enable macvlan)
     Settings > Docker > Host access to custom networks = Enabled
     Note: if you previously used the 2-nic Docker segmentation method, you will also want to revert that:
     Settings > Docker > custom network on interface eth0 or bond0 (i.e. make sure eth0/bond0 is configured for the custom network, not eth1/bond1)
     When you Start the array, the host, VMs, and Docker containers will all be able to communicate, and there should be no more call traces!

     Troubleshooting
     If your Docker containers with custom IPs are not starting, edit them and change the "Network type" to "Custom: eth0" or "Custom: bond0". We attempted to do this automatically, but depending on how things were customized you may need to do it manually.
     If your VMs are having network issues, edit them and set the Network Source to "vhost0". Also, ensure there is a MAC address assigned.
     If your WireGuard tunnels will not start, make a dummy change to each tunnel and save.
     If you are having issues port forwarding to Docker containers (particularly with a Fritzbox router), delete and recreate the port forward in your router.

     To get a little more technical...
     After upgrading to this release, if bridging remains enabled on eth0 then everything works as it used to. You can attempt to work around the call traces by disabling the custom Docker network, using ipvlan instead of macvlan, or using the 2-nic Docker segmentation method with containers on eth1.
     Starting with this release, when you disable bridging on eth0 we create a new macvtap network for Docker containers and VMs to use. It has a parent of eth0 instead of br0, which is how we avoid the call traces. A side benefit is that macvtap networks are reported to be faster than bridged networks, so you may see speed improvements when communicating with Docker containers and VMs.
     FYI: With bridging disabled for the main interface (eth0), the Docker custom network type will be set to macvlan and hidden unless there are other interfaces on your system that have bridging enabled, in which case the legacy ipvlan option is available. To use the new fix being discussed here you will want to keep that set to macvlan.
     https://docs.unraid.net/unraid-os/release-notes/6.12.4/
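     If you want to double-check which driver your custom Docker network ended up with after making these changes, here is a minimal Python sketch. It assumes the docker CLI is available from the Unraid shell, and the network names br0/eth0 are placeholders of mine (list yours with docker network ls):

         import subprocess

         # Print the driver (macvlan vs ipvlan) of a Docker network.
         def network_driver(name: str) -> str:
             out = subprocess.run(
                 ["docker", "network", "inspect", "--format", "{{.Driver}}", name],
                 capture_output=True, text=True, check=True,
             )
             return out.stdout.strip()

         for net in ("br0", "eth0"):  # placeholder names; adjust for your setup
             try:
                 print(net, "->", network_driver(net))
             except subprocess.CalledProcessError:
                 print(net, "-> not found")

     If it prints macvlan for your custom network, you are on the macvlan path described above.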
  6. https://nascompares.com/2023/06/14/pros-and-cons-of-unraid-nas-os-should-you-use-it/
  7. Version 6.12.0-rc3 2023-04-14 (This is a consolidated change log vs. Unraid OS 6.11)

     Upgrade notes
     If you created any zpools using 6.12.0-beta5, please Erase those pools and recreate.

     ZFS Pools
     New in this release is the ability to create a ZFS file system in a user-defined pool. In addition, you may format any data device in the unRAID array with a single-device ZFS file system. We are splitting full ZFS implementation across two Unraid OS releases. Initial support in this release includes:
     - Support raid0, mirror, raidz1, raidz2 and raidz3 root profiles. Up to 4-way mirror in a mirror vdev. Multiple vdev groups.
     - Support removing a single device: if the device is still present in the server, 'wipefs' is used to clear the partition table.
     - Support replacing a single missing device with a new device of the same or larger size.
     - Support scheduled trimming of ZFS pools.
     - Support pool rename.
     Pool names must begin with a lowercase letter and only contain lowercase letters, digits, the underscore and dash. Pool names must not end with a digit.
     Non-root vdevs cannot be configured in this release; however, they can be imported. Note: imported hybrid pools may not be expanded in this release.
     Pools created on other systems may or may not import depending on how the pool was created. A future update will permit importing pools from any system.

     A ZFS pool has three variables:
     - profile - the root data organization: raid0, mirror (up to 4-way), raidz1, raidz2, raidz3
     - width - the number of devices per root vdev
     - groups - the number of root vdevs in the pool
     At time of ZFS pool creation, the webGUI will present all topology options based on the number of devices assigned to the pool.

     Special treatment for root single-vdev mirrors: a single-device ZFS pool can be converted to a multiple-device mirror by adding up to 3 additional devices in one operation. A 2-device mirror can be increased to 3-device by adding a single device; similarly, a 3-device mirror can be increased to a 4-device mirror by adding a single device.

     To add an additional root vdev, you must assign 'width' number of new devices to the pool at the same time. The new vdev will be created with the same 'profile' as the existing vdevs. Additional flexibility in adding/expanding vdevs will be provided in a future update.

     Pools created with the steini84 plugin can be imported as follows: first create a new pool with the number of slots corresponding to the number of devices in the pool to be imported. Next assign all the devices to the new pool. Upon array Start the pool should be recognized, though certain zpool topologies may not be recognized (please report).

     Mixed topologies are not supported. For example, a pool with both a mirror root vdev and a raidz root vdev is not recognized.

     Autotrim can be configured as on or off (except for single-device ZFS volumes in the unRAID array).

     Compression can be configured as on or off, where on selects lz4. A future update will permit specifying other algorithms/levels.

     When creating a new ZFS pool you may choose zfs - encrypted, which, like other encrypted volumes, applies device-level encryption via LUKS. ZFS native encryption is not supported at this time.

     During system boot, the file /etc/modprobe.d/zfs.conf is auto-generated to limit the ZFS ARC to 1/8 of installed memory. This can be overridden if necessary by creating a custom 'config/modprobe.d/zfs.conf' file. A future update will include the ability to configure the ARC via webGUI, including auto-adjust according to memory pressure, e.g., VM start/stop.

     Top-level user shares in a ZFS pool are created as datasets instead of ordinary directories.
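     To see what that 1/8-of-RAM ARC cap works out to on a given box, here is a rough Python sketch. The 1/8 figure comes from the notes above; the exact option line Unraid writes is not shown there, so the zfs_arc_max line below is my assumption based on the standard ZFS module parameter of that name:

         # Compute an ARC cap of 1/8 of installed memory and print a
         # modprobe-style line (the exact contents Unraid generates may differ).
         def mem_total_bytes() -> int:
             with open("/proc/meminfo") as f:
                 for line in f:
                     if line.startswith("MemTotal:"):
                         return int(line.split()[1]) * 1024  # MemTotal is in kB
             raise RuntimeError("MemTotal not found in /proc/meminfo")

         arc_max = mem_total_bytes() // 8
         print(f"options zfs zfs_arc_max={arc_max}")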
  8. It must have just been down; it's working fine again now.
  9. ID#  ATTRIBUTE_NAME          RAW_VALUE
       5  Reallocated_Sector_Ct   1160
     197  Current_Pending_Sector  16
     198  Offline_Uncorrectable   16
     Unraid reports a sensitive SMART status to try to warn you early of a problematic drive that may soon fail: https://wiki.unraid.net/Understanding_SMART_Reports
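     If you want to watch those same attributes from the command line, here is a rough Python sketch (it assumes smartmontools is installed and is run as root; /dev/sdb is a placeholder device path, substitute your drive):

         import subprocess

         # Print the raw values of the three attributes discussed above.
         WATCH = {"Reallocated_Sector_Ct", "Current_Pending_Sector",
                  "Offline_Uncorrectable"}

         out = subprocess.run(["smartctl", "-A", "/dev/sdb"],
                              capture_output=True, text=True).stdout
         for line in out.splitlines():
             parts = line.split()
             # smartctl -A rows: ID# ATTRIBUTE_NAME ... RAW_VALUE (last column)
             if len(parts) >= 10 and parts[1] in WATCH:
                 print(parts[1], "raw =", parts[-1])

     Non-zero raw values on attributes 5, 197 and 198 are the kind of thing Unraid's sensitive SMART monitoring flags early.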
  10. Looks good to me. You could always add an Nvidia Quadro or other GPU for Jellyfin transcoding in the future.
  11. No need to trim any btrfs pool since it uses the discard=async mount option
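      If you want to verify that yourself, here is a quick Python sketch that scans /proc/mounts for btrfs file systems (mount points vary; on Unraid, pools typically sit under /mnt):

          # Check whether each mounted btrfs file system carries discard=async.
          with open("/proc/mounts") as f:
              for line in f:
                  dev, mnt, fstype, opts = line.split()[:4]
                  if fstype == "btrfs":
                      has = "discard=async" in opts.split(",")
                      print(f"{mnt}: discard=async {'present' if has else 'missing'}")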
  12. Updated 6.11.4 to 6.11.5 no issues, thanks for the update.
  13. If you have the Fix Common Problems plugin installed, which most do and is good to have, then you will have an Update Assistant under the Tools tab, which is described as:

      Update Assistant
      This script is part of Fix Common Problems. These tests, while not definitive, will give you recommendations on what you should do prior to updating your unRaid OS. The tests are run against what the available update for unRaid expects and/or wants. There may also be perfectly valid use-cases for any issues that this script finds.