
Leaderboard


Popular Content

Showing content with the highest reputation since 02/21/17 in Posts

  1. 12 points
    I would love to be able to view the CPU thread pairs from the VM templates like this
  2. 9 points
    Once in a while it is good to hear those encouraging words. Thanks 😄
  3. 8 points
    There are tons of posts pointing to Windows 10 and SMB as the root cause of being unable to connect to unRaid, and they were fruitless, so I'm recording this easy fix for my future self. If you cannot access your unRaid shares via DNS name ( \\tower ) and/or via IP address ( \\192.168.x.y ), then try this. These steps do NOT require you to enable SMB 1.0, which is insecure.
    Directions:
    1. Press the Windows key + R shortcut to open the Run command window.
    2. Type in gpedit.msc and press OK.
    3. Select Computer Configuration -> Administrative Templates -> Network -> Lanman Workstation, double click "Enable insecure guest logons" and set it to Enabled.
    4. Now attempt to access \\tower
    Related Errors:
    Windows cannot access \\tower
    Windows cannot access \\192.168.1.102
    You can't access this shared folder because your organization's security policies block unauthenticated guest access. These policies help protect your PC from unsafe or malicious devices on the network.
  4. 8 points
    Hi guys, I've been thinking about how to get Sonarr and Radarr to automatically re-encode downloads using HandBrake. There will be 2 parts to this and I will post the next one under this in a few days' time. This video, the first part, shows how to automatically re-encode video files downloaded by Sonarr or Radarr using an advanced Docker path mapping that sends the media files through HandBrake before they are then processed by Sonarr/Radarr. I use H.265 as I want smaller video files, but any format can be chosen. This first part goes through the principles of how this works. The second video will go deeper using detailed examples. It is recommended to watch this video first. PART ONE PART TWO
  5. 8 points
    If you have 2 step turned on, just go to https://security.google.com/settings/security/apppasswords?pli=1 to generate an app password
  6. 7 points
    A plugin to extend the autostart options for docker applications. This plugin will give you the option to explicitly select the order that docker apps will start up in (useful if one app depends upon another to be up and running), and also gives you selectable delays between starts of each app ( to give the first app time to get up and running properly, or to reduce disk thrashing by only trying to load a single app at a time vs loading 10 simultaneously) Each app has a selectable start delay after the previous app in the list starts up (defaults to 5 seconds, but can be set to any value, or no delay at all) Note that any app which this plugin manages will appear on the normal Docker Containers screen as having autostart off You can also combine management of autostart between unRaid itself, and this plugin. Note: This plugin requires unRaid 6.3.0-rc2+ It will not install under any previous version, and CA will not display this plugin as being available on incompatible unRaid versions Install this plugin via CA ( search for Docker autostart)
  7. 7 points
    Just an FYI, we are actively working to create an alternate black on white theme for folks to choose from. Stay tuned!
  8. 7 points
    Since it is my system that is being quoted for power usage, I would like to offer a clarification on the impact of C-States on power consumption. I also performed most of the discovery testing on Ryzen stability issues, so below is a synopsis of my findings.
    Yes, there is an unRAID-specific bug that causes Ryzen based systems to crash unless "Global C-State Control" is disabled in the BIOS. The bug seems to be related to an idle CPU going into power-saving C-States, and "heavy" users who have lots of VMs running (especially Windows VMs) seem to have fewer issues as their CPU cores are less idle. We have not been able to determine if it is the act of going into a higher C-State (entering a power save mode), or the act of coming out of a higher C-State (returning to a performance mode), that is the problem. Disabling C6 was not enough, and the only workaround that has worked is completely disabling C-States via "Global C-State Control".
    Various tests have been performed that have eliminated the Linux kernel as the culprit. Even pre-4.9 versions of the Linux kernel do not experience this issue, and Win10 does not experience this issue. Various flavors of Linux have been tested, and all have performed flawlessly. Whatever the issue is, it only occurs on unRAID, and the root cause is most likely in the "secret sauce" that makes unRAID so special, though I suppose it could be just a build parameter or something innocuous. unRAID plugins have also been eliminated as a root cause, as even running in Safe Mode, unRAID systems will crash when idle on a Ryzen system if C-States are enabled. The latest unRAID releases, including 6.4.0-rc6, have had no effect on the issue. Similarly, BIOS versions have had no impact on the issue, not even with the latest AGESA 1.0.0.6 code updates.
    For whatever reason, my system appears to be an excellent "canary in a coal mine" system, as it will consistently crash within a few hours with C-States enabled, quicker than most other Ryzen adopters. My system is typically idle, with no VMs running, which may be a contributing factor. Those "heavy" users with lots of VMs running can often go many days or even weeks, but eventually they too observe a crash with C-States enabled. With Global C-State Control disabled, my system has achieved 72 days of up-time, a streak that only ended so I could apply BIOS and unRAID upgrades.
    Disabling Global C-State Control is not a perfect solution, however, as it comes with increased idle power usage and heat, may reduce performance (due to heat-related throttling), and shortens UPS up-time due to higher idle power consumption. On my system, with C-States enabled, I idle at about 108 watts (+/-5w), with CPU temps in the low 30's. With Global C-States disabled, I idle closer to 126w (again, +/-5w), with CPU temps in the low 40's. That represents a power consumption increase of somewhere between 8 and 28 watts, mostly consumed by the CPU, but some by the cooling system working harder. I know a 108w idle, even under ideal conditions, sounds high, but that is primarily due to my choice of a power-hungry SATA controller card, 18 drives, and a pro-grade server case with powerful fans and SAS back-planes, without all of which I have seen my motherboard idle as low as 38w with C-States enabled, and 45-55w without. So disabling C-States does not give you >100w idle power consumption, but it does significantly increase power consumption all the same. 
Last thought, especially for those considering Threadripper: The issues described here may be up to 2x worse on Threadripper, as combining two 8-core Ryzen CPUs into a single 16-core Threadripper, and bumping the TDP up from 95w to 155w (or maybe even 180w), will only amplify the power consumption/heat effects of disabling C-States. I'm very excited for Threadripper, but I advise a wait-and-see approach for Threadripper on unRAID. Hope the above helps clarify the situation. For those of us enjoying Ryzen unRAID systems, we are having great success with C-States disabled, but with the obvious impact of increased power consumption and heat. The fact that we have found this 'workaround' has probably shielded this issue from Limetech's awareness, but to be clear, the current situation is not ideal. Paul
  9. 6 points
    Unraid really needs a better way of browsing files than using a Docker file manager, the command line, or another PC on the network. It would be great to have a simple file browser built into the web UI that let you browse your local files and do simple operations like copy, move, delete, rename, and open some files (documents, videos, music, and text files, for example). Basically something like dissident.ai built right into the web UI.
  10. 6 points
    CA Mover Tuning
    A simple little plugin (mainly created for my own purposes) that will let you fine-tune the operation of the mover script:
    - On scheduled runs of mover, only actually move file(s) if the cache drive is getting full (selectable threshold)
    - On scheduled runs of mover, optionally don't move if a parity check / rebuild is already in progress
    - Ability to completely disable the scheduled runs of mover if so desired (not sure if there's much demand for this)
    - Manually executed runs of mover ("Move Now" button) can either follow the rules for schedules, or always move all files
    After installation (any guesses as to how to install the plugin?), the default settings are set so that there is no difference in how unRaid normally handles things. You'll find its settings within Settings - Scheduler
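    To illustrate the idea (this is not the plugin's actual code), a minimal sketch of the "only move when the cache is getting full" check might look like the following, assuming the cache is mounted at /mnt/cache and that Unraid's mover script lives at /usr/local/sbin/mover:
        #!/bin/bash
        # Only invoke mover when cache usage crosses an example threshold.
        THRESHOLD=80   # percent full; the plugin makes this selectable

        used=$(df --output=pcent /mnt/cache | tail -1 | tr -dc '0-9')

        if [ "$used" -ge "$THRESHOLD" ]; then
            echo "Cache is ${used}% full - running mover."
            /usr/local/sbin/mover
        else
            echo "Cache is only ${used}% full - skipping this run."
        fi
    The plugin also checks for a running parity check/rebuild before moving, which is omitted from this sketch.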
  11. 6 points
    It's a Jimmy Banner Bonanza! Enjoy!
  12. 6 points
    Since you... "had a devil of a time deciding whether to simply update to the 4.19 kernel now..." 6.6.6 was the version number that came to mind? 🤣😎 As an FYI, I was able to upgrade both of my servers without the need for any holy water, crosses, chanting or other types of rituals.
  13. 6 points
    You guys ain't seen nothing yet... Wait till you see what is coming...
  14. 6 points
    Go to the Docker tab. Find Binhex-Krusader in the list of containers. Click on it (see picture below) and select "Edit". The Binhex-Krusader Docker edit screen pops up. Go down the list until you see /mnt/disks/ as the container path for unassigned devices. Click the "Edit" button to the right of it. The details for the unassigned devices path pop up. Go down the list to "Access Mode", change it to "RW/Slave", and then save.
  15. 6 points
    I'm hoping for 6.4 to be released when it's ready. If you want a constant release cycle, go buy an iPhone.
  16. 5 points
    Last update: 27/11/2019
    After a few months of researching, prepping my data, persuading she-who-must-be-obeyed, etc., I finally pulled the trigger when Amazon finally had my motherboard in stock. I had an i7-5820K (overclocked to 4GHz) and a Xeon E3-1245v5 server (ITX case), but that was more out of necessity since I couldn't afford a dual-Xeon setup to merge them and still have sufficient performance. The 2990WX came out at just the right price point to make the idea possible.
    OS at time of building: 6.4.0
    OS Current: 6.8.3-rc3
    CPU: AMD Ryzen Threadripper 2990WX
    Heatsink: Noctua NH-U14S TR4-SP3 (in push-pull - I nicked a 15mm fan from my old NH-D14 workstation cooler)
    Motherboard: Gigabyte X399 Designare EX (F12e BIOS)
    RAM: 64GB Corsair Vengeance LPX 2666MHz + 32GB GSkill Ripjaw 4 2800MHz (nicked from the old workstation)
    Case: Silverstone Fortress FT02 (old workstation case)
    Drive Cage: Evercool Dual (2x5.25 -> 3x3.5 with 80mm fan) - need this to mount 4x2.5" SSD
    Power Supply: Corsair HX850 (10+ years old!)
    GPU: Zotac GTX 1070 Mini for main VM, Zotac GT 710 PCIe x1 for unRAID
    Parity Drive: None because I trust the cloud
    Local array: 10TB Seagate Iron Wolf, 8TB Seagate NAS, 8TB Hitachi HE8
    Cache: 2TB Samsung Evo 850 2.5" SSD
    VM-only: 512GB Samsung SM951 M.2 (the AHCI variety), 1.2TB Intel 750 NVMe, Samsung 970 EVO 2TB, Samsung PM983 3.84TB (via Asus Hyper M.2 X16 card + split PCIe to x4,x4,x4,x4)
    Rclone mountpoint: Kingston SSDNow+ 128GB 2.5" SSD (SATA II - it's that old!)
    Unassigned: 4TB Samsung Evo 860 2.5" SSD, 2TB Crucial MX300 2.5" SSD, 1.2TB Intel 750 NVMe
    Total Hard Drive Array Capacity: 26TB local + rclone
    Total Cache Drive Array Capacity: 2TB
    Primary Use: Main video/photo editing workstation + various unRAID stuff that people do on unRAID
    Likes: Pretty much takes anything thrown at it and spits the output back in my face.
    Dislikes: It's heavy AF!
    Future Plans: Move to a smaller case (Raijintek Thetis?) for a more compact build.
    Some tips:
    Hyper-V apparently improves PCIe performance! So unless you are suffering from the dreaded error code 43 that cannot be resolved with any other tweaks, do NOT turn off Hyper-V by default. Start a new template if you need to switch Hyper-V on/off.
    The latest Linux kernel + F12 BIOS seem to make disabling Global C-State Control less stable. That manifests as out-of-memory errors if I try to reserve more than 50% of RAM all at once (e.g. starting my workstation VM). So if you disabled Global C-State Control in the past and now seem to have some instability, maybe try enabling it with the latest BIOS.
    Do NOT use ACS Override with multifunction on unRAID versions above 6.5.3 due to extreme lag. Unraid 6.8 (tested on rc7) has resolved the lag! 👍 Normal ACS Override does not improve IOMMU grouping, so there is no point using it.
    The bottom right M.2 slot (the 2280 size) is in the same IOMMU group as (a) both wired LAN ports, (b) wireless LAN, (c) the SATA controller and (d) the middle PCIe 2.0 slot. Hence, it practically cannot be passed through via the PCIe method without ACS Override multifunction (which lags - see above).
    The bottom PCIe slot (x8) is in the same IOMMU group as the bottom right M.2 slot, so it's the same situation (see above). Just bought an M.2 -> PCIe adapter to test that slot and found out I had just wasted 2 hours of my life.
    If you build from scratch, make sure to get a motherboard with the ability to upgrade the BIOS without a CPU (and familiarise yourself fully with the process). 
Even Linus forgot to update his BIOS before his 2990WX build. Windows isn't yet optimised for too many threads/cores. My testing shows anything more than 32 processes leads to a DROP in performance. The diminishing return also means going beyond 24-28 processes leads to almost no improvement in real-world performance. (a process = 1 thread/core e.g. 24 processes = 12 cores in SMT or 24 cores without SMT). Due to Ryzen design (essentially gluing 2x4-core units into a die and then gluing 2/4 dies into a CPU), 24-core config performs better than 28-core, at least with my workload on Windows. 32-core no-SMT is fastest but only in barebone. Perhaps placebo but I found spreading the cores out evenly seems to improve performance. I guess every 8 logical cores = 1 unit so if you have an 8-core VM for example, have 1 core in each unit. Finally, I would like to thank @gridrunner, @eschultz, @Jcloud, @methanoid, @tjb_altf4, @jbartlett, @guru69 for their kind advice during my research process + @DZMM for helping me out with rclone. 👍 Below are my IOMMU groups with IOMMU in BIOS is set to "On" (but without ACS Override multifunction): The motherboard native IOMMU is actually very good (e.g. can pass through USB and main GPU). The LAN ports + wifi + the PCIe2.0 slots are all in the same group. And this Gigabyte motherboard is amazing in that it allows you to pick which slot as primary GPU so you can use a good GPU on the fast slot and dump a cheapo on the slowest slot (in my case a single-slot PCIe x1 GT 710 GPU) for unRAID. Do not need to use vbios (but I use it regardless). ACS Override multifunction would basically splits everything into its own group. However, note that I experienced extreme lag with Unraid 6.6 and 6.7 when ACS Override is turned on (regardless of mode). 6.5.3 (and prior) does not have this issue and 6.8+ has resolved this so presumably a Linux Kernel issue. IOMMU group 0: [1022:1452] 00:01.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-1fh) PCIe Dummy Host Bridge [1022:1453] 00:01.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) PCIe GPP Bridge [1022:1453] 00:01.2 PCI bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) PCIe GPP Bridge [1022:43ba] 01:00.0 USB controller: Advanced Micro Devices, Inc. [AMD] X399 Series Chipset USB 3.1 xHCI Controller (rev 02) [1022:43b6] 01:00.1 SATA controller: Advanced Micro Devices, Inc. [AMD] X399 Series Chipset SATA Controller (rev 02) [1022:43b1] 01:00.2 PCI bridge: Advanced Micro Devices, Inc. [AMD] X399 Series Chipset PCIe Bridge (rev 02) [1022:43b4] 02:00.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] 300 Series Chipset PCIe Port (rev 02) [1022:43b4] 02:01.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] 300 Series Chipset PCIe Port (rev 02) [1022:43b4] 02:02.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] 300 Series Chipset PCIe Port (rev 02) [1022:43b4] 02:03.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] 300 Series Chipset PCIe Port (rev 02) [1022:43b4] 02:04.0 PCI bridge: Advanced Micro Devices, Inc. 
[AMD] 300 Series Chipset PCIe Port (rev 02) [8086:1539] 04:00.0 Ethernet controller: Intel Corporation I211 Gigabit Network Connection (rev 03) [8086:24fd] 05:00.0 Network controller: Intel Corporation Wireless 8265 / 8275 (rev 78) [8086:1539] 06:00.0 Ethernet controller: Intel Corporation I211 Gigabit Network Connection (rev 03) [10de:128b] 07:00.0 VGA compatible controller: NVIDIA Corporation GK208B [GeForce GT 710] (rev a1) [10de:0e0f] 07:00.1 Audio device: NVIDIA Corporation GK208 HDMI/DP Audio Controller (rev a1) [144d:a801] 08:00.0 SATA controller: Samsung Electronics Co Ltd Device a801 (rev 01) IOMMU group 1: [1022:1452] 00:02.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-1fh) PCIe Dummy Host Bridge IOMMU group 2: [1022:1452] 00:03.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-1fh) PCIe Dummy Host Bridge [1022:1453] 00:03.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) PCIe GPP Bridge [8086:0953] 09:00.0 Non-Volatile memory controller: Intel Corporation PCIe Data Center SSD (rev 01) IOMMU group 3: [1022:1452] 00:04.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-1fh) PCIe Dummy Host Bridge IOMMU group 4: [1022:1452] 00:07.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-1fh) PCIe Dummy Host Bridge [1022:1454] 00:07.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Internal PCIe GPP Bridge 0 to Bus B [1022:145a] 0a:00.0 Non-Essential Instrumentation [1300]: Advanced Micro Devices, Inc. [AMD] Zeppelin/Raven/Raven2 PCIe Dummy Function [1022:1456] 0a:00.2 Encryption controller: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Platform Security Processor [1022:145f] 0a:00.3 USB controller: Advanced Micro Devices, Inc. [AMD] Zeppelin USB 3.0 Host controller IOMMU group 5: [1022:1452] 00:08.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-1fh) PCIe Dummy Host Bridge [1022:1454] 00:08.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Internal PCIe GPP Bridge 0 to Bus B [1022:1455] 0b:00.0 Non-Essential Instrumentation [1300]: Advanced Micro Devices, Inc. [AMD] Zeppelin/Renoir PCIe Dummy Function [1022:7901] 0b:00.2 SATA controller: Advanced Micro Devices, Inc. [AMD] FCH SATA Controller [AHCI mode] (rev 51) [1022:1457] 0b:00.3 Audio device: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) HD Audio Controller IOMMU group 6: [1022:790b] 00:14.0 SMBus: Advanced Micro Devices, Inc. [AMD] FCH SMBus Controller (rev 59) [1022:790e] 00:14.3 ISA bridge: Advanced Micro Devices, Inc. [AMD] FCH LPC Bridge (rev 51) IOMMU group 7: [1022:1460] 00:18.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 0 [1022:1461] 00:18.1 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 1 [1022:1462] 00:18.2 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 2 [1022:1463] 00:18.3 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 3 [1022:1464] 00:18.4 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 4 [1022:1465] 00:18.5 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 5 [1022:1466] 00:18.6 Host bridge: Advanced Micro Devices, Inc. 
[AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 6 [1022:1467] 00:18.7 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 7 IOMMU group 8: [1022:1460] 00:19.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 0 [1022:1461] 00:19.1 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 1 [1022:1462] 00:19.2 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 2 [1022:1463] 00:19.3 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 3 [1022:1464] 00:19.4 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 4 [1022:1465] 00:19.5 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 5 [1022:1466] 00:19.6 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 6 [1022:1467] 00:19.7 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 7 IOMMU group 9: [1022:1460] 00:1a.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 0 [1022:1461] 00:1a.1 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 1 [1022:1462] 00:1a.2 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 2 [1022:1463] 00:1a.3 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 3 [1022:1464] 00:1a.4 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 4 [1022:1465] 00:1a.5 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 5 [1022:1466] 00:1a.6 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 6 [1022:1467] 00:1a.7 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 7 IOMMU group 10: [1022:1460] 00:1b.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 0 [1022:1461] 00:1b.1 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 1 [1022:1462] 00:1b.2 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 2 [1022:1463] 00:1b.3 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 3 [1022:1464] 00:1b.4 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 4 [1022:1465] 00:1b.5 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 5 [1022:1466] 00:1b.6 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 6 [1022:1467] 00:1b.7 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 7 IOMMU group 11: [1022:1452] 20:01.0 Host bridge: Advanced Micro Devices, Inc. 
[AMD] Family 17h (Models 00h-1fh) PCIe Dummy Host Bridge IOMMU group 12: [1022:1452] 20:02.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-1fh) PCIe Dummy Host Bridge IOMMU group 13: [1022:1452] 20:03.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-1fh) PCIe Dummy Host Bridge IOMMU group 14: [1022:1452] 20:04.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-1fh) PCIe Dummy Host Bridge IOMMU group 15: [1022:1452] 20:07.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-1fh) PCIe Dummy Host Bridge [1022:1454] 20:07.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Internal PCIe GPP Bridge 0 to Bus B [1022:145a] 21:00.0 Non-Essential Instrumentation [1300]: Advanced Micro Devices, Inc. [AMD] Zeppelin/Raven/Raven2 PCIe Dummy Function [1022:1456] 21:00.2 Encryption controller: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Platform Security Processor IOMMU group 16: [1022:1452] 20:08.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-1fh) PCIe Dummy Host Bridge [1022:1454] 20:08.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Internal PCIe GPP Bridge 0 to Bus B [1022:1455] 22:00.0 Non-Essential Instrumentation [1300]: Advanced Micro Devices, Inc. [AMD] Zeppelin/Renoir PCIe Dummy Function IOMMU group 17: [1022:1452] 40:01.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-1fh) PCIe Dummy Host Bridge [1022:1453] 40:01.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) PCIe GPP Bridge [1022:1453] 40:01.2 PCI bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) PCIe GPP Bridge [1022:1453] 40:01.3 PCI bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) PCIe GPP Bridge [144d:a808] 41:00.0 Non-Volatile memory controller: Samsung Electronics Co Ltd NVMe SSD Controller SM981/PM981 [144d:a808] 42:00.0 Non-Volatile memory controller: Samsung Electronics Co Ltd NVMe SSD Controller SM981/PM981 [8086:0953] 43:00.0 Non-Volatile memory controller: Intel Corporation PCIe Data Center SSD (rev 01) IOMMU group 18: [1022:1452] 40:02.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-1fh) PCIe Dummy Host Bridge IOMMU group 19: [1022:1452] 40:03.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-1fh) PCIe Dummy Host Bridge [1022:1453] 40:03.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) PCIe GPP Bridge [10de:1b81] 44:00.0 VGA compatible controller: NVIDIA Corporation GP104 [GeForce GTX 1070] (rev a1) [10de:10f0] 44:00.1 Audio device: NVIDIA Corporation GP104 High Definition Audio Controller (rev a1) IOMMU group 20: [1022:1452] 40:04.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-1fh) PCIe Dummy Host Bridge IOMMU group 21: [1022:1452] 40:07.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-1fh) PCIe Dummy Host Bridge [1022:1454] 40:07.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Internal PCIe GPP Bridge 0 to Bus B [1022:145a] 45:00.0 Non-Essential Instrumentation [1300]: Advanced Micro Devices, Inc. [AMD] Zeppelin/Raven/Raven2 PCIe Dummy Function [1022:1456] 45:00.2 Encryption controller: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Platform Security Processor [1022:145f] 45:00.3 USB controller: Advanced Micro Devices, Inc. 
[AMD] Zeppelin USB 3.0 Host controller IOMMU group 22: [1022:1452] 40:08.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-1fh) PCIe Dummy Host Bridge [1022:1454] 40:08.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Internal PCIe GPP Bridge 0 to Bus B [1022:1455] 46:00.0 Non-Essential Instrumentation [1300]: Advanced Micro Devices, Inc. [AMD] Zeppelin/Renoir PCIe Dummy Function [1022:7901] 46:00.2 SATA controller: Advanced Micro Devices, Inc. [AMD] FCH SATA Controller [AHCI mode] (rev 51) IOMMU group 23: [1022:1452] 60:01.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-1fh) PCIe Dummy Host Bridge IOMMU group 24: [1022:1452] 60:02.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-1fh) PCIe Dummy Host Bridge IOMMU group 25: [1022:1452] 60:03.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-1fh) PCIe Dummy Host Bridge IOMMU group 26: [1022:1452] 60:04.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-1fh) PCIe Dummy Host Bridge IOMMU group 27: [1022:1452] 60:07.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-1fh) PCIe Dummy Host Bridge [1022:1454] 60:07.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Internal PCIe GPP Bridge 0 to Bus B [1022:145a] 61:00.0 Non-Essential Instrumentation [1300]: Advanced Micro Devices, Inc. [AMD] Zeppelin/Raven/Raven2 PCIe Dummy Function [1022:1456] 61:00.2 Encryption controller: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Platform Security Processor IOMMU group 28: [1022:1452] 60:08.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-1fh) PCIe Dummy Host Bridge [1022:1454] 60:08.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Internal PCIe GPP Bridge 0 to Bus B [1022:1455] 62:00.0 Non-Essential Instrumentation [1300]: Advanced Micro Devices, Inc. [AMD] Zeppelin/Renoir PCIe Dummy Function
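    For reference, a listing like the one above can be generated on any Unraid box with the commonly shared sysfs loop below (generic Linux, not anything specific to this board):
        #!/bin/bash
        # Print every IOMMU group and the PCI devices it contains.
        for g in /sys/kernel/iommu_groups/*; do
            echo "IOMMU group ${g##*/}:"
            for d in "$g"/devices/*; do
                echo -e "\t$(lspci -nns "${d##*/}")"
            done
        done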
  17. 5 points
    I have a Bluetooth dongle that I want to use in a docker container. Docker requires that the Host OS install drivers for the device before it can be passed to the container. Searching the forums it appears I am not alone in that need, particularly for the users of Home Assistant. See this post for more info:
  18. 5 points
    This one is in the works. Edit: actually, it doesn't mess with the syslinux.cfg file; instead it generates entries in config/vfio-pci.cfg
  19. 5 points
    It's not written to any permanent storage; it's written to a file in the root file system, which is in RAM. Your "secret password" is only to decrypt the drives - even if the password were leaked somehow, they would still need the physical drives to make use of it. You'll first have to teach her to hack your root login password in order to get into your server. As @bonienl mentioned in the above post, we have made some changes so that a casual user doesn't accidentally leave a keyfile lying around. These will appear in the Unraid OS 6.8 release. In the meantime, click that 'Delete' button after starting the array and, I'm done talking with you.
  20. 5 points
    Here is a guide that shows how to repair XFS filesystem corruption if you are ever unlucky enough to have this happen on your server. Hope it's useful if you ever need to fix it!
  21. 5 points
    For anyone who needs the script, here it is:
        #!/bin/bash

        # Drives
        declare -a StringArray=("ata-WDC_WD2003FYYS-70W080_WJUN0123456" "DRIVE2" "DRIVE3" "DRIVE4")

        # Show status
        echo "Current drive status: "
        for drive in "${StringArray[@]}"; do
            hdparm -W "/dev/disk/by-id/$drive"
        done

        # Enable write caching
        for drive in "${StringArray[@]}"; do
            hdparm -W 1 "/dev/disk/by-id/$drive"
        done

        # Show status again
        echo "Finished running, check if the write cache was enabled!"
        for drive in "${StringArray[@]}"; do
            hdparm -W "/dev/disk/by-id/$drive"
        done
    Replace the serial codes in the StringArray and you should be good to go! Pop it in User Scripts.
  22. 5 points
  23. 5 points
    Tying registration to a USB flash device is both elegant and a real pain-in-the-neck at the same time.
    First, there is a very technical problem it solves. In the case of writing to a parity-protected array, there exists the possibility that a device can report a write failure in a long queue of write commands. When this happens we must assume that the sector being written does not contain proper data. This means that any subsequently queued read command to that same sector must also be marked as "failed" even if the drive reports success, and instead, read data must be "reconstructed" using the other devices in the array. Further, the failed status of the device must persist across reboots so that reading stale data is not possible. This problem is solved by having the md/unraid driver write failed-status information directly to a file (super.dat) stored on the USB flash device. This is done right in-line and synchronously with I/O completions handled by the driver. This is the reason we must insist that the USB flash device remain plugged into a running server.
    Next, using the GUID of the device for registration lets you totally reconfigure your server with new h/w and everything "just works". This is even more awesome when using Virtual Machines. For example, one can create a win10 VM on a dog-slow 10-year-old hard disk, on a 5-year-old motherboard, with GPU passthrough cobbled together using ACS override, and then... buy new h/w: latest Threadripper, GTX 1080, fast SSD for cache. You just plug your flash device into the new server and unRAID comes right up. Next, maybe leave your old slow spinner unassigned, copy the vdisk.img file to your new SSD, make a tweak to the VM settings to pass through a different GPU, maybe change memory/core assignments, and guess what? The win10 VM also boots right up and works! (Well, there might be a little more work involved for this scenario but you get the point.)
    As for LimeTech going bust... we have instructions with several trusted individuals on how to release everything unencumbered. But as others have pointed out, you can always get at all of your data using other Linux distros.
    Lastly I need to reiterate: IMHO the real value of unRAID lies in the Community of users who participate and help out here on the Forums. Somehow we have managed to keep the Forums friendly and inviting to new users. Of course there are, ahem, a few exceptions, but in general, it's a really great group of users here. (Again: thanks to all of you!)
  24. 5 points
    This concept shows how parity works - if there are an odd number of "1" bits then parity is 1, if there are an even number then parity is 0. The green and white circles were pulled directly from the unRAID interface, so there is a nice tie-in to the product. There is also an Easter egg, the four lines taken together and converted to ascii spell "un" I also included a potential tagline
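    As a quick illustration of that even/odd rule, the parity bit is just the XOR of the data bits in a column; the bit values below are made up for the example:
        #!/bin/bash
        # XOR a column of data-disk bits together to get the parity bit.
        bits=(1 0 1 1)      # example column: three 1s (odd count) -> parity should be 1
        p=0
        for b in "${bits[@]}"; do
            p=$(( p ^ b ))
        done
        echo "data bits: ${bits[*]} -> parity bit: $p"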
  25. 4 points
    Application Name: Tautulli
    Application Site: http://tautulli.com
    Docker Hub: https://hub.docker.com/r/linuxserver/tautulli/
    Github: https://github.com/linuxserver/docker-tautulli
    Please post any questions/issues relating to this docker in this thread. If you are not using Unraid (and you should be!) then please do not post here; instead, head to linuxserver.io to see how to get support.
    11/03/2018 - Project renamed from Plexpy to Tautulli. To migrate to Tautulli, use Community Applications to install Tautulli using your old Plexpy appdata (ensure you've stopped the existing Plexpy container first).
  26. 4 points
    Instructions for Pi-hole with WireGuard:
    For those of you who don't have a homelab exotic enough to have VLANs, and who also don't have a spare NIC lying around, I have come up with a solution to keep the Docker Pi-hole container working if you are using WireGuard. Here are the steps I used to get a functional Pi-hole DNS on my unRAID VM with WireGuard:
    1a) Since we're going to change our Pi-hole to a host network, we'll first need to change the unRAID server's management ports under Settings > Management Access so there isn't a conflict:
    1) Take your Pi-hole container and edit it. Change the network type to "Host". This allows us to avoid the problems inherent in trying to have two bridge networks talk to each other in Docker (thus removing our need to use a VLAN or set up a separate interface). You'll also want to make sure the ServerIP is set to your unRAID server's IP address and that DNSMASQ_LISTENING is set to single (we don't want Pi-hole to take over dnsmasq):
    2) We'll need to do some minor container surgery. Unfortunately the Docker container lacks sufficient control to handle this through parameters. For this step, I will assume you have the following volume mapping; modify the following steps as needed:
    3) Launch a terminal in unRAID and run the following command to cd into the above directory: cd /mnt/cache/appdata/pihole/dnsmasq.d/
    4) We're going to create an additional dnsmasq config in this directory: nano 99-my-settings.conf
    5) Inside this dnsmasq configuration, add the following: bind-interfaces
    Where the listen-address is the IP address of your unRAID server. The reason this is necessary is that without it, we end up with a race condition depending on whether the Docker container or libvirt starts first. If the Docker container starts first (as happens when you set the container to autostart), libvirt seems to be unable to create a dnsmasq instance, which could cause problems for those of you with VMs. If libvirt starts first, you run into a situation where you get the dreaded "dnsmasq: failed to create listening socket for port 53: Address already in use". This is because, without the above configuration, the dnsmasq created by Pi-hole attempts to listen on all addresses. By the way, this should also fix that error for those of you running Pi-hole normally (I've seen this error a few times in the forum and I can't help but wonder if this was the reason we went with the ipvlan setup in the first place). Now, just restart the container. I tested this and it should not cause any interference with the dnsmasq triggered by libvirt. (Steps 3-5 are also sketched as shell commands below.)
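    For convenience, here is a sketch of steps 3-5 as shell commands. The 192.168.1.100 address is a placeholder for your unRAID server's IP, and the appdata path is the volume mapping assumed above:
        #!/bin/bash
        # Write the extra dnsmasq config described in steps 4-5.
        cd /mnt/cache/appdata/pihole/dnsmasq.d/
        {
            echo "bind-interfaces"
            echo "listen-address=192.168.1.100"   # placeholder: use your unRAID server's IP
        } > 99-my-settings.conf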
  27. 4 points
    I'm very much interested in hearing how things go.
  28. 4 points
  29. 4 points
    This video is a tutorial showing how to 'fairly securely' auto start an encrypted unRAID array by hosting the keyfile on an ftps server online or a local ftp server on your cell phone.
  30. 4 points
    Here is a video that shows what to do if you have a data drive that fails and you want to swap/upgrade it, and the disk you want to replace it with is larger than your parity drive. So this shows the swap parity procedure. Basically you add the new larger drive, then have Unraid copy the existing parity data over to the new drive. This frees up the old parity drive so it can then be used to rebuild the data of the failed drive onto the old parity drive. Hope this is useful.
  31. 4 points
    Currently I have hand-edited options at the end of my VM XML config, as well as a manually selected network model `<model type='vmxnet3'/>`. Example:
        </devices>
        <qemu:commandline>
          <qemu:arg value='one'/>
          <qemu:arg value='two'/>
          <qemu:arg value='three'/>
          <qemu:arg value='four'/>
        </qemu:commandline>
        </domain>
    Could the template editor keep these custom edits by offering an editable field in the template to add <qemu:commandline>, and also offer a dropdown for the network card to allow selecting a model? Thank you in advance.
  32. 4 points
    This is a special release in light of the recent so-called ZombieLoad vulnerability revealed by Intel earlier this week. Normally we don't generate -rc stable patch releases; however, in the interest of maintaining our, and the Community's, sanity, this release is exactly the same as 6.7.0 except for updated Intel CPU microcode and the corresponding Linux kernel (4.19.43). If 6.7.1-rc1 "fails" with something but 6.7.0 "works", then we can be fairly confident the microcode/kernel is to blame. We have released this on the next branch in order to get some testing before publishing for everyone on stable. Please post in this topic any issues you run across which are not present in 6.7.0 - that is, issues that you think can be directly attributed to the microcode and/or kernel changes.
    Version 6.7.1-rc1 2019-05-17
    Linux kernel: version 4.19.43
    intel-microcode: version 20190514a
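    Not part of the release notes, but a quick generic Linux check after rebooting into 6.7.1-rc1 to confirm the newer microcode is actually active:
        grep -m1 microcode /proc/cpuinfo   # shows the microcode revision currently in use
        dmesg | grep -i microcode          # shows whether an early/late microcode update was applied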
  33. 4 points
    In this inaugural blog in the New Users Blog Series, we talk about: Unraid and the USB flash drive Using the USB Flash Creator tool How drives are counted towards the license limit How to reset your root password How to rename your server (Tower) How to change banner images and themes Check it out and let us know what you think! Have ideas/questions about Unraid that you'd like to see a blog written about? Post them here or send me a DM. Cheers! https://unraid.net/blog/unraid-new-users-blog-series
  34. 4 points
    I'll bite... Recently consolidated Unraid Server:
    1X Intel Core i7 8700T
    1X Gigabyte Z370N WIFI
    2X Supermicro VLP 16GB ECC Unbuffered DDR4 2666
    2X Corsair Force MP510 M.2 480GB NVME SSD
    2X Seagate BarraCuda 5TB SATA 6Gb/s 128MB Cache 2.5-Inch 15mm
    1X Supermicro MCP-220-51401-0N Dual 2.5" Fixed HDD Bracket
    1X Seasonic 250SU 250W 80Plus 1U Server Power Supply
    2X Supermicro AOC-SGP-I4 QP 1Gbps Ethernet Controller
    1X Ameri-rack ARC1-PELY423-C5V3 PCIe bifurcation riser card
    1X Thermaltake Engine 27
    1X Logic Supply MK100 1U Rackmount Case
    Mounted in front of: 1X CyberPower OR700LCDRM1U UPS 700VA 400W
    Roles: NAS, PFSense (bridged LAN ports), Plex Server (Docker), Unifi Management (Docker)
    Idle Power of Server + PoE AP + Cable Modem = 52W
  35. 4 points
    Added to Unraid 6.7
  36. 4 points
    In celebration of the Legalization of a certain recreational plant that came into effect earlier this week, I twisted one up and 6.6.3 installed very (very) smoothly.
  37. 4 points
    To clarify: in the case of a single "disk1" and either one or two parity devices, the md/unraid driver will write the same content to disk1 and to either or both parity devices, without invoking XOR to generate P and without using matrix arithmetic to calculate Q. Hence in terms of writes, single disk1 with one parity device functions identically to a 2-way mirror, and with a second parity device, as a 3-way mirror. The difference comes in reads. In a typical N-way mirror, the s/w keeps track of the HDD head position of each device and when a read comes along, chooses the device which might result in the least seek time. This particular optimization has not been coded into md/unraid - all reads will directly read disk1. Also, N-way mirrors might spread reads to all the devices, md/unraid doesn't have this optimization either. Note things are different if, instead of a single disk1, you have single disk2 (or disk3, etc), and also a parity2 device (Q). In this case parity2 will be calculated, so not a true N-way mirror in this case.
  38. 4 points
    Ahhhhh hahahahahahahahaaha.... (sorry)
  39. 4 points
    Hi everyone. First of all, let me say hello to everyone - my first posting after nearly 1 year of using Unraid. So far I have had no big issues, and where there were some smaller ones it was easy to find a fix in the forums. I fiddled around with the topic of core assignments when I started with the TR4 platform at the end of last year. After doing some tests back then, I thought I had figured out which die corresponds to which core shown in the Unraid GUI.
    First of all, my specs:
    CPU: 1950X locked at 3.9GHz @ 1.15V
    Mobo: ASRock Fatal1ty X399 Professional Gaming
    RAM: 4x16GB TridentZ 3200MHz
    GPU: EVGA 1050ti, Asus Strix 1080ti
    Storage: Samsung 960 EVO 500GB NVME (cache drive), Samsung 850 EVO 1TB SSD (Steam library), Samsung 960 Pro 512GB NVME (passthrough Win10 gaming VM), 3x WD Red 3TB (storage)
    After reading your post @thenonsense I was kind of confused, so I decided to do some more testing. Here are my results, which basically confirm your findings. I ran Cinebench (3 times in a row) inside a Win10 VM I have used since the end of last year for gaming and video editing. I also ran some cache and memory benchmarks with Aida64.
    Specs Win10 VM: 8 cores + 8 threads, 16GB RAM, Asus Strix 1080ti, 960 Pro 512GB NVME passthrough
    TEST 1 - initial cores assigned - Cinebench scores: run1: 1564, run2: 1567, run3: 1567
    Next I did the exact same tests with the core assignments you suggested @thenonsense
    TEST 2 - Cinebench scores: run1: 2226, run2: 2224, run3: 2216
    Both the CPU and the memory score improved. The memory performance almost doubled!! Clearly a sign that in the second test only one die was used, and the performance wasn't limited by communication between the dies over the Infinity Fabric as with my old setting.
    After that I decided to do some more testing, this time with a Windows 7 VM with only 4 cores and 4GB of RAM, to check which are the physical cores and which ones are the corresponding SMT threads.
    First test: assigned cores 4 5 6 7 (physical cores only) - Cinebench scores: run1: 558, run2: 558, run3: 557
    Second test: assigned cores 12 13 14 15 (SMT cores only) - Cinebench scores: run1: 540, run2: 542, run3: 541
    Third test: assigned cores 4 5 12 13 (physical + corresponding SMT cores) - Cinebench scores: run1: 561, run2: 563, run3: 560
    And again a clear sign your statement is correct @thenonsense. Cores 0-7 are the physical cores and cores 8-15 are the SMT cores. The second test only uses the SMT cores and clearly shows that the performance is worse than using physical cores as in the first test. I was really sure that, based on my first tests last year, I had configured my Win10 VM to only use the cores from one die and all other VMs to use the correct corresponding core pairs. Clearly not. Did UNRAID change something in how the cores are presented in the web GUI in one of the last versions? I never checked whether something had changed. All my VMs run smoothly without any hiccups or freezes, but as the tests showed, the performance wasn't optimal.
    @limetech It would be nice if you guys could find a way to recognize whether the CPU is a Ryzen/Threadripper based system and present the user the correct core pairing in the web UI. Overall, I have had no bigger issues over the time I have used your product. Let me say thank you for providing us UNRAID. Greetings from Germany, and sorry for my bad English.
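    One generic way to double-check the pairing on any Linux box (independent of how the Unraid GUI numbers things) is to ask the kernel which logical CPUs share a physical core:
        #!/bin/bash
        # Print each logical CPU together with its SMT sibling(s) as reported by sysfs.
        for cpu in /sys/devices/system/cpu/cpu[0-9]*; do
            echo "${cpu##*/}: siblings $(cat "$cpu"/topology/thread_siblings_list)"
        done
    `lscpu -e` gives a similar CPU/core/node view if you prefer a table.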
  40. 4 points
    It's all gone very quiet after the move. I hope everything is OK @limetech?
  41. 4 points
    Not criticizing. It's just that simple answers to simple questions often lead to more questions, some of which may have already been answered. By the way, we are all mostly just volunteers here, so you are not really my customer in any sense. I did take the trouble to re-read the last few pages myself looking for your simple answer, but I'm not really clear on exactly what your question is. Simple questions often have a host of assumptions that I may not have in mind when trying to answer.
  42. 4 points
    The first thing I learned is that I failed miserably at communicating what problem I was trying to solve in my original post. I'll try to restate it here; let's see if it's an improvement:
    PROBLEM: USB flash drives continue to grow in capacity, while unRAID's required space is small by comparison. My problem is that free space is wasted, because no valuable purpose it could serve has been identified. My first unRAID USB flash drive many years ago was 2GB. Then came more at 4GB and 16GB, and my latest is 128GB. By comparison, unRAID's USB drive capacity requirement is simply 1GB. With a 128GB drive, is there a purpose the other 127GB could serve?
    INSPIRATION: @limetech posted a new USB creator tool that allowed >32GB flash drives. I tried it, I love the tool. It got me thinking though: "what is possible with all this extra space on the unRAID USB flash drive?"
    CONSTRAINTS: There are some constraints in this quest that limit the potential applications of use. USB has limited I/O speed. FAT32 has limitations. Of all of these constraints, which could be mitigated sufficiently (not optimally!) to be viable in that 127GB of otherwise wasted space?
    THE ELEPHANT IN THE ROOM: Disk writes affecting the lifespan of the USB flash drive. This is effectively a blocker for many use cases. Consider that this research effort ignored the elephant, simply to find what was possible if lifespan didn't matter, or if writes could be effectively mitigated or are simply not a risk.
    EXPLORED APPLICATIONS OF USE: Not many viable opportunities to pick from. As is quite clear to many who posted here, using USB is not worth exploring for this simply due to all the constraints involved, and there are better options available. Anything I would likely discover in this endeavor was going to be a really narrow use case, because far superior options would be preferable to whatever application of use I'd discovered. I picked Docker.
    WHY DOCKER?: Simply put, the docker.img can be small enough in size that it's not made useless by USB I/O speed. The docker.img could also be smaller than 4GB if needed to work within FAT32 limits. As for /appdata, there are plenty of applications that run in containers that do not create a significant amount of writes that would affect lifespan. For apps that are write or I/O intensive, those can be mitigated by using RAM. There are certain applications that simply cannot be effectively mitigated, though, so again it depends which Docker container apps you are planning to run in this scenario.
    VIABLE SOLUTION FOUND: I used a 128GB SanDisk Cruzer Fit. On my Windows PC, I created a 1GB partition on which UNRAID was installed. I then moved the USB drive to the server and tested that it worked. Then I used PuTTY to SSH into my new UNRAID server. From the command line I used fdisk to create a second 127GB partition. Next I formatted it as XFS and mounted it as /mnt/disks/usb. I created the following directories: /mnt/disks/usb/docker/img and /mnt/disks/usb/docker/appdata. Next I used chown and chmod to set appropriate ownership and permissions (see the sketch below for these command-line steps). Next I moved to the UNRAID GUI. This newly mounted filesystem will appear within UNRAID's Settings > Docker settings in the directory menus, or you can simply enter the path into the field. I set the Docker image filesize to 15GB and pointed it at the docker/img path, and likewise set the appdata path. Started the array and installed Community Applications. Everything works as expected. Research effort a success!
    WHAT NEXT? Nothing planned. 
There is potential here should others see a need to explore this as USB drives continue to grow in capacity and one wishes to utilize that space.
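    For anyone wanting to retrace the command-line portion, here is a hedged sketch of the steps above. It assumes the flash drive shows up as /dev/sda and the new partition as /dev/sda2 (check with lsblk first - formatting the wrong device destroys data), and that Unraid's usual nobody:users ownership is what you want:
        #!/bin/bash
        mkfs.xfs /dev/sda2                                   # format the second (127GB) partition as XFS
        mkdir -p /mnt/disks/usb
        mount /dev/sda2 /mnt/disks/usb
        mkdir -p /mnt/disks/usb/docker/img /mnt/disks/usb/docker/appdata
        chown -R nobody:users /mnt/disks/usb/docker          # assumed ownership; adjust to taste
        chmod -R 775 /mnt/disks/usb/docker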
  43. 3 points
    Can you bake SNMP into the OS without the need for a separate plugin? Unraid is the only NAS without SNMP installed by default...
  44. 3 points
    I'm on it, guys. It looks like there has been a switch to .NET Core, which requires changes to the code; I've now made them and a new image is building.
  45. 3 points
    Libvirt Hotplug USB
    This is my first plugin and an alpha release. It is an Unraid plugin for hot-plugging USB devices to running VMs. Libvirt Hotplug USB allows mounting of USB devices (e.g. keyboard, mouse, iPhone, flash drive, etc.) on running VMs. It uses virsh to attach the devices, which avoids conflicts between different VMs. It works perfectly with all of my USB devices except the iPhone, which requires detaching and attaching multiple times to be detected in the VM. To install this plugin, paste the following URL into the Plugins / Install PlugIn section: https://raw.githubusercontent.com/bshakil/unraid-libvirt-usbhotplug/master/plugins/libvirt.hotplug.usb.plg
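    Under the hood the plugin relies on virsh attach-device; a hand-rolled equivalent looks roughly like the sketch below. The VM name "Windows10" and the 046d:c52b vendor:product ID (taken from lsusb) are placeholders for illustration, not anything the plugin requires:
        #!/bin/bash
        # Build a libvirt USB hostdev definition and attach it to a running VM.
        {
            echo "<hostdev mode='subsystem' type='usb'>"
            echo "  <source>"
            echo "    <vendor id='0x046d'/>"
            echo "    <product id='0xc52b'/>"
            echo "  </source>"
            echo "</hostdev>"
        } > /tmp/hotplug-usb.xml

        virsh attach-device Windows10 /tmp/hotplug-usb.xml --live   # attach while the VM is running
        # virsh detach-device Windows10 /tmp/hotplug-usb.xml --live # detach it again later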
  46. 3 points
    This happens when you change the name of the VM. The primary vdisk location is set to auto and is still looking for the "oldname.img". You will need to manually select the correct img file.
  47. 3 points
    Here's another version - looks the same but I got bored and decided to recreate it from scratch.
  48. 3 points
    Here ya go... The text versions might bump up against the lime tech overlays. Nevertheless, ENJOY!
  49. 3 points
    Here you go, I sent you my unaltered diagnostics via private message. I was able to run the shfs debugging for only a couple of minutes before memory got exhausted... Hope it will help!
  50. 3 points
    Quite correct, Shaun. I didn't check the location of the "Post Install" script within the plg file when I accepted the patch. I moved it to the bottom, right above the script which does the "bottonstart" execution kicking off the plugin, and things are fine now. If anyone has issues applying the update, it may be caused by a rogue /var/log/plugins/ssh.plg file remaining after the failure of 2018.01.18 to load. The "easiest" (i.e. Web UI) fix is a reboot, but that is not required if you are comfortable on the command line. A manual fix can be accomplished by running the command "rm /var/log/plugins/ssh.plg" via your preferred method (e.g. ssh and command line, the User Scripts plugin...). Follow that in the UI with a check for updates on the Plugins page and then update.