
Leaderboard


Popular Content

Showing content with the highest reputation since 02/21/17 in all areas

  1. 23 points
@tillkrueger @jenskolson @trurl @unrateable @jonathanm @1812 @Squid since all of you were active in this thread: I found a way to get the file transfer window back.
- Bring up the Guacamole left panel menu (CTRL+ALT+SHIFT)
- Set Input Method = On Screen Keyboard
- In the On Screen Keyboard, press ALT (it'll stay on, 'pressed'), then TAB, select the window using TAB, then ALT again (to turn it off)
A tip I found too: anytime you do a copy or move, it's best to use the 'queue' button in the pop-up confirmation dialog so that multiple transfers are handled sequentially. The queue is easy to get to, and I found that using it often removes much of my need to see the file transfer progress window. The Queue Manager is easy to get back on the screen via the top menu: Tools > Queue Manager.
  2. 20 points
    Check out this awesome introduction video produced by @SpaceInvaderOne:
  3. 18 points
I wanted to do GPU hardware acceleration with a Plex Docker, but unRAID doesn't appear to have the drivers for the GPUs loaded. It would be nice to have the option to install the drivers so the dockers could use them.
  4. 18 points
I took a stab at writing a How-To based on the feedback in the release thread. If this looks helpful, maybe it could be added to the top post?

Edit: this warrants its own topic, thank you! -tom

-----

Upgrading from 6.4.x to 6.5.0

Nothing to it really:
- Read the first post in the 6.4.1 and 6.5.0 release notes threads
- Consider disabling mover logging; it just adds noise to diagnostics. New 6.4.1 installs have it disabled by default. Go to Settings -> Scheduler -> Mover Settings
- Install/Update the Fix Common Problems plugin, then go to Tools -> Update Assistant and click "Run Tests". Whereas the normal FCP checks for potential problems with your *current* version of unRAID, the Update Assistant checks for incompatibilities with the version of unRAID you are *about* to install. It is highly recommended that you resolve any issues before proceeding. If you choose not to run the Update Assistant, you'll want to perform these steps manually:
  - Ensure your server name does not include invalid characters, or you will have problems in 6.5.x. Only 'A-Z', 'a-z', '0-9', dashes ('-'), and dots ('.') are allowed, and the name must be 15 characters or less. To fix this, go to Settings -> Identification and change the "Server name". (see this)
  - Upgrade all your plugins
  - Uninstall the Advanced Buttons plugin; it was newly discovered to be incompatible (see this). You may want to review the next section for other plugins that are known to have problems, if you skipped that when going to 6.4.0.
  - Note that the S3 Sleep plugin works again; feel free to install it if you removed it when going to 6.4.0 (see this).
- Stop the array (this step is optional, but I like to do it before starting the upgrade)
- Go to Tools -> Update OS and update the OS. You may need to switch from Next to Stable to see the update.
- Reboot! Then check out the "Setting Up New Features" section below.

Upgrading from 6.3.5 (or earlier?) to 6.5.0

Before you upgrade from 6.3.5:
- Read the first post in the 6.4.0, 6.4.1 and 6.5.0 release notes threads
- Consider disabling mover logging; it just adds noise to diagnostics. New 6.4.1 installs have it disabled by default. Go to Settings -> Scheduler -> Mover Settings
- Install/Update the Fix Common Problems plugin, then go to Tools -> Update Assistant and click "Run Tests". Whereas the normal FCP checks for potential problems with your *current* version of unRAID, the Update Assistant checks for incompatibilities with the version of unRAID you are *about* to install. It is highly recommended that you resolve any issues before proceeding. If you choose not to run the Update Assistant, you'll want to perform these steps manually:
  - Ensure your server name does not include invalid characters, or you will have problems in 6.5.x. Only 'A-Z', 'a-z', '0-9', dashes ('-'), and dots ('.') are allowed, and the name must be 15 characters or less. To fix this, go to Settings -> Identification and change the "Server name". (see this)
  - If you have VMs, go to Settings -> VM Manager, switch to Advanced View, and make sure all of the paths are valid. Here are the default settings, but make sure the paths below actually exist on your system. Without these paths, your VMs will not load under 6.4.1. For more info see this.
      default VM storage path -> /mnt/user/domains/ (this is DOMAINDIR in \\tower\flash\config\domain.cfg)
      default ISO storage path -> /mnt/user/isos/ (this is MEDIADIR in \\tower\flash\config\domain.cfg)
  - Delete this file from your flash drive: \\tower\flash\config\plugins\dynamix.plg. This is an old version of the dynamix webgui. Depending on how old it is, it can prevent the new webgui from loading. (see this, this)
  - Upgrade all your plugins (other than the main unRAID OS plugin)
  - You must uninstall Advanced Buttons (see this) and the Preclear plugin (see this, this, this, this, this, this).
  - Consider uninstalling unmenu (see this), the Pipework docker (see this), and any other plugins you no longer use.
  - Note that the S3 Sleep plugin works again; just make sure you have updated to the latest (see this)
  - Consider installing the Fix Common Problems plugin and resolving any issues it highlights
- Additional cleanup you may want to perform:
  - Consider deleting all files from the \\tower\flash\extra folder and installing them using Nerd Tools instead
  - Review your \\tower\flash\config\go script and use a good editor (like Notepad++, not Notepad) to remove as much as possible. For instance, remove any references to "cache_dirs" and use the Dynamix Cache Directories plugin instead. Consider moving other customizations to the User Scripts plugin. Remove any port assignments added on the /usr/local/sbin/emhttp line. FYI, a completely stock go script looks like this:
      #!/bin/bash
      # Start the Management Utility
      /usr/local/sbin/emhttp &
  - Consider installing any BIOS updates that are available for your motherboard
  - If you made any substantial changes in this section, reboot and test to make sure any problems are not the result of these changes.

Performing the upgrade from 6.3.5:
- Stop the array
- (If on 6.3.5 or earlier) Go to Plugins and update the unRAID OS plugin (but don't reboot yet)
- (If on 6.4.0 or one of the 6.4 rc's) Go to Tools -> Update OS and update the OS (but don't reboot yet). You may need to switch from Next to Stable to see the update.
- By default, the webgui in unRAID 6.4.1 will use port 80, and any customization you made to the port in your go script will be ignored. If you need to change the port(s) before booting into 6.4.1 for the first time, use a good editor (like Notepad++, not Notepad) and edit \\tower\flash\config\ident.cfg. Add the following lines to the end of the file, substituting the ports you want to use:
      USE_SSL="auto"
      PORT="80"
      PORTSSL="443"
  NOTE: If you need to change the defaults, be sure to pick values above 1024; e.g. 81 and 43 are *not* good options. Try 8080 and 8443. Once you have booted in 6.4.0, you should no longer edit this file by hand. Better to use the webgui: Settings -> Identification -> Management Access
- If you are running Ryzen: edit your \\tower\flash\config\go script (using a good editor like Notepad++, not Notepad) and add the "zenstates" command right before "emhttp", like this:
      /usr/local/sbin/zenstates --c6-disable
      /usr/local/sbin/emhttp &
  Also, go into your BIOS and disable "Global C-state control"
- Reboot
- It may be helpful to clear your browser's cache

---

Setting Up New Features

unRAID now supports SSL! unRAID will automatically provision and maintain a Lets Encrypt certificate for you, along with the necessary dynamic DNS entries. To enable this, go to Settings -> Identification -> Management Access and click Provision. If you get an error about rebinding protection, wait 10 minutes and try again. If you still get the error, click Help to read how to adjust your router. Using these certificates will change your url to <some long number>.unraid.net. No, it can't be changed without disabling all of the automation and switching to your own certificates. You don't need to remember the number; when you connect via IP or servername it will automatically redirect. If you are concerned about a theoretical DNS outage, know that you can override your DNS by adding an entry to your PC's hosts file. Or, in a pinch, you can edit \\tower\boot\config\ident.cfg and set USE_SSL="no"

If you prefer not to use the fully automated Lets Encrypt certificates, you can set your own domain name and supply your own certificates, or use self-signed certificates. In this mode, you are responsible for managing DNS and ensuring the certificates do not expire. Click the Help icon on the SSL Certificate Settings page for more details.

Want to check out the new themes? Navigate to Settings -> Display Settings -> Dynamix color theme. You may need to change your banner when you change the theme.

Note that you can now use the webgui to assign unique IP addresses to your dockers. If you manually customized your macvlans under 6.3.5, you'll need to set them up again using the webgui.

You can now disable the insecure telnet protocol. Go to Settings -> Identification -> Management Access and set "Use TELNET" to "No"

6.4.1 adds docker support links to the docker page. Any dockers you install in 6.4.1 or later will automatically get this functionality; to update your existing dockers, follow this one-time process.

Solutions to Common Problems

- Are you looking for the "edit XML" option for your VMs? First Edit the VM, then click the button in the upper right corner to switch from "Form View" to "XML View".
- If your system hangs at "loading /bzroot", you need to switch from "legacy" booting to UEFI. The ASRock C236 WSI motherboard in particular needs this; others may as well. (see this, this) To do this: (if needed) rename the "EFI-" folder on the flash drive to "EFI", then go into the bios and set boot priority #1 to "UEFI: {flash drive}".
- Starting with 6.4.1, unRAID now monitors SMART attribute 199 (UDMA CRC errors) for you. If you get an alert about this right after upgrading, you can just acknowledge it, as the error probably happened in the past. If you get an alert down the road, it is likely due to a loose SATA cable. Reseat both ends, or replace if needed. (see this)
- If your VMs will not start after upgrading, go to Settings -> VM Manager, switch to Advanced View, and make sure all of the paths are valid. Here are the default settings, but make sure the paths actually exist on your system:
      default VM storage path -> /mnt/user/domains/ (this is DOMAINDIR in \\tower\flash\config\domain.cfg)
      default ISO storage path -> /mnt/user/isos/ (this is MEDIADIR in \\tower\flash\config\domain.cfg)
  For more info see this. (the Update Assistant could have warned you of this in advance)
- If you can't access the webgui after upgrading to 6.5.0, your server name may include characters that are invalid for NETBIOS names. Only 'A-Z', 'a-z', '0-9', dashes ('-'), and dots ('.') are allowed, and it must be 15 characters or less. (see this) To fix this, edit \\tower\flash\config\ident.cfg using a good editor (like Notepad++, not Notepad) and remove those characters from the "NAME" parameter, then reboot. (the Update Assistant could have warned you of this in advance)
- If you are unable to access the webgui after upgrading, you may have a really old version of the dynamix webgui plugin on your system. Delete \\tower\flash\config\plugins\dynamix.plg and reboot. (see this, this) (the Update Assistant could have warned you of this in advance)
- If you have problems booting after applying the upgrade, move your flash drive to a Windows or Mac machine and run checkdisk, fixing any problems it finds.
- If your cache disk has an "incompatible partition" after upgrading, it was probably created in an older version of the Unassigned Devices plugin. UD has been updated so this won't happen again. (the Update Assistant could have warned you of this in advance) This is a one-time fix: you'll need to downgrade to 6.3.5, move the data off the cache drive, re-format the disk, and move the data back. The procedure is outlined here.
- If you have on-board Aspeed IPMI, you may find that IPMI loses video or changes color during the boot process. To resolve this, go to Main -> Boot Device -> Flash -> Syslinux Config and add "nomodeset" to your "append" line (and reboot). It should look something like this:
      label unRAID OS
      kernel /bzimage
      append initrd=/bzroot nomodeset
  You'll probably want to repeat that on each of the other append lines in this file.
- If you don't see CPU load statistics on the dashboard, or if you can't use the new web-based terminal, switch to Chrome or Firefox instead of Safari. This is a shortcoming in Safari, not a bug in unRAID. (Doesn't apply in 6.5.0.)
- Are you having problems with your Lets Encrypt docker? First, make sure the webgui and docker aren't both trying to use the same port. Beyond that, Lets Encrypt recently made some changes that break things. These issues are not related to the unRAID upgrade; refer to the LE docker thread for help.
- Speedtest complaining about Python? The timing is coincidental, but it is not related to the unRAID upgrade; see the Speedtest thread.
- If you are unable to update your dockers, or if you see this error in the syslog:
      Feb 3 12:08:59 Unraid-Nas [6503]: The command failed. Error: sh: /usr/local/emhttp/usr/bin/docker: No such file or directory
  then you need to uninstall the Advanced Buttons plugin. It is not currently compatible with 6.4.1+. See this. (the Update Assistant could have warned you of this in advance)
- If you get this error message when trying to install the preclear plugin: "unRAID version (6.4.1 - GCC 7.3.0) not supported." please re-read the "before you upgrade" section of this post. The preclear plugin is not compatible with 6.4.1+.
- If you see this error message in your logs, please re-read the "before you upgrade" section of this post. rc.diskinfo is part of the preclear plugin, which is not compatible with 6.4.1. (the Update Assistant could have warned you of this in advance)
      Feb 6 06:21:30 Tower rc.diskinfo[17255]: PHP Warning: file_put_contents(): Only 0 of 2584 bytes written, possibly out of free disk space in /etc/rc.d/rc.diskinfo on line 499
- Are you seeing a "wrong csrf_token" message in your logs? Close all your browser tabs (on all computers) that were pointed at unRAID prior to the last reboot. More info
- If you have severe errors (no lan, array won't start, webgui won't start), try installing on a new flash drive. If it works, that means the problem is with one of your customizations. You can either try to find and fix the problem (if you skipped the "before you upgrade" section, that would be a good place to start), or move forward with the clean flash drive. To continue with the clean drive, copy just the basics (/config/super.dat and your key file) from the old drive to the new one and then reconfigure as needed.
- Are you still having problems? Review the expanded "Before you upgrade" section above. If that doesn't help, grab your diagnostics (Tools -> Diagnostics) if you can, and reboot into Safe Mode. If your problems go away, then the problem is likely with a plugin. If the problems persist, you'll need additional help. Either way, grab your diagnostics again while in safe mode and attach both sets to a forum post where you clearly explain the issue.
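The server-name rule above (only 'A-Z', 'a-z', '0-9', dashes, and dots; at most 15 characters) can be checked from any shell before you upgrade. This is just a sketch; valid_server_name is a hypothetical helper, not part of unRAID:

```shell
# Check a name against the 6.5.x NETBIOS-style rule described above:
# only A-Z, a-z, 0-9, dashes and dots, and no more than 15 characters.
# (valid_server_name is a hypothetical helper, not an unRAID command.)
valid_server_name() {
  case "$1" in
    ""|*[!A-Za-z0-9.-]*) return 1 ;;  # empty, or contains a disallowed char
  esac
  [ "${#1}" -le 15 ]                  # must be 15 characters or less
}

valid_server_name "Tower"          && echo "Tower: ok"
valid_server_name "my_server"      || echo "my_server: invalid (underscore)"
valid_server_name "a-very-long-server-name" || echo "a-very-long-server-name: too long"
```

If the name fails the check, fix it in Settings -> Identification (or in the "NAME" parameter of \\tower\flash\config\ident.cfg) before upgrading.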
  5. 15 points
Currently unRAID uses basic auth to enter credentials for the web gui, but many password managers don't support this. It would be great if we could get a proper login page. Examples: this kind of login page always works with password managers; this kind does not.
  6. 13 points
Hi guys, this is a simple plugin that allows users to clear their disks before adding them to the array. The main characteristics of this plugin are:
- Modularity: can be used standalone or in conjunction with the Joe L. or bjp999 scripts;
- Ease of use: with a few clicks you can start a clear session on your disk;
- Integration: you can always access the plugin under the Tools > Preclear Disk menu. If you have Unassigned Devices installed, you can start/stop/view preclear sessions directly from Main > Unassigned Devices.
- All dependencies included: you don't need SCREEN to run a preclear session in the background; all jobs are executed in the background by default, so you can close your browser while the preclear runs.

You can install it directly or via Community Apps.

Q & A:

Q) Why aren't the Joe L. or bjp999 scripts included?
A) I'm not authorized by Joe L. to redistribute his script, so you need to download a copy from the topic above and put it under /boot/config/plugins/preclear.disk/, renaming it to preclear_disk.sh if necessary. The bjp999 modifications are unofficial, so I decided not to include them by default.

Q) By default, I see a "gfjardim" script always available. Why?
A) Since I'm not authorized to redistribute the Joe L. script, and given the author's recently slow support, I decided a major code rewrite of the script was needed. The new script is being actively supported, is compatible with unRAID notifications, is faster than the bjp999 script, and has a cleaner output so users can easily visualize what's going on with their disks.

Q) I want to use one of the older scripts (Joe L. or bjp999) in conjunction with notifications. Is that possible?
A) Yes. I've made some adjustments to both scripts so they are compatible with unRAID notifications; the Joe L. version can be found here and the bjp999 version can be found here.

Q) Are there any how-tos available?
A) gridrunner made an awesome video explaining why preclearing a hard disk is a good idea, and how you can accomplish that:

Q) The plugin asked me to send some statistics information. How does the statistics report system work? Is it safe? Is it anonymous?
A) To better track the usage of the plugin, a statistics report system was put in place. The main goals I intend to achieve are:
- know the number of disks that get precleared;
- fix any silent bugs that get reported in the logs;
- know the average size of disks, their model, and the average speed and elapsed time we should expect from that model;
- success rate;
- rate of disks with SMART problems.
This system is totally optional, and users are prompted before each report is sent. It is also safe and totally anonymous, since all data is sent to Google Forms and no identifying data, such as disk serial numbers, is exported. Detailed info can be found here. The statistics are public and can be found here.

Q) How can I download a copy of the plugin log?
A) Please go to Tools, then Preclear Disk, and click on the Download icon:

Q) What are the differences between Erase and Clear?
A) The Clear option uses zeroes to fill the drive; at the end, the drive can be added to the array immediately. The Erase All the Disk option uses random data to wipe out the drive; the resulting drive can't be quickly added to the array. If you want to add it after an erase, you must select Erase and Clear the Disk.
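The Clear vs. Erase distinction above comes down to what gets written to the disk: zeroes (which the array can accept immediately) versus random data (which cannot). A minimal sketch of the idea, deliberately run against ordinary files rather than real devices; zero_fill and random_fill are hypothetical names, not part of the preclear plugin:

```shell
# Illustrate Clear (zero-fill) vs. Erase (random-fill) on plain files,
# NOT on real block devices. Helper names here are hypothetical.
zero_fill()   { dd if=/dev/zero    of="$1" bs=1M count="$2" 2>/dev/null; }  # "Clear"
random_fill() { dd if=/dev/urandom of="$1" bs=1M count="$2" 2>/dev/null; }  # "Erase"

zero_fill   /tmp/clear-demo.img 1
random_fill /tmp/erase-demo.img 1

# A cleared disk is all zeroes, which is why it can join the array right away;
# an erased disk holds random bytes and would have to be cleared again first.
tr -d '\0' < /tmp/clear-demo.img | grep -q . || echo "clear-demo.img is all zeroes"
tr -d '\0' < /tmp/erase-demo.img | grep -q . && echo "erase-demo.img contains random data"
```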
  7. 13 points
Hi, guys. This is the first part of a two-part video about setting up a Windows 10 KVM VM in unRAID. (Second part in a day or 2 if work lets me!) The first part deals with setting up the VM correctly to be able to use it as a 'daily driver'; the second part covers passing through hardware to turn it into a gaming VM.

The first part consists of:
- Downloading a Windows 10 iso
- Where to buy a license for Windows 10 Pro for $20
- How to assign resources and correctly pin your CPUs
- How to install the virtio drivers, including the qxl graphics driver
- How to remove or block the Windows 10 data mining, phone home, etc. with Anti Beacon
- How to install multiple useful programmes with Ninite
- Using Splashtop desktop for good quality remote viewing
- How to install a virtual sound card to have sound in Splashtop/RDP etc.
- Using mapped drives and symlinks to get the most out of the array
- Windows tweaks for VM compatibility
- General tips

Hope you find it useful.

The best way to install and set up a Windows 10 VM as a daily driver or a gaming VM

Below is the second part of the two-part video about setting up a Windows 10 KVM VM in unRAID. The second part deals with passing through hardware, and with potential problems and their solutions, showing you how to turn it into a gaming VM. Hope you find it useful.
  8. 13 points
There are several things you need to check in your Unraid setup to help prevent the dreaded unclean shutdown; in particular, there are several timers that you need to adjust for your specific needs.

There is a timer in Settings->VM Manager->VM Shutdown time-out that needs to be set to a high enough value to allow your VMs time to completely shut down. Switch to the Advanced View to see the timer. Windows 10 VMs will sometimes have an update that requires a shutdown to perform. These can take quite a while, and the default setting of 60 seconds in the VM Manager is not long enough. If the VM Manager timer setting is exceeded on a shutdown, your VMs will be forced to shut down. This is just like pulling the plug on a PC. I recommend setting this value to 300 seconds (5 minutes) in order to ensure your Windows 10 VMs have time to completely shut down.

The other timer used for shutdowns is Settings->Disk Settings->Shutdown time-out. This is the overall shutdown timer, and when this timer is exceeded, an unclean shutdown will occur. This timer has to be more than the VM shutdown timer. I recommend setting it to 420 seconds (7 minutes) to give the system time to completely shut down all VMs, Dockers, and plugins.

These timer settings do not extend the normal overall shutdown time; they just allow Unraid the time needed to do a graceful shutdown and prevent an unclean shutdown.

One of the most common reasons for an unclean shutdown is having a terminal session open. Unraid will not force these to shut down, but instead waits for them to be terminated while the shutdown timer is running. After the overall shutdown timer runs out, the server is forced to shut down. If you have the Tips and Tweaks plugin installed, you can specify that any bash or ssh sessions be terminated so Unraid can shut down gracefully and won't hang waiting for them to terminate (which they won't without human intervention).

If your server seems hung and nothing responds, try a quick press of the power button. This will initiate a shutdown that will attempt a graceful shutdown of the server. If you have to hold the power button to do a hard power off, you will get an unclean shutdown.

If an unclean shutdown does occur because the overall "Shutdown time-out" was exceeded, Unraid will attempt to write diagnostics to the /log/ folder on the flash drive. When you ask for help with an unclean shutdown, post the /log/diagnostics.zip file. There is information in the log that shows why the unclean shutdown occurred.
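The key relationship above is that the overall Shutdown time-out must be larger than the VM Shutdown time-out, or VMs can still be mid-shutdown when the unclean shutdown triggers. A hypothetical sanity check; the function and its arguments are illustrative, not unRAID's actual configuration keys:

```shell
# Sanity-check the two timers described above. check_shutdown_timers is a
# hypothetical helper; the values come from Settings->VM Manager->VM Shutdown
# time-out and Settings->Disk Settings->Shutdown time-out, in seconds.
check_shutdown_timers() {
  vm_timeout="$1"       # e.g. 300 (5 minutes), per the recommendation above
  overall_timeout="$2"  # e.g. 420 (7 minutes)
  if [ "$overall_timeout" -gt "$vm_timeout" ]; then
    echo "ok: overall timeout (${overall_timeout}s) exceeds VM timeout (${vm_timeout}s)"
  else
    echo "warning: overall shutdown timeout must be greater than the VM timeout"
    return 1
  fi
}

check_shutdown_timers 300 420           # recommended values
check_shutdown_timers 300 60 || true    # the 60s default overall would be too short
```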
  9. 13 points
Ok, the move is pretty much complete. Quite a pain moving both residence and corp headquarters at the same time! FYI, here's a brief history:
circa 2005/2006: unRAID born, Sunnyvale, CA
2008-2011: Fort Collins, CO
2012-early 2018: San Diego, CA (incorporated 2015)
present: Anaheim, CA (no, we're not in the tree house in Disneyland)
Jon, meanwhile, moved within the same city near Chicago. Next up: release 6.5.3-rc2, which brings us up to date with the linux 4.14 LTS kernel, along with a handful of bug fixes. As soon as that release is promoted to stable, we'll get the next release, unRAID 6.6, out there. Thanks to everyone for your patience during this time.
  10. 12 points
Plugin Name: Unraid Nvidia
Github: https://github.com/linuxserver/Unraid-Nvidia-Plugin

This plugin from LinuxServer.io allows you to easily install a modified Unraid version with Nvidia drivers compiled and the docker system modified to use an nvidia container runtime, meaning you can use your GPU in any container you wish.

We will be asking mods to remove any posts discussing circumvention of Nvidia restrictions. We have worked hard to bring this work to you, and we don't want to upset Nvidia. If they were to threaten us with any legal action, all our source code and this plugin would be removed. Remember, we are all volunteers, with regular jobs and families to support. Please, if you see anyone else mentioning anything that contravenes this rule, flag it up to the mods. People that discuss this here could potentially ruin it for all of you.

EDIT: 25/5/19 OK everyone, the Plex script seems to be causing more issues than the Unraid Nvidia build, as far as I can tell. From this point on, to reduce the unnecessary noise and confusion on this thread, I'm going to request that whoever is looking after, documenting, or willing to support the Plex scripts spins off their own thread. We will only be answering support questions from people not using the script. If your post is regarding Plex and you do not EXPLICITLY state that you are not using the Plex script, then it will be ignored. I know some of you may think this is unreasonable, but it's creating a lot of additional work/time commitments for something I never intended to support and something I don't use (not being a Plex user). May I suggest, respectfully, that one of you steps forward to create a thread, document it, and support it in its own support place. I think we need to decouple issues with the work we've done from issues with a currently unsupported script. Thanks.
  11. 12 points
Hello Unraid fans! Earlier this year, we embarked upon a journey to reinvent our brand and come up with better ways to convey the value that Unraid brings to so many people. What you are now seeing on Unraid.net is the first step of that adventure, and we sure hope you like it. Unraid has become so much more important to so many people due to the sheer flexibility and control that it gives users over their hardware. We let folks build rigs as small or big as they want, and the OS scales its capabilities in parallel. And best of all, we have one of the best user communities of any OS out there. Our forum is filled with folks eager to help you realize the full potential of your system and ready to lend a helping hand when things don't go as planned. That is why we took the time, effort, and resources to ditch the old "Lime Technology" site and embrace Unraid.net as our new platform. But we're not stopping there. The goal is to continually make the product easier and easier to use while delivering features that are highly desired and valuable. Stay tuned to the announcements forums for more information on our upcoming software releases so you don't miss a thing! Oh, and welcome to Unraid.net.
  12. 12 points
    I would love to be able to view the cpu thread pairs from the vm templates like this
  13. 11 points
Note: To view the application lists before installing unRaid, click HERE

Community Applications (aka CA)

This thread is rather long (and mostly off-topic), and it is NOT necessary to read it in order to utilize Community Applications (CA). Just install the plugin, go to the Apps tab, and enjoy the freedom. If you find an issue with CA, then don't bother searching for answers in this thread, as all issues (when they have surfaced) are generally fixed the same day they are found... (But at least read the preceding post or two on the last page of the thread.) This is, without question, the best supported plugin / addon in the universe - on any platform. With a simple interface that's easy to use, you will be able to find and install any of the unRaid docker or plugin applications, and also optionally gain access to the entire library of applications available on dockerHub (~1.8 million).

INSTALLATION

To install this plugin, paste the following URL into the Plugins / Install PlugIn section:

https://raw.githubusercontent.com/Squidly271/community.applications/master/plugins/community.applications.plg

After installation, a new tab called "Apps" will appear on your unRaid webGUI. To see what the various icons do, simply press Help or the (?) on unRaid's Tab Bar.

Note: All screenshots in this post are subject to change as Community Applications continues to evolve.

- Easily search or browse applications
- Get full details on the application
- Easily reinstall previously installed applications
- And much, much more (including the ability to search for and install any of the containers available on dockerHub (1,000,000+))

USING CA

CA also has a dedicated Settings section (click Settings) which will let you fine-tune certain aspects of its operation.

NOTE: The following video was made prior to the current user interface, so it will look significantly different than the plugin itself, but it's still worth a watch.

Buy Andrew A Beer!

Note that CA is always (and always will be) compatible with the latest Stable version of unRaid, and the Latest/Next version of unRaid. Intermediate versions of various Release Candidates may or may not be compatible (though they usually are - but if you have made the decision to run unRaid Next, then you should also ensure that all plugins and unRaid itself (not just CA) are always up to date). Additionally, every attempt is made to keep CA compatible with older versions of unRaid. As of this writing, CA is compatible with all versions of unRaid from 6.4 onward.

Cookie Note: CA utilizes cookies in its regular operation. Some features of CA may not be available if cookies are not enabled in your browser. No personally identifiable information is ever collected, no cookies related to any software or media stored on your server are ever collected, and none of the cookies are ever transmitted anywhere. Cookies related to the "Look & Feel" of Community Applications will expire after a year. Any other cookies related to the operation of Community Applications are automatically deleted after they are used (usually within a minute or two).
  14. 11 points
    It's easy for us to get overwhelmed by new issues, especially coinciding with new features and new kernel releases. Our lack of immediate reply does not mean your report is being ignored. We very much appreciate all hints, testing results, etc. Remember, for very odd issues, please reboot in "Safe Mode" to ensure no strange interaction with a plugin.
  15. 11 points
    New in Unraid OS 6.7 release:

New Dashboard layout, along with new Settings and Tools icons. Designed by user @Mex and implemented in collaboration with @bonienl. We think you will find this is a big step forward.

Time Machine support via SMB. To enable this feature it is necessary to first turn on Enhanced OS X interoperability on the Settings/SMB page. Next, select a share to Export for Time Machine in the share SMB Security Settings section. AFP support is deprecated.

Linux kernel 4.19. This is the latest Long Term Support kernel. We will go with this for a while but anticipate updating to 4.20 or even 5.0 for Unraid 6.7.0 stable. Here are some other kernel-related updates: Added TCP "BBR Congestion control" and made it the default. This should improve network throughput, but probably not too many users will notice anything different. Added Bluetooth support in the Linux kernel. We did not add the user-space tools, so this will be mostly useful to support Bluetooth in docker containers. AMD firmware update for Threadripper.

Ignore case in validating user share names. If there are multiple top-level directories which differ only in case, then we use the first such share name encountered, checking in order: cache, disk1, disk2, ..., diskN. Additional top-level directories encountered will be ignored. For example, suppose we have:

/mnt/cache/ashare
/mnt/disk1/Ashare
/mnt/disk2/ashare

The name of the exported share will be 'ashare' and will consist of a union of /mnt/cache/ashare and /mnt/disk2/ashare. The contents of /mnt/disk1/Ashare will not show up in /mnt/user/ashare. If you then delete the contents of /mnt/user/ashare followed by deleting the 'ashare' share itself, share 'Ashare' will become visible. Similarly, if the contents of /mnt/cache/ashare are deleted (or moved), you will then see share 'Ashare' appear, and it will look like the contents of 'ashare' are missing! Thankfully very few (if any) users should be affected by this, but it handles a corner case in both the presentation of shares in Windows networking and the storage of share config data on the USB flash boot device.

New vfio-bind method. Since it appears that the xen-pciback/pciback kernel options no longer work, we introduced an alternate method of binding, by ID, selected PCI devices to the vfio-pci driver. This is accomplished by specifying the PCI ID(s) of devices to bind to vfio-pci in the file 'config/vfio-pci.cfg' on the USB flash boot device. This file should contain a single line that defines the devices:

BIND=<device> <device> ...

where <device> is a Domain:Bus:Device.Function string, for example, BIND=02:00.0. Multiple devices should be separated with spaces. The script /usr/local/sbin/vfio-pci is called very early in system start-up, right after the USB flash boot device is mounted but before any kernel modules (drivers) have been loaded. The function of the script is to bind each specified device to the vfio-pci driver, which makes them available for assignment to a virtual machine, and also prevents the Linux kernel from automatically binding them to any present host driver. In addition, and importantly, this script will bind not only the specified device(s), but all other devices in the same IOMMU group as well. For example, suppose there is an NVIDIA GPU which defines both a VGA device at 02:00.0 and an audio device at 02:00.1. Specifying a single device (either one) on the BIND line is sufficient to bind both devices to vfio-pci. The implication is that either all devices of an IOMMU group are bound to vfio-pci or none of them are.

Added 'telegram' notification agent support - thank you @realies

Finally, we updated several base packages, including a move to Samba 4.9 and Docker 18.09, and fixed a number of minor bugs.
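A minimal sketch of the one-line vfio-pci.cfg format described above. The PCI ID 02:00.0 is the example from the release notes (find your own with lspci); the sketch writes to /tmp purely so it is safe to run anywhere, whereas on a real server the file belongs at config/vfio-pci.cfg on the USB flash boot device.

```shell
# Illustrate the single-line BIND= format of config/vfio-pci.cfg.
# 02:00.0 is an example Domain:Bus:Device.Function string; substitute
# your own PCI IDs (space-separated) as reported by lspci.
cfg=/tmp/vfio-pci.cfg
echo "BIND=02:00.0" > "$cfg"
cat "$cfg"
```

Remember that every other device in the same IOMMU group will be bound along with the one you list.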
Version 6.7.0-rc1 2019-01-21

Base distro:
aaa_elflibs: version 15.0 (rev 3)
acpid: version 2.0.31
adwaita-icon-theme: version 3.30.1
at: version 3.1.23
at-spi2-atk: version 2.30.0
at-spi2-core: version 2.30.0
atk: version 2.30.0
bin: version 11.1 (rev 3)
bluez: version 4.101
bluez firmware: version 1.2
bridge-utils: version 1.6
btrfs-progs: version v4.19.1
ca-certificates: version 20181210
cairo: version 1.16.0
cifs-utils: version 6.8
coreutils: version 8.30 (rev 4)
curl: version 7.63.0
cyrus-sasl: version 2.1.27
dbus: version 1.12.12
dhcpcd: version 7.0.8
diffutils: version 3.7
dmidecode: version 3.2
dnsmasq: version 2.80
docker: version 18.09.1
e2fsprogs: version 1.44.5
etc: version 15.0 (rev 9)
ethtool: version 4.19
file: version 5.35
findutils: version 4.6.0
fribidi: version 1.0.5
gdbm: version 1.18.1
gdk-pixbuf2: version 2.38.0
git: version 2.20.1
glibc-zoneinfo: version 2018g
glib2: version 2.58.2
gnutls: version 3.6.5 (CVE-2018-16868)
gptfdisk: version 1.0.4
graphite2: version 1.3.13
grep: version 3.3
gtk+3: version 3.24.2
gzip: version 1.10
harfbuzz: version 2.3.0
haveged: version 1.9.4
hdparm: version 9.58
hostname: version 3.21
hwloc: version 1.11.11
icu4c: version 63.1
inotify-tools: version 3.20.1
intel-microcode: version 20180807a
iproute2: version 4.19.0
iptables: version 1.8.2
iputils: version s20180629
irqbalance: version 1.5.0
jansson: version 2.12
kernel-firmware: version 20181218_0f22c85
keyutils: version 1.6
libSM: version 1.2.3
libX11: version 1.6.7
libarchive: version 3.3.3
libcap-ng: version 0.7.9
libdrm: version 2.4.96
libedit: version 20181209_3.1
libepoxy: version 1.5.3
libestr: version 0.1.11
libevdev: version 1.6.0
libgcrypt: version 1.8.4
libgpg-error: version 1.33
libjpeg-turbo: version 2.0.1
libnftnl: version 1.1.2
libpcap: version 1.9.0
libpng: version 1.6.36
libpsl: version 0.20.2
libpthread-stubs: version 0.4 (rev 3)
librsvg: version 2.44.11
libtirpc: version 1.1.4
libvirt: version 4.10.0
libwebp: version 1.0.1
libxcb: version 1.13.1
lm_sensors: version 3.5.0
logrotate: version 3.15.0
lvm2: version 2.03.02
lzip: version 1.20
lz4: version 1.8.3
mc: version 4.8.22
mesa: version 18.3.0
miniupnpc: version 2.1
nano: version 3.2
ncurses: version 6.1_20181110
netatalk: version 3.1.12 (CVE-2018-1160)
nettle: version 3.4.1 (CVE-2018-16869)
nghttp2: version 1.35.1
nginx: version 1.14.2 (+ nchan 1.2.3) (CVE-2018-16843, CVE-2018-16844, CVE-2018-16845)
ntp: version 4.2.8p12 (rev 5)
openldap-client: version 2.4.47
pciutils: version 3.6.2
pcre2: version 10.32
php: version 7.2.13
pixman: version 0.36.0
pkgtools: version 15.0 (rev 23)
pv: version 1.6.6
qemu: version 3.1.0
rpcbind: version 1.2.5
rsyslog: version 8.40.0
samba: version 4.9.4 (CVE-2018-14629, CVE-2018-16841, CVE-2018-16851, CVE-2018-16852, CVE-2018-16853, CVE-2018-16857)
sed: version 4.7
shadow: version 4.6
shared-mime-info: version 1.10
smartmontools: version 7.0
spice: version 0.14.1
spice-protocol: version 0.12.14
sqlite: version 3.26.0
sudo: version 1.8.26
sysvinit-scripts: version 2.1 (rev 24)
sysvinit: version 2.93
tar: version 1.30 (rev 3)
tree: version 1.8.0
ttyd: version 1.4.2
util-linux: version 2.33
wget: version 1.20
xauth: version 1.0.10 (rev 3)
xfsprogs: version 4.19.0
wget: version 1.20.1
xkeyboard-config: version 2.25
xterm: version 341
zstd: version 1.3.8

Linux kernel:
version: 4.19.16
OOT Intel 10Gbps network driver: ixgbe: version 5.5.3
OOT Tehuti 10Gbps network driver: tn40xx: version 0.3.6.17
added drivers:
CONFIG_USB_SERIAL_CH341: USB Winchiphead CH341 Single Port Serial Driver
added TCP BBR congestion control kernel support and set as default:
CONFIG_NET_KEY: PF_KEY sockets
CONFIG_TCP_CONG_BBR: BBR TCP
CONFIG_NET_SCH_FQ: Fair Queue
CONFIG_NET_SCH_FQ_CODEL: Fair Queue Controlled Delay AQM (FQ_CODEL)
added Bluetooth kernel support:
CONFIG_BT: Bluetooth subsystem support
CONFIG_BT_BREDR: Bluetooth Classic (BR/EDR) features
CONFIG_BT_RFCOMM: RFCOMM protocol support
CONFIG_BT_RFCOMM_TTY: RFCOMM TTY support
CONFIG_BT_BNEP: BNEP protocol support
CONFIG_BT_BNEP_MC_FILTER: Multicast filter support
CONFIG_BT_BNEP_PROTO_FILTER: Protocol filter support
CONFIG_BT_HIDP: HIDP protocol support
CONFIG_BT_HS: Bluetooth High Speed (HS) features
CONFIG_BT_LE: Bluetooth Low Energy (LE) features
CONFIG_BT_HCIBTUSB: HCI USB driver
CONFIG_BT_HCIBTUSB_AUTOSUSPEND: Enable USB autosuspend for Bluetooth USB devices by default
CONFIG_BT_HCIBTUSB_BCM: Broadcom protocol support
CONFIG_BT_HCIBTUSB_RTL: Realtek protocol support
CONFIG_BT_HCIUART: HCI UART driver
CONFIG_BT_HCIUART_H4: UART (H4) protocol support
CONFIG_BT_HCIUART_BCSP: BCSP protocol support
CONFIG_BT_HCIUART_ATH3K: Atheros AR300x serial support
CONFIG_BT_HCIUART_AG6XX: Intel AG6XX protocol support
CONFIG_BT_HCIUART_MRVL: Marvell protocol support
CONFIG_BT_HCIBCM203X: HCI BCM203x USB driver
CONFIG_BT_HCIBPA10X: HCI BPA10x USB driver
CONFIG_BT_HCIVHCI: HCI VHCI (Virtual HCI device) driver
CONFIG_BT_MRVL: Marvell Bluetooth driver support
CONFIG_BT_ATH3K: Atheros firmware download driver
md/unraid: version 2.9.5 (kernel BUG if read phase of read/modify/write with FUA flag set fails on stripe with multiple read failures)
patch: PCI: Quirk Silicon Motion SM2262 NVMe controller reset
patch: support Mozart 395S chip

Management:
add early vfio-bind utility
fix: docker log rotation
fix: inconsistent share name case
fix: terminal instances limited to 8 (now lifted)
restore PHP E_WARNING in /etc/php/php.ini
support Apple Time Machine via SMB
update smartmontools drivedb and hwdata/{pci.ids,usb.ids,oui.txt,manuf.txt}
webgui: New icon reference
webgui: Added new font icons
webgui: added new case icons
webgui: Revamped dashboard page
webgui: Replaced orb png icons by font-awesome
webgui: Position context menu always left + below icon
webgui: Do not capitalize path names in title of themes Azure and Gray
webgui: Allow plugins to use font awesome for icon
webgui: sort notification agents alphabetically, add telegram notifications
webgui: Dashboard: use disk thresholds for utilization bars
webgui: VM manager: remove and rebuild USB controllers
webgui: Fixed: slots selection always disabled after "New Config"
webgui: Fix Background color when installing container
webgui: Fixed share/disk size calculation when names include space
webgui: Add log-size and log-file options to docker run command
webgui: Escape quotes on a containers template
webgui: Prevent update notification if plugin is not compatible
webgui: other GUI enhancements
  16. 11 points
    He was wondering if Limetech had quit, which would explain why the forums were so quiet, but that is not quite the case: they were just moving.
  17. 11 points
  18. 10 points
    Summary: Support Thread for ich777 Gameserver Dockers (Counter-Strike: Source & Counter-Strike: GO, Team Fortress 2, ArmA III,... - complete list in the second post)
Application: SteamCMD
DockerHub: https://hub.docker.com/r/ich777/steamcmd
All dockers are easy to set up and highly customizable. All dockers are tested with the standard configuration (port forwarding,...) to confirm they are reachable and show up in the server list from the "outside". The standard password for the gameservers, if enabled, is: Docker
Please read the description of each docker and the variables you set when installing it (some dockers need special variables to run).
  19. 10 points
    This Docker is in BETA status. Some controllers may not be detected, and errors may be thrown. If this occurs while using this beta version, please help in resolving these issues.

Installation

Via the Community Application: Search for "DiskSpeed"

Manual Installation (the Community Applications plugin is currently having issues; here's a workaround for now):
1. Save the attached "my-DiskSpeed.xml" file to your NAS under \\tower\flash\config\plugins\dockerMan\templates-user
2. View the Docker tab in your unRAID Administrator and click on "Add Container"
3. Under "Select a template", pick "my-DiskSpeed"
4. The defaults should work as-is unless you have port 18888 already in use. If so, change the Web Port & WebUI settings to a new port number.

The Docker will create a directory called "DiskSpeed" in your appdata directory to hold persistent data. Note: Privileged mode is required so that the application can see the controllers & drives on the host OS. This docker will use up to 512MB of RAM. RAM optimization will happen in a later BETA.

Running

View the Docker tab in your unRAID Administrator, click on the icon next to "DiskSpeed" and select WebUI. A new window will open. On first run (or after the Docker app is updated or you select to rescan hardware), the application will scan your system to locate drive controllers & the hard drives attached to them.

Drive Images

As of this post, the Hard Drive Database (HDDB) has 825 drive models in 20 brands. If you have one or more drives that do not have a predefined image in the HDDB, you have a couple of options: wait for me to add the image, which will be displayed after you click "Rescan Controllers", or add the drive yourself by editing it and uploading a drive image for it. You can view drive images in the HDDB to see if there's an image that'll fit your drive and optionally upload it so others can benefit.

Controller & Drive Identification Issues

Some drives, notably SSDs, do not reveal the Vendor correctly or at all. If you view the Drive information and it has the same value for the vendor as the model, or an incorrect or missing Vendor, please inform me so that I can manually add the drive to the database or add code to handle it. If you have a controller that is not detected, please notify me.

Benchmarking Drives

The current method of benchmarking the hard drives is to read the drive at certain percentages for 15 seconds and take the average speed over each of those seconds, except for the first 2 seconds, which tend to trend high. Hard drives report an optimal block size to use while reading; if one isn't reported, a block size of 128K is used. Since Docker under unRAID requires the array to be running, SpeedGap detection was added to detect disk drive activity during the test by comparing the smallest & largest amount of bytes read over the 15 seconds. If a gap is detected over a given size, which starts at 45MB, the allowed gap is increased by 5MB and the spot retested. If a drive keeps triggering the SpeedGap detection over each spot, you may need to disable SpeedGap detection, as the drive is very erratic in its read speeds. One drive per controller is tested at a time.

In the future, each drive will be tested to see its maximum transfer rate, and the system will be tested to see how much data each controller can transfer and how much data the entire system bus can handle. Then multiple drives over multiple controllers will be read simultaneously while keeping the overall bandwidth per controller & system under the maximum transfer rate.

Contributing to the Hard Drive Database

If you have a drive that doesn't have information in the Hard Drive Database other than the model, or you've performed benchmark tests, a button will be displayed at the bottom of the page labeled "Upload Drive & Benchmark Data to the Hard Drive Database". The HDDB will display information given up by the OS for the drives and the average speed graphs for comparison.

Application Errors

If you get an error message, please post the error here and the steps you took to cause it. There will be a long string of java diagnostics after the error message (java stack) that you do not need to include, just the error message details. If you can't get past the Scanning Hardware screen, change the URL from http://[ip]:[port]/ScanControllers.cfm to http://[ip]:[port]/isolated/CreateDebugInfo.cfm and hit enter. Note: The unRAID diagnostic file doesn't provide any help. If submitting a diagnostic file, please use the link at the bottom of the controllers in the DiskSpeed GUI.

Screenshots: Home Screen (click top label to return to this screen), Controller Information, Drive Information, Drive Editor
Attachment: my-DiskSpeed.xml
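The SpeedGap logic described above can be sketched roughly as follows. All numbers are hypothetical stand-ins; the real check runs inside the DiskSpeed app on actual per-second byte counts.

```shell
# Rough sketch of SpeedGap detection: compare the slowest and fastest
# per-second read figures (in MB) over a 15-second sample; if the
# spread exceeds the allowance (starting at 45MB), grow the allowance
# by 5MB and flag the spot for a retest.
min=180   # slowest per-second read, MB (hypothetical)
max=240   # fastest per-second read, MB (hypothetical)
gap=45    # initial allowed spread, MB
if [ $(( max - min )) -gt "$gap" ]; then
    gap=$(( gap + 5 ))
    echo "SpeedGap triggered: retest spot with ${gap}MB allowance"
fi
```

A drive that trips this check on every spot is simply erratic, which is why the post suggests disabling the detection in that case.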
  20. 10 points
    I would love a progress bar/percentage thingy for mover
  21. 10 points
    I had the opportunity to test the "real world" bandwidth of some commonly used controllers in the community, so I'm posting my results in the hope that it may help some users choose a controller and others understand what may be limiting their parity check/sync speed. Note that these tests are only relevant for those operations; normal reads/writes to the array are usually limited by hard disk or network speed.

Next to each controller is its maximum theoretical throughput, and my results depending on the number of disks connected. The result is the observed parity check speed using a fast SSD-only array with Unraid V6.1.2 (SASLP and SAS2LP tested with V6.1.4 due to performance gains compared with earlier releases). The measured controller power consumption with all ports in use is noted next to each controller (e.g., 6w).

2 Port Controllers

SIL 3132 PCIe gen1 x1 (250MB/s)
1 x 125MB/s
2 x 80MB/s

Asmedia ASM1061 PCIe gen2 x1 (500MB/s) - e.g., SYBA SY-PEX40039 and other similar cards
1 x 375MB/s
2 x 206MB/s

4 Port Controllers

SIL 3114 PCI (133MB/s)
1 x 105MB/s
2 x 63.5MB/s
3 x 42.5MB/s
4 x 32MB/s

Adaptec AAR-1430SA PCIe gen1 x4 (1000MB/s)
4 x 210MB/s

Marvell 9215 PCIe gen2 x1 (500MB/s) - 2w - e.g., SYBA SI-PEX40064 and other similar cards (possible issues with virtualization)
2 x 200MB/s
3 x 140MB/s
4 x 100MB/s

Marvell 9230 PCIe gen2 x2 (1000MB/s) - 2w - e.g., SYBA SI-PEX40057 and other similar cards (possible issues with virtualization)
2 x 375MB/s
3 x 255MB/s
4 x 204MB/s

8 Port Controllers

Supermicro AOC-SAT2-MV8 PCI-X (1067MB/s)
4 x 220MB/s (167MB/s*)
5 x 177.5MB/s (135MB/s*)
6 x 147.5MB/s (115MB/s*)
7 x 127MB/s (97MB/s*)
8 x 112MB/s (84MB/s*)
*on PCI-X 100Mhz slot (800MB/s)

Supermicro AOC-SASLP-MV8 PCIe gen1 x4 (1000MB/s) - 6w
4 x 140MB/s
5 x 117MB/s
6 x 105MB/s
7 x 90MB/s
8 x 80MB/s

Supermicro AOC-SAS2LP-MV8 PCIe gen2 x8 (4000MB/s) - 6w
4 x 340MB/s
6 x 345MB/s
8 x 320MB/s (205MB/s*, 200MB/s**)
*on PCIe gen2 x4 (2000MB/s)
**on PCIe gen1 x8 (2000MB/s)

Dell H310 PCIe gen2 x8 (4000MB/s) - 6w - LSI 2008 chipset, results should be the same as IBM M1015 and other similar cards
4 x 455MB/s
6 x 377.5MB/s
8 x 320MB/s (190MB/s*, 185MB/s**)
*on PCIe gen2 x4 (2000MB/s)
**on PCIe gen1 x8 (2000MB/s)

LSI 9207-8i PCIe gen3 x8 (4800MB/s) - 9w - LSI 2308 chipset
8 x 525MB/s+ (*)

LSI 9300-8i PCIe gen3 x8 (4800MB/s with the SATA3 devices used for this test) - LSI 3008 chipset
8 x 525MB/s+ (*)
* used SSDs' maximum read speed

SAS Expanders

HP 6Gb (3Gb SATA) SAS Expander - 11w
Single Link on Dell H310 (1200MB/s*)
8 x 137.5MB/s
12 x 92.5MB/s
16 x 70MB/s
20 x 55MB/s
24 x 47.5MB/s
Dual Link on Dell H310 (2400MB/s*)
12 x 182.5MB/s
16 x 140MB/s
20 x 110MB/s
24 x 95MB/s
* Half 6Gb bandwidth because it only links @ 3Gb with SATA disks

Intel® RAID SAS2 Expander RES2SV240 - 10w
Single Link on Dell H310 (2400MB/s)
8 x 275MB/s
12 x 185MB/s
16 x 140MB/s (112MB/s*)
20 x 110MB/s (92MB/s*)
Dual Link on Dell H310 (4000MB/s)
12 x 205MB/s
16 x 155MB/s (185MB/s**)
Dual Link on LSI 9207-8i (4800MB/s)
16 x 275MB/s

LSI SAS3 expander (included on a Supermicro BPN-SAS3-826EL1 backplane)
Single Link on LSI 9300-8i (tested with SATA3 devices; max usable bandwidth would be 2200MB/s, but with LSI's Databolt technology we can get almost SAS3 speeds)
8 x 475MB/s
12 x 340MB/s
Dual Link on LSI 9300-8i (tested with SATA3 devices; max usable bandwidth would be 4400MB/s, but with LSI's Databolt technology we can get almost SAS3 speeds; the limit here is going to be the PCIe 3.0 slot, around 6000MB/s usable)
10 x 510MB/s
12 x 460MB/s

* Avoid using slower-linking disks with expanders, as this will bring the total speed down; in this example 4 of the SSDs were SATA2 instead of all SATA3.
** Two different boards have consistently different results; I will need to test a third one to see what's normal. 155MB/s is the max on a Supermicro X9SCM-F, 185MB/s on an Asrock B150M-Pro4S.
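A quick way to read the tables above: per-disk parity check speed is roughly capped by the controller's usable bandwidth divided by the number of attached disks, and real-world numbers come in somewhat lower because of protocol and controller overhead. A back-of-the-envelope sketch with the PCIe gen2 x1 figure as an example:

```shell
# Theoretical per-disk ceiling = controller bandwidth / disk count.
# Example: a PCIe gen2 x1 controller (~500MB/s usable) with 4 disks;
# compare with the measured 4 x 100MB/s for the Marvell 9215 above.
bandwidth=500   # MB/s, usable controller bandwidth
disks=4
ceiling=$(( bandwidth / disks ))
echo "theoretical ceiling: ${ceiling} MB/s per disk"
```

If your parity check speed sits right at this kind of ceiling for your controller and disk count, the bus is the bottleneck rather than the disks.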
    Sata 2 vs Sata 3

I see many times on the forum users asking if changing to Sata 3 controllers or disks would improve their speed. Sata 2 has enough bandwidth (between 265 and 275MB/s according to my tests) for the fastest disks currently on the market. If buying a new board or controller you should buy Sata 3 for the future, but except for SSD use there's no gain in changing your Sata 2 setup to Sata 3.

Single vs. Dual Channel RAM

In arrays with many disks, and especially with low "horsepower" CPUs, memory bandwidth can also have a big effect on parity check speed. Obviously this will only make a difference if you're not hitting a controller bottleneck. Two examples with 24-drive arrays:

Asus A88X-M PLUS with AMD A4-6300 dual core @ 3.7Ghz
Single Channel - 99.1MB/s
Dual Channel - 132.9MB/s

Supermicro X9SCL-F with Intel G1620 dual core @ 2.7Ghz
Single Channel - 131.8MB/s
Dual Channel - 184.0MB/s

DMI

There is another bus that can be a bottleneck for Intel based boards, much more so than Sata 2: the DMI that connects the south bridge or PCH to the CPU. Sockets 775, 1156 and 1366 use DMI 1.0; sockets 1155, 1150 and 2011 use DMI 2.0; socket 1151 uses DMI 3.0.

DMI 1.0 (1000MB/s)
4 x 180MB/s
5 x 140MB/s
6 x 120MB/s
8 x 100MB/s
10 x 85MB/s

DMI 2.0 (2000MB/s)
4 x 270MB/s (Sata2 limit)
6 x 240MB/s
8 x 195MB/s
9 x 170MB/s
10 x 145MB/s
12 x 115MB/s
14 x 110MB/s

DMI 3.0 (3940MB/s)
6 x 330MB/s (Onboard SATA only*)
10 x 297.5MB/s
12 x 250MB/s
16 x 185MB/s
*Despite being DMI 3.0, Skylake, Kaby Lake and Coffee Lake chipsets have a max combined bandwidth of approximately 2GB/s for the onboard SATA ports.

DMI 1.0 can be a bottleneck using only the onboard Sata ports; DMI 2.0 can limit users with all onboard ports used plus an additional controller, onboard or on a PCIe slot that shares the DMI bus. In most home market boards only the graphics slot connects directly to the CPU and all other slots go through the DMI (more top-of-the-line boards, usually with SLI support, have at least 2 such slots); server boards usually have 2 or 3 slots connected directly to the CPU, and you should always use these slots first. You can see below the diagram for my X9SCL-F test server board; for the DMI 2.0 tests I used the 6 onboard ports plus one Adaptec 1430SA on PCIe slot 4.

UMI (2000MB/s) - Used on most AMD APUs, equivalent to Intel DMI 2.0
6 x 203MB/s
7 x 173MB/s
8 x 152MB/s

Ryzen link - PCIe 3.0 x4 (3940MB/s)
6 x 467MB/s (Onboard SATA only)

I think there are no big surprises: most results make sense and are in line with what I expected, except maybe for the SASLP, which should have the same bandwidth as the Adaptec 1430SA but is clearly slower and can limit a parity check with only 4 disks. I expect some variation in the results from other users due to different hardware and/or tunable settings, but I would be surprised if there are big differences; reply here if you can get a significantly better speed with a specific controller.

How to check and improve your parity check speed

System Stats from Dynamix V6 Plugins is usually an easy way to find out if a parity check is bus limited. After the check finishes, look at the storage graph: on an unlimited system it should start at a higher speed and gradually slow down as it goes to the disks' slower inner tracks; on a limited system the graph will be flat at the beginning, or totally flat in a worst-case scenario. See screenshots below for examples (arrays with mixed disk sizes will have speed jumps at the end of each one, but the principle is the same).
    If you are not bus limited but still find your speed low, there are a couple of things worth trying:

Diskspeed - your parity check speed can't be faster than your slowest disk. A big advantage of Unraid is the possibility to mix different size disks, but this can lead to an assortment of disk models and sizes; use this tool to find your slowest disks, and when it's time to upgrade, replace these first.

Tunables Tester - on some systems it can increase the average speed by 10 to 20MB/s or more; on others it makes little or no difference.

That's all I can think of, all suggestions welcome.
  22. 10 points
    Read this first

1. Any posts discussing circumvention of any Nvidia restrictions we will be asking mods to remove. We have worked hard to bring this work to you, and we don't want to upset Nvidia. If they were to threaten us with any legal action, all our source code and this plugin will be removed. Remember we are all volunteers, with regular jobs and families to support. Please, if you see anyone else mentioning anything that contravenes this rule, flag it up to the mods. People that discuss this here could potentially ruin it for all of you!

2. If you attempt to start a VM passing through a GPU that is actively transcoding, you will hard lock your server and it will need an unclean shutdown.

3. If you use Unraid in GUI mode, have only a single GPU in your server, and use that GPU in a virtual machine, trying to start that VM will crash libvirt.

4. You can pass through a GPU that is also being passed through to a docker container, as long as there are no active transcode processes when you start the VM; the docker container will fall back to software transcoding. Check the webui of the docker container to see if transcoding is occurring before starting the VM, or, if your GPU supports it, you can use watch nvidia-smi to check for transcoding processes.

5. To be 100% safe, we recommend a dedicated GPU for transcoding that is not being used for any virtual machines. If you decide to ignore this, then you're on your own; we are not responsible for any problems that ensue.

6. We will produce one Nvidia build per Unraid release. We will not be updating the drivers multiple times for each Unraid version unless there is a critical bug that demands it. So please don't ask.

7. We are reliant on a lot of upstream projects here. We don't have any control over them, so we can't guarantee this will work for every release.

8. We will look at a DVB solution in the future, once we know this is stable and working.
Background

Unraid Nvidia is a way of leveraging an Nvidia graphics card in your Unraid server to offload transcoding from the CPU to your GPU. Although people have long been asking for Nvidia drivers on Unraid, there seems to have been a lack of understanding that drivers alone wouldn't solve a fundamental problem: only processes run on the host would be able to use them. They wouldn't be useful within docker containers unless the same drivers were then installed within each container, which is both inefficient and requires a special build of every container you wish to use Nvidia within. We began to look at a possible solution about 5 months ago, and it has required a number of different steps to get this working.

1. Installing Nvidia drivers and kernel modules into Unraid utilising upstream SlackBuild packages, namely nvidia-kernel, nvidia-driver, google-go-lang, libseccomp & runc, and modifying a part of the nvidia-driver due to the lack of a desktop environment on Unraid.

2. Modifying the docker runtime to a version that can use an Nvidia wrapper to enable any docker container on your system to use the Nvidia graphics card, using the Nvidia docker project.

3. Rolling all this together and completely repackaging Unraid to implement all these changes; only bzroot-gui is left unaltered.

4. Development of a plugin to enable easy downloading of the modified Nvidia release of Unraid.

All in all, not a simple task. Initial attempts by @CHBMB were hindered by segfaults, at which point his real-life commitments took over and the project hit a hiatus. @bass_rock then forked the work and hit the same issue. After reaching out to bass_rock, we invited him to join the LinuxServer.io team to collaborate and to try one final strategy we had in mind to get this working, but hadn't got around to.
This strategy was, instead of installing the drivers at boot time from a custom-built Slackware package, to unpackage bzroot at build time and install the drivers directly there, removing the need for an install at boot time and, to our delight, solving the segfault issue. bass_rock enthusiastically implemented this and also made the original scripts nothing short of a work of art, adding shine and polish.
  23. 10 points
    How to set up Dockers to have their own IP address without sharing the host IP address:

This is only valid in the unRAID 6.3 series going forward. 6.4.0 has this built into the GUI, but currently has a limitation of needing extra IP addresses for each of your interfaces, and it needs to delete all manually created docker networks. If you don't like that, you can opt to create the network manually like here and disable the docker network auto cleanup. 6.4.1 is now more intelligent about this and presents a relatively powerful UI that covers most simple cases; you don't need to assign unnecessary IP addresses to additional interfaces anymore. Refer to https://lime-technology.com/forums/topic/62107-network-isolation-in-unraid-64/ for more details.

Single NIC only:

Some assumptions:
We'll be using a shared interface br0 (this allows us to use the same NIC with virtual machines; otherwise it's alright to use eth0)
The IP address details are:
unRAID = 192.168.1.2
Gateway/router = 192.168.1.1
Subnet = 192.168.1.0/24
Docker IP pool = 192.168.1.128/25 (192.168.1.128-254)
A new docker network will be established called homenet

Login via SSH and execute this:

# docker network create \
    -o parent=br0 \
    --driver macvlan \
    --subnet 192.168.1.0/24 \
    --ip-range 192.168.1.128/25 \
    --gateway 192.168.1.1 \
    homenet

Modify any Docker via the WebUI in Advanced mode:
Set Network to None
Remove any port mappings
Fill in the Extra Parameters with: --network homenet
Apply and start the docker

The docker is assigned an IP from the pool 192.168.1.128 - 192.168.1.254; typically the first docker gets the first IP address.

# docker inspect container | grep IPAddress
    "SecondaryIPAddresses": null,
    "IPAddress": "",
            "IPAddress": "192.168.1.128",
# docker exec container ping www.google.com
PING www.google.com (122.2.129.167): 56 data bytes
64 bytes from 122.2.129.167: seq=0 ttl=57 time=36.842 ms
64 bytes from 122.2.129.167: seq=1 ttl=57 time=36.496 ms
^C
# docker exec container ping 192.168.1.2
PING 192.168.1.2 (192.168.1.2): 56 data bytes
^C
#

At this point, your gateway/router will have a first class network citizen with the specified IP address. An additional Extra Parameter can be specified to fix the IP address: --ip 192.168.1.128

The container will not be allowed to talk to the unRAID host due to the underlying security implementation of the macvlan driver used by Docker. This is by design. That's it.

Secondary NIC is available:

Some assumptions:
We'll be using a dedicated interface br1 (the native eth1 interface can be used here too)
There is no IP address assigned to the interface
The IP address details are:
Gateway/router = 10.0.3.1
Subnet = 10.0.3.0/24
Docker IP pool = 10.0.3.128/25 (10.0.3.128-254)
A new docker network will be established called docker1
unRAID has an IP of 10.0.3.2

Login via SSH and execute this:

# docker network create \
    -o parent=br1 \
    --driver macvlan \
    --subnet 10.0.3.0/24 \
    --ip-range 10.0.3.128/25 \
    --gateway 10.0.3.1 \
    docker1

Modify any Docker via the WebUI in Advanced mode:
Set Network to None
Remove any port mappings
Fill in the Extra Parameters with: --network docker1
Apply and start the docker

The docker is assigned an IP from the pool 10.0.3.128 - 10.0.3.254; typically the first docker gets the first IP address.

# docker inspect container | grep IPAddress
    "SecondaryIPAddresses": null,
    "IPAddress": "",
            "IPAddress": "10.0.3.128",
# docker exec container ping www.google.com
PING www.google.com (122.2.129.167): 56 data bytes
64 bytes from 122.2.129.167: seq=0 ttl=57 time=36.842 ms
64 bytes from 122.2.129.167: seq=1 ttl=57 time=36.496 ms
^C
# docker exec container ping 10.0.3.2
PING 10.0.3.2 (10.0.3.2): 56 data bytes
64 bytes from 10.0.3.2: seq=0 ttl=64 time=0.102 ms
64 bytes from 10.0.3.2: seq=1 ttl=64 time=0.075 ms
64 bytes from 10.0.3.2: seq=2 ttl=64 time=0.065 ms
64 bytes from 10.0.3.2: seq=3 ttl=64 time=0.069 ms
^C
--- 10.0.3.2 ping statistics ---
4 packets transmitted, 4 packets received, 0% packet loss
round-trip min/avg/max = 0.065/0.077/0.102 ms

At this point, your gateway/router will have a first class network citizen with the specified IP address. An additional Extra Parameter can be specified to fix the IP address: --ip 10.0.3.128

The container can happily talk to unRAID, as the packets go out via br1 and talk to the host on br0. That's it.

Some caveats:

With only a single NIC, and no VLAN support on your network, it is impossible for the host unRAID to talk to the containers and vice versa; the macvlan driver specifically prohibits this. This situation prevents a reverse proxy docker from proxying unRAID, but it will work with all other containers on the new docker network.

We can only have one network defined per gateway. So if you already have docker network br0 on your main LAN (gateway 192.168.1.1), docker will not allow you to create another network referencing the same gateway.

I cannot confirm yet what happens in the case of two or more NICs bridged/bonded together (but it should be the same as a single NIC).

unRAID 6.4.0:

We need to disable the docker network auto generation and cleanup if we want these settings to remain (until the auto cleanup is made more intelligent). The code below should be inserted into the go file before /usr/local/sbin/emhttp is started. It will disable docker network auto creation too, so it will behave like 6.3.5. It seems that the docker page will happily show any docker network you have defined, so you can just use them as normal and no longer need to set Extra Parameters. Again, this is not needed with 6.4.1 going forward.

# stop docker network creation and pruning
sed -i -e '/# cleanup/,+3s/^/## /;s/^ custom_networks/## custom_networks/' /etc/rc.d/rc.docker
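A side note on the pool sizing used above: the --ip-range 192.168.1.128/25 hands Docker the upper half of the /24, leaving .1-.127 free for statically addressed hosts. A quick sketch of the arithmetic:

```shell
# A /25 prefix leaves 32-25 = 7 host bits, i.e. 2^7 = 128 addresses
# (192.168.1.128-255, with .255 being the broadcast address, hence
# the usable container range .128-.254 quoted above).
prefix=25
pool=$(( 1 << (32 - prefix) ))
echo "a /${prefix} pool contains ${pool} addresses"
```

Shrink or grow the prefix to match how many containers you expect; just keep the range clear of your router's DHCP scope to avoid address conflicts.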
  24. 10 points
    Too bad we couldn't time linux 4.20 kernel with Unraid 6.6.6.
  25. 10 points
    Hi Everyone! In an effort to combat spammers that are flooding the forum with spam posts, we are putting a new requirement in place: the first post of any new member MUST be approved by the moderators before normal members can see it and before additional posts will be allowed. We apologize in advance for any inconvenience that this may cause folks wishing to post, but due to the sheer nature of the spammers hitting us, this is one of the better ways for us to combat the issue for the time being. Thank you in advance for your understanding. All the best, The Lime Technology Team
  26. 10 points
    @realies @pwm First let me start by saying that the implications that we are "just trying to look busy" and that we deliberately are trying to "fool" people by using versioning "trickery" are pure rubbish, and I would appreciate, and in fact insist, that you go back and edit your post to strike out those statements. The only thing you got right was indeed we are a small company without the resources to test a wide range of hardware and configurations for an extremely complex set of subsystems. However we never publicly release any software where we know we have introduced risk of data loss, except in rare cases where we clearly state the risk in release notes.

As for use of "RC" designation, have you ever studied the linux kernel release methodology? I know your answer is No or else you would know that the "RC" series of a linux kernel is precisely for shaking out bugs and those releases are definitely not ready for any kind of production use. The choice made to promote a kernel from "RC" to "mainline" is also fairly arbitrary. Go take a look at any kernel change log and you will see all kinds of crazy bugs fixed by later patch releases. They typically don't even mark security-related patches as being security-related (because they don't want to give hints to would-be hackers, placing unpatched users at risk). Unless we are forced to, we also don't use the ".0" release of a new kernel either because we know there will be wider usage with inevitable patch releases (up to 26 now for 4.14). We know from monitoring downloads that most of our user community doesn't update to a .0 unRaid release immediately either. This is the nature of software development and you guys should know better.

As for introducing "new features" beyond -rc1 - ok you got me there! Usually we have one or two truly "new" features which require significant coding in each minor release, and usually most of that coding is in place by -rc1.
Sometimes big chunks might be staged over several RC's. Sometimes it might take a few RC's to get the UI completely right or handle unexpected cases. Meanwhile, other developers are contributing code. I'm not going to tell those people, "Sorry don't give us those changes because we are still testing this other unrelated feature." Sometimes those features themselves might extend the RC series beyond what we wanted, but that's the price we are willing to pay for now. In the case of the linux kernel, there are thousands of developers which requires far more discipline and rigor to maintain sanity (hence the "merge window" concept). In contrast, we are a small team with good communication (usually) and don't need to be so pedantic.

Finally, this stuff is hard. Every user thinks their feature request or bug fix should be the highest priority. Every user wants to see fast progress. Every user can find faults and bugs in any software release. And we have to be as responsive as possible to survive as a company. So how about you cut us a break?
  27. 9 points
    This plugin is designed to find, and offer suggestions for, a multitude of configuration errors, outright problems, etc. across numerous aspects of your unRaid server. To install this plugin, just head over to the Apps tab and search for Fix Common Problems. After installation, you will see a lifesaver within Settings / User Utilities which will launch a manual scan (and give you the option to set the background scan settings).

For every error or warning that this plugin finds, a suggested course of action will also be displayed. Additionally, should you seek additional help for any error / warning, the errors are all logged into your syslog so that persons helping can easily find the issue when you post your diagnostics. Scans can be scheduled to run automatically in the background (you have the option of hourly, daily, weekly, and monthly). Additionally, if the background scans find an issue they will send out a notification (depending upon your notification settings in this plugin).

The current list of tested items will be maintained in the second post of this thread. Any support for problems this plugin finds should be posted in the General v6 section of these forums. Problems relating to false positives, suggestions for more checks, why I made the decisions I did, wording mistakes in suggestions, etc. should be posted here. As usual for anything written by me, updates are frequent as new ideas pop into my head. Highly recommended to turn on auto-updates for this plugin. Additionally, a special "Troubleshooting Mode" is available to assist with problems involving random crashes / shutdowns / lockups / etc. A video with a basic run through of FCP can be found here: (at about 18:25)
  28. 9 points
    CA Appdata Backup / Restore (v2)

Due to some fundamental problems with XFS / BTRFS, the original version of Appdata Backup / Restore became unworkable and caused lockups for many users. Development has ceased on the original version, and has now switched over to this replacement. Fundamentally they are both the same and accomplish the same goals (namely, backing up your Appdata share and USB / libvirt), but this version is significantly faster at the job.

This version uses tar instead of rsync (and offers optional compression of the archive - roughly 50% if not including any downloads in the archive - which you really shouldn't be anyways). Because of using tar, there are no longer any incremental backups. Rather, every backup goes into its own separate dated subfolder. Old backups can optionally be deleted after a successful backup. Even without incremental backups, the speed increase afforded by tar means that there should be no real difference in the end. (ie: A full backup using my appdata on the old plugin takes ~1.5 hours. This plugin can do the same thing uncompressed in about 10 minutes, and compressed in 20 minutes. The optional verification of the archive takes a similar amount of time. An incremental backup on the old plugin using my appdata averaged around 35 minutes).

The option for a separate destination for USB / VM libvirt backups has changed so that if there is no destination set for those backups, they will not be backed up. Additionally, unlike the original plugin, no cache drive is necessary, and the appdata source can be stored on any device in your system (ie: unassigned devices). The destination as usual can go to any mount point within your system. Unfortunately, because of no more incremental backups, this version may no longer be suitable for ultimately backing up offsite to a cloud service (ie: via rclone). You can find this within the Apps tab (Search for Appdata Backup).
The original v1 plugin should be uninstalled if migrating to this version.
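The dated-subfolder tar pattern described above can be sketched in a few lines of shell. The paths here are temporary stand-ins purely for illustration; the plugin itself targets your configured appdata share and backup destination:

```shell
# Minimal sketch of the dated-subfolder tar backup pattern (illustrative only).
SRC=$(mktemp -d)                 # stands in for /mnt/user/appdata
DEST_ROOT=$(mktemp -d)           # stands in for the backup destination
echo "demo" > "$SRC/app.conf"

STAMP=$(date +%Y-%m-%d-%H%M)     # each run gets its own dated subfolder
mkdir -p "$DEST_ROOT/$STAMP"
# -z gives the optional gzip compression; -C keeps archive paths relative
tar -czf "$DEST_ROOT/$STAMP/backup.tar.gz" -C "$SRC" .
# optional verification pass: list the archive and check tar's exit status
tar -tzf "$DEST_ROOT/$STAMP/backup.tar.gz" >/dev/null && echo "backup verified"
```

Because each run lands in its own $STAMP folder, "deleting old backups" is just removing old subfolders of the destination.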
  29. 9 points
    By default, unRAID, the VMs and Docker containers all run within the same network. This is a straightforward solution: it does not require any special network setup, and for most users it is suitable. Sometimes more isolation is required, for example to let VMs and Docker containers run in their own network environment, completely separated from the unRAID server. Setting up such an environment needs changes in the unRAID network settings, but also requires your switch and router to have additional network capabilities to support this environment.

The example here makes use of VLANs. This is an approach which allows you to split your physical cable into two or more logical connections, which can run fully isolated from each other. If your switch does not support VLANs, then the same can be achieved by connecting multiple physical ports (this however requires more ports on the unRAID server). The following assignments are done:

network 10.0.101.0/24 = unRAID management connection. It runs on the default link (untagged)
network 10.0.104.0/24 = isolated network for VMs. It runs on VLAN 4 (tagged)
network 10.0.105.0/24 = isolated network for docker containers. It runs on VLAN 5 (tagged)

UNRAID NETWORK SETTINGS
We start with the main interface. Make sure the bridge function is enabled (this is required for VMs and docker). In this example both IPv4 and IPv6 are used, but this is not mandatory, e.g. IPv4 only is a good starting choice. Here a static IPv4 address is used, but automatic assignment can be used too. In this case it is recommended that your router (DHCP server) always hands out the same IP address to the unRAID server. Lastly, enable VLANs for this interface.

VM NETWORK SETTINGS
VMs will operate on VLAN 4, which corresponds to interface br0.4. Here again IPv4 and IPv6 are enabled, but it may be limited to IPv4 only, without any IP assignment for unRAID itself. On the router, DHCP can be configured, which allows VMs to obtain an IP address automatically.
DOCKER NETWORK SETTINGS
Docker containers operate on VLAN 5, which corresponds to interface br0.5. We need to assign IP addresses on this interface to ensure that Docker "sees" this interface and makes it a choice in the network selection of a container. Assignment can be automatic if you have a DHCP server running on this interface, or static otherwise.

VM CONFIGURATION
We can set interface br0.4 as the default interface for the VMs which we are going to create (existing VMs you'll need to change individually). Here a new VM gets interface br0.4 assigned.

DOCKER CONFIGURATION
Docker will use its own built-in DHCP server to assign addresses to containers operating on interface br0.5. This DHCP server however isn't aware of any other DHCP servers (your router). Therefore it is recommended to set an IP range for the Docker DHCP server which is outside the range used by your router (if any) to avoid conflicts. This is done in the Docker settings while the service is stopped.

When a docker container is created, the network type br0.5 is selected. This lets the container run on the isolated network. IP addresses can be assigned automatically out of the DHCP pool defined earlier; leave the field "Fixed IP address" empty in this case. Or containers can use a static address; fill in the field "Fixed IP address" in this case. This completes the configuration on the unRAID server. Next we have to set up the switch and router to support the new networks we just created on the server.

SWITCH CONFIGURATION
The switch must be able to assign VLANs to its different ports. Below is a picture of a TP-LINK switch; other brands should have something similar.

ROUTER CONFIGURATION
The final piece is the router. Remember all connections eventually terminate on the router, and this device makes communication between the different networks possible. If you want to allow or deny certain traffic between the networks, firewall rules on the router need to be created.
This is however out of scope for this tutorial. Below is an example of a Ubiquiti USG router, again other brands should offer something similar. That's it. All components are configured and able to handle the different communications. Now you need to create VMs and containers which make use of them. Good luck.
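For reference, the tagged sub-interfaces this tutorial relies on (br0.4 and br0.5) can be pictured in plain iproute2 terms. This is only an illustrative sketch of what enabling VLANs 4 and 5 amounts to; Unraid's own network scripts create these interfaces for you when VLANs are enabled in the GUI, so you do not run this yourself:

```shell
# Illustrative only: tagged VLAN sub-interfaces on top of bridge br0
# (requires root and an existing br0; Unraid does this automatically)
ip link add link br0 name br0.4 type vlan id 4   # isolated VM network
ip link add link br0 name br0.5 type vlan id 5   # isolated Docker network
ip link set br0.4 up
ip link set br0.5 up
```

Traffic sent via br0.4 leaves the NIC tagged with VLAN ID 4, which is why the switch port must be configured to accept that tag.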
  30. 9 points
    Mainly a bug fix release to address the "slow transfer to xfs-encrypted array disks" issue.

Version 6.7.0-rc4 2019-02-15

Base distro:
- docker: version 18.09.2 (CVE-2019-5736)
- qemu: version 3.1.0 (rev 2) patched pcie link speed and width support

Linux kernel:
- version: 4.19.23
- md/unraid: version 2.9.7 (setup queue properties correctly)

Management:
- webgui: Syslog: added viewer
- webgui: Dashboard: table right adjustment in two columns view
- webgui: Dashboard: table adjustment in three columns view
- webgui: Diagnostics: dynamic file name creation
- webgui: Move "Management Access" directly under Settings
- webgui: OS update: style correction
  31. 9 points
    Now that's said and done, let's move on to the bits you're all interested in. Please note that hardware (GPU) enabled transcoding requires either a Plex Pass or Emby Premium subscription. Note: Emby will both decode and encode using the Nvidia GPU; Plex currently only encodes. nvidia-smi requires a compatible video card - some older Nvidia cards are incompatible, sorry, nothing we can do about that; everything will work apart from using watch nvidia-smi to view transcoding processes.

Step 1
Install the Unraid-Nvidia plugin from Community Applications, or alternatively manually by copying this into the plugin page: https://raw.githubusercontent.com/linuxserver/Unraid-Nvidia-Plugin/master/plugins/Unraid-Nvidia.plg

Step 2
Select the version of Unraid you wish to download, click the download and install button, then reboot your server.

Step 3
Go back to the Unraid-Nvidia page and you should find the information on Nvidia graphics cards is populated like below. There are three pieces of information for each card; looking at GPU 0, you can see the GPU name, the bus location and the GPU UUID. Before you start, copy and paste the GPU UUID somewhere handy, or keep the Unraid-Nvidia screen open in a separate browser tab, as you'll need it shortly.

How to Utilise the GPU in a Docker Container
Now to utilise GPU 0 in a docker container, let's look at how to do so using the LinuxServer.io version of Plex (although this can be applied to any container). Add or edit Plex as per normal and switch the template to advanced mode. Now add --runtime=nvidia to Extra Parameters: and then switch advanced mode off again. Now we need to add two extra parameters; the first is the easiest one, as it will be identical for all containers. Click on Add another Path, Port, Variable, Label or Device and change config type to variable.
Then add NVIDIA_DRIVER_CAPABILITIES in the KEY field and all in the VALUE field. For the 2nd parameter, do exactly the same process as before, but add NVIDIA_VISIBLE_DEVICES in the KEY field and the GPU UUID of the card you want to use in the VALUE field.

Configuring Plex
Go to your server settings and configure like below. You should find when playing content that is being transcoded you get a hw indication in the webui like below, and running watch nvidia-smi from the terminal shows the transcoding process.

Configuring Emby
Go to your server settings and configure like below. Now you should find when playing content that is being transcoded it shows in the webui like so, and running watch nvidia-smi from the terminal shows the transcoding process.

That's it. Have fun.
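For anyone who prefers the command line, the template settings above map onto roughly this docker run invocation (the container name, image, and UUID are placeholders; substitute the UUID shown on the Unraid-Nvidia page, and add your usual port/volume mappings):

```shell
# Rough docker-run equivalent of the template settings described above.
# GPU-xxxx... is a placeholder for the real UUID from the Unraid-Nvidia page.
docker run -d --name=plex \
  --runtime=nvidia \
  -e NVIDIA_DRIVER_CAPABILITIES=all \
  -e NVIDIA_VISIBLE_DEVICES=GPU-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx \
  linuxserver/plex
```

Setting NVIDIA_VISIBLE_DEVICES to one specific UUID is what pins the container to that single card when you have more than one GPU installed.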
  32. 9 points
    Yeah, I've been busy, quite ironic though that the day before you came along I started working on it again..... I uploaded v6.6.6 last night. There are some caveats: can't get the OOT 10Gbit Intel drivers to compile. Need to talk to LT about that one.
  33. 9 points
    Once in a while it is good to hear those encouraging words. Thanks 😄
  34. 9 points
    Please report only issues/bugs which are new in the prerelease. We've been holding back on this release a few days waiting for updated Intel ixgbe (10Gbit) OOT driver which builds correctly with 4.18 kernel. But no update yet so this release uses 4.17.19 kernel. Every major subsystem has been updated. In addition @bonienl has added some really cool features and update the default themes to match our new branding. Release notes: Version 6.6.0-rc1 2018-08-31 Base distro: aaa_base: version 14.2 (rev 5) aaa_elflibs: version 15.0 (rev 2) acl: version 2.2.53 acpid: version 2.0.29 adwaita-icon-theme: version 3.28.0 appres: version 1.0.5 (rev 2) at-spi2-atk: version 2.26.2 at-spi2-core: version 2.28.0 at: version 3.1.20 (rev 5) atk: version 2.28.1 attr: version 2.4.48 avahi: version 0.7 (rev2) bash: version 4.4.023 bin: version 11.1 (rev 2) bridge-utils: version 1.5 (rev 2) btrfs-progs: version v4.17 bzip2: version 1.0.6 (rev 3) ca-certificates: version 20180409 cairo: version 1.15.12 celt051: version 0.5.1.3 (rev 2) cifs-utils: version 6.7 (rev 2) coreutils: version 8.29 (rev 2) cpio: version 2.12 (rev 2) cpufrequtils: version 08 (rev 2) cryptsetup: version 1.7.5 (rev 2) curl: version 7.61.0 (CVE-2018-1000300, CVE-2018-1000301, CVE-2018-0500) cyrus-sasl: version 2.1.27_rc8 db48: version 4.8.30 (rev 4) dbus: version 1.12.8 dcron: version 4.5 dejavu-fonts-ttf: version 2.37 (rev 4) dhcpcd: version 7.0.6 diffutils: version 3.6 (rev 2) dmidecode: version 3.1 (rev 2) dnsmasq: version 2.79 (rev 2) docker: version 18.06.1-ce e2fsprogs: version 1.44.2 ebtables: version 2.0.10 (rev 2) editres: version 1.0.7 (rev 2) elvis: version 2.2_0 (rev 4) encodings: version 1.0.4 (rev 2) etc: version 15.0 (rev 6) ethtool: version 4.17 eudev: version 3.2.5 (rev2) file: version 5.33 findutils: version 4.4.2 (rev 2) flex: version 2.6.4 (rev 3) floppy: version 5.5 (rev 2) fontconfig: version 2.12.6 (rev 2) freetype: version 2.9 (rev 2) fribidi: version 1.0.4 fuse: version 2.9.7 (rev3) gawk: 
version 4.2.1 (rev 2) gd: version 2.2.5 (rev2) gdbm: version 1.15 gdk-pixbuf2: version 2.36.12 genpower: version 1.0.5 (rev 3) getty-ps: version 2.1.0b (rev 4) glew: version 2.1.0 (rev 2) glib2: version 2.56.1 glibc-solibs: version 2.27 (rev 4) glibc-zoneinfo: version 2018e (rev 3) glibc: version 2.27 (rev 4) glu: version 9.0.0 (rev 2) gmp: version 6.1.2 (rev 2) gnome-themes-standard: version 3.22.3 (rev 2) gnupg: version 1.4.23 (CVE-2018-12020) gnutls: version 3.6.2 (rev 2) gptfdisk: version 1.0.3 (rev 2) grep: version 3.1 (rev 2) gtk+3: version 3.22.30 gzip: version 1.9 (rev 2) harfbuzz: version 1.8.1 haveged: version 1.9.2 hdparm: version 9.56 hicolor-icon-theme: version 0.17 (rev 2) hostname: version 3.18 (rev 2) htop: version 2.2.0 icu4c: version 61.1 imlib2: version 1.5.1 inetd: version 1.79s (rev 11) infozip: version 6.0 (rev 4) inotify-tools: version 3.14 (rev 2) intel-microcode: version 20180807 iproute2: version 4.17.0 iptables: version 1.6.2 (rev 2) iputils: version s20140519 (rev 2) kernel-firmware: version 20180814_f1b95fe keyutils: version 1.5.10 (rev 2) kmod: version 25 (rev 2) lbzip2: version 2.5 less: version 530 (rev 3) libaio: version 0.3.111 libarchive: version 3.3.2 (rev 3) libcap-ng: version 0.7.8 (rev 3) libcgroup: version 0.41 (rev 5) libcroco: version 0.6.12 (rev 2) libdaemon: version 0.14 (rev2) libdmx: version 1.1.4 libdrm: version 2.4.92 libedit: version 20180525_3.1 libee: version 0.4.1 (rev 2) libepoxy: version 1.4.3 (rev 2) libestr: version 0.1.10 (rev 2) libevdev: version 1.5.9 (rev 2) libevent: version 2.1.8 (rev 3) libfastjson: version 0.99.8 (rev2) libffi: version 3.2.1 (rev 2) libfontenc: version 1.1.3 (rev 2) libgcrypt: version 1.8.3 (CVE-2018-0495) libgpg-error: version 1.31 libgudev: version 232 (rev 2) libICE: version 1.0.9 (rev 3) libidn: version 1.35 libjpeg-turbo: version 1.5.3 (rev 2) liblogging: version 1.0.6 (rev2) libmnl: version 1.0.4 (rev 3) libnetfilter_conntrack: version 1.0.7 libnfnetlink: version 1.0.1 (rev 2) 
libnftnl: version 1.1.0 libnl3: version 3.4.0 (rev 2) libpcap: version 1.8.1 (rev 2) libpciaccess: version 0.14 (rev 2) libpng: version 1.6.34 (rev 2) libpthread-stubs: version 0.4 (rev 2) librsvg: version 2.42.5 libseccomp: version 2.3.3 (rev2) libSM: version 1.2.2 (rev 3) libssh2: version 1.8.0 (rev 3) libtasn1: version 4.13 (rev 2) libtirpc: version 1.0.3 (rev 2) libunistring: version 0.9.10 libunwind: version 1.2.1 libusb-compat: version 0.1.5 (rev2) libusb: version 1.0.22 libvirt-php: version 0.5.4 (rev3) libvirt: version 4.6.0 libwebsockets: version 2.4.2 libX11: version 1.6.5 (rev 2) libx86: version 1.1 (rev 3) libXau: version 1.0.8 (rev 3) libXaw: version 1.0.13 (rev 2) libxcb: version 1.13 (rev 2) libXcomposite: version 0.4.4 (rev 3) libXcursor: version 1.1.15 (rev 2) libXdamage: version 1.1.4 (rev 3) libXdmcp: version 1.1.2 (rev 3) libXevie: version 1.0.3 (rev 3) libXext: version 1.3.3 (rev 3) libXfixes: version 5.0.3 (rev 2) libXfont2: version 2.0.3 (rev 2) libXfontcache: version 1.0.5 (rev 3) libXft: version 2.3.2 (rev 4) libXi: version 1.7.9 (rev 2) libXinerama: version 1.1.3 (rev 3) libxkbfile: version 1.0.9 (rev 2) libxml2: version 2.9.8 (rev 2) libXmu: version 1.1.2 (rev 3) libXpm: version 3.5.12 (rev 2) libXrandr: version 1.5.1 (rev 2) libXrender: version 0.9.10 (rev 2) libXres: version 1.2.0 (rev 2) libxshmfence: version 1.3 (rev 2) libxslt: version 1.1.32 (rev 2) libXt: version 1.1.5 (rev 2) libXtst: version 1.2.3 (rev 2) libXxf86dga: version 1.1.4 (rev 3) libXxf86misc: version 1.0.3 (rev 3) libXxf86vm: version 1.1.4 (rev 3) listres: version 1.0.4 (rev 2) lm_sensors: version 3.4.0 (rev 2) logrotate: version 3.14.0 (rev 2) lsof: version 4.91 lsscsi: version 0.29 lvm2: version 2.02.177 lz4: version 1.8.2 mc: version 4.8.21 mesa: version 18.1.2 mkfontdir: version 1.0.7 (rev 2) mkfontscale: version 1.1.3 (rev 2) mozilla-firefox: version 61.0.2 (CVE-2018-12359, CVE-2018-12360, CVE-2018-12361, CVE-2018-12358, CVE-2018-12362, CVE-2018-5156, 
CVE-2018-12363, CVE-2018-12364, CVE-2018-12365, CVE-2018-12371, CVE-2018-12366, CVE-2018-12367, CVE-2018-12368, CVE-2018-12369, CVE-2018-12370, CVE-2018-5186, CVE-2018-5187, CVE-2018-5188) mpfr: version 4.0.1 (rev 2) mtdev: version 1.1.5 (rev 2) nano: version 2.9.8 ncompress: version 4.2.4.4 (rev 2) ncurses: version 6.1_20180616 net-tools: version 20170208_479bb4a (rev 2) netatalk: version 3.1.11 (rev2) nettle: version 3.4 (rev 2) network-scripts: version 15.0 nghttp2: version 1.32.0 (CVE-2018-1000168) nginx: version 1.14.0 ntfs-3g: version 2017.3.23 (rev 2) ntp: version 4.2.8p12 (CVE-2016-1549, CVE-2018-12327) numactl: version 2.0.11 (rev2) openldap-client: version 2.4.46 openssh: version 7.7p1 openssl-solibs: version 1.1.0i openssl10-solibs: version 1.0.2o openssl: version 1.1.0i (CVE-2018-0732, CVE-2018-0737) p11-kit: version 0.23.12 pango: version 1.42.1 patch: version 2.7.6 (rev 3) (CVE-2018-1000156) pciutils: version 3.5.6 (rev 2) pcre: version 8.42 php: version 7.2.8 pixman: version 0.34.0 (rev 2) pkgtools: version 15.0 (rev 20) pm-utils: version 1.4.1 (rev 6) procps-ng: version 3.3.15 (CVE-2018-1124, CVE-2018-1126, CVE-2018-1125, CVE-2018-1123, CVE-2018-1122) qemu: version 3.0.0 reiserfsprogs: version 3.6.27 (rev2) rpcbind: version 0.2.4 (rev 4) rsync: version 3.1.3 (rev 2) rsyslog: version 8.36.0 samba: version 4.8.4 (CVE-2018-1139, CVE-2018-1140, CVE-2018-10858, CVE-2018-10918, CVE-2018-10919) sed: version 4.5 sessreg: version 1.1.1 (rev 2) setxkbmap: version 1.3.1 (rev 2) shadow: version 4.2.1 (rev 4) shared-mime-info: version 1.9 (rev 2) spice: version 0.14.0 (rev2) sqlite: version 3.24.0 ssmtp: version 2.64 (rev5) startup-notification: version 0.12 (rev 3) sudo: version 1.8.23 sysfsutils: version 2.1.0 (rev 2) sysvinit-scripts: version 2.1 (rev 12) sysvinit: version 2.90 talloc: version 2.1.13 tar: version 1.30 (rev 2) tcp_wrappers: version 7.6 (rev 2) tdb: version 1.3.15 (rev 2) telnet: version 0.17 (rev 4) tevent: version 0.9.36 (rev 2) traceroute: 
version 2.1.0 (rev 2) transset: version 1.0.2 (rev 2) tree: version 1.7.0 (rev 2) ttyd: version 1.4.0 (rev2) usbredir: version 0.7.1 (rev2) usbutils: version 010 utempter: version 1.1.6 (rev 3) util-linux: version 2.32 vala: version 0.28.1 (rev2) vbetool: version 1.2.2 (rev 2) vsftpd: version 3.0.3 (rev 5) vte3: version 0.44.3 (rev2) wget: version 1.19.5 (CVE-2018-0494) which: version 2.21 (rev 2) xauth: version 1.0.10 (rev 2) xcb-util: version 0.4.0 (rev 3) xclock: version 1.0.7 (rev 3) xdpyinfo: version 1.3.2 (rev 2) xdriinfo: version 1.0.6 (rev 2) xev: version 1.2.2 (rev 2) xf86-input-evdev: version 2.10.6 xf86-input-keyboard: version 1.9.0 (rev 3) xf86-input-mouse: version 1.9.3 xf86-input-synaptics: version 1.9.1 xf86-video-ast: version 1.1.5 (rev 5) xf86-video-mga: version 1.6.5 (rev 3) xf86-video-vesa: version 2.4.0 (rev 3) xfsprogs: version 4.16.1 xhost: version 1.0.7 (rev 2) xinit: version 1.4.0 (rev 2) xkbcomp: version 1.4.2 xkbevd: version 1.1.4 (rev 2) xkbutils: version 1.0.4 (rev 3) xkeyboard-config: version 2.22 (rev 2) xkill: version 1.0.5 (rev 2) xload: version 1.1.3 (rev 2) xlsatoms: version 1.1.2 (rev 2) xlsclients: version 1.1.4 (rev 2) xmessage: version 1.0.5 (rev 2) xmodmap: version 1.0.9 (rev 2) xorg-server: version 1.20.0 (rev 2) xprop: version 1.2.3 (rev 2) xrandr: version 1.5.0 (rev 2) xrdb: version 1.1.1 (rev 2) xrefresh: version 1.0.6 (rev 2) xset: version 1.2.4 (rev 2) xsetroot: version 1.1.2 (rev 2) xsm: version 1.0.4 (rev 2) xterm: version 333 xtrans: version 1.3.5 (rev 2) xwd: version 1.0.7 (rev 2) xwininfo: version 1.1.4 (rev 2) xwud: version 1.0.5 (rev 2) xz: version 5.2.4 zlib: version 1.2.11 (rev 2) Linux kernel: version 4.17.19 intel ixgbe: version 5.3.7 intel ixgbevf: version 4.3.5 highpoint r750: version 1.2.11 highpoint rr3740a: version 1.17.0 added per customer request: CONFIG_BLK_DEV_SKD: STEC S1120 Block Driver CONFIG_BNXT: Broadcom NetXtreme-C/E support CONFIG_CHR_DEV_ST: SCSI tape support changed from built-in to module: 
CONFIG_BLK_DEV_SR: SCSI CDROM support removed CONFIG_SENSORS_IT87: ITE IT87xx and compatibles it87: version 20180709 groeck Linux Driver for ITE LPC chips (https://github.com/groeck/it87) set CRYPTO_DEV_SP_PSP=N otherwise udev deadlocks on threadripper Management: Docker: add optional per-container wait before starting Enable IPv6 for NTP rc.cpufreq: For CPUs using intel_pstate, always use the performance governor. This also provides power savings on Intel processors while avoiding the ramp-up lag present when using the powersave governor (which is the default if ondemand is requested on these machines). restore docker custom networks upon docker start update smartmontools drivedb and hwdata/{pci.ids,usb.ids,oui.txt,manuf.txt} webgui: docker: Correct docker container icon name generation webgui: docker: added cpu load and memory load display webgui: docker: added shell/bash selection for console webgui: docker: Make cpu-load and mem-load only visible in advanced mode webgui: Show accumulated encryption status webgui: Prevent openbox from clipping webgui: Plugins: Show support thread if present webgui: Remove control codes, and extended ascii characters from plugin urls on install webgui: Update link for privileged help text webgui: fix regex matching for unraid.net domain name validation when using new openssl1.1 webgui: docker: Prevent arbitrary bash execution or redirection on docker create/run commands webgui: Plugins: Preserve support link when updating plugin webgui: diagnostics: Replace consecutive repeated lines in syslog webgui: Remove empty entries when storing individual disk settings webgui: docker: fix connect container console to new window webgui: Use timeout in command execution of diagnostics webgui: Fixed 'Update All Containers' button sometimes hidden in docker list webgui: Added version control in docker API calls webgui: Show docker allocations in column format webgui: fix css help button in themes azure and gray webgui: Added docker autostart 
wait period webgui: Added CPU selection to docker container edit section webgui: Verify internet access using NCSI method NCSI = network connection status indicator. This method tries to access a specific Microsoft site to test internet access. The same method is used in Windows. webgui: Added docker autostart wait period webgui: New CPU pinning functionality webgui: Change wording on dashboard page "memory total usable" webgui: Include meminfo in diagnostics webgui: Docker containers page: show container starting up message Starting docker service and auto-starting containers has been decoupled. This lets the docker page return as soon as the service is ready. The container overview page shows a new message, and will automatically reload to update the status until all containers are auto-started webgui: VMs: preserve XML custom settings in VM config updates webgui: dockerMan: Avoid filename collisions on FAT32 webgui: Extract disk log from multiple rotated logs webgui: theme-match marketing site
  35. 9 points
    I will only post this once. Feel free to refer folks to this post. A few points of clarification:

- The last update of this image didn't break things. Letsencrypt abruptly disabled the authentication method previously used by this image (tls over port 443) due to a security vulnerability. It is unclear whether they will ever re-enable it again.
- So we added the option of validating over port 80, via setting the HTTPVAL variable to true (similar to how PUID is set to 99). But you have to make sure port 80 is forwarded to the container from your router. Keep in mind that the unraid gui runs on port 80, so you should map port 80 on your router to any other port, ie. 85. Then in the container settings, map port 85 to port 80.
- The Unraid template has been updated to include this new variable setting, and I think the brand new unraid stable as well as the previous betas will automatically add that variable to your settings (not 100% sure because I'm still on 6.3.5). Either way, check your settings.
- If your isp blocks port 80, there's nothing we can do, as it is the only port letsencrypt will validate through at this point.
- Someone mentioned dns validation. It's not gonna happen as it is. It requires a script to change dns settings on your dns provider. Since all the dns providers have different api's for this process, we cannot automate it for you, therefore we will not add dns validation (unless there is a standardized way to update dns entries in the future, but I wouldn't hold my breath).
- You do not need to make changes to your nginx site config and you do not need to enable listening on port 80. Validation is done through a separate web server temporarily put up during validation and is not affected by your nginx config.
- And one last thing: the error message about the directory not existing is harmless. It just means that you didn't have a letsencrypt cert the last time the container was started, probably because the validation had failed.
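The port chain described above can be summed up in one sketch. Host port 85 is just the example from the post; the image name is the linuxserver one this thread covers, and the other required variables (domain, email, etc.) are omitted here for brevity:

```shell
# Sketch of the http validation port chain (illustrative, not a full template):
# router: forward WAN port 80 -> unraid host port 85 (unraid's own GUI uses 80)
docker run -d --name=letsencrypt \
  -e HTTPVAL=true \
  -p 85:80 \
  linuxserver/letsencrypt
```

So the request path during validation is: letsencrypt server -> WAN:80 -> host:85 -> container:80, which is why the unraid GUI on host port 80 never conflicts.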
  36. 9 points
    I just wanted to say a big thanks for all the hard work that you guys put into unRAID. Your commitment to constantly improving the product is truly excellent. We get constant updates and new features and we only buy the license once. It really is amazing how you guys do it. Please, anyone reading this who feels the same, post beneath and let's let the guys know how much we appreciate what they do. I started my journey with computers with an Atari 800xl and absolutely loved it and wasted many hours. Then later I had a career in IT and it became a bit of a boring thing I had to do. Having found unRAID, I feel like I did when I was a boy, and it's just as exciting as my old 800XL was. So unRAID has put the fun back into computers for me. Well, that or I am just in my second childhood already!
  37. 9 points
    With all due respect man, this is unwarranted. We take security very seriously. Case in point: totally dropped development last month to incorporate CSRF protection as fast as possible, and it was a hell of a lot of work. We are a team of 2 developers in unRAID OS core, and one of us can only spend half time at it because of other hats that must be worn. The reality is 99% of CVE's do not affect unRAID directly. Many are in components we don't use. Many apply only to internet-facing servers. We have always advised that unRAID OS should not be directly attached to the internet. The day is coming when we can lift that caveat, but for now VM's can certainly serve that role if you know what you are doing. If you find a truly egregious security vulnerability in unRAID we would certainly appreciate an email. We see every one of those, whereas we don't read every single forum post. Send to tomm@lime-technology.com
  38. 9 points
    Can I manually create and use multiple btrfs pools?

Work in progress. Multiple cache pools are not supported yet, though they are planned for the future; until then you can still use multiple btrfs pools with the help of the Unassigned Devices plugin. There are some limitations, and most operations creating and maintaining the pool will need to be made using the command line, so if you're not comfortable with that, wait for LT to add the feature. If you want to use this now, here's how:

- If you haven't yet, install the Unassigned Devices plugin.
- Better to start with clean/wiped devices, so wipe them or delete any existing partitions.
- Using UD, format the 1st device using btrfs, choose the mount point name and optionally activate auto mount and share.
- Using UD, mount the 1st device; for this example it will be mounted at /mnt/disks/yourpoolpath
- Using UD, format the 2nd device using btrfs; no need to change the mount point name, and leave auto mount and share disabled.
- Now on the console/SSH add the device to the pool by typing: btrfs dev add -f /dev/sdX1 /mnt/disks/yourpoolpath
  Replace X with the correct identifier; note the 1 at the end to specify the partition (for NVMe devices add p1, e.g. /dev/nvme0n1p1).
- The device will be added and you will see the extra space on the 1st disk free space graph; the whole pool will be accessible in the original mount point, in this example: /mnt/disks/yourpoolpath
- By default the disk is added in single profile mode, i.e., it will extend the existing volume. You can change that to other profiles, like raid0, raid1, etc; e.g., to change to raid1 type: btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt/disks/yourpoolpath
  See here for the other available modes.
- If you want to add more devices to that pool just repeat the process above.

Notes
- Only mount the first device with UD; all other members will mount together despite nothing being shown on UD's GUI. Same to unmount: just unmount the 1st device to unmount the pool.
- It appears that if you mount the pool using the 1st device, used/free space are correctly reported by UD, unlike when you mount using e.g. the 2nd device. For some configurations the space might still be incorrectly reported; you can always check it from the command line: btrfs fi usage /mnt/disks/yourpoolpath
- You can have as many unassigned pools as you want. Example of how it looks in UD: sdb+sdc+sdd+sde are part of a raid5 pool, sdf+sdg are part of a raid1 pool, and sdh+sdi+sdn+sdo+sdp are another raid5 pool. Note that UD sorts the devices by identifier (sdX), so if sdp was part of the first pool it would still appear last; UD doesn't reorder the devices based on whether they are part of a specific pool. You can also see some of the limitations, i.e., no temperature is shown for the secondary pool members, though you can see temps for all devices on the dashboard page. Still, it allows you to easily use multiple pools until LT adds multiple cache pools to Unraid.
Remove a device:
- To remove a device from a pool, type (assuming there's enough free space): btrfs dev del /dev/sdX1 /mnt/disks/yourpoolpath (replace X with the correct identifier; note the 1 at the end).
- Note that you can't go below the minimum number of devices for the profile in use, i.e., you can't remove a device from a 2-device raid1 pool. You can convert it to the single profile first and then remove the device; to convert to single, use: btrfs balance start -f -dconvert=single -mconvert=single /mnt/disks/yourpoolpath Then remove the device normally as above.
Replace a device:
- To replace a device in a pool (if you have enough ports to have both old and new devices connected simultaneously): you need to partition the new device; to do that, format it using the UD plugin (you can use any filesystem), then type: btrfs replace start -f /dev/sdX1 /dev/sdY1 /mnt/disks/yourpoolpath (replace X with the source, Y with the target; note the 1 at the end of both). You can check replacement progress with: btrfs replace status /mnt/disks/yourpoolpath
- If the new device is larger, you need to resize the filesystem to use all available capacity. You can do that with: btrfs fi resize X:max /mnt/disks/yourpoolpath (replace X with the correct devid, which you can find with: btrfs fi show /mnt/disks/yourpoolpath).
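The "add a device" steps above can be collected into a small helper. This is a dry-run sketch, not from the original post: the device and pool paths are illustrative placeholders, and the function only prints each btrfs command so nothing is executed until you remove the echoes and verify the names against your own system.

```shell
#!/bin/bash
# Dry-run sketch of the "add a device to a UD btrfs pool" steps above.
# /dev/sdc1 and /mnt/disks/yourpoolpath are placeholders -- substitute
# your own. Drop the "echo" prefixes to actually run the commands.
add_to_pool() {
  local dev="$1" pool="$2"
  echo btrfs dev add -f "$dev" "$pool"                              # extend the pool
  echo btrfs balance start -dconvert=raid1 -mconvert=raid1 "$pool"  # optional: raid1 profile
  echo btrfs fi usage "$pool"                                       # verify reported space
}

add_to_pool /dev/sdc1 /mnt/disks/yourpoolpath
```

Keeping the balance step separate from the add is deliberate: single profile is the default after an add, so you only convert if you actually want redundancy or striping.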
  39. 9 points
    After successfully bricking the Fujitsu D2607 by downflashing it, I'm proud to be able to contribute to this thread and hereby report: LSI MegaRAID with SAS2008 chipsets
    3) DELL Perc H310 as well as H200
    Flashed successfully to LSI9211-8i IT (P20)
    3TB Drive Support with this card: YES (UPDATE: 5.0Beta7 added 3TB Drive support)
    Drive Spin Down support: YES (UPDATE: Added as of 5.0Beta7)
    Drive Temp Readings: YES
    Toolset_PercH310 to LSIMegaraid.zip (DOS, via bootable usb key) http://www45.zippyshare.com/v/51016808/file.html (for some reason I can't embed the link...) MD5: 80174075959fb7d1ff8c6362f7241bfe
    Update on 06.08.2014: Included the P19 firmware. http://www21.zippyshare.com/v/9541812/file.html
    Update on 01.12.2014: Possible issues with P20 firmware! See this post and this.
    Update on 23.10.2015: There is a new version of the Avago (former LSI) P20 (20.00.04.00) which seems to be OK with unRAID. See this post.
    Update on 15.09.2015: User opentoe found out that the DELL IT firmware also works with unRAID. It's your decision what to flash. Flashing the DELL firmware is easier and supported by DELL! opentoe's verdict on DELL IT or Avago (former LSI).
    Update on 07.06.2016: There is a new firmware from Avago, P20.00.07.00. The toolset has been updated accordingly. First impressions. http://www3.zippyshare.com/v/xZKIOHaz/file.html https://www.mediafire.com/?8f82hx4c032a929 MD5: 24f7d428292e00f9dee6f715d8564202
    Update on 30.12.2016: Firmware is still P20.00.07.00. Switched to RUFUS for bootdisk creation. Added alternative ways to extract controller info if MegaCli is not working. https://www.mediafire.com/?9cbklh4i1002n23 MD5: 7d90f84c831e8b939c5536d9eb03ba81
    Update on 23.02.2017: Firmware is still P20.00.07.00. Uses sas2flsh through the whole process. Tested on a "backflashed" H200, to be confirmed on a stock H200 card and on H310s. Card backup now dumps the full flash; this can be used to restore the initial condition of the card.
Added a script for automatic SAS address extraction. No reboot is necessary any more. https://www.mediafire.com/?0op114fpim9xwwf MD5: 2fbe3d562846e493714a9e8ac3f15923 Due to a missing UEFI environment, no changes nor testing with the UEFI shell.
Update on 30.03.2017, v2: Firmware is still P20.00.07.00. Spiced up the routines with some checks to automatically select the right tool if one is not working. Tested on a stock H310 as well as an H200 - works for me. Post your experience in the forum. https://www.mediafire.com/?6b77v4s7czluvs2 MD5: 6cb92336ff537aeb838085a028aa6601
Update on 11.04.2017, v3: Firmware is still P20.00.07.00. Added files for use in an EFI environment. Untested due to missing hardware. Post your experience in the forum. https://www.mediafire.com/?9ovj2rxuaf43wv4 MD5: t.b.d.
Update on 17.04.2017, v4 <--- this is the latest, use this one! Firmware is still P20.00.07.00. Corrections for the EFI environment. Untested due to missing hardware. Post your experience in the forum. https://www.mediafire.com/?py9c1w5u56xytw2 MD5: t.b.d.
If you experience the "failed to initialize PAL" error somewhere in step 5, you have to boot from the UEFI shell and try again, or use another mainboard. See here how to use the UEFI shell (Kudos 2 Maglin). Make sure you read and understand the __READMEFIRST.txt before starting! If you experience trouble or something is not clear, don't hesitate to ask for help; you can help improve the howto by doing so. Chances are small, but you can brick the controller!
  40. 8 points
    Pulseway is a great tool that allows you to remotely monitor servers from iOS, Android, and the web. I mainly use it on iOS for the notifications about loss of network, high CPU usage, or my server being shut down. It was one of the few things I missed when I switched my server to unRAID. Luckily, I figured out how to get it installed and working. Here are the steps I took:
    1. Download the Pulseway Agent for Slackware from Pulseway's website
    2. Copy the pulseway_x64.txz to /boot/extra (this is also the flash smb share, so flash/extra)
    3. Create a new folder, pulseway, in the /boot directory
    4. Reboot unRAID
    5. SSH into the server and copy config.xml.sample to config.xml: cp /etc/pulseway/config.xml.sample /etc/pulseway/config.xml
    6. Edit the config.xml file you just copied; you'll need to add your Pulseway username/password and set up any notifications you want to receive (change the Enabled flag from False to True). I enabled network interface monitoring (change the interface name to br0) as well as the WhenOffline, HighCpuUsage and MemoryLow notifications.
    7. Pulseway needs to be run to verify your config file works and to generate an id file. The Pulseway service looks for version 0.9.8 of libssl and libcrypto; unRAID includes version 1.0.0, so we need to create symlinks to the 1.0.0 files to trick Pulseway into using those: ln -s /lib64/libssl.so.1.0.0 /usr/lib64/libssl.so.0.9.8 and ln -s /lib64/libcrypto.so.1.0.0 /usr/lib64/libcrypto.so.0.9.8
    8. Start the Pulseway service: /etc/rc.d/rc.pulseway start
    9. Copy the id file generated by Pulseway to the /boot/pulseway directory (if you don't do this, the server will show up as a new machine in Pulseway every time unRAID boots): cp /var/pulseway/pulseway.id /boot/pulseway/pulseway.id
    10. Copy your config file to /boot/pulseway: cp /etc/pulseway/config.xml /boot/pulseway/config.xml
    11.
Add the following lines to /boot/config/go:
cp /boot/pulseway/config.xml /etc/pulseway/config.xml
cp /boot/pulseway/pulseway.id /var/pulseway/pulseway.id
ln -s /lib64/libssl.so.1.0.0 /usr/lib64/libssl.so.0.9.8
ln -s /lib64/libcrypto.so.1.0.0 /usr/lib64/libcrypto.so.0.9.8
# As of version 6.6, you also need one for libidn
ln -s /usr/lib64/libidn.so.12 /usr/lib64/libidn.so.11
/etc/rc.d/rc.pulseway start
12. Reboot unRAID and make sure everything works!
Explanation: unRAID's OS is stored in RAM, so any changes you make do not persist after a reboot/shutdown. That's why we need to move everything to the /boot drive (the flash drive unRAID boots from). On startup, we're installing Pulseway, creating symlinks to the libraries it needs, copying the config and id files to their respective locations, and then starting the service.
EDIT (9/26/2018): As of version 6.6, you also need to create a symlink for libidn like so: ln -s /usr/lib64/libidn.so.12 /usr/lib64/libidn.so.11
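For reference, the go-file additions (including the later libidn edit) can be combined into one fragment. One adjustment of mine, not from the original post: `ln -sf` instead of `ln -s`, so re-running the script doesn't fail when the links already exist.

```shell
# /boot/config/go additions, consolidated from the steps above.
# Assumption: "ln -sf" is used instead of the post's plain "ln -s" so a
# repeated run replaces stale links instead of erroring out.
cp /boot/pulseway/config.xml /etc/pulseway/config.xml
cp /boot/pulseway/pulseway.id /var/pulseway/pulseway.id
ln -sf /lib64/libssl.so.1.0.0    /usr/lib64/libssl.so.0.9.8
ln -sf /lib64/libcrypto.so.1.0.0 /usr/lib64/libcrypto.so.0.9.8
ln -sf /usr/lib64/libidn.so.12   /usr/lib64/libidn.so.11   # needed on unRAID 6.6+
/etc/rc.d/rc.pulseway start
```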
  41. 8 points
    I am starting a series of videos on pfSense. Both physical and VM instances will be used, covering topics such as using a failover physical pfSense to work with a VM pfSense, setting up OpenVPN (both an OpenVPN server and multiple OpenVPN clients), using VLANs, blocking ads, setting up Squid and SquidGuard, and more. This introductory part gives an overview of the series of videos and talks about pfSense and its advantages.
    Part 2 - Hardware and network equipment
    Part 3 - Install and basic config
    Part 4 - Customize, backup and update
    Part 5 - DHCP, interfaces and WiFi
    Part 6 - pfSense and DNS
    Part 7 - Firewall rules, port forwarding/NAT, aliases and UPnP
    Part 8 - Open NAT for XBOX ONE and PS4
  42. 8 points
    To upgrade: If you are running any 6.4 or later release, click 'Check for Updates' on the Tools/Update OS page. If you are running a pre-6.4 release, click 'Check for Updates' on the Plugins page. If the above doesn't work, navigate to Plugins/Install Plugin, select/copy/paste this plugin URL and click Install: https://s3.amazonaws.com/dnld.lime-technology.com/stable/unRAIDServer.plg Refer also to @ljm42's excellent 6.4 Update Notes, which are helpful especially if you are upgrading from a pre-6.4 release. BIOS: Especially if using Virtual Machines, we highly recommend keeping your motherboard BIOS up to date. Bugs: If you discover a bug or other issue new to this release, please open a Stable Releases Bug Report. This will likely be the last of the Unraid 6.6 series, since the Linux kernel it uses, 4.18.20, was recently marked EOL (no more updates). We had a devil of a time deciding whether to simply update to the 4.19 kernel now, but we're about to start the Unraid 6.7 public -rc, which uses the 4.19 kernel. If 6.7 stretches out too far we may need to produce a 6.6.7 for security updates or if something apocalyptic comes up.
Version 6.6.6 2018-12-01
Base distro:
- openssl: version 1.1.1a (CVE-2018-0734 CVE-2018-5407)
- openssl-solibs: version 1.1.1a (CVE-2018-0734 CVE-2018-5407)
- samba: version 4.8.7 (CVE-2018-14629 CVE-2018-16841 CVE-2018-16851 CVE-2018-16853)
Linux kernel:
- version 4.18.20
- OOT Intel 10gbps network driver: ixgbe: version 5.5.2
Management:
- bug fix: error reported by 'btrfs filesystem show' prevented proper btrfs cache pool analysis
- update smartmontools drivedb and hwdata/{pci.ids,usb.ids,oui.txt,manuf.txt}
- webgui: Black and White theme: make select arrow consistent for all browsers
- webgui: All themes: create consistent select arrow for all browsers using pure css
- webgui: Fixed: flash share warning is given when SMB is disabled
- webgui: Added customizable header background color
- webgui: All themes: css corrections
- webgui: Revert FAT32 filename collisions
- webgui: Dashboard: add disk utilization, cpu load & memory usage coloring
- webgui: Syslinux Configuration: rename Basic/Advanced view to Raw/Menu
- webgui: Fixed: parity schedule not taking zero values (e.g. Sunday=0)
- webgui: Prevent Adblocker from hiding share list
- webgui: Fixed missing font-awesome class in VM CPU popup window
- webgui: Improved PCIe ACS override help text
- webgui: Improved VFIO allow unsafe interrupts help text
- webgui: Apply syslinux changes to all menus except safe mode
- webgui: add per DIMM information toggle
- webgui: Fixed PHP warnings in Vars.page
- webgui: Docker: narrow CPU pinning list to fit lower resolutions
- webgui: Added confirmation checkbox when missing disk in array
- webgui: Disable cache slots selection when cache disks are re-arranged
- webgui: Added confirmation checkbox when missing cache disk
- webgui: Remove unused information in array view
  43. 8 points
    Hi everyone, Well it looks like this topic has run its course and it's time to be closed. That said, I am going to leverage executive privilege here to have the final word. I think you're probably the first user ever in our history to complain that we update our product TOO often. Keep in mind, all updates are elective, meaning you have to trigger them yourself to make them occur. We aren't Windows 10. We won't auto-update your server / reboot it without your permission. That said, let's go ahead and address the "why" that is really behind your comment. Updates to Unraid OS vary in content. There are major releases (v4, v5, v6), then there are minor releases (6.0, 6.1, 6.2, 6.3, etc.), and then there are security/bugfix releases (6.6.1, 6.6.2, 6.6.3, etc). Features tend to only get added to major and minor releases, though occasionally may be tossed in with a bugfix release if they are incredibly simple/basic in nature. Going back to your chief complaint, it's not about major or minor releases at all, but rather, the security/bugfix releases that happen more frequently. Bugfixes themselves typically address edge cases that weren't discovered until a user with the right combination of things discovers a bug. Anything to do with our code (the array code, the webGui; things that we directly code ourselves), we are pretty quick to respond and patch. Oftentimes this patch will do nothing for the vast majority of users, but only because those users didn't have that "right combination" yet for the bug to occur. These bugfix releases also tend to contain package updates that have CVEs issued. If you're not familiar with CVEs or why they matter, I suggest some reading on the subject (https://cve.mitre.org/ and https://en.wikipedia.org/wiki/Common_Vulnerabilities_and_Exposures). In short, Unraid OS contains a large number of common open-source software packages to make it function. 
From things like the web server, to Docker, VMs, and even the various network file sharing protocols we support, these components are built and maintained by the open source software community. As with any software solution, bad actors in the world are constantly looking for ways to exploit vulnerabilities that may exist to a nefarious end. As such, the good guys are constantly evaluating their own wares and doing their best to keep any exploits patched from use. This means sometimes frequent updates. I'm honestly not sure what part is "confusing" about updates. When you click "check for updates", you can easily see the number designation of the available update and compare that to the existing number in the top header of the UI to see how much of an update it really is. In addition, there is a little "i" symbol next to the updated release which you can click to tell you what's in the new update. And of course you can always visit the forums to see the newly announced release and what people have to say about it. Point is, it's fairly trivial to determine the meat behind an update and whether or not it's "changing a lot of stuff." I don't think anything significant was changed in any of the 6.6 bugfix releases. I also don't see any specific complaint from you about a specific change. Seems like your complaint is that you just don't want to click the update button, which is understandable, and also completely solvable... just don't click it. I think the others here have done a pretty good job of explaining the differences between our release candidates and our stable releases, so I'll refrain from beating a dead horse. Now let's talk about the point where this thread got derailed... I'm all for people having the freedom to express themselves in our forum. If you got a problem with us or our wares, you're more than welcome to use this place to complain and make yourself heard. BUT............ 
When you blatantly ignore responses to your comments and continue your argument without addressing valid points, you're not engaging in a discussion, but rather, noise pollution. Want to see things change? Address the counter-arguments and prove your case. Others have done this in the past and have been wildly successful in getting us to change things about Unraid. Your approach here will change nothing only because you have made no case for a change and when challenged by the community, you ignored those challenges to your argument. In addition, our community members, while with the best intentions, need to avoid bringing the discussion down a level to that of personal attacks and a "gtfo" attitude. My recommendation to folks is to take the thumper approach: Thanks! All the best, Jon
  44. 8 points
    Thought I should chime in here. In short, 6.6 will be out VERY soon, but let's take another stab at quelling the masses about our release process and communication. In short, communicating every issue we run into as we work on a new release benefits no one. The user community gains nothing and neither do we. Even if we started posting status updates on where we are towards the next release, the complaints would shift to that we're not posting enough. So instead, we communicate with folks that can actually help us get past any roadblocks we face (Linux developers, hardware manufacturers, etc.), and I think that's what most users would want us to do anyhow. Also, with regards to concerns about security, let's say we pushed out a release with Linux 4.18 and QEMU 3.0 in it for everyone, but that it caused a lot of users to experience major issues? Would that be acceptable to anyone? Would it be better to push the release out and just say, "I guess you'll just have to deal with it until a future kernel/QEMU update", or should we maybe hold back on pushing that release out until we've resolved critical functionality/performance issues? The point is that just because a software update or kernel update has been publicly released doesn't mean it's been fully vetted and tested with all use-case scenarios to ensure full stability. Furthermore, given that the overwhelmingly vast majority of our users are simply using Unraid in their home where they are the sole user anyway, I don't think exploits such as Spectre and Meltdown are that big of an issue. And if you are hosting your Unraid server in a datacenter/multi-tenant setup where you are worried about those exploits, you should switch to another solution if our release frequency isn't fast enough for you, because we aren't going to push out security releases that break functionality or cripple system performance. Bottom line: patience is a virtue...
  45. 8 points
    After a ton of Google-fu I was able to resolve my problem. TL;DR: the write cache on the drive was disabled. I found a page called How to Disable or Enable Write Caching in Linux. The article covers both ATA and SCSI drives, which I needed, as SAS drives are SCSI and a totally different beast.
    root@Thor:/etc# sdparm -g WCE /dev/sdd
    /dev/sdd: HGST HUS726040ALS214 MS00
    WCE 0 [cha: y, def: 0, sav: 0]
    This shows that the write cache is disabled.
    root@Thor:/etc# sdparm --set=WCE /dev/sdd
    /dev/sdd: HGST HUS726040ALS214 MS00
    This enables it, and my writes returned to the expected speeds.
    root@Thor:/etc# sdparm -g WCE /dev/sdd
    /dev/sdd: HGST HUS726040ALS214 MS00
    WCE 1 [cha: y, def: 0, sav: 0]
    This confirms the write cache has been set.
    Now I'm not totally sure why the write cache was disabled under unRAID; bug or feature? While doing my googling there was a mention of a kernel bug a few years ago where, if system RAM was more than 8G, it disabled the write cache. My current system has a little more than 8G, so maybe?
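If you want to script the same check, the WCE line from `sdparm -g WCE` is easy to parse. This is a sketch of mine, not from the post: the helper name is made up, and it only reports the cache state, it changes nothing.

```shell
# Sketch: report write-cache state from "sdparm -g WCE" output.
# wce_state is a hypothetical helper name; it reads sdparm output on
# stdin and prints "enabled" or "disabled".
wce_state() {
  awk '/^WCE/ { if ($2 == "1") print "enabled"; else print "disabled" }'
}

# On a live system you would pipe real output, e.g.:
#   sdparm -g WCE /dev/sdd | wce_state
# Here we feed the sample line from the post (WCE 0 = cache disabled):
printf 'WCE           0  [cha: y, def:  0, sav:  0]\n' | wce_state   # prints "disabled"
```

Looping this over /dev/sd? would let you spot any other drives that came up with caching off.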
  46. 8 points
    Hey, thanks for calling us out on this. Seriously. We dropped the ball on providing you guys with feedback and for that, we are truly sorry. Good news though! Following this exercise, we did end up contracting with a firm that is helping us finalize a new logo, and the work done in this thread has definitely contributed towards that end. While I think the final product will probably end up looking a bit different from what we've seen here, this definitely helped us figure out what we did and didn't like and guided us toward that end. That being said, it's worth noting that the investment we are making extends far beyond just a logo, and I think you guys will be awfully pleased with the end result once we get there. We're still probably at least a few months away from revealing the efforts of this project, but we may have a few things to share along the way ;-).
  47. 8 points
    This video is an overview and a quick tutorial on the newly released unRAID 6.4.0 stable. It compares unRAID 6.4.0 side by side with its predecessor 6.3.5 to show the differences and improvements. It shows how to safely upgrade from 6.3 to 6.4, then how to set up your SSL certificate for HTTPS access, and then how to set up an encrypted drive for the array.
  48. 8 points
    Just to provide an update to everyone with VM issues, we did track down a bug report that is affecting the 4.14 Linux kernel related to stability issues with VMs (particular when attempting to shut down or reboot the VMs). We will have more information on this soon. Please stay tuned.
  49. 8 points
    I took the GUI style efforts of Kode and Drakknar and blended them into a new theme, GRAY. Much obliged to them. The new color theme has the 'header' buttons at the left side, which expand when hovering over them. The top header is fixed, meaning status information stays on screen when scrolling down the page. All in all it gives a complete new fresh view to the GUI; hope you guys like this direction. Next, see if this can be integrated with the upcoming release of unRAID. Below are some screen pictures to give you an idea.
  50. 8 points
    Well gents, just wanted to give you an update on Ryzen with c-states enabled locking up unRAID... or rather how we've fixed it! On my Ryzen test machine, with the array stopped (to keep it idle) and c-states enabled in the BIOS, I'm approaching 7 days of uptime. Before the changes we made to the kernel in the upcoming RC7, it would only make it a few hours and lock up. I think you guys will find RC7 just epyc! (sorry, couldn't resist) FYI, these are the two kernel changes we had to add to make it stable for Ryzen: - CONFIG_RCU_NOCB_CPU: Offload RCU callback processing from boot-selected CPUs - CONFIG_RCU_NOCB_CPU_ALL: All CPUs are build_forced no-CBs CPUs Thanks to everyone here for testing and helping narrow it down to c-states. That made it easier for us to test and find a solution.