
Leaderboard


Popular Content

Showing content with the highest reputation since 06/04/20 in Posts

  1. 10 points
    SSD support in Linux and its associated file systems has been in place for quite a while and is quite mature at this point. However, don't be so quick to discount advances in spinners - that industry is not going away quietly, and it will probably be several years before we see the last of them. With 6.9 we have introduced "multiple pools". At present this only supports btrfs pools, but much of the underlying work has been done to support other types of pools (that is, pools formatted with a file system other than btrfs). Along with this, it gives us a path to generalize pools further and let you define multiple "unRAID" pools. This work will require changes in the unRAID kernel driver, and that is naturally the time to address SSD devices in the unRAID array, as well as a few other improvements. How this work is phased into future releases is T.B.D.
  2. 10 points
    I agree that it's not okay to complain about not hearing anything since B1, if for no other reason than this is a crazy time (I've thought a few times, "man, I hope those guys aren't sick"). But I would say it's a missed opportunity not to share at least something of what's happening behind the scenes. This is a pretty committed and passionate user base, and some level of sharing would only strengthen it. I work in big tech myself, so I get the struggle of figuring out how much to share with your customers. A monthly blog with some latest development details, near-term roadmap info, things to look forward to with Unraid, etc., would go a long way IMHO.
  3. 9 points
    At times you will want to "hide" devices from Unraid so that they can be passed through to a VM.

    Unraid prior to 6.7
    In the past (pre Unraid 6.7) we would stub the device by adding a Vendor:Device code to the vfio-pci.ids parameter in Syslinux, something like this:
    append vfio-pci.ids=8086:1533
    This worked, but had several downsides:
    - If you have multiple devices with the same Vendor:Device code, all of them would be stubbed (hidden) from Unraid.
    - It is a fairly technical process to find the right Vendor:Device code and modify the syslinux file. Make a mistake and your system won't boot!
    As an alternative, you could add the <Domain:Bus:Device.Function> string to the xen-pciback.hide parameter in Syslinux:
    append xen-pciback.hide=0000:03:00.0
    This had downsides too:
    - Still a technical / risky process.
    - If you add/remove hardware after modifying syslinux, the pci address could change and the wrong device could end up being stubbed. This would cause problems if a critical disk controller or NIC were suddenly hidden from Unraid.
    - This broke in Unraid 6.7. More details

    Unraid 6.7
    Starting with Unraid 6.7 we could bind devices to the vfio-pci driver based on the <Domain:Bus:Device.Function> string (aka pci address). You needed to manually modify the config/vfio-pci.cfg file and specify the <Domain:Bus:Device.Function> string, like this:
    BIND=03:00.0
    This worked, but still had several downsides:
    - It was a fairly technical process to find the right string to place in the file. But at least if anything went wrong you could simply delete the config file off the flash drive and reboot.
    - We still had the problem where, if you add/remove hardware after modifying the file, the pci addresses could change and the wrong device could end up being bound to vfio-pci.

    Unraid 6.9
    For Unraid 6.9, Skittals has incorporated the excellent "VFIO-PCI Config" plugin directly into the Unraid webgui. So now from the Tools -> System Devices page you can easily see all of your hardware and which IOMMU groups they are in. Rather than editing the config file by hand, simply check the box next to the devices that you want to bind to vfio-pci (aka hide from Unraid). If a device is being used by Unraid (such as a USB controller, disk controller, etc.) then the web interface will prevent you from selecting it.
    Additionally, we have a new version of the underlying vfio-pci script which can prevent the wrong devices from being bound when hardware is added or removed. When you click to bind a device on the System Devices page, it will write both the <Domain:Bus:Device.Function> and the <Vendor:Device> code to the config file, like this:
    BIND=0000:03:00.0|8086:1533
    In this example, the updated script will bind the device at pci address 0000:03:00.0, but only if the <Vendor:Device> code is 8086:1533. If a different <Vendor:Device> code is found at that address, it will not bind. This means we will never inadvertently bind a device that is important to Unraid! (However, since the desired device is not available to be bound, the VM expecting that device may not function correctly.)
    Devices bound in this way can be passed through to your VMs by going to the VM tab, editing the template, and then selecting the appropriate device from one of the hardware dropdowns. Can't find it? Check under "Other PCI Devices".
    If the System Devices page shows that multiple devices are in the same IOMMU group, it will automatically bind all the devices in that group to vfio-pci. You should then pass all devices in that IOMMU group to the same VM.
    Note: If you make hardware changes after setting this up, it would be a good idea to disable autostart on your VMs first. Then shut down, add/remove hardware as needed, and boot back into Unraid. Visit the Tools -> System Devices page and ensure the correct devices are still being bound to vfio-pci. Adjust as needed and reboot, then start your VMs.

    Troubleshooting Tips
    - If you had the VFIO-PCI Config plugin installed, you should remove it, as that functionality is now built in to Unraid 6.9.
    - The System Devices page writes the device details to the config/vfio-pci.cfg file on the flash drive. If you ever want to "start fresh", simply delete this file and reboot.
    - The /var/log/vfio-pci log file details each of the devices that were (un)successfully bound during boot, along with any available error messages.
    - Be sure to upload your diagnostics when requesting help, as both the config file and the log are included in it.
    Hopefully this is helpful. Feel free to let me know in the comments if anything is unclear or wrong.
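    If you want to double-check a binding from the terminal, here is a minimal sketch (the PCI address 0000:03:00.0 is just the example used above - substitute your own):
    # show what the vfio-pci script did at boot
    cat /var/log/vfio-pci
    # confirm which kernel driver currently owns the device; once bound it should report vfio-pci
    lspci -nnk -s 0000:03:00.0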
  4. 7 points
    Hi Unraid Community! We have a special request for anyone who is familiar with the work required and wants to make a little $cash! A few years back we released our own USB flash creator tool for Unraid OS. For those of you who remember, installation of Unraid used to require a manual process (documented here), but we wanted this new tool to be a far easier way to get up and running. Here we are a few years later and the tool desperately needs an update, especially the macOS version. Our problem is that the development team is heads down focused right now on getting 6.9 and 6.10 out the door. As such, we wanted to throw out a request to our Community to see if anyone has the tools and talent to help us with this. This is a formal RFQ (Request for Quote) to correct issues in the current USB flash creator for Unraid OS, for both Windows and Mac platforms. We're not necessarily looking for any increased functionality at this time, though creative ideas on how to make it better will be considered. To respond to this RFQ, please email jonp@lime-technology.com with your bid and time estimate for the work. We will update this post once a bid has been accepted. If you have questions regarding the RFQ, please post them here so our responses can be made in the post publicly for all to see. Thanks everyone!! All the best, Team Lime Tech
  5. 7 points
    Unraid Kernel Helper/Builder
    With this container you can build your own customized Unraid Kernel. Prebuilt images for direct download are at the bottom of this post. By default it will create the Kernel/Firmware/Modules/Root filesystem with the nVidia drivers and also DVB drivers (currently DigitalDevices, LibreElec, XBOX One USB Adapter and TBS OpenSource drivers included); optionally you can also enable ZFS support.
    nVidia driver installation: If you build the images with the nVidia drivers, please make sure that no other process is using the graphics card, otherwise the installation will fail and no nVidia drivers will be installed.
    ZFS installation: Make sure that you uninstall every plugin that enables ZFS for you, otherwise it is possible that the built images will not work. You can also set the ZFS version from 'latest' to 'master' to build from the latest branch on Github if you are using the 6.9.0 repo of the container.
    ATTENTION: Please read the description of the variables carefully!
    Once the container has started, don't interrupt the build process; the container will automatically shut down when everything is finished. I recommend opening a console window and typing 'docker attach Unraid-Kernel-Helper' (without quotes, and replace 'Unraid-Kernel-Helper' with your container name) to view the log output. (You can also open a log window from the Docker page, but this can be very laggy if you select many build options.) The build itself can take a long time depending on your hardware, but should be done in ~30 minutes (some tasks can take very long depending on your hardware, please be patient).
    Plugin now available (will show all the information about the images/drivers/modules that it can get): https://raw.githubusercontent.com/ich777/unraid-kernel-helper-plugin/master/plugins/Unraid-Kernel-Helper.plg
    Or simply download it through the CA App.
    This is how the build of the images works (simplified):
    - The build process begins as soon as the docker starts (you will see the docker image is stopped when the process is finished). Please be sure to set the build options that you need.
    - Use the logs, or better, open up a console window and type 'docker attach Unraid-Kernel-Helper' (without quotes) to also see the log (can be very laggy in the browser depending on how many components you choose). The whole process status is outlined by watching the logs (the button on the right of the docker).
    - The image is built into /mnt/cache/appdata/kernel/output-VERSION by default. You need to copy the output files to /boot on your USB key manually (a rough example of this copy step is sketched below, after the beta build notes), and you also need to delete or move them for any subsequent builds.
    - There is a backup copied to /mnt/cache/appdata/kernel/backup-version. Copy that to another drive external to your Unraid server, so that you can easily copy it straight onto the Unraid USB if something goes wrong.
    THIS CONTAINER WILL NOT CHANGE ANYTHING IN YOUR EXISTING INSTALLATION OR ON YOUR USB KEY/DRIVE. YOU HAVE TO MANUALLY PUT THE CREATED FILES FROM THE OUTPUT FOLDER ONTO YOUR USB KEY/DRIVE AND REBOOT YOUR SERVER. PLEASE BACKUP YOUR EXISTING USB DRIVE FILES TO YOUR LOCAL COMPUTER IN CASE SOMETHING GOES WRONG! I AM NOT RESPONSIBLE IF YOU BREAK YOUR SERVER OR ANYTHING ELSE WITH THIS CONTAINER; THIS CONTAINER IS THERE TO HELP YOU EASILY BUILD A NEW IMAGE AND UNDERSTAND HOW THIS WORKS.
    UPDATE NOTICE: If a new update of Unraid is released, you have to change the repository in the template to the corresponding build number (I will create the appropriate container as soon as possible), e.g. 'ich777/unraid-kernel-helper:6.8.3'.
    Forum notice: When something isn't working with or on your server and you make a forum post, always mention that you use a Kernel built by this container! Note that LimeTech supports no custom Kernels, and you should ask in this thread if you are using this specific Kernel when something is not working.
    CUSTOM_MODE: This is only for advanced users! In this mode the container will stop right at the beginning and will copy the build script and the dependencies to build the kernel modules for DVB and joydev into the main directory. (I highly recommend using this mode for changing things in the build script, like adding patches or other modules to build; connect to the console of the container with 'docker exec -ti NAMEOFYOURCONTAINER /bin/bash' and then go to the /usr/src directory - the build script is executable.)
    Note: You can use the nVidia & DVB plugins from linuxserver.io to check if your driver is installed correctly (keep in mind that some things will display wrong or not show up, like the driver version in the nVidia plugin - but you will see the installed graphics cards, and the DVB plugin will show that no kernel driver is installed but you will still see your installed cards - this is simply because I don't know how their plugins work). Thanks to @Leoyzen, klueska from nVidia and linuxserver.io for the motivation to look into how this all works...
    For safety reasons I recommend shutting down all other containers and VMs during the build process, especially when building with the nVidia drivers! After you have finished building the images I recommend deleting the container! If you want to build it again, please redownload it from the CA App so that the template is always the newest version!
    Beta build (the following is a tutorial for v6.9.0):
    Upgrade to your preferred stock beta version first, reboot, and then start building (to avoid problems)!
    Download/redownload the template from the CA App and change the following things:
    1. Change the repository from 'ich777/unraid-kernel-helper:6.8.3' to 'ich777/unraid-kernel-helper:6.9.0'
    2. Select the build options that you prefer
    3. Click on 'Show more settings...'
    4. Set Beta Build to 'true'
    5. Start the container and it will create the folders '/stock/beta' inside the main folder
    6. Place the files bzimage, bzroot, bzmodules and bzfirmware in the folder from step 5 (after the start of the container you have 2 minutes to copy over the files; if you don't copy them over within these 2 minutes, simply restart the container and the build will start once it finds all the files). (You can also get the files bzimage, bzroot, bzmodules and bzfirmware from the beta zip file from Limetech, or better, first upgrade to that beta version and then copy the files from your /boot directory to the directory created in step 5, to avoid problems.)
    !!! Please also note that if you build anything beta, keep an eye on the logs, especially when it comes to building the Kernel (everything before the message '---Starting to build Kernel vYOURKERNELVERSION in 10 seconds, this can take some time, please wait!---' is very important) !!!
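    As a rough illustration of the manual copy step described above - a hedged sketch only, assuming the output folder contains the bz* files directly; 'VERSION' is a placeholder for your actual output folder name and the backup destination is just an example:
    # back up the current boot files from the flash drive first
    mkdir -p /mnt/user/backups/unraid-flash
    cp /boot/bzimage /boot/bzroot /boot/bzmodules /boot/bzfirmware /mnt/user/backups/unraid-flash/
    # then copy the freshly built files onto the flash drive and reboot
    cp /mnt/cache/appdata/kernel/output-VERSION/bz* /boot/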
    Here you can download the prebuilt images:
    - Unraid Custom nVidia builtin v6.8.3: Download (nVidia driver: 440.100)
    - Unraid Custom nVidia & DVB builtin v6.8.3: Download (nVidia driver: 440.100 | LE driver: 1.4.0)
    - Unraid Custom nVidia & ZFS builtin v6.8.3: Download (nVidia driver: 440.100 | ZFS version: 0.8.4)
    - Unraid Custom DVB builtin v6.8.3: Download (LE driver: 1.4.0)
    - Unraid Custom ZFS builtin v6.8.3: Download
    - Unraid Custom nVidia builtin v6.9.0 beta22: Download (nVidia driver: 440.100)
    - Unraid Custom nVidia & ZFS builtin v6.9.0 beta22: Download (nVidia driver: 440.100 | ZFS built from the 'master' branch on Github on 2020.06.30)
    - Unraid Custom ZFS builtin v6.9.0 beta22: Download (ZFS built from the 'master' branch on Github on 2020.06.30)
  6. 7 points
    Hi guys, I got inspired by this post from @BRiT and created a bash script that allows you to set media to read only, to prevent ransomware attacks and accidental or malicious deletion of files. The script can be executed once to make all existing files read only, or can be run using cron to catch all newly created files as well. The script has a built-in help system with example commands; any questions, let me know below. Download by issuing the following command from the unRAID 'Terminal':
    curl -o '/tmp/no_ransom.sh' -L 'https://raw.githubusercontent.com/binhex/scripts/master/shell/unraid/system/no_ransom/no_ransom.sh' && chmod +x '/tmp/no_ransom.sh'
    Then to view the help simply issue:
    /tmp/no_ransom.sh
    Disclaimer: Whilst I have done extensive tests and runs on my own system with no ill effects, I do NOT recommend you run this script across all of your media until you are fully satisfied that it is working as intended (try a small test share). I am in no way responsible for any data loss due to the use of this script.
  7. 7 points
  8. 6 points
    Hi, I think it would be really great to be able to set individual mover schedules that are specific to each cache pool, as I would find it useful to have some pools move files more often than others.
  9. 6 points
    Update: there is a lot going on
    Yes, there are always plans
  10. 5 points
    A discounted 2nd or 3rd license was offered a few years ago in the major transition from unRAID v5 to v6 and I bought a couple of extra licenses. Since then, that has not been offered, but I would not hesitate to buy a couple of extra licenses at full price if a need arose. For my purposes, at the current price levels unRAID remains a bargain even in comparison to free offerings (yes, a discount would still be nice, but, hopefully not an absolute necessity. 😀)
  11. 5 points
    I have the same problem with my Nextcloud. It looks like the updated Document Server app no longer works properly. I solved this by disabling the "Document Server" from the terminal. Open the Nextcloud terminal window and type this command to list all your currently installed apps:
    occ app:list
    Then I searched for the Document Server and used this command to disable it:
    occ app:disable documentserver_community
    Now I can log in to Nextcloud and see the user interface again.
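    For anyone following along, a short hedged recap of the commands above, run from the Nextcloud container console (occ app:enable is the standard occ counterpart for turning the app back on once a fixed release ships):
    occ app:list                               # list installed apps
    occ app:disable documentserver_community   # disable the broken Community Document Server
    occ app:enable documentserver_community    # re-enable later, once a fixed version is available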
  12. 5 points
    CHANGELOG:
    02.07.2020:
    - Fixed move from container-toolkit to nvidia-container-toolkit on Github
    30.06.2020:
    - ZFS pools now loaded/unloaded on array start/stop (manual load with 'zpool import -a' or unload with 'zpool export -a' is always possible)
    - Added possibility to load Intel drivers/kernel module i915 on startup
    20.06.2020:
    - Added possibility to build ZFS from the 'master' branch on Github (for 6.9.0)
    - Updated plugin (added additional information from the ZFS pool)
    18.06.2020:
    - Added fix for nVidia driver v440.82 and Kernel 5.7 (for 6.9.0 beta22)
    15.06.2020:
    - Added option to save the full log output from the build process to a file in the main directory
    - Released Unraid-Kernel-Helper-Plugin (in the CA App)
    - Switched to gcc version 9.3.0-13 (for 6.9.0)
    10.06.2020:
    - Separated the DVB drivers so they don't all install at once (valid options are: 'digitaldevices', 'libreelec', 'tbsos', 'xboxoneusb')
    07.06.2020:
    - Added possibility to build beta versions (for 6.9.0)
    - Added finishing sound (will only play on the motherboard PC speaker and only for 6.9.0 and up)
    - Words are hard (fixed a few typos)
    06.06.2020:
    - Added TBS OpenSource drivers to the DVB build step
    05.06.2020:
    - Corrected an error where zpools were not loaded automatically on boot
    04.06.2020:
    - Added ZFS to the Kernel build options
    - Added option to include user-specific Kernel patch files with the automatic build
    - Words are hard (fixed a few typos)
    31.05.2020:
    - Added possibility to insert a custom Kernel version
    26.05.2020:
    - Fixed build steps so that the latest 'nvidia-container-runtime' and 'nvidia-container-toolkit' can be built
    - Fixed CUSTOM_MODE sleep (if CUSTOM_MODE was set to true and the script 'buildscript.sh' was executed from the main directory, it says again that CUSTOM_MODE is enabled)
    - Added end message with version numbers
    - Added a warning if build mode nVidia is selected (if a process uses the graphics card, the installation of the nVidia drivers will fail)
    - Words are hard (fixed a few typos and sentences that were not comprehensible)
    25.05.2020:
    - Initial release
  13. 4 points
    Without reinventing the wheel, why not just supply a compatible img and then tell people to use Balena Etcher, which is pretty much the go-to image writer? You just have to supply an img in a suitable format.
  14. 4 points
    New feature coming soon: Minecraft console accessible via the web UI. Thought this might please a few people - no nasty docker exec.
  15. 4 points
    Hello everyone, I wanted to sincerely thank everyone for contributing to the German translation work. If you are interested in an Unraid server case badge, please DM me your mailing address and I will mail them out to you. 🙂
  16. 4 points
    That's an unbelievably selfish view to have on this. We would all like a new release with new features, bugs fixed, etc., but it's not as simple as just flicking a switch, and with the current world pandemic things are likely to take even longer. The fact that your only post on here is to bitch that it's been 3 months since a release speaks volumes about how little you know about development, and thus you aren't in a position to judge.
  17. 4 points
    As a firmware developer, I appreciate all the effort that goes into getting a new release ready for deployment in the wild. There's regression testing, talking with active plugin developers, usability studies, validation testing, fuzz testing, scanning for copyrighted material, etc... Oh crap, bug found. Start over at the beginning and repeat until it is good enough to deploy. Have arguments about what is "good enough". Defeature what isn't stable for this release. Start over with testing. This is the normal stress involved. Now add in COVID on top of it and turn the blender up to 11. I'm sure they are doing what they can as fast as they can given the circumstances of their professional and personal lives. Remember, updates are FREE for life and their personal lives are none of our business. They released 6.8.3 just 3 months ago.
  18. 4 points
    When is the next unraid version available?
  19. 3 points
    Let me start with a THANK YOU: 1. for a great product and 2. for the support I find in this forum. I started this project a month ago, and I built not just one Unraid server but two. This was also my son's Boy Scouts project for one of his Eagle merit badges, so it was a win-win situation. With that, I had a blast reading, learning, tweaking and getting these 2 builds right. They look and behave ROCK SOLID and I am just ironing out small details. My second Unraid is doing a parity rebuild, as I was testing/simulating a "potential" drive failure, so I just swapped one drive to see how it works. I am sure I will run into issues, but the health of the data is only as good as the health of the system, so maintaining healthy/functional system hardware is absolutely key. I managed to put in good quality parts, so I think that will help in the long run. Thanks, and I look forward to being a member of this forum and helping when and where I can. I just wanted to say this, that's all. I now have 84TB of available raid / file system 🙂 It's CRAZY
  20. 3 points
    There are additional things that our creator does that a simple IMG file does not. Our tool validates the USB flash GUID as usable (most of the time ;-), it allows the user to toggle EFI boot mode, as well as customize hostname and networking options. It even lets the user select which release to install (from a backup, the current available release, or from our Next branch). These are features that are important to ease of use for new users, and while we can appreciate that not everyone needs this, those that do really appreciate it. It is out of scope for the purpose of this RFQ, but know that we are investigating other licensing methods for future inclusion. Changing licensing is always a real iceberg of a problem: seems small and simple from above the water line, but below it is a gigantic thing just waiting to sink your ship ;-). That is going to have to be another battle for another day. While we definitely appreciate what certain users want, we have to address the wider market of users that aren't as savvy. I definitely agree that if you're savvy enough to build a computer, you're probably savvy enough to figure out how to image a USB flash drive using some generic tool, but we're not just targeting that kind of customer, and perhaps longer term users won't be building their own servers at all. The point is, our flash creator tool should work fine, but it's been a bit more of a bear to maintain than we'd like, so we're looking for offers from developers that want to earn a little extra cash to help build this thing. And yeah, we probably will have to fix it again after the next Mac release comes out, but that's fine and something we're also willing to accept.
  21. 3 points
    New UD release features: Changing the mount point will now change the physical disk label on ntfs, btrfs, xfs, vfat, btrfs encrypted, and xfs encrypted disks. There is a new switch in the upper right corner of the UD page ('None' and 'All') that will show all partitions on all disks when switched on. This is a UD setting and not a cookie, so it will be persistent. Thanks @Lev and @TexasUnraid for your ideas.
  22. 3 points
    Well, snap, I bit the bullet and got a second license. I am officially BROKE
  23. 3 points
    OK Pete, just because you are asking, I'll bite. Container set to a 2GB limit and restarted. DPI still on. If the container crashes and I have to restart it, no big deal. This isn't Plex after all. No angry mobs appear if the UniFi controller goes down!
  24. 3 points
    Multiple mounts, one upload and one tidy-up script. @watchmeexplode5 did some testing and performance gets worse as you get closer to the 400k mark, so you'll need to do something like below soon:
    1. My folder structure looks something like this:
    mount_mergerfs/tdrive_vfs/movies
    mount_mergerfs/tdrive_vfs/music
    mount_mergerfs/tdrive_vfs/uhd
    mount_mergerfs/tdrive_vfs/tv_adults
    mount_mergerfs/tdrive_vfs/tv_kids
    2. I created separate tdrives / rclone mounts for some of the bigger folders e.g.
    mount_rclone/tdrive_vfs/movies
    mount_rclone/tdrive_vfs/music
    mount_rclone/tdrive_vfs/uhd
    mount_rclone/tdrive_vfs/adults_tv
    For each of those I created a mount script instance where I do NOT create a mergerfs mount.
    3. I mount each in turn, and for the final main mount I add the extra tdrive rclone mounts as extra mergerfs folders:
    ###############################################################
    ###################### mount tdrive #########################
    ###############################################################
    # REQUIRED SETTINGS
    RcloneRemoteName="tdrive_vfs"
    RcloneMountShare="/mnt/user/mount_rclone"
    LocalFilesShare="/mnt/user/local"
    MergerfsMountShare="/mnt/user/mount_mergerfs"
    # OPTIONAL SETTINGS
    # Add extra paths to mergerfs mount in addition to LocalFilesShare
    LocalFilesShare2="/mnt/user/mount_rclone/music"
    LocalFilesShare3="/mnt/user/mount_rclone/uhd"
    LocalFilesShare4="/mnt/user/mount_rclone/adults_tv"
    4. Run the single upload script - everything initially gets moved from /mnt/user/local/tdrive_vfs to the tdrive_vfs teamdrive.
    5. Overnight I run another script (see the scheduling sketch at the end of this post) to move files from the folders that are in tdrive_vfs to the correct teamdrive. You have to work out the encrypted folder names for this to work. Because rclone is moving the files, the mergerfs mount gets updated, i.e. it looks to plex etc. like they haven't moved:
    #!/bin/bash
    rclone move tdrive:crypt/music_tdrive_encrypted_folder_name gdrive:crypt/music_tdrive_encrypted_folder_name \
     --user-agent="transfer" \
     -vv \
     --buffer-size 512M \
     --drive-chunk-size 512M \
     --tpslimit 8 \
     --checkers 8 \
     --transfers 4 \
     --order-by modtime,ascending \
     --exclude *fuse_hidden* \
     --exclude *_HIDDEN \
     --exclude .recycle** \
     --exclude .Recycle.Bin/** \
     --exclude *.backup~* \
     --exclude *.partial~* \
     --drive-stop-on-upload-limit \
     --delete-empty-src-dirs
    rclone move tdrive:crypt/tv_tdrive_encrypted_folder_name tdrive_t_adults:crypt/tv_tdrive_encrypted_folder_name \
     --user-agent="transfer" \
     -vv \
     --buffer-size 512M \
     --drive-chunk-size 512M \
     --tpslimit 8 \
     --checkers 8 \
     --transfers 4 \
     --order-by modtime,ascending \
     --exclude *fuse_hidden* \
     --exclude *_HIDDEN \
     --exclude .recycle** \
     --exclude .Recycle.Bin/** \
     --exclude *.backup~* \
     --exclude *.partial~* \
     --drive-stop-on-upload-limit \
     --delete-empty-src-dirs
    rclone move tdrive:crypt/uhd_tdrive_encrypted_folder_name tdrive_uhd:crypt/uhd_tdrive_encrypted_folder_name \
     --user-agent="transfer" \
     -vv \
     --buffer-size 512M \
     --drive-chunk-size 512M \
     --tpslimit 8 \
     --checkers 8 \
     --transfers 4 \
     --order-by modtime,ascending \
     --exclude *fuse_hidden* \
     --exclude *_HIDDEN \
     --exclude .recycle** \
     --exclude .Recycle.Bin/** \
     --exclude *.backup~* \
     --exclude *.partial~* \
     --drive-stop-on-upload-limit \
     --delete-empty-src-dirs
    exit
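    For reference, a hedged example of how the overnight run in step 5 could be scheduled - the script path is a placeholder, and on Unraid the User Scripts plugin's custom schedule accepts the same cron syntax:
    # run the tidy-up/move script at 03:00 every night (path is hypothetical)
    0 3 * * * /boot/config/scripts/rclone_tidyup.sh >> /var/log/rclone_tidyup.log 2>&1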
  25. 3 points
    I released a new version 2020.06.20 that fully supports 6.9 Multi Language. Those of you on 6.9 Beta will be able to download the language packs and have UD translation installed as it gets translated for each language. @ich777 is working on the German translation.
  26. 3 points
    I made more room.
  27. 3 points
    Let's see how many people get this Status
  28. 3 points
    Depends what I have going on in my life at the time. The last 6-9 months have been pretty full on, and the last 3 months have been particularly hectic: new job, new baby, and seeing as I work in front-line healthcare as a clinician in one of the initially hardest-hit areas of my country in terms of Covid, any dev work has very much taken a back seat.
  29. 3 points
    I just updated the nextcloud container to latest, after which I couldn't access Nextcloud and was getting a 400 bad request error. So if anyone else has this error, you can fix it by rolling the container back. To do that, change the repository line in the template to linuxserver/nextcloud:18.0.4-ls81 as below
  30. 3 points
    Don't think it's currently possible, but it would be nice to be able to start and/or stop a docker on a schedule. For example, only have download-type dockers run overnight, or media servers running on evenings/weekends. Likely not that simple to get a full-featured scheduler in there, but it might be nice.
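    Until something like this is built in, a hedged workaround sketch using plain cron entries (container names are examples only; the User Scripts plugin can run the same docker commands on a custom schedule):
    # start download containers at 23:00 and stop them again at 07:00
    0 23 * * * docker start nzbget deluge
    0 7 * * * docker stop nzbget deluge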
  31. 2 points
    Big big thanks to ich777 for this container + instructions. Also big thanks to rachid596 for providing this AMD USB/Audio fix patch from reddit. Such a gamechanger. Love this community. Was able to build this in ~10min with a full core workout. The log showed a successful patch of the AMD function level reset fix:
    ./ quirk_no_flr.patch
    sent 2,302 bytes received 38 bytes 4,680.00 bytes/sec
    total size is 2,168 speedup is 0.93
    ---Applying patches to Kernel, please wait!---
    patching file drivers/pci/quirks.c
    Hunk #1 succeeded at 3719 with fuzz 2 (offset -23 lines).
    Hunk #2 succeeded at 4877 with fuzz 1 (offset 288 lines).
    patching file drivers/ata/libata-core.c
    Hunk #1 succeeded at 2388 (offset 1 line).
    Hunk #2 succeeded at 2481 (offset 4 lines).
    Hunk #3 succeeded at 2534 (offset 4 lines).
    Hunk #4 succeeded at 2559 (offset 4 lines).
    Hunk #5 succeeded at 2655 (offset 5 lines).
    patching file drivers/scsi/hpsa.c
    Hunk #1 succeeded at 978 (offset 2 lines).
    patching file drivers/scsi/mvsas/mv_init.c
    Hunk #1 succeeded at 655 (offset -31 lines).
    patching file fs/reiserfs/resize.c
    patching file fs/reiserfs/super.c
    patching file arch/x86/kvm/x86.c
    Hunk #1 succeeded at 116 with fuzz 1 (offset 7 lines).
    patching file include/linux/blkdev.h
    Hunk #1 succeeded at 1165 with fuzz 2 (offset -159 lines).
    patching file drivers/pci/quirks.c
    Hunk #1 succeeded at 3564 (offset 173 lines).
    patching file include/linux/pci_ids.h
    Hunk #1 succeeded at 1797 (offset 13 lines).
    patching file lib/raid6/algos.c
    Hunk #1 succeeded at 28 with fuzz 2 (offset -5 lines).
    Hunk #2 succeeded at 234 (offset 9 lines).
    Hunk #3 succeeded at 319 (offset 15 lines).
    patching file drivers/pci/quirks.c
    Hunk #1 succeeded at 5238 (offset 109 lines).
    And now I'm rocking and rolling in Plex with nvidia and got my Win10 VM with USB hub + audio passthrough working. Life is good.
    2020-07-01_05.29.38.log
    quirk_no_flr.patch
  32. 2 points
    Bonjour, Thanks to @Squid, we now have an easy way to track and see any missing French translations by comparing the French github repo with the English Github repo. To see missing words/phrases, please see here: https://squidly271.github.io/languageErrors.html#fr_FR If you would like to make contributions to the French Github Repo, please do so there and be sure to follow the instructions outlined in the README.md file. All PR's will be reviewed by myself and the French Forum moderators prior to any merges. Merci beaucoup, Spencer
  33. 2 points
    Do we have an ETA on when unRAID will support NFSv4+? I've seen this request come up multiple times on here, and it looks like at one point Tom even "tentatively" committed to trying to "get this into the next -rc release": Unfortunately, that was over 3 years ago. Do we have any updates on this? I believe adding support for more recent NFS versions is important because it is likely to resolve many of the problems we see with NFS here on the forum (especially the NFS "stale file handle" errors). I think that's why we also keep seeing this request come up over and over again. I understand where Tom is coming from when he says, "Seriously, what is the advantage of NFS over SMB?": The majority of the time, for the majority of users, I would recommend SMB. It's pretty fantastic as it exists today, but there are times when NFS is the better tool for the job, particularly when the clients are Linux-based machines; NFS offers much better support for Unix operations (e.g. when you're backing up files to an unRAID share and it contains symbolic links). NFS also offers better performance with smaller files (i.e. those short, random-R/W-like file operations). Rereading my post, I hope this request doesn't come off as overly aggressive. That's certainly not the intent. I just wanted to provide some background on the request and advocate for continued NFS support on unRAID. NFS is still an important feature of unRAID. Thank you in advance for your consideration! -TorqueWrench
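    For context, a hedged sketch of what the client side looks like when a Linux machine asks for a specific NFS version (hostname and share path are placeholders); a mount forced to v4 simply fails against a server that only offers v3:
    # request NFSv4.2 explicitly
    mount -t nfs -o vers=4.2 tower:/mnt/user/backups /mnt/backups
    # what clients have to fall back to against unRAID today
    mount -t nfs -o vers=3 tower:/mnt/user/backups /mnt/backups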
  34. 2 points
    Thanks. Just a small update, tagging @AdrianF as well. Managed to grab an 11-month-old (almost like new) Phantek Enthoo Pro for C$40 (delivered at home), so I am now going full ATX. I hope it is a decent case for my needs (and future upgrades). As I am not in a rush, I am just searching for deals on the other components and grabbing them one at a time.
  35. 2 points
    The line in the script that checks for the disks is: ls -dv /mnt/disk* The cache drive is located at /mnt/cache/ so files on it will not be affected. They'll be moved onto the main array with no issue.
  36. 2 points
    Unfortunately this would be a limitation of the script; sonarr/radarr etc. would not be able to delete existing media, they would only be able to add new media.
    Nope, once locked you would not be able to move media. In that case I would do all your moves/renames etc. and then lock it after you are satisfied it won't change.
    I haven't, no.
    You and me both 🙂
  37. 2 points
    You can edit /config/plugins/dynamix/dynamix.cfg on the flash drive and completely remove the line (and don't leave a blank line in its place):
    locale="xx_XX"
    and then reload any page on the UI to go back to English.
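    If you prefer doing this from the terminal, a hedged one-liner that does the same thing (on a running server the flash drive is mounted at /boot):
    # delete the locale= line entirely rather than leaving a blank line
    sed -i '/^locale=/d' /boot/config/plugins/dynamix/dynamix.cfg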
  38. 2 points
    Is there any possibility of defining a national date format? In unRaid the default is, for example, "Thursday, 2019 April 4". In Polish this makes no sense, because we are used to a different format like "Thursday, 4 April 2019", which translates to "Czwartek, 4 kwietnia 2019".
  39. 2 points
    Indeed, the inability to use NFS v4 is still an annoyance. I don't have any machines running Microsoft - all my desktop machines run Linux Mint, my KODI/LibreElec boxes run a Linux kernel, my Squeezeplayer boxes run a Linux kernel, my homebrew domestic lighting control system runs on Linux, and even Android phones run a Linux kernel. Why would I want to run a Microsoft network filing technology? T0rqueWr3nch has highlighted some advantages of using the latest version of NFS in such an environment. Please, if it's simply a matter of turning on a kernel option, and it has no adverse effect on any other functionality, can this be implemented in the next release?
  40. 2 points
    My take on your plan is that it will work. I would reconsider a few things if the goal is long-term upgrade potential. This is only what I would do or have done; there are many paths to success.
    1. Stay with the B550 because, as of right now, it will support the next gen Ryzen. The Gen 4 PCIe support isn't a concern today, but in 2 years it may make a real difference, as will the ability to plug in the next gen CPU.
    2. Move up to an ATX or mATX board. ITX boards are typically short on slots, both PCIe and RAM. If this server grows you will want to be able to grow it, instead of having to swap board and case because of a lack of high-speed slots, or having to replace all of your memory because you don't have physical room to add more.
    3. Don't worry about matching the PSU to current needs; the PSU will only provide as much power as needed. If the system pulls 420 watts max, then a 1000 watt PSU will give it 420 watts, not 1000. Starting with a more powerful PSU will add flexibility for future needs. I am currently running a 650 watt but I only have 5 spinning drives and low power consumption.
    4. The 3300X is a good CPU and reportedly fast, but it is 4 cores and 8 threads, which may not be enough down the road. You can currently buy a 6 core, 12 thread CPU for $105. The R5 1600AF is structurally an R5 2600 with the same performance. I went this route and have zero complaints. I was considering the 3300X but skipped it. This becomes tricky because a higher clock speed will help with games but a higher core/thread count will help with multiple tasks.
    5. Case cooling is important. Whatever form factor you go with, it needs good cooling. This becomes important for the drives. All of my spinning drives are 7200 rpm and I had to change things from my original setup because all of the drives were idling in the mid 40s C and cooking when in heavy use. I added a drive cage (a 2 x 5.25" to 3 x 3.5" Icy Dock), which allowed the hottest drives to be in their own fan-cooled cage and allowed me to keep a space between the others for cooling. A fan across tightly packed drives limits cooling. My drives are all in the low to mid 30s now. Having those 5.25" bays saved me.
    6. You will need a video card to build the system and set up the BIOS; after that you will not have to have one to run and administer the server if you have another computer, notebook, phone, iPad etc. I used a cheap PCIe x1 card to get the BIOS set up. You will only need the keyboard/mouse to set up the system. Your motherboard may require a keyboard to boot, but that is unusual.
    7. CPU cooling can be done with the stock cooler. It will work better if the case has excellent cooling. I used a Hyper T4 after switching to the 1600 AF because I had it. If the rest of the system stays cool, it is easier to cool the CPU. If the rest of the system runs hot then the stock cooler may not be enough.
    I built mine with extra junk from old upgrades, then I upgraded what I needed to. Unraid is very forgiving when it comes to hardware.
  41. 2 points
    - SFF-8087 to SATA reverse breakout cables, if connecting the backplane to regular SATA ports on the board/controller.
    - SFF-8087 to SFF-8087 cables, to connect the backplane to an LSI or similar controller with miniSAS ports.
  42. 2 points
    Yes, thanks for the info, but I had to upgrade for hardware reasons, since I have a Z490 mainboard with an Intel 10th gen CPU and hardware decoding did not work without the upgrade, nor did detection of the onboard 2.5G NIC (the latter still doesn't work). But of course I made backups of everything system-relevant and stored them externally, in case something does go wrong.
  43. 2 points
    First of all, I downloaded the Virt_Manager docker by djaydev. Then I went to the support site, and the very first post instructed me to do the following. I followed what @dee31797 posted - here are the itemized steps:
    1. Turn on netcat-openbsd in Nerd Pack, apply, and restart the computer.
    2. Open the virt-manager docker GUI - you'll still have the error in the Docker container.
    3. Go to File > Add Connection.
    4. Hypervisor: QEMU.
    5. Select "connect to remote host over SSH".
    6. Hostname: the internal network IP of your server.
    7. Hit connect.
    8. It will then ask you to accept the certificate, then for your unRaid password. Once cleared, you're in.
    I'm now able to see what seems to be an expanded XML file and change that vgamem setting in the Video QXL area of the XML file.
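    For illustration, the Video QXL section being referred to looks roughly like the snippet below in a libvirt domain XML - the values shown are just examples, not recommendations, and vgamem is the setting mentioned above:
    <video>
      <model type='qxl' ram='65536' vram='65536' vgamem='16384' heads='1' primary='yes'/>
    </video>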
  44. 2 points
    Today, unRAID can use two methods to write data to the array. The first (and the default) way is to read both the parity and the data disks to see what is currently stored there. It then looks at the new data to be written and calculates what the parity data has to be, and then writes both the new parity data and the data. This method requires that you access the same sector twice: once for the read and once for the write. The second method is to spin up all of the data disks and read the data stored on all of the data disks except for the disk on which the data is to be stored. This information and the new data are used to calculate what the parity information is to be, and then the data and parity are written to the data and parity disks. This method turns out to be faster because of the latency of having to wait for the same read head to get into position twice with the default method versus only once with the second method. (The reader should understand that, for different disks, all disk operations essentially happen independently and in parallel.) For purposes of discussion, let's call this method turbo write and the default method normal write.
    It has been known for a long time that normal write speeds are approximately half of the read speeds, and some users have long felt that an increase in write speed was desirable and/or necessary. The first attempt to address this issue was the cache drive. The cache drive was a parity-unprotected drive that all writes were made to, with the data then transferred to the protected array at a later time. This was often done overnight or during some other period when usage of the array would be at a minimum. This addressed the write speed issue, but at the expense of another hard disk and the fact that the data was unprotected for some period of time.
    Somewhere along the way LimeTech made some changes and introduced the turbo write feature. It can be turned on with the following command:
    mdcmd set md_write_method 1
    and restored to the default (normal write) mode with this one:
    mdcmd set md_write_method 0
    One could activate turbo write by inserting the command into the 'go' file (which sometimes requires a sleep command to allow the array to start before its execution); a second alternative was to simply type the command on the CLI. Beginning with version 6.2, the ability to select which method is to be used was included in the GUI. (To find it, go to 'Settings' >> 'Disk Settings' and look at the "Tunable (md_write_method):" dropdown list. 'Auto' and 'read/modify/write' are the normal write (and default) mode; 'reconstruct write' is the turbo write mode.) This makes it quite easy to select and change which of the write methods is used.
    Now that we have some background on the situation, let's look at some of the more practical aspects of the two methods. I thought the first place to start was a comparison of the actual write speeds in a real-world environment. Since I have a Test Bed server (more complete specs in my signature) that is running 6.2 b19 with dual parity, I decided to use this server for my tests. If you look at the specs for this server, you will find that it has 6GB of RAM. This is considerably in excess of what unRAID requires, and the 64-bit version of unRAID uses all of the unused memory as a cache for writing to the array. What will happen is that unRAID will accept data from the source (i.e., your copy operation) as fast as you can transfer it.
    It will start the write process, and if the data is arriving faster than it can be written to the array, it will buffer it in the RAM cache until the RAM cache is filled. At that point, unRAID will throttle the data rate down to match the actual array write speed. (But the RAM cache will be kept filled with new data as soon as the older data is written.) When your copy is finished on your end and the transmission of data stops, you may think that the write is finished, but it really isn't until the RAM cache has been emptied and the data is safely stored on the data and parity disks. There are very few programs which can detect when this event has occurred, as most assume that the copy is finished when the last of the data has been handed off to the network. One program which will wait to report that the copy task is finished is ImgBurn. ImgBurn is a very old freeware program that was developed in the very early days when CD burners were first introduced, back when an 80386-16MHz processor was state of the art! (The early CD burners had no on-board buffer and would make a 'coffee cup coaster' if one made a mouse click anywhere on the screen!) The core of the CD-writing portion of the software was done in assembly language, and even today the entire program is only 2.6MB in size! It is fast and has an absolute minimum of overhead when it is doing its thing. As it runs, it builds a log file of the steps and collects much useful data.
    I decided to make the first test the generation of a BluRay ISO on the server from a BluRay rip folder on my Win7 computer. Oh, I also forgot: another complication when looking at the data is what is meant by the abbreviations K, M and G --- 1000 or 1024. I have decided to report mine as 1000, as it makes the calculations easier when I use actual file sizes. I picked a BluRay folder (movie only) that was 20.89GB in size. I spun down the drives before I started each write, so the times include the spin-up time of the drives in all cases. I should also point out that in all of the tests the data was written to an XFS-formatted disk. (I am not sure what effect using a reiserfs-formatted disk might have had.) Here are the results:
    Normal: Time 7:20, Ave 49.75MB/s, Max 122.01MB/s
    Turbo: Time 4:01, Ave 90.83MB/s, Max 124.14MB/s
    Wow, impressive gain. Looks like a no-brainer to use turbo write. But remember, this was a single file with one file table entry and one allocation of disk space - the best-case scenario. A test which might be more indicative of a typical transfer was needed, and what I decided to use was the 'MyDocuments' folder on my Win7 computer. Now, what to copy it with? I have TeraCopy on my computer, but I always had the feeling that it was really a shell (with a few bells and whistles) for the standard Windows Explorer copy routine, which probably uses the DOS copy command as its underpinnings. Plus, I was also aware that Windows Explorer doesn't provide any meaningful stats and, furthermore, it just terminates as soon as it passes off the last of the data. This means that I had to use a stopwatch technique to measure the time. Not ideal, but let's see what happens. First, the statistics of what we are going to copy:
    Total size of the transfer: 14,790,116,238 bytes
    Number of files: 5,858
    Number of folders: 808
    As you can probably see, this size of transfer will overflow the RAM cache and should give a feel for what effect the file creation overhead has on performance.
    So here are the results using the standard Windows Explorer copy routine and a stopwatch:
    Normal: Time 6:45, Ave 36.52MB/s
    Turbo: Time 5:30, Ave 44.81MB/s
    Not nearly as impressive. But first, let me point out that I don't know exactly when the data finished being written to the array; I only know when it finished the transfer to the network. Still, it is typical of what the user will see when doing a transfer. Second, I had no feel for how much overhead in Windows Explorer was contributing to the results. So I decided to try another test. I copied the ISO file (21,890,138,112 bytes) that I had made with ImgBurn back to the Windows PC, then used Windows Explorer to copy it back to the server using both modes. (Remember, the time recorded was when the Windows Explorer copy popup disappeared.) Here are the results:
    Normal: Time 6:37, Ave 55.14MB/s
    Turbo: Time 5:17, Ave 69.05MB/s
    After looking at these results, I decided to see how TeraCopy would perform in copying over the 'MyDocuments' folder. I turned off the 'verify-after-write' option in TeraCopy so I could just measure the copy time. (TeraCopy also provides a timer, which meant I didn't have to stopwatch the operation.)
    Normal: Time 6:08, Ave 40.19MB/s
    Turbo: Time 6:10, Ave 39.98MB/s
    This test confirmed what I had always felt about TeraCopy: it has a considerable amount of overhead in its operation, and this apparently reduced its effective data transfer rate below the write speed of even the normal write mode of unRAID! Looking at all of the results, I can say that turbo write is faster than normal write in many cases, but the results are not always as dramatic as one might hope. There are many more factors determining the actual transfer speed than just the raw disk writing speeds; there is software overhead on both ends of the transfer, and this overhead will impact the results.
    During these tests, I discovered a number of other things. First, the power usage of a modern HD is about 3.5W when spun up with no R/W activity and about 4W with R/W activity. (It appears that moving the R/W head does require some power!) It has been suggested that one reason not to use turbo write is that it results in an increase in energy use. Some have said that using a cache drive is justified over using turbo write for that reason alone. If you look at it from an economical standpoint, how many hours of writing activity would you have to have before the savings justify buying and installing a cache disk? I have the feeling that folks with a small number of data disks would be much better off with turbo write than installing a cache drive just to get higher write speeds. Folks using VMs and Dockers which store their configuration data could then opt for a small (and cheaper) SSD rather than a larger one with space for the cache operation. Thus folks with (say) eight or fewer drives would probably be better served by using turbo write than a large spinning cache drive. (And if an SSD could handle their caching needs, the energy saved by a cache drive over using turbo write would be virtually insignificant.) When you get to large arrays with (say) more than fifteen drives, then a spinning cache drive begins to make a bit of sense from an energy standpoint.
    A second observation is that the speed gain with turbo write is not as great with transfers involving large numbers of files and folders. The overhead required on the server to create the required directories and allocate file space has a dramatic impact on performance!
    The largest impact is on very large files, and even then this impact can be diminished by installing large amounts of RAM, because of unRAID's use of unused RAM to cache writes. I suspect many users might be better served by installing more RAM than by any other single action to achieve faster transfer speeds, given the price of RAM these days and the fact that a lot of new boards will allow installation of what was once an unthinkable quantity of RAM. With 64GB of RAM on board and a Gb network, you can save an entire 50GB BluRay ISO to your server and never run out of RAM cache during the transfer. (Actually, 32GB of RAM might be enough to achieve this.) That would give you a write speed above 110MB/s!
    As I have attempted to point out, you do have a number of options to get an increase in write speeds to your unRAID server. The quickest and cheapest is to simply enable the turbo write option (a sketch of the 'go' file approach mentioned earlier follows below). Beyond that are memory upgrades and a cache drive. You have to evaluate each one and decide which way you want to go. I have no doubt that I have missed something along the way and some of you will have other thoughts and ideas. Some may even wish to do some additional testing to give a bit more insight into the various possible solutions. Let's hear from you…
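    To make the 'go' file option mentioned above concrete, a hedged sketch (the delay is arbitrary - just long enough for the array to be started before the command runs):
    # appended to /boot/config/go - enable turbo write shortly after boot
    (sleep 120; mdcmd set md_write_method 1) &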
  45. 2 points
    Hello all, Sorry for the English in the German section(😜) : If you worked on the German translation files, please let me know which file(s) you worked on here and what name you would like listed in the translation credits. Thank you all! See here for how it will look:
  46. 2 points
    Totally understandable, and me neither (I also use many containers from Linuxserver.io). I mainly created this container because I needed some modules in the Kernel and also nVidia drivers together with DVB drivers in the images (so that I can pass the DVB streams from my DigitalDevices cards, which are managed by TVHeadend, to Emby and transcode them within Emby), and I decided to release the container to the public so that everyone who wants to update the drivers or needs ZFS support can build their own images.
    My personal pros: You can build your own images/Kernel with the preferred addons (nVidia drivers, DVB drivers, ZFS) and also with the latest driver versions (or simply download them from the first post in this thread). You can also customize the build process if you want to add or remove something (Custom Build mode in the template). You can also build images/Kernel for beta builds (description in the first post). By default everything is done automatically after the container is started and is finished after the container stops (please keep an eye on the logs for status information).
    My personal cons: I have no plugin (yet) to read the GPU UUID from the graphics card (this can be done from the terminal with the command 'nvidia-smi -L', without quotes - example output below); you can also use the Linuxserver.io plugin to get the UUID, but it won't show some information. You have to place the files onto your USB flash device yourself (don't forget to back up the existing files in case something goes wrong). The build process can take a long time, but with modern hardware not more than 15 minutes.
    It's up to you which way you choose. You can try both; you only have to replace the files bzroot, bzimage, bzmodules and bzfirmware and that's it. But the main difference is that you can build the images yourself with the latest drivers (I think Linuxserver.io only builds with the then-latest drivers when a major Unraid release is done). If you have any questions please feel free to contact me or write a short post in this thread.
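    As an example of what that command returns (illustrative output only - the model and UUID will obviously differ on your system):
    # nvidia-smi -L
    GPU 0: GeForce GTX 1060 6GB (UUID: GPU-1a2b3c4d-5e6f-7a8b-9c0d-1e2f3a4b5c6d)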
  47. 2 points
    @ich777 @Marshalleq It worked like a charm! So far absolutely no issues noticed; all running as it should. Will do some stress testing later on just to be 100% sure that it is stable enough that I don't have to worry about it any more. Thx a BUNCH!!!
  48. 2 points
    As Squid said, your diagnostics files help determine how your system is configured. When you say that file transfers are slow on unRAID, are you talking about writing files to the unRAID array? On a system protected by one or two parity drives, normal write speeds are in the 40-60 MB/s range. To reach higher speeds you need to use Turbo Write. For a good description of Turbo Write, click here. 10 MB/s is very slow. Check the BIOS on your HP MicroServer. On many MicroServers, "write caching" is disabled by default, and this results in slow write speeds. Enable "write caching" in the BIOS if it is disabled. If that is not the problem, we need more information about your system configuration and the nature of the problem.
  49. 2 points
    Can I be the first to say, wow! I can see how this will be VERY useful for people wanting to pass through hardware to containers, and being able to build out an image is impressive work indeed! And of course it takes the load off LSIO to produce the custom image every time unraid bumps the version. A real game changer!
  50. 2 points
    Would just like to add my 2 bits here: there are two ways you can configure dockers and IPv6.
    The first and foremost way is to configure the Docker network with an IPv6 pool - either the default one assigned to the interface, or one statically assigned to it. This makes the whole assignment of IPv6 by Docker work like IPv4 - sequentially, in the order the containers start up. Docker will keep track of the IPv6 addresses assigned to the containers.
    The second way, which I am using, is to simply disable IPv6 at the Unraid network configuration level. Then all my containers have the "--sysctl net.ipv6.conf.all.disable_ipv6=0 --sysctl net.ipv6.conf.eth0.use_tempaddr=2" extra parameters set. This makes Docker not bother at all with IPv6 assignments: it does not touch the container routing tables, and does not add extra DNS settings on top of the IPv4 settings. What happens then is that the network stack in the container will attempt to configure IPv6 via SLAAC and discover the router via router advertisements over the wire, assigning a privacy address if desired. This mechanism works really well on my network, which runs a Mikrotik router (which does not have DHCPv6, so everything is SLAAC using the dynamic /56 provided by my ISP). The whole thing works very well, particularly since the dynamic /56 has no guarantee of being the same across reboots, and with the absence of DHCPv6 my Unraid docker networks would not be configurable anyway. Also, I wanted to restrict which network interface and IP Unraid was able to use to host the SMB/SSH/HTTP services.
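    For anyone wanting to try the second approach outside of the Unraid template, a hedged example of how those extra parameters sit on a full docker run (container and image names are placeholders):
    docker run -d --name my-container \
      --sysctl net.ipv6.conf.all.disable_ipv6=0 \
      --sysctl net.ipv6.conf.eth0.use_tempaddr=2 \
      some/image:latest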