Leaderboard

Popular Content

Showing content with the highest reputation since 02/28/21 in Posts

  1. Refer to Summary of New Features for an overview of changes since version 6.8. To upgrade: first create a backup of your USB flash boot device via Main/Flash/Flash Backup (a command-line alternative is sketched below). If you are running any 6.4 or later release, click 'Check for Updates' on the Tools/Update OS page. If you are running a pre-6.4 release, click 'Check for Updates' on the Plugins page. If the above doesn't work, navigate to Plugins/Install Plugin, select/copy/paste this plugin URL and click Install: https://s3.amazonaws.com/dnld.lime-technology.com/stable/unRAIDServer.plg Bug
    36 points
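     A minimal command-line sketch of the flash backup step in item 1, for anyone who prefers the terminal; the destination path is only an example, and the Main/Flash/Flash Backup button remains the supported route:
       # copy the USB flash boot device to a dated folder on the array (example path)
       mkdir -p /mnt/user/backups
       cp -r /boot /mnt/user/backups/flash-$(date +%Y%m%d)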
  2. Successfully upgraded 3 (encrypted) systems from 6.8.3 to 6.9.0. One of the 6.8.3 systems was running the nvidia plugin; upgrade procedure (a command-line sketch follows below): 0. Stop docker containers from auto-starting. 1. Download and upgrade to 6.9.0 without rebooting. 2. Go to Plugins, select the old nvidia plugin and remove it. 3. Reboot. 4. Install the new nvidia plugin: https://raw.githubusercontent.com/ich777/unraid-nvidia-driver/master/nvidia-driver.plg 5. WAIT FOR PROPER INSTALL TO FINISH - IT TAKES TIME. 6. Stop and start the Docker service. 7. Enjoy. NO
    11 points
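     A rough command-line sketch of steps 2 and 4 in item 2, assuming Unraid's plugin helper script is available; the old plugin's file name is a guess, and the GUI route described above works just as well:
       # remove the old Nvidia plugin (file name may differ on your system), then install the new one
       plugin remove Unraid-Nvidia.plg
       plugin install https://raw.githubusercontent.com/ich777/unraid-nvidia-driver/master/nvidia-driver.plg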
  3. A small explanation of some of the UI changes that may throw people for a loop. Dashboard: Dragging to rearrange the tiles can now only be done from each tile's title bar Docker: Dragging to rearrange the start order can now only be done via the up/down arrow on the right of each row Diagnostics: The occasional timeouts when grabbing diagnostics (especially on wide arrays with the drives spun down) should now be impossible. A dialog box pops up with the current command being run. This is helpful if the command happens to stop responding. All
    7 points
  4. I'll wait for 6.9.1. There's always some unforeseen bugs that crop up on hardware and I've learned there's no reason to rush these things. Thanks for all the hard work in finally bringing this release out to the masses.
    7 points
  5. @limetech, thank you for formally recognizing the developers who contributed and worked on the nvidia driver ( https://wiki.unraid.net/Unraid_OS_6.9.0#Nvidia_Driver )
    6 points
  6. @Legionofone I have now implemented an automatic update check for Valheim, so it will check every 60 minutes whether updates are available. VALHEIM AUTOMATIC UPDATE: By default the automatic update is set to 'true'; even if you don't have the value in your Unraid template, it will check every 60 minutes whether an update is available. Please update the container to get this feature. To get this variable you have two options: the first would be to grab a fresh copy from the CA App and enter the exact same values as in your existing template, or you crea
    5 points
  7. As always, you guys are fast at keeping us on the latest and greatest. I'm just surprised you didn't wait until the one-year anniversary of the 6.9 beta announcement to give us the final version
    5 points
  8. Upgraded one server from 6.8 to 6.9 without issue, and I'm happy to upgrade my second server as well, since it has been running the 6.9 betas/RCs with very few issues. I'm happy to say that 6.9 has been well tested :)
    5 points
  9. This is some fantastic documentation!!! I didn't realize it'd all been changed from the old format lol - it'd been on my list of 'things to do to be helpful' (getting the wiki at least commented with updates, if not fully worked over of course), but y'all did a stellar job. Thanks dudes!
    5 points
  10. Thx for the new versions. I will wait 2 or 3 days and then I will upgrade.
    5 points
  11. I imagine this may be an issue for others, so I'm posting my experience here (a possible workaround is sketched below). After the upgrade from 6.8.3 (all smooth), I removed the following from my go file:
      modprobe i915
      chmod -R 777 /dev/dri
      I then created the blacklist override as per the release notes and rebooted:
      touch /boot/config/modprobe.d/i915.conf
      This correctly loads the i915 module:
      root@Tower:~# ls /dev/dri
      by-path/  card0  renderD128
      However the permissions seem to be an issue:
      root@Tower:~# ls -la /dev/dri
      total 0
      drwxr-xr-x 3 root root 100 Mar 3 11:57 ./
      drwxr-xr-x
    4 points
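     If /dev/dri stays root-only after the module loads, one stopgap is simply re-adding the chmod the poster had removed from the go file; this is a workaround, not an official fix:
       # /boot/config/go (excerpt) - loosen /dev/dri permissions so containers can use the iGPU
       chmod -R 777 /dev/dri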
  12. I had this issue when I upgraded from 6.8 to 6.8 RC2. Clearing the web browser cache (i.e. Edge) cured it for me.
    4 points
  13. I would disable Docker and VMs, update all plugins, uninstall the VFIO plugin, then make a backup of the stick and the SSD, and only then install the update. That ensures that no processes are running during the update that still have data sitting in RAM, for example. That can also be the case with Docker; for example, someone had the problem that after the update no Docker containers were there anymore. As we know, that always happens when the docker.img gets corrupted. Of course it's not a real problem, since all containers can be restored via Apps -> Previous Apps
    4 points
  14. Well, you might still be crazy. This just isn't proof of it.
    3 points
  15. Arguably the smoothest major upgrade for me ever; two servers updated to 6.9.0 without any issues. In fact, this is probably the first time that I don't even have a single warning message in the system log after boot. Thanks to the Unraid team, looking forward to testing all the new features.
    3 points
  16. Windows 10 VM updated on 02.03.21, with a clean install on 01.03.21. Installed on a raw vdisk, ran 3 tests in CrystalDiskMark, then converted to qcow2 (conversion sketched below) and ran the same 3 tests again. Results: qcow2: RAW:
    3 points
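     The raw-to-qcow2 conversion mentioned in item 16, sketched with qemu-img; the vdisk file names are placeholders:
       # convert a raw vdisk to qcow2 (and back if needed); -p shows progress
       qemu-img convert -p -f raw -O qcow2 vdisk1.img vdisk1.qcow2
       qemu-img convert -p -f qcow2 -O raw vdisk1.qcow2 vdisk1.img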
  17. We are aware. Let us know what you find after you upgrade and we’ll get together to get this sorted. Cc: @limetech
    3 points
  18. At the risk of jinxing it, I want to report that I am no longer getting the erroneous notifications about disk utilization. My guess is that the changes to "Storage Threshold Warnings" reset something under the hood that cleared whatever was causing this. I am a happy camper.
    3 points
  19. For anyone else that may encounter the same issue, I was able to resolve my issue on 6.9 by removing the nomodeset commands in my /boot/syslinux/syslinux.cfg file, like so:
      root@Morgoth:~# cat /boot/syslinux/syslinux.cfg
      default menu.c32
      menu title Lime Technology, Inc.
      prompt 0
      timeout 50
      label unRAID OS
        menu default
        kernel /bzimage
        append initrd=/bzroot
      label unRAID OS GUI Mode
        kernel /bzimage
        append initrd=/bzroot,/bzroot-gui
      label unRAID OS Safe Mode (no plugins, no GUI)
        kernel /bzimage
        append initrd=/bzroot unraidsafemode
      label unRAID OS GUI Safe Mode (no plugins)
    3 points
  20. In light of the official release of 6.9.0 I have removed the max version restriction. I am not able to confirm how well it works this week because I am moving across several time-zones. I am also still working on a new version. As a quick heads up, the new version will be dropping support for pigz, but gzip will still work for anyone using legacy compression (this change should not be breaking or require any user input). I also fixed a few bugs, and am adding a few new features.
    3 points
  21. Correct, you'll instead un-blacklist the driver by creating a file to override the driver blacklist: touch /boot/config/modprobe.d/i915.conf
    3 points
  22. Updated from 6.9.0 rc2 to 6.9.0 stable and everything went smoothly as expected. Thanks for all the hard work by the unRAID team, to @ich777 for his amazing plugins and support, and to the @linuxserver.io developers for making almost any software you can imagine into a container and maintaining them.
    3 points
  23. @limetech & @SpencerJ - don't laugh about the fact that I trust in unRAID, but Murphy's law knows me best
    3 points
  24. Out of nowhere my container is messing up. I've had it running with no problem for months and now I am getting a bunch of errors.
      [01.03.2021 08:56:39] WebUI started.
      [01.03.2021 08:56:42] _cloudflare: Plugin will not work. rTorrent user can't access external program (python).
      [01.03.2021 08:56:42] _task: Plugin will not work. rTorrent user can't access external program (php).
      [01.03.2021 08:56:42] autotools: Plugin will not work. rTorrent user can't access external program (php).
      [01.03.2021 08:56:42] create: Plugin will not work. rTorrent user can't access external program (php).
    3 points
  25. Driver / Device specific steps: The following is the most generic option and should work for most UnRAID deployments that contain SR-IOV-capable NICs, going back to around 6.4, but I would recommend no lower than 6.8.2 if you're working with any device using the i40e driver (save yourself the pain and upgrade!). Open your terminal and edit the go file: nano /boot/config/go. Add the following line to the bottom, specifying the number of VFs to create for this interface, replacing my ID (17.:0.1) with your own - I chose 4 per interface (see the sketch below): echo 4 > /sys/b
    3 points
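     A sketch of the go-file line item 25 is building up to; the PCI address and VF count here are placeholders and must match your own NIC:
       # /boot/config/go (excerpt) - create 4 virtual functions on the chosen interface (example address)
       echo 4 > /sys/bus/pci/devices/0000:17:00.1/sriov_numvfs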
  26. OK, that's fine, so you are using privoxy in this case and NOT network binding multiple containers. So, good news: at long last I am able to replicate one of the issues here. If I set sonarr to point at privoxy running in delugevpn, then add deluge as a download client in sonarr and click on the test button, it fails. The reason it fails is that the proxy settings in sonarr are by default set to route everything via the proxy; this causes a problem when attempting to connect to deluge, as the connection will go as follows:- sonarr container outbound to pri
    3 points
  27. In my case, yes and no. I found two problems: A - qcow2 disk full (no relation to Unraid 6.9.0); B - some craziness with XML files. Problem A: disk full! How I fixed it (a quick capacity check is sketched below):
      1 - Back up the qcow2 disk:
      cd /mnt/user/domains/Hassos/
      cp hassos_ova-5.8.qcow2 hassos_ova-5.8.bkp
      2 - Then I increased it by 1 GB using the command:
      qemu-img resize hassos_ova-5.8.qcow2 +1G
      Restarted the VM. It did not start. It stayed on a black screen with a dot. I realized it was just some VNC problem there. I
    2 points
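     A quick way to check how full a qcow2 image is before resizing it as in item 27 (virtual size vs. on-disk size); the file name is taken from the post:
       # show virtual size vs. actual on-disk size of the image
       qemu-img info /mnt/user/domains/Hassos/hassos_ova-5.8.qcow2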
  28. I found it buried in the backend! Annoying that it was disabled/hidden. I have re-enabled.
    2 points
  29. Latest release of OVMF_CODE.fd and OVMF_VARS.fd from edk2, version 202102, published today. Compiled on Kali Linux with GCC, tested and working (build commands sketched below). Enjoy edk2-OVMF-202102-Stable-RELEASE-GCC5-Kali.zip
    2 points
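     For reference, the usual edk2 build sequence that produces OVMF_CODE.fd/OVMF_VARS.fd with the GCC5 toolchain looks roughly like this; a sketch, not the poster's exact commands:
       # build a RELEASE OVMF from edk2 with the GCC5 toolchain profile
       git clone https://github.com/tianocore/edk2.git && cd edk2
       git submodule update --init
       make -C BaseTools
       source edksetup.sh
       build -a X64 -b RELEASE -t GCC5 -p OvmfPkg/OvmfPkgX64.dsc
       # the resulting OVMF_CODE.fd and OVMF_VARS.fd land under Build/OvmfX64/RELEASE_GCC5/FV/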
  30. Information: I'm creating this tutorial in French, based on UNRAID; I have tried to keep it as simple and explicit as possible for beginners! The goal of this tutorial is to show you that you don't need a big server configuration to run a dedicated PLEX server with several users. (I won't cover the installation of PLEX.) Indeed, INTEL processors have for a few years now been equipped with a GPU chip that provides display output without needing a graphics card. If I'm not mistaken, since the 8th generation of p
    2 points
  31. Just to chip in, I have the same experience - once spun up, the disk will not spin down again. I have both the telegram-influxdb-grafana and autofan setups. I tried disabling the three dockers (telegram-influxdb-grafana); same issue. Then I disabled autofan, and the disks spun down as they should after 15 minutes. Then I tried to start the 3 dockers again, and the problem was back. As has been pointed out, I also have the SMART readings when the disk is not spinning down. I guess this is an OS error, since the problem was not present on 6.8.3 but appeared when I upgraded to 6.9.0rc2 and 6.9.0 and the
    2 points
  32. This is a known problem with Valheim if more people are playing on the server; there is a workaround out there but I don't know exactly where... Please leave the developers of Valheim a short message about it on their Discord/Forums/Steam Community Forums; I think they will fix this soon.
    2 points
  33. It's far from perfect, but here's a 'quick n dirty' one I did for you. I might get around to fixing it at some point.
    2 points
  34. Thanks everyone for reporting this issue. @SpencerJ, are you and the team aware of this? The UUD has only been tested/released on 6.8.3 stable. I have yet to upgrade myself. I'll release 1.6 first, then look into upgrading my 2 servers soon thereafter.
    2 points
  35. Upgraded my main production and backup Unraid servers (using the kernel build docker from ich777's master to include the AMD GPU reset bug patch and ZFS), and besides the spindown issue that still did not go away with the upgrade from rc2 to final, all went smooth. After breaking my head over why, even with all dockers, VMs and shares down, it still would not spin down my array drives, I finally found it was a custom script of my own running in the User Scripts plugin that ran smartctl every 5 minutes to collect drive standby statistics for my Splunk dashboard (a standby-aware invocation is sketched below). Once I disabled that, all was
    2 points
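     If a polling script like the one in item 35 is still wanted, smartctl can be told not to wake sleeping drives; the device name is just an example:
       # -n standby: exit without touching the drive if it is spun down, so polling doesn't wake it
       smartctl -n standby -A /dev/sdb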
  36. Hello everyone, one more newcomer on the forum. I discovered Unraid quite a while ago through Linus; I believe it was his project with 4 gaming VMs on Unraid. But I had never used it. I was on a not-very-official Synology solution (oops, not good!) with a custom build. I have since switched to Unraid, mainly for NAS use (cloud, Plex, basic web server). I'm off to create a new topic because I'm stuck on some Unraid <-> Nginx Proxy Manager <-> Plex configuration.
    2 points
  37. Simply delete the old plugin, go to Tools -> Upgrade OS and update to 6.9.0 stable, reboot, go to the CA App and install the Nvidia-Driver Plugin (then you have two options: one is to disable and re-enable the Docker service, option 2 is to reboot, which is what I recommend).
    2 points
  38. Honestly, I think your plug-in looks great! The thing I found confusing is what needs to be built in/added to get it working (because this thread goes back to a time before it was in the kernel). It was really dead simple: install 6.9, install the plug-in, configure, done. That said, I used to admin a large (NetApp-based) iSCSI environment, so I'm pretty familiar with how all the pieces fit together: target/initiator, LUN, mapping, etc. It certainly would have been harder if I didn't have that experience. I'm running the iSCSI LUN to VMware, and, when I tried this wit
    2 points
  39. I just wanted to say that this is the best feature-set of a major release I've seen in a long time. You guys managed to pack in a lot of community requested items and we love to see it. Great work!
    2 points
  40. Update worked just fine, thanks guys.
    2 points
  41. Went fine for me too. Thanks all for a great system.
    2 points
  42. Thank you all so much for the great work! I will be upgrading first thing in the morning.
    2 points
  43. Thanks for letting us know! I've got the link updated in the wiki now.
    2 points
  44. While searching for a fix for excessive power consumption, I found that after a BIOS update to version 1.25.0 the problem was gone.
    2 points
  45. I've got the right stuff, where do I start? Alright, on to the config/setup - I'm trying to make this as generic as possible to cover as many possibilities as possible at once, as the implementation of virtual functions and their utilization depends on a combination of both the driver AND the hardware. Let's first gather some information so we know what drivers we're using before we move forward with creating our VFs, and get the script set up so we only have to reboot the one time here (a binding sketch follows below): The first thing we need is a script to bind our VFs once they're created by the
    2 points
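     One common way such a script binds a freshly created VF to vfio-pci is via driver_override; a rough sketch only, with a placeholder PCI address:
       # bind one virtual function (example address) to vfio-pci for passthrough
       modprobe vfio-pci
       echo vfio-pci > /sys/bus/pci/devices/0000:17:02.0/driver_override
       echo 0000:17:02.0 > /sys/bus/pci/drivers_probe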
  46. Just pushed a release that should now correctly track whether a parity check is scheduled or manual (or an automatic parity check after an unclean shutdown) and correctly obey the related pause/resume settings. At this point I "think" all outstanding issues have been resolved. If any unexpected behaviour is encountered then please let me know. As always open to suggestions for improvement.
    2 points
  47. @ich777: Wishing you all the best on your birthday; spend a wonderful day with your loved ones. Have a great day and thank you for all that you do. 😄 -Spencer
    2 points
  48. As mentioned above, the latest tag now installs InfluxDB 2.0, and a docker update automatically installed this for me. It was causing Grafana to return authorization errors, so it's clearly not backward compatible. Changing the tag within the Repository field to 1.8 solved it for me (see the sketch below). Has anyone migrated from 1.x to 2.0 yet? Any issues?
    2 points
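     Pinning the tag as described in item 48 is the same idea whether done in the Unraid template's Repository field or from the command line:
       # stay on the InfluxDB 1.x line until the move to 2.0 is planned
       docker pull influxdb:1.8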