Leaderboard

Popular Content

Showing content with the highest reputation since 02/21/17 in Report Comments

  1. It appears that the docker images --digests --no-trunc command is showing, for whatever reason, the digest of the manifest list rather than the manifest itself for containers pushed as part of a manifest list (https://docs.docker.com/engine/reference/commandline/manifest/#create-and-push-a-manifest-list). I'm not sure if that's always been the case, or if it's the result of some recent change to the Docker Hub API. I'm also not sure if it's intentional or a bug. This causes an issue since in DockerClient.php (/usr/local/emhttp/plugins/dynamix.docker.manager/include), the request made to get
    21 points
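A quick way to see the discrepancy described in the post above is to compare the digest Docker reports locally with the per-platform digests inside the manifest list. This is only a sketch: the image name is an example, and `docker manifest inspect` may require enabling experimental CLI features on older Docker versions.

```shell
# Pull a multi-arch image (its tag points at a manifest list)
docker pull ubuntu:20.04

# Print the digest Docker associates with the local image -- per the
# report above, this is the manifest *list* digest for multi-arch images
docker images --digests --no-trunc ubuntu

# Inspect the manifest list itself; its "manifests" array holds the
# per-platform manifest digests
docker manifest inspect ubuntu:20.04
```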
  2. I've been around a little while. I always follow the boards even though I have very little free time to give to being active in the community anymore. I felt the need to post to say I can completely appreciate how the guys at @linuxserver.io feel. I was lucky enough to be a part of the @linuxserver.io team for a short while and I can personally attest to how much personal time and effort they put into development, stress testing and supporting their developments. While @limetech has developed a great base product, I think it's right to acknowledge that much of the
    19 points
  3. I do not use any of these unofficial builds, nor do I know what they are about or what features they provide that are not included in stock Unraid. That being said, I still feel that the devs who release them have a point. I think the main issue is these statements by @limetech: "Finally, we want to discourage "unofficial" builds of the bz* files." which are corroborated by the account of the 2019 PM exchange: "concern regarding the 'proliferation of non-stock Unraid kernels, in particular people reporting bugs against non-stock builds.'" Yes, technically it's t
    16 points
  4. I know this likely won't matter to anyone but I've been using unraid for just over ten years now and I'm very sad to see how the nvidia driver situation has been handled. While I am very glad that custom builds are no longer needed to add the nvidia driver, I am very disappointed in the apparent lack of communication and appreciation from Limetech to the community members that have provided us with a solution for all the time Limetech would not. If this kind of corporate-esque "we don't care" attitude is going to be adopted then that removes an important differentiating factor betwee
    13 points
  5. The corruption occurred as a result of failing a read-ahead I/O operation with "BLK_STS_IOERR" status. In the Linux block layer each READ or WRITE can have various modifier bits set. In the case of a read-ahead you get READ|REQ_RAHEAD, which tells the I/O driver this is a read-ahead. In this case, if there are insufficient resources at the time the request is received, the driver is permitted to terminate the operation with BLK_STS_IOERR status. Here is an example in the Linux md/raid5 driver. In the case of Unraid it can definitely happen under heavy load that a read-ahead come
    13 points
  6. Just a little feedback on upgrading from Unraid Nvidia beta30 to beta35 with the Nvidia drivers plugin. The process was smooth and I see no stability or performance issues after 48h, following these steps:
     - Disable auto-start on "nvidia aware" containers (Plex and F@H for me)
     - Stop all containers
     - Disable the Docker engine
     - Stop all VMs (none of them had a GPU passthrough)
     - Disable the VM Manager
     - Remove the Unraid-Nvidia plugin
     - Upgrade to 6.9.0-beta35 with Tools > Update OS
     - Reboot
     - Install the Nvidia Drivers plugin from CA (be patient and wait
    12 points
  7. Everyone, please take a look here: I ask that any further discussion on this take place in that topic.
    12 points
  8. As a user of Unraid I am very scared by the current trends. Unraid as a base is a very good server operating system, but what makes it special are the community applications. I would be very sad if this broke apart because of possibly wrong or easily misunderstood communication. I hope that everyone will come together again. You would make all of us users very happy. I have 33 docker containers and 8 VMs running on my system and I hope that my system will continue to be as usable as before. I have many containers from linuxserver.io. I am grateful for th
    11 points
  9. Thanks for the fix @bluemonster ! Here is a bash file that will automatically implement the fix in 6.7.2 (and probably earlier, although I'm not sure how much earlier): https://gist.github.com/ljm42/74800562e59639f0fe1b8d9c317e07ab It is meant to be run using the User Scripts plugin, although that isn't required. Note that you need to re-run the script after every reboot. Remember to uninstall the script after you upgrade to Unraid 6.8 More details in the script comments.
    11 points
  10. You know you could have discussed this with me right? Remember me, the original dev, along with @bass_rock! The one you tasked @jonp to discuss how we achieved the Nvidia builds back in December last year? That I never heard anything more about. I'm the one that spent 6 months f**king around with Unraid to get the GPU drivers working in docker containers. The one that's been hosting literally 100s of custom Unraid builds for the community to use for nearly five years. With all due respect to @ich777, he wasn't the one who did the bulk of the work here.
    10 points
  11. For Unraid version 6.10 I have replaced the Docker macvlan driver with the Docker ipvlan driver. IPvlan is a new twist on the tried and true network virtualization technique. The Linux implementations are extremely lightweight because, rather than using the traditional Linux bridge for isolation, they are associated with a Linux Ethernet interface or sub-interface to enforce separation between networks and connectivity to the physical network. The end user doesn't have to do anything special. At startup, legacy networks are automatically removed and replaced by the new netwo
    9 points
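Outside Unraid's automatic setup, an ipvlan network like the one described above can also be created by hand with the Docker CLI. A sketch only: the subnet, gateway, parent interface, and network name are placeholders, not values from the post.

```shell
# Create an ipvlan network bound to a physical interface. Containers on
# it get addresses in the parent interface's subnet, with no Linux
# bridge in the data path.
docker network create -d ipvlan \
  --subnet=192.168.1.0/24 \
  --gateway=192.168.1.1 \
  -o parent=eth0 \
  my_ipvlan

# Attach a container with a static address and show its interfaces
docker run --rm --network my_ipvlan --ip 192.168.1.50 alpine ip addr
```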
  12. The next release of 6.9 will be on the Linux 5.9 kernel; hopefully that will be the last step before we can go 'stable' (because the Linux 5.8 kernel was recently marked EOL).
    9 points
  13. Before anyone else beats me to it: Soon™
    9 points
  14. This sums up my stance too. I can understand LimeTech's view as to why they didn't feel the need to communicate this to the other parties involved (as they never officially asked them to develop the solution they'd developed and put in place). But on the flip side, the appeal of Unraid is the community spirit and the drive to implement things which make the platform more useful. It wouldn't have taken a lot to give certain members of the community a heads-up that this was coming, and to give credit where credit is due in the release notes. Something along the lines of: "After seeing the ap
    9 points
  15. Just wanted to share a quick success story. Previously (and for the past few releases) I was using @ich777's Kernel Docker container to compile with the latest Nvidia driver. Excited to see this brought in natively now; it worked out of the box for me. I use the regular Plex docker container for HW transcoding (adding --runtime=nvidia in the extra parameters and properly setting the two container variables NVIDIA_VISIBLE_DEVICES and NVIDIA_DRIVER_CAPABILITIES). To prepare for this upgrade, while still on beta 30:
      - disabled docker service
      - upgraded to beta 35
    9 points
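The container settings mentioned in the post above translate to roughly the following `docker run` flags. A sketch, not the poster's exact configuration: the image name and the capability list are examples.

```shell
# Expose the Nvidia runtime and GPU to a Plex container for HW transcoding.
# NVIDIA_VISIBLE_DEVICES can be "all" or a specific GPU UUID from nvidia-smi.
docker run -d --name plex \
  --runtime=nvidia \
  -e NVIDIA_VISIBLE_DEVICES=all \
  -e NVIDIA_DRIVER_CAPABILITIES=compute,video,utility \
  plexinc/pms-docker
```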
  16. Normal humans often can't see all the ways messaging can be interpreted (this is why we have comms people). The most offensive-sounding things are often not intended in that fashion at all. Written text makes communication harder because there are no facial or audio cues to support the language. I expected our community developers (given that they've clearly communicated in text behind screens for many years) would understand that things aren't always intended as they sound. In this regard, I support @limetech wholeheartedly. Nevertheless, the only
    9 points
  17. @limetech Not wanting to dogpile on you, but you would do well to go above and beyond in rectifying the situation with any community developers that have issues. The community plugins and features that supply so much usability and functionality lacking in the base product are what actually make Unraid worth paying for. If you start losing community support, you will start to lose it all, and with that I am sure I and others will no longer recommend people purchase and use your software. @CHBMB @aptalca @linuxserver.io Maybe a cool-down period
    9 points
  18. Install the referenced plugin. It will install the Nvidia driver and the tools needed for transcoding in Docker containers. If you don't care about this, there's no need to deal with it. The plugin is more of a "work in progress" right now and will mature over time, including being added to Community Apps. We didn't include the Nvidia vendor driver built into the release for several reasons: The package is very large, over 200MB, and there's no need for everyone to download it if they don't need it. Including the driver "taints" the Linux kernel. There is something of a longst
    9 points
  19. Great news. Crazy how they constantly focus on features benefiting more than 1% of the maximum userbase, right. 🤪
    9 points
  20. https://github.com/electrified/asus-wmi-sensors <- this would be nice to have in 6.9 for ASUS users. It directly reads the WMI interface that ASUS has moved to and displays all sensors properly (supposedly).
    9 points
  21. Preparing another release. Mondays are always very busy but will try to get it out asap.
    9 points
  22. No. If you have the latest version, you are good to go.
    8 points
  23. I've been saddened and disheartened today to see what was supposed to be a momentous occasion for us in releasing a substantial improvement to our OS turned into something else. I have nothing but respect for all of our community developers (@CHBMB included) and the effort they put into supporting extended functionality for Unraid OS. It's sad to see something we intended to improve the quality of the product for everyone being viewed as disrespectful to the very people we were trying to help. It honestly feels like a slap in the face to hear that some folks believe we wer
    8 points
  24. @aptalca You also know that I had to test things and do much trial and error, but I also got a lot of help from klueska on GitHub. One side note about that: I tried asking for and getting help from you because I knew nothing about kernel building and all this stuff, but no one ever answered or did anything to help me. I completely understand that @CHBMB works in healthcare and doesn't have that much time in times like this... @aptalca, @CHBMB & @bass_rock Something about my Unraid-Kernel-Helper container: I know the initial plugin came from you (in terms of
    8 points
  25. The instant we do this, a lot of people using GPU passthrough to VMs may find their VMs don't start or run erratically until they go and mark them for stubbing in Tools/System Devices. There are a lot of changes already in 6.9 vs. 6.8, including multiple pools (and changes to the System Devices page), so our strategy is to move the community to 6.9 first, give people a chance to use the new stubbing feature, then produce a 6.10 where all the GPU drivers are included.
    8 points
  26. To expand on my quoted text in the OP, this beta version brings more improvements to using a folder for the docker system instead of an image. The notable difference is that the GUI now supports setting a folder directly. The key to using this, however, is that while you can choose the appropriate share via the GUI's dropdown browser, you must enter a unique (and non-existent) subfolder for the system to realize you want to create a folder instead of an image (and include a trailing slash). If you simply pick an already existing folder, the system will automatically assume that you want to create
    8 points
  27. There is a Samba security release being published tomorrow. We will update that component and then publish 6.8 stable release.
    8 points
  28. Why are you running something so critical to yourself on pre-release software? Seems a little reckless to me...
    8 points
  29. The issue is that writes originating from a fast source (as opposed to a slow source such as a 1Gbit network) completely consume all available internal "stripe buffers" used by md/unraid to implement array writes. When reads come in, they are starved of the resources needed to execute efficiently. The method used to limit the number of I/Os directed to a specific md device in the Linux kernel no longer works in newer kernels, hence I've had to implement a different throttling mechanism. Changes to the md/unraid driver require exhaustive testing. All functional tests pass; however, driver bugs ar
    8 points
  30. Yes Tom, that's how I viewed it when we developed the Nvidia stuff originally, it was improving the product. The thread has been running since February 2019, it's a big niche. 99 pages and 2468 posts.
    7 points
  31. You have just described how almost all software functions 🤣
    7 points
  32. I made a video on what the new 6.7.0 looks like, etc. There is a mistake in this video where I talk about the webgui's "VM manager: remove and rebuild USB controllers" item: what I thought was new turned out to be available in 6.6.6 as well, so it wasn't! The video also shows how to upgrade to the RC for those who don't know, and how to downgrade again afterwards if need be. https://www.youtube.com/watch?v=qRD1qVqcyB8&feature=youtu.be
    7 points
  33. This is from my design file, so it differs a little bit from the implemented solution, but it should give you a general feel for how it looks. (The header/menu and bottom status bar are unchanged aside from icons.) Edit: So I should remember to refresh before posting. Anyway, you might not have seen it yet, but the logo on the "server description" tile can be edited, and we have included a selection of popular cases to start you off!
    7 points
  34. The issue appears to be that if you had any running VM that used VNC, starting it would mess up the displays on the dashboard. Either way, this issue is fixed in the next rev.
    6 points
  35. This is sad, very sad. unRAID is a unique product, but it's the community that really makes it shine. As an ordinary user, what really makes me feel safe is not that I'm running a perfect piece of software (it's not; no software ever will be), but having a reliable community that always has my back when I'm in trouble and is constantly making things better. I'm not in a place to judge, but I do see some utterly poor communication. This could have been a happy day, yet we are seeing the beginning of a crack. Guess who gets hurt? LOYAL USERS! Guess w
    6 points
  36. It should be noted that right now, the only SAS chipset definitely affected is the SAS2116, i.e. the 9201-16e. My controller running the SAS2008 (Dell H200 cross-flashed) is completely unaffected.
    6 points
  37. Added several options for dealing with this issue in 6.9.0-beta24.
    6 points
  38. Also could use 'space_cache=v2'. The upcoming -beta23 has these changes to address this issue:
      - set the 'noatime' option when mounting loopback file systems
      - include the 'space_cache=v2' option when mounting btrfs file systems
      - default partition 1 start sector aligned on a 1MiB boundary for non-rotational storage. This will require wiping the partition structure on existing SSD devices first to make use of it.
    6 points
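The first two mount options listed above can also be tried by hand before the beta lands. A sketch, assuming a btrfs cache device mounted at /mnt/cache; the device name is a placeholder for your own.

```shell
# Mount a btrfs filesystem with the options the beta applies:
# noatime avoids a metadata write on every file read,
# space_cache=v2 uses the newer free-space tree
mount -o noatime,space_cache=v2 /dev/sdX1 /mnt/cache

# Verify the active mount options
mount | grep /mnt/cache
```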
  39. Correct. A pool is simply a collection of storage devices. If you assign multiple devices to a pool it will be formatted using the btrfs 'raid1' profile, which means there is protection against any single device failure in the pool. In future we plan on letting you create multiple "unRAID" pools, where each pool would have its own parity/parity2 and collection of storage volumes - but that feature did not make this release. edit: fixed typos
    6 points
  40. Maybe sooner if you can bribe him with small fish, crabs, and shrimp.
    6 points
  41. Just an FYI for this forum: It has been 16 days since I have had any corruption with Plex. That only happened with 6.6.7. With anything newer, it would corrupt in less than a day. 6.8.0-rc4 and rc5 have been rock stable with the changes that were made. I'm glad that I stuck with the testing, and was able to work so close with the Unraid team. 🙂
    6 points
  42. We're actively working on this for 6.9
    6 points
  43. Today's update to CA Auto Update will automatically apply the fix for this issue on affected systems (whether or not the plugin is even enabled). You will, though, have to check for updates manually once to clear out the old update-available status. If you are running @ljm42's patch script, you can safely remove it, as Auto Update will not install the patch once 6.8+ is released.
    6 points
  44. It really is time for @limetech to incorporate the UD functionality. I do not have the time nor the interest in updating UD for the new GUI.
    6 points
  45. See the post above yours. It gets loaded directly into RAM. Speaking as someone who doesn't have an Nvidia card, I personally don't want my RAM used up by something I don't have.
    5 points
  46. https://www.phoronix.com/scan.php?page=news_item&px=Linux-5.6-Released
    5 points
  47. Also, any user who created what should be a redundant pool on v6.7.0 should convert its metadata to raid1 now, since even after this bug is fixed any existing pools will remain as they were. Use: btrfs balance start -mconvert=raid1 /mnt/cache To check that the correct profile type is in use: btrfs fi usage -T /mnt/cache In the example of a v6.7-created pool, note that while data is raid1, metadata and system are single profile, i.e. some part of the metadata is on each device and will be incomplete if one of them fails; all chunk types need to be raid1 for the p
    5 points
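The two commands quoted in the post above can be run from the Unraid terminal as follows. This assumes the pool is mounted at /mnt/cache, the default cache pool location.

```shell
# Convert the pool's metadata (and system) chunks to the raid1 profile
btrfs balance start -mconvert=raid1 /mnt/cache

# Verify the result: Data, Metadata and System should all show raid1
btrfs fi usage -T /mnt/cache
```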
  48. In the 6.6.4 release we 'reverted' the cron package back to the one that was installed with the 6.5.x releases in order to solve another bug. However, the way the cron daemon is started in that package is different, and the problem in 6.6.4 is that crond is never started. We will publish a fix for this ASAP, and in the meantime have taken down the 6.6.4 release. If you're running 6.6.4, you can type this command in a terminal window and put it in your 'go' file as a workaround: /usr/sbin/crond -l notice
    5 points
  49. Waiting for that promised proper Login page so that a Password Manager can be used...
    5 points