Leaderboard

Popular Content

Showing content with the highest reputation since 02/21/17 in Report Comments

  1. It appears that the docker images --digests --no-trunc command is showing, for whatever reason, the digest of the manifest list rather than the manifest itself for containers pushed as part of a manifest list (https://docs.docker.com/engine/reference/commandline/manifest/#create-and-push-a-manifest-list). I'm not sure if that's always been the case, or if it's the result of some recent change in the Docker Hub API. Also not sure if it's intentional or a bug. This causes an issue, since in DockerClient.php (/usr/local/emhttp/plugins/dynamix.docker.manager/include) the request made to get the comparison digest is:

        /**
         * Step 4: Get Docker-Content-Digest header from manifest file
         */
        $ch = getCurlHandle($manifestURL, 'HEAD');
        curl_setopt($ch, CURLOPT_HTTPHEADER, [
            'Accept: application/vnd.docker.distribution.manifest.v2+json',
            'Authorization: Bearer ' . $token
        ]);

     which retrieves information about the manifest itself, not the manifest list. So it ends up comparing the list digest as reported by the local docker commands to the individual manifest digests retrieved from Docker Hub, which of course do not match. Changing the Accept header to the list MIME type, application/vnd.docker.distribution.manifest.list.v2+json, causes it to no longer consistently report updates available for these containers. Doing this, however, reports updates for all containers that do not use manifest lists, since the call now falls back to a v1 manifest when no list is available, and the digest for the v1 manifest doesn't match the digest for the v2 manifest. If the Accept header is instead changed to application/vnd.docker.distribution.manifest.list.v2+json,application/vnd.docker.distribution.manifest.v2+json, Docker Hub falls back correctly to the v2 manifest, and the digests now match the local output both for containers using plain manifests and for those using manifest lists. Until Docker Hub inevitably makes another change.

        /**
         * Step 4: Get Docker-Content-Digest header from manifest file
         */
        $ch = getCurlHandle($manifestURL, 'HEAD');
        curl_setopt($ch, CURLOPT_HTTPHEADER, [
            'Accept: application/vnd.docker.distribution.manifest.list.v2+json,application/vnd.docker.distribution.manifest.v2+json',
            'Authorization: Bearer ' . $token
        ]);
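     For anyone who wants to poke at this outside the webGUI, here is a minimal shell sketch of the same digest lookup against Docker Hub. It assumes an anonymous pull token is sufficient (true for public repositories); the repository and tag are just examples, so substitute your own:

        #!/bin/bash
        # Example repository/tag -- substitute the image you actually want to check
        REPO="library/alpine"
        TAG="latest"

        # Grab an anonymous pull token for the repository (crude sed parse; jq works too)
        TOKEN=$(curl -s "https://auth.docker.io/token?service=registry.docker.io&scope=repository:${REPO}:pull" \
                | sed 's/.*"token":"\([^"]*\)".*/\1/')

        # HEAD the manifest; the combined Accept header lets Docker Hub return a
        # manifest list when one exists and fall back to a v2 manifest otherwise
        curl -sI \
             -H "Accept: application/vnd.docker.distribution.manifest.list.v2+json,application/vnd.docker.distribution.manifest.v2+json" \
             -H "Authorization: Bearer ${TOKEN}" \
             "https://registry-1.docker.io/v2/${REPO}/manifests/${TAG}" \
        | grep -i docker-content-digest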
    21 points
  2. I've been around a little while. I always follow the boards even though I have very little life time to give to being active in the community anymore. I felt the need to post to say I can completely appreciate how the guys at @linuxserver.io feel. I was lucky enough to be a part of the @linuxserver.io team for a short while, and I can personally attest to how much personal time and effort they put into development, stress testing and supporting their developments. While @limetech has developed a great base product, I think it's right to acknowledge that much of the popularity and success of the product is down as much to community development and support (which is head and shoulders above by comparison) as it is to the work of the company. As a now outsider looking in, my personal observation is that the use of unRAID exploded due to the availability of stable, regularly updated media apps like Plex (the officially supported one was just left to rot) and then exploded again with the emergence of the @linuxserver.io Nvidia build and the support that came with it. Given the efforts of the community, and that the work of groups like @linuxserver.io is even used in unRAID marketing, I feel this is a show of poor form. I feel frustrated at Tom's "I didn't know I needed permission ...." comment, as it isn't about that. It's about respect and communication. A quick "call" to the @linuxserver.io team to let them know of the plan (yes, I know the official team don't like sharing plans at the risk of setting expectations they then won't meet), to acknowledge (even privately) the work that has contributed (and continues to contribute) to the success of unRAID, and to let them be a part of it, would have cost nothing but would have been worth so much. I know the guys would have been supportive too. I hope the two teams can work it out, and that @limetech don't forget what (and who) helped them get to where they are, and perhaps look at other companies who have alienated their communities through poor decisions and communication. Don't make this the start of a slippery slide.
    19 points
  3. @limetech I solved this issue as follows and successfully tested it in:

     Unraid 6.9.2
     Unraid 6.10.0-rc1

     Add this to /boot/config/go (by Config Editor Plugin):

        # -------------------------------------------------
        # RAM-Disk for Docker json/log files
        # -------------------------------------------------

        # create RAM-Disk on starting the docker service
        sed -i '/^ echo "starting \$BASE ..."$/i \
        # move json/logs to ram disk\
        rsync -aH --delete /var/lib/docker/containers/ ${DOCKER_APP_CONFIG_PATH%/}/containers_backup\
        mount -t tmpfs tmpfs /var/lib/docker/containers\
        rsync -aH --delete ${DOCKER_APP_CONFIG_PATH%/}/containers_backup/ /var/lib/docker/containers\
        logger -t docker RAM-Disk created' /etc/rc.d/rc.docker

        # remove RAM-Disk on stopping the docker service
        sed -i '/^ # tear down the bridge$/i \
        # backup json/logs and remove RAM-Disk\
        rsync -aH --delete /var/lib/docker/containers/ ${DOCKER_APP_CONFIG_PATH%/}/containers_backup\
        umount /var/lib/docker/containers\
        rsync -aH --delete ${DOCKER_APP_CONFIG_PATH%/}/containers_backup/ /var/lib/docker/containers\
        logger -t docker RAM-Disk removed' /etc/rc.d/rc.docker

        # automatically backup Docker RAM-Disk
        sed -i '/^<?PHP$/a \
        $sync_interval_minutes=30;\
        if ( ! ((date("H") * 60 + date("i")) % $sync_interval_minutes) && file_exists("/var/lib/docker/containers")) {\
          exec("mkdir /var/lib/docker_bind");\
          exec("mount --bind /var/lib/docker /var/lib/docker_bind");\
          exec("rsync -aH --delete /var/lib/docker/containers/ /var/lib/docker_bind/containers");\
          exec("umount /var/lib/docker_bind");\
          exec("rmdir /var/lib/docker_bind");\
          exec("logger -t docker RAM-Disk synced");\
        }' /usr/local/emhttp/plugins/dynamix/scripts/monitor

     Optional: Limit the Docker LOG size to avoid using too much RAM.

     Reboot server.

     Notes:
     - By this change /var/lib/docker/containers, which contains only status and log files, becomes a RAM-Disk. This avoids wearing out your SSD and allows a permanently sleeping SSD (energy efficient).
     - It automatically syncs the RAM-Disk every 30 minutes to your default appdata location (for server crash / power-loss scenarios). If container logs are important to you, feel free to change the value of "$sync_interval_minutes" in the above code to a smaller value to sync the RAM-Disk every x minutes.
     - If you like to update Unraid OS, you should remove the change from the Go file until it's clear that this enhancement is still working/needed!

     Your reward: after you enable the docker service you can check whether the RAM-Disk has been created (and its usage).

     [Screenshot: changes in /etc/rc.d/rc.docker and /usr/local/emhttp/plugins/dynamix/scripts/monitor]
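     As a quick check that the mod took effect (a minimal sketch; the path is the default used above):

        # the containers directory should now show up as tmpfs
        mount | grep /var/lib/docker/containers

        # and df shows the RAM-Disk size and current usage
        df -h /var/lib/docker/containers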
    16 points
  4. I do not use any of these unofficial builds, nor do I know what they are about and what features they provide that are not included in stock Unraid. That being said, I still feel that the devs that release them have a point. I think the main issue is these statements by @limetech: "Finally, we want to discourage "unofficial" builds of the bz* files." which are corroborated by the account of the 2019 PM exchange: "concern regarding the 'proliferation of non-stock Unraid kernels, in particular people reporting bugs against non-stock builds.'" Yes, technically it's true that bug reports based on unofficial builds complicate matters. Also, it's maybe frustrating that people are reluctant to go the extra mile of going back to stock Unraid and trying to reproduce the error there, especially since they might be convinced (correctly or not) that it has nothing to do with the unofficial build. Granted, from an engineer's point of view that might be seen as a nuisance. But from a customer-driven business point of view it's a self-destructive perspective. Obviously these builds fill a need that Unraid could not, or else they would not exist and there wouldn't be enough people using them to be a "bug hunting" problem in the first place. They expand Unraid's capabilities, bring new customers to Unraid, and demonstrate a lively and active community; basically everything I love about Unraid. I think @limetech did not mean it that way, but I can fully see how people who poured a lot of energy and heart into the Unraid ecosystem might perceive it that way. If you had instead said: "Finally, we incorporated these new features x, y and z formerly only available in the builds by A, B and C. Thanks again for the great work A, B and C have been doing for a long while now and for showing us in what ways we can enhance Unraid for our customers. It took a long time, but now it's here. It should also make finding bugs easier, as many people can now use the official builds." then everybody would have been happy. I think it's probably a misunderstanding. I can't really imagine you really wanting to discourage the community from making Unraid reach more users.
    16 points
  5. I'd like to highlight other improvements available in 6.10 which are maybe not so obvious to spot from the release notes; some of these improvements are internal and not really visible.

     - Event-driven model to obtain server information and update the GUI in real-time
       The advantage of this model is its scalability. Multiple browsers can be opened simultaneously to the GUI without much impact. In addition, stale browser sessions won't create any CSRF errors anymore. People who keep their browser open 24/7 will find the GUI stays responsive at all times.

     - Docker labels
       Docker labels are added to allow people using Docker compose to make use of icons and GUI access. Look at a Docker 'run' command output to see exactly what labels are used (see the sketch below).

     - Docker custom networks
       A new setting for custom networks is available. Originally custom networks are created using the macvlan mode, and this mode is kept when upgrading to version 6.10. The new ipvlan mode is introduced to battle the crashes some people experience when using macvlan mode. If that is your case, change to ipvlan mode and test. Changing mode does not require you to reconfigure anything on the Docker level; internally everything is taken care of.

     - Docker bridge network (docker0)
       docker0 now supports IPv6. This is implemented by assigning docker0 a private IPv6 subnet (fd17::/64), similar to what is done for IPv4, and using network translation to communicate with the outside world. Containers connected to the bridge network now have both IPv4 and IPv6 connectivity (of course the system must have IPv6 configured in the network configuration). In addition, several enhancements are made in the IPv6 implementation to better deal with the use (or non-use) of IPv6.

     - Plugins page
       The plugins page now loads information in two steps. First the list of plugins is created, and next the more time-consuming plugin status field is retrieved in the background. The result is a faster-loading plugins page, especially when you have a lot of plugins installed.

     - Dashboard graphs
       The dashboard now has two graphs available. The CPU graph is displayed by default, while the NETWORK graph is a new option under Interface (see the 'General Info' selection). The CPU graph may be hidden as well in case it is not desired. Both graphs have a configurable timeline, which is 30 seconds by default and can be changed independently for each graph to see a longer or shorter history. Graphs are updated in real-time and are useful for observing the behavior of the server under different circumstances.
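     For the Docker labels item above, here is a rough sketch of what attaching those labels to a hand-run container might look like. The label keys and values below are assumptions for illustration only; as noted, copy the exact ones from a Docker 'run' command output in the webGUI:

        # hypothetical example -- take the real label keys from your own 'run' output
        docker run -d --name=myapp \
          -l net.unraid.docker.webui='http://[IP]:[PORT:8080]' \
          -l net.unraid.docker.icon='https://example.com/myapp-icon.png' \
          myrepo/myapp:latest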
    14 points
  6. Just a little feedback on upgrading from Unraid Nvidia beta30 to beta35 with the Nvidia Drivers plugin. The process was smooth and I see no stability or performance issues after 48h, following these steps:

     - Disable auto-start on "nvidia aware" containers (Plex and F@H for me)
     - Stop all containers
     - Disable Docker engine
     - Stop all VMs (none of them had a GPU passthrough)
     - Disable VM Manager
     - Remove Unraid-Nvidia plugin
     - Upgrade to 6.9.0-beta35 with Tools > Update OS
     - Reboot
     - Install Nvidia Drivers plugin from CA (be patient and wait for the "Done" button)
     - Check the driver installation (Settings > Nvidia Drivers, should be 455.38; run nvidia-smi under CLI). Verify the GpuID is unchanged, which was the case for me; otherwise the "NVIDIA_VISIBLE_DEVICES" variable should be changed accordingly for the relevant containers (see the check below)
     - Reboot
     - Enable VM Manager, restart VMs and check them
     - Enable Docker engine, start Plex and F@H
     - Re-enable auto-start for Plex and F@H

     All this is perhaps a bit over-cautious, but at least I can confirm I got the expected result, i.e. an upgraded server with all functionalities up and running under 6.9.0-beta35!
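     A quick way to do that driver/GpuID check from the CLI (the output line is illustrative; your GPU name and UUID will differ):

        # confirm the driver loaded and list each GPU with its ID
        nvidia-smi
        nvidia-smi -L
        # example output format:
        # GPU 0: Quadro P2200 (UUID: GPU-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx)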
    13 points
  7. I know this likely won't matter to anyone, but I've been using Unraid for just over ten years now and I'm very sad to see how the Nvidia driver situation has been handled. While I am very glad that custom builds are no longer needed to add the Nvidia driver, I am very disappointed in the apparent lack of communication and appreciation from Limetech toward the community members that have provided us with a solution for all the time Limetech would not. If this kind of corporate-esque "we don't care" attitude is going to be adopted, then that removes an important differentiating factor between Unraid and Synology, etc. For some of us, cost isn't a barrier. Unraid has an indie appeal with good leaders and a strong community. Please understand how important the Unraid community is, and appreciate those in the community that put in the time to make Unraid better for all of us.
    13 points
  8. The corruption occurred as a result of failing a read-ahead I/O operation with "BLK_STS_IOERR" status. In the Linux block layer each READ or WRITE can have various modifier bits set. In the case of a read-ahead you get READ|REQ_RAHEAD, which tells the I/O driver this is a read-ahead. In this case, if there are insufficient resources at the time this request is received, the driver is permitted to terminate the operation with BLK_STS_IOERR status. There is an example of this in the Linux md/raid5 driver. In the case of Unraid it can definitely happen under heavy load that a read-ahead comes along and there are no 'stripe buffers' immediately available. In this case, instead of making the calling process wait, it terminates the I/O. It has worked this way for years. When this problem first happened there were conflicting reports of the config in which it happened. My first thought was an issue in the user share file system. Eventually I ruled that out, and my next thought was cache vs. array. Some reports seemed to indicate it happened with all databases on cache, but I think those reports were mistaken for various reasons. Ultimately I decided the issue had to be with the md/unraid driver. Our big problem was that we could not reproduce the issue, but others seemed to be able to reproduce it with ease. Honestly, thinking failing read-aheads could be the issue was a "hunch" - it was either that or some logic in the scheduler that merged I/O's incorrectly (there were kernel bugs related to this with some pretty extensive patches, and I thought maybe the developer missed a corner case - this is why I added a config setting for which scheduler to use). This resulted in a release with those 'md_restrict' flags to determine if one of those was the culprit, and what do you know, not failing read-aheads makes the issue go away. What I suspect is that this is a bug in SQLite - I think SQLite is using direct I/O (bypassing the page cache) and issuing its own read-aheads, and their logic to handle a failing read-ahead is broken. But I did not follow that rabbit hole - too many other problems to work on.
    13 points
  9. Someone please help me understand this? What if I don't want to associate my keys and my home installs with my Unraid forum account? I really value my privacy, honestly, and what is in my home is no business of any cloud or external servers. This is a very slippery slope and I just don't like it. Is there an option to continue being offline just the way I am right now? I just don't want to be part of this new ecosystem Unraid is creating. I value my privacy and I trust no one, I am sorry..... And there's absolutely no reason why you can't support both ways of activating / running Unraid. I should not be forced to connect with you if I have a valid key.

     "UPC and My Servers Plugin
     The most visible new feature is located in the upper right of the webGUI header. We call this the User Profile Component, or UPC. The UPC allows a user to associate their server(s) and license key(s) with their Unraid Community forum account. Starting with this release, it will be necessary for a new user to either sign-in with existing forum credentials or sign-up, creating a new account via the UPC in order to download a Trial key. All key purchases and upgrades are also handled exclusively via the UPC"
    12 points
  10. For Unraid version 6.10 I have replaced the Docker macvlan driver with the Docker ipvlan driver. IPvlan is a new twist on the tried and true network virtualization technique. The Linux implementations are extremely lightweight because, rather than using the traditional Linux bridge for isolation, they are associated with a Linux Ethernet interface or sub-interface to enforce separation between networks and connectivity to the physical network. The end-user doesn't have to do anything special. At startup, legacy networks are automatically removed and replaced by the new network approach. Please test once 6.10 becomes available. Internal testing looks very good so far.
    12 points
  11. Everyone please take a look here: I want to request that any further discussion on this please take place in that topic.
    12 points
  12. As a user of Unraid I am very scared about the current trends. Unraid as a base is a very good server operating system, but what makes it special are the community applications. I would be very sad if this fell apart because of possibly wrong or easily misunderstood communication. I hope that everyone will come together again; you would make all of us users very happy. I have 33 Docker containers and 8 VMs running on my system, and I hope that my system will continue to be as usable as before. I have many containers from linuxserver.io. I am grateful for the support from limetech & the whole community and hope it will be continued. Sorry for my English, I hope you can understand me.
    11 points
  13. Thanks for the fix @bluemonster ! Here is a bash file that will automatically implement the fix in 6.7.2 (and probably earlier, although I'm not sure how much earlier): https://gist.github.com/ljm42/74800562e59639f0fe1b8d9c317e07ab It is meant to be run using the User Scripts plugin, although that isn't required. Note that you need to re-run the script after every reboot. Remember to uninstall the script after you upgrade to Unraid 6.8 More details in the script comments.
    11 points
  14. Since the UPC seems to be stirring up a lot of adverse reaction, maybe the easiest option is to add a setting under either Settings->Identification or Settings->Management Access (whichever is deemed most appropriate) to remove the Login prompt from the top of the GUI? It can then be defaulted to Enabled until explicitly disabled by the user, so new users see it is an option. That might also help with making it clearer that this is purely an optional feature.
    10 points
  15. I wonder if I need to enter the NO HEALTHCHECK parameter for all these containers? Will they work stably after that?
    10 points
  16. You know you could have discussed this with me, right? Remember me, the original dev, along with @bass_rock! The one you tasked @jonp to talk to about how we achieved the Nvidia builds back in December last year? That I never heard anything more about. I'm the one that spent 6 months f**king around with Unraid to get the GPU drivers working in docker containers. The one that's been hosting literally 100s of custom Unraid builds for the community to use for nearly five years. With all due respect to @ich777, he wasn't the one who did the bulk of the work here. Remember? You'll be pleased to know I will no longer be producing any custom "unofficial" builds for all the things you've not wanted to pick up and support for the last 10 years. A bit like Unassigned Devices and @dlandon; hell, that should have been incorporated half a decade ago. In fact, I won't be doing anything here from this point on. Nearly 5 years of DVB and nearly 2 years of Nvidia. You're welcome to them. DVB is all yours now as well. Good luck.
    10 points
  17. The next release of 6.9 will be on the Linux 5.9 kernel; hopefully that will be the last one before we can go 'stable' (because the Linux 5.8 kernel was recently marked EOL).
    9 points
  18. Before anyone else beats me to it: Soon™
    9 points
  19. This sums up my stance too. I can understand LimeTech's view as to why they didn't feel the need to communicate this to the other parties involved (as they never officially asked them to develop the solution they'd developed and put in place). But on the flip side, the appeal of UnRaid is the community spirit and the drive to implement things which make the platform more useful. It wouldn't have taken a lot to give certain members of the community a heads up that this was coming, and to give credit where credit is due in the release notes. Something along the lines of: "After seeing the appeal and demand around 3rd party driver integration with Unraid as it's matured over recent years, we've introduced a mechanism to bring this into the core OS distribution. We want to thank specific members of the community such as @CHBMB and @bass_rock who've unofficially supported this in recent times, which drove the need to implement this in a way which allows better support for the OS as a whole for the entire community, and removes the need for users to use unofficial builds." Also for them to be given a heads up and at least involved in the implementation stages at an advisory or review level as well... Anyway, I hope this communication mishap can be resolved. It was obviously not intentional, and 2020 as a whole has meant we're all stressed and overworked (I am anyway!), so it makes situations like this a lot easier to trigger. Hopefully lessons can be learned here, and changes similar to this can be managed with a little more community involvement going forward.
    9 points
  20. Just wanted to share a quick success story. Previously (and for the past few releases now) I was using @ich777's Kernel Docker container to compile with the latest Nvidia driver. Excited to see this brought in natively now; it worked out of the box for me. I use the regular Plex docker container for HW transcoding (adding --runtime=nvidia in the extra parameters and properly setting the two container variables NVIDIA_VISIBLE_DEVICES and NVIDIA_DRIVER_CAPABILITIES), as sketched below. To prepare for this upgrade, while still on beta 30:

     - disabled docker service
     - upgraded to beta 35
     - uninstalled Kernel Helper Plugin
     - uninstalled Kernel Helper Docker
     - rebooted
     - installed Nvidia Drivers from CA
     - rebooted
     - re-enabled docker

     So far all is working fine on my Quadro P2200.

     [Screenshots: Install Nvidia Drivers; Docker settings; validation in Settings (P2200 and drivers detected fine); HW transcoding working fine]
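     Roughly what those container tweaks look like as a plain docker run (the image name, port mapping and GPU UUID are placeholders; on Unraid these go in the template's Extra Parameters and container variables instead):

        docker run -d --name=plex \
          --runtime=nvidia \
          -e NVIDIA_VISIBLE_DEVICES=GPU-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx \
          -e NVIDIA_DRIVER_CAPABILITIES=all \
          -p 32400:32400 \
          plexinc/pms-docker:latest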
    9 points
  21. Normal humans often can't see all the ways messaging can be interpreted (this is why we have comms people). The most offensive-sounding things are often not intended in that fashion at all. Written text makes communication harder because there are no facial or audio cues to support the language. I expected our community developers (given that they've clearly communicated in text behind screens for many years) would understand that things aren't always intended as they sound. In this regard, I support @limetech wholeheartedly. Nevertheless, the only way to fix this is probably for @limetech to privately offer an apology and discuss as a group how to fix it, then publish back together that it's resolved. (30 years managing technical teams - I've seen this a few times and it's usually sorted out with an apology and a conversation.)
    9 points
  22. @limetech Not wanting to dogpile on to you, but you would do well to go above and beyond in rectifying the situation with any community developers that have issues. The community plugins and features that supply so much usability and functionality lacking in the base product are what actually make Unraid worth paying for. If you start losing community support, you will start to lose it all, and with that I am sure I and others will no longer recommend people purchase and use your software. @CHBMB @aptalca @linuxserver.io Maybe a cool-down period is needed, but don't just pack up your things and go home. It's all too common for companies to ignore users and communities; a large part of the community is often invisible to them, hidden behind keeping the business running and profits. That makes it easy for customer-facing employees to make statements that might be taken as insensitive.
    9 points
  23. Install the referenced plugin. It will install the Nvidia driver and the tools needed for transcoding in Docker containers. If you don't care about this, no need to deal with it. The plugin is more of a "work in progress" right now and will mature over time, including being added to Community Apps. We didn't include the Nvidia vendor driver built into the release for several reasons:

     - The package is very large, over 200MB, and there is no need for everyone to download this thing if they don't need it.
     - Including the driver "taints" the Linux kernel. There is something of a longstanding war between the Linux community and Nvidia because large parts of the Nvidia driver are not open source. Due to how the driver has to integrate into the kernel, this "taints" the kernel. For us it means that if we have any kernel-related issues, no developer will help out if they see the kernel is tainted. We also don't want that message appearing in syslog by default.
     - The Nvidia driver might change faster than we want to update the Linux kernel. This way we can build and publish a new driver independent of an Unraid OS release.
    9 points
  24. Great news. Crazy how they constantly focus on features benefiting more than 1% of the maximum userbase, right. 🤪
    9 points
  25. https://github.com/electrified/asus-wmi-sensors <- this would be nice to have for 6.9 for the ASUS people. It directly reads the WMI interface that ASUS has moved to and displays all sensors properly. (Supposedly.)
    9 points
  26. Preparing another release. Mondays are always very busy but will try to get it out asap.
    9 points
  27. No. If you have the latest version, you are good to go.
    8 points
  28. I've been saddened and disheartened today to see what was supposed to be a momentous occasion for us in releasing a substantial improvement to our OS being turned into something else. I have nothing but respect for all of our community developers (@CHBMB included) and the effort they put into supporting extended functionality for Unraid OS. It's sad to see something we intended to improve the quality of the product for everyone be viewed as disrespectful to the very people we were trying to help. It honestly feels like a slap in the face to hear that some folks believe we were seeking to intentionally slight anyone. It really makes me sad that anyone could think that little of us.

     And while I'm not interested in engaging in the rest of the back-and-forth on this topic, I do have to address a specific comment where my name was brought into the mix: @CHBMB, unless I'm mistaken, you're referring to the PM thread that we had back in December of 2019, yes? If so, I re-read that exchange today. I never asked you how you created those builds. I never asked you for build instructions or any code. And in that same thread, I expressed our concern regarding the "proliferation of non-stock Unraid kernels, in particular people reporting bugs against non-stock builds." That was the entire purpose of me reaching out to you at the time: to find out what you were talking about in the post about NVIDIA and legal concerns, to make you aware of our concern about having non-official Unraid builds in the wild, and to see what ideas you had to assuage those concerns. I would not describe that as us discussing how you achieved the Nvidia builds. If I'm misremembering and there was another conversation we had over another platform (email, chat, etc.), please remind me, because like everyone here, I'm only human.

     At the end of the day, I really hope that calmer heads prevail here and we can all get along again. As Tom has already stated, no disrespect was intended. If some folks here feel that we've done something unforgivable and need to leave our community, I'll be sorry to see that.
    8 points
  29. @aptalca You also know that I had to test things and do a lot of trial and error too, but I also got a lot of help from klueska on GitHub. One side note about that: I tried asking and getting help from you because I knew nothing about kernel building and all this stuff, but no one ever answered or did anything to help me. I completely understand that @CHBMB is working in healthcare and doesn't have that much time in times like this... @aptalca, @CHBMB & @bass_rock Something about my Unraid-Kernel-Helper container: I know the initial plugin came from you (in terms of the Nvidia driver), but I also wanted to make the build of this somewhat more open, and I needed a way to integrate DVB drivers together with the Nvidia driver on my Unraid system. My work was mainly inspired by you guys and I gave credit for that, but I also want to share it with the community in general if someone wants to integrate anything other than the Nvidia or DVB drivers; for example, I now have support for the Mellanox Firmware Tools or iSCSI... I will also keep updating my Unraid-Kernel-Helper plugin if someone wants an all-in-one solution and wants to build the bz* images on their own. If you want to blame anyone about this then you have to blame me: when I read that @limetech was going to integrate the Nvidia drivers into one of the next Unraid beta builds, I wrote a PM to @limetech and offered my help. And that's all I have to say about that. I think Unraid is one of the best server OSes out there, in my opinion, and I simply want to push it forward...
    8 points
  30. The instant we do this, a lot of people using GPU passthrough to VMs may find their VMs don't start or run erratically until they go and mark the devices for stubbing on Tools/System Devices. There are a lot of changes already in 6.9 vs. 6.8, including multiple pools (and changes to the System Devices page), so our strategy is to move the community to 6.9 first, give people a chance to use the new stubbing feature, and then produce a 6.10 where all the GPU drivers are included.
    8 points
  31. To expand on my quoted text in the OP, this beta version brings forth more improvements to using a folder for the docker system instead of an image. The notable difference is that now the GUI supports setting a folder directly. The key to using this, however, is that while you can choose the appropriate share via the GUI's dropdown browser, you must enter a unique (and non-existent) subfolder for the system to realize you want to create a folder instead of an image (and include a trailing slash); see the illustration below. If you simply pick an already existing folder, the system will automatically assume that you want to create an image. Hopefully for the next release this behaviour will be modified and/or made clearer within the docker GUI.
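     To illustrate (the paths are hypothetical examples, not a recommendation):

        # entering a unique, non-existent subfolder WITH a trailing slash -> docker folder:
        #   /mnt/user/system/docker/dockerdata/
        # picking an already existing folder -> the GUI assumes a docker image:
        #   /mnt/user/system/docker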
    8 points
  32. There is a Samba security release being published tomorrow. We will update that component and then publish 6.8 stable release.
    8 points
  33. Why are you running something so critical to yourself on pre-release software? Seems a little reckless to me...
    8 points
  34. The issue is that writes originating from a fast source (as opposed to a slow source such as a 1Gbit network) completely consume all available internal "stripe buffers" used by md/unraid to implement array writes. When reads come in, they get starved of the resources needed to execute efficiently. The method used to limit the number of I/O's directed to a specific md device in the Linux kernel no longer works in newer kernels, hence I've had to implement a different throttling mechanism. Changes to the md/unraid driver require exhaustive testing. All functional tests pass; however, driver bugs are notorious for only showing up under specific workloads due to different timing. As I write this I'm 95% confident there are no new bugs. Since this will be a stable 'patch' release, I need to finish my testing.
    8 points
  35. I'll add it back, don't want to get between anyone and their WINE
    7 points
  36. Yes Tom, that's how I viewed it when we developed the Nvidia stuff originally; it was improving the product. The thread has been running since February 2019; it's a big niche: 99 pages and 2468 posts.
    7 points
  37. You have just described how almost all software functions 🤣
    7 points
  38. I made a video on what the new 6.7.0 looks like, etc. There is a mistake in this video where I talk about the webGUI change "VM manager: remove and rebuild USB controllers": what I thought it was, I found out afterwards, was available in 6.6.6 as well, so it wasn't that!! It also shows how to upgrade to the RC for those who don't know, and how to downgrade again afterwards if need be. https://www.youtube.com/watch?v=qRD1qVqcyB8&feature=youtu.be
    7 points
  39. This is from my design file, so it differs a little bit from the implemented solution, but it should give you a general feel for how it looks. (Header/menu and bottom statusbar are unchanged aside from icons.) Edit: So I should remember to refresh before posting. Anyway, you might not have seen it yet, but the logo on the "server description" tile can be edited, and we have included a selection of popular cases to start you off!
    7 points
  40. This is sad, very sad. unRAID is a unique product, but it's the community that really makes it shine. As an ordinary user, what really makes me feel safe is not that I'm running a perfect piece of software (it's not; no software will ever be), but having a reliable community that always has my back when I'm in trouble and is constantly making things better. I'm not in a place to judge, but I do see some utterly poor communication. This could have been a happy day, yet we are seeing the beginning of a crack. Guess who gets hurt? LOYAL USERS! And guess who gets hurt after the users are hurt? Please, look at the bigger picture; fix it while it's not too late. Don't bury what you have achieved together over a miscommunication. It's not worth it.
    6 points
  41. It should be noted that right now the only SAS chipset definitely affected is the SAS2116, i.e. the 9201-16e. My controller running the SAS2008 (Dell H200 cross-flashed) is completely unaffected.
    6 points
  42. Added several options for dealing with this issue in 6.9.0-beta24.
    6 points
  43. Also, you could use 'space_cache=v2'. The upcoming -beta23 has these changes to address this issue (illustrated below):

     - set the 'noatime' option when mounting loopback file systems
     - include the 'space_cache=v2' option when mounting btrfs file systems
     - default partition 1 start sector aligned on a 1MiB boundary for non-rotational storage. This requires wiping the partition structure on existing SSD devices first to make use of it.
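     Roughly what those mount options amount to, as a sketch (the device and paths are examples, not the exact commands Unraid runs):

        # loopback-mounted docker image with noatime
        mount -o loop,noatime /mnt/user/system/docker/docker.img /var/lib/docker

        # btrfs pool device with noatime and the v2 free-space cache
        mount -t btrfs -o noatime,space_cache=v2 /dev/sdX1 /mnt/cache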
    6 points
  44. Correct. A pool is simply a collection of storage devices. If you assign multiple devices to a pool, it will be formatted using the btrfs 'raid1' profile, which means there is protection against any single device failure in the pool; you can verify the profile as shown below. In the future we plan on letting you create multiple "unRAID" pools, where each pool would have its own parity/parity2 and collection of storage volumes, but that feature did not make this release. edit: fixed typos
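     A quick way to confirm the profile from the CLI (the mount point is an example; use your pool's name):

        # shows the data/metadata profiles of the pool, e.g. "Data, RAID1: ..."
        btrfs filesystem df /mnt/cache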
    6 points
  45. Maybe sooner if you can bribe him with small fish, crabs, and shrimp.
    6 points
  46. Just an FYI for this forum: it has been 16 days since I have had any corruption with Plex. Previously that only happened with 6.6.7; with anything newer, it would corrupt in less than a day. 6.8.0-rc4 and rc5 have been rock stable with the changes that were made. I'm glad that I stuck with the testing and was able to work so closely with the Unraid team. 🙂
    6 points
  47. We're actively working on this for 6.9
    6 points
  48. Today's update to CA Auto Update will automatically apply the fix for this issue on affected systems (whether or not the plugin is even enabled). You will, though, have to check for updates manually once to clear out the old update-available status. If you are running @ljm42's patch script, you can safely remove it, as Auto Update will not install the patch once 6.8+ is released.
    6 points
  49. It really is time for @limetech to incorporate the UD functionality. I have neither the time nor the interest in updating UD for the new GUI.
    6 points