Leaderboard

Popular Content

Showing content with the highest reputation since 06/16/20 in Report Comments

  1. I’ve been around a little while. I always follow the boards even though I have very little time to give to being active in the community anymore. I felt the need to post to say I can completely appreciate how the guys at @linuxserver.io feel. I was lucky enough to be a part of the team @linuxserver.io for a short while and I can personally attest to how much personal time and effort they put into development, stress testing and supporting their developments. While @limetech has developed a great base product I think it’s right to acknowledge that much of the
    19 points
  2. I do not use any of these unofficial builds, nor do I know what they are about or what features they provide that are not included in stock Unraid. That being said, I still feel that the devs that release them have a point. I think the main issue is these statements by @limetech: "Finally, we want to discourage "unofficial" builds of the bz* files." which are corroborated by the account of the 2019 PM exchange: "concern regarding the 'proliferation of non-stock Unraid kernels, in particular people reporting bugs against non-stock builds.'" Yes, technically it's t
    16 points
  3. I know this likely won't matter to anyone but I've been using unraid for just over ten years now and I'm very sad to see how the nvidia driver situation has been handled. While I am very glad that custom builds are no longer needed to add the nvidia driver, I am very disappointed in the apparent lack of communication and appreciation from Limetech to the community members that have provided us with a solution for all the time Limetech would not. If this kind of corporate-esque "we don't care" attitude is going to be adopted then that removes an important differentiating factor betwee
    13 points
  4. Just a little feedback on upgrading from Unraid Nvidia beta30 to beta35 with the Nvidia drivers plugin. The process was smooth and I see no stability or performance issues after 48h, following these steps: - Disable auto-start on "nvidia aware" containers (Plex and F@H for me) - Stop all containers - Disable the Docker engine - Stop all VMs (none of them had a GPU passthrough) - Disable VM Manager - Remove the Unraid-Nvidia plugin - Upgrade to 6.9.0-beta35 with Tools > Update OS - Reboot - Install the Nvidia Drivers plugin from CA (be patient and wait
    12 points
  5. Everyone please take a look here: I want to request that any further discussion on this take place in that topic.
    12 points
  6. As a user of Unraid I am very scared about the current trends. Unraid as a base is a very good server operating system, but what makes it special are the community applications. I would be very sad if this breaks apart because of communication that was perhaps wrong or easy to misunderstand. I hope that everyone will come together again; you would make all of us users very happy. I have 33 docker containers and 8 VMs running on my system and I hope that my system will continue to be as usable as before. I have many containers from linuxserver.io. I am grateful for th
    11 points
  7. You know you could have discussed this with me, right? Remember me, the original dev, along with @bass_rock? The one you tasked @jonp to talk to about how we achieved the Nvidia builds back in December last year? A conversation I never heard anything more about. I'm the one that spent 6 months f**king around with Unraid to get the GPU drivers working in docker containers. The one that's been hosting literally 100s of custom Unraid builds for the community to use for nearly five years. With all due respect to @ich777, he wasn't the one who did the bulk of the work here.
    10 points
  8. The next release of 6.9 will be on the Linux 5.9 kernel; hopefully that will be the last kernel change before we can go 'stable' (because the Linux 5.8 kernel was recently marked EOL).
    9 points
  9. Before anyone else beats me to it: Soon™
    9 points
  10. This sums up my stance too. I can understand LimeTech's view as to why they didn't feel the need to communicate this to the other parties involved (as they never officially asked them to develop the solution they'd put in place). But on the flip side, the appeal of UnRaid is the community spirit and the drive to implement things which make the platform more useful. It wouldn't have taken a lot to give certain members of the community a heads-up that this was coming, and to give credit where credit is due in the release notes. Something along the lines of: "After seeing the ap
    9 points
  11. Just wanted to share a quick success story. Previously (and for the past few releases now) I was using @ich777's Kernel Docker container to compile with the latest Nvidia driver. Excited now to see this brought in natively; it worked out of the box for me. I use the regular Plex docker container for HW transcoding (adding --runtime=nvidia in the extra parameters and properly setting the two container variables NVIDIA_VISIBLE_DEVICES and NVIDIA_DRIVER_CAPABILITIES). To prepare for this upgrade, while still on beta 30: - disabled docker service - upgraded to beta 35
    9 points
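     A rough sketch of the container settings described above, expressed as an equivalent docker run command (image tag and variable values are illustrative only; on Unraid, NVIDIA_VISIBLE_DEVICES is normally set to the GPU UUID reported by the driver plugin):

         docker run -d --name=plex \
           --runtime=nvidia \
           -e NVIDIA_VISIBLE_DEVICES=all \
           -e NVIDIA_DRIVER_CAPABILITIES=all \
           linuxserver/plex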
  12. Normal humans often can't see all the ways messaging can be interpreted (this is why we have comms people). The most offensive-sounding things are often not intended in that fashion at all. Written text makes communication harder because there are no facial or audio cues to support the language. I expected our community developers (given that they've clearly communicated in text behind screens for many years) would understand that things aren't always intended as they sound. In this regard, I support @limetech wholeheartedly. Nevertheless the only
    9 points
  13. @limetech Not wanting to dogpile on you, but you would do well to go above and beyond in rectifying the situation with any community developers that have issues. The community plugins and features that supply so much usability and functionality lacking in the base product are what actually make Unraid worth paying for. If you start losing community support, you will start to lose it all, and with that I am sure I and others will stop recommending that people purchase and use your software. @CHBMB @aptalca @linuxserver.io Maybe a cool-down period
    9 points
  14. Install the referenced plugin. It will install the Nvidia driver and the tools needed for transcoding in Docker containers. If you don't care about this, there's no need to deal with it. The plugin is more of a "work in progress" right now and will mature over time, including being added to Community Apps. We didn't build the Nvidia vendor driver into the release for several reasons: the package is very large, over 200MB, and there's no need for everyone to download this thing if they don't need it. Including the driver "taints" the Linux kernel. There is something of a longst
    9 points
  15. Great news. Crazy how they constantly focus on features benefiting more than 1% of the maximum userbase, right. 🤪
    9 points
  16. For Unraid version 6.10 I have replaced the Docker macvlan driver with the Docker ipvlan driver. IPvlan is a new twist on the tried and true network virtualization technique. The Linux implementations are extremely lightweight because, rather than using the traditional Linux bridge for isolation, they are associated with a Linux Ethernet interface or sub-interface to enforce separation between networks and connectivity to the physical network. The end user doesn't have to do anything special. At startup, legacy networks are automatically removed and replaced by the new netwo
    8 points
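     For comparison, creating an ipvlan network by hand with the Docker CLI looks roughly like this (subnet, gateway and parent interface are placeholders; as noted above, Unraid does this automatically at startup):

         docker network create -d ipvlan \
           --subnet=192.168.1.0/24 --gateway=192.168.1.1 \
           -o parent=br0 my_ipvlan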
  17. No. If you have the latest version, you are good to go.
    8 points
  18. I've been saddened and disheartened today to see what was supposed to be a momentous occasion for us in releasing a substantial improvement to our OS turned into something else. I have nothing but respect for all of our community developers (@CHBMB included) and the effort they put into supporting extended functionality for Unraid OS. It's sad to see something we intended to improve the quality of the product for everyone be viewed as disrespectful to the very people we were trying to help. It honestly feels like a slap in the face to hear that some folks believe we wer
    8 points
  19. @aptalca You also know that I had to test things and do a lot of trial and error, but I also got a lot of help from klueska on GitHub. One side note about that: I tried asking you for help because I knew nothing about kernel building and all this stuff, but no one ever answered or did anything to help me. I completely understand that @CHBMB works in healthcare and doesn't have that much time in times like this... @aptalca, @CHBMB & @bass_rock Something about my Unraid-Kernel-Helper container, I know the initial plugin came from you (in terms of
    8 points
  20. The instant we do this, a lot of people using GPU passthrough to VMs may find their VMs don't start or run erratically until they go and mark them for stubbing on Tools/System Devices. There are already a lot of changes in 6.9 vs. 6.8, including multiple pools (and changes to the System Devices page), so our strategy is to move the community to 6.9 first, give people a chance to use the new stubbing feature, then produce a 6.10 where all the GPU drivers are included.
    8 points
  21. To expand on my quoted text in the OP, this beta version brings further improvements to using a folder for the docker system instead of an image. The notable difference is that the GUI now supports setting a folder directly. The key to using this, however, is that while you can choose the appropriate share via the GUI's dropdown browser, you must enter a unique (and non-existent) subfolder for the system to realize you want to create a folder instead of an image (and include a trailing slash). If you simply pick an already existing folder, the system will automatically assume that you want to create
    8 points
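     A hypothetical example of the kind of path described above - an existing share chosen from the dropdown plus a new, non-existent subfolder typed in by hand, with the trailing slash included (share and folder names are placeholders):

         /mnt/user/system/docker-folder/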
  22. Yes Tom, that's how I viewed it when we developed the Nvidia stuff originally, it was improving the product. The thread has been running since February 2019, it's a big niche. 99 pages and 2468 posts.
    7 points
  23. You have just described how almost all software functions 🤣
    7 points
  24. The issue appears to be that if you have any VM that uses VNC, then starting it messes up the displays on the Dashboard. Either way, this issue is fixed in the next rev.
    6 points
  25. This is sad, just very sad. unRAID is a unique product, but it's the community that really makes it shine. As an ordinary user, what really makes me feel safe is not that I'm running a perfect piece of software (it's not; no software ever will be), but having a reliable community that always has my back when I'm in trouble and is constantly making things better. I'm not in a place to judge, but I do see some utterly poor communication. This could have been a happy day, yet we are seeing the beginning of a crack. Guess who gets hurt? LOYAL USERS! Guess w
    6 points
  26. It should be noted that right now, the only SAS chipset definitely affected is the SAS2116, i.e. the 9201-16e. My controller running the SAS2008 (Dell H200 cross-flashed) is completely unaffected.
    6 points
  27. Added several options for dealing with this issue in 6.9.0-beta24.
    6 points
  28. You could also use 'space_cache=v2'. The upcoming -beta23 has these changes to address this issue: set the 'noatime' option when mounting loopback file systems; include the 'space_cache=v2' option when mounting btrfs file systems; default partition 1 start sector aligned on a 1MiB boundary for non-rotational storage. The last will require wiping the partition structure on existing SSD devices first to make use of it.
    6 points
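     Purely as a sketch, an equivalent manual mount of a btrfs loopback image with those options would look something like this (image path and mount point are placeholders; -beta23 applies the options automatically):

         mount -o loop,noatime,space_cache=v2 /mnt/user/system/docker/docker.img /var/lib/docker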
  29. Correct. A pool is simply a collection of storage devices. If you assign multiple devices to a pool it will be formatted using the btrfs 'raid1' profile, which means there is protection against any single device failure in the pool. In the future we plan on letting you create multiple "unRAID" pools, where each pool would have its own parity/parity2 and collection of storage volumes - but that feature did not make this release. edit: fixed typos
    6 points
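     For reference only, a btrfs 'raid1' profile like the one described above can be created manually as follows (device names are placeholders; Unraid formats the pool for you, so this is just to show what the profile means):

         mkfs.btrfs -d raid1 -m raid1 /dev/sdb1 /dev/sdc1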
  30. Interesting. Unraid OS 6.9.1 is on kernel 5.10.21 and the referenced patch is not applied. Upcoming 6.9.2 release is on kernel 5.10.27 which does have the patch. Working on finalizing the release now.
    5 points
  31. I've done some digging on this - here's what I found. Mount the Unraid share 'system' on my Mac and check its Spotlight status: [macbook-pro]:~ $ mdutil -s /Volumes/system /System/Volumes/Data/Volumes/system: Server search enabled. [macbook-pro]:~ $ But as best I can tell, "server search" is not in fact enabled. It turns out Samba 4.12.0 changed this search-related default: Note that when upgrading existing installations that are using the previous default Spotlight backend Gnome Tracker must explicitly set "spotlight backend = tracker" as the new default is "noindex
    5 points
  32. @limetech Here's what I did: Deleted the two files from /boot/config: rsyslog.cfg and rsyslog.conf. Rebooted. Started the array. Reconfigured the Syslog settings. Hit Apply. Checked the syslog and saw that rsyslogd started. Verified that there was a file in my share. Verified that data was in the file.
    5 points
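     The command-line portion of those steps, as a rough sketch (file names as given above; the syslog reconfiguration itself happens in the GUI):

         rm /boot/config/rsyslog.cfg /boot/config/rsyslog.conf
         reboot
         # after the array starts and the Syslog settings are re-applied:
         ps aux | grep rsyslogd   # confirm rsyslogd is running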
  33. Still running unRAID on my 1GB Kingston DataTraveler from 2008.
    5 points
  34. See the post above yours. It gets loaded directly into RAM. Speaking as someone who doesn't have an Nvidia card, I personally don't want my RAM used up for something I don't have.
    5 points
  35. Thanks for the release. On the Docker tab I can't seem to access the docker logs with either basic or advanced view. I thought it may have been because I am using the docker folder plugin, but having removed it the problem still persists. ........ Edit -- ha, found where to get to the logs now!! They're accessed by clicking the docker icon. That's much better. It was always annoying in the advanced view having to switch back to basic to see the logs.
    5 points
  36. @CHBMB, @aptalca, @memphisto, everyone else who thinks I somehow disrespected anyone here: would you have felt the same way if we had just integrated the bloody Nvidia vendor driver right into the Unraid OS distribution like we do other vendor-supplied drivers: RocketRaid, Realtek, Tehuti? It is not difficult to do this; actually it's quite easy. The problem is, this driver alone, along with support tools, adds 110MB to the download and expands to close to 400MB of root file system usage. Only a fraction of Unraid users require this, so why make everyone bear that cost? Same situation for all the other dr
    5 points
  37. Did a test with a Windows VM to see if there was a difference with the new partition alignment; total bytes written after 16 minutes (VM is idling doing nothing, not even connected to the internet): space_cache=v1, old alignment - 7.39GB; space_cache=v2, old alignment - 1.72GB; space_cache=v2, new alignment - 0.65GB. So that's encouraging, though I guess that unlike the v2 space cache, the new alignment might work better for some NVMe devices and not make much difference for others. Still worth testing IMHO, since for some it should also give better performance; for this t
    5 points
  38. I've been playing with the various btrfs mount options and possibly found one that appears to make a big difference, at least for now. While it doesn't look like a complete fix for me, it decreases writes by about 5 to 10 times. This option appears to work both for the docker image on my test server and, more encouragingly, also for the VM problem on my main server. It's done by remounting the cache with the nospace_cache option; from my understanding this is perfectly safe (though there could be a performance penalty) and it will go back to the default (using the space cache) at the next array re-s
    5 points
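     A minimal sketch of the remount described above, assuming the pool is mounted at the usual /mnt/cache (as noted, the option reverts to the default at the next array start):

         mount -o remount,nospace_cache /mnt/cache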
  39. For all those who are running Ryzen 3rd gen processors and using VMs: there seems to be a bug in QEMU 5.0 wherein Windows enters a BSOD with the error message KERNEL_SECURITY_FAILURE. Here is a link to the discussion on reddit: https://www.reddit.com/r/VFIO/comments/gf53o8/upgrading_to_qemu_5_broke_my_setup_windows_bsods/ The following MUST be added to the VM xml file to allow Windows to successfully boot. <qemu:commandline> <qemu:arg value='-cpu'/> <qemu:arg value='host,topoext=on,invtsc=on,hv-time,hv-relaxed,hv-vapic,hv-spinl
    5 points
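     One practical note on the workaround above: libvirt only accepts a <qemu:commandline> block if the domain XML declares the qemu namespace, so the edit looks roughly like this (VM name is a placeholder):

         virsh edit Windows10
         # the root element must include the namespace, e.g.:
         #   <domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>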
  40. And I should mention also that the multi cache pool feature really kicks ass ❤
    5 points
  41. Until now, every Unraid 6.x version relied on traditional polling of the server to update the GUI in real time. On the Dashboard there are multiple fields which are updated regularly. It is the task of the browser to initiate a poll request each time to obtain the new information and update the GUI accordingly. Polling puts load on both the browser and the server, and may consume more memory over time (depending on the browser). Starting with Unraid version 6.10, the traditional polling mechanism is replaced by a websocket event-driven model using Nchan.
    4 points
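     For the curious, Nchan is an Nginx module; a pub/sub channel boils down to a couple of location blocks along these lines (names are illustrative only, not Unraid's actual configuration):

         location = /pub { nchan_publisher;  nchan_channel_id dashboard; }
         location = /sub { nchan_subscriber; nchan_channel_id dashboard; }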
  42. Executing any kind of SMART operation increments both the number of I/Os to the device (in this case reads) and the number of sectors transferred (in this case sectors read). This is true whether the device is in standby or not. HOWEVER, I have a fix for this coming in 6.9.1.
    4 points
  44. To put a close to this thread, my issue was a dead GPU, not an issue with the software. I would just like to take this opportunity to publicly thank @ich777 for all the time and effort he put into working with me to try and figure out the issue I have been having. @ich777 spent a few days working with me to get to the root of the problem and went over and above what I ever could have expected. Thank you.
    4 points
  45. Will update the plugin and add a warning at the top to don't close the windows with the 'X' and wait for the 'Done' button.
    4 points
  46. Ok, let me be a little more clear. There is no publicly accessible official timeline. What limetech does with their internal development is kept private, for many reasons. My speculation is that the main reason is that the wrath of users over having no timeline is tiny compared to the wrath over multiple missed deadlines. In the distant past there were loose timelines issued, and the flak that ensued was rather spectacular, IIRC. Rather than getting beaten up over progress reports, it's easier for the team to stay focused internally and release when ready, rather than try to justify delays. When
    4 points
  47. The next iteration of 'multiple pools' is to generalize the unRAID array so that you can have multiple "unRAID array pools". Along with this, we will introduce the concept of a primary pool and a cache pool for a share. Then you could make different combinations, e.g., a btrfs primary pool with a single-device xfs cache. To have 'mover' move stuff around, you would reconfigure the primary/cache settings for a share. This will not get done for the 6.9 release, however.
    4 points
  48. Maybe this will clarify. In addition to the original cache pool named "cache" (2x500 btrfs raid1), I also have a cache pool that I have named "fast" (1x256 xfs). Each user share has an additional setting to select the cache pool for that user share. In this screenshot, I have selected my "fast" pool for the share named DVR, with Use cache set to Prefer so it can overflow.
    4 points