

Popular Content

Showing content with the highest reputation since 02/21/17 in Report Comments

  1. 21 points
    It appears that the docker images --digests --no-trunc command is showing, for whatever reason, the digest of the manifest list rather than the manifest itself for containers pushed as part of a manifest list (https://docs.docker.com/engine/reference/commandline/manifest/#create-and-push-a-manifest-list). I'm not sure if that's always been the case, or if it's the result of some recent change to the Docker Hub API. Also not sure if it's intentional or a bug. This causes an issue, since in DockerClient.php (/usr/local/emhttp/plugins/dynamix.docker.manager/include) the request made to get the comparison digest is:

    /**
     * Step 4: Get Docker-Content-Digest header from manifest file
     */
    $ch = getCurlHandle($manifestURL, 'HEAD');
    curl_setopt($ch, CURLOPT_HTTPHEADER, [
        'Accept: application/vnd.docker.distribution.manifest.v2+json',
        'Authorization: Bearer ' . $token
    ]);

    which retrieves information about the manifest itself, not the manifest list. So it ends up comparing the list digest as reported by the local docker commands to the individual manifest digests as retrieved from Docker Hub, which of course do not match. Changing the Accept header to the list MIME type ('application/vnd.docker.distribution.manifest.list.v2+json') causes it to no longer consistently report updates available for these containers. Doing that, however, reports updates for all containers that do not use manifest lists, since the call now falls back to a v1 manifest if the list is not available, and the digest for the v1 manifest doesn't match the digest for the v2 manifest. If the Accept header is instead changed to 'application/vnd.docker.distribution.manifest.list.v2+json,application/vnd.docker.distribution.manifest.v2+json', Docker Hub will fall back correctly to the v2 manifest, and the digests then match the local output both for containers using straight manifests and for those using manifest lists. Until Docker Hub inevitably makes another change.
    /**
     * Step 4: Get Docker-Content-Digest header from manifest file
     */
    $ch = getCurlHandle($manifestURL, 'HEAD');
    curl_setopt($ch, CURLOPT_HTTPHEADER, [
        'Accept: application/vnd.docker.distribution.manifest.list.v2+json,application/vnd.docker.distribution.manifest.v2+json',
        'Authorization: Bearer ' . $token
    ]);
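    As a sketch of why the combined Accept header works (plain Python with made-up digests and a made-up helper name, not the actual DockerClient.php code): the registry serves the first Accept media type it has for the tag, so listing the manifest-list type first with the plain v2 type as a fallback yields a digest that matches the local docker output in both cases.

```python
# Hypothetical model of registry content negotiation. The media-type
# strings are real; the digests and pick_digest() helper are invented
# for illustration only.
LIST_V2 = "application/vnd.docker.distribution.manifest.list.v2+json"
MANIFEST_V2 = "application/vnd.docker.distribution.manifest.v2+json"

def pick_digest(available, accept):
    """Return the digest for the first Accept media type the registry
    can serve, mirroring the fallback behaviour described above."""
    for media_type in accept:
        if media_type in available:
            return available[media_type]
    return None

# Image pushed as part of a manifest list: both representations exist.
multi_arch = {LIST_V2: "sha256:list-digest", MANIFEST_V2: "sha256:v2-digest"}
# Plain image: only a v2 manifest exists.
single_arch = {MANIFEST_V2: "sha256:plain-digest"}

accept = [LIST_V2, MANIFEST_V2]  # list first, plain v2 as the fallback
print(pick_digest(multi_arch, accept))   # sha256:list-digest
print(pick_digest(single_arch, accept))  # sha256:plain-digest
```

    With only the v2 type in the Accept header, the multi-arch image resolves to sha256:v2-digest instead, which is exactly the mismatch against the locally reported list digest.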
  2. 13 points
    The corruption occurred as a result of failing a read-ahead I/O operation with "BLK_STS_IOERR" status. In the Linux block layer each READ or WRITE can have various modifier bits set. In the case of a read-ahead you get READ|REQ_RAHEAD, which tells the I/O driver this is a read-ahead. In this case, if there are insufficient resources at the time the request is received, the driver is permitted to terminate the operation with BLK_STS_IOERR status. Here is an example in the Linux md/raid5 driver. In the case of Unraid it can definitely happen under heavy load that a read-ahead comes along and there are no 'stripe buffers' immediately available. In this case, instead of making the calling process wait, it terminates the I/O. It has worked this way for years. When this problem first happened there were conflicting reports of the config in which it happened. My first thought was an issue in the user share file system. Eventually ruled that out, and the next thought was cache vs. array. Some reports seemed to indicate it happened with all databases on cache - but I think those reports were mistaken for various reasons. Ultimately decided the issue had to be with the md/unraid driver. Our big problem was that we could not reproduce the issue, but others seemed to be able to reproduce it with ease. Honestly, thinking failing read-aheads could be the issue was a "hunch" - it was either that or some logic in the scheduler that merged I/O's incorrectly (there were kernel bugs related to this with some pretty extensive patches, and I thought maybe the developer missed a corner case - this is why I added a config setting for which scheduler to use). This resulted in a release with those 'md_restrict' flags to determine if one of those was the culprit, and what-do-you-know, not failing read-aheads makes the issue go away. What I suspect is that this is a bug in SQLite - I think SQLite is using direct I/O (bypassing the page cache) and issuing its own read-aheads, and their logic to handle a failing read-ahead is broken.
But I did not follow that rabbit hole - too many other problems to work on
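    The suspected bug can be pictured with a toy example (plain Python with an invented Device class - nothing here is SQLite's or the driver's actual code): a read-ahead is only a hint, so a failure there should be retried as a normal read rather than surfaced as bad data.

```python
# Hypothetical device whose read-aheads can fail under load, as in the
# BLK_STS_IOERR case described above; normal reads still succeed.
class Device:
    def __init__(self, data, starve_readahead=False):
        self.data = data
        self.starve_readahead = starve_readahead

    def read(self, offset, length, readahead=False):
        if readahead and self.starve_readahead:
            raise IOError("BLK_STS_IOERR: no stripe buffers available")
        return self.data[offset:offset + length]

def safe_read(dev, offset, length):
    """Correct handling: a failed read-ahead falls back to a plain read."""
    try:
        return dev.read(offset, length, readahead=True)
    except IOError:
        return dev.read(offset, length, readahead=False)

dev = Device(b"unraid-block-data", starve_readahead=True)
print(safe_read(dev, 0, 6))  # b'unraid', despite every read-ahead failing
```

    A reader that instead treated the IOError as the data itself being unreadable (or kept a stale buffer) would show exactly the kind of intermittent corruption reported in this thread.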
  3. 11 points
    Thanks for the fix @bluemonster ! Here is a bash file that will automatically implement the fix in 6.7.2 (and probably earlier, although I'm not sure how much earlier): https://gist.github.com/ljm42/74800562e59639f0fe1b8d9c317e07ab It is meant to be run using the User Scripts plugin, although that isn't required. Note that you need to re-run the script after every reboot. Remember to uninstall the script after you upgrade to Unraid 6.8. More details in the script comments.
  4. 9 points
    https://github.com/electrified/asus-wmi-sensors <- this would be nice to have in 6.9 for ASUS people. It directly reads the WMI interface that ASUS has moved to and displays all sensors properly. (Supposedly.)
  5. 9 points
    Preparing another release. Mondays are always very busy but will try to get it out asap.
  6. 8 points
    To expand on my quoted text in the OP, this beta version brings forth more improvements to using a folder for the docker system instead of an image. The notable difference is that the GUI now supports setting a folder directly. The key to using this, however, is that while you can choose the appropriate share via the GUI's dropdown browser, you must enter a unique (and non-existent) subfolder for the system to realize you want to create a folder (and include a trailing slash). If you simply pick an already existing folder, the system will automatically assume that you want to create an image. Hopefully for the next release this behaviour will be modified and/or made clearer within the docker GUI.
  7. 8 points
    There is a Samba security release being published tomorrow. We will update that component and then publish 6.8 stable release.
  8. 8 points
    Why are you running something so critical to yourself on pre-release software? Seems a little reckless to me...
  9. 8 points
    The issue is that writes originating from a fast source (as opposed to a slow source such as a 1Gbit network) completely consume all available internal "stripe buffers" used by md/unraid to implement array writes. When reads come in they get starved of the resources needed to execute efficiently. The method used to limit the number of I/O's directed to a specific md device in the Linux kernel no longer works in newer kernels, hence I've had to implement a different throttling mechanism. Changes to the md/unraid driver require exhaustive testing. All functional tests pass; however, driver bugs are notorious for only showing up under specific workloads due to different timing. As I write this I'm 95% confident there are no new bugs. Since this will be a stable 'patch' release, I need to finish my testing.
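    The throttling idea can be sketched like this (plain Python with invented buffer counts, not the md/unraid driver code): cap how many stripe buffers writers may hold in flight, so a fast writer blocks before it can starve readers.

```python
import threading

# Hypothetical resource counts, for illustration only.
TOTAL_STRIPE_BUFFERS = 8
WRITE_CAP = 6  # writers may never hold more than this many buffers

write_slots = threading.BoundedSemaphore(WRITE_CAP)
in_flight = 0
peak = 0
lock = threading.Lock()

def write_stripe():
    """A write must acquire a slot first; it blocks once the cap is hit."""
    global in_flight, peak
    with write_slots:
        with lock:
            in_flight += 1
            peak = max(peak, in_flight)
        with lock:
            in_flight -= 1

threads = [threading.Thread(target=write_stripe) for _ in range(32)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# However many writes were submitted, at most WRITE_CAP buffers were
# ever held at once; the remainder stay free to service reads.
print(peak <= WRITE_CAP)  # True
```

    The design point is simply that the cap is below the total buffer count, so reads always have guaranteed headroom regardless of write pressure.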
  10. 7 points
    I made a video on what the new 6.7.0 looks like, etc. There is a mistake in this video where I talk about the webgui "VM manager: remove and rebuild USB controllers" change: what I thought was new turned out to be available in 6.6.6 as well, so it wasn't! The video also shows how to upgrade to the rc for those who don't know, and how to downgrade again afterwards if need be. https://www.youtube.com/watch?v=qRD1qVqcyB8&feature=youtu.be
  11. 7 points
    This is from my design file, so it differs a little bit from the implemented solution, but it should give you a general feel for how it looks. (Header/menu and bottom statusbar are unchanged aside from icons.) Edit: So I should remember to refresh before posting. Anyway, you might not have seen it yet, but the logo on the "server description" tile can be edited, and we have included a selection of popular cases to start you off!
  12. 6 points
    Added several options for dealing with this issue in 6.9.0-beta24.
  13. 6 points
    Also could use 'space_cache=v2'. Upcoming -beta23 has these changes to address this issue: set the 'noatime' option when mounting loopback file systems; include the 'space_cache=v2' option when mounting btrfs file systems; default partition 1 start sector aligned on a 1MiB boundary for non-rotational storage. The last change will require wiping the partition structure on existing SSD devices first to make use of it.
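    The arithmetic behind the new default start sector, assuming conventional 512-byte sectors (the function name is mine, not Unraid's):

```python
SECTOR_SIZE = 512
ALIGNMENT = 1024 * 1024  # 1 MiB

def aligned_start_sector(min_start_sector=63):
    """Smallest sector at or after min_start_sector that falls on a
    1 MiB boundary (63 was a common legacy partition start)."""
    sectors_per_mib = ALIGNMENT // SECTOR_SIZE  # 2048
    # Ceiling division, then scale back up to sectors.
    return -(-min_start_sector // sectors_per_mib) * sectors_per_mib

print(aligned_start_sector())      # 2048, i.e. byte offset 1048576
print(aligned_start_sector(2048))  # already aligned, stays 2048
```

    A 1MiB boundary is a multiple of every common flash page and erase-block size, so with this alignment filesystem writes are less likely to straddle device pages - consistent with the reduced write amplification reported below.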
  15. 6 points
    Correct. A pool is simply a collection of storage devices. If you assign multiple devices to a pool it will be formatted using the btrfs 'raid1' profile, which means there is protection against any single device failure in the pool. In the future we plan on letting you create multiple "unRAID" pools, where each pool would have its own parity/parity2 and collection of storage volumes - but that feature did not make this release. edit: fixed typos
  16. 6 points
    Maybe sooner if you can bribe him with small fish, crabs, and shrimp.
  17. 6 points
    Just an FYI for this forum: It has been 16 days since I have had any corruption with Plex. That only happened with 6.6.7. With anything newer, it would corrupt in less than a day. 6.8.0-rc4 and rc5 have been rock stable with the changes that were made. I'm glad that I stuck with the testing, and was able to work so close with the Unraid team. 🙂
  18. 6 points
    We're actively working on this for 6.9
  19. 6 points
    Today's update to CA Auto Update will automatically apply the fix for this issue on affected systems (whether or not the plugin is even enabled). You will, though, have to check for updates manually once to clear out the old update-available status. If you are running @ljm42's patch script, you can safely remove it, as Auto Update will not install the patch once 6.8+ is released.
  20. 6 points
    It really is time for @limetech to incorporate the UD functionality. I do not have the time nor the interest in updating UD for the new GUI.
  21. 5 points
    Did a test with a Windows VM to see if there was a difference with the new partition alignment. Total bytes written after 16 minutes (VM is idling doing nothing, not even connected to the internet):
    space_cache=v1, old alignment - 7.39GB
    space_cache=v2, old alignment - 1.72GB
    space_cache=v2, new alignment - 0.65GB
    So that's encouraging, though I guess that unlike the v2 space cache, the new alignment might work better for some NVMe devices and not make much difference for others. Still worth testing IMHO, since for some it should also give better performance. For this test I used an Intel 600p.
  22. 5 points
    I've been playing with the various btrfs mount options and may have found one that makes a big difference, at least for now. While it doesn't look like a complete fix for me, it decreases writes about 5 to 10 times. This option appears to work both for the docker image on my test server and, more encouragingly, also for the VM problem on my main server. It's done by remounting the cache with the nospace_cache option; from my understanding this is perfectly safe (though there could be a performance penalty), and it will go back to the default (using space cache) at the next array re-start. If anyone else wants to try it, just type this: mount -o remount -o nospace_cache /mnt/cache. Will let it run for 24 hours and check device stats tomorrow; on average my server does around 2TB writes per day, current value is: But like I mentioned it's not a complete fix - I'm still seeing constant writes to cache, but where before it was hovering around 40/60MB/s it's now around 3/10MB/s, so I'll take it for now:
  23. 5 points
    For all those who are running Ryzen 3rd Gen processors and using VMs: there seems to be a bug in QEMU 5.0 wherein Windows enters a BSOD with the error message KERNEL_SECURITY_FAILURE. Here is a link to the discussion on reddit: https://www.reddit.com/r/VFIO/comments/gf53o8/upgrading_to_qemu_5_broke_my_setup_windows_bsods/ The following MUST be added to the VM xml file to allow Windows to successfully boot.

    <qemu:commandline>
      <qemu:arg value='-cpu'/>
      <qemu:arg value='host,topoext=on,invtsc=on,hv-time,hv-relaxed,hv-vapic,hv-spinlocks=0x1fff,hv-vpindex,hv-synic,hv-stimer,hv-reset,hv-frequencies,host-cache-info=on,l3-cache=off,-amd-stibp'/>
    </qemu:commandline>
  24. 5 points
    And I should mention also that the multi cache pool feature really kicks ass ❤
  25. 5 points
    I've changed my VM's NIC also from 'virtio' to <model type='virtio-net'/> and since then no more "unexpected GSO type" in my logs. I've 3 VM's (2 Win + 1 CentOS) running and 3 dockers on br0 with 5.5 kernel.
  27. 5 points
    Fixing, adding, updating & upgrading everything people reported plus what ranked high on our lists - LimeTech/Unraid has been hard at work on a difficult challenge. What a great bunch of people & community this company is. Let's hope they get it all worked out soon so they can take a well-earned, relaxed break to enjoy the upcoming Holidays with their Family & Friends, with peace, love and no worries. Thank You and Happy Holidays!
  28. 5 points
    I just connected to the server, and checked it again. No corruption of the databases. I had the server up all weekend (minus a short period where I shut down the entire system to move it to another room). No issues at all. We watched a couple of TV shows from Plex, and I saw that several new shows and files were added over the weekend. Yes....md_restrict 1 is where it is set right now. The system has not stayed stable for this long since the update from 6.6.7.
  31. 5 points
    We may have got to the bottom of this. Please try new version 6.7.3-rc3 available on next branch.
  33. 5 points
    You forget: RC does not mean "Release Candidate", it means "Rickrolling Complainers" 😎
  34. 5 points
    In the 6.6.4 release we 'reverted' the cron package back to the one that was installed with the 6.5.x releases in order to solve another bug. However, the way the cron daemon is started in that package is different, and the problem in 6.6.4 is that crond is never started. We will publish a fix for this asap, and in the meantime have taken down the 6.6.4 release. If you're running 6.6.4 you can type this command in a terminal window, and put it in your 'go' file as a workaround: /usr/sbin/crond -l notice
  35. 5 points
    Waiting for that promised proper Login page so that a Password Manager can be used...
  36. 4 points
    Does it always work fine with another browser?
  37. 4 points
    Here's the problem. As soon as we publish a release with Nvidia/AMD GPU drivers installed, any existing VM which uses GPU pass through of an Nvidia or AMD GPU may stop working. Users must use the new functionality of the Tools/System Devices page to select GPU devices to "hide" from Linux kernel upon boot - this prevents the kernel from installing the driver(s) and initializing the card. Since there are far more people passing through GPU vs using GPU for transcoding in a Docker container, we thought it would be polite to give those people an opportunity to prepare first in 6.9 release, and then we would add the GPU drivers to the 6.10 release. We can make 6.10 a "mini release" which has just the GPU drivers. Anyway, this is our current plan. Look about 10 posts up.
  38. 4 points
    False. BTRFS is the default file system for the cache drive because the system allows you to easily expand from a single cache drive to a multiple-device pool. If you're only running a single cache drive (and have no immediate plans to upgrade to a multi-device pool), XFS is the "recommended" filesystem by many users (including myself).
    The docker image required CoW because docker required it. Think of the image as akin to mounting an ISO image on your Windows box. The image was always formatted as BTRFS, regardless of the underlying filesystem. IE: you can store that image file on XFS, BTRFS, ReiserFS, or via UD ZFS, NTFS etc.
    More or less true. As said, you've always been able to have an XFS cache drive with the image stored on it. The reason for the slightly different mounting options for an image is to reduce the unnecessary amount of writes to the docker.img file. There won't be a big difference (AFAIK) if you choose a docker image formatted as BTRFS or XFS. But, as I understand it, any write to a loopback (ie: image file) is always going to incur extra I/O to the underlying filesystem by its very nature. Using a folder instead of an image completely removes those excess writes. You can choose to store the folder on either a BTRFS device or an XFS device. The system will consume the same amount of space on either, because docker via overlay2 will properly handle duplicated layers etc. between containers when it's on an XFS device.
    BTRFS as the docker.img file does have some problems. If it fills up to 100%, then it doesn't recover very gracefully, and usually requires deleting the image and then recreating it and reinstalling your containers (a quick and painless procedure). IMO, choosing a folder for the storage lowers my aggravation level in the forum because, by its nature, there is no real limit to the size that it takes (up to the size of the cache drive), so the recurring issues of "image filling up" for some users will disappear.
    (And as a side note, this is how the system was originally designed in the very early 6.0 betas.) There are just a couple of caveats with the folder method, which are detailed in the OP (my quoted text):
    - Cache-only share. Simply referencing /mnt/cache/someShare/someFolder/ within the GUI isn't good enough. Ideally within its own separate share (not necessary, but decreases the possibility of ever running new perms against the share).
    - The limitations of this first revision of the GUI supporting folders don't make how you do it exactly intuitive. Will get improved by the next rev though.
    - Get over the fact that you can't view or modify any of the files (not that you ever need to) within the folder via SMB. Just don't export it so that it doesn't drive your OCD nuts.
    There are also still some glitches in the GUI when you use the folder method. Notably, while you can stop the docker service, you cannot re-enable it via the GUI (Settings - Docker). (You have to edit the docker.cfg file and re-enable the service there, and then stop/start the array.)
  39. 4 points
    Maybe this will clarify. In addition to the original cache pool named "cache" (2x500 btrfs raid1), I also have a cache pool that I have named "fast" (1x256 xfs). Each user share has an additional setting to select the cache pool for that user share. In this screenshot, I have selected my "fast" pool for the share named DVR with Use cache as Prefer so it can overflow.
  40. 4 points
    Thanks! Luckily we've always been a remote team, but obviously we are all still greatly affected by the pandemic. We are very grateful to the Unraid Community for your patience and support in the last 3 months or so.
  41. 4 points
    A couple of comments: Community Applications HAS to be up to date to install languages. Versions of CA prior to 2020.05.12 will not even load on this release. As of this writing, the current version of CA is 2020.06.13a. See also here This change necessitates a re-download of the icons for all your installed docker applications. A delay when initially loading either the dashboard or the docker tab while this happens is to be expected prior to the containers showing up.
  42. 4 points
    Hey!.... I can wait. You all are awesome to the max.
  43. 4 points
    Worked the last 4 days to resolve this, so sure, it should be Urgent - but as soon as the kernel change is reverted it's back to Minor. Those classifications are something like the Pirate Code anyway.
  45. 4 points
    There was a kernel driver internal API change a few releases back that I missed, and md/unraid was doing something that's no longer valid. I noticed this, put a fix in the upcoming 6.8, and gave it to someone who could reproduce the corruption. It has been running far longer than it ever did before, so I think it is safe for wider testing. Back-ported the change to 6.7.3-rc3 and also updated to the latest 4.19 kernel patch release, because, why not?
  46. 4 points
    Forgive the dumb question, I don't actually see where I can upvote this, I can see others that have upvoted, but no option in there for me to do the same, other than to 'like' it. Edit - found it. Hovering over the like button shows an upvote button. Not exactly intuitive but all good.
  47. 4 points
    Perhaps the options reading something like: - update ready (instead of ‘updated’) - apply update (instead of ‘update ready’) would be clearer and less likely to lead to confusion (and not take up more space)?
  48. 4 points
    We could reproduce this specific issue and verify fixed, but the change made might solve some other performance issues as well.
  49. 4 points
    The issue is that FUSE crashes, which causes 'shfs' to crash, which causes 'nfsd' to crash. Here is the code in FUSE (fuse.c) where the crash is occurring:

    static struct node *get_node(struct fuse *f, fuse_ino_t nodeid)
    {
        struct node *node = get_node_nocheck(f, nodeid);
        if (!node) {
            fprintf(stderr, "fuse internal error: node %llu not found\n",
                    (unsigned long long) nodeid);
            abort();
        }
        return node;
    }

    This function is being called with 'nodeid' 0, which is straight out of the NFS file handle. That abort() is what ultimately results in bringing down /mnt/user and, along with it, nfsd. This is almost certainly due to a kernel change somewhere between 4.14.49 and 4.18. Trying to isolate ...
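    The failure mode can be mimicked in a few lines (plain Python with an invented node table - not fuse.c itself, and not necessarily the fix that shipped): the defensive direction is to treat an unknown or zero nodeid from an NFS handle as a stale handle instead of aborting the daemon.

```python
# Hypothetical nodeid -> node map standing in for FUSE's node table.
node_table = {1: "/", 2: "/mnt/user"}

def get_node_abort(nodeid):
    """Mirrors the C code above: an unknown nodeid kills the process."""
    if nodeid not in node_table:
        raise SystemExit(f"fuse internal error: node {nodeid} not found")
    return node_table[nodeid]

def get_node_checked(nodeid):
    """Defensive variant: None signals the caller to return ESTALE to
    the NFS client rather than bringing down /mnt/user."""
    return node_table.get(nodeid)

print(get_node_checked(0))  # None - stale handle, daemon keeps running
print(get_node_checked(1))  # /
```

    With the abort() variant, a single nodeid-0 handle from nfsd takes out the whole user-share filesystem, which matches the cascade described above.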
  50. 4 points
    Perhaps have a second post in the locked thread that asks people to "Like" it if they're not experiencing any issues. That way it's possible to track the number of forum posters who are not experiencing issues, garnering feedback without cluttering the forums. I'm sure there are many people who wait until a couple of people report that the latest testing release is stable before they install it themselves.