Leaderboard


Popular Content

Showing content with the highest reputation since 02/21/17 in Report Comments

  1. 21 points
    It appears that the docker images --digests --no-trunc command is showing, for whatever reason, the digest of the manifest list rather than the manifest itself for containers pushed as part of a manifest list (https://docs.docker.com/engine/reference/commandline/manifest/#create-and-push-a-manifest-list). I'm not sure if that's always been the case, or if it's the result of some recent change to the Docker Hub API. I'm also not sure whether it's intentional or a bug. This causes an issue because in DockerClient.php (/usr/local/emhttp/plugins/dynamix.docker.manager/include), the request made to get the comparison digest is:

        /**
         * Step 4: Get Docker-Content-Digest header from manifest file
         */
        $ch = getCurlHandle($manifestURL, 'HEAD');
        curl_setopt($ch, CURLOPT_HTTPHEADER, [
            'Accept: application/vnd.docker.distribution.manifest.v2+json',
            'Authorization: Bearer ' . $token
        ]);

    which retrieves information about the manifest itself, not the manifest list. So it ends up comparing the list digest as reported by the local docker commands to the individual manifest digests as retrieved from Docker Hub, which of course do not match. Changing the Accept header to the list MIME type, 'application/vnd.docker.distribution.manifest.list.v2+json', stops it from consistently reporting updates available for these containers. Doing this, however, reports updates for all containers that do not use manifest lists, since the call now falls back to a v1 manifest if the list is not available, and the digest for the v1 manifest doesn't match the digest for the v2 manifest. If the Accept header is instead changed to 'application/vnd.docker.distribution.manifest.list.v2+json,application/vnd.docker.distribution.manifest.v2+json', Docker Hub falls back correctly to the v2 manifest, and the digests now match the local output both for containers using plain manifests and for those using manifest lists. Until Docker Hub inevitably makes another change.

        /**
         * Step 4: Get Docker-Content-Digest header from manifest file
         */
        $ch = getCurlHandle($manifestURL, 'HEAD');
        curl_setopt($ch, CURLOPT_HTTPHEADER, [
            'Accept: application/vnd.docker.distribution.manifest.list.v2+json,application/vnd.docker.distribution.manifest.v2+json',
            'Authorization: Bearer ' . $token
        ]);
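    A quick way to sanity-check the Accept-header behaviour outside the webgui is to ask Docker Hub for the digest directly with curl. This is a minimal sketch, assuming a public image (library/alpine:latest, chosen purely for illustration), an anonymous pull token from auth.docker.io, and that curl and jq are available:

        # Fetch an anonymous pull token for the example repository
        TOKEN=$(curl -s "https://auth.docker.io/token?service=registry.docker.io&scope=repository:library/alpine:pull" | jq -r .token)

        # Request the manifest list first, falling back to the v2 manifest, and read the digest header
        curl -sI \
          -H "Accept: application/vnd.docker.distribution.manifest.list.v2+json,application/vnd.docker.distribution.manifest.v2+json" \
          -H "Authorization: Bearer $TOKEN" \
          "https://registry-1.docker.io/v2/library/alpine/manifests/latest" | grep -i docker-content-digest

    With that combined Accept header, the digest returned should match what docker images --digests --no-trunc reports locally for the same tag.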
  2. 12 points
    The corruption occurred as a result of failing a read-ahead I/O operation with "BLK_STS_IOERR" status. In the Linux block layer each READ or WRITE can have various modifier bits set. In the case of a read-ahead you get READ|REQ_RAHEAD, which tells the I/O driver this is a read-ahead. In this case, if there are insufficient resources at the time this request is received, the driver is permitted to terminate the operation with BLK_STS_IOERR status. There is an example of this in the Linux md/raid5 driver. In the case of Unraid it can definitely happen under heavy load that a read-ahead comes along and there are no 'stripe buffers' immediately available. In this case, instead of making the calling process wait, it terminates the I/O. It has worked this way for years.

    When this problem first happened there were conflicting reports of the config in which it happened. My first thought was an issue in the user share file system. Eventually I ruled that out, and my next thought was cache vs. array. Some reports seemed to indicate it happened with all databases on cache - but I think those reports were mistaken for various reasons. Ultimately I decided the issue had to be with the md/unraid driver. Our big problem was that we could not reproduce the issue, but others seemed to be able to reproduce it with ease. Honestly, thinking failing read-aheads could be the issue was a "hunch" - it was either that or some logic in the scheduler that merged I/Os incorrectly (there were kernel bugs related to this with some pretty extensive patches, and I thought maybe the developer missed a corner case - this is why I added a config setting for which scheduler to use). This resulted in a release with those 'md_restrict' flags to determine if one of those was the culprit, and what-do-you-know, not failing read-aheads makes the issue go away.

    What I suspect is that this is a bug in SQLite - I think SQLite is using direct I/O (bypassing the page cache) and issuing its own read-aheads, and its logic for handling a failing read-ahead is broken. But I did not follow that rabbit hole - too many other problems to work on.
  3. 11 points
    Thanks for the fix @bluemonster ! Here is a bash file that will automatically implement the fix in 6.7.2 (and probably earlier, although I'm not sure how much earlier): https://gist.github.com/ljm42/74800562e59639f0fe1b8d9c317e07ab It is meant to be run using the User Scripts plugin, although that isn't required. Note that you need to re-run the script after every reboot. Remember to uninstall the script after you upgrade to Unraid 6.8 More details in the script comments.
  4. 9 points
    Preparing another release. Mondays are always very busy but will try to get it out asap.
  5. 8 points
    There is a Samba security release being published tomorrow. We will update that component and then publish 6.8 stable release.
  6. 8 points
    Why are you running something so critical to yourself on pre-release software? Seems a little reckless to me...
  7. 8 points
    The issue is that writes originating from a fast source (as opposed to a slow source such as a 1Gbit network) completely consume all available internal "stripe buffers" used by md/unraid to implement array writes. When reads come in, they are starved of the resources needed to execute efficiently. The method used to limit the number of I/Os directed to a specific md device in the Linux kernel no longer works in newer kernels, hence I've had to implement a different throttling mechanism. Changes to the md/unraid driver require exhaustive testing. All functional tests pass; however, driver bugs are notorious for only showing up under specific workloads due to different timing. As I write this I'm 95% confident there are no new bugs. Since this will be a stable 'patch' release, I need to finish my testing.
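    For context, the size of that stripe buffer pool is exposed as a disk tunable. Purely as an illustrative sketch, assuming the md_num_stripes tunable name from Settings -> Disk Settings and the same mdcmd set form used for md_restrict elsewhere on this page (verify the exact name on your release; the settings page is the supported way to change it):

        # Assumed tunable name: raise the stripe buffer pool so reads are less likely
        # to be starved during sustained fast writes
        mdcmd set md_num_stripes 4096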
  8. 7 points
    I made a video on what the new 6.7.0 looks like, etc. There is a mistake in the video where I talk about the webgui change "VM manager: remove and rebuild USB controllers" - what I thought it was turned out, I found out afterwards, to be available in 6.6.6 as well, so that wasn't it!! The video also shows how to upgrade to the rc for those who don't know, and how to downgrade again afterwards if need be. https://www.youtube.com/watch?v=qRD1qVqcyB8&feature=youtu.be
  9. 7 points
    This is from my design file, so it differs a little bit from the implemented solution, but it should give you a general feel for how it looks. (The header/menu and bottom status bar are unchanged aside from icons.) Edit: So I should remember to refresh before posting. Anyway, you might not have seen it yet, but the logo on the "server description" tile can be edited, and we have included a selection of popular cases to start you off!
  10. 6 points
    Just an FYI for this forum: it has been 16 days since I have had any corruption with Plex. Previously, that only happened with 6.6.7; with anything newer, it would corrupt in less than a day. 6.8.0-rc4 and rc5 have been rock stable with the changes that were made. I'm glad that I stuck with the testing and was able to work so closely with the Unraid team. 🙂
  11. 6 points
    We're actively working on this for 6.9
  12. 6 points
    Today's update to CA Auto Update will automatically apply the fix for this issue on affected systems (whether or not the plugin is even enabled). You will, though, have to check for updates manually once to clear out the old "update available" status. If you are running @ljm42's patch script, you can safely remove it, as Auto Update will not install the patch once 6.8+ is released.
  13. 6 points
    It really is time for @limetech to incorporate the UD functionality. I have neither the time nor the interest to update UD for the new GUI.
  14. 5 points
    LimeTech - Unraid has been hard at work, up to the difficult challenge of fixing, adding, updating & upgrading everything people reported plus what ranked high on our lists. What a great bunch of people & community this company is. Let's hope they get it all worked out soon so they can take a well-earned, relaxed break and enjoy the upcoming holidays with their family & friends with peace, love and no worries. Thank you and happy holidays!
  15. 5 points
    I just connected to the server, and checked it again. No corruption of the databases. I had the server up all weekend (minus a short period where I shut down the entire system to move it to another room). No issues at all. We watched a couple of TV shows from Plex, and I saw that several new shows and files were added over the weekend. Yes....md_restrict 1 is where it is set right now. The system has not stayed stable for this long since the update from 6.6.7.
  16. 5 points
  17. 5 points
  18. 5 points
    We may have got to the bottom of this. Please try the new version 6.7.3-rc3, available on the next branch.
  19. 5 points
  20. 5 points
    You forget: RC does not mean "Release Candidate", it means "Rickrolling Complainers" 😎
  21. 5 points
    In the 6.6.4 release we 'reverted' the cron package back to the one that was installed with 6.5.x releases in order to solve another bug. However, the way the cron daemon is started in that package is different, and the problem in 6.6.4 is that crond is never started. We will publish a fix for this asap, and in the meantime have taken down the 6.6.4 release. If you're running 6.6.4, you can type this command in a terminal window, and put it in your 'go' file, as a workaround:

        /usr/sbin/crond -l notice
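    Spelled out, a minimal sketch of applying the workaround both for the current boot and persistently, assuming the usual Unraid location of the go file at /boot/config/go:

        # Start crond now for the running session
        /usr/sbin/crond -l notice
        # Append the same command to the go file so it also runs on every boot
        # (remove this line again once the fixed release is installed)
        echo '/usr/sbin/crond -l notice' >> /boot/config/go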
  22. 5 points
    Waiting for that promised proper Login page so that a Password Manager can be used...
  23. 4 points
    Hey!.... I can wait. You all are awesome to the max.
  24. 4 points
    I worked the last 4 days to resolve this, so sure, it should be Urgent, but as soon as the kernel is reverted it's back to minor - those classifications are something like the Pirate Code anyway.
  25. 4 points
  26. 4 points
    The problem is resolved in 6.8. I guess the RC release isn't too far away.
  27. 4 points
    Stop stealing movies then until 6.8 is out.
  28. 4 points
    There was a kernel driver internal API change a few releases back that I missed, and md/unraid was doing something that's not valid now. I noticed this, put a fix in the upcoming 6.8, and gave it to someone who could reproduce the corruption. It has been running far longer than it ever did before, so I think it is safe for wider testing. I back-ported the change to 6.7.3-rc3 and also updated to the latest 4.19 kernel patch release, because, why not?
  29. 4 points
    Forgive the dumb question, I don't actually see where I can upvote this, I can see others that have upvoted, but no option in there for me to do the same, other than to 'like' it. Edit - found it. Hovering over the like button shows an upvote button. Not exactly intuitive but all good.
  30. 4 points
    We'll include QEMU 4.x in Unraid 6.8
  31. 4 points
    Perhaps the options reading something like:
    - update ready (instead of 'updated')
    - apply update (instead of 'update ready')
    would be clearer and less likely to lead to confusion (and not take up more space)?
  32. 4 points
    We could reproduce this specific issue and verify it is fixed, but the change made might solve some other performance issues as well.
  33. 4 points
    Having just looked at CA for the first time in a while (my server is running fine and I don't fiddle with it much), here's what I said to Squid about all the new CA icons: I took a look at SpaceInvaderOne's video review of 6.7rc1 and I'll repeat my thoughts. The icon shapes are nice, but they're nothing but a sea of white. Again, I get that this is the latest trend in design, but it is, in my opinion, a terrible one. There is considerably more cognitive load in mentally parsing the shapes when there is no color to help out. This is especially tough on those of us whose eyesight isn't quite as good as it used to be.

    There is a reason why modern IDEs all do syntax highlighting. It is an absolute God-send in comparison to the dark ages of working on a monochrome green or amber monitor! Just try disabling syntax highlighting in your modern IDE and see how hard that makes things - there's no good reason to go back.

    Again, I get that you're just following the "modern" trend, and I have tremendous appreciation for the incredible amount of work that goes into keeping the system updated with all the latest security patches and all the new features you're bringing to the table, but this is one trend I wouldn't mind missing out on. I'm sure that there will be a number of security patches in the 6.7 release, and I'd be remiss in skipping it just because of the UI design, but, based on what I've seen, I'd be tempted.

    And, yes, I really hate that my Android phone shows all my notification icons in white against a black background. When color was allowed there, I could tell at a glance if I had an orange text notification icon - now I have to analyze each one to see if it's the "new text message" shaped white square among the sea of other white, essentially square, icons. </rant>
  34. 4 points
    I have a suggestion about the new dashboard. I would love to be able to rearrange the tiles, and turn them on or off completely.
  35. 4 points
    The issue is that FUSE crashes, which causes 'shfs' to crash, which causes 'nfsd' to crash. Here is the code in FUSE (fuse.c) where the crash is occurring:

        static struct node *get_node(struct fuse *f, fuse_ino_t nodeid)
        {
            struct node *node = get_node_nocheck(f, nodeid);
            if (!node) {
                fprintf(stderr, "fuse internal error: node %llu not found\n",
                        (unsigned long long) nodeid);
                abort();
            }
            return node;
        }

    This function is being called with 'nodeid' 0, which is straight out of the NFS file handle. That abort() is what ultimately results in bringing down /mnt/user and, along with it, nfsd. This is almost certainly due to a kernel change somewhere between 4.14.49 and 4.18. Trying to isolate ...
  36. 4 points
    Perhaps have a second post in the locked thread that asks people to "Like" it if they're not experiencing any issues. That way it's possible to track the number of forum posters who are not experiencing issues, garnering feedback without cluttering the forums. I'm sure there are many people who wait until a couple of people report that the latest testing release is stable before they install it themselves.
  37. 3 points
    This situation can be quite confusing. It is not necessarily a VM- or Docker-specific issue; it seems to relate to static ip addresses in VMs and Dockers. Everything appears to work, but the logging is extreme. I ran into this situation when I was testing the 6.8 beta. I was able to get around it by not using static ip addresses in Dockers. Others have been able to solve the issue with an added NIC or by setting up VLANs.

    When I first ran into this situation, I did a little research, and the error comes from 'tun'. The developers implemented the log message because they felt that the network error that causes it was important and the underlying error should be resolved, rather than just being ignored by turning off the logging. This issue came up for me on a specific version of the 6.8 beta. LT is researching this issue and what changed between the version that worked for me and the version where I ran into the problem. LT will eventually get it resolved. Just turning off the logging is probably not the right answer. Be patient while they work on it. They are not ignoring it! Don't get frustrated if LT does not respond to every post or PM. I'd rather they work on a solution and not spend a lot of time discussing the issue over and over.

    There are several options if you have this issue:
    - Do not use this release candidate and wait for a resolution.
    - Use another NIC for Dockers or VMs so they are separated.
    - Set up VLANs to isolate Dockers and VMs.
    - Remove static ip addresses from Dockers and/or VMs until the error logging stops.
  38. 3 points
    This is a good example of a comment that was probably not intended to be rude but comes across that way. The confusion resulted because you didn't specify which issue you were asking about. Please note: I am going to delete this and a few of the following posts. Please be more mindful in the future.
  39. 3 points
    We confirmed libvirt 5.9.0 has this bug but libvirt 5.8.0 does not. We are producing a 6.8-rc7 that reverts libvirt to 5.8.0 and also updates Intel Microcode (again).
  40. 3 points
    How has the testing gone over the weekend? Is this the magic setting? mdcmd set md_restrict 1
  41. 3 points
    It has been about 3 hours, and there is no corruption with md_restrict set to 1. I ran a couple of movies through the dockers and file system, and performed some maintenance within Plex to try and stress the DB a bit. But so far, no hiccups in Plex (my main culprit), and the rest of the dockers with sqlite look good also. @limetech: Is there anything you want or need here? Diags? Captures? It will be good to let it run through the weekend with normal usage and see how things go.
  42. 3 points
    Good morning. With md_restrict set to 1, I ran all night long with no corruption. I started a few jobs before I went to bed to put some of the apps through some work. They all finished without problem. I checked every app this morning that has a sqlite db, and there was no corruption. I plan on trying to stress the machine a bit this morning. I will let you know the results. 🙂
  43. 3 points
  44. 3 points
    You don't understand what fmask and dmask do when mounting FAT32 volumes. They are octal values that turn off the corresponding permission bits. This causes all files to have only the owner RW bits set and all directories to have only the owner RWX bits set. There are sensitive files stored on the flash, such as your actual passwd file and ssh keys. We are not going to change this back to the way it was. The USB boot flash was never meant to store anything other than the files necessary to boot and config files. You might notice that since day 1 we have copied the 'go' file to /tmp and executed it from there. Sorry this is an inconvenience, but there are many workarounds.
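    For anyone unfamiliar with how those masks work, here is a minimal sketch. The mask values, device, and mount point are illustrative assumptions, not the exact options Unraid uses; on vfat, a mask simply clears the corresponding bits from the default 0777 permissions:

        # Hypothetical values: fmask=0177 masks 0777 down to 0600 for files (owner read/write only),
        # dmask=0077 masks 0777 down to 0700 for directories (owner rwx only).
        mount -t vfat -o fmask=0177,dmask=0077 /dev/sdX1 /mnt/test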
  45. 3 points
    Part of an effort to start locking down in preparation for multiple users and roles in upcoming releases.
  46. 3 points
    I am from Europe. It seems to be more reliable today; the timeout now happens once I reach AWS in Seattle, which I presume is their firewall.
  47. 3 points
    Also, any users who created what should be a redundant pool on v6.7.0 should convert metadata to raid1 now, since even after this bug is fixed any existing pools will remain as they were. Use:

        btrfs balance start -mconvert=raid1 /mnt/cache

    To check if it's using the correct profile type:

        btrfs fi usage -T /mnt/cache

    Example of a v6.7-created pool. Note that while data is raid1, metadata and system are single profile, i.e. some part of the metadata is on each device and will be incomplete if one of them fails; all chunk types need to be raid1 for the pool to be redundant:

                       Data      Metadata  System
        Id Path        RAID1     single    single    Unallocated
        -- ---------   --------- --------- --------  -----------
         2 /dev/sdg1   166.00GiB   1.00GiB        -    764.51GiB
         1 /dev/sdi1   166.00GiB   1.01GiB   4.00MiB   764.50GiB
        -- ---------   --------- --------- --------  -----------
           Total       166.00GiB   2.01GiB   4.00MiB     1.49TiB
           Used        148.08GiB 555.02MiB  48.00KiB
  48. 3 points
    The Plex database corrupted twice on my server, and Sonarr's at least twice, costing me many hours resolving the problems. Both were stored in /mnt/user/appdata/. I have moved everything to /mnt/cache/appdata/ by toggling the 'use cache only' setting, and so far there have been no database corruptions. I appreciate limetech taking this matter seriously, and I want to say thanks for an otherwise fantastic 6.7.0 release.
  49. 3 points
    Yes, I've been meaning to add sparse file support. This would be handy when backing up vdisk images.
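    In the meantime, for anyone copying vdisk images by hand, here is a minimal sketch of a sparse-aware copy; the paths are placeholders:

        # Keep unallocated regions of the vdisk as holes instead of writing them out as zeros
        cp --sparse=always /mnt/user/domains/SomeVM/vdisk1.img /mnt/user/backups/SomeVM/vdisk1.img
        # rsync can do the same with --sparse
        rsync --sparse /mnt/user/domains/SomeVM/vdisk1.img /mnt/user/backups/SomeVM/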
  50. 3 points
    Slackware created a new kernel-firmware package that included the latest AMD microcode updates. I've included it in our upcoming unRAID 6.6.0-rcX build (most likely unRAID 6.6.0-rc2).