kurai

Everything posted by kurai

  1. I'm leaning towards the process/functionality being one of those "works for some people but not others" things at this stage. I banged my head against a series of brick walls for an unreasonable amount of time trying all the many and varied approaches I found during the research phase. At the moment I've given up and left it alone, checking back on the few worthwhile information sources now and again to see if anything has substantially improved.
  2. That did the job nicely - all behaving now. Thank you. 👍
  3. Hi Djoss. Thanks for keeping on with the client updates. An issue this time though ... the Tools->History menu opens the activity log in a window 100% of the size of the browser viewport, with no modal window controls - i.e. no way to go back to the main app interface 'window', and because of the canvas-style implementation there's no browser back/history/right-click functionality to exit out of it either. This necessitates a restart of the container to get back to normal. (All the other top menu bar entries and other modal window elements still work as expected.)
  4. That seems to have done the job nicely. Don't know where you are storing the graph history now, but wherever it is, VNC no longer throws a fit. Thank you.
  5. Issue not addressed in v6.11.0 - over-large, out-of-spec cookies are still being set, breaking some Docker & VM VNC access.
  6. Look at my linked thread for the *cause* of the cookie. If you don't take care of that then it will inevitably return. (Assuming it's the same root cause I identified. Please let us know if it's not the case.)
  7. I'm having the same issue with a CrashPlan docker that uses a VNC interface. I've referenced this thread here: If any of you have specific instances of a particular docker or VM type that gives this sort of connectivity error, OR if you aren't using the webGUI Network graph in dashboard and still get the error I'd be grateful if you could add a comment there, so we can drill down to some sort of resolution. Cheers
  8. Overview
     The webGUI can set out-of-spec large cookies that break some functionality (notably docker VNC connectivity).

     Problem source
     https://wiki.unraid.net/Manual/Release_Notes/Unraid_OS_6.10.0#Other_Improvements
     The optional Network traffic graph (introduced with 6.10.0) in the INTERFACE dashboard panel stores its rolling history in two cookies: rxd-init & txd-init. The value is recorded at 1 second intervals for a history of 10s | 30s | 1min | 2min. With each traffic value recorded at 14-digit precision this gives rise to the cookie sizes below:
     10 sec: 2 x ~185 bytes
     30 sec: 2 x ~540 bytes
     1 min : 2 x ~1085 bytes
     2 min : 2 x ~2155 bytes

     Relevance
     RFC 6265bis has long recommended limits on cookie sizes, which ended up as a requirement to limit the sum of the lengths of a cookie's name and value to 4096 bytes. Pretty much all browser engines complied with this. Because each engine went about managing this limit in subtly different ways, a further refinement was added that also limits the length of each cookie attribute value to 1024 bytes. Any attempt to set a cookie exceeding the name+value limit should be rejected, and any cookie attribute exceeding the attribute length limit should be ignored.

     Unraid issue
     Most of the webGUI functionality seems to handle this out-of-spec condition OK, but it definitely breaks some VM & docker VNC connectivity. It's possible that this might be a root cause of a variety of other reported bugs - the only one that I can personally confirm is a CrashPlan docker that fails to connect if the webGUI Network graph is set to 2 minute history and allowed to sample traffic for longer than a minute or so. Some other dockers with VNC interfaces, such as Krusader, seem unaffected.

     Suggested solutions
     - Reduce recorded values to ~7-digit precision from 14?
     - Limit recorded intervals to 60 seconds instead of 120?
     - Use some other variable state mechanism instead of cookies?
     - Investigate what is happening in some VNC implementations that causes them to mishandle out-of-spec cookies?

     Sources
     https://datatracker.ietf.org/doc/html/draft-ietf-httpbis-rfc6265bis/
     https://github.com/httpwg/http-extensions/blob/main/draft-ietf-httpbis-rfc6265bis.md#the-set-cookie-header-field-set-cookie
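     The cookie arithmetic can be modelled in a few lines (a rough sketch, and an assumption on my part rather than anything from the webGUI source: one 14-digit sample plus a one-character separator per second of history - the measured sizes quoted are a little larger, presumably extra encoding overhead, but the growth pattern is the same):

```python
# Rough model of the rxd-init / txd-init cookie sizes quoted in post 8.
# Assumption (mine, not from the webGUI source): one 14-digit sample
# plus a 1-char separator per second of history. RFC 6265bis also adds
# a separate 1024-byte guidance for each attribute value.
RFC6265BIS_NAME_VALUE_LIMIT = 4096  # bytes: cookie name + value

def cookie_size(history_seconds: int, name: str = "rxd-init",
                digits: int = 14) -> int:
    """Approximate name+value length for one history cookie."""
    return len(name) + history_seconds * (digits + 1)

for secs in (10, 30, 60, 120):
    size = cookie_size(secs)
    within = "within" if size <= RFC6265BIS_NAME_VALUE_LIMIT else "exceeds"
    print(f"{secs:3d}s history -> ~{size} bytes per cookie ({within} 4096)")
```

     Even under this simplified model the cookie size grows linearly with the history length, which is why the 2 minute setting is the one that tips some VNC implementations over.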
  9. I had what might well be a related issue with a Windows 10 VM and torrent clients - every write of a small received chunk of a large file seemingly led to the entire file being rewritten each time. [ /bug-reports/stable-releases/680-massive-write-amplification-on-raid-1-btrfs-ssd-cache-pool-with-sparse-files-r811/ ] Temporarily "fixed" by turning off a normally desirable setting in the torrent clients that had worked as intended in earlier Unraid releases. It looks like a systemic problem that came in with the transition from the 6.7.2 to 6.8.0 codebase.
  10. My preferred option for my current setup is uTorrent (the ancient 2.2.1 build, before it got annoying with all the ads and worthless extra `features`) - I just need lightweight/fast/reliable without all the bells and whistles. In qBittorrent (4.2.1) the equivalent setting is in Options->Advanced->libtorrent section: "Enable OS cache" (note that the wording is reversed in qB, so ticking the checkbox turns OS caching ON, whereas in uT it turns OS caching OFF; see https://www.libtorrent.org/reference-Settings.html#disk_io_write_mode for detail). Also note that some of the caching configuration options (on Windows hosts, at least) are read at application start and require an application restart to take effect.
  11. Update: TL;DR - Turning off "Disable Windows caching of disk writes" in torrent client(s) seems to resolve this particular issue.

     I had some time this weekend so I decided to try again with Unraid 6.8.2. Initially I had the same issue when using the existing, working settings from 6.7.2. I went through lots of combinations of option changes in Unraid, the hosted Windows 10 VM, and a few torrent applications, and finally discovered an option that works ... disabling the torrent client's internal option to bypass OS-level write caching.

     This is a performance setting that exempts the client's disk writes from standard OS handling so it can cache/coalesce large numbers of small writes itself - designed to prevent "double caching", excessive memory usage and/or swapfile thrashing in some situations. It was useful back in the day if you had little spare RAM over and above Windows' baseline usage, e.g. running Windows 7 with 1GB. These days, with much larger/cheaper RAM configurations, it's rarely a relevant option. (I have 32GB in my Unraid server, of which 6GB is available to the VM, so not really an issue for me now.)

     Leaving all other Unraid/VM/Windows settings as per the working 6.7.2 config and only changing this option (tried in 3 different torrent clients) has stopped the massive SSD cache write overhead. I still don't have any real idea which element of the Unraid 6.7.2 -> 6.8.x update was the root cause of the altered behaviour, but this setting change, if not ideal, at least stops my SSDs being murdered.
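     To illustrate what that client option actually changes: with OS write caching bypassed, the client has to batch its own small piece-writes into larger flushes instead of leaving that job to the OS page cache. A toy sketch of that coalescing (all names hypothetical - this is not actual uTorrent/libtorrent code):

```python
class WriteCoalescer:
    """Toy stand-in for a torrent client's internal write cache
    (hypothetical). When a client bypasses the OS page cache it must
    coalesce many small piece-writes itself and issue one large write
    per flush - the "double caching" avoidance described above."""

    def __init__(self, flush_threshold: int) -> None:
        self.flush_threshold = flush_threshold
        self.pending: list[tuple[int, bytes]] = []  # (offset, data)
        self.pending_bytes = 0
        self.flushes = 0  # how many large writes would hit the disk

    def write(self, offset: int, data: bytes) -> None:
        self.pending.append((offset, data))
        self.pending_bytes += len(data)
        if self.pending_bytes >= self.flush_threshold:
            self.flush()

    def flush(self) -> None:
        if not self.pending:
            return
        # A real client would issue one large write here covering the
        # coalesced ranges, instead of many small ones.
        self.flushes += 1
        self.pending.clear()
        self.pending_bytes = 0
```

     With a 1 MB threshold, 256 incoming 4 KB blocks become a single large flush rather than 256 small writes - the work the option takes away from the OS page cache and hands to the client, at the cost of the client holding the data in its own RAM.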
  12. [Note: I originally added this as a reply to "[6.7.2] DOCKER IMAGE HUGE AMOUNT OF UNNECESSARY WRITES ON CACHE". Didn't notice it was raised against 6.7.2, not 6.8.0. Sorry.]

     For what it's worth: is this perhaps an issue with sparse files? I'm having a somewhat similar problem when writing a torrent of an ISO to a share on a mirrored cache pool (2 x Crucial MX500 SSDs, BTRFS, RAID 1). 6.7.2 was fine; on 6.8.0 there's huge write amplification whenever the torrent client writes out a chunk.

     e.g. The torrent client creates a 3.5GB sparse file, then starts downloading chunks to its internal RAM cache. When a chunk (4MB) is completely received it is written to disk into the pre-allocated ISO sparse file. However - instead of the expected 4MB disk write it seems to rewrite the *entire* 3.5GB file every time it sends a new chunk to the disk file. This leads to the SSDs writing continuously for hours (and getting very hot) for what *should* take less than a second.

     Other types of disk write activity (copying, moving, file creation etc.) behave normally, at expected speeds and levels of SSD activity - i.e. copying a regular 3.5GB file to the share takes < 10 seconds. Also, I reverted from 6.8.0 to 6.7.2 and the problem disappeared, so it's not related to a bad Unraid cache pool config, or to anything in the (unchanged) torrent client config.

     I found this bug report before raising one of my own, and I wonder if it has the same root cause, as (I believe) the Docker IMG files are also created as sparse files. If I'm way off base and my issue is unrelated, please let me know and I'll raise mine as a separate bug report. -- kurai
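     For reference, the healthy behaviour is easy to demonstrate (a sketch with a hypothetical file name; on a normal filesystem only the 4 MB region actually written should end up allocated, not the full preallocated size):

```python
import os
import tempfile

CHUNK = 4 * 1024 * 1024        # one 4 MB torrent piece
FILE_SIZE = 3_500_000_000      # ~3.5 GB preallocated ISO

# Preallocate a sparse file, as the torrent client does.
path = os.path.join(tempfile.mkdtemp(), "example.iso")  # hypothetical name
with open(path, "wb") as f:
    f.truncate(FILE_SIZE)      # logical size grows; no blocks allocated yet

# Write a single piece somewhere in the middle of the file.
with open(path, "r+b") as f:
    f.seek(10 * CHUNK)
    f.write(b"x" * CHUNK)
    f.flush()
    os.fsync(f.fileno())

st = os.stat(path)
allocated = st.st_blocks * 512  # bytes actually backed by storage
print(f"logical {st.st_size} bytes, allocated ~{allocated} bytes")
```

     The allocated size should stay around 4 MB against a 3.5 GB logical size. On the affected 6.8.0 setup the equivalent real-world write instead produced sustained SSD activity consistent with the whole file being rewritten per piece.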
  13. All the 10Gbit NIC chipsets I've seen run really hot - hence they generally have big heatsinks nailed on top. My Intel X540s can easily reach 70°C if they aren't in direct, fast airflow from fans.
  14. That... sounds horribly plausible. I *did* have two Chrome tabs open to different pages of the Unraid webUI. When I initially opened the WebUI Main tab, the updates-available notification from Fix Common Problems appeared, so I opened the Plugins page in a new tab and did the updates. I was also doing stuff in other unrelated tabs, and it's entirely possible that when I went back to Unraid a few minutes later I used the original, unrefreshed tab and ran Empty Trash from there (I have your plugin in the main menu bar via Add Custom Tab) instead of the 2nd tab the updating was done in. I'm not sure if the diag logs will be able to confirm that one way or another, but I'll post them anyway ... eventually. They are saved to the boot thumbdrive and the server is currently shut down while I wrestle with XFS undelete/recovery tools, which are proving to be massively time-consuming and erratic :/
  15. The log entry for the original deletion event was definitely '/mnt/user/SHARENAME', not '/mnt/RecycleBin/User Shares/SHARENAME/'. Just looked at it again at source in case something happened in the copy/paste to the browser.

     I haven't done anything to get the original SHARENAME working again yet, since I want to minimise array disk write activity until I can reboot from a USB LiveCD and assess some recovery/undelete options. I do, however, have another deleted file from a different share, /OTHERSHARE/test.txt, that I created and deleted from the SMB client PC to make sure it wasn't a networking/SMB issue when I first realised SHARENAME had been nuked. This does appear as: http://FQDNservername.com/Settings/RecycleBin/Browse?dir=/mnt/RecycleBin/User%20Shares/OTHERSHARE
     I haven't attempted any trash emptying on that yet because I really *really* don't want to potentially lose OTHERSHARE too.

     I will post the diagnostics zip shortly - I'm just sanitising the syslog.txt a bit right now since there are some quite sensitive document names mentioned there.
  16. No. I went to the Settings tab, User Utilities section, and clicked Recycle Bin, which took me to http://FQDNservername.com/Settings/RecycleBin
     In the Shares / SMB Share section there was the expected SMB share name "SHARENAME" with a trash size of ~20GB (the size of filename.foo deleted from the SMB-connected PC). I then clicked the "Empty All Trash" button. All normal so far. As I said, however ... instead of emptying the trash and deleting only the 20GB file, the whole SHARENAME share it was contained in got rm'ed.
     Correction: it was the simple "Empty" button from the Shares section I clicked, not the "Empty All Trash" button from the top Recycle Bin section - if that makes a difference.
  17. Aaaaargh! Version 2019.02.03b has just erroneously deleted an entire share instead of a file. Completely. From all member disks in the array.

     The details of what happened, while I try to stop myself panicking too hard ... I have an Unraid XFS array of 5 HDDs consisting of 2x parity and 3x data disks, plus a BTRFS cache of 2x mirrored SSDs. The share in question was set to include all disks and use cache, with directories set to split as required - it contained ~2.5TB of data.

     On a client Linux PC (via SMB) I deleted a file from within the share ... so far so normal. It was pretty large (20GB or so) so I went to the Unraid WebUI to empty it from Recycle Bin and regain the space on the SSD cache. There was a notification in the WebUI that an update was available for the Recycle Bin plugin, so I updated it (from 2019.01.31b to 2019.02.03b). I then opened the Recycle Bin plugin and emptied the 20GB file - I didn't do anything unusual, same procedure as I've done many times before.

     However ... instead of just deleting /SHARENAME/.Recycle.Bin/SHARENAME/20GBfilename.foo it looks like the plugin traversed up the filesystem tree and deleted /SHARENAME completely. I've confirmed it's really gone, and not just a client-side SMB read error or something, by logging on locally to the Unraid server and looking at the combined array filesystem in /media/user/ - the share's split dirs spread across /media/disk1/, /media/disk2/, /media/disk3/ and /media/cache/ are all gone.

     Can't find much in the logs, just the regular notification of the start of an empty trash operation:
     Feb 4 11:42:17 SERVERNAME ool www[31973]: /usr/local/emhttp/plugins/recycle.bin/scripts/rc.recycle.bin 'share' '/mnt/user/SHARENAME'
     followed by lots of errors when client PCs try to access the now-nonexistent share data:
     Feb 4 11:44:50 SERVERNAME rpc.mountd[5416]: refused mount request from 192.168.1.2 for /SHARENAME (/): not exported

     Before I shut down the Unraid server and start trying any recovery operations with a grab-bag of XFS filesystem tools from a LiveCD, is there any other info or logs I can give to help with diagnosis?