MarkUK

  1. Same problem here - when adding a "2nd Unraid Share" it also becomes stuck on "Updating". The VM can otherwise be updated normally - only this causes the problem... HOWEVER - when I manually edited the XML to add the second filesystem bus (which was put on 0x06, I think, as the first 'bus' ID available) I can now 'save' the settings even in GUI mode (a rough sketch of that XML edit is after this list).
  2. Hey, just an update: the previous issue surrounding the cache drive filling up was actually completely my fault; the downloads directory was on a share that had cache enabled, so it wasn't leaking out of the appdata directory as I had suspected. The other issue (files remaining in temp) probably persists, but it isn't such a problem either. The only other thing I've had problems with is the port mapping; sometimes I need to change the port mapping manually, and then change it back, before I can access the web GUI. Anyway, for the time being we're using another method of downloading (which is how I discovered the "cache" issue), but I'll probably try this again now that I've worked out what caused the previous one. Cheers.
  3. The 6-camera limit is imposed by the browser when viewing more than 6 cameras (I believe a browser will only run about 6 simultaneous streams to the same host as a hard limit; ZM themselves are trying to resolve it on their future roadmap), so I just use a hosts file with "zoneminder1", "zoneminder2", etc., pointing to the IP, and a PHP page on another server then pulls together links to the ZM preview images, but with a different custom host for each one. That nicely avoids the 6-camera limit (the hosts entries are sketched after this list). Completely agree with security, of course... I suppose for the most part (for my usage), even when using ZMNinja, it's all internal traffic. Will have to look at the Docker certs shortly, thanks. Cheers
  4. Hey, thanks for the continued updates! Is there a reason why insecure access was removed? For the most part these are run on local servers behind firewalls with no external access, and even with external access the extra security may simply not be of interest to some, depending on the purpose. For me it's just a pain, as I'll have to set up self-signed certs (an example of generating one is after this list) - and to avoid the 6-camera limit I use "hosts" entries (and a custom camera-viewing page) - so it's not even a single host I'd need to set this up for. Having the option for both would be nice - I'll manually re-open HTTP access on my system, but that'll potentially be lost with future upgrades! Thanks again for your work 🙂
  5. Hey Markus & all, I've started using this recently and have found it to be pretty good so far! Only, I've got two (very likely related) problems. First, sometimes there is data left in /temp/ even for files that have completely finished downloading; this may be by design and I just need to leave it longer, but otherwise I'm having to clear out /temp/ occasionally. Second, the cache drive filled up today because the mount must have disappeared (i.e., what was /mnt/user/downloads/ -> /downloads/ disappeared). There has been an outstanding issue - which I believe hasn't recurred - whereby /mnt/user disappears off the entire server, but this relates to just a single Docker. Even after the mount was restored, a number of files carried on downloading to the 'internal' path (within the Docker) rather than the 'external' Unraid share. Any idea why? And is there a straightforward way to make a kill switch, or some other mechanism, that prevents downloads if the mount has disappeared? (One possible shape for that is sketched after this list.)
  6. Wanted to add that I haven't seen this behaviour since my last post (21 days ago). Hopefully resolved, at least from my end! Cheers
  7. It has just happened again - diagnostics attached. This happened specifically when transferring about 3 GB of files (from a Windows box) over SMB; the share I was copying to is not exported over NFS or AFP at all. unraid-diagnostics-20181010-0041.zip
  8. Throwing another one in the ring here... Unraid 6.6.1, having been running 6.5.3 without problems (that's a lie - but unrelated problems), and now seeing the same: the /mnt/user share disappearing over NFS and, as a result, VMs and Dockers dying very quickly. It has happened twice since upgrading to 6.6.1. Attached are diagnostics taken during a 'crash'. I haven't tried manually remounting the shares, but a reboot has solved it in both cases. I've set the fuse_remember parameter to -1 because I'd had a problem with stale file handles in one of the VMs (which has happened a few times, too). unraid-diagnostics-20181008-1813.zip
  9. The discussion here looks very similar in nature to what I've seen (except their problem only lasts 10 - 15 minutes - although that could just be a matter of magnitude; perhaps mine would free up after 10 - 20 hours or more?! I've never left it that long so far)... https://www.spinics.net/lists/linux-xfs/msg06058.html Towards the end the discussion moves on to the mass deletion of file structures, similar to what I've seen. Their solution was, effectively, to slow the deletions down by making them less parallel (a throttled-delete sketch is after this list)...
  10. My guess is that some resource is being exhausted within either XFS or, indeed, one of the supporting technologies interfacing with either the array or the Dockers/VMs (although I'm pretty certain I've had a crash with a regular Unraid-only rm - no virtual mount / 9p / etc.). I had hoped to find the culprit by examining the XFS stats during a crash and seeing an exhaustion of inodes or something, but nothing looked obviously wrong there. Honestly, I'm at a loss as to what's causing this - my next step, though, is to try to make it reproducible (and consistent) and remove all the extra factors such as software (ZM), using a VM, a Docker, etc. - just boil it down to the minimum steps needed to cause the problem... You're right that the level of file access doesn't seem abnormal, although the level of deletions may be. The hang is definitely total - even if the system is left for most of the day to recover, it never does. The hang also appears to be purely IO, as far as I can see: the system still tries to work (e.g., I can run simple SSH commands) until something attempts any IO, at which point that SSH session also hangs.
  11. Hey Frank, appreciate the reply - I had considered this, but memory usage is unaffected, and the rm can run for many hours (happily deleting files the whole time - not just enumerating the list of files to delete) before it crashes! The files are also structured 5 - 8 folders deep, so each folder only holds a couple of thousand files or fewer.
  12. Final update for a while. I've reverted back to Zoneminder. I couldn't get on with Motion/Motioneye at all - it was not going to work for us long-term. I've deleted all the old Zoneminder image files - there were about 50 million of them, and the server crashed once whilst deleting them. I've reduced the storage of ZM images to events only, plus added extra filters to periodically delete older events, to avoid the 'bulk deleting' that happens when it starts running out of space. Lastly, I'm going to create one (or two) test Unraid boxes running a parity-protected XFS array and try creating tens of millions of tiny files and subsequently deleting them (a rough reproducer is sketched after this list); I'm fairly sure this is where the problem lies and (for my own sanity, if nothing else) I want to prove/disprove it and possibly even produce something reproducible from it. Until then, this machine shouldn't fill up for 10 - 20 weeks, so it could be a while before I naturally hit this issue again - thanks for your input!
  13. Further to add: the logging server can accept any command that can be executed over SSH, so if there are any good logging commands I could be running during/before a crash, let me know and I'll add them to the test!
  14. Hey pwm - I've got various logging happening (namely a continuous "ps" filtered to show only D-state blocking, plus a separate server monitoring the xfs_stat output, the CPU load and usage, and memory (free/total/etc.) - these commands are sketched after this list). Before last night's crash the last recorded values were: memory free 85%; load 13 (1 min), 12 (5 min), 9 (15 min); CPU usage 2%. I didn't get the last ps output (it's captured via PuTTY, which wasn't running at the time). I've also got the xfs_stat output if it's useful, but its metrics seem to be ever-growing counters rather than a decent snapshot of state. So, short answer: no, nothing is consuming any memory (85% free is roughly the normal amount with no deletes running whatsoever!).
  15. I can't believe I'm back on this - again... I changed to Motion/Motioneye (which, by the way, I really didn't get on with - another story, though!), so there is NO Zoneminder in the mix whatsoever. And it happened again. But - I was mass-deleting files (ironically, the Zoneminder files - a couple of million 100 KB files). I am - nearly - certain that this error is being caused by the mass deletion of millions of files on XFS (or on a parity-protected XFS array, or some other factor of the setup). To 'check' this I've disabled all Dockers and VMs and am just mass-deleting files. I'll update this if that crashes - and, if so, I'll then run my regular setup (including CCTV - NOT ZM, though) with no mass deleting; in theory, the server should then stay up indefinitely...
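
The hedged sketches referenced in the posts above follow; every name, path, and number in them is an assumption unless it appears in the posts themselves.

Post 1: a minimal sketch of the kind of XML edit described there, assuming an Unraid 9p-style share mapping. The VM name, directory names, and the 0x06 address are only guesses based on the post, not a confirmed fix; the point is simply that the second <filesystem> entry needs its own, unused address.

```bash
# All names/paths here are assumptions based on the post, not a confirmed fix.
virsh dumpxml "MyVM" | grep -A1 "<filesystem"   # check which addresses are already in use

# Fragment to add via `virsh edit MyVM` (inside <devices>), giving the second
# share its own PCI address rather than clashing with the first one:
cat <<'XML'
<filesystem type='mount' accessmode='passthrough'>
  <source dir='/mnt/user/second_share'/>
  <target dir='second_share'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
</filesystem>
XML
```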
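Post 3: a sketch of the hosts-file trick described there, for a Linux client (the hosts file lives elsewhere on Windows). The IP address and alias names are placeholders; the idea is that the browser's per-host connection cap (around six on HTTP/1.1) applies per host name, so pointing several aliases at the same ZoneMinder box lets a custom page pull more than six streams at once.

```bash
# Placeholder IP and alias names - adjust to the real ZoneMinder host.
cat <<'EOF' | sudo tee -a /etc/hosts
192.168.1.50  zoneminder1
192.168.1.50  zoneminder2
192.168.1.50  zoneminder3
EOF
```

The custom viewing page then links each camera's preview via a different alias.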
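Post 4: a hedged example of generating a self-signed certificate, since that is what the post says will now be needed. The file names, validity period, and CN are arbitrary, and where the container expects the key and certificate depends on the image.

```bash
# Self-signed key/cert pair; the CN here matches one of the hosts-file aliases above.
openssl req -x509 -newkey rsa:4096 -sha256 -days 3650 -nodes \
  -keyout zoneminder.key -out zoneminder.crt \
  -subj "/CN=zoneminder1"
```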
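Post 5: one possible shape for the "kill switch" asked about there - a cron job that stops the download container if the user-share mount has gone away. The container name, sentinel file, and log message are hypothetical.

```bash
#!/bin/bash
# Hypothetical kill switch: run from cron every minute or so.
SHARE_ROOT=/mnt/user                      # Unraid's FUSE user-share mount
SENTINEL=/mnt/user/downloads/.mount-ok    # touch this file once on the real share
CONTAINER=downloader                      # placeholder container name

# If the user-share mount is gone, or the sentinel file inside it is missing,
# stop the container so it cannot keep writing to the 'internal' path.
if ! mountpoint -q "$SHARE_ROOT" || [ ! -f "$SENTINEL" ]; then
    logger "mount-watch: $SHARE_ROOT or $SENTINEL missing - stopping $CONTAINER"
    docker stop "$CONTAINER"
fi
```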
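Post 9: a sketch in the spirit of the linked thread's fix - delete strictly one directory at a time, at low IO priority, with a pause between batches, instead of one huge delete hammering the filesystem. The path and sleep interval are arbitrary.

```bash
# Arbitrary path and interval; deletes one top-level directory at a time, gently.
find /mnt/user/zm_events -mindepth 1 -maxdepth 1 -type d | while read -r dir; do
    ionice -c3 nice -n19 rm -rf "$dir"   # lowest IO and CPU priority
    sleep 5                              # breathing room between batches
done
```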
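Post 12: a rough shape for the reproducer described there - fill an XFS user share with a deep tree of tiny files, then mass-delete it. The counts, file size, and path are placeholders and would need scaling up towards the tens of millions of files mentioned in the post.

```bash
#!/bin/bash
# Placeholder scale: 1,000 dirs x 1,000 files of ~100 KB each (1 million files).
TARGET=/mnt/user/xfs-test

for d in $(seq -w 1 1000); do
    mkdir -p "$TARGET/$d"
    for f in $(seq -w 1 1000); do
        head -c 100K /dev/zero > "$TARGET/$d/$f.jpg"   # tiny dummy "image" files
    done
done

rm -rf "$TARGET"    # the mass delete that appears to trigger the hang
```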
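Posts 13/14: roughly what the logging described there could look like in one place - D-state (blocked-on-IO) processes, the cumulative XFS counters, memory, and load - appended to a log from a frequent cron job. The log path is arbitrary.

```bash
# Append a snapshot to an arbitrary log file; suitable for a one-minute cron job.
{
    date
    ps -eo pid,stat,wchan:32,comm | awk 'NR==1 || $2 ~ /D/'   # header plus D-state tasks
    cat /proc/fs/xfs/stat                                     # cumulative XFS counters
    free -m                                                   # memory, in MB
    cat /proc/loadavg                                         # load averages
} >> /var/log/io-hang-watch.log
```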