
Alex R. Berg

Members
  • Content Count: 186
  • Joined
  • Last visited
  • Days Won: 1

Alex R. Berg last won the day on October 30 2018

Alex R. Berg had the most liked content!

Community Reputation: 13 Good

1 Follower

About Alex R. Berg
  • Rank: Advanced Member
  • Birthday: 05/09/1977
  • Gender: Male
  • Location: Denmark


  1. Interwebtech: Darn annoying that it doesn't just work! I doubt the minimum depth should matter if you go shallower. It does what it says on the box: it always scans each share, but then it does `find -maxdepth n` where n is your depth. With minimum level, n is at least 4, so it'll scan /mnt/disk1/share/X/Y/Z/A - though I might be off by one.

     I've also noticed lately that my system is slow to display folders, which is exactly what cache-dirs should prevent, but I've been ignoring it, not wanting to reopen that can of worms called hacking Linux with cache-dirs. I remember people having issues with disks spinning up, but I didn't, and that is why I added the scan of /mnt/user; it's in the option panel: `Scan user shares (/mnt/user):`. I just tested on my unRaid 6.8.3: disks spun up when accessing /mnt/user/media/Film, which they definitely shouldn't. I had scan user shares = false. I now set scan user shares = true, set max level to 4 to get a quick test, spun down a disk, and accessed the share from Windows. Now it does not spin up.

     adoucette: This does not fix the CPU issues you mentioned, as those are surely caused by loss of cache and a following rescan. Oh, adoucette, you might want to enable logging. I always have logging enabled, so I can tail the cache log when I want to verify the behaviour of cache-dirs: `alias tailcachelog='tail -n 2000 -f /var/log/cache_dirs.log'`. Beware that log rotation is needed if logging is kept on permanently. There are logrotate files included, but I'm not sure the rotation is active; see /etc/logrotate.d.

     PS: I use a cache-pressure of 1. I think 0 means never release cache, which might cause out-of-memory. It's system-wide, not just cache-dirs; see the sketch below.
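     As far as I recall, the cache-pressure setting is the standard system-wide sysctl, so you can inspect and change it directly (a minimal sketch):

     ```
     sysctl vm.vfs_cache_pressure        # show the current value
     sysctl -w vm.vfs_cache_pressure=1   # low value = prefer keeping dentries/inodes cached
                                         # 0 would mean never reclaim them, risking out-of-memory
     ```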
  2. Hi Ari,

     Here's some help text that I wrote in the plugin:

     ```
     Folder Caching Info

     The folder caching (cache_dirs) reads directories continuously to keep them in
     memory. The program does not know whether a directory is already in memory or
     needs to be reread from disk. Therefore it cannot avoid spinning up disks when
     the Linux memory cache evicts a directory and cache_dirs rescans it. If a large
     number of files are being kept in memory, it seems inevitable that some
     directories get evicted from the cache when the system is under load.

     The program by default uses an adaptive mode where it reduces the depth of the
     directory structure it tries to keep in memory. When it detects a cache miss
     (slow scan time) it will reduce the depth until disks are idle again, but will
     still be at risk of putting load on the disks for a minute or two until the
     program gives up and waits for idle disks. The fewer files the cache needs to
     hold, the less likely it is that the program spins up disks.
     ```

     It doesn't address your issue directly, which is disks spinning up, but it's part of the same issue: directories get evicted from memory. Try reducing the folders it caches. Don't expect it'll ever be perfect; it's a hack we use, and it doesn't tell the OS to never evict dirs. A rough sketch of the underlying idea is below.

     Oh, also quite important: maybe increase the cache-pressure. There's info on that in the help too, by clicking the ? of the unRaid GUI while in the folder caching GUI.

     Best Alex
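     In case it helps to picture it, the core idea is roughly this (a minimal sketch, not the actual cache_dirs script):

     ```
     # Keep the kernel's dentry/inode cache warm by re-walking the tree.
     # The real script adapts the depth and backs off when disks are busy.
     while true; do
         for dir in /mnt/disk*/ /mnt/cache/; do
             find "$dir" -maxdepth 4 -noleaf >/dev/null 2>&1
         done
         sleep 10
     done
     ```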
  3. Here's a very minor bug I just found in the current plugin. I used the nice preClear plugin's text GUI to start a preclear, which I then almost immediately terminated via the stop-preclear session in the GUI. The preclear kept being displayed in unRaid under Main until I manually deleted the three /tmp/preclear* files, as shown below.

     Date Updated: May 7, 2020. Current Version: 2020.05.07.

     Best Alex
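     The manual cleanup, for anyone who hits the same stale display:

     ```
     ls /tmp/preclear*   # the leftover status files keeping the stale entry alive
     rm /tmp/preclear*   # removing them clears the entry from Main
     ```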
  4. It's obviously an unRaid issue if multiple devices get assigned to the same bus in the XML generated by unRaid. The IPv4 address not being assigned automatically on the public br0, i.e. by the router's DHCP, might well be a more subtle issue. Might be KVM, might be Ubuntu, might be unRaid.
  5. Thank you Mickey. You helped, even if it was just to make me realize I had kind of already found the solution; I just didn't know it. I think something has confused me. I tried messing around with the bus afterwards in another thread, and it didn't work. But maybe it did work. I got it working before I wrote this thread by removing the filesystem mount, but now I'm unsure whether that deletion actually happened, because I noticed I still have the filesystem mount. I just thought I had removed it, so I spent the afternoon messing with Samba.

     Anyway, long story short, here's what I did. I swapped the buses so the filesystem is 0x02 and the eth interface is 0x01. Then the network appears in the top right corner and is available in Settings in the Ubuntu GUI. I then added a manual IPv4 address, and then it worked. With automatic DHCP it didn't get an address, as verified by ifconfig and pinging google.

     ```
     <filesystem type='mount' accessmode='passthrough'>
       <source dir='/mnt/user'/>
       <target dir='ubuntu'/>
       <alias name='fs0'/>
       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
     </filesystem>
     <interface type='bridge'>
       <mac address='52:54:00:03:7e:e6'/>
       <source bridge='br0'/>
       <target dev='vnet1'/>
       <model type='virtio'/>
       <alias name='net0'/>
       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
     </interface>
     ```

     I just checked, and now I have internet and mounting works: `sudo mount -t 9p -o trans=virtio mnt /mnt`. As a side note, I read in another thread that mounting like this would be faster: `sudo mount -t 9p -o msize=262144,trans=virtio,version=9p2000.L,_netdev,rw mount-tag /mnt/user`. A sketch for making the mount permanent is below.
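     If you want it to survive reboots, an /etc/fstab line along these lines should work (a sketch; the first field must match your 9p mount tag, i.e. the <target dir> in the XML):

     ```
     # Hypothetical /etc/fstab entry for the 9p share
     mnt  /mnt  9p  trans=virtio,msize=262144,version=9p2000.L,_netdev,rw  0  0
     ```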
  6. I have this problem in my freshly installed Ubuntu 20.04 VM. When I boot without any unRaid share I have network access, but there is no network when booting with a share. `ip link` gives me precisely the same output with or without a share in the VM settings (after boot); my link is called 'enp2s0'. `sudo dhclient enp2s0` seems to run forever; I killed it after about 5 minutes, both with and without network.

     I don't have /etc/cloud, and the cloud-init command is not found, maybe because I did a minimal Ubuntu install. Also, /etc/network/interfaces does not exist; probably, as someone above mentioned, it was moved elsewhere in recent Ubuntu versions. Any idea how to fix the issue? (A sketch of the replacement config is below, in case that's the way forward.)
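     For what it's worth, Ubuntu 18.04+ replaced /etc/network/interfaces with netplan. A hedged sketch of giving enp2s0 a static address (file name and addresses are examples, not from my setup):

     ```
     # /etc/netplan/01-enp2s0.yaml  -- then run: sudo netplan apply
     network:
       version: 2
       ethernets:
         enp2s0:
           dhcp4: false
           addresses: [192.168.1.50/24]
           gateway4: 192.168.1.1
           nameservers:
             addresses: [192.168.1.1]
     ```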
  7. Hmm, it's just an annoying warning, putting a scare into users. Unfortunately some GitHub merge stuff went badly between my development fork and the Community Apps version by Bergware; hence the warning, I think. I've asked Bergware whether he'll fix it, or whether perhaps we should switch Community Apps to this repo. It works fine on unRaid 6.7.1.
  8. @bonienl There's a new cache-dirs plugin in my forked repo from 2018-12 that hasn't been merged into the main one. We had a merge accident, so a merge now looks like a conflict. I haven't merged main's changes into my fork, because I didn't want to deal with that. Maybe it's easiest to start over on a new cache-dirs-only branch? Or change the dynamix applications to expect my forked GitHub repo as the source.
  9. It's working for me on unRaid 6.7.0. I tried accessing a folder on my disk2 both from Windows and from unRaid itself, in both cases via both the user share and the disk share. The disk didn't spin up. I'm filled up at the moment, so I probably won't investigate much, but if you want to supply logs, then for them to be useful the logs from cache-dirs are needed; cache_dirs has a command to collect them. It would be awesome if that could somehow be collected automatically by unRaid, for instance via some kind of plugin event. I assume that is not possible, but if it is, let me know. If not, it might make sense to ask Tom @ Limetech if that is a reasonable feature to add. Maybe you feel like doing that, interwebtech, or tell me where to post?

     Best Alex
  10. Wow, nice machine. It's low-level Linux code that does the actual reading of the directory structure; cache-dirs itself just spawns many find processes. I think I mentioned in previous messages how many files I have and what memory I use, so if you skim above you might find it. I can't think of anything more helpful for you than to experiment. You can check whether cache-dirs is currently scanning disks (see below) and look at the cache_dirs debug flags; the script contains some statements that might help you debug, if you are not already a Linux expert.
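      A quick way to check, relying only on the fact that cache-dirs spawns ordinary find processes:

      ```
      # If cache_dirs is mid-scan, its find subprocesses show up in the process list.
      # The [f] trick keeps grep from matching itself.
      ps -ef | grep '[f]ind /mnt'
      ```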
  11. I'm sorry, I don't have the time to go into a detailed problem-solving session. It's probably uphill for you if you are not familiar with scripts, but here are some hints. cache_dirs should be in the path; otherwise use /usr/local/emhttp/plugins/dynamix.cache.dirs/scripts/cache_dirs.

      ```
      sudo /usr/local/emhttp/plugins/dynamix.cache.dirs/scripts/cache_dirs -h
      ```

      gives you the commands it runs. It spawns find subprocesses, which do the actual reading of dirs, so yeah, that indicates cache_dirs is not your problem, though of course maybe it caches too little, and that is when you see those slow folders. Maybe you have too little RAM, but that's just guesswork. cache_dirs only caches /mnt/disk* and /mnt/cache, not other root folders, but those root folders should be mounted in memory-space on unRaid, unless you've done some manual mounts, which I doubt you have. (A quick check is below.)

      Best Alex
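      Standard Linux commands suffice to verify what backs those paths; nothing cache_dirs-specific:

      ```
      df -h /mnt/user /mnt/disk1   # shows which filesystem backs each path
      mount | grep /mnt            # lists everything mounted under /mnt
      ```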
  12. I have no idea. If you run the cache_dirs script manually, maybe it's easier for you to figure out what is going on. There is also a switch to disable backgrounding, which may help in your debugging too. You have the command in what you posted at the top, or at least the first half of it.

      Best Alex
  13. @bonienl A new change is ready. Unfortunately you may find it to be a pain-in-the-arse, because the last one was not merged via git but manually applied. Anyway, I haven't created a new change request, as any change I push automatically gets added to the existing one. The new version is cache-dirs 2.2.7 and includes fixes to the -a option, which allows users to exclude dirs via arbitrary find exclude commands; stuff from Joe that I ruined long ago.

      bonienl, I suspect this is the commit you merged manually last: 8bef2e6dff9cb76553965fd898ef8692b0a29625. I would merge everything up to the point you merged last, so git knows about it, and then merge normally; see the sketch below. I'll point you all to bonienl when this next merge is done. I don't remember if I already pointed everybody back to main in Nov/Dec, but I'll do it again.
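      Roughly what I mean, as a sketch (the remote name 'fork' is illustrative):

      ```
      git fetch fork
      git merge 8bef2e6dff9cb76553965fd898ef8692b0a29625   # record the last manually-applied state in history
      git merge fork/master                                # then the remaining changes merge normally
      ```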
  14. I noticed a few days ago that cache_dirs was scanning my dirs at depth 5, each scan took 45 s and touched the disks, and it had been going on for a long time. A parity scan had been executed the day before; I don't know if it was related. The interesting part is that it quickly recovered to full depth and idle disks after I wrote 7 GB to the /tmp drive and deleted it again (with my test_free_memory.sh script, sketched below). Perhaps the writing caused Linux to move some memory cache around. I'm using a cache pressure of 1. This looks a bit similar to what was happening to you, @wgstarks, if I'm not mistaken: if there's not enough memory free, for whatever reason, cache_dirs spams the disks.
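      test_free_memory.sh is my own little script; roughly, it does something like this:

      ```
      # Write a big file to /tmp (RAM-backed on stock unRaid) and delete it,
      # nudging the kernel into reshuffling its memory cache.
      dd if=/dev/zero of=/tmp/test_free_memory.bin bs=1M count=7000
      rm /tmp/test_free_memory.bin
      ```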
  15. I've released a new version on my 'beta' fork: https://raw.githubusercontent.com/arberg/dynamix/master/unRAIDv6/dynamix.cache.dirs.plg

      It fixes the -a option and adds help information to the plugin page on how to filter dirs. Example: `-a '-noleaf -name .Recycle.Bin -prune -o -name log -prune -o -name temp -prune -o -name .sync -prune -o -print'`

      Avoid the () of Joe's example. Unfortunately * and " do not work, so we cannot filter for "*Old": the plugin messes up the double quotes, and the cache_dirs script does not respond correctly even when it receives a properly quoted -name "*Old" -prune. (A command-line example is below.) I'll push to dynamix, so it'll probably be live in the main in a few days.
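      For reference, a hedged example of the same filter on the command line, assuming the script accepts -a the same way the plugin field does:

      ```
      # Hypothetical invocation: exclude .Recycle.Bin and log dirs from caching
      cache_dirs -a '-noleaf -name .Recycle.Bin -prune -o -name log -prune -o -print'
      ```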