Alex R. Berg

Members

  • Posts: 192
  • Joined
  • Last visited
  • Days Won: 1
Alex R. Berg last won the day on October 30, 2018

Alex R. Berg had the most liked content!

1 Follower

About Alex R. Berg

  • Birthday: 05/09/1977
  • Gender: Male
  • Location: Denmark


Alex R. Berg's Achievements

  • Explorer (4/14)
  • Reputation: 15

  1. cache-dirs will never be real, proper caching. It's a hack where we try to poke Linux into caching dirs, and maybe that's what you're encountering. You can try enabling logging; there's some info on it somewhere in this thread, and also see /var/log/cache_dirs.log. The log says when it scans everything again, which on my system happens daily, especially during large operations like a parity check that put load on the system. Or it could just be working really badly for you for unknown reasons. What you tried with cache-pressure matches my experience; for safety I use 1, but I also tried 0 briefly. A quick way to inspect and set it is sketched below.
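
For reference, the cache-pressure knob is the standard Linux sysctl `vm.vfs_cache_pressure`; a minimal sketch of checking and setting it, with the value 1 I mentioned above:

```bash
# Inspect the current value (the Linux default is 100).
sysctl vm.vfs_cache_pressure

# Favor keeping directory/inode caches in memory. 0 would mean
# "never reclaim" and risks out-of-memory, so 1 is the safer choice.
sysctl -w vm.vfs_cache_pressure=1
```
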
  2. I'm not sure it's possible; that has never been tested. But most likely one of these will work. You can double-escape with "\"foo bar\"". You can also escape the space instead: "_CCC\ SafetyNet", which should be equivalent. And maybe you need to escape the backslash as well, adding '\\', which might turn into '\': "_CCC\\\ SafetyNet". It depends on how many layers of quote removal happen between the GUI and the execution. Bash is painful when it comes to spaces in names, because it operates on strings and interprets spaces as argument separators when calling another function. And unfortunately the script is written in bash; it probably grew from something small into something substantial. The sketch below shows how each layer consumes one level of quoting.
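
A minimal sketch of the layering idea, assuming each layer behaves like one pass of bash quote removal:

```bash
# A name with a space, as in "_CCC SafetyNet":
name="_CCC SafetyNet"

# One layer: quoting the expansion keeps the space from splitting words.
echo "$name"

# Two layers: eval re-parses the string, consuming one more level of quotes.
eval echo "\"$name\""

# Escaping the space with a backslash works for a single layer too.
echo _CCC\ SafetyNet
```
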
  3. I also like Everything. I've set it to scan daily, but I also have lots of non-Windows shared files, like Subversion repositories and backups, that I don't need it to touch. For that usage I don't really think cache-dirs is relevant to me, since it's only a daily scan, but of course it's nice that it's there. I don't know why that happens. I do believe unRaid (with some settings) deletes empty folders on the source disk after moving, but it doesn't delete empty folders that we, the users, have left there. I had some share settings set to Use Cache: Prefer or Only. I've also seen unRaid not move files to reflect changed settings, so maybe you have to delete empty dirs manually. `find /mnt/disk* /mnt/cache -mindepth 1 -type d -empty -delete` should delete empty dirs and only that, and thus clean up your entire array. I advise testing such commands in a safe location first: maybe with -maxdepth 1 to preview root-level deletes, and maybe drop -mindepth 1 so empty roots get cleaned up as well. You can play around without the -delete option to see what it finds; see the dry runs below. I have no idea whether this has anything to do with your problem, I just noticed you mentioned it scanning empty dirs. I'm using unRaid 6.8.3.
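
If you want to test it the way I suggest, run it dry first (same predicates, no `-delete`):

```bash
# List empty directories without deleting anything.
find /mnt/disk* /mnt/cache -mindepth 1 -type d -empty

# Preview only the top level, to see what root-level deletes would hit.
find /mnt/disk* /mnt/cache -mindepth 1 -maxdepth 1 -type d -empty

# Once the output looks right, add -delete to actually remove them.
find /mnt/disk* /mnt/cache -mindepth 1 -type d -empty -delete
```
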
  4. Yeah, and cache-dirs only caches the dirs. Programs don't care much about dirs; they care about file content, so caching a program's directories would make no difference whatsoever, except for synchronization/backup software that actually scans folders to see what files they contain. Also, my cache disk is an SSD, so there's no need to have cache-dirs scan it. I'm still looking forward to the day when my main disk is also an SSD, but it wouldn't make a huge difference until the parity is an SSD too. I cache the stuff where I notice I have to wait for disks to spin up. If something is unlikely to keep me waiting, there's no need to cache it.
  5. Hi all, I appreciate the thanks, you are welcome. I've done some more testing myself and have not seen any spin-ups, so I also think it is working now without the scanning of /mnt/user. It's really good to be rid of that, as it was CPU intensive. It's relatively easy to test by checking disk status on unRaid's Main dashboard: I click the dots to spin disks down, then find a folder on e.g. disk3, access it through the user share, and see whether the disk spins up. So far I have not reproduced any spin-ups on unRaid 6.8.3. So I think we are good, since it also sounds like it works for you out there. Regarding maintenance, life moves on for all of us. I'm not really maintaining it anymore, but fortunately it does not need any maintenance. I might still fix it if a problem occurs that I can fix.

==== Custom Mover Script ====

Regarding @papnikol's Q1, what do I do with many disks? Regardless of cache_dirs, I actually run my own custom mover before the official mover script. My custom mover moves certain folders to certain disks; that way I can keep all my most-used files on disk1. I have a 'ton' of files that consume little space by today's standards, whereas movies and backups consume the huge chunks. So I keep all the important things on disk1 and configured unRaid to never spin disk1 down. That works far better for me than unRaid's default strategy. If my script hasn't chosen a location for a folder, it gets moved by the official mover. My custom mover only moves to an allocated disk when there's at least 50 GB available; otherwise it falls back to the default strategy of moving to other disks. When space becomes available, my script moves things to the correct disk later, all without me doing anything beyond the initial allocation of which folder goes where. If I want to move a share from one disk to another, I typically just add a rule and let it sort itself out in the background. The custom mover also reduces the risk of loss if I lose two disks in unRaid: if a git repository/program is spread across many disks, I'll probably lose it all if one disk's data is lost. The unRaid GUI cannot do that while still falling back to using any disk when I fill up. A rough sketch of the idea follows below.

custom_mover move_if_idle mover_all
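
I haven't published the script, but here is a rough sketch of the idea under stated assumptions: the share-to-disk rules and the 50 GB threshold are from the description above, while the share names, paths, and rsync as the move mechanism are purely illustrative:

```bash
#!/bin/bash
# Hypothetical per-share mover, run before the official mover.
# Share names and target disks below are examples, not my real rules.
declare -A TARGET=( [documents]=/mnt/disk1 [projects]=/mnt/disk1 )
MIN_FREE_KB=$((50 * 1024 * 1024))   # only move when >= 50 GB is free

for share in "${!TARGET[@]}"; do
    dest=${TARGET[$share]}
    free_kb=$(df --output=avail "$dest" | tail -n 1)
    if (( free_kb >= MIN_FREE_KB )); then
        # Move this share's files from the cache to its allocated disk;
        # anything this script skips falls through to the official mover.
        rsync -a --remove-source-files "/mnt/cache/$share/" "$dest/$share/"
    fi
done
```
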
  6. Hey all, I'm the last active maintainer of the cache-dirs plugin script (I guess since Joe wrote it). There have been 'complaints' here for a long time that cache-dirs spikes the CPU, and I finally had a look at it. As Hoopster (partially) writes above, a solution to the CPU spikes is to turn off the /mnt/user scanning. There were some hints in the cache-dirs log as to which part of cache-dirs caused the CPU issue, though since I wrote those logs, they're easier for me to understand. Here's a bit of info about the logs. This is a sample with user dirs being scanned (/mnt/user):

```
2021.01.31 03:01:24 Executed find in (76s) (disks 2s + user 74s) 76.41s, wavg=66.78s NonIdleTooSlow__ depth 9999
2021.01.31 03:22:42 Executed find in (75s) (disks 2s + user 72s) 75.25s, wavg=62.39s NonIdleTooSlow__ depth 9999 slept 10s Disks idle before/after scan 6s/26s Scan completed/timedOut counter cnt=4/0/2 mode=4 scan_tmo=150s maxCur=9999 maxWeek=9999 isMaxDepthComputed=1 CPU=58%, filecount[9999]=1048758
2021.01.31 03:23:58 Executed find in (75s) (disks 2s + user 72s) 75.36s, wavg=63.44s Idle____________ depth 9999 slept 10s Disks idle before/after scan 26s/101s Scan completed/timedOut counter cnt=5/1/0 mode=4 scan_tmo=150s maxCur=9999 maxWeek=9999 isMaxDepthComputed=1 CPU=58%, filecount[9999]=1048758
2021.01.31 03:25:23 Executed find in (74s) (disks 2s + user 72s) 74.99s, wavg=64.42s Idle____________ depth 9999 slept 10s Disks idle before/after scan 111s/186s Scan completed/timedOut counter cnt=6/2/0 mode=4 scan_tmo=149s maxCur=9999 maxWeek=9999 isMaxDepthComputed=1 CPU=55%, filecount[9999]=1048758
2021.01.31 03:26:48 Executed find in (75s) (disks 2s + user 72s) 75.29s, wavg=65.39s Idle____________ depth 9999 slept 10s Disks idle before/after scan 196s/271s Scan completed/timedOut counter cnt=7/3/0 mode=4 scan_tmo=149s maxCur=9999 maxWeek=9999 isMaxDepthComputed=1 CPU=55%, filecount[9999]=1048758
2021.01.31 03:28:13 Executed find in (79s) (disks 2s + user 77s) 79.42s, wavg=65.39s NonIdleTooSlow__ depth 9999 slept 10s Disks idle before/after scan 281s/31s Scan completed/timedOut counter cnt=8/0/1
```

The last line says it took 2 s to scan /mnt/disk* and /mnt/cache*, and 'only' 77 seconds to scan /mnt/user. 77 s is so long that it must be doing real CPU work or disk access; 2 s to scan the disks indicates they are properly cached in memory. The second-to-last line reads `idle before/after scan 196s/271s`, which indicates the disks were idle before, during, and after the scan, so no file was actually read. Hence cache-dirs is burning crazy CPU just scanning /mnt/user. All cache-dirs does is scan /mnt/user with 'find', so there's no way to fix this except to turn it off. The reason we might want it on is that a recent version of unRaid (some years back) started spinning up disks if /mnt/user wasn't included in the cache. In other words, cache-dirs is likely broken now: we need to turn off the /mnt/user scan or it eats our CPU, but if we turn it off, it might not prevent disk spin-ups from simply reading dirs, which was almost the whole point of it. Even if it still spins up disks, it might remain useful in some scenarios, like regularly syncing a directory structure: the sync program doesn't have to load all folders from disk, because they actually are in memory; unRaid just spins the disks up anyway. If you want to reproduce the measurement, see the timing sketch below. Best Alex
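
To reproduce the disks-versus-user measurement from the log on your own box, a rough approximation (the plugin's exact find options differ slightly):

```bash
# Time a metadata-only walk of the physical disk views...
time find /mnt/disk* /mnt/cache >/dev/null

# ...versus the /mnt/user fuse view of the same files. A large gap
# while the disks stay idle points at CPU burned in the user-share layer.
time find /mnt/user >/dev/null
```
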
  7. Interwebtech: Darn annoying that it doesn't just work! I doubt the minimum depth should matter if you go shallower. It does what it says on the box: it always scans each share, but then runs `find -maxdepth n`, where n is your depth. With the minimum level, n is at least 4, so it'll scan /mnt/disk1/share/X/Y/Z/A, though I might be off by one. I've also noticed lately that my system is slow to display folders, which is exactly what cache-dirs should prevent, but I've been ignoring it, not wanting to reopen that can of worms called hacking Linux with cache-dirs. I remember people having issues with disks spinning up, but I didn't, and that's why I added the scan of /mnt/user; it's in the option panel: `Scan user shares (/mnt/user):`. I just tested on my unRaid 6.8.3: disks spun up when accessing /mnt/user/media/Film, which they definitely shouldn't. I had scan user folders = false. I then set scan user folders = true, set max level to 4 to get a quick test, spun down a disk, and accessed the share from Windows. Now it does not spin up. adoucette: this does not fix the CPU issues you mentioned, as those are surely caused by loss of cache and the rescan that follows. Oh, and you might want to enable logging. I always have logging enabled, so I can tail the cache log when I want to verify the behaviour of cache-dirs: alias tailcachelog='tail -n 2000 -f /var/log/cache_dirs.log'. Beware that log rotation is needed if you keep logging on permanently; rotation files are included, but I'm not sure the logrotation is active. See /etc/logrotate.d, and the sketch below. PS: I use a cache-pressure of 1. I think 0 means never release cache, which might cause out-of-memory. It's system wide, not just cache-dirs.
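
If the shipped rotation turns out not to be active, a hypothetical entry you could drop in yourself (the directives are standard logrotate; verify the log path first):

```bash
# Write a minimal rotation rule for the cache_dirs log.
cat > /etc/logrotate.d/cache_dirs <<'EOF'
/var/log/cache_dirs.log {
    weekly
    rotate 4
    compress
    missingok
    notifempty
}
EOF
```
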
  8. Hi Ari, here's some help text that I wrote in the plugin:

```
Folder Caching Info
The folder caching (cache_dirs) reads directories continuously to keep
them in memory. The program does not know whether a directory is already
in memory or needs to be reread from disk. Therefore it cannot avoid
spinning up disks when the Linux memory cache happens to evict a
directory and cache_dirs rescans it. If a large number of files are kept
in memory, it seems inevitable that some directories get evicted from
the cache when the system is under load. By default the program uses an
adaptive mode where it reduces the depth of the directory structure it
tries to keep in memory. When it detects a cache miss (a slow scan time)
it reduces the depth until the disks are idle again, but it is still at
risk of putting load on the disks for a minute or two until it gives up
and waits for idle disks. The fewer files the cache needs to hold, the
less likely it is that the program spins up disks.
```

This doesn't address your issue directly, but it's part of the same problem: directories get evicted from memory. Try reducing the folders it caches. Don't expect it to ever be perfect; it's a hack we use, and it doesn't tell the OS to never evict dirs. Oh, and quite important: maybe lower the cache-pressure. There's also info on that in the help, by clicking the '?' in the unRaid GUI while on the folder caching page. A toy sketch of the adaptive mode follows below. Best Alex
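
A toy sketch of the adaptive mode described in that help text, just to illustrate the feedback loop; the thresholds and sleep interval are made up, and the real plugin is considerably more involved:

```bash
depth=9999
while true; do
    start=$SECONDS
    find /mnt/disk* -maxdepth "$depth" >/dev/null 2>&1
    elapsed=$(( SECONDS - start ))
    if (( elapsed > 10 && depth > 1 )); then
        depth=$(( depth - 1 ))   # slow scan: likely cache miss, hold less
    elif (( elapsed < 2 && depth < 9999 )); then
        depth=$(( depth + 1 ))   # fast scan: safe to cache deeper again
    fi
    sleep 10
done
```
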
  9. Here's a very minor bug I just found in the current plugin. I used the nice preClear plugin with the text GUI to start a preclear, which I then terminated almost immediately via the stop-preclear session in the GUI. The preclear kept being displayed under Main in unRaid until I manually deleted the three /tmp/preclear* files (see below). Date Updated: May 7, 2020. Current Version: 2020.05.07. Best Alex
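
The manual cleanup, in case anyone hits the same stale display (check what matches before removing anything):

```bash
# See which preclear state files are lingering, then remove them.
ls -l /tmp/preclear*
rm /tmp/preclear*
```
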
  10. It's obviously an unRaid issue if multiple devices get assigned to the same bus in the XML that unRaid generates. The IPv4 address not being assigned automatically via the public br0 (I mean the router's DHCP) might well be a more subtle issue. Might be KVM, might be Ubuntu, might be unRaid.
  11. Thank you Mickey. You helped, even if it was just to make me realize I had kind of already found the solution, I just didn't know it. I think something had confused me: I tried messing around with the bus earlier, per another thread, and it didn't work. But maybe it did work. I got it working before I wrote this thread by removing the filesystem mount, but now I'm unsure whether that deletion ever happened, because I noticed I still have the filesystem mount. I just thought I had removed it, so I spent the afternoon messing with Samba. Anyway, long story short, here's what I did. I swapped the buses so the filesystem is 0x02 and the eth interface is 0x01. Then the network appears in the top right corner and is available in Settings in the Ubuntu GUI. I then added a manual IPv4 address, and it worked. With automatic DHCP it didn't get an address, as verified by ifconfig and pinging google.

```
<filesystem type='mount' accessmode='passthrough'>
  <source dir='/mnt/user'/>
  <target dir='ubuntu'/>
  <alias name='fs0'/>
  <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
</filesystem>
<interface type='bridge'>
  <mac address='52:54:00:03:7e:e6'/>
  <source bridge='br0'/>
  <target dev='vnet1'/>
  <model type='virtio'/>
  <alias name='net0'/>
  <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
</interface>
```

I just checked: I now have internet, and mounting works with `sudo mount -t 9p -o trans=virtio mnt /mnt`. As a side note, I read in another thread that mounting like this would be faster: `sudo mount -t 9p -o msize=262144,trans=virtio,version=9p2000.L,_netdev,rw mount-tag /mnt/user`. A persistent fstab variant is sketched below.
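
If you want the mount to persist across reboots, a hypothetical /etc/fstab line based on the faster options above (the mount tag and mount point are whatever your XML and layout actually use):

```bash
# Append a 9p entry to fstab; _netdev defers mounting until networking is up.
echo 'mount-tag /mnt/user 9p msize=262144,trans=virtio,version=9p2000.L,_netdev,rw 0 0' \
    | sudo tee -a /etc/fstab
```
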
  12. I have this problem in my freshly installed Ubuntu 20.04 VM. When I boot without any unRaid share I have network access, but no network when booting with a share. ip link gives me precisely the same output with or without a share in the VM settings (after boot); my link is called 'enp2s0'. `sudo dhclient enp2s0` seems to run forever; I killed it after about 5 minutes, both with and without the share. I don't have /etc/cloud, and the cloud-init command is not found, maybe because I did a minimal Ubuntu install. /etc/network/interfaces does not exist either; as someone above mentioned, it was probably moved elsewhere in recent Ubuntu versions (a netplan guess is sketched below). Any idea how to fix this issue?
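
For what it's worth, Ubuntu 18.04+ replaced /etc/network/interfaces with netplan, so here's my guess at a minimal DHCP config for the enp2s0 link (the file name is assumed):

```bash
# Write a minimal netplan file asking for DHCP on enp2s0, then apply it.
sudo tee /etc/netplan/01-enp2s0.yaml >/dev/null <<'EOF'
network:
  version: 2
  ethernets:
    enp2s0:
      dhcp4: true
EOF
sudo netplan apply
```
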
  13. Hmm, it's just an annoying warning that puts a scare into users. Unfortunately some GitHub merge stuff went badly between my development fork and the Community Apps version by Bergware; hence the warning, I think. I've asked Bergware if he'll fix it, or whether we should perhaps switch Community Apps to this repo. It works fine on unRaid 6.7.1.
  14. @bonienl There's a new cache-dirs plugin in my repo fork from 2018-12 that hasn't been merged into the main repo. We had a merge accident, so a merge came to look like a conflict. I haven't merged the main changes into my fork, because I didn't want to deal with that. Maybe it's easiest to start over on a new cache-dirs-only branch? Or change the Dynamix applications to expect my forked GitHub repo as the source.
  15. It's working for me on unRaid 6.7.0. I tried accessing a folder on my disk2 from both Windows and unRaid itself, in both cases via both the user share and the disk share; the disk didn't spin up. I'm swamped at the moment, so I probably won't investigate much, but if you want to supply logs, the logs from cache-dirs are what's needed for them to be useful; cache_dirs has a command to collect them. It would be awesome if they could somehow be collected automatically by unRaid, for instance via some kind of plugin event. I assume that is not possible, but if it is, let me know. If not, it might make sense to ask Tom @ Limetech whether that is a reasonable feature to add. Maybe you feel like doing that, interwebtech, or can tell me where to post? A manual spin-up test is sketched below. Best Alex
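
If anyone wants to repeat the spin-up test from the command line rather than the dashboard, a sketch with hdparm (the device name is assumed; map yours via the Main page):

```bash
# Put the drive into standby, poke the share, then ask for its power state.
sudo hdparm -y /dev/sdc          # spin the drive down
ls /mnt/user/media >/dev/null    # browse a folder via the user share
sudo hdparm -C /dev/sdc          # reports "standby" if it stayed asleep
```
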