Alex R. Berg

About Alex R. Berg

  • Birthday: 05/09/1977
  • Gender: Male
  • Location: Denmark

  1. TL;DR: see below for my solution to revert the so-called 'SSH Improvements' without reverting the security improvements. It is simply to 1) revert the HOME=/root change in /etc/profile, 2) fix the users with usermod, and 3) possibly change sshd_config.

     Man, that was one seriously annoying and very badly documented change. It effectively and purposefully disabled login for all non-root accounts on the Linux system that unRaid is. At the least they could have made bloody clear, in the documentation of the 6.9 change set, what they changed, what the consequences would be for users who previously logged in with a non-root account, and how those changes were implemented (like the change in /etc/profile) so we could undo them if need be. It's not rocket science on Linux to use an admin account instead of root.

     Thank you to the people of this thread for presenting the issue and the changes, that was much appreciated. And thank you to @Mihai, who found the security issue. I do appreciate fixing the security, but there's zero security risk in having extra non-root users available in Linux for login via ssh-key, as opposed to just root (or please enlighten me).

     I have found a solution for me, though in time I suppose it's wise to take the hint that Lime-Tech will happily destroy our multi-user systems again in the future, so I guess docker or a VM it is. But for now, migrating all my stuff there is too big a hassle for me. My solution involves reverting the annoying "export HOME=/root" change in /etc/profile. Reverting to the basic Linux standard can hardly be an issue, security-wise.

     My Solution

     At boot, in the go-script, I copy all files under /boot/custom/* to / (root), so /boot/custom/etc/profile goes to /etc/profile, and the original /etc/profile goes to /boot/custom_original/etc/profile, so on unRaid upgrades I can detect changes (if I so desire). A sketch of this copy step is at the end of this post.

     1) Revert the /etc/profile change: "export HOME=/root"

     2) Fix user home-dir and shell (/etc/passwd): for each non-root user that needs a login shell, I run usermod to set the home-dir and shell, for instance:

     ```
     usermod -d /mnt/user/home/alex -s /bin/bash alex
     usermod -d /home/admin -s /bin/bash admin
     ```

     3) sshd_config: password security in SSH is not good enough for me, and it shouldn't be good enough for you if you attach your server so it can be reached from the internet via ssh. For that I use these sshd_config changes. I accepted LimeTech's change of specifying which users get AllowTcpForwarding, I just allow more users:

     ```
     # To disable tunneled clear text passwords, change to no here!
     PasswordAuthentication no
     PermitEmptyPasswords no
     ChallengeResponseAuthentication no

     # Limit access to the following users
     AllowUsers root admin alex

     # limetech - permit only root SSH tunneling
     AllowTcpForwarding no
     Match Group root
         AllowTcpForwarding yes
     Match User alex
         AllowTcpForwarding yes
     Match User admin
         AllowTcpForwarding yes
     ```

     3a) I place sshd_config in `/boot/custom/etc/ssh` instead of /boot/config/ssh, for the simple reason that my go-script backs up unRaid's original sshd_config via the custom folder. If you don't use my custom script, just go with /boot/config/ssh. The keys I leave in /boot/config/ssh, but I also copy them to /home/admin/.ssh so I have my admin user for when the array isn't mounted.

     unraid-custom-fix-ssh-improvements.zip

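     A minimal sketch of the go-script copy step described above (simplified for illustration; this is not the exact script from the zip, and it assumes plain files under /boot/custom):

     ```
     # Overlay /boot/custom/* onto /, keeping the stock files under
     # /boot/custom_original so later unRaid upgrades can be diffed.
     cd /boot/custom || exit 1
     find . -type f | while read -r f; do
       f=${f#./}                                            # e.g. etc/profile
       mkdir -p "/boot/custom_original/$(dirname "$f")"
       [ -f "/$f" ] && cp "/$f" "/boot/custom_original/$f"  # save the unRaid original
       cp "/boot/custom/$f" "/$f"                           # install the customized file
     done
     ```
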
  2. The UnRaid > Share GUI does not correctly present the state of the defined includes when a share includes non-existing disks. Any change in the GUI (such as the description) will remove non-existing disks from the share's include definition.

     I shrunk my array from four disks to one. After that operation I could not write to any disk via the user-share mount (no space available error), but there was no information regarding this in the UnRaid > Share GUI. I realised this was caused by /boot/config/shares/foobar.cfg containing shareInclude="disk2,disk3" while I only had disk1. I tested it, and writing is possible if shareInclude="disk1,disk2", but then, when updating the share, it becomes shareInclude="disk1".

     Since the OS updates the share automatically after save, it would make sense for it to fix the share when mounting the array the first time it has fewer disks, and simply remove any disk that doesn't exist from the include/exclude definition. Or it would be nice to have a button on the outer share overview page that did this. A cleanup along those lines is sketched below.

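     To illustrate, a hypothetical (untested) sketch of such a cleanup for a single share config; the share name is just an example, and you should back up the .cfg first:

     ```
     #!/bin/bash
     # Drop disks that no longer exist from a share's include list.
     cfg=/boot/config/shares/foobar.cfg               # example share config
     include=$(sed -n 's/^shareInclude="\(.*\)"/\1/p' "$cfg")
     keep=()
     IFS=',' read -ra disks <<< "$include"
     for d in "${disks[@]}"; do
       [ -d "/mnt/$d" ] && keep+=("$d")               # keep only disks that exist
     done
     new=$(IFS=','; echo "${keep[*]}")
     sed -i "s/^shareInclude=\".*\"/shareInclude=\"$new\"/" "$cfg"
     ```
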
  3. Hi, I'm interested in getting fan control to work based on disk temperatures on QNAP hardware with unRaid. I have bash experience. I've forgotten which packages to install on unRaid to get C compilation working, so I would appreciate a few pointers (I have the DevPack/dev tools plugin, but make fails with the library 'libguile-2.2' missing). Also, your script is gone; if you share a gist or a GitHub repo, I can also take it from there.

     Best Alex

  4. cache-dirs will never be real, proper caching. It's a hack where we try to poke Linux into caching dirs. Maybe that's what you are encountering. You can try to enable logging; you can find some info on the logging somewhere in this thread, I believe, and also see /var/log/cache_dirs.log (or something like that). The log says when it scans everything again, which on my system happens daily, especially during large operations like a parity check, which puts load on the system. Or it could be that it just works really badly for you for unknown reasons.

     What you tried with cache-pressure matches my experience; for safety I use 1, but I also briefly tried 0. The knob in question is shown below.

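     For reference, the setting involved is the kernel's vm.vfs_cache_pressure, which is system-wide rather than specific to cache-dirs:

     ```
     sysctl vm.vfs_cache_pressure        # show the current value (kernel default is 100)
     sysctl -w vm.vfs_cache_pressure=1   # strongly prefer keeping dentries/inodes cached
     ```
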
  5. I'm not sure it's possible; that has never been tested. But most likely one of these will work. You can double-escape with "\"foo bar\"". Or you can escape the space instead: "_CCC\ SafetyNet", which should be equivalent to the above. And maybe you need to escape the backslash as well, so you add '\\', which might turn into "\": "_CCC\\\ SafetyNet". It depends on how many layers of quote removal happen between the GUI and the execution.

     Bash sucks when it comes to spaces in names, as bash operates on strings and interprets spaces as new arguments when calling another function. And unfortunately the script is written in bash; it probably grew from something small into something substantial. A toy illustration of the quoting layers is below.

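     A toy illustration of why each layer of evaluation strips one level of quoting (illustrative only, not the plugin's actual code path):

     ```
     name="_CCC SafetyNet"
     printf '<%s>\n' "$name"        # quoted: one argument  -> <_CCC SafetyNet>
     printf '<%s>\n' $name          # unquoted: word-split  -> <_CCC> and <SafetyNet>
     arg="\"$name\""                # pre-quoted value, as you might type in the GUI
     eval printf "'<%s>\n'" "$arg"  # the extra eval layer consumes the extra quotes
     ```
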
  6. I also like Everything. I've set it to scan daily, but I also have lots of non-Windows shared files, like subversion repositories and backups, that I don't need it to touch. For that usage I don't really think cache-dirs is relevant to me, as it's only a daily scan, but of course it's nice that it's there.

     I don't know why that happens. I do believe unRaid (with some settings) deletes empty folders on array disks after moving, but it doesn't delete empty folders that we, the users, have left there. I had some share settings set to Use Cache: Prefer or Only. I do believe that after moving, unRaid deletes empty folders on the source disk. But I have also experienced that unRaid does not move files to reflect changed settings, so maybe you have to delete empty dirs manually.

     `find /mnt/disk* /mnt/cache -mindepth 1 -type d -empty -delete` should delete empty dirs and only that, and thus clean up your entire array. I advise testing such commands first in a safe location. Maybe use -maxdepth 1 to see the root-level deletes, and maybe drop the -mindepth 1 so you also clean up empty roots. You can play around without the -delete option to see what it finds, as below. I have no idea whether this has anything to do with your problem; I just noticed you mentioned it scanning empty dirs. I'm using unRaid 6.8.3.

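     For instance, a safe way to try it is to do a dry run first:

     ```
     # Dry run: list the empty dirs without touching anything
     find /mnt/disk* /mnt/cache -mindepth 1 -type d -empty
     # Once the list looks right, add -delete
     find /mnt/disk* /mnt/cache -mindepth 1 -type d -empty -delete
     ```
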
  7. Yeah, and cache-dirs only caches the dirs. Programs don't care much about dirs, they care about file content, so it would make no difference whatsoever to cache a program's directories. Well, except for synchronization/backup software that actually scans folders to see what files they contain. Also, my cache disk is an SSD, so there's no need to have cache-dirs scan it. I'm still looking forward to the day when my main disk is also an SSD, but it wouldn't make a huge difference until the parity disk is an SSD too. I cache the stuff where I notice I have to wait for disks to spin up. If it is unlikely to keep me waiting, there's no need to cache it.

  8. Hi all, I appreciate the thanks, you are welcome. I've done some more testing myself, and I have not seen any spin-ups, so I also think it is working now, without the scanning of /mnt/user. It's really good to get rid of that, as it was CPU intensive. It's relatively easy to test by checking disk status in unRaid's Main dashboard: I can click the dots to spin disks down and see if they are spun up again. I then find a folder on e.g. disk3, access it through a user-share, and see if the disk spins up. So far I have not reproduced any spin-ups on unRaid 6.8.3. So I think we are good, since it also sounds like it works for you out there.

     Regarding maintenance: life moves on for all of us. I'm not really maintaining it anymore, but fortunately it does not need any maintenance. I might fix it, though, if a problem occurs that I can fix.

     === Custom Mover Script ===

     Regarding @papnikol's Q1, what do I do since I have many disks? Well, actually, regardless of cache_dirs, I use my own custom mover before the official mover script. My custom mover moves certain folders to certain disks. That way I can keep all my most used files on disk1. I have a 'ton' of files that consume little space by today's standards, whereas movies and backups mostly consume huge chunks. So I just keep all those important things on disk1 and configured unRaid to never spin down disk1. It works far better for me than unRaid's default strategy.

     If my script hasn't chosen the location for a folder, it will be moved by the official mover. I only move to my allocated disks by custom mover when there's at least 50 GB available, and otherwise fall back to the default strategy, moving to other disks. When space becomes available, my script moves the data to the correct disk later, all without me having to do anything besides the initial allocation of which folder goes where. If I want to move a share from one disk to another, I typically just add a rule and let it sort itself out in the background automatically. A minimal sketch of the idea is below.

     My custom mover also reduces the risk of loss if I lose two disks in unRaid, because if a git repository/program is spread across many disks, I'll probably lose it all if one disk's data is lost. The unRaid GUI cannot do that while still falling back to using any disk when the allocated one gets full.

     custom_mover move_if_idle mover_all

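     A minimal sketch of the idea (the rules, shares and paths are illustrative, not my actual script):

     ```
     #!/bin/bash
     # Move selected shares from the cache to their allocated disk, but only
     # while that disk has at least 50 GB free; anything without a rule is
     # left for the official mover's default allocation.
     MIN_FREE_KB=$((50 * 1024 * 1024))      # 50 GB in 1K blocks

     declare -A RULES=(                     # share -> preferred disk (examples)
       [projects]=/mnt/disk1
       [home]=/mnt/disk1
     )

     for share in "${!RULES[@]}"; do
       dest=${RULES[$share]}
       free=$(df --output=avail "$dest" | tail -n 1)
       if (( free >= MIN_FREE_KB )); then
         rsync -a --remove-source-files "/mnt/cache/$share/" "$dest/$share/"
       fi
       # else: the official mover spreads the files over other disks for now;
       # a later run moves them to the allocated disk once space frees up.
     done
     ```
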
  9. Hey all, I'm the last active maintainer of the cache_dirs plugin script (I guess since Joe wrote it). There have been 'complaints' here for a long time that cache-dirs spikes the CPU, and I finally had a look at it. As Hoopster (partially) writes above, a solution to the CPU spikes is to turn off the /mnt/user scanning.

     There were some hints in the cache-dirs log as to what part of cache-dirs caused the CPU issue, though as I'm the writer of those logs, it's easier for me to understand them. Here's a bit of info about the logs. This is a sample with user-dirs being scanned (/mnt/user):

     ```
     2021.01.31 03:01:24 Executed find in (76s) (disks 2s + user 74s) 76.41s, wavg=66.78s NonIdleTooSlow__ depth 9999
     2021.01.31 03:22:42 Executed find in (75s) (disks 2s + user 72s) 75.25s, wavg=62.39s NonIdleTooSlow__ depth 9999 slept 10s Disks idle before/after scan 6s/26s Scan completed/timedOut counter cnt=4/0/2 mode=4 scan_tmo=150s maxCur=9999 maxWeek=9999 isMaxDepthComputed=1 CPU=58%, filecount[9999]=1048758
     2021.01.31 03:23:58 Executed find in (75s) (disks 2s + user 72s) 75.36s, wavg=63.44s Idle____________ depth 9999 slept 10s Disks idle before/after scan 26s/101s Scan completed/timedOut counter cnt=5/1/0 mode=4 scan_tmo=150s maxCur=9999 maxWeek=9999 isMaxDepthComputed=1 CPU=58%, filecount[9999]=1048758
     2021.01.31 03:25:23 Executed find in (74s) (disks 2s + user 72s) 74.99s, wavg=64.42s Idle____________ depth 9999 slept 10s Disks idle before/after scan 111s/186s Scan completed/timedOut counter cnt=6/2/0 mode=4 scan_tmo=149s maxCur=9999 maxWeek=9999 isMaxDepthComputed=1 CPU=55%, filecount[9999]=1048758
     2021.01.31 03:26:48 Executed find in (75s) (disks 2s + user 72s) 75.29s, wavg=65.39s Idle____________ depth 9999 slept 10s Disks idle before/after scan 196s/271s Scan completed/timedOut counter cnt=7/3/0 mode=4 scan_tmo=149s maxCur=9999 maxWeek=9999 isMaxDepthComputed=1 CPU=55%, filecount[9999]=1048758
     2021.01.31 03:28:13 Executed find in (79s) (disks 2s + user 77s) 79.42s, wavg=65.39s NonIdleTooSlow__ depth 9999 slept 10s Disks idle before/after scan 281s/31s Scan completed/timedOut counter cnt=8/0/1
     ```

     The last line says it took 2 sec to scan /mnt/disk* and /mnt/cache*, and 'only' 77 seconds to scan /mnt/user. 77s is so much that it must be doing real CPU work or disk access; 2 sec to scan the disks indicates they are properly cached in memory. The second-to-last line reads `idle before/after scan 196s/271s`, which indicates the disks were idle before, during, and after the scan, so no file was actually read. Hence cache-dirs is doing some crazy CPU work scanning /mnt/user.

     All cache-dirs does is scan /mnt/user with 'find' (see the sketch below), so there's no way to fix this except to turn it off. The reason we might want it turned on is that in a recent version of unRaid (some years back), disks started spinning up if /mnt/user wasn't included in the cache. So, in other words, it's likely cache-dirs is broken now: we need to turn off the /mnt/user scan or it eats our CPU, but if we turn it off, it might not prevent disk spin-ups from just reading dirs, which was almost the whole point of it. Even if it still spins up disks, it might remain useful in some scenarios, like when syncing a directory structure regularly: the sync program doesn't have to load all folders from disk, because they actually are in memory; unRaid just spins up the disks anyway.

     Best Alex

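     To make that concrete, the core of what cache_dirs does boils down to something like this (heavily stripped down; the real script adds depth limits, timing and the adaptive logic discussed above):

     ```
     while true; do
       # Re-walk the trees so the kernel keeps their dentries cached
       find /mnt/disk[0-9]* /mnt/cache -noleaf >/dev/null 2>&1
       sleep 10
     done
     ```
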
  10. Interwebtech: Darn annoying that it doesn't just work! I doubt that the minimum depth should matter if you go shallower. It does what it says on the box: it always scans each share, but then it does `find -maxdepth n`, where n is your depth. With the minimum level, n is at least 4, so it'll scan /mnt/disk1/share/X/Y/Z/A, though I might be off by one.

     I've also noticed lately that my system is slow to display folders, which is exactly what cache-dirs should prevent, but I've been ignoring it, not wanting to reopen that can of worms called hacking Linux with cache-dirs. I remember people having issues with disks spinning up, but I didn't, and that's why I added the scan of /mnt/user; it's in the option panel: `Scan user shares (/mnt/user):`.

     I just tested on my unRaid 6.8.3: disks spun up when accessing /mnt/user/media/Film, which they definitely shouldn't. I had 'scan users folder' = false. I now set 'scan users folder' = true, then set the max level to 4 to get a quick test, spun down a disk, and accessed the share from Windows. Now it does not spin up.

     adoucette: This does not fix the CPU issues you mentioned, as those are surely caused by loss of cache and a following rescan. Oh, you might want to enable logging; I always have logging enabled, so I can tail the cache log when I want to verify the behaviour of cache-dirs:

     alias tailcachelog='tail -n 2000 -f /var/log/cache_dirs.log'

     though beware that log rotation is needed if logging is kept on permanently. There are files included, but I'm not sure if the log rotation is active; see /etc/logrotate.d (a hypothetical entry is sketched below).

     PS: I use a cache-pressure of 1. I think 0 means never release the cache, which might cause out-of-memory. It's system-wide, not just cache-dirs.

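     For completeness, a minimal /etc/logrotate.d entry might look roughly like this (an assumption, not taken from the plugin's shipped files):

     ```
     /var/log/cache_dirs.log {
         weekly
         rotate 4
         compress
         missingok
         notifempty
     }
     ```
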
  11. Hi Ari,

     Here's some help that I wrote in the plugin:

     ```
     Folder Caching Info

     The folder caching (cache_dirs) reads directories continuously to keep them
     in memory. The program does not know whether a directory is already in
     memory or needs to be reread from disk. Therefore it cannot avoid spinning
     up disks when the Linux memory cache happens to evict a directory and
     cache_dirs rescans it. If a large number of files are being kept in memory,
     it seems inevitable that some directories get evicted from the cache when
     the system is under load.

     The program by default uses an adaptive mode where it reduces the depth of
     the directory structure which it tries to keep in memory. When it detects a
     cache miss (slow scan time) it will reduce the depth until disks are idle
     again, but it will still be at risk of putting load on the disks for a
     minute or two until the program gives up and waits for idle disks. The
     fewer files the cache needs to hold, the less likely it is that the program
     spins up disks.
     ```

     It doesn't quite address your issue directly (it talks about spinning up disks), but it's part of the same issue: directories getting evicted from memory. Try reducing the folders it caches. Don't expect it'll ever be perfect, as it's a hack we use; it doesn't tell the OS to never evict dirs. Oh, also quite important: maybe increase the cache-pressure. There's info on that in the help as well, by clicking the ? in the unRaid GUI while in the folder caching GUI.

     Best Alex

  12. Here's a very minor bug I just found in the current plugin. I used the nice preClear plugin with the text GUI to start a preclear, which I then almost immediately terminated via the stop-preclear session in the GUI. The preclear kept being displayed under Main in the unRaid GUI until I manually deleted the three /tmp/preclear* files, as below.

     Date Updated: May 7, 2020
     Current Version: 2020.05.07

     Best Alex

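     For anyone hitting the same thing, the manual cleanup amounts to:

     ```
     ls /tmp/preclear*    # inspect the leftover status files first
     rm /tmp/preclear*    # remove them so Main stops showing the session
     ```
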
  13. It's obviously an unRaid issue if multiple devices get assigned to the same bus in the XML generated by unRaid. It might well be a more subtle issue that IPv4 doesn't get assigned automatically via the public br0, I mean the router's DHCP. Might be KVM, might be Ubuntu, might be unRaid.

  14. Thank you Mickey. You helped, even if it was just to make me realize I had kind of already found the solution, I just didn't know it. I think something has confused me. I tried messing around with the bus earlier, after another thread suggested it, and it didn't work. But maybe it did work. I got it working before I wrote the thread by removing the filesystem mount, but now I'm unsure whether that deletion happened, because I noticed I still have the filesystem mount. I just thought I had removed it, so I spent the afternoon messing with samba.

     Anyway, long story short, here's what I did. I swapped the buses so the filesystem is 0x02 and the eth-interface is 0x01. Then the network appears in the top right corner and is available in settings in the Ubuntu GUI. I then added a manual IPv4 address, and then it worked. With automatic DHCP it didn't get an address, as verified by ifconfig and pinging google.

     ```
     <filesystem type='mount' accessmode='passthrough'>
       <source dir='/mnt/user'/>
       <target dir='ubuntu'/>
       <alias name='fs0'/>
       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
     </filesystem>
     <interface type='bridge'>
       <mac address='52:54:00:03:7e:e6'/>
       <source bridge='br0'/>
       <target dev='vnet1'/>
       <model type='virtio'/>
       <alias name='net0'/>
       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
     </interface>
     ```

     I just checked, and now I have internet, and mounting works with `sudo mount -t 9p -o trans=virtio mnt /mnt`. As a side note, I read in another thread that mounting like this would be faster:

     sudo mount -t 9p -o msize=262144,trans=virtio,version=9p2000.L,_netdev,rw mount-tag /mnt/user

  15. I have this problem on my freshly installed Ubuntu 20.04 VM. When I boot without any unRaid share, I have network access, but there is no network when booting with a share.

     ip link gives me precisely the same output with or without a share in the VM settings (after boot); my link is called 'enp2s0'. 'sudo dhclient enp2s0' seems to run forever; I killed it after about 5 minutes, both with and without network.

     I don't have /etc/cloud, and the cloud-init command is not found, maybe because I did a minimal Ubuntu install. Also, /etc/network/interfaces does not exist. Probably, as someone above mentioned, it was moved elsewhere in recent Ubuntu versions (see the sketch below). Any idea how to fix this issue?

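     For reference: on Ubuntu 20.04 the interface configuration normally lives in netplan rather than /etc/network/interfaces, so a minimal DHCP config for the enp2s0 link above might look like this (hypothetical file name, untested on this setup), applied with 'sudo netplan apply':

     ```
     # /etc/netplan/01-enp2s0.yaml
     network:
       version: 2
       ethernets:
         enp2s0:
           dhcp4: true
     ```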