NAS

Posts posted by NAS

  1. I was indeed agreeing.

     

    Just for clarity, the normal security reporting methodology is to start with private contact. Normally this is for unpublished vulnerabilities, but it holds equally true for published ones where the vendor may just not have noticed, or has noticed and something has gone wrong and they wrongly assume fixes are in place. It is VERY common for vendors to patch and release but not pen test the actual release afterwards.

     

    After a reasonable period of time, if it remains unresolved, you can and should then post publicly so that users who are vulnerable have the maximum chance to hear about it and make an informed decision on what the risk is to them and how to handle it.

     

    I don't think it would be unfair to say no one in the history of this project has prodded more about security than me.

     

    I am not and never have been an employee of Limetech LLC and have never received any monetary or gift rewards other than a single license for testing.

    • Like 1
  2. It is important that, for users who choose a non-subscription model, even if that is just implicit in the fact that they use only the traditional unRAID product, there be no phone-home or other services that reach out of the system, whether via the subscription services running in "off" mode or by any other mechanism.

     

    I cannot stress this enough. Feel free to add value in whatever way suits your business, but don't break that trust model whilst doing so.

    • Like 2
  3. Whilst it is not ideal that the poster did not follow normal security reporting etiquette, it is clear there is an issue and it is of our own making.

     

    See

     

    versus

     

    http://www.slackware.com/security/list.php?l=slackware-security&y=2020

     

    tl;dr we are long overdue an update but we have slipped into the old habit of waiting for the development branch to be ready and ignoring the stable branch.

     

    It is not the end of the world, but it's a habit we need to break again ASAP.

    • Like 3
  4. This is a very interesting poll and I commend the people responsible.

     

    I do however question why `SSD Array option with Trim support` is on this poll.

     

    The other items in the poll are feature enhancements, nice-to-haves or power-user edge cases... but supporting SSDs in 2020 should be a basic capability for a NAS, not something we poll to see if it's optionally wanted.

    • Like 1
  5. On 6/4/2019 at 6:09 AM, bonienl said:

    Docker containers and VMs on the dashboard are loaded in the background.

    If something is amiss with either the Docker or libvirt service, it would hamper the dashboard.

    I think I can actually replicate this now.

     

    If I mount a USB drive and copy files continuously to my SSD cache drive, which is also the location of my Docker loopback image, then after a few minutes Docker stops responding, which obviously ruins the web GUI as well.

     

    I routinely copied files in this way in all previous versions, the cache drive seems fine (it's pretty new) and there are no errors in any log that I can see.

     

    The SSD is attached to a motherboard SATA port directly.

     

    I am pretty sure it is iowait, as load skyrockets. Will wait and see if it's an "only me" problem.
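
    If anyone else wants to check the same thing while reproducing this, the stock tools are enough to confirm whether it really is iowait during the copy (iostat comes from the sysstat package, which may or may not be present on a given build):

    # "wa" column = percentage of CPU time stuck in iowait; watch it during the copy
    vmstat 5

    # per-device utilisation and average wait times, if iostat is available
    iostat -x 5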

     

  6. 5 hours ago, testdasi said:

    I think the summary of this plugin is, unless you are a spy, use it. :D

     

    (based on Unraid functionalities, I doubt there's any financial institution or cloud / VM provider using it)

    Except if you are a company, or any public body, or a registered non-profit, or anyone with liability cover, or any organization that has to comply with ISO accreditation or its worldwide equivalents, or has independent audits, or anyone covered by EU GDPR, or...

     

    Context is important, and whilst most people here are home users, plenty of unRAID users are not :)

     

    Nice addon. I really mean that. It absolutely has its place but the default advice should always be "be secure unless you really really understand the risks of not being".

  7. I faced an odd issue where the web GUI would load, but not completely. Specifically, the dashboard wouldn't load but the disk view would, excluding the action buttons at the bottom. The Docker view wouldn't load at all but settings would. Docker containers did not work or were super slow (hard to say).

     

    Manually restarting one container instantly kicked the web GUI and Docker back into a working state.

     

    I can't replicate it.

  8. Sorry if this is obvious but I can't quite nail it down.

     

    For something I am working on, I need to find the device name and disk serial from the shell, starting from knowing either "/mnt/disk4" or "/dev/md4".

     

    Does anyone know how to get, for example, "/dev/sdb1" from this?
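
    For reference, something along these lines looks like it should work, assuming emhttp keeps the slot-to-device mapping in /var/local/emhttp/disks.ini with sections like ["disk4"] containing device="sdX" and id="<serial>" (worth confirming on your release) -- but I would welcome a cleaner way:

    #!/bin/bash
    # Sketch only: resolve an unRAID array slot to its block device and serial.
    slot="disk4"                                   # /mnt/disk4 <-> /dev/md4
    ini=/var/local/emhttp/disks.ini

    # assumed ini layout: ["disk4"] section containing device="sdb" and id="SERIAL"
    dev=$(grep -A20 "^\[\"$slot\"\]" "$ini" | grep -m1 '^device=' | cut -d'"' -f2)
    id=$(grep -A20 "^\[\"$slot\"\]" "$ini" | grep -m1 '^id=' | cut -d'"' -f2)

    echo "slot $slot -> partition /dev/${dev}1, serial $id"

    # cross-check with standard udev tooling (nothing unRAID specific)
    udevadm info --query=property --name="/dev/$dev" | grep '^ID_SERIAL'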

     

     

    • Like 1
  9. tl;dr to fully fix this Intel garbage, users are going to pay a performance price. The price varies wildly based on user workloads and is basically impossible to predict; however, I have been in conversations where some devops have seen insane edge-case performance drops.

     

    I would suggest the right way to do this is to fix it by default but document an opt-out for those who want to accept the risk, because it is not possible for normal humans to really understand this from beginning to end.
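
    To illustrate the sort of opt-out I mean (illustrative only; it assumes a kernel new enough to understand the umbrella mitigations= switch, older kernels need the individual nopti / nospectre_v2 style flags instead), the user-facing change would be a single boot parameter:

    # /boot/syslinux/syslinux.cfg -- example only, this accepts the CPU vulnerability risk
    label unRAID OS
      menu default
      kernel /bzimage
      append initrd=/bzroot mitigations=off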

  10. So one of the problems with cache_dirs is that it is, at its core, a hack. Quite a bit of the user setup is based on educated guesswork and very little info is presented or logged to show how well it is functioning, e.g. how many files are being cached, or how often it is waking up disks.

     

    Most of the performance feedback we can gather is based on timings.

     

    Some ideas:

    • track timings and present them as a time-based graph for easier analysis
    • store timings outside of rotating logs for easier long-term analysis
    • estimate memory usage

     

    In theory we can reasonably estimate maximum memory requirements by counting potential inode entries and multiplying by the inode size. This would allow what-if scenarios to be built where each share can be presented in terms of estimated memory requirements, and if we do end up adding a depth-per-share option the user can tune their settings (and potentially reorganize their data) to cache as efficiently as possible. This will take some reading and experimentation, and unless someone is capable of reading the kernel code to understand how cache pressure impacts this it will always be a worst-case estimate... but to my eye this is the direction to go.
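
    As a very rough sketch of the what-if idea (the per-entry figure below is a guess; the real dentry and inode slab object sizes are filesystem dependent and can be read from /proc/slabinfo, and the walk itself will of course spin up every disk):

    #!/bin/bash
    # Estimate the worst-case cache footprint per share by counting entries.
    per_entry=1024   # bytes per dentry + inode, ballpark assumption only

    for share in /mnt/user/*/; do
      entries=$(find "$share" 2>/dev/null | wc -l)
      printf "%-30s %10d entries  ~%6d MiB\n" \
             "$(basename "$share")" "$entries" $(( entries * per_entry / 1024 / 1024 ))
    done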

    • Upvote 1
  11. On 10/29/2018 at 3:42 PM, Alex R. Berg said:

    I seem to be getting an overload of info for me to handle here. I will not be offering support on the official plugin of cache-dirs at the moment from dynamix, because its not up2date. I don't want to spend my time helping others going over their logs regarding issues I might already have fixed.

     

    I have created (temporary) release of the cache-dirs plugin in my fork of dynamix plugin, the PLG is attached and will download the archive itself. Place the plg in /boot/config/plugins (\\flash\config\plugins) and reboot. It is still the cache_dirs 2.2.1 version, but does add a logrotate script so cache_dirs logging won't waste your mem-mounted root-diskspace too much.

    I think all problems above are because of lack of scan of user-share. I don't need it to avoid disk spin-up, but others do. 2018.10.14 does not contain this feature, as its based on old script 2.0.0j.

    @NAS: I suspect your problem is not the adaptiveness, but the lack of scan of user-share, as reported by other users. Check attached release. Also if you don't find love for the new adaptive feature just disable it. I added it because I hated seeing cache_dirs absolutely thrashing my disks when they were otherwise occupied with writing huge files or scanning for md5's, and that was the moment when the Linux filesystem decided to use the file cache for something else than directories, causing cache-dirs to thrash the disks. I also find sometimes that the adaptiveness does not seem at all perfect. It does seem to work for me with cache-pressure of 1 though and enough memory or few enough files that it works.

     

    @NAS I can add a global minimum depth that is adjustable. I already have that in the code, it's just not user-modifiable. Actually just checking the code now, it looks like I don't have a minimum depth. Maybe I removed that by mistake. But definitely that is easy and a good idea. I'll add it later. I also thought it would be cool to have filters, but it's too difficult to add into the bash script that cache_dirs is implemented in. It is certainly possible and not extremely difficult, as find does support excludes, but it's a pain to work with bash. I've considered re-implementing in Scala, but don't feel like it. I have discovered in my process of working with it that it's impossible to make it really good because cache_dirs is a hack. It scans the dirs repeatedly in the hope of making Linux keep the dirs in memory. Sometimes Linux will decide to evict the dirs, and there is no way for us to tell whether Linux has evicted them. I try to determine this by checking scan duration and if it's long, I kill the scan process and back off, to avoid thrashing the disks when my system uses them for other stuff. But that strategy is never going to be perfect, so I don't feel like messing that much more with it. If you feel like adding it to the script, go for it. Actually it's a dead simple scan, so implementing in Scala seems super easy, but then people would need to download the JVM, and might not want it.

     

    It might be helpful, if others can chime in helping out, if I already helped them through some issues. Read further up to see diagnostics check, something about running cache_dirs -L on the new version attached, if my memory serves me. I think it was Fireball3 I helped.

     

    Sorry to be so long replying. I wanted to wait until I was sure my assumptions fixed the issue before I posted again and time got away from me.

     

    tl;dr my fundamental issue was that at some point the addon update dropped the includes but not the excludes, so it appeared to be working but was actually doing nothing. Changing the settings mostly fixed the issue.

     

    I am keen to throw some other ideas into the pot, but the most important matter at hand is to deal with the PRs. Forking and getting users to manually pull addons from the internet is suboptimal.

     

    There has been some discussion on this above; where are we today with it?

     

  12. 15 hours ago, limetech said:

    These are not random people on the internet, they are specifically users of Unraid and presumably they know how they want iSCSI support to get integrated.  Although the post immediately following yours argues otherwise. 🙄

    I do see the point you are making, but statistically it does not matter: any sufficiently large group, when asked "what they want", will tend towards 100% coverage of any set of options. This, if charted, would obviously be a histogram, but a forum is a very blunt tool for this kind of requirements gathering.

     

    You will have much better results if you either:

    • ask users for their use cases and do the requirements analysis in reverse from there

    or

    • estimate the base requirements internally, offer a trial working set of features, and follow on by capturing the use cases people aren't able to meet with the trial, deciding from there if they are cases you want to support.

    The second option is what I would do as it allows you to internally debate the 10% effort 90% gain tipping point.

    • Upvote 1
  13. Is there a way to add basic support as a trial? I don't know the effort on this, but I do know that asking people on the internet what they "need" is generally a bad idea; much like trying to herd cats with an error margin greater than the sample set, you will just tend towards "everything" the more people you ask.

  14. Something weird is going on. I can see cache dirs running and even though I now have

     

    /bin/bash /usr/local/emhttp/plugins/dynamix.cache.dirs/scripts/cache_dirs -e appdata -l off -p 0 -D 9999

     

    the cache seems to be perpetually needing disks spun up.

     

    How would you like me to debug this further?
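
    In the meantime, one thing I can check myself is what the running cache_dirs is actually walking; as far as I understand it the scan is done with ordinary find processes, so their command lines should show which directories and depth are in play (assuming pgrep and watch from procps are available):

    # list the find processes cache_dirs has spawned, with full command lines
    pgrep -af '^find'

    # or keep an eye on it over time
    watch -n 5 "pgrep -af '^find'"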

     

    Update: is it this obvious: have all my cache includes been removed, so that I end up with exclusions only and no inclusions, or is there an implied * include?

  15. Having a hard time learning to love the new adaptive feature.

     

    The idea sounds great on paper, but in practice I have found that cache_dirs effectively doesn't work any more, with all lower-level folder access waking up some disks.

     

    Certainly this will be user-specific, but without feedback on how the adaptive caching is working (ideally showing depth per top-level folder, timings, etc.) it seems to me people would be better served by a default manual setup.

     

    On this note, I don't know how other people do this, but my ideal use case is to be able to set a global minimum depth and exceptions per folder. Currently I exclude 30% of my array because the data is rarely accessed but contains very large numbers of files... however, whilst this helps cache_dirs, a single stray click on one of these top-level folders busts the cache and disks spin up.

     

    I would love to get into a discussion on how we could enhance this feature as, without a word of a lie, it is the most critical one I operate.