NAS

Everything posted by NAS

  1. I was indeed agreeing. Just for clarity, the normal security reporting methodology is to start with private contact. Normally this is for unpublished vulnerabilities, but it holds equally true for published ones where the vendor may just not have noticed, or has noticed and something has gone wrong and they wrongly assume fixes are in place. It is VERY common for vendors to patch and release but not pen test the actual release afterwards. After a reasonable period of time, if unresolved, you can and should then post publicly so that users who are vulnerable have the maximum chance to hear about it and make an informed decision on what the risk is to them and how to handle it. I don't think it would be unfair to say no one in the history of this project has prodded more about security than me. I am not and never have been an employee of Limetech LLC and have never received any monetary or gift rewards other than a single license for testing.
  2. Do not open 445 to the internet.
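     One quick way to sanity-check this (a sketch only; 203.0.113.10 is a placeholder you would replace with your actual WAN address, and it assumes you can run nmap from a host outside your network):

         # run from a machine OUTSIDE your LAN
         nmap -p 445 203.0.113.10
         # "closed" or "filtered" is what you want to see; "open" means SMB is exposed to the internet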
  3. It is important for users who choose a non-subscription model, even if that is just implicit in the fact that they only use the traditional unRAID product, that there be no phone-home or other services that reach out of the system by virtue of the subscription services running in "off" mode or any other mechanism. I cannot stress this enough. Feel free to add value in whatever way suits your business, but don't break that trust model whilst doing so.
  4. Whilst it is not ideal that the poster did not follow normal security reporting etiquette, it is clear there is an issue and it is of our own making. Compare against http://www.slackware.com/security/list.php?l=slackware-security&y=2020 tl;dr we are long overdue an update but we have slipped into the old habit of waiting for the development branch to be ready and ignoring the stable branch. It is not the end of the world but it's a habit we need to break again ASAP
  5. @unRate can you post a few representative examples to set context. Nothing should be `fixed over a year ago` but 280-290 days is unfortunately possible.
  6. This is a very interesting poll and I commend the people responsible. I do however question why `SSD Array option with Trim support` is on this poll. The other items in the poll are feature enhancements, nice-to-haves or power user edge cases..... but supporting SSDs in 2020 should be a basic capability for a NAS, not something we poll to see if it's optionally wanted.
  7. Re: Better Defaults

     This is a pretty big deal if it is true. I wouldn't know how to pull this off right now though, so more details are required.
  8. I think I can actually replicate this now. If I mount a USB drive and copy files continuously to my SSD cache drive, which is also the location of my docker loopback image, then after a few minutes docker starts to not respond, which obviously ruins the web GUI as well. I routinely copied files in this way in all previous versions, the cache drive seems fine (it's pretty new) and there are no errors in any log that I can see. The SSD is attached to a motherboard SATA port directly. I am pretty sure it is IO wait, as load skyrockets. Will wait and see if it's an "only me" problem.
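     For anyone wanting to watch the IO wait climb while reproducing this (a sketch; it assumes vmstat and top are present, which they are on most stock installs):

         # sample CPU stats every 5 seconds; the "wa" column is the % of time spent waiting on IO
         vmstat 5
         # alternatively, watch the "wa" figure in the CPU summary line of top while the copy runs
         top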
  9. Except if you are a company, or any public body, or a registered non-profit, or anyone with liability cover, or any organization that has to comply with ISO accreditation or worldwide equivalents, or has independent audits, or anyone covered by EU GDPR, or or.... context is important, and whilst most people here are home users, plenty of unRAID users are not. Nice addon. I really mean that. It absolutely has its place, but the default advice should always be "be secure unless you really really understand the risks of not being".
  10. Yeah, I don't think it is there either, but I suspect this thread will hang about for years and hit the first page on Google since it's so niche. (Anyone else find themselves answering their own question via Google in old forum posts... I do, more than I care to admit.)
  11. I faced an odd issue where the web GUI would load but not completely. Specifically, the dash wouldn't load but disk view would, excluding the action buttons at the bottom. Docker view wouldn't load at all but settings would. Docker containers did not work or were super slow (hard to say). Manually restarting one container instantly kicked the web GUI and docker into a working state. I can't replicate it.
  12. @limetech can you confirm if this is indeed in `/proc`, just so I can close this thread down as solved and anyone else that happens upon it knows the definitive answer.
  13. Hehe. Out of curiosity, I never did find a way to do this by querying /proc. Any idea if this data is in there somewhere?
  14. That is excellent thank you very much. I would not have thought to do it like this at all.
  15. I have tried, I think, all the obvious ways, although I still think I missed the one obvious one that works:

         lsblk /dev/md4
         NAME MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
         md4    9:4    0  3.7T  0 md   /mnt/disk4
  16. Sorry if this is obvious but I can't quite nail it down. For something I am working on I need to find the device name and disk serial, starting from knowing either "/mnt/disk4" or "/dev/md4", from the shell. Does anyone know how to get, for example, "/dev/sdb1" from this?
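     For anyone landing here later, one possible approach (a sketch only, not the answer given in this thread; it assumes unRAID writes per-disk details to /var/local/emhttp/disks.ini and that udevadm is available - both are assumptions):

         # map the array slot to its member device via unRAID's status file (assumed path and format)
         grep -A6 '\["disk4"\]' /var/local/emhttp/disks.ini
         # once you know the underlying device (e.g. /dev/sdb), read its serial via udev
         udevadm info --query=property --name=/dev/sdb | grep ID_SERIAL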
  17. I thought I was going mad. This is most certainly a thing.
  18. tl;dr to fully fix this Intel garbage, users are going to pay a performance price. The price varies wildly based on user workloads and is basically impossible to predict, however I have been in conversations where some devops have seen insane edge-case performance drops. I would suggest the right way to do this is to fix it by default but document an opt-out for those that want to accept the risk, because it is not possible for normal humans to really understand this from beginning to end.
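     What a documented opt-out might look like (a sketch only, not an official recommendation; it assumes unRAID's boot flags live in /boot/syslinux/syslinux.cfg and that the kernel is new enough to understand the single switch):

         # /boot/syslinux/syslinux.cfg - add kernel parameters to the default boot entry
         # newer kernels: one switch turns off all CPU vulnerability mitigations
         append initrd=/bzroot mitigations=off
         # older kernels need the individual flags instead, e.g. nopti spectre_v2=off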
  19. So one of the problems with cache_dirs is that it is, at its core, a hack. Quite a bit of the user setup is based on educated guesswork and very little info is presented or logged to know how well it is functioning, e.g. how many files are being cached, how often it is waking up disks. Most of the performance feedback we can gather is based on timings. Some ideas:

     • track timings and present them as a time-based graph for easier analysis
     • store timings outside of rotating logs for easier long-term analysis
     • estimate memory usage

     In theory we can reasonably estimate maximum memory requirements by counting potential inode entries and multiplying by the inode size (a rough sketch of that arithmetic follows). This would allow what-if scenarios to be built where each share can be presented in terms of estimated memory requirements, and if we do end up adding a depth-per-share option the user can tune their settings (and potentially reorganize their data) to cache as efficiently as possible. This will take some reading and experimentation, and unless someone is capable of reading the kernel code to understand how cache pressure impacts this it will always be worst case..... but to my eye this is the direction to go
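     A rough sketch of that estimate from the shell (the ~1 KiB per-entry figure and the share name are placeholders to make the arithmetic concrete; real dentry/inode slab sizes vary by filesystem and kernel and can be read from /proc/slabinfo):

         # count every file and directory under a share, then estimate worst-case cache memory
         ENTRIES=$(find /mnt/user/Movies -mindepth 1 | wc -l)
         echo "$ENTRIES entries, approx $((ENTRIES * 1024 / 1024 / 1024)) MiB worst case at ~1 KiB each"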
  20. Sorry to be so long replying. I wanted to wait until I was sure my assumptions fixed the issue before I posted again, and time got away from me. tl;dr my fundamental issue was that at some point the addon update dropped the includes but not the excludes, so it appeared to be working but actually was doing nothing. Changing the settings mostly fixed the issue. I am keen to throw some other ideas into the pot, but the most important matter at hand is to deal with the PRs. Forking and getting users to manually pull addons from the internet is suboptimal. There has been some discussion on this above; where are we today with it?
  21. I do see the point you are making, but statistically it does not matter; any sufficiently large group, when asked "what they want", will tend towards 100% coverage of any set of options. This, if charted, will obviously be a histogram, but a forum is a very blunt tool for this kind of requirements gathering. You will have much better results if you either: ask users for their use cases and do the requirements analysis in reverse from there, or estimate the base requirements internally, offer a trial working set of features and follow on by capturing the use cases people aren't able to meet with the trial, deciding from there if they are cases you want to support. The second option is what I would do, as it allows you to internally debate the 10% effort / 90% gain tipping point.
  22. Is there a way to add basic support as a trial? I don't know the effort on this, but I do know that asking people on the internet what they "need" is generally a bad idea; much like trying to herd cats with an error margin greater than the sample set, you will just tend towards "everything" the more people you ask.
  23. Something weird is going on. I can see cache_dirs running, and even though I now have

         /bin/bash /usr/local/emhttp/plugins/dynamix.cache.dirs/scripts/cache_dirs -e appdata -l off -p 0 -D 9999

     the cache seems to be perpetually needing disks spun up. How would you like me to debug this further? Update: is it this obvious: have all my cache includes been removed, so I end up with exclusions only and no inclusions, or is there an implied * include?
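     One quick sanity check on that question (a sketch; the plugin path is simply the one from the command above):

         # show the full argument list of the running cache_dirs process to confirm
         # exactly which includes and excludes it was actually started with
         ps aux | grep '[c]ache_dirs'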
  24. Having a hard time learning to love the new adaptive feature. The idea sounds great on paper, but I have found that effectively cache dirs doesn't work any more, with all lower-level folder access waking up some disks. Certainly this will be user specific, but without feedback on how the adaptive caching is working, ideally showing depth per top level folder, timings etc., it seems to me people would be better served with a default manual setup. On this note, I don't know how other people do this, but my ideal use case is to be able to set a global minimum depth and exceptions per folder. Currently I exclude 30% of my array as the data is rarely accessed but contains very large numbers of files.... however, whilst this helps cache dirs, a single stray click on one of these top level folders busts the cache and disks spin up. I would love to get into a discussion on how we could enhance this feature as, without a word of a lie, it is the most critical one I operate.
  25. The "Update all" in 6.5.3 works well (great work) but it is a bit unintuitive that there are scenarios where the interface is showing some/most/all containers have an update available but the update all button is not there.