NAS

Moderators
  • Content Count

    4978
  • Joined

  • Last visited

Community Reputation

37 Good

About NAS

  • Rank
    Advanced Member

Converted

  • Gender
    Undisclosed

  1. NAS

    Better Defaults

    This is a pretty big deal if it is true. I wouldn't know how to pull this off right now though, so more details are required.
  2. I think I can actually replicate this now. If I mount a USB drive and copy files continuously to my SSD cache drive, which is also the location of my docker loopback image, then after a few minutes docker stops responding, which obviously ruins the web GUI as well. I routinely copied files this way in all previous versions, the cache drive seems fine (it's pretty new), and there are no errors in any log that I can see. The SSD is attached directly to a motherboard SATA port. I am pretty sure it is IO wait, as load skyrockets. Will wait and see if it's an "only me" problem.
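     For anyone wanting to check the same thing, this is roughly how I am watching it; the paths are only examples from my setup, and iostat comes from the sysstat package:
        rsync -a /mnt/usb/ /mnt/cache/incoming/ &   # sustained copy onto the cache drive (example paths)
        iostat -x 5                                 # %iowait in the CPU line, %util per device
        top -b -n 1 | head                          # "wa" in the CPU row is IO wait
        cat /proc/loadavg                           # load climbing while CPUs sit idle points at IO wait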
  3. Except if you are a company, or any public body, or a registered non-profit, or anyone with liability cover, or any organization that has to comply with ISO accreditation or worldwide equivalents, or has independent audits, or anyone covered by EU GDPR, or, or... Context is important, and whilst most people here are home users, plenty of unRAID users are not. Nice addon, I really mean that. It absolutely has its place, but the default advice should always be "be secure unless you really, really understand the risks of not being".
  4. Yeah, I don't think it is there either, but I suspect this thread will hang about for years and hit the first page on Google since it's so niche. (Anyone else find themselves answering their own questions via Google in old forum posts? I do it more than I care to admit.)
  5. I faced an odd issue where the web GUI would load, but not completely. Specifically, the dashboard wouldn't load, but the disk view would, excluding the action buttons at the bottom. The docker view wouldn't load at all, but settings would. Docker containers did not work, or were super slow (hard to say). Manually restarting one container instantly kicked the web GUI and docker back into a working state. I can't replicate it.
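     For reference, "manually restarting" just means from the shell, something along these lines (the container name below is only a placeholder):
        docker ps                          # list containers and confirm the daemon responds at all
        docker restart example-container   # restart one container by name (placeholder name)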
  6. @limetech can you confirm whether this is indeed in `/proc`, just so I can close this thread down as solved and anyone else that happens upon it knows the definitive answer.
  7. Hehe. Out of curiosity, I never did find a way to do this by querying /proc. Any idea if this data is in there somewhere?
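     For completeness, these are the standard Linux places I expected to find it (assuming a stock md driver; unRAID's md implementation may not expose them the same way):
        cat /proc/mdstat            # stock md arrays list their member devices here
        ls /sys/block/md4/slaves/   # on stock md, symlinks to the underlying sdX members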
  8. That is excellent, thank you very much. I would not have thought to do it like this at all.
  9. I have tried, I think, all the obvious ways, although I still think I missed the one obvious one that works:
        lsblk /dev/md4
        NAME MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
        md4    9:4    0  3.7T  0 md   /mnt/disk4
  10. Sorry if this is obvious, but I can't quite nail it down. For something I am working on, I need to find the device name and disk serial starting from knowing either "/mnt/disk4" or "/dev/md4", from the shell. Does anyone know how to get, for example, "/dev/sdb1" from this?
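      In case anyone lands here from Google, a rough sketch of the generic Linux route; the device names are only examples, and the md4-to-sdX mapping itself is the unRAID-specific part this thread is about:
        findmnt -n -o SOURCE /mnt/disk4       # resolves the mount point back to /dev/md4
        lsblk -no NAME,SERIAL /dev/sdb        # serial for a known member device (example device)
        udevadm info --query=property --name=/dev/sdb | grep ID_SERIAL
        smartctl -i /dev/sdb                  # needs smartmontools; full drive identity including serial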
  11. I thought I was going mad. This is most certainly a thing.
  12. tl;dr: to fully fix this Intel garbage, users are going to pay a performance price. The price varies wildly based on user workloads and is basically impossible to predict; however, I have been in conversations where some devops people have seen insane performance drops in edge cases. I would suggest the right way to do this is to apply the fix by default but document an opt-out for those who want to accept the risk, because it is not possible for normal humans to really understand this from beginning to end.
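      For context, this is the sort of opt-out I mean (hedged: the exact flags depend on the kernel version shipped; mitigations=off only exists on newer kernels, older ones use per-issue flags):
        cat /sys/devices/system/cpu/vulnerabilities/*   # shows which mitigations are currently active
        # opt-out example: append to the kernel append line (syslinux.cfg on unRAID) and reboot
        #   mitigations=off         newer kernels, disables the lot
        #   nopti nospectre_v2      older kernels, per-issue flags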
  13. So one of the problems with cache_dirs is that it is, at its core, a hack. Quite a bit of the user setup is based on educated guesswork, and very little info is presented or logged to know how well it is functioning, e.g. how many files are being cached, or how often it is waking up disks. Most of the performance feedback we can gather is based on timings. Some ideas:
      - track timings and present them as a time-based graph for easier analysis
      - store timings outside of rotating logs for easier long-term analysis
      - estimate memory usage
      In theory we can reasonably estimate maximum memory requirements by counting potential inode entries and multiplying by the inode size. This would allow what-if scenarios to be built where each share is presented in terms of its estimated memory requirement, and if we do end up adding a per-share depth option, the user can tune their settings (and potentially reorganize their data) to cache as efficiently as possible; a rough sketch of the estimate is below. This will take some reading and experimentation, and unless someone is capable of reading the kernel code to understand how cache pressure impacts this it will always be a worst case... but to my eye this is the direction to go.
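      A rough sketch of the worst-case estimate; the ~1 KiB per-entry figure is a guess, as real dentry/inode sizes vary by kernel and filesystem:
        ENTRIES=$(find /mnt/user/Movies | wc -l)                                   # files + dirs under a share (example path)
        echo "$(( ENTRIES * 1024 / 1024 / 1024 )) MiB worst-case cache estimate"   # assumes ~1 KiB per cached entry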
  14. Sorry to take so long replying. I wanted to wait until I was sure my assumptions fixed the issue before I posted again, and time got away from me. tl;dr: my fundamental issue was that at some point the addon update dropped the includes but not the excludes, so it appeared to be working but was actually doing nothing. Changing the settings mostly fixed the issue. I am keen to throw some other ideas into the pot, but the most important matter at hand is to deal with the PRs. Forking and getting users to manually pull addons from the internet is suboptimal. There has been some discussion on this above; where are we today with it?
  15. NAS

    ISCSI Support

    I do see the point you are making, but statistically it does not matter: any sufficiently large group, when asked what they want, will tend towards 100% coverage of any set of options. Charted, this would obviously be a histogram, but a forum is a very blunt tool for this kind of requirements gathering. You will have much better results if you either: ask users for their use cases and do the requirements analysis in reverse from there, or estimate the base requirements internally, offer a trial working set of features, and follow on by capturing the use cases people aren't able to meet with the trial, deciding from there if they are cases you want to support. The second option is what I would do, as it allows you to internally debate the 10% effort / 90% gain tipping point.