NAS

Everything posted by NAS

  1. Nice work. Anyone know what this is?

         webGui: Disk read/write IO in background daemon

  2. Thanks @weebtech, that gives me something else to read about and try.
  3. Rather than wait for a reply to the above, let's move on. We can see a lot of inode stats using:

         cat /proc/slabinfo | grep inode

     Using my cache share with a pressure of 0:

         mqueue_inode_cache     72     72   896 18 4 : tunables 0 0 0 : slabdata     4     4 0
         v9fs_inode_cache        0      0   648 25 4 : tunables 0 0 0 : slabdata     0     0 0
         xfs_inode         1198999 1199898  960 17 4 : tunables 0 0 0 : slabdata 70817 70817 0
         udf_inode_cache         0      0   728 22 4 : tunables 0 0 0 : slabdata     0     0 0
         fuse_inode         138619 138690   768 21 4 : tunables 0 0 0 : slabdata  6698  6698 0
         cifs_inode_cache        0      0   736 22 4 : tunables 0 0 0 : slabdata     0     0 0
         nfs_inode_cache         0      0   936 17 4 : tunables 0 0 0 : slabdata     0     0 0
         isofs_inode_cache       0      0   624 26 4 : tunables 0 0 0 : slabdata     0     0 0
         fat_inode_cache       644    644   712 23 4 : tunables 0 0 0 : slabdata    28    28 0
         ext4_inode_cache        0      0  1024 16 4 : tunables 0 0 0 : slabdata     0     0 0
         reiser_inode_cache 111210 111210   736 22 4 : tunables 0 0 0 : slabdata  5055  5055 0
         rpc_inode_cache         0      0   640 25 4 : tunables 0 0 0 : slabdata     0     0 0
         inotify_inode_mark    960   1242    88 46 1 : tunables 0 0 0 : slabdata    27    27 0
         sock_inode_cache     1911   1975   640 25 4 : tunables 0 0 0 : slabdata    79    79 0
         proc_inode_cache    16498  17506   632 25 4 : tunables 0 0 0 : slabdata   701   701 0
         shmem_inode_cache   16927  16968   680 24 4 : tunables 0 0 0 : slabdata   707   707 0
         inode_cache         58436  58772   576 28 4 : tunables 0 0 0 : slabdata  2099  2099 0

     Then we `tree` the share and see:

         6085 directories, 310076 files

     Then we time an `ls -R`:

         real 0m4.753s
         user 0m1.146s
         sys  0m0.795s

     And finally we look at slabinfo again:

         btrfs_inode         35483  37680  1072 30 8 : tunables 0 0 0 : slabdata  1256  1256 0
         mqueue_inode_cache     72     72   896 18 4 : tunables 0 0 0 : slabdata     4     4 0
         v9fs_inode_cache        0      0   648 25 4 : tunables 0 0 0 : slabdata     0     0 0
         xfs_inode         1198994 1199898  960 17 4 : tunables 0 0 0 : slabdata 70817 70817 0
         udf_inode_cache         0      0   728 22 4 : tunables 0 0 0 : slabdata     0     0 0
         fuse_inode         138619 138690   768 21 4 : tunables 0 0 0 : slabdata  6698  6698 0
         cifs_inode_cache        0      0   736 22 4 : tunables 0 0 0 : slabdata     0     0 0
         nfs_inode_cache         0      0   936 17 4 : tunables 0 0 0 : slabdata     0     0 0
         isofs_inode_cache       0      0   624 26 4 : tunables 0 0 0 : slabdata     0     0 0
         fat_inode_cache       644    644   712 23 4 : tunables 0 0 0 : slabdata    28    28 0
         ext4_inode_cache        0      0  1024 16 4 : tunables 0 0 0 : slabdata     0     0 0
         reiser_inode_cache 111210 111210   736 22 4 : tunables 0 0 0 : slabdata  5055  5055 0
         rpc_inode_cache         0      0   640 25 4 : tunables 0 0 0 : slabdata     0     0 0
         inotify_inode_mark    960   1242    88 46 1 : tunables 0 0 0 : slabdata    27    27 0
         sock_inode_cache     1911   1975   640 25 4 : tunables 0 0 0 : slabdata    79    79 0
         proc_inode_cache    16510  17506   632 25 4 : tunables 0 0 0 : slabdata   701   701 0
         shmem_inode_cache   16927  16968   680 24 4 : tunables 0 0 0 : slabdata   707   707 0
         inode_cache         58436  58772   576 28 4 : tunables 0 0 0 : slabdata  2099  2099 0

     Whilst I am not sure these are the only numbers that matter, the ones that appear to be important (inode_cache, xfs_inode, etc.) all remain static at pressure 0.

     Now if we copy a multi-GB file and look again:

         btrfs_inode         35483  37680  1072 30 8 : tunables 0 0 0 : slabdata  1256  1256 0
         mqueue_inode_cache     72     72   896 18 4 : tunables 0 0 0 : slabdata     4     4 0
         v9fs_inode_cache        0      0   648 25 4 : tunables 0 0 0 : slabdata     0     0 0
         xfs_inode         1199018 1199898  960 17 4 : tunables 0 0 0 : slabdata 70817 70817 0
         udf_inode_cache         0      0   728 22 4 : tunables 0 0 0 : slabdata     0     0 0
         fuse_inode         138619 138690   768 21 4 : tunables 0 0 0 : slabdata  6698  6698 0
         cifs_inode_cache        0      0   736 22 4 : tunables 0 0 0 : slabdata     0     0 0
         nfs_inode_cache         0      0   936 17 4 : tunables 0 0 0 : slabdata     0     0 0
         isofs_inode_cache       0      0   624 26 4 : tunables 0 0 0 : slabdata     0     0 0
         fat_inode_cache       644    644   712 23 4 : tunables 0 0 0 : slabdata    28    28 0
         ext4_inode_cache        0      0  1024 16 4 : tunables 0 0 0 : slabdata     0     0 0
         reiser_inode_cache 111210 111210   736 22 4 : tunables 0 0 0 : slabdata  5055  5055 0
         rpc_inode_cache         0      0   640 25 4 : tunables 0 0 0 : slabdata     0     0 0
         inotify_inode_mark    960   1242    88 46 1 : tunables 0 0 0 : slabdata    27    27 0
         sock_inode_cache     1911   1975   640 25 4 : tunables 0 0 0 : slabdata    79    79 0
         proc_inode_cache    16484  17506   632 25 4 : tunables 0 0 0 : slabdata   701   701 0
         shmem_inode_cache   16927  16968   680 24 4 : tunables 0 0 0 : slabdata   707   707 0
         inode_cache         58436  58772   576 28 4 : tunables 0 0 0 : slabdata  2099  2099 0

     The numbers are all but identical. This allows us to make a reasonable assumption that our second rule of thumb is correct (as per the docs): with a cache pressure of 0, assuming you have enough RAM, inodes will not be flushed by other file actions as they usually would be. A quick way to keep an eye on just the interesting rows is sketched below.
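A minimal sketch for watching those slab rows over time; the four cache names are the ones relevant to this test and the 60-second interval is an arbitrary assumption, so adjust to taste:

```sh
# Poll the inode-related slab caches discussed above every 60 seconds.
# In /proc/slabinfo, column 2 is <active_objs> and column 3 is <num_objs>.
watch -n 60 "grep -E '^(xfs_inode|reiser_inode_cache|fuse_inode|inode_cache) ' /proc/slabinfo"
```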
  4. Thanks for that link. It is both useful in itself and probably the bug that inspired me to re-look at cache pressure. Based on the ~4kB-per-inode approximation in that thread, working backwards gives us our first rule of thumb: with a cache pressure of 0, to keep inodes from being expelled through normal use, you need roughly 1GB of dedicated RAM for every 250,000 files and folders (a back-of-the-envelope version is sketched below). Now, before I move on, I find this interesting. If I run `df -i` to show inodes per disk, only XFS disks give info on the md device, e.g.

     reiserfs:

         Filesystem  Inodes  IUsed  IFree  IUse%  Mounted on
         /dev/md9         0      0      0      -  /mnt/disk9

     xfs:

         Filesystem  Inodes  IUsed  IFree  IUse%  Mounted on
         /dev/md10      56M    67K    56M     1%  /mnt/disk10

     and the totals for user0/user do not tally with the sum of the drives:

         Filesystem  Inodes  IUsed  IFree  IUse%  Mounted on
         shfs          301M   972K   300M     1%  /mnt/user0
         shfs          534M   1.4M   532M     1%  /mnt/user

     I would like to know why this is.
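A minimal sketch of that rule-of-thumb arithmetic in shell; the `/mnt/user` path is an assumption, so point it at whatever tree you want held in cache:

```sh
# Count files and directories on a share, then apply the ~4kB-per-inode
# approximation from the thread above: 250,000 entries * 4kB ~= 1GB RAM.
ENTRIES=$(find /mnt/user -xdev 2>/dev/null | wc -l)
echo "$ENTRIES entries -> approx $(( ENTRIES * 4 / 1024 )) MB of RAM to cache them all"
```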
  5. It never even occurred to me that CacheDirs would be changing the default. I assumed the value of 10 came from an unRAID-changed default. Previously I set it manually using `sysctl` (as in the sketch below), which I will now revert in favour of applying `-p 1`. What I am unclear on is how to quantify the results. The various comparisons I can find online involve using dd to flood out the RAM and then timing ls. I am sure that is a reasonable enough test on a vanilla system, but I want to test on a busy, non-artificial live platform. Given that, I think I need to track cached inode stats and have a reasonable idea of what number represents a complete in-memory cache of everything. I also want to work out how much RAM this actually needs, with a view to being able to advise the pro user who wants to use a cache pressure of 0.
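For reference, a minimal sketch of the manual `sysctl` route mentioned above (per the discussion, the CacheDirs `-p 1` flag corresponds to a pressure of 1; where you persist the setting across boots is installation-specific, so that part is left out):

```sh
# Read the current value; the stock kernel default is 100.
sysctl vm.vfs_cache_pressure

# Set it at runtime: lower values make the kernel prefer keeping
# dentry/inode cache over reclaiming it in favour of page cache.
sysctl vm.vfs_cache_pressure=1
```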
  6. I have recently started to play again with vfs_cache_pressure and the other tunables, in combination with the cache directory addon. Initial impressions are that, as before, dropping it down to 1 makes a big difference to browsing responsiveness, although that is a bit subjective for my liking. I have ordered another 16GB of RAM just to test this and was wondering if anyone else is interested in discussing this age-old topic again?
  7. @ken-ji yup, you are not the first person to have this idea. Currently this solution is not implemented, planned or supported, and it is specifically what I meant by "the way you keep your other servers secure cannot be the same as the way you manage unRAID security". I don't think we should try to push LT into a new real-time test and release model, and we are probably going off topic here. If anyone wants to start a new thread to create a security community working group to head up these things, I will actively join in, but I don't have the time to front something of this level of effort at the moment.
  8. We have to be careful with this. By running an essentially firmware-based OS you inherently accept two things relevant to OS security:
     • You will never get security fixes as fast as the upstream OS.
     • You place a level of trust in the OS vendor (in this case Limetech LLC) to decide on your behalf what is a serious risk and what is not.
     In some ways this breaks with traditional "security in depth", which requires at its core that you patch every security issue immediately regardless of perceived threat, or more importantly your perception of that threat (since the days when someone could understand all-things-security and know how all servers are deployed in the wild are long since gone). For these two reasons alone, unRAID can never, by definition, be as secure as a non-firmware-based OS, and you should plan your security policy accordingly. However, for this cost, along with reduced uptime, you get a lot in return, not least of which is the ability to reinstall the whole OS at a whim. This is why you need to be careful when discussing CVEs etc., because the way you keep your other servers secure cannot be the same as the way you manage unRAID security. There is room for improvement in the current model, but it is important to set the scene that unRAID is no longer inherently insecure by design.
  9. Apologies if I missed the reply, this is a popular thread. How do I opt out of this permanently?
  10. Just curious if this is being worked on or at least actively considered. I raise it again as I am currently embarking on a new disk refresh and upgrade (the third since we started discussing this), and I still maintain that this would be a genuinely useful, time-saving feature. It is also a Unique Selling Point that your competitors simply can't offer (due to the nature of the RAID they use) and should be grabbed with both hands, IMHO.
  11. You will have to excuse the assumption that they were a wired provider, as I am not even in the same hemisphere as this company. I read "Wireless" and in my head substituted "Cable and Wireless", which for me has always meant cable and LAN extension services. A mistake was made, so it is not as big a deal as I thought, but it is still significant.
  12. That is a pretty big deal, as it essentially forces all new business signups to use IPv6. I think this will catch many IT people by surprise.
  13. Over 30% of the world's IPv6 access to Google is recorded as coming from the USA. Fundamentally, every ISP and NOC in the world is jumping through ever more expensive hoops to cater for IPv4, since address space exhaustion is making it more and more expensive. Money drives all innovation, as usual. Meanwhile IoT, cheap phones and an exponential increase in connected devices suck up more and more IPv4 space. IPv6 is not just inevitable, it is becoming a necessity faster than most predicted. But I point people back to my list of changes needed for IPv6. It is non-trivial and touches the vast majority of unRAID components in some way or another. This is not a simple tick-the-box change, it is giant. But we need to start somewhere.
  14. Let's get into specifics rather than brinksmanship: what are the real implications of this, and do they break anything? For example, perhaps the list includes:
      • GUI work for general network config
      • Documentation work (lots)
      • License server
      • Update servers
      • Docker GUI work
      • Docker back end
      • Virtual GUI work
      • Virtual back end
      • Samba control
      • AFS control
      • FTP control
      • NFS control
      • Core addons, e.g. unassigned drives
      • Windows domain specific stuff
      What is missing or included by mistake? And what needs to work at the alpha stage, when IPv6 is a command-line-only option?
  15. Yes, I agree. I was not quoting stats to say there is no other way; I just wanted to point out that the global rollout is further along than I expected it to be. Personally, I think the security risks of not needing NAT, and the privacy issues of the big players being able to profile specific machines within the local network, mean that the general userbase should hold off for as long as possible. I don't know when the right time is to start developing it properly, but I would have thought that, as a non-GUI, dev-only feature, that time may be now.
  16. For what it is worth, as we speak 1 in every 7 internet users accesses Google using IPv6. This time last year it was 1 in 10, and Google's figure is on track to pass 30% by the end of the year. These figures are skewed very heavily by country, but they took me by surprise.
  17. How do I permanently opt out of:

         Preclear Disks Install Statistics Plugin
         This plugin is used to send statistics anonymously using Google Forms and TOR.
         Don't worry, you will be asked before sending every report.

      so that it is not installed by mistake?
  18. [quote: NAS, re: snmp] Given how far unRAID has come, I have to say I strongly agree.
  19. I agree, that is a sensible viewpoint. I expect LT will already have a policy for new features that are beneficial but change previous defaults. My personal preference would be the "norm" of enabling it as the new default but not altering any existing assets. The downside of this is yet more buttons for users to know about and click. IMHO this is worth it, since the real unRAID userbase is the silent masses who never talk here. A reasonable (very high?) percentage of these users know little to nothing about IP, ports, NAT or firewalling, and it is our duty of care to protect them by default.
  20. Bump for comment. Especially interested in what the docker container devs think.
  21. Superb summary. Can I suggest that, in the interim, an easy indicator in emHTTP of which of the three states applies ("DX_TRIM", "DZ_TRIM" or "Unsupported") be added ASAP (a manual check is sketched below)? This will raise visibility in the community and start the natural process of recommendations and, more importantly, removals.
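In the meantime, a minimal sketch of checking a drive by hand; the device path is a placeholder, and the two capability lines in the output map roughly onto the TRIM states discussed above:

```sh
# Pull the TRIM-related capability lines from the drive's identify data.
# Look for "Data Set Management TRIM supported" and
# "Deterministic read ZEROs after TRIM" in the output.
hdparm -I /dev/sdX | grep -i trim
```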
  22. A place to discuss potential docker network-related security enhancements. This is driven by me seeing an uptick in users who, due to a lack of knowledge, really shouldn't be blindly following guides and placing apps on the internet. My initial three thoughts are (a sketch of the third follows below):
      1. By default, all apps should only allow IANA private IP access. Users should opt in to allowing internet IPs to connect. I suspect this can be done with docker networking.
      2. By default, no container should be able to see any other container, i.e. network isolation. This can absolutely be done with docker, but likely not with the current version and/or the emHTTP GUI.
      3. We should not force users to create a port mapping, i.e. in many instances a container port should never be bridged to a host port, for increased security, e.g. an nginx reverse proxy on the same docker internal subnet.
      Above all, any changes need to be practical and secure by default, while taking into account that users really don't understand security and networking (and why should they? that's our job).
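A minimal sketch of point 3 using stock Docker CLI commands; the image, container and network names are hypothetical, and this is a sketch of the idea rather than anything emHTTP does today. Application containers join a user-defined bridge network with no `-p` mapping, and only the reverse proxy publishes a host port:

```sh
# Hypothetical names throughout; a sketch, not an unRAID implementation.
# Create a user-defined bridge network for back-end containers.
docker network create backend

# The app publishes no host port (no -p flag), so it is unreachable from
# outside the host; other containers on "backend" can still reach it.
docker run -d --name app --network backend my-app-image

# Only the reverse proxy maps a host port. On user-defined networks,
# Docker's embedded DNS lets it reach the app by container name ("app").
docker run -d --name proxy --network backend -p 443:443 nginx
```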
  23. CVE-2016-8615: cookie injection for other servers
      CVE-2016-8616: case insensitive password comparison
      CVE-2016-8617: OOB write via unchecked multiplication
      CVE-2016-8618: double-free in curl_maprintf
      CVE-2016-8619: double-free in krb5 code
      CVE-2016-8620: glob parser write/read out of bounds
      CVE-2016-8621: curl_getdate read out of bounds
      CVE-2016-8622: URL unescape heap overflow via integer truncation
      CVE-2016-8623: Use-after-free via shared cookies
      CVE-2016-8624: invalid URL parsing with '#'
      CVE-2016-8625: IDNA 2003 makes curl use wrong host
  24. Changed days speed-wise. It shouldn't go unsaid: thank you.