
Posts posted by NAS

  1. So I looked into this some more. The whole NetBIOS/SMB 139/445 thing is way more complicated than it seems at first look, but most of it would be out of scope for this thread.

     

    The important bit is that the port 445 recommendations are just badly worded as we expected.

     

    What it essentially means is "don't trust 445 on a network you don't trust". This is not new advice; nobody should have been trusting 445 across untrusted networks for as long as I can remember.
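
    For anyone who wants to act on that, a minimal sketch with iptables (assuming a 192.168.1.0/24 LAN; adjust the subnet to your own network) would be something like:

    # drop SMB/NetBIOS traffic arriving from anything outside the local subnet
    iptables -A INPUT -p tcp -m multiport --dports 139,445 ! -s 192.168.1.0/24 -j DROP
    iptables -A INPUT -p udp -m multiport --dports 137,138 ! -s 192.168.1.0/24 -j DROP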

     

    Thanks for the links.

  2. I am not sure I follow that one. Disabling 445 would kill all of NetBIOS/SMB, so I can only assume they are talking about between zones or over the internet... and if you have that open you have bigger problems to deal with :)

     

    Or am I reading it wrong?

     

    Edit: this actually doesn't work the way I thought it did. More reading required

     

  3. On 02/04/2017 at 6:55 AM, bonienl said:

     

    The disk I/O read and write speeds which are displayed on the Main page are now calculated in the background.

     

    The GUI used to do this, but now it simply needs to retrieve the obtained values from the daemon. Resulting in less overhead and more accuracy.

     

    That's great, thanks. I read it wrongly as a bug fix, which piqued my interest. This feature upgrade is good too, obviously.

     

    Cheers

  4. Interesting update.

     

    With the exact same setup as before, except for moving to 6.3.3 and adding 16GB more RAM, some of the stats have increased:

     

    btrfs_inode        18189  19950   1072   30    8 : tunables    0    0    0 : slabdata    665    665      0
    mqueue_inode_cache     72     72    896   18    4 : tunables    0    0    0 : slabdata      4      4      0
    v9fs_inode_cache       0      0    648   25    4 : tunables    0    0    0 : slabdata      0      0      0
    xfs_inode         1481005 1481272    960   17    4 : tunables    0    0    0 : slabdata  88842  88842      0
    udf_inode_cache        0      0    728   22    4 : tunables    0    0    0 : slabdata      0      0      0
    fuse_inode        1769711 1769843    768   21    4 : tunables    0    0    0 : slabdata  85799  85799      0
    cifs_inode_cache       0      0    736   22    4 : tunables    0    0    0 : slabdata      0      0      0
    nfs_inode_cache        0      0    936   17    4 : tunables    0    0    0 : slabdata      0      0      0
    isofs_inode_cache      0      0    624   26    4 : tunables    0    0    0 : slabdata      0      0      0
    fat_inode_cache      644    644    712   23    4 : tunables    0    0    0 : slabdata     28     28      0
    ext4_inode_cache       0      0   1024   16    4 : tunables    0    0    0 : slabdata      0      0      0
    reiser_inode_cache 402622 402622    736   22    4 : tunables    0    0    0 : slabdata  18301  18301      0
    rpc_inode_cache        0      0    640   25    4 : tunables    0    0    0 : slabdata      0      0      0
    inotify_inode_mark    184    184     88   46    1 : tunables    0    0    0 : slabdata      4      4      0
    sock_inode_cache    1675   1675    640   25    4 : tunables    0    0    0 : slabdata     67     67      0
    proc_inode_cache   53632  53775    632   25    4 : tunables    0    0    0 : slabdata   2151   2151      0
    shmem_inode_cache  16848  16848    680   24    4 : tunables    0    0    0 : slabdata    702    702      0
    inode_cache         7840   7840    576   28    4 : tunables    0    0    0 : slabdata    280    280      0

     

    I can only assume I have cached some inodes that are excluded from cache_dirs. I will see how the numbers change over time.
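
    If anyone wants to follow along, something like this throwaway loop (the log path is arbitrary) is enough to watch the headline counters:

    # append the active-object counts of the interesting inode slabs once an hour
    while true; do
        date +%F_%T >> /var/tmp/inode_counts.log
        awk '/^(xfs_inode|fuse_inode|reiser_inode_cache|inode_cache) /{print $1, $2}' /proc/slabinfo >> /var/tmp/inode_counts.log
        sleep 3600
    done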

     

     

  5. Rather than wait for a reply on the above, let's move on.

     

    We can see a lot of inode stats using:

     

    cat /proc/slabinfo | grep inode

     

    Using my cache share with a pressure of 0:

     

    mqueue_inode_cache     72     72    896   18    4 : tunables    0    0    0 : slabdata      4      4      0
    v9fs_inode_cache       0      0    648   25    4 : tunables    0    0    0 : slabdata      0      0      0
    xfs_inode         1198999 1199898    960   17    4 : tunables    0    0    0 : slabdata  70817  70817      0
    udf_inode_cache        0      0    728   22    4 : tunables    0    0    0 : slabdata      0      0      0
    fuse_inode        138619 138690    768   21    4 : tunables    0    0    0 : slabdata   6698   6698      0
    cifs_inode_cache       0      0    736   22    4 : tunables    0    0    0 : slabdata      0      0      0
    nfs_inode_cache        0      0    936   17    4 : tunables    0    0    0 : slabdata      0      0      0
    isofs_inode_cache      0      0    624   26    4 : tunables    0    0    0 : slabdata      0      0      0
    fat_inode_cache      644    644    712   23    4 : tunables    0    0    0 : slabdata     28     28      0
    ext4_inode_cache       0      0   1024   16    4 : tunables    0    0    0 : slabdata      0      0      0
    reiser_inode_cache 111210 111210    736   22    4 : tunables    0    0    0 : slabdata   5055   5055      0
    rpc_inode_cache        0      0    640   25    4 : tunables    0    0    0 : slabdata      0      0      0
    inotify_inode_mark    960   1242     88   46    1 : tunables    0    0    0 : slabdata     27     27      0
    sock_inode_cache    1911   1975    640   25    4 : tunables    0    0    0 : slabdata     79     79      0
    proc_inode_cache   16498  17506    632   25    4 : tunables    0    0    0 : slabdata    701    701      0
    shmem_inode_cache  16927  16968    680   24    4 : tunables    0    0    0 : slabdata    707    707      0
    inode_cache        58436  58772    576   28    4 : tunables    0    0    0 : slabdata   2099   2099      0

     

    then we run `tree` against the share and see:

     

    6085 directories, 310076 files

     

    then we time an `ls -R` of the same share:

     

    real    0m4.753s
    user    0m1.146s
    sys     0m0.795s

    and finally we look at slabinfo again:

     

    btrfs_inode        35483  37680   1072   30    8 : tunables    0    0    0 : slabdata   1256   1256      0
    mqueue_inode_cache     72     72    896   18    4 : tunables    0    0    0 : slabdata      4      4      0
    v9fs_inode_cache       0      0    648   25    4 : tunables    0    0    0 : slabdata      0      0      0
    xfs_inode         1198994 1199898    960   17    4 : tunables    0    0    0 : slabdata  70817  70817      0
    udf_inode_cache        0      0    728   22    4 : tunables    0    0    0 : slabdata      0      0      0
    fuse_inode        138619 138690    768   21    4 : tunables    0    0    0 : slabdata   6698   6698      0
    cifs_inode_cache       0      0    736   22    4 : tunables    0    0    0 : slabdata      0      0      0
    nfs_inode_cache        0      0    936   17    4 : tunables    0    0    0 : slabdata      0      0      0
    isofs_inode_cache      0      0    624   26    4 : tunables    0    0    0 : slabdata      0      0      0
    fat_inode_cache      644    644    712   23    4 : tunables    0    0    0 : slabdata     28     28      0
    ext4_inode_cache       0      0   1024   16    4 : tunables    0    0    0 : slabdata      0      0      0
    reiser_inode_cache 111210 111210    736   22    4 : tunables    0    0    0 : slabdata   5055   5055      0
    rpc_inode_cache        0      0    640   25    4 : tunables    0    0    0 : slabdata      0      0      0
    inotify_inode_mark    960   1242     88   46    1 : tunables    0    0    0 : slabdata     27     27      0
    sock_inode_cache    1911   1975    640   25    4 : tunables    0    0    0 : slabdata     79     79      0
    proc_inode_cache   16510  17506    632   25    4 : tunables    0    0    0 : slabdata    701    701      0
    shmem_inode_cache  16927  16968    680   24    4 : tunables    0    0    0 : slabdata    707    707      0
    inode_cache        58436  58772    576   28    4 : tunables    0    0    0 : slabdata   2099   2099      0

     

    Whilst I am not sure these are the only numbers that matter, what I can see is that the ones that appear to be important (inode_cache, xfs_inode, etc.) all stay static using pressure 0.

     

    Now if we copy a multi-GB file and look again:

     

    btrfs_inode        35483  37680   1072   30    8 : tunables    0    0    0 : slabdata   1256   1256      0
    mqueue_inode_cache     72     72    896   18    4 : tunables    0    0    0 : slabdata      4      4      0
    v9fs_inode_cache       0      0    648   25    4 : tunables    0    0    0 : slabdata      0      0      0
    xfs_inode         1199018 1199898    960   17    4 : tunables    0    0    0 : slabdata  70817  70817      0
    udf_inode_cache        0      0    728   22    4 : tunables    0    0    0 : slabdata      0      0      0
    fuse_inode        138619 138690    768   21    4 : tunables    0    0    0 : slabdata   6698   6698      0
    cifs_inode_cache       0      0    736   22    4 : tunables    0    0    0 : slabdata      0      0      0
    nfs_inode_cache        0      0    936   17    4 : tunables    0    0    0 : slabdata      0      0      0
    isofs_inode_cache      0      0    624   26    4 : tunables    0    0    0 : slabdata      0      0      0
    fat_inode_cache      644    644    712   23    4 : tunables    0    0    0 : slabdata     28     28      0
    ext4_inode_cache       0      0   1024   16    4 : tunables    0    0    0 : slabdata      0      0      0
    reiser_inode_cache 111210 111210    736   22    4 : tunables    0    0    0 : slabdata   5055   5055      0
    rpc_inode_cache        0      0    640   25    4 : tunables    0    0    0 : slabdata      0      0      0
    inotify_inode_mark    960   1242     88   46    1 : tunables    0    0    0 : slabdata     27     27      0
    sock_inode_cache    1911   1975    640   25    4 : tunables    0    0    0 : slabdata     79     79      0
    proc_inode_cache   16484  17506    632   25    4 : tunables    0    0    0 : slabdata    701    701      0
    shmem_inode_cache  16927  16968    680   24    4 : tunables    0    0    0 : slabdata    707    707      0
    inode_cache        58436  58772    576   28    4 : tunables    0    0    0 : slabdata   2099   2099      0

     

    The numbers are all but identical.

     

    This allows us to make a reasonable assumption that our second rule of thumb is correct (as per docs):

     

    With a cache pressure of 0, assuming you have enough RAM, inodes will not be flushed by other file activity as they normally would be.
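
    For anyone who wants to repeat this, a rough sketch of the whole test (the share path and the copy source are just examples, substitute your own) is:

    # 1. warm the dentry/inode cache by walking the share
    ls -R /mnt/user/cache_only > /dev/null
    awk '/inode/ {print $1, $2}' /proc/slabinfo > /tmp/slab_before
    # 2. generate unrelated I/O, e.g. copy a multi-GB file
    cp /mnt/user/some_share/bigfile.bin /mnt/cache/
    # 3. compare; with vfs_cache_pressure=0 the counts should barely move
    awk '/inode/ {print $1, $2}' /proc/slabinfo > /tmp/slab_after
    diff /tmp/slab_before /tmp/slab_after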

  6. Thanks for that link. It is both useful in itself and probably the bug that inspired me to look at cache pressure again.

     

    Based on the 4kB-per-inode approximation in that thread, if we work backwards we can make our first rule of thumb:

     

    For every 250,000 files and folders you want to keep cached, with a cache pressure of 0 so that inodes are not expelled by normal use, you need roughly 1GB of dedicated RAM.
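
    The arithmetic behind that, plus a quick way to estimate your own requirement (the find path is just an example, point it at whatever you want cached), looks like this:

    # 1 GiB / 4 KiB per cached inode = 262,144 objects, i.e. roughly 250,000
    echo $(( 1024 * 1024 / 4 ))
    # count files and folders, then convert to an approximate GiB figure
    echo "$(find /mnt/user | wc -l) * 4 / 1024 / 1024" | bc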

     

    Now, before I move on, I find this interesting. If I `df -i` to show inodes per disk, only XFS disks give info on the md devices.

     

    e.g.

    reiserfs

    Filesystem     Inodes IUsed IFree IUse% Mounted on
    /dev/md9            0     0     0     - /mnt/disk9

    xfs

    Filesystem     Inodes IUsed IFree IUse% Mounted on
    /dev/md10         56M   67K   56M    1% /mnt/disk10

     

    and the totals of user0/user do not tally with the sum of the drives:


     

    Filesystem     Inodes IUsed IFree IUse% Mounted on
    shfs             301M  972K  300M    1% /mnt/user0
    shfs             534M  1.4M  532M    1% /mnt/user

     

    I would like to know why this is.

  7. It never even occurred to me that CacheDirs would be changing the default. I assumed the value of 10 came from an unRAID-changed default.

     

    So previously I set it manually using `sysctl`, which I will now revert, and apply `-p 1` instead.
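
    For reference, the manual route is just the standard kernel knob (note that a plain sysctl -w does not persist across a reboot):

    # read the current value
    sysctl vm.vfs_cache_pressure
    # set it on the running system
    sysctl -w vm.vfs_cache_pressure=1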

     

    What I am unclear on is how to quantify the results. The various comparisons I can find online involve using `dd` to flood out the RAM and then timing `ls`. I am sure on a vanilla system this is a reasonable enough test, but I want to test on a busy, non-artificial live platform.

     

    Given that, I think I need to track cached inode stats and get a reasonable idea of what number represents a complete in-memory cache of everything. I also want to work out how much RAM this actually needs, with a view to being able to advise the pro user who wants to use a cache pressure of 0.

  8. I have recently started to play again with vfs_cache_pressure and the other tunables in combination with the cache directory addon.

     

    Initial impressions are that, as before, dropping it down to 1 makes a big difference to browsing responsiveness, although that is a bit subjective for my liking.

     

    I have ordered another 16GB of RAM just to test this and was wondering if anyone else is interested in discussing this age-old topic again?

  9. @ken-ji yup, you are not the first person to have this idea. Currently this solution is not implemented, planned or supported, and it is specifically what I meant by "the way you keep your other servers secure cannot be the same as the way you manage unRAID security".

     

    I don't think we should try to push LT into a new real-time test and release model, and we are probably going off topic here. If anyone wants to start a new thread to create a security community working group to head up these things I will actively join in, but I don't have the time to front something of this level of effort at the moment.

  10. We have to be careful with this. By running what is essentially a firmware-based OS you inherently accept two things relevant to OS security:

     

    1. You will never get security fixes as fast as the upstream OS
    2. You place a level of trust in the OS vendor (in this case Limetech LLC) deciding on your behalf what is a serious risk and what is not.

    In some ways this breaks with traditional "security in depth", which requires at its core that you patch every security issue immediately regardless of the perceived threat, or more importantly your perception of that threat (since the days when one person could understand all-things-security and know how all servers are deployed in the wild are long gone).

     

    For these two reasons alone unRAID can never, by definition, be as secure as a non-firmware-based OS, and you should plan your security policy accordingly.

     

    However, for this cost, along with reduced uptime, you get a lot in return, not least of which is the ability to reinstall the whole OS at a whim.

     

    This is why you need to be careful when discussing CVEs etc., because the way you keep your other servers secure cannot be the same as the way you manage unRAID security.

     

    There is room for improvement in the current model but it is important to set the scene that unRAID is no longer inherently insecure by design.

     

  11. On 18/02/2017 at 10:58 AM, NAS said:

    How do I permanently opt out of:

     

     

    
    Preclear Disks
    Install Statistics Plugin
    
    This plugin is used to send statistics anonymously using Google Forms and TOR.
    Don't worry, you will be asked before sending every report.
     

     

     

    so that it is not installed by mistake?

    Apologies if I missed the reply, this is a popular thread.

     

    How do I opt out of this permanently?

     


  12. Just curious if this is being worked on or at least actively considered.

     

    I raise it again as I am currently embarking on a new disk refresh and upgrade (the 3rd since we started discussing this), and I still maintain that this would be both a genuinely useful, time-saving feature and a Unique Selling Point that your competitors simply can't offer (due to the nature of the RAID they use); it should be grabbed with both hands IMHO.

  13. You will have to excuse the assumption that they were a wired provider, as I am not even in the same hemisphere as this company. I read "Wireless" and in my head substituted Cable and Wireless, which for me has always been cable and LAN extension services. A mistake was made, so it is not as big a deal as first thought, but it is still significant.

  14. 2 hours ago, tr0910 said:

    If you are located in the USA, ipv6 isn't likely to be interesting in your lifetime.

     

    I...

    Over 30% of the world's IPv6 access to Google is recorded as coming from the USA.

     

    Fundamentally, every ISP and NOC in the world is jumping through ever more expensive hoops to cater for IPv4, since address space exhaustion is making it more and more expensive. Money drives all innovation, as usual.

     

    Meanwhile IoT, cheap phones and an exponential increase in connected devices suck up more and more IPv4 space.

     

    IPv6 is not just inevitable; it is becoming a necessity faster than most predicted.

     

    But I point people back to my list of changes needed for IPv6. It's non-trivial and touches the vast majority of unRAID components in some way or another. This is not a simple tick-the-box change; it's giant.

     

    But we need to start somewhere.

  15. Let's get into specifics rather than brinkmanship: what are the real implications of this, and do they break anything?

     

    e.g. perhaps this list includes:

    • GUI work for general network config
    • Documentation work (lots)
    • License server
    • Update servers
    • Docker GUI work
    • Docker back end
    • Virtual GUI work
    • Virtual back end
    • Samba control
    • AFS control
    • FTP control
    • NFS control
    • Core addons e.g. unassigned drives
    • Windows domain specific stuff

    What is missing or included by mistake?

    What needs to work at the alpha stage, when IPv6 is a command-line-only option?
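
    As a starting point, the kind of command-line-only sanity checks I would expect to be able to run at that alpha stage (assuming the usual iproute2/iputils tools are in the unRAID shell) would be along these lines:

    # does the box get a global IPv6 address and a default route?
    ip -6 addr show scope global
    ip -6 route show default
    # can it reach the outside world over IPv6?
    ping -6 -c 3 google.com    # ping6 on older iputils
    # are core services listening on IPv6 sockets? (SMB shown as an example)
    ss -6 -tln | grep ':445'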

     

  16. Yes, I agree. I was not quoting stats to say that there is no other way; I just wanted to point out that global rollout is further along than I expected it to be.

     

    Personally I think the security risks of no longer sitting behind NAT, and the privacy issues of the big players being able to profile specific machines within the local network, mean that the general userbase should hold off for as long as possible.

     

    I don't know when the right time is to start developing it properly, but I would have thought that, as a non-GUI, dev-only feature, that time may be now.

  17. For what it is worth, as we speak 1 in every 7 internet users accesses Google using IPv6. This time last year it was 1 in 10, and we are on track for upwards of 30% of the entire internet's Google use being IPv6 by the end of the year.

     

    These figures are skewed very heavily by country, but they took me by surprise.