mgutt


Report Comments posted by mgutt

  1. @ChatNoir

    You posted a screenshot showing that your WD180EDFZ spins down. I'm now on 6.9.2, too, but nothing spins down except the original Ultrastar DC HC550 18TB that I'm using as my parity disk.

     

    [Screenshot: disk activity at 18:38]

    [Screenshot: disk activity at 23:06]

     

    As you can see, there was no activity on most of the disks, so why isn't Unraid executing the spin-down command?

     

    Logs (yes, that's all):

    Jun 20 18:27:00 thoth root: Fix Common Problems Version 2021.05.03
    Jun 20 18:27:08 thoth root: Fix Common Problems: Warning: Syslog mirrored to flash ** Ignored
    Jun 20 19:07:04 thoth emhttpd: spinning down /dev/sdg

     

    If I click the spin-down icon, it creates a new entry in the logs. The same entry appears if I execute the following command:

    /usr/local/sbin/emcmd cmdSpindown=disk2

     

    So the command itself works flawlessly, but it isn't executed by Unraid.

     

    @limetech What are the conditions before this command gets executed? Does Unraid check the power state before going further? These disks report the power state "IDLE_B" all the time. Would you send me the source code so I can investigate it?
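
    For reference, a minimal sketch of how the power state can be checked from the console before spinning down (the device name and disk slot are examples):

    smartctl -n standby -i /dev/sdg    # prints the current power mode, or reports STANDBY and exits
    hdparm -C /dev/sdg                 # alternative check: "drive state is: active/idle" or "standby"
    /usr/local/sbin/emcmd cmdSpindown=disk2    # manual spin-down through emhttpd (as above)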

  2. 4 hours ago, TexasUnraid said:

    after a few hours

    Was the terminal open during that time? Closing the terminal kills the watch process as well.

     

    If you want long-term monitoring, you could append " &" to the command so it permanently runs in the background; later you can kill the process with the following command:

    pkill -xc inotifywait

     

    Are you using the docker.img? The command can't monitor file changes inside the docker.img; if you want to monitor those, you need to change the path to "/var/lib/docker".
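
    Putting both hints together, a minimal sketch (the output filename is just an example):

    # Log all file changes below the Docker directory in the background, so the
    # monitoring keeps running after the command returns:
    inotifywait -e create,modify,attrib,moved_from,moved_to --timefmt %c --format '%T %_e %w %f' -mr /var/lib/docker > /mnt/user/system/docker_changes.txt &

    # Stop it later (kills every process named exactly "inotifywait"):
    pkill -xc inotifywait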

  3. 23 hours ago, TexasUnraid said:

    is there a way to tell which files are being written to by docker?

     

    You could start with this, which returns the 100 most recently modified files of the docker directory:

    find /mnt/user/system/docker -type f -print0 | xargs -0 stat --format '%Y :%y %n' | sort -nr | cut -d: -f2- | head -n100

     

    Another method would be to log all file changes:

    inotifywait -e create,modify,attrib,moved_from,moved_to --timefmt %c --format '%T %_e %w %f' -mr /mnt/user/system/docker > /mnt/user/system/recent_modified_files_$(date +"%Y%m%d_%H%M%S").txt
    

     

    More about --no-healthcheck and these commands:

    https://forums.unraid.net/bug-reports/stable-releases/683-unnecessary-overwriting-of-json-files-in-dockerimg-every-5-seconds-r1079/?tab=comments#comment-10983

     

  4. Maybe some of you would like to test my script:

    https://forums.unraid.net/topic/106508-force-spindown-script/

     

    It solved multiple issues for me:

    - some disks randomly spin up without any change of their I/O counters, which means Unraid does not know that they are spinning; as a result they stay in the IDLE_A state indefinitely and never spin down.

    - some disks randomly return the STANDBY state although they are spinning. This is really crazy.

    - some disks like to be spun down twice to save even more power; I think the second spin-down triggers the SATA port's standby state.

     

    Feedback is welcome!
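
    The core idea, as a rough sketch only (not the linked script; device name and disk slot are examples):

    # If the disk still answers in an active/idle power state, ask Unraid to spin it down (again).
    # smartctl with -n standby exits non-zero when the disk is already in standby.
    if smartctl -n standby -i /dev/sdg > /dev/null; then
        /usr/local/sbin/emcmd cmdSpindown=disk2
    fi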

  5. 27 minutes ago, boomam said:

    I get that, but considering it was listed as 'resolved' in the 6.9 update - if its still an issue

    The part that is related to Unraid was solved. Nobody can solve the write amplification of BTRFS, and Unraid can't influence how Docker stores its status updates: Docker decided to save this data in a file instead of in RAM, which causes writes. Feel free to like/comment on the issue. Maybe it will be solved sooner if the devs see how many people are suffering from worn-out SSDs.

  6. 13 minutes ago, TexasUnraid said:

    can you explain the HEALTHCHECK?

    https://forums.unraid.net/bug-reports/stable-releases/683-unnecessary-overwriting-of-json-files-in-dockerimg-every-5-seconds-r1079/?tab=comments#comment-10980

     

    13 minutes ago, TexasUnraid said:

    use tips and tweaks

    You can set those values through your go file as well; no plugin necessary. I set only vm.dirty_ratio to 50%:

    [Screenshot: vm.dirty_* settings]

     

    30 seconds until the data is written out is okay for me.
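
    A minimal sketch of the go file approach, assuming the usual Unraid go file at /boot/config/go:

    # /boot/config/go: raise only the dirty page limit; vm.dirty_expire_centisecs stays
    # at its default of 3000 (= 30 seconds until dirty pages are flushed to disk):
    sysctl -w vm.dirty_ratio=50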

  7. 1 hour ago, TexasUnraid said:

    Yeah, thats the theory but when I tested it in the past I didn't see much of a difference

    Ok, the best method is to avoid the writes altogether. That's why I disabled HEALTHCHECK for all my containers.
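
    A minimal sketch of how that can be done (on Unraid the flag goes into the container's "Extra Parameters" field; the image name is only an example):

    # Start a container with the image's HEALTHCHECK disabled, so it no longer
    # rewrites its health status every few seconds:
    docker run -d --no-healthcheck --name example some/image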

     

    1 hour ago, TexasUnraid said:

    increased my dirty writes

    You mean the time (vm.dirty_expire_centisecs) until the dirty data is written to the disk, and not the size?

  8. 7 hours ago, Squid said:

    so that it doesn't cause issues like this

    Is this a "new" feature? Maybe the user installed a container in the past with a /mnt/cache path, so the template was already part of his "previous apps" (which bypasses CA's automatic adjustment). The user said he never had a cache pool, and this problem did not occur until upgrading to Unraid 6.9.

     

    In the end I suggested that the user edit /shares/sharename.cfg and disable the cache through the "shareUseCache" variable.
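
    A minimal sketch of that edit, assuming the share config on the flash drive (commonly /boot/config/shares/<sharename>.cfg; the share name is an example):

    # Disable the cache for a single share by editing its config file...
    nano /boot/config/shares/sharename.cfg
    # ...and setting:
    # shareUseCache="no"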

  9. 4 hours ago, TexasUnraid said:

    I don't think that any single directory has more then 10k files, generally only a few hundred per directory. There are around 500k directories last I checked IIRC.

     

    Ok, I changed the code to generate the 1M random files as follows:

     

    share_name="Music"
    # create 1,000 sub-directories named 000 ... 999
    mkdir "/mnt/cache/${share_name}/randomfiles"
    for n in {0..999}; do
        dirname=$( printf %03d "$n" )
        mkdir "/mnt/cache/${share_name}/randomfiles/${dirname}/"
    done
    # create 1,000,000 random files of 4k-20k each; digits 4-6 of the zero-padded
    # number select the sub-directory, so the files are spread evenly
    for n in {1..1000000}; do
        filename=$( printf %07d "$n" )
        dirname=${filename:3:3}
        dd status=none if=/dev/urandom of="/mnt/cache/${share_name}/randomfiles/${dirname}/${filename}.bin" bs=4k count=$(( RANDOM % 5 + 1 ))
    done
    

     

    Now we get 1000 directories and each contains 1000 files.

     

     

    More tests follow...

     

  10. 23 hours ago, TexasUnraid said:

    Try it with 1,000,000+

    While the download worked without any problems, I now seem to hit your problem while uploading:

    [Screenshot: slow upload, high smbd load]

     

    If I pause the transfer, the smbd load disappears immediately:

    [Screenshot: smbd load after pausing the transfer]

     

    And resuming is as slow as before.

     

    Then I tried to:

    - trim the SSD (/sbin/fstrim -v /mnt/cache)

    - clear the Linux page cache (sync; echo 1 > /proc/sys/vm/drop_caches)

    - restart Samba (samba restart)

     

    Then I created a RAM disk and copied through Windows. The load of the smbd service rises while the transfer speed drops:

    [Screenshot: smbd load rising while transfer speed drops (RAM disk transfer)]

     

    Then I remembered this tuning guide about directories that contain a huge number of files:

    https://www.samba.org/samba/docs/old/Samba3-HOWTO/largefile.html

     

    So I disabled the case-insensitive name matching through "nano /etc/samba/smb-shares.conf":

    [Screenshot: case sensitivity settings in smb-shares.conf]
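
    For reference, these are the per-share settings the linked guide recommends (the share name is an example; the exact values used here are only visible in the screenshot):

    [Music]
        case sensitive = yes
        default case = lower
        preserve case = no
        short preserve case = no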

     

    And yes, now the transfer speed and the load remain stable:

    [Screenshot: stable transfer speed and smbd load]

     

     

    Could this be your problem? Do you have more than 10,000 files in a single sub-directory?

  11. 10 minutes ago, TexasUnraid said:

    I will add them and give it a try when I can take it offline in a day or 2.

     

    Those settings work without a reboot. You only need to add them to smb-extra.conf and execute "samba restart".

     

    But you need to replace 10000000000 with 1000000000 if you're using a 1G adapter.

     

    This won't help much on its own, though, as ViceVersa does not use multiple threads. But it will help if you have other background connections to Unraid.

     

    Quote

    I am guessing that I just need to change the IP to my main IP address? I have both 1gig and 10gig. Do I comma separate and add both addresses?

     

    Do you need the 1G connection? If not, don't use it. I never tested multiple adapters, but you need to add the IP of the specific adapter.
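
    For context, a minimal sketch of the kind of smb-extra.conf entries discussed here (the IP address is an example; the speed value is in bits per second):

    # Announce the 10G adapter to SMB Multichannel clients, including its RSS capability:
    server multi channel support = yes
    interfaces = "192.168.1.100;speed=10000000000,capability=RSS"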

     

  12. Just now, TexasUnraid said:

    They are all running default windows 10 network settings though.

    Then they will use SMB Multichannel, and if both network adapters support RSS, they will use that as well, because it's the default:

    https://docs.microsoft.com/en-us/previous-versions/windows/it-pro/windows-server-2012-r2-and-2012/dn610980(v=ws.11)

    Quote

    Because SMB Multichannel is enabled by default, you do not have to install additional roles, role services, or features. The SMB client automatically detects and uses multiple network connections when the configuration is identified.

     

    This is the only "magical" difference between win2win and win2unraid. And if you don't use a direct disk path as your target, you additionally suffer from the SHFS overhead.