Report Comments posted by mgutt

  1. 9 hours ago, ncoolidg said:

    Is there any development on fixing this issue with small files?

    There are different solutions to enhance performance (listed in descending order of impact): 

    • enable SMB Multichannel + RSS (enables multi-threading, so several files are transferred simultaneously instead of one by one; Windows clients support this by default, Linux clients do not) - a sample configuration follows the list
    • use Disk Shares instead of User Shares (avoids Unraid's FUSE (shfs) overhead, which can be massive)
    • enable case-sensitivity (this allows "photo.jpg" and "Photo.jpg" to co-exist in the same directory, but makes SMB much faster when handling a huge number of files)
    • upload to an SSD Cache Pool instead of the Array (>1Gbit/s vs ~0.5Gbit/s)
    • install more RAM and raise the Write-Cache with Tips & Tweaks Plugin to 50 - 80%
    • use a faster CPU
    • do not use encryption
    • do not use a CoW filesystem like BTRFS or ZFS (write amplification = slower than usual filesystems)
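
    For the first point, this is roughly what goes into Settings > SMB > SMB Extras (a sketch only - the IP and the speed value are placeholders for a 10G link, adjust them to your interface):

    # enable SMB Multichannel (needs an RSS-capable NIC on both ends)
    server multi channel support = yes
    # tell Samba which interface offers RSS; speed is in bit/s (here: 10 Gbit/s)
    interfaces = "192.168.178.9;capability=RSS,speed=10000000000"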

     

     

  2. Opening the link to view the content of a disk has become better, but the bug is still present (this shows the worst example; most of the time I was successful):

     

     

     

    It seems you reduced the interval at which the dashboard content is rebuilt. Why rebuild the full content at all? Wouldn't it be better to update only the data that has actually changed, for example a disk's temperature or its usage? Then the link itself wouldn't be rebuilt all the time. Or update it only after the AJAX call has received new data. That would reduce the chance of "clicking at the wrong moment" much more.

  3. This bug seems to have been present for a long time:

     

     

     

     

     

    On 4/9/2022 at 3:29 PM, bonienl said:

    When using Unraid 6.10-rc3 or later, you can use the Dynamix File Manager to move files around.

     

    For me this is a daily situation (backups) in which symlinks are created, so moving them manually is not practical.

  4. SSH / Terminal is dead

     

    Apr 11 20:49:59 Tower webGUI: Successful login user root from 192.168.178.25
    Apr 11 20:50:11 Tower kernel: Bluetooth: Core ver 2.22
    Apr 11 20:50:11 Tower kernel: NET: Registered PF_BLUETOOTH protocol family
    Apr 11 20:50:11 Tower kernel: Bluetooth: HCI device and connection manager initialized
    Apr 11 20:50:11 Tower kernel: Bluetooth: HCI socket layer initialized
    Apr 11 20:50:11 Tower kernel: Bluetooth: L2CAP socket layer initialized
    Apr 11 20:50:11 Tower kernel: Bluetooth: SCO socket layer initialized
    Apr 11 20:51:04 Tower nginx: 2022/04/11 20:51:04 [crit] 3589#3589: *7931687 connect() to unix:/var/run/ttyd.sock failed (2: No such file or directory) while connecting to upstream, client: 192.168.178.25, server: , request: "GET /webterminal/ttyd/ HTTP/1.1", upstream: "http://unix:/var/run/ttyd.sock:/", host: "tower", referrer: "http://tower/Settings/ManagementAccess"
    Apr 11 20:51:07 Tower nginx: 2022/04/11 20:51:07 [crit] 3589#3589: *7931687 connect() to unix:/var/run/ttyd.sock failed (2: No such file or directory) while connecting to upstream, client: 192.168.178.25, server: , request: "GET /webterminal/ttyd/ HTTP/1.1", upstream: "http://unix:/var/run/ttyd.sock:/", host: "tower"
    Apr 11 20:51:44 Tower nginx: 2022/04/11 20:51:44 [crit] 3589#3589: *7932666 connect() to unix:/var/run/ttyd.sock failed (2: No such file or directory) while connecting to upstream, client: 192.168.178.25, server: , request: "GET /webterminal/ttyd/ HTTP/1.1", upstream: "http://unix:/var/run/ttyd.sock:/", host: "tower", referrer: "http://tower/Tools/Syslog"
    Apr 11 20:51:46 Tower nginx: 2022/04/11 20:51:46 [crit] 3589#3589: *7932666 connect() to unix:/var/run/ttyd.sock failed (2: No such file or directory) while connecting to upstream, client: 192.168.178.25, server: , request: "GET /webterminal/ttyd/ HTTP/1.1", upstream: "http://unix:/var/run/ttyd.sock:/", host: "tower"
    Apr 11 20:52:49 Tower nginx: 2022/04/11 20:52:49 [crit] 3589#3589: *7933928 connect() to unix:/var/run/ttyd.sock failed (2: No such file or directory) while connecting to upstream, client: 192.168.178.25, server: , request: "GET /webterminal/ttyd/ HTTP/1.1", upstream: "http://unix:/var/run/ttyd.sock:/", host: "tower", referrer: "http://tower/Dashboard"
    Apr 11 20:53:07 Tower nginx: 2022/04/11 20:53:07 [crit] 3589#3589: *7933928 connect() to unix:/var/run/ttyd.sock failed (2: No such file or directory) while connecting to upstream, client: 192.168.178.25, server: , request: "GET /webterminal/ttyd/ HTTP/1.1", upstream: "http://unix:/var/run/ttyd.sock:/", host: "tower", referrer: "http://tower/Dashboard"
    Apr 11 20:54:12 Tower emhttpd: shcmd (80479): /usr/local/emhttp/webGui/scripts/update_access
    Apr 11 20:54:12 Tower sshd[3466]: Received signal 15; terminating.
    Apr 11 20:54:13 Tower emhttpd: shcmd (80480): /etc/rc.d/rc.nginx reload
    Apr 11 20:54:13 Tower root: Checking configuration for correct syntax and
    Apr 11 20:54:13 Tower root: then trying to open files referenced in configuration...
    Apr 11 20:54:13 Tower root: nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
    Apr 11 20:54:13 Tower root: nginx: configuration file /etc/nginx/nginx.conf test is successful
    Apr 11 20:54:13 Tower root: Reloading Nginx configuration...
    Apr 11 20:54:16 Tower nginx: 2022/04/11 20:54:16 [alert] 3589#3589: *7935181 open socket #4 left in connection 5
    Apr 11 20:54:16 Tower nginx: 2022/04/11 20:54:16 [alert] 3589#3589: *7935187 open socket #16 left in connection 8
    Apr 11 20:54:16 Tower nginx: 2022/04/11 20:54:16 [alert] 3589#3589: aborting
    Apr 11 20:54:28 Tower emhttpd: shcmd (80483): /usr/local/emhttp/webGui/scripts/update_access
    Apr 11 20:55:15 Tower nginx: 2022/04/11 20:55:15 [crit] 30158#30158: *7935815 connect() to unix:/var/run/ttyd.sock failed (2: No such file or directory) while connecting to upstream, client: 192.168.178.25, server: , request: "GET /webterminal/ttyd/ HTTP/1.1", upstream: "http://unix:/var/run/ttyd.sock:/", host: "tower", referrer: "http://tower/Settings/ManagementAccess"
    

    I'm not using this server for anything except testing.
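
    For the record, these are the checks I would run the next time it happens (paths taken from the log above):

    # does the socket nginx is trying to reach still exist?
    ls -la /var/run/ttyd.sock
    # is a ttyd process running at all?
    ps aux | grep '[t]tyd'
    # reload nginx - the same call emhttpd issues in the log
    /etc/rc.d/rc.nginx reload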

  5. Some users mentioned the touch problems when opening share/disk content. I don't think this is related to mobile devices; it's a general problem. Opening the content of a disk on the first click is a 50:50 chance, as the HTML source code seems to be rebuilt every ~0.2 seconds, which prevents clicking the icon:

     

    PS why was the icon changed to an external link icon?! Please use the directory icon again. It makes much more sense.

  6.  I'm facing the same problem:

    [Screenshot 2021-12-27 13:19]

     

    After enabling the logs it says these files do not exist:

    Dec 27 13:29:21 thoth root: Specified filename /mnt/cache/Backups/Shares/appdata/20211224_044001/npm/letsencrypt/live/npm-7/cert.pem does not exist.
    Dec 27 13:29:21 thoth move: move: file /mnt/cache/Backups/Shares/appdata/20211224_044001/npm/letsencrypt/live/npm-7/cert.pem [10301,c50fbc1c]
    Dec 27 13:29:21 thoth move: move_object: /mnt/cache/Backups/Shares/appdata/20211224_044001/npm/letsencrypt/live/npm-7/cert.pem No such file or directory
    Dec 27 13:29:21 thoth root: Specified filename /mnt/cache/Backups/Shares/appdata/20211224_044001/npm/letsencrypt/live/npm-7/chain.pem does not exist.
    Dec 27 13:29:21 thoth move: move: file /mnt/cache/Backups/Shares/appdata/20211224_044001/npm/letsencrypt/live/npm-7/chain.pem [10301,c50fbc1d]
    Dec 27 13:29:21 thoth move: move_object: /mnt/cache/Backups/Shares/appdata/20211224_044001/npm/letsencrypt/live/npm-7/chain.pem No such file or directory
    Dec 27 13:29:21 thoth root: Specified filename /mnt/cache/Backups/Shares/appdata/20211224_044001/npm/letsencrypt/live/npm-7/fullchain.pem does not exist.
    Dec 27 13:29:21 thoth move: move: file /mnt/cache/Backups/Shares/appdata/20211224_044001/npm/letsencrypt/live/npm-7/fullchain.pem [10301,c50fbc1e]
    Dec 27 13:29:21 thoth move: move_object: /mnt/cache/Backups/Shares/appdata/20211224_044001/npm/letsencrypt/live/npm-7/fullchain.pem No such file or directory
    Dec 27 13:29:21 thoth root: Specified filename /mnt/cache/Backups/Shares/appdata/20211224_044001/npm/letsencrypt/live/npm-7/privkey.pem does not exist.
    Dec 27 13:29:21 thoth move: move: file /mnt/cache/Backups/Shares/appdata/20211224_044001/npm/letsencrypt/live/npm-7/privkey.pem [10301,c50fbc1f]
    Dec 27 13:29:21 thoth move: move_object: /mnt/cache/Backups/Shares/appdata/20211224_044001/npm/letsencrypt/live/npm-7/privkey.pem No such file or directory

     

    But I have a different dir which contains links that were moved:

    [Screenshot 2021-12-27 13:24]

     

    The red color means that the symlink's target does not exist, which means the mover already moved the target file. Maybe the mover has a bug and fails because of that?

     

    There are other commands in Linux which fail with the same error:

    getfattr --dump /mnt/cache/Backups/Shares/appdata/20211224_044001/npm/letsencrypt/live/npm-7/*
    getfattr: /mnt/cache/Backups/Shares/appdata/20211224_044001/npm/letsencrypt/live/npm-7/cert.pem: No such file or directory
    getfattr: /mnt/cache/Backups/Shares/appdata/20211224_044001/npm/letsencrypt/live/npm-7/chain.pem: No such file or directory
    getfattr: /mnt/cache/Backups/Shares/appdata/20211224_044001/npm/letsencrypt/live/npm-7/fullchain.pem: No such file or directory
    getfattr: /mnt/cache/Backups/Shares/appdata/20211224_044001/npm/letsencrypt/live/npm-7/privkey.pem: No such file or directory

     

    getfattr needs an additional flag to check the symlink itself:

    -h, --no-dereference
    Do not dereference symlinks. Instead of the file a symlink refers to, the symlink itself is examined. Unless doing a logical (-L) traversal, do not traverse symlinks to directories.
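
    A quick reproduction with made-up paths shows the difference:

    # create a dangling symlink (target does not exist), just like the moved backup files above
    mkdir -p /tmp/symlinktest && cd /tmp/symlinktest
    ln -s /tmp/symlinktest/gone.pem cert.pem

    getfattr --dump cert.pem      # fails with "No such file or directory" because it follows the link
    getfattr -h --dump cert.pem   # examines the symlink itself (may print nothing if it has no xattrs)
    fuser -s cert.pem             # prints "Specified filename ... does not exist" on stderr, as in the mover log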

     

    Is it possible that the Unraid Mover uses this command or a similar one?

     

    EDIT1: Yes, that is the problem: if I copy the symlink target files back to the cache, the mover now moves the symlinks to the array (but skips the duplicate files, which is expected behavior).

     

    EDIT2: One "bug" is this "fuser" command in file /usr/local/sbin/in_use:

    fuser -s "$FILE" && exit

     

    This should be:

    fuser -s "$FILE" 2>/dev/null && exit

     

    But this only suppresses the "root: Specified filename xxx does not exist." error message in the logs. The main problem seems to be inside "/usr/local/sbin/move", which is a proprietary binary of @limetech, so I can't help further.

     

     

    Internal note:

    rsync --archive --itemize-changes --ignore-existing --remove-source-files /mnt/cache/Backups/Shares/appdata/ /mnt/disk7/Backups/Shares/appdata && rsync --archive --itemize-changes --delete "$(mktemp -d)/" /mnt/cache/Backups/Shares/appdata/

     

  7. 8 minutes ago, Hank Moody said:

    I'm running 6.10RC2 and I can't transfer single 10kb files onto shares without reestablishing the connection multiple times. SSH (winscp) doesn't work well currently too. Tried with different shares and settings, rebooting the server, this is driving me nuts.

     

    Your problem is different. Nobody in this topic suffers from reconnects.

  8. On 8/12/2021 at 1:15 AM, limetech said:

    Your syslog from diags report that ACPM is enabled...

    I think the problem lies somewhere around this.

     

    One of my servers has "perfectly" working ASPM (without setting any additional Kernel options):

    lspci -vv | awk '/ASPM.*?abled/{print $0}' RS= | grep --color -P '(^[a-z0-9:.]+|ASPM.*?abled)'
    00:01.0 PCI bridge: Intel Corporation 6th-10th Gen Core Processor PCIe Controller (x16) (rev 07) (prog-if 00 [Normal decode])
                    LnkCtl: ASPM Disabled; RCB 64 bytes, Disabled- CommClk+
    00:1b.0 PCI bridge: Intel Corporation Cannon Lake PCH PCI Express Root Port #17 (rev f0) (prog-if 00 [Normal decode])
                    LnkCtl: ASPM L0s L1 Enabled; RCB 64 bytes, Disabled- CommClk-
    00:1c.0 PCI bridge: Intel Corporation Cannon Lake PCH PCI Express Root Port #1 (rev f0) (prog-if 00 [Normal decode])
                    LnkCtl: ASPM L0s L1 Enabled; RCB 64 bytes, Disabled- CommClk-
    00:1c.5 PCI bridge: Intel Corporation Cannon Lake PCH PCI Express Root Port #6 (rev f0) (prog-if 00 [Normal decode])
                    LnkCtl: ASPM L1 Enabled; RCB 64 bytes, Disabled- CommClk+
    00:1d.0 PCI bridge: Intel Corporation Cannon Lake PCH PCI Express Root Port #9 (rev f0) (prog-if 00 [Normal decode])
                    LnkCtl: ASPM L1 Enabled; RCB 64 bytes, Disabled- CommClk+
    01:00.0 Ethernet controller: Aquantia Corp. AQC107 NBase-T/IEEE 802.3bz Ethernet Controller [AQtion] (rev 02)
                    LnkCtl: ASPM Disabled; RCB 64 bytes, Disabled- CommClk+
    04:00.0 Ethernet controller: Intel Corporation I210 Gigabit Network Connection (rev 03)
                    LnkCtl: ASPM L1 Enabled; RCB 64 bytes, Disabled- CommClk+
    05:00.0 Non-Volatile memory controller: Samsung Electronics Co Ltd NVMe SSD Controller SM981/PM981/PM983 (prog-if 02 [NVM Express])
                    LnkCtl: ASPM L1 Enabled; RCB 64 bytes, Disabled- CommClk+

     

    But its syslog claims the opposite?! Why does FADT return the wrong ASPM capabilities?

    ACPI FADT declares the system doesn't support PCIe ASPM, so disable it

     

    And I have a different server which does not return this message in the syslog, but all of its devices report "ASPM Disabled", although ASPM is enabled in the BIOS (and enabled when using Ubuntu). Even "pcie_aspm=force" doesn't change anything. Only setpci works.
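
    For reference, a manual setpci call looks roughly like this (the device address is the Aquantia NIC from the lspci output above; the written value is only an example - always derive it from a prior read, a wrong write can hang the device):

    # read the Link Control register (byte at offset 0x10 of the PCIe capability)
    setpci -s 01:00.0 CAP_EXP+10.b
    # example: if the read returned 40, setting the ASPM control bits (1:0) to 0b10 enables L1 -> write back 42
    setpci -s 01:00.0 CAP_EXP+10.b=42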

     

    To me it sounds like a driver is missing?!

     

    @Falcosc

    Maybe this script helps to enable ASPM automatically:

    https://web.archive.org/web/20190301120327/http://drvbp1.linux-foundation.org/~mcgrof/scripts/enable-aspm

  9. 3 hours ago, Oreonipples said:

    I'm just not in the mood to setup all my dockers again right now. 

    You don't need to. Change to a directory and then use Apps > Previous Apps to install them as before. The only thing that happens is that the packages (image layers) are re-downloaded. And if you created any, you need to recreate custom networks (see the example below). But you don't need to change templates or anything similar. The docker.img does not contain any important data.
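
    Recreating a custom network is a single command (name and subnet are placeholders - use whatever you had before):

    # simple custom bridge network
    docker network create proxynet
    # or with a fixed subnet/gateway
    docker network create --subnet 172.20.0.0/24 --gateway 172.20.0.1 proxynet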

     

  10. My PC woke up and is now reloading a web terminal infinitely. Even a newly opened web terminal does the same. While this happens, the logs are filled with the error from this bug report:

    nginx: <datetime> [alert] 8330#8330: worker process <random_number> exited on signal 6
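
    A quick way to count how often the worker has crashed so far (standard syslog path):

    grep -c "exited on signal 6" /var/log/syslog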

     

    Now I'm trying to investigate the problem. First, here is what happens in the browser's network monitor:

     

    /webterminal/token is requested as follows:

    GET /webterminal/token HTTP/1.1
    Host: tower:5000
    Connection: keep-alive
    Pragma: no-cache
    Cache-Control: no-cache
    User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/92.0.4515.159 Safari/537.36
    Accept: */*
    Referer: http://tower:5000/webterminal/
    Accept-Encoding: gzip, deflate
    Accept-Language: de-DE,de;q=0.9,en-US;q=0.8,en;q=0.7
    Cookie: users_view=user1; db-box2=2%3B0%3B1; test.sdd=short; test.sdb=short; test.sdc=short; diskio=diskio; ca_startupButton=topperforming; port_select=eth1; unraid_11111111621fb6eace5f11d511111111=11111111365ea67534cf76a011111111; ca_dockerSearchFlag=false; ca_searchActive=true; ca_categoryName=undefined; ca_installMulti=false; ca_categoryText=Search%20for%20webdav; ca_sortIcon=true; ca_filter=webdav; ca_categories_enabled=%5Bnull%2C%22installed_apps%22%2C%22inst_docker%22%2C%22inst_plugins%22%2C%22previous_apps%22%2C%22prev_docker%22%2C%22prev_plugins%22%2C%22onlynew%22%2C%22new%22%2C%22random%22%2C%22topperforming%22%2C%22trending%22%2C%22Backup%3A%22%2C%22Cloud%3A%22%2C%22Network%3A%22%2C%22Network%3AFTP%22%2C%22Network%3AWeb%22%2C%22Network%3AOther%22%2C%22Plugins%3A%22%2C%22Productivity%3A%22%2C%22Tools%3A%22%2C%22Tools%3AUtilities%22%2C%22All%22%2C%22repos%22%2Cnull%2Cnull%2Cnull%2Cnull%2Cnull%2Cnull%5D; ca_selectedMenu=All; ca_data=%7B%22docker%22%3A%22%22%2C%22section%22%3A%22AppStore%22%2C%22selected_category%22%3A%22%22%2C%22subcategory%22%3A%22%22%2C%22selected_subcategory%22%3A%22%22%2C%22selected%22%3A%22%7B%5C%22docker%5C%22%3A%5B%5D%2C%5C%22plugin%5C%22%3A%5B%5D%2C%5C%22deletePaths%5C%22%3A%5B%5D%7D%22%2C%22lastUpdated%22%3A0%2C%22nextpage%22%3A0%2C%22prevpage%22%3A0%2C%22currentpage%22%3A1%2C%22searchFlag%22%3Atrue%2C%22searchActive%22%3Atrue%2C%22previousAppsSection%22%3A%22%22%7D; col=1; dir=0; docker_listview_mode=basic; one=tab1
    

     

    response:

    HTTP/1.1 200 OK
    Server: nginx
    Date: Sat, 28 Aug 2021 16:35:53 GMT
    Content-Type: application/json;charset=utf-8
    Content-Length: 13
    Connection: keep-alive
    
    

    content:

    {"token": ""}

     

    ws://tower:5000/webterminal/ws is requested:

    GET ws://tower:5000/webterminal/ws HTTP/1.1
    Host: tower:5000
    Connection: Upgrade
    Pragma: no-cache
    Cache-Control: no-cache
    User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/92.0.4515.159 Safari/537.36
    Upgrade: websocket
    Origin: http://tower:5000
    Sec-WebSocket-Version: 13
    Accept-Encoding: gzip, deflate
    Accept-Language: de-DE,de;q=0.9,en-US;q=0.8,en;q=0.7
    Cookie: users_view=user1; db-box2=2%3B0%3B1; test.sdd=short; test.sdb=short; test.sdc=short; diskio=diskio; ca_startupButton=topperforming; port_select=eth1; unraid_11111111621fb6eace5f11d511111111=11111111365ea67534cf76a011111111; ca_dockerSearchFlag=false; ca_searchActive=true; ca_categoryName=undefined; ca_installMulti=false; ca_categoryText=Search%20for%20webdav; ca_sortIcon=true; ca_filter=webdav; ca_categories_enabled=%5Bnull%2C%22installed_apps%22%2C%22inst_docker%22%2C%22inst_plugins%22%2C%22previous_apps%22%2C%22prev_docker%22%2C%22prev_plugins%22%2C%22onlynew%22%2C%22new%22%2C%22random%22%2C%22topperforming%22%2C%22trending%22%2C%22Backup%3A%22%2C%22Cloud%3A%22%2C%22Network%3A%22%2C%22Network%3AFTP%22%2C%22Network%3AWeb%22%2C%22Network%3AOther%22%2C%22Plugins%3A%22%2C%22Productivity%3A%22%2C%22Tools%3A%22%2C%22Tools%3AUtilities%22%2C%22All%22%2C%22repos%22%2Cnull%2Cnull%2Cnull%2Cnull%2Cnull%2Cnull%5D; ca_selectedMenu=All; ca_data=%7B%22docker%22%3A%22%22%2C%22section%22%3A%22AppStore%22%2C%22selected_category%22%3A%22%22%2C%22subcategory%22%3A%22%22%2C%22selected_subcategory%22%3A%22%22%2C%22selected%22%3A%22%7B%5C%22docker%5C%22%3A%5B%5D%2C%5C%22plugin%5C%22%3A%5B%5D%2C%5C%22deletePaths%5C%22%3A%5B%5D%7D%22%2C%22lastUpdated%22%3A0%2C%22nextpage%22%3A0%2C%22prevpage%22%3A0%2C%22currentpage%22%3A1%2C%22searchFlag%22%3Atrue%2C%22searchActive%22%3Atrue%2C%22previousAppsSection%22%3A%22%22%7D; col=1; dir=0; docker_listview_mode=basic; one=tab1
    Sec-WebSocket-Key: aaaaaaaa3CNW7Y3Waaaaaaaa
    Sec-WebSocket-Extensions: permessage-deflate; client_max_window_bits
    Sec-WebSocket-Protocol: tty
    
    

    response:

    HTTP/1.1 101 Switching Protocols
    Server: nginx
    Date: Sat, 28 Aug 2021 16:35:53 GMT
    Connection: upgrade
    Upgrade: WebSocket
    Sec-WebSocket-Accept: aaaaaaaaFh/OM7XjuLssaaaaaaaa
    Sec-WebSocket-Protocol: tty
    
    

    content:

    data:undefined,

     

    EDIT: Ah, damn it. I closed one of the still-open GUI tabs and now the web terminal no longer reloads 😩

     

    So there seems to be a connection between the background process that loads notifications and the web terminal. I will try to investigate the problem when it happens again.

     

    But we can compare against the requests that happen when this bug is not present. This time it loads three different URLs:

     

    ws://tower:5000/webterminal/ws

    GET ws://tower:5000/webterminal/ws HTTP/1.1
    Host: tower:5000
    Connection: Upgrade
    Pragma: no-cache
    Cache-Control: no-cache
    User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/92.0.4515.159 Safari/537.36
    Upgrade: websocket
    Origin: http://tower:5000
    Sec-WebSocket-Version: 13
    Accept-Encoding: gzip, deflate
    Accept-Language: de-DE,de;q=0.9,en-US;q=0.8,en;q=0.7
    Cookie: users_view=user1; db-box2=2%3B0%3B1; test.sdd=short; test.sdb=short; test.sdc=short; diskio=diskio; ca_startupButton=topperforming; port_select=eth1; unraid_11111111621fb6eace5f11d511111111=11111111365ea67534cf76a011111111; ca_dockerSearchFlag=false; ca_searchActive=true; ca_categoryName=undefined; ca_installMulti=false; ca_categoryText=Search%20for%20webdav; ca_sortIcon=true; ca_filter=webdav; ca_categories_enabled=%5Bnull%2C%22installed_apps%22%2C%22inst_docker%22%2C%22inst_plugins%22%2C%22previous_apps%22%2C%22prev_docker%22%2C%22prev_plugins%22%2C%22onlynew%22%2C%22new%22%2C%22random%22%2C%22topperforming%22%2C%22trending%22%2C%22Backup%3A%22%2C%22Cloud%3A%22%2C%22Network%3A%22%2C%22Network%3AFTP%22%2C%22Network%3AWeb%22%2C%22Network%3AOther%22%2C%22Plugins%3A%22%2C%22Productivity%3A%22%2C%22Tools%3A%22%2C%22Tools%3AUtilities%22%2C%22All%22%2C%22repos%22%2Cnull%2Cnull%2Cnull%2Cnull%2Cnull%2Cnull%5D; ca_selectedMenu=All; ca_data=%7B%22docker%22%3A%22%22%2C%22section%22%3A%22AppStore%22%2C%22selected_category%22%3A%22%22%2C%22subcategory%22%3A%22%22%2C%22selected_subcategory%22%3A%22%22%2C%22selected%22%3A%22%7B%5C%22docker%5C%22%3A%5B%5D%2C%5C%22plugin%5C%22%3A%5B%5D%2C%5C%22deletePaths%5C%22%3A%5B%5D%7D%22%2C%22lastUpdated%22%3A0%2C%22nextpage%22%3A0%2C%22prevpage%22%3A0%2C%22currentpage%22%3A1%2C%22searchFlag%22%3Atrue%2C%22searchActive%22%3Atrue%2C%22previousAppsSection%22%3A%22%22%7D; col=1; dir=0; docker_listview_mode=basic; one=tab1
    Sec-WebSocket-Key: aaaaaaaaqoOk/3z+aaaaaaaa
    Sec-WebSocket-Extensions: permessage-deflate; client_max_window_bits
    Sec-WebSocket-Protocol: tty
    
    

    response

    HTTP/1.1 101 Switching Protocols
    Server: nginx
    Date: Sat, 28 Aug 2021 16:51:11 GMT
    Connection: upgrade
    Upgrade: WebSocket
    Sec-WebSocket-Accept: aaaaaaaaDWIMhZ8VeZoxaaaaaaaa
    Sec-WebSocket-Protocol: tty
    
    

    This time, there was no content!

     

    /webterminal/ request:

    GET /webterminal/ HTTP/1.1
    Host: tower:5000
    Connection: keep-alive
    Pragma: no-cache
    Cache-Control: no-cache
    Upgrade-Insecure-Requests: 1
    User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/92.0.4515.159 Safari/537.36
    Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9
    Referer: http://tower:5000/Docker
    Accept-Encoding: gzip, deflate
    Accept-Language: de-DE,de;q=0.9,en-US;q=0.8,en;q=0.7
    Cookie: users_view=user1; db-box2=2%3B0%3B1; test.sdd=short; test.sdb=short; test.sdc=short; diskio=diskio; ca_startupButton=topperforming; port_select=eth1; unraid_11111111621fb6eace5f11d511111111=11111111365ea67534cf76a011111111; ca_dockerSearchFlag=false; ca_searchActive=true; ca_categoryName=undefined; ca_installMulti=false; ca_categoryText=Search%20for%20webdav; ca_sortIcon=true; ca_filter=webdav; ca_categories_enabled=%5Bnull%2C%22installed_apps%22%2C%22inst_docker%22%2C%22inst_plugins%22%2C%22previous_apps%22%2C%22prev_docker%22%2C%22prev_plugins%22%2C%22onlynew%22%2C%22new%22%2C%22random%22%2C%22topperforming%22%2C%22trending%22%2C%22Backup%3A%22%2C%22Cloud%3A%22%2C%22Network%3A%22%2C%22Network%3AFTP%22%2C%22Network%3AWeb%22%2C%22Network%3AOther%22%2C%22Plugins%3A%22%2C%22Productivity%3A%22%2C%22Tools%3A%22%2C%22Tools%3AUtilities%22%2C%22All%22%2C%22repos%22%2Cnull%2Cnull%2Cnull%2Cnull%2Cnull%2Cnull%5D; ca_selectedMenu=All; ca_data=%7B%22docker%22%3A%22%22%2C%22section%22%3A%22AppStore%22%2C%22selected_category%22%3A%22%22%2C%22subcategory%22%3A%22%22%2C%22selected_subcategory%22%3A%22%22%2C%22selected%22%3A%22%7B%5C%22docker%5C%22%3A%5B%5D%2C%5C%22plugin%5C%22%3A%5B%5D%2C%5C%22deletePaths%5C%22%3A%5B%5D%7D%22%2C%22lastUpdated%22%3A0%2C%22nextpage%22%3A0%2C%22prevpage%22%3A0%2C%22currentpage%22%3A1%2C%22searchFlag%22%3Atrue%2C%22searchActive%22%3Atrue%2C%22previousAppsSection%22%3A%22%22%7D; col=1; dir=0; docker_listview_mode=basic; one=tab1
    

    response:

    HTTP/1.1 200 OK
    Server: nginx
    Date: Sat, 28 Aug 2021 16:51:11 GMT
    Content-Type: text/html
    Content-Length: 112878
    Connection: keep-alive
    content-encoding: gzip
    
    

    content:

    <!DOCTYPE html><html lang="en"><head><meta charset="UTF-8"><meta http-equiv="X-UA-Compatible" content="IE=edge,chrome=1"><title>ttyd - Terminal</title>
    ...

     

    /webterminal/token request

    GET /webterminal/token HTTP/1.1
    Host: tower:5000
    Connection: keep-alive
    Pragma: no-cache
    Cache-Control: no-cache
    User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/92.0.4515.159 Safari/537.36
    Accept: */*
    Referer: http://tower:5000/webterminal/
    Accept-Encoding: gzip, deflate
    Accept-Language: de-DE,de;q=0.9,en-US;q=0.8,en;q=0.7
    Cookie: users_view=user1; db-box2=2%3B0%3B1; test.sdd=short; test.sdb=short; test.sdc=short; diskio=diskio; ca_startupButton=topperforming; port_select=eth1; unraid_11111111621fb6eace5f11d511111111=11111111365ea67534cf76a011111111; ca_dockerSearchFlag=false; ca_searchActive=true; ca_categoryName=undefined; ca_installMulti=false; ca_categoryText=Search%20for%20webdav; ca_sortIcon=true; ca_filter=webdav; ca_categories_enabled=%5Bnull%2C%22installed_apps%22%2C%22inst_docker%22%2C%22inst_plugins%22%2C%22previous_apps%22%2C%22prev_docker%22%2C%22prev_plugins%22%2C%22onlynew%22%2C%22new%22%2C%22random%22%2C%22topperforming%22%2C%22trending%22%2C%22Backup%3A%22%2C%22Cloud%3A%22%2C%22Network%3A%22%2C%22Network%3AFTP%22%2C%22Network%3AWeb%22%2C%22Network%3AOther%22%2C%22Plugins%3A%22%2C%22Productivity%3A%22%2C%22Tools%3A%22%2C%22Tools%3AUtilities%22%2C%22All%22%2C%22repos%22%2Cnull%2Cnull%2Cnull%2Cnull%2Cnull%2Cnull%5D; ca_selectedMenu=All; ca_data=%7B%22docker%22%3A%22%22%2C%22section%22%3A%22AppStore%22%2C%22selected_category%22%3A%22%22%2C%22subcategory%22%3A%22%22%2C%22selected_subcategory%22%3A%22%22%2C%22selected%22%3A%22%7B%5C%22docker%5C%22%3A%5B%5D%2C%5C%22plugin%5C%22%3A%5B%5D%2C%5C%22deletePaths%5C%22%3A%5B%5D%7D%22%2C%22lastUpdated%22%3A0%2C%22nextpage%22%3A0%2C%22prevpage%22%3A0%2C%22currentpage%22%3A1%2C%22searchFlag%22%3Atrue%2C%22searchActive%22%3Atrue%2C%22previousAppsSection%22%3A%22%22%7D; col=1; dir=0; docker_listview_mode=basic; one=tab1
    

    response

    HTTP/1.1 200 OK
    Server: nginx
    Date: Sat, 28 Aug 2021 16:51:11 GMT
    Content-Type: application/json;charset=utf-8
    Content-Length: 13
    Connection: keep-alive
    

    content:

    {"token": ""}

     

     

    EDIT: Ok, I had this bug again. This time the Shares and Main tabs were open in parallel while the terminal kept reloading. After closing the Main tab, it stopped. This time I'll leave only the Main tab open to verify the connection to this page.

  11. 8 hours ago, pervin_1 said:

    Nextcloud was in the tmp folder. Added extra parameter to mount the tmp folder in RAM disk ( not sure if you script handles the tmp in docker containers ).

    My script covers only /docker/containers. Everything that happens inside the container isn't covered, as that lives under the /docker/overlay2 or /docker/image/btrfs path. So yes, it was a good step to add a RAM disk path for Nextcloud's /tmp folder.
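
    Such a RAM disk can be added as an Extra Parameter in the container template, roughly like this (the size is a placeholder):

    # mount the container's /tmp as tmpfs in RAM
    --tmpfs /tmp:rw,size=256m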

     

    8 hours ago, pervin_1 said:

    The OnlyOffice, mainly writes go to /run/postgresql every one minute or so

    This is something I would not touch. PostgreSQL is a database; it contains important data which shouldn't be in the RAM. Note: if you link a container's /tmp path to a RAM disk, all data inside this path will be deleted on server reboot.

     

    Note: Using /tmp as a RAM disk is the default behavior of Unraid, Debian and Ubuntu. It does not seem to be the default for Alpine, but since such popular distributions use RAM disks for /tmp, I assume application developers do not store important data in /tmp.
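
    You can quickly check this on any distro (the Type column shows tmpfs if /tmp lives in RAM):

    df -Th /tmp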

  12. 14 hours ago, pervin_1 said:

    Does it mean I have some other dockers in the appfolder writing something ( besides status and log files ) to my cash drives,

    Yes. Execute this command multiple times and check which files are updated frequently:

    find /var/lib/docker -type f -print0 | xargs -0 stat --format '%Y :%y %n' | sort -nr | cut -d: -f2- | head -n30

     

    Execute this to find out which folder name belongs to which container:

    csv="CONTAINER ID;NAME;SUBVOLUME\n"; for f in /var/lib/docker/image/*/layerdb/mounts/*/mount-id; do sub=$(cat $f); id=$(dirname $f | xargs basename | cut -c 1-12); csv+="$id;" csv+=$(docker ps --format "{{.Names}}" -f "id=$id")";" csv+="/var/lib/docker/.../$sub\n"; done; echo -e $csv | column -t -s';'
    

     

    14 hours ago, boomam said:

    Let me know if you want/think some diagnostic logs would help diagnose.

    Please post them (and the time when the pool stopped working).

     

     

  13. 5 hours ago, ungeek67 said:

    /mnt/cache/system/docker/docker/containers

    /var/lib/docker/containers

     

    I see the same exact same events on both

    Yes, this drove me crazy as well. Don't ask me why, but because one path is mounted onto the other, both return the same activity results from the RAM disk, although I would expect /mnt/cache to reflect the SSD content?!

     

    The only way to see the real traffic on the SSD is to bind-mount the parent dir:

    mkdir /var/lib/docker_test
    mount --bind /var/lib/docker /var/lib/docker_test

     

    Now you can monitor /var/lib/docker_test and see the difference. That's why I needed to use the same trick to create a backup every 30 minutes. It took multiple days to figure that out 😅

     

    Screenshot:

    [Screenshot 2021-08-25 18:10]

     

    If you use the docker.img it is easier. It has the correct timestamp every 30 minutes. 🤷‍♂️

     

    PS: You can see the higher write activity on your SSD in the Main tab if you wait for the 30-minute backup (xx:00 and xx:30).
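
    If you prefer the CLI, the raw kernel counters work too (replace sdX with your cache device):

    # column 10 of /proc/diskstats is the number of 512-byte sectors written so far;
    # run this before and after xx:00 / xx:30 and compare
    awk '$3=="sdX" {print $10/2 " KiB written in total"}' /proc/diskstats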

     

    Unmount and remove the test dir when you have finished your tests:

    umount /var/lib/docker_test
    rmdir /var/lib/docker_test

     

  14. 8 hours ago, boomam said:

    I'd have to look into it closer

    I did. It was not as easy as I thought, but in the end I was successful. At first I thought I could open two terminals and watch for smartctl and hdparm processes (which Unraid uses to set standby):

    while true; do pid=$(pgrep 'smartctl' | head -1); if [[ -n "$pid" ]];  then ps -p "$pid" -o args && strace -v -t -p "$pid"; fi; done
    
    while true; do pid=$(pgrep 'hdparm' | head -1); if [[ -n "$pid" ]];  then ps -p "$pid" -o args && strace -v -t -p "$pid"; fi; done

     

    But I found out that some of the processes were too fast to monitor. So I changed the source code of hdparm and smartctl and added a sleep of 1 second to both tools ("Trick 17", as we say in Germany ^^). Then I used this command to watch for the processes:

    while true; do for pid in $(pgrep 'smartctl|hdparm'); do if [[ $lastcmd != $cmd ]] || [[ $lastpid != $pid ]]; then cmd=$(ps -p "$pid" -o args); echo $cmd "($pid)"; lastpid=$pid; lastcmd=$cmd; fi; done; done

     

     

    After that I pressed the spin-down icon of an HDD, which returned:

    COMMAND /usr/sbin/hdparm -y /dev/sdb (5766)

     

    After the disk spun down, Unraid started to spam the following command every second:

    COMMAND /usr/sbin/hdparm -C /dev/sdb (5966)

     

    I think this is how Unraid's WebGUI is able to update the icon as quickly as possible if a process wakes up the disk.

     

    Then I pressed the spin-up icon, which returned this:

    COMMAND /usr/sbin/hdparm -S0 /dev/sdb (27296)

     

    And several seconds later, after the disk had spun up, these commands appeared (Unraid checks the SMART values):

    COMMAND /usr/bin/php /usr/local/sbin/smartctl_type disk1 -A (28152)
    COMMAND /usr/sbin/smartctl -A /dev/sdb (28155)

     

    The next step was to click on the spin down icon of the SSD... but nothing happened. So this icon has no function. Buuh ^^

     

    Now I set my Default spin down delay to 15 minutes and waited... and then this appeared:

    COMMAND /usr/sbin/hdparm -y /dev/sdb (5826)
    COMMAND /usr/sbin/hdparm -y /dev/sde (6203)
    COMMAND /usr/sbin/hdparm -y /dev/sdc (6204)

     

    And Unraid is spamming again:

    COMMAND /usr/sbin/hdparm -C /dev/sde (6465)
    COMMAND /usr/sbin/hdparm -C /dev/sdb (6555)
    COMMAND /usr/sbin/hdparm -C /dev/sdc (6643)

     

    But as I suspected, no command mentions /dev/sdd, which is my SATA SSD. So Unraid never sends any standby commands to your SSD.

     

    I remember one of the Unraid devs saying in the forums that SSDs do not consume measurably more energy in standby, so they did not implement the equivalent commands.
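
    If you want to test it anyway, the same commands Unraid uses for HDDs can be sent manually to the SSD (assuming /dev/sdd as above):

    # send the SATA SSD into standby manually (Unraid never does this on its own)
    hdparm -y /dev/sdd
    # verify the power state afterwards
    hdparm -C /dev/sdd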

     

    Conclusion: Since you did not change any setting that covers your SSDs' power management, and since they are working now, your problem must be something else.

    hdparm-9.58-sleep1.txz smartmontools-7.2-sleep1.txz

  15. 30 minutes ago, boomam said:

    saying that the script affects the drives sleep mode

    This was only a guess. Maybe the problem lies somewhere else?! As long as nobody else has this problem and it isn't verified, it does not make sense to warn everyone. So far you are the only one who has had this problem. And as I said, if it is related to power management, it can happen at any time, not only because of this modification.

    30 minutes ago, boomam said:

    Whilst your script doesn't directly affect that, it does dramatically increase the likelihood of it

    If this was your problem, then yes, but by the same argument Unraid would need to throw a warning if you disable Docker or if you create multiple pools or...?!

     

    PS: Wait a week or so. If it does not happen again, revert your sleep setting and we will see if this was the reason. By the way: how did you disable sleep? For SATA these link power management policies exist (commands to check or change them follow the list):

    max_performance
    medium_power
    med_power_with_dipm
    min_power

     

    Which was active in your setup?