Docshaker

Posts posted by Docshaker

  1. On 1/17/2024 at 2:32 AM, itimpi said:

Do you have the Connect plugin installed? It had a bug (cleared in the latest update) which did excessive logging and could cause this. After updating the plugin you need to reboot the server to clear the log space.

     

    If it is not that, then running

    du -h /var/log

    should show what is the likely culprit.

     

Ah yes, I do have the Connect plugin installed and have not restarted the server since then. I will restart it and check. Thank you.

     

    I ran du -h /var/log just in case before restarting it too:

     

    0       /var/log/pwfail
    120M    /var/log/unraid-api
    12K     /var/log/preclear
    0       /var/log/swtpm/libvirt/qemu
    0       /var/log/swtpm/libvirt
    0       /var/log/swtpm
    0       /var/log/samba/cores/rpcd_winreg
    0       /var/log/samba/cores/rpcd_classic
    0       /var/log/samba/cores/rpcd_lsad
    0       /var/log/samba/cores/samba-dcerpcd
    0       /var/log/samba/cores/winbindd
    0       /var/log/samba/cores/smbd
    0       /var/log/samba/cores
    2.2M    /var/log/samba
    0       /var/log/sa
    0       /var/log/plugins
    0       /var/log/pkgtools/removed_uninstall_scripts
    4.0K    /var/log/pkgtools/removed_scripts
    12K     /var/log/pkgtools/removed_packages
    16K     /var/log/pkgtools
    0       /var/log/nginx
    0       /var/log/nfsd
    0       /var/log/libvirt/qemu
    0       /var/log/libvirt/ch
    0       /var/log/libvirt
    123M    /var/log
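As an aside (not from the original exchange): if the culprit isn't obvious at a glance, sorting the same output by size makes the largest directories stand out. A hedged sketch using standard coreutils options:

du -h /var/log | sort -h | tail -n 5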

     

     

2. Yes, multiple times. My browser was closed for those 8 hours too.

     

Edit: fixed it by using the terminal to check and update the plugin:

     

    plugin check dynamix.file.manager.plg

     

    and then


    plugin update dynamix.file.manager.plg
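For anyone else who needs this, the same pattern should work for other plugins. This is a hedged sketch, not from the original post: the exact .plg file names can be listed from the flash drive first, and <plugin-name> is a placeholder you replace with one of those names.

ls /boot/config/plugins/*.plg
plugin check <plugin-name>.plg
plugin update <plugin-name>.plg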

     

3. The background task completed 8 hours ago for me, but I am still not seeing anything in the GUI.

     

Am I able to update it using the terminal? And if so, what is the syntax to do so?

     

    Thank you for the quick response and fix, your hard work is appreciated.

  4. 25 minutes ago, bonienl said:

    This works fine for me.

    Can you open the browser console, this is usually CTRL+SHIFT+i

    Are there any javascript error messages when you try to open a GUI page?

     

     

Yes, I just saw it's getting a network error:

     

    Quote

    [Network error] TypeError: NetworkError when attempting to fetch resource. unraid.min.js:xxx:xxxxxx
    [SET_MY_SERVERS_ERROR]:  NetworkError when attempting to fetch resource. unraid.min.js:xxx:xxxxxx
    Firefox can’t establish a connection to the server at wss://xxx.xxx.xxx.xxx/graphql. client.js:xxx:xxx
    The connection to wss://xxx.xxx.xxx.xxx/graphql was interrupted while the page was loading.
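(A hedged aside, not part of the original exchange: one quick way to see whether the GraphQL endpoint behind those errors is reachable at all is a plain curl request to the same address; -k skips certificate validation in case the cert is self-signed, and the IP is just the redacted placeholder from the quote above.)

curl -k -I https://xxx.xxx.xxx.xxx/graphql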

     

  5. 11 minutes ago, bonienl said:

     

    Latest version is 2023.04.02. Works fine for me on Unraid 6.11 and 6.12

    Do you use this version?

     

     

I believe that is the version it updated to. I'm currently running Unraid 6.11.5.

     

    It updated without issue. When I started a copy from a pool drive to an array drive, I minimized the window and switched to another page and then my screen went blank, as shown in the screenshot above.

     

Syslog doesn't show any errors either.

     

    from syslog:

     

    Quote

    Apr  2 05:33:28 TheVoid root: plugin: running: anonymous
    Apr  2 05:33:28 TheVoid root: plugin: creating: /boot/config/plugins/dynamix.file.manager/dynamix.file.manager.txz - downloading from URL https://raw.githubusercontent.com/bergware/dynamix/master/archive/dynamix.file.manager.txz
    Apr  2 05:33:28 TheVoid root: plugin: checking: /boot/config/plugins/dynamix.file.manager/dynamix.file.manager.txz - MD5
    Apr  2 05:33:28 TheVoid root: plugin: running: /boot/config/plugins/dynamix.file.manager/dynamix.file.manager.txz
    Apr  2 05:33:29 TheVoid root: plugin: running: anonymous
    Apr  2 05:33:29 TheVoid root: plugin: dynamix.file.manager.plg updated
    Apr  2 05:34:08 TheVoid flash_backup: adding task: /usr/local/emhttp/plugins/dynamix.my.servers/scripts/UpdateFlashBackup update
    Apr  2 05:35:47 TheVoid webGUI: Successful login user root from xxx.xxx.xxx.xxx

     

6. Out of my 28 drives, 20 are enterprise drives, a mix of 12TB and 14TB Ultrastars, HGSTs, and Toshiba MG07s and MG08s (5 of which are the 14TB MG08). Idle, they all run around 28-30C, and the ones that are constantly on run around 34-35C. I've never seen any of them go higher than 36C at worst. Temps are not an issue as long as you have the hardware to support them; I use a Supermicro 847 server. Noise-wise though, I can hear my server from the basement up on the first floor, lol. General pricing ranges from as low as $280 to $600 USD depending on the brand. The Toshiba 14TBs are usually my favorites since they run around $280-320 with a 5-year warranty. You just need to make sure you are buying from a seller that covers the 5-year warranty on their end, since Toshiba won't cover it for OEM drives.

  7. On 6/28/2021 at 8:12 AM, binhex said:

Completely normal, this is due to the disk caching feature in qbittorrent. As I have mine set to 2048 MB, my memory usage is just over 2GB right now with qbittorrent doing sweet FA. Note the highlighted field; you can crank this down, but I wouldn't, as it might then affect download speeds when you do have something running. TL;DR: don't worry about it.

     

[screenshot: qBittorrent advanced settings with the disk cache field highlighted]

     

     

My disk cache is set to -1, which was the default setting when I installed the Docker container. Would you recommend changing it?

8. TL;DR version: Can I do all of these at the same time, since parity will be invalidated anyway and would thus only need to be rebuilt once: remove one 8TB drive, add four 12TB drives, and change the array configuration?

     

    Long version:

I want to remove one 8TB drive, add 4 new drives, and reset the array configuration all in one go, instead of removing the 8TB drive, letting parity rebuild, and then adding the 4 new drives and resetting the configuration, which would cause parity to rebuild again.

     

    - I have two parity drives (14TB)

- The 8TB drive has been emptied using Unbalance and no user shares are assigned to it (quick terminal check shown after this list).

- The new drives have been precleared.

- I want to reset my configuration to keep my current two 14TB drives as parity, followed by all the 14TB data drives at the top, then the 12TB drives, and so on.
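As a hedged sanity check (my own addition, not part of the original procedure): before dropping the 8TB disk with a new config, you can confirm from the terminal that it really holds no files. Replace diskN with the actual disk number:

ls -la /mnt/diskN
du -sh /mnt/diskN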

     

    My current array:

     

[screenshot: current array layout]

  9. 22 hours ago, Maticks said:

I remember reading somewhere that if you remove all the data from a Data Drive and remove it from the array, you don't need to rebuild the array.

This only works on good drives, from what the manual said, unless someone found a way around that.

     

    **This method can only be used if the drive to be removed is a good drive that is completely empty, is mounted and can be completely cleared without errors occurring**

     

    https://wiki.unraid.net/Manual/Storage_Management#Removing_data_disk.28s.29

10. Fix Common Problems is reporting Machine Check Events. Attached is my full diagnostics; here's what I believe is the relevant portion from syslog:

     

     

    Quote

    May 28 17:51:59 TheVoid root: Fix Common Problems Version 2021.05.03
    May 28 17:52:00 TheVoid root: Fix Common Problems: Warning: Plugin open.files.plg is not up to date
    May 28 17:52:00 TheVoid root: Fix Common Problems: Warning: Plugin unassigned.devices.plg is not up to date ** Ignored
    May 28 17:52:00 TheVoid root: Fix Common Problems: Warning: Plugin unassigned.devices-plus.plg is not up to date ** Ignored
    May 28 17:52:00 TheVoid root: Fix Common Problems: Warning: Docker Application plex has an update available for it
    May 28 17:52:04 TheVoid root: Fix Common Problems: Error: Machine Check Events detected on your server
    May 28 17:52:04 TheVoid root: Hardware event. This is not a software error.
    May 28 17:52:04 TheVoid root: MCE 0
    May 28 17:52:04 TheVoid root: CPU 0 BANK 7 TSC 107634b4f0dc58 
    May 28 17:52:04 TheVoid root: MISC 152048486 ADDR 63182b240 
    May 28 17:52:04 TheVoid root: TIME 1622176217 Thu May 27 23:30:17 2021
    May 28 17:52:04 TheVoid root: MCG status:
    May 28 17:52:04 TheVoid root: MCi status:
    May 28 17:52:04 TheVoid root: Error overflow
    May 28 17:52:04 TheVoid root: Corrected error
    May 28 17:52:04 TheVoid root: MCi_MISC register valid
    May 28 17:52:04 TheVoid root: MCi_ADDR register valid
    May 28 17:52:04 TheVoid root: MCA: MEMORY CONTROLLER RD_CHANNEL0_ERR
    May 28 17:52:04 TheVoid root: Transaction: Memory read error
    May 28 17:52:04 TheVoid root: STATUS cc00014000010090 MCGSTATUS 0
    May 28 17:52:04 TheVoid root: MCGCAP 7000c16 APICID 0 SOCKETID 0 
    May 28 17:52:04 TheVoid root: PPIN 11bbc7a0aaa1b357
    May 28 17:52:04 TheVoid root: MICROCODE 44
    May 28 17:52:04 TheVoid root: CPUID Vendor Intel Family 6 Model 63
    May 28 17:52:04 TheVoid root: Hardware event. This is not a software error.
    May 28 17:52:04 TheVoid root: MCE 1
    May 28 17:52:04 TheVoid root: CPU 0 BANK 9 TSC 107634b4f0dc58 
    May 28 17:52:04 TheVoid root: MISC 90840100010008c ADDR 63182b000 
    May 28 17:52:04 TheVoid root: TIME 1622176217 Thu May 27 23:30:17 2021
    May 28 17:52:04 TheVoid root: MCG status:
    May 28 17:52:04 TheVoid root: MCi status:
    May 28 17:52:04 TheVoid root: Corrected error
    May 28 17:52:04 TheVoid root: MCi_MISC register valid
    May 28 17:52:04 TheVoid root: MCi_ADDR register valid
    May 28 17:52:04 TheVoid root: MCA: MEMORY CONTROLLER MS_CHANNEL0_ERR
    May 28 17:52:04 TheVoid root: Transaction: Memory scrubbing error
    May 28 17:52:04 TheVoid root: MemCtrl: Corrected patrol scrub error
    May 28 17:52:04 TheVoid root: STATUS 8c000040000800c0 MCGSTATUS 0
    May 28 17:52:04 TheVoid root: MCGCAP 7000c16 APICID 0 SOCKETID 0 
    May 28 17:52:04 TheVoid root: PPIN 11bbc7a0aaa1b357
    May 28 17:52:04 TheVoid root: MICROCODE 44
    May 28 17:52:04 TheVoid root: CPUID Vendor Intel Family 6 Model 63
    May 28 17:52:04 TheVoid root: mcelog: warning: 8 bytes ignored in each record
    May 28 17:52:04 TheVoid root: mcelog: consider an update
    May 28 17:52:59 TheVoid emhttpd: cmd: /usr/local/emhttp/plugins/dynamix.plugin.manager/scripts/plugin update open.files.plg
    May 28 17:52:59 TheVoid root: plugin: running: anonymous
    May 28 17:52:59 TheVoid root: plugin: creating: /boot/config/plugins/open.files/open.files-2021.05.28.tgz 
    May 28 17:52:59 TheVoid root: plugin: checking: /boot/config/plugins/open.files/open.files-2021.05.28.tgz - MD5
    May 28 17:52:59 TheVoid root: plugin: running: anonymous
    ### [PREVIOUS LINE REPEATED 1 TIMES] ###
    May 28 17:53:15 TheVoid flash_backup: adding task: php /usr/local/emhttp/plugins/dynamix.my.servers/scripts/UpdateFlashBackup.php update
    May 28 17:55:15 TheVoid root: Fix Common Problems Version 2021.05.03
    ### [PREVIOUS LINE REPEATED 1 TIMES] ###
    May 28 17:55:16 TheVoid root: Fix Common Problems: Warning: Plugin unassigned.devices.plg is not up to date ** Ignored
    May 28 17:55:16 TheVoid root: Fix Common Problems: Warning: Plugin unassigned.devices-plus.plg is not up to date ** Ignored
    May 28 17:55:17 TheVoid root: Fix Common Problems: Warning: Plugin unassigned.devices.plg is not up to date ** Ignored
    May 28 17:55:17 TheVoid root: Fix Common Problems: Warning: Plugin unassigned.devices-plus.plg is not up to date ** Ignored
    May 28 17:55:19 TheVoid root: Fix Common Problems: Error: Machine Check Events detected on your server
    May 28 17:55:19 TheVoid root: mcelog: warning: 8 bytes ignored in each record
    May 28 17:55:19 TheVoid root: mcelog: consider an update
    May 28 17:55:20 TheVoid root: Fix Common Problems: Error: Machine Check Events detected on your server
    May 28 17:55:20 TheVoid root: mcelog: warning: 8 bytes ignored in each record
    May 28 17:55:20 TheVoid root: mcelog: consider an update
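(A hedged aside of my own, not part of the original report: since these are corrected memory errors, the useful follow-up is usually to watch whether they keep recurring. One simple way is to grep the live syslog for further machine check entries; /var/log/syslog is the standard Unraid location.)

grep -i -E "machine check|mcelog" /var/log/syslog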

     

    thevoid-diagnostics-20210528-1756.zip