Docshaker

Members
  • Posts: 16
Everything posted by Docshaker

  1. Ah yes, I do have the Connect plugin installed and have not restarted the server since then. I will restart it and check. Thank you. I ran du -h /var/log just in case before restarting too:
     0	/var/log/pwfail
     120M	/var/log/unraid-api
     12K	/var/log/preclear
     0	/var/log/swtpm/libvirt/qemu
     0	/var/log/swtpm/libvirt
     0	/var/log/swtpm
     0	/var/log/samba/cores/rpcd_winreg
     0	/var/log/samba/cores/rpcd_classic
     0	/var/log/samba/cores/rpcd_lsad
     0	/var/log/samba/cores/samba-dcerpcd
     0	/var/log/samba/cores/winbindd
     0	/var/log/samba/cores/smbd
     0	/var/log/samba/cores
     2.2M	/var/log/samba
     0	/var/log/sa
     0	/var/log/plugins
     0	/var/log/pkgtools/removed_uninstall_scripts
     4.0K	/var/log/pkgtools/removed_scripts
     12K	/var/log/pkgtools/removed_packages
     16K	/var/log/pkgtools
     0	/var/log/nginx
     0	/var/log/nfsd
     0	/var/log/libvirt/qemu
     0	/var/log/libvirt/ch
     0	/var/log/libvirt
     123M	/var/log
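     For anyone chasing the same warning, the oversized directory is easier to spot if the du output is sorted largest-first. A minimal sketch (generic shell, not Unraid-specific; adjust the path as needed):

     ```shell
     # Rank everything under /var/log by size, biggest first, so the culprit
     # (here /var/log/unraid-api at 120M) floats to the top of the list.
     du -h /var/log 2>/dev/null | sort -rh | head -n 5
     ```

     sort -rh does a reverse human-numeric sort, so it orders sizes like 123M, 2.2M, 16K correctly rather than alphabetically.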
  2. First time I've seen this FCP warning: /var/log is getting full (currently 96% used). Diagnostics attached; would love some insight as to what this is all about. Cheers thevoid-diagnostics-20240117-0142.zip
  3. Yes, multiple times; my browser was closed for those 8 hours too. Edit: fixed by using the terminal to check and update the plugin: plugin check dynamix.file.manager.plg and then plugin update dynamix.file.manager.plg
  4. The background task completed 8 hours ago for me, and I am still not seeing anything in the GUI. Am I able to update it using the terminal? If so, what is the syntax? Thank you for the quick response and fix; your hard work is appreciated.
  5. Yes, I just saw it's getting a network error:
  6. I believe that is the version it updated to. I'm currently running Unraid 6.11.5, and it updated without issue. When I started a copy from a pool drive to an array drive, minimized the window, and switched to another page, my screen went blank, as shown in the screenshot above. Syslog doesn't show anything either. From syslog:
  7. Hmm, anyone else having issues with this last update? After it, my GUI is gone; none of the pages load. I can only see the terminal button and log off.
  8. Out of my 28 drives, 20 are enterprise drives, a mix of 12TB and 14TB Ultrastars, HGSTs, and Toshiba MG07s and MG08s (5 of which are the MG08 14TB). Idle, they all run around 28-30C, and the ones that are constantly on run around 34-35C. I've never seen any of them go higher than 36C at worst. Temps are not an issue as long as you have the hardware to support it; I use a Supermicro 847 server. Noise-wise though, I hear my server from my basement up on the first floor lol. General pricing ranges from as low as $280 to $600 USD depending on the brand. The Toshiba 14TBs are usually my favorites since they run around $280-320 with a 5-year warranty. You just need to make sure you are buying from a seller that covers the 5-year warranty on their end, since Toshiba won't cover it if it's an OEM drive.
  9. Ahh ok, thx for the response Squid. Now the fun part of figuring out which stick of RAM it is.
  10. Fix Common Problems: Machine Check Events. Attached is my full diagnostics. thevoid-diagnostics-20210824-0833.zip
  11. My disk cache is set to -1, which is the default setting from when I installed the Docker; would you recommend changing it?
  12. Thank you for answering. I thought as much, but figured I would ask just in case rather than regret it later.
  13. TL;DR version: Can I do all of these at the same time, since parity will be invalidated either way and thus only needs to rebuild once? Remove 1 8TB drive, add 4 12TB drives, and change the array configuration? Long version: I want to remove 1 8TB drive, add 4 new drives, and reset the array configuration, all in one go, instead of removing the 8TB drive, letting parity rebuild, and then adding the 4 new drives and resetting the configuration, which would cause parity to rebuild again.
      - I have two parity drives (14TB)
      - The 8TB drive has been emptied using unBALANCE and no user shares are assigned to it
      - The new drives have been precleared
      - I want to reset my configuration to keep my current two 14TBs as parity, followed by all the 14TB data drives on top, then the 12TBs, etc.
      My current array:
  14. This only works on good drives, according to the manual, unless someone has found a way around that: **This method can only be used if the drive to be removed is a good drive that is completely empty, is mounted and can be completely cleared without errors occurring** https://wiki.unraid.net/Manual/Storage_Management#Removing_data_disk.28s.29
  15. Fix Common Problems: Machine Check Events. Attached is my full diagnostics; here's what I believe is the relevant portion from syslog: thevoid-diagnostics-20210528-1756.zip