themaxxz

Everything posted by themaxxz

  1. I suspect that adding the 'missingok' option to /etc/logrotate.d/cache_dirs will prevent the error. Also, the permissions of this file could be changed to 644.
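     For reference, a minimal sketch of what /etc/logrotate.d/cache_dirs could look like with 'missingok' added (the log path and rotation values here are assumptions, not copied from the plugin):

         # /etc/logrotate.d/cache_dirs
         /var/log/cache_dirs.log {
             missingok
             notifempty
             weekly
             rotate 4
             compress
         }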
  2. Just some follow-up. The full SMART test completed OK and did not report anything, but I went ahead with the replacement anyway. I just put in a new Seagate Exos 5E8, which is rebuilding.
  3. Hi, Could you please add a 'SMART xerror log' button to the 'Self-Test' page for hard disks, to show the xerror log as well? The command seems to be 'smartctl -l xerror <device>'. Reasoning: the regular 'error' log is often empty and may give a false sense that the hard drive is still good, while information displayed in the xerror log indicates that the hard drive is starting to fail. See also https://www.smartmontools.org/ticket/34 E.g. the xerror log can show 'Error: UNC at LBA' entries while the error log shows nothing. See https://techoverflow.net/2016/07/25/how-to-interpret-smartctl-messages-like-error-unc-at-lba/
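     To illustrate the difference, assuming the disk is /dev/sdb (device name is just an example):

         smartctl -l error /dev/sdb     # regular error log, often empty
         smartctl -l xerror /dev/sdb    # extended error log, can show 'Error: UNC at LBA' entries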
  4. Ok, it seems that this log information is printed by the 'smartctl -l xerror' command. Perhaps a 'SMART xerror log' entry could be added to the 'Self-Test' Unraid UI menu of the hard drives, as the xerror log seems to contain more valuable info that is not always shown in the 'error log'.
  5. Hi, Disk8 reported 448 errors, but the disk did not drop from the array. As I just read another post saying some SMART values for WD disks should always be zero, I guess the disk should be replaced. I'm currently running a long SMART test; the last long test was run 3 months ago. How often should a long test be run? Only 2 days ago I ran a parity check which resulted in 0 errors. I have attached the SMART report, could somebody confirm the diagnosis of a failing disk? unraid-smart-20180917-1855.zip
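     For anyone who prefers starting the test from the command line instead of the GUI, something like this should work (device name is just an example):

         smartctl -t long /dev/sdf      # start an extended (long) self-test
         smartctl -l selftest /dev/sdf  # check progress and results afterwards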
  6. Hi, I'd like to report a 'bug' of some sort. After skimming this thread, I thought this plugin could be useful, so I installed it today on my unraid server (6.5.3). I decided to start slow and perform a disk-by-disk build. I first did a build on disk13, which only had 2 files. After that completed I started a build on disk3 with ~500 files, which is currently still ongoing (1 bunker process running related to disk3). I suddenly noticed the UI displayed a green check for all but one disk (disk9) under 'Build up-to-date'. I tracked back that this is populated from the file /boot/config/plugins/dynamix.file.integrity/disks.ini. After manually correcting this file (removing the disks that were not built), the status in the UI reflects the correct situation. While trying to understand how this could happen, I was able to reproduce the issue by clicking the 'hash files' link on the Tools/Integrity page. After clicking this link, all disks except disk9 were re-added to the disks.ini file. What should be the correct format of this disks.ini file?
  7. I just checked my server (6.5.3) and I also noticed the state was stopped. I started it again using the same procedure. (cache_dirs version: 2.2.0j)
  8. Where do you configure these status emails? Is it the "Array status notification" option? From the help: "Start a periodic array health check (preventive maintenance) and notify the user the result of this check." Can anybody provide some more details on what this means? Will this perform a parity check? Thanks,
  9. Hi, The link to the 'bug report' https://forums.unraid.net/forum/66-bug-reports/ appears to be broken resulting in a 404.
  10. I recently did this with an Ubuntu live CD using the setblocksize program. As I'm not sure about the rules on posting external links, google for unsupported-sector-size-520.html
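     For reference, sg_format from the sg3_utils package can apparently achieve the same thing. I used setblocksize myself, so treat this as an untested sketch (device name is an example, and the format erases the disk):

         sg_scan -i                                # find the /dev/sg* device for the disk
         sg_format --format --size=512 /dev/sg3    # low-level format to 512-byte sectors (wipes all data)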
  11. No, the UPS never ran out of power. My trigger condition is 5 minutes on battery. The rest of the network stayed up and was still up when power returned after 30 minutes or so. The server just completed a parity check (which I initiated) and everything checks out. I just didn't know about the shutdown time-out settings waiting for a graceful shutdown.
  12. Ok, so it was an unclean shutdown in this case, most likely due to having too short a timeout set. Thanks.
  13. Thanks, indeed that is where I found the diagnostics file with the syslog.txt from which I quoted the previous logging. But it's not clear to me what the logging for a clean versus unclean shutdown should look like.
  14. Perhaps a stupid question, but how can I see in the unraid diagnostics file if there was an 'unclean' shutdown? This morning the 'nut' UPS client initiated a shutdown 5 minutes after going on battery, as configured:

         May 9 04:09:06 unraid upsmon[5804]: UPS [email protected] on battery
         May 9 04:14:10 unraid upsmon[5804]: Signal 10: User requested FSD
         May 9 04:14:10 unraid upsmon[5804]: Executing automatic power-fail shutdown
         May 9 04:14:10 unraid upsmon[5804]: Auto logout and shutdown proceeding
         May 9 04:14:15 unraid shutdown[1509]: shutting down for system halt
         May 9 04:14:15 unraid init: Switching to runlevel: 0

     The very last entry in the syslog.txt is:

         May 9 04:15:48 unraid root: Generating diagnostics...

     So what does this mean exactly? In the meantime I also changed the timers as suggested.
  15. Fix Common Problems reported an 'error' on my qbittorrent container with version 2017.10.15a. The latest version reports an issue with my qbittorrent docker port not being the default 8080. But I actually had to change the port due to a conflict with another docker, as explained on https://hub.docker.com/r/linuxserver/qbittorrent/ : "Due to issues with CSRF and port mapping, should you require to alter the port for the webui you need to change both sides of the -p 8080 switch AND set the WEBUI_PORT variable to the new port. For example, to set the port to 8090 you need to set -p 8090:8090 and -e WEBUI_PORT=8090" I tried changing only the host port (one side), as suggested by Fix Common Problems, but that doesn't seem to work for this container.
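     For anyone running the container with plain docker run, a minimal sketch of the two changes from the linuxserver docs (port 8090 is just their example; volumes and other options are trimmed):

         docker run -d \
           --name=qbittorrent \
           -p 8090:8090 \
           -e WEBUI_PORT=8090 \
           linuxserver/qbittorrent

     In the Unraid docker template this means editing both the host and container side of the WebUI port mapping and adding/updating the WEBUI_PORT variable.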
  16. Thanks for the good info. I knew about SSD write limitations, but never really thought about it. My 14-month-old 500GB EVO 850 has done 21TB, and it seems to be spec'd for 150TBW. ( http://www.samsung.com/semiconductor/minisite/ssd/product/consumer/850evo.html )
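     Put differently: 21TB in 14 months is roughly 1.5TB per month, so at that rate it would take around 100 months, or a bit over 8 years, to reach the 150TBW rating.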
  17. I'm experiencing the same issue. It usually happens a few days after the last restart.
  18. I don't know about single vs multiple cache drives, but I think you want at least 500GB for the cache drive. My appdata is 186GB with only half a dozen dockers.
  19. Hi, I had the problem after updating to 2017.10.08; I guess I missed the update check for 2017.10.8a. I just updated to 2017.10.09, which seems to work. But the update plugin was missing the 'done' button.
  20. Hi, I just updated to the latest version, but I had to first stop and manually kill the remaining 'upsmon' processes to get it working again. Regards,
  21. I also ran unraid virtualized on ESXi for years, but recently switched to running it bare metal, and I'm now discovering and playing with the wonderful world of dockers. I still have another ESXi host to play with as well. As I was curious whether I could get this to work, I just managed to get ESXi 6.0 Update 1 installed on my unraid box as follows:
     - Create a new VM using the CoreOS template (probably not really important which template you choose).
     - Make sure to provision 2 CPU cores (not threads) to the VM. Just assigning 2 threads from a single core failed with the following error: CPU_CORES_ERROR: This host has [1] cpu core(s) which is less than recommended [2] cpu cores

         <cpu mode='host-passthrough'>
           <topology sockets='1' cores='2' threads='2'/>
         </cpu>

     - Edit the XML and change 1) the network model from virtio to e1000:

         <model type='e1000'/>

       and 2) the storage disk to bus type 'sata', if not already the case:

         <target dev='hdc' bus='sata'/>

     All other settings can be left at their defaults. During installation I did get a few warnings though:
     1) HARDWARE VIRTUALIZATION is not a feature of the CPU, or is not enabled in the BIOS. -> I believe this is related to the 'nested' virtualization setting as explained here: https://allyourco.de/running-vmware-esxi-under-qemu-kvm/ I did not test it, but the following would probably work: stop qemu, edit /etc/modprobe.d/kvm.conf as explained on that site, unload/reload the kvm and kvm-intel modules, and start qemu again.
     2) UNSUPPORTED_DEVICES_WARNING: This host has unsupported devices [some pci device] -> I can probably go into the XML and remove the device, but it doesn't seem to cause an issue.
     I did not test creating/running a VM inside this ESXi instance, since I believe this requires having hardware virtualization enabled, and I don't know how well a VM would run. I have played with VMs running in nested ESXi on ESXi before, and it ran well enough to try out basic stuff.
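     For completeness, the nested-virtualization change described on that site boils down to enabling the 'nested' parameter of the kvm-intel module, roughly like this (untested by me; assumes an Intel CPU and that all VMs are stopped first):

         echo "options kvm-intel nested=1" >> /etc/modprobe.d/kvm.conf
         modprobe -r kvm_intel                           # unload (fails while VMs are running)
         modprobe kvm_intel
         cat /sys/module/kvm_intel/parameters/nested     # should now report Y (or 1)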
  22. There is no rush. I had a quick look at this myself and adding basename( ) around $excluded appears to work, at least when clicking the 'Backup now' button. In /usr/local/emhttp/plugins/ca.backup/scripts/backup.php:

         $rsyncExcluded .= '--exclude "'.basename($excluded).'" ';