tazman

Members
  • Posts: 57

Converted

  • Gender: Male
  • Location: Singapore

Recent Profile Visitors

790 profile views

tazman's Achievements

Rookie (2/14)

Reputation: 5

  1. Upgraded from 6.7.2 -> 6.8.3 without problems.
  2. The stability and functionality make me confident about unRAID today; the responsiveness and great support of Tom, the developer team and the community make me confident about unRAID in the future. Simply said: thank you! I keep close track of my hard disks and manually track the development of wear-related SMART values in particular. It would be great if unRAID kept a log of the SMART history for me, e.g. by polling a drive once a day (customizable) and keeping track of changes (a rough sketch of what I have in mind is below this list).
  3. I have two disks with 1 and 2 files, respectively, that fail to export. I understand that an export failure means no hash has been generated for them. I ran a (re)Build on both drives but the error still persists. Here is one example: xfs_repair (without -n) did not do anything either. All files can be read and copied without problems and are identical to their sources. Note, though, that the folder actually has a whopping 25000+ files in it. When I ran into this problem I had thousands of export errors on several drives; after the Build I am down to those three. I also noticed that the number of files reported as having an export error was not identical to the number reported as added when I run the Build. Added is higher. I would have expected them to be the same, i.e. Export = cannot find the hash <=> Build = adding the hash. Appreciate your advice on how to judge this and which measures to take to either investigate further or rectify it (a way to check the affected files directly is sketched below this list). Thanks!
  4. The extended SMART test on the parity 2 drive completed without errors. I then restarted the rebuild and it continued at the normal speed. The read speeds, which showed around 4.5MB/s in the picture above, increased to ~75MB/s and the write speed to ~65MB/s. After the rebuild was complete I ran DiskSpeed again and it completed for all drives. Although I obviously like the fact that it fixed itself, I will keep Parity 2 under close watch.
  5. Just an interim status: I paused the rebuild and ran DiskSpeed. It's stalling on my Parity 2 drive at about 800Mb. Running an extended SMART test on that drive now.
  6. I ran DiskSpeed a few months ago; it's very insightful. Disk 11, which is now being replaced, was actually the slowest of the pack. So do I understand you correctly that most likely one of the other disks might be slowing down the entire process?
  7. Thanks, Johnnie. The speed view shows all drives reading and writing at about the same speed. Does this tell us something?
  8. Hi, I am rebuilding my Disk 11 (old 3TB, new 4TB) at the moment. The rebuild began last night at the expected ~80MB/s and was due to complete in 13 or so hours. When I looked at the status this morning the rebuild was at 50%, but the speed had dropped to ~2MB/s with an estimated 8 days to complete. I have never had this before and wonder what to do about it. The syslog is free of any related errors as far as I can tell, and the SMART report of the new drive does not show anything either. I am attaching the diagnostics file and welcome your kind advice on how to proceed. Thanks! Tazman ss-diagnostics-20191015-0528.zip
  9. Checking all other drives now. Noticed that the message "- found root inode chunk" always appears. Probably normal, though "chunk" sounds negative. The scan, regardless of whether -n or -v is added, only ends with "done" but doesn't clearly state whether errors were found or not. I guess when nothing in between mentions "error" it is ok, but I'm wondering if there is a better way of telling whether everything is ok or not (see the exit-code check sketched below this list).
  10. Thanks, Johnny, that worked. An xfs_repair -Ln was needed and the drive is ok again. However, during the repair it mentioned that it was moving inodes to lost+found, but I was not able to find any folder with that name on the drive when looking at it via the disk share (some lookup commands are sketched below this list). It is sad to note that file system corruption is not protected against by parity and can lead to data loss on a single drive.
  11. I just upgraded. During the reboot the machine had problems in the BIOS boot phase and I had to turn it off again. 6.7.0 then reported one disk as "Unmountable: No file system". I am not sure if this is a 6.7.0 problem or maybe a hardware glitch. I have reported it here:
  12. Hi, during the reboot after the 6.7.0 upgrade the machine hung during the BIOS boot and needed to be turned off again. When 6.7.0 came up it reported Disk 17 as "Unmountable: No file system" with the following errors in the log:
      May 21 12:24:14 SS kernel: XFS (md17): Internal error xlog_valid_rec_header(2) at line 5283 of file fs/xfs/xfs_log_recover.c. Caller xlog_do_recovery_pass+0xc7/0x514 [xfs]
      May 21 12:24:14 SS kernel: CPU: 2 PID: 14488 Comm: mount Not tainted 4.19.41-Unraid #1
      May 21 12:24:14 SS emhttpd: /mnt/disk17 mount error: No file system
      The SMART parameters of the drive look normal and it passes the short SMART test; I didn't run the extended one. Diagnostics are attached. I suspect that the drive got damaged during the stalled BIOS boot and that this is not an unRAID issue. However, I wanted to...
      - ... report it just in case it might be a (possibly new) unRAID issue
      - ... get your advice on how to best fix this. Normally I would just rebuild the drive, but an xfs_repair might also be possible (a rough outline of the repair route is below this list).
      Thanks for your support and advice! Tazman ss-diagnostics-20190521-0429.zip
  13. I just did a fresh install and eventually got it to run and the GUI to display. Now my syslog is flooded with messages like this every minute: I found a possible solution here: https://serverfault.com/questions/317393/connect-failed-111-connection-refused-while-connecting-to-upstream ... but I am too much of a noob with Docker to even grasp what to do. Within rtorrent this error message is logged: Bad response from server: (0 [error,getplugins]). supervisord.log contains:
      2019-03-02 20:38:34,021 DEBG 'watchdog-script' stdout output: [info] nginx running [info] Initialising ruTorrent plugins...
      2019-03-02 20:38:34,115 DEBG 'watchdog-script' stdout output: [info] ruTorrent plugins initialised
      What is really strange is that the error messages keep occurring even after the rtorrentvpn docker was stopped! This makes me suspect that the problem might be outside of the docker, despite the fact that it is referenced in the error message. Really appreciate a nudge in the right direction on how to get this fixed (some first narrowing-down steps are sketched below this list)! Thanks! Tazman
  14. I cannot connect to the WebGUI due to a "The connection has timed out" error. After starting the container the unRAID log reads:
      Jan 26 11:55:12 SS kernel: veth3ae72c2: renamed from eth0
      Jan 26 11:55:12 SS kernel: docker0: port 6(veth76b2945) entered disabled state
      Jan 26 11:55:13 SS avahi-daemon[10028]: Interface veth76b2945.IPv6 no longer relevant for mDNS.
      Jan 26 11:55:13 SS avahi-daemon[10028]: Leaving mDNS multicast group on interface veth76b2945.IPv6 with address fe80::e01b:b9ff:fe94:cf4d.
      Jan 26 11:55:13 SS kernel: docker0: port 6(veth76b2945) entered disabled state
      Jan 26 11:55:13 SS kernel: device veth76b2945 left promiscuous mode
      Jan 26 11:55:13 SS kernel: docker0: port 6(veth76b2945) entered disabled state
      Jan 26 11:55:13 SS avahi-daemon[10028]: Withdrawing address record for fe80::e01b:b9ff:fe94:cf4d on veth76b2945.
      Jan 26 11:55:16 SS kernel: docker0: port 6(veth2adbcca) entered blocking state
      Jan 26 11:55:16 SS kernel: docker0: port 6(veth2adbcca) entered disabled state
      Jan 26 11:55:16 SS kernel: device veth2adbcca entered promiscuous mode
      Jan 26 11:55:16 SS kernel: IPv6: ADDRCONF(NETDEV_UP): veth2adbcca: link is not ready
      Jan 26 11:55:16 SS kernel: docker0: port 6(veth2adbcca) entered blocking state
      Jan 26 11:55:16 SS kernel: docker0: port 6(veth2adbcca) entered forwarding state
      Jan 26 11:55:16 SS kernel: docker0: port 6(veth2adbcca) entered disabled state
      Jan 26 11:55:22 SS kernel: eth0: renamed from veth5e2003a
      Jan 26 11:55:22 SS kernel: IPv6: ADDRCONF(NETDEV_CHANGE): veth2adbcca: link becomes ready
      Jan 26 11:55:22 SS kernel: docker0: port 6(veth2adbcca) entered blocking state
      Jan 26 11:55:22 SS kernel: docker0: port 6(veth2adbcca) entered forwarding state
      Jan 26 11:55:23 SS avahi-daemon[10028]: Joining mDNS multicast group on interface veth2adbcca.IPv6 with address fe80::843:a0ff:fe94:793f.
      Jan 26 11:55:23 SS avahi-daemon[10028]: New relevant interface veth2adbcca.IPv6 for mDNS.
      Jan 26 11:55:23 SS avahi-daemon[10028]: Registering new address record for fe80::843:a0ff:fe94:793f on veth2adbcca.*.
      The config is:
      root@localhost:# /usr/local/emhttp/plugins/dynamix.docker.manager/scripts/docker run -d --name='Transmission_VPN' --net='bridge' --privileged=true -e TZ="Asia/Singapore" -e HOST_OS="Unraid" -e 'OPENVPN_USERNAME'='xxxx' -e 'OPENVPN_PASSWORD'='xxxx' -e 'OPENVPN_CONFIG'='Hong Kong' -e 'OPENVPN_PROVIDER'='PIA' -e 'LOCAL_NETWORK'='192.168.0.0/24' -e 'TRANSMISSION_RPC_USERNAME'='admin' -e 'TRANSMISSION_RPC_PASSWORD'='password' -e 'OPENVPN_OPTS'='--inactive 3600 --ping 10 --ping-exit 60' -e 'PUID'='99' -e 'PGID'='100' -e 'TRANSMISSION_DOWNLOAD_DIR'='/downloads' -e 'TRANSMISSION_RPC_AUTHENTICATION_REQUIRED'='false' -e 'TRANSMISSION_RATIO_LIMIT'='1.1' -e 'TRANSMISSION_RATIO_LIMIT_ENABLED'='true' -e 'TRANSMISSION_RPC_HOST_WHITELIST_ENABLED'='false' -p '9091:9091/tcp' -p '1198:1198/udp' -v '/mnt/user/.incoming/!Torrents/':'/data':'rw' -v '/mnt/user/.incoming/':'/downloads':'rw' -v '/mnt/user/Public/Torrents/':'/watch':'rw' -v '/mnt/user/appdata/Transmission_VPN':'/config':'rw' --restart=always --log-opt max-size=50m --log-opt max-file=1 'haugene/transmission-openvpn'
      Appreciate your advice on how to debug/fix this (a couple of first checks are sketched below this list)! Thanks!
  15. I have experienced this a couple of times now: the internal search function doesn't return anything relevant. Here is a solution: switch to Google, add 'site:forums.unraid.net' to the search term and you will get much better results. Recent example: "remove cache pool drive".
      Internal search, first three hits:
      1. Correct Procedure to remove Cache Drive?
      2. Unraid OS version 6.6.6 available
      3. Drive went from 100% full to 'Unmountable: No file system' after a clean shutdown and reboot
      Google, first three hits:
      1. (Solved) Removing a Disk from Cache Pool - General Support - Unraid
      2. Removing Cache Drive (SOLVED) - General Support - Unraid
      3. Remove drive from Cache pool - General Support - Unraid
      Merry x-mas! Hope you all find some time to unWIND.
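
A note on item 2 above: a minimal sketch of the kind of daily SMART polling meant there, assuming it is run from the unRAID console (e.g. via cron or the User Scripts plugin). The log folder and the /dev/sd? drive list are placeholders; nothing like this ships with unRAID.

    #!/bin/bash
    # Hypothetical daily SMART snapshot; paths and drive selection are placeholders.
    LOGDIR=/boot/smart-history
    mkdir -p "$LOGDIR"
    for DEV in /dev/sd?; do
        # Key snapshots to the drive serial so they survive device re-lettering.
        SERIAL=$(smartctl -i "$DEV" | awk -F: '/Serial Number/ {gsub(/ /,"",$2); print $2}')
        [ -z "$SERIAL" ] && continue
        TODAY="$LOGDIR/${SERIAL}_$(date +%F).txt"
        smartctl -A "$DEV" > "$TODAY"          # note: this wakes a spun-down drive
        # Record any attribute changes compared with the previous snapshot.
        PREV=$(ls -1 "$LOGDIR/${SERIAL}_"*.txt 2>/dev/null | tail -n 2 | head -n 1)
        if [ -n "$PREV" ] && [ "$PREV" != "$TODAY" ]; then
            diff "$PREV" "$TODAY" >> "$LOGDIR/${SERIAL}_changes.log"
        fi
    done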
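
A note on item 3 above: the File Integrity plugin stores its hashes as extended attributes on each file, so one way to see whether the stubborn files carry a hash at all is to inspect their attributes directly from the console. This assumes getfattr is available; the path below is a placeholder, and the attribute name depends on the hashing method configured.

    # Dump all user extended attributes of one affected file (placeholder path).
    getfattr -d "/mnt/disk3/some/folder/affected-file.ext"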
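
A note on item 9 above: besides reading the output, the exit code of a read-only check can be used. Per the xfs_repair man page, a run with -n returns 1 if corruption was detected and 0 if not. md1 below is only an example device, and the array is assumed to be started in Maintenance mode.

    xfs_repair -n /dev/md1
    echo $?    # 0 = no corruption detected, 1 = corruption found (only meaningful with -n)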
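
A note on item 10 above: lost+found is created in the root of the repaired file system, so it is worth looking for it directly on the disk mount from the console rather than over the network share. disk17 is only a placeholder for whichever disk was repaired; if the repair had nothing to reconnect, the folder may simply not exist.

    ls -la /mnt/disk17/ | grep -i lost
    find /mnt/disk17 -maxdepth 1 -iname 'lost+found'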
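
A note on item 12 above: a rough outline of the usual repair route, assuming the array is started in Maintenance mode so the check runs against the md device (md17 for Disk 17) and parity stays in sync. This is a sketch of the general procedure, not official guidance for this specific case.

    xfs_repair -nv /dev/md17    # dry run: report problems only, change nothing
    xfs_repair -v /dev/md17     # actual repair
    # Only if it refuses because the log cannot be replayed: -L zeroes the log,
    # which can discard the transactions that were still in it.
    # xfs_repair -Lv /dev/md17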
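
A note on item 13 above: some generic narrowing-down steps for a "connection refused while connecting to upstream" flood that survives stopping the container, assuming they are run from the unRAID console; the port numbers are only guesses.

    docker ps                                  # is any other container still running an nginx/ruTorrent instance?
    netstat -tlnp | grep -E ':80|:443|:9080'   # which process still owns the suspected web ports
    tail -f /var/log/syslog                    # do the messages keep arriving with the container stopped?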
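
A note on item 14 above: a couple of first checks from the unRAID console, using the container name from the run command quoted in that post. Whether the OpenVPN tunnel actually comes up is usually the first thing to rule out; since port 9091 is mapped to the host, a local curl also shows whether Transmission is listening at all.

    docker logs Transmission_VPN | tail -n 50         # did OpenVPN connect, or is it looping on errors?
    curl -v http://localhost:9091/transmission/web/   # does Transmission answer on the mapped port?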