Everything posted by tazman

  1. Just found this statement: "My understanding is that the next WD release, the 24TB (Gold and U/star), will be the largest they'll be able to make in CMR format so that may indeed be the one to have in future years." Source: https://www.backblaze.com/blog/backblaze-drive-stats-for-q3-2023/ (in the comment section)
  2. I tried one which immediately failed. The RMA replacement has been running fine for 4 years now. I have not reconsidered them and stick to WD Reds instead.
  3. Hi, With "best" I mean: this size will be available from all providers (= competition), has the chance of being the most cost-effective in cents/GB for the next couple of years, and uses a technology that is reliable and proven. When I started with unRAID back in 2011, 2TB drives were the only choice. Relatively soon, 4TB drives became the standard and were the ones with the lowest cost per GB until now. I mostly use WD Reds now after HGST was bought by them, with some bad experiences with Seagate and Toshiba in-between. At the moment it seems that higher capacities are starting to challenge the 4TB prices. WD currently has an attractive Black Friday deal for their 18TB Pros. I am wondering if it is time to swap out the drives, starting with my two parity drives. But is 18TB the "right" size to adopt as the new standard? I know that unRAID does not care about drive size as long as the parities are the biggest ones. But with so many new large-capacity drives that add yet another TB or two on top, I don't want to constantly swap out the parity drives. So I wonder what your opinion is about: 1. What hard disk sizes are, from a technology perspective, the most reliable ones? I always thought that doubling (2-4-8-16) would be the logical progression. 2. Which disk size has the potential to become the new "lowest cost per TB" leader for the next years? I researched this but could not find any clear answers. Backblaze thinks that the cost will come down to 1ct/GB in 2025: https://www.tomshardware.com/news/backblaze-expects-one-cent-per-gb-hdds-by-2025. They seem to use 16TB drives now, but the report says that the 1ct will be achieved by the 22/24TB drives. 24 would be doubling again. Interim sizes like 16, 18 or 20 would be shorter-lived in-betweens!? So wait for the 24TB? The prospect of parity checks/rebuilds taking 3.5 days with 24TB (my 4TB take 14 hours) scares me a bit. Overall, the WD Red Plus 12TB seems to be the current non-deal sweet spot and every provider has 12TB drives. So is 12TB a good size to go for now? Appreciate your insights and wisdom. Thanks! Tazman
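     For perspective, here is the back-of-the-envelope math behind the 3.5 days; just a rough sketch assuming the rebuild sustains the same ~80MB/s average my 4TB drives manage:

        # rough rebuild-time estimate at a constant ~80 MB/s
        echo $(( 4 * 10**12 / (80 * 10**6) / 3600 ))    # 4TB  -> ~13-14 hours
        echo $(( 24 * 10**12 / (80 * 10**6) / 3600 ))   # 24TB -> ~83 hours, i.e. ~3.5 days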
  4. Dear all, I have .nfo files excluded, as they change frequently. So the Export correctly states that files are skipped: Finished - exported 216628 files, skipped 122 files. Duration: 00:01:59 Yet, upon export, the syslog is flooded with error messages like: Aug 30 17:08:47 SS bunker: error: no export of file: /mnt/disk8/Movies/Brooklyn (2015)/movie.nfo Yes, Mr. Fileintegrity, .nfos should not be exported, as configured. Why do you complain about this? It was so bad that the syslog partition (standard size: 128MB) overflowed and nothing was logged anymore. Yet, unRAID continued to run like a charm (!!!!). The error message is only generated for excluded files (i.e. *.nfo), not folders. Bug? Please fix! Feature? Please make it configurable! Thanks! The bigger issue at hand seems to be that files handled by Mover are not recognized as modified. Like .nfo files when they are touched by Sonarr when a new episode is added. So excluding them is my attempt to prevent getting flooded by bunker verify warnings. Now, my syslog gets flooded instead. Can't avoid the flood. Maybe it's global warming! Ultimately, this makes the plugin less useful as you constantly need to remember to run manual Builds and Exports. Could this be schedulable? Tazman
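     In case anyone wants to see how bad the flood is on their own box, this is roughly how I measured it; nothing but standard commands, assuming the default syslog location:

        # count the repeating bunker lines and check how full the log space is
        grep -c 'bunker: error: no export of file' /var/log/syslog
        df -h /var/log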
  5. +1. I am running 22 mostly 4TB data drives with 82TB overall. I also experienced a dual drive failure for the second time recently. Surely a nail-biting and nerve-wracking experience I do not want to go through again.
  6. Hi Vr2Io, thanks, they are only red when a drive is inserted. The headers are clean. I just replaced the backplane with the most problems (some of the slots were not working; the drives in the other two with a red middle LED are all ok) with one from Supermicro that I had lying around and so far, the preclears are going well. I am also leaning towards your suspicion of a board failure, as power and the SATA cables are working ok now. Best regards, Thomas
  7. Dear all, I have six ICY DOCK 5-in-3 MB455SPF bays of which 3 show a red HDD light in the middle bay. Red indicates a failed drive. However... According to the manual, the HDD red light needs to be signalled by the host adapter via a specific FAIL connector/cable, which is not connected. So it should not be possible for it to be red. Moving a drive from one slot to another, i.e. a drive that shows green in one slot into the red one, changes nothing. Removing the fan does not change anything. Switching the SATA cable does not change anything. The drives work as expected; the slot just stays red regardless. I could not find anything online that explains this. Has anybody seen this and can advise what the middle HDD light = red means, or has a tip what I could try to debug this further? Thanks! Best regards, Thomas
  8. Upgrade from 6.7.2 -> 6.8.3 without problems.
  9. The stability and functionality make me confident about unRAID today; the responsiveness and great support of Tom, the developer team and the community make me confident about unRAID in the future. Simply said: thank you. I keep close track of my hard disks and manually track the development of SMART values, especially the wear-related ones. It would be great if unRAID kept a log of the SMART history for me, e.g. by polling a drive once a day (customizable) and keeping track of changes.
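     A minimal sketch of the kind of daily polling I do by hand today, assuming smartctl (smartmontools) is available and that appending to a file on the array is acceptable; the target path is just a placeholder:

        #!/bin/bash
        # append a dated copy of every disk's SMART attributes to a history log
        # (run once a day, e.g. via cron or the User Scripts plugin)
        LOG=/mnt/user/system/smart-history.log   # placeholder path
        for dev in /dev/sd[a-z]; do
            echo "=== $(date +%F) $dev ===" >> "$LOG"
            smartctl -A "$dev" >> "$LOG" 2>&1
        done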
  10. I have two disks with 1 and 2 files that fail to export. I understand that an export failure is due to the fact that no hash has been generated for them. I (re)Built both drives but the error still persists. Here is one example: xfs_repair (without -n) did not do anything either. All files can be read and copied without problems and are identical to their sources. Note, though, that the folder actually has a whopping 25,000+ files in it. When I ran into this problem I had thousands of export errors on several drives. After Build, I am down to those three. I also noticed that the number of files reported as having an export error was not identical to the number reported as added when I run the Build. Added is higher. I would have expected them to be the same, e.g. Export = cannot find the hash <=> Build = adding the hash. Appreciate your advice on how to judge this and which measures to take to either investigate further or rectify it. Thanks!
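     For what it's worth, this is how I peeked at whether a hash exists for the stubborn files at all; a rough check, assuming getfattr is installed and that the plugin stores its hashes as extended attributes (the exact attribute name depends on the chosen hashing method):

        # dump all extended attributes of one of the affected files
        getfattr -d -m '' '/mnt/diskX/path/to/affected-file'   # placeholder path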
  11. The extended SMART test on the parity 2 drive completed without errors. I then restarted the rebuild and it continued at the normal speed. The read speeds, which showed around 4.5MB/s in the picture above, increased to ~75MB/s and the write speed to ~65MB/s. After the rebuild was complete I ran DiskSpeed again and it did complete for all drives. Although I obviously like the fact that it fixed itself, I will keep a close watch on Parity 2.
  12. Just an interim status: I paused the rebuild and ran DiskSpeed. It's stalling with my Parity 2 drive at about 800MB. Running an extended SMART test on that drive now.
  13. I ran DiskSpeed a few months ago; it's very insightful. The now-to-be-replaced Disk 11 was actually the slowest in the pack. So do I understand you right that most likely one of the other disks might be slowing down the entire process?
  14. Thanks, Johnnie. The speed view shows all drives reading and writing at about the same speed. Does this tell us something?
  15. Hi, I am rebuilding my Disk 11 (old 3TB, new 4TB) at the moment. The rebuild began last night with the expected ~80MB/s and was due to complete in 13 or so hours. When I looked at the status this morning, the rebuild was at 50% but the speed had dropped to ~2MB/s with a time to complete of 8 days. I never had this before and wonder what to do about it. The syslog is free of any related errors as far as I can tell. The SMART report of the new drive does not show anything either. I am attaching the diagnostics file and welcome your kind advice on how to proceed. Thanks! Tazman ss-diagnostics-20191015-0528.zip
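     For reference, this is what I would use to spot-check the raw read speed of the individual drives and rule out a single slow disk; a quick-and-dirty sketch, assuming hdparm is present:

        # uncached sequential read benchmark of every drive
        for dev in /dev/sd[a-z]; do
            echo "--- $dev ---"
            hdparm -t "$dev" | grep Timing
        done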
  16. Checking all other drives now. Noticed that the message "- found root inode chunk" always appears. Probably normal, though "chunk" sounds negative. The scan, regardless of whether -n or -v is added, only ends with "done" but doesn't state clearly whether errors were found or not. I guess when nothing in-between mentions "error" it is ok. But I am wondering if there is a better way of telling whether everything is ok or not.
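     One thing that does seem to give a clear yes/no, if I read the xfs_repair man page right, is the exit code of a no-modify run; a small sketch (array in maintenance mode, X = disk number):

        # dry run: exit code 0 = no corruption, 1 = corruption detected
        xfs_repair -n /dev/mdX ; echo "exit code: $?"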
  17. Thanks, Johnnie, that worked. An xfs_repair -Ln was needed and the drive is ok again. However, during the repair it mentioned that it was moving inodes to lost+found, but I was not able to find any folder with that name on the drive when looking at it via the disk share. It is sad to note that a file system corruption is not protected against by parity and can lead to data loss on a single drive failure.
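     In case someone else goes looking for the recovered inodes, this is how I would search for a lost+found folder across the individual disk mounts (plain shell, nothing unRAID-specific):

        # look for lost+found at the top level of every data disk
        ls -ld /mnt/disk*/lost+found 2>/dev/null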
  18. I just upgraded. During the reboot the machine had problems in the BIOS boot phase and I had to turn it off again. 6.7.0 then reported one disk as "Unmountable: No file system". I am not sure if this is a 6.7.0 problem or maybe a hardware glitch. I have reported it here:
  19. Hi, during the reboot after the 6.7.0 upgrade the machine hung during BIOS boot and needed to be turned off again. When 6.7.0 came up it reported Disk 17 as "Unmountable: No file system" with the following errors in the log: May 21 12:24:14 SS kernel: XFS (md17): Internal error xlog_valid_rec_header(2) at line 5283 of file fs/xfs/xfs_log_recover.c. Caller xlog_do_recovery_pass+0xc7/0x514 [xfs] May 21 12:24:14 SS kernel: CPU: 2 PID: 14488 Comm: mount Not tainted 4.19.41-Unraid #1 May 21 12:24:14 SS emhttpd: /mnt/disk17 mount error: No file system The SMART parameters of the drive look normal and it passes the short SMART test. I didn't run the extended one. Diagnostics are attached. I suspect that the drive got damaged during the stalled BIOS boot and that this is not an unRAID issue. However, I wanted to... - ... report it just in case it might be a (possibly new) unRAID issue - ... get your advice on how to best fix this. Normally I would just rebuild the drive, but an xfs repair might also be possible. Thanks for your support and advice! Tazman ss-diagnostics-20190521-0429.zip
  20. I just did a fresh install and eventually got it to run and the GUI to display. Now, my syslog is flooded with messages like this every minute: I found a possible solution here: https://serverfault.com/questions/317393/connect-failed-111-connection-refused-while-connecting-to-upstream ... but I am too much of a noob with dockers to even grasp what to do. Within rtorrent this error message is logged: Bad response from server: (0 [error,getplugins]) supervisord.log contains: 2019-03-02 20:38:34,021 DEBG 'watchdog-script' stdout output: [info] nginx running [info] Initialising ruTorrent plugins... 2019-03-02 20:38:34,115 DEBG 'watchdog-script' stdout output: [info] ruTorrent plugins initialised What is really strange is that the error messages keep occurring even after the rtorrentvpn docker was stopped! This makes me suspect that the problem might be outside of the docker, despite the fact that it is referenced in the error message. I would really appreciate a nudge in the right direction on how to get this fixed! Thanks! Tazman
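     For my own debugging I plan to narrow down which nginx instance actually emits these messages, since they persist with the container stopped; a rough sketch with standard tools:

        # which nginx processes are running on the host right now?
        ps aux | grep '[n]ginx'
        # what is listening, and on which ports?
        netstat -tlnp | grep nginx
        # how often does the message really occur?
        grep -c 'while connecting to upstream' /var/log/syslog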
  21. I cannot connect to the WebGUI due to a "The connection has timed out" error. After starting the container, the unRAID log reads: Jan 26 11:55:12 SS kernel: veth3ae72c2: renamed from eth0 Jan 26 11:55:12 SS kernel: docker0: port 6(veth76b2945) entered disabled state Jan 26 11:55:13 SS avahi-daemon[10028]: Interface veth76b2945.IPv6 no longer relevant for mDNS. Jan 26 11:55:13 SS avahi-daemon[10028]: Leaving mDNS multicast group on interface veth76b2945.IPv6 with address fe80::e01b:b9ff:fe94:cf4d. Jan 26 11:55:13 SS kernel: docker0: port 6(veth76b2945) entered disabled state Jan 26 11:55:13 SS kernel: device veth76b2945 left promiscuous mode Jan 26 11:55:13 SS kernel: docker0: port 6(veth76b2945) entered disabled state Jan 26 11:55:13 SS avahi-daemon[10028]: Withdrawing address record for fe80::e01b:b9ff:fe94:cf4d on veth76b2945. Jan 26 11:55:16 SS kernel: docker0: port 6(veth2adbcca) entered blocking state Jan 26 11:55:16 SS kernel: docker0: port 6(veth2adbcca) entered disabled state Jan 26 11:55:16 SS kernel: device veth2adbcca entered promiscuous mode Jan 26 11:55:16 SS kernel: IPv6: ADDRCONF(NETDEV_UP): veth2adbcca: link is not ready Jan 26 11:55:16 SS kernel: docker0: port 6(veth2adbcca) entered blocking state Jan 26 11:55:16 SS kernel: docker0: port 6(veth2adbcca) entered forwarding state Jan 26 11:55:16 SS kernel: docker0: port 6(veth2adbcca) entered disabled state Jan 26 11:55:22 SS kernel: eth0: renamed from veth5e2003a Jan 26 11:55:22 SS kernel: IPv6: ADDRCONF(NETDEV_CHANGE): veth2adbcca: link becomes ready Jan 26 11:55:22 SS kernel: docker0: port 6(veth2adbcca) entered blocking state Jan 26 11:55:22 SS kernel: docker0: port 6(veth2adbcca) entered forwarding state Jan 26 11:55:23 SS avahi-daemon[10028]: Joining mDNS multicast group on interface veth2adbcca.IPv6 with address fe80::843:a0ff:fe94:793f. Jan 26 11:55:23 SS avahi-daemon[10028]: New relevant interface veth2adbcca.IPv6 for mDNS. Jan 26 11:55:23 SS avahi-daemon[10028]: Registering new address record for fe80::843:a0ff:fe94:793f on veth2adbcca.*. Config is: root@localhost:# /usr/local/emhttp/plugins/dynamix.docker.manager/scripts/docker run -d --name='Transmission_VPN' --net='bridge' --privileged=true -e TZ="Asia/Singapore" -e HOST_OS="Unraid" -e 'OPENVPN_USERNAME'='xxxx' -e 'OPENVPN_PASSWORD'='xxxx' -e 'OPENVPN_CONFIG'='Hong Kong' -e 'OPENVPN_PROVIDER'='PIA' -e 'LOCAL_NETWORK'='192.168.0.0/24' -e 'TRANSMISSION_RPC_USERNAME'='admin' -e 'TRANSMISSION_RPC_PASSWORD'='password' -e 'OPENVPN_OPTS'='--inactive 3600 --ping 10 --ping-exit 60' -e 'PUID'='99' -e 'PGID'='100' -e 'TRANSMISSION_DOWNLOAD_DIR'='/downloads' -e 'TRANSMISSION_RPC_AUTHENTICATION_REQUIRED'='false' -e 'TRANSMISSION_RATIO_LIMIT'='1.1' -e 'TRANSMISSION_RATIO_LIMIT_ENABLED'='true' -e 'TRANSMISSION_RPC_HOST_WHITELIST_ENABLED'='false' -p '9091:9091/tcp' -p '1198:1198/udp' -v '/mnt/user/.incoming/!Torrents/':'/data':'rw' -v '/mnt/user/.incoming/':'/downloads':'rw' -v '/mnt/user/Public/Torrents/':'/watch':'rw' -v '/mnt/user/appdata/Transmission_VPN':'/config':'rw' --restart=always --log-opt max-size=50m --log-opt max-file=1 'haugene/transmission-openvpn' Appreciate your advice on how to debug/fix this! Thanks!
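     For anyone in the same spot, the basic checks I would start with from the console; nothing container-specific, just docker basics (replace <unraid-ip> with the server address):

        # is the container up, and what does it log during startup?
        docker ps -a | grep Transmission_VPN
        docker logs --tail 100 Transmission_VPN
        # does anything answer on the mapped RPC port?
        curl -I http://<unraid-ip>:9091/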
  22. I have experienced this a couple of times now: the internal search function doesn't return anything relevant. Here is a solution: switch to Google, add 'site:forums.unraid.net' to the search term and you will get much better results. Recent example: "remove cache pool drive": Internal search: first three hits: 1. Correct Procedure to remove Cache Drive? 2. Unraid OS version 6.6.6 available 3. Drive went from 100% full to 'Unmountable: No file system' after a clean shutdown and reboot Google: first three hits: 1. (Solved) Removing a Disk from Cache Pool - General Support - Unraid 2. Removing Cache Drive (SOLVED) - General Support - Unraid 3. Remove drive from Cache pool - General Support - Unraid Merry x-mas! Hope you all find some time to unWIND.
  23. The WebUI refuses to connect in the current version or it gives a Connection failed error: There seem to be two problems: 1. PIA needs to be connected to a port forwarding host as mentioned here: https://www.privateinternetaccess.com/helpdesk/kb/articles/how-do-i-enable-port-forwarding-on-my-vpn. This fixes this error: Note that the settings do not allow for the selection of all those endpoints. For example, there is an invalid Germany choice. Instead there should be a "DE Frankfurt" and a "DE Berlin". 2. From what I could find out there also seems to be a problem with 2.94. Reverting to 2.93 by entering "linuxserver/transmission:121" as the repository and applying the change fixed this. But then, this version doesn't use PIA. Does anybody have an idea how to fix this problem, e.g. by reverting to an older version of haugene/transmission-openvpn? Thanks! tazman
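     If someone knows a good older tag: pinning a previous image version works the same way as the linuxserver revert above, i.e. putting the tag into the Repository field; the tag below is only a placeholder, the real ones are listed on Docker Hub:

        # hypothetical example - replace <older-tag> with an actual tag from Docker Hub
        docker pull haugene/transmission-openvpn:<older-tag>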
  24. I found a solution and have updated the first post accordingly.