argonaut

Members

  • Posts: 47
  • Joined
  • Last visited

1 Follower

Converted

  • Gender: Undisclosed

argonaut's Achievements

  • Rank: Rookie (2/14)
  • Reputation: 4

  1. I think there could be a couple of variables not accounted for in your quest for speed. Having tested on Linux, you have eliminated the hardware path as the issue. However, have you considered the Windows drivers? You might want to experiment with the driver provider and the driver version. I also agree with @jonathanm that any third-party software installed on your Windows host is suspect. The only way to test that is with a clean Windows install with no third-party applications. Windows A/V and firewall software have the potential to severely impact your performance.
  2. The badness starts at Jan 25 04:01:07. What do you have configured to run at 04:00? It might be something inside one of your containers. Is it always at 04:00? Can you get dmesg output when the system is in a bad state? Also consider a remote syslog facility. Since you say the system still responds over TCP/IP, ssh in, run dmesg, and look for oops (kernel panic) and oom (out of memory) errors (see the dmesg sketch after this list).
  3. Firefox since at least version 57 (much longer, for sure) has had an issue with black text on black background fields in some CSS elements. I know this happens in GNOME, Xfce and KDE, and it is much worse if you have a dark theme enabled. I don't use Windows, so I don't know about that OS. It can be mitigated via a CSS tweak (see the userContent.css sketch after this list). The Firefox developers say it is an issue with how pages style (or fail to style) their inputs in CSS, not a Firefox bug, if you consider native input styles to be a feature rather than a bug. See https://bugzilla.mozilla.org/show_bug.cgi?id=1283086
  4. XFS does not have a verify-write option. To accomplish what you are asking for, you need to compare the two sets of files. Native Linux tools like md5sum can do this (see the checksum sketch after this list). Search for "Linux compare contents of two directories" and you should find plenty of examples.
  5. You could try a speed test outside of your VM. If performance is also slow from Unraid itself, it might point to a different issue. ISPs often manipulate WAN performance when they detect a speed test connecting to speedtest.net. There are many alternatives, such as https://testmy.net/ and https://github.com/sivel/speedtest-cli, which is a CLI speed-test tool you should be able to run from the Unraid CLI (see the sketch after this list).
  6. I think it was a controller failure. I took out the two HDDs that were suspect and tested them on a separate host with fsck and smartctl; they had no errors. A friend down the street had a controller I used. I replaced my AOC-SASLP-MV8 controllers with an LSI 9305-24i. After ensuring the BIOS and Unraid saw all the devices, I followed the 'Trust My Array' procedure in the wiki to trust all devices. I then started the array, which warned me the parity drive would be overwritten. Unraid forced a parity rebuild and that is currently happening: data is being written to the parity disk at 152.9 MB/sec and I'm a few percent complete. So things are looking good now. The AOC-SASLP-MV8 controllers use Marvell's 88SE6480 Serial ATA host controller, and all the negative comments about Marvell controllers seem warranted. Save yourself the headache and get rid of them before they bite you.
  7. I am running Unraid 6.8.1. I think I've run into a situation where I might be royally screwed. Any assistance is appreciated. I was in the process of replacing a 4 TB HDD with a 10 TB HDD. I powered down the array, swapped out the physical hard drive, and put in the new one. I powered up and all devices were present except for the intentionally missing disk 6. Then I changed disk 6 to point to the new HDD /dev/sdo (same as before the swap). Unraid informed me that it would erase all data and I proceeded. According to the logs, before Unraid could even format disk 6 (sdo), disk 7 (sdh) threw errors and the array failed to start. A kernel panic then ensued, but I think that was from the ATA error. I suspect the Marvell SAS driver has the issues so many others have experienced; I'd never had a problem with it until now. So I shut down Unraid, pointed /dev/sdo back at the previous 4 TB HDD, and booted. The BIOS sees all the devices.

     Here is the entry where the array knows I'm missing disk 6:

       2020-01-21T19:19:56-07:00 nas1 kernel: md: import disk6: (sdo) WDC_WD100EFAX-68LHPN0_JEKAM33N size: 9766436812
       2020-01-21T19:19:56-07:00 nas1 kernel: md: import_slot: 6 wrong

     It imports all the disks:

       2020-01-21T19:20:06-07:00 nas1 kernel: md6: running, size: 9766436812 blocks
       2020-01-21T19:20:06-07:00 nas1 kernel: md7: running, size: 9766436812 blocks

     When Unraid tries to mount the disk 7 (md7) filesystem it blows up:

       2020-01-21T19:20:11-07:00 nas1 kernel: sas: sas_ata_task_done: SAS error 8a
       2020-01-21T19:20:12-07:00 nas1 kernel: sas: Enter sas_scsi_recover_host busy: 1 failed: 1
       2020-01-21T19:20:12-07:00 nas1 kernel: sas: ata7: end_device-7:0: cmd error handler
       2020-01-21T19:20:12-07:00 nas1 kernel: sas: ata7: end_device-7:0: dev error handler
       2020-01-21T19:20:12-07:00 nas1 kernel: sas: ata8: end_device-7:1: dev error handler
       2020-01-21T19:20:12-07:00 nas1 kernel: ata7.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x6
       2020-01-21T19:20:12-07:00 nas1 kernel: ata7.00: failed command: READ DMA EXT
       2020-01-21T19:20:12-07:00 nas1 kernel: ata7.00: cmd 25/00:08:98:ed:ee/00:00:e8:00:00/e0 tag 18 dma 4096 in
       2020-01-21T19:20:12-07:00 nas1 kernel: res 01/04:00:a7:97:1a/00:00:e9:00:00/40 Emask 0x12 (ATA bus error)
       2020-01-21T19:20:12-07:00 nas1 kernel: sas: ata9: end_device-7:2: dev error handler
       2020-01-21T19:20:12-07:00 nas1 kernel: ata7.00: status: { ERR }
       2020-01-21T19:20:12-07:00 nas1 kernel: ata7.00: error: { ABRT }
       2020-01-21T19:20:12-07:00 nas1 kernel: sas: ata10: end_device-7:3: dev error handler
       2020-01-21T19:20:12-07:00 nas1 kernel: ata7: hard resetting link
       2020-01-21T19:20:12-07:00 nas1 kernel: sas: ata11: end_device-7:4: dev error handler
       2020-01-21T19:20:12-07:00 nas1 kernel: sas: ata12: end_device-7:5: dev error handler
       2020-01-21T19:20:12-07:00 nas1 kernel: sas: ata13: end_device-7:6: dev error handler
       2020-01-21T19:20:12-07:00 nas1 kernel: sas: sas_ata_task_done: SAS error 8a

     smartctl thinks the disks are okay. Now I have disk 6 (sdo) back, but Unraid says it is a new device and has it disabled, and disk 7 is not found. Thus I have two failed devices and I am toast. The array will not start and will not rebuild from parity. Disk 6 is in perfect condition with all data still intact, but I don't know how to get Unraid to trust it so I can replace the failed disk 7 (sdh) with a new one and rebuild disk 7 from parity. My next step is to take out /dev/sdh and look at its filesystem on another host (see the disk-check sketch after this list).

     I would like to get the array back to a working state first and then replace the controllers later (I first need to buy some). Are there any suggestions on how I might escape this quandary? Thanks in advance. nas1-diagnostics-20200121-2035.zip
  8. I have updated to the latest stable 1.2.12 and it has been working for a few hours without issue. Thanks. I need more time and bravery to try going to version 1.4.2. (I suppose it's easy to undo since it's Docker, but I'm in the middle of a parity rebuild, so I don't want to risk interrupting that until it is done.) I'm hoping the Dockerfile gets updated soon so you can do a build like you were doing previously. Thanks.
  9. Geez, I feel kind of special. I should have time in the next couple of days to test. I'll report back.
  10. @spikhalskiy Is there any chance you could find some time to update your docker image to at least version 1.4.0.1? It looks like 1.4.2 is the most recent release. This version or higher contains a fix for mDNS that I would benefit from. Thank you for your consideration.
  11. By default it will join a network as a client (see the docker exec sketch after this list). This image contains ZeroTierOne: https://github.com/zerotier/ZeroTierOne ZeroTier controllers (the same thing that runs my.zerotier.com) require a lot more configuration, and you will also need additional firewall ports opened for the controller to work. See https://github.com/zerotier/ZeroTierOne/tree/master/controller for more information. You can view the template for this image here: https://raw.githubusercontent.com/Spikhalskiy/docker-templates/master/zerotier.xml
  12. @Dmitry Spikhalskiy I don't have a specific fix in either 1.2.8 or 1.2.10 that I need. Everything is working fine. I was just hoping to maintain the same version across all my zerotier installs. If you are dependent upon an upstream source that's totally cool. Less work for you. Thanks for the quick reply.
  13. First, thanks. I love ZeroTier, and I appreciate your efforts to build and maintain this Docker image. In your CLI example, ./zerotier-cli listnetwork should be ./zerotier-cli listnetworks, per the help output:

       Available commands:
         info          - Display status info
         listpeers     - List all peers
         listnetworks  - List all networks
         ...

     Second, any chance you can find some time to upgrade the version to 1.2.8? Great work, Dmitry.
  14. A dark theme would be a very welcome addition. I actively avoid this forum as it really hurts my eyes.
  15. Oh man, back in the 1.5 TB days Seagate was truly awful. Thousands upon thousands of drives were returned. The firmware updates took several versions to stabilize, and you had to apply them through a raw SATA interface via a bootable DOS floppy. I had guys pulling drives out of arrays for weeks to update. Up to and including 4 TB I only purchased WDC; I would never let anyone buy a Seagate of any kind below 6 TB. The 4 TB WDC Reds are fantastic. I hated Seagate. However, since that time Seagate's QA has improved dramatically. In the 6 and 8 TB Enterprise/Red class I only buy Seagate now. I don't know about Blues/Greens in those sizes; the WDC might be better, but I don't own any. WDC seems less reliable to me in the 6/8 TB Reds. The Backblaze report has lots of interesting stats: https://www.backblaze.com/blog/hard-drive-failure-rates-q3-2016/
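
A minimal sketch for item 2 above (hunting down whatever runs at 04:00 and catching oops/oom evidence). It assumes you can still ssh in as root; the remote syslog address 192.168.1.50 is a placeholder, not something from the post:

    # list anything scheduled around 04:00 (system crontab, cron.d and per-user crontabs)
    cat /etc/crontab
    grep -r "" /etc/cron.d/ /var/spool/cron/crontabs/ 2>/dev/null

    # list running containers; each one may have its own internal scheduler
    docker ps --format '{{.Names}}'

    # once the system is in a bad state, look for kernel oops / out-of-memory events
    dmesg -T | grep -iE 'oops|oom|killed process|blocked for more than'

    # ship syslog to another box so the evidence survives a crash
    # (Unraid also has Settings -> Syslog Server; this just tests the path by hand)
    logger -n 192.168.1.50 -P 514 "remote syslog test from $(hostname)"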
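
A sketch for item 3 above (the Firefox CSS tweak), assuming a Linux profile under ~/.mozilla/firefox; the profile directory name is a placeholder, and on newer Firefox releases you may also have to set toolkit.legacyUserProfileCustomizations.stylesheets to true in about:config before userContent.css is read:

    # replace xxxxxxxx.default with your actual Firefox profile directory
    PROFILE=~/.mozilla/firefox/xxxxxxxx.default
    mkdir -p "$PROFILE/chrome"
    {
      echo '/* force readable colors on native form widgets under dark desktop themes */'
      echo 'input, textarea, select {'
      echo '  color: black !important;'
      echo '  background-color: white !important;'
      echo '}'
    } >> "$PROFILE/chrome/userContent.css"
    # restart Firefox for the change to take effect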
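
A sketch for item 4 above (verifying a copy by comparing checksums of two directory trees); /mnt/disk1/data and /mnt/user/backup/data are placeholder paths:

    # checksum every file under each tree, with paths sorted so the two lists line up
    cd /mnt/disk1/data       && find . -type f -print0 | sort -z | xargs -0 md5sum > /tmp/source.md5
    cd /mnt/user/backup/data && find . -type f -print0 | sort -z | xargs -0 md5sum > /tmp/copy.md5

    # any file that differs, or exists on only one side, shows up here
    diff /tmp/source.md5 /tmp/copy.md5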
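
A sketch for item 5 above (running sivel's speedtest-cli from the Unraid shell). It assumes a Python interpreter is available on the box and that the script still lives at this raw.githubusercontent.com path:

    # fetch the single-file speed test script and run it against the nearest server
    wget -O /tmp/speedtest.py https://raw.githubusercontent.com/sivel/speedtest-cli/master/speedtest.py
    python3 /tmp/speedtest.py --simple   # prints ping, download and upload only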
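
A sketch for item 7 above (checking the pulled disk 7 on another Linux host before doing anything destructive); /dev/sdX and the partition number are placeholders for however the other host enumerates the drive:

    # SMART health, attributes and error log for the suspect drive
    smartctl -a /dev/sdX

    # read-only check of the XFS data partition; -n means no modifications are made
    xfs_repair -n /dev/sdX1

    # optionally mount it read-only and inspect the data
    mkdir -p /mnt/check
    mount -o ro /dev/sdX1 /mnt/check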
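
A sketch for item 11 above (using the image as a plain client); the container name zerotier is a placeholder for whatever you named it, and the network ID comes from your own controller or my.zerotier.com account:

    # join a ZeroTier network from inside the running container
    docker exec zerotier zerotier-cli join <your-16-digit-network-id>

    # then authorize the new member in your controller (e.g. at my.zerotier.com) and confirm:
    docker exec zerotier zerotier-cli info
    docker exec zerotier zerotier-cli listnetworks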