gdeyoung

Everything posted by gdeyoung

  1. Can someone give me an example of what to put in the host 5 field in place of /mnt/disks? The mount path I'm using is /mnt/disks/SSD. I'm wrestling with the same error. (See the bind-mount sketch after this list for what that field maps to.)
  2. Thanks for the reply. I have been working on this and running into a few errors. I'm trying to empty the md6 drive (the one with the BTRFS errors) so I can reformat it, but the metadata corruption is bad enough that the copy fails, so I can't empty the drive. I have two copies of the data on other servers; I just want to reformat the drive and sync the data back (see the rsync sketch after this list). How do I reformat without triggering a rebuild on a drive I can't empty?
  3. nasserver-diagnostics-20190115-1127.zip As requested
  4. Keep getting recurring errors in my system log:
     Jan 15 10:06:19 NASserver kernel: BTRFS critical (device md6): corrupt leaf, slot offset bad: block=1187753492480, root=1, slot=159
     Jan 15 10:06:19 NASserver kernel: BTRFS critical (device md6): corrupt leaf, slot offset bad: block=1187753492480, root=1, slot=159
     And then this one repeats every 10 seconds:
     Jan 15 10:53:36 NASserver kernel: pcieport 0000:00:03.2: AER: Corrected error received: id=0000
     Jan 15 10:53:36 NASserver kernel: pcieport 0000:00:03.2: PCIe Bus Error: severity=Corrected, type=Data Link Layer, id=001a(Transmitter ID)
     Jan 15 10:53:36 NASserver kernel: pcieport 0000:00:03.2: device [1022:1453] error status/mask=00001000/00006000
     Jan 15 10:53:36 NASserver kernel: pcieport 0000:00:03.2: [12] Replay Timer Timeout
     Not sure if a flaky disk (md6) has been causing the stability issues. I need help interpreting what is happening with the system (a quick way to count these recurring messages is sketched after this list). Before I noticed the errors in the log this morning, I was attempting to upgrade the system to 6.6.6 last night; during the night it stopped disk 6/md6, made it read-only, and really screwed up the system. I backed off to 6.5.3 this morning and it is all stable again. This is not the first issue this system has had with 6.6.x updates: previous 6.6.x upgrades couldn't stay up without the shares disappearing after 24 hours of uptime, while on 6.5.x I get months of stable uptime.
  5. I had the same thing with a white label 8TB shucked drive. It was the 3.3V issue on the shucked drive; using a 4-pin Molex to SATA power adapter solved it. The drive does not initialize with a normal SATA power connector. It is the way WD locks the drives for people who shuck them.
  6. I have the same problem on a beta 21 PC with an AMD config:
     Gigabyte Technology Co., Ltd. - 990FXA-UD3
     CPU: AMD FX-8350 Eight-Core @ 4000 MHz
     HVM: Enabled
     IOMMU: Enabled
     Cache: 384 kB, 8192 kB, 8192 kB
     Memory: 24576 MB (max. installable capacity 32 GB)
     Network: bond0: fault-tolerance (active-backup), mtu 1500
     eth0: 1000Mb/s, Full Duplex, mtu 1500
     Kernel: Linux 4.4.6-unRAID x86_64
     OpenSSL: 1.0.2g
     Exact same symptoms. I end up having to hard reset the box since it will not soft reset from the command line. My NIC is a Realtek RTL8111E chip (10/100/1000 Mbit).
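
On the host path question in item 1: that field is just the host side of a Docker bind mount, so /mnt/disks/SSD on the host gets mapped to whatever container path the template pairs it with. A minimal sketch with the Docker SDK for Python; the alpine image and the /data container path are placeholder assumptions, not anything from the original post:

    import docker  # pip install docker

    client = docker.from_env()

    # Bind the host mount point (/mnt/disks/SSD) to a path inside the
    # container. "/data" and the image are placeholders; use whatever
    # container path the template expects.
    output = client.containers.run(
        "alpine:latest",
        command="ls /data",
        volumes={"/mnt/disks/SSD": {"bind": "/data", "mode": "rw"}},
        remove=True,
    )
    print(output.decode())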
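For the copy-back step in item 2, once the drive has been reformatted and is mounted again, the sync can be as simple as shelling out to rsync. A rough sketch, assuming the backup copy sits on a host called backupserver reachable over SSH and the emptied disk comes back as /mnt/disk6 (both names are placeholders):

    import subprocess

    # Placeholder source and destination; adjust the host, share and disk paths.
    SOURCE = "backupserver:/mnt/user/media/"   # copy held on the other server
    DEST = "/mnt/disk6/media/"                 # freshly formatted disk

    # -a preserves permissions and timestamps, -v is verbose,
    # --progress prints per-file transfer status.
    subprocess.run(["rsync", "-av", "--progress", SOURCE, DEST], check=True)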
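To put a number on how often the messages in item 4 recur before filing a report, a quick tally over the syslog is enough; a minimal sketch, assuming the log lives at /var/log/syslog:

    # Count the two recurring kernel messages from item 4 in the system log.
    patterns = {
        "BTRFS critical (device md6)": 0,
        "AER: Corrected error received": 0,
    }

    with open("/var/log/syslog", errors="replace") as log:
        for line in log:
            for pattern in patterns:
                if pattern in line:
                    patterns[pattern] += 1

    for pattern, count in patterns.items():
        print(f"{count:6d}  {pattern}")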