Everything posted by JorgeB

  1. I don't see anything that points to a disk/filesystem issue; besides, the vdisk is on cache, so it can't have anything to do with the upgraded array drive. Since it's an OSX VM, your best bet is to ask for help in one of the guide/support threads for this.
  2. Nothing jumps out before the fs crashed, but the best bet now is to back up, re-format and restore the data.
  3. Yes, the topic is mostly about that, but for example I have the problem on one of my VMs, and only one, despite having 3 on the same device, and no issues with the docker image, which is also on the same device; it's kind of a strange issue.
  4. A disk upgrade shouldn't cause that, but please post the diagnostics; something might be visible there.
  5. Yes, but the drives still need to be formatted.
  6. Any LSI with a SAS2008/2308/3008/3408 chipset in IT mode, e.g., 9201-8i, 9211-8i, 9207-8i, 9300-8i, 9400-8i, etc., and clones like the Dell H200/H310 and IBM M1015; these latter ones need to be crossflashed to IT mode (a crossflash sketch follows after the list).
  7. The problems start due to filesystem corruption on cache; the best bet is to back up, re-format and restore the cache pool data (a rough command outline is sketched below the list).
  8. Yes, but for best results with Unraid make sure it's flashed to IT mode.
  9. Unfortunately, because the syslog is spammed with these:
     Jun 11 20:18:17 YUKI nginx: 2020/06/11 20:18:17 [error] 10423#10423: *1116981 connect() to unix:/var/tmp/Influxdb.sock failed (111: Connection refused) while connecting to upstream, client: 10.0.0.125, server: , request: "GET /dockerterminal/Influxdb/token HTTP/2.0", upstream: "http://unix:/var/tmp/Influxdb.sock:/token", host: "hash.unraid.net:9001", referrer: "https://hash.unraid.net:9001/dockerterminal/Influxdb/"
     Jun 11 20:18:17 YUKI nginx: 2020/06/11 20:18:17 [error] 10423#10423: *1119386 conn
     there's time missing and it didn't catch the start of the problem, but it was likely some controller issue. Despite how it looks, this is usually not a catastrophic failure and it's easy to recover from: it looks like Unraid lost contact with all 5 disks, and when that happens it disables as many disks as there are parity drives; which disks get disabled is a crap-shoot. Still, since one of the disks dropped offline there's no SMART, so power back on, start the array with the emulated disks and post new diags; then, if all looks good, you can either rebuild on top or do a new config and re-sync parity.
  10. That should never be done; they should use the appropriate attribute or just use a higher number, never a reserved attribute. Just uncheck those attributes.
  11. LSI SAS3 HBAs like the SAS3008 models you have can trim if they are in IT mode, but only SSDs with deterministic trim support; SAS2 models currently can't trim any SSD (a quick way to check trim support is sketched after the list).
  12. Still a few months away; at the current pace I estimate hitting 1PB around Halloween, assuming the NVMe device doesn't give up the ghost, since it's already well past its 300TBW rating (a way to track total writes is sketched after the list).
  13. Yes it is; it's not "broken", it's just displaying what btrfs outputs for "btrfs balance status /mountpoint", and that's the output when no balance is running (see the example output after the list).
  14. That's what it looks like. Correct, unless the disk is returning bad data; while this should never happen, it has been known to happen from time to time, though in this case it could only be the parity2 disk, since parity1 was correct.
  15. Docker image is always btrfs, even if it's on an XFS device.
  16. Not impossible, but unlikely. I would try a new SATA cable first, even if it was already replaced once; also try a different SATA port, swapping with another device if needed.
  17. The ATA errors are constant and suggest a SATA cable problem:
      Jun 17 13:02:18 Arcanine kernel: ata1: SError: { UnrecovData BadCRC Handshk }
      And it's not your cache like I assumed, it's an unassigned device:
      Jun 17 13:01:18 Arcanine kernel: ata1.00: ATA-11: SanDisk SDSSDH3250G, 181085804720, X61110RL, max UDMA/133
  18. Cache device dropped offline:
      Jun 17 14:43:14 Arcanine kernel: ata1.00: failed to set xfermode (err_mask=0x40)
      Jun 17 14:43:14 Arcanine kernel: ata1.00: disabled
      Resulting in the next errors, both on the cache filesystem and docker image since it was there:
      Jun 17 14:43:14 Arcanine kernel: BTRFS error (device loop2): bdev /dev/loop2 errs: wr 0, rd 3, flush 0, corrupt 0, gen 0
      Jun 17 14:43:14 Arcanine kernel: BTRFS error (device loop2): bdev /dev/loop2 errs: wr 0, rd 4, flush 0, corrupt 0, gen 0
      Jun 17 14:43:14 Arcanine kernel: BTRFS error (device loop2): bdev /dev/loop2 errs: wr 0, rd 5, flush 0, corrupt 0, gen 0
      Jun 17 14:43:14 Arcanine kernel: BTRFS error (device loop2): bdev /dev/loop2 errs: wr 0, rd 6, flush 0, corrupt 0, gen 0
      Jun 17 14:43:14 Arcanine kernel: BTRFS error (device loop2): bdev /dev/loop2 errs: wr 0, rd 7, flush 0, corrupt 0, gen 0
      Jun 17 14:43:14 Arcanine kernel: BTRFS error (device loop2): bdev /dev/loop2 errs: wr 0, rd 8, flush 0, corrupt 0, gen 0
      Jun 17 14:43:14 Arcanine kernel: BTRFS error (device loop2): bdev /dev/loop2 errs: wr 0, rd 9, flush 0, corrupt 0, gen 0
      Jun 17 14:43:14 Arcanine kernel: BTRFS error (device loop2): bdev /dev/loop2 errs: wr 0, rd 10, flush 0, corrupt 0, gen 0
      Jun 17 14:43:17 Arcanine kernel: BTRFS info (device loop2): no csum found for inode 56871 start 5624741888
      Jun 17 14:43:18 Arcanine kernel: XFS (sdb1): writeback error on sector 244714144
      Jun 17 14:43:18 Arcanine kernel: XFS (sdb1): writeback error on sector 432260600
      Jun 17 14:43:18 Arcanine kernel: XFS (sdb1): writeback error on sector 433854440
      Jun 17 14:43:18 Arcanine kernel: XFS (sdb1): writeback error on sector 123460544
      Jun 17 14:43:18 Arcanine kernel: XFS (sdb1): writeback error on sector 32063328
      Jun 17 14:43:18 Arcanine kernel: XFS (sdb1): writeback error on sector 32065344
      Jun 17 14:43:18 Arcanine kernel: XFS (sdb1): writeback error on sector 34159616
      Jun 17 14:43:18 Arcanine kernel: XFS (sdb1): writeback error on sector 34159712
      Jun 17 14:43:18 Arcanine kernel: XFS (sdb1): writeback error on sector 34159904
  19. Yes, with this: But note that you need to manually change it for existing VMs.
  20. Emulated disk still has a filesystem, and it needs checking.
  21. I believe you can, but it's been a while since I used the plugin.
  22. Yes there is: check the filesystem on disk6 (a command-line sketch is included after the list): https://wiki.unraid.net/Check_Disk_Filesystems#Checking_and_fixing_drives_in_the_webGui
  23. Possibly there's some filesystem corruption; please post the diagnostics.
  24. Diags could give more clues, but this can sometimes help: There's also one report of a similar issue being caused by overheating:
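
A minimal crossflash sketch for item 6, assuming a Dell H200/H310 or IBM M1015 and the LSI sas2flash utility run from a bootable DOS/UEFI stick; the firmware/BIOS file names are only examples, the real files come from the Broadcom/LSI download page for the 9211-8i:

    sas2flash -listall                          # confirm the card is detected and note its SAS address
    sas2flash -o -e 6                           # erase the existing IR/RAID flash
    sas2flash -o -f 2118it.bin -b mptsas2.rom   # flash the IT firmware and, optionally, the boot ROM
    sas2flash -o -sasadd 500605bxxxxxxxxx       # restore the SAS address if the erase wiped it (use the address noted above)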
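
A rough outline of the backup/re-format/restore mentioned in items 2 and 7, assuming a cache pool mounted at /mnt/cache and enough free space on an array disk; the paths are examples only, dockers/VMs should be stopped first, and the actual re-format is done from the Unraid GUI (stop the array, change/erase the pool filesystem, start and format):

    rsync -avX /mnt/cache/ /mnt/disk1/cache_backup/    # copy everything off the pool
    # re-format the pool from the GUI, then copy the data back:
    rsync -avX /mnt/disk1/cache_backup/ /mnt/cache/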
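
For item 11, a quick way to see whether an SSD reports deterministic trim and whether trim actually works through the HBA; /dev/sdX and /mnt/cache are placeholders:

    hdparm -I /dev/sdX | grep -i trim    # look for "Deterministic read ZEROs after TRIM" (or "Deterministic read data")
    fstrim -v /mnt/cache                 # manual trim; it errors out if the device/controller can't pass trim through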
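
For item 12, total writes can be tracked with smartctl; the device name is an example, and the conversion assumes the NVMe-standard unit of 1,000 x 512 bytes per reported "data unit":

    smartctl -A /dev/nvme0
    # "Data Units Written" x 512,000 bytes = total bytes written
    # e.g. 1,953,125,000 data units = roughly 1 PB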
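
For item 13, this is what the GUI is echoing; /mnt/cache is an example mount point:

    btrfs balance status /mnt/cache
    # typical output when no balance is running:
    # No balance found on '/mnt/cache'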
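
A command-line sketch of the filesystem check linked in item 22 (the same idea applies to the emulated disk in item 20), assuming disk6 is XFS and the array is started in maintenance mode; the md device name can differ between Unraid versions:

    xfs_repair -n /dev/md6    # -n = check only, report problems without writing anything
    xfs_repair /dev/md6       # actual repair (it may ask for -L if the log is dirty)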