Volkerball

Posts (7)

  1. I see. I am guessing my only option right now would be to mount the ZFS pool temporarily through the CLI, transfer the files to the Unraid array, format the disks to ZFS through the Unraid GUI, recreate the datasets, and transfer the files back? Rough sketch of what I mean below.
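     A rough sketch of that round trip, assuming the pool is named "tank" and there is a "backup" share on the array with enough free space (both names are placeholders):

        zpool import -f tank                       # one-off import via the CLI
        rsync -avX /mnt/tank/ /mnt/user/backup/    # copy everything onto the array
        zpool export tank
        # ...reformat the disks as a ZFS pool in the Unraid GUI, then:
        zfs create tank/data                       # recreate the datasets
        rsync -avX /mnt/user/backup/ /mnt/tank/    # copy the files back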
  2. I have a zpool with a single mirror vdev, created on TrueNAS Scale, that I want to import into Unraid. Unfortunately I was not able to do it through the GUI, so I assume I need to do it through the CLI. What is the best way to import the zpool and mount it persistently, so that it mounts automatically when I start the array? As far as I am aware, editing fstab won't work on Unraid. I did import and mount the pool once in rc4, but it did not survive a reboot (see the sketch below for the kind of commands I mean). Also, are there any downsides I should be aware of when moving appdata and docker.img from a btrfs SSD pool to a ZFS SSD pool?
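     For reference, a minimal sketch of the CLI import I have in mind ("tank" is a placeholder pool name; -f is needed because the pool was last imported by a different host, the TrueNAS box):

        zpool import            # list pools the system can see but has not imported
        zpool import -f tank    # force-import past the foreign hostid
        zfs list -r tank        # confirm the datasets and their mountpoints
        zpool export tank       # clean export before stopping the array

     Until the GUI can import it natively, the import line could presumably go in an "At Startup of Array" script under the User Scripts plugin, or in /boot/config/go, though both are workarounds rather than an official path.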
  3. Does 6.12 rc3 still require at least a single device in the "main Unraid array"? Can I go the "ZFS pool only" route?
  4. OK, I probably owe an update. I frankensteined my case (an NSC-400 copy, hence no room to accommodate PCIe cards) to fit the PCIe-SATA card directly, taking the PCIe riser out of the equation (you can never be sure about Chinese engineering). That did not help, so I ordered another ASMedia ASM1061 card with a different PCB design (SATA ports perpendicular to the motherboard rather than parallel, as on the previous one) just to be sure, and it has worked fine for almost 48 hours, so I am 90% sure that solved the problem. Thank you all for the suggestions.
  5. The Docker containers went offline faster than I expected (maybe because I did not re-create docker.img this time), but the error logs look a bit different this time. Uploading diagnostics again. bokunonas-diagnostics-20210612-1233.zip
  6. I checked the connections and even zip-tied the PCIe riser connections. I have not tried another SATA cable though (is that a common problem?). I cannot check immediately whether changing the cable helps, since the problem usually occurs 3-4 days after a reboot. It is kind of bizarre that everything works fine for several days and then errors suddenly occur without an obvious (at least to me) trigger.

     UPD. Powered off, swapped the SATA cable, powered on, started the array. I have no idea yet whether that solved the problem; attaching my diagnostics file anyway. bokunonas-diagnostics-20210612-0528.zip
  7. I have an SSD as a cache drive connected to an ASM1061 SATA controller, which is connected to the motherboard's PCIe 2.0 x1 slot through a PCIe riser. I get errors mostly at night, usually 3-4 days after a reboot, and they leave some Docker containers broken (a simple docker restart does not work; I have to reboot the NAS, re-create docker.img, etc.). I previously asked for advice on Reddit and tried the suggestions there, but apparently they did not fix it. For example, I get this sort of error repeating continuously until I restart the NAS:

     Jun 12 04:39:55 BokunoNAS kernel: blk_update_request: I/O error, dev sdc, sector 7410400 op 0x0:(READ) flags 0x0 phys_seg 4 prio class 0
     Jun 12 04:39:55 BokunoNAS kernel: BTRFS error (device sdc1): bdev /dev/sdc1 errs: wr 39, rd 51253, flush 0, corrupt 0, gen 0
     Jun 12 04:39:55 BokunoNAS kernel: BTRFS error (device sdc1): bdev /dev/sdc1 errs: wr 39, rd 51254, flush 0, corrupt 0, gen 0
     Jun 12 04:39:55 BokunoNAS kernel: BTRFS error (device sdc1): bdev /dev/sdc1 errs: wr 39, rd 51255, flush 0, corrupt 0, gen 0
     Jun 12 04:39:55 BokunoNAS kernel: BTRFS error (device sdc1): bdev /dev/sdc1 errs: wr 39, rd 51256, flush 0, corrupt 0, gen 0
     Jun 12 04:39:55 BokunoNAS kernel: BTRFS warning (device sdc1): direct IO failed ino 29048 rw 0,0 sector 0x7112e8 len 0 err no 10
     Jun 12 04:39:55 BokunoNAS kernel: BTRFS warning (device sdc1): direct IO failed ino 29048 rw 0,0 sector 0x7112f0 len 0 err no 10
     Jun 12 04:39:55 BokunoNAS kernel: BTRFS warning (device sdc1): direct IO failed ino 29048 rw 0,0 sector 0x7112f8 len 0 err no 10
     Jun 12 04:39:55 BokunoNAS kernel: BTRFS warning (device sdc1): direct IO failed ino 29048 rw 0,0 sector 0x711300 len 0 err no 10
     Jun 12 04:39:55 BokunoNAS kernel: blk_update_request: I/O error, dev loop2, sector 755936 op 0x0:(READ) flags 0x1000 phys_seg 4 prio class 0
     Jun 12 04:39:55 BokunoNAS kernel: BTRFS error (device loop2): bdev /dev/loop2 errs: wr 9, rd 15817, flush 0, corrupt 0, gen 0

     I also attached my diagnostics. I would really appreciate any help, since this bug makes me hesitant to fully use my newly built NAS. bokunonas-diagnostics-20210612-0424.zip

     UPD. "Fix Common Problems" plugin gives:

     As far as I remember, I did not have this problem when I used the motherboard's SATA ports, so I am guessing something is wrong with either the SATA controller or even the PCIe riser(?)... I could ditch the SATA controller for a while, but in the end I would like to have at least 5 SATA ports (the motherboard only has 4).
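     To check whether a cable, riser, or controller swap actually helped, the error counters from the log above can be re-read after a few days of uptime. A minimal sketch, assuming /dev/sdc from the log above and /mnt/cache, Unraid's usual cache mountpoint:

        btrfs dev stats /mnt/cache     # per-device write/read/flush/corruption counters
        btrfs dev stats -z /mnt/cache  # same, but zero the counters after printing
        smartctl -a /dev/sdc           # a rising UDMA CRC error count points at cabling
        dmesg | grep -iE 'sdc|btrfs'   # any new kernel I/O errors since boot

     If the counters stay at zero after a swap, the errors above were a link problem rather than a failing SSD.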