
JorgeB

Moderators
  • Posts: 67,652
  • Joined
  • Last visited
  • Days Won: 707

Everything posted by JorgeB

  1. Yes, most likely a hardware issue, though it might not be easy to identify the actual culprit. I assume you're using ECC RAM, so the next suspects would be a board/CPU or controller issue.
  2. SMART is still available for disabled disks, but parity dropped offline, so there's no SMART for it. That looks more like a connection problem, so check/replace the cables to rule them out and post new diags.
  3. Rebuilding from parity will result in exactly the same content you're seeing on the emulated disk; if the emulated filesystem is very damaged, it might be better to re-sync parity with the old disk, or use it to copy the data back to the array if you prefer to rebuild to a spare. The first thing to do is to confirm the old disk still mounts and its contents are OK. To do that, first unassign that disk from the array, then start the array (the emulated disk will remain as is for now), then change the xfs UUID on the old, now unassigned disk with: xfs_admin -U generate /dev/sdX1 (replace X with the correct letter). After that, use the UD plugin to mount the old disk and check the data. If all looks fine, do either option mentioned above. Feel free to ask if there are any doubts.
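     The steps above can be sketched as shell commands (a sketch only; /dev/sdX1 and the mount point are placeholders — confirm the actual device letter in the UD plugin before running anything):

     ```shell
     # Sketch only -- /dev/sdX1 is a placeholder for the old disk's partition.
     # 1. With the disk unassigned and the array started, give the old disk a
     #    new xfs UUID so it no longer clashes with the emulated disk:
     xfs_admin -U generate /dev/sdX1

     # 2. Mount it read-only (the UD plugin does the equivalent from the GUI)
     #    and check the contents:
     mkdir -p /mnt/olddisk
     mount -o ro /dev/sdX1 /mnt/olddisk
     ls /mnt/olddisk
     ```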
  4. No, it just means you ran it with the no-modify flag (-n); run it again without it.
  5. You could use the User Scripts plugin. That, or copy it on top of the VM vdisk, replacing it.
  6. There's no GUI support for now, but you can do it manually or with a script; more info here: https://forums.unraid.net/topic/51703-vm-faq/?do=findComment&comment=523800
  7. No, it would be slower, since parity would need to be updated concurrently for all of them. That's about the max speed you can get with gigabit. For large writes, your array disks with turbo write enabled should be able to keep up with gigabit, so it's better to just transfer directly to the array.
  8. First try to repair the filesystem on the emulated disk6; if it's successfully repaired and all the data looks to be there, you can rebuild on top. If not, you can use a spare or do a new config instead.
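     A minimal sketch of the repair step, assuming xfs (the same can be done from the GUI with Check Filesystem; /dev/md6 is an assumption — verify the actual md device for disk6 on your system, and start the array in maintenance mode first so the disk is not mounted):

     ```shell
     # Dry run first: -n reports problems without modifying anything.
     xfs_repair -n /dev/md6

     # If the dry run looks sane, repair for real:
     xfs_repair /dev/md6

     # If it asks for -L, use that only as a last resort, since it
     # zeroes the metadata log and can lose recent changes.
     ```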
  9. Unraid already does something similar when, for example, you create a new pool, so you might be able to use the same thing:

     May 24 10:37:40 Tower11 emhttpd: shcmd (5636): mkdir -p /mnt/wd
     May 24 10:37:42 Tower11 emhttpd: shcmd (5637): blkid -t TYPE='xfs' /dev/sdw1 &> /dev/null
     May 24 10:37:42 Tower11 emhttpd: shcmd (5637): exit status: 2
     May 24 10:37:42 Tower11 emhttpd: shcmd (5638): blkid -t TYPE='btrfs' /dev/sdw1 &> /dev/null
     May 24 10:37:42 Tower11 emhttpd: shcmd (5639): mount -t btrfs -o noatime,space_cache=v2 /dev/sdw1 /mnt/wd
  10. That's a known issue when running full reiserfs filesystems; convert them all to xfs and it should help.
  11. May 23 04:03:31 NNC kernel: BTRFS error (device sde1): block=112534290432 write time tree block corruption detected
      May 23 04:03:31 NNC kernel: BTRFS: error (device sde1) in btrfs_commit_transaction:2377: errno=-5 IO failure (Error while writing out transaction)
      May 23 04:03:31 NNC kernel: BTRFS info (device sde1): forced readonly

      The cache filesystem is corrupt; your best bet is to back up and re-format the cache.
  12. The same is happening to other users with the same controller. You should improve the cooling, or they will also overheat when you need to run a scrub or some other high-i/o operation.
  13. I'm not aware of spin down issues caused by being a pool; I have several pools myself and they all spin down. Are you using a different controller for the pool devices? You can also post the diags.
  14. Iperf just tests network bandwidth, no drives are involved, so it's still likely a network related issue.
  15. Does the same happen if you boot in safe mode?
  16. Just to expand, if you were worried about the raw read and seek error rates, those are normal for Seagate drives: https://forums.unraid.net/topic/86337-are-my-smart-reports-bad/?do=findComment&comment=800888
  17. https://forums.unraid.net/topic/70529-650-call-traces-when-assigning-ip-address-to-docker-containers/ See also here: https://forums.unraid.net/bug-reports/stable-releases/690691-kernel-panic-due-to-netfilter-nf_nat_setup_info-docker-static-ip-macvlan-r1356/ P.S: also need to fix the filesystem on disk1.
  18. The file is corrupt; you need to delete it or restore from backups. It's also a good idea to run memtest.
  19. This can indeed complicate things: if the luks headers are damaged, the filesystem can never mount. I can't really help with encryption since I don't use it, but there are some threads here about rebuilding the encryption headers; look for those.
  20. If you mean a single transfer using both NICs it would need SMB multichannel working, and AFAIK no one has been able to make it work with multiple NICs and Unraid.
  21. Docker/VMs are known to write constantly, though it's much better with v6.9.x; it also helped if the SSD was reformatted with the new partition layout. A single reading is basically meaningless: check SMART for the total TBW, then check again 24h later to see if it is something to worry about.
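     To put a number on that 24h check, one way (a sketch; attribute 241 / Total_LBAs_Written and 512-byte LBAs are assumptions that vary by SSD model) is to take two smartctl readings a day apart and convert the delta:

     ```shell
     # Raw Total_LBAs_Written values read 24h apart, e.g. with:
     #   smartctl -A /dev/sdX | grep Total_LBAs_Written
     # The numbers below are made-up example readings.
     LBAS_DAY1=195312500000
     LBAS_DAY2=195517578125

     # Assuming 512-byte LBAs, convert the 24h delta to gigabytes.
     DELTA_GB=$(( (LBAS_DAY2 - LBAS_DAY1) * 512 / 1000000000 ))
     echo "${DELTA_GB} GB written in 24h"
     ```

     A few hundred GB/day on a consumer SSD is worth investigating; a few GB/day usually isn't.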
  22. Enable this then post that log after a crash.