JorgeB
Moderators · Posts: 61,742 · Days Won: 651
Everything posted by JorgeB

  1. USB devices are not recommended for the array/pools, as the IDs can change with a different kernel, but you can do a new config and check "parity is already valid".
  2. See if you can at least get the syslog:
     cp /var/log/syslog /boot/syslog.txt
     If not, enable the syslog server and post that after the next crash.
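     The copy step above can be sketched as a small helper; save_syslog is a hypothetical name, and the defaults assume a typical Unraid layout where the flash drive is mounted at /boot:

     ```shell
     # Hedged sketch: copy the live syslog to the flash drive so it survives a reboot.
     # save_syslog is a hypothetical helper; adjust the default paths if needed.
     save_syslog() {
       local src="${1:-/var/log/syslog}"
       local dst="${2:-/boot/syslog.txt}"
       cp "$src" "$dst"
     }
     ```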
  3. Not really, diags might show something.
  4. Try this first: https://forums.unraid.net/topic/118286-nvme-drives-throwing-errors-filling-logs-instantly-how-to-resolve/?do=findComment&comment=1165009
  5. The syslog is just after rebooting, do you have the one from the syslog server?
  6. Set a couple of the 8TB WD drives as parity, then compare the transfer speed to the other WD and Seagate drives in the array; the problem might be just one of them. Some of those perform decently, others are super slow, not sure why.
  7. To get the devices back as you had them originally, you need to remove this part:
     </hostdev>
     <hostdev mode='subsystem' type='pci' managed='yes'>
       <driver name='vfio'/>
       <source>
         <address domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
       </source>
       <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
     </hostdev>
  8. Samba is a little outside my wheelhouse, maybe @dlandon can help.
  9. Start here: https://forums.unraid.net/topic/46802-faq-for-unraid-v6/?do=findComment&comment=819173
  10. Hmm, maybe it's not a checksum error; try to copy the file with cp and post the diags after you get an error.
  11. Yeah, didn't notice before, the device was already part of a pool:
                            Data       Metadata  System
      Id  Path              RAID1      RAID1     RAID1     Unallocated  Total      Slack
      --  --------------    ---------  --------  --------  -----------  ---------  -----
       1  /dev/nvme1n1p1    100.00GiB  1.00GiB   32.00MiB  852.84GiB    953.87GiB      -
       2  /dev/nvme0n1p1    100.00GiB  1.00GiB   32.00MiB  830.48GiB    931.51GiB      -
      No wonder it was busy.
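      Output like the table above comes from the btrfs tools; the mount point below is an example, adjust it to the pool in question:

      ```shell
      # Hedged sketch: inspect which devices belong to a btrfs pool and how
      # space is allocated. /mnt/cache is a typical Unraid pool mount point.
      btrfs filesystem show                  # list btrfs filesystems and member devices
      btrfs filesystem usage -T /mnt/cache   # per-device allocation table, as above
      ```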
  12. Something is using that device and not allowing it to be partitioned, but if a reboot doesn't help I don't know what it could be.
  13. If all dockers/VMs are stopped it must be a local/LAN transfer; if you can't find it, disable SMB to see if it stops.
  14. It's normal to get an i/o error if btrfs detects the file is corrupt, so the user knows there's a problem. You can recover the file with btrfs restore, but it will not check data integrity.
  15. Feb 7 10:51:59 UNRAID1 emhttpd: shcmd (729): /sbin/wipefs -a /dev/nvme1n1
      Feb 7 10:51:59 UNRAID1 root: wipefs: error: /dev/nvme1n1: probing initialization failed: Device or resource busy
      Device is reporting busy, reboot and try again.
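      If a reboot is inconvenient, something like the following may help identify what is holding the device; the device name is an example, replace it with the busy one:

      ```shell
      # Hedged sketch: find what is keeping /dev/nvme1n1 busy before retrying wipefs.
      ls /sys/block/nvme1n1/holders/   # kernel-level holders (md, LVM, etc.), if any
      fuser -v /dev/nvme1n1            # processes that have the device node open
      ```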
  16. The problem was that the VM was still passing through that device; you'd need to edit the VM template/XML and correct that.
  17. There's something writing to disk9.
  18. Try switching to ipvlan (Settings -> Docker Settings -> Docker custom network type -> ipvlan (advanced view must be enabled, top right)).
  19. This is a very bad idea. This error is fatal, it means some writes were lost. It can happen if a storage device lies about flushing its write cache; this is usually a drive (or controller) firmware problem, most likely the controller in this case. Btrfs restore (option #2 here) is the best bet to try and recover some data, then the device will need to be formatted and the data restored.
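      A minimal btrfs restore invocation might look like this; the source device and destination are placeholders and must point at the damaged pool member and a separate, healthy filesystem:

      ```shell
      # Hedged sketch: pull readable files off a damaged btrfs device without mounting it.
      # Note: restore does not verify checksums, so recovered files may still be corrupt.
      btrfs restore -v /dev/sdX1 /mnt/disk2/restored/
      ```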
  20. Post the diagnostics after the problem, if you can't get them enable the syslog server and post that.
  21. Please use the existing plugin support thread:
  22. Both parity disks (and disk4) are SMR, and some units of that specific model are known to have particularly bad performance; if it's a new array you can retest after assigning only the other CMR drives.