Everything posted by JorgeB

  1. Disks are SMR, but if I understood correctly the problem is with read speeds? Try reading from a different disk using a disk share.
  2. Using the console, get the syslog after array start:
     cp /var/log/syslog /boot/syslog.txt
     Then attach it here.
  3. AFAIK there's no bug; the issue affects users that weren't using the optimal settings, and earlier releases still accepted them, while now the correct ones are required. You can take a look here for some more info: https://forums.unraid.net/topic/123901-plex-issues-upon-upgrade-to-6101/?do=findComment&comment=1138715 More specifically about rclone and Sonarr/Radarr/Plex: https://forums.unraid.net/topic/75436-guide-how-to-use-rclone-to-mount-cloud-drives-and-play-files/page/115/#comment-1140504
  4. Try replacing/swapping cables first, but in my experience those Kingston SA400 SSDs are very low quality; even with no other issues they tend to become super slow during reads with use, so if the problem persists I would just replace it.
  5. Yeah, when possible you should always use the GUI settings instead of a custom go file like you had; of course, sometimes the go file is needed for settings that aren't yet available in the GUI, but the problem is that it can cause conflicts later when those settings are introduced.
  6. Negative, that's just the attribute type. It doesn't have 750GB; the default profile is raid1 (mirror), so it can only mirror the smallest drive, look at the used and free space. If you want to use the full capacity you can convert the pool to the single profile, but you will lose the redundancy.
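     If you do decide to convert, a minimal sketch, assuming the pool is mounted at /mnt/cache (adjust for your pool name):
     btrfs balance start -f -dconvert=single -mconvert=single /mnt/cache
     The -f flag is needed because converting metadata from raid1 to single reduces redundancy.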
  7. Try this: https://linux.die.net/man/1/file
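     For example, to identify the type of an unknown file (the path here is just an example):
     file /mnt/user/downloads/unknown_file
     It reports the detected file type based on the file's contents rather than its extension.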
  8. Assuming you mean the NVMe device, and since the data won't fit on just one of the other cache devices, you'll need to remove the other 2 members manually. With the array running, type in this order:
     btrfs balance start -f -mconvert=single /mnt/cache
     btrfs dev del /dev/sdb1 /dev/sdc1 /mnt/cache
     When done post new diags.
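     If you want to confirm before posting, a quick check, assuming the pool is still mounted at /mnt/cache:
     btrfs filesystem show /mnt/cache
     Only the NVMe device should remain listed.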
  9. ECC support with Ryzen is mostly unofficial, and it's difficult to see if it really works since most boards don't report if any errors are found; run memtest, and if no errors are found run a scrub on the pool.
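     A minimal sketch for the scrub, assuming the pool is mounted at /mnt/cache:
     btrfs scrub start /mnt/cache
     btrfs scrub status /mnt/cache
     The second command shows progress and any corruption errors found.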
  10. I would suggest booting with a different flash drive with a stock Unraid config, no key needed; if it boots correctly, backup and recreate your flash drive, restore only the bare minimum from the config backup, like your key, super.dat and pools folder, and the docker user templates, then either reconfigure the rest of the server or restore the other config files one or a few at a time until you find the culprit.
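      Before recreating, a simple way to back up the config from the console, assuming you have a share to copy it to (the destination path here is only an example):
      cp -r /boot/config /mnt/user/backups/flash-config
      You can then restore individual files from that folder as you test.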
  11. Rename all /boot/config/plugins/*.plg, then start re-enabling them one by one, or a few at a time, until you find the culprit. I believe this one is from the VM backup plugin, but IIRC it's harmless.
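      A minimal sketch for the rename, assuming a .disabled suffix (any rename that changes the .plg extension works):
      cd /boot/config/plugins
      for f in *.plg; do mv "$f" "$f.disabled"; done
      To re-enable a plugin, rename it back to .plg and reboot.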
  12. You didn't correctly remove the devices from the pool, it still has 3 members:
                            Data       Metadata  System
      Id  Path              single     RAID1     RAID1     Unallocated
      --  --------------    ---------  --------  --------  -----------
       1  /dev/nvme0n1p1    849.00GiB  3.00GiB   32.00MiB  1.03TiB
       2  /dev/sdb1         -          2.00GiB   32.00MiB  463.73GiB
       3  /dev/sdc1         -          1.00GiB   -         464.76GiB
      In the end, do you want the current data there to remain on the NVMe device or on the other two?
  13. UDMA CRC errors are usually a bad SATA cable; just acknowledge the attribute and keep monitoring, as long as it doesn't increase it's fine, if it does, replace the SATA cable.
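      If you want to check the current count from the console, a hedged example (replace /dev/sdX with the actual device):
      smartctl -A /dev/sdX | grep -i crc
      The value to watch is attribute 199, UDMA_CRC_Error_Count.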
  14. 41:00.0 SCSI storage controller [0100]: Broadcom / LSI SAS1068E PCI-Express Fusion-MPT SAS [1000:0058] (rev 08)
              Subsystem: Hewlett-Packard Company SAS1068E PCI-Express Fusion-MPT SAS [103c:130b]
      It's a controller limitation, get a newer LSI HBA (92xx or newer) or use a different controller.
  15. Same strange crashing, and btrfs is detecting data corruption; start by running memtest.
  16. Nov 28 04:40:52 BigBoy kernel: FAT-fs (sda1): error, fat_get_cluster: invalid cluster chain (i_pos 402951190)
      Flash drive problems, try chkdsk first; if that doesn't help, backup and recreate the flash drive, and failing that replace it.
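      A hedged example of the chkdsk run, on a Windows machine with the flash drive plugged in (F: is just an assumed drive letter):
      chkdsk F: /f
      The /f switch tells chkdsk to fix any filesystem errors it finds.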
  17. Did safe boot help? The docker service appears to be starting correctly in the posted syslog.
  18. No obvious error that I can see, but try recreating the docker image, it won't hurt.
  19. Having raid1 metadata indicates there's at least one more device that is still part of the pool; please post the diagnostics.
  20. Then try booting in safe mode; if that doesn't help, try booting with a different, stock Unraid flash drive, no key needed, just to see if it's a config issue.
  21. The syslog is just spammed with xfs corruption detected on disk 6; please post the complete diagnostics, but this suggests bad RAM or other kernel memory corruption, so assuming no ECC RAM start by running memtest.
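      Once memtest comes back clean, the disk 6 filesystem will still need to be checked; a read-only sketch, assuming the array is started in maintenance mode and disk 6 maps to /dev/md6 (the device name can differ by Unraid version):
      xfs_repair -n /dev/md6
      Drop the -n to actually repair once you've reviewed what it reports.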