Everything posted by JorgeB

  1. Unraid starts partitions on sector 2048 for SSDs vs. sector 64 for hard disks, so there's a little less usable space, and you can't directly replace a disk with an SSD of the same capacity. You can, however, create a new array and manually copy the data.
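     If you want to verify this, fdisk can show where a partition starts (a quick check, assuming the device is at /dev/sdX; not from the original post):

     fdisk -l /dev/sdX    # the "Start" column shows sector 64 or 2048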
  2. Correct. A full device write might fix it, though, at least for some time, but if it does, it's difficult to predict for how long.
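     A minimal sketch of what a full device write could look like (this wipes all data; /dev/sdX is a placeholder, and the disk must be out of the array):

     dd if=/dev/zero of=/dev/sdX bs=1M status=progress    # destructive: overwrites the entire device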
  3. And I might be wrong, but I doubt LT considers this issue a high priority, so you might need to live with it for some time.
  4. Yep, that's a big difference, more than I'm used to seeing, but different hardware can produce very different results; it looks like yours is considerably affected by the changes.
  5. You could stop the Docker/VM services, run a 2-minute parity check just to note the speed, and then upgrade back up.
  6. That still seems too slow to be this issue. I would recommend downgrading to v6.7 and doing a quick test; you just need to run a check for a couple of minutes to confirm the starting speed.
  7. Not really, but someone else might have.
  8. It should. With 24 devices and dual parity I could still do 140MB/s on my test server, which is about 12TB per 24 hours. What speed do you get at the start of the check?
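     (The arithmetic: 140 MB/s × 86,400 s/day ≈ 12.1 TB/day.)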
  9. That's expected; the change above is to avoid Unraid crashing, nothing to do with the VMs working or not.
  10. Don't think so, as it works with RAW, and apparently it works for everybody except you, so no idea what the problem could be.
  11. One thing I've remembered: you have two ports using IDE mode:

      00:14.1 IDE interface [0101]: Advanced Micro Devices, Inc. [AMD/ATI] SB7x0/SB8x0/SB9x0 IDE Controller [1002:439c] (rev 40)
          Subsystem: Gigabyte Technology Co., Ltd SB7x0/SB8x0/SB9x0 IDE Controller [1458:5002]
          Kernel driver in use: pata_atiixp
          Kernel modules: pata_atiixp

      IIRC this can cause sync errors with these AMD chipsets; change those ports (usually SATA5/6) to AHCI/SATA and try two consecutive checks again.
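      A quick way to confirm whether any controllers are still in IDE mode after the BIOS change (standard lspci usage, not from the original post):

      lspci -nnk | grep -i -A 3 'IDE interface'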
  12. With rsync you just use the disk paths instead, e.g.:

      rsync -av /mnt/disk1/share/ dest_ip_address:/mnt/disk1/share/

      That limit is for active copper cables; fiber cables can have much longer runs. https://en.wikipedia.org/wiki/Multi-mode_optical_fiber
  13. You can't have the same filesystem mounted twice; it doesn't matter if one of them is in the array or not. E.g., if you cloned the disk with dd, you couldn't then mount the original and the clone at the same time using UD.
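      The reason is the duplicate filesystem UUID on the clone. For XFS, one workaround (a sketch, assuming the cloned partition is at /dev/sdX1 and unmounted) is to generate a new UUID for the clone first:

      xfs_admin -U generate /dev/sdX1    # assigns a new random UUID so both can be mounted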
  14. 2022-01-24 01:27:24 Kernel.Warning 10.5.254.80 Jan 24 01:27:25 Astro-Server kernel: macvlan_broadcast+0x10e/0x13c [macvlan]
      2022-01-24 01:27:24 Kernel.Warning 10.5.254.80 Jan 24 01:27:25 Astro-Server kernel: macvlan_process_broadcast+0xf8/0x143 [macvlan]

      Macvlan call traces are usually the result of having dockers with a custom IP address. Upgrading to v6.10 and switching to ipvlan might fix it (Settings -> Docker Settings -> Docker custom network type -> ipvlan; advanced view must be enabled, top right), or see below for more info:
      https://forums.unraid.net/topic/70529-650-call-traces-when-assigning-ip-address-to-docker-containers/
      See also here:
      https://forums.unraid.net/bug-reports/stable-releases/690691-kernel-panic-due-to-netfilter-nf_nat_setup_info-docker-static-ip-macvlan-r1356/
  15. That's to fix the filesystem corruption, not any data corruption that may exist, and if it's a RAM issue that's always a possibility; if you can recreate all the data from scratch, that's always better.
  16. First make sure that's your actual problem; 72 hours seems like a lot, and it could be a controller bottleneck or some other config issue, difficult to say without more info.
  17. The fastest way is to run multiple disk-to-disk copy sessions with rsync or something similar, without parity of course. I can usually get around 400MB/s sustained for an initial server sync, and it could be even faster without using SSH; for a single disk copy you'll get 100 to 200MB/s depending on the disks used.
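      A minimal sketch of running several disk-to-disk sessions in parallel (the destination IP and paths are placeholders):

      rsync -av /mnt/disk1/ root@192.168.1.100:/mnt/disk1/ &
      rsync -av /mnt/disk2/ root@192.168.1.100:/mnt/disk2/ &
      wait    # block until both transfers finish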
  18. No, maintenance mode won't attempt to mount the disks.
  19. Please post the diagnostics (downloaded after array start).
  20. It's perfectly safe, why wouldn't it be? It will just limit the available bandwidth. I tested the bandwidth of some HBAs in different slots here:
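      As a rough example of the math: a PCIe 2.0 x4 link provides about 500 MB/s per lane, so roughly 2 GB/s total, which would still leave around 250 MB/s per drive for an 8-drive HBA in that slot.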