Everything posted by itimpi

  1. Not sure what you mean by this, as there is no ‘parity job’ to pause (parity in Unraid is real time). If you have parity assigned then it is used; if it is not assigned then you get faster transfer speeds. If you mean that you are currently syncing (or checking) parity, that will severely degrade any writing of new files until it is finished.
  2. You need to look at the syslog BEFORE restarting the server: the syslog is held in RAM, so a reboot clears it. If you need to preserve it, copy it to the flash drive first (see below).
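     A minimal sketch of preserving it, assuming you still have console access and that the flash drive is mounted at /boot as on a standard Unraid install (the file name is just a suggestion):

         # copy the in-RAM syslog to the flash drive, which survives a reboot
         cp /var/log/syslog /boot/syslog-before-reboot.txt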
  3. Did you check whether the hardware IDs for the passed-through hardware changed after the upgrade, so that the pass-throughs need re-doing? This is not uncommon after an upgrade.
  4. You could use the New Config tool and build parity based on the remaining disks (whose data will remain intact); doing this you are unprotected until the parity rebuild is completed. An alternative would be to replace disk1 with a new drive and rebuild its contents. The rebuilt disk would still be reiserfs, but on completion you could follow the process to create an empty XFS file system on it (which takes only moments) and continue from there. What is not clear to me (as you did not mention it) is whether on completion of this conversion exercise you want to end up with 6 or 7 disks in the array; the answer to that might favour one approach over the other. BTW: disks frequently red-ball for reasons other than the drive actually failing. Just mentioning this in case you want to consider testing the 'failed' 2TB disk1 after removing it and repurposing it; if so you could run a pre-clear cycle on it to test it.
  5. Do you have Turbo Write mode enabled? That will maximise array write speeds at the expense of having all drives spinning.
  6. You can refer to the online documentation, accessible via the Manual link at the bottom of the Unraid GUI, for details of how the User Shares are set up using both old and new terminology.
  7. If you go the New Config route you can remove them all at once, as after going this route you will be rebuilding parity to get your array back into a protected state; you are unprotected until that finishes. An alternative in the first place would have been to use New Config to remove the old drives and add the new drives at the same time, before copying/moving any files, and then rebuild parity based on the new set. You could then have mounted the old drives one at a time via the Unassigned Devices plugin to copy their contents back to the array (see the sketch below). This would have been the fastest approach, although you would not have been protected against one of the old drives failing before its data was copied back. If you want to remain protected the whole time then you need to remove them one at a time.
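     As an illustration of that copy-back step, assuming an old drive mounted by Unassigned Devices at /mnt/disks/old_2tb and a user share called Media (both names are placeholders):

         # copy the old drive's contents into a user share on the array,
         # preserving permissions and timestamps
         rsync -av /mnt/disks/old_2tb/ /mnt/user/Media/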
  8. You cannot transfer trial licences to a new drive - you can only get a new trial licence for the new drive.
  9. Your only chance of clearing up this mess is to reboot to get back to a clean system, letting Unraid reload itself into RAM from its archives on the flash drive.
  10. You need to reinstall the binaries for your docker containers (with previous settings intact) via Apps->Previous Apps->Docker.
  11. The syslog shows that you have docker set to use macvlan AND have bridging enabled on eth0. As mentioned in the Release Notes, this combination is known to cause system crashes. You either need to disable bridging on eth0 or switch docker to using ipvlan networking.
  12. How are the disks connected? Have you checked both the SATA and power cabling? Also the diagnostics say the SATA controllers are set to run in IDE mode - not what you want.
  13. Perhaps you should post a screenshot of the Pools section of the Main tab to see if we can spot what might be going wrong.
  14. You need to run without -n (the no-modify flag) to get a repair to run, and if it asks for it, add -L. After doing that, when you restart the array in normal mode the drive should mount. The sequence is sketched below.
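     Roughly what that looks like from the console, with the array started in Maintenance mode and assuming the affected drive is disk1 (on recent releases the device may be /dev/md1p1 rather than /dev/md1); running against the md device keeps parity in sync:

         xfs_repair -n /dev/md1   # -n = no-modify, check only
         xfs_repair /dev/md1      # run the actual repair
         xfs_repair -L /dev/md1   # only if the repair asks you to add -L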
  15. These happen when corruption is detected and the repair process has run to fix it. Not necessarily a problem unless you keep getting them.
  16. That looks ominous, but sometimes rewriting all the bz* type files as described here fixes such issues (sketched below). If you want to replace the drive then there is no need for a big one - an 8GB drive is more than enough. I suspect even a 2GB drive would be enough, but there is no chance of finding one that small. If you can find one, a USB2 drive tends to be more reliable than a USB3 one; and if you cannot find a USB2 drive, then a USB2 port on the motherboard is likely to be more reliable than a USB3 one.
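     A sketch of the rewrite, assuming you have downloaded the zip for your exact Unraid release and can get at the flash drive from another machine (the zip name and flash path here are only illustrative):

         # extract the release and overwrite the bz* files on the flash drive
         unzip unRAIDServer-6.12.10-x86_64.zip -d /tmp/unraid
         cp /tmp/unraid/bz* /path/to/flash/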
  17. It is a requirement of Unraid that each drive has a unique ID (typically its serial number). It looks as if the USB3 chassis you are using does not pass through the serial numbers and has given both drives the same ID. The only fix for this is a different USB3 chassis that DOES present unique IDs for each drive. A quick way to check is shown below.
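     From the console you can see what IDs the drives actually present; two entries with the same (or a blank) serial point at the enclosure:

         lsblk -o NAME,SIZE,SERIAL                 # serials as the kernel sees them
         ls -l /dev/disk/by-id/ | grep -v part     # the IDs used to identify whole drives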
  18. With the latest Unraid 6.12.x releases, if performance is important then you can make use of ZFS format pools.
  19. Did you make sure to disable bridging on eth0 as mentioned in the Release Notes? This is normally required for the system to be stable when using macvlan.
  20. It IS a common problem to end up in this situation unintentionally, which is nearly always the case. For instance, it disables the option for that share to get the performance advantage of being an Exclusive share. You can always disable that check if you mean for this scenario to happen.
  21. I have not heard of a bug in this area. However, without the diagnostics it is difficult to say what might actually be going on.
  22. Maybe the message is a bit misleading, although it is correct in that there is something to fix. The message should perhaps not refer to 'cache', but instead mention that you have set the 'backup' share to be only on the pool, while in practice you also have files on the array for that share. You can confirm this as shown below.
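     A quick check of where the share's files actually live, assuming the share is called backup and the pool is called cache (adjust the names to suit):

         # any hits under /mnt/disk* mean the share also has files on the array
         ls -ld /mnt/disk*/backup /mnt/cache/backup 2>/dev/null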
  23. I suspect that the BIOS on your motherboard was trying to be 'helpful' and attempting to boot from the USB SSD!