Everything posted by itimpi

  1. That’s a limitation of the RAID controller you are using.
  2. The system never uses the sdX designations to identify drives. In the main array drives are identified by serial number, and if you want the same stability under UD you can already get UD to use the same identifier for each drive every time by explicitly setting the name to use.
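      To illustrate why the sdX name does not matter (a hedged sketch, not the UD mechanism itself): Linux already exposes stable, serial-based names for every drive, and these survive reboots while the sdX letters can change.

          # list the persistent serial-based names; the sdX target can differ on each boot
          ls -l /dev/disk/by-id/
          # example output line (drive model and serial are illustrative):
          #   ata-WDC_WD80EFAX-68KNBN0_VGH3XXXX -> ../../sdb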
  3. The boot USB drive does not count. Any other attached storage device counts regardless of how it is connected.
  4. You are likely to get better informed feedback if you post your system’s diagnostics zip file.
  5. You are likely to get better informed feedback if you post your system’s diagnostics zip file.
  6. If you click on the flash drive on the Main tab there is a backup option on the resulting screen.
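      If you prefer the command line, a minimal sketch of a manual backup instead (the destination path is just an example - any location off the flash drive will do):

          # copy the whole flash drive to a folder on the array (hypothetical path)
          rsync -a /boot/ /mnt/user/backups/flash/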
  7. I did not spot anything obvious at first glance. You might want to update FCP to the latest version and remove the Preclear plugin you have installed, as it is not compatible with 6.10.2 (you can use the UD Preclear plugin instead).
  8. You are likely to get better informed feedback if you post your system’s diagnostics zip file. That way we can check how you have things configured and what is going on.
  9. Those screenshots are weird - I would not have thought a parity check could be started until the array is started. Maybe you have managed to trigger some unexpected edge case (and if so it would be nice if we could work out how to reproduce it so it could be fixed). It would be possible to stop the parity check from the command line, but at this point I would recommend that you let it finish, as I am not sure what state things would be left in if you stopped the check.
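      For reference, stopping a running check from the command line would look something like the sketch below (mdcmd is Unraid's low-level array control tool; verify the exact behaviour on your release before relying on it):

          # cancel a running parity check (assumption: nocheck cancels on this release)
          /usr/local/sbin/mdcmd nocheck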
  10. You are likely to get better informed feedback if you post your system’s diagnostics zip file.
  11. Your description does not really make sense - I suspect there is a flaw in your explanation. I would suggest attaching your system’s diagnostics zip file so we can see what is going on.
  12. I don’t know anything about that container, but it sounds like they are talking about the /etc/fstab file that is internal to the container. I would suggest that you post your query in the Support thread for that particular container (click its icon on the Docker tab and select Support). You are more likely to get a knowledgeable answer there.
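      If you want to look at that file yourself before posting there, a quick sketch using the standard Docker CLI (the container name is hypothetical):

          # print the fstab that lives inside the container, not the host's
          docker exec my-container cat /etc/fstab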
  13. Unraid prohibits the ‘execute’ bit being set on the /boot device for security reasons. Perhaps you should explain why you want to do this - there is almost certainly a better way to achieve what you want.
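      For example, a script stored on the flash drive can still be run without the execute bit by handing it to the interpreter explicitly, or by copying it somewhere executable first (paths are illustrative):

          # run directly via the interpreter - no execute bit needed
          bash /boot/config/myscript.sh
          # or copy off the flash drive and make that copy executable
          cp /boot/config/myscript.sh /tmp/ && chmod +x /tmp/myscript.sh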
  14. You seem to be getting continuous errors along the lines of:

          Jun 10 05:33:01 BIGDADDY kernel: sd 2:0:0:0: [sdb] tag#12 UNKNOWN(0x2003) Result: hostbyte=0x04 driverbyte=DRIVER_OK cmd_age=0s
          Jun 10 05:33:01 BIGDADDY kernel: sd 2:0:0:0: [sdb] tag#12 CDB: opcode=0x2a 2a 00 0d f0 3f 60 00 00 40 00
          Jun 10 05:33:01 BIGDADDY kernel: blk_update_request: I/O error, dev sdb, sector 233848672 op 0x1:(WRITE) flags 0x1800 phys_seg 6 prio class 0
          Jun 10 05:33:01 BIGDADDY kernel: BTRFS warning (device sdd1): lost page write due to IO error on /dev/sdb1 (-5)
          ### [PREVIOUS LINE REPEATED 2 TIMES] ###
          Jun 10 05:33:01 BIGDADDY kernel: BTRFS error (device sdd1): error writing primary super block to device 2
          Jun 10 05:33:03 BIGDADDY kernel: BTRFS warning (device sdd1): lost page write due to IO error on /dev/sdb1 (-5)

      which indicates that there are issues with your cache pool, since the sdd and sdb devices belong to it. Not sure of the best way to proceed - maybe @JorgeB might have a view?
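      One hedged way to see whether the pool is accumulating errors (this assumes the pool is mounted at the standard /mnt/cache location):

          # per-device error counters for the btrfs pool
          btrfs device stats /mnt/cache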
  15. Mover has no concept of moving older files first, so that is not an option. There has been some discussion as to whether this might one day be available, but I have seen no commitment to it actually happening. Also, Use Cache=Prefer means you want files moved TO the cache from the array, not the other way around. If you want files moved from cache to array you need the Use Cache=Yes option. The Help built into the GUI describes how the options for this setting work and what action (if any) mover will subsequently take.
  16. This is covered here in the online documentation accessible via the ‘Manual’ link at the bottom of the GUI.
  17. To emphasise the point: there is no requirement for the two parity drives to be the same size as each other, as long as each one is the same size as or larger than the largest data drive. For example, with a 12TB largest data drive, one parity drive could be 12TB and the other 14TB.
  18. According to your diagnostics there are problems with your cache drive so it is not mounting. That would explain your symptoms.
  19. Could not spot any reason in the diagnostics why the shares are not showing up as protected.
  20. If you post your system’s diagnostics zip file we can check that out.
  21. Yes. It still works, but the cloud backup is now the recommended option.
  22. We probably need to wait for @JorgeB as the expert in this area, but from what I can see only 2 drives are shown in the nvme_protected pool at the btrfs level, while the pool configuration shows 3 drives assigned. It feels like something went wrong when adding the 3rd drive. Maybe, instead of formatting the drive to btrfs before adding it, you should have used the wipefs command to remove all vestiges of the previous file system?
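      For reference, a sketch of checking pool membership and wiping stale signatures (device and mount names are placeholders, and wipefs is destructive, so be certain of the target):

          # list the devices btrfs believes are part of the pool
          btrfs filesystem show /mnt/nvme_protected
          # remove all filesystem signatures from the problem device (hypothetical name)
          wipefs -a /dev/nvme2n1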
  23. The data corruption issue described in the 6.10.2 release notes.
  24. If you partitioned manually, you would have had to get the partitioning exactly as Unraid would have done it. If you had done it via the Unassigned Devices plugin then you would probably have been OK.
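      A hedged way to compare the layouts (device names are placeholders): run the same listing against a drive Unraid partitioned itself and against the manually prepared one, and compare the start sector and partition type.

          # show the partition table so the two drives can be compared
          fdisk -l /dev/sdb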
  25. You need to rebuild the failed drive to clear the disabled status. It is a bit strange that the emulated drive is showing as empty. Did you at any time ask Unraid to format a drive? If you did, it would have been the emulated drive that got formatted (which would explain it showing up as empty).