Everything posted by itimpi

  1. At this point it is almost certain that the start of disk1 is damaged beyond being repaired by xfs_repair. I think you can do the following:
       1. Use New Config and select the option to keep all assignments.
       2. Return to the Main tab and check the assignments are correct (including disk1).
       3. Check the "Parity is valid" checkbox.
       4. Start the array in Maintenance mode to commit the assignments. Since you are running in Maintenance mode nothing will be written to any drive.
       5. Stop the array and unassign disk1.
       6. Start the array in normal mode and it should now be emulating what it thinks should be on disk1. Ideally it should be mountable and its contents available. If not, do not proceed; check back here and provide your diagnostics.
       7. Stop the array.
       8. Reassign disk1 (Unraid should now offer the option to rebuild it).
       9. Start the array to rebuild physical disk1 to match the emulated disk1.
     You can also wait to see if @JorgeB has any other suggestion, as the best authority on recovering from disk failures.
  2. Yes, which is why I would probably use the different types in different pools optimised for their particular use case (or simply not use them at all).
  3. You can keep this to a minimum by using rsync to copy across only the changes. Still, I take the point, and it is worth mentioning.
  4. Have you tried giving the full path to the newperms command in case it is not on the search path at that point?
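A generic illustration of the search-path problem (the directory and tool name here are made up; on Unraid you would find the real location of newperms with `which newperms` from an interactive console):

```shell
# A tool visible from an interactive shell can vanish when a script runs
# with a trimmed PATH - but calling it by its full path always works.
dir=$(mktemp -d)
printf '#!/bin/sh\necho ok\n' > "$dir/mytool"
chmod +x "$dir/mytool"

# With a minimal PATH the bare name is not found...
PATH="/usr/bin:/bin" command -v mytool || echo "mytool not on PATH"

# ...but the full path still runs it.
"$dir/mytool"
```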
  5. If the OP mentions where/how the ZFS pool is being mounted we could probably give helpful advice on how to get things done in the right order?
  6. Ok, I’ll get on to it then. Might take a day or two before I complete it and validate I wrote it up correctly.
  7. It should be possible to make Unraid think that disk1 has failed and make it rebuild disk1 from the combination of parity1 plus the other disks. Do you think parity1 and all the other drives are OK? Do you have any spare drives that could be used in place of disk1 so we can keep it unchanged at this point? I would suggest you post your current diagnostics so we can check things out before giving further advice. HINT: When using the New Config tool if you always start by using the option to preserve all current assignments and then make changes from that position it is harder to have an accident of the type you described.
  8. I would use the additional SSDs for hosting data for docker containers or VMs. They could also be added to pool1 to give more capacity (or redundancy) there.
  9. What you can have, which I think achieves what you want, is:
       • 2x18TB in the main array
       • 500GB NVMe as pool1, set to act as cache for shares on the main array
       • pool2 with multiple spinning HDs set up as RAID1 to give redundancy, to be used for torrenting
     This still leaves other drives free to either be added to the array, or perhaps used as pools for hosting VMs and/or Docker containers. The Unraid roadmap does talk about having multiple main arrays, but if this ever does materialise it is still some way off.
  10. If you are prepared to have 2 licences then it is quite easy to have the second one plugged in (already licenced) that can act as a hot standby that you can switch to by selecting it as the boot device, and which will not require internet access. Probably a cost most private users do not want to bear, but could be appropriate in your case? If this is likely to be of use I could write up the details of the way to achieve this and add it to the online documentation (I may do that anyway in case anyone else wants it).
  11. What you did should have worked. Nothing obvious springs to mind, but random ideas I have are:
       • Permission issues, which could be fixed by running the Docker Safe New Permissions tool
       • File system corruption, which could be fixed by running File System Check/Repair on all drives
     As I said earlier, your diagnostics might give some other clue.
  12. Normally you do not want to use cache for initial load - it will just slow things down as the cache can be written much faster than it can be emptied. In terms of a suitable size for the cache drive then it should be sized for the typical amount of new material you intend to write per day (and then mover running overnight can empty it) and has nothing to do directly with the size of the array drives.
  13. The screenshot shows that the normal shutdown script has been triggered. This will either be something you are running that calls the script, or something obscure that makes Unraid think you have made a short press on the power switch. If it boots OK into Safe Mode then it will almost certainly be one of your plugins.
  14. Have you checked that the file does not already exist on an array disk? Mover will not move a file if one with the same name already exists on the array. You may get better informed feedback if you post your system’s diagnostics zip file.
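A quick way to check for such a clash, sketched here with a mock directory layout (on a real server you would loop over the actual /mnt/disk* mount points and use the file's path relative to the share):

```shell
# Mock two array disks plus a cache, with the same file name on disk1.
root=$(mktemp -d)
mkdir -p "$root/cache/Media" "$root/disk1/Media" "$root/disk2/Media"
touch "$root/cache/Media/movie.mkv" "$root/disk1/Media/movie.mkv"

# For a file stuck on the cache, list which array disks already hold that path.
f="Media/movie.mkv"
for d in "$root"/disk*; do
  if [ -e "$d/$f" ]; then
    echo "duplicate on ${d##*/}"
  fi
done
```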
  15. Not quite sure what you mean here? You can only have 1 main array but you can have multiple pools that can act as cache or application drives. Pools can be single drive or multi-drive (using BTRFS RAID modes).
  16. Have you tried setting the file system type BEFORE starting the array? I would also suggest that you delete any existing partitions first, as Unraid normally wants to do the partitioning itself on any drive it is going to use.
  17. As was mentioned, this is an excellent speed for writing to a parity protected array! You might want to read this section of the online documentation, accessible via the ‘Manual’ link at the bottom of the GUI.
  18. I went with WireGuard in preference because I have it set up so I can access any device on my home LAN, not just the Unraid server. When away from home I regularly want to access a Windows desktop on my home LAN which I access using NoMachine Remote Desktop running over WireGuard.
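A minimal sketch of the peer (client) side of such a setup. All addresses, keys, and the endpoint below are placeholders, and Unraid's GUI generates the real config for you; the point to notice is the AllowedIPs line, which routes the whole home LAN subnet (not just the server's tunnel address) through the tunnel:

```ini
[Interface]
PrivateKey = <client-private-key>
Address = 10.253.0.2/32

[Peer]
PublicKey = <server-public-key>
Endpoint = my-home.example.com:51820
; Including the LAN subnet (192.168.1.0/24 here is an assumed example) is
; what makes every device on the home network reachable over the tunnel,
; not just the Unraid server itself.
AllowedIPs = 10.253.0.1/32, 192.168.1.0/24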
  19. I do not know anything about Chrome Remote Access, but I do this frequently using the WireGuard VPN server built into Unraid to handle securing the access.
  20. In that case you can follow this procedure from the online documentation, accessible via the ‘Manual’ link at the bottom of the GUI.
  21. You can get Windows giving misleading error messages, as Windows only ever allows one set of credentials to be used at the same time to a particular server. Any attempt to use a second set acts as if you do not have a valid username/password for accessing that server. A possible workaround is to make one connection using the server name and the other using the IP address, as Windows treats these as two different servers. Whether this is your problem I do not know. What are the access permissions set on that share in the Unraid GUI?
  22. The ability to have executable files on the flash drive was removed some Unraid releases ago as part of tightening security. A workaround is to precede the script name with ‘sh’ or ‘bash’.
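A sketch of why the prefix works. The flash drive is mounted without execute permission, which we simulate here with chmod on a temporary file (the script content is a made-up example):

```shell
# Create a script and strip its execute bit, mimicking a file stored on the
# flash drive, which Unraid mounts so that nothing on it is executable.
script=$(mktemp)
echo 'echo hello from flash' > "$script"
chmod -x "$script"

# Direct execution ("$script" on its own) would fail with "Permission denied";
# handing the file to the interpreter explicitly sidesteps the execute bit.
sh "$script"
```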
  23. How did you set the script to execute at array start? Have you tried booting Unraid in Safe Mode?
  24. That is because you have configured the docker service to use that location for containers (you must have changed it from the default of using an appdata share).