Everything posted by itimpi

  1. I have an f@h container set to ONLY use the GPU and it is definitely updating the points totals with the WUs that complete.
  2. The calculations for parity1 are not affected by the position a drive occupies. You can achieve the result you want as follows: stop the array; go to Tools >> New Config, select the option to keep current assignments, click the confirmation checkbox and click Apply; return to the Main tab and change the assignments for the drives you want to the slots you want; tick the 'Parity is valid' checkbox; start the array to commit the changes. You can now follow the normal process for adding the pre-cleared drives.
  3. If a drive has been passed through to a VM then it should NEVER also be mounted in Unassigned Devices at the same time. Doing so means that you have two different operating systems (Unraid and Windows) both thinking they have exclusive access to the drive, and neither will see changes made by the other. This can easily lead to complete corruption of the contents. The latest releases of UD have a pass-through option that you should set if passing the drive through to a VM, as this will then stop you also trying to mount it at the same time.
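     For illustration only (the device path and target name below are hypothetical examples, not taken from your system), a whole disk passed through to a VM typically appears in the libvirt XML along these lines, and a disk defined like this should never also be mounted by UD:
        <disk type='block' device='disk'>
          <driver name='qemu' type='raw' cache='none'/>
          <source dev='/dev/disk/by-id/ata-EXAMPLE_DRIVE_SERIAL'/>
          <target dev='vdb' bus='virtio'/>
        </disk>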
  4. 1) and 3) are already available via Settings >> Notifications
  5. What access mode have you set in the drive mappings in the docker template? For UD drives you want it to be one of the ‘Slave’ modes.
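     For reference, the 'Slave' access modes in the template map onto Docker's slave mount-propagation flags; a hand-written equivalent (the container name, image and paths are made-up examples) would be roughly:
        docker run -d --name=myapp \
          -v /mnt/disks/MyUDDrive:/data:rw,slave \
          myimage
     The slave propagation is what lets the container notice when UD mounts or unmounts the drive underneath the mapping.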
  6. Not quite - it is still one-way. Cache-prefer only ever moves files from array to cache. It is just that if there is no space left on the cache when creating a new file, Unraid will let it be written to the array instead. If space later becomes free on the cache then mover will move the file from the array to the cache.
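     If you want to double-check what mode a share is actually set to from the command line, the setting is stored on the flash drive; assuming the usual location and file layout (the share name here is just an example), it is something like:
        grep shareUseCache /boot/config/shares/Media.cfg
        shareUseCache="prefer"    # other possible values are "yes", "no" and "only"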
  7. Unraid should not care where drives are connected as it identifies them by serial number. The normal recommendation is to have the SSDs connected to motherboard ports.
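     You can see the serial-based identifiers Unraid keys on with a standard Linux listing (the drive names below are made up):
        ls -l /dev/disk/by-id/
        # ata-WDC_WD40EFRX-68N32N0_WD-EXAMPLE1 -> ../../sdb
        # ata-Samsung_SSD_860_EVO_500GB_EXAMPLE2 -> ../../sdc
     Whichever port a drive is moved to, its by-id name stays the same, which is why Unraid does not care.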
  8. The logs show that the SSD is no longer online, and as a result all dockers will be failing. Why the SSD is not online I have no idea.
  9. Not really an answer to your question, but have you looked into using the Parity Check Tuning plugin to avoid the parity check running during prime time with its adverse effect on performance?
  10. I think you are over-thinking things. All that really matters is that the contents of each parity disk are calculated independently of each other by doing mathematical operations against all the data drives, and that each parity drive allows for the failure of a single array drive to be handled. In your case where you unassign parity1 and assign that drive as a new data drive you expand the available space by the size of that drive, but you are now only protected against a single drive failing as you only have one parity drive left. Unraid does not care that parity1 is no longer present.
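     A minimal Python sketch of the idea behind single parity (parity1 is effectively a bitwise XOR across all the data drives, which is also why it does not care which slot holds which drive):
        from functools import reduce

        # toy 'drives': one byte of data per drive
        data_drives = [0b10110010, 0b01101100, 0b11110000]

        # parity1 is the XOR of every data drive
        parity = reduce(lambda a, b: a ^ b, data_drives)

        # simulate losing drive 1 and rebuilding it from parity plus the survivors
        survivors = [d for i, d in enumerate(data_drives) if i != 1]
        rebuilt = reduce(lambda a, b: a ^ b, survivors, parity)
        assert rebuilt == data_drives[1]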
  11. You would just plug the drives into a replacement controller and bring the system back online. Not sure what you mean by a z2-raid though?
  12. This is only true if you want to avoid having to rebuild parity2 (assuming you even have a parity2). Order is not significant as far as parity1 is concerned. I would be interested to know exactly where you saw the statement about order in case the guide needs amending.
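     For what it is worth, the reason order matters for parity2 but not parity1 is that (assuming the usual RAID-6 style P+Q scheme that Unraid's dual parity is based on) the second parity weights each data disk by its slot number:
        P = D_1 \oplus D_2 \oplus \cdots \oplus D_n                      (unchanged if the D_i are re-ordered)
        Q = g^{1} D_1 \oplus g^{2} D_2 \oplus \cdots \oplus g^{n} D_n    (arithmetic in GF(2^8); the exponent is the slot number, so moving a disk changes Q)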
  13. If you have containers that only work with /mnt/cache then it probably means that they are using hard links at the Linux level, and the handling of those has always been problematic under the /mnt/user path. However, why that should have changed recently I have no idea. It could also have something to do with the way that Docker handles these, as it is quite likely to have changed in recent Docker releases.
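     If you want to check whether a container's data really does use hard links, compare inode numbers on the physical path (the appdata location and file names below are just examples):
        stat -c '%i %h %n' /mnt/cache/appdata/someapp/file1 /mnt/cache/appdata/someapp/file2
     Identical inode numbers in the first column mean the two names are hard links to the same file.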
  14. Not sure, but I suspect that you have some of the files/folders for the appdata share on an array drive so they are not seen when you use the /mnt/cache/appdata path but are seen when using /mnt/user/appdata. There are some other more obscure possibilities but that is the most likely. If you posted your diagnostics zip file (obtained via Tools >> Diagnostics) we could see if that is the case.
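     A quick way to check this yourself from the console is to see which drives actually hold an appdata folder:
        ls -d /mnt/cache/appdata /mnt/disk*/appdata 2>/dev/null
     If anything other than the cache entry is listed then part of the share is sitting on the array.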
  15. This is the case where one refers to a physical drive, and the other to a logical view that can (potentially) span multiple drives. Mapping to a physical drive can improve performance as it removes the overhead of accesses via the User Share sub-system. /mnt/cache/appdata is meant to refer to the 'appdata' folder on the physical cache device(s). /mnt/user/appdata refers to the logical view provided by the User Share sub-system and will include the contents of any 'appdata' folders found on any array drive or the cache drive. You must NOT attempt to map anything to /mnt/cache if you do not actually have a cache as in such a case that location is purely in RAM and not persistent across reboots. In simplistic terms anything directly under /mnt/cache or /mnt/diskX is a physical device and those under /mnt/user are logical views of the data on the physical devices.
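     A small example of the difference, using 'appdata' as the share name:
        ls /mnt/cache/appdata    # only what physically sits on the cache device(s)
        ls /mnt/disk1/appdata    # only what sits on disk1, if the folder exists there at all
        ls /mnt/user/appdata     # the merged view of that folder across the cache and all array disks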
  16. There is no reason the actions you describe should have affected docker unless you did more than you describe. Hopefully the diagnostics will provide some insight into what is causing your symptoms, and the path to getting everything back to a working state.
  17. You can read about container auto-start settings in the online documentation. Whether these are used by CA Backup I have no idea.
  18. 1) Yes - removing a drive invalidates parity, requiring it to be rebuilt. 2) Yes - you can pre-clear a drive in parallel with any other array operation as a drive being pre-cleared is not part of the array. 3) Not sure I understand this question - the Parity Swap procedure is only relevant when replacing a drive, not when removing one. In all cases the array remains operational, so I am not sure why you expect prolonged downtime; it is just that performance is likely to be degraded while carrying out the operation.
  19. You have limited the drives that can be included in User Shares under Settings >> Global Share Settings. It is normally best to leave the include/exclude fields on that page empty so that all drives are possible, and then apply any limitations at the individual share level.
  20. You are correct in that as long as you have a single parity drive then you are protected against any single array drive failing. You are incorrect about the two parity drives having the same data - the calculations for parity2 are different to those for parity1.
  21. It was not mentioned when the original post was made but changing the password via the CLI is NOT the way to do it as that will only change it in RAM and not survive a reboot. The correct way to do this is via the Users tab in the Unraid GUI.
  22. If for any reason you need to edit that field it is much easier if you use one of the suffixes to specify the units being used.
  23. Actually, the new diagnostics show that the parity disk IS now online and from the SMART report it looks to be healthy. You could try running an extended SMART test on it to be sure. Unraid will leave a disk marked as disabled (i.e. marked with a red ‘x’) until you take a positive recovery action. The steps to do this are covered in the Replacing a Disk section of the online documentation - in particular the part about re-enabling a drive that is currently marked as disabled.
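     If you prefer the command line to the GUI's SMART controls, an extended test can be started with smartctl (the device name below is only an example - check which letter the parity disk currently has):
        smartctl -t long /dev/sdb    # start the extended self-test in the background
        smartctl -a /dev/sdb         # check progress and, later, the result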
  24. Unraid does not have users in the traditional Linux sense! The ‘root’ user is the only one who has access to the Unraid GUI. Any other users you set up are just for controlling access over the network to the shares on the Unraid server. In terms of being able to log out, the button to do this is the leftmost of those displayed in the group at the top right of every screen in the Unraid GUI.
  25. I assume you are trying to get to the VM? virbr0 is a NAT bridge so you cannot port forward through it (as far as I know). You want a bridge like br0 or br1 that connects directly to the LAN.
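     For reference, the difference shows up in the VM's network definition; a NIC attached to br0 rather than virbr0 looks something like this in the libvirt XML:
        <interface type='bridge'>
          <source bridge='br0'/>
          <model type='virtio'/>
        </interface>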