Everything posted by itimpi

  1. It looks like you have already added the pool you want to use for caching purposes (arrary-cache)? If that is the case then you go into the settings for the share, set that pool to be primary storage, the array to be secondary storage, and the mover direction the way you want it to go (probably arrary-cache -> array), as sketched below.
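     In the share settings GUI that corresponds to something like the following (a sketch only; the pool name is taken from the post above, so adjust to your own):

       Primary storage:   arrary-cache (pool)
       Secondary storage: Array
       Mover action:      arrary-cache -> Array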
  2. I thought I mentioned earlier that the plugin can currently think there has been an unclean shutdown when this is not the case, so it generates that notification spuriously and it can be ignored. This is being worked on.
  3. It looks like you changed a parity drive and tried to add a new drive at the same time? That cannot be done as a single step - make the two changes one at a time (the order is not important).
  4. There is no REQUIREMENT to output a message at all. The plugin has to determine whether there has been an unclean shutdown for its own purposes, as that means it cannot restart an array operation that was already in progress at the time of the shutdown. Since the plugin has detected the unclean shutdown it seemed a good idea to pass this information on to the end user even when a restart was not going to be attempted. I want the Event field to identify what generated the notification (as is the case elsewhere). It is the Subject and/or Description fields that can be changed. However, your suggested wording would be inaccurate, as it is NOT the plugin that starts the automatic check - it is the core Unraid system.
  5. It is simply meant to inform the user there was an unclean shutdown - case (1). The plugin does not initiate anything as a result, but Unraid normally does. It would, however, not restart any array operation that was in progress during the shutdown phase (from the point then reached). Note the plugin never initiates a parity check or other array operation from the beginning - that is always done at the Unraid level. Maybe I should change the wording and not mention Unraid automatically starting an array operation?
  6. If the drives were on the LSI controller on the old system and will also be on the new system then order is irrelevant. Unraid recognises drives by serial number - not by where they are connected. There is only ever a problem if as part of the upgrade the controller type is changed and this results in the disk getting presented differently to the host. Most likely to happen when RAID controllers are involved rather than simple HBA ones.
  7. Nothing that I know of. If anything, the change would be around the shutdown logic causing genuine unclean shutdowns to happen.
  8. I have reproduced this in a test environment, so I am virtually certain that the next update will only output this notification if it really is an unclean shutdown. Do you think this message adds value (when it is working correctly)? I could simply remove it from the plugin code. My thought was that it was useful for users to know this had happened without them having to look into the syslog.
  9. That message got introduced when the option to restart array operations was added, as the plugin can only restart operations after a clean shutdown. It is meant to be an informative message only and does not actually cause anything to happen. I am currently working on some issues around the restart logic, so this should get fixed as a by-product of that. I could remove the notification entirely from the code, so I would welcome some feedback on whether it is useful (assuming it is working correctly). My feeling was that it helped users by saving them from having to visit the syslog to be certain what had happened.
  10. Glad to hear that it is resolved. I think many people do not realise that if any of the connections on the 4 twisted pairs inside a standard Gbit LAN cable do not connect properly, the link can silently downgrade to 100Mbps, as that only requires 2 twisted pairs (you can confirm the negotiated speed as shown below). In some ways it would be easier to diagnose if it stopped working entirely rather than continuing to work in degraded mode.
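      A quick way to confirm the negotiated link speed from the Unraid console (this assumes the NIC is eth0; substitute your interface name):

        # shows the negotiated speed, e.g. "Speed: 1000Mb/s" or "Speed: 100Mb/s"
        ethtool eth0 | grep -i speed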
  11. You can ignore that message as the plugin can generate it spuriously. This is being worked on.
  12. SATA -> SATA splitter cables are available but I would never split a SATA cable more than 2 ways although I would be happy with splitting a molex -> SATA cable 4 ways.
  13. This sounds like the behaviour, recently fixed, where an operation paused because mover or a backup was active was not resuming when they completed. If it is still occurring with the latest release then let me know. You can also disable this type of pause/resume in the plugin settings, but you should no longer need to.
  14. That implies the array shutdown before the reboot did not complete successfully. If you continue to get this when you reboot then you probably need to look into troubleshooting as described here.
  15. I think with molex on the PSU side you can normally get away with a 4-way splitter. Not something I would want to do if the PSU side is SATA, as SATA connectors are not rated for as high a current.
  16. If you are going to Preclear (though it is not required) then it is done before adding a disk to the array. Formatting wipes the Preclear result, so it should NOT be done after Preclear and before adding to the array. The check of the disk health is what matters; wiping existing data is a by-product. You format after adding the disk to the array to create an empty file system.
  17. I would expect that to be fine because the SATA connectors are all 'crimped' to the power lines.
  18. Only if you are prepared to lose all updates made to it since it was disabled. You would also have to rebuild parity, so none of the other drives can have problems.
  19. Deleting the progress file will not help. What seems to be confusing things is a parity.tuning.restart file in the plugins folder on the flash drive - that should only be present immediately after booting if a restart of an array operation is pending. Deleting this file (see the sketch below) should solve your immediate problem, but it is not clear why it was there in the first place. You do have the restart option enabled, but that file should only get created if an array operation was actually in progress when the system was last shutdown/rebooted - I assume this was not the case? There is also a parity.tuning.scheduled file present which I would also expect to be removed after the parity check completes, but I do not see an entry in the parity.tuning.progress file showing that the completion was ever detected, which probably also explains the parity.tuning.scheduled file still being there. I assume it DID finish? I will see if I can recreate the issue, but if the parity.tuning.restart file reappears after you delete it I would love to know what led up to that.
      EDIT: I can confirm that I am getting some unexpected behaviour if a restart happens during the check, so that gives me something to look at. The restart happens fine, but the plugin then seems to get a bit confused about the state of things and does not tidy up as it should.
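      A minimal sketch of removing the stale marker from the console (the plugin folder name parity.check.tuning is an assumption; on Unraid the flash drive is mounted at /boot):

        # inspect the plugin's state files first
        ls -l /boot/config/plugins/parity.check.tuning/
        # then remove the stale restart marker
        rm -f /boot/config/plugins/parity.check.tuning/parity.tuning.restart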
  20. Note that if using a raidz pool then in the current release a cache drive is not relevant for that purpose, as you can only cache writes to the main array, not to a pool. This restriction is expected to be removed in the 6.13 release, although we have no ETA for that. Having said that, you may well want to use the SSD as an application drive for best performance of docker containers and VMs.
  21. Have you rebooted? Until you do you may still have (old) log entries that mention macvlan. You can check for new ones as shown below.
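      A quick check from the console after the reboot (the syslog path is the standard one on Unraid); no output means no fresh macvlan entries:

        grep -i macvlan /var/log/syslog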
  22. You should not have to go through anything complicated. You just need to make sure that there are no files (or folders) for that share on the array or on any other pool, and that the share has no secondary storage set. If any of these conditions is not met, the Exclusive Share setting is automatically set to NO. A quick check is sketched below.
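      A quick way to check from the console (the share name MyShare is a placeholder); any output means files for the share still exist on the array, blocking Exclusive mode:

        # check the array disks; repeat with /mnt/<poolname>/MyShare for any other pools
        ls -d /mnt/disk[0-9]*/MyShare 2>/dev/null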
  23. Yes. As long as they use the VPN client at their end and you have given them the appropriate WireGuard client configuration file to use your server.
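      For reference, a WireGuard client configuration file looks roughly like the sketch below (Unraid's VPN Manager page can generate one for each peer); every key, address and the endpoint here is a placeholder:

        [Interface]
        PrivateKey = <client private key>
        Address = 10.253.0.2/32

        [Peer]
        PublicKey = <server public key>
        Endpoint = your-public-address.example:51820
        AllowedIPs = 10.253.0.0/24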
  24. If those are the only drives you are going to start with then it looks like you will have a standard main Unraid array of 1 parity drive and 2 data drives. At the moment using ZFS in the main array is bad from a performance perspective, so you probably want to avoid using it there at this point in time. In terms of the SSD, if it is going to be used as a pool for caching and/or other purposes then ZFS is certainly a viable option there.
  25. Have you any backups of the flash drive (perhaps via My Servers/Unraid Connect or the appdata backup plugins)? If not, do you at least have a copy of your licence file? You can locate it as shown below.
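      On a running server the licence key file lives on the flash drive, which is mounted at /boot:

        # lists the licence key file(s), e.g. Pro.key
        ls /boot/config/*.key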