Everything posted by itimpi

  1. Having examined the logs you provided, I can see the point at which the plugin tries to spin up the drives to get the temperature of those currently reporting ‘*’ because they are spun down (all the drives that are not spun down have cooled sufficiently to make a resume possible). For some reason the spin-up appears not to be working. I will see if I can work out why.
  2. In your previous screenshot it looked like you physically removed the drive, but did not set it to unassigned before attempting to start the array?
  3. This will happen if Unraid cannot read the configuration information off the flash drive. You are likely to get better informed feedback if you attach your system’s diagnostics zip file to your next post in this thread taken while this problem is being encountered.
  4. The plugin always treats the case where the temperature is returned as '*' due to spindown as 'cool' so there needs to be something else going on. I will see if I can work out what it is from the syslog you provided.
  5. Changing to an encrypted option involves reformatting the disk(s), so you first need to copy any existing contents elsewhere. There is no in-place encryption of existing contents.
  6. The instructions say to make sure that the pool is in a redundant mode BEFORE you remove the drive. From your description this is not what you did?
  7. Not a direct answer to your question, but what type of data is it? If it is mainly media files then they will already be compressed internally, so it may be more efficient not to attempt compression during backup.
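A rough way to check whether compression is worth it is to compress a small sample and compare sizes. A sketch using standard tools; the file paths are illustrative, and random bytes stand in for already-compressed media:

```shell
# Random bytes approximate already-compressed media: gzip cannot shrink them.
head -c 1048576 /dev/urandom > /tmp/sample_media.bin
# Repetitive text approximates compressible data such as logs or documents.
yes "sample log line" | head -c 1048576 > /tmp/sample_text.txt
gzip -kf /tmp/sample_media.bin /tmp/sample_text.txt
# Compare the results: the "media" archive stays around 1 MiB (or even grows),
# while the text archive shrinks dramatically.
stat -c '%n %s' /tmp/sample_media.bin.gz /tmp/sample_text.txt.gz
```

If the compressed size is close to (or larger than) the original, skipping compression saves backup time for no loss in space.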
  8. If you mean that you want to go down to a single drive in the cache, then now that you have converted to raid1 (so that data is on both drives) you can stop the array, remove one of the cache drives and start the array again. You can then set the profile for the cache to Single (if it has not been done automatically after removing the drive). Note that there is now no redundancy in the cache, so if it fails its contents will be lost.
  9. Not normally - it would restart such operations from the beginning. If you install the Parity Check Tuning plugin then it has the option to resume such operations from the point reached (as long as the shutdown was a tidy one). One restriction (at the moment) is that the array is not auto-started, so you have to manually start the array to continue the operation.
  10. I could not see any indication of this issue in the diagnostics. Were the posted diagnostics taken while you had this problem occurring?
  11. If you still have this folder after rebooting, then you still have a docker container configured with this path and thus recreating it. You need to fix that and then reboot to confirm this is no longer being created.
  12. You should find this section of the online documentation accessible via the ‘Manual’ link at the bottom of the GUI or the DOCS link at the top of each forum page to be of use.
  13. Linux is case sensitive, so are you sure you used ‘sync’ rather than ‘Sync’ as you posted?
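A quick way to see the difference is to look both names up; ‘Sync’ with a capital S is deliberately wrong here:

```shell
# 'sync' exists as a standard command (it flushes filesystem buffers to disk)...
type sync
# ...but 'Sync' with a capital S is a completely different, non-existent name.
type Sync 2>/dev/null || echo "Sync: command not found"
```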
  14. You are likely to get better informed feedback if you attach your system’s diagnostics zip file to your next post in this thread.
  15. Are you sure it is not the other way around? Writing via SMB should always obey the share settings, whereas Docker can put files on the wrong drive because of the behaviour described here in the online documentation accessible via the ‘Manual’ link at the bottom of the GUI or the DOCS link at the top of each forum page. If not, then you should provide your system’s diagnostics and mention a share exhibiting the behaviour you describe.
  16. I would be very surprised if it was not as an alternative to btrfs for multi-drive pools. Doubt if any tooling would be provided for this. I would expect it to be up to the user to format a pool as ZFS and then copy data to it just as is currently the case for btrfs.
  17. What I would expect is that ZFS in the main array would be supported in the same way btrfs is supported today: each array drive would be a single-device ZFS file system. To get maximum performance you would still need ZFS pools outside the parity-protected array. The big advantage of integration is the ability to participate in User Shares, and perhaps better stability than btrfs. I could be wrong though - just going by what seems a logical first step into ZFS support.
  18. I very much doubt it. In line with how Unraid handles existing file systems, I expect it will be the user’s responsibility to move files to/from any ZFS encrypted drives. On that basis a full migration would follow the same steps as a reiserfs->xfs migration.
  19. Since you only have single parity it is possible to re-arrange the order of the drives without invalidating parity. This would not have been true if you had parity2, as that uses the slot number as part of its calculations.
      • Use the New Config tool and select the option to retain all current assignments (without this, Unraid would complain when you tried to move drives to new slots).
      • Return to the Main tab and make the changes you want to move the array drives up in slot number.
      • Tick the ‘parity is already valid’ checkbox.
      • Start the array to commit the changes.
      The drives should come up with all their data intact and in the new slots, and Unraid should not recalculate parity.
  20. You might want to post the output of cat /etc/cron.d/root so we can check what schedule you have set for parity checks. It is possible to get combinations of settings that can cause you symptoms.
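For reference, entries in that file use the standard five-field cron schedule. The entry below is purely illustrative (the actual command and path Unraid schedules may differ):

```shell
# Hypothetical monthly parity-check entry: minute hour day-of-month month day-of-week command
entry='0 0 1 * * /usr/local/sbin/mdcmd check'
# Split out the schedule fields to see when it would run
# (midnight on the 1st of every month, any weekday).
echo "$entry" | awk '{print "minute="$1, "hour="$2, "day="$3, "month="$4, "weekday="$5}'
```

Overlapping or conflicting schedules in this file are the usual cause of unexpected check times.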
  21. Once a drive is disabled (because a write to it failed) then it normally stays disabled until the drive is rebuilt.
  22. That is always possible. Passing memtest is not always a definitive indication that there is no RAM issue (whereas a failure is always an issue).
  23. Did you make sure that the .iso has boot order=1 set?
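If you edit the VM in XML view, the boot order is an attribute on the CD-ROM device. A sketch of the relevant libvirt fragment; the source path and target device here are illustrative:

```xml
<disk type='file' device='cdrom'>
  <driver name='qemu' type='raw'/>
  <!-- illustrative ISO path -->
  <source file='/mnt/user/isos/install.iso'/>
  <target dev='hdb' bus='sata'/>
  <readonly/>
  <!-- boot order=1 makes the ISO the first boot device -->
  <boot order='1'/>
</disk>
```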
  24. That power supply looks to me like it may be underpowered to handle the peak power draw from your setup. Do you have another one you could use to test this out?
  25. According to the screenshot you have the VM set to boot off the vdisk, and not the ISO. From your earlier description I thought you needed to boot off the ISO?