Everything posted by itimpi

  1. It cannot be quite that simple, as the default assumption would always be that a 'failed' drive will be replaced.
  2. Assuming the drive is OK, I am a bit concerned about your reference to a "New Disk". In such a case you should be following this procedure from the online documentation that can be accessed via the Manual link at the bottom of the Unraid GUI.
  3. Yes. Instead of formatting the disk you should have followed the procedure for handling drives showing as unmountable, documented here in the online documentation accessible via the ‘Manual’ link at the bottom of the Unraid GUI.
  4. The more recent versions of the plugin DO have an option to shut down the server if disks overheat. There is, though, no option to do this on something like the CPU overheating.
  5. You can normally avoid converting the .vmdk file and use it directly, as long as you enter the full path to it manually rather than selecting it from the GUI.
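     If you want to sanity-check the image first, qemu-img (part of the QEMU stack Unraid uses for VMs) can report on it from a console session; the paths below are only examples:

        # Confirm the file really is a usable vmdk and see its virtual size
        qemu-img info /mnt/user/domains/Win10/Win10.vmdk

        # Should you later decide to convert it to a raw vdisk after all:
        qemu-img convert -p -f vmdk -O raw /mnt/user/domains/Win10/Win10.vmdk /mnt/user/domains/Win10/vdisk1.img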
  6. All options can be used. Easiest is to select ‘All’, return to the Main tab and unassign the disk you want to remove, make any other changes you want (e.g. re-order drives) and then start the array to commit the assignments.
  7. Parity disks do not need to be the same size. The only restriction with dual parity is that they both must be at least as large as the largest data drive.
  8. More detail on how it did not work, such as what was reported when you tried the suggested action, might allow us to give help.
  9. The standard process for replacing a failed drive is covered here in the online documentation that can be accessed via the Manual link at the bottom of the Unraid GUI.
  10. What issue - the disks being unmountable or disabled? These two states require different recovery actions. From the previous posts it sounds as if only the steps for the unmountable state have been done and clearing the disabled state (which requires rebuilds) is still outstanding.
  11. Only if doing so meant that you ran out of space on the cache drive, as the BTRFS file system seems prone to corruption if the free space is exhausted. I can see it not being immediately obvious, as vdisks are created as ‘sparse’ files, which means they do not use all the space allocated until the code in the VM writes to parts of the vdisk file not currently being used. This is just a theory though, so no idea if it could apply in your scenario.
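     You can see the sparse behaviour for yourself from a console session; the path below is purely an example:

        # Create a 30G sparse vdisk - only metadata is written at this point
        truncate -s 30G /mnt/cache/domains/test/vdisk1.img

        # Apparent size versus blocks actually allocated on disk
        ls -lh /mnt/cache/domains/test/vdisk1.img   # reports 30G
        du -h /mnt/cache/domains/test/vdisk1.img    # reports near 0 until the VM writes data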
  12. You could always plug it into another machine to read the SMART information, as it is stored on the disk itself.
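     On any Linux machine with smartmontools installed that would look something like this (substitute whatever device the disk actually appears as):

        # Full SMART report for the disk
        smartctl -a /dev/sdX

        # Disks in a USB enclosure often need the bridge type specified, e.g.
        smartctl -a -d sat /dev/sdX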
  13. Not directly in terms of checking for Plex activity, but as long as you have periods when you know the server will not be used for Plex you can use the Parity Check Tuning plugin to set parity checks to run in increments outside prime time.
  14. It is covered here in the online documentation accessible via the ‘Manual’ link at the bottom of the Unraid GUI.
  15. What file system is in use for the current cache drive? If it is BTRFS then yes you can add a drive at any time. If it is XFS then you would first need to back it up elsewhere as a multi-drive pool HAS to use a BTRFS file system.
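     A sketch of backing the cache contents up before reformatting, assuming you have room on disk1 (the paths are examples only):

        # Copy everything off the cache, preserving attributes
        rsync -avX /mnt/cache/ /mnt/disk1/cache-backup/

        # ...reformat the pool as BTRFS via the GUI, then copy it back
        rsync -avX /mnt/disk1/cache-backup/ /mnt/cache/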
  16. That IS a PCT bug but, as you say, cosmetic. Still, it will get fixed. Happy to have any reports on PCT issues as that is the way issues get identified and fixed. What I am trying to avoid is a problem report simply being dismissed because the PCT plugin is installed when the actual issue is probably something else.
  17. The stop file running is part of the normal Unraid shutdown sequence. I think it is run quite near the end, after all the array and services have been stopped, but I could check that if you need to know exactly when it runs. It is also quite easy to run scripts on any of the Unraid internal events if this would be more desirable. I think I have some documentation on how to achieve this.
  18. Sounds like a potential bug if the parity check is started while a rebuild is in progress. The Parity Check Tuning plugin does nothing that you could not simulate manually by using the Pause/Resume buttons on the Main tab at appropriate times, so as such it should not be the plugin that causes this issue, although the plugin might make it more likely by extending the time for the rebuild to complete. I can see the plugin getting a little confused if a rebuild suddenly changes to a parity check mid-flight. I think it should handle this, but I need to check that is correct. I will have to see if I can recreate this exact sequence of events and if necessary raise an appropriate bug report to get clarity on what is the expected behaviour. You do not normally get an entry added to the Parity History for a rebuild (or clear). If it is thought it would be of use I could enhance the Parity Check Tuning plugin to add such entries.
  19. Just like there is a ‘go’ file that is run as part of the boot sequence, you can also have a ‘stop’ file that is run as part of the shutdown sequence. Would that do what you want?
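     A minimal sketch of what such a ‘stop’ file might look like - the flash drive location matches the standard ‘go’ file convention, but the script contents here are just an example:

        #!/bin/bash
        # /boot/config/stop - run by Unraid as part of the shutdown sequence
        # Example: record the shutdown time on the flash drive
        echo "Server stopped at $(date)" >> /boot/config/shutdown.log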
  20. This is probably a motherboard/BIOS issue where a monitor needs to be attached during the boot sequence. If you cannot change this behaviour via a BIOS setting then an alternative is to buy a dummy HDMI plug so that the system thinks there is a monitor attached.
  21. The entries go to the syslog. You can access that via the ‘log’ icon at the top right of the GUI. It is also one of the files that are included in the zip created via Tools->Diagnostics.
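     From a console or SSH session the same log can also be followed live; this is standard Linux rather than anything Unraid-specific:

        # Watch new syslog entries as they arrive
        tail -f /var/log/syslog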
  22. You mentioned you were MOVING while uploading media to the array, which is confusing. If you think mover is not doing its job then turning on mover logging under Settings->Scheduler might provide some useful information in the syslog on what is happening under the covers. If it is not the mover function you are talking about then more detail is going to be required.
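     With mover logging enabled, a quick way to pull just the mover entries out of the syslog (a plain grep, nothing Unraid-specific) is:

        # Show what mover has been doing
        grep -i mover /var/log/syslog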
  23. A few points are worth raising that might be relevant:
      • Unraid never automatically moves files between array drives - once a file is placed on a drive that is where it stays, which can matter if the file is growing in size. It takes manual action by the user to get it moved to another drive.
      • The check for free space is made when a file is first created and takes no account of the size of that particular file (and thus the recommendation for the minimum free space setting to be larger than the biggest file you expect to be placed on the disk).
      It sounds as if you might need to manually move some files from disk1 to disk2 to get things working better.
  24. If the ‘sdm’ device is your flash disk then it must have dropped offline at some point and then reconnected as a different id, because the screen shot shows that when you booted the boot drive was ‘sda’. As a result Unraid will now no longer be able to access any of the configuration information it is expecting to find on the ‘sda’ device. Unraid cannot handle the boot drive apparently disappearing while it is running.
  25. In the syslog I see:

        Jul 1 04:43:01 Tower crond[1988]: failed parsing crontab for user root: #015
        Jul 1 04:54:04 Tower kernel: nvme nvme0: I/O 785 QID 5 timeout, aborting
        Jul 1 04:54:06 Tower kernel: nvme nvme0: I/O 322 QID 9 timeout, aborting
        Jul 1 04:54:06 Tower kernel: nvme nvme0: I/O 323 QID 9 timeout, aborting
        Jul 1 04:54:06 Tower kernel: nvme nvme0: I/O 425 QID 13 timeout, aborting
        Jul 1 04:54:11 Tower kernel: nvme nvme0: I/O 324 QID 9 timeout, aborting
        Jul 1 04:54:14 Tower kernel: nvme nvme0: I/O 325 QID 9 timeout, aborting
        Jul 1 04:54:14 Tower kernel: nvme nvme0: I/O 326 QID 9 timeout, aborting
        Jul 1 04:54:14 Tower kernel: nvme nvme0: I/O 327 QID 9 timeout, aborting
        Jul 1 04:54:35 Tower kernel: nvme nvme0: I/O 785 QID 5 timeout, reset controller
        Jul 1 04:55:05 Tower kernel: nvme nvme0: I/O 0 QID 0 timeout, reset controller
        Jul 1 04:56:08 Tower kernel: nvme nvme0: Device not ready; aborting reset, CSTS=0x1
        Jul 1 04:56:09 Tower kernel: nvme nvme0: Abort status: 0x371
        ### [PREVIOUS LINE REPEATED 7 TIMES] ###
        Jul 1 04:56:39 Tower kernel: nvme nvme0: Device not ready; aborting reset, CSTS=0x1
        Jul 1 04:56:39 Tower kernel: nvme nvme0: Removing after probe failure status: -19
        Jul 1 04:57:10 Tower kernel: nvme nvme0: Device not ready; aborting reset, CSTS=0x1

      which suggests that the cache drive dropped offline, and after that you are getting continual errors trying to access the Docker image file. There is also that message about an invalid entry in the crontab file for the root user - it might be worth trying to work out what the invalid entry is and thus what might have created it. Not sure if it is a related error or not, as the diagnostics do not show what the invalid entry actually is.
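     One way to hunt for that crontab problem from the console: the ‘#015’ in the log message is octal for a carriage return, which suggests a line with a Windows-style line ending has crept into the root crontab. These are standard Linux commands rather than anything Unraid-specific:

        # List the root crontab showing non-printing characters;
        # a stray carriage return shows up as ^M at the end of a line
        crontab -l | cat -A

        # If one is found, strip carriage returns and reinstall the crontab
        crontab -l | tr -d '\r' | crontab -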