Everything posted by itimpi

  1. I think that drive has problems. The Pending Sector count needs to be 0 for the drive to be used successfully with unRAID, and the SMART report for the drive shows 1453 (it has probably gone up since). That is probably why the count shown is incrementing so slowly - the drive is continually retrying reads on those sectors, eventually giving up and marking them as 'pending' to indicate a read failure. I bet that if you look at the syslog it will be filled with read errors for that drive.
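As a quick way to check this yourself, the pending-sector value can be pulled out of smartmontools output. A minimal sketch, assuming the standard `smartctl -A` attribute table layout; the sample line below is illustrative, and in practice you would pipe the real `smartctl -A /dev/sdX` output instead:

```shell
# Illustrative excerpt of one `smartctl -A /dev/sdX` attribute line
# (replace this sample with the actual command output on a live system).
sample='197 Current_Pending_Sector  0x0032   100   100   000    Old_age   Always       -       1453'

# Field 10 of the attribute table holds the raw value.
pending=$(printf '%s\n' "$sample" | awk '/Current_Pending_Sector/ {print $10}')

if [ "$pending" -ne 0 ]; then
    echo "WARNING: $pending pending sectors - investigate before trusting this drive"
fi
```

A drive that is healthy enough for the array should report a raw value of 0 here.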
  2. In theory you can start again with the post-read, but that leaves open the question of why it failed in the first place! Things to do before trying again: run the Tools->Diagnostics option in case it has any information relating to why the preclear failed (you could post the ZIP file here if you want anyone else to look at it); see if you can obtain a SMART report for the drive (there should be a copy in the diagnostics file, but requesting one for just this drive will show whether it has dropped offline, and the report might also indicate if the drive is having problems); look in the preclear reports to see if there is anything useful about the earlier stages; and check the cabling to the drive.
  3. The process seems to be CPU bound so you probably do not want to run more parallel tasks than you have cores in your machine. Certainly I see that as soon as I increase the number of parallel tasks above the number of cores I have the ETA for all the tasks starts getting longer.
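To avoid oversubscribing a CPU-bound workload like this, the task count can simply be capped at the core count. A minimal sketch (the `requested` value is a made-up example):

```shell
# Number of logical cores reported by the system (coreutils nproc).
cores=$(nproc)

# Hypothetical number of parallel preclear tasks the user asked for.
requested=8

# Run no more parallel tasks than there are cores.
tasks=$(( requested < cores ? requested : cores ))
echo "running $tasks parallel tasks on $cores cores"
```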
  4. I've previously requested ata info here. Unfortunately, it's not invariant; it can change as much as the sdX assignments. However, in a small system with few controllers it can be consistent enough to appear invariant. The more controllers you have, the more the order of loading them can differ. I had always thought the ata values were invariant, but if they are not then I would have thought there was even more reason for the plugin to display them automatically.
  5. Correct. As long as the drives were previously used by unRAID then after a New Config they can be assigned again and the data remains intact.
  6. I was wondering whether there was any way to get this plugin to automatically show the ataX type numbers associated with a drive, as many kernel error messages refer to a drive in this way? If not, then I guess I can go through adding the information manually, but I thought it would be better if the plugin derived it automatically so that it followed any drive being physically moved in the machine. It could then be displayed as part of the device information, e.g. Device: sdX (ataY). The ataX information is useful as it relates directly to the physical port that a drive is plugged into and is invariant across boots (unlike the sdX information, which can change).
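For reference, the ataX port for a given sdX device can be read out of sysfs, since the resolved `/sys/block/<dev>` symlink embeds the controller path. A minimal sketch, assuming the usual kernel path layout; the sample path below is illustrative, and on a live system you would obtain it with `readlink -f /sys/block/sdb`:

```shell
# Illustrative resolved /sys/block symlink target; on a live system use:
#   link=$(readlink -f /sys/block/sdb)
link='/sys/devices/pci0000:00/0000:00:1f.2/ata3/host2/target2:0:0/2:0:0:0/block/sdb'

# Pull the first ataN token out of the path.
ata=$(printf '%s\n' "$link" | grep -o 'ata[0-9][0-9]*' | head -n1)
echo "sdb -> $ata"
```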
  7. I would be very surprised if SNAP works correctly on the 6.1.x series of unRAID releases as I do not think it was updated to take account of the changes to the security model. However I have not tried it myself so I could be wrong.
  8. I see no reason why the relevant drivers could not be loaded at the time the system starts up, in the same way that plugins are initialised at that point. The flash drive remains mounted regardless of whether the array is started or not, so that is not an issue. I run my VMs from a drive that is not under unRAID control, so that is probably all that I personally would need, and I would be quite happy to start/stop some VMs via the go/stop files. There may be a significant number of users who would be happy with just this level of support for running VMs independently of the array. Having said that, a way to keep the cache drive mounted when the array is stopped would be good. However, doing that would mean that GUI support would have to be added for explicitly mounting/unmounting the cache drive for the few occasions where that is needed. I can also see complications arising in cases where a specific user's VMs have a dependency on array drives being available. There would also have to be general enhancements around starting/stopping VMs via the GUI.
  9. Not that I am against this being released as a plugin, but surely if you had it as a docker with /mnt internally mapped to /mnt externally (and possibly a mapping for /boot) you would have full access to all the physical media? Having said that since it is basically a script it does not have the sort of dependency issues that are the main reason for NOT installing some apps as a plugin. It therefore does not put system stability at risk.
  10. What format are the disks? Recently a warning was added that problems might be encountered with the plugin if Reiserfs is being used for the disks.
  11. Just in case it helps: the limit applies to devices attached to the PC regardless of whether they are being used by unRAID or not. Having said that, if you send off the email as suggested, Limetech are normally good at getting back to you quickly.
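If you want to see how many block devices the system currently counts as attached, `lsblk` gives a quick tally. A minimal sketch (util-linux `lsblk`; the count covers every attached disk, whether unRAID uses it or not):

```shell
# List whole disks only (-d), no header (-n), one name per line,
# then count them; the count is 0 if lsblk finds nothing or is unavailable.
count=$(lsblk -dn -o NAME 2>/dev/null | wc -l)
echo "attached block devices: $count"
```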
  12. I see this as well! I wonder if it is better to simply remove these from the list of tasks rather than try to work out how to mark them as completed rather than aborted? The intent is to show the final result, and it stays there as long as you don't leave the page. Once you go to another page and revisit, anything completed isn't shown anymore. I just checked with the latest update and it does seem to be fixed. The behaviour is now exactly what I wanted.
  13. I see this as well! I wonder if it is better to simply remove these from the list of tasks rather than try and work out how to mark them as completed rather than aborted?
  14. I do not think a laptop is going to be any good in this role. KVM does not work with integrated graphics (at the moment at least), and there needs to be a GPU that is available for unRAID purposes (typically the integrated GPU is used for this) even though unRAID itself can be run headless. Also, once you have passed a GPU to a VM, that GPU is not available for either another VM or unRAID until the VM is closed down. You can RDP from one VM to another, but that does not sound like what you are looking for.
  15. No. This is an alternative approach and only provides file integrity checking via checksumming (it has no recovery capability). It also stores the checksum information within the file system metadata, so you do not see separate checksum files on the system.
  16. Actually it does not (at the moment at least), as pre-clear is an add-on and not built-in functionality. Note that it is not necessary for a drive that is replacing a failed drive (or when replacing a drive with a larger one) to be pre-cleared. In such a scenario a pre-clear is just an initial stress-test of the drive. The pre-clear will not let you preclear a drive that is assigned to the array, to avoid the chance of accidentally pre-clearing a drive that has data on it (and thus losing data). In the case of adding a new drive, obviously it is not initially assigned to the array and is thus available to pre-clear. If you have a drive fail and want to pre-clear the replacement before putting it in the array, then I think if you unassign the failed drive and start the array with the drive marked as 'missing' so it is emulated, unRAID may let you pre-clear the new drive without grabbing it. However, I have not actually tried this so I am not sure.
  17. The easiest way is to install the Nerd Tools plugin (Perl was recently added). Also added recently is an option under Settings to select which of the tools in the Nerd Tools plugin should be installed on each boot.
  18. Have you tried mapping /tmp rather than /temp? On Linux /tmp is the traditional location for temporary files.
  19. No. What you need to do at this stage is Tools->New Config and then assign the drives as you want them to finish up (including parity). Make sure you do not accidentally assign a data disk as parity as this would lead to data loss. When you start the array then unRAID will start building parity from the data disks. It is NOT necessary to have pre-cleared the parity disk in this scenario as the process of building parity overwrites what is there anyway.
  20. Any reason why you are not on the current release (6.1.3)? The Open Files plugin can help with troubleshooting this sort of problem. Also, it can be worth installing the Powerdown plugin, as this is more likely to succeed in shutting down the array than the standard built-in version.
  21. In the settings for a User share you can specify what disks it is allowed to use.
  22. Once you have used the "Edit XML" option to manually edit the XML you can no longer use the Edit option without losing any custom settings you set via Edit XML. Is that likely to be your problem?
  23. Using a docker has less overhead in both RAM and CPU terms as you are not installing a full OS, but merely a mapping layer between the docker and Linux. Using a VM may give you some more versatility as you have a full OS installed in the VM.
  24. The above results suggest to me that a lot of the time the sleep is not happening correctly, so the subsequent WOL fails. Once that happens, the power button forces a reboot. It seems to me that rather than investigating the WOL, you need to look into why sleep followed by the power button does not wake the system correctly. My suspicion is that if that always simply woke the system, then the WOL would work as well.
  25. I suspect you need to have a partition defined before that icon does anything, and if you are using a brand new disk there may not be one yet.