
itimpi

Moderators
  • Posts: 20,238
  • Joined
  • Last visited
  • Days Won: 55

Everything posted by itimpi

  1. I've previously requested ata info here. Unfortunately, it is not invariant; it can change as much as the sdX assignments. However, in a small system with few controllers it can be consistent enough to appear invariant. The more controllers you have, the more the order of loading them can differ. I had always thought the ata values were invariant, but if they are not then I would have thought there was even more reason for the plugin to display them automatically.
  2. Correct. As long as the drives were previously used by unRAID then after a New Config they can be assigned again and the data remains intact.
  3. I was wondering whether there was any way to get this plugin to automatically show the ataX type numbers associated with a drive, as many kernel error messages refer to a drive in this way? If not then I guess I can go through adding the information manually, but I thought that it would be better if the plugin derived it automatically (see the sketch below) so that it followed any drive being physically moved in the machine. It could then be displayed as part of the device information, e.g. Device: sdX (ataY). The ata information is useful as it relates directly to the physical port that a drive is plugged into and is invariant across boots (unlike the sdX information, which can change).
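     A minimal sketch of how such a plugin might derive the mapping itself, assuming the drives sit on an ATA controller so that their sysfs path contains an ataN component (SAS or USB devices would simply report no ata port):

        #!/bin/bash
        # For each sdX block device, pull the ataN component out of its sysfs path.
        for dev in /sys/block/sd*; do
            ata=$(readlink -f "$dev" | grep -o 'ata[0-9]\+' | head -n 1)
            echo "$(basename "$dev") (${ata:-no ata port})"
        done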
  4. I would be very surprised if SNAP works correctly on the 6.1.x series of unRAID releases as I do not think it was updated to take account of the changes to the security model. However I have not tried it myself so I could be wrong.
  5. I see no reason why the relevant drivers could not be loaded at the time that the system is started up, in the same way that plugins are initialised at that point. The flash drive remains mounted regardless of whether the array is started or not, so that is not an issue. I run my VMs from a drive that is not under unRAID control, so that is probably all that I personally would need. I would be quite happy to start/stop some VMs via the go/stop files (see the sketch below), and there may be a significant number of users who would be happy with just this level of support for running VMs independently of the array. Having said that, a way to keep the cache drive mounted when the array is stopped would be good. However, doing that would mean that GUI support would have to be added for explicitly mounting/unmounting the cache drive for the few occasions where that is needed. I can also see complications arising in cases where a specific user's VMs have a dependency on array drives being available. There would also have to be general enhancements around starting/stopping VMs via the GUI.
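     A minimal sketch of the go/stop idea, assuming libvirt is already running at that point and a VM named Win10 (a hypothetical name) is defined on a drive outside the array:

        # /boot/config/go (fragment): start the VM as the system comes up
        virsh start Win10

        # /boot/config/stop (fragment): shut it down cleanly before power-off
        virsh shutdown Win10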
  6. Not that I am against this being released as a plugin, but surely if you had it as a docker with /mnt internally mapped to /mnt externally (and possibly a mapping for /boot) you would have full access to all the physical media? Having said that, since it is basically a script it does not have the sort of dependency issues that are the main reason for NOT installing some apps as a plugin, so it does not put system stability at risk.
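     As a hedged illustration, a container started along these lines would see all the physical media (the image name is hypothetical):

        # Map the host's /mnt and /boot straight through into the container
        docker run -d \
          -v /mnt:/mnt \
          -v /boot:/boot \
          some/script-image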
  7. What format are the disks? Recently a warning was added that problems might be encountered with the plugin if Reiserfs is being used for the disks.
  8. Just in case it might help: the limit applies to devices attached to the PC regardless of whether they are being used by unRAID or not. Having said that, if you send off the email as suggested, Limetech are normally good at getting back to you quickly.
  9. The intent is to show the final result, and it stays there as long as you don't leave the page; once you go to another page and revisit, anything completed is no longer shown. I just checked with the latest update and it does seem to be fixed. The behaviour is now exactly what I wanted.
  10. I see this as well! I wonder if it is better to simply remove these from the list of tasks rather than try and work out how to mark them as completed rather than aborted?
  11. I do not think a laptop is going to be any good in this role. KVM does not work with integrated graphics (at the moment at least), and there needs to be a GPU that is available for unRAID purposes (typically the integrated GPU is used for this) even though unRAID itself can be run headless. Also, once you have passed a GPU to a VM then that GPU is not available for either another VM or unRAID until the VM is closed down. You can RDP from one VM to another, but that does not sound like what you are looking for.
  12. No. This is an alternative approach and only provides file integrity checking via checksumming (it has no recovery capability). It also stores the checksum information within the file system metadata, so you do not see separate checksum files on the system.
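     A hedged sketch of that checksum-in-metadata idea, using extended attributes (the path and attribute name here are just examples):

        f=/mnt/disk1/somefile
        # store the hash in an extended attribute rather than a separate file
        setfattr -n user.sha256 -v "$(sha256sum "$f" | cut -d' ' -f1)" "$f"
        # later, verify against a freshly computed hash
        [ "$(getfattr --only-values -n user.sha256 "$f")" = "$(sha256sum "$f" | cut -d' ' -f1)" ] \
            && echo OK || echo MISMATCH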
  13. Actually it does not (at the moment at least), as pre-clear is an add-on and not built-in functionality. Note that it is not necessary for a drive that is replacing a failed drive (or when replacing a drive with a larger one) to be pre-cleared; in such a scenario a pre-clear is just an initial stress-test of the drive. The pre-clear script will not let you pre-clear a drive that is assigned to the array, to avoid the chance of accidentally pre-clearing a drive that has data on it (and thus losing data). In the case of adding a new drive then obviously it is not initially assigned to the array and is thus available to pre-clear. If you have a drive fail and want to pre-clear the replacement before it goes into the array, then I think if you unassign the failed drive and start the array with the drive marked as 'missing' so it is emulated, unRAID may let you pre-clear the new drive without grabbing it. However I have not actually tried this so I am not sure.
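     For reference, a hedged example of how the pre-clear script is typically invoked against an unassigned device (substitute the real device name):

        # runs the read/zero/read stress-test cycle against an UNASSIGNED disk only
        preclear_disk.sh /dev/sdX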
  14. The easiest way is to install the Nerd Tools plugin (Perl was recently added). Also added recently is an option under Settings to select which of the tools in the Nerd Tools plugin should be installed on each boot.
  15. Have you tried mapping /tmp rather than /temp? On Linux /tmp is the traditional location for temporary files.
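     A hedged example of what that mapping might look like if expressed as a docker run command (the image name is hypothetical):

        # map the host's /tmp (the conventional Linux temp location) into the container
        docker run -v /tmp:/tmp some/image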
  16. No. What you need to do at this stage is Tools->New Config and then assign the drives as you want them to finish up (including parity). Make sure you do not accidentally assign a data disk as parity as this would lead to data loss. When you start the array then unRAID will start building parity from the data disks. It is NOT necessary to have pre-cleared the parity disk in this scenario as the process of building parity overwrites what is there anyway.
  17. Any reason why you are not on the current release (6.1.3)? The Open Files plugin can help with troubleshooting this sort of problem. It can also be worth installing the Powerdown plugin, as this is more likely to succeed in shutting down the array cleanly than the standard built-in version.
  18. In the settings for a User share you can specify what disks it is allowed to use.
  19. Once you have used the "Edit XML" option to manually edit the XML you can no longer use the Edit option without losing any custom settings you set via Edit XML. Is that likely to be your problem?
  20. Using a docker has less overhead in both RAM and CPU terms as you are not installing a full OS, merely a mapping layer between the container and the host's Linux. Using a VM may give you more versatility as you have a full OS installed in the VM.
  21. The above results suggest to me that a lot of the time the sleep is not happening correctly, so the subsequent WOL fails. Once that happens, the power switch is forcing a reboot. It seems to me that rather than investigating the WOL you need to look into why sleep followed by the power switch does not wake the system correctly. My suspicion is that if that was always simply waking the system then the WOL would work as well.
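     If you do want to rule WOL itself out, a hedged test from another Linux machine on the LAN (the MAC address is a placeholder for the server's NIC):

        # send a Wake-on-LAN magic packet to the server
        etherwake 00:11:22:33:44:55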
  22. I suspect you need to have a partition defined before that icon does anything, and if you are using a brand new disk there may not be one there yet.
  23. The zero pending-sector count is important: you should not use a drive where that value does not go back to zero after a pre-clear. A zero reallocated-sector count is good but not as important; as long as that value is small and stable (i.e. not increasing) then the drive should be OK to use.
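     A hedged example of checking those two attributes directly (substitute the real device name):

        # show just the pending and reallocated sector counts from the SMART data
        smartctl -A /dev/sdX | grep -E 'Current_Pending_Sector|Reallocated_Sector_Ct'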
  24. I think with XFS you might have lost all your data in your scenario. The xfs_repair tool does not seem to have an option equivalent to reiserfsck --scan-whole-partition that looks for file fragments that can be recovered. I assume this is because there is less redundancy in the file system, so such fragments cannot be properly identified and used in recovery. It simply repairs starting from the superblock, fixing any 'bad' links by chopping off what they might have originally pointed to (thus leading to data loss). The only thing that I have found that it has that reiserfsck does not is the ability to scan the disk looking for backup copies of the superblock if you happen to corrupt the one at the start. Definitely sounds like I will not be switching any time soon; I definitely want recovery over performance. So I think I have officially changed my opinion. Thanks. I switched to XFS and the system seems more stable since the switch. However I do have full offline backups so can always recover from those if I get bad corruption.
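     For reference, a hedged sketch of the xfs_repair workflow discussed above (the device name is illustrative; on an unRAID array disk it would typically be one of the /dev/mdX devices):

        # report-only pass first, to see what would be changed
        xfs_repair -n /dev/md1
        # actual repair; it searches for a secondary superblock if the primary is corrupt
        xfs_repair /dev/md1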
  25. I wondered about that. I had to recover ReiserFS and got back 90% of my files after putting a completely full cache drive in as parity in a parity sync for about 5-10 minutes before I caught it. So in other words the corruption I experienced would have been the same for either ReiserFS or XFS. That is bad news; maybe I won't switch my drives now. I was hoping XFS recoveries would be better than ReiserFS, not worse. What I expected to be worse was btrfs.