
itimpi · Moderators · 20,214 posts · 55 days won

Everything posted by itimpi

  1. Actually it does not (at the moment at least), as pre-clear is an add-on and not built-in functionality. Note that it is not necessary to pre-clear a drive that is replacing a failed drive (or replacing a drive with a larger one); in such a scenario a pre-clear is just an initial stress test of the drive. The pre-clear script will not let you pre-clear a drive that is assigned to the array, to avoid the chance of accidentally pre-clearing a drive that has data on it (and thus losing data). When adding a new drive, it is obviously not initially assigned to the array and is therefore available to pre-clear. If you have a drive fail and want to pre-clear the replacement before assigning it to the array, then I think that if you unassign the failed drive and start the array with that drive marked as 'missing' (so it is emulated), unRAID may let you pre-clear the new drive without grabbing it. However, I have not actually tried this, so I am not sure.
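A minimal sketch of the kind of assignment guard described above, assuming a hypothetical text file listing assigned drive identifiers (the real pre-clear script inspects unRAID's own state; the serial number and file path shown are invented):

```shell
# Hypothetical guard before pre-clearing: refuse any drive whose identifier
# appears in a list of assigned drives. The list file is illustrative only,
# one identifier per line - the real pre-clear script checks unRAID's own
# state rather than a file like this.
is_assigned() {
    local dev_id="$1" list="$2"
    grep -qx "$dev_id" "$list"
}

# Usage sketch (serial number is invented; preclear_disk.sh is the commonly
# used pre-clear script name, if installed):
# if is_assigned "WD-WCC4N1234567" /tmp/assigned.txt; then
#     echo "Refusing to pre-clear an assigned drive" >&2
# else
#     preclear_disk.sh /dev/sdX
# fi
```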
  2. The easiest way is to install the Nerd Tools plugin (Perl was recently added). Also added recently is an option under Settings to select which of the tools in the Nerd Tools plugin should be installed on each boot.
  3. Have you tried mapping /tmp rather than /temp? On Linux /tmp is the traditional location for temporary files.
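If it is the container mapping that needs adjusting, it might be sketched like this (host path, container name, and image are all placeholders):

```shell
# Host-side scratch area, created up front so the bind mount has a target
# (the path is a placeholder):
host_tmp=/tmp/mycontainer
mkdir -p "$host_tmp"

# Map it onto the Linux-standard /tmp inside the container
# (container name and image are placeholders):
# docker run -d --name mycontainer -v "$host_tmp":/tmp some/image
```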
  4. No. What you need to do at this stage is Tools->New Config and then assign the drives as you want them to finish up (including parity). Make sure you do not accidentally assign a data disk as parity as this would lead to data loss. When you start the array then unRAID will start building parity from the data disks. It is NOT necessary to have pre-cleared the parity disk in this scenario as the process of building parity overwrites what is there anyway.
  5. Any reason why you are not on the current release (6.1.3)? The Open Files plugin can help with troubleshooting this sort of problem. Also, it can be worth installing the Powerdown plugin, as it is more likely than the standard built-in version to shut down the array cleanly.
  6. In the settings for a User share you can specify what disks it is allowed to use.
  7. Once you have used the "Edit XML" option to manually edit the XML you can no longer use the Edit option without losing any custom settings you set via Edit XML. Is that likely to be your problem?
  8. Using a Docker container has less overhead in both RAM and CPU terms, as you are not installing a full OS but merely a mapping layer between the container and Linux. Using a VM may give you more versatility, as you have a full OS installed in the VM.
  9. The above results suggest to me that a lot of the time the sleep is not happening correctly, so the subsequent WOL fails. Once that happens, pressing the power button forces a reboot. It seems to me that rather than investigating WOL you need to look into why the system does not wake correctly from sleep via the power button. My suspicion is that if that always simply woke the system, then WOL would work as well.
  10. I suspect you need to have a partition defined before that icon does anything, and if you are using a brand-new disk there may not be one there yet.
  11. The zero pending count is important: you should not use a drive where that value does not go back to zero after a pre-clear. A zero reallocated count is good but not as important. As long as the value is small and stable (i.e. not increasing), the drive should be OK to use.
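Those two counts can be read with smartmontools; the filter below is plain awk over `smartctl -A` output, and the device path is only an example:

```shell
# Print raw values for the two attributes discussed above from a
# `smartctl -A` report fed on stdin. In that report, field 2 is the
# attribute name and field 10 is RAW_VALUE.
smart_counts() {
    awk '$2 == "Current_Pending_Sector" || $2 == "Reallocated_Sector_Ct" {print $2 "=" $10}'
}

# Usage (device path is an example):
# smartctl -A /dev/sdb | smart_counts
```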
  12. I think with XFS you might have lost all your data in your scenario. The xfs_repair tool does not seem to have an option equivalent to reiserfsck --scan-whole-partition that looks for file fragments that can be recovered. I assume this is because there is less redundancy in the file system, so such fragments cannot be reliably identified and used in recovery. It simply repairs starting from the superblock, fixing any 'bad' links by chopping off whatever they might originally have pointed to (thus leading to data loss). The only thing I have found that it has and reiserfsck does not is the ability to scan the disk looking for backup copies of the superblock if you happen to corrupt the one at the start. Definitely sounds like I will not be switching any time soon; I definitely want recovery over performance. So I think I have officially changed my opinion. Thanks. I switched to XFS and the system seems more stable since the switch. However, I do have full offline backups, so I can always recover from those if I get bad corruption.
  13. I wondered about that. I had to recover ReiserFS and got back 90% of my files after a completely full cache drive was put in as parity and a parity sync ran for about 5-10 minutes before I caught it. So in other words the corruption I experienced would have been the same for either ReiserFS or XFS. That is bad news; maybe I won't switch my drives now. I was hoping XFS recoveries would be better than ReiserFS, not worse. What I expected to be worse was btrfs. (Quoting the previous post:) I think with XFS you might have lost all your data in your scenario. The xfs_repair tool does not seem to have an option equivalent to reiserfsck --scan-whole-partition that looks for file fragments that can be recovered. I assume this is because there is less redundancy in the file system, so such fragments cannot be reliably identified and used in recovery. It simply repairs starting from the superblock, fixing any 'bad' links by chopping off whatever they might originally have pointed to (thus leading to data loss). The only thing I have found that it has and reiserfsck does not is the ability to scan the disk looking for backup copies of the superblock if you happen to corrupt the one at the start.
  14. One thing I can confirm from my own experience is that if you get severe file system level corruption and have to run the appropriate recovery tool (reiserfsck/xfs_repair), you are much more likely to experience significant data loss using XFS, as reiserfsck does an amazing job of recovery. Against that, I expect XFS may be less likely to experience such corruption in the first place.
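On unRAID, such repairs are normally aimed at the md device (e.g. /dev/md1) rather than the raw disk, so that parity stays in sync as fixes are written. A hedged sketch of a guard for that, with the repair commands themselves left commented:

```shell
# Guard so repair commands are only aimed at unRAID md devices (e.g.
# /dev/md1); writing through the md layer keeps parity updated, which a
# repair on the raw /dev/sdX device would not. The check is pure string
# logic and does not touch any device.
is_md_device() {
    case "$1" in
        /dev/md[0-9]*) return 0 ;;
        *) return 1 ;;
    esac
}

# Example invocations (left commented; substitute your own array device):
# is_md_device /dev/md1 && xfs_repair -n /dev/md1   # -n: report-only dry run
# is_md_device /dev/md1 && reiserfsck --rebuild-tree --scan-whole-partition /dev/md1
```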
  15. The only way that might work is to make sure the array is stopped before the sleep is invoked and restarted on waking.
  16. This might mean that there is file system corruption on the flash drive so that updates are not getting saved. Worth putting it into a Windows system to see if it detects corruption (and then fixes it).
  17. Lots of the updates have been done that way. The remove/reinstall route has been required a couple of times, as changes to the internal workings for new features and to how/where .RecycleBin folders are used have meant it was not easy to migrate to a new release. I think things are now probably stable enough that we will not see too many of the remove/reinstall type updates.
  18. No, not at the moment. I put the idea of creating a plugin to run this on my list of 'nice-to-try-creating' items a few weeks ago. However, demands on my time have meant I have not got any further, but it is still something I would like to try my hand at if time becomes available.
  19. It looks like a small modification is required for 6.1 compatibility. Currently the script is looking for 'notify' under /usr/local/sbin. In the 6.1 release this is now at /usr/local/emhttp/scripts. Sounds like an upfront check is needed for the location of this command to handle running on different unRAID versions?
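Such an upfront check might look like the following sketch (the optional prefix argument exists purely so the lookup can be exercised outside unRAID):

```shell
# Probe both known locations for the notify script: the 6.1 path first,
# then the pre-6.1 path. The optional prefix argument is only there so
# the lookup can be tested against a scratch directory tree.
notify_path() {
    local base="${1:-}"
    local p
    for p in "$base/usr/local/emhttp/scripts/notify" "$base/usr/local/sbin/notify"; do
        if [ -x "$p" ]; then
            echo "$p"
            return 0
        fi
    done
    return 1
}

# Usage sketch:
# NOTIFY=$(notify_path) || { echo "notify script not found" >&2; exit 1; }
# "$NOTIFY" ...    # arguments per the script's own usage text
```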
  20. Have you started the array? If not that might explain what you see.
  21. I notice that if you install v5 via the download from the LimeTech site then that download has a 'syslinux' folder inside the ZIP. That makes me suspect that many of those who are ending up without a syslinux folder have been upgrading manually (by copying bzimage and bzroot) from earlier releases of unRAID. Maybe the upgrade plugin needs to spot this and take appropriate action.
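A check along those lines could be as simple as the sketch below (the path is parameterised purely so the check can be tried outside unRAID; on a live system it would default to /boot, the mounted flash drive):

```shell
# Does the flash drive have a syslinux folder? Missing is a hint of a
# manual bzimage/bzroot upgrade from an older release.
has_syslinux() {
    [ -d "${1:-/boot}/syslinux" ]
}

# Usage sketch for an upgrade script:
# has_syslinux || echo "no syslinux folder - manual upgrade from an older release suspected"
```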
  22. To change the file system on a drive:
        • Stop the array.
        • Click on the drive in the Main tab and change the format to the one you want.
        • Start the array. It will now say that an unformatted disk is present.
        • Select the option to format unformatted disks. Before doing so, check that the serial number displayed is what you expect.
      Make sure that the disk is REALLY empty (or at least only contains data you are happy to lose) before doing the above.
  23. It's not an anomaly. The only shares managed at Settings->RecycleBin are the user shares at /mnt/user/. Any shares not mounted at /mnt/user have to be managed manually. Click the 'Help' button for more information on Settings->RecycleBin. I would like to include the disk shares, but I'm having a few issues with that. OK. Since Samba put them into the .RecycleBin folder I thought the hard work had been done and it would probably just be a case of the plugin cycling through the disks as well, looking for .RecycleBin folders when working out what to trash. Obviously it is not as easy as it first appears.
  24. Just noticed an anomaly! If you have disk shares enabled and go in via a disk share to delete files, then you end up with a .RecycleBin folder at the top level of the disk containing all the files. This seems reasonable, as that is the level of share I was using. However, if you now go to Settings->RecycleBin it claims the recycle bin is empty, and using the option to empty the trash does nothing. It looks as if .RecycleBin folders at the disk level are not being seen. I believe you need to check for .RecycleBin folders at both the disk and share level to rectify this.
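For what it is worth, a sketch of scanning both levels, assuming the .RecycleBin folder name used in the posts above (the root parameter exists only for illustration and testing; on unRAID it would be /mnt):

```shell
# Walk both the user-share level (/mnt/user/<share>) and the disk level
# (/mnt/disk1, /mnt/disk2, ...) looking for recycled files. The folder
# name .RecycleBin follows the posts above; ROOT is parameterised purely
# so the logic can be tested against a scratch tree.
list_recycled() {
    local root="${1:-/mnt}"
    local dir
    for dir in "$root"/user/*/.RecycleBin "$root"/disk*/.RecycleBin; do
        [ -d "$dir" ] && find "$dir" -type f
    done
    return 0
}

# Emptying the bins would then just delete whatever the scan finds:
# list_recycled | xargs -r rm -f
```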