Everything posted by itimpi

  1. The repair log shows major corruption - I suspect most of the data on the drive will either be lost or end up in the lost+found folder, which takes a LOT of effort to sort out. Do you have decent backups?
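     If the repair does leave a lost+found folder, a quick way to gauge the scale of the problem from the console (disk1 here is just an example - use the relevant disk) is something like:
         ls /mnt/disk1/lost+found | wc -l     # how many orphaned files/directories were recovered
         file /mnt/disk1/lost+found/* | head  # guess at the content type of the first few entries
     The recovered entries only have inode numbers for names, which is why sorting them out is so tedious.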
  2. It is not the script that is the problem - it is the information about when it should be run.
  3. Unfortunately the short test is nowhere near a thorough test of the drive. The long test will live up to its name, taking hours per TB. Also, progress only updates at 10% intervals.
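     You can also start the extended test and check on it from the console if you prefer (sdX is just a placeholder for your actual drive):
         smartctl -t long /dev/sdX   # start the extended (long) self-test
         smartctl -a /dev/sdX        # 'Self-test execution status' shows the % remaining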
  4. Two things to try that are easy and quick: clear your browser's cache in case there is something cached causing an issue, and reboot the server in Safe Mode. If that works you almost certainly have a plugin causing problems.
  5. Stop array
     Unassign drive
     Start array to make Unraid 'forget' the assignment
     Stop array
     Assign drive
     Start array to initiate the rebuild
  6. Ok - that shows the invalid entry seems to be a User Script called 'qBittorrent Mover'. The fields at the start of the entry are meant to be space separated, but the spaces seem to be absent. Not sure if this is a custom schedule you created or not? To get back to your original problem: I do not see any obvious reason for you having issues at the moment, but having tidied up these 2 issues it might be worth rebooting to give you a cleaner starting position.
  7. I think you will find it now mounts fine when you restart the array in normal mode.
  8. Thanks - the settings for the plugin now look sensible. I suspect the ones you had were a legacy from quite some time ago and you had not revisited the settings page to update them. The next thing to try and tidy up is what is causing the invalid entry in the syslog of the form:
         Jul 18 17:16:11 Bearcave crond[1173]: failed parsing crontab for user root: 04*** /usr/local/emhttp/plugins/user.scripts/startCustom.php /boot/config/plugins/user.scripts/scripts/qBittorrent Mover/script > /dev/null 2>&1
     To resolve this I would suggest you go to the console level and then post the output of running:
         cat /etc/cron.d/root
     and hopefully identify what has inserted the problem entries. I know this is not solving your current problem, but by getting these little things out of the way hopefully we have a cleaner syslog to help with diagnosing the main problem.
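     For reference, a well-formed entry in /etc/cron.d/root has the five space-separated time fields in front of the command. Assuming that garbled 04*** was meant to be a daily run at 04:00, it would look like:
         0 4 * * * /usr/local/emhttp/plugins/user.scripts/startCustom.php "/boot/config/plugins/user.scripts/scripts/qBittorrent Mover/script" > /dev/null 2>&1
     (the quotes around the script path are my addition, since the space in the script name could also confuse cron).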
  9. It is always partition 1 in current Unraid releases. I think the addition of the p1 to the md devices in the current 6.12 releases is in preparation for allowing other partition numbers, particularly with ZFS where such file systems created on other systems are often on partition 2.
  10. Does not (I think) explain your problem, but it looks like the settings for the Parity Check Tuning plugin are a bit strange. Could you go to the plugin's settings under Settings->Scheduler, make a change and hit Apply to write a new set? If you could then post diagnostics again I can check if they now look more like what I would expect.
  11. At Settings->Scheduler where you set the schedule for running mover.
  12. For what you want you need the pool to be the primary storage (i.e. the location for new files) and the array the secondary storage, and the mover direction to be array->Cache.
  13. No. A data rebuild overwrites every sector, so any format or Preclear done outside the array is irrelevant. If you try to format the drive while it is being emulated then you wipe its contents and update parity to reflect this, so the contents are lost.
  14. Normally one would run this from the GUI, by clicking on the drive on the Main tab, so you do not select the wrong device name. With the 6.12.x releases you now have to include the partition number even with the ‘md’ devices (e.g. /dev/md2p1). The documentation on running via the CLI needs updating to reflect this.
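     If you do run it from the CLI (with the array started in Maintenance mode), the commands for an XFS disk2 would be along these lines - check-only first, then the actual repair:
         xfs_repair -n /dev/md2p1   # dry run: report problems without changing anything
         xfs_repair /dev/md2p1      # run the actual repair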
  15. I must admit that I find the % setting very confusing. Not sure what advantage it gives over having an absolute value.
  16. The standard way at the moment is to use the Appdata backup plugin.
  17. You can always manually downgrade/upgrade to a release without the server running by using the procedure documented here in the online documentation. Have you tried blacklisting the i915 driver when trying to boot the 6.12.x release?
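     If not, my understanding is that a blacklist file placed on the flash drive before booting should do it (delete the file to re-enable the driver):
         echo "blacklist i915" > /boot/config/modprobe.d/i915.conf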
  18. Yes, but does the check filesystem still fail?
  19. By definition if the array is stopped the disk is not mounted.
  20. Strange then. At that point I have no problems changing the file system on my own systems.
  21. Did you stop the array first? You need to select the file system at that point, before any attempt is made to mount it. You cannot change that setting once you have started the array.
  22. That statement is a little out of date, as starting with the 6.12 release you can also use ZFS (although btrfs offers more flexibility). As I said, you cannot have mixed file systems in pools at the moment.
  23. Note that while you have a HDD as the parity drive, the performance of all writes to the array will be limited by it.
  24. Not in the current release, only in the main array. Perhaps you are thinking of btrfs format pools configured to use the Single profile? In a future Unraid release (6.13?) when the current Unraid main array becomes just another pool type, it will then be possible.
  25. If you know what format was used (probably XFS in the array) then you can set it explicitly by clicking on the drive on the Main tab with the array stopped. The auto option relies on Unraid being able to dynamically work out what format was used. Once set explicitly, the check/repair options appropriate to the format you set appear.
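     If you are not sure what format was used, the console can usually tell you (disk1 here is just an example):
         blkid /dev/md1p1   # prints the detected filesystem, e.g. TYPE="xfs"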