Everything posted by itimpi

  1. Thanks - exactly what I needed. It shows an error in some code that is triggered if you manually pause a running manual check, and I can see a typo on the line indicated. Ironically it is just attempting to add a message to the log saying it had detected that manual pause. I now know how to recreate the issue so I can test out the fix, and I will issue a fixed version of the plugin either later today or some time tomorrow. I am also going to see if I can incorporate a request to only issue a pause on manual checks, and start running in increments, if either a manual pause has been issued or the check has reached the time that the increment is scheduled for. Thinking about it, this is unlikely to break things for anyone else. Funnily enough, the actual line that caused the problem above is where I would have to make a change relating to this anyway.
  2. That is true (and you also get performance benefits for other pool types) by bypassing the FUSE overheads of supporting User Shares. However, if by any chance you have a top-level folder with the same name as the share on any other pool or array drive, then Exclusive mode will not be available.
  3. Enabling this will not affect data. As was mentioned, it first needs to be enabled at the global level before you can activate it for individual shares. Even then it will only work if the share really DOES only have files on the pool.
  4. That is useful, but does not quite pin things down. There may be an entry in the PHP log that gives a specific line number in the source, so any chance of checking for that?
  5. Cannot see any obvious reason why the files are only going to the first drive - maybe somebody else will spot something. Just as a check, are you sure you are copying the files to the User Share and not explicitly to disk1? On a completely different issue, I can see entries in your syslog of the form:

     /usr/local/emhttp/plugins/parity.check.tuning/parity.check.tuning.php "monitor" &>/dev/null

     Is there any chance you could do the following for me:

     1) Enable the Testing mode of logging in the settings for the Parity Check Tuning plugin.
     2) Go to Tools->PHP Settings and set the Error Reporting Level to All.
     3) Let the system run for a while until the plugin's Monitor task runs again.
     4) Let me have new diagnostics at that point, and also any entries in the PHP logs that reference the plugin.
     5) Set the items you changed in 1) and 2) back to their original settings to avoid excessive logging.

     Hopefully this will give me the information to stop those log messages appearing, as they indicate some sort of issue with that plugin that I have not managed to reproduce on my test system. I also notice that Fix Common Problems is reporting some issues with the scheduling options for the CA Update plugin - not sure what, but probably worth looking at.
  6. You are likely to get better informed feedback if you attach your system’s diagnostics zip file to your next post in this thread.
  7. Parity drives have no file system, and also do not need pre-clearing (except perhaps as a stress test) as building parity overwrites every sector on the drive.
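Since single parity is just a bitwise XOR of the corresponding sectors on every data drive, building it necessarily overwrites every parity sector, which is why pre-clearing adds nothing. A minimal sketch of the arithmetic, using made-up byte values for illustration:

```shell
# Single parity is a bitwise XOR across the data drives.
# Three example data bytes (invented values) at the same sector offset:
d1=$((0xA5)); d2=$((0x3C)); d3=$((0x0F))

# Building parity overwrites the parity sector with the XOR of all data:
parity=$(( d1 ^ d2 ^ d3 ))
printf 'parity byte: 0x%02X\n' "$parity"               # 0x96

# Rebuilding a failed drive (say d2) XORs parity with the survivors:
printf 'rebuilt d2:  0x%02X\n' $(( parity ^ d1 ^ d3 )) # 0x3C
```

The same XOR that builds parity also reconstructs any single missing drive, which is why a rebuild can recreate the whole disk.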
  8. The repair capability only applies when the drives are in a redundant ZFS pool. When used in the array, each ZFS drive is a single-drive file system that can detect but not repair corruption. In this respect it is similar to BTRFS drives in the main array.
  9. Can you boot in non-GUI mode to get a console login prompt? If so you can use the ‘diagnostics’ command from the command line.
  10. It was not removed when I did the upgrade. I think that the release note entry:

      webgui: DeviceInfo: added automatic floor calculation

      simply means that this plugin will no longer be required, as changing the value to be a sensible default rather than 0 is now built in, so you can remove the plugin if you have it installed.
  11. The User Scripts plugin allows you to run scripts at array stop/start/first start, so you can easily make a custom script. If you want to send a notification you can use the /usr/local/emhttp/webGui/scripts/notify command. If you want to simply write to the syslog then you can call the 'logger' command. Run either of these from a command line with --help to get the options supported.
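As a sketch, a User Scripts entry for the array-start trigger might look like the following. The notify path is the one quoted above, but its flags and the 'userscript' tag are my assumptions for illustration - check the command's own help output on your server for the real options:

```shell
#!/bin/bash
# Hypothetical User Scripts entry run at array start.
MSG="Array started on $(hostname) at $(date '+%Y-%m-%d %H:%M')"

# GUI/agent notification (flags are an assumption - run the notify
# script on your server to list its actual options):
# /usr/local/emhttp/webGui/scripts/notify -s "Array event" -d "$MSG"

# Syslog entry via the standard 'logger' command, tagged so it is
# easy to grep for later:
logger -t userscript "$MSG" 2>/dev/null || true

echo "recorded: $MSG"
```

The `|| true` just keeps the script from aborting if the syslog socket is unavailable for some reason.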
  12. I am not sure that would get you to the type of ZFS pool you want to end up with. ZFS is far more restrictive than btrfs in the options for adding extra drives after the initial setup of a ZFS pool.
  13. The SMART information for the drive looks fine. If you happen to have a spare drive I would recommend rebuilding to that as it keeps the disabled drive intact if for any reason the rebuild fails. Failing that rebuilding the drive to itself is the way to go.
  14. Not sure if you are talking about a crash followed by it restarting, or if it seems to do a tidy shutdown followed by a reboot? Are you getting an unclean shutdown message after restarting? If it is a crash, I would think the most likely culprit is power, as that would be when the system is normally driving the PSU the hardest. You could try setting up the syslog server to get a log that survives a reboot to see if that gives any clue. Also, the full diagnostics zip file rather than just the syslog file you posted is better.
  15. Have you checked that you do not have a top level folder called 'disks' on either a cache pool or on any array drive? The FCP messages imply it might be on the cache.
  16. The check/repair process refers to unmountable drives - not disabled ones. Since the drive is disabled, is it successfully being emulated by Unraid (i.e. does it appear to mount and do its contents look intact) when the array is running in Normal mode? It might be a good idea to provide your system's diagnostics zip file so we can see what is going on and give more informed feedback. The standard way to clear a disabled state is to rebuild the disk. If you think the disabled disk is OK then you can use the process to rebuild a drive onto itself, but if not sure, ask for advice.
  17. No, but likely to be a possibility in a future release.
  18. I do not know if it does, but if you cannot tell us what command you actually used and what sort of output it produced then anything we say is only going to be a guess. I am not sure how xfs_repair interacts with encrypted drives as I have never personally bothered to use xfs encrypted drives. You are likely to get better informed feedback if you attach your system’s diagnostics zip file to your next post in this thread.
  19. I would recommend leaving things alone if they are working well. ZFS is probably best for those who need additional speed from pools or are already experienced in using ZFS.
  20. If you did not use the right device name then I think you may have corrupted the drive contents, particularly if it was an encrypted drive. If you had asked for help earlier and provided diagnostics so we could see the situation, we could have told you how to get it to show up in the GUI, or what the correct command was for doing it from the command line.
  21. That syslog is full of errors on device ata4 (without full diagnostics not sure what device that is) that look like cabling issues, and also read errors on disk1.
  22. Did you run the xfs_repair command from the GUI (recommended and safest) or from the command line? If from the command line, what was the exact command you used, as getting it wrong can potentially cause damage.
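For reference, a cautious command-line workflow might look like the sketch below. The device name /dev/md1p1 is only an example for disk1 on recent Unraid releases - confirm the correct one on your own system before running anything, and note that pointing xfs_repair at the raw /dev/sdX device of an array drive bypasses the md driver and invalidates parity. The `run` helper and DRY_RUN guard are my own additions so the commands can be reviewed before anything executes:

```shell
#!/bin/bash
# Cautious xfs_repair sketch - nothing executes until DRY_RUN=0.
DEV="/dev/md1p1"   # EXAMPLE device for array disk1; verify yours first
DRY_RUN=1

run() {
  # Print the command while DRY_RUN=1; execute it only when DRY_RUN=0.
  if [ "$DRY_RUN" = "1" ]; then echo "would run: $*"; else "$@"; fi
}

# Check-only pass first: -n reports problems without modifying the disk.
run xfs_repair -n "$DEV"

# Only after reviewing the -n output, set DRY_RUN=0 and repair for real:
# run xfs_repair "$DEV"
```

Running the check-only pass first mirrors what the GUI does, and the printed commands give you something concrete to post here if you are unsure.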
  23. That file gets created when you try and alter any network setting. Until that happens the system simply assumes default settings.
  24. After the New Config then the Include/Exclude settings for the shares remain unchanged. It is up to you to adjust them to match the new disk numbers. Note that this setting only applies to where new files are placed - any files belonging to a share are found for read purposes regardless of which array drive or pool they are located on.
  25. All drives in the Unraid main array are self-contained file systems that can be read by any Linux system. If using Windows then you will need additional software that supports the required file system type (probably ReiserFS if it really is such an old Unraid system).