
Rorororororororo

Members • Posts: 10

Everything posted by Rorororororororo

  1. I missed that it's a basic config on the standard edit page... I'm not smart sometimes. I was looking for a custom parameter to pass and missed the very obvious config.
  2. I'll take another look. I did see in your release notes that the container also supports 7.4; I was looking for the container parameter to set that but didn't see it. I also checked whether it was a release tag. I'll take another look. Thanks!!
  3. Looks like the PHP dependency changed? Pulling the default release from the Unraid apps:
  4. Got it. Guess it's time to roll the dice. Edit: I actually now see the 6.10.0 note for this plugin via Fix Common Problems. Thanks for the quick reply!
  5. Is there any issue with this plugin and 6.9.2? I saw this error:

     root@Server:~# intel_gpu_top
     intel_gpu_top: /lib64/libc.so.6: version `GLIBC_2.33' not found (required by intel_gpu_top)

     After uninstalling the Intel GPU Top plugin, which was marked as an 'incomplete installation', I couldn't find it again. Is there an updated replacement for 6.9.2?
  6. Bit of the opposite issue with the disk5/sdc1 disk. I could run the repair in dry-run mode, but got the same 'valuable metadata changes' error when trying to run it live, and I couldn't mount the disk as described. I opted for the last-resort option and used -L. After restarting the array, it looks like sdc is alive again; no idea what happened. A bit concerning that whatever this issue was put the array into read-only mode.
  7. Didn't notice the link. I'll blame it on being on mobile or it being too early in the am. Thanks!
  8. I have the screenshots mixed up. I ran without the -n twice: before maintenance mode there was a failure, and no failure after. But this was for a different disk than the one showing as unmounted. I'll try to rerun it on that disk today.
  9. Hey all, looks like I have a similar issue as: --- ---

     Symptoms:
     • I/O error over NFS & SMB
     • "input/output error" when SSH'd into the Unraid server and using the /mnt/user/ shfs shares
     • Going direct to /mnt/disk#/ did allow a write (i.e. touch)

     Notes: there were about 85 errors on one of my array disks, which is starting to fail. I've attached my diag zip.

     I attached some screenshots showing:
     • the failed touch against /mnt/user/
     • the successful touch against /mnt/disk1/
     • the xfs_repair no-changes dry run
     • the xfs_repair for-real run showing the same error seen in the attached threads
     • the xfs_repair for-real run completing without error

     I started the array in maintenance mode in order to get xfs_repair to run without error. I'm restarting the array back into normal mode now, but I wanted to see if anyone could help me through the diag logs to find the file with the best lead on my error. Device sdg is my failing disk, but I didn't think a working-but-failing drive would be able to take the entire array offline. The array came back online without issue, and the touch test passes now (final screenshot).

     Unraid details:
     • 6.9.2
     • 8 disks + 2 parity

     Edit 1: I noticed that one of my disks (not the failing disk) was showing as "Unmountable: not mounted"

     backbone-diagnostics-20220321-2214.zip
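The `GLIBC_2.33' not found` error in post 5 means the plugin binary was built against a newer glibc than the one Unraid 6.9.2 ships. A rough way to confirm the mismatch, as a sketch (the 2.33 comes from the error message; the version parsing is best-effort):

```shell
# Compare the glibc the system provides against the version the binary needs.
need=2.33
have=$(ldd --version | head -n1 | grep -o '[0-9][0-9.]*' | tail -n1)

# sort -V -C exits 0 only if its input is already in version order,
# i.e. need <= have.
if printf '%s\n%s\n' "$need" "$have" | sort -V -C; then
  echo "glibc $have satisfies GLIBC_$need"
else
  echo "glibc $have is older than GLIBC_$need -- the binary will not run"
fi
```

If the check fails, the fix is a plugin build targeting the older glibc (or an OS upgrade), not a reinstall.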
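The escalation described in post 6 (dry run, live run hitting the metadata-log error, then -L as a last resort) can be sketched as a helper that only prints the plan rather than touching a device; `repair_plan` is a hypothetical name and /dev/sdc1 is just the example device from that post:

```shell
# Prints the xfs_repair escalation described above instead of running it.
# Run the real commands only with the array in maintenance mode (filesystem
# unmounted); -L zeroes the metadata log and can lose recent transactions.
repair_plan() {
  local dev="$1"
  echo "xfs_repair -n $dev    # 1. dry run: report problems, change nothing"
  echo "xfs_repair $dev       # 2. live repair; aborts if the log needs replay"
  echo "xfs_repair -L $dev    # 3. last resort: zero the log, then repair"
}

repair_plan /dev/sdc1
```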
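The touch test from post 9 (a write through /mnt/user/, which goes via shfs, versus a write directly to /mnt/disk1/) can be scripted as below; 'share' is a placeholder for a real share name, and on the system in post 9 the first path failed with an input/output error while the second succeeded:

```shell
# Try the same write through the shfs user-share path and the direct disk path.
for p in /mnt/user/share /mnt/disk1/share; do
  if touch "$p/.write-test" 2>/dev/null; then
    echo "write OK:     $p"
    rm -f "$p/.write-test"
  else
    echo "write FAILED: $p"
  fi
done
```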