Everything posted by itimpi

  1. The -L option is frequently required. Since the mount command has already failed, you normally cannot do what the message suggests (mount the file system to replay the log). In practice the -L option rarely seems to cause a problem, as the most it could affect is the last file that was being written.
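     A minimal sketch (the device name is an assumption; on recent Unraid releases disk1 in the array would typically be /dev/md1p1, so substitute whatever the GUI shows for your disk):
         xfs_repair -v /dev/md1p1     # normal repair attempt
         xfs_repair -vL /dev/md1p1    # only if the first run tells you to use -L (zeroes the log)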
  2. Almost certainly. Replacing disks is covered here in the online documentation accessible via the ‘Manual’ link at the bottom of the GUI or the DOCS link at the top of each forum page.
  3. See the last line! You need to remove the -n to get anything fixed, as otherwise it is just a read-only check. If it asks for it, add -L.
  4. You are getting lots of read errors on disk7 in the syslog, and its SMART data shows that it has 1962 reallocated sectors and a FAILING NOW status. I have not checked the SMART data of the other drives to see if any of them might have issues.
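     If you want to pull the same information from a console, something like the following works (the device name /dev/sdg is purely an example; use the one the GUI shows for disk7):
         smartctl -a /dev/sdg     # full SMART report, including the reallocated sector count
         smartctl -H /dev/sdg     # just the overall health assessment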
  5. The values displayed are simply those being reported by queries to the BIOS. If the 64GB will work in your motherboard then you should be OK regardless of what the BIOS reports to Unraid.
  6. Not sure where the 32GB figure is actually coming from, but I do know it results from some sort of query to the BIOS.
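     If you want to see what the firmware itself is reporting, dmidecode can be run from a console (a sketch only; the exact output layout varies by board):
         dmidecode -t memory | grep -i -E 'size|maximum capacity'     # installed module sizes plus the board's reported maximum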
  7. You ran with -n set, which does a read-only check. If you want any fix to be made then you need to remove the -n, and if it asks for -L then add that. After that the repair should be done and the disk should then be mountable.
  8. Any time you make an OS update, BIOS update or hardware change, it is a good idea to assume that the hardware IDs could change so that the contents of the vfio-pci.cfg file are no longer correct.
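     A quick way to re-check the current IDs from a console (a sketch; the bus address in the second line is hypothetical):
         lspci -nn                    # list all PCI devices with their vendor:device IDs and bus addresses
         lspci -nn -s 0000:03:00.0    # look at one specific device you have bound to vfio-pci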
  9. Your ‘isos’ share is set to use pool ‘cache’, which is strange as you have no such pool. The ‘appdata’, ‘system’, and ‘domains’ shares are set so they would use a pool called ‘cache’ if you had one, but since you do not they will stay where they are on the array. The ‘backup’ share is set to only be on the pool, but it also has files on disk1, and the settings for that share are such that mover will not try to move files from array to pool. At this point I would think it is probably easiest to use Dynamix File Manager to get any files off disk1 to the pool. I would suggest that you go through all your shares in turn, check the settings are now what you want, make a nominal change and then hit Apply to reset the settings, and after that post new diagnostics.
  10. I assume that you actually have 64GB installed? The values displayed are those reported by the BIOS, so you should perhaps be looking for a BIOS update.
  11. It looks like quite a lot of your plugins are badly out of date, and some of them I think may not be compatible with the 6.12.x series, so you want to get them all up to date. You should also update to the 6.12.3 Unraid release, as there have been quite a few fixes made since the 6.12.0 release that you appear to be running according to your last diagnostics. I would also suggest installing the Fix Common Problems plugin, as that helps with identifying obvious issues.
  12. The command you used is wrong, as you always need to include the partition number. Ideally you should run the repair from the GUI, as then the correct device id will be used.
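     For illustration only (the device name is a placeholder): when working from the command line the target must be the partition, not the whole device:
         xfs_repair -v /dev/sdX      # wrong: whole device, no partition number
         xfs_repair -v /dev/sdX1     # correct form: includes the partition number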
  13. There does not seem to be an attached file.
  14. You should post your system’s diagnostics zip file so that we can see your settings and setup and give informed feedback.
  15. It might be worth trying to boot your Unraid server off a Linux live distribution to see if the drives show up there. If they do not, that would be a very good indication that the issue is at the hardware level.
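     From the live environment (or an Unraid console) something like this will show whether the kernel sees the drives at all (a sketch; nothing Unraid-specific assumed):
         lsblk -o NAME,SIZE,MODEL,SERIAL     # list the block devices the kernel has detected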
  16. Something must be making the shfs process crash. Have you made sure that there are no RAM-related issues and no over-clocking/XMP profiles on the RAM?
  17. The other possibility is that the drives suffer from the 3.3V pin issue and need that pin taped over.
  18. In the Unraid array any ZFS-formatted drives are single-drive, self-contained ZFS file systems. In the current release there is a technical requirement to have at least 1 drive in the main array, but if you do not want to actually use it to store anything then something like a small flash drive can satisfy this requirement. You could then set up all your drives as pools.
  19. The diagnostics seem to have been taken just after booting. It would be useful to have some taken while you are experiencing the problems you describe.
  20. According to the diagnostics the Samsung SSD dropped offline, as it does not show up in the SMART information part of the diagnostics. It is unclear whether this is a real device issue or something else, so I would check that it is well seated. Regardless, you should run a file system check on it, as it is likely that some level of file system corruption has occurred.
  21. It looks like the repair failed, as xfs_repair crashed with an assertion failure. Even if it HAD worked, it looks like a lot of corruption was detected, so much of the data would have ended up in the lost+found folder with cryptic names. Do you have backups? Not sure you can do much at the Unraid level other than wait to see if a new Unraid release with a later version of xfs_repair works. Another possibility is to see if a disk recovery program such as XFS Explorer on Windows can do anything.
  22. Since you are getting macvlan call traces, you may find that this might help.
  23. Then you should be getting a file created in the ‘logs’ folder on the flash drive.
  24. Have you set either the ‘mirror syslog to flash’ option, or put the Unraid server’s own IP address into the remote syslog server field?
  25. If you want logs that survive a crash/reboot then you need to enable the syslog server.
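     As a sketch (assuming the default ‘logs’ folder on the flash drive, which is mounted at /boot; the exact file name varies with release and settings), the persisted log can then be inspected from a console after a reboot:
         ls /boot/logs/                   # see which syslog file(s) have been written
         tail -n 100 /boot/logs/syslog    # the file name here is an assumption; use whatever ls shows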