itimpi

Moderators
Everything posted by itimpi

  1. You always have the option of using the Parity Check Tuning plugin instead to achieve this. As well as working on all Unraid releases, it provides significantly more functionality.
  2. Check under Settings->Unassigned Devices that you actually have it set to make SMB shares from UD visible on the network.
  3. Are you using locking SATA cables? I believe that with WD drives you are normally better off using non-locking cables.
  4. Parity does not stop a file system getting corrupted. Parity has no knowledge of files or file systems and will simply mirror any file system corruption that does occur. BTRFS is very good when it is working well, but it is more prone to corruption in the event of hardware errors. XFS is more forgiving, and you are more likely to be able to repair file system corruption if it occurs.
  5. That site had the first field as seconds, which in my experience is non-standard, as the first field is normally minutes. A better site to use for reference is Wikipedia.
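For reference, the standard crontab layout has no seconds field and puts minutes first; the script path below is just a placeholder, not anything from your system:

```
# minute (0-59)  hour (0-23)  day-of-month (1-31)  month (1-12)  day-of-week (0-6, Sunday=0)
30 3 * * 1   /boot/myscript.sh    # 03:30 every Monday (placeholder script)
```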
  6. No obvious reason in the diagnostics except for the fact that you suddenly started getting read errors on disk4. At first glance SMART data for the drive looks OK. I would suggest you need to run the extended SMART test on the drive (which will take many hours). It is not unusual for a drive to pass the short test but fail the longer one.
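As a rough sketch of how that is done from the command line, assuming smartmontools is available and with /dev/sdX standing in for whatever device disk4 actually is (check the device name on the Main tab first):

```
smartctl -t long /dev/sdX        # start the extended (long) self-test; it runs in the drive itself
smartctl -c /dev/sdX             # shows the estimated time the extended test will take
smartctl -l selftest /dev/sdX    # view the self-test log once it has completed
```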
  7. According to your diagnostics the ‘appdata’ and ‘system’ shares have files on the cache; the others look like they exist only on the array. You have Use Cache=“No” set for these shares, which means that new files will not get sent to the cache, but any files already there will be ignored by mover. If you want these files to be moved to the array:
       • disable the docker and VM services under Settings. This is required because these services hold files open, which means they cannot be moved.
       • change the Use Cache setting for the ‘appdata’ and ‘system’ shares to Yes (hint: it is worth reading the help built into the GUI to understand why).
       • manually run mover to get the files transferred to the array.
       • when mover finishes, change the Use Cache setting for these shares back to No so new files only get written to the array.
       • re-enable the docker and VM services under Settings.
     Note that most people DO want these shares on a pool (cache) for performance reasons, as having them on the array significantly slows down writes to them and also keeps array drives spinning. They then either use plugins to periodically back up to the array or make the pool/cache multi-drive so it has built-in redundancy.
  8. No idea on the scheduling, but I notice that you have it set to be correcting. We normally recommend that scheduled checks are set to be non-correcting so that a misbehaving drive does not inadvertently corrupt parity. Then if you do get errors reported you can try and work out why, and only when you are sure all hardware is behaving itself manually trigger a correcting check.
  9. Sounds as if you are encountering the behavior described here in the online documentation accessible via the ‘Manual’ link at the bottom of the GUI. You need to use a copy/delete process to make sure the files get moved instead of ‘mv’ if you use user share paths. The alternative is to use the ‘mv’ command but specify the physical drives rather than the user share.
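As a sketch of that copy/delete approach (the share and file names here are hypothetical, not taken from your system):

```shell
#!/bin/sh
# Copy first, and delete the source only if the copy succeeded.
# This avoids 'mv' doing an in-place rename when source and destination
# resolve to the same underlying disk via the user share (/mnt/user) layer.
safe_move() {
    cp -a "$1" "$2" && rm -f "$1"
}

# Hypothetical usage on user share paths:
#   safe_move /mnt/user/Share1/file.bin /mnt/user/Share2/file.bin
# Alternative: plain 'mv' is fine if you address the physical disks directly:
#   mv /mnt/disk1/Share1/file.bin /mnt/disk2/Share2/file.bin
```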
  10. There are too many fields - there should be 5. I suspect that one of the * at the start is not what you want. Also not sure if ? is a valid option - I suspect you want * there instead. Note also that Friday is day 5, not day 6. I have never tried the #1 qualifier so no idea if it works.
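So a corrected 5-field entry for 2 am every Friday would look something like this (the command path is a placeholder):

```
# minute  hour  day-of-month  month  day-of-week (0-6, Sunday=0, so Friday=5)
0 2 * * 5   /boot/config/myscript.sh
```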
  11. Yes. It has a special entry for upgrading parity disks, and mentions parity in the section on rebuilding onto the same disk. I will look into at least adding a comment, in the section on replacing a disk, to say that if the replacement is the same size the same process can be used for parity disks.
  12. As was mentioned earlier we would need your system's diagnostics zip file to see what is going on.
  13. You do not mention having a failed data drive? You are likely to get better informed feedback if you post your system’s diagnostics zip file.
  14. CRC errors are connection issues, normally caused by either the power or SATA cabling. They normally trigger a retry, and as long as that works the system simply continues unaffected. The count never resets to 0, so you just want it to stop increasing. To be able to detect corrupted data you need to either be using BTRFS as the file system or be using something like the Dynamix File Integrity plugin so that you have checksums of your files.
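As a sketch of how you might watch that counter from the command line (assuming smartmontools is available; /dev/sdX is a placeholder for your drive):

```shell
#!/bin/sh
# Print the raw UDMA CRC error count from 'smartctl -A' output read on stdin.
# The counter never resets; you simply want successive readings to stop increasing.
crc_count() {
    awk '/UDMA_CRC_Error_Count/ {print $NF}'
}

# Hypothetical usage:
#   smartctl -A /dev/sdX | crc_count
```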
  15. Not quite sure what you mean by this. Do you perhaps mean that the power is still on but nothing is working (i.e. the OS has crashed)?
  16. If you start with a fresh install and assign your array drives as they currently are then Unraid will recognise drives that have previously been used by Unraid and leave their contents intact. Just make certain you do not accidentally assign a data drive to a parity slot as that will result in its contents being lost. A screenshot of your Main tab as it is now is worth having.
  17. We cannot see what led up to that scenario - you would need to enable the syslog server to get logs that can survive a reboot. Are any of your drives indicating SMART issues on the Dashboard? Looking at the time it started, I assume that was a scheduled check? We normally recommend that scheduled checks are set to be non-correcting, as you do not want a drive that is playing up to corrupt parity. When you get errors from such a check you try and work out why, and check things like the power and SATA cabling that can cause errors. You then run correcting checks manually when you are reasonably certain you have no outstanding hardware/drive issues.
  18. You may find this section of the online documentations accessible via the ‘Manual’ link at the bottom of the GUI to be of use?
  19. It would only get kicked out if a write to it failed, so in theory something is wrong, although it could be something very minor. Unlike traditional RAID systems, each array drive under Unraid is a self-contained file system, so if the drive has not physically failed then normally most data can be recovered even when a rebuild is not possible.
  20. If you start by installing the Nerd Pack plugin, then iperf3 is one of the additional packages it can (optionally) install.
  21. Unraid can do this as well! However, if you are changing the parity drive you are in effect treating that drive as failed, so if you also have a failed data drive you have 2 failed drives and have exceeded the level of protection. This is mentioned in the section on Upgrading Parity Disks in the online documentation accessible via the Manual link at the bottom of the Unraid GUI. If this happened on a traditional RAID system then you would lose the whole contents of the array. On Unraid the contents of all the drives that have not failed are still intact, so in this worst case you have lost the contents of 1 drive instead of everything.
  22. @AquaVixen The method you quoted invalidates the ability to rebuild the contents of a failed data drive if you only have a single parity drive, which was the original problem that you asked about, and that would not normally be the correct action to take. When it is not clear exactly what a user did, it is normal to try and make sure that they took the correct action for the problem they actually had.
  23. No. If the system was behaving itself then there should have been no issue. Many people with Marvell-based controllers run for ages without experiencing issues. It appears, however, that something went wrong during the rebuild, and it is not clear exactly what. Without the diagnostics we cannot tell exactly what happened.
  24. The diagnostics are needed to get any idea of the best way forward. What you have described should not have had the effect you described unless something else is going on. The controller-related statements were standard warnings that are often given out. The statement about Marvell controllers applies to anything using Marvell chipsets, on any Linux-based system that uses them. It is not that they do not work at all, but that they are prone to randomly dropping drives for no apparent reason. It is almost certainly an issue at the Linux driver level: they used to work fine in 32-bit kernels (Unraid v5 and earlier), but the issue has been around for many years now, ever since Unraid went 64-bit with v6 and thus started using a 64-bit Linux kernel. It appears that Marvell has not put the effort into resolving this issue with their Linux drivers and concentrates on Windows support.
  25. As far as I know the format has always been like that in Unraid. Do not forget that Unraid does not use the standard Linux md driver; it instead uses an Unraid-specific one, so it is possible that more traditional Linux systems do use a slightly different format.