Report Comments posted by itimpi
-
This is a known issue with the 6.11.2 release.
-
As a workaround you could use the Parity Check Tuning plugin to handle this instead of asking Unraid to handle the increments.
-
Drives ‘moving’ to Unassigned normally means the drives dropped offline temporarily and then reconnected, but were assigned a different device id and so were not recognised by Unraid as the original drives. This is a classic problem with USB-connected drives but can occur with other connection types as well. The diagnostics would confirm this.
-
3 hours ago, John S said:
Where can you do that? I'm not seeing a downloadable link or section on the website.
The links for the last 3 stable releases are at the bottom of the Download page - the same one that has the USB Creator tool download.
-
It might be worth downloading the zip file for the release from the Limetech site and extracting all the bz* files, overwriting the ones on the flash drive.
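If you prefer to script that overwrite step, here is a minimal sketch (the `extract_bz_files` helper, the zip layout, and the `/boot` flash path are assumptions for illustration, not an official procedure):

```python
import fnmatch
import os
import shutil
import zipfile

def extract_bz_files(zip_path: str, flash_dir: str) -> list:
    """Extract the bz* boot files from an Unraid release zip into flash_dir,
    overwriting any copies already there. On a real server flash_dir would
    normally be /boot (the flash drive); that path is an assumption here."""
    extracted = []
    with zipfile.ZipFile(zip_path) as zf:
        for name in zf.namelist():
            base = os.path.basename(name)
            if fnmatch.fnmatch(base, "bz*"):
                # Write the member directly over the existing file on flash.
                with zf.open(name) as src, \
                        open(os.path.join(flash_dir, base), "wb") as dst:
                    shutil.copyfileobj(src, dst)
                extracted.append(base)
    return sorted(extracted)
```

Only the bz* files are replaced; everything else on the flash (your config folder in particular) is left untouched.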
-
Unraid will only try to read the SMART data if it thinks the drive is NOT spun down. You probably have something else trying to read or write to the array that is causing the drives to spin up immediately after a spin-down.
-
Have you ensured that the folder for the Time Machine share does not exist on disk8? You might need to manually delete it if it does.
-
5 hours ago, miccos said:
Fixed up the shares using cache all of a sudden and re-ran the upgrade to 6.11.0-RC4.
No hiccups this time.
Not really sure what happened in the end.
One way you can get the symptoms you described is mentioned here in the online documentation, accessible via the 'Manual' link at the bottom of the GUI. If this was the problem, though, it would not have been a result of the upgrade.
-
On 8/19/2022 at 8:04 PM, hawihoney said:
Whenever a disk spins up all activity on other disks attached to that same HBA stop for a short time.
I believe this is a restriction built into the hardware (although I could be wrong).
-
The check runs through the extra space to ensure it is still showing as all zeroes, so that if you later add another disk to the main array that is larger than the existing ones, the add process works correctly.
-
46 minutes ago, Taddeusz said:
Where does it store the files locally so I could exclude it from integrity checking?
The checksums are stored as part of the extended attributes for the file they relate to.
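As a small illustration of how checksums can live in extended attributes (the `user.sha256` attribute name and both helper functions here are hypothetical examples, not the plugin's actual scheme):

```python
import hashlib
import os

# Hypothetical attribute name for illustration; the real plugin may differ.
ATTR = "user.sha256"

def store_checksum(path: str) -> str:
    """Compute a file's SHA-256 and try to store it as an extended attribute.
    Returns the hex digest either way; some filesystems do not support
    user.* xattrs, in which case the store step is simply skipped."""
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    try:
        os.setxattr(path, ATTR, digest.encode())  # Linux-only call
    except OSError:
        pass  # filesystem without user xattr support
    return digest

def read_checksum(path: str):
    """Return the stored checksum, or None if none is present."""
    try:
        return os.getxattr(path, ATTR).decode()
    except OSError:
        return None
```

Because the checksum travels with the file itself, there is no separate database to exclude; an integrity tool that hashed the file contents would be unaffected by the attribute, since xattrs are not part of the file data.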
-
2 hours ago, JorgeB said:
Curiously with v6.11.0-rc3 there's no i/o error even when copying using a disk share,
I assume you meant a User Share in this case, looking at the command output posted.
-
15 minutes ago, gamerkonks said:
Okay, so I've taught myself Go and looked at the source code and, like Galileo mentioned, node exporter is expecting the format of /proc/mdstat to be like in the link he posted, i.e. with spaces.
My /proc/mdstat is similar to Galileo's and doesn't have spaces, so node exporter isn't able to parse my /proc/mdstat.
I don't understand how what he did could have solved the problem... As far as I know the format has always been like that in Unraid.
Do not forget that Unraid does not use the standard Linux md driver; it uses an Unraid-specific one, so it is possible that more traditional Linux systems use a slightly different format.
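As a toy illustration of the parsing difference being discussed (both sample lines below are made up; real /proc/mdstat content differs between Unraid and stock Linux), a tolerant parser only needs optional whitespace around the colon to accept both variants:

```python
import re

# Made-up sample lines for illustration only.
STANDARD_LINE = "md1 : active raid1 sdb1[1] sda1[0]"   # spaces around ':'
UNRAID_LIKE_LINE = "md1:active raid1 sdb1[1] sda1[0]"  # no spaces

# \s* makes the whitespace around ':' optional, so both spellings match.
MDSTAT_RE = re.compile(r"^(md\d+)\s*:\s*(\S+)\s+(\S+)")

def parse_header(line: str):
    """Return (device, state, level) from an mdstat header line, or None."""
    m = MDSTAT_RE.match(line)
    return m.groups() if m else None
```

A parser written against only the spaced format (as node exporter apparently was) would reject the unspaced variant outright, which matches the symptom described above.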
-
3 minutes ago, hawihoney said:
And you think that average users like me will understand this logic? There's an option "Autostart VM" that only works if another option "Autostart Array" is set. But only if it is the first start of the array after boot, not on a second or later start of the array.
I tested with Autostart of the array set to No, and the VM still autostarted when I manually started the array. It appears that it only works once until you next reboot.
-
13 minutes ago, hawihoney said:
Today I had to replace a disk in the array. I manually stopped all running VMs, manually stopped all running Docker Containers, stopped the array, replaced the disk, started the array to rebuild the replacement disk and the VMs, marked with "Autostart", did not start
Note I said FIRST start of the array after booting. You did not mention rebooting in the above sequence.
-
1 hour ago, hawihoney said:
But "Autostart VM" will not work if "Autostart Array" is disabled, the machine is already booted and the Array is started afterwards.
No - I just checked, and if this is the first start of the array after booting then the VM IS auto-started.
-
4 hours ago, hawihoney said:
1,) Autostart on VM page does not Autostart VMs. Why is this option available?
In my experience this works if the array is auto started on boot.
-
3 minutes ago, saber1 said:
So something is wrong in 6.10.X
The problem I see is that combinations of Linux kernel and Samba updates in new releases may well introduce changes affecting this that are outside Limetech's control and are difficult to identify and correct, leading to recurring stability problems in this capability. The advantage of using a docker container is that it provides a stable environment, independent of changes happening at the host level.
-
29 minutes ago, kubed_zero said:
There have been good reports of the Time Machine docker container functioning well, so this might be a good alternative. I cannot verify it myself as I do not use a Mac.
-
Since User Shares are equivalent to the top-level folders on each drive, the only way I could see you ending up with a User Share of that name was if you somehow created a top-level folder with that name. That was one reason for asking for the diagnostics: to confirm whether this is what happened.
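To illustrate the equivalence, here is a hedged sketch (the `disks_holding_share` helper is made up; only the `/mnt/diskN/<share>` layout reflects how Unraid actually composes User Shares from top-level folders):

```python
import glob
import os

def disks_holding_share(share: str, mnt: str = "/mnt") -> list:
    """List the per-disk paths contributing to a User Share.
    On Unraid a User Share named 'Media' is simply the union of every
    /mnt/disk*/Media folder, so any disk holding such a top-level
    folder contributes to (and can accidentally create) the share."""
    return sorted(glob.glob(os.path.join(mnt, "disk*", share)))
```

The flip side is exactly the situation above: creating a stray top-level folder on any one disk makes a User Share of that name appear.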
-
You are likely to get better informed feedback if you post your system's diagnostics zip file so we can see exactly what is going on as something else is needed to explain your symptoms.
BTW: what release of Unraid are you actually running? 6.10.3 is the latest (stable) release.
-
Have you tried clearing your browser cache/cookies? This seems to be needed after the upgrade to fix this issue.
There has been a suggestion that simply deleting the `rxd-init` cookie may be sufficient but I am not sure if this is enough.
-
Exactly how is it failing when you try to boot? Are you sure the BIOS has the Unraid flash drive set as the boot device?
It might be worth carrying out the procedure documented here to see if that helps.
-
1 hour ago, Titan84 said:
say a version 6.11-RC1 for us that has the latest Kernel.
I would expect any 6.11 release to have significant new functionality. If it was only a kernel upgrade I would expect it to be a point release within the 6.10 series.
[6.10.3] cant mount disks formated by unraid itself
in Stable Releases
Posted
What version of Unraid are you using? There is a known issue in the 6.11.2 release with partitioning and formatting drives larger than 2TB, and this is corrected in the 6.11.3 release.