Report Comments posted by JonathanM
-
So, what would be the best way to reset?
14 minutes ago, JorgeB said:
it's something I never tried before
🤣 Well, at least I uncovered it early. Too bad I didn't try it during the RC cycle.
-
Maybe order matters?
Already had a 3-device pool that originally had the 3 SSDs in a BTRFS 3-way mirror, but the slots were empty.
Clicked on the first device, changed the format to ZFS, 1 group of 3 members, applied changes.
Assigned 3 devices to the 3 empty slots, confirmed ZFS was still selected.
Started the array, formatted the pool.
I guess to recreate it, try first setting up a 3-way BTRFS pool, format it, stop the array, unassign the pool slots, start/stop the array to commit the change, change the first pool slot to ZFS, then assign the disks.
-
4 minutes ago, bmpreston said:
supposedly you can drag and drop. I'm unable to do that. Thoughts?
Do you have the page unlocked?
- 1
-
28 minutes ago, grepole said:
Is there a rough time frame for this feature for share storage? Weeks, months, years?
Soon™
-
2 hours ago, philliphs said:
Sorry, please guide me on posting the diagnostic
Click on the link in his post.
-
18 minutes ago, Joshndroid said:
Thanks, should I be doing this always anyway as a future reference? (as if so, is there a line i should add to my script to run this inline with my shutdown command after a sleep of say 30s)
Shouldn't be necessary, it's just a troubleshooting step to see if the array stops in a reasonable period of time. There is likely something keeping the array from stopping, causing the shutdown to kill the array prematurely, triggering a parity check on startup.
If that's the case, you need to figure out what is keeping the array from stopping.
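If you want to hunt for the culprit from the console before stopping the array, something like this can show what is holding files open (rough sketch, adjust the paths to your setup):
# list processes with files open on the user share filesystem
fuser -vm /mnt/user
# or list open files under a specific disk or share you suspect (can be slow)
lsof +D /mnt/disk1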
- 1
-
9 minutes ago, Joshndroid said:
Whether i shut it down using a script, or using the 'shutdown' button on the main page
Try stopping the array before hitting the shutdown GUI button.
-
My speculation is that this looping happens when the list of containers to be updated is not current. I have been able to induce a loop by NOT rechecking for updates, updating one of the containers that had an update available from the automatic check, then clicking update all. It tries to pull updates for all the containers that were listed before I updated the single container, but it still includes the container I just manually updated, pulls zero bytes, and starts looping.
If that is the case, you can work around the issue by first checking for updates, then hitting update all. That way you should be guaranteed to have the latest list.
-
Keep in mind that the memory speed limitation may very well NOT be the memory DIMMs, it can be the memory controller on the CPU or the motherboard itself.
Putting 200 MPH rated tires on a Toyota Corolla is not going to allow the car to go that fast.
All parts in the memory access path must be able to sustain the targeted speed. Run the system at stock speed (no XMP) and see how it behaves. Memtest can only prove the memory is bad; passing memtest doesn't mean the memory is working 100% under all conditions.
-
17 minutes ago, limetech said:
Note this is documented in the Release Notes:
🤣🤣 What is this thing you refer to? Release Notes? Who reads those? 🤣🤣
18 minutes ago, limetech said:
Should we add anything to that description?
That's sufficient, more would be redundant.
Is the quoted text also in the inline help system next to the exclusive setting?
(I know, I know, spin up a trial and look for myself)
-
1 minute ago, ljm42 said:
when you start the array the system checks for "sharename" on all your disks. If it only exists on the specified pool then "exclusive mode" is enabled and you will get the speed benefits of the bind mount to that pool.
So content manually added to the sharename folders on a foreign pool will be hidden until the array is restarted?
That's fine, I just wanted to confirm the behaviour so we can help people who will inevitably have a container writing to the wrong location.
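For anyone who wants to check this from the console, something like the following should show which volumes actually hold a copy of the share and whether it is currently bind-mounted (just a sketch, substitute your real share name for "sharename"):
# which volumes contain a top-level folder for this share? (ignore the /mnt/user* entries)
ls -d /mnt/*/sharename
# prints a mount entry if /mnt/user/sharename is its own bind mount (exclusive),
# prints nothing if it is just a folder inside the FUSE mount
findmnt /mnt/user/sharename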
-
When is the additional check evaluated? Only when an attempt is made to change the status from NO to YES?
What happens if the share config is manually manipulated?
What happens if an exclusive access share has content added to another pool? Does the check happen every time an exclusive: YES share is mounted? If so, when content is added to the share folder on a "foreign" pool, is the content then made available once the array is stopped and restarted, forcing exclusive to NO?
-
Quote
Exclusive shares
If Primary storage for a share is a pool and Secondary storage is set to "none", then we can set up a bind-mount in /mnt/user/ directly to the pool share directory. (An additional check is made to ensure the share also does not exist on any other volumes.) There is a new status flag, 'Exclusive access' which is set to 'Yes' when a bind-mount is in place; and, 'No' otherwise. Exclusive shares are also indicated on the Shares page.
Quick clarification, please, for the minority of us who have leveraged "only" and "no" in unconventional ways, i.e. to access all the content of both the array and pool but NOT automatically move things around, manually moving things as needed: does the "additional check" force exclusive access to NO, so the share continues to work as before, showing a fuse mount of array and pool content?
I just want to be sure before I get a nasty surprise where most of my VMs suddenly lose their images. I currently have domains set to cache:only, so new VMs get created on the pool, but if I don't plan to use a VM very often I'll manually move the vdisk to the array, and move it back if I need it.
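For context, the manual shuffle I'm describing is nothing fancier than moving the vdisk while the VM is shut down, roughly like this (paths and names are only examples, not a recommendation):
# VM must be stopped first; park the vdisk on an array disk
mkdir -p /mnt/disk3/domains/SomeVM
mv /mnt/cache/domains/SomeVM/vdisk1.img /mnt/disk3/domains/SomeVM/
# and move it back to the pool when I want it fast again
mv /mnt/disk3/domains/SomeVM/vdisk1.img /mnt/cache/domains/SomeVM/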
-
https://forums.unraid.net/topic/92285-new-use-cache-option/?do=findComment&comment=854925
I've been advocating for this change for a few years now.
- 3
- 1
-
9 minutes ago, luzfcb said:
Great. I asked the question because there is no information about Memtest86+ in the Release Notes for 6.12.0-rc1 and 6.12.0-rc2
Yeah, probably because there really isn't any updated info, it is what it is.
What I'd LIKE to see is a way to have the end user download the files and copy them to the USB stick, with a custom boot option. A talented developer could probably come up with a plugin to do it, but since you can't run it without rebooting anyway, the extra steps of downloading the new version and making a memtest USB stick aren't really that big of a deal. Developing and supporting a plugin that alters the Unraid USB stick like that is probably too much work and risk for too little benefit.
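For anyone who wants to do it by hand, writing the downloaded Memtest86+ USB image to a spare stick is about this much work (sketch only, the file name depends on the download and /dev/sdX must be the spare stick, NOT the Unraid boot stick):
# unpack the image if the download is compressed
gunzip memtest86plus_usb.img.gz
# write it to the spare flash drive (this wipes the drive)
dd if=memtest86plus_usb.img of=/dev/sdX bs=4M status=progress
sync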
-
58 minutes ago, luzfcb said:
Will the Unraid 6.12 also update the Memtest86+ version to the latest version available?
It already includes the latest version that is licensed for 3rd party redistribution.
Newer versions must be directly downloaded from the original website.
-
22 minutes ago, Andiroo2 said:
complete wipe and reformat?
This.
-
17 minutes ago, Kilrah said:
Not that I can see from making a "normal" test VM and removing it from the command line with the --nvram switch, no error.
Don't know if it will help make it a thing, but could you do a short feature request to at least put it on the radar? Seems like it should be just a quick addition to the GUI.
- 1
-
5 hours ago, Kilrah said:
virsh undefine --nvram MacinaboxCatalina
Maybe Unraid could add it to the "remove VM" command.
Would there be any harm in using the --nvram switch on a VM that didn't "need" it?
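If anyone wants to check first, something like this should show whether a given VM even has an nvram file defined before undefining it (sketch, the VM name is just the example from above):
# does the domain definition reference an nvram file?
virsh dumpxml MacinaboxCatalina | grep -i nvram
# remove the VM definition and its nvram file together
virsh undefine --nvram MacinaboxCatalina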
-
I've never seen a correlation made to explain why it affects some and not others.
-
macvlan causes crashes for some
-
15 minutes ago, Jclendineng said:
if you have all the same sizes and can do zfs I would think it would be a definite upgrade?
Just keep in mind that if you want to upsize the ZFS pool, I think it's a little more complicated than just adding a single disk, unlike the parity array.
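Rough illustration of what I mean (sketch, pool and device names are made up): with a mirrored ZFS pool you either attach another disk to an existing mirror or add a whole new mirror vdev; you can't just drop in one extra data disk the way you can with the parity array.
# add a third disk to an existing two-way mirror (more redundancy, same capacity)
zpool attach tank /dev/sdb /dev/sdd
# add capacity by adding an entire new mirror vdev (disks in matched pairs)
zpool add tank mirror /dev/sde /dev/sdf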
- 1
-
5 minutes ago, Hellomynameisleo said:
Why can't I update unraid to version 6.12? It says up to date and even when I go into plugins and install manually it says "plugin: not reinstalling same version", I'm currently still on version 6.11.5 of unraid
What version is listed on the "Next" branch? 6.12 isn't stable yet.
Create ZFS pool with 3 way mirror fails
in Stable Releases
Posted
Also, do you want to leave this report open until the GUI is fixed?