Report Comments posted by JonathanM
-
14 hours ago, curled said:
only syslog collection was possible up to a point it would become unresponsive - which is in the OP.
Collect diagnostics before the issue occurs; they contain system profile and settings information that may be relevant. It's not a universal issue that affects everyone, so anything that may help find commonalities is needed.
-
This may partially explain the update loop that happens sometimes. I've always had a sense that when the loop happens it's using old information.
-
What happens if you properly fill out the export path?
In your screenshot the host path is empty, which would explain the error.
I don't think 6.12.6 would have accepted it either.
-
53 minutes ago, dopeytree said:
So if one accidentally boots into GUI mode and wanted to reboot how would a noob do that without typing in the crazy secure password manually..
Normally the password is required for any critical server operation. Windows Server works the same way; it's not a good idea to let anybody walking by shut down a server without verification.
Like you said, if you really must shut down the machine without logging in, a short press of the power button will trigger a clean shutdown if the server is configured correctly.
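For reference, on a typical Linux system the power-button-to-clean-shutdown mapping is handled by acpid. A minimal sketch, with illustrative file paths and handler name (assumptions, not necessarily what Unraid actually ships):

```shell
# /etc/acpi/events/powerbtn -- acpid event rule (illustrative path).
# Matches a press of the power button and runs a handler script:
#   event=button/power.*
#   action=/etc/acpi/powerbtn.sh

# /etc/acpi/powerbtn.sh -- hypothetical handler performing an orderly shutdown.
#!/bin/sh
# shutdown -h stops services and unmounts filesystems before powering off,
# which is what makes the button press a *clean* shutdown rather than a
# hard power cut.
/sbin/shutdown -h now
```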
-
33 minutes ago, ChatNoir said:
You should report that in the plugin's thread.
He did. @dlandon said to post here.
-
I think a warning that requires typing YES to acknowledge before the wipe command is executed would be sufficient.
Maybe it would be a good idea to add a new section to the "New Config" area to manage the resetting of pools. Limit the current new config reset to parity array only, and add individual line items for each currently defined pool.
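A minimal shell sketch of such a confirmation gate (illustrative only, not Unraid's actual code); the destructive step only proceeds if the operator types YES exactly, case-sensitive:

```shell
# Hypothetical confirmation gate for a destructive pool wipe.
confirm_wipe() {
  printf 'This will ERASE all data on the pool. Type YES to continue: ' >&2
  read -r answer
  [ "$answer" = "YES" ]
}

# Demonstration with simulated operator input:
printf 'YES\n' | confirm_wipe && echo "wipe confirmed"
printf 'y\n'   | confirm_wipe || echo "wipe aborted"
```

The demonstration prints "wipe confirmed" for the exact YES and "wipe aborted" for anything else.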
-
Not a universal issue, works here.
-
Try using a different ethernet emulation.
-
6 hours ago, NLS said:
Pity nobody actually reacted when priority was urgent.
How is it urgent? There is a workaround.
Not a showstopper, data loss, or server crash.
-
That worked, thanks!
I added a few steps, but the end result was the same. I first selected auto as the file system type, and as expected, it imported the existing BTRFS pool. Then I did the erase and reset.
BTW, after I did the erase and selected ZFS, the three-way default was RAID0; I don't know if that's intended.
-
Also, do you want to leave this report open until the GUI is fixed?
-
So, what would be the best way to reset?
14 minutes ago, JorgeB said:
it's something I never tried before
🤣 Well, at least I uncovered it early. Too bad I didn't try it during the RC cycle.
-
Maybe order matters?
Already had a 3-device pool that originally held the 3 SSDs in a BTRFS 3-way, but the slots were empty.
Clicked on the first device, changed the format to ZFS, 1 group of 3 members, applied the changes.
Assigned the 3 devices to the 3 empty slots, confirmed ZFS was still selected.
Started the array, formatted the pool.
I guess to recreate it: first set up a 3-way BTRFS pool, format it, stop the array, unassign the pool slots, start/stop the array to commit the change, change the first pool slot to ZFS, then assign the disks.
-
4 minutes ago, bmpreston said:
supposedly you can drag and drop. I'm unable to do that. Thoughts?
Do you have the page unlocked?
-
28 minutes ago, grepole said:
Is there a rough time frame for this feature for share storage? Weeks, months, years?
Soon™
-
2 hours ago, philliphs said:
Sorry, please guide me on posting the diagnostic
Click on the link in his post.
-
18 minutes ago, Joshndroid said:
Thanks, should I be doing this always anyway as a future reference? (as if so, is there a line i should add to my script to run this inline with my shutdown command after a sleep of say 30s)
Shouldn't be necessary; it's just a troubleshooting step to see if the array stops in a reasonable period of time. Something is likely keeping the array from stopping, causing the shutdown to kill the array prematurely, which triggers a parity check on the next startup.
If that's the case, you need to figure out what is keeping the array from stopping.
-
9 minutes ago, Joshndroid said:
Whether i shut it down using a script, or using the 'shutdown' button on the main page
Try stopping the array before hitting the shutdown GUI button.
-
My speculation is that this looping happens when the list of containers to be updated is not current. I was able to induce a loop by NOT rechecking for updates: I updated one of the containers the automatic check had flagged, then clicked Update All. It tried to pull updates for all the containers listed before I updated the single container, still including the one I had just updated manually, pulled zero bytes, and started looping.
If that is the case, you can work around the issue by first checking for updates, then hitting Update All. That way you should be guaranteed to have the latest list.
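The stale-list hypothesis above can be sketched in a few lines of shell (purely illustrative, not Unraid's actual update code; `check_updates`, `update_one`, and the app names are made up):

```shell
# Hypothetical model of the update-loop scenario.
needs_update=""    # cached "updates available" list from the last check
updated=""         # containers already brought up to date

# Rebuild the cached list, skipping anything already updated.
check_updates() {
  needs_update=""
  for c in app1 app2 app3; do
    case " $updated " in *" $c "*) ;; *) needs_update="$needs_update $c";; esac
  done
}

update_one() { updated="$updated $1"; }

check_updates                       # cache: app1 app2 app3
update_one app2                     # manual single update, cache NOT refreshed
echo "stale list:$needs_update"     # still lists app2 -> zero-byte pull, loop
check_updates                       # the workaround: re-check before Update All
echo "fresh list:$needs_update"     # app2 is gone from the list
```

Running it prints a stale list that still contains app2, then a fresh list without it.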
-
Keep in mind that the memory speed limitation may very well NOT be the memory DIMMs; it can be the memory controller on the CPU or the motherboard itself.
Putting 200 MPH rated tires on a Toyota Corolla is not going to allow the car to go that fast.
All parts in the memory access path must be able to sustain the targeted speed. Run the system at stock speed (no XMP) and see how it behaves. Memtest can only prove the memory is bad; passing memtest doesn't mean the memory is working 100% under all conditions.
-
17 minutes ago, limetech said:
Note this is documented in the Release Notes:
🤣🤣 What is this thing you refer to? Release Notes? Who reads those? 🤣🤣
18 minutes ago, limetech said:
Should we add anything to that description?
That's sufficient, more would be redundant.
Is the quoted text also in the inline help system next to the exclusive setting?
(I know, I know, spin up a trial and look for myself)
-
1 minute ago, ljm42 said:
when you start the array the system checks for "sharename" on all your disks. If it only exists on the specified pool then "exclusive mode" is enabled and you will get the speed benefits of the bind mount to that pool.
So content manually added to the sharename folders on a foreign pool will be hidden until the array is restarted?
That's fine, I just wanted to confirm the behaviour so we can help people who will inevitably have a container writing to the wrong location.
Posted in "[6.12.9] Disks missing after upgrading to 6.12.9" (Stable Releases)