Posts posted by Niklas
-
16 minutes ago, MrCrispy said:
It would be quite trivial for Unraid to backup your usb and relevant server data to cloud storage.
"They" do if you use Unraid Connect.
-
7 hours ago, Hoopster said:
All licenses provide all Unraid features. The only difference is the number of storage devices that can be attached to the server when the array is started.
There is no difference between the Unleashed license and the Pro license other than the fact that Unleashed now has an annual update/maintenance fee. The Lifetime license is just Unleashed with lifetime updates/maintenance.
If you read the "what you get" lists, the licenses seem to be different. But sure, I also see "All features of Unraid OS", yet the listed features differ. 😉
-
Is there any difference between Pro and the new Lifetime?
Why not transfer all Pro licenses to Lifetime and allow upgrades from Basic and Plus to Lifetime?
-
44 minutes ago, aerobrain said:
Is anyone else having issues with the new version causing problems with their Dashboard tab?
I just updated the plugin on both my servers and now my Dashboard is totally blank, and clicking on that tab sometimes makes the GUI hang. I removed the plugin on my second server and it returned to normal operation.
If there are any specific logs you need me to check, just let me know (ideally with how to get those logs, as it's not something I've done before).
Ta
Yes. The Dashboard appeared frozen but just took a very long time to load.
-
This is my setup.
I have created a share in the Unraid GUI named docker, with the cache pool as primary and nothing as secondary. On 6.12, shares on ZFS are created as datasets.
In the Docker settings, I have the directory set to /mnt/user/docker/
/docker/.* as the exclusion pattern in ZFS Master.
If you set it up differently, you may have to delete all the datasets that Docker created earlier. This has been discussed in this thread before. You can't just delete the docker directory; the datasets will remain. They need to be destroyed.
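Destroying the leftovers can be scripted. This is a minimal sketch of my own, not from the thread: it reads dataset names (as produced by `zfs list -H -o name`) and prints the matching `zfs destroy` commands as a dry run, so you can review them before pasting anything into the terminal. The prefix `cache/docker` is an assumption; substitute your own pool and dataset name.

```shell
# Dry run: print a `zfs destroy` command for every dataset below a prefix.
# Pipe in the output of `zfs list -H -o name`; nothing is destroyed until
# you run the printed commands yourself. "cache/docker" is an assumed prefix.
gen_destroy() {
  prefix="$1"
  while IFS= read -r ds; do
    case "$ds" in
      "$prefix"/*) printf 'zfs destroy -r %s\n' "$ds" ;;
    esac
  done
}

# Example: zfs list -H -o name | gen_destroy cache/docker
```

Reviewing the generated commands first avoids destroying the wrong dataset, since `zfs destroy -r` is recursive and unrecoverable.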
-
Wait for the next release of Unraid (6.13) for more ZFS functionality. ZFS support is very new in Unraid.
-
3 hours ago, jinlife said:
This is noisy log info; there is no harm to functionality. I have disabled the log in the source code, and the following build will be fine.
@ich777 I have no idea how to trigger the plugin build for 6.12.5 official release, would you please trigger it to apply this change. Thanks.
1 hour ago, ich777 said:
Sure, I'll trigger the build. The updated drivers should be available in about 15 minutes; I'll update this post when it's done. The drivers are now updated for 6.12.5.
@Niklas to install the new driver package, remove the plugin, install it again from the CA App and reboot afterwards.
Done. Thank you!
-
Since I updated to 6.12.5-rc1 (and now the 6.12.5 release) I have had this filling my syslog. I don't know if it's related to .5 or to the driver. The bursts are 5 minutes apart:
Nov 27 17:00:01 Server kernel: r8125 0000:02:00.0 eth0: rss get rxnfc
(the line above repeated 10 times)
Nov 27 17:05:01 Server kernel: r8125 0000:02:00.0 eth0: rss get rxnfc
(the line above repeated 10 times)
...and so on in identical bursts of 10 lines every 5 minutes, from 17:10 through 17:45.
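To keep an eye on how noisy this gets without scrolling the raw log, here is a small sketch of my own (not from the thread) that counts the repeats per timestamp; /var/log/syslog is the usual syslog path on Unraid.

```shell
# Count "rss get rxnfc" occurrences per syslog timestamp instead of
# reading thousands of identical lines. Pass the syslog path as $1.
summarize_rxnfc() {
  grep 'rss get rxnfc' "$1" | awk '{print $1, $2, $3}' | uniq -c
}

# Example: summarize_rxnfc /var/log/syslog
```

Each output line shows the repeat count followed by the timestamp, which makes the 10-per-5-minutes pattern obvious at a glance.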
-
1 month 21 days uptime. Very stable.
Nice work! Thanks.
-
It still expands and shows all datasets from time to time, with no reboot in between.
-
36 minutes ago, Joly0 said:
?
Look at pages 8-9 in this thread. You will probably need to remove the datasets manually.
-
3 minutes ago, Joly0 said:
Yes, I stopped Docker, deleted the docker directory through the Docker settings, and removed the folder in the system share afterwards as well
-
1 hour ago, Joly0 said:
Hey guys, as I wrote above already, I have a ton of datasets or snapshots (I am not sure which) and I don't know where they come from.
Others said it's because of the docker folder, but it isn't; I removed it and they are still there...
Did you remove it the way my instructions said? You can't just delete the docker folder manually; that will not remove the datasets. Those are two different things. If you deleted only the docker folder, you'll have to remove all the datasets manually. It has been mentioned here before too, so read back a few posts.
-
Don't forget to make a flash backup, and then do this part before anything else:
"Bring up the Docker settings page and set Enable docker to No and click Apply. After docker has shut down click the Delete directory checkbox and then click Delete. This will result in deleting not only the various files and directories, but also all layers stored as datasets."
Otherwise you will be left with all the datasets to delete manually, and that would suck.
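A flash backup can also be done by hand; on Unraid the USB stick is mounted at /boot. A minimal sketch of my own, with the destination path as just an example:

```shell
# Copy a source directory into a dated backup directory and print the
# destination path. On Unraid the flash drive is mounted at /boot.
backup_dir() {
  src="$1"
  dest="$2/backup-$(date +%Y%m%d)"
  mkdir -p "$dest" && cp -a "$src/." "$dest/" && printf '%s\n' "$dest"
}

# Example: backup_dir /boot /mnt/user/backups/flash
```

Unraid Connect's flash backup does the same job automatically; this is only for a quick manual copy before risky changes.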
-
Just now, Joly0 said:
Thank you, that's a great tip. I will do that. Though isn't it possible to copy everything from system/docker/ over to docker/ without losing or having to recreate all the containers?
It's probably possible in some way, but it would be very impractical and take more effort.
Recreating/restoring the containers using Apps -> Previous Apps is much faster and easier.
-
17 minutes ago, Joly0 said:
Running 6.12.x?
You would have to exclude /cache/, and that's usually a no-go for obvious reasons.
To fix it you will probably have to create a new share with the name docker and switch over to /mnt/user/docker in docker settings, like this:
https://docs.unraid.net/unraid-os/release-notes/6.12.0/#docker
Before doing that, you need to do this:
"Bring up the Docker settings page and set Enable docker to No and click Apply. After docker has shut down click the Delete directory checkbox and then click Delete. This will result in deleting not only the various files and directories, but also all layers stored as datasets."
You can then go to the Apps-tab -> Previous Apps, select and re-install all of your containers with retained config.
Create flash backup before anything.
Edit: After all this, you should be able to exclude /docker/.*
-
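The /docker/.* exclusion mentioned above is a regular expression matched against dataset paths. A quick way to sanity-check a pattern before saving it, as a sketch of my own (the dataset names below are made up):

```shell
# Return success if the dataset path matches the exclusion regex.
matches_exclusion() {
  printf '%s\n' "$1" | grep -Eq "$2"
}

# Example checks against the /docker/.* pattern:
#   matches_exclusion "cache/docker/3f9a1b" '/docker/.*'  -> hidden
#   matches_exclusion "cache/appdata"       '/docker/.*'  -> still shown
```

Anchoring on the /docker/ path segment keeps appdata and other shares visible while hiding the Docker layer datasets.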
On 9/28/2023 at 11:10 PM, Iker said:
If you are referring to datasets that were collapsed in the UI, yes, that's correct and expected. The information about which datasets are hidden/collapsed in the UI is stored in an Unraid cookie; if you reboot, that invalidates the cookie and all the datasets are shown. This also works as a sanity check that everything is working once you have rebooted your server.
For me, they are all expanded after some time when returning to the UI, with no reboots or logins in between.
6 minutes ago, Joly0 said:
Nope, neither of those helped
Hm. Ok. I have the docker share as its own dataset.
What does "zfs list" in the terminal show you?
-
-
2 minutes ago, Joly0 said:
I've seen people use the wrong directory before when using the docker dir. What's the path set in Settings -> Docker? The "Docker directory" path.
I use /mnt/user/docker
-
Well. Let's hope for 6.12.4 stable soon.
-
4 minutes ago, prune said:
Well said, I fully agree.
Many of us Unraid users are suffering from a serious bug in this 6.12.x release that makes our servers unusable.
Please handle this case seriously; danioj has proposed several ways to handle it and keep us informed.
The problem is solved (for me, at least) in 6.12.4-rcX
-
58 minutes ago, Can0n said:
It's ok, I got them all manually removed. Now I'm dealing with freezing on my server... time to hit up another menu to get help.
I get a CPU panic and it freezes right after
Try switching from macvlan to ipvlan. Docker needs to be stopped to be able to switch.
[PLUGIN] ZFS Master
in Plugin Support
Posted · Edited by Niklas
You have (or have had) Docker storage set to directory. Docker uses its ZFS storage driver if the directory is on ZFS, and that's what you see. It has been brought up several times in this thread before, so try looking for it (I'm on my phone right now). You can exclude them from being shown, but it depends on how the directory was/is set up.