sirkuz
Members
Content count: 53
Joined: -
Last visited: -
Community reputation: 2 (Neutral)
Rank: Advanced Member
Gender: Undisclosed
Sudden instability/crashes 6.8.3 LinuxServer Nvidia Build
sirkuz replied to sirkuz's topic in General Support
Thank you kindly Jorge! Will be looking that over and adjusting as needed. -
Before I revert to the standard build to see if anything changes, I thought I would post here first. I had the Nvidia custom build running for months without any issues, and then a month or so ago I started crashing. It happens randomly as far as I can tell, anywhere from a few hours in to 3-5 days at most. Attached are the console output and the pertinent part of the logs. Perhaps someone more familiar with them could tell me whether it looks more hardware related (failing memory/CPU) or software related. Thank you in advance!

Oct 9 03:25:51 Tower root: mover: finished Oct
-
si! dang ctrl+enter reflex
-
Ran into a strange issue and fixed it today; not sure if anyone has reported this, but I couldn't find anything in a rudimentary search. If you uninstall WG after having set up a connection, your wg0 interfaces remain, and they cannot be deleted manually via the network settings / unRAID routing table GUI. You instead have to reinstall WG and then remove the config to correct it. I think these routes should be removed once the plugin is uninstalled.
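For anyone hitting this before a fix lands, a rough sketch of cleaning up the stale interface from the shell instead of reinstalling the plugin. The interface name `wg0` matches the post; whether any extra routes survive the link deletion is an assumption, so check with `ip route` afterward.

```shell
#!/bin/sh
# Hedged sketch: manually remove a stale WireGuard interface left behind
# after uninstalling the plugin. Run as root on the unRAID console.

iface="wg0"  # interface name from the post; adjust if yours differs

if ip link show "$iface" >/dev/null 2>&1; then
    # Deleting the interface also drops the routes bound to it
    ip link delete "$iface"
    echo "removed stale $iface"
else
    echo "no stale $iface interface found"
fi

# Verify nothing is left pointing at it (should print no matches)
ip route 2>/dev/null | grep "$iface" || true
```

This only clears state until the next reboot writes it back; removing the plugin's leftover config files is still the proper fix.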
-
sirkuz started following [6.3.0+] How to setup Dockers without sharing unRAID IP address
-
[6.7.x] Very slow array concurrent performance
sirkuz commented on JorgeB's report in Stable Releases
It's not a "show stopper" issue. Backporting would be silly... wasted dev time IMHO. The fact that they acknowledged it and fixed it in the next major rev makes me happy. I don't have anything configured for 6.7.x that prevented a downgrade, and I doubt most do. -
[6.7.x] Very slow array concurrent performance
sirkuz commented on JorgeB's report in Stable Releases
Interesting find. Setting the same on mine has helped a lot as well. Thank you! -
[6.7.x] Very slow array concurrent performance
sirkuz commented on JorgeB's report in Stable Releases
Has anyone compared speed of the mover script in 6.6.x vs 6.7.x? -
[6.7.x] Very slow array concurrent performance
sirkuz commented on JorgeB's report in Stable Releases
I bit the bullet and decided to revert one system as well; as reported many times, things seem to be back to "normal". Not sure how long I will be able to hold off on reverting the secondary too, as it was quite simple. -
[6.7.x] Very slow array concurrent performance
sirkuz commented on JorgeB's report in Stable Releases
Good to hear reverting back helps. I am trying to hold off for a fix, but this sure is an annoying issue! I've been following/researching it for a while, but now that more people are reporting it, hopefully it will get some more attention/resolution. -
sirkuz started following [6.7.x] Very slow array concurrent performance
-
Any chance this issue is related more to the rsync application than anything else? The same issue described here happens for me with Plex/Emby using the unbalance plugin as well... also rsync. Has anyone taken a stab at rewriting either plugin to use rclone? I get much better performance in my scripts using rclone for file-moving processes.
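To make the comparison concrete, here is a sketch of the two styles of move side by side. The paths are hypothetical, and the commands are only printed here rather than run; the flags shown (`--remove-source-files` for rsync, `--transfers`/`--checkers` for rclone) are the usual knobs, not anything taken from the plugins themselves.

```shell
#!/bin/sh
# Illustrative comparison (not the plugins' actual invocations) of an
# rsync-style move versus an rclone move. Paths are hypothetical examples.

SRC="/mnt/cache/share"
DST="/mnt/disk1/share"

# rsync: single-threaded copy preserving attributes, then delete sources
echo rsync -avX --remove-source-files "$SRC/" "$DST/"

# rclone: parallel transfers, which is where the speedup tends to come from
echo rclone move "$SRC" "$DST" --transfers 8 --checkers 16
```

The main difference is concurrency: rsync walks and copies serially, while rclone's `--transfers` runs several file copies at once, which matters most with many small files.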
-
Generally speaking, it allows you to adjust the allocations for your tmpfs. See: https://wiki.archlinux.org/index.php/Tmpfs Specifically, I adjust how much can be allocated to /tmp and other locations. I believe the default only allows up to 50% of RAM for the /tmp directory. I have a lot of memory, so I allocate much more to it. Here are my settings from my go script:

#resize some directories
mount -o remount,size=192G /
mount -o remount,size=6G /run
mount -o remount,size=6G /dev
mount -o remount,size=6G /sys/fs/cgroup
mount -o remount,size=384M /var/log

Someone else mu
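A quick way to confirm remounts like those in the go script took effect (my addition, not from the post): `df` reports the configured size of each tmpfs mount, so you can read it back after booting.

```shell
#!/bin/sh
# Verify tmpfs sizes after remounting, e.g. from a go script.
# /run is used as the example mount point; substitute any of the
# remounted paths (/dev, /var/log, etc.).

mnt="/run"

# Human-readable overview of the mount
df -h "$mnt" 2>/dev/null

# Or grab just the size in 1K blocks for scripting
size_kb=$(df --output=size "$mnt" 2>/dev/null | tail -1 | tr -d ' ')
echo "tmpfs $mnt size: ${size_kb} KB"
```

If the reported size doesn't match what the go script requested, the remount line likely ran against the wrong path or before the mount existed.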
-
I'd like to see an option in settings to adjust how the system memory gets allocated. Currently I do this with mount -o remount,size= in the go script. I think having the ability to adjust how unRAID assigns this would be great for power users. Maybe there is a plugin I haven't come across yet, but I will add this request here in the meantime.
-
Appreciate it bud!
-
Any chance someone could post these somewhere else again? Seems like the free download period has ended. Much appreciated!
-
Can anyone tell me what version of the mlx4 driver is in this one? I had to run on lowly copper/gigabit because the mlx4 driver kept crashing my system every couple of days on 6.5.1.
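For anyone wanting to check this on a running build rather than wait for an answer, a general-Linux sketch (not specific to unRAID): `modinfo` reads the module's version string without loading it, assuming the build ships `mlx4_core` as a module at all.

```shell
#!/bin/sh
# Query the mlx4 driver version from module metadata. If the driver is
# built into the kernel instead of as a module, modinfo finds nothing
# and we fall back to a placeholder.

ver=$(modinfo -F version mlx4_core 2>/dev/null)
[ -n "$ver" ] || ver="module not present (possibly built-in)"
echo "mlx4_core version: $ver"
```

If the module is already loaded, `dmesg | grep -i mlx4` usually shows the version the driver printed at load time as well.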