Kesra
Posts posted by Kesra
-
I have two Unraid servers, a main one and a secondary one. The main one works flawlessly. The secondary server works okay, but after a couple of hours, when I try to access it remotely, it takes me to the page after login but everything is blank, and it stays like this until I restart:
I run a couple of game servers; both are working fine, and people can access them from outside and inside the network normally, but I can't access anything in the GUI. It just stays like that.
-
The server crashed/restarted almost 10+ times in 6 hours.
It says it is temporarily out of memory, but the Enshrouded server is only using 1.7 GB of RAM out of the 128 GB available.
I have
--oom-kill-disable
and
--restart=unless-stopped
as container parameters
16 cores, 32 threads, and 128 GB RAM, all for Enshrouded; nothing is running on my Unraid except Enshrouded.
Here are some screenshots from when it happens:
Another one here too.
I have attached the log file too.
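For anyone reproducing this setup, here is a minimal sketch of where those two flags go, assuming a plain docker run; the image tag, port, and paths are placeholders for illustration, not the actual template values:

# --oom-kill-disable: keep the kernel OOM killer from killing the container.
# --restart=unless-stopped: bring the container back up after a crash
#   unless it was stopped manually.
# Image tag, port, and paths below are placeholders.
docker run -d \
  --name Enshrouded \
  --oom-kill-disable \
  --restart=unless-stopped \
  -p 15636:15636/udp \
  -v /mnt/user/appdata/enshrouded:/serverdata/serverfiles \
  ich777/steamcmd:enshrouded
-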
On 2/9/2024 at 11:46 AM, ich777 said:
Please read the Release notes:
It was necessary to change the rcon binary to a different one that actually is up to date and is working.
The new command line usage will show something like:
rcon -a YOURIP:PORT -p PASSWORD command
(Please note that if your command consists of multiple words you have to wrap it in double quotes)
The description from the plugin will update after you restart your Unraid server.
Thanks, but I don't know where to use it. I have multiple Docker containers that use the 2023.04.23 version of rcon, and I don't own those containers. I really just want the older version until those developers update their containers.
Can you help me roll back to the 2023.04.23 plugin version of rcon?
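For reference, the new syntax from the quoted release note looks like this; the address, port, and password are placeholders, and ShowPlayers and Shutdown are just example Palworld commands:

# Single-word command:
rcon -a 192.168.1.10:25575 -p PASSWORD ShowPlayers
# Multi-word commands must be wrapped in double quotes:
rcon -a 192.168.1.10:25575 -p PASSWORD "Shutdown 60 Restarting_soon"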
-
@ich777 After the latest Unraid CA RCON plugin update, all my scripts that use rcon for my Palworld server stopped working.
-
I fixed it: I pointed it at the container port instead of LISTEN_PORT, and it's working fine now. Thanks @ich777
-
1 minute ago, ich777 said:
I just now tried it and it is working flawlessly over here...
Wow, really?
Could you please tell me how you managed to install and run it?
Please, @ich777, I spent 6 hours straight trying and my head is going to blow up, and you got it up and running in no time.
It shows me this, but I am sure I installed it the wrong way, maybe:
-
7 hours ago, ich777 said:
No, why should it be different, it's the same game…
What does not work exactly?
Connecting this, https://github.com/palworldlol/palworld-exporter, to the server's RCON.
7 hours ago, ich777 said:
And are you sure that RCON is working and enabled correctly? Have you yet tried to connect through RCON through my plugin from the command line?
It's working absolutely fine.
-
1 minute ago, Spectral Force said:
try -RCON=true
Also, if using your external IP, make sure the port is forwarded.
I have already been using RCON since the server was installed, and the server is up too.
-
Just now, Spectral Force said:
It's set in the .ini
I am trying to use https://github.com/palworldlol/palworld-exporter.
I can't make palworld-exporter connect via RCON with "ich777/steamcmd:palworld",
but it works fine with "thijsvanloef/palworld-server-docker:latest",
so there might be some RCON flag difference in "ich777/steamcmd:palworld".
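For context, here is roughly how I am running the exporter; the environment variable names (RCON_HOST, RCON_PORT, RCON_PASSWORD) and the image path are assumptions from memory, so check the project's README for the actual ones:

# Hypothetical docker run for palworld-exporter; env var names, image
# path, and values are assumptions, not confirmed from the project docs.
docker run -d \
  --name palworld-exporter \
  -e RCON_HOST=192.168.1.10 \
  -e RCON_PORT=25575 \
  -e RCON_PASSWORD=PASSWORD \
  ghcr.io/palworldlol/palworld-exporter:latest
-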
@ich777 What's the flag to turn on RCON in steamcmd:palworld?
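(For later readers: in the vanilla dedicated server, RCON is switched on in PalWorldSettings.ini rather than by a container flag. A sketch of the relevant keys inside the OptionSettings line, with a placeholder password; the many surrounding keys are elided:)

; Pal/Saved/Config/LinuxServer/PalWorldSettings.ini (sketch, other keys elided)
OptionSettings=(...,AdminPassword="PASSWORD",...,RCONEnabled=True,RCONPort=25575,...)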
-
1 minute ago, ich777 said:
But it seems that Palworld tries to access memory that it's not allowed to access, but, as said, that could maybe also be caused by bad RAM.
Do you think the Windows paths found in the crash log mean anything?
-
1 minute ago, ich777 said:
Have you yet tried to ask on the official Discord?
Yes, with no luck.
Are you sure that your RAM is working properly, so to speak; maybe bad RAM?
I got them new, 128 GB.
Can you please post your Diagnostics from Unraid?
Please check the attached file.
-
I am having a very stressful day trying to figure out the issue here. My server keeps crashing, like 10 times a day, maybe even more, with those Windows paths on an Unraid server; very weird, and I just can't find the issue.
The server didn't crash, but all the Pal Boxes that belonged to players just disappeared.
Here is what GPT-4 answered after analyzing the crash log:
The crash is caused by a "Segmentation fault" (signal 11). This typically occurs when a program tries to access memory that it's not allowed to, which could be due to a bug like a null pointer dereference, buffer overflow, or incorrect memory handling. The call stack in the crash report can help identify the specific code causing the issue.
-
@ich777 Is there any way to push the Palworld Docker container to use all cores and threads simultaneously instead of only 4?
I got all these threads just for Palworld; later I was surprised that people say it only uses 4. Here are my weird statistics:
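(A note for later readers: Docker does not cap CPU cores by default, so the 4-thread ceiling is the game server itself. A quick sketch to verify the container is not restricted; the container name and core range are placeholders:)

# Empty/zero output means no CPU restriction is set on the container:
docker inspect --format '{{.HostConfig.NanoCpus}} {{.HostConfig.CpusetCpus}}' Palworld
# If it had been pinned, this would open it up to all 32 threads:
docker update --cpuset-cpus="0-31" Palworld
# Watch live CPU usage to see how much the server actually loads:
docker stats Palworld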
-
16 minutes ago, ich777 said:
BTW I find it really funny that you run just this one container on Unraid.
I just bought a whole system just for Palworld, haha!!! So desperate...
17 minutes ago, ich777 said:
What do you mean exactly?
Server announcements, graceful restarts, and so on.
-
18 minutes ago, ich777 said:
Everything seems alright at least for the path.
Is this only a single disk in the Array or are there multiple disks?
Single disk. I updated the share settings and restored the last backup from before the weird server wipe; stable for now.
Anyone got some useful user scripts?
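(What I had in mind is something like this sketch: a scheduled User Scripts job that announces, saves, and restarts gracefully through the RCON plugin. The address, password, container name, and timings are placeholders; Broadcast, Save, and Shutdown are standard Palworld RCON commands, though Broadcast is known to mangle spaces, hence the underscores.)

#!/bin/bash
# Hypothetical graceful-restart user script; all values are placeholders.
RCON='rcon -a 127.0.0.1:25575 -p PASSWORD'

$RCON "Broadcast Server_restart_in_5_minutes"
sleep 240
$RCON Save
# Shutdown <seconds> <message> warns players, then stops the server process.
$RCON "Shutdown 60 Restarting_now"
sleep 90
# Container name is a placeholder.
docker restart Palworld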
-
4 minutes ago, ich777 said:
Then there is something wrong with the permissions in general since it should be no issue doing that over SMB.
You also don't include the path to the game files in your screenshots, I only see the path for steamcmd, you have to click the little arrow on the Docker page.
EDIT: I just logged in and it is working as expected over here (I was not online a few days since I haven't got much time to play nowadays... )
-
9 minutes ago, ich777 said:
Are you sure that the appdata directory stays on the cache and is not moved to the Array when using /mnt/cache/appdata/… for the game files in the template?
I don't have a cache. My new Unraid is a single-disk array on a 1 TB SSD, just for Palworld.
My main Unraid had so many issues, crashes, and resets; it's following me again on my new Unraid that is just for Palworld.
I am upgrading from 32 GB RAM to 128, but my main issue is that the backup process is hard: SMB won't let me extract or drop the backup folders even though I have all the permissions; it says Access Denied.
-
Anyone know how to roll the server back to a specific backup?
After the 1.0.4.0 update all the players got wiped.
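A minimal sketch of the rollback I mean, assuming the save lives under appdata and a dated backup folder exists; every path and name here is a placeholder:

# Stop the server so the save is not being written while we copy:
docker stop Palworld
# Replace the live save with a specific dated backup (paths are placeholders):
rsync -a --delete \
  /mnt/user/backups/palworld/2024-02-08/Saved/ \
  /mnt/user/appdata/palworld/Pal/Saved/
docker start Palworld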
-
Running the latest Unraid; your Palworld Docker keeps crashing with Windows paths?
I have 2 different Unraid servers, both have the same crash issues, and both have 32 GB RAM; it crashed 28 times in 4 days.
<?xml version="1.0" encoding="UTF-8"?>
<FGenericCrashContext>
<RuntimeProperties>
<CrashVersion>3</CrashVersion>
<ExecutionGuid>6359281ADE944C7AAC6C76DFEE609D15</ExecutionGuid>
<CrashGUID>UECC-Linux-80396EDBA067402AB7822A6C1241BE48_0000</CrashGUID>
<IsEnsure>false</IsEnsure>
<IsStall>false</IsStall>
<IsAssert>false</IsAssert>
<CrashType>Crash</CrashType>
<ErrorMessage>Caught signal 11 Segmentation fault</ErrorMessage>
<CrashReporterMessage />
<ProcessId>67</ProcessId>
<SecondsSinceStart>55</SecondsSinceStart>
<IsInternalBuild>false</IsInternalBuild>
<IsPerforceBuild>false</IsPerforceBuild>
<IsSourceDistribution>false</IsSourceDistribution>
<GameName>UE-Pal</GameName>
<ExecutableName>PalServer-Linux-Test</ExecutableName>
<BuildConfiguration>Test</BuildConfiguration>
<GameSessionID />
<PlatformName>LinuxServer</PlatformName>
<PlatformNameIni>Linux</PlatformNameIni>
<EngineMode>Server</EngineMode>
<EngineModeEx>Unset</EngineModeEx>
<DeploymentName>PalServer</DeploymentName>
<EngineVersion>5.1.1-0+++UE5+Release-5.1</EngineVersion>
<CommandLine>Pal -nocore EpicApp=PalServer -useperfthreads -NoAsyncLoadingThread -UseMultithreadForDS</CommandLine>
<LanguageLCID>0</LanguageLCID>
<AppDefaultLocale>en_US</AppDefaultLocale>
<BuildVersion>++UE5+Release-5.1-CL-0</BuildVersion>
<Symbols>**UE5*Release-5.1-CL-0-Linux-Test</Symbols>
<IsUERelease>false</IsUERelease>
<IsRequestingExit>false</IsRequestingExit>
<UserName />
<BaseDir>/serverdata/serverfiles/Pal/Binaries/Linux/</BaseDir>
<RootDir>/serverdata/serverfiles/</RootDir>
<MachineId>-00000063</MachineId>
<LoginId>-00000063</LoginId>
<HostName>faa7aac4b258</HostName>
<EpicAccountId />
<SourceContext />
<UserDescription />
<UserActivityHint />
<CrashDumpMode>0</CrashDumpMode>
<GameStateName />
<Misc.NumberOfCores>16</Misc.NumberOfCores>
<Misc.NumberOfCoresIncludingHyperthreads>32</Misc.NumberOfCoresIncludingHyperthreads>
<Misc.Is64bitOperatingSystem>1</Misc.Is64bitOperatingSystem>
<Misc.CPUVendor>AuthenticAMD</Misc.CPUVendor>
<Misc.CPUBrand>AMD Ryzen 9 3950X 16-Core Processor</Misc.CPUBrand>
<Misc.PrimaryGPUBrand>UnknownVendor</Misc.PrimaryGPUBrand>
<Misc.OSVersionMajor>Debian GNU/Linux 12 (bookworm)</Misc.OSVersionMajor>
<Misc.OSVersionMinor>6.1.64-Unraid</Misc.OSVersionMinor>
<MemoryStats.TotalPhysical>33609101312</MemoryStats.TotalPhysical>
<MemoryStats.TotalVirtual>0</MemoryStats.TotalVirtual>
<MemoryStats.PageSize>4096</MemoryStats.PageSize>
<MemoryStats.TotalPhysicalGB>32</MemoryStats.TotalPhysicalGB>
<MemoryStats.AvailablePhysical>25534193664</MemoryStats.AvailablePhysical>
<MemoryStats.AvailableVirtual>0</MemoryStats.AvailableVirtual>
<MemoryStats.UsedPhysical>6077358080</MemoryStats.UsedPhysical>
<MemoryStats.PeakUsedPhysical>6359027712</MemoryStats.PeakUsedPhysical>
<MemoryStats.UsedVirtual>11931385856</MemoryStats.UsedVirtual>
<MemoryStats.PeakUsedVirtual>11943383040</MemoryStats.PeakUsedVirtual>
<MemoryStats.bIsOOM>0</MemoryStats.bIsOOM>
<MemoryStats.OOMAllocationSize>0</MemoryStats.OOMAllocationSize>
<MemoryStats.OOMAllocationAlignment>0</MemoryStats.OOMAllocationAlignment>
<NumMinidumpFramesToIgnore>0</NumMinidumpFramesToIgnore>
<CallStack>PalServer-Linux-Test!FMallocBinned2::Realloc(void*, unsigned long, unsigned int) [C:/works/Pal-UE-EngineSource/Engine/Source/Runtime/Core/Public/HAL/MallocBinned2.h:605]
PalServer-Linux-Test!TArray<Chaos::TPayloadBoundsElement<Chaos::FAccelerationStructureHandle, double>, TSizedDefaultAllocator<32> >::ResizeForCopy(int, int) [C:/works/Pal-UE-EngineSource/Engine/Source/Runtime/Core/Public/Containers/Array.h:3056]
PalServer-Linux-Test!Chaos::TLeafContainer<Chaos::TAABBTreeLeafArray<Chaos::FAccelerationStructureHandle, true, double> >::Add(Chaos::TAABBTreeLeafArray<Chaos::FAccelerationStructureHandle, true, double> const&) [C:/works/Pal-UE-EngineSource/Engine/Source/Runtime/Experimental/Chaos/Public/Chaos/AABBTree.h:571]
PalServer-Linux-Test!Chaos::TAABBTree<Chaos::FAccelerationStructureHandle, Chaos::TAABBTreeLeafArray<Chaos::FAccelerationStructureHandle, true, double>, true, double>::SplitNode()::'lambda'()::operator()() const [C:/works/Pal-UE-EngineSource/Engine/Source/Runtime/Experimental/Chaos/Public/Chaos/AABBTree.h:3152]
PalServer-Linux-Test!Chaos::TAABBTree<Chaos::FAccelerationStructureHandle, Chaos::TAABBTreeLeafArray<Chaos::FAccelerationStructureHandle, true, double>, true, double>::SplitNode() [C:/works/Pal-UE-EngineSource/Engine/Source/Runtime/Experimental/Chaos/Public/Chaos/AABBTree.h:3251]
PalServer-Linux-Test!void Chaos::TAABBTree<Chaos::FAccelerationStructureHandle, Chaos::TAABBTreeLeafArray<Chaos::FAccelerationStructureHandle, true, double>, true, double>::GenerateTree<Chaos::TConstParticleView<Chaos::FSpatialAccelerationCache> >(Chaos::TConstParticleView<Chaos::FSpatialAccelerationCache> const&) [C:/works/Pal-UE-EngineSource/Engine/Source/Runtime/Experimental/Chaos/Public/Chaos/AABBTree.h:2994]
PalServer-Linux-Test!Chaos::FDefaultCollectionFactory::CreateAccelerationPerBucket_Threaded(Chaos::TConstParticleView<Chaos::FSpatialAccelerationCache> const&, unsigned short, bool, bool) [C:/works/Pal-UE-EngineSource/Engine/Source/./Runtime/Experimental/Chaos/Private/Chaos/PBDRigidsEvolution.cpp:155]
PalServer-Linux-Test!Chaos::FPBDRigidsEvolutionBase::FChaosAccelerationStructureTask::UpdateStructure(Chaos::ISpatialAccelerationCollection<Chaos::FAccelerationStructureHandle, double, 3>*, Chaos::ISpatialAccelerationCollection<Chaos::FAccelerationStructureHandle, double, 3>*) [C:/works/Pal-UE-EngineSource/Engine/Source/./Runtime/Experimental/Chaos/Private/Chaos/PBDRigidsEvolution.cpp:441]
PalServer-Linux-Test!TGraphTask<Chaos::FPBDRigidsEvolutionBase::FChaosAccelerationStructureTask>::ExecuteTask(TArray<FBaseGraphTask*, TSizedDefaultAllocator<32> >&, ENamedThreads::Type, bool) [C:/works/Pal-UE-EngineSource/Engine/Source/Runtime/Core/Public/Async/TaskGraphInterfaces.h:1348]
PalServer-Linux-Test!LowLevelTasks::TTaskDelegate<LowLevelTasks::FTask* (bool), 48u>::TTaskDelegateImpl<void LowLevelTasks::FTask::Init<FTaskGraphCompatibilityImplementation::QueueTask(FBaseGraphTask*, bool, ENamedThreads::Type, ENamedThreads::Type)::'lambda'()>(char16_t const*, LowLevelTasks::ETaskPriority, FTaskGraphCompatibilityImplementation::QueueTask(FBaseGraphTask*, bool, ENamedThreads::Type, ENamedThreads::Type)::'lambda'()&&, LowLevelTasks::ETaskFlags)::'lambda'(bool), false>::CallAndMove(LowLevelTasks::TTaskDelegate<LowLevelTasks::FTask* (bool), 48u>&, void*, unsigned int, bool) [C:/works/Pal-UE-EngineSource/Engine/Source/Runtime/Core/Public/Async/Fundamental/TaskDelegate.h:171]
PalServer-Linux-Test!LowLevelTasks::FScheduler::ExecuteTask(LowLevelTasks::FTask*&) [C:/works/Pal-UE-EngineSource/Engine/Source/./Runtime/Core/Private/Async/Fundamental/Scheduler.cpp:184]
PalServer-Linux-Test!LowLevelTasks::FScheduler::WorkerMain(LowLevelTasks::FSleepEvent*, LowLevelTasks::TLocalQueueRegistry<1024u>::TLocalQueue*, unsigned int, bool) [C:/works/Pal-UE-EngineSource/Engine/Source/./Runtime/Core/Private/Async/Fundamental/Scheduler.cpp:402]
PalServer-Linux-Test!FThreadImpl::Run() [C:/works/Pal-UE-EngineSource/Engine/Source/./Runtime/Core/Private/HAL/Thread.cpp:67]
PalServer-Linux-Test!FRunnableThreadPThread::Run() [C:/works/Pal-UE-EngineSource/Engine/Source/./Runtime/Core/Private/HAL/PThreadRunnableThread.cpp:25]
PalServer-Linux-Test!FRunnableThreadPThread::_ThreadProc(void*) [C:/works/Pal-UE-EngineSource/Engine/Source/Runtime/Core/Private/HAL/PThreadRunnableThread.h:185]
libc.so.6!UnknownFunction(0x89043)
libc.so.6!UnknownFunction(0x10961b)
</CallStack>
<PCallStack>PalServer-Linux-Test 0x0000000000200000 + 694c7b4 PalServer-Linux-Test 0x0000000000200000 + 782ea64 PalServer-Linux-Test 0x0000000000200000 + 7861924 PalServer-Linux-Test 0x0000000000200000 + 78613d6 PalServer-Linux-Test 0x0000000000200000 + 7860dfa PalServer-Linux-Test 0x0000000000200000 + 7a3c754 PalServer-Linux-Test 0x0000000000200000 + 7a309bc PalServer-Linux-Test 0x0000000000200000 + 7a20d24 PalServer-Linux-Test 0x0000000000200000 + 7a792ca PalServer-Linux-Test 0x0000000000200000 + 68df411 PalServer-Linux-Test 0x0000000000200000 + 68c42fd PalServer-Linux-Test 0x0000000000200000 + 68c518b PalServer-Linux-Test 0x0000000000200000 + 698007f PalServer-Linux-Test 0x0000000000200000 + 695a5d4 PalServer-Linux-Test 0x0000000000200000 + 692cf00 libc.so 0x000014cbf9c1f000 + 89044 libc.so 0x000014cbf9c1f000 + 10961c</PCallStack>
<PCallStackHash>4ADCE52AA41BFBD61B672DC5E1F338F73163EBB3</PCallStackHash>
<TimeOfCrash>638423379257090000</TimeOfCrash>
<bAllowToBeContacted>1</bAllowToBeContacted>
<PlatformFullName>Linux [Debian GNU/Linux 12 (bookworm) 6.1.64-Unraid 64b]</PlatformFullName>
<CPUBrand>AMD Ryzen 9 3950X 16-Core Processor</CPUBrand>
<CrashReportClientVersion>1.0</CrashReportClientVersion>
<Modules />
</RuntimeProperties>
<PlatformProperties>
<CrashSignal>11</CrashSignal>
<CrashSignalName>Segmentation fault</CrashSignalName>
<PlatformCallbackResult>0</PlatformCallbackResult>
<CrashTrigger>0</CrashTrigger>
</PlatformProperties>
<EngineData>
<MatchingDPStatus>LinuxServerNo errors</MatchingDPStatus>
<RHI.RHIName>NullRHI</RHI.RHIName>
<DeviceProfile.Name>LinuxServer</DeviceProfile.Name>
<NumClients>3</NumClients>
</EngineData>
<GameData>
<__sentry>{ "tags": { "version": "0.1.3.0", "callstackhash": "5D893949E65859C35C342F27BE5D5C9E391D1FB4","revision": "47183", "platform": "Linux" } }</__sentry>
</GameData>
</FGenericCrashContext>
Blank page with the Unraid header; nothing works.
Yes, the issue persists even in safe mode.
Sorry for the mistake in the original post; I meant remotely.
Could it be because I am running 2 Unraid servers?
Both are behind the same IP provided by the ISP, but they each have a separate port...