Wicked_Chicken
Posts posted by Wicked_Chicken
-
Hello,
I recently had a UD disk share disappear from Windows, and I cannot get it to reappear even after resetting the share in UD within Unraid. I have a single UD drive I want to share; it appears correctly within Unraid, but Windows still cannot find it. Any assistance is greatly appreciated.
-
Hello,
I seem to be having issues pulling files from the array in a timely fashion; Windows reports a copy speed of ~3-4 MB/s, and I am not sure why. Any help is greatly appreciated, as my attempts to speed things up and solve the common problems have not yet been successful.
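One way to narrow down whether the bottleneck is the disks themselves or the network share is to time a large sequential read on the server itself. A minimal sketch (the path is a hypothetical example, not from the original post):

```python
import time


def read_throughput_mb_s(path, chunk=1024 * 1024):
    """Sequentially read `path` and return the average read speed in MB/s."""
    total = 0
    start = time.perf_counter()
    with open(path, "rb") as f:
        while True:
            data = f.read(chunk)
            if not data:
                break
            total += len(data)
    elapsed = time.perf_counter() - start
    return total / (1024 * 1024) / elapsed


# Hypothetical example of a large file on the array:
# print(read_throughput_mb_s("/mnt/user/Media/big_file.mkv"))
```

If local reads are fast but SMB copies still crawl at 3-4 MB/s, the problem is more likely the network or SMB configuration than the array itself.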
-
Hello,
I am having difficulty with a drive that was previously working with UD. Here's the log I'm getting:
Mar 24 18:25:21 UNRAID emhttpd: read SMART /dev/sdb
Mar 24 21:11:08 UNRAID unassigned.devices: Partition 'sdb1' does not have a file system and cannot be mounted.
Mar 24 21:11:08 UNRAID unassigned.devices: Error: Device '/dev/sdb2' mount point 'Media' - name is reserved, used in the array or by an unassigned device.
Mar 26 12:31:23 UNRAID unassigned.devices: Partition 'sdb1' does not have a file system and cannot be mounted.
Mar 26 12:31:23 UNRAID unassigned.devices: Error: Device '/dev/sdb2' mount point 'Media' - name is reserved, used in the array or by an unassigned device.
Mar 26 12:32:38 UNRAID unassigned.devices: Partition 'sdb1' does not have a file system and cannot be mounted.
Mar 26 12:32:38 UNRAID unassigned.devices: Error: Device '/dev/sdb2' mount point 'Media' - name is reserved, used in the array or by an unassigned device.
Mar 26 12:33:31 UNRAID unassigned.devices: Partition 'sdb1' does not have a file system and cannot be mounted.
Mar 26 12:33:31 UNRAID unassigned.devices: Error: Device '/dev/sdb2' mount point 'Media' - name is reserved, used in the array or by an unassigned device.
Mar 26 12:35:00 UNRAID unassigned.devices: Partition 'sdb1' does not have a file system and cannot be mounted.
Mar 26 12:35:00 UNRAID unassigned.devices: Error: Device '/dev/sdb2' mount point 'Media' - name is reserved, used in the array or by an unassigned device.
Mar 26 14:10:50 UNRAID unassigned.devices: Partition 'sdb1' does not have a file system and cannot be mounted.
Mar 26 14:10:50 UNRAID unassigned.devices: Error: Device '/dev/sdb2' mount point 'Media' - name is reserved, used in the array or by an unassigned device.
Mar 26 14:11:45 UNRAID unassigned.devices: Partition 'sdb1' does not have a file system and cannot be mounted.
Mar 26 14:11:45 UNRAID unassigned.devices: Error: Device '/dev/sdb2' mount point 'Media' - name is reserved, used in the array or by an unassigned device.
Mar 26 14:12:06 UNRAID unassigned.devices: Warning: Cannot change the disk label on device 'sdb1'.
Mar 26 14:12:08 UNRAID unassigned.devices: Partition 'sdb1' does not have a file system and cannot be mounted.
Mar 26 14:12:08 UNRAID unassigned.devices: Error: Device '/dev/sdb2' mount point 'Media' - name is reserved, used in the array or by an unassigned device.
Mar 26 14:14:48 UNRAID sudo: root : TTY=pts/1 ; PWD=/root ; USER=root ; COMMAND=/sbin/fsck -N /dev/sdb
Mar 26 14:15:33 UNRAID emhttpd: spinning down /dev/sdb
Mar 26 14:16:06 UNRAID unassigned.devices: Partition 'sdb1' does not have a file system and cannot be mounted.
Mar 26 14:16:06 UNRAID unassigned.devices: Error: Device '/dev/sdb2' mount point 'Media' - name is reserved, used in the array or by an unassigned device.
Any help is greatly appreciated!
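The repeated "name is reserved" error means the mount-point name 'Media' collides with a name already used by an array share or another unassigned device. The gist of that check can be sketched like this (function and share names are illustrative, not UD's actual code):

```python
def mount_point_conflict(proposed, array_shares, ud_mounts):
    """Return True if a proposed UD mount-point name collides with an
    existing array share or another unassigned-device mount point."""
    taken = {n.lower() for n in array_shares} | {n.lower() for n in ud_mounts}
    return proposed.lower() in taken


# Hypothetical example: 'Media' already exists as an array share,
# which is exactly the collision the log reports.
assert mount_point_conflict("Media", ["Media", "Backups"], [])
assert not mount_point_conflict("Media2", ["Media", "Backups"], [])
```

Renaming the UD mount point to something not used by any share typically clears this particular error.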
-
Edit:
Issue was confirmed as insufficient CUDA memory. So it appears each custom model essentially runs as an independent process. I did not realize this, so I am going to do some testing with YOLOv5s models to see whether I can get decent models within my limited GPU headroom, consider changing the GPU in the server, or offload DeepStack to my main PC with a far better GPU.
@ndetar, you are a rockstar for helping me figure this out.
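Since each custom model loads in its own process with its own copy of the weights on the GPU, VRAM requirements add up roughly linearly. A back-of-envelope sketch (the 625 MB base figure is the nvidia-smi reading mentioned in this thread; the ~700 MB per custom YOLOv5 process is a guessed placeholder):

```python
def fits_in_vram(base_mb, per_model_mb, n_custom, total_mb):
    """Rough check: the base DeepStack process plus one process per
    custom model, each holding its own weights on the GPU."""
    needed = base_mb + n_custom * per_model_mb
    return needed <= total_mb, needed


# 625 MB observed for the base models; assume a hypothetical
# ~700 MB per custom YOLOv5 process on a 2000 MB card.
ok, needed = fits_in_vram(625, 700, 3, 2000)
# Three custom models would need 625 + 3*700 = 2725 MB -> doesn't fit.
```

This is only an estimate, but it illustrates why even one or two custom models can exhaust a 2 GB card.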
-
So that's interesting.
I stripped the image and reinstalled, and the GPU is now being taxed per nvidia-smi. What's funny, however, is that as soon as I try to load any custom models, it fails entirely. I expect I have headroom based on the RAM utilization of 625 MB / 2000 MB for the base models on high, but I could not recall how I pulled that more detailed log which suggested a CUDA memory issue.
Correction, I found the command. Here it is:
sudo docker exec -it container-name /bin/bash
Once in the container, run:
cat ../logs/stderr.txt
And the log:
Traceback (most recent call last):
File "/usr/lib/python3.7/multiprocessing/process.py", line 297, in _bootstrap
self.run()
File "/usr/lib/python3.7/multiprocessing/process.py", line 99, in run
self._target(*self._args, **self._kwargs)
File "/app/intelligencelayer/shared/detection.py", line 69, in objectdetection
detector = YOLODetector(model_path, reso, cuda=CUDA_MODE)
File "/app/intelligencelayer/shared/./process.py", line 36, in __init__
self.model = attempt_load(model_path, map_location=self.device)
File "/app/intelligencelayer/shared/./models/experimental.py", line 159, in attempt_load
torch.load(w, map_location=map_location)["model"].float().fuse().eval()
File "/usr/local/lib/python3.7/dist-packages/torch/serialization.py", line 584, in load
return _load(opened_zipfile, map_location, pickle_module, **pickle_load_args)
File "/usr/local/lib/python3.7/dist-packages/torch/serialization.py", line 842, in _load
result = unpickler.load()
File "/usr/local/lib/python3.7/dist-packages/torch/serialization.py", line 834, in persistent_load
load_tensor(data_type, size, key, _maybe_decode_ascii(location))
File "/usr/local/lib/python3.7/dist-packages/torch/serialization.py", line 823, in load_tensor
loaded_storages[key] = restore_location(storage, location)
File "/usr/local/lib/python3.7/dist-packages/torch/serialization.py", line 803, in restore_location
return default_restore_location(storage, str(map_location))
File "/usr/local/lib/python3.7/dist-packages/torch/serialization.py", line 174, in default_restore_location
result = fn(storage, location)
File "/usr/local/lib/python3.7/dist-packages/torch/serialization.py", line 156, in _cuda_deserialize
return obj.cuda(device)
File "/usr/local/lib/python3.7/dist-packages/torch/_utils.py", line 77, in _cuda
return new_type(self.size()).copy_(self, non_blocking)
File "/usr/local/lib/python3.7/dist-packages/torch/cuda/__init__.py", line 480, in _lazy_new
return super(_CudaBase, cls).__new__(cls, *args, **kwargs)
RuntimeError: CUDA error: out of memory
Same CUDA error. I'll fiddle with this to see if I can get any custom models to run. It'll be really disappointing if 2 GB isn't enough for any.
-
That was a good idea. I tried loading object detection only, but when checking nvidia-smi I'm still not seeing any GPU use. I'm wondering if the GPU isn't visible, which is why it's reporting no RAM use.
-
1 hour ago, ndetar said:
I have been using it with a GPU for a while now and it's been working great. Could you provide some additional information, such as the log output from the container, maybe a screenshot of your config, etc.? It's hard to troubleshoot without more context.
Hey @ndetar!
Thanks for responding! I have loved your container and hope we can figure this out. I really appreciate your time and assistance.
Screenshots:
Logs:
root@UNRAID:~# sudo docker exec -it DeepstackGPUOfficial /bin/bash
root@ed10552468a7:/app/server# cat ../logs/stderr.txt
Process Process-1:
Traceback (most recent call last):
File "/usr/lib/python3.7/multiprocessing/process.py", line 297, in _bootstrap
self.run()
File "/usr/lib/python3.7/multiprocessing/process.py", line 99, in run
self._target(*self._args, **self._kwargs)
File "/app/intelligencelayer/shared/detection.py", line 69, in objectdetection
detector = YOLODetector(model_path, reso, cuda=CUDA_MODE)
File "/app/intelligencelayer/shared/./process.py", line 36, in __init__
self.model = attempt_load(model_path, map_location=self.device)
File "/app/intelligencelayer/shared/./models/experimental.py", line 159, in attempt_load
torch.load(w, map_location=map_location)["model"].float().fuse().eval()
File "/usr/local/lib/python3.7/dist-packages/torch/serialization.py", line 584, in load
return _load(opened_zipfile, map_location, pickle_module, **pickle_load_args)
File "/usr/local/lib/python3.7/dist-packages/torch/serialization.py", line 842, in _load
result = unpickler.load()
File "/usr/local/lib/python3.7/dist-packages/torch/serialization.py", line 834, in persistent_load
load_tensor(data_type, size, key, _maybe_decode_ascii(location))
File "/usr/local/lib/python3.7/dist-packages/torch/serialization.py", line 823, in load_tensor
loaded_storages[key] = restore_location(storage, location)
File "/usr/local/lib/python3.7/dist-packages/torch/serialization.py", line 803, in restore_location
return default_restore_location(storage, str(map_location))
File "/usr/local/lib/python3.7/dist-packages/torch/serialization.py", line 174, in default_restore_location
result = fn(storage, location)
File "/usr/local/lib/python3.7/dist-packages/torch/serialization.py", line 156, in _cuda_deserialize
return obj.cuda(device)
File "/usr/local/lib/python3.7/dist-packages/torch/_utils.py", line 77, in _cuda
return new_type(self.size()).copy_(self, non_blocking)
File "/usr/local/lib/python3.7/dist-packages/torch/cuda/__init__.py", line 480, in _lazy_new
return super(_CudaBase, cls).__new__(cls, *args, **kwargs)
RuntimeError: CUDA error: out of memory
[... the same objectdetection traceback repeats twice more ...]
Process Process-2:
Traceback (most recent call last):
File "/usr/lib/python3.7/multiprocessing/process.py", line 297, in _bootstrap
self.run()
File "/usr/lib/python3.7/multiprocessing/process.py", line 99, in run
self._target(*self._args, **self._kwargs)
File "/app/intelligencelayer/shared/face.py", line 73, in face
cuda=SharedOptions.CUDA_MODE,
File "/app/intelligencelayer/shared/./recognition/process.py", line 31, in __init__
self.model = self.model.cuda()
File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 458, in cuda
return self._apply(lambda t: t.cuda(device))
File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 354, in _apply
module._apply(fn)
File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 354, in _apply
module._apply(fn)
File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 376, in _apply
param_applied = fn(param)
File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 458, in <lambda>
return self._apply(lambda t: t.cuda(device))
RuntimeError: CUDA error: out of memory
Process Process-1:
Traceback (most recent call last):
File "/usr/lib/python3.7/multiprocessing/process.py", line 297, in _bootstrap
self.run()
File "/usr/lib/python3.7/multiprocessing/process.py", line 99, in run
self._target(*self._args, **self._kwargs)
File "/app/intelligencelayer/shared/scene.py", line 65, in scenerecognition
SharedOptions.CUDA_MODE,
File "/app/intelligencelayer/shared/scene.py", line 38, in __init__
self.model = self.model.cuda()
File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 458, in cuda
return self._apply(lambda t: t.cuda(device))
File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 354, in _apply
module._apply(fn)
File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 376, in _apply
param_applied = fn(param)
File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 458, in <lambda>
return self._apply(lambda t: t.cuda(device))
RuntimeError: CUDA error: out of memory
[... the same objectdetection traceback repeats once more ...]
DeepStack: Version 2021.09.01
v1/vision/custom/dark
v1/vision/custom/poolcam
v1/vision/custom/unagi
/v1/vision/face
/v1/vision/face/recognize
/v1/vision/face/register
/v1/vision/face/match
/v1/vision/face/list
/v1/vision/face/delete
/v1/vision/detection
/v1/vision/scene
v1/restore
Timeout Log:
[GIN] 2021/10/01 - 19:46:30 | 500 | 1m0s | 54.86.50.139 | POST "/v1/vision/detection"
[GIN] 2021/10/01 - 19:46:30 | 500 | 1m0s | 54.86.50.139 | POST "/v1/vision/detection"
-
Has anyone recently had any luck using DeepStack with a GPU within Unraid? I've been using the CPU version of @ndetar's container, which has been working wonderfully, but I have been unable to get either his (which has great instructions for converting it to GPU) or the officially documented DeepStack Docker GPU image here working correctly.
It does appear that DeepStack released a new GPU version three days ago, but I have still not had luck with either the latest version or the second-most-recent revision. I have nvidia-drivers up and running with a recommended device, but am still getting timeouts for some reason despite being able to confirm DeepStack's activation.
Any help is much appreciated!
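When chasing timeouts like this, it helps to separate "container unreachable" from "request hangs". A minimal reachability probe (the host and port are hypothetical; any HTTP response, even an error status, counts as alive):

```python
import urllib.request
import urllib.error


def deepstack_responding(url, timeout=5):
    """Return True if the endpoint answers at all within `timeout` seconds."""
    try:
        urllib.request.urlopen(url, timeout=timeout)
        return True
    except urllib.error.HTTPError:
        return True   # the server answered, just with an error status
    except (urllib.error.URLError, OSError):
        return False  # connection refused or timed out


# Hypothetical host/port for the container:
# print(deepstack_responding("http://192.168.1.10:5000/"))
```

If the probe answers but detection requests still time out after 1m0s, the server process is up while the model workers are failing, which points at the GPU/driver side rather than networking.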
-
Turning off Docker in Settings does in fact resolve the issue, but I don't know how I might narrow it down more than that.
-
Things seem to run fine until I start the array. I believe the issue is Docker-related, though there have been no significant changes there and the system has otherwise been incredibly stable. As soon as I start the array, however, I inevitably lose access to the webGUI and PuTTY will not respond.
-
7 minutes ago, Wicked_Chicken said:
Is there a specific way to do that, or should I just spin down the disks?
Disregard. It appears excluding those disks from my shares helped.
-
Is there a specific way to do that, or should I just spin down the disks?
-
I'm getting extremely slow speeds on a parity rebuild as I put in a bigger drive. It seems to be getting progressively slower.
Total size: 2 TB
Elapsed time: 22 hours, 18 minutes
Current position: 1.88 TB (94.2 %)
Estimated speed: 2.5 MB/sec
Estimated finish: 12 hours, 57 minutes
Sync errors corrected:
I've attached my diagnostics file. Any help is much appreciated.
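For what it's worth, the reported numbers are internally consistent; a quick back-of-envelope check (treating 1 TB as 10^12 bytes):

```python
# Sanity-check the rebuild statistics reported by Unraid.
done_tb = 1.88
elapsed_s = 22 * 3600 + 18 * 60                  # 22 hours, 18 minutes

avg_mb_s = done_tb * 1e12 / 1e6 / elapsed_s      # ~23.4 MB/s average so far

remaining_tb = 2.0 - done_tb
eta_h = remaining_tb * 1e12 / 1e6 / 2.5 / 3600   # at the current 2.5 MB/s

# avg_mb_s ~ 23.4 and eta_h ~ 13.3, close to the reported
# "12 hours, 57 minutes" estimate.
```

An average of ~23 MB/s over 22 hours against a current estimate of 2.5 MB/s confirms the rebuild really has slowed dramatically rather than being slow from the start.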
-
On 7/21/2021 at 11:20 AM, Vr2Io said:
append initrd=/bzroot acpi_enforce_resources=lax ?
modprobe @ go file ?
How do I implement this? Sorry, I'm new to Unraid.
-
Likewise. Fans are set to PWM mode in BIOS but still can't be found.
-
Hello, Unraid Community.
Over the past 2-3 days, Unraid has suddenly become unstable and stops responding after the array is mounted. No significant changes to hardware or software have been made. The array did come close to full yesterday, which I addressed by deleting unneeded files; that appeared to resolve the issue at the time. This morning, unfortunately, Unraid was again unresponsive.
I've attached the diagnostics file. Any assistance is greatly appreciated!
-WC
-
5 minutes ago, ich777 said:
The GeForce GT 520 has 48 CUDA cores; I think this would be a waste of electricity even if it only draws around 30 watts...
A Quadro P400 has 256 CUDA cores and also draws around 30 watts...
Anyways, no: the drivers, or better said the container toolkit, runtime, etc., only support drivers >= 418.81.07, so you couldn't get it to work in a container anyway. Besides that, I only compile the latest Nvidia driver that is available when a new unRAID version is released, plus the following Nvidia drivers for that release cycle, and start over again when a new unRAID version is released.
That makes sense, lol. I appreciate the response and your work as a dev!!! I'll see if I can't snag something more recent for this project.
-
-
3 minutes ago, ich777 said:
May I ask why?
I have an old GeForce GT 520 I want to dedicate to DeepStack processing.
-
On 9/21/2021 at 5:58 AM, ich777 said:
That's because I'm listing only the 8 last drivers since this can be a really big mess...
Edit the file "/boot/config/plugins/nvidia-driver/settings.cfg" and change the line where it says "driver_version=..." to:
driver_version=460.73.01
After that open up the plugin page and you should see that nothing is selected, just ignore that and click the Download button, then you should see that it downloads the driver 460.73.01, after it finished reboot your server to install it.
Will this work with 390.144?
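The quoted edit can also be scripted. A small sketch of the same settings.cfg rewrite (the file path is the one given in the quote above; the helper function is illustrative):

```python
def set_driver_version(cfg_text, version):
    """Rewrite the driver_version= line in the nvidia-driver plugin's
    settings.cfg (/boot/config/plugins/nvidia-driver/settings.cfg)."""
    out = []
    for line in cfg_text.splitlines():
        if line.startswith("driver_version="):
            line = f"driver_version={version}"
        out.append(line)
    return "\n".join(out)


# Example usage on the real file (run at your own risk):
# path = "/boot/config/plugins/nvidia-driver/settings.cfg"
# cfg = open(path).read()
# open(path, "w").write(set_driver_version(cfg, "460.73.01"))
```

After writing the file, the plugin page should show nothing selected; clicking Download then fetches the pinned driver, per the quoted instructions.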
-
Hello. I've been having problems with this freezing/not responding, and it does not appear that updating the app (BlueIris) helps. Can this be resolved? So happy BI is in a Docker! Thank you!
-
I have successfully mounted a drive using "Unassigned Devices", which I can see and read from, but when trying to copy files to that drive I now get errors from Windows telling me the drive is "write protected". I have ensured "read only" is not checked and have tried both authenticated SMB mode and public settings, both of which result in the same error. Any assistance is greatly appreciated!
-
Having an issue with the Docker. The log is as follows:
at com.tplink.omada.start.task.MetaDataInitTask.a(SourceFile:51)
at com.tplink.omada.start.task.f.a(SourceFile:13)
at com.tplink.omada.start.OmadaBootstrap.f(SourceFile:321)
at com.tplink.omada.start.OmadaLinuxMain.b(SourceFile:87)
at com.tplink.omada.start.OmadaLinuxMain.main(SourceFile:25)
Caused by: java.io.IOException: No space left on device
at java.io.FileOutputStream.writeBytes(Native Method)
at java.io.FileOutputStream.write(FileOutputStream.java:326)
at org.apache.logging.log4j.core.appender.OutputStreamManager.writeToDestination(OutputStreamManager.java:256)
... 27 more
2021-05-08 11:07:18,740 main ERROR Unable to write to stream ../logs/server.log for appender RollingFile: org.apache.logging.log4j.core.appender.AppenderLoggingException: Error writing to stream ../logs/server.log
2021-05-08 11:07:18,740 main ERROR An exception occurred processing Appender RollingFile org.apache.logging.log4j.core.appender.AppenderLoggingException: Error writing to stream ../logs/server.log
at org.apache.logging.log4j.core.appender.OutputStreamManager.writeToDestination(OutputStreamManager.java:258)
at org.apache.logging.log4j.core.appender.FileManager.writeToDestination(FileManager.java:177)
at org.apache.logging.log4j.core.appender.rolling.RollingFileManager.writeToDestination(RollingFileManager.java:185)
at org.apache.logging.log4j.core.appender.OutputStreamManager.flushBuffer(OutputStreamManager.java:288)
at org.apache.logging.log4j.core.appender.OutputStreamManager.flush(OutputStreamManager.java:297)
at org.apache.logging.log4j.core.appender.AbstractOutputStreamAppender.directEncodeEvent(AbstractOutputStreamAppender.java:179)
at org.apache.logging.log4j.core.appender.AbstractOutputStreamAppender.tryAppend(AbstractOutputStreamAppender.java:170)
at org.apache.logging.log4j.core.appender.AbstractOutputStreamAppender.append(AbstractOutputStreamAppender.java:161)
at org.apache.logging.log4j.core.appender.RollingFileAppender.append(RollingFileAppender.java:268)
at org.apache.logging.log4j.core.config.AppenderControl.tryCallAppender(AppenderControl.java:156)
at org.apache.logging.log4j.core.config.AppenderControl.callAppender0(AppenderControl.java:129)
at org.apache.logging.log4j.core.config.AppenderControl.callAppenderPreventRecursion(AppenderControl.java:120)
at org.apache.logging.log4j.core.config.AppenderControl.callAppender(AppenderControl.java:84)
at org.apache.logging.log4j.core.config.LoggerConfig.callAppenders(LoggerConfig.java:448)
at org.apache.logging.log4j.core.config.LoggerConfig.processLogEvent(LoggerConfig.java:433)
at org.apache.logging.log4j.core.config.LoggerConfig.log(LoggerConfig.java:417)
at org.apache.logging.log4j.core.config.LoggerConfig.log(LoggerConfig.java:403)
at org.apache.logging.log4j.core.config.AwaitCompletionReliabilityStrategy.log(AwaitCompletionReliabilityStrategy.java:63)
at org.apache.logging.log4j.core.Logger.logMessage(Logger.java:146)
at org.apache.logging.log4j.spi.AbstractLogger.logMessageSafely(AbstractLogger.java:2091)
at org.apache.logging.log4j.spi.AbstractLogger.logMessage(AbstractLogger.java:1993)
at org.apache.logging.log4j.spi.AbstractLogger.logIfEnabled(AbstractLogger.java:1852)
at org.apache.logging.slf4j.Log4jLogger.error(Log4jLogger.java:299)
at com.tplink.omada.start.task.FailExitTask.a(SourceFile:18)
at com.tplink.omada.start.task.f.a(SourceFile:13)
at com.tplink.omada.start.OmadaBootstrap.f(SourceFile:321)
at com.tplink.omada.start.OmadaLinuxMain.b(SourceFile:87)
at com.tplink.omada.start.OmadaLinuxMain.main(SourceFile:25)
Caused by: java.io.IOException: No space left on device
at java.io.FileOutputStream.writeBytes(Native Method)
at java.io.FileOutputStream.write(FileOutputStream.java:326)
at org.apache.logging.log4j.core.appender.OutputStreamManager.writeToDestination(OutputStreamManager.java:256)
... 27 more
Failed to start omada controller, going to exit
2021-05-08 11:07:18 [main] [ERROR]-[SourceFile:51] - Failed to get WebApplicationContext, Met2021-05-08 11:07:19 [nioEventLoopGroup-5-1] [INFO]-[SourceFile:50] - Omada Controller isn't prepared to handle event
2021-05-08 11:07:27 [nioEventLoopGroup-5-1] [INFO]-[SourceFile:50] - Omada Controller isn't prepared to handle event
ShutdownHook: service stopped.
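The root cause in this log is "No space left on device". Before restarting the container, it's worth checking free space on the relevant volumes; a small sketch (the mount points in the example are hypothetical):

```python
import shutil


def free_space_gb(path):
    """Free space at `path` in GiB. The 'No space left on device'
    errors above mean this has effectively hit zero for the volume
    the container writes its logs and database to."""
    usage = shutil.disk_usage(path)
    return usage.free / 2**30


# Hypothetical mount points to check on the host:
# for p in ("/var/lib/docker", "/mnt/user/appdata"):
#     print(p, round(free_space_gb(p), 1), "GiB free")
```

If the Docker image/vDisk itself is full, freeing appdata space won't help; the Docker vDisk may need to be grown or pruned.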
Negative. How would I assess port access?
I did scan my LAN and can see Unraid associated with its correct IP, and it does seem to be acting as the workgroup master.
[Support] binhex - qBittorrentVPN
in Docker Containers
Posted
I'm suddenly having an issue with this, getting the following from my logs:
Traceback (most recent call last):
File "/usr/lib/python3.11/site-packages/supervisor/loggers.py", line 102, in emit
self.stream.write(msg)
OSError: [Errno 107] Transport endpoint is not connected
Traceback (most recent call last):
File "/usr/lib/python3.11/site-packages/supervisor/loggers.py", line 109, in emit
self.flush()
File "/usr/lib/python3.11/site-packages/supervisor/loggers.py", line 68, in flush
self.stream.flush()
OSError: [Errno 107] Transport endpoint is not connected
[... the same traceback repeats several more times ...]
OSError: [Errno 107] Transport endpoint is not connected
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/bin/supervisord", line 33, in <module>
sys.exit(load_entry_point('supervisor==4.2.5', 'console_scripts', 'supervisord')())
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/site-packages/supervisor/supervisord.py", line 361, in main
options.close_logger()
File "/usr/lib/python3.11/site-packages/supervisor/options.py", line 1250, in close_logger
self.logger.close()
File "/usr/lib/python3.11/site-packages/supervisor/loggers.py", line 311, in close
handler.close()
File "/usr/lib/python3.11/site-packages/supervisor/loggers.py", line 86, in close
self.stream.close()
OSError: [Errno 107] Transport endpoint is not connected
2023-08-09 15:29:07,428 DEBG 'watchdog-script' stderr output:
sed: can't read /config/qBittorrent/config/qBittorrent.conf: Transport endpoint is not connected
2023-08-09 15:29:31,452 WARN received SIGTERM indicating exit request
2023-08-09 15:29:31,452 DEBG killing watchdog-script (pid 301) with signal SIGTERM
2023-08-09 15:29:31,452 INFO waiting for start-script, watchdog-script to die
2023-08-09 15:29:31,453 DEBG fd 11 closed, stopped monitoring <POutputDispatcher at 23441593907920 for <Subprocess at 23441593916432 with name watchdog-script in state STOPPING> (stdout)>
2023-08-09 15:29:31,453 DEBG fd 15 closed, stopped monitoring <POutputDispatcher at 23441592009040 for <Subprocess at 23441593916432 with name watchdog-script in state STOPPING> (stderr)>
2023-08-09 15:29:31,453 WARN stopped: watchdog-script (exit status 143)
2023-08-09 15:29:31,453 DEBG received SIGCHLD indicating a child quit
2023-08-09 15:29:31,454 DEBG killing start-script (pid 300) with signal SIGTERM
2023-08-09 15:29:32,457 DEBG fd 8 closed, stopped monitoring <POutputDispatcher at 23441593901904 for <Subprocess at 23441593998288 with name start-script in state STOPPING> (stdout)>
2023-08-09 15:29:32,457 DEBG fd 10 closed, stopped monitoring <POutputDispatcher at 23441608411088 for <Subprocess at 23441593998288 with name start-script in state STOPPING> (stderr)>
2023-08-09 15:29:32,457 WARN stopped: start-script (terminated by SIGTERM)
2023-08-09 15:29:32,457 DEBG received SIGCHLD indicating a child quit
Any thoughts? Thank you!