Living Legend

  1. Brief summary: I am a fool and run my entire home network through a pfSense VM on unRAID. It has worked for years... until now. Lately, the unassigned device which houses the vdisk seems to unmount every few days. After a reset, the device comes back online mounted. I looked through the syslog and found this at the end, related to that device:

     Jan 9 06:55:19 unraid unassigned.devices: Mount drive command: /sbin/mount -t xfs -o rw,noatime,nodiratime,discard '/dev/sdk1' '/mnt/disks/diskUnassignedKingston480GB'
     Jan 9 06:55:19 unraid kernel: XFS (sdk1): mounting with "discard" option, but the device does not support discard
     Jan 9 06:55:19 unraid kernel: XFS (sdk1): Filesystem has duplicate UUID cc2bfbe6-93f9-49d1-ae40-1741ec6d5d72 - can't mount
     Jan 9 06:55:19 unraid unassigned.devices: Mount of '/dev/sdk1' failed. Error message: mount: /mnt/disks/diskUnassignedKingston480GB: wrong fs type, bad option, bad superblock on /dev/sdk1, missing codepage or helper program, or other error.
     Jan 9 06:55:19 unraid unassigned.devices: Partition 'diskUnassign' cannot be mounted.
     Jan 9 06:55:19 unraid unassigned.devices: Don't spin down device '/dev/sdk'.

     Is something triggering this, or do I have faulty hardware somewhere? I'm using a SuperMicro 2U server unit with hot-swappable drives. I certainly hope this is not an issue with the backplane and instead something simple like the SATA cables.
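The "duplicate UUID" line in that log usually means either a second device exposes a filesystem with the same UUID, or the XFS log on sdk1 was left dirty by an unclean unmount. A hedged sketch of how one might investigate from the unRAID terminal (the device name /dev/sdk1 comes from the log above; these are administrative commands, run only with the filesystem unmounted):

```
# List filesystem UUIDs to see whether two devices report the same one
blkid

# As a last resort, stamp a fresh random UUID onto the filesystem
# (the unassigned.devices mount entry will then need updating)
xfs_admin -U generate /dev/sdk1
```

If `blkid` shows two devices with UUID cc2bfbe6-..., that points at cloned disks or a stale partition rather than the backplane.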
  2. If the computer completely freezes, does running a diagnostic after the reboot recover any useful information, or must you run a diagnostic before it happens? I thought I had found the culprit in Shinobi, my PVR docker. I once saw it spike to 50+ GB of RAM, which caused my pfSense VM to lock up. But I have had it turned off for the past week, and it happened again today.
  3. Geez, what is causing that in the middle of the night when nothing is in use? I have 64 GB of RAM with just some Dockers and 1-2 VMs. It seems like something is happening to cause this unreasonable spike.
  4. This has been happening once or twice a week for the past couple of weeks. One of two things happens. Either I wake up in the morning and hear the server screaming, with my VMs shut down. I run pfSense (probably a bad idea) off a virtual machine, so that takes down my entire home network. I'm forced to hook a monitor up to the server; the dashboard shows CPU and memory maxed out, and I have no choice but to reboot to bring pfSense back online. Or, occasionally, the server itself will just freeze and I'll have to do the dreaded hard reboot. Here are two diagnostic files. I believe the more recent one is from a VM shutdown where I set up a monitor and was able to save a diagnostic before rebooting. Any ideas?
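Since a diagnostic taken after a hard reboot loses the in-memory state, one workaround while hunting a runaway process is to log the biggest memory consumers on a schedule (e.g. from cron or the User Scripts plugin every few minutes); after a freeze, the last entries show what was growing. A minimal sketch, assuming GNU `ps`; the log path is just an example and should really point at persistent storage:

```shell
#!/bin/sh
# Append a timestamped snapshot of the six largest processes by RSS.
# Point LOGFILE at persistent storage so it survives a hard reboot.
LOGFILE=/tmp/mem-watch.log
{
  date
  ps axo pid,rss,comm --sort=-rss | head -6
  echo
} >> "$LOGFILE"
```

Scheduled every five minutes, this costs essentially nothing and narrows the spike to a process name and a time window.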
  5. Has anyone else had trouble setting up motion? I thought I had it figured out; maybe something to do with timestamps, where it wasn't making proper comparisons between frames? It was a complete guess, but when I removed the timezone docker mapping, my watch-only monitor with trigger-record finally started triggering. Then I decided to shorten the 10-minute recording to 30 seconds, and now motion no longer triggers it. I even went as drastic as I could: I set a specific zone by my front door, leveraged only that zone, set indifference to 1, and then wildly swung the door. It would not trigger the event. I've spent too many hours on this, so I'll have to shelve it for a while. I may try to run the program natively rather than use the docker. Per usual, I'm sure this is user error, but there's typically less potential for user error outside the docker realm. EDIT: The attached image seems to be the culprit. Unless I set this to 10, it won't trigger; when I set it to 1, it doesn't trigger. Any ideas?
  6. Figured it out; that was silly. I had the "temp streams" mapped to the same location as the permanent recordings. I assume that upon reset, the temp streams folder gets cleared, which was ultimately clearing my recordings folder too. Now on to figuring out how to get recording on motion working.
  7. I have all settings in Guac RDP blank besides my IP address, port of 3389, and authentication set to any. Maybe I'm missing something.
  8. I think I can rule out NGINX. I'm home now and just tested Guacamole locally without passing through NGINX. VNC yields very good results: not as good as the Windows RDP client, but very good, especially locally. RDP is still incredibly laggy; it takes the initial screen multiple seconds to cascade in from top to bottom. Could it be a connection setting somewhere within Guac?
  9. Looks like it's seeing something. And the requests # changes as I attempt to scroll around.
  10. I am using chrome. Here is a screenshot of what I can see from that log file. Sorry, I'm remote now and can only seem to access these files through terminal so I took a screen shot: That first message appears numerous times throughout the log. The messages below only appear that one time.
  11. Both of these are already set as advised. I was reading through the Guacamole docs and noticed this excerpt: Apache will not automatically proxy WebSocket connections, but you can proxy them separately with Apache 2.4.5 and later using mod_proxy_wstunnel. After enabling mod_proxy_wstunnel a secondary Location section can be added which explicitly proxies the Guacamole WebSocket tunnel, located at /guacamole/websocket-tunnel: Is this parameter enabled for this docker?
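For reference, the secondary Location section that excerpt ends with looks roughly like the following in the Guacamole manual's Apache example (HOSTNAME:8080 is a placeholder for wherever the Guacamole web application is listening):

```
<Location /guacamole/websocket-tunnel>
    Order allow,deny
    Allow from all
    ProxyPass ws://HOSTNAME:8080/guacamole/websocket-tunnel
    ProxyPassReverse ws://HOSTNAME:8080/guacamole/websocket-tunnel
</Location>
```

Without this, connections still work but fall back to the slower HTTP tunnel, which would match the laggy-but-connected symptoms described here.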
  12. I did a little searching, but I don't see this being a common issue for others. I have a Windows 10 VM up and running successfully. I can access it locally through the Windows RDP client without an issue. In a daring moment, I opened port 3389 and redirected it to the VM to test externally. This worked flawlessly too; it operated as cleanly and quickly as if it were the local OS. I've had the Guacamole docker up and running for a few years now, with a VNC connection and an RDP connection set up. VNC is okay, but not as good as the native Windows RDP client. RDP through Guacamole, however, is incredibly inconsistent. It always connects, but at times is incredibly laggy. I've tried every configuration under the sun through the Guacamole GUI to no avail. I've tried tinkering with my NGINX settings to see if there was something I was missing, but nothing there seems to make a difference either. Any thoughts as to why RDP through Guacamole, connecting a Windows machine to a Windows machine, can be so unstable?
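Since this setup fronts Guacamole with NGINX, one thing worth double-checking is that the proxy block carries the WebSocket upgrade headers and disables buffering; without them the tunnel silently degrades to HTTP polling, which feels exactly like intermittent lag. A sketch along the lines of the Guacamole manual's NGINX example, with HOSTNAME:8080 as a placeholder:

```nginx
location /guacamole/ {
    proxy_pass http://HOSTNAME:8080/guacamole/;
    proxy_buffering off;
    proxy_http_version 1.1;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection $http_connection;
    access_log off;
}
```

The `Upgrade`/`Connection` pair and `proxy_http_version 1.1` are what allow the websocket-tunnel to pass through at all.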
  13. I have. And I wouldn't call it an issue. Just a question for people that use this docker to see if they have any experience with the filter feature.
  14. Just wanted to bump my post that may have gotten lost in the shuffle earlier this week. @dlandon, any suggestions on the most resource-efficient way to keep MP4 files for 24 hours, then delete them, while keeping the few daily snapshots in perpetuity? Is there a way to do it with one stream, with a filter on the event that can distinguish between MP4 and JPG files, or do I need two independent streams: one to clear MP4s daily and one to keep JPGs?
  15. I have a question about the optimal way to handle saving and filtering two cameras' JPG and MP4 outputs. I'm currently running two cameras. In my ideal scenario, the cameras would record 24/7 using H.264 camera passthrough. These recordings would be kept for 24-48 hours and then deleted. Additionally, I would like an image saved once an hour per camera; these would never be deleted. What would be the optimal way to do this while minimizing server resources? I was able to set up JPGs and MP4s to save from the same camera feed to the same event folder. The problem was that I could not figure out how to bifurcate images and videos from the event folder via a filter; it was an all-or-nothing proposition. If I wanted to delete beyond 24 hours, I lost the pictures too. If I wanted to keep beyond 24 hours, I was forced to keep all the videos. The next option seems to be to set up two feeds per camera, one responsible for images and one responsible for video. The video feed would use the filter to delete event folders more than 24 hours old, while the image feed would remain untouched. I hesitated to do this, as I assumed it would be more resource-intensive. Any suggestions on the best way to accomplish this?
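If both file types land in the same event folders, one alternative is to leave the in-app filter out of it and let a scheduled script (e.g. a daily unRAID User Script) prune only the videos: `find` can match MP4s by name and age while never touching the JPGs, so a single stream suffices. A sketch against a throwaway directory; in real use, `dir` would be the cameras' storage path, and GNU `touch`/`find` are assumed:

```shell
#!/bin/sh
# Sandbox standing in for the cameras' event directory (hypothetical path).
dir=$(mktemp -d)
touch -d '2 days ago' "$dir/cam1-old.mp4" "$dir/cam1-old.jpg"
touch "$dir/cam1-new.mp4"

# Delete only MP4s older than 24 hours (1440 minutes); JPGs are never matched.
find "$dir" -name '*.mp4' -mmin +1440 -delete

ls "$dir"   # only cam1-new.mp4 and cam1-old.jpg remain
```

This sidesteps the all-or-nothing filter entirely and costs one `find` pass per day, which should be far cheaper than running a second feed per camera.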