BigBoyMarky

Members · 10 posts · Days Won: 1

BigBoyMarky last won the day on January 22, 2023 and had the most liked content!


  1. Hey everyone, I know a lot of people run separate isolated networks for certain containers or VMs, but I haven't been able to find a clear set of instructions for getting everything working with the kind of setup I want. I'm also not very knowledgeable when it comes to networking and hope someone can help me understand it a bit better.

     Context: I'm setting up my own CCTV system using the Frigate Docker container (open to other suggestions as well). Until now I have used the main network interface, eth0, for everything. For the CCTV setup I bought an unmanaged PoE switch and connected it directly to my Unraid server as a separate eth1 interface. The only devices on that switch, other than the Unraid server itself, are the IP PoE cameras; there is no connection between the PoE switch and my main router. From my understanding, this isolates the cameras and blocks all of their outbound traffic.

     Here is what my eth1 interface configuration looks like:

     I have also enabled the br1 custom network in the Docker settings:

     Now I'm able to create the Frigate Docker container and have it use the br1 network just fine. It also picks up the test camera that I gave a static IP on that subnet before hooking it up to the PoE switch. However, I cannot access Frigate's web UI from within my LAN, e.g. from the personal computer I use to reach the Unraid admin UI. The only way I can actually open the Frigate web UI is from an Unraid VM on the br1 network, with an IP I defined on that subnet, hitting the web UI at URL:PORT.

     Could anyone provide some guidance on what I would need to do to expose the web UI of that Frigate container, on that different network interface, to my LAN, or let me know whether this is even possible? I may have completely botched my understanding of how I should be isolating these PoE IP cameras (both hardware and network configuration). If this isn't possible with the path I've taken, could someone point me at the correct approach? Any help or information would be super appreciated. Thank you!
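     Not from the thread, but one commonly used pattern for this situation, sketched under the assumption that br1 is a custom (macvlan-style) Docker network and that Frigate's web UI listens on port 5000; the container name, image tag, and appdata path are placeholders: run the container on the default bridge so the UI is published on the host's LAN address, then attach it to br1 as a second network so it can still reach the cameras directly.

        # Publish the web UI on the host's LAN address via the default bridge
        docker run -d --name frigate \
          -p 5000:5000 \
          -v /mnt/user/appdata/frigate:/config \
          ghcr.io/blakeblackshear/frigate:stable

        # Attach the running container to the isolated camera network as well
        docker network connect br1 frigate

     With both networks attached, camera traffic stays on the isolated eth1/br1 segment while the UI is reachable like any other bridged container.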
  2. Thanks again @JorgeB for the help. I was able to follow the steps above, get my data out, and rebuild the cache pool. I bumped into 5487623 other issues with my containers, but slowly worked through fixing each one of them. Running a memtest now, but marking this thread as resolved.
  3. Hey @JorgeB, sorry, just for clarity: given the state of my cache pool, I am definitely going to recreate it. But I was wondering if there is a temporary workaround for backing up the data: un-assign one of the mirrored drives, try to start the array, and get the shares completely off the cache first? That way it would be easier to move all of the shares onto the array and then recreate the cache pool, without having to back up the data manually over USB, etc.

     Edit: I noticed @viper81 was able to start his array after mounting the cache pool as read-only, so I'm wondering if it's safe to start the array with the cache pool mounted read-only. If so, I believe I can do the following:
     1. Start the array
     2. Copy ALL shares/folders MANUALLY from the cache pool onto any of the disks in the array
     3. Update ALL shares to use the array disks
     4. Shut down the server and run memtest
     5. If memtest finds nothing wrong, start the server and rebuild the cache pool
     6. With the cache pool rebuilt, start the array

     @JorgeB would the above be plausible?
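     For steps 1–2, the console-level version of the copy might look roughly like the sketch below; the pool name cache, the mount point /mnt/cache, and the destination disk1 are all assumptions, and whether a read-only import is appropriate here is exactly the question being asked above.

        # Import the pool read-only so nothing is written to it
        zpool import -o readonly=on cache

        # Copy everything onto a data disk in the array before rebuilding the pool
        rsync -avh /mnt/cache/ /mnt/disk1/cache_backup/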
  4. Gotcha, okay, thanks @JorgeB! I will do some reading to see if I can dump this data onto one of the disks in my array instead of going over USB.
  5. Thanks @JorgeB. I skipped testing the RAM first and jumped straight into mounting as read-only, and it looks like I'm able to access the cache share now. I'll go ahead and do a backup first and then run my memtests. I'm currently copying all of the files off to USB drives.

     With that said, since this is a mirrored ZFS cache pool, are there alternatives to copying the data off and then recreating the pool? Would un-assigning one of the cache drives and attempting to re-import be a potential workaround given the current state? If so, I could change my cache-only shares to write to the array and then recreate the cache pool that way.
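     Before relying on the un-assign-one-mirror-member idea, it may be worth checking how each side of the mirror looks; this is only a generic ZFS sketch with an assumed pool name of cache, and the split command is left commented out because it permanently breaks the mirror.

        # Show the state of each mirror member and any read/write/checksum errors
        zpool status -v cache

        # zpool split can peel a healthy member off into a standalone pool,
        # but it is destructive to the mirror; confirm in the thread before using it
        # zpool split cache cache_copy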
  6. Thanks @JorgeB, here's the output of zpool import, with diagnostics attached. I also added a screenshot of the System Log from when I try to run the import. It looks like it's bumping into the same issue, and attempting to generate the diagnostics after running zpool import gets stuck at the same spot. mark-nas-diagnostics-20230623-0856.zip
  7. Hey @JorgeB, thanks for getting back to me. I'm unable to generate diagnostics while the array is attempting to start and the cache pool is stuck at Mounting. Would diagnostics taken with the array stopped be of any use? I did comb through those and didn't see anything that stood out, which is why the screenshot of the system log looks like the best information I can pull while the array is trying to start.
  8. Hey Unraid community, I recently upgraded to 6.12.0 and switched my cache pool (2x2TB SSDs) to ZFS (mirrored). Everything was fine for a couple of days. I then upgraded to 6.12.1, which also seemed fine, but when my server restarted, the array got stuck at "Mounting" the cache pool.

     I have disabled array auto-start and can boot my Unraid server with the array stopped, as well as start the array in Maintenance mode. However, I cannot start the array normally, nor can I generate diagnostics while the array is attempting to start and mount the cache pool. Trying to generate diagnostics at that point gets stuck at:

        /usr/sbin/zpool status 2>/dev/null|todos >>'/mark-nas-diagnostics-20230623-0107/system/zfs-info.txt'

     Here is the cache pool stuck at "Mounting":

     With the array started in Maintenance mode, I can generate diagnostics, but looking through them, nothing sticks out as broken. I then checked the System Log and see this, which is likely the issue:

     Being a complete noob with ZFS, I'm not sure where to start. Could someone help point me in the right direction? Thank you
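     For anyone who hits the same symptom, the first console-level checks used later in this thread boil down to something like the sketch below; the pool name cache is an assumption, and this is meant to be run with the array stopped.

        # See whether the pool is visible and importable, without actually importing it
        zpool import

        # Recent kernel messages often show the underlying ZFS error behind the hang
        dmesg | tail -n 50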
  9. I replaced both the ssl.conf and nginx.conf files with the sample versions to bring them up to date, since I had not made any custom modifications to either of them, and this resolved my issue.
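     In case it helps the next person: the fix above amounts to copying the shipped sample configs over the active ones and restarting the container. The paths below assume a SWAG-style appdata layout, which may not match this particular container.

        # Keep a copy of the current configs, then replace them with the shipped samples
        cp /mnt/user/appdata/swag/nginx/nginx.conf /mnt/user/appdata/swag/nginx/nginx.conf.bak
        cp /mnt/user/appdata/swag/nginx/ssl.conf /mnt/user/appdata/swag/nginx/ssl.conf.bak
        cp /mnt/user/appdata/swag/nginx/nginx.conf.sample /mnt/user/appdata/swag/nginx/nginx.conf
        cp /mnt/user/appdata/swag/nginx/ssl.conf.sample /mnt/user/appdata/swag/nginx/ssl.conf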
  10. Hi everyone, this container has been amazing so far, but I have a weird issue that I can't seem to find any answers for. For some reason, the number of streams to the share I'm seeding torrents from gets stuck at 499 (I believe every time, but I can't be 100% sure). Is this common? The only thing I changed was installing the ltconfig plugin and enabling the high performance seed settings. It seems like stale connections aren't being cleaned up. I did go through the config settings but can't really tell which one would be directly correlated with this. My upload speed crawls when this happens, but when the number of streams is on the lighter side, my upload speed can shoot up. Any insight would be super helpful. Thank you!
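     Not an answer, but one thing that can be checked when the count pins at 499; the share name below is a placeholder, and the file_pool_size guess assumes the high-performance-seed preset raised that libtorrent setting to roughly 500, which may not match this setup.

        # On the Unraid console: count open file handles under the share to see
        # whether the ~500 figure really is open files held by the container
        lsof +D /mnt/user/SHARENAME 2>/dev/null | wc -l

        # If the count tracks the limit, libtorrent's file_pool_size (raised by the
        # high-performance-seed preset) is one suspect worth lowering via ltconfig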