MrCravon

Everything posted by MrCravon

  1. So I need a HW decoder for the H264 streams coming from my IP cameras. I have an old Nvidia GTX 660Ti that I put in the server; I installed the drivers and selected the older driver release that actually supports the 660Ti. According to this Wikipedia article the 660Ti should support H264 HW decoding. But after passing the GPU through to my Frigate container, I didn't get HW acceleration to work. Before I spend hours on this: are there any tips, or any obvious reason I should avoid going down this rabbit hole trying to get it to work? Does anyone have Frigate working with old GPUs? (NB: I'm not planning to use this for AI workloads with CUDA; I already know the CUDA version that supports the 660Ti is too old for Frigate.)
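For reference, on recent Frigate versions NVDEC decoding is enabled through ffmpeg hwaccel args in the config. A minimal sketch, assuming Frigate 0.12+ with its built-in NVIDIA preset; the camera name and RTSP URL here are made-up placeholders, and the container also needs the NVIDIA runtime exposed to it:

```yaml
ffmpeg:
  # Built-in preset for NVDEC H264 decoding (available in Frigate 0.12+)
  hwaccel_args: preset-nvidia-h264

cameras:
  cam1:                                          # hypothetical camera name
    ffmpeg:
      inputs:
        - path: rtsp://192.168.1.50:554/stream   # hypothetical RTSP URL
          roles:
            - detect
```

Whether the 660Ti's older NVDEC generation works with the ffmpeg build Frigate ships is exactly the open question, so treat this as a starting point, not a confirmed fix.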
  2. It seems like it is only the Mosquitto docker container's appdata folder I am being denied access to.
  3. I didn't think about that. Thanks, here is the diagnostics file. vault-diagnostics-20240505-2010.zip
  4. I have somehow lost access to my appdata folder? Or at least Krusader has. Why? And any suggestions on fixing this?
  5. This helped a lot. Thanks! I tried this and it still didn't work: tailscale set --advertise-exit-node --exit-node-allow-lan-access --advertise-routes=[your subnet] But in the end a reboot was what was needed to get it working.
  6. I changed from the docker container to this plugin recently. I had some issues getting started, but when I used the following command I got it up and running the way (I think) I wanted it: tailscale up --accept-routes --advertise-exit-node --advertise-routes=192.168.30.0/24,192.168.40.0/24 --reset And it seems to work the way I had it set up with the docker version, with one exception: I can access everything on my network except the Unraid web interface. Am I missing something?
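A few quick checks that could narrow down why only the web UI is unreachable (a sketch, not a definitive fix; the IP is a made-up placeholder for the Unraid box):

```shell
# Were the advertised subnet routes actually approved in the admin console?
tailscale status

# Can the tailnet reach the Unraid host at all?
tailscale ping 192.168.30.10        # hypothetical Unraid LAN IP

# On the Unraid host: is the web UI listening on all interfaces,
# or bound to a single one?
ss -tlnp | grep -E ':(80|443)\b'
```

If the host answers a ping but the UI doesn't load, a binding or routing detail on the Unraid side is the more likely culprit than the Tailscale flags.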
  7. So I mounted the disks in the array without parity, got the backup data out, and made a new USB stick with the original config. However, all my disks now show as "new devices". How come Unraid doesn't recognize them like before in the array? I thought that was based on serial numbers and such.
  8. So my USB flash drive died. It seems the only actual backups I have of it are on the array. I made a new Unraid USB and booted up with a trial key, with the intention of getting the data off the array. How do I actually get the config off the array? If I try adding the disks to the array, it says I will lose my data.
  9. Well, I didn't read that exactly. But after restoring the backups, the MongoDB container wouldn't start. The log produces an error like this: 2021-06-25T12:07:32.538+0200 E STORAGE [initandlisten] WiredTiger error (0) [1624615652:538965][1:0x14e9929f5a80], file:WiredTiger.wt, connection: __wt_btree_tree_open, 585: You should confirm that you have opened the database with the correct options including all encryption and compression options Reading up on that, it seems this happens if you try to use files that were backed up without stopping all writes to mongod before copying the files. I read that here, under "Back Up with cp or rsync": https://docs.mongodb.com/manual/core/backups/ I realize the plugin uses tar, but I suspect the same applies. So my deduction (which of course may be wrong) is that the MongoDB instance was not shut down, or not shut down properly, before the backups were made, resulting in the WiredTiger error. I have only briefly looked into this though. I found this: https://medium.com/@imunscarred/repairing-mongodb-when-wiredtiger-wt-file-is-corrupted-9405978751b5 which I plan to read through and try this weekend.
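For anyone following along, the repair attempt I'm planning looks roughly like this (a sketch only; the appdata path is an assumption for my setup, and the container must be stopped first so nothing else touches the files):

```shell
# Always work on a copy, never on the only data you have
cp -a /mnt/user/appdata/mongodb /mnt/user/appdata/mongodb.bak   # hypothetical path

# Let mongod attempt a WiredTiger salvage/repair on the data directory
mongod --dbpath /mnt/user/appdata/mongodb --repair
```

If --repair can't salvage it, the Medium article above describes a more manual wt-tool recovery, which I'd only try after the copy is safely stashed.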
  10. I have all my docker appdata, and most paths defined in the docker templates, located on my cache drive. I have been relying on the CA Backup / Restore Appdata plugin to ensure the data is safely stored on the array in case the cache drive fails. I recently changed my cache drive to a larger one and had some hiccups with file permissions, resulting in some files being left on the old drive and not moved to the new one. I fixed it by restoring a backup from some days earlier. The only problem is my MongoDB container, which won't start now due to corrupted files. I read up on it, and it seems that WiredTiger.wt gets corrupted when CA Backup / Restore Appdata does its thing. Now, this post is not about fixing my MongoDB install, but rather about container path mapping. After these issues I started thinking about how my container paths are mapped to paths on my server. Two examples are my GitLab and my MongoDB instances. Generally, should the data paths of containers be located on the array instead of the cache drive? Also, I see the GitLab instance points to /mnt/cache/appdata/ instead of /mnt/user/appdata/; is that a problem?
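To make the mapping question concrete: the only difference between the two is which host path the container volume points at. A sketch with plain docker run (container name and paths are assumptions):

```shell
# Via the user share layer (shfs) - Unraid decides whether the files
# land on cache or array according to the share's cache settings:
docker run -d --name mongodb \
  -v /mnt/user/appdata/mongodb:/data/db mongo

# Pinned directly to the cache device, bypassing the user share layer:
# docker run -d --name mongodb \
#   -v /mnt/cache/appdata/mongodb:/data/db mongo
```

With /mnt/cache the data never moves off the cache drive regardless of share settings, which is exactly why a cache-drive failure or swap hits those containers hardest.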