mc_866


Everything posted by mc_866

  1. I want all the data on the NVMe and to revert back to a single cache drive. The GUI only shows a single drive; is there a way to change the pool back to a single-drive cache?
  2. I messed up: rather than starting a new cache pool as I had meant to with my two 512GB SSDs, I incorrectly added them to the pool with my 2TB NVMe cache drive. I stopped the array and removed the drives and their slots, so now I'm down to a single cache drive. At first it incorrectly showed 1.5TB capacity, so I ran a balance, but that didn't change anything. I did some research and found the post below:

     "Single: requires 1 device only; it's also the only way of using all space from different-size devices, btrfs's way of doing a JBOD spanned volume; no performance gains vs. a single disk or RAID1."

     So I ran the command below to try and convert back to single vs. RAID1:

     btrfs balance start -dconvert=single -mconvert=raid1 /mnt/cache

     I actually ran that twice but am still left with 3TB showing above. Here is a view of the balance status, and it appears the pool is still in RAID1. I was under the impression the convert command was supposed to change this. Also, I don't see the drop-down next to the balance button shown in the post above. Any other ideas on how to get this corrected back to a single drive with 2TB of capacity?
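One likely culprit in the post above: the `-mconvert=raid1` filter explicitly tells balance to keep metadata (and system) chunks in RAID1, so only the data profile gets converted and the pool can still report RAID1. A sketch of checking which chunk types still carry the RAID1 profile — the sample output below is illustrative; on the server, pipe the live output of `btrfs filesystem df /mnt/cache` instead:

```shell
# Illustrative stand-in for `btrfs filesystem df /mnt/cache` output;
# replace with the live command's output on the actual server.
sample_df='Data, RAID1: total=1.50TiB, used=1.20TiB
System, RAID1: total=32.00MiB, used=16.00KiB
Metadata, RAID1: total=2.00GiB, used=1.50GiB'

# List every chunk type that still carries the RAID1 profile; anything
# printed here still needs converting before the pool reads as single.
echo "$sample_df" | awk -F',' '/RAID1/ {print $1}'
```

On the live pool, something like `btrfs balance start -f -dconvert=single -mconvert=dup /mnt/cache` converts all chunk types to single-device profiles (the `-f` is required because reducing metadata redundancy is treated as lowering integrity), and `btrfs filesystem show /mnt/cache` confirms whether btrfs still sees more than one device in the pool — which would also explain the inflated capacity.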
  3. Appreciate that, thanks! Can I just copy the files from config/plugins/dockerMan/templates-user over to my new USB drive? Is that safe? Alternatively, if I just re-install, is there any chance my settings will stay? When I look at some of the dockers it appears my settings are there. Or could I pull the templates out of the appdata backup I created a couple of weeks ago?
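A sketch of what that template copy would look like. The paths under OLD and NEW are stand-ins so the copy can be demonstrated anywhere (on the real server, OLD would be a mount of the old flash backup and NEW would be /boot on the new USB stick), and the template filename is made up:

```shell
# Illustrative mounts -- adjust to the old flash backup and the new /boot.
OLD=/tmp/old_flash
NEW=/tmp/new_flash

# Simulated old-flash layout so this sketch runs anywhere; on a real
# restore this directory already exists in the backup.
mkdir -p "$OLD/config/plugins/dockerMan/templates-user"
echo '<Container><Name>binhex-plex</Name></Container>' \
  > "$OLD/config/plugins/dockerMan/templates-user/my-binhex-plex.xml"

# The actual restore: recreate the directory and copy every saved template.
mkdir -p "$NEW/config/plugins/dockerMan/templates-user"
cp -a "$OLD/config/plugins/dockerMan/templates-user/." \
      "$NEW/config/plugins/dockerMan/templates-user/"

ls "$NEW/config/plugins/dockerMan/templates-user"
```

The templates only hold the container configuration (ports, paths, variables), so copying them is safe; the container data itself lives in appdata.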
  4. I had a complete USB failure while away on vacation; always nice to come home to a catastrophic failure. The good news was I did have a USB backup; the bad news was that it was 4 months old, from before I had added a few new drives. I was able to get the array back up and running because I knew which drives were parity. I opted for a new config, and my data all seems intact, which is great. The trouble now is that my dockers are giving errors and aren't running correctly. I have a recent appdata backup I tried to restore, but that didn't seem to work. Is there a way to restore my dockers without completely reconfiguring them?
  5. I'm having difficulty getting the backup to start from my Android device. I've installed and configured the container on my Unraid server. I've created a user and can see the Lomorage instance from the Android app, and I've successfully logged in and connected to it. On my phone it says "x assets need to backup. Drag down to start." I pull down and the circle spins, but it doesn't look like any data has been sent or any photos backed up.
  6. Stability seems to be back in play since getting rid of the static IP on my UniFi container. Hopefully the stability will continue! Thanks for the help!!!!
  7. I set both Plex containers to not auto-start, rebooted, started binhex plex, and started streaming a title that would require a transcode. Still nothing hitting the GPU.
  8. Version 1.20.0.3181 for binhex plexpass
  9. Turned the linuxserver.io container off, double-checked my settings in this container, then tried turning it on and streaming the same content, and no GPU kicks in.
  10. Nvidia driver version 440.59, Nvidia Unraid version 6.8.3. Where do you see your Plex version?
  11. I used watch nvidia-smi in the console. No activity with this container; the linuxserver.io container shows the GPU doing the work.
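For anyone following along, this is what that check looks like. The sample process row below is made up (the PID, path, and memory figure are illustrative); on the server you'd watch the live table with `watch -n 2 nvidia-smi`:

```shell
# Illustrative row from the nvidia-smi process table; values are made up.
smi='| 0  N/A  N/A  12345  C  /usr/lib/plexmediaserver/Plex Transcoder  123MiB |'

# Hardware transcoding is active when a "Plex Transcoder" process appears
# in the GPU's process list; zero matches means the CPU is doing the work.
echo "$smi" | grep -c 'Plex Transcoder'
```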
  12. Yes, this is what my setup looks like
  13. I followed these instructions and watched the Spaceinvader One video, but it didn't work with this container. I installed the linuxserver.io one separately and hardware transcode worked right away. I'd like to get that working on this binhex container.
  14. Is Nvidia HW transcoding working with this docker? It's been my go-to and has been great, but I just got a Quadro P400 card to help with some transcodes and I can't seem to get it to work with this docker. For reference, I tried out the linuxserver.io docker and it worked straight away. I'd prefer to stay with this docker if I can.
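For the Unraid Nvidia build, the usual container changes are adding `--runtime=nvidia` to the container's Extra Parameters and setting a NVIDIA_VISIBLE_DEVICES variable to the card's UUID. A sketch of pulling that UUID out — the sample `nvidia-smi -L` line below uses a made-up UUID; run the real command on the server:

```shell
# Illustrative `nvidia-smi -L` output; the UUID here is invented.
gpu_list='GPU 0: Quadro P400 (UUID: GPU-1b2c3d4e-5f60-7a8b-9c0d-e1f2a3b4c5d6)'

# Extract the UUID; this value goes into the container's
# NVIDIA_VISIBLE_DEVICES variable, alongside --runtime=nvidia in the
# container's Extra Parameters.
echo "$gpu_list" | sed -n 's/.*UUID: \(GPU-[0-9a-f-]*\)).*/\1/p'
```

If both containers have identical GPU settings and only one transcodes on the GPU, the remaining difference is usually inside Plex itself (hardware transcoding enabled in that server's settings, and a Plex Pass claim on that instance).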
  15. I deleted those two files manually and set mirror syslog to flash again. I'll have to look into the remote option. Haven't had time to do that just yet.
  16. Confirmed. But no logs since 7-30. That was in /boot/logs using MC. Is there another location I should be looking?
  17. I set up syslog -> mirror to flash. I'm still working on grabbing the most recent log; which folder would that be in? I removed the static IP and restarted the docker, and now it's handing out an IP outside of my DHCP range that's tied to one of my APs, so that won't work. Is there a way to force a different IP without defining a static one? EDIT: changed the network to bridge, think that's it.
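A sketch of grabbing the newest mirrored log. /tmp/demo_logs stands in for the flash log directory so this runs anywhere (on Unraid the mirror lands under /boot/logs, as mentioned in post 16), and the rotated filename is illustrative:

```shell
# Stand-in for /boot/logs, populated so the sketch is self-contained.
LOGDIR=/tmp/demo_logs
mkdir -p "$LOGDIR"
printf 'older entries\n' > "$LOGDIR/syslog.1"
touch -t 202007300000 "$LOGDIR/syslog.1"   # backdate the rotated copy
printf 'newest entries\n' > "$LOGDIR/syslog"

# Sort by modification time; the newest log file is listed first.
ls -t "$LOGDIR" | head -n 1
```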
  18. No, I have a separate discrete pfSense box handling connectivity and DHCP for me, with the Unraid install running storage and dockers separately.
  19. Thank you! Right now I have a static IP defined for my Unraid server, which was set in my pfSense setup. The unifi-controller has a separate static IP, also defined from pfSense. Is your thought to keep the static IP for unifi-controller, or abandon it and see if the trouble continues?
  20. Made it 5 days of uptime, then a hard lock last night. I left the house, and when I returned the server was unresponsive; it didn't even have any output on the screen. I don't think I was able to recover the crash logs because the system was locked. Not sure where to go from here; any guidance?
  21. Turned on my Plex docker and the BTRFS errors re-appeared right away:

     (regular) error at logical 224809541632 on dev /dev/nvme0n1p1
     Aug 3 15:01:39 Unraid kernel: BTRFS warning (device nvme0n1p1): checksum error at logical 224809545728 on dev /dev/nvme0n1p1, physical 202227412992, root 5, inode 1290252, offset 19296256, length 4096, links 1
     Aug 3 15:01:39 Unraid kernel: BTRFS error (device nvme0n1p1): bdev /dev/nvme0n1p1 errs: wr 0, rd 0, flush 0, corrupt 6, gen 0
     Aug 3 15:01:39 Unraid kernel: BTRFS error (device nvme0n1p1): unable to fixup (regular) error at logical 224809545728 on dev /dev/nvme0n1p1

     With Plex turned off, no errors are found. Unfortunately, during COVID Plex is a tier-1 app, so the family depends on it.
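The inode number in a BTRFS checksum error identifies the corrupted file, which can confirm whether the damage is confined to Plex's appdata. A sketch using one of the kernel lines above as sample input; the inspect command at the end assumes the cache is mounted at /mnt/cache:

```shell
# One of the kernel warnings quoted above, used as sample input.
line='Aug 3 15:01:39 Unraid kernel: BTRFS warning (device nvme0n1p1): checksum error at logical 224809545728 on dev /dev/nvme0n1p1, physical 202227412992, root 5, inode 1290252, offset 19296256, length 4096, links 1'

# Pull the inode number out of the warning.
inode=$(echo "$line" | sed -n 's/.*inode \([0-9]*\).*/\1/p')
echo "$inode"

# On the live system, map that inode to a file path inside the mount:
#   btrfs inspect-internal inode-resolve "$inode" /mnt/cache
```

If the path resolves to a replaceable Plex file (a cached transcode or database), deleting and regenerating it is often enough; a `btrfs scrub` afterward reports whether any other files are affected.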
  22. Nothing to be concerned with in those IRQ traces, then? I'm working to isolate, if I can, what's causing my lockups. From my most recent effort it seems to be tied to dockers.