mintjberry

Everything posted by mintjberry

  1. Hi all, Having trouble getting Frigate running in Docker on unRAID. When I log into the web UI, all I get is the Frigate logo in the top left and a spinning blue icon in the middle, and it never loads. I assume the appdata config/frigate directory should not be empty? The 'frigate' folder is created, but there is nothing inside it, and I get permission errors when trying to create folders/files while the docker image is running. I can't see the logs, as the log window auto-closes after about half a second. I have had no issues with any of my other containers. I've deleted the docker image, manually created the frigate folder, and created the subfolder and frigate.yaml file (with some default code), but the same issue occurs. Any ideas, or am I missing something obvious?
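
For reference, a minimal sketch of the two things that usually fix this kind of symptom, assuming the standard unRAID appdata path and that the container runs as nobody:users (99:100); the camera URL is a placeholder, and recent Frigate builds look for config.yml under the container's /config mount:

```
# Fix ownership so the container can write into its appdata folder
# (unRAID containers typically run as nobody:users, i.e. 99:100)
chown -R 99:100 /mnt/user/appdata/frigate

# Give Frigate a minimal config to load; the RTSP URL is a placeholder
cat > /mnt/user/appdata/frigate/config.yml <<'EOF'
mqtt:
  enabled: false

cameras:
  front_door:
    ffmpeg:
      inputs:
        - path: rtsp://user:pass@192.168.1.10:554/stream
          roles:
            - detect
EOF
```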
  2. In the last week or so I've noticed that my unassigned devices shares do not show up at all in unRAID, so I can't access them over my Windows network. My other shares are fine; the unassigned devices share "SSDTorrents", for example, does not show at all. I have attached diagnostics. I have tried removing the share and re-adding it, which hasn't helped. Any ideas? Cheers tower-diagnostics-20220622-2216.zip
  3. Hi all, I'm moving location shortly and will be using Starlink internet. I would like to continue to have remote access to my Unraid server for Plex and security camera access. However, since Starlink uses CGNAT there is no option to get a static IP, so I cannot port forward to my Unraid server. I am in the process of setting up a VPN on a VPS hosted via Oracle (the free tier), and will then reverse proxy in to access various services. However, I'm not 100% sure which option I need to configure in WireGuard on Unraid to get a point-to-point connection to the VPS so I don't need to open any ports. Is it server to server? I don't want to expose my entire network, only certain internal services running on Unraid, or on one of the VMs running on it.
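
In raw WireGuard terms, the point-to-point setup I'm describing would look roughly like this on the unRAID side (keys, addresses, and endpoint are all placeholders). PersistentKeepalive is what keeps the tunnel alive through CGNAT, and a narrow AllowedIPs is what stops the whole LAN from being exposed:

```
[Interface]
# unRAID's address inside the tunnel subnet
Address = 10.0.0.2/24
PrivateKey = <unraid-private-key>

[Peer]
# The WireGuard server running on the Oracle VPS
PublicKey = <vps-public-key>
Endpoint = vps.example.com:51820
# Route only the tunnel peer through WireGuard, not the whole LAN
AllowedIPs = 10.0.0.1/32
# Re-ping every 25s so the CGNAT mapping never expires
PersistentKeepalive = 25
```

The VPS end then reverse proxies only to the tunnel address, so nothing else on the network is reachable from outside.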
  4. OK, so I downloaded the latest qFlood version and it works fine; something about the latest linuxserver version appears to crap its pants for me. Is there any way to roll back on the linuxserver version? I tried some options as per https://hub.docker.com/r/linuxserver/qbittorrent but they didn't work, so I probably did it wrong. For example, I tried lscr.io/linuxserver/qbittorrent:12.11.20 with no luck.
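
For anyone else trying this, rolling back means pinning one of the full image tags rather than a bare version number; the tag below is illustrative only, so check the actual list on the Docker Hub tags page linked above:

```
# Pull a pinned release instead of :latest (tag is illustrative; see
# https://hub.docker.com/r/linuxserver/qbittorrent/tags for real ones)
docker pull lscr.io/linuxserver/qbittorrent:4.4.2-r0-ls23
```

On unRAID, that same repo:tag string goes in the Repository field of the container template.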
  5. I am having serious problems with qBittorrent on unraid; it's been happening for approximately the last week, and I need help figuring out what's going on. Whenever I download something, several of my 3900X CPU cores start maxing out and the downloads run very slowly. I have gigabit internet and they download at maybe 1MB/s total. It's like there's a disk throttling issue or something: if I watch the download rate at the bottom, it jumps up to 1 or 2MB/s, drops to almost zero after a second, then jumps up again. No idea why the CPU is being hit so hard; as soon as I stop the qBittorrent docker, my cores all drop down to practically 0%. All torrents download to an SSD used for torrents and nothing else, and only half of the SSD is full. I have rebooted the server, renamed the qBittorrent config folder to start fresh, and deleted the image and downloaded/set it up again. Nothing obvious shows in the logs.
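
If anyone wants to help me dig, this is roughly how I plan to narrow down whether it's the container or the disk (the container name is assumed to be qbittorrent; iostat comes from the sysstat package and may need installing):

```
# Watch per-container CPU and block I/O while a torrent downloads
docker stats qbittorrent

# Check whether the SSD is saturated: on its row, high %util with
# tiny MB/s would point at I/O stalls rather than raw CPU load
iostat -x 2
```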
  6. Hi, I've been having an issue for the last few weeks where, on one particular drive only, I'm getting temperature warnings even though the threshold is set to zero. This has persisted after rebooting the server, and I've also tried setting the value above 60°C, which didn't help. Any ideas about what could be causing this? Diagnostics attached. tower-diagnostics-20220104-2215.zip
  7. I've recently been having an issue with a drive where, each time my unraid PC is started, I get a bunch of read errors related to that particular drive only. The drive is never disabled in unraid after the read failures, and I can read data from it fine. The affected disk is #4 - sdf. I'm using a Supermicro SAS controller and have switched the location of the drive, but it still occurs regardless. SMART and diagnostics attached. I'm thinking I should just replace the drive, but wanted to double-check here first. tower-diagnostics-20210112-2240.zip tower-smart-20210112-0042.zip
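
For completeness, the attached SMART report is the sort of output a check like this produces (the sdX letter can shift between boots, so it's worth confirming which physical disk currently maps to sdf first):

```
# Confirm which physical disk is behind sdf this boot, then dump SMART data
ls -l /dev/disk/by-id/ | grep sdf
smartctl -a /dev/sdf
```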
  8. Thank you. Appreciate the help. I'll be more careful next time.
  9. Actually, when I hover over the drive it says: device missing / disabled, contents emulated. Great. Can I now stop the array, add that drive in, and let it rebuild from parity?
  10. I can't see it saying anywhere that the drive is being emulated, but I can browse the data on it as if it's there, so it looks like it is being emulated?
  11. In this case, I'm pretty sure my parity drive should still be all good. If this drive isn't recoverable, can I attempt to rebuild the drive from parity?
  12. When I press the check button nothing happens; it looks like the data on the drive may no longer be viable...
  13. Have set to XFS, now I have that option, so will proceed with the repair. Cheers
  14. Does setting the file system type of that unmountable drive to XFS actually attempt to reformat the drive? If not, could I just set the file system as XFS on the unmountable drive, then try and recover it through your previous link? Or do I need to do it via the command line? Thanks
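
For reference, the command-line version of the repair would be along these lines, run with the array started in maintenance mode; md1 is a placeholder for the affected disk's array slot, and using the /dev/mdX device (rather than the raw /dev/sdX one) keeps parity in sync:

```
# Dry run first: report what would be fixed without writing anything
xfs_repair -n /dev/md1

# If that output looks sane, run the actual repair
xfs_repair -v /dev/md1
```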
  15. The disks have not been added to the pool. They are just unassigned devices. They haven't been precleared or formatted. When I did a new config, I chose all existing drives that were previously in the pool, no more, no less.
  16. Yep it's in maintenance mode, I think it may be because the file system is set to auto. Can I stop the array, set the drive to XFS, then try a repair?
  17. Thanks. I don't have the option to 'check file system status'. I can see it on other drives that are in my array, but not the drive that I need to try it on.
  18. I received a couple of new drives that I was planning to put into my system. In my excitement I thought I had stopped the array, but I hadn't, and I added the drives. I also removed one other drive, then put it back in. None of the drives were spinning; the entire array was idle. Then I had some issues with a single read error on one of the drives, which I assume was the existing drive that I removed and put back in. unRAID wanted me to do a full parity check, but I didn't want to do that, and decided to shut the array down and make a new config with all the existing settings. I thought that this would bring the array back to fully operational. I was wrong. It looks like the file system on that drive has been wiped; it's not XFS like my other drives, it says 'auto'. The data should still be on there though (this is a media server, nothing super important is stored). I'm guessing that unRAID is currently emulating the data on it, though I can't see a message saying that anywhere. All I want to do is bring this drive back online and not lose any data if possible. If a parity rebuild is required then so be it. Should I format the disk like unRAID wants me to, or can I restore the file system that was on it and carry on using it? I've attached my diagnostics. tower-diagnostics-20201021-1320.zip
  19. I'm currently using a 1TB SATA SSD on my unRAID server which stores my dockers and virtual machines; it's also where my torrents download to. The file system is btrfs. I've purchased a 1TB NVMe SSD that I plan to move my dockers and virtual machines to, and the existing 1TB SATA SSD will then only be used for torrent downloads/seeding. My questions are: 1) What is the best way to replace the SATA drive with the NVMe drive? There are a lot of small Plex thumbnail files that I assume will take a long time to transfer over. 2) What file system should I be using for this NVMe drive? I have read about issues with btrfs and excess writes, and I don't want to wear out my NVMe drive prematurely. Should I switch to XFS? I want TRIM enabled on the NVMe.
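
For question 1, the copy itself would presumably be something like this, done with the Docker service and VM manager stopped so nothing is writing to the source (both mount points are placeholders for however the pools end up named):

```
# Copy appdata/domains/system across, preserving ownership, permissions,
# extended attributes, and sparse vdisk files
rsync -avhX --sparse --progress /mnt/cache/ /mnt/nvme/

# Then repoint the relevant shares at the new pool before restarting Docker/VMs
```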
  20. Hi, is the AX860i supported? I'm looking at purchasing it. I don't know how to integrate 'cpsumoncli' with this plugin (if that's even what I need to do). Thanks
  21. Can anyone else clarify if this is expected behaviour?
  22. I'm assuming this is maybe expected behaviour, but the session is not stored at all. Is there any way to work around that? I.e. I open a directory through the vscode tab, maybe run an npm script and open a few files. If I then close that browser tab and open it back up, all the layout/terminal sessions are lost/no longer running. I want to keep the session regardless of whether I close the tab or open the vscode tab on another PC.
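
The workaround I've seen suggested is to run anything long-lived inside tmux in the integrated terminal, so the session outlives the browser tab (tmux may need installing inside the code-server container first):

```
# First time: start a named session, then launch things inside it
tmux new -s dev
npm run watch

# After closing and reopening the tab: rejoin the same session
tmux attach -t dev
```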