lovingHDTV

Everything posted by lovingHDTV

  1. Just wanted to say thanks. Works great. Backed up my current config, stopped the older version, installed this, imported my backup, and updated the firmware for all my access points. Worked perfectly. Thanks.
  2. OK, rebooted in safe mode and added the cache drive back; I guess Unraid lost track of it. Rebooted and it's up and running. Thanks.
  3. Wonderful, the cache drive didn't come back. It is only a few months old.
  4. Like so: append vfio-pci.ids=8086:10d6 nvme_core.default_ps_max_latency_us=0 initrd=/bzroot
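     For context, a minimal sketch of where that line lives: on Unraid the kernel parameters go on the append line in syslinux/syslinux.cfg on the flash drive (the label below is the stock one; yours may differ):

     label Unraid OS
       menu default
       kernel /bzimage
       append vfio-pci.ids=8086:10d6 nvme_core.default_ps_max_latency_us=0 initrd=/bzroot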
  5. This AM I noticed none of my dockers were running. Looking at the log file I found this:
     Apr 23 00:00:02 tower Plugin Auto Update: Checking for available plugin updates
     Apr 23 00:00:05 tower Plugin Auto Update: Update available for ca.mover.tuning.plg (Not set to Auto Update)
     Apr 23 00:00:05 tower Plugin Auto Update: Update available for dynamix.wireguard.plg (Not set to Auto Update)
     Apr 23 00:00:06 tower Plugin Auto Update: Update available for nvidia-driver.plg (Not set to Auto Update)
     Apr 23 00:00:06 tower Plugin Auto Update: Update available for parity.check.tuning.plg (Not set
  6. I had three servers defined and they worked fine. I upgraded all my dockers and moved to 6.9.1 (was 6.9.0-RC<something>), and now MineOS only shows the one server I added most recently. I do see them in the games area, and their backups are there as well. Ideas on why MineOS has lost track of them? Thanks
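     For anyone hitting the same thing: a quick way to confirm the server data survived the upgrade is to list the MineOS data directories from a console. The appdata path below is an assumption; use whatever your container maps /var/games/minecraft to.

     # server definitions live under servers/, their backups under backup/ and archive/
     ls /mnt/user/appdata/minecraft/servers/
     ls /mnt/user/appdata/minecraft/backup/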
  7. No, I was not selecting a group ID. When I clicked on it the only option was --, so I didn't think it necessary. I typed a 1 and put them all in the same group, and it now works. Thanks, david
  8. I removed the plugin and the sqlite database from the flash. Re-installed, and I can see the scan find my drives. I then add a Disk Layout of 1x16. In the configuration tab I see the layout and drives. I then select the trayID for a drive and hit save. It goes off, and when it comes back nothing changes. I'll just wait until 6.9 goes GA. I had to switch because of my SSD drives. Thanks
  9. Not concerned about the missing nvme drive. It seems that others can actually update the tray numbers, which I can't do either.
  10. Just installed, running 6.9.0-rc2. It scans and finds my non-nvme drives, but when I go to assign them to a trayID, the save button doesn't do anything. It just resets the trayID settings back to -- and nothing else. Ideas on how I can figure out why? One small update: when scanning, it does find my nvme drive in the window that pops up, but it never appears in the device list. Thanks, david
  11. I'm seeing more error reports. Ideas?
      2021/01/06 13:27:25 [error] 431#431: *4765522 user "root" was not found in "/config/nginx/.htpasswd", client: 37.46.150.24, server: _, request: "GET / HTTP/1.1", host: "<myExternalIP>:443"
      2021/01/06 13:27:33 [error] 430#430: *4768442 user "root" was not found in "/config/nginx/.htpasswd", client: 37.46.150.24, server: _, request: "GET / HTTP/1.1", host: "<myExternalIP>:443"
      2021/01/06 13:27:54 [error] 431#431: *4770435 user "report" was not found in "/config/nginx/.htpasswd", client: 37.46.150.24, server: _, request: "GET / HTTP/1
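      Those look like Internet scanners guessing basic-auth logins from outside. One hedged mitigation sketch, assuming the proxy only needs to be reachable from the LAN (the subnet below is an assumption):

      # in the nginx server/location block: allow local clients, drop everyone else
      allow 192.168.1.0/24;
      deny all;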
  12. Today I noticed that nginx is running at ~60% CPU constantly. I do see a lot of these messages in the nginx/error.log:
      2021/01/05 21:15:50 [error] 431#431: send() failed (111: Connection refused) while resolving, resolver: 127.0.0.11:53
      2021/01/05 21:15:50 [error] 430#430: send() failed (111: Connection refused) while resolving, resolver: 127.0.0.11:53
      2021/01/05 21:15:55 [error] 431#431: send() failed (111: Connection refused) while resolving, resolver: 127.0.0.11:53
      2021/01/05 21:15:55 [error] 430#430: send() failed (111: Connection refused) while resolving, resolver:
  13. OK, I guess I really screwed things up, exactly what I didn't want to do. Here is what cache is saying now. The second balance happened, but btrfs is still running, keeping me from stopping the array. It also now says single instead of RAID1. I can't show a picture of the main page because I can't stop the array, but I have the cache pool set to 2 slots; the first slot is this drive and the second is set to no device. How do I recover? The system is running, but I see a lot of these in the log file: For clarity sake, t
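      For anyone following along, the pool's current profile and any running balance can be checked from a console like this (a sketch; /mnt/cache is the standard Unraid mount point):

      # show whether data/metadata are single or RAID1, plus usage
      btrfs filesystem df /mnt/cache
      # check for, or cancel, an in-progress balance
      btrfs balance status /mnt/cache
      btrfs balance cancel /mnt/cache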
  14. The balancing finished. I then stopped all dockers and stopped the array, changed the cache to only have the new drive, and restarted the array. It is now doing a balance again. Is this expected? I can see the data mounted.
  15. I backed up the drive using the CA plugin, then started this last night. It is now about 93% done. Thanks.
  16. I've read the wiki on replacing a cache drive but am not sure I understand what is being asked and why it is necessary. I have a 2TB spinning disk for cache formatted with BTRFS. It is in the system and running fine. I also installed a new 2TB M.2 SSD that I want to move everything from cache over to. I have VMs, dockers, mysql, etc. on the cache drive. I was planning on:
      - stopping the VMs, dockers, and the array
      - formatting the new drive
      - copying everything from the old drive to the new drive
      - assigning the new drive as cache
      - starting everything back up
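      A minimal sketch of the copy step, assuming the new SSD were mounted at something like /mnt/disks/newcache via Unassigned Devices (the path is an assumption):

      # -a preserves ownership/permissions/timestamps; trailing slashes copy contents
      rsync -avh /mnt/cache/ /mnt/disks/newcache/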
  17. Maybe this can help? If I run ps on it I see:
      12316  5624 root root find  find /mnt/cache/appdata -noleaf
      Anyone know who/what would be running find on my appdata with the -noleaf option? david
  18. When my server has issues, I open a console and look at the processes with top in an attempt to figure out what is going on. However, since every docker runs things as root, I can see many processes running, all owned by root. A common one is find. Now how do I figure out which docker, or whether the unRaid OS itself, is running find and why? I do have my mover scheduled to only run at 1AM, so I don't think it is unRaid. So how does everyone else deal with this? Can you set up the dockers to have their own user and create a user for each docker by name? I could then at least see
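      One trick that may help map a mystery PID back to a container (a sketch; 12316 is just an example PID): the cgroup a process belongs to encodes the Docker container ID, which docker ps can resolve to a name.

      # prints a docker/<container-id> path if the process runs inside a container
      cat /proc/12316/cgroup
      # match that ID against the full container IDs
      docker ps --no-trunc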
  19. Anyone still running this docker? When I try to go to the web address [IP]:5060 it doesn't connect to anything. The log appears as if it is running:
      time="2020-12-01T21:32:30-05:00" level=info msg="Cardigann dev"
      time="2020-12-01T21:32:30-05:00" level=info msg="Reading config from /.config/cardigann/config.json"
      time="2020-12-01T21:32:30-05:00" level=info msg="Found 0 indexers enabled in configuration"
      time="2020-12-01T21:32:30-05:00" level=info msg="Listening on 0.0.0.0:5060"
      I can get a console and run a query, though it errors out because I don't have anything configured.
  20. unRAID Server Preclear of disk 31454B4A52525A5A
      Cycle 1 of 1, partition start on sector 64.
      Step 1 of 5 - Pre-read verification: [27:36:54 @ 100 MB/s] SUCCESS
      Step 2 of 5 - Zeroing the disk: [31:47:24 @ 87 MB/s] SUCCESS
      Step 3 of 5 - Writing unRAID's Preclear signature: SUCCESS
      Step 4 of 5 - Verifying unRAID's Preclear signature: SUCCESS
      Step 5 of 5 - Post-Read verification: FAIL
  21. Nope, never ran a plugin for Plex. Just the linuxserver.io Plex docker. Hence my confusion.
  22. I ran open files. It is a script that runs to tell you what processes have open files; these open files keep the OS from unmounting the drives. I could also run top, ps aux | grep Plex, any number of ways to see the process. I tried kill -9 and kill -15, but the process always got restarted. Dockers were turned off as well, but some daemon was restarting Plex. I've seen this a couple of times now, when I stop the array before all dockers have finished starting.
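      For the record, two stock commands that show what is pinning a mount (a sketch; /mnt/cache is just an example path):

      # list processes with files open under that filesystem
      lsof /mnt/cache
      # same idea, showing owning PIDs and access modes
      fuser -vm /mnt/cache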
  23. UPDATE: I was able to restart the server, even though there were open files. I then let it reboot, started all the dockers, manually stopped the Plex docker, stopped the array, and it worked. Not sure why the Plex docker doesn't behave like everyone else, but I was able to add my new drive. I've noticed recently that my tower has issues unmounting drives. I ran open files and the only open files are from Plex Media Server and tuner, scripts, etc.; however, dockers had been shut down by the shutdown process. If I kill the Plex Media Server process it just spawns another one. Making it impos
  24. I have two subnets that I would like access to: my LAN subnet 192.168.1.0 and my AUG subnet 10.10.1.0. Unraid is on the LAN and I can access this fine; however, I cannot access the AUG subnet via WireGuard. I can access it fine when I'm on my LAN. Can this be done? For clarification: I can ping 10.10.1.116 from the Unraid box, but I cannot get access to it via WG. Thanks, david
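      In case a concrete example helps, the usual fix is to list the second subnet in the peer's allowed IPs so the client routes that traffic into the tunnel, and to make sure the Unraid end NATs or forwards it onward. A minimal sketch of the client-side peer block, with placeholder key/endpoint and the stock port assumed:

      [Peer]
      PublicKey = <server public key>
      Endpoint = <myExternalIP>:51820
      # both subnets listed so AUG traffic also goes through the tunnel
      AllowedIPs = 192.168.1.0/24, 10.10.1.0/24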
  25. Played with it a bit more. It doesn't have anything to do with br0.10, as it doesn't work for br0 either. david