lovingHDTV

Members
  • Content Count

    518
  • Joined

  • Last visited

Community Reputation

0 Neutral

About lovingHDTV

  • Rank
    Member

Converted

  • Gender
    Undisclosed


  1. I had three servers defined and they worked fine. I upgraded all my dockers and went to 6.9.1 (was 6.9.0-RC<something>), and now MineOS only shows the one server I added most recently. I do see them in the games area, and their backups are there as well. Ideas on why MineOS has lost track of them? thanks
  2. No, I was not selecting a group ID. When I clicked on it, the only option was --, so I didn't think it necessary. I typed a 1, putting them all in the same group, and it now works. thanks, david
  3. I removed the plugin and the sqlite database from the flash. After re-installing, I can see the scan find my drives. I then add a Disk Layout of 1x16. In the configuration tab I see the layout and drives. I then select the trayID for a drive and hit save. It goes off, and when it comes back nothing has changed. I'll just wait until 6.9 goes GA. I had to switch because of my SSD drives. thanks
  4. I'm not concerned about the missing NVMe drive. It seems that others can actually update the tray numbers; I can't do that either.
  5. Just installed, running 6.9.0-rc2. It scans and finds my non-NVMe drives, but when I go to assign them to a trayID, the save button doesn't do anything. It just resets the trayID settings back to -- and nothing else. Ideas on how I can figure out why? One small update: when scanning, it does find my NVMe drive in the window that pops up, but it never appears in the device list. thanks david
  6. I'm seeing more error reports. Ideas? (a log-parsing sketch for these lines follows this list)
     2021/01/06 13:27:25 [error] 431#431: *4765522 user "root" was not found in "/config/nginx/.htpasswd", client: 37.46.150.24, server: _, request: "GET / HTTP/1.1", host: "<myExternalIP>:443"
     2021/01/06 13:27:33 [error] 430#430: *4768442 user "root" was not found in "/config/nginx/.htpasswd", client: 37.46.150.24, server: _, request: "GET / HTTP/1.1", host: "<myExternalIP>:443"
     2021/01/06 13:27:54 [error] 431#431: *4770435 user "report" was not found in "/config/nginx/.htpasswd", client: 37.46.150.24, server: _, request: "GET / HTTP/1
  7. Today I noticed that nginx is running at ~60% CPU constantly. I do see a lot of these messages in the nginx error.log (a resolver reachability sketch follows this list):
     2021/01/05 21:15:50 [error] 431#431: send() failed (111: Connection refused) while resolving, resolver: 127.0.0.11:53
     2021/01/05 21:15:50 [error] 430#430: send() failed (111: Connection refused) while resolving, resolver: 127.0.0.11:53
     2021/01/05 21:15:55 [error] 431#431: send() failed (111: Connection refused) while resolving, resolver: 127.0.0.11:53
     2021/01/05 21:15:55 [error] 430#430: send() failed (111: Connection refused) while resolving, resolver:
  8. OK, I guess I really screwed things up, exactly what I didn't want to do. Here is what cache is saying now. The second balance happened, but btrfs is still running, keeping me from stopping the array. It also now says single instead of RAID1. I can't show a picture of the main page because I can't stop the array. But I have the cache pool set to 2 slots; the first slot is this drive, and the second is set to no device. How do I recover? The system is running, but I see a lot of these in the log file: For clarity's sake, t
  9. The balancing finished. I then stopped all dockers, stopped the array, changed the cache to only have the new drive, and restarted the array. It is now doing a balance again. Is this expected? I can see the data mounted.
  10. I backed up the drive using the CA plugin, then started this last night. It is now about 93% done. thanks
  11. I've read the wiki on replacing a cache drive, but I'm not sure I understand what is being asked and why it is necessary. I have a 2TB spinning disk for cache, formatted with BTRFS. It is in the system and running fine. I also installed a new 2TB M.2 SSD that I want to move everything on the cache over to. I have VMs, docker, mysql, etc. on the cache drive. I was planning on: stopping the VMs, dockers, and the array; formatting the new drive; copying everything from the old drive to the new drive; assigning the new drive as cache; and starting everything back up. (a copy-verification sketch follows this list)
  12. Maybe this can help? If I run ps on it I see:
      12316 5624 root root find find /mnt/cache/appdata -noleaf
      Anyone know who/what would be running find on my appdata with the -noleaf option? david
  13. When my server has issues, I open a console and look at the processes with top in an attempt to figure out what is going on. However, since every docker runs things as root, I see many processes running, all owned by root. A common one is find. Now how do I figure out which docker, or whether the unRaid OS itself, is running find, and why? (a PID-to-container sketch follows this list) I do have my mover scheduled to only run at 1AM, so I don't think it is unRaid. So how does everyone else deal with this? Can you set up the dockers to have their own user and create a user for each docker by name? I could then at least see
  14. Anyone still running this docker? When I try to go to the web address [IP]:5060 it doesn't connect to anything. The log appears as if it is running (a connectivity-check sketch follows this list):
      time="2020-12-01T21:32:30-05:00" level=info msg="Cardigann dev"
      time="2020-12-01T21:32:30-05:00" level=info msg="Reading config from /.config/cardigann/config.json"
      time="2020-12-01T21:32:30-05:00" level=info msg="Found 0 indexers enabled in configuration"
      time="2020-12-01T21:32:30-05:00" level=info msg="Listening on 0.0.0.0:5060"
      I can get a console and run a query, though it errors out because I don't have anything configured.
  15. ################################################################
      #  unRAID Server Preclear of disk 31454B4A52525A5A
      #  Cycle 1 of 1, partition start on sector 64.
      #
      #  Step 1 of 5 - Pre-read verification:   [27:36:54 @ 100 MB/s] SUCCESS
      #  Step 2 of 5 - Zeroing the disk:        [31:47:24 @ 87 MB/s]  SUCCESS
      #  Step 3 of 5 - Writing unRAID's Preclear signature:           SUCCESS
      #  Step 4 of 5 - Verifying unRAID's Preclear signature:         SUCCESS
      #  Step 5 of 5 - Post-Read verification:                        FAIL
      ################################################################
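
A sketch for post 6 above: tallying which clients and usernames keep failing against .htpasswd. This is a minimal Python illustration, not part of the container; the log path and line layout are assumptions taken from the snippet quoted in that post.

import re
from collections import Counter

# Matches lines like:
#   2021/01/06 13:27:25 [error] 431#431: *4765522 user "root" was not
#   found in "/config/nginx/.htpasswd", client: 37.46.150.24, ...
PATTERN = re.compile(r'user "(?P<user>[^"]+)" was not found .*? client: (?P<client>[\d.]+)')

clients, users = Counter(), Counter()
with open("/config/nginx/error.log") as log:  # path assumed from the post
    for line in log:
        m = PATTERN.search(line)
        if m:
            clients[m.group("client")] += 1
            users[m.group("user")] += 1

print("top clients:", clients.most_common(5))
print("top usernames:", users.most_common(5))

Repeated root/admin-style usernames from a single outside IP, as in the quoted lines, usually mean the port is exposed to the internet and being probed.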
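For post 7: 127.0.0.11 is Docker's embedded DNS server, which only answers inside containers attached to a user-defined network, so "connection refused" suggests nginx is pointed at a resolver that isn't reachable from where it runs. A quick liveness check, run from a console inside the container (the embedded DNS listens on TCP 53 as well as UDP):

import socket

try:
    # Refused or timed out means nothing is listening on the resolver address.
    with socket.create_connection(("127.0.0.11", 53), timeout=2):
        print("resolver reachable on 127.0.0.11:53")
except OSError as exc:
    print("resolver not reachable:", exc)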
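For post 11: after copying the old cache to the new drive, one quick sanity check is to compare file counts and total bytes on both sides before reassigning the cache slot. The mount points below are placeholders; substitute wherever the old and new drives are actually mounted.

import os

def tree_stats(root):
    """Count regular files and total bytes under root (symlinks skipped)."""
    files = total = 0
    for dirpath, _dirs, names in os.walk(root):
        for name in names:
            path = os.path.join(dirpath, name)
            if not os.path.islink(path):
                files += 1
                total += os.path.getsize(path)
    return files, total

for root in ("/mnt/cache", "/mnt/disks/new_cache"):  # placeholder paths
    files, total = tree_stats(root)
    print(f"{root}: {files} files, {total} bytes")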
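For posts 12 and 13: on Linux, /proc/<pid>/cgroup for a containerized process includes the 64-character Docker container ID, so a root-owned process like the find in post 12 can be traced back to its container from the host without giving each docker its own user. A sketch, run on the host (the regex covers the usual Docker cgroup path layouts):

import re
import subprocess
import sys

def container_of(pid):
    """Return the Docker container ID found in /proc/<pid>/cgroup, or None."""
    try:
        with open(f"/proc/{pid}/cgroup") as f:
            text = f.read()
    except FileNotFoundError:
        return None
    m = re.search(r"docker[/-]([0-9a-f]{64})", text)
    return m.group(1) if m else None

pid = sys.argv[1]  # e.g. 12316 from the ps output in post 12
cid = container_of(pid)
if cid:
    # Resolve the ID to a container name with the docker CLI.
    name = subprocess.run(
        ["docker", "ps", "--filter", f"id={cid[:12]}", "--format", "{{.Names}}"],
        capture_output=True, text=True).stdout.strip()
    print(f"PID {pid} belongs to container {name or cid[:12]}")
else:
    print(f"PID {pid} does not appear to be a container process")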
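For post 14: the log shows the app listening on 0.0.0.0:5060 inside the container, so if [IP]:5060 doesn't connect, the usual suspect is the container's port mapping rather than the app itself. A small check that separates "nothing answering at all" from "app reachable but returning an HTTP error" (the address is a placeholder for your server's IP):

import urllib.error
import urllib.request

URL = "http://192.168.1.10:5060/"  # placeholder host; port from the post

try:
    with urllib.request.urlopen(URL, timeout=5) as resp:
        print("HTTP", resp.status, "from", URL)
except urllib.error.HTTPError as exc:
    print("app answered with HTTP error:", exc.code)  # reachable; app-level issue
except (urllib.error.URLError, OSError) as exc:
    print("no connection:", exc)  # likely a missing or incorrect port mapping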