Wolfman1138

Members
  • Posts: 9
  • Joined
  • Last visited

  1. Hey Ken-ji, I have reconfigured my server to have a standalone XFS drive for Dropbox to use, so I have reinstalled the docker. But I have had some issues, and I am hoping that you'd be able to help. I installed Dropbox and it did not appear to sync. I went into the console and ran "dropbox.py status", which said that Dropbox wasn't running. So I tried "dropbox.py start", which said the daemon wasn't running. Then I tried "dropbox.py update" followed by "dropbox.py start -i", and it seemed to come up. I left the server, and when I came back, the Docker drive was full. Apparently, whatever I did installed the Dropbox connection point on docker.img. Yikes. I uninstalled it, wiped all the data, and reinstalled the docker. And here I am. My new log file looks like this, and I am back to Dropbox not running: Any ideas where to start? (There is a short sketch for checking the volume mapping and daemon status after this post list.) Thank you. - Wolfman
  2. OK, thanks, that sounds like a good plan. I was trying to avoid a parity rebuild, but it sounds like I will have to do one anyway. After lots of reading, I think I am understanding things a little better.
  3. Help. I accidentally killed my script window while running the "clear an array drive" script. What do I do? I know it was about 3TB into a 10TB drive at the time. Right now the 10TB drive reports back as 33.6GB on the Main screen. From the reading I did, it may not report the right size. I searched the forum, and apparently no one else is as dumb as I am and killed the script window. Can I get back the progress window? Can I check the processes to see if it is still running? (There is a small command sketch for that after this post list.) I was removing the drive because I saw a few reallocated sectors (a pre-fail attribute). I had already "unbalanced" all the data off the disk into the rest of the array, so I am not worried about data loss right now. I just want to finish removing this drive so I can get the array back to normal. Thanks for the help in advance.
  4. Cool! I think I may have found a bug in Unraid. This is really weird. The container port was grayed out and I could not change it; I could only edit the host port. I was able to fix the container port with the following steps: (1) edit the docker and change Network to Host mode; (2) set the host port back to match the container default of 8989; (3) restart the docker by hitting Apply (the docker didn't work, but it changed the container port back to 8989); (4) edit the docker again and change Network to Bridge; (5) restart the docker; (6) edit the host port again and set it to the desired mapped port; (7) restart the docker. I can recreate the mismatched container port by switching back and forth between Host and Bridge while having the host port set to a port that doesn't match the container port. (The port-mapping sketch after this post list shows the relationship the template is trying to express.) Thanks for the help.
  5. One more thing: I am running Unraid 6.8.1. If it helps, here is the Unraid diagnostics file: monstro-diagnostics-20200206-2310.zip
  6. Hi Folks, Sorry about the title, but I messed up some stuff and I can't seem to find anyone else reporting the same issue. I messed around with adding some dockers, and two of my docker web GUIs stopped responding. They are not accessible via their bridge IP address anymore. I didn't make any changes to the two that failed. Example: long ago I had remapped Sonarr to use port 1234. The WebGUI link now reverts to 8989 (the default), but if I type the expected address of :1234 in manually, it still does not work. I cannot get to the GUI at all in this configuration in bridge mode. I can fire up a console on both failing dockers and they look OK to my untrained eye. I read the forums and did some experiments. I can get everything to work if I switch to a fixed IP using br0, but when I reverted back to Bridge, it didn't fix the issue. Host mode did not fix the issue either, and one docker can't run at all in host mode. I also tried rebuilding the docker engine network with these commands: "rm /var/lib/docker/network/files/local-kv.db" and "/etc/rc.d/rc.docker restart". This did not fix the issue. I included a txt capture of some network status commands (ifconfig, docker network ls, docker network inspect bridge, iptables -vnL): Network_Config_issue.txt (There is also a short sketch for checking the container's port mapping after this post list.) Any help would be appreciated.
  7. I updated to 6.6.6 today and had immediate issues. I get minor stuttering in my Win7 VM, which I first noticed with my mouse movement seeming a little jerky, but it became more obvious when streaming media because I'd get broken audio and paused video every few seconds (it is subtle, a few milliseconds, but noticeable). I also want to mention that the CPU pinning plugin did not match the settings I had in the VMs' XML. I have a second Win10 VM that is currently disabled, but the pinning showed both VMs pinning two of the same cores (the Win7 XML pins CPUs 4-11, Win10 uses 12-15); the plugin listed 10 & 11 in both VMs (and 14 & 15 as the general pool). I did not grab a screenshot (stupid! sorry). (There is a short sketch after this post list for comparing the plugin's view against what libvirt actually applied.) I used the revert feature in "OS upgrade" to restore 6.4.0, and it is stable again. Background: I have run Unraid for almost two years with a Win7 VM with hardware passthrough on this machine with no issues. The machine is an Intel Xeon E5-2696 2.2GHz (22 core) with 64GB RAM, a 1050, and USB3 hardware passthrough to the VM. I pin 8 cores, no HT, and dedicate 16GB of RAM to the VM. I had no issues with the setup until the upgrade to 6.6.6. I don't have the diagnostics for 6.6.6 because I already reverted back to 6.4.0 before coming to this forum, so what I am putting up here is what I got for 6.4.0. I am reporting this in case others are seeing it. Let me know if there are files I can go grab for you that may be useful even after the revert. Thanks! And keep up the good work. - Wolfman monstro-diagnostics-20190120-0031.zip
  8. Hi Folks, I need some advice about my Unraid setup. I am trying to run multiple performance VMs with hardware passthrough, plus the underlying Unraid doing disk management and dockers. I have been reading the forums, and I have not come across this exact situation yet. Hardware/software: 500GB 960 EVO NVMe, 250GB 840 EVO SSD, a bunch of spinners, 3 video cards, 1 Inateck USB 3.0 x7 card, a 22-core Xeon server chip + Gigabyte X99 Ultimate Gamer, Unraid v6.3.2, Win10 Pro, Win7 Pro, dockers. My issues started when I completed the transfer of my old bare-metal Win7 machine to a VM. I built the virtual disk from the old 840 SSD and got it all working, including the video and USB passthrough. My Win10 game machine VM was also working with video and USB passthrough. Both VMs had "200GB" drives and I had ~100GB for docker cache. Life was grand. Then I went to expand my cache space with the old SSD. When I added the 250GB to the cache pool, disaster. I expected 750GB, but Unraid moved the pool to RAID1, dropped the total drive space to a "reported" 375GB, and now dockers and VMs would pause, crash, etc. After some reading, I figured I was really only getting ~250GB worth of usable space. After walking through the "Replace your Cache Drive" instructions, I have gotten back to having the 500 and 250 separated. Here is my dilemma: how do I expand my cache space and keep the performance of the NVMe drive for my VMs? Do I set the 840 SSD as the only cache drive in the pool and use the NVMe as a "free" drive in the system? Can I trick Unraid into giving me the full 750GB with the VMs residing on the NVMe drive for performance? (One possible approach is sketched after this post list.) I really want the speed of the SSD caching drive for the large array of spinners, but I don't need the RAID1 nature of btrfs. All that caching space is for temp stuff and backed-up VMs. Thanks for the help in advance. -Wolfman
  9. Hi Ken-Ji, I am a new user and I unfortunately (or fortunately, if you are looking for debug cases) have seen both the "tornado access" issue that Zangief sees and the loss-of-Dropbox-link issue. How my install and issues transpired: I installed the Dropbox container last week and successfully downloaded the contents of my Dropbox after linking the container (I believe it was March 8th). I restarted Unraid several times for various reasons and noticed that Dropbox syncing stopped. I re-linked Dropbox again this morning (using the same computer ID according to the Dropbox logs) -> this seems to have fixed the file syncing issue. Now I see the Tornado warnings. One point to note: the Dropbox logs have an info button. They say that the app version I attached to last Wednesday is different than today's. Could that be the source of the warnings? Wednesday was ver 20.4.19 (no warnings); today was ver 21.4.25 (Tornado warnings). Thank you for writing and supporting such a great container! Examples of the warnings:
     WARNING:tornado.access:404 HEAD /blocks/80601104/Pt4BW29AkVG9OHQ0p1mEm9iUohoM4wj5ZrzhMKxWRn8 (172.17.0.1) 7.56ms
     WARNING:tornado.access:404 HEAD /blocks/80601104/Q4Po9VGoWD2phpWKnpiWcgVApVMVlLtXUPpzRMDUfGw (172.17.0.1) 5.03ms
     WARNING:tornado.access:404 HEAD /blocks/80601104/_lLWd7V7qli4yTmNul1-GPcHxv8UAsA9Pzq-DaLMUIY (172.17.0.1) 3.85ms
     WARNING:tornado.access:404 HEAD /blocks/80601104/1YrODn5oJ8vw_8_4DV-ivOg6_ynmEYhGls_mmJQU38U (172.17.0.1) 11.21ms
     WARNING:tornado.access:404 HEAD /blocks/80601104/mmhWSWkX7oaBNO7rWDvp6A3kQ8IG6a491xsLBGieAdo (172.17.0.1) 6.25ms
     WARNING:tornado.access:404 HEAD /blocks/80601104/kM5OdnihEZRTzDHPH6B8-Ejl4adctD7GfwNmFJkzy7k (172.17.0.1) 3.20ms
     WARNING:tornado.access:404 HEAD /blocks/80601104/gfTD7I8RHgNPMNoLd4kNOq6Va7BTvFwojlPRaUu4HS0 (172.17.0.1) 11222.82ms
     WARNING:tornado.access:404 HEAD /blocks/80601104/ZYgl17MFnN9n4pkCRpXeQnmtH4tMVWbtDSll39Ntzbc (172.17.0.1) 11236.61ms
     WARNING:tornado.access:404 HEAD /blocks/80601104/TIxIpqiDCVU5aMefJvd7kUAps0NI5cRbddy5loXs_xM (172.17.0.1) 10209.27ms
     WARNING:tornado.access:404 HEAD /blocks/80601104/vG7IWYDMvbQxTeklh3-IHJz4DAc-iQCzaP7l9Sk8QZE (172.17.0.1) 10215.41ms
     WARNING:tornado.access:404 HEAD /blocks/80601104/t3xKThU191S3SNQc5Brr6zh2IDdq-RDgcunsQd-ThSI (172.17.0.1) 9215.61ms
     WARNING:tornado.access:404 HEAD /blocks/80601104/RD17Z3z1OEDwl46CiPp5K9rrfPL-OfVMYUceleqD2xQ (172.17.0.1) 9225.69ms
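
For post 1 (Dropbox not running after the reinstall), a minimal sketch of two checks that narrow this down, assuming the container is named Dropbox and its data directory is meant to live on the new XFS drive (both names are assumptions; adjust them to match the Docker tab):

    # Confirm the Dropbox data directory is mapped to a host path rather than
    # being left inside docker.img; the Source of the relevant mount should
    # point at a /mnt/... path on the new drive.
    docker inspect --format '{{json .Mounts}}' Dropbox

    # Ask the client inside the container whether the daemon is running.
    docker exec Dropbox dropbox.py status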
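
For post 3 (the killed clear-an-array-drive window), a rough way to check whether the zeroing is still going, assuming the script zeroes the disk with dd as the commonly posted clear_array_drive script does:

    # List any dd processes still running; if nothing shows up, the clear was
    # interrupted and can simply be restarted from the beginning.
    pgrep -af dd

    # If a dd is still running, SIGUSR1 makes it print its progress statistics
    # to its stderr (replace <pid> with the PID reported above); note the
    # output may be lost if the original terminal is gone.
    kill -USR1 <pid>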
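
For post 4 (the grayed-out container port), a sketch of the mapping the template is expressing, using the linuxserver Sonarr image as a stand-in and host port 1234 from the post as the example value:

    # Bridge mode: host port 1234 is forwarded to the container's fixed port
    # 8989 (the Sonarr default). Only the host side of the pair should change;
    # the container port must stay whatever the application listens on.
    docker run -d --name sonarr -p 1234:8989 linuxserver/sonarr

    # Host mode: there is no mapping at all; the app listens on 8989 directly,
    # which is why switching modes with a mismatched pair can confuse the
    # template.
    docker run -d --name sonarr --network host linuxserver/sonarr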
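
For post 6 (the web GUIs that stopped responding in bridge mode), one more data point worth capturing alongside the attached network dump: whether Docker still holds the expected port mapping for the failing container (the container name sonarr here is an assumption):

    # Show the published ports Docker thinks the container has; in bridge mode
    # this should list 8989/tcp forwarded to 0.0.0.0:1234. An empty result
    # means the mapping was lost, e.g. while switching between Host, br0 and
    # Bridge.
    docker port sonarr
    docker inspect --format '{{json .NetworkSettings.Ports}}' sonarr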
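
For post 7 (the CPU pinning plugin disagreeing with the VM XML), a quick way to see what libvirt actually applied, assuming the VM is named "Windows 7" in the VM manager (substitute the real name from virsh list --all):

    # Show the pinning libvirt has applied to each vCPU of the VM.
    virsh vcpupin "Windows 7"

    # Dump the <cputune> section of the XML to compare against the plugin's
    # display.
    virsh dumpxml "Windows 7" | grep -A 20 '<cputune>'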
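
For post 8 (getting the combined ~750GB out of a 500GB + 250GB btrfs cache pool), a minimal sketch of the usual trade-off, assuming the pool stays btrfs and that losing the RAID1 redundancy is acceptable, as the post says it is:

    # Convert the pool's data profile from the default raid1 to single so the
    # two drives' capacities add up; metadata can stay raid1. /mnt/cache is
    # the usual Unraid cache mount point.
    btrfs balance start -dconvert=single -mconvert=raid1 /mnt/cache

    # Check the resulting allocation and usable free space.
    btrfs filesystem usage /mnt/cache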