DanDee


Everything posted by DanDee

  1. So it appears that after reseating the drives, my speeds are OK again. I must have knocked a cable loose while plugging in the new SSD. It turns out the reason the majority of the dockers were back to their factory state is that when I originally set them up back in the day, I mounted the appdata path under /mnt/cache instead of /mnt/user/. Sadly, I will need to recreate most of my appdata folder. Not the end of the world; at least I'm in a better situation than I thought I was. One issue remains: how do I re-mount my SSD as a cache drive? I can mount it, but it always mounts as a stand-alone drive under /mnt/disks/. To properly set it up as a cache drive, do I need to preclear/format it first before I'm given the option? There is still a /mnt/cache folder. Should I remove it, or will a symbolic link to the SSD be created once it's properly mounted? One more thing: when I restore the appdata folder, /mnt/cache/appdata gets populated even though there is no cache drive present. Is the fact that files are being created in /mnt/cache/ the reason why I can't mount a cache drive?
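Follow-up for anyone searching later: a quick way to check whether stray files under the old /mnt/cache mount point are in the way, sketched as a small shell helper. The stash location is hypothetical; adjust paths for your own box.

```shell
# stash_cache_contents SRC DST: move anything under SRC into DST so the
# cache mount point is empty before assigning the new SSD in the GUI.
stash_cache_contents() {
    src=$1 dst=$2
    mkdir -p "$dst"
    if [ -d "$src" ] && [ -n "$(ls -A "$src" 2>/dev/null)" ]; then
        mv "$src"/* "$dst"/
    fi
}

# On an actual unRAID box this would be something like (paths assumed):
# stash_cache_contents /mnt/cache /mnt/user/appdata_stash
```

As far as I understand unRAID, the cache device is assigned from the GUI with the array stopped, and /mnt/cache then becomes a real mount point on the new SSD, not a symlink.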
  2. Hi all, I tried upgrading my cache drive this weekend using these instructions and failed miserably: http://lime-technology.com/wiki/Replace_A_Cache_Drive The odd thing is, even though Docker was off, the contents of appdata weren't completely cleared off when I ran the mover. I ended up manually tarring up the residual contents the mover missed and moving them elsewhere. When I replaced the cache drive, the mover copied back what it could. When the majority of the dockers didn't work (they were back to their defaults), I manually moved the remaining contents of appdata that the mover seemed to have missed. Now the new cache drive isn't coming up as a cache drive (even though it's mounted under /mnt/disks/). Long story short, I think I completely messed up my appdata folder. Copying any data around is now extremely slow, and after the latest move I can't even get into the GUI; I just get an nginx 504 error. I still have shell access. Am I completely screwed? I do have a clean backup of the appdata folder, but it's from 2016. Is it worth trying to restore that, or will it be completely incompatible with unRAID 6.5? I know I should have made a clean backup of appdata before starting this process...
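For anyone else attempting this: the step I skipped, a tar backup of appdata taken before touching the cache drive, is cheap insurance. A rough sketch (the paths are unRAID defaults and the backup target is an assumption, not from the wiki):

```shell
# backup_appdata SRC DEST_DIR: tar up an appdata tree into a dated
# archive before doing any cache-drive surgery. Stop Docker first.
backup_appdata() {
    src=$1 dest_dir=$2
    mkdir -p "$dest_dir"
    tar -czf "$dest_dir/appdata-$(date +%Y%m%d).tar.gz" \
        -C "$(dirname "$src")" "$(basename "$src")"
}

# On unRAID this would typically be (paths assumed):
# backup_appdata /mnt/cache/appdata /mnt/user/backups
```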
  3. Hi all, I too was having the issue where I couldn't connect to the Deluge web UI on port 8112 with VPN set to true. Looking at the logs in debug mode, everything seemed fine, but it was hanging while trying to get an IP address from my VPN provider. After banging my head against the wall for some time, I noticed that the default port for my VPN provider was 1194, and NOT 1198, which is the default in the docker config. After setting the VPN port to 1194, the app works like a charm. Hopefully this helps others with the same issue. Many thanks, Binhex, for an awesome package and all your work helping others.
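For anyone hitting the same hang: depending on the container version, the port may be exposed as a template variable instead, but you can also check the OpenVPN config the container reads. A sketch for inspecting and swapping the port; the file name, path, and the 1198-to-1194 swap are specific to my setup:

```shell
# show_remote_port OVPN_FILE: print the 'remote' lines so you can see
# which port the config is dialing out on.
show_remote_port() {
    grep '^remote ' "$1"
}

# set_remote_port OVPN_FILE OLD NEW: swap the port in place, keeping a .bak copy.
set_remote_port() {
    sed -i.bak "s/ $2/ $3/" "$1"
}

# On my box this was (path and ports are my setup, adjust for yours):
# show_remote_port /mnt/cache/appdata/binhex-delugevpn/openvpn/pia.ovpn
# set_remote_port  /mnt/cache/appdata/binhex-delugevpn/openvpn/pia.ovpn 1198 1194
```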
  4. @Squid, you hit the nail on the head; things are working fine. The family was watching Plex when I came home. While I feel relieved my system is in a working state after having the rug pulled out from under it several times during the power outage... boy, do I ever feel dumb for not testing the obvious. In my defense, I was going on little sleep due to the lack of AC during one of the hottest nights of the year, was in a rush to get out of the house this morning, and made the post while at the office. It turns out my PuTTY config was properly set to use 'tower' to resolve the host, and all the services I tested above were tested using my public DNS, which was NAT'd to 192.168.2.7. Well, thanks for not mocking my stupidity on this public forum. -Dan
  5. Thanks, @Squid, for your quick response. I could have sworn I gave it a static IP of 192.168.2.7. If it's a simple IP address update and it's indeed now served up on 2.9, I'll be pretty embarrassed! Will try tonight...
  6. Hi guys, last night our neighborhood had several power flickers followed by a 6-hour outage. My unRAID server is headless, set to wake on power, and unfortunately I don't have it on a UPS. Needless to say, the server is having issues today. The default web page doesn't work and shares are not available, but I can log in via SSH. I didn't have much time to troubleshoot this morning before heading into the office, but I'm wondering if anyone could provide some guidance on troubleshooting steps for when I have time to look at it, either this evening or this weekend. I did gather the logs, which I've attached, but only after a soft and then a hard reboot (I didn't see the post about getting the logs before powering off). Info is below; forgive the lack of specifics, it's from memory and I haven't had to touch the box in months. If you need any more info, let me know. Thanks in advance! -Dan
     INFO: from what I can remember, here are the specs:
     - running on an old i5
     - 8 GB RAM
     - 1x 256 GB Samsung EVO SSD cache disk
     - I *think* the array has 2x 2TB drives and 3x 1.5TB drives
     - not sure of the exact version of v6 I'm running, but I'm pretty sure it's the most recent stable version
     From what I can remember, Docker is running the following; there are probably several others (but I couldn't seem to connect to Plex/Sab/Sonarr this a.m.):
     - plex
     - sabnzbd
     - sonarr
     - cp
     - duckdns
     - letsencrypt+nginx
     - tvheadend (I could have sworn I saw XBMC connect to tvheadend when my son fired up the TV this morning, but I could be wrong)
     - zap2xml
     - zoneminder
     - netdata
     - delugevpn
     I did get the following notifications from my unRAID machine via Pushbullet at 12:10 a.m. this morning, likely when the power returned, so some services are likely coming up:
     Notice [TOWER] - Version update 2016.09.07c - A new version of community.applications is available
     Notice [TOWER] - Version update 2016.08.26 - A new version of dynamix.cache.dirs is available
     Notice [TOWER] - Version update 2016.09.06 - A new version of open.files is available
     Notice [TOWER] - Version update 2016.08.26 - A new version of preclear.disk.beta is available
     tower-diagnostics-20160908-0634.zip
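For anyone doing similar post-outage triage over SSH, here are the sort of read-only checks I'd start with. Everything below is a generic sketch, not specific to my logs, and the unRAID paths are assumptions:

```shell
# post_outage_checks: read-only look at the unRAID version, filesystem
# usage, and the tail of the syslog; safe to run before touching anything.
post_outage_checks() {
    cat /etc/unraid-version 2>/dev/null || echo "unraid-version file not found"
    df -h 2>/dev/null | head -5
    tail -n 20 /var/log/syslog 2>/dev/null || echo "no syslog at /var/log/syslog"
}

# unRAID 6 can also build the full diagnostics zip from the shell:
# diagnostics   # writes a zip under /boot/logs/
post_outage_checks
```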
  7. Thanks for your replies. I tried setting the bit, to no avail. I even tried the 'move the image to another folder with the bit set' trick; also no dice. I ended up creating a new docker image and re-downloading the 11 docker apps. Probably for the best, as I had several stale docker apps in the past that were originally filling up the docker file. At the time I created mount points outside the docker image for logs and temp files, but I'm sure the docker image itself was filled with garbage.
  8. I'm also suffering from this same error. I replaced a 128 GB SSD cache drive running XFS with a 256 GB drive running btrfs, using the instructions from the wiki: http://lime-technology.com/wiki/index.php/Un-Official_UnRAID_Manual#Replace_the_cache_disk Could anyone help? I'd rather not have to reinstall and reconfigure 10+ dockers if I don't have to.
  9. (Quoting the earlier reply:) "No problem. I'm glad you like the containers. Port 80 for that container is optional. The letsencrypt validation occurs through 443, and you're supposed to connect to www through 443 as well in order to use SSL by default. Plus, you have to map 80 to a different port anyway because it's already being used by the unRAID GUI. You can use duckdns; just make sure to put yoursubdomain.duckdns.org in the URL field and leave the subdomain as www." Thanks for the tips; it worked like a charm. I just needed patience, as it took a minute or so to apply the SSL keys. Now all I need is to figure out how to set up nginx as a reverse proxy so I can hit some of my other containers from outside using HTTPS, and I'm golden.
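In case it helps the next person, this is the sort of location block I ended up experimenting with for the reverse-proxy part. The host IP, port, and /sonarr path are assumptions from my setup, not from the container docs, and it belongs inside the container's existing server block that listens on 443:

```nginx
# Hypothetical reverse-proxy block: forwards https://yourdomain/sonarr/
# to a Sonarr container at 192.168.2.7:8989 (IP/port/path are assumptions).
location /sonarr/ {
    proxy_pass http://192.168.2.7:8989;
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
}
```

The backing app usually needs its own URL base set to match (e.g. /sonarr in Sonarr's settings), if I'm understanding the proxying correctly.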
  10. Nginx-letsencrypt questions... and forgive my noobness. 1. I noticed in its setup config page it chooses port 80. I'm a little concerned it will take over the default unRAID config running on port 80... it won't, will it? Will it work if I just choose another port like 81 instead? 2. Do I need to use a domain I own, or can I use a duckdns subdomain for letsencrypt? Thanks! I hope I'm in the right place; the support page brought me here, though in the 36 pages I can't seem to find any mention of the letsencrypt app. I admit I'm a fan of a couple of aptalca's other docker apps, specifically duckdns and zoneminder... huge thanks!
  11. First off, huge props and thanks to aptalca for creating and maintaining this docker. I had a similar dilemma to the OP's second question: I wanted to take advantage of the SSD cache drive with Zoneminder running as a docker in unRAID, but didn't want Zoneminder's camera recordings to fill up my docker image. While this solution worked for me, I admit I've only been using unRAID/Docker/Zoneminder for a little over a week and my exposure is limited, so there's likely a better way of achieving the results I did. Here is what I did to solve my dilemma:
     1. Go to the Shares tab in the unRAID GUI and add a new share. I called mine 'ipcams'. Ensure 'Use cache disk' is set to Yes.
     2. Open the Zoneminder web GUI and click Options in the top right corner.
     3. Under Zoneminder's options, click the Paths tab.
     4. Change the DIR_EVENTS value from events to /events.
     5. Click Save, and exit the Zoneminder GUI.
     6. Go to the Docker tab, click on the Zoneminder docker, and click Edit.
     7. Under volume mappings, add a new mapping. Container Volume: '/events' and Host Path: '/mnt/user/ipcams' (both without quotes).
     8. (Optional) Open an SSH shell to your unRAID server and move Zoneminder's old events from the old location to the new one (i.e.: mv /mnt/cache/appdata/zoneminder/config/data/zoneminder/events/* /mnt/user/ipcams/ ).
     9. Restart the Zoneminder docker.
     Voila! Now Zoneminder records to my SSD cache drive, so no spinning drives are needed during recording. The mover moves the Zoneminder recordings off the cache drive to the array once a day, so Zoneminder can still pull up old recordings from the array. Hope this helps.
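Step 8 above, expanded into a small helper for anyone copy-pasting. The container name and both paths are from my setup and may differ on yours:

```shell
# move_events OLD NEW: relocate existing Zoneminder events to the new
# share so old recordings stay browsable after the remapping.
move_events() {
    old=$1 new=$2
    mkdir -p "$new"
    mv "$old"/* "$new"/ 2>/dev/null || true
}

# On my box this was (container name and paths are my setup):
# docker stop Zoneminder
# move_events /mnt/cache/appdata/zoneminder/config/data/zoneminder/events /mnt/user/ipcams
# docker start Zoneminder
```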
  12. Just dropping a line to say thanks for this docker. I have nothing against WebGrab++, but I was relieved at how quickly this zap2xml script worked after spending a couple of hours with WebGrab++, which I found a bit daunting. I'm new to unRAID/Docker, but it required only a couple of steps; I had it configured and up and running in a few minutes, though I have to admit I've used this script on a MythTV box in the past. I was surprised at how well it worked and how easily it integrated with tvheadend. Thanks!