jlr2000

Members
  • Posts

    40
  • Joined

  • Last visited

  1. Thanks! I noticed I didn't see any sigs anymore and was lost as to why - I didn't realize there was an option in my profile to show others' sigs!
  2. Ok, that was my concern. When the preclear completed I went into Unassigned Devices via the GUI and clicked the option to format it to XFS there... so that was my mistake. Also good to know that the warning/checkbox to clear the drive is ALWAYS there. So a truly precleared drive will be added instantly, but one that is not will take some time. Thanks so much for answering my questions. I will pre-clear the drive again using the plugin and then re-attempt adding it to the array to minimize the array downtime. Thanks!
  3. Hi - I'm running 6.1.9 and I'm trying to add a new 4TB drive to my array. I used the Pre-Clear plugin on the drive to prep it before adding it. Once the pre-clear completed, I formatted the drive to XFS from the Unassigned Devices tab. I then stopped the array, went to Array Devices, and added the new drive to a new slot. It was recognized as a new drive. I then went to Array Operation to start the array back up, but was surprised to see a checkbox saying I must clear the drive to add it and start the array. My question is this: is that checkbox always there, whether you have pre-cleared a drive or not? If the drive is pre-cleared, will it just go really fast and start the array? Or did I do something wrong that is now requiring the drive to be cleared again, meaning it will keep the array down for an extended period of time? If it's always there and it will go very fast, I'm fine with that. If I did something wrong in my pre-clear/format steps, I'd rather do that offline and minimize how long the array is offline (a quick command-line spot-check for a cleared drive is sketched after this list). Any help or guidance is appreciated... Thanks!
  4. This sounds like a classic case of the share you're using not being marked as "cache only". If you don't do this, the internal unRAID process called mover comes along and moves all the files in that share to the array; then, when you restart the docker, you have no config again, and the cycle repeats. The way to solve this is to mark the share as "cache only", which tells the mover not to touch any files/folders contained within the share: go to the unRAID UI / Shares / click on the share / select the value "only" from the "Use cache disk" dropdown, save, and you're done. OK, this will depend on where CouchPotato is installed and how it's installed; for now I'm going to assume CouchPotato is installed as another docker on the unRAID host. If that's the case, read FAQ Q1 in post #2 of this thread and follow it carefully. Just to add to this a little: once mover has messed with it, it seems like you need to remove the delugevpn config folder and let the container recreate it. Also, I'd set the mapping to /mnt/cache/appdata/delugevpn instead of /mnt/user/appdata/delugevpn if that's what you used. Thanks for the info, guys: I have my /config folder mapped to a cache "only" share. I have my /data folder mapped to a /download directory on my "yes" cache share. Do I need both mapped to the cache "only" share? The reason I did this was I thought I should have the downloads actually go to the large array for space needs, not to the limited cache - but that may be my mistake. BOTH are mapped to "/mnt/user" and not "/mnt/cache" (a hedged docker run sketch of this mapping follows this list). Couch is installed as another docker on the unRAID host. I followed the instructions in the post cited above but have had no luck. I'll work on getting the preferences to stay first, then tackle this issue separately. Still not sure they aren't related. Thanks!! Just an update on this earlier post: since updating to the latest versions of DelugeVPN and unRAID, my settings are now persistent! I can stop or restart DelugeVPN and all the settings are there. Not really sure why, but I don't want to look a gift horse in the mouth! Still can't get Couch to "see" DelugeVPN, but I wanted to post an update about my settings. Thanks again for providing and updating this great docker!
  5. Yes, mine was the same way when I used Notepad as well. I also make sure I enable remote connections for the daemon. What's odd is that my password works, but the settings for download directory, bandwidth, remote connections, and labels all reset to default whenever I restart... Like I said, my Sonarr/Deluge is working great; I just can't get CP/Deluge to work together (or get my settings to stay once in Deluge). For the record, I did delete the container and image file about 30 minutes ago and started over. Same behavior as before. Thanks.
  6. Yes, the CP docker is set to bridge. In CP, under settings, I enabled Deluge with the following: Label: movies; Username: Deluge webgui username; Password: Deluge webgui password; Directory: left blank; Host: my.host.ip.address:58846. Test Deluge: "Connection failed. Check logs for details." I've looked in the CP logs but I didn't see anything that stood out as related to this problem. If I test nzbget I get "connection successful" and the log shows it. If I test Deluge I get "Connection failed", but there is nothing in the CP log to show that. I cleared the logs and tested a couple of times to confirm no info is being written to any of the logs, error or info (a basic reachability check for the daemon port is sketched after this list). I should note that I am able to connect to Deluge with Sonarr and it works as it should. So Deluge is working with Sonarr, just not with CP for me. Thanks for your continued suggestions...
  7. I do have the above information in the "auth" file in the config folder. There are two lines: the first is "admin:password:10" but with my specific details, and the second is "localclient:somelettersandnumbers:10". I do have that set up as you suggested. Thanks.
  8. This sounds like a classic case of the share you're using not being marked as "cache only". If you don't do this, the internal unRAID process called mover comes along and moves all the files in that share to the array; then, when you restart the docker, you have no config again, and the cycle repeats. The way to solve this is to mark the share as "cache only", which tells the mover not to touch any files/folders contained within the share: go to the unRAID UI / Shares / click on the share / select the value "only" from the "Use cache disk" dropdown, save, and you're done. OK, this will depend on where CouchPotato is installed and how it's installed; for now I'm going to assume CouchPotato is installed as another docker on the unRAID host. If that's the case, read FAQ Q1 in post #2 of this thread and follow it carefully. Just to add to this a little: once mover has messed with it, it seems like you need to remove the delugevpn config folder and let the container recreate it. Also, I'd set the mapping to /mnt/cache/appdata/delugevpn instead of /mnt/user/appdata/delugevpn if that's what you used. Thanks for the info, guys: I have my /config folder mapped to a cache "only" share. I have my /data folder mapped to a /download directory on my "yes" cache share. Do I need both mapped to the cache "only" share? The reason I did this was I thought I should have the downloads actually go to the large array for space needs, not to the limited cache - but that may be my mistake. BOTH are mapped to "/mnt/user" and not "/mnt/cache". Couch is installed as another docker on the unRAID host. I followed the instructions in the post cited above but have had no luck. I'll work on getting the preferences to stay first, then tackle this issue separately. Still not sure they aren't related. Thanks!!
  9. Now that I have this resolved, I'm wondering if anyone can provide some guidance on getting Couch to connect to DelugeVPN... I've read through the thread and made the changes as outlined, but I have never been able to connect. I should also say I always have to reset the preferences whenever I update or even just restart DelugeVPN on my unRAID server. Each time I stop and start, I need to go in and reapply the download directory, my seeding preferences, whether to allow remote connections, labels, etc. Any thoughts on what I could be doing wrong? I know a lot of users aren't having this problem, so it has to be something on my side in terms of config. Any suggestions are appreciated. FYI, in Couch I have tried to connect to "localIP:58846", "127.0.0.1:58846" and "localhost:58846". I also tried changing the port number in Deluge and in Couch, but no luck. In the beginning of this thread in the FAQ, Binhex says to "enable remote connections, then restart". In my case, every restart resets that to unchecked and I have to check it again - so now I'm thinking whatever is keeping my preferences from staying might be a contributing factor. Thanks again for any help.
  10. That solved it for me as well! THANKS!!
  11. I have a "!" in my password too. I'm going to change it on the PIA site as you suggested and see if I have the same success.... Thanks a ton for sharing your information!
  12. My bad, Binhex... sorry... I'll wait and try later. Thanks for all your efforts!!
  13. Sorry if this is a dumb question, but I updated and then came here when the webgui wouldn't start... I made the following change in the docker config, but it doesn't seem to be working. Did I miss something? LAN_NETWORK 192.168.1.0/24. Thanks in advance for your help. Edit: checking my logs, it seems my PIA auth is failing, but nothing changed there on my side. Thoughts? Edit 2: it seems my PIA password is getting changed in my credentials.conf file. Every time I correct it and restart the docker, the password gets changed and I get an AUTH FAIL (a quick way to search the container log for this is sketched after this list)... Anyone else experiencing this problem? I've learned a lot today about other things I've been meaning to dig into, but not why this is happening...
  14. THANK YOU!!! I did as you said... new config, re-added the drives, and nothing showed unformatted. I started the array and everything is up. The drives expanded to their actual size without doing anything! I'm building parity now. Appreciate your input and direction!
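
A note on the pre-clear question in post 3: the following is only an illustrative sketch, not the preclear plugin's own verification. Assuming a precleared disk should read back as zeros outside its signature sector, a rough command-line spot-check could look like this; /dev/sdX is a placeholder, so confirm the device name (e.g. with lsblk) before running anything against a real disk.

    # Compare the first 64 MiB after the 512-byte MBR against /dev/zero.
    # Exit status 0 means that region still reads as zeros (a spot check only,
    # not proof the whole disk is cleared or that unRAID will skip clearing it).
    sudo cmp --ignore-initial=512:0 --bytes=$((64*1024*1024)) /dev/sdX /dev/zero \
      && echo "first 64 MiB after the MBR reads as zeros"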
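For the cache-only appdata discussion in posts 4 and 8, here is a minimal docker run sketch of the suggested mapping. It is illustrative only: the image name binhex/arch-delugevpn, the /config and /data container paths, and LAN_NETWORK come from the thread, but the full set of environment variables should be taken from the unRAID template, and the host paths are examples.

    # /config points at the cache device directly (/mnt/cache/...), bypassing the
    # user-share layer so mover never relocates the Deluge settings; downloads
    # can still land on a normal cache=yes share via /data.
    docker run -d --name=delugevpn \
      -v /mnt/cache/appdata/delugevpn:/config \
      -v /mnt/user/downloads:/data \
      -e LAN_NETWORK=192.168.1.0/24 \
      binhex/arch-delugevpn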
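For the CouchPotato-to-Deluge connection failures in posts 6, 7 and 9, a simple first step is checking that the daemon port (58846) is reachable at all from wherever CP runs; the IP below is a placeholder and assumes nc is available. This only proves the port answers - authentication is still governed by the daemon's auth file, whose line format is shown for reference.

    # Reachability check for the Deluge daemon port from another host/container.
    nc -zv 192.168.1.10 58846
    # "auth" file in the config folder: one user per line, as user:password:level
    # (level 10 = admin), e.g.
    #   admin:yourpassword:10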
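For the PIA AUTH FAIL in post 13, one quick way to confirm what the container is reporting is to search its log; the container name delugevpn is assumed from the template and may differ on your system.

    # Show authentication-related lines from the container log.
    docker logs delugevpn 2>&1 | grep -i "auth"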