denizen

Members · 21 posts

  1. Just wondering if you had ever got around to testing this. Thanks in advance!
  2. Resolved - Rebooted system and user shares reappeared. Noted that my cache drive was full. Hopefully that was the issue.
  3. Not exactly sure what happened. I was trying to troubleshoot an mcmyadmin2 docker container when I noticed, and was also getting "docker disk image becoming full" errors. Attached is my diagnostics file. I stopped and restarted the array, and now Docker fails to start with it. Thanks in advance.
  4. Have yet another 'call traces' error for someone to help me out with. Server seems to be working fine. Thanks in advance! tower-diagnostics-20180418-0946.zip
  5. Yes. After doing the New Config you are brought to the main page and all drives are unassigned, as if you were booting the server for the first time. You pick Disk 1 and then assign a drive by serial number, and so on. When you get to Disk 4 you'll just pick a drive that remains in your system. Note that you need to refer to your drives by serial number, not sda/sdb, etc. The sda/sdb device names will change when you disconnect some of your disks, so you need to know the serial number of the parity drive in order to reassign it correctly under the new config (see the serial-number sketch after this list of posts). The other drive slots don't really matter, but it's critical to reassign parity correctly. Then my proposed steps would be:
     1. Take a screenshot of the current drive setup
     2. Take the array offline
     3. Unassign Disk4 and Disk5
     4. Shut down the server
     5. Remove Disk4 and Disk5 from the server and swap Disk11 and Disk12 into these positions
     6. Start up the server
     7. Click the "New Config" link
     8. Reassign all previous drives to their previous positions (including parity and cache), except for the drives which had been Disk11 and Disk12, which will now be assigned to Disk4 and Disk5
     9. Bring the array online
     10. Watch parity rebuild
     Does that sound reasonable? Do I need to do step 3, or can I just shut down the server and rearrange the drives? Also, I am assuming that invoking New Config will reset my previous share settings. Or am I incorrect in thinking this? Thanks in advance!
  6. I am now in the process of removing some old 1.5 TB drives, which are Disks 4 and 5 in my array, and am planning to use the New Config option for the first time. I am planning to use the method referenced above, but I imagine in my case Disk4 and Disk5 would then be missing from the new array, i.e. disk1, disk2, disk3, disk6, disk7, etc. So here is my follow-up question: once I activate the New Config option, is it also possible to switch other data drives into the previously removed disk positions and keep the disk numbering contiguous? In other words, as long as I assign the correct drives for parity and cache, can I then rearrange any of the other data drives into any position I want, since it is a "new config"?
  7. It just so happens that I have not upgraded to 6.1 yet. Would you see any downsides to trying this out? Or, to put it another way: if the Unassigned Devices plugin were verified for 6.1, would this be a way to use the plugin?
  8. Thanks for the quick reply. Makes perfect sense. I would ultimately have to rebuild parity in the above scenario (no time saved). I guess I have another question then: would it be possible to use the Unassigned Devices plugin for this purpose? I have never used this plugin before. Thanks again!
  9. I apologize in advance if this has been covered, but I have not found an answer to my question in a brief search. Hopefully it will be an easy one to answer. I am currently preparing to change my cache drive format from reiserfs to btrfs. I currently have a reiserfs-formatted cache drive with my appdata and docker image on it and would like to migrate the data to a new btrfs-formatted SSD cache drive. From my search of the forums, the most common-sense method appears to be to copy the cache drive data to the array, reassign the SSD as the new cache drive, and then copy the data back to the new cache drive. My question is whether I can save a copy step by simply reassigning my old cache drive as a new disk in the array and retaining the data on it. Then I would be able to add the new SSD as the cache drive and copy the data directly to it (a rough sketch of the copy steps follows this list of posts). Hope this makes sense. Thanks in advance.
  10. It's in the Advanced View -> Environment Variables. Let me know if you get it working. Funny, I just updated to the latest version and I don't seem to have that option under my Environment Variables. I suppose I could just add it. Were there any examples listed under Variable Values?
  11. kurterik, I have been attempting to do the same thing. Where did you see the LAN_RANGE option?
  12. Has anyone set up a Pocketmine container for unRAID, or would anyone be willing to help me set one up? There are already multiple Pocketmine containers on the official Docker site. I know nothing about making a container for unRAID but would be willing to learn. Thanks!
  13. What happens when you open the docker page in unRAID, left click the container icon, and select WebGUI?
  14. Jim and I worked on this quite a bit and could not resolve the CouchPotato communications with Deluge. He resorted to using the blackhole method and setting Deluge to a watch directory. While not as elegant a solution (i.e. it won't handle failed downloads, etc.), it does work. Not sure if you guys got this working or not, but here is what I did to get them talking to one another (a small auth-file sketch follows this list of posts):
      1. Open the 'auth' file in your deluge config folder to see what the deluge daemon username and password are. The format is <username>:<password>:10, where 10 represents the user level. Alternatively, you can edit this file to replace or add a <username>:<password>:<userlevel> as you see fit. You may need to restart the deluge container after your edits.
      2. Open the CouchPotato settings and enter the username and password from the auth file above. Change the Host field to the IP of your unRAID server. The port should already default to the deluge daemon port, which is 58846. Set a label for deluge.
      3. Press the Test button and hopefully it will give you 'Connection Successful'.
      If I remember correctly, that's all it took for me. Hope this helps. Binhex, wondering if you ever got around to allowing local domains (i.e. 172.16.x.x) through delugevpn? Still trying to find a good solution for RSS feeds.
  15. "The original issue was to do with allowing the deluge daemon to talk to other dockers, which is now resolved. I am using iptables to prevent IP leakage, so I had to be very careful as to what to allow. Currently I am only allowing docker-to-docker communication with the deluge daemon, and thus you're having the issue when trying to use another host. I could revisit this and relax the restriction a little bit more to allow known private LAN ranges, e.g. 192.168.*.* and 10.0.*.*, so that you could also connect to the daemon from another host on your LAN; leave it with me." Binhex, that would be great! Thanks for the quick reply. If you are able to do this, could you add 172.16.*.* into the mix? Or...not sure if this could be a user-configurable option? (A sketch of this kind of rule follows below.) Thanks again!
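
Regarding the drive-shuffle plan in post 5: since the sdX names change when disks are moved, it helps to dump the name-to-serial mapping before powering down. A minimal sketch, assuming lsblk is available on the console; the output path on the flash drive is just an example:

```python
# Sketch: record each disk's device name, serial number, and size before
# rearranging drives, so parity and cache can be reassigned by serial number
# under New Config. Assumes lsblk is present; the output path is an example.
import subprocess

result = subprocess.run(
    ["lsblk", "-d", "-n", "-o", "NAME,SERIAL,SIZE"],
    capture_output=True, text=True, check=True,
)

# Save a copy alongside the screenshot, then print it for reference.
with open("/boot/drive_serials_before_newconfig.txt", "w") as f:
    f.write(result.stdout)

print(result.stdout)
```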
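For the cache migration in post 9, the copy steps could look something like the sketch below. It assumes unRAID's usual /mnt/cache and /mnt/diskN mount points and rsync on the console; the backup folder name is made up:

```python
# Sketch: stage the cache contents on an array disk before swapping in the new
# btrfs cache drive, then copy them back afterwards. Paths are examples only;
# run with Docker and VMs stopped so the files are not in use.
import subprocess

SRC = "/mnt/cache/"
BACKUP = "/mnt/disk1/cache_backup/"

def rsync(src: str, dst: str) -> None:
    # -a preserves permissions, ownership, and timestamps; -v lists each file.
    subprocess.run(["rsync", "-av", src, dst], check=True)

# Step 1: copy the cache data to the array.
rsync(SRC, BACKUP)

# Step 2 (after the new btrfs cache drive is assigned and formatted):
# rsync(BACKUP, SRC)
```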
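The 'auth' file mentioned in post 14 holds one <username>:<password>:<level> entry per line. A small sketch that lists the entries so you can see which credentials CouchPotato could use; the config path is an example only:

```python
# Sketch: parse Deluge's auth file (format: username:password:level, one entry
# per line) and list the accounts without echoing passwords. The path below is
# just an example of where the deluge config folder might be mapped.
from pathlib import Path

AUTH_FILE = Path("/mnt/user/appdata/delugevpn/auth")  # example path

for line in AUTH_FILE.read_text().splitlines():
    line = line.strip()
    if not line or line.startswith("#"):
        continue
    username, password, level = line.split(":", 2)
    print(f"user={username!r} level={level} (password hidden, {len(password)} chars)")
```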
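Post 15 is about relaxing the container's iptables rules so the private LAN ranges (192.168.*.*, 10.0.*.*, 172.16.*.*) can reach the deluge daemon on port 58846. A hedged sketch of rules along those lines; the chain and exact rules are assumptions for illustration, not the actual implementation in the binhex-delugevpn image:

```python
# Sketch: allow the RFC1918 private ranges discussed in the thread to reach
# the Deluge daemon port inside the container. Illustration of the idea only.
import subprocess

DAEMON_PORT = "58846"
LAN_RANGES = ["192.168.0.0/16", "10.0.0.0/8", "172.16.0.0/12"]

for cidr in LAN_RANGES:
    # Append an ACCEPT rule for TCP traffic from this LAN range to the daemon.
    subprocess.run(
        ["iptables", "-A", "INPUT",
         "-s", cidr,
         "-p", "tcp", "--dport", DAEMON_PORT,
         "-j", "ACCEPT"],
        check=True,
    )
```

Making the ranges user-configurable (as asked in post 15) would just mean reading LAN_RANGES from an environment variable such as the LAN_RANGE option discussed in posts 10 and 11, rather than hard-coding them.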