jfrancais

Everything posted by jfrancais

  1. OK, when I switched the container to bridge networking everything ran fine. When it was set to br1 with an IP, the container runs fine at first, but not long after I can no longer communicate with the docker container. I have two NICs in place (br0 and br1) that I set up to get around the macvlan communication restriction. It had been working fine for quite some time. Other containers set up this way have no communication issues. gobo-diagnostics-20190911-1808.zip
  2. OK, that gets rid of the message. It was open during my troubleshooting, so that makes sense. I have switched it to bridge networking and will run it that way for a while to see if it fixes the issue. It was running as custom on br1.
  3. Recently I've started to have issues with the sabnzbd container not working. It seemed that shortly after startup I was no longer able to communicate with it. If I shelled into the container, everything looked normal, and I could communicate from the shell with the entire network as expected. When I looked in my logs I was seeing this repeating every 10 seconds:
     Sep 10 08:39:58 Gobo nginx: 2019/09/10 08:39:58 [crit] 7367#7367: *5199919 connect() to unix:/var/tmp/sabnzbd.sock failed (2: No such file or directory) while connecting to upstream, client: 192.168.1.18, server: , request: "GET /dockerterminal/sabnzbd/ws HTTP/1.1", upstream: "http://unix:/var/tmp/sabnzbd.sock:/ws", host: "REPLACED"
     This occurs even when the container is stopped. I tried installing a different version of this container and had the exact same issue, so I don't think it is related to the container itself. Can anyone assist?
  4. Running 6.7.2, so I guess that is the issue. Is a fix actively being worked on? Copying to the cache drive is a non-starter for me, as I'm moving amounts larger than the cache. Would this problem affect drives not in the array? I could temporarily add an external drive to copy on and off, though it makes me a bit nervous, as the data wouldn't be protected in flight.
  5. I'm trying to move some files from /mnt/disk3 to /mnt/disk5 and I'm finding things painfully slow, and it is affecting other things running on the server quite badly. I first tried with unBALANCE, and the share I gathered averaged 8 MB/s transfer speed in the report. Doing the same thing via shell with the mv command gives the same slow experience, so it isn't the plugin. It seems like it bursts with speed for a bit and then hangs for a while. Is this normal behavior? Any recommendations to speed it up? Drives are both GPT, 4K-aligned, XFS-formatted. Drives are all 5400 rpm and up, no archive drives in play. No errors or warnings in syslog. Single parity drive. The system is a dual Intel Xeon X5650 @ 2.67GHz with 48GB of ECC RAM. gobo-diagnostics-20190807-1401.zip
  6. I did, multiple times. It specifically says to leave the disk mounted so cron has access, which is specifically NOT what I want to do. The reason I'm asking is that I read it and can't figure out how to accomplish what I did in SNAP from the limited documentation (i.e. the first post).
  7. From the plugin help:
     '/usr/local/sbin/rc.unassigned mount auto' - all devices and SMB/NFS mounts set to auto mount will be mounted.
     '/usr/local/sbin/rc.unassigned umount auto' - all devices and SMB/NFS mounts set to auto mount will be unmounted.
     '/usr/local/sbin/rc.unassigned umount all' - all devices and SMB/NFS mounts are unmounted in preparation for shutting down the array.
     '/usr/local/sbin/rc.unassigned mount /dev/sdX' - mount device sdX where 'X' is the device designator.
     '/usr/local/sbin/rc.unassigned umount /dev/sdX' - unmount device sdX where 'X' is the device designator. You can use this command in a UD script to unmount the device when the script has completed.
     I don't want the drives to be configured to automount, and I don't want to mount/unmount all.
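     One way to bridge the gap between a serial number and the `/dev/sdX` that `rc.unassigned` wants is the `/dev/disk/by-id/` symlinks, whose names embed the drive serial. This is only a sketch, assuming the drive shows up there with its serial in the link name; the serial below is made up:

```shell
#!/bin/sh
# Sketch: resolve a device node from a serial number, then feed it to the
# per-device rc.unassigned commands quoted above. The lookup takes the
# symlink directory as a parameter so it can be exercised anywhere.

# resolve_by_serial DIR SERIAL -> prints the resolved device path
resolve_by_serial() {
    dir=$1; serial=$2
    for link in "$dir"/*"$serial"*; do
        [ -e "$link" ] || continue
        readlink -f "$link"   # follow the symlink to the real device node
        return 0
    done
    return 1                  # drive with that serial is not present
}

# On unRAID you would call it against /dev/disk/by-id (hypothetical serial):
#   dev=$(resolve_by_serial /dev/disk/by-id WD-WCC4N1234567)
#   /usr/local/sbin/rc.unassigned mount "$dev"
```

The nonzero return when nothing matches doubles as a "drive not plugged in" check.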
  8. My device is almost always plugged in. I need it to run on a scheduled basis to ensure that when I go off-site it is no more than 24 hours out of sync. And I need to run it based on the serial number, not the /dev path, since it isn't guaranteed to be in the same dev position.
  9. Wondering if someone might have insight; I'm having trouble finding what I'm looking for. I had written a script previously that I was using for some automation with an unassigned device. Everything was working perfectly, but with the move to this plugin, now not so much. I was using the snap.sh script with certain parameters to safely work with the drive, and I'm wondering if I can gather the same information programmatically from the new plugin. Here are the functions I was using and am looking to replace:
     #get the serial number of a drive based on share name
     /boot/config/plugins/snap/snap.sh -getSerialNumberFromSharename $share_name
     #verify if the drive by serial number is in the array
     /boot/config/plugins/snap/snap.sh -isDeviceInUnraidArray $drive_serial
     #mount the drive by its share name
     /boot/config/plugins/snap/snap.sh -m $share_name
     #unmount the drive by its share name
     /boot/config/plugins/snap/snap.sh -M $share_name
     #spin down the drive by its serial number
     /boot/config/plugins/snap/snap.sh -spindown $drive_serial
     My end goal is that I have an external drive that may or may not be connected to my unRAID server. Nightly, if the drive is present, the script mounts it, syncs to it, then unmounts it and spins it down so it can be removed. This gives me a drive (or drives) that I can pull whenever I am travelling, synced with a bunch of files I have listed in my script.
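     The nightly flow described above can be sketched without snap.sh at all. This is an assumption-laden skeleton, not a drop-in replacement: the mount/unmount/spindown/sync commands are passed in as parameters (on unRAID they would be the `rc.unassigned` per-device commands, `hdparm -y`, and an rsync of the file list; the serial and paths in the comment are invented), which also makes the flow itself testable anywhere:

```shell
#!/bin/sh
# Sketch of a nightly "sync to removable drive, if present" flow.

# nightly_sync BYID_DIR SERIAL MOUNT UMOUNT SPINDOWN SYNC
nightly_sync() {
    byid=$1; serial=$2; mount=$3; umount=$4; spindown=$5; sync=$6
    dev=""
    # Locate the device via its serial-bearing /dev/disk/by-id symlink.
    for link in "$byid"/*"$serial"*; do
        [ -e "$link" ] && dev=$(readlink -f "$link") && break
    done
    if [ -z "$dev" ]; then
        echo "drive $serial not present, skipping"
        return 0               # not plugged in: nothing to do tonight
    fi
    $mount "$dev"    || return 1
    $sync            || true   # even if the sync fails, always unmount
    $umount "$dev"   || return 1
    $spindown "$dev"           # let it spin down before it gets unplugged
}

# On unRAID (hypothetical serial and share paths):
#   nightly_sync /dev/disk/by-id WD-WCC4N1234567 \
#       "/usr/local/sbin/rc.unassigned mount" \
#       "/usr/local/sbin/rc.unassigned umount" \
#       "hdparm -y" \
#       "rsync -a /mnt/user/Backups/ /mnt/disks/offsite/"
```

Run from a cron entry (or the User Scripts plugin), the early "not present" exit makes it safe to fire every night whether or not the drive is connected.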
  10. Does anyone have any experience setting up iOS on-demand profiles? I have my OpenVPN-AS up and running, working as expected, and I can connect via my iOS clients. I now want to set up the on-demand profile so that the VPN connects when I hit an unsecured network or a couple of specified Wi-Fi networks, and disconnects whenever I'm connected to my home Wi-Fi networks.
  11. Getting the same issue when I went to upgrade Community Applications. It removed the old version and couldn't install the new one. Still running unRAID 6.6.6.
  12. I have the same question as this, but I haven't seen it answered. Anyone?
  13. Is there any command-line interface to this for looking up information and mounting/unmounting drives? I was using SNAP and am now moving to this, but I'm unsure how to fix a couple of scripts I had that would mount and unmount drives following some logic, as well as check whether a drive is part of the array before performing certain functions.
  14. Having trouble locating documentation on this; hoping someone can assist. I'm moving from the SNAP plugin to this. I had a script that was using some functions in snap.sh and was hoping there is an equivalent. Anyone? I'm currently using the following:
      snap.sh -getSerialNumberFromSharename SHARENAME
      snap.sh -isDeviceInUnraidArray SERIALNUMBER
      snap.sh -m SHARENAME
      snap.sh -M SHARENAME
      snap.sh -spindown SERIALNUMBER
  15. Adding a second NIC and a second bridge gets around this restriction, but it still doesn't seem to work with OpenVPN: the OpenVPN container itself can see everything, but the clients connected to it can't.
  16. Nope, still hung up. If I navigate into the container I can see everything, but the connected clients cannot. I feel like it is a routing issue or something for the NATed IPs, but I'm not skilled enough on the networking side to go any further, and it seems like no one else is running into this issue.
  17. Still struggling a bit with the OpenVPN-AS docker config, but I have made some progress. I now have OpenVPN-AS running in host mode. Docker containers are on br1 (running on the second NIC) with assigned IPs. If I shell into the OpenVPN-AS container I can communicate with everything: the host and all the containers. Clients connected to the OpenVPN server can communicate with the unRAID host and the rest of the network, except for the docker containers on br1. I feel like this is a routing issue that should be fixable. Can anyone provide assistance? I'm really weak on the networking side of things.
  18. Still struggling to get the OpenVPN docker working properly when configured with its own IP. Does anyone have it working in this scenario?
  19. Gotcha; unsure why that wasn't in my screenshot. The private subnets that clients should be given access to are there: 172.27.224.0/24 and 192.168.1.0/24. So I believe I have it set like you are suggesting, and it still doesn't work.
  20. No. I can ping that address, but if I go to http://172.27.224.1:943 it times out and nothing comes back.
  21. OK, sorry, I finally got back to this. Just to rehash: I now have 2 NICs in my unRAID server. I removed the br0 network and created the br1 network with the eth1 NIC in it (eth0 is the unRAID server's primary NIC). I moved all my docker containers with static IPs onto br1. I shelled into the openvpn-as container and verified I can ping the unRAID host and my main network router by IP and by DNS name. I can do DNS lookups (the DNS server is my main router, 192.168.1.1), so it appears I have the docker problem worked around. Problem is, my OpenVPN-connected clients still can't access resources. Once connected to the VPN I am still having connectivity issues. The VPN clients can't ping or access the unRAID host (192.168.1.207). They can ping my main router (192.168.1.1) and other docker containers by IP (the headphones container at 192.168.1.57, for example). They can't do DNS resolution at all (I tried an nslookup using 192.168.1.1 as the name server, but it times out, same as using 8.8.8.8). I have attached screenshots of what I think would be useful settings. Any assistance would be appreciated; I'm banging my head against the wall on this.
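      Two workarounds I've seen suggested for this shape of problem, sketched below with heavy caveats: the VPN subnet (172.27.224.0/24) comes from my earlier posts, but LAN_IF and CONTAINER_IP are assumptions, and DRY_RUN=echo just prints the commands rather than running them:

```shell
#!/bin/sh
# Sketch: two candidate fixes for "VPN clients reach br1 containers but not
# the rest of the LAN". Not verified against this exact setup.
: "${DRY_RUN:=echo}"
VPN_NET=172.27.224.0/24
LAN_IF=eth0                 # interface inside the OpenVPN-AS container (assumed)
CONTAINER_IP=192.168.1.200  # the container's br1 address (hypothetical)

# Option 1 (inside the container): masquerade VPN client traffic so the LAN
# sees it as coming from the container's own br1 address, and replies route
# back without the LAN needing any knowledge of the VPN subnet.
rule_nat="iptables -t nat -A POSTROUTING -s $VPN_NET -o $LAN_IF -j MASQUERADE"
$DRY_RUN $rule_nat

# Option 2 (on the LAN router): add a static route sending the VPN subnet
# to the container's br1 IP (exact syntax varies by router firmware).
rule_route="ip route add $VPN_NET via $CONTAINER_IP"
$DRY_RUN $rule_route
```

Option 1 needs the container to run with NET_ADMIN capability; option 2 keeps the containers untouched but depends on the router accepting static routes.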
  22. Thanks Johnnie. Before I move anything around I wanted to see if I could isolate the issue. From my testing, I think I have it nailed down to my 3-bay Icy Dock setup. When I removed it from the equation I was able to run a full read test and got no errors. I have now added back the first drive that was flagging an error (I ran a full drive diagnostic on it in another computer, and it came back clean) and am in the process of verifying that. I had 3 drives in that bay, one of which isn't part of the array, so it makes sense this was the common denominator. Once everything is back up, with a full test and parity check done, I will move the parity drive back to onboard SATA... although I haven't had any issues with this Marvell controller card since I got it.
  23. Recently I received the following alert from my unRAID server: Event: unRAID Disk 1 error / Subject: Alert [GOBO] - Disk 1 in error state (disk dsbl). When I did a check I wasn't able to read from various file systems; I couldn't even shut the array down. I power cycled, got everything back up, and have removed the drive for now, assuming it was failing. Later in the day I started getting a bunch of read errors on the parity drive (a BUNCH). I again had trouble shutting down, but got it restarted in maintenance mode. I did a full read test on the array and again got the same read errors, so now I'm not entirely sure what is happening. Both drives are in the same hot-swap bay device and off the same RAID card, so I'm going through the process of removing the drive bay from the equation and running a read test (it takes about 20 hours to complete each cycle, so testing is taking forever). If I still get errors I will try another RAID controller. Is there anything else I'm missing? Is it possible the parity drive got corrupted by the disk issue? All disks have passed extended SMART tests cleanly (except the potentially bad data drive, which I have now removed and am testing on another computer). Any assistance getting back up ASAP would be appreciated. Diagnostics attached. gobo-diagnostics-20180720-1925.zip
  24. Got a little further: when I set my OpenVPN-AS to TCP (disabled UDP) I can now connect to the OpenVPN-AS server from the outside world. But I have the exact same issue as before. When I connect to the OpenVPN-AS server docker container that is set up to use br1 with an assigned IP, VPN clients can talk to other docker containers on br1, but not to the unRAID host, router, or internet.