Everything posted by jfrancais

  1. Would love to spin up virtual Raspberry Pis for development. Should be relatively straightforward to add, since QEMU already has ARM support available. Would love to be able to use the web GUI to create virtual Pis that mount and boot from standard Pi image files.
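Something close to this is already possible with stock QEMU from the command line. A hedged sketch, assuming a Raspberry Pi 3 image — the machine type, kernel, and DTB file names below are examples, and the kernel/DTB have to be extracted from the image's boot partition first (the SD image may also need padding to a power-of-two size):

```shell
# Hypothetical invocation: boot a Raspberry Pi OS image under QEMU's
# raspi3b machine model, with the serial console on stdio.
qemu-system-aarch64 \
  -M raspi3b \
  -kernel kernel8.img \
  -dtb bcm2710-rpi-3-b.dtb \
  -sd raspios-lite.img \
  -append "console=ttyAMA0 root=/dev/mmcblk0p2 rw rootwait" \
  -serial mon:stdio -nographic
```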
  2. Single parity drive + 5 array drives and dual cache drives. Nothing out of the ordinary. I don't have a high volume of reads/writes on the array happening. Most of the time the drives are spun down, with the exception of the Time Machine share, which is currently always spun up. Don't think you are correct on the write/read thing. Disks not in use are spun down. If a write to the array caused a read from the other drives, then all disks would be spun up during writes. Unless the GUI is incorrect, that is not the case.
  3. How long did your initial backup take? I'm over a week in and it still isn't close to complete. Gigabit network. The Time Machine share is on one disk, no cache.
  4. Has anyone got Time Machine working consistently with large backups? I'm trying to get our Mac (3TB) backing up to Unraid over SMB. It has been running for days and is still under 100GB backed up and still in progress. I keep seeing references to it being slow, but it can't be that slow, can it? (Running the newest Unraid, 6.8; tried both SMB and AFP, same issues.)
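For anyone searching later: Samba's vfs_fruit module is the usual knob for Mac/Time Machine behaviour over SMB. A hedged sketch of the kind of settings involved (option names are from Samba's vfs_fruit documentation; the share name and path are examples, and availability depends on the Samba version Unraid ships):

```ini
# Hypothetical smb.conf fragment (e.g. Settings -> SMB -> SMB Extras).
# vfs_fruit requires streams_xattr to be loaded alongside it.
[TimeMachine]
    path = /mnt/user/TimeMachine
    vfs objects = catia fruit streams_xattr
    fruit:time machine = yes
    fruit:time machine max size = 3T
```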
  5. OK, when I switched the container to bridge networking everything ran fine. When it was set to br1 with an IP, the container runs fine at first, but not long after I can no longer communicate with the Docker container. I have two NICs in place (br0 and br1) that I set up to get around the macvlan communication restriction. It had been working fine for quite some time, and other containers set up this way have no communication issues. gobo-diagnostics-20190911-1808.zip
  6. OK, that gets rid of the message. It was open during my troubleshooting, so that makes sense. I have switched it to bridge networking and will run it that way for a while to see if it fixes the issue. It was running as custom on br1.
  7. Recently I've started to have issues with the SABnzbd container not working. It seemed that shortly after startup I was no longer able to communicate with it. If I shelled into the container, everything looked normal; I could communicate from the shell with the entire network as expected. When I looked in my logs I was seeing this repeating every 10 seconds:

     Sep 10 08:39:58 Gobo nginx: 2019/09/10 08:39:58 [crit] 7367#7367: *5199919 connect() to unix:/var/tmp/sabnzbd.sock failed (2: No such file or directory) while connecting to upstream, client:, server: , request: "GET /dockerterminal/sabnzbd/ws HTTP/1.1", upstream: "http://unix:/var/tmp/sabnzbd.sock:/ws", host: "REPLACED"

     This occurs even when the container is stopped. I tried installing a different version of this container and had the exact same issue, so I don't think it is related to the container itself. Can anyone assist?
  8. Running 6.7.2, so I guess that is the issue. Is this actively being worked on for a fix? Copying to the cache drive is a non-starter for me, as I'm moving amounts larger than the cache. Would this problem affect drives not in the array? I could temporarily add an external drive to copy on and off, but that makes me a bit nervous, as data wouldn't be protected in flight.
  9. I'm trying to move some files from /mnt/disk3 to /mnt/disk5 and I'm finding things painfully slow, and it is affecting other things running on the server quite badly. I first tried with unBALANCE, and the share I gathered averaged 8MB/s transfer speed in the report. Doing the same thing via shell and the mv command gives the same slow experience, so it isn't the plugin. It seems like it bursts with speed for a bit and then hangs for a while. Is this normal behavior? Any recommendations for speeding it up? Drives are both GPT, 4K-aligned, XFS formatted. Drives are all 5400 rpm and up, no archive drives in play. No errors or warnings in the syslog. Single parity drive. System is an Intel® Xeon® CPU X5650 @ 2.67GHz x 2 with 48GB ECC RAM. gobo-diagnostics-20190807-1401.zip
  10. I did, multiple times. It says specifically to leave the disk mounted so cron has access. This is specifically NOT what I want to do. The reason I'm asking is that I read it and can't figure out how to accomplish what I did in SNAP after reading the limited documentation (i.e. the first post).
  11. From the plugin help:

      '/usr/local/sbin/rc.unassigned mount auto' - all devices and SMB/NFS mounts set to auto mount will be mounted.
      '/usr/local/sbin/rc.unassigned umount auto' - all devices and SMB/NFS mounts set to auto mount will be unmounted.
      '/usr/local/sbin/rc.unassigned umount all' - all devices and SMB/NFS mounts are unmounted in preparation for shutting down the array.
      '/usr/local/sbin/rc.unassigned mount /dev/sdX' - mount device sdX where 'X' is the device designator.
      '/usr/local/sbin/rc.unassigned umount /dev/sdX' - unmount device sdX where 'X' is the device designator. You can use this command in a UD script to unmount the device when the script has completed.

      I don't want the drives to be configured to automount, and I don't want to mount/unmount all.
  12. My device is almost always plugged in. I need it to run on a scheduled basis to ensure that when I go off-site it is no more than 24 hrs out of sync. And I need to run it based on the serial number and not the /dev node, since it isn't guaranteed to be in the same dev position.
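For what it's worth, the serial-to-device mapping doesn't need the plugin at all: on Linux the /dev/disk/by-id symlink names embed the drive serial. A sketch, demonstrated here against a mock directory so it runs anywhere (the WD serial is made up):

```shell
# Mock stand-in for /dev/disk/by-id so this demo is runnable anywhere.
BYID=$(mktemp -d)
ln -s /dev/sdq "$BYID/ata-WDC_WD40EFRX-68N32N0_WD-WCC7K1234567"

SERIAL="WD-WCC7K1234567"        # example serial number

# The by-id symlink name ends with the serial; resolve it to /dev/sdX.
DEV=$(readlink -f "$BYID"/*"$SERIAL"*)
echo "$DEV"                     # -> /dev/sdq on this mock
```

On the real box you would use /dev/disk/by-id directly and hand $DEV to '/usr/local/sbin/rc.unassigned mount', independent of which sdX slot the drive landed in.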
  13. Wondering if someone might have insight; I'm having trouble finding what I'm looking for. I had written a script previously that I was using for some automation with an unassigned device. Everything was working perfectly, but with the move to this plugin, now not so much. I was using the snap.sh script with certain parameters to safely work with the drive. I'm wondering if I can get the same information programmatically from the new plugin. Here are the functions I was using and am looking to replace:

      #get the serial number of a drive based on share name
      /boot/config/plugins/snap/snap.sh -getSerialNumberFromSharename $share_name

      #verify if the drive by serial number is in the array
      /boot/config/plugins/snap/snap.sh -isDeviceInUnraidArray $drive_serial

      #mount the drive by its share name
      /boot/config/plugins/snap/snap.sh -m $share_name

      #unmount the drive by its share name
      /boot/config/plugins/snap/snap.sh -M $share_name

      #spin down the drive by its serial number
      /boot/config/plugins/snap/snap.sh -spindown $drive_serial

      My end game is that I have an external drive that may or may not be connected to my Unraid server. Nightly, if the drive is present, it mounts, does a sync to that drive, and then unmounts and spins down so it can be removed. This gives me a drive (or drives) that I can pull whenever I am travelling, synced with a bunch of files I have listed in my script.
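A hedged sketch of how that nightly flow could be rebuilt without snap.sh, using /dev/disk/by-id for the serial lookup plus the rc.unassigned commands from the plugin help. The serial, sync paths, and especially the mdcmd output format are assumptions to verify and adapt, not known-good values:

```shell
#!/bin/sh
# Hypothetical nightly User Scripts job replacing the old snap.sh flow.
SERIAL="WD-WCC7K1234567"   # example serial, not a real drive

# True if $1 (an sdX name) appears as an array member in mdcmd-style
# "rdevName.N=sdX" output passed as $2 (assumed format -- verify locally).
in_array() {
    printf '%s\n' "$2" | grep -q "^rdevName\.[0-9][0-9]*=$1\$"
}

# Serial -> current /dev/sdX node, independent of probe order.
DEV=$(readlink -f /dev/disk/by-id/*"$SERIAL"* 2>/dev/null)

if [ -b "$DEV" ] && ! in_array "$(basename "$DEV")" "$(/usr/local/sbin/mdcmd status 2>/dev/null)"; then
    /usr/local/sbin/rc.unassigned mount "$DEV"     # commands from the plugin help
    rsync -a /mnt/user/Travel/ /mnt/disks/travel/  # example sync job
    /usr/local/sbin/rc.unassigned umount "$DEV"
    hdparm -y "$DEV"                               # spin down so it can be pulled
fi
```

If the drive isn't plugged in, the by-id glob doesn't match, $DEV stays empty, and the job silently does nothing until the next night.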
  14. Does anyone have any experience setting up iOS on-demand profiles? I have my OpenVPN-AS up and running, working as expected, and I can connect via my iOS clients. I now want to set up the on-demand profile so that the VPN connects when I hit an unsecured network or a couple of specified Wi-Fi networks, and disconnects whenever I'm connected to my home Wi-Fi networks.
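For reference, on iOS this kind of rule lives in a configuration profile (.mobileconfig) carrying VPN On Demand rules, rather than in the OpenVPN client UI. A hedged sketch of just the OnDemandRules portion of the VPN payload — key names follow Apple's configuration-profile reference, rules are evaluated top to bottom, and the SSIDs are made-up examples:

```xml
<key>OnDemandEnabled</key>
<integer>1</integer>
<key>OnDemandRules</key>
<array>
    <!-- Stay disconnected on the trusted home networks (example SSIDs). -->
    <dict>
        <key>InterfaceTypeMatch</key><string>WiFi</string>
        <key>SSIDMatch</key>
        <array><string>HomeWifi</string><string>HomeWifi5G</string></array>
        <key>Action</key><string>Disconnect</string>
    </dict>
    <!-- Catch-all: connect on any other Wi-Fi or on cellular. -->
    <dict>
        <key>Action</key><string>Connect</string>
    </dict>
</array>
```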
  15. Getting the same issue when I went to upgrade Community Applications. It removed the old version and couldn't install the new one. Still running Unraid 6.6.6.
  16. I have the same question as this, but I haven't seen it answered. Anyone?
  17. Is there any command-line interface to this for looking up information and mounting/unmounting drives? I was using SNAP and am now moving to this, but I'm unsure how to fix a couple of the scripts I had that would mount and unmount drives following some logic, as well as check if a drive is part of the array before performing certain functions.
  18. Having trouble locating documentation on this; hoping someone can assist. I'm moving from the SNAP plugin to this. I had a script that was using some functions in snap.sh and was hoping there was an equivalent. Anyone? I'm currently using the following:

      snap.sh -getSerialNumberFromSharename SHARENAME
      snap.sh -isDeviceInUnraidArray SERIALNUMBER
      snap.sh -m SHARENAME
      snap.sh -M SHARENAME
      snap.sh -spindown SERIALNUMBER
  19. Adding a second NIC and second bridge gets around this restriction, but it still doesn't seem to work with OpenVPN. The OpenVPN container itself can see everything, but the clients connected to it can't.
  20. Nope, still hung up. If I navigate into the container I can see everything, but the connected clients cannot. I feel like it is a routing issue or something for the NATed IPs, but I'm not skilled enough on the networking side to go any further, and it seems like no one else is running into this issue.
  21. Still struggling a bit with the OpenVPN-AS Docker config, but I have made some progress. I now have OpenVPN-AS running in host mode. Docker containers are on br1 (running on the second NIC) with assigned IPs. If I shell into the OpenVPN-AS container I can communicate with everything: the host and all the containers. Clients connected to the OpenVPN server can communicate with the Unraid host and all of the network except for the Docker containers on br1. I feel like this is a routing issue that should be fixable. Can anyone provide assistance? I'm really weak on the networking side of things.
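One thing that might be worth trying here (an assumption on my part, not a verified fix): the br1 containers may simply have no route back to the VPN client subnet, so masquerading that subnet onto br1 from the host would make replies go to the host's br1 address instead. OpenVPN-AS's default client subnet is 172.27.224.0/20; adjust to match your setup:

```shell
# Hypothetical workaround: source-NAT VPN-client traffic leaving via br1
# so the containers see a source address they already know how to reach.
iptables -t nat -A POSTROUTING -s 172.27.224.0/20 -o br1 -j MASQUERADE
```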
  22. Still struggling to get the OpenVPN Docker working properly when configured with its own IP. Does anyone have it working in this scenario?
  23. Gotcha; unsure why that wasn't in my screenshot, but the private subnets that clients should be given access to are set. So I believe I have it set like you are suggesting, and it still doesn't work.