wreave

Member · Posts: 133 · Days Won: 1
Everything posted by wreave

  1. Quote:
       It does look like that, copy and paste below in case I can't read.
       #vfs_recycle_start
       #Recycle bin configuration
       [global]
       syslog only = No
       log level = 0 vfs:0
       #vfs_recycle_end
       Restart the recycle bin.

     Tried that, it did not work.
  2. It does look like that; copy and paste below in case I can't read.

     #vfs_recycle_start
     #Recycle bin configuration
     [global]
     syslog only = No
     log level = 0 vfs:0
     #vfs_recycle_end
  3. Quote:
       I need more information. Do you have the recycle bin enabled? Are you doing the delete through an SMB share? Check the shares tab of the recycle bin to be sure the share is enabled. Look at the deleted files log and see if it has logged the deleted file. A zero-length file will not be put in the recycle bin. Did you enter something into the text file?

     I do have the recycle bin enabled. I am using SMB. All my shares are listed under the Shares portion of that page. I have tested with a text file with text in it, as well as by duplicating one of the other larger files. Deleting these files does not create the recycle folder. The deleted files log on the recycle bin settings page is blank, and I do have logging enabled.

     Quote:
       Copy and post the /etc/samba/smb-shares.conf file. Tell me which share you are trying to delete the file from. Have you tried another share?

     Here is the requested file: http://pastebin.com/wJLkj5ak. I have tried on the TV - Shows share and the Downloads share.
  4. Quote:
       I need more information. Do you have the recycle bin enabled? Are you doing the delete through an SMB share? Check the shares tab of the recycle bin to be sure the share is enabled. Look at the deleted files log and see if it has logged the deleted file. A zero-length file will not be put in the recycle bin. Did you enter something into the text file?

     I do have the recycle bin enabled. I am using SMB. All my shares are listed under the Shares portion of that page. I have tested with a text file with text in it, as well as by duplicating one of the other larger files. Deleting these files does not create the recycle folder. The deleted files log on the recycle bin settings page is blank, and I do have logging enabled.
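     For reference, a share with the recycle bin active would normally carry vfs_recycle options in its smb.conf stanza; the sketch below uses an illustrative share name and path, not values taken from the pastebin:

     ```ini
     [TV-Shows]
        path = /mnt/user/TV-Shows
        vfs objects = recycle
        recycle:repository = .Recycle.Bin
        recycle:keeptree = Yes
        recycle:versions = Yes
        recycle:touch = Yes
     ```

     If a share's stanza lacks the `vfs objects = recycle` line, deletes over SMB bypass the bin entirely, which would match the symptoms described above.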
  5. This no longer seems to be moving things to the recycle bin. I have upgraded to 6.3, uninstalled the plugin, removed all .Recycle.Bin folders, and reinstalled it from Community Applications. However, when I create a text file and delete it, it does not create a .Recycle.Bin folder. Is there any other information I can provide? I'd love to get this working again. Thanks!
  6. This seems to have worked. I edited nothing, just recreated the Docker container and moved the files over.
  7. Quote:
       This is a completely different container image. You can't just use the data folder from the other one here. As I posted on the other thread, install this one in a new config folder and then move your config and web files from the other piece by piece. There are too many changes under the hood.

     I did do that. I moved over my site conf files after making the new container.

     Edit: here is a site-conf file example that I am having this issue with: http://pastebin.com/7ppWPWfe
  8. Switched to this container from the initial aptalca one, but I am running into a permission error. When attempting to open a URL that uses the reverse proxy, I get the following in error.log:

       /js/alllibs.js", host: "SUB.DOMAIN.com"
       2017/01/28 08:20:38 [crit] 321#0: *1 open() "/var/lib/nginx/tmp/proxy/6/01/0000000016" failed (13: Permission denied) while reading upstream, client: 192.168.10.1, server: SUB.DOMAIN.com, request: "GET /static/css/bright.css.map HTTP/2.0", upstream: "http://192.168.10.10:5075/static/css/bright.css.map", host: "SUB.DOMAIN.com"

     This coincides with errors like the following in my Chrome developer console:

       GET https://SUB.DOMAIN.com/static/js/nzbhydra.js net::ERR_SPDY_PROTOCOL_ERROR

     This seems to indicate some kind of issue with SPDY/HTTP2, but I am not really sure. Any insight into why this is happening? I am pretty stuck, and this was all working before moving over to the linuxserver version (all I changed was the paths for /keys and /fastcgi_params in the site-confs to match the container changes). I should also note this doesn't happen for every proxy I run, just 3 or 4 of them. All the site-confs are the same besides the URL and the IP:port. Thanks!
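     The `open() ... failed (13: Permission denied) while reading upstream` line points at nginx's proxy buffering temp directory rather than HTTP/2 itself: upstream responses too large for the in-memory buffers get spooled to that path, and if the worker user cannot write there, the transfer aborts mid-stream, which the browser then surfaces as a protocol error. That would also explain why only a few proxies (the ones serving larger files) fail. A sketch of the relevant knobs, with illustrative paths not taken from the container:

     ```nginx
     http {
         # Spool directory for buffered upstream responses; it must be
         # writable by the nginx worker user, or large responses fail.
         proxy_temp_path /var/lib/nginx/tmp/proxy;

         # Alternatively, disabling buffering avoids the temp files
         # entirely, at the cost of streaming through the workers.
         # proxy_buffering off;
     }
     ```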
  9. Oh man, it was bad DNS. Thank you for the help and for teaching me how the search works in the forums
  10. Something happened over the last few days, and I am now unable to update my containers or install new ones. Attempting to update a container just hangs for a while, and then when the page reloads, nothing has changed. When I go to install a new container (in this example, glances) I get the following error: "Error: Tag latest not found in repository docker.io/nicolargo/glances". Note that this happens for EVERY container I have tried, and I suspect it is also the cause of the failed updates. Any thoughts on what could be causing this?
  11. Oh, well thanks for the hard work on figuring that out. I am going to enable it for now
  12. I am still having an issue with the WebUI for InfluxDB, any idea as to why this will not load?
  13. Quote:
       @wreave, you just open the ports in the service you want to connect to and then have the other service connect to those ports. For example: I have influxdb running on my unRAID box. It has port 8086 open, as in the influxdb setup.png attachment. I have grafana also running on my unRAID box. It connects to the IP of unRAID and the influxdb port for its default datasource, as shown in the Grafana Setup.png. It's really that simple. Let me know if you have any further questions.

     That helps a lot; last questions, I hope. Does the Glances docker auto-export to InfluxDB, or do I have to do something special? Also, the web interface for InfluxDB refuses to connect; any ideas? Thanks again.

     Quote:
       You will need to modify the glances config file as explained here: http://glances.readthedocs.io/en/stable/gw/influxdb.html. But it also says you need to start glances with the influxdb flag. I think a better way would be to use Telegraf to get stats on your unRAID box. Have you looked into Telegraf before? It's my preferred method.

     I have not, but I shall give it a look. Thanks again.
  14. Quote:
       @wreave, you just open the ports in the service you want to connect to and then have the other service connect to those ports. For example: I have influxdb running on my unRAID box. It has port 8086 open, as in the influxdb setup.png attachment. I have grafana also running on my unRAID box. It connects to the IP of unRAID and the influxdb port for its default datasource, as shown in the Grafana Setup.png. It's really that simple. Let me know if you have any further questions.

     That helps a lot; last questions, I hope. Does the Glances docker auto-export to InfluxDB, or do I have to do something special? Also, the web interface for InfluxDB refuses to connect; any ideas? Thanks again.
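     Per the linked docs, the export is configured in glances.conf and enabled with a startup flag; a minimal sketch, where the host, port, and credentials are placeholder values for your own InfluxDB container:

     ```ini
     # glances.conf — hypothetical values; point these at your InfluxDB container
     [influxdb]
     host=192.168.10.10
     port=8086
     user=root
     password=root
     db=glances
     ```

     Glances would then be started with its InfluxDB export flag (e.g. `glances --export-influxdb` in the 2.x series) so that it pushes stats into that database for Grafana to query.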
  15. Is there a way to get the Glances, InfluxDB, and Grafana dockers to communicate? I am trying to pass my data from Glances to Grafana, but I can't see how to do this after reading the docs and a lot of trial and error. http://glances.readthedocs.io/en/latest/gw/influxdb.html#grafana Thanks.
  16. Quote:
       What do the sickrage logs say when you start the container up? It should show something about it getting an IP address (see my plex container output below).

       Joining mDNS multicast group on interface eth1.IPv4 with address 192.168.10.12.
       New relevant interface eth1.IPv4 for mDNS.
       Network interface enumeration completed.
       Registering new address record for 192.168.10.12 on eth1.IPv4.

     This is the last thing that shows up when initially restarting the container.
  17. Somehow I did everything BUT look at the container settings. I found a container that still had the share configured in it. That will likely solve the issue. Thanks!
  18. I have a share that just doesn't seem to want to die. I remove it, and eventually it comes back (empty). Is there a way to see what is creating the folder, to check whether unRAID or one of my Docker containers keeps bringing it back from the dead? (Timing with Halloween not intentional.) Thanks!
  19. Also, are you adding the command to the container you want it applied to, or to the pipework container? It should be on the actual sickrage container in your example. And you don't replace the container name piece with the actual container name.
  20. Not sure about the orphan image. Maybe remove both and try again from scratch. Also, I used static info instead of DHCP options (see below for my example).

       -e 'pipework_cmd=br0 @CONTAINER_NAME@ 192.168.10.12/[email protected] FC:AA:14:91:7D:2E'
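     For reference, the pipework helper container picks up a `pipework_cmd` environment variable on the target container; its general shape mirrors the pipework CLI arguments (a sketch with placeholder fields):

     ```
     pipework_cmd=<host_interface> @CONTAINER_NAME@ <ip>/<prefix>[@<gateway>] [<mac_address>]
     ```

     `@CONTAINER_NAME@` is a literal token that the helper substitutes itself, which is why it is left unedited in the example above.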
  21. Did you edit the Repository as explained in the support thread for that container? https://lime-technology.com/forum/index.php?topic=43970.0 For unRAID 6.2 you need to use dreamcat4/pipework:1.1.6
  22. Oh no, I am one of those people who find a solution and don't update the thread! The pipework container let me get a custom MAC and a custom IP. The only issue is that it changes the br0 IP address when I start the Docker container using the custom IP. I fix this by statically assigning the br0 IP through the terminal.
  23. Is there a way in unRAID 6.2 to have a custom MAC address and IP on a Docker container? I know they added some VLAN support and the like, but I am not clear on how (if possible) to configure any of this. Thanks!