net2wire

Everything posted by net2wire

  1. FWIW, I updated ownCloud via Docker to 8.0.4 Stable. Unfortunately ownCloud got stuck in 'maintenance mode' for well over an hour. To remedy this persistent problem I did the following on my system: 1. SSH to unRAID. 2. On the unRAID command line, enter: docker exec -ti ownCloud /bin/bash 3. At the container's command line, enter: sudo -u nobody php /var/www/owncloud/occ upgrade Wait a few seconds or minutes and all should be back to normal, except for add-ons, which will be disabled (the commands are collected below).
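     Collected in one place, assuming the container is named ownCloud as it is on my system:
         # open a shell inside the ownCloud container from the unRAID console
         docker exec -ti ownCloud /bin/bash
         # then, inside the container, run the occ upgrade as the web server user
         sudo -u nobody php /var/www/owncloud/occ upgrade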
  2. I did in fact install vanilla Windows 7 with Virtio drivers a few times. I'm going to try to install Linux Mint as dual boot and see what happens. The vdisk1.img I created was from a dual boot laptop. Hope it works. Thanks for the suggestion.
  3. I changed the VM boot order and booted from the Win7 install ISO and attempted to load the drivers, but so far I have been unable to: the Win7 installer returns an error that it is unable to find an image (or storage) device. I will keep trying; maybe I need to try other Virtio drivers as well. "A dual booting VM... That's an interesting concept." Yes, I thought it would be interesting too. Maybe not as practical as just installing the OSes on separate VMs, which I've already done, but it seems to be another direction in the world of VMs.
  4. I tried this today but had an error: qemu-img: Must specify image file name. Not sure what I did wrong, but I removed -O and was able to image my laptop's HDD and boot successfully to Linux Mint KDE (the invocation is sketched below). This particular system was a dual boot Linux Mint / Windows 7. I set up the VM as Linux and played around with it out of curiosity to see if I could boot Win7. Win7 begins to boot but soon fails with a BSOD and restarts to the GRUB bootloader. Just wondering, as a matter of testing and trying new things, whether the added complexity of a dual boot system is possible with KVM at all?
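     For reference, the form that ended up working for me was along these lines (the source device and destination path are from my setup and will differ on yours):
         # copy the physical dual-boot disk into an image usable as a KVM vdisk
         qemu-img convert /dev/sdX /mnt/user/domains/vdisk1.img
     My guess is the earlier error came from giving -O without both file names after it; with an explicit output format it would be something like qemu-img convert -O raw /dev/sdX /mnt/user/domains/vdisk1.img.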
  5. I was attempting to do the same thing when I realized Windows setup asks for a floppy.
  6. "there is that, i don't use the thing so i couldn't really say. personally i'd just pass the whole folder and have done with it, but that's just me." It's not quite that simple. ownCloud ships with many apps pre-loaded in the apps directory, and user apps are added to that same directory for additional functionality, so both sets of apps have to end up in the same place to work properly. Whenever I reload ownCloud, I just get into the container with a CLI and copy the apps over from the appdata/owncloud directory, and I'm good to go (roughly as sketched below). This is only necessary when I update or make XML changes to the Docker image. I basically followed your advice and do it this way for now, but this needs to be remedied to make it easier for all users in general.
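     Roughly what that copy step looks like, assuming the appdata share is mounted inside the container (the /config mount point and the path under it are placeholders from my setup, not something the template guarantees):
         # open a shell in the ownCloud container
         docker exec -ti ownCloud /bin/bash
         # copy the manually installed apps from the appdata mount into the live apps directory
         cp -r /config/apps/. /var/www/owncloud/apps/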
  7. One thing to consider is that many ownCloud apps are installed manually and are not available via the ownCloud web GUI, so /var/www/owncloud/apps needs to be exposed in order for admins to install third-party apps.
  8. not sure if it's better to just make /var/www/owncloud/apps persistent?
  9. "the path to the python executable is wrong, it should be /usr/bin/env python2 sabToSickBeard.py" I made the corrections, thank you, and I no longer receive the previous error, but I get a new one: Unable to open URL: [Errno socket error] [Errno 110] Connection timed out. To be clear, I am using this SabnzbdVPN docker and trying to get it to work with PhAzE's SickBeard plugin, and I am not sure whether that is what is causing the new error.
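     Spelled out, the change that fixed the Exit(127) error in the next post was calling the post-processing script with python2 instead of the missing plain python:
         # run the SickBeard post-processing wrapper with the python2 interpreter that actually exists in the container
         /usr/bin/env python2 sabToSickBeard.py
     (If the script is set up in SABnzbd to run directly, I assume the same idea applies to its shebang line, but I have not verified that part.)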
  10. SabnzbdVPN returns the following error: Exit(127) /usr/bin/env: python: No such file or directory. Everything seems to be working between Sab and SickBeard, and Sab does seem to be downloading properly, except that at the very end, when sabToSickBeard.py starts, it doesn't complete and fails with the error.
  11. Try adding /zm to the end. In my case I used http://192.168.0.25:xxxx/zm (where xxxx is the port you assigned). I also got the apache startup page.
  12. I agree that it will survive a restart, but I think if you update the container then you'll need to make the changes again. I was wondering whether it's possible to expose the /var/www/owncloud/apps folder to a configurable folder in the appdata share using a symbolic link (a rough sketch of the idea follows below). I think it's been used on other containers, but I haven't really tried to look into it yet. Thanks for letting us know how you managed it, though; nice to have a choice. +1 on exposing the /var/www/owncloud/apps folder. With all the different apps available, it would make sense to make that data persistent, and easier to configure and manage.
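     Purely as a sketch of the symlink idea, untested on my side (the /config mount point and the app name are placeholder assumptions about where the appdata share would appear inside the container):
         # inside the ownCloud container: keep a third-party app on the appdata share
         # and point the live apps directory at it with a symlink
         mkdir -p /config/apps/some_app
         ln -s /config/apps/some_app /var/www/owncloud/apps/some_app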
  13. "Bug: bad page state in process php5-fpm pfn:3b6b8" This error first occurred 18 or so hours after upgrading from 6.0-beta14b-x86_64 to 6.0-rc2-x86_64. unRAID continues to run however. My system: M/B: ASUSTeK COMPUTER INC. - H97-PLUS CPU: Intel® Core™ i7-4790 CPU @ 3.60GHz Cache: 256 kB, 1024 kB, 8192 kB Memory: 8192 MB (max. installable capacity 32 GB) Network: eth0: 1000Mb/s - Full Duplex Kernel: Linux 4.0.2-unRAID x86_64 OpenSSL: 1.0.1m Plugins: SAB, SickBeard, CouchPotato, Subsonic, unMenu, Plex Media Server Dockers: Guacamole, MariaDB, Nginx, ownCloud, PlexWatch Since the error I have stopped "non-essential" apps; plexwatch, guacamole syslog-may-18-2015-1.txt
  14. Doesn't seem to affect anything, but during startup the following comes up for cpuload: cpuload started install/doinst.sh: line 1: python: command not found
  15. Tried this as suggested. My volume mapping, to add ownCloud apps, is as follows:
      Container Volume: /var/www/owncloud/apps
      Host Path: /mnt/user/Data/owncloud/apps
      I placed my unzipped apps in the Host Path directory, started the ownCloud docker, and all I get is a blank page.
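     For clarity, the template's Container Volume / Host Path pair boils down to a docker run mapping like this (just a sketch of the flag, not the full run command the template generates):
         # map the host apps folder over the container's apps directory
         -v /mnt/user/Data/owncloud/apps:/var/www/owncloud/apps
     My guess is the blank page happens because this mapping hides the pre-loaded apps that normally live in /var/www/owncloud/apps, as discussed in post 6 above.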
  16. Been away for a while. Yes, in fact I did finally set up Nginx as a reverse proxy and it seems to work very well. I chose smdion's Nginx docker over the Apache reverse proxy. I do not use DuckDNS; I use Dyn.com (aka Dyndns.org) for my dynamic IP services. As a side note, I set up approximately a dozen ownCloud accounts for friends and family, and all of them seem to like how ownCloud is working, so I removed BTSync entirely. The Nginx reverse proxy is very helpful.
  17. Have been getting this error in SickBeard:
      SSLError: The read operation timed out
        return self._sslobj.read(len)
      File "/usr/local/PhAzE-Common/usr/lib64/python2.7/ssl.py", line 160, in read
        return self.read(buflen)
      File "/usr/local/PhAzE-Common/usr/lib64/python2.7/ssl.py", line 241, in recv
        data = self._sock.recv(self._rbufsize)
      File "/usr/local/PhAzE-Common/usr/lib64/python2.7/socket.py", line 476, in readline
        line = self.fp.readline(_MAXLINE + 1)
      File "/usr/local/PhAzE-Common/usr/lib64/python2.7/httplib.py", line 365, in _read_status
        version, status, reason = self._read_status()
      File "/usr/local/PhAzE-Common/usr/lib64/python2.7/httplib.py", line 409, in begin
        response.begin()
      File "/usr/local/PhAzE-Common/usr/lib64/python2.7/httplib.py", line 1045, in getresponse
        r = h.getresponse(buffering=True)
      File "/usr/local/PhAzE-Common/usr/lib64/python2.7/urllib2.py", line 1187, in do_open
        return self.do_open(httplib.HTTPSConnection, req)
      File "/usr/local/PhAzE-Common/usr/lib64/python2.7/urllib2.py", line 1222, in https_open
        result = func(*args)
      File "/usr/local/PhAzE-Common/usr/lib64/python2.7/urllib2.py", line 382, in _call_chain
        '_open', req)
      File "/usr/local/PhAzE-Common/usr/lib64/python2.7/urllib2.py", line 422, in _open
        response = self._open(req, data)
      File "/usr/local/PhAzE-Common/usr/lib64/python2.7/urllib2.py", line 404, in open
        usock = opener.open(url, post_data)
      File "/usr/local/sickbeard/sickbeard/helpers.py", line 181, in getURL
      Still looking around for additional info, but could this be an issue with the search provider?
  18. I've had the same issue. I disabled then re-enabled and got the apps working again. Haven't figured it out though.
  19. After some trial and error I came to the conclusion that, to get rid of the Nginx reverse proxy error Request Entity Too Large, all that was needed in my setup was to add client_max_body_size to the default.conf file, in the ssl 443 section, inside "location /":
      location / {
          proxy_pass https://192.168.100.100:8000/;
          proxy_next_upstream error timeout invalid_header http_500 http_502 http_503 http_504;
          proxy_redirect off;
          proxy_buffering off;
          proxy_set_header Host my.personal.com;
          proxy_set_header X-Real-IP $remote_addr;
          proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
          # set client body size to unlimited
          client_max_body_size 0m;
      }
      I set client_max_body_size to 0m, where 0m means unlimited file size. As I mentioned in a previous post, trying to upload files larger than 1.5MB to ownCloud returned the mentioned error and the uploads failed on all devices, including the Windows ownCloud client and the iPhone and Android ownCloud clients. So far I have successfully uploaded various files as large as 20MB with no hiccups. Hope anyone can make use of this info. ;-)
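     One extra step that I believe is needed after editing default.conf is reloading Nginx so the change takes effect; something like this from the unRAID console (the container name is an assumption here, use whatever yours is called):
         # reload the nginx configuration inside the reverse-proxy container
         docker exec Nginx nginx -s reload
     Restarting the container from the Docker tab should work just as well.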
  20. FYI on the ownCloud docker when using Nginx as a reverse proxy (smdion/docker-nginx:latest). When Nginx is set up as a reverse proxy for ownCloud and you get errors from the ownCloud client, e.g. Request Entity Too Large, it looks like the file being uploaded to ownCloud is larger than 1.5MB and the Nginx reverse proxy does not allow that file to upload. I used the ownCloud Windows client application, and this is the error I received on numerous PDF work files that were queued to upload. I imagine this error will come up on other ownCloud-enabled devices too; in fact, I just checked my iPhone and some HD pics and videos received an upload error as well. To determine where the problem could be in my system, I turned off the Nginx reverse proxy docker and then uploaded all the PDF files larger than 1.5MB successfully, so the Nginx reverse proxy seems to be the culprit in my case. I found some information suggesting that making changes to the Nginx server file /etc/nginx/nginx.conf might take care of this. I'm wondering whether another field could be added to the Nginx server setup to allow for nginx.conf, and maybe other config-related files too? Seems like a good idea for anything having to do with Apache and Nginx. Anyway, here's the info on the Nginx reverse proxy for ownCloud: http://www.cyberciti.biz/faq/linux-unix-bsd-nginx-413-request-entity-too-large/ In the meantime, if the maintainer doesn't get a chance to make changes to the Nginx server image, I will be boning up on making the necessary changes to the Nginx server myself if I can. Fingers crossed.
  21. Thanks for the input. I'm slowly getting into Docker and hope to be able to make my own images in the near future. Docker seems to have piqued my interest more than I had imagined. Once I figure some of this out I will definitely forward any image changes to the maintainer. Time to hit the books I suppose.
  22. In general, users may need to know how to make permanent changes in their Nginx or Apache related dockers, and in any other docker image provided here. In my case both the Nginx and ownCloud+Nginx dockers need some tweaking to nginx.conf. For one example, ownCloud comes up with the error "Request Entity Too Large", which does not allow a file over a particular size to upload via ownCloud. This may be easily resolved by making changes to the Nginx server/proxy and/or to ownCloud itself. At present I know how to access the Docker image and fool around with files, but it seems those changes are not permanent, unless I'm doing something incorrect at this stage. Still learning how to play with Docker images, but I need some further direction on saving my changes. Thanks.
  23. A few years ago I used TekRADIUS on Windows and would like to see whether a Linux version of a RADIUS Docker could be done. Basically I used it for secure access to WiFi routers. I'm not up to par on which RADIUS version has the best GUI or is easiest to install. I did see that a few Docker images exist in the Docker repository. Thanks.
  24. "Just put them on the config folder, probably under your appdata folder. The files are server.pem and server.key." Great! Thanks. Works like a charm. I've seen those files (server.key & server.pem) there since the initial installation and noticed in the logs that they get reinstalled every time there is an edit to ownCloud. Anyway, with this new SSL cert I had to decrypt the server.key (from the unRAID shell) to get rid of an error. What I did: I stopped ownCloud and, in the working folder (config folder) I use (/mnt/user/Data/owncloud-mariaDB/), removed the original server.key and server.pem files. I WinSCP'd to that working folder and copied in the new ssl.key and ssl.pem files that I received from the Certificate Authority. Then I PuTTY'd via SSH to unRAID, went to the working (config) folder, and ran: openssl rsa -in ssl.key -out server.key, entering the password that was set up when the SSL cert was created at the Certificate Authority. Started ownCloud and everything came up fine (steps summarized below). BTW, I used https://www.startssl.com/ as previously suggested. The instructions there cover obtaining a free SSL cert and setting up a reverse proxy, but I was not interested in the proxy yet. Follow the directions carefully: http://www.seandion.info/unraid/add-ssl-to-your-reverse-proxy-for-free/ Thanks for the guidance.
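     Summarizing the steps as shell commands (paths are from my setup; the final cp of the .pem is how I understand the certificate side and is an assumption rather than a verbatim copy of what I ran):
         # stop the ownCloud container first, then work in its config folder on the host
         cd /mnt/user/Data/owncloud-mariaDB/
         rm server.key server.pem
         # ssl.key and ssl.pem are the new files from the Certificate Authority, copied in via WinSCP
         openssl rsa -in ssl.key -out server.key   # prompts for the CA passphrase, writes a decrypted key
         cp ssl.pem server.pem
     Then start the ownCloud container again.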
  25. So far I have about 10 people using my ownCloud with MariaDB without any problems, and people seem to like it. I will probably have many more users before long, so I am wondering how one can install a new SSL cert from a Certificate Authority in ownCloud? I'm researching this: docker run -v /host/path/to/certs:/container/path/to/certs -d IMAGE_ID "update-ca-certificates" . I don't know if it's the right idea, and I need more info on the container/path/to/certs. A little nudge goes a long way! =D