sse450


Everything posted by sse450

  1. I messed up with the official mariadb repo. My apologies.
  2. Upgraded to 6.9.2. If I assign any tag other than latest, pulling the image fails. Please find below the output for mariadb:

     Pulling image: linuxserver/mariadb:10.5
     TOTAL DATA PULLED: 0 B

     Command:
     root@localhost:# /usr/local/emhttp/plugins/dynamix.docker.manager/scripts/docker run -d --name='mariadb' --net='proxy-net' -e TZ="Europe/Minsk" -e HOST_OS="Unraid" -e 'PUID'='99' -e 'PGID'='100' -e 'MYSQL_ROOT_PASSWORD'='xxxxxxxx' -p '3306:3306/tcp' -v '/mnt/user/appdata/mariadb':'/config':'rw' 'linuxserver/mariadb:10.5'

     Unable to find image 'linuxserver/mariadb:10.5' locally
     docker: Error response from daemon: manifest for linuxserver/mariadb:10.5 not found: manifest unknown: manifest unknown.
     See 'docker run --help'.
     The command failed.

     If ":10.5" is removed, everything works as expected with the "latest" image. ":10" works as well, but ":10.5" doesn't. It doesn't matter which image you pull; the mariadb one above is just an example. I would appreciate any hint.
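Not from the original post, but a quick way to see which tags actually exist is to query the Docker Hub API before pulling. The endpoint format and the jq filter below are assumptions based on the public Hub v2 API, a sketch rather than a definitive recipe:

```shell
#!/bin/sh
# Hypothetical sketch: list the tags Docker Hub publishes for an image,
# to check whether a tag such as "10.5" exists before trying to pull it.

tags_url() {
  # Build the Docker Hub v2 tag-listing URL for a repository.
  printf 'https://hub.docker.com/v2/repositories/%s/tags?page_size=100' "$1"
}

list_tags() {
  # Requires curl and jq; prints one tag name per line.
  curl -s "$(tags_url "$1")" | jq -r '.results[].name'
}

# Usage (network access required):
#   list_tags linuxserver/mariadb | grep '^10'
```

If "10.5" does not appear in the list, the "manifest unknown" error is exactly what the registry is expected to return for that tag.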
  3. As I wanted to remove disk3 from the array, I tried to move a folder with 35,000 files (recursively) from disk3 to disk4:

     mv /mnt/disk3/thefolder /mnt/disk4/

     However, it stopped somewhere in the middle after copying some 16,000 files. Nothing was deleted from the source folder. Then I did:

     cp -r -p -n /mnt/disk3/thefolder /mnt/disk4/

     Can I now safely delete "thefolder" using the following command?

     rm -rf /mnt/disk3/thefolder

     Thanks.
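Not part of the post, but before running that rm it would be prudent to verify the two trees really match. A minimal sketch, assuming both copies are complete directory trees:

```shell
#!/bin/sh
# Hypothetical sketch: compare source and destination trees before
# deleting the source. diff -r exits non-zero if any file is missing
# or differs in content on either side.

verify_copy() {
  # $1 = source directory, $2 = destination directory
  diff -r "$1" "$2" >/dev/null
}

# Usage with the paths from the post:
#   verify_copy /mnt/disk3/thefolder /mnt/disk4/thefolder \
#     && rm -rf /mnt/disk3/thefolder \
#     || echo "trees differ; do NOT delete yet" >&2
```

Since cp -n skipped every file the interrupted mv had already created, a content comparison (not just a file count) is worth the extra time here.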
  4. I am trying to find out a way to download or export some selected pictures. I would appreciate any hint.
  5. My Unraid, 6.8.3, developed a strange error by itself. All of my dockers show "version not available". I can reach the dockers (for example nextcloud) from the internet, and I can ping anywhere from Unraid, so I assume there is no network problem. What could be the problem?
  6. @aptalca Thank you for the hint. But the second sentence in your reply is important, and it is not very clear to me. I do reverse proxy using the LE docker, but I think that onlyoffice still needs the certs in its /Data/certs directory. Am I wrong? How does just reverse proxying solve that issue without certs in the OO docker?
  7. @aptalca, thank you for pointing me to the readme file. I successfully mounted the LE config folder to the onlyoffice docker. However, I still need to provide the certs under the filenames onlyoffice requires: onlyoffice.crt and onlyoffice.key. Should I use "ln -s" or create a cron job to copy the LE certs to the required filenames? I would appreciate any advice. Thank you.
  8. The Onlyoffice DS docker needs the certificates installed in the /mnt/user/appdata/onlyofficeds/Data/certs folder. I copied the certs from letsencrypt to this folder and it works. But I need to find a way to automate pulling the certs from the LE docker, as the static LE certs in the onlyoffice docker will expire in at most 3 months. How can I do that? Does a symbolic link to the LE certs work? Or should I set up a cron job to copy the LE certs every day? Thanks.
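For the cron-job option, a minimal sketch might look like the following. Both paths and the container name are assumptions based on a typical Unraid appdata layout, not taken from the post:

```shell
#!/bin/sh
# Hypothetical sketch: copy the LE certs into the filenames OnlyOffice
# expects, suitable for a daily cron job.

sync_certs() {
  # $1 = dir holding the LE fullchain/privkey, $2 = OnlyOffice certs dir
  cp -f "$1/fullchain.pem" "$2/onlyoffice.crt" || return 1
  cp -f "$1/privkey.pem"   "$2/onlyoffice.key" || return 1
}

# Example cron usage (paths and container name are assumptions):
#   sync_certs /mnt/user/appdata/letsencrypt/keys/letsencrypt \
#              /mnt/user/appdata/onlyofficeds/Data/certs \
#     && docker restart onlyofficeds
```

A symlink can dangle inside the OnlyOffice container if the LE folder is not also mounted there, which is why a scheduled copy tends to be the safer of the two options asked about.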
  9. I know it is an old thread, but @livingonline8, thank you very much for the method you explained. I could finally install onlyoffice with nextcloud. I have two questions though: 1. I copied the certs from letsencrypt to the onlyoffice/Data/certs folder, but what will happen when the certs expire? Shouldn't it be integrated with the letsencrypt docker somehow? Will it suffice to create two symbolic links to the LE certs in the letsencrypt/keys folder? 2. In the onlyoffice.subdomain.com file, the proxy_pass directive is using port 443. Shouldn't it be 4430? We use 443 for the letsencrypt docker. Thanks.
  10. I am using the latest docker image with Calibre 4.5. I cannot enter any non-ASCII characters in the Calibre metadata edit window. However, if I install Calibre on my PC (Ubuntu), the problem disappears. There must be some character encoding problem with the docker, or I have a misconfiguration. I would appreciate any help.
  11. I am not sure when it started, but I am having a problem with this Calibre docker lately. When I double-click a book in the main window, the Calibre viewer opens in the same main window. The problem is that there is no control in the viewer to close it and get back to the main window, so as soon as I open the viewer I am stuck with it until I restart the docker. I believe the viewer should open in a new tab; standalone installations open the viewer in a separate window, hence no problem with them. Thanks.
  12. Gosh! So easy. Feeling stupid. Thank you. Sent from my SM-G955F using Tapatalk
  13. I cleared and added a new drive to the array. But ended up with an "unmountable" drive. Here is the screenshot: I would appreciate any hint to fix it. Thank you.
  14. Thank you very much for the docker. Works nicely. I have a request, if possible: would you please install the "ingest-attachment" plugin in the docker? Nextcloud needs it for indexing. It can be added by: /bin/elasticsearch-plugin install ingest-attachment
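Until the image ships the plugin, it can be added to a running container by hand. A sketch, assuming the container is named "elasticsearch" (an assumption, not from the post); the --batch flag answers the permission prompt so the install runs unattended:

```shell
#!/bin/sh
# Hypothetical sketch: install an Elasticsearch plugin inside a
# running container. The container name is an assumption.

plugin_cmd() {
  # Build the in-container install command for a given plugin.
  printf '/bin/elasticsearch-plugin install --batch %s' "$1"
}

# Usage (a restart is required before the plugin is active):
#   docker exec elasticsearch $(plugin_cmd ingest-attachment)
#   docker restart elasticsearch
```

Note that a manually installed plugin disappears when the container is recreated from the image, so this is a stopgap rather than a fix.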
  15. Old ISP:
      Your Public IPv4 is: 176.88.224.xxx
      Your IPv6 is: Not Detected
      Your Local IP is: 10.10.30.30

      New ISP:
      Your Public IPv4 is: 78.188.71.yyy
      Your IPv6 is: Not Detected
      Your Local IP is: 10.10.30.30

      Note: If I use the old ISP but without revising the IP in the Cloudflare settings, I get exactly the same errors in NC.
      Note2: Related NC docker nginx log entry:
      2019/03/21 14:49:35 [error] 314#314: *15995 upstream sent invalid status "0" while reading response header from upstream, client: 172.17.0.1, server: _, request: "GET /index.php/apps/richdocuments/settings/check HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "cloud.mydomain.com:443"
      Note3: Perhaps port 80 is blocked by the new ISP. I temporarily forwarded port 80 to a webpage on the same Unraid and checked it using some web tools. Not blocked.
      Note4: Re-installed the LE docker from scratch and renewed the certificates, but the problem persists.

      Frustrated. Kill me.
  16. I don't know where the correct thread is, hence asking for help. Today I did a very useful test. Currently both ISPs, the old and the new, have a connection at the same location. I reverted to the old ISP and changed the IP number to that of the old ISP in the Cloudflare DNS settings. Collabora started working nicely again. So it looks like it is related to the new ISP connection. If it has nothing to do with the letsencrypt certs, then what could it be? The router (pfSense) is the same, with ports 80 and 443 forwarded to 81 and 443, respectively. Thanks for your support.

      More info on my setup:
      1. I use your collabora and linuxserver's nextcloud and letsencrypt dockers.
      2. Nextcloud and collabora are set up as subdomains, following your guide.
      3. I use a wildcard with the Cloudflare plugin in the Letsencrypt settings.
      4. NC: 15.0.5, Collabora: 4.0.2, Letsencrypt: latest LS docker, Collabora Plugin NC: 3.2.4
      5. The errors in the NC log are as below:
      Error PHP Undefined offset: 0 at /config/www/nextcloud/lib/private/AppFramework/Http.php#150 2019-03-21T12:20:17+0300
      Error PHP Cannot declare class GuzzleHttp\Handler\CurlFactory, because the name is already in use at /config/www/nextcloud/3rdparty/guzzlehttp/guzzle/src/Handler/CurlFactory.php#15 2019-03-21T12:20:17+0300
      Error richdocuments GuzzleHttp\Exception\ConnectException: cURL error 28: Connection timed out after 5001 milliseconds (see http://curl.haxx.se/libcurl/c/libcurl-errors.html) 2019-03-21T12:20:17+0300

      Someone had solved this problem, stating "Solved! it was an expired certificate for the loolwsd server." (https://github.com/nextcloud/server/issues/11278). That's why I wanted to force-renew the certs.
  17. Due to a change of ISP, I had to change my static IP in the Cloudflare DNS records. As soon as I did that, my collabora and nextcloud stopped working together: I receive errors in NC and documents don't open. I checked the NC/Collabora forums and found that this error is related to invalid certificates. I suspect it is related to the letsencrypt certificates having been issued while on the old IP. My question is: is there a way to force-renew the certificates? Thanks
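Not from the post, but before forcing anything it helps to check how long the current cert is actually valid. The container name and cert path below are assumptions; --force-renewal is a standard certbot flag. Worth noting: Let's Encrypt certificates are issued for domain names, not IP addresses, so a changed IP by itself should not invalidate them.

```shell
#!/bin/sh
# Hypothetical sketch: print whole days until a certificate expires,
# then (optionally) force a renewal inside the LE container.
# Uses GNU date's -d flag, as found on Unraid.

days_until_expiry() {
  # $1 = path to a PEM certificate; prints whole days until notAfter.
  end=$(openssl x509 -enddate -noout -in "$1" | cut -d= -f2)
  echo $(( ($(date -d "$end" +%s) - $(date +%s)) / 86400 ))
}

# Usage (paths and container name are assumptions):
#   days_until_expiry /mnt/user/appdata/letsencrypt/keys/letsencrypt/fullchain.pem
#   docker exec letsencrypt certbot renew --force-renewal
```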
  18. Hello Fellow Unraiders, all of a sudden my Collabora stopped working with Nextcloud. Both work independently. When I try to open a document, Nextcloud gives the following errors:

      Error PHP Cannot declare class GuzzleHttp\Handler\CurlFactory, because the name is already in use at /config/www/nextcloud/3rdparty/guzzlehttp/guzzle/src/Handler/CurlFactory.php#15 2019-03-20T14:07:30+0300
      Error richdocuments GuzzleHttp\Exception\ConnectException: cURL error 28: Connection timed out after 5001 milliseconds (see http://curl.haxx.se/libcurl/c/libcurl-errors.html) 2019-03-20T14:07:30+0300
      Error PHP Undefined offset: 0 at /config/www/nextcloud/lib/private/AppFramework/Http.php#150

      A pop-up message in the Nextcloud UI says "Failed to load Collabora online. Please try again later." All I did just before this problem was change the static IP of the server and subdomains in the Cloudflare DNS; I am not sure if this is in any way related to the issue. Nextcloud is set up as a subdomain behind the letsencrypt docker as outlined in this forum. Does this error ring any bell? Pretty cryptic for me. Thanks.
  19. I don't know much about btrfs and snapshots. I installed BackupPC docker on the second Unraid and enjoy automatic, incremental backups. Sent from my SM-G955F using Tapatalk
  20. I think that ttyS0 has nothing to do with the ports on the PCIe card. The PCIe card has 2 serial ports; it should not come up as 1 port, I believe. So my card is recognized by Unraid as a PCI device, but the driver is not loaded. Is that possible? If so, what should I do to properly load the driver for the PCIe serial ports?
  21. Interesting results. I put pci-stub.ids=1c00:3253 in the append line and rebooted. I stopped the CentOS VM and wanted to add the serial port in the edit window. It says at the bottom: "Other PCI Devices: None available". I started the VM, SSHed into it, and received the results below:

      [root@centos ~]# setserial -g /dev/ttyS[0123]
      /dev/ttyS0, UART: 16550A, Port: 0x03f8, IRQ: 4
      /dev/ttyS1, UART: unknown, Port: 0x02f8, IRQ: 3
      /dev/ttyS2, UART: unknown, Port: 0x03e8, IRQ: 4
      /dev/ttyS3, UART: unknown, Port: 0x02e8, IRQ: 3
      [root@centos ~]# dmesg | grep tty
      [ 0.000000] console [tty0] enabled
      [ 1.359365] 00:03: ttyS0 at I/O 0x3f8 (irq = 4) is a 16550A

      So far so good. But, just for a learning experience, I removed pci-stub.ids=1c00:3253 from the append line and rebooted again. Then I SSHed into CentOS again and received exactly the same results as above. It looks like the Unraid host and the CentOS VM can both reach the serial interface irrespective of the passthrough. How is that possible? I am stumped. Could you please shed some light here? Thank you.
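One way to untangle this is to check, on the host, which kernel driver currently owns the card. A sketch; the PCI address 0000:09:00.0 comes from the lspci output posted earlier, and the PCI_SYSFS override exists only to make the sketch testable:

```shell
#!/bin/sh
# Hypothetical sketch: print the kernel driver bound to a PCI device
# by reading the driver symlink in sysfs.

bound_driver() {
  # $1 = full PCI address; prints the bound driver name, or "none".
  link="${PCI_SYSFS:-/sys/bus/pci/devices}/$1/driver"
  if [ -e "$link" ]; then
    basename "$(readlink -f "$link")"
  else
    echo none
  fi
}

# Usage:
#   bound_driver 0000:09:00.0
# "serial" means the host still owns the card; "pci-stub" or
# "vfio-pci" means it is reserved for passthrough.
```

As for the puzzle itself: the guest's ttyS0 at 0x3f8/IRQ 4 is QEMU's emulated COM1 port, not the PCIe card, which would explain why it shows up in the VM regardless of the stub setting.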
  22. @Squid Thanks for the hint. Is the text below in the append line sufficient/correct? vfio-pci.ids=1c00:3253
  23. I would like to pass through a serial port to my CentOS VM, but have no clue how to do that. I would appreciate any help. Please find below the relevant information from my Unraid:

      Unraid version: 6.6.5

      lspci -v
      09:00.0 Serial controller: Device 1c00:3253 (rev 10) (prog-if 05 [16850])
      Subsystem: Device 1c00:3253
      Flags: fast devsel, IRQ 16, NUMA node 0
      I/O ports at 2000 [size=256]
      Memory at 92b00000 (32-bit, prefetchable) [size=32K]
      I/O ports at 2100 [size=4]
      [virtual] Expansion ROM at 92e00000 [disabled] [size=32K]
      Capabilities: [60] Power Management version 3
      Capabilities: [68] MSI: Enable- Count=1/32 Maskable+ 64bit+
      Capabilities: [80] Express Legacy Endpoint, MSI 00
      Capabilities: [100] Advanced Error Reporting
      Kernel driver in use: serial

      setserial -g /dev/ttyS[0123]
      /dev/ttyS0, UART: 16550A, Port: 0x02f8, IRQ: 3

      Thank you.