RadOD

Everything posted by RadOD

  1. Can anyone tell me where to start looking for an "Error when trying to connect (Error occurred in the document service: Error while downloading the document file to be converted.) (version 6.3.1.32)" when I try to add the hostname for the document server to Nextcloud?

     Searching the internet, I have found some recommended fixes, such as changing the following config files:

     OnlyOffice:
       default.json -> "rejectUnauthorized": false
       local.json -> "header": "AuthorizationJwt" (from "header": "Authorization")
       supervisorctl restart all

     Nextcloud config/config.php:
       'onlyoffice' => array(
         'verify_peer_off' => true,
         'jwt_header' => 'AuthorizationJwt',
       ),

     ...but this is basically a new install, and it doesn't look like default.json is meant to be edited outside of the docker. /healthcheck/ returns true for both the IP address and the external domain name, so I think the container is up and running, but the above fix suggests some part of the forwarding is broken, and I don't know what tools might help me find it.

     My firewall forwards 443 to the NginxProxyManager docker on Unraid. NPM has a wildcard SSL cert for the domain and is set to forward nextcloud.domain.name and docserver.domain.name to the appropriate ports for their respective Unraid dockers. All 3 dockers are running on a docker proxynet (docker network create proxynet). Both Nextcloud and the document server are accessible on the local network via IP:port and on the internet via domain name. Both cert and key PEMs are copied to .crt files.

     I get the following in the docker log:

     If I manually cut and paste https://nextcloud.domain.name/apps/onlyoffice/empty?doc=eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJhY3Rpb24iOiJlbXB0eSJ9.la3XO9qn6tmmWaNhPtzJXk2kMb0u_-gh6ZnwW-iFnY0 into a browser, it downloads a 7 KB file, new.docx, that looks blank to me. As far as I can tell, everything seems to be working except the file won't transfer from inside the Nextcloud docker.
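     One way to narrow down where the download fails might be to run the same request from inside the document server container, since that is where the "Error while downloading the document file" message originates. A minimal sketch; the container name is a guess (check docker ps) and it assumes curl exists in the image:

       # Run the same download from inside the document server container.
       # -v shows the TLS handshake and any redirects; -k skips cert verification,
       # the same effect as the rejectUnauthorized=false fix.
       docker exec -it onlyoffice-document-server curl -vk 'https://nextcloud.domain.name/apps/onlyoffice/empty?doc=...'

     If curl fails in there but works from the host, the problem would be the container's path to the proxy (DNS on proxynet, or the cert chain) rather than Nextcloud itself.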
  2. The main page makes everything look good: everything is green and says "passed". I have to actually open each drive to see any sign of problems, such as a failed Reallocated Sectors Count. Is that expected behavior? It seems like those failures and warnings should be out in front.
  3. Mover has been running for hours but I only see a couple GB actually moved.

     iotop -o

     Total DISK READ :   0.00 B/s | Total DISK WRITE :  177.10 K/s
     Actual DISK READ:   0.00 B/s | Actual DISK WRITE:   0.00 B/s
       TID  PRIO  USER  DISK READ  DISK WRITE  SWAPIN     IO>  COMMAND
     30785  be/4  root  0.00 B/s    0.00 B/s   0.00 %  0.02 %  [kworker/u8:5-events_power_efficient]
     15906  be/4  root  0.00 B/s    3.85 K/s   0.00 %  0.00 %  shfs /mnt/user -disks 31 -o noatime,allow_other -o remember=0
     17443  be/4  root  0.00 B/s   34.65 K/s   0.00 %  0.00 %  shfs /mnt/user -disks 31 -o noatime,allow_other -o remember=0
     31806  be/4  root  0.00 B/s    7.70 K/s   0.00 %  0.00 %  shfs /mnt/user -disks 31 -o noatime,allow_other -o remember=0
     22041  be/4  root  0.00 B/s   11.55 K/s   0.00 %  0.00 %  shfs /mnt/user -disks 31 -o noatime,allow_other -o remember=0
     17696  be/4  root  0.00 B/s    7.70 K/s   0.00 %  0.00 %  shfs /mnt/user -disks 31 -o noatime,allow_other -o remember=0
     17796  be/4  root  0.00 B/s   53.90 K/s   0.00 %  0.00 %  shfs /mnt/user -disks 31 -o noatime,allow_other -o remember=0
     16781  be/4  root  0.00 B/s   11.55 K/s   0.00 %  0.00 %  shfs /mnt/user -disks 31 -o noatime,allow_other -o remember=0
     16782  be/4  root  0.00 B/s    7.70 K/s   0.00 %  0.00 %  shfs /mnt/user -disks 31 -o noatime,allow_other -o remember=0
     19995  be/4  root  0.00 B/s   15.40 K/s   0.00 %  0.00 %  shfs /mnt/user -disks 31 -o noatime,allow_other -o remember=0
     16836  be/4  root  0.00 B/s   23.10 K/s   0.00 %  0.00 %  shfs /mnt/user -disks 31 -o noatime,allow_other -o remember=0

     Docker and VM are off. Where do I look to see what mover is trying to do, to figure out why it does not seem to be making progress?
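     A couple of generic ways to watch what mover is actually touching, as a sketch. It assumes mover logging is enabled under Settings -> Scheduler, which, as I understand it, makes mover log each file it moves to the syslog:

       # Watch syslog for mover entries as they happen
       tail -f /var/log/syslog | grep -i mover

       # Confirm the mover process is still alive
       ps aux | grep -i [m]over

       # Spot-check what is still sitting on the cache waiting to move
       find /mnt/cache -type f | head -20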
  4. Is there any way to push data from outside sources into the InfluxDB within the GUS container? It would be nice to be able to push from an OPNsense or pfSense firewall too and have it all in the same place.
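     For what it's worth, if the bundled InfluxDB is a 1.x instance listening on the default port 8086 with no auth (an assumption; check the container settings), the firewall could push points over the plain HTTP write API. A sketch with made-up database and measurement names:

       # Create a database for firewall metrics
       curl -XPOST 'http://UNRAID_IP:8086/query' --data-urlencode 'q=CREATE DATABASE firewall'

       # Push one point in InfluxDB 1.x line protocol
       curl -XPOST 'http://UNRAID_IP:8086/write?db=firewall' \
         --data-binary 'wan_traffic,host=opnsense bytes_in=123456,bytes_out=654321'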
  5. When designating the docker image and appdata folder locations, the help tooltip says the following: "It is recommended to create this folder outside the array, e.g. on the Cache pool. For best performance SSD devices are preferred." But the default is to create it on the array.

     What is the difference, if any, between setting it to, for example, /mnt/somecachepooldevice/appdata/ versus keeping the default /mnt/user/appdata/ and setting the share to cache-only on that same cache pool drive under Shares? Are they equivalent? If not, which is preferred? In some form or another this question has been asked and answered many times, and I see some ambiguity in terms of what works and what causes problems, even though they should be one and the same.

     The particular reason I am asking is that after manually moving my docker.img and appdata folders to a new second cache pool drive, and manually setting the location to that drive in docker settings, one docker does not seem to work: openvpn-as. Upon installing, its appdata folder is completely empty with the exception of two empty folders, 'log' and 'etc'. This was not moved from the initial appdata location but was installed later for the first time. If I mount the docker.img file, the command

     find /mnt/docker_temp/containers -name '*.json' | xargs grep -l /user/appdata

     yields two files, hostconfig.json and config.v2.json, both belonging to the container openvpn-as. So should I move everything back to /mnt/user/appdata from /mnt/cache2/appdata? Or is this irrelevant, and should I look elsewhere for why there is nothing in the openvpn container?
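     For anyone comparing the two path styles: as I understand it, /mnt/user/appdata is the FUSE/shfs view that merges every disk and pool holding a piece of the appdata share, while /mnt/cache2/appdata is that one pool directly. A quick, harmless way to see whether a share's files are split across locations:

       # Every physical location that holds part of the appdata share
       ls -d /mnt/*/appdata

       # The merged user-share view of the same data
       ls /mnt/user/appdata

     If the container's volume mappings point at one view while the paths inside docker.img reference the other, that mismatch would be my first suspect.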
  6. Agree! That is a particularly complicated-sounding error for such a little thing, and it is easy to lose one of those little console windows behind everything else. Maybe an option to open console and log windows in a new tab instead?
  7. I don't have an answer to your specific question, but hopefully this is helpful: I found DupeGuru's all-or-nothing results too unreliable, and it requires too much user interaction. Instead, if you install the Nerd Pack plugin, there are two command-line dupe checkers: fdupes and jdupes. Install either (reportedly jdupes is faster) along with the User Scripts plugin, and write a script to delete dupes, set to run periodically. Or, a little more complicated, create a script that moves the dupes and logs what it did, so you can review everything and delete manually. I can't access my scripts right now, but I can post mine if anyone wants them. A rough sketch of the idea is below.
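     For the periodic-delete variant, using fdupes (jdupes takes similar flags). The paths here are only examples, and -dN deletes without prompting, so test on a copy first:

       #!/bin/bash
       # Log the duplicate sets first so there is a record of what was found
       fdupes -r /mnt/user/Photos > /boot/logs/dupes-$(date +%Y%m%d).txt

       # Delete duplicates, keeping the first file in each set
       # (-r recurse, -d delete, -N no prompt)
       fdupes -rdN /mnt/user/Photos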
  8. Thanks. This fixed it for me. Now to see if it goes out again...
  9. netstat -vatn was able to find the source of the problem. It seems like there should be a server-side solution to prevent this: after a time, a client anywhere with a stale CSRF token causes parts of Unraid to stop working, possibly from spamming the syslog. How does this work with multiple users? Do administrators email all their users asking them to close their forgotten browser tabs?
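     For anyone else hunting the offender, the search boils down to something like this (assuming the webGUI is on plain HTTP port 80; use 443 for SSL):

       # List established connections to the Unraid webGUI and note the remote IPs
       netstat -vatn | grep ESTABLISHED | grep ':80 '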
  10. Yes, thank you. You might notice, if you read the second sentence, that I have seen that. However, as of right now I am only using one browser on one computer after a fresh reboot. So do you mean I have to go find every instance of a webpage I might have left open on any computer at any point in the past? And any phone or tablet that has ControlR? Because that could cover a seriously large amount of hardware and a lot of square miles!
  11. My syslog is overrun with wrong csrf_token errors generated by the Unassigned Devices plugin:

     May 17 08:11:06 NAS root: error: /plugins/unassigned.devices/UnassignedDevices.php: wrong csrf_token

     This starts immediately after reboot with only one web browser page open, so the FAQ does not seem to be relevant: https://forums.unraid.net/topic/46802-faq-for-unraid-v6/?do=findComment&comment=545988 It did not stop after uninstalling the plugin, and it did not stop after rebooting with the plugin uninstalled. There is no UnassignedDevices.php, at least not in /boot/config/plugins/unassigned.devices.
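     To measure the flood while testing fixes, something like this should work:

       # Count accumulated errors so far
       grep -c 'wrong csrf_token' /var/log/syslog

       # Watch for new ones in real time, e.g. while closing browser tabs
       tail -f /var/log/syslog | grep csrf_token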
  12. Fixed.... I could not figure out in software what prevented unmounting the cache drive. I shut down, removed the unformatted drive, rebooted, shut down, re-added the drive, and then I could click on the format button in the array.
  13. I cannot stop the array: Array Stopping - Retry unmounting disk share(s)

     May 6 09:51:37 HomeNAS emhttpd: Unmounting disks...
     May 6 09:51:37 HomeNAS emhttpd: shcmd (440): umount /mnt/cache
     May 6 09:51:37 HomeNAS root: umount: /mnt/cache: target is busy.
     May 6 09:51:37 HomeNAS emhttpd: shcmd (440): exit status: 32
     May 6 09:51:37 HomeNAS emhttpd: Retry unmounting disk share(s)...
     May 6 09:51:42 HomeNAS emhttpd: Unmounting disks...
     May 6 09:51:42 HomeNAS emhttpd: shcmd (441): umount /mnt/cache
     May 6 09:51:42 HomeNAS root: umount: /mnt/cache: target is busy.
     May 6 09:51:42 HomeNAS emhttpd: shcmd (441): exit status: 32
     May 6 09:51:42 HomeNAS emhttpd: Retry unmounting disk share(s)...
     May 6 09:51:47 HomeNAS emhttpd: Unmounting disks...
     May 6 09:51:47 HomeNAS emhttpd: shcmd (442): umount /mnt/cache
     May 6 09:51:47 HomeNAS root: umount: /mnt/cache: target is busy.
     May 6 09:51:47 HomeNAS emhttpd: shcmd (442): exit status: 32
     May 6 09:51:47 HomeNAS emhttpd: Retry unmounting disk share(s)...

     Rebooted - no change. I am not sure what all I did to put Unraid in this state. In the process of removing a disk that had failed, I removed the wrong disk before correcting my error. First I had this: https://forums.unraid.net/topic/91867-solved-can-not-mount-unassigned-drive-after-disk-drive-shuffle/?tab=comments#comment-852174 I probably inadvertently turned on passthrough, but that drive had been mounted before, and I was clicking around in the first place because I couldn't get it to mount. Also, I manually ran preclear on the new disk. After preclear ran, I thought I formatted it, but I could easily have forgotten and missed that step. After preclear I stopped the array, added the drive, and started the array. Unraid automatically ran preclear on its own, which took a couple more days. After that finished I think I just tried to start the array (I did not realize the drive was unformatted). Then Unraid ran a parity check, which took another day. Now, not entirely sure what damage I have done, I think I just need to stop the array, format the disk, and start it up. But the array won't stop...

     homenas-diagnostics-20200506-1152.zip
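     Added later, for anyone who lands here: a generic way to see what is holding a mount busy before resorting to a hardware shuffle:

       # Show every process with files open on the cache mount (psmisc fuser)
       fuser -vm /mnt/cache

       # Alternative view: all open files on that filesystem
       lsof /mnt/cache

     In my case I never did identify the holder in software; see the "Fixed" post above.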
  14. I can no longer mount an unassigned drive. The option is greyed out. It had been working fine until I went to replace a failed drive. I accidentally removed the drive that I can no longer mount and installed the new one. After booting up I realized my mistake, shut down and put this drive back on its same sata connector and moved the new drive where it should have been in the first place. Now the drive is listed in unassigned but I cannot click on 'mount'.
  15. A question about Zabbix-Server and Zabbix-Webinterface, though perhaps more a question about dockers in general: I installed both. The server log looks like it is running, and the DB is being used. However, I can't configure anything at http://ip/zabbix because the page does not load. I assume that is the webinterface docker's job, and if I look at its logs I see:

     2020/04/18 13:07:19 [emerg] 25051#25051: bind() to 0.0.0.0:80 failed (98: Address in use)
     nginx: [emerg] bind() to 0.0.0.0:80 failed (98: Address in use)

     Obviously port 80 is already in use elsewhere. The webinterface docker does not give me any port options when setting it up, but configures itself to listen on 80 and 443. This doesn't conflict with any other docker, but Unraid itself is using those ports and (thankfully) must be keeping them from the container.

     Here is where things get more unclear to me: Zabbix is using a subdirectory, not its own port or a subdomain. Is this where I would set up nginx to route traffic to the proper place? Or is the container network type just set up differently, as bridge? Or both: set up Let's Encrypt with nginx, and put zabbix-webinterface on its own proxy-net?
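     One generic workaround, if the template really pins the container to host ports 80/443, would be to run it on bridge with remapped host ports. A sketch only; the image name and host ports here are assumptions, so check what the template actually installs:

       # Map free host ports onto the container's internal 80/443
       docker run -d --name zabbix-web --net=bridge \
         -p 8089:80 -p 8449:443 \
         zabbix/zabbix-web-nginx-mysql

     A reverse proxy could then forward a subdomain to port 8089 if that is the end goal.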
  16. I guess what I should have said was that everything else was default. I did not realize that I could not edit the ports and just restart a container. In the end I was unable to even delete and rebuild with altered ports. I have not yet taken the time to learn how this docker works internally, but I get what you are saying now. Thanks for all the time you take responding to those of us 'learning the hard way'.
  17. But everything is default -- so I could not see how I could be doing that! Since I was merely editing the port range, I did not pay much attention to: "Link to traccar.xml: https://raw.githubusercontent.com/traccar/traccar/master/setup/traccar.xml Add it to your host path before starting the container." Turns out that even if you have created traccar.xml, docker installation modifies the files, including deleting or moving the traccar.xml file. It looks as though you need to recreate traccar.xml each and every time you edit your docker (see the sketch below). And since traccar.xml is a configuration file, my guess is that editing and restarting the docker with a fresh file may well wipe out any configuration you had. Back up!
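     So for anyone else: re-download the file into the host path after every edit, before starting the container. Using the link from the template (the appdata path matches the container's volume mapping):

       # Recreate traccar.xml in appdata before (re)starting the container
       wget -O /mnt/user/appdata/traccar/traccar.xml \
         https://raw.githubusercontent.com/traccar/traccar/master/setup/traccar.xml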
  18. After a fresh install of the traccar docker I get an error that the command failed. Unraid shows it created the docker, but attempting to start it fails. This is the error on install; I'm not sure how to interpret and further debug it:

     root@localhost:# /usr/local/emhttp/plugins/dynamix.docker.manager/scripts/docker run -d --name='traccar' --net='bridge' -e TZ="America/Chicago" -e HOST_OS="Unraid" -p '8082:8082/tcp' -p '5000-5150:5000-5150/tcp' -p '5000-5150:5000-5150/udp' -v '/mnt/user/appdata/traccar/logs':'/opt/traccar/logs':'rw' -v '/mnt/user/appdata/traccar/traccar.xml':'/opt/traccar/conf/traccar.xml':'rw' --restart always --hostname traccar 'traccar/traccar'
     b6252155b590c414bb1d755c395e0daf5835b6bb4ade7d0496cb3caafe8f85e2
     /usr/bin/docker: Error response from daemon: OCI runtime create failed: container_linux.go:346: starting container process caused "process_linux.go:449: container init caused "rootfs_linux.go:58: mounting \"/mnt/user/appdata/traccar/traccar.xml\" to rootfs \"/var/lib/docker/btrfs/subvolumes/e250d9ea7f9fcb448c569c00daa8d9cba97937dddef0f0b9b608ee98fd5f6b86\" at \"/var/lib/docker/btrfs/subvolumes/e250d9ea7f9fcb448c569c00daa8d9cba97937dddef0f0b9b608ee98fd5f6b86/opt/traccar/conf/traccar.xml\" caused \"not a directory\""": unknown: Are you trying to mount a directory onto a file (or vice-versa)? Check if the specified host path exists and is the expected type.
     The command failed.

     I had it installed previously with no problems, but it used way more ports than I needed, so I tried editing the container, changing only host ports 1 and 2 from 5000-5150 to 5030-5035, which gave this error:

     root@localhost:# /usr/local/emhttp/plugins/dynamix.docker.manager/scripts/docker run -d --name='traccar' --net='bridge' -e TZ="America/Chicago" -e HOST_OS="Unraid" -p '8083:8082/tcp' -p '5030:5000-5150/tcp' -p '5030:5000-5150/udp' -v '/mnt/user/appdata/traccar/logs/':'/opt/traccar/logs':'rw' -v 'https://raw.githubusercontent.com/traccar/traccar/master/setup/traccar.xml':'/opt/traccar/conf/traccar.xml':'rw' --restart always --hostname traccar 'traccar/traccar'
     /usr/bin/docker: invalid publish opts format (should be name=value but got '8083:8082/tcp').
     See '/usr/bin/docker run --help'.

     So I deleted the docker, under advanced I deleted the orphan image, and I renamed the folder in appdata. If anyone can tell me in general what is going on: I don't really know what is going wrong or where to start with these errors.
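     Two hedged observations, in case they help anyone searching: the "not a directory" error is the classic symptom of the host file not existing yet, so docker auto-creates /mnt/user/appdata/traccar/traccar.xml as a directory and then cannot mount that directory onto the file inside the image. And the "invalid publish opts" error looks like docker choking on mismatched range lengths: '5030:5000-5150' asks one host port to cover 151 container ports. If that reading is right, the host and container ranges have to be the same length, e.g.:

       # Host range and container range must match in length
       docker run ... -p 5030-5035:5030-5035/tcp -p 5030-5035:5030-5035/udp ...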
  19. "GUI address is overridden by startup options" means you set the address:port to access the syncthing docker when you create the docker. You cannot modify that in the GUI itself.
  20. Thanks for this plugin. Super helpful. Feature requests:
     - Ability to sort by name or scheduled frequency.
     - Ability to tag scripts with user-created categories. Mine are starting to get too many to keep organized.
  21. Thanks, that is quite helpful. But just to confirm before I accidentally erase any data... My goal is to create a command in User Scripts to automatically move, rename, and sort photos from /Incoming into /Sorted:

     exiftool '-filename<$EXIF:DateTimeOriginal' -d '/mnt/user/Photos/%Y-%m/%Y-%m-%d-%H%M%S%%-c.%%e' -r /mnt/user/Incoming

     Both folders are user shares. /Incoming is public and /Photos is secure. The Windows user has r/w access to /Photos, and I am assuming User Scripts runs scripts as root, which should have access to all shares. What is the proper way to reference the shares Photos and Incoming?

     Also, for the most part I am working with the files as the Windows user via SMB. I would assume SMB shares by design prevent users from accidentally screwing anything up. However, sometimes Windows wants to download and then re-upload files (I think when they are on different shares), so I am doing a lot of the initial sorting via a Krusader docker. That runs as root, so when you are using its GUI, you are root. If I am moving files between the two shares, I would assume the proper path to each share is the same as above, correct?

     Finally, and I assume the answer is the same here too, I am using a dupeguru docker to go through the /Photos share to find duplicates and move them out to a separate /Duplicated share. Same path?
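     One safety note I found while double-checking: exiftool documents a TestName tag that can be written instead of FileName to do a dry run, printing the renames without touching any files. So before wiring this into User Scripts, something like this should show exactly what would happen:

       # Dry run: report the renames/moves without performing them
       exiftool '-TestName<$EXIF:DateTimeOriginal' -d '/mnt/user/Photos/%Y-%m/%Y-%m-%d-%H%M%S%%-c.%%e' -r /mnt/user/Incoming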
  22. So far I have been treating subfolders of subfolders of /mnt as interchangeable. But from the terminal, if I go to what I thought was the same directory mapped two different ways, one folder contains a lot more. Can someone explain how and why the directories are set up this way?
  23. Clean install into an alternate config directory, and syncthing still seems unable to modify the index.db folder contents. I installed the xamindar syncthing docker on top, into the same config directory (syncthing, not Syncthing), then installed this one again. Once the db files had been written out by the other version, this one works fine.
  24. My syncthing docker is reporting "insufficient space on disk for database (/config/index-v0.14.0.db): 1.0 % < 1 %" for any and all folders. This is an almost-new Unraid install and there is plenty of space in appdata according to Unraid's GUI. From within the docker:

     Filesystem     1K-blocks       Used   Available Use% Mounted on
     /dev/loop2      20971520    3231044    17510492  16% /
     tmpfs              65536          0       65536   0% /dev
     tmpfs           16491516          0    16491516   0% /sys/fs/cgroup
     shm                65536          0       65536   0% /dev/shm
     shfs          9764349900   69093472  9695256428   1% /sync
     /dev/loop2      20971520    3231044    17510492  16% /etc/hosts
     tmpfs           16491516          0    16491516   0% /proc/acpi
     tmpfs           16491516          0    16491516   0% /sys/firmware

     So my guess is this is some sort of file permission issue. But I don't know how to fix it. (Oh also - I have a nearly identical Unraid server which has no such issues. As far as I can see, the docker and pretty much everything else are set up the same.)
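     If I am reading the Syncthing docs right, this check is governed by the minHomeDiskFree setting in config.xml (default 1% of the volume holding the database), so comparing that setting with the free space the container actually sees might explain the mismatch with the other server:

       # What Syncthing is configured to require (run inside the container)
       grep -i diskfree /config/config.xml

       # What the container sees as free space on the config volume
       df -h /config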