RadOD

  1. I get this from a 2-port NIC that tries to swap MAC addresses between port 1 and port 2. Check whether the MAC is the same except for the last octet. I think it is safe to ignore, but I'm looking for a way to stop it from clogging up my logs (a syslog-filter sketch is below this list).
  2. I plan to replace one at a time, rebuilding the first before replacing the second. I just want to be certain that this intermediate step is not going to result in something like the first new, larger disk getting a smaller partition to match the older drive, so that after I add the second larger parity drive to match the first, I end up with two larger drives that are not fully used. Or does Unraid handle all of this automatically?
  3. After a kernel panic and crash, which I suspect was somehow related to assigning custom IP addresses on br0 similar to what is described here... Many dockers now have nothing under the "port mappings" column even though they used to. I cannot access any docker at all via http://IP:port, even those that still list their IP:port mappings. I am left with two 'versions' of the bridge, br-9c2cde536e88 and br-03ece3ee359d, that I didn't create and cannot delete. At this point I am in over my head and totally confused. Is there an easy way to wipe all the docker network settings and just start over from scratch? (A cleanup sketch is below this list.)
  4. Can anyone tell me where to start looking for an "Error when trying to connect (Error occurred in the document service: Error while downloading the document file to be converted.) (version 6.3.1.32)" when I try to add the hostname for the document server to Nextcloud? Searching the internet, I have found some recommended fixes, such as changing the following config files:
     OnlyOffice: default.json -> "rejectUnauthorized": false; local.json -> "header": "AuthorizationJwt" (from "header": "Authorization"); then supervisorctl restart all.
     Nextcloud: config/config.php -> 'onlyoffice' => array ( 'verify_peer_off' => true, 'jwt_header' => "AuthorizationJwt" )
     ...but this is basically a new install, and it doesn't look like default.json is meant to be edited outside of the docker. /healthcheck/ returns true both for the IP address and when I connect via the external domain name, so I think the container is up and running, but the fix above suggests some part of the forwarding is broken and I don't know what tools might help me find it.
     My firewall forwards 443 to the NginxProxyManager docker on Unraid. NPM has a wildcard SSL cert for the domain and is set to forward nextcloud.domain.name and docserver.domain.name to the appropriate ports for their respective Unraid dockers. All 3 dockers are running on a docker proxynet (docker network create proxynet). Both Nextcloud and the document server are accessible on the local network via IP:port and on the internet via domain name. Both cert and key PEMs are copied to .crt files.
     I get the following in the docker log:
     If I manually cut and paste https://nextcloud.domain.name/apps/onlyoffice/empty?doc=eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJhY3Rpb24iOiJlbXB0eSJ9.la3XO9qn6tmmWaNhPtzJXk2kMb0u_-gh6ZnwW-iFnY0 it downloads a 7 kB file, new.docx, that looks blank to me. As far as I can tell, everything seems to be working except that the file won't transfer from inside the Nextcloud docker. (A connectivity-test sketch is below this list.)
  5. The main page makes everything look good: everything is green and says "passed." I have to actually open each one to see any sign of problems, such as a failed Reallocated Sectors Count. Is that the expected behavior? It seems like those failures and warnings should be out in front. (A smartctl one-liner for pulling those attributes directly is below this list.)
  6. Mover has been running for hours but I only see a couple GB actually moved. (See the sketch below this list for how I tried to watch it.)
     iotop -o
        Total DISK READ :   0.00 B/s | Total DISK WRITE :  177.10 K/s
        Actual DISK READ:   0.00 B/s | Actual DISK WRITE:    0.00 B/s
          TID  PRIO  USER   DISK READ  DISK WRITE  SWAPIN     IO>  COMMAND
        30785  be/4  root    0.00 B/s    0.00 B/s  0.00 %  0.02 %  [kworker/u8:5-events_power_efficient]
        15906  be/4  root    0.00 B/s    3.85 K/s  0.00 %  0.00 %  shfs /mnt/user -disks 31 -o noatime,allow_other -o remember=0
        17443  be/4  root    0.00 B/s   34.65 K/s  0.00 %  0.00 %  shfs /mnt/user -disks 31 -o noatime,allow_other -o remember=0
        31806  be/4  root    0.00 B/s    7.70 K/s  0.00 %  0.00 %  shfs /mnt/user -disks 31 -o noatime,allow_other -o remember=0
        22041  be/4  root    0.00 B/s   11.55 K/s  0.00 %  0.00 %  shfs /mnt/user -disks 31 -o noatime,allow_other -o remember=0
        17696  be/4  root    0.00 B/s    7.70 K/s  0.00 %  0.00 %  shfs /mnt/user -disks 31 -o noatime,allow_other -o remember=0
        17796  be/4  root    0.00 B/s   53.90 K/s  0.00 %  0.00 %  shfs /mnt/user -disks 31 -o noatime,allow_other -o remember=0
        16781  be/4  root    0.00 B/s   11.55 K/s  0.00 %  0.00 %  shfs /mnt/user -disks 31 -o noatime,allow_other -o remember=0
        16782  be/4  root    0.00 B/s    7.70 K/s  0.00 %  0.00 %  shfs /mnt/user -disks 31 -o noatime,allow_other -o remember=0
        19995  be/4  root    0.00 B/s   15.40 K/s  0.00 %  0.00 %  shfs /mnt/user -disks 31 -o noatime,allow_other -o remember=0
        16836  be/4  root    0.00 B/s   23.10 K/s  0.00 %  0.00 %  shfs /mnt/user -disks 31 -o noatime,allow_other -o remember=0
     Docker and VM are off. Where do I look to see what mover is trying to do, to figure out why it does not seem to be making progress?
  7. Is there any way to push data from outside sources into the InfluxDB within the GUS container? It would be nice to be able to push from an OPNsense or pfSense firewall too and have it all in the same place. (A line-protocol sketch is below this list.)
  8. When designating the docker image and appdata folder locations, the help tooltip says the following: "It is recommended to create this folder outside the array, e.g. on the Cache pool. For best performance SSD devices are preferred." But the default is to create it on the array. What is the difference, if there is one, between setting it to, for example, /mnt/somecachepooldevice/appdata/ versus the default /mnt/user/appdata/ with the share set to cache-only on that same cache pool drive under Shares? Are they equivalent? If not, which is preferred? In some form or another this question has been asked and answered many times, and I see some ambiguity about what works and what causes problems even though they should be one and the same. The particular reason I am asking is that after manually moving my docker.img and appdata folders to a new second cache pool drive, and setting the location to that drive manually in Docker settings, one docker does not seem to work: openvpn-as. Upon installing, its appdata folder is completely empty except for two empty folders, 'log' and 'etc'. It was not moved from the initial appdata location but was installed later for the first time. If I mount the docker.img file, the command find /mnt/docker_temp/containers -name '*.json' | xargs grep -l /user/appdata yields two files, hostconfig.json and config.v2.json, both belonging to the container openvpn-as. So should I move everything back to /mnt/user/appdata from /mnt/cache2/appdata? Or is this irrelevant and should I look elsewhere for why there is nothing in the openvpn container? (A docker inspect sketch for checking the actual bind paths is below this list.)
  9. Agree! That is a particularly complicated-sounding error for such a little thing, and it is easy to lose one of those little console windows behind everything else. Maybe an option to open console and log windows in a new tab instead?
  10. I don't have an answer to your specific question, but hopefully this is helpful: I found DupeGuru's all-or-nothing results to be too unreliable and to require too much user interaction. Instead, if you install the Nerd Pack plugin, there are two command-line dupe checkers, fdupes and jdupes. Install either (reportedly jdupes is faster) along with the User Scripts plugin, write a script to delete dupes, and set it to run periodically. Or, a little more complicated, you can write a script that moves the dupes and logs what it did, so you can review everything and delete manually. I can't access mine right now, but I can post them if anyone wants; a rough sketch is below this list.
  11. Thanks. This fixed it for me. Now to see if it goes out again...
  12. netstat -vatn was able to find the source of the problem (sketch below this list). It seems like there should be a server-side solution to prevent this: after a while, a client anywhere with a bad CSRF token causes parts of Unraid to stop working, possibly from spamming the syslog. How does this work with multiple users? Do administrators email all their users asking them to close their forgotten browser tabs?
  13. Yes, thank you. You might notice, if you read the second sentence, that I have seen that. However, as of right now I am only using one browser on one computer after a fresh reboot. So do you mean I have to go find every instance of a webpage I might have left open on any computer at any point in the past? And any phone or tablet that has ControlR? Because that could cover a serious amount of hardware and a lot of square miles!
  14. My syslog is overrun with wrong csrf_token errors generated by the Unassigned Devices plugin. This starts immediately after reboot with only one web browser page open, so the FAQ does not seem to be relevant: https://forums.unraid.net/topic/46802-faq-for-unraid-v6/?do=findComment&comment=545988 It did not stop after uninstalling the plugin. It did not stop after rebooting after uninstalling the plugin. There is no UnassignedDevices.php, at least in /boot/config/plugins/unassigned.devices.
      May 17 08:11:06 NAS root: error: /plugins/unassigned.devices/UnassignedDevices.php: wrong csrf_token
  15. Fixed. I could not figure out in software what was preventing the cache drive from unmounting. I shut down, removed the unformatted drive, rebooted, shut down again, re-added the drive, and then I could click the format button in the array.
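
Re item 1: a minimal sketch of a syslog filter to keep that repeated NIC message out of the log. It assumes Unraid's rsyslog includes /etc/rsyslog.d/ (check /etc/rsyslog.conf first), that the quoted phrase matches the actual message flooding your syslog (substitute your own), and that the rc script path is right for your version; since /etc is rebuilt at boot, the lines would need to be re-applied from the go file.

    # Hypothetical drop-in rule: discard the noisy message before it is written to syslog
    echo ':msg, contains, "with own address as source address" ~' > /etc/rsyslog.d/01-drop-nic-noise.conf
    /etc/rc.d/rc.rsyslogd restart   # restart rsyslog so the rule takes effect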
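
Re item 3: a rough sketch of how the leftover br-* networks can be inspected and removed before rebuilding the Docker networking from scratch. The bridge IDs are the ones from my post; the container name is a placeholder.

    docker network ls                                             # list all networks, including the stray br-* bridges
    docker network inspect br-9c2cde536e88                        # see which containers are still attached
    docker network disconnect -f br-9c2cde536e88 some-container   # detach anything still attached (placeholder name)
    docker network rm br-9c2cde536e88 br-03ece3ee359d             # remove the leftover bridges
    docker network prune                                          # or drop every network no container is using

If things are still broken after that, the blunt option is to stop the Docker service in Settings, delete docker.img, and re-add the containers from their templates, which wipes all Docker-side network state.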
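
Re item 4: since /healthcheck/ answers from the LAN, the next thing worth testing is whether each container can reach the other through the proxy, because "error while downloading the document file" generally means the document server cannot fetch the file back from Nextcloud. A sketch, assuming the container names nextcloud and onlyoffice and the hostnames from my post (if curl is missing inside a container, wget -S -O /dev/null is the usual fallback):

    # Can the document server reach Nextcloud through NPM?
    docker exec -it onlyoffice curl -vk https://nextcloud.domain.name/status.php
    # Can Nextcloud reach the document server?
    docker exec -it nextcloud curl -vk https://docserver.domain.name/healthcheck
    # -k skips certificate verification, which separates DNS/routing failures from certificate failures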
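
Re item 5: until the dashboard surfaces it, the failing attributes can be pulled straight from the command line; replace /dev/sdX with the disk in question.

    smartctl -A /dev/sdX | grep -Ei 'reallocated|pending|uncorrect'   # raw counts for the usual trouble attributes
    smartctl -H /dev/sdX                                              # overall SMART health verdict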
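
Re item 6: what I would look at to see what mover is actually doing. The script path is an assumption (adjust to whatever pgrep shows), and the last line assumes mover logging is enabled under Settings.

    pgrep -fa mover                                      # is mover, or a move/rsync child, actually running?
    MPID=$(pgrep -f '/usr/local/sbin/mover' | head -1)   # path is an assumption
    ls -l /proc/$MPID/fd 2>/dev/null                     # open file descriptors show the file it is working on right now
    grep -i move /var/log/syslog | tail -20              # with mover logging enabled, each moved file is logged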
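
Re item 7: if the GUS stack exposes the usual InfluxDB 1.x HTTP API on port 8086 (an assumption; check the container's port mappings), anything that can run curl, including an OPNsense or pfSense box, can push line-protocol points into it. Database and measurement names below are made up.

    # Create a database for the firewall metrics (one time)
    curl -XPOST "http://UNRAID_IP:8086/query" --data-urlencode "q=CREATE DATABASE firewall"
    # Push one point in line protocol: measurement,tag=value field=value
    curl -XPOST "http://UNRAID_IP:8086/write?db=firewall" \
         --data-binary "wan_traffic,host=opnsense bytes_in=123456,bytes_out=78910"

The tidier long-term route is probably the Telegraf package both firewalls offer, pointed at the same InfluxDB URL.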
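
Re item 8: rather than mounting docker.img and grepping, docker inspect can print which host paths the container was actually created with. Container name as in my post.

    # Bind mounts the container is really using
    docker inspect --format '{{range .Mounts}}{{.Source}} -> {{.Destination}}{{"\n"}}{{end}}' openvpn-as
    # Alternative: the raw HostConfig binds
    docker inspect openvpn-as | grep -A5 '"Binds"'

If the source still points at the old appdata path while the template says /mnt/cache2/appdata, the container was created against the old path, and re-creating it from the template with the corrected path would be the first thing to try.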
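
Re item 10: roughly what my User Scripts job looks like, in the "log first, delete later" form. The share path and log location are placeholders, and the -d -N combination deletes without prompting, so run the log-only line and review it before enabling the delete line.

    #!/bin/bash
    # Log duplicate sets only; nothing is deleted
    jdupes -r /mnt/user/SomeShare > /boot/logs/dupes-$(date +%F).txt
    # Automatic version, after reviewing the log:
    # -r recurse, -d delete, -N keep the first file in each set and delete the rest without prompting
    # jdupes -r -d -N /mnt/user/SomeShare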
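
Re item 12: what "netstat -vatn found the source" boiled down to, in case it saves someone the scrolling. It just narrows the output to established connections on the webGUI port (80 here; adjust if yours differs), which shows the remote addresses still holding a session open.

    netstat -vatn | awk '$6 == "ESTABLISHED" && $4 ~ /:80$/'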