Everything posted by potjoe

  1. What happens if you manually start the update from ncp-panel? Can you provide the relevant log entries? The log file can be accessed within the container at /var/log/ncp.log.
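     For instance, you can tail it from the Unraid host with something like this (assuming the container is named nextcloudpi; adjust to your container name):

          docker exec nextcloudpi tail -n 50 /var/log/ncp.log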
  2. Hi, @outi I spent a LOT of time trying to get a stable solution for this, and ultimately I really think that you should not have the data mounted separately, especially in Unraid (see here, for instance: https://github.com/nextcloud/nextcloudpi/issues/1243#issuecomment-810691176). Nextcloudpi is great, but it definitely has trouble staying stable when there are multiple mount points with different filesystems. It's not only a matter of Nextcloudpi not starting: it's random crashes, permission issues, and the impossibility of performing upgrades. Moreover, the fuse filesystem Unraid uses to manage the array is considered a "corner case" by Nextcloudpi's team (no criticism intended; it just makes Unraid's array a poor fit for the Nextcloud data dir as long as that is the case). You should have a dedicated cache drive for Nextcloudpi that you can mount directly: not via /mnt/user/SHARE, but via /mnt/cache_name/ to avoid fuse. If you want to protect your data on the array, I recommend mounting another share and pointing Nextcloudpi's scheduled backup at that folder. It's not incremental, but it's the best solution in my opinion for now. See the example mappings just below.
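     Concretely, in the container template that means path mappings along these lines (the pool, share, and container-side paths are only illustrative):

          /mnt/fastpool/nextcloudpi  ->  /data      (data dir on a dedicated cache pool, bypassing fuse)
          /mnt/user/ncp-backups      ->  /backups   (array share used only as the target of the scheduled backups)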
  3. You're probably having the same issue mentioned a few messages above. Check in the Nextcloudpi panel, under System info: I guess "Data directory" is set to /data-ro/nextcloud/data, when it should be /data/nextcloud/data. You're not seeing your data in the share because the /data-ro/ mountpoint is inside docker.img. You can try what I suggested.
  4. That's worth a shot indeed. Not like I've been moving my server around, though! Edit: Drive reseated, and dust cleaned up! Let's see if the error occurs again now... Edit 2: So it's been a few days, and no warnings to report in the logs after cleaning the dust and reseating the drive. I sent back the replacement one; hope this one lasts for a few months! Thank you for your help, the subject can be tagged as solved.
  5. Thank you for your time! I've added the SMART report to the initial post; I forgot to include it. Since I've been running this server in this configuration (this BIOS version and this drive) for almost a year now, I'll bet on the drive for now. I ordered one; I'll try to swap it and keep you updated!
  6. Hi everyone, While looking at the logs to check whether the upgrade to 6.9 was going smoothly, I noticed two episodes of "ata12.00: exception Emask" errors, followed by a lot of "ata12.00: failed command: READ FPDMA QUEUED" and, finally, "ata12: hard resetting link". I've done my homework: if I understand correctly what I found on the forum, three suspects are usually considered: SATA cables, a PCIe module if relevant, or the drive itself. In this case it's an M.2 SATA drive plugged directly into the motherboard's slot, and the SMART report does not seem to point to a bad drive. Bad drive or mobo's fault, then? Have I missed something in the logs? Before replacing anything, some help would be great! Thank you

     Logs:

          Mar 21 12:53:25 MonaLisa kernel: ata12.00: exception Emask 0x0 SAct 0xffffffff SErr 0x0 action 0x6 frozen
          Mar 21 12:53:25 MonaLisa kernel: ata12.00: failed command: READ FPDMA QUEUED
          Mar 21 12:53:25 MonaLisa kernel: ata12.00: cmd 60/08:00:98:89:9d/00:00:06:00:00/40 tag 0 ncq dma 4096 in
          Mar 21 12:53:25 MonaLisa kernel: res 40/00:01:00:4f:c2/00:00:00:00:00/00 Emask 0x4 (timeout)
          Mar 21 12:53:25 MonaLisa kernel: ata12.00: status: { DRDY }
          Mar 21 12:53:25 MonaLisa kernel: ata12.00: failed command: READ FPDMA QUEUED
          Mar 21 12:53:25 MonaLisa kernel: ata12.00: cmd 60/08:08:d8:8a:9d/00:00:06:00:00/40 tag 1 ncq dma 4096 in
          Mar 21 12:53:25 MonaLisa kernel: res 40/00:01:00:00:00/00:00:00:00:00/00 Emask 0x4 (timeout)
          Mar 21 12:53:25 MonaLisa kernel: ata12.00: status: { DRDY }
          Mar 21 12:53:25 MonaLisa kernel: ata12.00: failed command: READ FPDMA QUEUED
          Mar 21 12:53:25 MonaLisa kernel: ata12.00: cmd 60/20:10:f0:8a:9d/00:00:06:00:00/40 tag 2 ncq dma 16384 in
          Mar 21 12:53:25 MonaLisa kernel: res 40/00:01:00:00:00/00:00:00:00:00/00 Emask 0x4 (timeout)
          Mar 21 12:53:25 MonaLisa kernel: ata12.00: status: { DRDY }
          Mar 21 12:53:25 MonaLisa kernel: ata12.00: failed command: READ FPDMA QUEUED
          Mar 21 12:53:25 MonaLisa kernel: ata12.00: cmd 60/40:18:a8:82:a2/07:00:06:00:00/40 tag 3 ncq dma 950272 in
          Mar 21 12:53:25 MonaLisa kernel: res 40/00:01:00:00:00/00:00:00:00:00/00 Emask 0x4 (timeout)
          Mar 21 12:53:25 MonaLisa kernel: ata12.00: status: { DRDY }
          Mar 21 12:53:25 MonaLisa kernel: ata12.00: failed command: READ FPDMA QUEUED
          Mar 21 12:53:25 MonaLisa kernel: ata12.00: cmd 60/08:20:38:86:9d/00:00:06:00:00/40 tag 4 ncq dma 4096 in
          Mar 21 12:53:25 MonaLisa kernel: res 40/00:01:00:00:00/00:00:00:00:00/00 Emask 0x4 (timeout)
          Mar 21 12:53:25 MonaLisa kernel: ata12.00: status: { DRDY }
          Mar 21 12:53:25 MonaLisa kernel: ata12.00: failed command: READ FPDMA QUEUED
          Mar 21 12:53:25 MonaLisa kernel: ata12.00: cmd 60/08:28:18:8b:9d/00:00:06:00:00/40 tag 5 ncq dma 4096 in
          Mar 21 12:53:25 MonaLisa kernel: res 40/00:01:01:4f:c2/00:00:00:00:00/00 Emask 0x4 (timeout)
          Mar 21 12:53:25 MonaLisa kernel: ata12.00: status: { DRDY }
          Mar 21 12:53:25 MonaLisa kernel: ata12.00: failed command: READ FPDMA QUEUED
          Mar 21 12:53:25 MonaLisa kernel: ata12.00: cmd 60/f8:30:28:73:a2/02:00:06:00:00/40 tag 6 ncq dma 389120 in
          Mar 21 12:53:25 MonaLisa kernel: res 40/00:01:00:00:00/00:00:00:00:00/00 Emask 0x4 (timeout)
          Mar 21 12:53:25 MonaLisa kernel: ata12.00: status: { DRDY }
          Mar 21 12:53:25 MonaLisa kernel: ata12.00: failed command: READ FPDMA QUEUED
          Mar 21 12:53:25 MonaLisa kernel: ata12.00: cmd 60/10:38:70:87:9d/00:00:06:00:00/40 tag 7 ncq dma 8192 in
          Mar 21 12:53:25 MonaLisa kernel: res 40/00:01:00:00:00/00:00:00:00:00/00 Emask 0x4 (timeout)
          Mar 21 12:53:25 MonaLisa kernel: ata12.00: status: { DRDY }
          Mar 21 12:53:25 MonaLisa kernel: ata12.00: failed command: READ FPDMA QUEUED
          Mar 21 12:53:25 MonaLisa kernel: ata12.00: cmd 60/70:40:28:88:9d/00:00:06:00:00/40 tag 8 ncq dma 57344 in
          Mar 21 12:53:25 MonaLisa kernel: res 40/00:01:00:4f:c2/00:00:00:00:00/00 Emask 0x4 (timeout)
          Mar 21 12:53:25 MonaLisa kernel: ata12.00: status: { DRDY }
          Mar 21 12:53:25 MonaLisa kernel: ata12.00: failed command: READ FPDMA QUEUED
          Mar 21 12:53:25 MonaLisa kernel: ata12.00: cmd 60/28:48:68:89:9d/00:00:06:00:00/40 tag 9 ncq dma 20480 in
          Mar 21 12:53:25 MonaLisa kernel: res 40/00:01:01:4f:c2/00:00:00:00:00/00 Emask 0x4 (timeout)
          Mar 21 12:53:25 MonaLisa kernel: ata12.00: status: { DRDY }
          Mar 21 12:53:25 MonaLisa kernel: ata12.00: failed command: READ FPDMA QUEUED
          Mar 21 12:53:25 MonaLisa kernel: ata12.00: cmd 60/08:50:38:87:9d/00:00:06:00:00/40 tag 10 ncq dma 4096 in
          Mar 21 12:53:25 MonaLisa kernel: res 40/00:01:00:4f:c2/00:00:00:00:00/00 Emask 0x4 (timeout)
          Mar 21 12:53:25 MonaLisa kernel: ata12.00: status: { DRDY }
          Mar 21 12:53:25 MonaLisa kernel: ata12.00: failed command: READ FPDMA QUEUED
          Mar 21 12:53:25 MonaLisa kernel: ata12.00: cmd 60/10:58:d8:87:9d/00:00:06:00:00/40 tag 11 ncq dma 8192 in
          Mar 21 12:53:25 MonaLisa kernel: res 40/00:01:01:4f:c2/00:00:00:00:00/00 Emask 0x4 (timeout)
          Mar 21 12:53:25 MonaLisa kernel: ata12.00: status: { DRDY }
          Mar 21 12:53:25 MonaLisa kernel: ata12.00: failed command: READ FPDMA QUEUED
          Mar 21 12:53:25 MonaLisa kernel: ata12.00: cmd 60/10:60:f0:87:9d/00:00:06:00:00/40 tag 12 ncq dma 8192 in
          Mar 21 12:53:25 MonaLisa kernel: res 40/00:01:00:00:00/00:00:00:00:00/00 Emask 0x4 (timeout)
          Mar 21 12:53:25 MonaLisa kernel: ata12.00: status: { DRDY }
          Mar 21 12:53:25 MonaLisa kernel: ata12.00: failed command: READ FPDMA QUEUED
          Mar 21 12:53:25 MonaLisa kernel: ata12.00: cmd 60/10:68:08:88:9d/00:00:06:00:00/40 tag 13 ncq dma 8192 in
          Mar 21 12:53:25 MonaLisa kernel: res 40/00:01:00:4f:c2/00:00:00:00:00/00 Emask 0x4 (timeout)
          Mar 21 12:53:25 MonaLisa kernel: ata12.00: status: { DRDY }
          Mar 21 12:53:25 MonaLisa kernel: ata12.00: failed command: READ FPDMA QUEUED
          Mar 21 12:53:25 MonaLisa kernel: ata12.00: cmd 60/20:70:a0:88:9d/00:00:06:00:00/40 tag 14 ncq dma 16384 in
          Mar 21 12:53:25 MonaLisa kernel: res 40/00:01:00:00:00/00:00:00:00:00/00 Emask 0x4 (timeout)
          Mar 21 12:53:25 MonaLisa kernel: ata12.00: status: { DRDY }
          Mar 21 12:53:25 MonaLisa kernel: ata12.00: failed command: READ FPDMA QUEUED
          Mar 21 12:53:25 MonaLisa kernel: ata12.00: cmd 60/48:78:d8:88:9d/00:00:06:00:00/40 tag 15 ncq dma 36864 in
          Mar 21 12:53:25 MonaLisa kernel: res 40/00:01:00:00:00/00:00:00:00:00/00 Emask 0x4 (timeout)
          Mar 21 12:53:25 MonaLisa kernel: ata12.00: status: { DRDY }
          Mar 21 12:53:25 MonaLisa kernel: ata12.00: failed command: READ FPDMA QUEUED
          Mar 21 12:53:25 MonaLisa kernel: ata12.00: cmd 60/30:80:b8:89:9d/00:00:06:00:00/40 tag 16 ncq dma 24576 in
          Mar 21 12:53:25 MonaLisa kernel: res 40/00:01:00:00:00/00:00:00:00:00/00 Emask 0x4 (timeout)
          Mar 21 12:53:25 MonaLisa kernel: ata12.00: status: { DRDY }
          Mar 21 12:53:25 MonaLisa kernel: ata12.00: failed command: READ FPDMA QUEUED
          Mar 21 12:53:25 MonaLisa kernel: ata12.00: cmd 60/a8:88:f8:80:a2/01:00:06:00:00/40 tag 17 ncq dma 217088 in
          Mar 21 12:53:25 MonaLisa kernel: res 40/00:01:00:4f:c2/00:00:00:00:00/00 Emask 0x4 (timeout)
          Mar 21 12:53:25 MonaLisa kernel: ata12.00: status: { DRDY }
          Mar 21 12:53:25 MonaLisa kernel: ata12.00: failed command: READ FPDMA QUEUED
          Mar 21 12:53:25 MonaLisa kernel: ata12.00: cmd 60/08:90:08:8a:9d/00:00:06:00:00/40 tag 18 ncq dma 4096 in
          Mar 21 12:53:25 MonaLisa kernel: res 40/00:01:00:00:00/00:00:00:00:00/00 Emask 0x4 (timeout)
          Mar 21 12:53:25 MonaLisa kernel: ata12.00: status: { DRDY }
          Mar 21 12:53:25 MonaLisa kernel: ata12.00: failed command: READ FPDMA QUEUED
          Mar 21 12:53:25 MonaLisa kernel: ata12.00: cmd 60/08:98:28:76:a2/03:00:06:00:00/40 tag 19 ncq dma 397312 in
          Mar 21 12:53:25 MonaLisa kernel: res 40/00:01:00:00:00/00:00:00:00:00/00 Emask 0x4 (timeout)
          Mar 21 12:53:25 MonaLisa kernel: ata12.00: status: { DRDY }
          Mar 21 12:53:25 MonaLisa kernel: ata12.00: failed command: READ FPDMA QUEUED
          Mar 21 12:53:25 MonaLisa kernel: ata12.00: cmd 60/98:a0:38:79:a2/00:00:06:00:00/40 tag 20 ncq dma 77824 in
          Mar 21 12:53:25 MonaLisa kernel: res 40/00:01:00:4f:c2/00:00:00:00:00/00 Emask 0x4 (timeout)
          Mar 21 12:53:25 MonaLisa kernel: ata12.00: status: { DRDY }
          Mar 21 12:53:25 MonaLisa kernel: ata12.00: failed command: READ FPDMA QUEUED
          Mar 21 12:53:25 MonaLisa kernel: ata12.00: cmd 60/18:a8:28:89:9d/00:00:06:00:00/40 tag 21 ncq dma 12288 in
          Mar 21 12:53:25 MonaLisa kernel: res 40/00:01:01:4f:c2/00:00:00:00:00/00 Emask 0x4 (timeout)
          Mar 21 12:53:25 MonaLisa kernel: ata12.00: status: { DRDY }
          Mar 21 12:53:25 MonaLisa kernel: ata12.00: failed command: READ FPDMA QUEUED
          Mar 21 12:53:25 MonaLisa kernel: ata12.00: cmd 60/80:b0:d8:79:a2/00:00:06:00:00/40 tag 22 ncq dma 65536 in
          Mar 21 12:53:25 MonaLisa kernel: res 40/00:01:01:4f:c2/00:00:00:00:00/00 Emask 0x4 (timeout)
          Mar 21 12:53:25 MonaLisa kernel: ata12.00: status: { DRDY }
          Mar 21 12:53:25 MonaLisa kernel: ata12.00: failed command: READ FPDMA QUEUED
          Mar 21 12:53:25 MonaLisa kernel: ata12.00: cmd 60/48:b8:18:8a:9d/00:00:06:00:00/40 tag 23 ncq dma 36864 in
          Mar 21 12:53:25 MonaLisa kernel: res 40/00:01:e0:4f:c2/00:00:00:00:00/00 Emask 0x4 (timeout)
          Mar 21 12:53:25 MonaLisa kernel: ata12.00: status: { DRDY }
          Mar 21 12:53:25 MonaLisa kernel: ata12.00: failed command: READ FPDMA QUEUED
          Mar 21 12:53:25 MonaLisa kernel: ata12.00: cmd 60/f0:c0:60:7a:a2/03:00:06:00:00/40 tag 24 ncq dma 516096 in
          Mar 21 12:53:25 MonaLisa kernel: res 40/00:01:e0:4f:c2/00:00:00:00:00/00 Emask 0x4 (timeout)
          Mar 21 12:53:25 MonaLisa kernel: ata12.00: status: { DRDY }
          Mar 21 12:53:25 MonaLisa kernel: ata12.00: failed command: READ FPDMA QUEUED
          Mar 21 12:53:25 MonaLisa kernel: ata12.00: cmd 60/98:c8:58:7e:a2/02:00:06:00:00/40 tag 25 ncq dma 339968 in
          Mar 21 12:53:25 MonaLisa kernel: res 40/00:00:00:4f:c2/00:00:00:00:00/00 Emask 0x4 (timeout)
          Mar 21 12:53:25 MonaLisa kernel: ata12.00: status: { DRDY }
          Mar 21 12:53:25 MonaLisa kernel: ata12.00: failed command: READ FPDMA QUEUED
          Mar 21 12:53:25 MonaLisa kernel: ata12.00: cmd 60/30:d0:50:85:9d/00:00:06:00:00/40 tag 26 ncq dma 24576 in
          Mar 21 12:53:25 MonaLisa kernel: res 40/00:01:e0:4f:c2/00:00:00:00:00/00 Emask 0x4 (timeout)
          Mar 21 12:53:25 MonaLisa kernel: ata12.00: status: { DRDY }
          Mar 21 12:53:25 MonaLisa kernel: ata12.00: failed command: READ FPDMA QUEUED
          Mar 21 12:53:25 MonaLisa kernel: ata12.00: cmd 60/18:d8:70:8a:9d/00:00:06:00:00/40 tag 27 ncq dma 12288 in
          Mar 21 12:53:25 MonaLisa kernel: res 40/00:01:e0:4f:c2/00:00:00:00:00/00 Emask 0x4 (timeout)
          Mar 21 12:53:25 MonaLisa kernel: ata12.00: status: { DRDY }
          Mar 21 12:53:25 MonaLisa kernel: ata12.00: failed command: READ FPDMA QUEUED
          Mar 21 12:53:25 MonaLisa kernel: ata12.00: cmd 60/18:e0:90:8a:9d/00:00:06:00:00/40 tag 28 ncq dma 12288 in
          Mar 21 12:53:25 MonaLisa kernel: res 40/00:00:00:4f:c2/00:00:00:00:00/00 Emask 0x4 (timeout)
          Mar 21 12:53:25 MonaLisa kernel: ata12.00: status: { DRDY }
          Mar 21 12:53:25 MonaLisa kernel: ata12.00: failed command: READ FPDMA QUEUED
          Mar 21 12:53:25 MonaLisa kernel: ata12.00: cmd 60/18:e8:18:87:9d/00:00:06:00:00/40 tag 29 ncq dma 12288 in
          Mar 21 12:53:25 MonaLisa kernel: res 40/00:01:00:00:00/00:00:00:00:00/00 Emask 0x4 (timeout)
          Mar 21 12:53:25 MonaLisa kernel: ata12.00: status: { DRDY }
          Mar 21 12:53:25 MonaLisa kernel: ata12.00: failed command: READ FPDMA QUEUED
          Mar 21 12:53:25 MonaLisa kernel: ata12.00: cmd 60/18:f0:48:89:9d/00:00:06:00:00/40 tag 30 ncq dma 12288 in
          Mar 21 12:53:25 MonaLisa kernel: res 40/00:01:00:00:00/00:00:00:00:00/00 Emask 0x4 (timeout)
          Mar 21 12:53:25 MonaLisa kernel: ata12.00: status: { DRDY }
          Mar 21 12:53:25 MonaLisa kernel: ata12.00: failed command: READ FPDMA QUEUED
          Mar 21 12:53:25 MonaLisa kernel: ata12.00: cmd 60/20:f8:b0:8a:9d/00:00:06:00:00/40 tag 31 ncq dma 16384 in
          Mar 21 12:53:25 MonaLisa kernel: res 40/00:01:00:00:00/00:00:00:00:00/00 Emask 0x4 (timeout)
          Mar 21 12:53:25 MonaLisa kernel: ata12.00: status: { DRDY }
          Mar 21 12:53:25 MonaLisa kernel: ata12: hard resetting link

     monalisa-diagnostics-20210322-1829.zip
     monalisa-smart-20210322-1840.zip
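     In case anyone wants to reproduce the SMART check from the command line rather than from the attached report, something like this works (the device name is a placeholder; adjust to your drive):

          smartctl -a /dev/sdX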
  7. Actually, it is: Network Type is set to "bridge", so you're trying to assign port 80 to this container, and it's already used by Unraid's webui! Two solutions: keep it in bridge mode and use different ports for HTTP and HTTPS (not 80 or 443 then: you can use 1080 or 1443, for instance), or change the network type to br0 and assign a static IP address outside the DHCP range in the "Fixed IP address" field. You do not need to change the WebUI field. Have a look at an introductory course on local networking, and then at some content explaining how Unraid and docker deal with networking. It'll be very useful, and necessary if you're setting up long-term services such as Nextcloud, which need reliability and accessibility at any given time. To begin with: an IP address has ports 1 through 65535 available to communicate (receive/send) with other IP addresses. Some ports are used by common services, such as SSH on 22, HTTP on 80, HTTPS on 443... You cannot use the same IP with the same port to host two different services. When you're dealing with web services such as Nextcloud and trying to access them through your web browser, if you're not using the standard HTTP and HTTPS ports, then you have to specify the port you want to connect to. For instance, if my Nextcloud service uses port 14443 for HTTPS and the IP 192.168.0.25, then I would have to type "https://192.168.0.25:14443" to access it. If I only type "https://192.168.0.25", I'm implicitly trying to contact Nextcloud on the standard HTTPS port, 443, which is not assigned to Nextcloud in this example.
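     In plain docker CLI terms, the bridge-mode option boils down to port mappings like these (Unraid's template generates the equivalent for you; the image name is given for illustration only and may differ on your setup):

          docker run -d --name nextcloudpi -p 1080:80 -p 1443:443 ownyourbits/nextcloudpi

     With that, the container answers on https://SERVER_IP:1443 while Unraid's webui keeps ports 80/443 for itself.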
  8. Just to follow up: host-passthrough mode seems to be working again for me under 6.9 with my Ryzen 1600!
  9. Sure! But I advise you to be really careful messing around with these config files; Nextcloud seems to be really capricious... BIG warning there once again: back up the data before any modification, and before making the change, be sure to replicate, from within the container, the content from /data-ro/nextcloud/data to /data/nextcloud/data, including hidden files. Otherwise, you'll just lose access to the data, or Nextcloud won't be able to access it (because it'll be missing the .ocdata file). Be careful to keep the same permissions on the files (see the example command below). Then, on the host this time, not within the container: stop the container and go to your data folder, under nextcloud/config. You'll find a config.php file. In this file, you can try changing

          'datadirectory' => '/data-ro/nextcloud/data',

     to

          'datadirectory' => '/data/nextcloud/data',

     Again, this is NOT a workaround; I ended up reinstalling from scratch (at one point, Nextcloud just deleted all my files). I suggest waiting for Dimtar's further investigation.
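     For the replication step mentioned above, something like this should copy everything with hidden files, ownership, and permissions intact (the container name nextcloudpi is an assumption; adjust to yours):

          docker exec nextcloudpi cp -a /data-ro/nextcloud/data/. /data/nextcloud/data/

     The trailing "/." makes cp pick up hidden files such as .ocdata, and -a preserves ownership and permissions.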
  10. Joining the discussion, since I had the same issue a few weeks ago after an update. If you look carefully at the "System info" screenshot you provided, you'll notice that nextcloudpi is setting the data-ro directory as the data directory (datadir). Thing is, data-ro is inside the docker.img, and that's why it's now filled up. I had a real nightmare reverting the change: fixing only the nextcloudpi config file (the datadir field, actually) ended up with ncp not finding the database or the data, so I finally reinstalled from scratch... Here, the data folder seems to be mapped correctly and recognized inside the container; it's a matter of ncp configuration. I think it happened after an update, 19.0.4 if I remember correctly.
  11. You have to copy the previous content into the new folder, including hidden files. On another note: for a few versions now, Nextcloudpi has been creating daily backups inside the docker img, which is really annoying because all automated backups are supposed to be disabled in the nextcloudpi control panel. Has anyone encountered this behaviour, or does anyone know how to disable it? The backups are created in the /var/www/backups directory inside the container (see the command below to check).
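     You can check whether these backups are piling up with something like this (assuming the container is named nextcloudpi):

          docker exec nextcloudpi ls -lh /var/www/backups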
  12. Thank you both for your detailed explanations. I now better understand what happened and the succession of events, which are not specifically related to heat as I initially thought, and which were mysterious to me. I guess it's just the way UPSes and power fluctuations work, and I'll have to move to a higher-end unit to protect my server more efficiently.
  13. Thank you for your complete reply! I realize now that electrical fluctuations may be the cause of my troubles. You did not mention the heat variable: do you exclude the possibility that the UPS overheated? I was surprised by this behavior, since this model has an AVR feature, supposed to handle electrical fluctuations without switching to battery. I already have a USB extension plugged into the motherboard, and the Unraid flash drive is plugged into it! Maybe I should switch to a 4-port USB extension, considering the low cost. Regarding email, I received the "Communication lost with UPS" alert at 00:08, but no alerts for the three preceding power failures. When testing the UPS, I usually receive the notification... Finally, do you advise me to replace the UPS unit, the UPS battery, or the power supply after that?
  14. Hi all, A few weeks ago we had a heat wave here in France, and both my server and its UPS (APC Back-UPS Pro 550G) ran hotter than usual (a +6 °C delta). On August 10th, I noticed that apcupsd was constantly reporting a 96% battery charge. I thought it was a bug and restarted the server; the battery climbed back up to 100%. Later, Unraid reported (without email notification) a power failure at 23:11 lasting 2 seconds, a second one at 00:03 lasting a second, and a last (lethal) one at 00:08, immediately followed by a "Communications with UPS lost." report. Looking in the logs, my home-automation antenna (an FTDI USB serial device) also got disconnected; it was connected on the same USB bus. The next day, neither of the two devices was recognized on their original USB ports, and I had to switch to other ports to get them running again. Connecting a USB stick to the presumably dead USB ports confirmed it: nothing appeared in the log as it was plugged in. At the same time, from August 10th until a few days later during the heat wave, I noticed electrical flickering in my house (TV and lights), and the UPS was still reporting a 96% battery charge with no outage to report. In order to prevent this in the future, I'd like to understand what happened. Thus, I have 3 questions:
     - Why was the UPS battery charge only 96% while plugged in, and can it be related to heat?
     - Did my UPS kill a USB bus, should I be concerned about its lifetime or its battery, and how do I avoid this situation in the future?
     - Could electrical flickering have caused the UPS to go crazy, or could the UPS going mad (due to heat?) have caused the electrical flickering in the house?
     I'm really concerned about my house and hardware safety at this point. Thank you for reading. monalisa-diagnostics-20200811-0119.zip
  15. Thank you very much! organizr/organizr here we are.
  16. Hi @Roxedus! Thanks for the work done on improving Organizrv2, which is a very useful tool. I just saw your announcement "DEPRECATING ORGANIZRTOOLS/ORGANIZR-V2", and I was wondering if a new official repo on Docker Hub was already available, or even better, a new "app" for an Unraid server? I was not able to find any at the moment. Thanks!
  17. Hi all, I just noticed a weird behaviour while migrating some docker containers to IPv6: they appear to be assigned two different IPv6 addresses on the same subnet. ip addr on a container:

          eth0@if108: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
              link/ether 02:42:c0:a8:28:04 brd ff:ff:ff:ff:ff:ff link-netnsid 0
              inet 192.168.40.4/24 brd 192.168.40.255 scope global eth0
                 valid_lft forever preferred_lft forever
              inet6 2a01:****:****:**e4:****:****:****:2804/64 scope global dynamic mngtmpaddr
                 valid_lft 86392sec preferred_lft 14392sec
              inet6 2a01:****:****:**e4::7/64 scope global nodad
                 valid_lft forever preferred_lft forever
              inet6 fe80::42:c0ff:fea8:2804/64 scope link
                 valid_lft forever preferred_lft forever

     The 2a01:****:****:**e4::/64 subnet is configured on VLAN 40 with Router Advertisements on a pfSense VM. In Unraid > Settings > Network settings, VLAN 40 is configured with IPv4 and IPv6, but no IPs are assigned (because I don't want to access my server on this subnet, only web services, docker, and VMs). Finally, the containers are using the br0.40 interface, and here is the configuration in Settings > Docker for IPv6:

          IPv6 custom network on interface br0.40:
          Subnet: 2a01:****:****:**e4::/64
          Gateway: 2a01:****:****:**e4::1
          DHCP pool: not set

     One last thing: when disabling Router Advertisements on this interface in pfSense, the stateless IP is no longer generated; only 2a01:****:****:**e4::7 remains. Why is this last IP assigned when there is no DHCP pool and no RA? Could it be related to this bug report? Thanks everyone
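     For comparison, you can dump what docker itself has configured for that network with the command below (on Unraid the custom network name should match the interface, br0.40):

          docker network inspect br0.40

     The Subnet and Gateway fields in the IPAM section should match the settings above, which can help narrow down which of the two addresses comes from docker and which from RA.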
  18. Hi, unfortunately I wasn't able to sort it out. I still don't know what's causing the issue. I'm currently upgrading my router setup to a pfSense VM; I'll give it another try with the new setup! Do you have any idea where it could come from?
  19. Absolutely! I tried to perform the challenge with the letsencrypt docker in a similar setup (using the br0 interface), and it works like a charm. I don't understand why it's any different in this case. I also added my subdomain to the trusted domains list.
  20. Hi! Thanks for all the work and the support. I have an issue with the integrated letsencrypt app: I cannot obtain a valid certificate for a subdomain. The Nextcloudpi docker uses the br0 interface to get its own IP from the router, so there is no conflict with ports. The ports are open on the router and redirected to the docker's IP. I still get a timeout error when performing the challenge...
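     For context, the HTTP-01 challenge boils down to Let's Encrypt fetching a URL of this shape over port 80, so a manual test from outside the LAN can show whether requests reach the container at all (the domain and token are placeholders):

          curl -v http://sub.example.com/.well-known/acme-challenge/test-token

     A 404 from the web server would at least prove port 80 is reachable; a timeout points at the router or port-forwarding side.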