Posts posted by joersch

  1. On 10/9/2022 at 2:02 PM, trurl said:

    If you insist on using USB, don't use parity. USB disconnects will make the array out-of-sync and require frequent rebuilds. Without parity nothing is in sync anyway.

    For now it is pretty stable. I only hit this issue when I have to reboot the server, which does not happen very often, so I can live with it for now.

  2. On 10/5/2022 at 10:03 PM, JonathanM said:

    USB enclosures sometimes alter ID's in unpredictable ways, for instance yours may be reporting for the first slot that responds, and subsequent drives aren't. Some USB cages are better than others, but all suffer from bandwidth issues for parity.

     

    Ok, I did not know that. So maybe going for a NUC with an external case (with power-consumption optimization in mind) was not the best decision...

     

    Funny enough: today I rebooted because of the "Version: 6.11.1" update. I wanted to take screenshots again, but this was the first time since installation that the drives were recognized correctly! That gives me a little hope... ;-) Anyway: the restarts do not happen very often, and even when a drive is not identified correctly, it runs just fine after I assign it manually. Only the parity check afterwards is quite annoying. My parity drive is in the same enclosure and I have not had any issues with it up till now...

  3. Hi!

    I have Frigate working quite well now with an Anke C800 and a Coral USB. One thing is annoying: at some point during the night the container stops. My first suspect was the "Backup/Restore Appdata" plugin, which backs up my appdata every night at 3 AM. At first that was indeed the cause, but since Frigate only stores its config in appdata, it does not need to be stopped for the backup.

     

    before: 

    Oct  3 03:00:01 datenhaufen CA Backup/Restore: Stopping frigate
    Oct  3 03:00:05 datenhaufen kernel: docker0: port 1(vethc84b59a) entered disabled state
    Oct  3 03:00:05 datenhaufen kernel: vethf852a89: renamed from eth0
    Oct  3 03:00:05 datenhaufen  avahi-daemon[8102]: Interface vethc84b59a.IPv6 no longer relevant for mDNS.
    Oct  3 03:00:05 datenhaufen  avahi-daemon[8102]: Leaving mDNS multicast group on interface vethc84b59a.IPv6 with address fe80::38bc:a9ff:fe6c:49d0.
    Oct  3 03:00:05 datenhaufen kernel: docker0: port 1(vethc84b59a) entered disabled state
    Oct  3 03:00:05 datenhaufen kernel: device vethc84b59a left promiscuous mode
    Oct  3 03:00:05 datenhaufen kernel: docker0: port 1(vethc84b59a) entered disabled state
    Oct  3 03:00:05 datenhaufen  avahi-daemon[8102]: Withdrawing address record for fe80::38bc:a9ff:fe6c:49d0 on vethc84b59a.
    Oct  3 03:00:05 datenhaufen CA Backup/Restore: docker stop -t 60 frigate

    after:

    Oct  4 03:00:01 datenhaufen CA Backup/Restore: #######################################
    Oct  4 03:00:01 datenhaufen CA Backup/Restore: Community Applications appData Backup
    Oct  4 03:00:01 datenhaufen CA Backup/Restore: Applications will be unavailable during
    Oct  4 03:00:01 datenhaufen CA Backup/Restore: this process.  They will automatically
    Oct  4 03:00:01 datenhaufen CA Backup/Restore: be restarted upon completion.
    Oct  4 03:00:01 datenhaufen CA Backup/Restore: #######################################
    Oct  4 03:00:01 datenhaufen CA Backup/Restore: frigate set to not be stopped by ca backup's advanced settings.  Skipping
    Oct  4 03:00:01 datenhaufen CA Backup/Restore: Stopping iobroker

     

    But the container is still down in the morning.

    Last lines of the container log (including a manual restart):

    [s6-init] making user provided files available at /var/run/s6/etc...exited 0.
    [s6-init] ensuring user provided files have correct perms...exited 0.
    [fix-attrs.d] applying ownership & permissions fixes...
    [fix-attrs.d] done.
    [cont-init.d] executing container initialization scripts...
    [cont-init.d] done.
    [services.d] starting services
    [services.d] done.
    [cont-finish.d] executing container finish scripts...
    [cont-finish.d] done.
    [s6-finish] waiting for services.
    [s6-finish] sending all processes the TERM signal.
    [s6-finish] sending all processes the KILL signal and exiting.
    [s6-init] making user provided files available at /var/run/s6/etc...exited 0.
    [s6-init] ensuring user provided files have correct perms...exited 0.
    [fix-attrs.d] applying ownership & permissions fixes...
    [fix-attrs.d] done.
    [cont-init.d] executing container initialization scripts...
    [cont-init.d] done.
    [services.d] starting services
    [services.d] done.

    Any ideas where I can dig in? Is there a way to increase the container's logging so it shows when and why it shut down?
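    In the meantime, one way to at least see how the container exited is to ask Docker directly. This is a sketch; it assumes the standard Docker CLI on the host and the container name "frigate" from above:

    ```shell
    # Sketch: find out how/why the "frigate" container last stopped.
    # Assumes the standard Docker CLI; "frigate" is the container name from this thread.
    if command -v docker >/dev/null 2>&1; then
      # Exit code, OOM-kill flag, and timestamp of the last stop
      docker inspect \
        --format 'exit={{.State.ExitCode}} oom={{.State.OOMKilled}} finished={{.State.FinishedAt}}' \
        frigate 2>/dev/null || echo "container 'frigate' not found"
      # Last lines the container wrote before it died
      docker logs --tail 50 frigate 2>/dev/null || echo "no logs available"
    else
      echo "docker CLI not available on this host"
    fi
    ```

    An exit code of 137 with oom=true points at the kernel OOM killer; 137 with oom=false means something sent SIGKILL from outside, which would match the s6-finish TERM/KILL lines in the log above (i.e. the container was asked to stop rather than crashing).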

     

    cheers

    Jörg

  4. Hi!

    I set up my Unraid server (NUC7i5 with 32 GB RAM and a USB 3.1 4-bay SATA enclosure). It works fine apart from one annoying issue: one disk is not recognized correctly at boot (its serial/identifier is all zeroes), so I have to rebuild parity after every reboot, e.g. for a software update (I upgraded to 6.11.0 today). See the attached image. After the rebuild the disk works just fine. Has anyone experienced such an issue before and has an explanation?
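    For comparison after a reboot, the serials the kernel actually sees can be listed from the console. A sketch, assuming a Linux host with the stock util-linux lsblk:

    ```shell
    # Sketch: list the serial numbers the kernel sees for each whole disk,
    # to compare against what Unraid shows after a reboot.
    # -d: whole disks only; TRAN shows the transport (usb vs sata).
    if command -v lsblk >/dev/null 2>&1; then
      lsblk -d -o NAME,TRAN,MODEL,SERIAL || echo "lsblk failed"
    else
      echo "lsblk not available"
    fi
    ```

    If the problem drive shows an empty or all-zero SERIAL here, the USB-SATA bridge is not passing the drive's identity through (which fits JonathanM's explanation above); `smartctl -i -d sat /dev/sdX` can sometimes still read the real serial through the bridge.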

     

    cheers

    Jörg

     

    [Attached image: image.png — screenshot of the disk assignment]
