Posts posted by johnny2678

  1. Oh geez, I'm sorry to waste your time. I thought I read somewhere that it was removed automatically when you upgraded to Folder View, but I probably confused that with the CA Backup plugin or something else.

     

    It's working now after I removed the old plugin. Thx again, cheers.

  2. 9 minutes ago, scolcipitato said:

    The createFolderBtn function is defined inside the docker.js, you can check if it is loaded either by going to the network tab and searching for that or by looking through the source of your docker page.

    I see it in Sources, but I'm not sure if the error is relevant.

    (screenshot attached: Untitled.png)

  3. Any idea about this console error? I see the folders on the Dashboard tab (imported from JSON), but on the Docker tab I just see the containers, no folders, and nothing happens when I click 'Add Folder'. This error is thrown in the console:

     

    Docker:3940 Uncaught ReferenceError: createFolderBtn is not defined
        at HTMLInputElement.onclick (Docker:3940:2)

     

    Edit: I've tried removing the plugin and adding it again with a fresh config, but I get the same error. It happens with a clean install as well as after importing a JSON.
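
    In case it helps anyone else hitting this, a quick sanity check from the Unraid terminal (the plugin folder name isn't verified, so treat this as a sketch): if grep finds nothing, the docker.js that defines createFolderBtn isn't installed where the Docker page can load it.

    # Sketch: search the Unraid webGUI plugin directory for the function definition;
    # no output means the file defining createFolderBtn isn't present.
    grep -rl "createFolderBtn" /usr/local/emhttp/plugins/ 2>/dev/null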

  4. After adding the `APP_KEY` variable, I'm getting conflicting messages in the logs:

     

      An application key exists.

     

    AND 

     

    [2023-11-17 12:59:53] production.ERROR: No application encryption key has been specified. {"exception":"[object]

     

    Anyone else?

     

    Full log:

    🔗  Creating symlinks for config and log files...
      Symlinks created.
    
      An application key exists.
    
    💰  Building the cache...
      Cache set.
    
    🚛  Migrating the database...
      Database migrated.
    
      All set, Speedtest Tracker started.
    
    [17-Nov-2023 07:58:38] NOTICE: fpm is running, pid 115
    [17-Nov-2023 07:58:38] NOTICE: ready to handle connections
    [17-Nov-2023 07:58:38] NOTICE: systemd monitor interval set to 10000ms
    [2023-11-17 12:59:53] production.ERROR: No application encryption key has been specified. {"exception":"[object] (Illuminate\\Encryption\\MissingAppKeyException(code: 0): No application encryption key has been specified. at /var/www/html/vendor/laravel/framework/src/Illuminate/Encryption/EncryptionServiceProvider.php:79)
    [stacktrace]
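
    In case the key format matters to anyone else, here's a sketch of how such a key can be generated (the base64: prefix is the standard Laravel format; whether this container expects exactly this value in its APP_KEY field is my assumption):

    # Sketch: generate a Laravel-style application key (32 random bytes, base64-encoded)
    APP_KEY="base64:$(openssl rand -base64 32)"
    echo "$APP_KEY"   # paste this value into the container's APP_KEY variable, then recreate the container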

     

  5. 6 minutes ago, JorgeB said:

    Pool looks fine, it's using raid0 but I assume that was a choice.

    Any ideas on why the write counts are so lopsided?

     

    Any ideas on why tar-ing (or even rsync-ing) the Plex bundles on the cache pool never finishes?

     

    Again, thx for the attention. Not your job to fix my stuff.  Nice to have someone to bounce ideas off of.

  6. Just going to throw this out there: I run the official Plex docker. For a few days now, I've watched it constantly run transcode operations on the same show (a kids' show, ~50 short episodes). It runs for hours, then stops, then runs again the next day on the same show - I was guessing this was due to the Plex maintenance window. I have >20TB of media, but Plex stays hyperfocused on this show for some reason.

     

    At the same time, I've been having problems with my Plex backup script. It keeps getting hung up on the bundles under:

    config/Library/Application\ Support/Plex\ Media\ Server/Media/localhost/

     

    It never completes.
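
    For context, a rough sketch of the kind of rsync exclude that would skip that directory (the appdata and backup paths here are placeholders, not my actual script):

    # Sketch: copy the Plex appdata but skip the generated bundle cache that hangs
    rsync -a \
      --exclude='Library/Application Support/Plex Media Server/Media/localhost/' \
      /mnt/cache/appdata/plex/config/ /mnt/user/backups/plex/config/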

     

    Today, I tried to delete the offending show - never really cared about it anyway. Then I went to Plex Setup -> Troubleshooting -> Clean Bundles... Plex has been "Deleting 51 Media Bundles" for over 2 hours now:

     

    (screenshot attached)

     

    The underlying disks are an SSD cache pool in RAID0 (sdb & sdc):

     

    (screenshot attached)

     

    During this 2+ hour bundle delete operation, the read counters on the two cache pool devices are wildly out of sync:

    (screenshot attached)

     

    Is my BTRFS messed up? Could that be causing my issues? I'm attaching current diagnostics in case they help. Thx again for looking, @JorgeB. I'd be happy to try any ideas you have to unstick this.

     

    15620-beast-diagnostics-20231026-1022.zip

  7. 18 minutes ago, JorgeB said:

    /dev/sdb cannot be an NVMe device, if you post the diags we can tell you which device ATA1 is.

     

    Sorry, you are correct. It's an SSD. I'll check/replace the cable next time I open the box.

     

    I haven't moved to 6.12.4 yet, but I'm curious whether you can tell if the logs are still being spammed with NIC issues. I rebooted hoping they would clear so we could see if there were any other issues that might cause the "no exportable shares" problem. I'm worried that if I move to 6.12.4, I'll just take the existing problems with me.

     

     

    15620-beast-diagnostics-20231025-0912.zip

  8. @JorgeB just rebooted to see if the NIC errors cleared and noticed this in the logs. Does this mean anything to you?

     

    Oct 25 08:08:16 15620-BEAST kernel: ata1.00: exception Emask 0x10 SAct 0x4000 SErr 0x280100 action 0x6 frozen
    Oct 25 08:08:16 15620-BEAST kernel: ata1.00: irq_stat 0x08000000, interface fatal error
    Oct 25 08:08:16 15620-BEAST kernel: I/O error, dev sdb, sector 520 op 0x0:(READ) flags 0x80700 phys_seg 37 prio class 0
    Oct 25 08:08:16 15620-BEAST kernel: ata1.00: exception Emask 0x10 SAct 0x8000 SErr 0x280100 action 0x6 frozen
    Oct 25 08:08:16 15620-BEAST kernel: ata1.00: irq_stat 0x08000000, interface fatal error
    Oct 25 08:08:16 15620-BEAST kernel: ata1.00: exception Emask 0x10 SAct 0x8000 SErr 0x280100 action 0x6 frozen
    Oct 25 08:08:16 15620-BEAST kernel: ata1.00: irq_stat 0x08000000, interface fatal error
    Oct 25 08:08:16 15620-BEAST kernel: ata1.00: exception Emask 0x0 SAct 0x10000 SErr 0x880000 action 0x6
    Oct 25 08:08:16 15620-BEAST kernel: ata1.00: error: { ICRC ABRT }

     

  9. Thanks @JorgeB, the 6.12.4 macvlan fix didn't work for me. Despite following it to the letter, I would still get frequent lockups that forced a hard restart. That's why I went back to 6.11.5. I never really needed macvlan (it was just the Unraid default when I started on 6.8), so when the system still wasn't stable after reverting to 6.11.5, I switched to ipvlan. Now I don't have to hard restart, but I'm getting the "no exportable shares" error every so often.

     

    Frustrating for a system that used to stay up for months at a time with no issues.  I will try 6.12.4 the next time it goes down and report back.

  10. I've been running Unraid without issue for 3 years. Then I tried to upgrade to 6.12.4 and have had nothing but problems since. I got hit by the macvlan bug, so I went back to 6.11.5, where I thought it was safe. But now all my shares disappear every few days. When I go to the Shares tab it says something like "No exportable shares". The data is fine, but most dockers/VMs stop working and it takes a reboot to get the shares back. My uptime sux now. Can anyone ELI5 what's going on?? TIA

     

    15620-beast-diagnostics-20231024-2042.zip 

  11. From this page on the 6.12.4 release:

     

    "However, some users have reported issues with [...] reduced functionality with advanced network management tools (Ubiquity) when in ipvlan mode."

     

    I have tried 6.12.3 and 6.12.4 - both versions crashed, forcing me to hard-shutdown the system. I followed the update instructions to the letter (disabling bridging, etc.).

     

    I have no idea if this is related to the macvlan issue or not. 

     

    I run the full Unifi stack.

    I run VLANs on Unifi.

    I do have custom Docker networks on Unraid.

    I do not assign IPs to my containers or run them on custom VLANs.

     

    Should I just try 6.12.4 with ipvlan, or should I just wait for 6.12.5 since .3 and .4 were not ready for primetime?

     

    Edit:

    6.12.3 crash reported here - 

     

    6.12.4 crash reported here - 

     

  12. I shut down the Docker/VM services and went through the settings again. I turned off bonding and restarted. I had one NIC unplugged because I need to recable; I thought bonding would just use the other line, but maybe not. Edit: just to clarify, I had been running a bond with one cable since 6.8.x with no issues until 6.12.4.

     

    Either way, VMs can see container services now that I'm just running on eth0. Crisis averted... for now.

     

    Wondering if this also explains the CPU load pegged at 1000?

  13. Just realized this is a Sev1 issue in my house, because I have WeeWX running in an Unraid VM sending all house temperature data to MQTT running in an Unraid Docker container. It's been working this way for years, but now on 6.12.4 the VM can't see the container.

     

    My AC trips on/off based on that temperature data getting to the MQTT container. I live in FL. I'll have to revert to 6.11.5.

  14. Background:

    • Since I started running Unraid 3 years ago, I've accessed my VMs via Microsoft RDP and accessed my Unraid Docker services from within those VMs.

     

    Summary:

    • Containers can't be accessed by VMs when the VM is set to vhost0, but VMs can access services running on other VMs
    • VMs can't be accessed by RDP (MS RDP) when the VM is set to vbri0, but containers can be accessed by VMs
    • Overnight, the CPU load goes to 1000, forcing a hard shutdown

     

    I don't like to change my settings once I have things working, but I tried upgrading to 6.12.3 and bumped into the macvlan issues, so I rolled back to 6.11.5. Then I decided to give 6.12.4 a shot since it was supposed to fix the macvlan issue.

     

    Made the changes outlined here:

    • bonding = yes
    • bridging = no
    • host access = yes

     

    On the first boot on 6.12.4, everything ran fine for a couple of days and I thought the macvlan problem was solved. Then I woke up this morning to an unresponsive server (see the attached screenshot).

     

    Had to hard reset. 💩

     

    When it came back up, I ran into the conditions in the summary above, where I couldn't access Docker containers from a VM like I have been doing for years. I really don't want to roll back to 6.11.5 again, but I might have to. Hoping someone here can see something that I missed... TIA.

     

    Questions:

    • What setting do I need to change on 6.12.4 to access Docker containers from a VM?
    • What is causing my server to go unresponsive, forcing a hard reset?

     

     

    Screenshot 2023-09-05 at 7.04.44 AM.png

    15620-beast-diagnostics-20230905-0809.zip

  15. 40 minutes ago, Heppie said:

    It's the DNS servers that are causing issues. I changed them to 8.8.8.8 and 8.8.4.4 and all is working well.

    Looks like multiple DNS servers were set via NAME_SERVERS but failover wasn't working. I changed mine to 1.0.0.1, 8.8.8.8 and it looks like it's working now.
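
    For anyone searching later, the change boils down to the NAME_SERVERS variable on the container. A rough command-line equivalent of editing that field in the Unraid template (the image name is just an example and the other required VPN variables are omitted):

    # Sketch only: comma-separated resolvers so lookups can fall back if the first fails
    docker run -d --name=vpn-container \
      -e NAME_SERVERS='1.0.0.1,8.8.8.8' \
      binhex/arch-delugevpn   # example image; remaining VPN options not shown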

  16. 5 hours ago, binhex said:

    i need full log to debug the issue

    Attaching mine. On this run, it looks like it does manage to get an IP from PIA, but ping response times are awful and most requests just time out. That's why I was waiting to post; I'm getting inconsistent results. Something isn't right, but I'm not sure it's the container. Might be PIA.

    supervisord.log

     

    Here's another one from a different PIA server. This time it fails to generate a token for WireGuard 4-5 times before succeeding. It also fails to check the external IP 3-5 times.

    supervisord.log2

  17. 14 hours ago, frodr said:

    I'm stuck too.

    I'm assuming you are all using PIA? It might have been a hiccup on their side. I noticed none of my indexers were working. I played with the settings for a bit but was getting inconsistent readings in the logs: sometimes it would connect, get an IP, etc.; sometimes it would time out. This AM, everything seems to be restored. I hope it stays that way.

     

    Edit: never mind, still wonky. Attaching logs below in a new reply.
