Posts posted by hermy65

  1. @trurl

     

    Interesting. I checked the system/libvirt folder on cache and it's empty; the one on disk17 has the .img in it, and it was last modified after I started the array when you told me to. Is there a reason that .img file would be on disk17 and in use when the setting is as you posted above?

  2. @trurl appreciate the help!

     

    The lost+found folders have been taken care of, appreciate the alert on that. As for the system folder, it looks like it has my libvirt.img in it for some reason. I assume I can move that to cache and then update the location in VM settings, or will that cause an issue?
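
    For reference, a minimal sketch of the move itself, assuming the default "system" share layout and a pool named "cache" (paths are examples, adjust to your setup):

    # Stop the VM service first (Settings -> VM Manager -> Enable VMs: No)
    # so libvirt.img is not in use, then move the image to the cache pool.
    mkdir -p /mnt/cache/system/libvirt
    mv /mnt/disk17/system/libvirt/libvirt.img /mnt/cache/system/libvirt/
    # Re-enable the VM service afterwards; the default location
    # /mnt/user/system/libvirt/libvirt.img now resolves to the cache copy.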

  3. @trurl without -n, I see no mention of -L in this output.

     

    Phase 1 - find and verify superblock...
    Phase 2 - using internal log
            - zero log...
            - scan filesystem freespace and inode maps...
            - found root inode chunk
    Phase 3 - for each AG...
            - scan and clear agi unlinked lists...
            - process known inodes and perform inode discovery...
            - agno = 0
    bad CRC for inode 87062
    inode identifier 16149622513293918207 mismatch on inode 87062
    bad CRC for inode 87062, will rewrite
    inode identifier 16149622513293918207 mismatch on inode 87062
    cleared inode 87062
            - agno = 1
            - agno = 2
            - agno = 3
            - agno = 4
            - agno = 5
            - agno = 6
            - agno = 7
            - process newly discovered inodes...
    Phase 4 - check for duplicate blocks...
            - setting up duplicate extent list...
            - check for inodes claiming duplicate blocks...
            - agno = 0
            - agno = 2
            - agno = 1
            - agno = 3
            - agno = 4
            - agno = 7
            - agno = 6
            - agno = 5
    entry "11 - The Amity Affliction - Stairway to Hell.mp3" at block 0 offset 736 in directory inode 87051 references free inode 87062
    	clearing inode number in entry at offset 736...
    Phase 5 - rebuild AG headers and trees...
            - reset superblock...
    Phase 6 - check inode connectivity...
            - resetting contents of realtime bitmap and summary inodes
            - traversing filesystem ...
    bad hash table for directory inode 87051 (no data entry): rebuilding
    rebuilding directory inode 87051
            - traversal finished ...
            - moving disconnected inodes to lost+found ...
    Phase 7 - verify and correct link counts...
    done
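
    For reference, the usual follow-up after a repair like this is a read-only re-check plus a look in lost+found. A minimal sketch (device and disk numbers are examples, matching the md18p1 device from the syslog further down):

    xfs_repair -n /dev/md18p1   # read-only re-check from Maintenance mode; should now be clean
    # then start the array normally and look for anything the repair orphaned:
    ls -lR /mnt/disk18/lost+found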

     

  4. @trurl here is the output:

     

    Phase 1 - find and verify superblock...
    Phase 2 - using internal log
            - zero log...
            - scan filesystem freespace and inode maps...
            - found root inode chunk
    Phase 3 - for each AG...
            - scan (but don't clear) agi unlinked lists...
            - process known inodes and perform inode discovery...
            - agno = 0
    bad CRC for inode 87062
    inode identifier 16149622513293918207 mismatch on inode 87062
    bad CRC for inode 87062, would rewrite
    inode identifier 16149622513293918207 mismatch on inode 87062
    would have cleared inode 87062
            - agno = 1
            - agno = 2
            - agno = 3
            - agno = 4
            - agno = 5
            - agno = 6
            - agno = 7
            - process newly discovered inodes...
    Phase 4 - check for duplicate blocks...
            - setting up duplicate extent list...
            - check for inodes claiming duplicate blocks...
            - agno = 0
            - agno = 4
            - agno = 2
            - agno = 3
            - agno = 5
            - agno = 6
            - agno = 7
            - agno = 1
    entry "11 - The Amity Affliction - Stairway to Hell.mp3" at block 0 offset 736 in directory inode 87051 references free inode 87062
    	would clear inode number in entry at offset 736...
    bad CRC for inode 87062, would rewrite
    inode identifier 16149622513293918207 mismatch on inode 87062
    would have cleared inode 87062
    No modify flag set, skipping phase 5
    Phase 6 - check inode connectivity...
            - traversing filesystem ...
    entry "11 - The Amity Affliction - Stairway to Hell.mp3" in directory inode 87051 points to free inode 87062, would junk entry
    bad hash table for directory inode 87051 (no data entry): would rebuild
    would rebuild directory inode 87051
            - traversal finished ...
            - moving disconnected inodes to lost+found ...
    Phase 7 - verify link counts...
    No modify flag set, skipping filesystem flush and exiting.

     

  5. Popped open my syslog this morning and it's full of this, but it doesn't list which disk it's detecting the corruption on, so I have no idea what to do. None of the disks show errors on the dashboard either. Diagnostics are attached.

     

    Feb 28 08:55:51 Storage kernel: XFS (md18p1): Metadata corruption detected at xfs_dinode_verify+0xa0/0x732 [xfs], inode 0x15416 dinode
    Feb 28 08:55:51 Storage kernel: XFS (md18p1): Unmount and run xfs_repair
    Feb 28 08:55:51 Storage kernel: XFS (md18p1): First 128 bytes of corrupted metadata buffer:
    Feb 28 08:55:51 Storage kernel: 00000000: 49 4e 81 ff 03 02 00 00 00 00 00 63 00 00 00 64  IN.........c...d
    Feb 28 08:55:51 Storage kernel: 00000010: 00 00 00 01 00 00 00 00 00 00 00 00 00 00 00 00  ................
    Feb 28 08:55:51 Storage kernel: 00000020: 5c e8 88 37 21 21 f6 d0 5c e8 88 12 1e 16 0a a8  \..7!!..\.......
    Feb 28 08:55:51 Storage kernel: 00000030: 5c e8 93 98 08 a2 4f 36 00 00 00 00 00 b7 90 e7  \.....O6........
    Feb 28 08:55:51 Storage kernel: 00000040: 00 00 00 00 00 00 0b 7a 00 00 00 00 00 00 00 01  .......z........
    Feb 28 08:55:51 Storage kernel: 00000050: 00 00 18 01 00 00 00 00 00 00 00 00 d5 4c 13 f1  .............L..
    Feb 28 08:55:51 Storage kernel: 00000060: ff ff ff ff fa 0a 06 c8 00 00 00 00 00 00 00 0d  ................
    Feb 28 08:55:51 Storage kernel: 00000070: 00 00 00 01 00 18 ff 66 00 00 00 00 00 00 00 00  .......f........
    Feb 28 08:55:51 Storage kernel: XFS (md18p1): Metadata corruption detected at xfs_dinode_verify+0xa0/0x732 [xfs], inode 0x15416 dinode
    Feb 28 08:55:51 Storage kernel: XFS (md18p1): Unmount and run xfs_repair
    Feb 28 08:55:51 Storage kernel: XFS (md18p1): First 128 bytes of corrupted metadata buffer:
    Feb 28 08:55:51 Storage kernel: 00000000: 49 4e 81 ff 03 02 00 00 00 00 00 63 00 00 00 64  IN.........c...d
    Feb 28 08:55:51 Storage kernel: 00000010: 00 00 00 01 00 00 00 00 00 00 00 00 00 00 00 00  ................
    Feb 28 08:55:51 Storage kernel: 00000020: 5c e8 88 37 21 21 f6 d0 5c e8 88 12 1e 16 0a a8  \..7!!..\.......
    Feb 28 08:55:51 Storage kernel: 00000030: 5c e8 93 98 08 a2 4f 36 00 00 00 00 00 b7 90 e7  \.....O6........
    Feb 28 08:55:51 Storage kernel: 00000040: 00 00 00 00 00 00 0b 7a 00 00 00 00 00 00 00 01  .......z........
    Feb 28 08:55:51 Storage kernel: 00000050: 00 00 18 01 00 00 00 00 00 00 00 00 d5 4c 13 f1  .............L..
    Feb 28 08:55:51 Storage kernel: 00000060: ff ff ff ff fa 0a 06 c8 00 00 00 00 00 00 00 0d  ................
    Feb 28 08:55:51 Storage kernel: 00000070: 00 00 00 01 00 18 ff 66 00 00 00 00 00 00 00 00  .......f........

     

    storage-diagnostics-20240228-0851.zip
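
    For anyone finding this later: the mdXp1 device maps to array disk X, so the log above points at disk 18. The usual first step is a read-only check from Maintenance mode, sketched here with the device name taken from the log:

    xfs_repair -n /dev/md18p1   # dry run; reports problems without touching the disk

    Only run it without -n (and only add -L if it explicitly asks) after reviewing what the dry run reports.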

  6. @JorgeB

     

    Done, logs are below. What should I do next?

     

     

    Disk 1 with -L did this:

     

    Phase 1 - find and verify superblock...
    Phase 2 - using internal log
            - zero log...
    ALERT: The filesystem has valuable metadata changes in a log which is being
    destroyed because the -L option was used.
            - scan filesystem freespace and inode maps...
    clearing needsrepair flag and regenerating metadata
    sb_icount 23808, counted 132032
    sb_ifree 2334, counted 35228
    sb_fdblocks 10484039, counted 33519164
            - found root inode chunk
    Phase 3 - for each AG...
            - scan and clear agi unlinked lists...
            - process known inodes and perform inode discovery...
            - agno = 0
            - agno = 1
            - agno = 2
            - agno = 3
            - agno = 4
            - agno = 5
            - agno = 6
            - agno = 7
            - agno = 8
            - agno = 9
            - process newly discovered inodes...
    Phase 4 - check for duplicate blocks...
            - setting up duplicate extent list...
            - check for inodes claiming duplicate blocks...
            - agno = 0
            - agno = 2
            - agno = 9
            - agno = 3
            - agno = 1
            - agno = 6
            - agno = 7
            - agno = 5
            - agno = 8
            - agno = 4
    Phase 5 - rebuild AG headers and trees...
            - reset superblock...
    Phase 6 - check inode connectivity...
            - resetting contents of realtime bitmap and summary inodes
            - traversing filesystem ...
            - traversal finished ...
            - moving disconnected inodes to lost+found ...
    Phase 7 - verify and correct link counts...
    Maximum metadata LSN (3:1193189) is ahead of log (1:2).
    Format log to cycle 6.
    done

     

    Disk 8 did: 

    Phase 1 - find and verify superblock...
    Phase 2 - using internal log
            - zero log...
    ALERT: The filesystem has valuable metadata changes in a log which is being
    destroyed because the -L option was used.
            - scan filesystem freespace and inode maps...
    clearing needsrepair flag and regenerating metadata
    sb_icount 61696, counted 312768
    sb_ifree 9362, counted 47709
    sb_fdblocks 47064461, counted 92818335
            - found root inode chunk
    Phase 3 - for each AG...
            - scan and clear agi unlinked lists...
            - process known inodes and perform inode discovery...
            - agno = 0
            - agno = 1
            - agno = 2
            - agno = 3
            - agno = 4
            - agno = 5
            - agno = 6
            - agno = 7
            - agno = 8
            - agno = 9
            - agno = 10
            - agno = 11
            - agno = 12
            - agno = 13
            - agno = 14
            - agno = 15
            - process newly discovered inodes...
    Phase 4 - check for duplicate blocks...
            - setting up duplicate extent list...
            - check for inodes claiming duplicate blocks...
            - agno = 0
            - agno = 1
            - agno = 7
            - agno = 12
            - agno = 5
            - agno = 3
            - agno = 6
            - agno = 8
            - agno = 2
            - agno = 9
            - agno = 11
            - agno = 10
            - agno = 13
            - agno = 14
            - agno = 15
            - agno = 4
    Phase 5 - rebuild AG headers and trees...
            - reset superblock...
    Phase 6 - check inode connectivity...
            - resetting contents of realtime bitmap and summary inodes
            - traversing filesystem ...
            - traversal finished ...
            - moving disconnected inodes to lost+found ...
    Phase 7 - verify and correct link counts...
    Maximum metadata LSN (7:742200) is ahead of log (1:2).
    Format log to cycle 10.
    done

     

  7. @JorgeB I'm getting this on both:

     

    Phase 1 - find and verify superblock...
    bad primary superblock - bad CRC in superblock !!!

    attempting to find secondary superblock...
    .found candidate secondary superblock...
    verified secondary superblock...
    writing modified primary superblock
    Phase 2 - using internal log
            - zero log...
    ERROR: The filesystem has valuable metadata changes in a log which needs to
    be replayed.  Mount the filesystem to replay the log, and unmount it before
    re-running xfs_repair.  If you are unable to mount the filesystem, then use
    the -L option to destroy the log and attempt a repair.
    Note that destroying the log may cause corruption -- please attempt a mount
    of the filesystem before doing this.
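
    For anyone hitting the same message: it wants the journal replayed by a mount/unmount cycle before repairing. A minimal sketch from Maintenance mode (disk 1 as an example; repeat per affected disk):

    mkdir -p /tmp/x
    mount /dev/md1p1 /tmp/x && umount /tmp/x   # mounting replays the log
    xfs_repair -v /dev/md1p1                   # then repair normally
    # Only if the mount itself fails is -L the fallback; it discards the log
    # and can lose the most recent metadata changes.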

  8. Tuesday I started hearing a sound from my server like a fan going bad; about 5 minutes later I got a notification that some drives were hot, so I pulled the lid off to check the fan, and it was fine. I put things back together, went back to the dashboard, and was missing 4 drives from one backplane. I shut down, reseated the 4 drives, and powered back on; 2 came back and 2 were marked as missing. I thought maybe a backplane had gone bad, so I powered down and swapped in a new one, and then the 2 missing drives showed back up but were disabled, so I kicked off a rebuild on the first one. Today the rebuild finished, but now it says the disk is unmountable, same as the other drive that was missing.

     

    What do I do now? Diagnostics are attached.

    storage-diagnostics-20231207-0936.zip

  9. I just added another USB device to pass through to a VM today, and now I'm getting all kinds of issues with USB devices and an error saying "No free USB ports". I found another post saying to bump the USB controller to 3.0 (qemu XHCI) and that would fix the issue, but it did not. I have 4 devices I'm trying to pass through to a Home Assistant VM.

     

    This is what I see in syslog:

    Jul 7 15:51:58 Storage usb_manager: Info: virsh called HomeAssistant 003 021 error: Failed to attach device from /tmp/libvirthotplugusbbybusHomeAssistant-003-021.xml error: internal error: No free USB ports

     

    Diagnostics are attached

     

    Thanks for the assistance!

    storage-diagnostics-20230707-1527.zip
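
    In case it helps anyone else: "No free USB ports" means the VM's emulated USB controller has run out of root ports, which switching to XHCI alone doesn't change. A hypothetical snippet for the VM's XML (Edit VM -> XML view) that raises the port count; the index and ports values are examples, match them to your VM:

    <controller type='usb' index='0' model='qemu-xhci' ports='8'/>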

  10. 27 minutes ago, yayitazale said:

    Ok now the error is different. Are you sure you did the chmod +x to the file in /mnt/user/Backup/tensorrt models/ ??

    @yayitazale Well, not sure what happened here, but I did a wget on the script again and re-did the chmod +x, and now it's working. Not sure why, but it worked this time; there are 3 yolov models in the folder now.

  11. 1 hour ago, yayitazale said:

    Can you try to put the script on the path of first entry "TRT-Models folder path", in the case of the image, "/mnt/user/Backup/TensorRT Models/" ??

     

    I suspect that the second entry is not working but as I tested using the same folder for both I didn't notice it. If this works I would change the template and the steps on the description.

    @yayitazale So unless I misunderstood you, I moved the script to the user/backup/tensorrt models folder, updated the paths, re-did the chmod +x, and ran it, but the same error happened.

     

    Here is my current config

     

    [screenshot: current container template settings]

     

    /opt/nvidia/nvidia_entrypoint.sh: line 49: /tensorrt_models.sh: Permission denied
    /opt/nvidia/nvidia_entrypoint.sh: line 49: exec: /tensorrt_models.sh: cannot execute: Permission denied
    /opt/nvidia/nvidia_entrypoint.sh: line 49: /tensorrt_models.sh: Permission denied
    /opt/nvidia/nvidia_entrypoint.sh: line 49: exec: /tensorrt_models.sh: cannot execute: Permission denied
    /opt/nvidia/nvidia_entrypoint.sh: /tensorrt_models.sh: /bin/bash^M: bad interpreter: No such file or directory
    /opt/nvidia/nvidia_entrypoint.sh: line 49: /tensorrt_models.sh: Success
    /opt/nvidia/nvidia_entrypoint.sh: /tensorrt_models.sh: /bin/bash^M: bad interpreter: No such file or directory
    /opt/nvidia/nvidia_entrypoint.sh: line 49: /tensorrt_models.sh: Success
    
    
    =====================
    == NVIDIA TensorRT ==
    =====================
    
    NVIDIA Release 22.07 (build 40077977)
    NVIDIA TensorRT Version 8.4.1
    Copyright (c) 2016-2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
    
    Container image Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
    
    https://developer.nvidia.com/tensorrt
    
    Various files include modifications (c) NVIDIA CORPORATION & AFFILIATES.  All rights reserved.
    
    This container image and its contents are governed by the NVIDIA Deep Learning Container License.
    By pulling and using the container, you accept the terms and conditions of this license:
    https://developer.nvidia.com/ngc/nvidia-deep-learning-container-license
    
    To install Python sample dependencies, run /opt/tensorrt/python/python_setup.sh
    
    To install the open-source samples corresponding to this TensorRT release version
    run /opt/tensorrt/install_opensource.sh.  To build the open source parsers,
    plugins, and samples for current top-of-tree on master or a different branch,
    run /opt/tensorrt/install_opensource.sh -b <branch>
    See https://github.com/NVIDIA/TensorRT for more information.
    

     

  12. 6 hours ago, yayitazale said:

    Can you show how are you launching the template?

     

    I guess is something related with the path itself. Try to use the suggested path /mnt/user/appdata/trt-models/ and try again.

     

    @yayitazale Below is how I currently have it set up; I tried changing the path to user/appdata, but it still fails.

     

    [screenshot: current container template settings]

     

     

    This is what I see in the container logs:

     

    /opt/nvidia/nvidia_entrypoint.sh: /tensorrt_models.sh: /bin/bash^M: bad interpreter: No such file or directory
    /opt/nvidia/nvidia_entrypoint.sh: line 49: /tensorrt_models.sh: Success
    /opt/nvidia/nvidia_entrypoint.sh: /tensorrt_models.sh: /bin/bash^M: bad interpreter: No such file or directory
    /opt/nvidia/nvidia_entrypoint.sh: line 49: /tensorrt_models.sh: Success
    
    =====================
    == NVIDIA TensorRT ==
    =====================
    
    NVIDIA Release 22.07 (build 40077977)
    NVIDIA TensorRT Version 8.4.1
    Copyright (c) 2016-2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
    
    Container image Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
    
    https://developer.nvidia.com/tensorrt
    
    Various files include modifications (c) NVIDIA CORPORATION & AFFILIATES.  All rights reserved.
    
    This container image and its contents are governed by the NVIDIA Deep Learning Container License.
    By pulling and using the container, you accept the terms and conditions of this license:
    https://developer.nvidia.com/ngc/nvidia-deep-learning-container-license
    
    To install Python sample dependencies, run /opt/tensorrt/python/python_setup.sh
    
    To install the open-source samples corresponding to this TensorRT release version
    run /opt/tensorrt/install_opensource.sh.  To build the open source parsers,
    plugins, and samples for current top-of-tree on master or a different branch,
    run /opt/tensorrt/install_opensource.sh -b <branch>
    See https://github.com/NVIDIA/TensorRT for more information.

     

     

  13. Can anybody give me some guidance on the Tensorrt-models container? For the life of me I cannot get it to work. I've installed the container and done the chmod +x, but when I run it I get this in the logs and the container dies:

     

    /opt/nvidia/nvidia_entrypoint.sh: /tensorrt_models.sh: /bin/bash^M: bad interpreter: No such file or directory
    /opt/nvidia/nvidia_entrypoint.sh: line 49: /tensorrt_models.sh: Success
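
    For the record, the ^M in "/bin/bash^M" means the script was saved with Windows (CRLF) line endings, so the kernel cannot find the interpreter. A minimal sketch of the fix (the path is an example; use wherever your copy of the script lives):

    sed -i 's/\r$//' /mnt/user/appdata/trt-models/tensorrt_models.sh   # strip the carriage returns
    chmod +x /mnt/user/appdata/trt-models/tensorrt_models.sh           # re-mark it executable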

  14. I'm attempting to use this for the first time to build out this: Spodcast Github, but I'm struggling to get it figured out. I assume I need to add mount points somehow in the compose file so it won't just fill up my docker image, but I cannot for the life of me figure it out. Following an example I found in this thread yields:

    volumes.spodcast_data must be a mapping or null

     

    If I add a /data directly after the spodcast_data:, like throughout the rest of the compose file, I get:

    yaml: line 38: mapping values are not allowed in this context

     

    Attached is what I currently have for my compose file; can someone point me in the right direction on this? (A sketch of a layout that parses is below.) Thanks!

    spodcast.txt
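
    For anyone hitting the same YAML errors, a minimal compose sketch that parses; the image name is a placeholder, only the volume wiring matters:

    services:
      spodcast:
        image: example/spodcast   # placeholder image name
        volumes:
          - spodcast_data:/data                 # named volume mounted into the container
          # - /mnt/user/appdata/spodcast:/data  # or a bind mount straight to a share

    volumes:
      spodcast_data:   # bare key with no value; compose only needs the name declared here

    The top-level volumes: block only declares names (a mapping whose values may be null), which is what the "must be a mapping or null" error is complaining about; the host-to-container path mapping belongs in the service's volumes: list, not up there.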

  15. 7 minutes ago, LeonStoldt said:

    @hermy65 it looks like your connection to redis does not work properly. How did you setup redis and reference it in the ghostfolio container?

     

    I could reproduce your issue without the REDIS_PORT variable in the ghostfolio container. I forgot to add the variable to the template and will do it soon. For now, please add a new variable to your ghostfolio template and call it "REDIS_PORT" (case sensitive). It should be 6379 by default or your custom port of your redis container.

    The REDIS_HOST variable should be set to your Unraid IP or local hostname. 

     

    To check, you could run 'ping ${Unraid_IP}:6379' and check if redis is reachable from your ghostfolio console / container.

     

    It sounds like you already did most of it, so maybe check your configurations, give the containers a restart in the correct order and check if it works.

     

    I am able to ping the ip:port of redis from the container. I had already added the REDIS_PORT variable and mapped it to 6379. As for how I configured it, I've tried redis both in host mode and on br0 with a custom IP, and neither appears to work for me.
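
    Side note in case it helps: ping only checks ICMP, so the :6379 part isn't actually being tested. A couple of hedged alternatives (the IP and container names are examples):

    nc -zv 192.168.1.10 6379           # from the ghostfolio console, if nc is in the image
    docker exec redis redis-cli ping   # from the Unraid host; a healthy redis answers PONG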

  16. Managed to get this up and running after adding the REDIS_PORT variable and got an account created; however, whenever I try to add anything and save it, at the bottom it says "Oops, something went wrong, please try again."

     

    In the container logs I see this:

     

    [Nest] 1 - 05/23/2022, 2:39:45 PM LOG [NestApplication] Nest application successfully started +319ms
    [Nest] 1 - 05/23/2022, 2:39:45 PM LOG  ________ __ ____ ___
    [Nest] 1 - 05/23/2022, 2:39:45 PM LOG / ____/ /_ ____ _____/ /_/ __/___ / (_)___
    [Nest] 1 - 05/23/2022, 2:39:45 PM LOG / / __/ __ \/ __ \/ ___/ __/ /_/ __ \/ / / __ \
    [Nest] 1 - 05/23/2022, 2:39:45 PM LOG / /_/ / / / / /_/ (__ ) /_/ __/ /_/ / / / /_/ /
    [Nest] 1 - 05/23/2022, 2:39:45 PM LOG \____/_/ /_/\____/____/\__/_/ \____/_/_/\____/ v1.150.0
    [Nest] 1 - 05/23/2022, 2:39:45 PM LOG
    [Nest] 1 - 05/23/2022, 2:39:45 PM LOG Listening at http://localhost:3333
    [Nest] 1 - 05/23/2022, 2:39:45 PM LOG
    [Nest] 1 - 05/23/2022, 2:39:45 PM LOG [DataGatheringService] Data gathering has been reset.
    [Nest] 1 - 05/23/2022, 2:40:00 PM LOG [DataGatheringService] 7d data gathering has been started.
    [Nest] 1 - 05/23/2022, 2:42:23 PM ERROR [ExceptionsHandler] Connection is closed.

    Error: Connection is closed.

    at Redis.sendCommand (/ghostfolio/apps/api/node_modules/ioredis/built/redis/index.js:636:24)
    at Script.execute (/ghostfolio/apps/api/node_modules/ioredis/built/script.js:27:34)
    at Redis.addJob (/ghostfolio/apps/api/node_modules/ioredis/built/commander.js:158:27)
    at Object.addJob (/ghostfolio/apps/api/node_modules/bull/lib/scripts.js:49:19)
    at addJob (/ghostfolio/apps/api/node_modules/bull/lib/job.js:82:18)
    at /ghostfolio/apps/api/node_modules/bull/lib/job.js:95:14
    [Nest] 1 - 05/23/2022, 2:43:04 PM ERROR [YahooFinanceService] BadRequestError: Missing required query parameter=symbols

    [Nest] 1 - 05/23/2022, 2:43:37 PM ERROR [ExceptionsHandler] Connection is closed.

    Error: Connection is closed.

    at Redis.sendCommand (/ghostfolio/apps/api/node_modules/ioredis/built/redis/index.js:636:24)
    at Script.execute (/ghostfolio/apps/api/node_modules/ioredis/built/script.js:27:34)
    at Redis.addJob (/ghostfolio/apps/api/node_modules/ioredis/built/commander.js:158:27)
    at Object.addJob (/ghostfolio/apps/api/node_modules/bull/lib/scripts.js:49:19)
    at addJob (/ghostfolio/apps/api/node_modules/bull/lib/job.js:82:18)
    at /ghostfolio/apps/api/node_modules/bull/lib/job.js:95:14

     

  17. Any chance anyone can help me with some SameSite/SAMEORIGIN issues I'm having with swag + organizr?

     

    I was able to use the Chrome flags to get things working while they were available; then the flags were removed, so I resorted to a registry tweak, but that no longer works for me either. I would like to get this fixed once and for all so I can keep using things.

     

    Right now I have everything going through swag using a name.domain.com format. If I use local IPs for organizr and the organizr tabs also use local IPs, I get the attached image when I inspect the tab. If I go through my reverse proxy with organizr.domain.com and all the tabs using service.domain.com, I get the same thing. How do I resolve this once and for all?

    1.png
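
    For anyone else chasing this: the inspector errors come from the upstream apps sending X-Frame-Options headers and SameSite-restricted cookies, which block the apps from being iframed by organizr. A hypothetical sketch of additions to a service's proxy conf in swag (/config/nginx/proxy-confs/*.subdomain.conf); the organizr hostname is an example, and proxy_cookie_flags needs nginx 1.19.3 or newer:

    location / {
        # ... existing proxy_pass / include lines ...
        proxy_hide_header X-Frame-Options;   # drop the upstream frame-deny header
        add_header Content-Security-Policy "frame-ancestors 'self' https://organizr.domain.com";
        proxy_cookie_flags ~ samesite=none secure;   # relax SameSite for cross-subdomain iframes
    }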
