
Posts posted by fonzie

  1. I imported an audio book whose tracks were all named "Lesson 001", "Lesson 002", etc. in the ID3 tags, with matching filenames ("Lesson 001.mp3").  I then manually changed the names of the first two chapters from within audiobookshelf.

     

    [screenshot]

     

    I then changed every track title with MediaMonkey:

    [screenshot]

     

    I deleted the book from audiobookshelf and re-scanned it, hoping that it would update the track titles, but it retains the original tracks. It even keeps the first two tracks that I renamed from within audiobookshelf, which leads me to believe that audiobookshelf is somehow keeping the metadata somewhere. 

     

    How can I force audiobookshelf to update the chapter titles from the ID3 tags?
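    Since the rescan keeps the old titles, it helps to confirm what is actually written in the files versus what audiobookshelf cached in its database. A minimal stdlib sketch that reads the title straight off a file, assuming a legacy ID3v1 trailer (most modern rips carry ID3v2 tags at the start of the file, which need a library such as mutagen; the path below is a placeholder):

```python
def read_id3v1_title(path):
    """Read the title from a legacy ID3v1 tag (a fixed 128-byte trailer).

    Caveat: most modern files use ID3v2 headers at the *start* of the
    file instead, which need a proper library (e.g. mutagen) to parse.
    """
    with open(path, "rb") as f:
        f.seek(-128, 2)              # ID3v1 lives in the last 128 bytes
        tag = f.read(128)
    if tag[:3] != b"TAG":
        return None                  # no ID3v1 tag present
    # layout: "TAG" + title[30] + artist[30] + album[30] + year[4] + ...
    return tag[3:33].rstrip(b"\x00 ").decode("latin-1")

# e.g. read_id3v1_title("/audiobooks/Lesson 001.mp3")  -- placeholder path
```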

  2. I've had my current rig for about 6 years and I'm looking to upgrade.

     

    --------------------------------------------------------------------------------------------------------------

    My current setup:

     

    Mobo: ASRock EP2C602-4L/D16

    http://www.newegg.com/Product/Product.aspx?Item=N82E16813157350&ignorebbr=1

     

    CPU: 2x Xeon 2670 matching pair SR0KX

    https://www.ebay.com/itm/264797287830

     

    Cooler: 2x Noctua NH-U9DXi4

    https://www.amazon.com/gp/product/B00E1JGFA0/ref=ox_sc_act_title_2?ie=UTF8&psc=1&smid=A1H9NMCPZH97BO

     

    Memory: 96GB (12x 8GB) Micron DDR3 RAM

    http://www.ebay.com/itm/122249072561?_trksid=p2057872.m2749.l2649&ssPageName=STRK%3AMEBIDX%3AIT

     

    PSU: EVGA SuperNOVA 1000 G2, 80+ GOLD 1000W

    https://www.amazon.com/gp/product/B00CGYCNG2/ref=ox_sc_act_title_1?ie=UTF8&psc=1&smid=ATVPDKIKX0DER

     

    SAS Card: LSI SAS 9201-16i

    https://docs.broadcom.com/doc/12352036

     

    Case: Norco 4224

    https://www.amazon.com/NORCO-Mount-Hot-Swappable-Server-RPC-4224/dp/B00BQY3916/ref=sr_1_1?s=pc&ie=UTF8&qid=1482874823&sr=1-1&keywords=norco+4224

     

    # of Hard drives: 24

     

    ---------------------------------------------------------------------------------------------------

     

    My needs:  I want to be able to handle 4K HEVC transcoding in an Emby docker using Intel Quick Sync.  I don't want a discrete GPU.  I want to run at least one Windows 10 VM at a time.

     

    Based on my limited research, I'm thinking the i7-12700K might be good enough with its UHD 770 graphics, which are the same on the 13700K (so no need to pay more for the same GPU power).

     

    I'm looking at maybe 32GB of DDR4 RAM, with the possibility of expanding to 64GB in the future.

     

    I also just got a PCIe 4.0 NVMe M.2 drive, so I'll need a new mobo with a slot for that (I'm sure most new ones already have M.2 slots).

     

    My budget is around $800, but if it can be done for less, or if you think the CPU above is overkill, let me know.  I'm open to any suggestions or corrections.

  3. On 1/22/2023 at 9:52 AM, DontWorryScro said:


    I switched to using PIA WireGuard instead of OpenVPN in rutorrentvpn and my speeds doubled, almost tripled.

     

    I'll give that a try.  I see the option in the docker settings for rutorrentVPN to change from OpenVPN to WireGuard, but where on the Private Internet Access website can I find the config files for WireGuard?  And where in the docker do I place them once I have them?

  4. On 1/19/2023 at 2:25 PM, DontWorryScro said:

     

     

    +1

    Been noticing that the little port checker icon on the bottom has been giving me lots of yellow triangles or straight-up red exclamation circles telling me ports are unverified or closed, with a 127.0.0.1:[port] showing often.  I have to assume something has changed with PIA OpenVPN servers recently.  I was using Berlin flawlessly for years.  Now I can't even find the port-forwarded location list on the PIA site.  They seem to make it hard to find.  But I only ask here to see if this is not in fact a PIA thing and that something may have changed in the client.  Anyone solve this yet?

     

    Haven't fixed it for myself yet.  I'm using PIA and installed the qbittorrentVPN docker, and the speeds are much faster using the same PIA VPN server, so I have to assume it's an issue with either rTorrent or the docker itself.

     

    I just wish I could fix this issue, because I much prefer the feature-rich rTorrentVPN docker over the qbittorrentVPN docker.  I'll be checking this thread regularly to see if any updates come along that let me come back to this docker.

     

  5. This docker has been working perfectly for a long time, but just recently the speeds have ranged from super slow to completely stopped.  Some torrents will show there are no seeders (when I know there should be), and even those with seeders will not download.

     

    I'm using Private Internet Access as a VPN provider.  I was using the Berlin server for years and it was fine up until recently.  I tried changing to the Mexico one, due to lower latency, but it's the same issue, if not worse.

     

    Does anyone have a recommendation for a city to use with Private Internet Access and this docker?  Or know of anything that could be causing my downloading issues?

  6. I've been using the official emby docker for years now without issue, but lately I've found that my "Movies" share isn't storing metadata.  No subtitles, fanart, or even thumbnails are being stored alongside the movies in their corresponding folders.  Yes, I have the settings correct.  It's worked for years and I didn't change anything in emby.  Could it be a permissions setting within unRAID that's not allowing the docker to write to the share?

     

    I'm using Sonarr to download movies and place them in the correct folder, so I don't know why Sonarr can write to that share when it seems that Emby cannot.  Any tips would be appreciated.
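    One way to separate an Emby configuration problem from an unRAID permissions problem is to test-write into the movie folder from the Emby container's console. A small stdlib sketch (the example path is a placeholder):

```python
import os
import tempfile

def can_write(directory):
    """Prove effective write permission by creating and deleting a temp file."""
    try:
        fd, path = tempfile.mkstemp(dir=directory)
        os.close(fd)
        os.remove(path)
        return True
    except OSError:
        return False

# e.g. run inside the Emby container:
#   can_write("/movies/Some Movie (2020)")   -- placeholder path
```

    If this returns False inside the container but True on the host, it points at the container's user/permission mapping rather than Emby's settings.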

  7. Lately (about once a week, or maybe less often), my unRAID server gets stuck in a reboot cycle by itself.  It will randomly restart the server and start doing a parity check (as if it wasn't a clean reboot), then after about 10 minutes it restarts again and the cycle continues.  Even if I press the restart button in the unRAID GUI and it cleanly restarts, it will still stay in the cycle.

     

    The only way to stop it is to physically press the power button, wait a few minutes, then press the power button again to start it back up. Obviously this isn't ideal, as I am usually not home when this cycle starts.  

     

    I've attached the diagnostics.

     

    What log or information can I post to get some assistance?  It's currently stuck on the reboot cycle as I post this.

    media-diagnostics-20220813-1922.zip

  8. 9 minutes ago, alturismo said:

    May I ask what is filling your xTeve log like this?

     

    To prevent this ... well, I'm not using docker logs after my docker is running, so I always set

     

    [screenshot]

     

    But I would be interested in what ...

    Where did you set these Extra Parameters?  I can't find the setting.

  9. My rtorrentvpn has been down since yesterday and I cannot get it back up and running.

     

    It's been working fine for over a year and yesterday I added several torrents all at once.  It started getting slow and then stopped responding.  I've tried restarting the docker and even restarted the entire server but still it does not load.

     

    The log says:

    [info] rTorrent process listening on port 5000
    
    2022-06-15 09:54:20,242 DEBG 'watchdog-script' stdout output:
    [info] Initialising ruTorrent plugins (checking rTorrent is running)...
    
    2022-06-15 09:54:20,253 DEBG 'watchdog-script' stdout output:
    [info] rTorrent running
    [info] Initialising ruTorrent plugins (checking nginx is running)...
    
    2022-06-15 09:54:20,264 DEBG 'watchdog-script' stdout output:
    [info] nginx running
    [info] Initialising ruTorrent plugins...
    
    2022-06-15 09:54:31,849 DEBG 'watchdog-script' stdout output:
    [info] ruTorrent plugins initialised
    
    2022-06-15 09:55:17,100 DEBG 'watchdog-script' stdout output:
    [info] rTorrent not running
    
    2022-06-15 09:55:17,108 DEBG 'watchdog-script' stdout output:
    [info] Removing any rTorrent session lock files left over from the previous run...
    
    2022-06-15 09:55:17,114 DEBG 'watchdog-script' stdout output:
    [info] Attempting to start rTorrent...
    
    2022-06-15 09:55:17,115 DEBG 'watchdog-script' stdout output:
    Script started, output log file is '/home/nobody/typescript'.
    
    2022-06-15 09:55:17,151 DEBG 'watchdog-script' stdout output:
    Script done.
    
    2022-06-15 09:55:19,190 DEBG 'watchdog-script' stdout output:
    [info] rTorrent process started
    [info] Waiting for rTorrent process to start listening on port 5000...
    
    2022-06-15 09:55:19,201 DEBG 'watchdog-script' stdout output:
    [info] rTorrent process listening on port 5000
    
    2022-06-15 09:55:19,202 DEBG 'watchdog-script' stdout output:
    [info] Initialising ruTorrent plugins (checking rTorrent is running)...
    
    2022-06-15 09:55:19,213 DEBG 'watchdog-script' stdout output:
    [info] rTorrent running
    [info] Initialising ruTorrent plugins (checking nginx is running)...
    
    2022-06-15 09:55:19,226 DEBG 'watchdog-script' stdout output:
    [info] nginx running
    [info] Initialising ruTorrent plugins...
    
    2022-06-15 09:55:39,515 DEBG 'watchdog-script' stdout output:
    [info] ruTorrent plugins initialised
    
    2022-06-15 09:56:09,726 DEBG 'watchdog-script' stdout output:
    [info] rTorrent not running
    
    2022-06-15 09:56:09,734 DEBG 'watchdog-script' stdout output:
    [info] Removing any rTorrent session lock files left over from the previous run...
    
    2022-06-15 09:56:09,737 DEBG 'watchdog-script' stdout output:
    [info] Attempting to start rTorrent...
    
    2022-06-15 09:56:09,738 DEBG 'watchdog-script' stdout output:
    Script started, output log file is '/home/nobody/typescript'.
    
    2022-06-15 09:56:09,773 DEBG 'watchdog-script' stdout output:
    Script done.
    
    2022-06-15 09:56:11,807 DEBG 'watchdog-script' stdout output:
    [info] rTorrent process started
    [info] Waiting for rTorrent process to start listening on port 5000...
    
    2022-06-15 09:56:11,817 DEBG 'watchdog-script' stdout output:
    [info] rTorrent process listening on port 5000
    
    2022-06-15 09:56:11,818 DEBG 'watchdog-script' stdout output:
    [info] Initialising ruTorrent plugins (checking rTorrent is running)...
    
    2022-06-15 09:56:11,828 DEBG 'watchdog-script' stdout output:
    [info] rTorrent running
    
    2022-06-15 09:56:11,828 DEBG 'watchdog-script' stdout output:
    [info] Initialising ruTorrent plugins (checking nginx is running)...
    
    2022-06-15 09:56:11,839 DEBG 'watchdog-script' stdout output:
    [info] nginx running
    [info] Initialising ruTorrent plugins...
    
    2022-06-15 09:56:20,175 DEBG 'watchdog-script' stdout output:
    [info] ruTorrent plugins initialised
    

    [screenshot]

     

    Any suggestions or advice?
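    The log above shows a crash loop: rTorrent starts, the plugins initialise, and shortly afterwards the watchdog reports it gone and restarts it. A quick stdlib sketch to pull the restart times out of a pasted log and confirm the cycle:

```python
from datetime import datetime

def restart_times(log_text):
    """Timestamps at which the watchdog attempted to (re)start rTorrent."""
    times, stamp = [], None
    for line in log_text.splitlines():
        if "DEBG 'watchdog-script'" in line:
            # e.g. "2022-06-15 09:55:17,114 DEBG 'watchdog-script' stdout output:"
            stamp = datetime.strptime(line.split(" DEBG")[0].strip(),
                                      "%Y-%m-%d %H:%M:%S,%f")
        elif stamp and "Attempting to start rTorrent" in line:
            times.append(stamp)
            stamp = None
    return times
```

    Against the log above this shows a restart attempt roughly every 50 seconds, which suggests rTorrent itself is dying (e.g. choking on the newly added torrents or its session data) rather than the container failing to start.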

  10. 3 hours ago, trurl said:

    The usual reason for filling docker.img is an application writing to a path that isn't mapped. Check each application that writes data such as downloads or transcodes or dvr, etc. and make sure it is configured to only write to a path that corresponds exactly to a container path in the mappings.

     

     

    I've had experience with this happening in the past, so I'm careful to make sure the directories are correct when adding new dockers.  I haven't configured or added any new dockers recently, so I'm confused as to why this would happen suddenly this morning.  Is there a way to see what is being written to the docker.img in real time?

     

    25 minutes ago, rachid596 said:

    Maybe you have a lot of orphan images

    Sent from my HD1913 using Tapatalk
     

    How do I check for the orphan images?

     

    I was planning on deleting the docker.img and rebuilding, but I'm afraid that whatever caused it to happen the first time will just do it again.

  11. I haven't made any changes to my dockers and haven't gotten any new ones, but this morning I woke up to messages about every 10 minutes that my docker.img was filling up 1% each time.  What's the best way to find out which docker is causing the issue?
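    For the quick checks, `docker ps -s` prints each container's writable-layer size and `docker images -f dangling=true` lists orphaned images. If you'd rather measure directly, here is a stdlib sketch of a per-directory size tally (a `du -s` equivalent) you could point at the docker graph's per-container directories; the example path is an assumption that varies with the storage driver:

```python
import os

def dir_size(path):
    """Total size in bytes of all regular files under path (like `du -sb`)."""
    total = 0
    for root, _dirs, files in os.walk(path, onerror=lambda e: None):
        for name in files:
            try:
                total += os.lstat(os.path.join(root, name)).st_size
            except OSError:
                pass                      # file vanished or unreadable
    return total

def biggest_subdirs(path, top=5):
    """Rank immediate subdirectories by size, largest first.

    Point this at e.g. /var/lib/docker/btrfs/subvolumes (placeholder --
    the layout depends on the docker storage driver in use).
    """
    sizes = [(dir_size(os.path.join(path, d)), d)
             for d in os.listdir(path)
             if os.path.isdir(os.path.join(path, d))]
    return sorted(sizes, reverse=True)[:top]
```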

  12. Follow up curveball question:

     

    Can I copy all of the contents of disk 21 (including the incomplete files from the cancelled disk 4 process) to another disk (let's say disk 8) and then run the command:

     

    rsync -avX /mnt/disk4/ /mnt/disk8

     

    and have it resume the sync to disk 8, even though the original process had been moving the files to disk 21?

     

    I'm thinking (hoping) yes, because it will just compare the files on both drives and sync them, as the name implies... but I want to be sure.

  13. I was following the Safer Method outlined in the wiki and had transferred about 800GB of Disk 4 over to Disk 21 when a different drive in the array (Disk 7) failed and was being emulated.  I decided to play it safe and force-shutdown the server mid-transfer, because I didn't want to risk a second drive failing and losing data (I only have 1 parity drive at the moment).

     

    I've already replaced the failed drive and rebuilt parity. My question is: Can I run the same command again without issue?

     

    rsync -avX /mnt/disk4/ /mnt/disk21

     

    What will happen if I type that again? Is it smart enough to see that I already transferred about 800GB of files and resume where it left off? Will it start from the beginning and simply overwrite all of the data again? Or will it make duplicates on the drive?

     

    **The reason I was scared of another drive failing (and forced the shutdown) is that I have about 3 old disks with SMART errors, which is why I'm consolidating old drives onto a new, larger one.
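    For reference, rsync's default "quick check" skips any destination file whose size and modification time already match the source, so rerunning the same command resumes rather than recopies: the ~800GB already transferred (with `-a` preserving mtimes) will be skipped, not duplicated or rewritten. A simplified stdlib sketch of that skip decision:

```python
import os

def rsync_would_skip(src, dst):
    """Mimic rsync's default quick check: skip when the destination exists
    with the same size and mtime (rsync -a preserves mtimes, so files
    completed in the earlier run will match and be skipped).

    Simplification: real rsync compares mtimes at configurable precision;
    truncating to whole seconds here keeps the sketch portable.
    """
    try:
        s, d = os.stat(src), os.stat(dst)
    except FileNotFoundError:
        return False                      # destination missing: must copy
    return s.st_size == d.st_size and int(s.st_mtime) == int(d.st_mtime)
```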

  14. I am in the process of changing some of my disks' filesystems from reiserfs to xfs (I already completed one drive successfully).  I was following the guide here:

     

    https://wiki.unraid.net/File_System_Conversion

     

    I did the "Mirror each disk with rsync, preserving parity" method and followed those instructions.  They state to basically shut down all dockers, VMs, the mover, etc. to avoid something else getting written to the original drive or the new empty drive during the copy process.

     

    My question is, can I simply stop the mover and parity checks from automatically running and be safe while still running my dockers?

     

    Essentially, what I want to do is keep running my dockers during this process, because there are a lot of functions I really can't go without for this long transfer (each 4TB disk takes me around a day to complete).  All of my dockers run on the cache drive, and all new data is written to the cache drive first before it is distributed to the main array during its nightly scheduled move.

     

    Is there any detail that I'm missing that could cause hiccups along the way?

  15. Like others here, I've been having the issue where the docker stops responding after it's been left running for a while.  Stopping and restarting the docker would resolve it.  But for the last few days I have not been able to get the docker up and running at all.  I have not changed any settings, and stopping and restarting the docker no longer resolves the issue, no matter how many times I try.

     

    Here's the log:

     

    
    
    s6-svwait: fatal: timed out
    [previous line repeated 118 times in total]
    s6-svwait: fatal: supervisor died
    rdpmouseControl: what 2
    rdpmouseDeviceOff:
    rdpkeybControl: what 2
    rdpkeybDeviceOff:
    rdpSaveScreen:
    s6-svwait: fatal: timed out
    XIO: fatal IO error 11 (Resource temporarily unavailable) on X server ":1"
    
    after 1870 requests (1870 known processed) with 0 events remaining.
    [cont-finish.d] executing container finish scripts...
    rdpkeybControl: what 3
    rdpkeybUnInit: drv 0x55a70db2ffb0 info 0x55a70de47160, flags 0x0
    rdpUnregisterInputCallback: proc 0x1458a4983530
    rdpmouseControl: what 3
    rdpmouseUnInit: drv 0x55a70db30e10 info 0x55a70dcd6540, flags 0x0
    rdpUnregisterInputCallback: proc 0x1458a4b86c60
    s6-svwait: fatal: timed out
    [cont-finish.d] done.
    [s6-finish] waiting for services.
    chansrv::main: using log file [/tmp/xrdp-chansrv.1.log]
    [20200217-18:34:04] [CORE ] main: app started pid 523(0x0000020b)
    [20200217-18:34:08] [INFO ] main: DISPLAY env var set to :1
    [20200217-18:34:10] [INFO ] main: using DISPLAY 1
    [20200217-18:34:11] [INFO ] channel_thread_loop: thread start
    [20200217-23:23:29] [INFO ] term_signal_handler: got signal 15
    [20200217-23:23:30] [INFO ] channel_thread_loop: g_term_event set
    xrdp-chansrv [1433847801]: scard_deinit:
    chansrv:smartcard_pcsc [1433847801]: scard_pcsc_deinit:
    [20200217-23:23:32] [INFO ] channel_thread_loop: thread stop
    [20200217-23:23:34] [INFO ] main: app exiting pid 523(0x0000020b)
    s6-svwait: fatal: timed out
    [s6-finish] sending all processes the TERM signal.
    [s6-init] making user provided files available at /var/run/s6/etc...exited 0.
    [s6-init] ensuring user provided files have correct perms...exited 0.
    [fix-attrs.d] applying ownership & permissions fixes...
    [fix-attrs.d] done.
    [cont-init.d] executing container initialization scripts...
    [cont-init.d] 01-envfile: executing...
    [cont-init.d] 01-envfile: exited 0.
    [cont-init.d] 10-adduser: executing...
    usermod: no changes
    
    -------------------------------------
    _ ()
    | | ___ _ __
    | | / __| | | / \
    | | \__ \ | | | () |
    |_| |___/ |_| \__/
    
    
    Brought to you by linuxserver.io
    We gratefully accept donations at:
    https://www.linuxserver.io/donate/
    -------------------------------------
    GID/UID
    -------------------------------------
    
    User uid: 99
    User gid: 100
    -------------------------------------
    
    [cont-init.d] 10-adduser: exited 0.
    [cont-init.d] 11-moduser: executing...
    [cont-init.d] 11-moduser: exited 0.
    [cont-init.d] 12-prep_xrdp: executing...
    [cont-init.d] 12-prep_xrdp: exited 0.
    [cont-init.d] 13-update_app_name: executing...
    [cont-init.d] 13-update_app_name: exited 0.
    [cont-init.d] 14-configure_openbox: executing...
    [cont-init.d] 14-configure_openbox: exited 0.
    [cont-init.d] 30-update_webapp_context: executing...
    [cont-init.d] 30-update_webapp_context: exited 0.
    [cont-init.d] 35-update_guac_creds: executing...
    [cont-init.d] 35-update_guac_creds: exited 0.
    [cont-init.d] 50-config: executing...
    [cont-init.d] 50-config: exited 0.
    [cont-init.d] 99-custom-scripts: executing...
    [custom-init] no custom files found exiting...
    [cont-init.d] 99-custom-scripts: exited 0.
    [cont-init.d] done.
    [services.d] starting services
    [services.d] done.
    Unable to find an X display. Ensure you have permission to connect to the display.
    
    X.Org X Server 1.19.6
    Release Date: 2017-12-20
    X Protocol Version 11, Revision 0
    
    Build Operating System: Linux 4.4.0-148-generic x86_64 Ubuntu
    Current Operating System: Linux 733b9b179683 4.19.98-Unraid #1 SMP Sun Jan 26 09:15:03 PST 2020 x86_64
    Kernel command line: iommu=pt initrd=/bzroot BOOT_IMAGE=/bzimage
    
    Build Date: 03 June 2019 08:10:35AM
    xorg-server 2:1.19.6-1ubuntu4.3 (For technical support please see http://www.ubuntu.com/support)
    Current version of pixman: 0.34.0
    
    Before reporting problems, check http://wiki.x.org
    to make sure that you have the latest version.
    
    Markers: (--) probed, (**) from config file, (==) default setting,
    (++) from command line, (!!) notice, (II) informational,
    (WW) warning, (EE) error, (NI) not implemented, (??) unknown.
    
    (==) Log file: "/var/log/Xorg.pid-408.log", Time: Mon Feb 17 23:29:55 2020
    (++) Using config file: "/etc/X11/xrdp/xorg.conf"
    (==) Using system config directory "/usr/share/X11/xorg.conf.d"
    guacd[420]: INFO: Guacamole proxy daemon (guacd) version 0.9.14 started
    
    guacd[420]: INFO: Listening on host 127.0.0.1, port 4822
    xorgxrdpSetup:
    xrdpdevSetup:
    rdpmousePlug:
    rdpkeybPlug:
    rdpIdentify:
    rdpDriverFunc: op 10
    
    :
    rdpPreInit:
    rdpScreenInit: virtualX 800 virtualY 600 rgbBits 8 depth 24
    rdpScreenInit: pfbMemory bytes 1920000
    rdpScreenInit: pfbMemory 0x14e62b404010
    rdpSimdInit: assigning yuv functions
    rdpSimdInit: cpuid ax 1 cx 0 return ax 0x000206d7 bx 0x28200800 cx 0x1fbee3ff dx 0xbfebfbff
    rdpSimdInit: sse2 amd64 yuv functions assigned
    rdpXvInit: depth 24
    rdpClientConInit: kill disconnected [0] timeout [0] sec
    
    
    rdpScreenInit: out
    guacd[420]: INFO: Guacamole connection closed during handshake
    rdpmousePreInit: drv 0x5637cb13ce10 info 0x5637cb2e2910, flags 0x0
    rdpmouseControl: what 0
    rdpmouseDeviceInit:
    rdpmouseCtrl:
    rdpRegisterInputCallback: type 1 proc 0x14e62b7ddc60
    rdpmouseControl: what 1
    rdpmouseDeviceOn:
    rdpkeybPreInit: drv 0x5637cb13bfb0 info 0x5637cb453530, flags 0x0
    rdpkeybControl: what 0
    rdpkeybDeviceInit:
    rdpkeybChangeKeyboardControl:
    rdpkeybChangeKeyboardControl: autoRepeat on
    rdpRegisterInputCallback: type 0 proc 0x14e62b5da530
    rdpkeybControl: what 1
    rdpkeybDeviceOn:
    rdpSaveScreen:
    rdpDeferredRandR:
    rdpResizeSession: width 1024 height 768
    calling RRScreenSizeSet
    rdpRRScreenSetSize: width 1024 height 768 mmWidth 271 mmHeight 203
    rdpRRGetInfo:
    screen resized to 1024x768
    RRScreenSizeSet ok 1
    rdpInDeferredUpdateCallback:
    rdpkeybChangeKeyboardControl:
    rdpkeybChangeKeyboardControl: autoRepeat off
    rdpRRGetInfo:
    Obt-Message: Xinerama extension is not present on the server
    Warning: Cannot convert string "-*-helvetica-bold-r-normal--*-120-*-*-*-*-iso8859-1" to type FontStruct
    
    Warning: Cannot convert string "-*-courier-medium-r-normal--*-120-*-*-*-*-iso8859-1" to type FontStruct
    
    Openbox-Message: Unable to find a valid menu file "/var/lib/openbox/debian-menu.xml"
    s6-svwait: fatal: timed out
    s6-svwait: fatal: timed out
    Connection failure: Connection refused
    pa_context_connect() failed: Connection refused
    
    s6-svwait: fatal: timed out
    rdpRRGetInfo:
    QStandardPaths: XDG_RUNTIME_DIR not set, defaulting to '/tmp/runtime-abc'
    s6-svwait: fatal: timed out
    s6-svwait: fatal: timed out
    s6-svwait: fatal: timed out
    s6-svwait: fatal: timed out
    s6-svwait: fatal: timed out
    qt.qpa.xcb: QXcbConnection: XCB error: 148 (Unknown), sequence: 181, resource id: 0, major code: 140 (Unknown), minor code: 20
    
    s6-svwait: fatal: timed out
    s6-svwait: fatal: timed out
    s6-svwait: fatal: timed out
    s6-svwait: fatal: timed out
    DBusExport: Failed to connect to DBUS session bus, with error: org.freedesktop.DBus.Error.NotSupported: Using X11 for dbus-daemon autolaunch was disabled at compile time, set your DBUS_SESSION_BUS_ADDRESS instead
    
    s6-svwait: fatal: timed out
    s6-svwait: fatal: timed out
    Traceback (most recent call last):
    File "site-packages/calibre/gui2/notify.py", line 159, in get_notifier
    File "site-packages/calibre/gui2/notify.py", line 89, in get_dbus_notifier
    File "site-packages/dbus/_dbus.py", line 211, in __new__
    File "site-packages/dbus/_dbus.py", line 100, in __new__
    File "site-packages/dbus/bus.py", line 122, in __new__
    DBusException: org.freedesktop.DBus.Error.NotSupported: Using X11 for dbus-daemon autolaunch was disabled at compile time, set your DBUS_SESSION_BUS_ADDRESS instead
    
    s6-svwait: fatal: timed out
    [previous line repeated 52 times in total]

     
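    Walls of repeated lines like the `s6-svwait` spam above are much easier to scan once collapsed; a stdlib equivalent of `uniq -c`:

```python
from itertools import groupby

def collapse_repeats(lines):
    """Collapse consecutive duplicate lines into (count, line) pairs,
    like piping a log through `uniq -c`."""
    return [(len(list(group)), line) for line, group in groupby(lines)]
```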

  16. 8 hours ago, uaborne said:

    I just resolved my issue.  From the docker's console I ran the following commands, which allowed me to log in.

    
    /usr/local/openvpn_as/scripts/sacli --key "vpn.server.daemon.enable" --value "false" ConfigPut
    /usr/local/openvpn_as/scripts/sacli --key "vpn.daemon.0.listen.protocol" --value "tcp" ConfigPut
    /usr/local/openvpn_as/scripts/sacli --key "vpn.server.port_share.enable" --value "true" ConfigPut
    /usr/local/openvpn_as/scripts/sacli start

     

    How do you input those commands?  Where exactly do you do it?

  17. I'm not sure what happened but I just noticed today that I cannot access my dockers outside of my network (Airsonic, Booksonic, Emby, etc).  

     

    I have the ports forwarded and I've been able to access them previously; this is a new problem that just arose.

     

    The only thing I can think of that has changed is that I updated to the latest unRAID, 6.8.1.

     

    Sonarr can still search indexers and download.  I can also still connect to my OpenVPN docker on that same machine while outside the network.  I'm not sure how to even begin to diagnose the problem so I can fix it.  Any tips?
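    One way to start narrowing it down is to confirm each docker still answers on its port from inside the LAN before blaming the router's forwards. A small stdlib check (the hosts and ports below are placeholders):

```python
import socket

def port_open(host, port, timeout=2.0):
    """True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# e.g. from a machine on the LAN (Emby's default HTTP port is 8096):
#   port_open("192.168.1.10", 8096)      -- placeholder address
```

    If the port answers on the LAN but not from outside, the problem is in the forwarding/WAN path rather than the docker itself.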
