j0nnymoe

Members
  • Posts

    353
  • Joined

  • Last visited

Posts posted by j0nnymoe

  1. 1 hour ago, toasty said:

    I keep getting internal server errors when trying to log in and according to the logs, they all share the same error

    
    General error: 1114 The table 'oc_filecache' is full"

    Does anyone have an idea how to resolve this? I have tried running occ files:cleanup but it didn't find any orphaned files.

    Is your disk full?
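    MariaDB error 1114 ("table is full") usually means the disk, or the path the database lives on, has run out of space. A quick way to check (the paths and container name below are assumptions, adjust to your setup):

    # free space on the share holding appdata (path is an assumption)
    df -h /mnt/user/appdata

    # free space as seen from inside the database container (name is an assumption)
    docker exec mariadb df -h /config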

  2. 7 hours ago, Bob1215 said:

    Yeah, if you're looking for the quassel-web thread, it can be found here:

     

    So, I'm not quite sure how these linuxserver proxy-conf files work, but here is my attempt at creating one for quassel-core:

    
    server {
        listen 443 ssl;
        listen [::]:443 ssl;
    
        server_name quassel.*;
    
        include /config/nginx/ssl.conf;
    
        client_max_body_size 0;
    
        location / {
    
            include /config/nginx/proxy.conf;
            resolver 127.0.0.11 valid=30s;
            set $upstream_quassel_core quassel-core;
            proxy_pass http://$upstream_quassel_core:4242;
        }
    }

     

    It's likely you won't be able to reverse proxy an IRC connection that way (at least to my knowledge).
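    If you did want nginx in front of it: quassel-core speaks its own protocol over plain TCP on port 4242, not HTTP, so it would need an nginx stream block at the top level of nginx.conf (outside the http block) rather than an http location. Very rough sketch, assuming the stream module is even available in the container:

    stream {
        server {
            listen 4242;
            proxy_pass quassel-core:4242;
        }
    }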

  3. 11 minutes ago, MothyTim said:

    OK, well now I'm confused, because I haven't added anything except the line that seems to break it!? I deleted it as per the instructions and it re-created itself; I then added

    
    add_header Strict-Transport-Security "max-age=15768000; includeSubDomains; preload;";

    to try and fix the warning message?

    Wait, the penny just dropped: I've been looking at the default file for letsencrypt, not nextcloud itself.

     

    Get into the letsencrypt container and see if you can `ping nextcloud`.
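    For example, from the Unraid terminal (assuming the containers are literally named letsencrypt and nextcloud):

    docker exec -it letsencrypt ping -c 3 nextcloud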

  4. 1 minute ago, MothyTim said:

    Hi, yes the Nextcloud container is on the same custom network as Letsencrypt container.

    Ok, well looking at your default file, you've added a ton of stuff that isn't needed. I'm confused why you've done that and used the subdomain proxy config as well. You need to use the supplied default file; you've removed the bit that tells nginx to use the proxy-confs.
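    Roughly, the fix looks like this (paths are the usual ones inside the letsencrypt container; the sample filename is an assumption, check your proxy-confs folder):

    # remove the modified default so the container regenerates the stock one on restart
    docker exec letsencrypt rm /config/nginx/site-confs/default
    # enable the supplied Nextcloud subdomain proxy conf
    docker exec letsencrypt cp /config/nginx/proxy-confs/nextcloud.subdomain.conf.sample \
        /config/nginx/proxy-confs/nextcloud.subdomain.conf
    docker restart letsencrypt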

  5. 11 hours ago, Fizzyade said:

    4 Xbox One tuners cost me 20 pounds; it's around 150 for the 4-tuner HDHomeRun. Big difference! Plus I can add extra tuners as required for 5 pounds a go...

     

    An HDHomeRun would be less hassle though. I'd prefer to do this all on Unraid, but if I end up having to use a Pi just for the headend then it's no great issue.

     

    If I get any problems then I'll reconsider.

    I only ever recommend network tuners these days. It takes away a lot of the stress of getting stuff like this configured; granted, it does cost more, but in the long run it's much more hassle-free. I used to run a PCIe card for my DVB-T2/DVB-S2 setup; now I just use a network tuner (Digibit R1) which TVHeadend controls.

    • Like 1
    • Thanks 1
  6. 8 hours ago, zygmunt said:

    I'm trying to run the image in k8s but can't get it to work with persistent storage. As long as I don't define a persistent volume it works as intended. When I define a hostPath volume for the /config directory, the controller process refuses all connections. The pod creates three directories in the mounted host volume: data, logs and run. The data directory contains a binary file named keystore; the others are empty.

     

    Inside the pod there's a java process running as user abc (911:911) with the command: java -Xmx1024M -jar /usr/lib/unifi/lib/ace.jar

     

    The unifi service is not running. No sign of any mongodb process or service.

     

    I'm not sure how to troubleshoot this. Any pointers are appreciated. It's obviously related to the storage.

    This thread is for users who need support on Unraid.

    Please use our forum or Discord for support.

  7. 11 hours ago, Benni-chan said:

    rc5 doesn't work for me. Several error messages about not finding Unraid modules:

    updating module dependencies...

    file not found

    (sorry, I don't have a screen capture; getting the server back to work was more important at the moment)

    When booted I don't have any network, and emhttpd crashes with a segfault.

     

    Going back to rc4 works fine.

    Tested / updated to rc5 myself yesterday and it worked fine. Sounds like the plugin didn't finish downloading the files. Please try it again and check that the plugin shows all the files as completed.

  8. 9 hours ago, NLS said:

    I noticed that for the browsers that don't work (Chrome in normal or safe mode, and a freshly downloaded Firefox with no extensions)

    
    <label class="inputLabel inputLabel-float inputLabelUnfocused" for="txtManualPassword">Κωδικός:</label>

    inputLabelUnfocused never turns into inputLabelFocused (or only does so for a split second).

    So there lies the issue: something "steals" the focus from the field entry, an issue I have with NO other web page or web-based app.
     

    (BTW, "Κωδικός" means "password", in case you didn't get it)

     

    This needs to be reported upstream to Jellyfin, then.

  9. 4 hours ago, DavyV97 said:

    The logs shown by UnRaid:

    
    [s6-init] making user provided files available at /var/run/s6/etc...exited 0.
    [s6-init] ensuring user provided files have correct perms...exited 0.
    [fix-attrs.d] applying ownership & permissions fixes...
    [fix-attrs.d] done.
    [cont-init.d] executing container initialization scripts...
    [cont-init.d] 10-adduser: executing...
    
    -------------------------------------
    _ ()
    | | ___ _ __
    | | / __| | | / \
    | | \__ \ | | | () |
    |_| |___/ |_| \__/
    
    
    Brought to you by linuxserver.io
    We gratefully accept donations at:
    https://www.linuxserver.io/donate/
    -------------------------------------
    GID/UID
    -------------------------------------
    
    User uid: 99
    User gid: 100
    -------------------------------------
    
    [cont-init.d] 10-adduser: exited 0.
    [cont-init.d] 20-config: executing...
    [cont-init.d] 20-config: exited 0.
    [cont-init.d] 30-keygen: executing...
    [cont-init.d] 30-keygen: exited 0.
    [cont-init.d] 99-custom-scripts: executing...
    [custom-init] no custom files found exiting...
    [cont-init.d] 99-custom-scripts: exited 0.
    [cont-init.d] done.
    [services.d] starting services
    [services.d] done.

    The server.log shows the following:

    
    [2019-10-29 19:46:12,172] <launcher> ERROR db     - Got error while connecting to db...
    com.mongodb.MongoTimeoutException: Timed out after 30000 ms while waiting for a server that matches ReadPreferenceServerSelector{readPreference=primary}. Client view of cluster state is {type=UNKNOWN, servers=[{address=127.0.0.1:27117, type=UNKNOWN, state=CONNECTING, exception={com.mongodb.MongoSocketOpenException: Exception opening socket}, caused by {java.net.ConnectException: Connection refused (Connection refused)}}]
    	at com.mongodb.connection.BaseCluster.createTimeoutException(BaseCluster.java:377)
    	at com.mongodb.connection.BaseCluster.selectServer(BaseCluster.java:104)
    	at com.mongodb.binding.ClusterBinding$ClusterBindingConnectionSource.<init>(ClusterBinding.java:75)
    	at com.mongodb.binding.ClusterBinding$ClusterBindingConnectionSource.<init>(ClusterBinding.java:71)
    	at com.mongodb.binding.ClusterBinding.getReadConnectionSource(ClusterBinding.java:63)
    	at com.mongodb.operation.CommandOperationHelper.executeWrappedCommandProtocol(CommandOperationHelper.java:90)
    	at com.mongodb.operation.CommandOperationHelper.executeWrappedCommandProtocol(CommandOperationHelper.java:85)
    	at com.mongodb.operation.CommandReadOperation.execute(CommandReadOperation.java:55)
    	at com.mongodb.Mongo.execute(Mongo.java:836)
    	at com.mongodb.Mongo$2.execute(Mongo.java:823)
    	at com.mongodb.DB.executeCommand(DB.java:729)
    	at com.mongodb.DB.command(DB.java:491)
    	at com.mongodb.DB.command(DB.java:507)
    	at com.mongodb.DB.command(DB.java:449)
    	at com.ubnt.service.OoOO.W.OÒ0000(Unknown Source)
    	at com.ubnt.service.OoOO.W.afterPropertiesSet(Unknown Source)
    	at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.invokeInitMethods(AbstractAutowireCapableBeanFactory.java:1758)
    	at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1695)
    	at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:573)
    	at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:495)
    	at org.springframework.beans.factory.support.AbstractBeanFactory.lambda$doGetBean$0(AbstractBeanFactory.java:317)
    	at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:222)
    	at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:315)
    	at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:199)
    	at org.springframework.context.annotation.ConfigurationClassEnhancer$BeanMethodInterceptor.resolveBeanReference(ConfigurationClassEnhancer.java:392)
    	at org.springframework.context.annotation.ConfigurationClassEnhancer$BeanMethodInterceptor.intercept(ConfigurationClassEnhancer.java:364)
    	at com.ubnt.service.AppContext$$EnhancerBySpringCGLIB$$76d57cd9.dbService(<generated>)
    	at com.ubnt.service.AppContext.statService(Unknown Source)
    	at com.ubnt.service.AppContext$$EnhancerBySpringCGLIB$$76d57cd9.CGLIB$statService$9(<generated>)
    	at com.ubnt.service.AppContext$$EnhancerBySpringCGLIB$$76d57cd9$$FastClassBySpringCGLIB$$d0757215.invoke(<generated>)
    	at org.springframework.cglib.proxy.MethodProxy.invokeSuper(MethodProxy.java:228)
    	at org.springframework.context.annotation.ConfigurationClassEnhancer$BeanMethodInterceptor.intercept(ConfigurationClassEnhancer.java:361)
    	at com.ubnt.service.AppContext$$EnhancerBySpringCGLIB$$76d57cd9.statService(<generated>)
    	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    	at java.lang.reflect.Method.invoke(Method.java:498)
    	at org.springframework.beans.factory.support.SimpleInstantiationStrategy.instantiate(SimpleInstantiationStrategy.java:154)
    	at org.springframework.beans.factory.support.ConstructorResolver.instantiateUsingFactoryMethod(ConstructorResolver.java:582)
    	at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.instantiateUsingFactoryMethod(AbstractAutowireCapableBeanFactory.java:1247)
    	at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBeanInstance(AbstractAutowireCapableBeanFactory.java:1096)
    	at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:535)
    	at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:495)
    	at org.springframework.beans.factory.support.AbstractBeanFactory.lambda$doGetBean$0(AbstractBeanFactory.java:317)
    	at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:222)
    	at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:315)
    	at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:199)
    	at org.springframework.context.annotation.ConfigurationClassEnhancer$BeanMethodInterceptor.resolveBeanReference(ConfigurationClassEnhancer.java:392)
    	at org.springframework.context.annotation.ConfigurationClassEnhancer$BeanMethodInterceptor.intercept(ConfigurationClassEnhancer.java:364)
    	at com.ubnt.service.AppContext$$EnhancerBySpringCGLIB$$76d57cd9.statService(<generated>)
    	at com.ubnt.service.AppContext.houseKeeper(Unknown Source)
    	at com.ubnt.service.AppContext$$EnhancerBySpringCGLIB$$76d57cd9.CGLIB$houseKeeper$17(<generated>)
    	at com.ubnt.service.AppContext$$EnhancerBySpringCGLIB$$76d57cd9$$FastClassBySpringCGLIB$$d0757215.invoke(<generated>)
    	at org.springframework.cglib.proxy.MethodProxy.invokeSuper(MethodProxy.java:228)
    	at org.springframework.context.annotation.ConfigurationClassEnhancer$BeanMethodInterceptor.intercept(ConfigurationClassEnhancer.java:361)
    	at com.ubnt.service.AppContext$$EnhancerBySpringCGLIB$$76d57cd9.houseKeeper(<generated>)
    	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    	at java.lang.reflect.Method.invoke(Method.java:498)
    	at org.springframework.beans.factory.support.SimpleInstantiationStrategy.instantiate(SimpleInstantiationStrategy.java:154)
    	at org.springframework.beans.factory.support.ConstructorResolver.instantiateUsingFactoryMethod(ConstructorResolver.java:582)
    	at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.instantiateUsingFactoryMethod(AbstractAutowireCapableBeanFactory.java:1247)
    	at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBeanInstance(AbstractAutowireCapableBeanFactory.java:1096)
    	at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:535)
    	at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:495)
    	at org.springframework.beans.factory.support.AbstractBeanFactory.lambda$doGetBean$0(AbstractBeanFactory.java:317)
    	at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:222)
    	at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:315)
    	at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:199)
    	at org.springframework.beans.factory.support.DefaultListableBeanFactory.preInstantiateSingletons(DefaultListableBeanFactory.java:759)
    	at org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:869)
    	at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:550)
    	at org.springframework.context.annotation.AnnotationConfigApplicationContext.<init>(AnnotationConfigApplicationContext.java:88)
    	at com.ubnt.service.B.Oo0000(Unknown Source)
    	at com.ubnt.service.B.Õ00000(Unknown Source)
    	at com.ubnt.ace.Launcher.main(Unknown Source)
    [2019-10-29T19:46:54,193] <localhost-startStop-1> INFO  system - ======================================================================
    [2019-10-29T19:46:54,200] <localhost-startStop-1> INFO  system - UniFi 5.11.50 (build atag_5.11.50_12745 - release/release) is started
    [2019-10-29T19:46:54,200] <localhost-startStop-1> INFO  system - ======================================================================
    [2019-10-29T19:46:54,201] <localhost-startStop-1> INFO  system - BASE dir:/usr/lib/unifi
    [2019-10-29T19:46:54,273] <localhost-startStop-1> INFO  system - Current System IP: 172.30.32.1
    [2019-10-29T19:46:54,273] <localhost-startStop-1> INFO  system - Hostname: NAS
    [2019-10-29T19:56:54,276] <db-server> WARN  db     - Mongo start up failed with rc=134
    [2019-10-29T19:56:54,277] <db-server> WARN  db     - Unknown error, restarting mongo without logging to verify error
    [2019-10-29T20:07:09,495] <db-server> WARN  db     - Mongo start up failed with rc=134
    [2019-10-29T20:17:47,545] <db-server> WARN  db     - Mongo start up failed with rc=134
    [2019-10-29T20:27:24,934] <db-server> WARN  db     - Mongo start up failed with rc=134
    [2019-10-29T20:36:54,339] <db-server> WARN  db     - Mongo start up failed with rc=134
    [2019-10-29T20:48:31,788] <db-server> WARN  db     - Mongo start up failed with rc=134
    [2019-10-29T20:59:08,710] <db-server> WARN  db     - Mongo start up failed with rc=134
    [2019-10-29T21:08:54,787] <db-server> WARN  db     - Mongo start up failed with rc=134
    [2019-10-29T21:19:59,955] <db-server> WARN  db     - Mongo start up failed with rc=134

    Apparently those Mongo start-up failures have been appearing since 28-09-2019, which might be when I updated. These failures are still occurring to this date.

    The mongod.log shows this:

    
    2019-10-29T19:56:53.112+0100 I STORAGE  [initandlisten] WiredTiger message [1572375413:112068][287:0x14700983ad40], file:index-54-4948728851063389280.wt, txn-recover: Recovering log 14399 through 14414
    2019-10-29T19:56:53.156+0100 I STORAGE  [initandlisten] WiredTiger message [1572375413:156585][287:0x14700983ad40], file:index-54-4948728851063389280.wt, txn-recover: Recovering log 14400 through 14414
    2019-10-29T19:56:53.189+0100 I STORAGE  [initandlisten] WiredTiger message [1572375413:189946][287:0x14700983ad40], file:index-54-4948728851063389280.wt, txn-recover: Recovering log 14401 through 14414
    2019-10-29T19:56:53.223+0100 I STORAGE  [initandlisten] WiredTiger message [1572375413:223168][287:0x14700983ad40], file:index-54-4948728851063389280.wt, txn-recover: Recovering log 14402 through 14414
    2019-10-29T19:56:53.278+0100 I STORAGE  [initandlisten] WiredTiger message [1572375413:278849][287:0x14700983ad40], file:index-54-4948728851063389280.wt, txn-recover: Recovering log 14403 through 14414
    2019-10-29T19:56:53.312+0100 I STORAGE  [initandlisten] WiredTiger message [1572375413:312130][287:0x14700983ad40], file:index-54-4948728851063389280.wt, txn-recover: Recovering log 14404 through 14414
    2019-10-29T19:56:53.345+0100 I STORAGE  [initandlisten] WiredTiger message [1572375413:345406][287:0x14700983ad40], file:index-54-4948728851063389280.wt, txn-recover: Recovering log 14405 through 14414
    2019-10-29T19:56:53.378+0100 I STORAGE  [initandlisten] WiredTiger message [1572375413:378795][287:0x14700983ad40], file:index-54-4948728851063389280.wt, txn-recover: Recovering log 14406 through 14414
    2019-10-29T19:56:53.412+0100 I STORAGE  [initandlisten] WiredTiger message [1572375413:412067][287:0x14700983ad40], file:index-54-4948728851063389280.wt, txn-recover: Recovering log 14407 through 14414
    2019-10-29T19:56:53.456+0100 I STORAGE  [initandlisten] WiredTiger message [1572375413:456668][287:0x14700983ad40], file:index-54-4948728851063389280.wt, txn-recover: Recovering log 14408 through 14414
    2019-10-29T19:56:53.490+0100 I STORAGE  [initandlisten] WiredTiger message [1572375413:489995][287:0x14700983ad40], file:index-54-4948728851063389280.wt, txn-recover: Recovering log 14409 through 14414
    2019-10-29T19:56:53.530+0100 I STORAGE  [initandlisten] WiredTiger message [1572375413:530956][287:0x14700983ad40], file:index-54-4948728851063389280.wt, txn-recover: Recovering log 14410 through 14414
    2019-10-29T19:56:53.686+0100 I STORAGE  [initandlisten] WiredTiger message [1572375413:686403][287:0x14700983ad40], file:index-54-4948728851063389280.wt, txn-recover: Recovering log 14411 through 14414
    2019-10-29T19:56:53.853+0100 I STORAGE  [initandlisten] WiredTiger message [1572375413:853061][287:0x14700983ad40], file:index-54-4948728851063389280.wt, txn-recover: Recovering log 14412 through 14414
    2019-10-29T19:56:54.008+0100 I STORAGE  [initandlisten] WiredTiger message [1572375414:8626][287:0x14700983ad40], file:index-54-4948728851063389280.wt, txn-recover: Recovering log 14413 through 14414
    2019-10-29T19:56:54.167+0100 I STORAGE  [initandlisten] WiredTiger message [1572375414:167723][287:0x14700983ad40], file:index-54-4948728851063389280.wt, txn-recover: Recovering log 14414 through 14414
    2019-10-29T19:56:54.247+0100 E STORAGE  [initandlisten] WiredTiger error (0) [1572375414:247549][287:0x14700983ad40], file:collection-179--1479618200839713417.wt, WT_SESSION.checkpoint: read checksum error for 4096B block at offset 4440064: block header checksum of 1413291893 doesn't match expected checksum of 1740413080
    2019-10-29T19:56:54.247+0100 E STORAGE  [initandlisten] WiredTiger error (0) [1572375414:247658][287:0x14700983ad40], file:collection-179--1479618200839713417.wt, WT_SESSION.checkpoint: collection-179--1479618200839713417.wt: encountered an illegal file format or internal value
    2019-10-29T19:56:54.247+0100 E STORAGE  [initandlisten] WiredTiger error (-31804) [1572375414:247719][287:0x14700983ad40], file:collection-179--1479618200839713417.wt, WT_SESSION.checkpoint: the process must exit and restart: WT_PANIC: WiredTiger library panic
    2019-10-29T19:56:54.247+0100 I -        [initandlisten] Fatal Assertion 28558 at src/mongo/db/storage/wiredtiger/wiredtiger_util.cpp 365
    2019-10-29T19:56:54.247+0100 I -        [initandlisten] 
    
    ***aborting after fassert() failure
    
    
    2019-10-29T19:56:54.272+0100 F -        [initandlisten] Got signal: 6 (Aborted).
    
     0x564eb09cbad1 0x564eb09cace9 0x564eb09cb1cd 0x14700843b390 0x147008095428 0x14700809702a 0x564eafc5b5a7 0x564eb06cfff6 0x564eafc65c24 0x564eafc65e49 0x564eafc660ab 0x564eb12d865f 0x564eb12d49ca 0x564eb12d5b93 0x564eb12f5f81 0x564eb139457a 0x564eb13992f2 0x564eb130c9f5 0x564eb13c50ef 0x564eb13c6507 0x564eb13c7559 0x564eb13b4291 0x564eb13cc41a 0x564eb1330da7 0x564eb132921b 0x564eb06b441f 0x564eb06acb12 0x564eb059f750 0x564eafc46463 0x564eafc67496 0x147008080830 0x564eafcc7879
    ----- BEGIN BACKTRACE -----
    {"backtrace":[{"b":"564EAF418000","o":"15B3AD1","s":"_ZN5mongo15printStackTraceERSo"},{"b":"564EAF418000","o":"15B2CE9"},{"b":"564EAF418000","o":"15B31CD"},{"b":"14700842A000","o":"11390"},{"b":"147008060000","o":"35428","s":"gsignal"},{"b":"147008060000","o":"3702A","s":"abort"},{"b":"564EAF418000","o":"8435A7","s":"_ZN5mongo32fassertFailedNoTraceWithLocationEiPKcj"},{"b":"564EAF418000","o":"12B7FF6"},{"b":"564EAF418000","o":"84DC24","s":"__wt_eventv"},{"b":"564EAF418000","o":"84DE49","s":"__wt_err"},{"b":"564EAF418000","o":"84E0AB","s":"__wt_panic"},{"b":"564EAF418000","o":"1EC065F","s":"__wt_block_extlist_read"},{"b":"564EAF418000","o":"1EBC9CA"},{"b":"564EAF418000","o":"1EBDB93","s":"__wt_block_checkpoint"},{"b":"564EAF418000","o":"1EDDF81","s":"__wt_bt_write"},{"b":"564EAF418000","o":"1F7C57A"},{"b":"564EAF418000","o":"1F812F2","s":"__wt_reconcile"},{"b":"564EAF418000","o":"1EF49F5","s":"__wt_cache_op"},{"b":"564EAF418000","o":"1FAD0EF"},{"b":"564EAF418000","o":"1FAE507"},{"b":"564EAF418000","o":"1FAF559","s":"__wt_txn_checkpoint"},{"b":"564EAF418000","o":"1F9C291"},{"b":"564EAF418000","o":"1FB441A","s":"__wt_txn_recover"},{"b":"564EAF418000","o":"1F18DA7","s":"__wt_connection_workers"},{"b":"564EAF418000","o":"1F1121B","s":"wiredtiger_open"},{"b":"564EAF418000","o":"129C41F","s":"_ZN5mongo18WiredTigerKVEngineC1ERKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEES8_PNS_11ClockSourceES8_mbbbb"},{"b":"564EAF418000","o":"1294B12"},{"b":"564EAF418000","o":"1187750","s":"_ZN5mongo20ServiceContextMongoD29initializeGlobalStorageEngineEv"},{"b":"564EAF418000","o":"82E463"},{"b":"564EAF418000","o":"84F496","s":"main"},{"b":"147008060000","o":"20830","s":"__libc_start_main"},{"b":"564EAF418000","o":"8AF879","s":"_start"}],"processInfo":{ "mongodbVersion" : "3.4.23", "gitVersion" : "324017ede1dbb1c9554dd2dceb15f8da3c59d0e8", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "4.19.56-Unraid", "version" : "#1 SMP Tue Jun 25 10:19:34 PDT 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "564EAF418000", "elfType" : 3, "buildId" : "91B53A60D2F6A2BE28D415B74844C8722A21A4FB" }, { "b" : "7FFF7D9FB000", "elfType" : 3, "buildId" : "66C7D3E7CFA7FD6793FA4CD5E237FFD24E2F88F8" }, { "b" : "1470093B7000", "path" : "/lib/x86_64-linux-gnu/libssl.so.1.0.0", "elfType" : 3, "buildId" : "FF69EA60EBE05F2DD689D2B26FC85A73E5FBC3A0" }, { "b" : "147008F72000", "path" : "/lib/x86_64-linux-gnu/libcrypto.so.1.0.0", "elfType" : 3, "buildId" : "15FFEB43278726B025F020862BF51302822A40EC" }, { "b" : "147008D6A000", "path" : "/lib/x86_64-linux-gnu/librt.so.1", "elfType" : 3, "buildId" : "69143E8B39040C964D3958490535322675F15DD3" }, { "b" : "147008B66000", "path" : "/lib/x86_64-linux-gnu/libdl.so.2", "elfType" : 3, "buildId" : "37BFC3D8F7E3B022DAC7943B1A5FACD40CEBF0AD" }, { "b" : "14700885D000", "path" : "/lib/x86_64-linux-gnu/libm.so.6", "elfType" : 3, "buildId" : "BAD67A84E56E73D031AE507261DA066B35949D34" }, { "b" : "147008647000", "path" : "/lib/x86_64-linux-gnu/libgcc_s.so.1", "elfType" : 3, "buildId" : "68220AE2C65D65C1B6AAA12FA6765A6EC2F5F434" }, { "b" : "14700842A000", "path" : "/lib/x86_64-linux-gnu/libpthread.so.0", "elfType" : 3, "buildId" : "B17C21299099640A6D863E423D99265824E7BB16" }, { "b" : "147008060000", "path" : "/lib/x86_64-linux-gnu/libc.so.6", "elfType" : 3, "buildId" : "1CA54A6E0D76188105B12E49FE6B8019BF08803A" }, { "b" : "147009620000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "C0ADBAD6F9A33944F2B3567C078EC472A1DAE98E" } ] }}
     mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x564eb09cbad1]
     mongod(+0x15B2CE9) [0x564eb09cace9]
     mongod(+0x15B31CD) [0x564eb09cb1cd]
     libpthread.so.0(+0x11390) [0x14700843b390]
     libc.so.6(gsignal+0x38) [0x147008095428]
     libc.so.6(abort+0x16A) [0x14700809702a]
     mongod(_ZN5mongo32fassertFailedNoTraceWithLocationEiPKcj+0x0) [0x564eafc5b5a7]
     mongod(+0x12B7FF6) [0x564eb06cfff6]
     mongod(__wt_eventv+0x3D7) [0x564eafc65c24]
     mongod(__wt_err+0x9D) [0x564eafc65e49]
     mongod(__wt_panic+0x2E) [0x564eafc660ab]
     mongod(__wt_block_extlist_read+0x8F) [0x564eb12d865f]
     mongod(+0x1EBC9CA) [0x564eb12d49ca]
     mongod(__wt_block_checkpoint+0x673) [0x564eb12d5b93]
     mongod(__wt_bt_write+0x4F1) [0x564eb12f5f81]
     mongod(+0x1F7C57A) [0x564eb139457a]
     mongod(__wt_reconcile+0x1272) [0x564eb13992f2]
     mongod(__wt_cache_op+0x875) [0x564eb130c9f5]
     mongod(+0x1FAD0EF) [0x564eb13c50ef]
     mongod(+0x1FAE507) [0x564eb13c6507]
     mongod(__wt_txn_checkpoint+0xD9) [0x564eb13c7559]
     mongod(+0x1F9C291) [0x564eb13b4291]
     mongod(__wt_txn_recover+0x5FA) [0x564eb13cc41a]
     mongod(__wt_connection_workers+0x37) [0x564eb1330da7]
     mongod(wiredtiger_open+0x197B) [0x564eb132921b]
     mongod(_ZN5mongo18WiredTigerKVEngineC1ERKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEES8_PNS_11ClockSourceES8_mbbbb+0x70F) [0x564eb06b441f]
     mongod(+0x1294B12) [0x564eb06acb12]
     mongod(_ZN5mongo20ServiceContextMongoD29initializeGlobalStorageEngineEv+0x6B0) [0x564eb059f750]
     mongod(+0x82E463) [0x564eafc46463]
     mongod(main+0x966) [0x564eafc67496]
     libc.so.6(__libc_start_main+0xF0) [0x147008080830]
     mongod(_start+0x29) [0x564eafcc7879]
    -----  END BACKTRACE  -----

    I am using the :latest version of the UniFi Controller on Unraid 6.7.2 and have tried connecting using Chrome, Firefox and even Edge, with both Host and br0 configurations. The Host configuration has always worked for me.

    That could be a corrupted database. Restore a backup of your appdata and see what happens.
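    Roughly (container name and backup path are assumptions, adjust to your setup):

    docker stop unifi-controller
    # restore the last known-good appdata copy from your backup location
    rsync -a --delete /mnt/user/backups/appdata/unifi-controller/ /mnt/user/appdata/unifi-controller/
    docker start unifi-controller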

  10. 3 hours ago, doweaver said:

    Hitting a NullReferenceException when trying to import users... everything else seems to be working okay. Any ideas?

     

    Found this ticket which was marked as closed a while back: https://github.com/tidusjar/Ombi/issues/2910

    
    2019-10-28 21:40:30.650 -07:00 [Information] Quartz scheduler 'DefaultQuartzScheduler' initialized
    2019-10-28 21:40:30.650 -07:00 [Information] Quartz scheduler version: 3.0.7.0
    2019-10-28 21:40:30.651 -07:00 [Information] JobFactory set to: Ombi.Schedule.IoCJobFactory
    2019-10-28 21:40:30.739 -07:00 [Information] Scheduler DefaultQuartzScheduler_$_NON_CLUSTERED started.
    2019-10-28 21:40:30.745 -07:00 [Debug] Batch acquisition of 0 triggers
    2019-10-28 21:40:36.450 -07:00 [Debug] Batch acquisition of 1 triggers
    2019-10-28 21:40:36.477 -07:00 [Debug] Batch acquisition of 1 triggers
    2019-10-28 21:40:36.478 -07:00 [Debug] Batch acquisition of 0 triggers
    2019-10-28 21:40:36.525 -07:00 [Debug] Calling Execute on job Plex.IPlexUserImporter
    2019-10-28 21:40:36.525 -07:00 [Debug] Calling Execute on job Emby.IEmbyUserImporter
    2019-10-28 21:40:36.638 -07:00 [Debug] Trigger instruction : DeleteTrigger
    2019-10-28 21:40:36.647 -07:00 [Debug] Deleting trigger
    2019-10-28 21:40:37.677 -07:00 [Error] StatusCode: UnprocessableEntity, Reason: Unprocessable Entity, RequestUri: https://plex.tv/users/account.json
    2019-10-28 21:40:37.685 -07:00 [Error] Job Plex.IPlexUserImporter threw an unhandled Exception:
    System.NullReferenceException: Object reference not set to an instance of an object.
       at Ombi.Schedule.Jobs.Plex.PlexUserImporter.<>c__DisplayClass7_0.<ImportAdmin>b__0(OmbiUser x) in C:\projects\requestplex\src\Ombi.Schedule\Jobs\Plex\PlexUserImporter.cs:line 137
       at System.Linq.Enumerable.TryGetFirst[TSource](IEnumerable`1 source, Func`2 predicate, Boolean& found)
       at Ombi.Schedule.Jobs.Plex.PlexUserImporter.ImportAdmin(UserManagementSettings settings, PlexServers server, List`1 allUsers) in C:\projects\requestplex\src\Ombi.Schedule\Jobs\Plex\PlexUserImporter.cs:line 136
       at Ombi.Schedule.Jobs.Plex.PlexUserImporter.Execute(IJobExecutionContext job) in C:\projects\requestplex\src\Ombi.Schedule\Jobs\Plex\PlexUserImporter.cs:line 63
       at Quartz.Core.JobRunShell.Run(CancellationToken cancellationToken)
    2019-10-28 21:40:37.754 -07:00 [Error] Job Plex.IPlexUserImporter threw an exception.
    Quartz.SchedulerException: Job threw an unhandled exception. ---> System.NullReferenceException: Object reference not set to an instance of an object.
       at Ombi.Schedule.Jobs.Plex.PlexUserImporter.<>c__DisplayClass7_0.<ImportAdmin>b__0(OmbiUser x) in C:\projects\requestplex\src\Ombi.Schedule\Jobs\Plex\PlexUserImporter.cs:line 137
       at System.Linq.Enumerable.TryGetFirst[TSource](IEnumerable`1 source, Func`2 predicate, Boolean& found)
       at Ombi.Schedule.Jobs.Plex.PlexUserImporter.ImportAdmin(UserManagementSettings settings, PlexServers server, List`1 allUsers) in C:\projects\requestplex\src\Ombi.Schedule\Jobs\Plex\PlexUserImporter.cs:line 136
       at Ombi.Schedule.Jobs.Plex.PlexUserImporter.Execute(IJobExecutionContext job) in C:\projects\requestplex\src\Ombi.Schedule\Jobs\Plex\PlexUserImporter.cs:line 63
       at Quartz.Core.JobRunShell.Run(CancellationToken cancellationToken)
       --- End of inner exception stack trace --- [See nested exception: System.NullReferenceException: Object reference not set to an instance of an object.
       at Ombi.Schedule.Jobs.Plex.PlexUserImporter.<>c__DisplayClass7_0.<ImportAdmin>b__0(OmbiUser x) in C:\projects\requestplex\src\Ombi.Schedule\Jobs\Plex\PlexUserImporter.cs:line 137
       at System.Linq.Enumerable.TryGetFirst[TSource](IEnumerable`1 source, Func`2 predicate, Boolean& found)
       at Ombi.Schedule.Jobs.Plex.PlexUserImporter.ImportAdmin(UserManagementSettings settings, PlexServers server, List`1 allUsers) in C:\projects\requestplex\src\Ombi.Schedule\Jobs\Plex\PlexUserImporter.cs:line 136
       at Ombi.Schedule.Jobs.Plex.PlexUserImporter.Execute(IJobExecutionContext job) in C:\projects\requestplex\src\Ombi.Schedule\Jobs\Plex\PlexUserImporter.cs:line 63
       at Quartz.Core.JobRunShell.Run(CancellationToken cancellationToken)]
    2019-10-28 21:40:37.755 -07:00 [Debug] Trigger instruction : DeleteTrigger
    2019-10-28 21:40:37.755 -07:00 [Debug] Deleting trigger

     

    Config:

    
    Version	3.0.4817
    Branch	master
    Github	https://github.com/tidusjar/Ombi
    Discord	https://discord.gg/Sa7wNWb
    Reddit	https://www.reddit.com/r/Ombi/
    Issues	https://github.com/tidusjar/Ombi/issues
    Wiki	https://github.com/tidusjar/Ombi/wiki
    OS Architecture	X64
    OS Description	Linux 4.19.56-Unraid #1 SMP Tue Jun 25 10:19:34 PDT 2019
    Process Architecture	X64
    Application Base Path	/opt/ombi

     

    You're best off reporting this upstream to Ombi directly.

    • Thanks 1
  11. 45 minutes ago, hawihoney said:

    Sorry, but there's not a single hint regarding my question on that site. Does "containers" refer to the Letsencrypt container and/or the Nextcloud container?

     

    A "default" file does exist in exact the same location in both containers (/config/nginx/site-confs/default).

     

    When I wrote the article, I was purely talking about the Nextcloud container, nothing else.

  12. 5 hours ago, AnnabellaRenee87 said:

    I don't suppose you all can try these drivers next release, could ya?

    https://www.nvidia.com/Download/driverResults.aspx/150803/en-us

    The reason I want to try that driver is that money is tight right now and I can't afford to upgrade the GTX 760 I used for transcoding before it quit working, and the release notes for that version include this:

    
    Fixed a regression introduced in the 430.* series of releases that caused a segmentation fault in libnvcuvid.so while using Video Codec SDK APIs on certain graphics boards.

     

    For the time being, we only use the latest driver that's available at build time.

  13. 1 hour ago, ksarnelli said:

    Updated the container this morning and now getting:
     

    
    nginx: [emerg] dlopen() "/var/lib/nginx/modules/ngx_stream_geoip2_module.so" failed (Error loading shared library /var/lib/nginx/modules/ngx_stream_geoip2_module.so: No such file or directory) in /etc/nginx/modules/stream_geoip2.conf:1

     

    Checked and the module is in fact missing.  Any ideas?

    Already fixed and new image pushed.
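    Pulling the updated image and recreating the container (or hitting force update in the Unraid webui) will pick up the fix, e.g. (image name assumed here):

    docker pull linuxserver/letsencrypt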

    • Like 1
  14. 2 hours ago, drdebian said:

    Glad to be of service! I actually ordered an HDHomeRun in my despair to circumvent the DVB/TVH issue, not being aware of the fantastic minisatip option... Guess I'll be sending that back now... :D

    FWIW, I personally recommend network tuners to anyone asking me about DVB cards these days. No need to worry about drivers, compiling, etc.

    • Like 1
  15. 10 minutes ago, bastl said:

    Quick question. I am on the 16.0.5 stable channel. Stable 17 was released in September, right? The Docker container won't offer me stable version 17 via the settings overview page. Is there anything special I have to do to switch to the next stable version? If I change the update channel to beta it shows me 17.0.0; I don't know if that is a beta or the actual stable 17 release. If I switch back to the stable channel it again shows 16.0.5 as the latest stable. Am I missing something?

    If you wish to update via the webui, you have to wait for Nextcloud to roll it out to you (it's staggered for obvious reasons).

    There are instructions in the first post on how to update manually.
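    For reference, the manual route boils down to running Nextcloud's own CLI updater from inside the container, something along these lines (the paths are what the linuxserver image uses as far as I recall; treat them as assumptions and follow the first post):

    docker exec -it -u abc nextcloud php /config/www/nextcloud/updater/updater.phar
    docker exec -it -u abc nextcloud php /config/www/nextcloud/occ upgrade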

  16. 4 hours ago, Dazog said:

    Has anyone written a bash script to run these commands after the array has started?

     

    nvidia-smi --persistence-mode=1
    fuser -v /dev/nvidia*

     

    It allows the cards to go into P8 when not in use (power-saving mode).

    I ain't very good at bash scripts.

    Use the User Scripts plugin.
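    A minimal User Scripts entry for this (set it to run at array start) would just be:

    #!/bin/bash
    # keep the cards in persistence mode so they can drop to P8 when idle
    nvidia-smi --persistence-mode=1
    # list any processes currently holding the nvidia devices
    fuser -v /dev/nvidia*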

    • Thanks 1
  17. 2 hours ago, wesman said:

    @Pducharme Sorry if I was not terribly clear: the Intel system has both a discrete GPU (a plugged-in card) and QuickSync, which is part of the graphics integrated into the CPU.

    I have the NVIDIA card set as the one for the Dockers and the one VM to use. I know that if I have the VM running the Dockers cannot use it, but will they fall back to the QuickSync graphics integrated into the CPU, so just software?

     

     

    I believe this could happen with Emby, as you can select multiple transcoders, but Plex just uses one or the other. From our testing of VMs using GPUs that are assigned to containers, we found it locked up.