Posts posted by Chriexpe

  1. 13 hours ago, ich777 said:

    You have to understand that Intel GPU Top is only a tool for reading the utilisation from your Intel GPU.

    My bad, I thought it enabled it too, but now I remember I also have the Intel Graphics SR-IOV plugin, which replaces it with the i915 driver.

    13 hours ago, ich777 said:

    However, AV1 encoding on Intel hardware is currently not supported on Linux; only Windows is supported from the software side, regardless of what software you are using.

    So only Intel Arc is currently supported? I thought SVT-AV1 encoding worked on 12th-gen+ Intel CPUs too, but I can't find anything about it online. (Or is it something like 5x slower than HW HEVC encoding?)

    13 hours ago, ich777 said:

    I would not invest too much time into AV1 yet, because it will take a few more years for it to really take off and for more devices to support it; h.265 is the much better choice for now.

    I did use HW QSV h.265 to transcode all my media in Immich, but Firefox can't play it (at least on Linux); with AV1 it should work fine.
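    For context, here is a sketch of the two encode paths being discussed. Filenames and quality settings are hypothetical, and it assumes an ffmpeg build with both QSV and SVT-AV1 enabled; the commands are echoed rather than executed so the sketch is safe to run anywhere.

```shell
#!/bin/sh
# Hypothetical filenames/settings. av1_qsv needs an Intel Arc GPU on Linux;
# libsvtav1 is pure software and runs on any CPU (12th-gen included), just
# much slower than hardware HEVC encoding.
IN=input.mkv
OUT=output.mkv

# Hardware AV1 via QSV (Arc only, per the discussion above):
HW_CMD="ffmpeg -init_hw_device qsv=hw -i $IN -c:v av1_qsv -global_quality 30 $OUT"
# Software AV1 via SVT-AV1 (higher -preset = faster, lower quality):
SW_CMD="ffmpeg -i $IN -c:v libsvtav1 -preset 8 -crf 35 $OUT"

# Echoed so the sketch doesn't require ffmpeg to be installed:
echo "HW: $HW_CMD"
echo "SW: $SW_CMD"
```

    To actually transcode, run the command itself instead of echoing it.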

  2. On 4/2/2024 at 6:31 AM, JorgeB said:

    Looks like a zfs-related crash, so possibly an issue with a zfs pool. It looks like you have two, and it could be either one; you can try reformatting one of them. It may also be a good idea to run memtest.

    Thanks, I tried 6.12.10 and also went back to 6.12.8, and the error stayed the same: it outputs those kernel errors, the containers crash (but not my HAOS VM), and the GUI sort of works but ignores any attempt to reboot/shut down.

    Either way, there are some things I don't understand, like why only some cores are at 100% even though top reports the VM and python (which I didn't install) using 9%.

    [screenshot attached]

    You did mention possible ZFS crash, but I can still access both shares through SMB and Dynamix File Manager plugin.

    In case you need to take another look, here is my diagnostics file:

    tower-diagnostics-20240419-1151.zip

     

    And memtest went fine (but now that I'm looking at it, is the memory timing really at 76?)

    [screenshot attached]

    This is odd: I created the dataset "users" and later "user_files" through the UI (just like all the others). They're recognized by zfs, but when I try copying anything to them they don't show up... And if I manually add the folder name after /nasa/, DFM gives an "invalid target" error.

    I already tried creating these datasets from the command line, restarted the server, and updated to stable 6.12, yet the bug persists.

    [screenshot attached]
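    A quick way to check whether datasets like these are actually mounted (the pool name "nasa" is from this post; the sample output below just shows what to look for):

```shell
#!/bin/sh
# On the server itself, list mount state for every dataset in the pool:
if command -v zfs >/dev/null 2>&1; then
    zfs list -r -o name,mountpoint,mounted nasa 2>/dev/null \
        || echo "(pool 'nasa' not present on this machine)"
fi

# Sample of what a healthy vs. unmounted dataset looks like (illustrative):
SAMPLE='NAME         MOUNTPOINT       MOUNTED
nasa         /mnt/nasa        yes
nasa/users   /mnt/nasa/users  no'
printf '%s\n' "$SAMPLE"
# A dataset showing MOUNTED=no won't accept files at its mountpoint;
# 'zfs mount nasa/users' (or 'zfs mount -a') usually brings it back.
```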

    For some odd reason Deluge isn't saving any config; if I restart the container/server it rolls back to defaults (any setting, e.g. Queue and Bandwidth).

    The permissions on Appdata are drwxrwxrwx, so it might be something else.

    I only use the ItConfig plugin (I needed to put it into web.conf, otherwise it disappeared) and that dark theme mod from Joelacus.

    Also, is there a way to permanently remove or add columns?
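    To properly rule out permissions, something like this might help. The appdata path and container name are the usual linuxserver defaults, so they're assumptions:

```shell
#!/bin/sh
# Config loss usually means the process inside the container can't write its
# config dir. Path and container name are assumptions (linuxserver defaults).
APPDATA=/mnt/user/appdata/deluge
echo "checking write access to $APPDATA"
if [ -d "$APPDATA" ]; then
    # Who owns the config dir on disk?
    ls -ld "$APPDATA"
    # Which UID/GID does the container actually run as (PUID/PGID on
    # linuxserver images)? The two need to be compatible.
    docker exec deluge id 2>/dev/null || echo "container 'deluge' not running"
else
    echo "$APPDATA not found on this machine"
fi
```

    If the UIDs don't match, fixing PUID/PGID on the container is usually enough.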

  5. 14 hours ago, JorgeB said:

    Make sure you back up anything important, then try exporting and re-importing the pool in read/write mode now; if it still fails to import, it's best to recreate it.

    Thanks! Unfortunately I couldn't import my pool, so I had to format all the disks, but thanks to you I was able to save the files first.

    I decided to use the GUI to create the new ZFS pool, and yeah, Unraid definitely needs some polish here. For example: a separate button to manage the pool instead of clicking the main disk; after changing the filesystem, warning the user and formatting the pool (instead of just reporting it as unmountable, as in my case); and automatically adding the datasets to shares.

    [screenshot attached]

    I'm also having this problem where nothing loads on the benchmark page. I tried version 2.10 and it had the same issue, but with the oldest image, 2.9.4, everything worked just fine.

    [screenshot attached]

    If I inspect the page, this is the error:

    [screenshot attached]

    Also, when I downgraded from 2.10.5 to 2.10, this error showed up on the right side of the interface (I didn't delete the appdata folder, though, so that might be the cause of the error):

     

    Lucee 5.3.10.97 Error (expression)

    Message: invalid call of the function listGetAt, second Argument (posNumber) is invalid, invalid string list index [4]

    Pattern: listgetat(list:string, position:number, [delimiters:string, [includeEmptyFields:boolean]]):string

    Stacktrace: The error occurred in /var/www/DispBenchmarkGraphs.cfm: line 6

    4: <CFSET SSDsExist=ListGetAt(SeriesData,2,"|")>
    5: <CFSET MaxSSDSpeed=ListGetAt(SeriesData,3,"|")>
    6: <CFSET SSDScript=ListGetAt(SeriesData,4,"|")>
    7: <CFSET SeriesData=ListDeleteAt(SeriesData,1,"|")>
    8: <CFSET SeriesData=ListDeleteAt(SeriesData,1,"|")>
     

    called from /var/www/DispOverview.cfm: line 76

    74: </CFOUTPUT>
    75:
    76: <CFINCLUDE TEMPLATE="DispBenchmarkGraphs.cfm">
    77:
    78: <CFOUTPUT>
     

    Java Stacktrace: lucee.runtime.exp.FunctionException: invalid call of the function listGetAt, second Argument (posNumber) is invalid, invalid string list index [4]
     


      at lucee.runtime.functions.list.ListGetAt.call(ListGetAt.java:46)
      at lucee.runtime.functions.list.ListGetAt.call(ListGetAt.java:40)
      at dispbenchmarkgraphs_cfm$cf.call(/DispBenchmarkGraphs.cfm:6)
      at lucee.runtime.PageContextImpl._doInclude(PageContextImpl.java:1056)
      at lucee.runtime.PageContextImpl._doInclude(PageContextImpl.java:948)
      at lucee.runtime.PageContextImpl.doInclude(PageContextImpl.java:929)
      at dispoverview_cfm$cf.call(/DispOverview.cfm:76)
      at lucee.runtime.PageContextImpl._doInclude(PageContextImpl.java:1056)
      at lucee.runtime.PageContextImpl._doInclude(PageContextImpl.java:948)
      at lucee.runtime.listener.ClassicAppListener._onRequest(ClassicAppListener.java:65)
      at lucee.runtime.listener.MixedAppListener.onRequest(MixedAppListener.java:45)
      at lucee.runtime.PageContextImpl.execute(PageContextImpl.java:2493)
      at lucee.runtime.PageContextImpl._execute(PageContextImpl.java:2478)
      at lucee.runtime.PageContextImpl.executeCFML(PageContextImpl.java:2449)
      at lucee.runtime.engine.Request.exe(Request.java:45)
      at lucee.runtime.engine.CFMLEngineImpl._service(CFMLEngineImpl.java:1216)
      at lucee.runtime.engine.CFMLEngineImpl.serviceCFML(CFMLEngineImpl.java:1162)
      at lucee.loader.engine.CFMLEngineWrapper.serviceCFML(CFMLEngineWrapper.java:97)
      at lucee.loader.servlet.CFMLServlet.service(CFMLServlet.java:51)
      at javax.servlet.http.HttpServlet.service(HttpServlet.java:764)
      at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:227)
      at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:162)
      at org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:53)
      at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:189)
      at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:162)
      at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:197)
      at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:97)
      at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:541)
      at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:135)
      at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:92)
      at org.apache.catalina.valves.AbstractAccessLogValve.invoke(AbstractAccessLogValve.java:687)
      at org.apache.catalina.valves.RemoteIpValve.invoke(RemoteIpValve.java:769)
      at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:78)
      at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:360)
      at org.apache.coyote.http11.Http11Processor.service(Http11Processor.java:399)
      at org.apache.coyote.AbstractProcessorLight.process(AbstractProcessorLight.java:65)
      at org.apache.coyote.AbstractProtocol$ConnectionHandler.process(AbstractProtocol.java:890)
      at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1789)
      at org.apache.tomcat.util.net.SocketProcessorBase.run(SocketProcessorBase.java:49)
      at org.apache.tomcat.util.threads.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1191)
      at org.apache.tomcat.util.threads.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:659)
      at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61)
      at java.base/java.lang.Thread.run(Thread.java:829)

     

  7. 5 hours ago, JorgeB said:

    That's normal, I see that because the output of that command comes with the diags.

     

    There are a few options you can try; the first is to import in read-only mode:

     

    zpool import -o readonly=on zfs

     

    Yup, it worked! I was trying with the -f and -F flags with no success. Even though Unraid still reports the pool as unmountable, I can now access all the files inside it; zpool status gives this result:
     

    pool: zfs
     state: ONLINE
    status: One or more devices has experienced an unrecoverable error.  An
            attempt was made to correct the error.  Applications are unaffected.
    action: Determine if the device needs to be replaced, and clear the errors
            using 'zpool clear' or replace the device with 'zpool replace'.
       see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-9P
      scan: scrub repaired 0B in 04:41:10 with 0 errors on Mon Feb 27 16:58:43 2023
    config:
    
            NAME        STATE     READ WRITE CKSUM
            zfs         ONLINE       0     0     0
              raidz1-0  ONLINE       0     0     0
                sdd     ONLINE       0     0     1
                sdc     ONLINE       0     0     0
                sdb     ONLINE       0     0     0
    
    errors: No known data errors

    I created this diag file after importing this pool.

    chriexpe.server-diagnostics-20230526-1603.zip
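    Side note: the status output above shows a single CKSUM error on sdd, which the action line says can be reset with `zpool clear`. Here is a small sketch that pulls the per-device checksum counts out of a captured `zpool status` (fed the output from this post, so it runs anywhere); on a live system you would pipe `zpool status zfs` into the same awk:

```shell
#!/bin/sh
# Captured 'zpool status' device table (pasted from this post):
STATUS='        NAME        STATE     READ WRITE CKSUM
        zfs         ONLINE       0     0     0
          raidz1-0  ONLINE       0     0     0
            sdd     ONLINE       0     0     1
            sdc     ONLINE       0     0     0
            sdb     ONLINE       0     0     0'

# Column 5 is CKSUM; print any row with a non-zero count, skipping the header.
BAD=$(printf '%s\n' "$STATUS" | awk '$1 != "NAME" && $5+0 > 0 {print $1}')
echo "devices with checksum errors: $BAD"
# Once the device checks out healthy, reset the counters with: zpool clear zfs
```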

  8. 6 minutes ago, JorgeB said:

     

    Looks like it. Kind of strange that before importing it shows all pool devices available and online:

    pool: zfs
         id: 15271653718495853080
      state: ONLINE
     action: The pool can be imported using its name or numeric identifier.
     config:
    
        zfs         ONLINE
          raidz1-0  ONLINE
            sdd     ONLINE
            sdc     ONLINE
            sdb     ONLINE

     

    There are a few import options you could try, but assuming there's a backup, it's best to just recreate and restore the pool.

     

    I've never made any backup of it; is there any other way to import it?

  9. 26 minutes ago, JorgeB said:

    Yes, that was my bad, the pool is available to import, not online. Stop the array, unassign all pool devices, start the array, stop the array, re-assign all pool devices, and post new diags.

    Well, it's still unmountable. Before doing this I also changed the default file system to ZFS in Disk Settings and even tried different pool names, but got the same results.

    [screenshot attached]

    chriexpe.server-diagnostics-20230526-1307.zip

  10. 12 minutes ago, JorgeB said:

    The diags are from before the array started, so I cannot see what the error is. I do see the pool is already online; the pool cannot be online before Unraid tries to import it. Export the pool using the command line:

    zpool export zfs

    then start the array and post new ones.

    Ok, I waited for the array to start (and all containers to come up) and attached the diag file.

    I ran the command before and after starting the pool and got the same response:

    cannot open 'zfs': no such pool

     

    chriexpe.server-diagnostics-20230526-1226.zip
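    A quick sanity check when `zpool export` reports "no such pool": the name has to match a pool the kernel actually has imported. The pool name "zfs" is from this thread, and the script degrades gracefully when zfs tools aren't installed:

```shell
#!/bin/sh
# Check whether the pool is in the kernel's imported list. If it isn't,
# 'zpool export' will always say "no such pool", and 'zpool import' (no
# arguments) shows what is visible on disk but not imported.
POOL=zfs
if command -v zpool >/dev/null 2>&1 \
    && zpool list -H -o name 2>/dev/null | grep -qx "$POOL"; then
    RESULT="imported"
else
    RESULT="not imported (or zfs tools missing) - run 'zpool import' to see importable pools"
fi
echo "$POOL: $RESULT"
```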

    Out of nowhere the WebUI started crashing: the page would always go blank, only coming back for a few seconds after refreshing many times, and I couldn't get the diagnostics because the page blanked in the middle of the process (over SSH it reported that there was no space left). One thing I find odd is that the Docker containers kept working just fine.

    The only way I found to get it working again was deleting disks.cfg from /boot/config and rebooting (which is how I was able to generate the diagnostics file attached below), but after some time the problem always comes back.

    So I'm pretty sure I've lost all my files. Before formatting them, is there any other useful data I can attach here to prevent this from happening to me or other users? Before these problems began I was trying to get FileRun (it's like Nextcloud) working with my ZFS volume (/mnt/zfs), as I had it working before reinstalling that container, but FileRun couldn't recognize the files there, and at some point I tried running the container in privileged mode; I guess this might be why my ZFS pool got corrupted.

    Also, I had 5 datasets in my ZFS pool, but now only 3 are left, with just a few files (and those were created by other containers).

    This ZFS pool was created with the ZFS Master and Unassigned Devices plugins (following SpaceInvader One's video), and later, on 6.12, I just added it as a pool and it was working just fine until now.

    [screenshots attached]

     

    chriexpe.server-diagnostics-20230526-1001.zip

  12. On 4/28/2023 at 2:15 AM, EDACerton said:

    Can you access br1 via the subnet router? I can see that you have a shim interface for br1 but not for br0.

    Nope, only the Unraid UI. Though I'll be honest: after that I removed the card (br1) and reinstalled Tailscale, and then it actually worked! Same settings as before: Exit Node and Subnet in the same IP range as my network.

    Also, I was almost blaming Tailscale for crashing my Omada container daily, but apparently it was this card too, lol.

     

    I've never used Tailscale before, but the setup was pretty straightforward. After searching a bit I realized that, in order to use it as if I were on my local network, I needed to set my Unraid box as an Exit Node + Subnet router (on the same IP range as my network). With this I was able to use my Pi-hole DNS and access the Unraid WebUI through its IP, but... I couldn't access any other br0 docker. Is there any special setup I need to do?
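    From what I've read, the usual culprit is macvlan isolation: the host (where Tailscale runs) can't talk to br0 containers unless a shim interface exists, which is what Unraid's "Host access to custom networks" Docker setting creates for you. Here is a manual sketch of what that shim amounts to; the interface names and addresses are hypothetical, and since the real commands need root they are only printed here:

```shell
#!/bin/sh
# Hypothetical names/addresses. Unraid's "Host access to custom networks"
# setting builds an equivalent shim automatically; this just shows the idea.
PARENT=br0
SHIM=shim-br0
HOST_IP=192.168.1.250/32        # spare LAN address for the shim (assumption)
CONTAINER_IP=192.168.1.201/32   # one br0 container to reach (assumption)

CMDS="ip link add $SHIM link $PARENT type macvlan mode bridge
ip link set $SHIM up
ip addr add $HOST_IP dev $SHIM
ip route add $CONTAINER_IP dev $SHIM"

# Printed rather than executed (the real thing needs root on the host):
printf '%s\n' "$CMDS"
```

    The per-container host route at the end is the part that lets traffic from the host (and thus from Tailscale) reach a macvlan container.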

    Quick question: I'm planning on finally upgrading my server and using it as my desktop too (through KVM). After looking at many parts and prices, I realized that a 13700K + Z690 UD DDR4 combo and a 5900X + X570S Tomahawk combo cost basically the same, so it sounds like a no-brainer to just go for the Intel one, as it's way faster in single core and more recent, right? But one thing bothers me: since I'll be pinning specific CPU cores for the VM, dockers (auth, file manager, NVR), and a Minecraft server (which easily chomps through my i7 4770), would this dance of P and E cores actually end up hurting performance? Whereas the 5900X gives me 12c/24t of "homogeneous" cores (2x6 CCX).

    PS: Yes, I'll use only DDR4, and energy efficiency isn't important.
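    For what it's worth, here is a sketch of how the pinning could split P and E cores. The core ranges are hypothetical for a 13700K (logical CPUs 0-15 are typically the 8 hyperthreaded P-cores, 16-23 the 8 E-cores; verify with `lscpu -e`, where P-cores list two logical CPUs per physical core), and the image/jar names are placeholders:

```shell
#!/bin/sh
# Hypothetical 13700K layout: 0-15 = P-cores (with HT), 16-23 = E-cores.
P_CORES="0-15"
E_CORES="16-23"

# Show the actual CPU-to-core mapping when available:
if command -v lscpu >/dev/null 2>&1; then
    lscpu -e | head -5
fi

# Background containers on E-cores, latency-sensitive work on P-cores
# (echoed rather than executed; names are placeholders):
DOCKER_CMD="docker run --cpuset-cpus=$E_CORES my-nvr-image"
MC_CMD="taskset -c $P_CORES java -Xmx8G -jar server.jar"
echo "$DOCKER_CMD"
echo "$MC_CMD"
```

    With that split, the P/E "dance" mostly stops mattering, since the scheduler never gets to move the pinned workloads onto E-cores.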
