Everything posted by Chriexpe

  1. My bad, I thought it enabled it too, but now I remember I also have the Intel Graphics SR-IOV plugin, which replaces it with the i915 driver. So only Intel Arc is currently supported? I thought SVT-AV1 encoding worked on 12th gen and newer Intel CPUs too, but I can't find anything about it online (or is it something like 5x slower than HW HEVC encoding?). I did use HW QSV H.265 to transcode all my media on Immich, but Firefox can't play it (at least on Linux); with AV1 it should work fine.
  2. Does the Intel GPU TOP plugin support SVT-AV1 encoding? My CPU is an i5-12500. I get this error when transcoding to AV1 on Immich (with QSV it's the same):

     An error occurred while configuring transcoding options: UnsupportedMediaTypeException: VAAPI acceleration does not support codec 'AV1'. Supported codecs: h264,hevc,vp9
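     For anyone else hitting this, here's roughly how I'd check what the iGPU actually exposes (a sketch; assumes vainfo is installed and the ffmpeg build is the one Immich uses):

         # List what the VA-API driver advertises; on Alder Lake UHD graphics
         # AV1 usually shows decode-only entrypoints, hardware encode needs Arc
         vainfo | grep -i av1
         # See which AV1 encoders this ffmpeg build has: libsvtav1 is CPU-only
         # software encoding, av1_qsv / av1_vaapi would be the hardware paths
         ffmpeg -hide_banner -encoders | grep -i av1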
  3. Thanks, I did try 6.12.10 and also went back to 6.12.8, and the error was still "the same": it would output those kernel errors, containers crash (but not my HAOS VM), and the GUI kinda works but ignores any attempt to reboot/shutdown. Either way, there are some things I don't get, like why only some cores are at 100% even though top reports the VM and python (which I didn't install) using 9%. You did mention a possible ZFS crash, but I can still access both shares through SMB and the Dynamix File Manager plugin. In case you need to take another look, here is my diagnostics file: tower-diagnostics-20240419-1151.zip. And memtest went fine (but now that I'm looking at it, wait, is the memory timing at 76?)
  4. Yeah, but this problem always comes back after some time, and randomly some containers break and I can't even restart them due to a "server error". But what about these kernel errors (kernel error.txt)? What are the odds of it being related to my motherboard?
  5. This started right after downloading and installing this update, where I was asked to reboot the server to complete the install, but no matter what I tried, it just ignored the command from both the UI and the console. Curiously, after trying to reboot/shutdown this happens with the CPU, while all Dockers and the VM keep working fine. This is the diagnostics file: tower-diagnostics-20240328-2011.zip
  6. How do you guys handle database containers? I've set it to back everything up once a day, but I have a feeling it's somehow breaking some containers (in the sense of stopping/starting them all the time).
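     What I'm considering instead of the stop/backup/start cycle is dumping the databases live (a sketch; assumes a Postgres container named postgres and a MariaDB one named mariadb, adjust names, credentials and paths):

         # Postgres: consistent dump while the container keeps running
         docker exec postgres pg_dumpall -U postgres > /mnt/user/backups/postgres-$(date +%F).sql
         # MariaDB: --single-transaction gives a consistent InnoDB snapshot without stopping anything
         docker exec mariadb sh -c 'mysqldump --single-transaction --all-databases -u root -p"$MYSQL_ROOT_PASSWORD"' > /mnt/user/backups/mariadb-$(date +%F).sql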
  7. This is odd: I created the dataset "users" and later "user_files" through the UI (just like all the others); they're recognized by zfs, but when I try copying anything to them they don't show up... And if I manually add the folder name after /nasa/, DFM gives an "invalid target" error. I already tried creating these datasets with commands, restarting the server, and updating to stable 6.12, and yet this bug persists.
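     In case it helps, this is what I'd check next (assuming the pool is the one named nasa): whether ZFS actually mounted the new datasets and where it thinks they live:

         # show every dataset in the pool with its mountpoint and mount state
         zfs list -r -o name,mountpoint,mounted nasa
         # if "mounted" says no for the new datasets, try mounting explicitly
         zfs mount nasa/users
         zfs mount nasa/user_files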
  8. For some odd reason Deluge isn't saving any config; if I restart the docker/server it rolls back to default (any setting, e.g. Queue and Bandwidth). The permissions on appdata are drwxrwxrwx, so it might be something else. I only use the ltConfig plugin (I needed to put it into web.conf, otherwise it disappeared) and that dark theme mod from Joelacus. Also, is there a way to permanently remove/add columns?
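     One thing I still want to rule out: from what I've read, Deluge mostly flushes its config on a clean shutdown, so if the container gets killed instead of stopped gracefully the changes may never hit disk. Something like this should show it (a sketch; assumes the container is named deluge and appdata is at /mnt/user/appdata/deluge):

         # check who owns the actual config files, not just the directory
         ls -ln /mnt/user/appdata/deluge/core.conf /mnt/user/appdata/deluge/web.conf
         # stop gracefully with a generous timeout, then see if the mtime updated
         docker stop -t 60 deluge
         ls -l /mnt/user/appdata/deluge/core.conf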
  9. Thanks! Unfortunately I couldn't import my pool, so I needed to format all disks, but thanks to you I was able to save the files. I decided to use the GUI to create this new ZFS pool and yeah, Unraid definitely needs some polishing with it, for example: a separate button to manage the pool instead of clicking on the main disk; warning the user and formatting the pool after the filesystem change (instead of just saying the pool is unmountable, like in my case); and automatically adding the datasets to shares.
  10. I'm also having this problem where nothing loads on the benchmark page. I tried version 2.10 and it also had this, but when I used the oldest image, 2.9.4, everything worked just fine. Also, when I downgraded from 2.10.5 to 2.10 the error below showed up on the right side of the interface (but I didn't delete the appdata folder, so that might be the cause). If I inspect the page, this is the error:

      Lucee 5.3.10.97 Error (expression)
      Message: invalid call of the function listGetAt, second Argument (posNumber) is invalid, invalid string list index [4]
      Pattern: listgetat(list:string, position:number, [delimiters:string, [includeEmptyFields:boolean]]):string
      Stacktrace: The Error Occurred in /var/www/DispBenchmarkGraphs.cfm: line 6
          4: <CFSET SSDsExist=ListGetAt(SeriesData,2,"|")>
          5: <CFSET MaxSSDSpeed=ListGetAt(SeriesData,3,"|")>
          6: <CFSET SSDScript=ListGetAt(SeriesData,4,"|")>
          7: <CFSET SeriesData=ListDeleteAt(SeriesData,1,"|")>
          8: <CFSET SeriesData=ListDeleteAt(SeriesData,1,"|")>
      called from /var/www/DispOverview.cfm: line 76
          74: </CFOUTPUT>
          75:
          76: <CFINCLUDE TEMPLATE="DispBenchmarkGraphs.cfm">
          77:
          78: <CFOUTPUT>
      Java Stacktrace: lucee.runtime.exp.FunctionException: invalid call of the function listGetAt, second Argument (posNumber) is invalid, invalid string list index [4]
  11. Yup, it did work. I was trying with the -f and -F flags but had no success; now, even though Unraid still reports the pool as unmountable, I can access all files inside it. zpool status gives this result:

        pool: zfs
       state: ONLINE
      status: One or more devices has experienced an unrecoverable error. An attempt was made to correct the error. Applications are unaffected.
      action: Determine if the device needs to be replaced, and clear the errors using 'zpool clear' or replace the device with 'zpool replace'.
         see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-9P
        scan: scrub repaired 0B in 04:41:10 with 0 errors on Mon Feb 27 16:58:43 2023
      config:

              NAME        STATE     READ WRITE CKSUM
              zfs         ONLINE       0     0     0
                raidz1-0  ONLINE       0     0     0
                  sdd     ONLINE       0     0     1
                  sdc     ONLINE       0     0     0
                  sdb     ONLINE       0     0     0

      errors: No known data errors

      I created this diag file after importing this pool. chriexpe.server-diagnostics-20230526-1603.zip
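      Since it's just one checksum error on sdd that ZFS already corrected, my plan (following what the status output itself suggests) is to clear the counter and re-scrub to confirm it stays clean:

          zpool clear zfs
          zpool scrub zfs
          zpool status -v zfs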
  12. Also, it's odd that if I type just zpool import, it outputs exactly what you've sent:

      root@chriexpe:~# zpool import
         pool: zfs
           id: 15271653718495853080
        state: ONLINE
       action: The pool can be imported using its name or numeric identifier.
       config:

              zfs         ONLINE
                raidz1-0  ONLINE
                  sdd     ONLINE
                  sdc     ONLINE
                  sdb     ONLINE
  13. I've never made a backup of it; is there any other way to import it?
  14. RIP:

      root@chriexpe:~# zpool import zfs
      cannot import 'zfs': insufficient replicas
              Destroy and re-create the pool from a backup source.
  15. Well, it's still unmountable. Before doing this I also changed the default file system to ZFS in Disk Settings and even tried different pool names, but got the same results. chriexpe.server-diagnostics-20230526-1307.zip
  16. Ok, I waited for the array to start (and all containers to come up) and attached the diag file. I ran the command before and after starting the pool and got the same response: cannot open 'zfs': no such pool. chriexpe.server-diagnostics-20230526-1226.zip
  17. Also, on the Main page my SSD (the only array device) and the pool keep constantly appearing and disappearing lol (after this the WebGUI becomes inaccessible, it just doesn't load).
  18. Out of nowhere the WebUI started crashing: the page would always go blank, only coming back for a few seconds after refreshing it many times, and I couldn't get the diagnostics because the page went blank in the middle of the process (through SSH it reported that there was no space left). One thing I found odd is that the Dockers kept working just fine. The only way I found to get it working again was deleting disks.cfg from /boot/config and rebooting (that's how I was able to generate the diagnostics file attached below), but after some time the problem always comes back.

      So I'm pretty sure I've lost all my files; before formatting them, is there any other useful data I can attach here to prevent this from happening to me or other users? Before these problems began I was trying to get FileRun (it's like Nextcloud) working with my ZFS volume (/mnt/zfs), as I had it working before reinstalling that docker, but FileRun couldn't recognize the files there, and at some point I tried running the docker as privileged; I guess this might be why my ZFS pool got corrupted. Also, I had 5 datasets in my ZFS pool, but now there are only 3 left, with just a few files (and those were created by other dockers). This ZFS pool was created with the ZFS Master and Unassigned Devices plugins (following SpaceInvader One's video), and later, on 6.12, I just straight up added it as a pool and it was working just fine until now. chriexpe.server-diagnostics-20230526-1001.zip
  19. Nope, only the Unraid UI. Though I'll be honest, after that I removed the card (BR1) and reinstalled Tailscale, and after that it actually worked! Same settings as before: Exit Node and Subnet on the same IP range as my network. Also, I was almost blaming Tailscale for crashing my Omada container daily, but apparently it was this card too lol.
  20. chriexpe.server-diagnostics-20230426-1338.zip
  21. I've never used Tailscale before, but the setup was pretty straightforward, and after searching a bit I realized that in order to use it as if I were on my local network I needed to set my Unraid box as an Exit Node + Subnet router (on the same IP range as my network). With this I was able to use my Pi-hole DNS and access the Unraid WebUI through its IP, but... I couldn't access any other br0 docker. Is there any special setup I need to do?
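      For reference, this is roughly what I ran on the Unraid side (a sketch; assumes a LAN of 192.168.1.0/24, adjust to yours), with the routes then approved in the Tailscale admin console:

          tailscale up --advertise-routes=192.168.1.0/24 --advertise-exit-node

      From what I've read since, the br0 part may not be Tailscale's fault at all: Docker's macvlan driver isolates containers from the host, so traffic arriving via Tailscale running on the host can't reach br0 containers directly.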
  22. Quick question: I'm planning on finally upgrading my server and using it as my desktop too (through KVM). After comparing many parts and prices, I realized that the combos 13700K + Z690 UD DDR4 and 5900X + X570S Tomahawk cost basically the same, so it sounds like a no-brainer choice to just go for the Intel one, as it's way faster in single core and more recent, right? But one thing bothers me: since I'll be pinning specific CPU cores for the VM, dockers (auth, file manager, NVR) and a Minecraft server (it easily chomps my i7 4770), would this dance of P and E cores here and there actually end up hurting performance? Whereas I could instead go for the 5900X, which has 12c/24t but "homogeneous" cores (2x6 CCX). PS: yes, I'll use only DDR4, and energy efficiency isn't important.
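      If I do go Intel, my plan would be to pin the latency-sensitive stuff (the VM and Minecraft) to P-cores only and leave the E-cores to the background dockers; telling them apart looks easy enough (a sketch):

          lscpu --all --extended
          # On hybrid chips the P-cores show two logical CPUs per CORE id
          # (hyper-threading) and a higher MAXMHZ; E-cores show one logical
          # CPU per core and no sibling thread.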
  23. Long story short, I want to force one docker (NVR) to use another NIC so it doesn't overload the main one on the motherboard, but I couldn't get it working unless I created a new IP range. Does anyone here know if I can assign a custom network to it?
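      What I was trying to get at, in case it clarifies the question, was a macvlan network bound to the second NIC but inside my existing subnet (a sketch; assumes the second NIC is eth1, a 192.168.1.0/24 LAN, and a small container range reserved outside the router's DHCP scope):

          docker network create -d macvlan \
            --subnet=192.168.1.0/24 --gateway=192.168.1.1 \
            --ip-range=192.168.1.192/28 \
            -o parent=eth1 nvr-net
          docker run -d --name nvr --network nvr-net --ip 192.168.1.200 <image>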