Chriexpe

Members
  • Posts: 28
  • Joined
  • Last visited

Chriexpe's Achievements

Noob (1/14)

Reputation: 0

  1. My bad, I thought it enabled that too, but now I remember I also have the Intel Graphics SR-IOV plugin, which replaces it with the i915 driver. So only Intel Arc is currently supported? I thought SVT-AV1 encoding worked on 12th-gen and newer Intel CPUs too, but I can't find anything about it online (or is it something like 5x slower than hardware HEVC encoding?). I did use hardware QSV H.265 to transcode all my media on Immich, but Firefox can't play it (at least on Linux); with AV1 it should work fine.
  2. Does the Intel GPU TOP plugin support SVT-AV1 encoding? My CPU is an i5-12500. I get this error when transcoding to AV1 on Immich (with QSV it's the same): "An error occurred while configuring transcoding options: UnsupportedMediaTypeException: VAAPI acceleration does not support codec 'AV1'. Supported codecs: h264,hevc,vp9". (A quick capability check is sketched after this post list.)
  3. Thanks, I did try 6.12.10 and also went back to 6.12.8, and the error was still "the same": it outputs those kernel errors and containers crash (but not my HAOS VM), and the GUI kind of works but ignores any attempt to reboot or shut down. Either way, there are some things I don't understand, like why only some cores are at 100% even though top reports the VM and python (which I didn't install) using 9%. You did mention a possible ZFS crash, but I can still access both shares through SMB and the Dynamix File Manager plugin. In case you need to take another look, here is my diagnostics file: tower-diagnostics-20240419-1151.zip. And memtest went fine (but now that I'm looking at it, wait, is the memory timing at 76?). (A generic kernel-log check is sketched after this post list.)
  4. Yeah, but this problem always comes back after some time, and random containers break and I can't even restart them because of a "server error". But what about the attached kernel error.txt? What are the odds of it being related to my motherboard?
  5. This started right after downloading and installing this update: I was asked to reboot the server to complete the install, but no matter what I tried, it ignored the command from both the UI and the console. Curiously, after trying to reboot/shut down, this happens with the CPU, while all Dockers and the VM keep working fine. This is the diagnostics file: tower-diagnostics-20240328-2011.zip. (A sketch of how to check what blocks a shutdown is after this post list.)
  6. How do you guys handle database containers? I've set it to back up everything once a day, but I have a feeling that this is somehow breaking some containers (in the sense that it stops/starts them all the time). (See the dump-from-a-running-container sketch after this post list.)
  7. This is odd: I created the dataset "users" and later "user_files" through the UI (just like all the others); they're recognized by zfs, but when I try copying anything to them they don't show up... And if I manually add the folder name after /nasa/, DFM gives an "invalid target" error. I already tried creating these datasets with commands, restarted the server, and updated to stable 6.12, yet this bug persists. (A mountpoint check is sketched after this post list.)
  8. For some odd reason Deluge isn't saving any config; if I restart the Docker container or the server, it rolls back to the defaults (any setting, e.g. Queue and Bandwidth). The permissions on appdata are drwxrwxrwx, so it might be something else. I only use the ItConfig plugin (I needed to put it into web.conf, otherwise it disappeared) and that dark theme mod from Joelacus. Also, is there a way to permanently remove/add columns? (An ownership check is sketched after this post list.)
  9. Thanks! Unfortunately I couldn't import my pool, so I had to format all the disks, but thanks to you I was able to save the files. I decided to use the GUI to create this new ZFS pool and, yeah, Unraid definitely needs some polishing here. For example: a separate button to manage the pool instead of clicking on the main disk; warning the user and formatting the pool after the filesystem is changed and the ZFS pool is created (instead of just saying the pool is unmountable, as in my case); and automatically adding the datasets to shares.
  10. I'm also having this problem where nothing loads on the benchmark page. I tried version 2.10 and it had the same issue, but when I used the oldest image, 2.9.4, everything worked just fine. If I inspect the page, this is the error. Also, when I downgraded from 2.10.5 to 2.10, the error below showed up on the right side of the interface (but I didn't delete the appdata folder, so this might be the cause of the error).

      Lucee 5.3.10.97 Error (expression)
      Message: invalid call of the function listGetAt, second Argument (posNumber) is invalid, invalid string list index [4]
      Pattern: listgetat(list:string, position:number, [delimiters:string, [includeEmptyFields:boolean]]):string
      Stacktrace: The Error Occurred in /var/www/DispBenchmarkGraphs.cfm: line 6
          4: <CFSET SSDsExist=ListGetAt(SeriesData,2,"|")>
          5: <CFSET MaxSSDSpeed=ListGetAt(SeriesData,3,"|")>
          6: <CFSET SSDScript=ListGetAt(SeriesData,4,"|")>
          7: <CFSET SeriesData=ListDeleteAt(SeriesData,1,"|")>
          8: <CFSET SeriesData=ListDeleteAt(SeriesData,1,"|")>
      called from /var/www/DispOverview.cfm: line 76
          74: </CFOUTPUT>
          75:
          76: <CFINCLUDE TEMPLATE="DispBenchmarkGraphs.cfm">
          77:
          78: <CFOUTPUT>
      Java Stacktrace: lucee.runtime.exp.FunctionException: invalid call of the function listGetAt, second Argument (posNumber) is invalid, invalid string list index [4]
  11. Yup, it did work. I was trying with the -f and -F flags but had no success; now, even though Unraid still reports the pool as unmountable, I can access all the files inside it. zpool status gives this result:

        pool: zfs
       state: ONLINE
      status: One or more devices has experienced an unrecoverable error. An attempt was made to correct the error. Applications are unaffected.
      action: Determine if the device needs to be replaced, and clear the errors using 'zpool clear' or replace the device with 'zpool replace'.
         see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-9P
        scan: scrub repaired 0B in 04:41:10 with 0 errors on Mon Feb 27 16:58:43 2023
      config:

              NAME        STATE     READ WRITE CKSUM
              zfs         ONLINE       0     0     0
                raidz1-0  ONLINE       0     0     0
                  sdd     ONLINE       0     0     1
                  sdc     ONLINE       0     0     0
                  sdb     ONLINE       0     0     0

      errors: No known data errors

      I created this diag file after importing this pool: chriexpe.server-diagnostics-20230526-1603.zip (A clear-and-scrub follow-up is sketched after this post list.)
  12. Also, it's odd that if I type just zpool import, it outputs exactly what you've sent:

      root@chriexpe:~# zpool import
         pool: zfs
           id: 15271653718495853080
        state: ONLINE
       action: The pool can be imported using its name or numeric identifier.
       config:

              zfs         ONLINE
                raidz1-0  ONLINE
                  sdd     ONLINE
                  sdc     ONLINE
                  sdb     ONLINE
  13. I've never made a backup of it; is there any other way to import it? (Some generic import fallbacks are sketched after this post list.)
  14. RIP.

      root@chriexpe:~# zpool import zfs
      cannot import 'zfs': insufficient replicas
              Destroy and re-create the pool from
              a backup source.
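For posts 1 and 2 (the AV1 questions), a minimal sketch of how the machine's actual AV1 capabilities could be checked, assuming the iGPU is exposed at /dev/dri/renderD128 and that vainfo and a recent ffmpeg are available; sample.mp4 and the output names are placeholders:

    # List the VA-API profiles the driver exposes; AV1 shows up as VAProfileAV1*,
    # and an encode entrypoint next to it is what hardware AV1 encoding would need.
    vainfo --display drm --device /dev/dri/renderD128 | grep -i av1

    # Try a short hardware AV1 encode via QSV; this only succeeds on GPUs with an
    # AV1 encoder (e.g. Intel Arc), while 12th-gen iGPUs only decode AV1.
    ffmpeg -hide_banner -i sample.mp4 -t 5 -c:v av1_qsv -b:v 4M qsv_av1_test.mkv

    # Software fallback: SVT-AV1 runs on the CPU, so it works on any machine,
    # just noticeably slower than a hardware HEVC/AV1 encode.
    ffmpeg -hide_banner -i sample.mp4 -t 5 -c:v libsvtav1 -preset 8 -crf 35 svtav1_test.mkv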
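For post 3, a minimal sketch of a generic kernel-log check to run before the GUI stops responding; the grep pattern is just a starting point, not an Unraid-specific diagnostic:

    # Recent kernel messages with readable timestamps, filtered to the lines that
    # usually accompany an oops, a hung task, or a crashing driver.
    dmesg -T | grep -iE "bug:|oops|call trace|blocked for more than|panic"

    # The processes actually burning CPU, sorted, to compare against the
    # per-core 100% readings.
    ps -eo pid,comm,%cpu --sort=-%cpu | head -n 15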
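For post 5, a minimal sketch of how to see what is blocking a shutdown, assuming the stall is an unmount that cannot complete; fuser ships with most distributions, but its presence on a given Unraid box is an assumption:

    # List the processes that still hold files open on the user shares; anything
    # here keeps /mnt/user busy and can stall the array stop and the reboot.
    fuser -vm /mnt/user

    # Follow the syslog while retrying the reboot to see where it stops.
    tail -f /var/log/syslog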
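For post 6, a minimal sketch of the usual way to avoid stopping database containers for backups: dump the database from inside the running container. The container names, users, database names, and backup path below are placeholders:

    # PostgreSQL: pg_dump takes a consistent snapshot while the container keeps running.
    docker exec -t postgres pg_dump -U immich -d immich \
        > /mnt/user/backups/immich-$(date +%F).sql

    # MariaDB/MySQL equivalent (replace the password placeholder).
    docker exec -t mariadb mysqldump -u root -p'CHANGE_ME' --all-databases \
        > /mnt/user/backups/mariadb-$(date +%F).sql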
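For post 7, a minimal sketch of a mountpoint check for the new datasets; the pool name nasa comes from the post, but the exact paths are assumptions:

    # Every dataset in the pool, with where it should mount and whether it did.
    zfs list -r -o name,mountpoint,mounted nasa

    # If users/user_files report mounted=no, mount them explicitly ...
    zfs mount nasa/users
    zfs mount nasa/user_files

    # ... or pin an explicit mountpoint so the share and DFM can find them.
    zfs set mountpoint=/mnt/nasa/users nasa/users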
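For post 8, a minimal sketch of an ownership check on the Deluge appdata; drwxrwxrwx only covers the mode bits, not the owner, and the container name and paths below are assumptions based on a typical Unraid setup:

    # Numeric owner/group of the config files (Unraid containers usually run as 99:100).
    ls -ln /mnt/user/appdata/deluge

    # The UID/GID the container actually runs as, for comparison.
    docker exec deluge id

    # Check whether core.conf is actually rewritten after changing a setting.
    docker exec deluge ls -l /config/core.conf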
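For post 11, a minimal sketch of the follow-up that the zpool status output itself recommends: clear the recorded checksum error on sdd, then scrub to confirm it does not return:

    # The affected data was already repaired; this just clears the error counter.
    zpool clear zfs

    # Re-read and verify every block; a repeat error points at the disk or cabling.
    zpool scrub zfs
    zpool status zfs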
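For posts 13 and 14, a minimal sketch of generic OpenZFS import fallbacks for when a plain import reports insufficient replicas even though all member disks are present; none of these is a guaranteed fix:

    # Scan by stable device ids, in case sdb/sdc/sdd were renumbered after a reboot.
    zpool import -d /dev/disk/by-id

    # Import read-only first, so nothing is written while the data is copied off.
    zpool import -f -o readonly=on zfs

    # Dry-run the recovery import (rewind to the last consistent transaction group) ...
    zpool import -f -Fn zfs

    # ... and only then do it for real.
    zpool import -f -F zfs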