Everything posted by duelistjp

  1. I'm having trouble every week or two where I start getting a ton of stuff in the queue showing red, saying "release wasn't grabbed by Radarr, skipping." Any ideas why? The only thing I've found that seems to help is removing it from the queue and blacklisting the release. It is annoying, as I also have to handle the already-downloaded files, and if I don't catch it immediately the downloads SSD gets completely filled.
  2. I'm having trouble every week or two where I start getting a ton of stuff in the queue showing red, saying "release wasn't grabbed by Sonarr, skipping." Any ideas why? The only thing I've found that seems to help is removing it from the queue and blacklisting the release. It is annoying, as I also have to handle the already-downloaded files, and if I don't catch it immediately the downloads SSD gets completely filled.
  3. I'm trying to set this up. I can access it via the noVNC web page. I go to Steam Link on a client and I can see the computer; the network test runs fine, and it shows the computer, my controller, etc. When I click Start Streaming it goes to a blue screen showing "connecting to steamheadless" that never seems to complete. On some clients it seems to go on indefinitely, and on some it seems to complete and goes to a black screen with a spinning wheel for a bit before it seems to crash and restart the app. Also, what needs to be done to give it access to my Intel iGPU, if anything? (See the /dev/dri note after this list.)
  4. Having trouble with this. I installed the Docker container and it hangs, eventually timing out on the Scan Controllers page. Here is the part of the error message that looked relevant, without the entire stack trace (see the parted sketch after this list):
     Lucee 5.3.10.120 Error (application)
     Message: timeout [90000 ms] expired while executing [/sbin/parted -m /dev/sds unit B print free]
     Stacktrace: The Error Occurred in /var/www/ScanControllers.cfm: line 1964
     1962: <CFFILE action="write" file="#PersistDir#/#exe()#_parted_#DriveID#_exec.txt" output="/sbin/parted -m /dev/#DriveID# unit B print free" addnewline="NO" mode="666">
     1963: <CFIF URL.Debug NEQ "FOOBAR"><cfmodule template="cf_flushfs.cfm"></CFIF>
     1964: <cfexecute name="/sbin/parted" arguments="-m /dev/#DriveID# unit B print free" variable="PartInfo" timeout="90" />
     1965: <CFFILE action="write" file="#PersistDir#/#exe()#_parted_#DriveID#.txt" output="#PartInfo#" addnewline="NO" mode="666">
     1966: <CFSET TotalPartitions=0>
     called from /var/www/ScanControllers.cfm: line 1853
     1851: </CFIF>
     1852: </CFLOOP>
     1853: </CFLOOP>
     1854:
     1855: <!--- Admin drive creation --->
  5. I am trying to install iotop, and when I click Apply it opens a white window labeled Package Manager but nothing shows up.
  6. The Backblaze personal backup container directs here if you click Support. I try to start it but the web UI does not respond. The log says:
     [s6-init] making user provided files available at /var/run/s6/etc...exited 0.
     [s6-init] ensuring user provided files have correct perms...exited 0.
     [fix-attrs.d] applying ownership & permissions fixes...
     [fix-attrs.d] done.
     [cont-init.d] executing container initialization scripts...
     [cont-init.d] 00-app-niceness.sh: executing...
     [cont-init.d] 00-app-niceness.sh: exited 0.
     [cont-init.d] 00-app-script.sh: executing...
     Any idea what is wrong?
  7. Alright. I had moved a few drives internally to set things up before converting the old server into a JBOD, to minimize downtime. I moved them back to the SAS backplane tonight; hopefully that error will stop.
  8. Here are the diagnostics: rosewill-diagnostics-20230314-1018.zip
  9. This keeps showing up in my log:
     Mar 13 22:56:40 Rosewill kernel: ata5: COMRESET failed (errno=-16)
     Mar 13 22:56:41 Rosewill kernel: ata5: SATA link up 1.5 Gbps (SStatus 113 SControl 310)
     Mar 13 22:56:42 Rosewill kernel: ata5.00: configured for UDMA/33
     Mar 13 22:56:50 Rosewill kernel: ata5: link is slow to respond, please be patient (ready=0)
     Can anyone help me understand what it means and whether there is any action I need to take in response to it?
  10. I've decided to put up with some inconveniences and not do it, but I have strongly considered virtualizing Unraid in the past, because VMs and containers have to stop when the array is down and a good number of them don't really need the full array. I strongly considered virtualization to handle the main storage array that I don't want to manage with ZFS, and then have something else run the VMs and containers so they don't have to go down every time I restart the array.
  11. Those drives finished fine, and so did a third small one I cleared for this. I ran the preclear a second time in a JBOD attached to the server rather than internally on a motherboard SATA port, and it failed within an hour of starting the post-read. I am starting the RMA process, but I have also started running Memtest from PassMark, about 4 hours in now, and will let it run until after work tomorrow. My gut says the RAM is fine.
  12. Makes sense. It certainly wasn't intentional, but I'm glad the terminal still works for mounting these, as the network is slow for transferring 40 TB of data.
  13. Well, I figured out how to mount it using fstab and am currently transferring data off them. I don't know how I did that. Is there a reason UD doesn't support mounting things that way, though? Is it just uncommon enough not to be worth doing, or is there a reason mounting such drives is bad?
  14. I was able to use the terminal to mount them if I told it to mount /dev/sdn; there is no /dev/sdn1, which is what was odd. I don't know how it happened, but it works in Unraid if I mount via the terminal. I will copy them onto the array and then clear them, so I guess I can mark this as solved. No idea why the drives were that way, though.
  15. I have 3 of the 20+ disks that I am trying to move from my old server to Unraid that only show a format option in Unraid. In the old computer they mount fine, but I notice they don't seem to have a /dev/sdX1, just a /dev/sdX. I can read the files, so they are still there. How can I get these mounted in Unraid so I can transfer data off them before I wipe them and add them to the array? (See the whole-disk mount sketch after this list.)
  16. I'm surprised Unraid does not have enough info about your array to tell the original size of the replaced drive, or that a drive is simply being added, and only check that amount of the parity. It is not a huge deal, however. Thanks for taking the time to answer me.
  17. This server was just built. I did run mprime blend for 24 hours, and this is DDR5 ECC memory; 24 hours of that test showed no correctable errors on the RAM. I don't see how to view ECC errors in Unraid yet, so if you know how to check that I would love to know as well. Is memory still likely to be the issue, given it recently went through an mprime test that pushes the memory as well as the CPU without issue? I have 2 other drives that will finish preclear within a few hours and I can run Memtest then. Do I use the one that is bundled with Unraid, or should I download PassMark's?
  18. I don't understand why removing a drive from the array necessitates rebuilding the entire parity, though. If I have an 18 TB parity drive and I remove a 1 TB drive from the array, why does that affect the parity drive beyond the first terabyte? My understanding of parity is fairly limited when it comes to Galois fields, but XOR parity shouldn't require rebuilding the entire parity in such a case, according to what I know, and I'm pretty sure dual parity should work similarly. (See the XOR parity example after this list.)
  19. Depends how many sticks. If you have more than 2, you are better off with a binary search, testing half at a time. It makes a pretty big difference if you are running server hardware with 16 sticks of RAM. (See the halving example after this list.)
  20. New to Unraid. I was testing out a parity drive for the server and it failed in the pre-read stage. It said to check the log for more details, and it did, but the log doesn't seem to indicate what the failure actually was. Can anyone help me interpret this? Someone on another forum suggested I should rerun with a different SATA cable, but I was hoping to figure out what went wrong. The drive is brand new and can be returned easily; I just want to be fairly sure it actually is the drive.
      Mar 11 08:24:13 preclear_disk_ZR5CLZVV_14183: Pre-Read: dd output: 5353465+0 records out
      Mar 11 08:24:13 preclear_disk_ZR5CLZVV_14183: Pre-Read: dd output: 11227029831680 bytes (11 TB, 10 TiB) copied, 58447.6 s, 192 MB/s
      Mar 11 08:24:13 preclear_disk_ZR5CLZVV_14183: Pre-Read: dd output: 5354224+0 records in
      Mar 11 08:24:13 preclear_disk_ZR5CLZVV_14183: Pre-Read: dd output: 5354223+0 records out
      Mar 11 08:24:13 preclear_disk_ZR5CLZVV_14183: Pre-Read: dd output: 11228619472896 bytes (11 TB, 10 TiB) copied, 58460 s, 192 MB/s
      Mar 11 08:24:13 preclear_disk_ZR5CLZVV_14183: Pre-Read: dd output: 5354967+0 records in
      Mar 11 08:24:13 preclear_disk_ZR5CLZVV_14183: Pre-Read: dd output: 5354966+0 records out
      Mar 11 08:24:13 preclear_disk_ZR5CLZVV_14183: Pre-Read: dd output: 11230177656832 bytes (11 TB, 10 TiB) copied, 58472.4 s, 192 MB/s
      Mar 11 08:24:13 preclear_disk_ZR5CLZVV_14183: Pre-Read: dd output: 5355721+0 records in
      Mar 11 08:24:13 preclear_disk_ZR5CLZVV_14183: Pre-Read: dd output: 5355720+0 records out
      Mar 11 08:24:13 preclear_disk_ZR5CLZVV_14183: Pre-Read: dd output: 11231758909440 bytes (11 TB, 10 TiB) copied, 58484.9 s, 192 MB/s
      Mar 11 08:24:13 preclear_disk_ZR5CLZVV_14183: Pre-Read: dd output: 5356470+0 records in
      Mar 11 08:24:13 preclear_disk_ZR5CLZVV_14183: Pre-Read: dd output: 5356469+0 records out
      Mar 11 08:24:13 preclear_disk_ZR5CLZVV_14183: Pre-Read: dd output: 11233329676288 bytes (11 TB, 10 TiB) copied, 58497.3 s, 192 MB/s
      Mar 11 08:24:13 preclear_disk_ZR5CLZVV_14183: Pre-Read: dd output: 5356487+0 records in
      Mar 11 08:24:13 preclear_disk_ZR5CLZVV_14183: Pre-Read: dd output: 5356487+0 records out
      Mar 11 08:24:13 preclear_disk_ZR5CLZVV_14183: Pre-Read: dd output: 11233367425024 bytes (11 TB, 10 TiB) copied, 58497.6 s, 192 MB/s
      Mar 11 08:24:13 preclear_disk_ZR5CLZVV_14183: Pre-read: pre-read verification failed!
      Mar 11 08:24:13 preclear_disk_ZR5CLZVV_14183: S.M.A.R.T.: Error:
      Mar 11 08:24:13 preclear_disk_ZR5CLZVV_14183: S.M.A.R.T.:
      Mar 11 08:24:13 preclear_disk_ZR5CLZVV_14183: S.M.A.R.T.: ATTRIBUTE               INITIAL NOW STATUS
      Mar 11 08:24:13 preclear_disk_ZR5CLZVV_14183: S.M.A.R.T.: Reallocated_Sector_Ct   0       0   -
      Mar 11 08:24:13 preclear_disk_ZR5CLZVV_14183: S.M.A.R.T.: Power_On_Hours          7       23  Up 16
      Mar 11 08:24:13 preclear_disk_ZR5CLZVV_14183: S.M.A.R.T.: Reported_Uncorrect      0       0   -
      Mar 11 08:24:13 preclear_disk_ZR5CLZVV_14183: S.M.A.R.T.: Airflow_Temperature_Cel 31      33  Up 2
      Mar 11 08:24:13 preclear_disk_ZR5CLZVV_14183: S.M.A.R.T.: Current_Pending_Sector  0       0   -
      Mar 11 08:24:13 preclear_disk_ZR5CLZVV_14183: S.M.A.R.T.: Offline_Uncorrectable   0       0   -
      Mar 11 08:24:13 preclear_disk_ZR5CLZVV_14183: S.M.A.R.T.: UDMA_CRC_Error_Count    0       0   -
      Mar 11 08:24:13 preclear_disk_ZR5CLZVV_14183: S.M.A.R.T.:
      Mar 11 08:24:13 preclear_disk_ZR5CLZVV_14183: error encountered, exiting ...
  21. My understanding of finite fields is limited, but I think the calculation for an nth parity would merely involve raising the per-disk coefficient to the nth power in one of the terms. I'm not sure how much more overhead that would be than the 2nd power, as it is now for dual parity. If it isn't prohibitive, I would like to be able to add a 3rd parity. (See the parity syndrome note after this list.)
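
Note on post 3: giving a container access to an Intel iGPU generally comes down to exposing the host's /dev/dri device nodes to it; how the Steam Headless template wires that up is not shown in these posts, so this is only a host-side check. A minimal Python sketch, assuming a standard Linux host, that lists the DRM nodes and the group owning them, since those are the devices that would need to be mapped into the container and the group its user would need to belong to:

    import grp
    import stat
    from pathlib import Path

    # List the DRM character devices under /dev/dri and the group owning each.
    # These are what a container needs mapped in (and whose group, typically
    # "video" or "render", its user must join) to use the Intel iGPU.
    dri = Path("/dev/dri")
    if not dri.exists():
        print("/dev/dri not found: the i915 driver may not be loaded on the host")
    else:
        for node in sorted(dri.iterdir()):
            st = node.stat()
            if stat.S_ISCHR(st.st_mode):
                group = grp.getgrgid(st.st_gid).gr_name
                print(f"{node}  group={group} (gid {st.st_gid})")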
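
Note on post 4: the Lucee error shows the plugin shelling out to /sbin/parted -m /dev/sds unit B print free with a 90-second timeout and that call never returning. A minimal Python sketch that reproduces the same call with the same limit, to see whether the device hangs even outside the plugin; the device name sds is just the one from the error message, substitute your own:

    import subprocess

    # Re-run the exact parted command ScanControllers.cfm times out on,
    # with the same 90-second limit. If this also hangs, the drive or its
    # controller path is not answering, and the plugin is not at fault.
    device = "/dev/sds"  # device name taken from the error in post 4

    try:
        result = subprocess.run(
            ["/sbin/parted", "-m", device, "unit", "B", "print", "free"],
            capture_output=True, text=True, timeout=90,
        )
        print(result.stdout or result.stderr)
    except subprocess.TimeoutExpired:
        print(f"parted on {device} gave no answer within 90 seconds")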
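
Note on posts 13-15: the symptom described there (files readable, but only /dev/sdX and no /dev/sdX1) points at a filesystem written directly onto the raw disk with no partition table, which is why only a format option appears. A minimal Python sketch, assuming the /dev/sdn name from the posts and a hypothetical /mnt/olddisk mount point, that confirms the filesystem type on the whole device and mounts it read-only for copying data off (run as root):

    import subprocess

    # These disks have a filesystem on the raw device rather than inside a
    # partition, so there is no /dev/sdn1 to mount. blkid run against the
    # whole device reports the filesystem type, and mount can then be
    # pointed at the whole device directly.
    device = "/dev/sdn"          # device name from post 14
    mountpoint = "/mnt/olddisk"  # hypothetical mount point; create it first

    probe = subprocess.run(
        ["blkid", "-o", "value", "-s", "TYPE", device],
        capture_output=True, text=True,
    )
    fstype = probe.stdout.strip()
    print(f"{device}: filesystem type {fstype or 'none detected'}")

    if fstype:
        # Mount read-only while the data is being copied off, to be safe.
        subprocess.run(["mount", "-o", "ro", device, mountpoint], check=True)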
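
Note on post 18: a small worked example of the point being made there. With single (XOR) parity, a disk smaller than the parity drive contributes zeros beyond its own size, so removing it only changes parity over the region where it actually held data; the rest of the parity is already correct. A toy Python demonstration, with one byte standing in for one terabyte:

    from functools import reduce
    from operator import xor

    def parity(disks, size):
        """XOR parity across disks, each zero-padded to the parity size."""
        padded = [d + bytes(size - len(d)) for d in disks]
        return bytes(reduce(xor, column) for column in zip(*padded))

    # Toy array: one 2-"TB" disk and two 8-"TB" disks behind an 8-"TB" parity.
    small = bytes([7, 3])
    big1  = bytes([1, 2, 3, 4, 5, 6, 7, 8])
    big2  = bytes([8, 7, 6, 5, 4, 3, 2, 1])

    before = parity([small, big1, big2], 8)
    after  = parity([big1, big2], 8)   # same array with the small disk removed

    # Only the first len(small) bytes of parity differ; everything past the
    # small disk's capacity is already correct and would not need rebuilding.
    print("before:", before.hex())
    print("after: ", after.hex())
    assert before[len(small):] == after[len(small):]

Whether Unraid exposes a partial update like this when shrinking an array is a separate question; the arithmetic above only shows that XOR parity itself does not force a full rebuild.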
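
Note on post 19: a back-of-the-envelope comparison of the two strategies, assuming a single bad stick and that a Memtest pass over whichever sticks are installed reliably exposes the fault. Halving the installed sticks each pass needs about log2(n) passes, versus up to n passes testing one stick at a time:

    from math import ceil, log2

    def passes_by_halving(stick_count: int) -> int:
        """Passes needed if each Memtest run covers half of the remaining suspects."""
        return ceil(log2(stick_count)) if stick_count > 1 else 0

    for sticks in (2, 4, 8, 16):
        print(f"{sticks:>2} sticks: about {passes_by_halving(sticks)} passes by halving, "
              f"up to {sticks} passes one stick at a time")

With 16 sticks that is roughly 4 long Memtest runs instead of up to 16, which is the difference the post is pointing at.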
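
Note on post 21: the usual textbook description of dual parity (as in the common RAID-6 write-ups) is two syndromes over GF(2^8), and a third parity is typically sketched as one more syndrome with the coefficient base squared; this says nothing about how Unraid actually implements its parity, it only makes the post's intuition concrete. In LaTeX form:

    P = \sum_{i=0}^{n-1} D_i, \qquad
    Q = \sum_{i=0}^{n-1} g^{i} D_i, \qquad
    R = \sum_{i=0}^{n-1} \bigl(g^{2}\bigr)^{i} D_i

Here D_i is the block at the same offset on data disk i, g is a generator of GF(2^8), and all arithmetic is in that field, so addition is XOR. Under this description a third parity costs one extra multiply-and-XOR per data block rather than a different kind of computation, although the coefficients have to keep every possible combination of failures recoverable, which is what makes extending much beyond a third syndrome less straightforward.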