Posts posted by halexh

  1. Trying to run the mover to move things off the cache drive onto the array, but I am constantly getting "No space left on device", as seen here in the system log:
     

    Oct 1 17:17:46 UnraidServer shfs: copy_file: /mnt/cache/data/cameras/recordings/2023-09-25/05/Doorbell/55.32.mp4 /mnt/disk1/data/cameras/recordings/2023-09-25/05/Doorbell/55.32.mp4.partial (28) No space left on device

     

    I guess while on vacation my cache drive filled up with security camera recordings. Before leaving, Disk 1 was at ~13.5 TB capacity and Disk 2 was ready to be used. But why isn't the mover copying anything to Disk 2?
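
    For reference, this is roughly how I've been checking how full everything is from the console (standard Unraid mount points):

    # show used/available space on the cache pool and both array disks
    df -h /mnt/cache /mnt/disk1 /mnt/disk2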

     


     

    Here is a screenshot of the share the mover is operating on:

    [screenshot of the share settings]

     

  2. 38 minutes ago, Kilrah said:

    To expand, when you replace a drive it stays available during the rebuild process, albeit at reduced performance of course.

    Thank you for that. I was about to ask, well isn't it going to spend the next 12+ hours rebuilding the new drive without the ability to read from it? Reduced performance is fine with me.

    I'm assuming I will also be able to write to the array while this new drive is being rebuilt?

  3. I have two drives in my array. One is a parity drive, and the other drive is going bad, but still very much usable and technically healthy according to unraid. Is it possible to replace the bad drive with a third, brand new drive without taking the array offline while I essentially clone (somehow) the bad drive's contents onto the new drive? All drives are the exact same size / model.

     

    While the array is still online, I imagine new things will be written to the bad drive, which would potentially require running this cloning process (whatever that may be) multiple times in order to get all of the data to the brand new drive.

     

    Thanks

  4. I have a 14 TB HDD that recently started showing Current_Pending_Sector errors. At that point, I was not running a parity drive, and that was the only hard drive in my system. I had a second identical drive, and made it the parity drive. That finished successfully (took about 2 days). When it finished, it mentioned there were parity errors and that I needed to do a parity check, which I did. That also finished successfully and took around the same time. I documented that in this post.

     

    After that I ran a short SMART test and, surprisingly, it passed, which you can see in the attached SMART report.

     

    Since Monday (7/17/23), I have been running an extended SMART self-test and it's been sitting at 90% complete. As I understand it, the actual checking of each sector of the disk doesn't happen until 90%, so that should rightfully take the longest. But I am approaching 4 days here, and I'm not sure if it's actually going to complete, or if there is even an issue.
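
    In case it helps, this is how I've been checking on the self-test from the command line (sdX is just a placeholder for the actual device):

    # overall SMART info, including the self-test execution status
    # ("Self-test routine in progress... X% of test remaining")
    smartctl -a /dev/sdX | grep -A 1 'Self-test execution status'
    # log of completed/aborted self-tests
    smartctl -l selftest /dev/sdX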

     

    So far, there are no errors at all in my system log since starting this extended SMART test.

     

    It's also sort of confusing to me that the disk in question lists a number in the "Errors" column when viewing it on the Main page under Array Devices, yet it's still considered healthy?

    WDC_WUH721414ALE6L4_9JHDH26T-20230721-0939.txt

  5.   

    34 minutes ago, itimpi said:

    You are going to want to replace disk1 anyway as any drive that fails the Extended SMART test should be replaced.

    Yep, I will. I figured in the meantime it would be advantageous to add disk 2 as a parity disk though, right?

     

    Wonder how I would replace disk 1 in the future, though.

  6. Attached is the downloaded SMART report. When I attempt to run an extended SMART test, I get the following:
    [screenshot of the error message]

     

    Kind of surprised to be honest, as this drive only has ~181 days of power on lifetime.

     

    Here's the bad part:

    [screenshot]
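
    The same attributes can be pulled from the command line too; this is what I've been running to keep an eye on them (sdX is a placeholder for the drive in question):

    # raw values of the reallocation / pending-sector SMART attributes
    smartctl -A /dev/sdX | grep -Ei 'pending|reallocat|offline_uncorrectable'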

     

    I never decided to set up a parity drive. Disk 2 is identical to disk 1, and is basically empty (not sure what that 102 GB is tbh). What is the best way to move forward to ensure I don't lose all my data?

    WDC_WUH721414ALE6L4_9JHDH26T-20230715-0852.txt

  7. I have some potential issues that I am looking for confirmation/resolution on.

     

    The first is whether or not I set up my jellyfin container to correctly utilize Quicksync. Within the docker section for jellyfin, when I first configured all of this months ago, for some reason I added both of these lines to the extra parameters section:

    --device /dev/dri/renderD128:/dev/dri/renderD128
    --device /dev/dri/card0:/dev/dri/card0

     

    Looking back at the instructions for linuxserver.io's version of this jellyfin container, it seems only --device=/dev/dri:/dev/dri should be included, as described in the Hardware Acceleration section of https://github.com/linuxserver/docker-jellyfin
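
    So presumably the Extra Parameters field should just contain the single mapping below, and then both nodes should be visible from the container's console (just a sketch of what I plan to verify):

    # single device mapping in the container's Extra Parameters
    --device=/dev/dri:/dev/dri

    # from the Jellyfin container's console: both card0 and renderD128 should be listed
    ls -l /dev/dri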

     

    After restarting the container to pass in the entire /dev/dri folder and then starting something in Jellyfin that requires transcoding for the client, running intel_gpu_top in a console on the Unraid host results in something similar to the following:
     

    intel-gpu-top: Intel Alderlake_s (Gen12) @ /dev/dri/card0 - 1257/1443 MHz;   0% RC6;  2.92/45.62 W;    19769 irqs/s
    
             ENGINES     BUSY                                                                                                          MI_SEMA MI_WAIT
           Render/3D   71.84% |█████████████████████████████████████████████████████████████████████████▍                            |      0%      0%
             Blitter    0.00% |                                                                                                      |      0%      0%
               Video   25.13% |█████████████████████████▊                                                                            |      0%      0%
        VideoEnhance    0.00% |                                                                                                      |      0%      0%
    
       PID              NAME           Render/3D                      Blitter                        Video                      VideoEnhance           
     19689            ffmpeg |█████████████████▊          ||                            ||█████▋                      ||                            |

     

    The key part that had me worried I set it up incorrectly is that it lists /dev/dri/card0 instead of /dev/dri/renderD128. I would imagine renderD128 should be listed here, because that's what Jellyfin is utilizing to transcode.

     

    If I go to the logs of jellyfin, the ffmpeg command to transcode the content looks like this:

    [2023-07-08 10:33:07.636 -04:00] [INF] [16] Jellyfin.Api.Helpers.TranscodingJobHelper: "/usr/lib/jellyfin-ffmpeg/ffmpeg" "-analyzeduration 200M -init_hw_device vaapi=va:,driver=iHD,kernel_driver=i915 -init_hw_device qsv=qs@va -filter_hw_device qs -hwaccel qsv -hwaccel_output_format qsv -c:v h264_qsv -autorotate 0 -i file:<redacted> -autoscale 0 -map_metadata -1 -map_chapters -1 -threads 0 -map 0:0 -map 0:1 -map -0:s -codec:v:0 h264_qsv -preset 7 -look_ahead 0 -b:v 292000 -maxrate 292000 -bufsize 584000 -profile:v:0 high -level 41 -g:v:0 72 -keyint_min:v:0 72 -vf \"setparams=color_primaries=bt709:color_trc=bt709:colorspace=bt709,scale_qsv=w=426:h=284:format=nv12\" -codec:a:0 libfdk_aac -ac 2 -ab 128000 -af \"volume=2\" -copyts -avoid_negative_ts disabled -max_muxing_queue_size 2048 -f hls -max_delay 5000000 -hls_time 3 -hls_segment_type mpegts -start_number 0 -hls_segment_filename \"/config/data/transcodes/af3ec090ed41953b9acce436bf7903c3%d.ts\" -hls_playlist_type vod -hls_list_size 0 -y \"/config/data/transcodes/af3ec090ed41953b9acce436bf7903c3.m3u8\""

     

    I am assuming that the inclusion of

    -hwaccel qsv

    confirms that it is indeed utilizing Quicksync?

     

    Still curious why intel_gpu_top lists /dev/dri/card0 though.
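
    As an extra sanity check, I also tried running vainfo against the render node from inside the container (assuming libva-utils is available in the image); if QSV is usable it should report the iHD driver and a list of supported profiles:

    # query the VA-API driver and supported codec profiles on the render node
    vainfo --display drm --device /dev/dri/renderD128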

     

    For the second issue, I am trying to understand exactly what is happening with the following. I have 4K content, and if I set the quality in Jellyfin to something ridiculous, like 4K @ 120 Mbps, the CPU struggles, resulting in big slowdowns on playback, especially after seeking. I noticed a pattern that during the slowdowns, intel_gpu_top will look like this:

    intel-gpu-top: Intel Alderlake_s (Gen12) @ /dev/dri/card0 - 1246/1446 MHz;   0% RC6;  1.08/19.48 W;        0 irqs/s
    
             ENGINES     BUSY                                                                                                          MI_SEMA MI_WAIT
           Render/3D    0.00% |                                                                                                      |      0%      0%
             Blitter    0.00% |                                                                                                      |      0%      0%
               Video   50.00% |███████████████████████████████████████████████████▏                                                  |     50%      0%
        VideoEnhance    0.00% |                                                                                                      |      0%      0%
    
       PID              NAME           Render/3D                      Blitter                        Video                      VideoEnhance           
     24614            ffmpeg |                            ||                            ||██████████████▉             ||                            |

     

    The Video portion is just capped at 50% for the entire time playback is paused due to not transcoding quickly enough. According to Jellyfin, this Video section represents the QSV decoder/encoder workload, as described in Jellyfin's documentation here: https://jellyfin.org/docs/general/administration/hardware-acceleration/intel/#verify-on-linux

     

    Any ideas why it would be capped at 50% instead of utilizing 100%?

     

    Lastly, shouldn't I expect the VideoEnhance section to be utilized somewhat? My Jellyfin playback section looks like this:
     

    [screenshot of the Jellyfin playback settings]

     

    Motherboard: ASUS TUF GAMING Z690-PLUS WIFI D4

    CPU: Intel i5-12600k

     

    Thanks!

  8. 6 minutes ago, JorgeB said:

    See if this helps, on the main GUI page click on the flash drive, scroll down to "Syslinux Configuration", make sure it's set to "menu view" (top right) and add this to your default boot option, after "append initrd=/bzroot"

    nvme_core.default_ps_max_latency_us=0 pcie_aspm=off

    e.g.:

    append initrd=/bzroot nvme_core.default_ps_max_latency_us=0 pcie_aspm=off


    Reboot and see if it makes a difference, if it doesn't I would try a different model NVMe device if that's a possibility.

     

    It's going into a power-saving state, causing a write/read failure? Just curious about your thinking as to how that could be the issue.
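
    Just so I edit the right spot, I'm assuming the default boot stanza under Syslinux Configuration ends up looking roughly like this after the change (a sketch based on a stock Unraid config; other labels omitted):

    label Unraid OS
      menu default
      kernel /bzimage
      append initrd=/bzroot nvme_core.default_ps_max_latency_us=0 pcie_aspm=off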

  9. I have had my unraid server running for about 2 months now, and this has happened once in the past. The solution then was just to restart and everything was fine. Now that it has happened again, I am more concerned.

    root@UnraidServer:~# btrfs device stats /mnt/cache/
    [/dev/nvme0n1p1].write_io_errs    1
    [/dev/nvme0n1p1].read_io_errs     1046565
    [/dev/nvme0n1p1].flush_io_errs    1
    [/dev/nvme0n1p1].corruption_errs  0
    [/dev/nvme0n1p1].generation_errs  0

     

    This issue manifests as almost all of my containers becoming unresponsive, no longer functioning, and then refusing to restart/start. In the containers' logs, I get errors like these:

    grep: (standard input): I/O error
    /usr/bin/wg-quick: line 50: read: read error: 0: I/O error
    Sonarr failed to start: AppFolder /config is not writable
    

     

    The nvme in question is a SK hynix Platinum P41 1TB. It is installed in the M.2_1 slot, as seen in the image below (taken from my TUF GAMING Z690-PLUS WIFI D4 manual):

    [diagram of the M.2 slots from the motherboard manual]

     

    I have attached my system log / diagnostics zip file. As you can see, there is no SMART log for the NVMe in question. I rebooted the server and then attached the SMART report as well.

     

    Again, everything is back to normal now that I have rebooted. The syslog.txt is no longer being flooded with

    BTRFS error (device nvme0n1p1: state EA): bdev /dev/nvme0n1p1 errs: wr 1, rd 519402, flush 1, corrupt 0, gen 0

    like it is in the attached syslog.txt (within the diagnostics zip file), and all my containers are running again without issue.

     

    After rebooting, I did a scrub on the nvme drive:

    UUID:             7579a732-bdd9-4dac-b182-f01dbb08f3c7
    Scrub started:    Tue Feb 21 08:19:52 2023
    Status:           finished
    Duration:         0:00:32
    Total to scrub:   143.52GiB
    Rate:             4.48GiB/s
    Error summary:    no errors found

    and reran this:

    root@UnraidServer:~# btrfs device stats /mnt/cache/
    [/dev/nvme0n1p1].write_io_errs    0
    [/dev/nvme0n1p1].read_io_errs     0
    [/dev/nvme0n1p1].flush_io_errs    0
    [/dev/nvme0n1p1].corruption_errs  0
    [/dev/nvme0n1p1].generation_errs  0

     

    I feel like in another week or so, this issue is going to pop up again. Any ideas on what I can do to resolve this so I don't get these errors anymore? Thanks
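
    In the meantime, I'm planning to run something like this periodically (e.g. via the User Scripts plugin) so I notice the error counters climbing before the containers fall over again; just a sketch:

    #!/bin/bash
    # log a warning if any btrfs device error counter on the cache pool is non-zero
    if ! btrfs device stats --check /mnt/cache; then
        logger -t cache-check "btrfs device stats reports errors on /mnt/cache"
    fi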

     

    unraidserver-diagnostics-20230221-0720.zip SHPP41-1000GM_SSB6N82781170710H-20230221-0812.txt

  10. 2 hours ago, biggiesize said:

    Try one more time to set the firewall to false. If it is still failing then it's not being blocked. I'm also assuming you are at least restarting qBittorrent each time you rebuild gluetun.

     

    Yeah, I am restarting the qBittorrent container each time I rebuild gluetun. Tried it again with the firewall set to false. Still having the same results as before.

     

    For comparison, I went and set up binhex's delugevpn container, and that worked like a charm on the exact same torrent. Both gluetun and delugevpn claim they are up and running, and the results of

    curl -sd port=<Port Mullvad Provided> https://canyouseeme.org | grep -o 'Success\|Error'

    are successful on both containers. Wish I could get gluetun working though - seems better, and I like that I can use it with any torrent client.

     

    I also have a NordVPN account, and I initially set up the nordlynx container with qbittorrent. When I run that, it works quite well, and I don't get the same firewalled icon. I figured obtaining a VPN that was capable of port forwarding would improve things further, so I gave Mullvad a try, which led me to Gluetun.

  11. 4 hours ago, biggiesize said:

    Go into qBittorrent settings and grab the listening port on the connections tab and set the network interface back to 'any'.

     

    Then try using this for your gluetun run command:

     

    -e 'FIREWALL_VPN_INPUT_PORTS'='8081,<listening port from qBittorrent>'

    -e 'FIREWALL_INPUT_PORTS'='8081,<listening port from qBittorrent>'

    -p '<listening port from qBittorrent>:<listening port from qBittorrent>/tcp'

    -p '<listening port from qBittorrent>:<listening port from qBittorrent>/udp'

     

    In this case, <listening port from qBittorrent> is the same value as <Port Mullvad Provided>. I believe that is what you would expect, right? In any case, I still get the same results - one connection and the icon implying I am firewalled.

  12. 9 hours ago, biggiesize said:

    It is the port you specified for the qbittorrent webui (8081). You can also try turning the firewall off for gluetun. There is a possibility the torrent tracker is being blocked.

     

    I have individually tried all of the following. Each time, I made the change, restarted the Gluetun container, and restarted the qbittorrent container, and each time it had the same effect: only connecting to one peer (the same peer too, which is somewhat odd) after starting the torrent for the Ubuntu image:

    • Setting FIREWALL_INPUT_PORTS to 8081
    • Turning off the firewall altogether
    • Setting BLOCK_MALICIOUS to off
    • Removing OPENVPN related variables so they aren't set / defined when the docker run command is run, since I am using WIREGUARD
    • Attempted to change the "192.168.0.0/16" in the DOT_PRIVATE_ADDRESS variable to 192.168.68.0/24
    • Resetting FIREWALL_OUTBOUND_SUBNETS to its default value of nothing, from the value I had of "192.168.68.0/24". Obviously no change, since turning the firewall off altogether had no effect.
    • Renamed the GluetunVPN container to "gluetunvpn", to remove case sensitivity.
    • Setting the gluetunvpn container to privileged when creating/updating it in unraid.
    • Triple checked that the qbittorrent container is indeed using the gluetun container's network. Running ifconfig in a shell associated with each container produces identical output (see the quick check below).
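
    For that last check, this is roughly what I ran from the host to compare the external IP each container sees (container names as they appear in my Docker tab; which of wget/curl exists varies by image):

    # both should print the same Mullvad exit IP if qBittorrent really shares gluetun's network stack
    docker exec gluetunvpn sh -c 'wget -qO- https://ifconfig.me'
    docker exec qbittorrent sh -c 'curl -s https://ifconfig.me'
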
  13. 25 minutes ago, biggiesize said:

    For unRAID you will also have to add the qbittorrent port to FIREWALL_INPUT_PORTS. I run qbittorrent as well and have not had luck getting the ports to stay on a custom port. Your mileage may vary.

     

    What specifically do you mean by the qbittorrent port?

     

    Is this not the same thing I am using for FIREWALL_VPN_INPUT_PORTS? I went ahead and set FIREWALL_INPUT_PORTS to the same value as FIREWALL_VPN_INPUT_PORTS. Still behaves the same as before

  14. Hi,

     

    Just started using unraid for the first time about a week ago and it has been great. Got the *arr suite of software set up, along with my downloaders, and now trying to configure jellyfin to utilize Quicksync within my i5-12600k on my ASUS TUF GAMING Z690-PLUS WIFI D4 motherboard.

     

    I am utilizing binhex's jellyfin container. Within the setup of the container in unraid, I have added the following to the extra parameters section:
     

    --device /dev/dri/renderD128:/dev/dri/renderD128 --device /dev/dri/card0:/dev/dri/card0

     

    I can see both devices within the Jellyfin container's shell.

     

    I have also installed intel_gpu_top as a means of confirming that transcoding is actually utilizing the iGPU. This is what I see when running it (obviously no load, since it's not being utilized):

    intel-gpu-top: Intel Alderlake_s (Gen12) @ /dev/dri/card0 -    0/   0 MHz; 100% RC6;  0.00/ 6.66 W;        0 irqs/s
    
             ENGINES     BUSY                                                                                                          MI_SEMA MI_WAIT
           Render/3D    0.00% |                                                                                                      |      0%      0%
             Blitter    0.00% |                                                                                                      |      0%      0%
               Video    0.00% |                                                                                                      |      0%      0%
        VideoEnhance    0.00% |                                                                                                      |      0%      0%
    
       PID              NAME           Render/3D                      Blitter                        Video                      VideoEnhance 

     

    If I attempt to utilize either VAAPI or Intel Quicksync in the Transcoding settings of Jellyfin, I am met with the following when attempting to play something:

     


    Playback Error
    This client isn't compatible with the media and the server isn't sending a compatible media format.

     

    When I select VAAPI, it correctly defaults to /dev/dri/renderD128

     

    Lastly, just for the sake of conforming with all the other instructions I have found, I created /boot/config/modprobe.d/i915.conf (because it didn't exist) and added "blacklist i915" to it. Probably pointless, as I believe this only exists to ignore pre-existing drivers in favor of what's in /dev/dri/?
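
    For completeness, this is what I've been checking on the Unraid host itself to confirm the iGPU driver is actually loaded and exposing the device nodes:

    # is the Intel i915 driver loaded on the host?
    lsmod | grep i915
    # are the DRM device nodes present for the container to map?
    ls -l /dev/dri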

     

    Anyways, looking for advice on what I am missing. Thanks!

     

     
