TommyJohn

Members
  • Posts: 17

Posts posted by TommyJohn

  1. 1 hour ago, Rolox said:

    Hi all, I'm fairly new to Unraid and HandBrake, coming from an Adobe workflow background.

     

    I recently installed the docker container and while performing some tests, I ran into the following issue.

     

    I added some files to the Watch Folder that the Automatic Converter uses as a source to start encoding. Everything encoded fine.

    However, I wanted to try a different preset, so I followed these steps:

    • Modified the preset in the Docker container settings
    • Deleted the source files in the "in" folder
    • Deleted the processed files in the "out" folder
    • Re-added the source files to the "in" folder

     

    Now HandBrake does not start encoding again, effectively skipping the files as noted in the attached log files. It only starts encoding new files that it has never seen before.

     

    [autovideoconverter] Skipping video '/watch/T-TS-0037.MP4' (4f70aab708be8878314571bf18f1e4c8): already processed successfully.
    [autovideoconverter] Skipping video '/watch/T-TS-0038.MP4' (4be6aec3b22155ac7758643b8ba5ef49): already processed successfully.
    [autovideoconverter] Skipping video '/watch/T-TS-0039.MP4' (8e5ce0fa131d539f969d3b5f62c35b5b): already processed successfully.
    [autovideoconverter] Skipping video '/watch/T-TS-0040.MP4' (cb5a4c7d6dec289e0ca1a2f8ebfe6d6c): already processed successfully.
    [autovideoconverter] Skipping video '/watch/T-TS-0041.MP4' (1ecd8183b93c2a29c76d5ee5e53c780b): already processed successfully.

    Is there some cache I need to clear so HandBrake will recognize and process files it has seen before?

     

    Thank you in advance! 

    Handbrake skipping log.txt 38.8 kB

    Hi Rolox, try changing the names of the files; HandBrake should process them again.
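    A minimal shell sketch of that rename workaround (this assumes the converter's skip cache keys on the filename as well as the content hash; the `WATCH` path and `-redo` suffix are placeholders, adjust to the real watch folder):

```shell
WATCH="${WATCH:-/watch}"   # the container's watch folder; adjust as needed
# Append a suffix so the watcher sees each file as new on its next scan.
for f in "$WATCH"/*.MP4; do
  [ -e "$f" ] || continue          # glob matched nothing; skip
  mv -- "$f" "${f%.MP4}-redo.MP4"
done
```

    After the files are re-encoded with the new preset, the originals can be renamed back or cleaned up.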

  2. Hi guys. I've been using Unraid/HandBrake for a couple of years now and have successfully encoded TBs of video; all of a sudden I'm getting the "Cannot read or write the directory" error.

     

    I haven't changed permissions anywhere to my knowledge.

    The output folder is set to:

    /mnt/user/Media/_Handbrake Transcodes/

     

    and this is the folder I'm choosing to save to. I have tried different folders with the same error.

     

    Permissions look right to me:

    ls -la "/mnt/user/Media/_Handbrake Transcodes"
    total 23851556
    drwxrwx---  1 nobody users         41 Mar 30  2021 ./
    drwxrwxrwx  1 nobody users         23 Feb  2  2021 ../

     

    HandBrake Docker settings and diagnostics are attached. There's no encode log because encoding won't even start.

     

    Edit: I can encode to this directory from HandBrake on Windows with no problem; it's only in Unraid that I'm having this issue.

     

    I've removed the container and installed a fresh Docker image; nothing is working for me. I hope someone can point out what will probably be an obvious mistake, thanks!
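    One way to narrow this down is to compare the UID/GID the container actually writes as against the folder's owner and mode: with drwxrwx---, any account that is not `nobody` and not in the `users` group gets no access at all. A sketch (the container name `HandBrake` is an assumption; match it to the actual template, and note that jlesage images set their user via the USER_ID/GROUP_ID environment variables):

```shell
# Which user does the container write as? (container name is an assumption)
docker exec HandBrake id || true

# Who owns the output folder, and what is its mode? (path from the post)
stat -c '%U:%G %a' '/mnt/user/Media/_Handbrake Transcodes' || true
```

    If the first command reports something other than uid 99 (nobody) / gid 100 (users), that mismatch would explain why Windows HandBrake over the share works while the container does not.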

    unraid_handbrake_settings.png

    diagnostics-20220226-1150.zip

  3. Hi Guys, 

    I'm restoring my data to an array and want to start Deluge without a connection, so that no torrents start downloading when I start the Docker container. What would be the best way to do this? Would I just change a port setting to block traffic somewhere?

     

    The reason is that the last time this happened, when I installed the VPN Docker container and restored all my data from backup, Deluge "forgot" the states of all the torrents and started downloading like mad. I want to be able to go back into each torrent and make sure the skip options are set correctly.
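    One hedged option, assuming a standard Unraid Docker template: start the container with no network at all, audit the per-torrent options, then restore the network setting and restart. In the template's Edit view that is a single dropdown change:

```
Network Type: None
```

    With no network namespace, Deluge comes up with its full state intact but cannot reach any tracker or peer, so nothing downloads while you check each torrent.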

    TIA.

  4. Benson: the H200 was installed after the H700 was removed; they weren't connected simultaneously.

     

    johnnie: This is a good point. I honestly can't remember if I formatted the disks on the H700 before moving them to the H200, though I'm almost certain it was after, because I wanted to make sure the correct serial numbers for the new drives were showing in Unraid.

     

    So what do I do now? Do I attempt xfs_repair -L? Will this potentially repair the filesystem?

     

    UPDATE: I powered down for 24 hours, and when powering back up this morning the array was back online... but all drives were empty. It looks like I've lost the data and will have to recover from backup. I'm still worried that this might happen again; are there any indicators I should look for in the logs? Is there any way to get a warning about the UUID issue in the future?
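    On the UUID question: XFS refuses to mount a filesystem whose UUID duplicates one that is already mounted, and a periodic check can flag that condition before it bites. A hedged sketch (the device glob is an assumption; match it to the actual array members):

```shell
# Print any filesystem UUID that appears on more than one partition; any
# output means two disks share a UUID and one of them will refuse to mount.
blkid -s UUID -o value /dev/sd[a-z]1 2>/dev/null | sort | uniq -d
```

    This could run from cron or the User Scripts plugin; a non-empty result is the warning sign to investigate before a reboot.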

     

    Here is my new xfs_repair message:

     

    root@Tower:~# xfs_repair -v /dev/sdb1
    Phase 1 - find and verify superblock...
            - block cache size set to 1513776 entries
    Phase 2 - using internal log
            - zero log...
    zero_log: head block 662654 tail block 662650
    ERROR: The filesystem has valuable metadata changes in a log which needs to
    be replayed.  Mount the filesystem to replay the log, and unmount it before
    re-running xfs_repair.  If you are unable to mount the filesystem, then use
    the -L option to destroy the log and attempt a repair.
    Note that destroying the log may cause corruption -- please attempt a mount
    of the filesystem before doing this.

     

    Should I proceed with -L in an attempt to recover the data?

  5. So I finally took the plunge and replaced my PERC H700 card with an H200 flashed to IT mode. I had an existing XFS array on the PERC with 12 TB of data. Despite crossed fingers, when I rebooted the array wouldn't start and the drives all gave the error "Unmountable: No file system".

    I tried xfs_repair -v on the drives, but no luck (errors corrected, but they still won't mount). I assumed this was to be expected when changing cards, and I decided to start a new array with some new drives I had; by copying all my data to a new array, I figured I would at least end up with a non-fragmented array, as my old one was probably 90% fragmented. Anyway, 4 days later I finally finished copying everything and setting up all my Dockers. Then I decided to add in 2 drives for parity. I precleared the new disks without errors and rebooted, only to get the exact same error as before: "Unmountable: No file system"! Argh. Side note: is there any advantage to setting up parity while building the array as opposed to after? Would this have slowed down the copy process?

     

    So, again, I ran xfs_repair -v on the array drives, which fixed some errors; I ran it again and got no more errors. Rebooted: same problem, "Unmountable: No file system".

     

    What are my recommended next steps? Do I proceed with xfs_repair -L even though I'm no longer getting any xfs_repair errors? I don't want to have to start from scratch again and copy 12 TB of data, which is hard because it's all sitting on shares spread across drives from my former array, but as a last resort I can do that. I'm hoping to salvage the existing data.

     

    Attached are my diagnostics, thanks in advance to those offering some guidance.

     

    EDIT: After the xfs_repairs on all disks and powering down overnight, the array came back online on the next boot, but with the data gone.

     

    tower-diagnostics-20190808-1546.zip

  6. Worst case, I have all my data backed up on other drives, so I can start fresh if necessary; I would just like to avoid it.

     

    I'm looking at the H200; how would it support 12 drives like the H700 does? It seems to support only 8 drives, so does the SAS expander on the R510 backplane take care of the rest?

  7. 13 hours ago, jonathanm said:

    Be aware that using a RAID controller for unraid is a bad idea.

    Please read this entire thread.

     

    Hi Jonathan, yes, I did read about RAID controllers prior to ordering my server, and from what I understood, the issues with certain RAID controllers (not specific to the H700) were not passing along SMART info and not reporting the device size correctly. My controller is currently passing both of those without issue. However, I did read that if the H700 fails, my only recourse is to get another H700 in its place.

     

    If I were to swap out the H700 for, say, an H200, would I have to reformat and re-transfer all my data, or would the new card handle everything like nothing had changed?

  8. New user here in the trial period. So far I'm pretty impressed with Unraid and think I will be moving to the licensed version. After comparing it with FreeNAS, I think Unraid is the more flexible solution for my needs.

     

    I've successfully created an array with 4x3 TB drives and dual parity. I have a separate SSD intended for VMs, but I think I incorrectly added the SSD to the array, which would not be the recommended approach? Is using the Unassigned Devices plugin the better method? I don't use the SSD for cache; this is purely a media server.

     

    So now I need to remove the SSD from the array, but I'm confused about the best approach. My SSD is not mounted yet, so do I have to rebuild parity, since it shouldn't be affected? I looked at the options here and am unsure of the best procedure:

     

    https://wiki.unraid.net/Shrink_array

     

    The "Clear Drive Then Remove Drive" method only seems to be valid if the drive is mounted, which mine is not.
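    Since the SSD was never mounted or written to, the "Remove Drives Then Rebuild Parity" method on that same wiki page is usually the fit; a sketch of the flow (UI labels are from memory and may differ by Unraid version):

```
Stop the array
Tools → New Config → Preserve current assignments: All
Return to Main and unassign the SSD from its array slot
Start the array; both parity disks rebuild from the remaining data drives
```

    The cost is a full parity rebuild, but with nothing ever written to the SSD there is no data on it to preserve.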

    Any guidance you guys can provide would be helpful. 

    Thanks

     

    image.thumb.png.6ccc97b61893fbd21324dffd81364d0c.png