Posts posted by Akshunhiro

  1. Hello, could I please get some help with date formats to set up Shadow Copies?

     

    I'm still on 6.11.4 with a pool created in TrueNAS and want to get Shadow Copies working in Windows, but first I need to work out the correct date format for both my User Scripts script:

     

    #!/bin/bash
    # Snapshot names come out like auto-October23 (full month name + 2-digit year)
    DATE=$(date +%B%y)
    zfs snapshot -r chimera@auto-$DATE

    (this used to be "zfs snapshot -r chimera@auto-`date +%B%y`" but I updated it to the above while testing, to match the format)

     

    and my "shadow: format = auto-" line in /boot/config/smb-extra.conf (which only contains the ZFS pool share).

     

    If I leave "shadow: format = auto-" and "shadow: localtime = yes", then all the previous versions show the same date. With any other combination I don't see any.

     

    Here's the output of zfs list -t snapshot showing the current format:

     

    zfs list -t snapshot
    NAME                            USED  AVAIL     REFER  MOUNTPOINT
    chimera@auto-April23              0B      -      224K  -
    chimera@auto-May23                0B      -      224K  -
    chimera@auto-June23               0B      -      224K  -
    chimera@auto-July23               0B      -      224K  -
    chimera@auto-August23             0B      -      224K  -
    chimera@auto-September23          0B      -      224K  -
    chimera@auto-October23            0B      -      224K  -
    chimera/data@auto-April23      31.0G      -     20.0T  -
    chimera/data@auto-May23         594M      -     21.1T  -
    chimera/data@auto-June23       45.5G      -     21.9T  -
    chimera/data@auto-July23        103G      -     22.3T  -
    chimera/data@auto-August23      118G      -     22.9T  -
    chimera/data@auto-September23  89.8G      -     23.5T  -
    chimera/data@auto-October23    34.7G      -     24.8T  -

     

    This pool only hosts media, so I didn't think it'd be worthwhile setting up zfs-auto-snapshot.sh

     

    Fixed it with: "shadow: format = auto-%B%y"
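For anyone else setting this up, here's roughly how the pieces fit together. This is a sketch: only the "shadow: format" line is confirmed from my setup, while the share name and the snapdir/sort lines are the usual vfs_shadow_copy2 boilerplate and may need adjusting.

```
# /boot/config/smb-extra.conf (sketch; share name illustrative)
[data]
    path = /mnt/chimera/data
    vfs objects = shadow_copy2
    shadow: snapdir = .zfs/snapshot
    shadow: sort = desc
    shadow: format = auto-%B%y
    shadow: localtime = yes
```

The key is that "shadow: format" has to parse the snapshot names exactly as the User Script creates them, so auto-%B%y matches auto-October23 from `date +%B%y`.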

  2. Hi everyone,

     

    Just noticed I've lost a few entries after restoring my container from backup.

     

    Unfortunately my array drive died (SSD so no parity) but I wasn't too worried as I had backups.

     

    I was keen to see how Vaultwarden would go as I'd read that the apps and browser extensions would keep a copy and sync back when it can.

     

    It's only 3 entries, but I'm not sure why they didn't sync back to the container.

     

    I have 2 other devices, so I disconnected them from the net before opening the app, and I can see the missing entries there (216 entries vs 213).

     

    Will the logs show what's happening if I let one of them sync?

  3. Hi all, just wanted to register my interest in adding support for Dell servers.

     

    I've got an R520 and had been using a crude script to get me by.

     

    This plugin is installed but I have no idea how to set up a JSON for fan control. Reading is working fine though.

     

    For whatever reason, my script has randomly stopped working twice now. The logs say it ran, and it outputs/prints the temps, but that's it.

     

    No errors seen in iDRAC, and no change if I reboot the iDRAC, or uninstall ipmitool, the plugin, or a combination of all of them.

     

    I ended up installing PowerEdge-shutup but I'm struggling to fine-tune it as it seems pretty aggressive.

    Attachments: "PowerEdge-shutup with log output.sh", "Original with log output"
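In case it helps anyone tuning their own script, here's a minimal sketch of the approach mine takes. The raw 0x30 0x30 codes are the widely reported iDRAC fan commands for these PowerEdge generations, not anything Dell documents, and the temperature thresholds below are illustrative, not tuned values. Verify on your own hardware first.

```shell
#!/bin/bash
# Sketch: manual fan control on a Dell R520 via ipmitool raw commands.

# Map a CPU temperature (C) to a fan duty cycle (%). Thresholds illustrative.
temp_to_duty() {
    local t=$1
    if   [ "$t" -lt 45 ]; then echo 10
    elif [ "$t" -lt 60 ]; then echo 25
    elif [ "$t" -lt 70 ]; then echo 45
    else                       echo 100
    fi
}

apply_duty() {
    local pct=$1
    # 0x30 0x30 0x01 0x00 = take manual control of the fans (iDRAC raw code)
    ipmitool raw 0x30 0x30 0x01 0x00
    # 0x30 0x30 0x02 0xff <hex pct> = set all fans to <pct>%
    ipmitool raw 0x30 0x30 0x02 0xff "$(printf '0x%02x' "$pct")"
}

# Only touch the hardware if ipmitool is actually present:
if command -v ipmitool >/dev/null 2>&1; then
    apply_duty "$(temp_to_duty 55)"
fi
```

Scheduling that from User Scripts every minute or so is the crude version of what the plugin's JSON presumably describes.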

  4. Ooh, I found it.

     

    Was reading up more on where snapshots are stored and was able to navigate to /mnt/chimera/.zfs/snapshot/manual/data and everything's there.

     

    It's read-only though so a standard move is taking just as long as a copy from the array. Anything else I can try?

     

    I suspect a zfs send and zfs recv will suffer from the same bottleneck.

     

    EDIT: Never mind. It's not an elegant solution, since I copied to the root and not a child dataset. Not sure how I managed that, but I'll just copy everything again.
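For the record, a local zfs send/recv would look roughly like this (dataset names are illustrative), though it streams every block, so it hits the same disk-throughput ceiling as a plain copy:

```shell
#!/bin/bash
# Sketch only; dataset names are illustrative, not my actual layout.
SRC="chimera/data@manual"      # snapshot to restore from
DST="chimera/data_restored"    # hypothetical destination dataset
if command -v zfs >/dev/null 2>&1; then
    zfs send -nv "$SRC"                   # -n = dry run: estimate stream size
    # zfs send "$SRC" | zfs recv "$DST"   # the real transfer
fi
```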

     

  5. Hi all, wondering what I may have done wrong here.

     

    I've set up a pool and moved data from the array to the pool, but noticed most of the folders are empty.

     

    I ran rsync -avh /source /destination and it took about 36 hours to move 15TB

     

    Once the transfer had completed, I took a snapshot before renaming the source folder from data to data_old with "mv /mnt/user/data /mnt/user/data_old"

     

    I then edited the mountpoint for the pool with "zfs set mountpoint=/mnt/chimera chimera" and symlinked /mnt/user/data with /mnt/chimera/data

     

    I saw that the free space for the share in the unRAID GUI reflected the available space, but after checking the share via SMB, most folders are empty. Confirmed this was the case in the CLI as well.

     

    I don't think I can roll back either, as the "refer" is on the pool, not the data dataset. When copying another folder, it seemed to write everything back rather than just restore or refer from the snapshot.

     

    Don't really want to transfer everything all over again, is there anything I can do?

     

    zfs list
    NAME           USED  AVAIL     REFER  MOUNTPOINT
    chimera       15.2T  28.2T     14.9T  /mnt/chimera
    chimera/data   312G  28.2T      312G  /mnt/chimera/data
    
    zfs list -t snapshot
    NAME                  USED  AVAIL     REFER  MOUNTPOINT
    chimera@manual          0B      -     14.9T  -
    chimera/data@manual   719K      -      164G  -

     

  6. Ah yep

    Damn, wonder why it's so big then haha

     

    So I've tested reverting back to a 20GB btrfs image and Docker is working fine now

     

    I'll make a note of the container configs just in case and purge the /mnt/user/system/docker directory tonight (I assume that's fine to do since it's an option in the GUI? And yes, I'll do it through the GUI)

     

    I'll then keep Docker set up as a directory instead of an image; I think I changed it because of excess-write concerns on the SSD

     

    Thanks for your assistance!

  7. 9 hours ago, trurl said:

    Nothing can move open files. Did you disable Docker and VM Manager in Settings before attempting move?

     

    Thanks trurl, I didn't and that was my first mistake

     

    I did get a pop-up from Fix Common Problems saying that Docker would break if files were moved

    The Docker tab was showing a lot of broken containers with the LT logo and broken link icon

     

    Pretty sure I turned off Docker and VM Manager when I got that pop-up, so I ran the mover again. I was left with ~7GB on the old cache drive and, after checking the contents, assumed it was done

     

    My problem was likely caused by manually copying (cp -r) the appdata, system & domains folders (these were all originally set to cache:Prefer) to the new cache drive rather than letting the mover do it, as I don't think cp retained the correct permissions

     

    With the other changes to my setup, I read that having a docker folder rather than the image is a better way to go so that change was made a while ago

    I no longer have any vDisks as I recently grabbed a Synology E10M20-T1 so have a m.2 passed through to each of my VMs

    I can't remember where Plex metadata is stored but suspect that's what's contributing to the 132GB

     

    The only container I have working is the Cloudflare tunnel, but that was set up through the CLI

    I tried removing vm_custom_icons but get the same error message when re-installing and attempting to start it

    VMs are still working

     

    I think I need to purge Docker completely and set it up again but not sure what the correct method would be

    I also tried installing 6.11.0-rc2 to upgrade Docker, without any luck; I suspect that because either system or appdata still has broken permissions, it won't pick up those folders properly

  8. So I failed at swapping over my cache SSD

     

    I set cache to Yes and ran the mover, but took a backup of appdata and system anyway

     

    What I failed to do was let the mover automatically move everything back to the new cache SSD by setting cache:Prefer, which would have retained the permissions

     

    Long story short, permissions are borked and I'm not sure how to fix it

     

    I've tried restoring a backup, but appdata and system are on the array, so they'll continue to have borked permissions

     

    Tried deleting appdata so it'd be recreated, but I still can't get containers to start:

     

    Error response from daemon: failed to create shim: OCI runtime create failed: container_linux.go:380: starting container process caused: exec: "/bin/sh": stat /bin/sh: no such file or directory: unknown

     

    ganymede-diagnostics-20220801-1759.zip
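For reference, the manual equivalent of Unraid's New Permissions tool is roughly the sketch below (assuming the usual nobody:users share defaults; note that appdata should normally be excluded, since many containers expect their own internal UIDs there, which is why the "Docker Safe" variant exists):

```shell
#!/bin/bash
# Sketch: reset share permissions to Unraid's usual defaults
# (nobody:users ownership, rw for everyone, execute on directories).
fix_perms() {
    local target=$1
    # chown needs root; ignore failure so the demo below still runs
    chown -R nobody:users "$target" 2>/dev/null || true
    chmod -R u+rwX,g+rwX,o+rwX "$target"
}

# Demo on a throwaway directory so nothing real is touched:
d=$(mktemp -d)
touch "$d/locked"
chmod 000 "$d/locked"
fix_perms "$d"
RESULT=$(stat -c '%a' "$d/locked")   # plain file ends up 666 (X skips files)
```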

  9. Hey all, sorry to create a new topic but my Google-Fu is failing me

     

    Recently set up an unRAID box and it's been great

     

    One annoyance I've noticed, though, is that SMB stops working after creating a new share; I need to restart to bring it all back

     

    I couldn't find any way to restart smbd manually, but I could be looking for/trying to do the wrong thing

     

    Could someone please advise how to restart the service (if that is in fact the issue) rather than restarting the whole box?

     

    I'm on 6.9.2

     

    Thanks
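A hedged guess at what I was after, since Unraid is Slackware-based and ships rc scripts for its services (worth verifying the path exists before relying on it):

```shell
#!/bin/bash
# Sketch: restart Samba in place instead of rebooting the whole box.
# /etc/rc.d/rc.samba is the standard Slackware rc script location.
RC=/etc/rc.d/rc.samba
if [ -x "$RC" ]; then
    "$RC" restart
    STATUS="restarted"
else
    STATUS="rc.samba not present on this system"
fi
echo "$STATUS"
```

If the daemons are still up and only the config changed, `smbcontrol all reload-config` should also re-read smb.conf without a full restart, if I remember right.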

  10. 4 hours ago, ich777 said:

    Is this a new or a used card?

     

    Something seems very strange here to me... Never had a problem using my P400 and playing such files.

     

    Brand new. Sealed in box from my local computer parts store.

     

    Ah well, I'm not too concerned. It works well in HandBrake so not a total loss.

  11. The Matrix is being weird (not transcoding and glitching back and forth) but I've noticed TLotR is using a lot more VRAM!

     

    +-----------------------------------------------------------------------------+
    | NVIDIA-SMI 470.74       Driver Version: 470.74       CUDA Version: 11.4     |
    |-------------------------------+----------------------+----------------------+
    | GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
    | Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
    |                               |                      |               MIG M. |
    |===============================+======================+======================|
    |   0  Quadro P400         Off  | 00000000:08:00.0 Off |                  N/A |
    | 34%   46C    P0    N/A /  N/A |   1565MiB /  2000MiB |     86%      Default |
    |                               |                      |                  N/A |
    +-------------------------------+----------------------+----------------------+
                                                                                   
    +-----------------------------------------------------------------------------+
    | Processes:                                                                  |
    |  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
    |        ID   ID                                                   Usage      |
    |=============================================================================|
    |    0   N/A  N/A     39146      C   ...ib/jellyfin-ffmpeg/ffmpeg     1563MiB |
    +-----------------------------------------------------------------------------+

     

    Will keep playing with it
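To keep an eye on it over a whole playback session, a one-shot query like this (standard nvidia-smi query flags) is easier to log than the full table:

```shell
#!/bin/bash
# Sketch: snapshot VRAM use and GPU load in CSV form, handy for logging
# repeatedly from a watch/cron loop while a transcode runs.
if command -v nvidia-smi >/dev/null 2>&1; then
    STATUS=$(nvidia-smi --query-gpu=memory.used,utilization.gpu \
                        --format=csv,noheader)
else
    STATUS="nvidia-smi not available"
fi
echo "$STATUS"
```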

  12. Thanks guys

     

    Yes, I have set up RAM cache via 2 different methods to test

     

    Just worked out that /tmp for some reason wouldn't work (I swear it used to), so I used /transcode

     

    --mount type=tmpfs,destination=/transcode,tmpfs-size=8000000000 --no-healthcheck (tried /tmp as the destination as well; the path definitely matched in Plex's transcoder settings)

     

    Also found the alternate method last night, which I was testing by mapping /tmp to /transcode via a path mapping (no issue with that either, as I have quite a lot of RAM)

     

    Tried disabling the RAM cache and playback was even worse, erroring almost immediately

     

    Will try Jellyfin and report back

  13. I've purged the Codecs folder and increased the buffer to 300 seconds again.

     

    Passengers and TLotR seem better but The Matrix still crashes soon after starting playback.

     

    Also, not sure if you noticed from the screenshot, but I'm only in an 8x slot. Did see that bandwidth on the P400 is only 32GB/s, but it should still be okay in an 8x slot, yeah?

  14. Appreciate all the help.

     

    I did see others recommend deleting/renaming the Codecs folder, as I did get a codec error when trying TLotR, but it seemed okay after restarting.

     

    I had played with the buffer as well (increased it to 5 minutes) but it's back on 60 seconds.

     

    Definitely seems like a buffer issue though, as I can see it catch up on the playback bar, and that's when it errors. But usage on the GPU is only ~600MB.

    Okay, so it seems the card isn't up to the task and I'm an idiot for wanting to transcode 4K HEVC

     

    Have seen the T600 recommended for this, but I honestly shouldn't need it/it's not worth the money

     

    Just tested ~6 simultaneous 1080p transcodes and the GPU didn't even break a sweat

  16. Just the one transcode, not tried simultaneous.

     

    Tried with The Matrix and The Lord of the Rings. TLotR plays for a little bit longer but still gives the same error.

     

    The Matrix;

     

    Codec HEVC

    Bitrate 30704 kbps

    Bit Depth 10

    Chroma Subsampling 4:2:0

    Coded Height 1600

    Coded Width 3840

    Color Primaries bt2020

    Color Range tv

    Color Space bt2020nc

    Color Trc smpte2084

    Frame Rate 23.976 fps

    Height 1600

    Level 5.1

    Profile main 10

    Ref Frames 1

    Width 3840

    Display Title 4K (HEVC Main 10 HDR)

    Extended Display Title 4K (HEVC Main 10 HDR)

     

    TLotR;

     

    Codec HEVC

    Bitrate 67944 kbps

    Bit Depth 10

    Chroma Subsampling 4:2:0

    Coded Height 2160

    Coded Width 3840

    Color Primaries bt2020

    Color Range tv

    Color Space bt2020nc

    Color Trc smpte2084

    Frame Rate 23.976 fps

    Height 2160

    Level 5.1

    Profile main 10

    Ref Frames 1

    Width 3840

    Display Title 4K (HEVC Main 10 HDR)

    Extended Display Title 4K (HEVC Main 10 HDR)

     

    Here are the stats during playback:

     

    (screenshot of playback stats)

     

    19 minutes ago, ich777 said:

    May I ask if you are willing to test Jellyfin with HW transcoding if is the same?

     

     

    Sure, I could give it a try tomorrow.

  17. Thanks for the prompt responses!

     

    18 minutes ago, ich777 said:

    Where did you get such a message?

     

    19 minutes ago, alturismo said:

    when i see $MOVIE ... where is this error from ? plex logs ... ?

     

    Ah, sorry, I was just trying to give examples of 4K movies. The error appears during playback; it pops up and I get the option to retry.

     

    I'm testing on the web as other devices will direct play; I just wanted to see if transcoding was working properly when/if needed.

     

    18 minutes ago, ich777 said:

    Why not a Nvidia T400

     

    Unfortunately they're still ~$100 more than the P400 in Australia. I paid $179 for the P400.

     


  18. Hey all, got a question regarding my setup.

     

    Had been doing a bit of research on which card to get just for Plex/Tdarr/HandBrake and decided on a Quadro P400.

     

    I did want to get a 1050 Ti, but they're scarce and the Pascal Quadros are still in stock.

     

    The numerous threads and questions on Reddit I'd found said I shouldn't need anything more than a P400.

     

    Just finished setting it up and it's definitely working, but the 4K files I've tested throw a Playback Error: An error occurred trying to play "$MOVIE". Error code: h3 (Decode). The time before the error varies from 10 seconds to 5 minutes.

     

    Looking at the GPU stats, memory is fine but load is maxed out. With Plex, I've set the transcode directory to RAM using the path option. I did have the --mount type=tmpfs,destination=/tmp,tmpfs-size=8000000000 entry in Extra Parameters (and set the transcode directory to /tmp) but found it wasn't reliable; I was testing this with Live TV & DVR using Telly and it would error most of the time.

     

    Could this be an issue with Plex, or is the card not up to the task? I'm on the latest version (1.24.3.5033) of Plex and the latest nVidia driver (470.74).

  19. Hey all, just wanted to show my appreciation for this product and the amount of support and guides out there (especially the content from Ed/Spaceinvader One, goddamn!).

     

    I recently picked up a free PowerEdge R520 which was decommissioned at work. I had wanted to run either Proxmox or ESXi, but figured I'd check out unRAID after hearing so much about it (thanks LTT).

     

    I currently run a basic TrueNAS box which hosts Plex and Transmission plus shares etc., so I was somewhat limited as far as the ease of setting up RancherOS (or an Ubuntu VM) and playing with Docker.

     

    I love that unRAID makes Docker and VMs so easy for simpletons like me.

     

    I've already set up a Mojave VM using Spaceinvader One's guide (it has since replaced a Hackintosh, so that hardware has gone to my housemate, who's quite happy with the upgrade), a Windows 11 VM to test out, and a Windows 10 VM for work, which I was able to make compliant by setting up swTPM.

     

    Still waiting on my second processor and heatsink, but the box runs like a dream and is very quick after setting up a cache SSD and a dedicated virtual-disk SSD for the VMs. It was easy enough to flash the Dell PERC, and I also found details on setting fan speed via IPMI!

     

    Was great to see just how much info is out there and how large the community is; it goes to show how good the product is.

     

    I'm only 5 days into the trial period but will be slamming that cash down shortly.

     

    Cheers, JB.
