Posts posted by Reynald

  1. On 1/22/2024 at 7:59 AM, ich777 said:

    Sorry but this Kernel version is not supported by Unraid nor is it used in any Unraid version.

    I just compile the plugin packages for Kernel versions that ship with Unraid.

    Where can I find the supported kernel versions, please? Do I have to build my own bz* images?

    As seen here, https://github.com/ich777/unraid-coral-driver/releases, kernels up to 6.6.13 appear to be covered, and I have been able to install your plugins.

  2. Hello all,

     

    @ich777 awesome work with these plugins.

     

    I'm running Unraid OS 6.12.3 with stock kernel.

    However, I need to update the kernel so the NCT6775 modules can be loaded.

    I also use BTRFS extensively in my pools (for data bandwidth), and this filesystem gets frequent features and bugfixes with Linux kernel updates.

    I also need drivers for the Coral TPU and an Nvidia GPU.

     

    I can see that your plugins go up to kernel 6.6.13-unraid. Where can I find the corresponding bz* images, please?

  3. Hello,

    I've sorted out the claim issue by adding a volume and claiming via a script. It survives reboots ;)

    Added this volume mount in the template: 

    /var/lib/netdata/cloud.d/ -> /mnt/user/appdata/netdata/cloud.d/

    As described here: https://learn.netdata.cloud/docs/agent/claim#connect-an-agent-running-in-docker

    Quote

    For the connection process to work, the contents of /var/lib/netdata must be preserved across container restarts using a persistent volume. See our recommended docker run and Docker Compose examples for details.

    (well, this doc is quite outdated because mounting /etc/netdata or /var/lib/netdata won't work, as we know...)
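
    For reference, here is a minimal sketch of the same idea as a plain docker run (the image name and port are netdata's defaults; the host path is my appdata location, adjust it to yours):

    # Persist only the cloud.d directory so the claim survives container re-creation
    docker run -d --name=netdata \
      -p 19999:19999 \
      -v /mnt/user/appdata/netdata/cloud.d:/var/lib/netdata/cloud.d \
      netdata/netdata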

     

    Then I ran this command on the host:

    docker exec -it netdata netdata-claim.sh -token=TOKEN -url=https://api.netdata.cloud

    As per documentation: https://learn.netdata.cloud/docs/agent/claim#using-docker-exec

     

    Maybe 'netdata-claim.sh -token=TOKEN -url=https://api.netdata.cloud' also works from the container console in the Unraid GUI instead of SSH'ing into the host (but I'm an SSH man...).

     

    Happy supervision!

    Reynald

  4. Hello,

    Thank you for your interest and kind words @hugenbdd! This script took me quite a few hours of thinking and scripting.

     

    2 hours ago, hugenbdd said:

    Question, why are you using rsync instead of the mover provided binary?

    I was not aware of the mover binary.

    If I'm not mistaken, the /usr/local/sbin/mover.old script (where the snippet in your example comes from) used to invoke rsync. I recall picking the rsync options (-aH) from that mover.old script.

     

    My strategy is not to move, but to archive-sync in both directions (same as mover), and to delete from the cache based on disk usage, without deleting anything from the array.
    Some benefits:

    - A file is not overwritten if identical; the latest copy is the one on the cache whenever it exists there.

    --> Moving from cache to array and vice versa would take more time than duplicating the data (mover does not truly move either: it syncs then deletes).

    --> Copying from array to cache leaves the data protected by parity.

    --> Having control over deletion makes it possible to handle hard links (a torrent seeded by transmission from the cache is also available to Plex). Mover preserves them too since it moves a whole directory, but I'm moving individual files.

    --> I can bypass the cache "prefer/yes/no/only" directives and set mover so it won't touch my "data" share unless I'm short on cache space (i.e. if this smart-cache script is killed).

    --> Using rsync with the -P parameter while debugging/testing gives some status/progress info.

    Drawbacks: 

    - data is duplicated

    - deletions and modifications made on the array via /mnt/user0 or /mnt/diskN are not synced to /mnt/cache. This cannot happen if /mnt/user is used for the 'data' share.

     

    But thanks to your suggestion (the file-list idea), I now have an idea of how to sync cache-only files (i.e. fresh transmission downloads made during quiet hours) to the array.
    Also, mover may do some extra checks from array to cache. From cache to array, I rely on Unraid's shfs mechanism since I sync to /mnt/user0 (and not to /mnt/diskN); the same goes for hard links, which shfs handles well.
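
    To make the /mnt/user0 point concrete, here is a minimal sketch of the cache-to-array sync described above (illustrative path, not the exact code from the script):

    # Archive-sync one cached file back to the array through shfs; /mnt/user0
    # bypasses the cache, -aH preserves attributes and hard links, -P is only
    # useful while debugging/testing to get progress info.
    rsync -aHP "/mnt/cache/data/path/to/file.mkv" "/mnt/user0/data/path/to/"
    # Only once the copy exists on the array is the cached copy deleted:
    rm "/mnt/cache/data/path/to/file.mkv"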

     

    If you want to use this script for Plex only, you can:

    - set $TRANSMISSION_ENABLED to false

    or, if you want to clean up the script:

    - remove the #Transmission parameters, the transmission_cache function, and its call ('$TRANSMISSION_ENABLED && transmission_cache') at the bottom of the script.
    I may extend it to other torrent clients later.

  5. Updated: v0.5.14:

    - Improved verbosity (new settings)
    - Added parameter CACHE_MAX_FREE_SPACE_PCT="85" in addition to CACHE_MIN_FREE_SPACE_PCT="90"
    => When cache usage exceeds CACHE_MIN_FREE_SPACE_PCT (here 90%), space is freed until CACHE_MAX_FREE_SPACE_PCT is reached (here 85%)
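
    In other words, the two values act as a small hysteresis. A minimal sketch of the check (df-based, on /mnt/cache; this is not the script's exact code):

    # Start cleaning when usage goes above CACHE_MIN_FREE_SPACE_PCT (90%) and keep
    # freeing the oldest cached files until it drops under CACHE_MAX_FREE_SPACE_PCT (85%).
    cache_usage() { df --output=pcent /mnt/cache | tail -1 | tr -dc '0-9'; }
    if [ "$(cache_usage)" -gt 90 ]; then
        while [ "$(cache_usage)" -gt 85 ]; do
            # ...move the oldest cached file back to /mnt/user0, then delete it from /mnt/cache...
            break   # placeholder so this sketch terminates
        done
    fi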

  6. Hello all,

     

    I updated the script 2 days ago and it's holding up well!

     

    I have very few spin-ups now, because 1.4TB of the most recent data is duplicated on the SSD.

     

    Quote

    - Added a max number of torrents/sessions for transmission/Plex, preventing caching of all torrents at transmission startup or during torrent checks
    - Added syncing of new files on the cache to storage during day hours
    - Improved logging
    - Improved error handling for the caching/uncaching functions

     

    It's on my github: https://bit.ly/Ro11u5-GH_smart-cache

     

    Shall I make this a plugin?

  7. Hello all,

     

    Background

    I have an 8-mechanical-HDD, 40TB array in an Unraid server running v6.8.2 DVB.
    I have 40GB of memory, of which only about 8GB is used. I don't use memory for caching, for now.

    I have a 2TB SSD mounted as cache, hosting docker appdata and VM domains. Until now I was using the Unraid cache system to store only new files from a data share, with a script moving them to the array when the SSD was 90% full. With this method only the latest written files were on the cache, so I rethought the whole thing; see below.

    I use Plex to stream to several devices on the LAN (gigabit ethernet) or WAN (gigabit fiber internet), and I also seed torrents with transmission.

     

    Here is my share setup:

    [screenshot: share settings]

     

    So I wanted to dynamically cache files from the data share to the SSD. The main file consumers are Plex and transmission, which keep their data in the data share.

    As a fail-safe, I set mover to only move files if cache usage is above 95%.

    I wrote a script to automagically handle caching of the data share, using up to 90% of the SSD (including appdata and VMs).

     

    What the script needs:

    • an RPC-enabled transmission installation (optional)
    • access to the Plex web API (optional)
    • the path to a share on the cache
    • the path to the same share on the array

     

    What the script does:

    When you start it, it makes basic connection and path checks, then 3 main functions are executed:

    1. Cleans the selected share on the cache to keep at least 10% free (configurable). To free space, the oldest data is copied back to the array and then deleted from the cache.
    2. Retrieves the list of active torrents from the transmission RPC daemon and copies them to the cache without removing them from the array (see the rough sketch below).
      (note: active torrents are those downloading or seeding during the last minute, but also those starting and stopping; that's a caveat if you start/stop a batch of torrents and launch the script within that minute)
    3. Retrieves the list of active playing sessions from Plex and copies (with rsync, same as mover or unBALANCE) movies to the cache without removing them from the array.
      For series, there are options to copy:
      • the current and next episode
      • or all episodes from the current one to the end of the season
    4. Cleans again.
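
    For illustration, here is a rough sketch of steps 2 and 3, assuming transmission-remote can reach the RPC daemon and a Plex token is at hand (paths are illustrative; the real script parses the API responses to work out which files to copy):

    # 2. List active torrents from the transmission RPC daemon (add -n user:pass if needed):
    transmission-remote localhost:9091 --list

    # 3. List active playing sessions from the Plex web API:
    curl -s "http://localhost:32400/status/sessions?X-Plex-Token=YOUR_TOKEN"

    # For each file behind an active torrent or play session, duplicate it onto the
    # cache without touching the array copy (rsync skips files that are already identical):
    rsync -aH "/mnt/user0/data/path/to/movie.mkv" "/mnt/cache/data/path/to/"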

    Note:

    • to copy, rsync is used, like mover or unBALANCE, so it syncs data (it doesn't overwrite files that already exist)
    • in addition, hard links, if any (from Radarr, Sonarr, etc.), are recreated on the destination (the cache when caching, the array when cleaning the cache); see the sketch just below
    • if you manually send a file to the share on the cache, it will be cleaned up when it gets old :) so you may want a side script for those cases (working files, libraries, etc.)
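
    Here is a minimal sketch of that hard-link recreation for the caching direction (illustrative paths; the script generalises this to both directions):

    # After copying a file to the cache, find any other hard links to it on the
    # array and re-create them on the cache, so e.g. the transmission copy and
    # the Plex copy keep sharing a single cached file.
    src_array="/mnt/user0/data/path/to/movie.mkv"
    src_cache="/mnt/cache/data/path/to/movie.mkv"
    find /mnt/user0/data -samefile "$src_array" ! -path "$src_array" | while read -r link; do
        dest="/mnt/cache/data${link#/mnt/user0/data}"
        mkdir -p "$(dirname "$dest")"
        ln "$src_cache" "$dest"   # a hard link on the cache uses no extra space
    done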

     

    Because of the shfs mechanism, accessing a file from /mnt/user will read/write from the cache if the file exists there, and from the array otherwise. Duplicate data is not a problem and globally speeds things up.

     

    The script is very useful when, like me, you have noisy/slow mechanical HDDs for storage and a quick, quiet SSD to serve files.

     

    Script installation:

    I recommend copy/pasting it into a new script created with User Scripts.

     

    Script configuration:

    No parameters are passed to the script, so it's easy to use with the User Scripts plugin.

    To configure it, there is a dedicated section at the beginning of the script; the parameters are pretty much self-explanatory:

    [screenshot: configuration section of the script]
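
    For readers who cannot see the screenshot, here is a hedged sketch of the kind of settings this section exposes (CACHE_MIN_FREE_SPACE_PCT, CACHE_MAX_FREE_SPACE_PCT and TRANSMISSION_ENABLED are names mentioned in my other posts; the remaining names and all values are illustrative):

    # --- Settings section (illustrative) ---
    CACHE_PATH="/mnt/cache/data"        # share as seen on the cache
    ARRAY_PATH="/mnt/user0/data"        # same share, array only (bypasses the cache)
    CACHE_MIN_FREE_SPACE_PCT="90"       # start cleaning above this cache usage
    CACHE_MAX_FREE_SPACE_PCT="85"       # clean until usage falls back under this
    TRANSMISSION_ENABLED=true           # query transmission RPC for active torrents
    PLEX_ENABLED=true                   # query the Plex web API for active sessions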

    Here is a log from execution:

    [screenshot: execution log]

    Pretty neat, huh?

     

    Known previous issues (later updates fixed them):

    • At the moment, the log can become huge if, like me, you run the script every minute. This is the recommended interval because the transmission RPC active-torrent list only contains torrents active during the last minute.
      Edit 13-02-2020: corrected in the latest version
    • At the moment, an orphan file (present only on the cache) that is being played or seeded is detected, but not synced to the array until it needs to be cleaned (e.g. fresh torrents, recent movies fetched by *arr and newsgroups, etc.).
      Edit 13-02-2020: corrected in the latest version: it syncs back to the array during the configured day (noisy) hours.
    • I don't know if/how shfs will handle the file on the cache. I need more investigation/testing to see whether it efficiently reads the file from the cache instead of the array. I guess transmission/Plex need to close and reopen the file to pick it up from its new location? (My assumption is that both read in chunks, so caching should work.)
      Edit 13-02-2020: yes, after checking with the File Activity plugin, that's the case: Plex/transmission pick up the file on the cache as soon as it is available!

     

    Conclusion, disclaimer, and link:

    The script has been running successfully in my configuration since yesterday. Before rsync I was using rclone, which has a cache back-end, a similar Plex caching function, plus a remote (I used it for transmission), but it's not as smooth and quick as rsync.

    Please note that, even if I run it 1440 times a day (User Scripts, custom schedule * * * * *), this script is still experimental and can:

    • erase (or more likely fill up) your SSD,
      Edit 13-02-2020: but I have not experienced this, and error handling has improved
    • erase (not likely) your array
      Edit 13-02-2020: technically, this script never deletes anything on the array, so it won't happen
    • kill your cat (sorry)
    • make your mother-in-law move to your home and stay (I can do nothing about that)
    • break your server into pieces (you can keep those)

    Thanks for reading to this point; you deserve the link to the script (<- link is here).

     

    If you try it or have any comments, ideas, recommendations, questions, etc., feel free to reply ;)

     

    Take care,

    Reynald

     

  8. Solved:

     

    Here is what I did:

     

    The first error was reported in the GUI after an unclean reboot:

    Unmountable BTRFS - No filesystem

    So I started the array in maintenance mode and ran:

    root@Serveur:~# btrfs rescue super-recover -v /dev/mapper/md1 # reported all supers are valid and did not recover
    root@Serveur:~# btrfs-select-super -s 1 /dev/mapper/md1 # to force using first mirror

    The drive mounted, but it was reconstructed...

    Afterwards, because I still had errors reported by btrfs check, I used unBALANCE to transfer the data to a healthy drive.

     

    I then started the array in maintenance mode again and used:

    mkdir -p /mnt/disk2/restore && mount /dev/mapper/md2 /mnt/disk2/restore
    btrfs restore -v /dev/mapper/md1 /mnt/disk2/restore

    It restored some other broken files.

     

    Finally, I unmounted, changed the filesystem to another one in the GUI so the drive got reformatted, changed it back to BTRFS Encrypted, and... voilà!

  9. Hello all,

    [Solved], see below, thanks @johnnie.black

     

    History

    I had a problem with "Unmountable BTRFS - No filesystem" on an (almost new, installed a few weeks ago) array drive. I solved it with the help of this topic:

     

    Basically, I copied a mirror superblock over the main one.

     

    root@Serveur:~# btrfs rescue super-recover -v /dev/mapper/md1 # reported all supers are valid and did not recover
    root@Serveur:~# btrfs-select-super -s 1 /dev/mapper/md1 # to force using first mirror

     

    The drive mounted and, although the data was already there (checked by manually mounting with the array in maintenance mode), it was reconstructed.

     

    I then ran a btrfs scrub on it: no errors.

    I have an inaccessible folder that I can't even delete.


    Now I still have some unrepairable errors:

    root@Serveur:~# btrfs check --repair /dev/mapper/md1
    enabling repair mode
    WARNING:
    
            Do not use --repair unless you are advised to do so by a developer
            or an experienced user, and then only after having accepted that no
            fsck can successfully repair all types of filesystem corruption. Eg.
            some software or hardware bugs can fatally damage a volume.
            The operation will start in 10 seconds.
            Use Ctrl-C to stop it.
    10 9 8 7 6 5 4 3 2 1
    Starting repair.
    Opening filesystem to check...
    Checking filesystem on /dev/mapper/md1
    UUID: c262207c-8afe-4501-9ed2-522fd075f58d
    [1/7] checking root items
    Fixed 0 roots.
    [2/7] checking extents
    ref mismatch on [15044599808 131072] extent item 1, found 0
    incorrect local backref count on 15044599808 root 5 owner 4996 offset 28704768 found 0 wanted 1 back 0x1730fd0
    backref disk bytenr does not match extent record, bytenr=15044599808, ref bytenr=15043026944
    backpointer mismatch on [15044599808 131072]
    owner ref check failed [15044599808 131072]
    repair deleting extent record: key [15044599808,168,131072]
    Repaired extent references for 15044599808
    ref mismatch on [1099503798605 131072] extent item 0, found 1
    unaligned extent rec on [1099503798605 131072]
    record unaligned extent record on 1099503798605 131072
    No device size related problem found
    [3/7] checking free space cache
    cache and super generation don't match, space cache will be invalidated
    [4/7] checking fs roots
    root 5 inode 4996 errors 4540, bad file extent, file extent discount, nbytes wrong
    Found file extent holes:
            start: 28704768, len: 131072
    ERROR: errors found in fs roots
    found 2487837077504 bytes used, error(s) found
    total csum bytes: 2426215532
    total tree bytes: 2765979648
    total fs tree bytes: 6717440
    total extent tree bytes: 16760832
    btree space waste bytes: 283448596
    file data blocks allocated: 2485070966784
     referenced 2485070966784

    I need advice on the next steps to get back to a clean filesystem, please.

     

    I see several solutions:

    1. there is something more I can do to recover with btrfs check --repair that I'm not yet aware of
    2. use btrfs restore to copy the data to another drive, then reformat
    3. use unBALANCE to move the data elsewhere (maybe btrfs recover?), then reformat
    4. a fourth answer 😆, please tell

     

    Thank you for any guidance or suggestions!

     

    Take care and stay at home

     

    Reynald

     

     

  10. Hello,

     

    I'd like to report a similar issue.


    I subscribed to this thread because I had the same error, with the GUI throwing 404 or 500 errors and an empty /usr/local/emhttp/ folder. I'm running Unraid DVB 6.8.2.

    It happened 4 times over 3 weeks, in March. I had no issue in April and cannot reproduce it. The only thing to do is to reboot.
    My user scripts and plugins had been running for months.

     

    I have no more issues and did not find the culprit. I assume it was a hardware issue, because the flash drive is inserted in an adapter plugged directly onto the motherboard pins.

     

     

  11. @trurl I checked: 2 "Reported uncorrect" on the parity disks while the disk rack was hot, after a parity check following a reboot.
    No pending or reallocated sectors. I ran another parity check and it hasn't reproduced.

    [screenshot: SMART attributes]


    Now I'd like to acknowledge/reset the error in FCP without ignoring it. Is that possible?
