Reynald

Members · 25 posts
  1. Awesome! Thank you very much. I recall having a USB watchdog that needed a specific kernel driver compiled; now I can play with that again!
  2. Hello, thank you for your interest and warm words @hugenbdd! This script took me quite a few hours of thinking and scripting. I was not aware of the mover binary. If I'm not mistaken, the /usr/local/sbin/mover.old script, where you found the snippet used in your example, used to invoke rsync; I recall picking the rsync options (-aH) from that mover.old script. My strategy is not to move, but to archive-sync in both directions (same as mover), and to delete from cache depending on disk usage, never deleting on the array. Some benefits:
     - A file is not overwritten if identical, and the latest copy is on cache if it exists there. Moving from cache to array and vice versa would take more time than duplicating data (mover also does not strictly move, but syncs then deletes).
     - Copying from array to cache leaves the data secured by parity.
     - Having control over deletion allows handling hard links (a torrent seeded by transmission from cache is also available for Plex). Mover preserves them too, because it moves a whole directory, but I'm moving individual files.
     - I can bypass the cache "prefer/yes/no/only" directives and set mover so it won't touch my "data" share until I'm short on cache space (i.e. if this smart-cache script is killed).
     - Using rsync with the -P parameter while debugging/testing gives some status/progress info.
     Drawbacks:
     - Data is duplicated.
     - Deletions and modifications made on the array via /mnt/user0 or /mnt/diskN are not synced to /mnt/cache. This is not possible if we use /mnt/user for the 'data' share. But thanks to your suggestion (the file-list idea), I have an idea for syncing cache-only files (i.e. fresh transmission downloads during quiet hours) to the array. Also, mover may do some extra checks from array to cache. From cache to array, I rely on the Unraid shfs mechanism, since I sync to /mnt/user0 (and not to /mnt/diskN); hard links are likewise well handled by shfs.
     If you want to use this script for Plex only, you can set $TRANSMISSION_ENABLED to false, or, if you want to clean up the script, remove the #Transmission parameters, the transmission_cache function, and its call ('$TRANSMISSION_ENABLED && transmission_cache') at the bottom of the script. I may extend it to other torrent clients later.
  3. Updated to v0.5.14:
     - Improved verbosity (new settings).
     - Added parameter CACHE_MAX_FREE_SPACE_PCT="85" in addition to CACHE_MIN_FREE_SPACE_PCT="90". When cache usage exceeds CACHE_MIN_FREE_SPACE_PCT (here 90%), the cache is freed until CACHE_MAX_FREE_SPACE_PCT is reached (here 85%).
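A sketch of that two-threshold (hysteresis) behaviour, using the parameter names from the post. The usage value is simulated here instead of querying df, and the freeing step is a placeholder for the real copy-oldest-file-to-array-then-delete logic.

```shell
#!/bin/sh
# Two-threshold clean-up: start freeing when usage passes
# CACHE_MIN_FREE_SPACE_PCT, keep going until CACHE_MAX_FREE_SPACE_PCT.
CACHE_MIN_FREE_SPACE_PCT=90
CACHE_MAX_FREE_SPACE_PCT=85

usage=93   # pretend `df --output=pcent /mnt/cache` reported 93% used

if [ "$usage" -gt "$CACHE_MIN_FREE_SPACE_PCT" ]; then
    while [ "$usage" -gt "$CACHE_MAX_FREE_SPACE_PCT" ]; do
        # real script: rsync the oldest cache file to /mnt/user0, then delete it
        echo "freeing oldest file (usage ${usage}%)"
        usage=$((usage - 1))   # each freed file lowers usage a bit
    done
fi
echo "done at ${usage}%"
```

The gap between the two thresholds prevents the script from oscillating: once triggered at 90%, it cleans all the way down to 85% instead of stopping at 89% and firing again on the next run.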
  4. Hello all, I updated the script 2 days ago and it's holding tight! I have very few spin-ups now, because the 1.4 TB of most recent data is duplicated on the SSD. It's on my GitHub: https://bit.ly/Ro11u5-GH_smart-cache Shall I make this a plugin?
  5. Hello all,
     Background: I have an 8-disk (mechanical HDD) 40 TB array in an Unraid v6.8.2 DVB server. I have 40 GB of memory, of which only about 8 GB is used; I don't use memory for caching, for now. I have a 2 TB SSD mounted as cache, hosting docker appdata and VM domains. Until now I was using the Unraid cache system to store only new files from a data share, with a script moving them to the array when the SSD was 90% full. With this method only the latest written files were on the cache, so I rethought the whole thing, see below. I use Plex to stream to several devices on LAN (gigabit ethernet) or WAN (gigabit fiber internet), and I also seed torrents with transmission. Here is my share setup:
     So I wanted to dynamically cache files from the data share to the SSD. The main file consumers are Plex and transmission, which keep their data in a data share. As a fail-safe, I set mover to only move files if cache usage is above 95%. I wrote a script to automagically handle caching of the data share, using the SSD up to 90% (including appdata and VMs).
     What the script needs:
     - an RPC-enabled transmission installation (optional)
     - access to the Plex web API (optional)
     - the path to a share on cache
     - the path to the same share on the array
     What the script does: when you start it, it makes basic connection and path checks, and then three main functions are executed:
     - Cleans the selected share on cache so that at least 10% is free (configurable). To free space, the oldest data is copied back to the array and then deleted from cache.
     - Retrieves the list of active torrents from the transmission-rpc daemon and copies them to cache without removing them from the array. (Note: active torrents are those downloading and seeding during the last minute, but also those starting and stopping; that's a caveat if you start/stop a batch of torrents and launch the script within the same minute.)
     - Retrieves the list of active playing sessions from Plex and copies (with rsync, same as mover or unbalance) the movies to cache without removing them from the array. For series, there are options to copy either the current and next episode, or all episodes from the current one to the end of the season.
     - Cleans again.
     Notes:
     - To copy, rsync is used, like mover or unbalance, so it syncs data (doesn't overwrite existing files); in addition, hard links, if any (from radarr, sonarr, etc.), are recreated on the destination (cache when caching, array when cleaning the cache).
     - If you manually send a file to the share on cache, it will be cleaned when it gets old; you may write a side script for that case (working files, libraries, etc.).
     - Because of the shfs mechanism, accessing a file from /mnt/user will read/write from cache if it exists there, and from the array otherwise. Duplicate data is not a problem and globally speeds things up.
     The script is very useful when, like me, you have noisy/slow mechanical HDDs for storage and a quick, quiet SSD to serve files.
     Script installation: I recommend copy/pasting it into a new script created with User Scripts.
     Script configuration: no parameters are passed to the script, so it's easy to use with the User Scripts plugin. The section to configure is at the beginning of the script; the parameters are pretty much self-explanatory. Here is a log from an execution:
     Pretty neat, hum?
     Known previous issues (updates may come to fix them later):
     - At the moment the log can become huge if, like me, you run the script every minute. This is the recommended interval, because the transmission-RPC active torrent list only contains the torrents from the last minute. Edit 13-02-2020: corrected in the latest version.
     - At the moment an orphan file (present only on cache) being played or seeded is detected, but not synced to the array until it needs to be cleaned (i.e. fresh torrents, recent movies fetched by *arr and newsgroups, etc.). Edit 13-02-2020: corrected in the latest version: it syncs back to the array during the configured day (noisy) hours.
     - I don't know if/how shfs will handle the file on cache. I need more investigation/testing to see whether it efficiently reads the file from cache instead of the array. I guess transmission/Plex need to close and reopen the file to pick it up from its new location? (My assumption is that both read in chunks, so caching should work.) Edit 13-02-2020: yes, after checking with the File Activity plugin, that's the case: Plex/transmission take the file from cache as soon as it is available!
     Conclusion, disclaimer, and link: the script has run successfully in my configuration since yesterday. Before using rsync I was using rclone, which has a cache back-end, a similar Plex caching function, plus a remote (I used it for transmission), but it's not as smooth and quick as rsync. Please note that, even though I use it 1440 times a day (User Scripts, custom schedule * * * * *), this script is still experimental and can:
     - erase (or more likely fill up) your SSD — Edit 13-02-2020: I have not experienced this, and error handling has improved;
     - erase (not likely) your array — Edit 13-02-2020: technically, this script never deletes anything on the array, so it won't happen;
     - kill your cat (sorry);
     - make your mother-in-law move into your home and stay (I can do nothing about that);
     - break your server into pieces (you can keep those).
     Thanks for reading this far; you deserve the link to the script (<- link is here). If you try it or have any comment, idea, recommendation, question, etc., feel free to reply. Take care, Reynald
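For the transmission step above, the script asks the RPC daemon for the torrents active in the last minute, e.g. with the stock CLI `transmission-remote localhost:9091 -n user:pass -t active -l` (host, port, and credentials are placeholders). A hypothetical illustration of extracting the torrent IDs from that tabular listing, using a canned sample so it can be shown without a live daemon:

```shell
#!/bin/sh
# Sample mimicking `transmission-remote ... -t active -l` output; the real
# listing's column widths vary, but data rows start with a numeric ID.
sample='ID   Done  Have     ETA    Up    Down  Ratio  Status       Name
 1    100%  1.40 GB  Done   12.0   0.0   2.1   Seeding      some.movie.mkv
 2     37%  512 MB   2 hrs   0.0   8.4   0.0   Downloading  other.file
Sum:        1.90 GB         12.0   8.4'

# Keep only rows whose first column is a numeric torrent ID, dropping the
# header and the trailing "Sum:" line.
ids=$(printf '%s\n' "$sample" | awk '$1 ~ /^[0-9]+$/ {print $1}')
echo "$ids"
```

Each ID can then be fed back to transmission-remote (e.g. `-t "$id" -f` to list that torrent's files) to build the list of paths to rsync onto the cache.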
  6. Solved: here is what I did. The first error, reported in the GUI after an unclean reboot, was: Unmountable BTRFS - No filesystem. So I mounted the array in maintenance mode and did:
     root@Serveur:~# btrfs rescue super-recover -v /dev/mapper/md1  # reported all supers are valid and did not recover
     root@Serveur:~# btrfs-select-super -s 1 /dev/mapper/md1  # to force using the first mirror
     The drive mounted, but had been reconstructed... Then, because btrfs check still reported errors, I used unBalance to transfer the data to a healthy drive. I then mounted the array in maintenance mode and used:
     mkdir -p /mnt/disk2/restore && mount /dev/mapper/md2 /mnt/disk2/restore
     btrfs restore -v /dev/mapper/md1 /mnt/disk2/restore
     It restored some other broken files. Finally, I unmounted, changed the filesystem to another one in the GUI so the drive got reformatted, changed it back to BTRFS Encrypted, and... voilà!
  7. Thank you, I'm unbalancing then. Should I try btrfs restore before formatting? (I cannot know whether there are files missing/still to recover.)
  8. Thank you @johnnie.black, I had already read this. As the disk is mounted, should I prefer btrfs restore with the array in maintenance mode, or unBalance, please?
  9. Hello all, [Solved], see below, thanks @johnnie.black
     History: I had a problem with "Unmountable BTRFS - No filesystem" on an (almost new, installed a few weeks ago) array drive. I solved it with the help of this topic: basically, I copied a mirror superblock over the main one.
     root@Serveur:~# btrfs rescue super-recover -v /dev/mapper/md1  # reported all supers are valid and did not recover
     root@Serveur:~# btrfs-select-super -s 1 /dev/mapper/md1  # to force using the first mirror
     The drive mounted and, although the data was already there (manually mounted with the array in maintenance mode), it has been reconstructed. I then ran a btrfs scrub on it: no errors. I still have an inaccessible folder that I can't even delete, and some unrepairable errors:
     root@Serveur:~# btrfs check --repair /dev/mapper/md1
     enabling repair mode
     WARNING: Do not use --repair unless you are advised to do so by a developer or an experienced user, and then only after having accepted that no fsck can successfully repair all types of filesystem corruption. Eg. some software or hardware bugs can fatally damage a volume. The operation will start in 10 seconds. Use Ctrl-C to stop it. 10 9 8 7 6 5 4 3 2 1
     Starting repair.
     Opening filesystem to check...
     Checking filesystem on /dev/mapper/md1
     UUID: c262207c-8afe-4501-9ed2-522fd075f58d
     [1/7] checking root items
     Fixed 0 roots.
     [2/7] checking extents
     ref mismatch on [15044599808 131072] extent item 1, found 0
     incorrect local backref count on 15044599808 root 5 owner 4996 offset 28704768 found 0 wanted 1 back 0x1730fd0
     backref disk bytenr does not match extent record, bytenr=15044599808, ref bytenr=15043026944
     backpointer mismatch on [15044599808 131072]
     owner ref check failed [15044599808 131072]
     repair deleting extent record: key [15044599808,168,131072]
     Repaired extent references for 15044599808
     ref mismatch on [1099503798605 131072] extent item 0, found 1
     unaligned extent rec on [1099503798605 131072]
     record unaligned extent record on 1099503798605 131072
     No device size related problem found
     [3/7] checking free space cache
     cache and super generation don't match, space cache will be invalidated
     [4/7] checking fs roots
     root 5 inode 4996 errors 4540, bad file extent, file extent discount, nbytes wrong
     Found file extent holes: start: 28704768, len: 131072
     ERROR: errors found in fs roots
     found 2487837077504 bytes used, error(s) found
     total csum bytes: 2426215532
     total tree bytes: 2765979648
     total fs tree bytes: 6717440
     total extent tree bytes: 16760832
     btree space waste bytes: 283448596
     file data blocks allocated: 2485070966784 referenced 2485070966784
     I need advice on the next steps to get a clean filesystem, please. I see several options:
     - there is something more to do with btrfs check --repair that I'm not yet aware of
     - use btrfs restore to copy to another drive, then reformat
     - use unBalance to move the data elsewhere (maybe btrfs recover?), then reformat
     - a fourth answer 😆, please tell
     Thank you for any guidance or suggestion! Take care and stay at home, Reynald
  10. Hello, I'd like to report a similar thing. I subscribed to this thread because I had the same error, with the GUI throwing 404 or 500 errors and an empty /usr/local/emhttp/ folder. I'm running Unraid DVB 6.8.2. It happened 4 times over 3 weeks in March; I had no issues in April and cannot reproduce it. The only thing to do is to reboot. My user scripts and plugins had been running for months. I have no more issues and did not find the culprit. I assume it was a hardware issue, because the flash drive is inserted in an adapter directly on the motherboard pins.
  11. I second that question! I get 500 Internal errors from time to time, so I have to do a host reset, which leads to a parity check after restart...
  12. Hello, I'm in with Bionic RDP. Even using the advanced view, I cannot find where to choose the team when adding the project... Any guidance, please?
  13. Same here, plus sometimes I have to delete data. Issue opened here: https://github.com/home-assistant/core/issues/25747