IronBeardKnight

Members
  • Posts

    40
  • Joined

  • Last visited

Posts posted by IronBeardKnight

  1. On 6/4/2022 at 11:09 AM, WenzelComputing said:

    I gave up on waiting for an official fix. Here is my unofficial fix:

    [screenshot]

    Go back to version 4.3.9.

    The new version is completely broken.

     

    I was also getting this issue when enabling VPN: yes on the latest tag. I lose all access to the GUI.

    Rolling back as per the previous posts brought me back up and running.

    Obviously not a full solution.

    Found this in the supervisord.log:

    [screenshot]

    Edit: This still did not fix the issue; after I ran a CA Backup and the container auto-started again, it was back to no GUI and the error above.

    Please help

     

  2. 6 hours ago, hugenbdd said:

    Few things. If you scroll way back in the thread we talk about atime etc. I can't remember the specifics, but the way unRAID is implemented doesn't allow us to see the last "accessed" time of files. That would have made it very easy to move files over.

    If someone comes up with code that provides a file list, I could call that and send it to mover, i.e. off cache or onto cache. Essentially just using a cat filelist > mover.

    If I get some time over the next few days I'll give it a shot; otherwise, our own or derived version of the Active IO Streams or Open Files plugin would need to be created to keep track of accessed / last-accessed files, which would give you your list.

  3. Hmm.

    On 5/11/2022 at 6:50 PM, -Daedalus said:

     

    Got'cha, thanks for the explanation.

     

    Though what I was requesting was different:

    I'd like the ability to say "Move according to tuning rules for all shares except these ones. For these ones, just follow stock mover rules (daily, etc.)"

    Everything is possible given time. However, I don't believe there has been significant need or request for something like this yet, as what we currently have caters for 95% of situations. That said, I have come across a couple of situations where temporarily delayed moving at a file/folder level would have been good.

    So, to state your request another way: you basically want a timer option that can be set per file/folder/share?

    As for getting this done, the best place to start would be to modify the current "File list path" option ([screenshot]) to do what you want. E.g. you would add your file/folder locations as normal, with a space at the end followed by 5d, 7h, 9m, or 70s; this would be the time the entry stays on cache when using the parent share folder with Cache: Yes.
     

    Code changes would need to go from:

        looped if (exists)

    to something like:

        looped if (exists && current_date < $variableFromLine)

    Not actual code ^, but you get the hint :)


    The problem with this is that the script now has to store the date and time of arrival on cache for each listed folder/file, and for every subsequent child file/folder, giving them individual IDs and times to reference. You can imagine how fast this grows in compute and I/O for the mover, not to mention that it now needs a database of sorts. A mover database won't be as easy as first thought to implement, since the mover is a bash script; it's a lot of extra coding and potentially edges on a complete mover redesign.
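A rough sketch of the per-entry timer idea in shell (the mover's own language). The `<path> <days>` skiplist format, the `check_entry` helper name, and the day granularity are all assumptions for illustration, not the plugin's actual syntax:

```shell
#!/bin/sh
# Hypothetical per-entry timer for a mover skiplist.
# Each skiplist line: "<path> <days>". A path is skipped (kept on cache)
# until it is <days> old; after that it becomes eligible to move.
check_entry() {
  path=$1; days=$2
  [ -e "$path" ] || return 0
  now=$(date +%s)
  mtime=$(stat -c %Y "$path")            # GNU stat; Unraid is Linux
  if [ $((now - mtime)) -lt $((days * 86400)) ]; then
    echo "skip $path"
  else
    echo "move $path"
  fi
}

# Driver: while read -r p d; do check_entry "$p" "$d"; done < skiplist.txt
```

Even this toy version shows the catch described above: it keys off mtime, because storing true arrival-on-cache times per file would need a separate database.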
     

    I cannot see this being implemented from my point of view.

     @-Daedalus thank you for raising it here though.

    New Idea!
    What could potentially solve your issue, and would be very cool in nearly every case, is if we were to take, for example, the code from the Active-IO-Streams and/or Open-Files plugins, modify it a little to tell the mover which files are most frequently accessed (by access count and time open), and make the mover bi-directional, i.e. taking those frequent files and moving them onto the cache.
    Having the option to auto-add those files to the mover's exclude file list would also be great, as it stops the files from being moved back too soon if, like me, you run the mover hourly. At that point you could have each file and/or folder added to the list automatically (basically using the txt file as a DB), which would let you add a blanket timeframe to each entry for your original needs; or, instead of a blanket timeframe, have it auto-configure based on usage of the file or folder, e.g. accessed > 9 times.

     

    The mover then becomes essentially a proper bidirectional cache, and your system gets a little smarter by keeping what is frequently accessed on the faster drives; but again, this is basically a mover plugin redesign.
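A minimal sketch of the promote-to-cache half, assuming some tracker (e.g. an Open-Files-style plugin, which doesn't expose this today) could produce `count path` lines; the threshold of 9 accesses mirrors the example above:

```shell
#!/bin/sh
# Hypothetical: emit paths accessed more than 9 times as candidates for
# the mover to pull onto the cache (and onto the exclude list).
promote() { awk '$1 > 9 { print $2 }' "$1"; }

# Demo input standing in for real tracker output:
counts=$(mktemp)
printf '12 /mnt/user/media/hot.mkv\n3 /mnt/user/media/cold.mkv\n' > "$counts"
promote "$counts"
```

The mover could then move each printed path to the cache and append it to the exclude list so the next run doesn't immediately move it back.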

    I would be happy to help get something like this out the door, but as this is not my plugin and my time is limited, it's a decision that is not up to me.

    @hugenbdd, not sure if you're down to go this far, but it's all future potential and ideas.

    Pardon the grammar and spelling; I was in a rush.

  4. Has anyone actually done a perf test of different-sized files using XFS vs btrfs with the mover, to help speed things up?

    I know that XFS, with admittedly fewer features, has better I/O in general on Linux. However, with the mover choking on large numbers of small files (e.g. KB-sized) on btrfs, I wonder: has anyone actually tested move time/performance of the two filesystems for the purposes of the mover?
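For anyone wanting to test this, a rough harness: create a batch of KB-sized files and time the copy, once with the destination on an XFS disk and once on btrfs. The temp dirs below are stand-ins; point `src`/`dst` at real mounts:

```shell
#!/bin/sh
# Small-file move benchmark sketch. Substitute real mount points, e.g.
#   src=/mnt/cache/perftest   dst=/mnt/disk1/perftest
src=$(mktemp -d)
dst=$(mktemp -d)
i=0
while [ $i -lt 1000 ]; do
  head -c 4096 /dev/urandom > "$src/f$i"   # 1000 x 4 KB files
  i=$((i + 1))
done
t0=$(date +%s)
cp -a "$src/." "$dst/"
echo "copied 1000 small files in $(( $(date +%s) - t0 ))s"
```

Repeating with the same file set on each filesystem would give a like-for-like comparison of the mover's worst case.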

  5. On 5/9/2022 at 11:23 PM, -Daedalus said:

     

    I saw that, but the wording of the help section makes it sound like it'll just never get moved off cache:

     

    [screenshot]

    This is correct; it is used for keeping things on the cache (excluding them from the move).

    Situation example:

    You have a share set to Cache: Yes. Data is read/written to the cache until the criteria are met for the mover to run; when the mover runs, normally every bit of that share's data currently on the cache is moved to the array.

    Let's say you have a bunch of sub-files or folders in that share that you would like to stay on the cache when the mover runs, so that applications that depend on that data can run faster using the cache.

    Having this option allows you to create fewer shares and increases the speed of the applications you have used it for.

    E.g. Nextcloud requires a share for user data, which includes docs, thumbnails, photos, etc. If you set that share to Cache: Yes, all the data that was once on the cache becomes very slow after the mover runs, especially small files, as it gets transferred/moved to the array; things like thumbnails then have to be read from the array instead of the cache.

    Enter this mover feature!
    It allows you to find the thumbnail sub-sub-sub-folder (or whatever else you want) and set it to stay on the cache regardless of mover runs, while all the actual pictures, docs, etc. not specified still get moved to the array. This keeps the end-user experience nice and fast in the Nextcloud GUI/webpage because your thumbnails stay cached, while optimizing your cache storage by having the huge, rarely accessed files sit in slower storage.

    Summary: this feature of the mover allows for more:
    Granular cache control
    Cache space saving
    Application/Docker performance
    Less mover run time
    Faster load times for games (if you set assets, .exe files, etc. to stay on cache)
    etc.
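For the Nextcloud example, the exclude list fed to "File list path" might look something like this (these preview/thumbnail paths are purely illustrative; the real ones depend on your install):

```
/mnt/cache/nextcloud/appdata/preview
/mnt/cache/nextcloud/data/thumbnails
```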



     

  6. On 4/25/2022 at 6:52 PM, Neo_x said:

    Running into a bit of a challenge here, where mover only seems to move a certain percentage of files.

    Expected operation: the cache share hits 95% usage, then mover moves all files to the array. Instead it only moves roughly 20%.

    On the 23rd I did a manual move (Main menu, the Move button), which took it down to about 10%.

    The 24th and 25th were the automatic moves once it hit 95%, but they only moved about 25%.

    [screenshot]

    Configuration as follows:

    [screenshot]

    [screenshot]

    Any ideas? Should I enable mover logging and/or test mode for further troubleshooting?

     

    Thx!

    Check your share settings, as you may have had something going to cache that has now been set to only use the array; those files get left on the cache and are never moved.

    The correct procedure when changing a share to no longer use the cache is always: stop whatever is feeding that share, do a full mover run, then change the share setting to array only.

     

    I hope this helps

     

  7. 3 hours ago, -Daedalus said:

    It goes without saying, but I'll say it anyway: Fantastic plug-in, on my list of must-haves.

     

    Feature request: Ability to exclude/include specified shares in mover tuning, vs just moving on whatever the set schedule is (as if tuning wasn't installed).

     

    Use case: Newly added files go to cache and are added to Plex, and usually watched over the next few days (they're popular at the time, etc) so I like them to hang around for a few days to avoid issues around hard drive thrashing.

    I have several things backing up to cache (perf reasons), which I'd like moved to array every night as they don't need to live on cache for the 5 days like everything else.

     

    Unless of course I'm missing a way to do this as-is!


    This is already a feature via the exclude location list.

  8. 1 minute ago, IronBeardKnight said:

    Still not able to see icons. Everything else is working, just not the icons.

    Linking back to that post on the first forum page does nothing; it does not explain what's going on or how to fix it. :(



    I have fixed this issue; for anyone wanting to know what the problem was:
    Step 1: Grafana

    [screenshot]

    Step 2: Grafana

    Make sure this is your local IP if you're not exposing Unraid or Grafana to the internet and are keeping it local.

    [screenshot]

    Step 3: Grafana

    Confirm you have the correct encryption method selected (or none, if you're running without encryption) and apply.

    Do this for every graph that uses the Unraid-API.

    [screenshot]

  9. On 3/22/2021 at 11:04 AM, skaterpunk0187 said:

    I could not find anything about this. I spent the last hour on it: I mirrored my server port on my switch and ran tcpdump. It turns out that if you are using the unraid.net plugin with remote access enabled, it disables direct IP connections via DNS hackery with a random string .unraid.net. This blocks the UNRAID-API from connecting to Unraid by IP, and UNRAID-API is not capable (from what I can tell) of using an FQDN as a connection; side note, it just doesn't use DNS lookup. Also, UNRAID-API or Unraid itself seems to have an issue with "Use SSL/TLS" set to auto (at least for me), but works 100% if that setting is set to yes or no and the proper settings are used in UNRAID-API. As soon as I signed out and removed the unraid.net plugin, the API worked just fine. Hopefully this will help others.

    And Awesome work @falconexe with 1.6.

    Still not able to see icons. Everything else is working, just not the icons.

    Linking back to that post on the first forum page does nothing; it does not explain what's going on or how to fix it. :(

  10. On 3/22/2021 at 8:55 AM, falconexe said:

    For all those having issues with the images, please see the post right after the release post. You need to adjust the IP address in the query (server) and the Base URL in the plugin.

     

    For those who are having issues with the UNRAID API even showing data (on its own web page), please ensure you log in with "root" and that password. Please also pay attention to the HTTPS checkbox. If you get this wrong, it will not work! Give it a few minutes. If you are still not having success, try stopping and restarting the docker. If that does not work, completely blow away the docker AND APP DATA folder for the docker, and try again with "root" and the correct level of security (checkbox).

     

    If all else fails, please report the NON UUD issue to the topic forum that handles that Docker.

     

  11. Just now, IronBeardKnight said:

    Man, I'm having so much trouble with this container.

    Used the .env trick, then got a chown error for it on the setup page, so I chowned the whole app directory as the .env does not exist anymore. Woo-hoo, one step further.

    Then, straight after submitting on the setup page, a 500 error, and so many errors in the logs:

    child 37 said into stderr: "src/Illuminate/Pipeline/Pipeline.php(149): Illuminate\Cookie\Middleware\EncryptCookies->handle(Object(Illuminate\Http\Request), Object(Closure)) #36

    etc., etc.

    It would be so good if this worked properly. Not sure why this container self-generates an SSL cert through Let's Encrypt; most people running the container will be using reverse proxies anyway.

    I tried editing in GEN_SSL, but it just completely breaks the container.

  12. Man, I'm having so much trouble with this container.

    Used the .env trick, then got a chown error for it on the setup page, so I chowned the whole app directory as the .env does not exist anymore. Woo-hoo, one step further.

    Then, straight after submitting on the setup page, a 500 error, and so many errors in the logs:

    child 37 said into stderr: "src/Illuminate/Pipeline/Pipeline.php(149): Illuminate\Cookie\Middleware\EncryptCookies->handle(Object(Illuminate\Http\Request), Object(Closure)) #36

    etc., etc.

    It would be so good if this worked properly. Not sure why this container self-generates an SSL cert through Let's Encrypt; most people running the container will be using reverse proxies anyway.

  13. Currently the Invoice Ninja Unraid container is not working at all, and there are no instructions for navigating the errors.

    It appears your Docker container is broken.

    Not only do you have to run php artisan migrate, but after you get the DB and everything set up you run into the errors below, along with many more of the same type.


    [15-Mar-2021 14:19:32] WARNING: [pool www] child 31 said into stderr: "[2021-03-15 04:19:32] production.ERROR: ***RuntimeException*** [0] : /var/www/app/vendor/turbo124/framework/src/Illuminate/Encryption/Encrypter.php [Line 43] => The only supported ciphers are AES-128-CBC and AES-256-CBC with the correct key lengths. {"context":"PHP","user_id":0,"account_id":0,"user_name":"","method":"GET","user_agent":"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/89.0.4389.86 Safari/537.36","locale":"en","ip":"192.168.1.14","count":1,"is_console":"no","is_api":"no","db_server":"mysql","url":"/"} []"
    [15-Mar-2021 14:19:32] WARNING: [pool www] child 31 said into stderr: "[2021-03-15 04:19:32] production.ERROR: [stacktrace] 2021-03-15 04:19:32 The only supported ciphers are AES-128-CBC and AES-256-CBC with the correct key lengths.: #0 /var/www/app/vendor/turbo124/framework/src/Illuminate/Encryption/EncryptionServiceProvider.php(28): Illuminate\Encryption\Encrypter->__construct('7kg2Ca9E8BTaSa8...', 'AES-256-CBC') #1 /var/www/app/vendor/turbo124/framework/src/Illuminate/Container/Container.php(749): Illuminate\Encryption\EncryptionServiceProvider->Illuminate\Encryption\{closure}(Object(Illuminate\Foundation\Application), Array) #2 /var/www/app/vendor/turbo124/framework/src/Illuminate/Container/Container.php(631): Illuminate\Container\Container->build(Object(Closure)) #3 /var/www/app/vendor/turbo124/framework/src/Illuminate/Container/Container.php(586): Illuminate\Container\Container->resolve('encrypter', Array) #4 /var/www/app/vendor/turbo124/framework/src/Illuminate/Foundation/Application.php(732): Illu...

     

  14. On 1/7/2020 at 6:24 PM, ich777 said:

    Can you please attach the logs?

    Also please note that the first startup can take really long (don't interrupt it, since it's compiling Rapid Photo Downloader; even on my 32-core machine it takes about 3 minutes).

    I recommend you delete the container (and also the folder in appdata) and start over.

    Hi @ich777,

     

    Thank you first of all for all your application containers.

     

    Rapid Photo Downloader seems to be broken.

    I have tried wiping the container completely and then redownloading with both the latest URL (version 24) and the default URL you published the Docker with (version 17), and both get the error below over and over again.

    [screenshot]

  15. On 1/15/2020 at 12:31 PM, sjerisman said:

    And, I repeated the same Windows 7 'real' VM test one more time, but this time used the SSD cache tier as the destination instead of the HDD...

     

    With the old compression code, it took 1-2 minutes to copy the 18 GB image file from the NVMe UD over to the dual SSD cache, and then still took 13-14 minutes to further .tar.gz compress it down to 8.4 GB.  The compression step definitely seems CPU bound (probably single threaded) instead of I/O bound with this test.

     

    With the new inline compression code, it still only took about 1-2 minutes to copy from the NVMe UD and compress (inline) over to the dual SSD cache and still produced a slightly smaller 8.2 GB output file.  The CPU was definitely hit harder, and probably became the bottleneck (over I/O), but I'm really happy with these results and would gladly trade off higher CPU for a few minutes for much lower disk I/O, much less disk wear, and much faster backups.

    Very interesting. I'm wondering if this is an older-version thing, because at default compression levels pigz is faster at compression and massively faster at decompression; the resulting size, however, is not as good.

     

    I do believe this plugin could use some work on the descriptions of each setting, for example doing away with gzip and just referencing pigz, to avoid confusion as to whether it's multithreaded or not.
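To illustrate the difference being discussed, here is the same tar archive written through single-threaded gzip versus pigz (paths are throwaway; the pigz run is guarded since it may not be installed):

```shell
#!/bin/sh
# gzip vs pigz on identical input; compare the elapsed seconds.
src=$(mktemp -d)
head -c 8388608 /dev/urandom > "$src/blob"        # 8 MB sample
t0=$(date +%s)
tar -czf /tmp/single.tar.gz -C "$src" .           # gzip: one core
echo "gzip: $(( $(date +%s) - t0 ))s"
if command -v pigz >/dev/null 2>&1; then
  t0=$(date +%s)
  tar -I pigz -cf /tmp/multi.tar.gz -C "$src" .   # pigz: all cores
  echo "pigz: $(( $(date +%s) - t0 ))s"
fi
```

GNU tar's `-I` (`--use-compress-program`) is how pigz gets swapped in; the output format is still plain gzip, so either tool can decompress it.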

     

    This is an awesome comparison of many different compression methods: Compression Comparison (just scroll down).

  16. Hmm, if it's munching CPU hard, is it trying to move something that is actually running, like VMs or Docker containers? CPU would spike hard if you tried to move something that is actively processing data. You may also have downloads feeding your cache, if you have set it up that way, and the mover may be trying to move active files when it runs.

    What are your share settings for each share that uses the cache?

     

    There are many possibilities here, and given the number of variables associated with using the cache (and thus the mover), it's just a process of narrowing things down one at a time.

    It's also possible, if you have an overclock, that a modified BCLK is too high, causing instability on other system devices, e.g. SATA/SAS drives and PCIe lanes, which could have adverse effects.

     

    I know it's a pain, but if all else fails, go stock on everything except the RAID itself and test one change at a time.

     

    I'm not having a go, just trying to be as helpful as I can. :) Please let us know how you go.

  17. 9 hours ago, hugenbdd said:

    Thanks for the sparse file link.   I will be able to test/recreate now.  I created the find script based on a post a page or so back, but wasn't really sure how to test it.

     

    Option 1: This should be possible and a smaller change overall.

    Option 2: Sounds like you want options applied per share.  I'm not ready to change the code so it supports different settings for different shares.  There is a LOT of work there...

    Hey mate :) Yeah, just suggestions; I think option one would be the easiest as well. The second option is such a rare-case situation that it can be stored in the archive for later use, if there is ever a need or demand for it.

     

    Oh, also: I think the spelling mistake may still be there, FYI, as I could see it even after updating the plugin.

     

    Hint For The Masses:

    We need to keep in mind that exclusion file types or locations should always take effect before any criteria to move based on age / last accessed / space / whatever else, as exclusions are for sub-directories and/or files that need to stay on cache no matter what other options you have selected.

     

    Personally, these new features have sped up my Nextcloud instance alone exponentially, and I'm looking to do some testing with game load times in future as well.

     

    Thank you again to @hugenbdd for doing all the groundwork.

     

     

  18. A Small Guide

     

    For those who want to use the new features but are a little lost, perhaps this will save you some time.
     

     

     

    Open the Unraid CMD (terminal).

        cd /boot/config/plugins/ca.mover.tuning

        sudo nano skiplist.txt

     

    In the nano editor, list your locations to skip, one per line, like the following:

    [screenshot]

     

       Ctrl + O (save)

       Ctrl + X (exit)

     

    Note: The list of locations may or may not include /mnt/cache/, as this is already catered for within the mover script.

     

    To find specific files by name or kind, for example all .png files in a location, and then put them in your exclusion list in case they are mixed in with other files, use something like the example below.


      find "/mnt/cache" -type f \( -name "*.png" \) -print
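The find output can be fed straight into the skiplist. A self-contained sketch, with temp dirs standing in for /mnt/cache and the plugin folder:

```shell
#!/bin/sh
# Append every .png under the cache to the mover exclude list.
cache=$(mktemp -d)                  # stand-in for /mnt/cache
mkdir -p "$cache/share/thumbs"
touch "$cache/share/thumbs/a.png" "$cache/share/video.mkv"
list=$(mktemp)                      # stand-in for skiplist.txt
find "$cache" -type f \( -name "*.png" \) -print >> "$list"
cat "$list"                         # only the .png path appears
```

On a real system you would point `find` at /mnt/cache and `>>` at /boot/config/plugins/ca.mover.tuning/skiplist.txt, then review the file in nano before saving.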


    Open the CA Mover Tuning GUI.

    In the CA Mover Tuning plugin, set the location of your skiplist.txt as below:

        File list path: /boot/config/plugins/ca.mover.tuning/skiplist.txt


    [screenshot]
