IronBeardKnight


Posts posted by IronBeardKnight

  1. 1 minute ago, IronBeardKnight said:

    Still not able to see the icons. Everything else is working, just not the icons.

    Linking back to that post on the first forum page does nothing; it does not explain what's going on or how to fix it. :(



    I have fixed this issue. For anyone wanting to know what the problem is:
    Step 1: Grafana

    image.png.e8bc45e22158a968c77d85ad52ed1cec.png

     

    Step 2: Grafana

    Make sure this is your local IP if you're not exposing Unraid or Grafana to the internet and you're keeping it local.

    image.png.d300ca995f9cd59fda837317a4f29be5.png

     

    Step 3: Grafana 

    Confirm you have the correct encryption method selected (or that no encryption is selected, if you're running without it), then apply.

    Do this for every graph that uses the Unraid-API.

    image.png.5dabc316bceae061bc753b6c0bb8a6c6.png

  2. On 3/22/2021 at 11:04 AM, skaterpunk0187 said:

    I could not find anything about this. I spent the last hour on it: I mirrored my server port on my switch and ran tcpdump. It turns out that if you are using the unraid.net plugin with remote access enabled, it disables direct IP connections via DNS hackery with a random string under .unraid.net. This blocks the UNRAID-API from connecting to Unraid by IP, and UNRAID-API is not capable (from what I can tell) of using an FQDN as a connection; side note, it just doesn't do a DNS lookup. Also, UNRAID-API or Unraid itself seems to have an issue with "Use SSL/TLS" set to auto (at least for me), but it works 100% if that setting is set to yes or no and the proper settings are used in UNRAID-API. As soon as I signed out and removed the unraid.net plugin, the API worked just fine. Hopefully this will help others.

    And awesome work @falconexe with 1.6.

    Still not able to see the icons. Everything else is working, just not the icons.

    Linking back to that post on the first forum page does nothing; it does not explain what's going on or how to fix it. :(

  3. On 3/22/2021 at 8:55 AM, falconexe said:

    For all those having issues with the images, please see the post right after the release post. You need to adjust the IP address in the query (server) and the Base URL in the plugin.

     

    For those who are having issues with the UNRAID API even showing data (on its own web page), please ensure you log in with "root" and that password. Please also pay attention to the HTTPS checkbox. If you get this wrong, it will not work! Give it a few minutes. If you are still not having success, try stopping and restarting the docker. If that does not work, completely blow away the docker AND APP DATA folder for the docker, and try again with "root" and the correct level of security (checkbox).

     

    If all else fails, please report the NON UUD issue to the topic forum that handles that Docker.

     

  4. Just now, IronBeardKnight said:

    Man, I'm having so much trouble with this container.

     

    The .env trick, then a chown error for it on the setup page, so I chowned the whole app directory since the .env does not exist anymore. Woo hoo, one step further.

    Then, straight after submitting on the setup page, a 500 error, and so many errors in the logs:

     

     child 37 said into stderr: "src/Illuminate/Pipeline/Pipeline.php(149): Illuminate\Cookie\Middleware\EncryptCookies->handle(Object(Illuminate\Http\Request), Object(Closure)) #36

     

    etc etc. 

    It would be so good if this worked properly. Not sure why this container self-generates an SSL cert through Let's Encrypt when most people running the container will be using reverse proxies anyway.

    I tried editing in GEN_SSL, but it just completely breaks the container.

  5. Man, I'm having so much trouble with this container.

     

    The .env trick, then a chown error for it on the setup page, so I chowned the whole app directory since the .env does not exist anymore. Woo hoo, one step further.

    Then, straight after submitting on the setup page, a 500 error, and so many errors in the logs:

     

     child 37 said into stderr: "src/Illuminate/Pipeline/Pipeline.php(149): Illuminate\Cookie\Middleware\EncryptCookies->handle(Object(Illuminate\Http\Request), Object(Closure)) #36

     

    etc etc. 

    It would be so good if this worked properly. Not sure why this container self-generates an SSL cert through Let's Encrypt when most people running the container will be using reverse proxies anyway.

  6. Currently the Invoiceninja Unraid container is not working at all, and there are no instructions for navigating the errors.

     

    It appears your Docker container is broken.

    Not only do you have to run php artisan migrate, but after you get the DB and everything set up, you run into the errors below along with many more of the same type.


    [15-Mar-2021 14:19:32] WARNING: [pool www] child 31 said into stderr: "[2021-03-15 04:19:32] production.ERROR: ***RuntimeException*** [0] : /var/www/app/vendor/turbo124/framework/src/Illuminate/Encryption/Encrypter.php [Line 43] => The only supported ciphers are AES-128-CBC and AES-256-CBC with the correct key lengths. {"context":"PHP","user_id":0,"account_id":0,"user_name":"","method":"GET","user_agent":"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/89.0.4389.86 Safari/537.36","locale":"en","ip":"192.168.1.14","count":1,"is_console":"no","is_api":"no","db_server":"mysql","url":"/"} []"
    [15-Mar-2021 14:19:32] WARNING: [pool www] child 31 said into stderr: "[2021-03-15 04:19:32] production.ERROR: [stacktrace] 2021-03-15 04:19:32 The only supported ciphers are AES-128-CBC and AES-256-CBC with the correct key lengths.: #0 /var/www/app/vendor/turbo124/framework/src/Illuminate/Encryption/EncryptionServiceProvider.php(28): Illuminate\Encryption\Encrypter->__construct('7kg2Ca9E8BTaSa8...', 'AES-256-CBC') #1 /var/www/app/vendor/turbo124/framework/src/Illuminate/Container/Container.php(749): Illuminate\Encryption\EncryptionServiceProvider->Illuminate\Encryption\{closure}(Object(Illuminate\Foundation\Application), Array) #2 /var/www/app/vendor/turbo124/framework/src/Illuminate/Container/Container.php(631): Illuminate\Container\Container->build(Object(Closure)) #3 /var/www/app/vendor/turbo124/framework/src/Illuminate/Container/Container.php(586): Illuminate\Container\Container->resolve('encrypter', Array) #4 /var/www/app/vendor/turbo124/framework/src/Illuminate/Foundation/Application.php(732): Illu...
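    That cipher error usually means the Laravel APP_KEY has the wrong length for AES-256-CBC, which requires a 32-byte key. A minimal sketch of generating a valid key, assuming the container uses a standard Laravel .env (the .env path mentioned in the comment is an example, not a confirmed default):

```shell
# AES-256-CBC needs a 32-byte key; Laravel stores it base64-encoded
# with a "base64:" prefix in the APP_KEY entry of the .env file.
KEY="base64:$(openssl rand -base64 32)"
echo "$KEY"
# Then set APP_KEY to this value in the container's .env
# (e.g. /var/www/app/.env, an assumed path) and restart the container.
```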

     

  7. On 1/7/2020 at 6:24 PM, ich777 said:

    Can you please attach the logs?

    Also please note that the first startup can take really long (don't interrupt it, since it's compiling RapidPhotoDownloader; even on my 32-core machine it takes about 3 minutes).

    I recommend you to delete the container (and also the folder in the appdata) and start over.

    Hi @ich777,

     

    Thank you first of all for all your application containers.

     

    RapidPhotoDownloader seems to be broken.

    I have tried wiping the container completely and then redownloading with both the latest URL (version 24) and the default URL you published the Docker with (version 17), and both get the error below over and over again.

     

    image.png.cfe57103cd39442baa75a512a6182b15.png

  8. On 1/15/2020 at 12:31 PM, sjerisman said:

    And, I repeated the same Windows 7 'real' VM test one more time, but this time used the SSD cache tier as the destination instead of the HDD...

     

    With the old compression code, it took 1-2 minutes to copy the 18 GB image file from the NVMe UD over to the dual SSD cache, and then still took 13-14 minutes to further .tar.gz compress it down to 8.4 GB.  The compression step definitely seems CPU bound (probably single threaded) instead of I/O bound with this test.

     

    With the new inline compression code, it still only took about 1-2 minutes to copy from the NVMe UD and compress (inline) over to the dual SSD cache and still produced a slightly smaller 8.2 GB output file.  The CPU was definitely hit harder, and probably became the bottleneck (over I/O), but I'm really happy with these results and would gladly trade off higher CPU for a few minutes for much lower disk I/O, much less disk wear, and much faster backups.

    Very interesting. I'm wondering if this is an older-version thing or something, because at default compression levels pigz is better for compression speed and massively faster at decompression; the resulting size, however, is not as good.

     

    I do believe this plugin could use some work on the descriptions of each setting, for example doing away with gzip and just referencing pigz, to avoid confusion as to whether it's using multithreading or not.

     

    This is an awesome comparison of many different compression methods: Compression Comparison (just scroll down).
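    For context, pigz is a drop-in parallel replacement for gzip; a small sketch of piping tar through whichever is available (the paths are demo values, not the plugin's defaults):

```shell
# Prefer pigz (parallel gzip) when installed, otherwise fall back to gzip.
command -v pigz >/dev/null 2>&1 && GZ=pigz || GZ=gzip

mkdir -p /tmp/backup-demo && echo "hello" > /tmp/backup-demo/file.txt
tar -cf - -C /tmp backup-demo | "$GZ" -6 > /tmp/backup-demo.tar.gz

# Verify the archive; both tools accept -t for integrity testing.
"$GZ" -t /tmp/backup-demo.tar.gz && echo "archive OK"
```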

  9. Hmm, if it's munching CPU hard, is it trying to move something that is actually running, like VMs or Docker containers? CPU would spike hard if you were to try to move something that is actively processing data. You may also have downloads feeding your cache, if you have set it up that way, so when the mover runs it is trying to move active files?

    What are your share settings for each share that uses the cache?

     

    There are many possibilities here, and given the number of variables that can be associated with using the cache and thus the mover, it's just a process of narrowing things down one at a time.

    It's also possible, if you have an overclock, that a modified BCLK is too high, causing further instability on other system devices (e.g. SATA/SAS drives, PCIe lanes), which could have adverse effects.

     

    I know it's a pain, but if all else fails, set everything to stock except the RAID itself and go one by one.

     

    I'm not having a go, just trying to be as helpful as I can :). Please let us know how you go.

  10. 9 hours ago, hugenbdd said:

    Thanks for the sparse file link.   I will be able to test/recreate now.  I created the find script based on a post a page or so back, but wasn't really sure how to test it.

     

    Option 1: This should be possible and a smaller change overall.

    Option 2: Sounds like you want options applied per share.  I'm not ready to change the code so it supports different settings for different shares.  There is a LOT of work there...

    Hey mate :) Yeah, just suggestions. I think option one would be the easiest as well. The second option is such a rare-case situation that it can be stored in the archive for later use, if you ever need it or there is demand for it.

     

    Oh, also: I think the spelling mistake may still be there, FYI, as I could see it even after updating the plugin.

     

    Hint For The Masses:

    We need to keep in mind that exclusions by file type or location should always come before criteria to move based on age / last accessed / space or whatever else, as exclusions are for sub-directories and/or files that need to stay in cache no matter what other options you have selected.

     

    Personally, these new features alone have sped up my Nextcloud instance exponentially, and I'm looking to do some testing with game load times as well in future.

     

    Thank you again to @hugenbdd for doing all the ground work.

     

     

  11. A Small Guide

     

    For those who want to use the new features but are a little lost, perhaps this will save you some time.
     

     

     

    Open Unraid CMD.

        cd /boot/config/plugins/ca.mover.tuning

        sudo nano skiplist.txt

     

    In the nano editor you can list your locations to skip like the following:

    image.png.7244f4c9f55bfa2d7d4bab2f055b0fc0.png 

     

       Ctrl + O

       Ctrl + X

     

    Note: The list of locations may include or omit /mnt/cache/, as this is already catered for within the mover script.

     

    To find specific files of a name or kind, for example all .png files in a location, and then put them in your exclusion list in case they are mixed in with other files, see the example below.


      find "/mnt/cache" -type f \( -name "*.png" \) -print


    Open the CA Mover Tuning GUI.

    In the CA Mover Tuning plugin, set the location of your skiplist.txt as below:

        File list path: /boot/config/plugins/ca.mover.tuning/skiplist.txt


    image.thumb.png.b25b7d3f362ec74b73d1286ca274125d.png
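    The manual steps above can also be scripted; a minimal sketch, using demo paths standing in for the real ones (on a live system SKIPLIST would be /boot/config/plugins/ca.mover.tuning/skiplist.txt and the cache /mnt/cache):

```shell
# Demo layout standing in for the real cache and skip list locations.
mkdir -p /tmp/demo-cache/share
touch /tmp/demo-cache/share/thumb.png /tmp/demo-cache/share/movie.mkv
SKIPLIST=/tmp/demo-skiplist.txt

# Append every .png under the cache to the skip list, then de-duplicate
# the list in place so repeated runs don't grow it.
find /tmp/demo-cache -type f -name "*.png" >> "$SKIPLIST"
sort -u "$SKIPLIST" -o "$SKIPLIST"
cat "$SKIPLIST"
```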

  12. On 3/25/2020 at 1:28 AM, guythnick said:

    Yes, I think should be independent of the age option, but both can be used.

    Under 3MB seems to capture all the images and subtitle files that I want to keep on the cache.

     

    If it is piping into the find command, I believe you really only need to add one switch to the command when the option is selected:

     

    -size +x

    where x would be the integer in megabytes.

    Hello @guythnick and @hugenbdd. I also thought of this, but I figured one thing at a time.
     

    Option 1

    This is the change I was thinking of: removing "yes" and adding "bigger" and "smaller" as the options, which change how the size on the next line is applied.

    image.png.402225dd86a730f288f925ebb7b9d640.png

     

     

    Option 2

     

    The option that has just been introduced to filter files on extension

    image.png

    would also solve this same issue if it were ONLY applied to the locations you stipulate in the skiplist.txt path file; however, this is not the case, as the extension option covers all files with the specified extension across the ENTIRE cache.
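    To illustrate the distinction, a hedged example with demo paths standing in for /mnt/cache and its shares:

```shell
# Demo layout: two shares, each holding a .png.
mkdir -p /tmp/cache-demo/NextCloud /tmp/cache-demo/TV
touch /tmp/cache-demo/NextCloud/thumb.png /tmp/cache-demo/TV/poster.png

# Extension filter across the ENTIRE cache (current behaviour): matches both.
find /tmp/cache-demo -type f -name "*.png"

# Extension filter scoped to one stipulated path (suggested behaviour):
# matches only the NextCloud thumbnail.
find /tmp/cache-demo/NextCloud -type f -name "*.png"
```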

     

     

    On a side note:

    @hugenbdd I also tested your sparseness option (using Create Sparse File), however it did not seem to pick the file up for movement. I may be testing incorrectly, I'm not sure.

     

     

     

  13. 1 hour ago, CS01-HS said:

    I think there's a bug where the script doesn't quote share directories so e.g. a share with the name "Time Machine" will cause this error:

    
    May 11 05:42:07 NAS root: find: '/mnt/cache/Time': No such file or directory
    May 11 05:42:07 NAS root: find: 'Machine/': No such file or directory

    which I hadn't seen previously (though it's possible I missed it.)

     

    It's probably not best practice to include spaces in share names and I've solved it by renaming but thought I'd mention it.

     

    Great plugin by the way, thanks.

     

    You should not have to rename it.

     

    The strings in the file should be passed through this plugin with any paths encased in double quotes.

     

    @hugenbdd I think I also mentioned this to you in my notes from first testing as well: all file paths need to be encased in "", otherwise the shell treats the text after a space as a new parameter or command.
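    A minimal illustration of the quoting problem, using a directory named like the share in question:

```shell
# Create a directory whose name contains a space.
mkdir -p "/tmp/Time Machine"
DIR="/tmp/Time Machine"

# Unquoted: the shell splits the variable into /tmp/Time and Machine,
# and find reports "No such file or directory" for both halves.
find $DIR -maxdepth 0 2>/dev/null

# Quoted: the path is passed as a single argument and matches.
find "$DIR" -maxdepth 0
```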

     

    Also, from one terrible speller to someone who probably just made a mistake: you have a spelling error :)

    I'm setting up for some testing tonight.

    Spelling mistake.PNG

  14. 13 hours ago, hugenbdd said:

    Well, the mtime is interesting. I found this post. I'll try to make some changes/tests so that "older than 1 day" works right.

    https://unix.stackexchange.com/questions/92346/why-does-find-mtime-1-only-return-files-older-than-2-days

     

    Hahaha, I tried to explain it in much simpler terms so others reading this could understand as well; that is exactly the same thing I was talking about.

     

    I did not know about this thread though. :)

     

    I'm still working out some other kinks on my side with some other features I'm working on, and will try to test the sparseness (if I understand it correctly) when I get some more time.

  15. OK, cool, that is what I was trying to convey with the mtime parameters; maybe I did not convey it correctly.

     

    From my testing for you I have deduced:

    mtime:

     

    mtime 0    [modified within the last 24 hrs]

    mtime +0   [older than 24 hrs]

    mtime +1   [older than 48 hrs]

    mtime +2   [older than 72 hrs]

     

    If you "skip" the mtime test entirely when the setting is 0 or +0, then your script only allows for 48 hrs or older. I'm only going off what I have seen so far.
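    The day-rounding behind those numbers can be checked directly; a small sketch with demo files only (GNU touch/find assumed):

```shell
# find truncates a file's age to whole days before comparing, so
# -mtime +0 means "older than 24 hours" and -mtime +1 "older than 48".
mkdir -p /tmp/mtime-demo
touch -d "12 hours ago" /tmp/mtime-demo/young   # age truncates to 0 days
touch -d "36 hours ago" /tmp/mtime-demo/old     # age truncates to 1 day

find /tmp/mtime-demo -type f -mtime +0   # matches only "old"
```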

     

     

    Looking at the scheduler options for the mover, I was just wondering whether the options actually reflect the true time frames, and/or may cause some confusion.

     

    I'm happy to test sparseness for sure, but I am having a hard time understanding what it actually is in order to test it.

    :)

     

     

     

  16. Hi @hugenbdd  

     

    Please find attached your script containing my notes, my recommendations, and the reasons for them.

     

    I hope that was OK. I am not sure, as some people get iffy about editing their files, so I only added some notes in case you would like to make some minor tweaks and test with a new script.

     

    I have explained as best I can why you may have been getting inconsistent results using the find command in the past, but let me know if you need more.

     

    @LeoFender I am not sure if you have had a chance to test as well yet, but please feel free to also review my information and proposed movertest changes.

     

     

    Oh, P.S. I'm not the best at spelling and I have lazy grammar, so please excuse me. :)

  17. On 4/21/2020 at 3:43 AM, hugenbdd said:

    I have spent some time today working on some test code to replace what is in the "age_mover" (Custom mover bash script based off of unRAID's default).

     

    It now consists of several inputs 

    Position - Input

    1 - start/stop/kill (from base script)

    2  - Age in days

    3 - Size in M

    4 - Sparseness value (1-9) (a "." will be placed in the find before this value)

    5 - Exclude filelist

     

    Example of my test script in action to create the "find" command

     ./movertest start 30 45 2 '/tmp/text.txt'

     

    Age supplied
    Size supplied
    Sparness supplied
    Skipfilelist supplied
    SKIP FILE: /tmp/text.txt
    Find string: find /mnt/cache/TV -depth -mtime +30 -size +45M
    Find string: find /mnt/cache/TV -depth -mtime +30 -size +45M -printf '%S:%p\0' | awk -v RS='\0' -F : '$1 > 0.2 {sub(/^[^:]*:/, ""); print}'
    Adding Skip File List
    Skip File List Path: /tmp/text.txt
    aftr string: find /mnt/cache/TV -depth -mtime +30 -size +45M -printf '%S:%p\0' | awk -v RS='\0' -F : '$1 > 0.2 {sub(/^[^:]*:/, ""); print}' | grep -vFf '/tmp/text.txt'
     

     

    It would be nice if you guys could test this with  your needs and see if the find command is correct.  (Script attached)

     

    The "exclude file" should just be a list of files with their full path... in /mnt/cache of course.

    Also, change line 13 to your cache path.

    SHAREPATH="/mnt/cache/TV"

     

     

     

     


    Once I have the functionality of an "ignore filelist" working, I think the next step would be to work on a GUI to create that file list. (Probably several versions away.)

     

    movertest 1.71 kB · 0 downloads

     

     

    Apologies, my work has been very busy as of late.

     

    Agreed! Functionality first, always.

     

    I'm happy to give this some testing during the week after I finish work each day; however, the real progress will most likely be made on the weekend.

     

    I'll get back at you with my results. HOWEVER, if anyone else in the forum is interested, by all means jump in and give the script a test as well.

     

    @hugenbdd I'll get back to you ASAP. :)
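    For anyone else testing, the sparseness stage of the quoted movertest pipeline can be exercised on its own; a sketch with demo files (GNU find and gawk assumed, demo paths only):

```shell
# %S is find's sparseness ratio (allocated blocks vs. file size); the awk
# stage keeps files whose ratio exceeds 0.2 and strips the ratio prefix.
mkdir -p /tmp/sparse-demo
truncate -s 10M /tmp/sparse-demo/sparse.bin     # all hole, ratio ~0
dd if=/dev/urandom of=/tmp/sparse-demo/dense.bin bs=1M count=1 2>/dev/null

find /tmp/sparse-demo -type f -printf '%S:%p\0' \
  | awk -v RS='\0' -F : '$1 > 0.2 {sub(/^[^:]*:/, ""); print}'
# only the dense file should survive the filter
```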
     

  18. I might have to take a quick shortcut here and give props to bergware/dynamix for his folder caching plugin.

     

    https://github.com/bergware/dynamix/tree/master/source/cache-dirs

     

    This plugin has done most of the heavy lifting for us, in a way. It's not exactly what we need, as unfortunately RAM is more expensive than SSD or NVMe.

     

    However, the checkboxes this repo provides are a start (with minor tweaking to meet our needs). :)

     

     

    GUI & PLAN

    The picture below shows the kind of option menu that would be perfect, although ideally we would like it to go deeper into the folder structure if the user needs it, as this one only collects the first level of folder names.

    image.thumb.png.a780e5de04d8e0f4108cc930a5417a1b.png 

    Once one or more folder locations are selected

    image.png.c163b864c344cf5d2542c1e4b7ed0f0b.png

    we then pipe the directory values into a recursive scan to list every file and/or folder location string, and pass those to the mover script to skip as needed.

    Using perhaps the commands below, it's possible to get the information we need into a list, array, or variable string. I'm really not sure how or where the actual mover script lives, so I don't know the best method of transferring the location strings to the mover script itself.

     

    commands:

     

    https://www.cyberciti.biz/faq/linux-show-directory-structure-command-line/


    #This gives us the share folders that are set up in Unraid to be cached. -d (directories only) -L (tree levels deep from the stated location)
    tree -d -L 1 /mnt/cache

     

     

    https://www.explainshell.com/explain?cmd=find+.+-type+f+-name+"*confli*"+-exec+rm+-i+-f+{}+\;

     

    #This line will search and return the entire provided location (string or variable)

    find /mnt/cache -print

     

    #This line will search and return the entire provided location (string or variable) for any file or folder that has the exact name of NextCloud
    find /mnt/cache -name "NextCloud" -print
     

    #This line will search and return the entire provided location (string or variable) all .png files 
    find /mnt/cache -name "*.png" -print

     

    This would be a very effective way to select folders to exclude from the move, and thus stay cached, provided you can go deeper into the folder structure than what is coded in the folder-caching plugin. From what I can see it looks pretty simple to change, but having the GUI adjust its size for the structure as you navigate I'm a little unsure about, as web dev is not my apple pie just yet :)

     

     

     OPTIONAL Future Improvement

     

    Perhaps the below could be optionally configured per folder; if left blank, everything in that folder would be classed as excluded from the move.

    image.png.83cd45b50234e760fedfcd17c09a063d.png 

    Just a thought, and one that would require further consideration, or dynamically generated user-defined options per selected folder.

     

     

    CURRENT OPTION LOGIC

    In regards to how your existing options will affect, or be affected by, this, I believe it may be easier than I first thought.

    We add a Boolean menu option like your others, under the folder selection list, with the title "Include other Mover conditions on these Folders/files".

     

    It's just a matter of having an if statement in the mover script check the passed-in location strings and, on a match, skip that directory and its child items.
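    That if-statement could look something like the sketch below (the list path, function name, and example paths are all hypothetical, not the plugin's actual code):

```shell
#!/bin/bash
# Hypothetical sketch: decide whether the mover should skip a candidate
# path because it matches, or lives under, an entry in the exclusion list.
EXCLUDE_LIST=/tmp/exclude-demo.txt          # example location, one path per line
printf '/mnt/cache/NextCloud\n' > "$EXCLUDE_LIST"

should_skip() {
    local candidate="$1" entry
    while IFS= read -r entry; do
        [ -z "$entry" ] && continue
        # exact match, or anything inside an excluded directory
        case "$candidate" in
            "$entry"|"$entry"/*) return 0 ;;
        esac
    done < "$EXCLUDE_LIST"
    return 1
}

if should_skip "/mnt/cache/NextCloud/userdata/pic.png"; then
    echo "skip"       # stays on the cache
else
    echo "move"       # handed to the mover as usual
fi
```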

     

    I'm not sure if this is what you were actually looking for @hugenbdd, but it took me a little while to put this together, so I hope it's useful.

     

    Happy to try and help any way I can.

     

  19. 12 hours ago, hugenbdd said:

    I've thought about this also.

     

    Found this site on how to use a regex inside of a find command.  However, it uses the "-type f" option.  And I would rather stay away from that or else all folders will stay on your cache drive (most likely empty).

    https://www.cyberciti.biz/faq/find-command-exclude-ignore-files/

     

    Could you provide a find command that "excludes" the files/folders you don't want to move?  It will make it easier for me to assess how to put it into the code.

     

    I'm assuming this would also need to be able to be combined with the size/age options also?

     

    And, fyi... I still need to recode how I create the request in the script to make multiple options easier to handle.

     

    Nice! I'll have a gander around the internet and try some testing now to see if I can get something together.

     

  20. Hi Guys,

     

    I know this is a little long, but please hear me out and let me know if you think this is something we should be chasing as a community stuck at home. :)

    Pretty, pretty please, with a cherry on top: can we have an option that allows excluding files and/or folders from the mover, perhaps by name or within a regex range (self-configured regex string)? This would allow a long-standing issue to be at least half addressed.

     

    Issue: some config folders allow for faster loading of web pages; also, when running Dockers, usually a separate data share is created, but you don't want all the data in that share in the cache, as it will fill the cache too quickly.

     

    Hmm, a better example is Nextcloud in Docker with a separate userdata folder, as storing everything in cache is not really an awesome idea. Thumbnails for pictures are stored in this userdata folder along with the images themselves, which causes quite a bit of delay when Nextcloud reads from spinning disk.

     

    The same can be said for Plex or Emby in some cases, where folder art, metadata, and images are stored with the media. By allowing the option to exclude files or folders via a regex string or name, you can set your entire share directory to Yes for caching, and when the mover runs it will move everything except the important files off the cache to the array.

     

    Hell, even as far as particular game install files: if you know what you're doing, you can effectively speed up your networked game directories of massive size, or per game.

     

    This gives the ability to improve Docker efficiency astronomically while not being as risky as caching in RAM.

     

    Ideally, Unraid would have the option to select folders or files to be cached, with the same options given to shares; however, this is something they were aware of in early 2018, I think, and it never came to life, so having this plugin skip selected files and folders is the next best thing.

     

    Is this something we can look at?

     

    I'm happy to give it a crack developing such a thing on the back of this plugin to help out. 

     

    cheers guys :) :) :) 

  21. 8 hours ago, IronBeardKnight said:

    Sure, I'll have some time tomorrow night and will pull some stats for you. I'm running the designer rev 1 board with the F12i BIOS, but I'm thinking it's more a controller issue for me, or perhaps SATA cables; I plan on doing some testing tomorrow after work and will let you know.

    I'll check this out tonight.

     

    These BIOS download changes have happened a few times now on the Gigabyte site.

     

    I know that bifurcation is supported in F12i at 4x4, along with a newer AGESA, but I'm not sure what else may be in the firmware.

     

    It is very annoying, as they give no information about why it was taken down or what changes were made.

     

    On the BIOS note, I'm hesitant to downgrade, as the AGESA version in F11 appears to be a beta.

     

    After booting this morning before work, I found that another drive went offline along with my two parity drives. I'm thinking this may be drive-type specific rather than controller and cable, as all the others are working without issue.

     

    Either that, or the latest kernel may be having support issues with my board; I'll need to do further digging.

     

     

  22. 1 hour ago, skois said:

    Thanks for replying!
    I was thinking to go 6.8.0 stable. Maybe 4.19.88 kernel is more stable? 
    But not sure if my hardware is even supported on this kernel
    Do you mind sharing some info about bios?
    Like what version you running and what is your bios settings, and what kernel parameters you have?
    Maybe that will give me a good headstart to start digging more.
    I thought I could give as much info as I can to avoid things like "hey, send this, run this command, try this" if I have already done it :) But I agree this info was maybe too much lol!

    Sure, I'll have some time tomorrow night and will pull some stats for you. I'm running the designer rev 1 board with the F12i BIOS, but I'm thinking it's more a controller issue for me, or perhaps SATA cables; I plan on doing some testing tomorrow after work and will let you know.