Everything posted by IronBeardKnight

  1. I have fixed this issue; for anyone wanting to know what the problem is: Step 1: Grafana. Step 2: In Grafana, make sure this is your local IP if you're not exposing Unraid or Grafana to the internet and are keeping it local. Step 3: In Grafana, confirm you have the correct encryption method selected (or that you're running without encryption) and apply. Do this for every graph that uses the Unraid-API.
  2. Still not able to see icons. Everything else is working, just not the icons. Linking back to that post on the first forum page does nothing; it does not explain what's going on or how to fix it.
  3. I believe, from memory, I just wiped the KVM image from Unraid but left the vdisk intact, then recreated the KVM, pointed it at the old disk, and it worked.
  4. Same for me: the server now has to be fully rebooted just to bring KVM back up. Did you guys get any updates on this?
  5. Tried editing in GEN_SSL, but it just completely breaks the container.
  6. Man, I'm having so much trouble with this container. Tried the .env trick, then got a chown error for it on the setup page, so I chowned the whole app directory since the .env does not exist anymore. Woo hoo, one step further. Then, straight after submitting on the setup page, a 500 error, and so many errors in the logs: child 37 said into stderr: "src/Illuminate/Pipeline/Pipeline.php(149): Illuminate\Cookie\Middleware\EncryptCookies->handle(Object(Illuminate\Http\Request), Object(Closure)) #36" etc. It would be so good if this worked properly. I'm also not sure why this container self-generates an SSL cert through Let's Encrypt, since most people running the container will be using reverse proxies anyway.
  7. Currently the Invoice Ninja Unraid container is not working at all, and there are no instructions for navigating the errors. It appears your Docker container is broken. Not only do you have to run php artisan migrate, but after you get the DB and everything set up, you run into the errors below, along with many more of the same type. [15-Mar-2021 14:19:32] WARNING: [pool www] child 31 said into stderr: "[2021-03-15 04:19:32] production.ERROR: ***RuntimeException*** [0] : /var/www/app/vendor/turbo124/framework/src/Illuminate/Encryption/Encrypter.php [Line 43] => The only supported ciphers are AES-128-CBC and AES-256-CBC with the correct key lengths. {"context":"PHP","user_id":0,"account_id":0,"user_name":"","method":"GET","user_agent":"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/89.0.4389.86 Safari/537.36","locale":"en","ip":"","count":1,"is_console":"no","is_api":"no","db_server":"mysql","url":"/"} []" [15-Mar-2021 14:19:32] WARNING: [pool www] child 31 said into stderr: "[2021-03-15 04:19:32] production.ERROR: [stacktrace] 2021-03-15 04:19:32 The only supported ciphers are AES-128-CBC and AES-256-CBC with the correct key lengths.: #0 /var/www/app/vendor/turbo124/framework/src/Illuminate/Encryption/EncryptionServiceProvider.php(28): Illuminate\Encryption\Encrypter->__construct('7kg2Ca9E8BTaSa8...', 'AES-256-CBC') #1 /var/www/app/vendor/turbo124/framework/src/Illuminate/Container/Container.php(749): Illuminate\Encryption\EncryptionServiceProvider->Illuminate\Encryption\{closure}(Object(Illuminate\Foundation\Application), Array) #2 /var/www/app/vendor/turbo124/framework/src/Illuminate/Container/Container.php(631): Illuminate\Container\Container->build(Object(Closure)) #3 /var/www/app/vendor/turbo124/framework/src/Illuminate/Container/Container.php(586): Illuminate\Container\Container->resolve('encrypter', Array) #4 /var/www/app/vendor/turbo124/framework/src/Illuminate/Foundation/Application.php(732): Illu...
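For anyone else hitting that cipher error: in Laravel-based apps such as Invoice Ninja it usually means the APP_KEY is missing or the wrong length, since AES-256-CBC requires exactly a 32-byte key. A minimal sketch of generating a valid key (whether this particular container reads APP_KEY from its .env is an assumption about its setup):

```shell
# AES-256-CBC needs a 32-byte key; Laravel expects it base64-encoded
# and prefixed with "base64:". Generate one with openssl:
APP_KEY="base64:$(openssl rand -base64 32)"
echo "$APP_KEY"

# Sanity check: decoded, the key must be exactly 32 bytes long.
printf '%s' "${APP_KEY#base64:}" | base64 -d | wc -c
```

You would then put that value into the container's .env (path assumed to match your install) and restart the container.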
  8. Hi @ich777, thank you first of all for all your application containers. RapidPhotoDownloader seems to be broken. I have tried wiping the container completely and then redownloading with both the latest URL (version 24) and the default URL that you published the Docker with (version 17), and both get the below error over and over again.
  9. Very interesting. I'm wondering if this is an older-version thing or something, because at default compression levels pigz is better for compression speed and massively faster at decompression, although the resulting size is not as good. I do believe this plugin could use some work on the descriptions of each setting, for example doing away with gzip and just referencing pigz, to avoid confusion as to whether it's using multithreading or not. This is an awesome comparison of many different compression methods: Compression Comparison (just scroll down).
  10. Hmm, if it's munching CPU hard, is it trying to move something that is actually running, like VMs or Docker containers? CPU would spike hard if you tried to move something that is actively processing data. You may also have downloads feeding your cache, if you have set it up that way, so when the mover runs it is trying to move active files. What are the settings for each share that uses the cache? There are many possibilities here, and given the number of variables associated with using the cache (and thus the mover), it's just a process of narrowing things down one at a time. It's also possible, if you have an overclock, that a modified BCLK is too high, causing instability on other system devices (e.g. SATA/SAS drives, PCIe lanes), which could have adverse effects. I know it's a pain, but run everything stock except your RAID itself and go one by one if all else fails. I'm not having a go, just trying to be as helpful as I can. Please let us know how you go.
  11. Hey mate, yeah, just suggestions; I think option one would also be the easiest. The second option is such a rare-case situation that it can be stored in the archive for later, if there is ever demand for it. Oh, also, I think the spelling mistake may still be there, FYI, as I could see it even after updating the plugin. Hint for the masses: keep in mind that exclusions by file type or location should always come before criteria to move based on age, last accessed, space, or whatever else, as exclusions are for sub-directories and/or files that need to stay in cache no matter what other options you have selected. Personally, these new features have sped up my Nextcloud instance alone exponentially, and I'm looking to do some testing with game load times in the future as well. Thank you again to @hugenbdd for doing all the groundwork.
  12. A small guide for those that want to use the new features but are a little lost; perhaps this will save you some time.
Open the Unraid CMD, then:
cd /boot/config/plugins/ca.mover.tuning
sudo nano skiplist.txt
In the nano editor, list the locations to skip, one per line, then save and exit:
Ctrl+O
Ctrl+X
Note: the listed locations may include or omit /mnt/cache/, as this is already catered for within the mover script.
To find specific files of a name or kind, for example all .png files in a location, so you can put them in your exclusion list in case they are mixed in with other files:
find "/mnt/cache" -type f \( -name "*.png" \) -print
Then open the CA Mover Tuning GUI and set the location of your skiplist.txt as below.
File list path: /boot/config/plugins/ca.mover.tuning/skiplist.txt
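Building on the steps above, the find output can be fed straight into the skiplist. A sketch using a temp directory as a stand-in for /mnt/cache (substitute your real share path and the plugin's skiplist path on Unraid):

```shell
# Stand-in for /mnt/cache; on Unraid you would use the real paths.
cache=$(mktemp -d)
mkdir -p "$cache/photos"
touch "$cache/photos/a.png" "$cache/photos/b.jpg"

# Collect every .png into the skiplist so the mover leaves them on cache.
skiplist="$cache/skiplist.txt"
find "$cache" -type f -name "*.png" -print > "$skiplist"
cat "$skiplist"   # only the .png path ends up in the list
```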
  13. Hello @guythnick and @hugenbdd. I also thought of this, but I thought: one thing at a time. Option 1: this is the change I was thinking of, removing "yes" and adding "bigger" and "smaller" as the only options, which change how the size on the next line is applied. Option 2: the newly introduced option to filter files on extension would also solve this same issue if it applied ONLY to the locations you stipulate in the skiplist.txt file; however, this is not the case, as the extension option covers all files with the specified extension across the ENTIRE cache. On a side note, @hugenbdd, I also tested your sparseness option using (Create Sparse File), but it did not seem to pick it up for movement; then again, I may be testing incorrectly, I'm not sure.
  14. You should not have to rename it. The strings in the file should be passed through this plugin with any paths encased in double quotes. @hugenbdd, I think I also mentioned this to you in my notes from the first round of testing with you: all file paths need to be encased in "", otherwise the shell thinks it's a new parameter or command. Also, from one terrible speller to probably someone who just made a typo: you have a spelling error. I'm setting up for some testing tonight.
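A quick illustration of why the quoting matters (not from the plugin itself, just a generic shell demo): an unquoted path containing spaces is split into separate words before the command ever sees it.

```shell
# Helper that reports how many arguments it received.
count_args() { echo $#; }

path="/mnt/cache/My Media"
count_args $path     # unquoted: the shell splits it into 2 words
count_args "$path"   # quoted: it stays a single argument
```

So a mover invocation handed an unquoted path would see "/mnt/cache/My" and "Media" as two unrelated arguments, which is exactly the failure mode described above.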
  15. Hahaha, I tried to explain it much more simply so others reading this could understand as well; that is exactly the same thing I was talking about. I did not know about this thread, though. I'm still working out some other kinks on my side with some other features I'm working on, and will try to test the sparseness, if I understand it correctly, when I get some more time.
  16. OK, cool, that is what I was trying to convey with the mtime parameters; maybe I did not convey it correctly. From my testing for you I have deduced:
mtime 0 [younger than 24 hrs]
mtime +0 [older than 24 hrs]
mtime +1 [older than 48 hrs]
mtime +2 [older than 72 hrs]
If you skip the mtime statement entirely based on == 0 or == +0, then your script only allows for files 48 hrs or older. I'm only going off what I have seen so far. Looking at the scheduler options for the mover, I was wondering whether the options actually reflect the true time frames, or may cause some confusion. I'm happy to test sparseness for sure, but I am having a hard time understanding what it actually is in order to test it.
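Those buckets can be checked empirically. A sketch using GNU touch -d to back-date files in a temp directory (the filenames are just illustrative):

```shell
dir=$(mktemp -d)
touch -d "2 hours ago"  "$dir/fresh"      # younger than 24 hrs
touch -d "30 hours ago" "$dir/day_old"    # older than 24 hrs
touch -d "60 hours ago" "$dir/two_days"   # older than 48 hrs

# find truncates age to whole 24-hour periods before comparing:
find "$dir" -type f -mtime 0    # matches only fresh
find "$dir" -type f -mtime +0   # matches day_old and two_days
find "$dir" -type f -mtime +1   # matches only two_days
```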
  17. Hi @hugenbdd, please find attached your script containing my notes and recommendations, and why I recommend them. I hope that was OK; I am not sure, as some people get iffy about others editing their files, so I only added some notes in case you would like to make some minor tweaks and test with a new script. I have explained as best I can why you may have been getting inconsistent results using the find command in the past, but let me know if you need more. @LeoFender, I am not sure if you have had a chance to test yet, but please feel free to also review my information and proposed movertest changes. P.S. I'm not the best at spelling and I have lazy grammar, so please excuse me.
  18. Apologies, my work has been very busy as of late. Agreed! Functionality first, always. I'm happy to give this some testing over the week after I finish work each day; however, the real progress will most likely be made on the weekend. I'll get back to you with my results. However, if anyone else in the forum is interested, by all means jump in and give the script a test as well. @hugenbdd, I'll get back to you ASAP.
  19. I might have to take a quick shortcut here and give props to bergware/dynamix for his Folder Caching plugin. That plugin has done most of the heavy lifting for us, in a way. It's not exactly what we need, as unfortunately RAM is more expensive than SSD or NVMe, but its checkboxes give us a start (with minor tweaking to meet our needs).
GUI & PLAN
The picture below shows the kind of option menu that would be perfect, although ideally we would like it to go deeper into the folder structure if the user needs it, as this one only collects the first level of folder names. Once one or more folder locations are selected, we pipe the directory values into a recursive scan to list every file and/or folder location string underneath, and pass those to the mover script to skip as needed. Using the commands below it's possible to gather the information we need into a list, array, or variable string; I'm really not sure where the actual mover script lives, so I don't know the best method of transferring the location strings to it.
Commands:
# This gives us the share folders that are set up in Unraid to be cached.
# -d (directories only), -L (tree levels deep from the stated location)
tree -d -L 1 /mnt/cache
# This will return everything under the provided location (string or variable).
find /mnt/cache -print
# This will return any file or folder under the provided location with the exact name NextCloud.
find /mnt/cache -name "NextCloud" -print
# This will return all .png files under the provided location.
find /mnt/cache -name "*.png" -print
This would be a very effective way to select folders to exclude from the move, and thus keep cached, provided you can go deeper into the folder structure than what is coded in the Folder Caching plugin. From the code, that looks pretty simple to change, but having the GUI adjust its size for the structure as you navigate I'm a little unsure about, as web dev is not my apple pie just yet.
OPTIONAL Future Improvement
Perhaps the below could be optionally configured per folder, but if left blank then everything in that folder would be classed as excluded from the move. Just a thought; it would require further thought, or dynamically generated, user-defined options per selected folder.
CURRENT OPTION LOGIC
In regards to how your existing options will affect, or be affected by, this: I believe it may be easier than I first thought. We add a Boolean menu option like your others, under the folder selection list, with the title "Include other mover conditions on these folders/files". It's then just a matter of an if statement in the mover script checking the passed-in location strings and, on a match, skipping that directory and its child items. I'm not sure if this is what you were actually looking for, @hugenbdd, but it took me a little while to put together, so I hope it's useful. Happy to try and help any way I can.
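The piping idea above can be sketched end to end; again using a temp directory in place of /mnt/cache, with made-up share names:

```shell
cache=$(mktemp -d)   # stand-in for /mnt/cache
mkdir -p "$cache/NextCloud/userdata" "$cache/appdata"
touch "$cache/NextCloud/userdata/photo.png"

# First-level folders: what the GUI checkbox list would be built from.
find "$cache" -mindepth 1 -maxdepth 1 -type d

# Expand one selected folder into every file path underneath it,
# which is the list the mover script would then skip.
find "$cache/NextCloud" -type f -print
```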
  20. Nice! I'll have a gander around the internet and try some testing now to see if I can get something together.
  21. Hi guys, I know this is a little long, but please hear me out and let me know if you think this is something we should be chasing as a community stuck at home. Pretty, pretty please with a cherry on top: can we have an option to exclude files and/or folders from the mover, perhaps by name or via a self-configured regex string? This would allow a long-standing issue to be at least half addressed.
Issue: some config folders allow for faster loading of web pages, and when running Docker containers a separate data share is usually created, but you don't want all the data in that share in the cache, as it will fill the cache too quickly. A better example is Nextcloud in Docker with a separate userdata folder: storing everything in cache is not really an awesome idea. Thumbnails for pictures are stored in that userdata folder along with the images themselves, which causes quite a bit of delay when Nextcloud reads from spinning disk. The same can be said for Plex or Emby in some cases, where folder art, metadata, and images are stored with the media. By allowing the option to exclude files or folders via a regex string or name, you could set your entire share to Yes for caching, and when the mover runs it would move everything except the important files off the cache to the array. You could even go as far as particular game install files; if you know what you're doing, you could effectively speed up networked game directories of massive size, even per game. This gives the ability to improve Docker efficiency astronomically while not being as risky as caching in RAM. Ideally, Unraid would offer the option to select folders or files to be cached, with the same options given to shares, but they were aware of this in early 2018, I think, and it never came to life, so having this plugin skip allocated files and folders is the next best thing. Is this something we can look at?
I'm happy to give developing such a thing a crack on the back of this plugin to help out. Cheers guys.
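A minimal sketch of the requested behaviour (the regex and paths below are made up for illustration): build the mover's file list, then filter out anything matching the user's exclusion pattern before moving.

```shell
cache=$(mktemp -d)   # stand-in for /mnt/cache
mkdir -p "$cache/nextcloud/userdata/thumbnails"
touch "$cache/nextcloud/userdata/thumbnails/t1.png" \
      "$cache/nextcloud/userdata/doc.pdf"

# User-configured exclusion regex: keep thumbnails and .png files cached.
exclude='thumbnails|\.png$'

# Files the mover would actually move: everything NOT matching the regex.
find "$cache" -type f | grep -Ev "$exclude"
```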
  22. Excellent news. However, I still seem to be having issues with shucked WD Easystore WD100EMAZ drives. Waiting for yet another rebuild; once it's done I'll probably roll back to 6.7 to see if that helps.
  23. I'll check this out tonight. These BIOS download changes have happened a few times now on the Gigabyte site. I know that bifurcation is supported in F12i at 4x4, along with a newer AGESA, but I'm not sure what else may be in the firmware. It is very annoying, as they post no information about why it's taken down or what changes were made. On the BIOS note, I'm hesitant to downgrade, as the AGESA version in F11 appears to be a beta from the looks of it. After booting in this morning before work, I found that another drive went offline along with my two parity drives. I'm thinking this may be drive-type specific rather than the controller and cables, as all the others are working without issue. Either that, or the latest kernel may have support issues on my board; I will need to do further digging.
  24. Sure, I'll have some time tomorrow night and will pull some stats for you. I'm running a Designare rev 1 board with the F12i BIOS, but I'm thinking it's more of a controller issue for me, or perhaps the SATA cables. I plan on doing some testing tomorrow after work and will let you know.