Squid

Community Developer

Everything posted by Squid

  1. Auto set turbo write mode

     Note that the "auto" setting in Disk Settings (6.2) does NOT operate like this: according to the help text, auto mode always turns turbo mode off. This script adjusts the turbo write setting depending upon the number of drives spun down. The drive threshold is adjustable within the script. It only really makes sense to run this at a frequency of hourly (or the forthcoming custom frequency).

     #!/usr/bin/php
     <?PHP
     # Maximum number of drives allowed to be spun down while still enabling turbo write mode
     $spinDownAllowed = 0;
     $spunDown = 0;

     function startsWith($haystack, $needle) {
       return $needle === "" || strripos($haystack, $needle, -strlen($haystack)) !== FALSE;
     }

     $disks = parse_ini_file("/var/local/emhttp/disks.ini", true);
     foreach ($disks as $disk) {
       if ( startsWith($disk['color'], "grey") ) { continue; }   # skip unassigned slots
       if ( startsWith($disk['name'], "cache") ) { continue; }   # skip the cache drive
       if ( ! strpos($disk['color'], "on") ) { ++$spunDown; }    # count spun-down drives
     }

     if ( $spinDownAllowed >= $spunDown ) {
       exec("/usr/local/sbin/mdcmd set md_write_method 1");      # enable turbo write
     } else {
       exec("/usr/local/sbin/mdcmd set md_write_method 0");      # disable turbo write
     }
     ?>

     auto_turbo_write.zip
  2. Enable / Disable Turbo Write Mode

     Enable:

     #!/bin/bash
     /usr/local/sbin/mdcmd set md_write_method 1
     echo "Turbo write mode now enabled"

     Disable:

     #!/bin/bash
     /usr/local/sbin/mdcmd set md_write_method 0
     echo "Turbo write mode now disabled"

     turbo_writes.zip
  3. Record Disk Assignments

     Records your current disk assignments to a file on the flash drive called DISK_ASSIGNMENTS.txt (config folder). Not necessary if you run CA's appdata backup, as that's done automatically.

     #!/usr/bin/php
     <?PHP
     $availableDisks = parse_ini_file("/var/local/emhttp/disks.ini", true);
     $txt = "Disk Assignments as of ".date(DATE_RSS)."\r\n";
     foreach ($availableDisks as $Disk) {
       $txt .= "Disk: ".$Disk['name']."  Device: ".$Disk['id']."  Status: ".$Disk['status']."\r\n";
     }
     file_put_contents("/boot/config/DISK_ASSIGNMENTS.txt", $txt);
     echo "Disk assignments have been saved to the flashdrive (config/DISK_ASSIGNMENTS.txt)\n";
     ?>

     record_disk_assignments.zip
  4. Got it... just had to think it through. You're not getting an ignored list because you've also acknowledged the error, which means the error doesn't exist anymore on the server. And the ignored section doesn't show everything that's ignored: what it shows is errors that FCP found on the last scan that are ignored. The net result is that FCP won't notify you about the unclean shutdown, and through the GUI as it stands now, you can't monitor it again until an unclean shutdown happens again (at which point it will show up as an ignored error in the ignored section). Looks like I'll have to add another section to the GUI that displays all errors/warnings that are ignored, whether or not they were found again. In the meantime, your course of action is to delete the ignoredList.json file from the flash drive.
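     As a sketch, the deletion can be done from the command line. The path below is an assumption based on where plugins typically keep their data on the flash drive; verify the actual location on your own system before running it.

     ```shell
     # Hypothetical location -- confirm where Fix Common Problems stores its
     # data under /boot/config/plugins on your flash drive before running.
     FILE="/boot/config/plugins/fix.common.problems/ignoredList.json"

     # -f makes rm succeed quietly even if the file is already gone
     rm -f "$FILE"
     echo "Removed (or already absent): $FILE"
     ```

     After removing the file, a rescan should treat previously ignored errors as monitored again.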
  5. Something's not right... I can't replicate it, however. Can you give me the output of cat /tmp/fix.common.problems/errors.json so that I can test with what your system is showing. (You can get it to be monitored again by either editing that ignoredList.json file or deleting it altogether.)
  6. Is the popup not disappearing or is the ignored section not appearing?
  7. Once the pop-up disappears (i.e. the tests are finished; it should only take a minute or so), the ignored section should appear
  8. Does it show up if you hit rescan?
  9. > Actually, I tried that but the notice didn't go away, so I used "Ignore". Did I screw up again?

     The acknowledge button would have disappeared. At that point, a rescan (or the next scheduled scan) wouldn't have bugged you about it. If you want to receive notifications about it again, just hit "Monitor error" next to it in the ignored section
  10. just go into FCP and acknowledge the error
  11. It should be... but admittedly I use MySQL instead. Sent from my LG-D852 using Tapatalk
  12. You either need a computer with PuTTY installed on it, or install the "Command Line" plugin via CA; then you can do everything via the webUI, or locally at the attached keyboard / monitor. You're going to want to run this command: ls -al /mnt/cache/Docker/apps/MakeMKV/config
  13. Win 10 won't accept all valid Win 7 keys. E.g., it will not accept my (100% legit) Win 7 Ultimate key. Nor, according to the link, will it accept Win 7 Enterprise (and since Ultimate is a superset of Enterprise, that's why it won't accept mine).
  14. Major revamp to the backup / restore appdata module. It is now more or less a one-click operation to restore, even after a catastrophic cache drive failure.

      Previously, the module would not allow you to perform a restore unless the backup settings were valid. What this meant in practice was that if you replaced a failed cache drive, you had to recreate the appdata share, make sure that it was cache-only with no files existing on the array, set all the settings, and then finally restore. Now restore will use the latest saved settings (whether or not they are valid in the current context - e.g. the appdata share doesn't already exist). Just select the backup set to restore and you're done. Additionally, at the end of the restore, the settings for the appdata share will automatically be set to cache-only, and any files in the share that may have wound up on the array (by inadvertently running the docker apps without a valid appdata share) will be automatically deleted.

      Procedure to follow for a failed cache drive:
      - Replace the cache drive
      - Recreate the docker.img file
      - Restore the backup by going to the restore tab and selecting your backup set
      - Have a beer
      - Ideally, now restart the server (or stop and start the array) to ensure that the changes to the appdata share configuration made by the module take effect
      - Go to CA's Previous Apps section and reinstall whatever apps you want. All the templates will already be filled out with your mappings, ports, etc.
      - Have another beer to celebrate how easy it was to recover

      Other changes:

      Backing up of the docker.img file has now been removed (it was optional previously). While I don't personally think that backing up the image file is ever necessary, I ran through some experiments and have removed the option altogether (it will now always be excluded). This is because CA and the various modules (including backup/restore) store various files, settings, logs, etc. within the docker.img file, so that there is never any possibility of running out of RAM if they were stored there instead. And the logs from backup / restore, for instance, are HUGE, because every single file is logged as it's moved (logs are removed at the conclusion of the backup / restore). The net result is that docker.img is basically always in use during a backup/restore, so during a restore the operation would either fail or have weird consequences.

      Backups that fail for whatever reason are now renamed to have "-error" appended to the folder name. This will prevent you from inadvertently restoring a backup set that did not complete successfully.

      Two scripts are now included in the module:
      - Delete Old Backup Sets: entirely removes all backup sets from the array. This is useful when transitioning from non-dated backups to dated backups.
      - Delete Failed Backup Sets: similar to the above, but will only delete backup sets from the array that failed.
  15. Run mover at a certain threshold of cache drive utilization

      Adjust the value to move at within the script. It really only makes sense to use this script as a scheduled operation, and it would have to be set to a frequency (hourly?) more often than how often mover itself normally runs.

      #!/usr/bin/php
      <?PHP
      $moveAt = 70;  # Adjust this value (percent used) to suit

      $diskTotal = disk_total_space("/mnt/cache");
      $diskFree  = disk_free_space("/mnt/cache");
      $percent   = ($diskTotal - $diskFree) / $diskTotal * 100;

      if ( $percent > $moveAt ) {
        exec("/usr/local/sbin/mover");
      }
      ?>

      run_mover_at_threshold.zip
  16. FS1 is the usage of the docker image. FS2-5 are the usage of 4 of the array drives (if you keep scrolling down on the main UI page, you'll see exactly which drives correspond to them). CPU is self-explanatory. Memory is the memory utilized on the system as a whole, including cached memory (i.e. it should always be high).

      Realistically, cAdvisor is great for seeing what individual apps are utilizing at any moment in time (the same as CA's resource monitor, but in greater detail and prettier). Here are the docs for cAdvisor: https://github.com/google/cadvisor/tree/master/docs - but they are all really about how to interface with it if you're programming your own statistics-grabbing app.
  17. > More of a dockerMan issue than binhex's. Never really noticed, but 6.2 may stop you from duplicating ports (it at least shows you which host ports are already in use). Regardless of which version you're running, the docker run command would have pointed out the same error through the GUI at the bottom when adding / reinstalling / editing.

      > I hope that duplicate ports would be allowed, but noted as an error condition when you try to start a second instance if the first is already running. I have a couple of dockers running individually that use the same port and share other resources. I only ever want to run one at a time for obvious reasons, but I don't want to have to change the port settings to shuffle them around to use the same host port for each.

      You're 100% correct. Like I said, I never really noticed.

      > Yeah, I would've thought the GUI should flag the same error. Perhaps I just missed it in the pop-up window during the docker install? The weird thing is it said it installed successfully.

      6.1.x has a nasty habit of displaying an error from the docker run command and, on the next line, saying that the command completed successfully.
  18. More of a dockerMan issue than binhex's. Never really noticed, but 6.2 may stop you from duplicating ports (it at least shows you which host ports are already in use). Regardless of which version you're running, the docker run command would have pointed out the same error through the GUI at the bottom when adding / reinstalling / editing.
  19. Time permitting I'll try and bang together a post explaining exactly how everything is supposed to work.
  20. > I read the docker run reference but still can't figure out how to prioritize one docker's CPU over another. Can you please share how? Is it the --cpu-shares? Just based on the information provided in the reference, I set Handbrake to 128, Plex to 8192 and the rest to 1024.

      IIRC, everything is based upon 1024. Setting 2 containers to 512 will allow each of them to max out at 50% if they both want to run full bore. If only one wants to run full bore, it will max out at 100%; with shares alone there is no way to set a hard maximum CPU usage for a container. Things get more complicated using 3 or more containers, but the concept is the same.
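      The weighting described above can be sketched with some quick shell arithmetic. The share values here are just illustrations, and the percentages only apply when every container is contending for the CPU at the same time:

      ```shell
      # --cpu-shares weights are relative, with 1024 as the docker default.
      # Two containers at 512 each split a fully-contended CPU 50/50.
      a=512
      b=512
      total=$((a + b))
      pct_a=$((a * 100 / total))
      echo "Container A gets ${pct_a}% of the CPU under full contention"
      ```

      With the 128 / 8192 / 1024 values quoted above, the same arithmetic applies: each container's slice under contention is its shares divided by the sum of all running containers' shares.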
  21. You can limit everything: cores, memory, etc. To limit cores, add to Extra Parameters: --cpuset-cpus=0. To limit memory: --memory=4G. Additionally, you can prioritize one docker app's CPU resources over another. Google "docker run" for a complete breakdown of everything possible. Personally, I think it's in everyone's best interests to limit the resources of every app, with the exception of Plex.
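      As a sketch of how these flags fit together: the container and image names below are made up, but --cpuset-cpus, --memory and --cpu-shares are standard docker run options, and dockerMan simply appends whatever is in the Extra Parameters field to the docker run command it builds.

      ```shell
      # Hypothetical container/image names -- substitute your own.
      # Pin to cores 0-1, cap memory at 4 GiB, halve the default CPU weight.
      EXTRA_PARAMS="--cpuset-cpus=0,1 --memory=4G --cpu-shares=512"

      # This is the shape of the command dockerMan ends up running:
      echo "docker run -d --name=myapp $EXTRA_PARAMS myrepo/myimage"
      ```

      In the webUI you would paste only the contents of EXTRA_PARAMS into the Extra Parameters field (visible in Advanced View), not the full docker run line.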
  22. CA - go to the resource monitor. Install cAdvisor if it's not already installed (a direct link is in the resource monitor), then click the icon for the app in question, and it will show you the utilization per core for that app. By default, every docker app will use all cores available to unRaid.
  23. > Sorry, but where is advanced view? Can you please guide me? Thanks in advance.

      Docker tab, top right. Hit "Basic View" to toggle it to Advanced View.
  24. Great idea. Maybe in a separate thread though. I've created a separate thread for this HERE