Cpt. Chaz

Everything posted by Cpt. Chaz

  1. I shot myself in the foot. Long story short, I wiped my existing flash drive and have no zipped backups for it, so I'm looking at starting with a fresh flash install. However, all the data on my array should be intact. Obviously I don't want to lose any data on the array by starting with a new flash drive (I do have my license key), but I'm wondering what my next move should be. I did have CA Backup installed, with backups of appdata and flash drive files (not a zip) on the array itself. I'm learning the hard way about keeping a zipped copy of the flash drive somewhere off the array. What should be next?
  2. This little gem buried deep down in this thread saved me on my R720. Thanks @saarg. For anyone else doing this, to persist after a reboot, I added these quick lines to my go file (in my case it's sr0 and sg3, adjust accordingly):
     # Make Optical Disc Drive Accessible
     chmod 777 /dev/sr0
     chmod 777 /dev/sg3
  3. Wanted to give a final update, 10 months later... I've become slightly more familiar with basic scripting and Linux commands, enough that I was able to get this implemented and running without any problems. @Hoopster, your scripts and go files were key for me to go back and look at. My end result looks quite a bit different, but it was definitely the foundation I needed. Thanks again for the assistance.
  4. Alright, I'll go the SSH key route then. Thanks.
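     Edit: for anyone finding this later, the basic shape of the SSH key route looks like the sketch below. The key path and remote address are just placeholders, and note that on Unraid the contents of /root/.ssh don't survive a reboot on their own, so the keys also need to be restored at boot (commonly by copying them back from the flash drive in the go file).
     # generate a key pair on the sending server (no passphrase so the script can run unattended)
     ssh-keygen -t ed25519 -f /root/.ssh/id_ed25519 -N ""
     # copy the public key to the receiving server (type the password one last time here)
     ssh-copy-id -i /root/.ssh/id_ed25519.pub root@REMOTE_HOST
     # after that, rsync runs without prompting for a password
     rsync -avhP --progress /source/file/path root@REMOTE_HOST:/destination/file/path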
  5. No dice. I've also tried variations of read and echo | sudo from write-ups in other forums with no luck either. I've also read the warnings about putting passwords in scripts, but all I'm dealing with here are two of my personal private networks over VPN. No exposure from the inside or outside, so I'm not worried about it from a security standpoint.
  6. Even with VPN already set up between the machines?
  7. I'm setting up an rsync script, where the first command prompts a request for the destination server's password. Ideally the script would enter the password for the rsync to run, but how exactly do I write that into the script (or is there another way)? I have no problem running this in a terminal manually where I can type the password in; it looks like this:
     rsync -avhP --progress /source/file/path [email protected]:/destination/file/path
     [email protected]'s password:
     So writing the script, what should the entry to fill in the password look like here?
     #!/bin/bash
     echo "Starting File Server Backup"
     echo "This May Take a While"
     #Backup and Sync local file server
     rsync -avhP --progress /source/file/path [email protected]:/destination/file/path
     [email protected]'s password: <-- what should I enter on this line?
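     Edit: if you really do want the script to supply the password itself, one option is to wrap the rsync call in sshpass. This is only a sketch; it assumes the sshpass package is available on the system (it typically isn't out of the box on Unraid), and the SSH key route a few posts up ended up being the cleaner fix anyway.
     #!/bin/bash
     echo "Starting File Server Backup"
     echo "This May Take a While"
     #Backup and Sync local file server - sshpass feeds the password so rsync never prompts
     sshpass -p 'YourPasswordHere' rsync -avhP --progress /source/file/path root@REMOTE_HOST:/destination/file/path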
  8. Wow. I use Unmanic to convert to MP4 x265, and most of my media is MKV, so it looks like the updates I was seeing in the history were from these filename changes. It still doesn't make Unmanic useless for me, since the actual file sizes are significantly smaller most of the time, regardless of whether Sonarr reflects that or not. You could try putting in a feature request on GitHub for Sonarr, but until then, just make it a point to "refresh and update disk" when you click on a show if you need it to reflect the proper file size in Sonarr (I guess that's what I'll be doing from now on). Still going to let Unmanic keep chewing through the library, but glad to learn something new.
  9. Go to Settings -> Media Management -> File Management and look for the setting "Rescan Series Folder after Refresh". Set this to Always. To check the status of the refresh, go to System -> Tasks, where you'll be able to see the intervals for the "Refresh Series" task and run it manually if needed.
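     If you'd rather trigger that refresh from a script instead of the UI, Sonarr v3 also exposes it through its command endpoint. Rough sketch only; the host, port, and API key below are placeholders for your own instance.
     # kick off a "Refresh Series" run via the Sonarr v3 API
     curl -X POST "http://SONARR_HOST:8989/api/v3/command" \
          -H "X-Api-Key: YOUR_API_KEY" \
          -H "Content-Type: application/json" \
          -d '{"name": "RefreshSeries"}'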
  10. Probably best to put this in the pre-clear support forum for faster resolution
  11. Realize this is an old post of mine, but the problem persisted for me and I finally solved it. Below is the script I use to clean out my Usenet completed downloads folder. It's set to clean out files older than 3 days and runs daily. Maybe it can be useful for somebody:
      #!/bin/bash
      echo "Searching for (and deleting) Files older than 3 Days"
      echo "Should Only Take a Second"
      #dir = WHATEVER FOLDER PATH YOU WANT CLEANED OUT
      dir=/path/to/folder/
      #ENTER NUMERIC VALUE OF DAYS AFTER "-MTIME +"
      find $dir* -mtime +3 -exec rm -rfv {} \;
      echo "Done for Now"
      echo "See Ya Tomorrow"
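      For the "runs daily" part, the usual options on Unraid are the User Scripts plugin with a custom schedule, or an ordinary cron entry along these lines (the script path here is just a hypothetical example):
      # run the cleanup script every night at 3:00 AM
      0 3 * * * /boot/scripts/clean_downloads.sh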
  12. Sonarr V3 will automatically detect these file changes during its scheduled library scans, same for Radarr V3.
  13. Having not used Quick Sync before this, I thought CPU utilization was normal when enabling QS transcoding. I downloaded the intel-gpu-tools container to take a closer look, and it turns out it's not working in the Unmanic container for me, even with all the prerequisites in the go file, parameters, etc. I also tried all the encoders like you did, and the only one that works is libx265. However, I posted earlier in the forum (when I thought QS was working) that I noticed my processing time was almost cut in half, on average. Whereas it used to take my little Intel 8-12 hours to process a 2GB 1080p x264 file with 2 workers, that time is down to about 3-4 hours with QS enabled. It seems QS is not working (unless it involves high CPU usage and the intel-gpu-tools container just isn't working), though the change in processing times is definite, if inexplicable.
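      For anyone wanting to run the same check, the usual sanity test looks like the two commands below (assuming intel-gpu-tools is working and the iGPU devices are exposed to the container; the device path is the common default but may differ):
      # confirm the render device exists on the host and inside the container
      ls -l /dev/dri
      # watch the Video engine while a worker is transcoding; sustained load there means Quick Sync is actually in use
      intel_gpu_top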
  14. Thanks for this! Adding the chown entry fixed the persistence issue for me.
  15. May have solved my own problem, but would love for somebody to double check me. I added "find $dir* -mtime +14 -exec rm -rfv {} \;" to the top of the script (to minimize the time needed to scan newly added files):
      #!/bin/bash
      #backs up
      #change the location below to your backup location
      backuplocation="/mnt/disks/192.168.1.121_Backups/Office/Quickbooks/Manual"
      # do not alter below this line
      datestamp="_"`date '+%d_%b_%Y'`
      dir="$backuplocation"/Backups/"$datestamp"
      # dont change anything below here
      if [ ! -d $dir ] ; then
          echo "making folder for todays date $datestamp"
          # make the directory as it doesnt exist
          mkdir -vp $dir
      else
          echo "As $dir exists continuing."
      fi
      echo "Deleting Old Backups"
      find $dir* -mtime +14 -exec rm -rfv {} \;
      echo "Saving QB Backup Files"
      rsync -aP --no-o /mnt/disks/CLEMENTSS-IMAC_Quickbooks_2019/Backups/ $dir/Backups/
      echo "Saving QB Company Files"
      rsync -aP --no-o '/mnt/disks/CLEMENTSS-IMAC_Quickbooks_2019/Company Files/' $dir'/Company Files/'
      chmod -R 777 $dir
      sleep 5
      exit
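      Edit: double checking my own line here, since dir points at the folder created for today's datestamp, "find $dir*" only ever matches today's folder, so older dated folders may never actually get removed. If the goal is to prune anything in the parent Backups folder older than 14 days, a variant like the one below (reusing the backuplocation variable from the script above) is probably closer to the intent:
      # delete dated backup folders older than 14 days from the parent Backups directory
      find "$backuplocation/Backups" -mindepth 1 -maxdepth 1 -type d -mtime +14 -exec rm -rfv {} \;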
  16. I've got a nice and simple rsync script that runs a daily backup between 2 remote Unraid servers. Can't take any credit for it, I just modified the "VM settings backup" script to suit the purpose of backing up some QuickBooks company files. What I'm missing is an entry in the script to delete backups older than X number of days. The script runs daily, with each new backup going into a date-stamped folder like this: "/path/Backups/_23_Jul_2020". Ideally, I'd like the script to delete folders older than 14 days from "/path/Backups/" if possible. I'm open to changing any aspect of my present script to achieve the desired results. I've included my modified version of the script below for reference. Thanks!
      #!/bin/bash
      #backs up
      #change the location below to your backup location
      backuplocation="/mnt/disks/192.168.1.121_Backups/Office/Quickbooks/Manual"
      # do not alter below this line
      datestamp="_"`date '+%d_%b_%Y'`
      dir="$backuplocation"/Backups/"$datestamp"
      # dont change anything below here
      if [ ! -d $dir ] ; then
          echo "making folder for todays date $datestamp"
          # make the directory as it doesnt exist
          mkdir -vp $dir
      else
          echo "As $dir exists continuing."
      fi
      echo "Saving QB Backup Files"
      rsync -aP --no-o /mnt/disks/CLEMENTSS-IMAC_Quickbooks_2019/Backups/ $dir/Backups/
      echo "Saving QB Company Files"
      rsync -aP --no-o '/mnt/disks/CLEMENTSS-IMAC_Quickbooks_2019/Company Files/' $dir'/Company Files/'
      chmod -R 777 $dir
      sleep 5
      exit
  17. The remote device is just the headless backup Unraid server (and my home NAS). I have a Syncthing container running between the two (and this does not rely on WireGuard), and I have a weekly backup script for VMs and appdata. Again, no open browser pages. I restarted the backup server, and it cleared that error, but now the main server is showing a new error to chase down:
      Clements-Tower kernel: CIFS VFS: Close unmatched open
  18. May have made a little progress. I unmounted all unassigned devices, still got the error. I tried disabling Docker, still got the error. Disabled VMs, still got the error. I even unplugged the ethernet, still got the error. I use UD to mount remote shares via a WireGuard connection, so in a last ditch effort it occurred to me to disable WireGuard. Voila, the error went away. The second I turn WireGuard back on and get a successful handshake, the error comes back (even without any remote shares mounted). Now what? haha
      Jul 22 12:37:27 Clements-Tower root: error: /plugins/unassigned.devices/UnassignedDevices.php: wrong csrf_token
      Jul 22 12:37:29 Clements-Tower root: error: /plugins/unassigned.devices/UnassignedDevices.php: wrong csrf_token
      Jul 22 12:37:30 Clements-Tower wireguard: Tunnel WireGuard-wg0 stopped
      Jul 22 12:38:43 Clements-Tower root: Fix Common Problems: Other Warning: Unassigned Devices Plus not installed ** Ignored
      Jul 22 12:40:57 Clements-Tower kernel: mdcmd (41): set md_write_method 1
      Jul 22 12:40:57 Clements-Tower kernel:
      Jul 22 12:44:18 Clements-Tower wireguard: Tunnel WireGuard-wg0 started
      Jul 22 12:44:22 Clements-Tower root: error: /plugins/unassigned.devices/UnassignedDevices.php: wrong csrf_token
      Jul 22 12:44:25 Clements-Tower root: error: /plugins/unassigned.devices/UnassignedDevices.php: wrong csrf_token
      Jul 22 12:44:27 Clements-Tower root: error: /plugins/unassigned.devices/UnassignedDevices.php: wrong csrf_token
  19. Odd though it may seem, that server is in a very limited-access environment, available only from my desktop (or my remote home desktop). Even running netstat, it's not showing any connections to that server other than the one desktop. As I said, odd. Thanks for the input though, I'll look forward to the update!
  20. I'm showing version 2020.07.19a installed as the latest version, while getting the error.
  21. Definitely confirmed none. This server is only accessed from 2 machines: my iMac on the local LAN, and my iMac at home (via VPN). Both of which I've triple checked; no browser pages / tabs / access open at all. Edit: I have another Unraid tower at home connected with a 24/7 WireGuard connection that I use for remote backups with remote SMB shares mounted, but it certainly doesn't have any browser pages open for this.
  22. Hi guys. Getting this error in the logs:
      root: error: /plugins/unassigned.devices/UnassignedDevices.php: wrong csrf_token
      According to this post, the fix is to close out any browser on another network machine that may be connected, etc. I've only got one other machine that accesses the (headless) server on the network, and I have no open browser pages to it at all. Multiple restarts, still no luck. I ran netstat -vatn and the only addresses shown connecting to the server are the previously mentioned machine (with no open browsers) and my VM. Logs attached, would appreciate any thoughts. Thanks.
      clements-tower-diagnostics-20200721-1234.zip
  23. The reason I ask is because your appdata config shows:
      # Share exists on no drives
      I'm not sure why this would happen out of the blue, unless it's some sort of bug from the 6.9 beta. Until someone comes along with a more definitive answer, I'd try troubleshooting with a couple of different things:
      1. Roll back to the latest stable release.
      2. Confirm that you can actually find the appdata folder and its contents.
      3. Set your appdata cache use to "only" instead of "prefer", which is how you have it set now. Then run mover to get any remaining appdata off the array and wholly onto the cache. If this doesn't work for whatever reason, you can try using the Unbalance plugin or the Krusader docker to manually move the appdata from the array to the cache. Then reboot and see how it looks.
      Certainly not a diagnosis as to what caused this, and somebody more adept at reading logs may have more insight than I can offer, but maybe it'll help get you going in the right direction. If you were never able to find your appdata and its contents, you may be looking at having to restore from a backup.
  24. Manually, like this (or whatever the PreRolls folder path is):
      newperms /mnt/user0/Media
      Can also use the Docker Safe New Permissions tool under Tools to hit all the shares (minus appdata).