Everything posted by Squid

  1. I bow to your greatness CHBMB Now, now, that's the media god you're talking to. Show the appropriate reverence and kneel at his shrine. But I feel for him: http://lime-technology.com/forum/index.php?topic=48733.0
  2. Well, I disagree with case 2. Simply because 2 files have the exact same hash does not mean that they are dupes. Will you ever see that? Who knows, but it is entirely possible. Sent from my LG-D852 using Tapatalk
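     A minimal shell sketch of the point above, assuming two hypothetical paths on the array: a matching hash only flags candidates, and a byte-for-byte compare is what actually confirms a duplicate.
        # Paths are placeholders; md5sum and cmp are standard Linux tools.
        a=/mnt/user/Movies/film1.mkv
        b=/mnt/user/Backups/film1.mkv
        if [ "$(md5sum < "$a")" = "$(md5sum < "$b")" ]; then
            # Same hash: almost certainly identical, but verify to rule out a collision.
            cmp -s "$a" "$b" && echo "true duplicates" || echo "same hash, different content"
        fi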
  3. Well that solves part of the mystery. A bit of a problem in that my other server that I ultimately want this server to sync with doesn't have a cache drive and is pretty tight in terms of spare slots and SATA plugs (I do have one slot/SATA plug left but was saving that for expansion). Just out of curiosity, can one use (and is it worth using) a 4-16GB USB flash drive as a cache drive? I have some of those I could spare, and my little G7 N54L has quite a few free USB ports. Installing an OS (and Docker apps would qualify as that) to a flash drive is going to quickly wear it out, as they are not designed for the write cycles. Sent from my LG-D852 using Tapatalk
  4. Because they're not in the same folder on multiple disks. Therefore it's not a duplicate file that's going to mess up SMB/unRAID as to which copy to utilize.
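     A hedged sketch of how you could check for the problematic case described above (the same relative path existing on more than one array disk, which is what forces the user-share layer to pick one copy). The /mnt/disk* paths are unRAID's standard per-disk mounts.
        # Print any relative path that exists on more than one array disk.
        find /mnt/disk[0-9]* -type f | sed 's|^/mnt/disk[0-9]*/||' | sort | uniq -d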
  5. Nice! Anyone with a zero-byte super.dat will especially appreciate it. Still no separate destination though, so ultimately you have to start the array (and therefore assign at least one disk) to get at the assignments (if you don't print them out), but my new routine seems to be bouncing back and forth between plugs now. Would be nice to have that file on the flash drive! When CHBMB starts laughing at me, I'm blaming you.
     <CHANGES>
     ###2016.06.29a###
     - Because RobJ wants perfection
     ###2016.06.29###
     - Backup Disk Assignments and super.dat (renamed) as part of USB backup
  6. Nice! Anyone with a zero-byte super.dat will especially appreciate it. Still no separate destination though, so ultimately you have to start the array (and therefore assign at least one disk) to get at the assignments (if you don't print them out), but my new routine seems to be bouncing back and forth between plugs now.
  7. Great addition! I'd like to request though that you do add the super.dat file to the backup, but rename it. Often, it's the one and only file that's needed from the backup. If it's renamed, it has to be a conscious, thought-out decision to use it; it won't be automatically used. Another idea: since the single biggest risk of the wrong super.dat is a parity drive change, why not rename the super.dat to include the model and serial of the parity drive (e.g. super.paritymodel_serial.parity2model_serial.dat)? At a glance, the user will probably know if it's OK to use. While a slot change for a data drive or a drive addition is an important change, neither should cause data loss, and both should be apparent on first array start, if it even can start. A parity drive mistake however can be disastrous, and may not be immediately apparent. I also like the idea above to create a text file with the current drive configuration, name it something like superdat.txt? I assume that at some point, you'll add the ability to configure the flash backup destination path? Done. Stored within the backup is now config/DISK ASSIGNMENTS.txt (a DOS-formatted text file with all of the disks, their assignments, and their current unRAID status). super.dat is now renamed to super.dat.CA_BACKUP
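     A rough shell sketch of the backup behaviour described in the reply (not the plugin's actual code); the destination path is hypothetical, and /var/local/emhttp/disks.ini is assumed to be where emhttp publishes the current drive assignments (its exact keys may vary by unRAID version).
        BACKUP=/mnt/user/backups/flash          # hypothetical destination
        mkdir -p "$BACKUP/config"
        # Copy super.dat under a name that can never be restored by accident.
        cp /boot/config/super.dat "$BACKUP/config/super.dat.CA_BACKUP"
        # Write a human-readable list of the assignments, with DOS (CRLF) line endings.
        grep -E '^\[|device=|id=|status=' /var/local/emhttp/disks.ini \
            | sed 's/$/\r/' > "$BACKUP/config/DISK ASSIGNMENTS.txt"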
  8. Might have to run makebootable on the flash drive. Where did you get 6.0.3? The latest version is 6.1.9
  9. That's correct. Was just spitballing ideas
  10. ALL GLORY TO THE HYPNOTOAD
      - Added in automatic background script logging (and ability to delete the logs when the script isn't running)
      - Fixed the inability to run a script in the background if the directory contained a space
      The log display is just a tail of the log (at the start and end of execution, the plugin will inform you of the location of the complete log should you need to analyze it)
      ALL GLORY TO THE HYPNOTOAD
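      A minimal sketch of the general pattern being described (run the script detached, capture everything to a log, and show a tail of it); the script and log paths are made-up examples, not the plugin's real locations.
         SCRIPT="/boot/config/plugins/user.scripts/scripts/my script/script"   # example path containing a space
         LOG="/tmp/my_script.log"                                              # example log location
         # Quoting "$SCRIPT" is what makes a directory with a space in it work.
         nohup bash "$SCRIPT" > "$LOG" 2>&1 &
         echo "Started in background (PID $!); full log at $LOG"
         tail -f "$LOG"    # roughly what the plugin's log display shows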
  11. I don't use this version, nor do I particularly use Plex, but a ton of those little weird issues with Plex can be traced back to having its /config folder mapped to /mnt/user/appdata/plex... instead of /mnt/cache/appdata/plex
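      A hedged example of the mapping difference being pointed out; linuxserver/plex is just an illustrative image name, and the part that matters is the host side of the /config volume.
         # /mnt/cache/... talks to the cache disk directly; /mnt/user/... goes through the
         # user-share (FUSE) layer, which is what tends to upset Plex's database.
         docker run -d --name=plex --net=host \
           -v /mnt/cache/appdata/plex:/config \
           -v /mnt/user/Media:/media:ro \
           linuxserver/plex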
  12. If there's tons and tons and tons of stuff to display, then it's going to lag because of the constant page update for each line and Chrome chugging through it. No different than hitting the log button on unRAID, keeping it open for a month (so there's an insane amount of lines already displayed), and then watching it as you log more items. Nothing I can do about it. The upcoming background running with logging will alleviate that to a certain extent. Sent from my LG-D852 using Tapatalk
  13. Thank you, this is what I suspected. Do you have any idea how I can fix this issue? I have no way of accessing the Sonarr app. I have no idea where to find the particular files to delete them to start again. I have turned off Docker and deleted the docker.img file and started again, with no luck. Any ideas of what to do? So on a completely empty docker.img it's still pulling 0B and telling you the layer exists?
  14. New permissions doesn't take long to run if there aren't that many files. And on 7TB of media files there aren't that many files. Additionally, if you run Docker on that server, you should never run the New Permissions tool, as odds are it will mess up the apps. Instead you should run the Docker Safe New Permissions tool that's included with the Fix Common Problems plugin. Sent from my LG-D852 using Tapatalk
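      A rough sketch of the distinction, not the actual New Permissions / Docker Safe New Permissions code: the docker-safe variant simply skips appdata (i.e. the files containers own) so running apps don't have their ownership changed underneath them. Share names and modes here are illustrative.
         for share in /mnt/user/*; do
             name=$(basename "$share")
             [ "$name" = "appdata" ] && continue     # the docker-safe part: leave appdata alone
             chown -R nobody:users "$share"
             chmod -R u+rwX,go+rX "$share"
         done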
  15. It's already installed. When you uninstalled it you merely removed the container, not the container and image. Perfectly normal Sent from my LG-D852 using Tapatalk
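      For illustration, the container/image distinction in plain Docker commands; the container and image names are placeholders.
         docker ps -a                      # the container (running or stopped)
         docker rm my-container            # removes only the container
         docker images                     # the downloaded image is still listed here
         docker rmi linuxserver/someapp    # this is the step that actually deletes the image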
  16. It's also on the plugin page. Everything is chosen to make it easier for noobs
  17. What credentials is it asking for (root, nobody, etc.)? Sent from my LG-D852 using Tapatalk
  18. Probably because I'm Canadian, but that thought never even crossed my mind. (Although I was out for dinner with my boss in Montreal, and after we ordered he said to the waitress "Mercy Buckets" and laughed. I had to fake an illness to get out of eating the food that I'm sure would have been poisoned.)
  19. The only time it should be needed on unRAID is if you wanted to run a command as the user nobody so that it is done the same way as across the network. For instance, if you wanted to run MC as the user nobody: "sudo -u nobody mc". Awesome. Learn something new every day. I always thought of sudo as raising permissions, not lowering them or running as another user. Correct. It would be different if unRAID ever addresses the security concerns, but in the meantime you can even surf the net using the highest permissions possible (root) in the unRAID GUI of the 6.2 series. Maybe when they no longer phone home on the betas they will get rid of internet access from the unRAID GUI? Tbh I don't see that. Maybe not running as root, perhaps. The problem is that people have the misunderstanding that one computer can take the place of all of them. All well and good, and it is a true statement. Until you get to the point that there's a problem and none of your VMs work because of trouble with the server. Now you're in the situation of how do I access the GUI when I don't own a phone, tablet, or other computer (and it's happened at least once here), hence Firefox being included. Sent from my LG-D852 using Tapatalk
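      A small sketch of why you would drop to nobody on a console that is already root: a file created as root versus one created via "sudo -u nobody" ends up with different ownership, and the latter matches what a write over the network looks like. The share path is a made-up example.
         touch "/mnt/user/Test/as_root.txt"                     # created as root -> owned by root
         sudo -u nobody touch "/mnt/user/Test/as_nobody.txt"    # owned by nobody, like an SMB write
         ls -l /mnt/user/Test/as_*.txt                          # compare the ownership of the two files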
  20. Only in Louisiana. Sent from my LG-D852 using Tapatalk
  21. The only time it should be needed on unRAID is if you wanted to run a command as the user nobody so that it is done the same way as across the network. For instance, if you wanted to run MC as the user nobody: "sudo -u nobody mc". Awesome. Learn something new every day. I always thought of sudo as raising permissions, not lowering them or running as another user. Thanks Sent from my LG-D852 using Tapatalk
  22. Love it... Not too often people can type with an accent +1
  23. Was not thinking of a general backup solution - I just thought it might be easier for a user to grab their flash backup from an FTP location if they have it, vs installing a trial version 1st to get access to the array to get at the flash backup off of..... I will just make a cron job then and send it to a btsync or dropbox folder so it gets to my desktop and/or laptop Maybe I came off a bit wrong then... The whole "not a general purpose backup thingy" is my standard defence against things. In this case, I do see the positives involved, but to be quite honest, while I can use FileZilla, I know less than nothing about the underlying protocol involved and just don't want to start something like that. In the future (probably a week or so -> or as CHBMB would state, it'll be tomorrow), separate destinations for flash backups are forthcoming. Piggy-backing accomplished two things: #1 - dirt easy to accomplish, and it fixes what I think of as the biggest problem with restores -> the plugins folder (in particular dockerMan). Like everything else, CA / FCP / Scripts does what I want it to do first, and everything else comes after.... #2 - (primary reason) it justified another update to CA without having CHBMB laugh his head off at me over the 25k vs 25l thing
  24. Not a Linux guy, but I don't think sudo is ever needed with unRAID since you're always running as root. Sent from my LG-D852 using Tapatalk
  25. The image is already installed. You probably hit update and there was nothing to update. Updates on some apps always show as available; nobody is 100% sure whether it's unRAID or Docker Hub that's causing that. Sent from my LG-D852 using Tapatalk