Terebi

Everything posted by Terebi

  1. Easier and more secure to just use TOTP. Just add it to your 2-factor app of choice and you are done. This of course only protects the login page from remote users. Anyone with physical access has the keys to the kingdom, and a vulnerability in the web UI could bypass any security implemented anyway. A simple version of this could probably be implemented in an hour or two: just a screen for the setup, and some setting on the USB for the time seed.
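For a sense of how small the mechanism is: a TOTP code is just HMAC-SHA1 over a 30-second time counter (RFC 6238). A minimal sketch using only openssl and coreutils — not Unraid code, purely illustrative:

```shell
totp() {
    # $1 = base32 TOTP secret, $2 = unix time (defaults to now)
    local secret=$1 t=${2:-$(date +%s)}
    local key_hex counter hmac offset i
    # decode the shared secret and hex-encode it for openssl
    key_hex=$(printf '%s' "$secret" | base32 -d | od -An -tx1 | tr -d ' \n')
    # 8-byte big-endian counter: number of 30-second steps since the epoch
    counter=$(printf '%016X' $(( t / 30 )))
    # HMAC-SHA1(secret, counter)
    hmac=$(for ((i = 0; i < 16; i += 2)); do printf "\x${counter:i:2}"; done |
           openssl dgst -sha1 -mac HMAC -macopt "hexkey:$key_hex" -binary |
           od -An -tx1 | tr -d ' \n')
    # dynamic truncation: low nibble of the last byte picks a 4-byte window
    offset=$(( 16#${hmac: -1} ))
    printf '%06d\n' $(( (16#${hmac:offset*2:8} & 0x7fffffff) % 1000000 ))
}
```

With the RFC 6238 test secret (GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ) this reproduces the published test vectors.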
  2. The instructions to get to the FTP, including hostname, username, and password, would need to be on the USB for that to work. So anyone who has your USB can also go do those things, unless there are firewalls in the way of the FTP. Depending on who you are worried about, where that FTP is, and how it itself is protected, that again may reduce effective security to zero.
  3. Comment necromancy, but TVs are an easy example to point to. Lots of TVs only include 100-megabit ethernet connections, which cannot support 4K remuxes etc., so you have to fall back to wireless!
  4. "secure" and "automatic" are mutually exclusive, unless you don't care about wide swaths of "secure" to the point that you shouldn't bother. Disk encryption protects against someone having physical access to your disks. In order to automatically decrypt the disks, the encryption key would need to be on your USB, or somewhere your USB can access. Anyone who has access to your disks also has access to your USB, so the encryption is providing little to no value. There are some scripts that can improve this situation by downloading the key at boot, but that means the key is sitting somewhere to be downloaded from. Depending on where that location is, and what kind of threat you are protecting against, it may reduce effective security to near zero.
  5. Security is not really Unraid's strong suit; almost everything in the system runs as root. The text file will go wherever the output directory is set in the script. You just have to point Mover Tuning's ignore file to the same file/path.
  6. My script itself is safe to run if something is seeding. But the regular problems the mover has if something is seeding are still applicable. My script helps with this, because it tends to extend how long a file lives on the cache, which reduces how often a torrent is still being actively seeded by the time its files get moved to the array. The biggest problem is if you have the torrents hardlinked (from Radarr/Sonarr): if a file is one that should be moved based on time/threshold, the "plex" copy will be able to be moved, but the torrent copy might not move if the file is in use. That will break the hardlink and cause space duplication. If you aren't using hardlinks, then it probably doesn't matter, because if the torrent fails to move it will just retry the next day. Because of this, it's probably safest to continue to pause/resume torrents as part of the mover process, even though it may work OK 9/10 times. You may need to modify my script to add in the code to pause torrents, because I think the tuner setting used to enable my script is the same setting used to pause torrents.
  7. The binhex apps run Arch and do not have SSH running, so they are not exploitable even if you have the bad versions. Just wait for the next version of the binhex images to come out.
  8. Even given 2FA, you should probably still use a VPN. The risk of someone brute forcing your username/password is probably not as big of a concern as some unknown vulnerability that just bypasses login altogether.
  9. Adding the parity drive first will be faster. If you add parity second, it will have to read all of the new drives to calculate its parity. If you add the drives second, they can be individually zeroed without affecting parity.
  10. The file gets recreated from scratch every time mover runs, so you can't really edit it to control what happens. In the first post I give a command that lets you manually move a file/directory immediately. It doesn't really "delete" the oldest files, because it recreates the file from scratch every time, but that is the effect: files are added newest to oldest until the size threshold is reached, and then the oldest files aren't added to the file. When the mover runs, since those oldest files aren't listed, they get moved.
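The newest-to-oldest selection described above boils down to a few lines of shell. A simplified sketch (not the actual script; the function name, paths, and threshold are placeholders):

```shell
build_ignore_list() {
    # $1 = cache media dir, $2 = ignore file to write, $3 = byte threshold
    local cache_dir=$1 ignore_file=$2 threshold=$3 total=0
    : > "$ignore_file"   # recreated from scratch on every run
    # list files newest-first (%T@ = mtime, %s = size, %p = path)
    find "$cache_dir" -type f -printf '%T@ %s %p\n' | sort -rn |
    while read -r _mtime size path; do
        total=$(( total + size ))
        # stop once the newest files fill the threshold; everything
        # older is left off the list, so mover will move it
        (( total > threshold )) && break
        printf '%s\n' "$path" >> "$ignore_file"
    done
}
# e.g. keep the newest 500 GiB of /mnt/cache/media on the cache:
# build_ignore_list /mnt/cache/media /tmp/moverignore.txt $((500 * 1024**3))
```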
  11. Yeah, the purpose of the script is so you can ignore this and let it self-manage based on time. But I admit I do manually move things off of the cache that I know won't get watched soon/frequently, to make room for other things to stay on the cache longer.
  12. You don't need to recreate the file. In Notepad++ you can see the line endings (see the LFs in my file). You can then convert them in Notepad++ too, from the Edit menu.
  13. It looks like you saved the file with Windows line endings (\r\n). It needs to have Linux endings (\n). Some more advanced text editors (Notepad++) can convert line endings for you, or there are various Linux command lines you could google to do the replacement.
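If you'd rather fix it from the Unraid shell than in an editor, stripping the carriage returns is a one-liner (a dos2unix equivalent; the demo file below is just for illustration):

```shell
# a file saved with Windows (CRLF) line endings
printf 'line one\r\nline two\r\n' > /tmp/demo.sh
# remove the trailing \r from every line, in place
sed -i 's/\r$//' /tmp/demo.sh
```

If the dos2unix package happens to be installed, `dos2unix /tmp/demo.sh` does the same thing.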
  14. You do not need an end slash in the output dir. You can see on the next line (output file) that a slash is added there between the directory and filename. The file should be created every time mover runs; otherwise the mover would move anything not listed in the file. You can run the moverignore script manually to make sure the file gets generated by opening a shell in the directory the .sh file is in, then running "bash moverignore.sh". If you still have the Mover Tuning thresholds in place for size and age, then mover may not be running at all, which would mean my script is also not running.
  15. You can run the moverignore script manually before the mover runs, to generate the file and make sure it appears as you expect. Instructions for doing so are in the first post.
  16. If that's what the output directory is set to, yes.
  17. You don't need to use the age or size thresholds in Mover Tuning anymore, unless you want them for reasons outside my script. If you run mover every day, it will run my script, and as long as you are below the script threshold, nothing will be moved. If you are above the threshold, the oldest files will be moved off to get you under the threshold.
  18. .sh, not .sv. You don't need to create moverignore.txt; it will be created by the script. You only need to change the target dir if your media share/directory name is different.
  19. If you are talking about a Plex/Emby/Kodi library, the general recommendation is that it should be on the appdata share, and most people keep appdata as cache only (and exclusive) for performance reasons. If that data is part of the appdata share, just set the mover to move from array to cache, then run the mover. After everything is moved, disable secondary storage and make sure the appdata share is set to exclusive. To get a subset of files back to cache, you could also set the mover from array -> cache, and then manually run the mover on a given subdirectory using the following command: find /mnt/user/DirectoryNameGoesHere* -type f | /usr/local/sbin/move &
  20. Mover Tuning. The only thing my script does is tell the mover to ignore up to X newest bytes of content in a particular directory. It's to keep my media share full of newer content and keep drives spun down. Other shares, and other settings from mover or Mover Tuning, are not affected.
  21. During bulk transfers where you don't want to use the cache, either disable the cache temporarily, or copy into the disk shares directly to skip past the cache. Mover Tuning cannot control what happens with the cache at any time OTHER than moving, so for all of your bulk imports and the like, Mover Tuning is completely irrelevant; it has no interaction whatsoever with file creation/copies going on. Currently there is no "read cache" to move some files off of the array onto the cache while simultaneously moving other files from the cache to the array (for the same share).
  22. I honestly think my script will do what you want with only variable tweaks. Set my script to your media dir, run mover hourly, and disable all other tuning settings (age/size). Every hour, everything NOT in the media dir will move to the array. Every hour the media share will also get moved to the array, BUT skipping the newest files up to the threshold. Each time a new file is downloaded, the next hour an equal amount of the oldest data should move.
  23. #3 can be dealt with either using Mover Tuning settings, or by adding in this script.
  24. The array is (or should be) protected by parity against disk failure. But if you have a single cache drive, then the files there are not protected. For users not using mover shenanigans like this script, that's probably OK, because things will get moved to the array every morning. But using this script, things may stay on the cache for weeks/months, so if you don't have a mirrored cache set up, there is increased risk of data loss if the cache drive fails. If you do have mirrored cache, then the risk is pretty minimal.