Everything posted by JonathanM

  1. Very recently another user found that Plex had a setting configured to remove watched episodes for a specific series. Maybe start the search there?
  2. I would be curious to know the answer to this, as in the not too distant past 9p was a good bit slower than SMB. Personally I prefer SMB anyway, as it allows identical configurations to be shared between VMs and physical boxes, one less setup to remember.
  3. Works fine, you just have to follow the normal directions to allow all the needed access through the other container. You won't have a shortcut in the container popup, so you will need to manually type the web address with port, or simply create a browser shortcut. Exactly the same procedure, nothing special.
  4. No. SSD reads are "free"; they don't add measurable wear. Only writes wear out SSDs, which is why endurance is rated in TBW (terabytes written).
  5. Emby doesn't seem to have that problem, at least I've never seen it.
  6. Another factor is the speed at which ReiserFS deletes files. I always recommend copying instead of moving if you are planning to erase the source disk anyway, for two reasons: you can verify the results of the copy if desired, and ReiserFS file deletions can take eons, especially if the filesystem is larger than 2TB and has seen a good amount of file activity over its lifetime. When ReiserFS was current many moons ago, I would routinely copy the entire contents of regularly used drives to other drives so I could format a drive fresh. A newly formatted ReiserFS filesystem is MUCH faster than an experienced one.
  7. Yes. Parity is computed from ALL the data drives combined, so any data altered on the rest of the drives is going to corrupt the rebuild of disk1. With that much altered on the rest of the data disks, emulating disk1 is no longer an option. Sometimes you can get away with a few writes caused by the mounting process, where the corruption is small enough that a file system check can repair things, but new files being written to the remaining data drives will corrupt things far beyond repair. The parity drive in isolation is worthless; it holds no data whatsoever. It only has value when it's in sync with all the rest of the data drives. Remove one data drive, and the rest of the data drives plus the parity drive emulate the single missing drive.
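The relationship described above can be sketched in a few lines of Python. This is only an illustration of the XOR math, not Unraid's actual code, and the drive contents are made-up placeholder bytes:

```python
from functools import reduce

def xor_blocks(blocks):
    """XOR equal-length byte blocks together, byte by byte."""
    return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*blocks))

# Hypothetical contents of three data drives (equal-size blocks).
disk1 = b"\x0f\xa0\x33"
disk2 = b"\x55\x12\xff"
disk3 = b"\x00\xcc\x81"

# Parity is the XOR of every data drive together.
parity = xor_blocks([disk1, disk2, disk3])

# Remove disk1: parity XORed with the remaining drives rebuilds it exactly.
rebuilt = xor_blocks([parity, disk2, disk3])
assert rebuilt == disk1

# But if disk2 is modified while disk1 is missing and parity is not kept
# in sync, the "rebuilt" data is garbage:
disk2_changed = b"\x55\x12\x00"
assert xor_blocks([parity, disk2_changed, disk3]) != disk1
```

This is why writes to the surviving drives must update parity in lockstep: parity only emulates the missing drive while it stays in sync with everything else.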
  8. In the device manager, does the monitor show up the same in bare metal as it does in pass through?
  9. They're not hardlinked, they ARE the same files. A user share is simply all the files, across all the disks, inside the root folder named after the share. If you set the share in question to cache:yes and run the mover, those files will get transferred to the array disks as long as nothing is holding them open.
  10. Try https://192.168.30.201 if you enabled ssl or http://192.168.30.201 if you have not.
  11. Each parity drive must be at least as large as the largest data drive.
  12. You are probably going to have to turn off all your containers, put the data back, and see if the files survive the night; then enable half the containers, wait, and keep narrowing it down. Unless someone is connecting to your server and deleting the files, it's got to be one of the containers doing it.
  13. As long as you have a copy of the files on the key, you can set up a new key, and on first boot it will walk you through transferring the license to that new key. You can do that automatically through the Limetech servers once a year; if you mess up a key before the year is up, you would need to talk directly to support and explain what happened, and they can manually transfer the license. Yes, the data and pool disks are all standard Linux formats, currently either XFS or BTRFS, so any OS capable of mounting and reading those file systems can access the data.
  14. Two cron lines would do it, or you could use the User Scripts plugin and schedule that way. Starting, stopping, or pausing an already installed container is super easy from the command line; I use docker pause <container name> and docker unpause <container name>. Your specific command would probably be docker pause binhex-urbackup, assuming you didn't change the name.
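As a sketch, the two cron lines might look like this; the times are placeholders, and the container name binhex-urbackup is assumed from the post above:

```shell
# Pause the backup container every day at 07:00, resume it at 19:00.
0 7 * * * docker pause binhex-urbackup
0 19 * * * docker unpause binhex-urbackup
```

Paused containers keep their state in memory, so unpause resumes instantly; use docker stop/start instead if you want the container fully shut down.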
  15. Use the appropriate virsh command in a cron job.
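For example, assuming a VM named Win10 (a placeholder; check virsh list --all for your VM's actual name), a nightly shutdown and morning start in cron could look like:

```shell
# Gracefully shut the VM down at 23:00 and start it again at 07:00.
0 23 * * * virsh shutdown Win10
0 7 * * * virsh start Win10
```

Note that virsh shutdown asks the guest OS to power off gracefully; virsh destroy is the forceful equivalent and risks data loss.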
  16. Have you run the specs through the estimator to see if it will even work for what you want to do? https://shinobi.video/estimate
  17. There are very good, if a little complex, reasons NOT to automatically repair after a crash. You can always stop a non-correcting check and start a correcting one at any time if you are sure all your hardware is healthy and the only reason for the errors is an improper shutdown.
  18. Yes, but it didn't have the error I was looking for that may have accounted for what you are seeing. Just to be clear, the files are gone on the disk? From your description it sounded like it could be just plex losing track.
  19. Why not specify a different share for the backup location, and have that share cache:yes?