Everything posted by JonathanM

  1. The super.dat file contains the array drive assignments. Back up super.dat and your license key file, wipe the USB drive, prepare it as new, copy those two files back to the config folder, boot up, and start reconfiguring things. Your users and share settings will all be reset to defaults.
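
The backup-and-restore step above can be sketched in shell. This is a simulation using temp dirs in place of the real flash drive (on Unraid the config folder lives at /boot/config); the key filename here is a placeholder, yours will differ.

```shell
# Stand-in for the flash drive's config folder
FLASH=$(mktemp -d)
mkdir -p "$FLASH/config"
touch "$FLASH/config/super.dat" "$FLASH/config/Plus.key"   # Plus.key is a hypothetical key name

# Back up just the two files that matter before wiping the stick
BACKUP=$(mktemp -d)
cp "$FLASH/config/super.dat" "$FLASH/config/"*.key "$BACKUP/"
ls "$BACKUP"
```

After re-preparing the USB drive, copying these two files back into config preserves the license and the array drive assignments.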
  2. Slim to almost none. Since they don't execute themselves, anything embedded has to trigger a vulnerability in the player / viewer. Which means it's trivial to scan them without risking a breach. Your rebuild plan sounds solid to me. If the only thing you keep from your USB config folder is your license key and super.dat file, I can't see how a vulnerability could survive there, and deleting your appdata and docker image should take care of any compromised executables there.
  3. Are you sure the serial number thing isn't a controller difference? However, the firmware serial number should still match the label on the drive, so...
  4. Ouch. If I were in your shoes, I'd run it through a couple preclear cycles and see how it acts. Hopefully if it's going to fail early, you can induce the failure before your data is at risk. If it survives, hopefully it will stay OK for a good long while.
  5. Modern consumer drives are typically very quiet, pronounced ticking is not normal. Perhaps google your specific model and see if people in general are complaining about noise, if so, it's probably normal for that model. Some drives have a specific SMART test for shipping and handling damage, maybe see if yours does? Bottom line, if it makes you uncomfortable, you will never fully trust the drive, so do what it takes to come to terms with it, either by getting examples of similar behaviour in long term perfectly functioning drives, or returning it and replacing with a new one. Don't RMA it, you are pretty much guaranteed to get somebody else's problem.
  6. As long as the share allocation settings are set correctly, Unraid will write to a different disk when needed. Yes, if you point Plex at the user share, it will see all the files, regardless of which disk they happen to be on. Share allocation can be managed in a multitude of different ways, so there is no one correct setting. Some people want to fill up a single drive before moving on, some want to spread the data evenly across all or a subset of the drives, and some want to keep like files together on the same drive. There are gotchas with each scenario, so it's very possible to run into a combination of settings that ends up running out of space and not moving to a new drive automatically. You will need to figure out how you want to keep things organized, and read the help tips for each setting to be sure you don't have clashing requirements.
  7. Each data drive is a separate file system and can be accessed as such if necessary, but for normal use, most people use "User Shares". Every root folder on each data drive and cache pool is a user share, and the contents all appear together for identically named root folders. So if disk1 has a root folder called MOVIES, and disk2 also has a root folder MOVIES, then all the files in both those folders will be accessible in the user share MOVIES. There are rules that each user share can be configured to follow that determine which physical drive is the initial and final destination for a file written to that share.
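
The merging of identically named root folders can be simulated in shell. Temp dirs stand in for the real /mnt/disk1, /mnt/disk2, and /mnt/user paths on an Unraid box; filenames are made up for illustration.

```shell
# Two data disks, each with a root folder called MOVIES
DEMO=$(mktemp -d)
mkdir -p "$DEMO/disk1/MOVIES" "$DEMO/disk2/MOVIES"
touch "$DEMO/disk1/MOVIES/alpha.mkv" "$DEMO/disk2/MOVIES/beta.mkv"

# The user share view is effectively the union of the per-disk folders:
find "$DEMO"/disk*/MOVIES -type f -printf '%f\n' | sort
```

On the real system, /mnt/user/MOVIES would present both files in a single folder, without you needing to know which disk holds which file.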
  8. Sounds like you didn't disable the services. If the VMs and Docker tabs are still there on the GUI, the files in question are still in use, regardless of whether any containers or VMs are running.
  9. The limit of 2 applies to parity. You can use pretty much any valid BTRFS RAID level in the cache pool. Some configurations are more stable or performant than others. Current limit of 24 drives in the cache pool. YMMV
  10. Weird. My profile just shows <Last session>, which I never edited. I just went through the check uncheck exit dance and it worked.
  11. That's because there is already a topic dedicated to this container. If you click on the container icon in unraid, there is a Support item that will take you to the correct page to get support for the container.
  12. Depends on the root folder for source and destination. As long as you stay inside /mnt/user you can move things around however you want.
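
A minimal sketch of a move that stays inside the user share tree. A temp dir stands in for /mnt/user; the share and file names are made up.

```shell
# Stand-in for /mnt/user with two hypothetical shares
USERROOT=$(mktemp -d)
mkdir -p "$USERROOT/downloads" "$USERROOT/movies"
touch "$USERROOT/downloads/film.mkv"

# Moving between shares under the same root is fine
mv "$USERROOT/downloads/film.mkv" "$USERROOT/movies/"
ls "$USERROOT/movies"
```

The thing to avoid on a real Unraid system is mixing /mnt/user and /mnt/diskX paths in the same move, since they can refer to the same underlying file.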
  13. Something that may be of interest to you. I've never tried to set it up, but in principle it sounds like what you wanted to accomplish without losing the quality of the original lossless files. https://khenriks.github.io/mp3fs/
  14. Also, if you are going to start actively adding files again, you should migrate from your current ReiserFS format to XFS on new drives that you add. ReiserFS performs very poorly on drives larger than 2TB, especially when adding and deleting files on mostly full drives. Migrating entails copying data from the ReiserFS drive to a different drive, and formatting the source drive to XFS to become the destination from another ReiserFS drive. Unfortunately you can't change formats on a drive and keep the data intact. If you never plan to add files to Unraid and simply want to read your old files, it's probably not worth the hassle.
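
The copy half of the migration can be sketched as below. This is a simulation with temp dirs in place of the real mount points (e.g. /mnt/disk1 as the ReiserFS source and /mnt/disk2 as the freshly formatted XFS destination); on a real system you'd also verify before formatting the source.

```shell
# Stand-ins for the ReiserFS source disk and the XFS destination disk
SRC=$(mktemp -d)
DST=$(mktemp -d)
mkdir -p "$SRC/MOVIES"
touch "$SRC/MOVIES/old.mkv"

# Copy everything, preserving attributes, then verify before touching the source
cp -a "$SRC/." "$DST/"
diff -r "$SRC" "$DST" && echo "copy verified"
```

Only after the verification passes would you format the emptied source drive to XFS and use it as the destination for the next ReiserFS drive.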
  15. Plenty of guides on the internet. Search migrate kvm to virtualbox https://funcptr.net/2012/04/01/converting-kvm-virtual-machines-to-virtualbox/
  16. Even worse for predictions, files in place can be grown over time. You pretty much have 2 choices, set a minimum free space large enough that you should theoretically never overrun it and use good choices for allocation and split level, or micromanage each disk by manually moving things around. The default high water allocation is a decent way to keep things working well automatically, since it fills each drive sequentially until each threshold is met. By the time you fill all your disks more than 3/4, it's probably a good idea to start shopping around for more capacity by replacing or adding drives. If you micromanage and move things around manually, you can afford to run much closer to the space limits on the drives before you add capacity. I personally tend not to let total array free space fall below the size of my largest data disk, but that's because I like to know that in a pinch I can totally free up my largest drive.
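
The high-water behavior described above can be roughed out with some arithmetic. This is a simplified sketch of the idea, not the exact algorithm: the free-space threshold starts at half the largest disk and halves on each pass, so writes fill one disk at a time instead of scattering. The disk size is illustrative.

```shell
# Hypothetical 8 TB largest disk
DISK_TB=8

# Pass 1: each disk is filled until only half its size remains free
T1=$((DISK_TB / 2))
# Pass 2: once every disk is below that mark, the threshold halves again
T2=$((T1 / 2))

echo "pass1=${T1}TB pass2=${T2}TB"
```

The practical upshot is that only one drive tends to be actively filling at any time, which keeps the others spun down.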
  17. Install TeamViewer / Splashtop / NoMachine INSIDE the VMs.
  18. Set the folder views the way you want them, make sure it's set to save all the settings, then close it, open back up and it should be where you left it. UNCHECK the save settings on exit and last session, and it should always come up the way you saved it.
  19. I can't see any mention of port forwarding, in fact just the opposite, they advertise double NAT. So, no ability to correctly set up torrent clients, and no point in trying to securely reverse proxy into your apps. Might be ok for a browsing client, no good at all for torrenting or a server.
  20. You don't have to change them if you don't want to, just understand the differences. Setting "Cache Only" only applies to new writes to the user share. Here is an example to hopefully clarify your second quoted sentence. Say TESTING is a user share set to cache only. This will create /mnt/cache/TESTING. You, or an app you configured, write to /mnt/disk1/TESTING/blah.txt instead of /mnt/user/TESTING/blah.txt. That file will be available at /mnt/user/TESTING/blah.txt. Now, even though the share is set to cache only, that file (blah.txt) exists on disk1. FCP will complain, and you will either have to fix it manually by moving the file from /mnt/disk1/TESTING to /mnt/cache/TESTING, or set the share to cache prefer and run the mover while nothing has blah.txt open.
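
The manual fix from the example above can be sketched in shell. Temp dirs stand in for /mnt/disk1 and /mnt/cache; the share and file names come from the example itself.

```shell
# Stand-ins for the array disk and the cache pool
MNT=$(mktemp -d)
mkdir -p "$MNT/disk1/TESTING" "$MNT/cache/TESTING"
touch "$MNT/disk1/TESTING/blah.txt"   # the misplaced file on disk1

# Move the stray file to where the cache-only share expects it
mv "$MNT/disk1/TESTING/blah.txt" "$MNT/cache/TESTING/"
ls "$MNT/cache/TESTING"
```

On a real system you'd do this move while nothing has blah.txt open, for the same reason the mover needs the file closed.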
  21. Not enough data to determine risk. Timing of the pending sectors? Any corresponding events like power or cable disturbance? In general a drive that passes a couple preclear cycles with no outstanding issues should be good to go, but it's still possible the drive is dying. All drives fail eventually, the trick is figuring out when before it actually happens. My crystal ball is broken.
  22. Typo. https://github.com/maschhoff/shortipy
  23. Yes, but... The Nvidia build of Unraid is unsupported by Limetech. Feel free to support this yourself, but let people know that if they have issues with Unraid while implementing this, it's not a general support thing; they will need to follow up with the LinuxServer.io folks and yourself, and any general support issues need to be dealt with while NOT using the Nvidia build. There are so many things that are particular to this setup, and the docker system in Unraid is meant to be as gotcha-free as possible. Having containers that only run when you have an LSIO-plugin-supported Nvidia video card as a non-primary GPU not used in a VM, plus all the settings that must be modified, goes against the "just click and run a container" mindset. Running VMs with hardware passthrough is complex enough; I can't see Unraid generally supporting container video passthrough any time soon.