jonathanm last won the day on August 14

About jonathanm


  1. If you really want to, you could do something like you said if you have enough RAM. However... if you have a power outage, you will need to manually intervene, and have a long enough UPS runtime to stop the docker service, move the appdata and system shares to an array disk, then shut down after all the data is safely back on the array. I'd guesstimate you'd need around an hour of runtime to get all that accomplished, so either not a consumer grade UPS, or a backup generator that will allow seamless power through the UPS. Then, when the coast is clear, start the array, manually move appdata and system back to /mnt/cache (the mover won't work if there is no real cache drive), enable the docker service, and you're back up and running. If at any point the box shuts down before you get your data moved out of RAM, it's all gone. So, theoretically, given enough resources (RAM, UPS runtime) you could make it work. However, it seems to me that sourcing an SSD cache drive would be much cheaper and less stressful.
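The "evacuate the RAM-backed shares before power dies" step above can be sketched as a script run while still on UPS power. This is only a sketch; all the paths (and the idea of a `/mnt/ramdisk` mount point) are assumptions you would adjust to your own layout.

```shell
#!/bin/bash
# Sketch of manually moving appdata/system out of RAM to an array disk.
# All paths here are placeholders -- adjust to your actual setup.

move_share_to_array() {
  local src="$1" dst="$2"
  mkdir -p "$dst"
  # Copy first; only remove the source once the copy has succeeded.
  cp -a "$src"/. "$dst"/ && find "$src" -mindepth 1 -delete
}

# Hypothetical usage, with the docker service already stopped:
# move_share_to_array /mnt/ramdisk/appdata /mnt/disk1/appdata
# move_share_to_array /mnt/ramdisk/system  /mnt/disk1/system
```

On the way back up you would reverse the direction, copying from the array disk to /mnt/cache before re-enabling the docker service.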
  2. If you want, since you are rebuilding parity anyway, you could skip the clearing process on the old 6TB and add it as a data drive along with the 2 new parity drives. Set a new config and retain all, then make all your assignment changes at once. If you leave the old 6TB unformatted, and don't write to the array while parity is being built, you could actually roll back and reconstruct one failed data drive if you have an issue during the 8TB parity build process.
  3. Why? Do you want to ensure that they can't run simultaneously? I don't see a good reason to go through the extra hassle vs. just setting up 2 VMs.
  4. I suspect the docker settings aren't actually the OP's issue. It sounds to me like the appdata got lost / corrupted in the handling of the 1TB/120GB SSD cache pool.
  5. Direct quote from the link: "Plex does not support the use of ISO, IMG, Video_TS, BDMV, or other “disk image” formats. If you wish to use those with Plex, you should convert them to a compatible format." "Adding a scanner", whatever that means, is not going to change the fact that Plex can't use .iso files.
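If you want to check whether a library actually contains any of those unsupported formats, something like this will list them. A sketch only; the function just wraps `find`, and the example path is a placeholder for your own media share.

```shell
# List files/folders Plex will refuse to play: .iso/.img disk images
# and VIDEO_TS / BDMV folder rips.
find_disk_images() {
  find "$1" -type f \( -iname '*.iso' -o -iname '*.img' \) \
       -o -type d \( -iname 'VIDEO_TS' -o -iname 'BDMV' \)
}

# Hypothetical usage:
# find_disk_images /mnt/user/media
```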
  6. I'm fairly sure it will bail out and allow you to remove it. I can't remember where I saw that, and I haven't personally tested it. It's fairly easy to test if you want to risk invalidating parity for the sake of demonstration: add a non-precleared drive in a hot swap bay, let the clear process start, turn off the power to that specific bay, and see what happens.
  7. Wait, what? So if you edit an existing script, it will be affected? I'm guessing what you are saying is that until you open and save an existing script, the update doesn't change anything. Would it break things if, instead of silently stripping the offending characters, you remapped them in the display to some visible character? What I'm getting at is: instead of silently fixing the issue, make it obvious where the characters are, like maybe blinking the space. That way, if an unprintable character is in a spot where it will break the script, the user can just delete it; and if it's intentional, the web editor can still be used without stripping it.
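In the meantime, a rough check like this will find stray unprintables before the editor touches them. A sketch: it counts any byte that isn't printable ASCII, tab, or newline (so carriage returns, which actually break scripts, count as offenders).

```shell
# Count bytes in a file that are not printable ASCII, tab, or newline.
# A result of 0 means the web editor would have nothing to strip.
count_unprintable() {
  tr -d '\11\12\40-\176' < "$1" | wc -c
}

# To see exactly where the odd characters sit, GNU cat can mark them:
# cat -A myscript.sh    # tabs show as ^I, CR as ^M, etc.
```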
  8. BTW, I'm pretty sure the Unraid trial only requires internet access to start the array. As long as you keep the array started, you don't need internet.
  9. Also, what was your intent in pairing a 1TB SSD with a 120GB SSD in the cache pool? If you haven't manually adjusted things, that will end up with 120GB of usable space, basically ignoring the capacity of the 1TB, and a free space display in the GUI that is seriously wrong.
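That 120GB figure follows from the pool's default btrfs raid1 profile, which keeps a copy of every block on each device, so the smaller device is the ceiling. Back-of-envelope arithmetic (sizes rounded; that the pool is on the raid1 default is an assumption about the setup):

```shell
# btrfs raid1 writes two copies of everything, one per device,
# so a mismatched pair is limited by the smaller device.
small=120   # GB, the 120GB SSD
large=1000  # GB, the 1TB SSD
echo "usable: ${small} GB"
echo "stranded on the larger device: $((large - small)) GB"
```

On a live system, `btrfs filesystem usage /mnt/cache` shows the real allocatable figures rather than the naive sum the GUI displays.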
  10. Correct. Your license is good indefinitely. You only get one automatic transfer to a different flash drive per year, but should you have an issue before the year is up, you can email Limetech and they will take care of you. The license key file is tied to the physical drive; as long as you keep that key file, it's registered. It doesn't matter what you do to the rest of the files on the drive, format it, whatever; as long as you put the same key file back in the config folder, it's fine. That may not have been the best move long term, though; physically larger drives seem to last longer. It would have been better to get a cable to mount the USB drive inside the case and attach it to the motherboard's IDC header. Something like this or similar. https://www.newegg.com/startech-usbmbadapt-usb-a-female-to-usb-motherboard-4-pin-header/p/N82E16812200294
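Since everything hinges on that one key file, keeping a copy of it off the flash drive is cheap insurance. A sketch: on Unraid the flash mounts at /boot and the key sits in /boot/config, but the backup destination here is purely an assumption.

```shell
#!/bin/bash
# Copy the Unraid license key file(s) off the flash drive.
# On a running Unraid box the flash mounts at /boot; keys live in /boot/config.
backup_key() {
  local config_dir="$1" backup_dir="$2"
  mkdir -p "$backup_dir"
  cp "$config_dir"/*.key "$backup_dir"/
}

# Hypothetical usage:
# backup_key /boot/config /mnt/user/backups/flash
```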
  11. Quick question. How feasible would it be to set up another machine at a different IP with read only access to that same 350TB?
  12. Simple solution. User Scripts plugin, schedule this script to taste, once an hour or whatever.

      #!/bin/bash
      docker restart binhex-nzbget

      Symptoms solved, no interaction required. The root CAUSE is still there, but I haven't seen any bad effects from running this.
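If the blunt hourly restart bothers you, the same plugin can run a conditional version that only restarts the container when it stops answering. A sketch: the helper is generic, and the commented usage assumes NZBGet's default web UI port of 6789 plus the container name from the script above; both may differ on your box.

```shell
#!/bin/bash
# Restart a container only when its health check fails, instead of
# unconditionally on every run.
restart_if_down() {
  local check_cmd="$1" restart_cmd="$2"
  if ! eval "$check_cmd" >/dev/null 2>&1; then
    eval "$restart_cmd"
    echo "restarted"
  else
    echo "healthy"
  fi
}

# Hypothetical usage (port 6789 is NZBGet's default web UI port):
# restart_if_down "curl -fsS --max-time 10 http://localhost:6789" \
#                 "docker restart binhex-nzbget"
```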
  13. The real fix is running the 2.0 docker with the 2.0 windows front end. See here for some help. (Disclaimer: I don't use the windows client; I use linux desktop machines. The following information is what I obtained with 2 minutes of googling and reading.) https://forum.deluge-torrent.org/viewtopic.php?f=12&t=55404&start=20#p230175