ajm

Everything posted by ajm

  1. I recently did a WHS to Unraid transfer. I bought a new drive to start as my parity drive, then drained a disk from the WHS box to start the process. Running the "Remove Disk" wizard within WHS moves all the data off that disk and onto the remaining space in the WHS box. Once I had drained the disk, I installed it in my new Unraid box in bay 2 (bay 1 being the new parity drive) and built the unit up. I cleared the drive, created a share on the array (with a single drive), and exposed it over the network. It's then just a question of copying data off WHS into the new Unraid share, then deleting the copied data off WHS to free up another drive to run the Remove Disk wizard on. Drain a disk, move it to the Unraid box, clear it, add it to the share, and copy more data off WHS. You do eventually have to delete data off WHS, so to protect against failure I let parity rebuild after each disk transfer; that way I was never in a situation where I couldn't recover from a drive failure. (A rough copy-and-verify sketch is included after this list.)
  2. Hi bonienl, yep, I'm going to do that too; it just seems somewhat inefficient to write a large file to non-volatile storage when the payload is inherently volatile/temporary. If the ability to flex the size of /tmp were added, based on one's own use case or workload, that would be great.
  3. I currently have 16GB of RAM in my server, and it looks like half of this gets assigned to the root (/) partition, which includes the /tmp path. I'm using Plex Media Server and wanted to transcode to RAM for efficiency, so I map /transcode to /tmp in the PMS docker. However, given / is limited to 8GB, Plex can then only transcode files that fit within the 8GB limit of the root filesystem. It might be useful (to me, maybe others) if you could configure the default size of root, or even better, set up /tmp as a separate mount so you can just configure how much memory is assigned to /tmp. I may just be a fringe use case, however... (See the /tmp space-check sketch after this list.)
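
For the copy step in post 1, here is a minimal Python sketch of the copy-and-verify idea. The mount points are assumptions for illustration (in practice plain network copies work fine); the point is simply to confirm everything landed on the Unraid share before deleting anything off WHS.

```python
import filecmp
import shutil
from pathlib import Path

# Hypothetical mount points for illustration -- substitute your own.
WHS_SHARE = Path("/mnt/whs_share")      # data still on the WHS box, mounted locally
UNRAID_SHARE = Path("/mnt/user/media")  # destination share on the Unraid array

def trees_match(cmp: filecmp.dircmp) -> bool:
    """Recursively check two directory trees for identical contents
    (name/size/mtime comparison via filecmp's default shallow mode)."""
    if cmp.left_only or cmp.diff_files or cmp.funny_files:
        return False
    return all(trees_match(sub) for sub in cmp.subdirs.values())

def copy_and_verify(src: Path, dst: Path) -> bool:
    """Copy src into dst, then verify the trees match before any deletion."""
    target = dst / src.name
    shutil.copytree(src, target, dirs_exist_ok=True)
    return trees_match(filecmp.dircmp(src, target))

if copy_and_verify(WHS_SHARE / "Videos", UNRAID_SHARE):
    print("Copy verified -- safe to delete the source data off WHS.")
else:
    print("Mismatch found -- do NOT delete anything off WHS yet.")
```

Letting parity rebuild between transfers, as in the post, is still the real safety net; the verify step only guards the copy itself.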
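
And for posts 2 and 3, a small sketch (the 10 GiB transcode size is a made-up number for the example) showing how to check the space actually backing /tmp. os.statvfs reports on whatever filesystem /tmp lives on, so on a box where / is capped at 8GB it makes the transcoding ceiling visible directly.

```python
import os

def free_space_gib(path: str = "/tmp") -> float:
    """Free space (GiB) on the filesystem backing `path`."""
    st = os.statvfs(path)
    return st.f_bavail * st.f_frsize / 2**30

def total_space_gib(path: str = "/tmp") -> float:
    """Total size (GiB) of the filesystem backing `path`."""
    st = os.statvfs(path)
    return st.f_blocks * st.f_frsize / 2**30

# Estimated size of the transcode output -- a made-up number for the example.
estimated_transcode_gib = 10.0

print(f"/tmp total: {total_space_gib():.1f} GiB, free: {free_space_gib():.1f} GiB")
if estimated_transcode_gib > free_space_gib():
    print("Transcode won't fit in /tmp -- it shares the 8GB root filesystem.")
```

If /tmp were its own tmpfs mount, the same call would report whatever size it was given instead of the root-filesystem limit.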