
Pandemic


Posts posted by Pandemic

  1. 10 hours ago, alturismo said:

that's 2 different things

1/ Plex consumption itself (can be up and down, while 12GB is pretty high ;))

2/ some tmpfs for transcoding ... different value ...

For sure, but the Docker tab showing 12GB was different from what both /tmp and /plex_tmp were showing. Is it possible the container was using extra memory not allocated to transcoding?

I used Appdata Backup (I think I did it right), then stopped the array, selected no disk for cache, started the array, stopped it again, and then selected the 512GB NVMe. I kept the 300GB Docker image size just so I don't have to redo that aspect. I know it's way more than I need, but with some LLMs being 20+ GB, I figured I'd try to future-proof.

    Can I just reduce the size without having to rebuild everything?

     

Funny thing, though: as I'm moving data to the Unraid box, my speeds are about 10-20 MB/s slower than they were with the Raptor drive. Everything I'm seeing suggests NVMe should have a faster write speed, so I'm not sure what's happening.

     

edit: this may be an issue originating from the box containing the files being transferred to the Unraid box. That's about the only logical conclusion I can think of.
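One way to rule the Unraid side in or out is a quick local write test on the receiving box, which takes the network and the sending machine out of the equation. A minimal sketch with `dd` (writing to a scratch temp file here; on the server you'd point `of=` at a file on the cache drive instead):

```shell
# Write 64 MiB and force a flush so the reported MB/s reflects the disk,
# not the page cache; dd prints its throughput summary on stderr.
tmpfile=$(mktemp)
dd if=/dev/zero of="$tmpfile" bs=1M count=64 conv=fsync 2>&1 | tail -1
rm -f "$tmpfile"
```

If the local figure is well above what the network copy achieves, the bottleneck is the transfer path or the source box, not the NVMe.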

I'm looking for help setting up Docker in the most ideal way. I've made other posts, but they died out, so rather than trying to modify what I have, I'll just ask for the best way and do that.

     

I've got an array of about 10 disks, one cache drive (a 150GB 10K RPM Raptor), and (for now) a 512GB NVMe. I want to set up Docker to run Plex and Ollama containers. Because Ollama (and the UI) will be running LLMs, I'm hoping to set this up to run as fast as possible.

     

I need to figure out how to remove the Raptor drive from the mix; I'll work that out later.

     

I'll run the 512GB NVMe as the cache drive, I expect. Is a pool a mix of multiple drives (e.g., two NVMe drives)?

I think running Docker from the NVMe, keeping it outside the array, will be best.

     

What should the Docker image size allocation be? I had it set to about 300GB just to keep space in reserve for the future, but I'm not sure that's necessary.

     

    Let me know your suggestions so I can try to understand how to future-proof this setup.

  4. 8 hours ago, alturismo said:

    you can always check in the terminal ... sample for /tmp

     

    root@AlsServerII:~# df -h /tmp/
    Filesystem      Size  Used Avail Use% Mounted on
    rootfs           16G  1.5G   15G  10% /
    root@AlsServerII:~#

     

which is already mounted by default with 1/2 of RAM, usually

I was just coming in to mention that the Plex container is currently using 12.79GiB / 31.19GiB while there has been no transcoding activity for at least 8 hours.

     

    df -h /tmp/ shows:

    root@name:~# df -h /tmp/
    Filesystem      Size  Used Avail Use% Mounted on
    rootfs           16G  1.1G   15G   7% /

     

    df -h /plex_tmp/ shows:

    root@Arc:~# df -h /plex_tmp/
    Filesystem      Size  Used Avail Use% Mounted on
    rootfs           16G  1.1G   15G   7% /

     

Both look fine, but the Docker tab in Unraid is showing more consumption, well over the 4GB I thought had been set.
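One possible explanation for the gap: the memory figure Docker reports for a container includes the container's page cache (files Plex has read for scans, thumbnails, and playback) on top of its actual process memory, and none of that shows up under the tmpfs mounts. A sketch of how to read that split from a cgroup v1 `memory.stat` (the file below is a made-up sample, not this server's values; on cgroup v2 the keys are `file` and `anon` instead):

```shell
# Sample cgroup v1 memory.stat: "cache" is reclaimable page cache,
# "rss" is memory the container's processes actually allocated.
statfile=$(mktemp)
cat > "$statfile" <<'EOF'
cache 11811160064
rss 2147483648
EOF
# Convert each counter from bytes to GiB
awk '{printf "%s %.1f GiB\n", $1, $2 / 1024^3}' "$statfile"
```

With these sample numbers that prints `cache 11.0 GiB` and `rss 2.0 GiB`: most of the reported usage would be cache the kernel can reclaim under pressure, which would match a high Docker-tab figure with no transcoding running.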

     

     

I've noticed Plex is using 14.79GiB / 31.19GiB this morning, when the last play was a few hours ago. This is the highest usage I've seen.

     

/data → /mnt/user/Media Library/
/config → /mnt/user/Docker/Plex-Media-Server
/transcode → /tmp

     

    --mount type=tmpfs,destination=/tmp,tmpfs-size=4000000000

     

I thought the above would limit RAM usage for transcoding to 4GB. I'm just confused about what's causing almost 15GB of usage in Plex when the last activity should have been a direct play over three hours ago.
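For what it's worth, that `--mount` flag caps only the tmpfs filesystem at /tmp inside the container; it puts no ceiling on the container's overall memory, so Plex's own processes and page cache can still grow well past 4GB. If the goal is to cap the container as a whole, a hedged sketch of the Extra Parameters field (the `8g` limit is an assumed example, not a value tuned to this server):

```shell
--mount type=tmpfs,destination=/tmp,tmpfs-size=4000000000 --memory=8g
```

With `--memory` set, Docker holds the container (including its cache pages) under the limit rather than letting it float.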

If I'm alright with making sure the NVMe disk is mounted each time, what's the benefit of running Docker from a pool or the array rather than from the disk itself?

     

edit: I have a 150GB 10,000 RPM Raptor drive as cache right now. If I use the 512GB NVMe drive to replace the cache drive and act as the Docker directory, would that be the ideal way to set this up?

  7. 7 minutes ago, Kilrah said:

So you've put that on an unassigned device? That's not a great idea. It won't be available when expected at array start unless you always manually make sure to mount it; it should be on a pool or the array.

     

    Why a folder for both? Normally there are separate shares for each and they would be either on a pool or array as well.

     

    /mnt/user is a merge of all array disks and pools. /mnt/user0 is a merge of all array disks but not pools.

     

    Why do everything differently from standard setups?

I don't think I understood what I was doing when I set it up years ago. The common question from others is why this is so different from the default. I think it'd make the most sense to purge everything and rebuild, but I'd like to explore how best to set it up for the future.

     

I run media from Plex, I run Storj and share a disk outside the array, and I'm hoping to run the Ollama UI. I'm also hoping I can share the GPU between Plex and Ollama when one is not in use.

I'm spinning my wheels and don't know how to dig my way out here. Originally I set up Docker to use disk 7 of the array. I want to use the Ollama UI along with a GPU to host a local LLM, and I thought moving Docker to an SSD would be a good idea. I copied the docker-xfs.img as well as the 'Docker and VM' folder.

     

    Docker settings show the following locations:

    Docker vDisk location: /mnt/disks/BE77072214FF00210169/docker-xfs.img

    Default appdata storage location: /mnt/user/Docker and VM/

     

My confusion is that I still have 'Docker and VM' on disk 7 (never removed it), as well as on the SSD (makes sense), plus under /user and /user0.

     

I don't understand which containers are using which data. When I moved the docker-xfs.img and the 'Docker and VM' folder from disk 7 to the SSD, Plex reverted to default and I lost my play history, etc. I don't understand why that happened.

     

    I need to know if this is worth fixing/organizing or if I should scrub everything and start over from the SSD.

Is the /user folder part of the array? If so, is there a way I could make Docker separate from the array in an effort to let the drives spin down and possibly save power?

     

Will using the 512GB SSD make Ollama and local LLMs that much faster?

     

I copied the contents of the disk 7 'Docker and VM' folder to the /user 'Docker and VM' folder, and I'm getting this error with Plex:

    Stopping Plex Media Server.
    Starting Plex Media Server.
    Error: Unable to set up server: sqlite3_statement_backend::loadOne: attempt to write a readonly database (N4soci10soci_errorE)

     

    Storj has an error too saying:

    Error: Error opening database on storagenode: database: orders opening file "config/storage/orders.db" failed: unable to open database file: no such file or directory

     

Storj and Plex had been working fine after the move. The Storj error started last week, and the Plex error started after I copied the 'Docker and VM' contents from disk 7 to the /user directory in an attempt to restore historic data.

     

    Thanks for the help.

     

     

    arc-diagnostics-20240819-0953.zip

Docker and VM was originally set up on disk 7 and, yeah, the parity drive and disk 7 always spun. I moved everything over to an SSD and allocated 300GB to it because I'm planning on installing the Ollama UI along with a few models and wanted to make sure I had enough space.

     

Now that disk 7 is no longer associated with Docker, I'm still finding the parity drive and disk 1 are almost always spinning for some reason. Disk 1 is where the bulk of the media is stored, so maybe Plex is keeping it spinning; I'm not sure.

     

Why did I change away from the standard defaults? I'm not sure. I may have set it up without understanding what I was doing, or maybe there was a reason for it. I'd set it up from scratch, but I have a Storj instance running and I seem to remember it was a pain to get working. I'd also like to figure out why my Plex server had to be set up from scratch, causing my watch history to be lost.

     

     

Most of the array is not in use, so the drives can spin down. I moved Docker to the SSD so the HDDs should be able to spin down. The thing is, the parity drive and disk 1 (the two 12TB drives) are constantly up. Is it possible to see what's accessing them, so that activity can be off-loaded? I was thinking this might help save a bit of power, but so far it seems to be a lot of work for nothing. I appreciate the help.

  11. On 7/29/2024 at 3:15 AM, itimpi said:

    Wherever you configure it to be located :) 

     

Sounds as if you have not set up Plex to use it. Perhaps a screenshot of your Plex Docker settings might give us a clue as to how you have things set up for running Plex.

Appdata is under /mnt/user/docker, but this directory didn't change, I don't think. I have an Application Support folder in the Docker folder that I copied to the SSD.

    On 7/29/2024 at 10:50 AM, dragoontwo said:

    Did you move the docker vdisk location in docker settings?

I copied the Docker folder from the HDD to the SSD and pointed Docker to the new location. The appdata location did not change.

I'm currently running two Supermicro 8-port SAS/SATA cards (AOC-SASLP-MV8) in a 15-bay Unraid server. I want to put a GPU in the PCIe x16 slot, so I'll only have room for one of these cards.

     

I've got 11 HDDs right now, so I'm using 8 ports on the SAS card plus 3 SATA ports on the motherboard. I can probably continue like this for a few years, to be honest.

    I'm thinking about how best to go forward. I only really use about 2-3 drives at any given time.

     

So my options are to look around for an 8-port SAS HBA that's PCIe x1 (if such a thing exists) or a 16-port SAS HBA on PCIe x4 or x8. Gen 3 PCIe x1 seems to have a max throughput of about 1 GB/s, which should (?) be fine for 2-3 drives at full read/write speeds. I doubt there would be a quality chipset for that, or that it even exists.
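The x1 budget roughly checks out on paper: a Gen3 lane runs 8 GT/s with 128b/130b encoding, which works out to about 985 MB/s usable. A quick back-of-the-envelope check (the 270 MB/s per-drive figure is an assumed sequential speed for a modern 7200 RPM HDD, not a measured number):

```shell
# Usable bandwidth of one PCIe Gen3 lane in MB/s (8 GT/s, 128b/130b encoding)
lane_mb=985
drives=3          # drives active at once, per the usage pattern above
per_drive_mb=270  # assumed sequential throughput per HDD
needed=$((drives * per_drive_mb))
echo "need ${needed} MB/s of ${lane_mb} MB/s available"
if [ "$needed" -le "$lane_mb" ]; then
  echo "a Gen3 x1 link covers ${drives} drives at full speed"
fi
```

It gets tight during a parity check, though: with 8 drives on the HBA all reading at once, 8 × 270 MB/s would swamp an x1 link and throttle the check.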

     

    Any suggestions are welcome. I'm not sure that running through the 3 SATA ports on the motherboard + 8 ports on the current expansion card is a good config.

     

     

I'm having a QUIC issue where it constantly goes to "misconfigured." I've checked the network gear, and port forwarding is correct. Sometimes when I restart the Storj container it works for minutes or hours, then goes out again. I'm not sure exactly what's happening, but when I look in the Docker tab I notice the port mapping (app to host) shows "unknown IP:28967 <-> local ip:28967". I assume the unknown IP should be my WAN IP?

     

     
