
Posts posted by jedimstr

  1. For the drives, just make sure /mnt/ is assigned on Host Path 4

     

Slightly confused by this statement: I understand how to assign, but what does "Host Path 4" mean? Here's the docker command I'm preparing:

     

    docker run -t -v /var/run/docker.sock:/var/run/docker.sock -v "/mnt":"??" --name="telegraf" --net="host" -e INFLUXDB_URL=http://192.168.1.127:8086 -e HOSTNAME=tower jjungnickel/telegraf

     

    What would I need to map /mnt to?

     

    Thanks jedimstr!

     

I meant "Host Path 4" in the unRAID interface for the Docker container.  The reason you want to map /mnt in the container is to have recognizable /disk1 - /diskn and /user paths available for any storage-based stats in InfluxDB.  I also mapped the customized config file as "Host Path 2": /mnt/cache/appdata/telegraf/telegraf.conf.tpl to Container Path /etc/telegraf/telegraf.conf.tpl.

     

    Here are my mappings in my Telegraf Container settings:

     

[Screenshot of my Telegraf container mappings: http://i.imgur.com/kI9SeTu.png]
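For anyone doing this outside the unRAID template, here is a rough docker run equivalent of those mappings. This is a sketch only: the container-side /mnt path is an assumption based on the description above, and the image/env values are carried over from the command earlier in the thread, so double-check them against the screenshot.

docker run -d --name="telegraf" --net="host" \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v /mnt:/mnt \
  -v /mnt/cache/appdata/telegraf/telegraf.conf.tpl:/etc/telegraf/telegraf.conf.tpl \
  -e INFLUXDB_URL=http://192.168.1.127:8086 \
  -e HOSTNAME=tower \
  jjungnickel/telegraf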

  2. This is my docker run command:

    docker run -d --name="telegraf" --net="bridge" --privileged="true" -e HOST_PROC="/rootfs/proc" -e HOST_SYS="/rootfs/sys" -e HOST_MOUNT_PREFIX="/rootfs" -e HOST_ETC="/rootfs/etc" -e TZ="America/Denver" -v "/mnt/user/appdata/telegraf/telegraf.conf":"/etc/telegraf/telegraf.conf":ro -v "/proc":"/rootfs/proc":ro -v "/":"/rootfs":ro -v "/var/run/docker.sock":"/var/run/docker.sock":ro -v "/sys":"/rootfs/sys":ro -v "/etc":"/rootfs/etc":ro telegraf

     

This requires you to put in your own telegraf.conf file at /mnt/user/appdata/telegraf/telegraf.conf
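If you don't have a telegraf.conf yet, the stock telegraf image can print a full sample config that you can redirect into place and then trim down; a quick sketch, assuming the same official telegraf image used in the command above:

docker run --rm telegraf telegraf config > /mnt/user/appdata/telegraf/telegraf.conf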

     

    This has been working for me. The only thing that doesn't look right is my network sent and received. Haven't figured that out yet. But everything else is accurate.

     

Two things about the way network sent and received values are stored by Telegraf:

• Data is stored as BYTES

• Data is a running (progressive) total

To get usable sent and received graphs, multiply the values by 8 to convert bytes to bits, and use non_negative_derivative to track the change in value per interval rather than plotting the ever-growing total.

     

    Example:

[Screenshot: network stats example - IHV2U6A.jpg]
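As a concrete illustration, a query along these lines turns the running byte totals into a bits-per-second series. The 'telegraf' database name and the eth0 interface are assumptions about a default setup; run it wherever the influx CLI is available, e.g. inside the InfluxDB container.

influx -database 'telegraf' -execute \
  "SELECT non_negative_derivative(mean(\"bytes_recv\"), 1s) * 8 AS recv_bps FROM \"net\" WHERE \"interface\" = 'eth0' AND time > now() - 1h GROUP BY time(10s)"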

  3. Have you experimented with using an SMB share with your Mac instead?

     

I know it's "common wisdom" that Macs are slower with SMB than AFP, but in the last few versions of OS X, SMB is actually the primary share protocol and AFP is deprecated. Apple considers AFP a "Legacy Service". From Mavericks onward, OS X tries to connect to shares via SMB first before falling back to AFP.
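If you want to be sure a share is actually mounted over SMB while testing, you can force it explicitly and then check the negotiated protocol; the server and share names here are just placeholders:

open 'smb://tower/Media'    # Terminal equivalent of Finder's "Connect to Server" with an smb:// URL
smbutil statshares -a       # lists attributes, including SMB_VERSION, for each mounted SMB share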

     

    On my 10GbE fiber connections, my old Mac Pro 1,1 on Yosemite transfers upwards of 600MB/s to an SMB share which compares favorably to my Windows 10 PC which transfers around 800MB/s on average to the same share.  Bottleneck for my Mac Pro is probably the old PCIe 1 bus that the SFP+ card is connected to rather than what SMB on OSX can do.  I keep AFP off now.

  4. I mentioned this in other threads, but this applies here too...

     

    I just did a migration of my Plex Server stuff from the Limetech repo to the LinuxServer.io repo.

I got tired of waiting for a version update so I just switched to the LinuxServer.io repo for Plex from the Limetech one, since LinuxServer.io keeps theirs up to date.

     

    Here are the steps I took:

• Make note of your container-mapped custom media path settings if you have them specifically set for PMS (for instance, mine maps /Media to /mnt/user/Media)

• Shut down the Limetech PMS container, but don't delete it yet

• Back up your appdata config for Plex Media Server (mine was in /mnt/cache/appdata/PlexMediaServer) to another folder for safety

• Add the LinuxServer.io Plex container.  It can coexist with the other one temporarily, as long as the other one isn't running, since they're named differently

• Add the mapped media path you had in the Limetech container settings if you had it set there.  It's not there by default in LinuxServer.io (although you could use the /user mapping it does come with, I prefer a direct reference for isolation)

• The LinuxServer.io container will automatically start up after you're done setting it up and saving/applying.  Access the server via the webUI and log in.

• Once the new Plex container is up and running after the initial wizards, shut that container down again.

• With both containers shut down, copy all the content from /mnt/cache/appdata/PlexMediaServer/Library/Application Support/Plex Media Server to /mnt/cache/appdata/plex/Library/Application Support/Plex Media Server (or whatever old and new Plex appdata paths you have); see the example commands after this list

• Once you're done copying all the appdata, start up your new Plex container again.

• Access the new Plex server via the webUI or from the Plex.tv site.  Verify that all your content, settings, channels, and custom Agents survived the transition, and that you're on the latest 1.0.0.2261 version in the Settings/Server page

• When you're satisfied everything survived the transition, you can delete the old Plex Media Server container.

• Update any other container paths that refer to your Plex Media Server install (e.g. the logs path for plexpy) since it will be under a new location in appdata
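For the copy step above, a rough sketch of the commands from the unRAID console; the container names here are assumptions (use whatever yours are actually called) and adjust the paths if your appdata lives elsewhere:

docker stop PlexMediaServer plex   # make sure both the old and new containers are stopped
cp -a "/mnt/cache/appdata/PlexMediaServer/Library/Application Support/Plex Media Server/." \
      "/mnt/cache/appdata/plex/Library/Application Support/Plex Media Server/"

The trailing /. on the source copies the directory's contents (including hidden files) into the existing destination folder rather than nesting another "Plex Media Server" directory inside it.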

     

    Hope this helps.

     

     

5. Is there any easy way to change from one repo to another? I know I saw something in the forums once but can't seem to find it anymore.

     

    Thanks

     

    I just did a migration of my Plex Server stuff from the Limetech repo to the LinuxServer.io repo.

I got tired of waiting for a version update so I just switched to the LinuxServer.io repo for Plex from the Limetech one, since LinuxServer.io keeps theirs up to date.

     

    Here are the steps I took:

• Make note of your container-mapped custom media path settings if you have them specifically set for PMS (for instance, mine maps /Media to /mnt/user/Media)

• Shut down the Limetech PMS container, but don't delete it yet

• Back up your appdata config for Plex Media Server (mine was in /mnt/cache/appdata/PlexMediaServer) to another folder for safety

• Add the LinuxServer.io Plex container.  It can coexist with the other one temporarily, as long as the other one isn't running, since they're named differently

• Add the mapped media path you had in the Limetech container settings if you had it set there.  It's not there by default in LinuxServer.io (although you could use the /user mapping it does come with, I prefer a direct reference for isolation)

• The LinuxServer.io container will automatically start up after you're done setting it up and saving/applying.  Access the server via the webUI and log in.

• Once the new Plex container is up and running after the initial wizards, shut that container down again.

• With both containers shut down, copy all the content from /mnt/cache/appdata/PlexMediaServer/Library/Application Support/Plex Media Server to /mnt/cache/appdata/plex/Library/Application Support/Plex Media Server (or whatever old and new Plex appdata paths you have)

• Once you're done copying all the appdata, start up your new Plex container again.

• Access the new Plex server via the webUI or from the Plex.tv site.  Verify that all your content, settings, channels, and custom Agents survived the transition, and that you're on the latest 1.0.0.2261 version in the Settings/Server page

• When you're satisfied everything survived the transition, you can delete the old Plex Media Server container.

• Update any other container paths that refer to your Plex Media Server install (e.g. the logs path for plexpy) since it will be under a new location in appdata

     

    Hope this helps.

     

     

  6. Alongside the release of Plex Media Server 1.0 to the Public channel, they're also cleaning out their download archives of all versions of Plex lower than 1.0 and are only maintaining support/development of version 1.0+ going forward.

     

    We need the Limetech PMS Docker instance updated to the new and now ONLY officially supported release version.

     

    UPDATE:  Ok I just did a migration of my Plex Server stuff from the Limetech repo to the LinuxServer.io repo.

    I got tired of waiting for version updates that never come so I just switched to the LinuxServer.io repo for Plex from the Limetech one since LinuxServer.io keeps theirs up to date.

     

    Here are the steps I took:

• Make note of your container-mapped custom media path settings if you have them specifically set for PMS (for instance, mine maps /Media to /mnt/user/Media)

• Shut down the Limetech PMS container, but don't delete it yet

• Back up your appdata config for Plex Media Server (mine was in /mnt/cache/appdata/PlexMediaServer) to another folder for safety

• Add the LinuxServer.io Plex container.  It can coexist with the other one temporarily, as long as the other one isn't running, since they're named differently

• Add the mapped media path you had in the Limetech container settings if you had it set there.  It's not there by default in LinuxServer.io (although you could use the /user mapping it does come with, I prefer a direct reference for isolation)

• The LinuxServer.io container will automatically start up after you're done setting it up and saving/applying.  Access the server via the webUI and log in.

• Once the new Plex container is up and running after the initial wizards, shut that container down again.

• With both containers shut down, copy all the content from /mnt/cache/appdata/PlexMediaServer/Library/Application Support/Plex Media Server to /mnt/cache/appdata/plex/Library/Application Support/Plex Media Server (or whatever old and new Plex appdata paths you have)

• Once you're done copying all the appdata, start up your new Plex container again.

• Access the new Plex server via the webUI or from the Plex.tv site.  Verify that all your content, settings, channels, and custom Agents survived the transition, and that you're on the latest 1.0.0.2261 version in the Settings/Server page

• When you're satisfied everything survived the transition, you can delete the old Plex Media Server container.

• Update any other container paths that refer to your Plex Media Server install (e.g. the logs path for plexpy) since it will be under a new location in appdata

     

     

     

     

  7. Yup, working great for me.

For the drives, just make sure /mnt/ is assigned on Host Path 4, and for Network, set Network Type to Host.

     

For additional SNMP network stats from my network switches, and to set the InfluxDB host, I modified the telegraf.conf in appdata/telegraf (derived from telegraf.conf.tpl), but by default all drives and networks are monitored as long as you have the assignments correct in the Container config.
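Before pointing Telegraf at a switch, it's worth confirming the switch answers SNMP at all; something like this (the community string and switch address are placeholders) should walk the 64-bit interface octet counters:

snmpwalk -v2c -c public 192.168.1.2 1.3.6.1.2.1.31.1.1.1.6   # IF-MIB::ifHCInOctets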

     

    Once you have Telegraf feeding InfluxDB correctly, you can experiment with the data you get.
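A quick way to see what Telegraf is actually writing before you start graphing (assumes the default 'telegraf' database and that the influx CLI is available, e.g. inside the InfluxDB container):

influx -database 'telegraf' -execute 'SHOW MEASUREMENTS'
influx -database 'telegraf' -execute 'SHOW FIELD KEYS FROM "net"'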

     

    Here's an example for the network stats:

     

[Screenshot: network stats example - IHV2U6A.jpg]

     

    And here's my dashboard for my unRAID server:

[Screenshot: unRAID server dashboard - 0HZAxqe.jpg]

  8. Has anyone gotten the nicolargo/glances container working in unRAID with webserver active?

     

The https://hub.docker.com/r/nicolargo/glances/ listing looked fairly straightforward, and I've been passing combinations of ENTRYPOINT and CMD variables along with privileged access so it can get to the docker.sock, but so far no joy for me.
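For reference, the web-server style invocation I've been working from is based on that Docker Hub listing; treat the GLANCES_OPT variable and the 61208 web port as assumptions pulled from that page rather than something I've confirmed working:

docker run -d --name=glances --restart=always \
  -p 61208-61209:61208-61209 \
  -e GLANCES_OPT="-w" \
  -v /var/run/docker.sock:/var/run/docker.sock:ro \
  --pid host \
  nicolargo/glances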

     

    If anyone is wondering what glances is about:  https://raw.githubusercontent.com/nicolargo/glances/master/docs/_static/screenshot-wide.png

     

I wanted something alongside my Grafana Dashboard and unRAID's built-in interface (mixed with all my other stuff under muximux) that shows individual Docker container stats, and this looked like it fit the bill, but it just won't start up.

  9.  

That's good to hear.  What fans did you end up using?  Do you know how much power it's pulling?

     

    Nice setup!

     

    I used Noctua NF-A4x10 fans for one of the Power Supplies (I kept one stock to compare sound/temps) and swapped the case fans with Sunon 40x40x20mm 3 pin fans.

     

    No clue on power... As long as it's not tripping the fuse on my UPS I'm good.

     

     

    Thanks!

     

  10. How are you liking the Quanta LB6M?  I have been considering one.

     

    It's working great for me.  No real issues after I modified it with quieter but still effective fans.

Took some googling to find the FastPATH docs online so I could configure static LAGs/trunks, but it's fairly simple to set up and run.  LACP worked out of the box as well.  It's taken every transceiver and DAC I've thrown at it (generic, Intel, Chelsio, Mellanox, etc.).

  11. Here's my rack album:

     

[Photos of the rack: http://i.imgur.com/1f8WCBE.jpg and http://i.imgur.com/mMNlMbI.jpg]

     

    From Top to Bottom:

    • Kingwin Fan Controller for the NavePoint Rack's topmount Fans.
    • TP-Link TL-SG3216 Gigabit Switch
    • Quanta LB6M 10GbE 24x SFP+ Switch 4xTrunked to the TP-Link
    • Holocron unRAID server
    • Tripp Lite SMART1500LCD UPS
    • CyberPower CPS-1215RMS PDU
    • Verizon FiOS Quantum Router
    • Box and Bags of Rack Mounting screws/bolts

     

    I also have other gigabit switches and a D-Link AC3200 Ultra acting as a bridged WiFi/ac AP in other locations in the house.

     

     

[Photo of the server internals: http://i.imgur.com/FC4BCv2.jpg]

     

    Holocron unRAID Server Internals:

    • Norco 4U 450TH with 10x Hot Swap Bays
    • Additional Norco 3x Hot Swap Bays in the middle drive slots
• Icy Dock ToughArmor MB998SP-B 8-Bay Hot Swap Drive Enclosure for the SSD Cache Pool (fits 8x 2.5" drives into the space of one 5.25" bay)
    • 6TB Parity Drive
    • 4x HGST 3TB Drives
    • 4x Seagate 3TB Drives
    • 2x Samsung 500GB 850 EVO Cache Pool
    • SUPERMICRO MBD-X10DRL-I
    • 2x INTEL XEON QEYK E5-2670 V3 ES 2.2-2.8GHz 12 Core
      (Total 24 Core / 48 HThreads)
    • 115GB ECC DDR4
    • Adaptec ASR-71605E 16 Port 6Gbps PCIe RAID Card
    • Intel Ethernet Converged Network Adapter X520 (SFP+ x2 - 10GbE)
    • Intel Transceivers and OM3 fiber in 2x LACP
    • 2x Noctua NH-D9DX i4 3U Coolers
    • 4x Noctua NF-A8 PWM
    • EVGA SuperNOVA 1000 PS 80+ PLATINUM PSU

     

     

  12. Been using for a week, but I have an odd problem:

     

    All of my Dockers say they have an update available, but when I go to update them they do not connect and ultimately fail. The Dockers themselves have full network connectivity, and when I SSH to the unraid host I can ping github/etc. just fine. Any thoughts? Using mostly linuxserver.io Docker images.

     

EDIT: Appears that the Docker engine does not like jumbo frames. I have 2 NICs bonded, and the NIC interfaces and the bond0 interface all have MTU 9000 set. This appears to be the bug.

     

I'm seeing the same issue where the virtual interfaces won't set to jumbo frames even when all of the ethX interfaces and bond0 are at 9000, causing new container installs or updates of older containers to fail.  Also noticed that virbr0 for VMs is stuck at 1500 as well.  I tried changing the rc.d daemon settings to force --mtu=9000 and still no joy: ifconfig did show the change on the virtual interfaces, but adding containers still failed.  I had to revert to 1500 frames to get it to work again.
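If anyone else is debugging this, a quick way to see what MTU each interface actually ended up with (nothing unRAID-specific here, just a sketch):

ip -o link show | awk '{print $2, $4, $5}'   # prints each interface name followed by its mtu value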

     

  13. So I have a new install of 6.2.1-beta1 and I keep getting messages similar to the following examples no matter which container I try to install:

     

    root@localhost:# /usr/local/emhttp/plugins/dynamix.docker.manager/scripts/docker run -d --name="CrashPlan" --net="host" -e TZ="America/New_York" -e HOST_OS="unRAID" -e "TCP_PORT_4242"="4242" -e "TCP_PORT_4243"="4243" -e "TCP_PORT_4280"="4280" -e "VNC_PASSWD"="" -e "UDP_PORT_4239"="4239" -v "/mnt/user/appdata/crashplan/":"/config":rw -v "/mnt/":"/mnt/user":rw -v "/mnt/user/Backup/":"/backup":rw -v "/mnt/user":"/unraid":rw -v "/mnt/disks/":"/unassigned":rw,slave gfjardim/crashplan

    Unable to find image 'gfjardim/crashplan:latest' locally

    docker: Error response from daemon: Error parsing HTTP response: invalid character '<' looking for beginning of value: "<html><body><h1>408 Request Time-out</h1> Your browser didn't send a complete request in time. </body></html> ".

    See '/usr/bin/docker run --help'.

     

    The command failed.

     

    root@localhost:# /usr/local/emhttp/plugins/dynamix.docker.manager/scripts/docker run -d --name="Grafana" --net="bridge" -e TZ="America/New_York" -e HOST_OS="unRAID" -p 3000:3000/tcp grafana/grafana

    Unable to find image 'grafana/grafana:latest' locally

    docker: Error response from daemon: Error parsing HTTP response: invalid character '<' looking for beginning of value: "<html><body><h1>408 Request Time-out</h1> Your browser didn't send a complete request in time. </body></html> ".

    See '/usr/bin/docker run --help'.

     

    The command failed.

     

    No issues pinging github from the commandline and I'm not behind a proxy.

     

I encounter this (or a similar) error whether I try to install via Community Applications, a template repository, or a directly created container pull.

Have I missed a step somewhere?  Nothing in the Docker tutorials, FAQ, or this thread seems to address it (except for a few posts discussing updates to the webUI or unRAID server plugins, and both of mine are up to date).

     

    Should I revert to the 6.1.9 Stable build and just start from scratch?

     

    UPDATE: Ok, so I figured it out.  Docker doesn't like Jumbo Frames for pulling containers. Switched MTU to 1500 on my bonded connections and Containers installed.  Does anyone know a way around this for Docker?  I'd hate to lose Jumbo Frames at MTU 9000 since it works so well for transfers across my 10GbE network.  Glad I didn't actually downgrade to 6.1.9 since that probably wouldn't have solved the issue.
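Until something better comes along, the workaround boils down to commands like these; restoring 9000 afterwards is something I haven't verified beyond the basic switch described above, and bond0 is just my bonded interface name:

ifconfig bond0 mtu 1500      # temporarily drop to standard frames
docker pull grafana/grafana  # pulls/updates now complete
ifconfig bond0 mtu 9000      # restore jumbo frames for the 10GbE network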

14. So I tried to install via both Community Applications and directly from the Docker Repository, and got this:

     

    root@localhost:# /usr/local/emhttp/plugins/dynamix.docker.manager/scripts/docker run -d --name="CrashPlan" --net="host" -e TZ="America/New_York" -e HOST_OS="unRAID" -e "TCP_PORT_4242"="4242" -e "TCP_PORT_4243"="4243" -e "TCP_PORT_4280"="4280" -e "VNC_PASSWD"="" -e "UDP_PORT_4239"="4239" -v "/mnt/user/appdata/crashplan/":"/config":rw -v "/mnt/":"/mnt/user":rw -v "/mnt/user/Backup/":"/backup":rw -v "/mnt/user":"/unraid":rw -v "/mnt/disks/":"/unassigned":rw,slave gfjardim/crashplan

    Unable to find image 'gfjardim/crashplan:latest' locally

    docker: Error response from daemon: Error parsing HTTP response: invalid character '<' looking for beginning of value: "<html><body><h1>408 Request Time-out</h1> Your browser didn't send a complete request in time. </body></html> ".

    See '/usr/bin/docker run --help'.

     

    The command failed.

     

     

    Fresh install of unRAID v. 6.2.0-beta21

    So what am I doing wrong?

     

    UPDATE: Turns out this happens for any container I try to install and not just Crashplan.  Will create a new thread for the general docker engine question.

     

    UPDATE2: Ok, so I figured it out.  Docker doesn't like Jumbo Frames for pulling containers. Switched MTU to 1500 on my bonded connections and Containers installed.  Does anyone know a way around this for Docker?  I'd hate to lose Jumbo Frames at MTU 9000 since it works so well for transfers across my 10GbE network.
