sacretagent


Posts posted by sacretagent

  1. ALL OK now,  in the sense that it automatically started all my containers

     

    not sure, but i read somewhere that there was an extra field on the extensions / docker page?

    a field to put the containers you want to start automatically?

    that is not there... even pushed F5 a few times to be sure it is not there

     

    what i did: yesterday evening, just before powering down the server for the night, i put the plg in /boot/config/plugins

    and this morning let it wake up via cron as usual...

    and to my surprise (good surprise ;) ) all my dockers were running

     

    so if you would just care to tell us whether there is a field or not, then i am happy...

    i can see the advantage of that field though... but on the other hand a container is quickly made... although you would lose your settings if you delete your container...

    so i guess being able to choose which containers you want to start is better... (one-man discussion with himself... where is that looney smiley?)

     

    anyway autostart works
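
    (presumably, behind the scenes the autostart amounts to little more than this once the docker service is up -- a minimal sketch, the container names are just examples from my own setup:)

    # start the containers you care about once docker itself is running
    docker start sickbeard sabnzbd crashplan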

  2. yeah, i also had to use --net=host as otherwise the 2 unraid servers were not finding each other...

    which was kind of funny... my windows machines found the one unraid server and were backing up to it...

    but the 2 unraids said offline to each other...

     

    once i changed to --net=host the issue went away
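
    (for reference, a run line along these lines worked here -- the image name and volume paths are just what i use, so check the crashplan docker's own README for the real ones:)

    # host networking so the backup peers on the LAN can actually discover each other;
    # with --net=host the -p port mappings aren't needed
    docker run -d --net=host --name=crashplan \
      -v /mnt/cache/appdata/crashplan:/config \
      -v /mnt/user/:/data \
      gfjardim/crashplan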

  3. maybe this would be an idea....

     

    dockers based on debian (ubuntu,....)

    dockers based on red hat (centos, Fedora,....)

     

    basically, put them together according to package management

     

    then if you are a bit clever you just have to adapt the first line of the dockerfile (FROM xxxxxx) for your personal docker...

     

    today i downloaded all of sensei's dockers and built them on my machine with a few adjustments to my liking...

     

    i think that is the easiest way to use an existing docker but with your specific needs....

     

    I also used gfjardim's crashplan docker

    and hurricane's dropbox is next on the list after i complete tetum's mysql

    so far i use only one ubuntu 14.04 image

    root@R2D2:/mnt/cache/docker/dockerfiles/dropbox-master/docker-dropbox# docker images
    REPOSITORY          TAG                 IMAGE ID            CREATED             VIRTUAL SIZE
    nzbdrone            latest              42023e208d3e        About an hour ago   541.4 MB
    sickrage            latest              9e8cf499022f        2 hours ago         430.8 MB
    sickbeard           latest              2afac386bd52        2 hours ago         381.7 MB
    sabnzbd             latest              a1745bf1034b        3 hours ago         378.8 MB
    <none>              <none>              074dbcb9f5aa        3 hours ago         378.1 MB
    crashplan           latest              0385fcc22223        4 hours ago         792.1 MB
    ubuntu              14.04               e54ca5efa2e9        5 days ago          276.1 MB
    <none>              <none>              3a32fd5b15e4        13 days ago         407.6 MB
    <none>              <none>              5cf50a598ce2        13 days ago         407.5 MB
    <none>              <none>              22e16e92ea98        13 days ago         277.8 MB

     

    will need to clean up the <none>'s again too...

    every time a build doesn't complete you get these <none>'s
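
    (in case anyone else needs it, this is roughly how i clean them up -- careful, it removes every untagged image:)

    # find the <none> images by id and remove them
    docker images | grep "<none>" | awk '{print $3}' | xargs docker rmi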

  4. hi,

     

    i tried this plg but i got an issue with it

    see the outcome:

    Last login: Tue Jun 24 12:12:41 2014 from peter-z87.lan
    Linux 3.15.0-unRAID.
    root@P8H67:~# docker ps
    2014/06/24 12:17:26 Get http:///var/run/docker.sock/v1.12/containers/json: dial unix /var/run/docker.sock: no such file or directory
    root@P8H67:~# docker start couchpotato
    Post http:///var/run/docker.sock/v1.12/containers/couchpotato/start: dial unix /var/run/docker.sock: no such file or directory
    2014/06/24 12:18:35 Error: failed to start one or more containers
    root@P8H67:~# /etc/rc.d/rc.docker restart
    stopping docker ...
    docker already running
    root@P8H67:~# docker ps
    2014/06/24 12:20:15 Get http:///var/run/docker.sock/v1.12/containers/json: dial unix /var/run/docker.sock: no such file or directory
    root@P8H67:~# /etc/rc.d/rc.docker stop
    stopping docker ...
    root@P8H67:~# /etc/rc.d/rc.docker start
    docker already running
    root@P8H67:~#
    Broadcast message from root@P8H67 (Tue Jun 24 12:21:42 2014):
    

     

    so i removed the plg again and things work again

     

    root@P8H67:~# docker ps
    CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
    root@P8H67:~#
    

     

    i took the plg from the github page
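
    (for what it's worth, that "no such file or directory" on /var/run/docker.sock just means the client can't find a running daemon; a quick sanity check before blaming the plg would be something like:)

    # is the docker daemon actually running, and does its socket exist?
    ps aux | grep [d]ocker
    ls -l /var/run/docker.sock
    # if it isn't, the daemon log (on unRAID this should be /var/log/docker.log) may say why it failed to start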

  5. GFJARDIM ... YOU THE MAN  :o

     

    clean docker script worked a charm :)

     

    I see you made a crashplan docker ...

    how do we access it ?

    i assume with this supervisor?

    but how does it work ?

    what is even the startup line for this crashplan?

     

    docker run -d -h="hostname" --name="crashplan" -v ? -p 4242:4242 -p 4243:4243 gfjardim/crashplan ??
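
    (rather than guessing, one way to see what ports and volumes a pulled image actually declares is to inspect it:)

    # show the exposed ports and declared volumes of the image
    docker inspect gfjardim/crashplan | grep -A5 -E "ExposedPorts|Volumes"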

     

    and then 2 more questions....

    you guys are running this from the pure unraid setup ? no Xen?

     

    and any serious mysql / php / apache docker that you know of?

    to run my personal newznab server, which is running in a xen VM for now...

     

     

     

  6. Docker sucks as we are using it now

     

    there is no way to do a git pull inside a docker

    and upgrading a plugin like sickbeard inside the docker doesn't work

    i tried a lot of things, but either the program doesn't shut down

    or it shuts down and takes the docker with it

     

    i tried installing an ssh server but then the docker crashes within 2 minutes of starting

     

    if for every git pull you need to make a new image, then that defeats the purpose of git in my eyes

     

    spent the whole weekend trying to get this working the way we are used to with the plugins, but i am seriously disappointed in docker and BTRFS

     

    I disagree!  Docker is certainly not perfect now, and can absolutely stand to be improved, but to say it sucks is way off base, IMO.

     

    It pretty much works great for what it's supposed to do, to let one run a program, isolated from the unRAID OS.

     

    Sure, it can be improved, but I'm VERY happy with docker at this point.

     

    Constructive ideas for improvement, and/or specific complaints will do FAR more for this community than public displays of personal frustration, which might scare people away who don't take time to read all the posts.

     

    well tell me 2 things

     

    can you upgrade sickrage without rebuilding the image?

    did you have a look in your btrfs/subvolumes directory ?

     

  7. BTRFS SUCKS

     

    it makes the cache drive really slow

    the reason i know is that when running crashplan on the cache drive it slows down to a crawl... and crashplan was not quick to begin with...

    top is showing like 20 kworkers running around, and every few minutes btrfs messages...  :'(

    root@R2D2:/# ps x | grep kworker

        5 ?        S<    0:00 [kworker/0:0H]

      13 ?        S<    0:00 [kworker/1:0H]

      17 ?        S<    0:00 [kworker/2:0H]

      21 ?        S<    0:00 [kworker/3:0H]

    1271 ?        S      0:09 [kworker/0:1]

    3075 ?        S      0:05 [kworker/u8:4]

    3077 ?        S      0:08 [kworker/u8:9]

    7115 ?        S      0:00 [kworker/3:4]

    7343 ?        S<    20:04 [kworker/0:1H]

    7824 ?        S      0:00 [kworker/2:0]

    8252 ?        S<    12:21 [kworker/2:1H]

    8257 ?        S<    12:04 [kworker/1:1H]

    8258 ?        S<    4:02 [kworker/3:1H]

    8516 ?        S      0:04 [kworker/u8:0]

    9122 ?        S      0:02 [kworker/u8:3]

    10107 ?        S      0:02 [kworker/u8:7]

    10209 ?        S      0:00 [kworker/3:0]

    10712 ?        S      0:00 [kworker/0:2]

    10832 ?        S      0:00 [kworker/3:8]

    10946 ?        S      0:31 [kworker/u8:13]

    11269 ?        S      0:00 [kworker/1:1]

    11307 ?        S      0:00 [kworker/u8:8]

    12576 pts/0    S+    0:00 grep kworker

    15429 ?        S      0:23 [kworker/u8:11]

    17295 ?        S<    0:00 [kworker/u9:2]

    18911 ?        S      0:19 [kworker/u8:6]

    18945 ?        S      0:22 [kworker/u8:10]

    19565 ?        S      0:01 [kworker/2:1]

    23033 ?        S<    0:00 [kworker/u9:0]

    23590 ?        S      0:14 [kworker/u8:2]

    23874 ?        S      0:13 [kworker/u8:12]

    23927 ?        S      0:00 [kworker/1:2]

    24670 ?        D      0:01 [kworker/3:1]

    30423 ?        S      0:08 [kworker/u8:1]

     

    and what is all this shit ?

     

    /mnt/user/docker/btrfs/subvolumes/018d01da3eb97eea9cd93c8dd0adf511ab3e346738f5a6f7c813c5db872f4a77/

    /mnt/user/docker/btrfs/subvolumes/018d01da3eb97eea9cd93c8dd0adf511ab3e346738f5a6f7c813c5db872f4a77-init/

    /mnt/user/docker/btrfs/subvolumes/05c0a7ae5bdeb2af4c7f76bf302275d55f06312a8e43e165fe41493839035a59/

    /mnt/user/docker/btrfs/subvolumes/0aa1057cdf960184aa08d0e45e2c2927f2bd9ad7b4d371a4fed4af35915e4c35/

    /mnt/user/docker/btrfs/subvolumes/0dc4ba3787df20ea7a02a4574f9d71a8c1da03617694a09465b47cc8b558ad86/

    /mnt/user/docker/btrfs/subvolumes/0f8c5bbabce98caea40bdf54a0eb84ab2e1029dcf7f87943675dabc853928c94/

    /mnt/user/docker/btrfs/subvolumes/13c1d3856c17852302150deb9f6951eb39d37c7f50aa9a2cf87b64bc59f9dd05/

    /mnt/user/docker/btrfs/subvolumes/1908386b990f5bd3138577a443532425c9e28bb993f226561e53d317a55eed1a/

    /mnt/user/docker/btrfs/subvolumes/1934846441ad56426a772feb1ed363aeba8916ff829db10d35ce4f98e734816e/

    /mnt/user/docker/btrfs/subvolumes/1c3336d191a9a4be86aa6b3b3cc8542631339f80684e552e8f51dd4480c5bae7/

    /mnt/user/docker/btrfs/subvolumes/22cf1cfa21846f3d4c89c3f96d8831df144151ab852d60e742fc17478803c23c/

    /mnt/user/docker/btrfs/subvolumes/23f361102fae912c44247cd8bcc1a1640292a20d4a4953358b86f74228430711/

    /mnt/user/docker/btrfs/subvolumes/248e1024ad119fc564d7e6d54cae236ed8c9e4ff351e6e1a30ff2f810c1ce544/

    /mnt/user/docker/btrfs/subvolumes/2f03228c7af08ac38976d75137513a868ee3f298261f71012f17b9f57c8a92d4/

    /mnt/user/docker/btrfs/subvolumes/2f4b4d6a4a069aba521ca0067b9263e29d7de84d121ff60ee101242bcc36c13b/

    /mnt/user/docker/btrfs/subvolumes/2f8f8e38806f371f82c292eba276c9f1a703f9ef869738d8319e3502ca62e2a2/

    /mnt/user/docker/btrfs/subvolumes/3489622e48f9126244c9ce3f86349b96530b62d4cf4e3d60a4d2430df50aa21e/

    /mnt/user/docker/btrfs/subvolumes/371b5a8919ea9ff8a7e2ae8728cd7c86ae8a3d68181d65d7ec5275049bf059f6/

    /mnt/user/docker/btrfs/subvolumes/43dc9b2ed1acbf0deea5361c8817c4ce81df04b1e454a356fb1765312b28e0e0/

    /mnt/user/docker/btrfs/subvolumes/466bb76037ba0ea3bb6feb778dfb7099e23d9dab9726b98578202395232a4954/

    /mnt/user/docker/btrfs/subvolumes/47401fd0add90e5944cdd6d1fd433ae469acb607e7f02a36af66da839841d0ba/

    can't even delete it ?

    i had 150 gb free on my cache drive

    now i have only 85 gb :(

     

    nah, thinking of going back to a reiserfs drive without docker...  :-\
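
    (for anyone else hitting this: those hash-named directories are btrfs subvolumes created by docker's btrfs storage driver for its image layers, so a plain rm -rf won't remove them; something like the sketch below should, assuming you really want all the layers gone:)

    # stop docker first, then delete the layer subvolumes one by one
    # (use the real btrfs mount, e.g. /mnt/cache/..., rather than the /mnt/user fuse path)
    for sv in /mnt/cache/docker/btrfs/subvolumes/*; do
        btrfs subvolume delete "$sv"
    done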

  8. Docker sucks as we are using it now

     

    there is no way to do a git pull inside a docker

    and upgrading a plugin like sickbeard inside the docker doesn't work

    i tried a lot of things, but either the program doesn't shut down

    or it shuts down and takes the docker with it

     

    i tried installing an ssh server but then the docker crashes within 2 minutes of starting

     

    if for every git pull you need to make a new image, then that defeats the purpose of git in my eyes

     

    spent the whole weekend trying to get this working the way we are used to with the plugins, but i am seriously disappointed in docker and BTRFS
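
    (to spell out the upgrade dance i'm complaining about, it currently looks roughly like this -- the container name, build path and run options are just taken from the sickbeard example earlier in the thread:)

    # upgrading the app inside the image means recycling the whole container and image
    docker stop sickbeard
    docker rm sickbeard
    docker rmi sickbeard
    docker build -t sickbeard /mnt/cache/docker/dockerfiles/sickbeard/
    docker run -d --name="sickbeard" -v /mnt/cache/sickbeard/database:/config -v /mnt/user/:/data -p 8081:8081 sickbeard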

  9. ok guys, i made a few dockers and they are all running fine so far

    but i have a problem with upgrading them...

    so i would like to access them while they are running and do a quick git pull

    sickrage, for example, upgrades several times a day at the moment...

    if every time i need to stop the docker / remove the docker / remove the image and build again, then i have nothing else to do with my life  ::)

     

    there has to be a way to ssh into these dockers?

    simply installing an openssh server does not work i guess... because when you do a sickrage stop, your docker stops?
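
    (one workaround i'm considering, sketched below: keep the app's git checkout on the host and bind-mount it over the image's copy, so a git pull on the host survives a container restart without rebuilding the image -- the paths and host port are just examples:)

    # clone once on the host (cache drive)
    git clone https://github.com/echel0n/SickRage/ /mnt/cache/appdata/sickrage-app
    # mount that checkout over /sickrage, where the image expects the code to live
    docker run -d --name="sickrage" \
      -v /mnt/cache/appdata/sickrage-app:/sickrage \
      -v /mnt/cache/sickrage/database:/config \
      -v /mnt/user/:/data \
      -p 8082:8081 sickrage
    # upgrading then becomes: stop the container, git pull in /mnt/cache/appdata/sickrage-app, start it again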

     

  10. Would you care to share the 'steps' for creating one of your containers?  I want to build one for SickRage, but still am not too sure how to get started.

     

    Thanks!

     

    Justin

     

    should be easy

    this is the LT dock for sickbeard

     

    FROM ubuntu:14.04
    MAINTAINER Eric Schultz <[email protected]>
    
    ENV DEBIAN_FRONTEND noninteractive
    
    RUN locale-gen en_US en_US.UTF-8
    
    RUN apt-get update -q
    RUN apt-get install -qy --force-yes python python-cheetah ca-certificates git
    
    RUN git clone https://github.com/midgetspy/Sick-Beard/ /sickbeard/
    
    VOLUME /config
    VOLUME /data
    
    EXPOSE 8081
    
    ENTRYPOINT ["python", "/sickbeard/SickBeard.py"]
    CMD ["--datadir=/config"]
    

    guess all you need to change is the github url, and you would need to change the directory

     

    So it would come down to this:

    FROM ubuntu:14.04
    MAINTAINER Eric Schultz <[email protected]>
    
    ENV DEBIAN_FRONTEND noninteractive
    
    RUN locale-gen en_US en_US.UTF-8
    
    RUN apt-get update -q
    RUN apt-get install -qy --force-yes python python-cheetah ca-certificates git
    
    RUN git clone https://github.com/echel0n/SickRage/ /sickrage/
    
    VOLUME /config
    VOLUME /data
    
    EXPOSE 8081
    
    ENTRYPOINT ["python", "/sickrage/SickBeard.py"]
    CMD ["--datadir=/config"]

     

    the SickBeard.py stays the same for now, as the sickrage developers didn't change that yet

    you might want to change the port if you run both of them :P
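
    (building and running it would then be something along these lines -- the tag, build path and the 8082 host port are just my own choices:)

    # build from the directory holding the Dockerfile above
    docker build -t sickrage /mnt/cache/docker/dockerfiles/sickrage/
    # run it next to sickbeard on a different host port
    docker run -d --name="sickrage" -v /mnt/cache/sickrage/database:/config -v /mnt/user/:/data -p 8082:8081 sickrage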

     

  11. was looking for a crashplan dock and a plexconnect one but came up empty :(

    i know about lamp...

    but let's say i want to stay with ubuntu... just because it is a popular distro and has a lot of packages

    if i go look on the hub for lamp then i will have to open the dockerfile to see what base it uses ?

     

    i searched using the search on top for mysql and i get a lot of different docks... is there a way to weed out all the non-ubuntu-14.04 ones?

     

  12. Needo

     

    if i understand correctly, you are using a debian image to start with?

    not the ubuntu one from Lime Tech?

    i understand how you guys hook into that base image

    but where the heck is it coming from?

     

    been looking at the dockerfiles from schultz and you, and all i can see are the different FROM lines...

    this means the base image is coming from docker?

     

    where can i choose a base image?

    any base images coming with php, mysql and apache?

    or are we supposed to install those one at a time?

     

    aka base image -> dock for mysql -> dock for php -> dock for Apache

     

    for example, i need a mysql database for XBMC and one for newznab

    they can both run on the same one, i don't care much about that, but how do i hook newznab with its HTTP (apache) into the same mysql docker?

    or do i need to run a mysql docker and a mysql+php+apache docker to run newznab?
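
    (the way i currently understand it, you would run a mysql container plus a separate apache+php container and point the web one at the database, e.g. via docker's --link; the image names below are only illustrative guesses, and the MYSQL_ROOT_PASSWORD variable is whatever the mysql image's docs say:)

    # database container, with its data kept on the cache drive
    docker run -d --name="mysql" -v /mnt/cache/appdata/mysql:/var/lib/mysql -e MYSQL_ROOT_PASSWORD=secret mysql
    # web container for newznab, linked so it can reach the database under the hostname "db"
    docker run -d --name="newznab" --link mysql:db -p 8080:80 -v /mnt/cache/appdata/newznab:/var/www/newznab some/apache-php-newznab-image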

     

     

  13. ok JONP

     

    here we go again, BEAR IN MIND I HAVE 50.000 TV EPISODES on this server and i don't want to rescan them all

     

    so all paths in all plugins are set to /mnt/user/music or /mnt/user/tv.seeing or /mnt/user/tv.to.see

    so all ini files have the /mnt/user/ paths

     

    now if we going to start a docker with this

    docker run -d -h Tower --name="sickbeard" -v /mnt/cache/sickbeard/database:/config -v /mnt/user/:/data -p 8081:8081 eschultz/docker-sickbeard

     

    then i will have to change all paths in the ini files to /data/tv.seeing or /data/tv.to.see

    you said a few posts ago that we are not supposed to change the right side of the colon, aka :/data

     

    can i ask why?

     

    if we use this

     

    docker run -d -h Tower --name="sickbeard" -v /mnt/cache/sickbeard/database:/config -v /mnt/user/:/mnt/user -p 8081:8081 eschultz/docker-sickbeard

     

    then it will use our existing ini files with all the /mnt/user paths and just keep on spinning?

     

    i fail to see why we need to start from the /data path in a docker ?

     

     

     

  14. hey, hey, hey, who are you calling a command line jockey :-)

     

    I hate the command line and am almost a noob in Linux myself.

    I read manuals and google the rest :-)

     

    also LM did not change the file system, they added support for a new file system.

    you do not have to use it though, or at least read up on the new file system before using it. that's what I did when rebuilding my server (not unRaid).

    since I am a noob who comes from windows through and through, it was a quest of epic proportions, but so far so good...

     

    NO docker without btrfs

    so they force us to use it :(

  15. so, what's the big deal?

    shrink your btrfs volume

     

    add swap partition to RAW device.

     

    all you guys assume we are all command line jockeys

    the thing is that we have no clue how to even start on that... we were spoiled with plugins

    we assume that if limetech changes the cache drive filesystem, they at least know that a lot of us are running a swapfile...

    they asked for the data in the polls

    so if they change the filesystem they at least need to be sure this will not affect us in any bad way

    it's 2 o'clock in the morning and in 4 hours i have to get up for work

    and plex is still not running (plex folder = 91 GB)
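
    (for the record, the quoted advice seems to boil down to something like this -- a rough sketch only, the device name is an example, the partition-table step is the risky part, and you want a backup before touching any of it:)

    # shrink the btrfs filesystem mounted at /mnt/cache to free space at the end of the disk
    btrfs filesystem resize -4g /mnt/cache
    # then, with the array stopped, shrink the partition and create a new one in the freed
    # space using fdisk/gdisk, and turn that new partition into swap
    mkswap /dev/sdX2
    swapon /dev/sdX2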

     

     

  16. thanks for nothing again

     

    >:(

     

    NO SWAP files on btrfs ??

    you couldn't have put that in big letters in the first post?

     

    spent 4 hours moving stuff off the cache drive

    reformatted the thing, and now the swap file is not working

     

    Jun 20 00:58:26 R2D2 kernel: swapon: swapfile has holes

    Jun 20 00:58:26 R2D2 rc.swapfile[10796]: Swap file /mnt/cache/swapfile re-used and started

    Jun 20 01:05:59 R2D2 kernel: mdcmd (55): spindown 0 (Routine)

    Jun 20 01:08:34 R2D2 rc.swapfile[11875]: Swap file /mnt/cache/swapfile re-used and started

    Jun 20 01:08:34 R2D2 kernel: swapon: swapfile has holes

    Jun 20 01:12:50 R2D2 rc.swapfile[12244]: Creating swap file /mnt/cache/swapfile please wait ...

    Jun 20 01:14:41 R2D2 rc.swapfile[12382]: Swap file /mnt/cache/swapfile created and started

    Jun 20 01:14:41 R2D2 kernel: swapon: swapfile has holes

     

    a google search gives conflicting info, but the btrfs wiki says it's not supported?

  17. personally i wouldn't trust any reiser-to-btrfs convert tool, as any failure in it would result in complete cache drive data loss

     

    i would suggest the best way to do this would be two simple rsync commands

     

    backing up files to the array, blanking and reformatting the cache and restoring the backup using rsync will be fast and absolutely reliable. at no point would you be doing a hail mary with your data

     

    care to provide these commands for the technically challenged ?

    don't think it is as simple as rsync /mnt/cache /mnt/disk10/cache
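
    (for anyone else in the same boat, i believe the quoted approach amounts to roughly this -- the disk10 path is just an example, and the trailing slashes matter:)

    # 1. copy everything from the cache to an array disk, preserving permissions and times
    rsync -av /mnt/cache/ /mnt/disk10/cache_backup/
    # 2. stop the array, set the cache to btrfs, restart and format it via the webGui
    # 3. copy everything back
    rsync -av /mnt/disk10/cache_backup/ /mnt/cache/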

     

  18. Ok here we go some positive news too

     

    Beta 6 so far has been running smoothly

    MUCH better than beta 3 and beta 5a

    cpu and memory utilisation is much better than with both those releases... i have 2 servers, one is running plain unraid 6 and the other one unraid with Xen

    and i just copied the files and rebooted...

    the VM came up automatically and, like i said, the CPU utilisation between this version and the prior ones is a night and day difference

    before, top always showed a load around 4+, now it goes down to 1.44...

    and i really didn't change anything else

     

    the only caveat i have is that now i need to zip my plex folder, stop all my plugins and move everything off the cache drive for btrfs :(

    i know it might look easy to the development people, but having 50.000 TV episodes and nearly 900 shows gives plex a lot of media data...

    and moving that is not done in 10 minutes :(

     

    hopefully Limetech will now stick with btrfs, as i'm not in the mood to do this too often, but i am really interested in docker... it seems like the ideal plugin system

  19. why do you have the bootloader twice?

     

    name = "archVM"

    bootloader = "pygrub"

    memory = 2560

    vcpus = '2'

    disk = [

    'phy:/mnt/cache/Domains/ArchVM/arch.img,xvda,w',

    'file:/mnt/cache/Domains/ArchVM/data.img,xvdb,w'

    # 'phy:/mnt/user/nameofshare,xvdb,w'

    ]

    vif = [ 'mac=00:16:3e:27:11:22,bridge=xenbr0' ]

    bootloader = "pygrub"

  20. Haha the plex library move is a pain. There are several hundred thousand files in that folder (no joke)

     

    Plex wiki says that the easiest way is to zip or rar first, then move that file (don't use compression and it will be faster). Otherwise it will take a very long time.

     

    I learned that when I moved my plex library from the old plugin into my new VM. It was much easier after rarring it

     

    Lets say you have your plex library specified to live under /mnt/cache/appdata/plex or /mnt/user/appdata/plex or /mnt/disk1/appdata/plex.  Doesn't matter.  Same for where your media content is located (doesn't matter).  You can do this to install and run Plex to a Docker container WITHOUT doing ANYTHING to your existing library data:

     

    docker run -d  --net="host" --name="plex" -v /mnt/path/to/appdata/plex:/config -v /mnt/path/to/mediacontent:/data -p 32400:32400 eschultz/docker-plex

     

    After this command completes, type http://tower:32400/web and enjoy!

     

    DONE!

    If we can't convert from reiserfs to btrfs, then we need to move everything off the cache disk first and then back, so your example above is moot for anybody who uses the cache disk  ::)

  21. so from the previous discussion it appears there is no way to convert from reiserfs to btrfs without losing data. so I guess we need to move the apps' data from the cache drive to an array drive and, after we have formatted, move it back.

    I have just done that and it appears painless enough.  Once I stopped the array I was able to change the format to be btrfs and restart the array.  Once the array restarted the cache drive showed as unformatted, so I selected the option to format it which only took a few seconds.

     

    One thing you do need to do is check any cache-only shares as in my case I seemed to lose those settings.

     

    Having done that I moved my cache drive files back and everything seems to be operating as normal.

    how did you change to btrfs?

    i assume that simply stopping the array will not change it...