Jorgen

Members
  • Posts: 269
  • Joined

  • Last visited

Posts posted by Jorgen

  1. On 12/05/2017 at 11:23 PM, Jorgen said:

    At this stage, no matter what I do, I can't get the container to start.

    To restore functionality I have to delete config/HandBrake.conf and start over from step 3.

     

    Apologies, I got that wrong. I can actually leave the file config/HandBrake.conf as it is after editing.

    To get the container working again I have to delete the whole directory config/handbrake, then just start the container.

    The directory and files are recreated and it all works until the next container restart (including scheduled appdata backups).
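
    For anyone wanting to script that workaround, here's a minimal sketch. The container name and host appdata path are assumptions, not the template defaults; adjust them to your own setup:

    # Hypothetical reset script for the workaround described above.
    # "HandBrake" and the appdata path are assumptions; adjust as needed.
    docker stop HandBrake
    rm -rf /mnt/user/appdata/HandBrake/handbrake   # the config/handbrake directory
    docker start HandBrake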

     

  2. 21 hours ago, Kewjoe said:

     

    I didn't do the exact same sequence but I ended up with the same result. I installed the container, which automatically stops. I modified the HandBrake.conf and started it. It worked fine for about 48 hours. I made no further changes. The container restarted today (backup of appdata) and now won't start. I get the following in the log:

     

    *** Killing all processes...
    *** Running /etc/my_init.d/00_config.sh...
    *** Running /etc/my_init.d/01_user_config.sh...
    usermod: no changes
    usermod: no changes
    usermod: no changes
    *** Running /etc/my_init.d/02_app_config.sh...
    *** Running /etc/my_init.d/start.sh...
    Using existing configuration. May require editing.
    [2017-05-14 08:52:42] User "user_99_100" already exists. Skipping creation of user and group...
    [2017-05-14 08:52:42] Running command as user "user_99_100"...
    Ensuring user and group exists
    ln: failed to create symbolic link ‘/nobody/.config/ghb/handbrake’: File exists
    *** /etc/my_init.d/start.sh failed with status 1

     

    That's the exact same error log I got previously.

    I thought I had worked around it, but following this morning's appdata backup I'm in the same situation as you: the container refuses to start.

    Hmm, maybe it's the appdata backup that's causing problems?
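
    If anyone wants to dig into that theory, a quick check from the console would be to look at what's sitting at the path the start script complains about, before and after a backup run. The host path is an assumption, and I'm presuming /nobody/.config maps into the /config volume:

    # What did the backup leave behind where the symlink should go?
    # Host-side path is assumed; adjust to your appdata location.
    ls -la /mnt/user/appdata/HandBrake/ghb/
    stat /mnt/user/appdata/HandBrake/ghb/handbrake   # symlink, directory, or plain file?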

  3. So this one has a working Watching Folder?

    Sure does. I think coppit outlined the changes to this watch folder in sparklyballs' HandBrake thread. This one is much more reliable: you can add new files at any time and they will be processed. With sparkly's I could only get it to process the first batch of added files.
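
    For the curious: a watch folder like this is usually just an inotify loop under the hood. A minimal sketch of the pattern (not this container's actual script; the paths and preset are placeholders):

    #!/bin/bash
    # Watch-folder sketch using inotifywait (inotify-tools). Illustrative only.
    WATCH_DIR=/watch
    OUT_DIR=/output
    inotifywait -m -e close_write -e moved_to --format '%w%f' "$WATCH_DIR" |
    while read -r file; do
        echo "Converting $file"
        HandBrakeCLI -i "$file" -o "$OUT_DIR/$(basename "${file%.*}").mkv" --preset "Normal"
    done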


    Sent from my iPhone using Tapatalk
  4. Thanks for this container, @coppit! I swapped from sparklyballs' container as I could never get the hot-folder processing to work properly.

     

    I think I've found a bug though. Is anyone able to confirm they're seeing the same behaviour as me?

    1. Clean install of container, only set the folder mappings
    2. Container stops automatically (expected)
    3. Edit user and group ID in config/HandBrake.conf
    4. Start container. Starts up fine.
    5. Stop container.
    6. Edit preset in config/HandBrake.conf
    7. Start container. Error. Container won't start.
    8. Revert change to config/HandBrake.conf
    9. Start container. Error. Container won't start.

    At this stage, no matter what I do, I can't get the container to start.

    To restore functionality I have to delete config/HandBrake.conf and start over from step 3.

     

    Funny thing is, if I do the preset edit at step 3 together with the user/group edit, the container starts up fine.

    It's like you only have one shot at editing config/HandBrake.conf, and subsequent edits will break the container.

  5. On 16/04/2017 at 6:28 AM, Lynxphp said:

    A question about the case: does anyone with experience with the case know if it can fit six 3.5" HDDs and two 2.5" SSDs with some creative handicraft? (I know it officially supports 6 drives.)

     

    You can fit at least one SSD behind the front panel. And if you go with an SFX PSU (highly recommended) and mount it on the case floor, you should be able to add at least one more on top of the PSU.

    You can also mount an SSD on the outside of the drive brackets closest to the case sides. (So that's two extra drives.)

    If you haven't come across it yet, Overclock.net has a great owners' thread on this case, with lots of info on cabling and configuration options: http://www.overclock.net/t/1266342/official-fractal-design-node-304-owners-club

     

    Have fun!

  6. Just a note on the new brackets: I haven't received mine yet, nor seen a picture of the new design. So they may or may not exist in reality. I'll report back when/if I receive them.

     

    Finally received the new Fractal Node 304 brackets that have been redesigned to fit these drives.

    Not sure what I was expecting, but all they did was to add another mounting hole so the drive can be attached at three points instead of two.

    Seeing how my 8TB drive was held in place by rubber bands up until now, I'll call it an improvement!

     

    Old bracket, drive can only be mounted to the two holes on the right:

    1585r85.jpg

     

    New bracket, with the extra mounting hole at top left:

    jud83r.jpg

  7. Just updated today, and now the docker log is getting spammed with this:

    2016-10-07 22:09:27,019 DEBG 'nzbget' stdout output:
    [1d[37m[44m8[24;80H(B[m[39;49m[37m[40m
    2016-10-07 22:09:28,029 DEBG 'nzbget' stdout output:
    [1d[37m[44m9[24;80H(B[m[39;49m[37m[40m
    2016-10-07 22:09:29,044 DEBG 'nzbget' stdout output:
    [1d[37m[44m40[24;80H(B[m[39;49m[37m[40m
    2016-10-07 22:09:30,054 DEBG 'nzbget' stdout output:
    [1d[37m[44m1[24;80H(B[m[39;49m[37m[40m
    2016-10-07 22:09:31,061 DEBG 'nzbget' stdout output:
    [1d[37m[44m2[24;80H(B[m[39;49m[37m[40m
    2016-10-07 22:09:32,071 DEBG 'nzbget' stdout output:
    [1d[37m[44m3[24;80H(B[m[39;49m[37m[40m
    2016-10-07 22:09:33,078 DEBG 'nzbget' stdout output:
    [1d[37m[44m4[24;80H(B[m[39;49m[37m[40m
    

     

    New entry every second. Is anyone else seeing this?

    I did change Settings/Logging/WriteLog within NZBGet from 'Append' to 'Rotate', but made no other changes apart from this.

    I've reverted that config change but the logs are still getting spammed.

     

    OK, so it looks like it does write lots of crap to stdout in console mode. I think the timestamps you're seeing are triggered by the auto-refresh of the NZBGet web interface. I have tweaked the image again and it now closes stdout. I've also created /data/dst, as for some bizarre reason it doesn't create it, even though that's where the logs go. Sounds like a bug to me, but hey, it's now fixed for the Docker image, so please pull down the latest and give it a whirl.

    Success! Logs are nice and clean again. Thanks for the quick fix, and thanks for putting so much effort into these dockers, it's very much appreciated!
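
    For anyone else hitting this: updating on unRAID is just the update button on the Docker tab, but the manual equivalent is roughly the following (image and container names assumed):

    # Pull the rebuilt image, then re-create the container from your template.
    docker pull binhex/arch-nzbget
    docker stop nzbget && docker rm nzbget
    # ...re-create with your usual volume/port mappings, or via the unRAID template.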

     

     

    Sent from my iPhone using Tapatalk

  8.

    • The iGPU device must be at PCI address 00:02.0 for this to work

     

     

    This is very exciting!

     

    Could you advise on how to check the PCI address for the iGPU, please?

     

     

    Sent from my iPhone using Tapatalk

     

    You'll see it in the drop-down menu in the VM editor.

     

    Thanks, it didn't show up for me, but I assume I need to be on the prerelease? I'm still on 6.2.1 stable and wanted to check it for future use.

     

    This seems to do the trick though:

    lshw | grep -A 10 '*-display'

     

    And I think I'm good to go :D

            *-display UNCLAIMED 
                 description: VGA compatible controller
                 product: Xeon E3-1200 v3/4th Gen Core Processor Integrated Graphics Controller
                 vendor: Intel Corporation
                 physical id: 2
                 bus info: pci@0000:00:02.0
                 version: 06
                 width: 64 bits
                 clock: 33MHz
                 capabilities: msi pm vga_controller bus_master cap_list
                 configuration: latency=0
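
    (lspci gives a terser check if lshw isn't handy; the iGPU should show up at 00:02.0:)

    lspci | grep -i vga
    # 00:02.0 VGA compatible controller: Intel Corporation Xeon E3-1200 v3/4th Gen Core Processor Integrated Graphics Controller (rev 06)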

     

    Wait, is VT-d support still required to take advantage of the iGPU? Or can I get away with just VT-x?

     

  9. You might get away with using your TV remote to control Kodi via HDMI CEC. This works well for OpenELEC on a Raspberry Pi, but I have no idea if an OpenELEC VM can support CEC. Your TV also needs to support it. Just another option if you want to look into it.
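
    If you want to test CEC from a console first, libCEC's command-line client can scan the bus. A quick sketch, assuming cec-client is installed and the HDMI port/adapter supports CEC:

    # List the devices (TV, players, etc.) visible on the HDMI-CEC bus.
    echo 'scan' | cec-client -s -d 1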

     

    You mentioned Netflix above; I had problems finding a good Kodi Netflix add-on when I looked into it a few years ago. Things might have changed, but I just want you to be aware of a potential problem there, since it could completely ruin the WAF...

    Hopefully someone else has hands-on experience with Netflix on OpenELEC and can comment on this.

     

    Also, I'd like to echo the advice above about noise levels with the NAS close to your TV. I specifically built my server for low noise since I have no choice but to place it in my living room, about 3 meters away from the couch. I've since spent a lot of time and energy trying to make it as quiet as possible, but it's not completely silent. In quiet movie scenes I can hear the drives and some fan noise too.

     

     

    Sent from my iPhone using Tapatalk

  10. OK, that's a bit frustrating, so there's no way I can build a single NAS/HTPC box.

     

    It seems a media box is not too expensive if I do it with a Raspberry Pi 3, and I can install OpenELEC (which runs Kodi) on it. Would that be the recommended route?

     

    So the setup will be [NAS with unRAID and Emby] -> [Raspberry Pi 3 with OpenELEC] -> TV?

     

    That's how I do it, and it works very well.

     

    But there is an alternative that lets you achieve an all-in-one NAS/HTPC. It's slightly more complicated though...

    If your hardware supports it, you can run OpenELEC as a virtual machine under unRAID, effectively turning your NAS into a server and client at the same time.

    For this to work, both your CPU and motherboard must support virtualisation with VT-d. You'll need at least an i5 for this; make sure to check the specs for VT-d support, as not all of them have it.

    You also need a separate graphics card for the OpenELEC VM.

    You might want some other parts too, like an IR receiver for a remote control, but that's not required.
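
    If you want to check whether your hardware is up for this before buying anything, a common way to inspect the IOMMU groups from the console looks like this (a generic sketch, not unRAID-specific tooling; no output at all means VT-d isn't active):

    #!/bin/bash
    # List every IOMMU group and the PCI devices in it. The GPU you want
    # to pass through should ideally sit in its own group.
    for g in /sys/kernel/iommu_groups/*; do
        echo "IOMMU group ${g##*/}:"
        for d in "$g"/devices/*; do
            echo "  $(lspci -nns "${d##*/}")"
        done
    done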

     

    Something else to be aware of if you go down this path and also stick with the Node 304: you're limited to a mini-ITX motherboard. These only have one PCIe slot, which the extra graphics card will occupy. This leads to two things:

    1. Your CPU must have integrated graphics for unRaid to use

    2. You can't add any more expansion cards if you need more SATA ports etc. in the future

     

     

     

    Sent from my iPhone using Tapatalk

  11. Just sent a message through their support page :) Thanks

    By the way, how is the Node 304 with regard to HDD temps? It's so little eheh

    Temps are within reason, around 40°C for each disk during parity checks. I built my server with silence as a priority though, so very low-power components and minimal heat generation.
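
    If you want to keep an eye on the temps during a parity check, smartctl reads them straight off the drives. Device names here are assumptions; adjust for your array:

    # Print the temperature attribute for each disk (devices assumed).
    for d in /dev/sd[bcd]; do
        printf '%s: ' "$d"
        smartctl -A "$d" | awk '/Temperature_Celsius/ {print $10 " C"}'
    done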

     

    Just a note on the new brackets: I haven't received mine yet, nor seen a picture of the new design. So they may or may not exist in reality. I'll report back when/if I receive them.

     

     

    Sent from my iPhone using Tapatalk

  12. Guys, a little advice: I'm currently returning to unRAID, buying a mix of new and used stuff to build a system. I have 2 Seagate 8TB Archive drives to use too (with 1 6TB WD Red and 2 WD Green 2TB drives).

     

    I love Fractal cases, but in my current Define R3 the HDD slots lack the hole to secure these Archive disks, and I hate this. From what I can see, even the new Fractal Define R5 has the same problem.

    What can you suggest I buy with lots of 3.5" or 5.25" slots? I have seen the Sharkoon T9, but from the reviews on YouTube I can see the same "problem" with the HDD slots.

    Something that can be bought in Europe please :)

     

    Thanks

     

    Might be worth getting in touch with Fractal support. I have the same problem with my Node 304, and when I contacted them about it they said they have redesigned the HDD brackets to fit drives without the center holes. They even offered to send me two of the new brackets once they had them in stock, free of charge!

    I think the R3 has different brackets, but it's worth a try.

     

     

    Sent from my iPhone using Tapatalk

  13. As previously mentioned in this thread, I have this hardware providing passthrough.

     

    If I remember correctly, VT-d and VT-x are separate entries in the BIOS. They're in completely different sections though - one is in 'System Agent' and the other in 'CPU' settings.

     

    I finally got around to upgrading my BIOS to the 2003 version and tried switching on virtualisation.

    I found the VT-x setting but there is no VT-d setting.

     

    Could be because my CPU does not support it! The i3-4130 does have VT-x but not VT-d.

    Can I still do virtualisation in unRAID? What would be the restriction?

     

    Thanks

    Yes, you can. I have this mobo with an even lower-grade CPU (a Celeron) and I manage to run two VMs: one Linux and one Mac OS X.

    VT-x lets you do VMs.

    VT-d lets you pass through hardware to the VM, for example a dedicated GPU for gaming.

    So depending on your needs, VT-x might suffice. If not, you'll need to upgrade your CPU to something that supports VT-d (the mobo already does).
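
    If you'd rather check from a running system than dig through the BIOS, here's a rough sketch (Intel-specific; VT-d doesn't show up as a simple cpuinfo flag, so the usual trick is to look for the kernel's IOMMU/DMAR messages):

    # VT-x: the vmx flag in cpuinfo means the CPU supports it.
    grep -q vmx /proc/cpuinfo && echo "VT-x supported" || echo "no VT-x"
    # VT-d: look for DMAR/IOMMU messages from the kernel at boot.
    dmesg | grep -i -e DMAR -e IOMMU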

  14. I have a problem with the DeleteSamples.py and ResetDateTime.py extension scripts. I get an error suggesting that they are not recognizing that NZBGet is version 15. Below are pertinent snippets from the log. I have made sure permissions are correct. Any ideas?

     

    -Thanks

     

    Sun Jun 14 10:39:22 2015  INFO  DeleteSamples:  File "/data/scripts/DeleteSamples.py", line 55

    Sun Jun 14 10:39:22 2015  INFO  DeleteSamples:    print "This script can only be called from NZBGet (11.0 or later)."

    Sun Jun 14 10:39:22 2015  INFO  DeleteSamples:                                                                      ^

    Sun Jun 14 10:39:22 2015  INFO  DeleteSamples: SyntaxError: invalid syntax

     

    Sun Jun 14 10:39:22 2015  INFO  ResetDateTime:  File "/data/scripts/ResetDateTime.py", line 25

    Sun Jun 14 10:39:22 2015  INFO  ResetDateTime:    print "This script can only be called from NZBGet (11.0 or later)."

    Sun Jun 14 10:39:22 2015  INFO  ResetDateTime:                                                                      ^

    Sun Jun 14 10:39:22 2015  INFO  ResetDateTime: SyntaxError: invalid syntax

    Sun Jun 14 10:39:22 2015  ERROR  Post-process-script ResetDateTime.py

     

    I had the same errors for ResetDateTime.py. Appears to have been solved by editing the first line of the script to:

     

    #!/usr/bin/env python2
    

     

    Apparently the script is written for Python 2, while this Docker image defaults to Python 3, so it runs into compatibility issues (a print statement without parentheses is a syntax error in Python 3, which is exactly what the log shows). The edit above makes the script run under Python 2 and all is good again.
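
    If several scripts have the same problem, a one-liner like this would patch them all in one go (GNU sed; the scripts directory is taken from the log above, and .bak backups are kept just in case):

    # Rewrite a bare "python" shebang to "python2" in every script,
    # keeping a .bak copy of each file.
    sed -i.bak '1s|^#!/usr/bin/env python$|^#!/usr/bin/env python2|' /data/scripts/*.py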

    I'm new to the Linux world and it's a steep learning curve, so please, everyone, let me know if I got this wrong.