Posts posted by casperse

  1. On 12/19/2023 at 10:43 AM, alturismo said:

no need to wake up from regular standby (deep standby won't work)

     

and maybe take another look at what I posted about commenting out ...

     

and, did you really test that it works without a password?

So sorry, I have reread your post 🙂

Yes, executing this command in the docker terminal works (without username and password):
owi2plex.py -h 192.168.0.250 -b "PLEX (TV)" -o /owi2plex/stue-epg.xml >> /dev/null

    So I have now done what you said in the cron file:
[screenshot]

[screenshot]

    I also set the cron in the UI and in the text file, hoping it will run now 🙂
[screenshot]
     

And one hour later I have the file from the cron job:
[screenshot]

    This should work now?

Thanks again, and sorry for not reading your post thoroughly enough!
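For reference, a minimal sketch of what the crontab line could look like, assuming the container runs a standard cron daemon and /owi2plex is the mapped output folder (the schedule and log path are just examples, not taken from the screenshots above):

# run the EPG export at the top of every hour and keep a simple log
0 * * * * owi2plex.py -h 192.168.0.250 -b "PLEX (TV)" -o /owi2plex/stue-epg.xml >> /owi2plex/owi2plex.log 2>&1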

But that would mean not using the proxynet and not being able to use the docker names in NPM, right?
I did read (I've read most of the posts in this thread) that running NPM on host was "recommended", but since I have 40+ proxy hosts I really don't want to change all of them 🙂



OK, so I read up on ports and changed them all as below (making sure no ports conflicted):
[screenshot]
I now have the first instance in host and the second running in bridge (hmm, I didn't try proxynet - would that work now?).
Anyway, NPM is now working with one instance as host and one in bridge.
    Thanks again for explaining this!

  3. 4 hours ago, alturismo said:

ok, you can't run 2 instances on host ... but this would collide with your earlier posts about bridge ... so host mode is active I assume, or are your NPM and all other dockers also in host mode?

basically, the only issue I could imagine is ports colliding ...

     

I might have this set up wrong, it's been a long time since I did this (sorry for asking stupid questions, I am genuinely reading to find answers :-).
Like many others I have created my "proxynet", and yes, I use the names of the dockers in NPM, and it works great!
But I can't use the proxynet for Plex - I get no IP and can't access the Plex server (like below).

     


[screenshot]

     

    Quote

so, either you just set up plex1 in host and plex2 in bridge and change the port mapping on the plex2 docker (host-side port),

or you set up both in bridge and change the port mapping on the plex2 docker (host-side port),

or you set both on custom eth0 (br0) and each will have its own IP ...

either way ... all variants should work to run 2 Plex instances

     

Each now has its own IP: one is the host IP and the other is a fixed custom br0 IP, 192.168.0.2.
If I try bridge or the proxynet I get the picture above or below, with no IP.

[screenshot]

     

When you say changing host-side ports, do you mean adding each port in the docker template and substituting it with another one?
(I think I once tried that, but mostly I don't change ports on Unraid dockers.)


NPM is on the proxynet:
[screenshot]

My only working option for Plex so far (I have tried all the others) has been to keep my main instance on host and the other on br0:
[screenshot]

I guess my proxynet is the same as what you call a standard bridge?


    The above br0 "works" just not for NMP - But accessing the http://192.168.0.2:32400/web works (Plex 2)
    I added the /web under the host to get that working long ago for the host one, and that works in NMP
    image.png.1904a4402dbc108bcfd376e172b23ba1.png

Doing the same for the 192.168.0.2 br0 instance I get this:

[screenshot]


I read in this thread that the option to enable "Host access to custom networks" will be removed.
So I guess it's better to try other alternatives.

And thanks, alturismo, for helping me out!
Much appreciated - I think I am missing something "stupid", I just can't put my finger on it.

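For debugging the "no IP" case, one quick check from the Unraid terminal is to inspect the custom network itself - its "Containers" section lists every attached container and the address each one got. A minimal sketch, assuming the custom bridge is named proxynet:

# show which containers are attached to the custom bridge and which IP each one was given
docker network inspect proxynet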

  4. 2 hours ago, alturismo said:

I didn't mean changing the internal native port; you are running your dockers in bridge mode ... so you just map the ports you need ...

    Plex Session

     

    1/ 32400 <> 32400 (plex)

    2/ 32400 <> 42400 (plexii)

     

that would be it; of course you should use 2 different appdata folders ... and different names, so you can map NPM to

     

    plex > 32400

    plexii > 32400 (or host ip:42400)

     

    but you will find a solution ... ;)

    well, if you want your dockers more isolated, yes, keep as is ... ;) your decision ;)

     

Sorry, I am not following how to do this (Plex really doesn't like having two instances on the same server).

I already have two dockers running, each with its own appdata folder.

    Plex (Main): Host mode
    Plex (second): in Br0 - fixed IP

Many people want two Plex servers on Unraid and end up running the second one in a VM with a dedicated LAN port (passthrough).
All my other dockers use the proxynet, but my main Plex server needs to run in host mode.
So I haven't found a way to do it (except maybe creating some special network just for a two-Plex-server setup).

    Mapping ports through a router is easy.
    Are you talking about mapping ports in the nginx proxy manager?
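To illustrate the mapping alturismo describes above, here is a minimal sketch of the two instances as plain docker run commands - assuming bridge mode, separate appdata folders and the official plexinc/pms-docker image (the names and paths are examples, not taken from this thread):

# first instance: host port 32400 -> container port 32400
docker run -d --name plex -p 32400:32400 -v /mnt/user/appdata/plex:/config plexinc/pms-docker

# second instance: host port 42400 -> the same internal port 32400
docker run -d --name plexii -p 42400:32400 -v /mnt/user/appdata/plexii:/config plexinc/pms-docker

NPM can then point one proxy host at hostIP:32400 and the other at hostIP:42400 (or at the container names, if NPM shares a custom network with them).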

  5. 1 hour ago, alturismo said:

if you also read the header ...

     

[screenshot]

     

so yes, if you have this disabled in your docker settings (Settings, Docker ... default: no) then yes, bridged, custom and host can't "talk" to each other ... if you enable host access ... then of course it works and the info you posted is obsolete ...

     

[screenshot]

     

but you also know you could just run your second instance on another port ... just map a different host port like 42400 ...

     

I don't think it's possible to change the Plex internal port (at least, many other posts say you can't)?
Hmm, I had forgotten about that setting from when I created the proxynet on Unraid - I don't believe it would be a good idea to enable host access to custom networks?
Or I would have to find a LAN card and make a special group for this docker 😞
     

    Thanks for your insight!

  6. 6 hours ago, alturismo said:

hmm ... I guess it won't work without "modification" ;)

     

1/ spaces ... as you may have seen in the sample, I use a "list" in there, separated by ... spaces ;)

2/ user/pass ... if I remember correctly, it was not possible without u/p

     

so, here is what you could try, if the command works fine on the command line:

     

comment out the regular part of the script and add your personal line there, with the bouquet "hardcoded" and u/p removed

     

    sample

[screenshot]

     

Great, thanks, I will try that right away ... 🙂
[screenshot]
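A rough sketch of what that modification amounts to - the actual variable names and the exact line in the container's script may differ, this only shows the idea of hardcoding the bouquet and dropping the user/password flags:

# original line in the script, commented out (variable names here are hypothetical):
# owi2plex.py -h $BOXIP -u $USER -p $PASS -b "$BOUQUET" -o /owi2plex/epg.xml
# personal hardcoded line, no credentials:
owi2plex.py -h 192.168.0.250 -b "PLEX (TV)" -o /owi2plex/stue-epg.xml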

  7. 9 hours ago, alturismo said:

"no success" means what? Any output? If so, what was it ... ? ;)

     

so, you tried it manually and not using the built-in cron feature, correct?

     

[screenshot]

     

[screenshot]

     

it of course also works manually ... even though I don't know why you would do it that way ;)

     

but the correct way would be this anyhow ...

     

    owi2plex.py -h 192.168.1.13 -u root -p PASSWORD -b BOUQUET_NAME -o /owi2plex/HDTV_bouquet.xml

     

[screenshot]

     Thanks so much! I used the wrong command and the bouquet name was wrong.

    So now I got the command working! - Now to get it running automatically.


So I don't have a user and password - it's a closed home LAN; can I just leave them empty in the config file (" ")?
Also, the TV bouquet name contains a space, so can I just write it as: "PLEX (TV)"?
[screenshot]

     

And then set the cron through the web UI and that should be it?
I can get the picons loaded into xTeVe - this is so great!

Before this, I matched and tuned the channels manually - this is awesome!
     

This is a very old post, and Unraid has developed a lot!!! 🙂

I really want to run a second Plex server to play with xTeVe and Live TV.
     

    Would it be easier to create a VM with a dedicated LAN passthrough?

Or is there an easier way to do this? So far, running two instances in docker bridge mode hasn't worked for me.
     

I feel stupid, but after reading so many posts I haven't found one that explains how to use ./owi2plex.
Reading the GitHub page, I should use this command to get the XML from my Enigma SAT box:

     

    ./owi2plex -b PLEX -h 192.168.0.250 -o /owi2plex/stue-epg.xml

     

I tried executing this in the docker terminal, but with no success.

     

    Thanks
     

     

  10. 3 hours ago, blacklight said:

Also an interesting setup. Are you going for a media center (Plex etc.)? I am asking because of the many M.2/NVMe slots :P

     

You definitely don't need a GPU for the IPMI! As I mentioned, it could even make your day worse (in legacy boot mode). What you could do: hook a dedicated ASUS VGA-to-IPMI cable to the GPU, but depending on the use case that is not necessary (my guess is you would only need it if you do virtualization -> passing through cards and wanting to switch OSes on the fly for testing etc. -> again: as long as you only boot UEFI you can plug in whatever you want, and assuming you didn't screw up any settings the IPMI should work with or without a dedicated GPU).

     

Keep in mind that the last two slots are only x4 and ONLY gen3, so you are going to see worse performance with your 3060. Depending on the use case (AI for example - that's what I planned, stable diffusion etc.) you can maybe live with the reduced performance.

     

Anyway, it would be nice if you could keep us up to date about your system; I would be interested in how things go with your PCIe monster.

     

And one last question: why are you going for 1G LAN only? With that much storage, wouldn't at least 10G be nice? :D Just curious.


Yes, my existing Unraid server (see configuration below) is lacking the power to run everything.
A Plex/Emby/Jellyfin server, yes - but all the great dockers running on the side (Nextcloud, Paperless, the Synology DSM docker (a NAS in a NAS with apps), etc.) just need more power and NVMes, plus a couple of VMs for gaming (an AMP game server VM is awesome).
I would also like enough NVMes for a ZFS mirror or raid to protect the data and have snapshot support for appdata and the VMs.

I would need a GPU for Unmanic 🙂 but I guess the Quadro P2000 would be better in an x4 slot (and it's very energy efficient).
The very small 3060 card was there to get better quality when encoding with Unmanic.

Correct, 10G is on my list, but I really want a UniFi 10G switch, and I am waiting for a better product than the existing one - one that also supports 2.5G networking in an enterprise rack setup.

I must admit that the lack of PCIe lanes almost made me go AMD, but the iGPU Quick Sync performance made me choose Intel again - like many others.

  11. 2 hours ago, blacklight said:

I have a picture of my layout. I would definitely suggest water cooling; that leaves plenty of space around the VRM. This is the mentioned ASUS Ryujin 2. Directly underneath it is the small IPMI card. Cable management is poor, but my goal was just to connect everything, and I couldn't bend the riser cable any more. So every PCIe slot (including all M.2s except the CPU one) is hooked up. I went to full-size PCIe from the M.2s with 2 different adapters. If you have the space I would suggest M.2 to miniSAS or U.2, which is more elegant, but if you need to hook up something other than storage this is the way to go in my opinion.

     

Every card is properly recognized, but I haven't performance-tested them yet, because I have problems with the LSI HBA, which gets extremely hot btw (that's why I put 3 fans next to it: one big and 2 small switch fans).

[IMG_5859.jpg: photo of the layout]

I plan to use the PCIe slots for:
     

    PCIe slot 1: x1 IPMI
    PCIe slot 2: x8 SAS Controller: LSI Logic SAS 9305-24i Host Bus Adapter - x8, PCIe 3.0, 8000 MB/s (6 connectors = Supports 24 internal 12Gb/s SATA+SAS ports - Supports SATA link rates of 3Gb/s and 6Gb/s)

    PCIe slot 3: x8 GIGABYTE - AORUS Gen4 AIC Adaptor, PCIe 4.0 GC-4XM2G4
    PCIe slot 4: x4 NIC: Intel i350-T4 4x 1GbE Quad Port Network LAN Ethernet PCIe x4 OEM Controller card (I350T4V2BLK)
    PCIe slot 5: x4 NVIDIA GeForce RTX 3060
     

The HBA is for the 48-bay backplane server case, and the AORUS is for 2 additional M.2 drives (+ the 3 on the board).
I only added the RTX 3060 because, from my reading, I thought I would need a GPU when using the IPMI?
(I might replace it with an old NVIDIA Quadro P2000.)

  12. 38 minutes ago, itimpi said:

The diagnostics definitely show disk1 to disk22 present. Are you sure that the message is not associated with disk23/disk24, which are not present?

     

23 and 24 are Parity 1+2 in my screenshot above.
I am happy to report that all my drives are functioning in Unraid 🙂

  13. On 11/23/2023 at 11:43 AM, Omid said:

     

    Damn, I wish I had seen this video before! @casperse This should be great info for you! 

Yes, I also found that video; looking at the data sheet for the dimensions, it looks okay!
(I just got the ASUS Pro WS W680-ACE IPMI in a Black Friday deal) - now I'm looking at getting the right CPU.
Since the platform is end of life (Intel will have new sockets in 2024), I thought it better to go high:
an i9 14900K and then "tame" the power usage? That way I can dial it up later if I need extra speed/computing power.
Also, on my last build I always regretted that I only got 64GB of RAM with the Intel Xeon E-2176G; I should have maxed it out.
(Now that one is going to be the backup server.)

  14. On 6/22/2023 at 10:50 AM, Omid said:

Thanks for confirming and for the link! Now to try to find the cable in the UK... 😅

     

    And I totally agree with you regarding the motherboard. The build quality and finish isn't too far off ASUS's ROG series of motherboards, so I feel like it's using the same premium components. Its features, ports and PCIe configurations seem very appropriate for an Unraid build (e.g. no bifurcation when maxing out M.2 slots or the SlimSAS connector). My only qualm is that the big cooler I got for the 13700k, the Noctua NH-D15, will be almost touching the IPMI card if I put it in the top PCIe x1 slot (1mm gap; maybe less!). I'm pretty sure they will eventually touch after some time, because...gravity.

     

    I'll confirm that I received 2x 32GB Kingston (KSM48E40BD8KM-32HM) and am currently putting them through memtest where they were detected as supporting ECC.

Hi @Omid, I am in the same situation: what CPU cooler did you get in order to have the IPMI card in the first slot?
I really wanted the Noctua NH-D15 and can't find a cooler that is in the same league and still fits.

  15. My turn if you still do requests 🙂

     

    2. I will need the case manufacturer and model name.

    Shenzhen Toploong Technology
    Model: S865-48
     

    3. I will need a picture (preferably straight from the front)

    8U model, 48 drives
[picture of the case front]

    Thanks again for volunteering to do this! 

  16. On 10/14/2023 at 9:49 PM, blacklight said:

I actually got the W680-M Ace SE with the IPMI card :P - it has a better PCIe layout in my opinion. Unraid works without a problem.
    Specs:

    i9 - 13900KS

128GB ECC RAM (I'd have to look up the brand if you need it - it was quite a pain to find ECC; it's recognized even in VMs / single-bit ECC)

     

    PCI Layout:

- PCIe x1: IPMI card

- M.2 (CPU): empty, for a future NVMe

- PCIe x8 (gen5 - bifurcated): Quadro K4200 (for VM testing)

- PCIe x8 (gen5 - bifurcated): RTX 3060 Ti (for VM testing)

- PCIe x4 (gen3): Broadcom 9300-16i (HBA) with 14 hard drives attached

- PCIe x4 (gen3): Mellanox ConnectX-3 Pro for max 20Gbit

- 2x M.2 (x4 - chipset): Optane 900P (for NAS testing, using an M.2 to full-size PCIe adapter)

     

For now everything is recognized and booting Unraid was a charm; unfortunately I have problems with virtualizing a NAS, but for now it seems like a software problem. I will update the post if I find an incompatibility (IOMMU problems or similar). The IOMMU layout seems solid btw, no surprises there so far, just that the two onboard SATA groups (1: 4x SATA, 2: SlimSAS, which has a PCIe mode) are in the same IOMMU group, but I already expected that to be the case.

     

And the IPMI is a nice bonus in the combo package. It's a little bit expensive compared to a custom solution (I have no experience there), but it just works.

I am currently on another continent for a year, and having access and power control over VPN via the IPMI card is a real relief.

     

I attached a screenshot of the IOMMU groups, because I had trouble finding them online. I hope it will be useful.

    Let me know if you need more information ;)

     

[IOMMU.png: screenshot of the IOMMU groups]

    What CPU cooler did you get in order to have the IPMI in the first slot?