
Posts posted by Aumoenoav

  1. 8 hours ago, JorgeB said:

    I would use only pools for much better performance and TRIM support; my two main servers have multiple pools and no array. That's not a problem as long as it works for you.

    One issue I think should be taken into consideration too is the extra power use, since pools never spin down.

  2. I have tried to ask around and never get a straight answer, except that I should switch to TrueNAS; but there are many reasons why I do not want TrueNAS (for example, no bare Docker).

     

    I'm an early adopter, so of course I jumped on the ZFS thing with 4x U.2 drives in a ZFS raidz pool for Docker, VMs, databases, etc., and 8x 10TB drives in a raidz pool for long-term storage.

     

    Of course, not using the array is not how Unraid is intended to be used, but I'm not sure if diverging this much is clever.

     

    I have the chance to move things around now, so I would like some input on what to do.

     

    If you had access to 4x 8TB U.2 drives (yes, U.2: enterprise NVMe in SSD form factor), 8x 10TB hard drives, and 4x 2TB enterprise NVMe drives, all attached via SAS cards:

     

    Would you create arrays on all of them, locking databases etc. to the U.2 drives and using the NVMe as cache?

     

    Or would you drop the array and just use pools?

     

    Would you use ZFS in the array? (What benefit does that even give?)

     

    So, any thoughts?

     

    Attached is an example of the current layout.

     

    IMG_0354.jpeg
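
    Whichever way this lands, my thinking is that the databases get their own dataset with tuned properties either way. A rough sketch of what I mean (the pool name fastpool and the values here are hypothetical, not my actual config):

    zfs create -o recordsize=16K fastpool/databases   # smaller records suit random database I/O
    zfs create fastpool/appdata                       # Docker appdata, defaults are fine here
    zfs set compression=lz4 fastpool                  # cheap compression inherited by all datasets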

  3. How do I pass a whole USB 2.0 hub through to a VM? Home Assistant uses several USB devices, and especially for Zigbee I would like to pass through the whole hub. Is there any reliable way to do that? As far as I can tell, 1-13.1 is the hub, and 1-13.2 is the UPS plugged into the USB 2.0 port next to the hub.

     

    Also, which of the three options should I usually choose: automount on start, or on VM start? What does Serial do? And lastly, what does "Connect as Serial Guest Port Number" do, and why is it always 4? (A passthrough sketch follows after the screenshots below.)

     

     

    Screenshot 2023-07-29 at 21.05.16.png

    Screenshot 2023-07-29 at 21.02.25.png

    Screenshot 2023-07-29 at 20.56.06.png
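
    One approach I am considering in the meantime is passing the whole physical port (hub and all) with a QEMU override in the VM's XML. A rough sketch, assuming bus 1, port 13 based on the 1-13.x numbering above (verify with lsusb -t first; I can't confirm every hub survives this):

    <domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
      ...
      <qemu:commandline>
        <!-- hand everything plugged into bus 1, port 13 (the hub) to the guest -->
        <qemu:arg value='-device'/>
        <qemu:arg value='usb-host,hostbus=1,hostport=13'/>
      </qemu:commandline>
    </domain>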

  4. This is a fairly new installation, with a 10G SFP+ network card. In general it works fine, but from time to time it gets no DNS response, and after 30 seconds it's back. Plex especially can't run, because it loses its connection every 30 seconds.

     

    I also have an issue where booting takes a very long time, up to 10-15 minutes. It halts at the two points shown in the photos I am attaching. I'm also attaching the diagnostics file.

     

    Any ideas? I'm very happy with Unraid, but would like it to work perfectly.

     

    P.S. I have been a good boy and switched my TLD to home.arpa (server.home.arpa), as this is the only domain officially reserved by the standards for home networks (never use .local!).

     

    IMG_2637.jpeg

    IMG_2638.jpeg

    server-diagnostics-20230719-2259.zip
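
    In the meantime I am logging the dropouts so I can line them up with the syslog. A quick sketch (the resolver 192.168.1.1 is a placeholder for your own DNS server):

    #!/bin/bash
    # Probe DNS once per second; print a timestamp whenever a lookup fails
    while true; do
        dig +time=2 +tries=1 @192.168.1.1 example.com +short >/dev/null 2>&1 \
            || echo "$(date '+%F %T') DNS lookup failed"
        sleep 1
    done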

  5. 6 hours ago, JorgeB said:

    It's also x8, but PCIe 4.0; you can see all the specs on the Broadcom site.

     

    Yeah, so I think I have been looking at this wrong all along and I don't need a 4x4x4x4 motherboard. The only issue I have now is that there are only two ports on those cards. Can you help me understand how I can fill up the following ports?

     

    3x SFF-8643 (three 12G SAS backplanes with 4 HDDs each, regular drives)

    8x OCuLink SFF-8612, no Tri-Mode support (NVMe)

    4x OCuLink SFF-8612 4i (U.2)

     

    I was planning to land on these cards:

    1x Broadcom 9400-16i or Broadcom 9500-16i (for the Tri-Mode units)

    1x CEACENT CNS44PE16 quad-port SFF-8654 (x8) PCIe 3.0 x16 NVMe U.2 adapter card, via SFF-8654 to 2x SFF-8643 cables (for the NVMe units)

     

    The next version of the NVMe box will support Tri-Mode, but I'm not sure if it's possible to run 8 more ports through the 9400 or 9500.

     

    Can someone help me understand how this will be plugged into the drives? There are just two outputs on the Broadcom, but I need 3x SFF-8643 and 4x SFF-8612, and on the other card I need 8x SFF-8612.

     

     

  6. I am having a really hard time finding a motherboard for my new server. I will run multiple M.2 and U.2 drives, so 4x4x4x4 bifurcation support is needed. To keep costs down, I was initially thinking of reusing an «old» computer with an ASUS ROG MAXIMUS XI HERO (WI-FI), Rev 1.xx motherboard and an Intel Core i9-9900K (Socket LGA1151, 8-core, 16-thread, 3.60/5.0 GHz, Coffee Lake Refresh). It feels a bit of a waste to throw it away, but again, it doesn't support bifurcation.

    The requirement say this: «To ensure compatibility with MB699VP-B V3, please make sure that your add-on card or motherboard’s BIOS/UEFI supports PCIe Bifurcation when using a PCIe 16x or 8x slot. Additionally, set up the PCIe splitter in the BIOS/UEFI with the configuration of x4, x4, x4, x4 for a PCIe 16x slot or x4, x4 for a PCIe 8x slot.»

    Can anyone please help me find a motherboard, preferably one that supports this CPU, so I only have to replace the motherboard? If that is not possible, please suggest any motherboard that can support this. As I will be running Plex, AMD is not really the best choice, but if AMD gives more bang for the buck, I can happily run Plex on its own mini PC that I already have.

    So, to sum up:

    1. Help me find a motherboard that supports bifurcation (4x4x4x4) for socket LGA1151 with a 9th-gen i9,
    or
    2. Help me find any motherboard that supports bifurcation (4x4x4x4),
    or
    3. Give me an AMD alternative that supports bifurcation (4x4x4x4),
    or
    4. Any other options, like expansion cards that can emulate 4x4x4x4 on an x16 slot.

  7. I am building a new server for Unraid and will run 12x 3.5" drives (SAS backplane) and 6x M.2 drives plus U.2. My current motherboard, an ASUS ROG MAXIMUS XI HERO (WI-FI) S-1151, doesn't seem to support PCIe bifurcation (4x4x4x4), so it's useless unless I am mistaken.

     

    Would someone be able to help me with a few things?

     

    1. Does my motherboard support 4x4x4x4 with some BIOS update?

    2. Can someone suggest a good, stable motherboard for Unraid that supports 4x4x4x4 and has the S-1151 socket (so I can reuse my CPU and RAM)?

    3. Or suggest a 12th-gen motherboard instead.

    4. Is AMD worth going for if I will be doing Plex transcoding?

  8. 2 hours ago, juanrodgil said:

    Something weird is happening to me with this plugin.

    I installed the LXC plugin, went to Settings, changed the directory to /mnt/cache/lxc/, and updated.

    Then I go to the LXC tab and try to create a new container based on Arch Linux, and it seems to work, but when I press the "Done" button after it finishes, the container disappears.

     

    In the folder /mnt/cache/lxc/ I see that a folder named "cache" was created with the files from the Arch Linux template, and while the server was creating the container another folder, "arch-multimedia", was created; but when it finishes, only the "cache" folder is there.

     

    In the window I see this output:

    Creating container, please wait until the DONE button is displayed!
    Using image from local cache
    Unpacking the rootfs

    To connect to the console from the container, start the container and select Console from the context menu.
    If you want to connect to the container console from the Unraid terminal, start the container and type in:
    lxc-attach arch-multimedia
    It is recommended to attach to the corresponding shell by typing in for example:
    lxc-attach arch-multimedia /bin/bash

     

     

    In the logs I see that the container was created:

    Jul  7 12:16:04 hades-raid root: LXC: Creating container arch-multimedia
    Jul  7 12:16:18 hades-raid root: LXC: Container arch-multimedia created

     

    I tried with other templates, several Ubuntu versions, and Debian, but in all cases the folder with the LXC container is removed, and I didn't see any error in the logs.

     

    This happens for me too, on Unraid 6.12.2. It worked great on the latest RC, though.

     

    I can see that the script is creating the LXC container with all its files in the file system, but then it deletes them again automatically.
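
    To narrow it down, it is worth checking whether LXC itself and the plugin agree on the container path. A few checks I am running (the container name arch-multimedia is taken from the quote above):

    lxc-ls -f                            # containers LXC actually knows about
    lxc-info -n arch-multimedia          # state of the new container, if it still exists
    grep lxcpath /etc/lxc/lxc.conf       # the path LXC is configured to use system-wide
    ls -la /mnt/cache/lxc/               # what is really left on disk after pressing Done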

  9. I am working on moving from a Synology NAS to Unraid, but I am a bit undecided on how to set up the storage.

     

    My use for the NAS is mainly Docker hosting, with one or two virtual machines.

     

    The machine currently has 9x 10TB regular hard drives, 2x 2TB SSD, and 2x 2TB NVMe.

     

    I will be running the latest version, currently RC5 of the next release, so I was mainly thinking of ZFS, but I am open to suggestions.

     

    So if you had free rein with the drives mentioned above, how would you divide them, and with what configuration?
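
    To make the question concrete: Unraid 6.12 builds ZFS pools through the GUI, but the kind of layout I have in mind would map to something like this on the command line (pool names and device paths are placeholders, not real devices):

    # 9x 10TB spinners as one raidz2 vdev: two-disk redundancy for the bulk data
    zpool create -o ashift=12 tank raidz2 sdb sdc sdd sde sdf sdg sdh sdi sdj
    # 2x 2TB NVMe as a mirror for appdata and VMs; the 2x SATA SSDs could be a second mirror
    zpool create -o ashift=12 fast mirror nvme0n1 nvme1n1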

  10. I'm using Unraid 6.8.3, and recently I cannot start, stop, or delete Docker containers. It works through the API from the Android app, so Docker itself is fine, but checking the debug console in Chrome I see the error below every time I click the button. Creating a container works; clicking, for example, the delete button just produces the following error. I have tried multiple browsers and computers. Has anyone experienced this?

     

    Uncaught TypeError: Cannot convert undefined or null to object
        at Function.keys (<anonymous>)
        at eval (eval at <anonymous> (Docker:1), <anonymous>:5:30)
        at Object.doneFunction (docker.js?v=1581536675:131)
        at i (dynamix.js?v=1570392778:28)
        at Object.h [as handleButton] (dynamix.js?v=1570392778:28)
        at HTMLButtonElement.t (dynamix.js?v=1570392778:28)
    eval @ VM70:5
    (anonymous) @ docker.js?v=1581536675:131
    i @ dynamix.js?v=1570392778:28
    h @ dynamix.js?v=1570392778:28
    t @ dynamix.js?v=1570392778:28

     

  11. I'm still struggling with passing the built-in GPU to Plex for transcoding, but I have an ASUS GeForce GT 630 2GB Silent PhysX and an ASUS Radeon HD 6450 1GB DDR3 Silent lying around from an older media center PC. Would there be any benefit to plugging one of those into the computer? It already has a 1080 Ti, but that's dedicated to a gaming VM.
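
    For the built-in GPU, the first thing I am checking is whether the iGPU even shows up on the host, since Plex hardware transcoding in Docker mainly needs /dev/dri mapped into the container. A quick sketch:

    modprobe i915      # load the Intel iGPU driver if it is not loaded already
    ls -l /dev/dri     # renderD128 should appear once the driver is active
    # then add  --device=/dev/dri  to the Plex container's Extra Parameters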
