sekrit

Members · 78 posts

Posts posted by sekrit

  1. On 2/1/2021 at 7:23 PM, HighRoller said:

    After more research on your site, it appears as if my on-board network HW may not be compatible with the Unraid ver 6.8.3.

    My HW:

    MB = Asrock TRX40D8-2N2T

    uP = AMD 3960x

    Mem = 128G

    Drives = 12 x ST2000NM0033 2TB

    M2 = 2x Force MP600 1TB

    The MB has the following Ethernet interface description:

    Ethernet Interface: 10Gbps/2.5Gbps
    LAN Controller:
    - 2 x RJ45 10GbE by Intel® X710-AT2
    - 2 x RJ45 2.5GbE by Intel® i225
    - Supports Wake-On-LAN
    - Supports Energy Efficient Ethernet 802.3az
    - Supports PXE
    - LAN3 supports NCSI

     

    Is it reliable? Is it stable? I am looking at getting a TRX40D8-2N2T with 256GB RAM but can't really find reviews. I also can't find many notes on people recommending or discouraging RAM for that board.

  2. 26 minutes ago, ghstridr said:

    I have some experience with building bespoke clusters using various technologies.

    Forgive me here, but Proxmox does have this ability if you wanted to start experimenting using a GUI or something. Again, sorry for mentioning a competing product.

    Back to UnRaid.

    If you have 2 UnRaid servers and separate shared storage, you could move all your data/Docker stuff there, but you have to have software to manage the system. One method is to provide a quorum or fencing. Basically this keeps the slave machine(s) from accessing data that currently belongs to the master. It keeps the slave(s) from locking/writing to files while they are held open by the master, thus avoiding corruption and data loss.

     

    The shared storage can be accomplished over the network via iSCSI, NFS, or even SMB, as well as a number of other more advanced storage protocols; those are the ones commonly available in open source. You can also use older cabled SCSI or SAS methods, as long as you have interfaces and controllers that support fencing to keep the hosts separated.

     

    Now the networking bit. You would need a way of introducing a VIP (virtual IP) to the networking stack so that accessing it takes you to the current 'master' in the cluster. The idea is that when one member of the cluster becomes unavailable, code on the other machine takes over the VIP and all the services that were being managed by the now-defunct old master. There are a couple of ways of accomplishing that.

     

    One such piece of software is Pacemaker. It can handle switching a VIP, stopping/starting services, mounting/connecting storage, etc. It determines which node is the master (assuming only 2 members in the cluster) by using a heartbeat signal, which is usually sent over a private Ethernet link between them. I've just used a direct cable between them before and let the NICs figure out auto-MDI-X for themselves. The heartbeat lets each machine keep track of the health of the other. It can include status info on several aspects of the opposite machine, all of which can go into making decisions about what actions to take.

    Say you have multiple Docker containers on a bridge interface so they all have their own IP. Well, you could have certain containers 'moved' over to the other cluster member when, say, the overall CPU usage gets too high on the other machine. I would shut down the container so that the config info and data shares are updated and flushed to the shared storage. Then you could import it (or already have it imported) and start it up on the other machine, and that container's IP should then be available there.

     

    I'm probably missing/glossing over some of the finer details, but proper failover clustering is a complicated subject. So it would be possible with UnRaid, but it would be A LOT of hacking. You really have to know and understand your networking and shared storage concepts.

     

    That said, I really love UnRaid for its standalone abilities, and I feel that it does a lot of things very well. Proxmox has the clustering thing designed in pretty well, so that would be the better tool for the job if clustering is your aim. Use the correct tool for the job instead of making the one you have fit the odd-shaped hole.

    I could not agree with you more, and I would be surprised if anyone takes issue with your sharing such objective insights here. 

     

    In the future, I would love to implement such a solution. However, that will be many moons from now, as I struggle with my ADHD (and the difficulties it brings me in absorbing text information... especially beyond a page or two of text).

     

    Thank you so very much for sharing your thoughts. I have emailed your post to myself so that when I am ready, I can reference it.

     

    Please enjoy your day.
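    For anyone who later wants to experiment with the Pacemaker approach described in the quote, a minimal two-node floating-VIP setup might look something like the following sketch (pcs 0.10+ syntax). The node names (tower1/tower2), cluster name, and VIP address are all placeholder assumptions, and a real deployment with shared storage also needs a proper STONITH/fencing device configured:

```shell
# Sketch of a two-node Pacemaker cluster with a floating VIP.
# Node names, cluster name, and the IP are illustrative placeholders.
pcs host auth tower1 tower2
pcs cluster setup unraid_ha tower1 tower2
pcs cluster start --all

# Fencing is what keeps the standby node off shared storage; disabling it
# is acceptable only in a throwaway lab, never with real shared data.
pcs property set stonith-enabled=false

# The floating VIP: whichever node is active answers on 192.168.1.250.
pcs resource create cluster_vip ocf:heartbeat:IPaddr2 \
    ip=192.168.1.250 cidr_netmask=24 \
    op monitor interval=10s
```

    When the active node stops answering the heartbeat, Pacemaker moves `cluster_vip` (and any resources grouped with it) to the surviving node, which is exactly the VIP handoff described above.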

  3. On 5/12/2018 at 7:06 PM, Jcloud said:

    Oh, that sucks. I know half the reason you bought that board was from my data/setup. Let's see...

     

    I'm just going to go down my list:

    1.  BIOS\Advanced\AMD PBS  --->  "Enumerate all IOMMU in IVRS" is "ENABLED"
    2.  BIOS\Advanced\CPU Configuration  -->  "NX Mode" and "SVM Mode" are "ENABLED"
    3.  BIOS\Advanced\AMD CBS\NBIO Common Options\NB Configuration -->   "IOMMU" is "ENABLED"
    4.  syslinux.cfg  :
      
      root@HYDRA:/boot/syslinux# cat syslinux.cfg
      default menu.c32
      menu title Lime Technology, Inc.
      prompt 0
      timeout 50
      label unRAID OS
        menu default
        kernel /bzimage
        append pcie_acs_override=downstream,multifunction initrd=/bzroot
      label unRAID OS GUI Mode
        kernel /bzimage
        append pcie_acs_override=downstream,multifunction initrd=/bzroot,/bzroot-gui
      label unRAID OS Safe Mode (no plugins, no GUI)
        kernel /bzimage
        append pcie_acs_override=downstream initrd=/bzroot unraidsafemode
      label unRAID OS GUI Safe Mode (no plugins)
        kernel /bzimage
        append pcie_acs_override=downstream initrd=/bzroot,/bzroot-gui unraidsafemode
      label Memtest86+
        kernel /memtest

     

    Going to reboot after posting and look for a board revision number, for comparison.

    I'm sorry dude, I feel like I let you down or I've misreported something (although I honestly don't know what that would be).


    SO VERY SORRY to hijack a thread, but this is the only one I could find mentioning "NX MODE". My Taichi manual doesn't describe it and I don't know what it's for. It hasn't been on any of my boards before (nor has "PSS Support").

     

    Would someone please share what "NX Mode" and "PSS Support" are, and the recommended general settings for my Unraid build? (I essentially want to avoid any settings which will prevent functionality while I am setting up. I can dial them in more granularly later.)
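    Since the `pcie_acs_override` flag in the syslinux.cfg above exists purely to split devices into more IOMMU groups, it helps to be able to inspect what the kernel actually produced. The sysfs layout `/sys/kernel/iommu_groups/<n>/devices/<id>` is standard; the function name below is just for illustration, and it takes the sysfs root as an optional argument:

```shell
# List PCI devices by IOMMU group. The sysfs root is a parameter so the
# helper can also be exercised against a test directory.
list_iommu_groups() {
  local root="${1:-/sys/kernel/iommu_groups}"
  local d grp
  for d in "$root"/*/devices/*; do
    [ -e "$d" ] || continue          # glob matched nothing
    grp="${d%/devices/*}"            # strip /devices/<id>
    printf 'group %s: %s\n' "${grp##*/}" "${d##*/}"
  done
}
```

    Running `list_iommu_groups` with no argument on a live system prints one `group <n>: <pci-id>` line per device; anything you want to pass through to a VM must be isolated in its own group (or split out via the ACS override).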

  4. 55 minutes ago, falconexe said:

    We run a media production company, and have some of the largest servers out there...

    If I may ask:

     

    1.) What are your most prominently/frequently produced forms of media?

     

    2.) How does Unraid impact your company as a software utility?

     

    3.) Which solutions and/or tools would aid your production house in improving quality of production, lowering turnaround time, acquiring additional clients, leveraging your current resources into greater gains/return on investment, or other areas not mentioned?

     

    4.) Approximately how many of the staff interact with (maintain, store to, edit from, render from) your servers which operate Unraid on a regular basis?

     

    Thank you for any/all time & information which you are willing to share. I believe that Unraid would have a MASSIVE influx with the proper media-production focused tooling added. And I am dead-set on finding ways to prove it.

  5. On 1/19/2011 at 7:40 AM, WeeboTech said:

     

    I've always been surrounded by computers and electronics, from the early days of high school working in a computer room.

    Since I'm a music buff (you have not seen the other wall with thousands of CD's on shelves), musician and computer geek, it's all gotta go somewhere.

    Fact is the inverted U works well for me.

    It's pretty funny when I have a couple people over for lan parties or jamming. But it works.

     

    For a building desk, I have a rolling TV cart that comes out from under the desk with an anti-static mat.

    I don't build nearly as many computers as I used to. In fact I'm dumping about 5 of them (still have 7, and a few laptops).

    These days, I prefer ITX and laptops unless it's a large unraid server with many disks.

     

    In regard to the build with the desk and the window, consider that any computer near the window is subject to the elements, i.e. rapid changes in temperature, humidity, dust, etc., unless you have an exhaust fan.

    It's a concern for me because I live near the beach. Your area may be different. Just thought I would bring it up.

    Do you happen to record your own music using Waves plugins by any chance?

  6. 7 hours ago, ChatNoir said:

    Hi, your poll is missing the choice for "no to all the above" if you want an accurate result.  :D

     

    Well, it depends on what the aim of the poll is.

    Fair enough. In light of that, I have added TWO options... "None of the above" (as requested) and "Other".

     

    Does that tune your guitar strings?  🙂

     

  7. 19 hours ago, Energen said:

    And would you only add hardware that's used by more than one person or would you let anyone add anything? That opens the door to having 500 motherboards, 500 cpus... etc.. that's not a usable list to choose from, even if low ranked things weren't displayed.

     

     

    So, what I was thinking (and apparently did not convey clearly) was that the leaderboard would show something like the top 5 or so per set: chipset for a motherboard, generational series for a CPU (3900XT, 3800XT, i9-7980X, etc.).

     

    So, up to a set amount per release period. Since there are multiple releases per year per category, it could be segmented: 50-series boards, 70-series boards, X-series boards (I only keep coming back to the boards for consistency).

     

    Each year could have a scroll window and/or filter per season, and the top 3/top 5/top 10 (whatever is decided) per category (mobo, CPU, RAM, SSD, HDD) could have their own windows to show their leaderboards.

     

    Leaderboards would be based upon reliability in functioning as Unraid servers.

    There could be basic ratings for handling basics like storage/automated tasks/maintenance/IOMMU, etc.

     

    Bonuses could be given for speeds, temps, uptimes, or expandability beyond a threshold we could set.

    Detractors could be given based upon crashes, downtimes, temps, IOMMU failures, and general complications.

     

    I mean, the metrics could be as inclusive or exclusive as wanted (of course).

     

    My main concern is that when people see the leaderboard, they can feel certainty (far above average) that their system will likely function upon assembly... barring bad production batches, etc., so that people can buy and build their Unraid servers based upon "reliable average reviews".

     

     

    So, to more specifically respond, yes... eventually over 500 items would accumulate over time.

    BUT, since they would be segmented by years (and maybe seasons), then by parts in each section... then, aside from records kept for the sake of posterity, the volume is largely irrelevant (especially since they are only text files). At least in my opinion. 🙂

  8. I agree. However, as an updated/"rolling" leaderboard with a column to show when each chipset on the leaderboard was last updated, I believe that it could prove a very useful resource as more people used it. Older models would basically fall "off screen" as newer and newer parts came out to replace their spots on the date-prioritized board.

  9. Is there any general consensus regarding which mobo may be the very best X570 for Unraid, supporting passthrough and most any "Unraid Supported" task which the owner "throws at" their unraid server (assuming that the user is not attempting to demand more than the parts were designed for)?

  10. I am wondering if there may be any possibility/interest in having a parts rating feature on the forum or other platform.

     

    This way, people can +1/-1 each functional or problematic part from their Unraid build. This could allow our community to review mobos, RAM, CPUs, graphics cards, etc. for our Unraid-specific server builds and detail which problems we may have, or areas where they really shine... although people could just as easily see the part ranking (without reviews) from a part "leaderboard" which could perhaps be kept on the site main page, or maybe just the forum main page. This way, if people want to just buy parts quickly from community trust, then they can. And they could feel confident about the ranking being useful for their Unraid-specific builds to generally work upon assembly (minor tweaks aside).

  11. On 4/20/2018 at 8:20 PM, SpaceInvaderOne said:

    I am starting a series of videos on pfSense. Both physical and VM instances will be used, with topics such as using a failover physical pfSense to work with a VM pfSense, setting up OpenVPN (both an OpenVPN server and multiple OpenVPN clients), using VLANs, blocking ads, setting up Squid and SquidGuard, and other topics.

     

    This first part is an introduction: it gives an overview of the series of videos and talks about pfSense and its advantages.

    Part 2 is on hardware and network equipment.

    Part 3: install and basic config.

    Part 4: customize, backup and update.

    Part 5: DHCP, interfaces and WiFi.

    Part 6: pfSense and DNS.

    Part 7: firewall rules, port forwarding/NAT, aliases and UPnP.

    Part 8: open NAT for Xbox One and PS4.

    So, 

     

    How do we get the physical machine to turn on as a failover when the pfSense VM fails?

     

    Also, is there a way to sync settings between the two? (I dunno... have both load from a network image or something?)
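    I don't know of a built-in Unraid way to do this, but as a crude sketch, a watchdog on another always-on box could ping the pfSense VM and send Wake-on-LAN to the physical backup when it stops answering. The IP, MAC, and the `etherwake` tool name are all placeholder assumptions (WoL sender names vary by distro), and for the settings-sync question, pfSense's own CARP/pfsync HA is the more proper mechanism:

```shell
# Crude failover watchdog sketch: wake the physical pfSense box via WoL
# when the pfSense VM stops answering pings. VM_IP and BACKUP_MAC are
# placeholders; etherwake is one common Wake-on-LAN sender.
VM_IP="${VM_IP:-192.168.1.1}"
BACKUP_MAC="${BACKUP_MAC:-aa:bb:cc:dd:ee:ff}"

vm_alive() {
  ping -c 3 -W 2 "$VM_IP" >/dev/null 2>&1
}

failover_check() {
  if vm_alive; then
    echo "VM router up; nothing to do"
  else
    echo "VM router down; waking physical backup"
    etherwake "$BACKUP_MAC" 2>/dev/null || true
  fi
}
```

    Run `failover_check` from cron every minute or so; the backup box would need WoL enabled in its BIOS and a pfSense config restored or periodically copied onto it.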

  12. On 1/3/2020 at 12:43 PM, scorcho99 said:

    Yeah, it doesn't seem like there is a lot of interest I guess. That's kind of why I set out to try it myself. I actually didn't think I was going to get it to work at all, much less transfer the setup to unraid. I'd given up on the idea since coffeelake support was initially not going to happen and then finally showed up in 5.1 kernel.

     

    Next step is to mod my BIOS to support increased aperture sizes. That's a big problem for anyone running this on consumer motherboards: the mediated GPUs take their slice from the GPU aperture pie, not the shared variable memory. While changing the aperture size is a supported option, most motherboards seem to just lock it to 256MB. This means I can only create 2 of the smallest type of virtual GPU at the moment.

    I stumbled in here wondering about slicing GPU compute for Unraid like they do for enterprise virtualized cards, and wondering if it's possible in Unraid.
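    For reference, mediated vGPUs like the Intel GVT-g setup quoted above are created through sysfs: each supported type under the GPU's `mdev_supported_types` directory has a `create` node that takes a UUID. A sketch, with the device root taken as a parameter and the type name `i915-GVT-g_V5_8` only as an example (both vary per system; on real hardware the root would be something like `/sys/bus/pci/devices/0000:00:02.0`):

```shell
# Sketch: create a mediated vGPU by writing a UUID into the mdev type's
# sysfs "create" node. The device root is a parameter so the helper can
# also be exercised against a test directory.
create_mdev() {
  local dev_root="$1" mdev_type="$2" uuid="$3"
  local create_node="$dev_root/mdev_supported_types/$mdev_type/create"
  if [ ! -w "$create_node" ]; then
    echo "mdev type not available: $mdev_type" >&2
    return 1
  fi
  echo "$uuid" > "$create_node"
  echo "created vGPU $uuid ($mdev_type)"
}
```

    On real hardware the new device then appears under `/sys/bus/mdev/devices/<uuid>` and can be handed to a VM; how far that gets you toward general compute slicing on Unraid, I can't say.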

  13. 4 hours ago, johnwhicker said:

    What I noticed on the SAS drives is that they're running hotter than SATA. I guess the 72K RPM does that for you.  You will need better airflow and ventilation if you go with SAS.  Ask me how I know :) 

     

    72K RPM (not 15k)?

  14. On 5/13/2020 at 12:35 PM, johnnie.black said:

    Maybe in the near future, at least there were some hints from Limetech that it's a possibility.

    I'd heard those whispers. Ironically (while I'm stoked about the new pooling features), I look forward to it getting done. I am REALLY hoping that once they finish the storage pools, they will focus on some server-to-server communication/load balancing for IOMMU passthroughs, etc. Essentially I want some clusters to balance entertainment, content production, and security across my LANs. 😄

  15. On 1/9/2016 at 2:23 AM, CyberSkulls said:

    Gonna have to follow this thread in hopes it gets more active. I wanted to do the same thing with plex a while back and harness the transcoding power of multiple boxes but received the same replies as the OP about not really possible. Would love to see unRAID have a feature to cluster multiple boxes as a processing node.

    Meeeeee TOO!!!

     

    It's a few years later now.  We just got multiple pools (mostly BTRFS) and I hear chatter of multi-UNRAID-Pools.

     

    There is LITERALLY nowhere to break EPIC ground at this point aside from #UnraidServerInterOperability / clusters / stacks / Etc.

     

    Once that happens, people can spread development between audio/music production on a server, VSTs on smaller purpose-based servers to support those, video production on another with load-balanced security surveillance, entertainment media over virtual machines (maybe as new Docker implementations for VMs)...

     

    Sky's the limit once we have unRAID servers locked in productive, intimate embrace.  LOL

  16. 8 minutes ago, johnnie.black said:

    Then you have a different problem, they are known to work with Unraid.

    The ones previously seen perhaps.

     

    I doubt that it is an entirely different issue regarding same-model NVMes. Giving such an absolute response without even knowing which models are affected is "specious" at best.

     

    No?