Everything posted by BVD

  1. Do you have the Full Width plugin installed? Also, do you notice if this typically occurs with smaller windows (as opposed to full screen, or otherwise)? I've noticed that some pages get kinda wonky when attempting to view them in a window that's been resized smaller, or any time I'm browsing it from something that's only got something like an 800x600 resolution available to it. It's rare for me to do so, so I've just kept both plugins installed and keep it in mind any time I see something screwy like this.
  2. Hard drives fail - that's the gist of it. It's an expected occurrence whose frequency increases with the number of drives you have attached. Since we know hard drives die, it just makes sense to me to minimize the impact of addressing those failures when they do occur (taking everything down to address an inevitable eventuality doesn't seem in line with the idea of a NAS, at least to me anyway).
  3. The broader use case here is being able to replace a failed drive without taking down everything, instead minimizing the impact strictly to those components necessary to do so, imo. Anyone that's self-hosting, whether for business or friends-and-family use, loathes downtime, as it's a huge pain to try to schedule it at a time that's convenient for all. There are currently 4 family businesses as well as 7 friends that rely upon my servers since I've migrated them away from the Google platform, and even at that small number, there are always folks accessing share links, updating their grocery lists, etc. All that stuff is on the pools. For the home NAS user with no other individuals accessing their system for anything, I could see how it wouldn't really matter. But I feel like it's not uncommon for an unraid user to have their system set up in such a way that taking everything down is necessary. As far as reboots, that's a separate thing imo - it could also be addressed in the UI by allowing a samba restart button in the event of an edit to SMB-extra, allowing function level reset to be executed from the UI instead of the CLI for VFIO binds, and so on. Most of these things can be done without reboots on more modern hardware, it's just not yet available in the UI. To me, this is a big logical step towards making things smoother for the user.
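To illustrate that samba point - a restart button in the UI would only need to wrap something like the below. This is a sketch from my own box; rc.samba is the Slackware-style init script unraid ships, but treat the exact paths as assumptions. DRYRUN=1 (the default here) just prints what would run:

```shell
# Reload samba so smb-extra.conf edits take effect without a reboot.
DRYRUN=${DRYRUN:-1}
run() { if [ "$DRYRUN" = 1 ]; then echo "+ $*"; else "$@"; fi; }

reload_samba() {
    # Full restart via the Slackware-style init script...
    run /etc/rc.d/rc.samba restart
    # ...or, gentler: ask the running daemons to re-read their config
    # without dropping active sessions
    run smbcontrol all reload-config
}
reload_samba
```

With DRYRUN=0 on the actual server, the second form is usually all an smb-extra edit needs.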
  4. +1 from me as well. I'd like to try to help justify the efforts with a quick list of use cases:
Firewall/Router - As others have noted, many of us run pfSense, OPNsense, ipFire, VyOS, etc. Virtualizing the router makes sense when running an unraid server, given we're already running many applications whose function relies heavily upon networking, and given the horsepower necessary to properly manage/monitor that traffic (traffic shaping/policing, threat detection, deep packet inspection, and so on). When trying to make the most efficient use of resources, a separate physical machine for this isn't as cheap as buying an SBC like a Pi/ODROID/whatever - it can quickly add up to hundreds of dollars (not to mention the electrical efficiency lost by not having it in the same box).
Auth / Domain Controllers - For self-hosters and businesses alike, LDAP, DNS, TOTP, and others are often needed. I currently run mine in a VPS, as I can't have users lose the ability to authenticate simply because I need to replace or add a drive.
Home Automation - While many choose to use docker containers for this, others use Home Assistant OS. Having all your automation go down every time you have to take down the array is a significant annoyance. As home automation becomes more and more mainstream, I can see a time in the not-too-distant future where integrated door locks and similar home-access automations are considered 'normal' - and the impact of completely losing your home automation's functionality will grow accordingly.
Mail server - I doubt many of us are running our own mail servers, but I know at least a few are. I'm willing to bet that those who are, are also running unraid as a VM themselves under proxmox/vmware/(etc), because this is something you absolutely *can't* have go down, especially for something as simple as adding storage.
I'm sure there are others I'm missing, but these are the big ones.
Once ZFS is integrated, it'd be great to get some attention to this - I understand there are some complexities here, so I'll attempt to address some of them where I can with some spitballed ideas:
Libvirt lives on the array; how do we ensure its availability?
- Make the system share require a pool-only assignment (cache, but could be any one of a number of pools) in order to enable this feature ('Offline VM Access - checkbox here') - I think this is the easier method, or
- A dedicated 'offline' libvirt/XML storage location - this seems like it'd be more cumbersome as well as less efficient, given the need for dedicated devices.
The array acts as one single storage pool to all applications; how can we ensure the VM's resources will be available if we stop the array?
- Unassigned Devices - As @JonathanM noted, should limetech take over UD, it could be used to set this completely outside the active array, or
- Split the 'stop' function of the array - In conjunction with option 1 above, add a popup when selected stating that 'capacity available for VMs will be limited to the total capacity of the assigned cache pool' or something... but instead of a one-button 'it's all down now', create two separate stop functions: one for stopping the array (a notification pops up warning that all docker containers will be taken offline, similar to the current popup), and another for shutting down both the array and pools (popup again notes the impact).
Idk how useful any of these musings are, but it's been on my mind for a while so I figured what the heck lol. Anyway, BIG +1 for this feature request!
  5. @GuildDarts Apologies if this isn't really a suitable place for this, but thought I'd ask - As a feature request, what about the possibility of adding simple folder functionality to the User Scripts page as well (plugin from @Squid)? I was helping a friend with their unraid server last night, having him walk me through the steps he'd taken to recreate an issue, and in the course of this saw his user scripts page... It had something like 40+ scripts he'd accumulated, around half of which were manual activation only and used infrequently (though I confirmed he *does* actually still use them). It was a mess of scrolling lol. I mostly just use cron, so I'd not really experienced this, but I'd imagine there are others who'd similarly benefit from some organization ability in the user scripts space. The reason I came to ask here was that we've already got the great folder functionality for docker and VMs in the plugin, and thought to check if this kind of thing was in line with the spirit of the plugin's purpose; I could foresee having folders/buttons for each of the options (daily/weekly/monthly/etc), then selecting one of them to drop down the contents of each and display them as they would normally. If this doesn't really jive with the existing plugin's purpose/design (and I'd understand for sure, as these are static displays that don't really need 'monitoring', and that monitoring is one of the huge values of this plugin), no worries at all, and I'll hit up @Squid's plugin's support page. Thanks for taking the time!
  6. Also, I don't suppose you're using one of the new Fractal Torrent cases, are you? Even if not, if you're using a fan hub, it'd be one of the things in the chain I'd remove during testing to narrow things down 👍
  7. As my earlier post noted, my main server was due for an upgrade - it'll probably keep running for now so as not to overload the backup server, due to all the VMs I have running for work-related testing and such, but the replacement is finally becoming a reality: It's a Supermicro CSE-743-1200B-SQ - a 1200 watt platinum power supply with 8 bays built in, with a 5-bay CSE-M35TQB in place of the 3 5.25" bays, all designed to run at less than 27db, and able to be either run as a tower or rack mounted (it'll spend the next 3 months in tower form... seems getting rails for this thing requires first sending a carrier pigeon to Hermes, Hermes then tasks Zeus with forging them in the fires of the gods from unobtainium, who then ships them when he's done doing... well, Greek stuff). My first 700-series chassis is still doing work, still with its original X8SAX motherboard, and I see no reason to fix something that isn't broken! While having a bunch of drives is great, the idea here is to have two gaming VMs, plus run plex, nextcloud, homeassistant, frigate, and numerous others. All of that takes a ton of IO. Enter the motherboard: This motherboard is a friggin monster - but importantly, at least to me, its design syncs up perfectly with the chassis, so all the power monitoring, fan modulation, and LED/management functions can be controlled via built-in out-of-band management. The M12SWA is currently paired to a 3955WX; given how close we are to the next-gen Threadripper release, I'm going to wait that out for now, and then decide whether to upgrade to next gen's mid-range (whether it be a 5975WX, or whatever the case may be), or otherwise. For now, the VMs will be 4 core / 8 thread to match the CCDs, leaving the rest to docker. Down the line, they'll likely be either 8 cores each, or one 8 and one 4, depending on the need.
The lighter of the two is going to house an always-on emulation VM with a 1650S, which will play all our games on screens throughout the house (or wherever) via moonlight/parsec/whatever. It slots perfectly in the chassis: But cable management is going to be a meeessssss: That ketchup-and-mustard is hurting my friggin eyes. I'm going to have to wrap those with something. More to come on this one - the plan for now is to throw in 128GB of ECC 3200, 4 NVMe drives, an RTX 2070, the GTX 1650S, a quad 10Gb NIC (Chelsio, since this thing comes with the stupid Aquantia NIC which has no SR-IOV support), and a quad 1Gb NIC (since the Intel NIC they included ALSO doesn't support SR-IOV... ugh), leaving one slot for potentially adding either tinker-type toys or an external SAS HBA if I somehow eventually run out of room. There are custom boards out there that combine the X540 and i350 chipsets onto one board, but I may instead consolidate this down to a single X550 or one of those fancy X700-series Intel boards... We'll see.
  8. Before replacing more components, I'd start trying to narrow it down further - do you have any stats tracking set up that you could share (grafana etc)? Are you tracking your power usage at all so you might see potential for over current protection to trip? As far as narrowing it down, pull out everything but the ram and run memtest for a bit, if that all clears, pull down to the minimum ram (single channel / one dimm) and stress test the GPU, and so on.
  9. As for the 'awkward' part - customizations include:
- ZFS for both VM and docker storage
- SR-IOV for both the 10Gb/s and 1Gb/s networks
- Cron instead of the User Scripts GUI
- LVM instead of btrfs (I've only been able to do this in a VM, as it seems that while the LVM commands exist, their kernel code's been modified in unraid I guess?)
- Several others I'm not thinking of...
It probably would look like a total hackjob to anyone looking at it from the outside 😅
  10. I've been meaning to post this for months now, but for whatever reason never got around to it. My unraid setups are all what I'd refer to as sort of 'awkward' - while unraid is great 'out of the box' for many applications, I've always tinkered with it a bit to get it to where I need it to be. The chassis here is the venerable CS380b: It's about the only consumer chassis out there that supports both 8 hotswap drives and full ATX motherboards (well... 'supports' full ATX is one thing... but I'll get to that later). It feels like there just aren't many options in the consumer space anymore for home servers. It used to be that you could go to any manufacturer and they'd have any number of decent cases for such a situation - they may not've had external hotswap bays regularly, but they'd have tons of 5.25" bays you could use to make it suit whatever purpose you wanted. These days, options are so much more limited. While it's a decent chassis, as others have noted, it needs some 'help' to make it better suited to its purpose. The design is pretty abysmal on its own for airflow; if you leave it as it is, the middle 4 drives get little airflow. Not only that, but the 'coolest' drive in the stack is the bottom one, with the top drive or two getting a tiny bit of air, and the rest just left to choke. However, if you fashion yourself a few ducts, things work out pretty well - I cut a few strips of plastic to length, used a heat gun to form them into shape, and finally installed them. Installation was a pain - I had to remove most of the cage in order to knock down some stupidly-added metal tabs at the bottom which were making things difficult. While I'm still doing some testing on this, the temperatures were far and away better with the ducts in place...
But sustained writes, especially during reconstruct-writes or parity syncs, were warmer than I'd have liked with the included fans - it seemed like there just wasn't even pressure being applied with them, and they don't seem best suited to pressure so much as volume. Here's what it looked like during some of the testing: With the thought being that this was a pressure problem, I went ahead and replaced all the fans with more suitable Noctuas: I decided to go with the NF-A12x25 over the pressure-specific variants more for reusability than anything else. There's very little room for air to move anywhere other than where the fans are pointing it with these, and their performance is excellent. While I'd likely see better performance with a fan tuned specifically for pressure alone, these cool far better than the stock fans, while also still being useful for whatever application they might be needed for down the line when this server inevitably gets swapped out: I swapped the rear exhaust fan for another Noctua I had to spare (a black Chromax) so I could control everything with PWM. Figured if I was going to swap out two of them, I may as well do the third as well. More on the hotswap bays - options for this outside of commercial gear are so sorely lacking. While it's great that this is available at all, I'm fairly disappointed in the drive mounting mechanisms here. The little plastic tabs on the drive carriers that are meant to hold the drive in place against the backplane are cheap feeling, and definitely won't stand up to repeated swapping of drives. I've already had to use a heat gun on a couple of them to form them back into a shape that allows them to *click* back into place like they're supposed to when installing a drive.
The backplane itself looks like it was likely made by hand - there's enough of the capacitors sticking out that they're easy to bump when working in the system, and I was always afraid I was going to pop one off when trying to do some cable management. They bend back and forth, and I feel like they just wouldn't stand up to much if any abuse. However, they do seem to be decent components themselves, so as long as you're super careful around them and don't accidentally damage them (which seems like it'd be obscenely easy to do), it should be fine. Silverstone offers no way to purchase replacement parts (backplane, drive carrier) outside of emailing their support team and hoping they have some way of helping you. Makes me feel a little uneasy... But like I said, options are limited. On to the main event: The motherboard is an X11SPM-F - it has dual SR-IOV capable ports, albeit at only 1Gb/s, but using the X722 chipset, which means that while communication outside the server is limited to a 1Gb rate, VM-to-VM communications are switched at 10Gb/s fully in hardware - great for virtualization workloads. In addition to that, it has support for 12 direct-attached sata drives, pretty fantastic for a micro ATX board. I'd previously tried using an X10SRL-F board in the system, as that was what was initially planned... but found that putting a full ATX board in this box just wasn't going to work for me here. For one, the gymnastics you have to do in order to even get the board to sit on its standoffs are nearly Olympian; the only way to ease that would be to remove the rear fan as well as the CPU heatsink, and even then, you risk busting off one of the capacitors on the (seemingly cheap...) backplane. Once it was installed, there wasn't really a clean way to run power, and nearly nowhere to route power and fan cables cleanly. When you then start trying to add all your sata connections as well as tossing in your PCIe cards, well...
I was glad to find a mATX board that'd fit the bill! The one problem with mATX is that you have less flexibility in the number of slots available for add-in cards. At one point, I'd thought about trying to make this my main server. The main server needs a GPU for various workloads, not the least of which is gaming: The problem here is that I needed 4 NVMe drives as well... With the smallest GPU I had available, a dual-fan 1650S, there's just no way I could find to make it fit along with the Asus Hyper M.2 card, even with right-angle SFF-8087 connectors; there was about 2mm of difference between it working... and not. The bottom card in the above picture is a 10Gb Intel NIC - these things do need decent airflow, and since this chassis is in the same room I typically work in, I also needed it quiet. I tacked a tiny 40mm Noctua onto it and have its PWM cycle tied to that of the CPU; if the CPU is crunching hard, odds are so is the NIC, and so far it's worked out well. The final configuration:
Chassis - Silverstone CS380b
PSU - Seasonic 550w platinum
MB - Supermicro X11SPM-F
CPU - Xeon Scalable Gold 6139 - 18 core / 36 thread, 3.7GHz boost
Slot 1 - Intel 1.2TB DC S3520 NVMe
Slot 2 - Asus Hyper M.2 board - 4x WD sn750p (power consumption vs performance makes gen3 NVMe great for VMs!)
Slot 3 - Intel X540
SATA drives - 8x spinners at up to 16TB (used to be the sweet spot, pre-chia... ugh), 1x Micron 5100 Pro 4TB, 1x Samsung 860 EVO 1TB
Here's the desktop view as it was a couple weeks ago (just prior to expansion; I was obviously running almost completely out of space lol) - power consumption idles at 97w, but this includes the router and cable modem, so it's likely closer to 80w or so. The max I've seen with everything going balls to the wall is around 365w, but that's with all 18 cores loaded, the 10Gb network saturated, and a parity sync running: I've since started a new build for the main server... Here's a teaser:
  11. The behavior may well have changed since nvidia's driver update for code 43 - it also varies depending on the method you use for passthrough (xen passthrough, vs blocking the vendor IDs, etc), at least it did shortly after he made the video when I did my own testing. It has less to do with unraid itself than it does KVM as a whole; I have the sneaking suspicion that AMD behaves completely in its own way as well.
  12. I don't want to get into the whole zealotry that surrounds the 'ECC vs non-ECC' debate (it really is nuts how strong some people's beliefs seem to be on the subject), but I'll try to help explain just a bit in hopes it helps: The whole idea behind doing any kind of RAID is to protect from data loss due to catastrophic hw failure; you can think of ECC as a way to protect from failures of the 'non-catastrophic' variety, where something 'went wrong', but not 'so wrong the drive failed'. It used to be that virtually all CPUs supported ECC; it was expected that anyone might want to ensure their data was 'correct', even us lowly home users. It wasn't until intel removed support from their consumer products as of the Core i series that anyone even thought of trying to 'charge someone to use a previously free feature'. No, you don't "need" ECC, but it's definitely helpful. If you've ever gone to open up an old jpg you'd saved off 15 years ago and found "wait, why is a quarter of the image this green blobby mess?", then you've potentially encountered some of the results. Assuming it's not straight-up 'bit-rot' (I hate that term), sometime during the many transfers from one machine to another, maybe something got flipped in memory, and it thought it wrote something to disk when in fact it wrote something else. It's happened to me - I save everything, and still have files from the 90's saved off. For data that long-lived, the odds of encountering something out of sorts go up significantly when not using ECC and checksumming (I wasn't at the time lol). If you want a brief synopsis of a recent take, arstechnica commented on Linus Torvalds' rant on the same subject here
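You don't need ZFS to at least *detect* that kind of silent corruption, either - a plain checksum manifest over the archive would've flagged that jpg the first time its bits changed. A minimal sketch (the manifest filename is just my convention):

```shell
# Build a sha256 manifest over an archive directory, then re-verify it later;
# any file whose bits have silently changed will fail the re-check.
make_manifest() {
    dir="$1"
    ( cd "$dir" && find . -type f ! -name MANIFEST.sha256 -exec sha256sum {} + \
        > MANIFEST.sha256 )
}
check_manifest() {
    dir="$1"
    ( cd "$dir" && sha256sum -c --quiet MANIFEST.sha256 )
}
```

Run make_manifest once after archiving, then check_manifest whenever the data's been migrated (or yearly via cron); a non-zero exit means something changed underneath you.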
  13. It's well on its way to coming out of alpha, possibly even in the next major release - the current tracker for it can be found here. There's a big push to try to get it ready for the next summit in November, but I'm not sure it'll get there... We'll see though, it's exciting! Even when it does though, there's a good many reasons not to use it; as stripes aren't recalculated to the new stripe width, there's some capacity-loss overhead, among other things. I'd be fine with Limetech simply not supporting modification of the zfs pool after initial creation, or if they want to enable it via the UI, just putting a great big disclaimer there that says something like "At your own risk (etc etc)". The biggest benefit to having zfs support built in, for me at least, is just the fact that it could then more easily utilize zfs pools within the rest of the ecosystem - there's enough complexity, thanks to all the various options available within the filesystem, that I could totally understand if the initial implementation solely meant that the system's zfs information was better represented in the UI. Even if it meant something like "if you want a zfs pool, you have to create it from the command line, then select 'import pool' to do so". Again, given the scope of work, it'd also be understandable to implement UI features in phases, something like:
Phase 1 - Basic 'allow creation of zfs pool' in the UI (a single fileset created with it and mounted as a cache pool to house user data; additional filesets only able to be created via CLI - they just show up as folders inside, zvols the same, and would still require manual formatting from the CLI as well). This is enough complexity for one release IMO.
Phase 2 - Creation of additional filesets via the UI - The UI design for this is pretty big, so having it on its own release phase would make sense to me. It means we have to have a way to represent the various filesets all in one place (maybe represent it as a dir tree?), then be able to select a fileset and see its info properly (zfs get pool/fileset).
Phase 3 - UI button for on-demand snapshot creation, plus the ability to list/restore/delete snapshots for a given fileset - This is probably the biggest one yet as far as complexity... For that reason, I'd say forego allowing one to browse snapshot contents in the UI until a later time. (and so on)
Anyway, how it's broken down doesn't really matter imo, just that doing so makes the overall task far less monumental an undertaking. For a small team, even one as dedicated as Limetech, trying to do it all in one whack would be... rough lol. If doing this as part of a major release (e.g. 7.0), maybe including 2 of the phases would make sense, I guess? I'm super excited for this; I've daydreamed and brainstormed how it might someday be done (could you tell? 😅), to the point I started drawing it out on a notepad a few times lol. Happy days!!
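For anyone curious what that CLI-first flow looks like today - pool/dataset names below are made up, and this obviously needs real devices and root, so treat it purely as illustration of what each phase would wrap:

```shell
# Phase-1 style: create the pool and a fileset from the CLI,
# leaving the UI to simply import/display it.
zpool create -o ashift=12 tank mirror /dev/sdb /dev/sdc
zfs create -o compression=lz4 tank/appdata

# Phase-2 style: the properties a per-fileset view would surface
zfs get used,available,compression tank/appdata

# Phase-3 style: on-demand snapshot, then list what's restorable
zfs snapshot "tank/appdata@manual-$(date +%F)"
zfs list -t snapshot -r tank/appdata
```

Everything the phased UI would do already maps onto one of these commands, which is why breaking it into releases seems so tractable to me.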
  14. This has already been answered previously a couple of times, with opinions from myself, limetech, and others as to what the complexities of this are and why. I don't know if they're just being missed, misunderstood, or what's going on there, but I'd have a read back through them. What you're asking for is not simple by any stretch, and multiple multi-billion dollar corporations have been founded to solve the problems surrounding this type of request - again, imo, it's just not reasonable to ask a small team to create such a thing in addition to their main product.
  15. There's a reason companies like Veeam, Cohesity, Rubrik, and others have huge market valuations - it's simply because backing up everything, online and with one tool, is a hugely complicated task. The problem here isn't that nobody wants this. It's that most of those that do also understand that the ask isn't a reasonable one, and have implemented their own backup strategies already with that understanding in mind, imo.
  16. I'd be surprised if macOS even has a VF driver for anything that doesn't come from Apple - my expectation would be that you'd have to do some manual driver hacking to make it work... What cards does Apple support/sell for 10Gb or higher connectivity? Do they provide driver/firmware downloads?
  17. I'd foresee this being useful for memory-constrained environments, especially when there's *fast* storage available. Say you have a system that's limited to 16GB of ram, whether that's because you're limited to dual channel memory on your platform, or because finding memory for your system just isn't cheap/easy (unbuffered ECC DIMMs can be difficult to source sometimes). You still want to use RAM transcoding, prefer to keep larger chunks in RAM with your preferred downloader to save writes to your NVMe device, etc - but if you find yourself doing a couple transcodes at the same time you have a large-ish download coming through, you're bound to run out of RAM. Having swap saves you in those situations - these days, and especially with PCIe gen 4, storage is approaching RAM-like speeds (far closer than ever before, at least). If you run a system that would only ever need 8-16GB of RAM 90-95% of the time, it doesn't make much sense to double your memory when you can simply utilize NVMe storage you've already got to cover the gap.
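For reference, a sketch of what adding that swap looks like - the path and 16G size are assumptions for an NVMe pool mount, and the real thing needs root (the demo call below uses a tiny /tmp file so it's harmless):

```shell
# Create and enable a swap file on an existing fast mount.
add_swap() {
    file="$1"; size="$2"
    fallocate -l "$size" "$file"      # preallocate the file in one shot
    chmod 600 "$file"                 # swapon refuses world-readable swap files
    mkswap "$file" > /dev/null        # write the swap signature
    swapon "$file" 2>/dev/null || echo "swapon needs root - skipped"
}
add_swap /tmp/swapfile-demo 1M        # real use: add_swap /mnt/nvme/swapfile 16G
```

Note that on a btrfs cache pool the file would also need NOCOW set before use; on xfs the above is enough.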
  18. Did the disks come with the shelf? If so, can you confirm that they've been reformatted since you picked it up? Just going through your diags, it looks like the config wasn't wiped and DACstore's still on the drives - I'd dd them out. As to these shelves themselves, how familiar are you with lsiutil? IBM at one point had some pretty hardcore vendor locks added into their EEPROM early in the first-gen 6Gb release cycle, so there's the possibility you'll need to update the expander firmware afterwards and wipe that crap out. I'd just dd the drives first though and re-check - it could save you a ton of time. Just an FYI on the shelves: the 'non-raid' version of what you're referring to as the 'controller' in that shelf is actually referred to as an ESM (environmental service module) - it doesn't actually control anything other than PSU fan speeds, acting more like an expander backplane as far as I/O and SES/i2c communications go. Just wanted to mention the terms explicitly, as should you end up looking to purchase another shelf later down the line, you'll want to ensure it's not using a controller as well, but instead an ESM. The marketing terms IBM used were "DS" for the controller (RAID) version of a chassis, and "EXP" for the JBOD version of the same shelf (expansion shelf)... So if you were, for instance, looking for documentation on a SFF 1st-gen 6Gb/s expansion shelf, you'd search for something like 'IBM EXP3500' to get the most relevant results (I'm guessing your shelf's an EXP3524, based on the 15k RPM 600GB drives in use).
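To be clear on the 'dd them out' part - as I understand it, the controller metadata lives at the tail end of the drive as well, so zeroing just the front isn't always enough. A sketch of the idea (dry-run by default, so it only prints the commands; device name and size are examples):

```shell
# Print (or with DRYRUN=0, actually run) dd commands that zero the first
# and last 100MiB of a drive, clearing leftover DACstore/config metadata.
DRYRUN=${DRYRUN:-1}
wipe_ends() {
    dev="$1"; size_mib="$2"    # drive size in MiB: blockdev --getsize64 / 1048576
    for cmd in \
        "dd if=/dev/zero of=$dev bs=1M count=100 conv=fsync" \
        "dd if=/dev/zero of=$dev bs=1M seek=$(( size_mib - 100 )) conv=fsync"
    do
        if [ "$DRYRUN" = 1 ]; then echo "+ $cmd"; else $cmd; fi
    done
}
wipe_ends /dev/sdx 572325    # ~600GB 15k drive; double-check the device first!
```

Obviously destructive with DRYRUN=0, so triple-check the device name against your diags before running it for real.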
  19. As for transcripts, YouTube's automated closed captioning will help. I feel like there's gotta be a way to extract the closed captions once YT creates them...? On the power-saving front, I have some real numbers, but they don't directly relate to what's requested - unraid already has the power-saving features I need built in, thanks to the unraid array. I went from having a 28-disk zfs pool (24 SAS HDDs, 2 SSDs mirrored for SLOG, and 2 more for L2ARC) as my first unraid storage (zfs is what I knew, so I stuck solely with it at first) directly to unraid's array. Since zfs doesn't (or didn't, in my experience) really appreciate drives just "not" being able to immediately respond to I/O requests, all drives spin all the time. Let's call that an average of 7w each (it was more like 5.5-6w at full spin with 0 I/O and 9w during operations, so on average...) - 24 drives @ 7 watts each = 168w, 24 hours a day, 7 days a week, comes out to about $175 a year here. Let's give the drives a 5-year life expectancy, as that's about the longest warranty one can expect from a HDD - call it $875 as a lifetime total. That's without accounting for the SSDs' power usage, wear on those SSDs, the extra computational power needed for zfs (and RAM), etc. I then went to an unraid array with the same media. I only needed 18 drives (2 parity, 16 data, as opposed to 4 z2 vdevs of 6 drives each - I went with mirrored zfs for the VMs and containers, but again, ignoring those for now). On average, I have 2 active streams at any given time between plex, nas usage, etc., so we'll just say those are always two different drives and never to the same disk, coming out to 14w (or when writing, 28w thanks to both parity drives). At that rate, that's $14.75 for read-only, $29.50 for write-only - $74-148 after 5 years, a savings of about $700, and that's not even counting the fact that 6 fewer HDDs were used. Or that my RAM needs for the host went completely through the floor.
My real-world numbers ended up a bit different, as I eventually migrated from 3TB drives to 16TB and 6TB drives for media. The VMs and docker containers still use the same SSDs in a zfs raid 10. I still use ZFS for my backup server as well; I just don't really need that level of protection and performance on the primary array for multimedia files. However, I did run the same hardware both ways for a month each - all ZFS one month, then the unraid array for media + zfs for the rest. The UPS showed the avg power draw over the ZFS month to be ~238w. The next month was ~89w.
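Spelling that arithmetic out, since it's easy to sanity-check - the ~$0.12/kWh rate is my assumption to land near the quoted $175/yr:

```shell
# Annual/lifetime cost of drives that never spin down:
# drives x watts x 8760 hours/yr, converted to kWh, times the electric rate.
power_cost() {
    awk -v d="$1" -v w="$2" -v r="$3" 'BEGIN {
        kwh = d * w * 8760 / 1000
        printf "%.0f kWh/yr, ~$%.0f/yr, ~$%.0f over 5 years\n", kwh, kwh*r, kwh*r*5
    }'
}
power_cost 24 7 0.12   # 24 always-spinning zfs drives at ~7w apiece
power_cost 2 7 0.12    # unraid array: ~2 active drives on average (read-only case)
```

The first call lands right around the $175/yr and $875-lifetime figures above; the second matches the ~$14.75/yr read-only estimate.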
  20. Anything 6TB and less, even some 8TB units, are still close to prior prices - the miners prefer 12TB+ for efficiency's sake.
  21. The XML is in libvirt.img - you can browse the contents with your preferred archival/compression tool to back them up individually (handy, as it enables you to script backing up the XML specific to a given VM at the same time you're running your OS's native backup application), or en masse by simply copying the entire img file out.
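If you'd rather script it per-VM than browse the img by hand, virsh can dump the same XML. A sketch - the output path is just an example, and the VIRSH variable is only overridable so the loop can be exercised without libvirt present:

```shell
# Dump each defined VM's XML to a dated file - easy to pair with your
# normal flash/appdata backup script.
VIRSH=${VIRSH:-virsh}
dump_vm_xml() {
    out="$1"
    mkdir -p "$out"
    for vm in $($VIRSH list --all --name); do
        [ -n "$vm" ] && $VIRSH dumpxml "$vm" > "$out/$vm-$(date +%F).xml"
    done
}
# e.g.: dump_vm_xml /boot/vm-xml-backups
```

Restoring is then just `virsh define <file>`, no img browsing required.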
  22. 16TB Exos drives can be had for $330 on Newegg - they're not quiet, but the price/GB can't be beat, especially not with a 5-year warranty.
  23. Finally back home! So this is where I think you got tripped up (just referencing the guide for any others that might come across this): instead of putting this in a user script, you'd added it to the go file. The intent with the script is to allow you to bind any port at any time (anything, really, that has function-level reset capabilities, but in this case a network port), as long as that port isn't currently in an active (up) state. This is necessary in our case, as the virtual functions don't exist until after PCIe init time - they're only created once numvfs is set, which then 'partitions' (sorta, but that's the easiest way to think of it anyway) the physical function into whatever number of virtual functions you decide upon. You shouldn't have to bind the physical port at all for method one - you're partitioning at the PCI layer, prior to the driver loading. This is where I was saying I'd thought perhaps you'd intermingled the two methods a bit. So starting from scratch, your process should be something like:
* Add the 'echo' lines desired to the go file, then reboot
* Create a user script that calls the bind script for each of the VFs you want to pass on - run it once now to try it out, then set it to run on first array start only for the future
Lemme know if this doesn't get you over the finish line and we can see about taking a look together further down the road.
_____
As an alternative, method two would be an option here as well:
* Run the bind script for the port, or bind via the UI and reboot (this unloads the ixgbe driver, freeing up the physical function so it can be partitioned into virtual functions - something that isn't necessary in method one due to initiating the VF creation prior to ixgbe being bound to the PF)
* Type the 'echo' line into the terminal for the ethX device
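For concreteness, the go-file side of method one is just a sysfs write per physical port, done before any driver binds or VMs get involved - the interface names and VF counts here are examples from my own setup, so adjust them to match yours:

```shell
# /boot/config/go additions - carve 4 virtual functions out of each
# physical 10Gb port at boot, before anything has bound or brought them up.
echo 4 > /sys/class/net/eth0/device/sriov_numvfs
echo 4 > /sys/class/net/eth1/device/sriov_numvfs
```

After a reboot, the VFs show up as their own PCI devices, and the user-script bind step takes it from there.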
  24. I'd re-read the instructions - you have 4 lines for your VFs, doing two separate things, where you should only have one line per physical port. Sorry, I'm mobile right now or I'd type out more, but it looks like you might've mixed both methods for some reason.
  25. You have to bind the 520-series prior to using VFs, as it's partitioning the entire device - take a look at the chipset-specific section, which goes over that a bit more, but the guide covers it in pretty decent detail; just be sure to follow it through and you'll be set 👍