BVD

  1. This has already been answered a couple of times previously, with opinions from myself, limetech, and others as to what the complexities of this are and why. I don't know if they're just being missed, misunderstood, or what's going on there, but I'd have a read back through them. What you're asking for is not simple by any stretch, and multiple multi-billion dollar corporations have been founded to solve the problem surrounding this type of request - again, imo, it's just not reasonable to ask a small team to create such a thing in addition to their main product.
  2. There's a reason companies like Veeam, Cohesity, Rubrik, and others have huge market valuations - it's simply because backing up everything online and with one tool is a hugely complicated task. The problem here isn't that nobody wants this. It's that most of those who do also understand that the ask isn't a reasonable one, and have implemented their own backup strategies already with that understanding in mind, imo.
  3. I'd be surprised if macOS even had a VF driver for anything that doesn't come from Apple - my expectation would be that you'd have to do some manual driver hacking to make it work... what cards does Apple support/sell for 10Gb or higher connectivity? Do they provide driver/firmware downloads?
  4. I'd foresee this being useful for memory-constrained environments, especially when there's *fast* storage available. Say you have a system that's limited to 16GB of RAM, whether that's because you're limited to dual channel memory on your platform, or because finding memory for your system just isn't cheap/easy (unbuffered ECC DIMMs can be difficult sometimes). You still want to use RAM transcoding, prefer to keep larger chunks in RAM with your preferred downloader to save writes to your NVMe device, etc, but if you find that you're doing a couple transcodes at the same time you ha
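To put something concrete behind the "couple transcodes at once on 16GB" scenario: a size-capped tmpfs is one way to let transcodes live in RAM without letting them eat the whole box. A minimal sketch - the /tmp/transcode mount point and the 4g cap are just placeholder values, point your transcoder's temp directory wherever you actually mount it:

```
# fstab entry: RAM-backed scratch space for transcodes, hard-capped at 4 GiB
tmpfs  /tmp/transcode  tmpfs  size=4g,mode=1777  0  0
```

Anything written past the cap fails with ENOSPC instead of pushing the system into OOM territory, which is usually the failure mode you want for a scratch area.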
  5. Did the disks come with the shelf? If so, can you confirm that they've been reformatted since you picked it up? Just going through your diags, it looks like the config wasn't wiped and DACstore's still on the drives - I'd dd them out. As to these shelves themselves, how familiar are you with lsiutil? IBM at one point had some pretty hardcore vendor locks added into their EEPROM early on in the first gen 6Gb release cycle, so there's the possibility you'll need to update the expander firmware afterwards and wipe that crap out. I'd just dd the drives first though and re-check, could save
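For anyone wanting to script the dd wipe: the sketch below zeroes the first and last 10 MiB of a device, which is generally enough to clear stale controller metadata living at either end of the disk. Treat it as a sketch under assumptions - the function name, the 10 MiB figure, and the device argument are all mine, and dd against the wrong device is unrecoverable, so triple-check the target first.

```shell
# Zero both ends of a device to clear leftover controller metadata.
# WARNING: destructive - verify "$1" is really the disk you mean to wipe!
wipe_ends() {
  dev="$1"
  # blockdev covers real block devices; stat covers regular files (testing)
  size=$(blockdev --getsize64 "$dev" 2>/dev/null || stat -c%s "$dev")
  # first 10 MiB
  dd if=/dev/zero of="$dev" bs=1M count=10 conv=notrunc 2>/dev/null
  # last 10 MiB (seek counts in 1 MiB blocks to match bs=1M)
  dd if=/dev/zero of="$dev" bs=1M count=10 conv=notrunc \
     seek=$(( size / 1048576 - 10 )) 2>/dev/null
}
```

After wiping both ends, pull fresh diags and see whether the DACstore entries are gone before bothering with lsiutil.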
  6. As for transcripts, YouTube's automated closed captioning will help. I feel like there's gotta be a way to extract the closed captions once YT creates them...? On the power saving front, I have some real numbers, but they don't directly relate to what's requested - Unraid already has the power saving features I need built in thanks to the Unraid array. I went from having a 28 disk ZFS pool (24 SAS HDDs, 2 SSDs mirrored for SLOG and 2 more for L2ARC) as my first Unraid storage (ZFS is what I knew, so I stuck solely with it at first) directly to Unraid's array. Since ZFS doe
  7. (in Drives) Anything 6TB and less, even some 8TB units, are still close to prior prices - the miners prefer 12TB+ for efficiency's sake
  8. The XML is in libvirt.img; you can browse the contents with your preferred archival/compression tool to back them up individually (handy, as it enables you to script backing up the XML specific to a given VM at the same time you're running your OS's native backup application), or en masse by simply copying the entire img file out.
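If the VM service is running, there's also a scriptable route that skips digging into the img at all: `virsh dumpxml`, which is standard libvirt tooling. A sketch - the function name and backup directory argument are my placeholders, and the simple for-loop assumes VM names without spaces:

```shell
# Dump the XML of every defined VM into a backup directory, one file per VM
backup_vm_xml() {
  backup_dir="$1"
  mkdir -p "$backup_dir"
  for vm in $(virsh list --all --name); do
    [ -n "$vm" ] || continue            # virsh emits a trailing blank line
    virsh dumpxml "$vm" > "$backup_dir/$vm.xml"
  done
}
```

Drop that into a scheduled user script and each VM's definition gets captured alongside whatever else your backup run grabs.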
  9. (in Drives) 16TB Exos can be had for 330 bucks on Newegg - they're not quiet, but the price/GB can't be beat, especially not with a 5 year warranty.
  10. Finally back home! So this is where I think you got tripped up (just referencing the guide for any others that might come across this): instead of putting this in a user script, you added it to the go file - the intent with the script is to allow you to bind (anything, really, that has function level reset capabilities, but in this case - ) any port at any time, as long as that port isn't currently in an active (up) state. This is necessary in our case, as the virtual functions don't exist until after PCIe init time - they're only created once the numvfs is called, whic
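The bind itself boils down to writing the VF count into sysfs once the port is down. A hedged sketch - the function name and arguments are mine, while the operstate and sriov_numvfs paths are the standard kernel network/SR-IOV sysfs interface:

```shell
# Create VFs on a NIC via sysfs, refusing to touch a port that's up.
# Third arg overrides the sysfs root (handy for testing against a fake tree).
create_vfs() {
  nic="$1"; count="$2"; sys="${3:-/sys}"
  if [ "$(cat "$sys/class/net/$nic/operstate")" = "up" ]; then
    echo "refusing: $nic is currently up" >&2
    return 1
  fi
  echo "$count" > "$sys/class/net/$nic/device/sriov_numvfs"
}
```

Run from a user script at array start, this fires after PCIe init has finished - which is exactly why the go file was the wrong place for it.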
  11. I'd re-read the instructions - you have 4 lines for your VFs, doing two separate things, where you should only have one line per physical port. Sorry, I'm mobile right now or I'd type out more, but it looks like you might've mixed both methods for some reason
  12. You have to bind the 520 series prior to using VFs as it's partitioning the entire device - take a look at the chipset-specific section which goes over that a bit more, but the guide covers it in pretty decent detail, just be sure to follow it through and you'll be set 👍
  13. You can do that with as little as a copy/paste... I guess maybe that's why I'm not following. In any case, I have enough data that I'd have to take the system down for multiple days if I were to copy everything out as a backup at this point, even over a 10Gb network. I'd rather use tools that were designed to back up explicitly what I need backed up, and do so without taking down applications people are relying on for services I host on Unraid.
  14. We understand it... it just isn't something that's reasonable to ask of a hypervisor - I tried to explain the reason for this in the post above, that many applications require their own method of backing up in order to ensure they can be restored in a consistent manner. If you look at any other hypervisor, you'll see they operate similarly. Not because it's just never been asked of them, but because what's being asked of them is unreasonable and unrealistic.
  15. There's a pretty big reason that there can't (well... "won't" would probably be more accurate) be a single all-inclusive backup of everything in Unraid - different applications/files have differing requirements for ensuring file consistency. A good starting point for your research would be looking for information regarding application-consistent vs crash-consistent snapshots/backups. In short, a crash-consistent backup will just ensure that the filesystem of the OS itself is intact. An application-consistent backup is one from which you can restore the application's specific data i
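A tiny concrete illustration of the split, using SQLite since it ships nearly everywhere: a plain cp of a live database file is at best crash-consistent (you can catch it mid-write and land on torn pages), while the sqlite3 CLI's .backup command goes through the engine's own backup API and produces an application-consistent copy. The wrapper function below is just my sketch of that second path:

```shell
# Application-consistent SQLite backup via the engine's own backup API.
# A plain `cp src dst` of the live file would only be crash-consistent.
backup_db() {
  src="$1"; dst="$2"
  sqlite3 "$src" ".backup '$dst'"
}
```

The same distinction repeats at every layer of the stack, which is why databases, mail servers, and the like each ship their own backup tooling rather than relying on a hypervisor-level copy.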