Posts posted by JonathanM

  1. 1 hour ago, ANJ_ said:

    I think there may have been one instance where my Cache drive wasn't readable randomly and I had to reformat it; that might have occurred after the backup. Is there something I should be careful about in that case?

    Should be fine. Did you use the same format type?

  2. 5 minutes ago, Uaeladen said:

    This is what I mean: as long as they are in the exact same order on the fresh install, Unraid should figure out that these drives don't need to be formatted, correct?

    Yes. The main Unraid array will want to rebuild parity, and assuming the drives haven't been mounted elsewhere you can check the box saying parity is already valid.

     

    Unraid should not alter drives (except for the parity slots) without you explicitly clicking the format button after affirming that yes, you want to do this.

     

    You will need to name the pools the way they were previously; pool names aren't derived from the disks the way shares are.

  3. 1 hour ago, R__ said:

    Thanks for chiming in. The time that would take seems very significant, I'd rather just have both machines running at the same time and transfer the data over (also sounds like a good time for some data pruning). Also, wouldn't my parity disk have to be 12TB if I wanted to swap one by one?

     

    Would my approach of setting the new one up with a trial, copying stuff over, and then transferring the license be problematic? My old disks would still have my data that way no?

    If you want to start over with your apps, reorganize, and copy over only what's needed, then your approach would be better.

     

    If you upgrade one at a time, then yes, the first drive to be upgraded would be the parity drive.

     

    Moving VM's and containers is a little trickier, but I suppose it's a good exercise to figure out if your backup routine is robust enough.

  4. 1 hour ago, dcuellar44 said:

    So if one of the two nvme cache drives fails my data is safe until I replace the drive, right? What it won't protect against is corruption in the data. Am I understanding correctly?

    In theory, yes. You just don't have the ability to go back and undo like you can with a real backup. In practice, it seems that when a device fails the RAID duplication doesn't always take over seamlessly, so in my opinion a backup is the better use of a second drive.

     

    YMMV, my opinion only, others disagree, etc, etc.

  5. 6 hours ago, dcuellar44 said:

    put it in raid1

    Remember, RAID is not a backup; it replicates all changes in real time, which means corruption, deletions, etc. are all faithfully replicated. A backup allows you to step back in time to the moment the backup was taken if something bad happens.

     

    Redundant RAID levels allow for device failure without downtime.

  6. Motherboard IOMMU groupings are controlled by the chipset. There are some options you can try to break up the groups, but they don't always work and can produce instability in rare cases.

     

    Play around with the various PCIe ACS Override options in the VM Manager settings. If you find a setting that seems to work, read up on it to check for signs that it's causing other issues before you run that way long term (a quick way to list the resulting groups is sketched below).

     

    If you can't get it working with those options, a different motherboard may be the only solution.
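 

    If you want to see exactly how the groups land after each ACS Override change, something along these lines run from the terminal will list every group and the devices in it (just a rough sketch, adapt as needed; Tools -> System Devices should show similar info in the GUI):

 

    # print each IOMMU group and the PCI devices inside it
    for g in /sys/kernel/iommu_groups/*; do
      echo "IOMMU group ${g##*/}:"
      for d in "$g"/devices/*; do
        lspci -nns "${d##*/}"
      done
    done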

  7. The only good thing reiserfs has going for it is its extreme resiliency for recovery. Change the file system type back to reiserfs, start the array in maintenance mode, then run the reiserfs check command:

     

    reiserfsck --check /dev/md2

     

    Note that there is a space between --check and /dev/md2

     

    Hopefully it will give further instructions. I'd capture that output and post it here before proceeding further (one way to capture it is sketched below).

     

    This command may take a long time to analyze the drive since it was formatted with another filesystem; let it finish the check.
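 

    If you want to capture that output for posting, one option (just a sketch, the log name and location are only examples) is to pipe it through tee so a copy also lands on the flash drive:

 

    reiserfsck --check /dev/md2 2>&1 | tee /boot/reiserfsck-md2.log

 

    You'll still need to answer Yes at reiserfsck's confirmation prompt, and since /boot is the flash drive the log will survive a reboot.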

  8. 10 hours ago, mytech said:

    all of them assigned to it

    The VM's will likely perform better with far fewer cores dedicated. Remove all except one pair, 7/15, and see how the VM feels. Add 6/14 and test again. Repeat, adding from the high numbers down, until the VM doesn't perform any better, then back off one pair. (See the note below on confirming which numbers are pairs on your CPU.)

     

    Always leave 0/8 unassigned, the host needs it.

     

    If you have multiple VM's running concurrently, you may need to test different combos, but always try fewer cores rather than more. The more resources you can let the host use, the better it can serve the VM with I/O and other emulated services.

     

    The same goes for RAM, even more so than CPU cores. RAM dedicated to the VM is lost to the host, so you never want to allocate more than is absolutely necessary. RAM available to the host will be used to cache I/O, which really helps the VM feel snappy.
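 

    To confirm which logical CPU numbers are actually hyperthread pairs on your particular chip (the 7/15 and 6/14 pairing above assumes an 8 core / 16 thread layout), something like this from the terminal should show it, assuming lscpu is available:

 

    lscpu -e=CPU,CORE,SOCKET

 

    Any two CPU numbers that share the same CORE number are a pair. The CPU pinning section of the VM settings should lay them out in pairs as well.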

  9. 18 hours ago, matt15k said:

    I read that with DDR5 using all 4 RAM slots can cause stability issues.

    The motherboard power distribution circuits play a role here, so some boards may handle it better than others, and marginal boards may be stable for a while but lose that stability as they age.

     

    Not DDR5, but I had a board a few years ago that initially ran fine with all 4 slots, but after a while it started crashing and failed memtest. All 4 sticks individually tested good, and all slots individually tested good, but as soon as more than 2 slots were populated it failed memtest.

  10. 15 hours ago, kimmer said:

    however. It should really go to the scrapyard due to the power demands.

    That is not a universal thing. Granted, it applies to most of the world, but there are areas where the electricity isn't wasted because the heat is needed anyway, or where renewable production exceeds local demand.

     

    Your general sentiment is valid, but there are cases where the benefits of continuing to use outdated hardware to the fullest outweigh the waste of discarding it for a newly manufactured product that took resources and energy to produce. The reduce, reuse, recycle triangle applies.

  11. I do NOT speak for Limetech (not Limewire, that's an old file sharing thing), but I have a question. Would you be OK with an online license verification option? The biggest reason I believe we are still locked in to the flash GUID for licenses is the cheap nature of a unique token that can be verified completely offline. Other options in that space add a minimum of $30 for the hardware, and it would have to be sourced and maintained by Limetech, which adds a bunch of cost and complexity to the licensing scheme, not to mention the extreme PITA when a dongle fails, is lost, or is broken.

     

    Currently, if the flash drive fails, it's a relatively low cost, user replaceable item, and the key replacement is handled by the automated system. Limetech even maintains a cloud backup option to make it easier still.

     

    I don't see a whole lot of good options for people who want a totally firewalled system that runs without contacting Limetech on startup.

     

    What did you have in mind?

     

    @SpencerJ, care to chime in here?

  12. 1 hour ago, mtftl said:

    I can spend weeks enabling single docker services and wait for a crash. But what can I do if I find one is breaking things?

     

    It shouldn't take weeks if a crash shows up within a couple of days.

    Enable half of your normally running containers. If it crashes, divide that half in half again. If it doesn't, disable that half and enable the other set. I recommend printing a list, noting the start time of each container and the crash times, and keeping track of which containers were running at each point (a quick way to log that is sketched below).

     

    It shouldn't take more than a few cycles to narrow it down, unless it's a combination of containers that only crashes when they interact, or you have hundreds of containers.

     

    Bonus is, you get to continue using critical containers.
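 

    To make the bookkeeping easier, you can dump the currently running set with a timestamp each time you change the mix, for example (the filename is just a suggestion, but writing to /boot means the list survives the crash):

 

    { date; docker ps --format '{{.Names}}'; echo; } >> /boot/container-test-log.txt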

  13. 50 minutes ago, JP said:

    2)  Connect something like only a light to one of the protected outlets on the UPS and trip the fuse.

    If you can manage to keep feeding your Unraid box uninterrupted power at the same time, then when the UPS goes into backup mode Unraid should start the shutdown process.

     

    If you connect something with an electrical load similar to the server's, maybe a small space heater, then you can see whether the Unraid shutdown beats the UPS running out of power.

     

    It's entirely possible Unraid's shutdown process is timing out before something can get stopped, forcing Unraid to shut down uncleanly. If you are running VM's, they may not be stopping properly, or they may take too much time. Personally, I run the apcupsd software inside my guest VM's in slave mode, so when the host detects a power outage the VM's immediately start their own shutdown process (the guest side of that config is sketched below).
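 

    For reference, the guest side of that setup is only a few lines in apcupsd.conf, pointed at the host's apcupsd network server, assuming NIS is turned on in the host's UPS settings (the IP here is made up, use your server's address; 3551 is the default NIS port):

 

    UPSCABLE ether
    UPSTYPE net
    DEVICE 192.168.1.10:3551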

  14. 3 minutes ago, DarkP said:

    I need to think it over, what config would you suggest?

    Your current layout of 2 spinners and one SSD is typical.

     

    If you are only using it for bulk storage, you don't need an SSD, but if you want to run containers and VM's, it's pretty much a necessity unless you are prepared for a severe drop in performance.

  15. 10 minutes ago, DarkP said:

    Current setup:

    - 2 HDD's - as array, one for parity, one for data

    - 1 ssd - pool - cache.

     

    Future setup:

    - 2 HDD's - as array, one for parity, one for data

    Docker containers and VM's will be extremely slow if you try to run them from the parity array. Is there no way for you to add an SSD of some flavor? Empty PCIe slot? M.2 slot?

  16. The easiest way would be to temporarily put all the old drives in the new system; they don't need to be in the bays as long as you can run SATA and power to them.

     

    How are your current drives assigned? Array, pool?

     

    How are you planning to assign the 2 permanent drives?
