
JonathanM

Moderators
  • Posts

    16,564
  • Joined

  • Last visited

  • Days Won

    65

Posts posted by JonathanM

  1. eSATA has strict cable length limits, but can work if each drive has its own eSATA connection. Most eSATA enclosures have a port multiplier instead of individual links, so any operation involving more than one of the disks in the enclosure is going to be severely bandwidth limited. Parity checks would take forever.

     

    USB can be problematic, as Unraid assumes a perfect drive connection and a normal USB reset will cause the drive to be dropped from the array. Also, many USB drive interfaces don't pass SMART data, some even mangle the hardware addressing, causing issues if you later want to move the drive to another connection.

     

    SAS is the only one of these interfaces actually designed for the permanently attached server drives that Unraid expects.

     

    There are other issues that I haven't covered with USB.

  2. 13 hours ago, Karl Strachan said:

    During boot that 1 core is at 100%, and boot takes a long time because, well, only one core is used.

    You missed my point. If you actually go through with my proposed experiment, you will probably see that boot times are significantly faster with fewer resources bound exclusively to the VM.

     

    Any resources bound to the VM are unavailable to the host, meaning the virtual motherboard that the VM is running on is crippled. Slow virtual motherboard = slow I/O = slow boot times, as well as sub-optimal load times as the VM fetches data from the vdisk.

     

    Keep in mind that the CPU usage graph in Unraid includes IOWait, so what you are seeing is the slowness of the data being read from the vdisk into RAM as the guest boots. Give the host more resources so it can feed the VM faster.

  3. Reduce the core assignment to a single core, starting with the highest-numbered core, and reduce the RAM max to 4GB. Test. Add another core, test. When the VM stops feeling faster with each addition, keep that number of cores and increase RAM by 2GB; test and repeat. VMs typically perform best with much less assigned than seems intuitive.

  4. 1 minute ago, nrgbistro said:

    are there any new config cases where disk data is overwritten, besides the previously mentioned case where I have a parity drive?

    Yes, if you define a pool with the "wrong" format and add a disk. If you leave the format type set to auto, it should just work.

  5. 1 minute ago, nrgbistro said:

    I've already done a new config and all my drives are listed as "New Device" now. Is it still safe to add them and start the array if I don't add a parity drive (temporarily, while I make sure my data is still here)?

    Yes.

    1 minute ago, nrgbistro said:

    I had an external hard drive added as a pool device for backups.

    What format does it have? You should be able to define a new pool and add it just as it was.

  6. 1 hour ago, mad_dr said:

    My PSU is a Corsair RM850x Shift so it has no molex connectors

    According to this link https://www.corsair.com/us/en/p/psu/cp-9020252-na/rm850x-shift-80-plus-gold-fully-modular-atx-power-supply-cp-9020252-na

    it comes with 2 PATA cables with 4 Molex connectors each. It has 5 connectors on the PSU itself, each of which can take either SATA or PATA cables, so if you order 3 of these,

    https://www.corsair.com/us/en/p/pc-components-accessories/cp-8920315/premium-individually-sleeved-peripheral-power-molex-style-cable-4-connectors-type-5-gen-5-black

    you can have twenty 4-pin Molex connectors in total (5 cables at 4 connectors each).

  7. You've been doing this tech thing with Unraid for a very long time, relatively speaking. Why not salvage your old hardware and build another Unraid box with used parts? A backup tower would only need to be powered up for the duration of the backup, so heat and power consumption aren't a huge factor. You could use a couple new faster drives in your production box, and move your current smaller drives to the backup unit.

     

    You seem to be focused on the ability to plug the bare drives into a Windows box to read natively, but once you add hardware RAID into the mix you've lost that already. If the hardware RAID enclosure dies, you would need to source a compatible unit to read the drives. If you run a second Unraid server, the drives are portable to virtually any hardware, as you know.

     

    IIRC @Hoopster has something like this set up and running with automatic scripting to do the backups.

  8. 54 minutes ago, nrgbistro said:

    Can I safely add them to my array without losing my data?

    As long as parity is not currently installed and valid, yes.

    If the array has valid parity, added disks will be cleared, which irretrievably erases them.

    Tools, New Config, Preserve all will allow you to add the disks without erasing them and build parity with all the disks.

  9. Write a script that checks for the existence of a specifically named file. If that file does not exist, create it. If it does exist, run the rm -rf. Every time the script runs, you need to delete the file.
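
     A minimal sketch of that flag-file pattern in bash; the flag path and the rm -rf target here are hypothetical placeholders, not anything named in the original post:

```shell
#!/bin/bash
# Flag-file toggle sketch. FLAG and TARGET are placeholder paths --
# substitute your own. First run creates the flag; a later run with
# the flag present performs the destructive step.
FLAG="/tmp/cleanup_armed.flag"
TARGET="/tmp/cleanup_target"

if [ ! -e "$FLAG" ]; then
    # Flag absent: arm the script by creating the marker file.
    touch "$FLAG"
else
    # Flag present: perform the destructive cleanup.
    rm -rf "$TARGET"
fi
```

     Deleting the flag between runs (to re-arm or disarm the destructive step) is left to the operator, as the post describes.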

  10. 14 hours ago, ZooMass said:

    Unplug boot USB and mount the drive on another computer, preferably not Windows because of possible CRLF line-ending conversion issues

    Theoretically CRLF issues are dealt with automatically, IIRC all system calls to the flash config files are run through the fromdos function.
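
     For illustration, the transformation fromdos performs is just stripping carriage returns, which tr can do portably; the file path here is only an example:

```shell
# Create a config line with DOS (CRLF) line endings, then strip the
# carriage returns -- the same transformation fromdos applies.
# Paths are illustrative only.
printf 'setting="value"\r\n' > /tmp/example.cfg
tr -d '\r' < /tmp/example.cfg > /tmp/example.cfg.unix
```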

  11. 2 hours ago, vrytired said:

    Did you ever find a solution to this?

    Nope, I'm just living with it. The mapping program I use, uNmINeD, gives a permission error for the GUI version, but I can live with that because the CLI version still works fine, and that's what I use on a schedule to generate a webpage for the people using my server.

     

    I'm not happy that the GUI version of the mapping program doesn't work, but it doesn't affect my daily workflow.

     

    There probably is a solution, but I haven't put in the legwork because it works enough for me as is.

  12. You have the trial period you can use for 30 days, then you can do the starter subscription for $49. That will give you at least 1 year, probably longer, until you need to pay more for updates. If, at the end of all that, you decide you want lifetime updates, you can change to a lifetime for $209. Total outlay, $258, and only after living with Unraid for much more than a year before you have to make a choice.

     

    So, the "penalty" for not going to lifetime immediately is only $9 net.

     

    Try it out for free, and if it looks good and you want to continue, get the starter license. If at the end of all that, you want to change to lifetime, you aren't spending significantly more than if you fork it all out immediately.

  13. 1 hour ago, dopeytree said:

    The path forward is one has to go through each docker 1 by 1 and see which is causing it. 

    Or, if the cause is truly just one container, start roughly half of the containers and check. If the problem appears, start half of those; if the problem doesn't appear, start half of the remainder.

     

    Depending on how many items you have to check, elimination by halves can be much quicker than 1 by 1.
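
     That halving search can be sketched in bash like this; check_problem is a hypothetical stand-in for however you actually detect the symptom after starting a given subset of containers:

```shell
#!/bin/bash
# Elimination by halves over a list of container names. In practice each
# round means: stop everything, start only the chosen subset, observe.
# check_problem is a hypothetical stand-in that pretends container "f"
# is the culprit.
containers=(a b c d e f g h)

check_problem() {
    for c in "$@"; do
        if [ "$c" = "f" ]; then return 0; fi   # symptom present
    done
    return 1                                    # symptom absent
}

lo=0
hi=${#containers[@]}
while [ $((hi - lo)) -gt 1 ]; do
    mid=$(( (lo + hi) / 2 ))
    if check_problem "${containers[@]:lo:mid-lo}"; then
        hi=$mid   # culprit is in the half just tested
    else
        lo=$mid   # culprit must be in the untested half
    fi
done
echo "culprit: ${containers[lo]}"
```

     With 8 containers this isolates the culprit in 3 checks instead of up to 8 one-by-one restarts.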
