JonathanM

Moderators
  • Posts: 14407
  • Days Won: 62

Community Answers

  1. JonathanM's post in I need help with some drive and file management (moving drives around) was marked as the answer   
    Assuming you have the File Manager plugin, click the "view" icon at the far right of disk1, select all, choose move, and select disk2 as the destination.
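     
    If you prefer the command line, something like this does the same job at the disk level (a sketch; paths assume the standard /mnt/diskX mount points, adjust to your setup):
        # copy everything from disk1 to disk2, preserving attributes
        rsync -av /mnt/disk1/ /mnt/disk2/
        # only after verifying the copy, remove the originals from disk1
        # rm -r /mnt/disk1/*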
  2. JonathanM's post in Strange power consumption issue? was marked as the answer   
    Thank your lucky stars that you didn't blow up all your drives. Modular cables are not always compatible, EVEN from the same name brand.
     
    Unless you know how to operate a voltmeter and test the pinouts before powering your drives, NEVER reuse modular cables with a different PSU; keep them together as a set.
  3. JonathanM's post in Can't get rid off OOM crashes for windows VM, PLEASE HELP!!! was marked as the answer   
    Either:
    - add more RAM
    - limit the amount of RAM your containers are allowed to use
    - quit using the containers that use too much RAM
    - set up a swap file on a pool disk (not officially supported, but has worked for some in the past)
     
    Keep in mind that the OS unpacks into, and runs entirely from, a RAM disk, so killing processes that are consuming excessive RAM is the only defense against total crashes. It's also possible that you have something writing to RAM, since all the typical OS locations are in RAM instead of on a hard drive. If a file is being written to a location not mounted to one of your array or pool disks, it's writing to RAM.
     
    Until you take steps to figure out which container(s) are causing the overflow, it's tough to help. The VM reserves as much RAM as it's assigned, and that chunk is untouchable to the OS, so even if the VM has free RAM the OS can't use it. Best to reduce the VM's RAM to the smallest workable amount and let the OS manage the rest.
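     
    For the container limit, a sketch (the 4g value and the container name "binhex-plexpass" are only examples): add --memory=4g to the container's Extra Parameters in its template, or try it on a running container first:
        # cap a running container at 4 GiB of RAM
        docker update --memory=4g --memory-swap=4g binhex-plexpass
     
    For the swap file option (again, not officially supported), the usual Linux steps apply, aimed at a pool disk instead of RAM:
        # 8 GiB swap file; size and pool name are examples, and on a btrfs pool
        # the file must be created NOCOW (chattr +C on an empty file) first
        dd if=/dev/zero of=/mnt/cache/swapfile bs=1M count=8192
        chmod 600 /mnt/cache/swapfile
        mkswap /mnt/cache/swapfile
        swapon /mnt/cache/swapfile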
  4. JonathanM's post in Incorrect share size shown was marked as the answer   
    Perhaps this?
    https://www.gbmb.org/tib-to-tb
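     
    The usual cause is that one tool is counting in TiB (1024^4 bytes) while the other counts in TB (1000^4 bytes), so the same data shows two numbers roughly 9% apart. A quick conversion from the console (the 8 is just an example figure):
        echo "scale=2; 8 * 1024^4 / 1000^4" | bc    # 8 TiB is roughly 8.79 TB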
  5. JonathanM's post in Unmountable disk present when setting up array for the first time was marked as the answer   
    Update to 6.11.3
  6. JonathanM's post in Problem Replacing a Failed Drive was marked as the answer   
    Upgrade to 6.11.3
  7. JonathanM's post in Cache pool with temporarily some unassigned drives was marked as the answer   
    Sure, just use BTRFS
  8. JonathanM's post in License Transfer (2nd in less than 12 months) was marked as the answer   
    Yes, if you set up a new flash drive with a trial, purchase the license, then copy the entire config folder to the newly licensed flash without the old *.key file.
     
    Since you say you are having issues with the flash drive, I'm assuming you are working with backup files from the drive, so just make sure you keep each key file with the physical flash drive it was issued to, and overwrite the rest of the config folder's contents with the active server config files. You should probably remove the old dead *.key file from the backup you are working with, and make a separate copy of the newly issued key file, labeled to correspond with the new physical USB stick.
     
    I'm not sure how clear I was, so if you have questions just ask.
     
    P.S. Are you using a USB 2.0 connection to the motherboard? Typically that results in fewer issues. My favorite USB drives are all metal and relatively large, for good heat dissipation.
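     
    A minimal sketch of the copy itself, assuming the server is booted from the new licensed stick (so it's mounted at /boot) and the old flash backup is unpacked somewhere like /mnt/user/backups/oldflash (both paths are examples):
        # copy the old config over, but leave every *.key file alone
        rsync -av --exclude='*.key' /mnt/user/backups/oldflash/config/ /boot/config/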
  9. JonathanM's post in One disk not coming up correctly was marked as the answer   
    USB enclosures sometimes alter drive IDs in unpredictable ways; for instance, yours may be reporting the ID of the first slot that responds while the subsequent drives aren't reported correctly. Some USB cages are better than others, but all suffer from bandwidth issues for parity.
     
    The only external hard drive cages that work comparably to internal SATA connections with Unraid are either eSATA with one port per drive, or SAS, which allows multiple drives on a single cable.
     
    USB suffers from poor connection stability under heavy load, which can cause Unraid to disable drives somewhat randomly.
     
    You should be able to successfully use Unraid with USB-connected array drives if you forgo parity protection and just use data drives and single-volume pools. Any attempt to use parity with multiple USB data drives is going to be a frustrating experience compared to internal HBA SATA connections.
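     
    You can see exactly what IDs the enclosure presents to Unraid from the console, which often makes it obvious when the cage is substituting its own bridge ID for a drive's real serial:
        ls -l /dev/disk/by-id/ | grep -i usb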
  10. JonathanM's post in Increase Windows 10 VDISK size was marked as the answer   
    I can't remember if there is a guide, but here it is in a nutshell.
     
    1. Shut down the Windows 10 VM fully; hold down the Shift key while clicking the Shut Down item.
    2. On the VM page, click on the name text of the VM, and it will drop down a list of the attached disks.
    3. Click on the 70GB text in the capacity column, and change it to 90.
    4. Boot the VM back up, go to disk management, and expand the partition to fit the new vdisk.
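     
    If you'd rather do step 3 from the console, the same resize can be done with qemu-img while the VM is shut down (the vdisk path below is an example):
        qemu-img resize /mnt/user/domains/Windows10/vdisk1.img 90G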
  11. JonathanM's post in rootfs & /var/log getting full was marked as the answer   
    After a reboot, yes.
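     
    To confirm the space was freed, after the reboot run:
        df -h / /var/log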
  12. JonathanM's post in Cache raid mirror on normal drive vs ssd? or just run a backup script? was marked as the answer   
    mirror is NOT backup.
     
    Backups can be restored to fix things like file corruption and accidental deletion.
     
    Mirror keeps the files identical in realtime, so if a file is deleted, it's gone everywhere.
     
    mirror is NOT backup.
     
    Mirrors perform best with identical drives, but can be forced to use vastly different capacities and speeds; typically you will be restricted to the limits of the slower and smaller of the disparate drives.
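     
    If you go the script route, a bare-bones sketch you could schedule with the User Scripts plugin (source and destination paths are examples):
        #!/bin/bash
        # one-way copy of a pool share to an array disk; no --delete, so files
        # removed from the source remain in the backup until you prune them
        rsync -av /mnt/cache/appdata/ /mnt/disk3/backups/appdata/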
  13. JonathanM's post in Migrating from Flexraid to Unraid was marked as the answer   
    👍
  14. JonathanM's post in [thinking] of UnRaid was marked as the answer   
    This.
    You can attach as many additional drives as you want after the array has started, but the array will NOT start if the license's device limit is exceeded. Virtual devices don't count.
     
    Why? All you really need to get started is the USB stick to set up a trial Unraid install, and a single drive that can be erased and used as Disk1 in the array. As long as you don't assign any of your current drives to Unraid, it won't touch them.
  15. JonathanM's post in VM - Execution error - Can't find USB device after installing new Motherboard was marked as the answer   
    You can just remove the section of XML that you pictured.
     
    Read this thread for more discussion.
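     
    If you'd rather work from the console than the GUI, the same edit can be made with virsh (the VM name is an example):
        virsh edit "Windows 10"
        # delete the <hostdev mode='subsystem' type='usb' ...> ... </hostdev>
        # block that points at the missing USB device, save, and start the VM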
     
  16. JonathanM's post in Can't access home assistant in browser was marked as the answer   
    Enable bridging in the main network settings.
  17. JonathanM's post in Radarr wont start - cache full, but its not? was marked as the answer   
    Since appdata is set to cache pool Only instead of Prefer, it can't overflow to the array when it runs up against the pool's minimum free space. Click on the pool name "Cache" in your screenshot to see the minimum free space setting.
  18. JonathanM's post in cache pool corruption and strange behaviour [6.10.3] was marked as the answer   
    If the two drives were EVER assigned to the same pool, even briefly, they "permanently" remember that status, because BTRFS keeps a record of pool members totally separate from Unraid's pool definition. Removing them from the BTRFS pool can require extra steps; @JorgeB is the resident expert on what is required. Theoretically, if the drive is removed from Unraid's pool correctly, the status is updated and things work as intended. If you physically remove the drive, I believe it may still have the BTRFS headers on it that make it a member of the pool; I think you have to leave the drive installed and remove it from Unraid's pool definition, at which point the pool is rebalanced to remove it.
     
    I'm a little fuzzy on the exact procedure, but what you experienced is a symptom of both drives being assigned to a single pool at some point in the past, and not being properly BTRFS balanced to separate drives.
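     
    You can see what BTRFS itself thinks about pool membership from the console, independent of what Unraid shows:
        btrfs filesystem show    # lists every btrfs filesystem and its member devices
    If a drive has genuinely been removed from the pool and you only need to clear the leftover signature, wipefs will do it, but ONLY on a drive you are certain holds nothing you need:
        wipefs -a /dev/sdX1      # example device; this destroys the filesystem signature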
  19. JonathanM's post in Where are "extra parameters"? was marked as the answer   
    Edit the container, then at the top right toggle Basic View to Advanced View.
  20. JonathanM's post in (SOLVED) Unraid 6.9.2 - Steps to: Replace parity - reorder drives (physically and in array) - move data was marked as the answer   
    1. Unraid tracks drives by serial number, not physical location, so you can freely move drives around to different ports; Unraid won't change how they are assigned to their logical slots. Put the drives in any physical slot that makes sense to you.
     
    2. Since Disk4 (E) has no data AND you are building a new parity drive, nothing particularly fancy is needed for this.
       Take a screenshot or some other record of which serial numbers are in which logical slots.
       Power down, physically remove the old parity drive and the 1TB drive, install the new parity drive, rearrange the drives the way you want in the case, and power back up.
       Tools, New Config, preserve current assignments: ALL, apply. Go to the Main page, drop down the parity slot, VERIFY the serial number of the new parity drive, and CHECK IT AGAIN. B and C can stay where they are. Since D is empty I'd leave it out, as well as E; you can add D back later if you really need the space, but it's better to leave it out: less power and less risk.
       Verify you have the correct serial numbers for the new parity, disk1, and disk2; at this point it should be the new drive in parity, B in disk1, and C in disk2. Start the array and build parity. After parity builds, do a non-correcting parity check. If you have zero errors, you can proceed.
     
    Stop the array, power down. Replace the 2TB disk with the old 4TB parity disk, power up, select the correct disk for the disk2 slot, and rebuild. Do a non-correcting check.
     
    At this point, you should have your new 4TB parity drive, the original 4TB disk1, and the old parity drive as disk2 with all your data, and 2+TB free space on disk2. If you need the space NOW, then add back the 2TB you designated D, and optionally the 2TB you just replaced with the 4TB as well. Keep in mind that each extra spinning drive is a possible point of failure, since parity requires ALL remaining data drives to be read perfectly end to end to rebuild a single failed drive, even if some of those data drives don't have any data yet.
     
    You would be better off not putting those older drives back in the array if you can help it.
  21. JonathanM's post in Sonarr not creating Series folders was marked as the answer   
    The screenshot you posted for sonarr shows
     
    Container Path : /data
    Host Path : /mnt/user/appdata/data/
     
    If the full pair of paths doesn't match between the applications, they can't find the files.
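     
    As an illustration of what matched mappings look like (the container names, images, and /mnt/user/data path are examples, not your exact setup):
        docker run -d --name sonarr      -v /mnt/user/data:/data lscr.io/linuxserver/sonarr
        docker run -d --name qbittorrent -v /mnt/user/data:/data lscr.io/linuxserver/qbittorrent
    With that in place, a path like /data/downloads/show.mkv handed from one app to the other resolves to the same file in both containers.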
  22. JonathanM's post in Share Structure Setup - Trash Guides vs Predefined was marked as the answer   
    Create a share for each category.
  23. JonathanM's post in Cannot install unRAID onto new usb drive was marked as the answer   
    Did you copy the contents of the downloaded zip to the root of the flash drive? There should be a syslinux folder on the flash.
  24. JonathanM's post in Parity Check with incorrect Parity Drive was marked as the answer   
    It's unclear from your title and description what is actually happening.
     
    Did you accidentally assign a data drive to the parity slot?
     
    When you removed a drive, parity was no longer valid, unless you wrote zeroes to the entire drive to be removed. Empty implies a formatted filesystem, which is part of parity, and probably deleted files, which are also part of parity.
     
    Your title implies you put the wrong disk in the parity slot, but the body of your message sounds like everything is working correctly; you just need to unassign the parity drive, reassign it, and rebuild parity based on the remaining drives.
     
     
  25. JonathanM's post in Mount disk in UnRaid was marked as the answer   
    Perfectly normal. Check the box that acknowledges "Yes, I want to do this" and it will become available.
     
    We make it a multi-step process to format drives because, too often, when someone has file system corruption the first thing they want to do is format the drive to make it mountable, and then they think parity will restore their files, when the correct thing to do is a file system check. Formatting replaces the table of contents with a blank version, and it affects the parity drive as well, so it makes recovering data much harder or impossible.
     
    In your case, you genuinely DON'T have a valid filesystem yet, so check the box and apply the format.
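     
    For anyone who lands here with the other situation, an existing filesystem that has gone unmountable, the check is run with the array started in Maintenance mode, from the disk's GUI page or from the console; a sketch for an XFS disk in slot 1 (the md device name varies by Unraid version):
        xfs_repair -n /dev/md1    # -n reports problems only, makes no changes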