
JonathanM

Moderators
  • Posts: 16,257
  • Joined
  • Last visited
  • Days Won: 65

Posts posted by JonathanM

  1. 1 hour ago, Harlo said:

    I have a dead drive that is in my Unraid server.

    How do you know it's dead? Most of the time when Unraid fails a drive it's because of a cabling or power issue; dead drives are relatively rare. When a write fails, for whatever reason, Unraid disables the drive and emulates its data using all the rest of the drives.

     

    See if you can get a SMART report on the "dead" drive, and see if it will mount in the Unassigned Devices plugin.
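
    If a command line check is easier, here's a minimal sketch of pulling that SMART report, assuming smartctl (smartmontools) is available as it normally is on Unraid; /dev/sdX is a placeholder for whatever device letter the "dead" drive shows on the Main page:

        import subprocess

        # /dev/sdX is a placeholder -- substitute the device letter of the "dead" drive
        device = "/dev/sdX"

        # smartctl -a prints the full SMART report for the device
        result = subprocess.run(["smartctl", "-a", device], capture_output=True, text=True)
        print(result.stdout)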

  2. 13 hours ago, ehnde said:

    Simply create a VM in Proxmox

    Keep in mind that running Unraid as a VM is not supported; if you have issues, it's on you to solve them. It's not forbidden, and plenty of people do it, but if you have a problem you will be asked to reproduce it running Unraid bare metal before it will be considered an actual issue that the developers need to look at.

     

    There is a section of this forum dedicated to users helping each other with virtualizing Unraid.

  3. 35 minutes ago, bfenty said:

    Yup, that seems to have been the issue. I must have over-provisioned it. Good solution, thank you.
     

    I ordered 64GB more RAM on a good deal, so I should be able to run a couple VMs simultaneously in the future. 

    For best performance, start at the lowest amount of RAM that will allow the VM to boot successfully, add RAM in small chunks until performance doesn't increase any more, then back it down to the last amount before it topped out. The host needs RAM to emulate all the I/O and motherboard functions, so the more you can leave to the host without slowing down the guest, the better the overall performance.

     

    RAM allocated to the guest is totally gone from the host's viewpoint, so the more you can leave to the host, the faster the virtual hardware you will be running on. You wouldn't intentionally cripple a bare metal PC by giving it the slowest motherboard you could find and maxing out the RAM to compensate.

  4. 5 hours ago, gemeit said:

     

    Hi, I want to remove 5 disks. Can I run this for 5 disks at the same time, or do I need to do it 1 disk at a time?

     

    Thanks

    You can do it simultaneously, but it will be excruciatingly slow.

     

    Why not just remove the disks all at once and rebuild parity one time with the final complement of disks?

  5. 10 minutes ago, Roscoe62 said:

    8. On Main page select dropdown for Disk 9 and unassign it

    9. Click on the dropdown for Disk 1 and reassign it to disk 9

    10. Go back to the dropdown for Disk 9 and assign it to disk 1

     

    Maybe when you finalize your directions to yourself, use Slot instead of Disk when referring to the logical positions, and write in the last 4 characters of each drive's serial number so you keep it straight in your head and can cross-check things. For example: select slot 1 and assign disk A54E, or something like that.

     

    Also, I suggest actually committing the document to print after verifying all your steps, then physically crossing things off the list as you do them.

  6. 4 minutes ago, Roscoe62 said:

    10. Go back to the dropdown for Disk 9 and assign it to disk 1

    Why? If you are retiring the disk, why not just leave slot 9 blank, and NOT check "parity is already valid" so it calculates parity from the disks you wish to use?

     

    The only other gotcha I see is that if files are added or modified while the copy process is running, you may not end up with a fully up-to-date copy. You can either stop all the things that could write to the disk, or use rsync to verify the copy and list differences until the verify comes back clean. It's not a bad idea to verify the copy anyway, but it's not the end of the world since you are keeping the old disk(s) intact as you go.
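
    For the verify step, here's a minimal sketch of a dry-run checksum compare; the disk numbers are placeholders for whichever source and destination you used for the copy:

        import subprocess

        # Placeholder paths: the disk being copied from and the disk being copied to.
        src = "/mnt/diskX/"   # trailing slashes: compare the contents, not the directory itself
        dst = "/mnt/diskY/"

        # -a preserve attributes, -v list files, -n dry run (change nothing), -c compare by checksum
        result = subprocess.run(["rsync", "-avnc", src, dst], capture_output=True, text=True)
        print(result.stdout)  # anything listed here still differs between the two disks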

  7. So you have the license *.key file that was issued for the broken stick? All you need to do is copy that file into the config folder of the new stick and delete the trial.key; the registration wizard will then step you through blacklisting the old key and getting a new one to match the current stick.
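
    Purely as a sketch of that copy, with the new stick mounted on another machine; every path and file name below is a placeholder:

        import os
        import shutil

        # Placeholders: the saved *.key file and wherever the new stick's config folder is mounted.
        key_file = "/path/to/backup/Pro.key"       # your license file name will differ
        config_dir = "/path/to/newstick/config"

        shutil.copy(key_file, config_dir)          # drop the old key into config/
        trial = os.path.join(config_dir, "Trial.key")  # exact capitalization may differ
        if os.path.exists(trial):
            os.remove(trial)                       # remove the trial key so the old key is used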

  8. 9 hours ago, MyNameWasTaken said:

    I just want to rebalance all the data on my array after adding a new disk? i.e. I have 6 disks that are 97% full and I want a bit of data from each to shift to this new empty drive.

    Why? There are good reasons to let all the new writes go to the new disk. Newly added data is usually accessed most frequently, and disks are much faster at the start of the drive than when they are almost full. User shares have no issue combining data across the disks seamlessly.

  9. Short answer: no, there is currently no way to add limited-access GUI users.

     

    Since docker start, stop, and restart are easy on the command line, you can write a script that watches a user-accessible share location for input. Updates are probably possible too, but I have no experience doing updates from the CLI.

    The script would watch for a specifically formatted file to show up or change in a user share location, and act on it.

    The user would connect to the SMB share that has permissions for that user to create or modify files, and manipulate the file(s) the script is watching for.

    As a quick example, you could look for the existence of a file named restart.txt; when it's found, the script would restart the container and delete the restart.txt file.
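
    A minimal sketch of that idea; the share path and container name are placeholders, and something like the User Scripts plugin could keep it running in the background:

        import os
        import subprocess
        import time

        # Placeholders: the share location the user can write to, and the container to act on.
        WATCH_FILE = "/mnt/user/control/restart.txt"
        CONTAINER = "my-container"

        while True:
            if os.path.exists(WATCH_FILE):
                # restart the container, then remove the trigger file so it only fires once
                subprocess.run(["docker", "restart", CONTAINER])
                os.remove(WATCH_FILE)
            time.sleep(10)  # check every 10 seconds

    Lock the SMB share down so only the intended user can write to it, and the script effectively becomes that user's limited "control panel".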

  10. 45 minutes ago, bschaeff18 said:

    Hello, I'm getting the same error message. I was hoping you could help me figure out which drive I need to repair. These are the logs:

    unraidnas kernel: XFS (md5p1): Metadata corruption detected at xfs_dinode_verify+0xa0/0x732 [xfs], inode 0x13a6acf62 dinode

    unraidnas kernel: XFS (md5p1): Unmount and run xfs_repair

     

    I gathered from above that (md5p1) is the disk in question, but what disk is that supposed to represent?

    Diagnostics will contain that info.
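
    For reference, the mdX number generally lines up with the array slot, so md5p1 would be Disk 5, and the diagnostics zip ties that back to a serial number. A minimal sketch of a read-only filesystem check, assuming the array is started in maintenance mode and using the device path from the log above:

        import subprocess

        # xfs_repair -n is "no modify" mode: report problems without changing anything
        check = subprocess.run(["xfs_repair", "-n", "/dev/md5p1"], capture_output=True, text=True)
        print(check.stdout)
        print(check.stderr)

    The same check (and the actual repair) can also be run from the disk's page in the GUI while the array is in maintenance mode.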
