Everything posted by JonathanM

  1. Nope. The array assignments are kept on the flash drive. Nothing written on the drive itself identifies its slot; Unraid tracks assignments by serial number, and you already told Unraid that the parity slot is occupied by a drive with a different serial number.
  2. Not sure why you want to preclear the 10TB, are you suspicious that it may be failing? You should never use a drive you consider marginal in the parity array. If you are confident all drives are healthy, I'd do a non-correcting parity check, if it comes back with zero errors, you can then just pull the 3TB and replace it with the 10. No need for any other manipulations.
  3. Sometimes baby proofing plugs are necessary for the "adults" in the room as well. Block off unused outlets with plastic baby plugs duct taped in place. Easy enough to remove when you need to rejigger something, but enough of a deterrent to make someone think twice about utilizing a "free" plug spot. Just don't use a surge protector. Added power conditioning circuits on the output of a UPS can make them angry. A cheap outlet splitter is a good choice.
  4. What exactly are you trying to get set up?
  5. No, Unraid is fine with different formats on each array drive.
  6. Little if any. Shipping damage is way more prevalent in my experience. Feel free to use the same models, but try not to ever buy multiples that came through the same shipper. If you must use the same seller, stagger the purchase dates by a couple weeks or more to hopefully get a drive from a new manufacturer shipment each time.
  7. Click on the container icon and select support. That will take you to the support thread for that specific version, where you can read to see if others have had the same issue and solved it, or ask in that thread if you can't find the solution already posted there.
  8. Probably not, the messages will just show up under whichever drive gets assigned to the sdd device. Unraid assigns sd? designations based on when in the boot process the drives are detected, so they can change randomly. That's why all drive identification for array slots is based on serial numbers. I think you are missing the point, though: the messages are simply a list of the lines in the syslog that reference a specific sd? designation, and because wsdd contains the letters sdd in order, it's being added to the list for that device.
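     A minimal illustration of the matching behavior (not necessarily exactly how Unraid builds the list), assuming a plain substring search against the standard syslog location:

       grep 'sdd' /var/log/syslog      # substring match: also catches every "wsdd" line
       grep -w 'sdd' /var/log/syslog   # whole-word match: "wsdd" no longer matches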
  9. I thought that only shared compute power. Can you actually drive a video output?
  10. First, Unraid does not support hot replacement while the array is running. You must at least stop the array to select the replacement drive, and depending on the disk controller, you may need to power down to get the new drive properly detected. Just wanted to get that out of the way, as it sounded like you intended to hot swap while the array was running. If parity is in sync, then the removed drive will be completely and accurately emulated by all the rest of the drives, and you can rebuild that content to the new drive. However, if parity is not correct, or any one of the array drives doesn't return good data, then the rebuild will be corrupt. A parity check is the only way to know for sure that parity is in sync, so the more recent your last parity check with zero errors, the more confident you can be.
  11. Try adding this code to the bottom of the XML, editing to match your SCSI volume if needed.

     <qemu:commandline>
       <qemu:arg value='-set'/>
       <qemu:arg value='device.scsi0-0-0-0.rotation_rate=1'/>
     </qemu:commandline>

     Be sure to back up everything; you may break something badly, and I have NOT tested this.
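     If the override takes effect, a Linux guest should see the virtual disk as non-rotational. A quick check, assuming lsblk is available in the guest and sda is the disk in question:

       lsblk -d -o NAME,ROTA /dev/sda   # ROTA 0 = reported as non-rotational (SSD)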
  12. VM startup is easy to manipulate with scripts; I have a user script that pings an IP and starts the VM only after that IP is responding. I recommend disabling autostart on your VMs and scripting the startup instead, keeping a way to disable it for troubleshooting purposes: either a delay of a few minutes so you can kill the startup script if needed, or a check for a specific file on the flash drive, so deleting or renaming that file keeps the VM from starting.
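     A rough sketch of that kind of user script, assuming the User Scripts plugin, a VM named "Win10", and a gate file on the flash drive (the VM name, IP, and file path are examples, not my actual setup):

       #!/bin/bash
       # skip startup entirely if the gate file has been deleted or renamed
       [ -f /boot/config/start_win10 ] || exit 0

       # wait until the target IP responds before starting the VM
       until ping -c1 -W2 192.168.1.10 >/dev/null 2>&1; do
         sleep 10
       done

       # start the VM only if it isn't already running
       virsh list --name | grep -qx 'Win10' || virsh start 'Win10'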
  13. You can run apcupsd on your pi as the host, and connect Unraid as a network client that shuts down when the pi reports an outage.
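     A minimal sketch of the relevant apcupsd.conf settings, assuming the pi is at 192.168.1.50 (example address) and apcupsd's default NIS port of 3551. On the pi (the server side): NETSERVER on and NISIP 0.0.0.0. On Unraid (the client side; the UPS settings page exposes the same fields):

       UPSCABLE ether
       UPSTYPE net
       DEVICE 192.168.1.50:3551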
  14. Can you cite a source? In my 40-some-odd years in tech, I've never heard that. As a matter of fact, some drives even used the rotational energy of the platters to park the heads if power is pulled while the drive is spinning. As to why Unraid spins up the drives, it's to cleanly unmount the file systems, which are kept mounted while the drives are allowed to spin down if not accessed for a set period of time during normal use. Unmounting the filesystems requires the drives to be accessed.
  15. In my experience, drives that arrived together experienced similar trauma during shipping. I've found it's much more likely for rough shipping and handling to damage drives than any other cause. Raw drives are still extremely delicate; the G force of even gently tapping one raw drive against another is plenty to damage them. A solid layer of foam or rubber, even a thin layer, reduces the G load tremendously. It's the bare metal hitting something solid that's the problem.
  16. I'm unclear on what you are asking. Does this help? https://wiki.unraid.net/Parity#Parity_disk
  17. I hate that I don't have the depth of knowledge to confidently advise you on the fully correct solution, but in a nutshell, Unraid incorrectly decided you wanted to add the 1TB and the 500GB together in a RAID1, and because of the limitations of BTRFS RAID1 with different size devices, it's incorrectly reporting a usable size of 750GB instead of the actual 500GB. If you physically remove the 500GB SSD from the machine before telling BTRFS to remove it from the pair, I don't know what will happen. At the moment, even though it's not showing this in the GUI, the system logs seem to clearly state that both drives are participating in the BTRFS cache pool.

     I believe the solution will be to run a balance command of some sort, check the result to be sure the volume only has the one intended member instead of both, and then you should be good. Please wait for someone with a more specific answer if you need to save the data that is currently on the cache pool.

     If you are comfortable erasing any data on both the 500GB and the 1TB, you can run a blkdiscard command to fully erase both, which should reset Unraid's view after a reboot and allow the single specified device to be formatted correctly. If you want to wait a few hours, @JorgeB should know the exact command needed to straighten things out without erasing data, or if you want to blow things away, just use the blkdiscard command pointed at /dev/sdb and /dev/nvme0n1, assuming those are the old and new cache drives. Be VERY sure you use the correct /dev id's, as those can change between power cycles. Even though those are the id's listed in the screenshot you posted, verify that they are the correct drives immediately before issuing the command.
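     If you do go the blkdiscard route, a sketch of verifying first (the device names are just the ones from your screenshot; confirm model and serial immediately beforehand):

       # confirm which physical drives sdb and nvme0n1 actually are right now
       lsblk -d -o NAME,MODEL,SERIAL,SIZE /dev/sdb /dev/nvme0n1

       # destroys ALL data on the named device
       blkdiscard /dev/sdb
       blkdiscard /dev/nvme0n1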
  18. Sorry, I was typing quickly off the top of my head. Honestly, there have been a few domain changes over the years, and I don't know what the current best contact email is, as I haven't emailed for support in a while. Try support@ lime-technology.com or unraid.net. Again, sorry for steering you to the wrong contact; it seems the company has removed ready references to their emails from the website, probably getting too much spam. The contact page does have a web form to fill out that I would guess goes to an email address. https://unraid.net/contact
  19. Since one drive is bad, the other is automatically suspect. I recommend a long SMART test at the very least.
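     If you'd rather start it from the command line than the GUI, something like this, with sdX as a placeholder for the suspect drive:

       smartctl -t long /dev/sdX   # start the extended self-test
       smartctl -a /dev/sdX        # check progress and results once it's had time to finish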
  20. Have you gone through all three options to see what results they actually produce?
  21. Since the motherboard has a graphics card built in, the BIOS may not have the iGPU enabled. Some motherboards have settings to enable both, some don't.