Everything posted by JonathanM

  1. Right, you need to use the command the same way you do to register users, just substitute deluser for register:

     prosodyctl --config /config/prosody.cfg.lua deluser <fill in the user you want to delete here>

     You can't just type prosodyctl deluser username
  2. The expectation is that the array is not going to be stopped for any significant period of time, the only time most people have their array stopped is to make configuration changes or prepare for an imminent array start or power down.
  3. Instead of trying to force the Microsoft account, try just adding a user, putting it in the allowed list for your shares, and entering those credentials to see if it works.
  4. I'm not the controller expert around here, but I couldn't find any reference to IT (initiator target) mode firmware, so my best guess is there isn't any available. It's possible you may be able to force it to work by assigning each disk to an individual RAID0 volume in the card's BIOS, but you would likely be losing much of Unraid's ability to manage the disks and monitor their health. It would be much better to replace it with an LSI-based plain HBA instead of a RAID card. RAID cards are not recommended with Unraid.
  5. Since the VM is up, can you still copy your home folder content elsewhere? Pretty much anything critical in an Ubuntu install should be in your user's home folder. I'm fuzzy on how the file got encrypted while it was held open by KVM. Are you sure it was encrypted?
  6. We are glad to help, you only need to ask. I'm not sure how you managed to save your data, because if you truly did exactly what you described in the post, the data from the failed drive is gone. Mover won't move data between array disks, only cache to array and array to cache. Removing a disk from the shares doesn't do anything to the data on the disk; it only prevents new data from being written there, or, if the disk is globally excluded, hides the data already on it from the user shares as well. Doing a new config while there is a failed disk will result in the data that could have been rebuilt from parity being permanently gone.
  7. Or, just brute force it and script a virsh start <VMNAME> command every few minutes. The downside to automatically restarting a VM is that if the start command is issued while the array is being asked to stop, you could run into issues. I'd only force a VM start every hour or so; eventually the users should learn not to shut down VMs.
  8. I think you said the disk enclosure is on a separate UPS from Unraid. That can be an issue. To troubleshoot, move all interconnected disk equipment to the same UPS and see what happens.
  9. Is your name Advanced Member or Kosti?
  10. There is so much wrong here that I'm not going to even try. In the future, before you do any replacement or recovery steps, please post here and ask for advice.
  11. Depends where you access it from:
      • \\tower\flash\config\go over SMB
      • /boot/config/go over SSH or at the console
      • X:\config\go, where X is the mounted drive letter, if you put the USB stick in your Windows box
  12. I don't think that's possible without some very special hardware and the software to match. Unraid runs as a normal Linux system, which is a device host. To make it function as a device client isn't trivial. It would be nice for some configurations, but it's probably not going to happen.
  13. Yes, that gets inserted when a file is edited on an incompatible editor. Scratch that, I must have been thinking about something different. Sorry for the bad advice.
  14. Periodically parse the output of virsh list and act on the result using the appropriate virsh start or resume.
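The parse-and-act idea in item 14 can be sketched in Python. This is a minimal sketch, not a tested tool: it assumes the usual two-header-line layout of `virsh list --all` output, and the VM names used are placeholders you would replace with your own.

```python
import subprocess

def vm_actions(listing, wanted):
    """Decide which virsh command each wanted VM needs.

    `listing` is the text output of `virsh list --all`; `wanted` is the
    set of VM names you care about. Returns a dict mapping VM name to
    "start" (not running) or "resume" (paused).
    """
    states = {}
    for line in listing.splitlines()[2:]:      # skip the two header lines
        parts = line.split()
        if len(parts) >= 3:
            states[parts[1]] = " ".join(parts[2:])   # e.g. "running", "shut off"
    actions = {}
    for name in wanted:
        state = states.get(name, "shut off")
        if state == "paused":
            actions[name] = "resume"
        elif state != "running":
            actions[name] = "start"
    return actions

def keep_vms_up(wanted=("Win10", "Ubuntu")):    # placeholder VM names
    """Query virsh once and issue the appropriate start/resume commands."""
    listing = subprocess.run(["virsh", "list", "--all"],
                             capture_output=True, text=True).stdout
    for name, action in vm_actions(listing, wanted).items():
        subprocess.run(["virsh", action, name])
```

Run `keep_vms_up()` from cron or the User Scripts plugin on whatever interval you're comfortable with; per item 7 above, hourly is probably often enough.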
  15. Sounds like you copied over some items that are keeping the management GUI from starting. Can you SSH to the IP or get to the keyboard and monitor on the Unraid box? If so, log in, type diagnostics, that should say it's saving a file, after it's done shut down and put the USB stick in your desktop and attach the diagnostics zip file intact to your next post in this thread.
  16. No. FAT32, label UNRAID all caps. https://wiki.unraid.net/UnRAID_6/Getting_Started#Manual_Method_.28Legacy.29
  17. If you use scsi instead of virtio maybe this would work. https://serverfault.com/questions/318755/qemu-can-i-set-the-serial-number-on-a-virtual-scsi-device
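Following the linked serverfault answer, libvirt accepts a <serial> element inside a <disk> definition, which QEMU exposes to the guest for scsi/virtio disks. A hypothetical disk stanza might look like the following; the file path, target dev, and serial string are placeholders to adapt in your VM's XML view:

```xml
<disk type='file' device='disk'>
  <driver name='qemu' type='raw'/>
  <source file='/mnt/user/domains/MyVM/vdisk1.img'/>
  <target dev='sda' bus='scsi'/>
  <serial>MYSERIAL001</serial>
</disk>
```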
  18. Depends on your perspective. If, when adding a drive, you had to evaluate whether that drive was larger than the current largest data drive, and apply a different set of criteria for adding it, then that logic would be more complex and possibly a point of failure. As it is now, theoretically if the drive you are adding is all binary zeroes, there is already valid parity all the way to the extent of any drive you add. It's checking that the parity drive has all zeroes past that point. It would suck to add a 12TB data drive only to start getting errors on the parity drive while you were trying to add the new data drive. The secondary function of parity checks is to verify the health of the whole surface of all drives.
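The all-zeroes point in item 18 follows from how single parity works: Unraid's first parity drive is a byte-wise XOR across the data drives, and x ^ 0 == x. A tiny sketch with made-up three-byte "drives":

```python
from functools import reduce
from operator import xor

def parity(drives):
    """Byte-wise XOR parity across equal-length drive images."""
    return [reduce(xor, column) for column in zip(*drives)]

disk1 = [0x0F, 0xA0, 0x55]   # made-up data bytes
disk2 = [0xF0, 0x0A, 0xAA]
p = parity([disk1, disk2])

# A new drive that is all zeroes leaves every parity byte unchanged,
# so a pre-cleared disk can join the array without a parity rebuild.
zero_disk = [0x00, 0x00, 0x00]
assert parity([disk1, disk2, zero_disk]) == p
```

That is why clearing (zeroing) a disk before adding it is all that's required, and why Unraid only needs to verify the zeroes rather than recompute parity.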
  19. Attach the diagnostics zip file intact to your next post in this thread.
  20. No!
      1. Stop array.
      2. Disable docker and VM services.
      3. Start array. Make sure there is NO Docker or VMs tab in the GUI.
      4. Set all shares currently not Cache:Yes to BE Cache:Yes. Make a note of which shares you changed and what they were.
      5. Run mover, wait for completion, check cache drive contents, should be empty. If it's not, STOP, post diagnostics and ask for help.
      6. Stop array.
      7. Set cache drive desired format to XFS or BTRFS; if you only have a single cache disk and are keeping that configuration, I'd advise XFS. It's only available as a selection if there is only 1 (one) cache slot shown while the array is stopped.
      8. Start array.
      9. Verify that the cache drive and ONLY the cache drive shows unformatted. Select the checkbox saying you are sure, and format the drive.
      10. Set any shares that you changed to Cache: Yes earlier back to Cache: Prefer if they were originally Cache: Only or Cache: Prefer. If any were Cache: No, set them back that way.
      11. Run mover, wait for completion, check cache drive contents, should be back the way it was.
      12. Stop array.
      13. Enable docker and VM services.
      14. Start array.
  21. No. You should do what it says. On the share settings for that share, uncheck one of the includes, recheck it, then hit apply.
  22. I could easily be wrong, as I don't have any AMD equipment, but I'm pretty sure only some generations of Intel iGPUs are compatible with passthrough to a VM. I'd be happy to be proven wrong, but from what I've seen AMD iGPU passthrough isn't a thing at all for now.