JonathanM

Moderators
  • Posts: 16,713
  • Days Won: 65

Everything posted by JonathanM

  1. Have you already shrunk the partition inside the disk image with Windows Disk Management? When you look in Disk Management inside the Windows client, there must be free unallocated space at the end of the drive equal to the amount you want to shrink the vdisk by. It's probably going to be easier to do a Windows backup and restore it to a new vdisk of the size you want. (There's a hedged qemu-img sketch after this list for the vdisk-shrinking step itself.)
  2. The major difference, and what makes this much harder, is that Unraid merges all the pools with the user share file system. It's one thing to have isolated pools operating independently, and quite another to have a FUSE filesystem suddenly lose access to only part of the files it contains. Changing how the user share filesystem works in Unraid is not a minor thing, and much work has gone into seamlessly merging the pools so the end user doesn't need to worry about which disk or pool really holds the file; it's always presented at the same location, and the share allocation and mover settings determine which pool accepts new writes and where the file ends up after a mover operation. Please, by all means lay out how you would design it, given the constraints already in place with multiple pools all interacting in a single filesystem. I'm not at all involved with programming Unraid, I'm just trying to get a workable set of plans in place so we can request small enough bites of the overall design that eventually we get to where we want to go. The feature request as asked, "certain VMs running without the array started", is probably not going to happen without more basic asks being implemented first, so if we can distill down what is required, that will go a long way toward the end goal.
  3. Sounds like you need to pass through those drives to the VM and mount them there, not mount them in Unraid.
  4. Seems to be recovering. Is this likely to happen overnight again, or do we have a probable cause and fix?
  5. Yes. After a data drive rebuild you should do a non-correcting check; after a parity drive build, a correcting check is prudent. As always, anything other than zero errors must be dealt with appropriately.
  6. Same song second verse, little bit louder, little bit worse.
  7. Different views of exactly the same files. /mnt/user0 contains the files/directories on only the parity array data disks. /mnt/user contains the files/directories on all the pools and data disks. In other words, user shares are the aggregate of the disk and pool shares: the exact same files at a different path, not duplicates. (See the example listing after this list.)
  8. This is the crux of this whole request thread, and I'm finding it hard to see the use cases. What functions do you want to accomplish that require only partially stopping the array? Is this exclusively about reconfiguring storage pools? How often does that happen without the accompanying need to power cycle while rearranging hardware? I doubt reconfiguring storage pools can be done live, at least in the current incarnation of the parity array. Maybe focus on what tasks you want to accomplish without taking down everything. The more compelling and usable the use case is for the mass market, the better. My arrays stay running for months on end, typically only downed by power or a major security update, so I'm not personally seeing the big attraction to partially stopping something that's running basically 24/7/365 anyway, which is what the first half of your post basically describes: what I'm doing currently. I guess what I'm saying is, you described all the reasons my Unraid servers stay running 24/7, but didn't address what you want to do differently than what is currently happening. Why do you want to NOT run 24/7?
  9. I'm seeing the same thing; the only reason I found this post is that the notification seems to work.
  10. I'm confused; can you explain exactly what you did so far and why? Maybe recap step by step where you started and what the status is right now?
  11. The rule is no data drive can be larger than parity. An 8TB drive will work just fine since you don't have any data drives larger than that. It's true you can't ever rebuild a data drive to a smaller one, but that doesn't apply here. Just remove the failed drive, assign the 8TB to the parity slot, and start the array to build parity.
  12. Don't hesitate to ask questions BEFORE you do anything permanent.
  13. Do you know how to do that without losing the data that is on the drive slot currently being emulated?
  14. Testing normal phraseology. I would like to see you add your diagnostics zip file to your next post in this thread; we may find out you need to run a scrub on your pool. (A hedged scrub sketch follows this list.)
  15. Can you catch a two-word phrase instead of single words, like docker run, or are we only doing wiki links?
  16. Current example: Bitdefender's browser protections causing issues.
  17. Post your docker run command. https://forums.unraid.net/topic/57181-docker-faq/?do=findComment&comment=564345
  18. @SpencerJ? I thought username changes were still restricted?
  19. I haven't personally verified it, but I don't think NTP is necessary unless you can't manually set the date and time correctly.
  20. Sparse means the file is sized for possible future use, but the unused space isn't actually consumed on disk until data is written to it. What do you mean by files falsely copied? (A short sparse-file demonstration follows this list.)
  21. There have been some instances recently where software blocked https because it's being redirected to a local private IP. Some programs are more aggressive than others, not all adblockers / protection software is created equal. Like you said, most blockers work just fine, but if someone experiences an issue, it's something to investigate. It's a troubleshooting step, not a blanket requirement. The car door analogy is flawed for multiple reasons, but if you are parked inside your locked garage, leaving the doors unlocked is perfectly natural and not an issue.
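
A minimal sketch for item 1, assuming the Windows partition has already been shrunk and the freed space sits at the end of the virtual disk. The vdisk path and target size here are hypothetical, and shrinking is destructive if the guest partition hasn't been shrunk first, so keep a backup copy:

    qemu-img info /mnt/user/domains/Win10/vdisk1.img                   # confirm the current virtual size
    cp /mnt/user/domains/Win10/vdisk1.img /mnt/user/backup/vdisk1.img  # safety copy before touching the image
    qemu-img resize --shrink /mnt/user/domains/Win10/vdisk1.img 60G    # set the new, smaller virtual size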
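A quick way to see what item 7 describes, that /mnt/user and /mnt/user0 are just different views of the same files. The share name Media and the disk/pool names are hypothetical:

    ls /mnt/disk1/Media   # files physically on data disk 1
    ls /mnt/cache/Media   # files physically on the cache pool
    ls /mnt/user/Media    # merged view of all pools plus all array data disks
    ls /mnt/user0/Media   # merged view of the parity array data disks only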
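A hedged sketch of the scrub mentioned in item 14, assuming a btrfs pool mounted at /mnt/cache; adjust the path to your pool name, and note that ZFS pools use zpool scrub instead:

    btrfs scrub start /mnt/cache    # start checksum verification of the pool
    btrfs scrub status /mnt/cache   # report progress and any corrected/uncorrectable errors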
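A small demonstration of the sparse behaviour described in item 20; the file name is made up. The apparent size is large, but almost no disk space is consumed until real data is written:

    truncate -s 10G sparse.img         # create a 10 GiB sparse file instantly
    ls -lh sparse.img                  # apparent size: 10G
    du -h sparse.img                   # actual allocation on disk: ~0
    du -h --apparent-size sparse.img   # reports the full 10G apparent size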