Report Comments posted by JonathanM

  1. On 1/13/2021 at 5:18 PM, TechGeek01 said:

    Was using the VGA output on the card, and the monitor constantly displayed the last thing that was on screen when I shut it down (setup screen of a Windows ISO) as if it was actively being told to display it still.

    That's actually fairly typical; the video card just renders whatever is in the framebuffer. Until something flushes the buffer or otherwise resets the card, it will keep showing whatever was last written to those memory addresses. If you kill the VM and nothing else takes control of the card, it will just stay in that last state. Normal bare-metal machines typically take back control of the video card after the OS shuts down.
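
    If you want to clear that stale image without a full reboot, something along these lines might work on cards that support a function-level reset; the PCI address is only an example, so check lspci for yours:

      # Sketch: issue a PCI function reset via sysfs to knock the card out of
      # its stale framebuffer state. 0000:01:00.0 is a placeholder address,
      # and not every card exposes this reset hook.
      GPU=0000:01:00.0
      if [ -e "/sys/bus/pci/devices/$GPU/reset" ]; then
          echo 1 > "/sys/bus/pci/devices/$GPU/reset"
      else
          echo "No sysfs reset available for $GPU" >&2
      fi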

    • Like 1
  2. 14 minutes ago, Andiroo2 said:

    How did you get your docker and VM's to move over to the new cache pool?  Did you have to manually move the files between pool A and pool B?  I have set my appdata and system shares to prefer the new 2nd cache pool but nothing is happening when I run the mover.

    At the moment the most hands-off method is to do it in two steps: first to the array, then back to the new pool.

     

    Be sure you have disabled both the Docker and VM services; if you still have the VM and Docker menu items in the GUI, they aren't disabled.

     

    Set the shares you want to move to Cache: Yes, then run the mover. After that completes, set them to Cache: Prefer with the new pool selected, and run the mover again.

     

    Alternatively, you could move them manually; again, be sure the services are disabled first.
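
    If you go the manual route, a rough sketch would look something like this, assuming the old pool is mounted at /mnt/cache and the new one at /mnt/cache2 (substitute your actual pool names):

      # Manual move between pools -- the Docker and VM services must already be disabled.
      rsync -avh --progress /mnt/cache/appdata/ /mnt/cache2/appdata/
      rsync -avh --progress /mnt/cache/system/  /mnt/cache2/system/
      # Only after verifying the copies are complete:
      # rm -r /mnt/cache/appdata /mnt/cache/system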

    • Like 1
  3. 14 minutes ago, Gnomuz said:

    Well, I just copied usbreset to /boot/config/, but chmod +x /boot/config/usbreset sends no error, but doesn't change the file permissions (still -rw-------). Sounds like a noob question, but I'm a noob in linux !!!

    You MUST copy the file somewhere other than /boot. Linux permissions aren't honored in any path under /boot because the flash drive is FAT32, which doesn't fully support Linux file permissions.
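
    For example (the destination here is just one option; anything outside /boot will do):

      # Copy usbreset onto a Linux filesystem and make it executable there.
      cp /boot/config/usbreset /tmp/usbreset
      chmod +x /tmp/usbreset
      /tmp/usbreset    # run it from the new location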

  4. Just now, JorgeB said:

    Worse than that, AFAIK there's no easy way of manually changing a VM from autostarting, hence the no autostart with manual array start, so than it can be edited in case it's needed.

    Yeah, that's why I recommend rolling your own autostart script with easily edited conditionals.

     

    The brute-force autostart that's built in has severe limitations, IMHO. Squid's plugin, with its network and timing conditionals for container autostart, should be the model for Unraid's built-in autostart for both VMs and containers. It feels like we took a step backwards when order and timing were added to Unraid, prompting the deprecation of Squid's plugin.
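
    To illustrate the kind of thing I mean, here's a rough user-script sketch with network and timing conditionals; the container and VM names, the pinged host, and the delays are all placeholders for your own setup:

      # Wait for the network before starting anything.
      until ping -c1 -W2 192.168.1.1 >/dev/null 2>&1; do
          sleep 5
      done
      docker start mariadb
      sleep 30                 # give the database time to settle
      docker start nextcloud
      virsh start "Windows 10"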

  5. 6 hours ago, TechGeek01 said:

    Can a change be made so that even when starting the array manually, both Docker and VMs respect the chosen autostart options?

    Personally I don't rely on Unraid's built-in VM autostart, as I have some external conditions that need to be met before some of my VMs come up. Scripting VM startup is very easy, and the virsh commands are well documented.

     

    Since you have a use case for starting your VM regardless of array autostart, I suggest using a simple script to start it. However, as JorgeB noted, I would recommend a conditional in the script so you can easily disable the autostart if needed for troubleshooting. It's very frustrating to get into a loop that requires manually editing files on Unraid's USB drive to recover.
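
    A minimal sketch of that idea, with a placeholder VM name and flag file, so you can kill the autostart just by creating one file on the flash drive:

      # Skip the autostart if the disable flag exists.
      if [ ! -f /boot/config/no_vm_autostart ]; then
          virsh start "Windows 10"
      fi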

  6. 1 minute ago, Eviseration said:

    So, what are we supposed to do until 6.9 is officially released?  It would be nice if I could get my actual "production" cluster to work, and I'm not really a fan of running my one and only Unraid server on beta software (I'm not at the point where I have a play machine yet).

    I'm not sure what you are referring to, as Unraid has never released a version with Nvidia drivers.

     

    If you are referring to the community-modded version of 6.8.3, I would contend that using the official 6.9 beta is far more "production" ready than running a community mod.

    • Like 1
  7. 6 minutes ago, bigmac5753 said:

    excuse the noob question but....

     

    Am I able to assign a card to a docker container and a VM?  I don't mean simultaneously, but let's say plex is using it for transcoding then a VM takes it over when it starts.

     

     

    Only if you stop the array, change the settings, and reboot.

  8. 55 minutes ago, Marshalleq said:

    I read this 'theres no official timeline' thing a lot.

    Ok, let me be a little more clear. There is no publicly accessible official timeline. What limetech does with their internal development is kept private, for many reasons.

     

    My speculation is that the main reason is this: the wrath of users over having no timeline is tiny compared to the wrath over multiple missed deadlines. In the distant past, loose timelines were issued, and the flak that ensued was rather spectacular, IIRC. Rather than getting beaten up over progress reports, it's easier for the team to stay focused internally and release when ready than to try to justify delays.

     

    When you have a very small team, every man-hour is precious. Keeping the masses up to date on every little setback doesn't move the project forward; it just demoralizes everyone with the negative comments that follow. Even "constructive" requests for updates take time to answer, and it's not up to us to say "well, it's only a small amount of time, surely you can spare it".

     

    The team makes its own choices on time management; it's best just to accept that and be happy when the updates come.

    • Like 4
  9. 7 hours ago, SavellM said:

    can we pool the pools together?

    So like have 2 pools, with 4 parity drives (2 each pool) and then have increased read/write speed to the mechanical HDD's?

     

    No. Each pool still uses the same strategy of individual file systems; there is no striping between pools.

     

    You could, however, use the multiple pools feature to set a BTRFS RAID level on a pool, which would stripe inside that specific pool. So you could have an SSD pool and an HDD pool. The traditional Unraid parity array(s) would operate pretty much as they always have.
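
    For reference, this is roughly what setting a pool's profile amounts to under the hood; Unraid's GUI normally handles it for you, and the mount point here is just an example:

      # Show the current data/metadata profiles of a pool, then convert them.
      btrfs filesystem df /mnt/cache2
      btrfs balance start -dconvert=raid0 -mconvert=raid1 /mnt/cache2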

  10. @limetech, I can confirm that with the options listed in the create VM screenshot, choosing qcow does indeed create a very small sparse file in my b25 test install.

     

    I'm torn about the urgent label, but I'm going to let it stand until reviewed by @bonienl and company.

     

    However, the sparse qcow file does indeed expand appropriately during install, so I failed to replicate the actual issue.
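
    This is roughly how I checked the allocation; the vdisk path is just an example location:

      # Compare the virtual size vs. the actual space used by the sparse image.
      qemu-img info /mnt/user/domains/testvm/vdisk1.img
      du -h /mnt/user/domains/testvm/vdisk1.img                   # real usage
      du -h --apparent-size /mnt/user/domains/testvm/vdisk1.img   # nominal size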

  11. 10 hours ago, Mathervius said:

    I turned a very low powered Ubuntu box into a TimeMachine server yesterday and all of my machines have been backing up to it without issue. It's also much faster than with UNRAID 6.8.3.

    If your Unraid box can handle VMs, try replicating that same TimeMachine server in a VM and see how it performs.

  12. 22 minutes ago, Joseph said:

    it seemed like a way to improve the product to "save the user from themselves" for those of us who suffer from 1D10T errors.

    Well, to be blunt, if you try to 1D10T-proof everything, you will lose functionality and performance, and waste developer time that could be better spent elsewhere.

     

    I suppose the best way to handle your specific issue is a warning message when you start the array, similar to the question the ticket-counter agent asks when you check your baggage: has anyone tampered with your bags without your knowledge?

  13. 22 minutes ago, Joseph said:

    I was concerned based on the 'new contents' of the physical disk, it would have destroyed the virtual contents held by parity and the data that used to be on the disk would then be forever lost...

    That's correct. A correcting parity check would have updated parity to reflect what was now on the disk instead of what was there before, so that parity would once again be usable to recover from a disk failure. All the original content would be gone, just as you intended by erasing the disk.

     

    If you didn't want the data erased, why would you format the disk, inside or outside of Unraid?

     

    Your scenario of pulling a data drive to temporarily use it for something else doesn't make sense.

  14. 10 hours ago, zoggy said:

    Which is odd since I stop all the dockers before I went to stop my array.. I'm guessing a docker really didnt stop or something?

    Stopping the containers doesn't stop the underlying Docker service, and as long as the service is active the image stays mounted. That shouldn't prevent the array from stopping, though.
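
    An easy way to check whether the image is still mounted (docker.img is the default image name):

      # List any mounts or loop devices backed by the docker image.
      mount | grep docker.img
      losetup -a | grep docker.img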

  15. 32 minutes ago, SliMat said:

    The 'workaround' was only discovered a few minutes ago... but I have changed to "annoyance" if its not deemed important that peoples machines can be left unusable 😐

     

     

    It is important. I'm not saying that it isn't.

     

    It's just that the urgent tag triggers a bunch of immediate attention, which isn't necessarily productive in this specific instance. Better to put it in the queue of important things to fix, instead of in the "emergency, we'd better find a solution before thousands of people corrupt their data" category, only to find out that it's not that big a deal for 99% of the user base.

     

    Screaming for attention over something that, in the grand scheme, isn't a showstopper may cause the issue to get pushed down further than it deserves, as an overreaction to the initial panic.

     

    Politely asking for help resolving it goes a lot further than pushing the panic button.