Report Comments posted by JonathanM
-
If you can't reproduce at your end, I can give you remote access to my machine.
-
@limetech, I can confirm that with the options shown in the create VM screenshot, choosing qcow does indeed create a very small sparse file in my b25 test install.
I'm torn about the urgent label, but I'm going to let it stand until reviewed by @bonienl and company.
However, the sparse qcow file does indeed expand appropriately during install, so I failed to replicate the actual issue.
-
Post a screenshot of the Add VM page right before you would hit the "create" button.
-
1 hour ago, TexasUnraid said:
Strange since it worked earlier when I removed a drive from a raid 0 pool.
Don't see how, since RAID0 spreads the data across all the devices for speed. Normally you would need to tell btrfs you intend to remove the device and let it migrate the data off that device before dropping it.
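For reference, this is roughly how that's done on the command line; a minimal sketch, assuming the pool is mounted at /mnt/cache and /dev/sdX1 is the device to drop (both are placeholders, substitute your own):

# Relocate all data off the device, then drop it from the pool.
# Needs enough free space on the remaining devices, and can take a while.
btrfs device remove /dev/sdX1 /mnt/cache

# If the pool would fall below the minimum device count for its profile
# (e.g. RAID0 down to a single disk), convert the profile first:
# btrfs balance start -dconvert=single -mconvert=dup /mnt/cache

# Check what's left afterwards.
btrfs filesystem show /mnt/cache
btrfs filesystem usage /mnt/cache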
-
1 hour ago, TexasUnraid said:
Is it possible to stop / start docker from a script?
Sure, when you make changes to a container in the GUI and apply them you can see the docker run command that is issued by Unraid. You can certainly run that same command in a script to start that container.
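As a rough sketch, assuming a container named "plex" (just an example, use the name shown on your Docker tab):

#!/bin/bash
# Stop a container, do some work, then start it again.
CONTAINER="plex"

docker stop "$CONTAINER"
# ... whatever maintenance you need to do while it's stopped ...
docker start "$CONTAINER"

# Or, to recreate the container from scratch, paste in the full
# "docker run ..." command the GUI prints when you apply the template.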
-
10 hours ago, Mathervius said:
I turned a very low powered Ubuntu box into a TimeMachine server yesterday and all of my machines have been backing up to it without issue. It's also much faster than with UNRAID 6.8.3.
If your unraid box can handle VMs, try replicating that same TimeMachine server in a VM and see how it performs.
-
22 minutes ago, Joseph said:
it seemed like a way to improve the product to "save the user from themselves" for those of us who suffer from 1D10T errors.
Well, to be blunt, if you try to 1D10T proof everything, you will lose functionality and performance, and waste developer time that could be better spent elsewhere.
I suppose the best way to handle your specific issue is a warning message when you start the array, similar to the question the ticket counter agent asks when you check your baggage: has anyone tampered with your bags without your knowledge?
-
22 minutes ago, Joseph said:
I was concerned based on the 'new contents' of the physical disk, it would have destroyed the virtual contents held by parity and the data that used to be on the disk would then be forever lost...
That's correct. A correcting parity check would have updated parity to reflect what was now on the disk instead of what was there before, so that parity would once again be usable to recover from a disk failure. All original content would be gone, just like you intended by erasing the disk.
If you didn't want the data erased, why would you format the disk, inside or outside of unraid?
Your scenario of pulling a data drive to temporarily use it for something else doesn't make sense.
-
10 hours ago, zoggy said:
Which is odd since I stop all the dockers before I went to stop my array.. I'm guessing a docker really didnt stop or something?
Stopping the containers doesn't stop the underlying docker service, and as long as the service is active the image will be mounted. Shouldn't stop the array from shutting down though.
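If you want to verify that, something along these lines should show the image is still mounted even with every container stopped (paths and the service script location are what I'd expect on a typical Unraid install; check them on yours):

docker ps                      # no containers running...
losetup -a | grep docker.img   # ...but the loop device backing the image is still attached
mount | grep docker            # and /var/lib/docker is still mounted

# Stopping the docker service itself releases the mount:
/etc/rc.d/rc.docker stop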
-
32 minutes ago, SliMat said:
The 'workaround' was only discovered a few minutes ago... but I have changed to "annoyance" if its not deemed important that peoples machines can be left unusable 😐
It is important. I'm not saying that it isn't.
It's just that the urgent tag triggers a bunch of immediate attention, which isn't necessarily productive in this specific instance. Better to put it in the queue of important things to fix, instead of in the "emergency, we better find a solution before thousands of people corrupt their data" category, only to find out that it's not that big of a deal for 99% of the user base.
Screaming for attention about something that in the grand scheme isn't a showstopper may cause the issue to get pushed down farther than it deserves, as an overreaction to the initial panic.
Politely asking for help resolving it goes a lot further than pushing the panic button.
-
Respectfully, while I agree that it's urgent in the sense that there is something wrong that needs to be addressed, there is a valid workaround in place to run unraid without triggering the issue, and it only affects a small subset of hardware. GUI mode just doesn't work properly on some systems, and it's been that way since it was introduced.
I don't think this deserves the urgent tag, which implies a showstopper issue for general usage on the majority of hardware with no workaround.
-
So to be clear, this only affects GUI mode?
-
20 hours ago, johnnie.black said:
I can't reproduce this, if I unassign all cache devices, leaving slots as they were I get this on the log:
root: mover: cache not present, or only cache present
mover is not executed
Try this.
After you unassign the physical cache devices, try creating a /mnt/cache folder, like what would happen if a container were misconfigured to use the disk path instead of /mnt/user.
I suspect the OP was filling up RAM with some misconfiguration, causing the crash.
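Something like this should show what I mean; a minimal sketch, assuming no cache device is assigned (keep the dd size small, it's only there to demonstrate the effect):

# With no pool mounted there, /mnt/cache is just a directory on Unraid's
# RAM-backed root filesystem, so anything written to it consumes memory.
mkdir -p /mnt/cache
df -h /mnt/cache          # shows rootfs, not a real cache device

# A mis-mapped container writing here slowly eats RAM until the box falls over.
dd if=/dev/zero of=/mnt/cache/testfile bs=1M count=256
free -m                   # watch available memory drop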
-
-
On 12/15/2019 at 4:10 PM, ds679 said:
Ahhh....good idea...and...drumroll...IT WORKS! I'm in the terminal window and no blanking/whiting out!
Were you able to go 'one by one' and see which one (or was it the whole 'shields' part) was causing the problem? Thanks for the idea!
=dave
On 1/2/2020 at 8:24 PM, ds679 said:
I appreciate all of the help - but this issue still is persistent & repeatable....and has not occurred with other releases. There is still an issue with the current codebase.
=dave
Earlier you said you figured out the issue.
-
15 minutes ago, Helmonder said:
I am perfectly aware, and it was attached ?
I'm not seeing it on any of the posts in this thread. It's supposed to be in the report itself, or, lacking that, attached to your first reply.
Did you read the guidelines for posting a bug in this section?
-
2 hours ago, marcusone1 said:
any solutions to this. i'm seeing it and backups using rdiff-backup are now failing due to it
Since this report references rc5, I'd advise updating to 6.8.0 and seeing if the issue still exists. If it does, a new report needs to be filed, with all the diagnostics and steps needed to recreate it so the devs can fix it.
-
10 minutes ago, dalgibbard said:
i've installed the unraid nvidia plugin
For future reference, the nvidia and dvb modifications are not supported by limetech. Before filing a bug report, please be sure to revert back to the limetech release and duplicate the issue there. If the issue only occurs with 3rd party modifications, you need to bring that up with the folks doing the modifications.
-
3 hours ago, Carlos Talbot said:
What's the easiest way to reformat the drive to XFS?
Make sure that when the array is stopped, only 1 cache slot is shown. Then you can select XFS as the desired format type on the cache drive properties page, and when you start the array you should be presented with the cache drive as unmountable, and the option to format. Be sure the ONLY drive listed as unmountable is the cache drive, as the format option operates on all unmountable drives simultaneously.
-
40 minutes ago, eagle470 said:
Simple request, I'd like a check box on the NEXT branch where I can ask for the system to not notify me until there is a stable 6.8 release.
If you are on the NEXT branch, you are expected to install updates as you can, and participate with diagnostics if you find an issue. If you don't want to be bugged until there is a stable release, you need to be running the stable branch.
I know there are valid reasons to not stay on 6.7.2, but it's not reasonable to treat the NEXT branch as stable.
-
Doesn't look like an unraid issue to me; it appears to be a problem with FreeBSD:
https://forums.freebsd.org/threads/freebsd-12-release-guest-in-qemu-kvm.70207/
-
Wireless is your issue. To prove me wrong, temporarily make a gigabit connection to the wireless machine and see what happens.
BTW, this definitely should be in general support; it's in no way a bug with unraid.
-
13 hours ago, Peanutman85 said:
If it happens again with rc3, I'll post a new thread.
Don't bother. Either run the latest RC and provide diagnostics, or run a stable release and wait.
-
5 hours ago, bonienl said:
Do not use quotes in your password.
I think the point of this report is to complain that the webgui should give an error message and refuse to complete the change when an invalid password is entered, instead of accepting it and causing a lockout.
-
3 hours ago, jbartlett said:
Anybody try creating a new Windows 10 VM under RC4? I had a DEVIL of a time trying to get it to work. The install would copy the files, go all the way to the "Finishing up" and then display any one of several errors - corrupted media, cannot set local, could not load a driver, could not continue, or just jump right back to the setup button at the start.
Rolled back to 6.7.2 and poof - installation went like a champ though I did have to edit/save because using the RC4 built XML gave an invalid machine type error.
What vdisk type did you choose? RAW or qcow?
-
6.9.0-beta25 Creation of New VM yields drive space of 0mb for client OS
in Prereleases
Posted
I can confirm the qcow file is indeed behaving correctly for me during the install process. Sorry for the initial alarm; I didn't go beyond checking the apparent size and allocated size of the created file, which obviously behaves quite differently than a raw file.
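For anyone wanting to check the same thing, this is roughly what I mean by apparent vs. allocated size (the vdisk path is just an example, point it at your own):

VDISK=/mnt/user/domains/Win10/vdisk1.qcow2

qemu-img info "$VDISK"            # "virtual size" vs. "disk size"
du -h --apparent-size "$VDISK"    # apparent size, what ls reports
du -h "$VDISK"                    # blocks actually allocated on disk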