Report Comments posted by JonathanM
-
Probably a browser issue. Try a different browser, try incognito, etc.
-
10 minutes ago, Gnomuz said:
If you have a free pcie 1x slot on your motherboard, that will do the trick. The other route would be a USB to rs232 adapter, but there are many compatibility issues as most of these so-called FTDI adapters are Chinese cheap crap. So stick with the add-on card if you can.
https://www.tripplite.com/keyspan-high-speed-usb-to-serial-adapter~USA19HS
That specific model has a very good reputation.
-
In all seriousness, if a character is going to break the login, it shouldn't be accepted as a valid entry. @bonienl?
-
So don't use extended characters in the password.
Length is more important than complexity anyway.
-
52 minutes ago, bonienl said:
Actually the devs can do something about it
I made a correction to not flag mover messages.
Will it still flag mover failure messages?
-
Changed Status to Solved
Changed Priority to Other
-
49 minutes ago, shaunmccloud said:
Do I just make a list of my plugins and install them one at a time until I find the one that causes the issue?
Or use the 50% method: install half and see how it goes. Every time it crashes, you eliminate half of the remaining suspects.
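To see why halving beats one-at-a-time, here is a toy sketch of the search (plugin names are invented for the demo). Each round keeps only the half that still reproduces the crash, so 8 suspects need at most 3 reboots instead of up to 8:

```shell
# Toy bisection demo -- plugin names are hypothetical.
suspects=(pluginA pluginB pluginC pluginD pluginE pluginF pluginG pluginH)
rounds=0
while [ "${#suspects[@]}" -gt 1 ]; do
    half=$(( ${#suspects[@]} / 2 ))
    # pretend the crash persists with the first half installed
    suspects=("${suspects[@]:0:half}")
    rounds=$(( rounds + 1 ))
done
echo "culprit: ${suspects[0]} found after $rounds rounds"
```

In practice each "round" means moving half of the plugin files out of the way, rebooting, and observing whether the crash still happens.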
-
35 minutes ago, mrforsythexter said:
Is there a better place for feature requests?
The feature request section of the forum?
-
May already be fixed in latest release candidate of 6.9, please retest.
-
On 1/13/2021 at 5:18 PM, TechGeek01 said:
Was using the VGA output on the card, and the monitor constantly displayed the last thing that was on screen when I shut it down (setup screen of a Windows ISO) as if it was actively being told to display it still.
That's actually fairly typical; the video card just renders whatever is in the framebuffer. Until something flushes the buffer or otherwise resets the card, it will keep showing whatever was last written to those memory addresses. If you kill the VM and nothing else takes control of the card, it just stays in its last state. On bare-metal machines, the OS or firmware typically takes back control of the video card after shutdown.
-
Try removing the recycle bin plugin.
-
Changed Priority to Other
-
14 minutes ago, Andiroo2 said:
How did you get your docker and VM's to move over to the new cache pool? Did you have to manually move the files between pool A and pool B? I have set my appdata and system shares to prefer the new 2nd cache pool but nothing is happening when I run the mover.
At the moment the most hands-off method is to do it in two steps: first to the array, then back to the new pool.
Be sure you have disabled both the Docker and VM services; if you still see the VM and Docker menu items in the GUI, they aren't disabled.
Set the shares you want to move to cache: yes, then run the mover. After that completes, set them to cache: prefer with the new pool selected, and run the mover again.
Alternatively you could move the files manually; again, be sure the services are disabled.
-
14 minutes ago, Gnomuz said:
Well, I just copied usbreset to /boot/config/, but chmod +x /boot/config/usbreset sends no error, but doesn't change the file permissions (still -rw-------). Sounds like a noob question, but I'm a noob in linux !!!
You MUST copy the file to somewhere other than /boot. Linux permissions aren't honored in any path under /boot because the flash drive is FAT32, which doesn't support Linux file permissions.
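A quick demonstration with a throwaway script (paths are illustrative): the execute bit sticks on a real Linux filesystem like /tmp, whereas the same chmod silently does nothing under the FAT32 /boot:

```shell
# Create a tiny script on a real Linux filesystem and mark it executable.
demo=/tmp/usbreset_demo
printf '#!/bin/sh\necho "device reset"\n' > "$demo"
chmod +x "$demo"    # works here; a no-op on the FAT32 flash
"$demo"             # prints: device reset
```

The same `chmod +x` against a copy living in /boot/config would report no error but leave the permissions unchanged, exactly as described above.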
-
Just now, JorgeB said:
Worse than that, AFAIK there's no easy way of manually changing a VM from autostarting, hence the no autostart with manual array start, so than it can be edited in case it's needed.
Yeah, that's why I recommend rolling your own autostart script with easily edited conditionals.
The built-in brute-force autostart has severe limitations, IMHO. Squid's plugin's container autostart, with its network and timing conditionals, should be the model for Unraid's built-in autostart for both VMs and containers. It seems like we took a step back when order and timing were added to Unraid, prompting the deprecation of Squid's plugin.
-
6 hours ago, TechGeek01 said:
Can a change be made so that even when starting the array manually, both Docker and VMs respect the chosen autostart options?
Personally I don't rely on Unraid's built-in VM autostart, as I have external conditions that need to be met before some of my VMs come up. Scripting VM startup is very easy; virsh commands are well documented.
Since you have a use case for starting your VM regardless of array autostart, I suggest using a simple script to start it. However, as JorgeB noted, include a conditional in the script so you can easily disable the autostart when needed for troubleshooting. It's very frustrating to get stuck in a loop that requires manually editing files on Unraid's USB drive to recover.
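A minimal sketch of such a gated autostart script (the VM name, flag-file path, and readiness check are all assumptions; adapt them to your setup). Deleting the flag file disables autostart without editing anything on the flash drive, which avoids the recovery loop described above:

```shell
#!/bin/bash
# Hypothetical autostart helper, e.g. run at array start via a user script.
start_vm_if_enabled() {
    local flag="$1" vm="$2"
    if [ -f "$flag" ]; then
        # Optional external condition: wait until some host answers ping.
        # until ping -c1 -W2 192.168.1.50 >/dev/null 2>&1; do sleep 5; done
        virsh start "$vm"
    else
        echo "autostart disabled, skipping $vm"
    fi
}

# Flag path and VM name are examples only.
start_vm_if_enabled /boot/config/vm_autostart_enabled "MyVM"
```

To disable autostart while troubleshooting, remove the flag file from any shell or over the network share; to re-enable, `touch` it again.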
-
1 minute ago, Eviseration said:
So, what are we supposed to do until 6.9 is officially released? It would be nice if I could get my actual "production" cluster to work, and I'm not really a fan of running my one and only Unraid server on beta software (I'm not at the point where I have a play machine yet).
I'm not sure what you are referring to, as Unraid has never released a version with Nvidia drivers.
If you are referring to the community-modded version of 6.8.3, I would contend that the official 6.9 beta is far more "production" ready than a community mod.
-
6 minutes ago, bigmac5753 said:
excuse the noob question but....
Am I able to assign a card to a docker container and a VM? I don't mean simultaneously, but let's say plex is using it for transcoding then a VM takes it over when it starts.
Only if you stop the array, change the settings, and reboot.
-
5 hours ago, Squid said:
Very soon(tm)
Soon™
-
55 minutes ago, Marshalleq said:
I read this 'theres no official timeline' thing a lot.
Ok, let me be a little more clear. There is no publicly accessible official timeline. What Limetech does with its internal development is kept private, for many reasons.
My speculation is that the wrath of users over having no timeline is tiny compared to the wrath over multiple missed deadlines. In the distant past loose timelines were issued, and the flak that ensued was rather spectacular, IIRC. Rather than getting beaten up over progress reports, it's easier for the team to stay focused internally and release when ready than to try to justify delays.
When you have a very small team, every man-hour is precious. Keeping the masses up to date on every little setback doesn't move the project forward; it just demoralizes, with all the negative comments. Even "constructive" requests for updates take time to answer, and it's not up to us to say "well, it's only a small amount of time, surely you can spare it."
The team makes its own choices on time management; it's best just to accept that and be happy when the updates come.
-
2 minutes ago, 1812 said:
But if it was later, how could we really know?
"Time is an illusion. Lunchtime doubly so."
-
6 minutes ago, Moka said:
21 hours ago, sheiy said:
when will the 6.9.0 release
Hahaha, I have the same question!
Soon™
Seriously, there is no official timeline, asking isn't going to make one appear. When it's done it will be released, no sooner, no later.
-
7 hours ago, SavellM said:
can we pool the pools together?
So like have 2 pools, with 4 parity drives (2 each pool) and then have increased read/write speed to the mechanical HDD's?
No. Each pool still uses the same strategy of individual file systems; there is no striping between pools.
You could, however, use the multiple-pools feature to run a BTRFS RAID profile that stripes within a specific pool. So you could have an SSD pool and an HDD pool, while the traditional Unraid parity array(s) operate pretty much as they always have.
[6.8.3] docker image huge amount of unnecessary writes on cache
in Stable Releases
Posted
Which is the primary reason why Unraid can't quickly "fix" this: they didn't author the Docker system.