Report Comments posted by itimpi
-
6 hours ago, dalben said:
OK, thanks. Might have to look at what settings I played with. Only thing I did was enable reconstruct write so I'll disable that and have a look.
That could easily have that effect as a write to any drive will require all disks to be spinning.
-
3 minutes ago, Warrentheo said:
I also have this bug, rc5 to rc6, VM's show fine until one gets launched (either from GUI or CMDL), then it goes blank... I only have one VM, and so I am willing to rebuild the libvirt partition for testing if needed...
The cause has been diagnosed and it is intended that the fix be part of RC7.
-
You can only run the memtest included with Unraid if booting in legacy mode. If you want a version that can be used when booting in UEFI mode then you need to download it yourself from the memtest web site.
-
Try booting in GUI mode. There have been reports of some systems only being able to boot in GUI mode, although the cause has never been identified.
-
I believe this is probably an instance of a long-standing issue: once you have made any edits in XML mode, you can no longer edit that VM in form mode without potentially losing the edits made in XML mode. I have wondered why a pop-up warning to this effect is not displayed when you try to switch back to form mode from XML mode.
This is slightly aggravated by the fact that the toggle for XML mode is global rather than specific to a VM. If the underlying problem cannot easily be fixed, it would be useful if the system remembered the last edit mode used for each VM, so that when you go into Edit mode you are taken straight to XML mode if that is what you last used.
-
32 minutes ago, mata7 said:
hi can someone please help me, i have a lot of folders on my /mnt/user0 , where should i move this folders to be safe for the future, thanks
Not quite sure what you mean by this? The /mnt/user0 location is just a view that is a subset of /mnt/user, omitting any files/folders that are on the cache. As such you have no files in that location that need moving.
-
1 minute ago, dalben said:
Yes, though my scripting skills aren't great but the ability to shutdown dockers and plugins before a parity check would be handy. Though I'm assuming doing so would speed up the parity check.
I suspect that for most people this is not a major criterion, as performance-heavy workloads are probably running mainly from the cache drive, which is not affected by the parity check. There is also the fact that if you use the plugin to run parity check increments only outside prime time, the parity check speed may be less critical.
-
1 hour ago, dalben said:
I'll have a look at the plugin as if it allows me to stop selected services and docker containers when doing the parity check it might be handy.
I am afraid stopping/starting services is not part of the plugin, as its primary purpose was just to avoid parity checks hitting system performance during prime time. It seemed a step too far at the moment, and rather difficult to implement in a generalized fashion.
What I have been thinking of adding is an ability to run custom scripts on parity check start/resume and pause/end. If I get this in place so you can do your own stop/starts is this likely to be of use?
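To make the idea concrete, a user-supplied hook might look something like the sketch below. This is purely hypothetical: the feature and the event names are not implemented, and the function only prints the docker commands it would run rather than executing them.

```shell
# Hypothetical parity-check hook sketch: stop named containers when a
# check starts/resumes and restart them on pause/end. Dry-run only: it
# echoes the commands instead of invoking docker, since the plugin
# feature described above is only proposed at this point.
parity_hook() {
    local event="$1"; shift
    case "$event" in
        start|resume) for c in "$@"; do echo "docker stop $c"; done ;;
        pause|end)    for c in "$@"; do echo "docker start $c"; done ;;
    esac
}

# e.g.  parity_hook start plex sonarr   # would stop both containers
```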
-
This has been the case ever since the pause feature was introduced.
Not a fix, but if you have the Parity Check Tuning plugin installed it 'patches' the history entry to give the correct details for the whole run (as well as providing some other features you may find useful). Having said that, I perhaps need to check that this is always the case when manual pause/resumes have been done rather than automated ones. I think it is handled properly, but I am not 100% sure.
-
Quote
I initially had my diagnostics folder uploaded, but it was pointed out to me there was some personal information I'm not comfortable posting publicly. I will happily provide the zip to the appropriate staff!
It would be interesting to know what type of information this was as the Diagnostics are meant to be anonymised to avoid exactly this issue. Maybe there is some further tweak needed.
-
If you are running 6.7.2 (or later) you might want to enable the syslog server under Settings to capture the log to a persistent location so it survives a reboot.
-
24 minutes ago, WCA said:
Can I post diagnostics.zip from 6.7.2? I'm hesitant to upgrade to RC4 again in fear of data loss
Not much point! The whole idea is to try and work out what is happening in the 6.8.0 rc.
-
4 minutes ago, Rich Minear said:
When I attempt to run that cmd (with the array stopped), I receive an error message:
/usr/local/sbin/mdcmd: line 11: echo: write error: Invalid argument
Are you on rc4 when you try this? I have a feeling that is a new option in rc4.
-
32 minutes ago, scubieman said:
Does RC3 have kernel above 4.2? I want to use my 9th gen intel integrated graphics for transcoding.
i915 driver supporting the i9-9700k igpu
You should check the release notes for a release to determine the kernel used. Most new releases of Unraid have an updated kernel, so that is a moving target as releases progress.
-
I am not seeing anything like that on my system! What theme are you using as that may be relevant?
-
If you want to be able to use SMB1 then you need to enable Settings -> SMB Settings -> Use NetBIOS.
-
You can always do a manual install of any release by downloading the zip file from the Limetech site and extracting all the bz* type files to the root of the flash drive (overwriting those already there).
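As a sketch, the manual install amounts to something like the following (the function name and paths are illustrative placeholders; /boot is the usual flash mount point on a running server, but adjust to wherever your flash is mounted):

```shell
# Hedged sketch of a manual Unraid install: copy the bz* files from an
# extracted release zip to the root of the flash drive, overwriting the
# existing ones. manual_install and the paths are illustrative only.
manual_install() {
    local zipdir="$1"   # directory where the release zip was extracted
    local flash="$2"    # flash drive mount point (normally /boot)
    cp "$zipdir"/bz* "$flash"/ && sync   # sync so writes hit the flash
}

# Example: manual_install /tmp/unraid-release /boot
```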
-
Does that mean it is likely to remain an outstanding issue? I'd just like to know for future reference.
I am not too worried, as I can easily work around it by using a different browser or PuTTY to get to the Unraid command line. If it remains an issue you might want to add it to any future release notes.
-
What subnet is your phone on when this happens, and what are the subnets for the Unraid server and the dockers?
I think I saw a report of similar symptoms when the remote subnet and the subnet used by Unraid were the same, as this resulted in a routing issue. If this is the case then it is going to need looking into for the most robust solution, as the user rarely has any control over the remote subnet.
Of course, it could be that my memory is faulty and your issue is something completely different.
-
13 minutes ago, dee31797 said:
Just out of curiosity though, what would would happen if you remounted the flash, and when does the mount happen in relation to the go file?
If you think about it, the mount must happen before the 'go' file executes, as otherwise the 'go' file would not be found in the first place.
In fact the mount must happen very early in the boot sequence as the flash needs to be mounted to enable Unraid to read all its configuration information.
-
15 minutes ago, dee31797 said:
Just curious, if the mount time is after the go file, can you remount with different permissions? or if mount time is before go file can you remount with user scripts with different permissions? thanks
Why bother? You can always use the 'go' file to copy the scripts to a location from which they CAN be executed.
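For example, a couple of lines in the 'go' file would do it. This is a minimal sketch: the source and destination paths (and the helper name) are assumptions, not fixed locations, so adjust to taste.

```shell
# Sketch for the 'go' file: the flash is FAT-formatted, so nothing on it
# can carry the executable bit. Copy the scripts somewhere executable and
# set the x bit there. /boot/custom and /usr/local/bin are illustrative
# choices only; install_scripts is a hypothetical helper name.
install_scripts() {
    local src="$1" dest="$2"
    mkdir -p "$dest"
    cp "$src"/*.sh "$dest"/
    chmod +x "$dest"/*.sh
}

# e.g. in /boot/config/go:  install_scripts /boot/custom /usr/local/bin
```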
-
I am guessing that this problem is caused by the following change from the release notes (which was made to increase security):
Quote
Encryption - an entered passphrase is not saved to any file. Also included an API for Unassigned devices plugin to open encrypted volumes.
This probably means that Unassigned Devices can no longer find the passphrase and is going to need some rework before it can support encrypted disks again using the API mentioned.
-
3 minutes ago, Darqfallen said:
I unplugged it, server still randomly restarts.
Have you checked that all fans are working? Random restarts can happen if the CPU is overheating. Other common causes are power supply and RAM issues.
-
Exactly which release were you running? Since you only have a single disk I assume that means that appdata is on an array drive?
Posted in "Disks missing after upgrading to 6.7.0" (Stable Releases)
Disabling IOMMU means that you cannot run VMs with hardware pass-through. You can still run VMs that do not need hardware pass-through. Marvell controllers also seem to be prone to dropping disks offline for no discernible reason.
The issue seems to be a compatibility problem between the Marvell drivers and the latest Linux kernels. The fix (if one is possible) is outside Limetech's control. If you want to run VMs with hardware pass-through then the easiest thing to do is to stop using the Marvell controller and switch to an LSI-based one, which works fine with current Linux kernels.