Comments posted by trott
-
1 hour ago, winglam said:
Thank you!!!! Does 6.12.0 rc2 support dataset through command line?
Yes, you can always use the command line.
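For example, a minimal sketch of managing datasets from the CLI (the pool name `cache` and dataset names here are hypothetical; these commands assume an existing ZFS pool):

```shell
# Create a dataset on an existing pool named "cache"
zfs create cache/appdata

# Properties can be set at creation time, e.g. compression
zfs create -o compression=lz4 cache/backups

# List all datasets on the pool recursively
zfs list -r cache
```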
-
Where can I find the setting for "scheduled trimming of ZFS pools"?
-
Yes, I just noticed the same thing. I'm quite sure it worked before 6.9.2, as I had to manually add pcie_aspm=off to get rid of some PCIe Bus Error messages.
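For reference, on Unraid a kernel parameter like this goes on the append line in `/boot/syslinux/syslinux.cfg` (a sketch; your boot menu entries may differ):

```
label Unraid OS
  menu default
  kernel /bzimage
  append pcie_aspm=off initrd=/bzroot
```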
-
This is the reason I asked in the thread whether we can attach a VF to a container; it could help solve the issue and meet the requirement.
-
I reformatted the cache, and now the issue is gone.
-
Are you sure BTRFS is good for VM disks?
-
Currently I'm putting Docker and the VMs on an unassigned device. How can I convert the unassigned device into a cache pool?
-
May I suggest using the 5.6 kernel (even an RC one) for the 6.9 beta/RC? 5.6 includes a lot of new features (WireGuard, for example), so it is better to test it earlier.
-
My log is also full of similar entries. In Kodi, when you click a movie to play it, sometimes it says the file is not there, but sometimes the same movie will play. There was no such issue on 6.7.2; it would be better to upgrade NFS to support V3.
Dec 27 17:33:18 Tower rpcbind[10637]: connect from 192.168.2.21 to getport/addr(nfs)
Dec 27 18:26:20 Tower rpcbind[3941]: connect from 192.168.2.21 to dump()
Dec 27 18:26:20 Tower rpcbind[3942]: connect from 192.168.2.21 to getport/addr(mountd)
Dec 27 18:26:27 Tower rpcbind[4226]: connect from 192.168.2.21 to dump()
Dec 27 18:26:27 Tower rpcbind[4227]: connect from 192.168.2.21 to getport/addr(mountd)
Dec 27 18:26:31 Tower rpcbind[4387]: connect from 192.168.2.21 to dump()
Dec 27 18:26:31 Tower rpcbind[4388]: connect from 192.168.2.21 to getport/addr(mountd)
Dec 27 18:26:36 Tower rpcbind[4549]: connect from 192.168.2.21 to dump()
Dec 27 18:26:36 Tower rpcbind[4550]: connect from 192.168.2.21 to getport/addr(mountd)
Dec 27 18:26:37 Tower rpcbind[4590]: connect from 192.168.2.21 to dump()
Dec 27 18:26:37 Tower rpcbind[4591]: connect from 192.168.2.21 to getport/addr(mountd)
Dec 27 18:26:38 Tower rpcbind[4592]: connect from 192.168.2.21 to getport/addr(mountd)
Dec 27 18:26:38 Tower rpc.mountd[5447]: authenticated mount request from 192.168.2.21:668 for /mnt/user/media
-
Actually I have the same issue, and it should be the same one as reported below.
-
Any news on 6.9 RC1?
-
-
To be honest, I don't think it is an Unraid issue. My best guess is that it is a Docker issue with how Docker manages the network; it would be better to report it to the Docker team.
-
1 hour ago, Johan1111 said:
I restarted multiple times like 5 and still got the same all the time. never happened with 6.7.3-rc4.
Then I might have a different problem. I will grab the log the next time I upgrade my Unraid.
-
6 hours ago, nagelm said:
Edit: No issues with r7 except the macvlan issue which has been described by others.
Is a linux kernel update to 5.4 planned for the 6.8 branch. As a gen 3 Ryzen user there are some useful improvements in there for me.
Apologies if it has already been discussed, I tried searching without much luck. Either I failed or the parsing of '5.4' was tricky.
I read in another post that 6.9 RC1 will use 5.4.
-
1 hour ago, Johan1111 said:
Still with the same issue IPv4 address not set
unRAID Server OS version: 6.8.0-rc7
IPv4 address: not set
IPv6 address: not set
Tower login:
I'm attaching my log files. I remember having the same issue after I powered off and restarted the server: on first boot, all IP addresses were not set; after I rebooted the server, the system got an IP of 169.x.x.x; after rebooting again, the system got the correct IP.
I thought I was the only one with this issue, since I never saw anyone report a similar one. Would you please try rebooting to see if you can duplicate my issue?
-
13 hours ago, limetech said:
I doubt it, those patches are quite extensive, more than I want to do in an -rc. Don't worry, 6.9-rc0 should be out about same time as 5.4 kernel.
Thanks. Should the 5.4 kernel be released in 2 days?
-
16 hours ago, limetech said:
Please post link to the patch.
Thanks. I got the information from the link below; I did not search for those patches myself, as I don't know how to apply them.
https://www.reddit.com/r/ASUS/comments/cw74rl/asus_pro_ws_x570ace_ecc_compability/
-
11 hours ago, dgriff said:
I have qbittorrent running and 1 or 2 downloads running at a given time writing to a share through my SSD cache. Approx every 15-20 seconds, all file I/O will stop and the entire (Windows) system will hang (presumably as it waits for some sort of I/O to complete). Any windows explorer sessions that are active in the shares will change to "not responding" for approx 10-20 seconds, and will eventually come back to life.
If I'm playing back a MKV stream directly through a share (using MPC), video playback will completely halt at the same time as the I/O freezes, then when it resumes, the audio will pick-up first, and it will run the video fast to catch up to the audio stream. After another short time 10-20 seconds, the entire operation repeats.
Unfortunately not much else to add right now, but if it happens again in the next RC, I'll camtasia the operation and post diagnostics and a camtasia capture of what I'm seeing.
Exactly the same operations when 6.7.2 is restored works perfectly with no timeouts.
Are you running qBittorrent in a VM? I noticed the same issue when writing data from a Windows VM to the cache: the VM completely hangs after writing 100-200 MB of data, and the CPU usage of qemu-system-x86 is extremely high.
-
3 minutes ago, Young_Max said:
root@Tower:~# sensors-detect # sensors-detect revision 6284 (2015-05-31 14:00:33 +0200)
The sensors-detect revision in 2015. Is there any update?
Actually, you can download the latest one from GitHub; it is just a Perl script.
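For example (a sketch; the raw-file path assumes the current layout of the lm-sensors GitHub repository):

```shell
# Fetch the latest sensors-detect script from the lm-sensors repo
wget https://raw.githubusercontent.com/lm-sensors/lm-sensors/master/prog/detect/sensors-detect
chmod +x sensors-detect

# Run it as root; it interactively probes for supported sensor chips
./sensors-detect
```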
-
1 hour ago, testdasi said:
For one second I read that as "users (wives)" 😅
On serious note, I wonder if it's HBA-related (maybe new kernel doesn't like the driver?) since the other user reporting strange issues also appears to use a HBA too. This seems to happen every time there's a major kernel upgrade. Defo need Diagnostics.
It might be a Samba issue. I remember having a strange issue after updating to RC1: I copied a file from my PC to a share and tried to rename it, and it told me the file was already open and could not be renamed.
-
2 hours ago, emuhack said:
I have upgraded from RC2 to RC4 - and nothing but issues. I know its a pre release but i would skip this release
- HBA issues (RAID bus controller: Adaptec Series 7 6G SAS/PCIe 3)
- Drive Errors in the upwords of 5k
- I had my docker image corrupt (I had a backup, thank goodness)
- my permissions corrupted on my array.
- IO errors when writing to the Array
I downgraded to 6.7.2 - and will wait
I don't think RC2 to RC4 changed enough to cause all your issues; it must be something else.
-
26 minutes ago, Rich Minear said:
12:10 Central Time: The Plex database corrupted again. This was 22 minutes after Plex was restarted with a known good database file.
I see no reason to restart the docker containers at this point, as it will continue to corrupt a database without any changes being made to the Unraid system.
Diagnostics are attached....
I will leave the system down for a while...until I receive some direction. I'm willing to try what you want tested....
swissarmy-diagnostics-20191018-1712.zip 90.49 kB
1. Will Plex corrupt if it is the only Docker container running?
2. What is your Plex path mapping?
-
7 hours ago, johnnie.black said:
Most likely a bios issue with new kernel, did you look for a bios update?
The BIOS is the newest; it is the EDAC bug for Ryzen 2. There are some patches in kernel 5.4, so I think I have to wait.
-
1 hour ago, bastl said:
Quick question to all the people having issues. Is there anyone with a Ryzen or TR4 system having issues? I am on a first gen TR4 and never had any problems. Are only Intel systems affected by this, maybe because of the Spectre Meltdown mitigations??? Just an idea.
I'm on a 3700X and I have the same issue.
In Prereleases: Unraid OS version 6.12.0-rc2 available
I only have this one, so is it working on ZFS pools now?