L0rdRaiden
Posts: 568
Report Comments posted by L0rdRaiden
-
-
I have some macvlan errors but no crashes so far. With 6.11 everything was fine.
Regarding the fix of disabling bridging, is that even an option? If I disable bridging, how can I give a static IP to every Docker container?
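For reference, static container IPs can be assigned without Unraid's bridge by using a user-defined Docker macvlan network. A minimal sketch, assuming a parent NIC named `eth0`, a 192.168.1.0/24 LAN, and placeholder container/network names (none of these are taken from the actual setup in this thread):

```shell
# Create a macvlan network bound directly to the NIC (no br0 bridge involved).
# Subnet, gateway, and parent interface are illustrative placeholders.
docker network create -d macvlan \
  --subnet=192.168.1.0/24 \
  --gateway=192.168.1.1 \
  -o parent=eth0 macvlan_net

# Attach a container with a fixed address on that network.
docker run -d --name pihole \
  --network macvlan_net --ip 192.168.1.50 \
  pihole/pihole
```

Each container then appears on the LAN with its own IP. One known macvlan caveat: the host itself cannot reach these containers through the parent interface unless an extra macvlan shim interface is added on the host.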
-
Same problem here. I have never had call trace issues until this version, not in the previous versions (6.11, 6.10, ...).
I have been using Unraid for a long time, and macvlan issues come and go with different versions, so it is hard for me to believe that the problem is only with the kernel and that the kernel developers fix it and break it again every few versions.
I have attached my diagnostics in case they could be useful.
-
3 hours ago, L0rdRaiden said:
Quick question.
I have 4 disks in the array. I want to create a ZFS mirror pool with 2 of them and keep the other 2 drives for other shares.
Should I create the ZFS pool in the array section or in the pool section of the Unraid webUI?
If I create the ZFS pool in the array, I guess I should move the other 2 disks to the pool section as single disks, right?
Or can I keep them in the array?
Just for info: as pools, I have 2 NVMe drives for Docker and VMs, working as single disks.
What is the best approach to avoid conflicts in the future?
My server is down until I get an answer.
3 hours ago, trurl said:
The array is still individual disks, not pools. Each array disk can be a separate zfs disk, but not a pool.
OK, so the ZFS mirror should go in the pool section. And for the 2 single disks I have in the array, I guess it would make sense to move them to the pool section as single disks as well, to be able to use exclusive mode, right?
-
Quick question.
I have 4 disks in the array. I want to create a ZFS mirror pool with 2 of them and keep the other 2 drives for other shares.
Should I create the ZFS pool in the array section or in the pool section of the Unraid webUI?
If I create the ZFS pool in the array, I guess I should move the other 2 disks to the pool section as single disks, right?
Or can I keep them in the array?
Just for info: as pools, I have 2 NVMe drives for Docker and VMs, working as single disks.
What is the best approach to avoid conflicts in the future?
My server is down until I get an answer.
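For context, outside the Unraid GUI a two-disk mirror like the one described would be created with plain ZFS tooling roughly as follows. The pool name `tank` and the device paths are placeholders for illustration; in Unraid the pool section of the webUI normally performs this step for you:

```shell
# Create a pool backed by a single mirror vdev of two disks
# (pool name and device paths are placeholders).
zpool create tank mirror /dev/sdb /dev/sdc

# Verify the layout: both disks should appear under one "mirror-0" vdev.
zpool status tank
```

With a mirror vdev, either disk can fail without data loss, and the pool can later be grown by adding more mirror vdevs.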
-
Is there a way to change the record size when creating the pool and datasets?
https://klarasystems.com/articles/tuning-recordsize-in-openzfs/
As per the "documentation", each share will become a dataset in the pool. I guess there is not yet a way to set the record size.
Is this the right way to change the configuration after creation with Unraid?
https://blog.programster.org/zfs-record-size
Considering that media is a current share to migrate:
sudo zfs set recordsize=1M mnt/user/media
Will the paths of my current shares be kept after creating the ZFS pool?
Will each share assigned to a ZFS pool become a dataset?
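One note on the command above: `zfs set` expects a dataset name, not a mount path like `mnt/user/media`. Assuming the pool is called `tank` and the share's dataset is `tank/media` (both placeholder names), the property change would look like:

```shell
# recordsize only applies to blocks written AFTER the change;
# files already on the dataset keep their old record size until rewritten.
zfs set recordsize=1M tank/media

# Confirm the effective value and where it was set.
zfs get recordsize tank/media
```

Because the property is not retroactive, an existing media share only benefits fully if the files are copied or migrated onto the dataset after the change.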
-
A must read if you want to start with a migration to ZFS
https://jrs-s.net/2015/02/06/zfs-you-should-use-mirror-vdevs-not-raidz/
-
-
-
7 minutes ago, wgstarks said:
Yes
So why did Unraid 6.10 uninstall this plugin?
-
On 3/20/2022 at 1:16 AM, Gingko_2001 said:
What do you mean by manually downloading?
Do you mean using this app?
Is this app compatible with 6.10?
If the manual method is a different one, could you please specify how you did it?
Thanks
-
Will the 6.10 version include all the docker compose packages and dependencies by default?
-
3 hours ago, bonienl said:
The display of disk information is state aware.
This means when you open the display info, it stays open each time when you re-visit the VM page.
Simply close it again to keep it hidden.
Not a bug or glitch!
LOL, I just discovered that you can hide or show the info by clicking on the VM name. I didn't know that when I opened this thread; in fact, I spent a few minutes looking for how to hide it before posting.
-
1 hour ago, itimpi said:
It does not happen on my VM’s tab so I suspect there must be some other factor at play.
Is there a way to troubleshoot this? By looking at the HTML code or something?
-
10 hours ago, Womabre said:
Running on a 3700X here. I can confirm this bug is fixed.
What was this problem about?
Does it require QEMU 5.1?
-
I had these errors in beta 25; an almost clean reinstall of beta 25 solved them.
The problem was described here, and I was not the only one with it. No one could help.
Now I have upgraded to beta 29 and I get these errors again:
Sep 29 18:25:42 Unraid ntpd[1794]: receive: Unexpected origin timestamp 0xe31dde87.044110ba does not match aorg 0000000000.00000000 from [email protected] xmt 0xe31dde86.72b4c6d5
Sep 29 18:48:24 Unraid ntpd[1794]: receive: Unexpected origin timestamp 0xe31de3d8.61c48512 does not match aorg 0000000000.00000000 from [email protected] xmt 0xe31de3d7.d98ce902
Sep 29 20:05:17 Unraid kernel: mdcmd (59): spindown 2
Sep 29 20:23:39 Unraid ntpd[1794]: receive: Unexpected origin timestamp 0xe31dfa2b.d87b55b4 does not match aorg 0000000000.00000000 from [email protected] xmt 0xe31dfa2b.514764b2
Sep 29 20:52:38 Unraid ntpd[1794]: receive: Unexpected origin timestamp 0xe31e00f7.2759964c does not match aorg 0000000000.00000000 from [email protected] xmt 0xe31e00f6.8a548cf6
Sep 29 21:26:42 Unraid ntpd[1794]: receive: Unexpected origin timestamp 0xe31e08f3.2c8665f7 does not match aorg 0000000000.00000000 from [email protected] xmt 0xe31e08f2.a290931a
Sep 29 22:07:02 Unraid ntpd[1794]: receive: Unexpected origin timestamp 0xe31e1267.285aaa3f does not match aorg 0000000000.00000000 from [email protected] xmt 0xe31e1266.9179e8df
Sep 29 22:32:00 Unraid webGUI: Successful login user root from 10.10.10.30
Sep 29 22:32:04 Unraid emhttpd: cmd: /usr/local/emhttp/plugins/dynamix/scripts/tail_log syslog
Sep 29 22:42:48 Unraid emhttpd: cmd: /usr/local/emhttp/plugins/dynamix/scripts/tail_log syslog
Sep 29 22:44:20 Unraid emhttpd: shcmd (243): ln -sf /usr/share/zoneinfo/Europe/Paris /etc/localtime-copied-from
Sep 29 22:44:20 Unraid emhttpd: shcmd (244): cp /etc/localtime-copied-from /etc/localtime
Sep 29 22:44:20 Unraid emhttpd: shcmd (245): /usr/local/emhttp/webGui/scripts/update_access
Sep 29 22:44:20 Unraid root: sshd: no process found
Sep 29 22:44:21 Unraid emhttpd: shcmd (246): /etc/rc.d/rc.ntpd restart
Sep 29 22:44:21 Unraid ntpd[1794]: ntpd exiting on signal 1 (Hangup)
Sep 29 22:44:21 Unraid ntpd[1794]: 127.127.1.0 local addr 127.0.0.1 -> <null>
Sep 29 22:44:21 Unraid ntpd[1794]: 213.251.52.234 local addr 10.10.10.5 -> <null>
Sep 29 22:44:21 Unraid ntpd[1794]: 147.156.7.26 local addr 10.10.10.5 -> <null>
Sep 29 22:44:21 Unraid ntpd[1794]: 193.145.15.15 local addr 10.10.10.5 -> <null>
Sep 29 22:44:21 Unraid root: Stopping NTP daemon...
Sep 29 22:44:22 Unraid ntpd[25214]: ntpd [email protected] Sat Aug 15 18:24:48 UTC 2020 (1): Starting
Sep 29 22:44:22 Unraid ntpd[25214]: Command line: /usr/sbin/ntpd -g -u ntp:ntp
Sep 29 22:44:22 Unraid ntpd[25214]: ----------------------------------------------------
Sep 29 22:44:22 Unraid ntpd[25214]: ntp-4 is maintained by Network Time Foundation,
Sep 29 22:44:22 Unraid ntpd[25214]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Sep 29 22:44:22 Unraid ntpd[25214]: corporation. Support and training for ntp-4 are
Sep 29 22:44:22 Unraid ntpd[25214]: available at https://www.nwtime.org/support
Sep 29 22:44:22 Unraid ntpd[25214]: ----------------------------------------------------
Sep 29 22:44:22 Unraid ntpd[25216]: proto: precision = 0.040 usec (-24)
Sep 29 22:44:22 Unraid ntpd[25216]: basedate set to 2020-08-02
Sep 29 22:44:22 Unraid ntpd[25216]: gps base set to 2020-08-02 (week 2117)
Sep 29 22:44:22 Unraid ntpd[25216]: Listen normally on 0 lo 127.0.0.1:123
Sep 29 22:44:22 Unraid ntpd[25216]: Listen normally on 1 br0 10.10.10.5:123
Sep 29 22:44:22 Unraid ntpd[25216]: Listen normally on 2 lo [::1]:123
Sep 29 22:44:22 Unraid ntpd[25216]: Listening on routing socket on fd #19 for interface updates
Sep 29 22:44:22 Unraid ntpd[25216]: kernel reports TIME_ERROR: 0x2041: Clock Unsynchronized
Sep 29 22:44:22 Unraid ntpd[25216]: kernel reports TIME_ERROR: 0x2041: Clock Unsynchronized
Sep 29 22:44:22 Unraid root: Starting NTP daemon: /usr/sbin/ntpd -g -u ntp:ntp
Sep 29 22:44:29 Unraid emhttpd: shcmd (247): ln -sf /usr/share/zoneinfo/Europe/Paris /etc/localtime-copied-from
Sep 29 22:44:29 Unraid emhttpd: shcmd (248): cp /etc/localtime-copied-from /etc/localtime
Sep 29 22:44:29 Unraid emhttpd: shcmd (249): /usr/local/emhttp/webGui/scripts/update_access
Sep 29 22:44:29 Unraid root: sshd: no process found
Sep 29 22:44:30 Unraid emhttpd: shcmd (250): /etc/rc.d/rc.ntpd restart
Sep 29 22:44:30 Unraid ntpd[25216]: ntpd exiting on signal 1 (Hangup)
Sep 29 22:44:30 Unraid ntpd[25216]: 127.127.1.0 local addr 127.0.0.1 -> <null>
Sep 29 22:44:30 Unraid ntpd[25216]: 216.239.35.0 local addr 10.10.10.5 -> <null>
Sep 29 22:44:30 Unraid ntpd[25216]: 216.239.35.4 local addr 10.10.10.5 -> <null>
Sep 29 22:44:30 Unraid ntpd[25216]: 216.239.35.8 local addr 10.10.10.5 -> <null>
Sep 29 22:44:30 Unraid root: Stopping NTP daemon...
Sep 29 22:44:31 Unraid ntpd[25480]: ntpd [email protected] Sat Aug 15 18:24:48 UTC 2020 (1): Starting
Sep 29 22:44:31 Unraid ntpd[25480]: Command line: /usr/sbin/ntpd -g -u ntp:ntp
Sep 29 22:44:31 Unraid ntpd[25480]: ----------------------------------------------------
Sep 29 22:44:31 Unraid ntpd[25480]: ntp-4 is maintained by Network Time Foundation,
Sep 29 22:44:31 Unraid ntpd[25480]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Sep 29 22:44:31 Unraid ntpd[25480]: corporation. Support and training for ntp-4 are
Sep 29 22:44:31 Unraid ntpd[25480]: available at https://www.nwtime.org/support
Sep 29 22:44:31 Unraid ntpd[25480]: ----------------------------------------------------
Sep 29 22:44:31 Unraid ntpd[25482]: proto: precision = 0.040 usec (-24)
Sep 29 22:44:31 Unraid ntpd[25482]: basedate set to 2020-08-02
Sep 29 22:44:31 Unraid ntpd[25482]: gps base set to 2020-08-02 (week 2117)
Sep 29 22:44:31 Unraid ntpd[25482]: Listen normally on 0 lo 127.0.0.1:123
Sep 29 22:44:31 Unraid ntpd[25482]: Listen normally on 1 br0 10.10.10.5:123
Sep 29 22:44:31 Unraid ntpd[25482]: Listen normally on 2 lo [::1]:123
Sep 29 22:44:31 Unraid ntpd[25482]: Listening on routing socket on fd #19 for interface updates
Sep 29 22:44:31 Unraid ntpd[25482]: kernel reports TIME_ERROR: 0x2041: Clock Unsynchronized
Sep 29 22:44:31 Unraid ntpd[25482]: kernel reports TIME_ERROR: 0x2041: Clock Unsynchronized
Sep 29 22:44:31 Unraid root: Starting NTP daemon: /usr/sbin/ntpd -g -u ntp:ntp
Sep 29 22:44:32 Unraid emhttpd: cmd: /usr/local/emhttp/plugins/dynamix/scripts/tail_log syslog
Sep 29 22:50:19 Unraid ntpd[25482]: kernel reports TIME_ERROR: 0x2041: Clock Unsynchronized
Sep 29 22:55:29 Unraid emhttpd: cmd: /usr/local/emhttp/plugins/dynamix/scripts/tail_log syslog
-
unraid-diagnostics-20200807-1751.zip
I uploaded the diagnostics file in case it helps with the troubleshooting.
-
Is anyone else having problems with NTP?
-
I have this "error" too, and a few others mentioned in this post. On beta 25 right now:
Jul 29 17:54:21 Unraid smbd[25083]: [2020/07/29 17:54:21.267529, 0] ../../lib/param/loadparm.c:415(lp_bool)
Jul 29 17:54:21 Unraid smbd[25083]: lp_bool(no): value is not boolean!
Jul 29 17:54:24 Unraid smbd[25083]: [2020/07/29 17:54:24.237445, 0] ../../lib/param/loadparm.c:415(lp_bool)
Jul 29 17:54:24 Unraid smbd[25083]: lp_bool(no): value is not boolean!
Jul 29 17:54:24 Unraid smbd[25083]: [2020/07/29 17:54:24.253927, 0] ../../lib/param/loadparm.c:415(lp_bool)
Jul 29 17:54:24 Unraid smbd[25083]: lp_bool(no): value is not boolean!
Jul 29 17:54:24 Unraid smbd[25083]: [2020/07/29 17:54:24.272429, 0] ../../lib/param/loadparm.c:415(lp_bool)
Jul 29 17:54:24 Unraid smbd[25083]: lp_bool(no): value is not boolean!
Jul 29 17:54:25 Unraid smbd[25083]: [2020/07/29 17:54:25.691188, 0] ../../lib/param/loadparm.c:415(lp_bool)
Jul 29 17:54:25 Unraid smbd[25083]: lp_bool(no): value is not boolean!
Jul 29 18:00:41 Unraid kernel: kvm: already loaded the other module
-
On 12/4/2019 at 11:37 PM, limetech said:
Changed Status to Retest
Same here, using 6.9.0-beta1: many Docker containers with custom IPs, and I have a Windows Server VM.
-
If I click on Dashboard or type https://192.168.1.200/Dashboard,
it sends me to the login page instead: https://192.168.1.200/login
I also get this in the error log, probably not related:
Oct 19 19:13:13 Unraid root: error: /plugins/dynamix.docker.manager/include/UpdateConfig.php: wrong csrf_token
Oct 19 19:13:15 Unraid root: error: /webGui/include/Notify.php: wrong csrf_token
Oct 19 19:13:15 Unraid root: error: /plugins/dynamix.docker.manager/include/UpdateConfig.php: wrong csrf_token
Oct 19 19:13:16 Unraid root: error: /plugins/dynamix.system.temp/include/SystemTemp.php: wrong csrf_token
Oct 19 19:13:16 Unraid root: error: /webGui/include/Notify.php: wrong csrf_token
Oct 19 19:13:23 Unraid kernel: veth1870321: renamed from eth0
Oct 19 19:13:24 Unraid kernel: eth0: renamed from veth1fa4c7e
Oct 19 19:13:37 Unraid root: error: /plugins/dynamix.docker.manager/include/UpdateConfig.php: wrong csrf_token
Oct 19 19:13:41 Unraid root: error: /webGui/include/Notify.php: wrong csrf_token
Oct 19 19:13:41 Unraid root: error: /plugins/dynamix.docker.manager/include/UpdateConfig.php: wrong csrf_token
Oct 19 19:13:41 Unraid root: error: /plugins/dynamix.system.temp/include/SystemTemp.php: wrong csrf_token
Oct 19 19:13:41 Unraid root: error: /webGui/include/Notify.php: wrong csrf_token
Oct 19 19:13:43 Unraid root: error: /plugins/dynamix.docker.manager/include/DockerUpdate.php: wrong csrf_token
Oct 19 19:13:44 Unraid root: error: /plugins/dynamix.docker.manager/include/UpdateConfig.php: wrong csrf_token
Oct 19 19:13:47 Unraid root: error: /plugins/dynamix.docker.manager/include/DockerUpdate.php: wrong csrf_token
Oct 19 19:13:47 Unraid root: error: /plugins/dynamix.docker.manager/include/UpdateConfig.php: wrong csrf_token
Oct 19 19:13:48 Unraid root: error: /plugins/dynamix.docker.manager/include/DockerUpdate.php: wrong csrf_token
Oct 19 19:13:48 Unraid root: error: /plugins/dynamix.docker.manager/include/UpdateConfig.php: wrong csrf_token
Oct 19 19:13:51 Unraid root: error: /plugins/dynamix.system.temp/include/SystemTemp.php: wrong csrf_token
Oct 19 19:13:55 Unraid root: error: /plugins/dynamix.docker.manager/include/UpdateConfig.php: wrong csrf_token
Oct 19 19:14:01 Unraid root: error: /plugins/dynamix.docker.manager/include/UpdateConfig.php: wrong csrf_token
Oct 19 19:14:02 Unraid root: error: /webGui/include/Notify.php: wrong csrf_token
Oct 19 19:14:02 Unraid root: error: /plugins/dynamix.docker.manager/include/UpdateConfig.php: wrong csrf_token
Oct 19 19:14:02 Unraid root: error: /webGui/include/Notify.php: wrong csrf_token
Oct 19 19:14:02 Unraid root: error: /plugins/dynamix.system.temp/include/SystemTemp.php: wrong csrf_token
-
Could you include qemu 4?
-
OFFTOPIC:
Is that interface QEMU? Can you manage/configure an Unraid VM with it? How?
-
You may want to consider this
webgui: VMmachine updates default bus to VirtIO
for FreeBSD, because it doesn't work, at least with OPNsense and pfSense. It doesn't work with Sophos XG either, although that one is based on Linux. In case this refers to the bus for the hard drive image: I had to use SATA.
[6.12.4] 'Docker' Tab shows incorrect\unreadable information
in Stable Releases
Posted
Still, it would be nice if unRAID could provide better or official support for Docker Compose.