Posts posted by Vr2Io
-
-
2 hours ago, konaboy said:
Dec 31 10:09:14 Tower kernel: nvme 0000:03:00.0: platform quirk: setting simple suspend
Dec 31 10:09:14 Tower kernel: nvme 0000:04:00.0: platform quirk: setting simple suspend
Does the system have suspend / resume enabled?
-
Not likely XFS / parity related. Please troubleshoot those PCIe / NVMe errors: try re-seating the NVMe or moving it to a different slot.
-
19 hours ago, feins said:
Could you be able to guide me how to do that?
Use the "Clear" button.
-
14 minutes ago, BigDaddyDingDong said:
if I use a freshly created Unraid bootable drive it fires up perfectly fine in CSM, but if I try and boot the existing drive I get nothing.
Likely the existing USB stick is missing the legacy boot sector / files. Simply running make_bootable.bat from the USB stick's root folder (under Windows) should solve the problem.
-
-
On 12/24/2023 at 8:44 AM, 0xjams said:
Hi
I found a file that was created the last day the issue took place.
Below are the logs from the last 4 minutes; it looks like it was not a clean shutdown.
Dec 21 18:47:05 groudon shutdown[13781]: shutting down for system halt
Dec 21 18:51:13 groudon root: umount: /mnt/disk1: target is busy.
Dec 21 18:51:13 groudon emhttpd: shcmd (106): exit status: 32
Dec 21 18:51:13 groudon emhttpd: Retry unmounting disk share(s)...

To test UPS shutdown of the server, please simulate it first instead of actually cutting UPS power, otherwise you may kill the battery.
upsmon -c fsd
Once everything is fine, then perform a real power-cut test.
-
9 hours ago, eicar said:
What would be the technical reasons/specs for the lower value of 133 MB/s.
These buses were 32-bit at 33 MHz. If you double the clock rate to 66 MHz, the bandwidth doubles to 266 MB/s; if you further double the bus width to 64-bit, it becomes 533 MB/s.
9 hours ago, eicar said:
PCIe 1.0 x1 connection
PE1*1 was just the slot name; it is probably PCIe 3.0.
On 32-bit 33 MHz PCI you will get ~100 MB/s of real-world throughput.
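The scaling above is simple arithmetic: peak bandwidth is roughly bus width in bytes times clock rate in MHz. A quick shell sketch (integer math with 33 MHz lands slightly under the canonical figures, which assume a 33.33 MHz clock):

```shell
# Peak PCI bandwidth in MB/s ~= (bus width in bits / 8) * clock in MHz.
# The canonical 133 / 266 / 533 MB/s figures assume 33.33 MHz.
pci_bw() { echo $(( $1 / 8 * $2 )); }  # args: width_bits clock_mhz
pci_bw 32 33   # classic PCI, 32-bit @ 33 MHz -> 132
pci_bw 32 66   # 32-bit @ 66 MHz              -> 264
pci_bw 64 66   # 64-bit @ 66 MHz              -> 528
```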
-
4 hours ago, jkexbx said:
Per the Unraid Docs it's how you zero a disk to remove it from an array. The script is broken in a couple of different ways, so I avoid that now.
Do you know if there's a new way to zero a disk?
It's supposed to cause a parity update. I'd expect it to run at 50 MB/s like it does after the hard reboot. The problem is something is happening with the umount to cause it to run at 400 KB/s.
I hadn't noticed the official doc unmounts the target disk, my bad.
It may be best if someone could try to reproduce the problem.
It seems the problem relates to umount (the umount not completing); some other posts also point to this.
Anyway, as the official docs mention, starting the array in maintenance mode can improve disk-zeroing performance, and it also avoids the umount failure issue entirely.
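For reference, the zeroing step itself is usually just dd-ing zeros over the emulated device; a minimal sketch, where /dev/md1 is a placeholder (the md device number and exact path vary by Unraid version, so double-check before running anything destructive):

```shell
# DESTRUCTIVE on a real device: writes zeros over the given target, which
# on an array member (/dev/mdN) updates parity as it goes.
zero_target() {
  # $1 = device or file path, $2 = size in MiB to zero
  dd if=/dev/zero of="$1" bs=1M count="$2" conv=fsync status=progress
}
# e.g. zero_target /dev/md1 <disk-size-in-MiB>   # placeholder device
```

On a whole device you would normally drop the count and let dd run to the end of the disk.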
-
I don't think unmounting an array member disk was a correct step.
In this case, zeroing a member disk also causes a parity update, so a slowdown is expected.
-
On 12/17/2023 at 11:31 PM, Inland-Empire said:
I get consistent pulses of writes to the cache, even when zero containers are active or running.
This shouldn't happen.
The post below has a script that can help you identify what is writing to the docker image / folder. You can then map that container folder anywhere you like.
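I can't reproduce the linked script here, but one hedged alternative sketch for spotting recent writers is to list files under the docker folder modified in the last few minutes (the example path is a placeholder; point it at wherever your docker image / folder lives):

```shell
# List files modified in the last 5 minutes under the given folder;
# run it right after you see a write pulse hit the cache.
recent_writes() { find "$1" -type f -mmin -5 2>/dev/null; }
# e.g. recent_writes /mnt/cache/system/docker   # placeholder path
```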
-
8 hours ago, MDark said:
I am to the point where I am considering just moving the media to a non parity protected pool in hopes it stops the buffering issues.
It doesn't have to be a pool; a pool could lose everything if it runs out of redundancy. You may try a no-parity array to see whether it solves the problem.
-
The call trace looks docker-network related; please try using IPVLAN.
Also, why are there so many docker "vethxxxxxx" messages in the log?
For example, mine has only a few; they should only be recorded when a docker container starts / stops / updates:
dmesg -T | grep veth
[Sat Dec 2 11:33:29 2023] eth0: renamed from vethddce601
[Sat Dec 2 11:33:41 2023] eth0: renamed from veth4c48c7a
[Sat Dec 2 11:33:46 2023] eth0: renamed from veth8140ad3
[Sat Dec 2 11:34:19 2023] eth0: renamed from vethab126d3
[Sat Dec 2 11:34:27 2023] eth0: renamed from vetha49f590
[Sat Dec 2 11:34:34 2023] eth0: renamed from veth7566107
[Sat Dec 2 11:34:41 2023] eth0: renamed from veth9bb2973
[Sat Dec 2 11:35:37 2023] eth0: renamed from veth79004cd
[Wed Dec 20 12:53:13 2023] veth9bb2973: renamed from eth0
[Wed Dec 20 12:53:13 2023] eth0: renamed from veth91cf68b
-
As you have a RAID-Z2 pool, it allows two disks to fail / be missing, and you have 12TB of data, which can't fit on one 10TB disk.
You shouldn't destroy the RAID-Z2 pool: just clean two of its disks under Unraid and format them, then boot back into TrueNAS and confirm it can still mount the (degraded) pool, then copy all the data to those two disks.
-
Did you swap the cable / port for eth0 (port 21) with one from ports 22-24 to rule out where the actual problem is? (If the cable tester hasn't already found it.)
-
Dec 19 19:35:55 Tower kernel: r8169 0000:08:00.0: no MMIO resource found
Dec 19 19:35:55 Tower kernel: r8169 0000:0a:00.0: unknown chip XID 000, contact r8169 maintainers (see MAINTAINERS file)

Try disabling IOMMU in the BIOS or on the SYSLINUX CONFIGURATION page.
-
Sounds interesting for high performance with multiple threads.
-
I use a Sonoff Dongle-E with Z2M; it has never crashed.
-
6 hours ago, Scriphy said:
Does the unraid creation tool not format the Drive automatically? I don't see a place to change the settings for which filesystem to use
When legacy / UEFI boot fails (the failure points to the USB itself, not the Unraid boot process), you need to prepare the stick manually instead of the default way:
- copy all the files to the stick
- run one of the three make-bootable scripts (per your OS) to make it legacy bootable
Rename the EFI- directory to EFI if you want to boot in UEFI mode; otherwise it will boot legacy.
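That rename is the only change needed to flip boot modes; a small sketch, assuming the stick is mounted (on a running Unraid console it lives at /boot):

```shell
# Rename EFI- to EFI to enable UEFI boot; leave it as EFI- for legacy boot.
enable_uefi() {
  # $1 = mount point of the USB stick (e.g. /boot on a running Unraid box)
  [ -d "$1/EFI-" ] && mv "$1/EFI-" "$1/EFI"
}
# e.g. enable_uefi /boot
```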
-
For legacy boot, the USB stick should be formatted as MBR + FAT32; that said, you may try all the combinations of MBR / GPT + FAT32 / NTFS.
-
Quite troublesome; that mobo doesn't seem to like storage add-on cards.
-
The SATA controller isn't detected; please try a different PCIe slot, or verify whether the controller can be detected in another mobo.
-
You have ~3.7TB of data that needs to be copied off the failing disk. As parity shows mismatches and parity operations run at a very slow speed, don't consider the swap-parity or rebuild route.
To summarize what you should do: try your best to copy out as much of the data (3.7TB) as possible, from (1) a copy taken from disk2 itself, and (2) a copy taken from the emulated disk2.
/dev/md1 3.7T 3.1T 621G 84% /mnt/disk1
/dev/md2 4.6T 3.7T 947G 80% /mnt/disk2

Some details for the above suggestion:
- install the UD (Unassigned Devices) plugin
- stop the array
- set disk2 to "no device"
- mount disk2 via UD and copy its data to the 12TB disk (also mounted via UD), i.e. /mnt/disks/12TB/aaa/ ; you may try rsync with the --ignore-errors option, but if that doesn't help, you may need a disk-to-disk block copy first
At this point nothing has changed; you can simply assign the 5TB disk back to disk2 and everything is still as usual
- then start the array (with disk2 emulated) and copy that version to the 12TB disk too, i.e. /mnt/disks/12TB/bbb/
- decide which version is best (you can mix both however you like), then copy it to the 6TB disk
- once everything is fine, assign the 6TB disk to disk2, make the 12TB disk the parity, and rebuild parity
-
2 hours ago, skrumzy said:
This also has the ability to house 2 systems, which I'm seriously considering setting this on my desk and consolidating my gaming rig and my home server.
That's great.
-
Sounds like an interesting ITX mobo. If you use two M.2 slots for the 12 disks, the first problem is that there will be some blockage with the CPU fan.
As the Enthoo 719 officially supports 12x 3.5", they are an excellent match, but overall it is not a compact design, so I don't think an embedded-CPU ITX mobo is a good choice.
Request for assistance in upping speed of all nvme zfs pool
in Storage Devices and Controllers
Posted · Edited by Vr2Io
I suspect the CPU clock rate is too low / the CPU is too weak:
CPU is Intel Xeon CPU D-1541 @ 2.10GHz 8 core 16 threads