Posts posted by Vr2Io

  1. 14 minutes ago, BigDaddyDingDong said:

    if I use a freshly created Unraid bootable drive it fires up perfectly fine in CSM, but if I try and boot the existing drive I get nothing.

    This existing USB stick likely doesn't have the legacy boot sector / boot files. Simply running make_bootable.bat (or similarly named) from the USB stick's root folder (if under Windows) should solve the problem.

  2. On 12/26/2023 at 9:38 PM, feins said:

    Each time file integrity runs I keep getting this type of error.

    How do I resolve this? I've already excluded *.nfo

    (screenshot of the errors)

    You also need to clear the attribute on those .nfo files, then re-export the hashes.

  3. On 12/24/2023 at 8:44 AM, 0xjams said:

     

    Hi :)

    I found a file that was created the last day the issue took place.

     

    Below are the logs from the last 4 minutes; it seems it was not a clean shutdown.

     

    Dec 21 18:47:05 groudon shutdown[13781]: shutting down for system halt

    Dec 21 18:51:13 groudon root: umount: /mnt/disk1: target is busy.
    Dec 21 18:51:13 groudon emhttpd: shcmd (106): exit status: 32
    Dec 21 18:51:13 groudon emhttpd: Retry unmounting disk share(s)...

     

    To test UPS shutdown of the server, please simulate it first instead of actually cutting UPS power, otherwise you may kill the battery.

     

    upsmon -c fsd

     

    Once everything works, then test the real power-cut situation.

     

    • Upvote 1
  4. 9 hours ago, eicar said:

    What would be the technical reasons/specs for the lower value of 133 MB/s?

    That bus was 32-bit at 33MHz. If you double the clock rate to 66MHz, the bandwidth doubles to 266MB/s; if you further double the bus width to 64-bit, it becomes 533MB/s.

     

    9 hours ago, eicar said:

    PCIe 1.0 x1 connection

    PE1*1 is just the slot name; it is probably PCIe 3.0.

     

    With 32-bit 33MHz PCI, you will get ~100MB/s of actual throughput.
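    Those figures come straight from bus width times clock. A quick check (the commonly quoted 133/266/533 MB/s round up from 132/264/528, since the clock is really 33.33/66.66MHz):

```shell
# theoretical PCI peak = (bus width in bits / 8) bytes x clock in MHz -> MB/s
pci_bw() { echo $(( $1 / 8 * $2 )); }

pci_bw 32 33   # classic PCI, 32-bit @ 33MHz
pci_bw 32 66   # 32-bit @ 66MHz
pci_bw 64 66   # 64-bit @ 66MHz
```

    Protocol overhead eats into that peak, which is why ~100MB/s is what you actually see on 32-bit/33MHz PCI.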

  5. 4 hours ago, jkexbx said:

    Per the Unraid Docs it's how you zero a disk to remove it from an array. The script is broken in a couple of different ways, so I avoid that now.

     

    Do you know if there's a new way to zero a disk?

     

     

    It's supposed to cause a parity update. I'd expect it to run at 50 MB/s like it does after the hard reboot. The problem is something happening with the umount causes it to run at 400 KB/s.

    I hadn't noticed the official doc unmounts the target disk, my bad.

     

    Maybe it's best if someone else tries to reproduce the same problem.

     

    It seems the problem relates to umount (umount not completing); some other posts also point to this.

     

    Anyway, as the official docs mention, starting the array in maintenance mode can improve disk-zeroing performance, and it also largely avoids the umount failure issue.
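    For reference, the zeroing itself is just streaming zeros over the disk device so parity stays in sync. A minimal sketch (the helper name is an assumption, not the official script; pointing it at an mdX device must only ever be done in maintenance mode on a disk whose contents you want destroyed):

```shell
# zero_target TARGET MIB : overwrite the first MIB mebibytes of TARGET with
# zeros. Run against an array data device (e.g. /dev/mdX, name is an
# assumption) this updates parity as it writes.
zero_target() {
  dd if=/dev/zero of="$1" bs=1M count="$2" conv=notrunc 2>/dev/null
}
```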

     

  6. On 12/17/2023 at 11:31 PM, Inland-Empire said:

     I get consistent pulses of writes to the cache, even when zero containers are active or running.

    This shouldn't happen.

     

    The post below has a script that can help you identify what is writing to the docker image / folder. You can then map that container folder to anywhere you like.
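    If that script isn't handy, a rough way to spot the writers is just to list recently modified files under the docker/appdata path (the helper name and example path are assumptions):

```shell
# recent_writes DIR MINUTES : list files under DIR modified in the last N minutes
recent_writes() { find "$1" -type f -mmin -"$2" 2>/dev/null; }

# e.g. recent_writes /mnt/cache/system/docker 5   # path is an assumption
```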

     

  7. The call trace looks docker-network related; please try using IPVLAN.

     

    Also, why are there so many docker "vethxxxxxx" messages in the log?

     

    For example, mine has only a few; these should only be recorded when a docker container starts/stops/updates:

     

    dmesg -T | grep veth
    [Sat Dec  2 11:33:29 2023] eth0: renamed from vethddce601
    [Sat Dec  2 11:33:41 2023] eth0: renamed from veth4c48c7a
    [Sat Dec  2 11:33:46 2023] eth0: renamed from veth8140ad3
    [Sat Dec  2 11:34:19 2023] eth0: renamed from vethab126d3
    [Sat Dec  2 11:34:27 2023] eth0: renamed from vetha49f590
    [Sat Dec  2 11:34:34 2023] eth0: renamed from veth7566107
    [Sat Dec  2 11:34:41 2023] eth0: renamed from veth9bb2973
    [Sat Dec  2 11:35:37 2023] eth0: renamed from veth79004cd
    [Wed Dec 20 12:53:13 2023] veth9bb2973: renamed from eth0
    [Wed Dec 20 12:53:13 2023] eth0: renamed from veth91cf68b

     

     

  8. Since you have a RAID-Z2 pool, it allows two disks to fail / be missing, and you have 12TB of data which can't fit on one 10TB disk.

     

    You shouldn't destroy the RAID-Z2 pool. Just remove two disks from it, clean and format them under Unraid, then boot back into TrueNAS and confirm it can mount both of them, then copy all the data to those disks.

  9. 6 hours ago, Scriphy said:

    Does the Unraid creation tool not format the drive automatically? I don't see a place to change the settings for which filesystem to use.

    When legacy / UEFI fails to boot (the boot failure pointing to the USB itself, not the Unraid boot process), you need to prepare the stick manually instead of the default way:

     

    - copy all files to the stick

    - execute one of the three options to make it legacy bootable

     

    (screenshot of the three options)

     

    Rename the directory EFI- to EFI if you want to boot in UEFI mode; otherwise it will boot in legacy mode.
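    A small sketch of that rename (the helper name and the /mnt/usb mount point are assumptions; it just swaps EFI- and EFI on the stick):

```shell
# toggle_uefi FLASH_ROOT : rename EFI- to EFI (enable UEFI boot), or back
toggle_uefi() {
  if [ -d "$1/EFI-" ]; then
    mv "$1/EFI-" "$1/EFI"      # UEFI mode
  elif [ -d "$1/EFI" ]; then
    mv "$1/EFI" "$1/EFI-"      # legacy/CSM mode
  fi
}

# usage: toggle_uefi /mnt/usb   # wherever the flash is mounted (assumption)
```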

  10. You have 3.7TB of data that needs to be copied off the failing disk. Since parity shows mismatches and parity operations are running at very slow speed, don't consider the swap-parity or rebuild route.

     

    To summarize what you should do: try your best to copy out as much of the data (3.7TB) as possible, in two versions: (1) a copy taken directly from disk2, (2) a copy taken from the emulated disk2.

     

    /dev/md1        3.7T  3.1T  621G  84% /mnt/disk1
    /dev/md2        4.6T  3.7T  947G  80% /mnt/disk2

     

    Some details for the above suggestion:

     

    - install UD plugin

    - stop array

    - set disk2 to unassigned

    - then mount disk2 with UD and copy the data to the 12TB disk (also mounted by UD), i.e. /mnt/disks/12TB/aaa/ ; you may try rsync with the --ignore-errors option, but if that doesn't help, you may need a disk-to-disk block copy first

    https://serverfault.com/questions/494840/how-to-skip-files-with-read-error-in-it-when-copying-with-rsync-on-linux

     

    At this point, nothing has changed; you can simply assign the 5TB disk back to disk2 and everything is still as usual.

     

    - then start the array (emulating disk2), and copy that version to the 12TB disk too, i.e. /mnt/disks/12TB/bbb/

    - decide which version is best (you can mix both as needed), then copy it to the 6TB disk

    - once all is fine, assign the 6TB disk to disk2 and the 12TB as parity, then rebuild parity

     

     

     

  11. Sounds like an interesting ITX mobo. If you use the two M.2 slots for 12 disks, the first problem is that there will be some blockage of the CPU fan.

     

    As the Ethoo-719 officially supports 12x 3.5", these are an excellent match, but overall it's not a compact design, so I don't think an embedded-CPU ITX mobo is a good choice.
