dalben

Members
  • Posts: 1452
  • Joined
  • Last visited
Everything posted by dalben

  1. A quick question that has probably been answered before, but my search-fu isn't working today. Due to a catastrophic mobo failure I have to replace my board and get my server up and running again quickly. The board that's arriving is an ASUS TUF B365M-Plus Gaming. After recently upgrading to 18TB disks I can run everything off the 6 onboard SATA ports + 2 x M.2 with ports to spare. I also have an LSI 9211-8i controller that's in the current dead server and a recently purchased second-hand Dell HBA, which is an LSI 9207-8i. What's going to give me the best performance: everything on the onboard SATA, or on an LSI running in the x4 PCIe slot? Would using the x16 slot make a difference? If we're talking minuscule differences I might go with the LSI simply because the cabling will be neater. Any advice would be most welcome.
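     For what it's worth, one way to sanity-check the HBA option is to look at the PCIe link the card actually negotiates in whichever slot it ends up in. A minimal sketch from the console; the 01:00.0 address is just a placeholder for whatever the card shows up as on your board:

        # Find the LSI HBA's PCI address
        lspci | grep -i LSI
        # Compare the link the card supports (LnkCap) with what it negotiated (LnkSta)
        # Replace 01:00.0 with the address reported above
        lspci -s 01:00.0 -vv | grep -E 'LnkCap|LnkSta'

     Both the 9211-8i and the 9207-8i are x8 cards, so they will report a narrower link in an x4 slot, but a handful of spinning disks is unlikely to saturate either that or the onboard SATA.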
  2. @JorgeB All reformatted and finished shuffling files around. No more errors. Thanks.
  3. Thanks. No errors prior to NUT losing the USB connection.
  4. Still getting this error with the latest update on my APC Back-UPS BX750MI. Once it loses the connection I need to pull the USB cable and plug it in again for it to work. I tried switching to the native Unraid UPS monitor while in this state and it complains of a connection error. After reseating the USB cable the Unraid UPS monitor works fine without any issues.
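     As a stopgap until the NUT issue is sorted, it may be possible to simulate the unplug/replug from the console by unbinding and rebinding the UPS's USB port in sysfs. A rough sketch only; the 1-1.4 port name is a placeholder you'd need to replace with the real one from /sys/bus/usb/devices (lsusb -t helps map it):

        # Identify the UPS and the port it hangs off
        lsusb
        lsusb -t
        # Unbind and rebind the port to fake a replug (run as root)
        echo '1-1.4' > /sys/bus/usb/drivers/usb/unbind
        sleep 2
        echo '1-1.4' > /sys/bus/usb/drivers/usb/bind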
  5. Thanks @JorgeB. Appreciate the troubleshooting and testing. Yeah, but 18TB drives for me. What fun.
  6. I've started seeing parity tuning errors in syslog:
     Mar 21 06:40:20 tdm root: RTNETLINK answers: File exists
     Mar 21 06:40:20 tdm Parity Check Tuning: PHP error_reporting() set to E_ERROR|E_WARNING|E_PARSE|E_CORE_ERROR|E_CORE_WARNING|E_COMPILE_ERROR|E_COMPILE_WARNING|E_USER_ERROR|E_USER_WARNING|E_USER_NOTICE|E_STRICT|E_RECOVERABLE_ERROR|E_USER_DEPRECATED
     Mar 21 06:40:20 tdm emhttpd: nothing to sync
     Mar 21 06:40:20 tdm sudo: root : PWD=/ ; USER=root ; COMMAND=/bin/bash -c /usr/local/emhttp/plugins/controlr/controlr -port 2378 -certdir '/boot/config/ssl/certs' -showups
     Mar 21 06:40:20 tdm sudo: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
     Mar 21 06:40:20 tdm Parity Check Tuning: PHP error_reporting() set to E_ERROR|E_WARNING|E_PARSE|E_CORE_ERROR|E_CORE_WARNING|E_COMPILE_ERROR|E_COMPILE_WARNING|E_USER_ERROR|E_USER_WARNING|E_USER_NOTICE|E_STRICT|E_RECOVERABLE_ERROR|E_USER_DEPRECATED
     Mar 21 06:40:21 tdm kernel: eth0: renamed from veth505b9c4
     They are fairly regular throughout syslog. Noticed them today while running RC3, still there after RC4.
  7. Not sure if it was addressed in RC4, and if so whether there was any expectation of the error going away, but it's still there:
     Mar 21 06:40:06 tdm kernel: XFS (md1): Mounting V5 Filesystem
     Mar 21 06:40:06 tdm kernel: XFS (md1): Ending clean mount
     Mar 21 06:40:06 tdm emhttpd: shcmd (34): xfs_growfs /mnt/disk1
     Mar 21 06:40:06 tdm kernel: xfs filesystem being mounted at /mnt/disk1 supports timestamps until 2038 (0x7fffffff)
     Mar 21 06:40:06 tdm root: xfs_growfs: XFS_IOC_FSGROWFSDATA xfsctl failed: No space left on device
     Mar 21 06:40:06 tdm root: meta-data=/dev/md1 isize=512 agcount=32, agsize=137330687 blks
     Mar 21 06:40:06 tdm root: = sectsz=512 attr=2, projid32bit=1
     Mar 21 06:40:06 tdm root: = crc=1 finobt=1, sparse=1, rmapbt=0
     Mar 21 06:40:06 tdm root: = reflink=1 bigtime=0 inobtcount=0
     Mar 21 06:40:06 tdm root: data = bsize=4096 blocks=4394581984, imaxpct=5
     Mar 21 06:40:06 tdm root: = sunit=1 swidth=32 blks
     Mar 21 06:40:06 tdm root: naming =version 2 bsize=4096 ascii-ci=0, ftype=1
     Mar 21 06:40:06 tdm root: log =internal log bsize=4096 blocks=521728, version=2
     Mar 21 06:40:06 tdm root: = sectsz=512 sunit=1 blks, lazy-count=1
     Mar 21 06:40:06 tdm root: realtime =none extsz=4096 blocks=0, rtextents=0
     Mar 21 06:40:06 tdm kernel: XFS (md1): EXPERIMENTAL online shrink feature in use. Use at your own risk!
     Mar 21 06:40:06 tdm emhttpd: shcmd (34): exit status: 1
     Mar 21 06:40:06 tdm emhttpd: shcmd (35): mkdir -p /mnt/disk2
     Mar 21 06:40:06 tdm emhttpd: shcmd (36): mount -t xfs -o noatime,nouuid /dev/md2 /mnt/disk2
     Mar 21 06:40:06 tdm kernel: XFS (md2): Mounting V5 Filesystem
     Mar 21 06:40:07 tdm kernel: XFS (md2): Ending clean mount
     Mar 21 06:40:07 tdm kernel: xfs filesystem being mounted at /mnt/disk2 supports timestamps until 2038 (0x7fffffff)
     Mar 21 06:40:07 tdm emhttpd: shcmd (37): xfs_growfs /mnt/disk2
     Mar 21 06:40:07 tdm root: xfs_growfs: XFS_IOC_FSGROWFSDATA xfsctl failed: No space left on device
     Mar 21 06:40:07 tdm root: meta-data=/dev/md2 isize=512 agcount=32, agsize=137330687 blks
     Mar 21 06:40:07 tdm root: = sectsz=512 attr=2, projid32bit=1
     Mar 21 06:40:07 tdm root: = crc=1 finobt=1, sparse=1, rmapbt=0
     Mar 21 06:40:07 tdm root: = reflink=1 bigtime=0 inobtcount=0
     Mar 21 06:40:07 tdm root: data = bsize=4096 blocks=4394581984, imaxpct=5
     Mar 21 06:40:07 tdm root: = sunit=1 swidth=32 blks
     Mar 21 06:40:07 tdm root: naming =version 2 bsize=4096 ascii-ci=0, ftype=1
     Mar 21 06:40:07 tdm root: log =internal log bsize=4096 blocks=521728, version=2
     Mar 21 06:40:07 tdm root: = sectsz=512 sunit=1 blks, lazy-count=1
     Mar 21 06:40:07 tdm root: realtime =none extsz=4096 blocks=0, rtextents=0
     Mar 21 06:40:07 tdm emhttpd: shcmd (37): exit status: 1
     Mar 21 06:40:07 tdm emhttpd: shcmd (38): mkdir -p /mnt/disk3
     Mar 21 06:40:07 tdm emhttpd: shcmd (39): mount -t xfs -o noatime,nouuid /dev/md3 /mnt/disk3
     Mar 21 06:40:07 tdm kernel: XFS (md3): Mounting V5 Filesystem
     Mar 21 06:40:07 tdm kernel: XFS (md3): Ending clean mount
     Mar 21 06:40:07 tdm kernel: xfs filesystem being mounted at /mnt/disk3 supports timestamps until 2038 (0x7fffffff)
     Mar 21 06:40:07 tdm emhttpd: shcmd (40): xfs_growfs /mnt/disk3
     Mar 21 06:40:07 tdm root: meta-data=/dev/md3 isize=512 agcount=32, agsize=30523583 blks
     Mar 21 06:40:07 tdm root: = sectsz=512 attr=2, projid32bit=1
     Mar 21 06:40:07 tdm root: = crc=1 finobt=1, sparse=1, rmapbt=0
     Mar 21 06:40:07 tdm root: = reflink=1 bigtime=0 inobtcount=0
     Mar 21 06:40:07 tdm root: data = bsize=4096 blocks=976754633, imaxpct=5
     Mar 21 06:40:07 tdm root: = sunit=1 swidth=32 blks
     Mar 21 06:40:07 tdm root: naming =version 2 bsize=4096 ascii-ci=0, ftype=1
     Mar 21 06:40:07 tdm root: log =internal log bsize=4096 blocks=476930, version=2
     Mar 21 06:40:07 tdm root: = sectsz=512 sunit=1 blks, lazy-count=1
     Mar 21 06:40:07 tdm root: realtime =none extsz=4096 blocks=0, rtextents=0
     Mar 21 06:40:07 tdm emhttpd: shcmd (41): mkdir -p /mnt/cache
  8. On the pool disks tab, the second pool shows this error:
     Balance Status
     btrfs filesystem df:
     Data, single: total=30.00GiB, used=21.21GiB
     System, RAID1: total=32.00MiB, used=16.00KiB
     Metadata, RAID1: total=2.00GiB, used=29.98MiB
     GlobalReserve, single: total=261.73MiB, used=0.00B
     btrfs balance status:
     No balance found on '/mnt/rad'
     Current usage ratio:
     Warning: A non-numeric value encountered in /usr/local/emhttp/plugins/dynamix/include/DefaultPageLayout.php(515) : eval()'d code on line 542
     0 % ---
     Warning: A non-numeric value encountered in /usr/local/emhttp/plugins/dynamix/include/DefaultPageLayout.php(515) : eval()'d code on line 542
     Full Balance recommended
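     In case it's useful for comparison, the numbers the GUI is trying to parse can be pulled straight from the console. Just a sketch, assuming the pool is mounted at /mnt/rad as in the message above:

        # Allocation per block-group type (what the GUI's "btrfs filesystem df" section shows)
        btrfs filesystem df /mnt/rad
        # Overall allocated vs. used space for the pool
        btrfs filesystem usage /mnt/rad
        # Confirm whether a balance is actually running
        btrfs balance status /mnt/rad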
  9. Who would know then? And would that person know if all large drives formatted with 6.10 need to be reformatted or just the one throwing the error?
  10. Installed 6.10.0-RC3 and I still get the same error/warning:
     Mar 12 09:50:42 tdm kernel: XFS (md1): Mounting V5 Filesystem
     Mar 12 09:50:42 tdm kernel: XFS (md1): Ending clean mount
     Mar 12 09:50:42 tdm kernel: xfs filesystem being mounted at /mnt/disk1 supports timestamps until 2038 (0x7fffffff)
     Mar 12 09:50:42 tdm emhttpd: shcmd (34): xfs_growfs /mnt/disk1
     Mar 12 09:50:42 tdm kernel: XFS (md1): EXPERIMENTAL online shrink feature in use. Use at your own risk!
     Mar 12 09:50:42 tdm root: xfs_growfs: XFS_IOC_FSGROWFSDATA xfsctl failed: No space left on device
     Mar 12 09:50:42 tdm root: meta-data=/dev/md1 isize=512 agcount=32, agsize=137330687 blks
     Mar 12 09:50:42 tdm root: = sectsz=512 attr=2, projid32bit=1
     Mar 12 09:50:42 tdm root: = crc=1 finobt=1, sparse=1, rmapbt=0
     Mar 12 09:50:42 tdm root: = reflink=1 bigtime=0 inobtcount=0
     Mar 12 09:50:42 tdm root: data = bsize=4096 blocks=4394581984, imaxpct=5
     Mar 12 09:50:42 tdm root: = sunit=1 swidth=32 blks
     Mar 12 09:50:42 tdm root: naming =version 2 bsize=4096 ascii-ci=0, ftype=1
     Mar 12 09:50:42 tdm root: log =internal log bsize=4096 blocks=521728, version=2
     Mar 12 09:50:42 tdm root: = sectsz=512 sunit=1 blks, lazy-count=1
     Mar 12 09:50:42 tdm root: realtime =none extsz=4096 blocks=0, rtextents=0
     Mar 12 09:50:42 tdm emhttpd: shcmd (34): exit status: 1
     Mar 12 09:50:42 tdm emhttpd: shcmd (35): mkdir -p /mnt/disk2
     Mar 12 09:50:42 tdm emhttpd: shcmd (36): mount -t xfs -o noatime,nouuid /dev/md2 /mnt/disk2
     Mar 12 09:50:42 tdm kernel: XFS (md2): Mounting V5 Filesystem
     Mar 12 09:50:42 tdm kernel: XFS (md2): Ending clean mount
     Mar 12 09:50:42 tdm emhttpd: shcmd (37): xfs_growfs /mnt/disk2
     Mar 12 09:50:42 tdm kernel: xfs filesystem being mounted at /mnt/disk2 supports timestamps until 2038 (0x7fffffff)
     Mar 12 09:50:43 tdm root: xfs_growfs: XFS_IOC_FSGROWFSDATA xfsctl failed: No space left on device
     Mar 12 09:50:43 tdm root: meta-data=/dev/md2 isize=512 agcount=32, agsize=137330687 blks
     Mar 12 09:50:43 tdm root: = sectsz=512 attr=2, projid32bit=1
     Mar 12 09:50:43 tdm root: = crc=1 finobt=1, sparse=1, rmapbt=0
     Mar 12 09:50:43 tdm root: = reflink=1 bigtime=0 inobtcount=0
     Mar 12 09:50:43 tdm root: data = bsize=4096 blocks=4394581984, imaxpct=5
     Mar 12 09:50:43 tdm root: = sunit=1 swidth=32 blks
     Mar 12 09:50:43 tdm root: naming =version 2 bsize=4096 ascii-ci=0, ftype=1
     Mar 12 09:50:43 tdm root: log =internal log bsize=4096 blocks=521728, version=2
     Mar 12 09:50:43 tdm root: = sectsz=512 sunit=1 blks, lazy-count=1
     Mar 12 09:50:43 tdm root: realtime =none extsz=4096 blocks=0, rtextents=0
     Mar 12 09:50:43 tdm emhttpd: shcmd (37): exit status: 1
     Mar 12 09:50:43 tdm emhttpd: shcmd (38): mkdir -p /mnt/disk3
     Mar 12 09:50:43 tdm emhttpd: shcmd (39): mount -t xfs -o noatime,nouuid /dev/md3 /mnt/disk3
     Mar 12 09:50:43 tdm kernel: XFS (md3): Mounting V5 Filesystem
     Mar 12 09:50:43 tdm kernel: XFS (md3): Ending clean mount
     Mar 12 09:50:43 tdm emhttpd: shcmd (40): xfs_growfs /mnt/disk3
     Mar 12 09:50:43 tdm kernel: xfs filesystem being mounted at /mnt/disk3 supports timestamps until 2038 (0x7fffffff)
     Mar 12 09:50:43 tdm root: meta-data=/dev/md3 isize=512 agcount=32, agsize=30523583 blks
     Mar 12 09:50:43 tdm root: = sectsz=512 attr=2, projid32bit=1
     Mar 12 09:50:43 tdm root: = crc=1 finobt=1, sparse=1, rmapbt=0
     Mar 12 09:50:43 tdm root: = reflink=1 bigtime=0 inobtcount=0
     Mar 12 09:50:43 tdm root: data = bsize=4096 blocks=976754633, imaxpct=5
     Mar 12 09:50:43 tdm root: = sunit=1 swidth=32 blks
     Mar 12 09:50:43 tdm root: naming =version 2 bsize=4096 ascii-ci=0, ftype=1
     Mar 12 09:50:43 tdm root: log =internal log bsize=4096 blocks=476930, version=2
     Mar 12 09:50:43 tdm root: = sectsz=512 sunit=1 blks, lazy-count=1
     Mar 12 09:50:43 tdm root: realtime =none extsz=4096 blocks=0, rtextents=0
     Mar 12 09:50:43 tdm emhttpd: shcmd (41): mkdir -p /mnt/cache
  11. Thanks, that makes sense. Didn't think of that, but yes, I'll try the Unraid way first. Makes life a lot easier
  12. Only just started to really use this plugin and have dropped adding scripts to crontab. Kicking myself for not migrating to this earlier. Quick question: I need to add an entry to the hosts file of about 3 dockers after they start up (a server I am trying to reach has had its domain name snatched, so a hosts file entry is the only way to get to it). Is there a way I can trigger CA User Scripts to run a script upon docker start? Or maybe 5/10 mins after array start? While I am at it, any tips on writing a script that can be run from the console to add hosts entries in docker containers? Or is there a simpler way to do it?
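     Something along these lines is a rough sketch of what such a script could look like; the container names and the IP/hostname in ENTRY are placeholders, and it assumes the containers have a shell available:

        #!/bin/bash
        # Append a hosts entry inside a few running containers (placeholders below)
        CONTAINERS="app1 app2 app3"
        ENTRY="192.168.1.50 old-domain.example.com"

        for c in $CONTAINERS; do
          # Only touch containers that are actually running
          if docker ps --format '{{.Names}}' | grep -qx "$c"; then
            # Skip containers that already have the entry (safe to re-run)
            docker exec "$c" sh -c "grep -q '$ENTRY' /etc/hosts || echo '$ENTRY' >> /etc/hosts"
          fi
        done

     Note that Docker rewrites /etc/hosts when a container is recreated or restarted, which is why a script like this has to run after each start; adding an extra parameter such as --add-host=old-domain.example.com:192.168.1.50 to the container's template would achieve the same thing persistently without any script.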
  13. Did this last weekend. Being slightly OCD about my server, there's no way I could leave it with gaps in the disk numbers. 20-odd hours for the parity rebuild, but the server was up and running while it was doing it, so there was no real impact.
  14. As @JorgeB confirmed, it's 5.13.0:
     root@tdm:~# xfs_growfs -V
     xfs_growfs version 5.13.0
  15. Just a note on the warning/error: I probably wouldn't have noticed it if it were hidden in syslog, but this error pops up on the console, making it difficult to ignore.
  16. I've noticed that despite having syslog set up, any startup syslog messages that occur before Unraid starts the syslog server are lost, or, if you mirror the syslog, they can be found in flash/logs/syslog. There are also plenty of syslog server errors in syslog from when the syslog server hadn't started yet. I understand they can be ignored, but they are annoying:
     Mar 8 06:57:22 tdm rsyslogd: omfwd/udp: socket 8: sendto() error: Network is unreachable [v8.2102.0 try https://www.rsyslog.com/e/2354 ]
     Mar 8 06:57:22 tdm rsyslogd: omfwd: socket 8: error 101 sending via udp: Network is unreachable [v8.2102.0 try https://www.rsyslog.com/e/2354 ]
     Mar 8 06:57:22 tdm rsyslogd: action 'action-3-builtin:omfwd' suspended (module 'builtin:omfwd'), retry 0. There should be messages before this one giving the reason for suspension. [v8.2102.0 try https://www.rsyslog.com/e/2007 ]
     Mar 8 06:57:22 tdm rsyslogd: action 'action-3-builtin:omfwd' resumed (module 'builtin:omfwd') [v8.2102.0 try https://www.rsyslog.com/e/2359 ]
     Mar 8 06:57:22 tdm rsyslogd: omfwd/udp: socket 2: sendto() error: Network is unreachable [v8.2102.0 try https://www.rsyslog.com/e/2354 ]
     Mar 8 06:57:22 tdm rsyslogd: omfwd: socket 2: error 101 sending via udp: Network is unreachable [v8.2102.0 try https://www.rsyslog.com/e/2354 ]
     Mar 8 06:57:22 tdm rsyslogd: action 'action-3-builtin:omfwd' suspended (module 'builtin:omfwd'), retry 0. There should be messages before this one giving the reason for suspension. [v8.2102.0 try https://www.rsyslog.com/e/2007 ]
     Mar 8 06:57:22 tdm rsyslogd: action 'action-3-builtin:omfwd' resumed (module 'builtin:omfwd') [v8.2102.0 try https://www.rsyslog.com/e/2359 ]
     Mar 8 06:57:22 tdm rsyslogd: omfwd/udp: socket 2: sendto() error: Network is unreachable [v8.2102.0 try https://www.rsyslog.com/e/2354 ]
     Mar 8 06:57:22 tdm rsyslogd: omfwd: socket 2: error 101 sending via udp: Network is unreachable [v8.2102.0 try https://www.rsyslog.com/e/2354 ]
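     On the forwarding side, rsyslog itself can be told to queue messages on disk and keep retrying rather than erroring while the network is still down; whether it's practical to carry this into Unraid's generated config I'm not sure, but as a sketch the forwarding action would look something like this (the target IP is a placeholder):

        # rsyslog (RainerScript) forwarding action with a disk-assisted queue,
        # so early-boot messages are retried once the network comes up
        action(type="omfwd" target="192.168.1.10" port="514" protocol="udp"
               queue.type="LinkedList" queue.filename="fwd_queue"
               queue.saveOnShutdown="on" action.resumeRetryCount="-1")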
  17. Adding on to the above, these same errors appeared when I tried to open a file on the server but it failed because the array hadn't started yet. Once I started the array (and now set auto-start in settings/disks) and opened the file, there were no errors.
  18. I just rebooted the server to see if the warning remains, and it did. Ultimately I can live with a warning message if there's confirmation that I am not really at risk. At the moment the xfs mailing list is unable to understand why it's happening to one and not the other. Uncertainty makes me uncertain...
     Mar 8 06:59:48 tdm emhttpd: shcmd (149): mount -t xfs -o noatime /dev/md1 /mnt/disk1
     Mar 8 06:59:49 tdm kernel: SGI XFS with ACLs, security attributes, no debug enabled
     Mar 8 06:59:49 tdm kernel: XFS (md1): Mounting V5 Filesystem
     Mar 8 06:59:49 tdm kernel: XFS (md1): Ending clean mount
     Mar 8 06:59:49 tdm emhttpd: shcmd (150): xfs_growfs /mnt/disk1
     Mar 8 06:59:49 tdm kernel: xfs filesystem being mounted at /mnt/disk1 supports timestamps until 2038 (0x7fffffff)
     Mar 8 06:59:49 tdm root: xfs_growfs: XFS_IOC_FSGROWFSDATA xfsctl failed: No space left on device
     Mar 8 06:59:49 tdm root: meta-data=/dev/md1 isize=512 agcount=32, agsize=137330687 blks
     Mar 8 06:59:49 tdm root: = sectsz=512 attr=2, projid32bit=1
     Mar 8 06:59:49 tdm root: = crc=1 finobt=1, sparse=1, rmapbt=0
     Mar 8 06:59:49 tdm root: = reflink=1 bigtime=0 inobtcount=0
     Mar 8 06:59:49 tdm root: data = bsize=4096 blocks=4394581984, imaxpct=5
     Mar 8 06:59:49 tdm root: = sunit=1 swidth=32 blks
     Mar 8 06:59:49 tdm root: naming =version 2 bsize=4096 ascii-ci=0, ftype=1
     Mar 8 06:59:49 tdm root: log =internal log bsize=4096 blocks=521728, version=2
     Mar 8 06:59:49 tdm root: = sectsz=512 sunit=1 blks, lazy-count=1
     Mar 8 06:59:49 tdm root: realtime =none extsz=4096 blocks=0, rtextents=0
     Mar 8 06:59:49 tdm emhttpd: shcmd (150): exit status: 1
     Mar 8 06:59:49 tdm kernel: XFS (md1): EXPERIMENTAL online shrink feature in use. Use at your own risk!
     Mar 8 06:59:49 tdm emhttpd: shcmd (151): mkdir -p /mnt/disk2
     Mar 8 06:59:49 tdm emhttpd: shcmd (152): mount -t xfs -o noatime /dev/md2 /mnt/disk2
     Mar 8 06:59:49 tdm kernel: XFS (md2): Mounting V5 Filesystem
     Mar 8 06:59:49 tdm kernel: XFS (md2): Ending clean mount
     Mar 8 06:59:49 tdm kernel: xfs filesystem being mounted at /mnt/disk2 supports timestamps until 2038 (0x7fffffff)
     Mar 8 06:59:49 tdm emhttpd: shcmd (153): xfs_growfs /mnt/disk2
     Mar 8 06:59:49 tdm root: xfs_growfs: XFS_IOC_FSGROWFSDATA xfsctl failed: No space left on device
     Mar 8 06:59:49 tdm root: meta-data=/dev/md2 isize=512 agcount=32, agsize=137330687 blks
     Mar 8 06:59:49 tdm root: = sectsz=512 attr=2, projid32bit=1
     Mar 8 06:59:49 tdm root: = crc=1 finobt=1, sparse=1, rmapbt=0
     Mar 8 06:59:49 tdm root: = reflink=1 bigtime=0 inobtcount=0
     Mar 8 06:59:49 tdm root: data = bsize=4096 blocks=4394581984, imaxpct=5
     Mar 8 06:59:49 tdm root: = sunit=1 swidth=32 blks
     Mar 8 06:59:49 tdm root: naming =version 2 bsize=4096 ascii-ci=0, ftype=1
     Mar 8 06:59:49 tdm root: log =internal log bsize=4096 blocks=521728, version=2
     Mar 8 06:59:49 tdm root: = sectsz=512 sunit=1 blks, lazy-count=1
     Mar 8 06:59:49 tdm root: realtime =none extsz=4096 blocks=0, rtextents=0
     Mar 8 06:59:49 tdm emhttpd: shcmd (153): exit status: 1
     Mar 8 06:59:49 tdm emhttpd: shcmd (154): mkdir -p /mnt/disk3
     Mar 8 06:59:49 tdm emhttpd: shcmd (155): mount -t xfs -o noatime /dev/md3 /mnt/disk3
  19. I can't help there, I have v6.10-rc and upgraded the disks this weekend. I don't fancy downgrading to test.
  20. I have 2 x 18TB with only ~6TB filled each. My sequence of events was: I added the two 18TB disks, Unraid detected them and asked if I wanted them formatted, I said yes, it went and formatted them, and away we went. Do you remember if you had the same sequence @Ystebad ?
  21. This is what I got back from Gao Xiang. All over my head. What can I tell him? @JorgeB?
     May I ask what xfsprogs version is used now? At first glance, it seems that some old xfsprogs is used here; otherwise it would show the "[EXPERIMENTAL] try to shrink unused space" message together with the kernel message as well. I'm not sure what sb_dblocks is recorded in the on-disk super block compared with the new disk sizes. I guess the problem may be that one new disk is larger than sb_dblocks and the other is smaller than sb_dblocks. But if some old xfsprogs is used, I'm still confused why the old xfsprogs didn't block it in userspace in advance.
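     If it helps answer him, the pieces he's asking about can be gathered from the console; a sketch using disk1/md1 as an example (repeat for each array disk that throws the warning):

        # xfsprogs version in use
        xfs_growfs -V
        # sb_dblocks is the "blocks=" figure on the data line of xfs_info
        xfs_info /mnt/disk1 | grep '^data'
        # Size of the underlying md device expressed in 4K filesystem blocks, for comparison
        echo $(( $(blockdev --getsize64 /dev/md1) / 4096 ))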
  22. I started having these errors yesterday after creating a new config for my server, as I had added and removed a bunch of disks. I *think* I worked out what triggered the log entries. In the old config I had set my pool disks as disk shares (SMB Public Hidden). The new config doesn't share them out. When I opened Notepad++ and tried to open files that used the disk share in the path, it failed. The timing of this matches the log errors. I've now set the pool disk shares up again and the log entries have stopped.
  23. Not sure how to do that, nor would I be able to help them if their questions got beyond uploading diagnostics. Googling around, this error message seems to be happening on Unraid more than on any other platform.
  24. I've just updated my server with 3 Toshiba MG09ACA18TE 18TB drives. It's all working well from what I can see; parity is rebuilding. But I saw this error message on the console: XFS (md5): EXPERIMENTAL online shrink feature in use. Use at your own risk! Any idea what that is or why it's appeared? How much risk am I taking? Diagnostics attached: tdm-diagnostics-20220306-2033.zip