dalben

Everything posted by dalben

  1. As @JorgeB confirmed, it's 5.13.0:
     root@tdm:~# xfs_growfs -V
     xfs_growfs version 5.13.0
  2. Just a note on the warning/error: I probably wouldn't have noticed it if it were hidden in syslog, but this error pops up on the console, making it difficult to ignore.
  3. I've noticed that despite having syslog set up, any startup syslog messages that occur before Unraid starts the syslog server are lost, or, if you mirror the syslog, can be found in flash/logs/syslog. There are also plenty of syslog server errors in syslog from before the syslog server has started. I understand they can be ignored, but they are annoying.
     Mar 8 06:57:22 tdm rsyslogd: omfwd/udp: socket 8: sendto() error: Network is unreachable [v8.2102.0 try https://www.rsyslog.com/e/2354 ]
     Mar 8 06:57:22 tdm rsyslogd: omfwd: socket 8: error 101 sending via udp: Network is unreachable [v8.2102.0 try https://www.rsyslog.com/e/2354 ]
     Mar 8 06:57:22 tdm rsyslogd: action 'action-3-builtin:omfwd' suspended (module 'builtin:omfwd'), retry 0. There should be messages before this one giving the reason for suspension. [v8.2102.0 try https://www.rsyslog.com/e/2007 ]
     Mar 8 06:57:22 tdm rsyslogd: action 'action-3-builtin:omfwd' resumed (module 'builtin:omfwd') [v8.2102.0 try https://www.rsyslog.com/e/2359 ]
     Mar 8 06:57:22 tdm rsyslogd: omfwd/udp: socket 2: sendto() error: Network is unreachable [v8.2102.0 try https://www.rsyslog.com/e/2354 ]
     Mar 8 06:57:22 tdm rsyslogd: omfwd: socket 2: error 101 sending via udp: Network is unreachable [v8.2102.0 try https://www.rsyslog.com/e/2354 ]
     Mar 8 06:57:22 tdm rsyslogd: action 'action-3-builtin:omfwd' suspended (module 'builtin:omfwd'), retry 0. There should be messages before this one giving the reason for suspension. [v8.2102.0 try https://www.rsyslog.com/e/2007 ]
     Mar 8 06:57:22 tdm rsyslogd: action 'action-3-builtin:omfwd' resumed (module 'builtin:omfwd') [v8.2102.0 try https://www.rsyslog.com/e/2359 ]
     Mar 8 06:57:22 tdm rsyslogd: omfwd/udp: socket 2: sendto() error: Network is unreachable [v8.2102.0 try https://www.rsyslog.com/e/2354 ]
     Mar 8 06:57:22 tdm rsyslogd: omfwd: socket 2: error 101 sending via udp: Network is unreachable [v8.2102.0 try https://www.rsyslog.com/e/2354 ]
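[Editor's note] Since these early omfwd errors follow a fixed pattern, a mirrored copy of the log can be checked for them with a plain grep. A minimal sketch, using sample lines quoted from the post above (on a real server you would point grep at the mirrored flash/logs/syslog file rather than a here-doc):

```shell
# Count the pre-network rsyslog noise in a captured sample.
# The here-doc stands in for the mirrored syslog file.
count=$(grep -c 'Network is unreachable' <<'EOF'
Mar 8 06:57:22 tdm rsyslogd: omfwd/udp: socket 8: sendto() error: Network is unreachable [v8.2102.0 try https://www.rsyslog.com/e/2354 ]
Mar 8 06:57:22 tdm rsyslogd: omfwd: socket 8: error 101 sending via udp: Network is unreachable [v8.2102.0 try https://www.rsyslog.com/e/2354 ]
Mar 8 06:57:22 tdm rsyslogd: action 'action-3-builtin:omfwd' suspended (module 'builtin:omfwd'), retry 0.
EOF
)
echo "$count"   # 2 of the 3 sample lines are "unreachable" sends
```

The same pattern with `grep -v` instead of `grep -c` would strip the noise when reviewing the mirrored log.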
  4. Adding on to the above: these same errors appeared when I tried to open a file on the server, and it failed because the array hadn't started yet. Once I started the array (and I've now set auto-start in Settings/Disks) and opened the file, there were no errors.
  5. I just rebooted the server to see if the warning remains, and it did. Ultimately I can live with a warning message if there's confirmation that I'm not really at risk. At the moment the xfs mailing list is unable to understand why it's happening to one and not the other. Uncertainty makes me uncertain...
     Mar 8 06:59:48 tdm emhttpd: shcmd (149): mount -t xfs -o noatime /dev/md1 /mnt/disk1
     Mar 8 06:59:49 tdm kernel: SGI XFS with ACLs, security attributes, no debug enabled
     Mar 8 06:59:49 tdm kernel: XFS (md1): Mounting V5 Filesystem
     Mar 8 06:59:49 tdm kernel: XFS (md1): Ending clean mount
     Mar 8 06:59:49 tdm emhttpd: shcmd (150): xfs_growfs /mnt/disk1
     Mar 8 06:59:49 tdm kernel: xfs filesystem being mounted at /mnt/disk1 supports timestamps until 2038 (0x7fffffff)
     Mar 8 06:59:49 tdm root: xfs_growfs: XFS_IOC_FSGROWFSDATA xfsctl failed: No space left on device
     Mar 8 06:59:49 tdm root: meta-data=/dev/md1 isize=512 agcount=32, agsize=137330687 blks
     Mar 8 06:59:49 tdm root: = sectsz=512 attr=2, projid32bit=1
     Mar 8 06:59:49 tdm root: = crc=1 finobt=1, sparse=1, rmapbt=0
     Mar 8 06:59:49 tdm root: = reflink=1 bigtime=0 inobtcount=0
     Mar 8 06:59:49 tdm root: data = bsize=4096 blocks=4394581984, imaxpct=5
     Mar 8 06:59:49 tdm root: = sunit=1 swidth=32 blks
     Mar 8 06:59:49 tdm root: naming =version 2 bsize=4096 ascii-ci=0, ftype=1
     Mar 8 06:59:49 tdm root: log =internal log bsize=4096 blocks=521728, version=2
     Mar 8 06:59:49 tdm root: = sectsz=512 sunit=1 blks, lazy-count=1
     Mar 8 06:59:49 tdm root: realtime =none extsz=4096 blocks=0, rtextents=0
     Mar 8 06:59:49 tdm emhttpd: shcmd (150): exit status: 1
     Mar 8 06:59:49 tdm kernel: XFS (md1): EXPERIMENTAL online shrink feature in use. Use at your own risk!
     Mar 8 06:59:49 tdm emhttpd: shcmd (151): mkdir -p /mnt/disk2
     Mar 8 06:59:49 tdm emhttpd: shcmd (152): mount -t xfs -o noatime /dev/md2 /mnt/disk2
     Mar 8 06:59:49 tdm kernel: XFS (md2): Mounting V5 Filesystem
     Mar 8 06:59:49 tdm kernel: XFS (md2): Ending clean mount
     Mar 8 06:59:49 tdm kernel: xfs filesystem being mounted at /mnt/disk2 supports timestamps until 2038 (0x7fffffff)
     Mar 8 06:59:49 tdm emhttpd: shcmd (153): xfs_growfs /mnt/disk2
     Mar 8 06:59:49 tdm root: xfs_growfs: XFS_IOC_FSGROWFSDATA xfsctl failed: No space left on device
     Mar 8 06:59:49 tdm root: meta-data=/dev/md2 isize=512 agcount=32, agsize=137330687 blks
     Mar 8 06:59:49 tdm root: = sectsz=512 attr=2, projid32bit=1
     Mar 8 06:59:49 tdm root: = crc=1 finobt=1, sparse=1, rmapbt=0
     Mar 8 06:59:49 tdm root: = reflink=1 bigtime=0 inobtcount=0
     Mar 8 06:59:49 tdm root: data = bsize=4096 blocks=4394581984, imaxpct=5
     Mar 8 06:59:49 tdm root: = sunit=1 swidth=32 blks
     Mar 8 06:59:49 tdm root: naming =version 2 bsize=4096 ascii-ci=0, ftype=1
     Mar 8 06:59:49 tdm root: log =internal log bsize=4096 blocks=521728, version=2
     Mar 8 06:59:49 tdm root: = sectsz=512 sunit=1 blks, lazy-count=1
     Mar 8 06:59:49 tdm root: realtime =none extsz=4096 blocks=0, rtextents=0
     Mar 8 06:59:49 tdm emhttpd: shcmd (153): exit status: 1
     Mar 8 06:59:49 tdm emhttpd: shcmd (154): mkdir -p /mnt/disk3
     Mar 8 06:59:49 tdm emhttpd: shcmd (155): mount -t xfs -o noatime /dev/md3 /mnt/disk3
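[Editor's note] A quick arithmetic check on the figures quoted in the growfs output above (bsize=4096, blocks=4394581984) is consistent with the error: the filesystem already accounts for essentially the whole 18 TB device, so growfs has nothing to grow into, and if the device is even slightly smaller than the recorded size it would attempt the experimental shrink instead. This is only a sanity check of the reported numbers, not an official diagnosis:

```shell
# Filesystem size implied by the xfs_growfs output above:
# 4096-byte blocks times 4394581984 data blocks.
bsize=4096
dblocks=4394581984
fs_bytes=$((bsize * dblocks))
echo "$fs_bytes bytes"                              # 18000207806464 bytes
echo "$((fs_bytes / 1000000000000)) TB (decimal)"   # 18 TB, i.e. the full drive
```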
  6. I can't help there; I'm on v6.10-rc and upgraded the disks this weekend. I don't fancy downgrading to test.
  7. I have 2 x 18TB drives with only ~6TB filled on each. My sequence of events: I added the two 18TB disks, Unraid detected them and asked if I wanted them formatted, I said yes, it formatted them, and away we went. Do you remember if you had the same sequence, @Ystebad?
  8. This is what I got back from Gao Xiang. All over my head. What can I tell him? @JorgeB?
     > May I ask what is xfsprogs version used now? At the first glance, it seems that some old xfsprogs is used here, otherwise, it will show "[EXPERIMENTAL] try to shrink unused space" message together with the kernel message as well. I'm not sure what's sb_dblocks recorded in on-disk super block compared with new disk sizes. I guess the problem may be that the one new disk is larger than sb_dblocks and the other is smaller than sb_dblocks. But if some old xfsprogs is used, I'm still confused why old version xfsprogs didn't block it at the userspace in advance.
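[Editor's note] One way to answer the sb_dblocks question is to read the on-disk superblock with xfs_db and compare it against the raw device size. A sketch only, untested on this server; the device path is assumed from the logs above, and the snippet is guarded so it degrades gracefully on a machine without that device:

```shell
# Compare the superblock's recorded size with the actual device size.
# /dev/md1 is an assumption taken from the mount logs in this thread.
dev=/dev/md1
if command -v xfs_db >/dev/null 2>&1 && [ -b "$dev" ]; then
  xfs_db -r -c 'sb 0' -c 'p dblocks' "$dev"   # dblocks recorded on disk
  blockdev --getsize64 "$dev"                 # raw device size in bytes
else
  echo "skipped: xfs_db or $dev not available on this machine"
fi
```

If dblocks times the block size exceeds the device size, that would match Gao Xiang's guess that growfs is being asked to shrink.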
  9. I started having these errors yesterday after creating a new config for my server, as I had added and removed a bunch of disks. I *think* I've worked out what triggered the log entries. In the old config I had set my pool disks as disk shares (SMB Public Hidden); the new config doesn't share them out. When I opened Notepad++ and tried to open files that used the disk share in the path, it failed, and the timing of this matches the log errors. I've now set the pool disk shares up again and the log entries have stopped.
  10. Not sure how to do that, nor would I be able to help them if their questions got beyond uploading diagnostics. Googling around, this error message seems to be happening on unraid more than any other platform.
  11. I've just updated my server with 3 Toshiba MG09ACA18TE 18TB drives. It's all working well from what I can see; parity is rebuilding. But I saw this error message on the console:
      XFS (md5): EXPERIMENTAL online shrink feature in use. Use at your own risk!
      Any idea what that is or why it's appeared? How much risk am I taking? Diagnostics attached: tdm-diagnostics-20220306-2033.zip
  12. Thanks, I wasn't aware of the warranty situation. I'll have to rethink what works best. I went through a period a few years ago where I replaced quite a few drives within the warranty period, but as stated I have a couple of others that are reaching 10yo.
  13. Understood, but what I needed File Activity for was to monitor what was happening on one of my pools, the same pool the syslog is on. At 19:30 every day something was deleting/archiving files in a particular share, and for the life of me I couldn't find what was doing it. But all good now. I was just wondering if there was an option somewhere that I wasn't aware of. Scrolling past the syslog entries won't kill me.
  14. Is there a way to exclude particular shares from being monitored, or at least logged? The Unraid server also acts as a syslog server in my setup, and due to the quirks of the Unraid syslog server setup, the syslog needs to be on an array/pool drive. At the moment every syslog entry is monitored and logged by this plugin; it's info I don't really need.
  15. This thread got me looking at 16-18TB drives in detail. I have a couple of disks in my server that are getting close to 10yo, and it might be an idea to replace/upgrade them before they start throwing errors. In people's experience, how are the Seagate Exos drives reliability-wise in Unraid? I've always used Seagate, but either the IronWolf series or Barracuda. The Exos seem to be cheaper than the IronWolf Pro and competing Toshiba models at the moment, so they are tempting.
  16. Try removing the CoreFrequency plugin. I had a similar issue and removing it solved my problems. corefreq.plg - 2021.07.13 (Up to date)
  17. Here you go.
      Architecture:            x86_64
      CPU op-mode(s):          32-bit, 64-bit
      Address sizes:           39 bits physical, 48 bits virtual
      Byte Order:              Little Endian
      CPU(s):                  12
      On-line CPU(s) list:     0-11
      Vendor ID:               GenuineIntel
      Model name:              Intel(R) Core(TM) i7-8700 CPU @ 3.20GHz
      CPU family:              6
      Model:                   158
      Thread(s) per core:      2
      Core(s) per socket:      6
      Socket(s):               1
      Stepping:                10
      CPU max MHz:             4600.0000
      CPU min MHz:             800.0000
      BogoMIPS:                6399.96
      Flags:                   fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb invpcid_single pti ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid mpx rdseed adx smap clflushopt intel_pt xsaveopt xsavec xgetbv1 xsaves dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp md_clear flush_l1d
      Virtualization:          VT-x
      Caches (sum of all):
        L1d:                   192 KiB (6 instances)
        L1i:                   192 KiB (6 instances)
        L2:                    1.5 MiB (6 instances)
        L3:                    12 MiB (1 instance)
      NUMA node(s):            1
      NUMA node0 CPU(s):       0-11
      Vulnerabilities:
        Itlb multihit:         KVM: Mitigation: VMX disabled
        L1tf:                  Mitigation; PTE Inversion; VMX conditional cache flushes, SMT vulnerable
        Mds:                   Mitigation; Clear CPU buffers; SMT vulnerable
        Meltdown:              Mitigation; PTI
        Spec store bypass:     Mitigation; Speculative Store Bypass disabled via prctl and seccomp
        Spectre v1:            Mitigation; usercopy/swapgs barriers and __user pointer sanitization
        Spectre v2:            Mitigation; Full generic retpoline, IBPB conditional, IBRS_FW, STIBP conditional, RSB filling
        Srbds:                 Mitigation; Microcode
        Tsx async abort:       Mitigation; TSX disabled
  18. If you have the CoreFreq plugin installed, try uninstalling that and see how it goes.
  19. Thanks for that. Might as well just straight swap and rebuild then I guess.
  20. Just coming back to this advice. After uninstalling corefreq my server hasn't crashed. Thanks. My wife and kids now like me again.
  21. I was having the same problem with my server. The advice I got was to remove the CoreFreq plugin. After removing CoreFreq the server hasn't crashed for about 2 months now. plugin: installing: /boot/config/plugins/corefreq.plg
  22. Thanks. Correct, I didn't have any issues but just thought I'd report the alerts.
  23. I need to replace my parity drive. It's a single parity. Which method is recommended? The new drive is in the server as an unassigned drive at the moment.
      1. Unassign the current parity drive and then assign the new drive, letting Unraid rebuild parity.
      2. Add the new parity drive as Parity 2; once parity is rebuilt, remove and forget Parity drive 1.
      3. Is there another recommended method?
      I prefer the 2nd option, but I have only seen references to forgetting Parity 2 when in dual parity mode. Is it right to assume you can forget Parity 1 just as easily?
  24. Found these errors in my syslog. At the time I was just checking the User Scripts plugin script logs. I don't remember seeing any errors on the console. Not even sure if what I was doing triggered the errors to be honest.
      Feb 11 21:25:29 tdm webGUI: Successful login user root from xx.xx.xx.xx
      Feb 11 21:25:43 tdm emhttpd: cmd: /usr/local/emhttp/plugins/user.scripts/showLog.php tdmrsync init
      Feb 11 21:25:43 tdm emhttpd: spinning down /dev/sdg
      Feb 11 21:25:43 tdm emhttpd: spinning down /dev/sdf
      Feb 11 21:26:10 tdm emhttpd: cmd: /usr/local/emhttp/plugins/user.scripts/showLog.php modprobe
      Feb 11 21:26:26 tdm unassigned.devices: PHP Warning: count(): Parameter must be an array or an object that implements Countable in /usr/local/emhttp/plugins/unassigned.devices/include/lib.php on line 126
      Feb 11 21:26:26 tdm unassigned.devices: PHP Warning: count(): Parameter must be an array or an object that implements Countable in /usr/local/emhttp/plugins/unassigned.devices/include/lib.php on line 126
      Feb 11 21:26:26 tdm unassigned.devices: PHP Warning: rename(/var/state/unassigned.devices/unassigned.devices.ini-,/var/state/unassigned.devices/unassigned.devices.ini): No such file or directory in /usr/local/emhttp/plugins/unassigned.devices/include/lib.php on line 168
      Feb 11 21:26:49 tdm emhttpd: shcmd (2330269): /usr/local/sbin/update_cron
      Feb 11 21:27:13 tdm emhttpd: shcmd (2330420): /usr/local/sbin/mover |& logger &
      Diagnostics attached: tdm-diagnostics-20220211-2133.zip