mrbusiness

Members
  • Posts: 35

mrbusiness's Achievements

Noob (1/14) · 0 Reputation

  1. Yes, 4 similar NICs from this board: Alder Lake N100 NAS Motherboard ITX Home Processor DDR5 4* I226 2.5G LAN M.2 Slot 6xSATA DP HD Low Power (https://a.aliexpress.com/_Ex41iIT). It's an Intel I226. It had been working for months without this issue; it just suddenly started happening. Instead of disabling the port in the BIOS, is there a way to get the correct driver? (A driver-check sketch follows this list.)
  2. Did not work. Same issues. Diagnostics gets stuck at:
     ethtool 'eth1' 2>/dev/null|todos >>'/tower-diagnostics-20240415-1911/system/ethtool.txt'
  3. Every time I open Docker Settings or Network Settings, my server becomes completely unresponsive: I cannot load any page or SSH into the server. I am forced to shut the server down by holding the power button. I enabled syslog after the last hard shutdown. Even pulling diagnostics makes the server unresponsive, so I had to attach a monitor and keyboard to capture them. The server is frozen, so I could only take a screenshot with my phone... I will try to get the logs out to a file. I have tried removing subnets and changing multiple Docker and network settings. I keep noticing 'eth1' as a culprit (an ethtool timeout sketch follows this list). What else can I do? Diagnostics:
     mkdir -p /boot/logs
     mkdir -p '/tower-diagnostics-20240415-1911/system' '/tower-diagnostics-20240415-1911/config' '/tower-diagnostics-20240415-1911/logs' '/tower-diagnostics-20240415-1911/shares' '/tower-diagnostics-20240415-1911/smart' '/tower-diagnostics-20240415-1911/qemu' '/tower-diagnostics-20240415-1911/xml'
     top -bn1 -o%CPU 2>/dev/null|todos >'/tower-diagnostics-20240415-1911/system/top.txt'
     tail /boot/bz*.sha256 >> '/tower-diagnostics-20240415-1911/unraid-6.12.10.txt'
     uptime
     nproc
     lscpu 2>/dev/null|todos >'/tower-diagnostics-20240415-1911/system/lscpu.txt'
     lsscsi -vgl 2>/dev/null|todos >'/tower-diagnostics-20240415-1911/system/lsscsi.txt'
     lspci -knn 2>/dev/null|todos >'/tower-diagnostics-20240415-1911/system/lspci.txt'
     lsusb 2>/dev/null|todos >'/tower-diagnostics-20240415-1911/system/lsusb.txt'
     free -mth 2>/dev/null|todos >'/tower-diagnostics-20240415-1911/system/memory.txt'
     ps -auxf --sort=-pcpu 2>/dev/null|todos >'/tower-diagnostics-20240415-1911/system/ps.txt'
     lsof -Pni 2>/dev/null|todos >'/tower-diagnostics-20240415-1911/system/lsof.txt'
     lsmod|sort 2>/dev/null|todos >'/tower-diagnostics-20240415-1911/system/lsmod.txt'
     df -h 2>/dev/null|todos >'/tower-diagnostics-20240415-1911/system/df.txt'
     ip -br a|awk '/^(eth|bond)[0-9]+ /{print $1}'|sort
     dmidecode -qt2|awk -F: '/^ Manufacturer:/{m=$2};/^ Product Name:/{p=$2} END{print m" -"p}' 2>/dev/null|todos >'/tower-diagnostics-20240415-1911/system/motherboard.txt'
     dmidecode -qt0 2>/dev/null|todos >>'/tower-diagnostics-20240415-1911/system/motherboard.txt'
     cat /proc/meminfo 2>/dev/null|todos >'/tower-diagnostics-20240415-1911/system/meminfo.txt'
     dmidecode --type 17 2>/dev/null|todos >>'/tower-diagnostics-20240415-1911/system/meminfo.txt'
     ethtool 'bond0' 2>/dev/null|todos >>'/tower-diagnostics-20240415-1911/system/ethtool.txt'
     ethtool -i 'bond0' 2>/dev/null|todos >>'/tower-diagnostics-20240415-1911/system/ethtool.txt'
     ethtool 'eth0' 2>/dev/null|todos >>'/tower-diagnostics-20240415-1911/system/ethtool.txt'
     ethtool -i 'eth0' 2>/dev/null|todos >>'/tower-diagnostics-20240415-1911/system/ethtool.txt'
     ethtool 'eth1' 2>/dev/null|todos >>'/tower-diagnostics-20240415-1911/system/ethtool.txt'
     logs.zip
  4. @samsausages Could this setting be made per-pool somehow? I have one pool that I don't mind refreshing every 30 seconds, but I want my secondary pool to sleep.
  5. I just got the new LincStation N1 and I am setting up the array. I added the disks as XFS. I decided to remove two NVMes with no data on them and add them as a cache pool in BTRFS. Suddenly my Kingston NVMe (disk 3) in the array, which holds all my appdata, gives the message "XFS - Unmountable: Unsupported partition layout". I read in other threads that UFS Explorer is the way to go, but it can only run on Windows as far as I can tell. What other options do I have? (A partition-layout sketch follows this list.) linc-diagnostics-20240129-0829.zip
  6. I just got 4 drives exchanged free of charge by Seagate. So at least there's that.
  7. @JorgeB Correct, it was a typo: it was RAID0. I did not see your reply before I tried running this last night: btrfs restore -v /dev/nvme0n1p1 /mnt/disk2/restore It worked! I have all my data in the restore folder now, and things are back to how they should be. Thanks for the suggestion though!
  8. I was trying to remove one NVMe disk from my RAID1 pool by following the guide. I did the following: stopped Docker and VMs, stopped the array, removed cache2 from the pool, pressed "I am sure", and started the array. I saw the unmountable filesystem, so I stopped the array and tried adding the removed cache2 back. It didn't work. Now I cannot mount any of the NVMes and I'm afraid the filesystem is corrupt. I tried to rescue it by following the guide, but the guide does not specify how to mount multiple NVMe drives. Also, the /dev/sdX1 devices are array disks, not NVMes. I tried the command: mount -o rescure=all,ro /dev/nvme0n1p1 /temp but it gives the error: mount: /temp: wrong fs type, bad option, bad superblock on /dev/nvme0n1p1, missing codepage or helper program, or other error. dmesg(1) may have more information after failed mount system call. I am not sure what I am doing with btrfs. What can I do? (A rescue-mount sketch follows this list.) tower-diagnostics-20230105-1935.zip
  9. I followed this guide to get rsync working with Hyper Backup for Synology NAS: https://www.beatificabytes.be/backup-synology-to-unraid/ It should probably be set up like this by default in unRAID. (A minimal rsync daemon sketch follows this list.)
  10. @JorgeB I used UFS Explorer now to see if it might work. I am not missing any crucial data, so I guess I will just reformat the disk. Thanks for the help.
  11. I just did as you said and the same errors occur:
      Phase 1 - find and verify superblock...
      superblock read failed, offset 0, size 524288, ag 0, rval -1
      fatal error -- Input/output error
      tower-diagnostics-20221110-1517.zip log.zip
  12. Ran the command for 20 hours: xfs_repair /dev/sdm. Finally it said: "Sorry, could not find valid secondary superblock". Secondly, this just fails immediately: xfs_repair -v /dev/md8
      Phase 1 - find and verify superblock...
      superblock read failed, offset 0, size 524288, ag 0, rval -1
      fatal error -- Input/output error
      (A raw read-test sketch follows this list.) log.txt
  13. I just upgraded unRAID to 6.11.3 and I get this error after reboot: it seems to connect fine, but says my FS is corrupt? This is what I get when starting the array, in "Log disk info":
      Nov 9 10:23:40 Tower kernel: sd 1:0:5:0: [sdm] 31251759104 512-byte logical blocks: (16.0 TB/14.6 TiB)
      Nov 9 10:23:40 Tower kernel: sd 1:0:5:0: [sdm] 4096-byte physical blocks
      Nov 9 10:23:40 Tower kernel: sd 1:0:5:0: [sdm] Write Protect is off
      Nov 9 10:23:40 Tower kernel: sd 1:0:5:0: [sdm] Mode Sense: 9b 00 10 08
      Nov 9 10:23:40 Tower kernel: sd 1:0:5:0: [sdm] Write cache: enabled, read cache: enabled, supports DPO and FUA
      Nov 9 10:23:40 Tower kernel: sdm: sdm1
      Nov 9 10:23:40 Tower kernel: sd 1:0:5:0: [sdm] Attached SCSI disk
      Nov 9 10:24:11 Tower emhttpd: ST16000NM001G-2KK103_WL206EZD (sdm) 512 31251759104
      Nov 9 10:24:11 Tower kernel: mdcmd (9): import 8 sdm 64 15625879500 0 ST16000NM001G-2KK103_WL206EZD
      Nov 9 10:24:11 Tower kernel: md: import disk8: (sdm) ST16000NM001G-2KK103_WL206EZD size: 15625879500
      Nov 9 10:24:11 Tower emhttpd: read SMART /dev/sdm
      Nov 9 10:28:12 Tower s3_sleep: excluded disks=sda sdb sdc sdd sde sdf sdg sdh sdi sdj sdk sdl sdm sdn sdo
      Nov 9 10:30:50 Tower emhttpd: read SMART /dev/sdm
      Nov 9 10:30:51 Tower s3_sleep: excluded disks=sda sdb sdc sdd sde sdf sdg sdh sdi sdj sdk sdl sdm sdn sdo
      Nov 9 10:34:22 Tower s3_sleep: excluded disks=sda sdb sdc sdd sde sdf sdg sdh sdi sdj sdk sdl sdm sdn sdo
      So I can see from the first line that the data is still there. What can I do? I tried new cables and power connectors. tower-diagnostics-20221109-1033.zip
  14. I will try and look through all the Power Saving settings I can find. I already had it installed, and it was set to Power Save with Turbo Boost set to Yes.
  15. I understand that I can remove hardware to save power, but my intention is to keep everything as-is. I just want to know if something is preventing the system from running in low-power or idle mode(s). (A power-state sketch follows this list.)
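
For post 1: a minimal sketch, assuming a stock Unraid shell, for checking which kernel driver is bound to the I226 ports. The Intel I225/I226 family is handled by the in-kernel igc module, so if ethtool -i does not report igc, the wrong driver is in play.

# List network controllers and the kernel driver each one is using
lspci -knn | grep -A3 -i ethernet

# Show the driver and firmware version bound to a specific port
ethtool -i eth1

# Inspect the igc module itself (in-kernel driver for Intel I225/I226)
modinfo igc | head

# Reload the driver as a test; note this briefly drops every igc-driven link
modprobe -r igc && modprobe igc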
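
For posts 2 and 3: the diagnostics script stalls on the ethtool 'eth1' call, which suggests the query against that port never returns. A minimal sketch to confirm the hang without freezing the session; timeout is from coreutils, and the PCI address in the last step is a placeholder to be read from /sys/class/net/eth1/device.

# Run ethtool with a hard time limit; exit status 124 means it hung and was killed
timeout 5 ethtool eth1; echo "exit status: $?"
timeout 5 ethtool -i eth1; echo "exit status: $?"

# Check the kernel log for errors on that port
dmesg | grep -iE 'igc|eth1' | tail -n 20

# Find the port's PCI address
ls -l /sys/class/net/eth1/device

# Last resort: detach the misbehaving port from its driver so diagnostics can finish
# (assumption: 0000:02:00.0 is eth1's PCI address)
echo 0000:02:00.0 > /sys/bus/pci/drivers/igc/unbind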
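
For post 5: "Unmountable: Unsupported partition layout" is a complaint about the partition table, not necessarily about the XFS data behind it. A minimal sketch for inspecting the disk before reaching for UFS Explorer; /dev/nvme0n1 stands in for the affected Kingston device.

# Show the partition table and the start sector of each partition
fdisk -l /dev/nvme0n1

# Read-only check of the XFS filesystem itself (-n makes no repairs)
xfs_repair -n /dev/nvme0n1p1

# If the filesystem checks out, try a read-only mount at a temporary point
mkdir -p /x && mount -o ro /dev/nvme0n1p1 /x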
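
For post 8, two observations. First, the mount option is spelled rescue=all; the command as typed (rescure=all) is by itself enough to produce the "bad option" error. Second, a multi-device btrfs pool is mounted through any one member once the kernel has scanned them all. A minimal sketch, assuming /dev/nvme0n1p1 and /dev/nvme1n1p1 are the two pool members:

# Make the kernel aware of all btrfs member devices
btrfs device scan

# Confirm both NVMe partitions are listed under the same filesystem UUID
btrfs filesystem show

# Mount the whole pool read-only in rescue mode via any one member
# (note the spelling: rescue=all)
mkdir -p /temp
mount -o rescue=all,ro /dev/nvme0n1p1 /temp

# If a member is missing, degraded mode may still let the data be copied off
mount -o degraded,rescue=all,ro /dev/nvme0n1p1 /temp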
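
For post 9: the linked guide amounts to running an rsync daemon on Unraid for Hyper Backup to target. A minimal sketch under that assumption; the module name and share path are placeholders, and the guide's own config should take precedence.

# Write a minimal rsync daemon config (module name and path are placeholders)
cat > /etc/rsyncd.conf <<'EOF'
uid = nobody
gid = users
use chroot = yes

[NetBackup]
    path = /mnt/user/backup
    comment = Synology Hyper Backup target
    read only = no
EOF

# Start the daemon
rsync --daemon

Since Unraid's root filesystem lives in RAM, this has to be re-applied after each reboot, e.g. from /boot/config/go.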
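
For posts 11-13: "superblock read failed ... rval -1" with an Input/output error at offset 0 means the read itself failed, which points below the filesystem, at the disk, cabling, or controller, rather than at XFS. A minimal sketch to confirm whether the device is readable at all; /dev/sdm is the disk from the posts.

# Try to read the first 16 MiB raw; an I/O error here means the problem is not XFS
dd if=/dev/sdm of=/dev/null bs=1M count=16 status=progress

# Check the drive's own health and error counters
smartctl -a /dev/sdm | grep -iE 'reallocated|pending|uncorrect|result'

# Watch the kernel log for ATA/SCSI errors raised by the reads
dmesg | tail -n 30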
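
For posts 14 and 15: a minimal sketch, assuming a stock Unraid shell, for checking whether the CPU is actually permitted to enter its low-power states; nothing here changes any settings.

# Current frequency scaling governor ("powersave" is the low-idle-draw choice)
cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor

# Which scaling driver is in charge
cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_driver

# C-states the CPU may enter; a 1 in a "disable" file blocks that state
grep . /sys/devices/system/cpu/cpu0/cpuidle/state*/name \
       /sys/devices/system/cpu/cpu0/cpuidle/state*/disable

# If powertop is installed, its tunables view lists devices holding the system out of idle
powertop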