mrbusiness

Members · 35 posts
Everything posted by mrbusiness

  1. Yes, 4 similar NICs from this board: Alder Lake N100 NAS Motherboard ITX Home Processor DDR5 4* I226 2.5G LAN M.2 Slot 6xSATA DP HD Low Power https://a.aliexpress.com/_Ex41iIT It's Intel I226. It had been working for months without this issue; it suddenly just happened. Instead of disabling it in the BIOS, is there a way to get the correct driver?
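A quick way to answer the driver question is to check which kernel module is currently bound to the interface. On Linux the Intel I225/I226 family is driven by the in-kernel `igc` module; a sketch, assuming the affected interface is `eth1` as in the later posts, and guarded so it degrades gracefully where the tool or NIC is absent:

```shell
# Print the kernel driver currently bound to eth1. For an Intel I226 this
# should report "igc"; anything else suggests the wrong driver is loaded.
nic_driver="unknown (ethtool not available)"
if command -v ethtool >/dev/null 2>&1; then
  nic_driver=$(ethtool -i eth1 2>/dev/null | awk '/^driver:/{print $2}') || true
  [ -n "$nic_driver" ] || nic_driver="eth1 not present"
fi
echo "eth1 driver: ${nic_driver}"
```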
  2. Did not work. Same issues. Diagnostics gets stuck at ethtool 'eth1' 2>/dev/null|todos >>'/tower-diagnostics-20240415-1911/system/ethtool.txt'
  3. Every time I open Docker Settings or Network Settings, my server becomes completely unresponsive. I cannot reach any page or SSH into the server either, so I am forced to shut it down by holding the power button. I enabled syslog after the last hard shutdown. Even pulling diagnostics makes the server unresponsive, so I had to attach a monitor and keyboard to get the diagnostics. The server is frozen, so I could only take a screenshot with my phone... I will try to get the logs out to a file. I tried removing subnets and changing multiple Docker and network settings. Furthermore, I keep noticing 'eth1' as a culprit. What else can I do? Diagnostics:
     mkdir -p /boot/logs
     mkdir -p '/tower-diagnostics-20240415-1911/system' '/tower-diagnostics-20240415-1911/config' '/tower-diagnostics-20240415-1911/logs' '/tower-diagnostics-20240415-1911/shares' '/tower-diagnostics-20240415-1911/smart' '/tower-diagnostics-20240415-1911/qemu' '/tower-diagnostics-20240415-1911/xml'
     top -bn1 -o%CPU 2>/dev/null|todos >'/tower-diagnostics-20240415-1911/system/top.txt'
     tail /boot/bz*.sha256 >> '/tower-diagnostics-20240415-1911/unraid-6.12.10.txt'
     uptime
     nproc
     lscpu 2>/dev/null|todos >'/tower-diagnostics-20240415-1911/system/lscpu.txt'
     lsscsi -vgl 2>/dev/null|todos >'/tower-diagnostics-20240415-1911/system/lsscsi.txt'
     lspci -knn 2>/dev/null|todos >'/tower-diagnostics-20240415-1911/system/lspci.txt'
     lsusb 2>/dev/null|todos >'/tower-diagnostics-20240415-1911/system/lsusb.txt'
     free -mth 2>/dev/null|todos >'/tower-diagnostics-20240415-1911/system/memory.txt'
     ps -auxf --sort=-pcpu 2>/dev/null|todos >'/tower-diagnostics-20240415-1911/system/ps.txt'
     lsof -Pni 2>/dev/null|todos >'/tower-diagnostics-20240415-1911/system/lsof.txt'
     lsmod|sort 2>/dev/null|todos >'/tower-diagnostics-20240415-1911/system/lsmod.txt'
     df -h 2>/dev/null|todos >'/tower-diagnostics-20240415-1911/system/df.txt'
     ip -br a|awk '/^(eth|bond)[0-9]+ /{print $1}'|sort
     dmidecode -qt2|awk -F: '/^ Manufacturer:/{m=$2};/^ Product Name:/{p=$2} END{print m" -"p}' 2>/dev/null|todos >'/tower-diagnostics-20240415-1911/system/motherboard.txt'
     dmidecode -qt0 2>/dev/null|todos >>'/tower-diagnostics-20240415-1911/system/motherboard.txt'
     cat /proc/meminfo 2>/dev/null|todos >'/tower-diagnostics-20240415-1911/system/meminfo.txt'
     dmidecode --type 17 2>/dev/null|todos >>'/tower-diagnostics-20240415-1911/system/meminfo.txt'
     ethtool 'bond0' 2>/dev/null|todos >>'/tower-diagnostics-20240415-1911/system/ethtool.txt'
     ethtool -i 'bond0' 2>/dev/null|todos >>'/tower-diagnostics-20240415-1911/system/ethtool.txt'
     ethtool 'eth0' 2>/dev/null|todos >>'/tower-diagnostics-20240415-1911/system/ethtool.txt'
     ethtool -i 'eth0' 2>/dev/null|todos >>'/tower-diagnostics-20240415-1911/system/ethtool.txt'
     ethtool 'eth1' 2>/dev/null|todos >>'/tower-diagnostics-20240415-1911/system/ethtool.txt'
     logs.zip
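Since the diagnostics script stalls exactly at the `ethtool 'eth1'` step, one way to confirm that this single call hangs (rather than the whole script) is to run it under `timeout(1)`, which kills an overrunning command and exits with status 124. A sketch; the fallback branch is demonstration-only for systems without ethtool:

```shell
# Run the suspect ethtool call with a 10 second limit. Exit status 124 from
# timeout(1) means the call hung and was killed; anything else means it
# returned on its own.
status=0
if command -v ethtool >/dev/null 2>&1; then
  timeout 10 ethtool eth1 || status=$?
else
  timeout 1 sleep 3 || status=$?   # demonstration only: forces the timeout case
fi
if [ "$status" -eq 124 ]; then
  echo "ethtool call timed out"
else
  echo "ethtool call finished with status $status"
fi
```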
  4. @samsausages Could this setting be per pool somehow? I have a pool, which I don't mind refreshing every 30 sec, but I want my secondary pool to sleep.
  5. I just got the new LincStation N1 and I am setting up the array. I added the disks as XFS. I decided to remove 2 NVMes with no data on them and add them as a cache pool in BTRFS. Suddenly my Kingston NVMe (disk 3) in the array, with all my AppData, gives the message "XFS - Unmountable: Unsupported partition layout". I read in other threads that UFS Explorer is the way to go, but as far as I can tell it can only run on Windows. What other options do I have? linc-diagnostics-20240129-0829.zip
  6. I just got 4 drives exchanged free-of-charge with Seagate. So at least there's that.
  7. @JorgeB Correct - It was a typo. It was RAID0. I did not see your reply before I tried running last night: btrfs restore -v /dev/nvme0n1p1 /mnt/disk2/restore It worked! I have all my data in the restore folder now and things are back to how they should be. Thanks for the suggestion though!
  8. I was trying to remove one NVMe disk from my RAID1 pool by following the guide: I did the following:
     Stopped Docker and VMs
     Stopped the array
     Removed cache2 from the pool
     Pressed "I am sure"
     Started the array
     I saw the unmountable fs. I stopped the array and tried adding back the removed cache2. It didn't work. Now I cannot mount any of the NVMes and I'm afraid the fs is corrupt. I tried to rescue it by following the guide: The guide does not specify how to mount multiple NVMe drives. Also, the /dev/sdX1 devices are array disks, not NVMes. I tried the command: mount -o rescure=all,ro /dev/nvme0n1p1 /temp but it gives the error: mount: /temp: wrong fs type, bad option, bad superblock on /dev/nvme0n1p1, missing codepage or helper program, or other error. dmesg(1) may have more information after failed mount system call. I am not sure what I am doing with btrfs. What can I do? tower-diagnostics-20230105-1935.zip
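Two details worth noting here: the btrfs mount option is spelled `rescue=all` (the `rescure` in the command above is an unknown option, which alone can produce the "bad option" part of that mount error), and a multi-device btrfs pool generally needs all members registered via `btrfs device scan` (or passed with `device=` options) before any single member will mount. A sketch using the device name and mountpoint quoted in the post, guarded so it only acts where the tools and device actually exist:

```shell
# Attempt a read-only rescue mount of a btrfs pool member. "btrfs device
# scan" registers every pool member first; "rescue=all" enables all
# recovery-oriented mount behaviors.
result="btrfs tools or /dev/nvme0n1p1 not present; commands shown for reference"
if command -v btrfs >/dev/null 2>&1 && [ -b /dev/nvme0n1p1 ]; then
  mkdir -p /temp
  btrfs device scan || true                 # register all pool members
  if mount -o ro,rescue=all /dev/nvme0n1p1 /temp; then
    result="mounted read-only at /temp"
  else
    result="mount failed; check dmesg for the btrfs error"
  fi
fi
echo "$result"
```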
  9. I followed this guide to get rsync working with Hyper Backup for Synology NAS: https://www.beatificabytes.be/backup-synology-to-unraid/ It should probably be like this by default in unRAID.
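For reference, a setup like the one in that guide runs an rsync daemon on the Unraid side that Hyper Backup connects to. A minimal `/etc/rsyncd.conf` is roughly of this shape — the module name and path below are illustrative examples, not taken from the guide:

```
uid = root
gid = root
use chroot = no
[backup]
    path = /mnt/user/backup
    comment = Hyper Backup target
    read only = no
```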
  10. @JorgeB I used UFS Explorer now to see if it might work. I am not missing any crucial data, so I guess I will just reformat the disk. Thanks for the help.
  11. I just did as you said and the same errors occur:
     Phase 1 - find and verify superblock...
     superblock read failed, offset 0, size 524288, ag 0, rval -1
     fatal error -- Input/output error
     tower-diagnostics-20221110-1517.zip log.zip
  12. log.txt Ran the command for 20 hours: xfs_repair /dev/sdm Finally it said: "Sorry, could not find valid secondary superblock" Secondly, this just fails immediately: xfs_repair -v /dev/md8
     Phase 1 - find and verify superblock...
     superblock read failed, offset 0, size 524288, ag 0, rval -1
     fatal error -- Input/output error
  13. I just upgraded unRAID to 6.11.3 and I get this error after reboot: It seems to connect fine but says my FS is corrupt? I get this when starting the array in "Log disk info":
     Nov 9 10:23:40 Tower kernel: sd 1:0:5:0: [sdm] 31251759104 512-byte logical blocks: (16.0 TB/14.6 TiB)
     Nov 9 10:23:40 Tower kernel: sd 1:0:5:0: [sdm] 4096-byte physical blocks
     Nov 9 10:23:40 Tower kernel: sd 1:0:5:0: [sdm] Write Protect is off
     Nov 9 10:23:40 Tower kernel: sd 1:0:5:0: [sdm] Mode Sense: 9b 00 10 08
     Nov 9 10:23:40 Tower kernel: sd 1:0:5:0: [sdm] Write cache: enabled, read cache: enabled, supports DPO and FUA
     Nov 9 10:23:40 Tower kernel: sdm: sdm1
     Nov 9 10:23:40 Tower kernel: sd 1:0:5:0: [sdm] Attached SCSI disk
     Nov 9 10:24:11 Tower emhttpd: ST16000NM001G-2KK103_WL206EZD (sdm) 512 31251759104
     Nov 9 10:24:11 Tower kernel: mdcmd (9): import 8 sdm 64 15625879500 0 ST16000NM001G-2KK103_WL206EZD
     Nov 9 10:24:11 Tower kernel: md: import disk8: (sdm) ST16000NM001G-2KK103_WL206EZD size: 15625879500
     Nov 9 10:24:11 Tower emhttpd: read SMART /dev/sdm
     Nov 9 10:28:12 Tower s3_sleep: excluded disks=sda sdb sdc sdd sde sdf sdg sdh sdi sdj sdk sdl sdm sdn sdo
     Nov 9 10:30:50 Tower emhttpd: read SMART /dev/sdm
     Nov 9 10:30:51 Tower s3_sleep: excluded disks=sda sdb sdc sdd sde sdf sdg sdh sdi sdj sdk sdl sdm sdn sdo
     Nov 9 10:34:22 Tower s3_sleep: excluded disks=sda sdb sdc sdd sde sdf sdg sdh sdi sdj sdk sdl sdm sdn sdo
     So I can see from the first line that the data is still there. What can I do? I already tried new cables and power connectors. tower-diagnostics-20221109-1033.zip
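Since xfs_repair in the earlier posts fails with an I/O error rather than plain corruption, a raw read of the start of the device can help tell a disk/cabling problem apart from a damaged filesystem. A sketch using `/dev/sdm`, the device named in the log above, guarded so it only runs where that device exists:

```shell
# Raw-read the first 1 MiB of the device. If even this fails, the problem
# is below the filesystem (disk, cable, controller), not XFS itself.
read_test="device /dev/sdm not present on this system"
if [ -b /dev/sdm ]; then
  if dd if=/dev/sdm of=/dev/null bs=512 count=2048 2>/dev/null; then
    read_test="raw read succeeded; the superblock area is at least readable"
  else
    read_test="raw read failed; suspect the disk, cabling, or controller"
  fi
fi
echo "$read_test"
```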
  14. I will try to look through all the Power Saving settings I can find. I already had it installed, and it was set to Power Save with Turbo Boost set to Yes.
  15. I understand that I can remove hardware to save power, but my intention is to keep everything as-is. I just want to know if something is preventing the system from running in low-power or idle mode(s).
  16. Good observation - I have added the PSU now (Corsair CX600M, 600W). It is very strange that the power meter shows 250V. I have removed it for now and will look for another one. However, it is this very commonly used model: HiHome WiFi Smart Plug 16A with Energy Meter (WPP-16S). I can see that this model is no longer on the market and a newer version is being sold. I will investigate further whether this is a known issue with the model.
  17. Deleting a folder with a photo project takes FOREVER - I always just end up cancelling it. Transferring many files also takes forever to begin, while on Windows it starts instantly. I have already tried a lot of the things mentioned here:
     https://forums.unraid.net/topic/108898-osx-smb-with-unraid-reading-of-directories-takes-forever/
     https://www.reddit.com/r/unRAID/comments/ui77u0/extremely_slow_transfer_speed_macos_to_unraid_smb
     https://forums.unraid.net/topic/96051-my-unraid-server-is-too-slow-on-mac/
     https://support.apple.com/en-us/HT208209
     My settings:
     Enable SMB Multi Channel: No
     Enhanced macOS interoperability: Yes
     Enable NetBIOS: No
     Enable WSD: Yes
     My Samba extra configuration:
     [global]
     vfs objects = catia fruit streams_xattr
     fruit:nfs_aces = no
     fruit:zero_file_id = yes
     fruit:metadata = stream
     fruit:encoding = native
     spotlight backend = tracker
     Attached my diagnostics. What can be done? Switch to OpenMediaVault? tower-diagnostics-20221024-0818.zip
  18. I want to lower my power consumption during these times, so I am doing some investigation into my server.
     Power consumption when idle:
     Idle + spun-down: around 80-100W with the array spun down
     Spun-up idle: 135W with all disks spun up (no load), mostly idling
     Full load: 180-200W (Parity-Sync + downloads running)
     My system is built with:
     MB: Gigabyte H470M DS3H (default BIOS settings, only fan settings set to Silent) - idle: 20-30W (bad source)
     CPU: i5-10400 @ 2.90GHz - idle: 10W, full load: 86W (source)
     RAM: 4 x 32GB G.Skill Value DDR4-2666 C19 - idle: 4 x 3W = 12W (estimated based on source)
     SAS controller: LSI SAS 9300-8i SGL - full load: 14-19W (source); idle should be lower than 13W, but is not specified
     NIC: ASUS XG-C100C 10G - idle: less than 10W (from what I can tell other small NICs use)
     HDD array: Parity: Seagate Exos X16 16 TB (ST16000NM001G); 9 x Seagate Exos X16 16 TB (ST16000NM001G); 2 x WD Red Plus NAS 8 TB (WD80EFZZ)
     SSD: 2 x Crucial MX500 2.5" 1TB - idle: less than 1W (source)
     PSU: Corsair CX600M, 600W
     From this I estimate my system should idle at 50-80W, so I am not hopeful it can idle any lower. However, comparing to "standard" products:
     Synology DS1821+: idle 26W, load 60W (source)
     Mac Mini M1: idle 6.8W, load 39W (source)
     Is there any hope of lowering my power consumption? What are your systems idling at? tower-diagnostics-20221024-0818.zip
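One way to see whether anything is blocking the low-power states asked about here is to measure package C-state residency with powertop. Note powertop is not part of stock Unraid and may need installing separately (e.g. via a community plugin); a guarded sketch:

```shell
# Collect a 10-second powertop report. The C-state columns show whether any
# device or process is keeping the CPU out of deep idle states.
summary="powertop not installed on this system"
if command -v powertop >/dev/null 2>&1; then
  if powertop --time=10 --csv=/tmp/powertop.csv >/dev/null 2>&1; then
    summary="report written to /tmp/powertop.csv"
  else
    summary="powertop failed to run (may need root)"
  fi
fi
echo "$summary"
```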
  19. I want to upgrade my VM to Windows 11, but the Windows 11 Installation Assistant blocks me because the Secure Boot and TPM checks fail. Is there some way to fix this in my VM?
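A common approach for this in libvirt-based VMs is to boot with OVMF firmware that supports Secure Boot and to add an emulated TPM 2.0 device to the VM's XML. A sketch only - the loader path below is an example and will differ per Unraid version, and switching firmware on an existing Windows install can require extra steps:

```xml
<os>
  <!-- loader path is an example; use the OVMF firmware shipped with your system -->
  <loader readonly='yes' secure='yes' type='pflash'>/usr/share/qemu/ovmf-x64/OVMF_CODE.fd</loader>
</os>
<features>
  <smm state='on'/>  <!-- required by libvirt when secure='yes' -->
</features>
<devices>
  <tpm model='tpm-crb'>
    <backend type='emulator' version='2.0'/>  <!-- software TPM via swtpm -->
  </tpm>
</devices>
```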
  20. I removed a NIC some time ago, and I have Pi-hole running on the default port (53), which I suspect might be causing these issues. When I try to start a VM I get the error: Cannot get interface MTU on 'virbr0': No such device Inside Network Settings I have Routing Table: If I try to add "virbr0" manually and press ADD ROUTE, nothing happens - no error message. What can I do to fix this and make my VM start? tower-diagnostics-20210825-1250.zip
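For context, `virbr0` is the bridge created by libvirt's built-in "default" NAT network; if that network is inactive the bridge simply does not exist, and VM start fails with exactly this MTU error. A guarded sketch that checks and (re)starts it:

```shell
# If libvirt's "default" network is inactive, virbr0 is missing. Starting
# and autostarting the network recreates the bridge.
net_status="virsh not available on this system"
if command -v virsh >/dev/null 2>&1; then
  virsh net-list --all || true               # is "default" listed as inactive?
  virsh net-start default 2>/dev/null || true
  virsh net-autostart default 2>/dev/null || true
  net_status="default network start attempted"
fi
echo "$net_status"
```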
  21. Thanks, this worked! However, I have now been running with this for 3-4 days and I am getting "out of memory" crashes in unRAID. Even after a fresh reboot I get the errors once I start plotting again. Any ideas?
  22. I am getting these 2 errors: "Multiple NICs on the same IPv4 network" - This is false, because I used to have 2 NICs installed but now have only 1. I do not have "eth1" inside my /config/network.cfg and I have no way of removing or seeing it inside Network Settings. Any tips? "Invalid folder ram contained within /mnt" - This is my RAM disk placed under /mnt/ram. I could ignore this, but is there another way such that this plugin allows it? tower-diagnostics-20210720-1929.zip
  23. Could someone point me to a thread or guide on how to set up a RAM disk for this container?
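In general a RAM disk on Linux is just a tmpfs mount; the size and mountpoint below are examples. Note that a path directly under /mnt can trigger the "Invalid folder ... contained within /mnt" plugin warning mentioned in an earlier post, so a location like /tmp may be preferable. A guarded sketch:

```shell
# Create a 1 GiB tmpfs RAM disk (size and path are example values).
# Requires root; falls through with a message otherwise.
ram_status="not mounted (needs root and mount support)"
if [ "$(id -u)" -eq 0 ]; then
  mkdir -p /tmp/ramdisk
  if mount -t tmpfs -o size=1g tmpfs /tmp/ramdisk 2>/dev/null; then
    ram_status="tmpfs mounted at /tmp/ramdisk"
  fi
fi
echo "$ram_status"
```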