Posts posted by MrTee
-
Alright, will do, and I'll report back if I find anything conclusive.
-
-
46 minutes ago, JorgeB said:
Backup current flash, re-create with a stock install, then restore the bare minimum from the current config, like the key, super.dat and pools folder, docker templates, there are a few cfg files that should be safe to restore, like network, shares, etc, you can then re-configure the server or restore the rest a few files at a time to see if you find the culprit.
Wow, that issue impacts more than I thought.
So to back up, I go to Main > Flash and then "Flash Backup" >> Creation error: "Insufficient free disk space available".
EDIT:
What other options do I have to back up? A plain copy?
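If a plain copy is acceptable, I guess something like this from the console would do it (just a sketch; /mnt/user/backup is an assumed destination, any share or external disk with enough space should work):

mkdir -p /mnt/user/backup/flash
cp -a /boot/. /mnt/user/backup/flash/
# or with rsync, which can be re-run incrementally
rsync -a /boot/ /mnt/user/backup/flash/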
-
I disabled Docker; VMs were already disabled.
I rebooted in safe mode again and have the same issue.
So my flash install seems to be bugged. I am clueless about where to look.
So what are my options?
Reinstall? What would I lose?
-
30 minutes ago, Squid said:
/boot/extra
total 18808
-rw------- 1 root root 2174744 Nov 17  2022 powertop-2.15-x86_64-1.txz
-rw------- 1 root root 8472744 Dec  9 20:26 vim-9.0.0623-x86_64-1_nerdtools.txz
-rw------- 1 root root 8601288 May 17 18:11 vim-9.0.1493-x86_64-1_nerdtools.txz
Get rid of this stuff in /extra on the flash drive.
Did that, rebooted, no change.
Where else might I look?
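"Did that" was simply deleting the packages from the extra folder on the flash drive, roughly like this (a sketch; one could also move them aside somewhere instead of deleting):

rm /boot/extra/*.txz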
-
I did not change the go file manually; I think this is from the Tips and Tweaks plugin.
I downloaded the Unraid install files and used the go file from there.
root@Prime:/boot/config# cat go
#!/bin/bash
# Start the Management Utility
/usr/local/sbin/emhttp &
Restarted; still the same.
root@Prime:~# df /
Filesystem     1K-blocks  Used Available Use% Mounted on
rootfs                 0     0         0    - /
Any other suggestions?
-
I rebooted several times with no effect, unfortunately.
I had a look into the go file, but there are only energy-saving functions as far as I can tell.
root@Prime:/boot/config# cat go
#!/bin/bash
# Start the Management Utility
/usr/local/sbin/emhttp &

# -------------------------------------------------
# Set power-efficient CPU governor
# -------------------------------------------------
/etc/rc.d/rc.cpufreq powersave

# -------------------------------------------------
# Enable power-efficient ethernet
# -------------------------------------------------
# enable IEEE 802.3az (Energy Efficient Ethernet): Could be incompatible to LACP bonds!
for i in /sys/class/net/eth?; do dev=$(basename $i); [[ $(echo $(ethtool --show-eee $dev 2> /dev/null) | grep -c "Supported EEE link modes: 1") -eq 1 ]] && ethtool --set-eee $dev eee on; done
# Disable wake on lan
for i in /sys/class/net/eth?; do ethtool -s $(basename $i) wol d; done

# -------------------------------------------------
# powertop tweaks
# -------------------------------------------------
# Enable SATA link power management
echo med_power_with_dipm | tee /sys/class/scsi_host/host*/link_power_management_policy
# Runtime PM for I2C Adapter (i915 gmbus dpb)
echo auto | tee /sys/bus/i2c/devices/i2c-*/device/power/control
# Autosuspend for USB device
echo auto | tee /sys/bus/usb/devices/*/power/control
# Runtime PM for disk
echo auto | tee /sys/block/sd*/device/power/control
# Runtime PM for PCI devices
echo auto | tee /sys/bus/pci/devices/????:??:??.?/power/control
# Runtime PM for ATA devices
echo auto | tee /sys/bus/pci/devices/????:??:??.?/ata*/power/control
I looked at cron and nothing seems out of the ordinary there either. I restarted (also in safe mode) and the issue persists with the array both started and stopped.
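Maybe grepping the flash config for anything that touches mounts or the rootfs would turn something up; this is roughly what I have in mind (just a sketch, the paths are the usual places startup bits live, and the user.scripts folder only exists if that plugin is installed):

grep -rn "mount\|rootfs" /boot/config/go 2>/dev/null
grep -rln "mount\|rootfs" /boot/config/plugins/ 2>/dev/null
ls -la /boot/config/plugins/user.scripts/scripts/ 2>/dev/null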
Where else could I search for a script or something?
-
Should there be a mount in fstab for /?
On my Arch machine there is, but AFAIK Unraid works differently and runs from RAM.
root@Prime:~# cat /etc/fstab
/dev/sda1 /boot vfat rw,flush,noatime,nodiratime,dmask=77,fmask=177,shortname=mixed
/boot/bzmodules /lib/modules squashfs ro,defaults
/boot/bzfirmware /lib/firmware squashfs ro,defaults
tmpfs /dev/shm tmpfs defaults
hugetlbfs /hugetlbfs hugetlbfs defaults
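For what it's worth, the live mount table can also be checked directly; that might show how rootfs is (or isn't) mounted compared to a healthy box (read-only sketch, nothing is changed):

head -n 5 /proc/mounts
grep ' / ' /proc/mounts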
-
Hello,
I'm running into an issue with my rootfs, where the output of "df /" is 0 across the board.
root@Prime:~# df /
Filesystem     1K-blocks  Used Available Use% Mounted on
rootfs                 0     0         0    - /
The issue came up when I tried to preclear disks with the unassigned.devices.preclear plugin. At one point it checks for free space on '/', and that check misbehaved because of my issue. I posted in the plugin's support thread and the plugin was updated; it now correctly aborts the whole script with a 'Low memory...' message.
Since this issue is unrelated to the plugin, I'm posting here in General Support; I hope that's right. I do not know where to look to find out what's scrambling the rootfs.
I restarted in safe mode, but this does not change the output of "df /".
Running current stable [6.11.5]. Array autostart disabled.
Attaching diagnostics from normal boot and safe mode. Thanks in advance!
normal_prime-diagnostics-20230524-0721.zip safemode_prime-diagnostics-20230524-0733.zip
-
root@Prime:/# ls -la /
total 8
drwxr-xr-x  21 root root    0 May 17 16:01 ./
drwxr-xr-x  21 root root    0 May 17 16:01 ../
drwxr-xr-x   3 root root    0 May 17 09:13 .config/
drwxr-xr-x   2 root root    0 May 17 09:13 bin/
drwx------  11 root root 8192 Jan  1  1970 boot/
drwxr-xr-x  17 root root 3560 May 17 09:17 dev/
drwxr-xr-x  59 root root    0 May 17 09:17 etc/
drwxr-xr-x   2 root root    0 Nov 20 22:27 home/
drwxr-xr-x   2 root root    0 May 17 09:12 hugetlbfs/
lrwxrwxrwx   1 root root   10 Nov 20 22:27 init -> /sbin/init*
drwxrwxrwx   8 root root    0 Mar 18  2020 lib/
drwxr-xr-x   7 root root    0 Nov 20 22:27 lib64/
drwxr-xr-x  13 root root    0 May 17 09:17 mnt/
drwx--x--x   3 root root    0 May 17 09:17 opt/
dr-xr-xr-x 418 root root    0 May 17 09:12 proc/
drwx--x---   3 root root    0 May 17 09:36 root/
drwxr-xr-x  18 root root 1000 May 17 09:38 run/
drwxr-xr-x   2 root root    0 May 17 09:13 sbin/
dr-xr-xr-x  13 root root    0 May 17 09:12 sys/
drwxrwxrwt  15 root root    0 May 17 17:38 tmp/
drwxr-xr-x  14 root root    0 Mar 29 17:05 usr/
drwxr-xr-x  15 root root    0 Jan  9  2017 var/
In safe mode:
root@Prime:~# df /
Filesystem     1K-blocks   Used Available Use% Mounted on
rootfs           7990928 954976   7035952  12% /
So there has to be a plugin or Docker container messing with something (there's no VM setup).
I disabled auto-start of the array and turned off safe mode:
root@Prime:~# df /
Filesystem     1K-blocks  Used Available Use% Mounted on
rootfs                 0     0         0    - /
Since the array is offline... am I right to assume that no Docker container is messing around and that I should look through my plugins?
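To get an overview of what to go through, listing the installed plugins seems like the quickest start (a sketch; these are the standard Unraid locations as far as I know):

ls -la /boot/config/plugins/   # plugin .plg files and per-plugin folders on the flash drive
ls -la /var/log/plugins/       # plugins actually loaded this boot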
-
1 hour ago, dlandon said:
Post diagnostics.
The script needs to be modified to at least handle this situation a bit better. I'll work on that.
Thank you in advance.
-
Hello,
I ran into an issue where the script errors out and only the first command (like pre-read or erase) keeps running in the background.
As far as I can determine, it's not the script's fault but something with my filesystem/config.
See the log below, where I wanted to erase a replaced HDD. The script finishes immediately due to the error shown. In the background, the dd command that writes the zeros keeps running, and when it's done the part after "--> RESULT: .... " gets added to the log.
PRECLEAR LOG:
# Step 1 of 1 - Erasing in progress ...
#
####################################################################################################
# Cycle elapsed time: 0:00:00 | Total elapsed time: 0:00:00
####################################################################################################
####################################################################################################
# S.M.A.R.T. Status (device type: default)
#
# ATTRIBUTE                  INITIAL  STATUS
# Reallocated_Sector_Ct      0        -
# Power_On_Hours             47335    -
# Temperature_Celsius        29       -
# Reallocated_Event_Count    0        -
# Current_Pending_Sector     0        -
# Offline_Uncorrectable      0        -
# UDMA_CRC_Error_Count       83       -
#
####################################################################################################
#
####################################################################################################
/usr/local/emhttp/plugins/unassigned.devices.preclear/scripts/preclear_disk.sh: line 569: 0 * 100 / 0 : division by 0 (error token is "0 ")
--> ATTENTION: Please take a look into the SMART report above for drive health issues.
--> RESULT: Erase Finished Successfully!.
31: /tmp/.preclear/sdb/dd_output_complete: No such file or directory
/usr/local/emhttp/plugins/unassigned.devices.preclear/scripts/preclear_disk.sh: line 536: /tmp/.preclear/sdb/dd_output_complete: No such file or directory
/usr/local/emhttp/plugins/unassigned.devices.preclear/scripts/preclear_disk.sh: line 531: /tmp/.preclear/sdb/dd_output_complete: No such file or directory
/usr/local/emhttp/plugins/unassigned.devices.preclear/scripts/preclear_disk.sh: line 536: /tmp/.preclear/sdb/dd_output_complete: No such file or directory
/usr/local/emhttp/plugins/unassigned.devices.preclear/scripts/preclear_disk.sh: line 531: /tmp/.preclear/sdb/dd_output_complete: No such file or directory
/usr/local/emhttp/plugins/unassigned.devices.preclear/scripts/preclear_disk.sh: line 536: /tmp/.preclear/sdb/dd_output_complete: No such file or directory
/usr/local/emhttp/plugins/unassigned.devices.preclear/scripts/preclear_disk.sh: line 538: /tmp/.preclear/sdb/dd_output: No such file or directory
The error at line 569 is in the root_free_space() function, where it checks whether enough space is available.
But unfortunately there is none, and I don't know why.
The line where it gets the 0 is: avail=$(df --output=avail / | tail -n +2)
root@Prime:~# df /
Filesystem     1K-blocks  Used Available Use% Mounted on
rootfs                 0     0         0    - /
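Just to illustrate why the script trips over this: a percentage check built on those df numbers divides by zero as soon as df reports 0 for the size (a toy reconstruction, not the plugin's actual code; the variable names are made up):

avail=$(df --output=avail / | tail -n +2)   # 0 on my box
size=$(df --output=size / | tail -n +2)     # also 0 on my box
echo $(( $avail * 100 / $size ))            # expands to "0 * 100 / 0" and bash aborts with "division by 0", same class of error as line 569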
I found that maybe an NFS share could mess things up, but NFS is not enabled. I restarted and checked with the array stopped, but it's still the same.
I don't know where to continue. Any help/hint is appreciated.
I'm running up-to-date Unraid (6.11.5) with, I think, a normal/standard setup (1 parity, 3 data disks, 2 SSDs).
Configuration scrambled? rootfs "df /" - Used, Available is 0
in General Support
Posted
So that worked. I had an issue with a stale config after reboots, but a quick look in the forum gave me the steps to follow. After re-assigning, it works.
I kept my shares and Docker containers, and that's what I cared about.
Getting plugins again is no hassle.
So I figure the culprit was a leftover of an old stick/config. I had been running this one since 2018 and just kept updating and changing hardware...
Thanks for all of your help everybody.
I appreciate it!