MrTee

Members
  • Posts: 12
  • Reputation: 2
  1. So that worked. I had an issue with stale config after reboots, but a quick look in the forum gave me the steps to fix it. After re-assigning the drives everything works: I kept my shares and docker containers, and that's what I cared about. Getting the plugins back is no hassle. So I figure the culprit was a leftover from the old stick/config. I've run this one since 2018 and just kept updating and changing hardware... Thanks for all of your help, everybody. I appreciate it!
  2. Alright, will do, and I'll report back if I find anything conclusive.
  3. Wow, that issue impacts more than I thought. So to back up I go to Main > Flash and then "Flash Backup" >> creation error: "Insufficient free disk space available". EDIT: what other options do I have to back up? A plain copy? (A plain-copy sketch follows the post list below.)
  4. I disabled Docker; VMs were already disabled. I rebooted in safe mode again and got the same issue, so my flash install seems to be bugged. I'm clueless where to look. What are my options? Reinstall? What would I lose?
  5. Did that, rebooted, no change. Where else might I look? prime-diagnostics-20230524-1749.zip
  6. I did not change the go file manually; I think this is from the Tips and Tweaks plugin. I downloaded the Unraid install files, used the go file from there, and restarted. Still the same:
     root@Prime:~# df /
     Filesystem     1K-blocks  Used Available Use% Mounted on
     rootfs                 0     0         0    - /
     Any other suggestions?
  7. I rebooted several times with no effect, unfortunately. I had a look at the go file, but there are only energy-saving functions in there as far as I can tell. I also looked at cron, and there seems to be nothing out of the ordinary either. I restarted (also in safe mode) and the issue persists with the array started and stopped. Where else could I search for a script or something like that?
  8. Should there be a mount in fstab for /? On my Arch machine there is, but AFAIK Unraid works differently and runs from RAM. (There is a sketch for checking the live rootfs mount after the post list below.)
     root@Prime:~# cat /etc/fstab
     /dev/sda1 /boot vfat rw,flush,noatime,nodiratime,dmask=77,fmask=177,shortname=mixed
     /boot/bzmodules /lib/modules squashfs ro,defaults
     /boot/bzfirmware /lib/firmware squashfs ro,defaults
     tmpfs /dev/shm tmpfs defaults
     hugetlbfs /hugetlbfs hugetlbfs defaults
  9. Hello, I'm running into an issue with my rootfs, where the output of "df /" is 0 across the board:
     root@Prime:~# df /
     Filesystem     1K-blocks  Used Available Use% Mounted on
     rootfs                 0     0         0    - /
     The issue came up when I tried to preclear disks with the unassigned.devices.preclear plugin. At one point it checks for free space on '/', and my issue caused wrong behavior there. I posted on the plugin's support thread and the plugin was updated; it now correctly aborts the whole script with a 'Low memory...' message. Since this issue is unrelated to the plugin, I'm posting here in General Support; I hope that's right. I do not know where to look to find out what is scrambling the rootfs. I restarted in safe mode, but this does not change the output of "df /". Running current stable [6.11.5]. Array autostart disabled. Attaching diagnostics from normal boot and safe mode. Thanks in advance! normal_prime-diagnostics-20230524-0721.zip safemode_prime-diagnostics-20230524-0733.zip
  10. root@Prime:/# ls -la /
      total 8
      drwxr-xr-x  21 root root    0 May 17 16:01 ./
      drwxr-xr-x  21 root root    0 May 17 16:01 ../
      drwxr-xr-x   3 root root    0 May 17 09:13 .config/
      drwxr-xr-x   2 root root    0 May 17 09:13 bin/
      drwx------  11 root root 8192 Jan  1  1970 boot/
      drwxr-xr-x  17 root root 3560 May 17 09:17 dev/
      drwxr-xr-x  59 root root    0 May 17 09:17 etc/
      drwxr-xr-x   2 root root    0 Nov 20 22:27 home/
      drwxr-xr-x   2 root root    0 May 17 09:12 hugetlbfs/
      lrwxrwxrwx   1 root root   10 Nov 20 22:27 init -> /sbin/init*
      drwxrwxrwx   8 root root    0 Mar 18  2020 lib/
      drwxr-xr-x   7 root root    0 Nov 20 22:27 lib64/
      drwxr-xr-x  13 root root    0 May 17 09:17 mnt/
      drwx--x--x   3 root root    0 May 17 09:17 opt/
      dr-xr-xr-x 418 root root    0 May 17 09:12 proc/
      drwx--x---   3 root root    0 May 17 09:36 root/
      drwxr-xr-x  18 root root 1000 May 17 09:38 run/
      drwxr-xr-x   2 root root    0 May 17 09:13 sbin/
      dr-xr-xr-x  13 root root    0 May 17 09:12 sys/
      drwxrwxrwt  15 root root    0 May 17 17:38 tmp/
      drwxr-xr-x  14 root root    0 Mar 29 17:05 usr/
      drwxr-xr-x  15 root root    0 Jan  9  2017 var/
      In safe mode:
      root@Prime:~# df /
      Filesystem     1K-blocks   Used Available Use% Mounted on
      rootfs           7990928 954976   7035952  12% /
      So there has to be a plugin or docker container messing with it (no VMs set up). I disabled autostart of the array and turned off safe mode:
      root@Prime:~# df /
      Filesystem     1K-blocks  Used Available Use% Mounted on
      rootfs                 0     0         0    - /
      Since the array is offline... am I right to assume that no docker container is messing around and that I should look through my plugins?
  11. Hello, I've run into an issue where the script errors out and only the first command (like pre-read or erase) keeps running in the background. As far as I can determine it's not the script's fault but something with my filesystem/config. See the log below, where I want to erase a replaced HDD. The script finishes immediately due to the error shown; in the background the dd command that writes the zeros keeps running, and when it's done the part after "--> RESULT: .... " gets added to the log.
      PRECLEAR LOG:
      # Step 1 of 1 - Erasing in progress ...
      # Cycle elapsed time: 0:00:00 | Total elapsed time: 0:00:00
      # S.M.A.R.T. Status (device type: default)
      #
      # ATTRIBUTE                 INITIAL  STATUS
      # Reallocated_Sector_Ct     0        -
      # Power_On_Hours            47335    -
      # Temperature_Celsius       29       -
      # Reallocated_Event_Count   0        -
      # Current_Pending_Sector    0        -
      # Offline_Uncorrectable     0        -
      # UDMA_CRC_Error_Count      83       -
      #
      /usr/local/emhttp/plugins/unassigned.devices.preclear/scripts/preclear_disk.sh: line 569: 0 * 100 / 0 : division by 0 (error token is "0 ")
      --> ATTENTION: Please take a look into the SMART report above for drive health issues.
      --> RESULT: Erase Finished Successfully!.
      31: /tmp/.preclear/sdb/dd_output_complete: No such file or directory
      /usr/local/emhttp/plugins/unassigned.devices.preclear/scripts/preclear_disk.sh: line 536: /tmp/.preclear/sdb/dd_output_complete: No such file or directory
      /usr/local/emhttp/plugins/unassigned.devices.preclear/scripts/preclear_disk.sh: line 531: /tmp/.preclear/sdb/dd_output_complete: No such file or directory
      /usr/local/emhttp/plugins/unassigned.devices.preclear/scripts/preclear_disk.sh: line 536: /tmp/.preclear/sdb/dd_output_complete: No such file or directory
      /usr/local/emhttp/plugins/unassigned.devices.preclear/scripts/preclear_disk.sh: line 531: /tmp/.preclear/sdb/dd_output_complete: No such file or directory
      /usr/local/emhttp/plugins/unassigned.devices.preclear/scripts/preclear_disk.sh: line 536: /tmp/.preclear/sdb/dd_output_complete: No such file or directory
      /usr/local/emhttp/plugins/unassigned.devices.preclear/scripts/preclear_disk.sh: line 538: /tmp/.preclear/sdb/dd_output: No such file or directory
      The error in line 569 is in the root_free_space() function, where it checks whether enough space is available. Unfortunately there is none, and I don't know why. The line where it gets the 0 is:
      avail=$(df --output=avail / | tail -n +2 )
      root@Prime:~# df /
      Filesystem     1K-blocks  Used Available Use% Mounted on
      rootfs                 0     0         0    - /
      I found that an NFS share could maybe mess things up, but NFS isn't enabled. I restarted and checked with the array stopped, but it's still the same. I don't know where to continue; any help/hint is appreciated. I'm running up-to-date Unraid (6.11.5) with what I think is a normal/standard setup (1 parity, 3 disks, 2 SSDs). (A guarded version of that free-space check is sketched just after this post list.)
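
For reference, the division by zero in post 11 happens because the free-space check gets 0 back from df while rootfs is misbehaving. Below is a minimal sketch of a guarded version of that check, assuming the same `df --output=avail /` call quoted above; the 1 GiB threshold and the messages are made up for illustration and are not the plugin's actual code.

    #!/bin/bash
    # Minimal sketch: check free space on / and abort instead of doing arithmetic
    # with a zero/empty value. Assumes the same `df --output=avail /` call quoted
    # in post 11; threshold and messages are illustrative only.

    min_kb=1048576   # hypothetical minimum: 1 GiB free on rootfs

    avail=$(df --output=avail / | tail -n +2 | tr -d ' ')

    # Guard: empty or zero means rootfs reports no usable space, so bail out
    # before any percentage calculation can divide by it.
    if [[ -z "$avail" || "$avail" -eq 0 ]]; then
        echo "rootfs reports no available space, aborting" >&2
        exit 1
    fi

    if (( avail < min_kb )); then
        echo "only ${avail} KB free on /, need at least ${min_kb} KB" >&2
        exit 1
    fi

    echo "rootfs OK: ${avail} KB available"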
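
On the fstab question in post 8: Unraid's rootfs is set up by the kernel at boot, so it normally has no /etc/fstab entry, and the live mount table is the place to look. A minimal sketch for inspecting it (findmnt ships with util-linux and should be available, but treat its exact output layout as an assumption):

    # Show what the kernel says is mounted at /
    grep ' / ' /proc/mounts

    # Same information in a friendlier layout, plus size/usage of the rootfs
    findmnt /
    df -h /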
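
And on the backup question in post 3: when the built-in "Flash Backup" fails, a plain copy of the flash drive contents is a workable fallback. A minimal sketch, assuming the flash is mounted at /boot and a share exists at /mnt/user/backups (the destination path is an assumption, adjust it to your own setup):

    #!/bin/bash
    # Minimal sketch of a manual flash backup: copy /boot to a dated folder on
    # the array and pack it into an archive. /mnt/user/backups is an assumed
    # share name, not something Unraid creates by default.

    dest="/mnt/user/backups/flash-$(date +%Y%m%d-%H%M)"
    mkdir -p "$dest"

    # -a preserves timestamps and layout, -v lists what gets copied
    rsync -av /boot/ "$dest/"

    # optional: single archive that is easy to copy off the server
    tar -czf "${dest}.tar.gz" -C "$dest" .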