
DanielPetrica

Members
  • Posts

    13
  • Joined

  • Last visited

About DanielPetrica

  • Birthday 01/31/1997


  • Gender
    Male
  • URL
    https://github.com/danielpetrica
  • Location
    Reggio Emilia, Italy
  • Personal Text
    Full stack web developer


DanielPetrica's Achievements

Noob (1/14)

4 Reputation

  1. When we update the container/app, do we need to change the "Config" path to map the new path, or is it automatic?
  2. Sorry for the late reply, but I couldn't get my hands on a PC in the last few days. I don't know why the script didn't work, and I agree it's not important; it may even be because I was using the first version in the thread. I only noticed the cache being unmountable and appearing in the format list when I changed disk3 from the btrfs format and started the array, as I was trying to format it back to the correct format.
  3. The script refused to clear the disk, saying no disk was empty. At this point my understanding is that disk3 still had its filesystem written as btrfs, but I had chosen the new format in the array config, so it prompted me to format it. However, it was also prompting me to format the cache disks, so I stopped before messing up my system too much 😅
  4. @JorgeB thank you very much! Now my cache is working, and I still have all my data.
  5. Yep, initially when changing the disk format I chose btrfs, the same as the cache pool. I think this is what broke the cache.
  6. I'm open to doing it, but is there any way to mount the cache and export the data, so I don't lose it? I can't find the cache under /mnt/.
  7. I was trying to remove the disk as described on the wiki: https://wiki.unraid.net/Shrink_array#The_.22Remove_Drives_Then_Rebuild_Parity.22_Method. My guess is that when I set disk3 to the same format as the cache pool, it got mixed up with it. Also, I still haven't run the "Retain current configuration" option yet.
  8. Here are the updated diagnostics after the restart: tower-diagnostics-20220811-1425.zip. The problem is still present, as shown in these new screenshots.
  9. Hi everyone, I messed up my Unraid cache today while trying to remove a disk from my array.

System info:
Unraid version: 6.10.3
Diagnostics zip: tower-diagnostics-20220811-1301.zip

I was trying to follow the "Clear Drive Then Remove Drive" method from the guide here: https://wiki.unraid.net/Shrink_array#The_.22Remove_Drives_Then_Rebuild_Parity.22_Method. When trying to format my disk (disk3), I changed its format and wrongly chose btrfs. Then I started my array and formatted disk3, as it was the only disk appearing under the format disk option. I think this is the step which messed up my system, as it is the same format as my cache. Then I created the user script as described in the guide and ran it, but it couldn't clear the disk even though the CLI showed only the specified "clear-me" folder. Here's the code I placed inside it:

#!/bin/bash
# A script to clear an unRAID array drive. It first checks the drive is completely empty,
# except for a marker indicating that the user desires to clear the drive. The marker is
# that the drive is completely empty except for a single folder named 'clear-me'.
#
# Array must be started, and drive mounted. There's no other way to verify it's empty.
# Without knowing which file system it's formatted with, I can't mount it.
#
# Quick way to prep drive: format with ReiserFS, then add 'clear-me' folder.
#
# 1.0 first draft
# 1.1 add logging, improve comments
# 1.2 adapt for User.Scripts, extend wait to 60 seconds
# 1.3 add progress display; confirm by key (no wait) if standalone; fix logger
# 1.4 only add progress display if unRAID version >= 6.2

version="1.4"
marker="clear-me"
found=0
wait=60
p=${0%%$P}   # dirname of program
p=${p:0:18}
q="/tmp/user.scripts/"

echo -e "*** Clear an unRAID array data drive *** v$version\n"

# Check if array is started
ls /mnt/disk[1-9]* 1>/dev/null 2>/dev/null
if [ $? -ne 0 ]
then
  echo "ERROR: Array must be started before using this script"
  exit
fi

# Look for array drive to clear
n=0
echo -n "Checking all array data drives (may need to spin them up) ... "
if [ "$p" == "$q" ]   # running in User.Scripts
then
  echo -e "\n"
  c="<font color=blue>"
  c0="</font>"
else   # set color teal
  c="\x1b[36;01m"
  c0="\x1b[39;49;00m"
fi

for d in /mnt/disk[1-9]*
do
  x=`ls -A $d`
  z=`du -s $d`
  y=${z:0:1}
  # echo -e "d:"$d "x:"${x:0:20} "y:"$y "z:"$z
  # the test for marker and emptiness
  if [ "$x" == "$marker" -a "$y" == "0" ]
  then
    found=1
    break
  fi
  let n=n+1
done
#echo -e "found:"$found "d:"$d "marker:"$marker "z:"$z "n:"$n

# No drives found to clear
if [ $found == "0" ]
then
  echo -e "\rChecked $n drives, did not find an empty drive ready and marked for clearing!\n"
  echo "To use this script, the drive must be completely empty first, no files"
  echo "or folders left on it. Then a single folder should be created on it"
  echo "with the name 'clear-me', exactly 8 characters, 7 lowercase and 1 hyphen."
  echo "This script is only for clearing unRAID data drives, in preparation for"
  echo "removing them from the array. It does not add a Preclear signature."
  exit
fi

# check unRAID version
v1=`cat /etc/unraid-version`
# v1 is 'version="6.2.0-rc5"' (fixme if 6.10.* happens)
v2="${v1:9:1}${v1:11:1}"
if [[ $v2 -ge 62 ]]
then
  v=" status=progress"
else
  v=""
fi
#echo -e "v1=$v1 v2=$v2 v=$v\n"

# First, warn about the clearing, and give them a chance to abort
echo -e "\rFound a marked and empty drive to clear: $c Disk ${d:9} $c0 ( $d ) "
echo -e "* Disk ${d:9} will be unmounted first."
echo "* Then zeroes will be written to the entire drive."
echo "* Parity will be preserved throughout."
echo "* Clearing while updating Parity takes a VERY long time!"
echo "* The progress of the clearing will not be visible until it's done!"
echo "* When complete, Disk ${d:9} will be ready for removal from array."
echo -e "* Commands to be executed:\n***** $c umount $d $c0\n***** $c dd bs=1M if=/dev/zero of=/dev/md${d:9} $v $c0\n"

if [ "$p" == "$q" ]   # running in User.Scripts
then
  echo -e "You have $wait seconds to cancel this script (click the red X, top right)\n"
  sleep $wait
else
  echo -n "Press ! to proceed. Any other key aborts, with no changes made. "
  ch=""
  read -n 1 ch
  echo -e -n "\r \r"
  if [ "$ch" != "!" ]; then
    exit
  fi
fi

# Perform the clearing
logger -tclear_array_drive "Clear an unRAID array data drive v$version"
echo -e "\rUnmounting Disk ${d:9} ..."
logger -tclear_array_drive "Unmounting Disk ${d:9} (command: umount $d ) ..."
umount $d
echo -e "Clearing Disk ${d:9} ..."
logger -tclear_array_drive "Clearing Disk ${d:9} (command: dd bs=1M if=/dev/zero of=/dev/md${d:9} $v ) ..."
dd bs=1M if=/dev/zero of=/dev/md${d:9} $v
#logger -tclear_array_drive "Clearing Disk ${d:9} (command: dd bs=1M if=/dev/zero of=/dev/md${d:9} status=progress count=1000 seek=1000 ) ..."
#dd bs=1M if=/dev/zero of=/dev/md${d:9} status=progress count=1000 seek=1000

# Done
logger -tclear_array_drive "Clearing Disk ${d:9} is complete"
echo -e "\nA message saying \"error writing ... no space left\" is expected, NOT an error.\n"
echo -e "Unless errors appeared, the drive is now cleared!"
echo -e "Because the drive is now unmountable, the array should be stopped,"
echo -e "and the drive removed (or reformatted)."
exit

Now I tried stopping and restarting the array, and I've seen the cache disks appear in the format disk option; I can no longer access them, as they appear unmountable with the message "Unmountable: Invalid pool config". I've tried switching disk3 to the xfs format, but I still see disk3 and the cache SSDs in the format option. As I have some shares configured with cache "yes" or cache "prefer", I'm worried that a format may delete this data, which includes the appdata share (set to cache "prefer"). Can you please indicate what options I have to recover this data?
In the attached image, you can see the current array status and the array operations.
  10. Thanks, I can understand a little Spanish, so I can use it to clarify some translations thanks to the similarities with Italian. I think I've found the repo: https://github.com/unraid/lang-es_ES
  11. Hi @SpencerJ, in dashboard.txt on row 24 I've encountered the … is this the speed of the fan or the number of fans available?
  12. Hi, I'm an Italian speaker and I'd like to do a new Italian translation for Unraid. Can I start it myself?
  13. Hi, thanks for posting the solution. This helped me make sure my modem doesn't change the assigned IP of my Docker container. Also, please note that there is a typo in your example: it should be --mac-address, not --mac-adress.
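The clearing script quoted in post 9 only proceeds when a drive contains exactly one entry, a folder named 'clear-me'. The snippet below is a minimal sketch of that marker check, using a temporary directory as a stand-in for a real /mnt/diskN mount point (the directory and messages are invented for illustration; this is not the forum script itself):

```shell
# Simulate a mounted array disk with a temporary directory (stand-in only).
disk=$(mktemp -d)

# Create the 8-character marker folder the script looks for.
mkdir "$disk/clear-me"

# The check: the "disk" must hold exactly one entry, named 'clear-me'.
contents=$(ls -A "$disk")
if [ "$contents" = "clear-me" ]; then
    echo "disk is marked for clearing"
else
    echo "disk is not ready"
fi

# Clean up the stand-in directory.
rm -r "$disk"
```

The real script additionally requires `du -s` to report zero usage before it unmounts the drive and runs dd; on an actual array a false positive here would zero a data disk, so the marker convention deliberately errs on the side of caution.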
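Post 13's fix concerns the spelling of Docker's MAC-pinning flag. As a minimal sketch, with an invented container name, network, MAC address, and image, this is how the corrected flag would appear in a docker run command (built as a string here rather than executed, since no Docker daemon is assumed):

```shell
# Hypothetical docker run invocation; name, network, MAC and image are invented.
# The flag is '--mac-address' (double 'd'), not '--mac-adress'.
cmd='docker run -d --name my-service --network br0 --mac-address 02:42:ac:11:00:05 nginx:alpine'
echo "$cmd"
```

Pinning the MAC matters because many routers and modems hand out DHCP leases keyed to the MAC address; without the flag, a recreated container can receive a new random MAC and therefore a different IP.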