
Leaderboard


Popular Content

Showing content with the highest reputation since 03/17/19 in all areas

  1. 2 points
    There are no problems with a 9211 (or an H200) flashed to IT mode
  2. 2 points
    Well, I just bought a refurb R710 and it came with a PERC 6/i RAID controller, which from what I gathered cannot be flashed to IT mode, so I put in an LSI 9211-8i flashed to IT mode and it works flawlessly with Unraid. I did see there was some workaround, but in the end it's too much trouble and you would lose SMART info on your drives. Also be aware that if you have SAS drives, you will not be able to spin them down via Unraid.
  3. 2 points
    They just need to push Samba 4.9.5 before the 6.7 final. It fixes this bug: https://www.samba.org/samba/history/samba-4.9.5.html @limetech
  4. 2 points
    How do I replace/upgrade my single cache device? (unRAID v6.2 and above only)

    This procedure assumes that there are at least some Docker- and/or VM-related files on the cache disk; some of these steps are unnecessary if there aren't.

    1. Stop all running Dockers/VMs.
    2. Settings -> VM Manager: disable VMs and click Apply.
    3. Settings -> Docker: disable Docker and click Apply.
    4. Click on Shares and change to "Yes" all cache shares with "Use cache disk:" set to "Only" or "Prefer".
    5. Check that there's enough free space on the array, and invoke the mover by clicking "Move Now" on the Main page.
    6. When the mover finishes, check that your cache is empty (any files on the cache root will not be moved, as they are not part of any share).
    7. Stop the array, replace the cache device, assign it, start the array and format the new cache device (if needed); check that it's using the filesystem you want.
    8. Click on Shares and change to "Prefer" all shares that you want moved back to the cache.
    9. On the Main page click "Move Now".
    10. When the mover finishes, re-enable Docker and VMs.
  5. 1 point
    Hi guys, I am considering upgrading from a Xeon to a 2950X. I'm sure a lot of other members are considering the same. Could someone be so kind as to catalogue any outstanding Threadripper issues that do not yet have a solution? I know there is a @SpaceInvaderOne video on Ryzen, but I think it would aid a lot of people with this kind of decision if the current state of Threadripper and VMs were documented. Thanks for any assistance on this matter.
  6. 1 point
    Yeah, mine too, really. I think I know how to set it up, but I don't know how to analyze the data. That said, I've not seen any leakage on delugevpn with Privoxy enabled when testing on various leak-test sites, so I'm choosing to trust it.
  7. 1 point
    Also know that the encode process is heavily impacted by read and write performance. I'm not sure how NVDEC handles its buffer queueing, but if the buffer isn't filled with enough data, you will notice the video stop playing. This is READ-limited performance and would be heavily impacted by a parity check, especially for high-bitrate media. The NVENC side of the house is limited by how much data is being fed into it by the decoder, and by the write speed of the destination media. If you are transcoding to tmpfs (RAM), the write side will almost never be your bottleneck, as the encoded media is typically much smaller and lower bitrate than the source media.
  8. 1 point
    The permissions can be decoded as follows:

    r = read permission
    w = write permission
    x = execute permission (needed even to run a program, as file extensions have no special meaning to Linux)
    - = permission denied for this attribute

    You will note that there are three groups of rwx permissions. The first group is for the owner (nobody), the second is for the group (users) and the third is for all others. Thus, you do not have write access to these files (which you need in order to delete them), as you are a member of the users group! I believe you have the Fix Common Problems plugin installed. You should also have the Docker Safe New Perms tool installed ( Tools >>>> Docker Safe New Perms ). Run that tool and see if you can delete the files then. If so, figure out which Docker or plugin created those files, as it is improperly configured.
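    As a concrete illustration (using a hypothetical file name, not one from the diagnostics above), you can watch the three rwx triplets change as permissions are granted:

    ```shell
    # Hypothetical example file; the name and modes are for illustration only.
    touch example.txt
    chmod 0644 example.txt        # -rw-r--r-- : owner can write, group/others read-only
    stat -c '%A' example.txt      # prints: -rw-r--r--
    chmod g+w example.txt         # grant write to the group (e.g. 'users')
    stat -c '%A' example.txt      # prints: -rw-rw-r--
    ```

    With group write set, any member of the file's group can modify or delete the file, which is essentially what the New Permissions tools restore in bulk across your shares.
    
    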
  9. 1 point
    Below is the script I am using, so you can get an idea how it works for me. I do not use an always-on Raspberry Pi in this scenario, but other users have done so on the remote side of an over-the-Internet VPN connection. I would not be the best person to give you a step-by-step for something I have not done. Again, my servers are both on the local LAN. The source server is on 24x7, and the destination server has IPMI and is powered on and off as needed for backups:

    #!/bin/bash
    #description=This script backs up shares on MediaNAS to BackupNAS
    #arrayStarted=true

    echo "Starting Sync to BackupNAS"
    echo "Starting Sync $(date)" >> /boot/logs/cronlogs/BackupNAS_Summary.log

    # Power On BackupNAS
    ipmitool -I lan -H 192.168.1.16 -U admin -P xxxxxxxx chassis power on

    # Wait for 3 minutes
    echo "Waiting for BackupNAS to power up..."
    sleep 3m
    echo "Host is up"
    sleep 10s

    # Set up email header
    echo To: xxxxxx@xxxx.com >> /boot/logs/cronlogs/BackupNAS_Summary.log
    echo From: xxxxxxxx@xxxxx.com >> /boot/logs/cronlogs/BackupNAS_Summary.log
    echo Subject: MediaNAS to BackupNAS rsync summary >> /boot/logs/cronlogs/BackupNAS_Summary.log
    echo >> /boot/logs/cronlogs/BackupNAS_Summary.log

    # Backup Pictures Share
    echo "Copying new files to Pictures share ===== $(date)"
    echo "Copying new files to Pictures share ===== $(date)" >> /boot/logs/cronlogs/BackupNAS_Summary.log
    echo "Copying new files to Pictures share ===== $(date)" >> /boot/logs/cronlogs/BackupNAS_Pictures.log
    rsync -avu --stats --numeric-ids --progress -e "ssh -i /root/.ssh/id_rsa -T -o Compression=no -x" /mnt/user/Pictures/ root@192.168.1.15:/mnt/user/Pictures/ >> /boot/logs/cronlogs/BackupNAS_Pictures.log

    # Backup Videos Share
    echo "Copying new files to Videos share ===== $(date)"
    echo "Copying new files to Videos share ===== $(date)" >> /boot/logs/cronlogs/BackupNAS_Summary.log
    echo "Copying new files to Videos share ===== $(date)" >> /boot/logs/cronlogs/BackupNAS_Videos.log
    rsync -avu --stats --numeric-ids --progress -e "ssh -i /root/.ssh/id_rsa -T -o Compression=no -x" /mnt/user/Videos/ root@192.168.1.15:/mnt/user/Videos/ >> /boot/logs/cronlogs/BackupNAS_Videos.log

    # Backup Movies Share
    echo "Copying new files to Movies share ===== $(date)"
    echo "Copying new files to Movies share ===== $(date)" >> /boot/logs/cronlogs/BackupNAS_Summary.log
    echo "Copying new files to Movies share ===== $(date)" >> /boot/logs/cronlogs/BackupNAS_Movies.log
    rsync -avu --stats --numeric-ids --progress -e "ssh -i /root/.ssh/id_rsa -T -o Compression=no -x" /mnt/user/Movies/ root@192.168.1.15:/mnt/user/Movies/ >> /boot/logs/cronlogs/BackupNAS_Movies.log

    # Backup TVShows Share
    echo "Copying new files to TVShows share ===== $(date)"
    echo "Copying new files to TVShows share ===== $(date)" >> /boot/logs/cronlogs/BackupNAS_Summary.log
    echo "Copying new files to TVShows share ===== $(date)" >> /boot/logs/cronlogs/BackupNAS_TVShows.log
    rsync -avu --stats --numeric-ids --progress -e "ssh -i /root/.ssh/id_rsa -T -o Compression=no -x" /mnt/user/TVShows/ root@192.168.1.15:/mnt/user/TVShows/ >> /boot/logs/cronlogs/BackupNAS_TVShows.log

    # Backup OtherVids Share
    echo "Copying new files to OtherVids share ===== $(date)"
    echo "Copying new files to OtherVids share ===== $(date)" >> /boot/logs/cronlogs/BackupNAS_Summary.log
    echo "Copying new files to OtherVids share ===== $(date)" >> /boot/logs/cronlogs/BackupNAS_OtherVids.log
    rsync -avu --stats --numeric-ids --progress -e "ssh -i /root/.ssh/id_rsa -T -o Compression=no -x" /mnt/user/OtherVids/ root@192.168.1.15:/mnt/user/OtherVids/ >> /boot/logs/cronlogs/BackupNAS_OtherVids.log

    # Backup Documents Share
    echo "Copying new files to Documents share ===== $(date)"
    echo "Copying new files to Documents share ===== $(date)" >> /boot/logs/cronlogs/BackupNAS_Summary.log
    echo "Copying new files to Documents share ===== $(date)" >> /boot/logs/cronlogs/BackupNAS_Documents.log
    rsync -avu --stats --numeric-ids --progress -e "ssh -i /root/.ssh/id_rsa -T -o Compression=no -x" /mnt/user/Documents/ root@192.168.1.15:/mnt/user/Documents/ >> /boot/logs/cronlogs/BackupNAS_Documents.log

    echo "moving to end ===== $(date)"
    echo "moving to end ===== $(date)" >> /boot/logs/cronlogs/BackupNAS_Summary.log

    # Add in the summaries
    cd /boot/logs/cronlogs/
    echo ===== > Pictures.log
    echo ===== > Videos.log
    echo ===== > Movies.log
    echo ===== > TVShows.log
    echo ===== > OtherVids.log
    echo ===== > Documents.log
    echo Pictures >> Pictures.log
    echo Videos >> Videos.log
    echo Movies >> Movies.log
    echo TVShows >> TVShows.log
    echo OtherVids >> OtherVids.log
    echo Documents >> Documents.log
    tac BackupNAS_Pictures.log | sed '/^Number of files: /q' | tac >> Pictures.log
    tac BackupNAS_Videos.log | sed '/^Number of files: /q' | tac >> Videos.log
    tac BackupNAS_Movies.log | sed '/^Number of files: /q' | tac >> Movies.log
    tac BackupNAS_TVShows.log | sed '/^Number of files: /q' | tac >> TVShows.log
    tac BackupNAS_OtherVids.log | sed '/^Number of files: /q' | tac >> OtherVids.log
    tac BackupNAS_Documents.log | sed '/^Number of files: /q' | tac >> Documents.log

    # now add all the other logs to the end of this email summary
    cat BackupNAS_Summary.log Pictures.log Videos.log Movies.log TVShows.log OtherVids.log Documents.log > allshares.log
    zip BackupNAS BackupNAS_*.log

    # Send email of summary of results
    ssmtp xxxxxxx@xxxxx.com < /boot/logs/cronlogs/allshares.log
    cd /boot/logs/cronlogs
    mv BackupNAS.zip "$(date +%Y%m%d_%H%M)_BackupNAS.zip"
    rm *.log

    # Power off BackupNAS gracefully
    sleep 30s
    ipmitool -I lan -H 192.168.1.16 -U admin -P xxxxxxx chassis power soft
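    A minimal sketch of the tac | sed | tac idiom the script above uses to pull just the final stats summary out of each rsync log (the sample log content here is made up for illustration):

    ```shell
    # Build a fake rsync log: an older run's stats followed by the latest run's stats.
    printf '%s\n' 'old output' 'Number of files: 10' 'old tail' \
                  'Number of files: 42' 'Total size: 123' > demo.log

    # Reverse the file, keep everything up to the first (i.e. originally last)
    # "Number of files:" line, then reverse again to restore the order.
    tac demo.log | sed '/^Number of files: /q' | tac
    # prints:
    # Number of files: 42
    # Total size: 123
    ```

    This works because rsync's --stats block starts with "Number of files:" and runs to the end of the log, so reversing and quitting at the first match isolates the last such block.
    
    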
  10. 1 point
    I remember this being an issue at one point... googled and found this https://help.nextcloud.com/t/why-doesnt-android-client-upload-existing-pictures/10365/25 Are you on an older phone/app version? This is fixable, and to my knowledge was fixed. Top right of the screen normally, 3 little dots, 'select all photos'
  11. 1 point
    Use the script posted here and it will show up in Netdata. https://forums.unraid.net/topic/77813-plugin-linuxserverio-unraid-nvidia/?do=findComment&comment=728822
  12. 1 point
    There's a sidebar entry titled "nvidia smi". Click on that. Note that the command is "nvidia-smi" but the configuration option is "nvidia_smi: yes". If you have entered everything correctly and restarted the docker per the instructions, it should show. Alternatively, you could add my script from a couple of posts ago to User Scripts and run it.
  13. 1 point
    I noticed this back on RC3, but since RC5 came out, I updated and waited to see if it happened again. Back on RC3 it happened to CPU 0, which was assigned to Unraid. Now it's showing up on CPU 14, which is the last core assigned to a Win10 VM. Task Manager in Win10 shows no activity. Remoting into the server, htop does not show 100% utilization; I see it updating at the same rate as the other CPUs. nas-diagnostics-20190302-0815.zip
  14. 1 point
    Probably still best to ask if they can add support to their docker, since it should just "not work" if nvidia-smi isn't available. But if they are opposed, here's a user script you can run on a schedule:

    #!/bin/bash
    con="$(docker ps --format "{{.Names}}" | grep -i netdata)"
    exists=$(docker exec -i "$con" grep -iqe "nvidia_smi: yes" /etc/netdata/python.d.conf >/dev/null 2>&1; echo $?)
    if [ "$exists" -eq 1 ]; then
      docker exec -i "$con" /bin/sh -c 'echo "nvidia_smi: yes" >> /etc/netdata/python.d.conf'
      docker restart "$con" >/dev/null 2>&1
      echo '<font color="green"><b>Done.</b></font>'
    else
      echo '<font color="red"><b>Already Applied!</b></font>'
    fi
  15. 1 point
    Any updates on this thread? I am also experiencing this issue. I request that this thread be changed from Minor to Urgent. Not being able to access the NAS via SMB makes it a showstopper, in my humble opinion.
  16. 1 point
    ...how to back up 42TB??? Backblaze charges $210/month - lel - I think it's cheaper to buy a second server 🤣
  17. 1 point
    There are several things you need to check in your Unraid setup to help prevent the dreaded unclean shutdown, starting with a couple of timers that you need to adjust for your specific needs.

    The first is in Settings -> VM Manager -> VM Shutdown time-out, which needs to be set high enough to allow your VMs time to completely shut down. (Switch to the Advanced View to see the timer.) Windows 10 VMs will sometimes have an update that requires a shutdown to install. These can take quite a while, and the default setting of 60 seconds in the VM Manager is not long enough. If the VM Manager timer is exceeded on a shutdown, your VMs will be forced to shut down - this is just like pulling the plug on a PC. I recommend setting this value to 300 seconds (5 minutes) to ensure your Windows 10 VMs have time to shut down completely.

    The other shutdown timer is in Settings -> Disk Settings -> Shutdown time-out. This is the overall shutdown timer, and when it is exceeded, an unclean shutdown will occur. This timer has to be longer than the VM shutdown timer. I recommend setting it to 420 seconds (7 minutes) to give the system time to completely shut down all VMs, Dockers, and plugins. These timer settings do not extend the normal overall shutdown time; they just allow Unraid the time needed to do a graceful shutdown and prevent an unclean one.

    One of the most common reasons for an unclean shutdown is having a terminal session open. Unraid will not force such sessions to close, but instead waits for them to be terminated while the shutdown timer is running; after the overall shutdown timer runs out, the server is forced to shut down. If you have the Tips and Tweaks plugin installed, you can specify that any bash or ssh sessions be terminated so Unraid can shut down gracefully and won't hang waiting for them (which they won't terminate on their own without human intervention).

    If your server seems hung and nothing responds, try a quick press of the power button. This initiates a shutdown that will attempt a graceful shutdown of the server. If you have to hold the power button to do a hard power-off, you will get an unclean shutdown.

    If an unclean shutdown does occur because the overall "Shutdown time-out" was exceeded, Unraid will attempt to write diagnostics to the /log/ folder on the flash drive. When you ask for help with an unclean shutdown, post the /log/diagnostics.zip file. There is information in the log that shows why the unclean shutdown occurred.
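    Before shutting down, you can check for lingering shell sessions yourself. This is a generic sketch using standard Linux tools, not an Unraid-specific command:

    ```shell
    # List interactive login sessions (local console or ssh) that could hold
    # up a graceful shutdown; close these before powering off.
    who

    # Show any bash processes still attached to a terminal.
    ps -eo pid,tty,comm | awk '$2 != "?" && $3 == "bash" { print }'
    ```

    An empty result from both commands means no interactive sessions are left to block the shutdown timers described above.
    
    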
  18. 1 point
    Uploaded a short video on setting up the basics of Home Assistant with unRaid. I did it from scratch to show people how to quickly set up unRaid, so most of you here can skip that part.
  19. 1 point
    Fixed! I rebuilt the USB drive and checked "Allow UEFI boot". Changed the BIOS to allow it and everything came up. Thanks for all the advice!
  20. 1 point
    That seems a good plan. Don't even think about caching the initial load, and you might consider waiting until done to assign a parity disk.
  21. 1 point
    Thanks a lot for your posts! I got it working! For anyone else looking for a way to do this in the future, here you go:

    I am assuming you have a domain that you want to serve as an access point to a container on your server. Let's assume your domain is www.dexter.com and you want to access books.dexter.com. Your CNAMEs should be:

    Host Record    Points to               TTL
    books          yourname.duckdns.org    14400

    Feel free to add as many of these CNAMEs as you'd like. I am using DuckDNS because it has a container that I can run on my server. What I think it does is: when my IP changes, my unRAID server sends an update request to DuckDNS to make sure my URL (i.e. yourname.duckdns.org) is still pointing to my IP. If you don't have something similar with your DNS service, I think you will need to manually update it every time your ISP changes your IP (maybe someone can correct me here).

    Next, you go to the letsencrypt docker and you put in this:

    Domain Name: dexter.com (don't put your DNS here)
    Subdomain(s): books (if you ever want to add future subdomains, remember to add them here)
    Only Subdomains: true
    Validation: http (people of the future, refer to the documentation to see if this is still the correct way to do this)

    Now navigate to appdata\letsencrypt\nginx\site-confs\ and open default. These are the configs that I am using, and they seem to be working perfectly. Obviously change dexter.com to your domain, and change the local IP and ports to whatever you are accessing. This was adapted from https://technicalramblings.com/blog/how-to-setup-organizr-with-letsencrypt-on-unraid/ so if you have anything more complicated you wish to do, go there - there are templates.
    default:

    ################################################################################################################
    #////////////////////////////////////////////////SERVER BLOCK\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\#
    ################################################################################################################

    # REDIRECT HTTP TRAFFIC TO https://
    server {
        listen 80;
        server_name dexter.com .dexter.com;
        return 301 https://$host$request_uri;
    }

    ################################################################################################################
    #////////////////////////////////////////////////MAIN SERVER BLOCK\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\#
    ################################################################################################################

    # MAIN SERVER BLOCK
    server {
        listen 443 ssl http2 default_server;
        server_name dexter.com;

        ## Certificates from LE container placement
        ssl_certificate /config/keys/letsencrypt/fullchain.pem;
        ssl_certificate_key /config/keys/letsencrypt/privkey.pem;

        ## Strong Security recommended settings per cipherli.st
        ssl_dhparam /config/nginx/dhparams.pem;
        ssl_ciphers 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:AES:CAMELLIA:DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA';
        ssl_prefer_server_ciphers on;

        # Custom error pages
        error_page 400 401 402 403 404 405 408 500 502 503 504 $scheme://$server_name/error.php?error=$status;
        error_log /config/log/nginx/error.log;
    }

    ################################################################################################################
    #////////////////////////////////////////////////SUBDOMAINS\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\#
    ################################################################################################################

    #CalibreWeb SERVER, accessed by books.dexter.com
    server {
        listen 443 ssl http2;
        server_name books books.dexter.com;

        location /error/ {
            alias /www/errorpages/;
            internal;
        }

        location / {
            proxy_bind $server_addr;
            proxy_pass http://LOCAL-IP:PORT;
            proxy_set_header Host $http_host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Scheme $scheme;
        }
    }

    # Copy + paste the same "CalibreWeb SERVER" block if you want to add another subdomain such as plex. It may require a different set up though.

    Thank you everyone for your help!
  22. 1 point
    Sorted! Found the answer, eventually, in one of the guides. All I had to do was add the folders within the docker container as additional volumes/folders and, hey presto. Just need to sort out qBittorrent, Dropbox and a VNC within the next 26 days. If all that works I might just go with Unraid, although I've yet to try Amahi.
  23. 1 point
    Clear an unRAID array data drive (for the Shrink array wiki page)

    This script is for use in clearing a drive that you want to remove from the array, while maintaining parity protection. I've added a set of instructions within the Shrink array wiki page for it. It is designed to be as safe as possible, and will not run unless specific conditions are met:

    - The drive must be a data drive that is a part of an unRAID array
    - It must be a good drive, mounted in the array, capable of every sector being zeroed (no bad sectors)
    - The drive must be completely empty, no data at all left on it. This is tested for!
    - The drive should have a single root folder named clear-me - exactly 8 characters, 7 lowercase and 1 hyphen. This is tested for!

    Because the User.Scripts plugin does not allow interactivity (yet!), some kludges had to be used, one being the clear-me folder, and the other being a 60 second wait before execution to allow the user to abort. I actually like the clear-me kludge, because it means the user cannot possibly make a mistake and lose data. The user *has* to empty the drive first, then add this odd folder.

    #!/bin/bash
    # A script to clear an unRAID array drive. It first checks the drive is completely empty,
    # except for a marker indicating that the user desires to clear the drive. The marker is
    # that the drive is completely empty except for a single folder named 'clear-me'.
    #
    # Array must be started, and drive mounted. There's no other way to verify it's empty.
    # Without knowing which file system it's formatted with, I can't mount it.
    #
    # Quick way to prep drive: format with ReiserFS, then add 'clear-me' folder.
    #
    # 1.0  first draft
    # 1.1  add logging, improve comments
    # 1.2  adapt for User.Scripts, extend wait to 60 seconds
    # 1.3  add progress display; confirm by key (no wait) if standalone; fix logger
    # 1.4  only add progress display if unRAID version >= 6.2

    version="1.4"
    marker="clear-me"
    found=0
    wait=60
    p=${0%%$P}   # dirname of program
    p=${p:0:18}
    q="/tmp/user.scripts/"

    echo -e "*** Clear an unRAID array data drive *** v$version\n"

    # Check if array is started
    ls /mnt/disk[1-9]* 1>/dev/null 2>/dev/null
    if [ $? -ne 0 ]
    then
       echo "ERROR: Array must be started before using this script"
       exit
    fi

    # Look for array drive to clear
    n=0
    echo -n "Checking all array data drives (may need to spin them up) ... "
    if [ "$p" == "$q" ] # running in User.Scripts
    then
       echo -e "\n"
       c="<font color=blue>"
       c0="</font>"
    else #set color teal
       c="\x1b[36;01m"
       c0="\x1b[39;49;00m"
    fi

    for d in /mnt/disk[1-9]*
    do
       x=`ls -A $d`
       z=`du -s $d`
       y=${z:0:1}
    #   echo -e "d:"$d "x:"${x:0:20} "y:"$y "z:"$z

       # the test for marker and emptiness
       if [ "$x" == "$marker" -a "$y" == "0" ]
       then
          found=1
          break
       fi
       let n=n+1
    done

    #echo -e "found:"$found "d:"$d "marker:"$marker "z:"$z "n:"$n

    # No drives found to clear
    if [ $found == "0" ]
    then
       echo -e "\rChecked $n drives, did not find an empty drive ready and marked for clearing!\n"
       echo "To use this script, the drive must be completely empty first, no files"
       echo "or folders left on it. Then a single folder should be created on it"
       echo "with the name 'clear-me', exactly 8 characters, 7 lowercase and 1 hyphen."
       echo "This script is only for clearing unRAID data drives, in preparation for"
       echo "removing them from the array. It does not add a Preclear signature."
       exit
    fi

    # check unRAID version
    v1=`cat /etc/unraid-version`
    # v1 is 'version="6.2.0-rc5"' (fixme if 6.10.* happens)
    v2="${v1:9:1}${v1:11:1}"
    if [[ $v2 -ge 62 ]]
    then
       v=" status=progress"
    else
       v=""
    fi
    #echo -e "v1=$v1 v2=$v2 v=$v\n"

    # First, warn about the clearing, and give them a chance to abort
    echo -e "\rFound a marked and empty drive to clear: $c Disk ${d:9} $c0 ( $d ) "
    echo -e "* Disk ${d:9} will be unmounted first."
    echo "* Then zeroes will be written to the entire drive."
    echo "* Parity will be preserved throughout."
    echo "* Clearing while updating Parity takes a VERY long time!"
    echo "* The progress of the clearing will not be visible until it's done!"
    echo "* When complete, Disk ${d:9} will be ready for removal from array."
    echo -e "* Commands to be executed:\n***** $c umount $d $c0\n***** $c dd bs=1M if=/dev/zero of=/dev/md${d:9} $v $c0\n"

    if [ "$p" == "$q" ] # running in User.Scripts
    then
       echo -e "You have $wait seconds to cancel this script (click the red X, top right)\n"
       sleep $wait
    else
       echo -n "Press ! to proceed. Any other key aborts, with no changes made. "
       ch=""
       read -n 1 ch
       echo -e -n "\r \r"
       if [ "$ch" != "!" ]; then exit; fi
    fi

    # Perform the clearing
    logger -tclear_array_drive "Clear an unRAID array data drive v$version"
    echo -e "\rUnmounting Disk ${d:9} ..."
    logger -tclear_array_drive "Unmounting Disk ${d:9} (command: umount $d ) ..."
    umount $d
    echo -e "Clearing Disk ${d:9} ..."
    logger -tclear_array_drive "Clearing Disk ${d:9} (command: dd bs=1M if=/dev/zero of=/dev/md${d:9} $v ) ..."
    dd bs=1M if=/dev/zero of=/dev/md${d:9} $v
    #logger -tclear_array_drive "Clearing Disk ${d:9} (command: dd bs=1M if=/dev/zero of=/dev/md${d:9} status=progress count=1000 seek=1000 ) ..."
    #dd bs=1M if=/dev/zero of=/dev/md${d:9} status=progress count=1000 seek=1000

    # Done
    logger -tclear_array_drive "Clearing Disk ${d:9} is complete"
    echo -e "\nA message saying \"error writing ... no space left\" is expected, NOT an error.\n"
    echo -e "Unless errors appeared, the drive is now cleared!"
    echo -e "Because the drive is now unmountable, the array should be stopped,"
    echo -e "and the drive removed (or reformatted)."
    exit

    The attached zip is 'clear an array drive.zip', containing both the User.Scripts folder and files, but also the script named clear_array_drive (same script) for standalone use. Either extract the files for User.Scripts, or extract clear_array_drive into the root of the flash, and run it from there.

    Also attached is 'clear an array drive (test only).zip', for playing with this and testing it. It contains exactly the same scripts, but writing is turned off, so no changes at all will happen. It is designed for those afraid of clearing the wrong thing, or not yet trusting these scripts. You can try it in various conditions and see what happens; it will pretend to do the work, but no changes at all will be made.

    I do welcome examination by bash shell script experts, to ensure I made no mistakes. It's passed my own testing, but I'm not an expert. Rather, a very frustrated bash user, who lost many hours to the picky syntax! I really don't understand why people like type-less languages! It only *looks* easier.

    After a while, you'll be frustrated with the 60 second wait (when run in User Scripts). I did have it at 30 seconds, but decided 60 was better for new users, for now. I'll add interactivity later, for standalone command line use. It also really needs a way to provide progress info while it's clearing. I have ideas for that.

    The included 'clear_array_drive' script can now be run at the command line within any unRAID v6, and possibly unRAID v5, but is not tested there. (Procedures for removing a drive are different in v5.) Progress display is only available in 6.2 or later. In 6.1 or earlier, it's done when it's done.

    Update 1.3 - add display of progress; confirm by key '!' (no wait) if standalone; fix logger; add a bit of color. Really appreciate the tip on 'status=progress', looks pretty good. Lots of numbers presented; the ones of interest are the second and the last.

    Update 1.4 - make progress display conditional for 6.2 or later; hopefully now the script can be run in any v6, possibly v5.

    clear_an_array_drive.zip
    clear_an_array_drive_test_only.zip
  24. 1 point
    Can I change my pool to RAID0 or other modes?

    Yes. For now it can only be changed manually; the new config will stick after a reboot, but note that changing the pool using the WebGUI, e.g., adding a device, will return the cache pool to the default RAID1 mode (note: starting with unRAID v6.3.3 the cache pool profile in use will be maintained when a new device is added using the WebGUI, except when another device is added to a single-device cache, in which case it will create a RAID1 pool). You can add, replace or remove a device and maintain the profile in use by following the appropriate procedure in the FAQ (remove only if it does not go below the minimum number of devices required for that specific profile). It's normal to get a "Cache pool BTRFS too many profiles" warning during the conversion; just acknowledge it.

    These are the available modes (enter these commands in the balance window on the cache page and click Balance; note that if a command doesn't work, type it instead of copy/pasting from the forum - sometimes extra characters are pasted and the balance won't work):

    Single: requires 1 device only; it's also the only way of using all space from different-size devices, btrfs's way of doing a JBOD spanned volume; no performance gains vs a single disk or RAID1
    -dconvert=single -mconvert=raid1

    RAID0: requires 2 devices; best performance, no redundancy; if used with different-size devices, only 2 x the capacity of the smallest device will be available, even if the reported space is larger.
    -dconvert=raid0 -mconvert=raid1

    RAID1: default; requires at least 2 devices; to use the full capacity of a 2-device pool, they all need to be the same size.
    -dconvert=raid1 -mconvert=raid1

    RAID10: requires at least 4 devices; to use the full capacity of a 4-device pool, they all need to be the same size.
    -dconvert=raid10 -mconvert=raid10

    RAID5/6 still has some issues and should be used with care, though most serious issues have been fixed in the current kernel as of this edit (4.14.x).

    RAID5: requires at least 3 devices.
    -dconvert=raid5 -mconvert=raid1

    RAID6: requires at least 4 devices.
    -dconvert=raid6 -mconvert=raid1

    Note about RAID6: because metadata is RAID1 it can only handle 1 missing device, but it can still help with a URE on a second disk during a replace, since metadata uses a very small portion of the drive. You can use raid5/6 for metadata, but it's currently not recommended because of the write-hole issue; it can, for example, blow up the entire filesystem after an unclean shutdown.

    Obs: -d refers to the data, -m to the metadata; metadata should be left redundant, i.e., you can have a RAID0 pool with RAID1 metadata. Metadata takes up very little space and the added protection can be valuable.

    When changing the pool mode, confirm that when the balance is done all data is in the newly selected mode by checking "btrfs filesystem df" on the cache page; a RAID10 pool, for example, should show the Data, Metadata and System lines all as RAID10. If there is more than one data mode displayed, do the balance again with the mode you want; for some unRAID releases and the included btrfs-tools, e.g., v6.1 and v6.2, it's normal to need to run the balance twice.
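    To illustrate the "btrfs filesystem df" check described above, here is a sketch that parses sample output (hard-coded here, since real output requires a live btrfs pool); a fully converted pool lists exactly one Data profile:

    ```shell
    # Sample output in the shape produced by: btrfs filesystem df /mnt/cache
    sample='Data, RAID10: total=100.00GiB, used=40.00GiB
    System, RAID10: total=64.00MiB, used=16.00KiB
    Metadata, RAID10: total=1.00GiB, used=512.00MiB'

    # Count the Data profiles; more than one means the balance must be re-run.
    echo "$sample" | grep -c '^Data,'
    # prints: 1
    ```

    If the count were 2 (e.g. a leftover "Data, single" line next to "Data, RAID10"), the conversion is incomplete and the same balance should be run again.
    
    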
  25. 1 point
    Make a note of whichever folder you went into that has the problem (whichever one starts Windows on an infinite loop of empty folders). Then log into the server using either the local console or PuTTY. Enter the following:

    mc

    Navigate to /mnt/user/whatever share and folder have the problem. Look for a file that has a rectangle in its name. Rename it to something normal. It's probably a messed-up file from one of your downloads that didn't get renamed correctly.