Everything posted by mgutt

  1. A single download through FTP from the SSD cache: if the file is located in RAM, the speed goes up to 1 GB/s.
  2. I never used a version lower than 6.8.3, so I'm not able to compare, but the speed through SMB is super slow compared to NFS or FTP: @bonienl You made two tests, and in the first one you were able to download from one HDD with 205 MB/s. Wow, I never reach more than 110 MB/s through SMB since enabling the parity disk! Do you have one? Are you sure you used the HDDs? A re-downloaded file comes from the SMB cache (RAM), but then 205 MB/s would be really slow (re-downloading a file from my Unraid server hits 700 MB/s through SMB). In your second test you reached 760 MB/s on your RAID10 SSD pool, and you think this value is good? With your setup you should easily reach more than 1 GB/s! With my old Synology NAS I downloaded at 1 GB/s without problems (depending on the physical location of the data on the HDD platters), especially if the file was cached in RAM. This review shows the performance of my old NAS, and it does not use SSDs at all! I tested my SSD cache (a single NVMe) on my Unraid server and it's really slow (compared to the 10G setup and the constant SSD performance): FTP Download: FTP Upload: A 1TB 970 Evo should easily hit the 10G limits for up- and downloads. I think there is something really wrong with Unraid. And SMB is even worse.
  3. OK, last test for today. I enabled NFS in Windows 10 as explained here and downloaded from 3 disks (the 4th disk was busy with UnBalance). As you can see, I was able to hit 150 MB/s per drive without problems: Conclusion: something is really wrong with SMB in Unraid 6.8.3.
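The same NFS exports can also be tested from any Linux client to rule out the Windows NFS client; a minimal sketch (server name, share and file name are assumptions):
# mount an exported user share and read a large file without involving SMB
mkdir -p /mnt/nfstest
mount -t nfs tower:/mnt/user/Movies /mnt/nfstest
dd if=/mnt/nfstest/bigfile.mkv of=/dev/null bs=1M status=progress
umount /mnt/nfstest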
  4. I checked the smb.conf and it contains a wrong setting:
[global]
# configurable identification
include = /etc/samba/smb-names.conf
# log stuff only to syslog
log level = 0
syslog = 0
syslog only = Yes
# we don't do printers
show add printer wizard = No
disable spoolss = Yes
load printers = No
printing = bsd
printcap name = /dev/null
# misc.
invalid users = root
unix extensions = No
wide links = Yes
use sendfile = Yes
aio read size = 0
aio write size = 4096
allocation roundup size = 4096
# ease upgrades from Samba 3.6
acl allow execute always = Yes
# permit NTLMv1 authentication
ntlm auth = Yes
# hook for user-defined samba config
include = /boot/config/smb-extra.conf
# auto-configured shares
include = /etc/samba/smb-shares.conf
aio write size cannot be 4096. The only valid values are 0 and 1: https://www.samba.org/samba/docs/current/man-html/smb.conf.5.html But I tested both and it did not change anything. I tested this solution without success, too. Other Samba settings I tested:
# manually added
server multi channel support = yes
#block size = 4096
#write cache size = 2097152
#min receivefile size = 16384
#getwd cache = yes
#socket options = IPTOS_LOWDELAY TCP_NODELAY SO_RCVBUF=65536 SO_SNDBUF=65536
#sync always = yes
#strict sync = yes
#smb encrypt = off
server multi channel support is still active because it enables multiple TCP/IP connections: Side note: after downloading so many files from different disks, I found out that my RAM has a maximum SMB transfer speed of 700 MB/s. But if I download from multiple disks, the transfer speed is capped at around 110 MB/s (and falls below 50 MB/s after it starts reading from disk). All CPU cores show extremely high usage (90-100%) if two simultaneous SMB transfers are running. Even one transfer produces a lot of CPU load (60-80% on all cores). Now I'll try to set up NFS in Windows 10.
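For anyone who wants to repeat such Samba experiments, a minimal sketch of how a test option can be applied on Unraid without editing the generated smb.conf (options appended here end up in the [global] section via the include shown above; whether a given option takes effect at runtime depends on Samba):
# add the option to the user-defined include
echo 'server multi channel support = yes' >> /boot/config/smb-extra.conf
# validate the resulting configuration
testparm -s
# ask the running smbd processes to reload their configuration
smbcontrol all reload-config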
  5. No need to test iperf. I enabled the FTP server, opened FileZilla, set parallel connections to 5 and chose 5 huge files from 5 different disks: By that I was able to reach 900 MB/s in total: Similar test, this time I started multiple downloads through Windows Explorer (SMB): This time my wife was watching a movie through Plex, so results could be a little bit slower than possible, but FileZilla was still able to download at >700 MB/s, so there shouldn't be a huge difference. So what's up with SMB? I checked the used SMB version through Windows PowerShell and it returns 3.1.1.
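The negotiated SMB dialect can also be checked from the server side; in recent Samba versions the session list includes the protocol version, a small sketch:
# list connected SMB clients including the negotiated protocol version
smbstatus -b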
  6. Did you solve the issue? My transfer speed isn't as low as yours, but I think it should be better. What I did:
a) I found in this thread a hint how to install iostat, so I did the following:
cd /mnt/user/Marc
wget http://slackware.cs.utah.edu/pub/slackware/slackware64-14.2/slackware64/ap/sysstat-11.2.1.1-x86_64-1.txz
upgradepkg --install-new sysstat-11.2.1.1-x86_64-1.txz
b) Started iostat as follows:
watch -t -n 0.1 iostat -d -t -y 5 1
c) I downloaded through Windows a huge file that is located on my SSD cache, and as we can see it's loaded from the NVMe as expected:
d) Then I downloaded a smaller file to test the RAM cache. The first download was delivered through the NVMe:
e) The second transfer shows nothing (the file was loaded from RAM / the SMB RAM cache):
This leaves some questions:
1.) Why is the SSD read speed 80 MB/s slower than the RAM, although it's able to transfer much more than 1 GB/s?
2.) Why is the maximum around 500 MB/s?
Note: My PC's network status, the Unraid dashboard and my switch all show 10G as link speed.
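To separate the NVMe speed from the Linux page cache (RAM) in such tests, the cache can be dropped between reads; a minimal sketch (the file path is an assumption):
# first read comes from the NVMe, second read is served from RAM
dd if=/mnt/cache/Marc/bigfile.bin of=/dev/null bs=1M status=progress
dd if=/mnt/cache/Marc/bigfile.bin of=/dev/null bs=1M status=progress
# drop the page cache and read again to measure the raw NVMe speed
sync; echo 3 > /proc/sys/vm/drop_caches
dd if=/mnt/cache/Marc/bigfile.bin of=/dev/null bs=1M status=progress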
  7. I did the same now with Lubuntu and Google Chrome Remote Desktop. Here is a little tutorial (in German), too:
  8. My script contains a loop of different short-running processes (mkdir, openssl, touch) and not one long-running single process. If I close e.g. the "Run Script" window, it is killed completely (as expected).
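If a script should survive closing the "Run Script" window, it can be started detached from the session; a minimal sketch (the script path is an assumption, and the User Scripts plugin also offers its own "run in background" option):
# run the script detached from the current session and log its output
nohup /boot/config/plugins/user.scripts/scripts/myscript/script > /var/log/myscript.log 2>&1 &
echo "started with PID $!"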
  9. CA User Scripts does not kill scripts!
  10. Maybe someone needs a script for file encryption:
  11. Description
filecryptor is a shell script that reads all files in a source directory and copies them, encrypted with AES-256, to a target directory. The passphrase must be provided through a file named "filecryptor.key" that has to be located in the root of the source directory. Your password is salted with the file's timestamp and hashed with PBKDF2 10,000 times. This is the default value of filecryptor. You can change it, but you need to remember the exact value or you will not be able to decrypt the files anymore! Feel free to upload your files to a cloud after encryption has finished! You can execute this script by using the CA User Scripts plugin.
Features
- encrypt files from a source to a target directory with AES-256 using your password
- encryption is hardened by iterating your password 10,000 times through PBKDF2, which slows down brute-force attacks
- encryption is hardened by salting your password with the file's timestamp, leaving no chance for rainbow table attacks
- decrypt files of a source folder
- resume if the last script execution has been interrupted (delete filecryptor.last in the target dir to fully restart)
- skip_files_last_seconds allows skipping files that are (partially) written at the moment
- dry_run allows testing filecryptor
- already existing files with a newer or same timestamp will be skipped

#!/bin/sh
# #####################################
# filecryptor
# Version: 0.3
# Author: Marc Gutt
#
# Description:
# Copies and encrypts files from a source to a target dir. This script uses
# AES encryption with the file modification time as salt. By that, re-encrypting
# produces identical files, making it easier for rsync/rclone to skip already
# transferred files.
#
# How-to encrypt:
# 1.) Add a text file with the name "filecryptor.key" and your encryption password as content in the source directory.
# 2.) Set your source and target directories
# 3.) Execute this script
#
# How-to decrypt:
# 1.) Backup your encrypted files
# 2.) Add a text file with the name "filecryptor.key" and your encryption password as content in the source directory.
# 3.) Set ONLY the source directory (encrypted files will be overwritten by their decrypted version!)
# 4.) Execute this script
#
# Notes:
# - A public salt should be safe (https://crypto.stackexchange.com/a/59180/41422 & https://stackoverflow.com/a/3850335/318765)
# - After encrypting you could use rsync with "--remove-source-files" to move the encrypted files to a final target directory
# - rclone has a "move" mode to move the encrypted files to a cloud
# - You can set the PBKDF2 iterations through "iter". A higher number of iterations adds more security but is slower
#   (https://en.wikipedia.org/wiki/PBKDF2#Purpose_and_operation & https://security.stackexchange.com/a/3993/2296)
#
# Changelog:
# 0.3
# - set your own PBKDF2 iteration count through "iter" setting
# - salt is padded with zeros to avoid "hex string is too short, padding with zero bytes to length" message on openssl output
# - decryption now supports resume
# - empty openssl files are deleted on decryption fails
# - encryption of already existing files in the target will be skipped as long as the source files aren't newer
# 0.2
# - skip dirs on resume (not only files)
# - bug fix: endless looping of empty dirs
# 0.1
# - first release
#
# Todo:
# - optional filename encryption
# - use a different filename if decrypted filename "${file}.filecryptor" already exists
# - add "overwrite=true" setting and check file existence before executing openssl
# #####################################

# settings
source="/mnt/disks/THOTH_Photo"
target="/mnt/user/photo"
skip_files_last_seconds=false
dry_run=false
iter=10000 # remember this number or you will not be able to decrypt your files anymore!

# check settings
source=$([[ "${source: -1}" == "/" ]] && echo "${source%?}" || echo "$source")
target=$([[ "${target: -1}" == "/" ]] && echo "${target%?}" || echo "$target")
target=$([[ $target == 0 ]] && echo "false" || echo "$target")
target=$([[ $target == "" ]] && echo "false" || echo "$target")
skip_files_last_seconds=$([[ $skip_files_last_seconds == 0 ]] && echo "false" || echo "$skip_files_last_seconds")
dry_run=$([[ $dry_run == 0 ]] && echo "false" || echo "$dry_run")
dry_run=$([[ $dry_run == 1 ]] && echo "true" || echo "$dry_run")

# defaults
pwfile="${source}/filecryptor.key"
resume="${target}/filecryptor.last"

# check if passphrase exists
if [[ ! -f $pwfile ]]; then
    echo "Error! filecryptor did not find ${source}/filecryptor.key"
    exit 1
fi

# check if we have a starting point
if [[ -f $resume ]]; then
    last_file=$( < $resume)
fi

function filecryptor() {
    path=$1
    echo "Parsing $path ..."
    for file in "$path"/*; do
        # regular file
        if [ -f "$file" ]; then
            # skip passphrase file
            if [[ $file == $pwfile ]]; then
                echo "Skip $pwfile"
                continue
            fi
            # skip files until we reach our starting point
            if [[ -n $last_file ]]; then
                if [[ $file != $last_file ]]; then
                    echo "Skip $file"
                else
                    echo "Found the last processed file $last_file"
                    last_file=""
                fi
                continue
            fi
            file_time=$(stat -c %Y "$file") # file modification time
            # decrypt file
            if [[ $target == "false" ]]; then
                echo "Decrypt file ${file}"
                if [[ $dry_run != "true" ]]; then
                    if openssl aes-256-cbc -d -iter $iter -in "$file" -out "${file}.filecryptor" -pass file:"${pwfile}"; then
                        rm "$file"
                        mv "${file}.filecryptor" "$file"
                        touch --date=@${file_time} "${file}"
                    else
                        rm "${file}.filecryptor" # cleanup on fail
                    fi
                fi
                # remember this file as starting point for the next execution (if interrupted)
                if [[ $dry_run != "true" ]]; then
                    echo "$file" > "${resume}"
                fi
                continue
            fi
            # skip recently modified files
            if [[ "$skip_files_last_seconds" =~ ^[0-9]+$ ]]; then
                compare_time=$(($file_time + $skip_files_last_seconds))
                current_time=$(date +%s)
                # is the file old enough?
                if [[ $compare_time -gt $current_time ]]; then
                    continue
                fi
            fi
            dirname=$(dirname "$file")
            dirname="${dirname/$source/}" # remove source path from dirname
            file_basename=$(basename -- "$file")
            # skip already existing files with same or newer timestamp
            if [ -f "${target}${dirname}/${file_basename}" ]; then
                target_file_time=$(stat -c %Y "${target}${dirname}/${file_basename}") # target file modification time
                if [[ $target_file_time -ge $file_time ]]; then
                    echo "Skipped ${file} as it already exists in target"
                    continue
                fi
            fi
            # create parent dirs
            echo "Create parent dirs ${target}${dirname}"
            if [[ $dry_run != "true" ]]; then
                mkdir -p "${target}${dirname}"
            fi
            # encrypt file
            echo "Create encrypted file ${target}${dirname}/${file_basename}"
            if [[ $dry_run != "true" ]]; then
                salt="${file_time}0000000000000000"
                salt=${salt:0:16}
                openssl aes-256-cbc -iter $iter -in "$file" -out "${target}${dirname}/${file_basename}" -S $salt -pass file:"${pwfile}"
            fi
            # modification time
            echo "Set original file modification time"
            if [[ $dry_run != "true" ]]; then
                touch --date=@${file_time} "${target}${dirname}/${file_basename}" # https://unix.stackexchange.com/a/36765/101920
            fi
            # remember this file as starting point for the next execution (if interrupted)
            if [[ $dry_run != "true" ]]; then
                echo "$file" > "${resume}"
            fi
        # dir
        elif [ -d "$file" ]; then
            # skip dir until we reach our starting point
            if [[ -n $last_file ]]; then
                if [[ $last_file != "${file}"* ]]; then
                    echo "Skip $file"
                    continue
                else
                    echo "Found the last processed dir $file"
                fi
            fi
            filecryptor "$file"
        fi
    done
}

filecryptor "$source"

# clean up
if [[ $dry_run != "true" ]]; then
    rm "${resume}"
fi

exit

# encrypt example
if file_time=$(stat -c %Y "file.txt"); then
    openssl aes-256-cbc -iter 10000 -in "file.txt" -out "file.enc" -S $file_time -pass file:"filecryptor.key"
    touch --date=@${file_time} "file.enc"
fi

# decrypt example
if file_time=$(stat -c %Y "file.enc") && openssl aes-256-cbc -d -iter $iter -in "file.enc" -out "file.enc.filecryptor" -pass file:"filecryptor.key"; then
    rm "file.enc"
    mv "file.enc.filecryptor" "file.txt"
    touch --date=@${file_time} "file.txt"
fi
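A quick way to verify the timestamp-salt idea (re-encryption produces identical output, so rsync/rclone can skip unchanged files); file names are hypothetical:
file_time=$(stat -c %Y "file.txt")
salt="${file_time}0000000000000000"; salt=${salt:0:16}
openssl aes-256-cbc -iter 10000 -in "file.txt" -out "run1.enc" -S $salt -pass file:"filecryptor.key"
openssl aes-256-cbc -iter 10000 -in "file.txt" -out "run2.enc" -S $salt -pass file:"filecryptor.key"
cmp run1.enc run2.enc && echo "identical - rsync would skip this file"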
  12. Hi, I have two different problems with this plugin:
1.) When I choose "Power Save" as my CPU Scaling Governor, my Plex Docker container buffers all streams every 10-15 seconds. Do I need another driver or is "ACPI" correct for my Atom C3758? Testing with <watch grep \"cpu MHz\" /proc/cpuinfo> returned 800 MHz when no process was running. Now it's around 1.7 GHz (with On Demand).
2.) By default, the NIC settings contained "eth0" in the list, but as long as this value was part of the settings, my Unraid server was not able to connect to the internet anymore! I do not know which of these settings caused this problem. My suggestion is to add "N/A" to the different options to be able to reset all NIC settings. While the server was running, I was able to reset them by re-applying the Unraid network settings (removed and re-added the second DNS server IP ^^).
  13. I had the same problem (no ping to 8.8.8.8 possible, but local access works). Finally I found out that the Tips and Tweaks plugin killed my network settings. I do not know why, but after removing "eth0" from the list and rebooting the machine, the internet was accessible again: I think the plugin is overwriting my settings on boot, causing this error. I will post this in the plugin's thread, too. EDIT: Sometimes even the local network did not work. If this happens, it helps to log in on the server itself (IPMI or physical console) as "root" with my password and re-apply the network settings with:
/etc/rc.d/rc.inet1 restart
After that I was able to open the web GUI again.
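This could even be automated as a scheduled user script; a minimal watchdog sketch (the target IP and timeout are assumptions):
#!/bin/bash
# if the internet is unreachable, re-apply the Unraid network settings
if ! ping -c 1 -W 5 8.8.8.8 > /dev/null 2>&1; then
    logger "network watchdog: no connectivity, restarting rc.inet1"
    /etc/rc.d/rc.inet1 restart
fi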
  14. I tested the "Power Save" setting for my Intel Atom C3758. By that the CPU was throttled down to 800 MHz. First it looked good, but then I found out that all content streamed through Plex started to buffer every 10-15 seconds. After choosing the default "On Demand" setting, everything went back to normal. Tips and Tweaks shows "Driver: ACPI CPU Freq" as the driver. Could this be the reason (as it does not use the intel_pstate driver)?
  15. I have two Unraid servers. One uses an Atom C3758 CPU. After installing the Tips and Tweaks plugin and changing from On Demand to Power Save, it downclocks to 800 MHz as expected. The other server uses a Pentium J5005. The dashboard returns a usage of 0-2% for all cores most of the time: By using this command in the web terminal:
watch grep \"cpu MHz\" /proc/cpuinfo
it returns 2.3 GHz:
Every 2.0s: grep "cpu MHz" /proc/cpuinfo    Black: Mon Jun 15 00:03:51 2020
cpu MHz : 2399.270
cpu MHz : 2315.888
cpu MHz : 2088.474
cpu MHz : 2333.949
After disabling the Intel Turbo it returns 1.3 GHz:
Every 2.0s: grep "cpu MHz" /proc/cpuinfo    Black: Mon Jun 15 00:07:02 2020
cpu MHz : 1318.495
cpu MHz : 1342.343
cpu MHz : 1245.570
cpu MHz : 1369.103
It now clocks below the base clock (1.5 GHz) of this CPU?! But the really strange thing is the power consumption, as it is nearly the same for all settings (all five disks spun down):
Performance + Turbo: 18.59 W
Power Save + Turbo: 18.09 W
Power Save: 18.09 W
Power Save + closed all browser windows: 18.02 W
Power measurement was done with a good consumer power meter (<5 W: +/- 100 mW; >5 W: +/- 2%). EDIT: Hmm, I restarted the server and although the Tips and Tweaks plugin displays "Power Save", it is in "Performance" mode (as all cores stay at 2.6 GHz).
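The driver, governor and turbo state can also be read directly from sysfs, independent of what the plugin displays; a small sketch (which turbo path exists depends on the loaded driver):
cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_driver       # e.g. intel_pstate or acpi-cpufreq
cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor     # e.g. powersave, ondemand, performance
grep "cpu MHz" /proc/cpuinfo                                  # current frequency of all cores
cat /sys/devices/system/cpu/intel_pstate/no_turbo 2>/dev/null # intel_pstate: 1 = turbo disabled
cat /sys/devices/system/cpu/cpufreq/boost 2>/dev/null         # acpi-cpufreq: 1 = boost enabled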
  16. Yes, that's correct. Sorry, I thought it was a file because mv returned a file error instead of a "Directory not empty" error for directories. mv is not able to merge dirs; cp is able to do that. Example:
cp -al source/folder destination
rm -r source/folder
The -l option causes cp to create hard links instead of copying the files. But cp does not have an exclude flag, so it does not solve your problem. Instead use rsync with "--link-dest=". By that it creates hard links and does NOT copy the files, and then "--remove-source-files" removes the initial hard link of each file:
rsync -a --remove-source-files --exclude=".sync*" --link-dest=/mnt/disks/DELUGETorrents/Seedbox_Downloads/ /mnt/disks/DELUGETorrents/Seedbox_Downloads/ /mnt/disks/DELUGETorrents/Sonarr_Pickup/
Sadly it does not remove the empty dirs in the source. This needs a separate command: https://unix.stackexchange.com/a/46326/101920
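A command along the lines of the linked answer (a sketch, using the source path from the example above):
# delete now-empty subdirectories in the source (keeps the top directory itself)
find /mnt/disks/DELUGETorrents/Seedbox_Downloads/ -mindepth 1 -type d -empty -delete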
  17. Could it be that you aren't allowed to delete the target file? This manual says that the target file is removed before the local file is moved: https://unix.stackexchange.com/a/236676/101920 You should test it in a script by using rm: rm '/mnt/disks/DELUGETorrents/Sonarr_Pickup/The 100 Season 3 [1080p] WEB-DL H264'
  18. Hmm, I wonder why this does not work. But you can instead use rsync with the --remove-source-files option. This will delete the source after it has been transferred to the target. Example:
rsync -a --remove-source-files --exclude=".sync*" /mnt/disks/DELUGETorrents/Seedbox_Downloads/ /mnt/disks/DELUGETorrents/Sonarr_Pickup/
Explanation:
-a = -rlptgoD = recursive, copy symlinks, preserve permissions, times, group, owner and devices
--remove-source-files = delete source files after transfer
--exclude=".sync*" = ignore files starting with ".sync"
Another useful setting if you want to use rsync to sync two folders:
--delete = delete files in the target that are not present in the source (not useful in combination with "--remove-source-files")
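Before running this on real data, a dry run shows what rsync would do without touching anything; a minimal sketch:
# -n = dry run, -i = itemize the planned changes
rsync -nai --remove-source-files --exclude=".sync*" /mnt/disks/DELUGETorrents/Seedbox_Downloads/ /mnt/disks/DELUGETorrents/Sonarr_Pickup/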
  19. The [-f] in the documentation means that -f is optional. Do not type the brackets in your script. And you made a second mistake: you did not add a space, so mv tried to find the path "[-f]/mnt...". That's the reason why it returned the "no such directory" error.
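A correct call would therefore look like this (paths are hypothetical):
# no brackets around -f, and a space between the option and the path
mv -f "/mnt/user/downloads/file.mkv" "/mnt/user/media/file.mkv"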
  20. Solution in Windows without using RDP or other VNC software:
1.) Install AutoHotKey
2.) Right-click on the desktop -> create AutoHotKey script
3.) Edit the script and paste: ^+v::Send {Raw}%Clipboard%
4.) Double-click the script
5.) Now you can use CTRL+SHIFT+V to paste anything into the noVNC browser window (it types the characters that are in the clipboard)
Paste the script into the Windows startup folder so it's automatically executed on reboot: https://www.autohotkey.com/docs/FAQ.htm#Startup
  21. Solution? https://stackoverflow.com/a/43099210/318765
  22. This is how I check the status and start a container through one of my scripts:
# check if mkvtoolnix container exists
if [[ ! "$(docker ps -q -f name=mkvtoolnix_mkv2sub)" ]]; then
    # https://stackoverflow.com/a/38576401/318765
    # check for blocking container
    if [[ "$(docker ps -aq -f status=exited -f name=mkvtoolnix_mkv2sub)" ]]; then
        docker rm mkvtoolnix_mkv2sub
    fi
    echo "mkvtoolnix container needs to be started"
    # start mkvtoolnix container
    docker_options=(
        run
        -d
        --name=mkvtoolnix_mkv2sub
        -e TZ=Europe/Berlin
        -v "${docker_config_path}mkvtoolnix_mkv2sub:/config:rw"
        -v "${movies_path}:/storage:rw"
        jlesage/mkvtoolnix
    )
    echo "docker ${docker_options[@]}"
    docker "${docker_options[@]}"
fi
Found here: https://stackoverflow.com/a/38576401/318765 But this only checks whether the container already exists or its status is "exited". I tried to check whether the container is no longer used by looking at its CPU usage, but this produced problems with scripts running in parallel, so I disabled it:
# check if container exists
if [[ -x "$(command -v docker)" ]] && [[ "$(docker ps -q -f name=mkvtoolnix_mkv2sub)" ]]; then
    # stop container only if it's not in use (by another shell script)
    mkvtoolnix_cpu_usage="$(docker stats mkvtoolnix_mkv2sub --no-stream --format "{{.CPUPerc}}")"
    # if [[ ${mkvtoolnix_cpu_usage%.*} -lt 1 ]]; then
    #     # we do not stop the container as our script is not race-condition safe!
    #     echo "Stop mkvtoolnix container"
    #     docker stop mkvtoolnix_mkv2sub
    #     docker rm mkvtoolnix_mkv2sub
    # fi
fi
Maybe in the future I will use a random container name and clean them up after they become too old. I'm not sure.
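A rough sketch of how the "random container name plus cleanup" idea could look (the label and age limit are assumptions; ${docker_config_path} and ${movies_path} are the same variables as above):
# start a uniquely named helper container and tag it with a label
name="mkvtoolnix_mkv2sub_${RANDOM}"
docker run -d --name="$name" --label purpose=mkv2sub \
    -e TZ=Europe/Berlin \
    -v "${docker_config_path}${name}:/config:rw" \
    -v "${movies_path}:/storage:rw" jlesage/mkvtoolnix
# later: remove stopped helpers with this label that are older than one hour
docker container prune --force --filter "label=purpose=mkv2sub" --filter "until=1h"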
  23. You need to add "shopt -s extglob" to allow this command: https://askubuntu.com/a/1228370/227119
  24. Then try !(.syncthing.*). I think this should work.
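A minimal sketch of the full command with extglob enabled (paths are hypothetical):
shopt -s extglob
# moves everything except hidden files, so the .syncthing.* files stay behind
mv /mnt/source/!(.syncthing.*) /mnt/target/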