mgutt · Moderators · 11,351 posts · 124 days won
Everything posted by mgutt

  1. Feature Request: I would really like to see echo output inside the logs (maybe as an optional setting?). At the moment only errors are logged. (A possible workaround is sketched below.)
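
     Until such a setting exists, a minimal workaround sketch, assuming you can edit the script yourself (the log path is hypothetical): redirect stdout and stderr into a log file of your own.

     #!/bin/bash
     # hypothetical log path - adjust as needed
     LOG="/tmp/myscript.log"
     # duplicate everything written to stdout/stderr into the log file
     exec > >(tee -a "$LOG") 2>&1
     echo "this echo now lands in $LOG as well"
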
  2. If I understand this post correctly, rclone does try to sync the partial file, but retries again and again (as long as the file keeps changing) until it reaches the retry limit (the default is 3).
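
     If the default of 3 is not enough, the limit can be raised per run; a sketch with placeholder paths (--retries is rclone's flag for whole-sync retries):

     rclone sync /mnt/user/sharename cloudname:backup/sharename --retries 10
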
  3. What is your target? Every 10 seconds? Then add 6 scripts and let them all start every minute with a different delay:

     Script 1: rclone sync
     Script 2: sleep 10; rclone sync
     Script 3: sleep 20; rclone sync

     and so on... If you need atomic execution (avoiding two rclone processes for the same folder) across all scripts, use this after "sleep" and before "rclone sync":

     # make script race condition safe
     if [[ -d "/tmp/atomic_rclone_sync" ]] || ! mkdir "/tmp/atomic_rclone_sync"; then
         exit 1
     fi
     trap 'rmdir "/tmp/atomic_rclone_sync"' EXIT

     (As long as "/tmp/atomic_rclone_sync" exists the script is not executed, and it is only deleted after the script has finished.) Regarding the "files in use" issue you could play around with the "--min-age" flag. As long as the file is being written, its modification time should keep updating, I think. A combined sketch follows below.
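
     Putting both pieces together, one of the six scripts could look like this sketch (remote name and paths are placeholders; the 30s value is a guess you would tune):

     #!/bin/bash
     # e.g. script #3 of 6, scheduled every minute, offset by 20 seconds
     sleep 20

     # make script race condition safe
     if [[ -d "/tmp/atomic_rclone_sync" ]] || ! mkdir "/tmp/atomic_rclone_sync"; then
         exit 1
     fi
     trap 'rmdir "/tmp/atomic_rclone_sync"' EXIT

     # skip files modified within the last 30 seconds ("files in use")
     rclone sync /mnt/user/sharename cloudname:backup/sharename --min-age 30s
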
  4. Ok, understood, but are there really so many people who do not sync their cloud data with their NAS? Brave ^^
  5. I do not really understand why so many people try to mount their cloud storage. After you have added the cloud through rclone, you can sync to it through rclone by its name, for example:

     rclone sync /mnt/user/sharename cloudname:backup/sharename

     And I really suggest doing that, because mounting a cloud is completely different from accessing it through rclone itself. For example, you cannot preserve the file modification time for WebDAV clouds if you use the mount path as the target instead of the rclone cloud name. You can test it yourself: mount the cloud, sync a subfolder with only a few files, and use the -vv flag to see all rclone actions. One time you sync to "cloudname:backup/sharename" and the second time you use "/mnt/disks/cloudname/backup/sharename". You will see that rclone returns completely different output, especially if you change files and/or overwrite them, or use flags that only work for specific clouds. (A comparison sketch follows below.)
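
     For reference, the two test runs would look something like this (all names are placeholders):

     # syncing to the rclone remote directly - rclone talks to the backend
     # and can set modification times where the cloud supports it
     rclone sync -vv /mnt/user/sharename cloudname:backup/sharename

     # syncing to the mount path - rclone treats it like a local filesystem
     rclone sync -vv /mnt/user/sharename /mnt/disks/cloudname/backup/sharename
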
  6. Of course the Mover should produce a notification and provide a download link to the error logs, or offer an error overview, but it doesn't, so I think it could be something Fix Common Problems could handle.
  7. Feature request: After a while I changed the cache mode of some shares to "Prefer" or "Only". Much later I found out that the Mover does not move files that are in use at the moment, so some of the files that should have been cached were not. I think it would be a good idea to let "Fix Common Problems" check whether a share with such a cache mode still has files left on the array. In my case I needed to disable the VM/Docker services and invoke the Mover; only after that were the corresponding files moved to the cache. (A detection sketch follows below.)
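
     A rough sketch of such a check, assuming the share settings live in /boot/config/shares/<name>.cfg with a shareUseCache="..." line and the array disks are mounted at /mnt/disk* (worth verifying on your system):

     #!/bin/bash
     # warn about "only"/"prefer" shares that still have files on array disks
     for cfg in /boot/config/shares/*.cfg; do
         share="$(basename "$cfg" .cfg)"
         mode="$(sed -n 's/^shareUseCache="\(.*\)"$/\1/p' "$cfg")"
         [[ "$mode" == "only" || "$mode" == "prefer" ]] || continue
         for disk in /mnt/disk[0-9]*; do
             [[ -d "$disk/$share" ]] && echo "WARNING: share '$share' (cache=$mode) still has files on $disk"
         done
     done
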
  8. I never tested it, but did you try it in GUI mode as described in this video at 1:20?
  9. I made a little test and it's not, so I created this script:

     #!/bin/bash
     # make script race condition safe
     # (${0//\//} strips the slashes from the script path so it can serve as a dir name)
     if [[ -d "/tmp/${0//\//}" ]] || ! mkdir "/tmp/${0//\//}"; then
         exit 1
     fi
     trap 'rmdir "/tmp/${0//\//}"' EXIT

     # script
     # ...

     By that, a dir is created in /tmp with the name of the script, and it is deleted after the script has been executed. As long as the script is running (so the dir has not been deleted yet), the script will not be executed again.
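
     An alternative sketch using flock(1), which has the nice property that the lock is released automatically even if the script is killed and the trap never fires:

     #!/bin/bash
     # open a lock file on file descriptor 200 and try to take an exclusive lock
     exec 200>"/tmp/${0//\//}.lock"
     flock -n 200 || exit 1
     # script
     # ...
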
  10. No, this works. You can see it in this video at 2:16
  11. It should, but maybe something goes wrong inside your script or something else causes a crash. Maybe it's better to set the cron interval to "every hour". But you need to think about what happens if your script is running multiple times. If this is a problem, you need to build a condition to be sure it's running only once.
  12. So it works as expected. AutoHotkey is only active on your host PC, and therefore it cannot access anything in the clipboard of the VM.
  13. Although I built a UEFI ISO, I'm still not successful in running the setup on every created VM. And now I stumbled upon this: https://support.microsoft.com/en-us/help/2828074/windows-7-setup-hangs-at-starting-windows-on-surface-pro

     "Symptoms: If you attempt to install the 64-bit version of Windows 7 on a Surface Pro or other UEFI-based computer, the Setup may hang at the 'Starting Windows' screen and the Setup process may not complete. Cause: The computer does not support legacy BIOS interrupt 10 (INT 10H)."

     And this manual of OVMF: https://access.redhat.com/node/1434903/40/0

     "15.9.2 Secondary Video Service: Int10h (VBE) Shim

     When QemuVideoDxe binds the first Standard VGA or QXL VGA device, and there is no real VGA BIOS present in the C to F segments (which could originate from a legacy PCI option ROM -- refer to Compatibility Support Module (CSM)), then QemuVideoDxe installs a minimal, "fake" VGA BIOS -- an Int10h (VBE) "shim". The shim is implemented in 16-bit assembly in "OvmfPkg/QemuVideoDxe/VbeShim.asm". The "VbeShim.sh" shell script assembles it and formats it as a C array ("VbeShim.h") with the help of the "nasm" utility. The driver's InstallVbeShim() function copies the shim in place (the C segment), and fills in the VBE Info and VBE Mode Info structures. The real-mode 10h interrupt vector is pointed to the shim's handler.

     The shim is (correctly) irrelevant and invisible for all UEFI operating systems we know about -- except Windows Server 2008 R2 and other Windows operating systems in that family. Namely, the Windows 2008 R2 SP1 (and Windows 7) UEFI guest's default video driver dereferences the real mode Int10h vector, loads the pointed-to handler code, and executes what it thinks to be VGA BIOS services in an internal real-mode emulator. Consequently, video mode switching used not to work in Windows 2008 R2 SP1 when it ran on the "pure UEFI" build of OVMF, making the guest uninstallable. Hence the (otherwise optional, non-default) Compatibility Support Module (CSM) ended up a requirement for running such guests. The hard dependency on the sophisticated SeaBIOS CSM and the complex supporting edk2 infrastructure, for enabling this family of guests, was considered sub-optimal by some members of the upstream community, and was certainly considered a serious maintenance disadvantage for Red Hat Enterprise Linux 7.1 hosts. Thus, the shim has been collaboratively developed for the Windows 7 / Windows Server 2008 R2 family.

     The shim provides a real stdvga / QXL implementation for the few services that are in fact necessary for the Windows 2008 R2 SP1 (and Windows 7) UEFI guest, plus some "fakes" that the guest invokes but whose effect is not important. The only supported mode is 1024x768x32, which is enough to install the guest and then upgrade its video driver to the full-featured QXL XDDM one.

     The C segment is not present in the UEFI memory map prepared by OVMF. Memory space that would cover it is not added (either in PEI, in the form of memory resource descriptor HOBs, or in DXE, via gDS->AddMemorySpace()). This way the handler body is invisible to all other UEFI guests, and the rest of edk2. The Int10h real-mode IVT entry is covered with a Boot Services Code page, making that too inaccessible to the rest of edk2. Due to the allocation type, UEFI guest OSes different from the Windows Server 2008 family can reclaim the page at zero. (The Windows 2008 family accesses that page regardless of the allocation type.)"

     If I understand it correctly, the default OVMF installation does not support INT10H in the way Windows 7 needs it. But as I said, I still succeeded at random in installing Windows 7 in UEFI mode. I will create a little video for that.
  14. After testing several VM settings and wondering why SeaBIOS works and OVMF does not, I found out that I had forgotten to create a UEFI-compatible W7 ISO. 🙈 The funny thing is that sometimes I was able to install the non-UEFI version (with the UEFI-only OVMF BIOS) after I created the VM multiple times and started them in parallel. It seems it never boots with the first port (VNC:5900), but subsequently started instances (VNC:5901, VNC:5902, etc.) have a chance to work (even with multiple cores selected). But most of the time it froze while booting the setup (showing a black screen) or rebooted after the "windows is loading files" progress bar.
  15. I have not tested increasing the number of cores so far, but I had no problem installing W7 Ultimate with 2 cores after I changed the machine type to Q35-4.2 and the BIOS to SeaBIOS. The main factor was SeaBIOS: with OVMF it constantly rebooted or froze while showing the Windows logo (even with a single-core setup). When the installation has finished, I will try to increase the cores and attempt a second installation with i440fx and 4 cores. Feedback follows...
  16. That does not help, as Nextcloud/Owncloud is not able to update the modification date. The only possibility is to re-upload everything. WebDAV sucks.
  17. I'd like to auto-backup all shares that have the cache status "Only" or "Prefer". Is there a method available to get this information by passing a user share name or path? The only idea for a dirty hack I had is to search for all cache root dirs on all disks: if a share dir does not exist on the array disks, its cache status could be "Only" or "Prefer". But this works only if such a share has never filled up the cache (otherwise the dir gets created on the array). (A sketch of a cleaner approach follows below.)
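
     A cleaner way, assuming Unraid stores the share settings in /boot/config/shares/<name>.cfg with a shareUseCache="..." line (worth verifying on your system), would be a hypothetical helper like this:

     #!/bin/bash
     # print the cache mode ("only", "prefer", "yes", "no") of a user share
     get_cache_mode() {
         local cfg="/boot/config/shares/$1.cfg"
         [[ -f "$cfg" ]] || return 1
         sed -n 's/^shareUseCache="\(.*\)"$/\1/p' "$cfg"
     }

     # usage ("Movies" is a placeholder share name):
     mode="$(get_cache_mode "Movies")"
     [[ "$mode" == "only" || "$mode" == "prefer" ]] && echo "backup Movies"
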
  18. Ok, thanks. I'll try rsync. I hope it works: https://serverfault.com/a/450856/44086
  19. Any chance to get this version through the rclone beta channel?
  20. A single download through FTP from the SSD cache: if the file is located in RAM, it boosts up to 1 GB/s.
  21. I never used a version lower than 6.8.3 so I'm not able to compare, but the speed through SMB is super slow compared to NFS or FTP.

     @bonienl You made two tests, and in the first one you were able to download from one HDD with 205 MB/s. Wow, I never get more than 110 MB/s through SMB after enabling the parity disk! Do you have one? Are you sure you used the HDDs? A re-downloaded file comes from the SMB cache (RAM), but then 205 MB/s would be really slow (re-downloading a file from my Unraid server hits 700 MB/s through SMB).

     In your second test you reached 760 MB/s on your RAID10 SSD pool, and you think this value is good? With your setup you should easily reach more than 1 GB/s! With my old Synology NAS I downloaded at 1 GB/s without problems (depending on the physical location of the data on the HDD platter), especially if the file was cached in RAM. This review shows the performance of my old NAS, and it does not use SSDs at all!

     I tested my SSD cache (a single NVMe) on my Unraid server and it's really slow (compared to the 10G setup and the constant SSD performance one would expect):

     FTP Download:

     FTP Upload:

     A 1TB 970 Evo should easily hit the 10G limits for up- and downloads. I think there is something really wrong with Unraid. And SMB is even worse. (A quick way to rule out the network is sketched below.)
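
     To rule out the network itself as the bottleneck, a raw TCP test helps; a sketch assuming iperf3 is available on both machines ("tower" is a placeholder host name):

     # on the Unraid server:
     iperf3 -s
     # on the client, 4 parallel streams:
     iperf3 -c tower -P 4
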
  22. Ok, last test for today. I now enabled NFS in Windows 10 as explained here and downloaded from 3 disks (the 4th disk was busy because of UnBalance). As you can see, I was able to hit 150 MB/s per drive without problems. Conclusion: something is really wrong with SMB in Unraid 6.8.3. (A reference mount command follows below.)
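
     For reference, mounting an Unraid export with the Windows 10 NFS client looks roughly like this ("tower" and the share path are placeholders; the NFS client feature must already be enabled):

     :: Windows command prompt
     mount -o nolock \\tower\mnt\user\sharename Z:
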
  23. I checked the smb.conf and it contains a wrong setting:

     [global]
        # configurable identification
        include = /etc/samba/smb-names.conf

        # log stuff only to syslog
        log level = 0
        syslog = 0
        syslog only = Yes

        # we don't do printers
        show add printer wizard = No
        disable spoolss = Yes
        load printers = No
        printing = bsd
        printcap name = /dev/null

        # misc.
        invalid users = root
        unix extensions = No
        wide links = Yes
        use sendfile = Yes
        aio read size = 0
        aio write size = 4096
        allocation roundup size = 4096

        # ease upgrades from Samba 3.6
        acl allow execute always = Yes

        # permit NTLMv1 authentication
        ntlm auth = Yes

        # hook for user-defined samba config
        include = /boot/config/smb-extra.conf

        # auto-configured shares
        include = /etc/samba/smb-shares.conf

     "aio write size" cannot be 4096. The only valid values are 0 and 1: https://www.samba.org/samba/docs/current/man-html/smb.conf.5.html But I tested both and it did not change anything. I tested this solution without success, too. Other Samba settings I tested:

     # manually added
     server multi channel support = yes
     #block size = 4096
     #write cache size = 2097152
     #min receivefile size = 16384
     #getwd cache = yes
     #socket options = IPTOS_LOWDELAY TCP_NODELAY SO_RCVBUF=65536 SO_SNDBUF=65536
     #sync always = yes
     #strict sync = yes
     #smb encrypt = off

     "server multi channel support" is still active because it enables multiple TCP/IP connections.

     Side note: After downloading so many files from different disks I found out that my RAM has a maximum SMB transfer speed of 700 MB/s. But if I download from multiple disks the transfer speed is capped at around 110 MB/s (and falls under 50 MB/s after it starts reading from disk). All CPU cores show extremely high usage (90-100%) if two simultaneous SMB transfers are running. Even one transfer produces a lot of CPU load (60-80% on all cores). Now I'll try to set up NFS in Windows 10.
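
     To verify which values smbd actually ends up using (including built-in defaults), testparm can dump the effective configuration; a sketch:

     # -s skips the prompt, -v includes defaulted parameters
     testparm -sv 2>/dev/null | grep -E 'aio (read|write) size|server multi channel support'
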