
About mgutt

  • Rank
    Advanced Member


  1. Although I built a UEFI ISO, I'm still not able to run the setup successfully on every VM I create. And now I stumbled upon this: https://support.microsoft.com/en-us/help/2828074/windows-7-setup-hangs-at-starting-windows-on-surface-pro

"Symptoms: If you attempt to install the 64-bit version of Windows 7 on a Surface Pro or other UEFI-based computer, the Setup may hang at the 'Starting Windows' screen and the Setup process may not complete. Cause: The computer does not support legacy BIOS interrupt 10 (INT 10H)."

And this manual of OVMF: https://access.redhat.com/node/1434903/40/0

"15.9.2 Secondary Video Service: Int10h (VBE) Shim

When QemuVideoDxe binds the first Standard VGA or QXL VGA device, and there is no real VGA BIOS present in the C to F segments (which could originate from a legacy PCI option ROM -- refer to Compatibility Support Module (CSM)), then QemuVideoDxe installs a minimal, 'fake' VGA BIOS -- an Int10h (VBE) 'shim'. The shim is implemented in 16-bit assembly in 'OvmfPkg/QemuVideoDxe/VbeShim.asm'. The 'VbeShim.sh' shell script assembles it and formats it as a C array ('VbeShim.h') with the help of the 'nasm' utility. The driver's InstallVbeShim() function copies the shim in place (the C segment), and fills in the VBE Info and VBE Mode Info structures. The real-mode 10h interrupt vector is pointed to the shim's handler.

The shim is (correctly) irrelevant and invisible for all UEFI operating systems we know about -- except Windows Server 2008 R2 and other Windows operating systems in that family. Namely, the Windows 2008 R2 SP1 (and Windows 7) UEFI guest's default video driver dereferences the real mode Int10h vector, loads the pointed-to handler code, and executes what it thinks to be VGA BIOS services in an internal real-mode emulator. Consequently, video mode switching used not to work in Windows 2008 R2 SP1 when it ran on the 'pure UEFI' build of OVMF, making the guest uninstallable.

Hence the (otherwise optional, non-default) Compatibility Support Module (CSM) ended up a requirement for running such guests. The hard dependency on the sophisticated SeaBIOS CSM and the complex supporting edk2 infrastructure, for enabling this family of guests, was considered sub-optimal by some members of the upstream community, and was certainly considered a serious maintenance disadvantage for Red Hat Enterprise Linux 7.1 hosts. Thus, the shim has been collaboratively developed for the Windows 7 / Windows Server 2008 R2 family.

The shim provides a real stdvga / QXL implementation for the few services that are in fact necessary for the Windows 2008 R2 SP1 (and Windows 7) UEFI guest, plus some 'fakes' that the guest invokes but whose effect is not important. The only supported mode is 1024x768x32, which is enough to install the guest and then upgrade its video driver to the full-featured QXL XDDM one.

The C segment is not present in the UEFI memory map prepared by OVMF. Memory space that would cover it is not added (either in PEI, in the form of memory resource descriptor HOBs, or in DXE, via gDS->AddMemorySpace()). This way the handler body is invisible to all other UEFI guests, and the rest of edk2. The Int10h real-mode IVT entry is covered with a Boot Services Code page, making that too inaccessible to the rest of edk2. Due to the allocation type, UEFI guest OSes different from the Windows Server 2008 family can reclaim the page at zero. (The Windows 2008 family accesses that page regardless of the allocation type.)"

If I understand it correctly, the default OVMF installation does not support INT 10H in the way Windows 7 needs it. But as I said, I still managed to install Windows 7 in UEFI mode occasionally. I will create a little video showing that.
  2. After testing several VM settings and wondering why SeaBIOS works and OVMF doesn't, I found out that I had forgotten to create a UEFI-compatible W7 ISO. 🙈 The funny part: sometimes I was able to install the non-UEFI version (with the UEFI-only OVMF BIOS) after creating the VM multiple times and starting them in parallel. It seems it never boots on the first port (VNC:5900), but subsequently started images (VNC:5901, VNC:5902, etc.) have a chance to work (even with multiple cores selected). Most of the time, though, it froze while booting the setup (showing a black screen) or rebooted after the "Windows is loading files" progress bar.
  3. I have not tested increasing the number of cores so far, but I had no problem installing W7 Ultimate with 2 cores after I changed the machine type to Q35-4.2 and the BIOS to SeaBIOS. The decisive change was SeaBIOS: with OVMF it constantly rebooted or froze while showing the Windows logo (even with a single-core setup). Once the installation has finished, I will try increasing the cores and attempt a second installation with i440fx and 4 cores. Feedback follows...
  4. Does not help, as Nextcloud/ownCloud is not able to update the modification date. The only possibility is to re-upload everything. WebDAV sucks.
  5. I'd like to automatically back up all shares that have the cache status "Only" or "Prefer". Is there a method to get this information by passing a user share name or path? The only dirty hack I could think of is to search all disks for each share's root directory: if the directory exists only on the cache and not on any array disk, its cache status could be "Only" or "Prefer". But this only works as long as a "Prefer" share has never overflowed the cache (otherwise its directory gets created on the disk array, too).
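For what it's worth, a sketch of a less dirty approach: Unraid appears to store per-share settings under /boot/config/shares/<name>.cfg, including a shareUseCache line -- worth verifying on your own system before relying on it. The directory parameter and function names below are just for illustration:

```shell
# Assumption: each user share has /boot/config/shares/<name>.cfg containing
# a line like: shareUseCache="prefer"
# $1 = directory holding the .cfg files, $2 = share name.
share_cache_mode() {
    cfg="$1/$2.cfg"
    [ -f "$cfg" ] || return 1
    sed -n 's/^shareUseCache="\(.*\)".*/\1/p' "$cfg"
}

# Print every share in $1 whose cache status is "only" or "prefer".
list_cache_shares() {
    for cfg in "$1"/*.cfg; do
        [ -f "$cfg" ] || continue
        name=$(basename "$cfg" .cfg)
        case "$(share_cache_mode "$1" "$name")" in
            only|prefer) echo "$name" ;;
        esac
    done
}
```

A backup script could then loop over `list_cache_shares /boot/config/shares` and rsync each listed share.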
  6. Ok, thanks. I'll try rsync. I hope it works: https://serverfault.com/a/450856/44086
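A minimal sketch of the rsync approach from that answer, under the assumption that the copies already exist and only their timestamps are wrong: with -t, --size-only and --existing, rsync updates the modification times of same-size files already present on the target without re-transferring their content. The function name and paths are placeholders.

```shell
# fix_times SRC/ DST/
# -r          recurse into directories
# -t          carry over modification times
# --size-only treat same-size files as unchanged (content is not compared)
# --existing  never create new files on the target
# Net effect: matching files get their mtime corrected, nothing is copied.
fix_times() {
    rsync -r -t --size-only --existing "$1" "$2"
}
```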
  7. Any chance to get this version through the rclone beta channel?
  8. Single download through FTP from the SSD cache: if the file is located in RAM, it boosts up to 1 GB/s.
  9. I never used a version lower than 6.8.3, so I'm not able to compare, but the speed through SMB is super slow compared to NFS or FTP. @bonienl You made two tests, and in the first one you were able to download from one HDD at 205 MB/s. Wow, I never exceed 110 MB/s through SMB since enabling the parity disk! Do you have one? Are you sure you used the HDDs? A re-downloaded file comes from the SMB cache (RAM), but then 205 MB/s would be really slow (re-downloading a file from my Unraid server hits 700 MB/s through SMB). In your second test you reached 760 MB/s on your RAID10 SSD pool, and you think this value is good? With your setup you should easily reach more than 1 GB/s! With my old Synology NAS I downloaded at 1 GB/s without problems (depending on the physical location of the data on the HDD platters), especially if the file was cached in RAM. This review shows the performance of my old NAS, and it does not use SSDs at all! I tested my SSD cache (a single NVMe) on my Unraid server and it's really slow (compared to the 10G setup and the otherwise constant SSD performance): FTP download: FTP upload: A 1 TB 970 Evo should easily hit the 10G limits for up- and downloads. I think there is something really wrong with Unraid. And SMB is even worse.
  10. Ok, last test for today. I now enabled NFS in Windows 10 as explained here and downloaded from 3 disks (the 4th disk was busy with unBALANCE). As you can see, I was able to hit 150 MB/s per drive without problems. Conclusion: something is really wrong with SMB in Unraid 6.8.3.
  11. I checked the smb.conf and it contains a wrong setting:

    [global]
    # configurable identification
    include = /etc/samba/smb-names.conf
    # log stuff only to syslog
    log level = 0
    syslog = 0
    syslog only = Yes
    # we don't do printers
    show add printer wizard = No
    disable spoolss = Yes
    load printers = No
    printing = bsd
    printcap name = /dev/null
    # misc.
    invalid users = root
    unix extensions = No
    wide links = Yes
    use sendfile = Yes
    aio read size = 0
    aio write size = 4096
    allocation roundup size = 4096
    # ease upgrades from Samba 3.6
    acl allow execute always = Yes
    # permit NTLMv1 authentication
    ntlm auth = Yes
    # hook for user-defined samba config
    include = /boot/config/smb-extra.conf
    # auto-configured shares
    include = /etc/samba/smb-shares.conf

aio write size cannot be 4096; the only valid values are 0 and 1: https://www.samba.org/samba/docs/current/man-html/smb.conf.5.html But I tested both and it did not change anything. I tested this solution without success, too. Other Samba settings I tested:

    # manually added
    server multi channel support = yes
    #block size = 4096
    #write cache size = 2097152
    #min receivefile size = 16384
    #getwd cache = yes
    #socket options = IPTOS_LOWDELAY TCP_NODELAY SO_RCVBUF=65536 SO_SNDBUF=65536
    #sync always = yes
    #strict sync = yes
    #smb encrypt = off

server multi channel support is still active because it enables multiple TCP/IP connections. Side note: after downloading so many files from different disks, I found out that my RAM has a maximum SMB transfer speed of 700 MB/s. But if I download from multiple disks, the transfer speed is capped at around 110 MB/s (and falls below 50 MB/s once it starts reading from disk). All CPU cores show extremely high usage (90-100%) when two simultaneous SMB transfers are running. Even a single transfer produces a lot of CPU load (60-80% on all cores). Now I'll try to set up NFS in Windows 10.
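If one of the tested overrides turns out to help, it belongs in /boot/config/smb-extra.conf, the user-defined include that smb.conf already pulls in. A minimal sketch writing the multi-channel setting from the list above; the helper function and the configurable target path are only for illustration:

```shell
# Write a hypothetical user override file for Samba.
# $1 = target path (on Unraid this would be /boot/config/smb-extra.conf).
write_smb_extra() {
    cat > "$1" <<'EOF'
[global]
    # allow multiple parallel TCP connections per SMB 3 session
    server multi channel support = yes
EOF
}
# Afterwards, make Samba re-read its configuration, e.g.:
#   smbcontrol all reload-config
```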
  12. No need to test iperf. I enabled the FTP server, opened FileZilla, set parallel connections to 5, and chose 5 huge files from 5 different disks. By that I was able to reach 900 MB/s in total: Similar test, this time I started multiple downloads through Windows Explorer (SMB): This time my wife was watching a movie through Plex, so results could be a little slower than possible, but FileZilla was still able to download at >700 MB/s, so it shouldn't make a huge difference. So what's up with SMB? I checked the used SMB version through Windows PowerShell and it returns 3.1.1.
  13. Did you solve the issue? My transfer speed isn't as low as yours, but I think it should be better. What I did:

a) I found a hint in this thread on how to install iostat, so I ran the following:

    cd /mnt/user/Marc
    wget http://slackware.cs.utah.edu/pub/slackware/slackware64-14.2/slackware64/ap/sysstat-
    upgradepkg --install-new sysstat-

b) Started iostat as follows:

    watch -t -n 0.1 iostat -d -t -y 5 1

c) I downloaded through Windows a huge file that is located on my SSD cache, and as we can see it is read from the NVMe as expected:

d) Then I downloaded a smaller file to test the RAM cache. The first download was delivered through the NVMe:

e) The second transfer shows nothing (the file was served from RAM, the SMB RAM cache):

This leaves some questions:
1.) Why is the SSD read speed 80 MB/s slower than the RAM, although the drive is able to transfer much more than 1 GB/s?
2.) Why is the maximum around 500 MB/s?

Note: My PC's network status, the Unraid dashboard, and my switch all show 10G as the link speed.
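As an alternative to installing sysstat, per-disk read throughput can be estimated directly from /proc/diskstats by sampling it twice. A sketch, assuming a Linux host; the device name and helper names are illustrative:

```shell
# In /proc/diskstats, field 3 is the device name and field 6 the number of
# sectors read (512-byte units) since boot.
# $1 = device name (e.g. nvme0n1), $2 = optional stats file (for testing).
sectors_read() {
    awk -v dev="$1" '$3 == dev { print $6 }' "${2:-/proc/diskstats}"
}

# Rough read throughput of a device, averaged over $2 seconds (default 5).
read_mbps() {
    s1=$(sectors_read "$1")
    sleep "${2:-5}"
    s2=$(sectors_read "$1")
    awk -v a="$s1" -v b="$s2" -v t="${2:-5}" \
        'BEGIN { printf "%.1f MB/s\n", (b - a) * 512 / t / 1000000 }'
}
```

Usage: `read_mbps nvme0n1` while a download is running, repeated per disk.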
  14. I did the same now with Lubuntu and Google Chrome Remote Desktop. Here is a little tutorial (in German), too: