Everything posted by dAigo

  1. Client side makes sense in a larger environment, where some clients/servers need SMB3 and others can't use it. Haven't tried it, but I guess it's good to mention that there may be other workarounds.
  2. You are right, the refreshing issue has nothing to do with the mounting stuff. I also have it from time to time on the local disk of my work PC... not much to relate to the 6.2 beta. So to be precise, his (and my) issue with 6.2 is probably the switch to Samba 4+ or raising the supported SMB protocol above 2.02. Since then (~2013), there have been issues if you mount VHDs through Windows directly, because they use a specific feature set that does not work/is not there. It seems that mounting ISOs in Win10 is also affected. There were some patches to Win8/2012r2 and some to Samba, but even today there are issues with SMB3 in Samba. Win7 can't speak SMB3 and does not mount ISOs, so it's not affected, and 3rd-party tools probably won't check for specific features. So, for those who simply "double click" a DVD/Blu-ray ISO image (one of the good additions in Win10) on unRAID and wait for Windows autostart to kick in, it's an issue. And VHDs over SMB are a good way to utilize the array if an application can't/won't use shares but insists on a "local disk". We have no iSCSI in unRAID after all. It's an issue with a workaround, so it could be mentioned in the patch notes until a fix is released.
  3. It's not "sucking up" power. This bug is very misleading in the way it works. Well, it may not slow down the system, but it uses a lot of power and heats up the CPU... Having a CPU cooler that runs at full speed all the time may very well be disturbing, and 30-50 watts more could be noticeable on your power bill in the long run. I don't know why, but out of my 4 Windows VMs, only 2 show the effect (both SeaBIOS and Windows 10 x64 1511). The 2 other VMs (both OVMF, one Server 2012 R2 and one Windows 10 x64 1507) work fine. I converted one of the affected SeaBIOS VMs to OVMF, but the issue remained. So that leaves Win10 x64 1511 or some driver/guest agents as the culprit. Investigation ongoing... Btw. adding/removing GPU passthrough did not make any difference, so I don't think it's the main issue. I am guessing CPU interrupts or context switches; both seem to really go up (according to vmstat) once I start a video. I think qemu got a lot of multi-threaded changes under the hood, maybe those threads are running wild in some cases. I'll collect some data from vmstat in 6.2 and compare it to 6.1 when I find the time for a rollback.
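     Comparing the two versions should not need more than something like this (file names are just examples; vmstat is already on unRAID):
     # log interrupts ("in") and context switches ("cs") once per second for 5 minutes while a video plays
     vmstat -n 1 300 > /boot/vmstat-62-video.log
     # repeat after rolling back to 6.1, then compare the "in" and "cs" columns of both logs
     vmstat -n 1 300 > /boot/vmstat-61-video.log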
  4. Seems to be a very old story of Linux vs. Microsoft... It has been a known issue since Samba 4.0 (2013...). If you add the following to "Settings" -> "SMB" -> "SMB Extras" in unRAID and restart the array, VHD/ISO mounting should work. Unless you really need SMB3 features, it should do the trick. max protocol = SMB2_02 As you can see, patches are still being applied... https://bugzilla.samba.org/show_bug.cgi?id=10159 The last fix is quite new, maybe not yet released and ready to use.
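     If you want to double-check that the override is actually active after restarting the array, something like this should work from the console (assuming testparm reads unRAID's generated smb.conf as usual):
     # show the effective Samba setting; it should report SMB2_02
     testparm -s 2>/dev/null | grep -i "max protocol"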
  5. Yes, same issue here. Since Battle.net and Origin did not work with shares/symlinks, I used VHDs. Can't mount them right now through the GUI disk management. Interestingly, the same applies to ISO images (DVD/Blu-ray). Works fine from the desktop, but I get an error through the share. It seems that's a common issue since Win8, because MS changed the way it works (it's running as "SYSTEM"). Probably some permission issues, maybe Limetech changed something.
  6. NVMe support seems solid, temps are showing, but no SMART info. It seems that smartmontools added some form of NVMe support yesterday. Any chance to add that while we are still in beta? https://www.smartmontools.org/
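     If it makes it into a build, the query itself should be trivial; roughly like this (device name from my system, flags as documented by smartmontools):
     # read the NVMe health/SMART data once a smartmontools version with NVMe support is included
     smartctl -d nvme -a /dev/nvme0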
  7. While the 6.2 beta has some issues for me, NVMe support seems solid. Currently running my Intel as a cache drive with VMs. No benchmarks yet, because 6.2 is too buggy. And yes, temps are showing (35°C) but no SMART, which is not unRAID's fault; NVMe is not yet supported by smartmontools... And regarding temps of the 950: it seems at ~70°C it starts throttling, but some thermal pads should do the trick... http://linipc.com/blogs/news/65282629-samsung-950-pro-review-thermal-throttling-solved
  8. Still an active issue on my end. The attached diagnostic was collected while a video was running and the host showed 99% on all cores that are attached to the VM. Guest load was around 4-10% on any core... Your wording almost makes it sound as if you suspect an issue with the "reported load". According to my CPU temps (see chart) and power usage measured at the wall, the host load is accurate. During the 10 minute playback, the CPU temp was at 50°C. At around 5 minutes into the test, I started a CPU stress test. Guest CPU went to 90%+ but the temp stayed at ~50°C. 50°C is around the max, because I removed all overclocking and the CPU is watercooled. And power usage goes from ~100 watts to ~150 watts. I'll stay on 6.2 for this week; if you need additional info or want me to try anything, feel free to ask. Rolling back to 6.1 is a pain, because I am testing the NVMe support for cache drives... Rolling back means a lot of files being moved. Just a quick update: I can reproduce the problem on another VM that is also Win10 (1511), also SeaBIOS, but has NO GPU passthrough... However, I can't reproduce it on VMs that run OVMF and Win10 (1507)/Server 2012 R2, regardless of passthrough or not. At least for me, it seems SeaBIOS and/or Win10 (1511) has something to do with it. I could add another diag. with the OVMF VMs running, if the first one didn't help.
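     If it helps, this is roughly how I look at which qemu threads are busy on the host while the video runs (assumes only one VM is running and the process is named qemu-system-x86_64):
     # one batch snapshot of the per-thread CPU usage of the qemu process
     top -b -H -n 1 -p $(pidof qemu-system-x86_64) | head -n 40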
  9. (Duplicate of the post above, with the diagnostics attached: unraid-diagnostics-20160320-1145.zip)
  10. I rolled back to 6.1.9 and everything is normal again. I know that's not helping you, but if our problems are related, maybe the info helps. With 6.2, I had to use MSI-X, otherwise sound started stuttering immediately. With 6.1.9, MSI-X made no difference. I had 1-2 issues a month. They said they used some patch to enable Hyper-V settings with NVIDIA passthrough. Maybe that patch is not quite stable.
  11. Same here. I am pretty sure it started today after upgrading to 6.2 beta 18. I am monitoring my system through SNMP and would have noticed something like that. I just got 20 mails due to the high load while watching a stream on twitch.tv through "livestreamer" (also HTTP), so it's not just Chrome... A 1080p h264 movie looks similar, but not as bad: VM 10-20%, host 30-50%. And it's not just a wrong measurement, CPU temps were also very high (65°C+ on water). No other VMs/Docker containers were running, and only the 7 cores assigned to this VM were under high load. Your uptime is low as well, did you upgrade too? I can run 3DMark with normal results, so I also think it's video/decoding related...
  12. Is the existence of a cache disk checked on every start or just the first start after the upgrade? I assume only the first, hence the need to manually add them later if a cache disk gets added. I would like to test the NVMe support; as of now, my cache is a SATA SSD and the NVMe disk is manually mounted for VMs/games outside of the array. Would the addition of the "system" share and its creation on the first array start affect the procedure of removing the current cache drive to add the new cache drive? Last time I tried, preClear didn't work with NVMe disks; would it be an issue to add the NVMe disk as an unformatted/uncleared disk as the only cache device prior to the first start of the array? Or should I just remove the cache disk before the update, add the NVMe cache after the first start and create the "system" share manually? What would be a good/easy/safe way to change the cache disk after the system share got created? Can I move all files from the "system" share to a disk in the array (as long as all VMs/Docker containers are shut down), or would it still be in use by the "system" (which the name somewhat implies)? I would assume: change all shares (including "system") to no cache -> stop all VMs/Docker containers that may use the cache -> run mover (and move anything mover ignored manually to the array) -> stop array -> remove old cache -> add unformatted NVMe disk -> start array -> wait for the cache disk to be formatted -> move everything back to the cache... -> start Docker/VMs. Maybe I am reading too much into the system share, but to me "system" implies something important, in which case it would be strange to put it on the unprotected cache...
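     Whatever the exact order ends up being, I would probably sanity-check the cache before pulling it, roughly like this (standard unRAID paths assumed, and only if lsof is available):
     # anything left on the cache after mover ran?
     find /mnt/cache -type f | head
     # is anything still holding files open on it?
     lsof +D /mnt/cache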
  13. Yeah, the whole package with cache and array... I put that info in the opening post so that new people can find it. But 6.2 is still in beta and it seems it has a ton of changes. I would have no problem using that beta (on a new system), unRAID betas are usually really stable. But from the patch notes it seems there are a lot of things one has to consider depending on the current system. That being said, I'll definitely go and upgrade ASAP, but not without careful planning. Whatever the outcome, it's really nice how fast we got from "driver loaded" (Nov '15) -> "it's on our list" (Dec '15) -> "we talked to someone at CES" -> "we ordered a test device" (Jan '16) -> "beta support" (Mar '16)
  14. I am not sure that I fully understand your problem, but if you want to "reroute" audio output to a different audio device, you can try VB-Audio Virtual Cable.
  15. Wouldn't it be much better to include the ID or the link to the result? They have a very nice compare feature, so it would be easy to find differences. Example: Fire Strike 1.1 (ID: 8408408): score 16883, graphics 21048, physics 12779, combined 8432. That was from back in August '15; I didn't run any tests since then. Physical (c&p from my sig, reverted to around that time): unRAID 6.1 | MB: ASUS Z170 PRO GAMING | CPU: i7-6700K (watercooled) | RAM: 16GB DDR4 (2x8GB) | Parity: WD RED 6TB | Data: WD RED (2x 4TB + 2x 3TB) | Cache: Samsung 830 (256GB) | GPU: NVIDIA GTX 980 TI (watercooled). Virtual: OVMF, i440fx-2.3, 7 cores, EVGA 980 TI, 8GB RAM, Win10 x64 (10240), 60GB RAW/VirtIO image on cache, no other VMs/Docker containers running. The 980 Ti is watercooled, but with the stock BIOS, so there is some room to improve. I had to switch to SeaBIOS with Windows 10 1511, because it broke my GPU passthrough. I think I'll do some tests again, it's for science after all. But it seems OVMF gets a lot more love in unRAID 6.2. I am planning to go back to OVMF and try a custom BIOS for the GPU to remove the power limits. I don't have the .xml file from back then, but maybe anybody who posts their result could add their .xml for reference.
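     Grabbing the .xml is a one-liner on the unRAID console, for example (the VM name is obviously just a placeholder):
     # dump the current libvirt definition of a VM so it can be attached to a post
     virsh dumpxml "Windows 10" > /boot/windows10.xml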
  16. Soo, since I already installed... I looked into the OVMF/GPU passthrough issues I had, and it seems a newer OVMF version resolves the issue. Got it already compiled from HERE. Just extract the "OVMF-pure-efi.fd" from the "edk2.git-ovmf-x64-0-20160209.b1474.g7d0f92e.noarch.rpm" file, put it somewhere qemu can access (array/flash drive/etc.) and change the path of the OVMF file to it... I guess it would be nice if a newer version of OVMF made it into 6.2. So, to stay on topic, I have no problem passing through my PCI NVMe disk... I think I will move everything back tomorrow, the normal SSD is so slow. Unless someone needs to get something tested...
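     For reference, this is roughly how the firmware image can be pulled out of the rpm on a Linux box (the path inside the archive may differ between builds, and the target folder is just an example of somewhere qemu can reach):
     # unpack the rpm without installing it, then copy the OVMF image out
     rpm2cpio edk2.git-ovmf-x64-0-20160209.b1474.g7d0f92e.noarch.rpm | cpio -idmv
     cp ./usr/share/edk2.git/ovmf-x64/OVMF-pure-efi.fd /mnt/user/isos/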
  17. Well, "not supporting" may be right, I don't know. But it definitely works. And to make sure, I just copied all the stuff from my NVMe disk to the array and created a new VM: 1 core, no Hyper-V, OVMF and PCI passthrough for my disk. Win10 x64 detected the drive without any additional drivers. After installation it boots (although I have to add the boot option in the EFI shell). As you can see in the screenshot, Windows detects the disk correctly as an Intel NVMe SSD... However, that's probably the issue: I think as of now, Intel is one of the few NVMe vendors whose driver is built into the installation media. That kind of compatibility is why I generally recommend the more expensive and sometimes even a little bit "slower" Intel over Samsung. As a side note, pre-1511 everything works fine; with 1511, at least for me, GPU passthrough with OVMF is broken. Maybe there are solutions for that problem, but I switched back to SeaBIOS, because I don't need the EFI features right now. But passthrough of the NVMe device still works with 1511 and OVMF, it's just kind of useless without a GPU.
  18. Like that: Simpler / Easier PCI Device Pass Through for NON-GPUs. You don't need to tell your VM that this device is a disk, so remove the <disk ...> part. Win10 should see that PCI device and identify it like it does with any other PCI device. Of course, passthrough can always cause problems, depending on the hardware (IOMMU groups etc.). This way, it can use the NVMe drivers instead of virtio and therefore remove the driver/protocol overhead.
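     Before editing the XML, it is worth checking how the controller is grouped; something along these lines should do (my PCI address 04:00.0 is just an example):
     # find the NVMe controller and see what else sits in its IOMMU group
     lspci -nn | grep -i "non-volatile"
     ls /sys/bus/pci/devices/0000:04:00.0/iommu_group/devices/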
  19. You did not exactly pass through your PCI device. You just mapped your block device through a virtio disk device to your VM. qemu/kvm does not need to know that it is a disk you are passing through; that just adds overhead that NVMe is trying to remove. That way, you get pretty much native performance, because you remove any overhead that qemu/kvm may add. Your VM would directly access the PCI device. But as I wrote you via PM, you would probably need to use OVMF (EFI) instead of SeaBIOS, because I don't know if SeaBIOS can boot from PCI/NVMe.
  20. I will definitely try this when I get home. I just picked up an Intel 750 400GB to play with. I posted something about tweaks I tried to improve performance in THIS thread. In short: I use "cache=unsafe" and "x-data-plane=on".
      <domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
        ...
        ...
        <devices>
          <disk type='file' device='disk'>
            <driver name='qemu' type='raw' cache='unsafe' io='threads'/>
            <source file='/mnt/nvme/Rechner/Rechner_vDisk1.img'/>
            <backingStore/>
            <target dev='hda' bus='virtio'/>
            <boot order='1'/>
            <alias name='virtio-disk0'/>
            <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
          </disk>
          ...
          ...
        </devices>
        <qemu:commandline>
          <qemu:arg value='-set'/>
          <qemu:arg value='device.virtio-disk0.scsi=off'/>
          <qemu:arg value='-set'/>
          <qemu:arg value='device.virtio-disk0.config-wce=off'/>
          <qemu:arg value='-set'/>
          <qemu:arg value='device.virtio-disk0.x-data-plane=on'/>
        </qemu:commandline>
      </domain>
  21. First, I want to thank you for taking over, or at least stepping in while gfjardim is inactive; much appreciated. Second, I would like to renew my feature request for adding NVMe support to this plugin: original request & additional info. He said he would look into it, maybe that's the reason he is absent, still looking maybe. As I wrote, I suppose there would be some recoding/testing involved to resolve the missing /dev/disk/by-id issue, as it is used quite often in the code from what I can tell. Lime-Tech started to look into support for NVMe as cache devices, but even if/when that comes, you would not mix NVMe with normal SATA SSDs in a cache pool, so NVMe outside of the array will still be a valid option, I think. From what I can tell so far, apart from the naming issue that prevents it from being identified by the plugin, every other tool/command (format/mount) should work as it does for a SATA/AHCI device. If you need more info about NVMe, I explained the manual process of identifying an individual NVMe device (at least in my case) HERE. I would of course be willing to try unofficial releases and provide output/logs where needed.
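     To illustrate what the plugin currently has to work with, something like this shows the difference on the console (on my system the first command comes back empty, the second shows the pci-0000:04:00.0 link):
     # no by-id entry is created for the NVMe disk, only a by-path link
     ls -l /dev/disk/by-id/ | grep -i nvme
     ls -l /dev/disk/by-path/ | grep -i nvme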
  22. I guess you mean the 5820K. But yeah, performance should be OK, definitely enough for gaming and any other normal desktop usage.
  23. Should work, but something does not add up. It says quad-SLI but only shows up to a 3-way config. If you add that up (16x/16x/0x/8x) you would end up using all 40 PCIe lanes (assuming you have a CPU with 40). So that SSD + 3-way SLI may be a problem. From the datasheet I cannot tell which lanes/slots are shared, but you should be safe unless you try 3- or 4-way SLI... And as I explained in the linked thread, the X99 chipset uses DMI 2.0 for the onboard storage connections, which shares a max of 20 GBit/s. So your 4 SSDs could already be bottlenecked (in theory; in real-world scenarios it should be fine). The MZHPV256HDGL is the AHCI version. It should be recognized as a "normal" SATA SSD and therefore work as a cache drive. The MZVPV256HDGL would be NVMe and would not work in the array right now (work in progress though). IOPS/random access should also be around what the other SSDs can deliver (70k-90k), so you won't waste too much. But the one thing you probably won't ever see are the high sequential speeds that drive could reach when left alone. You would basically downgrade the M.2 disk to a SATA disk when copying massive amounts of large files (backups/movies/etc). If that's what you are going for, you should not mix that drive into the cache pool with the SATA SSDs. If you want more space in the cache pool and want to save a SATA port, it should be OK.
  24. As Nicktdot said, unless we know whether your M.2 drive is AHCI or NVMe, your question cannot be answered. M.2 is as vague as an "optical disk", which could be CD/DVD/Blu-ray/etc. A Blu-ray disc won't work in a DVD drive although both are "optical disks". Unless we know the type of M.2, we can't tell. But you can probably answer your question yourself after reading this. TL;DR:
      - AHCI "should" work like any other SATA-AHCI SSD
      - You can mix any disks in the cache pool, but the slowest drive sets the speed (so mixing M.2 with SATA would be a waste)
      - NVMe is not yet supported, but it "can work" outside of the array
      It would be best to be specific about your devices: what manufacturer/model (mainboard and M.2 disk) are you using?
  25. So here is what I did. I'm not saying it's the only way, but it worked for me. I am by no means a Unix guy, most of the things I did here were a first for me; it's usually only Windows for me. Like I said, someone who already knows all of what I am about to write just types 2-3 lines into the CLI and is done in under a minute. If anything goes wrong, don't blame me! If anything is unclear, ask. Somebody can probably help before something bad happens.

     1) Make sure the device is correctly identified as an NVMe device and is using the correct driver/modules:

     root@unRAID:~# lspci -k -v
     04:00.0 Non-Volatile memory controller: Intel Corporation PCIe Data Center SSD (rev 01) (prog-if 02 [NVM Express])
         Subsystem: Intel Corporation Device 370e
         Flags: bus master, fast devsel, latency 0, IRQ 16
         Memory at f7110000 (64-bit, non-prefetchable) [size=16K]
         Expansion ROM at f7100000 [disabled] [size=64K]
         Capabilities: [40] Power Management version 3
         Capabilities: [50] MSI-X: Enable+ Count=32 Masked-
         Capabilities: [60] Express Endpoint, MSI 00
         Capabilities: [100] Advanced Error Reporting
         Capabilities: [150] Virtual Channel
         Capabilities: [180] Power Budgeting <?>
         Capabilities: [190] Alternative Routing-ID Interpretation (ARI)
         Capabilities: [270] Device Serial Number XX-XX-XX-XX-XX-XX-XX-...
         Capabilities: [2a0] #19
         Kernel driver in use: nvme
         Kernel modules: nvme

     2) The rest works basically the same way as for any other block device (HDD/SSD), you just have to use the correct device path, which should be /dev/nvme(X)n(Y) instead of /dev/sd(X):

     root@unRAID:~# lsblk | grep nvme
     nvme0n1     259:0    0 372.6G  0 disk

     Unfortunately, there is no /dev/disk/by-id entry (which would include the serial number) for NVMe devices, so if you have multiple identical disks, you would need to identify the exact device by knowing which disk has which serial number (in my case 04:00.0):

     root@unRAID:~# udevadm info -q all -n /dev/nvme0n1
     P: /devices/pci0000:00/0000:00:1d.0/0000:04:00.0/nvme/nvme0/nvme0n1
     N: nvme0n1
     S: disk/by-path/pci-0000:04:00.0
     E: DEVLINKS=/dev/disk/by-path/pci-0000:04:00.0
     E: DEVNAME=/dev/nvme0n1
     E: DEVPATH=/devices/pci0000:00/0000:00:1d.0/0000:04:00.0/nvme/nvme0/nvme0n1
     E: DEVTYPE=disk
     E: ID_PART_TABLE_TYPE=dos
     E: ID_PATH=pci-0000:04:00.0
     E: ID_PATH_TAG=pci-0000_04_00_0
     E: MAJOR=259
     E: MINOR=0
     E: SUBSYSTEM=block
     E: UDEV_LOG=3
     E: USEC_INITIALIZED=9333489

     So the PCI device with the serial number found through "lspci -k -v" is actually /dev/nvme0n1. You could probably skip that and just use the device link with the PCI slot, /dev/disk/by-path/pci-0000:04:00.0, instead of /dev/nvme0n1. But those names/links may change when you have many NVMe disks or add/remove/change their place on the mainboard (like sda/sdb ...), which is probably part of the reason unRAID only uses /dev/disk/by-id, which includes the serial number and is therefore unique across reboots. I guess you could create the /by-id link yourself with customized udev rules, but I would not recommend that unless Lime-Tech approves. The only unique thing I found would probably be the UUID of the partition that gets created in the next step.

     3) Once you have the correct device name, use gdisk (for GPT) or fdisk (for MBR) or any other partitioning tool to create a partition (the following procedure wipes all data on the disk, be careful):

     root@unRAID:~# gdisk /dev/nvme0n1
     GPT fdisk (gdisk) version 0.8.7
     Partition table scan:
       MBR: protective
       BSD: not present
       APM: not present
       GPT: present
     Found valid GPT with protective MBR; using GPT.
     Command (? for help): o
     This option deletes all partitions and creates a new protective MBR.
     Proceed? (Y/N): Y
     Command (? for help): n
     Partition number (1-128, default 1):
     First sector (34-156301454, default = 2048) or {+-}size{KMGTP}:
     Last sector (2048-156301454, default = 156301454) or {+-}size{KMGTP}:
     Current type is 'Linux filesystem'
     Hex code or GUID (L to show codes, Enter = 8300):
     Changed type of partition to 'Linux filesystem'
     Command (? for help): w
     Final checks complete. About to write GPT data. THIS WILL OVERWRITE EXISTING PARTITIONS!!
     Do you want to proceed? (Y/N): Y
     OK; writing new GUID partition table (GPT) to /dev/nvme0n1.
     The operation has completed successfully.

     "o" -> clears the disk and creates a new empty table; "n" -> creates a new partition (to create one big partition, just hit Enter every time to accept the default values); "w" -> writes the changes to the disk.

     4) Format the partition: From what I read, XFS/ext4 are recommended for NVMe devices. I don't know what state btrfs is in right now, but I heard some versions had issues with really slow qemu/kvm access through btrfs. I only have one drive, so I won't need btrfs pools and therefore did not invest any time in testing it. I went ahead and formatted with the default options, with the addition of "-K" (recommended by Intel for my disk; I don't know what it does and whether it's good or bad on other disks...) and "-f" to force the creation. After creating the partition, for some reason a vfat signature is found, so the new format must be forced.

     mkfs.xfs /dev/nvme0n1p1 -f -K

     5) Mount the new partition: Create an empty folder where the NVMe disk should be mounted. I went with /mnt/nvme, but I guess it does not matter. It seems that any folder under /mnt can be used through the web interface to create VMs or to use with Docker, so basically the stuff I want to put on my NVMe disk.

     # MOUNT SSD
     mkdir /mnt/nvme
     mount /dev/nvme0n1p1 /mnt/nvme

     But like I said earlier, I think it's possible that the device name "nvme0n1" may change when you add more NVMe disks or rearrange their PCI slots. To be safe, you could use the UUID of the partition when mounting it, which should never change unless it is reformatted. I added the mount command to my "go" script in unRAID and never had any issues auto-starting VMs on array start. I do not know the exact order of go/array-start/kvm-start/docker-start, but I know Lime-Tech changed some things in the past to support gfjardim's unassigned-devices plugin and its automount feature. Maybe I am just lucky, but it seems the go script works for now. In my case:

     root@unRAID:~# udevadm info -q all -n /dev/nvme0n1p1 | grep uuid
     S: disk/by-uuid/2d5e7ce0-41e6-47b5-80d2-70df40a8c1da
     E: DEVLINKS=/dev/disk/by-path/pci-0000:04:00.0-part1 /dev/disk/by-uuid/2d5e7ce0-41e6-47b5-80d2-70df40a8c1da

     root@unRAID:~# more /boot/config/go | grep nvme
     mkdir /mnt/nvme
     mount /dev/disk/by-uuid/2d5e7ce0-41e6-47b5-80d2-70df40a8c1da /mnt/nvme

     6) After a reboot it looks like this in my case:

     root@unRAID:~# mount | grep nvme
     /dev/nvme0n1p1 on /mnt/nvme type xfs (rw)
     root@unRAID:~# lsblk | grep nvme
     nvme0n1     259:0    0 372.6G  0 disk
     └─nvme0n1p1 259:1    0 372.6G  0 part /mnt/nvme

     While it is still not part of the array/cache, I created a symbolic link on the cache drive that points to the NVMe mount and holds part of my Steam library and some other games. Works for me until NVMe is officially supported as a cache device.
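     In case anyone wants to copy that last part, the symlink is nothing special; for example (the folder names are made up):
     # expose a folder on the NVMe mount through an existing cache share
     ln -s /mnt/nvme/SteamLibrary /mnt/cache/Games/SteamLibrary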