Everything posted by 1812

  1. managed to pass through two 10k rpm disks (not images) to an os x vm and set them up as a raid0 scratch disk. pretty snappy.
  2. what is the error message you are getting, and what type of vm are you trying to run? Also, post your xml from the vm.
  3. Krusader can be used to move files via a web GUI in a docker. I used to use Midnight Commander but got tired of logging in via terminal every other hour to move something.
  4. and for more fun, raid 0 and 1 in OS X Sierra: http://www.macworld.com/article/3095835/storage/how-to-configure-a-software-raid-in-macos-sierra-s-disk-utility.html I haven't done this yet, but I'm about to pick up a couple 10k 300GB drives and try to make a super fast, 600GB scratch disk for cheap (rough command-line sketch below).
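     In case it helps anyone, a minimal sketch of what Disk Utility is doing under the hood; the disk identifiers below are hypothetical, so check yours first:

     # find the two 10k drives (disk2/disk3 here are just examples)
     diskutil list
     # create a striped (raid 0) set named "scratch", formatted JHFS+
     diskutil appleRAID create stripe scratch JHFS+ disk2 disk3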
  5. Quoting the poster above: "Thanks for the pointers. I made a bunch of changes, removed xvga from the xml, upgraded to Sierra, changed the smbios setting in Clover to match an older mac, etc. And IT WORKS!!! Finally. It recognized the card and, without any boot arguments or installing web drivers, it worked. Phew, spent so many hours. Now I need to get audio working ;-) EDIT: Got hdmi audio working thanks to another one of your posts http://lime-technology.com/forum/index.php?topic=51915.msg524900;topicseen#msg524900 :-) Thanks so much"

     I have spent tons of time getting mine working, playing with settings, breaking things, redoing things... but that was also mainly because I was learning how unRaid works AND how KVM works at the same time. It was completely worth it and I've loved the learning experience. I couldn't have done it without many others' contributions and help on here, including gridrunner! Now ----> make a backup copy of your working disk image! Trust me, it saves time vs reinstalling again later. I keep a "base" image to use when creating a new vm or for when I break mine messing around (see the sketch below).
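     A minimal sketch of that backup step; the paths are hypothetical, adjust to your own shares:

     # preserve sparseness so the copy doesn't balloon to the full vdisk size
     cp --sparse=always /mnt/user/domains/MacOS/vdisk1.img /mnt/user/backups/MacOS-base.img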
  6. TurboWrite is the greatest thing since sliced bread. Unless I'm writing lots of small files, I get somewhere around gigabit wire speed most of the time. Well, there IS that...
  7. Just do it! Pros: You can achieve all that you want with the hardware you posted. Cons: Slower writes directly to the array (which a cache drive helps mitigate). Many people do all of the things you've listed and more on less hardware.
  8. Quoting the OP: "Thanks for the link, I came across that soon after your first post. For now I have a 1.5tb vdisk on a 2tb btrfs striped cache pool and so far it seems to be working great. I am transferring my Steam library now, so we will see what happens once it exceeds 1tb (although I am certain it will be fine now that the data is striped). Once I have everything stable I will dismantle my old LXD server and use its SSD for the VM's OS, keep the cache pool for games, and use the rest of my disks for less sensitive data storage (mostly infrequently accessed Plex media and temporary storage). Not exactly what I intended, but it accomplishes the same thing. BTW, I did notice that each share can independently enable/disable the cache, which is quite useful for me. Thanks for your help, I think I'll stick with unRaid if everything looks good over the next few weeks. I am using the trial now, but it looks like a license may be in my future (never thought I would pay for a linux distro, but lime-tech seems deserving enough)."

     And quoting the other reply: "Just came across your post and thought of your problem. You don't want multiple vdisks, correct? I guess, as you want one large drive in windows. Well, as you are not using parity, the writes to your array will not be slowed by a parity write. So here is what you should do: create a vdisk on each drive you have in your array, e.g. on disks 1, 2 and 3. Attach all the vdisks to your windows VM. Go to disk management in windows and, using those attached vdisks, create a striped or spanned volume. This way windows sees one large disk, but it is spread across multiple vdisks. I just tried it on mine and it seems to work ok. edit..... Any reason you are not just using a mapped drive in windows to the array? You could just install the os on your ssd then map a network drive to your array for data storage. Shares on the array span across drives; a vdisk will not, as it is just one file."

     Smart. I hadn't thought of that. Might have to try it for fun now! (A rough sketch of the vdisk half of that idea is below.)
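     A rough command-line sketch of the multi-vdisk idea, assuming unRaid's usual /mnt/diskN mounts; the names and sizes are hypothetical:

     # one vdisk per array disk; windows then stripes/spans them in disk management
     qemu-img create -f raw /mnt/disk1/domains/games-1.img 500G
     qemu-img create -f raw /mnt/disk2/domains/games-2.img 500G
     qemu-img create -f raw /mnt/disk3/domains/games-3.img 500G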
  9. they seem to also listen to the community regarding bugs, feature requests, etc. and are genuinely interested in not only presenting a good product, but also enhancing it and making it even better.
  10. https://lime-technology.com/forum/index.php?topic=46637.msg445621#msg445621 come for the raid... stay for the vm's!
  11. unRaid does not span disks using xfs. That way, if your array crashes and 1 disk dies, and your parity dies while rebuilding, you don't lose the entire enchilada, just the data on the disks that died. You "could" run a striped cache pool using btrfs to achieve what you are wanting, but that scares some folks (a quick way to check the striping is below). I use unassigned SSDs for smaller main vm images, 20-50GB each (which are backed up to the array), and then add to them any other vdisks I need for space via unassigned devices for better speed. For OS X I have an actual vdisk with common apps so I don't have to reinstall each time I create a new os setup. Keeps the main vm's nice and tidy.
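     If you do try the striped btrfs cache pool, a quick way to confirm the data profile; the mount point assumes unRaid's standard /mnt/cache:

     # "Data, RAID0" in the output means writes are striped across the pool members
     btrfs filesystem df /mnt/cache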
  12. I use 2 gt710's and a gt730 (710 variant) with almost no issues... https://lime-technology.com/forum/index.php?topic=54786.msg523314#msg523314 I use hostdev on them all. no nvidia drivers. no boot args. but that is not your problem. (sharing is caring though) what I use (from a running vm, so it has a few extra lines auto populated):

     <hostdev mode='subsystem' type='pci' managed='yes'>
       <driver name='vfio'/>
       <source>
         <address domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
       </source>
       <alias name='hostdev0'/>
       <address type='pci' domain='0x0000' bus='0x02' slot='0x01' function='0x0'/>
     </hostdev>
     <hostdev mode='subsystem' type='pci' managed='yes'>
       <driver name='vfio'/>
       <source>
         <address domain='0x0000' bus='0x07' slot='0x00' function='0x1'/>
       </source>
       <alias name='hostdev1'/>
       <address type='pci' domain='0x0000' bus='0x02' slot='0x04' function='0x0'/>
     </hostdev>

     Try removing "xvga=yes" from your xml; I don't see it in mine, for this card or my gtx760. Look in the boot log for the gt710 card, search by device id and by its slot assignment, and see if it is showing any errors there, like a bios bug or similar (lookup sketch below). Also, try it in a different slot. Sometimes that helps some folks.
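     A quick sketch of how to find the bus/slot/function values the <source> address needs; the device name in the grep is just an example:

     # pci address and [vendor:device] id for each nvidia function (gpu + hdmi audio)
     lspci -nn | grep -i nvidia
     # then check the boot log for errors against that address, e.g. 07:00
     dmesg | grep 07:00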
  13. if the above doesn't work, post the xml of your vm.
  14. PWLA8391GTL is a gigabit (1000mbps) card. The OP has a fast ethernet onboard connection (100mbps). Assuming the router/switch they are using is gigabit, they will achieve faster transfer speeds on the network. The 10gbe setup you posted would only be point to point (between the server and computer) and would have no direct network access at 10gbe speeds unless you buy a 10gbe switch to connect them. (A quick link-speed check is below.)
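     A one-liner to confirm what the link actually negotiated; the interface name here is just an example:

     # "Speed: 1000Mb/s" = gigabit; "Speed: 100Mb/s" = the fast ethernet bottleneck
     ethtool eth0 | grep Speed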
  15. (replying in "Transfer Speeds") Yes, you did put that. As I've learned on here, a lot of times the finer details of things get lost on newer members (and I'm still new myself). I'm trying to make sure that there are managed expectations and a clearer understanding of what one needs to do/have to achieve what many people come here for, which they usually saw on Linus TT. Since this is one of a handful of questions that repeatedly comes up, the sticking points are often glossed over because responders are tired of repeating the same thing and begin to shorten responses. And since the question comes up more often than not, perhaps it should be a pinned thread somewhere (in a new networking sub forum?), that way it can be referenced with complete information. And perhaps you could be the one to do the write-up.
  16. this ^^^^ I never hit over 40C when transcoding multiple streams... but I also have servers with big, loud fans.
  17. (replying in "Transfer Speeds") I think it's important to make sure Pontey understands that you'll hit higher speeds on the transfers until your cache drive and ram fill, then speeds will drop down to 40-80MB/s (maybe 100) as it writes to the array. That max speed will be determined by the max read speed of the sending hdd/ssd (a quick way to measure that is sketched below). You won't hit 1GB/s just plugging it in; there are other things that have to be done to do that.
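     A quick sketch for measuring the sending drive's sequential read speed; the file path is hypothetical:

     # read a large file and report throughput; that's the ceiling for the transfer
     dd if=/mnt/disk1/somefile.mkv of=/dev/null bs=1M status=progress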
  18. Quoting the poster above: "Response I'm getting on a Hackintosh forum: KVM virtual machine - 'No ACPI tables, no audio support.' Would anyone know what this means? Obviously we know audio support can work, as others here have it."

     Perhaps it is an issue with the web drivers; Nvidia says they are "experimental." Did you ever try passing through the card to the vm with only the mac drivers and then adding the hdmi kext? (this is the point where having a "base" install img is helpful, so you don't have to recreate the vm every time.) I know you said it was glitchy before, but trying to eliminate variables.
  19. I did look into this, and I was able to get the correct ip range working if I removed the bottom qemu commands and added:

     <interface type='bridge'>
       <mac address='52:54:00:00:20:30'/>
       <source bridge='br0'/>
       <model type='virtio'/>
       <address type='pci' domain='0x0000' bus='0x02' slot='0x03' function='0x0'/>
     </interface>

     The only problem was that the internet upload speed seems to be broken. I was reading through the developer's GitHub support pages, and it looks like he was aware of most of these issues but hasn't updated anything for a few years. He even stated: "Probably best to avoid this driver until someone smart enough to fix it updates it." Thank you for your help in looking into it! Back to passing through hardware to saturate gigabit... (applying/verifying commands below)
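     For reference, a sketch of applying and checking that change; the vm name is just an example:

     # edit the domain xml, then confirm the nic is on br0 with the virtio model
     virsh edit Windows10
     virsh domiflist Windows10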
  20. probably gets sleepy... did you disable sleep/hibernate? lime tech has a video for win 10 installs. Did you follow that? (the usual commands for disabling sleep are below)
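     If it helps, the usual windows-side commands for that, run in an admin prompt:

     powercfg /hibernate off
     powercfg /change standby-timeout-ac 0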
  21. ok, how to get it to get an ip in my current range? i'm on 192.168.1.x and it's on 10.0.2.5. even manually assigning the ip/router in network settings won't show. In network utilities it shows the link active but 0 mbit link speed. Also, I can't ping the address it has set for itself.

     --edit: it appears the "user" in the qemu:arg denotes it as an isolated range, using a virtual dhcp server. when i set it to "bridge" i get a helper error:

     internal error: process exited while connecting to monitor: failed to parse default acl file `/etc/qemu/bridge.conf'
     2017-01-17T03:05:40.642520Z qemu-system-x86_64: -netdev bridge,id=hub0port0: bridge helper failed

     after some searching, the file listed needs the following added: allow br0. but I can't seem to find the file.... ¯\_(ツ)_/¯

     --edit: Sorry for soooo many edits. I ended up just creating the folder/file in the location listed using Krusader. it seemed to "work" as in, it moved me along to the next error:

     internal error: process exited while connecting to monitor: failed to open /dev/net/tun: Operation not permitted
     2017-01-17T12:34:22.184365Z qemu-system-x86_64: -netdev bridge,id=hub0port0: bridge helper failed

     I'm a bit out of my element on this, but trying to dig around on the internet to find what I can. any help would be appreciated! (a possible fix is sketched at the end of this post)

     --edit: after testing (vm still not in the same ip range as my server, but on the private ip), the results:

     smb to unRaid: ~30MB/s
     afp to unRaid: ~30MB/s
     smb to synology nas: ~30MB/s

     vm hosted on a single cache disk, read/write to same cache disk. with the e1000-82545em i would get 30-40MB/s to unRaid, and about 70MB/s to the synology nas. side note: when using the same vm with a usb 3 to gigabit adapter, i saturate my read/write to the same cache disk on unRaid. hmm.....
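     For anyone hitting the same two errors, a sketch of the usual fixes; the helper path varies by build, so the one below is just an example:

     # tell the bridge helper which bridge guests may attach to
     echo "allow br0" >> /etc/qemu/bridge.conf
     # the /dev/net/tun "Operation not permitted" error usually means the helper
     # isn't setuid root, so it can't open the tap device for an unprivileged qemu
     chmod u+s /usr/libexec/qemu-bridge-helper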
  22. I could NEVER get this to work! Going to try this tonight. What transfer speeds are you seeing? The built-in one is horrible! (sorry, I'm a little excited by this since I've wasted too much time on it.)
  23. The Renesas card I had required external power, and it had it, and still threw the same handoff error.
  24. I would get the "bios bug?" message in my log for that and my Asrock 3.1 card. There may be a firmware update for your motherboard to help with it. I chose the easier route, returned both cards, and picked up a couple that worked without issue. There may be someone else more knowledgeable about usb card passthrough and the xHCI handoff failure than me, but when I asked the questions a couple weeks ago, the only answer I got was "firmware update for motherboard?" The closest I came to finding an answer on the rest of the internet mentioned an issue with legacy usb being on, and that if you disabled it, it "might" fix it. The problem is that if you access your bios with a usb keyboard and not a serial keyboard, you lose the ability to make any further changes to your bios on the next boot without later having to reset your entire bios back to factory settings to enable legacy usb. So, since I didn't have a serial keyboard handy, I opted not to do that. It might have fixed it, it might have kept me from accessing my bios.... I will never know. Not really recommending this, just passing on something I read somewhere.
  25. are both drives on each side of the transfer SSDs? are you running multiple drives in your SSD cache? are you looking at network speed or read/write speed to determine those numbers? (a sketch for separating the two is below)
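     A sketch for measuring the two things separately; the host name and paths are just examples:

     # raw network throughput, independent of any disk
     iperf3 -s                  # on the server
     iperf3 -c tower.local      # on the client
     # raw write speed to the cache, independent of the network
     dd if=/dev/zero of=/mnt/cache/ddtest bs=1M count=4096 oflag=direct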