ryoko227

Members
  • Posts: 161
  • Days Won: 1

Everything posted by ryoko227

  1. I'm with you on this. I have been keeping full-size backups and making clone VMs simply by copying my initial vdisk inside mc from the get-go. Since my VMs are only 60GB, I'm not using enough space to bother with compression, etc., especially when it takes less than 5 minutes to make a copy. I think I have around 15 full backups on the array since I sorted out all of my unRAID VM issues, and I've found copying to be the quickest solution for me. The most important thing is keeping a copy of those XMLs, especially if you've unlocked a free Win10 upgrade tied to that VM's specific uuid. After that, it's just copy and paste and you can restore a backup, make a cloned VM, whatever.
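     Roughly what that looks like on my end, as a minimal sketch (the share names, VM name, and paths here are just placeholders for illustration, not necessarily your layout):

        # Shut the VM down first, then copy the vdisk to a backup share on the array
        cp /mnt/user/domains/Win10/vdisk1.img /mnt/user/Backups/Win10-vdisk1-$(date +%Y%m%d).img

        # Save the VM's definition too, since it holds the uuid the free Win10 upgrade is tied to
        virsh dumpxml Win10 > /mnt/user/Backups/Win10-$(date +%Y%m%d).xml

     Restoring or cloning is just the reverse: copy the vdisk back (or to a new location) and define a VM from the saved XML, changing the name and uuid if it's meant to be a clone rather than a restore.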
  2. Well, I tried mixing it up and, just like you said, I was able to get mixed upper/lowercase. Something like [Game SSD] worked fine. I tried GAME SSD and, also like you said, Windows dorked out and made it all lowercase. I'm not sure why the space wouldn't work for me before, but that was probably just user error on my part. Good find on those other links as well; I'm bookmarking them in case I end up needing to make a share in all caps again sometime. Thanks for the help!
  3. "In my experience this is a Windows issue, not an unRAID one! With the latest Windows 10 it appears to lowercase share names which are all uppercase. Mixed case seems to be respected." Cool! Thanks for the info! I'll try mixing it up when I get home and let you know how it goes.
  4. Just out of curiosity, is there a way to have the share show up with uppercase letters? Specifically when being viewed from Windows? I may be doing something wrong, or missing something, but I label the drive as 250GB_SSD and it shows up as 250gb_ssd when viewed over the network from a Win10 VM. Via the unRAID GUI, UD still shows the drive as 250GB_SSD though. unRAID seems to be able to name shares using caps and also spaces. Is this an ability in UD? If not, might I ask if it could be added?
  5. Ya, it got real weird real quick, lol. Got some really interesting permission errors from Windows that I'm not even going to bother trying to translate to English. I really did enjoy the speed of the drive attached this way though. While it doesn't have redundancy of any kind, I think the OP would enjoy the speed. Loading speeds for my games were easily 3-4x faster than from the HDD array share I've been using. Unfortunately for me though, I don't think it will work for the way I intend to use it. I can definitely see the perks of it being attached this way for a single user though. That being said, I think I'll just mount and share it via the unassigned devices plugin. I'm pretty sure the VMs will be able to access it simultaneously then, like a standard share. That, and being able to access the drive via mc and running it on XFS have their perks. I think I'll do some R/W tests before I switch over, just to see how much of a performance hit there actually is. Thank you for all the info! EDIT - I actually used to have the games library in a second vdisk that I kept on the cache drive. I was able to access that from both VMs concurrently without issues (if memory serves me), and I was initially hoping that btrfs would raid the 2x250GB SSDs into 1x500GB. I was going to go that route, but until unRAID adds that in natively, I think I'll go the UD plugin route in the meantime.
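     In case it's useful, the kind of quick R/W test I have in mind is just dd with direct I/O against the UD mount point (the /mnt/disks/250GB_SSD path below is only where I'd expect UD to mount it, so adjust as needed):

        # Rough write test: 4GB of zeros straight to the UD-mounted SSD, bypassing the page cache
        dd if=/dev/zero of=/mnt/disks/250GB_SSD/testfile bs=1M count=4096 oflag=direct

        # Rough read test of the same file
        dd if=/mnt/disks/250GB_SSD/testfile of=/dev/null bs=1M iflag=direct

        # Clean up afterwards
        rm /mnt/disks/250GB_SSD/testfile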
  6. lol, almost the same, I keep my vdisks at 60GB on the cache. So I took the leap tonight and pulled the 2nd drive from my cache pool following the directions in the manual (except I did power-downs before pulling cables). Found a cmd via google (which I've already forgotten) to show me the re-balancing of the cache drive and waited for the balance to finish up. Initially I was unclear what you meant about attaching the drive, but I sorted it out and found the hard drive pass-through post from April plus what you wrote above. Got it added into both my VMs as a D drive now. I did notice some wonkiness though. With both VMs up and running, new files won't show up in the other VM until after a restart. i.e., VM1 copies file A to the D drive, and VM2 won't see that file until after it restarts. I wonder if that has something to do with NTFS? Also, with this setup, will the same program on both VMs be able to run concurrently? I'm also very interested in how you set up your D drive inside of Windows.
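     For anyone else trying this, I believe the command I found was along these lines, though I can't swear it's the exact one, so treat it as a best guess:

        # Check whether a balance is still running on the cache pool
        btrfs balance status /mnt/cache

        # Show how data/metadata are currently allocated across the pool
        btrfs filesystem df /mnt/cache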
  7. For the above listed reasons I would also strongly advise against putting the VMs onto the array, purely from a performance point of view. Since I've had issues with VMs getting hosed in the past, I take regular backups (i.e., shut down the VM, hop into mc via ssh, then just copy the vdisk into my backup share on the array; toss the VM's XML into a txt file). It's surprisingly fast and gives me peace of mind (especially since my primary VMs are running Win10). Someone brought up using unassigned devices and attaching an SSD there for gaming. I'm wondering if I should scrap my BTRFS SSD cache pool (250GBx2) and just make one a dedicated cache and one a games library share. I ask because load times from my HDD games library on the array are pretty horrendous. What do you guys think?
  8. My #1 suggestion for anyone planning on building a dedicated server, especially related to hardware purchases, isn't even inside the server itself. I strongly suggest setting aside some money and investing it in a UPS (uninterruptible power supply). Time and time again, I see posts of people saying "Everything was working fine until a power outage." or "Where I live we have frequent blackouts..." I myself have had to restore a backup of one of my VMs after it blitzed from the breaker popping one night. Sudden power fluctuations or outright drops can, and arguably often do, cause data issues and corruption. Even if you aren't running a dedicated server, having a UPS can help ensure that, in the previously mentioned cases, your PC will shut down correctly and in a safe manner.
  9. The idea behind this thread is for current users of unRAID to make suggestions to new or prospective users of unRAID, to help ease their transition over to the software and community. I do not intend to make this a Q&A-based thread, but more a go-to list of helpful suggestions that might prepare users for issues that could arise, or prevent commonly seen issues, with solutions based on our experiences. These could be anything from software settings and install methods to product purchases. If your suggestion is only viable under specific builds and circumstances, please make note of them. If a common recommendation already has a wealth of information listed in another thread, please post the link and a brief description explaining its purpose. If something like this has already been made, I apologize ahead of time. My primary reason for wanting to set up a list such as this is continually seeing, in various threads, people noting unRAID not working properly after easily preventable, non-unRAID-related situations. This thread will ultimately be organized depending on the responses and volume thereof.
  10. Going to hold off on this for a bit till VM management is fully implemented, as that is where the bulk of my unRAID use is. I would also be very interested if the ability to simultaneously/securely/remotely access multiple unRAID servers running on different networks via the app ever came to fruition. Might be asking for way too much, but it would surely be a selling point for me. That being said, I will definitely be keeping my eye on this one. Good job, looking good so far!
  11. 32GB in my home gaming/theater/storage server, and 16GB in the office media server.
  12. Well, I'm very happy to say that I was able to get this sorted out after I ran it past the people over on the ASRock forum. RobJ, I was initially of the same thinking (forgive my cut and paste from my ASRock post) but """After pulling the entire machine apart and trying all parts (including the CPU) in another (personal) machine I was unable to reproduce the errors outside of this motherboard. After rebuilding the server using this motherboard I am still receiving the MCEs during boot, though now the VMs do not seem to crash....... I have tried changing settings in the BIOS to try and isolate and disable the issue, to no avail. The mcelog (logged below) denotes the error as being related to the CPU cache, but I'm of the thinking that it has more to do with how the MB communicates with those banks than some issue with the CPU itself.""" After posting that, a user named Xaltar simply asked what BIOS version I was using, which led me to reflash to the newest version, and bam, no more MCEs. Had it not been for dmacias putting mcelog into the NERDpack and RobJ's help, I wouldn't have even had a starting point for what to search for, as MCEs cover so many things. Being able to pin it down via hardware troubleshooting and having the mcelog ultimately led me to the ASRock website (probably after reading some of the same search results, lol). I would have never imagined it to be an outdated BIOS issue, as she was running fine for like 2 months. So thank you very much again dmacias, RobJ, and Xaltar (from the other forum). Lifesavers
  13. I can confirm that on a system that has MCEs, running unRAID 6.2b21, mcelog does in fact produce the log file with the hardware issue(s) noted. The produced log appears in the format below (using my own mcelog as reference):

      ~# mcelog
      Hardware event. This is not a software error.
      MCE 0
      CPU 0 BANK 17 MISC 8cf00031e0000086 ADDR 5f000000
      TIME 1466125355 Fri Jun 17 10:02:35 2016
      MCG status:
      MCi status:
      Error overflow
      Uncorrected error
      MCi_MISC register valid
      MCi_ADDR register valid
      Processor context corrupt
      MCA: Generic CACHE Level-2 Eviction Error
      STATUS ee2000000004017a MCGSTATUS 0
      MCGCAP 7000c16 APICID 0 SOCKETID 0
      CPUID Vendor Intel Family 6 Model 63

      Hardware event. This is not a software error.
      MCE 1
      CPU 0 BANK 18 MISC 1cf00031e0000086 ADDR 5f100040
      TIME 1466125355 Fri Jun 17 10:02:35 2016
      MCG status:
      MCi status:
      Error overflow
      Uncorrected error
      MCi_MISC register valid
      MCi_ADDR register valid
      Processor context corrupt
      MCA: Generic CACHE Level-2 Eviction Error
      STATUS ee2000000004017a MCGSTATUS 0
      MCGCAP 7000c16 APICID 0 SOCKETID 0
      CPUID Vendor Intel Family 6 Model 63

      Hardware event. This is not a software error.
      MCE 2
      CPU 0 BANK 19 MISC 54f00031e0000086 ADDR 5f100000
      TIME 1466125355 Fri Jun 17 10:02:35 2016
      MCG status:
      MCi status:
      Error overflow
      Uncorrected error
      MCi_MISC register valid
      MCi_ADDR register valid
      Processor context corrupt
      MCA: Generic CACHE Level-2 Eviction Error
      STATUS ee2000000004017a MCGSTATUS 0
      MCGCAP 7000c16 APICID 0 SOCKETID 0
      CPUID Vendor Intel Family 6 Model 63

      Thank you so much for adding this in dmacias, it's exactly what I was hoping for, and I'm sure there are other users who are having MCEs that will find this useful as well.
  14. Not really sure how to start this thread, so I'll just jump right into it. I want to say thank you to both the staff of LT and all the members of this wonderful community. I've been skulking around for a while and finally jumped onboard about 6 months ago. While my personal experiences with unRAID & VMs have definitely had their ups and downs, I've slowly been able to get everything mostly working and running the way I would like. This is solely due to the help, guidance, and write-ups that all of you have come together and compiled here. I know sometimes the information is hard to get to, sometimes things are repeated, and sometimes we just kind of figure it out as we go, but I have to say that despite those things, this is one of the best communities I've come across in a great many years. Your humor and perseverance make this whole endeavor worthwhile. So again, I thank you all for your input, hard work, and help. Without you all, I'd still be a wasteful multiple PC plebe. You all rock!
  15. Since I know myself and others have had MCE issues in the past (with memtest usually not finding an issue), I was curious if LT might consider adding mcelog from http://mcelog.org/index.html to the unRAID betas? I may be mistaken, but from what I've read it seems to be the only way to ascertain what exactly an MCE event was actually caused by (even if ultimately benign). If not, maybe an instructional on how to install it yourself for those that do have MCEs? On a side note, I'm also keeping a close eye on this thread to see if I should move my personal and production servers over from beta21. I'll probably give it a few more days and see if anyone has any major blowouts before stepping into the b23 ring, lol. I'm also curious to see how those who changed their num_stripes back to default are doing with large SMB transfers.
  16. Kk, thank you RobJ, I'll try that out when I get into the office on Monday and let you know what I turn up with.
  17. Jun 3 16:32:25 YES-MEDIASERVER kernel: CPU: Physical Processor ID: 0
      Jun 3 16:32:25 YES-MEDIASERVER kernel: CPU: Processor Core ID: 0
      Jun 3 16:32:25 YES-MEDIASERVER kernel: mce: CPU supports 22 MCE banks
      Jun 3 16:32:25 YES-MEDIASERVER kernel: mce: [Hardware Error]: Machine check events logged
      Jun 3 16:32:25 YES-MEDIASERVER kernel: mce: [Hardware Error]: Machine check events logged
      Jun 3 16:32:25 YES-MEDIASERVER kernel: CPU0: Thermal monitoring enabled (TM1)
      Jun 3 16:32:25 YES-MEDIASERVER kernel: process: using mwait in idle threads

      This week my unRAID server at the office started posting machine check event errors into the syslog. I didn't think too much of it until I started having complete system lockups. Nothing was accessible: the VM locked up, mapped drives on other machines dropped, and ssh couldn't resolve the server; daily complete system lockups. As I had recently been making config changes (num_stripes, rombar, isolcpus, adding a cheap USB fan for the user), I reverted everything back to the original settings from before this week. Unfortunately, I still get the 2 errors in the same place in the system log upon bootup. This appears to be a genuine hardware fault, but after doing some searching here and on Google, I tried to view the mcelog and found that unRAID doesn't appear to be packaged with it. Being completely new to *nix, even after following the installation directions at mcelog.org, I was unable to install the app to view the details of the fault. How do I download and install mcelog? In an effort to see exactly what is logged into the syslog at the time of crashing, I currently have a tail on the syslog and also a watch on sensors running via putty. However, the CPU temps are pretty consistently running in the 40s, so I don't think it's temp related. It seems to happen completely at random, sometimes just while idling. Seemingly unconnected, but suddenly out of the blue while writing this post I decided to reboot my VM and the GPU came up with an error code 43. I was able to resolve that issue by downgrading the drivers to 362, but the timing still seems suspect. At this point, aside from a generous soul instructing me on how to install mcelog, or simply waiting for the crash to occur again and hoping for a decent log entry, I'm not sure where to go from here. Does anyone have any other suggestions I might try? yes-mediaserver-diagnostics-20160603-1546.zip
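      For reference, the monitoring I have running in the putty sessions is nothing fancy, roughly just:

        # Follow the syslog live so I can see what gets written right as a lockup hits
        tail -f /var/log/syslog

        # Refresh sensors every couple of seconds to keep an eye on CPU temps
        watch -n 2 sensors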
  18. Good to know, and good feedback as well! When I got home from work I started doing some testing on my home 12-core Xeon. I tried out some different setups with my 2 Win10 gaming VMs and I have to say that I agree with your suggestion of using emulatorpin for the first pair only on the VMs. It's a bit late here now, so I hope what I write will make sense, but here goes. Keep in mind I have Hyper-V on in all these instances, as I know some people have had issues using it. After adding isolcpus for all but the 1st two pairs, and using my original vcpupin setups, I tried emulatorpin on the first pair solely, and then again with the first two pairs combined. I then tested the visual performance and checked in-game fps for War Thunder and also Black Desert. I tried different vcpupin setups, using 5 pairs, 4 pairs, and 2 pairs, and repeated tests of the above without allocating the 2nd thread. In every test, using emulatorpin on only the first pair gave better performance and stability for me compared with using the first two pairs. Running the VMs without the 2nd thread allocated with vcpupin gave me a pretty bad stutter. I didn't seem to have any over-allocation issues and ultimately found that 5 cores / 2 threads with the 1st pair emulatorpin'd is the sweet spot for my VMs at the moment. 4 pairs caused stuttering in game. 2 pairs had roughly the same fps as 5 pairs, but would have latency if the CPU spiked; War Thunder wouldn't even load.
  19. Hey dlandon, I had a quick question about the emulator pinning. I've noticed on my system that the Dashboard's System Status in the unRAID webGUI can frequently show the first pair (0,8) topping out at 2400/2400MHz on my 8-core Xeon. I know that you recommended using only the first pair for emulator pinning (even with multiple VMs), but I was curious if it's possible to emulator pin more than one pair per VM? Also, what would the XML for a VM like that look like? I have all the other pairs (sans 0,1,8,9) isolated in the syslinux config. I'm currently using this XML for the VM in question:

      <emulatorpin cpuset='0,8'/>

      Could I emulator pin more pairs using something like this?

      <emulatorpin cpuset='0,1,8,9'/>

      Bit of a side note: I've also noticed that even though 1,9 isn't explicitly assigned to anything, it idles around 2200/2200MHz since I isolated and pinned the other CPUs. Normally they all rest at 1200/1200MHz.
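      As a side thought, I believe virsh can also query and change the emulator pinning on a running VM, which might be a way to test both layouts without editing the XML each time (the VM name "Win10" below is just a placeholder):

        # Show the current emulator pinning for the VM
        virsh emulatorpin Win10

        # Temporarily pin the emulator threads to both pairs (0,8 and 1,9) on the live VM
        virsh emulatorpin Win10 0,1,8,9 --live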
  20. Yes, but I'm not using the nvidia shield/experience in particular. I have my VMs set up to use Steam's in-home streaming to remotely access them with my 10-year-old laptops. I use this for gaming and just about everything in general, so my old laptops are essentially thin clients now, lol. Since I regularly use the HDMI ports on the unRAID server to watch TV, movies, etc., I use an AHK script MasterMind made that auto-changes the resolution to match the remote PC's display and then reverts it back to native when you close the application. It's posted over on the Steam forums, found here... https://steamcommunity.com/groups/homestream/discussions/0/490125103637825153/ Since I do everything on the VMs from the laptop, I also use Input Director http://www.inputdirector.com/ to pass the keyboard and mouse when everything is on the TV. Sometimes the inputs can be a bit wonky over the stream, and I'll use Input Director to force control (i.e., scroll my mouse off the laptop screen and onto the TV). The frames, sound, and input lag are visually identical to what I see on the TV, and WAY better than anything else I've tried. The resource footprint is super small. Best of all, it just works, and no mouse spazzing either.
  21. Found the answer at https://docs.fedoraproject.org/en-US/Fedora_Draft_Documentation/0.1/html/Virtualization_Deployment_and_Administration_Guide/sub-section-libvirt-dom-xml-devices-host-device-assignment.html. TL;DR - The optional bar attribute can be set to on or off, and determines whether or not the device's ROM will be visible in the guest's memory map. (In PCI documentation, the rombar setting controls the presence of the Base Address Register for the ROM). If no rom bar is specified, the default setting will be used. Since jonp purposefully inserted the bar='on' attribute noted in the above thread, would it be safe to assume that the rombar default setting is off? Should we all be explicitly adding the bar attribute? As the wiki http://lime-technology.com/wiki/index.php/UnRAID_Manual_6#Edit_XML_for_VM_to_supply_GPU_ROM_manually doesn't have the bar attribute noted, I'd like to verify this before making an addition to the wiki page.
  22. In the above linked thread, jonp states to use something like...

      <rom bar='on' file='/etc/fake/boot.bin'/>

      Since moving to this beta version, I've been using this...

      <address domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
      </source>
      <alias name='hostdev0'/>
      <rom file='/boot/vbios.rom'/>

      What does the bar='on' segment do?
  23. Have you tried the num_stripes fix (setting num_stripes to 8192)? Took a little searching, but I found some of your posts that led me to the http://lime-technology.com/wiki/index.php/Tips_and_Tweaks page. I changed the settings as directed. I'm kind of in the process of a game library migration, so I should have some juicy large files to test this out with over the next few days. I'll toss up an update if I encounter the issue again. Thanks RobJ!
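      For anyone else wanting to try the same tweak: the setting appears under Settings -> Disk Settings in the webGUI, and if I've understood the Tips and Tweaks page correctly, something like the following should apply it on the fly (my best guess rather than gospel, so double-check before relying on it):

        # Bump the md driver's stripe count to the value suggested on the Tips and Tweaks page
        mdcmd set md_num_stripes 8192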
  24. "The downgrade did indeed resolve my issue. I haven't shut down my system in a while to test the samba lockup issue, but I can safely say my VMs haven't had any lockups in rather a while. I haven't updated to anything newer since. I must say that I haven't really stressed the system since, other than 1 of the VMs playing Minecraft; I am due to start playing the new Doom game, which should give it a decent test soon." Awesome, thank you for the update bigjme. I'd like to tack on that using the 362 drivers seems to have resolved the issue for me as well. I did my version of a "practical" stress test earlier today and ran up both VMs, running 3 games each (mixtures of War Thunder, Black Desert, Heroes and Generals, Lords of Vermilion, Wizardry Online, etc.), while playing 3 random movies in 3 separate players, along with a half dozen or so YouTube videos running at the same time on both. No crash, no pause, it just worked. Thank you for finding that fix and sharing it here bigjme!
  25. Hey bigjme, I was wondering, since I hadn't seen any update about this, whether downgrading the drivers had fixed your issues? I'm having literally the same error myself. I only just now found it though, as I hadn't ever really stress tested my system till just the other day. I tried running 2 VMs (both in OVMF) with my 960s passed through to them. I wanted to see what she could do, so I opened up multiple games, movies and web videos, but instead found that I can reliably recreate the error you described in a similar manner. The VM locks up but shows paused in unRAID and can't be resumed, just as you described. I'll check out using the 362 drivers, but thought I would ask you first since you initiated it. Also, similar to your other post (and a few others here), I also have the "samba lockup issue" when I transfer a large number of files between the shares. Not knowing much of anything about how Samba works (only vaguely that it has to do with file sharing), I'll note that this only seems to happen with transfers from inside the Windows VMs to the unRAID shares (which I think is the function of Samba, if I'm not mistaken). However, if I SSH in and move stuff around the shares via mc, I've not had any issues. EDIT-- I found your post on the nvidia forum, and the issue they are having is reported as still causing problems with the newest driver set, 365.19. Installing 362 now; we'll see how she fares after that.