ryoko227
Everything posted by ryoko227

  1. Settings>>VM Manager I would like to suggest a default-setting drop-down for the VNC Keyboard of VMs. Right now, when you create a VM with VNC selected, it auto defaults to "English-United States (en-us)". As my keyboard is "Japanese (ja)", repeatedly changing this setting gets annoying to say the least. Obviously not a critical issue, but it would be helpful for those of us who make new VMs regularly and are not using English keyboards. If this is already implemented somewhere or someone knows how to do this, please let me know! Thanks
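In the meantime, the keymap can at least be changed outside the webgui by editing the VM's libvirt XML. A minimal sketch of the edit, shown against a sample line rather than a real VM (the `keymap` attribute is libvirt's; the exact `<graphics>` attributes will differ per VM):

```shell
# Hypothetical sketch: flip the VNC keymap in a libvirt <graphics> element
# from the en-us default to ja. On a real VM you would run `virsh edit <name>`
# and change the same attribute; here we just patch a sample line with sed.
line='<graphics type="vnc" port="-1" autoport="yes" keymap="en-us"/>'
echo "$line" | sed 's/keymap="en-us"/keymap="ja"/'
# -> <graphics type="vnc" port="-1" autoport="yes" keymap="ja"/>
```

Note this only covers an existing VM; it doesn't change the default for newly created ones, which is what the suggestion above is really asking for.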
  2. Interesting request imho. I have yet to have any issues with the NAS-specific areas of my unRAID servers, and have always found that area to be pretty bulletproof. What ultimately made me jump onto unRAID was all the other features, though. I don't have a lot of room for multiple boxes in the house, so being able to have my data storage relatively protected, my htpc, and my desktop gaming VMs all in one box was an absolute must for me. Had it not been for all those features, I would have found a different (albeit not as elegant or integrated) solution for my needs.
  3. Anyone else having their nvidia driver crash their Win10 VM since hopping onto the 6.3 versions? As I mentioned in the 6.3.2 announcement thread, I did a full update of all the virtio drivers, machine type, agent, nvidia drivers, and everything perked up quite a bit. But since doing that, I started having an issue where, when gaming, the VM would crash to a black screen, and I then have to force the VM closed via the unRAID gui. I thought maybe the newer nvidia driver might be causing it, and reverted from 378.92 to 376.19. Instead of a full-blown crash, now it will flash a black screen, audio will stop, and Win10 will give an error saying that the nvidia driver crashed. Since it wasn't happening in 6.2.4, I just wanted to see if anyone else is having this issue. I've got some even older nvidia drivers that I'll try out to see if that helps resolve the issue. Posting diags, just in case anyone sees something I'm not ^^; diagnostics-20170401-1751.zip
  4. That did the trick, thank you very much dmacias!
  5. Finally got around to upgrading from 6.2.4 to 6.3.2 today on my home server. I have to say I am really happy with all of the continued improvements, especially on the VM side of things. Did my due diligence and made backups of everything first of course, then ran with it. Didn't notice much of a difference until after I did all the VM upgrades, i.e. changing the machine type from i440fx-2.5 to i440fx-2.7, updating the virtio drivers from 1.118-2 to 1.126-2, updating the guest agent, of course running the MSI-util, and even updating video drivers to top it off. Noticeable improvement in VM performance. Games that had sluggish loading issues come up immediately, and single-core games (looking at you War Thunder) that almost always pegged 1 core to ~100% don't anymore. I haven't noticed any issues in regards to the NAS area, and my plugins and dockers seem to be working fine as well. The only issue I have is an old version of perl inside NerdTools that won't update or delete, but that was happening in 6.2 and is not related to this update. TL;DR No issues, VMs running better than ever. Loving everything you guys are doing with unRAID!! Keep up the great work!!
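For anyone wanting to make the same machine-type change, it lives in the VM's XML (the webgui XML view, or `virsh edit`). A minimal sketch of the edit, applied to a sample line (the exact `<type>` attributes vary per VM; this only illustrates the version bump described above):

```shell
# Hypothetical sketch: bump the QEMU machine type from i440fx-2.5 to
# i440fx-2.7, as in the upgrade above. Shown on a sample <type> line;
# on a real VM the same change is made via `virsh edit` or the webgui.
line="<type arch='x86_64' machine='pc-i440fx-2.5'>hvm</type>"
echo "$line" | sed 's/pc-i440fx-2\.5/pc-i440fx-2.7/'
# -> <type arch='x86_64' machine='pc-i440fx-2.7'>hvm</type>
```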
  6. Found the way to remove the Software Rendering Mode message by following the instructions on a Japanese website here. Basically: log out, select your user account, click the gear to change the session, and choose (for example) [Cinnamon (Software Rendering)]. The next time you log in, you will no longer receive the Software Rendering Mode message. Now I just need some help setting up ja as the default selection for VNC keyboards when making new VMs.
  7. I may have missed the answer to this during my searches, but is there a way to remove the Software Rendering Mode message on a Linux (Mint) VM running through noVNC on unRAID? I have a few VMs whose sole purpose is to be run using unRAID's default noVNC viewer; they do not need nor have a graphics adapter passed through. I know with VirtualBox VMs you can solve this just by installing the guest additions, but is there something I'm missing in regards to noVNC VMs? Also, I know I can change the noVNC keyboard from the default en-us to the ja (Japanese) layout when I edit the VM with the webgui, but is there a way to set ja as the default? I live in Japan and am always using a Japanese keyboard, so it gets really repetitive to have to change the keyboard every time I set up a new VM (which is surprisingly often). Thanks in advance!
  8. I'm not sure if this is a Manjaro issue or something I'm not setting up correctly in the VM settings, but every edition of Manjaro 16.10.3 that I've tried to make a VM of reacts the same way: some bootup text with many [OK]s, but ultimately stopping at a black screen with a white blinking cursor in the top left corner of the screen. The install grub never even shows up, so I'm never prompted to do anything. I've tried installing with 5 different editions on two separate machines, using different VM machine types and different core assignments, all with the same result. Googling found some other people with similar issues dating a few years back, but those solutions do not seem viable in the context of a VM. I'm curious if anyone else has had issues installing Manjaro into their unRAID VMs? If so, did you have to use specific machine settings to get it to work? Is anyone else seeing the same issues I am? Additional information: I have not had any issues installing various other Linux distros using the same VM settings (ARCH default template with additional cores and memory). I've also properly installed other Arch-based distros such as Antergos and Apricity with no issues. Whatever is causing it seems to be related specifically to Manjaro.
  9. Have really been looking into Fog the past few days after having some major Win10 1607 update and multi-PC backup woes at the office. After searching here and Google, I've found a few mentions of trying to run Fog in a container, but much like this post there haven't been any updates in over a year. I'm posting here to essentially bump the topic and hope that someone with Docker know-how might be interested in taking a stab at getting Fog to run in an unRAID Docker. Thanks
  10. Awesome, thank you for pointing me in the right direction as always RobJ! Just in case anyone else ever has this question and finds their way to this post, I'll toss in the quotes from bonienl that answered both garycase's question and mine on how to do this. I ended up making a bat file in my putty directory containing this text, then just put a shortcut on my desktop:

      plink.exe -ssh -pw passwordhere useridhere@serveriphere powerdown

      Quote: "Putty will accept command line parameters. I think what you want would be something along the lines of c:\pathtoputty\putty -ssh root@tower -pw password -m c:\pathto\textfilewithshutdowncommand.txt or something like that. I haven't actually tried it, so it may create a black hole and implode the universe, ymmv."

      Quote: "plink is the better option to automate remote execution of commands, it comes together with putty. plink -ssh -pw <password> root@<hostaddr> powerdown"
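The same idea can be sketched as a tiny generator for both bat files, covering the powerdown shortcut described above plus the reboot variant asked about in the original question. This is a hypothetical sketch: the placeholders (passwordhere, useridhere@serveriphere) are straight from the quoted command, and on Windows you would just create the two files by hand.

```shell
# Hypothetical sketch: write the two .bat files described above, one per
# action (powerdown and a reboot variant). Password and user@host are the
# post's own placeholders; plink's -ssh/-pw switches are as in the quote.
for action in powerdown reboot; do
  printf 'plink.exe -ssh -pw passwordhere useridhere@serveriphere %s\r\n' \
    "$action" > "$action.bat"
done
cat powerdown.bat
```

Pointing a desktop shortcut at each .bat then gives one-click powerdown and reboot.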
  11. Pulling from the depths on this one ^^; Apologies ahead of time on the necro, but since I'm looking for updated information with only a slight tweak I didn't really see a need to make a whole new thread about it. I'm not sure how to go about this, nor how much has changed in the instructions above, and with the only relevant posts being so old I thought I had better ask first. /end necro disclaimer.... I would like to have a shortcut on my win10 laptop, that can send a poweroff command to my unRAID server. (A separate reboot shortcut would be nice too.) Also, not sure if it's important, but for clarity of use - I use Putty for all my SSH, mc-wol bat to start up the server, and have Bergware's Dynamix System Buttons installed already. As I don't leave her running 24/7, and regularly shutdown the server when I'm not using it, the shortcut would be really convenient for me. Thanks in advance!
  12. 6.2.3 to 6.2.4 on two machines, no issues so far.
  13. Updated the office server from 6.2.2 with no issues to note.
  14. Quote: "It should beep in 6.2.x if initiated from webGui, but won't if initiated from command line or power button or apcupsd." Wait, what, there's a beep!? Also, two machines updated to 6.2.1 with no issues.
  15. Hey dlandon, wanted to give you some feedback on my experience with an NVMe drive under UD. I'm using a Plextor M8Pe PX-256M8PeG-08 NVM Express SSD, M.2 2280, 256GB. After install and initial boot, UD was able to detect the M.2 drive and gave me the option to format it as XFS. I formatted it and could see the partition from the GUI main page, but if I tried to mount the partition nothing would seem to happen. I added the mount to the go file, but after a reboot found an error in the syslog stating that the partition was invalid. At this point I couldn't get UD to reformat the drive, so I then followed the instructions given in dAigo's post. When I gdisk'd the device it showed MBR: present and GPT: invalid (if I recall correctly). I finished partitioning, formatting, and mounting the drive via ssh, then rebooted. UD now shows the drive, its partition, file size and space remaining, though temp is not displayed. The option to unmount is available as well, but I haven't played with that. I'm not sure if it was something I did, or if there was a glitch when it formatted the drive, but I wanted to pass you the info. Also, I had a migraine when I was doing this, so my apologies if my steps aren't clear or if I missed something.
  16. Found a solution for this on my own. I created a new VM template, and copied the XML from that. Then pasted that information over the broken VM references (sans the name and uuid). After doing this I had no problems deleting the faulty VM references.
  17. I was testing some different core settings recently on my home system. When I was finished I noticed that I could not delete those test VMs via the webgui, as both Remove VM and Remove VM + Disks did nothing. I manually deleted the vdisks from the cache drive via MC, but I still cannot remove the VM references. Also, the hard drive areas state 1/unknown, which would indicate that they are still being updated. The server has been restarted a few times since then but I still can't remove the two VMs. Since something has obviously gotten messed up, I wonder how I might go about removing them manually? I don't think it is relevant, but I had created the VMs by: adding a new VM via the gui; making a copy of a vdisk from one of my normal VMs; and R&R'ing the xml data from my stable VM with modified name, uuid, vdisk locations, and core pinning for pass-throughs and testing. Lastly, they ran fine during testing, I just don't know how to remove the references now. Any suggestions?
  18. Thank you for clarifying jonathanm! I reread the quote from limetech (below) a few times and think I see where my misunderstanding was. I think I was combining the two separate sentences and reading it as "sure you can try it out, but after a reboot it will change back to the default of RAID-1 AND auto rebalance." Seems like what it's actually saying is that only the 'Balance section options' will change back to default, so as long as you don't click balance again OR add another pool device, your RAID settings will remain as you had set them. Sorry for spending so much time on this, but thank you both again for helping me get a clearer understanding of it and also find my mistake! EDIT: Following the instructions in the link johnnie.black posted, I now have my 2 SSDs set up in a btrfs RAID-0. I did have to balance it a few times as RAID0 for it to finally update the Data section correctly (as was also noted in the other thread), but finally got it going correctly. The performance improvement is NOTICEABLE. Incredibly happy with this setup, thank you very much!
  19. Thank you very much johnnie.black, this was pretty much exactly what I was looking for! I did a bit more Googling and also found some articles expanding on my other questions about why it would be better for most to switch from HW RAID to a software-driven RAID solution. Since from your post it looks like changes are no longer reverted upon a reboot in unRAID, I think I'll be going the btrfs RAID-0 route. Thanks again.
  20. I had looked over some different threads (listed below) that discuss how the cache pool is currently implemented in unRAID and its current limitations. Those being that btrfs RAID-0 can be set up for the cache pool, but those settings are not saved and "revert" back to the default RAID-1 after a restart. I also found a much older topic, related to pre-6.0, that states you can set up a BIOS RAID-0, however "unRAID will not be able to spin it down, or get the disk temperatures..." With that in mind, I really have 0 concerns about having parity or redundancy on my cache drives, as I regularly back up my VMs and mover pushes any cached data over to the main array nightly. So, I would like to have my SSDs set up with RAID-0 (btrfs or BIOS) if possible, but have a few questions in regards to this. Is everything above still the current situation regarding RAID arrays as cache drives in unRAID? Does anyone currently have a BIOS RAID-0 array set as their unRAID cache drive, and if so do you have any issues with it? I know JonP made mention of not seeing the efficacy of having btrfs RAID-0 over "single", but I don't know enough about how that's implemented to be able to say I want one over the other. Has LT made any new comments as to when the btrfs RAID-0 option will be added (if it will be added)? Will we only be seeing "single"? What are the pros and cons of "single" vs. 0?

      Adding a second SSD under /mnt/cache WITHOUT RAID1: https://lime-technology.com/forum/index.php?topic=35394.msg329612#msg329612
      btrfs cache pool: http://lime-technology.com/forum/index.php?topic=34391.30
      BTRFS vs. ZFS comparison: http://lime-technology.com/forum/index.php?topic=42068.msg405365#msg405365
      Combine disk in an RAID-0 cache disk: https://lime-technology.com/forum/index.php?topic=7640.msg75397#msg75397
  21. Thank you for all your hard work getting this put together, LT. Already have RC5 running on both the office and home rigs. Updated without issues.
  22. Unfortunately I did not, but it would seem the most likely culprit, as I did check the modem (which was up) and nothing, including AP routers, could connect through it. As I haven't seen this before and haven't been able to reproduce it, I'll just have to call it a fluke at this point. I suppose it's unrelated to this thread now, but I do worry that the speculation might tie it to a power surge. I have both of my servers behind their own individual APC RS 550 UPS, neither of which loads beyond 50% of nominal power, so I'm hoping this wasn't a power supply issue. Anyways, thanks to you and John_M for the help
  23. Unfortunately there is no syslog for this, but after 2 days of running my office server on rc4 I had a hard lockup. The screen saver on the VM monitor was locked and unresponsive, I could not access the gui via an alternate device, could not ssh in from another laptop, and oddly we lost our internet connection as well while the server was in this locked-up state. I hit the reset button on the server and immediately the internet came back up; unRAID loaded as normal, with no errors or issues in the log file. This server is only running unRAID and a Win10 VM which has been there for about 5 months. Nothing is running in dockers, and the only plugins are things like CA, nerdpack, and system profile, all of which have been installed for months. The only thing that has changed recently was the update from rc3 to rc4. The server (like most) is attached to a router, which is in turn attached to the fiber modem. All other devices are routed through the router via ethernet or wifi. I know finding out what caused the hard crash is about impossible at this point without a syslog, etc., but to be honest, I'm more curious as to why the internet became inaccessible. Any ideas why the server box locking up would affect the internet connection?
  24. I cannot stress enough how important keeping your BIOS up to date is when it comes to unRAID. This may be my personal experience only, and it may only be related to my motherboards specifically, but on 2 separate systems I have had issues in unRAID that were corrected simply by updating the MB BIOS. Having come from the Windows world with the mindset of "only update the BIOS when your system has an issue", it was quite the change for me to actively check and keep my BIOSes updated. As I mentioned before though, multiple issues in 6.1.9 and the 6.2 betas were eliminated just from a BIOS update. In fact, I even updated before rc3, lol (no issues btw). I might also recommend that if you are updating from 6.1.9 to 6.2rc3 and suddenly nothing seems to work, try doing a clean install of 6.2rc3. This cleared up all of my remaining issues when I migrated from 6.1.9 to 6.2beta. Since getting onto the 6.2 line though, the GUI updater has worked flawlessly for me.