Marcel

Members
  • Posts

    50
  • Joined

  • Last visited

Everything posted by Marcel

  1. Hi all, I found this great post about keeping vdisks (qcow2 images) sparse - meaning that if they have grown because more space was being used, they shrink back as soon as that space is freed again. Why is this not available as an option within the VM settings dialog? Is there any downside to this approach? Its absence means that if I have multiple VMs using this approach, I can't use the settings dialog for any of them anymore - even for small things like temporarily connecting a keyboard or something. Cheers, Marcel
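     For anyone searching later: the approach from that post boils down to editing the VM's XML (virsh edit) so the guest's TRIM commands punch holes in the qcow2 file again. A minimal sketch, assuming a vdisk at the default unRAID location; path, device names and cache mode are examples:

     ```xml
     <!-- Example manual edit via `virsh edit <vmname>`: discard='unmap'
          lets freed guest blocks be released from the qcow2 image.
          It needs a bus that passes TRIM through, e.g. virtio-scsi. -->
     <disk type='file' device='disk'>
       <driver name='qemu' type='qcow2' cache='writeback' discard='unmap'/>
       <source file='/mnt/user/domains/Win10/vdisk1.img'/>
       <target dev='hdc' bus='scsi'/>
     </disk>
     ```

     This is exactly the kind of hand-made XML change the settings dialog does not know about, which is why opening the VM in the dialog can overwrite it.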
  2. @tdallen: thanks for the quick reply! I don't see how a Linux distro on a USB stick would help at all. The tool from ASUS to actually fix the Intel ME is only available for Windows anyway. Also, I did get the detection tool to run under unRAID just as well. The question is why it reported not having access to the ME. My main question was more directed at understanding whether unRAID with the latest Linux kernel is already doing something about the Intel ME vulnerabilities or, if not, what LimeTech's idea on how to fix it is. Cheers, Marcel
  3. Hi all, last year's issues with Intel's Management Engine have been handled differently by different mainboard manufacturers. While some include a fix within a BIOS upgrade, ASUS supplies an executable for Windows 10 to fix the ME. Here is my problem: I am running an unRAID server as a host for multiple Windows 10 VMs. Within a VM even the detection tool for the vulnerability, supplied by Intel, does not work, as it claims the ME is not accessible. So I tried to put the ME drivers, the detection tool and the fix onto a Windows 10 PE installation on a flash drive. That did not work either; all executables report "wrong Windows version". Finally I also got a Linux version of the detection tool from Intel (just a Python script) and ran that in a terminal directly within unRAID. It ran, but reported "no access to the ME" - the same problem as from within a Windows VM. So here is my question: how can I fix these Intel ME issues on a system running unRAID? I don't suppose the Linux kernel is somehow taking care of it? My only idea at this point is to create a Windows installation on a HDD and boot the system from that (with the unRAID flash drive not plugged in). But that seems to be too much effort just to install a tiny patch. Does anybody know more or have a better idea? Cheers, Marcel
  4. @johnnie: thanks for the quick answer. Shouldn't that really be done by the system?!
  5. Hi all, I was under the impression that unRAID 6 takes care of TRIM for SSDs using BTRFS by now. Am I wrong, and is manual trimming / scheduling via the Dynamix TRIM plugin still necessary? I am running unRAID 6.3.3. Thanks & regards, Marcel
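     In the meantime, running a trim manually is harmless and immediately shows whether the device supports it. A minimal sketch, assuming the btrfs pool is mounted at /mnt/cache (mount point and schedule are examples):

     ```shell
     # Trim the pool once by hand; -v prints how many bytes were discarded:
     fstrim -v /mnt/cache

     # Or schedule it via cron instead of the plugin, e.g. weekly:
     # 0 3 * * 0  /sbin/fstrim /mnt/cache
     ```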
  6. Hi everybody, I was wondering if unRAID supports using libguestfs to mount and manage various image file formats. (http://libguestfs.org/) I am having trouble with a Windows 10 VM within a qcow2 image file and libguestfs would probably be of great help. Cheers, Marcel
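     If the libguestfs tools were available, the typical workflow would look roughly like this - a sketch only, assuming the image path and mount point shown (guestmount requires FUSE support):

     ```shell
     # List the filesystems inside the image first:
     virt-filesystems -a /mnt/user/domains/Win10/vdisk1.img --long

     # Mount the guest's filesystems read-only for inspection:
     mkdir -p /tmp/guest
     guestmount -a /mnt/user/domains/Win10/vdisk1.img -i --ro /tmp/guest

     # ...inspect or copy files out...

     # Clean up:
     guestunmount /tmp/guest
     ```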
  7. Hi, when you create a (Win10) VM with the webGUI, it also creates a file in the /etc/libvirt/qemu/nvram folder. That file's name is based on the UUID of the VM (followed by '_VARS-pure-efi.fd'). It is then referenced within the xml config of the VM. When I keep a fully installed and configured VM on my storage server and copy it to my VM server, I use virsh define to make the VM available based on the xml file (which I created earlier using virsh dumpxml). That works fine, but the file in the nvram folder does not get created, and so the VM won't start. My current solution: together with the disk image files and the config xml, I also keep a copy of that _VARS-pure-efi.fd file, which was created via the webGUI initially. So when I copy a VM back to the VM server, before I run virsh define I manually copy that _VARS-pure-efi.fd file to the folder /etc/libvirt/qemu/nvram. This works well so far, but I would like to know if there is a better solution. Thanks & regards, Marcel
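     For reference, the restore procedure as a script - a sketch under the assumption that the backup folder and VM name look like this (adjust paths to your shares):

     ```shell
     # Restore a VM (example name "Win10") onto the VM server.
     BACKUP=/mnt/user/vmbackup/Win10
     NVRAM_DIR=/etc/libvirt/qemu/nvram

     # 1) Read the VM's UUID out of the saved domain xml
     #    (created earlier with: virsh dumpxml Win10 > Win10.xml)
     UUID=$(sed -n 's|.*<uuid>\(.*\)</uuid>.*|\1|p' "$BACKUP/Win10.xml")

     # 2) Put the saved OVMF vars file back where libvirt expects it
     cp "$BACKUP/${UUID}_VARS-pure-efi.fd" "$NVRAM_DIR/"

     # 3) Register the VM from its xml definition
     virsh define "$BACKUP/Win10.xml"
     ```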
  8. Thanks! That makes sense.
  9. Thanks for the feedback so far! @bonienl: I am aware of that. It's just one checkbox within the router software to temporarily allow the necessary things for the unRAID servers. @NAS: I am aware of that too. I am hoping for a stable version to be released soon ;-) BTW, do you know why LT requires internet access for beta or RC versions?
  10. As a general measure the router does not do any port forwarding, so as far as I understand the unRAID servers are not directly exposed to the internet. The listed items are blocked via a rule in the router - not for the entire LAN, just specifically for the unRAID servers. This prevents, for example, opening websites from within the local unRAID GUI. No, I have not allowed/opened any port specifically. I just did not block SMTP, DNS and NTP, since these will be used. As for SMTP: within the notification settings in unRAID I have configured the SMTP server for the e-mail account I am using for the notifications. As for NTP: I just left the default setting. So yes, it is pointing to an NTP server on the internet.
  11. I have spent the last couple of weeks setting up and testing a VM server and a storage server, both based on unRAID. They are both supposed to contain 'mission critical' data. (I keep my movie collection and stuff like that on a conventional NAS.) In the forum I have read lots of statements like 'unRAID is definitely not safe' and the like. Network security not being my field of expertise, I would appreciate feedback on whether what I did to provide the best possible security for my two unRAID servers makes sense and whether I have to do more. Thanks a lot! Here is what I did:
     Regarding the internet:
       • both servers live behind the router firewall
       • no ports are forwarded to them whatsoever
       • a specific rule in the router software for both servers blocks: all websites (HTTP, TCP ports 80, 3128, 8000, 8001, 8080), all secure websites (HTTPS, TCP port 443), news forums (NNTP, TCP port 119), file transfers (FTP, TCP port 21), telnet (TCP port 23), SNMP (UDP ports 161, 162), VPN-PPTP (TCP port 1723), VPN-L2TP (UDP port 1701)
       • so basically the only things I have allowed are: SMTP (TCP port 25) for the servers to be able to send me e-mail notifications on the UPS status etc., DNS (UDP port 53) so the unRAID servers can receive a name from the DNS server on the router to be displayed on devices within the LAN, and NTP (UDP port 123) for time synchronization
     Regarding the LAN (to be implemented after testing and setup are completed):
       • flash drives not exported
       • all exported user shares use 'private' mode
       • using Yubikeys for ultra-strong root passwords
     Both unRAID servers have static local IP addresses and do not use the DHCP server in the router.
  12. Some interesting info regarding this discussion can be found here (unRAID 6.2) http://lime-technology.com/forum/index.php?topic=50344.msg484309#msg484309
  13. ssh did not work; telnet does, though. It would still be nice to be able to configure dedicated shell windows for more than the local server to be accessible from the Lime menu - if possible, also windows for Midnight Commander and htop. In a setup with multiple servers, the one with a local keyboard and mouse could be used to do maintenance on the other headless servers. Of course, if this would be more than a little configuration work on the Lime menu, then there are other, more urgent features...
  14. Hi, I am running two unRAID servers within the same network. One of them is running headless (storage server). On the other one I am running the very convenient local webGUI. Currently the included LXTerminal is configured to access only the local machine. It would be nice if it could be configured to talk to a different server on the network. That way I would have console access to both my servers. Thanks & regards, Marcel
  15. @CHBMB: thanks for the reply! More important than the RAM is probably the CPU load (RAM can be increased more easily). I have some new findings about it myself: using htop within the local GUI, I could see that with the webGUI open and idle, the CPU load was floating around 2-3%. When I closed Firefox, though (something totally reasonable to do while you are not actually using the webGUI), the CPU load dropped to well below 1% and even the RAM demand went down again. The Win10 VM can run with the previous RAM assignment. So, problem solved. The local webGUI including the shell window and htop is awesome!
  16. Hi, I have just started using the local webGUI and in general I am very happy with it! I have noticed, though, that it seems to demand more of the RAM available to unRAID. In a configuration with a VM using most of the RAM - previously running perfectly stable with just the unRAID console active - the VM now crashes because the system runs out of memory. That is fine for me and also understandable (the webGUI needs more RAM than a console). But here is my question: is there any information on how much CPU overhead the Firefox-based webGUI creates when idle (the webGUI, not the system)? Thanks & regards, Marcel
  17. @RobJ: thanks! So I assume that means I did not overlook something, and the local webGUI currently does not care about the kbd layout setting (?)
  18. Yes, RAM is cheap and more RAM usually can't hurt, so get as much as you can afford. But I don't see why unRAID itself, while doing nothing else than running one VM at a time, should utilize more than 2GB of RAM. This would leave 6GB of RAM for the Win10 VM. I am running such a configuration and it runs perfectly fine (engineering sw, games, video editing). In my opinion, one (of the many!) advantages of unRAID is that when used as the hypervisor layer for running VMs, it needs very few resources compared to other systems like Windows Server. More RAM would be necessary in the other use cases that I mentioned.
  19. I don't think more than 8GB of RAM is required for VMs necessarily. It rather depends on what exactly you are planning to do with them. If you will be running only one VM at a time then 8GB probably will be fine unless you will perform something within that VM that can benefit from large amounts of RAM. In case you plan to run multiple VMs at the same time or one VM while the server is transcoding streams etc. at the same time, then more than 8GB of RAM is probably required.
  20. Hi all, I have added an SSD to the system and mounted it via the UD plug-in on unRAID 6.2 RC1, formatted as btrfs. Now my questions are: is that all it takes to run an SSD outside of the array? Is btrfs taking care of TRIM by default? Or do I need to configure something else for TRIM etc.? Thanks & regards, Marcel
  21. OK... I installed the Nerd Tools plug-in, then installed the kbd package, and that did let me change the keyboard layout for the console successfully. I would still suggest that at least the kbd package should make its way into core unRAID - nothing nerdy about changing the keyboard layout! To make the setting persistent I had to edit the go file accordingly. I think if there were a selection field in the webGUI and the kbd package installed by default, it would make things much easier. What is unfortunate, though, is that the otherwise awesome local webGUI doesn't care about the changed keyboard layout. Is there any way to fix that? Did I miss something? Thoughts? Cheers, Marcel
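     For anyone finding this later, the persistence step is just one line in the go file; the layout name below is an example - pick yours from the keymaps shipped with the kbd package:

     ```shell
     # Appended to /boot/config/go so the console layout survives a reboot
     # (assumes the kbd package is installed at boot, e.g. via Nerd Tools):
     loadkeys de-latin1
     ```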
  22. @BritT: Thanks a lot for the detailed explanation! Just for clarification: if the drive does the marking of bad sectors itself I assume this would also happen during the unRAID clear process? Or is this somehow actively triggered by the pre-clear script? Given your explanation I'll pre-clear any drive before adding it to the array in the future. Cheers, Marcel
  23. Before I started this thread, after reading through existing ones, I got the feeling that there isn't one community-approved best practice - everybody has their own opinion. It seems that is just the case. I do agree very much with what testdasi and trurl have pointed out:
     1) the process to clear a drive in unRAID actually has nothing to do with testing a drive; it is just a necessary step (writing zeros to every bit) to enable the parity mechanism
     2) a nice side effect of the clear process is that it is some kind of stress test - under normal conditions (besides a reconstruct) it would never happen to a drive that all of its bits get written in one session
     3) by using the pre-clear plug-in and potentially increasing the number of passes, the stress-test character of the procedure can be emphasized
     4) another benefit of pre-clearing a drive with the plug-in before keeping it as a cold spare is that when the time comes for it to be used as a replacement, the process is much faster, and it is also unlikely that you get a bad surprise with the drive failing right during unRAID's clear process
     I hope that sums it up correctly (?) Maybe I didn't make it clear enough where my own confusion was in my original post. My question is whether the unRAID clear and/or the pre-clear plug-in actually mark bad sectors in some way, so that during normal operation unRAID won't try to write/read stuff from them (like chkdsk does for Windows systems). @BritT: I can understand your rationale given your own experience. I think in the end it comes down to how much effort/time you want to spend on testing.
  24. I would like to know about the current (RC1) shutdown behaviour too. Both in case of the power button being pressed and a shutdown via the UPS. Besides the array being properly stopped before shutdown, what about running VMs (Win10) ? Cheers, Marcel
  25. Hi, I have read lots of threads about pre-clearing, formatting, parity sync etc. But at this point it is not exactly clear to me what the best practice is for adding a new drive. By adding a new drive I don't just mean adding a single drive to a running unRAID array with lots of existing drives; the question is more general -> starting with the very first drive when setting up a new unRAID server. I have read that pre-clear only does a stress test of the drive. Therefore my question is: how can I make sure that any potential bad sectors of a new drive get diagnosed and marked as such right from the moment I add it to the system, so I won't run into surprises later? Clarification is much appreciated... Cheers, Marcel