bigjme

Everything posted by bigjme

  1. Hi everyone, I have been reading around with regards to running web servers on unraid and wanted to open a discussion to find the issues people see with running web servers on unraid, and recommendations to help with security.
     I am one of those people who runs a number of web servers from home due to my job (web developer) - while they are not holding anything important, it is important to me that I don't compromise the security of other devices. A docker in itself is supposed to be an isolated environment, yet I am still seeing posts about them not being secure enough for this, and also reports that unraid itself is not secure enough for this. I am sure there are many others with web servers in dockers who are unaware of the security flaws that may come with it - so can anyone help enlighten me and others as to the security issues with doing this?
     Below is one of my test environments, which is kept to internal access - for now I can use this as a theoretical server for comments. Please point out any flaws you see in it, as it may give others a ground to start from (a rough sketch of this layout as docker commands is at the end of this list):
     My router is set to forward port 80 traffic to a docker within unraid on a random port
     The docker is set up using host mode so it has a network-accessible IP
     This docker has a custom web system which checks all requests going through it, validates them, and passes the requests on to other dockers that are running in bridge mode with no ports mapped - accessible only via unraid - apache, nginx etc.
     The receiving servers run a custom handler file which alters PHP properties, sanitizes any requests, then loads up the files requested
     Jamie
     p.s. Sorry if this has been gone over before or this seems like a slight deviation from this topic's intended use, but I think this is an important thing to cover
  2. Can you install the stats plugin and run it on another screen to check how much memory is being used? Keep it on real time (a couple of terminal commands that give the same picture are sketched at the end of this list).
     I have had this issue on my system where my VMs are, by the looks of it, leaking memory and using a lot more than they should - when I then do a transfer, the remaining memory is used up caching the transfer, and doing anything with the array from my VMs during this time will crash samba and can often cause the GUI to lock up.
     I have mentioned it a few times, but for now to get round this I had to decrease my VM memory. The issue is coming back though, as the longer I leave the system on the more memory my VMs use.
  3. Hi bshakil, thanks for the quick reply. That is not an issue as I can pass through the individual devices - I just saw mention of hubs and took a guess.
     Regards, Jamie
     P.S. Great plugin!
  4. Does anyone know if this plugin is still being worked on? I have 2 VMs which use hotplug all the time, but it would be great if I could replace my 2 USB controllers with this plugin.
     The main thing I need to do is pass USB hubs over; once the hub is passed through, I would presume all connected USB devices would be as well? I haven't tried this yet but I want to clarify before installing it.
     My USB hubs are not shown as devices that can be passed through under the VM manager window, if that makes a difference.
  5. The downgrade did indeed resolve my issue. I haven't shut down my system in a while to test the samba lockup issue, but I can safely say my VMs haven't had any lockups in rather a while.
     I haven't updated to anything newer since. I must say that I haven't really stressed the system since, other than one of the VMs playing Minecraft - I am due to start playing the new Doom game, which should give it a decent test soon.
  6. That would be interesting and would allow me, in theory, to have 10 hot-swap 3.5" drives and 4 SSDs (using the internal 3.5" mounts), or 9 HDDs and 12 SSDs, or more of a mix if I go with other adapters.
     Having never really used server cases before, does this case look reasonable? The only real cooling it will need to do is the HDDs, PSU, and the motherboard, as I will be running water pipes in for the rest.
     My main concern is that it's so deep I doubt it would ever fit in, say, a 16U enclosure.
  7. My issue is resolved, it seems, in beta 21 - one VM is using SeaBIOS and the other uses OVMF.
  8. The dev release is sticking around, it seems, even with new updates available - looks like the auto update system is still not working properly.
     I'm also having some issues, but only when playing videos on my Amazon Prime Fire TV stick (playback failures, audio playing fine but the video being almost at the end of the film at 20 seconds in).
  9. Yep, my last slot has a dual-slot GPU in it so I can use the others.
     I only have 3 HDDs at the moment but ideally would like to cater for at least 6, with maybe some 5.25" bays for possible SSDs for my cache.
     That case may do at a pinch, but I wouldn't mind more suggestions if anyone can find any - I know I can't.
  10. This is the only one I can find at the moment: https://www.xcase.co.uk/4u-rackmount-server-cases/chenbro-rm-41300-4u-with-optional-8-slot-gpu-window-109-00-chenbro.html
     My motherboard is an ASRock X99 WS-E. I will post a full parts list later today if it's helpful.
  11. Hi everyone, I've been planning to replace my massive Thermaltake X9 case for a while now with a 4U server case - it is simply too large for what it is providing, but my system is watercooled so I need to keep a fair size.
     My unraid server is mainly used for NAS/gaming, so as you can imagine it is fairly full PCI-wise. At present it has the following:
     GTX 750 Ti - dual slot
     GTX 780 - dual slot
     GT 210 - single slot
     2 x USB 3 controllers - single slot each
     In total I am using 7 PCI slots, but I am planning to get a new SAS controller soon using the final PCIe slot on my mobo, meaning that most server cases won't fit the 8 PCI slots required.
     Does anyone know of any server cases that are able to hold this many slots without going to something that has a dedicated level just for PCIe slots?
     The final plan will be to put my main system in one case with the HDDs, and have a second 4U case containing my radiators, pump and res, and have them connect externally using quick disconnects, as I cannot find a case able to support a 360mm fan wall in the middle as well as an E-ATX mobo.
     Many thanks in advance. I am in the UK if that has any effect on things.
     Regards, Jamie
  12. It seems so - my config should be in the diagnostics, I think? I can ping, SSH in, and get htop up; it's almost as if the array just stops dead - dockers etc. stop, but my VMs which are on the array (cache drive) still work as normal.
  13. Ok, so I am having another weird issue appear. I am trying to transfer files from an unraid share to a USB drive connected to one of my VMs; the USB drive is attached to a PCIe USB controller that is passed through directly.
     After transferring around 200GB, all of samba locks up and becomes inaccessible - I lose access to the unraid GUI as well, but everything else works: VMs, SSH, etc. I managed to get powerdown to run, which fetched the log but did not actually manage to shut down the system - it just sat there.
     I have checked my logs and can see nothing wrong or reported incorrectly. I have tried this transfer using both of the VMs on the system, but it seems to fail on both after 200GB. Can anyone see any issues in my logs? At the moment everything seems to be working solidly except for the actual NAS part (I can write as much as I like but get issues with reads).
     I am unsure if this is also occurring due to the fact my VMs seem to be leaking memory heavily: on a fresh boot my system will have 9GB of memory left, and after 7 days a 12GB VM jumps to 19GB of memory usage and the system has less than 1GB free. The issue I am having at present can happen after an hour, however.
     Regards, Jamie
     Edit: This issue does seem to be beta related, as I was able to transfer over 4TB of data to backup USB drives on 6.1.9.
     archangel-diagnostics-20160515-1642.zip
  14. So I posted a while back about my VM being shut off due to unraid stating it was out of memory - today this issue happened again, so I dropped my main VM's memory allocation from 14GB down to 12GB.
     I have the dynamix stats plugin installed and found that after this 2GB change, the stats showed I now had almost an extra 4GB of memory free to the system! This seemed very odd, so I decided to SSH into my server, run htop and look at my VMs (some commands to check the real per-VM usage are sketched at the end of this list):
     VM 1 - OVMF - allocated 12GB - htop VIRT reports 14.7G - htop RES reports 14.5G
     VM 2 - SeaBios - allocated 8GB - htop VIRT reports 9707M - htop RES reports 9452M
     Does anyone know why so much extra memory is being allocated? Setting my VM 1 to 14GB, the allocated memory jumps up to almost 18GB! No wonder I am running out of memory if 2 VMs are using 28GB of my 32GB when only 22GB is allocated.
     I have no idea if this is due to the new VM changes in the betas, but it is unusual, and I have only started to see these "out of memory" VM shutdowns recently in beta 21. I know it is a beta so this is not a complaint - just something interesting to look into.
  15. I have 2 VMs, each with a PCIe USB controller passed to it and a desk USB hub attached to that, so I have full hotswap and plenty of USB ports available.
  16. Ok, so here is one that I wouldn't say is a bug but rather a request. My system has 32GB of memory; I have allocated 14GB to VM 1 and 8GB to VM 2, so 22GB for VMs in total, with my system showing 30GB used and 2GB cached.
     Earlier today I had both my VMs doing something and decided to transfer a large amount of files from a VM to my storage array - I then had VM 1 close entirely, but VM 2 stayed on. Checking my logs, I can see the syslog reports it was out of memory and killed the qemu process which it saw using the most memory so it could continue the file transfer.
     It is great that it kept itself from running out of memory, but not so great that my VM was shut off (this VM has my video recording software on it for CCTV).
     It would be great if there was a way to set unraid to not use memory for file transfers in these sorts of systems, or at least prevent it doing so if there is only x amount left (a generic kernel-level workaround is sketched at the end of this list). I doubt this is something new in this beta, but it is a heads up for anyone else who may come across this in future, and also a sort of feature request for the next beta.
     Sorry if this is a little off topic for everyone, but it's the best place I could think to put it since this beta fixes so many qemu issues.
  17. So as a further update to my issue, things are now getting worse on the PCIe front and I can now replicate this issue time after time on my system; attached is my diagnostics file (see the lspci sketch at the end of this list for digging into the root ports named below).
     Apr 27 04:11:06 Archangel kernel: pcieport 0000:00:02.0: AER: Corrected error received: id=0010
     Apr 27 04:11:06 Archangel kernel: pcieport 0000:00:02.0: PCIe Bus Error: severity=Corrected, type=Data Link Layer, id=0010(Receiver ID)
     Apr 27 04:11:06 Archangel kernel: pcieport 0000:00:02.0: device [8086:2f04] error status/mask=00000080/00002000
     Apr 27 04:11:06 Archangel kernel: pcieport 0000:00:02.0: [ 7] Bad DLLP
     Apr 27 21:06:31 Archangel kernel: pcieport 0000:00:03.0: AER: Uncorrected (Non-Fatal) error received: id=0018
     Apr 27 21:06:31 Archangel kernel: pcieport 0000:00:03.0: PCIe Bus Error: severity=Uncorrected (Non-Fatal), type=Transaction Layer, id=0018(Requester ID)
     Apr 27 21:06:31 Archangel kernel: pcieport 0000:00:03.0: device [8086:2f08] error status/mask=00004000/00000000
     Apr 27 21:06:31 Archangel kernel: pcieport 0000:00:03.0: [14] Completion Timeout (First)
     Apr 27 21:06:31 Archangel kernel: pcieport 0000:00:03.0: broadcast error_detected message
     Apr 27 21:06:31 Archangel kernel: pcieport 0000:00:03.0: broadcast mmio_enabled message
     Apr 27 21:06:31 Archangel kernel: pcieport 0000:00:03.0: broadcast resume message
     Apr 27 21:06:31 Archangel kernel: pcieport 0000:00:03.0: AER: Device recovery successful
     I have never seen the error on requester device 10 before, or one through that root bus. In these events both my VMs lock up (paused under the GUI) and cannot be resumed. Unraid remains working perfectly but the VMs stop functioning.
     Things I have tried:
     Replaced the GPU I have issues with (750 Ti)
     Reflashed the BIOS multiple times
     Done a 24 hour memtest on the latest version across all 16 cores with no faults
     Changed the order of PCIe slots on my motherboard
     Removed the USB 3 controller from the VM that usually has trouble
     To replicate this issue I launch a game on the VM with the 780, then load a movie on Amazon Prime in the VM with the 750 Ti. After around 10 minutes of video playback the VMs will lock up and fail every time - this is something I can repeat.
     I have no idea if this is related to the beta or hardware failure on the motherboard, but if I can get some help debugging what is wrong it would be appreciated - especially as it only seems to affect VMs (dockers continue to run fine).
     Edit: It seems there is an issue with the latest nvidia driver and playing items such as Netflix and Amazon Prime on certain systems. This appears to be causing PCIe conflicts with other devices. To test this I have replaced my 750 Ti with an older AMD 5750 and downgraded my 780 driver to version 362, which people are reporting as a fix. I never considered the fact that a graphics issue on a guest would affect the host system so much (the 750 Ti over SeaBios is most likely the culprit). I am going to test this for a few days and see what happens - if all goes well with the AMD card, I will put the 750 Ti back in with the older drivers and see if that fixes the issue.
     archangel-diagnostics-20160427-2109.zip
  18. I have just started another memtest, as this crash has happened 6 times today. I opened memtest and pressed F2 to force the multi-core mode, and it is just sitting there doing nothing. Memtest is also reporting 20GB of memory when I have 32GB installed, so something is clearly messed up.
     Edit: So I did a memtest which hit over 1000 errors in the first 10 minutes in single thread, as multi wouldn't launch. I did a BIOS update and memtest in single core stopped throwing errors. I used the PC with no issues for a few hours, then had the same lockup - I have the latest memtest running right now with all 16 cores. It's able to do 2 passes per hour, so I have set it to run through 50 tests and see what happens.
     If this passes I'm not sure what to try next - unraid continues to function, but the VMs lock up the second that error is shown; it usually shows twice, and one VM will lock up before the other. I may try a new GPU for my one VM, and failing that I think it may be time for a motherboard RMA.
  19. Just to add to this one, I am running beta 21 of 6.2 and still get this error. One USB controller works fine (passed to the OVMF VM) but the other fails like this on boot of a VM, so no USB devices work (SeaBios) - a couple of commands to identify the device are at the end of this list:
     Apr 24 12:29:34 Archangel kernel: DMAR: DRHD: handling fault status reg 2
     Apr 24 12:29:34 Archangel kernel: DMAR: DMAR:[DMA Read] Request device [06:00.0] fault addr ee000
     Apr 24 12:29:34 Archangel kernel: DMAR:[fault reason 06] PTE Read access is not set
  20. I've seen this error on my system after a nasty motherboard death issue. Anyhow, mine was related to memory timing, and I was able to change some settings and have never seen it again. Even if you haven't changed anything hardware related, I'd still run Memtest to be certain.
     Hi Bungee, thanks for the reply. I've recently run a 24 hour memtest with no issues, but it is certainly possible my memory may have started to play up. For now I have done the following to test what was going on, just to rule out a few things before taking the home server away for memtests:
     Re-flashed the BIOS on my motherboard
     Moved the PCIe devices to different sockets in case something weird was going on
     Replaced the power cables and USB cables going to/from my USB 3 controllers
     Moved my gtx 210 so it is on its own under a single PLX chip
     Having googled the error, people are suggesting it is an nvidia driver issue under linux - most reporting it to be the host GPU (weirdly all under the same PCIe port and requester ID). Having an older gtx 210, it would make sense for it to be a driver support issue due to its age, but I don't think this is too likely given the way it has just started happening.
     I think my next steps are to run a memtest again and alter my system overclock slightly, just in case something is not playing nice. It seems too weird that VM 2 would lose all USB input after the lockup though - I did find a kink in the USB cable and part of it had been crushed, so right now I am not ruling out the USB cable sending rubbish to the controller.
  21. I have been coming across a weird issue lately which seems to have crept in with this new version. Everything was working fine for ages, but now I keep getting my VMs locking up with the following error message:
     Apr 23 15:49:00 Archangel kernel: pcieport 0000:00:03.0: AER: Uncorrected (Non-Fatal) error received: id=0018
     Apr 23 15:49:00 Archangel kernel: pcieport 0000:00:03.0: PCIe Bus Error: severity=Uncorrected (Non-Fatal), type=Transaction Layer, id=0018(Requester ID)
     Apr 23 15:49:00 Archangel kernel: pcieport 0000:00:03.0: device [8086:2f08] error status/mask=00004000/00000000
     I'm unsure how to trace what the requester ID actually goes back to (a decode of it is sketched at the end of this list), but when this error appears both my VMs become paused under the GUI and cannot be resumed. On an attempted resume I get the message below:
     internal error: unable to execute QEMU command 'cont': Resetting the Virtual Machine is required
     Not sure what to do with this, as it is rendering my VMs unusable - without a force stop they will not do anything. VM 2 (the one named "Cat - SeaBios") will start up after the failure but will refuse to see any USB devices attached to it.
     No idea what to do with this one at all - I was able to take a diagnostic after this error occurred.
     archangel-diagnostics-20160423-1555.zip
  22. Just wondering, but does anyone know why the attached section of my syslog would have been generated? I have not seen the system go past 87% memory usage or past 70% CPU usage (I was in a game in a VM with only half my system cores allocated when this was added to the log).
     log_entry.zip
  23. Hey CHBMB, this was actually the main thing I thought was at fault with my system for a while. I have mentioned a number of times when talking to limetech that I think my USB was having issues, but was always told it shouldn't cause any issues as the system runs from memory once booted. Everything was still functioning, like the VMs etc. - I was just not able to use the web GUI properly, and obviously the share privileges changed.
     My system crashing issue seems to have been resolved by killing all irqbalance processes before starting the array (the commands are sketched at the end of this list). I've been running my system since jonp asked me to run the function without an actual system crash.
     I have been hitting it a lot harder than normal, to be honest, and it seems to be coping. I have even added my 750 Ti back into the system to see if I can force a system lockup with the SeaBios VM running (this VM always used to cause it without fault), but no luck yet.
  24. I've done that and it passed everything. Very weird issue, but it's the first time I've ever seen it, so I'm not too worried about it happening again. Just posting as much information as I can to see if there are any premature warnings for stuff like this, or a resolution of the security issue it causes.
     Have you tried to use another USB port for your flash device?
     Yep. The USB booted up straight away after a reboot, so the USB is fine. My system is a bit of an odd one where, if I do a BIOS update, it won't see my USB or any other boot USBs for ages, then will randomly pick it up and work fine - so it may be down to that, but I'm not sure. I just wanted to check nobody could see any issues in the syslog that I couldn't.
  25. I'm aware of this; all data is backed up. I was just stating that it is a bit of a flaw that, if the USB is no longer readable, all data on the server can be accessed publicly by anyone on the network. This would most likely be an issue on every version of unraid.
     No idea if there is a way around this, but it was worth pointing out. I was not worried about data loss; I was more worried about the security issue this opens.
     I've done that and it passed everything. Very weird issue, but it's the first time I've ever seen it, so I'm not too worried about this happening again. Just posting as much information as I can to see if there are any premature warnings for stuff like this, or a resolution of the security issue it causes.
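
A rough sketch of the web server layout described in post 1, written as Docker CLI commands. The image and container names (front-proxy-image, site-a, site-b) are placeholders rather than anything from the original post; the point is only that the proxy runs on the host network while the back-end containers sit on the default bridge with no published ports, so they are reachable from the host but not directly from the LAN.

    # front-end proxy on the host network; it listens on whichever high
    # port the router forwards external port 80 to (placeholder image name)
    docker run -d --name front-proxy --network host front-proxy-image

    # back-end web servers on the default bridge with no -p mappings,
    # so only the host (and the proxy running on it) can reach them
    docker run -d --name site-a nginx
    docker run -d --name site-b httpd

    # the proxy forwards validated requests to the bridge IPs, e.g.
    docker inspect -f '{{.NetworkSettings.IPAddress}}' site-a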
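
For the real-time memory watching suggested in post 2, the stats plugin is one option; from a terminal, something along these lines gives the same picture using standard Linux tools:

    # refresh overall memory usage every second
    watch -n 1 free -m

    # list the biggest memory users (the qemu processes will show up here)
    ps aux --sort=-rss | head -n 15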
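
On the overhead question in post 14, a QEMU process is generally expected to sit somewhat above the configured guest RAM (its own heap, video RAM, I/O buffers), so VIRT/RES exceeding the allocation by a margin is not unusual in itself; how large that margin should be is the open question. A quick way to check the actual per-VM usage from the unraid shell (the domain name below is a placeholder - use whatever 'virsh list' reports):

    # resident and virtual size of each running QEMU process
    ps -o pid,rss,vsz,args -C qemu-system-x86_64

    # per-domain memory statistics through libvirt
    virsh list
    virsh dommemstat "YourVMName"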
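
There is no unraid setting I know of that does exactly what post 16 asks for, but on a generic Linux system the amount of RAM the kernel lets dirty write cache grow to can be capped with the vm.dirty_* sysctls. The numbers below are purely illustrative - this is a general kernel knob, not an unraid-specific fix, and it will not stop the OOM killer picking a VM if memory genuinely runs out:

    # show the current values
    sysctl vm.dirty_ratio vm.dirty_background_ratio

    # cap dirty write cache at roughly 5% of RAM and start background
    # flushing at 2% (example values only)
    sysctl -w vm.dirty_ratio=5
    sysctl -w vm.dirty_background_ratio=2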
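
For the AER messages in post 17, the two root ports named in the log (00:02.0 and 00:03.0) can be inspected directly to see which cards sit behind them, which narrows down whether the 750 Ti, the 780 or a USB controller is on the faulting port:

    # tree view of the PCIe topology; find 00:02.0 and 00:03.0 and note
    # which GPUs / controllers hang off them
    lspci -tv

    # detailed capability dump for a given root port, including the
    # Advanced Error Reporting status registers
    lspci -s 00:03.0 -vv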
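
The DMAR fault in post 19 at least names the device it relates to ([06:00.0]), so it can be identified and its IOMMU grouping checked from the unraid shell:

    # identify the device behind the fault (vendor/device IDs and driver in use)
    lspci -nnk -s 06:00.0

    # list the IOMMU groups to confirm the controller sits in its own group
    find /sys/kernel/iommu_groups/ -type l | sort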
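
On tracing the requester ID asked about in post 21: the id printed by AER is a 16-bit bus/device/function value - the top 8 bits are the bus, the next 5 the device, and the bottom 3 the function. So id=0018 decodes to bus 00, device 03, function 0, i.e. the same 00:03.0 root port the message is printed against. A small shell decode:

    # decode an AER requester id (hex) into bus:device.function
    id=0x0018
    printf "%02x:%02x.%x\n" $((id >> 8)) $(( (id >> 3) & 0x1f )) $((id & 0x7))
    # prints 00:03.0

    # then look the device up
    lspci -s 00:03.0 -vv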
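
For anyone wanting to try the irqbalance workaround mentioned in post 23, a minimal sketch is below. Killing irqbalance means interrupts are no longer rebalanced across cores, so treat it as a diagnostic step rather than a guaranteed fix:

    # confirm irqbalance is running
    ps aux | grep '[i]rqbalance'

    # stop it before starting the array
    pkill irqbalance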