Everything posted by vonfrank

  1. Damn.. Is there any indication that this was the reason for the crash? And what does it actually mean? I did notice it in the "Fix Common Problems" plugin, but I actually suspected it was something to do with the aggregation of my NICs, though only because I didn't know better.
  2. vonfrankstorage-diagnostics-20240207-1457.zip syslog-192.168.3.1-7.log
  3. Hi, I may have a server starting to die on me, and I am unable to resolve the issue. The server works fine most of the time, but once every week or two it becomes totally unresponsive. I am not able to enter the GUI, and all services running on the server die, whether containers or VMs. I set up a syslog server on one of my other machines to get some information, but my knowledge falls short once I have to start debugging Linux. Is anyone able to see whether this log looks out of the ordinary? This is what I see in the logs up until the freeze. The timestamp 11:18:59 is before the freeze, and 11:22:01 is after a cold reboot. Thanks..
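     In case it helps, this is roughly how I have been trying to narrow the log down myself (just a sketch of standard grep usage; the search patterns, and the assumption that the file uses the usual syslog timestamp format, are mine):
        # look for the kernel messages that usually accompany a hard freeze
        grep -iE "call trace|mce|oom|blocked for more than" syslog-192.168.3.1-7.log
        # or just pull everything logged in the minutes leading up to 11:18:59
        grep -E " 11:1[5-8]:" syslog-192.168.3.1-7.log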
  4. I know, but as far as I know I am not able to start my server with only cache drives. At least one drive is required in the array, and I would like to have an SSD-only server.
  5. Thanks a lot dlandon, I already have the plugin on both servers, and I will do some reading on the disk buffering values. Do you have a suggestion for my SSD array? Is this a good way to go, or should I move the SSDs to the cache instead, or move one of the SSDs into parity?
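     If I understand the plugin correctly (this is my assumption, not something from dlandon's post), the disk buffering values it exposes are the vm.dirty_* sysctls, which can also be inspected and changed by hand, for example:
        # show the current write-cache thresholds (percent of RAM that dirty pages may occupy)
        sysctl vm.dirty_background_ratio vm.dirty_ratio
        # lower them so writes are flushed to disk sooner - the numbers here are only an example
        sysctl -w vm.dirty_background_ratio=2
        sysctl -w vm.dirty_ratio=5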
  6. Hi, I just started rebuilding my two Unraid servers, which until now have been running in different locations; recently I decided to combine the two. Both servers are built around 8-thread Xeon CPUs, one on an ATX ASRock Rack motherboard and the other on a mini-ITX ASRock Rack motherboard, both with 32 GB RAM. I want the ATX/DDR3 server to handle storage and the mini-ITX/DDR4 server to run as a node, consuming the data provided by the storage server. The storage server runs an LSI SAS card to provide SATA ports for my 8x SATA HDDs, while the 2x parity HDDs and the 4x 250 GB Samsung SSDs will sit on the onboard RAID controller. The node has 2x 500 GB Samsung SSDs as its only storage, as I expect it to consume data from the storage server.

     My question is: how do I maximise the link between the two servers? At the moment a 1 GbE (~112 MB/s) link would be sufficient, but I don't always see a constant 112 MB/s between VMs on the two servers. I have also tried using Unassigned Devices to let Docker containers consume data from the storage server. If the node server is transferring more than 10-15 MB/s to the storage server, I see very high CPU spikes, and the web UI/entire server becomes very unresponsive for a while, as if some kind of cache or buffer is stacking up. I would suspect Unassigned Devices of using some kind of temporary storage space that fills up while the data is being moved over the network, and of needing all the CPU power to empty it, but my two SSDs are nowhere near full (about 10% of 1 TB at the moment). What would be the ideal way to set up my systems? Would a 10 GbE network card be the only way to improve this?

     Another thing I would like to know is the optimal setup for the two Samsung SSDs in my node server. I would like a "RAID 0"-style setup: 1 TB of storage from the 2x 500 GB SSDs and the performance of two SSDs in RAID 0, since they are not storing data, only hosting VMs and Docker containers. I currently have them in a BTRFS array as the main array, but I only see one of the SSDs filling up. Would it be better to put a "dummy" SSD/HDD in the array and the 2x SSDs in the cache, or simply put the SSDs in the array with one as parity (even though parity is not needed for this setup)?
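     For what it's worth, my rough understanding (an assumption on my part, not something confirmed in this thread) is that a two-SSD btrfs cache pool defaults to mirrored (raid1) data, and can be converted to raid0 with a balance, something like:
        # check which data/metadata profiles the pool currently uses (assuming it is mounted at /mnt/cache)
        btrfs filesystem df /mnt/cache
        # convert data to raid0 for capacity/speed while keeping metadata mirrored - only a sketch
        btrfs balance start -dconvert=raid0 -mconvert=raid1 /mnt/cache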
  7. Hi, I have been using Unraid for some time, and I modified one of my builds with an Intel I350-T4, a 4-port Ethernet PCIe card, for an additional 4x 1 Gbps ports besides the 2x onboard NICs (Intel i210 + Intel i219) and the 1x IPMI NIC (RTL8211E). The motherboard is an ASRock E3C236D2I. Whenever I boot into Unraid with the PCIe card inserted, my two onboard Intel NICs turn off in Unraid. Under Network they show "shutdown (inactive)", even though I can still use one of the Intel NICs for IPMI. Is this a flaw in my motherboard, or is it possible to use all the NICs? I can see from the indicator lights that the onboard NICs are switched off once Unraid reaches a certain point during boot, so the NICs themselves work. I am also pretty sure I have been able to use all 6 NICs before. Can anyone help me? Thanks..
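     In case it is useful, this is how I have been checking what the kernel actually sees (just standard commands; mapping the i210/I350 to the igb driver and the i219 to e1000e is my assumption for these chips):
        # list every interface the kernel registered, including ones marked DOWN
        ip link show
        # look for driver messages about the Intel NICs during boot
        dmesg | grep -iE "igb|e1000e"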
  8. I have been struggling with this for almost a year now.. MSI doesn't solve the problem, and more RAM and cores doesn't either..
  9. Hi, I have been doing a bit of research on this, and I can see a lot of people are having the same error even though their hardware is fine. I have been running memtest and everything passes, but this message keeps filling up the terminals in my Linux VMs. It is very annoying. My IPMI log also says the server is fine and doesn't record anything like this. Is there a solution to this, or is Linux just totally useless on Unraid??
  10. Everything you add in the template ends up in the run command. Every time you want to update the container you will need to remove --auth. Okay, thank you! Do you know if there is another workaround?
  11. Whenever I try to update.. it tries to update from "mongo --auth:latest" and not "mongo:latest".....
  12. What a shame.. Does this mean the repository box is actually placing its input in the command-line argument part of a docker command? If you understand what I mean..
  13. In reply to the question "Just playing around with this out of curiosity, looking at the readme there's nothing about --auth, where are you getting that from?" - I'm not an expert, but I got it from the Docker Hub page.. About 40% down https://hub.docker.com/_/mongo/ there is a section on authentication.
  14. Hmm.. I tried this and got a little further.. I tried "--auth" and "-d mongo --auth". The last one made it able to complete, but not start. But I saw a pattern: if I put it behind the repository field it works (but then I can't update, plus other small errors), and the --auth ends up behind the mongo command. If I put it in the Extra Parameters field, it goes in front of the mongo command. I need --auth to be the last part of the command. Any ideas?
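     For anyone finding this thread later, this is my rough picture of how the template fields end up in the docker run command (an assumption based on the behaviour described above, not documentation; I believe newer Unraid versions also have a "Post Arguments" field in the advanced template view that lands after the image name):
        # where the pieces appear to end up - a sketch, the bracketed field names are mine
        docker run -d --name='mongodb' [Extra Parameters] [ports/volumes/variables from the template] [Repository] [Post Arguments]
        # so putting --auth in Post Arguments should give something like
        docker run -d --name='mongodb' -p 27017:27017 mongo:latest --auth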
  15. Hmm.. I don't know if it is actually a variable or if that is just my word for it. How do I check? It's this Docker image, and the authentication part is described about 40% down the page.. Thanks.. https://hub.docker.com/_/mongo/
  16. Hi there, I am creating a Docker container manually in the wizard on my Unraid system. I am using a Docker Hub repository, in my case the official MongoDB container. I want to add the authentication variable (--auth) to my run command, but I can't figure out where to put it. The original string for running the command is "$ docker run --name some-mongo -d mongo --auth". I have managed to make it work by putting --auth behind the repository field, but I have to delete it every time I update my container, and my icon doesn't work. Isn't there a more "proper" way to add this variable? This is my custom template so far. Thanks!
  17. Hi there, has anyone been playing around with deploying a dedicated server in a Docker container? Otherwise, what is the best way to deploy a CS:GO server from Unraid? Thanks
  18. Hi there, is there a way to restrict specific shares to specific users of the Unraid system? So that when you try to connect to a share it asks for a username and password, and only users with those credentials can access the share, instead of everyone having access to all shares? I am not looking for an ownCloud service.. Thanks
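     From what I can tell, this is per-share Samba access control underneath; a conceptual smb.conf fragment (the share name, path and user names here are made up for illustration, and on Unraid the GUI generates this kind of thing for you when a share is set to Private) would look roughly like:
        [media]
            path = /mnt/user/media
            browseable = yes
            # only these users may connect; everyone else is refused
            valid users = alice bob
            read only = no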
  19. Would OpenVPN then just run in a Docker container?
  20. Well.. Now it has been removed from the exposed port. I did this because I have a very unstable router, and only here in the initial phase, for testing purposes. I cannot access my router when I am away from home (of course not), but I can reset it remotely. Every time I reset the router, all my port-forwarding settings are lost, so I would not be able to access my server and could not do any work for a couple of days. I needed access to FTP, the web GUI, all Docker apps, and my VMs. Because I was messing around with all kinds of settings in this initial phase, I thought it would be easiest to just expose the entire server with a good password; luckily my server doesn't contain any personal or sensitive data, and nobody even broke my root password.. Now it is closed and I can't access my server until I get home tomorrow.

     What would you do? I can't access FTP even if I forward port 21 on my router, but I can access my Docker containers by forwarding all their ports.. How do I access the web GUI, other than remote-desktopping into a VM that sits on the same network as my server? And if I want to access my shares, how do I do that? It would be nice if there was some kind of Docker container that could expose shares to different kinds of users? Thanks..
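     One thing I have been looking at for the web GUI (just a sketch, not advice: it assumes SSH on port 22 is the only thing forwarded to the server, and the hostname here is made up) is an SSH tunnel, so the GUI itself never has to be exposed:
        # forward a local port through the SSH connection to the Unraid web GUI on port 80
        ssh -L 8080:localhost:80 root@my-home-address.example.com
        # then browse to http://localhost:8080 on the machine the tunnel was started from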
  21. Hi there, currently my entire server is exposed via a DMZ rule on my router, for testing. I know this is very unsafe, and I have a very strong password on my root user. I was taking a look at my log files, and they are endless! Are these log entries attempts to SSH and telnet into my server? Should I remove my server from the exposed port right away? And isn't SMB exposed without any login?
  22. And am I still able to develop Core apps in my "old" Visual Studio Enterprise 2015?
  23. Neither of the projects I have finished so far is supposed to run on my Unraid machine, so this is perfect! I will have a look at ASP.NET Core. What exactly is the difference? Thanks!
  24. Hi there, I want to deploy my ASP.NET web applications (nothing big, just for fun and practice).. For my PHP pages I have an Apache server running in a Docker container, and a MySQL server in another container. Is there some kind of IIS Docker image? Or any other way to deploy ASP.NET web apps on Unraid? Or should I install a Core version of Windows Server 2012 R2 in a VM with a 35 GB SSD, 1 core and 1 GB of RAM? Thanks!
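     For the ASP.NET Core route that was suggested, a minimal Dockerfile sketch using today's official .NET images (the project name MyApp and the 8.0 tags are placeholders I picked, not something from this thread):
        # build stage: publish the app with the SDK image
        FROM mcr.microsoft.com/dotnet/sdk:8.0 AS build
        WORKDIR /src
        COPY . .
        RUN dotnet publish -c Release -o /app
        # runtime stage: only the ASP.NET runtime is needed to serve it
        FROM mcr.microsoft.com/dotnet/aspnet:8.0
        WORKDIR /app
        COPY --from=build /app .
        EXPOSE 8080
        ENTRYPOINT ["dotnet", "MyApp.dll"]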
  25. No (non-beta) version of Samba (the Linux program that implements SMB) supports multichannel yet, so Unraid doesn't support it yet either. When that happens is really up to the team behind Samba. And isn't FTP a faster protocol? Or any other protocol?