vonfrank

Members
  • Posts: 52
  • Joined
  • Last visited

Converted

  • Gender: Undisclosed


vonfrank's Achievements

Rookie (2/14)

0 Reputation

  1. That would be a shame.. Is there any indication that this was the reason for the crash? And what does it mean? I did notice this in the "Fix Common Problems" plugin, but I actually suspected it had something to do with the aggregation of NICs, though only because I didn't know any better.
  2. vonfrankstorage-diagnostics-20240207-1457.zip syslog-192.168.3.1-7.log
  3. Hi, I may have a server starting to die on me, and I am unable to resolve the issue. The server works fine most of the time, but once a week or once every second week it becomes totally unresponsive. I am not able to enter the GUI, and all services running on the server die, whether containers or VMs. I started a syslog server on one of my other servers in order to get some information, but my knowledge falls short once I start debugging Linux. Is anyone able to see whether this log looks out of the ordinary? This is what I see in the logs up until the freeze. The timestamp 11:18:59 is before the freeze, and 11:22:01 is after a cold reboot. Thanks..
  4. I know, but as far as I know, I am not able to start my server with only cache drives. It is required to put at least one drive in the array, and I would like to have an SSD-only server.
  5. Thanks a lot dlandon, I already have the plugin on both servers, and I will do some reading on the disk buffering values. Do you have a suggestion for my SSD array? Is this a good way to go, or should I move the SSDs to the cache instead, or move one of the SSDs into parity?
  6. Hi, I just started rebuilding my two Unraid servers, which have been running in different locations so far, but recently I decided to "combine" the two servers. Both servers are built around 8-threaded Xeon CPUs, one on an ATX ASRock Rack motherboard and the other on a mITX ASRock Rack motherboard, both with 32 GB of RAM. I want the ATX/DDR3 server handling storage and the mITX/DDR4 server running as a node, consuming the data provided by the storage server. The storage server runs an LSI SAS card to provide SATA ports for my 8x SATA HDDs, while my 2x parity HDDs sit on the onboard RAID controller together with the 4x 250 GB Samsung SSDs. The node has 2x 500 GB Samsung SSDs as its only storage medium, as I expect the node to consume data from the storage server.
     My question is: how do I maximise the link between the two servers? At the moment a 1 GbE (~112 MB/s) link would be sufficient, but I don't always see a constant 112 MB/s transfer speed between VMs on the two servers. I have also tried using Unassigned Devices to have Docker containers consume data from the storage server. If the node server is transferring more than 10-15 MB/s to the storage server, I see some very high CPU spikes, and the webUI/entire server becomes very unresponsive and slow for some time, as if some kind of cache/buffer stacks up. I would suspect Unassigned Devices uses some kind of temporary storage space, and while the data is transferred from this folder to the network drive over time it fills up and has to use all the CPU power to empty it, but my two SSDs are nowhere near full (about 10% of 1 TB at the moment). What would be the ideal way to set up my systems here? Would a 10 GbE network card be the only way to improve this? (See the link-test and BTRFS pool sketch after this list.)
     Another thing I would like to know is the optimal setup for the two Samsung SSDs in my node server. I would like a "RAID 0"-style setup, giving 1 TB of storage from my 2x 500 GB SSDs and the performance of two SSDs in RAID 0, as they are not storing data, only hosting VMs and Docker containers. I have them in a BTRFS array as the main array, but I only see one of the SSDs filling up. Would it be better to put a "dummy" SSD/HDD in the array and the 2x SSDs in the cache, or simply put my SSDs in the array with one as parity (even though parity is not needed for this setup)?
  7. Hi, I have been using Unraid for some time, and I modified one of my builds with an Intel I350-T4, a 4-port Ethernet PCIe card, for an additional 4x 1 Gbps ports besides the 2x onboard NICs (Intel i210 + Intel i219) and the 1x IPMI NIC (RTL8211E). The motherboard is an ASRock E3C236D2I. Whenever I boot into Unraid with the PCIe card inserted, my two onboard Intel NICs turn off in Unraid. When I go to Network, it says "shutdown (inactive)", even though I can use one of the Intel NICs for IPMI. Is this a flaw of my motherboard, or is it possible to use all the NICs? I can see by the NIC indicator lights that the NICs are turned off once Unraid reaches a certain point during boot, so the NICs themselves work. I am also pretty sure I have been able to use all 6 NICs before. Can anyone help me? Thanks.. (See the NIC diagnostic sketch after this list.)
  8. I have been struggling with this for almost a year now.. MSI doesn't solve the problem, and more RAM and cores doesn't either..
  9. Hi, I have been doing a bit of research on this, and I can see a lot of people are having the same error even though their hardware is fine. I have been running memtest and everything passes, but this message keeps filling up the terminals in my Linux VMs. It is very annoying. My IPMI log also says the server is fine and doesn't record anything like this. Is there a solution to this, or is Linux just totally useless on Unraid??
  10. Everything you add in the template ends up in the run command. Every time you want to update the container you will need to remove --auth. Okay, thank you! Do you know if there is another workaround?
  11. Whenever I try to update.. it tries to update from "mongo --auth:latest" and not mongo:latest...
  12. What a shame.. Does this mean the repo box is actually placing its input in the command-line argument field of a docker command? If you understand what I mean..
  13. Hmm.. I tried this and got a little further.. I tried " --auth" and "-d mongo --auth". The last one was able to complete, but not start. But I saw a pattern: if I put it behind the repository field (here it works, but it can't update, and there are other small errors), the --auth comes after the mongo command. If I put it in the Extra Parameters field, it goes in front of the mongo command. I need --auth to be the last part of the command. Any ideas? Just playing around with this out of curiosity, looking at the readme there's nothing about --auth, where are you getting that from? I'm not an expert, but it's from the Docker Hub page.. About 40% down https://hub.docker.com/_/mongo/ there is a section on authentication. (See the docker run sketch after this list.)
  14. Hmm.. I tried this and got a little further.. I tried " --auth" and "-d mongo --auth". The last one was able to complete, but not start. But I saw a pattern: if I put it behind the repository field (here it works, but it can't update, and there are other small errors), the --auth comes after the mongo command. If I put it in the Extra Parameters field, it goes in front of the mongo command. I need --auth to be the last part of the command. Any ideas?
  15. Hmm.. I don't know if it is a variable or if that is just my word for it. How do I see this? It's this Docker image, and the authentication part is described about 40% down the page.. Thanks.. https://hub.docker.com/_/mongo/
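
A note on post 6: before concluding that a 10 GbE card is the only fix, it can help to separate the raw link speed from the pool layout. The sketch below is only an illustration: it assumes both servers have iperf3 installed, that the storage server is reachable at 192.168.3.2 (a placeholder address), and that the two-SSD BTRFS pool is mounted at /mnt/cache; adjust names and paths to the real setup.

```
# On the storage server: start an iperf3 listener.
iperf3 -s

# On the node server: measure raw TCP throughput to the storage server
# (192.168.3.2 is a placeholder address).
iperf3 -c 192.168.3.2 -t 30

# Check which BTRFS profile the two-SSD pool uses. "single" data means
# writes land on one device, which would explain only one SSD filling up;
# "raid0" stripes data across both.
btrfs filesystem df /mnt/cache
btrfs filesystem usage /mnt/cache

# Convert existing data to raid0 and keep metadata mirrored (raid1).
# This rewrites data on the pool, so run it while the pool is otherwise idle.
btrfs balance start -dconvert=raid0 -mconvert=raid1 /mnt/cache
```

If iperf3 shows close to line rate (~940 Mbit/s) while real transfers are much slower, the bottleneck is more likely the disks or protocol overhead than the NIC itself.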
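
On post 7: a quick way to see whether the two onboard Intel NICs are still enumerated once the I350-T4 is installed, or whether their driver fails during boot. These are generic Linux commands run from the Unraid console, nothing board-specific; igb (i210) and e1000e (i219) are the usual driver names for those chips.

```
# List every Ethernet controller visible on the PCI bus.
lspci | grep -i ethernet

# Show all network interfaces and whether they are UP or DOWN.
ip -br link show

# Look for messages from the onboard Intel drivers during boot.
dmesg | grep -iE 'igb|e1000e'
```

If the controllers still appear in lspci but their interfaces stay down, the issue is more likely interface naming or bonding configuration than the board actually disabling the ports.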
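
On posts 13/14: the pattern described there matches how docker run parses its arguments. Options placed before the image name are interpreted by Docker itself, while anything after the image name is handed to the container's command (mongod here), which is why --auth only works when it ends up after "mongo". A rough sketch; the container name mongodb is made up:

```
# Extra Parameters land before the image name, so Docker tries to parse
# --auth as one of its own flags and fails ("unknown flag: --auth"):
docker run -d --name=mongodb --auth mongo:latest

# Arguments after the image name are passed to the image's entrypoint,
# so mongod starts with authentication enabled:
docker run -d --name=mongodb mongo:latest --auth
```

In the Unraid template this means --auth belongs in a post-image argument field (if the template version exposes one) rather than in Extra Parameters or the repository box; appending it to the repository field happens to produce the same final ordering, but then the update check looks for an image literally named "mongo --auth", which is the update failure described in post 11.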