Posts posted by Zyluphix

  1. 2 minutes ago, johnnie.black said:

    What backplane? Does it have a SAS expander? If it's an expander backplane you can connect 1 or 2 miniSAS cables from the HBA to access all slots, if not you need 1 miniSAS cable for each 4 disks, but there are controllers that support 16 (or more) devices.

Unfortunately I'm not too sure; this is relatively new to me. This is the case that I have, which is described as: SATA/SAS drive bays (4 x Mini SAS).

     

Unsure if this answers your question, but I do appreciate the quick response.

  2. Hello,

     

    Hope someone can help.

     

I have a mini-SAS backplane and I'm trying to find a mini-SAS controller to fit in my server. Currently I have a mini-SAS-to-SATA breakout cable going to a SATA expansion card. I was hoping something exists that would allow me to connect 2-4 backplanes to one card through mini-SAS-to-mini-SAS cables, or mini-SAS to SAS.

     

Unsure if this is something that exists; if it does, do I need to watch out for anything like forward/reverse cables?

     

    Thank you.

  3. Hello,

     

    Hoping someone can point me in the right direction for some assistance.

     

Recently the response time from my server to LAN devices for media has been getting increasingly worse. Today Plex gave an error and refused to transcode any videos. This was usually resolved by restarting the Docker container, but it has gotten to the point of annoyance and I wanted to resolve the issue properly.

     

I attempted to stop and delete Plex, which kept throwing errors. From previous forum threads it looked like the Docker image was corrupt, so I deleted and recreated the image. Now I seem to have errors that are preventing Docker containers from being installed.

     

I'm not entirely sure what the issue is, but it's left me stumped. I had to perform a hard reset at one point due to Unraid completely locking up.

     

I've attached my log and would greatly appreciate any assistance with this issue. Also, if you have a look at my current setup and spot any obvious issues, I'd appreciate your recommendations, as I'm very much a novice.

     

    Thank you.

    zen-diagnostics-20200126-2316.zip

  4. 17 hours ago, trurl said:

    Disable the docker service then after the rebuild is done we can work through getting these other things taken care of.

     

So I've reduced the Docker image size and the array has successfully rebuilt.

     

Now onto moving Docker from the array drives to the cache?

  5. 2 hours ago, trurl said:

    Probably unrelated to your reported issue, but some other things I notice in your diagnostics.

     

    Like many people, your general docker configuration is less than ideal. You have some of it not on cache, which means your parity is involved when your dockers are running, so you will have spinning disks and less performance than you could have if it was all on cache. And your docker image is much larger than it needs to be. I always say set it to 20G and you are very unlikely to get close to using that unless you have some docker application configure wrong.

     

    Thank you for highlighting this.

     

I'm new to Unraid and currently in the process of rebuilding a failed drive. I had issues with my Docker image saying it was almost full (or I think it was an error log filling up), so I increased it to a ridiculous size. I'll try bringing it back down to 20G after the rebuild and see how it goes.

     

With regards to Docker being partly on the cache and partly on the array drives, could you advise how I would change this? I was under the impression that everything was on the cache.

  6. 1 minute ago, trurl said:

    Were you experiencing a problem when you took those diagnostics?

     

    There isn't really anything called "rootshare". Perhaps you meant "rootfs". That is the root of the OS, and it is all in RAM. If you accidentally fill that up lots of things are going to break, but I don't see that happening in those diagnostics.

    Hi there,

     

Yes, I had experienced the problem just before I posted the diagnostics.

     

I'll take a video in a couple of hours to show what I mean.

I've had a look at some other posts describing the "There is not enough space on rootshare" error. However, when checking the logs, no drives show as 100% full, and changing the cache setting makes no difference; I still experience the issue regardless of what I change it to.

     

I have an unassigned drive (nvme0n1), which is a 1TB Samsung 970 Evo. I'm trying to use it as a network drive to read from and write to, but I constantly run into "not enough space" errors, although the drive has over 900GB free.

     

Any help or guidance would be hugely appreciated.

    zen-diagnostics-20190510-0808.zip
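A sketch of the kind of console checks involved in ruling out a full filesystem (generic Linux commands; the mount points below are assumed Unraid defaults, not taken from the diagnostics):

```shell
# Fill level of the in-RAM root filesystem -- "not enough space"
# errors on rootfs usually mean this is at or near 100%:
df -h /

# Fill levels of the array disks, cache, and user shares
# (assumed default Unraid mount points):
df -h /mnt/disk* /mnt/cache /mnt/user 2>/dev/null

# Logs live in RAM too; a runaway log can quietly fill the root filesystem:
du -sh /var/log
```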

  8. Just now, Squid said:

    I had a similar problem a few years ago, where unRaid wouldn't recognize the extra memory, Windows wouldn't etc.  The BIOS would report the total though.

     

    Since then I only ever buy from the QVL and if expanding and the original sticks are no longer available I either buy a whole new set or make damn sure that at the very least the additional set is listed on the QVL and at the very least has the same CL timing.

That's a killer. Well, I think my cheapest option is to get a motherboard that's compatible. Now I just need to do that hunt!

  9. 2 minutes ago, Squid said:

    You sure it's compatible?  Always, always check the QVL for the motherboard prior to buying memory.  That motherboard has quite a number of HyperX p/n's that are not compatible in a 4 stick configuration.

    I think you've got it.

     

    HyperX HX424C15FB2K2 is not compatible in 4 slots. Damn. I looked up the wrong part number originally!

  10. 1 minute ago, jonathanm said:

    Are all 4 modules the exact same configuration? Just because they have the same part number doesn't mean the chips on the modules all match.

     

    From what I'm seeing in google, your symptoms are a motherboard setting or memory compatibility issue. Each stick by itself is probably perfect, but trying to get all 4 to run together in dual channel mode properly can be problematic.

Should I try running an Ubuntu live session from a USB stick and see if it utilizes the full 32GB? If it does, wouldn't that signify that Unraid is the problem?

  11. 15 minutes ago, jonathanm said:

    Sounds to me like a motherboard BIOS setting, or possibly dual channel mode trying to initialize but the sticks are in the wrong slots.

     

    Check the motherboard manual and see if pairs of memory are supposed to be in adjacent or opposite slots.

The motherboard only has 4 slots, which are all populated. Looking at the manual, it states that when all 4 are populated the motherboard automatically switches to dual-channel mode.

  12. Hi all,

     

I've had this ongoing issue for a while now and I can't seem to get my head around it. Recently I bought an extra 16GB of HyperX Fury RAM, which is already installed in the machine. Same model, so no compatibility issue. I installed the first set and had the issue in the subject line. A few restarts later and still no luck.

     

    I then asked for it to be replaced, assuming faulty RAM. Again, the same issue.

     

The current setup is as follows:

     

    Ryzen 2700

    Aorus x470

32GB HyperX Fury DDR4 2400MHz

    Unraid 6.6.7

    MSI GeForce GT 710

     

Unraid sees the RAM as 32GB installed; it just advises only 16GB is usable.

     

[Screenshot: Unraid dashboard showing 32GB installed, 16GB usable]

     

    Any help would be greatly appreciated.
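For reference, a few generic Linux commands run from the console can show whether the missing 16GB is hidden at the firmware level or the OS level (this is a general diagnostic sketch, not Unraid-specific advice):

```shell
# How much memory the kernel actually mapped and can use:
grep MemTotal /proc/meminfo

# What the firmware reports per DIMM slot (requires root,
# and dmidecode being available on the system):
dmidecode -t memory | grep -i -A1 'Size'

# The e820 memory map the BIOS handed to the kernel at boot --
# unusually large "reserved" regions here point at a firmware issue
# rather than an OS one:
dmesg | grep -i e820
```

If MemTotal already shows only ~16GB, the kernel never saw the rest, which points back at the BIOS/memory-training side rather than Unraid itself.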

  13. Hi all,

     

    Hopefully someone can provide some solution to this.

     

I have a set of files that I want to back up remotely to another computer at my parents' place. I want periodic updates so that, at a scheduled time, any changes are uploaded to the remote machine accordingly.

     

Are there any available solutions for this, either through a Docker container or something that could be implemented in a VM? If so, what is the best way to go about it?

     

    Thanks!
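One common approach is rsync over SSH on a schedule (others use Syncthing or a similar Docker container). Below is a minimal cron-entry sketch; the user name, host name, paths, and schedule are all placeholders, and it assumes key-based SSH authentication to the remote machine is already set up:

```shell
# Hedged sketch -- "backupuser", "parents-server", and both paths are
# placeholder values, not from this thread.
# Crontab entry: every night at 02:00, mirror the local share to the remote box.
# -a = archive mode (preserves permissions/timestamps), -z = compress in
# transit, --delete = make the remote an exact mirror (removes files that
# were deleted locally).
0 2 * * * rsync -az --delete -e ssh /mnt/user/backups/ backupuser@parents-server:/srv/unraid-backup/
```

On Unraid the same command could equally be scheduled through the User Scripts plugin instead of a raw crontab line.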
