Taddeusz

Community Developer
  • Content Count

    414
  • Joined

  • Last visited

Community Reputation

14 Good

About Taddeusz

  • Rank
    Advanced Member

  1. Taddeusz

    High CPU/RAM use

    This has not been my experience at all. I have 18 different Docker applications running plus a Windows VM. I have one Docker application, SageTV, that seems to have a memory leak, but that's specific to it. The rest of my applications are well behaved. When I had 16GB of RAM my usage hovered around 70%. So, did you post just to complain or would you like help? Have you tried to determine which process or container is causing the CPU use? On the console, top or htop can be very helpful with this. How much RAM do you have?
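
    For example, from the console (just a generic sketch, nothing Unraid-specific), htop will show which processes are using the CPU and memory, and docker stats will break usage down per container:

        htop
        docker stats --no-stream

    If one container stands out there, that's where I'd start looking.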
  2. Taddeusz

    RAM Max Installable Capacity

    That's interesting. Makes me wonder what the BIOS would report if I replaced a couple of the 8GB sticks with 16GB sticks.
  3. This is more of a curiosity than anything else. How does Unraid determine this amount? Both my CPU (i5 8400) and motherboard (ASUS Prime H370M-Plus/CSM) are capable of a maximum of 64GB of DDR4, yet Unraid reports a maximum installable capacity of 32GB. Initially I had 16GB (2x8GB) and just upgraded to 32GB. Previously I've had motherboards where Unraid actually reported a higher number than was actually installable: the chipset or CPU was capable of that amount but was limited by the number of physical memory slots. So, how does Unraid determine this number?
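
    My best guess is that it's reading the SMBIOS/DMI tables the BIOS exposes rather than calculating anything itself. If you want to see the raw value your board reports, you can check it from the console with dmidecode (type 16 is the "Physical Memory Array" record), assuming it's present on your install:

        dmidecode -t 16

    The "Maximum Capacity" line there is what I'd expect to match the 32GB Unraid is showing.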
  4. That is really weird. I've never had anything like that happen.
  5. Yeah, I think you're right. I must be thinking of older technology where every track across the drive stores the same amount of data (the same number of sectors per track). That would mean the same amount of data takes up progressively more physical space on the outer tracks. For some reason I was also thinking drives read data from inside to outside. Don't know why.
  6. I think you may be getting slightly confused between what Unraid calls a cache drive and the RAM cache. That's probably my fault.
  7. If you're copying large amounts of data like that, I would recommend temporarily not using a cache. The way it works is that once the cache fills up it will start writing directly to the array. It won't move data from the cache to the array until the mover is scheduled to run or you manually initiate the mover.
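
    If you do want to flush the cache by hand, you can start the mover from the Main page, and I believe it can also be run straight from the console (double-check the path on your version, but it should be something like this):

        /usr/local/sbin/mover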
  8. Yes, if you read the help text it says "Auto selects read/modify/write."
  9. It's another name for reconstruct write. The only drawback is that all the drives need to be spun up to write data. You can install the "CA Auto Turbo Write Mode" plugin to mitigate this if you prefer your drives to spin down when possible.
  10. It's the "md_write_method" setting.
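
    In the GUI it's under Settings > Disk Settings. If I remember right, the value is also saved with the rest of the disk settings on the flash drive, so you can see what it's currently set to from the console (the path is from memory, so double-check it):

        grep md_write_method /boot/config/disk.cfg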
  11. Also, if you have your array set to "reconstruct write", writes to the array will be faster.
  12. You probably didn't have your shares set correctly for the mover to do its job. I have a 500GB NVMe SSD. To automatically move data from the cache to the array, your shares' cache setting needs to be set to "Yes". If they are set to "Prefer" or "Only" the data will remain on the cache drive. As far as RAM goes, it just depends on what you want to do with it. I currently have 16GB and am about to put another 16GB in this weekend because I want to be able to run more VMs at once.
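
    For what it's worth, I believe the cache setting is stored per share on the flash drive, so you can double-check it there too (the share name here is just an example, and the values should be "yes", "no", "prefer", or "only", matching the drop-down):

        grep shareUseCache /boot/config/shares/Movies.cfg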
  13. The speeds progressively drop because hard drives store the same amount of data on the innermost track as they do on the outermost track. Because drives spin at a constant rate, the result is it takes less time to read from the inner tracks, and so transfer rates are higher there than on the outer tracks. The maximum theoretical gigabit transfer rate is around 118MB/s. A lot is going to affect that, though. In Unraid it depends on your RAM usage and how much is available for cache. If the file you are copying easily fits within the cache you'll likely get near theoretical speed. If it's a larger file you'll get that fast speed until the cache is filled and it has to start writing to the array. Then it's dependent on the write speed of the destination drive.
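
    The rough math behind that 118MB/s number: gigabit is 1,000,000,000 bits per second, which divided by 8 is 125MB/s on the wire, and Ethernet/TCP overhead eats roughly 5-6% of that, landing you right around 117-118MB/s of actual file transfer.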
  14. Those are really good numbers. I would recommend shutting down your VMs and all other Docker applications when you run this. That way you reduce the amount of possible activity. You have to keep in mind that performance on your array is limited by the slowest drive. When you were copying the file the first time, which drive did it end up on? Which drive did your copy end up on that time? When I copy files I generally get about 50-75MB/s, sometimes slower, depending on which drive it's hitting and where on the drive it's being written. Just for comparison, here is my last test. My parity drive is attached to my motherboard because I have 8 drives on my LSI card.
  15. Taddeusz

    [Support] jasonbean - Apache Guacamole

    I'm glad you got it working. I've found that the most likely explanation is that the hostname gets put into the Guacamole Proxy Parameters section rather than the correct Parameters section.