ency98

Members
  • Posts: 10
  • Joined
  • Last visited

ency98's Achievements

Noob (1/14)

1 Reputation

  1. Yeah, I would suspect the Silicon Power drives in this config. They are the cheapest and not even prosumer grade, but I didn't imagine they would have much trouble keeping up. I'll do the test, and if they turn out to be the issue I might try using two of the NVMe drives instead, as I know they are high quality and can match the performance of the Samsung drives. Thanks for the help, I really appreciate it.
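A quick way to run that comparison is a raw sequential-read pass against each disk, which takes shares and parity out of the picture entirely. A minimal sketch, assuming root access on the Unraid box; the /dev/sdX names are placeholders for the actual Samsung and Silicon Power devices:

```python
# Rough sequential-read benchmark to compare drives (run as root).
# The device names are assumptions -- substitute the real /dev/sdX
# nodes for one Samsung array disk and one Silicon Power parity disk.
import subprocess
import time

DEVICES = ["/dev/sdb", "/dev/sdc"]  # hypothetical array disk + parity disk
READ_MB = 2048                      # read 2 GiB from each device

for dev in DEVICES:
    start = time.monotonic()
    # iflag=direct bypasses the page cache so we measure the disk, not RAM
    subprocess.run(
        ["dd", f"if={dev}", "of=/dev/null", "bs=1M",
         f"count={READ_MB}", "iflag=direct"],
        check=True, capture_output=True,
    )
    elapsed = time.monotonic() - start
    print(f"{dev}: {READ_MB / elapsed:.0f} MB/s sequential read")
```

If the Silicon Power drives come back meaningfully slower than the Samsungs, that supports the parity-disk theory, since every array write has to touch parity.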
  2. That worked. Files were transferring at 500+ MB/s. So how do I go about fixing this while still having parity disks? Is it the quality of the disks I used for parity, or something about my config?
  3. Yes, or at least I think I did so correctly: Settings > Disks > Tunable (md_write_method) > Reconstruct Write. Also, I am getting the same performance copying and moving files within shares and to other shares without ever touching the network. diagnostics-20230704-1747.zip When you check the diags for an issue like this, what exactly do you look for? I'd like to know so I can better troubleshoot issues myself. I took a look at the output, but nothing stood out to me.
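For what it's worth, the reason reconstruct write makes such a difference comes down to where the parity-update cost lands. A toy model with a single XOR parity disk (Unraid's second parity disk uses different math, so this is only the intuition):

```python
# Toy XOR-parity stripe: one block per data disk, one parity block.
# Shows that read-modify-write and reconstruct write produce the same
# parity but load the parity disk very differently.
from functools import reduce
import operator

data = [0b1010, 0b0110, 0b1111]      # blocks on three data disks
parity = reduce(operator.xor, data)  # block on the parity disk

# Read-modify-write: update one block using only old data + old parity.
# Costs 2 reads + 2 writes, and BOTH a read and a write hit parity.
new_block = 0b0001
rmw_parity = parity ^ data[1] ^ new_block

# Reconstruct write: recompute parity from all data disks.
# Costs N-1 reads + 2 writes, but the parity disk is only written.
data[1] = new_block
recon_parity = reduce(operator.xor, data)

assert rmw_parity == recon_parity    # both strategies agree
```

In read-modify-write mode the parity disk is read and written for every block, so a slow parity SSD throttles the whole array; reconstruct write only ever writes to parity, at the cost of reading the other data disks.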
  4. I don't think that's a valid solution for me; I'm not prepared to trust my data to a storage solution I'm not familiar with. Sub-2 MB/s also seems ludicrously slow no matter what in my setup is suboptimal. My other Unraid server has only NVMe drives, set up in a similar config (two drives in an array and one as parity), and does not suffer from such slow transfer speeds. It's not as fast as the Synology servers, but still MUCH, MUCH faster than the R730.
  5. Hello, I'm fairly new to Unraid but am having a bit of an issue and would like some help figuring out where to look. I have a Dell R730xd filled with 22 enterprise Samsung 960 GB SSDs and two Silicon Power 2 TB SSDs: the 22 Samsung disks are in the array, with the two Silicon Power drives used as parity. I also have an ASUS Hyper M.2 PCIe card with four 1 TB Samsung NVMe drives split into two cache pools. For this specific issue the cache drives are not involved; the share I am writing to does not use the cache. I have also bonded the four onboard NICs using 802.3ad, which my UniFi 48-port layer 3 switch supports.

     When I was running Windows Server on this machine it was an absolute beast when it came to transfers and performance. I was using hardware RAID, so that might be part of the issue, I just don't know. When copying files between my Synology (four NICs bonded, on the same switch and same VLAN) and the Windows install, I'd easily see sustained transfer speeds of 100 to 200 MB/s. It worked great, but I had Unraid on a small server and decided to try it out on the R730xd. I set up the Unraid server, bonded the NICs, added my VLANs, and used the Unassigned Devices plugin to mount my Synology shares via NFS (I have also tried SMB). When I started transferring things over I noticed the performance is EXTREMELY slow: it will start out at ~60 MB/s, quickly drop to 2 MB/s at most, and often sit under 1 MB/s.

     Things I have tried:
     • Disabled NIC bonding on the Synology and R730 and used only one NIC, to eliminate any NIC or bonding issues. Same performance.
     • Directly attached the Synology to the R730. Same performance.
     • Disabled all the cool networking voodoo, gave every NIC on both the Synology and R730 a separate IP, and tried SMB multipathing. Same performance.
     • Made sure the Unraid server is not doing anything with parity or the mover while I am transferring.

     All of the above involve moving files to the Unraid server using SMB and NFS. When I copy things off of the Unraid server I get 100+ MB/s, which is slower than I would expect but acceptable. I have three Synology systems and transferring between them is not an issue; I often see 200+ MB/s between the Synologys. Copying between workstations, and between workstations and the Synologys, is also very fast. I believe I have eliminated the network, the clients, and the network configuration as possible issues. To me that leaves the array, or some other setting I'm not knowledgeable about. I have looked at the logs but don't see anything obviously wrong. Anyone have other ideas I can look into? Oh, I have also tried SCP/SFTP, rclone, rsync, and straight-up cp. I'm sort of stumped at this point. I really don't want to go back to Windows, as I really like being able to easily manage Docker containers on Unraid, but I just can't deal with such slow transfers.
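One more way to split the problem: a raw TCP throughput test between the Synology and the R730 takes SMB/NFS and the disks out of the loop entirely. A minimal sketch (the port and transfer size are arbitrary choices); run it in `server` mode on one box and `client <host>` on the other:

```python
# Minimal raw-TCP throughput test: isolates the network from the
# filesystem and the array. Usage: "python3 tput.py server" on one
# machine, "python3 tput.py client <server-ip>" on the other.
import socket
import sys
import time

PORT = 5201          # arbitrary; any free port works
CHUNK = 1 << 20      # 1 MiB send/receive buffer
TOTAL = 2 << 30      # client sends 2 GiB

def server():
    with socket.create_server(("", PORT)) as srv:
        conn, addr = srv.accept()
        with conn:
            received, start = 0, time.monotonic()
            while True:
                buf = conn.recv(CHUNK)
                if not buf:
                    break
                received += len(buf)
        elapsed = time.monotonic() - start
        print(f"{received / elapsed / 1e6:.0f} MB/s from {addr[0]}")

def client(host):
    payload = b"\0" * CHUNK
    with socket.create_connection((host, PORT)) as conn:
        sent = 0
        while sent < TOTAL:
            conn.sendall(payload)
            sent += len(payload)

if __name__ == "__main__":
    server() if sys.argv[1] == "server" else client(sys.argv[2])
```

If this shows near line-rate numbers while SMB/NFS writes to the array still crawl, the bottleneck is on the storage side (for example, parity writes), not the network.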
  6. Yeah, that's more than enough horsepower, and similar to my setup. What about starting the container and then selecting a different model before generating? The default SD 1.4 gives me an EOF error. I found that to get things working I need to try to generate something on SD 1.4 first to trigger the error, then change the model to the SD 1.5 I downloaded, and it will work 70-ish % of the time. But after generating a few images, if I try to switch models the CPU will peg out and everything becomes unresponsive until I restart the container.
  7. What about in the Nvidia settings in Unraid? The container had no issues "seeing" the GPU; it was Unraid that was not seeing the GPU. Also, how much RAM and what kind of CPU do you have? I found that loading the models will peg out my i9 for a bit, and it will often hang (pegging out the CPU) when switching between models; I need to restart the container when that happens. Could be you're getting CPU- or memory-bound while loading the model rather than when generating an image. Don't know your setup, but worth a shot.
  8. Well, I figured out my issue. If anyone else is having an issue where your CPU pegs out and you're using an Nvidia GPU, go to Settings and make sure you see your GPU as installed. It seems that on my last reboot Unraid did not pick up my GPU. I rebooted my server, made sure the GPU was picked up, fired up this container, and it worked like a charm.
  9. @pyrater Yes, I tried with the GPU ID. No action on the GPU with "all" or the GPU ID. Plex, Jellyfin, Tdarr, and a few other containers work with the GPU. I did notice that the SD-WEBUI container does not have the NVIDIA_DRIVER_CAPABILITIES variable that the other GPU containers have. Think I should add that var?
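One way to test that variable outside the template is to spin up a throwaway container with both Nvidia variables set and see whether nvidia-smi finds the card. A sketch using the Docker SDK for Python (pip install docker); it assumes an Nvidia-enabled Docker runtime on the host, and the CUDA image tag is a placeholder:

```python
# Run nvidia-smi in a disposable container with the same two variables
# the working containers use. If the 3060 shows up here, the container
# plumbing is fine and the template is the likely difference.
import docker

client = docker.from_env()
out = client.containers.run(
    "nvidia/cuda:12.0.0-base-ubuntu22.04",   # hypothetical image tag
    "nvidia-smi",
    environment={
        "NVIDIA_VISIBLE_DEVICES": "all",
        "NVIDIA_DRIVER_CAPABILITIES": "all",
    },
    device_requests=[
        docker.types.DeviceRequest(count=-1, capabilities=[["gpu"]])
    ],
    remove=True,  # clean up the container after it exits
)
print(out.decode())
```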
  10. I installed this and managed to get it up and running. However, it seems to be hammering my CPU and not utilizing the GPU. I tried the default 'all' for the NVIDIA_VISIBLE_DEVICES variable, and adding my GPU ID, with the same result. The CPU pegs out all cores while the GPU stays idle. I have an ASUS NVIDIA GeForce RTX 3060 w/ 12 GB GDDR6.
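A simple way to confirm the GPU really is idle during generation is to poll nvidia-smi from the host while an image renders; the query flags below are standard nvidia-smi options:

```python
# Sample GPU utilization and memory once a second while the container
# generates an image. Utilization pinned at 0 % with no memory
# allocated, while the CPU is maxed, means inference fell back to CPU.
import subprocess
import time

for _ in range(30):  # sample for ~30 seconds
    out = subprocess.run(
        ["nvidia-smi",
         "--query-gpu=utilization.gpu,memory.used",
         "--format=csv,noheader"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    print(out)       # e.g. "97 %, 8123 MiB" when the GPU is doing the work
    time.sleep(1)
```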