FantomDew

Posts posted by FantomDew

  1. Thanks everyone. The drive is meant to be fast at both reads and writes, though obviously not as fast as NVMe. I was looking for a centralized place to run my games from, and maybe some VMs in the future. With so many parts involved (client network, main network, client drive, Unraid drive system), I am trying to find some good troubleshooting steps and an idea of what the results should look like. For example, if I use iperf, what should I be getting? Is there a chart that says, for this kind of setup, here is the ballpark you should be in? Something like the test below is what I had in mind.
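
    Just so I am measuring the right thing, this is the kind of test I was planning to run (a rough sketch; it assumes iperf3 is installed on both ends, and 192.168.1.10 is only a stand-in for the Unraid server's IP):

    # on the Unraid server, start a listener
    iperf3 -s

    # on the Windows 10 client, run a 30-second test toward the server
    iperf3 -c 192.168.1.10 -t 30

    # and the reverse direction (server sends to the client)
    iperf3 -c 192.168.1.10 -t 30 -R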

     

    As for the drive size, I figured that since Unraid can use all the space of every drive in its main pool, it had figured out a way to do the same in its cache pool. If not, perhaps someone should tell Unraid, because it is showing the cache drive as 6TB? If it helps, I can post what btrfs itself reports for the pool (see the commands below).
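
    These are the commands I understand should show what btrfs thinks is actually allocatable versus the raw total (a sketch; /mnt/cache is where I believe Unraid mounts the cache pool):

    # overall allocation, including how much raw space is unallocatable
    btrfs filesystem usage /mnt/cache

    # per-profile (data/metadata/system) breakdown
    btrfs filesystem df /mnt/cache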

     

    Thanks for the help

    (Screenshot attached: Cached drive.PNG)

  2. So the goal: to create a cache pool with the best read/write speeds possible, so it can serve as a game drive or handle other high-demand workloads.

    What has been done so far: I started with a cache pool of four 1TB SSDs in btrfs RAID10 and have not changed any of the other default settings. I then added two more 4TB SSDs, for a total of six drives. It currently shows the 6TB I am expecting. I created a share that resides on the cache drive, and over the last couple of days I have been able to move about a TB of data to it at decent speeds.

    The first observation of an issue: earlier today I started copying roughly another 3TB to the Unraid server. This is data that will reside on the spinning drives when the mover runs. With that, I would expect the cache drive to reach about 4TB used, still leaving me about 2TB of free space. The thing is, it got about 1.5TB into the copy and then said the drive was full, even though Unraid showed I still had more than 2TB of space available. I started the mover to clear out the cache, and now I can move files to it again.

    The second observation of an issue: I have a 10Gb network between my systems. I am currently copying another TB of data over, but I am hovering in the low 70MB/s range coming from a Windows 10 machine. To rule out the disks themselves, I was thinking of running the local test sketched below.
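
    To separate the disk side from the network side, is something like this a sane write/read test to run locally on the server? (Just a sketch; the path, file name, and size are examples, and the test file gets deleted afterwards.)

    # write ~10GB straight to the cache pool, bypassing the page cache
    dd if=/dev/zero of=/mnt/cache/ddtest.bin bs=1M count=10240 oflag=direct

    # read it back the same way
    dd if=/mnt/cache/ddtest.bin of=/dev/null bs=1M iflag=direct

    # clean up
    rm /mnt/cache/ddtest.bin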

    Questions:

    1.) Is 70MB/s a good transfer rate on a 10Gb network going to a RAID10 btrfs cache drive?

    2.) Are there performance gains from adding more drives to the RAID10 on Unraid, like with a "normal" RAID10?

    3.) Is there something about uneven drive sizes in a RAID10 when it comes to reporting space? (My rough math on this is just below the questions.)

    4.) Are there any other settings I should tune in order to get the best speed?
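
    For question 3, here is my back-of-the-envelope math, which may be completely wrong: the pool is 4 x 1TB + 2 x 4TB = 12TB raw, so I assumed 12TB / 2 = 6TB usable in RAID10. But if btrfs RAID10 needs at least four devices with free space before it can allocate a new chunk, then once the four 1TB drives fill up only the two 4TB drives have room left and nothing more can be written. At that point roughly 1TB has been taken from each of the six drives, which is about 6TB raw, or only about 3TB of actual data. That would be close to the ~1TB already on the pool plus the ~1.5TB that copied before it reported full. Is that how the allocation actually works?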

     

    Server: PowerEdge R620

    Unraid 6.8.2

     

    Please share your thoughts.

    Thanks


  3. So I cleared the log files and tried to mount again. This is what I get in the log:

     

    Jan 30 07:07:57 Mount SMB/NFS command: mount -t cifs -o rw,nounix,iocharset=utf8,_netdev,file_mode=0777,dir_mode=0777,username=guest,password=******* '//daisy/Movies' '/mnt/disks/daisy_Movies'

    Jan 30 07:07:57 Mount of '//daisy/Movies' failed. Error message: mount error(5): Input/output error

    Refer to the mount.cifs(8) manual page (e.g. man mount.cifs)
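
    One thing I was wondering: since daisy is a fairly old NAS, should I be forcing an older SMB dialect? Something like this, which is just the same command UD runs with a vers option added (the 1.0 value is only a guess on my part):

    mount -t cifs -o vers=1.0,rw,nounix,iocharset=utf8,_netdev,file_mode=0777,dir_mode=0777,username=guest,password=******* '//daisy/Movies' '/mnt/disks/daisy_Movies'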

     

  4. So I think I am doing something wrong. I have read through the forum but must be missing something. I am trying to use UD on Unraid with Plex to connect to an old NAS that I am trying to offload. I have installed both UD and Plex and have created an SMB share using UD (Movies = \\OldNas\Movies). It creates /mnt/Disks/OldNas_Movies. I then go into Plex and make a path pointing /Movies to /mnt/Disks/OldNas_Movies. I go into the Plex web UI and add that path to my Movies library, and nothing shows up. Did I miss something? I am using Unraid 6.2.4 with the LinuxServer community Plex Docker. The mapping I ended up with is sketched below.
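
    For reference, this is roughly what I understand the container mapping should boil down to (a sketch of the equivalent docker run volume flag, not the exact settings the template uses; I have seen the slave propagation option suggested for /mnt/disks paths but I am not sure it is required):

    # host path from UD mapped to the /Movies path the Plex library points at
    -v '/mnt/disks/OldNas_Movies':'/Movies':rw,slave

    I am also not 100% sure whether the mount point is /mnt/disks or /mnt/Disks on disk, so maybe the capitalization of the path I typed into the container matters.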
