Arbadacarba

Members
  • Posts: 299
  • Joined

  • Last visited

Everything posted by Arbadacarba

  1. Found this, and it works, but I expect it will break in 6.12... at least at first:
  2. Debating whether anything is changing in 6.12... I may try again.
  3. Dual SSD cache (no longer used as cache...):

     rsync --progress --stats -v Test.mkv /mnt/systems/Backup
     25,775,133,173 100% 341.82MB/s 0:01:11 (xfr#1, to-chk=0/1)
     360,579,385.22 bytes/sec

     NVMe (just for a sanity check):

     rsync --progress --stats -v Test.mkv /mnt/cache/Backup
     25,775,133,173 100% 1.80GB/s 0:00:13 (xfr#1, to-chk=0/1)
     1,778,029,382.28 bytes/sec

     So I've halved the speed of my SSD array... Hmm... I didn't expect that. (One guess at why is sketched below.)
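     A hedged guess, not something stated in the post: if the two-SSD pool was created with Unraid's default btrfs raid1 profile, every write is committed to both devices, and two SATA SSDs sharing a controller can land well below a single drive's sequential rate. One way to check the profile, assuming the pool is mounted at /mnt/systems as in the paths above:

         # raid1 under Data/Metadata means mirrored writes to both SSDs
         btrfs filesystem df /mnt/systems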
  4. Systems it is. I think I can live with that... my VMs are all on there, so technically there is more than one system on it... At least, that's what I'm telling myself. So now I have an NVMe cache and SSD storage for VMs and system files. Thanks
  5. So I can't have a share with the same name as the pool? My goal was to have a pool of two 2TB SSDs for the system folder on my Unraid server... somewhere to keep the Docker folders, appdata, the domains folder, the libvirt file... I have them all on a pool called config (because I was nervous about naming the pool System when I created it) and am about to shuffle things around, so I thought I might try making the name more accurate. Arbadacarba
  6. Can I name a pool "System" and then store the system files on it? Will that cause any issues? Thanks
  7. What is your network type set to? Host, or something else? If you have it set to have its own IP address, you must turn on "Enable Bridging" in Network Settings.
  8. I think I found a solution... copying a 25GB file to the RAM drive and then copying that file to each of my drives. I figure this gives me a fairly consistent view of the write performance of each of my drives:

     Spinning rust drives:

     rsync --progress --stats -v Test.mkv /mnt/disk1/Backup
     25,775,133,173 100% 96.26MB/s 0:04:15 (xfr#1, to-chk=0/1)
     100,512,382.22 bytes/sec

     rsync --progress --stats -v Test.mkv /mnt/disk2/Backup
     25,775,133,173 100% 105.59MB/s 0:03:52 (xfr#1, to-chk=0/1)
     110,412,959.48 bytes/sec

     rsync --progress --stats -v Test.mkv /mnt/disk3/Backup
     25,775,133,173 100% 105.43MB/s 0:03:53 (xfr#1, to-chk=0/1)
     110,412,959.48 bytes/sec

     rsync --progress --stats -v Test.mkv /mnt/disk4/Backup
     25,775,133,173 100% 100.77MB/s 0:04:03 (xfr#1, to-chk=0/1)
     105,445,505.27 bytes/sec

     rsync --progress --stats -v Test.mkv /mnt/disk5/Backup
     25,775,133,173 100% 87.66MB/s 0:04:40 (xfr#1, to-chk=0/1)
     91,585,882.91 bytes/sec

     rsync --progress --stats -v Test.mkv /mnt/disk6/Backup
     25,775,133,173 100% 93.33MB/s 0:04:23 (xfr#1, to-chk=0/1)
     97,842,224.06 bytes/sec

     Single cache SSD:

     rsync --progress --stats -v Test.mkv /mnt/cache/Backup
     25,775,133,173 100% 604.45MB/s 0:00:40 (xfr#1, to-chk=0/1)
     636,578,420.72 bytes/sec

     NVMe:

     rsync --progress --stats -v Test.mkv /mnt/config/Backup
     25,775,133,173 100% 1.78GB/s 0:00:13 (xfr#1, to-chk=0/1)
     1,663,317,808.97 bytes/sec

     I'll be combining the two SSDs together and testing them again.
  9. Take a look at the results from my setup... I'm told they are about what to expect. I'm just trying to decide on some changes to mine, but I'm generally pretty happy with the speeds I'm getting.
  10. Trying to do a little further testing... I want to repeat this process but get results that are as isolated as I can manage. My previous tests copied files from one drive to another... What if I stored a file on the RAM disk, say 20GB, then copied it using the above methods to each storage type? Since I am unfamiliar with the available tools, I am trying to get a share available on the RAM disk (/dev/shm), but I'm having no luck. I tried creating a softlink within another share to a folder at /dev/shm/test (ln -s /dev/shm/test/ /mnt/user/Backup/Test2), but I get "invalid path" when I try to look at it on the device. Am I just barking up the wrong tree, or am I making a minor mistake somewhere? (See the sketch below for one alternative.)

      WHY: I have upgraded my two SSD drives a little, to a pair of 4TB Samsung EVO drives... It occurs to me that I might be smarter keeping my system folder on those and moving my cache to the 2TB NVMe drive... My gaming VM is using another 2TB NVMe, so I don't need that much raw speed for my pfSense router, Home Assistant server, and various test machines. But the added size, and the fact that they are duplicated, would add redundancy that I am currently going without. Thanks for any help. Arbadacarba
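      A minimal sketch of one alternative, not from the original post: rather than symlinking into /dev/shm, mount a tmpfs of a chosen size and stage the test file there. The /mnt/ramdisk path and the 21g size are my own placeholders.

          # create a RAM-backed staging area (path and size are placeholders)
          mkdir -p /mnt/ramdisk
          mount -t tmpfs -o size=21g tmpfs /mnt/ramdisk
          # stage the test file in RAM, then time writes with the same rsync flags used above
          cp /mnt/user/Backup/Test.mkv /mnt/ramdisk/
          rsync --progress --stats -v /mnt/ramdisk/Test.mkv /mnt/disk1/Backup
          # clean up when finished
          umount /mnt/ramdisk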
  11. Love this idea myself... though it couldn't work in a Docker container, could it? Why not a plugin in the base OS?
  12. I've had similar problems, but it turned out I was causing them by physically removing drives without unmounting them first.
  13. If the helium is there for a reason and it has actually depleted for some reason, how could it not endanger the drive? I agree with itimpi: RMA the drive if you can. I think his answer implies that he thinks it could endanger the drive.
  14. When you say "not simultaneously", does that involve a reboot, or just not having the VM running?
  15. I'm currently running my game VM using 5 dedicated cores on my 10850K, with a dedicated NVMe drive and a 4080 card... If I'm being honest, I don't really game all that often. Between family (I have a 5-year-old boy) and work (I run a small IT company), I just have other demands on my time. My Unraid server is used as a media server and Home Assistant server far more than for gaming. Is there any way to free up the 4080 for transcoding or other projects when the VM is not running? I'm currently passing it through without the use of a "Graphics ROM BIOS" and have it bound to VFIO at boot... Thanks Arbadacarba
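      A hedged sketch of the usual rebinding approach, assuming the VM is shut down and the host has an NVIDIA driver installed: detach the card from vfio-pci and let the kernel re-probe it. The PCI address below is a placeholder, not taken from the post.

          # assumption: the 4080 sits at 0000:01:00.0; find yours with: lspci -nn | grep -i nvidia
          GPU=0000:01:00.0
          # release the card from vfio-pci (the VM must be off)
          echo "$GPU" > /sys/bus/pci/drivers/vfio-pci/unbind
          # clear any driver pin and let the kernel pick the normal driver
          echo "" > /sys/bus/pci/devices/$GPU/driver_override
          echo "$GPU" > /sys/bus/pci/drivers_probe

      Going back is the mirror image: set driver_override to vfio-pci, unbind from the NVIDIA driver, and re-probe before starting the VM.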
  16. I'm wondering if anyone has tried accessing a DVD drive in Unraid. I have a LARGE number of disks with various files on them and would like to be able to access them in Unraid, maybe from within Krusader? Has anyone tried this? I've done some searching and come up empty (or with so many irrelevant hits that I can't find anything). I've used Ripper in the past, so I know the drive is working, but I want to be able to check disks and copy folders into my archives. Thanks
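      A small sketch of the manual route from the Unraid console, assuming the optical drive shows up as /dev/sr0 (the mount point is my own choice). A mount under /mnt should then be visible to Krusader if that path is mapped into the container.

          # assumption: the DVD drive is /dev/sr0; check with: ls /dev/sr*
          mkdir -p /mnt/dvd
          # try UDF first, fall back to ISO 9660, read-only
          mount -t udf,iso9660 -o ro /dev/sr0 /mnt/dvd
          ls /mnt/dvd
          # when done:
          umount /mnt/dvd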
  17. Have you got a monitor hooked up to the server? You should see its current IP on the login screen... Then you can either temporarily set a computer to an address in the right range, or boot the server with the GUI to reconfigure it.
  18. If they show up in two different IOMMU groups, you should be able to pass a pair of them through. Have you tried it yet? I'd be more concerned about whether they will work properly in pfSense... (I've been hurt before.)
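      For reference, a common way to check the grouping from the console; this loop just walks sysfs and is not specific to any particular NIC:

          # list every PCI device by IOMMU group
          for d in /sys/kernel/iommu_groups/*/devices/*; do
              g=${d%/devices/*}; g=${g##*/}
              printf 'IOMMU group %s: ' "$g"
              lspci -nns "${d##*/}"
          done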
  19. You could just have the last step of the survey show the complete text of what is uploaded, so there is no question about what is being collected.
  20. Adding my vote to this... I have a fair number of drives but still have to scroll to see the info in column 1.
  21. When I had a similar situation, I ran the experiment of comparing array stopping times with the VM engine shut down and with the Docker engine shut down. It turned out in my case that the pfSense VM was not shutting down... I solved the same issue on a Proxmox server by installing the QEMU guest agent, and I have done so on most of my VMs to facilitate better integration for shutdowns, etc.
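      For completeness, a sketch of the guest-side install, assuming a Debian/Ubuntu guest (the guest OS is my assumption; most distros ship the package under the same name). The VM definition also needs the guest-agent channel enabled for the host to see it.

          # inside the guest, not on the Unraid host
          sudo apt install qemu-guest-agent
          sudo systemctl enable --now qemu-guest-agent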