syee

Everything posted by syee

  1. Probably a dumb question, but I'm going to ask it because I have no shame 😁 I recently downsized the number of devices in my array as I replaced all my 8TB drives with 14TB drives after one of the 8TB drives failed. Now my devices are named: Parity, Disk 1, Disk 3, Disk 4. Disk 2 was the device that failed. I'm a bit picky about the continuity of the ordering, so I'd like to have Parity, Disk 1, Disk 2, Disk 3 instead. Is there a way to rename the drives to a sequential numbering scheme without breaking anything? I'm a little apprehensive as I'm a bit of an Unraid noob. Is it just a matter of creating a new config and reassigning the drives, or is it a bit more complex than that? https://imgur.com/jPde6Uc
  2. Man, I'm dumb. I'm 0/2 so far. The issue ended up being the eSATA connector on the motherboard of the PC I was transferring the data from (it was hooked up to a dock where I had temporarily plugged in the hard drive to transfer the data over to the NAS). I suspect the eSATA connection is faulty - I plugged in a USB 3.0 cable and used that instead, and I'm getting the 100MB/s that I was expecting. Thanks, guys, for all the help you provided. I actually learned a bit from this experience, so it wasn't all for nothing.
  3. Thanks! I get the following with the 1GB block size and a count of 10:

     root@STINKYNAS:/# dd if=/dev/zero of=/mnt/disk1/test1.img bs=1G count=10 oflag=dsync
     10+0 records in
     10+0 records out
     10737418240 bytes (11 GB, 10 GiB) copied, 62.8169 s, 171 MB/s

     Just so I'm understanding correctly, this test (the one above) would just be testing transfer internal to that disk (not over the network)? The numbers seem to be pretty consistent with the other test numbers. I'll have to look into how to do a read test and give that a try.
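
     A note on the read test mentioned above - a minimal sketch reusing the test file just written, with the page cache dropped first so the read actually comes from the disk rather than from RAM:

        sync; echo 3 > /proc/sys/vm/drop_caches    # flush pending writes, then drop cached file data
        dd if=/mnt/disk1/test1.img of=/dev/null bs=1G count=10
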
  4. Ah, that makes sense. My apologies - Linux is a new beast to me, so I'm still trying to get the hang of it. I was kind of wondering how I'd get 1.4GB/s from a spinning disk. So I tried it again, tinkering with the block size because of the out-of-memory message:

     root@STINKYNAS:/# dd if=/dev/zero of=/mnt/disk1/test1.img bs=10G count=1 oflag=dsync
     dd: memory exhausted by input buffer of size 10737418240 bytes (10 GiB)
     root@STINKYNAS:/# dd if=/dev/zero of=/mnt/disk1/test1.img bs=5G count=1 oflag=dsync
     dd: memory exhausted by input buffer of size 5368709120 bytes (5.0 GiB)
     root@STINKYNAS:/# dd if=/dev/zero of=/mnt/disk1/test1.img bs=2G count=1 oflag=dsync
     0+1 records in
     0+1 records out
     2147479552 bytes (2.1 GB, 2.0 GiB) copied, 12.3935 s, 173 MB/s
     root@STINKYNAS:/# dd if=/dev/zero of=/mnt/disk1/test1.img bs=3G count=1 oflag=dsync
     0+1 records in
     0+1 records out
     2147479552 bytes (2.1 GB, 2.0 GiB) copied, 12.5788 s, 171 MB/s
     root@STINKYNAS:/# dd if=/dev/zero of=/mnt/disk1/test1.img bs=4G count=1 oflag=dsync
     dd: memory exhausted by input buffer of size 4294967296 bytes (4.0 GiB)
     root@STINKYNAS:/#
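
     A note on the errors above: dd allocates its I/O buffer at the full block size, so bs=4G and up simply exhausts the memory available to this VM, and a single Linux write() call transfers at most 2147479552 bytes (2 GiB minus one page) - which is why both the bs=2G and bs=3G runs report exactly 2147479552 bytes ("0+1 records"). A sketch that keeps a 10 GiB total while using a small buffer (conv=fdatasync flushes once at the end instead of after every block):

        # hypothetical re-run: 10240 x 1 MiB blocks = 10 GiB total, tiny buffer
        dd if=/dev/zero of=/mnt/disk1/test1.img bs=1M count=10240 conv=fdatasync
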
  5. Ah understood. I tried running the command as per the recommended page - I get the following result:

     root@STINKYNAS:/dev# dd if=/dev/zero of=/tmp/test1.img bs=1G count=1 oflag=dsync
     1+0 records in
     1+0 records out
     1073741824 bytes (1.1 GB, 1.0 GiB) copied, 0.748555 s, 1.4 GB/s
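
     For context on the retest in the post above this one: on Unraid the root filesystem, including /tmp, lives in RAM, so this run was benchmarking memory rather than a disk. A quick way to check what backs a given path, as a sketch:

        df -h /tmp    # on Unraid this should show a RAM-backed filesystem, not an array disk
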
  6. I used the DiskSpeed plugin and got these results. Looks decent from what I can tell - at least 100MB/s based on the test itself.
  7. Yeah, seems to be writing to the cache disk currently as I can see it filling up. I also just set the share to use cache disk only so that should definitely be writing to cache disk. I'm watching the file copy and it's hovering around 2 - 7MB/s copying relatively large files. (they're mostly 4GB files from my NVR software) Says it's going to take 8 hours for 162GB. Copying to a Windows VM on the same server seems pretty snappy (about 100MB/s) so I don't know if it's really all VM related. Maybe some weird unRAID stuff, or a bit of both... Thanks anyways for taking a peek! I appreciate it!
  8. That's what I was thinking - a hardwired network usually wouldn't be an issue. In any case, here are the iperf results from the machine I'm copying from to the Unraid server:

     Connecting to host 192.168.1.13, port 5201
     [  4] local 192.168.1.10 port 58446 connected to 192.168.1.13 port 5201
     [ ID] Interval           Transfer     Bandwidth
     [  4]   0.00-1.00   sec  98.9 MBytes   829 Mbits/sec
     [  4]   1.00-2.00   sec   104 MBytes   872 Mbits/sec
     [  4]   2.00-3.00   sec   109 MBytes   912 Mbits/sec
     [  4]   3.00-4.00   sec   110 MBytes   927 Mbits/sec
     [  4]   4.00-5.00   sec   101 MBytes   846 Mbits/sec
     [  4]   5.00-6.00   sec   109 MBytes   915 Mbits/sec
     [  4]   6.00-7.00   sec   111 MBytes   932 Mbits/sec
     [  4]   7.00-8.00   sec   103 MBytes   863 Mbits/sec
     [  4]   8.00-9.00   sec   105 MBytes   882 Mbits/sec
     [  4]   9.00-10.00  sec   111 MBytes   932 Mbits/sec
     - - - - - - - - - - - - - - - - - - - - - - - - -
     [ ID] Interval           Transfer     Bandwidth
     [  4]   0.00-10.00  sec  1.04 GBytes   891 Mbits/sec   sender
     [  4]   0.00-10.00  sec  1.04 GBytes   891 Mbits/sec   receiver

     iperf Done.

     Speed seems pretty decent. The drive is currently hooked up to the PC via an eSATA dock (though this shouldn't be the limiting factor here). I figured the cache drive would rule out parity as well, but alas, the speed with or without the cache drive is the same. The share has been set to use the cache disk.
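
     For reference, the "iperf Done." footer and the layout above are iperf3's; results like this typically come from something along these lines (IPs taken from the post itself):

        iperf3 -s                  # on the Unraid server, 192.168.1.13
        iperf3 -c 192.168.1.13     # on the source machine, 192.168.1.10
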
  9. Not sure if this entirely belongs here, but since I'm using it in a virtualized environment, I thought I'd start here because I'm sure someone will eventually point me here anyway. As per my last thread, I've set up a new server running VMWare ESXi 6.7. Hardware is as follows:

     3x 8TB WD Red disks for storage
     500GB Samsung Evo 860 SSD storing the VM files
     Just-acquired 2TB Seagate HD used as the cache drive

     As per my other thread, the motherboard is a Gigabyte GA-7PESH2 board, with the onboard SAS controller flashed to IT mode (LSI 2008). Since the 3 WD drives had data on them that I don't want to lose, and I don't really have anywhere else to store the data, I've managed to cram enough data onto the existing drives to give me 1 drive to "play around with". What I have connected right now is 1x 8TB WD Red drive, the SSD, and the Seagate cache drive. Both of the spinning drives have been added to Unraid.

     Since I want to ultimately add all 3 drives to Unraid, I'm copying data off the existing drives to the newly formatted drive in Unraid. However, I'm getting some really low numbers for transfer speed (consistently <10MB/s, and usually around 5MB/s), so copying 8TB of data will literally take a week. I don't know whether this is due to the virtualization or something I didn't configure correctly. I'm seeing most people report about 100MB/s, so I'm obviously doing something wrong here.

     I'm copying this over a local network - gigabit on the source end, and gigabit on the Unraid server end. I've switched out a few network cables to verify that it's not a cable issue. The two drives are connected to the SAS controller, which is set to passthrough mode for the Unraid VM. Using the cache drive doesn't appear to make any difference at all. (I was running without a cache drive for a few days, since I didn't have another physical drive I could use as cache, and went out today and bought this 2TB drive thinking it would speed things up.) I've read that a single drive with no cache drive would be slow, but I didn't think it would be this slow. Any thoughts or suggestions on what I might be doing wrong? Logs are attached.

     stinkynas-diagnostics-20190208-2307.zip
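
     One more link-layer check alongside swapping cables, as a sketch - confirming the negotiated speed on the Unraid side (the interface name is an assumption; it is commonly eth0 on Unraid):

        ethtool eth0 | grep -i speed    # should report Speed: 1000Mb/s on a healthy gigabit link
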
  10. Man I feel dumb. This was the piece I was missing. Added it and now my disks show up. Thanks uldise!
  11. This is from the VM for Unraid side. Thanks again everyone for your input!
  12. Here's a screenshot from the VMWare side
  13. The SAS2008 controller is flashed to IT mode (as per the SAS BIOS when it comes up). I re-flashed it last night as a desperation move because I didn't know what else to try. I did notice the different model in the log when I looked at it over the weekend. There's only one controller in this system (the one built into the motherboard), so there shouldn't be any others. I'll grab some screenshots when I get home from work.
  14. I know there are a million threads on this issue - I've tried to go through as many of them as possible, but I think I'm missing something here. I'm new to both VMWare and unRAID (though relatively tech savvy), so I thought I'd be able to figure this out, but I've spent a whole weekend trying to get this to work with no success. Basic configuration is:

      Host is running VMWare ESXi 6.7
      Unraid 6.6.6 is running within a VM
      Drives are 3x 8TB WD Reds connected to the onboard SAS controller (an LSI SAS2008 built into a Gigabyte 7PESH2 motherboard)

      I've set the SAS controller to Passthrough=Active in the VMWare hardware settings. It appears that unRAID can see the controller but never sees the disks, even though the controller itself sees the disks. I currently only have one of the drives hooked up, with its data moved off, so I can do as I please with it. I'm assuming the fact that I can't see any of the drives show up in the logs is the primary reason why unRAID doesn't display drives. I'm almost certain whatever I'm missing is very trivial, but I can't seem to get past this point. If anyone has any suggestions or ideas, I'd be grateful! Diagnostic logs are attached.

      stinkynas-diagnostics-20190204-1855.zip
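
      When passthrough half-works like this - controller visible, disks absent - a quick sanity check from the Unraid console is to confirm that the guest actually sees the HBA and that the kernel driver bound to it and enumerated devices. A sketch, assuming the stock mpt2sas driver that services SAS2008 chips:

         lspci | grep -i lsi           # is the HBA visible inside the VM at all?
         dmesg | grep -i mpt2sas       # did the driver load, and did it find attached devices?
         ls -l /dev/disk/by-id/        # do any disks appear as block devices?
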