Everything posted by skaterpunk0187

  1. That is a good point, and it's just as easy. I would still say do it on the drives themselves to help in the future (if that is the issue) when changing hardware. Maybe at some point they'll upgrade to a case with a backplane and can forget about the mod.
  2. It's easier to put tape or clear nail polish over the third pin than to modify a cable. I guess it could be worth a try; it only takes a few seconds to do. And -=Striker=- is using HGST-brand SAS drives, not WD, and you will never find a SAS drive in an external enclosure. I have some "white"-labeled SAS drives, but they are not white label in the implied sense.
  3. The pin 3 mod is for the drives themselves, and as far as I'm aware it applies only to SATA drives shucked from USB enclosures; it does not apply to SAS drives.
  4. How big is your power supply? SAS drives use more power than SATA drives; that's why backplanes use Molex connectors instead of SATA power. Try plugging in just one or two drives and see what happens.
  5. I'm not familiar with LSI WebBIOS. If it has a spot to list drives and it's empty, it could be a problem with the cables and/or the backplane. The cables connecting the LSI card to the backplane can be directional; make sure they are not installed backwards. What case and backplane are you using?
  6. Welcome to Unraid. The LSI 9271-8i is a MegaRAID card, not an HBA (host bus adapter), and does not support HBA mode. You can look into IT firmware that will turn it into an HBA, but IT firmware isn't available for all MegaRAID cards. It could also be cross-flashed with another card's firmware, but do your research first; it could render the card unusable. During boot the card should give an option like Ctrl+I or Ctrl+H or something like that to open the RAID utility. You will have to create an array using only a single drive in each, save, and reboot, and the drives should show up in Unraid. This isn't the best way to do it, but it works; that's how I did it when all I had was a RAID controller.
  7. I have had issues with front (case-mounted) USB ports on a client's machine; it took months to figure out what it was. Devices plugged into them would work occasionally or not at all. They would work for a while, then the device would drop as if it had been unplugged. Sometimes a phone would be plugged in and would charge but get no data connection (not a charge-only cable; it worked just fine in a mobo USB port). I moved it around on the mobo headers, same issues. It ended up being the PCB of the USB ports in the case. I ended up getting a PCI USB bracket that plugged into the mobo header, with an extension cable run to their desk so they didn't have to get behind the PC to plug and unplug. It may not be your issue, but I've seen it.
  8. Yes, that is true, but if CPU speed were my bottleneck it would also be the bottleneck when using Samba to transfer to an unassigned device or to the cache directly. Using htop to monitor CPU usage: transferring a 27 GB file to \\tower\media\movies at ~450 MB/s, no core spikes over 25%; transferring the same 27 GB file to \\tower\cache\media\movies at 1.03 GB/s, one core spikes to 70-75%, nowhere near maxing out my CPU clock speed. I also have an identical server other than RAM and CPU: RAM is only 64 GB and the CPUs are 2x Xeon E5-2667 v2, base clock 3.30 GHz, boosting up to 4 GHz. I ran the same test with the exact same results for speed and CPU usage. I'm not complaining; 450 MB/s is still roughly 4x faster than 1 GbE. That's gotta be rough going that direction. I hope your array is all SSD. Unraid is only my archival/backup storage and Plex server, which is why I transfer such large files to Unraid.
  9. This does achieve a transfer speed of over 1 GB/s. The downside is that the mover won't move the files, or if it does, not to the right directories, I would assume, unless you created the same directory structure. For example, write to \\tower\cache\directory\testdirectory1, and when the mover runs the file will be moved to /mnt/user/directory/testdirectory1, as in the sketch below. What's your hardware setup and config? My hardware is older enterprise gear, but it's no slouch.
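     A minimal sketch of that path mapping, assuming Unraid's standard /mnt/cache and /mnt/user mount points (the helper is my own illustration, not an Unraid tool):

        # Hypothetical helper: where a file written straight to the cache
        # disk should end up once the mover has run.
        from pathlib import PurePosixPath

        def post_mover_path(cache_path: str) -> str:
            p = PurePosixPath(cache_path)
            if p.parts[:3] != ("/", "mnt", "cache"):
                raise ValueError("expected a path under /mnt/cache")
            # The mover keeps the share-relative layout; only the disk changes.
            return str(PurePosixPath("/mnt/user").joinpath(*p.parts[3:]))

        print(post_mover_path("/mnt/cache/directory/testdirectory1/file.mkv"))
        # -> /mnt/user/directory/testdirectory1/file.mkv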
  10. I have been doing some testing with the cache. The drive is a Samsung 970 Evo Plus 500 GB NVMe, new as of 1/25/2020. The data being transferred is on a very fast NVMe drive as well; it is a 27 GB MKV file. Forgot to mention: 10 GbE network and Unraid version 6.8.2. With the 970 not set as the cache drive, mounted and shared with Unassigned Devices, the file transfers at 1.10 GB/s. With the 970 set as the cache drive, transferring the same file I get 450-500 MB/s, call it 470ish MB/s. Also, there was no other data on the cache drive, Docker and VMs were turned off, and only one share was set to cache-only for testing. I removed it from the cache and mounted and shared it with the Unassigned Devices plugin again, and bam, right back to 1.10 GB/s. I also formatted it btrfs when used under Unassigned Devices, so the file system was the same for both. Pictures of both speeds are attached. Does the cache really have nearly a 50% performance overhead? This would not be noticed on 1 GbE; just a heads up for anyone thinking of going 10 GbE. Not that 450 MB/s is bad, you just won't get the full 1 GB/s speed. A quick local test to take the network out of the equation is sketched below. P.S. I apologize if this has been discussed already; I spent two days reading the forums here and couldn't find anything.
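     To take Samba and the network out of the picture when comparing the two setups, a local write test run on the server itself could look something like this (a sketch; the two target paths are assumptions, point them at your cache mount and your Unassigned Devices mount):

        # Rough local write-throughput test (illustration only).
        import os, time

        TARGETS = ["/mnt/cache/testshare", "/mnt/disks/evo970"]  # hypothetical mounts
        CHUNK = 64 * 1024 * 1024          # 64 MiB per write
        TOTAL = 8 * 1024**3               # 8 GiB per run

        for target in TARGETS:
            path = os.path.join(target, "throughput.bin")
            buf = os.urandom(CHUNK)
            start = time.monotonic()
            with open(path, "wb") as f:
                for _ in range(TOTAL // CHUNK):
                    f.write(buf)
                f.flush()
                os.fsync(f.fileno())      # make sure the data really hit the drive
            elapsed = time.monotonic() - start
            print(f"{target}: {TOTAL / elapsed / 2**20:.0f} MiB/s")
            os.remove(path)

     If the same gap shows up locally, the overhead is in the cache/share layer rather than the network.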
  11. When I try to use preclear it just sits at "starting". I let it run for two days and it was still saying "starting", with an empty log.
  12. I've checked permissions and tried different install locations, everything I can think of. The wizard still jumps from user setup to login (it skips the step that gets the camera password and the button to go to the cameras), and I can't log in with the user and password I set. I'll try again and come back to it later.
  13. On 3.10.6 and 3.10.5 it starts to work: the wizard comes up for the NVR name and UTC, then moves on to user setup, but clicking next goes to the login screen; it doesn't finish the wizard.
  14. I tried to use the unifi-video container. It installs, but on first run it does not start the unifi-video service; I can console in to start it, or a container restart will start it. The real issue is that when it starts it prompts for login rather than the setup wizard, which makes it useless since you can't add a user.
  15. I moved from FreeNAS to Unraid. They both have their pros and cons, but that's for another thread. Anyway, FreeNAS uses system RAM for its cache, so the more RAM the better for FreeNAS, and I put lots of RAM in my box. In Unraid that RAM is pretty much useless; it sits at 1% utilized, which I think is the minimum the dashboard can display (I think I've seen 2% used once). It would be nice if Unraid could use RAM as a cache drive, or as a second cache for file transfers and/or Docker/VMs. I would prefer to keep my Dockers on nonvolatile cache, but Unraid could write a RAM cache to disk upon restart or shutdown, like FreeNAS does. I'm sure I'm in the minority on this feature due to the potential data-loss risk, but the mover can be scheduled to run hourly, and I would be willing to risk an hour or two of data for the high-speed transfer rate I would get using RAM over a 10 GbE connection. Thank you for your time.
  16. I'm glad I stumbled across this thread. I was originally a FreeNAS user, so I have a large amount of RAM. Since Unraid doesn't use RAM for its cache or for much of anything other than holding Unraid itself, this gives me a use for the RAM I have. It works great using the /tmp directory with no other sub-directories, as Plex will create its own, and with no errors after reboots. You can verify /tmp is used only in RAM: as you play a video that transcodes, you will see the RAM usage on the dashboard increase; let it play for a while, then stop (not pause) the video, and you'll see the RAM usage drop back to where it always sits. A command-line check is sketched below. I have been running like this for several days with four devices (Xbox One, Xbox 360s, and a Roku) transcoding off and on, sometimes all at the same time, with 30+ hours watched without an issue.
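     If you'd rather confirm it than watch the dashboard, a quick check like this (run on the server) shows whether /tmp is a dedicated RAM-backed tmpfs mount; as I understand it, stock Unraid runs its root filesystem from RAM anyway, so /tmp lives in RAM either way:

        # Is /tmp a dedicated RAM-backed (tmpfs) mount on this box?
        # /proc/mounts lists device, mountpoint, and fs type per line.
        with open("/proc/mounts") as f:
            for line in f:
                device, mountpoint, fstype = line.split()[:3]
                if mountpoint == "/tmp":
                    ram = " (RAM-backed)" if fstype == "tmpfs" else ""
                    print(f"/tmp is a dedicated {fstype} mount on {device}{ram}")
                    break
            else:
                print("/tmp has no dedicated mount; it sits on the root filesystem")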
  17. That card would work, but it only supports M.2 SATA SSDs. This card is NVMe compatible: https://www.amazon.com/dp/B07NQBQB6Z/?coliid=I1HGOE3XIBG8FK&colid=E4L84U552910&psc=0&ref_=lv_ov_lig_dp_it For best performance you'll want to use a good NVMe drive like an Intel Optane NVMe or a Samsung 970 Pro. You can also use multiple SSD/NVMe drives in your cache pool and make them RAID 0 instead of RAID 1; that will give you better performance, but you lose the redundancy if one cache drive fails. Also check your BIOS and make sure the PCIe slot you want to install it in can be configured as x4/x4/x4/x4 (bifurcation) and not just x16. If it cannot break out the PCIe lanes, that card will still work but only with one SSD; if the lanes can be split, you can have four SSDs for cache or whatever else.
  18. I have 10 GbE as well. I have tried dozens of SATA SSDs, and even the high-end ones like the Samsung 860 Pro don't have great sustained write speed; you'll need to go with an NVMe drive or a PCIe AIC. The Mellanox ConnectX offloads network processing from the CPU. Once a week should be fine, even less if you don't write to it often. I'm using an older Xeon board with no M.2 slots, so I went with this: https://www.intel.com/content/www/us/en/products/memory-storage/solid-state-drives/gaming-enthusiast-ssds/optane-900p-series/900p-280gb-aic-20nm.html I get around 550-610 MB/s transferring to my Unraid box, and that's limited only by the read speed of the lower-end NVMe drive in my desktop.
  19. Well, that's good to know. I have not had the privilege of rebuilding a disk in Unraid. I've done it several times with a Synology box, and they "detune" the rebuild to reduce the stress on the other drives. I figured Unraid would do something similar, since they "detune" the write speeds of the spinning disks unless you configure the tunables. Thanks; that's what I meant, but re-reading what I said, it didn't come out that way.
  20. Your setup consists of Parity, Data1, Data2, and Data3. Any one of those disks can fail and be replaced and rebuilt with the original data. Say the Data2 drive fails: you put in a new drive to replace Data2, and Unraid will use Data1, Data3, and Parity to rebuild Data2 back to its pre-failure state (see the sketch below). Depending on the amount of data, this rebuild could take several days to complete. The rebuild is a constant read/write across all drives in the array, which puts a lot of stress on the other drives and increases their risk of failure. If you're buying all the drives at the same time to build the Unraid server, they will all have the same number of hours on them, increasing that risk. If you don't need all that storage to start with, I would say start with one data drive plus parity and add drives as you need them (the best feature of Unraid).
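     For anyone curious how that rebuild works under the hood: Unraid's single parity is a bytewise XOR across the data disks, so XORing the surviving disks with parity reproduces the missing one. A toy sketch (a few bytes standing in for whole disks; not Unraid's actual code):

        # Toy single-parity rebuild: parity = d1 ^ d2 ^ d3, bytewise.
        data1 = bytes([0x10, 0x22, 0x35])
        data2 = bytes([0x47, 0x51, 0x6C])   # the disk we'll "lose"
        data3 = bytes([0x08, 0x9A, 0xBD])

        # Parity maintained during normal operation:
        parity = bytes(a ^ b ^ c for a, b, c in zip(data1, data2, data3))

        # Data2 fails; rebuild it from the survivors plus parity:
        rebuilt = bytes(a ^ c ^ p for a, c, p in zip(data1, data3, parity))
        assert rebuilt == data2
        print("Data2 rebuilt:", rebuilt.hex())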
  21. Taken right from the manual (the strikethrough in my original post marked the slots you already have populated). Add the four new modules to the slots shown below and enjoy. CPU1: P1-DIMMA1/P1-DIMMB1 and P1-DIMMC1/P1-DIMMD1 are populated; add to P1-DIMMA2/P1-DIMMB2. CPU2: P2-DIMME1/P2-DIMMF1 and P2-DIMMG1/P2-DIMMH1 are populated; add to P2-DIMME2/P2-DIMMF2.
  22. This would work okay for a couple of cameras. I have 6 IP cams with ZoneMinder running in a VM on Proxmox on a Dell R720. In my original setup I had FreeNAS as my storage server and a share mounted in the ZoneMinder VM for camera storage, and it worked great. I decided to give Unraid a try as my NAS to recoup some extra storage space. I set up a share in Unraid and mounted it in the NVR VM, and it was a disaster: I was getting recording errors, things weren't getting recorded, and cameras would drop off. I was recording at 1080p initially; dropping to the lowest resolution helped some, but it was the same story. The poor write speed of Unraid's parity just couldn't keep up. It did work well making the video location an unassigned disk without parity, but that's not like having an actual array like ZFS. I ended up taking a drive out of the Unraid box, installing it in the R720, and passing it through to the VM, and there have been no issues since. I also tried using the cache drive on that share, and that worked great until the mover ran, then the same problems. The R720 and Unraid are both on 10 GbE, so it's not a network issue either. It would also depend on the cameras used; CCTV cameras are very limited in the bandwidth they use, so more of them would probably work.
  23. Yes, I did look into that. All it looks to be is a script that sets the tunable (md_write_method) to reconstruct write. I manually set that option under Disk Settings. That did make write speeds better, but still nowhere near disk speed, and still not great; a sketch of why it helps is below. If I had to do it again, I would set up the array without the parity drive assigned, copy the data at full disk speed, and once that completed, assign the parity disk and let it build. I would recommend that to others as well. Now that everything is transferred, I send about 100-150 GB every few days, and the cache can cover that amount, so I don't have to watch how slowly it gets written to disk. Thank you for the suggestion.
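     A toy sketch of why reconstruct write helps (my illustration, not Unraid code): the default read-modify-write has to read the old data and old parity before every write, while reconstruct write reads the other data disks and recomputes parity outright, so large sequential writes stream instead of waiting on pre-reads.

        # Two ways to keep single parity correct when overwriting d2.
        # Invariant: parity == d1 ^ d2 ^ d3 (bytewise in reality; ints here).

        def read_modify_write(old_d2, new_d2, old_parity):
            # Default: read old data + old parity first, then XOR in the change.
            return old_parity ^ old_d2 ^ new_d2

        def reconstruct_write(d1, new_d2, d3):
            # "Turbo write": read every OTHER data disk and rebuild parity;
            # no pre-read of the target disk or the parity disk is needed.
            return d1 ^ new_d2 ^ d3

        d1, d2, d3 = 0x10, 0x47, 0x08
        parity = d1 ^ d2 ^ d3
        new_d2 = 0x99
        assert read_modify_write(d2, new_d2, parity) == reconstruct_write(d1, new_d2, d3)
        print("both methods give parity", hex(d1 ^ new_d2 ^ d3))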
  24. I googled for days trying to find info on whether Optane AICs are supported in Unraid. I could not find a definitive answer, so I took the plunge, bought one, and hoped it worked. I received it today (4-3-19), popped it in, and it shows up in the drive list. To say the least, it's an AMAZING cache drive. I'm still on a trial, but I came from FreeNAS and I was considering going back. I love being able to add any size drive to the "array", versus FreeNAS requiring drives to be added in the same number as the original drives used; computers and networking are my hobby, and I can't buy 4 drives at a time. The Unraid interface is good, the help in every menu is awesome, and the number of plugins far exceeds FreeNAS. Back to my point: the write speed of Unraid is AWFUL! So much so that it took almost the whole trial just to get my data onto Unraid, leaving little trial time to test with it. Before the Optane I was running two 850 Evo SSDs in RAID 0 over the 10 GbE network, and even that was pretty poor write speed, barely more than the ZFS array's. I popped in the Optane 900P and BAM, I'm writing at 550 MB/s, which is the max read speed of my cheap desktop's NVMe drive. Read speeds are okay; I have way too much data to store on SSDs, and I'm getting full disk speed, which is tolerable since I pull from my storage very little but write to it a ton. I'm hoping I'll be able to get an extension on the trial key to test a bit more to be sure, but it looks like Lime Tech may be getting another paid key (or two). For anyone using Unraid with 10 GbE, an Optane NVMe is a must.