Lebowski


Everything posted by Lebowski

  1. I just have 5555 forwarded (not sure what 3838 is for). My path setting is /mnt/user/; it should at least work on your internal LAN?
  2. I had the same notification this morning. I shrunk the array and did a parity sync with exactly the same messages.
  3. I'm doing a data rebuild on 2 disks: I upgraded two of my 2TB disks to 4TB disks, and so far so good doing a dual rebuild. (I have backups and still have the original drives, so I figured I'd give it a go.) Looking in the shares folder, I have some inconsistency in what's shown as protected (green triangle). The share comment where it's cache only is just that: cache only. Appdata and Downloads (cache only) are correct, but Upload, domains and system show the orange triangle. I don't think that's correct; no data/files/folders are on the array for those shares, so they should show green, protected by the cache pool? Unless this has something to do with the "Enable copy on write" setting? (Mine are all Auto.) Or another setting? But I can't seem to find a difference when I compare the settings of Downloads and Upload. Or maybe the cache drives are part of the rebuild; not sure if what I see is correct or a bug.
  4. A new controller like this is cheaper than new drives and lets me expand later. Would this be a good choice? I understand this card is good to go out of the box? (LSI Internal SAS SATA 9211-8i) Samsung drives should work with it? http://www.ebay.com.au/itm/New-LSI-Internal-SAS-SATA-9211-8i-6Gbps-8-Ports-HBA-PCI-E-RAID-Controller-Card-/281409936840?hash=item418556ddc8:g:wJcAAOSwQ15XOYDL
  5. Thanks for that. I might end up moving these drives along and getting more WD Reds instead.
  6. I use the BTSync docker, using Limetech's template built into unRAID, so you should be able to get it? If not, the template is here, unless GitHub is blocked: https://github.com/limetech/docker-templates I use one-way sync, as I don't need to edit my parents' files. I was misleading earlier about port forwarding: you do need to port forward. It was working via UPnP and I didn't notice; I turned off UPnP on my router just now and manually forwarded 5555 UDP/TCP. You might be able to configure it to use another port that isn't blocked? Possibly even port 80 externally to 5555? You can use a proxy also, but I don't know how to set that up; others here would have better advice for getting around that. EDIT: my parents' machine uses a Windows install of BTSync.
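     For reference, here's a rough sketch of what the port mapping looks like if you run it by hand instead of through the template. The image name and share path below are just placeholders, adjust for your own setup:

         # placeholder image and share path; I actually use Limetech's template.
         # 5555 tcp+udp is the port I forward on my router.
         docker run -d --name=BTSync \
           -p 5555:5555/tcp -p 5555:5555/udp \
           -v /mnt/user/ParentsBackup:/sync \
           somerepo/btsync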
  7. Thanks Squid, hdparm -N /dev/nvme0n1 and hdparm -N /dev/nvme1n1 returned the following:

         /dev/nvme0n1:
          SG_IO: questionable sense data, results may be incorrect
          max sectors = 0/1, HPA is enabled

         /dev/nvme1n1:
          SG_IO: questionable sense data, results may be incorrect
          max sectors = 0/1, HPA is enabled
  8. Thank you. I don't really understand what HPA is, but I figure it's part of the BTRFS cache pool?
  9. This would be great, and VMs too. (I know VMs aren't really a part of this, but flash backup would be great.)
  10. Small files work well; it's a smaller amount of data but heaps of files: 56,654 files, 22.8GB. (They save websites to the HDD, so it creates a heap of files.) No need for port forwarding or anything like that; the relay/tracker servers solve this, I guess, but we don't have anything trying to block it. For files like Word documents, the temp files that Word creates sync while the file is open and just update at each end once done/saved. I don't think it does a complete file transfer, nor deduplication, for file updates. As a Linux noob, I can't compare it to rsync.
  11. I use BTSync to back up / sync my parents' data; they're in Holland and I'm in Australia. You can use a relay/tracker server, which may get around the firewall issues you might encounter?
  12. Just wanted to check on this Other Comments section: do I need to do anything? (NVMe pool.)
  13. Not sure if this will help you, but I use my router to assign fixed IPs to my VMs via MAC address. It works really well, and I leave the VMs themselves on DHCP.
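     If your router runs dnsmasq (many consumer routers and custom firmwares do), a static lease is a single line like the one below. The MAC and IP here are made-up examples (52:54:00 is the default KVM MAC prefix):

         # dnsmasq.conf: pin this VM's MAC address to a fixed IP
         dhcp-host=52:54:00:aa:bb:cc,192.168.1.50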
  14. Been doing some testing on network transfers. My WD Red drives are able to get 110MB/s over the network; my old Samsung drives jump all over the place, 40 to 90MB/s (avg 55MB/s). They're old drives that have been running non-stop for the last 5 years in my previous server. When copying from the Samsung drives, CPU usage spikes to 90% during the transfer. Anyway, I'm wondering why the Samsung drives push the CPU but the WD drives don't. All drives run from the motherboard controller, and all pass SMART tests. I have the Cache Directories plugin installed; not sure if that is doing anything?
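     To take the network out of the equation, something like this reads straight from the disk locally. The device names are just examples, check which ones are yours first:

         # buffered sequential read straight off the device, no network or shares involved
         hdparm -t /dev/sdb    # one of the Samsung drives
         hdparm -t /dev/sdc    # a WD Red, for comparison
         # -T measures cached reads instead, as a baseline for the controller/CPU path
         hdparm -T /dev/sdb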
  15. This might be unrelated to the beta. I just logged into unRAID and all of a sudden most of my dockers had an update notification. I went to the Docker tab and did another manual update check, and they all went back to no updates (Green - Normal). I did hard-power my router just before all of this, as I had to move it; I figure it's related to that?
  16. It moves it to the TV Show share; Sonarr has no idea if it's on a cache drive or the array. unRAID takes care of those details depending on how you set up the share. It's a fast transfer for me as it's on the same drive: the TV Shows share uses cache and my Downloads share is cache only. Hope that makes sense.
  17. Yes, Kodi will pick it up even though it's on the cache drive (assuming Sonarr is sending it to the appropriate folder once downloaded). Any program, or you, accessing the share won't be able to tell it's on the cache drive. You need to set the share up so it uses cache. You get options when setting up the share: you can make it use cache, cache only (I do this for downloads), or no cache, and you can even include or exclude disks. I have the following setup: SAB downloads to a folder in "Downloads" (cache only), to "complete/tv". Sonarr picks it up once it's unpacked and sends it to my "TV Shows" share; that share is cache enabled, so it takes a split second to move the file. At 12am the mover kicks in and sends it to the array. Before the file is moved it is totally accessible via the share, same as after it's moved. I understand that if the file is being used/open at the time of the move, mover will skip it and leave it on the cache drive till the next schedule.
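     To illustrate the flow with made-up file names (the share names are mine, your paths will differ):

         # 1. SAB downloads to the cache-only share:
         /mnt/cache/Downloads/complete/tv/Show.S01E01.mkv
         # 2. Sonarr moves it to the TV Shows share, still on cache, so it's near-instant:
         /mnt/cache/TV Shows/Show/Season 1/Show.S01E01.mkv
         # 3. After mover runs, the same file lives on an array disk:
         /mnt/disk1/TV Shows/Show/Season 1/Show.S01E01.mkv
         # Throughout, Kodi just sees the user share path, which never changes:
         /mnt/user/TV Shows/Show/Season 1/Show.S01E01.mkv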
  18. When the cache drive is full, files written will be sent to the array instead (unless the share is cache only). Mover will send the files on the cache drive to the array at 3:40am (you can change this under Schedule on the Settings tab), so you can keep copying. I have my "Downloads" share as cache only; SAB and Deluge use folders in this share. Once downloaded and unpacked, files move to the appropriate share for TV/Movies, which is also on the cache drive (unpacking is super fast), and move to the array once a day. SAB is set to pause downloads if less than 10GB is left on the drive.
  19. Reporting back: dual parity working just fine, VMs working fine, dockers all OK, NVMe cache pool all OK. No crashes; the system seems very stable. (I'm not doing anything fancy with my VMs, it's just for RDP.)
  20. I'm trying to figure out the results, as my SSDs are rated for 2500MB/s read and 1500MB/s write, but I'm not sure if any of the numbers show that, or if it's all wrong due to caching issues?
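     One way to check whether caching is skewing things is to read straight from the device with the page cache bypassed. The device name is an example; bs/count just make it read a few GB:

         # sequential read directly from one NVMe device, bypassing the page cache
         dd if=/dev/nvme0n1 of=/dev/null bs=1M count=4096 iflag=direct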
  21. I'm getting this result from the 950 Pros in my cache pool:

         -----------------------------------------------------------------------
         CrystalDiskMark 5.1.2 x64 (C) 2007-2016 hiyohiyo
         Crystal Dew World : http://crystalmark.info/
         -----------------------------------------------------------------------
         * MB/s = 1,000,000 bytes/s [SATA/600 = 600,000,000 bytes/s]
         * KB = 1000 bytes, KiB = 1024 bytes

         Sequential Read   (Q=32, T=1) : 5322.083 MB/s
         Sequential Write  (Q=32, T=1) : 3262.831 MB/s
         Random Read 4KiB  (Q=32, T=1) :  691.821 MB/s [168901.6 IOPS]
         Random Write 4KiB (Q=32, T=1) :  475.712 MB/s [116140.6 IOPS]
         Sequential Read   (T=1)       : 2292.775 MB/s
         Sequential Write  (T=1)       : 1558.035 MB/s
         Random Read 4KiB  (Q=1, T=1)  :   71.989 MB/s [ 17575.4 IOPS]
         Random Write 4KiB (Q=1, T=1)  :   60.590 MB/s [ 14792.5 IOPS]

         Test : 1024 MiB [C: 34.1% (10.0/29.5 GiB)] (x5) [interval=5 sec]
         Date : 2016/05/22 9:44:53
         OS   : Windows 10 Professional [10.0 Build 10586] (x64)
  22. The container update just came up. I've updated and removed the extra variable; all working fine on version 1.0.2.