apefray

Everything posted by apefray

  1. Here you go, I assume I have done this right? I have to admit that you saying ports 18-21 are the Marvell-controlled ones is a bit worrying, as they are the ports I am currently using and have been for the past year without issue. The only ones I am not using are the 4 blue SCU ports 34-37, which I thought were the Marvell-controlled ports? tower-diagnostics-20180513-1548.zip Although now you mention this, it makes perfect sense, as the 4 Marvell ports will be SATA3, hence why most of my hard drives are showing a 6.0 Gb/s connection.
  2. Question for both Bob and Johnnie regarding the ASRock EP2C602-4L/D16. Both of you have mentioned not to use the Marvell controller, which I understand due to the issues experienced with Unraid. Now, unbeknownst to me, I thought the 4 blue SCU ports were what we were talking about, but I have just looked at the manual and there seems to be some confusion over the SATA ports. On the website it shows this:
Storage SATA Controller - Intel® C602: 2 x SATA3 6.0 Gb/s, 8 x SATA2 3.0 Gb/s, support RAID 0, 1, 5, 10 and Intel® Rapid Storage 3.0, NCQ, AHCI and "Hot Plug" functions
Additional Storage Controller - Marvell SE9230: 4 x SATA3 6.0 Gb/s, support RAID 0, 1, 10, NCQ, AHCI and "Hot Plug" functions
But there is a discrepancy in the manual itself. I know that ports 34-37 are the ones to be avoided as they are on the Marvell controller, but it's ports 18-21 that are confusing me. If you look at the above description from the website, it mentions 2 x SATA3 and 8 x SATA2, yet the image in the manual quite clearly states ports 18-23 are SATA3. I understand ports 22 and 23 are controlled by the Intel chipset and are indeed SATA3, but ports 18-21? This then got me thinking: if ASRock can't get their descriptions right, are ports 18-21 also running on a Marvell controller? If so, then I have ports 18 - 28 filled at present and it doesn't seem to have caused me an issue. I have been running Unraid for over a year now and use it with the Emby Docker as a media server.
I then had the idea to check what negotiated speed all my drives were showing in Unraid:
Parity Drive: SATA 3.1, 6.0 Gb/s (current: 6.0 Gb/s)
Disk 1: SATA 3.1, 6.0 Gb/s (current: 6.0 Gb/s)
Disk 2: SATA 3.1, 6.0 Gb/s (current: 6.0 Gb/s)
Disk 3: SATA 3.0, 6.0 Gb/s (current: 3.0 Gb/s)
Disk 4: SATA 3.0, 6.0 Gb/s (current: 3.0 Gb/s)
Disk 5: SATA 3.1, 6.0 Gb/s (current: 6.0 Gb/s)
Cache: SATA 3.1, 6.0 Gb/s (current: 6.0 Gb/s)
Unassigned Disk: SATA 3.0, 6.0 Gb/s (current: 3.0 Gb/s)
Unfortunately, without removing the case from the rack and opening it up, I can't tell exactly which disk is connected to which port, but it appears I have the opposite of what is described on the website, though it matches the manual. I'm still none the wiser as to whether ports 18-21 are running on a Marvell chipset and, if they are, whether I have just been lucky in not having any problems so far. As we have been discussing above, I am going to get the HP SAS Expander, but I will also utilise a couple of the SATA ports on the motherboard, primarily for the unassigned/cache disks. The unassigned disk will be an HDD, so either SATA port is fine, but the cache disks will need to use SATA3 ports as they are SSDs, so it's important to know which port is which, and the details ASRock have posted aren't helping in this sense.
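Rather than pulling the case apart, one thing I might try is scripting the check: a quick sketch (Python; the sample line and the parsing are my assumption of what `smartctl -i` reports) that pulls the current negotiated speed out of a drive's SMART identity output, so any links stuck at 3.0 Gb/s stand out:

```python
# Hypothetical helper: extract the *current* negotiated SATA link speed
# from the "SATA Version is:" line that `smartctl -i /dev/sdX` prints.
import re

def link_speed(smartctl_info):
    """Return the current negotiated speed string, e.g. '3.0 Gb/s', or None."""
    m = re.search(r"current:\s*([\d.]+\s*Gb/s)", smartctl_info)
    return m.group(1) if m else None

# Sample line in the format my drives report (assumption):
sample = "SATA Version is:  SATA 3.0, 6.0 Gb/s (current: 3.0 Gb/s)"
print(link_speed(sample))  # → 3.0 Gb/s
```

Run over each `/dev/sdX` in turn, that would at least tell me which device names are on the slow links, even if not which physical port they plug into.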
  3. Silly question time, more curious than anything else, but you mention you are running 9 internal and 5 external drives, yet the expander card only has 1 external output. Having read time and time again that 1 SAS port = 4 SATA connections, how are you running 5 external disks from the 1 port? As mentioned in my OP, I am contemplating purchasing a StarTech SAT35401U, which has a SAS port, and may eventually purchase a 2nd one for backup purposes, so I am intrigued by your 5-disk external setup, hence the reason for wanting two external SAS ports. However, that can wait for now, as the 1 StarTech unit will be enough for a couple of years of backups.
  4. Great, so as long as I go for dual link, in theory I should be able to run 12 hard drives (8 internal and 4 external) from it, all running at full speed, 150 MB/s?
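Doing the sums myself to sanity-check that (my assumptions: a SAS-1 expander at 3 Gb/s per lane, 8b/10b encoding leaving roughly 300 MB/s usable per lane, and dual link meaning two 4-lane SFF-8087 links):

```python
# Back-of-the-envelope bandwidth check for a dual-linked SAS-1 expander.
lanes = 2 * 4                  # two SFF-8087 links, 4 lanes each
usable_per_lane_mb = 300       # ~MB/s per 3 Gb/s lane after 8b/10b overhead
uplink_mb = lanes * usable_per_lane_mb

drives = 12                    # 8 internal + 4 external
per_drive_mb = 150             # target sustained speed per drive
demand_mb = drives * per_drive_mb

print(uplink_mb, demand_mb)    # → 2400 1800, so the uplink has headroom
```

If those assumptions hold, 12 drives at 150 MB/s (1800 MB/s total) fit comfortably inside the ~2400 MB/s dual-link uplink.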
  5. Oh, thanks Johnnie. I was reading the specs on the HP website and it mentioned (whoops, 1TB not 2TB):
Capacity: Given the increasing need for high performance and rapid capacity expansion, the SAS Expander Card and the SA-P410 or SA-P410i Controller offers:
Up to 14TB of total storage with 14 x 1TB 3.5-inch SATA MDL HDD
Up to 7.2TB of total storage with 24 x 300GB 3G SAS 10K SFF DP ENT HDD
Does the card work straight out of the box, or do I need to fiddle with it like I do with the M1015 to get it into IT mode?
Edit: Do you know if this card is SAS v1 or SAS v2? I will only ever use hard drives with the M1015 and the expander card, using the onboard SATA connections for SSDs.
  6. Oh hang on a mo. If I go the expander route, I have just come across this: https://www.ebay.co.uk/itm/HP-24-Bay-SAS-Expender-Expander-Card-487738-001-468405-001-External-Internal/222966046815?epid=19014650362&hash=item33e9cfe05f:g:FhoAAOSwA4Va4jvl Anybody know if this will work with Unraid? It has both internal and external connections and would be ideal in my situation. Edit: Apart from, of course, the 2TB limit for the hard drives, grrrrrrr! I may well go the route of purchasing a 2nd M1015, as I know this works and they can still be had quite cheaply.
  7. You have confirmed my suspicions about the Marvell controller, so I will look at the HBA route instead. I currently have a brand new IBM M1015 HBA that hasn't been flashed to IT mode yet and is still in its packaging. However, the intention for this card was to give me an extra 8 ports (I have a 16-bay server case with eight of the bays already filled). With what you have written in mind, I could go one of three routes from here:
1) Purchase a second IBM M1015 card
2) Purchase an expander to run alongside the existing M1015 I already have, something like the IBM-46M0997 expander card, then purchase the external SAS adapter as in my OP
3) Find a SAS card that has 8 internal ports and 8 external ports. I had a look last night for one of these, but they seem to be rare; I did come across a couple, but they were just so expensive
If the IBM-46M0997 expander card is compatible, then I think this is the route I would take, unless someone comes up with a reason not to go for it? At least this negates the problem of which cable to go for, as it would just be a simple SAS-to-SAS cable required.
  8. I am about to purchase one of these for external backup: https://www.startech.com/uk/HDD/Enclosures/4-Bay-Rackmount-Enclosure-for-SATA-SAS-HDDs~SAT35401U This will be filled with 4 x 6TB hard drives, and I will hopefully use CrashPlan (unless anyone recommends anything else) to back up my Unraid server fully. I have an ASRock EP2C602-4L/D16 motherboard, but am currently using all the SATA connections for the hard drives in the Unraid array, and am left with 4 x SCU ports which I understand I can use as normal SATA ports (although the controller is a Marvell?). So, to connect the above JBOD device to the 4 SCU ports on the motherboard, I am contemplating using one of these (spare port just in case I purchase another external enclosure in the future): https://www.amazon.co.uk/gp/product/B06ZZDLJH6/ref=ox_sc_sfl_title_3?ie=UTF8&psc=1&smid=A1E5DJ3G6SIEDV I will then use a Mini SAS SFF-8088 cable between the dual-port adapter and the external enclosure; that bit I get. The bit I'm stuck on is which SAS 36-pin SFF-8087 cable I need to connect the dual-port adapter to the 4 x SCU ports on the motherboard. Is it a forward or reverse cable I need? In the back of my mind it is a reverse cable, but I just wanted to check before I went ahead and made the purchase. Cheers
  9. OK, just checked the log in the IPMI plugin and it seems that I too am still getting a false positive (hopefully), but it only appears on one of the CPUs in my case. However, at least the IPMI plugin is keeping the fans under control, and I'm not experiencing what sounded like a jet engine every 10 or so days now. Yep, just checked all 273 entries, and all are showing the spike on CPU_BS1 only.
  10. Hi Guys, I know this is an ongoing thing with this motherboard; indeed, I also experienced it at one time. I use 2 x Intel® Xeon® CPU E5-2670 0 @ 2.60GHz, and every now and again I would get the 100°C spikes as mentioned, or the fans would spin up to full and only a reboot would solve the issue. However, by some miracle I found a cure for it, and I'm not sure if anyone else is using this: I installed the IPMI plugin and set up thresholds, and since then I haven't experienced the spikes or the fans running at full speed. Just to give you an idea, the fans would spin up to full every 10 days or so and the server required a reboot, but I've now been running the IPMI plugin for several months and not once have they spun up to full speed and stuck there. OK, due to the thresholds I have set up, they do spin faster every now and again, but that is due to the ambient temp rising; they never go full pelt these days. I'm so glad I found a solution to this, as it was driving me nuts when the fans spun up to full speed; it sounded like a jet engine taking off.
  11. Hi, I installed the NO-IP Docker yesterday afternoon and filled in the details in the config file; NO-IP then created the no-ip2.generated file and everything appears to be working. When I first started the Docker it updated my NO-IP account with the IP address I am running on, but it appears it hasn't updated since. I logged into my NO-IP account, went to the manage hostname page, clicked on the info icon, and it's showing last updated 22 hours ago, i.e. when I first started the Docker. It hasn't updated since. I have checked the Docker log and cannot see any problems, and I see this being generated every minute:
Process 18, started as noip2-x86_64 -c /config/no-ip2.generated.conf, (version 2.1.9)
Using configuration from /config/no-ip2.generated.conf
Last IP Address set ***************
Account ***************
configured for: ***************
Updating every 30 minutes via /dev/eth0 with NAT enabled.
The last IP address set is my correct IP address, the account shows the correct email address, and the configured-for line shows the correct hostname. Am I missing something?
  12. Update: Have now built the server, installed the 6TB drive along with 2 x 4TB drives for data, and a 250GB Samsung SSD for cache. The array has been started and the parity sync has been done. Now have all green lights for everything. Easy. However, I could do with some advice with regards to setting up user shares for Emby (have spent two days looking at YouTube videos, wiki articles and posts on the internet). I'm trying to get my head around the best way to set up the shares to store my media so that Emby can access each of them. What I would like is to create a Movies share on disk 1, then copy my 1.5TB worth of movies to this drive. Going forward, disk 1 is only used for movies; however, eventually I will fill this drive and will need to start filling a second drive (not bought yet), and so on. As my movie collection increases, so does the number of disks holding the movies. Then I want to do the same but with the 2nd drive for my Music and TV collections, which currently stand at 1.9TB: these two shares are stored on disk 2, but can grow onto new drives as and when I purchase them. I understand creating shares, that's the easy part; the bit I am stuck on is the split levels, bearing in mind my intended setup as above, and this is where my head is scrambled with regards to which split levels I should be using. I assume I am correct in using user shares rather than disk shares to achieve the above (although even that I am doubting, thinking about how I want this set up), grrrrrr!
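For what it's worth, this is roughly what I think the Movies share would end up looking like, with a split level that keeps each movie's folder together on one disk (keys guessed from the share .cfg files on the flash drive, so please correct me if I've read them wrong):

```
# /boot/config/shares/Movies.cfg -- my best-guess sketch, assuming one
# folder per movie directly under /mnt/user/Movies
shareComment="Movie library"
shareInclude="disk1"        # add disk2, disk3... as the collection grows
shareSplitLevel="1"         # split only the top level, so each movie's
                            # folder (and all its files) stays on one disk
shareAllocator="highwater"
shareUseCache="no"
```

Music and TV would then be the same idea on disk 2, just with their own include lists.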
  13. Lol, so I guess the consensus is don't at the mo. Just a thought, but if I use the DelugeVPN Docker with my CyberGhost VPN details, will this Docker be the only thing connecting over the VPN, while Unraid and the Emby Docker connect through my regular internet as usual? In my mind this is how it's going to work, which may negate having to set a separate LAN port for Unraid and the 2 Dockers. Secondly, what do people do with regards to backing up Unraid? External drives, a NAS drive or some other setup? I was thinking about purchasing a JBOD 4-bay enclosure (StarTech do one) with an eSATA port, then adding an eSATA card to the Unraid server, connecting the two together via an eSATA cable and backing up to the JBOD device. I'm moving away from desktop cases to rack mount to free up my desk, so this seemed an ideal solution, assuming Unraid works with eSATA cards?
  14. Samsung 850 EVO 250GB it is then, bought. I'm still scratching my head with regards to dedicating the LAN ports to both Unraid and the Dockers. Anyone know of any guides, or has anyone done this previously?
  15. So the consensus is a bigger cache drive then. I can purchase a 250GB SSD, but has anyone used something like a Seagate FireCuda hybrid drive for the cache drive? For the same price as a Samsung EVO 850 250GB, I can have a 2TB FireCuda (ST2000DX002). Has anyone used a hybrid drive for cache? Is it recommended? Will it work for the cache + Dockers and maybe VMs at a later stage?
  16. Yep, I have a CyberGhost VPN paid account to allow me to download torrents. I've just watched a YouTube video that shows how to install the DelugeVPN Docker, and this looks ideal. The only thing I need to figure out is changing from PIA VPN to the custom settings to enable me to use CyberGhost VPN instead, but as I say, this will probably do what I want, especially using the DelugeSiphon Chrome extension. Edit: I think I may have found the settings for DelugeVPN: https://support.cyberghostvpn.com/hc/en-us/articles/213652765-How-to-configure-PPTP-for-Linux-Ubuntu
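If I've read the DelugeVPN template right, switching from PIA to CyberGhost should just be a case of setting the provider to custom and dropping the CyberGhost .ovpn file (plus certs) into the container's openvpn folder. Something like this is what I'm expecting; the variable names are my reading of the template, so treat them as a best guess:

```yaml
# Sketch only -- container settings as I understand the DelugeVPN template;
# credentials and LAN range below are placeholders for my setup.
services:
  delugevpn:
    image: binhex/arch-delugevpn
    cap_add: [NET_ADMIN]          # needed so the container can build the tunnel
    ports: ["8112:8112"]          # Deluge web UI
    environment:
      VPN_ENABLED: "yes"
      VPN_PROV: "custom"          # custom provider instead of PIA
      VPN_USER: "myuser"          # placeholder CyberGhost credentials
      VPN_PASS: "mypass"
      LAN_NETWORK: "192.168.1.0/24"
    volumes:
      - /mnt/user/appdata/delugevpn:/config   # .ovpn goes in /config/openvpn
```

If that works as advertised, the container kills Deluge's traffic whenever the tunnel drops, which is exactly what I'm after.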
  17. My intention is to only run Unraid along with the two Dockers. I only mentioned ESXi in my original post as I did start out thinking I would use ESXi with Unraid, Emby and Deluge all running on separate VMs, but have since decided that Dockers may actually be the way to go. I hope I am thinking about this the right way. Basically, my goal is to consolidate my existing QNAP NAS drive, a Windows 7 machine running Emby and a Windows 7 machine running my torrent client/VPN connection into 1 server case, which is then rack mounted along with my existing workstation (Intel i7-6700K with 16GB) which I use for emulation/CAD work. Currently, the QNAP stores all my movie/music/photo files and is served out to several Emby apps (Roku, Apple iPhone, Android etc.) using my old PC (Intel i5-2500K with 16GB RAM). Then I have the second PC that serves me for the torrent side of things, running CyberGhost VPN, which I use for downloading files for the emulation software. Both these PCs sit in old cases that I found lying around and take up more room than I actually have. As you can see, I have a rat's nest of PCs which I want to consolidate (apart from the workstation), so I will end up with two server cases in a rack.
  18. I did have that thought also, but came to the conclusion that 120GB should be big enough to run the two Dockers from. Or am I thinking along the wrong lines here, and in fact the Dockers will run from the drive array instead? If this is the case, then I could use the SSD as a cache drive instead, which is something I am still looking into. I understand (almost) how Dockers work, but with the torrent (Deluge) Docker, does this run as a standalone app within the Docker, or do I need to install Ubuntu to run it on? I ask because I need to run Deluge through the VPN (CyberGhost VPN), so how do I install the VPN and ensure that Deluge is running through it? I'm a little lost with this part at the moment, although ensuring that Unraid, Emby and Deluge/CyberGhost VPN are using their own dedicated LAN ports is the other side of the puzzle I'm trying to figure out.
  19. Lol, let's just say I'm a geek when it comes to these sorts of things. I have two E5-2670s, but may actually only use 1 and see how things go. The thing is, I have outgrown a QNAP 4-bay NAS drive, and did think about purchasing a newer 8-bay model, but where's the fun in that? Hence looking at building my own Unraid server that I can expand in future if needed (and for quite a substantial saving over the QNAP). Yes, I bought the motherboard, and already have the hard drives from the QNAP I am currently using (apart from the WD Red Pro 6TB, which I bought a couple of weeks ago), but the rest of it is from now-redundant servers that were broken up, hence ending up with the E5-2670 v1 CPUs which I rescued before they were tossed in the bin. I have spent many hours considering the case this should be built into, and came up with a 3U 16-bay hot-swap server case, which then allows me to use 1 or 2 x Noctua NH-D9DX i4 3U coolers which should keep the noise down. The intention is to store all my movies/music/photos etc. on Unraid, then be able to watch both locally and whilst out and about through Emby. There is the potential for 4-5 connections to Emby at any one time (family), again either locally or whilst out and about. The transcoding is limited to 3Mb/s per user as, overall, I only have a 19Mb/s upload rate on fibre. I'm also very much into the emulation scene, and have around 1TB of files that need storing on Unraid as well, and this is where the torrenting comes in via VPN. I won't actually be running any emulation on this server, just storing the files, as I have my main computer to run that on instead.
  20. Hi, I am currently in the process of building a new server to house an Unraid setup along with a couple of Dockers (Emby and a torrent client). The server specs are as follows:
· ASRock Rack EP2C602-4L/D16
· IBM M1015 SAS card (running in IT mode)
· 2 x Intel E5-2670
· 32GB ECC RAM (16GB per CPU)
· 1 x WD Red Pro 6TB for the parity drive
· 2 x WD Red 4TB and 2 x WD Red 3TB drives for data
· 1 x Samsung 120GB EVO 850 SSD for running the Dockers
The ASRock motherboard has 4 x 1Gbps LAN ports, so my thought is to team 2 LAN ports for Unraid (maybe overkill), then dedicate the two others to the Dockers, so that each is using its own LAN port and each LAN port has its own IP address. Is this possible? I would like to have dedicated ports, especially for the torrent Docker, as I use a VPN for its connection which I don't want to interfere with the other LAN ports. Currently I have a standalone PC that I use for torrents and the VPN connection, and another PC for running Emby, as the two cannot co-exist on the same PC due to the VPN connection; hence I would like to combine these two PCs into the Dockers running on the new Unraid server. Of course, I could also go the route of running everything under ESXi instead, but I haven't really looked into this at the moment. Any pointers with regards to the above would be appreciated, or indeed any other suggestions would be welcomed also. Cheers
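For the dedicated-port idea, from the bits I've read so far I believe Docker's macvlan driver is how you pin a container to one physical NIC with its own LAN IP. A rough sketch of what I'm imagining (the interface name, subnet and addresses are all placeholders for my setup, so don't take them as gospel):

```yaml
# Rough idea only -- 'eth2' stands for whichever of the four onboard NICs
# I'd reserve for the torrent Docker; subnet/gateway are my LAN's.
networks:
  torrent_net:
    driver: macvlan
    driver_opts:
      parent: eth2                  # the dedicated physical port
    ipam:
      config:
        - subnet: 192.168.1.0/24
          gateway: 192.168.1.1
services:
  torrent:
    image: binhex/arch-delugevpn    # or whichever torrent image I settle on
    networks:
      torrent_net:
        ipv4_address: 192.168.1.50  # the container's own IP on its own NIC
```

The Emby Docker could get the same treatment on a third port, leaving the teamed pair for Unraid itself.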