
Posts posted by Adrian

  1. 6 minutes ago, johnnie.black said:

    It's normal, it's different for each file system and it also varies with the device size.

    Could you provide some more details on it? Is there any reasoning behind it? Why wouldn't an end user see it as 0 on an "empty" drive?

     

    The reason I noticed this, and why it was bothering me, is that I'm in the process of moving all my drives to XFS, copying everything from a reiserfs drive to an XFS drive. Once I'm done moving everything, coming here and seeing > 0 used made me second-guess whether I had forgotten to move something.
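    For anyone else wondering about this, the overhead is visible from the command line too: a freshly formatted filesystem already reports nonzero used space for its own metadata and journal, before any files exist. A rough sketch, assuming a standard Linux userland (the device name /dev/sdX1 and the mount point are placeholders, and mkfs destroys any data on the device):

```shell
# Format and mount a brand-new XFS filesystem (hypothetical device /dev/sdX1;
# this DESTROYS any data on it), then look at what df reports as "used".
mkfs.xfs /dev/sdX1
mkdir -p /mnt/test
mount /dev/sdX1 /mnt/test
df -h /mnt/test   # "Used" is already > 0 with no files on the drive:
                  # that's the filesystem's metadata/journal overhead,
                  # and it scales with the size of the device.
```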

  2. I'm not sure if this is a bug, or if it should be posted in a different sub-forum. Why does it show 4.02 GB of used space on an empty drive? I noticed it shows something similar for reiserfs drives, except there it says 33.6 MB. I understand that a drive's raw size and its usable size, especially after formatting, are different, but if that's what is happening, why show it in the GUI at all? From an end user's perspective, I'd expect an empty drive to show 0 used space.

     

    [Screenshot: UsedSpace.PNG, the used-space figure shown for an empty drive]

     

    Regards,

    Adrian

  3. 8 minutes ago, bobkart said:

    And, note that the X11SSL, which drops one PCIex4 slot compared to the X11SSM, will suffice if all you need is the three total PCIe slots.  You might be able to find the SSL at a lower price than the SSM.

    Thanks for all the help. I went ahead and ordered the MBD-X11SSM-F-O to have that extra free PCIe slot just in case I need it in the future. I should have everything by Wed so hopefully I can have it up and running over the weekend. I'll let you know how it goes.

  4. 8 hours ago, bobkart said:

    That board has x8/x8/x4/x4 slots (all PCIe3).  The x4 slots are the concern.  They'll have ~4GB/s bandwidth instead of ~8GB/s.  Splitting that over eight channels leaves ~500MB/s per channel, plenty for a spinning drive.  This of course leaves out concerns of overall motherboard PCIe bandwidth, but that's present regardless of how the drives connect.

    OK, so considering that the drives I'm using won't come close to 500 MB/s (mechanical drives top out in the 200s at most?), using the x8 card in the x4 slot should be fine then, right? Otherwise, to get three x8 slots I'm looking at the workstation board, which even with the savings from getting three 8-port controllers puts me at $40 more.
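    A quick sanity check on the arithmetic from the quote above (rounded figures from the post, not exact PCIe specs):

```shell
# ~4 GB/s usable on a PCIe 3.0 x4 slot, split across the eight drives
# on one 8-port HBA, gives the per-drive bandwidth.
slot_bw_mb=4000
drives=8
per_drive=$(( slot_bw_mb / drives ))
echo "${per_drive} MB/s per drive"   # ~500 MB/s per drive, well above the
                                     # ~200 MB/s a spinning drive sustains
```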

     

    I have an SSD that I use for cache, but that will be on the onboard controller.

     

     

  5. 4 hours ago, t33j4y said:

    In my case, 2x8GB (Kingston Server Premier) were actually cheaper than going for the 1x16GB module. Just goes to show that good planning can lead to good savings :-)

    Did you select it from the list on Kingston's site?

  6. 21 minutes ago, bobkart said:

    And, as long as we're talking about saving cost, three 9207-8i's look to come in $30-$40 under the two-card option you mention.  But maybe your PCIe slots are otherwise occupied.

    Well, the 9207-8i uses x8 lanes, so I thought I could only have two x8 cards in the X11SSM-F-O. These would be the only cards, so am I mistaken? Could I have three x8 cards?

    I looked at the PCIe 3.0 24-port card (9305-24i), but that thing is $570. I'm only using 4TB drives (mostly the HGST NAS drives). I only have 20 drives, so I was thinking I'd put 8 on the 9207-8i and 12 on the 16-port card. Would that help with any limits I might hit?

  7. 4 minutes ago, bobkart said:

    No flashing; they're 9207-8e's as opposed to 8i's, mostly bought used on eBay, but I believe one was brand new.

    So if I get an LSI 9201-16i and a 9207-8i (I see them on Amazon), will I have to flash them?

     

    https://www.amazon.com/gp/product/B003UNP05O/ref=ox_sc_act_title_4?smid=A2J5EC07WROWMJ&psc=1

    https://www.amazon.com/gp/product/B0085FT2JC/ref=ox_sc_act_title_1?smid=A2J5EC07WROWMJ&psc=1

     

    11 minutes ago, bobkart said:

    Regarding memory, I'd lean towards a single 16GB ECC UDIMM instead of two 8GB's.  Of course pricing could work against that choice.  And: Micron/Crucial are pretty much the same, just FYI.

    Ok, I checked the price and a single 16GB is actually $50 less, thanks!

     

    12 minutes ago, bobkart said:

    On your controller prices, I see them on eBay in the $60 range.  Don't know if that's an option for you.

    Yeah, I'm staying away from eBay for this upgrade if possible.

  8. OK, so this is possibly my final build:

     

    Supermicro X11SSM-F-O

    Xeon E3-1230 V6 3.4 GHz

    Hynix MEM-DR480L-HL01-EU24 2x 8GB ECC

     

    $755 for it all. That's about $80 less than the i7-6700 route. I could shave another $20 if I go with the Micron memory instead.

     

    I didn't include the LSI controllers in my comparison since I'm using the same controllers either way, but those will run about $340 more.

     

  9. 11 minutes ago, t33j4y said:

    Actually a fair point - I've just looked into the E3-1230v6 - I can shave almost USD100 of my cost by accepting a little lower frequency and no GPU. Since I'll be pairing it with the X11SSM which has on-board GPU and this is for server use, there's no actual need for the GPU fitted one.

    Oh crap, I completely forgot about GPU availability. Hmm, are you sure the GPU is included on the motherboard and doesn't require a GPU in the CPU? If so, yeah, that's also a $70-80 savings from not having to get the E3-1245.

     

    Well, I guess that for direct console access, IPMI provides a VGA GPU?

  10. 17 hours ago, t33j4y said:

    Have you considered a V6 Xeon? I also looked at V5 but I've just noticed that fx an E3-1245 v6 is actually marginally cheaper than the E3-1245 v5. Go figure.

    Well I'm looking at E3-1230 due to pricing. The V6 is $10 more than the V5, so not much more. I may get the V6, still undecided.

     

    17 hours ago, bobkart said:

    Regarding SAS cards, the 9207's are solid performers; I have about six of those in various 16-to-24-drive setups.

    Did you have to do any flashing?

  11. So I'm looking at upgrading my unRAID server. It's currently running on a Gigabyte Z77X-UD3H with an i3-3245 CPU in a Norco case with 19 drives, using 2x AOC-SAS2LP-MV8 plus the onboard SATA ports. On and off it's given me various issues, and while it's still chugging along, I'm expecting it to crap out any day now. I was going to upgrade it a while back but got distracted with other things, so I'm trying to work on this again. I've been reading through various posts and researching other sites regarding which motherboard and controllers to get. This is what I've put together so far:

     

    Supermicro X11SAT-F

    i7-6700 3.4 GHz

    2x Supermicro Certified MEM-DR480L-HL01-UN21 Hynix Server Memory - 8GB DDR4-2133 2Rx8 Non-ECC UDIMM

    LSI 9201-16i

    LSI 9207-8i

     

    or

     

    Supermicro X11SSM-F-O (cheaper than the workstation motherboard)

    Xeon E3-1230 V5 3.4 GHz

    Do I need ECC memory?

     

    I got to this choice because it supports the i7, has IPMI, and has 4 PCIe 3.0 slots in case I want to upgrade the controllers in the future.

     

    For the controllers, I wanted to get away from the Marvell-based controllers, and I saw several posts recommending these specific ones. I couldn't find any 9201-8i (I'm getting them on Amazon), so hopefully the 9207-8i is good.

     

    Looking for thoughts, recommendations, warnings, etc. regarding this build. Anything else I should look at? Alternatives?

     

    Thank you,

    Adrian

  12. Not sure if this falls under a general Docker issue or if it's specific to the docker application.

     

     

    I just tried using the update option on a docker app and I got this error:

     

    Quote


    root@localhost:# /usr/local/emhttp/plugins/dynamix.docker.manager/scripts/docker run -d --name="sickrage" --net="bridge" -e TZ="America/New_York" -e HOST_OS="unRAID" -e "PGID"="100" -e "PUID"="99" -p 8081:8081/tcp -v "/mnt/user/downloads/complete/":"/downloads":rw -v "/mnt/user/TV Shows/":"/tv":rw -v "/mnt/user/appdata/sickrage":"/config":rw linuxserver/sickrage

    /usr/bin/docker: Error response from daemon: Conflict. The name "/sickrage" is already in use by container 7bd07d554d4a0c63adfb41fd2d5bee357015fb31130e91699a01a48ab5163f2a. You have to remove (or rename) that container to be able to reuse that name..
    See '/usr/bin/docker run --help'.

    The command failed.

     

     

    To me it sounds like the error you'd get if you tried to install a duplicate docker app with the same name, but I'm doing this as an update, so I'm not sure why it's complaining about the container name.
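    If anyone hits the same thing: the update script does a fresh `docker run`, so if the old container wasn't removed first, the name collides. Assuming the standard docker CLI, one way to clear the stale container manually (the name comes from the error message above) is:

```shell
# Remove the old container so the update can recreate it under the same name.
# This deletes only the container, not the image or the /config appdata,
# since those paths are bind-mounted from the host.
docker rm sickrage

# Or, to keep the old container around for a rollback, rename it instead:
# docker rename sickrage sickrage-old
```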

  13. Mystery solved. It's a firewall rule I have that routes traffic from unRAID through a VPN. It was supposed to be disabled, and I'm not sure how, but somehow it got enabled. Even though ping should work through it, I guess something else is wrong.

     

    Maybe it could use ping initially and fall back to something else if ping fails, just to make sure? Not a big deal, but you can see how the conflict between what it was reporting and what everything else was showing caused some confusion.
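    For anyone debugging a similar report, the individual steps can be checked by hand. Assuming a standard Linux userland, something like:

```shell
# What the plugin tests: ICMP to github.com. This fails if a firewall
# or VPN rule eats ICMP even when everything else works.
ping -c 3 github.com

# DNS resolution on its own, via whatever server is configured:
nslookup github.com

# HTTPS reachability. This is the fallback idea: it can succeed even
# when ICMP is blocked by a firewall/VPN rule.
curl -sI https://github.com | head -n 1
```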

  14. 3 hours ago, Squid said:

    The warning is from a failure to ping github.com  Since plugins, etc all use github to pull from, possibly the issue is because if you're running pfSense as a VM on unRaid then it's not up and running when the plugin does its tests at boot up....

     

    Either way, at the time of the test FCP was unable to ping github.  Ignore it if you want....

     

    I run pfSense as a VM in VMWare and it's pretty much always up. Can I force this test? Is it the same as a rescan?

  15. Started seeing this recently, so I was happy to find a post about it, until I followed the link about setting up Google's DNS servers. That doesn't really answer the issue. I run a DNS server on pfSense (my firewall), which uses Google as its forwarder. All my computers use my firewall's DNS server without issue. Also, I'm able to update plugins, install docker apps, update unRAID, etc., so it's obviously able to resolve DNS. So what's the deal here?

     

    Thanks in advance,

    Adrian

     

  16. 30 minutes ago, lbosley said:

    I guess I figured you were primarily just looking at unRAID storage.  From my research the SuperMicro X10SRA-F MB is about as good as it gets for connecting a large storage subsystem.  I know there are some X99 enthusiasts out there also.  But the SM boards are a better fit for servers, IMO.  I only have a PCIe 2.0 slot available in my current system.  I noticed when I move one of my SAS controllers into this slot the performance drops off noticeably.  Good excuse to upgrade.  :D

     

    I bought 16GB (2) of the MEM-DR480L-SL02-ER21 8GB DDR4-2400 ECC RAM from an Amazon seller.  My needs don't call for much RAM. I run Plex and some plug-ins, and not much else planned right now.  Hopefully it arrives before the weekend, but I kind of doubt it.  The CPU will more than double the power of my current i3 chip.  

     

    I haven't had any luck finding the Samsung memory, so I'll probably go with the Crucial one. Thanks.

  17. 4 hours ago, lbosley said:

    Adrian,

     

    I just ordered the same motherboard with the E5-2620 v4 chip.  Even though the v3 clock speed is higher, Passmark scores the v4 (8-core) processor at about 14% faster than the v3 and it costs the same.  I purchased ECC buffered memory from the SuperMicro approved list - Samsung DDR4-2400.  With plenty of PCIe lanes, ports, and slots, your system should be a beast for unRAID expansion possibilities.  Good luck on your build.

     

    ok great, thank you so much. I added it to my list while I research it. Where did you get the Samsung memory? And which model? Thank you.

  18. So in trying to find a CPU under $600 with 6 cores (12 threads) and a clock speed above 2 GHz, it looks like these are my only two options. If anyone knows of any others, let me know :)

     

    Intel Xeon E5-1650 v3 ($600)

    Memory Type: DDR4 1333/1600/1866/2133

    https://ark.intel.com/products/82765/Intel-Xeon-Processor-E5-1650-v3-15M-Cache-3_50-GHz

     

    vs

     

    Intel Xeon E5-2620 v3 ($450, $420 on sale atm).

    Memory Type: DDR4 1600/1866

    https://ark.intel.com/products/83352/Intel-Xeon-Processor-E5-2620-v3-15M-Cache-2_40-GHz

     

     

    For memory I'm looking at

    Crucial 32GB Kit (2 x 16GB) DDR4-2666 RDIMM ($363, only found it directly from Crucial)

    http://www.crucial.com/usa/en/x10sra-f/CT9319493

     

    Any other options would be appreciated.

     

     

     

    So even though the motherboard is single-processor, the E5-2620 v3 seems to be the best option based on price. Are there any reasons this would be a bad choice? Is the E5-1650 worth the extra $150?

     

    Thanks,

    Adrian
