Posts posted by gert
-
Hi everyone.
Since the update to 6.3.2, I can't access my shares anymore from any device on my network:
- Windows 10 can't access the shares
- neither can my Android phone
- nor my Linux box
Telnet from my Windows 10 machine works perfectly, and I can reach the unRAID dashboard in my browser. I had this problem with the 6.2.x branches too; the last version that works without any problems for me is 6.1.9.
Any ideas?
tower-diagnostics-20170221-2041.zip
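Since telnet and the web GUI respond, the box itself is up, so one quick way to narrow this down is to check whether the SMB port is reachable at all from a client. A rough sketch from any Linux machine on the network ("tower" is an assumed hostname; substitute your server's name or IP):

```shell
# Check whether SMB (TCP 445) on the server answers at all; telnet and the
# web GUI use different ports, so they can work while SMB is down.
# "tower" is an assumed hostname - replace with your server's name or IP.
host=tower
if timeout 3 bash -c "exec 3<>/dev/tcp/$host/445" 2>/dev/null; then
  echo "port 445 open - SMB service reachable"
else
  echo "port 445 closed/filtered - network path or smb service problem"
fi
```

If the port is open but shares still fail, the problem is more likely at the SMB/Samba level than basic networking.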
I seem to have the exact same issue; I've been trying to upgrade since 6.1.9 every other release or so.
Sent from my iPhone using Tapatalk
-
Having trouble starting Madsonic:
HTTP ERROR: 503
Problem accessing /index.view. Reason:
Service Unavailable
It had been working for a long time. I recently had to redo the docker.img, and since then this error has occurred, with the same config path etc. as before (same template, my-binhex-madsonic).
I see a couple of others reporting the same here: https://hub.docker.com/r/binhex/arch-madsonic/
-
I've been following this thread with interest and I think I'm going to dive in, as the performance per £ is unbeatable, and it'd be a nice little project ;-)
So far, I have located the following components:
- 2x E5-2670 SR0KX
http://www.ebay.co.uk/itm/2x-Intel-Xeon-E5-2670-SR0KX-CPU-2-6GHz-Eight-Core-Processor-20MB-Smart-Cache-/401130447014?hash=item5d653ce0a6:g:yjIAAOSwtJZXT~WZ
- 64GB RAM
http://www.ebay.co.uk/itm/HP-64GB-8x8GB-605313-071-PC3L-10600R-ECC-DDR3-1333MHz-HP-DELL-IBM-Lenovo-/131855768729?hash=item1eb3375c99:g:HygAAOSwnNBXaouw
- ASRock EP2C602-4L/D16
http://www.morecomputers.com/product.aspx?pn=EP2C602-4L%2fD16&man=ASRock&referer=PCPart
- 2x Noctua NH-U12DX i4
- Phanteks Enthoo Pro ATX Full Tower Case
Will all these parts work OK together? I'm not 100% sure what type of memory I should be looking for: ECC/buffered/registered?
I'm also curious whether a standard Noctua cooler will work, or why people have used the NH-U12DX i4?
Thanks in advance for any help
This looks good spec-wise, but you can't be entirely sure, as some motherboards can be picky with RAM etc. The specs are right, though: you want 10600R (registered). 12800R is also an option, although it is usually far more expensive.
You can't just use any cooler on these server boards; they generally need Narrow ILM and/or Square ILM coolers. You can read more about it here: http://www.servethehome.com/narrow-ilm-square-ilm-lga2011-heatsink-differences
- 2x E5-2670 SR0KX
-
When a multi-CPU motherboard's spec says max 145W TDP, is that for both CPUs combined or per CPU?
That is per CPU. 145W is pretty high, so it should support most CPUs.
Sent from my iPhone using Tapatalk
-
Finally jumped on this train just so I could throw more cores at my VMs.
MB: Asus Z9PA-D8
CPU: 2x Intel Xeon E5-2670 SR0KX 2.6GHz eight-core processor
RAM: 64GB (8x8GB) PC3L-10600R DDR3 Dell PowerEdge R410 server memory
What coolers would people recommend for this MB?
Kevin.
I went with the Noctua NH-U12DX i4 and I'm not regretting it one bit, as my server is in my living room and I appreciate it being low-noise. From what I've read, though, Cooler Master Hyper 212 EVOs would be the cheaper choice, or the Noctua NH-U9DX i4 if you lack clearance in height.
-
Nanoxia Deep Silence 1 Rev. B: it has 11x 3.5" and 3x 5.25" bays, which should make it possible to add 16 drives.
My thoughts on it as a storage server case:
- It reminds me a lot of the Fractal R4, just slightly taller and slightly slimmer, although I find the build quality of the DS1 Rev. B slightly lower, especially the side panels, which are not sturdy enough.
- You will need a rather short PSU to be able to use all 11 internal 3.5" bays.
- The 3x 3.5" bay can easily be switched to a 5x 2.5" bay (also included).
-
I only (very) recently put 6.2 beta on my server.
I did not have any issue pre-clearing the second parity disk I have just added to my array.
The fix will need to wait until I add/replace one of the existing disks with a larger one.
(Otherwise, I have no way to test the process.)
Whatever the fix might be, it must be backwards compatible with the older releases of unRAID.
In the interim, you can run the following command to "patch" the preclear_disk.sh script.
First change to the directory holding preclear_disk.sh. For most, it will be:
cd /boot
Then type (or copy from here and paste) the following:
sed -i -e "s/print \$8 /print \$9 /" -e "s/sfdisk -R /blockdev --rereadpt /" preclear_disk.sh
Your preclear disk script will be edited and should work with the two changes you mentioned. (Actually, each occurs in two places, so a total of 4 lines are changed.)
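For anyone wary of running sed -i blind against their only copy: here is a harmless dry run of the same two substitutions on made-up sample lines (the /tmp file and its contents are invented for illustration; the sed expressions are exactly the ones above):

```shell
# Dry run of the two substitutions on made-up sample lines, so you can see
# what the patch rewrites before touching the real preclear_disk.sh
printf 'awk ... print $8 ...\nsfdisk -R $theDisk\n' > /tmp/preclear_demo
sed -i -e "s/print \$8 /print \$9 /" \
       -e "s/sfdisk -R /blockdev --rereadpt /" /tmp/preclear_demo
cat /tmp/preclear_demo
```

Taking a backup first (cp preclear_disk.sh preclear_disk.sh.bak) costs nothing either.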
Joe L.
That worked for me, thanks! (6.2.0-beta23)
-
Cat 5e is good enough for gigabit. It might be a damaged cable or electrical noise causing your problem.
-
You found the culprit.
Now you just need to find out why it is only running at 100 Mbit.
-
Sounds like your issue is that your network is only running at 100 Mbit. Is the 11 MB/sec stable, or does it sometimes drop well below that too?
I would check whether your network switch and/or cables are causing problems and your link speed is auto-negotiating down to 100 Mbit instead of 1000 Mbit.
As for the rest of the hardware, it looks okay, and a CPU upgrade in itself should allow you to do what you want (I haven't calculated your power usage, but a single-rail 600W PSU should more than suffice).
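A quick way to confirm the negotiated link speed on the server side is to read the standard Linux sysfs entries (unRAID is Linux underneath, so this should work there as on any distribution; interface names vary):

```shell
# Print the negotiated speed of every network interface; a port stuck at
# 100 Mbit will show "100" here instead of "1000"
for i in /sys/class/net/*; do
  n=$(basename "$i")
  s=$(cat "$i/speed" 2>/dev/null || echo '?')  # virtual ifaces report no speed
  echo "$n: $s Mbit"
done
```

If it shows 100, try a different cable and switch port before anything else; forcing the speed manually usually just masks the fault.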
Shameless self-promotion: should you feel like acquiring some truly awesome server hardware, I am currently selling some perfect (used) hardware for an unRAID server on the Danish second-hand portal DBA.dk:
http://www.dba.dk/server-supermicro-god-e3/id-1024966774/
-
I find this video gives quite a good overview of the R5 and the differences from the R4.
-
Thanks for all the great insights! I'm going to check out the R4 (any particular reason you went with the R4 over the R5?) as well as the Nanoxia (found it available on Amazon).
I purely went with the R4 because it was before the R5 was announced; if I had to choose today I would go with the R5 for sure, since it has SSD trays on the back of the motherboard tray and better airflow potential for the built-in hard disk trays.
Additionally, something I have run into while thinking about drive expansion is that my PSU only offers a limited number of SATA power connectors, so I would need to pick up a few Molex-to-SATA adapters. Any thoughts on whether to just use what I currently have with the Molex adapters, or upgrade to something different?
Seeing that you have a modular PSU, I would just acquire some cables with SATA power. I personally got a pair of 4x SATA power cables from this guy for my Seasonic PSU, and they have been working great and were fairly cheap (if you disregard the shipping).
-
I had a setup very similar to your server build and was in the same situation as you, so I upgraded to a Supermicro X10SL7-F and the Node 804 case. I wasn't happy with that in regard to heat/noise levels, so I upgraded again to a Fractal Define R4 (still with the SM board), which I was very happy with until I couldn't resist any more and just had to jump on the dual E5-2670 wagon (still in the R4).
Do you have the Intel S2600CP motherboard (SSI EEB) in an R4? I don't think an SSI EEB motherboard would fit in the R5, because the motherboard is mounted lower than in the R4 and it would be blocked (search for Define R5 on Google and you will see).
You are correct. I went with an ASUS Z9PA-D8/iKVM; for more details see my post here: http://lime-technology.com/forum/index.php?topic=46077.msg474222#msg474222
-
After many months of consideration I ended up pulling the trigger and acquired two of these CPUs, along with the ASUS Z9PA-D8/iKVM board and some Noctua NH-U12DX i4 coolers. I was lucky to pick up some 10600R RAM cheap (48 GB).
My reasons for going with this combo:
I was using the Fractal Design Define R4 and was happy with it (it sits vertically on a shelf). I have no room for bigger cases (for SSI EEB), so it had to be ATX.
I wanted PCIe 3.0 x16 slots for playing around with replacing my gaming/desktop PC later on (other ATX motherboard options were very limited).
I wanted enough PCIe slots to add an HBA and other cards for more video card passthrough (to replace my HTPC).
I wanted IPMI functionality.
It had to be 2 CPUs for it to be worth replacing my E3 Haswell with the Supermicro X10SL7-F (meaning I had to add an HBA as well).
In the end I am happy with my choice, even if I don't think it is the same quality as my previous SM board. I do observe added heat from the two higher-TDP CPUs, which lack Haswell's efficiency, so my drive temps in general have gone up to an average of 40°C with several drives spun up; but since they are WD Reds they should cope fine with this.
Observations: the CPUs often boost to 3.0-3.1 GHz on all cores. Temperatures average 45°C/62°C (idle/load).
-
I had a setup very similar to your server build and was in the same situation as you, so I upgraded to a Supermicro X10SL7-F and the Node 804 case. I wasn't happy with that in regard to heat/noise levels, so I upgraded again to a Fractal Define R4 (still with the SM board), which I was very happy with until I couldn't resist any more and just had to jump on the dual E5-2670 wagon (still in the R4).
If you have room for them, the R4/R5 are very nice cases for storage, and I can strongly recommend them (I put mine vertically on a shelf). That gives you room for 8x 3.5" drives and 2x 2.5" SSDs out of the box, with the option to add a 3x 3.5" drive bay. With the E3 Xeon and a tower cooler, temperatures were very respectable (both CPU and drive), while the noise level was decent.
With a Define R4 or R5 you would just have to consider a storage controller and maybe a drive bay for further drive expansion, as your PSU, RAM and motherboard from the server build are very nice for unRAID IMO.
Alternatively you could also get the Nanoxia Deep Silence 1 Rev. B, which is very similar to the R4 but has slightly higher 3.5" drive potential with the option for a 5-in-3 drive bay. It is harder to find outside Northern Europe, as I understand it.
-
Thank you very much for this; it may very well become relevant for me.
-
So I have now finished most of the work on my server.
https://picload.org/image/wwlwggr/2016-03-2618.24.12.jpg
https://picload.org/image/wwlpwwr/cinebench.jpg
I also put my MSI GTX 980 into the rig to test gaming. Not much difference from my single X5650 @ 4GHz.
FireStrike:
https://picload.org/image/wwlpwwl/firestrike.jpg
My Hardware:
Nanoxia Deep Silence 6 Rev.B
ASROCK EP2C602-4L/D16
2*E5-2670
64GB ECC @ 1333MHz
Enermax Platimax 1500W 80Plus Platinum
lots of HDDs and SSDs
Did the motherboard fit, or did you have to make a custom solution for the motherboard tray standoffs?
I am thinking about getting the Deep Silence 1 Rev. B for another board with a similar form factor.
-
+1 glftpd
LSI Controller FW updates IR/IT modes
in Storage Devices and Controllers
Posted
Thanks for this, just used this post as a reference, but with the new modified package.
Here is a modified version of the text you wrote that fits what I ended up doing:
Use Rufus to create a FreeDOS USB drive (supporting both BIOS and UEFI), extract the contents of H310H200.zip to it, and boot into DOS.
1. Run 1.bat
2. Run 2.bat (can be skipped; it resulted in errors and no backups for me)
3. Run 3.bat
4. Reboot
5. Boot into the EFI shell. Type "FSx" [Enter], where x is the drive number of your USB drive. For me it was FS0. Now you can use "ls" to list the contents of your USB stick and "cd" to change directory. Also, once you have typed about 3 characters you can auto-complete with [Tab], which makes navigating long directory names easier. Go to the \5_DELL_IT folder. Now run the following command:
sas2flash.efi -o -f 6gbpsas.fw
6. Reboot back into the EFI shell, go to the folder \5_LSI_P7 and run the following command:
sas2flash.efi -o -f 2118it.bin
You will be asked to confirm the flash; just choose Yes and it will flash from the Dell IR FW to the LSI IT FW.
7. Now reboot again into the EFI shell, go to \5_LSI_P20 and run the following command:
sas2flash.efi -o -f 2118it.bin
8. Now enter: sas2flsh -o -sasadd 500605bxxxxxxxxx, where "500605bxxxxxxxxx" is the SAS address that you got in step 1 (look for the SAS address in ADAPTERS.TXT on the USB drive).
9. Enjoy your newly flashed LSI 2008 card.
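One extra sanity check worth doing before step 8: the SAS address from ADAPTERS.TXT should be exactly 16 hex digits starting with 500605b, matching the 500605bxxxxxxxxx placeholder above. A small sketch (the example address below is made up; use the one from your own ADAPTERS.TXT):

```shell
# Validate the SAS address format before flashing it back with -sasadd;
# the address below is a made-up example - use the one from ADAPTERS.TXT
addr=500605b001234567
if echo "$addr" | grep -Eiq '^500605b[0-9a-f]{9}$'; then
  echo "address looks valid"
else
  echo "address malformed - recheck ADAPTERS.TXT"
fi
```

Fat-fingering the address here means the card comes back with the wrong WWN, so thirty seconds of checking is cheap insurance.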