hawihoney

Members
  • Content Count: 585
  • Joined
  • Last visited

Community Reputation: 4 Neutral

About hawihoney
  • Rank: Advanced Member

Converted
  • Gender: Undisclosed


  1. Thanks, but would it be better than cascading the backplanes themselves? I mean, 8 lanes per backplane instead of 4 lanes per backplane. Would that give any benefit?
  2. The Supermicro backplane manuals for single-expander chips always attach JBOD cases through the primary backplane or the primary HBA. Why don't they add an additional HBA with external ports (e.g. an LSI 9300-8e)? Consider this home-lab scenario:

     1.) Primary case: Supermicro SC846E16
         Backplane: Supermicro BPN-SAS2-846EL1 (single expander chip)
         Mainboard: Supermicro X9DRi-F Dual LGA 2011
         CPU: 2x Intel Xeon E5-2609 V2 4-core
         Memory: 64GB DDR3 ECC REG (4 x 16GB)
         HBA: LSI 9300-8i (both HBA cables, 8 lanes, connected to the primary backplane)

     2.) Secondary case: Supermicro SC846E16
         Backplane: Supermicro BPN-SAS2-846EL1 (single expander chip)
         JBOD power card: Supermicro CSE-PTJBOD-CB2

     First Supermicro option: take the free port on the primary backplane and run one cable to the secondary backplane. --> Primary backplane has 8 lanes, secondary backplane has 4 lanes.

     Second Supermicro option: attach only one port of the LSI 9300-8i HBA to the primary backplane and the second port of this HBA to the secondary backplane. --> Primary backplane has 4 lanes, secondary backplane has 4 lanes.

     The option I'm asking about: add an additional HBA, e.g. an LSI 9300-8e, to the primary case and run both cables from this second HBA to the secondary backplane. --> Primary backplane has 8 lanes, secondary backplane has 8 lanes.

     What's wrong with my approach? Is it total BS? Wouldn't it be better to have 8 lanes to the secondary backplane? (A rough bandwidth calculation follows after this post.) I know Unraid does not support more than 28 drives in protected arrays, but I hope that limit will be raised in the future.
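     A back-of-the-envelope sketch of the per-drive bandwidth for each option. The figures are my own assumptions, not from the manuals: SAS2 runs at 6 Gb/s per lane, each SC846 backplane holds 24 drives, and protocol overhead is ignored, so real throughput will be lower:

         # SAS2 = 6 Gb/s per lane; one SFF-8087/8088 port = 4 lanes
         # assumed: 24 drives behind each BPN-SAS2-846EL1 backplane
         for lanes in 4 8; do
             gbps=$(( lanes * 6 ))
             mbs=$(( gbps * 1000 / 8 / 24 ))   # raw MB/s per drive
             echo "$lanes lanes: $gbps Gb/s uplink, ~$mbs MB/s per drive"
         done
         # 4 lanes: 24 Gb/s uplink, ~125 MB/s per drive
         # 8 lanes: 48 Gb/s uplink, ~250 MB/s per drive

     Since spinning disks only reach 150-250 MB/s in sequential streams, the 4-lane uplink mostly matters during operations that hit all drives in parallel, such as parity checks.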
  3. For those of you in the EU: I'm selling 6x Supermicro CSE-M35T-1B 5-in-3 HD cages on eBay. These are from my two original LimeTech builds that I recently took out of service. Used and still working perfectly; I did clean them a little bit. Screws for the cages and drive holders are included, no cables. The price is EUR 79. If you take more than one cage, make me an offer. https://ebay.us/kZO22L
  4. I bought 2x new LSI 9300-8i from eBay. I looked for a Chinese dealer with the highest rating and the largest number of ratings. Each order took two weeks to arrive. Both arrived in perfect packaging and are working happily with Unraid. They were around EUR 90-99 last time. If I found a used one from the EU/US at that price, I would prefer that.
  5. If I pass /mnt to a Docker container, and I use both the traditional /mnt/user AND Unassigned Devices' /mnt/disks, does it do any harm to set the Slave option? (A sketch of what Slave does follows after this post.)
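     For context, a minimal sketch of what the Slave option changes, assuming (my assumption, not confirmed) that Unraid's "RW/Slave" access mode maps to Docker's rw,slave bind propagation; the container and image names here are hypothetical:

         # Without slave propagation, a disk that Unassigned Devices
         # mounts under /mnt/disks AFTER the container starts stays
         # invisible inside the container. With rw,slave, new host
         # mounts under /mnt propagate into the running container.
         docker run -d --name envtest \
           -v /mnt:/mnt:rw,slave \
           ubuntu sleep infinity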
  6. I was considering this as well, and my decision is: I will not go the iGPU way. If you're interested, here are my thoughts.

     What I learned in the past: every evolution requires buying completely new hardware. New audio capabilities on the Dolby front? Buy new hardware. New video capabilities? Buy new hardware. The current iGPUs are at the edge of 4K transcoding; if Intel decides to leave HDR out, you're left out. The first custom UHDs require over 100 Mb/s, and I don't know a single 4K TV with Gigabit Ethernet; they all have 100 Mb/s interfaces. So all 4K TVs will require transcoded material at rates like that. And 8K is the next big thing; what will the requirements be then?

     With that in mind, I decided to go the pure CPU-power way: start with a server-grade Xeon on a motherboard that allows up to 2 or 4 CPUs, begin with one CPU that does 17,000 to 20,000 PassMark, and add one or more CPUs later if necessary. On eBay there are lots of old, cheap server-grade monsters, approximately 5 years old. Yesterday I looked at some of them and found some completely assembled for EUR 1,000 to 2,000. If power draw is no problem, look at this on eBay, for example: Supermicro SC848A-R1K62B / X9QRi-F+ / 4x Intel E5-4650 / 64GB RAM / 64 threads --> 44,000 PassMark at EUR 1,599. It will draw lots of power and cost you an arm and a leg on your monthly bill.

     So, your CPU does 13,000. These CPUs are EUR 250 to 350 used on eBay. I don't have any skills in motherboards; somebody else must find a matching one that takes 2-4 CPUs of that type. Add memory and coolers if needed. That would be my way.

     Just to round it up, here's what Plex says about transcoding requirements (a rough capacity estimate follows after this post): https://support.plex.tv/articles/201774043-what-kind-of-cpu-do-i-need-for-my-server/ Just my 0.02 USD.
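     A rough capacity estimate using the PassMark guidance from that Plex article (roughly 2,000 PassMark per 1080p transcode; the 4K HDR figure of ~17,000 is my reading of the article and should be double-checked):

         # approximate simultaneous transcodes for a given PassMark score
         cpu=20000                                  # planned single-CPU score
         echo "1080p streams: $(( cpu / 2000 ))"    # ~10
         echo "4K HDR streams: $(( cpu / 17000 ))"  # ~1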
  7. Copying 849,000 files (70GB, thanks Plex) from a SATA3 SSD to a PCIe M.2 has been running for over an hour, and the current speed is down to 9 MB/s. Temperature is OK. These SSDs/M.2s are not that good for huge transfers. Fingers still crossed.
  8. It seems somebody should add a check in Unraid for smaller disks being added to a cache pool; I bet this is the problem. In the meantime I'm working around it the other way (a sketch of the copy step follows after this post):
     - Stop all Dockers
     - Set Docker=off in Settings
     - Stop the array
     - Remove the remaining old SSD from the cache pool
     - Add the second new NVMe to the cache pool
     - Restart
     - Format the unmountable disk (the new NVMe disk 1 from the cache pool)
     - Mount the old SSD with Unassigned Devices
     - Copy the cache content from the old SSD to the new cache pool (both NVMes show activity)
     - Fingers crossed that I can start all Dockers then
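     A minimal sketch of the copy step, assuming Unassigned Devices mounted the old SSD at /mnt/disks/oldssd (a hypothetical mount name):

         # -a preserves ownership, permissions and timestamps; -X keeps
         # extended attributes. The trailing slash copies the contents
         # of the source directory rather than the directory itself.
         rsync -aX --progress /mnt/disks/oldssd/ /mnt/cache/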
  9. Today I added two NVMe M.2 disks via two PCIe adapter cards to my server. While the array was stopped, I removed one SSD device from the cache pool and added one of the NVMe disks instead. Everything looked successful, but after restarting the array there's no activity on this new NVMe disk. I waited a couple of minutes: no activity. After 10 minutes or so I clicked the "balance" button on the first cache device. I now see massive activity on the old SSD device from the cache pool, while the new NVMe disk stays without activity.

     The PCIe adapters are 2x Lycom DT-120 M.2: http://www.lycom.com.tw/DT-120.htm
     The NVMe disks are 2x Samsung 970 EVO 250GB: https://www.samsung.com/de/memory-storage/970-evo-nvme-m-2-ssd/MZ-V7E250BW/
     The motherboard is a Supermicro X9DR3-F (in fact it's an X9DRi-F, at least that is what's printed on the board): https://www.supermicro.com/products/motherboard/Xeon/C600/X9DRi-F.cfm

     Both CPUs are running, so all PCIe slots are working. For the Lycom PCIe adapters I'm using slots 2 (CPU1, x8) and 3 (CPU1, x16), counting from the CPU. There's an additional LSI 9300-8i in slot 5 (CPU2, x8). Any help is highly appreciated.

     root@Tower:~# btrfs dev stats -c /mnt/cache
     [/dev/sdb1].write_io_errs    0
     [/dev/sdb1].read_io_errs     0
     [/dev/sdb1].flush_io_errs    0
     [/dev/sdb1].corruption_errs  0
     [/dev/sdb1].generation_errs  0
     [/dev/sdc1].write_io_errs    0
     [/dev/sdc1].read_io_errs     0
     [/dev/sdc1].flush_io_errs    0
     [/dev/sdc1].corruption_errs  0
     [/dev/sdc1].generation_errs  0

     ***EDIT***: "btrfs filesystem show" still lists the old members of the cache pool (sdb1, sdc1), but sdc1 is no longer part of the pool. The Main page and the btrfs command line show different facts. Is it possible that replacing the 256GB SSD with the 250GB NVMe worked for Unraid but not for btrfs?

     root@Tower:~# btrfs filesystem show
     Label: none  uuid: 5a54f36e-e516-4c7e-9b68-8bae98a6d227
             Total devices 2  FS bytes used 68.09GiB
             devid 1 size 238.47GiB used 69.03GiB path /dev/sdb1
             devid 2 size 238.47GiB used 69.03GiB path /dev/sdc1

     tower-diagnostics-20190115-1243.zip
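     For anyone comparing: these two read-only commands show which devices btrfs really considers pool members and how much data is allocated on each (neither changes anything on the pool):

         # devices btrfs counts as members of the filesystem
         btrfs filesystem show /mnt/cache
         # per-device allocation; a device that was never balanced
         # onto will show little or no allocated space
         btrfs device usage /mnt/cache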
  10. As I wrote: some browsers disable cookies (mine), and some throw them away when the browser is closed (my company's). IMHO, there's a difference between a plugin that's used once or twice when setting up a new machine and the Main page, which is the main entrance to the Unraid server (by default; this can be changed in the Identification settings). I open the Unraid GUI very often to look at the Main page. IMHO, the reads/writes totals view, which is the default setting of that switch, is not very friendly to my old eyes: every column contains values, and it's nearly impossible for me to spot changes or activity there. OTOH, the current Mbit/s output is more meaningful: disks with no activity stay at 0, so I can see active disks immediately. Just my USD 0.02.
  11. Why? After installing my 2-3 dockers I never went there.
  12. I'm talking about the "Main" page and the "toggle reads/writes display" switch. AFAIK, the current setting is stored in a cookie, so whenever you clear your browser cache, the setting is gone. My browsers always start in incognito mode, so the cookie is never stored, and the browsers in the company I work for drop all cookies whenever you close them. If you want the current read/write rate view on the Main page, you have to change that setting after every new start of those browsers. May I ask for a configurable option in the "Display" settings, so every user can define which view they want to see by default? Thanks for listening.
  13. System Information does not detect CPU

      OK, it will be "soon" I suppose.