hawihoney

Members
  • Content Count

    774
  • Joined

  • Last visited

Community Reputation

10 Good

About hawihoney

  • Rank
    Advanced Member

Converted

  • Gender
    Undisclosed

  1. Array Datenträger? This mix of English and German is awful. Either keep "Array Disks" or go entirely with "Plattensubsystem" (see Wikipedia). Since "Array" is a fixed term within Unraid, I would keep the former. Windows Disk Management, for example, also uses terms like Volumes. Localized help pages and wikis that explain these terms are IMHO much more important than a few table headings in a technical tool. In that spirit, I would leave core terms like Array, Disk, Share, Plugin, Container, VM, User, etc. untranslated. Translate them and you inevitably run into the Denglish trap. My opinion.
  2. Just curious: why a 4TB cache disk? Docker containers and VMs rarely need such a huge cache. Or do you create/update 4TB of array data every day? As I wrote: just curious.
  3. On my Plex Dashboard I see a music stream. This is not shown on your Dashboard. Is your Dashboard restricted to movies only?
  4. Tried to install and received the following error during setup: **EDIT** Sorry, had to use the internal IP. Works now.
  5. My 2x E5-2680 v2 barely even notice Direct Play streams. IMHO the CPU isn't what you should worry about. I doubt that a single HDD can handle 100 parallel reads at approx. 17 Mbit/s each. That will stutter a lot. And if you use several HDDs, it's pure luck which content on which drives your users select. Upstream bandwidth, hard disks, and the never-ending write requests to the Plex database are the parts I would worry about.
  6. Serious? Direct Stream? What content: SD, HD, 4K? Untouched or re-encoded content? How many drives? What kind of drives? My small library averages 17 Mbit/s for HD content. Multiply that by 100 and you will see where you end up: you would push 1.7 Gbit/s through your line (see the bandwidth sketch after this list). For Direct Stream you don't need dual Xeons. Please give some more details.
  7. Just an additional thought: modern HBAs like the LSI 9300 series can address up to 1024 SAS/SATA drives through their expander support. Many backplanes have expander chips on board and work perfectly with these HBAs. In theory you could cascade many backplanes, each hosting for example 24 drives, behind one single HBA (see the topology sketch after this list).
     Currently, if you want several parity-protected arrays, this can be done with Unraid VMs running on Unraid. The drawback is that you need an HBA passed through to every VM *), so there is a built-in limit on the number of parity-protected arrays running in Unraid VMs: the number of free slots on your motherboard and the number of USB ports, since every Unraid array currently needs its own license on its own USB stick.
     *) In theory cascading should be possible without HBAs passed through to VMs, if one could pass through e.g. 24 individual disks to every VM. But I never could get that to work. I'm dreaming of a chassis hosting 48, 60, or even 90 drives, running as one single server with one motherboard and one HBA hosting several parity-protected arrays ...
  8. Sure, I did point previous posters to the reason for their problems. The link in my post can help to identify problems on the DuckDNS side, as they don't have a status page AFAIK.
  9. DuckDNS.org seems to have problems. I can't reach my sites any longer. This is the second time within a week now. https://intodns.com/duckdns.org
  10. Never done this, but I would use find on the console with the -mtime option, e.g.:
      find /mnt/user/yourfolder -type f -mtime +300 -print
      The example above finds files older than 300 days. To change their modification time to the current one, you can use the -exec option, but be really, really careful about what you do:
      find /mnt/user/yourfolder -type f -mtime +300 -exec touch {} \;
      Disclaimer: use at your own risk. A slightly safer two-step variant is in the find sketch after this list.
  11. Both are Docker containers and are running on my cache pool. My cache pool (BTRFS RAID1) is built from two NVMe M.2 devices (the btrfs sketch after this list shows how to verify the pool profile).
  12. Plex, Plex, Plex, and MariaDB for Nextcloud. The difference between running the Plex docker on a SATA SSD and on an NVMe M.2 in a PCIe x4 slot was HUGE. Can't write HUGE big enough. In Nextcloud I'm using External Storage that points to the array; I can't make that faster. But the search performance is way better now.
  13. What? Who says that? I started last year with two SATA SSDs building the cache pool. Later I changed that to two M.2 NVMe devices connected to PCIe x4 slots. If I remember correctly, read performance changed from 500 MB/s to 2700 MB/s (the hdparm sketch after this list shows a quick way to measure this).
  14. It's just a hypothetical question for now. Currently I'm running a VM with an LSI 9300-8e (external DAS box, 24 drives attached) passed through. The whole system (bare metal and DAS box) has expander support. In theory this means I can omit the second HBA and let the main HBA (LSI 9300-8i) host all 48 drives. That way I would need to pass through 24 individual drives to the VM. Would this be possible? And if yes, how (one possible route is in the virsh sketch after this list)? Thanks in advance.
  15. IMHO, I would not put SSDs in the array. Put VMs and Docker containers on the cache pool, and use SSDs, or better NVMe devices, for the cache pool. Use the array as storage, with spinners in the array. Just my 0.02 USD.
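
Bandwidth sketch (re item 6): a minimal back-of-the-envelope calculation, assuming the 17 Mbit/s average HD bitrate quoted above; the stream count of 100 is taken from the question:

    #!/bin/bash
    # Rough aggregate bandwidth for N parallel Direct Play/Stream clients.
    per_stream_mbit=17   # average HD bitrate (from my library stats)
    streams=100          # parallel clients (from the question)
    total_mbit=$((per_stream_mbit * streams))
    echo "${total_mbit} Mbit/s total"                           # 1700 Mbit/s
    awk "BEGIN {printf \"%.1f Gbit/s\n\", ${total_mbit}/1000}"  # 1.7 Gbit/s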
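
Topology sketch (re item 7): a minimal sketch of how such an expander topology could be inspected from the Unraid console; it assumes lsscsi and the LSI sas3ircu utility are available, and the adapter index 0 is an assumption:

    # List SCSI devices, including enclosure/expander entries.
    lsscsi -g

    # LSI SAS3 utility: enumerate adapters, then show everything
    # attached to adapter 0 (drives, enclosures, expanders).
    sas3ircu LIST
    sas3ircu 0 DISPLAY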
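
Find sketch (re item 10): a slightly safer two-step variant of the same idea, assuming GNU find and xargs as shipped with Unraid; the folder path is the placeholder from the post:

    # Step 1: dry run - review and count what would be touched.
    find /mnt/user/yourfolder -type f -mtime +300 -print | less
    find /mnt/user/yourfolder -type f -mtime +300 -print | wc -l

    # Step 2: only after reviewing, set the modification times to now.
    # -print0 and -0 keep filenames with spaces or newlines intact.
    find /mnt/user/yourfolder -type f -mtime +300 -print0 | xargs -0 touch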
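
Btrfs sketch (re item 11): a minimal sketch for verifying that a two-device pool really runs the RAID1 profile; it assumes the pool is mounted at /mnt/cache, the usual Unraid mount point:

    # Show the devices that make up the pool and their allocation.
    btrfs filesystem show /mnt/cache

    # Data and metadata should both report RAID1 for a mirrored pool.
    btrfs filesystem df /mnt/cache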
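
Hdparm sketch (re item 13): a minimal sketch for reproducing such sequential read numbers; hdparm ships with Unraid, but the device names below are placeholders for your actual SATA SSD and NVMe device:

    # Buffered sequential read test (read-only, runs a few seconds).
    hdparm -t /dev/sdX       # SATA SSD: expect roughly 500 MB/s
    hdparm -t /dev/nvme0n1   # PCIe x4 NVMe: expect a multiple of that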
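
Virsh sketch (re item 14): a minimal sketch of one possible route, assuming Unraid's libvirt stack; it hands the VM one raw block device per drive rather than the HBA itself, and the VM name and by-id path are hypothetical placeholders:

    # Attach one physical drive to the VM via its stable by-id path.
    # Repeat with targets vdc, vdd, ... for each of the 24 drives.
    virsh attach-disk ArrayVM2 /dev/disk/by-id/ata-EXAMPLE_SERIAL vdb --persistent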