Joakimns

Members
  • Posts: 3
  • Joined
  • Last visited

Joakimns's Achievements

Noob (1/14) · 0 Reputation

  1. Hello, I have received some equipment from work, and I need tips and would like to hear about your experiences. My current setup:
     - i5-11500 processor
     - 32GB 3200MHz RAM
     - B560 motherboard
     - Array (XFS): 1x18TB parity + 1x16TB, 1x8TB, 1x6TB, and 5x4TB data = 18TB parity + 50TB data
     - Cache: 2x1TB SN550 NVMe SSD
     - Meshify 2 XL (storage layout)
     - 750W Gold PSU
     - 10 gig networking

     The server is located at a family member's place and is mainly used for the *arr apps, Plex, and a small VM. It's not far from me, so I can physically access it, but I would like to avoid making that a necessity.

     From work I have received 24x8TB SATA NAS drives, and I believe the Meshify 2 XL case can be filled with 18-20 drives. So I'm contemplating whether to replace the entire current array and instead install 20x8TB in RAIDZ1, or whatever setup you'd recommend (recommend me, please!). I enjoy tinkering and building, but this server will be offsite and should ideally run 24/7 with minimal downtime. It's also possible that I'll receive more equipment from work in the future, including larger drives, which would make it easier to swap out disks if I choose XFS and expand the current array.

     Another option I can make happen is to keep either a ZFS or XFS array on the old server and build a new server with parts from work or self-purchased ones. That would let me tinker to my heart's content, and the new server would serve as a backup for location 1.

     If you were in my shoes, would you discard the current array and set up ZFS? Or would you reuse the 18TB parity and 16TB data drive and fill the rest of the array with 8TB drives for easier upgrades later on?
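For a rough sense of the trade-off, the options above can be compared with back-of-envelope shell arithmetic. This is only a sketch: the two-vdev RAIDZ1 layout, the 20-slot assumption, and the drive assignments are illustrative, and marketed TB overstates formatted capacity (ZFS also reserves some space for metadata and slop).

```shell
#!/bin/sh
# Rough usable-capacity comparison of the layouts discussed in the post.
# All figures are marketed TB; real usable space is somewhat lower.

# Current Unraid XFS array: 1x16 + 1x8 + 1x6 + 5x4 data, 1x18 parity.
current=$((16 + 8 + 6 + 5*4))
echo "Current XFS array:   ${current} TB data (+ 18 TB parity)"

# 20x8TB as two 10-disk RAIDZ1 vdevs: (10-1)*8 TB usable per vdev.
raidz1=$((2 * 9 * 8))
echo "2x10-wide RAIDZ1:    ${raidz1} TB usable (1 disk of redundancy per vdev)"

# Hybrid XFS: keep 18TB parity + the 16TB data disk, fill the remaining
# slots with 8TB drives (assuming 20 slots total -> 18 of them 8TB data).
hybrid=$((16 + 18*8))
echo "Hybrid XFS array:    ${hybrid} TB data (+ 18 TB parity)"
```

The hybrid keeps Unraid's mixed-size flexibility (single-disk swaps and incremental growth), while RAIDZ1 trades that for striped read performance and checksumming.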
  2. I've got eight 10K 1.2TB SAS drives for free from work. Is there any advice or best practice on how to put these to use, given that they're faster than normal 7.2K SATA drives? My primary use case is Plex, Nextcloud, and two VMs. I've looked at setting up another cache pool consisting of all the SAS drives; my logic is that the newest Linux distros would move from the 2TB tier-1 NVMe cache -> 9.6TB tier-2 SAS -> tier-3 38TB SATA (array), but I'm not sure if that's even possible. Any advice?

     Setup:
     - i5-11500
     - 32GB 3200MHz RAM
     - Cache: 2x SN550 1TB NVMe
     - Array: 1x18TB, 1x16TB, 1x8TB, 2x4TB = 38TB (all 7.2K 6G SATA)
     - 8000/15000 Linux distros
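As a rough illustration of what 10K drives actually buy, average rotational latency is the time for half a revolution, so the 7.2K-to-10K gap matters mostly for random I/O (Nextcloud, VM disks) rather than sequential Plex streaming. A quick sketch:

```shell
#!/bin/sh
# Average rotational latency = time for half a revolution.
# 60000 ms per minute divided by rpm gives ms per revolution; halve it.
for rpm in 7200 10000 15000; do
  awk -v r="$rpm" \
    'BEGIN { printf "%5d rpm: %.2f ms avg rotational latency\n", r, 60000 / r / 2 }'
done
```

That works out to about 4.2 ms at 7.2K versus 3.0 ms at 10K, so each random access is roughly a quarter faster on the SAS drives; for a second-tier cache pool the benefit would show up in small-file and VM workloads, not playback.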
  3. I'm a complete newbie at Unraid; my first build, and my first time using Unraid, is with an Intel 11500 + B560 motherboard. I bought this system mainly for Plex, and after doing my research I thought 11th-gen Intel would make the most sense, since Unraid would eventually support the UHD 750 iGPU. I installed 6.10-RC1 from the start and quickly read that others were finding that transcoding in Plex did not work. I was not able to boot without an external GPU, so I put one in.

     I put my server in a pretty difficult place to access, so I was wondering if someone with better Unraid knowledge could tell me what I'm doing wrong. I'm now on 6.10-RC2 and still can't get Unraid to show an Intel VGA IOMMU device under System Devices. Commands like cd /dev/dri in the terminal don't work either; I get "no such file or directory". Only the Nvidia GPU I put in as a temporary solution shows up under System Devices.

     As I said earlier, my server is difficult to access, and I'm pretty sure the last time I had it plugged into a monitor I enabled CSM in the BIOS along with whatever else was recommended for Unraid iGPU support. Can someone point me in the right direction for troubleshooting? I would love to use the iGPU with Plex. Thank you!
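For questions like this, a few standard Linux checks narrow down where the iGPU disappears, from BIOS to driver. This is a sketch, not Unraid-specific advice; the common failure modes it probes are the BIOS auto-disabling the iGPU when a discrete card is present, and older kernels not probing Rocket Lake's UHD 750 by default.

```shell
#!/bin/sh
# Narrow down where the iGPU goes missing, from hardware to driver.

# 1) Does Linux see the iGPU on the PCI bus at all? If no Intel VGA/Display
#    device is listed, the BIOS is hiding it (often auto-disabled when a
#    discrete GPU is installed; look for an "iGPU multi-monitor" option).
lspci -nn | grep -Ei 'vga|display'

# 2) Is the i915 driver loaded? Some kernels did not probe the UHD 750 by
#    default; adding i915.force_probe=* to the kernel command line is one
#    workaround if the device shows in lspci but no driver binds.
lsmod | grep -w i915 || echo "i915 not loaded"

# 3) Only once the driver binds do the render nodes appear:
ls -l /dev/dri 2>/dev/null || echo "/dev/dri missing: no GPU driver bound"
```

Reading the results: if lspci shows no Intel device, the fix is in the BIOS (and needs monitor access); if lspci shows it but /dev/dri stays empty, it's a kernel/driver issue and can be worked on remotely.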