C4RBON

Members · 30 posts
Everything posted by C4RBON

  1. Unraid will be good for your use case. Community Applications makes it very easy to start using Docker containers.

     I personally wouldn't host a website from Unraid. Many consumer ISPs block connections on ports 80 and 443 and don't provide static IP addresses, so it can be difficult. Additionally, if that website has actual users, your home server's uptime isn't going to match a professional hosting service. The entry-level Linode plans are $5/month, which removes so many hurdles to hosting a website.

     Self-hosting Bitwarden certainly is possible, but for the $10 per year the Premium service costs, I let them do it.

     Media storage and streaming is basically what Unraid is for 😁 If you are on your LAN with the Apple TV and Infuse, you might find you don't often need to transcode; I can direct stream everything from my Unraid server to Infuse without transcoding.

     Yes, you can host VMs on Unraid. Lots of people do, and there are tutorials for how to set that up (it isn't difficult). I have a Windows 11 VM on my Unraid server.

     Looking at your hardware selection, I would recommend getting a CPU with more cores than the 12100. The 13500 would be a good choice. More cores give you flexibility for the future. 4 cores is enough for a basic file sharing/streaming server (with hardware transcoding), but if you want to play with VMs you'll probably want some extra cores.

     32 GB of RAM is sufficient. I think 16 GB is enough for Unraid and a sane number of Dockers that normal people would use. More RAM is beneficial for VMs. I have 64 GB in my server, but half is for a Windows VM.

     2x 1TB is good for a cache pool. Your Docker appdata will likely be on your cache, so a PCIe 4.0 NVMe drive is a good choice. Two of them gives you redundancy. Once ZFS arrives in a stable release, you can get a speed boost from having two SSDs.

     Regarding Intel vs AMD, I would pick Intel for the QuickSync transcoding. Intel motherboards can also make it easier to pass through devices to VMs.
  2. A few comments:

     1) Do you really need 16 lanes for your GPU? On a 4090 (which would be the worst-case scenario) there is only a few percent (1-5%) difference between Gen 4 x16 and Gen 3 x16, and Gen 4 x8 is equivalent to Gen 3 x16. See this video for reference:

     2) Which Intel 10G NIC do you have? PCIe Gen 3 or Gen 2? I assume it is physically 8 lanes; if it is a single-port PCIe Gen 2 x8 card, you can get 10G with 4 lanes. I have done this and verified it with iperf3 (see the sketch after this list).

     3) The LSI card may also get by just fine with 4 lanes, depending on what PCIe gen it is, how many disks you have, and your workload.

     4) Do you want 2 M.2 drives for redundancy or capacity? There are several good 4 TB M.2 drives on the market, and even a few 8 TB ones. There are also high-capacity U.2 2.5" drives that can be used with an M.2 adapter.

     5) There are adapters to go from M.2 to a PCIe slot, and adapters to split a x16 PCIe slot into multiple slots. Your Z590-V motherboard supports bifurcating the x16 slot (according to the manual). You might already have enough PCIe lanes with the motherboard you have, if you are willing to get creative.
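
     Regarding the iperf3 check in point 2, a minimal sketch of how such a test looks (assuming iperf3 is available on both machines; 192.168.1.10 stands in for the server's address):

        # On the server (the box with the 10G NIC):
        iperf3 -s

        # On a client on the same 10G network, run a 30-second test with
        # 4 parallel streams; a healthy 10GBASE-T link should land around
        # 9.4 Gbit/s after protocol overhead:
        iperf3 -c 192.168.1.10 -t 30 -P 4
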
  3. I had one of those running for 3 years with no issues. I just replaced it as part of my new server build. I am now using a Delkin B300 "Industrial" SLC flash drive:
  4. Does your budget allow you to get a 13th gen processor? If I were making a small home NAS box, I'd go with something like:

     - Intel 13100 ($150)
       - Latest onboard GPU for QuickSync transcoding
       - 4 "P" cores with hyperthreading
     - ASRock Z690M-ITX/ax ($140)
       - 4x SATA
       - 2x PCIe Gen 4 x4 M.2 with heat spreader
       - Intel 1G and Realtek 2.5G NICs
     - Crucial DDR4 3200 2x16GB ($70)

     I found this In-Win case recently that I think would make a nice home NAS ($160). You'd have to check if the stock CPU cooler would fit, though. Otherwise get the Noctua low profile one for like $40. https://www.newegg.com/black-in-win-iw-ms04-01-s265/p/N82E16811108471
  5. I recently got the ASUS Pro WS W680-ACE IPMI. The new W680 chipset boards might work well for you, without moving to a more server oriented platform. Supports ECC on 13th Gen Core processors. Enough PCIe lanes for most home server needs. There are options with IPMI, which is a nice luxury for a home server.
  6. In my recent (and ongoing) new home server build, I am focusing on reliability and uptime. One of my areas of improvement is the boot drive. I had been using a 32 GB Sandisk "Ultra Flair" for the last 3 years without issue. But for my new build, I wanted something a bit more... trustworthy. In my searching for a new boot drive, I found these Delkin USB sticks:

     Industrial USB Flash Drive | Delkin Devices - Rugged Controlled Storage
     https://www.delkin.com/products/industrial-usb-flash-drive/
     https://www.delkin.com/wp-content/uploads/2021/01/401-0459-00-Rev-F-B300-Series-Industrial-USB-Drive-Engineering-Specification.pdf

     Highlights:
     - Industrial SLC
     - Controlled BOM (they won't change the parts inside or specs without notification)
     - Wide operating temperature range (-40 to 85 C)
     - Error correction, wear leveling, block management, redundant firmware, dynamic data refresh
     - Mine is stamped with Made in USA (I'm sure with some overseas parts)
     - Has a GUID for Unraid licensing

     Downsides:
     - They aren't fast, despite being USB 3.0 (75 MB/s read, 60 MB/s write, sequential, for the fastest 16/32 GB versions)
     - They are long and plastic. There is a "short" version that is still fairly long. No cap retention.
     - Expensive. Here are the prices on Digi-Key:
       2 GB - $59
       4 GB - $85
       8 GB - $139
       16 GB - $239
       32 GB - $354

     I settled on the 8 GB as a good balance between capacity, performance, and cost. While I winced at spending $139 on an 8 GB flash drive in 2023, it's easier to accept if you consider it an "industrial boot drive", and not something used occasionally to share Linux distros with friends. I'm sure there are consumer drives with similar features, but I like that these Delkin drives are documented and have a controlled BOM. Mine will live in my server in my 20 deg C basement for the foreseeable future. I'll report back if it ever has any issues.
  7. Why not use U.2 drives? Rated for much higher endurance than consumer M.2 drives (usually 1 DWPD over 5 years), easier to mount, can be had in capacities up to 16TB, and have a built in heatsink. You can get new "read-intensive" (still 1 DWPD) PCIE gen 4 ones for $100/TB, which isn't much of a premium over the good consumer M.2 drives. Then connect them with an Oculink to SFF-8639 cable. This one specifically claims it is for PCI-e gen 4. https://www.amazon.com/PCIe-OCulink-SFF-8611-SFF-8639-Cable/dp/B089QPFMD2/
  8. Fix Common Problems is telling me that this plugin is deprecated:

     Deprecated plugin ipmi.plg
     This plugin has been deprecated and should no longer be used due to the following reason(s): Advised to switch to the version from SimonF which is also compatible with Unraid 6.12+
     While this plugin should still be functional, it is not recommended to continue to use it.

     Where can I get the new version? A search on Community Apps for "IPMI" only turns up two apps for managing Dell server fan speeds.
  9. Check out Parsec and Moonlight + Sunshine. I'm planning to do the same thing: stream desktop and games from my PC (either as bare metal or a VM) to my laptop for use while travelling. I tested both of these solutions in early January.

     Parsec is a commercial solution with a free tier intended for gaming. I tested this, and found it good for video and games, but the text rendering wasn't as crisp as I'd like for a workstation. Higher-quality streaming settings are locked behind a pay service. I don't really want to pay for the tier that enables 4:4:4 video, since this is a self-hosted service, especially since I'd be using it either on my home network or via VPN, so I don't need their network features. Their ability to establish a connection in a variety of situations (double NAT, no port forwarding, etc.) is irrelevant for me. https://parsec.app/

     Sunshine and Moonlight are open-source clones of the Nvidia GameStream service that was recently discontinued. You run Sunshine on the host PC, and connect to it from a client using Moonlight. Sunshine is fairly new and still being developed. It works with Nvidia, AMD, and now recently Intel QSV. I had some issues with the stream freezing while using Adobe Lightroom (one of my main use cases). However, it is in active development, and there have been several releases and features added since I tested it in early January. The stream quality on Sunshine was impressive; better than the free tier of Parsec. https://github.com/LizardByte/Sunshine https://moonlight-stream.org/

     Both services had impressively good latency. I was testing with my laptop sitting in front of my desktop, and the latency was hard to notice. The one thing I did notice was that scrolling wasn't as smooth; it jumps 3 lines at a time instead of a continuous smooth scroll.
  10. Newegg has the Optane 905P U.2 ssd on sale for $339. Would be a great cache drive; lower latency than NAND flash and much greater longevity (10 DWPD). Comes with a M.2 adapter cable. https://www.newegg.com/intel-optane-ssd-905p-series-960gb/p/20-167-463 Looks like the sale is today only, then the price will likely return to $400. It was on sale a couple weeks ago for $340, when I picked one up.
  11. One feature I looked for in my UPS system was "pure sine wave" output. The cheaper UPS systems produce a series of square waves to simulate a sine wave. https://blog.tripplite.com/pure-sine-wave-vs-modified-sine-wave-explained I have two APC SmartConnect UPS units. I like their web interface for monitoring the UPS status, triggering self-tests, and sending email alerts when the power shuts off. I've had one in my rack for 2 years and haven't had any issues. I just bought and installed a 2nd one for my media center (TV, receiver, etc). I personally think the pure sine wave and network connectivity are worth paying for. I also don't have my server auto-start after power is restored. I have my server set to stay on for 10 minutes running on UPS, and then shut off. 10 min covers all the "typical" power interruptions, anything longer is rare, and I'd want to evaluate the situation before powering the server back on.
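
     For the 10-minutes-on-battery behavior, a minimal sketch of the relevant apcupsd directives (Unraid's UPS settings page configures apcupsd underneath, so set the values through the GUI where possible; the numbers here are just examples matching the behavior described above):

        # /etc/apcupsd/apcupsd.conf (excerpt)
        # Shut down after 10 minutes (600 seconds) on battery,
        # regardless of remaining charge:
        TIMEOUT 600
        # 0 disables the charge/runtime-based triggers; keep non-zero
        # values here if you want a low-battery safety net as well:
        BATTERYLEVEL 0
        MINUTES 0
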
  12. I picked up one of the 960Gb Optane drives for my cache drive, but those are U.2. That is rated for 17,500 TBW. Downsides are it is only PCIe Gen 3, and the sequential read/write speeds aren't nearly as good as traditional NAND. But the endurance and random read/write speeds are exceptional. This particular U.2 drive comes with a nice M.2 adapter/cable, so if you have room to mount a 2.5" drive, and can give it some airflow, it could be an alternative. Intel Optane 905P Series 960GB, 2.5" x 15mm, U.2, PCIe 3.0 x4, 3D XPoint Solid State Drive (SSD) SSDPE21D960GAM3 - Newegg.com https://www.newegg.com/intel-optane-ssd-905p-series-960gb/p/N82E16820167463?Description=optane&cm_re=optane-_-20-167-463-_-Product Currently $400, but it was on sale (further on sale) for $340 when I bought it about a week ago.
  13. Have you considered "enterprise" m.2 drives? I haven't used these, but I'm shopping for new ssd drives as well. Something like this: Supermicro (Micron) 960GB M.2 22x80mm 7450 PRO HDS-MMN-MTFDKBA960TFR1BC Solid State Drive (SSD) https://store.supermicro.com/960gb-nvme-pcie4-hds-mmn-mtfdkba960tfr1bc.html Enterprise drives typically have power loss protection, and much longer durability. These Micron ones are 1 DWPD for 5 years. $143 for 960Gb doesn't seem like much of a premium vs consumer drives. You can get add-on heatsinks for m.2 drives (that don't come with a heatsink) which might also help with temps.
  14. I dealt with memory issues a few weeks ago. Spent hours (days?) of my life I'll never get back running memtest over and over. Here are some suggestions...

     1) In my experience, if there were going to be errors, it found them by Test 7. If you have to run a bunch of tests for different scenarios (DIMMs, slots, speeds, timings, etc.), I wouldn't run the entire test; stop after Test 7. When you think you have a stable system, then you can run the entire test, and even run multiple passes overnight.

     2) Check your memory speed and timings in the BIOS, and set them to match the memory specs. My ASUS Z790 board was not auto-populating timings correctly (I'm talking about the 4 main timings). Setting the timings manually fixed the vast majority of the errors.

     3) If you have another system that supports your memory, check your DIMMs in that system, one stick at a time, to determine if the DIMMs are good. Otherwise, if you can find a DIMM/slot combination that results in 0 errors, you can use that to identify which DIMMs or slots are causing errors.

     4) I had different results from memtest for different slots on my server mobo. Slots 1 and 3 threw errors every time, slots 2 and 4 had no errors (where slot 1 is closest to the CPU and slot 4 is furthest).

     5) If you find another set of memory to test with, don't assume that kit is "good" just because you weren't having errors with it before. I pulled my main desktop RAM to test in my server, and still got errors. It turns out two of the DIMMs from my desktop were bad, too, when I ran memtest on my desktop.

     Ultimately I decided to switch motherboards and get ECC memory, because I saw the chaos that memory errors can cause in Unraid.
  15. I'm also considering getting an A750 or A770 to pass through to a Windows 11 VM on Unraid. On this page (updated February 8th), Intel states that virtualization is "not supported" for Arc GPUs: https://www.intel.com/content/www/us/en/support/articles/000093216/graphics.html https://community.intel.com/t5/Intel-ARC-Graphics/Unable-to-use-Intel-Arc-A750-in-virtual-environment-Linux-vfio/m-p/14557393216/graphics.html

     "Not supported" seems ambiguous...
     1) It is not possible to pass through the Arc GPU to a VM? (This doesn't seem correct, as people have done it.)
     2) It is not possible to have multiple VMs use one Arc GPU? (I think this would require SR-IOV, which Intel doesn't list for Arc.)
     3) Intel won't support it, meaning if you run into issues, they won't help you resolve them? (Fair enough.)

     I'm guessing #2 and #3 are true, but my concern is whether Intel could update the drivers in a way that makes it technically not possible (#1) to pass an Arc GPU to a VM. Is that something Intel could do? I'm not sure what the interaction is between the drivers, hardware, Unraid, VM, and Windows. This is my first foray into passing a GPU to a VM.

     My goal is to have a Windows VM with a discrete GPU, and then use Sunshine to stream games at 1080p. The Arc A750 is attractive due to price and size (I need a 2-slot GPU). I can tolerate driver issues, since I have multiple other computers to use.
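
     Before buying, one host-side check that applies to any GPU passthrough attempt is how the card lands in the IOMMU groups; a generic sketch (standard sysfs/lspci listing, nothing Arc-specific), run from the Unraid console:

        #!/bin/bash
        # List every IOMMU group and the PCI devices in it. The GPU (and
        # its audio function) should ideally sit in their own group
        # before being bound to vfio-pci for the VM.
        shopt -s nullglob
        for g in /sys/kernel/iommu_groups/*; do
            echo "IOMMU group ${g##*/}:"
            for d in "$g"/devices/*; do
                echo -n "    "
                lspci -nns "${d##*/}"
            done
        done
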
  16. What happens to a paused container when the battery limit is reached and Unraid shuts down? Would "docker stop" and "docker start" be more appropriate? Could data get lost by using pause? I read that when you use pause, the container doesn't know it's being paused. So if it is shut down before being restarted, could you lose data? Edit: After doing some reading, it seems that stop/start are more appropriate for this scenario, since a shutdown is likely after running on battery. I'll update my previous post to reflect stop/start instead of pause/restart.
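
     For comparison, the basic difference between the two approaches, using the FoldingAtHome container from these posts as the example:

        # Pause freezes the container's processes in place (cgroup
        # freezer); the app receives no signal and gets no chance to
        # flush data before a shutdown.
        docker pause FoldingAtHome
        docker unpause FoldingAtHome

        # Stop sends SIGTERM and only escalates to SIGKILL after the
        # timeout, so the app can exit cleanly before a shutdown.
        docker stop -t 60 FoldingAtHome
        docker start FoldingAtHome
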
  17. @pyrosrockthisworld I just got a UPS and wanted to do the exact same thing: stop the FaH container while running from battery. I did some reading in the APCUPSD manual (specifically Customizing Event Handling), and I've come up with a simple method.

     The apccontrol script (in /etc/apcupsd/) gets called when apcupsd detects an "event". Switching to battery power ('onbattery') and back to mains power ('offbattery') are both recognized events. When these events occur, the apccontrol script calls event-specific scripts (named the same as the event) located in /etc/apcupsd. By adding 'docker stop' to the 'onbattery' script and 'docker start' to the 'offbattery' script, power-intensive Docker containers can be prevented from running while on battery power. The 'onbattery' and 'offbattery' scripts already exist in /etc/apcupsd.

     In the /etc/apcupsd/onbattery script, I added the following line right above 'exit 0':

     docker stop FoldingAtHome

     In the /etc/apcupsd/offbattery script, I added:

     docker start FoldingAtHome

     You could add more Docker container names after "FoldingAtHome" to stop/start additional containers. I've tested it several times, and the FaH container stops several seconds after switching to battery power, and then starts again once mains power is restored.

     Edit: Looks like this is the exact same method used in the post linked by @kizer. I could have saved some time... oh well. I learned more by figuring it out myself.

     Note: If the power outage is only a few seconds long, the container won't be stopped by the time power is restored and the 'start' command is issued. If the container is still stopping, then it won't start, and you'll be left with a stopped container.
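
     To make the shape of the change concrete, here is a stripped-down sketch of the two event scripts (the real scripts contain more than this, notifications and so on; only the docker lines are the addition from this post, and the sleep in offbattery is just one untested idea for softening the race described in the note):

        #!/bin/sh
        # /etc/apcupsd/onbattery -- called by apccontrol when the UPS
        # switches to battery. Stop power-hungry containers.
        docker stop FoldingAtHome
        exit 0

        #!/bin/sh
        # /etc/apcupsd/offbattery -- called when mains power returns.
        # A short delay gives a container that is still stopping (after
        # a very brief outage) time to finish before starting it again.
        sleep 30
        docker start FoldingAtHome
        exit 0
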
  18. I just used the Unassigned Devices plugin to connect to an SMB share on a raspberry pi. If I use the server name ("RASPBERRYPI", either typed in manually or selected from the network search) I can't mount the share. If I use the IP address (192.168.1.4), I can mount the share. What is the reason for this?
  19. I'm planning a new server build, and would like to use Intel QSV for Jellyfin transcoding so I can buy a less powerful CPU. I see the option in the Docker template, and it seems that Jellyfin supports QSV. However, I'd like to confirm that it isn't buggy or overly complex to set up (I'm not looking to try experimental or temperamental features). My current CPU/mobo doesn't support IOMMU (the i7-4790k doesn't support VT-d), and my other desktop is AMD, so I can't test it myself. Can anyone confirm that QSV transcoding works with the Jellyfin Docker? Is it relatively straightforward to set up (using the available hardware passthrough guides)? Thanks!
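
     From the passthrough guides, the setup comes down to exposing the iGPU's /dev/dri device node to the container; a rough sketch of the equivalent docker run, assuming the official jellyfin/jellyfin image (in the Unraid template this is just an extra Device entry, and the host paths here are examples):

        # Expose the Intel iGPU's render node so Jellyfin can use
        # QSV/VA-API; then enable hardware acceleration in Jellyfin's
        # Playback settings.
        docker run -d --name=jellyfin \
          --device=/dev/dri:/dev/dri \
          -p 8096:8096 \
          -v /mnt/user/appdata/jellyfin:/config \
          -v /mnt/user/media:/media \
          jellyfin/jellyfin
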
  20. Finding a case that meets your needs is going to be hard. I had a similar idea (same number of drives, also wanted it under my TV), but I gave up and just put my server in the basement in an old case (from back when it was normal for cases to hold 6 drives). I know of cases that would work, but they'd cost a substantial portion of your budget. You could get something like a Synology DiskStation, but I assume you don't want that, since you are asking on the Unraid forum lol. I think a discreet, noise-focused mid-tower case like the Fractal Define (I have an R6) wouldn't look out of place sitting next to a TV. Mine is barely audible; my central air is louder when it is on. But I've also spent probably $150 on Noctua fans and a CPU cooler.
  21. $500 is going to be tough using new PC components. The biggest challenge is that there aren't many case options in the HTPC form factor anymore, and the remaining ones would be a large percentage of your budget.

     My advice would be to scrap the idea of putting your server under the TV. This lets you save money on a case and not worry so much about noise or aesthetics. Throw the server in the basement/closet/garage or wherever is convenient and has ethernet access. Get an Nvidia Shield or Apple TV and use that as your media streaming device; both are $150. I have an Nvidia Shield and love it; I've given up on the idea of putting an HTPC under my TV. I use Jellyfin to stream transcoded 1080p media from my Unraid server. For movies I use Kodi to play 4K Blu-ray rips directly from the server, even over wifi. Having access to YouTube, Spotify, Amazon Prime, Netflix, etc. all on one device has been awesome. The remote even turns the TV and receiver on/off, adjusts volume, and has voice search. If $150 is too much, a Chromecast is $30 and they work well.

     I put together a quick list of server components on Newegg:

     - CPU: Intel Core i3-10100 ($129)
       Quad core with hyperthreading; plenty of power for NAS, some Docker containers, and transcoding 1-2 1080p videos
       Supports hardware pass-thru for VMs and Docker containers
       Built-in graphics with some decent hardware acceleration for encoding (depending on what applications you are using)
       Includes a cheap cooler
     - Motherboard: ASRock H470 Phantom Gaming 4 ($101)
       Full ATX for future expansion for a GPU, NIC, HBA, etc.
       Supports 2 M.2 NVMe SSDs and 6 SATA ports (using both NVMe slots will likely cost you 1-2 SATA ports)
       Built-in Intel networking
       Most recent LGA 1200 socket, so there will be upgrade paths available for several years
     - RAM: 2 x 4GB DDR4 2666 ($41)
       8 GB will be enough for NAS and streaming, with a few Docker containers. For $25 more you can get 16 GB.
       Dual channel gets you a performance boost vs a single stick.
       Buy whatever is cheap from a reputable reseller and brand; RAM is a commodity.
     - PSU: EVGA 600W 80 Plus Bronze ($70)
       Reputable brand, good efficiency, should last for a decade, and enough power for what you are looking to do.
     - Case: Up to you. ($0-150)
       Decide what is important to you: looks, noise, expansion, airflow, price, etc. There are a few new cases in the $50-$75 range that support 5+ HDDs. If you are trying to hit a strict $500 budget I'd try to find someone selling or giving away a used case. Otherwise, on the cheap end there are some Antec mid-tower cases for $50, or if you want a nicer-looking, quieter, better-quality case look at the Fractal Design R5/6/7, but those will be $120-$150. If you insist on an HTPC case to put under your TV, look at what Silverstone has.

     Total without a case is $341. Add the Nvidia Shield or Apple TV and you are right at $491 without drives. For $80 more you could get 16 GB of RAM and a 6-core processor. You can also look at AMD, but I wouldn't expect any drastic differences in performance or features at this price range. Hopefully this gives you some ideas.
  22. The answer was yes... the ssd seems to have failed after 7.5 years in various systems. I removed sdg from my cache pool and the errors went away. The errors return if I re-add sdg. My first hardware failure! Two new ssds are on their way from newegg.
  23. Fix Common Problems alerted me to some problems in my log file. I'm getting some writing errors to my cache drive(s). My cache pool is sdf and sdg, which are both fairly old SSDs. I'm currently running the mover to get everything off the cache pool. I checked the drives, and they are still plugged in (both data and power). I haven't made any recent hardware changes. Does this look like a failing cache drive? Thanks, Jason

     Mar 22 11:13:57 DEVILSCANYON kernel: print_req_error: I/O error, dev sdg, sector 4682016
     Mar 22 11:14:00 DEVILSCANYON kernel: BTRFS warning (device sdf1): lost page write due to IO error on /dev/sdg1
     Mar 22 11:14:00 DEVILSCANYON kernel: BTRFS error (device sdf1): error writing primary super block to device 2
     Mar 22 11:14:00 DEVILSCANYON kernel: BTRFS warning (device sdf1): lost page write due to IO error on /dev/sdg1
     Mar 22 11:14:00 DEVILSCANYON kernel: BTRFS error (device sdf1): error writing primary super block to device 2
     Mar 22 11:14:00 DEVILSCANYON kernel: BTRFS warning (device sdf1): lost page write due to IO error on /dev/sdg1
     Mar 22 11:14:00 DEVILSCANYON kernel: BTRFS error (device sdf1): error writing primary super block to device 2
     Mar 22 11:14:00 DEVILSCANYON kernel: BTRFS warning (device sdf1): lost page write due to IO error on /dev/sdg1
     Mar 22 11:14:00 DEVILSCANYON kernel: BTRFS error (device sdf1): error writing primary super block to device 2
     Mar 22 11:14:00 DEVILSCANYON kernel: BTRFS warning (device sdf1): lost page write due to IO error on /dev/sdg1
     Mar 22 11:14:00 DEVILSCANYON kernel: BTRFS error (device sdf1): error writing primary super block to device 2
     Mar 22 11:14:00 DEVILSCANYON kernel: BTRFS warning (device sdf1): lost page write due to IO error on /dev/sdg1
     Mar 22 11:14:00 DEVILSCANYON kernel: BTRFS error (device sdf1): error writing primary super block to device 2
     Mar 22 11:14:00 DEVILSCANYON kernel: BTRFS warning (device sdf1): lost page write due to IO error on /dev/sdg1
     Mar 22 11:14:00 DEVILSCANYON kernel: BTRFS error (device sdf1): error writing primary super block to device 2
     Mar 22 11:14:00 DEVILSCANYON kernel: BTRFS warning (device sdf1): lost page write due to IO error on /dev/sdg1
     Mar 22 11:14:00 DEVILSCANYON kernel: BTRFS error (device sdf1): error writing primary super block to device 2
     Mar 22 11:14:00 DEVILSCANYON kernel: BTRFS warning (device sdf1): lost page write due to IO error on /dev/sdg1
     Mar 22 11:14:00 DEVILSCANYON kernel: BTRFS error (device sdf1): error writing primary super block to device 2
     Mar 22 11:14:00 DEVILSCANYON kernel: BTRFS warning (device sdf1): lost page write due to IO error on /dev/sdg1
     Mar 22 11:14:00 DEVILSCANYON kernel: BTRFS error (device sdf1): error writing primary super block to device 2
     Mar 22 11:14:00 DEVILSCANYON kernel: BTRFS error (device sdf1): error writing primary super block to device 2
     Mar 22 11:14:00 DEVILSCANYON kernel: BTRFS error (device sdf1): error writing primary super block to device 2
     Mar 22 11:14:00 DEVILSCANYON kernel: BTRFS error (device sdf1): error writing primary super block to device 2
     Mar 22 11:14:00 DEVILSCANYON kernel: BTRFS error (device sdf1): error writing primary super block to device 2
     Mar 22 11:14:00 DEVILSCANYON kernel: BTRFS error (device sdf1): error writing primary super block to device 2
     Mar 22 11:14:00 DEVILSCANYON kernel: BTRFS error (device sdf1): error writing primary super block to device 2
     Mar 22 11:14:00 DEVILSCANYON kernel: BTRFS error (device sdf1): error writing primary super block to device 2
     Mar 22 11:14:00 DEVILSCANYON kernel: BTRFS error (device sdf1): error writing primary super block to device 2
     Mar 22 11:14:00 DEVILSCANYON kernel: BTRFS error (device sdf1): error writing primary super block to device 2
     Mar 22 11:14:00 DEVILSCANYON kernel: BTRFS error (device sdf1): error writing primary super block to device 2
     Mar 22 11:14:00 DEVILSCANYON kernel: BTRFS error (device sdf1): error writing primary super block to device 2

     devilscanyon-diagnostics-20200322-1058.zip
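
     For anyone who hits similar errors, a couple of quick terminal checks before pulling a drive (a sketch; /mnt/cache and /dev/sdg match the setup described above, adjust for your own pool and device):

        # Per-device BTRFS error counters for the cache pool; non-zero
        # write_io_errs/flush_io_errs on one device point at that drive:
        btrfs device stats /mnt/cache

        # SMART health and attributes for the suspect SSD (reallocated
        # sectors, wear indicators, CRC errors, etc.):
        smartctl -a /dev/sdg
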
  24. My desktop GPU has been working all day, but still nothing on my Unraid server. Restarted the docker and after a few attempts it finally got some work to do. Thanks for the tip.
  25. I think there are a lot of us getting that same error. I had everything up and running on Sunday, and processed several WUs on my two computers (including my Unraid box). After Sunday evening, I haven't had any new WUs. I gather that the influx of new contributors has outpaced the FaH infrastructure's ability to generate and process new WUs. Essentially, there isn't enough work to go around. I think the Rosetta project on BOINC is still providing work to users, you might look into that.