givmedew

Everything posted by givmedew

  1. So, I have a server, and I thought I was pulling out a drive that I was getting rid of, but it was the wrong drive. Now I have a drive that shows as disabled and emulated. Do I have to completely rebuild this drive, or is there a way to put it back into play?
  2. Instead of trying to be all high and mighty and asking why you bought that server… which does nothing to help you at all… edit: I got that wrong, I thought you already owned that server, lol, turns out I'm the jerk. Let's move on. Maybe that other guy doesn't know you can buy a dual v4 server for $300, rip one of the CPUs out of it, and typically land at an idle of around 120W, which is not bad unless you are trying to build a very efficient system. Those take special effort, focus, research, and often expensive ITX or very specific mATX/ATX boards. My Z-series board was idling well over 200W with a 9700K, when you can get them down below 20W with most ITX boards or some other specific boards.

     So anyway, on to what you actually asked about: transcoding. I'm going the way of the mini juggernaut, the Intel ARC A380, but I'm also running it in a VM. With an awful 8-core 4180 I was able to get 120 FPS on a 1080p AV1 transcode and just over 30 FPS on a 4K AV1 transcode. But that's AV1, and that's a 2GHz 8-core CPU; in both cases the GPU was only at about 25% utilization. For H.264 transcodes it won't break a sweat! You might have to use it in a virtual machine for the time being, though. If you want to use Docker, I'm not sure they have the A380 working perfectly yet; there is a thread with plenty of activity on it that you could follow.

     If you don't care about permanently converting a huge, gigantic library to AV1, there are plenty of affordable NVIDIA cards. Only the 4xxx series can do AV1, and I believe AV1 is going to matter to you someday, but not today, since with Plex only PCs can play AV1. I'm converting all the old stuff no one watches anymore because it saves tons of space going from H.264 to AV1, completely skipping H.265. But yeah, just get a video card, just do some research first on which one. Also be careful about idle power… buy the wrong card and end up drawing an extra 100W, and that's an extra $100/yr… not a good deal if the point was to avoid paying $200-300 for a newer card. The rough math behind that figure is below.
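     Here's the back-of-the-envelope math behind that $100/yr figure, as a minimal sketch; the electricity rate is an assumption (roughly $0.115/kWh), so plug in your own:

         # Rough yearly cost of extra idle draw, running 24/7.
         extra_watts = 100          # extra idle draw of the "wrong" card, in watts
         rate_per_kwh = 0.115       # assumed $/kWh; substitute your own utility rate

         kwh_per_year = extra_watts / 1000 * 24 * 365   # ~876 kWh per year
         cost_per_year = kwh_per_year * rate_per_kwh    # ~$100/yr at this rate

         print(f"{kwh_per_year:.0f} kWh/yr -> ${cost_per_year:.0f}/yr")

     At around that rate, two or three years of the extra idle draw wipes out whatever you saved by buying the older card.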
  3. Stay tuned for the breakdown on a 6.4TB SSD Platinum 8160 fire beast that cost me far, far less than $1000 to build! This bad boy will also be pretty soft on the electricity bill and pretty darn quiet (explained at the very end).

     Currently I have a SuperMicro X9SRL-F w/ E5-1650 v2 and 48GB of ECC. For storage I have 11TB of cache via 12 2TB SAS drives, plus 100TB of storage spread across 20 disks that vary in size from 3TB to 18TB, with an 18TB and a 16TB as the parity drives; the largest data drive right now is 10TB. That CPU is a bit older and only has 6 cores / 12 threads, and I often find that during high-speed data transfers all the cores are pegged. Add in that I want to set up a few Dockers and a VM, and I figured it's time to upgrade. Well, it turns out the upgrades cost far less than I thought! So, the build:

     ++HP Z6 G4 Workstation $250 shipped (configured with an 8-core Silver Xeon, 8GB ECC, and a FirePro video card)
     ++Intel Xeon Platinum 8160 Engineering Sample $80
     ++2x16GB DDR4 ECC PC4-2400T RDIMM $60
     ++HPE 6.4TB PCIe SSD $300
     ++4x8TB Helium SAS drives $250

     —— ALREADY OWN ——
     ++2x15-bay DELL/EMC disk shelves (cost less than $300/ea; I paid about $350 total)
     ++LSI SAS9207-8e external SAS card
     ++Dell PERC H330 12Gbps 8i internal SAS card
     ++SAS DRIVES: 1x18TB, 4x4TB, 10x3TB
     ++SATA DRIVES: 1x14TB, 2x10TB
     ++SSD DRIVES SAS/SATA/NVMe: 1x1TB NVMe, 4x200GB SAS, 8x300GB SATA

     —— DECOMMISSIONING ——
     ++SAS DRIVES: 12x2TB, 4x300GB
     ++SATA DRIVES: 1x6TB, 1x5TB, 3x4TB, 3x3TB
     ++SuperMicro case with 16 3.5" bays

     I won't be able to use that case, so I have to slim the total drive count down to 34 or fewer, otherwise I'll have to use my loud 60-bay DELL/EMC VRA60. That said, the actual goal is to get down to a nice slim 18-drive setup, and I might at some point decommission a 10TB SATA drive to put in a fan-cooled external enclosure for nightly backups of important files. My absolute most important files are already backed up to the cloud and to a 4TB internal SAS drive, but I'd like to move to backing up everything besides movies, VMs, and games nightly. The new-purchase total is tallied below.
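     For reference, here's a quick tally of just the new purchases listed above (prices as listed; already-owned gear excluded), as a minimal sketch of where the build lands against the $1000 figure:

         # Tally of the new purchases from the list above (already-owned gear excluded).
         purchases = {
             "HP Z6 G4 Workstation (shipped)": 250,
             "Xeon Platinum 8160 Engineering Sample": 80,
             "2x16GB DDR4 ECC PC4-2400T RDIMM": 60,
             "HPE 6.4TB PCIe SSD": 300,
             "4x8TB Helium SAS drives": 250,
         }
         total = sum(purchases.values())
         print(f"New-build total: ${total}")   # prints: New-build total: $940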
  4. Can I just install Unraid onto a new thumb drive, boot it up, and have it find all my information, or am I going to run into trouble? I'm only worried about the data on my drives, not about apps/plugins, etc.
  5. Yes, I know exactly what it costs. This photo is from my Emporia system: it's my server and a small Intel NUC computer. I have a UPS, so I have another chart in Unraid; I don't remember if the Unraid chart does pricing or just kWh. I also have a SuperMicro board, chassis, and PSU, so I know how much just that machine pulls vs. everything else.
  6. Hi, I have a bunch of Seagate 3TB and 4TB drives, and I was wondering if this plugin makes use of Seagate PowerChoice modes. If it doesn't, is there a benefit to using those modes, and could the plugin be updated to use them?
  7. I've tried setting this up like others have instructed, and when I hit Connect in PuTTY it's just a blank console with a blinking green cursor. I'm on a Supermicro X9SRL-F based board, which has (2) serial ports; I have tried connecting to both. I am trying to connect to an Arista managed switch. I've connected to this switch from Windows using a USB-to-serial adapter, but I lost that cable, so I am trying to connect to the switch from my server, since it has a real serial port and I have tons of these serial management cables.
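     In case it helps narrow things down, here's a minimal pyserial read test that exercises the same port outside of PuTTY; the port name (/dev/ttyS0 for the first onboard port, possibly ttyS1 for the second) and the 9600 8N1 console settings are assumptions, not confirmed values:

         # Minimal serial read test (assumes pyserial is installed: pip install pyserial).
         # Port name and line settings below are assumptions for this board and switch.
         import serial

         with serial.Serial("/dev/ttyS0", baudrate=9600, bytesize=8,
                            parity=serial.PARITY_NONE, stopbits=1, timeout=2) as port:
             port.write(b"\r\n")               # nudge the switch console
             data = port.read(256)             # grab whatever the console sends back
             print(data.decode(errors="replace") or "(no response)")

     If this prints a login prompt, the onboard port and cable are fine and it's down to the PuTTY settings; if it prints "(no response)", the problem is somewhere further along the chain.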
  8. What are you using to transcode right now? Right now, you might want to use an NVIDIA Shield as a Plex-only server to do your transcoding. My guess is that as soon as Intel GPUs hit the market, the answer will be the absolute least expensive Intel GPU imaginable, since there's no reason even the lowest-end GPU they come out with shouldn't be able to do many, many transcodes without breaking a sweat.

     That said, I happen to dabble in 1150-based Unraid myself. I have a 6x10GbE 1150-based SuperMicro X10SLH-N6-ST031. It's not too dissimilar to your motherboard, as it has the same (2) PCIe slots you have; the biggest difference really is just that it has (6) 10Gbit ports on it. I'm probably going to walk away from it and go back to my X9SRL-F w/ E5-1650 v2 because I need more RAM, and ECC RAM for 1150 motherboards is expensive compared to the ECC you use on a 2011 board. Plus, I have an entire box of the ECC you use on a 2011 board.

     Looking at eBay, I can see that your motherboard used is worth as much as an X9SRL-F with a processor! The one I see for sale right now includes an E5-2620; the seller doesn't specify whether it is a v1 or v2 CPU, but honestly there's not a big difference between the two. He has 6 for sale and they are $187 plus shipping.

     As far as expansion slots are concerned, you have (4) PCIe 8x Gen 3 slots, (2) of them in 16x form. You also have (2) PCIe 4x Gen 3.0, both in 8x slots, and (1) PCIe 4x Gen 2.0 in an 8x slot! So like, YEH!!!! I'll be running the cheapest Intel GPU you can get once they come out, I'll have a 10Gbit card, (2) PCIe 8x Gen 3 4e4i SAS cards, and then (2) PCIe 4x Gen 3 NVMe-to-PCIe adapters. That still leaves the PCIe 4x Gen 2 slot free, which is still quite capable if I want to toss an 8e SAS card in to run a bunch more hard drives; the rough bandwidth math is sketched below.
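     Here's the back-of-the-envelope bandwidth math behind that last point, as a sketch; the per-lane figures are the usual post-encoding approximations, and the ~250 MB/s per spinning drive is an assumed sequential speed:

         # Approximate usable bandwidth per PCIe lane after line-code overhead.
         PCIE_GEN2_PER_LANE_MBS = 500   # 5 GT/s with 8b/10b encoding
         PCIE_GEN3_PER_LANE_MBS = 985   # 8 GT/s with 128b/130b encoding

         HDD_SEQ_MBS = 250              # assumed sequential speed of one spinning drive

         gen2_x4 = 4 * PCIE_GEN2_PER_LANE_MBS   # ~2000 MB/s for the Gen 2 x4 slot
         gen3_x4 = 4 * PCIE_GEN3_PER_LANE_MBS   # ~3940 MB/s for a Gen 3 x4 slot

         print(f"Gen 2 x4: ~{gen2_x4} MB/s, ~{gen2_x4 // HDD_SEQ_MBS} drives at full sequential speed")
         print(f"Gen 3 x4: ~{gen3_x4} MB/s, ~{gen3_x4 // HDD_SEQ_MBS} drives at full sequential speed")

     In practice an 8e HBA on the Gen 2 x4 slot can feed far more spindles than that, since the drives are rarely all streaming sequentially at once.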
  9. Well, I don't know why this would matter... I forced a different DNS server on my adapter and now it's working.
  10. It says my device is offline in the My Servers tab, but it is not offline. Everything worked for about 1 minute and then it wouldn't work anymore, and typing in the normal local IP just takes me to:

     Hmmm… can't reach this page. Check if there is a typo in b9c833d403f49dff6b50c172cb7e81dd6fca89c1.unraid.net. If spelling is correct, try running Windows Network Diagnostics. DNS_PROBE_FINISHED_NXDOMAIN
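     For anyone hitting the same thing, a quick way to confirm it's purely a DNS failure (rather than the server actually being down) is to try resolving the hash hostname directly; a minimal sketch, assuming Python is handy on the client machine:

         # Quick check: does the hash hostname from the error above resolve at all?
         import socket

         host = "b9c833d403f49dff6b50c172cb7e81dd6fca89c1.unraid.net"
         try:
             print(host, "->", socket.gethostbyname(host))
         except socket.gaierror as err:
             # A resolution failure here lines up with DNS_PROBE_FINISHED_NXDOMAIN in the browser.
             print("DNS lookup failed:", err)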
  11. When I built my array I accidentally assigned a disk to the array that I did not want to. Is this what I am going to have to go through to remove that disk from the array? I have already ensured there is nothing stored on it. I just can't believe there isn't an easier way to do this.
  12. I own a license for Unraid 6 and have owned it since 2015, but I have not really used it except for a few hours of playing around with it. I know that it is truly amazing stuff, but I keep getting into situations where tiny little things make me end up using something else. For the longest time I ran my home from a retail QNAP 16-bay dual-10G system that could double as a router and had a caching system and insane transfer speeds; then I sold it, and now I'm using OMV. But I feel like now it's time to make use of Unraid.

     THE THING IS: is there an easy way for me to use Unraid but also utilize some sort of other array at the same time, without an extra computer? Meaning, I would like to have a set of 16 3TB drives in ZFS and then a separate array of all the other random drives. For example, right now I have:

     • 16 unprovisioned 3TB SAS drives
     • 15 2TB SAS drives in a mergerfs/snapraid setup (I want to move the data on these to something like a ZFS setup)
     • a bunch of other drives: a 10TB SATA, 6TB SATA, 5TB SATA, (5) 4TB SATA, (4) 4TB SAS, and (6) 3TB SATA

     I want those random drives to be the Unraid volume. I probably won't be using the 2TB SAS drives after moving the data; those will probably go to a friend, since it's not worth paying $5-10/mo in electricity to run them. Anyways... I know that Unraid does great at virtual machines, but I'm not sure if it is a great idea to run ZFS or something like that in a virtual machine. Any ideas?
  13. I already own Unraid but I've never deployed it. I own a 16-bay QNAP that I'm finally decommissioning. I have 20 or so 2TB SAS drives that I really don't want Unraid to manage; I'd like to set them up in a couple of ZFS pools, and then I want Unraid's system to do its thing with all the random non-SAS drives I have. So I want Unraid to have as direct a connection to that storage as possible.

     I don't know all the technical details, but I know that with my QNAP one option was a specific type of configuration that Windows could connect to directly, and then nothing else would have been able to connect to it except by going through Windows. Maybe it was iSCSI, I'm not really sure. I'd like to know if something like that could be set up for Unraid. I just want Unraid to have access to the data, and I don't want anything else to have access except by going through Unraid to get to it. So if the Unraid computer is off, even if whatever controls those 2TB drives is still running, nothing will see those drives on the network. Hopefully someone understands what I'm trying to say. Thanks.

     PS: For the Unraid setup itself I plan to use 3 or more 15K SAS drives for cache (I have 20), then a 6TB parity drive and a random allotment of 5 or so 4TB and 5TB commodity drives.