Everything posted by newunraiduser5

  1. Thank you so much for this, and I am sorry I missed it. Can I check where the readme file is?
  2. Hi, I am trying to set up a whitelist using https://github.com/anudeepND/whitelist/blob/master/README.md but can't seem to get it to work without Python 3. The instructions for a Docker install / install without Python don't seem to work either. Can anyone help? Thanks in advance.
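     For reference, this is the rough workaround I was aiming for: borrowing Python 3 from a throwaway container instead of installing it on Unraid. The appdata path and the script location are my assumptions from memory of the README, so please verify against the repo before running anything.
        # Sketch only - the volume path and script path below are assumptions.
        # Start a temporary Python 3 container with the Pi-hole config mounted in:
        docker run --rm -it -v /mnt/user/appdata/pihole:/etc/pihole python:3-slim bash
        # Then, inside that container:
        #   apt-get update && apt-get install -y git
        #   git clone --depth=1 https://github.com/anudeepND/whitelist.git
        #   python3 whitelist/scripts/whitelist.py
        # Afterwards, restart the Pi-hole container from the Unraid Docker tab so it
        # picks up the updated gravity database.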
  3. Hi all, I am looking to speed up my main Windows 10 VM. Currently it lives in a vdisk sitting on the cache pool (2 x Samsung PM883 in a BTRFS mirror). I have a dual socket setup and unfortunately the HBA and my GPU sit in PCIe slots assigned to different CPUs; the CPU threads assigned to the VM all belong to the GPU's socket. So I am trying to speed up the VM as much as possible. There is probably some overhead associated with:
     a) the HBA being on a PCIe slot attached to the other CPU
     b) dual file systems, i.e. an NTFS vdisk on a BTRFS pool
     c) the HBA sharing bandwidth between the array drives and cache drives
     d) the SSDs being SATA
     I know each of the above individually shouldn't have a huge impact, but taken together it is probably significant. I can resolve all of them by adding a dedicated NVMe drive on a PCIe-to-M.2 adapter (my motherboard does not have an M.2 slot). All my PCIe slots are PCIe 3, and I can see that PCIe 4 NVMe drives typically perform better. Will a faster PCIe 4 NVMe still benefit me if I am attaching it to a PCIe 3 slot, or should I just go with a higher end PCIe 3 NVMe drive?
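     As a back-of-envelope note on my own question: a PCIe 3.0 x4 slot tops out at roughly 3.5-3.9 GB/s, so a PCIe 4 drive would be capped at Gen3 speeds for sequential transfers, though random I/O and latency (what the VM mostly feels) are less link-limited. Once a drive is in, something like the below shows what the slot actually negotiated; the PCI address is just a placeholder.
        # Find the NVMe controller's address, then check negotiated link speed/width.
        lspci | grep -i nvme
        lspci -vv -s 41:00.0 | grep -E 'LnkCap|LnkSta'
        # LnkSta reporting "Speed 8GT/s, Width x4" means PCIe 3.0 x4 - the ceiling of this slot.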
  4. Thank you so much. I ended up using the unbalanced plugin. Really appreciate your help.
  5. expanse-diagnostics-20201126-2123.zip Hi, I have attached the diagnostics file. Is this not an automatic process when I have set the high water mark option and split level? If I look at my drive space, I can see that Disk 1 had 0 free space earlier and now it is at around 60GB free, and I am not downloading or changing anything. If this is a manual process, would you be able to advise how I can do it? I would like to move a top-level folder in a share across to Disk 1. Thanks again for your help.
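     For the record, the manual way to do this (essentially what the unbalance plugin wraps, as far as I understand) is to copy between the disk shares directly and then delete the source. Sketch only, with placeholder share/folder names; the usual warning applies: stay entirely in /mnt/diskX paths and never mix /mnt/user and /mnt/diskX in the same copy.
        # Placeholder names - substitute your own share and folder.
        rsync -avh --progress /mnt/disk2/Media/SomeFolder/ /mnt/disk1/Media/SomeFolder/
        # Verify the copy arrived intact, then remove the source:
        rm -r /mnt/disk2/Media/SomeFolder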
  6. Hi all, thanks in advance for reading. I have 2 x 12TB data drives in the array and all shares set to high water mark. Unfortunately, I may have screwed up my folder split level / moved data onto the array too quickly, and now one disk has filled up with one particular media share. I have changed the folder split level to allow lower level folders to be split. I can see the reallocation happening, but it's running at less than 2MB per second, which is painfully slow; at this rate I can't download more files into the share for a week or more. Note that I had a bad reboot, so I have let Unraid continue the parity check it initiated. Is there anything I can do to speed up the reallocation? Will it speed up after the parity check is completed? Thanks all.
  7. A bit of context: I live in Singapore, so it's almost always 30 degrees Celsius. During the day (when I am working from home), I run them at 1000 RPM with the air conditioning set at 24 degrees. 1000 RPM keeps the CPU and PCH at around 50 to 55 degrees, and the drives sit between 30 and 35 degrees. At night, when the air conditioning is off, I just turn them back up to 3000 RPM and that keeps the server at around the same temperatures.
  8. Thanks! I get them down to about 1000 RPM and it's typically fine unless I have a very heavy workload.
  9. I think I fundamentally don't understand turbo write, so I will read into it! Is there a FAQ of sorts on it? Thanks, yes this is the 2U case. But generally on fans, you can use ipmitool to adjust them by passing raw commands. The default IPMI "Optimised" mode is still loud, but if you run the following at start up it will get them down to a normal level. The first command sets the IPMI fan mode to "Full"; this is the only setting where the Supermicro BMC doesn't adjust the fans itself (on any other mode, if you pass raw commands, it will ignore them after 30 seconds and do its own thing). The second two change the fan duty for each zone to a value out of 64, with 64 being full speed; mine is set to 0x16 in the commands below.
        ipmitool raw 0x30 0x45 0x01 0x01
        ipmitool raw 0x30 0x70 0x66 0x01 0x00 0x16
        ipmitool raw 0x30 0x70 0x66 0x01 0x01 0x16
     There are definitely cheaper 3U units out there. I paid through the roof for mine because I wanted 2 GPU cards (one for the W10 VM and the other for Plex transcoding in the docker).
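     To make those fan settings survive a reboot, the simplest approach I know of is to append the same commands to the flash go file (or run them from a User Scripts job at array start). Sketch below, assuming the ipmitool binary is already available on the host (e.g. via the IPMI plugin / NerdPack):
        # Added to /boot/config/go so they run on every boot:
        ipmitool raw 0x30 0x45 0x01 0x01             # set fan mode to Full so the BMC stops overriding
        ipmitool raw 0x30 0x70 0x66 0x01 0x00 0x16   # fan zone 0 duty (scale 0x00-0x40)
        ipmitool raw 0x30 0x70 0x66 0x01 0x01 0x16   # fan zone 1 duty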
  10. More of an FYI than a question. I saw quite a few threads asking about 100% active time in Windows VMs, and I found my solution. I had real disk IO issues with my setup: my vdisk sat on 2 Samsung enterprise drives in a BTRFS mirrored cache, connected to a backplane which was in turn connected to an LSI HBA. The VM was practically unusable, with disk active time at 100% almost all the time. I tried the standard stuff (disable indexing, disable Cortana, disable Superfetch, disable Prefetch, disable NVIDIA telemetry, reschedule NVIDIA profile updater to night time) and nothing worked. I then followed the Spaceinvader One tutorial on converting a vdisk to a physical disk and it worked perfectly; performance is now completely normal. Hope this may help someone else who is experiencing the same problem with a similar hardware setup. EDIT: I may have spoken too soon. I have found my vdisk was on the array instead of the cache (the share was set to cache-only but I think the vdisk was on the array; I have now changed it to "prefer" to make sure mover moves it back). Doing a parity check at the moment, but I will run mover once that is finished and switch back to the vdisk.
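      In case it helps anyone following the same tutorial: the key point is that the VM should reference a stable device path rather than /dev/sdX. A rough sketch of finding that path (the drive name shown is a placeholder):
        # List stable device paths; pick the entry for the whole disk (skip -partN entries)
        # and use it as the disk source when editing the VM.
        ls -l /dev/disk/by-id/ | grep -v part
        # e.g. ata-Samsung_SSD_XXXXXXXX -> symlink to the underlying /dev/sdX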
  11. I have an older Supermicro server with 12 x 3.5" bays and an LSI card with 4i and 4e ports, so I am planning to increase the array size by adding more drives. Yep, understood. I am doing a remote backup of documents and photos; TV shows, movies etc. aren't backed up as I can always find them again. Thanks. But I am still not sure why a drive needs to be spun down at all for it to work? Maybe I'm just not understanding it conceptually. And what is the recommended % of drives spun down?
  12. Hi, I am quite new at this. Can someone help me understand why the number of drives spinning down matters? I have 4 drives, 2 parity and 2 data. How many spun down drives should I select?
  13. Hi, I would like to uninstall it, but when I do, the main area sections are still dark. Can anyone assist with a full uninstall and reversion back to the original Unraid theme? Thank you.
  14. Hi All, I just received my 2U Supermicro Ultra 6028U-TR4+ with X10DRU-i+. The fans are loud even on optimised settings: I've changed the mode to Optimised in IPMI but they still run at around 4000 RPM, and I am getting some serious grief from my SO. I have looked at a few guides like https://forums.servethehome.com/index.php?resources/supermicro-x9-x10-x11-fan-speed-control.20/ https://www.mikesbytes.org/server/2019/03/01/ipmi-fan-control.html and https://www.informaticar.net/supermicro-motherboard-loud-fans/ but I am not quite sure how to pass the commands in Unraid. I know there is an IPMI Tools plugin, but even after I install it and SSH in, I still can't run any IPMI commands. I also have the IPMI command line tool from Supermicro installed on my PC to try to control it, but I have the same problem there. Can anyone assist?
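      Edit for anyone finding this later: I did eventually get ipmitool working on the server itself (see my later post with the raw fan commands). Two generic things worth trying, sketched from memory so treat them as assumptions: make sure the standard IPMI kernel modules are loaded so the local interface exists, or skip it entirely and talk to the BMC over the network. The BMC address and credentials below are placeholders.
        # Local interface - load the standard Linux IPMI modules first:
        modprobe ipmi_devintf
        modprobe ipmi_si
        ipmitool sensor list
        # Or reach the BMC over the LAN from any machine that has ipmitool:
        ipmitool -I lanplus -H 192.168.1.50 -U ADMIN -P 'yourpassword' sensor list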
  15. Hi All, there were a few minor issues with some parts not being available, and I also purchased a few parts separately. This is the final build which has been shipped to me. I will post any configuration issues I have once it arrives so that it may hopefully benefit other users. I also have a few additional questions, so I hope someone might be able to help with any issues that come up.
      Chassis (Unchanged): Supermicro SuperChassis 826BE16-R920LPB 12x 3.5" + 2 x 2.5" Expander Dual 920W PSU
      Motherboard (Unchanged): Supermicro MBD-X10SRH-CF, Single Socket 2011 v3/v4, 12Gbps HBA
      Backplane (Changed): Previously the vendor was going to supply a 6Gbps backplane to keep costs down. It was also a Supermicro one, so in theory it should have worked; however it didn't work with the onboard controller, so the vendor provided a 12Gbps backplane at the same price.
      CPU (Unchanged): Intel Xeon Processor E5-2678 v3 12 core 2.5GHz
      Additional HDD Kit (Unchanged): Supermicro rear side dual 2.5" HDD kit for 6Gb chassis. This is for my SSD cache drives.
      RAM (Unchanged): 2 x Micron 16GB PC4-17000R 2133MHz ECC DDR4
      Parity Drives (Unchanged): 2 x WD Ultrastar 10TB 3.5" SATA HDD
      Other array disks (Unchanged): 4 x HGST Ultrastar 7K6000 4TB 3.5" SATA 6Gb/s
      Existing hard drives (Unchanged): 4 x 4TB WD Reds I already have, to be put into the server once I transfer the data across
      Cache drives (Changed and purchased separately): 2 x Crucial MX500 500GB
      GPU (Unchanged): MSI NVIDIA GEFORCE GTX 1650 4GT LP OC Graphics Card 4GB GDDR5
      Other (Changed): The Supermicro 85cm 1-Port Ext iPass to Int iPass LP is no longer available at the vendor. They will provide me this part and install instructions if and when I purchase another JBOD enclosure.
      USB Hub (Changed): I purchased 2 hubs as these were reasonably cheap and low profile, hoping either one of them works. I will update when I put it together. The parts are the Ableconn PEX-UB132 and the FebSmart FS-U4L-Pro.
      Questions:
      - The motherboard has 1 x8 PCIe slot in x16 and 2 x8 PCIe slots in x8. Is it at all possible to put 2 GPUs inside? I ask because I want one available to a Windows VM and one available to a Plex docker. I don't see any x8 GPUs and the motherboard only has 1 x16-sized slot. What can I do here?
      - Something I didn't think about was IOMMU. I understand that the motherboard needs to support independent IOMMU groups. Do Supermicro motherboards typically support this? Does anyone have any specific experience with the X10SRH-CF? (See the check I've sketched below.)
      - I was also told that it's actually difficult to pass through NVIDIA GTX GPUs to a Windows VM. I will watch the video from Spaceinvader One, but just checking if anyone has specific experience with the MSI NVIDIA GEFORCE GTX 1650 4GT LP OC Graphics Card 4GB GDDR5.
      - Does anyone have experience passing through either the Ableconn PEX-UB132 or FebSmart FS-U4L-Pro?
      Thanks to everyone for reading / helping.
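      On the IOMMU question above, the quickest sanity check I know of once the board arrives is to enable VT-d in the BIOS and then list the groups from the Unraid console. Nothing Unraid-specific here, just standard sysfs:
        # List every IOMMU group and the devices in it; ideally the GPU (and its
        # HDMI audio function) end up in their own group for clean passthrough.
        for g in /sys/kernel/iommu_groups/*; do
            echo "Group ${g##*/}:"
            for d in "$g"/devices/*; do
                echo "    $(lspci -nns "${d##*/}")"
            done
        done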
  16. I have a follow-on question to this, wondering if someone could help. I have a server coming with the following motherboard: https://www.supermicro.com/en/products/motherboard/X10SRH-CF It has:
      1 PCI-E 3.0 x4 (in x8)
      1 PCI-E 3.0 x8 (in x16)
      2 PCI-E 3.0 x8
      1 PCI-E 2.0 x2 (in x4)
      1 PCI-E 2.0 x4 (in x8)
      I already have a GTX 1650 installed in the PCI-E 3.0 x8 (in x16). I wanted 2 GPUs: one for transcoding in a Plex docker and one to pass through to my Windows VM. The problem is, the only other suitable slots are the 2 PCI-E 3.0 x8. Is it possible to fit a low profile GPU card in one of these?
  17. Gotcha. The backplane has 4 x 6Gbps lanes, so that will be shared between the 12 hard drives and any further external JBOD. Not absolutely optimal, but the alternative is a separate controller, which increases the cost. Given it's 4 lanes, it should have 4x the bandwidth calculated. And given that Unraid isn't striping and only spins up drives as needed, this should be sufficient, right?
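      Working the numbers on that, back-of-envelope only (I may be off on the encoding overhead):
        # 4 lanes x 6 Gb/s = 24 Gb/s raw; 8b/10b encoding leaves roughly 2.4 GB/s usable.
        echo "usable GB/s ~ $(echo '4 * 6 * 0.8 / 8' | bc -l)"
        # Spread across 12 spinning drives that's ~200 MB/s each, about what one 7200 RPM
        # disk can stream - so it only pinches when everything reads at once (parity
        # checks / rebuilds), not in normal use where Unraid spins up a drive or two.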
  18. Yes, originally the vendor wanted to do 4 x 8GB, but I changed it to 2 x 16GB for expandability. Thanks, I will check with the vendor; I think they will connect to the backplane. Does that make sense?
  19. Yes, I've had a look around and it appears the card does not need extra power. The whole power point was something that the vendor, being reasonably responsible, wanted to point out to me, i.e. the chassis won't be able to take very high power GPUs in the event that I want to upgrade. I have been told there probably won't be any SATA power plugs left, so I will need to use the hack in the following thread to potentially deliver an additional 40 watts (2 x 20 watts) from Molex. Yes, there are 2; I forgot to put that in. If I put 4 in, would that be even better as an upgrade later down the track? Sorry, I don't understand this, would you explain a bit further? This was put in so that I can attach actual JBOD enclosures down the track: I can just have a similar Supermicro 85cm 1-Port Ext iPass to Int iPass LP in the JBOD enclosure and connect a whole heap of drives as well. Is there a misunderstanding here?
  20. NOTE - I have posted the final configuration in post 16 of this thread. I will leave my original topic below unedited; however, I will also update this thread once I get the hardware and post any issues as a matter of record. Hopefully this can benefit other users.
      Hi All, close to the final iteration of this server. I asked for a few changes to optimise the array and RAM.
      Chassis: Supermicro SuperChassis 826BE16-R920LPB 12x 3.5" + 2 x 2.5" Expander Dual 920W PSU
      Motherboard: Supermicro MBD-X10SRH-CF, Single Socket 2011 v3/v4, 12Gbps HBA
      CPU: Intel Xeon Processor E5-2678 v3 12 core 2.5GHz
      Additional HDD Kit: Supermicro rear side dual 2.5" HDD kit for 6Gb chassis
      RAM: Micron 16GB PC4-17000R 2133MHz ECC DDR4 (edit: should have 2 of these)
      Parity Drives: WD Ultrastar 10TB 3.5" SATA HDD
      Other array disks: HGST Ultrastar 7K6000 4TB 3.5" SATA 6Gb/s, plus a few existing 4TB WD Reds which I will put into the array once I move the data over
      GPU: MSI NVIDIA GEFORCE GTX 1650 4GT LP OC Graphics Card 4GB GDDR5
      Other: Supermicro 85cm 1-Port Ext iPass to Int iPass LP
      Outstanding questions:
      - Getting power to the GPU + any PCIe cards which may need more power: I understand the GPU chosen is just under 75W, so it should be OK, but if I ever wanted to put something more powerful in there, what kind of connector do I need and where does the power normally come from in a workstation or PC setup? My understanding from the vendor is that there needs to be a "custom cabling solution", but I am not too sure what that means.
      - Low profile PCIe USB card: I have asked this in a separate thread but am including it here for completeness. Does anyone know of or have a working low profile USB 3.0 or 3.1 card to pass through to a Windows 10 VM?
      - Power to the PCIe USB card: similar to the first question above, on quite a few of the cards I can see a separate power connector. Does anyone know what cable / connector this needs and where I connect it?
      Otherwise, does anyone foresee any issues with this build / setup? Thanks all in advance for reading, especially @Benson and @Decto.
  21. Hi, I have a server incoming but wanted to do a whole PCIe USB card passthrough to a Windows VM as suggested in the Spaceinvader One video (https://youtu.be/A2dkrFKPOyw). The problem is, I understand the chassis can only fit low profile cards and can't supply more than 75W of power. Does anyone have a low profile card that doesn't need extra power working with Windows passthrough? Thank you. Edit: Chassis is Supermicro SuperChassis 826BE16-R920LPB 12x 3.5" + 2 x 2.5" Expander Dual 920W PSU. Motherboard is Supermicro MBD-X10SRH-CF.
  22. Hi @Benson, quick question here: you mean no RAID, just allow the host Unraid direct access?
  23. NOTE - I have posted the final configuration in post 16 of this thread. I will leave my original topic below unedited; however, I will also update this thread once I get the hardware and post any issues as a matter of record. Hopefully this can benefit other users.
      All, I've got a quote from a cheaper vendor and wanted to check the configuration below. Thanks to @Decto and @Benson for your input.
      Chassis: Supermicro SuperChassis 826BE16-R920LPB 12x 3.5" + 2 x 2.5" Expander Dual 920W PSU
      Motherboard: Supermicro MBD-X10SRH-CF, Single Socket 2011 v3/v4, 12Gbps HBA
      Backplane: 6Gbps due to budget
      CPU: Intel Xeon Processor E5-2678 v3 12 core 2.5GHz
      Memory: 4 x Micron 8GB PC4-2133P-R DDR4 Registered ECC 1RX4 DIMM
      Cache drive: 1 x Samsung 480GB PM863a MZ7LM480HMHQ Enterprise SATA SSD
      Main pool drives: 8 x Hitachi 4TB 7K4000 7200RPM 3.5" SAS 6Gb/s hard drives, with 4 x WD Reds to be installed after I migrate the data off them
      Controller: Not specified by the vendor, but they understand it's for an Unraid install and the need for passthrough. Think this may be just
      GPU: MSI NVIDIA GEFORCE GTX 1650 4GT LP OC Graphics Card 4GB GDDR5
      My questions are:
      - For the GPU, if it is passed through to a Windows VM but Plex is installed via Docker in the main Unraid install, will there be conflicts? I.e. will Unraid / Plex not be able to use it for GPU transcoding so long as the Windows VM is powered on? (See the check I've sketched below.)
      - If a passed-through GPU can be used by the main system, is a workable solution to install 2 instances of Plex, one in the docker and one on the Windows VM, and stream off whichever server has the GPU?
      - The vendor mentioned that they need to test whether the onboard GPU needs to be disabled for the discrete GPU to work, and there may then be issues with IPMI because that relies on the onboard GPU. In any case, this isn't a huge problem because it will be a home server and I will have physical access, but I wanted to see if anyone else here has had any experience.
      - What do I need to communicate to the vendor about the drive setup? I.e. should the backplane be connected directly to the motherboard's onboard storage controller? I am not sure if the motherboard in itself has enough onboard SATA ports, so is there an advantage to connecting to the motherboard directly vs the controller? I am also aware that anything connected via a controller needs to be passed through to Unraid.
      - Similarly, for the cache drive, should it go through the controller or direct to the onboard motherboard controller?
      - Is the 480GB SSD sufficient? I plan on having a Windows VM (with a few games, but obviously all media storage will be in the main pool), docker, and obviously the original purpose as a cache drive.
      - If I can get a second 480GB SSD, what is the best way to set this up? Just two separate drives, i.e. one for cache and one for VMs/Docker? Or is it better to put them into a pool via BTRFS RAID 0?
      Yep, moving to a single CPU setup! Understood, my SO will thank you for bringing this up!
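      On the GPU question above, my understanding (happy to be corrected) is that a single card can only serve one side at a time: while it is bound to vfio for the Windows VM it is invisible to the Plex container, which is why I ended up planning for two cards. A quick way to see which driver currently owns each card from the console:
        # Shows each GPU and its "Kernel driver in use" line
        # (vfio-pci = reserved for VM passthrough; the NVIDIA host driver = usable by containers).
        lspci -nnk | grep -iA3 'vga\|3d controller'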
  24. NOTE - I have posted the final configuration in post 16 of this thread. I will leave my original topic below unedited; however, I will also update this thread once I get the hardware and post any issues as a matter of record. Hopefully this can benefit other users.
      Hi All, I'm thinking of getting a new server to serve as a NAS, Windows gaming desktop, Linux workstation and general homelabbing box. I'm looking at a refurbished Supermicro server with the following specs/configuration:
      System/Chassis: SuperServer 6028U-TRTP+ - https://www.supermicro.com/products/system/2u/6028/sys-6028u-trtp_.cfm
      Processors: 2 x Xeon E5-2620 v3 2.4GHz 6-core CPUs (Passmark 15256)
      Memory: 32GB (4 x 8GB) DDR4 Registered Memory
      Storage Controller: LSI 9311-8i (IT mode)
      Storage: Up to 12 SAS 3.5" drives
      Graphics Card: MSI GeForce GTX 1650 Low Profile
      Power Supply: 2 x 1000W 80+ Platinum Power Supplies
      I have a few questions:
      - I have 4 x 4TB WD Reds (SATA / 64MB cache / WD40EFRX / 5400 RPM / manufactured in Aug 2018) sitting in an Orico 4 bay external USB DAS enclosure. Is it possible to first purchase the server with 8 hard drives (out of the available 12 3.5" bays), set up Unraid, migrate the data from the existing 4 bay DAS onto the 8 hard drives in the server, then reformat my existing 4 hard drives and move them into the server for a total of 12 drives?
      - For the storage controller, it should already be flashed to IT mode. Will that be sufficient? I understand it should then be able to pass the drives to Unraid directly with all the SMART data intact etc. (See the check sketched below.)
      - While the seller does not have any available, is it possible for me to buy a PCIe to NVMe adapter (e.g. https://www.amazon.com/StarTech-com-PEX4M2E1-M-2-Adapter-Profile/dp/B01FU9JS94?th=1), install an SSD/M.2 drive and use it as the cache drive to host the Windows VM / Linux VM?
      - Last question, on performance. Currently, the Orico 4 bay DAS is configured using Storage Spaces with 1 drive of parity, running over USB 3.0. It is however extremely slow, especially for writes (about 3 MB/s write, 20 MB/s read). I don't know if that is because of Storage Spaces parity or the USB 3.0. Do you think my proposed setup will be faster?
      Thanks in advance for your help.
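      On the IT-mode point above, a rough sketch of how I plan to verify it once the server arrives: LSI/Broadcom's sas3flash utility (downloaded separately, for SAS3 cards like the 9311-8i) reports the firmware type, and drives behind a proper IT-mode HBA should show full SMART data. The device name below is a placeholder.
        # Firmware check - look for "IT" in the firmware product/version output:
        sas3flash -list
        # SMART data should pass straight through for drives on the HBA:
        smartctl -a /dev/sdb    # placeholder device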
  25. Hi, I am considering getting a server and running Unraid on it. I want something with 4-6 bays and will buy the number of hard drives that gives me around 30 TB of usable space. It will be used mostly as a Plex server (minimal transcoding as I typically stream on my home LAN, but I may need 1 stream max when I am not at home), with likely 1 Windows VM and 1 Linux VM (mostly homelabbing, perhaps light Excel/PowerPoint work if needed, but I'll probably have another machine for anything intensive). Budget excluding disks is around US$1,000 (or less if I don't need to spend that much for my use case). I am in Singapore, so I wanted to check if anyone knows any custom server builders who are in SG or would be happy to ship to SG at a reasonable price; I have looked online and I just can't seem to find any. Alternatively, would anyone be able to recommend hardware from HPE, Lenovo, Dell etc.? I have looked at their websites but do not really see anything that makes sense in my mind, though I may be wrong. All the large server hardware vendors are in Singapore, so it should be easy enough to purchase from them provided I know the right server. A final alternative may be to use actual NAS hardware. I saw on the forums that someone used an Asustor NAS; these are harder to get in Singapore but not impossible, while QNAP and Synology are everywhere here. Does anyone know any prebuilt NAS hardware from the vendors noted above which they have had experience with? Thanks in advance for everyone's help.