newunraiduser5

  1. Thank you so much for this, and I am sorry I missed it. Can I check where the readme file is?
  2. Hi, I am trying to set up a whitelist from https://github.com/anudeepND/whitelist/blob/master/README.md but can't seem to get it to work without Python 3. The instructions for a Docker install / install without Python don't seem to work either. Can anyone help? Thanks in advance. (A possible workaround is sketched after this list.)
  3. Hi all, I am looking to speed up my main Windows 10 VM. Currently it lives in a vdisk sitting on the cache pool (2 x Samsung PM883 in a BTRFS mirror). I have a dual-socket setup, and unfortunately the HBA and my GPU sit in PCIe slots assigned to different CPUs; the VM's assigned CPU threads all belong to the GPU's slot. So I am trying to speed the VM up as much as possible. There is probably some overhead associated with:
     a) the HBA being on a PCIe slot attached to the other CPU,
     b) dual file systems, i.e. an NTFS vdisk on a BTRFS pool,
     c) the HBA sharing bandwidth between the array drives and cache drives, and
     d) the SSDs being SATA.
     None of these individually should have a huge impact, but taken together the cost is probably significant. I can resolve all of the above by adding a dedicated NVMe drive on a PCIe-to-M.2 adapter (my motherboard has no native M.2 slot). All my PCIe slots are PCIe 3.0, and PCIe 4.0 NVMe drives typically benchmark better. Will a faster PCIe 4.0 NVMe drive still benefit me if it is attached to a PCIe 3.0 slot, or should I just go with a higher-end PCIe 3.0 drive? (Some bandwidth arithmetic is sketched after this list.)
  4. Thank you so much. I ended up using the unBALANCE plugin. Really appreciate your help.
  5. expanse-diagnostics-20201126-2123.zip Hi, I have attached the diagnostics file. Is this not an automatic process when I have set the high-water mark option and split level? Looking at my drive space, Disk 1 had 0 free space earlier and now it is at around 60GB free, and I am not downloading or changing anything. If this is a manual process, would you be able to advise how I can do it? I would like to move a top-level folder in a share across to Disk 1. Thanks again for your help. (A manual disk-to-disk move is sketched after this list.)
  6. Hi all, thanks in advance for reading. I have 2 x 12TB data drives in the array and all shares set to high-water allocation. Unfortunately, I may have screwed up my folder split level / moved data onto the array too quickly, and now one disk has filled up with one particular media share. I have changed the folder split level to allow lower-level folders to be split. I can see the reallocation happening, but it is running at less than 2MB per second, which is painfully slow; at this rate I can't download more files into the share for a week or more. Note that I had a bad reboot, so I have let Unraid continue the parity check it initiated. Is there anything I can do to speed up the reallocation? Will it speed up after the parity check is completed? Thanks all.
  7. A bit of context: I live in Singapore, so it is almost always 30 degrees Celsius. During the day (when I am working from home), I run them at 1000 RPM with the air conditioning set at 24 degrees. 1000 RPM keeps the CPU and PCH at around 50 to 55 degrees, and the drives sit between 30 and 35 degrees. At night, when the air conditioning is off, I just turn them back up to 3000 RPM and it keeps the server at around the same temperature.
  8. Thanks! I get them down to about 1000 RPM and it's typically fine unless I have a very heavy workload.
  9. I think I fundamentally don't understand turbo write, so I will read up on it! Is there an FAQ of sorts on it? Thanks, yes this is the 2U case. But generally on fans, you can use ipmitool to adjust them by passing raw commands. The default IPMI "optimised" mode is still loud, but if you run the following at startup (see the sketch after this list), it will bring the noise down to a normal level. The first command switches the fan mode to "Full". This is the only mode in which the Supermicro BMC does not adjust the fans itself; in any other mode, if you pass raw commands, it will ignore them after 30 seconds and do its own thing. The next two set the fan duty for each zone to a value out of 0x64 (100 decimal, so effectively a percentage); mine is set to 0x16 (22 percent) below.
     ipmitool raw 0x30 0x45 0x01 0x01
     ipmitool raw 0x30 0x70 0x66 0x01 0x00 0x16
     ipmitool raw 0x30 0x70 0x66 0x01 0x01 0x16
     There are definitely cheaper 3U units out there. I paid through the roof for mine because I wanted two GPU cards (one for the W10 VM and the other for Plex transcoding in Docker).
  10. More of an FYI than a question. I saw quite a few threads asking about 100% active time in Windows VMs, and I found my solution. I had real disk I/O issues with my setup: my vdisk sat on 2 Samsung enterprise drives in a BTRFS mirrored cache, connected to a backplane which was in turn connected to an LSI HBA. The VM was practically unusable; disk active time was at 100% almost all the time. I tried the standard stuff (disable indexing, disable Cortana, disable Superfetch, disable Prefetch, disable NVIDIA telemetry, reschedule the NVIDIA profile updater to night time) and nothing worked. I then followed the Spaceinvader One tutorial on converting a vdisk to a physical disk and it worked perfectly; performance is now completely normal. Hope this helps someone else who is experiencing the same problem with a similar hardware setup. EDIT: I may have spoken too soon. I have found my vdisk is on the array instead of the cache (I had the share set to cache-only, but I think the file was on the array; I have now changed it to "prefer" to make sure the mover moves it back; see the sketch after this list for checking where it lives). Doing a parity check at the moment, but I will run the mover once that is finished and switch back to the vdisk.
  11. I have an older Supermicro server with 12 x 3.5" bays and an LSI card with 4i and 4e ports, so I am planning to increase array size by adding more drives. Yep, understood. I am doing a remote backup of documents and photos; TV shows, movies, etc. aren't backed up, as I can always find them again. Thanks. But I am still not sure why a drive needs to be spun down at all for it to work? Maybe I am just not understanding it conceptually. And what is the recommended percentage of drives to leave spun down?
  12. Hi, I am quite new at this. Can someone help me understand why the number of drives spinning down matters? I have 4 drives, 2 parity and 2 data. How many spun-down drives should I select?
  13. Hi, I would like to uninstall it, but when I do, the main area sections are still dark. Can anyone assist with a full uninstall and reversion to the original Unraid theme? Thank you.
  14. Hi all, I just received my 2U Supermicro Ultra 6028U-TR4+ with X10DRU-i+. The fans are loud even on optimised settings: I've changed the mode to Optimised in IPMI, but they still run at around 4000 RPM, and I am getting some serious grief from my SO. I have looked at a few guides, like https://forums.servethehome.com/index.php?resources/supermicro-x9-x10-x11-fan-speed-control.20/ https://www.mikesbytes.org/server/2019/03/01/ipmi-fan-control.html and https://www.informaticar.net/supermicro-motherboard-loud-fans/ but I am not quite sure how to pass the commands in Unraid. I know there is an IPMI Tools plugin, but even after I install it, I SSH in and still can't run any IPMI commands. I also have the IPMI command line tool from Supermicro installed on my PC to try to control it, but I have the same problem there. Can anyone assist? (Two ways to reach the BMC are sketched after this list.)
  15. Hi all, there were a few minor issues with some parts not being available, and I also purchased a few parts separately. This is the final build which has been shipped to me. I will post any configuration issues I have once it arrives, so that it may hopefully benefit other users. I also have a few additional questions, so I hope someone might be able to help me with any issues that come up.
     Chassis (Unchanged): Supermicro SuperChassis 826BE16-R920LPB, 12 x 3.5" + 2 x 2.5" bays, expander backplane, dual 920W PSU
     Motherboard (Unchanged): Supermicro MBD-X10SRH-CF, single socket 2011 v3/v4, 12Gbps HBA
     Backplane (Changed): Previously the vendor was going to supply a 6Gbps backplane to keep costs down. It was also a Supermicro one, so in theory it should have worked; however, it didn't work with the onboard controller, so the vendor provided a 12Gbps backplane at the same price.
     CPU (Unchanged): Intel Xeon E5-2678 v3, 12 cores, 2.5GHz
     Additional HDD kit (Unchanged): Supermicro rear-side dual 2.5" HDD kit for 6Gbps chassis. This is for my SSD cache drives.
     RAM (Unchanged): 2 x Micron 16GB PC4-17000R 2133MHz ECC DDR4
     Parity drives (Unchanged): 2 x WD Ultrastar 10TB 3.5" SATA HDD
     Other array disks (Unchanged): 4 x HGST Ultrastar 7K6000 4TB 3.5" SATA 6Gbps
     Existing hard drives (Unchanged): 4 x 4TB WD Reds I already have, to be put into the server once I transfer the data across
     Cache drives (Changed and purchased separately): 2 x Crucial MX500 500GB
     GPU (Unchanged): MSI NVIDIA GeForce GTX 1650 4GT LP OC, 4GB GDDR5
     Other (Changed): The Supermicro 85cm 1-port external-iPass-to-internal-iPass LP cable is no longer available at the vendor. They will provide me this part and install instructions if and when I purchase another JBOD enclosure.
     USB hub (Changed): I purchased 2 hubs as these were reasonably cheap and low profile, hoping either one works; I will update when I put it together. The parts are the Ableconn PEX-UB132 and the FebSmart FS-U4L-Pro.
     Questions:
     a) The motherboard has one x8 PCIe slot in an x16 size and two x8 PCIe slots in an x8 size. Is it at all possible to put 2 GPUs in? I ask because I want one available to a Windows VM and one available to a Plex Docker, but I don't see any x8-sized GPUs and the motherboard only has the one x16-sized slot. What can I do here?
     b) Something I didn't think about was IOMMU. I understand the motherboard needs to put devices in independent IOMMU groups. Do Supermicro motherboards typically support this? Does anyone have specific experience with the X10SRH-CF? (A script for checking the groups is sketched after this list.)
     c) I was also told that it is actually difficult to pass NVIDIA GTX GPUs through to a Windows VM. I will watch the video from Spaceinvader One, but just checking if anyone has specific experience with the MSI NVIDIA GeForce GTX 1650 4GT LP OC 4GB GDDR5.
     d) Does anyone have experience passing through either the Ableconn PEX-UB132 or the FebSmart FS-U4L-Pro?
     Thanks to everyone for reading / helping.
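
For item 2: since the repo's scripts need Python 3, one workaround is to pull the raw list with curl and feed it to Pi-hole's own CLI, with no Python involved. This is only a sketch: the raw-file path domains/whitelist.txt is assumed from the repo layout, and it assumes you run it wherever the pihole command is available (via docker exec if Pi-hole is dockerised).

    # Fetch the whitelist, drop comment lines, and add every domain
    # through Pi-hole's built-in whitelist command.
    curl -sSL https://raw.githubusercontent.com/anudeepND/whitelist/master/domains/whitelist.txt \
      | grep -v '^#' \
      | xargs pihole -w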
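
For item 3: some back-of-the-envelope bandwidth arithmetic. Usable link throughput is the per-lane transfer rate times the 128b/130b encoding efficiency times the lane count, and a PCIe 4.0 drive in a PCIe 3.0 slot negotiates down to 3.0 rates, so its sequential speed is capped at the 3.0 figure.

    # GB/s = GT/s per lane * 128/130 encoding * lanes / 8 bits per byte
    echo "scale=2; 8  * 128 / 130 * 4 / 8" | bc   # PCIe 3.0 x4 -> about 3.9 GB/s
    echo "scale=2; 16 * 128 / 130 * 4 / 8" | bc   # PCIe 4.0 x4 -> about 7.9 GB/s

So on a 3.0 slot both drives top out near 3.9 GB/s sequential; a 4.0 drive may still do a little better on random I/O and latency thanks to a newer controller, but a high-end 3.0 drive is usually the better value in that slot.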
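
For item 5: as far as I know, the high-water allocation method only decides where newly written files land; Unraid never reallocates existing data between array disks on its own, and the mover only moves files between cache and array. So moving a top-level folder to Disk 1 is a manual job (or one for the unBALANCE plugin mentioned in item 4). A minimal sketch over SSH, with a hypothetical share and folder name; keep both paths on /mnt/diskX and never mix /mnt/user with /mnt/diskX paths in one command:

    # Copy the folder from disk 2 to disk 1, preserving attributes.
    rsync -avh /mnt/disk2/Media/SomeFolder/ /mnt/disk1/Media/SomeFolder/
    # Only after verifying the copy, remove the source.
    rm -r /mnt/disk2/Media/SomeFolder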
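
For item 9: one way to run those fan commands at startup is to append them to Unraid's /boot/config/go script, which executes on every boot (the User Scripts plugin is an alternative). A sketch using the same raw values from the post:

    # /boot/config/go additions: set the Supermicro fan mode to Full
    # (the only mode where the BMC leaves manual duty cycles alone),
    # then set both fan zones to 0x16 (22%) duty.
    ipmitool raw 0x30 0x45 0x01 0x01
    ipmitool raw 0x30 0x70 0x66 0x01 0x00 0x16
    ipmitool raw 0x30 0x70 0x66 0x01 0x01 0x16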
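
For the edit in item 10: a quick way to see where a vdisk really lives is to look under the disk mounts directly, since /mnt/user merges cache and array into one view. A sketch with a hypothetical vdisk path:

    # Present under /mnt/cache means the file is on the pool;
    # /mnt/user0 is the array-only view of the user shares.
    ls -lh /mnt/cache/domains/Windows10/vdisk1.img
    ls -lh /mnt/user0/domains/Windows10/vdisk1.img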
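
For item 14: if ipmitool is present but local commands fail, the kernel's IPMI device interface may simply not be loaded; alternatively, you can talk to the BMC over the network from any machine. A sketch of both routes, with placeholder BMC address and credentials:

    # In-band: load the IPMI device driver, then talk to the local BMC.
    modprobe ipmi_devintf
    ipmitool raw 0x30 0x45 0x01 0x01
    # Out-of-band: reach the BMC over the LAN instead.
    ipmitool -I lanplus -H 192.168.1.50 -U ADMIN -P 'secret' raw 0x30 0x45 0x01 0x01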
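
For question b) in item 15: once the server is up with VT-d enabled in the BIOS (and intel_iommu=on on the kernel command line if needed), the grouping is easy to inspect with a standard sysfs walk:

    #!/bin/bash
    # Print each IOMMU group and the PCI devices inside it.
    for g in /sys/kernel/iommu_groups/*; do
      echo "IOMMU group ${g##*/}:"
      for d in "$g"/devices/*; do
        echo -e "\t$(lspci -nns "${d##*/}")"
      done
    done

If the GPU and its HDMI audio function land in a group of their own, passthrough is straightforward; if not, the ACS override patch is the usual fallback.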