SSD

Posts posted by SSD

  1. 21 hours ago, hernandito said:

    Thank you CHBMB!

     

    To summarize, in NZBGet go to: Settings  > Extensions Scripts. In the field labeled ShellOverride enter the following:

    
    .py=/usr/bin/python3

    This worked perfectly.

     

    Thank you my friend... and Happy New Year.

     

    H.

     

    Hmmm ... 

     

I tried this and it's still not working. I get this error:

     

    nzbToMedia: Could not start /usr/bin/python3: No such file or directory
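A quick way to check where (or whether) python3 actually exists for NZBGet, just a sketch, assuming NZBGet is running in a Docker container named nzbget (use whatever your container is actually called):

    docker exec nzbget sh -c 'which python3; ls -l /usr/bin/python*'   # shows the real python3 path, if any

If python3 lives somewhere else, point ShellOverride at that path instead.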

  2. Did some Googling and found a suggestion to run the command ...

    ip route add default via 192.168.1.1

    which resolved the issue.

     

    Why is this necessary? Have I misconfigured something?

     

    Before running the command above:

    root@merlin:~# route -n
    Kernel IP routing table
    
    Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
    172.17.0.0      0.0.0.0         255.255.0.0     U     0      0        0 docker0
    192.168.1.0     0.0.0.0         255.255.255.0   U     0      0        0 br0
    192.168.122.0   0.0.0.0         255.255.255.0   U     0      0        0 virbr0

     

    After running the command above:

    root@merlin:~# route -n
    Kernel IP routing table
    
    Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
    0.0.0.0         192.168.1.1     0.0.0.0         UG    0      0        0 br0
    172.17.0.0      0.0.0.0         255.255.0.0     U     0      0        0 docker0
    192.168.1.0     0.0.0.0         255.255.255.0   U     0      0        0 br0
    192.168.122.0   0.0.0.0         255.255.255.0   U     0      0        0 virbr0
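For anyone checking the same thing, a quick way to confirm whether a default gateway is set (a sketch; substitute your own gateway IP and bridge):

    ip route show default                              # empty output means no default route is set
    ip route add default via 192.168.1.1 dev br0       # adds one manually (not persistent across reboots)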

    -SSD

  3. Hey all -

     

    I've been away for a bit, but trying to update my server to 6.7.0 from 6.4.1.

     

    Boots fine, but the server cannot access the Internet. It does see the local network.

     

Attempts to update plugins and dockers fail because the server is unable to access the Internet.

     

    When I start my Windows VM, the Windows VM CAN see the Internet.  It works fine.

     

    Any help greatly appreciated!

     

    I have attached my diagnostics.

     

    Thanks for the help!!

     

    -SSD

    merlin-diagnostics-20190612-2239.zip

  4. On 9/8/2018 at 8:07 PM, icedragonslair said:

I have four 5-in-3 iStar cages, but only space for three in my case, so I was thinking of grabbing a new case for them all to fit in  :)

     

However, I am already at 18 drives plus cache and parity. Since the cache is not mounted in them, I have one slot left, and that is reserved for a second parity, and then I'm all filled up. For the price, I may look into a 24-bay rackmount setup for a couple hundred extra... and then just think of playing around with my second license.

     

    Thanks loads,

    Ice

     

    An option is to hook up the 4th 5in3 externally. You just need to run the cables out the back of the case (including a power pigtail). Depending on where the server is stored, the aesthetics might be acceptable.

  5. 1 hour ago, sadeq said:

    Hi


    I'm planning to migrate to unRAID within 35 days from now (currently using ubuntu)

    usage is NAS/ plex and running a virtual machine or two

    my current server specs are
    intel core i3 8100 4c/4t 3.6GHz / intel UHD Graphics 630
    asus z370F Strix (7 pcie slots) / 6 sata ports and lots of usb headers
    8GB DDR4 HyperX Fury (non ECC)
    corsair obsidian 750D (6 3.5 inch bays upgradeable to 12)
    cooler master v650s (gold) psu
    TOSHIBA A100 120GB SSD (for OS)
    1x WD RED 6.0TB
    1x WD RED 3.0TB
    1x WD Purple 2.0TB
    for power backup I have an inverter with 120AH battery so power outage is not a problem

     

Looks like it would make for a good unRAID server. The 8G of RAM may hold you back slightly if you want to run VMs. You need to dedicate RAM to each VM, and I'd suggest leaving at least 4G for unRAID. That would mean 4G for your VM. If you need more, you can likely upgrade your RAM.

     

    1 hour ago, sadeq said:


my plan is detaching the whole old storage configuration, buying a good usb flash Drive, 256gb SSD For plex and 3x6.0TB RED Drives for the array, then moving the current data to the new Drives and then adding the OLD 6.0TB red drive to the array later and maybe ditch the smaller size drives and use the OLD 120GB ssd for a ubuntu VM

    is this configuration good for my usage ?
    -  I mean my RAM is not ECC so do I really need one (probably will restart the server once a month)

     

You'll get different answers on this one. I personally went from an ECC-capable build to a non-ECC one recently and just tested the heck out of the memory. Never had a problem. It would not be a deal breaker for me.

     

    1 hour ago, sadeq said:


    - can I pass the integrated GPU to a plex server installed directly on unRAID or a VM so that I can use to for transcoding ? (if yes, I'll buy a cheap $25 gpu for video output)

     

Yes, you should be able to. I would recommend the Plex Docker; no passthrough is needed, and the iGPU can do hardware transcoding.
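Roughly what that looks like on unRAID (a sketch, assuming an Intel iGPU and a Docker-based Plex install; hardware transcoding in Plex also requires Plex Pass):

    modprobe i915        # load the Intel iGPU driver (add to /boot/config/go to persist across reboots)
    ls /dev/dri          # should list card0 and renderD128 once the driver is loaded

Then add --device=/dev/dri to the Plex container's extra parameters so the container can see the iGPU.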

     

    1 hour ago, sadeq said:

    - do I need a cache SSD ? if yes should I use it while migrating the data or after migrating ?

     

I would suggest an SSD shared between cache, Plex, and the VM. You don't need a dedicated disk for each.

     

    1 hour ago, sadeq said:

    I don't know if this is the right place for this question but...,will the whole array appear as one big Drive ?, and what is the best way to share the whole array to an ubuntu VM, I'm planning on using samba, but is there a better way than that ?

    thanks in advance for the help

     

  6. On 8/9/2018 at 2:50 PM, hoba_rce said:

    I just bought Seagate Backup Plus 5TB external USB 3 drive. For some reason the external box that contains Seagate Barracuda ST5000LM000 2,5" drive is 1/3 cheaper than the drive alone....

     

    After a few minutes the box started to fail and disconnect and could not be connected again. I ran Crystal Disk info and found out that the drive quickly heats to over 55°C inside the box and refuses to work. I extracted the drive and connected it directly to SATA port in my cooled case and the drive worked fine until it reached 40°C after about an hour and random read/write errors started to occur.

     

    I placed a pot of cold water on top of the drive so it stayed below 35°C, managed to extract the data, reassembled the box and sent it for RMA. It seems like this drive has some serious heat issues that makes it fail when it heats up...

     

A couple of things make me doubt this experience. First, a drive is not going to suddenly start running very hot like that. And second, a drive's heat dissipates mainly from the bottom of the drive, so cold water on top would likely have little effect; certainly not enough to take it from 55C down to 35C.

     

    It's not clear whether you shucked the drive to use in unRAID. If you didn't, a high temp might be expected. But again, a cup of cold water on top is likely not going to help.

     

    But if the drive was failing, returning it for replacement is the right action.

  7. 4 hours ago, methanoid said:

     

    Well I think I PAID for my current Pro licence but nobody seems to see that....

     

     

Great, and I am also happy with my first licence. I am MERELY questioning whether discounts on subsequent licences might be an idea. I didn't expect the Spanish Inquisition (and loads of "I love unRAID and would gladly sell my firstborn child to buy another licence" type posts)

     

    Good grief.. this is a good forum but heaven forbid you ask the unaskable questions ;-)

     

     

    Looks like @bonienl is being paid for his services, which makes sense given that he developed and maintains the GUI (as far as I know). And your comment may be hitting him in the pocketbook!

     

Used to be that a license came with 2 keys. This was before the easy key replacement program, and Tom's intent was for the second key to serve as a backup (although there were no restrictions on using the second key for a second server). But there was a reworking of the licensing structure, and with the key replacement feature in place, the second key is no longer provided (I think the license cost went down, but I'm not sure).

     

    But generally you need one key per server. You pay the price (just as you would for Windows). It is a fair price IMO. Offering a second key at a discount might encourage more people to buy extra keys, which might actually work to LimeTech's advantage. But that's up to them.

     

I will say that LimeTech does not consistently monitor the forum for these types of questions, so you won't hear back from them unless you send an email. And discussions of the license fee and alternatives may fall on deaf ears, as this has been debated a great deal over the years. At this point, I think I am correct in stating that LimeTech is not entertaining other options.

  8. 1 hour ago, sansanwal said:

Can I install unRAID and the VMs?

Because I have the same CPU and I want to make a NAS from my main PC.

     

    my setup:

    1 monitor, 1 gpu 1080, ram 24gb, 1 ssd, 2 hdd and last motherboard supporting VT-D technology

     

Seems very doable. Depends on your CPU, but I assume it is pretty good considering the GPU. I did something similar a couple of years back with a 4-core Xeon.

     

    You would need to consider how to migrate the physical machine install to the VM. Might look at something like Acronis which can directly create a VM image.

  9. On 7/29/2018 at 4:35 PM, garycase said:

    This has the BIG advantage of protecting your system from hardware failure without the need to reload anything.     If the physical machine you're running a VM on fails, just move the VM to a new system and it will run just fine -- no activation;  no programs to reinstall; etc.   

     

I did a server update and was able to move my Windows VM over, and it ran perfectly.

     

But the VM was set up for the old 4-core server, and I wanted to update it for my new 12-core processor. So I created a VM on the new server, looked at its XML for the CPU config (topology, cores, etc.), and used that to manually edit my real VM's XML. Worked fine, but afterwards Office 2016 complained and wanted me to re-register. Because my licenses were OEM and tied to the machine, I had issues. Windows, also an OEM license, did not complain.
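If anyone wants to make the same comparison from the command line, a rough sketch (the VM names here are just placeholders):

    virsh dumpxml NewServerVM | grep -A3 '<cpu'     # shows the <topology sockets/cores/threads> block
    virsh dumpxml MyWindowsVM | grep -A3 '<cpu'

The bits worth carrying over are <vcpu>, the <topology> element, and the <cputune> vcpupin entries.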

     

I was thinking that a VM would completely shield me from these types of issues, as it would appear to use the same motherboard, chipset, etc., and this seems true if you don't touch its config, but you can run into licensing issues when upgrading a server and reconfiguring the VM to match your new CPU.

     

    Just FYI

  10. @Alfie798 / @Frank1940 -

     

The requirement (or at least strong recommendation) if you are going to run VMs is to have one core (both threads if hyperthreading is enabled) reserved for unRAID. The remaining cores can be assigned to VMs. Not doing this has resulted in unRAID being "starved", which crashes the server. Of course this was determined a long time ago, and I'm not sure anyone has made a serious effort since to determine whether it is still a real concern.

     

It is not necessary to dedicate cores to VMs. So you could have two VMs each sharing 3 cores. Of course performance may be gated if one is doing processor-intensive tasks. I've done this with a VM and Plex (two heavy CPU users) and ran into trouble where the VM would become unresponsive when Plex was doing heavy transcodes. I had success dedicating 1 core to each and then allowing the other cores to be shared. With a quad-core CPU, that would mean one dedicated to each and 1 shared. With a hexcore, it would mean one dedicated to each and 3 shared. (With a 12-core, 1 dedicated to each and 9 shared.) Sharing the cores lets the power be used by whichever process needs it rather than sitting idle "just in case" another needs it. The single dedicated cores ensure neither is completely starved.
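If you're pinning cores, it helps to know which logical CPUs are hyperthread pairs, so a reserved "core" covers both of its threads. A quick sketch:

    lscpu -e=CPU,CORE                                                  # logical CPU to physical core mapping
    cat /sys/devices/system/cpu/cpu0/topology/thread_siblings_list    # e.g. "0,6" means CPUs 0 and 6 share a core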

     

I'd definitely prefer the i3. It does have a built-in iGPU, so a standalone video card may not be important (depending on what kind of video performance you need from your VM). The iGPU can be passed through if desired (never done it myself, so you might want to confirm, but I'm 99% sure). If you want to step up to the hex-core with iGPU, I'd do that rather than adding the video card (again, unless you need the faster video card). Many iGPUs can do "Quick Sync", which is an excellent feature if transcoding video is among your requirements.

     

    The 4 core i3 will have enough horsepower to run a VM very respectably. It would have 3 3.6GHz cores. With the Pentium, it would have only 1 3.8GHz core. Quite a difference. With a hexcore (e.g., i5-8600K), you'd have 5 3.6GHz (4.3GHz turbo) cores for the VM. But unless you are doing processing intensive tasks, 3 cores should be quite acceptable. But if you wanted to run two gaming VMs off the same server, I'd definitely look at hexcore.

  11. On 7/28/2018 at 2:36 AM, tillkrueger said:

    the new (used) 6TB drive ...

     

Did you buy a used 6TB disk? I would not suggest buying a used drive unless it is from someone you trust very much and you really know what to look for in terms of signs of disk issues. If I did wind up with a used drive, I would test it hard before thinking about putting it in service. Even if it were under warranty, I'd pass. Warranty returns often result in getting refurbs, which tend to fail early. Much better to get a good new drive out of the gate, which will live a long lifetime if treated well. If it fails in early testing, it can be returned to the place I bought it for a new replacement. Much better than a refurb. Note also that not all used drives are under warranty. If a disk has been shucked from an external, even if it is relatively new, it may not be under warranty.

     

    I'm not against used. I would buy used CPUs, disk controllers, video cards, drive cages, and even motherboards from good sellers on eBay. But hard disks I want to be new.

     

     

  12. 15 hours ago, Serpent7 said:

    Guess who is finally Formatting Devices, and Parity Synching???

     

I could not have done it without your help!!! I'll be asking loads of questions in the coming days, but for now, I'm feeling very accomplished!

     

    -SS

     

    Good deal. Hope it is smooth sailing ahead. Community is here if you have questions or problems.

  13. 2 hours ago, BillR said:

    The fans built into the drive cages seem to keep the disks nice and cool.  It's winter here, but when I shut the unit down after hours of use, I can put my finger on the CPU heat-sink and it is barely warm.  I will re-assess when the weather gets warm, but right now, everything seems cool.  The little 80mm side fan is currently set to intake, and between it and the drive cage fans, there's plenty of air flow.  The CPU TDP is 65 watt, including the GPU, so not a lot of heat generated.  I think it will be fine.

    I'd suggest starting a parity check, and monitoring the disk temps for at least 30-45 mins. If they stay in the low 40s, you're good. If they keep getting hotter and hotter with time, and start to go over 46 or 47C,  I'd stop the parity check.
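If you want to spot-check the temps from the command line while the check runs, a quick sketch (the device name is a placeholder for one of your drives):

    smartctl -A /dev/sdb | grep -i temperature     # reports the drive's current temperature from SMART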

     

I think you'd have better airflow/cooling with at least one exhaust fan. The fans on the drive cages bring in the air, and the PSU plus an exhaust fan would help force that air out, making it easier for the intake fans to bring in more fresh air. While it can be good to have a little positive pressure in the case, 3 in and 1 out might be too much. But run your temperature tests and see.

  14. @BillR -

     

Maybe with the heavy "pull" of air from the case front (over the drives), there will be positive pressure inside the case that forces air out.

     

    The Newegg picture shows a fan bottom left. (See below). Is that in place?

     

Looks like it's possible to have a fan on top of the case just next to the PSU, blowing upwards. That would be awesome if you are having heat issues. Maybe a thin-mount 92mm or 120mm would fit if you remove the horizontal support piece.

    11-119-286-06.jpg

  15. 4 hours ago, BillR said:

    Thanks.  OK, so this first pic is from top down.  The case is designed to take a full sized ATX PSU, but I opted for a SFF power supply to give me some wiggle room.  If I'd gone with a regular PSU, you wouldn't be able to see the CPU fan.  So an added bonus of the SFF PSU is better airflow to the CPU.

     

    The second pic shows the rig from the left side.  You can see the strips of steel I used to stabilise the drive cages.  The front of the case was literally cut out with tin snips, after measuring up and scoring the outline on the steel.  The plastic front panel, I cut with a hacksaw blade and then smoothed out with a rasp.  Not a lot of room to play with, but everything is there.

     

    The motherboard itself only has 4 onboard SATA ports, so I have a PCIe card with 2 more.  But I'm replacing that with a 4-port card to cater for all bays full plus sata SSD.

     

    Bill.

     

    Do you have any exhaust fan except the PSU?

     

    Looks like it might get warm in there.

  16. 27 minutes ago, BillR said:

    Hey folks,

    I'm an unRAID newb, but thought I'd share my little project.  I picked up a 2nd hand Coolermaster Elite 130 Mini-ITX case that originally had just one external 5.25" drive bay and a couple of internal mounts for drives.  I took to it with a pair of tin snips and a hacksaw and was able to squeeze in a pair of 3-bay hot-plug drive cages and made up my own internal brackets to mount them.  I can take some internal photos if anyone's interested, but with the depth of the drive cages, there is not a millimeter to spare between the back of the drive and the motherboard.  I wanted something small, quiet, low-power and with external drive access.  Internally, I also have a pair of SSDs for cache - one m.2 and one 2.5".  The motherboard is a ROG Strix X470-i, populated with a Ryzen 2200G and 16 GB of RAM.  The solitary PCIe slot is populated with a 2-port (soon to be replaced with a 4-port) SATA board.  I have a pair of 3 TB 7200 rpm drives installed and two more arriving tomorrow.

     

    The box is just going to be a home storage & plex server to start with and we'll see where it goes from there...

     

    unRAID.jpg

     

    Love it! I've been looking for a small, portable build, and this looks pretty awesome.

     

    Looking forward to internal pictures as well.

  17. Interesting article about Intel's next generation of CPUs

     

    https://www.forbes.com/sites/antonyleather/2018/07/24/intels-monster-core-i9-9900k-8-core-processor-will-reach-a-massive-5ghz-with-4-7ghz-all-core-boost/#7d2fb1e57222

     

    Appears that the new i7s are going to drop hyperthreading. They are anticipated to release an 8 core / 8 thread i7-9700K at about $350. An i9-9900K will be a similar 8 core / 16 thread CPU for about $450. Speeds are expected to be 5GHz with few cores active, and 4.7GHz with all active! Very fast.

     

    Seems they are going to solder the IHS, fixing a big problem with CPUs from past generations that used lousy thermal compound and resulted in poor heat transference, and therefore CPU throttling under load. 


More threads are not nearly as good as more cores, so an 8 "real core" CPU is going to be quite a bit faster than a 6 "real core" CPU, hyperthreaded or not. In fact, that article implies that hyperthreading is a marginal performance boost that has been overblown by Intel, but convincing the public may be a different matter.

     

Seems the new 8-core CPUs might be appealing to those looking to stick with Intel who want a significant upgrade from a 4-core (or smaller) server and were underwhelmed by the 6-core options. They should have an iGPU and be able to do hardware transcodes, but that may need confirming once the product is officially announced. On a 4-core server, 1 core is effectively reserved for unRAID, so only 3 are available for VMs. With an 8-core server, 7 are available for VMs. Some of the 5-series Xeons people have been looking at with high core counts have much slower cores, in the 2-3GHz range. These are ~2x the speed. So an 8-core 9700K might be about as powerful as a 16-core Xeon 5 series. Also, many apps are not multi-threaded, so the faster cores can make a big performance difference for such apps.
     

I would mention I am very happy with my X-series i9-7920X. No iGPU, but 12 cores / 24 threads. With Silicon Lottery delidding, it runs all 12 cores at 4.5GHz. It's a bit pricier, but I expect a very long life from this CPU. That, and the fact that the i9-7960X (16 core) and i9-7980XE (18 core) versions are just a CPU upgrade away for a 33-50% boost in a few years when used ones appear on eBay. But my 7920X is fast enough to do 4K transcodes in software, so the loss of iGPU / Quick Sync is not painful.

     

    Lots of exciting options for server upgrades these days!

  18. 39 minutes ago, mbc0 said:

    Hi, I was lucky enough to pick up a Barracuda Pro 6TB for £115 and have also bought a Barracuda 6TB (non-pro) 

     

    The largest drive in my 48TB array is currently 3TB so I have to make one of them Parity

     

I have 10gbE network speeds so would benefit from the read speeds if I used the Pro as storage, but am leaning towards thinking that I would get better overall performance if I used the Pro for Parity? So I am interested in any opinions?

     

    Many Thanks

     

Read speeds are gated by the data disk's speed and the network speed. So you'd get marginally faster reads from faster data drives.

     

Write speeds are gated by the slower of the parity disk's write speed and the data disk's write speed. So if your parity is slow, it will drag down the write performance of every data disk.

     

Generally people tend to give a faster parity drive higher priority than faster data disks, but it depends on your use case.
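If you want to compare the two drives' raw sequential read speeds before deciding, a quick sketch (device names are placeholders):

    hdparm -t /dev/sdb     # Barracuda Pro
    hdparm -t /dev/sdc     # Barracuda (non-Pro)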

  19. 4 minutes ago, SheepContoller said:

    Hi there,

     

yesterday, I was greeted with the red X of despair; one of my WD Red 3TB drives had been disabled. So I went and bought a replacement, but since that wasn't precleared and the SMART looked okay-ish to me, I re-activated the drive and the rebuild went flawlessly ... so is the drive really bad, or could it have been something else?

     

    Report attached, I run two of these HDDs, marked the reports.

    I know the drives are collecting age and rust, but is it reasonable to keep them running?

     

    Obviously, there is a parity drive and that's pretty new, 1200+ hours. Preclear on the replacement is running right now.

     

    Thanks

    failed - WDC_WD30EFRX-68EUZN0_WD-WMC4N0816310-20180725-0033.txt

    other - WDC_WD30EFRX-68AX9N0_WD-WMC1T0041512-20180725-0034.txt

     

    The drives are not failing.

     

    The red X is often due to bad or loose cabling. Especially common when you are opening a server to add or replace a drive, and touch the delicate wiring of some other drive(s), nudging a cable just enough to cause a marginal connection.

     

These are not spring chickens. The one called "failed" has been powered on for 3.7 years. The one called "other" has been powered on for 4.7 years.
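If you want to see those figures (and the attributes most worth watching on an aging drive) yourself, a quick sketch against one of the drives (the device name is a placeholder):

    smartctl -A /dev/sdX | grep -Ei 'Power_On_Hours|Reallocated_Sector|Current_Pending|UDMA_CRC'
    # Power_On_Hours / 8760 gives years of runtime; a rising UDMA_CRC_Error_Count usually points at cabling rather than the drive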

  20. On 7/23/2018 at 2:00 AM, jmgc97 said:

    the supermicro seems to be a good mobo.

     

    https://www.supermicro.com/products/motherboard/xeon/c600/x9drl-7f.cfm

     

Let us know what you have decided. I am torn between a single i7 with 6 cores or a dual Xeon where I can get 12 cores. Planning on running a Windows Server VM as a file/backup server for my home.

     

All cores are not created equal. You are comparing a new, fast 6-core i7 (presumably with an iGPU and the ability to do hardware transcodes with Plex) against an older 12-core Xeon running at a much lower clock speed. I'd guess the 6-core could offer very similar power, maybe even more.

     

    Not sure that a file/backup server needs a ton of horsepower, but I guess if it is doing compression it might benefit.

  21. 3 hours ago, tr0910 said:

    I guess I'll have to try pass through graphics someday.  Since my laptop lives on my desk, and gaming is not needed, I haven't felt the need so far.  Haven't you had occasions where not having a laptop handy and being locked out of your VM has been a problem?  I find that VMs really need to be put to sleep to reduce server power usage.  But once they are asleep they can't be woken up with RDP. 

     

    I do have a Surface handy that I use when the server needs to come down or VM manually restarted. But I can go months without server or VM coming down. 

     

With unRAID, you can suspend and resume a VM using the Web GUI. Easily done from a phone or tablet.
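The same can be done from the command line if that's easier to script (a sketch; the VM name is a placeholder):

    virsh suspend "Windows 10"     # pauses the VM; its state stays in RAM
    virsh resume "Windows 10"      # picks up right where it left off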

     

    I have not made much effort to reduce electrical draw. I figure one server is going to be less than a server + a powerful desktop. I'm not a big laptop fan for frequent home use.

  22. 20 hours ago, tr0910 said:

    I have never installed pass through graphics.  The need to have a monitor attached to the server that is wailing like a banshee totally turns me off.  The server belongs deep underground, buried in the bowels of the earth, where it can never be heard.  The connection to it is by Ethernet cable, and RDP gives very good performance for Win clients.  For remote connections to VM's in other locations, Teamviewer is what I use.  VNC is great for terminal access to linux VM's but sluggish for graphics.  I tried SplashTop desktop, but was underwhelmed.  I need to try No Machine and see what I am missing.  Overall Microsoft RDP between Windows VMs and a Windows laptop is so good and so easy that it's difficult to recommend other solutions.

     

    I'm guessing you want to be as far away as possible from the beast and connect with your laptop.  If you need faster than gigabit connection, it is a bit more challenging.  Others have wired long distance cables for monitor, mouse and keyboard, so it is possible, but not easy or simple.

     

    My goal with my VM was to replace my daily driver computer. If I have to boot a computer in order to run my VM, I have defeated the purpose.

     

    Plus the experience with video, etc. is sub-par IMO. And my experience with keyboard/mouse was not stellar. It was jerkier than normal and I had some unexpected disconnects. YMMV

     

For graphics-intensive gaming, passthrough is absolutely necessary.

     

    I agree that the server should be not seen or heard. Mine lives in my unfinished basement, and with a longer HDMI cable and USB extension cable, I feed them into my study and hook up to monitor and KVM. Works great for me. If the server is not within reach (> ~100ft) or it is not practical to run the extension cable(s), there are other options for using RJ-45 cables to pass video. But they are pricey, run very hot, and maybe not a great option. And this is not Ethernet. You need a dedicated point to point RJ-45 cable to make it work.

     

    Here are the cables I used:

    - HDMI

    - USB

     

If you have a less-often-used VM, and a machine to access it from (VM or physical), NoMachine or similar should be fine. NoMachine is better with graphics than SplashTop, and also better supported on Linux.

     

    And if your daily driver is a physical machine, and you have no desire to replace it with a VM, then this becomes a perfectly acceptable place to access your VMs.

     

    But I have to say I LOVE having my daily machine be a VM. I am sharing the considerable horsepower of my server with Plex and Windows (my two biggest CPU intensive uses). And no physical machine (with server in the basement) means absolutely zero noise in my study. And no heat either. With a high powered desktop, I had a hard time keeping the study cool in the summer, but no more. And as I said, this is a pure native Windows experience.