Everything posted by SSD

  1. No - it said I was on the latest and I believed it. I just forced the update and it's working now! Thanks @CHBMB and @hernandito!
  2. Hmmm ... I tried this and it's still not working. I get this error: nzbToMedia: Could not start /usr/bin/python3: No such file or directory
  3. Did some Googling and found a suggestion to run the command ip route add default via ... which resolved the issue. Why is this necessary? Have I misconfigured something?

Before running the command above:

    root@merlin:~# route -n
    Kernel IP routing table
    Destination  Gateway  Genmask  Flags  Metric  Ref  Use  Iface
    ...          ...      ...      U      0       0    0    docker0
    ...          ...      ...      U      0       0    0    br0
    ...          ...      ...      U      0       0    0    virbr0

After running the command above:

    root@merlin:~# route -n
    Kernel IP routing table
    Destination  Gateway  Genmask  Flags  Metric  Ref  Use  Iface
    ...          ...      ...      UG     0       0    0    br0
    ...          ...      ...      U      0       0    0    docker0
    ...          ...      ...      U      0       0    0    br0
    ...          ...      ...      U      0       0    0    virbr0

-SSD
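A missing default route would explain exactly these symptoms: the LAN works (covered by the per-interface routes) but anything off-subnet has nowhere to go. A quick way to script the check - a sketch, assuming a POSIX shell with awk; the gateway address is a placeholder for your router's LAN IP:

```shell
# Hypothetical helper: reads `route -n` style output on stdin and succeeds
# only if a default route (destination 0.0.0.0 with the G flag) is present.
check_default_route() {
  awk '$1 == "0.0.0.0" && index($4, "G") { found = 1 } END { exit !found }'
}

# On a live box you might then do (gateway address is a placeholder):
#   route -n | check_default_route || ip route add default via <your-gateway-ip>
```

Normally DHCP (or the static config in Network Settings) installs this route for you, which is why having to add it by hand suggests a misconfiguration rather than a one-off fix.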
  4. Hey all - I've been away for a bit, but I'm trying to update my server from 6.4.1 to 6.7.0. It boots fine, but the server cannot access the Internet. It does see the local network. Attempts to update plugins and dockers fail because the server is unable to access the Internet. When I start my Windows VM, the Windows VM CAN see the Internet - it works fine. Any help greatly appreciated! I have attached my diagnostics. Thanks for the help!! -SSD
  5. An option is to hook up the 4th 5in3 externally. You just need to run the cables out the back of the case (including a power pigtail). Depending on where the server is stored, the aesthetics might be acceptable.
  6. Looks like it would make for a good unRAID server. The 8G of RAM may hold you back slightly if you want to run VMs. You need to dedicate RAM to each VM, and I suggest leaving at least 4G for unRAID. That would mean 4G for your VM. If you need more, you can likely upgrade your RAM. You'll get different answers on this one. I personally upgraded recently from an ECC build to a non-ECC-capable build and just tested the heck out of the memory. Never had a problem. It would not be a deal breaker for me. Yes - you should be able to. I would recommend the Plex Docker, and no passthrough is needed. The GPU can do hardware transcoding. I would suggest an SSD shared for cache, Plex, and the VM. You don't need a dedicated disk for each.
  7. Even 40 to 35 is a big drop for putting cold water on top - maybe if it was in a metal pan. I wouldn't do that BTW; it could cause condensation issues.
  8. A couple of things make me doubt this experience. A drive is not going to suddenly start running very hot like that. And second, a drive's heat dissipates mainly from the bottom of the drive, so cold water on top would likely have little effect - certainly not taking it down from 55C to 35C. It's not clear whether you shucked the drive to use it in unRAID. If you didn't, a high temp might be expected. But again, a cup of cold water on top is not likely to help. If the drive was failing, though, returning it for replacement is the right action.
  9. Looks like @bonienl is being paid for his services, which makes sense given that he developed and maintains the GUI (as far as I know). And your comment may be hitting him in the pocketbook!

Used to be that a license came with 2 keys. This was before the easy key replacement program, and Tom's intent for the second key was to be a backup (although there were no restrictions on using it for a second server). But there was a reworking of the licensing structure, and with the key replacement feature in place, the second key is no longer provided (I think the license cost went down - but not sure). So generally you need one key per server. You pay the price (just as you would for Windows). It is a fair price IMO.

Offering a second key at a discount might encourage more people to buy extra keys, which might actually work to LimeTech's advantage. But that's up to them. I will say that LimeTech does not consistently monitor the forum for these types of questions, so you'd not hear back from them unless you send an email. And discussions of the license fee and alternatives may fall on deaf ears, as they have been debated a great deal over the years. At this point, I think I am correct in stating that LimeTech is not entertaining other options.
  10. Seems very doable. It depends on your CPU, but I assume it is pretty good considering the GPU. I did something similar a couple of years back with a 4-core Xeon. You would need to consider how to migrate the physical machine install to the VM. You might look at something like Acronis, which can directly create a VM image.
  11. I did a server update and was able to move my Windows VM over, and it ran perfectly. But the VM was set up on the old 4 core server, and I wanted to update it for my new 12 core processor. So I created a VM on the new server, looked at its XML concerning the CPU config (topology, cores, etc.), and used that to manually edit my real VM's XML. It worked fine, but afterwards Office 2016 complained and wanted me to re-register. Because my licenses were OEM and tied to the machine, I had issues. Windows, also an OEM license, did not complain. I was thinking that a VM would completely shield me from these types of issues, since it would appear to use the same motherboard, chipset, etc. That seems true if you don't touch its config, but you can run into licensing issues when upgrading a server and reconfiguring the VM to match your new CPU. Just FYI.
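For reference, the part of the domain XML being hand-edited here is the CPU section. A hypothetical fragment (values invented for a 1-socket, 4-core, hyperthreaded layout - adjust to your own CPU):

```xml
<vcpu placement='static'>8</vcpu>
<cpu mode='host-passthrough'>
  <!-- topology the guest sees; sockets x cores x threads must equal the vcpu count -->
  <topology sockets='1' cores='4' threads='2'/>
</cpu>
```

Changing this topology (or having host-passthrough expose a different CPU model) is exactly the kind of "hardware change" that OEM activation checks can notice.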
  12. @Alfie798 / @Frank1940 - The requirement (or at least strong recommendation) if you are going to run VMs is to have one core (both threads if hyperthreading is enabled) reserved for unRAID. The remaining cores can be assigned to VMs. Not doing this has resulted in unRAID being "starved", which crashes the server. Of course this was determined a long time ago, and I'm not sure anyone has made a serious effort to determine whether it is still a real concern.

It is not necessary to dedicate cores to VMs. So you could have two VMs each sharing 3 cores. Of course the performance may be gated if one is doing processor-intensive tasks. I've done this with a VM and Plex (two heavy CPU tasks), and ran into trouble where the VM would become unresponsive when Plex was doing heavy transcodes. I had success dedicating 1 core to each, and then allowing the other cores to be shared. With a quad core CPU, that would mean one dedicated to each and 1 shared. With a hexcore, it would mean one dedicated to each and 3 shared. (With a 12 core, 1 dedicated to each and 9 shared.) Sharing the cores lets the power be used by the process in need, rather than sitting idle "just in case" it is needed by another. The single dedicated cores ensure no one is completely starved.

I'd definitely prefer the i3. It has a built-in iGPU, so a standalone video card may not be important (depending on what kind of video performance you need from your VM). The iGPU can be passed through if desired (never done it myself, so you might want to confirm, but 99% sure). If you want to step up to the hexcore with iGPU, I'd do that vs. adding the video card - again, unless you need the faster video card. Many iGPUs can do "Quick Sync", which is an excellent feature if transcoding video is in your requirements.

The 4 core i3 will have enough horsepower to run a VM very respectably. It would have 3 3.6GHz cores for the VM; with the Pentium, you'd have only 1 3.8GHz core. Quite a difference. With a hexcore (e.g., i5-8600K), you'd have 5 3.6GHz (4.3GHz turbo) cores for the VM. But unless you are doing processing-intensive tasks, 3 cores should be quite acceptable. If you wanted to run two gaming VMs off the same server, though, I'd definitely look at the hexcore.
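The dedicated-plus-shared scheme described above can be expressed in a VM's XML with `<cputune>` pinning. A hypothetical 6-core host example (core 0 left for unRAID, core 1 dedicated to this VM, cores 2-4 floating as the shared pool):

```xml
<vcpu placement='static'>4</vcpu>
<cputune>
  <vcpupin vcpu='0' cpuset='1'/>    <!-- dedicated host core -->
  <vcpupin vcpu='1' cpuset='2-4'/>  <!-- these three float across the shared cores -->
  <vcpupin vcpu='2' cpuset='2-4'/>
  <vcpupin vcpu='3' cpuset='2-4'/>
</cputune>
```

I believe the unRAID VM settings page generates equivalent pinning from its CPU checkboxes; the XML view is just where you can see (and fine-tune) the result.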
  13. Did you buy a used 6TB disk? I would not suggest buying a used drive unless it is from someone you trust very much and you really know what to look for in terms of signs of disk issues. If I did wind up with a used drive, I would test it hard before thinking about putting it in service. Even if it's under warranty, I'd pass - warranty returns often result in getting refurbs, which tend to fail early. Much better to get a good new drive out of the gate, which will live a long lifetime if treated well. If it fails in early testing, it can be returned to the place I bought it for a replacement new drive. Much better than a refurb. Note also that not all used drives are under warranty. If a disk has been shucked from an external, even if it is relatively new, it may not be under warranty. I'm not against used gear in general - I would buy used CPUs, disk controllers, video cards, drive cages, and even motherboards from good sellers on eBay. But hard disks I want new.
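For "test it hard before putting it in service", a sketch of one common burn-in plan - shown here with the commands echoed rather than executed, since badblocks -w is destructive and would erase the disk; /dev/sdX is a placeholder:

```shell
DISK=${DISK:-/dev/sdX}            # placeholder -- set to the real device
run() { echo "would run: $*"; }   # swap the echo for real execution when ready

run smartctl -t long "$DISK"      # SMART extended self-test first
run badblocks -wsv "$DISK"        # destructive 4-pattern write/verify of every sector
run smartctl -A "$DISK"           # then re-check reallocated/pending sector counts
```

If the reallocated or pending sector counts move during this, send the drive back while it's still the seller's problem.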
  14. Good deal. Hope it is smooth sailing ahead. The community is here if you have questions or problems.
  15. I'd suggest starting a parity check and monitoring the disk temps for at least 30-45 mins. If they stay in the low 40s, you're good. If they keep getting hotter with time and start to go over 46 or 47C, I'd stop the parity check. I think you'd have better airflow / cooling with at least one exhaust fan. The fans on the drive cages bring in the air, and the PSU and an exhaust fan would help force that air out, making it easier for the intake fans to bring in more fresh air. While it can be good to have a little positive pressure in the case, 3 in and 1 out might be too much. But run your temperature tests and see.
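If you'd rather watch the temps from a shell during the parity check than refresh the GUI, something like this works (a sketch assuming smartmontools is installed; attribute 194 is the usual Temperature_Celsius, and the raw value is the 10th column of `smartctl -A` output):

```shell
# Pulls the raw temperature out of `smartctl -A` output fed on stdin.
drive_temp() {
  awk '$1 == "194" { print $10 }'
}

# Hypothetical polling loop while the parity check runs (device names are examples):
#   while sleep 300; do
#     for d in /dev/sd[b-e]; do echo "$d: $(smartctl -A $d | drive_temp) C"; done
#   done
```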
  16. @BillR - Maybe with the heavy "pull" of air from the case front (over the drives), there will be positive pressure inside the case that forces air out. The Newegg picture shows a fan bottom left (see below). Is that in place? It looks like it's possible to put a fan on top of the case just next to the PSU, blowing upwards. That would be awesome if you are having heat issues. Maybe a thin-mount 92mm or 120mm would fit if you remove the horizontal support piece.
  17. Do you have any exhaust fan except the PSU? Looks like it might get warm in there.
  18. Love it! I've been looking for a small, portable build, and this looks pretty awesome. Looking forward to internal pictures as well.
  19. Interesting article about Intel's next generation of CPUs. It appears the new i7s are going to drop hyperthreading. They are anticipated to release an 8 core / 8 thread i7-9700K at about $350. An i9-9900K will be a similar 8 core / 16 thread CPU for about $450. Speeds are expected to be 5GHz with a few cores active, and 4.7GHz with all active! Very fast. It seems they are going to solder the IHS, fixing a big problem with past generations that used lousy thermal compound, which resulted in poor heat transfer and therefore CPU throttling under load.

More threads is not nearly as good as more cores - so an 8 "real core" CPU is going to be quite a bit faster than a 6 "real core" CPU, hyperthreaded or not. In fact the article implies that hyperthreading is a marginal performance boost that has been overblown by Intel, but convincing the public may be a different matter.

Seems the new 8 core CPUs might appeal to those looking to stick with Intel who want a significant upgrade from a 4 core (or smaller) server and were underwhelmed by the 6 core options. They should have an iGPU and be able to do hardware transcodes, but that may need confirming once the product is officially announced. On a 4 core server, 1 core is effectively reserved for unRAID, so only 3 are available for VMs. With an 8 core server, 7 are available for VMs. Some of the 5-series Xeons people have been looking at with high core counts have much slower cores - in the 2-3GHz range. These are ~2x the speed, so an 8 core 9700K might be as powerful as a 16 core Xeon 5 series. Also, many apps are not multi-threaded, so the faster cores can make a big performance difference for such apps.

I would mention I am very happy with my X series i9-7920x. No iGPU, but 12 cores / 24 threads. With Silicon Lottery delidding, it runs all 12 cores at 4.5GHz. It's a bit pricier, but I expect a very long life from this CPU. That, and the fact that the i9-7960x (16 core) and i9-7980xe (18 core) versions are just a CPU upgrade away for a 33-50% boost in a few years when used ones appear on eBay. My 7920x is fast enough to do 4k transcodes in software, so the loss of iGPU / Quick Sync is not painful. Lots of exciting options for server upgrades these days!
  20. Read speeds are gated by the data disk's speed and the network speed, so you'd get only marginally faster reads from faster data drives. Write speeds are gated by the slower of the parity disk write speed and the data disk write speed, so if your parity is slow, it will drag down write performance on every data disk. People generally give a fast parity disk higher priority than a fast data disk, but it depends on your use case.
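As a toy illustration of that write-speed bottleneck (the MB/s numbers are invented): the array write rate is the smaller of the parity and data disk rates, which is why one slow parity disk caps every data disk.

```shell
# effective_write PARITY_MBS DATA_MBS -> prints the slower of the two (MB/s)
effective_write() {
  if [ "$1" -lt "$2" ]; then echo "$1"; else echo "$2"; fi
}

effective_write 90 160   # a slow old parity disk caps a fast new data disk at 90
effective_write 180 160  # with fast parity, the data disk's own 160 is the limit
```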
  21. The drives are not failing. The red X is often due to bad or loose cabling - especially common when you open a server to add or replace a drive and touch the delicate wiring of some other drive(s), nudging a cable just enough to cause a marginal connection. These are not spring chickens, though: the one called "failed" has been powered on for 3.7 years, and the one called "other" for 4.7 years.
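Those power-on figures come from SMART attribute 9 (Power_On_Hours). A quick way to turn the raw hours into years (8,760 hours per year; a sketch assuming smartmontools-style attribute output):

```shell
# Reads `smartctl -A` output on stdin, prints Power_On_Hours as years.
power_on_years() {
  awk '$1 == "9" { printf "%.1f\n", $10 / 8760 }'
}

# e.g.:  smartctl -A /dev/sdb | power_on_years
```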
  22. Look at NoMachine vs SplashTop.
  23. All cores are not created equal. You are comparing a new, fast 6 core i7 (presumably with an iGPU and the ability to do hardware transcodes with Plex) against an older 12 core Xeon running at a much lower clock speed. I'd guess the 6 core could offer very similar power, maybe even more. Not sure that a file/backup server needs a ton of horsepower, but I guess if it is doing compression it might benefit.
  24. I do have a Surface handy that I use when the server needs to come down or a VM needs a manual restart. But I can go months without the server or a VM coming down. With unRAID, you can suspend and resume a VM from the web GUI - easily done on a phone or tablet. I have not made much effort to reduce electrical draw; I figure one server is going to draw less than a server plus a powerful desktop. I'm not a big laptop fan for frequent home use.