Everything posted by Jacko_

  1. So I set the share's cache setting to Prefer and selected the NVMe cache pool I want to use, but mover didn't do a great deal; I'm not sure what it's done, if anything.

May 18 23:35:12 Tower emhttpd: shcmd (299): /usr/local/sbin/mover |& logger &
May 18 23:35:12 Tower root: mover: started
May 18 23:35:12 Tower root: mover: finished

What am I missing here? Am I not understanding how mover works, or is it doing what it should? Maybe I should set it to Only, then it will move everything, then I can reset to Prefer once it's all on the NVMe...
  2. Thanks, I'm trying this now. Does Unraid log the mover process, and when it's completed? I've looked at syslog, but it states the vdisks already exist. Should I delete these from the SSD cache pool, then enable Prefer with the NVMe cache pool I have before starting up KVM again? I want these vdisks on the NVMe for more performance, and I need to free up space on the SSD. Thanks

Logs:
May 18 23:25:02 Tower emhttpd: shcmd (280): /usr/local/sbin/mover &> /dev/null &
May 18 23:28:39 Tower emhttpd: shcmd (282): /usr/local/sbin/update_cron
May 18 23:28:49 Tower emhttpd: shcmd (283): /usr/local/sbin/mover |& logger &
May 18 23:28:49 Tower root: mover: started
May 18 23:28:49 Tower move: move: file /mnt/cache/domains/Windows 10/vdisk1.img
May 18 23:28:49 Tower move: move_object: /mnt/cache/domains/Windows 10/vdisk1.img File exists
May 18 23:28:49 Tower move: move: file /mnt/cache/domains/Gaming VM/vdisk1.img
May 18 23:28:49 Tower move: move_object: /mnt/cache/domains/Gaming VM/vdisk1.img File exists
May 18 23:28:49 Tower move: move: file /mnt/cache/domains/Ubuntu/vdisk1.img
May 18 23:28:49 Tower move: move_object: /mnt/cache/domains/Ubuntu/vdisk1.img File exists
May 18 23:28:49 Tower root: mover: finished
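For reference, the "File exists" lines mean mover won't overwrite a file that already exists at the destination, so the stale copy has to be checked and removed by hand first. A minimal sketch of that check, using temp directories as stand-ins for the two pools (on a real system the paths would be /mnt/cache and the new NVMe pool; everything here is made up for illustration):

```shell
#!/bin/sh
# Stand-ins for the old SSD pool and the new NVMe pool (hypothetical
# paths; substitute the real /mnt/<pool>/domains paths on the server).
old=$(mktemp -d)
new=$(mktemp -d)
mkdir -p "$old/domains/Ubuntu" "$new/domains/Ubuntu"
printf 'old copy'  > "$old/domains/Ubuntu/vdisk1.img"
printf 'new copy!' > "$new/domains/Ubuntu/vdisk1.img"

# mover skips any file whose destination already exists, so check
# whether the two copies match before deleting either one:
if cmp -s "$old/domains/Ubuntu/vdisk1.img" "$new/domains/Ubuntu/vdisk1.img"; then
    echo "identical: safe to delete either copy"
else
    echo "differ: compare timestamps and sizes before deleting"
fi

rm -rf "$old" "$new"
```

With the stale duplicate gone, a rerun of mover should move the remaining copy according to the share's cache setting.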
  3. Is it possible to manually move my OS VMs from an old SSD pool to a new NVMe pool? I've tried Prefer with the NVMe pool selected and the VM service turned off, then ran mover, but this doesn't seem to have worked. Just wondering if manually moving with the CLI is OK, and what command I should run to do this; I'm not even sure if this would work given the folder mapping. I've run out of space on the SSD, so I really need to shift it all to the new NVMe SSD. Thanks
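In case it helps anyone searching later, this is the sort of manual move I had in mind: stop the VM service, copy the vdisk preserving sparseness, verify, then delete the original. Sketched below with temp directories standing in for the two pools (the real paths would be under /mnt/<pool>/domains; treat this as an untested outline, not gospel):

```shell
#!/bin/sh
# Temp dirs stand in for the old and new pools so this can be tried
# safely; substitute the real /mnt/<pool>/domains paths on the server.
src=$(mktemp -d)
dst=$(mktemp -d)
mkdir -p "$src/domains/Gaming VM" "$dst/domains/Gaming VM"
truncate -s 1M "$src/domains/Gaming VM/vdisk1.img"   # sparse, like a vdisk

# Copy keeping holes sparse, verify byte-for-byte, then remove the source:
cp --sparse=always "$src/domains/Gaming VM/vdisk1.img" "$dst/domains/Gaming VM/"
cmp "$src/domains/Gaming VM/vdisk1.img" "$dst/domains/Gaming VM/vdisk1.img" \
  && rm "$src/domains/Gaming VM/vdisk1.img" \
  && echo "moved ok"

rm -rf "$src" "$dst"
```

If the VM templates reference /mnt/user/domains/... rather than a pool-specific path, they should still resolve after the move, but it's worth double-checking each template before starting the VMs again.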
  4. Hi trurl, Thanks for this. With docker.img containing the downloaded .exe files, deleting it and starting Docker back up just recreated the file in full. I guess what I need to do is delete the Docker template that's writing into the image, but I'm not sure how. I'll do some searching and see if that's possible. Thanks
  5. I've written into docker.img by misconfiguring an app. Can anyone link to a dummies' guide on how to look inside the docker.img file and delete things from it? Currently the file is 21.5GB, and I'm hoping I can shrink it down, as my cache is at 80%. Thanks
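One thing I've since picked up (paths here are assumptions, so take it with a pinch of salt): docker.img is a fixed-size sparse file, so the size `ls -l` reports is its maximum, while `du`/`stat` show what's actually allocated. A throwaway sketch of the difference, using a temp file rather than the real image:

```shell
#!/bin/sh
# Throwaway sparse file standing in for docker.img (the real file is
# usually at /mnt/user/system/docker/docker.img -- an assumption here).
img=$(mktemp)
truncate -s 100M "$img"   # nominal size 100M, nothing allocated yet

nominal=$(stat -c %s "$img")                # what 'ls -l' reports
allocated=$(( $(stat -c %b "$img") * 512 )) # what 'du' reports
echo "nominal=$nominal bytes, allocated=$allocated bytes"

rm -f "$img"
```

If the allocated figure on the real file is also near the limit, the usual advice I've seen is to stop Docker, delete docker.img, and let the saved templates re-pull the containers, rather than trying to edit inside the image.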
  6. Well, after relocating the server to another room I had issues powering it back on. Not sure what's up with one of the DIMMs, but the board doesn't like one of them. It had booted initially with both sticks, but even testing the problematic DIMM on its own in each memory slot, the board was showing a memory error and would not get to POST.

What I didn't think about is the parity check. It was maybe 8 hours into the estimated 16 when I decided it would be a good time to relocate it, so I powered it down (gracefully). Once I'd got it back up again, I had to start the parity check all over again! Not sure if this was caused by me pulling drives and trying to troubleshoot the RAM issue, or if it's just what's expected if you don't pause the parity check prior to shutdown. Lesson learnt.

While the parity check was ticking away, I added some of the apps that Spaceinvader had suggested in a couple of videos from a year or so ago, had a general play around familiarising myself with the GUI, and got my APC UPS configured to report into Unraid with graceful shutdown on power loss. Took a bit of messing and reading up on PowerChute, but it's working great now.

I've added the fifth and final 12TB drive, which is being cleared at the moment. Tomorrow I'll add a couple of 240GB SSDs I have kicking around and configure them in RAID 1 for appdata, Docker, Plex metadata etc., and I'll install one of the NVMe drives (for pass-through to a Windows VM) from the Synology NAS, now that it's rebuilt itself after I replaced one of the failing drives.

I then need to start transferring data from the Synology to Unraid, which I'm expecting I'll have to do via the terminal. I'm not familiar with that, so I need to figure it out. I've added one of the Synology shares to Unraid, so I suspect a simple enough command will get it copying over to one of the array shares, but I'm not sure how as yet.

Once everything is copied over to Unraid, it would be great to script an rsync backup from Unraid to Synology for things like appdata, the boot drive, VMs, containers and some select personal files, so I'll take a look at that in the future as well.
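The rsync idea at the end can be sketched roughly like this, with temp directories standing in for the real source and destination (e.g. /mnt/user/appdata and a mounted Synology share; both paths are assumptions):

```shell
#!/bin/sh
# Temp dirs stand in for the real source (e.g. /mnt/user/appdata) and
# destination (a mounted Synology share); both paths are assumptions.
src=$(mktemp -d)
dst=$(mktemp -d)
echo 'some config' > "$src/app.conf"

# -a preserves permissions/times and recurses; --delete mirrors
# deletions so the backup tracks the source exactly.
rsync -a --delete "$src/" "$dst/"
ls "$dst"

rm -rf "$src" "$dst"
```

Dropped into a script and scheduled (the User Scripts plugin can run it on a cron schedule), that's the backbone of the backup; note that the trailing slashes on the paths matter to rsync.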
  7. I added some old 1TB drives to the array, just testing whether hot-swap worked OK and to see if these slower drives reported their SATA speed correctly. They do (SATA II at 3Gb/s), so it's great news that the IcyDock backplane is completely passive with no logic and will support my new shiny SATA III drives at full speed.
  8. Well, I'm up and running. I have not added all the drives or any NVMe or SSD yet, but will do in a few days. I have currently installed Version: 6.9.0-beta25. I did so on a 64GB SD card, as it was all I had available at the time; despite Unraid stating 32GB max, I wonder if the new code allows for larger USB keys?

I added 4 of my 5 12TB WD Elements drives, set 2 of them for parity and 2 for data, and checked the drive stats. I will need to check on the read/write performance, but the SATA version is showing as 3.2 at 6Gb/s, which is promising. If you recall, there was a concern that the IcyDocks would be a bottleneck, but that has given me hope that they will not be.

I'm a little confused as to why the parity drives are showing with warnings, which I suspect is due to the parity sync it is doing. Is this having to align the sectors on each of the parity drives to build up a reference table? The estimated time to complete is over 16 hours.

I have the larger cooler ready to return to the supplier, and I bought a replacement online with a much smaller heatsink and a single 120mm fan. Hopefully I can still keep the temps down. I have three fans at the rear of the IcyDocks, with a PCIe 120mm fan exhausting out the rear, a 120mm fan pulling air through the heatsink, and a 120mm pulling air out the rear of the case. The PSU is mounted in the top of the chassis and has a 120mm fan (the bearings are noisy, so I want to replace that one).
  9. I'm going to give returning it a shot once I've tried cleaning the prints off with some alcohol and boxing it back up with all of the bits. What I have done is bought another Noctua which uses a single 120mm fan, so I hope all of the mounting hardware is the same; that way opened bags can be swapped for unopened ones, and it saves me removing the motherboard to get access to the mounting bracket. So, all being well, that should be possible. With regard to the SATA performance, while I can't say for certain, I'm hoping the backplane has no logic that will hinder performance. If data is just passed straight through to the SATA cable from the backplane, then the performance is driven by the connecting controller for each individual drive. Possibly wishful thinking, but it will save me a lot of money and messing about if that's the case. I'm looking forward to powering on for testing and setting up.
  10. I went and collected the MB and CPU cooler today and spent an hour or so putting it together while I wait for the CPU and RAM, which should hopefully be with me before the week is out. Cabling is a nightmare, and I've done the best I can with it for now. It's not perfect, but TBH it'll do. My main concern now is the sheer size of the CPU cooler and the available space. I've mounted the brackets so the cooler is pulling air from the front to the rear. With it mounted this way, I think it will block the PCIe slot I was hoping to use for the GPU. I won't know until I can get the CPU installed and the cooler mounted, but it looks like it might be a problem. Worst case, I could take an angle grinder to the fins. I think returning the cooler for a smaller model is out of the question now, as I've got my fingerprints all over the thing! Here are some shots nonetheless; any advice you have, please give it. The drives I've bought are WD Elements, which I have taken out of their housings. I'm aware of a 3.3V power issue with the SATA power connector, but these IcyDocks are old and each of them is powered via 3 x 4-pin Molex, so I don't think that will be an issue for me. I just hope the IcyDock backplane doesn't limit the transfer rate; I'm not sure if they were even rated for SATA II when I bought them many years ago.
  11. I was planning on using the CPU's iGPU for this, which I think will be more than up to the task. I'm not expecting any more than 2 transcodes at any one time. I was really into Unreal Tournament when it first came out, but I haven't really played any games to speak of since then. I used to enjoy a sim racer, but I can't think what it was called; perhaps iRacing. I've really only played a handful of times on a console since then as well, but enjoyed Forza and Gran Turismo. No idea what's good these days. I think I'll stick to a normal monitor for now (I do already have a 4K 32" monitor I use for work, so I've made some investment in that area already). Just need a wheel really to make the most of it. That said, I might have a problem, which I'll show you shortly.
  12. Perfect, thanks for the very useful info; I'll certainly take it on board. NVMe for the gaming VM: check, and thanks for explaining that for me, makes sense. 2 x SSDs in RAID 1 for the cache pool: check, I can make that happen. Plex on SSD: check, but does this need dedicated hardware, or can it be shared as part of the system cache pool?
  13. Great, that's what I was expecting to do. That makes sense; I'll make sure I don't do that. Where should my data reside?

- Gaming VM: I want the OS on an SSD of some sort; I've got both NVMe and SATA SSD available, so perhaps NVMe for some additional performance? Games to reside on HDD and be protected by parity.
- Test/dev VMs: on SSD(s).
- Plex metadata: SSD is fine I suspect, unless NVMe makes client thumbnail previews much snappier?
- Docker containers: I'm not sure still; I need to research this.
- Unraid cache: I guess this should ideally be on NVMe, though I might have to settle for SSD. Or I put the Plex metadata on SSD.
- Personal non-OS/application data: on the HDDs.

Is this about right? What should I consider for good performance? I can add more SSDs if needed. I'll have a total of 18 SATA ports, of which 15 will be connected to the IcyDocks. I can either mount 2.5" SSDs in the IcyDock with a cage adapter or just double-sided tape them internally to the chassis.
  14. Do I need a 2nd GPU if the CPU has its own? I was expecting to use the iGPU for Plex hardware transcoding, with a 1070 or whatever for gaming VM passthrough. Is this going to be a problem, do you know? Games-wise it'll mostly be car racing, but maybe some first-person shooters. TBH I've not played games for 15-20 years, so I've no idea what's what these days, haha. I'll more than likely use a dedicated SSD for the gaming VM, perhaps one of the NVMe drives I have. I haven't quite figured out where the Plex metadata should reside (ideally on something solid state), and I do also have a 240GB SSD not doing anything at the moment, so perhaps I'll use that.
  15. Initially directly to the server, but afterwards I would like it tucked away out of sight and accessed remotely, which is why I need to consider not just graphics pass-through but also USB and audio. I still need to make sure I fully understand what I need to do to make this work. OK, on that then: do I need to pass through both the iGPU and the dedicated PCIe GPU for the gaming VM? From looking around, I thought I would select the GPU for the specific VM. If I'm right in thinking that, then I would just pass through the PCIe GPU, and the iGPU can be used for transcoding and for all the other test/dev VMs. My rationale here is that someone can be transcoding a film while I play games, which hopefully should work OK. At the moment I transcode using a Celeron CPU in a Synology DS918+ and it can struggle, but that is only a 4-core/4-thread CPU. I think that if I dedicate 4 cores/8 threads to the gaming VM, with the remaining 4 cores/8 threads for Unraid, Plex, containers and test/dev VMs, it should work OK? I hope?!? Test/dev VMs can at least be shut down when not in use; I don't want to have to do that for the Docker containers if I can avoid it.
  16. Hi Zonediver, thanks for the reply. I don't think I was being clear at all. What I was trying to say is that I would want to use the iGPU for transcoding, with the discrete GPU passed through to a gaming VM. I'm just not sure if I'll actually do this, as I don't really get time for games and haven't had a gaming system or console for years; I like the idea of getting into a little sim racing, but it might never happen.

I have chosen the below hardware, but I'm struggling with an ETA on the motherboard, and I keep questioning whether this option is really the best for Unraid, Plex (I have Plex Pass), running maybe 10 or so Docker containers, some test/dev VMs, and even a gaming VM. Would a different CPU be better, do you think? I heard that sticking to Intel is best for general Unraid support, but these new AMD CPUs are compelling from a price/feature point of view. However, I would need an integrated GPU for transcoding, so that limits my CPU options to the Ryzen 3 and Ryzen 5, I believe. I haven't researched how these fare with transcoding vs Intel.

Here's what I've ordered so far for this build:

- Gigabyte Intel C246-WU4 Xeon server motherboard
- Intel Xeon E-2278G, S1151, Coffee Lake, 8 cores / 16 threads, 3.4GHz (5.0GHz turbo), 16MB cache, UHD Graphics P630, 80W
- 2 x Crucial DDR4 ECC UDIMM 16GB
- 1 x Noctua NH-D15 dual-radiator quiet CPU cooler
- Lian Li ATX case (reused from old build)
- 3 x IcyDock 5 x 3.5" hot-swap enclosures (reused from old build)
- 2 x Samsung 970 Evo NVMe 500GB M.2 (reused from old build)
- 1 x Crucial 2.5" 240GB SSD
- 5 x 12TB WD Elements drives (2 parity / 3 data)
- LSI 9240 in IT mode
- Silverstone 750W modular PSU (reused from old build)

It's very much overkill CPU-wise, but I'm hoping to really get a lot of VMs deployed, and I managed to get the CPU and RAM at a good price through work. I'm not sure on the GPU side of things, as I've not played games for a very long time, so I need to research that, and look into USB pass-through as well, which may become a problem.

Any comments/thoughts on this build and approach are welcomed; happy to be proved wrong here, I'm learning on the go.
  17. I believe both CPUs need to be the same generation. You'll also want to balance the memory across the CPUs as well.
  18. Hi all, I've put together a couple of mobo/CPU combos, but I'm just not sure they're the best bang for buck, or whether they could be made cheaper without too much compromise. Here's the chassis it's going into, where I would like to reuse the PSU and drive cages. Here's some info on that right here. Quick Sync is an advantage when hardware transcoding in Plex; I'm not sure that really matters if a graphics card such as the P2000 is being used, but I'm wondering if just using the CPU for transcoding, rather than a separate GPU for the workstation option, will be OK and net a bit of a saving in the build cost. I can always add a GPU at a later point in time if I feel I need it, I'm thinking. Does that make sense, or am I looking at this wrong? The 2618L is higher performing, but older tech. Power consumption looks to be the same. I have some 4TB drives from a Synology NAS I might use; I've still got to figure out the storage element a little, TBH, but that's a lot easier.
  19. If anyone has ideas/thoughts on a good middle ground for motherboard/CPU/GPU combos, I would love to hear them. I'm so far removed from consumer hardware these days I just haven't a clue what's what. I can get a board that supports 2 x M.2 and has a bunch of SATA ports (7 ports would be amazing, as I could then use a single 8-port LSI card to give me my 15 bays); with Unraid booting from USB into RAM, I should be golden with that. Support for a GPU card would be great as well, but perhaps I don't need one with the right CPU; I just don't know enough about it. Any advice is welcomed. Thanks.
  20. I'll take a look at your build. My idea is to stick to a known-working design where possible, as it should be a tried, tested and all-but-validated design that just works and gives me zero bother. I just want something that works, that I can upgrade as I need to and not worry about for a good few years.
  21. I'm planning on a single 16TB drive; actually, maybe two of them for parity. I believe it's best to use the largest drive going, which makes adding new drives easier as you expand. I'm not sure what to do with the drives in the Synology; perhaps keep them where they are so I can use it as a DVR for CCTV. I was thinking of using the 1TB drives in the Synology, but I've not decided yet. I would still need some space for copying the data I already have over to unRAID, so I've just got to figure that out in my head. So not sure really. Perhaps use 8TB drives or something in the Unraid box, whatever the sweet spot for £ per TB is these days.
  22. Hi all, First time creating an account, but I have been lurking in the background a little doing some research. The back story is that I used FreeNAS around 10 years ago but then stopped, and I very recently bought a Synology NAS to store some media and use as a Plex server. It's doing OK with direct-play x264, but it's struggling with some codecs, and I'm looking to invest in a 4K TV now they are much cheaper.

Current Synology DS918+:

- 4 x 4TB IronWolf, RAID 5
- 2 x Samsung Evo 500GB NVMe
- 2 x 8GB DDR3 RAM

I'm around 50% capacity at the moment, so that's not a big driver, but being able to transcode video to all devices is important. At the moment, with some containers running, the CPU is struggling, which is the biggest problem; to the point where the Plex client (Fire Stick and Fire Stick Pro) will crash and restart. It seems to happen mostly with the animated films my son watches.

I have this old FreeNAS server which I've not powered on for years. It developed a problem with one of the controllers, and I got busy (work, kids, houses, etc.) and never did get around to sorting it. I would like to reuse some of its components, but I'm really out of sync with modern-day computer hardware, so I'm looking for advice on gotchas I should be aware of:

- Lian Li full-size case with 15 x 5.25" front bays
- Asus M2N32 WS Pro motherboard (I believe it's full ATX size)
- Unknown AMD AM2 CPU and a massive fan
- 4 x 2GB DDR2 RAM
- 3 x IcyDock hot-swap 5-drive enclosures, giving 15 hot-swap bays
- 5 x 500GB SATA Seagate drives
- 5 x 1TB SATA WD Green drives
- 2 x Adaptec 6-port SATA PCI-X controllers (one is a Dell rebrand, one isn't)
- Silverstone modular PSU; can't remember the rating, but maybe around 400W

I'm using 5 ports on each controller, plus 5 off the mainboard. I would love to reuse as much hardware as possible, but I have some concerns/questions about doing so. Firstly, I need to understand the power requirements of modern motherboards (the M2N32 is 24-pin and 8-pin): whether the PSU is man enough for all the drives, motherboard, GPU etc., and whether I can reuse the SATA cables and drive bays and take advantage of SATA III. I can't see why the cables should be an issue, and the drive bays have a passive backplane as far as I can remember and see, so I don't think that'll be a problem either. Can you think of any issues with my thoughts here?

I'm looking at a motherboard, an LSI controller in IT mode (or two of them, depending on the number of ports on the motherboard), CPU, RAM and a GPU. I'm probably looking at an Nvidia P2000, but I'm not sure if I can use an HP-branded version or not, or if a GPU is really required if a decent CPU is used; I still need to research this bit, and whether Unraid can fully support hardware transcoding using a separate GPU. I think I would like to stick with a more enterprise-grade motherboard, but I'm just not sure what I should be looking at. It doesn't need to be Skylake/Cascade Lake levels of performance, but having some CPU headroom for additional things would be handy; Docker containers, some VMs, etc. without impacting transcoding would be the ideal, but without going mental on the cost.

I know this is long-winded, but has anyone got any advice on a build that could support what I'm after; let's say 2 or so transcodes while still supporting other workloads at the same time? Should I look at the Intel 1151 socket, perhaps a Kaby Lake? A single 16GB RAM stick? NVMe 500GB (from the Synology) for Plex / unRAID cache? P2000 GPU? 2 x LSI PCIe controllers? A 16GB or so USB key for unRAID (am I right in thinking USB 2.0 is better than USB 3.0?). What other things do I need to consider as part of this? Should I just try and use what I've got already? It's not exactly cutting edge, but it was very good back in the day when I built it. Thanks for your help on this in advance. Looking forward to getting this going again soon. Some photos of the old rig