Everything posted by Jcloud

  1. Clean cabling, nice job. I like your front-panel cable route the best; I'm not a fan of your SATA cables (not my style), but I'm an internet stranger, always a critic/glass-half-empty, and it's not my box -- so who cares, right? I'm not sure how everyone else does it, but I can give my general user share setup to give you some ideas. Sorry about the late advice - hope it's useful.

     /mnt/user/..
     ../media (my videos, music, PDFs, basically any file data I hardly edit, which I share simply by SMB)
     ../Games (my Steam library folder)
     ../BTsync (share folder for BitTorrent downloads, and a few personal temp download folders for VMs)
     ../ISOs (ISO files for VMs; folders for device drivers in Windows; install binaries for VMs and other systems)
     ../vmBackups (drop folder to keep backup copies of my VM disk files)

     The following two are for the Handbrake Docker -- if I don't have them, the Docker app does really weird things when I try to use other share names. Perhaps you'll want something similar for Plex, and a transcode directory on the cache pool?

     ../transcode
     ../mkout

     A single folder I want network access to, but which requires a password, so I can keep it sandboxed from the others:

     ../Private

     There are more, but those are for specific uses, and in some cases unused, so I'm omitting them here. If you're unsure, perhaps write it out on paper as an outline, just to get organized, and then set up the shares (( he says almost two months later )). Were you asking for a brand of SSD, or whether we recommend transcoding on an SSD? For the latter, the answer is yes. If the question was the former, I recommend Samsung EVO drives: good performance and a decent price point, usually. Good luck, and I hope you enjoy your new setup.
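     A quick sanity check, if you go with a layout like the one above: on unRAID each top-level folder under /mnt/user is a user share, so a simple listing shows the whole outline at once (the share names are the ones from my list; "tower" is just unRAID's default hostname):

       root@tower:~# ls -1 /mnt/user
       BTsync
       Games
       ISOs
       Private
       media
       mkout
       transcode
       vmBackups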
  2. Haven't seen it; perhaps it exists, but given the nature of the new Ryzen and Threadripper systems (some working in VM, some not; C-states and such), has the community started, or does it want to start, a spreadsheet/matrix of "KGB," or "Known Good Boards?" Perhaps the devs would know best -- maybe the Linux kernel functionality is now stable (in the context of this thread) and will probably just get better, in which case this is not needed. If the community is interested: is there a standard tool for this, and what parameters would we want to track (besides MB, CPU, BIOS, version)?
  3. At best you can have Windows not save the password; Windows will keep the password for as long as your Windows account is logged in. So if you logged out, or rebooted, the next session would require the password again. Perhaps there's something more at the Group Policy level, but that requires having the Pro version of Windows.
  4. In my case this was a construction company; they bought a new desktop from us and were hoping to have it transferred, as there were a few active jobs on it. The customer was ecstatic, claimed I saved her two months of data entry, and yeah, it probably did save the company $30K in software. The store charged her four hours of labor ($240); it was a steal for them.
  5. And that's exactly what I said to my boss, and the client. *mental face palm* -- with the boss. At the end of the day I'm paid to be someone's blunt hammer of technical can-do. Two weeks ago, I transferred an XP box to an Oracle VirtualBox VM, just to keep a client's billing software alive. Anyhow, you know it, and I'm rambling. Have a good day.
  6. Only the K (unlocked) models; Coffee Lake non-K models include an HSF, same as Skylake/Kaby Lake. You could also go a step further than this: there are OEM or "tray" CPUs which won't come with an HSF. However, if you buy a RETAIL CPU then it will come with an HSF, and a pretty box for the customer to look at on the shelf -- except for the K models, which will have the pretty package but no HSF.
  7. Are you referring to the Windows Update block on 7th-gen and higher hardware? If so, I've had to use this Windows update hack for work, multiple times.
  8. I do the same thing. I just use the default /mnt/cache/domains/vmFolderName/vdisk.img, install my OS, updates, drivers, and 3rd-party utilities -- the defaults I want; then shut it down and cp the image file to a backup folder on my array. If something happens to the VM, I have a clean image of the OS; same goes if something happens to my cache drive. It also makes doing 2-in-1 systems easy: just do the same thing, only WITHOUT the graphics card driver -- this becomes your template. Copy the template image to a new cache VM folder, assign a GPU and boot, then load the correct drivers for that GPU. Second VM, different CPU pinnings: another copy of the template image to another cache VM folder (or a different file name, whatever), assign the next GPU, yada yada ...
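     In shell form, that workflow looks something like this (a minimal sketch; the vmBackups share and the file/folder names are examples from my own layout, adjust to yours):

       # One-time: back up the freshly installed, driver-free template image.
       cp /mnt/cache/domains/vmFolderName/vdisk.img /mnt/user/vmBackups/vdisk-template.img

       # Per new VM: stamp out a copy, then assign a GPU in the VM settings
       # and install that card's drivers inside the guest.
       mkdir -p /mnt/cache/domains/newVM
       cp /mnt/user/vmBackups/vdisk-template.img /mnt/cache/domains/newVM/vdisk.img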
  9. Sure, make me your enabler. Seriously though, I'd say go for it, especially if you were already interested in this case. Functionally I'm very happy with my purchase; visually I still hate it -- looks like a cheap Iron Man knock-off, but I get the manufacturer's choices in their engineering/construction. The biggest (non-)issue was when I took off that clear manufacturing film, the stuff on plastic to keep the glossy finish from being scratched/smudged: it took the two Z's from the logo with it. So I have an "A A" case now. LOL All the bits that matter to me are good. I went with the tray-less models; I do and don't like the enclosures. As an enclosure, it has everything I want. What I don't like are two bits: one, the power button, or the blue LEDs, are kinda cheap in construction; and two, the drives sometimes need a nudge on the door to be fully seated in the connector and show up on the controller. The electronic switch is in the back of the bay and is pushed by a clear plastic push rod (that blue LED you see at the front) which also acts as a light-pipe for the LED; on one bay I found I had to press it "just right" to get it to turn on. As for the drive door: they used a stainless-steel leaf spring on the door which pushes against the drive, and in my case a different bay needs a slight nudge after closing to register (so I probably need to get a pair of pliers and just tweak it a bit). Overall I think I'm 50/50 on my enclosure choice, but would buy/try again. EDIT: 6/15/2018 -- In regards to the Icy Dock enclosures, I don't recommend them. I want to like them, and I bought a second one, but it too has a manufacturing defect: there's a bay where the on/off switch won't stay in the ON position. Given the MSRP of these Icy Dock products and their flaws, I wouldn't recommend them as a "must buy" sort of product; they would be worth considering on a fire-sale discount.
  10. Yeah, I'm reading Intel's documentation now. I just saw the PCI port on it, and I assumed it needed that for some reason (i.e. not an optional feature) -- although I was coming up blank as to what that reason would be.
  11. Now you have my attention, and blew my mind. Not on the PCIe slot? Time for me to go look at the specifications. I'd prefer to keep my slots for GPUs. Thanks.
  12. Exactly what this is. Six years on my last combo, although I was on my second mobo. I probably could have run my old system for another year, but Threadripper came out and boy did I have an itch that needed to be scratched! The SSDs are on the motherboard; I was planning on keeping the SAS card just for hard drives. I should redo the cabling for all my SATA cables; that's about when I got impatient with myself and just wanted it on. The system is on; my phone's flash washed it out.
  13. Most recent edit date: March 4th, 2018.

      My new build, much like the hydra, is one part old bits, with the hacked-off bits having grown back stronger and more wild. There's also the metaphorical comparison of unRAID itself (and what it offers the user) to the hydra: one head is HD protection and the array; another head is the Docker app server; another, KVM; another, NAS shares -- again split off from the array, its neck. And now this hydra has one more head: Threadripper. The metaphor just fits for me.

      OS at time of building: 6.4.1 Stable
      OS current: 6.7.2
      CPU: AMD Ryzen Threadripper 1950X
      Heatsink: Noctua NH-U14S TR4-SP3
      Motherboard: ASUS PRIME X399-A, BIOS 0407 (at build), running BIOS 1002
      RAM: 128GB HyperX Fury DDR4 2666MHz, (2x) HX426C16FBK4/64
      Case: AZZA Solano 1000R
      Drive Cage(s): (2x) ICY DOCK 5-bay 3.5" SATA hot-swap
      Power Supply: Antec 900W HCG
      SATA Expansion Card(s): LSI Internal SATA/SAS 9211-8i
      Parity Drive: 4TB Western Digital Gold WDC WD4002FYYZ-01B7CB1
      Array Disk1: 3TB Western Digital Red WDC WD30EFRX-68EUZN0
      Array Disk2: 3TB Western Digital Red WDC WD30EFRX-68EUZN0
      Array Disk3: 3TB Western Digital Red WDC WD30EFRX-68EUZN0
      Array Disk4: 4TB Western Digital Red WDC WD40EFRX-68N32N0
      Array Disk5: 4TB Western Digital Gold WDC WD4002FYYZ-01B7CB0
      Array Disk6: 4TB Western Digital Blue WDC WD40E31X
      Array Disk7: 3TB Western Digital Red
      Cache Drive0: 250GB Samsung 850 EVO SSD
      Cache Drive1: 120GB OCZ Vertex 460A SSD
      Cache Drive2: 500GB Samsung 850 EVO SSD
      Total Hard Drive Array Capacity: 24TB
      Total Cache Drive Array Capacity: 870GB (JBOD, not protected storage)

      Primary Use: To be my "Fort Kick-Ass." Gaming and mucking around with VMs; secondary protected data storage; NAS functionality; application server: notes (wiki), Handbrake, BT.
      Likes: A Linux distro powerful enough to do everything I wanted out of my computer, but easy enough that a relative Linux newb like me could use it.
      Dislikes: I'll have to come back to this one (even as a 2-year user). Yes, there can be, and often is, some limited VM functionality, but that's not unRAID's or the devs' fault -- just the state of Linux technology and microcode. The first time I tried to touch Linux was in '98; compared to what we have now . . . omg, NO complaints! LOL It's also more fun to accept the VM quirks as a hobbyist than to tank the server, especially with the gains in protected storage.
      Plugins Installed: Community Applications; Dynamix Active Streams, File Integrity, Local Master, Schedules, SSD TRIM, System Information, System Statistics; Fix Common Problems; Nerd Tools; Unassigned Devices; unBALANCE
      Docker Containers: CouchPotato, Sonarr, DokuWiki, Dolphin, dupeGuru, Handbrake, Netdata, RDP-Calibre, Transmission, Storj, QDirStat
      Future Plans: Going to get a Vive Pro and see whether it will work in the VM. The case was chosen for its nine external 5.25" bays, so as time progresses the plan is to add 1-2 more drive cages for a total of 15 drives. If I get it full, the SSDs will be moved down to the bottom area, where there is space for two more internal 5.25" bays. For port expansion I have the card, and my thought was to get an Intel RES2CV360 RAID expander. The documentation on the SAS card says it can support something like 64 devices but only has ports for 8, so I thought, why not this direction? Updated: HBA instead. No, seriously, if anyone knows this will not work, I'd like to hear it. Answered, thank you. The Intel device just looks like a SAS port-replicator board, and has nothing to do with RAID protocol.
      POWER USAGE:
      Edit1: Just got a power meter as of February ninth, so I now have values. These are instantaneous readings, not averages over time.
      Boot (peak): 161.3 W
      Idle (avg), VM off: 149-153 W
      Idle (avg), GPU VM on: 134-136 W
      Active (avg), running STEEP in a VM: 330-360 W
      Active (avg), VM idle (12 threads), 12-thread Handbrake transcode: 300-330 W
      Active (avg), STEEP plus the continuing Handbrake transcode: 450-475 W
      Light use (avg): ## To be filled in later ##

      Thank you, Lime-Tech developers and community developers, who put in their time and expertise. I've been a very satisfied user; you all do good work.

      EDIT0: Been lurking a lot more on the forums the last month or two, and came to the conclusion that I under-utilize my cache on user shares. Added a 500GB Samsung EVO to my cache pool, now reporting 870GB. Changed some user shares to include the cache pool, which I had previously excluded. Next task: finish the cabling; the drive cables are a mess and need to be redone. The next technical task is to try moving the LSI card back up in the PCIe slots and get it working; then source a GPU for player 2, since the previous card went to a friend -- the current thought is to use the old Radeon 6800 as a proof of concept, seeing how it hardly gets used. Still need to go shopping for a Kill A Watt.

      EDIT1: The new SAS HBA is installed, and another drive enclosure added. Upgraded the parity drive from a WD Blue to a WD Gold; once the new parity drive was swapped in, I added the old Blue drive to the array for more storage. Going from Blue to Gold cut about four hours off the parity check time. Also got a GTX 1050 to start testing a second VM, since that still seems to be an issue on Threadrippers, although the first boot with the simple GTX 1050 has been very positive (i.e., it's working).
  14. @SSD - Thank you, sir. I obviously haven't kept up on PCIe specs, but from the first graph at your URL, it's all right there; just follow the curve. It even suggests that when PCIe 4.x comes around, if the bandwidth needs of graphics remain constant, then x4 will suffice. However, I could see the caveat being future needs for VR, given the binocular viewpoint and mega-fast refresh rates.
  15. Do you have the link handy (I'd be interested in the light reading)? If not, I'll search your posts when I'm bored. Something like this came up at work this Friday, on the subject of the Valley benchmark, but a co-worker and I were dismissing it due to the age of the benchmark. I'm hypothesizing it's the same sort of phenomenon as single-, double-, and quad-ranked memory -- where the type of job could make one ranking perform better than another. But if that's truly the case, Linus's 7-in-1 type boxes and their performance make much more sense, in terms of how it was pulled off. I'd seen the SLI and CrossFire boards with two x16 slots, one running at 16x and the other at 8x; I always figured the second card in that configuration didn't require the bandwidth since it was a supplemental calculator and not pushing out video itself. Sorry OP, not trying to hijack your post.
  16. Is the USB printer using the same "USB hub" on the chipset as your USB thumb drive? I've had problems with VMs trying to use the same controller as that of my unRAID fob. Also, with the 3D printer: is it being passed as a USB device (the check box), or are you passing along the PCI address of the USB hub it's on? With my old system I had to pass my USB game controller the latter way for it to work properly. Of course that's where your IOMMU groups help or hurt you, and why people often mention the separate USB controller card. If you're not familiar with the second method, feel free to run, and copy/paste the output of, these three terminal commands, plus your IOMMU groups:

      lsusb
      lspci | grep USB
      readlink /sys/bus/usb/devices/usb*

      We can see if we can't set it up the second way. I got the method out of this thread. If you're more the "do-it-yourself'er," here is the most coherent/useful part of my personal notes on this. The basic idea is that you want an isolated USB port (or ports) in an IOMMU group; you get the bus number, then you get the PCI address for that USB port. Finally, you edit your XML file to add that PCI address to your VM. You then have to modify some parts of it to get it to work. The good thing is that you do not have to care about which bus and address it's supposed to have in the VM; you only need to find out the host PCI address. The parts you change are bus, slot and function. In your case it's 00:14.0. Let's break it down: 00 is the bus (you simply exchange the two numbers after the 0x); 14 is the slot (same method as above); 0 is the function (here it's also the same method as above). So in your case the full device tag would look like this:

      <hostdev mode='subsystem' type='pci' managed='yes'>
        <driver name='vfio'/>
        <source>
          <address domain='0x0000' bus='0x00' slot='0x14' function='0x0'/>
        </source>
      </hostdev>

      A worked example from my notes:

      root@localhost:~# lsusb
      Bus 003 Device 007: ID 045e:0084 Microsoft Corp. Basic Optical Mouse
      Bus 003 Device 006: ID 045e:07b9 Microsoft Corp.
      Bus 003 Device 005: ID 05e3:0610 Genesys Logic, Inc. 4-port hub
      Bus 003 Device 004: ID 046d:c21d Logitech, Inc. F310 Gamepad [XInput Mode]
      Bus 003 Device 003: ID 28de:1102
      Bus 003 Device 002: ID 174c:2074 ASMedia Technology Inc. ASM1074 High-Speed hub
      Bus 003 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub

      The line of interest has the form "Bus 00Y Device 002: ID 174c:2074 ASMedia Technology Inc. ASM1074 High-Speed hub," where Y is the USB bus number.

      root@localhost:~# readlink /sys/bus/usb/devices/usb*
      ../../../devices/pci0000:00/0000:00:1a.0/usb1
      ../../../devices/pci0000:00/0000:00:1d.0/usb2
      ../../../devices/pci0000:00/0000:00:1c.0/0000:05:00.0/usb3
      ../../../devices/pci0000:00/0000:00:1c.0/0000:05:00.0/usb4
      ../../../devices/pci0000:00/0000:00:1c.1/0000:06:00.0/usb5
      ../../../devices/pci0000:00/0000:00:1c.1/0000:06:00.0/usb6

      Each line has the form ../../../devices/pci0000:00/0000:00:1c.X/Domain:Bus:Slot.Function/usbY. Matching them up: X == 0 maps to usb3 and usb4 (PCI address 0000:05:00.0), and X == 1 maps to usb5 and usb6 (PCI address 0000:06:00.0).

      At least, that's been my experience with the VMs so far.
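      If it's useful, here's a small bash sketch that automates the readlink step above: it resolves each USB root hub's sysfs path and prints the PCI address of the controller it hangs off (the parent node in the resolved path). Plain sysfs; nothing unRAID-specific:

      # For each USB root hub, print the PCI address of its host controller.
      for u in /sys/bus/usb/devices/usb*; do
          ctrl=$(basename "$(dirname "$(readlink -f "$u")")")
          echo "$(basename "$u") -> $ctrl"   # e.g. usb3 -> 0000:05:00.0
      done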
  17. This thread is huge, and I haven't been following it, but to come full circle, @coppit brought the latest proposed PCI patch (from the Linux kernel folks) for Threadripper to my attention on another post. For those folks who would be interested in testing it on 6.4.0-unRAID, have a look here -- it's been compiled.
  18. You mentioned the whole raid1 enforcement... Here you go.
  19. And this becomes dependent on your main logic board and CPU combo accommodating enough PCIe lanes for the GPU(s). So, assuming two GPUs at x16, you'll need a CPU with at least 32 lanes; in the Intel world this means the K-series, the new X-series, or a Xeon chip. For AMD, Ryzen and TR, but these are still having bugs ironed out; as for other AMD series, you'd have to check, since I don't know either. (See the lane-budget sketch below.) As for RAM requirements, just provision accordingly: two VMs at 16GB each and you'll want something like 36GB+. Or aim for 64GB of system memory, which gives you several options on how to provision it out. Or start with 32GB, populating half the board and leaving the other half free so you can double up when the price comes down. You did mention future-proofing; this would be on the tail end of "maybe." However, for cost savings, something like a 3930K or newer used CPU (from a highly rated seller) and a new motherboard of the correct CPU socket (manufacturer warranty) might be the cost-effective entry? Or the Ryzens, once all the kinks are ironed out in the Linux kernel. As for the main board, it should have two x16 PCIe slots, spaced to accommodate the physical size of the graphics cards you're going to buy, and with both slots able to run at full x16 speed. Just two more cents.
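      The lane math, as a back-of-the-envelope sketch (the x8 HBA is an example I've added; swap in whatever else you run):

      # Rough PCIe lane budget for the CPU: two GPUs at x16, plus an x8 HBA.
      gpus=2; lanes_per_gpu=16; hba_lanes=8
      echo "GPU lanes:   $(( gpus * lanes_per_gpu ))"               # 32
      echo "Total lanes: $(( gpus * lanes_per_gpu + hba_lanes ))"   # 40 -> shop for a 40-lane CPU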
  20. Harnessing Heat -- I'll admit to BTC mining, and heavy Handbrake usage, during the winter seasons to supplement my heating when I was an apartment dweller.
  21. If you remove the fan, underneath it there's probably the actual fan manufacturer's name and fan model on the bottom -- common brands being Sunon or Dafron. Once you find the manufacturer and model it should be rather easy. However, whether or not you can get the fan without ordering from China or Hong Kong, that's a different matter. I'd go with the "stick a big fan on it" option, assuming you have the room with your PCIe cards.
  22. Old post is old, but I could see this as a handy resource; the link is dead now, though. I mention this as I was thinking about buying another copy for myself at work - this thread comes up as the first hit when one searches for "find USB GUID."
  23. VLC for the other PCs. Otherwise Raspberry Pi and Orange Pi with Kodi for my TVs - which I don't actually use that much.
  24. I just checked mine; sure enough, it's the same. Thinking to myself, "huh, well lookie there." It makes a bit of sense in the context of Threadripper's ugly patch. The ugly patch is all about the microcode being unable to recognize a specific reset command on the PCIe bus; this matters for, say, unplugging the device, which can be done according to the white-paper specification. There's also a sort of "soft unplug" that occurs with VMs and their associated hardware every time you start and (really just) stop a VM. So, while still ignorant of the details, I could see KVM/SeaBIOS/OVMF treating all of the hardware as ejectable devices. And perhaps Windows is picking up on that? I'd bet the fix to resolve this would be a registry hack, flipping the "can eject hardware" bit to zero.
  25. I ran Multiplicity for a while for a multiple-monitor, single-keyboard-and-mouse setup. Worked well, but not quite what you asked for. Just another option, if it's still useful; I just caught the posting date. lol