Nihil

Members · 17 posts

  1. I searched around a bit but found no information on this topic: every time the code-server container updates, it removes everything I installed inside it, both packages from apt (cmake, build-essential, gdb, ...) and manually built libraries like OpenCV and protobuf that live in /opt or /usr. VS Code extensions are preserved, but everything else is purged. Am I using the container in a way that's not intended? Is there a way to avoid this (besides not updating)?
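A common workaround for the wipe-on-update behavior described in the post above is to bake the extra packages into a custom image layered on top of the upstream one, so they survive container recreation. A minimal sketch, assuming the linuxserver.io code-server image; the image tag, custom image name, and package list are illustrative:

```shell
# Sketch: rebuild the stock image with the extra tooling baked in,
# so recreating the container no longer wipes it.
cat > Dockerfile <<'EOF'
FROM lscr.io/linuxserver/code-server:latest
RUN apt-get update && \
    apt-get install -y --no-install-recommends cmake build-essential gdb && \
    rm -rf /var/lib/apt/lists/*
EOF

# Build and then run this image instead of the stock one
docker build -t my-code-server .
```

The trade-off is that updating now means rebuilding this image against the new upstream tag rather than just pulling; by default only paths mapped as volumes (such as /config, which is why the extensions survive) persist across container recreation.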
  2. Worked perfectly! I ended up following the guide where you modify the TechPowerUp file. Thanks to all of you!
  3. I have issues trying to pass through two Windows 10 KVMs, each with its own dedicated GPU, on the X399 platform. I had been successfully using a similar configuration on the Z97 platform with an i7-4790K for three years; now I've upgraded to X399. A notable difference between the platforms is that X399 has no on-board graphics. I've set all the BIOS settings that make sense to me (AMD-V/SVM, IOMMU, C-states, ...), but I can't get both VMs to use their GPUs simultaneously: only the VM whose GPU is not used for the motherboard's default output works. The other VM starts, the Unraid console on its display turns black, and nothing happens. I can VNC into that VM, but the GPU is not present in the OS. If I change the initial display output in the BIOS, the issue moves to the other GPU/VM. Currently I am using ACS override = downstream + multi-function without any specific vfio instructions:

     label unRAID OS
       menu default
       kernel /bzimage
       append iommu=pt pcie_acs_override=downstream,multifunction initrd=/bzroot

     System:
     - Motherboard: Gigabyte X399 Designare EX (BIOS ver. F11)
     - CPU: AMD Ryzen Threadripper 1950X
     - RAM: 4x 8 GB Corsair Vengeance RGB Pro 3200 MHz
     - GPU 1: Nvidia GTX 970
     - GPU 2: Nvidia GTX 750 Ti
     - SSD cache: 1x 120 GB SATA SSD
     - Unassigned: 2x 250 GB NVMe, 1x 120 GB SATA
     - HDD array: 6x 4 TB, 2x 2 TB
     - PCIe cards: SATA controller
     - PCIe slot configuration: Slot 1: Nvidia GTX 970, Slot 2: /, Slot 3: SATA controller, Slot 4: Nvidia GTX 750 Ti, Slot 5: /

     I've attached diagnostics. Does anybody have an idea what I have to do to make it work? vfio binding instead of ACS override? Disabling the Unraid console display output at boot? triglav-diagnostics-20190705-0228.zip
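One thing worth trying for the boot-GPU problem described above is binding the passthrough GPU (and its HDMI audio function) to vfio-pci at boot, so Unraid's console never initializes it. A sketch of the modified syslinux entry; the `10de:xxxx` IDs are placeholders that must be replaced with the actual vendor:device pairs shown by `lspci -nn` for the GPU in question:

```
label unRAID OS
  menu default
  kernel /bzimage
  append iommu=pt pcie_acs_override=downstream,multifunction vfio-pci.ids=10de:xxxx,10de:xxxx initrd=/bzroot
```

With the IDs claimed by vfio-pci early in boot, the host console falls back to the other GPU (or none), which often resolves the "black screen on the boot GPU's VM" symptom. Note that if both GPUs share a device ID, binding must be done by PCI address instead.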
  4. Just noticed that every time before an Unraid version update shows up, my entire system becomes almost unresponsive. This time it lasted about 30 minutes, the previous two times slightly less. During that period the WebGUI takes forever to reload (1-5 minutes) and the VMs start to "stutter" (by that I mean the display, mouse, and keyboard keep freezing every couple of seconds). Could releasing an update actually have such an impact on the OS? Just to clarify, I'm not talking about during or after the version installation, but just before "unRAID OS v6.3.2 is available. Download Now" shows up at the bottom of the WebGUI. Is this perhaps caused by a bunch of plug-in updates arriving at the same time? Although I'm using the Auto-update plug-in, only three plug-ins are selected; the rest just show updates as available. Does anyone else experience this?
  5. Quote: "I'm not positive that the issue is crosstalk; another very likely candidate is the poor design of the SATA data connector. Bundling the cables can put bending stress on the junction between the cable and the drive, meaning either the top- or bottom-facing contacts are not fully seated. I'm fairly sure bad connections are the problem in 99% of the drive errors that aren't caused by a failing drive." That explanation makes more sense. I've been running the system these two weeks with the case's back cover open and the cables dangling freely outside it. I guess it could also be that I put too much stress on the connectors by bundling the cables this way.
  6. OK, to leave feedback on this topic for any fellow user with the same issue: I've been running the system for two weeks without any issues after unbundling the SATA cables (and replacing the SSD ones just to be sure). The interface disk errors are also gone; everything has been running smoothly. Ending with a picture of the neat cable management that caused the issue:
  7. Quote: "This usually happens when running xfs_repair on the device instead of the partition; post the command you're using. Your docker.img is corrupt and needs to be deleted and rebuilt. You're getting lots of what look like interface errors on both your SSDs. Did you replace both the power and SATA cables? Are they in some sort of enclosure?" I see, I've run "xfs_repair -v /dev/sdl". Yes, the docker image keeps getting corrupt, which I believe goes along with these read-only issues. No, I did not replace the SATA power connectors, just swapped the data cables. The SATA power connectors are extended using Y-splitters, and there's also a bay fan controller powered from the same extension. Could that be an issue? Another thing is that I've bundled the SATA cables tightly together. Could that actually cause behavior like this?
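As the quoted reply above points out, `xfs_repair` should be pointed at the partition, not the bare device. A sketch, assuming the filesystem lives on the first partition; check with `lsblk` first, since the partition number may differ on your system:

```shell
# Confirm the device's partition layout before repairing
lsblk /dev/sdl

# Run the repair against the partition (e.g. sdl1), not /dev/sdl.
# Pointing it at the whole device is what typically produces the
# "bad primary superblock - bad magic number" failure, because
# there is no XFS superblock at the start of the disk itself.
xfs_repair -v /dev/sdl1
```

The array must be stopped (or the filesystem otherwise unmounted) before running the repair.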
  8. It would seem that I misspoke; rebooting did help this time. I'm not sure if it was the xfs_check or XFS itself, but up until now rebooting didn't help with btrfs. Still, I believe the issue will return pretty soon, so if anyone has any ideas on what to do, I'd appreciate it.
  9. I keep running into a read-only issue with my cache drive(s). At first I assigned an Intenso 128 GB SSD to the cache, where after a while read-only errors started to happen. Thinking it was a cheap SSD that might have developed some bad sectors, I replaced it with a Samsung EVO 120 GB SSD. Everything ran smoothly until a week ago, when the read-only errors started to appear again. Restarting the array and rebooting the system does not help; the only way to make the drive usable again is to re-format it, but that's a very short-term solution. The drive is plugged directly into the motherboard's SATA port, and I've tried switching ports and cables. I've found multiple topics on this issue, but did not find a solution in any of them. In one of them I read that btrfs is not the best choice for a single-drive cache, so I formatted it to XFS; same issue two days later. Running xfs_check outputs:

     Phase 1 - find and verify superblock...
     bad primary superblock - bad magic number !!!
     attempting to find secondary superblock...
     [after a while]
     ...Sorry, could not find valid secondary superblock
     Exiting now.

     My cache drive usually contains:
     - appdata
     - docker.img
     - Games share with a single active game I play
     - Downloads share

     I've included the diagnostics. Can anyone help me find a solution? triglav-diagnostics-20161011-2349.zip
  10. While thinking about future upgrades I stumbled upon an interesting idea some case manufacturers have delivered: dual-system cases (i.e. two motherboards in a single case). Usually these are standard ATX/mATX cases with support for an additional mini-ITX board. This seems like a great alternative for people who might want a home server alongside their desktop PC. I'm definitely considering doing something like that in the future. If anybody knows of other cases besides the ones mentioned, please share and I'll include them.

      Phanteks Enthoo Mini XL - 1st system: mATX; 2nd system: mini-ITX; price: $180 for the DS variant with PSU splitter included
      AZZA Fusion 4000 - 1st system: up to XL-ATX; 2nd system: mini-ITX; price: $240-260 (seems to be sold out though)
      LIAN LI PC-D666WRX - 1st system: up to E-ATX; 2nd system: mATX or mini-ITX; price: about $600

      Some more cases coming soon from Phanteks: Phanteks Enthoo Primo DS, Phanteks Enthoo Elite
  11. Did that; I mounted two SSDs outside the array to be the VM drives and tried to assign the third one (Intenso 128 GB) to the cache, but it looks like it's not in a good state, as it keeps giving me read-only errors. Temporarily I've moved the docker to the light VM's SSD and I'm running the array without the cache. So far so good. One question I can't figure out: if I'm playing a game (say Far Cry 3, HW-intense) and the computer starts streaming a 1080p video via the Plex server (docker) to the TV, the game becomes unplayable due to stuttering and severe FPS drops. The game runs off the same drive as the VM, but it's accessed as an unassigned-devices drive share. Where's the bottleneck that's causing this? Is Plex hogging my VM-dedicated CPU cores? Is the share somehow being accessed via the Ethernet bridge that's simultaneously used for streaming? Is reading from multiple drives at once too hard on my motherboard controller?
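One way to rule out CPU contention in the scenario above is to pin the Plex container away from the cores dedicated to the gaming VM. A sketch using Docker's `--cpuset-cpus` flag; the container name, image, and core list are illustrative and should match your actual VM pinning:

```shell
# Restrict the Plex container to host threads 0 and 4 (the pair
# left free for the host in this hypothetical layout), so a
# transcode cannot steal time from the gaming VM's cores.
docker run -d --name plex \
    --cpuset-cpus="0,4" \
    linuxserver/plex
```

If the stutter persists even with Plex pinned off the VM's cores, the bottleneck is more likely disk or network I/O than CPU scheduling, which narrows down the remaining suspects.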
  12. Reading around the forum I stumbled upon multiple threads saying it's not a good idea to put SSD drives in the array. At the moment I have a 250 GB SSD in there as a dedicated game-install location. Is this a bad idea? How would I make the best of three SSD drives (120 GB, 128 GB and 240 GB)? Putting all three in a RAID0 cache with the games share set to cache-only? Putting one or two dedicated ones outside the array with the Unassigned Devices plugin? Keep in mind I frequently run two VMs and a couple of docker applications (mostly media-server oriented). Any suggestion is appreciated.
  13. Thanks, I will try out the suggestion and play around with it a bit. Are you referring to the fact that the multiple cables are zip-tied tightly together, or that the zip-tying might damage the wiring? I didn't pay much attention to the quality of the cables, so all of them are from the low end of the price range. I'm kinda skeptical about cable interference there, but I'll keep it in mind if I run into some trouble - thanks for mentioning it.
  14. Made some upgrades in the last couple of weeks:

      Hardware:
      - added 16 GB of memory
      - added another SSD I had lying around (128 GB) to the cache
      - removed the silly CD drive
      - rotated the drive cage 90° for better drive placement
      - rotated the CPU heatsink and changed the fan configuration a bit for better ventilation
      - redid the cable routing in the back of the case

      Software:
      - reassigned the 250 GB SSD from the cache to the array, although it is used solely for games
      - been messing around with cache configurations (RAID0, RAID1 and single-drive)
      - been trying out different CPU core assignments between the VMs
      - tried out a couple of plugins and docker applications

      So far most of the stuff has been running smoothly and I'm very pleased with Unraid. I haven't got around to setting up a media center/server yet, but I've been thinking about running a Plex server in docker plus the SmartTV app, instead of OpenELEC Kodi as a potential third VM.

      Rotated drive bay:

      Cabling nice and tidy:

      Quick question: what would be the best CPU core allocation on a quad-core hyperthreaded CPU with two VMs (one a heavy user, the other a lightweight Facebook machine)? I've been running: [0,4] - free, [1,5] - light VM, [2,3,6,7] - heavy VM. Does pairing hyperthreaded sibling cores within a VM make sense, or would mixing it up bring better performance?
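The core layout described in the question above can be made explicit in the VM's libvirt domain XML, pinning each vCPU to one host thread so that hyperthread siblings stay paired. A sketch for the heavy VM, under the assumption that host threads 2/6 and 3/7 are sibling pairs (the usual layout on a 4-core/8-thread Intel CPU; verify with `lscpu -e` before copying):

```xml
<!-- Hypothetical <cputune> for the heavy VM: vCPUs 0/1 land on one
     physical core (host threads 2 and 6), vCPUs 2/3 on another
     (threads 3 and 7), so the guest gets real sibling pairs. -->
<vcpu placement='static'>4</vcpu>
<cputune>
  <vcpupin vcpu='0' cpuset='2'/>
  <vcpupin vcpu='1' cpuset='6'/>
  <vcpupin vcpu='2' cpuset='3'/>
  <vcpupin vcpu='3' cpuset='7'/>
</cputune>
```

Keeping siblings paired generally beats mixing threads from different cores, because the guest scheduler can then account for which threads share execution resources; splitting pairs makes two vCPUs contend on one physical core without the guest knowing it.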