Everything posted by SSD

  1. Sometimes you need to create a ROM file for video passthrough to work. I suggest watching part 2 of the Gridrunner video on setting up a VM. It says it's for gaming, but it really applies to any passthrough. It covers how to create the ROM file, and you might find some other ideas (a command-line sketch follows below). I have no direct experience with passing through an integrated GPU.
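If you end up dumping the card's ROM yourself, here is a minimal sketch of the sysfs method from the unRAID console. The PCI address 0000:01:00.0 and the output path are assumptions - substitute your own card's address (from lspci), and do this while the card is not in use:

    echo 1 > /sys/bus/pci/devices/0000:01:00.0/rom                       # unlock the ROM for reading
    cat /sys/bus/pci/devices/0000:01:00.0/rom > /mnt/user/isos/vbios.rom # copy it out
    echo 0 > /sys/bus/pci/devices/0000:01:00.0/rom                       # lock it again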
  2. I bought a mix of Seagate and WD 8TB drives. The Seagates were archive drives but 7200 RPM, and the WDs were Reds at 5400 RPM, I think. Both have worked well for me, but the Seagates are substantially faster.
  3. @antaresuk I did a quick search for "why is Linux better than Windows" and found https://itsfoss.com/linux-better-than-windows/. But I'll look at these 11 from my perspective. In parentheses is whether I think the point would have some sway in getting me to switch to a different OS, and why.

1. Open source nature (no perceived benefit - maybe a negative, see below)
2. Secure (maybe, see below)
3. Can revive old computers (yes, but only for the OS installed on that old computer, not for switching an experienced user from Windows, see below)
4. Perfect for programmers (no, not a problem I perceive in Windows)
5. Software updates (no, not a problem I perceive in Windows)
6. Customization (maybe, but a negative for Linux, see below)
7. Variety of distributions (yes, but a negative for Linux, see below)
8. Free to use (yes, a positive or a negative for Linux, see below for both aspects)
9. Better support community (no; all of the Internet is support for Windows - you can get answers to most any question with Google in a minute or less)
10. Reliability (no, not a perceived issue with Windows, which hasn't crashed for me since before Windows 7)
11. Privacy (yes, see below)

Open source (#1) / customization (#6) / free, in a negative way (#8) - I'd rather the OS come with features configured based on the manufacturer doing usability studies, including observing and learning from actual users' experience, and presenting the best solution. I do want some configuration - font, color, etc. Occasionally I find a missing customization that annoys me, but mostly the Windows OS and apps have the customization I need, and it is simple enough to be performed by any user. Linux customizations can be very complex. Some require recompiling the kernel! Even the name is intimidating for 99% of users. Some equivalent customizations can be done much more simply in Windows. And I've seen Linux UI customizations that could take days of finagling just to find out whether anything made me like it better. I found Linux worse than Windows in this area. The customizations are too complex to benefit most users, and they are an excuse for not doing the usability studies, which are much more valuable. (And it takes money to do those, which is a disadvantage of the whole open source / free "benefit".)

Secure (#2) - I do think Windows is slightly less secure than it should/could be. If Linux is better, that would be a plus, but I expect security vulnerabilities will continue to be found in each and resolved with updates. This has not been a big issue for me with Windows, and it wouldn't be a strong motivator to switch.

Revive old computer (#3) / free, in a positive way (#8) - Not applicable to me switching, but applicable to selecting an OS for a new or revived computer. If you want to use a hand-me-down for a kid or family member with no strong OS preference, Linux might be a big plus, being lighter weight and free. If you're in a browser all day, it doesn't much matter what the OS is. I did one such machine for my daughter-in-law with Mint (she didn't like it and I had to install Windows).

Privacy (#11) - I am not happy that Windows has moved in a direction that affects privacy. But there are tools to help (look at the Gridrunner video on installing a Windows VM). And Google, Amazon, and other web sites are already leaking private data. I'd much rather stop those leaks than Windows'. So this is not a dealbreaker, but an issue to be managed. If Linux does a better job here, bravo.

But it is not going to be enough of a benefit to push me there.

@CHBMB - if you gave your list of advantages, you might pick differently. For example, the package manager does make it easy to install new apps (if the app is included in that distro's package manager) - probably #1 on my list (quick example below). Superior command line (I love it and use unRAID's command line a lot), but you can get Linux shells that run under Windows. I'm curious if you have anything you would deem compelling. Combating the Microsoft monopoly is one I would have listed 10-15 years ago. But now Microsoft is struggling, and I sort of feel the other way.
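To illustrate the package-manager point, on a Debian/Ubuntu-based distro installing an app is a one-liner (vlc here is just an arbitrary example package):

    sudo apt update && sudo apt install vlc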
  4. I have no CPU limits on unRAID in the flash drive syslinux configuration. I do allocate 11 of my 12 cores to my VM (leaving core0 for unRAID's use). For Dockers, all my lightweight dockers (everything but Plex) are restricted to core0 (shared with unRAID). Plex gets the same 11 of 12 cores as the VM. So my VM and Plex each have access to 11/12ths of my CPU's horsepower. They are seldom in heavy use at the same time, and I think the horsepower is such that it would be very rare for either of them to impact the other in any meaningful way, even with Plex transcoding (in software) a 4K x265 stream to 1080p (taking 30%-50% of the CPU).

On my prior server, with 4 cores, I tried allowing the VM and Plex to share 3 cores. But I found Plex, when syncing to a tablet, could suck all the horsepower and literally leave Windows unresponsive to mouse clicks for tens of seconds. It was unusable. So I changed it so that the VM and Plex each had access to two cores (one dedicated and one shared). So instead of having access to a max of 3 shared cores, each had access to a max of 2 cores, but always had one core dedicated. That prevented the issue I was having. BTW, syncing to my tablet allows Plex to transcode as fast as it can; with real-time watching, the transcodes are paced to support the playback. So syncing transcodes at a much faster rate, and therefore uses a lot more CPU, than real-time transcoding.

If I ever had a similar issue occur in my current setup (Windows or Plex lagging because the other is heavily used), I'd do something similar - dedicate 1 core to the VM, 1 core to Plex, and let them share the other 9. A 4.5GHz core is plenty to keep Windows responsive to mouse clicks and even significant computing, even if all the other cores were tied up with a heavy Plex transcode. If I found I needed a lot of horsepower from Windows and Plex at the same time (which almost never happens now), I'd look at dedicating more cores until the issue is manageable. I recommend this type of approach (allow maximum sharing and tweak based on actual observed issues) rather than giving the VM and Plex half of the cores each. Splitting might guarantee you won't hit the starving situation, but it also reduces the value of sharing your server's assets across multiple VMs and Dockers.

Technically I think core unRAID has access to all the cores, but from observed behavior it tends to stick to core0, so I have not tried to limit unRAID to core0. One thing I did do, in the VM config, is change the mapping of the VM's virtual CPUs to the actual threads, so what the VM sees as core0 is actually core11:

    Original: <vcpupin vcpu='0' cpuset='1'/> ...
    Revised:  <vcpupin vcpu='0' cpuset='11'/> ...

My theory is that the VM would then tend to prefer core11 and use cores in descending order, while Plex would tend to prefer core1 (core0 is not available to Plex) and use them in ascending order, keeping them from interfering with each other. But I do notice core utilization bounces around a lot, so I'm not sure the rearranging is really having an effect if the OS is using something other than that order to assign cores - but it couldn't hurt. If anyone else wants to do this, note that the only way to tweak these assignments is by editing the XML, and if you ever edit any VM settings through the GUI, it is going to reset the mappings.
To solve this and other risks of the VM config getting messed up, I actually have a backup configuration for this same VM, so if I ever forget (or need to use the GUI because I am doing something significant), I have the CPU mappings in the backup config, which I can copy and paste into the real config's XML. It's a good idea to have a duplicate configuration, so if ever your primary config stops working (e.g., you made some changes and now the VM isn't booting), you have your backup as a functional configuration to use until you can get the primary working again. (Or just copy and paste the entire backup configuration to the primary, and you are back to a known working state and can re-do whatever you were trying to do.) Copying the config to a .txt file is also a reasonable option. You just have to remember to back it up after any change. With the backup config staring you in the face, it is much more likely you'll remember to update it. Cheers!
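If you go the XML-editing route, a simple way to keep that backup copy current is to dump the live definition from the unRAID command line. A minimal sketch, assuming a VM named "Windows 10" and a backup folder on the flash drive (both names are just examples):

    mkdir -p /boot/vm-backups
    # save today's live definition as a dated backup
    virsh dumpxml "Windows 10" > /boot/vm-backups/Windows10-$(date +%Y%m%d).xml
    # later, compare an older backup against the live definition to spot pinning the GUI has reset
    diff /boot/vm-backups/Windows10-20180101.xml <(virsh dumpxml "Windows 10")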
  5. Seagate quality has improved over the past couple of years. They were successfully sued over advertising quality claims that were not consistent with the facts. Seagate, in the 2TB and 3TB years, had by far the highest failure rates I've ever experienced, and very weird failure behavior that was not readily apparent in the SMART attributes. I'd still buy Seagate at 8TB+ (and I have), but you need to watch out. They will change the innards of some of their models to inferior disks with lower workload ratings / duty cycles (TB per year). So long term a model may get high marks, yet recent purchasers may report higher failure rates in new drives. Caveat emptor!
  6. I would mention that most flavors of Linux are going to be very similar from the command line. The main difference is how you load/install apps, which varies; some versions have access to a wider or different set of apps. I'm actually quite adept at the Linux command line. I now like it much better than the DOS-style command line in Windows, even if it is more difficult to master. But the graphical UIs of the different versions can vary to a much larger degree, and some Linux versions support several different UIs. I would suggest going to YouTube, watching some videos, and getting a flavor for which UI seems most natural to you. I did this a couple of years ago and was surprised that some of the most basic things I did in Windows were done quite differently. Simple things like the app list at the bottom of the screen were unattainable or suboptimal in some of the UIs.

I started playing with the three OSes you mention to try them out. I set up VMs. I had a lot of trouble, as I recall, getting them to install, but I don't remember the details. I was able to at least run a bootable image through a VM that gave full access to the OS features. In the end I found it an annoying experience. Many of my instincts from running Windows for decades were wrong. I suppose if I had felt the Linux UI had significant advantages, I could have persevered, but I felt I was moving sideways and backwards, so I didn't really feel motivated. And not only that, I would have had to give up the apps in which I have achieved guru-like knowledge, including Word and Excel. I also have a text editor that people who have watched me use have said appears to be reading my mind. I have written dozens of intricate macros for it, over about a 10-year stint when I was doing heavy software development (over 20 years ago). My fingers still remember most of the keystrokes, and I still use it when tinkering on special projects. Even if the Linux UI were awesome, the kind of expertise and customization I have built up with the programs I know inside out would have been impossible to rebuild in Linux in any reasonable timeframe.

But if loading an OS for my kid or wife, who are always running browser apps and writing the occasional letter or resume in a word processor, I'd have few qualms setting them up on Linux. Don't let my experience and decisions overly influence you. I'm older, set in my ways, opinionated, argumentative, and particular. Lots of folks have moved to Mint or Ubuntu and are very happy, and the cost is much less / free. I don't know your reasons for wanting to switch, but it is certainly worth your time to check out the videos and spend a few evenings creating VMs to get your feet wet, so you'd be in a better position to decide which Linux you like best (or dislike least ;) ). I'll be interested in what you decide and why!
  7. One more thing: I don't think 6TB drives are at the sweet spot for lowest cost per TB. I was able to buy 8TB drives (externals that required shucking) for between $170 and $200 at the end of 2017. They were by far the cheapest per TB. But we now have 10TB and 12TB drives. I loaded up on the cheap 8TBs and won't need to purchase another drive in the foreseeable future, so I haven't really kept up on current offerings and pricing, but I always look at $/TB as one of my criteria. I will pay a small premium for larger disks just because it keeps my total disk count lower. And I don't care about anything under 6TB - that would be too small for me now. (And I'd even be picky about 6TB, as it is approaching too-small status for a new drive for me.)
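For what it's worth, the $/TB math is quick to check from any shell with bc (the prices below are just rough examples in the spirit of my 2017 purchases, not current quotes):

    echo "scale=2; 180/8"  | bc    # ~22.50 $/TB for a $180 8TB
    echo "scale=2; 300/10" | bc    # ~30.00 $/TB for a hypothetical $300 10TB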
  8. I'd recommend HGST. I have owned 2TB, 3TB, 4TB, 6TB, and 8TB varieties (probably 20 drives in the past 7 years) and never had a failure. I've bought far fewer drives of other brands and had 3 fail. HGST typically garners top marks in the Backblaze quarterly updates. If you can find another brand that is cheaper, you might consider it. For example, if I can buy three cheaper drives for the cost of 2 HGSTs, and the cheaper drives get good reviews, I will buy the cheaper ones. YMMV.
  9. Have you passed through one of the video cards and specified a ROM file? Find gridrunner's 2-part guide on setting up a VM - the first part covers setting up without passthrough, and the second part covers making the VM passed through. The first part is an absolutely brilliant guide for installing Windows 10 (VM or otherwise), installing apps, and turning off spying. If you are able to reinstall, I'd strongly recommend that guide if you didn't already use it, or at least watch it for tips and tweaks to your existing install. The second part is called setting up for gaming, but it is excellent for anyone wanting to pass through video / USB. He also has another video on passing through USB which is more complete than the second video I mentioned above. I definitely recommend you use passthrough for keyboard / mouse (e.g. a Logitech unifying dongle) if you are planning to use this VM regularly. Setting up the keyboard without passthrough is subject to laggy / unsmooth mouse performance and dropping offline in the middle of a session, and it does not support unplugging and replugging (a bit more important for a USB port you plan to use with thumbdrives, but if you are using a KVM to switch mouse/keyboard among computers, it is required). The passed-through method is smooth as butter with the mouse, indistinguishable from a non-VM Windows install.
  10. You definitely have to set RAM for a VM. But for a Docker container, the apps run in the address space of the host (unRAID). There may be a way to limit the RAM use, but I don't remember seeing it (see the note below).
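Docker itself does have a per-container memory cap via the --memory flag, which you could add as an extra parameter to a container. A sketch - the container name, repository, and the 4g limit are all arbitrary examples:

    docker run -d --name=someapp --memory=4g --memory-swap=4g somerepo/someapp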
  11. As far as allocation of cores to VMs and Dockers goes, this is more of an art than a set of hard-and-fast rules - it depends on your specific needs. My big users of processing horsepower are Plex and my Windows VM, but it is very uncommon for both to need a lot of horsepower at the same time. So I have reserved core0 (both threads) for unRAID and all dockers except Plex, and I have assigned all the other cores to both my VM and Plex. When either needs a lot of horsepower it can access 11 cores (22 threads) of my 7920X. As I said, it's rare for them to run CPU-intensive activities in parallel, and so far I have never seen any slowdowns. One of gridrunner's server tuning videos does a good job of describing other ways to parcel out the cores. If I split the cores between the VM and Plex, I'd be artificially limiting the maximum horsepower each could access; based on my usage pattern, that isn't good for me.

I had done a similar thing with my old 4-core server, allowing Plex and three VMs to share 3 cores (6 threads). I noticed that during heavy transcodes, Plex would suck all the processing power, and the VM would become starved and almost completely unresponsive. So I changed the configuration: the VM could access 2 cores (1 and 2), and the Plex docker had 2 cores (2 and 3). Each had one dedicated core and one shared between them. So even if Plex maxed out its 2 cores, the VM still had an unused core, and that prevented the VM from lagging. If I ever saw a similar pattern emerge with my 12-core, I'd likely assign one dedicated core to the VM and one to Plex, and allow them to share the other 9. Each would then have the potential to access 10 cores instead of 11, and neither would get starved. Hope this helps.
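For reference, restricting a container to core0 can be expressed directly with Docker's --cpuset-cpus flag, added as an extra parameter to the container. The thread numbers below assume core0's two hyperthreads show up as 0 and 12 on a 12-core CPU - check lscpu -e for your actual layout - and the container name is just a placeholder:

    # pin a lightweight container to core0 and its hyperthread sibling
    docker run -d --name=lightapp --cpuset-cpus="0,12" somerepo/lightapp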
  12. I think there are a couple of special challenges with SHFS: 1 - It is not determinate which physical disk is going to get a write to a user share without SHFS translating the "user share name" into a "disk-specific name", and there may not be an easy way for SHFS to return that data to the OS within the context of the copy orchestration protocol. 2 - SHFS volumes would appear to have multiple files with the same inode number that actually represent files on different physical disks. With symlinks it might be easier for cp to determine the physical disk, but with SHFS it would need help, and similar to above, the OS cannot figure this out without help from SHFS - something that may not have been envisioned in the protocol. At the very least, SHFS should have petitioned the Linux governance board for a protocol change allowing SHFS to return the necessary details so the OS could refuse to let source files be deleted. I can't help but believe they would see the value and implement it as part of a future Linux release. But given that SHFS has been around for at least 12 years, this appears not to have happened. Why? One reason might be that SHFS has not gotten complaints and therefore this has not been a priority. Another is that it IS possible to implement overwrite protection with SHFS within the current structure, and the SHFS authors just haven't felt it is a priority based on few if any complaints - and it might be a non-trivial enhancement / rewrite. A quick illustration of the inode ambiguity is below.
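You can see the ambiguity with a quick stat comparison of the same physical file viewed through the user share and through the disk share (the share and file names here are assumptions). The two paths report different device numbers, and the inode numbers may or may not match, so the OS has no reliable way to tell they are the same file without SHFS's help:

    stat -c '%n  dev=%d  inode=%i' /mnt/user/Movies/film.mkv /mnt/disk1/Movies/film.mkv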
  13. Yes - that worked as I expected. I created a shortcut to a file on my C drive, placed in a different directory on my C drive. The file was called "test.docx", and the shortcut was called "test.docx - Shortcut" in the GUI and "test.docx - Shortcut.lnk" from the cmd prompt.

If I double-click the shortcut in the GUI, it opens the file in Word, and any changes I make are saved back to the original file. If I type the shortcut at the cmd prompt, I get some gobbledygook, but I can see the actual file name and folder name in there. If I delete the shortcut in Windows, the shortcut goes away but the original file is unaffected. If I delete the shortcut at the cmd prompt, same thing: the shortcut is gone but the original file is unaffected. If I drag and drop a different file on top of the shortcut in the Windows GUI, Windows shows a null symbol and won't do it. If I drag and drop the original file on top of the shortcut, same thing. (So Windows does not allow any drag and drop on top of a file shortcut. If it had been a directory shortcut, the drag and drop would have added a file to the linked subdirectory.) If I copy a file from the command prompt over top of the .lnk file, Windows asks if I want to overwrite the file. If I say yes, it overwrites the .lnk file; typing the .lnk file shows the content of the file I copied over it. But the shortcut disappears in Windows, since it is no longer a valid .lnk file format.

So was I ever able to coerce a file to overwrite itself using links? No. Was I able to delete the real file using links? No. Was I able to confuse Windows by overwriting the .lnk file? Yes. Did it cause any loss of data? No.

All in all, do we have any sort of smoking gun like we are seeing in Linux with SHFS causing source files to be deleted? I think not. The Windows filesystem is unaware of any significance of the .lnk files. The Windows GUI attaches special meaning to them, and overwriting them with junk causes them to stop working as shortcuts in Windows, but at the filesystem level, all is fine. If we did the equivalent in Linux, you would never be able to overwrite the equivalent of a link file. Why? Because in Linux, symlinks ARE maintained at the filesystem level; they are not regular files. I may do the equivalent of what we did here and post the results. I didn't try manipulating the shortcuts over Samba, but I'm pretty sure the results would be very similar to what we see in Windows. I may play with this also.

@FlorinB - thanks for trying this out and posting your results. I think you highlighted some odd behavior of the Windows shortcut feature. I'm not sure why the OS allows the .lnk file to be updated; those .lnk files could simply be marked read-only, and that would have stopped it. But there is no real opportunity for data loss, so that is good news and what I expected.
  14. cp specifically WANTS to protect against file overwrites. So it is using the correct file modes and logic to protect against that scenario. We know that SHFS interferes with it doing its job. You say that NFS and Samba can also create overwrite opportunities, but there is no proof so far. BTW, if you compare the number of people using cp with the number using SHFS, you are talking about the difference between the Atlantic Ocean and a mud puddle (maybe an exaggeration, but you get the idea). The amount of testing dwarfs SHFS's. I can assure you that if cp left holes in how it detects alias scenarios, they would have been expertly fixed long, long ago.

You've answered a completely different question - proving that with C you can force a file that is open for reading to be overwritten. C gives you a lot of control over file I/O. If you read K&R and the C function reference, they document how those file modes work, and your program is doing exactly what it was written to do. It is not an example of creating the overwritten-file scenario with a cp command. I'm assuming, since you didn't post an example, that you are not able to reproduce a scenario in which a real user - one who has never written a C program in their life - would get bitten by the OS's inability to recognize an overwrite due to some form of link/alias WITHOUT SHFS, which we know for sure can cause this. Ditto. Continuing to question the veracity of cp is silly. This is an SHFS byproduct. There is no evidence that NFS or Samba create vulnerabilities. I challenge you to find one copy program that can resolve the overwrite issue that SHFS creates. If the most basic of Linux commands - cp - can't, I expect no copy program is going to be able to detect it.

??? You really think the Linux authors are being cheap in how they check for duplicate files in the cp command? That's pretty funny. We have some of the best computer scientists in the world involved in Linux development and maintenance. I can assure you that if there were a defect in cp that resulted in a source file being overwritten, it would have been reported and fixed long ago - it would be a most egregious OS error. Claiming cp is coded poorly, with no evidence, is a cheap shot.

The inodes are only unique on a single disk. Every disk can have an inode 53. If the OS thinks the two files are on different disks, comparing the inode numbers is meaningless. Within a single user share, the same inode number could very well be used multiple times for files that are on different physical disks and are not the same file. In short, an inode check alone is an invalid way to check for duplicates with SHFS involved, so that argument is not valid.

The code you posted is not at all clear. There is no way to tell whether the current directory is on a physical disk or a user share. The stat() function is supposed to have the form int stat(const char *path, struct stat *buf), but your code's second argument is not a pointer, and there is no telling what is in the "...". Not making sense. The write command write(2, "cp: ", 4cp: ) you quoted is not valid - no idea what it is trying to do. Since inodes are not unique across disks, that entire argument is flawed. cp is part of the OS; your C code is not. The code is still funky and not making any sense in this context. Your read command is reading into a string label, which would likely result in an exception or corruption of program memory - I have no confidence this is working code. And now we have a hard-coded number as the second argument to stat(), which should be a pointer.

SHFS is a low-level driver. Similar low-level drivers work in conjunction with the OS to identify overwrite situations. I believe SHFS could do the same. You saying it's impossible is not convincing me. It may be expensive (slower), but I am confident it is possible. When you are writing in C, you can do most anything.

This is what you had said... A copy operation does not delete the source file; only move operations could be impacted by a partial destination file - unless you can't trust the copy command to work as advertised. You are right that double buffering could fix a broken copy command, but it would be silly for all of Linux to require double buffering to handle an issue with SHFS. It is the tail trying to wag the elephant.

You have not shown this at all. You have only shown that a C program can be written to enable the author to overwrite an existing file if that's what he wants to do, and you presented a very simplistic argument comparing inode numbers, which wouldn't work with SHFS since inode numbers are not unique within a single user share.

@pwm - I was a professional C programmer for over 15 years. Maybe a little rusty, but I know C very well. I have done work on a variety of platforms including DOS (the mainframe one), OS (mainframe), MS-DOS 1.0+, Windows 1.03+, OS/2 1.1+, AIX, and Linux. Your post is techno-babble that non-tech nerds might mistake for a legitimate argument. I'm still hopeful to have a meaningful conversation. If there are other risks of overwriting files with Samba, NFS, or other programs, I am hopeful they will come out. But of one thing I am sure - this issue is an SHFS issue, and not a defect in how cp detects aliased files.
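To be concrete about what the same-file check actually looks at: GNU cp treats two paths as the same file only when both the device number and the inode number match, which is why an inode number alone proves nothing. A rough shell sketch of that test, with placeholder paths:

    src=/mnt/user/share/file.bin    # placeholder paths for illustration
    dst=/mnt/disk2/share/file.bin
    [ "$(stat -c '%d:%i' "$src")" = "$(stat -c '%d:%i' "$dst")" ] && echo "same file - copy would be refused"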
  15. @pwm / @FlorinB - Another suggestion is to rename the include / exclude options and update their descriptions. When you turn on the "help" feature on a user share, the definitions it gives are simply false - except in the simple-minded situation where the settings are set when the user share is created and no disk is ever excluded from the user share after files were placed on that disk. Included / not-excluded disk settings are ignored when browsing the user share, so excluded disks are always seen when browsing the user share. They are ignored when overwriting a file that exists on ANY disk in a folder with the user share name. They are ignored if the split level is satisfied on ANY disk in a folder with the user share name. They only apply when writing a file that is not being overwritten and whose split level is not satisfied on any disk. So excluding a disk from the share does not actually exclude the disk from the share. A better description AND labeling should be used. Include/exclude is simply too easy to misunderstand. Maybe "Include for new files" and "Exclude for new files". At least a user would understand that it is not all-encompassing.
  16. Provide a reproducible set of commands in Windows that results in a file trying to overwrite itself when the copy command is used on a single machine. The only reason you say symlinks are "transparent" is because the OS is aware of them and has the necessary logic to detect the situation.
  17. This is not true. Can you give an example where an OS aliasing feature causes the OS to be unable to detect that a file is being overwritten? Most (all?) alias features, including Windows shortcuts and Linux symlinks, recognize that you are trying to copy a file on top of itself even through the feature. Even the old DOS ASSIGN and SUBST commands were smart enough to detect this. And the main use case for this bug - removing a disk from a user share and then copying from the excluded disk's disk share to the user share - no sane person would expect that to cause data loss without being pre-warned and understanding user shares at a very deep level (see my previous post above). You can say "let the user beware" if you like, but there should at least be a bold warning on the user share's GUI page that a user must acknowledge before enabling user shares (this doesn't fix the bug, but it at least gives the user a fighting chance of not being bitten by it).

The application, in this scenario, is Linux itself (the "cp" and "rsync" commands), as well as every other copy program I know (Midnight Commander, the Windows "copy" command, Windows GUI drag-and-drop copy, and others). Certainly you can't blame all of these OS commands and programs and say they are copying files incorrectly! The authors of SHFS should have been able to foresee this issue and coded around it. I have a lot of confidence that the "cp" command is doing everything right; it is the most basic and well-tested. And if SHFS creates a situation where it can't copy files correctly, it is a bug in SHFS, not Linux. Look at the "cp" command manpage for any warnings that if files are linked then cp may overwrite them.

A copy does not remove the source until the destination is completely written. What you are referring to here is a "move". Having a copy operation that results in the source being lost ... not a bug?? Do your test. I expect this use case will fail and you'll be calling it a bug. If a piece of software subverts the base OS so that attempting to copy a file over top of itself destroys it - that is a bug. The fact that it isn't one that LimeTech can easily fix is a different matter.

This makes no sense at all. I've responded in the thread you linked. The bug is not due to a file being referred to by different paths; the OS will recognize that situation if symlinks are used. It is only when the SHFS driver (which user shares depend upon) is in place that the bug exists. SHFS subverts the base OS protection against overwriting files.
  18. I don't agree with this. You could use symlinks to cause a file to be referred to by two different paths and the OS will see through it and refuse to copy it over top of itself. The root of this issue is a bug in SHFS that subverts the OS protection mechanism against overwriting files.
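That is easy to demonstrate with a minimal test (file names are arbitrary); cp refuses even though the two paths look different:

    echo "original data" > real.txt
    ln -s real.txt alias.txt
    cp real.txt alias.txt      # cp sees through the symlink and refuses with an "are the same file" error
    cp alias.txt real.txt      # refused the same way in the other direction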
  19. 45C, IMO, is acceptable as a hottest temperature. I prefer 42C or 43C; at 46C I start to look at adding more cooling. 45C is just under my limit. As I think was said, 40C is perfectly fine. I'd be careful looking at the specs - some will indicate that temps of 60C+ are acceptable. Although a high temp may work out fine if the drive is maintained at that temp, on an unRAID server, where drive temps are often very low when the drive is spun down and therefore bounce between 25C and 60C frequently, the thermal cycles are going to shorten drive life. If you are doing a lengthy operation and drive temps are going above your liking, you may be able to blow a fan on the server's air intake. Even better, open the case and blow a strong fan inside. Works quite well in a pinch.
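If you want to spot-check a drive during a long operation from the unRAID command line, smartctl will report the current temperature attribute (/dev/sdb is an assumption - substitute your device):

    smartctl -A /dev/sdb | grep -i temperature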
  20. You need to make sure that your unRAID USB controller is in a separate IOMMU group from the passed-through one. Using a second PCIe card is an option, but not one I recommend if it can be avoided. Why? 1 - why spend the $50 or whatever if you don't need it, and 2 - you will be tying up a PCIe slot. Between HBAs, a RAID controller, a PCIe card for full-bandwidth access to an NVMe, and multiple video cards, I find there is nothing more precious than my PCIe slots. Tying one up for such a pedestrian purpose seems a waste. Following the video below should help you figure out if there is an onboard controller that can be passed through. You may need to use a USB header on the MB, which requires an inexpensive cable to expose. As you noticed, BIOSes often like to combine multiple controllers into the same IOMMU group. But I have been successful on two different motherboards. The first one took a lot of experimentation - I must have tried changing 10+ BIOS settings and combinations of settings before finding that if I disabled the USB3 function, the ports got broken apart and I could pass through easily. With my current MB, which combined all the USB2 and USB3 ports in the same IOMMU group, there was no explicit feature to disable USB3 (although I tried valiantly). But I found the MB included a USB 3.1 controller that was separated out nicely. I wasn't sure it would work, but I was able to pass it through, plug in my keyboard/mouse dongle (Logitech unifying receiver), and it works well. I use it on a KVM switch, and it properly disconnects and reconnects. The second USB 3.1 port on the controller is USB-C, but I think I could get an adapter and also use that for attaching a USB thumbdrive (or other accessory) to the VM.
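To see how your board has grouped its USB controllers before spending anything, you can list the IOMMU groups from the unRAID console. A commonly used sketch (the only assumption is that IOMMU/VT-d is enabled in the BIOS):

    for d in /sys/kernel/iommu_groups/*/devices/*; do
        g=${d#/sys/kernel/iommu_groups/}; g=${g%%/*}   # extract the group number from the path
        printf 'IOMMU group %s: ' "$g"
        lspci -nns "${d##*/}"                          # show the device in that group
    done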
  21. @lostincable This is very typical these days. Companies like Intel designing a new CPU that is backward compatible with the prior generation is becoming rare, and even memory specs often change between upgrades. So we need to make sure we understand the impact of these on the upgrade cost. Another thing to consider is your NEXT upgrade (after this one). Do not expect you will be able to update your CPU with a future offering. BUT, if a more powerful offering exists at the time of purchase, perhaps an expensive option, then years later you might find that more powerful chip available on eBay at a reasonable cost, and be able to do an inexpensive upgrade without replacing the MB or memory. This is why it might be advisable to select a CPU series where you are not buying the top of the line. For example, I recently purchased an i9 7920X, which is a 12-core CPU (3x my current core count, and much faster per core) and absolutely all I need at the moment. But there are 14- and 18-core versions (7940X and 7980XE) available. Adding 2 more cores (15%) would likely not be enough of a bump to make upgrading worthwhile, but the 7980XE is 6 more cores (50%) and would be. At purchase, the 7980XE was double the cost of the 7920X (and the 7920X was already well over my price target). But if 5-8 years from now it can be had at a deep discount (especially considering selling my 7920X), an upgrade may very well make sense. The 12-core will keep me satisfied for a much longer time than a 6- or 8-core CPU (which is what I had been looking at/for). This was part of my reasoning for buying an X299 CPU - longer life, and the next upgrade would be cheap. I figured that if I looked at this upgrade and the next one together, buying a 6-core CPU with no upgrade potential today and then buying an ~18-core CPU 3-4 years down the road (with new MB and memory) would cost roughly the same as the two taken together. So spending the extra to buy the 7920X today at 2x+ the cost was still the smart decision. (Amazing what the human mind can do to justify expensive IT purchases!) I would also mention that higher core counts seem to be the result of AMD re-entering the competitive space for CPUs. AMD is now promising a 28-core chip (at $2,000). But I am pretty sure we'll see a trickle-down, and 8, 10, or maybe 12 cores will be "normal" in the next couple of years.
  22. Don't confuse disk shares (e.g., disk1, disk2) with "Disks" shares created by the Unassigned Devices plugin. "Disks" shares can't participate in a user share, so there is no problem copying to/from user shares. Here is another post on the user share copy bug that might be clearer; it also gives some data on user shares you might not know. The other post was a bug report really aimed at explaining the issue to Limetech.
  23. (Share Your Banners) One of my servers is called shark (for my Sharkoon case). It works nicely with the white theme.
  24. 2 Mb/sec (megabits/sec) is really, really slow. Are you sure it is not MB/sec (megabytes per second)? At 2 Mb/sec, 70 GB would take about 3 1/4 days to transfer. At 2 MB/sec, it would take about 10 hours. At those speeds, using a USB stick would probably be faster. But I would expect you'd see a minimum of 30 MB/sec even transferring to a protected array; at that speed it would finish in about 40 minutes. There have been some issues with Realtek NICs over the years. Is it in the unRAID server? If so, I suggest posting the exact chipset so people can check whether it is well supported.
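The back-of-the-envelope math, if you want to check it yourself (70 GB payload, ignoring protocol overhead):

    echo "70*1000*8 / 2 / 3600 / 24" | bc -l    # days at 2 megabits/s   (~3.2)
    echo "70*1000 / 2 / 3600"        | bc -l    # hours at 2 megabytes/s (~9.7)
    echo "70*1000 / 30 / 60"         | bc -l    # minutes at 30 MB/s     (~39)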
  25. This was stated earlier. I assumed it was true. Johnnie is smart, and often we agree. But on this one we each had different theories - both plausible and well explained to you. Johnnie's statement was based on his theory being correct, which was never proven. And BTW, his theory assumed your statement above was false. I tended to believe you. Just blindly accepting Johnnie's conjecture was no risk to him - only to you. I have no idea whether you lost data or not - or if it was just some movies you could rerip or otherwise recover, or priceless family pictures/movies. I tend to assume that a person's data is valuable, and my suggestions are always aligned with saving it if it is remotely possible.