
Leaderboard


Popular Content

Showing content with the highest reputation since 08/22/19 in all areas

  1. 17 points
    Sneak peek, Unraid 6.8. The image is a custom "case image" I uploaded.
  2. 6 points
    On Friday, August 30th, using random.org's true random number generator, the following 14 forum users were selected as winners of the limited-edition Unraid case badges:
    #74 @Techmagi
    #282 @Carlos Eduardo Grams
    #119 @mucflyer
    #48 @Ayradd
    #338 @hkinks
    #311 @coldzero2006
    #323 @DayspringGaming
    #192 @starbix
    #159 @hummelmose
    #262 @JustinAiken
    #212 @fefzero
    #166 @Andrew_86
    #386 @plttn
    #33 @aeleos
    (Note: the # corresponds to the forum post # selected in this thread.) Congratulations to all of the winners and a huge thank you to everyone else who entered the giveaway and helped us celebrate our company birthday! Cheers, Spencer
  3. 5 points
    For as long as I can remember, Unraid has never been great at simultaneous array disk performance, but it was pretty acceptable. Since v6.7 various users have been complaining of, for example, very poor performance when running the mover and trying to stream a movie. I noticed this myself yesterday when I couldn't even start watching an SD video using Kodi just because there were writes going on to a different array disk, and this server doesn't even have a parity drive. So I did a quick test on my test server: the problem is easily reproducible and started with the first v6.7 release candidate, rc1.
    How to reproduce:
    - Server just needs 2 assigned array data devices (no parity needed, but the same happens with parity) and one cache device, no encryption, all devices btrfs formatted
    - Used cp to copy a few video files from cache to disk2
    - While cp was going on, tried to stream a movie from disk1; it took a long time to start and would keep stalling/buffering
    Tried to copy one file from disk1 (still while cp was going on on disk2), first with v6.6.7, then with v6.7rc1: a few times the transfer will go higher for a couple of seconds, but most times it's at a few KB/s or completely stalled. Also tried with all unencrypted xfs formatted devices and it was the same. The server where the problem was detected and the test server have no hardware in common: one is based on an X11 Supermicro board, the test server is X9 series; one uses HDDs, the test server SSDs, so it's very unlikely to be hardware related.
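    A minimal sketch of that reproduction from the console, assuming the standard Unraid disk mount points; the folder and file names are hypothetical:
    # sustained writes to disk2 (cache -> disk2), run in the background
    cp /mnt/cache/videos/*.mkv /mnt/disk2/videos/ &
    # concurrent read from disk1 while the writes are still running
    cp /mnt/disk1/videos/movie.mkv /tmp/
    On an affected release the second cp crawls or stalls; on v6.6.7 it runs at near disk speed.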
  4. 4 points
    Here is a video that shows what to do if you have a data drive that fails and you want to swap/upgrade it, and the disk you want to replace it with is larger than your parity drive. So this shows the swap parity procedure. Basically you add the new larger drive, then have Unraid copy the existing parity data over to the new drive. This frees up the old parity drive so it can then be used to rebuild the data of the failed drive onto the old parity drive. Hope this is useful
  5. 3 points
    Which may or may not mean it's a good idea to push that version to a production environment. "Stable" unifi software has caused major headaches in the past, I'd much rather wait until it's been running on someone else's system for a while before I trust my multiple sites to it. If wifi goes down, it's a big deal. I'd rather not deal with angry users.
  6. 2 points
    You can just upgrade/replace the two EFI files (EFI/CLOVER/CLOVERX64.efi and EFI/BOOT/BOOTX64.efi, they are the same file), downloading them from GitHub instead of installing the whole directory via pkg or ISO.
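    A minimal sketch of that in-place replacement, assuming the Unraid flash drive is mounted at /boot and the new CLOVERX64.efi has already been downloaded from the Clover GitHub releases into the current directory:
    # both destinations get the same binary, just under different names
    cp CLOVERX64.efi /boot/EFI/CLOVER/CLOVERX64.efi
    cp CLOVERX64.efi /boot/EFI/BOOT/BOOTX64.efi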
  7. 2 points
    The missing URLs are because of the multitude of mistakes the guys were making in that field; CA is now filling it out for them. Hit apply fix on each of them. The constant "update available" is due to a change at Docker Hub. Install or update the Auto Update plugin, which will patch the OS for this. Sent from my NSA monitored device
  8. 2 points
    You should never paste random code from the Internet into your computer without understanding what it does... but that aside, if you open up a terminal and paste in the following line:
    wget https://gist.githubusercontent.com/ljm42/74800562e59639f0fe1b8d9c317e07ab/raw/387caba4ddd08b78868ba5b0542068202057ee90/fix_docker_client -O /tmp/dockfix.sh; sh /tmp/dockfix.sh
    Then the fix should be applied until you reboot.
  9. 2 points
    Sure. 2,000 Blu-rays backed up at 50GB/disc. When you have well over $20,000 worth of Blu-ray discs, surely you would want a backup of your data, right? 🤣
  10. 1 point
    Due to a change at Docker Hub, Unraid always reports an update is available. Installing the Auto Update plugin (even if it's not enabled) will fix this for you. Sent from my NSA monitored device
  11. 1 point
    Yes. -rc4 doesn't fix the issue; during testing I was fooled by caching. Thought I accounted for that but it was late at night. The Linux block layer has undergone significant changes in the last few releases, and I've had to do a lot of re-learnin'.
  12. 1 point
    Hi @robsch, I will have to think about how to do that and test it out. Right now it can only move the final video to "/output". Also, I kinda like everything for me to be in the root of /output, so I would have to make it an optional setting at startup. I'll let you know what I come up with. My first guess would be to end the script with "mv $1 /output$1". Easiest way I can think of right off hand; it would move, for example, "/watch/movie/movie.mp4" to "/output/watch/movie/movie.mp4". Would that work for you? The comskip.ini is only for comskip settings like commercial detection schemes, CPU threads, etc.
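    A minimal sketch of that ending, assuming $1 holds the full path of the finished file under /watch, with an added mkdir so the destination directory exists first:
    # recreate the source directory tree under /output, then move the file
    mkdir -p "/output$(dirname "$1")"
    mv "$1" "/output$1"
    With $1 = /watch/movie/movie.mp4 this yields /output/watch/movie/movie.mp4.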
  13. 1 point
    Actually, even for the appdata on cache, there are 2 setups: 1. map directly to /mnt/cache/xxx; 2. map to the user share /mnt/user/xxx with the share set to "use cache disk: only". By testing these 2 setups we might be able to isolate whether the issue is FUSE related or not.
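    A minimal sketch of the two volume mappings being compared, with a hypothetical container image and appdata folder:
    # setup 1: direct disk path, bypasses the FUSE /mnt/user layer
    docker run -v /mnt/cache/appdata/myapp:/config myimage
    # setup 2: user share path, goes through FUSE
    docker run -v /mnt/user/appdata/myapp:/config myimage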
  14. 1 point
    9th Gen QSV is not yet supported by Unraid. The next version will include the necessary kernel/drivers to support the newer CPUs.
  15. 1 point
  16. 1 point
    You can delete the Plus key. It will have been blacklisted when you upgraded to Pro so cannot be used elsewhere.
  17. 1 point
    Try this: grep '^group_name_here:' /etc/group This is new ground for me as well as for you. As I said, "Google is your friend", and I must admit that I do not know the ins-and-outs of exactly how things are handled. I assume that your conclusion is correct but I have no firm proof of that truly being the case. I was never involved with a Windows Server setup that used AD. (I do seem to recall seeing some posts on this forum about folks not having great experiences linking Unraid to it.) I actually retired before 'Windows for Workgroups' was introduced, but I did help administer a UNIX server that provided file serving for a number of DOS computers that were connected by means of AT&T StarLAN. As I recall, we also had a laser printer (don't even ask the cost of this device) connected to this network. The entire network was less than twenty devices.
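    For reference, entries in /etc/group have the form name:password:GID:member-list, so for a hypothetical group named "users" with one member the command and its output might look like:
    grep '^users:' /etc/group
    users:x:100:dude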
  18. 1 point
    Have you run the Docker Safe New Permissions script (it is part of the Fix Common Problems plugin)? Tools >>> Docker Safe New Permissions Does this fix the problem temporarily? Do you know how to use the Linux (or UNIX) command line? The reason being, it could be a problem with the owner/permission settings, and it is necessary to find out how the underlying Linux system is handling those. We can walk you through the procedure but it would be easier knowing where you are. (You only need a very slight knowledge to get the information that is needed.) But the amount of instruction necessary from no knowledge to some knowledge is considerable!
  19. 1 point
    I did run into this situation at first, but thought I had it worked out in the posted fix (the trick was supplying both mime-types in preference order). I use one of the containers you mentioned as having the issue post-patch, but don't see the behavior here. Did you edit the file manually, or did you use the script @ljm42 posted? If you edited it manually, would you mind pasting the exact line after the edit? It's very possible a typo in the right spot could cause this.
  20. 1 point
    What's the reason for the change to the 9300-16i? What disk controller did you use before? You have one 9300? (lspci shows two, but I assume it should be one.) Forget it, the 9300-16i has a PLX chip and two SAS3008 controllers.
  21. 1 point
    The picture on the login screen is taken from the "system" image on the Dashboard page. Of course this can be any "random" picture, including a developer (which happens to be @limetech) 🙂
  22. 1 point
    Answers:
    "I understand that I will need to add the SATA & HDD to the system. Does the SATA plug that each connect to matter? Meaning, does the SATA SSD need to be on plug one... etc?" Does not matter.
    "One of the NVMe drives has my current Win10 64 Pro build on it. The other has my wife's VM machine (runs on VMware Workstation 15 Pro but the FPS in game is just awful). Will I need to just format both drives and reinstall through Unraid via the server host software?" No need to format. You can pass through the NVMe to the VM as a PCIe device (i.e. just like a GPU - you will need to vfio-stub it - instructions are in SpaceInvaderOne videos) and it should just boot (in which case you don't need a VM vdisk and certainly don't need to format it). (Note: your VM should boot in OVMF mode and not SeaBIOS.) Even if it doesn't boot and you need a vdisk, you still don't need to format the NVMe disk if passed through; it should appear to Windows the exact same way. You can even convert the vmdk vdisk of your wife's VM into qcow2 / raw format to use for the Unraid VM (you still have to set up your VM template - that can't be converted). I vaguely remember vmdk being supported directly, just needing the xml tag edited to vmdk, but I have never used it. While talking about formatting: your HDD in the array and the SSD in the cache pool will be formatted by Unraid, so make sure you back up your data. Probably the best way is to just copy stuff from the 850 and 1TB to the 2TB and keep the 2TB safe and outside of the case so there is zero chance of accidental format.
    "Also, I have 3 physical computers at my house but they're not networked to mine, though they all connect to the same wireless cable router/modem. What would I have to do to access the server once it's running?" If they are connected to the same router then shouldn't they be on the same network? What do you mean by "not networked to mine"? The others can speak about the USB stick since I have not had any problem - and I can't possibly do any worse since I use a micro stick AND USB 3.0, all sorts of "not recommended" stuff. 😅
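    A minimal sketch of that vmdk conversion using qemu-img from the Unraid console; the source and destination paths are hypothetical:
    # convert a VMware vdisk to qcow2 for use as an Unraid VM vdisk (-p shows progress)
    qemu-img convert -p -O qcow2 /mnt/user/isos/wife-vm.vmdk /mnt/user/domains/wife-vm/vdisk1.qcow2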
  23. 1 point
    Stop the VM and make a copy of the vdisk files. If you want you can also make a copy of the XML for the VM to save the VM settings, although I think you are probably asking about the changes made inside the VM. Then any time you create a new VM, set it up to use a copy of the saved vdisk files (by manually assigning them in the VM settings).
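    A minimal sketch of such a backup from the console, with a hypothetical VM name and paths (the XML can also just be copied from the VM's Edit XML view):
    # save the VM definition and a copy of its vdisk
    virsh dumpxml "Windows 10" > /mnt/user/backups/win10.xml
    cp /mnt/user/domains/Windows10/vdisk1.img /mnt/user/backups/vdisk1.img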
  24. 1 point
    Best way forward is to backup cache data, re-format and restore. Those connection issues with SSDs are usually cable related, Samsung SSDs especially can be very picky with connection quality, also keep in mind that trim won't work when connected to that LSI, so if possible use the Intel SATA ports instead for both.
  25. 1 point
  26. 1 point
    Suggestion for v3 -- I had a file get corrupted in my Radarr docker appdata and had to retrieve it, and extracting the 2MB .xml file took forever because my .tar.gz backup file is huge. Can you add a setting to back up each docker container to individual .tar.gz files, rather than one enormous file?
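    A minimal sketch of what per-container archives could look like, assuming appdata lives at /mnt/user/appdata with one folder per container; the destination folder is hypothetical:
    # one .tar.gz per container folder instead of a single huge archive
    cd /mnt/user/appdata
    for d in */; do
        tar -czf "/mnt/user/backups/${d%/}.tar.gz" "$d"
    done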
  27. 1 point
    I made the change suggested above and my containers are now updating as expected. Thanks
  28. 1 point
    Do you have a cache drive? Your appdata folder references /mnt/user/appdata/Plexmediaserver, but your docker allocation is telling me daapd for the allocation. For your version, use latest instead of docker. If you have a cache drive the appdata would be /mnt/cache/appdata/PlexMediaServer. I would remove the docker and install it again; see if that will help.
  29. 1 point
    Nothing wrong, but it sounded like you didn't use CA. I see no reason to add repositories manually as long as the apps are in CA. CA has a much better interface for installing previously installed apps.
  30. 1 point
    There is a top-level folder on your flash device called "EFI" or "EFI-". Rename the folder to EFI if you want to use UEFI, or rename it to EFI- to use legacy boot. The "Permit UEFI boot mode" setting under Main -> Flash device -> Syslinux Configuration will automatically rename this folder accordingly.
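    A minimal sketch of doing the same rename by hand from the console, assuming the flash device is mounted at /boot:
    # switch from legacy to UEFI boot
    mv /boot/EFI- /boot/EFI
    # or switch back to legacy
    mv /boot/EFI /boot/EFI-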
  31. 1 point
    6.8 is on the horizon. Possibly kernel 5.3, since it's due to be stable soon. It could also be the new LTS kernel.
  32. 1 point
    @btrcp2000 Add a second dummy vdisk, let's say only 1G, as VirtIO. Boot your Windows and install the driver for that 1GB hard drive. You can then remove the dummy vdisk and switch your main vdisk to VirtIO, and Windows should use the newly installed driver.
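    A minimal sketch of creating such a dummy vdisk from the console before assigning it in the VM settings; the path is hypothetical:
    # 1G raw image, just enough to let Windows install the VirtIO storage driver
    qemu-img create -f raw /mnt/user/domains/Windows10/dummy.img 1G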
  33. 1 point
    In this case all the info needed was in the syslog.txt (but the same things could be seen in other files, e.g. lspci.txt and lsscsi.txt). This shows that the Intel SATA controller is set to IDE mode:
    Aug 28 18:49:01 Tower kernel: ata_piix 0000:00:1f.2: version 2.13
    Aug 28 18:49:01 Tower kernel: ata_piix 0000:00:1f.2: MAP [ P0 P2 P1 P3 ]
    And this shows that the LSI is an older SAS1 model:
    Aug 28 18:49:01 Tower kernel: ioc0: LSISAS1064E B1: Capabilities={Initiator}
  34. 1 point
    Great that Tom took the time to respond. I enjoyed reading that, and it brought back a few memories. <pedantry> It's 'without further ado', not 'adieu' (which is old French for 'Goodbye'). </pedantry>
  35. 1 point
    I am currently doing the same (except I'm using GRE over IPsec as my routers don't have OpenVPN UDP support) and have linked 3 sites together. Do note that the overall speed for file transfer (and the latency!) will be determined by the Unraid servers' upload speed, your download speed, and the VPS upload + download. If you are not on CGNAT (as I am) it might be possible to have your routers connect to each other directly (as I did before CGNAT was implemented). Haven't tried getting a seedbox, so not sure what you can and cannot run on it.
  36. 1 point
    Unraid 6.8 is a kernel version? -just kidding, as always expect news soon(tm)
  37. 1 point
    Problem resolved. I had not restarted Unraid, but we had a thunderstorm last night, and the power went out. I don't have my server on a UPS yet, so it power cycled. The root cause was a misconfiguration in the proxy port for Sonarr. I had it set for 8181 instead of 8118. Comparing the configuration data between Sonarr and Radarr helped me find it. I think there was an issue with pihole initially, and restarting Unraid would have resolved the issue, but because I removed/reinstalled/reconfigured Sonarr - and didn't have the proxy port set correctly, restarting Unraid didn't resolve the issue. I'm good to go now - all dockers are working properly. Thanks for your help with this! Brawny
  38. 1 point
    Correct, you can format a drive with Unraid/UD, mount it with another Linux distro and copy data, the issue is just the format itself.
  39. 1 point
    It seems I may have found the root cause. The power for the drives that were causing problems came from a Molex-to-SATA splitter, because I don't have enough SATA power connectors. When I connected the Molex from the PSU to the Molex on this splitter, I may not have connected them securely. Though there was power, it seems whatever drive I connected to it encountered problems. I had changed SATA cables, connected to onboard ports, and changed to other onboard ports too, until I finally looked at this SATA power issue. I secured the two Molex sides properly, and so far I've finished my parity rebuild, attached other drives to this same power connector, and performed heavy transfers using the unBALANCE plugin; so far so good. I never would have suspected this if I hadn't already run out of other things to troubleshoot.
  40. 1 point
    It should be like this. You want modules loaded (and permissions changed) before the array is started (which will then start dockers and VMs):
    #!/bin/bash
    # enable iGPU for docker use
    /sbin/modprobe i915
    chmod -R 0777 /dev/dri
    # Start the Management Utility
    /usr/local/sbin/emhttp &
    These are discussed in the various plex/emby support threads
  41. 1 point
  42. 1 point
    I made this guide for installing Arch Linux as a VM.
  43. 1 point
    I had the opportunity to test the "real world" bandwidth of some commonly used controllers in the community, so I'm posting my results in the hope that they may help some users choose a controller and others understand what may be limiting their parity check/sync speed. Note that these tests are only relevant for those operations; normal reads/writes to the array are usually limited by hard disk or network speed.
    Next to each controller is its maximum theoretical throughput and my results depending on the number of disks connected. The result is the observed parity check speed using a fast SSD-only array with Unraid v6.1.2 (SASLP and SAS2LP tested with v6.1.4 due to performance gains compared with earlier releases). The wattage next to some controllers is the measured power consumption with all ports in use.

    2 Port Controllers
    SIL 3132 PCIe gen1 x1 (250MB/s)
    1 x 125MB/s
    2 x 80MB/s
    Asmedia ASM1061 PCIe gen2 x1 (500MB/s) - e.g., SYBA SY-PEX40039 and other similar cards
    1 x 375MB/s
    2 x 206MB/s

    4 Port Controllers
    SIL 3114 PCI (133MB/s)
    1 x 105MB/s
    2 x 63.5MB/s
    3 x 42.5MB/s
    4 x 32MB/s
    Adaptec AAR-1430SA PCIe gen1 x4 (1000MB/s)
    4 x 210MB/s
    Marvell 9215 PCIe gen2 x1 (500MB/s) - 2W - e.g., SYBA SI-PEX40064 and other similar cards (possible issues with virtualization)
    2 x 200MB/s
    3 x 140MB/s
    4 x 100MB/s
    Marvell 9230 PCIe gen2 x2 (1000MB/s) - 2W - e.g., SYBA SI-PEX40057 and other similar cards (possible issues with virtualization)
    2 x 375MB/s
    3 x 255MB/s
    4 x 204MB/s

    8 Port Controllers
    Supermicro AOC-SAT2-MV8 PCI-X (1067MB/s)
    4 x 220MB/s (167MB/s*)
    5 x 177.5MB/s (135MB/s*)
    6 x 147.5MB/s (115MB/s*)
    7 x 127MB/s (97MB/s*)
    8 x 112MB/s (84MB/s*)
    *on PCI-X 100MHz slot (800MB/s)
    Supermicro AOC-SASLP-MV8 PCIe gen1 x4 (1000MB/s) - 6W
    4 x 140MB/s
    5 x 117MB/s
    6 x 105MB/s
    7 x 90MB/s
    8 x 80MB/s
    Supermicro AOC-SAS2LP-MV8 PCIe gen2 x8 (4000MB/s) - 6W
    4 x 340MB/s
    6 x 345MB/s
    8 x 320MB/s (205MB/s*, 200MB/s**)
    *on PCIe gen2 x4 (2000MB/s)
    **on PCIe gen1 x8 (2000MB/s)
    Dell H310 PCIe gen2 x8 (4000MB/s) - 6W - LSI 2008 chipset, results should be the same as IBM M1015 and other similar cards
    4 x 455MB/s
    6 x 377.5MB/s
    8 x 320MB/s (190MB/s*, 185MB/s**)
    *on PCIe gen2 x4 (2000MB/s)
    **on PCIe gen1 x8 (2000MB/s)
    LSI 9207-8i PCIe gen3 x8 (4800MB/s) - 9W - LSI 2308 chipset
    8 x 525MB/s+ (*)
    LSI 9300-8i PCIe gen3 x8 (4800MB/s with the SATA3 devices used for this test) - LSI 3008 chipset
    8 x 525MB/s+ (*)
    *used SSDs' maximum read speed

    SAS Expanders
    HP 6Gb (3Gb SATA) SAS Expander - 11W
    Single Link on Dell H310 (1200MB/s*)
    8 x 137.5MB/s
    12 x 92.5MB/s
    16 x 70MB/s
    20 x 55MB/s
    24 x 47.5MB/s
    Dual Link on Dell H310 (2400MB/s*)
    12 x 182.5MB/s
    16 x 140MB/s
    20 x 110MB/s
    24 x 95MB/s
    *half the 6Gb bandwidth, because it only links at 3Gb with SATA disks
    Intel® RAID SAS2 Expander RES2SV240 - 10W
    Single Link on Dell H310 (2400MB/s)
    8 x 275MB/s
    12 x 185MB/s
    16 x 140MB/s (112MB/s*)
    20 x 110MB/s (92MB/s*)
    Dual Link on Dell H310 (4000MB/s)
    12 x 205MB/s
    16 x 155MB/s (185MB/s**)
    Dual Link on LSI 9207-8i (4800MB/s)
    16 x 275MB/s
    LSI SAS3 expander (included on a Supermicro BPN-SAS3-826EL1 backplane)
    Single Link on LSI 9300-8i (tested with SATA3 devices, max usable bandwidth would be 2200MB/s, but with LSI's Databolt technology we can get almost SAS3 speeds)
    8 x 475MB/s
    12 x 340MB/s
    Dual Link on LSI 9300-8i (tested with SATA3 devices, max usable bandwidth would be 4400MB/s, but with LSI's Databolt technology we can get almost SAS3 speeds; the limit here is going to be the PCIe 3.0 slot, around 6000MB/s usable)
    10 x 510MB/s
    12 x 460MB/s
    *Avoid using disks with slower link speeds on expanders, as it will bring the total speed down; in this example 4 of the SSDs were SATA2 instead of all SATA3.
    **Two different boards give consistently different results; I will need to test a third one to see what's normal. 155MB/s is the max on a Supermicro X9SCM-F, 185MB/s on an Asrock B150M-Pro4S.

    SATA2 vs SATA3
    I often see users on the forum asking if changing to SATA3 controllers or disks would improve their speed. SATA2 has enough bandwidth (between 265 and 275MB/s according to my tests) for the fastest disks currently on the market. If buying a new board or controller you should buy SATA3 for the future, but except for SSD use there's no gain in changing your SATA2 setup to SATA3.

    Single vs. Dual Channel RAM
    In arrays with many disks, and especially with low "horsepower" CPUs, memory bandwidth can also have a big effect on parity check speed. Obviously this will only make a difference if you're not hitting a controller bottleneck. Two examples with 24 drive arrays:
    Asus A88X-M PLUS with AMD A4-6300 dual core @ 3.7GHz
    Single Channel - 99.1MB/s
    Dual Channel - 132.9MB/s
    Supermicro X9SCL-F with Intel G1620 dual core @ 2.7GHz
    Single Channel - 131.8MB/s
    Dual Channel - 184.0MB/s

    DMI
    There is another bus that can be a bottleneck on Intel based boards, much more so than SATA2: the DMI that connects the south bridge or PCH to the CPU. Sockets 775, 1156 and 1366 use DMI 1.0; sockets 1155, 1150 and 2011 use DMI 2.0; socket 1151 uses DMI 3.0.
    DMI 1.0 (1000MB/s)
    4 x 180MB/s
    5 x 140MB/s
    6 x 120MB/s
    8 x 100MB/s
    10 x 85MB/s
    DMI 2.0 (2000MB/s)
    4 x 270MB/s (SATA2 limit)
    6 x 240MB/s
    8 x 195MB/s
    9 x 170MB/s
    10 x 145MB/s
    12 x 115MB/s
    14 x 110MB/s
    DMI 3.0 (3940MB/s)
    6 x 330MB/s (onboard SATA only*)
    10 x 297.5MB/s
    12 x 250MB/s
    16 x 185MB/s
    *Despite being DMI 3.0, Skylake, Kaby Lake and Coffee Lake chipsets have a max combined bandwidth of approximately 2GB/s for the onboard SATA ports.
    DMI 1.0 can be a bottleneck using only the onboard SATA ports. DMI 2.0 can limit users with all onboard ports used plus an additional controller, onboard or on a PCIe slot that shares the DMI bus. On most home market boards only the graphics slot connects directly to the CPU and all other slots go through the DMI (more top of the line boards, usually with SLI support, have at least 2 such slots); server boards usually have 2 or 3 slots connected directly to the CPU, and you should always use these slots first. You can see below the diagram for my X9SCL-F test server board; for the DMI 2.0 tests I used the 6 onboard ports plus one Adaptec 1430SA on PCIe slot 4.
    UMI (2000MB/s) - used on most AMD APUs, equivalent to Intel DMI 2.0
    6 x 203MB/s
    7 x 173MB/s
    8 x 152MB/s
    Ryzen link - PCIe 3.0 x4 (3940MB/s)
    6 x 467MB/s (onboard SATA only)

    I think there are no big surprises; most results make sense and are in line with what I expected, except maybe for the SASLP, which should have the same bandwidth as the Adaptec 1430SA but is clearly slower and can limit a parity check with only 4 disks. I expect some variation in the results from other users due to different hardware and/or tunable settings, but I would be surprised if there are big differences; reply here if you can get a significantly better speed with a specific controller.

    How to check and improve your parity check speed
    System Stats from the Dynamix V6 plugins is usually an easy way to find out if a parity check is bus limited. After the check finishes, look at the storage graph: on an unlimited system it should start at a higher speed and gradually slow down as it reaches the disks' slower inner tracks; on a limited system the graph will be flat at the beginning, or totally flat in a worst-case scenario. See screenshots below for examples (arrays with mixed disk sizes will have speed jumps at the end of each one, but the principle is the same).
    If you are not bus limited but still find your speed low, there are a couple of things worth trying:
    Diskspeed - your parity check speed can't be faster than your slowest disk. A big advantage of Unraid is the ability to mix different size disks, but this can lead to an assortment of disk models and sizes; use this to find your slowest disks and, when it's time to upgrade, replace these first.
    Tunables Tester - on some systems it can increase the average speed by 10 to 20MB/s or more; on others it makes little or no difference.
    That's all I can think of; all suggestions welcome.
  44. 1 point
    I'm on holiday with my family. I have tried to compile it several times but there are some issues that need working on. It will be ready when it's ready; a week for something that is free is no time at all. We're not releasing the source scripts for reasons I outlined in the original script, but if someone isn't happy with the timescales that we work on, then they are more than welcome to compile and create this solution themselves and debug any issues. The source code is all out there. I've made my feelings about this sort of thing well known before, but I will outline it again: we're volunteers with families, jobs, wives and lives to lead. Until the day arrives when working on this stuff pays our mortgages, feeds our kids and allows us to resign our full time jobs, things happen at our place and our pace only. We have a Discord channel that people can join, and if they want to get involved then just ask, but strangely, whenever I offer, the standard reply is that people don't have enough free time. If that is the case, fine, but don't assume any of us have any more free time than you. We don't; we just choose to dedicate what little free time we have to this project.
  45. 1 point
    I was able to get ZeroTier working via this Docker image relatively painlessly and am able to connect up with my phone and laptop to my array's SMB shares. However, I'm not able to access the web interface. It redirects to my unraid.net subdomain, but cannot connect to it. Is there a way I can access that remotely? I read through the thread but didn't notice any definitive steps. PS. I'm very thankful for this work. My father is terminally ill and I wanted to be at his side, but still needed access to some important resources on my array from my laptop. Being able to get this work on short notice made a huge difference for us.
  46. 1 point
    Same issue, Ubuntu 18.04. By running:
    sudo dhclient enp3s0
    I was able to get a connection again
  47. 1 point
    BIOS version F11e corrects the PCIe bus errors that previously required the pci=nommconf kernel flag.
  48. 1 point
    My background is in tech stuff; maybe I should start dabbling in emergency medicine as stress relief from all the drama I deal with daily. How does one go about picking up emergency medicine as a hobby?
  49. 1 point
    Zombie thread, but it wasn't answered and I haven't found a current one that does yet. Anyway, it's on the first page of Google results, so here goes. Braaaaaains!
    This is unfortunately incorrect, at least in 2018 it is. A Microsoft Account is not an email account. It is a Microsoft Account that is registered to your device and software licenses (for MS products), your Store account, your Skype account, and it links your user profile experience across Windows machines. That is why every time you log in with your MS account on a new PC, you have to authorize that by multi-factor auth, and lo and behold you have the same desktop image and other stuff.
    You can and should use your MS account with your unRAID shares because it's The Right Way and it's easy. But there is no integration to MS directly; you will have to update your password on unRAID when you change it with MS. This is actually how it's done on every NAS on the market that I've seen, because it's been supported by Samba for years. Unfortunately the GUI on unRAID has not caught up with the times. You can use your Microsoft Account with unRAID; you just need to know how to edit a few config files, and you need to restart Samba (i.e. stop/start the array).
    - In the GUI, create a user with the short name for your account. E.g. in my case I called the account 'dude'. Set a password for dude that matches your MS account. This will create a unix account and a matching Samba account.
    - Edit /etc/passwd (and /boot/config/passwd probably - I did) and change "dude" to "youraccount@yourdomain.com" to match your MS account.
    - Edit /etc/shadow (and /boot/config/shadow) likewise.
    - Edit /boot/config/smbpasswd as well, to change the unix username to your MS account.
    Now when you restart the array it's going to restart Samba. You can probably bounce Samba manually; I've not tried to see if unRAID handles that gracefully yet. Someone else might chime in to confirm. Once Samba is restarted, the new account is enabled.
    Okay, now on the client machine you are connecting from, I'm assuming that you are logging in with a standard Microsoft Account. You should have no drives mapped (especially with credentials saved), and you can always restart the Workstation service to clear any open sessions to the server. Once you've done this, if you navigate to the unRAID server in your Windows Explorer network browser, it should not prompt you for credentials, ASSUMING that you configured basic permissions for the user account to access your shares.
    This works fine because it's TOTALLY SUPPORTED BY SAMBA and standard on almost every NAS product I've seen but unRAID. I'm just going to push a feature request to add the ability for the GUI and the supporting scripts to eat a proper email address for a MS account. BTW the form of a MS account in SMB protocol is MicrosoftAccount\you@domain.com. If the target was a Windows box, it would need to have that MS account created locally and have been logged in once before. Samba is not so picky because it has SAVED the password that you gave it. The difference is that a real Windows 8 or 10 host knows how to ask Microsoft if the credentials are valid (and it caches them for a time, which you could look up - I've forgotten). I find it hilarious when people say oh, this is not standard or supported, when it's a Microsoft protocol, so what they say and do is the standard. Cheers from your friendly neighborhood MCSE.
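    A minimal sketch of the file edits above using sed, assuming the account was created as 'dude' and the MS account is you@domain.com (both hypothetical); back up the files first, since a bad edit to passwd/shadow can lock you out:
    # replace the unix username with the Microsoft Account address in each file
    sed -i 's/^dude:/you@domain.com:/' /etc/passwd /etc/shadow
    sed -i 's/^dude:/you@domain.com:/' /boot/config/passwd /boot/config/shadow /boot/config/smbpasswd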
  50. 1 point
    I solved this on my system: Asus Rampage IV Formula / Intel Core i7-4930k / 4 x NVIDIA Gigabyte GTX 950 Windforce, with all graphics cards passed through to Windows 10 VMs. The problem I was having was that the 3 cards in slots 2, 3 and 4 pass through fine, but passing through the card in slot 1, which is being used to boot unRAID, freezes the connected display.
    I explored the option to add another graphics card. A USB card won't be recognized by the system BIOS to use for POST. The only other card I could add would be connected by a PCIe 1x to PCIe 16x riser card (which did work, by the way, for passthrough, but I need to pass through a x16 slot), and it would require modding the mainboard BIOS to get it to use it as primary. So I looked for another solution.
    The problem was caused by the VBIOS on the video card, as mentioned on http://www.linux-kvm.org/page/VGA_device_assignment: To re-run the POST procedures of the assigned adapter inside the guest, the proper VBIOS ROM image has to be used. However, when passing through the primary adapter of the host, Linux provides only access to the shadowed version of the VBIOS, which may differ from the pre-POST version (due to modifications applied during POST). This has been observed with NVIDIA Quadro adapters. A workaround is to retrieve the VBIOS from the adapter while it is in secondary mode and use this saved image (-device pci-assign,...,romfile=...). But even that may fail, either due to problems of the host chipset or BIOS (host kernel complains about an unmappable ROM BAR).
    In my case I could not use the VBIOS from http://www.techpowerup.com/vgabios/. The file I got from there, and also the ones read using GPU-Z, is probably a hybrid BIOS; it includes the legacy one as well as the UEFI one. It's probably possible to extract the required part from the file, but it's pretty simple to read it from the card using the following steps:
    1) Place the NVIDIA card in the second PCIe slot, using another card as primary graphics card to boot the system.
    2) Stop any running VMs and open a SSH connection.
    3) Type "lspci -v" to get the PCI id for the NVIDIA card. It is assumed to be 02:00.0 here; otherwise change the numbers below accordingly.
    4) If the card is configured for passthrough, the above command will show "Kernel driver in use: vfio-pci". To retrieve the VBIOS, in my case I had to unbind it from vfio-pci:
    echo "0000:02:00.0" > /sys/bus/pci/drivers/vfio-pci/unbind
    5) Read out the VBIOS:
    cd /sys/bus/pci/devices/0000:02:00.0/
    echo 1 > rom
    cat rom > /boot/vbios.rom
    echo 0 > rom
    6) Bind it back to vfio-pci if required:
    echo "0000:02:00.0" > /sys/bus/pci/drivers/vfio-pci/bind
    The card can now be placed back as primary, and a small modification must be made to the VM that will use it, to use the VBIOS file read in the above steps. In the XML for the VM, change the following line:
    <qemu:arg value='vfio-pci,host=01:00.0,bus=pcie.0,multifunction=on,x-vga=on'/>
    To:
    <qemu:arg value='vfio-pci,host=01:00.0,bus=pcie.0,multifunction=on,x-vga=on,romfile=/boot/vbios.rom'/>
    After this modification, the card is passed through without any problems on my system. This may be the case for more NVIDIA cards used as primary adapters!