Leaderboard

Popular Content

Showing content with the highest reputation on 09/03/19 in all areas

  1. On Friday, August 30th, using random.org's true random number generator, the following 14 forum users were selected as winners of the limited-edition Unraid case badges: #74 @Techmagi, #282 @Carlos Eduardo Grams, #119 @mucflyer, #48 @Ayradd, #338 @hkinks, #311 @coldzero2006, #323 @DayspringGaming, #192 @starbix, #159 @hummelmose, #262 @JustinAiken, #212 @fefzero, #166 @Andrew_86, #386 @plttn, #33 @aeleos. (Note: the # corresponds to the forum post # selected in this thread.) Congratulations to all of the winners, and a huge thank you to everyone else who entered the giveaway and helped us celebrate our company birthday! Cheers, Spencer
    5 points
  2. Hi guys. This video is a tutorial on how to examine the topology of a multi-CPU or Threadripper server that has more than one NUMA node. This is useful because we can pin vCPU cores from the same NUMA node as the GPU we want to pass through, thereby getting better performance. The video shows how to download and install hwloc and all of its dependencies using a script and @Squid's great User Scripts plugin. Hope you find it useful. (A short command sketch follows this item.)
     **Note:** if you are using Unraid 6.6.0-rc or above, before running the lstopo command you will need to create a symlink with this command first: ln -s /lib64/libudev.so.1 /lib64/libudev.so.0 (don't run that command unless you are on Unraid 6.6.0 or above!)
     EDIT: Unraid now has lstopo built in! You will need to boot your server in GUI mode for it to work. Once you are in GUI mode, just open a terminal and run the command and you are good to go. Much easier than messing with loading it manually like in my video.
    1 point
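     A rough sketch of the commands from the post, assuming a manually loaded hwloc on Unraid 6.6.0 or above (the built-in lstopo in GUI mode should not need the symlink):
        ln -s /lib64/libudev.so.1 /lib64/libudev.so.0   # only for a manually installed hwloc on 6.6.0+
        lstopo                                          # shows which cores, caches and PCIe devices belong to each NUMA node
     The lstopo output tells you which host cores share a NUMA node with the passed-through GPU; those are the cores to pin to the VM.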
  3. You can also use the VNC client included as standard to access the VNC interface of VMs without the need to install special software inside the VM. Having said that, RDP (or an equivalent) is likely to give snappier graphics performance.
    1 point
  4. I don't need support. I just wanted to say thanks for this container and its continuous maintenance. I started with Aptalca's container, then switched to the linuxserver.io container. It's been close to 3 years of rock-solid performance; I often forget it's even running. I thought about switching to Nginx Proxy Manager for the nice GUI and the fact that the nginx syntax makes me commit typos for whatever reason, but the lack of fail2ban in that container has kept me away. I'm so glad you guys decided to bake that in. You can watch what I assume are bots getting blocked daily, and it's nice peace of mind. This container works great with my firewalled "docker" VLAN using Custom br0. Between the firewall and fail2ban, I feel my little home setup is about as secure as I can get it. As a fellow dev, I know we don't always hear a peep from users in appreciation of our hours of hard work. So thanks again for keeping this container going. I really do appreciate it.
    1 point
  5. Edit the VM, change from form view to XML view, and remove the section of the XML that references the device (see the illustrative snippet after this item).
    1 point
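     If the device in question is, for example, a passed-through PCI device, the block to delete looks something like the following; the PCI addresses shown are hypothetical and will differ in your VM's XML:
        <hostdev mode='subsystem' type='pci' managed='yes'>
          <source>
            <address domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>  <!-- example address of the host device -->
          </source>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
        </hostdev>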
  6. Those are checksum errors on the cache pool; one of the devices is dropping. See here for better pool monitoring (a rough sketch of the relevant checks follows this item): https://forums.unraid.net/topic/46802-faq-for-unraid-v6/?do=findComment&comment=700582
    1 point
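     For reference, a sketch of the kind of checks the linked FAQ entry is about; these are standard btrfs commands, not quoted from the FAQ, and assume the pool is mounted at /mnt/cache:
        btrfs dev stats /mnt/cache      # per-device read/write/corruption error counters
        btrfs scrub start /mnt/cache    # verify (and where possible repair) checksums across the pool
        btrfs scrub status /mnt/cache   # check scrub progress and results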
  7. Good chance it's a PCIe lane-sharing issue... that is a common culprit. Check your motherboard manual to see if your M.2 slots share lanes with other devices you may be using.
    1 point
  8. Settings -> Network Settings -> Interface rules. A reboot is required for the change to take effect.
    1 point
  9. Another possibility is to do all the work of upgrading to the new drive configuration on the old server, and only when that is complete move the drives and USB key to the new one, which should then be ready to go with no further work.
    1 point
  10. Looking over on the main Plex page, I see folks running it after doing a manual upgrade. Go into the console for that docker and do the following: wget <paste link to Ubuntu version of 1597 Plex>, wait for it to download, then dpkg -i the downloaded package. Restart your Plex docker and you're done. I haven't had a chance to try it just yet. (A hedged sketch of the steps follows this item.)
    1 point
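      A sketch of those steps; the URL and filename below are placeholders rather than the actual 1597 package link:
        # inside the Plex container's console
        wget https://downloads.plex.tv/<path-to-1597-ubuntu-package>.deb    # placeholder, substitute the real .deb link
        dpkg -i plexmediaserver_<version>_amd64.deb                         # placeholder filename of the downloaded package
        # then restart the Plex docker container from the Unraid webGUI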
  11. Try another PCIe slot on the motherboard. My guess would be that the LSI card is not getting reset when you reboot. If you Google "LSI 9220 problems", you will find that this was a solution to basically the same problem you are having.
    1 point
  12. I've been having the same 0kb update issue with a handful of lio docker containers for about a week.
    1 point
  13. It's not written to any permanent storage; it's written to a file in the root file system, which is in RAM. Your "secret password" is only to decrypt the drives - even if the password were leaked somehow, they would still need the physical drives to make use of it. You'd first have to teach her to hack your root login password in order for her to get into your server. As @bonienl mentioned in the post above, we have made some changes so that a casual user doesn't accidentally leave a keyfile lying around. These will appear in the Unraid OS 6.8 release. In the meantime, click that 'Delete' button after Starting the array, and I'm done talking with you.
    1 point
  14. It appears that the docker images --digests --no-trunc command is showing, for whatever reason, the digest of the manifest list rather than the manifest itself for containers pushed as part of a manifest list (https://docs.docker.com/engine/reference/commandline/manifest/#create-and-push-a-manifest-list). I'm not sure if that's always been the case, or if it's the result of some recent change in the Docker Hub API. Also not sure if it's intentional or a bug. This causes an issue, since in DockerClient.php (/usr/local/emhttp/plugins/dynamix.docker.manager/include) the request made to get the comparison digest is
        /**
         * Step 4: Get Docker-Content-Digest header from manifest file
         */
        $ch = getCurlHandle($manifestURL, 'HEAD');
        curl_setopt($ch, CURLOPT_HTTPHEADER, [
            'Accept: application/vnd.docker.distribution.manifest.v2+json',
            'Authorization: Bearer ' . $token
        ]);
      which retrieves information about the manifest itself, not the manifest list. So it ends up comparing the list digest as reported by the local docker commands to the individual manifest digests as retrieved from Docker Hub, which of course do not match. Changing the Accept header to the list MIME type, 'application/vnd.docker.distribution.manifest.list.v2+json', causes it to no longer consistently report updates available for these containers. Doing this, however, reports updates for all containers that do not use manifest lists, since the call now falls back to a v1 manifest if the list is not available, and the digest for the v1 manifest doesn't match the digest for the v2 manifest. If the Accept header is instead changed to 'application/vnd.docker.distribution.manifest.list.v2+json,application/vnd.docker.distribution.manifest.v2+json', Docker Hub will fall back correctly to the v2 manifest, and the digests now match the local output both for containers using straight manifests and for those using manifest lists. Until Docker Hub inevitably makes another change.
        /**
         * Step 4: Get Docker-Content-Digest header from manifest file
         */
        $ch = getCurlHandle($manifestURL, 'HEAD');
        curl_setopt($ch, CURLOPT_HTTPHEADER, [
            'Accept: application/vnd.docker.distribution.manifest.list.v2+json,application/vnd.docker.distribution.manifest.v2+json',
            'Authorization: Bearer ' . $token
        ]);
      (A rough shell sketch for checking the digest by hand follows this item.)
    1 point
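      A rough shell sketch of checking what digest Docker Hub returns for a tag, to compare against docker images --digests --no-trunc; the repository (library/nginx) and tag are arbitrary examples, and jq is assumed to be available:
        # request a pull token for the repository from Docker Hub's auth service
        TOKEN=$(curl -s "https://auth.docker.io/token?service=registry.docker.io&scope=repository:library/nginx:pull" | jq -r .token)
        # HEAD the manifest with both Accept types and read the Docker-Content-Digest header
        curl -sI \
          -H "Accept: application/vnd.docker.distribution.manifest.list.v2+json,application/vnd.docker.distribution.manifest.v2+json" \
          -H "Authorization: Bearer $TOKEN" \
          https://registry-1.docker.io/v2/library/nginx/manifests/latest | grep -i docker-content-digest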
  15. Wasn't meant to come off as an insult, I apologize if it did. Was just trying to be helpful, I'll go back in my hole now.
    1 point
  16. You think I don't know that? That handles passphrases, but not actual key files. Besides, the plaintext is sitting around in RAM in other places too, e.g. in network stack buffers.
    1 point
  17. First, a great deal of thought, design, and effort went into the Unraid OS encryption feature. However, we do think there are improvements we should make in the interest of securing the server as much as possible. To clear some things up:
      If you use a passphrase, whatever you type is written to /root/keyfile. If you upload a key file, the contents of that file are written to /root/keyfile. Hence we always pass "--key-file=/root/keyfile" to cryptsetup when opening encrypted volumes. The Delete button action is to delete /root/keyfile. I can see where this leads to confusion in the UI. /root/keyfile is definitely not automatically recreated upon system boot, but it is needed every time you Start the array if there are encrypted volumes. You only see the passphrase/keyfile entry fields if /root/keyfile does not exist, which is the case upon reboot.
      The default action, after luksOpen'ing all the encrypted volumes, is to leave /root/keyfile present. This is because often, especially during initial configuration, one might Start/Stop the array several times, and it's a pain in the neck to have to type that passphrase each time. At present, unlike traditional Linux distros, Unraid OS is essentially a single-user system (root) on a trusted LAN (your home). Thus we didn't think there was much risk in leaving /root/keyfile present, and besides, there is a way to delete it, though granted you have to remember to do so.
      The primary purpose of encryption is to safeguard against physical theft of the storage devices. Someone who does this is not going to know to first snoop in a webGUI and find an encryption key - they are going to grab the case and run.
      Each storage device has its own unique volume (master) key. The keyfile is used to decrypt this master key, and it's the master key that is actually used to encrypt/decrypt the data. Unraid uses only one of the 8 possible key slots. We intend to add the ability to assign additional passphrases, for example to change your passphrase, or to add another one if you want to give someone else a storage device without revealing your unique passphrase. But of course this is very easily done with a simple command (a sketch follows this item).
      Having read through the topic, we will make these changes:
      - Change the default action of array Start to shred /root/keyfile after opening all the encrypted volumes.
      - Add an additional configuration variable, probably under Settings/Disk Settings, to change this default action if someone wants to, with clearer help text that explains what's happening (though the current Help text does explain it).
      - Add the ability to change the passphrase.
    1 point
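      Presumably the "simple command" is cryptsetup's key-slot management; a hedged sketch, with the device name as an example only:
        cryptsetup luksAddKey /dev/sdX1      # prompts for an existing passphrase, then the new one to add to a free slot
        cryptsetup luksChangeKey /dev/sdX1   # replaces an existing passphrase with a new one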
  18. Looks like the 6.8.0 RCs are not far away. The new remote access management plugin appears to be live and in internal testing, as it requires `6.8.0-rc0p`: https://raw.githubusercontent.com/limetech/Unraid.net/master/Unraid.net.plg
        <?xml version='1.0' standalone='yes'?>
        <!DOCTYPE PLUGIN [
          <!ENTITY name      "Unraid.net">
          <!ENTITY launch    "Settings/ManagementAccess">
          <!ENTITY author    "limetech">
          <!ENTITY version   "2019.03.07a">
          <!ENTITY pluginURL "https://raw.githubusercontent.com/limetech/&name;/master/&name;.plg">
        ]>
        <PLUGIN name="&name;" author="&author;" version="&version;" launch="&launch;" min="6.8.0-rc0p">
    1 point
  19. Would 2FA be feasible with a new login page?
    1 point
  20. Hi @johnnie.black, I thought that I would chip into this post. I logged into @uaeproz's server yesterday morning to help do some testing with a new array and a clean install. This was because his normal array (170TB, 11 data drives and 2 parity drives, encrypted XFS) will never start on any version of Unraid above 6.5.3. It always hangs after mounting the first or second drive in the array. He has been trying to upgrade to each new release of Unraid as they come out, hitting the same problem, and then having to downgrade back to 6.5.3 for his server to work correctly.
      What we thought we would do yesterday is see if we could do a clean install of 6.7.0 stable, then make a one-drive array and see if the problem persisted. He removed all of his normal data and parity drives from the server and attached one 4TB drive. An array was created with just that one data drive on a clean install of Unraid 6.7.0 on the flash drive. The file system chosen was encrypted XFS (to be the same as the normal array). On clicking 'Start the array', the drive was formatted, but as the array started, the services began to start and it hung there, with the GUI saying "starting services". The array never fully became available. I looked at the data on the disk and saw that the system share/folder only had the docker folder and had not created the libvirt folder, so I assumed that the VM service was unable to start but the docker service had. The server wouldn't shut down from the GUI or command line, so it had to be hard reset.
      On restarting the server, before starting the array, I disabled the VM service. This time the array started as expected. However, when stopping the array again, it hung on stopping the services and the array wouldn't stop. Again it needed a hard reset. Next, starting the array without the docker or VM service running, the array would start and stop fine. So next I tried starting the array without the docker or VM service running, and then, once the array had finished starting, manually starting the docker and VM services. This worked fine, and as long as these services were manually stopped before attempting to stop the array, the array would stop fine.
      So next I deleted that array and made a new one using standard XFS (not encrypted) with the same 4TB drive. The array started fine with both the docker and VM services running, without issue. The array could stop fine too. So basically everything worked as expected when the drive was not encrypted.
      I was hoping from the results of those tests that when we reconnected the original drives, went back to the original flash drive, and upgraded the system to 6.7.0, the array would start if the docker and VM services were disabled. This wasn't the case. The array didn't finish mounting the drives; it stopped after mounting the first drive and had to be hard reset.
      So this is a strange problem. The OP has also tried removing all non-essential hardware such as the GPU, and tried moving the disk controller to a different PCIe slot. He has run memtest on the RAM, which passed. The diag file that he attached to the post, if I remember correctly, was taken with one drive in the server formatted as encrypted XFS, starting the array with the VM service enabled; the array never finished starting, just stuck on starting services, and that was when the file was downloaded, before hard resetting. Hope that helps.
    1 point
  21. 1 point
  22. The Unassigned Devices plugin will mount network shares, and they don't count as a device.
    1 point
  23. The second post in the UD support forum describes this issue. A drive that is not mounted will show as spun down and will not show temperature unless a script file is created; the Disk Attributes will also be disabled for the drive. The easiest way to handle this is to click on the edit script icon, select the default script, and then save it. The default script does not do anything with the drive. I'm not going to write code to get around people not reading the UD forum. UD was originally designed as a way to mount a USB drive so files could easily be copied to/from it. It has morphed into supporting mounted disks for Dockers and VMs, when these should really be array drives. You are really asking UD to support a drive the same way as a drive in the array. If you want that level of support, then put the drive in the array.
    1 point
  24. The following two commands worked perfectly for me. The only change I made is to output to a raw format, as that was my desire for this workload.
        $ tar xvf MyAppliance.ova
        $ qemu-img convert -O raw MyAppliance-disk1.vmdk MyAppliance.img
      I needed to boot an appliance that is only offered in OVA format. I decompressed and converted the image with the commands above (using the proper names, of course), then created a VM, assigning no disks to it through the regular setup process but editing other hardware as required. After that, I created the directory for the VM disk in my desired location and moved the converted raw image disk file over. I then edited the XML and added the disk using the following syntax, again referencing the proper file names that I had chosen.
        <disk type='file' device='disk'>
          <driver name='qemu' type='raw' cache='writeback'/>
          <source file='/mnt/user/vms/MyApplianceName/vdisk1.img'/>
          <target dev='hdc' bus='virtio'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
        </disk>
      Booted perfectly and seems to be working great. Thanks for getting me started with this thread! Hopefully this helps others as well.
    1 point