Leaderboard

Popular Content

Showing content with the highest reputation on 02/08/19 in Posts

  1. Hi all, It would be lovely to have settings to configure access to shares on an individual user's page as well. Depending on the use case, it's easier to configure things on a per-share basis or a per-user basis. Would be nice to have the option; see wonderfully artistic rendering below:
    1 point
  2. UPDATE: Found a solution which is in line with the fix for OE/LE. I created /etc/modprobe.d/snd-hda-intel.conf and put the same line in it that is used for OE/LE (options snd-hda-intel enable_msi=1). Audio works as expected now.

     I know that "demonic" audio is an issue for Nvidia based cards and there are fixes for Windows and OE/LE guests. However, I don't see a fix for Linux distro guests (Ubuntu 16.04.1 in my case). I'm doing a vanilla install and have only done an apt-get update/upgrade and installed OpenPHT...but I have the "demonic" audio bug. The manual seems to only address Windows guests: http://lime-technology.com/wiki/index.php/UnRAID_6/VM_Guest_Support#Enable_MSI_for_Interrupts_to_Fix_HDMI_Audio_Support
     Is there a Linux fix (non-OE/LE)?

     XML...

     <domain type='kvm' id='56'>
       <name>HTPCFAMILYRM</name>
       <uuid>f31215fd-5042-c086-4b96-ba7f8531458d</uuid>
       <metadata>
         <vmtemplate xmlns="unraid" name="Linux" icon="linux.png" os="linux"/>
       </metadata>
       <memory unit='KiB'>4194304</memory>
       <currentMemory unit='KiB'>4194304</currentMemory>
       <memoryBacking>
         <nosharepages/>
         <locked/>
       </memoryBacking>
       <vcpu placement='static'>2</vcpu>
       <cputune>
         <vcpupin vcpu='0' cpuset='10'/>
         <vcpupin vcpu='1' cpuset='11'/>
       </cputune>
       <resource>
         <partition>/machine</partition>
       </resource>
       <os>
         <type arch='x86_64' machine='pc-q35-2.5'>hvm</type>
         <loader readonly='yes' type='pflash'>/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd</loader>
         <nvram>/etc/libvirt/qemu/nvram/f31215fd-5042-c086-4b96-ba7f8531458d_VARS-pure-efi.fd</nvram>
       </os>
       <features>
         <acpi/>
         <apic/>
       </features>
       <cpu mode='host-passthrough'>
         <topology sockets='1' cores='1' threads='2'/>
       </cpu>
       <clock offset='utc'>
         <timer name='rtc' tickpolicy='catchup'/>
         <timer name='pit' tickpolicy='delay'/>
         <timer name='hpet' present='no'/>
       </clock>
       <on_poweroff>destroy</on_poweroff>
       <on_reboot>restart</on_reboot>
       <on_crash>restart</on_crash>
       <devices>
         <emulator>/usr/local/sbin/qemu</emulator>
         <disk type='file' device='disk'>
           <driver name='qemu' type='raw' cache='writeback'/>
           <source file='/mnt/user/domains/HTPCFAMILYRM/vdisk1.img'/>
           <backingStore/>
           <target dev='hdc' bus='virtio'/>
           <boot order='1'/>
           <alias name='virtio-disk2'/>
           <address type='pci' domain='0x0000' bus='0x02' slot='0x03' function='0x0'/>
         </disk>
         <disk type='file' device='cdrom'>
           <driver name='qemu' type='raw'/>
           <source file='/mnt/user/isos/ubuntu-16.04.1-desktop-amd64.iso'/>
           <backingStore/>
           <target dev='hda' bus='sata'/>
           <readonly/>
           <boot order='2'/>
           <alias name='sata0-0-0'/>
           <address type='drive' controller='0' bus='0' target='0' unit='0'/>
         </disk>
         <controller type='usb' index='0' model='nec-xhci'>
           <alias name='usb'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
         </controller>
         <controller type='sata' index='0'>
           <alias name='ide'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
         </controller>
         <controller type='pci' index='0' model='pcie-root'>
           <alias name='pcie.0'/>
         </controller>
         <controller type='pci' index='1' model='dmi-to-pci-bridge'>
           <model name='i82801b11-bridge'/>
           <alias name='pci.1'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x1e' function='0x0'/>
         </controller>
         <controller type='pci' index='2' model='pci-bridge'>
           <model name='pci-bridge'/>
           <target chassisNr='2'/>
           <alias name='pci.2'/>
           <address type='pci' domain='0x0000' bus='0x01' slot='0x01' function='0x0'/>
         </controller>
         <controller type='virtio-serial' index='0'>
           <alias name='virtio-serial0'/>
           <address type='pci' domain='0x0000' bus='0x02' slot='0x02' function='0x0'/>
         </controller>
         <interface type='bridge'>
           <mac address='52:54:00:69:c3:d7'/>
           <source bridge='br0'/>
           <target dev='vnet1'/>
           <model type='virtio'/>
           <alias name='net0'/>
           <address type='pci' domain='0x0000' bus='0x02' slot='0x01' function='0x0'/>
         </interface>
         <serial type='pty'>
           <source path='/dev/pts/1'/>
           <target port='0'/>
           <alias name='serial0'/>
         </serial>
         <console type='pty' tty='/dev/pts/1'>
           <source path='/dev/pts/1'/>
           <target type='serial' port='0'/>
           <alias name='serial0'/>
         </console>
         <channel type='unix'>
           <source mode='bind' path='/var/lib/libvirt/qemu/channel/target/domain-HTPCFAMILYRM/org.qemu.guest_agent.0'/>
           <target type='virtio' name='org.qemu.guest_agent.0' state='disconnected'/>
           <alias name='channel0'/>
           <address type='virtio-serial' controller='0' bus='0' port='1'/>
         </channel>
         <hostdev mode='subsystem' type='pci' managed='yes'>
           <driver name='vfio'/>
           <source>
             <address domain='0x0000' bus='0x83' slot='0x00' function='0x0'/>
           </source>
           <alias name='hostdev0'/>
           <address type='pci' domain='0x0000' bus='0x02' slot='0x04' function='0x0'/>
         </hostdev>
         <hostdev mode='subsystem' type='pci' managed='yes'>
           <driver name='vfio'/>
           <source>
             <address domain='0x0000' bus='0x83' slot='0x00' function='0x1'/>
           </source>
           <alias name='hostdev1'/>
           <address type='pci' domain='0x0000' bus='0x02' slot='0x05' function='0x0'/>
         </hostdev>
         <hostdev mode='subsystem' type='usb' managed='no'>
           <source>
             <vendor id='0x045e'/>
             <product id='0x0291'/>
             <address bus='8' device='2'/>
           </source>
           <alias name='hostdev2'/>
         </hostdev>
         <hostdev mode='subsystem' type='usb' managed='no'>
           <source>
             <vendor id='0x20a0'/>
             <product id='0x0001'/>
             <address bus='2' device='10'/>
           </source>
           <alias name='hostdev3'/>
         </hostdev>
         <memballoon model='virtio'>
           <alias name='balloon0'/>
           <address type='pci' domain='0x0000' bus='0x02' slot='0x06' function='0x0'/>
         </memballoon>
       </devices>
     </domain>

     GPU card...

     root@unRAID:~# lspci -v -s 83:00.0
     83:00.0 VGA compatible controller: NVIDIA Corporation GK208 [GeForce GT 730] (rev a1) (prog-if 00 [VGA controller])
         Subsystem: Device 196e:1119
         Flags: bus master, fast devsel, latency 0, IRQ 66, NUMA node 1
         Memory at f4000000 (32-bit, non-prefetchable) [size=16M]
         Memory at b0000000 (64-bit, prefetchable) [size=128M]
         Memory at ae000000 (64-bit, prefetchable) [size=32M]
         I/O ports at dc00 [size=128]
         Expansion ROM at f3f80000 [disabled] [size=512K]
         Capabilities: [60] Power Management version 3
         Capabilities: [68] MSI: Enable+ Count=1/1 Maskable- 64bit+
         Capabilities: [78] Express Legacy Endpoint, MSI 00
         Capabilities: [100] Virtual Channel
         Capabilities: [128] Power Budgeting <?>
         Capabilities: [600] Vendor Specific Information: ID=0001 Rev=1 Len=024 <?>
         Kernel driver in use: vfio-pci

     GPU audio...

     root@unRAID:~# lspci -v -s 83:00.1
     83:00.1 Audio device: NVIDIA Corporation GK208 HDMI/DP Audio Controller (rev a1)
         Subsystem: Device 196e:1119
         Flags: bus master, fast devsel, latency 0, IRQ 64, NUMA node 1
         Memory at f3f7c000 (32-bit, non-prefetchable) [size=16K]
         Capabilities: [60] Power Management version 3
         Capabilities: [68] MSI: Enable- Count=1/1 Maskable- 64bit+
         Capabilities: [78] Express Endpoint, MSI 00
         Kernel driver in use: vfio-pci
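     For anyone wanting to replicate this, a minimal sketch of the guest-side fix described in the UPDATE above, run inside the Ubuntu guest (the lspci vendor filter 10de is just a convenient way to find the passed-through Nvidia functions; exact guest addresses may differ):

     # create the modprobe config with the same option OE/LE uses
     echo "options snd-hda-intel enable_msi=1" | sudo tee /etc/modprobe.d/snd-hda-intel.conf
     sudo reboot
     # after the reboot, confirm MSI is enabled on the Nvidia HDMI audio function
     lspci -v -d 10de: | grep MSI    # expect "MSI: Enable+" once the option has taken effect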
    1 point
  3. Here's some good info on how Docker works: https://stackoverflow.com/questions/50413405/how-volume-device-mapping-works-on-docker A different way of explaining it, by DigitalOcean: https://www.digitalocean.com/community/tutorials/how-to-share-data-between-docker-containers Here is an interesting article, but you have to sign in (free acct) to read the whole thing: https://www.computerweekly.com/feature/Docker-storage-101-How-storage-works-in-Docker
    1 point
  4. Much better; LSI are currently the most recommended HBAs.
    1 point
  5. I thought it was working with stereo sound?
    1 point
  6. Never use the Marvell controller (the first 4 white ports); it is known for constantly dropping disks on those boards.
    1 point
  7. Worked! Enjoy the beer. What's the other plugin? I know of the IPMI plugin for unRAID 6.1+, but I didn't know it could control the fans as well?
    1 point
  8. 4400 / 16 = 275MB/s vs the approximately 190MB/s max with the SAS2008 on an x4 slot; the question is whether your disks are faster than that. You didn't mention what disks you have. Say, for example, you have 4TB WD Reds: they max out at around 175MB/s, so either option would perform the same. If you have faster disks or want some spare bandwidth for future upgrades, then get an expander.
    1 point
  9. SAS3008 + SAS2 expander: up to 4400MB/s max usable bandwidth. SAS3008 + LSI SAS3 expander: up to around 6000MB/s max. Then divide that by the number of connected disks.
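     A quick worked example, assuming 16 connected disks: 4400 / 16 = 275 MB/s per disk with a SAS2 expander, and 6000 / 16 = 375 MB/s per disk with an SAS3 expander; both are comfortably above the ~175MB/s that a typical 4TB WD Red can sustain.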
    1 point
  10. DNS via container name doesn't handle uppercase; it has to be all lowercase. You need to change the container name to bitwarden.
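     A minimal sketch of the fix, assuming the container is currently named Bitwarden and the proxy container is named letsencrypt with nslookup available (both names are examples):

     docker rename Bitwarden bitwarden             # the embedded DNS lookup fails with uppercase names, so use all lowercase
     docker exec letsencrypt nslookup bitwarden    # from the proxy container, confirm the name now resolves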
    1 point
  11. Yes, in that case you can have a little bottleneck, depending on the disks, but it's still good for around 190MB/s per disk.
    1 point
  12. Thanks for the gratitude. I'm glad that this tool is useful to others. You can take a look at the commit log on github: https://github.com/Josh5/unmanic/commits/master Anything committed to the master branch will be automatically built and tagged as "josh5/unmanic:latest". I have not yet set up any sort of release notes, but as a rule I do my best to ensure my commit messages on github explain the reason for the code change and what it is fixing (as opposed to just saying what was changed, as many people do).
    1 point
  13. This situation shouldn't happen with the latest image (it was a bug in an earlier release). I'm assuming you are running the latest image, right? If so, please can you follow the procedure below:
    1 point
  14. Not a problem. The expander + 3008 would provide more bandwidth than a 2008 16i, but the current config, using the onboard SATA ports, will perform just as well, assuming both HBAs are on CPU slots.
    1 point
  15. Sharing some notes on how I got my grafana subdomain to work with the reverse proxy in letsencrypt. Hopefully this helps someone with a similar setup. I am simply a journeyman when it comes to nginx, encryption, and docker, so take my cobbling with a grain of salt and make sure you back up your .conf files before you screw around! I have the grafana container created by grafana and the letsencrypt container from linuxserver, both in the same proxy network I set up.

     For most of my other subdomains the proxy works fine with the conf line:

     proxy_pass http://$upstream_[app]:[port];

     However, for whatever reason that did not work with Grafana for me. I commented out that proxy_pass line and changed it to the following, where the IP is the internal IP for unRAID:

     proxy_pass http://192.168.1.20:3000;

     In the settings for the Grafana container I changed my GF_SERVER_ROOT_URL to:

     https://[mydomain].com

     Of course, I added grafana to the letsencrypt subdomain list. An additional configuration I added to all of my subdomain .conf files to force http to https was a new server block as follows:

     server {
         listen 80;
         listen [::]:80;
         server_name [subdomain].*;
         return 301 https://$host$request_uri;
     }

     So now, inside and outside my network, going to grafana.[mydomain].com sends me to the Grafana login page.
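     If it's useful, a quick way to sanity-check the http-to-https redirect from a shell (grafana.[mydomain].com stands in for the real subdomain):

     curl -sI http://grafana.[mydomain].com | head -n 3    # expect a 301 with a Location: https://... header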
    1 point
  16. This plugin is awesome, so easy to use! Thanks!!
    1 point
  17. I'm sorry. I had apparently put the wrong tag in the template. The latest template should fix the problem. If you're still having issues, replace the "nomariadb" tag with "latest-nomariadb".
    1 point
  18. Are you using a custom network interface? This:

     set $upstream_bitwarden Bitwarden

     will not work until you do. With a custom docker network interface, internal DNS will translate "Bitwarden" to 10.10.8.28, but it won't work with the standard bridge interface.
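     A minimal sketch of setting that up, assuming the reverse proxy container is named letsencrypt and proxynet is just an example network name:

     docker network create proxynet                 # user-defined bridge; Docker's embedded DNS only works on these
     docker network connect proxynet bitwarden      # attach the Bitwarden container (note the lowercase name)
     docker network connect proxynet letsencrypt    # attach the reverse proxy as well
     docker network inspect proxynet                # both containers should be listed before the upstream will resolve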
    1 point
  19. It will still queue it up though, unless it doesn't have 2ch audio; I think then it'll mux that and poop it out (at least that's what I get from the info the option gives).
    1 point
  20. Here is how the audio settings should be configured:
    1 point
  21. For some reason, in the weeks after installing the latest unraid (6.6.3), aspects of the docker interface have been really slow (loading docker apps, restarting, updating them, etc). It takes almost 2 minutes to load the list of all of my docker apps (I have about 14). It used to take about 5 seconds. On top of that, my web server (Letsencrypt) and some other docker apps are behaving very weirdly and slowly. Some of them crash after just a few hours and most need a reboot really often to keep functioning. Container logs aren't showing any useful information either.

     I got a warning on my server a week ago saying that something had persisted from the beta (I never even used the beta) and that I needed to delete my entire docker.img and reinstall, otherwise I would most certainly face future corruption. I just checked again and this warning is still here. I guess that's what maybe could be happening, even though I never used the beta. I was just sure that the warning was a bug, but now it's looking to be a legitimate warning and maybe I am facing some kind of corruption? So how do I delete my entire docker installation, but still keep all of the settings for my docker apps?

     __________________________________________

     EDIT: Got it sorted.
     1. Backup the 'appdata' folder using Plugins -> CA Appdata Backup (not really necessary, but still good to do).
     2. Settings -> Docker -> click the Advanced button -> then opt to delete the docker.img.
     3. Start docker again and download the docker.img. All of my docker apps are gone at this point.
     4. Recreate the custom network interface that I was running 90% of my dockers on with reverse proxy, with the command: docker network create <name_here>
     5. Go to the Apps tab -> sort by 'previous apps' and then download all of the apps that I had. No need to change any settings; they all work perfectly and are still reverse proxied as soon as my Letsencrypt container is back up and running.
     6. Reboot server.
     7. Crack open a beer.

     Piece of cake. Hopefully this solves the problems with my containers crashing and not working after a few hours. At least I am not getting any warning any more on the docker settings page.

     EDIT 2: Can confirm 4-5 months later that it's working fine with no problems at all, and none of my containers crash randomly any more.
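     If it helps anyone, a couple of quick checks after recreating the network and re-downloading the apps (the network name is whatever you passed to docker network create):

     docker network ls    # the recreated custom network should be listed again
     docker ps            # the containers restored from 'previous apps' should all be up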
    1 point
  22. For any others looking to do the same:
     1. To see USB devices: lsusb
     2. Found the one, for me: /dev/bus/usb/001/004, 0bda:2838 (flightradar DVB-T dongle)
     3. Ran this to get the serial number:
        udevadm info -a -n /dev/bus/usb/001/004 | grep '{serial}' | head -n1
        Response: ATTR{serial}=="00000001"
     4. Added a new file in /etc/udev/rules.d named 99-usb-rules.rules:
        vi /etc/udev/rules.d/99-usb-rules.rules
        with the content:
        SUBSYSTEM=="usb", ATTRS{idVendor}=="0bda", ATTRS{idProduct}=="2838", ATTRS{serial}=="00000001", SYMLINK+="flightradar"
     5. To get this to persist through reboots, I copied the file to /boot/config/rules.d/99-usb-rules.rules:
        cp /etc/udev/rules.d/99-usb-rules.rules /boot/config/rules.d/99-usb-rules.rules
     6. Then in the file /boot/config/go I added commands to copy it back on boot:
        vi /boot/config/go
        Add:
        cp /boot/config/rules.d/99-usb-rules.rules /etc/udev/rules.d/99-usb-rules.rules
        chmod 644 /etc/udev/rules.d/99-usb-rules.rules
        (not sure if the chmod is required, but the permissions now match the other files in the folder)

     Not sure if this is run before USB is initialized, so you might have to reinsert the device after boot to get the symlink recognized. I will test it and report.

     EDIT: Seems like it is not recognized on boot.

     EDIT2: It is working!
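     If you want to test a new rule without rebooting or re-plugging the dongle, something along these lines should work (the symlink name matches the rule above):

     udevadm control --reload-rules    # pick up the new rules file
     udevadm trigger                   # re-run the rules against already-connected devices
     ls -l /dev/flightradar            # the symlink should now point at the DVB-T dongle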
    1 point
  23. Go to the settings page for your TM share, and set AFP Security Settings/Volume dbpath to a directory that will persist on your cache drive (assuming you have a cache drive). TM constantly (well, every 10 minutes) checks info stored in some TM database files. If these are located on your array disk, the disk will spin up. By configuring the share as above, those files are actually stored on the cache drive. Not only does that prevent the array disk from spinning up, the whole operation is also much faster. In fact, I recommend you do the same for ALL your AFP shares, as it greatly improves general file browsing performance (ever waited 5 minutes for a share directory listing to be displayed in Finder?). This is from the unRAID help for Volume dbpath:
    1 point
  24. Example UCD Post

     Here's where I might include a short intro about how happy I am with my new unRAID server, how easy it was to build, how I justified it to the wife, etc. I might also mention the other devices I use in conjunction with my server, such as a fully gigabit network, a custom HTPC, my smart-phone, etc. Live product links such as I've added below are optional, but widely appreciated by others who may wish to copy your build.

     OS at time of building: unRAID 4.7 Pro
     CPU: 2.7 GHz AMD Sempron 140
     Motherboard: Biostar A760G M2+
     RAM: 2 GB Kingston DDR2 800
     Case: Antec 902
     Drive Cage(s): ICY DOCK MB455SPF-B (3)
     Power Supply: CORSAIR Builder Series CMPSU-500CX 500W
     SATA Expansion Card(s): Supermicro AOC-SASLP-MV8, PCI-Express x1 Controller Card (Silicon Image SIL3132)
     Cables: SATA Cables (2), 3ware Serial Attached SCSI CBL-SFF8087OCF-05M (2), Molex Splitters (6)
     Fans: All stock fans
     Parity Drive: 2 TB Green EARS
     Data Drives: 2 TB WD Green EARS (2), 1.5 TB WD Green EADS (2), 500 GB Seagate
     Cache Drive: None
     Total Drive Capacity: 15 Drives
     Primary Use: Data storage, media streaming to HTPC and other computers
     Likes: Very quiet, runs cool, impresses friends
     Dislikes: Bright fans can be annoying, and only the large rear fan LED can be turned off easily.
     Add Ons Used: preclear, unMenu, cache_dirs
     Future Plans: Add more data drives, add cache drive, install SNAP, install SABnzbd, build a second server
     Boot (peak): 212 W
     Idle (avg): 50 W
     Active (avg): 120 W
     Light use (avg): 52 W

     Edit: I've owned the server for one month now and run into a few problems. First off, my RAM turned out to be bad. See this link for details (link to the thread in which you asked for help). Guess I should have run memtest when I first built my server! Secondly, my 500 GB Seagate drive finally bit the dust. No real problem there, I just bought a new 2 TB WD EARS, installed the jumper on pins 7/8, precleared it, and installed it in the server. unRAID rebuilt the 500 GB drive's data onto the new 2 TB drive in about 10 hours. I sure am glad I opted for the hot swap cages, as they made replacing the drive so much easier.

     Edit2: It has now been 6 months and my server is running smoothly. I installed SNAP and a few other add-ons with no trouble. I've added several more drives so that I'm now up to 11 drives, and I just ordered another that I see on sale today in the Good Deals forum. My digital hoarding has progressed to a clinically significant stage; my wife is researching treatment options. I keep telling her I'm fine.

     Edit3: One year has passed. I lost my wife, my car, and my job because I spent hours each day browsing the Good Deals forum looking for the best deal on a new drive, but my server is still running great! Its gentle hum and warm glow gets me through the lonely nights.

     Edit4: I stopped shaving... I stopped showering... I have no more friends... even the neighbor's dog keeps its distance. I go to bed mumbling about disks, and connectors, and drive trays... The forum posts are the only company I have at night when I can't sleep. I've become a recluse... I pore over sales catalogs and mailings looking for discount codes. I live for the monthly parity check. I spend my days copying the same files to and from the disks, over and over again... It has been months since I actually watched a movie... but my server runs like a dream!
    1 point