Leaderboard

Popular Content

Showing content with the highest reputation on 09/23/20 in all areas

  1. Sadly, you probably just got lucky - that is, unless PIA are finally sorting their sh*t out. In any case, the multi remote line code should help. As mentioned above, I am working on next-gen now and it's going well; I MIGHT have something for people to test by the end of today. P.S. LOTS more endpoints support port forwarding on next-gen - like dozens!
    6 points
  2. Sweet! UUD 1.4 will be out soon. @testdasi You may want to specify that UUD version 1.3 is currently what's added. Thanks for officially integrating the UUD as another option within GUS. This is really cool. @SpencerJ Are you aware of the awesome work that @testdasi did to make a single docker solution for InfluxDB/Telegraf/Grafana? He just integrated the Ultimate UNRAID Dashboard. It is basically a single-docker solution for everything, for users who appreciate/need that kind of out-of-the-box setup. The timing of his work couldn't have been better!
    2 points
  3. Thanks. It's done. Thank you so much, I'm grateful!
    2 points
  4. Update (23/09/2020): Grafana Unraid Stack changes: exposed the InfluxDB RPC port and changed it to a rarer default value (58083) instead of the original, common 8088; added falconexe's Ultimate UNRAID Dashboard. Thanks. It's done. The GUS default dashboard is based on a Threadripper 2990WX, so you will have to customize it to suit your own exact hardware. Also give UUD a try to see if you like that layout more.
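If you ever need to replicate that change by hand, here is a hedged sketch of the relevant influxdb.conf setting (assuming InfluxDB 1.x, where the top-level bind-address is the backup/restore RPC listener that defaults to 8088; the GUS container may well set this another way, e.g. via an environment variable):

# influxdb.conf - move the RPC listener off the common default 8088
bind-address = ":58083"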
    2 points
  5. @testdasi please do. Love this.
    2 points
  6. We can't do that on this forum; we can only recommend a post so it appears at the top, and only mods can do that. It's really not a solution in this case, but I'll do it in case another user has the same issue before it's fixed and finds this thread.
    2 points
  7. Turbo Write, technically known as "reconstruct write" - a new method for updating parity. JonP gave a short description of what "reconstruct write" is, but I thought I would give a little more detail: what it is, how it compares with the traditional method, and the ramifications of using it.

First, where is the setting? Go to Settings -> Disk Settings, and look for Tunable (md_write_method). The 3 options are read/modify/write (the way we've always done it), reconstruct write (Turbo write, the new way), and Auto, which is something for the future but is currently the same as the old way. To change it, click on the option you want, then the Apply button. The effect should be immediate.

Traditionally, unRAID has used the "read/modify/write" method to update parity, to keep parity correct for all data drives. Say you have a block of data to write to a drive in your array, and naturally you want parity to be updated too. In order to know how to update parity for that block, you have to know the difference between this new block of data and the existing block currently on the drive. So you start by reading in the existing block and comparing it with the new block. That allows you to figure out what is different, so now you know what changes to make to the parity block - but first you need to read in the existing parity block. You apply the changes you figured out, resulting in a new parity block to be written out. Now you want to write out the new data block and the parity block, but the drive head is just past the end of the blocks because you just read them. So you have to wait a long time (in computer time) for the disk platters to rotate all the way back around, until they are positioned to write to that same block. That platter rotation time is what makes this method take so long, and it's the main reason why parity writes are so much slower than regular writes.

To summarize, for the "read/modify/write" method, you need to:
* read in the parity block and read in the existing data block (can be done simultaneously)
* compare the data blocks, then use the difference to change the parity block to produce a new parity block (very short)
* wait for platter rotation (very long!)
* write out the parity block and write out the data block (can be done simultaneously)

That's 2 reads, a calc, a long wait, and 2 writes.

Turbo write is the new method, often called "reconstruct write". We start with that same block of new data to be saved, but this time we don't care about the existing data or the existing parity block. We can immediately write out the data block - but how do we know what the parity block should be? We issue a read of the same block on all of the *other* data drives, and once we have them, we combine all of them plus our new data block to give us the new parity block, which we then write out! Done!

To summarize, for the "reconstruct write" method, you need to:
* write out the data block while simultaneously reading in the data blocks of all other data drives
* calculate the new parity block from all of the data blocks, including the new one (very short)
* write out the parity block

That's a write and a bunch of simultaneous reads, a calc, and a write, but no platter rotation wait! Now you can see why it can be so much faster. (A toy sketch of the parity arithmetic follows this item.)

The upside is it can be much faster. The downside is that ALL of the array drives must be spinning, because they ALL are involved in EVERY write. So what are the ramifications of this?
* For some operations, like parity checks, parity builds, and drive rebuilds, it doesn't matter, because all of the drives are spinning anyway.
* For large write operations, like large transfers to the array, it can make a big difference in speed!
* For a small write, especially at an odd time when the drives are normally sleeping, all of the drives have to be spun up before the small write can proceed.
* And what about those little writes that go on in the background, like file system housekeeping operations? EVERY write at any time forces EVERY array drive to spin up. So you are likely to be surprised at odd times when checking on your array, expecting all of your drives to be spun down, and finding every one of them spun up, for no discernible reason.
* So one of the questions to be faced is: how do you want your various write operations to be handled? Take a small scheduled backup of your phone at 4 in the morning. The backup tool determines there's a new picture to back up, so it tries to write it to your unRAID server. If you are using the old method, the data drive and the parity drive have to spin up, then this small amount of data is written, possibly taking a couple more seconds than Turbo write would take. It's 4am - do you care? If you were using Turbo write, then all of the drives will spin up, which probably takes somewhat longer than any time saved by using Turbo write to save that picture (but a couple of seconds faster in the save). Plus, all of the drives are now spinning, uselessly.
* Another possible problem: if you were in Turbo mode while watching a movie streaming to your player, and a write kicks in to the server and starts spinning up ALL of the drives, you get that well-known pause and stuttering in your movie. Who wants to deal with the whining that starts then?

Currently, you only have the option to use the old method or the new (currently the Auto option means the old method). But the plan is to add a true Auto option that will use the old method by default, *unless* all of the drives are currently spinning. If the drives are all spinning, then it slips into Turbo. This should be enough for many users. It would normally use the old method, but if you planned a large transfer or a bunch of writes, then you would spin up all of the drives - and enjoy faster writing.

Tom talked about that Auto mode quite awhile ago, but I'm rather sure he backed off at that time, once he faced the problems of knowing when a drive is spinning, and of detecting it without noticeably affecting write performance, which would ruin the very benefits we were trying to achieve. If on every write you have to query each drive for its status, then you will noticeably impact I/O performance. So to maintain good performance, you need another function working in the background keeping near-instantaneous track of spin status, and providing a single flag for the writer to check - whether they are all spun up or not - to know which method to use.

So that provides 3 options, but many of us are going to want tighter and smarter control of when it is in either mode. Quite awhile ago, WeeboTech developed his own scheme of scheduling. If I remember right (and I could have it backwards), he was going to use cron to toggle it twice a day, so that it used one method during the day and the other method at night. I think many users may find that scheduling it satisfies their needs: Turbo when there's lots of writing, old style overnight and when they are streaming movies.

For awhile, I did think that other users, including myself, would be happiest with a Turbo button on the Main screen (and Dashboard). Then I realized that that's exactly what our Spin up button would be, if we used the new Auto mode. The server would normally be in the old mode (except for times when all drives were spinning). If we had a big update session, backing up or downloading lots of stuff, we would click the Turbo / Spin up button and would have Turbo write, which would then automatically time out when the drives started spinning down, after the backup session or transfers are complete.

Edit: added what the setting is and where it's located (completely forgot this!)
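A toy bash sketch of the parity arithmetic described above (purely illustrative, not Unraid code; it treats one block position across 3 hypothetical data drives as single bytes):

#!/bin/bash
d1=0xA5; d2=0x3C; d3=0x0F                        # same-position blocks on three data drives
parity=$(( d1 ^ d2 ^ d3 ))                       # parity block is the XOR of all data blocks
new_d2=0x77                                      # new data to be written to drive 2
rmw=$(( parity ^ d2 ^ new_d2 ))                  # read/modify/write: needs old parity + old data
recon=$(( d1 ^ new_d2 ^ d3 ))                    # reconstruct write: needs the other drives' blocks
printf 'rmw=0x%X recon=0x%X\n' "$rmw" "$recon"   # both print 0xDD

Both methods land on the identical parity block; they differ only in which blocks have to be read first, which is where the platter-rotation wait comes from.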
    1 point
  8. Yep. Specified that it's 1.3 now. I'll keep an eye out for v1.4.
    1 point
  9. Thanks again mate! We are in business now - the W10 VM is started and it's running just as well as bare metal. IMPRESSIVE!!! I'll get this working for now, then move on to Plex/Radarr etc. tomorrow - now, before the Mrs kills me, as an hour of tinkering here has turned into a solid 6-hour stint, haha!
    1 point
  10. Sorry, I was reviewing the motherboard manual, and no, you can't select the primary display. It's usually the card in the first slot, so you could try physically swapping your 2080 Ti and 1650 in their slots and see if the 1650 comes up as primary. If you do this, it will most likely swap around their locations within the IOMMU groups and move them to different addresses, so you will need to change your binding and the VM GPU assignment. But since you have it working, you can decide if you really need to admin it locally, or remotely via the web interface. In either case, I'm glad you have it working and glad I could help you some.
    1 point
  11. Thanks a lot! After deleting appdata/Graf... and reinstalling the updated GUS version with the new port for InfluxDB, it works perfectly! Time to look at how it works now ;-)!
    1 point
  12. @wcg66 pihole huh. I can relate to that, and perhaps that does have something to do with some Roon discovery issues. Short story for ya! I was also running pihole in a docker. I was doing it because I wanted the DNS-level blocking; it's terrific, if a bit restrictive. I also had pihole doing DHCP because my router, at the time, didn't support manually specified DNS servers. Everything was running swimmingly, until a power outage, and my UPS didn't make it through. On restart, Unraid wasn't getting an IP for its main interface from the pihole docker! Duh. Ended up actually just buying a Raspberry Pi and running pihole as intended, and never looked back. Also had to upgrade the router to one that would work with Century Link fiber vlan201 and allow me to specify DNS (a Netgear R7000 with some firmware from somewhere). Now that the setup is as it is, Roon, and most everything else, works fine. I haven't had to specify any special ports for Roon; the R7000 is doing routing, while pihole is DNS. I changed Unraid to a static IP.
    1 point
  13. Yeah, they fixed it maybe a day later, but beta 7 added a lot of stuff under the hood that may have added to the issues now. Also, no full install can be pulled for beta 7 or beta 8; the last one available was for beta 6.
    1 point
  14. I was able to, but you are not alone on this. On the AMD hackintosh Discord a lot of people are having issues with updating to the beta. I've heard a couple of different reasons why it may not update: one was a broken seal on the main partition, and the second theory was something to do with the secure boot support added to OpenCore. I updated with no issues... that's for beta 7 and beta 8.
    1 point
  15. I was not aware! Thank you for letting me know, and big thanks to @testdasi for sharing. I will include this thread in the monthly newsletter.
    1 point
  16. I just did this too, and thought I'd come back and comment the same.
    1 point
  17. This info was exactly what I was looking for. I removed the regex, as it was still showing the 0 RPM ones since they did not come through as null. I ended up liking this better; as you stated, it gives me more control! Next up: tackling the CPU info, as I have 48 threads! Thanks for all your hard work on this. I have long dreamed of having all this information at my fingertips, and these fill out my new 34" 1440p ultrawide quite nicely!
    1 point
  18. Yeah, if you want parity (I recommend it, but remember that parity is not a backup), then that looks good. With parity, you will want to make sure turbo write is on (I don't know if that is the default by now, but it most likely is). If you are initially transferring data from another source, some people don't bother with parity until everything has been transferred, to speed things up. If you don't have anything to transfer, or very little, then it doesn't hurt to set up parity now. You can set up the NVMe(s) as your cache and then run docker/VMs from there. It's really your preference. If you decide to use both NVMes as cache, then you can choose raid0 or raid1 (raid0 will get you 4TB of storage, whereas raid1 will give you a 2TB mirror, which can protect you if you lose one of the devices; raid0 cannot). I would recommend the backup plugin on CA, which can back up your docker containers to your array, so if things do go south you have a backup of your containers. As for the single SSD, it's your call. You could utilize it to run your VM, or, with the beta version of Unraid, you can set it up as another pool.
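For reference, a hedged sketch of converting a btrfs cache pool's profile from the command line (Unraid's GUI balance options do the same thing; the mount point assumes the default cache pool):

# convert both data and metadata to raid1 (mirror); use -dconvert=raid0 for striping instead
btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt/cache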
    1 point
  19. That's what I do. Granted, I just use mine headless - I still have the monitor attached to the iGPU for diagnostics and to mess with the BIOS. I don't even boot up into GUI mode - all of my admin work is done on another computer (or even my phone in a pinch). Most of the time I have Win10 running within the VM and output going to the monitor with its dedicated mouse/keyboard. I'm assuming you're running the beta version - 6.9? With that new of hardware, I'm sure you must be. Funny enough, I don't remember if there is an option to make the VM autostart on boot; otherwise you will need a way to start the VM - but your iPad will work. I personally don't have anything on mine set to autostart - simply because if the server restarted, it wasn't because of me, and I want to track down what caused it before potentially making things worse. It's always better to ask questions when in doubt rather than [potentially] making reckless mistakes, such as when it comes to your data.
    1 point
  20. I don't know if you've even set up your server yet, let alone booted Unraid. I would suggest looking up SpaceInvaderOne's videos. He has videos on just about everything, and they are extremely helpful. You would probably want to start with getting the Nvidia build (through CA). Then you would want to isolate the 2080 Ti card that you want the VM to use (again, look up his videos) - this just tells Unraid not to use this card / to ignore it. Then, when you set up the VM (again, watch the video), you are given the option to use the 2080 Ti card. Now for general Unraid use, with the Nvidia build, you will be able to manage Unraid via the 1650 card (assuming you are booting into GUI mode) and still use it for transcoding. This will give you an option to manage the system if you don't have another computer available. In my situation, I have a monitor with two inputs - one from my Win10 VM and another from the iGPU that Unraid uses. So I can manage Unraid with one input and interact with Windows on the other. I also pass through several USB ports to the VM and attach the keyboard and mouse to those for Win10 use. I run my Unraid "headless" in a sense, where I don't really use the monitor and have no keyboard for Unraid use. Hope this gives you a little information to get started. Good luck!
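On the isolation step: a hedged sketch of one common way to stub a card from Unraid, binding it to vfio-pci via the kernel command line in syslinux.cfg (the device IDs below are placeholders; look up your card's vendor:device IDs under Tools -> System Devices):

append vfio-pci.ids=10de:xxxx,10de:yyyy initrd=/bzroot

Newer Unraid releases can also bind devices with checkboxes under Tools -> System Devices, which avoids hand-editing syslinux.cfg.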
    1 point
  21. I was referring to using everything (Plex, Sonarr, Radarr, etc.) within docker so that your system (Unraid) can better utilize the resources. I'm assuming you are referring to bypassing the 2-transcode limit; yeah, there you are on your own, or get the P2000 card to handle unlimited transcodes. It really depends on what you are trying to transcode and how many simultaneous streams you expect to encounter. If you don't anticipate any 4K stuff, then GPU/CPU would be fine (2 via GPU, the rest with CPU). You can, but GPU is always better than CPU (unless you REALLY care about quality, but that's another debate). No problem, we were all there at one point.
    1 point
  22. What I do in these cases is search the syslog for "thread"; this will find all parity check/sync/rebuild info, since those entries all start with "md recovery thread".
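Assuming the stock Unraid syslog location, that search is a one-liner:

grep -i "thread" /var/log/syslog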
    1 point
  23. I've been connected to the same endpoint for 5 days without issue
    1 point
  24. On the Docker tab, in advanced view, you can force update any single container.
    1 point
  25. Yes, this is possible. I moved in 2018 from a Linux installation on an Intel NUC to docker on Unraid and did not lose anything from my metadata. @SpaceInvaderOne did a video about transferring Plex installations from one container to another (docker). Not completely your case, but it should help enough to get your head around it. Plex2Plex by Spaceinvader One
    1 point
  26. Icy Dock FLEX-FIT Trio with room for an additional 2.5" drive
    1 point
  27. Yeah, I was just ignorant of the fact that InfluxDB uses port 8088; everything seems to be working perfectly now. I do appreciate your help! This container is awesome.
    1 point
  28. In case anybody finds this helpful: I always found the telegraf config file full of so much that is not used. This is mine, where it has just the inputs that I need, which makes it easier to work on. Just note I don't have a UPS, so I don't use that setting.

# Global Agent Configuration
[agent]
  hostname = "box"
  flush_interval = "15s"
  interval = "15s"

# Input Plugins
[[inputs.cpu]]
  percpu = true
  totalcpu = true
  collect_cpu_time = false
  report_active = false
[[inputs.disk]]
  ignore_fs = ["tmpfs", "devtmpfs", "devfs"]
[[inputs.io]]
[[inputs.mem]]
[[inputs.net]]
[[inputs.system]]
[[inputs.swap]]
[[inputs.netstat]]
[[inputs.processes]]
[[inputs.kernel]]
[[inputs.diskio]]
  device_tags = ["ID_SERIAL"]
  skip_serial_number = false
[[inputs.docker]]
[[inputs.sensors]]
  remove_numbers = true
[[inputs.smart]]
  attributes = true

# Output Plugin InfluxDB
[[outputs.influxdb]]
  database = "telegraf"
  urls = [ "http://192.168.1.58:8086" ]
  username = "*******"
  password = "*******"
    1 point
  29. Don't run manual commands inside the container unless we ask you to. We don't support that.
    1 point
  30. Here it is, copied from a running VM, passing through: the GPU (domain='0x0000' bus='0x83' slot='0x00' function='0x0'), audio from the GPU (domain='0x0000' bus='0x83' slot='0x00' function='0x1'), audio from the motherboard (domain='0x0000' bus='0x00' slot='0x1b' function='0x0'), 2x CPU, a mouse/keyboard dongle (vendor id='0x045e'), a webcam (vendor id='0x046d'), and a USB PCI card (domain='0x0000' bus='0x84' slot='0x00' function='0x0'); plus 2 network bridges: one accessing the internet (br0), set to be the built-in en0 in macOS (this is an ethernet-wifi bridge, TP-Link TL-WR802N, so the host has internet access thanks to this), and one to communicate with the host only (br1), using another ethernet port (not physically connected).

<domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
  <name>Catalina</name>
  <uuid>b1bd7672-d29e-e48e-d07a-2fb0dc937878</uuid>
  <memory unit='KiB'>33554432</memory>
  <currentMemory unit='KiB'>33554432</currentMemory>
  <memoryBacking>
    <nosharepages/>
  </memoryBacking>
  <vcpu placement='static'>16</vcpu>
  <cputune>
    <vcpupin vcpu='0' cpuset='0'/>
    <vcpupin vcpu='1' cpuset='1'/>
    <vcpupin vcpu='2' cpuset='2'/>
    <vcpupin vcpu='3' cpuset='3'/>
    <vcpupin vcpu='4' cpuset='4'/>
    <vcpupin vcpu='5' cpuset='5'/>
    <vcpupin vcpu='6' cpuset='6'/>
    <vcpupin vcpu='7' cpuset='7'/>
    <vcpupin vcpu='8' cpuset='8'/>
    <vcpupin vcpu='9' cpuset='9'/>
    <vcpupin vcpu='10' cpuset='10'/>
    <vcpupin vcpu='11' cpuset='11'/>
    <vcpupin vcpu='12' cpuset='12'/>
    <vcpupin vcpu='13' cpuset='13'/>
    <vcpupin vcpu='14' cpuset='14'/>
    <vcpupin vcpu='15' cpuset='15'/>
  </cputune>
  <os>
    <type arch='x86_64' machine='pc-q35-5.0'>hvm</type>
    <loader readonly='yes' type='pflash'>/mnt/user/domains/Catalina/OVMF_CODE.fd</loader>
    <nvram>/mnt/user/domains/Catalina/OVMF_VARS.fd</nvram>
  </os>
  <features>
    <acpi/>
    <apic/>
  </features>
  <cpu mode='host-passthrough' check='none'/>
  <clock offset='utc'>
    <timer name='rtc' tickpolicy='catchup'/>
    <timer name='pit' tickpolicy='delay'/>
    <timer name='hpet' present='no'/>
  </clock>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>restart</on_crash>
  <devices>
    <emulator>/usr/local/sbin/qemu</emulator>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2' cache='none' io='native' discard='unmap'/>
      <source file='/mnt/user/domains/Catalina/opencore.qcow2'/>
      <backingStore/>
      <target dev='hdc' bus='sata'/>
      <boot order='1'/>
      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
    </disk>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='none' io='native' discard='unmap'/>
      <source file='/mnt/user/domains/Catalina/vdisk3.img'/>
      <target dev='hdd' bus='sata'/>
      <address type='drive' controller='0' bus='0' target='0' unit='3'/>
    </disk>
    <controller type='pci' index='0' model='pcie-root'/>
    <controller type='pci' index='1' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='1' port='0x8'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0' multifunction='on'/>
    </controller>
    <controller type='pci' index='2' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='2' port='0x9'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
    </controller>
    <controller type='pci' index='3' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='3' port='0xc'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x4'/>
    </controller>
    <controller type='pci' index='4' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='4' port='0x13'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
    </controller>
    <controller type='pci' index='5' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='5' port='0xa'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
    </controller>
    <controller type='pci' index='6' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='6' port='0xb'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x3'/>
    </controller>
    <controller type='pci' index='7' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='7' port='0xd'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x5'/>
    </controller>
    <controller type='pci' index='8' model='pcie-to-pci-bridge'>
      <model name='pcie-pci-bridge'/>
      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
    </controller>
    <controller type='virtio-serial' index='0'>
      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
    </controller>
    <controller type='sata' index='0'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
    </controller>
    <controller type='usb' index='0' model='ich9-ehci1'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x7'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci1'>
      <master startport='0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0' multifunction='on'/>
    </controller>
    <interface type='bridge'>
      <mac address='RE:DA:CT:ED:--:--'/>
      <source bridge='br0'/>
      <model type='e1000-82545em'/>
      <address type='pci' domain='0x0000' bus='0x08' slot='0x01' function='0x0'/>
    </interface>
    <interface type='bridge'>
      <mac address='RE:DA:CT:ED:--:--'/>
      <source bridge='br1'/>
      <model type='e1000-82545em'/>
      <address type='pci' domain='0x0000' bus='0x08' slot='0x08' function='0x0'/>
    </interface>
    <serial type='pty'>
      <target type='isa-serial' port='0'>
        <model name='isa-serial'/>
      </target>
    </serial>
    <console type='pty'>
      <target type='serial' port='0'/>
    </console>
    <channel type='unix'>
      <target type='virtio' name='org.qemu.guest_agent.0'/>
      <address type='virtio-serial' controller='0' bus='0' port='1'/>
    </channel>
    <input type='mouse' bus='ps2'/>
    <input type='keyboard' bus='ps2'/>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x83' slot='0x00' function='0x0'/>
      </source>
      <rom file='/mnt/user/domains/GTXTitanBlack.dump'/>
      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0' multifunction='on'/>
    </hostdev>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x83' slot='0x00' function='0x1'/>
      </source>
      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x1'/>
    </hostdev>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x00' slot='0x1b' function='0x0'/>
      </source>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
    </hostdev>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x84' slot='0x00' function='0x0'/>
      </source>
      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
    </hostdev>
    <hostdev mode='subsystem' type='usb' managed='no'>
      <source>
        <vendor id='0x045e'/>
        <product id='0x0745'/>
        <address bus='1' device='3'/>
      </source>
      <address type='usb' bus='0' port='1'/>
    </hostdev>
    <hostdev mode='subsystem' type='usb' managed='no'>
      <source>
        <vendor id='0x046d'/>
        <product id='0x0892'/>
      </source>
      <address type='usb' bus='0' port='2'/>
    </hostdev>
    <memballoon model='none'/>
  </devices>
  <qemu:commandline>
    <qemu:arg value='-smp'/>
    <qemu:arg value='16,sockets=2,cores=8,threads=1'/>
    <qemu:arg value='-smbios'/>
    <qemu:arg value='type=2'/>
    <qemu:arg value='-cpu'/>
    <qemu:arg value='host,ucode_rev=0x513,+hypervisor,migratable=no,-erms,kvm=on,+invtsc,+topoext,+avx,+aes,+xsave,+xsaveopt,+ssse3,+sse4_2,+popcnt,+arat,+pclmuldq,+pdpe1gb,+rdtscp,+vme,+umip,check'/>
    <qemu:arg value='-set'/>
    <qemu:arg value='device.sata0-0-2.rotation_rate=1'/>
    <qemu:arg value='-set'/>
    <qemu:arg value='device.sata0-0-3.rotation_rate=1'/>
  </qemu:commandline>
</domain>
    1 point
  31. I prefer the second one personally; I find it easier to read. On a side note, it is strange that your 5 fans are all labeled FAN 01. I'll probably give this a try during the weekend. Keep up the good work. 👍
    1 point
  32. I'll also be adding dynamic support for segregating/isolating Unassigned Devices (Disks) in this next release!
    1 point
  33. VERSION 1.4 Sneak Peek. New stuff, GUI changes, and continued refinement! Alt view for RAM DIMM temps. Still deciding on which one I like more. Trying to make each topic unique looking. VOTE! Added the RAM DIMM temps graph to the Detailed Server Performance section. I like the way this one came out. Lots of tweaking, ha ha. All RAM DIMM panels use REGEX of course, so it should work on ALL UNRAID systems/architectures. They use the IPMI Plugin.
    1 point
  34. I saw that yesterday and have begun coding. I have a working incoming port on next-gen for OpenVPN; it just needs more work to make it production-ready.
    1 point
  35. Hey @testdasi I just added a compatibility section on the UUD topic (first post on page 1) and tagged you. Link:
    1 point
  36. Running g2g here and all is working again; looks like SD fixed it. Maybe some lineups need some more time, just as a note, so no update from g2g or the docker is needed.
    1 point
  37. This totally seemed to have fixed it for me as well. Over 300GB in a few hours, where earlier I would barely see a few GB before it seemingly crapped out. Thanks for the tip!
    1 point
  38. To be honest, I ran into a lot of random DNS and network connectivity issues when I was trying out running Pi Hole on UNRAID, so who knows. I now run a separate pfSense machine as a firewall and router and have never looked back 😁 It shouldn't matter at all, but one thing you could try, if you're interested in troubleshooting, is to map the ports RoonServer uses rather than using the `--net=host` setting. That way at least you could tell your router specifically which ports Roon traffic will be on (if that matters in your network environment). You won't be able to change the ports, because Roon clients don't have the option to use different port connections, but it still might be interesting. I am going to try it out just to see if it works, so I just forked steefdebruijn's image and changed up the Dockerfile to expose the ports Roon uses for discovery and connecting/transferring data (9003/udp for discovery and 9100-9200/tcp for connection and playback). I'll build it and throw it up on Docker Hub just to try it out for myself. If you think that mapping the ports would be of any use in diagnosing your connection problems and want to try it, but aren't sure how to pull the alternative version and change your UNRAID Roon docker template, just ask and I can tell you how. EDIT: So you know, it works, but it's pretty impractical to do since you have to make sure that none of the ports between 9100 and 9200 are already mapped.
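For anyone curious, a hedged sketch of what that port mapping looks like as a plain docker run (the image name and appdata path are illustrative, not necessarily the fork mentioned above; UNRAID's template fields map to the same flags):

docker run -d --name=roonserver \
  -p 9003:9003/udp \
  -p 9100-9200:9100-9200/tcp \
  -v /mnt/user/appdata/roonserver:/app/data \
  steefdebruijn/docker-roonserver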
    1 point
  39. I wouldn't count on that. This forum has a rather high percentage of 50+. The tone of this forum has a lot to do with the number of years of computer experience that are reflected here.
    1 point
  40. I just saw this Next Generation Port Forwarding on the PIA website
    1 point
  41. Thanks for the shoutout @testdasi! My goal as the developer of UUD was to get as absolutely close to "out of the box" as possible. The new version of UUD 1.3 that I posted last night uses dynamic code to handle all manner of UNRAID architecture. And I'll keep working to improve it as time goes on. Feel free to join us on the forum topic for support. It is absolutely 100% compatible with the Grafana-Unraid-Stack docker. 😁
    1 point
  42. OK guys, multi remote endpoint support is now in for this image, so please pull down the new image (this change will be rolled out to all my VPN images shortly). What this means is that the image will now loop through the entire list of, for example, PIA port forward enabled endpoints. All you need to do is edit your ovpn config file, add the remote endpoints at the top, and sort them into the order you want them to be tried. An example PIA ovpn file is below (mine):

remote ca-toronto.privateinternetaccess.com 1198 udp
remote ca-montreal.privateinternetaccess.com 1198 udp
remote ca-vancouver.privateinternetaccess.com 1198 udp
remote de-berlin.privateinternetaccess.com 1198 udp
remote de-frankfurt.privateinternetaccess.com 1198 udp
remote france.privateinternetaccess.com 1198 udp
remote czech.privateinternetaccess.com 1198 udp
remote spain.privateinternetaccess.com 1198 udp
remote ro.privateinternetaccess.com 1198 udp
client
dev tun
resolv-retry infinite
nobind
persist-key
# -----faster GCM-----
cipher aes-128-gcm
auth sha256
ncp-disable
# -----faster GCM-----
tls-client
remote-cert-tls server
auth-user-pass credentials.conf
comp-lzo
verb 1
crl-verify crl.rsa.2048.pem
ca ca.rsa.2048.crt
disable-occ

I did look at multi ovpn file support, but this is easier to do, and as OpenVPN supports multiple remote lines, it felt like the most logical approach. Note: due to the NS lookup for every remote line, and the potential failure and subsequent try of the next remote line, time to initialisation of the app may take longer. P.S. I don't want to talk about how difficult this was to shoehorn in; I need to lie down in a dark room now and not think about bash for a while :-). Any issues, let me know!
    1 point
  43. Not as far as I know. This works with Samba and ZFS or btrfs, but you need to have regular snapshots scheduled; you can then revert any file to any of those.
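A minimal sketch of a snapshot job you could schedule (via cron or the User Scripts plugin); the dataset and subvolume names below are placeholders:

#!/bin/bash
# ZFS: timestamped read-only snapshot of a hypothetical dataset
zfs snapshot tank/share@auto-$(date +%Y%m%d-%H%M)
# btrfs equivalent for a hypothetical subvolume:
# btrfs subvolume snapshot -r /mnt/pool/share /mnt/pool/.snapshots/share-$(date +%Y%m%d-%H%M)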
    1 point
  44. Alright, here is my "hacky" solution to the above problem. It works for now; if someone has a better solution, let me know. Install the User Scripts plugin (if you don't have it already), and add the following script:

#!/bin/bash
mkdir /mnt/disks/rclone_volume
chmod 777 /mnt/disks/rclone_volume

Obviously you can set the -p flag on mkdir if you need nested directories, or if you have issues with subdirectories not being there, but from trial and error on my Unraid setup, at boot (before the array starts), `/mnt/disks/` exists. Edit the script to include all the mount folders you want (if you have multiple mounts), and chmod 777 each of them. Set the above user script to run on every array start. Just to make sure my container doesn't start prior to this finishing (unsure if that can happen?), I added a random other container above my rclone container (a container that doesn't need drives to be mounted) and set a delay of 5 secs (so the rclone container waits 5 seconds). This might be unnecessary. Hope it helps someone.
    1 point
  45. The good thing to come out of all this is that you know how to set up scripts, and what they can and can't do on startup. It's pretty simple really: a script can be just a list of commands you could type at the command line but don't particularly want to type over and over again manually. You can start dockers, VMs - do pretty much anything with a script.
    1 point
  46. Set to "At startup of array" in user scripts #!/bin/bash echo "/boot/config/plugins/user.scripts/scripts/StartVMs/script" | at now Named StartVMs in user scripts #!/bin/bash printf "%s" "waiting for pfSense ..." # Change IP to match an address controlled by pfSense. # I recommend pfSense internal gateway or some address guaranteed to be up when pfSense is finished loading. # I don't use external IP's because I want my internal network and appliances to be fully available # whether the internet is actually connected or not. while ! ping -c 1 -n -w 1 192.168.1.1 &> /dev/null do printf "%c" "." done printf "\n%s\n" "pfSense is back online" virsh start VMName1 # Insert optional delay to stagger VM starts #sleep 30 virsh start VMName2
    1 point
  47. Thought I would add my banner too.
    1 point
  48. I've always been surrounded by computers and electronics, from the early days of high school working in a computer room. Since I'm a music buff (you have not seen the other wall with thousands of CDs on shelves), a musician, and a computer geek, it all has to go somewhere. Fact is, the inverted U works well for me. It's pretty funny when I have a couple of people over for LAN parties or jamming. But it works. For a building desk, I have a rolling TV cart that comes out from under the desk with an anti-static mat. I don't build nearly as many computers as I used to; in fact, I'm dumping about 5 of them (I still have 7, and a few laptops). These days, I prefer ITX and laptops unless it's a large unRAID server with many disks. Regarding the build with the desk and the window, consider that any computer near the window is subject to the elements, i.e. rapid changes in temperature, humidity, dust, etc., unless you have an exhaust fan. It's a concern for me because I live near the beach; your area may be different. Just thought I would bring it up.
    1 point