Leaderboard

Popular Content

Showing content with the highest reputation on 08/06/19 in Posts

  1. Version 2.1 has been pushed. Changelog:
     - Controller Information: Display the Parity/Disk/Cache label in the port listing if the drive is assigned
     - Controller Benchmark: Display unRAID's Parity/Disk/Cache labels in the graph
     - Update Highcharts from 6.0.7 to 7.1.2
     - Display simultaneous max throughput in the Controller Benchmark graph
     - Add "Avg" to Controller Benchmark graph labels to better clarify the value
     - Dynamically configure the Controller Benchmark graph's height based on the number of drives attached
     - When editing the drive vendor, display the drive info page instead of the home page on submit
    2 points
  2. Intent: This guide will take you through the process of passing through an entire PCI USB controller instead of passing USB devices through individually. One benefit is that USB becomes plug-and-play on your virtual machines. I have also found that if you pass through USB devices separately, the device name can change, which will cause the VM to not start; if you pass through the entire controller you will avoid this.
     Warning!!!: Please be VERY careful when doing this. You do NOT want to pass through your unRAID USB by mistake. unRAID needs that USB present to function properly.
     Prerequisites:
     - Working VM
     - CPU and BIOS that support VT-d
     - USB device (a spare USB flash drive will work)
     - Motherboard manual or paper for notes
     Guide:
     1. SSH into unRAID.
     2. Find out how many USB controllers your server has available: lspci | grep USB
        Notice that my server has 3 USB controllers.
     3. Now would be a great time to pull out the motherboard manual to take notes on (see mine below). What we are going to do is plug a USB flash drive into each spare USB slot to figure out the controller.
     4. First, let's figure out which USB bus unRAID is on: lsusb
        Note that my unRAID USB is on bus 002 (figure out which USB slot that corresponds to in your motherboard manual and write it down). **THIS MEANS I SHOULD NOT PASS THROUGH BUS 002!!!**
     5. Now take a spare USB flash drive, preferably from a different manufacturer than your unRAID USB (this will make it easier to identify). Plug it into a spare USB slot, type lsusb into your SSH session, and write down which "Bus 00#" the spare USB drive appears on. Repeat this step for all your USB slots.
     6. Now you should have identified every USB slot's bus number, so let's figure out which PCI number each bus belongs to (I have 3 USB controllers: Bus 001, 002 & 003). Replace the usb# with your bus number, e.g. bus 001 = usb1 in the code below:
        readlink /sys/bus/usb/devices/usb1
        readlink /sys/bus/usb/devices/usb2
        readlink /sys/bus/usb/devices/usb3
     7. We have just figured out the USB controller device number (in my server, all Bus 001 USB slots are part of 0000:00:1a.0). A condensed sketch of steps 4-7 appears at the end of this post.
     8. Now let's make sure this USB controller doesn't have any other device in its IOMMU group. Go to your unRAID GUI -> Tools -> System Devices -> IOMMU Groups.
     9. Match the group with the device number. As you can see below, mine is part of group 5, and it is the only device in group 5. If you have more than one device in the same group, you will most likely have to pass them through to your VM as well, YMMV: /sys/kernel/iommu_groups/5/devices/0000:00:1a.0
        As of unRAID 6.1 this step is no longer needed; unRAID does it for you automatically.
     10. Now let's bind that device number to the vfio-pci driver. Open your "go" file under config on your unRAID flash drive and add this line: /usr/local/sbin/vfio-bind 0000:00:1a.0
         Then type the same command into your SSH session to make it active, or reboot your server.
     11. You can either do step 11.a or 11.b [11.b is the easier option].
     11.a Now let's add that to your Windows 8 VM XML for passthrough:
     <domain type='kvm' id='2' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
       <name>Windows-8-Nvidia</name>
       <uuid>cc411d70-4463-4db7-bf36-d364c0cdaa9d</uuid>
       <memory unit='KiB'>4194304</memory>
       <currentMemory unit='KiB'>4194304</currentMemory>
       <memoryBacking>
         <nosharepages/>
         <locked/>
       </memoryBacking>
       <vcpu placement='static'>2</vcpu>
       <resource>
         <partition>/machine</partition>
       </resource>
       <os>
         <type arch='x86_64' machine='pc-q35-2.1'>hvm</type>
         <boot dev='hd'/>
         <bootmenu enable='no'/>
       </os>
       <features>
         <acpi/>
         <apic/>
         <pae/>
       </features>
       <cpu mode='host-passthrough'>
         <topology sockets='2' cores='2' threads='1'/>
       </cpu>
       <clock offset='localtime'/>
       <on_poweroff>destroy</on_poweroff>
       <on_reboot>restart</on_reboot>
       <on_crash>destroy</on_crash>
       <devices>
         <emulator>/usr/bin/qemu-system-x86_64</emulator>
         <disk type='file' device='disk'>
           <driver name='qemu' type='raw'/>
           <source file='/mnt/vmdisk/vm_images/default/windows8.img'/>
           <backingStore/>
           <target dev='hda' bus='virtio'/>
           <alias name='virtio-disk0'/>
           <address type='pci' domain='0x0000' bus='0x02' slot='0x05' function='0x0'/>
         </disk>
         <controller type='scsi' index='0' model='virtio-scsi'>
           <alias name='scsi0'/>
           <address type='pci' domain='0x0000' bus='0x02' slot='0x03' function='0x0'/>
         </controller>
         <controller type='usb' index='0' model='ich9-ehci1'>
           <alias name='usb0'/>
           <address type='pci' domain='0x0000' bus='0x02' slot='0x01' function='0x7'/>
         </controller>
         <controller type='usb' index='0' model='ich9-uhci1'>
           <alias name='usb0'/>
           <master startport='0'/>
           <address type='pci' domain='0x0000' bus='0x02' slot='0x01' function='0x0' multifunction='on'/>
         </controller>
         <controller type='usb' index='0' model='ich9-uhci2'>
           <alias name='usb0'/>
           <master startport='2'/>
           <address type='pci' domain='0x0000' bus='0x02' slot='0x01' function='0x1'/>
         </controller>
         <controller type='usb' index='0' model='ich9-uhci3'>
           <alias name='usb0'/>
           <master startport='4'/>
           <address type='pci' domain='0x0000' bus='0x02' slot='0x01' function='0x2'/>
         </controller>
         <controller type='sata' index='0'>
           <alias name='sata0'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
         </controller>
         <controller type='pci' index='0' model='pcie-root'>
           <alias name='pcie.0'/>
         </controller>
         <controller type='pci' index='1' model='dmi-to-pci-bridge'>
           <alias name='pci.1'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x1e' function='0x0'/>
         </controller>
         <controller type='pci' index='2' model='pci-bridge'>
           <alias name='pci.2'/>
           <address type='pci' domain='0x0000' bus='0x01' slot='0x01' function='0x0'/>
         </controller>
         <interface type='bridge'>
           <mac address='52:54:00:00:00:04'/>
           <source bridge='br0'/>
           <target dev='vnet0'/>
           <model type='virtio'/>
           <alias name='net0'/>
           <address type='pci' domain='0x0000' bus='0x02' slot='0x02' function='0x0'/>
         </interface>
         <input type='tablet' bus='usb'>
           <alias name='input0'/>
         </input>
         <memballoon model='virtio'>
           <alias name='balloon0'/>
           <address type='pci' domain='0x0000' bus='0x02' slot='0x04' function='0x0'/>
         </memballoon>
       </devices>
       <qemu:commandline>
         <qemu:arg value='-device'/>
         <qemu:arg value='ioh3420,bus=pcie.0,addr=1c.0,multifunction=on,port=1,chassis=1,id=root.1'/>
         <qemu:arg value='-device'/>
         <qemu:arg value='vfio-pci,host=01:00.0,bus=root.1,addr=00.0,multifunction=on,x-vga=on'/>
         <qemu:arg value='-device'/>
         <qemu:arg value='vfio-pci,host=00:1a.0,bus=root.1,addr=00.1'/>
       </qemu:commandline>
     </domain>
     This part of the code is needed no matter what, and you shouldn't have to change anything (see Hint #1 for more details):
       <qemu:commandline>
         <qemu:arg value='-device'/>
         <qemu:arg value='ioh3420,bus=pcie.0,addr=1c.0,multifunction=on,port=1,chassis=1,id=root.1'/>
     This part is my graphics card (yours will be different):
         <qemu:arg value='-device'/>
         <qemu:arg value='vfio-pci,host=01:00.0,bus=root.1,addr=00.0,multifunction=on,x-vga=on'/>
     This part is the code for the USB controller passthrough:
         <qemu:arg value='-device'/>
         <qemu:arg value='vfio-pci,host=00:1a.0,bus=root.1,addr=00.1'/>
       </qemu:commandline>
     HINT #1: You MUST add the following right after <qemu:commandline>, AND then you put in your code for the USB controller:
         <qemu:arg value='-device'/>
         <qemu:arg value='ioh3420,bus=pcie.0,addr=1c.0,multifunction=on,port=1,chassis=1,id=root.1'/>
     Then add your USB code, so when it's all said and done it should look like this if you are only passing through a USB controller without a graphics card (if you are passing through a graphics card as well, see my config in step 11.a for that example):
         <qemu:arg value='-device'/>
         <qemu:arg value='ioh3420,bus=pcie.0,addr=1c.0,multifunction=on,port=1,chassis=1,id=root.1'/>
         <qemu:arg value='-device'/>
         <qemu:arg value='vfio-pci,host=00:1a.0,bus=root.1,addr=00.0'/>
     HINT #2: If you are only passing through a USB controller and nothing else (GPU, etc.), then you need to change the "addr=" part to "addr=00.0". Every PCI device you pass through gets its own addr=00.#, starting at 0. In my code above, my GPU is 00.0 and my USB controller is 00.1.
     11.b Using hostdev instead of qemu:arg (quoting saarg): For me it's much easier to use the hostdev tag instead of the qemu:arg; the latter is just too much trouble when you don't know what you are doing. So for passing through a PCI(e) device with the hostdev tag, this is the starting code:
     <hostdev mode='subsystem' type='pci' managed='yes'>
       <driver name='vfio'/>
       <source>
         <address domain='0x0000' bus='0x00' slot='0x00' function='0x0'/>
       </source>
     </hostdev>
     You then have to modify some parts of it to get it to work. The good thing is that you do not have to care about which bus and address it's supposed to have in the VM; you only need to find out the host PCI address. The parts you change are bus, slot, and function. In your case it's 00:14.0. Let's break it down: 00 is the bus, so you simply exchange the two numbers after the 0x. 14 is the slot, same method as above. 0 is the function; here it's also the same method as above. So in your case the full device tag would be like this:
     <hostdev mode='subsystem' type='pci' managed='yes'>
       <driver name='vfio'/>
       <source>
         <address domain='0x0000' bus='0x00' slot='0x14' function='0x0'/>
       </source>
     </hostdev>
     After you start the VM you will see that some lines get added to the tag, but you don't have to care about those; they are created automatically. If you copy a host device tag to pass through a new device, be sure to remove the two lines created after the </source> tag, as they are specific to that VM:
       <alias name='hostdev2'/>
       <address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x0'/>
     12. All done.
     What if you only have 1 bus... To my knowledge there are two paths; I will post what JonP posted HERE below (I revised what Jon said a bit to make sense in this thread): The Easy Path The Time Consuming Path
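     As promised in step 7, here is a condensed sketch of steps 4-7: loop over the same /sys paths the guide uses and print each USB bus next to the PCI address of the controller behind it. This is just a convenience, not part of the original guide:
     # print every USB bus with the PCI address of its controller
     for u in /sys/bus/usb/devices/usb*; do
       printf '%s -> %s\n' "${u##*/}" "$(readlink "$u")"
     done
     # sample output line: usb1 -> ../../../devices/pci0000:00/0000:00:1a.0/usb1
     lsusb   # cross-check which "Bus 00#" holds your unRAID flash; never bind that controller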
    1 point
  3. My Intel CPU has QuickSync support and I use it for Plex hardware transcoding. It works very well and can handle multiple simultaneous transcodes. However, keep in mind that you need a Kaby Lake or Coffee Lake generation CPU (the 7x00, 8x00, and 9x00 series) for full 4K support. My Skylake Xeon only partially supports 4K HEVC, so I will need to upgrade for 4K. The bigger issue is that Plex does not currently handle 4K HDR content very well: colors come out washed out and faded. Whether a motherboard/CPU swap or just a GPU makes more sense depends on price and what you need from your server. A good primer on 4K with Plex: https://forums.plex.tv/t/plex-4k-transcoding-and-you-aka-the-rules-of-4k-a-faq/378203
    1 point
  4. I've implemented a new function to use the lowest md_sync_window that is at least 99.8% of max speed for the next test pass. Early testing shows it is working as expected; I'll test it overnight before releasing UTT v4.1. Thanks for the suggestion, which I used with a slight modification: I'm now comparing to an existing test result (if one exists), and only replacing it if the new result is faster. This will be in UTT v4.1 too. No pressure, and I appreciate the help, but I thought I should mention that I would like to include this fix in v4.1 if possible. My eyes start swimming when I look at this code section, and I'm having a hard time seeing what I'm doing wrong. I had a similar issue on my own server, and I reworked the code and thought it was fixed. My previous issue was that I was hard-coding the substring of the bus ID, and with my server now having 14 SCSI buses, the transition from 1-digit IDs to 2-digit IDs was throwing off the logic. So I modified the substring handling for the bus ID to work regardless of the string length (see the sketch below). I don't see how that would be affecting Daniel's report, though, since all of his SCSI IDs are single-digit. There was also an issue with the lsscsi -H output and the $SCSIs array variable under Unraid v6.6. Back when I first wrote the GetDisks function under Unraid v6.2, the first 3 entries were always strings I needed to skip, but under Unraid v6.6 the array order somehow got reversed, and now I have to skip the last 3 values. That's why I have that $maxentry-3 calculation in the DiskReport function. I've noticed that before I did the $maxentry fix ( eval Disks=\${$SCSI[@]:0:$maxentry} ), I was getting results similar to what Daniel is getting, so perhaps the bug in my logic is related to the $maxentry.
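     For anyone following along, a length-independent way to pull the host number out of lsscsi -H looks roughly like this. This is only a sketch, not the actual UTT code; it relies on lsscsi -H printing lines like "[0]  ahci" or "[13]  mpt3sas":
     # strip the brackets instead of taking a fixed-width substring
     while read -r id driver; do
       id=${id#[}        # drop the leading '['
       id=${id%]}        # drop the trailing ']'
       echo "host $id uses driver $driver"
     done < <(lsscsi -H)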
    1 point
  5. No. I have replaced the MB, CPU and RAM in my unRAID server on several occasions, and unRAID boots up as before from the same flash drive. The only thing that may need to be reconfigured after a hardware swap is any VMs using hardware passthrough.
    1 point
  6. Unraid Nvidia is a custom build of Unraid made by the LinuxServer.io team. Note that while it is Unraid, it is customized and therefore not directly supported by LimeTech. This post covers how to install and use it. The next tidbit, about my script for the nvdec wrapper: it's actually extremely easy to use. You just add it as a user script using the CA User Scripts plugin and set it to run after your normal updates: https://github.com/Xaero252/unraid-plex-nvdec
     And finally, as far as choices with VMs: VMs can get a little nasty with this setup. For one, if you decide to pass the Nvidia GPU to your VM, Docker will lose control of it until the VM is stopped, at which point any containers that want to use it will need to be restarted to see it again. If you wanted to use an Nvidia graphics card for virtualization passthrough you'd typically want a Quadro card, although newer consumer cards are a bit easier to get working. I'd also wait until other users respond to this thread, as their opinions may differ from mine, and it is important to weigh your options before committing to a significant expenditure. For example, a capable Intel CPU with 4K QuickSync support plus a board may be similar in price in your area to a GPU. If that is the case, QuickSync is far superior to nvdec/nvenc, and you won't be passing that part of the CPU through to any containers. If you can cheaply obtain a GPU, it is more affordable/effective to just add a GPU to your existing setup. So, to somewhat retract my previous statement (I hastily posted without considering locale): if it's cheaper to buy a GPU and add it to your current system, I'd go that route. If it's cheaper or the same price to go Intel and get a CPU that supports QuickSync 4K transcoding, I'd go that route. My system does not have QuickSync support, and I often wish it did, as that would free up a PCI-E slot or this GPU for passthrough.
    1 point
  7. If you have a Plex Pass, Option 2 (bold above) is the way to go. Add an Nvidia graphics card that supports 4K H.265 transcoding (see here: https://developer.nvidia.com/video-encode-decode-gpu-support-matrix), switch to Unraid Nvidia, add the nvdec wrapper, and you are off to the races. Soon the wrapper won't be needed, either.
    1 point
  8. I'm sorry I didn't see your post for what it was: a genuine effort to help. Most of the time when a first-time poster jumps into a thread with a bunch of links to external websites, it's a spammer, only out to further their own agenda. When you declined to answer any of my questions and instead got defensive, I took that as confirmation of your intent to help yourself instead of helping others.
    1 point
  9. I just noticed something else in your results. For the most part, 162.4 MB/s is the same as 162.5 MB/s; you're not going to notice the difference. But the script searches for the highest value, so it homes in on 162.5. In Pass 2, it hit 162.5 at both md_sync_window 6400 (Test 3) and 7296 (Test 10), plus several other even higher md_sync_window values. It used the lowest md_sync_window that hit the peak speed for Pass 3, so 6400. In Pass 3, md_sync_window 6400 was retested 16 times with different md_sync_thresh values, and the highest peak speed reached was 162.4. Since the exact same values from Pass 2 Test 3 were retested in Pass 3 Test 1r, the new 162.4 result ended up replacing 162.5 in the results table. Then, in the very final step, all of the results were scanned again to find the fastest result. Since the 6400 result was now 162.4 (Pass 3 replaced the Pass 2 value), the fastest became 7296. This is a side effect of how the results are stored and the fact that the script retests a few combos more than once. While I've thought about this issue, I've never figured out a good solution (a "keep the faster duplicate" rule, sketched below, is one possibility). Long story short, md_sync_window 6144 @ 162.4 MB/s is every bit as fast as 7296 @ 162.5 MB/s, and it has significantly lower memory utilization. In fact, you can probably go even lower than 6144, as it looks like your server hits peak speeds between 3072 and 6144, but the script didn't test those because 9216 happened to get lucky and spit out a 162.5, so the script targeted the highest range.
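     For what it's worth, here is a minimal sketch of that "keep the faster duplicate" idea. The names are hypothetical, not the script's actual storage code, and awk does the comparison since the speeds are decimals:
     declare -A best
     record_result() {   # record_result <md_sync_window> <speed>; keep only the faster of duplicates
       local window=$1 speed=$2
       if [ -z "${best[$window]}" ] || \
          awk -v n="$speed" -v o="${best[$window]}" 'BEGIN { exit !(n > o) }'; then
         best[$window]=$speed
       fi
     }
     record_result 6400 162.5   # Pass 2 result is stored
     record_result 6400 162.4   # Pass 3 retest is slower, so the stored 162.5 survives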
    1 point
  10. Maybe this could help. That is why we took the time, effort, and resources to ditch the old "Lime Technology" site and embrace Unraid.net as our new platform.
    1 point
  11. It just means keeping track of failure reports or failure rates; I expect helium drives to be more durable.
    1 point
  12. That would also be my first guess, along with the possibility of an incorrectly configured traefik entrypoint (with regard to what port the entrypoint uses vs. what port is mapped through Docker to the host); see the quick check below. One possibility that I am not sure about is the certificate aspect. I don't use Let's Encrypt in my setup (well, I do, but not in traefik; I have an nginx reverse proxy doing SSL unwrapping on a separate entrypoint server), so I am not sure if or how that could be part of the issue.
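     A quick way to compare the two, assuming your container is named traefik and uses the default v1 config path (adjust both to your setup):
     docker port traefik   # shows the host-to-container port mappings Docker applied
     docker exec traefik grep -B1 -A2 'address' /etc/traefik/traefik.toml
     # the entrypoint's "address" port must match the container side of the mapping above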
    1 point
  13. Those IP addresses look correct. The addresses will not be that of your server but rather the address that Docker assigns the container internally, normally starting with 172. Yes, I believe you should see the request in the logs with the debugging levels you have set. As long as both traefik and the containers it is proxying to are on the same network, it should be fine. I used to have all of mine on the default bridge network; then I had traefik on the default bridge and the containers on a custom network (traefik was attached to that network as well); now I have traefik et al. on a custom bridge (see the commands below).
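     In plain Docker commands, the custom-bridge arrangement looks like this (the network and container names here are examples only):
     docker network create proxynet              # the custom bridge
     docker network connect proxynet traefik     # attach traefik to it
     docker network connect proxynet myapp       # attach each proxied container
     docker network inspect proxynet             # both containers should be listed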
    1 point
  14. I discovered "Spaceinvader One", a gold mine of tutorials. For now I have disabled the cache while I copy 20+ TB of data from external backup drives to the new server. Once I am back up and running I will roll up my sleeves, go through all the suggestions in this thread, and do things properly. Many thanks for all the suggestions.
    1 point
  15. What I do is have Nextcloud keep its own files on /data, and I create an extra mapping pointing to my unraid data, which I then add as external storage in Nextcloud (a hypothetical layout is sketched below).
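     As a sketch, the volume layout would be something like this. All paths are examples, and /config and /data follow the linuxserver image's conventions (the same image the occ commands elsewhere in this digest assume):
     docker run -d --name nextcloud \
       -v /mnt/user/appdata/nextcloud:/config \
       -v /mnt/user/nextcloud:/data \
       -v /mnt/user/data:/unraid_data \
       linuxserver/nextcloud
     # then add /unraid_data in Nextcloud via Settings -> External storage, type "Local"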
    1 point
  16. Seems like since the initial post you are now staying for the ride :-). Honestly, I used FreeNAS for years, and as soon as I moved to unRAID I never looked back.
    1 point
  17. Since you're evaluating things, I highly recommend installing "Community Applications", which will greatly aid adding additional plugins & dockers. Plugins I personally recommend:
     - Dynamix Cache Directories - keeps directory scans from waking up your drives if you want to keep them spun down during inactivity
     - Dynamix SSD TRIM - if you have out-of-array SSDs
     - Recycle Bin - makes deleting files from SMB shares non-destructive; purged on a schedule or by request, configured per share
     - Unassigned Devices - mount & manage non-array devices
     - DiskSpeed - since you're using older drives, this app I wrote will help you test whether your drives are starting to degrade or whether you're exceeding your drive controller's capacity and potentially bottlenecking yourself.
    1 point
  18. Sorry, I don't. I'm barely pushing the need of 9 drives in my rig and lying to myself that they're needed.
    1 point
  19. Well, it's finally happened: Unraid 6.x Tunables Tester v4.0 The first post has been updated with the release notes and download. Paul
    1 point
  20. The geoip2 database is at this path inside the container: /var/lib/libmaxminddb/GeoLite2-City.mmdb. The instructions are here: https://github.com/leev/ngx_http_geoip2_module/blob/master/README.md#example-usage
    1 point
  21. I see; Lua is required for those scripts. I have now added it to the image, so please pull down the latest and try it again.
    1 point
  22. Probably should not have jumped in here, as I don't have a Mac, and apparently there are not a lot of other Mac users willing to help either. However, it appears that the permission settings are correct on the Unraid server for that folder. There is one more Unraid setting that was added to accommodate Mac users: Settings >>> SMB >>> SMB Settings. Look for the "Enhanced macOS interoperability:" option, and click on the 'Help' icon for an explanation of what it does. If that does not fix the problem, let's look at the permissions of the files inside that folder. To do this using the ls command from the command line, you have to understand how to enter the directory name when it contains 'special' characters. Here is a good explanation of that: https://forums.unraid.net/topic/82188-file-permissions-issue-with-plex-docker-linuxserverio/?tab=comments#comment-763941 (see also the quoting example below). Post up the output of the ls command for the contents of that directory and let's see if there is anything amiss there. (I doubt it at this point, but let's cover all possibilities.) Of course, there is always the possibility that the security settings on your Mac are the issue; I cannot help you there. Often, Google can be your friend for finding a solution in these cases. Can you see files on your Mac that were put there by a Windows computer? Can a Windows computer see the files that you put there from the Mac?
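     As a quick illustration of the quoting involved (the path here is made up):
     ls -la "/mnt/user/Media/My Folder (Test)"     # quote the whole path, or...
     ls -la /mnt/user/Media/My\ Folder\ \(Test\)   # ...backslash-escape each special character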
    1 point
  23. No, it's strictly a routing/dns issue. Something must have changed on your side. Did you reboot the router perhaps?
    1 point
  24. Thank you, constructor. I have now turned it back on and can see it is flushing. Many thanks again. With regard to my opening statement and some of the snarky comments that followed, I just want to say that this is a paid product; it is not a hobbyist platform like FreeNAS et al. And I stand by my opening statement that I believe the caching setup should just work out of the box in a usable way. That is what I am willing to pay for. I believe an added cache should flush in the background in idle moments without you having to do anything; it just makes sense. That aside, I apologise for coming off as harsh, and I appreciate all the help received from those who give it.
    1 point
  25. Not quite; here is the explanation of how parity is calculated: https://wiki.unraid.net/index.php/UnRAID_Manual_6#Network_Attached_Storage Let's assume that you have an 8TB parity drive and your array consists of several different-size data drives (all 8TB or smaller). One of these data disks is a 500GB hard drive, and that HD has its drive motor fail. To rebuild this disk, only the first 500GB of the parity data is used (or needed). (A small bit of trivia knowledge for you: the actual calculation performed on the data to get the parity bit is the XOR operation. The XOR operator has been a member of the basic microprocessor instruction set since the 8008 days, back in 1972. A tiny worked example follows below.) While you may think that building parity by writing 'zeros' to the portion of the parity drive that is not being actively used for calculating data parity is a waste of time and resources, it makes sense from the software development standpoint of what has to happen when you add a data drive with a larger capacity than any of the currently installed data disks. When this happens, you don't have to 'adjust' parity if you write all zeros to the drive being installed; parity will always be correct, since XORing in a zero changes nothing. This 'zeroing' of the new drive is the first thing that Unraid will do. Then, if it finishes without error, Unraid will add the disk to the array and format it (I can't remember if it asks permission or not). As this formatting (adding the basic file system structure) occurs, parity (less than 1% of the total data disk's capacity) will be updated as the formatting is done.
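     The promised worked example of that XOR parity calculation, using three one-byte "drives" with made-up values:
     printf 'parity:  %#x\n' $(( 0xA5 ^ 0x3C ^ 0x0F ))   # parity byte = 0x96
     # if the 0x0F "drive" fails, XOR the parity with the survivors to rebuild it:
     printf 'rebuilt: %#x\n' $(( 0x96 ^ 0xA5 ^ 0x3C ))   # prints 0xf, the lost byte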
    1 point
  26. So already 4 suggestions from the forum for dealing with your "dealbreaker".
    1 point
  27. Back in the day, unraid was limited to 2.2TB, so the largest practical drive was a 2TB. That changed with the advent of unraid 5.0, I believe, and the new limit came from ReiserFS, at 16TB. Since 6.0, or whenever XFS and BTRFS were added, the limits aren't really in sight any more, at least not for several years. If you are still using ReiserFS, you will have performance issues with larger drives; 2TB is stretching it performance-wise. There is still an issue with older hardware that can't see more than 2.2TB, but that's not anything that unraid can bypass.
    1 point
  28. In progress! There were a lot of Q's to get through but stay tuned and they should be up soon.
    1 point
  29. Seeing MHDD's limitations on SATA/etc. drives is what led to me creating this app. I also hope to add a heat map of read speeds like the one in MHDD, broken down to individual platters.
    1 point
  30. Thanks, both denishay & saarg; you both helped a lot in getting to the right solution. Ended up with:
     docker exec -it nextcloud bash
     cd /config/www/nextcloud/
     sudo -u abc php7 occ files:scan --all
     It's running the file scan now. Thanks to both of you! I owe you many hours in saved time on this.
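     In case it helps someone else, the same occ command can also be scoped more narrowly; files:scan accepts a user id or a --path option (the "admin" user and folder here are examples, and the container name and "abc" user follow the commands above):
     docker exec -it nextcloud sudo -u abc php7 /config/www/nextcloud/occ files:scan admin
     docker exec -it nextcloud sudo -u abc php7 /config/www/nextcloud/occ files:scan --path="/admin/files/Photos"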
    1 point
  31. Jesus, this really makes it off-putting for people to ask for help. Is noticing I missed something and asking if it was the issue not taking responsibility? *I noticed a mistake I made, and asked if that was causing silent failures.* Also, I outlined how and why I missed it, and offered an improvement that would make it harder to miss. I wasn't blaming anyone else for that in the slightest. You need to keep in mind that you guys are good at this shit. I'm not dumb by any means, but I don't know everything, and something like this can be overwhelming, especially when you're trying to piece together several dockers. I'm happy to stop the conversation here; I just wanted to get that off my chest because of the way things started to turn. "We do our best to lower the bar" definitely felt like it would naturally be followed by "but if you're too dumb you're outta luck". Logically I know that's not what you meant, so don't worry, but emotionally it does kinda feel like an attack. But we think we have the issue; I'll look into it some other time, and if there's still a problem I'll pop back, provide stuff like the run commands, and see if we can hammer it out. As it stands, I feel like the mistake I made is what messed it up. Thank you very much for your help, guys. Even if I got a little upset, I do appreciate the help a lot. I hope you have a good day!
    1 point
  32. So, I deleted all the Elasticsearch data, reinstalled Elasticsearch, and re-ran the chown. Then I deleted and reinstalled Diskover, and it works. It seems that if you don't put in the right details the first time, you need to clean up and start again; it's as if the config files are written to appdata on the first docker run and then not rewritten on config change. It also seems like the bad settings messed up the Elasticsearch data. Still, hints for anyone working on this:
     - Install redis and Elasticsearch; despite what it says, they are not optional.
     - chown -R 1000:1000 /mnt/user/appdata/elasticsearch5/
     - Start redis and Elasticsearch.
     - Set the hosts for these two services when you first install Diskover. If you get it wrong, remove it, clear the appdata folder, and reinstall with the correct settings.
     - Check the port numbers; they should be good from the start, but check.
     - Check Diskover on port 9181; it should show you workers doing stuff. If not, you started Diskover before redis or Elasticsearch.
     - If Diskover asks you for indexes one and two, your Elasticsearch data is corrupt; delete it and restart Elasticsearch and Diskover.
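     A couple of quick sanity checks before (re)starting Diskover, assuming default container names and ports (adjust to your setup):
     docker exec redis redis-cli ping                        # should answer PONG
     curl -s 'http://localhost:9200/_cluster/health?pretty'  # status should be green or yellow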
    1 point