AlainF

Everything posted by AlainF

  1. Maybe a bit later @JustinRSharp, but you should really be very careful when you paste log files - the link above contains your actual link nonce, which could allow anyone to link their device to YOUR Dropbox account!!
  2. Hello, I followed the installation steps, but after successfully linking the new device by clicking the link, nothing happens after the "Welcome Alain." prompt. When I restart the container, it seems as if the device is no longer linked and I am prompted to link the device again:

     Checking for latest Dropbox version...
     Latest : 192.4.4605
     Installed: 192.4.4605
     Dropbox is up-to-date
     Using Europe/Paris timezone (09:43:33 local time)
     Starting dropboxd (192.4.4605)...
     dropbox: load fq extension '/opt/dropbox/bin/dropbox-lnx.x86_64-192.4.4605/cryptography.hazmat.bindings._openssl.abi3.so'
     dropbox: load fq extension '/opt/dropbox/bin/dropbox-lnx.x86_64-192.4.4605/cryptography.hazmat.bindings._padding.abi3.so'
     dropbox: load fq extension '/opt/dropbox/bin/dropbox-lnx.x86_64-192.4.4605/apex._apex.abi3.so'
     dropbox: load fq extension '/opt/dropbox/bin/dropbox-lnx.x86_64-192.4.4605/psutil._psutil_linux.cpython-38-x86_64-linux-gnu.so'
     dropbox: load fq extension '/opt/dropbox/bin/dropbox-lnx.x86_64-192.4.4605/psutil._psutil_posix.cpython-38-x86_64-linux-gnu.so'
     dropbox: load fq extension '/opt/dropbox/bin/dropbox-lnx.x86_64-192.4.4605/tornado.speedups.cpython-38-x86_64-linux-gnu.so'
     dropbox: load fq extension '/opt/dropbox/bin/dropbox-lnx.x86_64-192.4.4605/wrapt._wrappers.cpython-38-x86_64-linux-gnu.so'
     This computer isn't linked to any Dropbox account...
     Please visit https://www.dropbox.com/cli_link_nonce?nonce=XXXXXXXXXXXXXXXXXXXXXXXXXX to link this device.

     When I click the link and add the device, I get this:

     Please visit https://www.dropbox.com/cli_link_nonce?nonce=XXXXXXXXXXXXXXXXXXXXXXXXX to link this device.
     This computer is now linked to Dropbox. Welcome Alain

     And then it just sits there. My settings for the container are as follows:
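     One thing I am wondering (just a hypothesis on my side): could it be that the directory where dropboxd keeps its account/link state is not mapped to a persistent volume, so the link is lost whenever the container restarts? Roughly what I mean, with the image name and in-container paths as placeholders rather than the actual template values:

        # Hypothetical sketch -- image name and in-container paths are placeholders,
        # not taken from the actual template:
        docker run -d --name=dropbox \
          -v /mnt/user/appdata/dropbox/state:/opt/dropbox/.dropbox \
          -v /mnt/user/appdata/dropbox/data:/opt/dropbox/Dropbox \
          some/dropbox-image:latest
        # first volume: dropboxd's account/link state; second volume: the synced files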
  3. This should be pulled from the Apps "store"; it doesn't work, produces no logging, and has no support/help.
  4. [Tailscale plugin and Pihole container] So I have only basic knowledge of networking, containers and Unraid, and I need some help achieving the following: use my local Pihole as a DNS server for my iPhone wherever I am. I have the Tailscale Unraid plugin installed and it is working, i.e. I can access the Unraid WebUI via Tailscale. I also have the Pihole container running, and I am using it as the DNS server for my local network. The challenge I seem to have is that my Pihole container is running on a "Custom: br0" interface, which exposes an IP address of 192.168.178.250 to my LAN, whereas the main IP of my Unraid box is 192.168.178.222, so the Pihole container is not part of my Tailscale network. One idea I had was to adapt the Pihole container so that it would also include Tailscale, so that my Pihole machine would expose itself as a separate node, but doing that is above my knowledge and on paper seems like a lot of work - there must be other, simpler ways? Somehow forwarding port 53 requests on the Unraid host to the Pihole container's port 53? (One alternative I am toying with is sketched below.) Many thanks for your insights! Alain
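     The alternative idea (a hedged guess on my side, untested, and assuming the Unraid Tailscale plugin exposes the regular tailscale CLI) would be to advertise my LAN as a Tailscale subnet route, so the phone could reach 192.168.178.250 directly:

        # Sketch of the "subnet router" idea; note that some macvlan (br0) setups
        # block host<->container traffic, so this would need testing.
        sysctl -w net.ipv4.ip_forward=1                     # allow the host to forward routed packets
        tailscale up --advertise-routes=192.168.178.0/24    # advertise the LAN to the tailnet

     The route would then still need to be approved in the Tailscale admin console, and the iPhone (or the tailnet's global nameserver setting) pointed at 192.168.178.250 for DNS.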
  5. Hello! Update - the build is done! See here: Migrated to new server - update and report on final build, and thanks! - Hardware - Unraid
  6. Dear community, Some weeks ago I posted for help and opinions on a new build to migrate my disks to, and I wanted to share the final build specs with you, also as a way of saying THANK YOU for all your constructive feedback! These are the parts I eventually got and assembled:
     Case: Fractal Node 804
     PSU: Corsair SF850L
     Mainboard: ASUS ROG Strix Z690-G Gaming WiFi
     CPU: Intel Core i5-13500
     Memory: 2 x 16GB Corsair Vengeance DDR5 5600MHz CL36
     CPU cooler: Noctua NH-U12S chromax.black
     Fans: 3 x Arctic PWM PST RGB 120mm, 1 x Arctic PWM PST RGB 140mm
     3 sets of deleyCON SATA cables
     2 x Kingston 2000 GB FURY Renegade PCIe 4.0 NVMe M.2 SSD (cache pool)
     2 x Samsung SSD 860 EVO 2.5" SATA III 1TB (recovered from old rig) (docker containers, future VMs, other temp stuff)
     2 x 8TB Seagate IronWolf HDDs (recovered from old rig)
     2 x 8TB Toshiba MG06ACA HDDs (recovered from old rig)
     I replaced the fans that came with the Node 804 with the Arctic ones and got rid of the case's built-in "fan speed switch", planning to use "auto fan control" to do the job instead - and it works! The CPU fan, and the Arctic 120mm sitting behind it on the right side of the case, are controlled directly by the BIOS based on CPU temperature. On the other side, I have 2 x 120mm fans in the front and 1 x 140mm behind the HDDs; they are chained together using PST and plugged into one PWM header on the mainboard, which allows auto fan control to ramp the speed of all three fans up and down according to HDD temperature. This works really well: the drive fans sit at 0 rpm when the drives are cooler than 30 °C and ramp up to 50% of max RPM as temperatures rise, and no drive ever goes beyond 42 °C. This is a perfect compromise between heat and noise for my use case. Note that the HDDs are set to spin down automatically after 30 minutes, and 90% of the time I have at most one drive spun up, so the fans barely run at 20% of their max RPM. The only "noise" I can hear sitting 2 meters away is the one drive spinning...
     Getting auto fan control working was a bit of trial & error. I needed to install some additional drivers and repeat the detection process a few times, but finally it just worked and Unraid detected the PWM controller. Sadly, I don't remember the sequence of my actions and would not be able to explain to someone exactly what I did 🤓. Anyway - it works. (A rough sketch of the underlying mechanism is below.)
     The machine is VERY powerful for my needs (mainly Plex, *arrs, and some home automation and network stuff). The CPU rarely breaks a sweat and comfortably sits at an average temperature of around 30 °C. The Noctua CPU fan is inaudible. It's also quite a power-efficient setup, with an average consumption of around 45 watts, which translates in my country to around 11 euros of electricity cost per month. The only issue I had was that the mainboard needed a BIOS flash to support 13th-gen CPUs, and I had to bring the computer to a very kind and efficient local computer shop, who popped in a 12th-gen CPU to flash the BIOS for a very small fee.
     My next project is to find a way to change the RGB lighting of the 4 Arctic fans based on the temperature of the disks (just for fun). Again, thanks to everyone who contributed to getting this (very nice to me) setup going! A.
     Update: here is a visual of my fan setup; the ones marked with a red arrow are the ones I replaced/installed. Blue = 120mm, white = 140mm. The disks are on the right side of the case, facing away from you.
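     For the curious, here is roughly what the end result does under the hood, as a hedged sketch rather than the plugin's actual code (the hwmon path, the disk names and the 30-42 °C thresholds are assumptions matching my settings):

        # Read the hottest array-disk temperature and set one PWM header accordingly.
        HOTTEST=0
        for d in /dev/sd{a,b,c,d}; do
          # -n standby avoids waking a spun-down drive just to read its temperature
          t=$(smartctl -n standby -A "$d" | awk '/Temperature_Celsius/{print $10}')
          [ -n "$t" ] && [ "$t" -gt "$HOTTEST" ] && HOTTEST=$t
        done

        if   [ "$HOTTEST" -le 30 ]; then PWM=0       # drives cool: fans off
        elif [ "$HOTTEST" -ge 42 ]; then PWM=128     # ~50% of 255, my chosen ceiling
        else PWM=$(( (HOTTEST - 30) * 128 / 12 ))    # linear ramp between 30 and 42 °C
        fi

        echo 1    > /sys/class/hwmon/hwmon2/pwm1_enable   # 1 = manual PWM control (hwmon index varies per board)
        echo $PWM > /sys/class/hwmon/hwmon2/pwm1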
  7. Hello! Specifically regarding the Node 804: a few days ago I built a fairly similar setup based on the Node 804, and I am very happy with my choice. However, I replaced and supplemented the bundled fans with Arctic PWM PST 120mm and 140mm fans respectively (see picture below). I also unscrewed and removed the bundled fan-speed control board (Min-Med-Max), since I did not want to use it anyway. My mainboard (ASUS ROG Strix Z690-G Gaming WiFi) supports the "auto fan control" plugin, and the 3 fans on the drive side spin at no more than 50% of their maximum speed even under full drive load (parity check) - almost inaudible, and the drives do not get hotter than 42 °C. I installed the RGB versions of the fans, so it also looks a bit "exciting" 🙂 Apart from that, this case is really easy to build in, looks very sleek, and is of high quality. I can only recommend it! Have fun and good luck with your project 💪
  8. Hello, As a happy Unraid user, I regularly want or need to publish the details of my rig. It would be great if there were a plugin (or app or tool...) that would automate the creation of a nice-looking, nicely formatted "snapshot" of my rig, including the details of the hardware, drives, setup, installed apps, plugins, containers... and ideally the tool would let me select which sections, and what level of detail, I would like to share! This would be a big time saver, so as not to type this information over and over again. Yes, I could type it once and then copy&paste, but having a standardized/standardizing tool whose output always looks the same would also make it easier for all fellow Unraiders to look at my, or others', systems. (A rough sketch of what I mean follows below.) A.
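     To make it a bit more concrete, a very rough sketch of the kind of snapshot I have in mind, using only standard CLI tools (the real plugin would of course format this nicely and let you pick sections; the output path is just an example):

        {
          echo "## Hardware"
          lscpu | grep 'Model name'
          free -h | awk '/^Mem:/{print "RAM: " $2}'
          echo "## Drives"
          lsblk -d -o NAME,SIZE,MODEL,ROTA
          echo "## Unraid"
          cat /etc/unraid-version 2>/dev/null
          echo "## Docker containers"
          docker ps --format '{{.Names}} ({{.Image}})'
        } > /boot/rig-snapshot.txt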
  9. So I do not need to worry about anything (re-assigning manually, etc.) but can really just connect, power up, and be done (for that aspect)?
  10. Hello, In the coming days I will finish building my new rig for Unraid, and will need to move my disks over to that new machine and replicate my disk setup, etc. I have read the documentation and tutorials about how to proceed and just want to make sure I don't miss anything. Below is my current disk setup:
     [screenshot: the current shares]
     [screenshot: the disks]
     In my new rig, I will have an additional 2 x 2TB NVMe drives that I plan to use as the new cache drives, replacing the above two Samsung SSDs. I plan to re-use those freed-up 2 x 1TB SSDs to hold the data of what is currently "Dockerpool" above, to store the docker image and in the future possibly some VMs, etc. So in the above, the Kingston nvme0n1 drive will effectively NOT be moved out of the old Unraid rig. My questions:
     To recreate the array without issues, I just have to make sure to re-assign the array disks by their ID to the correct slots once the disks have moved over to the new machine, right? (See the sketch below for how I plan to note down the IDs.)
     To replace the cache drives, I will first have to change the cache preferences for all shares so that everything gets moved to the array disks with Mover, and once the disks are moved and the new 2 x 2TB NVMe cache pool is ready in the new rig, change the cache preference settings back to what they were and run Mover again in the new rig, correct?
     For the docker.img (residing on Dockerpool above), the easiest approach seems to be to create the "dockerpool" pool in the new rig using the 2 Samsung SSDs that previously were the cache pool, re-create a fresh docker.img on there, and then use appdata backup and "reinstall previous apps" to recreate my docker containers and reapply all of the settings? This seems easier and quicker than trying to move/copy the original docker.img?
     Is there anything else I need to be aware of when moving my existing disks and setup over to the new rig? Many thanks!!
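     For the first question, here is how I plan to capture the disk IDs before tearing down the old box (nothing Unraid-specific, just listing the by-id symlinks and keeping the output somewhere off the array):

        ls -l /dev/disk/by-id/ | grep -v part    # map of disk serials/IDs to sdX devices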
  11. Am I wrong assuming that I wouldn't even need the sensors on the motherboard to control the fans based on the DISK temperatures? I could set a fan curve in the BIOS for some fans (CPU, some case fans) based on the board sensors, but then use autofan control to set other fans based on disk temps. So autofan control would need to know how to talk to the PWM controller on the board, but not read the temp sensors on the board....
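     To make the question more concrete, this is roughly how I would expect the plugin to find the PWM outputs, without touching the board's temperature sensors at all (a sketch, assuming the standard hwmon sysfs layout; disk temperatures would come from smartctl):

        for h in /sys/class/hwmon/hwmon*; do
          echo "$h -> $(cat "$h"/name)"
          ls "$h" | grep '^pwm[0-9]$' || echo "  (no PWM outputs)"
        done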
  12. Do you know where I could find out about a "compatible driver", knowing that my new mainboard will be an ASRock B660M Steel Legend?
  13. Ah, well yes - for some reason, when I used PCPartPicker on my phone I didn't get proper results. I tried again on the desktop now, and indeed found this mobo that seems to tick all the boxes: ASRock B660M Steel Legend
  14. I really have trouble finding a mainboard with six SATA ports that fits the bill (features, price, availability). Maybe I was not diligent enough in my research - would you have any recommendations? The daisy-chain part is interesting, thanks!
  15. Hello! Old topic, but I wanted to chime in briefly as well. I have a similar situation with a mini PC and a Fantec 4-bay case connected via USB. Everything has been working (for about 2 years now), i.e. no parity problems or connection drops, but it is not exactly what I would call "fast". I have therefore now decided to set up a "proper" Unraid server - see my post on that here:
  16. Dear community, I am preparing my build list for upgrading my Unraid server and would like to leverage the hive knowledge to run some sanity checks and gather some input. What I have, and will be moving over to the new build, is the following:
     4 x 8TB 7200rpm drives (array with 1 parity)
     2 x 2TB Samsung SSDs (redundant cache pool)
     1 x 256GB NVMe (docker containers)
     The flash drive holding Unraid (6.12.3 Plus)
     The new rig will be built with the following (unless issues are identified here):
     Case: Fractal Node 804
     Case fans (replacement): 2 x Noctua NF-A12x25 PWM chromax.black.swap (120mm) and 1 x Noctua NF-A14 PWM chromax.black.swap (140mm)
     Mainboard: ASUS PRIME B760M-K D4
     CPU: Intel Core i5-13500
     Memory: Corsair VENGEANCE DDR5 RAM 32GB (2x16GB) 5600MHz CL36
     PSU: Corsair SF850L
     CPU cooler: Noctua NH-U12S chromax.black
     SATA 4-port PCIe expansion card: 10Gtek https://www.amazon.de/gp/product/B09Y1PMZ2W/
     The new build should be, in that order:
     Silent
     Energy efficient
     Powerful
     Room for future expansion
     Price efficient (currently all of the above would cost me less than 1000€, which is easily within my budget)
     The main usage is all the *arrs, Plex, and ~10 other utility docker containers (Pihole, Home Assistant, ...) and the occasional 1-2 VMs (no gaming).
     Questions:
     Is there anything you would see wrong with these components?
     I plan to replace/extend the fans provided with the Node 804 with the above Noctua PWM fans so that I will be able to control the case fan speed using "auto fan control". I have read that this sometimes needs a lot of trial & error to get up and running correctly?
     I was considering getting the Noctua NH-P1 passive CPU cooler for even less noise - any comments on that idea?
     I am looking forward to your comments and recommendations! Alain
  17. Dear community, I have an external USB powered fan sitting in front of a drive enclosure that I would like to switch on and off based on the temperature reading of the disks in the enclosure by enabling/disabling power on the USB port. Is there a relatively easy way to do / code this? Many thanks!
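     In case it helps to frame the question, what I picture is something along these lines, scheduled every few minutes via the User Scripts plugin (a hedged sketch - it assumes a hub/enclosure that actually supports per-port power switching and the third-party uhubctl tool; the hub location "1-1", port "2" and /dev/sdX are placeholders):

        # Toggle the fan's USB port based on a disk temperature in the enclosure.
        TEMP=$(smartctl -n standby -A /dev/sdX | awk '/Temperature_Celsius/{print $10}')
        if [ -n "$TEMP" ] && [ "$TEMP" -ge 45 ]; then
          uhubctl -l 1-1 -p 2 -a on     # disks warm: power the fan
        else
          uhubctl -l 1-1 -p 2 -a off    # disks cool or spun down: fan off
        fi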
  18. Hi! Cool - late reply and all as well, but I'm downloading it as I write this. Thanks!!
  19. Hello, I wanted to report that my new CyberPower UPS, model CP1300EPFCLCD, issues a "disconnect" error several times per day and then reconnects automatically after 1-2 seconds when I use the built-in (Unraid Plus 6.9.2) apcupsd daemon. I installed the NUT plugin a few days ago and it works flawlessly, no more disconnects. Switching ports/cables did NOT resolve the issue with apcupsd. I therefore suppose there is some software issue related to apcupsd which is absent with NUT. Alain
  20. Hello everyone, I installed the Filebot container yesterday with default parameters (except the OpenSubtitles credentials), and when I start it I cannot connect via the web GUI; there's a red cross at the top left of the web screen with the error "Server disconnected (code: 1006)". I have no clue where to start fixing this. Here's the log of the container:

     [s6-init] making user provided files available at /var/run/s6/etc...exited 0.
     [s6-init] ensuring user provided files have correct perms...exited 0.
     [fix-attrs.d] applying ownership & permissions fixes...
     [fix-attrs.d] done.
     [cont-init.d] executing container initialization scripts...
     [cont-init.d] 00-app-niceness.sh: executing...
     [cont-init.d] 00-app-niceness.sh: exited 0.
     [cont-init.d] 00-app-script.sh: executing...
     [cont-init.d] 00-app-script.sh: exited 0.
     [cont-init.d] 00-app-user-map.sh: executing...
     [cont-init.d] 00-app-user-map.sh: exited 0.
     [cont-init.d] 00-clean-logmonitor-states.sh: executing...
     [cont-init.d] 00-clean-logmonitor-states.sh: exited 0.
     [cont-init.d] 00-clean-tmp-dir.sh: executing...
     [cont-init.d] 00-clean-tmp-dir.sh: exited 0.
     [cont-init.d] 00-set-app-deps.sh: executing...
     [cont-init.d] 00-set-app-deps.sh: exited 0.
     [cont-init.d] 00-set-home.sh: executing...
     [cont-init.d] 00-set-home.sh: exited 0.
     [cont-init.d] 00-take-config-ownership.sh: executing...
     [cont-init.d] 00-take-config-ownership.sh: exited 0.
     [cont-init.d] 00-xdg-runtime-dir.sh: executing...
     [cont-init.d] 00-xdg-runtime-dir.sh: exited 0.
     [cont-init.d] 10-certs.sh: executing...
     [cont-init.d] 10-certs.sh: exited 0.
     [cont-init.d] 10-cjk-font.sh: executing...
     [cont-init.d] 10-cjk-font.sh: exited 0.
     [cont-init.d] 10-nginx.sh: executing...
     [cont-init.d] 10-nginx.sh: exited 0.
     [cont-init.d] 10-vnc-password.sh: executing...
     [cont-init.d] 10-vnc-password.sh: exited 0.
     [cont-init.d] 10-web-index.sh: executing...
     [cont-init.d] 10-web-index.sh: exited 0.
     [cont-init.d] filebot.sh: executing...
     Enter OpenSubtitles username: Enter OpenSubtitles password: Testing OpenSubtitles... OK
     Done ヾ(@⌒ー⌒@)ノ
     [cont-init.d] filebot.sh: exited 0.
     [cont-init.d] done.
     [services.d] starting services
     [services.d] starting s6-fdholderd...
     [services.d] starting certsmonitor...
     [services.d] starting nginx...
     [services.d] starting xvfb...
     [nginx] starting...
     [certsmonitor] disabling service: secure connection not enabled.
     [xvfb] starting...
     [services.d] starting amc...
     [services.d] starting logmonitor...
     [services.d] starting statusmonitor...
     [logmonitor] no file to monitor: disabling service...
     [services.d] starting openbox...
     [statusmonitor] no file to monitor: disabling service...
     [openbox] starting...
     [services.d] starting x11vnc...
     [services.d] starting app...
     [x11vnc] starting...
     [app] starting FileBot...
     15/10/2021 09:06:21 passing arg to libvncserver: -rfbport
     15/10/2021 09:06:21 passing arg to libvncserver: 5900
     15/10/2021 09:06:21 passing arg to libvncserver: -rfbportv6
     15/10/2021 09:06:21 passing arg to libvncserver: -1
     15/10/2021 09:06:21 passing arg to libvncserver: -httpportv6
     15/10/2021 09:06:21 passing arg to libvncserver: -1
     15/10/2021 09:06:21 passing arg to libvncserver: -desktop
     15/10/2021 09:06:21 passing arg to libvncserver: FileBot
     15/10/2021 09:06:21 x11vnc version: 0.9.14 lastmod: 2015-11-14 pid: 936
     15/10/2021 09:06:21 Using X display :0
     15/10/2021 09:06:21 rootwin: 0x43 reswin: 0x400001 dpy: 0x139709e0
     15/10/2021 09:06:21
     15/10/2021 09:06:21 ------------------ USEFUL INFORMATION ------------------
     [services.d] done.
     15/10/2021 09:06:21 X DAMAGE available on display, using it for polling hints.
     15/10/2021 09:06:21 To disable this behavior use: '-noxdamage'
     15/10/2021 09:06:21
     15/10/2021 09:06:21 Most compositing window managers like 'compiz' or 'beryl'
     15/10/2021 09:06:21 cause X DAMAGE to fail, and so you may not see any screen
     15/10/2021 09:06:21 updates via VNC. Either disable 'compiz' (recommended) or
     15/10/2021 09:06:21 supply the x11vnc '-noxdamage' command line option.
     15/10/2021 09:06:21 X COMPOSITE available on display, using it for window polling.
     15/10/2021 09:06:21 To disable this behavior use: '-noxcomposite'
     15/10/2021 09:06:21
     15/10/2021 09:06:21 Wireframing: -wireframe mode is in effect for window moves.
     15/10/2021 09:06:21 If this yields undesired behavior (poor response, painting
     15/10/2021 09:06:21 errors, etc) it may be disabled:
     15/10/2021 09:06:21 - use '-nowf' to disable wireframing completely.
     15/10/2021 09:06:21 - use '-nowcr' to disable the Copy Rectangle after the
     15/10/2021 09:06:21 moved window is released in the new position.
     15/10/2021 09:06:21 Also see the -help entry for tuning parameters.
     15/10/2021 09:06:21 You can press 3 Alt_L's (Left "Alt" key) in a row to
     15/10/2021 09:06:21 repaint the screen, also see the -fixscreen option for
     15/10/2021 09:06:21 periodic repaints.
     15/10/2021 09:06:21 GrabServer control via XTEST.
     15/10/2021 09:06:21
     15/10/2021 09:06:21 Scroll Detection: -scrollcopyrect mode is in effect to
     15/10/2021 09:06:21 use RECORD extension to try to detect scrolling windows
     15/10/2021 09:06:21 (induced by either user keystroke or mouse input).
     15/10/2021 09:06:21 If this yields undesired behavior (poor response, painting
     15/10/2021 09:06:21 errors, etc) it may be disabled via: '-noscr'
     15/10/2021 09:06:21 Also see the -help entry for tuning parameters.
     15/10/2021 09:06:21 You can press 3 Alt_L's (Left "Alt" key) in a row to
     15/10/2021 09:06:21 repaint the screen, also see the -fixscreen option for
     15/10/2021 09:06:21 periodic repaints.
     15/10/2021 09:06:21
     15/10/2021 09:06:21 XKEYBOARD: number of keysyms per keycode 7 is greater
     15/10/2021 09:06:21 than 4 and 51 keysyms are mapped above 4.
     15/10/2021 09:06:21 Automatically switching to -xkb mode.
     15/10/2021 09:06:21 If this makes the key mapping worse you can
     15/10/2021 09:06:21 disable it with the "-noxkb" option.
     15/10/2021 09:06:21 Also, remember "-remap DEAD" for accenting characters.
     15/10/2021 09:06:21
     15/10/2021 09:06:21 X FBPM extension not supported.
     Xlib: extension "DPMS" missing on display ":0".
     15/10/2021 09:06:21 X display is not capable of DPMS.
     15/10/2021 09:06:21 --------------------------------------------------------
     15/10/2021 09:06:21
     15/10/2021 09:06:21 Default visual ID: 0x21
     15/10/2021 09:06:21 Read initial data from X display into framebuffer.
     15/10/2021 09:06:21 initialize_screen: fb_depth/fb_bpp/fb_Bpl 24/32/5120
     15/10/2021 09:06:21
     15/10/2021 09:06:21 X display :0 is 32bpp depth=24 true color
     15/10/2021 09:06:21
     15/10/2021 09:06:21 Listening for VNC connections on TCP port 5900
     15/10/2021 09:06:21
     15/10/2021 09:06:21 Xinerama is present and active (e.g. multi-head).
     15/10/2021 09:06:21 Xinerama: number of sub-screens: 1
     15/10/2021 09:06:21 Xinerama: no blackouts needed (only one sub-screen)
     15/10/2021 09:06:21
     15/10/2021 09:06:21 fb read rate: 1978 MB/sec
     15/10/2021 09:06:21 fast read: reset -wait ms to: 10
     15/10/2021 09:06:21 fast read: reset -defer ms to: 10
     15/10/2021 09:06:21 The X server says there are 10 mouse buttons.
     15/10/2021 09:06:21 screen setup finished.
     15/10/2021 09:06:21 The VNC desktop is: 918b09174638:0 0
     ******************************************************************************
     Have you tried the x11vnc '-ncache' VNC client-side pixel caching feature yet?
     The scheme stores pixel data offscreen on the VNC viewer side for faster retrieval. It should work with any VNC viewer. Try it by running:
     x11vnc -ncache 10 ...
     One can also add -ncache_cr for smooth 'copyrect' window motion.
     More info: http://www.karlrunge.com/x11vnc/faq.html#faq-client-caching
     FileBot 4.9.3 (r8340) JDK8
     JNA Native: 5.2.2
     MediaInfo: 21.03
     Tools: fpcalc/1.4.3 p7zip/16.02 unrar/5.61
     Extended Attributes: OK
     Unicode Filesystem: OK
     Script Bundle: 2021-08-02 (r761)
     Groovy: 3.0.7
     JRE: OpenJDK Runtime Environment 1.8.0_275
     JVM: 64-bit OpenJDK 64-Bit Server VM
     CPU/MEM: 8 Core / 3.7 GB Max Memory / 52 MB Used Memory
     OS: Linux (amd64)
     HW: Linux 918b09174638 5.10.28-Unraid #1 SMP Wed Apr 7 08:23:18 PDT 2021 x86_64 GNU/Linux
     CPU/MEM: Intel(R) Core(TM) i5-10210U CPU @ 1.60GHz [MemTotal: 16 GB | MemFree: 2.3 GB | MemAvailable: 13 GB]
     STORAGE: btrfs [/] @ 11 GB | btrfs [/watch] @ 11 GB | btrfs [/:/output:rw] @ 11 GB | btrfs [/:/watch:rw] @ 11 GB | fuse.shfs [/storage] @ 8 TB | fuse.shfs [/config] @ 914 GB | btrfs [/output] @ 11 GB
     DATA: /config
     Package: DOCKER
     License: UNREGISTERED
     (process:965): dconf-CRITICAL **: 09:06:23.780: unable to create file '/tmp/run/user/app/dconf/user': Permission denied. dconf will not work properly.
     (process:965): dconf-CRITICAL **: 09:06:23.780: unable to create file '/tmp/run/user/app/dconf/user': Permission denied. dconf will not work properly.
     (process:965): dconf-CRITICAL **: 09:06:23.780: unable to create file '/tmp/run/user/app/dconf/user': Permission denied. dconf will not work properly.
     (process:965): dconf-CRITICAL **: 09:06:23.780: unable to create file '/tmp/run/user/app/dconf/user': Permission denied. dconf will not work properly.
     (process:965): dconf-CRITICAL **: 09:06:23.780: unable to create file '/tmp/run/user/app/dconf/user': Permission denied. dconf will not work properly.
     (process:965): dconf-CRITICAL **: 09:06:23.780: unable to create file '/tmp/run/user/app/dconf/user': Permission denied. dconf will not work properly.
     (process:965): dconf-CRITICAL **: 09:06:23.781: unable to create file '/tmp/run/user/app/dconf/user': Permission denied. dconf will not work properly.
     (process:965): dconf-CRITICAL **: 09:06:23.781: unable to create file '/tmp/run/user/app/dconf/user': Permission denied. dconf will not work properly.
     (process:965): dconf-CRITICAL **: 09:06:23.781: unable to create file '/tmp/run/user/app/dconf/user': Permission denied. dconf will not work properly.
     (process:965): dconf-CRITICAL **: 09:06:23.781: unable to create file '/tmp/run/user/app/dconf/user': Permission denied. dconf will not work properly.
     ------------------- UPDATE AVAILABLE: FileBot 4.9.4 (r8735) --------------------
     Done ヾ(@⌒ー⌒@)ノ

     Any help greatly appreciated!
  21. Dear community, I am finally ready to share what I call my "lazy man's Unraid build"! I believe it's for lazy people because there was definitely NOT a lot of building/tinkering involved 😀 Let's start with a picture and then go to the specs:
     The specs
     OS is Unraid Plus 6.9.2, running off a SanDisk Cruzer Fit 32GB thumb drive
     Computer (the small little gray box on top) is a Minisforum U850
     CPU: Intel Core i5-10210U
     RAM: 2 x 8GB DDR4
     Internal storage: NVMe (Kingston 256GB), 2 x Samsung 860 EVO 1TB SSDs used as a mirrored cache pool
     Storage enclosure (the black box below) is a Fantec 4-drive USB 3.1 case with the following drives:
     2 x Seagate IronWolf 8 TB 7200rpm (one parity, one array)
     1 x WD Red 3 TB 7200rpm (array)
     1 x Seagate unknown model 2 TB 5400rpm (array)
     So I currently have a total of 13TB usable array storage and a 1TB SSD cache pool. I have added a dummy HDMI plug for complete headless operation and to be able to make use of QuickSync within Plex. Network is 1 Gbit/s. A screenshot of the Unraid disk setup:
     On the subject of using external USB disk enclosures: I know that this is considered "suboptimal", to use a polite word. However, I have been running this for 3 weeks now with great satisfaction, great performance, and without any issues so far. Help me cross fingers! I am using my little Unraid box mainly for my Plex libraries and the usual *arrs dockers, as well as some tools like a DNS server and Internet connection monitoring.
     What I like:
     Unraid. It just works, and it's very light/lean. This product and its value proposition should actually be a benchmark for many software companies! I absolutely LOVE it!
     My setup is very quiet and takes very little space, while having plenty of "oomph" to run several dockers, Plex with a few parallel streams, and so on. Transfer rates from and to Unraid are consistently bottlenecked by the 1 Gbps network I use.
     The fact that it was really plug&play with "consumer grade" components that don't need a lot of planning, building, tinkering. Not that I don't like and appreciate this as well, but for this given use case I wanted something that is quick to build, not overly expensive, and which doesn't completely look like a "big ugly black IT box", as my SO would call it.
     EDIT: I also like the fact that the array disks are spun down when there's no activity, or selectively spun up/down. This is good for drive life, power consumption and mainly good for the environment!
     The Unraid communities, here and on Reddit, with so many knowledgeable and kind people who know a sh*tload about everything and were able to guide and help me very well when I started planning my Unraid journey!
     Did I say that it just works??
     What I dislike:
     During a parity check or rebuild, the disks get relatively hot (49-50°C) because this Fantec case has suboptimal cooling.
     USB-C... I am afraid that one day, shit will happen and I will need to go to a more robust option.
     The Fantec case has too many blinking LEDs, even when all the disks are spun down.
     My urge to continuously enhance my Unraid setup and in the end finish with a "big ugly black IT box" with way more drives and CPU and RAM horsepower than I'd ever really need and use.
     So what do you think? I hope you like my setup, and my enthusiasm for Unraid! And yes, you are definitely allowed to tell me again that external USB enclosures are a bad idea 🙂 Best, Alain
  22. Hello community, Is there an easy way to monitor/display the TBW values for my SSDs/NVMes without having to install something huge like the Unraid Ultimate Dashboard, and without having to go through the various SSDs and NVMes manually? (A rough CLI sketch of what I mean is below.) Thanks, Alain
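     A rough sketch of the kind of one-shot overview I mean, using only smartctl (NVMe drives report "Data Units Written" in units of 512,000 bytes, while SATA SSDs usually expose something like "Total_LBAs_Written" instead; attribute names vary by vendor, so this is only an approximation):

        for d in /dev/nvme[0-9]n1 /dev/sd[a-z]; do
          [ -e "$d" ] || continue
          echo "== $d =="
          smartctl -A "$d" | grep -Ei 'Data Units Written|Total_LBAs_Written'
        done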