Leaderboard

  1. falconexe


    Community Developer


    • Points

      3

    • Content Count

      618


  2. itimpi


    Moderators


    • Points

      3

    • Content Count

      10671


  3. BusterBrawls


    Members


    • Points

      2

    • Content Count

      9


  4. Mex


    Community Developer


    • Points

      2

    • Content Count

      168


Popular Content

Showing content with the highest reputation on 01/11/21 in all areas

  1. Donate: Ultimate UNRAID Dashboard (UUD) Current Release: Version 1.5 (Added Real Time PLEX Monitoring). UUD Version 1.6 is in Active Development! UUD NEWS:
     2021-01-19: The UUD Forum Topic Reaches 50,000 Views! 👀
     2021-01-12: The Ultimate UNRAID Dashboard Membership is Launched
     2021-01-11: The UUD Tops 1,000 Unique Downloads 💾 🎉
     2021-01-07: UUD is Featured as the FIRST "Best of the Forum" Blog 🥇 https://unraid.net/blog/ultimate-unraid-dashboard
     2021-01-06: UUD Donations Site is Created
    2 points
  2. Nextcloud is like a private cloud. It lets each user have their own personal storage space. It is as useful for photos as for any other kind of file. On top of that it works with plugins, so there are surely plugins for photo management and the like. It is very complete and probably better suited to what you want to do. Good luck
    2 points
  3. Hi, this is mostly a WIP thread, but as of the first post it does work up to my relatively limited testing. I plan on expanding this to a fully featured plugin, but this script is a working foundation, and I'd like to make this available to people to play with asap. Bottom Line Up Front: This script only works on your btrfs-formatted array drives. By default, it will keep 8760 snapshots (1 year of hourly snapshots), this value can be adjusted by changing the MAX_SNAPS variable in the script. This script does not handle cache drives, but would be trivial to extend to do so - I just think i
    1 point
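The keep-at-most-`MAX_SNAPS` rotation the post describes can be sketched roughly as below. This is a hedged illustration, not the author's script: `mkdir`/`rmdir` stand in for `btrfs subvolume snapshot`/`delete` so the pruning logic itself is runnable anywhere, and all paths are placeholders.

```shell
#!/bin/sh
# Keep-at-most-MAX_SNAPS rotation, sketched with plain directories.
# On a real btrfs-formatted array drive the two marked commands would be:
#   btrfs subvolume snapshot -r /mnt/disk1 "$SNAP_DIR/$STAMP"
#   btrfs subvolume delete "$SNAP_DIR/$old"
MAX_SNAPS=3                      # the post defaults to 8760 (a year of hourlies)
SNAP_DIR=/tmp/snaps-demo         # placeholder snapshot directory
mkdir -p "$SNAP_DIR"

# Take a "snapshot" named so that lexical sort order is chronological.
STAMP=$(date +%Y%m%d-%H%M%S)-$$
mkdir "$SNAP_DIR/$STAMP"         # stand-in for: btrfs subvolume snapshot ...

# Prune: drop everything but the newest MAX_SNAPS entries.
ls -1 "$SNAP_DIR" | sort | head -n -"$MAX_SNAPS" | while read -r old; do
    rmdir "$SNAP_DIR/$old"       # stand-in for: btrfs subvolume delete ...
done
```

Run hourly (e.g. from the User Scripts plugin) this keeps a rolling window of snapshots, which matches the behavior the post attributes to its `MAX_SNAPS` variable.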
  4. From github https://github.com/shirosaidev/diskover
     Requirements: Elasticsearch 5.6.x (local or AWS ES Service); Elasticsearch 6 not supported, ES 7 supported in Enterprise version; Redis 4.x
     Working steps (if you do anything wrong, remove the docker and the docker's config folder in appdata, but you can keep the docker image to avoid downloading again):
     0. Install redis from apps (jj9987's Repository), no config needed.
     1. Install CA User Scripts plugin. Create a new script named vm.max_map_count, navigate to \flash\config\plugins\user
    1 point
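The `vm.max_map_count` user script in step 1 presumably looks something like the following; the exact body is an assumption, but 262144 is the minimum Elasticsearch's own documentation requires before it will start.

```shell
#!/bin/sh
# Raise vm.max_map_count for Elasticsearch (it refuses to start below 262144).
# The sysctl change does not survive a reboot on Unraid, which is why it lives
# in a User Scripts entry set to run at array start.
REQUIRED=262144
CURRENT=$(cat /proc/sys/vm/max_map_count)
if [ "$CURRENT" -lt "$REQUIRED" ]; then
    sysctl -w vm.max_map_count="$REQUIRED" 2>/dev/null \
        || echo "need root to raise vm.max_map_count"
fi
```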
  5. I had to use a small mirror and a flashlight, but the fan has failed so that would make sense. Once the replacement arrives I'll swap it out and see what happens.
    1 point
  6. Dropped in an LSI 9211 in IT mode and that seems to have solved it.
    1 point
  7. I think the ability to edit that field must be an earned privilege I've yet to achieve, @ChatNoir. I can find no such field in my profile. -- Chris
    1 point
  8. Jan 11 08:37:03 storage kernel: ata15.00: HPA detected: current 3907027055, native 3907029168
     Jan 11 08:37:03 storage kernel: ata15.00: ATA-9: WDC WD20EFRX-68AX9N0, WD-WMC300660429, 80.00A80, max UDMA/133
     See here:
    1 point
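Those two kernel lines are the symptom: a Host Protected Area is capping the drive below its native size. A quick sanity check of how much is hidden, plus the usual `hdparm` commands for inspecting/removing an HPA (the device node is a placeholder; the linked thread, not this sketch, is the authoritative procedure):

```shell
#!/bin/sh
# Sector counts taken straight from the syslog lines above.
CURRENT=3907027055
NATIVE=3907029168
echo "HPA hides $((NATIVE - CURRENT)) sectors ($(((NATIVE - CURRENT) * 512 / 1024)) KiB)"

# Inspecting / removing the HPA is typically done with hdparm.
# /dev/sdX is a placeholder -- double-check the device first, since this
# changes the drive's reported capacity:
#   hdparm -N /dev/sdX                # show "max sectors = current/native"
#   hdparm -N p3907029168 /dev/sdX    # set native max; 'p' makes it persistent
```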
  9. So unraid server IP itself is 192.168.1.129 is that correct? And the fire TV itself is 192.168.1.222 yes? If so then things look okay but ports are wrong somewhere for sure. Is there a reason you are not using the default ADB port of 5555? Either way something is wrong somewhere: in the ADB container you say port 8009 for ADB, and in HA config you are saying port 5009, so there's a mismatch on ports right off the bat. Unless you've actually changed ADB ports on the FireTV I'd leave them as default of 5555 in both places otherwise it likely will n
    1 point
  10. Of course, here you go:
      { "period": { "duration": 0.030292, "unit": "ms" },
        "frequency": { "requested": 0.000000, "actual": 0.000000, "unit": "MHz" },
        "interrupts": { "count": 0.000000, "unit": "irq/s" },
        "rc6": { "value": 95.507065, "unit": "%" },
        "power": { "value": 0.000000, "unit": "W" },
        "imc-bandwidth": { "re
    1 point
  11. @Jaster I bet you copied the xml of a running VM and used it for other VMs. The default entry for a non-running VM for VNC looks something like the following: <graphics type='vnc' port='-1' autoport='yes' websocket='-1' listen='0.0.0.0' keymap='de'> <listen type='address' address='0.0.0.0'/> </graphics> The "port" option is set to auto and counts up with each new VM you start. If you hardcode a value in, it tries to use that port and if it's in use by another VM you get that error. If I look into the same xml from that VM while online it looks like
    1 point
  12. @b3rs3rk I have just tried your new intel version on my i9 9900k with UHD 630 graphics and everything is working just fine. Thanks a lot for this update.
    1 point
  13. Have you checked the permissions on the containing directory? I think that ‘wx’ permissions on the directory are what allow deleting files within it.
    1 point
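That's right, and easy to verify: the delete permission lives on the containing directory, not on the file itself. A small sketch (paths are placeholders; note that root bypasses these checks, so run it as a normal user to actually see the refusal):

```shell
#!/bin/sh
# Deleting a file requires write+execute on the directory holding it.
D=/tmp/perm-demo
mkdir -p "$D"
touch "$D/file"

chmod 500 "$D"               # r-x: directory not writable
rm -f "$D/file" 2>/dev/null  # refused for non-root users

chmod 700 "$D"               # rwx: deletion allowed again
rm -f "$D/file"
```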
  14. What does the 'lsblk' command under Unraid show? Note that 'sda' is the whole drive and so it is still possible for there to be a partition (sda1) present that does not span the whole drive.
    1 point
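For reference, this is the sort of output to look for: the partition (if any) appears nested under the whole-disk row, and the two sizes need not match. The device names below are examples; `/proc/partitions` carries the same information if `lsblk` is unavailable.

```shell
#!/bin/sh
# Whole drive vs. partition: sda is the disk, sda1 (if present) a partition.
lsblk -o NAME,SIZE,TYPE 2>/dev/null

# Fallback: /proc/partitions columns are major, minor, #blocks, name.
awk 'NR > 2 { printf "%-8s %s blocks\n", $4, $3 }' /proc/partitions
```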
  15. Sadly I don't have any. Probably should document it for my own peace of mind.
    1 point
  16. It was. They have hidden their changelog in a way that leaves no reliable way to get notifications, and they silently updated the requirements. Runs on latest build
    1 point
  17. Please remove that entry. The vendor-reset has to be loaded as soon as possible on host boot. If this has already happened and you reload it again with your entry in the go file, this might lead to issues. Please reboot unraid after that and try again. The vendor-reset is a WIP; maybe it just doesn't work right now. While your vega is a vega10, there is a reported issue for vega20. The vendor-reset for example breaks the audio on my 5700xt. Had to switch back to the old navi patch. List of supported GPUs: |AMD|Polaris 10| |AMD|Polaris 11| |AMD|Polar
    1 point
  18. Don't change the post arguments. Apk is for alpine and apt is for latest.
    1 point
  19. Try reducing fan speed minimum and or increasing low temperature threshold.
    1 point
  20. In the end I bought two NVMe drives and created a new mirror 🙂 (performed the backup and restore). I'm happy with the outcome!
    1 point
  21. I believe you answered your own question. Once they have access to the Unraid GUI, they have complete control. You must secure any access with a VPN tunnel or something similar, i.e. teamviewer or other secure remote access through another machine on the LAN
    1 point
  22. CRAZY: I had a miraculous breakthrough as well. I kept trying to use UEFI, but when I made the unRAID 6.9-rc2 image with the Unraid Flash Drive Tool as BIOS and CHANGED my R710 to BIOS instead of UEFI it booted this time! I know I've read somewhere that using BIOS instead of UEFI for the R710 is a bad idea (whether for drive space, which apparently is false for an HBA controller, or for some other reason, I do not recall).
    1 point
  23. Thanks! Just downloaded the latest version and created a user script to execute it. Will sleep better now
    1 point
  24. this would involve wiping out existing and pulling down app during startup, this is something i have stayed away from as it's fraught with issues, not to mention harder to support as you never know what version people are using. switching from plex to plex-pass (or the other way around) should be as simple as stop container, configure the other container to be the same as the stopped one, and start it, that should be it.
    1 point
  25. ok the fix is now in for spaces in share names, during my testing i also noted the default include extensions should be * not *.*, to ensure files with no extension are also locked (if no include extension specified).
    1 point
  26. Not sure if you still would like this, but here's my take;
    1 point
  27. This was driving me crazy too and I came to the same conclusion regarding papermerge.db. What I did was create a new container path- /db:/mnt/user/appdata/papermerge/db and then edited the papermerge settings.py to point to the new db location ("NAME": "/db/papermerge.db" for me). Now the documents stay on the array and all the db shenanigans that were keeping the drives spun up are taking place on my cache drive. The only times I've noticed the drives spinning back up is when I actually add new documents to papermerge. That does mean I'm relying on CA Appdata Backup/Restore to keep the db
    1 point
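For anyone wanting to replicate this, the two pieces of the change look roughly like the following. This is a hedged reconstruction from the post: the host path is the poster's own, and the `ENGINE` line is an assumption based on papermerge being Django/sqlite-backed; only the `"NAME": "/db/papermerge.db"` value is quoted from the post itself.

```shell
# Extra container path added in the Unraid Docker template for papermerge:
#   container path: /db
#   host path:      /mnt/user/appdata/papermerge/db
#
# And in papermerge's settings.py, point the database at that path:
#   DATABASES = {
#       "default": {
#           "ENGINE": "django.db.backends.sqlite3",   # assumed default engine
#           "NAME": "/db/papermerge.db",              # value quoted in the post
#       }
#   }
```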
  28. I have upgraded the container to Apache Guacamole 1.3.0. Please let me know if you run into any problems.
    1 point
  29. So for my main box I got around it by using the community kernel direct download here to get it going, in case anyone else gets stuck. Until this plugin is fixed or whatever is happening. Note the direct download is 6.8.3 though, with ZFS 2.0.0.
    1 point
  30. Ok, thanks for the response AgentXXL and itimpi. I did find this... might try some CLI. Cheers
    1 point
  31. Those following the 6.9-beta releases have been witness to an unfolding schism, entirely of my own making, between myself and certain key Community Developers. To wit: in the last release, I built in some functionality that supplants a feature provided by, and long supported with a great deal of effort by @CHBMB with assistance from @bass_rock and probably others. Not only did I release this functionality without acknowledging those developers' previous contributions, I didn't even give them notification that such functionality was forthcoming. To top it off, I worked with a
    1 point
  32. Alright, got everything working finally! Opencore configurator helped a lot to get the autoboot set up & Big Sur is working almost perfectly. I ran the script to change the network adapter to the e1000 variant to enable Apple services and network settings still shows the VMXnet3 as the ethernet connector. XML lists the correct e1000 variant, any input?
    1 point
  33. My solution was: Build a Ubuntu Server VM, pass through your shares or use multiple USB Harddrives on the PS3. Or change the OS
    1 point
  34. The Ultimate UNRAID Dashboard Version 1.5 is here! This is another HUGE 😁 update adding INTEGRATED PLEX Monitoring via Varken/Tautulli. This update is loosely derived from the official Varken dashboard, but I stripped it down to the bolts, modded the crap out of it, and streamlined it with a straight Plex focus. Honestly, the only code that still remains from their official dash is the single geo-mapping graph, as it is not actually an editable panel, but rather straight JSON code. I wanted to say thank you to that team for providing a great baseline to start from, and all of their previous wor
    1 point
  35. Here's what I did in my own system (also behind CGNAT). Spun up a Mikrotik CHR (virtual router) on Linode (but any other VPS with completely unblocked ports is also fine). Made a site-to-site VPN between my home networks (3 different houses) and the VPS. then port forwarded port 80 + 443 to an nginx container (kinda like SWAG - but I prefer to roll my own). This could be done with a Linux VPS, but I wanted to play with a Mikrotik router (all of my edge routers are Mikrotiks) and have less issues with the rather performant IPSec site-to-site vpn This means I can access my Emby serve
    1 point
  36. I actually had a similar issue with a folder within my main SMB share which did not display in Windows Explorer, I could however browse to it via the exact path. Turned out to be the Windows folder attributes, not the Unraid attributes. The folder was \\dataserver\store\hyperoo. When I accessed cmd prompt, and typed attrib \\dataserver\store\hyperoo, the attributes showed as SH (System and Hidden) so I removed these via attrib -S -H \\dataserver\store\hyperoo and now the folder is visible from all devices.
    1 point
  37. Handling of ‘unmountable’ disks is covered here in the online documentation accessible via the ‘Manual’ link at the bottom of the Unraid GUI. Have you run a file system check on the drive? Was the drive showing as unmountable before the rebuild (since a rebuild does not clear the ‘unmountable’ state)?
    1 point
  38. That'll do it. I can't believe I didn't see this; I also didn't expect the admin user to have something that simple turned off.
    1 point
  39. Yes, maybe for that user this option needs to be enabled: When the user can read, it looks like this on a book:
    1 point
  40. Did you allow the user to read books?
    1 point
  41. I too am coming across this issue. I thought it was because I only had .mobi files, so I grabbed the .epub files and let calibre merge them. Now I can see both file types and get the option to download both types, but I don't get a read book button to click on any of my titles. I am using the linuxio calibre docker image. Checked the FAQs and github, nothing really about how to troubleshoot the inability to read through calibre-web. Thanks in advance.
    1 point
  42. Plugin author has not posted or provided support for some time, made a note of that in the 1st post.
    1 point
  43. Hey jwiener3 and all others, As far as the perf: interrupt messages, I receive these as well, and the system seems to limit my login until it is done spitting those errors. Not sure if there are other processes still booting in the background. I'm assuming the "accessed a protected system..." is your banner/motd. As far as I know, those alerts don't actually cause any issues (that I have found). OH!!! Also, Update. Running Unraid 6.9.0-beta25. I'm now able to run many more interfaces. Currently running 25 (24 + mgmt). Addin
    1 point
  44. Hi all, Just spent the day creating a somewhat simple script for creating snapshots and transferring them to another location, and thought I'd throw it in here as well if someone can use it or improve on it. Note that it's use-at-your-own-risk. Could probably need more fail-checks and certainly more error checking, but it's a good start I think. I'm new to btrfs as well, so I hope I've not missed anything fundamental about how these snapshots work. The background is that I wanted something that performs hot backups on my VMs that live on the cache disk, and then moves
    1 point
  45. ...and here's an example of how I'm using it. This is my "daily-backup.sh", scheduled daily at 01:00. It snapshots and backs up all VMs and all directories under appdata.
      #!/bin/sh
      # VMs
      KEEP_SRC=48h
      KEEP_DST=14d
      SRC_ROOT=/mnt/cache/domains
      DST_ROOT=/mnt/disk6/backup/domains
      cd /mnt/disk6/backup
      virsh list --all --name | sed '/^$/d' | while read VM; do
          /mnt/disk6/backup/snapback.sh -c -s $SRC_ROOT/$VM -d $DST_ROOT -ps $KEEP_SRC -pd $KEEP_DST -t daily
      done
      # AppData
      KEEP_SRC=48h
      KEEP_DST=14d
      SRC_ROOT=/mnt/cache/appdata
      DST_ROOT=/mnt/disk6/backup/
    1 point
  46. The reason nohup doesn't work for this is because when you disconnect, or log out of that terminal, that terminal and any child processes of the terminal are killed. This is just Linux kernel process management doing its job. To prevent this you can just disown the process, no need to nohup it. For example you can:
      $ processname &
      $ disown
      and "processname" will continue running after the terminal is killed. This is good because it means that "processname" will still respond to hangup, which may be needed. Of course, you could also call disown with nohup:
    1 point
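A minimal runnable version of that example, with `sleep` standing in for the hypothetical "processname": after `disown`, the job no longer appears in the shell's job table, so the shell won't forward SIGHUP to it on exit.

```shell
#!/bin/sh
# disown removes a background job from bash's job table, so the shell
# won't HUP it when the terminal goes away.  `disown` is a bash builtin,
# hence the explicit bash -c wrapper.
bash -c 'sleep 1 & disown; jobs' > /tmp/disown-demo.out 2>&1
# jobs prints nothing once the job is disowned, so the capture is empty.
wc -c < /tmp/disown-demo.out
```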
  47. I see this thread has been idle for a month or so but I found it because it is the most recent thread that comes up on google when searching for "unraid wireless"... which I think is a much needed feature based on quite a few threads coming up on google asking for it. I live in a small one bedroom apartment where the only wired access to the internet is at a small desk between my bathroom and bedroom doors. There is not enough space there for my unRAID server and my gaming PC. I have placed my unRAID server in my living room by the TV because that is the only place I really have space. I al
    1 point