johner

Members
  • Posts: 59
  • Joined
  • Last visited
  • Gender: Undisclosed


johner's Achievements

Rookie (2/14) · Reputation: 10

  1. Note: I tried setting the Trilium version to 'latest', and I now get the same version error as you.
  2. I have a similar error. I tried removing the Docker container and image, then reinstalling; now I'm getting: ERROR: Current DB version 228 is newer than app db version 197 (see the version-pinning sketch after this list).
  3. I think he's referring to the title; it's a potential solution to the requirement.
  4. That's a design and user-experience question. Obviously the specific VMs whose storage is offline will not be able to keep running, so what experience do 'you' want? With this requirement enabled, it could be: prevent any VM/Docker container from using the parity array at all (maybe an easy MVP release); or something more complex, where the stop logic traces back the users of that array and first stops only the specific services affected (individual VMs, specific shares, etc.) - I'd suggest that as a later/subsequent release if people want that experience. First things first: agree the requirements, accept the requirements, scope a release, then design it.
  5. Array = parity array… Docker containers and VMs should be able to continue running on a cache pool etc. Basically, make each array independent, with its own start/stop/check buttons. This might then also support multiple parity arrays…
  6. The beef people have is that it adds layers of complexity. Unraid has native VM capability; ESXi 8 has specific hardware requirements that not everyone can or wants to meet. I use Proxmox, which is a lot more forgiving in this regard, but I'd still like to have one management console, and for my Docker containers to continue running when I perform work on the array. I also like the community apps that are VM image builders, like the macOS one; it's a lot more faff trying to get those running on Proxmox or ESXi. I think there is enough of an ask here for it to be a valid requirement for Lime Tech to figure out a design that works and meets their licensing requirements.
  7. Yes, exactly. For those that are curious, search the forum. There is a guy in Europe (Thomas Hellstrom) who has done this and even written his own CLI-based status/management tool. I tried to get him to publish it on GitHub, but he had yet to do so when I last checked a few months back.
  8. Just seeing some of the comments along the lines of 'you don't know the internals' - damn right I don't, and I don't care, as it's none of my business. It doesn't change the fact that, as a consumer, I have valid requirements. Will they get implemented? I don't know, or when (that may come down to Lime Tech's design choices), but I do know market forces, and eventually someone will release a product that meets these requirements. Ultimately I think people are paying for two things: 1) the Unraid parity solution (best efficiency in terms of disk space) - this is technically free given the open-source code is available, so really they're paying for the UI to configure it; 2) the Docker community App Store, which is also community/open source as I understand it. I'm sure that if enough frustrated people don't get their diva-ish ways, one of them will crack and start building a module for OMV or some other open-source base platform.
  9. Looking for some help. I have the restarting-Unraid issue of having to reset the owner etc.; however, that's not the real issue for me. The issue I have is that Plex sees the mount - I can see the files when adding /data/[MySubFolder] to the Plex library - but Plex doesn't have permission to load a file to scan it and actually add it to the library (thus the library is empty). When I ls -la, the files have: -rw-r--r-- Do they need execute as well? I have set the ownership to 911:911, and chmod 777 has no impact. What am I doing wrong? I'm considering installing rclone 'in' the Plex docker at this rate, but I'm sure that'll be a whole new world of pain, and not as 'clean'. (See the permissions sketch after this list.)
  10. Same problem for me. For those that have moved them manually/with CA, did you just move from /mnt/cache to /mnt/disk#? (See the move sketch after this list.)
  11. Yes please! The syslog is filling up, but I need mover logging turned on to work out why the mover isn't finding files it thinks exist...
  12. Imagine if ZFS or Btrfs designed their solutions so that if you took one pool offline, they all went offline. My requirement is to be able to independently start and stop a pool - be it the parity array (or arrays, if multiple arrive in the future) or a cache pool - with only the services using that particular array/pool impacted. A warning would be presented listing which services are affected, e.g. 'Specific share 3' is set to use 'cache pool 5' and will be remounted without cache; OR all Docker containers will be taken offline because the system image is stored on this pool; OR this list of VMs will be hibernated because their disk images are stored on this pool, while the others continue to run. And so on for the design considerations - I'm trying hard not to tell you how to design the solution, as I hate it when business teams at work focus on the solution and not the requirement 🙂 Thanks!!!
  13. This - I have to run Unraid as a VM on ESXi for this exact constraint! I could run bare metal if the pools were separated out and could be managed independently (e.g. a setting to force VM- and Docker-related content onto a specific cache drive that can be managed independently of the other 'pools'). Understandably for Docker too! I thought about using my second Unraid license in a separate VM on ESXi to run just Docker containers (because I love the UI and App Store), but this constraint of being tied to an Unraid array prevents running Unraid for just Docker - unless I missed something!? Please find a way to make this happen, team!
  14. Hi, I stumbled across this, and it looks like it's what I need to detect duplicates across disks under the logical share - so thanks. When I run it without options, I get the "no dupes" result after about 30s - all good (I assume!). I then ran it with -c to double-check:

          bash unRAIDFindDuplicates.sh -c

      and it immediately responds with the help text. Q1: Is this a defect, or am I doing something wrong? I then tried:

          bash unRAIDFindDuplicates.sh -z

      It said no duplicates (again after about 30s), then sat there reporting nothing for a few minutes, and eventually came back with lots of errors such as (this is a subset):

          ls: cannot access '/mnt/disk*//appdata/ESPHome/hot_water_system/.piolibdeps/hot_water_system/ESPAsyncTCP-esphome/examples/SyncClient/.esp31b.skip': No such file or directory
          ls: cannot access '/mnt/disk*//appdata/FileBot/log/nginx/error.log': No such file or directory
          ls: cannot access '/mnt/disk*//appdata/FileBot/xdg/cache/openbox/openbox.log': No such file or directory
          ls: cannot access '/mnt/disk*//appdata/FileBot/.licensed_version': No such file or directory
          ls: cannot access '/mnt/disk*//appdata/FileBot/error.log': No such file or directory
          ls: cannot access '/mnt/disk*//appdata/Grafana-Unraid-Stack/data/influxdb/wal/telegraf/autogen/250/_01402.wal': No such file or directory
          ls: cannot access '/mnt/disk*//appdata/Grafana-Unraid-Stack/data/influxdb/wal/_internal/monitor/258/_00094.wal': No such file or directory
          ls: cannot access '/mnt/disk*//appdata/Grafana-Unraid-Stack/data/influxdb/wal/home_assistant/autogen/255/_00003.wal': No such file or directory
          ls: cannot access '/mnt/disk*//appdata/Grafana-Unraid-Stack/data/loki/index/index_2573': No such file or directory
          ls: cannot access '/mnt/disk*//appdata/Grafana-Unraid-Stack/data/loki/index/index_2520': No such file or directory
          ls: cannot access '/mnt/disk*//appdata/Grafana-Unraid-Stack/data/loki/index/index_2525': No such file or directory
          ls: cannot access '/mnt/disk*//appdata/Grafana-Unraid-Stack/data/loki/index/index_2551': No such file or directory
          ls: cannot access '/mnt/disk*//appdata/Grafana-Unraid-Stack/data/loki/index/index_2552': No such file or directory
          ls: cannot access '/mnt/disk*//appdata/Grafana-Unraid-Stack/data/loki/index/index_2579': No such file or directory
          ls: cannot access '/mnt/disk*//appdata/Grafana-Unraid-Stack/data/loki/index/index_2609': No such file or directory

      Q2: What is it trying to do when checking for zero-length dupes that it isn't doing when run with no options? (See the glob sketch after this list for what those 'cannot access' errors suggest.) I then ran it in verbose mode out of interest:

          bash unRAIDFindDuplicates.sh -v

      and noticed two things. First, this error halfway through:

          List duplicate files
          unRAIDFindDuplicates.sh: line 373: verbose_to_bpth: command not found
          checking /mnt/disk1

      Q3: Will this error affect the actual dupe check? I assume not. Second, it doesn't seem to take into consideration the additional cache pools that can be defined in v6.9 (I have a second one called 'scratch'). Q4: Would you be willing to add something that dynamically checks for additional cache pool configs and includes them in the no-option run? I then tried the -D option to add the additional cache drive (/mnt/scratch) to be treated as an array drive, and it went a bit screwy:

          bash unRAIDFindDuplicates.sh -v -D /mnt/scratch

      Output (killed with ctrl-c in the end):

          ============= STARTING unRAIDFIndDuplicates.sh ===================
          Included disks: /mnt/disk/mnt/scratch /mnt/disk1
          ...
          List duplicate files
          unRAIDFindDuplicates.sh: line 373: verbose_to_bpth: command not found
          unRAIDFindDuplicates.sh: line 404: cd: /mnt/disk/mnt/scratch: No such file or directory
          checking /mnt/disk/mnt/scratch
          [SHARENAMEREDACTED]
          ...
          Duplicate Files
          ---------------
          **Looks like it's now listing every file below here (these may be genuine - TBC)**
          ...

      I'm running 6.9.2 if that helps in any way. Thanks! John
  15. If there are notifications queued up in the UI, but by the time you click 'x' or 'clear all' the login session has expired, they look like they are clearing by disappearing, but then they all come back again. After a page refresh (and the subsequent re-login/confirming the 'insecure' cert to continue), they can be cleared. The expected behaviour would be for the client-side browser logic to detect that it couldn't post the clear request to the server, and force a refresh/re-login or the like.
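
On the Trilium error in items 1-2: "Current DB version 228 is newer than app db version 197" means the running app build is older than the database it was pointed at, and the 'latest' tag apparently resolved to such a stale build. One way out is to pin an explicitly newer release. A minimal sketch, assuming the zadam/trilium image, a made-up tag, and a typical Unraid appdata path - verify all three against your own setup:

    # Replace the container with one pinned to a release whose supported
    # DB version is at least 228. The tag below is a placeholder.
    docker stop trilium && docker rm trilium
    docker pull zadam/trilium:0.61.5        # hypothetical tag - check Docker Hub
    docker run -d --name trilium \
      -p 8080:8080 \
      -v /mnt/user/appdata/trilium:/home/node/trilium-data \
      zadam/trilium:0.61.5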
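
On the permissions question in item 9: plain files only need the read bit, but every directory on the path also needs the execute (search) bit, or Plex cannot traverse into it. That said, if /data is actually an rclone/FUSE mount, chmod and chown on the mounted files are typically ignored - which would explain why chmod 777 had no impact - and ownership has to come from the mount options instead. A sketch of both cases, with hypothetical paths and a placeholder remote name:

    # Case 1: files on normal disks - give directories traversal rights.
    find /mnt/user/data/MySubFolder -type d -exec chmod 755 {} +
    find /mnt/user/data/MySubFolder -type f -exec chmod 644 {} +
    chown -R 911:911 /mnt/user/data/MySubFolder

    # Case 2: /data is an rclone mount - set ownership via the mount itself.
    rclone mount remote:media /mnt/disks/media \
      --allow-other --uid 911 --gid 911 --umask 002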
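
On the move question in item 10: yes, in essence - with the services that use the share stopped, copy from /mnt/cache to the target /mnt/disk# path, verify, and only then delete the source. A sketch, assuming it is the appdata share moving to disk1 (stop Docker in the UI first so nothing writes mid-copy):

    # Copy preserving permissions, ownership and timestamps.
    rsync -avh --progress /mnt/cache/appdata/ /mnt/disk1/appdata/
    # Verify the trees match before removing the original.
    diff -r /mnt/cache/appdata /mnt/disk1/appdata && rm -rf /mnt/cache/appdata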
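
On the "cannot access '/mnt/disk*//...'" errors in item 14: by default, bash passes a glob through verbatim when nothing matches it, so an ls over /mnt/disk*/<path> fails with exactly that message for any file that exists only outside the globbed disks (e.g. only on a cache pool) - which fits paths like appdata. The "Included disks: /mnt/disk/mnt/scratch" line likewise suggests the script prepends its own /mnt/disk prefix to the -D argument, mangling the absolute path that was passed. A small demonstration of the shell behaviour in question (not the script's actual code):

    # Default behaviour: an unmatched glob reaches ls verbatim and fails.
    ls /mnt/disk*//appdata/only-on-cache.txt
    # -> ls: cannot access '/mnt/disk*//appdata/only-on-cache.txt': No such file or directory

    # With nullglob, an unmatched pattern expands to an empty list instead,
    # so the existence of matches can be tested before calling ls.
    shopt -s nullglob
    matches=(/mnt/disk*//appdata/only-on-cache.txt)
    (( ${#matches[@]} )) && ls -la "${matches[@]}"
    shopt -u nullglob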