doblez's Achievements
  1. @Knutowskie Fix for @SpaceInvaderOne's config not working:

     I got it working with a new proxy configuration file based on the documentation on Collabora's website. The reason it stopped working is that the image for the Docker container was updated with new directories: what was /loleaflet is now /browser, and /lool is now /cool. The container template itself hasn't been updated, which is why the WebUI isn't working. I can't seem to push an update to the container, so maybe @chvb can fix it?

     Fixing the WebUI:
     - Edit the Collabora template and enable advanced view.
     - Change the WebUI URL to: https://[IP]:[PORT:9980]/browser/dist/admin/admin.html
     - Update the template and voila: log in with your admin details.

     Fixing the proxy configuration file:
     Change lines 17, 35, 44 and 54 to their new directories inside the NGINX .conf file. I've attached it here as a file and as code.

       # make sure that your dns has a cname set for collabora. If you setup Collabora to use the
       # custom docker network (as in my earlier videos for reverse proxy) then this config file
       # will work as is. However the container name is expected to be "Collabora" as it is by
       # default in chvb's container.
       # If you are not using the custom docker network for this container then change the line
       # "server Collabora:9980;" to "server [YOUR_SERVER_IP]:9980;"

       resolver valid=30s;

       upstream collabora {
           server Collabora:9980;
       }

       server {
           listen 443 ssl;
           server_name collabora.*;

           include /config/nginx/ssl.conf;

           # static files
           location ^~ /browser {
               proxy_pass https://collabora;
               proxy_set_header Host $host;
           }

           # WOPI discovery URL
           location ^~ /hosting/discovery {
               proxy_pass https://collabora;
               proxy_set_header Host $host;
           }

           # Capabilities
           location ^~ /hosting/capabilities {
               proxy_pass https://collabora;
               proxy_set_header Host $http_host;
           }

           # main websocket
           location ~ ^/cool/(.*)/ws$ {
               proxy_pass https://collabora;
               proxy_set_header Upgrade $http_upgrade;
               proxy_set_header Connection "Upgrade";
               proxy_set_header Host $http_host;
               proxy_read_timeout 36000s;
           }

           # Admin Console websocket
           location ^~ /cool/adminws {
               proxy_buffering off;
               proxy_pass https://collabora;
               proxy_set_header Upgrade $http_upgrade;
               proxy_set_header Connection "Upgrade";
               proxy_set_header Host $host;
               proxy_read_timeout 36000s;
           }

           # download, presentation and image upload
           location ~ ^/(c|l)ool {
               proxy_pass https://collabora;
               proxy_set_header Host $http_host;
           }
       }

     Attached: 07-01-2022-collabora.subdomain.conf
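If you'd rather patch an existing .conf in place than replace it, the two path renames can be scripted. A minimal sketch with sed - the demo below operates on a throwaway temp file with illustrative location lines, so it is runnable as-is; point CONF at your real collabora.subdomain.conf instead:

```shell
#!/bin/sh
# Demo: migrate the old Collabora paths (/loleaflet -> /browser, /lool -> /cool).
# CONF is a throwaway temp file here; substitute your real proxy conf path.
CONF="$(mktemp)"
cat > "$CONF" <<'EOF'
location ^~ /loleaflet {
location ~ ^/lool/(.*)/ws$ {
location ^~ /lool/adminws {
EOF

# Apply both renames in one pass; -i.bak keeps a backup of the original file.
sed -i.bak -e 's|/loleaflet|/browser|g' -e 's|/lool|/cool|g' "$CONF"

cat "$CONF"
# location ^~ /browser {
# location ~ ^/cool/(.*)/ws$ {
# location ^~ /cool/adminws {
```

After patching your real conf, restart the proxy container so nginx reloads the file.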
  2. Thanks - I ended up figuring it out myself, but I managed to bork my license key, so now I'm waiting for support! Have a good day!
  3. EDIT: Failed flash drive - thanks ljm42 for verifying. I've recently gotten this error message, and now I am unable to access my server at all, since it defaults from my local IP to the Unraid-provided HTTPS IP. Has anyone else had a similar error? Otherwise the plugin works perfectly and is great.
  4. Hey, cool that your server has been running for 2 years now - I am quite pleased with where Unraid has ended up by now! What exactly are you trying to do, since you want Plex to rename them? I have mine running with "Keep Filename History" enabled and Plex doesn't have to rename anything; it just recognises the new codec and file type. I have "Scan my library automatically" and "Run a partial scan when changes are detected" enabled. @Josh.5 Thanks for the awesome piece of software, it works beautifully. Feature request(s) if you at some point have time: 1. A way to initiate a scan with a button press. (Nice to have) 2. A way to blacklist folders within the specified library path, which is especially useful for TV directories. (Very nice to have)
  5. Hey, I did the same thing earlier with the same result. I tried deleting the Big Sur image files and changing it to method 2, which then grabbed the correct Big Sur image file from Apple. So yeah, I'd say it's a workaround-able bug to be fixed when there is time.
  6. I can't see any VMs in the dashboard, regardless of whether I put it in started-only mode or all VMs.
  7. 64GB/768GB multi-bit ECC in my main server, 16GB/32GB non-ECC in my backup server. Both are overkill for my use case, though both my Linux VMs love that I give them 16GB each.
  8. I tried all of the above options to no avail, and then I figured: hell, might as well try to reformat the disk. Turns out something went wrong when I last cleared and formatted the five 900GB SAS disks, and a data shuffle + reclear + subsequent reformat fixed it; I now have 0 errors. Thanks for your help though!
  9. Fair enough - I'll try to have a look. I just find it very weird, with two redundant 750W PSUs and the fully functioning disks being on the same expander as the "broken" ones.
  10. And how would I test the controller (which I'm assuming to be the H200)? Although I doubt it's that since the new disks aren't spitting out errors!
  11. I'll run the extended SMART test, but how do I fix/test for the other? It's a Dell PERC H200 RAID card (JBOD) and a Dell R720 with the backplanes connecting to the disks themselves.
  12. SMART stats are included in the diagnostics file, and it's all the disks, not just 5 and 6, that fail. Disk 1 (the new 4TB Seagate) doesn't produce errors.
  13. Hi, I've recently built a new server with some brand new disks and some older SAS drives I got cheap. I wrote to all of them from my old server through FTP without problems or errors - 5.5TB in total. I then tried to do a parity check and all hell broke loose with disk read errors. I was extremely confused as to why, so I restarted the server and tried downloading files from all of the afflicted disks. No errors. But as soon as I start the parity check, I get errors for days and I have no clue why. I hope you can help! Thanks in advance!
  14. Hey, I recently upgraded my setup to two servers at two different locations. The main reason for this is that both my father and I (we live at separate locations) want full-gigabit access to all of our raw photos while editing. I was thinking of setting up OpenVPN on the main server, having the remote server connect to it, and then syncing photos between them, but I am unsure if this is the best solution or what you would recommend. Thanks in advance!