ph_

Members • 27 posts
Everything posted by ph_

  1. Resolved it by first duplicating the 2 affected shares, then deleting the old ones and renaming the new ones to the previous names on Unraid. That cleared the padlock sign on the shares in macOS. Then I removed and recreated the crashing Nextcloud sync connection. Not elegant... but it did the job.
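For reference, roughly what the workaround looks like from the Unraid console - a minimal sketch, assuming an affected share called "share1" (names are examples; the share settings themselves were recreated in the WebUI):

    # Duplicate the affected share's contents, then swap the names (example names):
    rsync -a /mnt/user/share1/ /mnt/user/share1_tmp/
    rm -r /mnt/user/share1                       # remove the old, misbehaving share directory
    mv /mnt/user/share1_tmp /mnt/user/share1     # the new copy takes over the old name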
  2. Hi, I am aware that Mac OS SMB issues are an ongoing topic and seem to come in multiple flavours... here comes my contribution.

Some context: my setup consists of the Unraid server, 2 Apple laptops and a Windows PC. On the Unraid server I have a few private shares. To access them I have one user set up that I use for SMB access on all the client machines mentioned. In addition, I also run Nextcloud, which uses these shares as external storage, and I have clients set up on the Windows PC and the main MacBook to sync files. This has worked fine for a few days now.

The issue: at some point I noticed on the Mac that the Nextcloud client would crash after a few seconds. By stopping all other synchronisation, I could narrow the cause of the crash down to 2 folders which sync the contents of 2 shares. When I then mounted these 2 suspicious shares via Finder on the desktop, I noticed the padlock signs; however, upon inspecting the properties, the permissions for the correct user are read and write. Trying to write to the top directory of the 2 affected shares does not work, e.g. /share1/, but strangely enough I can write into a subdirectory of the share, e.g. /share1/somefolder. All other shares behave as usual, no problems. On the PC side I have no problems whatsoever. On the second Mac laptop I have the same troubles as on the primary Mac, but that one never had the Nextcloud sync installed. So the trouble seems to be located with the shares themselves.

I had a glance through the diagnostics and did not see anything obvious in the syslog.txt file, however I am not an expert. I would appreciate a holler if someone has come across this kind of problem and could kindly point me in the right direction. Best, Ph

unton-diagnostics-20240322-1256.zip

Edit: Actually, I can delete files from a Finder window in the top level of the problematic share, but I cannot drag and drop a file into it. FreeFileSync also moves data back and forth without a problem... but Nextcloud crashes on every sync attempt.
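For what it's worth, the odd behaviour can be reproduced from the macOS Terminal instead of Finder - a quick check, assuming the share is mounted at /Volumes/share1 (an example path):

    ls -le /Volumes/share1                       # -e also lists ACLs, which Finder's padlock reflects
    touch /Volumes/share1/test.txt               # fails at the share root in my case
    touch /Volumes/share1/somefolder/test.txt    # succeeds one level down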
  3. Hi @binhex, I tried another browser... it works there instantly. In the initial browser, clearing cached images and files did the trick. I did not know that a VNC session could appear broken because of some cookie/cache issue. Will keep it in mind. Many thanks, ph
  4. Hi, I just updated Krusader; unfortunately, the update seems to have introduced some problems, see the log below. The app does not work any more, and the WebUI shows another error. Help would be appreciated. I am running Unraid 6.12.4.

tint2: panel items: TSC
2024-03-18 01:28:15,361 DEBG 'start' stderr output: tint2: Systray composited rendering on
tint2: nb monitors 1, nb monitors used 1, nb desktops 4
tint2: panel 1 uses scale 1
2024-03-18 01:28:15,423 DEBG 'start' stderr output: tint2: Kernel uevent interface initialized...
2024-03-18 01:28:15,424 DEBG 'start' stderr output: tint2: systray window 8388621
2024-03-18 01:28:15,424 DEBG 'start' stderr output: tint2: systray started
2024-03-18 01:28:15,666 DEBG 'start' stderr output: MESA: error: ZINK: vkCreateInstance failed (VK_ERROR_INCOMPATIBLE_DRIVER)
2024-03-18 01:28:15,667 DEBG 'start' stderr output: glx: failed to create drisw screen
2024-03-18 01:28:15,667 DEBG 'start' stderr output: failed to load driver: zink
2024-03-18 01:28:16,020 DEBG 'start' stderr output: error 3: BadWindow (invalid Window parameter) request 20 minor 0 serial 434
[error 3: BadWindow repeats with alternating request 15 / request 20 through serial 472]
2024-03-18 01:32:13,289 WARN received SIGTERM indicating exit request
2024-03-18 01:32:13,290 DEBG killing start (pid 70) with signal SIGTERM
2024-03-18 01:32:13,295 INFO waiting for start to die
2024-03-18 01:32:13,303 DEBG 'start' stderr output: Signal: 15 X connection to :0 broken (explicit kill or server shutdown). Signal: 15 In exit
2024-03-18 01:32:13,305 DEBG fd 8 closed, stopped monitoring <POutputDispatcher at 23117275914576 for <Subprocess at 23117276021584 with name start in state STOPPING> (stdout)>
2024-03-18 01:32:13,306 DEBG fd 10 closed, stopped monitoring <POutputDispatcher at 23117275846800 for <Subprocess at 23117276021584 with name start in state STOPPING> (stderr)>
2024-03-18 01:32:13,307 WARN stopped: start (exit status 143)
2024-03-18 01:32:13,310 DEBG received SIGCHLD indicating a child quit
** Press ANY KEY to close this window **

Best, ph
  5. I then simply tried it again - I can confirm that on the WU2, too, headless operation (with an iGPU) is possible without a dummy plug. (As so often, several problems had overlapped there, hence the original question.) Best regards
  6. OK, thank you - that is at least a lead to go on.
  7. Hello everyone, just a quick question for users of / people familiar with this mainboard: am I right in assuming that you need a dummy plug for headless operation with this board? I am using it with an iGPU (i3 8100). I found nothing on the topic in the manual, but other forums generally mention that some boards need one for headless operation with an iGPU. Thanks in advance, best regards, ph_
  8. Update: I tried Safe Mode with the GUI. My settings: spin-down after 15 mins; the array is stopped because one parity disk is missing. Result: disks stay spinning for 30 mins, then at 31 mins start spinning down, and at 32 mins start spinning up again. Conclusion: Safe Mode does not make any difference. As for my goal of adding a new drive to the array and reactivating it - it seems I have to endure the array spinning up and down every few minutes while it takes 2 days to pre-clear my 10TB IronWolf disk. Any other suggestions? If I decide to downgrade to 6.10 again, would that mean my Docker containers stop working (as they have been upgraded in the meantime) or introduce other new problems? Have there been any changes to this strange behaviour in even newer versions of Unraid?
  9. Hi Simon, thanks for the reply - alright, I guess I'll give that Safe Mode a shot. While it might keep checking whether the disks are still there, why would it spin them up, though, if the user has stopped the array? If this is a design decision... it is a rather poor one. Cheers 🙂
  10. Hi, as the headline says... I am certainly not the first person to have this problem, but I also could not find a conclusive solution to the issue. I am not keen on downgrading, as this will probably create new issues with Docker images I have upgraded since switching from 6.10.0 to this newer version. So, the array has been stopped, because I took out one of the parity disks. Hence there should be no reason whatsoever to spin up disks. I guess the problem was there before I took the parity out; it was just less obvious then that something was going awry. If someone could advise where to look in the diagnostics - I checked the syslog and it does not say much more than what I already know:

....
Feb 20 12:44:13 Unton emhttpd: read SMART /dev/sdb
Feb 20 12:44:25 Unton emhttpd: read SMART /dev/sdc
Feb 20 12:59:03 Unton emhttpd: spinning down /dev/sdb
Feb 20 12:59:15 Unton emhttpd: spinning down /dev/sdc
Feb 20 13:00:13 Unton emhttpd: read SMART /dev/sdb
Feb 20 13:00:25 Unton emhttpd: read SMART /dev/sdc
....

What else can I check? Best! unton-diagnostics-20240220-1630.zip
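For anyone debugging the same thing: the spin state can be checked without waking the disks, and SMART can be read in a standby-aware way - a sketch from the Unraid console (device names are examples):

    hdparm -C /dev/sdb                # reports "standby" or "active/idle" without spinning the drive up
    smartctl -n standby -A /dev/sdb   # -n standby: skip the SMART read entirely if the drive is asleep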
  11. Hey, sorry, I have not dealt with this for a while now. So, no, I have not fixed it yet. Any enlightenment in the meantime? Cheers
  12. Replying to my own post: there is some info on the German corner of the forum. I guess you easily get obsessed with saving power if you have electricity prices as sky-high as in Germany....
  13. Hi, I have read here and there that certain NVMe drives might be responsible for systems not being able to reach lower package power states such as C7 or C8. If this is the case, is there some guidance on which models are proven good in that respect and which ones are troublesome and thus better avoided for low-power server builds? In particular, I am interested in experiences with enterprise NVMe drives, as many of those have capacitors onboard to prevent data loss in case of a sudden power failure, which seems like a great feature to have (although maybe not necessary if you have a UPS?). The point being that those types of drives are apparently especially uncooperative when it comes to low-power management. I would appreciate some insight being shared, or links with more information. Many thanks!
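For context, this is how I have been checking what the system actually reaches - a sketch, assuming the powertop and nvme-cli packages are installed (the device name is an example):

    powertop                                  # the "Idle stats" tab shows C-state residency
    lspci -vv | grep -i aspm                  # which PCIe links have ASPM enabled or disabled
    nvme get-feature /dev/nvme0 -f 0x0c -H    # feature 0x0c: APST (autonomous power state transitions)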
  14. Hi Iqgmeow, many thanks for your help - I managed to follow your instructions and Overleaf is running now! I was able to set up an admin account using the command grunt user:create-admin [email protected] in the Overleaf console. I can access Overleaf now via the local IP address, but access via the domain through Nginx Proxy Manager does not work yet. It is a tad strange: for running behind a proxy, according to the documentation, you apparently need to set the following variables:

SHARELATEX_SECURE_COOKIE=true
SHARELATEX_BEHIND_PROXY=true
SHARELATEX_SITE_URL=https://overleaf.mydomain.com

but once you set them in the Unraid template, you get an error relating to cookies when trying to sign in to Overleaf. https://github.com/overleaf/overleaf/issues/1032 Whether or not those variables are set, I get a 404 error when trying to access via the URL. I have seen a few nginx configuration file examples in relation to Overleaf; they have a couple more entries than mine. The nginx config files generated through the Nginx Proxy Manager WebUI expose just a limited set of options, including the enabling of websockets - but nowhere is it explained whether those additional entries are actually a necessity or not. In case you have come across that issue and a solution, please do let me know.
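In case it helps others narrow this down: the proxy can be ruled in or out with two quick checks - a sketch with example IPs and ports:

    curl -I http://192.168.1.10:8085/login    # straight to the Overleaf container: should return 200
    curl -I -H "Host: overleaf.mydomain.com" http://192.168.1.10:80/login   # via Nginx Proxy Manager

If the first succeeds and the second 404s, the problem sits in the proxy host configuration rather than in Overleaf itself.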
  15. Hi everyone, I am having trouble setting up the Overleaf Docker template. I would appreciate it if someone could help me out or point me to a comprehensive tutorial, if one exists. Unfortunately, the GitHub page of Overleaf gives no answers for this kind of problem.

I intend to run Overleaf behind Nginx Proxy Manager; this works fine for other Dockers such as Jellyfin and Airsonic, so that part should not be totally wrong on my side. I have downloaded the templates of Overleaf as well as Redis and MongoDB and filled in the templates to the best of my knowledge (see below); however, the site refuses to connect with a 404 error, respectively ERR_CONNECTION_REFUSED. The Overleaf Docker runs in a proxynet with the Proxy Manager; the databases are on "bridge". The WebUI port had to be changed to something other than 80 so it does not conflict with the Unraid main UI. I am not sure what the exact problem is, but it seems the service is just not running, hence nothing can be found at the address. Actually, when refreshing the Docker page, you can see that Overleaf keeps stopping, too.

I am not entirely clear whether Redis and MongoDB are appropriately connected. In the template for MongoDB I left all the default values, and in Overleaf I have the following to establish the connection; I also tried leaving out the "mongodb://" part, but it does not make any difference. It is odd that the template asks for a URL and not for a "host" as it does for Redis. The Overleaf log looks like this; I am not sure whether it says it can connect or is checking whether it can, but looking at the top, I would say it cannot connect to the database and then shuts down as a result: I also notice that the Overleaf console does not stay open, so I cannot use it. MongoDB's log looks like this: For Redis I have also used the default host and port and inserted them in the Overleaf template. The Redis log looks like this:

Following advice here: https://github.com/overleaf/overleaf/issues/1032 I have also tried removing a few of the Overleaf template variables, such as SHARELATEX_SITE_URL, SHARELATEX_SECURE_COOKIE and SHARELATEX_BEHIND_PROXY. In retrospect this does not seem to make sense, as it specifically refers to setups behind proxies, but it did not make any difference either. I also removed SHARELATEX_REDIS_PASS following this post:

Lastly, I wonder why neither Redis nor Overleaf produces a folder in my appdata directory; MongoDB does - is this behaviour normal? Can anyone point me in the right direction please? Also happy to give more info. Many thanks in advance.
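A few connectivity checks that helped me reason about this - the container names are the ones from my templates and may differ on your system:

    docker exec -it MongoDB mongo --eval 'db.runCommand({ ping: 1 })'   # expect "ok" : 1 (the shell is "mongosh" on newer images)
    docker exec -it Redis redis-cli ping                                # expect PONG
    docker logs --tail 50 Overleaf                                      # why does the container keep stopping?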
  16. Ok, great! No, in my case it is all on the same machine, so it should not be a problem. Thank you!
  17. Hi, I have set up a few Dockers, e.g. Jellyfin, Airsonic etc., which I expose to the internet via subdomains. I set up an Argo Tunnel via the cloudflared Docker to allow for connections. The requests then go from cloudflared to my reverse proxy. I decided on Nginx Proxy Manager as it works out of the box with my Cloudflare certificate (I had trouble getting the certificate that SWAG requests itself to be accepted by cloudflared). The connection from the Cloudflare server via the Argo tunnel to the reverse proxy should be secured via the certificate / HTTPS. My question is the following: most of the proxied hosts (except Nextcloud) are connected via plain HTTP to the reverse proxy, e.g. Piwigo to Nginx. Does this still qualify as secure, given that this connection is already "within the server"? Or does this break the "secure HTTPS chain" and create a vulnerability? The subdomains all start with https://..., but sometimes Chrome flags the site as "dangerous" while Safari, for instance, doesn't. I read in the post above: "Nginx Proxy Manager doesn't have the support for forwarding to a HTTPs backend/server." I am not sure if this is related to my question. Feedback or some good links for reading would be much appreciated; happy to provide more info if necessary. Many thanks
  18. Hi, I am on Unraid version 6.10.0-rc4 and am trying to use CA Backup / Restore Appdata. It worked fine the first time I used it on v6.9 in February; now I am on the plugin version from 2022.07.23. I hit the backup button and it does something for 3 secs, but it does not create any new backup file in the appdata backup destination. See the message below: What could be the cause? Absolutely no change since the last backup? - I doubt it. Any pointers would be much appreciated. Thx!
  19. Hi, yes, good point - I checked this before and was... well... hoping that the "Supported cards" list refers to cards which are "tried and tested" and that there is a chance other cards are supported, too. I also dug into src/device-db.h - the Radeon Pro W5500 has the identifier 0x7341 and is part of this list - however, maybe this still does not mean the card is supported after all. What do you think - still worth trying? https://github.com/gnif/vendor-reset/blob/master/src/device-db.h
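For completeness, confirming which ID the host actually sees is quick - the 0x7341 from device-db.h should show up as the second half of the [vendor:device] pair:

    lspci -nn | grep -Ei 'vga|display'
    # e.g. "0a:00.0 VGA compatible controller [0300]: ... Navi 14 [Radeon Pro W5500] [1002:7341]"
    # (the PCI address above is an example; 1002 is AMD's vendor ID)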
  20. Hi, thank you, that's encouraging - I am aiming to use the AMD card primarily for Mac OS - any experience with this? I have setups with Catalina 10.15.4 and 10.15.7 with a Radeon Pro W5500 which overall work ok (except for the reset issue, obviously, and the lack of hardware acceleration in 10.15.7) - if of interest, I put together a little write-up here: I guess I will just dive in with the upgrade this weekend and see how it goes... thanks for all the replies, much appreciated!
  21. Hi, wow, such a speedy reply - ok, that sounds reasonable - I have been using Unraid for just 2 months 🙂 - it has been quite interesting so far and a good learning curve, but I have never upgraded before - will have a look at the process - good to know that downgrading would also be an option (if needed)! Many thanks / vielen Dank!
  22. Hi All, this is to sum up my experiences to date trying to pass through a Radeon Pro W5500, mainly for Mac OS VMs, on Unraid 6.9.2. I had previously set up Mac OS High Sierra successfully with an Nvidia 1080 Ti, but with that OS being a tad dated and the last one supported by that card, I was hoping to make a new installation with a more modern AMD GPU and Catalina or above. So I went over to Dortania and checked the support for recent AMD cards; the Radeon Pro W5500 seems supported. The card is also appealing for occupying just a single PCI slot.

I first tried Big Sur and Monterey with the OpenCore boot argument agdpmod=pikera as stipulated in the Dortania guide, but could not get them to work so far. In Big Sur the boot process stops with "Failed to send SMC number of eGPU P states (16) due to XxXX!" It seems Big Sur treats the card as an external GPU, despite running a Mac Pro or iMac Pro SMBIOS, neither of which has an iGPU. Monterey also gets stuck during Mac OS boot, though I have not looked into the specifics there yet. After some digging online, I found the following forum post on macvidcards, which provided useful information going forward, while also essentially stating that Big Sur and above are not supported by the card - though there seem to be Hackintoshers out there running the card with Big Sur. I therefore aimed a bit lower, at Catalina, and had more success with it. https://www.macvidcards.eu/installing-macvidcards-patch-for-amd-radeon-rx-5000-w-5000-cards-under-macos-catalina-10-15-5-or-later

Catalina 10.15.7
It seems there were changes to the graphics kexts from 10.15.5 up to 10.15.7 which prevented the card from working. By replacing the extension AMDRadeonX6000HWServices.kext in 10.15.7 this could be fixed; however, Metal and graphics acceleration do not work. The replacement extension is the previous graphics extension from 10.15.1 up to 10.15.4, essentially rolling back changes made by Apple. Regarding the reset bug, this installation proves relatively benign - a clean shutdown or restart gives the card a full reset.

Catalina 10.15.4
Because of the lacking graphics acceleration in the 10.15.7 setup, I went and downloaded a full installation of 10.15.4. (If you just use the base system installer, OSX will download additional files during install and essentially upgrade the whole OS to the latest version in the process.) It turned out that 10.15.4 gives the card Metal support as well as hardware acceleration. Hackintool reports VDA fully supported, and playback of high-res H265 video is smooth. However, launching a program like VideoProc and testing the acceleration capabilities makes the whole UI almost unusably laggy, which can only be mitigated by rebooting. This is a bug which also seems to plague various Mac users. The reset bug seems more pronounced in this version, too. Restarts make the whole server hang (WebUI frozen) such that a hard restart seems the only way out. A shutdown requires resetting the card by putting the server to sleep for a few seconds and rescanning PCI devices afterwards. I did this with @SpaceInvaderOne's script; however, the PCI rescan part seemed not to always execute, so I separated it into a second script which I run manually after the server is back up. Generally, it is problematic to be on an "unfinished" version of Mac OS; some apps from the App Store would not install on anything below 10.15.6, in which case you have to find your own legacy installers.

Mac OS summary:
Unfortunately, it seems neither version makes for a fully usable configuration, for the above reasons. If anyone has experience or ideas on how to further improve the above setups, it would be great if you could share. I still wonder whether the OpenCore config can be tweaked further to enable hardware acceleration in 10.15.7; the Hackintosh forums are sparsely populated with info about this specific card. It seems the Shiki mods in WhateverGreen could be of further help; however, it is odd that the Mackie OpenCore Configurator does not show them as options. It might be related to Shiki allowing several flag values to be added up, hence static menu entries would make less sense. It is definitely possible, though, to add these values manually with a plist editor like ProperTree, but so far this did not yield improved results. Generally, it definitely helps to build up the VM XML configuration slowly: first using VNC and a separate EFI and Mac OS system disk without any PCI or USB devices, then VNC + GPU, and finally the GPU on its own, keeping previous configurations as a fallback position for when things go south.

Windows 10
The card works fine in Windows 10 once the appropriate driver is installed. Regarding the reset bug: as long as a shutdown is clean, it is not an issue. When restarting, the card unfortunately also tends to hang. I therefore left the VNC in the VM XML config; in case the GPU does not reset, I can at least shut down the VM cleanly and then reset the card with the user script mentioned above (sketched below).

Next up, I guess I will dive into upgrading to 6.10-RC4 in order to be able to install @ich777's AMD reset plugin. Let me know if you have comments / thoughts on the above. Many thanks!
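For reference, the manual rescan part of that user script boils down to something like this - a minimal sketch, with the GPU's PCI address as an example:

    echo 1 > /sys/bus/pci/devices/0000:0a:00.0/remove   # detach the hung GPU from the bus
    sleep 2
    echo 1 > /sys/bus/pci/rescan                        # re-enumerate the bus and re-attach devices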
  23. Hi @ich777, all, I had been looking for the AMD reset plugin for quite a while... I was baffled by the ongoing discussion here while never being able to actually find the plugin in the Community Applications - though I admit I have not read all 47 pages of the discussion. I was even more dumbfounded when I could find the plugin via Google on Unraid's webpage. Then it finally dawned on me that Community Applications by default hides apps and plugins deemed incompatible... oh boy... So there I am, running the latest stable version 6.9.2, while the AMD reset plugin has stormed ahead to support 6.10beta17 🙂 My question, naturally: is it reasonably safe to install this latest version on 6.9.2, or is it just not compatible? If the latter, can I install a previous version from a different source, e.g. using the Unraid plugin installer page and supplying a link to the appropriate GitHub repository? I would greatly appreciate a pointer in the right direction. Many thanks in advance, Philipp
  24. Hi, I am fairly new to Unraid; so far it is overall going ok. As I am generally concerned about security, I have tried to follow the general forum guidelines as much as I could: strong passwords, using Bitwarden, and restricted access to shares, to name a few. I still have an additional question, though, regarding my setup and exposing services to the internet.

I have set up a few Dockers, e.g. Nextcloud, Airsonic etc., which I expose to the internet via subdomains. Each page/service is secured with a strong password. At least for Nextcloud, I prefer this over a VPN solution like WireGuard, as I can occasionally share links to data directly from my server with others. As my provider apparently blocks ports 443/80 for connections to my server from outside, I set up an Argo Tunnel via the cloudflared Docker to allow for connections. My real WAN IP does not get exposed; requests to my sites are handled directly by Cloudflare, which should also provide some protection from DDoS attacks. This also means I do not have to open any ports on my router, except for the WireGuard one. The requests then go from cloudflared to my reverse proxy. I decided on Nginx Proxy Manager as it works out of the box with my Cloudflare certificate (I had trouble getting the certificate that SWAG requests itself to be accepted by cloudflared, and ran out of weekly certificate requests in the process - I learned about the existence of testing certificates by making that mistake). The connection from the Cloudflare server via the Argo tunnel to the reverse proxy should be secured via the certificate / HTTPS.

My question is the following: some of the actual proxied hosts are connected via plain HTTP to the reverse proxy, e.g. Piwigo to Nginx. Does this still qualify as secure, given that this connection is already "within the server"? Or does this break the "secure HTTPS chain" and create a vulnerability? The domains all start with https://... Feedback or some good links for reading would be much appreciated; happy to provide more info if necessary.

PS: Going forward, it might be good to expose directly via the domain only those Dockers which need to be accessed by people other than me / one-off cases in which it would be a hassle to make someone download WireGuard and set it up. Many thanks.