Everything posted by Kam

  1. Hi, I've just realized I was experiencing the same issue (Unraid version: 6.11.5) when upgrading to a new cache pool (1x HDD --> mirrored 2x SSD). The affected links were the certificate files in swag. It seems the fix should be easy: stop skipping "broken" symbolic links. It should be up to the user whether or not to keep broken links. Thanks, Kam
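     For anyone hitting the same thing, something like this (a rough sketch, untested; /mnt/cache/appdata/swag is an assumption, adjust it to wherever your swag appdata actually lives) lists the symlinks whose targets no longer resolve, so you can decide yourself which ones to keep:

       # -xtype l matches symlinks whose target is missing ("broken" links)
       find /mnt/cache/appdata/swag -xtype l -ls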
  2. Hi, I'm quite a new user to UNRAID (1 year), but I think array disks can only be HDDs, as SSD sectors cannot be accessed directly. In my understanding, you should use at least 2 HDDs (1 parity and 1 data) to build your array. What I do is run the nextcloud docker on the SSD cache (2x 480GB mirrored with btrfs), with backups going to the array (12TB array + 4TB parity). I would say you should buy HDDs to build your array; that would be much cheaper than using such big SSDs, and more effective. Good luck with your setup. Kam
  3. So, as expected, it was a problem on their servers. It is now backing up. I've started with only a few shares (not the biggest ones like series and movies), and it tells me it's going to take 5 days for +- 250GB o_O and I didn't set any maximum rate... I hope restoring a backup will be faster than the backup upload. I'm wondering if I should keep going with crashplan (I'm still in the free trial month). It doesn't sound very reliable. Are you guys happy with it? Is it worth the $10 a month?
  4. Ok, probably something wrong with my account... It is still stuck at the same stage after 8 hours. I just selected 1 folder with 1 text file, which makes a total of 156B (yes, < 1KB)!! o_O So I opened a ticket with crashplan. So far, nothing bad with the container. Thanks anyway! PS: I'll post the answer here when it's resolved, in case someone has the same problem as me some day.
  5. Yes, it is the first time, so I expected it to take a long time. But then I saw the history and read that it performed the scan and backup preparation several times but never started to back up... Yesterday, I unchecked most of the files to see if it would work better with a lighter backup. So there's only 1 folder (/config) selected (only 157 files and 7MB). But it's still the same. It has been scanned twice since I did that, and no backup done. I will try to make a small backup from my computer to see if it works better or if it's a problem with my account.
  6. I've restarted the docker container after pausing the backup. It is stuck again at "Preparing to backup". Checking the logs, only service.log.0 was modified after the 60-minute pause.

       # tail /mnt/user/appdata/CrashPlanPRO/log/service.log.0
       [10.11.22 21:03:43.152 INFO 74122243-125 backup42.service.ui.UIController] UI:: UserActionResponse: GetHistoryLogResponseMessage@497560094[ session=1081740232581454819 ]1.3KB
       [10.11.22 21:06:36.334 INFO DefaultGroup .code42.messaging.peer.PeerGroup] PG::DefaultGroup DONE Managing connected remote peers. numConnected=3, numFailedConnectedCheck=0, duration(ms)=2
       [10.11.22 21:11:36.340 INFO DefaultGroup .code42.messaging.peer.PeerGroup] PG::DefaultGroup DONE Managing connected remote peers. numConnected=3, numFailedConnectedCheck=0, duration(ms)=1
       [10.11.22 21:16:36.342 INFO DefaultGroup .code42.messaging.peer.PeerGroup] PG::DefaultGroup DONE Managing connected remote peers. numConnected=3, numFailedConnectedCheck=0, duration(ms)=1
       [10.11.22 21:16:36.844 INFO -34-thread-1 systemcheck.BackupLicenseCheckV3] V3 backup isLicenseValid change detected. New value=true, old value=false
       [10.11.22 21:16:37.125 INFO ogStatsTimer fileactivity.FileActivityHandler] V3E::FileActivityHandlerStats (15min) [totalEvents=0, interestingEvents=0, scanEvents=0, pathsEnqueued=0, sessionsEnqueued=0]
       [10.11.22 21:21:36.343 INFO DefaultGroup .code42.messaging.peer.PeerGroup] PG::DefaultGroup DONE Managing connected remote peers. numConnected=3, numFailedConnectedCheck=0, duration(ms)=1
       [10.11.22 21:26:36.344 INFO DefaultGroup .code42.messaging.peer.PeerGroup] PG::DefaultGroup DONE Managing connected remote peers. numConnected=3, numFailedConnectedCheck=0, duration(ms)=0
       [10.11.22 21:31:36.345 INFO DefaultGroup .code42.messaging.peer.PeerGroup] PG::DefaultGroup DONE Managing connected remote peers. numConnected=3, numFailedConnectedCheck=0, duration(ms)=0
       [10.11.22 21:31:37.118 INFO ogStatsTimer fileactivity.FileActivityHandler] V3E::FileActivityHandlerStats (15min) [totalEvents=0, interestingEvents=0, scanEvents=0, pathsEnqueued=0, sessionsEnqueued=0]

     Nothing looks bad to me. Well, except I'm not sure the logs have anything related to backup preparation, nor to it failing to start. Am I missing something? I'll check tomorrow morning if there's something new.
  7. So now, I really have a problem... The backup is still stuck at the "preparing to backup" stage, while the history tells me scanning completed twice, yesterday and this morning:

       I 10/10/22 10:21PM [PRO Online Backup Set] Scanning for files completed in 5.8 hours: 1,246,605 files (2.60TB) found
       I 10/11/22 01:04AM Code42 started, version 10.2.1, GUID XXXXXXXXXXXXXXX
       I 10/11/22 03:00AM [PRO Online Backup Set] Scanning for files to back up
       I 10/11/22 09:15AM [PRO Online Backup Set] Scanning for files completed in 6.2 hours: 1,246,623 files (2.60TB) found

     I've browsed the logs and it seems there are errors in engine_error.log, service.log.0 and ui_error.log (see below). Looks like it has something to do with some Java classes not being implemented o_O I doubt it is related to the UI error, but just in case I added that log to the post. I don't know if it could have something to do with it, but I'm still using Unraid 6.9, for which I got a notification that it won't be supported anymore. I wanted to back up before upgrading... But I don't think that should be the issue, as it doesn't impact the container version, does it? Any thoughts on this? Thanks, Kam

     /mnt/user/appdata/CrashPlanPRO/log/engine_error.log
       WARNING: An illegal reflective access operation has occurred
       WARNING: Illegal reflective access by com.code42.crypto.jce.ec.EcCurveLookup (file:/usr/local/crashplan/lib/c42-crypto-impl-15.2.7.jar) to method sun.security.util.CurveDB.lookup(java.security.spec.ECParameterSpec)
       WARNING: Please consider reporting this to the maintainers of com.code42.crypto.jce.ec.EcCurveLookup
       WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
       WARNING: All illegal access operations will be denied in a future release

     /mnt/user/appdata/CrashPlanPRO/log/service.log.0
       [10.11.22 15:10:29.932 INFO 83686783-129 .storage.UniversalStorageService] SM:: Saved ServiceModel of 48602 bytes to UDB in 2.749 ms.
       [10.11.22 15:10:29.938 INFO 83686783-128 e.ui.ws.WebSocketEventController] WS:: Handling request=LoginTokenRequestMessage{token=****}'
       [10.11.22 15:10:29.945 INFO 83686783-130 backup42.service.ui.UIController] UI:: UserActionRequest: StatusQueryMessage[1081704676279664543] null
       [10.11.22 15:10:29.946 WARN 83686783-130 rvice.ui.message.AppStateMessage] Problem getting app state, 'void org.json.JSONWriter.<init>(java.io.Writer)', java.lang.NoSuchMethodError: 'void org.json.JSONWriter.<init>(java.io.Writer)'
       STACKTRACE:: java.lang.NoSuchMethodError: 'void org.json.JSONWriter.<init>(java.io.Writer)'
         at com.backup42.common.Computer.toJSONString(Computer.java:1130)
         at org.json.JSONWriter.valueToString(JSONWriter.java:331)
         at org.json.JSONWriter.value(JSONWriter.java:412)
         at com.backup42.service.ui.message.AppStateMessage.toJSONString(AppStateMessage.java:110)
         at com.backup42.service.ui.message.AppStateMessage.buildMessage(AppStateMessage.java:46)
         at com.backup42.service.ui.UIController.sendAppStateToTray(UIController.java:2483)
         at com.backup42.service.ui.UIController.modelChanged(UIController.java:1327)
         at com.backup42.service.model.Model.notifyModelChanged(Model.java:65)
         at java.base/java.lang.Iterable.forEach(Unknown Source)
         at com.backup42.service.model.Model.notifyObservers(Model.java:61)
         at com.backup42.service.model.ServiceModel.save(ServiceModel.java:279)
         at com.backup42.service.ui.UIController.receiveMessage(UIController.java:869)
         at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
         at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
         at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
         at java.base/java.lang.reflect.Method.invoke(Unknown Source)
         at com.backup42.service.ui.http.LegacyHttpHandler.invokeMessageReceiver(LegacyHttpHandler.java:211)
         at com.backup42.service.ui.http.LegacyHttpHandler.getResponse(LegacyHttpHandler.java:84)
         at com.backup42.service.ui.http.AHttpHandler.service(AHttpHandler.java:65)
         at javax.servlet.http.HttpServlet.service(HttpServlet.java:750)
         at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:799)
         at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:550)
         at org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:233)
         at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1434)
         at org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:188)
         at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:501)
         at org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:186)
         at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1349)
         at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
         at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:127)
         at org.eclipse.jetty.server.Server.handle(Server.java:516)
         at org.eclipse.jetty.server.HttpChannel.lambda$handle$1(HttpChannel.java:400)
         at org.eclipse.jetty.server.HttpChannel.dispatch(HttpChannel.java:645)
         at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:392)
         at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:277)
         at org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:311)
         at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:105)
         at org.eclipse.jetty.io.ssl.SslConnection$DecryptedEndPoint.onFillable(SslConnection.java:555)
         at org.eclipse.jetty.io.ssl.SslConnection.onFillable(SslConnection.java:410)
         at org.eclipse.jetty.io.ssl.SslConnection$2.succeeded(SslConnection.java:164)
         at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:105)
         at org.eclipse.jetty.io.ChannelEndPoint$1.run(ChannelEndPoint.java:104)
         at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.runTask(EatWhatYouKill.java:338)
         at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:315)
         at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173)
         at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.run(EatWhatYouKill.java:131)
         at org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:409)
         at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883)
         at org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034)
         at java.base/java.lang.Thread.run(Unknown Source)
       [10.11.22 15:10:29.948 INFO 83686783-130 .storage.UniversalStorageService] SM:: Saved ServiceModel of 48602 bytes to UDB in 2.014 ms.
       [10.11.22 15:10:29.953 INFO 83686783-130 backup42.service.ui.UIController] UI:: UserActionResponse: StatusResponseMessage@595426261[ session=1081704676279664543 ][ User@-493561161[ userId=28247113, userUid=1080552143039004986, name=Camille, [email protected] ], AuthorityLocation@1205044493[ location=central.crashplanpro.com:4287, addressHidden=true, addressLocked=true ], orgType=BUSINESS, upgrading=false, UpdateLicenseMessage[] [ authenticated=true, updateConfigPassword=false, [email protected], securityKeyType=AccountPassword, dataKeyExists=true, secureDataKeyExists=true, secureDataKeyUpdateRequired=false, secureDataKeyQA=null, authorizeRules=AuthorizeRules [minPasswordLength=5, usernameIsAnEmail=true, deferredAllowed=true], blocked=false, licensedFeatures=[] ], locale=AUTOMATIC_LOCALE, websiteHost=https://console.us2.crashplanpro.com, defaultRestoreFolder=/config, os=Linux, dtrServiceType=LAN_WAN ]
       [10.11.22 15:10:29.973 INFO 83686783-628 zation.CustomizationApiClientCmd] ClientCustomization:: Getting postAuthenticated customizations
       [10.11.22 15:10:29.973 INFO 83686783-628 CustomizationAuthorizedClientCmd] ClientCustomization:: Getting post-authentication customizations
       [10.11.22 15:10:30.026 INFO 1WeDftWkr465 com.code42.messaging.SessionImpl] **** No receiver assigned for message for type com.code42.protos.shared.authority.PlanAdministrationMessages$DestinationsGetResponse, remoteGuid=4200, RemotePeer-[guid=4200, state=CONNECTED]; Session-[localID=1081697776767025055, remoteID=1081697776684262330, layer=Peer::Sabre, closed=false, expiration=null, remoteIdentity=AUTHORITY, local=172.17.0.2:46348, remote=64.207.222.171:443]
       [10.11.22 15:10:59.143 INFO Thread-50 om.backup42.service.AppLogWriter] WRITE app.log in 84ms
       [10.11.22 15:14:46.469 INFO DefaultGroup .code42.messaging.peer.PeerGroup] PG::DefaultGroup DONE Managing connected remote peers. numConnected=3, numFailedConnectedCheck=0, duration(ms)=1
       [10.11.22 15:15:55.266 INFO 83686783-128 backup42.service.ui.UIController] UI:: UserActionRequest: GetHistoryLogMessage[1081705222092832671]
       [10.11.22 15:15:55.267 INFO 83686783-128 backup42.service.ui.UIController] UI:: Retrieve History; file=history.log.0, exists=true, length=1180, path=/usr/local/crashplan/log/history.log.0
       [10.11.22 15:15:55.268 INFO 83686783-128 backup42.service.ui.UIController] UI:: UserActionResponse: GetHistoryLogResponseMessage@346261857[ session=1081705222092832671 ]1.2KB
       [10.11.22 15:18:27.882 INFO 83686783-628 backup42.service.ui.UIController] UI:: UserActionRequest: GetHistoryLogMessage[1081705478146703263]
       [10.11.22 15:18:27.883 INFO 83686783-628 backup42.service.ui.UIController] UI:: Retrieve History; file=history.log.0, exists=true, length=1180, path=/usr/local/crashplan/log/history.log.0
       [10.11.22 15:18:27.883 INFO 83686783-628 backup42.service.ui.UIController] UI:: UserActionResponse: GetHistoryLogResponseMessage@812038901[ session=1081705478146703263 ]1.2KB
       [10.11.22 15:19:46.470 INFO DefaultGroup .code42.messaging.peer.PeerGroup] PG::DefaultGroup DONE Managing connected remote peers. numConnected=3, numFailedConnectedCheck=0, duration(ms)=0
       [10.11.22 15:19:46.688 INFO ogStatsTimer fileactivity.FileActivityHandler] V3E::FileActivityHandlerStats (15min) [totalEvents=0, interestingEvents=0, scanEvents=0, pathsEnqueued=0, sessionsEnqueued=0]
       [10.11.22 15:24:46.471 INFO DefaultGroup .code42.messaging.peer.PeerGroup] PG::DefaultGroup DONE Managing connected remote peers. numConnected=3, numFailedConnectedCheck=0, duration(ms)=0

     /mnt/user/appdata/CrashPlanPRO/log/ui_error.log
       (node:7582) Warning: Accessing non-existent property 'padLevels' of module exports inside circular dependency (Use `code42 --trace-warnings ...` to show where the warning was created)
       [7582:1011/010450.097350:ERROR:bus.cc(392)] Failed to connect to the bus: Failed to connect to socket /var/run/dbus/system_bus_socket: No such file or directory
       [7582:1011/010450.098961:ERROR:bus.cc(392)] Failed to connect to the bus: Could not parse server address: Unknown address type (examples of valid types are "tcp" and on UNIX "unix")
       [7582:1011/010450.099056:ERROR:bus.cc(392)] Failed to connect to the bus: Could not parse server address: Unknown address type (examples of valid types are "tcp" and on UNIX "unix")
       [9603:1011/010450.115593:ERROR:gl_implementation.cc(441)] Failed to load /usr/local/crashplan/electron/swiftshader/libGLESv2.so: /usr/local/crashplan/electron/swiftshader/libGLESv2.so: cannot open shared object file: No such file or directory
       [9603:1011/010450.117000:ERROR:viz_main_impl.cc(161)] Exiting GPU process due to errors during initialization
       [7582:1011/010450.120398:ERROR:bus.cc(392)] Failed to connect to the bus: Failed to connect to socket /var/run/dbus/system_bus_socket: No such file or directory
       [7582:1011/010450.120490:ERROR:bus.cc(392)] Failed to connect to the bus: Failed to connect to socket /var/run/dbus/system_bus_socket: No such file or directory
       [7582:1011/010450.120609:ERROR:bus.cc(392)] Failed to connect to the bus: Failed to connect to socket /var/run/dbus/system_bus_socket: No such file or directory
       [7582:1011/010450.120693:ERROR:bus.cc(392)] Failed to connect to the bus: Failed to connect to socket /var/run/dbus/system_bus_socket: No such file or directory
       [9651:1011/010450.121991:ERROR:gpu_init.cc(453)] Passthrough is not supported, GL is disabled, ANGLE is
       [7582:1011/010450.134937:ERROR:bus.cc(392)] Failed to connect to the bus: Could not parse server address: Unknown address type (examples of valid types are "tcp" and on UNIX "unix")
       [7582:1011/010450.135762:ERROR:bus.cc(392)] Failed to connect to the bus: Could not parse server address: Unknown address type (examples of valid types are "tcp" and on UNIX "unix")
       [7582:1011/010450.157532:ERROR:object_proxy.cc(642)] Failed to call method: org.freedesktop.login1.Manager.Inhibit: object_path= /org/freedesktop/login1: unknown error type:
       [7582:1011/010450.248944:ERROR:bus.cc(392)] Failed to connect to the bus: Could not parse server address: Unknown address type (examples of valid types are "tcp" and on UNIX "unix")
       [7582:1011/010450.257949:ERROR:bus.cc(392)] Failed to connect to the bus: Could not parse server address: Unknown address type (examples of valid types are "tcp" and on UNIX "unix")
       [7582:1011/010450.331674:ERROR:nss_util.cc(286)] After loading Root Certs, loaded==false: libnssckbi.so: cannot open shared object file: No such file or directory
       [7582:1011/010450.375934:ERROR:cert_issuer_source_aia.cc(104)] AiaRequest::OnFetchCompleted got error -379
       [7582:1011/010450.400278:ERROR:cert_issuer_source_aia.cc(104)] AiaRequest::OnFetchCompleted got error -379
       [7582:1011/010450.423969:ERROR:bus.cc(392)] Failed to connect to the bus: Could not parse server address: Unknown address type (examples of valid types are "tcp" and on UNIX "unix")
       [7582:1011/010450.424957:ERROR:cert_issuer_source_aia.cc(104)] AiaRequest::OnFetchCompleted got error -379
       [9718:1011/010450.426044:ERROR:ssl_client_socket_impl.cc(981)] handshake failed; returned -1, SSL error code 1, net_error -202
       (node:9746) Warning: Accessing non-existent property 'padLevels' of module exports inside circular dependency (Use `exe --trace-warnings ...` to show where the warning was created)
       (node:10451) Warning: Accessing non-existent property 'padLevels' of module exports inside circular dependency (Use `exe --trace-warnings ...` to show where the warning was created)
       [7582:1011/151029.940311:ERROR:bus.cc(392)] Failed to connect to the bus: Could not parse server address: Unknown address type (examples of valid types are "tcp" and on UNIX "unix")
       [7582:1011/151029.985027:ERROR:bus.cc(392)] Failed to connect to the bus: Could not parse server address: Unknown address type (examples of valid types are "tcp" and on UNIX "unix")
       [7582:1011/151029.991086:ERROR:bus.cc(392)] Failed to connect to the bus: Could not parse server address: Unknown address type (examples of valid types are "tcp" and on UNIX "unix")
       [7582:1011/151030.061679:ERROR:bus.cc(392)] Failed to connect to the bus: Could not parse server address: Unknown address type (examples of valid types are "tcp" and on UNIX "unix")
       [7582:1011/151030.076541:ERROR:bus.cc(392)] Failed to connect to the bus: Could not parse server address: Unknown address type (examples of valid types are "tcp" and on UNIX "unix")
       [7582:1011/151030.100084:ERROR:bus.cc(392)] Failed to connect to the bus: Could not parse server address: Unknown address type (examples of valid types are "tcp" and on UNIX "unix")
       [7582:1011/151555.004942:ERROR:bus.cc(392)] Failed to connect to the bus: Could not parse server address: Unknown address type (examples of valid types are "tcp" and on UNIX "unix")
       [7582:1011/151555.006902:ERROR:bus.cc(392)] Failed to connect to the bus: Could not parse server address: Unknown address type (examples of valid types are "tcp" and on UNIX "unix")
       (node:11840) Warning: Accessing non-existent property 'padLevels' of module exports inside circular dependency (Use `exe --trace-warnings ...` to show where the warning was created)
       [7582:1011/151827.625193:ERROR:bus.cc(392)] Failed to connect to the bus: Could not parse server address: Unknown address type (examples of valid types are "tcp" and on UNIX "unix")
       [7582:1011/151827.627063:ERROR:bus.cc(392)] Failed to connect to the bus: Could not parse server address: Unknown address type (examples of valid types are "tcp" and on UNIX "unix")
       (node:12342) Warning: Accessing non-existent property 'padLevels' of module exports inside circular dependency (Use `exe --trace-warnings ...` to show where the warning was created)
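     For anyone checking their own logs, a quick way to skim the whole log folder for anything suspicious (just a rough grep, it will also match plenty of harmless lines):

       # case-insensitive search for error-ish lines across all CrashPlan logs
       grep -riE "error|exception|fail" /mnt/user/appdata/CrashPlanPRO/log/ | less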
  8. Hi there, There is no issue actually. I'm just too dumb to scroll down to see all the files 😅 Thank you very much for the container.
  9. Sorry for not answering earlier. I didn't change anything. Looks like it was just the corrupted SSD causing it. Thanks
  10. So that was it! Once I removed the bad SSD the problem was solved, and it looks like it fixed other issues too. It's a bit confusing that a failing unassigned disk causes a network error, though... Anyway, thanks everybody for the help!
  11. I suspected it could be this one, but I thought maybe not as it is unassigned. I didn't have the time to check further. So thank you very much ! I'll remove it tomorrow and let you know if it works better then 🤞 Thanks again
  12. Hi everybody, I received a message from @ljm42 asking me to reinstall the MyServer plugin because it was not responding. I deleted it, but then I realized the CA plugin cannot access the internet. I get the following message:
     At first I thought it would come from the DNS, but I've changed it to Google, Cloudflare and OpenDNS, both for Unraid and my router, and it still doesn't work. I'm able to ping github and various other servers though, so it is not a DNS issue. I've whitelisted my server on my router, and no parental control is active.
     I tried to download the diagnostics file but the web gui freezes when I do. I managed to do it via ssh, but I don't know the option to anonymize it.
     The thing is, I don't know where to look. I've tried searching through this forum, but the few things I saw and/or tried either don't match my issue or don't work. Does anyone have any thoughts on what I should be looking for? Thanks for reading
     PS: I'm running Unraid Version 6.9.2 2021-04-07
     Edit: So here is the diagnostics file, Thanks kus-diagnostics-20220929-1450.zip
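     For reference, these are the kinds of checks I mean (a rough sketch; the GitHub hosts are just examples of what the plugin system needs to reach, and if nslookup isn't available, host or dig should do):

       # name resolution and basic reachability from the Unraid console
       nslookup github.com
       ping -c 3 raw.githubusercontent.com
       # outbound HTTPS is the part that actually matters, not just ICMP
       curl -sSI https://raw.githubusercontent.com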
  13. Thanks @trurl, actually it took me a few hours to find the encryption file, so I couldn't start the array. But then no shares appeared... until I realized I had checked "maintenance mode" while struggling to start the array 😅 Everything's fine now! Thanks for the help and sorry for this not-a-problem post. Can anyone close this topic please? Kam
  14. Hi everyone, I moved my server to another apartment and (of course) I didn't have time for a backup. I've just switched my Unraid server back on and had a really bad surprise... All the disks (array, pools and unassigned) are correctly identified, but there are no shares at all anymore. I believe the data is still there and I have just started a parity check; I will see in 14 hours if it's OK. Do you have any idea how to recover the shares I set up? Or should I recreate them manually? It seems there is still a definition of the shares, as I can see in the diagnostics report, but I don't know how I could use it. I haven't tried to reboot, as I only thought about it after launching the parity check, but I'll do it afterwards. Any help much appreciated! kus-diagnostics-20220622-2117.zip
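     For reference, the share definitions I mention seem to live as one .cfg file per share on the flash drive, so something like this (read-only, assuming the usual Unraid layout) shows whether they survived the move:

       # share definitions are kept on the flash drive
       ls -l /boot/config/shares/
       cat /boot/config/shares/*.cfg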
  15. @dsmith44 Thank you for sharing this template and good luck maintaining it! @ldog88 You can edit the container's URL so that it points to the tailscale IP (100.x.x.x) instead of your local IP (e.g. 192.168.x.x) by modifying the URL in advanced view.
     My question is related to this. Is there a way for the link to be the tailscale one when I'm accessing through tailscale and the local one when I'm on the LAN? My containers' URLs are configured as follows: https://[IP]:[PORT]
     I've thought of two not-quite-satisfying solutions, though they should work well enough:
     - always access unraid through tailscale
     - configure my router so that https://MYSERVERNAME (same name as in tailscale) routes to the unraid server, and configure the containers accordingly.
     Does anyone know a smarter way of doing it? Many thanks
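     To illustrate the second idea (purely a sketch; MYSERVERNAME and the addresses are placeholders): the WebUI URLs would become https://MYSERVERNAME:[PORT], and the only thing to verify is that the name resolves differently depending on where I am:

       # at home, the router's DNS should answer with the LAN address (192.168.x.x)
       nslookup MYSERVERNAME
       # away from home with tailscale up, MagicDNS should answer with the 100.x.x.x address
       nslookup MYSERVERNAME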
  16. Thanks for the answer. I'm not at home right now, so I'll try later and let you know how it went. Just to be sure, are you suggesting that I run an ipa-server-update command before I run the ipa-server-install one?
  17. I haven't done it yet so I'm not sure of what I'm saying, but shouldn't it be https? The doc says (https://ibracorp.io/lets-install-authelia-in-depth-authorization-and-authentication-server/#nginxproxymanagernpm): "NB: For some reason in the current version of NPM as of writing this (v2.2.4) the SSL settings turn off after initial creation. Go back into the SSL settings of 'auth.example.com' and turn them back on then save again." Have you tried this?
  18. Hi there! First I want to thank @Sycotix for your great advanced-yet-simple tutorials! I've been learning so much! I am currently having trouble with the FreeIPA VM tutorial. Everything goes well until the IPA server configuration; I can't get my server to set up the certificate server. Here's what I get:

       [root@ipa ~]# ipa-server-install --mkhomedir
       [...]
       Configuring certificate server (pki-tomcatd). Estimated time: 3 minutes
         [1/29]: configuring certificate server instance
       Failed to configure CA instance
       See the installation logs and the following files/directories for more information:
         /var/log/pki/pki-tomcat
         [error] RuntimeError: CA configuration failed.
       CA configuration failed.
       The ipa-server-install command failed. See /var/log/ipaserver-install.log for more information

     Attached is an extract of /var/log/ipaserver-install.log, where I can read the following error:

       Exception: PKI subsystem 'CA' for instance 'pki-tomcat' already exists!

     It might come from previous attempts to install the IPA server, but maybe that was the cause of the first failure too? I've tried to run "ipa-server-install --uninstall" several times, and "pkidestroy -s CA -i pki-tomcat", but I still get the same error... I thought it could be due to the cloudflare argo tunnel... but even if I switch to an A-type DNS record with my IP and a DMZ to the FreeIPA server on my router, it's still the same O_O
     Btw, Fedora Cockpit works well through a subdomain and argo tunnel + swag. But it shows me 1 service that cannot start: Machine Check Exception Logging Daemon. I doubt it is related, but just in case, these are the error logs:

       mcelog.service: Failed with result 'exit-code' [systemd]
       mcelog.service: Main process exited, code=exited, status=1/FAILURE [systemd]
       CPU is unsupported [mcelog]
       mcelog: ERROR: AMD Processor family 23: mcelog does not support this processor. Please use the edac_mce_amd module instead. [mcelog]
       Started Machine Check Exception Logging Daemon [systemd]

     Does anyone have an idea where the issue comes from / where I should dig? Or what should I do? Thanks a lot for your advice/thoughts! ipa-install.log
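     For the record, the cleanup sequence between retries would look roughly like this (a sketch only; the three rm paths are my assumption of where Dogtag leaves the old 'pki-tomcat' instance behind, so double-check them before deleting anything):

       # tear down any half-installed IPA server and CA instance
       ipa-server-install --uninstall -U
       pkidestroy -s CA -i pki-tomcat
       # remove leftover instance directories that make the installer think 'pki-tomcat' already exists
       rm -rf /var/lib/pki/pki-tomcat /etc/pki/pki-tomcat /var/log/pki/pki-tomcat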
  19. I got the same issue. It comes from a compatibility issue between the NC oO app and the OnlyOffice server. You have to manually downgrade the NC app. I don't remember the details, but you can find it easily if you google it. I remember you need to delete the oO folder in nextcloud's apps folder, download the right version of the OnlyOffice plugin and untar it there. You might have to reactivate it from the nextcloud admin page. Then it works! You just have to not upgrade it. Hope it helps, Kam
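     From memory it was something along these lines (a sketch only, untested; the appdata path and container name are placeholders for whatever matches your setup, and the tarball is the matching release of the ONLYOFFICE/onlyoffice-nextcloud connector downloaded from its GitHub releases page):

       # replace the connector app with the older release
       cd /mnt/user/appdata/nextcloud/www/nextcloud/apps
       rm -rf onlyoffice
       tar -xzf /path/to/onlyoffice.tar.gz
       # re-enable it if it doesn't come back on its own (or do it from the admin page)
       docker exec -it nextcloud occ app:enable onlyoffice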
  20. So finally, I found a solution! 🤩 I checked my nextcloud.subdomain.conf and added/removed things until both files were similar. In case someone has the same issue, here's my onlyoffice.subdomain.conf file, with "## Added" / "## Removed" comments:

       # only office doc server
       server {
           listen 443 ssl;
           ## Added
           listen [::]:443 ssl;

           server_name office.*;

           include /config/nginx/ssl.conf;

           ## Added
           add_header Strict-Transport-Security "max-age=15768000; includeSubDomains; preload;";

           client_max_body_size 0;

           location / {
               include /config/nginx/proxy.conf;
               ## Added
               include /config/nginx/resolver.conf;

               set $upstream_app OnlyOfficeDocumentServer;
               set $upstream_proto https;
               set $upstream_port 443;
               proxy_pass $upstream_proto://$upstream_app:$upstream_port;

               ## Added
               proxy_max_temp_file_size 2048m;

               ## Removed
               # proxy_set_header Host $host;
               # proxy_set_header X-Real-IP $remote_addr;
               # proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
               # proxy_set_header X-Forwarded-Host $server_name;
               # proxy_set_header X-Forwarded-Proto $scheme;
           }
       }

     Kam
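     If you edit the file on a running setup, something like this (assuming the swag container is literally named "swag") picks up the change without a full restart:

       # validate the config inside the swag container, then reload nginx
       docker exec swag nginx -t
       docker exec swag nginx -s reload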
  21. So I just went through every log file I could think of as relevant, but I couldn't find anything... Here's what I get when I go to oods.mydomain.me:

       /mnt/user/appdata/swag/log/nginx/access.log
         192.168.1.254 - - [19/Nov/2021:16:05:09 +0100] "GET /ocs/v2.php/apps/notifications/api/v2/notifications HTTP/2.0" 304 0 "-" "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:94.0) Gecko/20100101 Firefox/94.0"
       /mnt/user/appdata/swag/log/nginx/error.log
         NOTHING
       /mnt/user/appdata/swag/log/fail2ban/fail2ban.log
         NOTHING
       /mnt/user/appdata/swag/log/letsencrypt/letsencrypt.log
         NOTHING
       /mnt/user/appdata/swag/log/php/error.log
         NOTHING
       /mnt/user/appdata/swag/log/logrotate.status
         NOTHING
       /mnt/user/appdata/onlyofficeds/logs/documentserver/nginx.error.log
         NOTHING

     Any idea where I should look? Thx!
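     For reference, a way to watch these live while reproducing the error (same paths as above), to see how far the request even gets:

       tail -f /mnt/user/appdata/swag/log/nginx/access.log \
               /mnt/user/appdata/swag/log/nginx/error.log \
               /mnt/user/appdata/onlyofficeds/logs/documentserver/nginx.error.log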
  22. Hi everybody, First, thanks a lot for porting the containers to unraid and supporting users here. It makes life much easier.
     I'm having trouble setting up OnlyOfficeDocumentServer. I started by following spaceinvader's tutorial, but I also tried a few things afterwards. Here's what I've done so far: I created a CNAME for my domain name on cloudflare (and deactivated the proxy*) and added it in swag's docker configuration. I skipped the duckdns step as I have a static IP address. I copied (hard link) the certificate files to onlyofficeds/Data/certs/onlyoffice.{crt,key}. I had to remove the following line from his file: proxy_redirect off; as it would prevent swag from starting nginx: [emerg] "proxy_redirect" directive is duplicate in /config/nginx/proxy-confs/onlyoffice.subdomain.conf:19
     I manage to access the WebUI through the local IP and https port, but not the http one. And clicking on WebUI from the unraid docker page redirects to unraid's login page. Is that normal? I could even test OODS with the provided example. But when I try to access it from my subdomain, I get an "HTTP 400 Bad Request" error. I also tried with the documentserver.subdomain.conf that I found in the nginx/proxy-confs folder, but then I got a "bad redirection" error.
     I attached my onlyoffice.subdomain.conf file in case it helps. I probably missed something, but I'm really clueless right now. Does anybody have an idea? Thanks a lot, Kam
     * I also spent some hours figuring out that when cloudflare's proxy is activated, it leads to a "too many redirects" error. Not sure if that's the actual English error message, I just translated it from French. onlyoffice.subdomain.conf
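     For what it's worth, a way to test both paths from the command line (oods.mydomain.me stands in for the real subdomain, and <local-ip>:<https-port> for the container's address; /healthcheck is the document server's own status page, which should just return true):

       # through swag / the subdomain
       curl -vk https://oods.mydomain.me/healthcheck
       # directly against the container's https port on the LAN
       curl -vk https://<local-ip>:<https-port>/healthcheck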
  23. I felt that was not a good idea, but I was not sure. Thank you for the explanation.
  24. Hi there, I'm very glad to finally start this post. I'm about to become a fresh new unRaid user. I've been looking around for weeks now to build myself a home server, and now I think I'm ready to start. So I wanted to share my hardware config with you guys and maybe get some advice before doing/buying stupid things... Actually, I've always been a software guy (and a bit of a system one) and paid little attention to the hardware part until recently, when I started to read more about building a DIY NAS/server. As a matter of fact, the last computer I built entirely was in 2003, when I was 18...
     My goal is to build a server used mainly as a NAS + nextcloud for myself, my wife and some friends and family. But I also want to use it as a VM station: 1 Windows (not gaming, at least at first), a couple of linux VMs to play with, and 1 MacOS for my wife so that she doesn't need to bring hers (macbook pro) when she comes to my place or goes to unsafe places (I recently found my old eeePC 901, so she can connect to the VM).
     At first I was going to go for TrueNAS Core, but a few things discouraged me:
     - the resource-hungry ZFS file system, which looks amazing but very expensive imho...
     - and mainly, it seems to me that it doesn't meet my requirements for VMs. I think I might buy a TrueNAS mini, or build a cheap one with DDR3, in the future if my storage needs grow so much that they would require a second server...
     - plus, I hadn't paid much attention to unRaid as it looked like a closed commercial solution. But then I realized it was based on slackware, and I am much more at ease with linux than BSD! (well, even than windows)
     But let's focus on the topic! I started to make a config with an Intel-based CPU, as AMD had a bad reputation back in Athlon/Pentium times. Then I realized that Ryzen CPUs seem pretty fine and that I could even get affordable parts using DDR4 ECC memory! So here's the last config I've come to:
     - Fractal Design Node 804, as it can host 8x 3.5" HDD + 2.5" SSD
     - ASRock B550M Pro4 motherboard
     - AMD Ryzen 5 3600
     - be quiet! Pure Loop 240mm CPU watercooling
     - be quiet! Straight Power 11 550W Platinum PSU
     - 2x Kingston KSM32ED8/32ME (pdf) 32GB DDR4 3200MHz ECC
     - Kingston A400 240GB for the read/write cache
     - Kingston NV1 NVMe PCIe 1TB for VMs
     - MZHOU 8-port PCIe SATA card (Amazon.fr), as the M.2 slots disable 2 SATA ports
     - 2x Toshiba N300 4TB HDD
     - 2x WD Red Plus 4TB HDD
     - 2x Seagate IronWolf 4TB HDD
     + an old MSI GeForce 1030 AERO ITX 2G OC I removed from a PC on which I installed QubesOS
     I set myself a 2000€ budget (storage included), and it comes to around 1900€ now, so I'm good. The HDDs and the Node 804 case are already purchased, as they're the only parts I'm sure of... well, that means I'm not sure of much.
     Here's what I wonder:
     - Is watercooling worth it? I'd like the server to be as quiet as possible but also as cool as possible. I live in Grenoble (France) and summers can be damn hot, even though my flat's office is the coolest room.
     - The Node 804 comes with 3 fans integrated; would it be wise to add a few more?
     - I really got a headache sizing the PSU! And I still wonder if 550W is enough. I still have room for 2x 3.5" + 2x 2.5" drives and might upgrade the GPU, so maybe I should go for a 600W or even 650W PSU?
     - Maybe 64GB of RAM is overkill, but I'm thinking that if I have to choose between 4x16GB max and 4x32GB max, the latter will allow me to run more apps on the server. The MacOS VM would probably run 24/7 while the others would run on demand (maybe 1 linux VM would end up running 24/7).
     - The two M.2 disks I wanted to buy are not available anymore... but I understood quality is not a big issue. Yet I picked Kingston ones instead of unknown brands... Did I understand that correctly, or should I pay a little more attention to it?
     - And above all, is the sizing correct (240GB for cache, 1TB for VMs)?
     - Would my old GPU do the job? Or should I look into a better one?
     The last questions are less urgent as I will decide when I set up the server... which won't happen before buying the parts:
     - I plan to keep 1 HDD unassigned as a hot spare disk. Should I use 2 parity disks, which would leave me 12TB of data disks (enough for now)? Or use 16TB of data + 1 parity disk? I haven't read enough to form an opinion, so any advice?
     - The Toshiba N300 HDDs run at 7200rpm; do you know if it's possible to configure them to run at only 5400rpm? That would reduce the heat and noise, I presume...
     - Is there a smarter way to distribute the HDDs between parity/data/spare disks when using different brands?
     - Is there a smarter way to split the HDDs between the motherboard and the PCIe card? I think I would plug 3 into each, but I really don't know...
     Well, I think that post is long enough! Sorry if I'm too chatty (I hope you're not reading this on a mobile, sorry -_-"). I'm really excited and can't wait to start playing/testing (messing?) with unRaid! I hope you guys can help me confirm my hardware choices so that I can annoy you with software/config issues. Thanks in advance, Kam