Jules

Everything posted by Jules

  1. Hi everyone, I love my Unraid box; it does everything flawlessly, except that it crashes from time to time, randomly. Sometimes after 1 hour, sometimes after 1 week. The longest it has run without crashing is 6 months. What does "crashing" mean? I'm not exactly sure:
     * the dockers seem to work, network mounts are working, the VM is up and running; it seems to be just the interface
     * the plugins are not recognised by the system
     * sometimes it says "Blacklisted USB Flash GUID" and the key is missing: Flash Product: n/a, Flash GUID: 0000-0000-0000-000000000000
     * sometimes it has nothing to do with the USB and the registration key is there
     ** the USB stick sits inside the case and was brand new when I built my Unraid; it is less than 1 year old now
     ** if I stop the array, whatever the case may be, it says the registration key is missing
     If I reboot the machine, everything starts to work as expected. I've attached the diagnostics if anyone is willing to have a look. micky-diagnostics-20190428-1011.zip
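     When it happens again, I plan to check from an SSH session whether the flash drive has actually dropped off the USB bus while the web UI is in that broken state; roughly something like this (only a sketch, assuming stock Unraid paths):
       # if the flash has dropped off, /boot will look empty or throw I/O errors
       ls /boot/config/plugins
       # look for recent USB disconnect/reset messages
       grep -i -E 'usb|flash' /var/log/syslog | tail -n 50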
  2. I can't seem to find the username/password for ZNC's admin interface. Can anyone help me with this, please? EDIT: found it: admin / admin
  3. Hi everyone, I gave up on Nextcloud. Too many errors and random failures. I've settled on Pydio and so far it works great. However, I need some help with setting up SMB shares, please. I've enabled debug and this is what I get in the logs:
     11-09-18 23:32:48 192.168.1.102 INFO iulian conf.sql Switch Repository rep. id=9110a23f40a788f1577bad89fcf43ac5
     11-09-18 23:32:49 192.168.1.102 DEBUG iulian Pydio\Access\Driver\StreamProvider\SMB\smb debug Testing dir cache url=smbclient://***:***@mbclient://192.168.1.100/
     11-09-18 23:32:49 192.168.1.102 DEBUG iulian Pydio\Access\Driver\StreamProvider\SMB\smb debug SMBCLIENT -N -O 'TCP_NODELAY IPTOS_LOWDELAY SO_KEEPALIVE SO_RCVBUF=8192 SO_SNDBUF=8192' -O 'TCP_NODELAY IPTOS_LOWDELAY SO_KEEPALIVE SO_RCVBUF=8192 SO_SNDBUF=8192' -d 0 '//2.168.1.100/' -c 'dir "/\*"' 2>/dev/null [auth data]
     11-09-18 23:32:49 192.168.1.102 DEBUG iulian Pydio\Access\Driver\StreamProvider\SMB\smb debug Adding to dir cache url=smbclient://***:***@mbclient://192.168.1.100/
     If you look at line 3, you can see "8192' -d 0 '//2.168.1.100/": the first two characters of the IP address have been cut off, so smbclient is being called against '//2.168.1.100/' instead of '//192.168.1.100/'. I'm not sure how to fix it. I tried adding 2 more characters before that, but it doesn't seem to work; it throws the rest out. Any suggestions welcomed.
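     In the meantime, a quick sanity check from the Pydio container (or any box with smbclient) to confirm the share itself is reachable and the problem is only the truncated host in Pydio's generated command (a sketch; the user and IP are the ones from my log, and 'share' is a placeholder for the actual share name):
       # list the shares the server exposes
       smbclient -L //192.168.1.100 -U iulian
       # list the root of one share directly, bypassing Pydio
       smbclient //192.168.1.100/share -U iulian -c 'dir'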
  4. If anyone has any idea, here is the log:
     Error: NameResolutionFailure
     Fatal error
     System.Net.WebException: Error: NameResolutionFailure
       at System.Net.WebConnection+d__16.MoveNext () [0x00044] in :0
     --- End of stack trace from previous location where exception was thrown ---
       at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess (System.Threading.Tasks.Task task) [0x0003e] in <71d8ad678db34313b7f718a414dfcb25>:0
       at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification (System.Threading.Tasks.Task task) [0x00028] in <71d8ad678db34313b7f718a414dfcb25>:0
       at System.Runtime.CompilerServices.TaskAwaiter.ValidateEnd (System.Threading.Tasks.Task task) [0x00008] in <71d8ad678db34313b7f718a414dfcb25>:0
       at System.Runtime.CompilerServices.ConfiguredTaskAwaitable+ConfiguredTaskAwaiter.GetResult () [0x00000] in <71d8ad678db34313b7f718a414dfcb25>:0
       at System.Net.WebConnection+d__19.MoveNext () [0x000cc] in :0
     --- End of stack trace from previous location where exception was thrown ---
       at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess (System.Threading.Tasks.Task task) [0x0003e] in <71d8ad678db34313b7f718a414dfcb25>:0
       at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification (System.Threading.Tasks.Task task) [0x00028] in <71d8ad678db34313b7f718a414dfcb25>:0
       at System.Runtime.CompilerServices.TaskAwaiter.ValidateEnd (System.Threading.Tasks.Task task) [0x00008] in <71d8ad678db34313b7f718a414dfcb25>:0
       at System.Runtime.CompilerServices.ConfiguredTaskAwaitable`1+ConfiguredTaskAwaiter[TResult].GetResult () [0x00000] in <71d8ad678db34313b7f718a414dfcb25>:0
       at System.Net.WebOperation+<Run>d__57.MoveNext () [0x0009a] in <fc308f916aec4e4283e0c1d4b761760a>:0
     --- End of stack trace from previous location where exception was thrown ---
       at System.Net.WebCompletionSource`1+d__15[T].MoveNext () [0x00094] in :0
     --- End of stack trace from previous location where exception was thrown ---
       at System.Net.HttpWebRequest+d__244`1[T].MoveNext () [0x000ba] in :0
     --- End of stack trace from previous location where exception was thrown ---
       at Duplicati.Library.Main.BackendManager.List () [0x00049] in :0
       at Duplicati.Library.Main.Operation.FilelistProcessor.RemoteListAnalysis (Duplicati.Library.Main.BackendManager backend, Duplicati.Library.Main.Options options, Duplicati.Library.Main.Database.LocalDatabase database, Duplicati.Library.Main.IBackendWriter log, System.String protectedfile) [0x0000d] in :0
       at Duplicati.Library.Main.Operation.FilelistProcessor.VerifyRemoteList (Duplicati.Library.Main.BackendManager backend, Duplicati.Library.Main.Options options, Duplicati.Library.Main.Database.LocalDatabase database, Duplicati.Library.Main.IBackendWriter log, System.String protectedfile) [0x00000] in :0
       at Duplicati.Library.Main.Operation.BackupHandler.PreBackupVerify (Duplicati.Library.Main.BackendManager backend, System.String protectedfile) [0x000fd] in :0
       at Duplicati.Library.Main.Operation.BackupHandler.Run (System.String[] sources, Duplicati.Library.Utility.IFilter filter) [0x003c6] in :0
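     Since the failure is a NameResolutionFailure, it is presumably DNS inside the container rather than Duplicati itself. A rough check I can run (assuming the container is named "duplicati", the image has getent, and <backend-host> stands in for whatever host the backup job points at):
       # what DNS servers does the container actually see?
       docker exec duplicati cat /etc/resolv.conf
       # can the container resolve the backup destination?
       docker exec duplicati getent hosts <backend-host>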
  5. Hi all, Unfortunately I've updated my dockers and the Duplicati docker is now broken. How do I downgrade it by 2 versions or so?
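     What I'm thinking of trying (only a sketch, assuming the linuxserver/duplicati image; adjust for whichever repository the template actually uses): edit the container on the Docker tab and pin the Repository field to an older tag instead of latest, picking a tag from the image's Docker Hub tags page, e.g.
       # <older-tag> is a placeholder - substitute a real tag from Docker Hub
       linuxserver/duplicati:<older-tag>
     Applying the change makes Unraid pull that exact version; setting the repository back to linuxserver/duplicati later undoes it.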
  6. Yup, I updated my dockers and Duplicati broke. There are bug reports on the Duplicati forum about the new updates.
  7. Hi folks, I'm appealing to your knowledge again, if I may. My work emails and my personal emails are completely separate. However, due to certain restrictions, if I need to get some documents home I can't just email them; I need to put them on a secure stick. That's because the email server is not my own and the organization doesn't want the risk of someone else possibly accessing them. My personal email sits for free on Gmail, as [email protected], relayed through Mailgun. Some years ago, for about 3 years, I ran an Axigen email server from my garage and it worked wonders; security updates and spam were under control. I got bored of the interface and switched between different providers until I ended up on Google's domain (again).
     Question: I know unRaid was not meant to be exposed to the internet, but if I set up an Ubuntu/CentOS VM or similar to expose it, how secure do you think it would be? I was thinking of putting it behind SSL (Let's Encrypt) + Cloudflare, with fail2ban for extra security. Incoming email would be received via reverse DNS and outgoing mail would go through Mailgun or similar for spam-proofing (as it is now, score 10/10). Any ideas whether this could work, or am I barking up the wrong tree and should drop it? Security will be an issue, with updates needed every day.
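     For the fail2ban part, I was imagining something minimal like this in /etc/fail2ban/jail.local on the VM (only a sketch; the jail names are stock ones that ship with fail2ban, and it assumes Postfix/Dovecot handle the mail and SSH is exposed):
       [DEFAULT]
       bantime  = 3600
       findtime = 600
       maxretry = 5

       [sshd]
       enabled = true

       [postfix]
       enabled = true

       [dovecot]
       enabled = true
     That only covers brute-force attempts, of course; it doesn't replace keeping the mail stack patched.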
  8. Thank you for your reply pwm. I was thinking the same, but it was worth asking for others out there. I've set up the updates on a weekly basis and will see how I go. Duplicati has already broken 1 of the backups and was stuck in a continuous loop, preventing the other. I'll set it up again tonight. With the auto updates, if I get no phone calls from desperate employees unable to access files, it will be OK :) The server syncs every 4 hours locally and off-site + 2 backups off-site.
  9. Hi everyone, My server is pretty simple: 2 SSDs, 2 data drives and 2 parity drives. It is for a business with pretty low needs. Question: if both HDDs fail at the same time, can the 2 parity drives restore them both? Does it work like that? P.S. Just in case you were wondering, as this seems to pop up in nearly every rebuild topic: the server backs up to 3 different servers, in 3 different countries, on a daily basis, plus another local backup as well.
  10. I'll set up CA auto-updates on a monthly basis or so. Thank you for your quick replies.
  11. Unfortunately, apart from the server alone, none of it is isolated from the internet. Not in this day and age. What I was looking for is a set-and-forget type of server, with minimal maintenance, probably once every 6 months or a year, or less often if possible. Of course, that's if nothing else goes wrong. Is it doable?
  12. Hi folks, I've set up unRaid for my business. It's only a small-business (SMB) setup with 2 parity drives, 1 cache and 3 HDDs. As for dockers and plugins, I have only a few, Duplicati and rclone being the main ones. My question relates to updating CA plugins and dockers. Seeing stories about things getting broken on updates, is there any reason for updating unRaid, plugins and dockers? I understand about vulnerabilities if you expose it to the internet, but for needs like these, would you upgrade/update on a regular basis?
  13. The solution proposed by samba.org is on par with my solution. https://wiki.samba.org/index.php/Setting_up_a_Share_Using_POSIX_ACLs and https://wiki.samba.org/index.php/Setting_up_a_Share_Using_Windows_ACLs
  14. I managed to get it working just now. I will post it here as others might be interested in this as well. I ended up using "chmod" and "chown" to change folder permissions and ownership. The problem with this setup is that it can get very complicated if you have more than 1 manager and/or managers that change all the time.
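      For reference, what I ran boiled down to roughly this (a sketch with placeholder paths and usernames; it assumes user1 and user2 are both members of a "team" group):
        # manager (user1) owns the tree; the team group gets read-only access
        chown -R user1:team /mnt/user/general
        chmod -R 750 /mnt/user/general
        # each member's personal folder is owned by that member; the manager reads it via the group
        chown -R user2:team /mnt/user/general/team/user2
        chmod -R 750 /mnt/user/general/team/user2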
  15. Thx for this. I've been looking at that website today; however, the information is either poorly written or non-existent. This was useful but very hard and long to read: https://www.samba.org/samba/docs/current/man-html/smb.conf.5.html
  16. Thank you. I was thinking more along the lines of smb.conf than the Web UI. Any tips for an smb.conf setup?
  17. Hi folks, For the last 12 hours or so, I've been reading the forums and wiki and testing the server with SMB shares, and I can't seem to get it right. I was hoping someone could give me a hand if possible. I made it with colors so the SMB shares I want to create would be easier to read.
      Users: User-1 (manager), User-2 (team member)
      Folder structure and permissions:
      General Folder <-- User-1 (read/write)
      Team <-- User-1 (read/write) | User-2 (read)
      User 1 <-- User-1 (read/write) | User-2 (read)
      User 2 <-- User-1 (read) | User-2 (read/write)
      Other folders <-- User-1 (read/write)
      I tried creating the General folder and creating the Team folder inside it, but the Team folder inherits the General folder's permissions through Shares, even though I tried modifying it with smb.conf in the SMB settings. Can anyone shed some light on how this can be achieved, please, if it is possible?
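      What I've been experimenting with looks roughly like this (a sketch only; on Unraid the stanzas can go into the "Samba extra configuration" box under Settings -> SMB, which I believe ends up in /boot/config/smb-extra.conf, and the paths/usernames are placeholders for my real ones):
        [general]
            path = /mnt/user/general
            valid users = user1
            read only = no

        [team]
            path = /mnt/user/general/team
            valid users = user1 user2
            read only = yes
            write list = user1

        [user2]
            path = /mnt/user/general/team/user2
            valid users = user1 user2
            read only = yes
            write list = user2
      The idea is that "read only = yes" plus "write list" gives everyone in "valid users" read access and only the listed user write access; the underlying Linux permissions on the folders still have to allow it.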
  18. Mmm, that's interesting and not so encouraging. So if the NAS is on the LAN, doing its thing, there's no rclone option to back up/sync local computers or other NAS servers within the LAN?
  19. Hi folks, After fiddling with NextCloud and other variants, I found Tonido to be the best single file/folder sharing option for my clients. However, I find Tonido very insecure. I have enabled additional security but it still feels unsafe; it achieved grade F on SSL Labs https://www.ssllabs.com (image attached). Accessing and logging in to my only shared folder through Tonido's network reveals my whole server completely (image attached). Is there any way to contain Tonido to 1 folder and 1 folder only, without revealing the whole unRaid? That way, I can share only 1 file with my clients without worrying too much. Or is this docker obsolete and I shouldn't use it at all?
  20. Is this project dead? Very little activity on it.
  21. @pwm all your questions are valid, hence my problem. Not that often, probably every night, once every 24 hours. That was an example. I was thinking of some sort of Google Drive client, as Google keeps file versioning as well.
  22. Hi everyone, I've been looking for a solution to this and I can't seem to find one, so I need your ideas please. My NAS needs are fairly small (~15GB) at the moment. I want the data synchronized to Google Drive on a schedule (say every 4 hours). However, I would also like whatever I add on GDrive from home to be synced back to the server, on the same schedule. With rclone, I successfully mounted the cloud at /mnt/disks/folder; however, working on it actively isn't feasible, as the files are CAD files that change with every save + backup. There will also be others working on CAD files on the server in the future, and it would bring the network down, as not all the files are small. Is there any way of having a 2-way sync every 4 hours between the cloud and the server? Can anyone suggest any other solution? (NextCloud, or rsync, or mounting Google Drive on /mnt/disk1/folder?) Any ideas would be greatly appreciated.
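      The direction I'm leaning towards, instead of an active mount, is a scheduled job (e.g. via the User Scripts plugin, every 4 hours) along these lines (only a sketch; "gdrive" stands for whatever the rclone remote is called and the paths are placeholders):
        #!/bin/bash
        # newer rclone builds have a two-way mode (the very first run needs --resync):
        rclone bisync gdrive:cad /mnt/user/cad --verbose
        # on older rclone, two guarded one-way copies are a rough substitute
        # (--update skips files that are newer on the destination):
        # rclone copy gdrive:cad /mnt/user/cad --update
        # rclone copy /mnt/user/cad gdrive:cad --update
      With the copy approach deletions don't propagate, so it behaves more like a merge than a true two-way sync.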