denishay

Members
  • Content Count: 58
  • Joined
  • Last visited

Community Reputation: 6 (Neutral)

About denishay
  • Rank: Advanced Member
  • Gender: Undisclosed


  1. FYI, this still works fine with Unraid 6.8.3 and sysstat-11.2.1.1-x86_64-1.txz.
  2. OK... I have found a solution, and I'm writing it here as it may help others troubleshoot passwordless SSH connections. I knew I had done everything right and couldn't figure out why it worked with a password but not without one. The whole key is the "-v" (verbose) option of the SSH command, which let me see all the steps and find out what was wrong. As a reminder, the host is Unraid and the remote guest is a Raspberry Pi under Debian 10 Lite (without desktop). As you can see, all it took was renaming the public key and uploading it to the host. To do that "properly" without having to set permissions yourself, you can use the ssh-copy-id -i <path+certificate_name> user@host command (see the key-setup sketch after this list). So from now on, if you are having issues with your keys or passwordless SSH connections, you know how to find out what's missing or not working!
  3. I'm at a loss after 3 days of research and many reboots on this. I am trying to create a "safe" connection between my Unraid server and a Raspberry Pi which will ultimately be remote. I am using Raspbian Lite 10 on the Raspberry Pi. I have already installed WireGuard on it and it connects to Unraid successfully at each reboot, so that covers the "safe" tunnel between the two. Unraid (10.253.0.1) can ping and SSH into the Raspberry Pi (10.253.0.3) using the IP addresses in brackets, and the other way around. So SSH works wonderfully... but only by entering the password. I created the SSH public keys [using ssh-keygen] and added them to the authorized_keys file on each endpoint [cat raspberrypi.pub >> authorized_keys], but to no avail: everything works, yet the password is always required. I have of course tried disabling password login on Unraid [in the settings section once the SSH plugin is installed] to see if it would force the use of the stored public key, but no, the connection then simply fails. I want to script some rsync jobs later, so I absolutely need passwordless login to work. I have tried many tutorials, and I think I understand the mechanisms quite well. As far as I can tell I have done everything necessary, but I don't understand why the password is still required for every SSH attempt (either way). I have also tried all the different chmod values on the various files and folders, but no combination seems to work. Worth noting: I am using the root account on each side, even on the Raspberry Pi (so not the pi account; I have enabled root), but even newly created users with super-user privileges show the same behaviour and the password is requested. Any help would be greatly appreciated.
  4. Hi turl, Having the shares on the array was a temporary thing, as I wanted to sort out the current issues before setting them back to use the cache. I had tried to delete the docker image and install the dockers again, but it didn't work at first. So I went a bit brutal and stopped the array completely, removed the two cache drives and left them in Unassigned Devices, where I removed their respective partitions. Then I added them back as cache, let Unraid format them again and clicked Balance > Convert to RAID 0 once more. This time everything went well: I could delete the docker image again, and this time it allowed me to reinstall my previous dockers. I am so glad we have this saving of templates in appdata! So all in all, everything is working as it did before. I think my issue was probably some initial corruption during the first conversion to RAID 0, which in turn corrupted the docker image file. Anyway, thanks for the help! I'll now set the shares that should be on the cache back to it.
  5. Some more info. I saw there is a BTRFS check feature when the array is in maintenance mode, so I tried that, but it can't find any errors (the equivalent command-line check is sketched after this list):
     Opening filesystem to check...
     Checking filesystem on /dev/sdl1
     UUID: ***removed***
     [1/7] checking root items
     [2/7] checking extents
     [3/7] checking free space cache
     [4/7] checking fs roots
     [5/7] checking only csums items (without verifying data)
     [6/7] checking root refs
     [7/7] checking quota groups skipped (not enabled on this FS)
     found 206969012224 bytes used, no error found
     total csum bytes: 149267328
     total tree bytes: 381583360
     total fs tree bytes: 214286336
     total extent tree bytes: 12845056
     btree space waste bytes: 49397585
     file data blocks allocated: 531302301696 referenced 206132273152
  6. Hi all, I could really use some help to get my dockers running again. Here is the situation: my Unraid 6.8.3 had a happily functioning array of various disks and a 250 GB SSD as cache drive (xfs). My VMs and docker image live on the cache drive by default, for performance reasons. I happened to get another 250 GB SSD and thought I would use that second SSD and the new 6.8.3 support for BTRFS pools to set the two up as RAID 0. So here I go: I stop my array, extend the cache to two slots and add the extra SSD. I format the first drive as BTRFS, then use the balance option to convert to RAID 0. Everything goes well, and after some time I get what looked like a functioning cache. I had my dockers running, one VM too. So I copied back to the cache all the files which I had previously backed up to a separate disk on the array. Then I restarted the whole Unraid server, to make sure everything would come back up if the server rebooted while I'm not present. And now I'm a bit stuck... the docker service fails to start, and when I look at the system log I get this weird bit I didn't have before:
     May 25 13:40:02 unraid kernel: BTRFS error (device loop3): failed to read chunk root
     May 25 13:40:02 unraid root: mount: /var/lib/docker: wrong fs type, bad option, bad superblock on /dev/loop3, missing codepage or helper program, or other error.
     May 25 13:40:02 unraid kernel: BTRFS error (device loop3): open_ctree failed
     May 25 13:40:02 unraid root: mount error
     May 25 13:40:02 unraid emhttpd: shcmd (167): exit status: 1
     May 25 13:40:37 unraid root: error: /webGui/include/Notify.php: wrong csrf_token
     May 25 13:40:57 unraid emhttpd: error: cmd: wrong csrf_token
     I have tried googling for that, and it seems others have had a similar problem in the past, but I couldn't find any working solution. I have attached the anonymized diagnostics file in case that can help. The second line seems to indicate a wrong "fs" (filesystem?) type, and I'm wondering if it isn't the change from xfs to btrfs. I have rerun the balance operation, tried the chunk verification with error correction and done multiple restarts, but nothing seems to help. If anyone could point me in the right direction here, it would be much appreciated (a short sketch for inspecting the loop device is included after this list). unraid-diagnostics-20200525-1241.zip
  7. Hi, You are right. It is not possible anymore. The "for business" line of CrashPlan never allowed that, and CrashPlan discontinued their older consumer solution, so you cannot do that anymore. A pity, as I see lots of added value there, but I also understand it, knowing that their cloud backup service isn't limited in size (you could potentially back up many workstations to one machine, then pay a subscription only for that last one).
  8. What I like most about Unraid is the ease of use of dockers; it's so easy to add new functionality to my home lab with them. And what I would like to see most in 2020 is an easy-to-use backup system (not just sync) from one Unraid server to a remote one. It is doable now, but requires quite a lot of setup.
  9. I have completely lost all connection to the server twice already since the 6.8rc3 upgrade yesterday. I can't even connect via SSH, so I am forced to hard shutdown/restart Unraid. I wish I could help debug this by providing diagnostics, but I can't as my server is headless. I had it running 6.7 stable for several months in a row before that without any problem, so I'm going back to 6.7. Not sure what went wrong with 6.8 really. The loss of connection was complete, not only a web UI issue: even at the router level, neither Unraid nor any of the dockers with their own IP showed up. Running on an old but reliable Core i7 2.8GHz and an Asus Maximus Gene III (yes, that old!). I never had this "loss of IP connection" issue on that system before, including with earlier Unraid versions.
  10. What I do is have Nextcloud keep its own files on /data, and I create an extra mapping pointing to my Unraid data, which I then add as external storage in Nextcloud (a sketch of that mapping is included after this list).
  11. You're welcome. Yes, sorry, my Nextcloud is using an older PHP (I downgraded it manually for an older version update which wasn't supported with PHP 7) and I had even forgotten about it since.
  12. One could argue that syncing is not backup and not meant to protect against deletions or corruption.
  13. Use:
      - the duckdns docker, to get a free domain name that redirects to your dynamic IP
      - the letsencrypt docker, which comes along with nginx, creates/updates your free SSL certificate (the Let's Encrypt part) and redirects HTTPS calls to your Nextcloud
      - the nextcloud docker (+ MariaDB or another database)
      I am pretty sure that SpaceInvaderOne did a video on the full setup for Nextcloud... This one might help too: https://www.youtube.com/watch?v=I0lhZc25Sro Also, the config will not only be in Nextcloud but also in nginx, as it is your reverse proxy. Typically under "sites" with xxxsitename.conf files iirc (a minimal proxy config sketch is included after this list).
  14. Open a console (either from the main Unraid dashboard or an SSH session) and type: docker exec -it nextcloud sudo -u abc php /config/www/nextcloud/occ files:scan --all It will run the occ command inside the nextcloud docker as the abc user and scan for any missing files (a few more files:scan variants are sketched after this list). Not sure why, but someone found it funny to use a completely non-standard name as the Nextcloud data owner... Hoping it saves you countless hours of research.
  15. This was discussed earlier in this thread if you search. You can edit the config file to disable de-duplication and presto, CrashPlan will use your maximum upload speed... With a nice upload like yours, it's way faster than having CrashPlan try to work out what does or doesn't need uploading, which strongly reduces the upload rate. Edit: you can also see that here: https://support.code42.com/CrashPlan/4/Configuring/Unsupported_changes_to_CrashPlan_de-duplication_settings
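
Regarding the passwordless SSH posts above (items 2 and 3): here is a minimal sketch of the key setup and the verbose debugging that found the problem. It assumes the same root user and WireGuard address 10.253.0.3 as in those posts; the ed25519 key type and the key paths are my own choices, and on Unraid the persistent location of /root/.ssh depends on how the SSH plugin is set up, so treat the paths as assumptions.

    # On the machine initiating the connection (here Unraid), generate a key pair.
    ssh-keygen -t ed25519 -f /root/.ssh/id_ed25519
    # Copy the public key into the remote side's authorized_keys; this also takes care of permissions.
    ssh-copy-id -i /root/.ssh/id_ed25519.pub root@10.253.0.3
    # Test verbosely: the -v output shows which keys are offered and why they are skipped or refused.
    ssh -v -i /root/.ssh/id_ed25519 root@10.253.0.3
    # If a password is still requested, check the permissions on the remote side:
    # ~/.ssh should be 700 and ~/.ssh/authorized_keys should be 600.
    chmod 700 /root/.ssh && chmod 600 /root/.ssh/authorized_keys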
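
For the BTRFS check mentioned in item 5: as far as I can tell, the maintenance-mode check in the GUI corresponds roughly to a read-only btrfs check against the pool device, along these lines. The device name /dev/sdl1 comes from the output quoted in that post and will differ on other systems; the pool must not be mounted (array stopped or in maintenance mode) for the check to run.

    # Read-only filesystem check, no repairs attempted.
    btrfs check --readonly /dev/sdl1
    # Cumulative I/O and corruption counters for the pool (this one runs while it is mounted).
    btrfs device stats /mnt/cache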
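
For the "failed to read chunk root" errors in item 6: the docker image is itself a BTRFS filesystem inside a file that Unraid attaches to a loop device, so errors on loop3 point at that image file rather than at the cache pool directly. A hedged way to confirm that; the path shown by losetup is whatever is configured under Settings > Docker, not something taken from the diagnostics.

    # Show which file backs each loop device; loop3 should map to the docker image file.
    losetup -a
    # A read-only check of that loop device should then reproduce the same chunk root error.
    btrfs check --readonly /dev/loop3

If the image itself is corrupt, recreating it (delete it under Settings > Docker with the service stopped, then reinstall the containers from their saved templates) is usually simpler than trying to repair it, which matches the outcome described in item 4.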
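
For the external storage setup in item 10: the idea is an extra volume mapping on the Nextcloud container plus a "Local" entry in Nextcloud's External storage settings. A sketch with made-up paths and names (on Unraid you would normally add the mapping as an extra Path in the container template rather than with docker run):

    # Hypothetical example: expose the Unraid share /mnt/user/data inside the container as /unraid_data.
    docker run -d --name=nextcloud \
      -v /mnt/user/appdata/nextcloud:/config \
      -v /mnt/user/nextcloud:/data \
      -v /mnt/user/data:/unraid_data \
      linuxserver/nextcloud
    # Then enable the "External storage support" app in Nextcloud and add /unraid_data
    # as a storage of type "Local" in the admin External storage settings.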
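
For the reverse proxy part of item 13: a minimal sketch of what one of those site .conf files tends to look like inside the letsencrypt container. The file location, domain, upstream container name and port are all placeholders, and the real configs also pull in SSL certificate and header settings through the container's own includes, which are omitted here.

    # e.g. /config/nginx/site-confs/nextcloud.conf (path is an assumption)
    server {
        listen 443 ssl;
        server_name mycloud.duckdns.org;        # placeholder DuckDNS domain

        location / {
            # forward HTTPS requests to the Nextcloud container
            proxy_pass https://nextcloud:443;
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }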
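
A few more variants of the files:scan command from item 14, run the same way through docker exec; the container name nextcloud and the abc user come from that post, while the user name and path below are placeholders.

    # Scan every user's files:
    docker exec -it nextcloud sudo -u abc php /config/www/nextcloud/occ files:scan --all
    # Scan a single Nextcloud user:
    docker exec -it nextcloud sudo -u abc php /config/www/nextcloud/occ files:scan alice
    # Scan only one folder (the path is relative to the Nextcloud data directory):
    docker exec -it nextcloud sudo -u abc php /config/www/nextcloud/occ files:scan --path="alice/files/Photos"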