
denishay

Members
  • Content Count

    49
  • Joined

  • Last visited

Community Reputation

6 Neutral

About denishay

  • Rank
    Advanced Member


  1. What I do is indeed have Nextcloud keep its own files on /data, and I create an extra mapping pointing to my unRAID data, which I then add as external storage in Nextcloud.
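A sketch of what those two mappings might look like on the docker side; the host paths here are hypothetical examples, not my actual shares:

```shell
# Hypothetical host paths -- substitute your own shares:
NEXTCLOUD_DATA=/mnt/user/nextcloud   # Nextcloud's own files -> /data
UNRAID_SHARE=/mnt/user/media         # extra share exposed to Nextcloud
# The two volume flags as they would appear on the docker run line:
echo "-v ${NEXTCLOUD_DATA}:/data -v ${UNRAID_SHARE}:/unraid_media"
```

Inside Nextcloud, /unraid_media is then added via the External storage app as a "Local" mount.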
  2. You're welcome. Yes, sorry, my Nextcloud is using an older PHP (I manually downgraded it for an older version update that wasn't supported on PHP 7). I had even forgotten about it since.
  3. One could argue that syncing is not a backup and is not meant to protect against deletions or corruption.
  4. Use:
     - the duckdns docker to get a free domain name redirecting to your dynamic IP
     - the letsencrypt docker, which comes along with nginx, creates/updates your free SSL certificate (the Let's Encrypt part) and redirects HTTPS calls to your Nextcloud
     - the nextcloud docker (+ mariadb or another database)
     I am pretty sure that SpaceInvaderOne did a video on the full setup for Nextcloud... This one might help too: https://www.youtube.com/watch?v=I0lhZc25Sro
     Also, the config will not only be on Nextcloud but also on nginx, as it is your reverse proxy. Typically under "sites", with xxxsitename.conf files, iirc.
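For the nginx part, a minimal sketch of such a site file. The domain, IP, and port below are placeholder assumptions, not from the post, and the real file belongs in the letsencrypt container's nginx "sites" folder rather than /tmp:

```shell
# All names here are placeholder assumptions -- substitute your own.
# Writes a minimal reverse-proxy site file like the xxxsitename.conf
# mentioned above (to /tmp for illustration only).
cat > /tmp/nextcloud.conf <<'EOF'
server {
    listen 443 ssl;
    server_name yourname.duckdns.org;      # your duckdns domain

    location / {
        proxy_pass https://192.168.1.10:443;   # your Nextcloud docker
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
EOF
```

The SSL certificate directives themselves are normally pulled in by the container's own includes, so they are left out of this sketch.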
  5. Open a console (either from the main unRAID dashboard or an SSH session) and type:

     docker exec -it nextcloud sudo -u abc php /config/www/nextcloud/occ files:scan --all

     It will run the occ command in the nextcloud docker as the abc user and scan for any missing files. Not sure why, but someone found it funny to use a completely non-standard name as the Nextcloud data owner... Hoping this saves you countless hours of research.
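As a side note, occ files:scan also accepts a single user id if you don't want to rescan everything. A hedged sketch, where "alice" is a hypothetical Nextcloud account name (container name and paths as in the command above); it prints the command rather than running it, so you can review it first:

```shell
CONTAINER=nextcloud              # name of the Nextcloud docker container
NC_USER=abc                      # the image's internal Nextcloud user
OCC=/config/www/nextcloud/occ    # path to occ inside the container
# "alice" is a placeholder account name:
echo "docker exec -it ${CONTAINER} sudo -u ${NC_USER} php ${OCC} files:scan alice"
```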
  6. This was discussed already in this thread if you search. You can edit the config file to disable de-duplication and, presto, CrashPlan will use your max upload... With a nice upload like yours, that's way faster than having CrashPlan try to guess what is or isn't necessary to upload, which slows the upload considerably. Edit: you can also see that here: https://support.code42.com/CrashPlan/4/Configuring/Unsupported_changes_to_CrashPlan_de-duplication_settings
  7. who would have thought there is a reason it's called *un*RAID
  8. Yup. That. I have yet to see a silent rack server. You might be far better served changing the MB+CPU combo in your existing case if you're satisfied with it. I have a setup similar to yours, but in a Fractal Design R5 (the "quiet" one, not the flashy stuff with loud glass panels everywhere), and it is very silent: I can barely hear a small breeze when standing next to it, and that's about it (and I'm not even using Noctua fans). For the GPU, though, I went minimal and got a PCI (not PCI-E) passive GPU with just a VGA output. It uses a slot that would otherwise be unusable for anything else and saves me a precious x16 PCI-E slot for more useful expansions.
  9. I'd probably replace the "well, it was free" with "well, it was given". With that type of hardware and the huge number of drives, I can't imagine it has no impact on the electricity cost. While I understand it's nice to experiment with, I would certainly not rely on a huge number of drives driven by hardware RAID. I can't speak for others, but for me, going unRAID was exactly about avoiding such dependencies: if and when something happens to one drive, then fine, just replace that one. With parity at high risk of failure (because it's hardware RAID: while it rebuilds, you have no parity, hence no protection), I would certainly not put any data I value on such a server. But as I said, that is only my point of view. Others might have another.
  10. Nope, a single SSD. 240GB, nothing extraordinary.

      Filesystem  1B-blocks       Used            Available      Use%  Mounted on
      /dev/sdc1   239947935744    97084276736     142863659008   41%   /mnt/cache
      /dev/md1    1999422144512   1214729420800   784692723712   61%   /mnt/disk1
      /dev/md2    999716474880    247712878592    752003596288   25%   /mnt/disk2
      /dev/md3    2999127797760   1427599224832   1571528572928  48%   /mnt/disk3
      /dev/md4    2999127797760   2593694089216   405433708544   87%   /mnt/disk4
      /dev/md5    3998833471488   3454259048448   544574423040   87%   /mnt/disk5
      /dev/md6    3998833471488   2016894824448   1981938647040  51%   /mnt/disk6
      rootfs      4140683264      691458048       3449225216     17%   /mnt
      shfs        17235009093632  11051973763072  6183035330560  65%   /mnt/user
      shfs        16995061157888  10954889486336  6040171671552  65%   /mnt/user0

      Unfortunately, in the meantime I had completely uninstalled and reinstalled it in the hope of "fixing" what I thought was an issue, so the log is now empty.
  11. Thanks a million! That was indeed the case. It was my Nextcloud share and, somehow, between the trash bin and a few redundant folders, I had more than 5.1 TB on this share! It's weird that Unbalance offered the cache drive as a destination then, or didn't warn me there wasn't enough space. But thanks! Mystery solved now!
  12. Hi all, I'm puzzled. I just added a new drive to my array (formatted it, etc.), and I can move files to/from it just fine in a shell using mc, but somehow Unbalance is completely unable to offer me this brand-new empty 4TB drive as a destination for a "gather" operation. I would like to move all files pertaining to a share to this new drive, but alas, nothing but my cache drive is offered as a destination (the other drives are quite full, up to the 512MB set as a limit). Of course, I tried stopping/restarting Unbalance and the array... I even rebooted the whole server, but no luck in getting this new drive as a destination. How can I "force" it as a destination in a gather operation?
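A follow-up thought on the last three posts: before a gather, you can check by hand whether a share actually fits on the intended destination. A minimal sketch, assuming GNU du/df (as shipped with unRAID); the paths in the usage line are hypothetical examples:

```shell
# fits SRC DEST -> prints "yes" if SRC's contents fit in DEST's free space.
fits() {
  local src_bytes free_bytes
  # Total bytes used by the source share (includes trash bins etc.):
  src_bytes=$(du -sb "$1" | awk '{print $1}')
  # Free bytes on the destination mount point:
  free_bytes=$(df -B1 --output=avail "$2" | tail -n1 | tr -d ' ')
  if [ "$src_bytes" -le "$free_bytes" ]; then echo yes; else echo no; fi
}

# Hypothetical usage: fits /mnt/user/nextcloud /mnt/disk7
```

In the 5.1 TB share vs. 4 TB drive case above, this would print "no", which is presumably why Unbalance quietly skipped the drive.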