RackIt

Members
  • Posts

    33
  • Joined

Converted

  • Gender
    Undisclosed


  1. All of the config data is being placed into appdata. I was able to re-map it; maybe this helps someone else. I have not gone beyond getting it installed and the volumes remapped. I created a new share called "minecraft-crafty" and added the following to Extra Parameters (the equivalent docker run form is sketched after this list):
     -v "/mnt/user/minecraft-crafty/backups:/config/crafty/backups"
     -v "/mnt/user/minecraft-crafty/logs:/config/crafty/logs"
     -v "/mnt/user/minecraft-crafty/servers:/config/crafty/servers"
     -v "/mnt/user/minecraft-crafty/config:/config/crafty/app/config"
     -v "/mnt/user/minecraft-crafty/import:/config/crafty/import"
  2. I forced a reboot and then started the array afterwards. The array took a while to come online, but the 4-drive ZFS pool seems to be online. I'm going to remove the pool and repeat the creation. Edit: I just recreated the 4-drive ZFS pool and this time it went smoothly. I removed the diagnostics since the problem seems to have been temporary.
  3. I am now testing an all-SSD setup in a Dell R630, though I'm running into some issues. I have four 2TB SSDs which I want to place into a single pool as two mirrored vdevs (see the sketch after this list). I place two SSDs into a ZFS mirror, start the array, and format. All good. I stop the array, change the pool to 4 disks, and add the additional two SSDs. The trouble starts when I start the array: it simply stays at "Starting Array - Mounting disks...".
  4. I want to move my unraid server to new hardware that has all 2.5" drives (Dell R630). I imagine the process would be:
     - in the existing server, install the new 2.5" drives and migrate the data from the 3.5" drives (a per-disk rsync sketch is after this list)
     - shut down the old server
     - move the USB stick, SSD cache drives, and 2.5" drives to the new server
     Any flaws in that plan, or is there an easier method?
  5. I added it to the custom location for the individual proxy.
  6. In the event this helps anyone else... I placed FileBrowser behind NGM. My test file is 73GB. I was able to upload this file in FileBrowser without NGM, but it would fail with NGM. I noticed Docker throwing a space warning: NGM was buffering the upload within the docker container. I am now able to successfully upload my test file after adding the following Custom Nginx Configuration. Some settings might not be needed, but it works and is at least a starting point.
     proxy_request_buffering off;
     proxy_buffering off;
     proxy_set_header Host $host;
     proxy_set_header X-Forwarded-Proto $scheme;
     proxy_set_header X-Real-IP $remote_addr;
     proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
     proxy_max_temp_file_size 0;
     client_max_body_size 0;
  7. For anyone still struggling with this: I found that the container isn't set up to work in the typical "appdata" way, and permission issues prevent warrior from saving the JSON config file. My solution was to accept all the defaults of the container and just pass "Extra Parameters" to configure warrior. This preserves your settings, but any in-progress work gets lost when the container is re-created. My settings just use the preferred project and max out the concurrent items. I hope this helps someone. Set "Extra Parameters" to:
     -e DOWNLOADER='YourUsername'
     -e SELECTED_PROJECT='auto'
     -e CONCURRENT_ITEMS='6'
  8. It looks like the granularity for moving 'older' files is 1 day, based on ctime. I'd like to move files older than 1 hour, i.e. the equivalent of find's -cmin test. Am I missing a setting for this? If not, is there a work-around? A rough user-script idea is sketched after this list. (awesome plugin!)
  9. I currently do the same with a Synology for user storage that is versioned using BTRFS. It looks like you have the local backup covered; you could then have remote backup by taking drives offsite. I didn't outline every one of my goals, but one of them is to have fast, versioned remote backup to the cloud. To get the versioning I could use a file-based backup application (of which I've used and tested many), or I'm thinking I could just use ZFS snapshots and back up the snapshots themselves (sketched after this list). I would also like the files verified by a hash check, and I think ZFS would give me that. I hope all that makes sense.
  10. Thanks, good suggestion, and my DS916+ supports it. While I like Synology in general, I'm not really a fan of being vendor-locked with their backup solution. I don't think I'll be buying Synology devices in the future. I still think I would prefer the robustness of ZFS as a backup server if what I outlined is sound.
  11. My home network is a Windows environment along with a Synology NAS. I want to be able to protect against ransomware, so having incremental backups is important to me. I've been looking at the various methods other unraid users are using, but they all seem too complex for the purpose. I'm sure I'm not the first to consider this, but I'm thinking of just using a 1U server running FreeNAS so I can take daily ZFS snapshots and back those snapshots up locally and remotely (a minimal scheduling sketch is after this list). I can then back up important files from my Windows Server NAS, Synology NAS, and unraid all to the FreeNAS server and perform simple backups of FreeNAS. What are the holes in this strategy? p.s. I've also tried Synology's backup methods, but I found them really unreliable.
  12. Thanks. Not sure how I was missing it but the SAS drives show now.
  13. Great plugin! My SAS drives are not found in the scan. Does this support SAS drives?
  14. CacheCade basically allows you to use a few SSD drives as caching drives with the controller determining what to cache. I'm curious if anyone has tested LSI's CacheCade with unraid. I'm not sure if CacheCade even works with JBOD or if it requires an array. Anyone using it? Thanks.
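
For item 1 above, those Extra Parameters are plain volume mappings, so outside of the Unraid template they amount to a docker run along these lines; this is a minimal sketch only, and CRAFTY_IMAGE is a placeholder for whatever image the template actually uses:

  # Sketch of the equivalent docker run; CRAFTY_IMAGE is a placeholder
  docker run -d --name crafty \
    -v "/mnt/user/minecraft-crafty/backups:/config/crafty/backups" \
    -v "/mnt/user/minecraft-crafty/logs:/config/crafty/logs" \
    -v "/mnt/user/minecraft-crafty/servers:/config/crafty/servers" \
    -v "/mnt/user/minecraft-crafty/config:/config/crafty/app/config" \
    -v "/mnt/user/minecraft-crafty/import:/config/crafty/import" \
    CRAFTY_IMAGE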
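For item 3 above, the target layout (one pool made of two mirrored vdevs) looks like this at the ZFS command line; the pool name and device names are placeholders, and on Unraid the same result is normally reached through the GUI rather than by running these commands:

  # Create the pool from the first mirrored pair of SSDs
  zpool create tank mirror /dev/sdb /dev/sdc
  # Extend it later with the second mirrored pair (what adding the
  # extra two SSDs to the 4-slot pool is meant to accomplish)
  zpool add tank mirror /dev/sdd /dev/sde
  # Confirm both mirror vdevs are online
  zpool status tank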
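For item 4 above, the data-migration step is commonly done disk-to-disk with rsync once a new drive has been added to the array; a minimal sketch, where disk1 (old 3.5" drive) and disk5 (new 2.5" drive) are placeholder mount points:

  # Copy one old disk onto its replacement, preserving permissions,
  # timestamps and extended attributes; repeat per disk
  rsync -avX --progress /mnt/disk1/ /mnt/disk5/
  # Re-run until it reports nothing left to transfer before retiring
  # the old disk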
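For item 8 above, one possible work-around while the plugin only supports day-level age is a scheduled script built around find's -cmin test; this is only a rough sketch that bypasses the mover entirely, and the share name and paths are placeholder assumptions:

  #!/bin/bash
  # Move cache files whose status changed more than 60 minutes ago to
  # the array-only path, preserving the share's directory layout.
  SHARE="downloads"              # placeholder share name
  cd "/mnt/cache/$SHARE" || exit 1
  find . -type f -cmin +60 -print0 \
    | rsync -a --remove-source-files --files-from=- --from0 ./ "/mnt/user0/$SHARE/"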
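For item 9 above, the "just use ZFS snapshots and back up the snapshots" idea reduces to a few commands; a minimal sketch with placeholder pool, dataset and host names, where the first send must be a full one (later runs can use an incremental send with -i), and a periodic scrub provides the checksum verification mentioned:

  # Take a dated snapshot of the dataset holding the backups
  SNAP="tank/backups@$(date +%Y-%m-%d)"
  zfs snapshot "$SNAP"
  # Replicate the snapshot to the offsite box (the target dataset must
  # not yet exist there for the initial full send)
  zfs send "$SNAP" | ssh backup-host zfs receive -u offsite/backups
  # Verify on-disk checksums across the whole pool
  zpool scrub tank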
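For item 11 above, the daily-snapshot part of the plan is handled natively by FreeNAS's periodic snapshot tasks; purely as an illustration of what that amounts to, here is a cron-style line with placeholder names (note the escaped % signs that cron requires):

  # /etc/cron.d style entry: snapshot the backup dataset nightly at 02:00
  0 2 * * * root zfs snapshot "tank/backups@$(date +\%Y-\%m-\%d)"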