Posts posted by RackIt

  1. All the config data is placed into appdata by default. I was able to re-map it; maybe this helps someone else.

     

    I have not gone beyond just getting this installed and the volumes remapped.

     

    I created a new share called "minecraft-crafty"

     

    In the extra parameters:

     

    -v "/mnt/user/minecraft-crafty/backups:/config/crafty/backups" -v "/mnt/user/minecraft-crafty/logs:/config/crafty/logs" -v "/mnt/user/minecraft-crafty/servers:/config/crafty/servers" -v "/mnt/user/minecraft-crafty/config:/config/crafty/app/config" -v "/mnt/user/minecraft-crafty/import:/config/crafty/import"

     

  2. 1 hour ago, JorgeB said:

    Please post the diagnostics.

    I forced a reboot. And then started the array after the reboot. The array took a while to come online but the zfs 4-drive pool seems to be online. I'm going to remove the pool and repeat the creation.

     

    Edit: I just recreated the 4-drive zfs pool and this time it went smoothly.

     

    I removed the diags since the problem seems to have been temporary.

  3. I am now testing an all-SSD setup in a Dell R630, though I'm running into some issues.

     

    I have four 2TB SSDs which I want to place into a single pool as two mirrored vdevs.

     

    I place two SSDs into a ZFS mirror, start the array, and format. All good.

     

    I stop the array, change the pool to four disks, and add the two additional SSDs.

     

    The trouble starts when I start the array. The array simply stays at "Starting Array - Mounting disks...".
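
    For reference, the end state I'm trying to reach, written as plain ZFS commands (device names are hypothetical - unraid builds the pool itself through the GUI):

    zpool create cache_pool mirror /dev/sdb /dev/sdc
    zpool add cache_pool mirror /dev/sdd /dev/sde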

     

  4. I want to move my unraid server to new hardware that has all 2.5" drives (Dell R630).

     

    I imagine the process would be:

     

    - in the existing server, install new 2.5" drives and migrate the data from the 3.5" drives

    - shutdown the old server

    - move the USB stick, SSD cache drives, and 2.5" drives to the new server

     

    Any flaws in that plan or is there any easier method?
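
    For the data-migration step, a minimal per-disk copy sketch (mount points are hypothetical; I'd re-run it just before the final cutover to catch new files):

    rsync -avPX /mnt/disk1/ /mnt/disk7/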

  5. In the event this helps anyone else...

     

    I placed FileBrowser behind NGM. My test file is 73GB. I was able to upload this file in FileBrowser without NGM but it would fail with NGM.

     

    I noticed docker throwing a space warning. NGM was buffering the upload within the docker container.

     

    I am now able to successfully upload my test file after adding the following Custom Nginx Configuration. Some settings might not be needed, but it works and is at least a starting point.

     

    # stream uploads straight through to FileBrowser instead of buffering
    # them to disk inside the container (the buffering is what filled docker)
    proxy_request_buffering off;
    proxy_buffering off;
    # pass the original host and client details to the backend
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    # no limit on temp-file spooling or request body size
    proxy_max_temp_file_size 0;
    client_max_body_size 0;

     

  6. For anyone still struggling with this: I found that the container isn't set up to work in the typical "appdata" way, and permission issues prevent warrior from saving the JSON config file.

     

    My solution was to accept all the defaults of the container and to just pass "Extra Parameters" to configure warrior. This will preserve your settings but any in-progress work gets lost when the container is re-created.

     

    My settings are to just use the preferred project and to max the concurrent items. I hope this helps someone.

     

    Set "Extra Parameters" to:

    -e DOWNLOADER='YourUsername' -e SELECTED_PROJECT='auto' -e CONCURRENT_ITEMS='6' 
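
    To confirm the variables actually reached the container, something like this should work (the container name "Warrior" is a guess - use whatever yours is called):

    docker exec Warrior env | grep -E 'DOWNLOADER|SELECTED_PROJECT|CONCURRENT_ITEMS'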

  7. 1 hour ago, hugenbdd said:

    It's based on days. I don't think many would use hours.

     

    You could modify age_mover where ctime is used in the "find" string, or you could create your own mover command and put it in cron.

     

    # find "/mnt/cache/share1" -depth -cmin +60 | /usr/local/sbin/move -d 1

     

    Perfect, thanks!
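
    For anyone else who wants this, the suggestion above dropped into an hourly cron entry might look like the following (the share path and the move helper are taken straight from the quote, not independently verified):

    0 * * * * find "/mnt/cache/share1" -depth -cmin +60 | /usr/local/sbin/move -d 1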

  8. It looks like the granularity for moving 'older' files is 1 day based on CTIME. I'd like to move files older than 1 hour based on CMIN. 

     

    Am I not seeing a setting for this? If not, is there a work-around?

    (awesome plugin!)

  9. 12 hours ago, Benson said:

    My solution is a QNAP NAS used for private / small data / versioning storage, and Unraid for large media and all backups. So Unraid is my last restore source.

     

    I currently do the same with a Synology for user storage that is versioned using BTRFS.

     

    It looks like you have the local backup covered. You then could have remote backup by taking drives offsite.

     

    I didn't outline every one of my goals, but one of them is to have fast, versioned remote backup to the cloud. To get the versioning I could use a file-based backup application (of which I've used and tested many), or I'm thinking I could just use ZFS snapshots and then back up the snapshots.

     

    I would also like to have the files verified by a hash check and I think ZFS would give me that.
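
    As a sketch of what I mean (pool, dataset, and host names are made up): daily snapshots get replicated incrementally, and a periodic scrub verifies every block against its checksum:

    zfs snapshot tank/data@2020-06-02
    zfs send -i tank/data@2020-06-01 tank/data@2020-06-02 | ssh offsite zfs receive backup/data
    zpool scrub tank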

     

    I hope all that makes sense.

  10. 12 minutes ago, Spyderturbo007 said:

    Depending on the model of your Synology, have you looked at Active Backup for Business? You mentioned you tried the Synology ones, but this is newer than the terrible ones they had before, and I wasn't sure if you had seen it. We use a much more robust imaging system at my office, but I spun it up on a few servers just for peace of mind and it's pretty reliable. Not all models support the software, though.

     

    https://www.synology.com/en-us/dsm/feature/active_backup_business

    Thanks, good suggestion, and my DS916+ supports it. While I like Synology in general, I'm not really a fan of being vendor-locked with their backup solution. I don't think I'll be buying Synology devices in the future.

     

    I'm still thinking I would prefer the robustness of ZFS as a backup server if what I outlined is sound. 

  11. My home network is a Windows environment along with a Synology NAS. I want to protect against ransomware, so having incremental backups is important to me.

     

    I've been looking at various methods other unraid users are using but they all seem too complex for the purpose.

     

    I'm sure I'm not the first to consider this, but I'm thinking of just using a 1U server to run FreeNAS so I can take daily ZFS snapshots and back those snapshots up locally and remotely.

     

    I can then back up important files from my Windows Server NAS, Synology NAS, and unraid to the FreeNAS server, and perform simple backups of FreeNAS.
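
    The core of it would be a daily snapshot per source dataset, something like this cron entry (dataset name is made up; FreeNAS also has a GUI scheduler for exactly this):

    0 1 * * * zfs snapshot tank/clients@$(date +\%Y-\%m-\%d)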

     

    What are the holes in this strategy?

     

    p.s. I've also tried Synology backup methods but I found them really unreliable. 

  12. 13 hours ago, olehj said:

    It shouldn't matter whether the drives are SAS or SATA; it's rather the HBA/RAID card. There are several posts here about these issues and how to debug them.

    Thanks. Not sure how I was missing it but the SAS drives show now.

  13. CacheCade basically allows you to use a few SSD drives as caching drives with the controller determining what to cache.

     

    I'm curious if anyone has tested LSI's CacheCade with unraid.  I'm not sure if CacheCade even works with JBOD or if it requires an array.

     

    Anyone using it?

     

    Thanks.

  14. I'm trying to get AD working with unraid and trying to get familiar with samba, but I think my inexperience with it is my problem.

     

    I am attempting to have unraid use my existing Windows Server 2008 R2 AD domain. I have unraid joined to the domain successfully. However, I am not able to successfully configure share permissions for AD user groups within unraid.

     

    I am looking for any help and direction someone can provide. Is this not an unraid issue but a Slackware-to-AD configuration question?
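
    In case it helps frame an answer, the share-level samba settings I believe I'm after would look roughly like this (domain and group names are made up):

    [secure]
        path = /mnt/user/secure
        valid users = @"MYDOMAIN\Domain Users"
        write list = @"MYDOMAIN\File Admins"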

     

    Thanks.

     

  15. Force a page reload in your browser - in most browsers you can hold down the left Shift key and reload the page, which will cause the browser to ignore the cache....

     

    Myk

     

    Thanks.  I actually tried loading it from another machine and the message is the same.  I suppose I need to try some way to restart the web server component?
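
    My guess at doing that would be to kill and relaunch emhttp, which the go script starts at boot - but I haven't tried it and don't know if it's safe to do mid-failure:

    killall emhttp
    /usr/local/sbin/emhttp &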

     

    My array is still completely offline.  HELP PLEASE!

  16. I rebooted, and the web interface and array are now offline. My syslog is attached.

     

    Any help would be appreciated. Thanks.

     

    If I may refer back to the problem I am having and posted about earlier: my array is still down, but I just noticed that the tower website keeps showing "System is rebooting..." even after being rebooted a few times.

     

    Any suggestions?

     

    Thanks.

  17. I'd hate to run a parity check on that thing.

     

    unraid wouldn't really be run on it. :-) One parity drive protecting 895 drives wouldn't be much protection.

     

    Each of those slots in the photo actually holds 14 drives. There would be 32-64 arrays in that beast.

  18. I have run a RAID5 of 4x 7200RPM 500GB Samsung 2.5" laptop drives as a cache drive on an Areca RAID card as an experiment. The array could get close to 500MB/s read/write.

     

    But with that cost ($800ish for a cache drive) and the added power consumption, you might as well buy a 256GB Corsair Performance Pro SSD, unless you are looking for a protected cache drive.

     

    (Striping opens up a new world of data-loss potential, BTW.)

     

    The application is a backup storage array, so losing the current day's image backups on the cache drive is not a worry.

     

    I'll need about 500GB of cache to hold a day's backups. I'll need to work the numbers again, but I think SSD will be more costly, though maybe not by much. Of course, SSD does have the added benefit of simplicity over striping spinners.