vinid223

Everything posted by vinid223

  1. I was not able to find a fix. It was not a big deal, so I just created a new VM. Ghost82's solution might be worth a try. Let me know if it works.
  2. Thanks, that did the trick.
  3. TL;DR: Uninstalled, but the redirect from the internal IP to xxxx.unraid.net is still enabled. How do I stop that redirect? I installed the plugin and did everything needed for the port forward and remote access. It seems to work, except that on the My Servers page in Unraid I cannot see remote access. I waited a full day for the "sync" just in case, but still nothing, so I am uninstalling the plugin for now. But now, each time I open my server by its internal network IP, it redirects me to the custom CNAME on the unraid.net domain. I want to remove that redirect, b
  4. @war4peace Good to know it is working out great
  5. I normally download directly to the cache, never to the array, unless I go over the cache size. I would only point you toward transcoding depending on your needs. I do have a lot of 4K REMUX content, so the bandwidth load on the network is heavy, which is why I transcode to lower that size. Thinking about it, I should really just encode it. Good luck with your project. Let me know how it goes.
  6. I am all in with kimifelipe. It is really important to preclear the disks. (DON'T DO IT ON THE EXISTING DRIVE IF YOU HAVEN'T MOVED THE DATA TO THE ARRAY.) When the cache is full, the write speed should drop to roughly the speed of the HDDs themselves. I am only suggesting bypassing the cache because I had issues in the past when doing a really large transfer with big files. You can enable the cache pool from the beginning; you can just skip it in the share settings. It really depends on what you intend to do. If you think you will fill the cache really often (other than
  7. Feel free to ask anything else. I will answer the best I can.
  8. I am not an expert, so this is from one enthusiast to another. Your process of creating the array and transferring your data will take a lot of time, but it will work. I would strongly suggest having 2 parity drives in the future. This will add more protection, but they will need to be 14TB as well. For your first 14TB copy, I would suggest not having parity enabled, for better performance. The cache could be useful, but you will fill it really fast, so I would also bypass the cache for the first data transfer (in the share, just disable the cache). It will be slower
  9. Let's just hope I don't have anyone who needs my Plex while my CPU can't transcode.
  10. Yeah same for me, can't reinstall the plugin
  11. The NVIDIA plugin broke after the installation of 6.9.1. I can't reinstall it either.
  12. I am not sure whether gsutil supports two-way sync. You can check the options here: https://cloud.google.com/storage/docs/gsutil/commands/rsync#options You can use the `-d` option to delete remote files when they are deleted locally, but be careful with that option: it deletes without confirmation, so be sure to set the remote bucket and local folder correctly so you don't lose anything. There might be a way to do it according to this Stack Overflow answer, https://stackoverflow.com/a/1602348/3900435, which implies running 2 instances of the image with the folders reversed (see the gsutil sketch after this list). If
  13. I wanted to move a VM to an unassigned device to reduce the space used and the I/O on the cache drives. The VM was booting fine prior to the move, but after moving the img and changing the config to point to the new location, the VM never boots and drops me straight into the interactive shell. Here is the XML of the VM: https://pastebin.com/16FnWW7j The BIOS seems to find the drive but is not detecting it properly. If I change the drive from virtio to sata, the BIOS seems to see that it is a drive, but it never boots from it (the general move procedure is sketched after this list). I am not sure wha
  14. Just upgraded without issue. Thank you for the amazing work.
  15. google-cloud-storage-backup This container allows you to back up your local files to a Google Cloud Storage bucket with simple configuration and flexibility. It works just like @joch's S3Backup container. You simply mount your local directories into the /data volume of the Docker image, then add the required variables to authenticate with GCloud as well as the bucket name to use. You can also configure custom options for your backups and a custom cron schedule to automatically back up your local files (a hedged run example follows this list). There is a complete example on how to use this im
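
For the gsutil question in item 12, here is a minimal sketch of the `-d` behaviour, assuming a local folder /data and a bucket named gs://my-backup-bucket (both placeholders, not taken from the original posts):

```sh
# Dry run: -n only reports what rsync would copy or delete, nothing is changed.
gsutil rsync -r -n -d /data gs://my-backup-bucket/data

# Real run: -r recurses into subfolders, -d deletes objects in the bucket
# that no longer exist locally. There is no confirmation prompt, so the
# dry run above is the safety net.
gsutil rsync -r -d /data gs://my-backup-bucket/data
```

Running the same command with the source and destination swapped (the gs:// URL first, the local folder second) is essentially the two-container idea from the Stack Overflow answer linked above.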
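For the vdisk move in item 13, this is a rough sketch of the steps involved (not a fix for the boot issue itself); the VM name Windows10 and all paths are placeholders:

```sh
# Shut the VM down cleanly before touching its disk image.
virsh shutdown Windows10

# Copy the vdisk from the cache pool to the unassigned device
# (both paths are illustrative).
rsync -av --progress /mnt/cache/domains/Windows10/vdisk1.img \
    /mnt/disks/unassigned_ssd/domains/Windows10/

# Point the VM at the new image: in the editor opened by `virsh edit`,
# the <disk> element's <source file="..."/> must reference the new path,
# while the <target bus="virtio"/> part should stay unchanged so the
# guest still sees the disk on the driver it was installed with.
virsh edit Windows10

virsh start Windows10
```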
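For the container in item 15, a hedged sketch of what a run command could look like. The environment variable names, paths, and image tag below are illustrative assumptions, not the container's documented interface; check the container's template or README for the real names:

```sh
# Illustrative only: variable names, paths and the image tag are placeholders.
docker run -d \
  --name gcs-backup \
  -v /mnt/user/documents:/data/documents:ro \
  -v /mnt/user/photos:/data/photos:ro \
  -v /mnt/user/appdata/gcs-backup/key.json:/key.json:ro \
  -e GCS_BUCKET="my-backup-bucket" \
  -e GCLOUD_KEYFILE="/key.json" \
  -e CRON_SCHEDULE="0 3 * * *" \
  your-repo/google-cloud-storage-backup
```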