Everything posted by vinid223

  1. I got Time Machine working again after @dlandon's recommendation to copy the smb-extra.conf file to /boot/config. I set the following content (a rough sketch of applying the change from the console is included after this list):

         # global parameters are defined in /etc/samba/smb.conf
         # current per-share Unraid OS defaults
         vfs objects = catia fruit streams_xattr
         #fruit:resource = file
         fruit:metadata = stream
         #fruit:locking = none
         #fruit:encoding = private
         fruit:encoding = native
         #fruit:veto_appledouble = yes
         #fruit:posix_rename = yes
         #readdir_attr:aapl_rsize = yes
         #readdir_attr:aapl_finder_info = yes
         #readdir_attr:aapl_max_access = yes
         #fruit:wipe_intentionally_left_blank_rfork = no
         #fruit:delete_empty_adfiles = no
         #fruit:zero_file_id = no
         # these are added automatically if TimeMachine enabled for a share:
         #fruit:time machine
         #fruit:time machine max size = SIZE

     Restarting Samba did not fix the transfer problem, but rebooting the server fixed the Time Machine sync issue. I still have a flood of logs for this error:

         Oct 13 19:03:02 Andromeda smbd[21319]: [2022/10/13 19:03:02.264730, 0] ../../source3/smbd/files.c:1193(synthetic_pathref)
         Oct 13 19:03:02 Andromeda smbd[21319]: synthetic_pathref: opening [Vincent’s MacBook Pro.sparsebundle/bands/759:AFP_AfpInfo] failed
         Oct 13 19:03:02 Andromeda smbd[21319]: [2022/10/13 19:03:02.272952, 0] ../../source3/smbd/files.c:1193(synthetic_pathref)
         Oct 13 19:03:02 Andromeda smbd[21319]: synthetic_pathref: opening [Vincent’s MacBook Pro.sparsebundle/bands/759:AFP_AfpInfo] failed
         Oct 13 19:03:02 Andromeda smbd[21319]: [2022/10/13 19:03:02.302927, 0] ../../source3/smbd/files.c:1193(synthetic_pathref)
         Oct 13 19:03:02 Andromeda smbd[21319]: synthetic_pathref: opening [Vincent’s MacBook Pro.sparsebundle/bands/1a1c:AFP_AfpInfo] failed
         Oct 13 19:03:02 Andromeda smbd[21319]: [2022/10/13 19:03:02.324951, 0] ../../source3/smbd/files.c:1193(synthetic_pathref)
         Oct 13 19:03:02 Andromeda smbd[21319]: synthetic_pathref: opening [Vincent’s MacBook Pro.sparsebundle/bands/28a:AFP_AfpInfo] failed
         Oct 13 19:03:02 Andromeda smbd[21319]: [2022/10/13 19:03:02.343093, 0] ../../source3/smbd/files.c:1193(synthetic_pathref)
         Oct 13 19:03:02 Andromeda smbd[21319]: synthetic_pathref: opening [Vincent’s MacBook Pro.sparsebundle/bands/6fa:AFP_AfpInfo] failed
         Oct 13 19:03:02 Andromeda smbd[21319]: [2022/10/13 19:03:02.350699, 0] ../../source3/smbd/files.c:1193(synthetic_pathref)
         Oct 13 19:03:02 Andromeda smbd[21319]: synthetic_pathref: opening [Vincent’s MacBook Pro.sparsebundle/bands/6fa:AFP_AfpInfo] failed

     andromeda-diagnostics-20221013-1857.zip
  2. I have the same issue; the logs are flooded. The error started after upgrading from 6.10.3 to 6.11.1.
     andromeda-diagnostics-20221013-1857.zip
  3. Hey there. I'm having the same issue and the logs are full now. Has anyone found a solution to this? I upgraded from 6.10.3 to 6.11.1, the error started, and I can no longer back up using Time Machine.
  4. I was not able to find a fix. It was not a big deal, so I just created a new VM. Ghost82's solution might be worth a try. Let me know if it works.
  5. TL;DR: I uninstalled the plugin, but the redirect from the internal IP to xxxx.unraid.net is still enabled. How do I stop that redirect?

     I installed the plugin and did everything for the port forward and remote access. It seems to work, except for the My Servers page in Unraid: I cannot see remote access. I waited a full day for the "sync" just in case, but still nothing, so I am uninstalling the plugin for now. But now, each time I open my server by its internal network IP, it redirects me to the custom CNAME on the unraid.net domain. I want to remove that redirect, but I can't see where. Any clue?

     I also have that funny glitch in the top right: instead of my user name I see this.

     Another problem: in the plugin settings, when I click sign out, the pop-up opens and loads to infinity and beyond. I can't sign out.
  6. @war4peace Good to know it is working out great!
  7. I normally download directly to the cache, never to the array, unless I exceed the cache size. I brought up transcoding only because it depends on your needs: I have a lot of 4K REMUX content, so the bandwidth on the network is heavy, and I transcode to lower that size. Thinking about it, I should probably just re-encode it. Good luck with your project. Let me know how it goes.
  8. I am all in with kimifelipe. It is really important to preclear the disks. (DON'T DO IT ON THE EXISTING DRIVE IF YOU HAVEN'T MOVED THE DATA TO THE ARRAY.)

     When the cache is full, the write speed should drop to the speed of writing directly to the HDDs. I am only suggesting bypassing the cache because I had issues in the past when doing a really large transfer with big files.

     You can enable the cache pool at the beginning and just skip it in the share settings. It really depends on what you intend to do. If you think you will fill the cache really often (other than the initial transfer), then yes, this could be something to do, but if you don't really fill it up and only use it for fast appdata access, there is not much use for it. You are better off putting that money toward a GPU for Plex transcoding or more drives.

     The Plex data, torrents, and other apps you install go to the cache by default. It is not mandatory, but it is the default. If you really, really, really want the NVMe for Plex, you can create a second cache pool with both NVMe drives for Plex. But don't use just one: if the NVMe fails, you lose all the Plex data.

     No problem, I needed help in the beginning, so I am glad to give help in return.
  9. Feel free to ask anything else. I will answer the best I can.
  10. I am not an expert, so this is from one enthusiast to another.

      Your process of creating the array and transferring your data will take a lot of time, but it will work. I would strongly suggest having 2 parity drives in the future. This adds more protection, but the second parity drive will also need to be 14TB. For your first 14TB copy, I would suggest not having parity enabled, for better performance.

      The cache could be useful, but you will fill it really fast, so I would also bypass the cache for the first data transfer (in the share settings, just disable the cache). It will be slower at the beginning for sure, but I had an issue in the past when transferring a lot of data and filling the cache: some file transfers failed because of it.

      When creating the cache pool, be sure to select a RAID 1 cache pool. This way all your appdata, including the Plex data, is duplicated between the cache drives.

      You can use the NVMe drive for Plex (specify a location other than the default appdata), but the performance boost from NVMe vs. SSD is not that big for Plex, in my opinion, unless you have 100 simultaneous users. Also, the Plex data will not be duplicated or backed up if it lives on the NVMe drive.

      I would not use the other NVMe drive for Unraid itself; I am not sure you can even activate the license using it. A modern USB drive should do the job. If you use a VM for daily work, I would suggest passing the NVMe drive through to it. It will be a lot faster for that VM, but you won't be able to use the drive for anything else.

      The GPU is pretty much useless for Plex transcoding; it is not fast enough to transcode anything other than audio, so you would not need to configure Plex for it. But it could be used to run a VM with GPU passthrough.

      I hope this answers your questions, and I just want to remind you that I am not an expert.
  11. Let's just hope I don't have anyone who needs my Plex while my CPU can't transcode.
  12. Yeah same for me, can't reinstall the plugin
  13. The NVIDIA plugin broke after the installation of 6.9.1. I can't reinstall it either.
  14. I am not aware whether gsutil supports two-way sync. You can check the options here: https://cloud.google.com/storage/docs/gsutil/commands/rsync#options

      You can use the `-d` option to delete remote files when they are deleted locally. But be careful with that option: it deletes without confirmation, so be sure to set the remote bucket and local folder correctly so you don't lose anything. (A hedged example of the command is sketched after this list.)

      There might be a way to do it according to this Stack Overflow answer, https://stackoverflow.com/a/1602348/3900435, which implies running two instances of the image with the folders reversed. If you try this, please do it with an empty folder and an empty bucket to test it out. Let me know your results; I might add an option to do this with a single instance of the image.
  15. I wanted to move a VM to an unassigned device to reduce space usage and I/O on the cache drives. The VM booted fine before the move, but after moving the img and changing the config to point to the new location, the VM never boots and drops me straight into the interactive shell. Here is the XML of the VM: https://pastebin.com/16FnWW7j

      The BIOS seems to find the drive but does not detect it properly. If I change the drive from virtio to SATA, the BIOS seems to see that it is a drive, but it never boots from it. (A few generic checks are sketched after this list.)

      I am not sure what to do, and I don't want to create a new VM from scratch and reconfigure everything. Thanks.
  16. Just upgraded without issue. Thank you for the amazing work.
  17. google-cloud-storage-backup

      This container allows you to back up your local files to a Google Cloud Storage bucket with simple configuration and flexibility. It works just like @joch's S3Backup container. You simply mount your local directories into the /data volume of the Docker image, then add the required variables to authenticate to GCloud as well as the bucket name to use. You can also configure custom options for your backups and a custom cron schedule to automatically back up your local files. There is a complete example of how to use this image on the GitHub page as well as the Docker Hub page. (A rough docker run sketch is also included after this list.)

      Github: https://github.com/vinid223/GcloudStorage-docker
      Docker Hub: https://hub.docker.com/r/vinid223/gcloud-storage-backup
      Changelog: see the CHANGELOG file in the GitHub repository
      Unraid template: https://github.com/vinid223/unraid-docker-templates

      If you have any questions, bugs, or requests, you can ask here.

      Thanks,
      vinid223

      Edits:
      2020-11-23 Specify Support only
      2020-11-22 Errors in text
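
Sketch for post 1 (applying the smb-extra.conf change): a minimal sketch of how the file edit and Samba restart can be done from the Unraid console, assuming Unraid reads /boot/config/smb-extra.conf at Samba startup and ships a Slackware-style init script at /etc/rc.d/rc.samba. In my case a full reboot was still needed before Time Machine would sync again.

    # Assumptions: /boot/config/smb-extra.conf is picked up when Samba starts,
    # and the init script lives at /etc/rc.d/rc.samba.
    nano /boot/config/smb-extra.conf   # paste the vfs objects / fruit settings from post 1
    /etc/rc.d/rc.samba restart         # restart Samba with the new per-share defaults
    # If Time Machine still cannot write to the share after the restart,
    # a full reboot of the server may be what actually clears it up.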
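Sketch for post 14 (gsutil rsync with -d): an illustration of the one-way mirror described above. The bucket name and local path are placeholders; run it first against an empty test bucket and folder, because -d deletes destination objects without asking.

    # Mirror /data to the bucket; -r recurses, -d deletes remote objects that
    # no longer exist locally (no confirmation prompt), -m parallelizes.
    gsutil -m rsync -r -d /data gs://my-backup-bucket

    # The two-instance idea from the Stack Overflow answer amounts to running
    # a second sync in the opposite direction (remote -> local):
    gsutil -m rsync -r gs://my-backup-bucket /data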
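Sketch for post 15 (checking the moved vdisk): a few generic libvirt/QEMU checks, not an Unraid-specific fix. The VM name and image path are placeholders for the ones in the pastebin XML.

    # Placeholders: "MyVM" and the image path should match your own XML.
    virsh domblklist MyVM                          # confirm which file the VM actually points at
    qemu-img info /mnt/disks/vmstore/vdisk1.img    # verify the moved image is intact and note its format
    qemu-img check /mnt/disks/vmstore/vdisk1.img   # qcow2 images only: look for corruption after the copy
    # If the reported format (raw vs. qcow2) does not match the driver type
    # declared in the XML, the firmware sees no bootable data, which can
    # produce exactly this drop into the interactive shell.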
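Sketch for post 17 (running the container): roughly what a docker run could look like. The environment variable names (GCS_BUCKET, GCLOUD_KEYFILE_PATH, CRON_SCHEDULE) and the /config volume are illustrative placeholders only, not necessarily the container's real options; the GitHub README has the actual example.

    # Placeholder variable names and paths for illustration only -- check the
    # GitHub README for the container's real configuration options.
    docker run -d \
      --name gcloud-backup \
      -v /mnt/user/documents:/data/documents:ro \
      -v /mnt/user/appdata/gcloud-backup:/config \
      -e GCS_BUCKET="my-backup-bucket" \
      -e GCLOUD_KEYFILE_PATH="/config/key.json" \
      -e CRON_SCHEDULE="0 3 * * *" \
      vinid223/gcloud-storage-backup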