vinid223

  1. I made Time Machine work again after @dlandon's recommendation to copy the smb-extra.conf file to /boot/config. I set the following content:

         # global parameters are defined in /etc/samba/smb.conf
         # current per-share Unraid OS defaults
         vfs objects = catia fruit streams_xattr
         #fruit:resource = file
         fruit:metadata = stream
         #fruit:locking = none
         #fruit:encoding = private
         fruit:encoding = native
         #fruit:veto_appledouble = yes
         #fruit:posix_rename = yes
         #readdir_attr:aapl_rsize = yes
         #readdir_attr:aapl_finder_info = yes
         #readdir_attr:aapl_max_access = yes
         #fruit:wipe_intentionally_left_blank_rfork = no
         #fruit:delete_empty_adfiles = no
         #fruit:zero_file_id = no
         # these are added automatically if TimeMachine enabled for a share:
         #fruit:time machine
         #fruit:time machine max size = SIZE

     Restarting Samba did not fix the transfer problem, but rebooting the server fixed the Time Machine sync issue. I still get a flood of log entries like these:

         Oct 13 19:03:02 Andromeda smbd[21319]: [2022/10/13 19:03:02.264730, 0] ../../source3/smbd/files.c:1193(synthetic_pathref)
         Oct 13 19:03:02 Andromeda smbd[21319]: synthetic_pathref: opening [Vincent’s MacBook Pro.sparsebundle/bands/759:AFP_AfpInfo] failed
         Oct 13 19:03:02 Andromeda smbd[21319]: [2022/10/13 19:03:02.302927, 0] ../../source3/smbd/files.c:1193(synthetic_pathref)
         Oct 13 19:03:02 Andromeda smbd[21319]: synthetic_pathref: opening [Vincent’s MacBook Pro.sparsebundle/bands/1a1c:AFP_AfpInfo] failed
         Oct 13 19:03:02 Andromeda smbd[21319]: [2022/10/13 19:03:02.324951, 0] ../../source3/smbd/files.c:1193(synthetic_pathref)
         Oct 13 19:03:02 Andromeda smbd[21319]: synthetic_pathref: opening [Vincent’s MacBook Pro.sparsebundle/bands/28a:AFP_AfpInfo] failed
         Oct 13 19:03:02 Andromeda smbd[21319]: [2022/10/13 19:03:02.343093, 0] ../../source3/smbd/files.c:1193(synthetic_pathref)
         Oct 13 19:03:02 Andromeda smbd[21319]: synthetic_pathref: opening [Vincent’s MacBook Pro.sparsebundle/bands/6fa:AFP_AfpInfo] failed

     andromeda-diagnostics-20221013-1857.zip
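The persistence step in the fix above amounts to copying the override file onto the flash drive so it survives a reboot. A minimal sketch, using placeholder paths under /tmp so it can be dry-run anywhere; on a live Unraid box the source would be /etc/samba/smb-extra.conf, the destination /boot/config/smb-extra.conf, followed by a Samba restart (or a full reboot, as the post found necessary):

```shell
# Sketch with placeholder paths. On a real server:
#   SRC=/etc/samba/smb-extra.conf  DST=/boot/config/smb-extra.conf
# then restart Samba so the overrides take effect.
SRC=/tmp/smb-extra.conf
DST=/tmp/boot-config/smb-extra.conf

# a trimmed version of the vfs_fruit overrides from the post
printf '%s\n' \
  'vfs objects = catia fruit streams_xattr' \
  'fruit:metadata = stream' \
  'fruit:encoding = native' > "$SRC"

mkdir -p "$(dirname "$DST")"   # /boot/config already exists on a real box
cp "$SRC" "$DST"
grep -q 'fruit:encoding = native' "$DST" && echo "overrides persisted"
```

The copy to flash matters because Unraid rebuilds /etc on every boot, so edits made only under /etc/samba are lost at reboot.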
  2. I have the same issue; the logs are flooded. The error started after upgrading from 6.10.3 to 6.11.1. andromeda-diagnostics-20221013-1857.zip
  3. Hey there. I'm having the same issue and my logs are full now. Has anyone found a solution? I upgraded from 6.10.3 to 6.11.1, the error started, and I can no longer back up using Time Machine.
  4. I was not able to find a fix. It was not a big deal, so I just created a new VM. Ghost82's solution might be worth a try; let me know if it works.
  5. TL;DR: I uninstalled the plugin, but the redirect from my internal IP to xxxx.unraid.net is still enabled. How do I stop that redirect?

     I installed the plugin and did everything for the port forwarding and remote access. It seems to work, except for the My Servers page in Unraid: I cannot see remote access. I waited a full day for the "sync" just in case, but still nothing, so I am uninstalling the plugin for now.

     But now, each time I open my server via its internal network IP, it redirects me to the custom CNAME on the unraid.net domain. I want to remove that redirect, but I can't see where. Any clue?

     I also have a funny glitch at the top right: instead of my user name, I see this. Another problem: in the plugin settings, when I click sign out, the pop-up opens and loads to infinity and beyond. I can't sign out.
  6. @war4peace Good to know it is working out great
  7. I normally download directly to the cache, never to the array, unless I exceed the cache size. I only brought up transcoding because it depends on your needs: I have a lot of 4K remux content, so the bandwidth load on the network is heavy, and I transcode to lower that size. Thinking about it, I should just re-encode it. Good luck with your project; let me know how it goes.
  8. I am fully with kimifelipe on this. It is really important to preclear the disks (DON'T DO IT ON THE EXISTING DRIVE IF YOU HAVEN'T MOVED THE DATA TO THE ARRAY).

     When the cache is full, the write speed should be about the same as the HDDs' native speed. I am only suggesting bypassing the cache because I had issues in the past when doing a really large transfer with big files. You can still enable the cache pool at the beginning; just skip it in the share settings.

     It really depends on what you intend to do. If you think you will fill the cache really often (beyond the initial transfer), then yes, this could be worth doing, but if you don't really fill it up and use it only for fast appdata access, there is not much use for it. You would be better off putting that money toward a GPU for Plex transcoding, or more drives.

     The Plex data, torrents, and other apps you install go to the cache by default. It is not mandatory, but it is the default. If you really, really, really want the NVMe for Plex, you can create a second cache pool with both NVMe drives for Plex. But don't use just one: if a lone NVMe fails, you lose all the Plex data.

     No problem, I needed help in the beginning, so I am glad to give help back in return.
  9. Feel free to ask anything else. I will answer the best I can.
  10. I am not an expert, so this is from one enthusiast to another.

     Your process of creating the array and transferring your data will take a lot of time, but it will work. I would strongly suggest having 2 parity drives in the future; this adds more protection, but the second parity drive will need to be 14TB as well. For your first 14TB copy, I would suggest not having parity enabled, for better performance.

     The cache could be useful, but you will fill it really fast, so I would also bypass the cache for the first data transfer (in the share settings, just disable the cache). It will be slower at the beginning for sure, but I had an issue in the past when transferring a lot of data and filling the cache: some file transfers failed because of it.

     When creating the cache pool, be sure to select a RAID 1 cache pool. That way all your appdata, including the Plex data, is duplicated between the cache drives.

     You can use the NVMe drive for Plex (specify a location other than the default appdata), but the performance boost of NVMe over SSD is not that big for Plex, in my opinion, unless you have 100 simultaneous users. Also, the Plex data will not be duplicated or backed up if it lives on the NVMe drive.

     I would not use the other NVMe drive for Unraid itself; I am not sure you can even boot from it. A modern USB drive should do the job. If you use a VM for daily work, I would suggest passing the NVMe drive through to that VM. It will be a lot faster for the VM, but you won't be able to use the drive for anything else.

     The GPU is pretty much useless for Plex transcoding; it is not fast enough to transcode anything other than audio, so you would not need to configure Plex for it. But it could be used to run a VM with GPU passthrough.

     I hope this answers your questions, and I just want to remind you that I am not an expert.