b0m541

Everything posted by b0m541

  1. Same problem here since the Parity Check Tuning plugin got updated.
  2. @VRx Thank you so much for your effort in providing this. I had a lot of non-computer things going on and hadn't seen this for a long time. Sorry for that. I have now successfully integrated vchanger and I think your effort has paid off. I will monitor the ongoing daily use of vchanger and if it is stable I'll let you know in a while, so that it can be put into the frequently updated container. Thanks again and all the best
  3. I actually had to use even more options to get it to work with folders containing hundreds of files:

     vfs objects = catia fruit streams_xattr
     fruit:metadata = stream
     fruit:encoding = native
     readdir_attr:aapl_rsize = no
     readdir_attr:aapl_finder_info = no
     readdir_attr:aapl_max_access = no

     I haven't needed POSIX rename so far, but I haven't renamed any files yet. I also don't know how well this works with TimeMachine; I may test that at a later point in time. (A combined sketch of the whole [global] section is at the end of this page.)
  4. I have seen posts mentioning that there are plugins for Python 2 and Python 3 and a replacement for Nerd Pack called Nerd Tools. On my unRAID 6.10.3 with CA 2022.10.16 I cannot find any of these. Why is that? Is CA filtering out plugins that are not compatible with the running version of unRAID?
  5. It is an old Seagate Barracuda XT. I just saw that it seemed to be mounted, but it was not! Maybe it reported TRIM support while it was mounted, and when it "fell off" fstrim still thought it was there and tried to trim it? I have now rebooted and mounted that drive, and fstrim no longer reports this error.
  6. For a few days now I have been receiving the following by email: I have been using the SSD TRIM plugin for a long time, I did not upgrade unRAID in the last weeks, and I did not add or remove USB drives. Not sure why fstrim would pick a USB HDD for trimming; that makes no sense. I searched this topic and found 2 other people reporting it, but saw no solution or response. Maybe someone can chime in. I would be interested in a solution. (See the TRIM check commands sketched at the end of this page.)
  7. thanks, I knew that thread. AFAIK it does _not_ contain the crucial information to get folder listings to work again. Just saying. See my last post where I had it highlighted.
  8. Thank you so much. I had the fruit directives already, but the following made the difference (I can now list those folders with many files):

     readdir_attr:aapl_max_access = no
     readdir_attr:aapl_finder_info = no
     readdir_attr:aapl_rsize = no

     Do people report whether these also help with the TimeMachine problems?
  9. The link to the report thread is wrong; it links to this thread, not the report thread. Do you still have the correct link?
  10. #disable SMB1 for security reasons
      [global]
      min protocol = SMB2

      #unassigned_devices_start
      #Unassigned devices share includes
      include = /tmp/unassigned.devices/smb-settings.conf
      #unassigned_devices_end

      [global]
      vfs objects = catia fruit streams_xattr
      fruit:nfs_aces = no
      fruit:zero_file_id = yes
      fruit:metadata = stream
      fruit:encoding = native
      fruit:model = MacSamba
      fruit:veto_appledouble = no
      fruit:posix_rename = yes
      fruit:wipe_intentionally_left_blank_rfork = yes
      fruit:delete_empty_adfiles = yes
      spotlight backend = tracker

      [redacted]
      path = /mnt/user/redacted
      veto files = /._*/.DS_Store/
      delete veto files = yes
      spotlight = yes

      One of the folders with really many files is in the path that has spotlight enabled; another of these folders is not in that path. It makes no difference, I can't list either of them.
  11. That does not sound like you have a solution yet. Did you report the issues officially to limetech already?
  12. I have now been reading the macOS SMB problem reports and posts, and the newer Samba in 6.10.3 definitely has problems with macOS. I am using macOS Catalina 10.15.7. I have seen reports about slow SMB, reports about search not working, and reports about TimeMachine not continuing to work after the initial backup. I have all of these problems and more. I could solve the search problem with the hints found in the various posts on that problem. Still open issues for me:
      - TimeMachine
      - SMB is not performing well, although I applied all SMB tweaks to make it work better with macOS
      - when I try to list folders that contain literally hundreds of files and/or folders, it does not work
      The last problem is especially bad for me, as I cannot access my files in folders that contain hundreds of files and/or folders. In that case it takes a while until the result is just nothing, no matter whether I am using Finder or the shell. This problem does not occur if I mount the share on Linux as smb3. This is very likely a problem with the newer Samba in 6.10.3; I did not have these issues with the same macOS when unRAID was running 6.8.x. Does anyone else have this issue with many files/folders on unRAID 6.10.x over macOS SMB? Were you able to solve it? If not, did you report it as a bug already (I did not find a report)? Please put a link to your bug report in your response. Thanks for taking your time to consider my post.
  13. I am regularly seeing this from smbd in the syslog for 3 appdata folders:
      synthetic_pathref: opening [myappfolder] failed
      Any idea what this means and how to fix it?
  14. Regarding logging: I currently have the bacula.log on unRAID and I am getting the messages via email. This works fine. I also have a central syslog-ng (another container on unRAID) that all my services and devices log to over the network. That makes it much more comfortable to identify new problems and also helps with troubleshooting, because one can see a more complete picture of the situation at hand. I think it would be nice to also have the Bacula logs in the central syslog. How would you suggest achieving this with your container? Is there already something in there that provides this functionality, or would I need to install rsyslog via apt? (A rough rsyslog forwarding sketch is at the end of this page.)
  15. Regarding this: the configuration with selected FDs connecting to the Director is actually legitimate. The feature is new in 11 and just not very mature yet, and I have reported the problems elsewhere. It is quite possible that the feature is not in widespread use yet. For a better understanding: the FD-side connection scheduling doesn't work fully correctly, and the FDs start to poll quite aggressively. The Director should still be able to handle this, but there seems to be a bug that makes it crash in such a situation. These things are nothing you or I can do much about (well, I have disabled these FDs for now). It is possible that Bacula 13 doesn't have these problems any more; we will see.
  16. Firstly, let me clarify that I have no expectations whatsoever, as I know this is a voluntary effort. I am merely making suggestions; feel free to ignore them or not implement them. They are not meant to infuriate you, and I am sorry that this is how they came across. I can certainly stop making suggestions if it is stressful for you. About bsmtp: one may actually relax some spam settings in postfix (which I use) to get bsmtp to work. Depending on the setup, these settings might be more or less hidden, e.g. in the Synology DSM Mail Server. I first tried to install postfix in the container, but apt failed (honestly). When I executed apt today it worked, so I need to apologize; whatever went wrong that day on my end has nothing to do with you. I wouldn't have thought that you would take it personally when someone says "X doesn't work"; it certainly wasn't meant like that. So apologies again, and let me assure you, on my end there are no bad vibes, I am just trying to make suggestions. Have a nice weekend.
  17. My experience in the past few days, and suggestions for a better life:
      - btraceback does not work in the container; it might be useful for troubleshooting
      - bsmtp is pretty much useless in a standard environment, for two reasons: (1) the container hostname equals the container ID and cannot be set to an FQDN using "hostname", and normal mailers refuse to accept mail from non-FQDN hostnames (suggestion: allow proper use of the hostname command); (2) bsmtp is unable to authenticate the user (suggestion: install postfix as a client sender in the container, with a standard config forwarding to a local mail server; a sketch is at the end of this page)
      - nothing can be installed in the container because apt isn't there (suggestion: provide a working apt in the container)
  18. Thanks for this! I found the cause of the segmentation violation. I had a misbehaving FD that connected very often and in quick succession to the Director. The Director did not like this and crashed. Since I tamed this FD, the Director works normally again. I think this is a bug in bacula-dir, and probably not something you could have remediated with the Bacula version you are using. Hence, there is no need to check out the "special" container. I reported the problem and hope it will be fixed in forthcoming versions of Bacula.
  19. I just want to summarize my experiences with macOS Catalina and unRAID 6.10.3:
      - TimeMachine only does the initial full backup to unRAID shares marked for TimeMachine; later incremental backups do not work.
      - Access to shares is slower than with 6.9.x. I disabled and re-enabled macOS compatibility for SMB, but it did not help.
      - Installing the TimeMachine container does not solve any problems; the problems get worse: the initial full backup starts but does not finish, it just stops. Once it did finish, but subsequent incremental backups did not work.
      I am sure that re-installing macOS is not instrumental in solving this issue, due to the following observations:
      - people report that, without a macOS reinstall, TimeMachine resumes working after downgrading to 6.9.2
      - on the same Catalina, TimeMachine works fine with a Synology, but not with unRAID 6.10.3
      - people report that reinstalling macOS did not solve their problem
      Overall this is a highly unsatisfactory situation for me. I am just lucky to own a Synology too, so I do have backups. I wanted to have it all consolidated on unRAID, though.
  20. I did not find a solution, no clue why bacula-dir is crashing. I had to turn off the bacula-server container... I don't like living without current backups.
  21. Bacula hasn't written any logs for some weeks now. Not sure why. I suspected it had to do with permissions and removed the log folders for apache and bacula so they would be created anew, but no bacula logs were written, so there I am blind. Will try running with debug info. This is the result:

      root@4148d66cce87:/# bacula-dir: dird_conf.c:2480-0 runscript cmd=/opt/bacula/scripts/make_catalog_backup.pl MyCatalog type=|
      bacula-dir: dird_conf.c:2480-0 runscript cmd=/opt/bacula/scripts/delete_catalog_backup type=|
      bacula-dir: dird_conf.c:2480-0 runscript cmd=purge volume action=all allpools storage=unraid-tier1-storage type=@
      bacula-dir: dird_conf.c:2480-0 runscript cmd=prune expired volume yes type=@
      bacula-dir: dird_conf.c:2480-0 runscript cmd=truncate volume allpools storage=unraid-tier1-storage type=@
      bacula-dir: dird_conf.c:2480-0 runscript cmd=truncate volume allpools storage=unraid-tier2-storage type=@
      root@4148d66cce87:/# Bacula interrupted by signal 11: Segmentation violation
      Kaboom! bacula-dir, bacula-dir got signal 11 - Segmentation violation at 29-Jul-2022 22:08:08. Attempting traceback.
      Kaboom! exepath=/opt/bacula/bin/
      Calling: /opt/bacula/bin/btraceback /opt/bacula/bin/bacula-dir 240 /opt/bacula/working
      /bin/sh: 0: cannot open /opt/bacula/bin/btraceback: Permission denied
      The btraceback call returned 2
      LockDump: /opt/bacula/working/bacula.240.traceback

      Segmentation violation!?!? WT*?

      bacula.240.traceback:
      Attempt to dump locks
      threadid=0x14b4673f9700 max=1 current=-1
      threadid=0x14b467bfd700 max=1 current=-1
      threadid=0x14b4671f8700 max=1 current=-1
      threadid=0x14b4657eb700 max=1 current=-1
      threadid=0x14b465bed700 max=1 current=-1
      threadid=0x14b4679fc700 max=1 current=-1
      threadid=0x14b466bf5700 max=1 current=-1
      threadid=0x14b467dfe700 max=1 current=-1
      threadid=0x14b4665f2700 max=1 current=-1
      threadid=0x14b4677fb700 max=1 current=-1
      threadid=0x14b4669f4700 max=1 current=-1
      threadid=0x14b4661f0700 max=0 current=-1
      threadid=0x14b465fef700 max=1 current=-1
      threadid=0x14b4663f1700 max=1 current=-1
      threadid=0x14b467fff700 max=1 current=-1
      threadid=0x14b47429c700 max=0 current=-1
      threadid=0x14b47449d700 max=0 current=-1
      threadid=0x14b4746b3980 max=1 current=-1
      threadid=0x14b4746b2700 max=0 current=-1
      Attempt to dump current JCRs. njcrs=3
      threadid=0x14b4746b3980 JobId=0 JobStatus=R jcr=0x562ee58811f8 name=*JobMonitor*.2022-07-29_22.07.54_01
      use_count=1 killable=0 JobType=I JobLevel=
      sched_time=29-Jul-2022 22:07 start_time=29-Jul-2022 22:07 end_time=01-Jan-1970 01:00 wait_time=01-Jan-1970 01:00
      db=(nil) db_batch=(nil) batch_started=0
      wstore=0x562ee5848bd8 rstore=(nil) wjcr=(nil) client=0x562ee5841918 reschedule_count=0 SD_msg_chan_started=0
      threadid=0x14b4679fc700 JobId=0 JobStatus=R jcr=0x14b45800b098 name=-Console-.2022-07-29_22.08.08_06
      use_count=1 killable=0 JobType=U JobLevel=
      sched_time=29-Jul-2022 22:08 start_time=29-Jul-2022 22:08 end_time=01-Jan-1970 01:00 wait_time=01-Jan-1970 01:00
      db=(nil) db_batch=(nil) batch_started=0
      wstore=0x562ee5848bd8 rstore=(nil) wjcr=(nil) client=0x562ee5841918 reschedule_count=0 SD_msg_chan_started=0
      threadid=0x14b4673f9700 JobId=0 JobStatus=R jcr=0x14b44c00b328 name=-Console-.2022-07-29_22.08.08_11
      use_count=1 killable=0 JobType=U JobLevel=
      sched_time=29-Jul-2022 22:08 start_time=29-Jul-2022 22:08 end_time=01-Jan-1970 01:00 wait_time=01-Jan-1970 01:00
      db=(nil) db_batch=(nil) batch_started=0
      wstore=0x562ee5848bd8 rstore=(nil) wjcr=(nil) client=0x562ee5841918 reschedule_count=0 SD_msg_chan_started=0
      List plugins.
      Hook count=0

      I tried the b48 and the b49 images; both behave the same. I also removed the br0 network that was created for the TimeMachine container. I even rebooted. The Director is still crashing. (A note on the btraceback "Permission denied" is at the end of this page.)

      Regarding the FD connection direction you are mistaken: since version 11 there is a new way to make the FD connect to the Director, which is needed if the FD is behind a firewall and the Director cannot connect to it. I can elaborate if you like.
  22. Note: this post is a bit long; the main problem is that bacula-dir dies after a few minutes and I don't know why.

      As macOS TimeMachine stopped working properly after the unRAID upgrade to 6.10.3, I disabled TimeMachine in unRAID and installed the TimeMachine container. This container runs on a br0 network, and the TimeMachine service uses a different IP than the unRAID server, but in the same subnet. Since I have this container, Baculum has thrown an error in the UI that bconsole had problems connecting to the Director at localhost:9101. Consequently I replaced localhost in etc/bconsole.conf with the FQDN of the Director, which is a CNAME for the FQDN of the unRAID server:

      bconsole.conf:
      Director {
        Name = "bacula-dir"
        DIRport = 9101
        address = bacdir.foo.net
        Password = "redacted"
      }

      After this, Baculum first works for a while, but then it complains that bconsole fails to connect to the Director due to authentication problems:

      "Error code: 4 Message: Problem with connection to bconsole. Output=>Connecting to Director bacula-dir.lan.net:9101 Director authorization problem. Most likely the passwords do not agree. If you are using TLS, there may have been a certificate validation error during the TLS handshake. For help, please see http://www.bacula.org/rel-manual/en/problems/Bacula_Frequently_Asked_Que.html, Exitcode=>1"

      (Btw, the mentioned URL is dead; I didn't find an FAQ.)

      bacula-dir.conf:
      Director {
        Name = "bacula-dir"
        Messages = "Daemon"
        QueryFile = "/opt/bacula/scripts/query.sql"
        WorkingDirectory = "/opt/bacula/working"
        PidDirectory = "/opt/bacula/working"
        MaximumConcurrentJobs = 20
        Password = "redacted"
      }

      The redacted passwords for both Director resources are identical, so a password mismatch is not the problem here. As can be seen, there are no TLS directives, so even if TLS is used by default it would use the Password directives for the TLS-PSK (if I understood it correctly). There are no TLS certificates provided by me for Bacula. However, the unRAID server UI uses a certificate, and bacdir is not one of the names in the certificate's SAN. But AFAIK Bacula looks only at the CN, not at the names in the SAN, correct? This is the first time I am seeing such an error in Bacula, and I have a number of remote FDs that might possibly also connect using TLS (although there are no TLS directives in any config file). If this error were about a missing TLS certificate, why am I not seeing such an error for any of the remote FDs to which the Director connects? I do see that one FD, which is configured to connect to the Director on its own, is throwing connection errors, and indeed, the Director port refuses connections.

      This is the docker log of the bacula-server container:

      ==> Checking DB...
      Altering postgresql tables
      This script will update a Bacula PostgreSQL database from any version 12-16 or 1014-1021 to version 1022, which is needed to convert from any Bacula Enterprise version 4.0.x, 6.x.y, 8.x.y, 10.x.y to version 12.4.x
      The existing database is version 1022 !!
      This script can only update an existing version 12-16, 1014-1021 database to version 1022.
      Error. Cannot update this database.
      ==> Starting...
      ==> .......Storage Daemon...
      Starting the Bacula Storage daemon
      ==> .......File Daemon...
      Starting the Bacula File daemon
      ==> .......Bacula Director...
      Starting the Bacula Director daemon
      ==> .......Bacula Web...
      2022-07-29 10:58:01,498 INFO Set uid to user 0 succeeded
      2022-07-29 10:58:01,506 INFO supervisord started with pid 1
      2022-07-29 10:58:02,512 INFO spawned: 'baculum' with pid 200
      2022-07-29 10:58:03,516 INFO success: baculum entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
      2022-07-29 10:59:00,580 INFO reaped unknown pid 181 (exit status 11)
      2022-07-29 11:02:18,810 INFO reaped unknown pid 3237 (exit status 0)
      2022-07-29 11:02:19,812 INFO reaped unknown pid 3238 (exit status 0)
      2022-07-29 11:03:13,886 INFO reaped unknown pid 4003 (exit status 0)
      2022-07-29 11:03:29,906 INFO reaped unknown pid 4004 (exit status 0)
      2022-07-29 11:03:32,915 INFO reaped unknown pid 190 (exit status 11)

      When I look inside the container after a restart, bacula-dir is there and Baculum works for about 6 minutes. Then bacula-dir dies and Baculum stops working (as bconsole cannot connect to the Director any more). Any ideas what is going wrong here? Could this have anything to do with the br0 network for the TimeMachine container, or is it something caused by the last container upgrade? (A way to test bconsole by hand inside the container is sketched at the end of this page.)
  23. Yes. And now two of these:
      vfs_default_durable_reconnect (mbp15r (1032).backupbundle/bands/28c): stat_ex.st_ex_blocks differs: cookie:65408 != stat:38216, denying durable reconnect
      and more of the former. It seems the backup stopped and then started anew while I was AFK. It is now in the preparation phase. Doesn't look too good to me.
  24. I see. Have you seen these in your logs before? I see like 50-60 of them currently:
      scavenger_timer: Failed to cleanup share modes and byte range locks for file 56:18303754804867892465:0 open 2233142994
  25. INFO: running test for xattr support on your time machine persistent storage location...
      INFO: xattr test successful - your persistent data store supports xattrs
      INFO: entrypoint complete; executing 's6-svscan /etc/s6'
      nmbd version 4.15.7 started.
      Copyright Andrew Tridgell and the Samba Team 1992-2021
      smbd version 4.15.7 started.
      Copyright Andrew Tridgell and the Samba Team 1992-2021
      INFO: Profiling support unavailable in this build.
      Failed to fetch record!
      ***** Samba name server TIMEMACHINE is now a local master browser for workgroup WORKGROUP on subnet REDACTED

      Thank you, I set it up and Samba is running, although some things in the logs above look weird. What does this mean: "Failed to fetch record!"?

      I can see in your writeup that you put in a full IP address for the container instance, not just a subnet address, and so did I. In the log it says it is now running "on subnet ..." and there is the IP of the TimeMachine instance, not the subnet address (i.e. the host octet is not all zero). Is that a problem, or is the text "subnet" just plain wrong and should say "IP address"?

      Then, after configuring the backup under macOS and starting it, during the preparation phase the TimeMachine container log says:

      error in mds_init_ctx for: /opt/mymac
      _mdssvc_open: Couldn't create policy handle for mymac

      Is that a problem? Any ideas how to fix it?

      Current status: the TimeMachine full backup is running on the Mac and it will take many hours to complete. So if all of this currently works, I will be able to say tomorrow, when the full backup has hopefully completed, whether incremental backups work.
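
Re posts 3, 8 and 10 (the macOS folder-listing problem): for reference, here is a combined sketch of the [global] directives from those posts, as they would go into /boot/config/smb-extra.conf (the file unRAID uses for extra Samba settings). This is only my own settings pulled together in one place, not a drop-in config; adjust and test for your setup.

    [global]
       vfs objects = catia fruit streams_xattr
       fruit:metadata = stream
       fruit:encoding = native
       fruit:nfs_aces = no
       fruit:zero_file_id = yes
       fruit:model = MacSamba
       fruit:veto_appledouble = no
       fruit:posix_rename = yes
       fruit:wipe_intentionally_left_blank_rfork = yes
       fruit:delete_empty_adfiles = yes
       # the three directives that fixed listing folders with hundreds of entries
       readdir_attr:aapl_rsize = no
       readdir_attr:aapl_finder_info = no
       readdir_attr:aapl_max_access = no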
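
Re posts 5 and 6 (fstrim touching a USB HDD): a quick way to check which block devices actually report discard/TRIM support before fstrim runs. The device and mount point below are only examples.

    # show discard capabilities for all block devices
    # (DISC-GRAN/DISC-MAX of 0 means the device reports no TRIM support)
    lsblk --discard

    # trim a single mounted filesystem by hand and show how much was discarded
    fstrim -v /mnt/disks/example-ssd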
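
Re post 14 (getting the Bacula log into the central syslog-ng): one way that should work without touching Bacula itself is rsyslog's imfile module inside the container. This is an untested sketch; the log path, tag and syslog-ng hostname are assumptions that need adjusting, and rsyslog would first have to be installed via apt.

    # /etc/rsyslog.d/30-bacula.conf (sketch)
    module(load="imfile")

    # tail the Bacula log file and turn each line into a syslog message
    input(type="imfile"
          File="/opt/bacula/log/bacula.log"
          Tag="bacula-dir:"
          Severity="info"
          Facility="local6")

    # forward the tagged messages to the central syslog-ng over UDP
    if $syslogtag == 'bacula-dir:' then {
        action(type="omfwd" Target="syslog.lan.example" Port="514" Protocol="udp")
        stop
    }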
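
Re post 17, point (2) (bsmtp cannot authenticate): what I mean by "postfix as a client sender" is essentially a null client that only relays to the real mail server, which then handles authentication and delivery. A minimal sketch of the relevant /etc/postfix/main.cf lines; the hostnames are placeholders, and bsmtp would then simply be pointed at localhost with -h.

    # /etc/postfix/main.cf (null-client sketch)
    myhostname = bacula.lan.example          # a resolvable FQDN instead of the container ID
    myorigin = $myhostname
    relayhost = [mail.lan.example]:25        # the real mail server does authentication and delivery
    inet_interfaces = loopback-only          # only accept mail from inside the container
    mydestination =                          # deliver nothing locally, relay everything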
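
Re post 21 (the btraceback "Permission denied"): the traceback failed because /opt/bacula/bin/btraceback apparently lacks read/execute permission in the container, and btraceback also needs a debugger to produce a useful stack trace. Something to try before the next crash, assuming the container name and that apt works in the image (as it did for me later, see post 16):

    docker exec -it bacula-server /bin/bash
    ls -l /opt/bacula/bin/btraceback           # check the permissions
    chmod +rx /opt/bacula/bin/btraceback
    apt-get update && apt-get install -y gdb   # btraceback relies on gdb for the stack trace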
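
Re post 22 (the Director authorization error): to take Baculum out of the equation, bconsole can be run by hand inside the container with the same configuration file. Something like the following; the container name is an example, and the paths follow the /opt/bacula layout visible in the logs above, so verify them first.

    # open a shell in the bacula-server container
    docker exec -it bacula-server /bin/bash

    # ask the Director for its status using the same bconsole.conf that Baculum uses
    echo "status dir" | /opt/bacula/bin/bconsole -c /opt/bacula/etc/bconsole.conf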