JohnGAG

Members
  • Posts: 15
  • Joined
  • Last visited

About JohnGAG

  • Birthday: April 22
  • Gender: Male
  • Location: Herts, UK

JohnGAG's Achievements

Noob (1/14)

Reputation: 0

  1. Yes, point 7 was where I discovered that XFS was recommended. I tried starting and stopping as per the other lines, but "file system type" remained greyed out. I'm not claiming expertise here, or criticising... but when searching forums to see whether someone has had a similar experience, I've sometimes learnt valuable detail from the experiences of the less knowledgeable (which is also why I warned that I might not be right).
  2. Thanks Jorge! I appear to have backed up, reformatted and restored OK. It's now running without errors showing in the log.
     I struggled a little over the BTRFS reformatting, and I'm not sure if I did it the right / best way. Although it was a BTRFS "single" pool, I couldn't find a GUI option to reformat it (to either BTRFS or XFS), which I had expected to find from the FAQ text. I also couldn't find the right command from the terminal (within Unraid) - this might be my insufficient command-line knowledge. Eventually I shut the system down, restarted it in safe mode (directly on the server) and "reformatted" there. I'm not sure if I truly reformatted it, but it was sufficient to persuade Unraid that the cache pool was unformatted, which then permitted formatting within the Unraid GUI. I tried to set it to XFS (which the FAQ suggested for single cache devices), but it nonetheless restarted as "auto" and changed "auto"matically to BTRFS. (A rough sketch of the command-line route I was looking for is after this list.)
  3. I'd like advice on how best to approach recovery. The cache shows BTRFS corruption:

       BTRFS error (device nvme0n1p1: state EA): parent transid verify failed on logical 119125622784 mirror 1 wanted 4234875 found 4233357
       ...
       BTRFS critical (device nvme0n1p1): corrupt leaf: root=7 block=118272999424 slot=2, bad key order, prev (18446744073709551606 128 99098980352) current (18413109569270616310 255 99098984448)
       ...
       BTRFS: error (device nvme0n1p1) in btrfs_commit_transaction:2494: errno=-5 IO failure (Error while writing out transaction)
       Jan 2 09:18:30 Tower kernel: BTRFS info (device nvme0n1p1: state E): forced readonly

     I ran Check Filesystem Status with --report, and started to run --repair, but aborted when I saw the suggestion that it should only be done "under advice". Diagnostics attached (NB I edited out some file copy errors which showed content).

     Possibly connected: this may have been caused by a UPS failure (late on Christmas day). I have noticed apps (like Plex) failing as permissions changed to read-only (I can now see this in the logs); I initially recovered by variously re-installing Plex, restarting the server, and running scrub (fix corruptions).

     Data on the cache: I have Appdata Backup installed; the cache has (had?) appdata and other directories on it, plus some files from a backup process - these were reported as having failed. I've edited the filenames out (they looked like text content but were from a Macintosh backup app called Carbon Copy Cloner). The "appdata" share with all the docker configs is no longer visible.

     A parity check has been running since the reboot (no obvious issues yet - but I see syslog notes an XFS corruption):

       XFS (md2p1): Metadata corruption detected at xfs_dinode_verify+0xa0/0x732 [xfs], inode 0xc0004408 dinode
       Jan 1 23:01:34 Tower kernel: XFS (md2p1): Unmount and run xfs_repair

     Should I run "BTRFS --repair" on the cache? Should I "unmount and run xfs_repair"? Or are there other checks or steps recommended? (Some read-only checks I'm considering are sketched after this list.) Thanks, John

     tower-diagnostics-20240102-1303.zip
  4. And 6: music works too (PS it streamed OK in macOS Safari).
  5. I deleted the "duplicate mDNS" - there might have been a marginal, but not material, improvement. I ended up purchasing the app "auto mounter" from the App Store - it works... (I understand there are alternatives.)
  6. Having dug deeper into SOLR and its schemas, it looks more powerful than I need - and it requires much more work to set up. Yes, Copernic has proved to be an absolute godsend a few times; I'd even pay good money for a copy to run on Unraid.
  7. I have this question too... I've looked at SOLR and ElasticSearch, so yes, I too would like to hear of an alternate / simpler solution. In the meanwhile, my current approach follows (I welcome suggestions, or others to join me on this journey).

     I think I got closer with SOLR, and in any case I found the ElasticSearch site unclear about whether I would need to buy a licence to run ElasticSearch on a server. So currently I'm pursuing SOLR. The current issue is:

       solr 19:28:30.53 INFO ==> ** Starting solr setup **
       solr 19:28:30.55 INFO ==> Validating settings in SOLR_* env vars...
       solr 19:28:30.55 INFO ==> Initializing Solr ...
       realpath: /bitnami/solr/data: No such file or directory
       solr 19:28:30.56 INFO ==> Configuring file permissions for Solr
       mkdir: cannot create directory '/bitnami/solr': Permission denied

     Searching through the various links, I found https://hub.docker.com/r/bitnami/solr/

     TL;DR: I think I need to either tinker with the docker image https://github.com/bitnami/containers/blob/main/bitnami/solr/docker-compose.yml or find out how to "mount a volume in the desired location and setting the environment variable with the customized value (as it is pointed above, the default value is data_driven_schema_configs)". So I'm going to investigate "data_driven_schema_configs", as I think this would persist even if the container were modified by the maintainer. (One possible fix for the permission error is sketched after this list.)
  8. The syslog shows:

       Feb 15 11:25:03 Tower avahi-daemon[326]: *** WARNING: Detected another IPv4 mDNS stack running on this host. This makes mDNS unreliable and is thus not recommended. ***
       Feb 15 11:25:03 Tower avahi-daemon[326]: *** WARNING: Detected another IPv6 mDNS stack running on this host. This makes mDNS unreliable and is thus not recommended. ***

     I can see that there are (at least) two mDNS groups: 192.168.1.x, which I would expect, and 192.168.122.x, which I have no knowledge of. I do seem to be getting random network disconnections (user share disconnections) - which might be related to network resets? I'm running a number of Dockers and one VM (Hoobs); I set the VM on 192.168.1.82. I've downloaded (anonymised) diagnostic files - should I upload all of them or only selected ones?

     Browsing ifconfig I can see:

       virbr0: inet 192.168.122.1  netmask 255.255.255.0  broadcast 192.168.122.255

     and I can see IPv4 192.168.122.0/24 virbr0 in the network routing table.

     I'd like some advice please. I guess I could delete the ..122.. entry from the routing table - but would this be wise? Are there any other areas I should investigate or diagnostics I should run? (A few checks I have in mind are sketched after this list.)
  9. I had a similar wish, moving from /video/movies, /video/tv, /video/radio etc. and /music/albums, /music/podcasts to /media/albums, /media/movies, /media/radio, /media/tv etc., primarily so I could record radio via the tuner in Plex and play the broadcasts back at a convenient time via LMS.

     I created a new share "media", with the same disks allocated as the two old shares, and stopped the dockers using the shares. Then via Krusader I used the "F6 move" command, which, as mentioned by Jonathon / Jorge above, renames rather than moves (PS I tried it out on a couple of sacrificial directories first). I edited and restarted the dockers and all is well.

     I couldn't see a way of making the changes to /mnt/disk1/etc as above from within a file manager (despite having temporarily enabled disk shares) - but I probably missed something… (a command-line sketch of the per-disk rename is after this list).
  10. Yes, I've just had a flash drive fail on the 6.9.2 upgrade too (to be fair, Tom did warn me it might not be a very reliable model - back in 2009). The other server upgraded flawlessly. I have had trouble getting the drive to read on the desktop, so it might well be a dodgy connection.
  11. Great docker/VM and install instructions - all works well when you follow them correctly - thanks.

      I was expecting (hoping) to pass through Unraid shares as local volumes to the "MacinaBox" VM. In the forms view I added:

        3rd Unraid Share: /mnt/user/Backup/
        3rd Unraid Mount tag: U_backup

      and it generated this XML:

        <filesystem type='mount' accessmode='passthrough'>
          <source dir='/mnt/user/Backup/'/>
          <target dir='U_backup'/>
          <alias name='fs2'/>
          <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
        </filesystem>

      But I can't see the share anywhere (other than as a remote volume on the Unraid server). The share has both AFP and SMB exports set to secure. Am I missing something I should have done? (See the mount sketch after this list for how such a tag is normally mounted inside a guest.)
  12. So I guess (Apple) Time Machine backups are not a good idea?
  13. Makes sense - presumably Chrome is as suitable as Firefox.
  14. Hey Jonathan, neat search string from you, thanks - I hadn't considered an indirect search. Non-identical, but broadly similar results to my first few variants - but then that's Google's search algorithms vs. whatever the Unraid forums use. I guess you're hinting it was there... there's a sorting-the-wheat-from-the-chaff problem here.

      Previous searches within the Unraid forums: 10k+ "safari" issues using search within the forums; fewer than 10 "safari AND terminal" issues in "General Support" (which would appear to be the appropriate forum) - none current. A different list than I found - but let's see:

        1. Deprecated - web sockets
        2. logout - web sockets - found ref, had already tried it, now works OK...
        3. iPhone
        4. tower.local - found ref, had already tried it, now works OK...
        5. 2013...
        6. Deprecated - but specifically Safari - terminal - apparently fixed, u/fmp4m etc...

      ...getting lazy retyping the summary, but "I can see a couple of references to Safari rendering, but nothing seems current." Cheers, J
  15. I'm running Safari 12.0.2 under macOS Mojave 10.14.2 and seem to be experiencing some display issues. To be honest I assumed I must be making some mistakes, but then I tried Chrome 71.0.3578.98 and the problems disappeared:

        a) the terminal launched but displayed nothing (under Chrome a session was launched)
        b) the Ubuntu VM screen likewise launched but displayed nothing

      I can see a couple of references to Safari rendering, but nothing seems current. It's not a problem to use Chrome, but I'm curious to learn whether I'm missing something. If it is a "Safari issue" and I'd found a comment like this, I might have tried Chrome a couple of days ago.
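
Sketch for post 2 (reformatting a single-device cache pool): a minimal, unofficial outline of the command-line route referred to above, assuming the data is already backed up, the pool is stopped, and the pool device really is /dev/nvme0n1p1 (the device name is an assumption - check it first).

    # Sketch only - verify the device with blkid before touching anything.
    blkid                              # confirm which device holds the btrfs cache pool
    umount /mnt/cache 2>/dev/null      # make sure nothing still has the pool mounted
    wipefs -a /dev/nvme0n1p1           # erase the filesystem signature so the device reads as unformatted

After this, the Unraid GUI should treat the device as unformatted and offer to format it, at which point XFS can be chosen for a single-device pool.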
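
Sketch for post 3 (read-only checks before any repair): none of the commands below write to the devices, so they are a reasonably safe first step while waiting for advice. Device names are taken from the log excerpts in that post (/dev/nvme0n1p1 for the cache, /dev/md2p1 for the XFS disk); the rescue destination path is a placeholder.

    btrfs dev stats /mnt/cache                                  # cumulative device error counters (while still mounted)
    btrfs check --readonly /dev/nvme0n1p1                       # offline check of the unmounted cache device, report only
    btrfs restore -v /dev/nvme0n1p1 /mnt/disk1/cache_rescue     # copy files off a filesystem that won't mount, without writing to it
    xfs_repair -n /dev/md2p1                                    # -n = no modify, just report problems (disk must be unmounted)

A destructive btrfs check --repair or a real xfs_repair run would only come after these, ideally with the diagnostics reviewed first.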
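
Sketch for post 7 (the /bitnami/solr "Permission denied"): Bitnami images run as a non-root user (UID 1001 per the Bitnami docs), so the host folder mapped into the container has to be writable by that user. The appdata path below and the exact container-side mount point are assumptions to check against the image documentation.

    # Make the host folder writable by the container's non-root user (assumed UID 1001).
    mkdir -p /mnt/user/appdata/solr
    chown -R 1001:1001 /mnt/user/appdata/solr

    # Map it over the path the container is trying to create.
    docker run -d --name solr \
      -p 8983:8983 \
      -v /mnt/user/appdata/solr:/bitnami/solr \
      bitnami/solr:latest

In an Unraid docker template the equivalent would be adding that path mapping and fixing ownership of the host folder; the schema question (data_driven_schema_configs) is separate from this permissions error.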
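
Sketch for post 8 (the second mDNS stack): 192.168.122.0/24 on virbr0 is the default NAT network that libvirt creates for VMs, so the duplicate mDNS warning is most likely Avahi answering on that bridge rather than a rogue device; deleting the route would probably just break NAT for the VM. Some non-destructive checks, plus one possible mitigation (interface names are assumptions - br0 here stands for whichever interface carries 192.168.1.x):

    ss -ulnp | grep 5353          # which processes are listening for mDNS (UDP 5353), and on which addresses
    ip addr show virbr0           # confirm virbr0 is the libvirt NAT bridge on 192.168.122.1
    ip route show                 # the 192.168.122.0/24 route should belong to virbr0

    # One option is to limit Avahi to the LAN interface instead of touching routes,
    # e.g. in /etc/avahi/avahi-daemon.conf:
    #   allow-interfaces=br0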
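
Sketch for post 9 (the per-disk rename from the command line): a move within the same disk is just a rename, so this does what Krusader's F6 did without copying data. It assumes the old /video and /music folder names from the post, that the dockers are stopped, and that disk paths follow the usual /mnt/diskN pattern.

    for d in /mnt/disk[0-9]*; do
        for sub in video/movies video/tv video/radio music/albums music/podcasts; do
            src="$d/$sub"
            [ -d "$src" ] || continue                   # skip disks that don't hold this folder
            mkdir -p "$d/media"                         # create the new share's folder on this disk
            mv "$src" "$d/media/$(basename "$sub")"     # same-disk mv = rename, no data copied
        done
    done

Once everything has moved, the empty /video and /music folders can be removed and the old shares deleted.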
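
Sketch for post 11 (the passthrough share not appearing): a <filesystem> passthrough is not mounted automatically - the guest has to mount the tag itself. In a Linux guest that is a one-liner with the 9p driver (below); whether the macOS guest created by MacinaBox has an equivalent driver is not something I have confirmed, so treat this purely as an illustration of what the U_backup tag is for.

    # Inside a Linux guest: mount the tag from the XML (U_backup) at a local path.
    mkdir -p /mnt/u_backup
    mount -t 9p -o trans=virtio,version=9p2000.L U_backup /mnt/u_backup

    # Or the /etc/fstab equivalent:
    #   U_backup  /mnt/u_backup  9p  trans=virtio,version=9p2000.L,_netdev  0  0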