pm1961

Members
  • Posts: 129
  • Joined
  • Last visited
Everything posted by pm1961

  1. Hi, although this is a Nextcloud/Cloudflare Tunnel question, my problem doesn't involve the Cloudflared Docker or NPM. My tunnel was created using the GUI at the Cloudflare end. I'm happily using a variety of apps: HA in a VM works great, as do Frigate, Sonarr and all the other 'arrs. It's just Nextcloud. It works fine on the local network. I have a Cloudflare Tunnel working to the Unraid server, and all my other Dockers work fine through the tunnel, but Cloudflare returns a 502 Bad Gateway error with this host. That led me to believe it must be my config file and "trusted proxies". However, I have included the domain through which the tunnel runs and still no joy. Please have a look at my config and comment if there are any errors. Are there any other avenues for me to check? Thanks for looking. Paul
     $CONFIG = array (
       'memcache.local' => '\OC\Memcache\APCu',
       'datadirectory' => '/data',
       'instanceid' => 'ocxxxxxxxxxxxxxxw',
       'passwordsalt' => 'TBxxxxxxxxxxxxxxxxxxxxxxDoy',
       'secret' => 'dFxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxnRHo',
       'trusted_domains' => array (
         0 => 'nextcloud.xxxxxx.com',
         1 => 'localhost',
         2 => '10.10.10.30:444',
       ),
       'dbtype' => 'sqlite3',
       'version' => '25.0.1.1',
       'overwrite.cli.url' => 'http://10.10.10.30/:444',
       'installed' => true,
       'theme' => '',
       'loglevel' => 2,
       'maintenance' => false,
       'updater.secret' => '$2y$10xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx232',
     );
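     One thing worth noting in the config above: there is no `trusted_proxies` entry at all, and the overwrite settings point at the LAN address rather than the public hostname. As a minimal sketch only, these are the reverse-proxy keys Nextcloud documents for running behind a proxy/tunnel; the subnet and hostname below are illustrative placeholders, not values from the post:

     ```php
     // Sketch of additions to config/config.php (placeholders, adjust to your setup)
     'trusted_proxies' =>
     array (
       0 => '172.17.0.0/16',   // assumed Docker bridge range that cloudflared connects from
     ),
     'overwritehost' => 'nextcloud.example.com',    // the public hostname the tunnel serves
     'overwriteprotocol' => 'https',                // clients reach Cloudflare over HTTPS
     'overwrite.cli.url' => 'https://nextcloud.example.com',
     ```

     Without `overwriteprotocol`, Nextcloud behind a TLS-terminating proxy often generates http:// URLs, which can also surface as gateway errors or redirect loops.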
  2. Whereas my Intel quad NIC has been split into 4 separate, selectable NICs. My reason for asking is that I would like to allocate one of the Mellanox 10GbE NICs to my pfSense VM and retain one for Unraid. Or is that simply not possible? TIA Paul
  3. Hi, this is doing my head in! A few weeks ago I managed to unwittingly delete my entire NAS of media... approx 35TB worth. It appears, though I still have no conclusive proof, that I may have misconfigured the Appdata backup. Never mind, I have a tape backup; inconvenient it certainly is, but not a disaster. However, since then it just gets weirder and weirder. I removed the Appdata app just in case. I restore the media files onto the NAS, all is fine... and then they just start disappearing: first the files, then the folders. It's not like, bang, they've all gone; they just dribble away slowly over a day or so! The way I first found out was by restoring my Plex app (on a different server) once the data was back on. Plex rebuilt all its thumbnails and data and I thought all was hunky dory... but then I tried to play something and Plex reported that it couldn't find the media. Sure enough, dig into the NAS and the media was no longer there. I've tried different ways of restoring, e.g. putting the files onto an intermediate drive before putting them on the NAS... no different. The NAS doesn't do anything else: no VMs, just a Krusader Docker and that's it. Can anybody think of any mechanism that would strip files out like this? I've virus-checked it as much as I know how to, but any lines of enquiry gratefully received please. TIA tower-diagnostics-20210303-1215.zip
  4. Yep, I'd seen that and have one called "appdatabackup", so I still don't know what I did wrong. I was asking more about the mechanism of how it happens, and why. It's one thing to read a dire warning; it's another to understand how it happens. That's all I'm trying to do, now that I'm fully conversant with the consequences!
  5. Wow... I still don't know what I did, but this was the very thing I was trying to sort the night before, ironically because I thought I'd been a little lax in keeping on top of it. It does seem to be something of a hidden hazard for the unwary, and nowhere near as overt as positively selecting stuff and hitting a big fat delete button! I've done a little reading on XFS file recovery and it seems to be an order of magnitude more difficult and less certain than on other file systems. I'm sticking with restoring from my tape backup for now... the cache is full, so just the odd 30TB @ 40Mb/s to go. For my understanding, and I'm sure others', is there any chance of expanding on the subject please?
  6. Oh dear... I did look at that only last night. If it was an "orderly deletion", rather than corruption, is there a possibility that file recovery utilities may still be of help?
  7. Oh dear! (That's my printable reaction...) Can you think of any mechanism within Plex or MyMedia (or any other program) that would lead to mass deletion of files, but not quite all of them? Their parent folders are still there as well...
  8. Could there be any clues in my Appdata backups? If so, what should I be looking for? The only other unusual thing that comes to mind recently is that my Docker/VM machine hosts Plex and MyMediaforAlexa, and they've both been playing up. They both have paths back to the media server with the problems.
  9. No, there is/was 35TB on there! I was kinda hoping it was some kind of corrupted file system rather than complete deletion....
  10. Hi, a bit of a strange one... I have two Unraid servers; this one is mainly a NAS. After a long period of stability, I've suddenly had problems with both. With regard to this NAS, I've not done any arranging/meddling/editing recently, but whole folders have gone missing. It's all media: films, photos, music etc. Most of it has gone missing, but not all of it. That to me is the strange bit... e.g. if I'd "accidentally" deleted the "FLAC" folder, it would have all gone, but it hasn't; I have remnants of it. It's exactly the same with the other folders. Any thoughts please? The other server is mainly a Docker/VM machine which had completely different symptoms, which I'll detail if thought necessary/relevant. I did wonder whether I'd been subject to some kind of "attack", but both sit behind what I considered to be a tight pfSense firewall... TIA Paul tower-diagnostics-20210222-1427.zip
  11. It's nothing complicated at all. Everything is on a standard W10 PC. The LTO drive is connected to a RocketRAID 2720 SAS controller (other HBA cards are available); a PCIe x8 slot is required for this. With LTFS installed on Windows, no other software beyond the normal file manager is required. That's a big plus, because the software I used with previous LTO incarnations wasn't cheap. If you're proficient with Linux/Unix software (I'm not!), then I believe handling TAR files (tape archives) is native. https://en.wikipedia.org/wiki/Linear_Tape_File_System Your tape appears as a drive and you can drag and drop to your heart's content. You wouldn't want to use it like an HDD, but for archiving and backup it's perfect. Interestingly, LTO-6 is a jump in capacity but not so much in speed. There's a big jump for LTO-7 in both capacity and speed... but the costs are eye-watering. https://en.wikipedia.org/wiki/Linear_Tape-Open Good luck!
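     For reference, the "native" TAR handling mentioned above boils down to tar's create and append modes; the same commands work against a tape device such as /dev/nst0. A minimal sketch, using an ordinary file in place of the tape:

     ```shell
     # Demo of tar's append mode using a regular file in place of a tape device.
     # On real hardware the target would be e.g. /dev/nst0 (non-rewinding tape).
     work=$(mktemp -d)
     echo "disc one" > "$work/film_a.iso"
     echo "disc two" > "$work/film_b.iso"

     tar -cf "$work/archive.tar" -C "$work" film_a.iso   # create the initial archive
     tar -rf "$work/archive.tar" -C "$work" film_b.iso   # append; existing data is not rewritten

     tar -tf "$work/archive.tar"   # lists film_a.iso then film_b.iso
     ```

     Note that append mode (`-r`) only works on uncompressed archives, and with LTFS you can skip tar entirely, since the tape mounts as a normal drive.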
  12. I thought I understood DNS records! Now I'm in a tailspin. I have two domains with GoDaddy, but gave control of the nameservers to Cloudflare. I have managed to corrupt the IP address in the A record of one of my domains and I can't figure out how to recover the original IP address. I thought that putting the domain name into a DNS lookup would show me the answer. While the answer does belong to Cloudflare, it doesn't take me to my GoDaddy web page. My simple brain thinks that I have a web page constructed and GoDaddy still has it; but if I type my domain name into the address bar, it doesn't go there. I have a direct comparison with my other (unmolested) domain name, also hosted by GoDaddy and, again, with DNS handled by Cloudflare. If I type in that domain name, it correctly takes me to my GoDaddy homepage. However, if I do a DNS lookup on that domain name, it shows me the Cloudflare NS records, not the original A record IP that transferred from GoDaddy in the first place. I've asked GoDaddy, but have yet to get a reply. I was hoping that the assembled masses might know a way of recovering the same info. ATB Paul
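     For anyone untangling the same thing: the NS answer and the A answer are separate lookups, and `dig` separates them cleanly. The domains below are placeholders; the idea is that the healthy domain's A record reveals the GoDaddy IP the broken domain can be pointed back at.

     ```shell
     # Which nameservers answer for the zone? (Cloudflare, after the transfer)
     dig +short NS example.com

     # What IP does the *working* GoDaddy-hosted domain resolve to?
     # That same IP can be re-entered as the A record for the broken domain.
     dig +short A working-example.com
     ```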
  13. Sterling has picked up a bit against the dollar recently. Typically quoted at $69.95... over here, they somehow translate that to £67.74, using some unspecified, other-planet exchange rate. Thanks to ipcamtalk, they offer it at $57.99, which PayPal translated to £44.74. Good deal. ...Old enough to remember when the math(s) was easy: £1 = $2.
  14. Hi, after an update of a working instance of Zoneminder, I have the problem of: "Unable to connect to ZM db. SQLSTATE[HY000] [2002] No such file or directory". On inspection of the log (bottom), it references MariaDB. When I use mc to try and navigate to the folder/files referred to, they don't exist... presumably because the "kill" at the end deletes them? After much frustration, I installed a fresh Zoneminder on my other Unraid server. That server runs as a NAS with no other Dockers... and it works fine! So, if it's an interaction between Zoneminder and my MariaDB, is there a workaround? I only have MariaDB because of my Nextcloud instance, and they work fine together. This server is my high-core/high-memory VM and Docker server so, although it works, I don't want to leave Zoneminder on my NAS server. Having read through the forum, I've tried deleting templates for a fresh install from CA... I've tried deleting log files, but they didn't exist, presumably because it's never done anything. I noticed from comparing the Appdata folders that the non-working instance has an extra directory, "opencv". My working version doesn't have this, even though it was installed after the broken one? Any help gratefully received, ATB, Paul
     Starting services...
      * Starting Apache httpd web server apache2
      * Starting MariaDB database server mysqld
     Jan 12 15:23:43 70771cb7af9e web_php[657]: FAT [Failed db connection to ] ...fail!
     DBI connect('database=zm;host=localhost','zmuser',...) failed: Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2) at /usr/share/perl5/ZoneMinder/Database.pm line 110.
     DBI connect('database=zm;host=localhost','zmuser',...) failed: Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2) at /usr/share/perl5/ZoneMinder/Database.pm line 110.
     DBI connect('database=zm;host=localhost','zmuser',...)
failed: Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2) at /usr/share/perl5/ZoneMinder/Database.pm line 110. Jan 12 15:24:12 70771cb7af9e zmupdate[1092]: ERR [Error reconnecting to db: errstr:Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2) error val:] Can't call method "prepare_cached" on an undefined value at /usr/share/perl5/ZoneMinder/Config.pm line 96. BEGIN failed--compilation aborted at /usr/share/perl5/ZoneMinder/Config.pm line 147. Compilation failed in require at /usr/bin/zmupdate.pl line 73. BEGIN failed--compilation aborted at /usr/bin/zmupdate.pl line 73. Jan 12 15:24:12 70771cb7af9e zmupdate[1092]: ERR [Error reconnecting to db: errstr:Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2) error val:] DBI connect('database=zm;host=localhost','zmuser',...) failed: Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2) at /usr/share/perl5/ZoneMinder/Database.pm line 110. DBI connect('database=zm;host=localhost','zmuser',...) failed: Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2) at /usr/share/perl5/ZoneMinder/Database.pm line 110. Jan 12 15:24:12 70771cb7af9e zmupdate[1094]: ERR [Error reconnecting to db: errstr:Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2) error val:] DBI connect('database=zm;host=localhost','zmuser',...) failed: Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2) at /usr/share/perl5/ZoneMinder/Database.pm line 110. Can't call method "prepare_cached" on an undefined value at /usr/share/perl5/ZoneMinder/Config.pm line 96. BEGIN failed--compilation aborted at /usr/share/perl5/ZoneMinder/Config.pm line 147. Compilation failed in require at /usr/bin/zmupdate.pl line 73. BEGIN failed--compilation aborted at /usr/bin/zmupdate.pl line 73. 
Jan 12 15:24:12 70771cb7af9e zmupdate[1094]: ERR [Error reconnecting to db: errstr:Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2) error val:] Starting ZoneMinder: DBI connect('database=zm;host=localhost','zmuser',...) failed: Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2) at /usr/share/perl5/ZoneMinder/Database.pm line 110. DBI connect('database=zm;host=localhost','zmuser',...) failed: Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2) at /usr/share/perl5/ZoneMinder/Database.pm line 110. Jan 12 15:24:12 70771cb7af9e zmpkg[1104]: ERR [Error reconnecting to db: errstr:Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2) error val:] DBI connect('database=zm;host=localhost','zmuser',...) failed: Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2) at /usr/share/perl5/ZoneMinder/Database.pm line 110. Can't call method "prepare_cached" on an undefined value at /usr/share/perl5/ZoneMinder/Config.pm line 96. BEGIN failed--compilation aborted at /usr/share/perl5/ZoneMinder/Config.pm line 147. Compilation failed in require at /usr/share/perl5/ZoneMinder.pm line 33. BEGIN failed--compilation aborted at /usr/share/perl5/ZoneMinder.pm line 33. Compilation failed in require at /usr/bin/zmpkg.pl line 34. BEGIN failed--compilation aborted at /usr/bin/zmpkg.pl line 34. Jan 12 15:24:12 70771cb7af9e zmpkg[1104]: ERR [Error reconnecting to db: errstr:Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2) error val:] ZoneMinder failed to start *** /etc/my_init.d/40_firstrun.sh failed with status 255 *** Killing all processes... Jan 12 15:24:12 70771cb7af9e syslog-ng[38]: syslog-ng shutting down; version='3.25.1'
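     Every error in the log above is the same symptom: mysqld inside the container never comes up (or its socket lives somewhere other than the path the log names). A couple of hedged checks, run inside the Zoneminder container, using the socket path straight from the log:

     ```shell
     # Run inside the container, e.g. via `docker exec -it zoneminder bash`.

     # Does the socket actually exist at the path the log names?
     ls -l /var/run/mysqld/mysqld.sock

     # Is mysqld alive and answering on that socket?
     mysqladmin --socket=/var/run/mysqld/mysqld.sock ping

     # Is the mysqld process running at all?
     ps aux | grep '[m]ysqld'
     ```

     If the socket is missing and mysqld isn't running, the fault is the bundled database failing to start (the extra "opencv" build may have changed the image), not an interaction with the separate MariaDB container, which listens on its own socket/port.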
  15. Well, I'm really happy to say that on my W10 machine, a direct path from the file sitting on the NVMe drive, through the PCIe bus, direct to the tape drive gets me a constant write of 138 Mb/s. I'm a very happy bunny. What's even more pleasing is that the tape is in a constant "write" state rather than a "stop/wind/start writing again" cycle, which must be better all round for the mechanics of both tape and drive.
  16. I'm in a working Mac... yay! Thanks Ed. Following up with the Virt-Manager example, I get the following screen thrown up. Any ideas gratefully received, thanks.
  17. Thanks for the reply. No, unfortunately not yet; my system is in pieces at the moment. I've just acquired a rack to rehome/tidy the rat's nest that my attic has become! The reboot will be an opportune time to see if that works for me. ATB
  18. Hi, I hope this fits in the general category. I'm trying to set up Swag but can't get past the initial setup. I find the following error when I check the log files. I must add that I have DuckDNS working fine on my PC/server/pfSense, and I have OpenVPN also working flawlessly. I can also successfully ping my DuckDNS address from the server console. I think that tells me that I don't have a problem with either DNS or port forwarding... or is that too simplistic? TIA Paul
     Challenge failed for domain pm1961nextcloud.duckdns.org
     http-01 challenge for pm1961nextcloud.duckdns.org
     Cleaning up challenges
     Some challenges have failed.
     IMPORTANT NOTES:
      - The following errors were reported by the server:
        Domain: pm1961nextcloud.duckdns.org
        Type: connection
        Detail: Fetching http://pm1961nextcloud.duckdns.org/.well-known/acme-challenge/l989LN5Is6-ZQZlkyOb34c3NOlBO45SaUdEZnVq2aAg: Timeout during connect (likely firewall problem)
        To fix these errors, please make sure that your domain name was entered correctly and the DNS A/AAAA record(s) for that domain contain(s) the right IP address. Additionally, please check that your computer has a publicly routable IP address and that no firewalls are preventing the server from communicating with the client. If you're using the webroot plugin, you should also verify that you are serving files from the webroot path you provided.
     ERROR: Cert does not exist! Please see the validation error above. The issue may be due to incorrect dns or port forwarding settings. Please fix your settings and recreate the container
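     For context on what that failed http-01 challenge actually does: Let's Encrypt fetches a token file over plain HTTP on port 80, so pinging the DuckDNS name doesn't exercise the relevant path; port 80 must be forwarded to the container. A self-contained sketch of the webroot mechanics (local port 8931 and the token name stand in for the real thing):

     ```shell
     # Simulate the http-01 webroot flow locally: serve a token file, fetch it back.
     webroot=$(mktemp -d)
     mkdir -p "$webroot/.well-known/acme-challenge"
     printf 'demo-token' > "$webroot/.well-known/acme-challenge/demo"

     # Stand-in web server on port 8931 (real validation hits port 80)
     python3 -m http.server 8931 --directory "$webroot" >/dev/null 2>&1 &
     server_pid=$!
     sleep 1   # give the server a moment to bind

     # This is the request the Let's Encrypt validation server makes:
     resp=$(curl -s http://127.0.0.1:8931/.well-known/acme-challenge/demo)
     kill "$server_pid"
     echo "$resp"
     ```

     A "Timeout during connect" from the real challenge means this request never reached the server at all, which points at port 80 not being forwarded (or being blocked upstream by the ISP) rather than at DNS.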
  19. Hey, good to hear I'm not alone! I use it exactly for the kind of scenario you describe: backing up my Blu-ray collection every few months. And, as you say, the biggest single benefit in my use case is that I can add the odd few files to an archive. I have a tape dedicated to each letter of the alphabet, whereas with my old LTO-2, I would have to run the entire tape again just to add a few (albeit large) .iso files. That's a thought... I do have 128GB on one of my servers and I believe a RAM disk can use half? It's also connected to the tape PC with a 10GbE fibre link, so that'll be an interesting experiment. ATB Paul
  20. (Re: HP nc375i) I'm sure you've done a quick Google and seen that others have also had problems with this controller over on the FreeNAS pages. It depends on how much you want to fight with it. A quick and cheap solution would be to put in a known quad-port Intel card like this: https://www.ebay.co.uk/itm/Intel-Pro-1000-PT-Quad-Port-Gigabit-Network-Interface-Card/224194158812?hash=item34330360dc:g:g3EAAOSwVp1fhwtJ I only paid £10 for mine... and it works brilliantly with Unraid. In my case, it passes through effortlessly for use with a pfSense VM.
  21. oops, sorry..... tower-diagnostics-20201110-1208.zip
  22. Hi, my server is currently working fine; however, I get the occasional boot failure very early in the sequence. It shows as a "SMART" error on an unspecified disk. A Ctrl-Alt-Del usually results in a normal boot-up. Perusing the individual disk logs, all seemed OK apart from this one. Googling "READ FPDMA QUEUED" didn't sound like very good news for that disk. Would it be reasonable to infer that this disk is dying? TIA Paul syslog.txt
  23. Can I ask for your thoughts please on the attached iperf3 results? In particular, the discrepancy in speed between the directions of travel to and from the W10 box. All three machines have identical Mellanox 10G cards. Jumbo frames are enabled on all three cards and on the switch, along with Tunable Direct IO set to Yes on the Unraid machines. I wondered if there are any other settings worth tweaking on the W10 implementation of the Mellanox card? Or perhaps Linux just handles the card in a more efficient way, leading to better results? TIA Paul
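     For anyone comparing, these are the standard iperf3 invocations that separate the two directions and rule out a single-stream bottleneck; the hostname is a placeholder for whichever Unraid box is under test:

     ```shell
     # On the server under test:
     iperf3 -s

     # From the W10 box: normal direction (client sends)...
     iperf3 -c unraid1.local

     # ...and reverse direction (server sends), isolating which side is slow:
     iperf3 -c unraid1.local -R

     # Four parallel streams, to see whether one TCP stream is the limit:
     iperf3 -c unraid1.local -P 4
     ```

     If `-P 4` closes the gap, the asymmetry is per-stream (window/offload settings on the W10 side) rather than the link itself.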
  24. Kinda solved... I think. I resorted to a bit of advice I've seen before, which seems to be a blunt but effective tool for dealing with all sorts of network issues: I used mc in the terminal to delete the network config files on the flash drive, then rebooted to let Unraid rebuild the config files how it sees fit. I'm not clever enough to know how that works, but it seems to have sorted itself out now. Clever ol' thing. Happy days.
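     For anyone searching later, the "blunt tool" above amounts to something like the sketch below. The paths are the usual Unraid flash locations (adjust if yours differ); take a copy first, because this wipes any static-IP, bonding and bridging settings:

     ```shell
     # Back up, then remove Unraid's network config so it regenerates on reboot.
     mkdir -p /boot/config/backup
     cp /boot/config/network.cfg /boot/config/backup/ 2>/dev/null
     cp /boot/config/network-rules.cfg /boot/config/backup/ 2>/dev/null

     rm -f /boot/config/network.cfg /boot/config/network-rules.cfg
     # reboot   # Unraid rebuilds default network settings on the next boot
     ```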
  25. I've had Mellanox 10G fibre connections in my machines working quite happily (peer to peer) for some time now. Since I introduced a 10G switch, I'm having problems with my Unraid servers, but not my W10 box, which works fine. Hopefully the picture explains all. The W10 machine is happily connected to the internet and any other devices connected to the switch. Unraid 1 shows up in the Windows file manager but won't display folders and files; I can successfully ping Unraid 1 from the W10 machine. Unraid 2 doesn't show up at all and can't be pinged from the W10 machine or the other Unraid machine. IPMI works fine for both Unraid machines and is connected through 1G ports on the switch; I can manipulate both Unraid boxes from the W10 machine. Both Unraid machines work fine on the network if I just use the onboard 1G NICs to the switch. Do I need to isolate those onboard NICs in the Unraid config so that they don't get loaded at all when using the 10G NICs? Any help greatly appreciated! TIA