untraceablez

Members · 47 posts
Everything posted by untraceablez

  1. So I've been using CloudBerry Backup for a while to back up the painfully digitized record collection I host on my NAS, among other things. I have tried many times to back up my /mnt/user/appdata folder, but the backup always encounters errors on log files, databases, and other files that are constantly updated by the running apps. These are files I want backed up, especially in case of a crash or hardware failure, so I don't want to blacklist their file extensions. Is there a way to get them to back up successfully without CloudBerry throwing a ton of errors at me in the error reporting? The attached screenshot shows the kinds of files that typically error out for me. (A rough workaround sketch is below.)
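One workaround I've been considering, in case it helps anyone: quiesce the busy containers around the backup window, e.g. from a scheduled User Scripts job. This is just a sketch, and the container names are examples, not my actual stack.

#!/bin/bash
# Hypothetical pre/post-backup hook: stop the containers that keep
# appdata files (logs, databases) mid-write during the CloudBerry run.
docker stop plex lidarr uvdesk        # example names only
# ... let the CloudBerry /mnt/user/appdata job run here ...
docker start plex lidarr uvdesk       # bring everything back up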
  2. Reserved for updates. UPDATE #1: I accidentally messed up the template and it was populating with my personal info instead of the defaults; this has been fixed and pushed.
  3. Hey all! This is my first container template, so I'm excited to finally give back to the unRAID community! This one's a template for UV Desk, a self-hosted helpdesk and knowledge base application. You can learn more about the overall project at their site. This unRAID template is based on dietermartens/uvdesk. There is some initial setup that needs to be done before you install the container, so please check the README for full details.
  4. So I'm trying to add Loki + Promtail to my existing setup. Promtail is fine, but Loki keeps insisting that my local-config.yaml isn't there:

2021-11-08 20:24:06.763073 I | proto: duplicate proto type registered: purgeplan.DeletePlan
2021-11-08 20:24:06.763141 I | proto: duplicate proto type registered: purgeplan.ChunksGroup
2021-11-08 20:24:06.763143 I | proto: duplicate proto type registered: purgeplan.ChunkDetails
2021-11-08 20:24:06.763145 I | proto: duplicate proto type registered: purgeplan.Interval
2021-11-08 20:24:06.817986 I | proto: duplicate proto type registered: grpc.PutChunksRequest
2021-11-08 20:24:06.817993 I | proto: duplicate proto type registered: grpc.GetChunksRequest
2021-11-08 20:24:06.817995 I | proto: duplicate proto type registered: grpc.GetChunksResponse
2021-11-08 20:24:06.817997 I | proto: duplicate proto type registered: grpc.Chunk
2021-11-08 20:24:06.817999 I | proto: duplicate proto type registered: grpc.ChunkID
2021-11-08 20:24:06.818001 I | proto: duplicate proto type registered: grpc.DeleteTableRequest
2021-11-08 20:24:06.818003 I | proto: duplicate proto type registered: grpc.DescribeTableRequest
2021-11-08 20:24:06.818005 I | proto: duplicate proto type registered: grpc.WriteBatch
2021-11-08 20:24:06.818007 I | proto: duplicate proto type registered: grpc.WriteIndexRequest
2021-11-08 20:24:06.818008 I | proto: duplicate proto type registered: grpc.DeleteIndexRequest
2021-11-08 20:24:06.818012 I | proto: duplicate proto type registered: grpc.QueryIndexResponse
2021-11-08 20:24:06.818014 I | proto: duplicate proto type registered: grpc.Row
2021-11-08 20:24:06.818016 I | proto: duplicate proto type registered: grpc.IndexEntry
2021-11-08 20:24:06.818018 I | proto: duplicate proto type registered: grpc.QueryIndexRequest
2021-11-08 20:24:06.818020 I | proto: duplicate proto type registered: grpc.UpdateTableRequest
2021-11-08 20:24:06.818022 I | proto: duplicate proto type registered: grpc.DescribeTableResponse
2021-11-08 20:24:06.818025 I | proto: duplicate proto type registered: grpc.CreateTableRequest
2021-11-08 20:24:06.818029 I | proto: duplicate proto type registered: grpc.TableDesc
2021-11-08 20:24:06.818036 I | proto: duplicate proto type registered: grpc.TableDesc.TagsEntry
2021-11-08 20:24:06.818039 I | proto: duplicate proto type registered: grpc.ListTablesResponse
2021-11-08 20:24:06.818042 I | proto: duplicate proto type registered: grpc.Labels
2021-11-08 20:24:06.818133 I | proto: duplicate proto type registered: storage.Entry
2021-11-08 20:24:06.818138 I | proto: duplicate proto type registered: storage.ReadBatch
failed parsing config: open /etc/loki/local-config.yaml: no such file or directory

I have a screenshot of the config in unRAID attached; I'm at a loss as to why the file's not showing up. I also have a screenshot of the /appdata/loki folder attached, as viewed from Cyberduck.
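For anyone who hits the same thing: the error reads like the config simply isn't visible at the path Loki expects inside the container. A minimal sketch of the mapping I'd expect to need (the host-side path is my assumption; 3100 is Loki's default HTTP port):

# Mount the appdata folder where Loki looks for its config, and point
# the binary at the file explicitly.
docker run -d --name=loki \
  -p 3100:3100 \
  -v /mnt/user/appdata/loki:/etc/loki \
  grafana/loki:latest \
  -config.file=/etc/loki/local-config.yaml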
  5. Great container, the interface is very nice, and I got my Wasabi account linked flawlessly. That said, I'm encountering a couple of issues: 1.) Has anyone tried backing up all system files from /, excluding some shares in /mnt and /storage? All I'm getting are permission-denied errors, even though the container has /storage assigned as / and is running privileged. I suppose this isn't strictly necessary and I could get away with just backing up /flash. 2.) I can't expand the /mnt folder to selectively back up my /appdata folder; the whole container hangs, which I assume is due to the downloads and media library folders there. Is there any way to manually specify the path and not have to use the GUI, or am I just screwed trying to back up anything under /mnt?
  6. Final Solution: I switched back to the 3900XT and an ASUS X570 Gaming. I'm not sure why that worked, but I was able to successfully make it through parity sync and am now in the process of reconfiguring everything. Just wanted to update in case someone else discovers this thread looking for a solution.
  7. Given others have already explained UPSs & UNRAID pretty well, I'll just chime in with the UPS I use personally. I caught mine at an Office Depot that was going out of business, so it was below MSRP, but even at MSRP it's still a great balance of capacity and bulk: APC BN1350M2 1350VA UPS w/ USB Charging. I also keep my router/modem plugged into the UPS to ensure I have WiFi for a bit in the event of a power outage.
  8. Hello! I was having undecipherable kernel panic issues, and after a couple weeks of research and troubleshooting, including swapping to a different CPU and a different motherboard, I decided to reset UNRAID. I wiped all my HDDs and SSDs and re-created the USB drive, using the same physical drive and just copying the license key back over. I was still receiving kernel panics until I found a BIOS update that fixed the issue. Now, in setting up the new array and assigning a parity disk, all goes well until the initial parity sync begins. It starts out at full speed (120MB/s) but quickly drops to ~200-300KB/s and stays there. The machine has been running for 14 hours and has only finished 17.4% of the parity sync. From what I can parse of the S.M.A.R.T. data for the drive (sdd), its error rates don't suggest it's failing outright, but I'm no expert on S.M.A.R.T. data either (the commands I ran are sketched below). I am at a loss as to what could be causing the issue. I do have a spare drive I can use for parity, an in-box 10TB IronWolf; I just want to make sure I don't need to pull the old one yet. I've attached the full anonymized diagnostics; hopefully there's a nugget of info in there that explains what's going on. Given the system is completely wiped of data, any formatting solutions are fine; there's nothing to lose at the moment. EDIT: I went ahead and replaced the parity drive with a brand new drive, straight out of its anti-static bag. The new drive is working phenomenally. In the meantime I took the old drive and plopped it into my hot-swap dock connected to my Windows machine. After running S.M.A.R.T. analysis with CrystalDiskInfo and doing research on Seagate's attribute values, the drive doesn't seem to have anything wrong with it, and the Reallocated Sectors count is still at 100. The only errors showing up are Read Error Rate and Seek Error Rate. I've got the CrystalDiskInfo results attached; still curious what might be wrong with the drive, or whether it was just a weird software issue. smaugs-trove-diagnostics-20210214-1229.zip EDIT 2: Changing to a new parity drive seemed to work, but after 2 hours the machine kernel panicked once again. I took a photo of the panic, as I had no other way to capture the data; the picture is attached. I've noticed it usually takes about 2 hours for the machine to panic, and as of writing I've tried disabling the onboard WLAN and Bluetooth, since they were mentioned in the kernel panic printout. I would truly appreciate any help I can get; these issues have been going on for a few weeks now and are very frustrating. PS: Apologies for the oddness of the picture; I tried to keep the text readable by using panorama mode on my phone, so it's not aligned perfectly.
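For reference, pulling the raw attributes from the unRAID console looks roughly like this (standard smartmontools; swap in your own device name):

# Dump identity, attributes, and error logs for the suspect drive
smartctl -a /dev/sdd
# Run an extended self-test, then read the results once it finishes
smartctl -t long /dev/sdd
smartctl -l selftest /dev/sdd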
  9. So I have a 10G ASUS PCIe NIC and the 2.5GbE Intel NIC built into my X570 Taichi motherboard. I want the 10G card to connect to the main network, and the 2.5GbE NIC to be a direct connection between UNRAID and my Windows PC. The 2.5GbE NIC isn't visible in Network Settings where I could set a static IP, yet it's visible everywhere else: IOMMU groups, ifconfig, and the BIOS. I've attached screenshots of the IOMMU groups screen, Network Settings, and the ifconfig printout, plus a CLI sketch of what I can see below. Any ideas?
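In case it's useful, the interface does respond from the console; a sketch of what I can check there (eth1 and the /30 subnet are just placeholders, and anything set this way won't survive a reboot since it bypasses unRAID's network config):

ip link show                        # the 2.5GbE port is listed here
ip link set eth1 up                 # bring the interface up manually
ip addr add 10.0.0.2/30 dev eth1    # temporary static IP for the direct link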
  10. Hello, First off, love your containers; about a third of my containers are running from your images! To get to the point though, I'm currently trying to accomplish two tasks with my mineos-node instance: 1) running multiple servers (distinguished by port number), and 2) using the swag container to proxy the webui AND the servers to subdomains for friends/family to access. Do you have any info on getting this accomplished? Most of what I found were forum posts over on the main MineOS forums, and the results were rather inconclusive. I have almost every container of yours running behind a proxy for remote management, so it may be possible to use other containers' .confs as templates of sorts (a sketch of what I have in mind is below). And of course, given the time of year, Happy Holidays and Happy New (Finally) Year!
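For the webui half, this is roughly the proxy conf I'd try, patterned on the other swag samples (the subdomain, the container name mineos-node, and webui port 8443 over HTTPS are all my assumptions):

# Drop a sketch proxy conf into swag's proxy-confs folder
cat > /mnt/user/appdata/swag/nginx/proxy-confs/mineos.subdomain.conf <<'EOF'
server {
    listen 443 ssl;
    server_name mineos.*;
    include /config/nginx/ssl.conf;

    location / {
        include /config/nginx/proxy.conf;
        resolver 127.0.0.11 valid=30s;
        set $upstream_app mineos-node;
        set $upstream_port 8443;
        set $upstream_proto https;
        proxy_pass $upstream_proto://$upstream_app:$upstream_port;
    }
}
EOF

The game servers themselves are raw TCP, though, so an http proxy block won't carry them; as far as I know those would need nginx stream blocks or plain router port-forwards per server.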
  11. I don't know how I didn't encounter issues with it earlier, I'd rebooted a few times before the move and never encountered an issue. Sometimes servers work in mysterious ways...
  12. Well, detaching the controllers brought all of the drives back, including the 4TB ones. I'm beyond stumped as to why that did it, but hey, it worked! I seriously appreciate the help both of you have given me today @JorgeB and @trurl.
  13. Just got done re-assembling the PC. I had to take the motherboard tray out entirely to get to the PSU; I checked all connections and actually reduced the number of SATA cables to just 2. The drives show up in the BIOS, so I'm not sure why they're not showing up in UNRAID at this point. I've attached a new copy of the log, just in case a new clue lies within now that I've physically checked and re-seated everything. smaugs-trove-diagnostics-20201201-0723.zip
  14. Yeah, I checked all the connections on the drives already; that was my first move after a reboot. I've seen stories of bad cables on other threads, but I doubt 5 SATA power connectors would all fail at exactly the same time.
  15. I have 3 cables running from the PSU to the various drives: 2 of the cables each feed 2 drives, and 1 cable runs exclusively to 1 drive, just due to length constraints in the case. The PSU is an 850W modular unit from Corsair.
  16. Do you mean a bad SATA power connector, not enough power from the PSU, or from the wall? Note that it's plugged into the same APC UPS unit it was plugged into in the other room.
  17. Hello everyone, I just relocated my server this morning from the living room to my office to clear up some space, and after plugging everything back in and booting up, all of my SATA drives are missing. I have the following drives; all the HDDs are missing, but my 2 SSDs show up fine:

Parity - 10TB Seagate IronWolf Pro
Disk 1 - 10TB Seagate IronWolf Pro
Disk 2 - 10TB Seagate IronWolf Pro
Disk 3 - 4TB Seagate IronWolf
Disk 4 - 4TB Seagate IronWolf
Cache 1 - 500GB Samsung 970 EVO M.2 NVMe
Cache 2 - 500GB Samsung 970 EVO M.2 NVMe

I scanned the system log, but it only shows that the system failed to see the devices. I've checked all the cabling and even reseated every SATA data and power connection, in the same slots as before, despite nothing seeming to have come undone. The move was maybe 14 feet at most across a flat surface, so I highly doubt the drives took real damage, especially given 3 of them are relatively new (less than 3 months old). My diagnostics zip is attached. Any info or ideas would be fantastic; I'm hoping I don't have to replace 3-month-old drives or wipe the array. smaugs-trove-diagnostics-20201201-0723.zip
  18. Funnily enough, NZBPlanet has since resolved itself for me, and for NZBGeek I eventually got it working by adding my username/password as a backup method for when API calls fail. For NZBPlanet I'd just suggest removing it, updating the container, and re-adding it.
  19. I use the LetsEncrypt docker; making just the changes you stated for that container worked for me! Thanks for the explainer, very helpful!
  20. A re-install could work if you make sure to wipe out the binhex-radarr folder in appdata; that should force it to rebuild its database on re-install (rough sketch below). I would note down any specific settings/customizations you have so you've got a reference for setting everything back up afterwards.
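A rough sketch of the steps from the unRAID console (paths assume the standard binhex template defaults):

docker stop binhex-radarr                  # stop the container first
rm -rf /mnt/user/appdata/binhex-radarr     # wipe its config and database
# re-install from Community Applications; Radarr rebuilds its DB on first start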
  21. Hello! So I have been enjoying my UNRAID server quite a bit and have a number of docker processes running on it, including Plex. This morning I was networked into the machine, running FileBot against the remote directories (primarily the TV directory) to batch-rename files. The process hard-crashed the server, after which it took several reboots before anything, including the webGUI, would come back online. In investigating the issue I now get soft crashes on various actions: sometimes accessing Community Applications, Tools, Settings, or just the running log on a particular container will cause all web interfaces to crash and throw "connection refused" errors for a couple minutes before things start to come back online. My diagnostic logs are below, and I have not rebooted the server since the soft crashes started, so whatever is causing the issues should be in there. I suspect something networking-related, as a couple of times CA stated it couldn't connect to the server, but I don't see much evidence for that theory in the logs. Hoping more experienced eyes might notice what's amiss. EDIT: To add, SSH connections also fail some of the time, and accessing the console on any running docker is impossible. EDIT 2: The server has been stable for a bit now after I rebooted the machine I was performing the FileBot actions from; I think it may have been trying to reconnect in the background and interfering with things. I'm running a loop of Plex streams to see when, if ever, the connection drops again. The one thing still not working is a few of my subdomains, particularly Sonarr and Radarr. Other services like Ombi, Lidarr, Bazarr, and OpenEats are all working fine though. smaugs-trove-diagnostics-20200927-1459.zip
  22. Nope, unfortunately; still stuck at 500KB, which is really inconvenient, having to resize every image in Photoshop.
  23. So recently Lidarr has begun crashing, requiring a restart of the container to get things going again, only for it to crash a few hours later. I have the log dump from the last crash below; it looks like a memory error, and I don't know whether this is a container-level issue or an upstream Lidarr issue.

2020-09-23 15:30:17,155 DEBG 'lidarr' stdout output:
* Assertion at lock-free-alloc.c:145, condition `sb_header' not met, function:alloc_sb, Failed to allocate memory for the lock free allocator
=================================================================
Native Crash Reporting
=================================================================
Got a SIGABRT while executing native code. This usually indicates a fatal error in the mono runtime or one of the native libraries used by your application.
=================================================================
2020-09-23 15:30:17,156 DEBG 'lidarr' stdout output:
=================================================================
Basic Fault Adddress Reporting
=================================================================
Memory around native instruction pointer (0x14c697d9f82f):
0x14c697d9f81f d2 4c 89 ce bf 02 00 00 00 b8 0e 00 00 00 0f 05 .L..............
0x14c697d9f82f 48 8b 8c 24 08 01 00 00 64 48 33 0c 25 28 00 00 H..$....dH3.%(..
0x14c697d9f83f 00 44 89 c0 75 19 48 81 c4 10 01 00 00 5b c3 66 .D..u.H......[.f
0x14c697d9f84f 90 48 8b 15 e9 75 18 00 f7 d8 64 89 02 eb ba 67 .H...u....d....g
2020-09-23 15:30:17,156 DEBG 'lidarr' stderr output:
/proc/self/maps:
400ca000-4066a000 rwxp 00000000 00:00 0
40ec2000-40ed2000 rwxp 00000000 00:00 0
14c3bdaa9000-14c3bee00000 r--s 00000000 00:29 36611 /var/tmp/etilqs_ea5b35344a134efd (deleted)
14c3bee00000-14c3bef00000 rw-p 00000000 00:00 0
14c3bf100000-14c3bf200000 rw-p 00000000 00:00 0
14c3bf300000-14c3bf400000 rw-p 00000000 00:00 0
14c3bf500000-14c3bf600000 rw-p 00000000 00:00 0
14c3bf700000-14c3bf800000 rw-p 00000000 00:00 0
14c3bf900000-14c3bfa00000 rw-p 00000000 00:00 0
14c3bfb00000-14c3bfc00000 rw-p 00000000 00:00 0
14c3c0900000-14c3c0a00000 rw-p 00000000 00:00 0
14c3c0b00000-14c3c0c00000 rw-p 00000000 00:00 0
14c3c0d00000-14c3c0e00000 rw-p 00000000 00:00 0
14c3c0f00000-14c3c1000000 rw-p 00000000 00:00 0
14c3c1100000-14c3c1200000 rw-p 00000000 00:00 0
14c3c1300000-14c3c1400000 rw-p 00000000 00:00 0
14c3c1500000-14c3c1600000 rw-p 00000000 00:00 0
14c3c1700000-14c3c1800000 rw-p 00000000 00:00 0
14c3c1900000-14c3c1a00000 rw-p 00000000 00:00 0
14c3c1b00000-14c3c1c00000 rw-p 00000000 00:00 0
14c3c1d00000-14c3c1e00000 rw-p 00000000 00:00 0
14c3c1f00000-14c3c2000000 rw-p 00000000 00:00 0
14c3c2100000-14c3c2200000 rw-p 00000000 00:00 0
14c3c2300000-14c3c2400000 rw-p 00000000 00:00 0
14c3c2500000-14c3c2600000 rw-p 00000000 00:00 0
2020-09-23 15:30:41,773 DEBG 'lidarr' stdout output:
=================================================================
Native stacktrace:
=================================================================
0x55fda9163dea - /usr/bin/mono : (null)
0x55fda90fa32e - /usr/bin/mono : (null)
2020-09-23 15:30:41,773 DEBG 'lidarr' stdout output:
0x14c697f594d0 - /usr/lib/libpthread.so.0 : (null)
0x14c697d9f82f - /usr/lib/libc.so.6 : gsignal
0x14c697d8a672 - /usr/lib/libc.so.6 : abort
0x55fda9069a58 - /usr/bin/mono : (null)
0x55fda934c456 - /usr/bin/mono : (null)
0x55fda936640c - /usr/bin/mono : (null)
0x55fda93669cd - /usr/bin/mono : monoeg_assertion_message
0x55fda935843b - /usr/bin/mono : mono_lock_free_alloc
0x55fda931ba23 - /usr/bin/mono : (null)
0x55fda931a4ff - /usr/bin/mono : (null)
0x55fda931a57f - /usr/bin/mono : (null)
0x55fda9312b91 - /usr/bin/mono : (null)
0x55fda934375d - /usr/bin/mono : (null)
0x55fda9312349 - /usr/bin/mono : (null)
0x55fda93142d9 - /usr/bin/mono : (null)
0x55fda93148e0 - /usr/bin/mono : (null)
0x55fda9318ffe - /usr/bin/mono : (null)
0x55fda9319187 - /usr/bin/mono : (null)
0x55fda931c03f - /usr/bin/mono : (null)
0x55fda9308809 - /usr/bin/mono : (null)
0x55fda92e8f4c - /usr/bin/mono : (null)
0x55fda926ab1b - /usr/bin/mono : (null)
0x55fda9280356 - /usr/bin/mono : (null)
0x400ce84d - Unknown
=================================================================
Telemetry Dumper:
=================================================================
Pkilling 0x14c68f729700 from 0x14c67e59a700
Pkilling 0x14c63e0bd700 from 0x14c67e59a700
Pkilling 0x14c68f92a700 from 0x14c67e59a700
Pkilling 0x14c67d866700 from 0x14c67e59a700
Pkilling 0x14c63e2be700 from 0x14c67e59a700
Pkilling 0x14c5bfe72700 from 0x14c67e59a700
Pkilling 0x14c67e198700 from 0x14c67e59a700
Pkilling 0x14c67e399700 from 0x14c67e59a700
Pkilling 0x14c695379700 from 0x14c67e59a700
Pkilling 0x14c67ea9b700 from 0x14c67e59a700
Pkilling 0x14c63d8b9700 from 0x14c67e59a700
Pkilling 0x14c63daba700 from 0x14c67e59a700
Pkilling 0x14c697d63780 from 0x14c67e59a700
Pkilling 0x14c67c53a700 from 0x14c67e59a700
Pkilling 0x14c67d2fe700 from 0x14c67e59a700
Pkilling 0x14c63dcbb700 from 0x14c67e59a700
Pkilling 0x14c68f528700 from 0x14c67e59a700
Pkilling 0x14c63debc700 from 0x14c67e59a700
* Assertion: should not be reached at mini-posix.c:249
* Assertion: should not be reached at mini-posix.c:249
* Assertion: should not be reached at mini-posix.c:249
2020-09-23 15:30:41,797 DEBG fd 8 closed, stopped monitoring <POutputDispatcher at 23379921680984 for <Subprocess at 23379921595640 with name lidarr in state RUNNING> (stdout)>
2020-09-23 15:30:41,797 DEBG fd 10 closed, stopped monitoring <POutputDispatcher at 23379921595696 for <Subprocess at 23379921595640 with name lidarr in state RUNNING> (stderr)>
2020-09-23 15:30:41,798 INFO exited: lidarr (exit status 0; expected)
2020-09-23 15:30:41,798 DEBG received SIGCHLD indicating a child quit
  24. I left the project in the dustbin for a bit and picked it up today. I went through the install fully again and everything's working like a charm this time; I still don't know what went screwy the first time, but hey, it's working now! The interface on this is slick and I quite enjoy it. I only have one question: how does one adjust the image size limit for recipe pictures? 500KB is rather small and I'd like to bump it to 10MB, as I like hi-res photos. (A guess at where to look is sketched below.)
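Pure speculation on my part, but as a first step I'd grep the appdata for wherever that 500KB cap is defined (the openeats folder name is just my guess at the template default):

# Hunt for an upload/size limit in the app's config files
grep -rniE "size|upload" /mnt/user/appdata/openeats/ | grep 500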