DMills

Members
  • Posts: 48
  • Joined
  • Last visited

Everything posted by DMills

  1. Is this still being maintained? The last update has it at v8.1.4, while the latest version is 10.0.1.
  2. That did the trick, thanks so much!
  3. Is there any way to make the startup scripts not cause the Docker container to fail to run? I'm seeing this in the logs every time I try to start ownCloud:

         Hit:5 http://ppa.launchpad.net/ondrej/php/ubuntu focal InRelease
         Hit:6 https://mirrors.gigenet.com/mariadb/repo/10.3/ubuntu focal InRelease
         Reading package lists...
         E: dpkg was interrupted, you must manually run 'dpkg --configure -a' to correct the problem.
         E: dpkg was interrupted, you must manually run 'dpkg --configure -a' to correct the problem.
         *** /etc/my_init.d/20_apt_update.sh failed with status 100

     I'd love for it to just start so I could exec in and run the dpkg line to fix it, but it dies every time. Any ideas on how to get past this?
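     A sketch of a workaround worth trying (untested; the container and image names below are placeholders, so check `docker ps -a` for the real ones): the interrupted dpkg state most likely lives in that container's writable layer rather than in the appdata share, so removing the container and re-adding it from its template should start it from the image's clean package database. And if you just want a shell in that environment without the /etc/my_init.d scripts running, you can override the entrypoint on a throwaway container from the same image:

         # Names are placeholders - substitute your actual container/image names.
         docker stop ownCloud && docker rm ownCloud    # appdata is untouched; re-add the container from the template

         # Or poke around / test the repair without the startup scripts running:
         docker run --rm -it --entrypoint /bin/bash owncloud-image   # <- whatever image the template uses
         dpkg --configure -a                                         # inside that shell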
  4. I ended up adding an integer column called Dirty and left it nullable, since there were already rows present. I checked on the Focalboard side and they're aware of the issue, so it should be fixed soon? I'm not running the container anymore, so I can't say for sure, sorry!
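     For anyone else hitting this, the change was just a nullable integer column on the migrations table; roughly something like the below (the database path is a guess, so point it at wherever your Focalboard data actually lives, and stop the container and back the file up first):

         # Stop the Focalboard container and copy the DB somewhere safe before editing it.
         sqlite3 /mnt/user/appdata/focalboard/focalboard.db \
           "ALTER TABLE schema_migrations ADD COLUMN dirty INTEGER;"   # nullable by default, existing rows unaffected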
  5. nvm, I fixed it myself: I edited the sqlite3 database and added the Dirty column, and it's working fine now.
  6. Seems that Focalboard is broken due to a bad DB migration?

         error [2022-06-07 12:58:55.525 -03:00] Table creation / migration failed caller="sqlstore/sqlstore.go:55" error="table schema_migrations has no column named dirty in line 0: INSERT INTO schema_migrations (version, dirty) VALUES (?, ?)"
         fatal [2022-06-07 12:58:55.525 -03:00] server.NewStore ERROR caller="main/main.go:145" error="table schema_migrations has no column named dirty in line 0: INSERT INTO schema_migrations (version, dirty) VALUES (?, ?)"

     Anyone aware of whether this is fixed yet or planned? TIA
  7. I'm sure I'm just blind, but what is the default username and password to get into this?
  8. It's possible, I guess, although the cache drives are just regular HDDs, so they're as slow as the rest of the array, parity included. I just think my old quad-core couldn't handle the load as well as this one can. I'm still going to try out the tuning plugin, though.
  9. For me, it kind of depends on the system the drives are on. On my previous unRAID "server", I had a number of 8TB drives and it took upwards of 24 hours to finish the monthly parity check, mainly because I was overtaxing the machine with lots of containers running and using it during that time. Now, on my new server with 16TB drives and a lot more horsepower, I don't even notice the check is running until I go onto the server and see that it's either in progress or finished. The parity check now takes around 26 hours, so there's no direct correlation between drive size and check time; there are lots of other factors. I checked the prices and found the 16TB drives to be the best $/TB, so I went with them, parity check times be damned!
  10. You're probably right, but I don't see it being reported anywhere using dmidecode. Oh well, as I said, an annoyance at worst.
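      For anyone else checking, this is where I looked (type 16 is the DMI Physical Memory Array record, which is what normally carries the per-socket maximum, one entry per memory controller):

          dmidecode -t 16                            # Physical Memory Array entries
          dmidecode -t 16 | grep 'Maximum Capacity'
          # I'd expect four 384 GB entries here (1.5TB total), but nothing useful is reported on this board.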
  11. HOW TO REPRODUCE: Install unRAID 6.9.2 on a multi-CPU machine, view the Dashboard page, and scroll to the Memory section.

      In my particular instance, the machine has 4 x Xeon E5-4650 CPUs, each of which has a maximum memory limit of 384GB. That per-CPU limit is what gets reported as the maximum memory size for the whole machine, so it seems the presence of multiple CPUs is not being taken into account. The size should be reported as 1.5TB (4 x 384GB). Very much cosmetic and very minor, but confusing/annoying nonetheless.

      odin-diagnostics-20220510-1448.zip
  12. FYI, I got notified of an upgrade this morning, but received the following error when trying to upgrade:

          plugin: updating: unassigned.devices.plg
          plugin: downloading: "https://github.com/dlandon/unassigned.devices/raw/master/unassigned.devices-2022.04.20.tgz" ... done
          plugin: bad file MD5: /boot/config/plugins/unassigned.devices/unassigned.devices-2022.04.20.tgz

      Tried multiple times, same issue.
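      In case it helps with troubleshooting, the two things worth trying are deleting the cached archive so the plugin manager re-downloads it, and comparing checksums by hand (the .plg file usually records the expected MD5); the paths below are the ones from the error above:

          # Remove the possibly truncated download, then retry the update from the Plugins page:
          rm /boot/config/plugins/unassigned.devices/unassigned.devices-2022.04.20.tgz

          # Or compare checksums by hand against the value recorded in the .plg:
          md5sum /boot/config/plugins/unassigned.devices/unassigned.devices-2022.04.20.tgz
          grep -i md5 /boot/config/plugins/unassigned.devices.plg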
  13. Exactly the same issue here, and moving away from Cloudflare's DNS servers fixed it for me as well!
  14. Just tried again to install Nextcloud on my new server; no matter whether I choose MariaDB or SQLite, I get the dreaded 504 timeout error every time. I'm officially giving up on Nextcloud.
  15. If I'm reading things right, it looks like i801_smbus is spamming syslog. Any ideas why, and more importantly, how to fix it? TIA!

      D

      odin-diagnostics-20220412-1009.zip
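      In case anyone lands here with the same thing: a common workaround (assuming the noise really is coming from the i2c_i801 SMBus driver and nothing on the box needs it, e.g. no sensor monitoring) is to blacklist the module from the flash drive, which newer unRAID releases read at boot, as I understand it:

          # /boot/config/modprobe.d/i2c_i801.conf  (create the file; it persists because it lives on the flash)
          blacklist i2c_i801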
  16. Well, that worked! Not sure why, but as you say, perhaps I didn't wait long enough, etc. I'm a little confused about where to go from here though, as I'm seeing this in the logs now:

          ---------Your 'modules' folder is empty, please put your---------
          ---required 'module' files in the 'Neverwinter Nights/modules'---
          ------------folder and restart the container, putting------------
          ---------------------server into sleep mode----------------------

      Are there required modules I need to install in order for this to function? Is there a list somewhere? I realize this is probably outside of the container support you're offering, but I thought I would ask. Thanks again for your help and all your awesome containers!

      D
  17. Wondering/hoping someone can help with the Neverwinter Nights: EE server. I'm hitting the following issue every time I start the server container:

          ---Checking if UID: 99 matches user---
          ---Checking if GID: 100 matches user---
          ---Setting umask to 000---
          ---Checking for optional scripts---
          ---No optional script found, continuing---
          ---Starting...---
          ---Starting MariaDB...---
          ---Starting Redis Server---
          ---NWN:EE Binaries not found, installing v8193.34!---
          ---Something went wrong, can't download NWN:EE Binaries, putting server in sleep mode---
          ---Something went wrong, can't download NWN:EE Binaries, putting server in sleep mode---

      I noticed someone else having this issue about a year ago, so I tried a force update as well, but it did not fix the issue. Anyone have any ideas what is going on? Did they remove/move the binaries perhaps? TIA
  18. For that specific version of Nextcloud no, but using a more recent version did let me get past the errors I was encountering before.
  19. 64GB DDR4 non-ECC on my old server, 256GB DDR3 1600MHz ECC on the new server.
  20. That's the original onboard port; thankfully it still works fine. It's eth1 through eth4 that are the issue now.
  21. Just installed a new Intel Pro/1000 quad NIC in my main Unraid server, but none of the new interfaces stay up. I've watched it boot: all four ports get link status lights on the back and on the switch (Netgear GS724T v3), but at some point during the boot process they all lose link and the lights go out. I've tried with a bond and without. I've set it to 802.3ad, balance-xor, and active-backup. I've set the MTU to the max for the switch (9216) and also back to 1500. Nothing makes any difference; I cannot get them to stay up.

      lspci lists the NIC fine, and I can see in syslog that the card's driver is loaded. I can't see anything obvious in the logs as to why they are not working. Any help is greatly appreciated!

      P.S. I've attached diagnostics as well.

      D

      bifrost-diagnostics-20211215-1052.zip
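      For anyone looking at this, the quick console checks to confirm what the kernel thinks of the link would be roughly the following (lspci and syslog were already clean, as above; the rest are just standard link checks, and the driver may show up as e1000, e1000e, or igb depending on the exact card):

          lspci | grep -i ethernet          # confirm the quad NIC is enumerated
          dmesg | grep -iE 'e1000|igb'      # driver messages for the Pro/1000 ports
          ethtool eth1                      # shows "Link detected:", speed, and duplex
          cat /sys/class/net/eth1/carrier   # 1 = carrier present, 0 = no link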
  22. Because, in theory, the drive should still be usable short term to get the array started. I realize it's not the same drive, but it's the right size. My biggest issue right now is that the drives I want to use are bigger than my parity drive; this way at least I can try to get this working. Right now it's an expensive flower pot. Yes, I have backups, and I've written the data off as gone, I just have no place to restore the backups to yet. As for SMART warnings, yes, as I said, I'd been getting warnings for a while now; I just thought I had more time than I did.
  23. Now that I read that, with my head not buried inside the server case, it makes sense, so thanks for that. Many reboots, many failed array starts, and then I tried both drives in two other machines here. One would not even finish booting with either of the drives attached, and the other, with an external drive dock, didn't detect either of them when plugged in. So I'm calling them dead.

      A friend is bringing over a 4TB drive later today that he pulled out of his array due to read errors; we're hoping it will limp along enough to get mine to a state where I can at least swap the parity and then look at adding in the other 10TB drive.

      I was getting warnings for months on one of the failed drives, read errors galore. The other reported errors maybe twice in the last month, maybe three times. I chose to ignore them, so it's 100% my fault, and for months it was entirely avoidable. You always think you have time to get new drives before they become a problem, until you don't!

      Thanks for your help and clarification!

      D