Lignumaqua

Members
  • Posts: 58
  • Joined
  • Last visited


Lignumaqua's Achievements

Rookie (2/14)

Reputation: 8

  1. Yes! Now starts up in a couple of seconds again. Thank you. 🙂
  2. This is almost identical to my log, except I have a slightly longer 13-second delay between 'POST RETURN (getFavourite)' and 'POST RETURN (get_content)'. Log attached. Also, yes, I do have a number of Dockers with the '?' icon, but they are my own Dockers, which have no other defined icon and deliberately use '/plugins/dynamix.docker.manager/images/question.png'. CA-Logging-20211020-1450.zip
  3. I am seeing the same. CA used to load in a couple of seconds; now it takes 18 seconds, nearly as long as the Plugin page, which has always been slow. This is on Unraid 6.9.2, also on a 1Gb connection and using an SSD.
  4. All worked this time, thank you. BTW I wasn't able to both remove parity2 and replace parity1 in one operation. Unraid complained that was too many changes. Took two steps, but all is fine. Thanks everyone for your help! 🙂
  5. Thank you @itimpi, that’s helpful. Yes, I am currently running a parity check with the old drives. It found a lot of errors, not surprisingly. 🙂 I also managed to trace the original problem back to a cable (isn’t it always) which triggered CRC errors, causing Unraid to disable the drive. I now don’t believe there was an actual disk failure at all. It was a chain of issues caused by the cable failure: the cable triggered the CRC errors, which caused Unraid to mark the drive disabled, which, in turn, caused everything else to fail. Unraid behaved as it should, with one exception I believe: it shouldn’t have then locked up the GUI so that I was unable to get in to fix anything. Assuming this parity check finishes in an hour as scheduled, I have two questions please:
     1. General question - when a parity check fixes errors, I assume it does so by changing the parity drive, not by changing the data drives. Is that correct? (See the sketch just after this post.)
     2. When I build up the courage to try this again, can you confirm that I am following the right procedure to replace dual 4TB parity drives with a single 8TB parity drive? (So I can in the future replace some of my 4TB data drives with 8TB versions.)
        a. Stop array and remove 2nd parity drive from configuration
        b. Restart array
        c. Stop array and replace 1st parity drive with new 8TB drive
        d. Restart array and rebuild parity onto new 8TB drive (this is where I had the failure last time)
        e. Stop array and assign the two old 4TB parity drives as new data drives
        f. Restart array
        g. Sleep peacefully.
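For intuition on question 1 above: Unraid's first parity drive is plain XOR parity, so a correcting check that finds a mismatch recomputes the byte from the data drives and rewrites only the parity drive. A minimal sketch (my own illustration, with made-up data):

```js
// Single (XOR) parity, as used by Unraid's first parity drive: each parity
// byte is the XOR of the bytes at the same offset on every data drive.
const dataDisks = [
  Uint8Array.from([0x12, 0x34]),
  Uint8Array.from([0xab, 0xcd]),
  Uint8Array.from([0x0f, 0xf0]),
];

function computeParity(disks) {
  const parity = new Uint8Array(disks[0].length);
  for (const disk of disks) {
    for (let i = 0; i < disk.length; i++) parity[i] ^= disk[i];
  }
  return parity;
}

// A correcting parity check recomputes this from the data drives and, on a
// mismatch, rewrites the parity drive; the data drives are left untouched.
const parity = computeParity(dataDisks);

// The same relation is what makes a rebuild possible: a single missing data
// drive is recovered by XOR-ing parity with the surviving data drives.
const rebuiltDisk0 = computeParity([parity, dataDisks[1], dataDisks[2]]);
console.log(rebuiltDisk0); // Uint8Array [ 18, 52 ], i.e. [ 0x12, 0x34 ]
```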
  6. We're back. I went to Tools/New Config and reassigned everything back as it had been in the first place, with the two original parity drives, and removed the new 8TB drive. I then checked the "Parity is already valid" checkbox, crossed my fingers, and started the array. So far, so good; I think everything is working again. That was pretty scary. The GUI was frozen, and nothing was working. I had to boot into Safe Mode to get anything going again.
  7. Server very dead, would appreciate help. I think it’s recoverable but I don’t know how. Here's what happened:
     1. All running fine, using 2 x 4TB parity disks and 7 x 4TB data disks.
     2. Need to increase size, so add a new 8TB disk with the intention of this becoming the new parity disk.
     3. Stop array and disable parity disk 2.
     4. Start array with one 4TB parity disk.
     5. Stop array and designate the 8TB disk as the new parity disk (replacing a 4TB drive).
     6. Start array. Parity rebuild automatically restarts. Life is good.
     7. A few hours into the parity rebuild, errors show up on disk 6 and the parity rebuild fails.
     I still have the original 2 parity disks in the system as unassigned devices and untouched. Can I reassign them as the parity and rebuild the data on the now failed disk? As it is, the whole array is stalled. 😞
  8. And he has! 😃 Thanks Squid, all your hard work is much appreciated.
  9. Yes, see my post above. This had been working fine but broke when the single quotes were replaced with HTML character entities in mid March. Unfortunately some (all?) browsers decode the entity back into a quote, which breaks the JavaScript. I had to edit the strings in tests.php to remove the apostrophes before I could get it to work. I'm sure Squid will post an update.
  10. FYI - I was able to fix this (well, mask it really, I didn't fix the root problem) by editing the two addWarning() strings in tests.php to remove the apostrophes, and deleting ignoreList.json again. The strings now read 'The unRaid built in FTP server is currently disabled, but users are defined' and 'The unRaid built in ftp server is currently running'. The buttons now work as expected. Any other error strings with apostrophes likely still have the same problem.
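For what it's worth, here is a sketch of what a more general fix might look like, rather than stripping apostrophes string by string. tests.php is PHP, but to keep the illustration self-contained I've written the same idea in JavaScript; the helper name is mine, not the plugin's:

```js
// Hypothetical helper (the name is mine): escape a warning message so it can
// sit inside a single-quoted JS string in an inline onclick handler, instead
// of hand-editing each addWarning() string to remove apostrophes.
function escapeForSingleQuotedJs(message) {
  return message
    .replace(/\\/g, "\\\\")  // escape backslashes first
    .replace(/'/g, "\\'");   // then apostrophes / single quotes
}

// "unRaid's built in FTP server..." -> "unRaid\'s built in FTP server..."
const safe = escapeForSingleQuotedJs("unRaid's built in FTP server is currently running");
console.log(`readdError('${safe}', this.id);`); // now parses as valid JS
```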
  11. Thanks, but I've tried that. In fact I deleted everything and started again. Still has this problem I'm afraid, the single quotes in these two error fields break the JavaScript 'Monitor Warning/Error' buttons. Version installed is 2021.03.24
  12. Looks like 'unRaid's built in FTP server is running' has the same problem. The 'Monitor Warning/Error' button doesn't work because of the single quote/apostrophe breaking the JavaScript. Any suggestions on how to fix this?
  13. Recently I've been getting the 'unRaid's built in FTP server is currently disabled, but users are defined' warning every day. I've tried adding it to the ignored list but it doesn't help. So, I thought I'd try UNignoring it and then REignoring it. However, the 'Monitor Warning/Error' button doesn't work. Looking in the browser console I can see why: there is an unescaped apostrophe which is breaking the JavaScript. This is the line:
      <input type="button" id="1360801943" value="Monitor Warning / Error" onclick="readdError("unRaid" s= built in ftp server is currently disabled, but users are defined&quot;,this.id);'<td>
      Looks like the apostrophe in the text unRaid's built in ftp server isn't being dealt with correctly, which leads to an 'Uncaught SyntaxError' when the button is clicked.
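To make the failure concrete, here's a minimal reproduction (my own reduction in plain JavaScript, not the plugin's actual code):

```js
// The warning text contains an apostrophe and gets interpolated raw into a
// single-quoted JS string inside the onclick attribute.
const msg = "unRaid's built in ftp server is currently disabled, but users are defined";

// The generated handler body becomes:
//   readdError('unRaid's built in ftp server ...', this.id);
// The apostrophe in "unRaid's" terminates the string early, so the browser
// reports 'Uncaught SyntaxError' when it parses the handler.
const handler = `readdError('${msg}', this.id);`;

try {
  new Function(handler); // parse the handler the way the browser would
} catch (e) {
  console.log(e.name);   // SyntaxError
}
```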
  14. After hours of testing I now think this is nothing to do with Unraid. Instead, it's a fault with my UniFi WiFi set-up. It seems that when meshing is enabled it creates a network loop. I've disabled every client and it still does it, so it's one of the UniFi devices themselves rather than a client. If I disable meshing all is good. This is clearly off topic for Unraid, so I consider this solved.
  15. With no changes that I'm aware of, my log is suddenly getting flooded with messages like this, and network access to Unraid is very slow, presumably because the NIC is getting overloaded:
      Jan 2 23:58:45 Tower kernel: br0: received packet on eth0 with own address as source address (addr:d0:50:99:c2:d5:1b, vlan:0)
      I know this message can appear if you have bridged NIC interfaces without bonding them. I do have two NICs connected, but they are neither bridged nor bonded and have been working perfectly for some months, each with its own IP on the same network (I did this to allow VMs to have their own interface). To be sure using two NICs wasn't the issue, I temporarily disabled the eth1 interface, but the messages persist. The messages come in for both br0 on eth0 and br1 on eth1. I've also tried rebooting and disabling Dockers and VMs, but so far nothing has helped. It's probably something I've done unwittingly, but I would really appreciate any help in tracking this down! 🙂 Diagnostics attached. tower-diagnostics-20210102-2358.zip