Everything posted by salvdordalisdad

  1. Hi JorgeB, Thanks. I did a quick & dirty single-disk array and got Docker up & running on a fresh demo license & new USB stick (seemed quickest!). These are the stats from the disks:

     disk   max MB/sec   min MB/sec
     sdf       200           97
     sdg       157           70
     sdh       200           87
     sdb       150           73
     sdc       200           94
     sdd       183           84
     sde       142           67

     They all seem pretty much within a similar range; no single one stands out as clearly having difficulties. Any thoughts? Thanks in advance. (Nice picture - Slava Ukraine! )
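     For anyone wondering what those per-disk figures actually measure: a benchmark like DiskSpeed essentially times how fast sequential reads come off each drive. A minimal Python sketch of the same idea is below - the path and sizes are placeholders, and a real benchmark would read far more data and drop the page cache first:

     ```python
     import time

     def sequential_read_mb_s(path, chunk_mb=1, max_mb=64):
         """Rough sequential-read throughput of a file or block device, in MB/sec.

         Simplified sketch of what a disk benchmark measures; for a real disk
         you would read gigabytes and bypass the page cache (O_DIRECT).
         """
         chunk = chunk_mb * 1024 * 1024
         limit = max_mb * 1024 * 1024
         read_total = 0
         start = time.monotonic()
         with open(path, "rb") as f:
             while read_total < limit:
                 data = f.read(chunk)
                 if not data:          # end of file/device
                     break
                 read_total += len(data)
         # guard against a zero-length interval on tiny cached reads
         elapsed = max(time.monotonic() - start, 1e-9)
         return (read_total / (1024 * 1024)) / elapsed
     ```

     Run against e.g. /dev/sdf (as root) it gives a number comparable to the table above, though without cache-dropping it will flatter any recently read disk.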
  2. That's really interesting, but what if I have no cache? Anyway, I started with a fresh 6.10 trial on a new USB, as I had clearly got something wrong & I'm comfortable with starting from scratch. So I ran the DiskSpeed benchmarks and was a little surprised at the max & min values:

     disk   max MB/sec   min MB/sec
     sdf       200           97
     sdg       157           70
     sdh       200           87
     sdb       150           73
     sdc       200           94
     sdd       183           84
     sde       142           67

     So I chose the fastest (sdc) as the parity. Are these figures good enough? Do they indicate any underlying issues? Anyway, I just rebuilt the array again and started it, and it's now saying 16 days. The CPU is trundling along at 3% util, so it's not that. All the SATA ports are 6Gbps, and all the disks are 6Gbps. Any suggestions, or just "yeah, it might do that"? Thanks in advance for any pointers. Also - when I want to move back to the old USB, what do I copy from this USB drive to that one, and what do I delete on that one? (I know that the license is in the plus.key file, but virtually naff all else of what's on there.)
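     As a sanity check on that 16-day estimate: a parity sync has to stream every byte of the largest disk exactly once, so its wall time is bounded below by capacity divided by the slowest disk's sustained throughput. A rough back-of-envelope calculation, assuming 4TB disks and that the slowest measured speed above (sde at 67 MB/sec) holds for the whole pass:

     ```python
     def parity_sync_hours(disk_bytes, slowest_mb_s):
         """Lower-bound estimate for a parity sync: the array must stream
         every byte of the largest disk once, so wall time is at least
         capacity / slowest-disk throughput."""
         return disk_bytes / (slowest_mb_s * 1e6) / 3600

     # 4 TB disks, slowest sustained read of 67 MB/sec:
     hours = parity_sync_hours(4e12, 67)
     ```

     That comes out around 16-17 hours, so even the slowest disk in the table should finish overnight; an ETA measured in days points at something other than raw disk speed (controller, network chatter, or a stalling process).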
  3. Hmm, yes it sort of works, but it still says "path does not exist". I think I'm going to scratch this server & start again. Now I need to find out how to do that on the existing USB drive... some searching to do, I think. But not right now - out of time; it'll have to be after the weekend now. ;-/ Thanks for the nudges, much appreciated...
  4. So I did some digging into those diags and saw a LOT of dead TCP connections, and spotted the port_ping user script. I cancelled all of the user scripts (legacy from the previous config) and rebooted. Now those connections are gone, which is as expected, and the parity build time has gone down to 9 days - which is an improvement - woohoo! I think I've spotted another slight issue... Can't run the DiskSpeed Docker... "path does not exist". There was no /mnt/system/docker folder, so I created one - now it doesn't complain about that any more. There's no /mnt/user/appdata folder either, but I can't create one of those... "no medium found". Contents of /mnt/user:

     root@n3:/mnt# cd /mnt/user
     root@n3:/mnt/user# ls -l
     total 0
     drwxrwxrwx 1 nobody users 6 May 19 11:07 filestore/

     I'd expected a bunch of folders: system, appdata, domains... Do I just create those, or is something lower-level going to be required to fix that? Open to suggestions. Tried creating "system" as a share - it didn't do anything; no folder appeared in /mnt/user at all. So I created the folders /mnt/system/docker and /mnt/user/filestore/appdata, but even with those fudged, Docker refused to start. Definitely something lower-level needed. I will tidy up that mess I just made... and await the right instructions (even if those instructions involve starting from scratch). Thanks in advance, Lee
  5. Hi Guys, I just recovered my 3rd Unraid server ("Plus" license) and replaced a couple of disks, intending it to become a backup target. So I zeroed all the disks and created a new array of 7 x 4TB disks, with one of them as parity. All the disks are completely blank, freshly formatted in XFS. These were happy in the old servers, so I assumed they'd be OK for this job. However, it seems not: the parity build (1st time, as it's a new array) says it will take 25 days to complete. When it first started, it said 10 hours, and it has crept up since then. So clearly something's wrong, probably a dying disk, but I don't know how to diagnose it, so please can someone read the diags and point me in the direction of the faulty disk? n3-diagnostics-20220519-1514.zip Thanks in advance. Much appreciated. Lee
  6. ...aaaaaand that was the cause. Why on earth would you provide 2 M.2 slots when only one of them works?! That shouldn't be in teeny-tiny small print on page 57; it should be in large print with a WARNING sign... oh well, you live & learn. ;-/
  7. Hi Guys, just re-using a 256GB NVMe drive (Intel, I think) for use as a cache experiment. The drive is working; I cleared all the partitions and reset it as GPT. n2-diagnostics-20220102-1701.zip There are 2 M.2 slots on this motherboard, so I used slot 1 and re-powered the server. But it doesn't show up - I think I must be missing something. The motherboard doesn't seem to need to have the M.2 slots enabled. Here's the diag download - not sure it's any use, as the drive isn't showing up there either! I've done this on many occasions before and not hit this snag... I've done a bit of googling, but all the answers get very specific quickly, so better to ask for help, I think. Open to suggestions as always. Thanks in advance. Supplemental: just read through the MB manual, and it looks like maybe one of the slots is disabled! (A very-small-print bit about "...only supported in 11th gen CPUs...".) (I HATE small print!) Will move it & update.
  8. D'oh! Rookie error! Very grateful for that. Increased my understanding by one notch... Thank you.
  9. Hi Guys, not sure it's a bug, but I just updated from 6.8.3 to 6.9.2 and was advised to post it here to let you (dlandon) know. The 1st thing I noticed that was OK but now isn't quite working is on the "Main" tab. Under the SMB / NFS / ISO shares section, the orange buttons labelled "Add Remote NFS/SMB share" and "Add ISO File Share" had disappeared and a debug line was displayed instead. So I deleted my two NFS shares and the buttons re-appeared. But if I add an NFS share back again, the "add" buttons disappear and I get this line of debug instead:

     Fatal error: Uncaught Error: Call to undefined function _() in /usr/local/emhttp/plugins/dynamix/include/Helpers.php:35
     Stack trace:
     #0 /usr/local/emhttp/plugins/unassigned.devices/UnassignedDevices.php(387): my_scale(0, NULL)
     #1 {main}
       thrown in /usr/local/emhttp/plugins/dynamix/include/Helpers.php on line 35

     The share process works exactly as expected and the share is certainly working AOK. However, I can't now add a 2nd share because the buttons have disappeared. Suggestions welcome on how to debug further to get you the info you need. Thanks in advance.
  11. Hi, I have a script which updates my DDNS and uses dig, which now throws library errors. Nothing has been updated in a while, so I'm not sure why it's suddenly broken. Tested on my primary & backup systems - both give the same result. So I'm trying to understand why and fix it, which is how I ended up here. I tried your "fix" of updating those two libraries, but both those links say "404 file not found". So I looked closer and used the ones which are there on the Slackware repository, but I think that was a bad idea, as it hasn't fixed dig or nslookup and gives more errors: dig now produces several 'GLIBC_2.33' not found messages. I have no idea what else has broken, and now the NerdTools page shows that those two are "update ready" but they don't update. I haven't done this fix on my primary system, just the backup system. I assume this will get fixed in the fullness of time, so I'll await that and find an alternative in the meantime. Thanks guys.
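      For what it's worth, a "version 'GLIBC_2.33' not found" error means the dig binary was linked against a newer glibc than the one the system ships, so swapping individual libraries in from a repository can't fix it cleanly. A small stdlib-only Python sketch for checking what the running system actually provides (the required version string here is just the one from the error message):

      ```python
      import platform

      def glibc_at_least(required="2.33"):
          """Report whether the running C library is glibc at or above
          `required`. A binary demanding GLIBC_2.33 symbols cannot run
          on a system where this returns False."""
          name, version = platform.libc_ver()
          if name != "glibc" or not version:
              return False
          def as_key(v):
              return [int(p) for p in v.split(".")]
          return as_key(version) >= as_key(required)
      ```

      If this returns False for "2.33", the only real fixes are an OS update that brings a newer glibc, or a dig built against the older one.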
  13. Well, isn't that interesting... That port 80 was listening was a clue, but it's not obvious from the Unraid Docker settings why it's not set right. The "Advanced settings" shows 2 options for port access using http (and 2 for https, not that I will use that indoors). The first one isn't adjustable very much, but the second one works! I set the first one to 9999, which clearly doesn't work, as there's nowt listening on 9999 internally. I set the second one to "container port 80" and "host port 99". Yup, that worked... well, mostly. http://host:99 opens the blank starting page, which is the best it's been in days; however, the shortcut click on the icon sends me to port 9999, which is obvs wrong, but I guess I'll live with that for now. I'm guessing there's a config file somewhere I could play with, but it's beyond me currently! Hopefully someone will see wot I dun wrong and suggest the file to edit to put it right - no hurry though; maybe I won't actually get around to fixing it anyway! But, in the meantime, how do I get the configured pages, icons, apps etc. from one Heimdall instance on my old Unraid server to my new one? I suspect that tarring up the whole directory from the old one & over-writing the new one is a bit harsh, and might upset things... Thanks again
  14. So I'm doing some more basic diags, and learning a lot! tcpdump (from the Nerd Pack) shows the incoming connections to 172.168.0.2:metagram (I looked it up, and metagram is TCP port 99), so that's reasonably good - it means the container config in Unraid is correct. Then I thought I'd check the open (listening) ports inside the Docker (I didn't know I could do that 3 hours ago!). netstat -lntup shows the only listening ports are:

      0.0.0.0:443
      127.0.0.1:9000
      0.0.0.0:80

      So that doesn't really help either, as it kinda shows it's not listening on port 99... More playing needed...
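      A quick way to separate "the container isn't listening" from "the port mapping is wrong" is to probe each port with a plain TCP connect, once from another machine against the host port and once inside the container against the service port. A minimal Python sketch - the host names and port numbers in the usage note are only examples from this thread:

      ```python
      import socket

      def port_open(host, port, timeout=2.0):
          """Return True if a TCP connect to host:port succeeds.

          A successful connect means *something* is accepting on that
          port; a refusal/timeout means the service isn't there or a
          mapping/firewall is in the way.
          """
          try:
              with socket.create_connection((host, port), timeout=timeout):
                  return True
          except OSError:
              return False
      ```

      For example, port_open("192.168.4.4", 99) from another box tests the Docker mapping, while port_open("127.0.0.1", 80) run inside the container tests the web server itself; if the second is True and the first is False, the mapping is the problem.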
  15. Hi Chaps, I too can't seem to access Heimdall... Unraid 6.7, about 14x freshly deleted & reinstalled, even with default settings... I always get "unable to connect". I did see a similar post to my issue, with the comment "you've configured it wrong", so I checked: bridge mode, port 99, all the default values as far as I can tell (there are not that many, tbh). It was working OK on my demo Unraid license, but then I started all over from scratch, and now it refuses point blank. I've even deleted all my dockers (only 6, so nbd, but configured, so slightly wasted effort), and now I only have Heimdall, not working. Tried bridge, host (apparently that's a real no-no!), lots of different ports, and there's not much else to choose from! Maybe move the docker.img and appdata locations to an unassigned SSD drive? (Just to make sure there's no underlying issue with the newly created docker.img file.) Edit: tried moving docker.img to the SSD drive - no difference. (But what about recovering if/when the SSD goes pffft?) One thing worth noting: this Unraid box has two subnet interfaces, one for "normal" stuff and a DMZ just for the home-automation IoT dash buttons etc. So they're noted during the Docker installation, and there's only 1 gateway, on the normal subnet. The Heimdall installer page makes no mention of which network interface to use, but correctly chooses the normal one and shows the 172.16.0.2:99 -> 192.168.4.4:99 mapping in the Docker summary page. How do I check log files? The Docker log seems unhelpful, and I know nowt about Docker's internals... Grateful for any nudges in the right direction; thanks in advance. salvadorddalisdad
  16. Hi there, looking at dashbtn because it's exactly what I want to achieve and it looks great. Clearly it's looking for ARP packets (as that's all the button does when it wakes up). However, it's not seeing any ARP packets. Unraid 6.4, and there are two VLANs: the native one and VLAN 333. I gave dashbtn the br0.333 network type and an IP address on that subnet, and can ping successfully to and from it. However, when I run the dashbtn.py program and hit the button, the log file has no entry for the button, only "dashbtn service started" messages. I got tcpdump onto the Unraid CLI, and that definitely sees the ARP packets:

      root@s1:~# tcpdump -i br0.333 -v arp
      tcpdump: listening on br0.333, link-type EN10MB (Ethernet), capture size 262144 bytes
      22:12:12.168356 ARP, Ethernet (len 6), IPv4 (len 4), Request who-has 192.168.200.1 tell 192.168.200.129, length 42

      But the dashbtn log file never gets to see it. I can't run tcpdump in the docker to prove it's getting the ARP packets, so I'm a bit stumped. Any suggestions very welcome and gratefully received; I'm sure I'm doing something wrong or unexpected! (Not a clue about the nuts & bolts of Docker, otherwise I might try to reverse-engineer it to add tcpdump, but I'd probably only break it!) Thanks in advance.
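      In case it helps with the reverse-engineering itch: a Dash-button watcher works by sniffing raw Ethernet frames (which needs root and an AF_PACKET socket bound to the right interface, e.g. br0.333 - if the container isn't actually in that network namespace it will never see the frames, which would explain the empty log) and picking out the ARP ones. This sketch shows only the parsing step, with field offsets taken from the standard ARP-over-Ethernet layout (RFC 826); it is an illustration, not dashbtn's actual code:

      ```python
      import struct

      ETH_TYPE_ARP = 0x0806  # EtherType for ARP

      def parse_arp(frame: bytes):
          """If `frame` is an Ethernet frame carrying an ARP packet,
          return (sender_mac, sender_ip) as strings; otherwise None.

          Layout: 14-byte Ethernet header (dst 6, src 6, ethertype 2),
          then ARP with sender MAC at bytes 22-27 and sender IP at 28-31.
          """
          if len(frame) < 42:          # minimum Ethernet + ARP size
              return None
          (ethertype,) = struct.unpack("!H", frame[12:14])
          if ethertype != ETH_TYPE_ARP:
              return None
          mac = ":".join(f"{b:02x}" for b in frame[22:28])
          ip = ".".join(str(b) for b in frame[28:32])
          return mac, ip
      ```

      A watcher would feed frames from socket(AF_PACKET, SOCK_RAW) into this and trigger when the sender MAC matches the button's; the tcpdump line above ("Request who-has 192.168.200.1 tell 192.168.200.129") is exactly the kind of frame it would match.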