BLKMGK

Everything posted by BLKMGK

  1. If someone with an ASUS ACE workstation board could take a look at their board during bootup I'd appreciate it. There's a set of lights near the memory that count down as it goes through the boot process. Mine is stopping at the white LED after it hung while I was trying to adjust memory speeds. Does your board progress past the white LED? The next one looks to be labeled "boot". <sigh> Mine was running well until I tried to get the memory near spec; now I no longer get keyboard lights or anything. I don't have a chassis speaker so no idea if it's beeping, and I've found no docs for beep codes for this board anyway - certainly not in the user manual. Pulled the CMOS battery, jumped the PITA pins, and am going to let it sit sans battery while I travel a few days and hope like heck it comes back to life - am NOT happy! Problem "solved" thanks to Microcenter doing an exchange! The new board boots and runs fine so far. Now to find a heatsink that will fit in a 4U case. The 120mm Noctua AM4 cooler sits about 5cm too high, so I'm stuck with the stock cooler for now. This thing flies, cannot wait for the 3950X!
  2. Loaded defaults (again), swapped cards, no change. Removed the second card and it now sees the drives, ugh! Mind you, it was booting fine previously with two cards! Way more finicky than it should be, for sure. Oh, these are M103 cards but pretty much the same thing either way. I cannot recall if these cards displayed their BIOS in the past, but they sure don't now. I'll get a Windows OS loaded in there for some tweaking and testing ASAP; it will run Linux most of the time when done, though, I think. Not sure this is the board that will get into my unRAID chassis, but this CPU will when the 3950X comes out and this machine gets another upgrade. IMO the 470 boards might be more economical for unRAID; anyone seen any with more slots and real onboard video? Edit: and now I'm finding that the IBM M1013 cards aren't supported by WIN10, SAS9220-8i likewise (same card). This upgrade is turning out to be pretty frustrating lol
  3. Got mine fired up today and flashed the BIOS to 0702 after first having seen it boot just fine into the Linux disk in my system. After playing in the new BIOS some, I attempted to get back into Linux - no go. The system doesn't appear to see any of my PERC cards and thus no boot device. What firmware is everyone running? It had a 200 firmware when I first booted and this 0702 is the only one I see on the ASUS site right now. Can't do much with it if it won't see my SAS cards!
  4. This video seemed to explain it best and I've saved it off to use for mine for sure. Some of the voltages seemed awfully high, and I know a few reviewers have had CPUs fry on them, so this perked my ears up pretty quickly! Skip to about the 3min mark!
  5. This is very helpful and I truly appreciate you having done all the maths! If I'm following you, I believe this means that using an expander won't hurt me. I'm a ways away from setting up unRAID on this but am trying to be prepared. I do have an expander card at least, and the cabling for it, so I feel good about that. I'll set up my board in a standard Linux build as soon as my memory arrives; shipping is taking ages from Newegg - ugh. Again, much thanks, and when I can put this together I'll post any lessons learned. I've really wanted to get my drive speeds up and had hoped they were bottlenecked by my controllers - perhaps not! My last parity check finished at 103MB/sec FWIW. One thing I'm seeing that everyone should be aware of is reports of ASUS firmware setting voltages WAAAY too HIGH! Be sure to look into this on a fresh build and update firmware ASAP.
  6. Can you give any additional details on the riser card? I've tried looking for something like that but no luck as I'm not sure what the heck to call it?
  7. I have an expander already in a box, but based on what I saw in Johnnie's test it looks like using one slows transfer speeds down too. Right now I use 3x PERC H310 cards in my existing system and have IBM M?? cards as well, which are pretty close to the same thing. We need better cards I think lol!
  8. I intend to go Ryzen when the 16-core CPUs come out in September; I'll be moving a 12-core into my unRAID server from another system. I'm hoping that any Linux wonkiness will be worked out by then - I see that current up-to-date Linux kernels are having some issues with the new CPUs (Phoronix reported on it). The added clockspeed and IPC will be welcome even though I'll be reducing my core count; I believe it'll be a huge improvement! The adapter issue is a big one for me, likewise video. Most of the mobos I'm seeing have just 3x full-length slots and I'll need a video card - need to find a cheapie that can sit in a short slot. Each of my current adapters has 8x drives on it, and it looks like that's hurting me with shared bandwidth. Finding a higher-performance adapter might be nice, as would finding a board with more suitable slots! My first build (a server for Docker Swarm, not unRAID*) will be using an Asus Pro WS X570-ACE, but looking at it I'm not sure how suitable it might be for unRAID: it has the slot issue, plus I think I'd like a 10gig network connection for the future. I chose it because it doesn't have crap onboard like Bluetooth, WiFi, or spiffy audio, and it has some remote management features. It wasn't cheap though, and I'm disappointed in the 1gig network port. Has anyone else seen anything more suitable? Are there any better adapter cards that can handle a density of drives like the current crop of cheap SAS cards but not lose so much speed? A nasty PCIe 4 card would rock, but I'll be surprised if we see something reasonable anytime soon. @johnnie.black did some awesome testing previously; has anything better come along since? I will at least try booting unRAID on my new system to see how it might fail and report on it! * wish I could get Docker Swarm working on unRAID
  9. Another big thanks! Loaded it up and it's running stable as an unRAID server. I'm still seeing Swarm issues with our project, though, and at this point I'm starting to suspect something else could be amiss, so it would be helpful if someone else could test as well. Same error about a route to host not being found. 😣 I have spun up another standard Linux host as a VM and am waiting for my more knowledgeable friend to lend a hand adding it to our swarm and testing, in case I screwed something up previously. Docker DNS seems pretty weird, so diddling with it is confusing for me when trying to troubleshoot. The suggestion of a single-host swarm seems like a good one and maybe we can try that on this test system. Unfortunately I'm about to take a trip away from home for a few weeks. I'll have a laptop and VPN access at least, and my partner in crime will be coming along too, so hopefully there will be some downtime to better troubleshoot this together. I'll update as I figure things out; I suspect my server will be getting a 12-core Ryzen soon so I'd love to be able to utilize it fully. BTW it's pretty weird seeing the container appear and disappear as the swarm comes up and down, let me tell you! unRAID currently doesn't have an XML file for it so it's being loaded at the CLI. I'll figure out how to load it more normally in the future once the silly thing works. A big THANK YOU to @CHBMB!
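For anyone curious what "loaded at the CLI" would look like done properly: a Swarm service like this would normally be described in a stack file. Here's a minimal sketch of what one might look like - the service name, image, node label, and network are all hypothetical placeholders, not our actual project's:

```yaml
# Hypothetical Swarm stack file - every name here is an illustrative placeholder.
version: "3.7"
services:
  worker:
    image: example/encode-worker:latest    # placeholder image name
    deploy:
      replicas: 1
      placement:
        constraints:
          - node.labels.role == encoder    # only schedule on nodes labeled as encoders
    networks:
      - encode-net
networks:
  encode-net:
    driver: overlay    # overlay networking is the piece that needs kernel support
```

Something like `docker stack deploy -c stack.yml encode` from a manager node would then deploy it.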
  10. So yup, I needed to label the node properly and then it began being available to the swarm, so that was good. However, after some troubleshooting inside the worker container we're still getting errors as it attempts to connect back to its tasking engine: an error 113 - "no route found to host". Inside the container we can ping by hostname, but the IP we get is crazy and seems to change depending upon the instantiation of the container. We suspect that Docker's weird DNS mechanisms may be confusing the troubleshooting. In any case we are unable to get a route back to the tasking database, so the app fails; the worker cannot get work and never checks in. Not sure what's causing this. I believe tonight I may drop one of the other (not unRAID) nodes and go through the steps of re-adding it, to ensure that something hasn't occurred with our container somehow. If it works perfectly with a standard Linux install I'm not sure what the next steps will be. I may try loading up a VM on unRAID, installing Docker, and trying that - this seems incredibly backwards to me, however. Will try to troubleshoot this more tonight; it's driving me crazy but I've got to get out of the house now lol, so just an update.
  11. I've got the test system up and running on the custom kernel, and it's joined into the swarm (I could do that before, though), but I'm still having some issues that could very easily be my ignorance. A friend of mine who's been working with me on this is going to need to take a look at the error I see. From what I can tell it's a node labeling issue, but I had thought I'd built it correctly in the yaml file. Anyway, the kernel appears stable and I'm hopeful that once I figure out my configuration issue (and no doubt learn something) this will WORK. Fingers crossed and a big THANK YOU to @CHBMB for giving me something to test with! I'm hopeful and truly appreciative of the investigation he did on this, as frankly it was beyond me. Stay tuned; as I know more I'll be sure to share it with everyone. If others have need of Swarm and could test their use cases, that would be helpful too, since I may not be pushing many boundaries here.
  12. I appreciate you taking the time to diff these results! I had to run out the door after posting, but I'm not surprised there are other differences. The module I highlighted is one of the modules listed as "common"; the research I've done seemed to indicate it was a good place to start! I don't blame you for not wanting to tinker with the NVIDIA builds, for sure. As it happens, I've given a few unRAID systems to friends over the years as gifts; I've reached out to one friend who's not utilizing theirs heavily right now and will be bringing it home to test with, using the build you posted earlier. My fingers are crossed that I'll find success, or at least answers as to what's needed, and I can load test with it to see if weirdness occurs. I'm hoping this is enough; I recognize that enabling modules at the kernel level willy-nilly isn't a great idea, and am hoping that what I need proves benign to what Tom has in place and doesn't harm the KVM support either. I hope to test tonight if someone doesn't beat me to it! I really appreciate the support and responses, guys!
  13. Sadly this isn't completely true. Even RipBot allows you to run multiple encoding workers on a single machine. Ffmpeg and x.265 are indeed multithreaded, however they don't scale past about 6 threads very well. The machine I have with 48 threads was one I tested on, and it was a failure to say the least: I had a few threads working but quite a few threads sat completely idle. That same machine with multiple worker containers pegs all cores, and the fans become quite noisy. In any case, I currently have three machines in the cluster and would like to add my unRAID machine as a fourth! Edit: Also, Windows is going to be adding Docker support soon and we will make sure our workers can live on those. That would allow me to add at least two more machines, including one that's a competitor to my 48-thread system 😮
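To put a rough number on that scaling wall: if a single encode stops gaining much past ~6 threads, worker count per box can be ballparked straight from the thread count. A quick shell sketch - the 6-thread figure is from my own testing above, not a hard limit:

```shell
# Rough worker sizing: about one encode worker per ~6 hardware threads,
# since a single x.265 encode doesn't scale well past that point.
workers_for() {
  echo $(( $1 / 6 ))
}

workers_for 48   # 48-thread machine -> 8 workers
workers_for 32   # 32-thread unRAID box -> 5 workers
```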
  14. For reference, here's what I get when I run the check-config script on a machine that Swarms just fine:

      blkmgk@smaug:~$ ./check-config.sh
      warning: /proc/config.gz does not exist, searching other paths for kernel config ...
      info: reading kernel config from /boot/config-4.15.0-50-generic ...

      Generally Necessary:
      - cgroup hierarchy: properly mounted [/sys/fs/cgroup]
      - apparmor: enabled and tools installed
      - CONFIG_NAMESPACES: enabled
      - CONFIG_NET_NS: enabled
      - CONFIG_PID_NS: enabled
      - CONFIG_IPC_NS: enabled
      - CONFIG_UTS_NS: enabled
      - CONFIG_CGROUPS: enabled
      - CONFIG_CGROUP_CPUACCT: enabled
      - CONFIG_CGROUP_DEVICE: enabled
      - CONFIG_CGROUP_FREEZER: enabled
      - CONFIG_CGROUP_SCHED: enabled
      - CONFIG_CPUSETS: enabled
      - CONFIG_MEMCG: enabled
      - CONFIG_KEYS: enabled
      - CONFIG_VETH: enabled (as module)
      - CONFIG_BRIDGE: enabled (as module)
      - CONFIG_BRIDGE_NETFILTER: enabled (as module)
      - CONFIG_NF_NAT_IPV4: enabled (as module)
      - CONFIG_IP_NF_FILTER: enabled (as module)
      - CONFIG_IP_NF_TARGET_MASQUERADE: enabled (as module)
      - CONFIG_NETFILTER_XT_MATCH_ADDRTYPE: enabled (as module)
      - CONFIG_NETFILTER_XT_MATCH_CONNTRACK: enabled (as module)
      - CONFIG_NETFILTER_XT_MATCH_IPVS: enabled (as module)
      - CONFIG_IP_NF_NAT: enabled (as module)
      - CONFIG_NF_NAT: enabled (as module)
      - CONFIG_NF_NAT_NEEDED: enabled
      - CONFIG_POSIX_MQUEUE: enabled

      Optional Features:
      - CONFIG_USER_NS: enabled
      - CONFIG_SECCOMP: enabled
      - CONFIG_CGROUP_PIDS: enabled
      - CONFIG_MEMCG_SWAP: enabled
      - CONFIG_MEMCG_SWAP_ENABLED: missing
        (cgroup swap accounting is currently not enabled, you can enable it by setting boot option "swapaccount=1")
      - CONFIG_LEGACY_VSYSCALL_EMULATE: enabled
      - CONFIG_BLK_CGROUP: enabled
      - CONFIG_BLK_DEV_THROTTLING: enabled
      - CONFIG_IOSCHED_CFQ: enabled
      - CONFIG_CFQ_GROUP_IOSCHED: enabled
      - CONFIG_CGROUP_PERF: enabled
      - CONFIG_CGROUP_HUGETLB: enabled
      - CONFIG_NET_CLS_CGROUP: enabled (as module)
      - CONFIG_CGROUP_NET_PRIO: enabled
      - CONFIG_CFS_BANDWIDTH: enabled
      - CONFIG_FAIR_GROUP_SCHED: enabled
      - CONFIG_RT_GROUP_SCHED: missing
      - CONFIG_IP_NF_TARGET_REDIRECT: enabled (as module)
      - CONFIG_IP_VS: enabled (as module)
      - CONFIG_IP_VS_NFCT: enabled
      - CONFIG_IP_VS_PROTO_TCP: enabled
      - CONFIG_IP_VS_PROTO_UDP: enabled
      - CONFIG_IP_VS_RR: enabled (as module)
      - CONFIG_EXT4_FS: enabled
      - CONFIG_EXT4_FS_POSIX_ACL: enabled
      - CONFIG_EXT4_FS_SECURITY: enabled
      - Network Drivers:
        - "overlay":
          - CONFIG_VXLAN: enabled (as module)
          Optional (for encrypted networks):
          - CONFIG_CRYPTO: enabled
          - CONFIG_CRYPTO_AEAD: enabled
          - CONFIG_CRYPTO_GCM: enabled
          - CONFIG_CRYPTO_SEQIV: enabled
          - CONFIG_CRYPTO_GHASH: enabled
          - CONFIG_XFRM: enabled
          - CONFIG_XFRM_USER: enabled (as module)
          - CONFIG_XFRM_ALGO: enabled (as module)
          - CONFIG_INET_ESP: enabled (as module)
          - CONFIG_INET_XFRM_MODE_TRANSPORT: enabled (as module)
        - "ipvlan":
          - CONFIG_IPVLAN: enabled (as module)
        - "macvlan":
          - CONFIG_MACVLAN: enabled (as module)
          - CONFIG_DUMMY: enabled (as module)
        - "ftp,tftp client in container":
          - CONFIG_NF_NAT_FTP: enabled (as module)
          - CONFIG_NF_CONNTRACK_FTP: enabled (as module)
          - CONFIG_NF_NAT_TFTP: enabled (as module)
          - CONFIG_NF_CONNTRACK_TFTP: enabled (as module)
      - Storage Drivers:
        - "aufs":
          - CONFIG_AUFS_FS: enabled (as module)
        - "btrfs":
          - CONFIG_BTRFS_FS: enabled (as module)
          - CONFIG_BTRFS_FS_POSIX_ACL: enabled
        - "devicemapper":
          - CONFIG_BLK_DEV_DM: enabled
          - CONFIG_DM_THIN_PROVISIONING: enabled (as module)
        - "overlay":
          - CONFIG_OVERLAY_FS: enabled (as module)
        - "zfs":
          - /dev/zfs: missing
          - zfs command: missing
          - zpool command: missing

      Limits:
      - /proc/sys/kernel/keys/root_maxkeys: 1000000
  15. Wow, this blew up overnight! As I stated above, I'm no Docker expert, but I'm attempting to learn more and I DO have a use case for this. I've been relying on the check-config script and making an educated guess that this "missing" module was what was causing me issues. I understand, however, that it may not be properly checking our kernels, and it's possible there's a different module that's needed for what I'm doing, but this seemed a good place to start as it's networking that breaks for my use case. I would agree that having this done in an upstream fashion by @limetech is the way to go. I'm an avid user of the NVIDIA-enhanced kernels, but the last thing I'd like to see is more work placed upon @CHBMB, as he's already got his hands full with crazy support questions and to add to it would be insane! He has the ability to add this but that's asking too much IMO. I did briefly consider trying to compile and test a custom kernel myself, as documentation to do it exists (sans NVIDIA), but I figured I'd ask first as that's a learning curve I'm not yet willing to climb if possible. I'm presently running 6.7.1 RC1 (NVIDIA) but I will attempt to test the v6.7.0 RC1 kernel posted above and report back. It may take a day or so as I've got a lengthy list of jobs running right now that I'd like to complete first, so if anyone else is able to help out I'd appreciate it. Either way I need to test to ensure my issue is solved with this addition, or figure out what else might be blocking me. I've seen it asked elsewhere what containers would benefit from Swarm. One general thing that might be useful about enabling Swarm is DockerDNS, which I don't *think* works right now. DockerDNS, as I understand it, is an internal Docker DNS system that gets turned on when Swarm is activated, and it could prove generally useful as you can reference other containers by name internally.
That's not why I want this, however, so I'll try to explain better what I'm up to in hopes that folks might be a little more eager 😎 ============begin long explanation you can skip=========== Like many, I use my unRAID to store video. As such I try to be judicious in how I store it. I can rip a BD and end up with a 30GB file; I can then compress it down with x.265 and store it for a third of that or less while maintaining excellent video quality and not compromising on sound quality. The catch is that this requires some fairly hefty amounts of processing* and time investment. Most of us are familiar with HandBrake; some of us may also be familiar with RipBot264. RipBot264 allows you to "cluster" Windows computers in order to encode individual videos more quickly. Each computer encodes portions of the video and it's joined back together at the end. RipBot264 is pretty well supported and mostly works well too - I use it on a few Windows machines. However, I have a fairly decent amount of Linux hardware in my home, and the developer of RipBot is unwilling to support Linux; he advised me to run Windows VMs instead when I asked, and I'm not willing to do so. I sat down and analyzed how the RipBot264 program does its work, I studied ffmpeg and x.265 functionality, I asked ignorant questions, and I realized that duplicating what he was doing on Linux was completely possible, with the exception of AVISynth**. I also realized that it could be done more efficiently without some of the intermediate steps that program takes. I built a proof-of-concept script, tested it, and it worked WELL! I happen to work with some talented guys, and Docker is one of the technologies that we're beginning to use; Swarm is an area of interest too. I managed to interest some of my coworkers (one in particular) in my little project, as I'm not a well-versed programmer.
Working together we've built a system that can cluster-encode videos VERY well on a home lab I have set up that includes a 48-thread machine***, a 24-thread ESX server, and a small PC tucked in the corner for Kodi use. What I've been unable to join (properly) into this cluster is my 32-thread unRAID box 😢 End-state, I'd like to end up with a container that can live in the unRAID "app" repo, or be easily side-loaded, that could allow users to join unRAID servers into a cluster for added compute power with our new toy. I'd tried this by hand from the command line and was able to load a worker container and join our swarm, but was unable to get the container to receive any tasking or comms. This is when I began digging into what might be missing from the kernel etc. Thus I've arrived here! So YES, I have some need of unRAID more fully enabling Swarm functionality for our little project. As a side note - yes, we intend to release our code to others, ALL of it, and yeah, we will need some help in the future. My selfish hope is that once it's released to others with more skill than we possess, features we're interested in can be built. We still have a few things to add and test before making it public. Yes, there are products that do this commercially - at eye-watering prices. As yet I've found no built and supported open-source program duplicating this, so we've scratched our own itch, so to speak, as is the open-source way! There, sorry for the lengthy dissertation lol *at present I don't leverage GPU hardware for this but may begin leveraging it in the future; I'd want it just to speed up the math, not run through the GPU's onboard encoder, as I want as much control over quality as possible. For some reason RipBot refuses to use any of my current desktop GPU hardware - ugh. **see VapourSynth for a Linux solution that we've yet to touch. AVISynth is pretty powerful though, so yes, eventually we'll want filtering ability.
***this is all older XEON hardware, when Ryzen 3000 is released I'll be upgrading ALL of it including my unRAID server 😈
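The split-and-rejoin idea above is simple enough to sketch. This is only the chunk-boundary arithmetic, with made-up frame counts - the real pipeline cuts on keyframes with ffmpeg, encodes each range with x.265 on a different worker, and concatenates the results:

```shell
# Split a video of $1 frames into $2 roughly equal chunks and print the
# frame range each worker would encode. Illustrative numbers only; real
# cuts have to land on keyframes or the rejoin will glitch.
chunk_ranges() {
  total=$1
  workers=$2
  size=$(( (total + workers - 1) / workers ))   # ceiling division
  start=0
  while [ "$start" -lt "$total" ]; do
    end=$(( start + size - 1 ))
    if [ "$end" -ge "$total" ]; then end=$(( total - 1 )); fi
    echo "frames $start-$end"
    start=$(( end + 1 ))
  done
}

chunk_ranges 100000 4
```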
  16. The script to check the Docker configuration can be found here-> https://github.com/moby/moby/blob/master/contrib/check-config.sh
  17. I'm no Docker expert, but I'm attempting to add my unRAID server into a Docker Swarm I've got configured on my network. I'm able to join, but I get network errors when I attempt to coordinate client/worker containers on my unRAID server. There's a script for checking the Docker configuration named check-config.sh, and when I run it it looks like just a single module is missing, which near as I can tell explains my networking issues. I've seen other threads where people have compiled their own kernels to get around this, but since I also run the NVIDIA kernel it's not something I'd like to tackle myself. We've got pretty slick container support now, and I'm hoping this module was simply one that wasn't thought of when compilation was done for some reason. The line that gets printed as missing when I check the config is as follows:

      CONFIG_NETFILTER_XT_MATCH_IPVS: missing

This comes out of the "generally necessary" section and is the only module there not installed. Thanks!
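For what it's worth, what the script does for that line boils down to grepping the kernel config for the flag. Here's a hand-rolled equivalent of that one check, demoed against a throwaway fake config file (the real script reads /proc/config.gz or /boot/config-*):

```shell
# Minimal re-creation of check-config.sh's test for a single flag:
# a CONFIG_ option counts as enabled if it's =y (built in) or =m (module).
check_flag() {
  cfg=$1
  flag=$2
  if grep -Eq "^${flag}=(y|m)" "$cfg"; then
    echo "$flag: enabled"
  else
    echo "$flag: missing"
  fi
}

# Demo against a fake one-line config, not a real kernel:
printf 'CONFIG_NETFILTER_XT_MATCH_IPVS=m\n' > /tmp/fake-config
check_flag /tmp/fake-config CONFIG_NETFILTER_XT_MATCH_IPVS
```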
  18. I had been thinking of just making a backup of my appdata for this container, but I think I like this idea better - thanks! I can make a copy for a new container and allow it to upgrade, but doing it as a separate container is a good idea I think - going back will be much easier.
  19. Curious, is anyone using the V3 version of Sonarr with this container? It looks like we can request it by using the "preview" tag. I've been reading some good things about V3 and have become curious. Just wondering if anyone has tried this, and how it went, before attempting to take the plunge.
  20. I may be running into this too, it seems! This is also doubly weird in that I've already got the NVIDIA kernel loaded. Working with a friend to slave my server into a swarm and running into networking issues. 6.7.0-RC7 - any chance that for release we could get Swarm enabled, if it's not already? From what I can tell there's only one module that needs to be added: CONFIG_NETFILTER_XT_MATCH_IPVS: missing
  21. I checked, NOT there, and no, it's not saving data elsewhere. I just updated the container though and re-ran the script - which declared it had already been run for some reason, despite my rebuilding/upgrading. Refreshed the screen and it's THERE! I wonder if they did something to the latest release? In any case it's there and this is REALLY cool. One odd thing - I notice that I have, under users, a set of GUIDs that you don't. The GUIDs I have are for individual containers. I can see usage of various things for each of them - just by GUID and not name. It would be way more useful by name, but anyway I just thought I'd point out this difference, as you might also want to be able to monitor individual containers' resource usage. No idea how I ended up with this and you didn't, but let me know if you'd like to compare setup notes to try and get it! Thank You!
  22. Used your script, bashed into the container, and manually checked the python file - the setting is in the file! Appreciate the script, as I hadn't realized this setting was needed in addition to making the hardware available to the container. I assume by sidebar you mean the right-hand side where all of the various charts live. Despite adding the hardware to the container and modifying the python file - with a reboot - no dice. I've even done a search of the page for the word NVIDIA and not found it lol. Is this a specific plug-in that you've also added, or is the functionality in NetData already? Standard NetData container, right? Mine is from titpetric/netdata. I can run the SMI command in the container by bashing in - it sees the hardware. Very strange; I must be missing something simple. Is it buried under one of the headings that can expand? Thanks, the link made it very easy - much appreciated! I'll get it yet
  23. Where in Netdata are you finding this graph? I've got it passed through and can see nvidia-smi from the NetData console but cannot find this chart. I've run the command in the container etc. so I think I should be set but cannot find data in the interface.
  24. Most of the searches I've done indicate an IPMI issue; I do run the Dynamix SystemTemp plugin. I've had weird issues with incorrect things getting loaded and have had to occasionally hand-modify files on older versions of unRAID. Dynamix detects the proper driver as coretemp nct7904; I've hit the unload button though, which takes the drivers out of memory, and the issue MAY have gone away 😮 If so I'm much relieved, as this has been a nightmare lol. Oddly, my temps displayed at the bottom of the web interface are still working. All I've done is hit unload; the plugin is still onboard. I'll post back if the errors return; this might have solved it at last - thanks!
  25. If the encrypted data auto-unlocks what's the point of having it encrypted? My data is all encrypted and at each boot I give it the miles long password and the array starts up when I hit start. Unless someone has the password my data is useless to them. If it simply started up they need only take the array and press the power button - crypto defeated...