Everything posted by TexasUnraid

  1. While trying to track down a random kernel panic issue I am dealing with, I decided to upgrade to 6.10. After doing this, the above script method no longer seems to be working. The tab is still there, but when I click it I just get a 404 error. I checked, and the files are still copied to the same location on array start, and the script is unchanged from what was working yesterday. Any ideas why it stopped working after the update? I use this all the time, it is VERY handy. It cuts as much as 150-200W off the power draw when the server is not doing anything time sensitive, and I can crank it back up when I am working with it directly.
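     For context, the array start side of it is nothing more than copying the page files back into the webGUI folder, roughly like this (a rough sketch of my setup; the source folder, file names, and destination are placeholders, not anything official):

        #!/bin/bash
        # User Scripts, run at array start: copy the custom .page file (the tab)
        # and its helper back into the webGUI. Unraid runs the OS from RAM, so
        # these files are gone after every reboot. Paths/names are placeholders.
        SRC=/boot/config/custom-pages
        DEST=/usr/local/emhttp/plugins/dynamix
        cp "$SRC/MyTab.page" "$DEST/"
        cp "$SRC/mytab.php" "$DEST/"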
  2. Well, just set up my VM to farm from the machine in the garage over the very subpar wifi, just to see how things go. Seems my bandwidth is limited to ~60 Mbit/s and it keeps that pegged most of the time. Only a few of the GUIs show the proof time, but they seem to be hovering in the ~10 second range with 475 plots and 32 forks running. Gonna see how it goes as things settle in. Hoping that adding the mesh node closer to the garage will boost bandwidth out there to at least 100 Mbit/s+. Worst case, it is possible to run a Cat6 cable out there, but that is a lot more work.
  3. Yeah, doing that today myself on my VM. And actually thinking about completely changing how I do this: converting the system in the garage into nothing but a NAS and sharing the plots from it over the network, then using my VM to do the actual farming and access the plots over the network. It only has like 100 Mbit wifi in the garage right now, but I plan to add a mesh node that should improve it. I am finding that with 32 forks running, even 32GB of RAM is not enough; on my VM machine it is sitting at ~55GB of RAM usage with them all running right now. P.S. Have you got Silicoin working? It always locks up on me and never even gets to sync. P.P.S. What happened to that fork manager program you were working on a while back? Is that usable now?
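     If I go that route, the farming VM side would look roughly like this (just a sketch; the hostname, export path, and mount point are made up, and I am assuming an NFS export from the NAS rather than SMB):

        #!/bin/bash
        # On the farming VM: mount the plot share from the garage NAS and
        # point the harvester at it. Host and paths are placeholders.
        mkdir -p /mnt/garage-plots
        mount -t nfs garage-nas:/mnt/user/plots /mnt/garage-plots
        # Register the directory with chia (repeat for each fork's CLI as needed)
        chia plots add -d /mnt/garage-plots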
  4. I had my server crash the other day and it corrupted several of the databases. Luckily I had the VM on the server to get fresh versions from after a little work overnight. Now I need to go copy all those files over and start up all the forks again.
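     The copy itself is just an rsync per fork with that fork stopped first, something like this (a sketch using the stock chia paths and a made-up VM hostname; each fork has its own equivalent folder and CLI):

        #!/bin/bash
        # Pull a fresh blockchain database from the VM over SSH, then restart.
        # "vm-host" is a placeholder; the db path is the stock chia default.
        chia stop all -d
        rsync -a --progress vm-host:.chia/mainnet/db/ ~/.chia/mainnet/db/
        chia start farmer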
  5. Well, a month later and still farming. Had some real issues with getting corrupted blockchains and having to rebuild. Sometimes downloading from that site works and other times it doesn't, for some reason. This system is also mining with my old GPUs, and it has been crash prone while dialing in the settings. I finally set up a VM on my main server with all the blockchains installed so I can update there, where there is lots of horsepower, and then just copy over a fresh version if something goes wrong on the farmer. Not sure how much longer I will farm though; we have had a few cold nights and it could be hard to keep the drives warm enough in the garage. When I was looking up acceptable drive temps years ago, I found that drives running too cold was bad; I like to keep them between 30-35C. Getting close to 1 XCH mined at this point; thinking about getting to an even 1 XCH and calling it, but we will see what happens at that point. Have you hit any more XCH solo?
  6. True, I have done this on containers that I know will break if updated, but maybe it is worth doing for most of them.
  7. Good to know to skip this update. This is why I wish I could disable the automatic update check. I don't update what works fine lol.
  8. I guess look at the bright side: at least they failed before being put into real use. This is why I decided to go with shucking drives vs used drives; I was just too scared of random drive failures, and Unraid doesn't handle failures efficiently at all.
  9. Still almost double what I am running. On the plus side, with the new SSD things are running much better so far; I have most of them running now. Can't get Taco and Apple to sync, but the rest seem to be working. Still syncing a few, so not 100% sure. Lucky, Silicoin, and Wheat are not working at all for me.
  10. That sucks. I had a scare last week with my main server suddenly getting errors on my most important drive. Thankfully I had a backup, but I am out of spare drives, so it would have still been bad if it had died. Thankfully it seems to have been something besides the drive; I rebooted the system and scanned the drive again and it came back basically clean, 1 error in a mundane file, but the rest of the files that had errors before were fine now. So it is looking like it was a SAS card / backplane / cable issue. Gonna be keeping my backups updated a bit more regularly now though.
  11. Yeah, for decoding, hardware should work for your use case. I never really cared about the decoding side of things. Encoding, on the other hand, was where it was not uncommon to have to crank the bitrate 2-3x as high to match the quality of the CPU-encoded files, at naturally 2-3x the size. I have tried it several times over the years but always decide it is just not worth the time savings. I still say that if the images are largely static, test it on a small file to start with and see what the end result is space wise. I have had OBS videos of game loading screens with just minor movement only take up a few MB of space, when the same amount of in-game footage can be 1-2GB.
  12. Decoding should be better; I mostly noticed issues when encoding. Although it is odd that you have issues with CPU decoding, since decoding is generally not very intensive. That said, I would test it on a small section of video first; depending on the footage, you might not save as much space as you would think doing this. If the shot is mostly static, H264 encoding is already quite good at optimizing the file size.
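     A quick way to check is to encode the same short clip both ways and compare the sizes, something along these lines (a sketch assuming an ffmpeg build with NVENC support; file names and quality values are only examples to tweak):

        #!/bin/bash
        # Encode the first 60 seconds with CPU x264 and GPU NVENC, then
        # compare the file sizes and eyeball the quality.
        ffmpeg -i input.mkv -t 60 -c:v libx264 -crf 20 -preset slow -c:a copy test_cpu.mkv
        ffmpeg -i input.mkv -t 60 -c:v h264_nvenc -cq 20 -preset slow -c:a copy test_gpu.mkv
        ls -lh test_cpu.mkv test_gpu.mkv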
  13. I did try a Windows VM, with mixed results. The issue is that the VM still has to communicate with Unraid over SMB internally and thus still has a bad bottleneck. I used Windows before Unraid and the performance was much better for small files, but the other features of Unraid made the switch worth it. Docker in particular has been a life changing experience.
  14. Every time I have tried GPU encoding, I have not been happy with the results vs CPU encoding. The quality to size ratio is WAY off, with worse quality and larger file sizes. The latest RTX cards are supposed to be better, but I am not sure by how much.
  15. Yeah, once synced it doesn't take much to stay synced. Indeed, I will be taking a backup on a regular basis from now on as well. Been sitting at 48TB for a while, but it looks like I am going to have to pull my 12TB drive for proper service soon, which will drop me down to 36TB. Just can't justify putting nice drives into chia at this point.
  16. Here is what half the blockchains just trying to sync up the wallets and remaining blocks from that site you linked looks like on a 20C/40T server: it is getting HAMMERED. Pulling 500W from the wall lol. It would be working even harder, but it is IO limited since the VM is running on some 10k drives.
  17. Dang, that site saved me a TON of time. I was in the process of spinning up a VM to download all these and it was not going to be quick lol.
  18. Well, this sucks. The boot SSD for my farming machine just died. On the plus side, I think it might have been failing for a while and causing my crashes. Now I get the fun of re-downloading all those blockchains. That will take weeks lol. Wish there was a way to download a monthly updated blockchain directly or something, to at least get a head start. Thinking about spinning up a VM on my server to help with the load.
  19. So I had a BTRFS corruption error on a drive and decided to run a scrub of all my drives to check them out, but noticed something odd. The total scrub speed for the system seems to be bottlenecked at around ~800 MB/s. That is, if I start 4 drives scrubbing I get 200 MB/s per drive, but if I start a 5th drive they all drop to ~160 MB/s for the same total system read speed. If I do a parity check, on the other hand, I can get the full speed of all drives and over 2 GB/s easily. The system has dual 10C/20T 2680v2 CPUs and CPU usage is only around ~30%. I would think adding more scrub jobs would mean more threads and thus be easily spread over the 40 threads I have. The SAS card can sustain far more than 800 MB/s, as shown by the parity check. I can't figure out where the bottleneck could be; anyone have any ideas?
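     For reference, this is roughly how I am starting them and watching the per-drive rates (a sketch; the disk list is just an example, and the scrubs run in the background by default):

        #!/bin/bash
        # Kick off scrubs on several BTRFS drives in parallel, then check rates.
        DISKS="/mnt/disk1 /mnt/disk2 /mnt/disk3 /mnt/disk4"
        for d in $DISKS; do
            btrfs scrub start "$d"
        done
        sleep 60
        for d in $DISKS; do
            echo "== $d =="
            btrfs scrub status "$d"
        done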
  20. True, it is more habit I guess, not wanting to have to fix something, but as long as the files are in a backup, the age should not matter.
  21. While this works, it is not the proper way to do it, as it does not keep everything contained inside the container. The proper method is as explained in the guide here: simply add this to the Extra Parameters of the container settings and it will create a ramdrive in the container mounted at /tmp: --mount type=tmpfs,destination=/tmp,tmpfs-size=256000000 This prevents the container from mistakenly causing issues with the host system and can also limit the max size of that container's /tmp data. The last argument is the size; that is 256MB in the example. Either way will reach the same result of causing the writes to go to RAM; the above is just the native method for Docker.
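     For anyone running it outside an Unraid template, the same native Docker option on a plain docker run looks like this (the container name and image are just placeholders):

        # Same tmpfs mount passed straight to docker run (256MB cap on /tmp)
        docker run -d --name example \
          --mount type=tmpfs,destination=/tmp,tmpfs-size=256000000 \
          some/image:latest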
  22. Very interesting idea. I guess it still counts on Unraid not changing the Docker file too much, but interesting for sure. I will have to think about whether I want to implement it; leaning towards yes. Personally I would just set up a script to sync the ramdisk hourly, as sketched below. I do that with my ramdisk now and still get 1/8th of the writes I used to get. Could even sync it to the array as well, I suppose, if you have a drive you don't spin down.
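     Something like this is all I mean by an hourly sync script (a minimal sketch for the User Scripts plugin on an hourly schedule; the ramdisk and backup paths are placeholders for wherever yours live):

        #!/bin/bash
        # Mirror the ramdisk contents back to persistent storage once an hour.
        # Paths below are placeholders for illustration only.
        RAMDISK=/mnt/ramdisk/docker
        BACKUP=/mnt/cache/backups/docker-ramdisk
        mkdir -p "$BACKUP"
        # --delete keeps the backup an exact mirror of the ramdisk
        rsync -a --delete "$RAMDISK/" "$BACKUP/"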
  23. 1: I did a du of the folder that is being converted to a ramdrive and, even with my 70 dockers, I only had 60MB in it. I figure it would grow some as the logs grow, but I still don't see it growing to more than ~256MB even with all my dockers, unless there is a misbehaving docker with far too many log writes or VERY long uptimes. Edit: come to think of it, the Docker log rotation setting should limit the max size it could grow to. 2: This folder seems to only hold log files, so Docker should not care if a write is missed as long as the folders exist (it might even re-create them if they are missing, not sure). During testing I deleted a few of the log files and don't remember there being any issues; it simply started a new one. That said, an hourly rsync to the backup folder would not be a horrible idea; it could be an adjustable setting in the Docker settings as well if this was combined into the official implementation. This basically cut my writes in half. Worst possible case, the docker can simply be re-installed / updated and it would correct any issues.
  24. A very involved fix, but it should work. Although I think I will stick with the setup I have now; once I start making permanent changes that will not flow through updates is where I tend to draw the line, unless there are no other options. Still, very good to have a complete option to deal with the writes; I will keep it in mind if I have to keep dealing with it. Seems like something that could be added into Unraid in general, considering the writes it saves and how easy it would be to implement.