
magmpzero

Members
  • Posts

    19
  • Joined

  • Last visited

Everything posted by magmpzero

  1. Hey all: I have been searching and reading a ton about the timing of harvester/farming, including bug reports and GitHub issues about NAS times. I am still a little unclear and was hoping someone would take a minute and help me understand it a bit better. Here is an example of my log:

     2021-05-21T07:38:26.001 harvester chia.harvester.harvester: INFO 0 plots were eligible for farming 632e3571bf... Found 0 proofs. Time: 0.15400 s. Total 103 plots
     2021-05-21T07:38:35.406 harvester chia.harvester.harvester: INFO 2 plots were eligible for farming 632e3571bf... Found 0 proofs. Time: 0.67500 s. Total 103 plots
     2021-05-21T07:38:42.863 harvester chia.harvester.harvester: INFO 0 plots were eligible for farming 632e3571bf... Found 0 proofs. Time: 0.15500 s. Total 103 plots
     2021-05-21T07:38:50.154 harvester chia.harvester.harvester: INFO 0 plots were eligible for farming 632e3571bf... Found 0 proofs. Time: 0.16802 s. Total 103 plots
     2021-05-21T07:39:09.484 harvester chia.harvester.harvester: INFO 1 plots were eligible for farming 632e3571bf... Found 0 proofs. Time: 8.33100 s. Total 103 plots
     2021-05-21T07:39:15.320 harvester chia.harvester.harvester: INFO 1 plots were eligible for farming 632e3571bf... Found 0 proofs. Time: 7.28700 s. Total 103 plots
     2021-05-21T07:39:24.514 harvester chia.harvester.harvester: INFO 0 plots were eligible for farming 632e3571bf... Found 0 proofs. Time: 0.17500 s. Total 103 plots
     2021-05-21T07:39:25.875 harvester chia.harvester.harvester: INFO 0 plots were eligible for farming 632e3571bf... Found 0 proofs. Time: 0.15203 s. Total 103 plots

     What concerns me is that sometimes when I have plot(s) pass the filter, the time goes up. For example, the first hit with 2 plots is good at 0.67500 s, but you can see a bit later I have 1 plot pass and the time is 8.33100 s. I am guessing the time is going up because right now I am farming on a machine with my plots share mounted via SMB.
     Question 1: Am I correct in thinking this is all fine, given that it is still under 30 seconds but greater than 2?
     Question 2: Let's say I get lucky and one of my plots may have a proof. Does the full plot have to be transferred via the SMB mount? I am pretty confident I can't transfer 100 GB in under 30 seconds with my network setup from the unraid array.
     I am really just trying to determine whether I need to stop farming remotely, given these times, and instead farm directly on the unraid server via a Docker container. I know a bunch of us are off creating plots and storing them on our arrays, so I am just trying to figure out whether we even have a chance at winning given the speed of unraid reads.
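To keep an eye on lookup times without eyeballing the log, a small script can parse the harvester lines above and flag anything creeping toward the 30-second proof deadline. This is a sketch, not part of any official Chia tooling; the regex assumes exactly the log format shown above, and the 5-second warning threshold is an arbitrary choice.

```python
import re

# Matches harvester log lines like:
# "2 plots were eligible for farming 632e3571bf... Found 0 proofs. Time: 0.67500 s. Total 103 plots"
LINE_RE = re.compile(
    r"(?P<eligible>\d+) plots were eligible for farming .*?"
    r"Found (?P<proofs>\d+) proofs\. Time: (?P<secs>[\d.]+) s\."
)

def check_lookup(line, warn_after=5.0, deadline=30.0):
    """Return (eligible_plots, seconds, status) for a harvester log line, or None."""
    m = LINE_RE.search(line)
    if m is None:
        return None
    eligible = int(m.group("eligible"))
    secs = float(m.group("secs"))
    if secs >= deadline:
        status = "MISSED"   # lookup took longer than the response window
    elif secs >= warn_after:
        status = "SLOW"     # still in time, but worth investigating (e.g. SMB latency)
    else:
        status = "OK"
    return eligible, secs, status
```

Feeding it the 8.33100 s line from the log above would classify that lookup as "SLOW" but not "MISSED", which matches the intuition in Question 1: under 30 seconds is still in time, but multi-second lookups suggest the network mount is adding latency.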
  2. Everyone starts somewhere. In order to get any farmed XCH/Chia off of your wallet on unraid, you will need to create a new wallet somewhere and transfer your Chia to it. For example, if you want to sell your farmed Chia, you will need to transfer it to a wallet on an exchange that trades Chia. I currently use gate.io, as they support buying and selling Chia.
  3. This is great. I have it up and running now (well, syncing). I plan to use this just as my farmer and will not be doing many plots on unraid. I was previously farming on my Windows computer via an SMB mount of my plots. I think farming locally will help ease my concern about long delays due to the network, etc. Quick question that I didn't see in the documentation: where is the syncing blockchain stored? Is it outside the container, so it will persist across updates, etc.? I am assuming it is being stored in the appdata mount, but I just wanted to verify.
  4. Yeah, I think it is a Linux problem, because I futzed around in the CLI for a while. When fdisking or doing anything, I kept getting errors/warnings about the GPT mismatch: "GPT PMBR size mismatch (4294967294 != 35156656127) will be corrected by write". I reformatted multiple times on a real Mac machine and tried several drives (I bought 4). I even tried creating new partitions under Linux but could never get anything to work. Unfortunately, I can't do any further testing, as I shucked the drive(s) and put them in my array.
  5. I do. Just yesterday I plugged in an 8 TB HFS+ external drive and was able to use it without problem.
  6. Great plugin! I use it all the time. Quick question: I bought a bunch of 18 TB external drives for shucking but thought I would try to use one on my Mac. I formatted it normally and verified it was a good drive. Unassigned Devices failed to work with such a large drive and would only show about 800 GB free (if I remember right). Is this a known problem, and is it an issue with Linux HFS+ support or something to do with Unassigned Devices?
  7. Hi all: Sorry for not paying attention to this thread. I really had no idea people were actually using MagRack. Looking over the comments, I will get an update out in the next week or two to address the folder structure, and then take a stab at adding stack support where magazines are grouped by name. The image display bug is because of how I create the preview image: I name it preview.jpg when it should be the name of the mag. That being said, my weakness is UI design and frontend development. If anyone could help out with this aspect, please let me know and I can explain how the app works and point you to the Git repo.
  8. Thanks for the kind words. This is kind of an unraid-exclusive app right now, as I haven't made it available anywhere else. It's pretty basic but has surely made my life easier for reading magazines. I suppose it will work with any PDF, including tech books. If you have any ideas for improvements, please let me know. I am thinking about automation, but I am not sure the market is large enough to justify the time it would require to implement. The one thing I would love to figure out is how to reduce duplicates in my automated RSS downloads. I generally get two versions of some issues by mistake, one TruePDF and one not. Not sure how to filter those out.
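One possible approach to the duplicate problem, sketched under the assumption that the two versions of an issue differ only by a release tag such as "TruePDF" in the filename (the tag name and the naming pattern are assumptions; adjust them to whatever your RSS feed actually produces), is to normalise filenames and keep one file per normalised key:

```python
import re

def dedupe_issues(filenames, prefer_tag="TruePDF"):
    """Group filenames that differ only by a release tag, keeping one per group.

    Files containing `prefer_tag` win over their untagged twin; otherwise the
    first file seen is kept. The tag name is an assumption about how the feed
    labels releases, not a standard.
    """
    def key(name):
        # Strip the tag and collapse separators so both variants hash the same.
        stripped = re.sub(re.escape(prefer_tag), "", name, flags=re.IGNORECASE)
        return re.sub(r"[\s._-]+", " ", stripped).strip().lower()

    kept = {}
    for name in filenames:
        k = key(name)
        tagged = prefer_tag.lower() in name.lower()
        if k not in kept or (tagged and prefer_tag.lower() not in kept[k].lower()):
            kept[k] = name
    return sorted(kept.values())
```

Run against a download list, this would collapse "Some.Mag.May.2021.pdf" and "Some.Mag.May.2021.TruePDF.pdf" into one entry, preferring the tagged one. It breaks down when duplicates differ in other ways, which goes back to the lack of a magazine naming standard mentioned below.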
  9. Just an update on the automation. I think I have figured out a way to do it, but it's going to be a lot of work. It will probably take me 2-3 months to have a system that is ready for others to use. I will keep posting progress updates as I work on it, if people care.
  10. Are you suggesting you have figured out a way to automate placing magazine issues into a title folder? If so, I would love to know how you do it. The real problem with magazines is that there isn't a naming standard that I have been able to figure out. But yeah, I could certainly add that as a feature.
  11. Thanks for alerting me about the port. I just fixed it and pushed a new version up.
  12. Application: MagRack
      Docker Hub: https://hub.docker.com/r/magmpzero/magrack/
      GitHub: https://github.com/magmpzero/docker-templates
      ScreenShot:

      Overview: I wanted to create a lightweight and simple application to ease the pain of reading magazines that I have downloaded. For that reason, MagRack was born. I normally browse my network share and then download the PDF I want to read. This is a total PITA, so I decided to start work on a magazine application that would make it easier for me to read the ones I want.

      What it does: Scans a directory of your magazines and creates a wall of preview images. You can then click on a cover image and the magazine will open in the default PDF viewer for your device. The magazines are sorted newest first.

      What it doesn't do: It doesn't download anything. This is simply a tool to make reading your magazines less of a pain. I assume you already have automation set up for downloading magazines via RSS.

      How it works: When the WebUI is hit, it scans all of your magazines and creates a preview image for any that don't have one. It then sorts based on modified time and creates a display wall. For this reason, the first time you load the page it can take a few minutes, depending on the number of magazines you have.

      How to use it: Mount your magazine directory at /mags. MagRack expects a subfolder for each magazine. When you hit http://address:4567/mags it defaults to showing only the latest 10 magazines. If you want to show 100, for example, you would use http://address:4567/mags/100

      Anyway, I put this little program together overnight and I really enjoy using it so far. I thought I would containerize it and offer it up for others to use.
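The scan-and-sort step described under "How it works" can be sketched roughly like this. This is a simplified illustration, not MagRack's actual code (the app's implementation language isn't stated here), and the cover rendering is stubbed out since it would need a PDF library such as poppler or Ghostscript:

```python
from pathlib import Path

def newest_magazines(mags_root, limit=10):
    """Walk the magazine tree and return PDFs, newest-modified first.

    Mirrors the behaviour described above (default 10, overridable via the
    URL), but is an illustrative sketch, not the real MagRack code.
    """
    pdfs = sorted(
        Path(mags_root).rglob("*.pdf"),
        key=lambda p: p.stat().st_mtime,
        reverse=True,        # newest first, as on the display wall
    )
    return pdfs[:limit]

def ensure_preview(pdf_path):
    """Create a cover image next to the PDF if one doesn't exist yet.

    Named after the magazine rather than preview.jpg, to avoid the display
    bug mentioned earlier in the thread. Actual first-page rendering is
    stubbed out with an empty placeholder file.
    """
    preview = pdf_path.with_suffix(".jpg")
    if not preview.exists():
        preview.touch()      # placeholder for real cover rendering
    return preview
```

Caching the preview next to the PDF is what makes only the first page load slow: subsequent visits find the .jpg already present and skip the expensive rendering step.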