magmpzero

Members
  • Posts: 19
  • Joined
  • Last visited

  • Gender: Undisclosed

magmpzero's Achievements

Noob (1/14)

Reputation: 3

  1. Hey all: I have been searching and reading a ton about the timing of harvesting/farming, including bug reports and GitHub issues about lookup times on NAS storage, but I am still a little unclear and was hoping someone would take a minute to help me understand it a bit better. Here is an example of my log:

     2021-05-21T07:38:26.001 harvester chia.harvester.harvester: INFO 0 plots were eligible for farming 632e3571bf... Found 0 proofs. Time: 0.15400 s. Total 103 plots
     2021-05-21T07:38:35.406 harvester chia.harvester.harvester: INFO 2 plots were eligible for farming 632e3571bf... Found 0 proofs. Time: 0.67500 s. Total 103 plots
     2021-05-21T07:38:42.863 harvester chia.harvester.harvester: INFO 0 plots were eligible for farming 632e3571bf... Found 0 proofs. Time: 0.15500 s. Total 103 plots
     2021-05-21T07:38:50.154 harvester chia.harvester.harvester: INFO 0 plots were eligible for farming 632e3571bf... Found 0 proofs. Time: 0.16802 s. Total 103 plots
     2021-05-21T07:39:09.484 harvester chia.harvester.harvester: INFO 1 plots were eligible for farming 632e3571bf... Found 0 proofs. Time: 8.33100 s. Total 103 plots
     2021-05-21T07:39:15.320 harvester chia.harvester.harvester: INFO 1 plots were eligible for farming 632e3571bf... Found 0 proofs. Time: 7.28700 s. Total 103 plots
     2021-05-21T07:39:24.514 harvester chia.harvester.harvester: INFO 0 plots were eligible for farming 632e3571bf... Found 0 proofs. Time: 0.17500 s. Total 103 plots
     2021-05-21T07:39:25.875 harvester chia.harvester.harvester: INFO 0 plots were eligible for farming 632e3571bf... Found 0 proofs. Time: 0.15203 s. Total 103 plots

     What concerns me is that sometimes when one or more plots pass the filter, the lookup time goes up. For example, the first hit with 2 plots is fine at 0.67500 s, but a bit later 1 plot passes and the time is 8.33100 s. I am guessing the time goes up because right now I am farming on a machine with my plots share mounted via SMB.

     Question 1: Am I correct in thinking this is all fine, given that it is still under 30 seconds (even though it is greater than 2)?

     Question 2: Let's say I get lucky and one of my plots may have a proof. Does that full plot have to be transferred over the SMB mount? I am pretty confident I can't transfer 100 GB in under 30 seconds with my network setup from the Unraid array.

     I am really just trying to determine whether I need to stop farming remotely, given these times, and farm directly on the Unraid server via a Docker container. I know a bunch of us are off creating plots and storing them on our arrays, so I am just trying to figure out whether we even have a chance at winning given the speed of Unraid reads.
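     In case it helps anyone else watching for the same thing, here is a rough one-liner sketch to pull only the slow lookups out of the harvester log (it assumes the default log location of ~/.chia/mainnet/log/debug.log and a 5-second threshold; adjust both for your setup):

         # list only the harvester lookups whose "Time:" value exceeds 5 seconds
         grep "plots were eligible for farming" ~/.chia/mainnet/log/debug.log \
           | awk -F'Time: ' '{ split($2, t, " "); if (t[1] + 0 > 5) print $0 }'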
  2. Everyone starts somewhere. In order to get any farmed XCH/Chia out of your wallet on Unraid, you will need to create a new wallet somewhere and transfer your Chia to it. For example, if you want to sell your farmed Chia, you will need to transfer it to a wallet on an exchange that supports Chia. I currently use gate.io, as they support buying and selling Chia.
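     If you prefer the command line over the GUI for the transfer itself, the chia CLI can do it; a minimal sketch (the wallet id, deposit address, amount, and fee below are placeholders, and the exact flags can vary between chia versions):

         # send 0.5 XCH from wallet id 1 to the exchange's deposit address with no fee
         chia wallet send -i 1 -t <exchange_deposit_address> -a 0.5 -m 0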
  3. This is great. I have it up and running now (well, syncing). I plan to use this just as my farmer and will not be doing many plots on Unraid. I was previously farming on my Windows computer via an SMB mount of my plots. I think farming locally will help ease my concern about long delays due to the network, etc. Quick question that I didn't see in the documentation: where is the syncing blockchain stored? Is this outside the container so it will persist across updates, etc.? I am assuming it is being stored in the appdata mount, but I just wanted to verify.
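     For anyone else checking the same thing, the general idea is just a bind mount from appdata into the container's Chia data directory; something along these lines (the container path /root/.chia and the image name are assumptions on my part — check the template you are actually using):

         # persist the blockchain DB and wallet data under appdata on the array
         docker run -d --name chia \
           -v /mnt/user/appdata/chia:/root/.chia \
           <chia-image>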
  4. Yeah, I think it is a Linux problem, because I futzed around in the CLI for a while. When fdisking or doing anything, I kept getting errors/warnings about a GPT mismatch:

     GPT PMBR size mismatch (4294967294 != 35156656127) will be corrected by write

     I reformatted multiple times on a real Mac machine and tried several drives (I bought 4). I even tried creating new partitions under Linux but could never get anything to work. Unfortunately, I can't do any further testing, as I shucked the drive(s) and put them in my array.
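     For the record, that particular warning usually just means the protective MBR records a smaller size than the disk actually has, and rewriting the partition table normally clears it; something like the following should work (sdX is a placeholder for the actual device, so double-check before writing anything):

         # the warning says it "will be corrected by write", so opening the disk
         # in fdisk and writing the (unchanged) table is usually enough
         fdisk /dev/sdX      # then 'w' to write
         # or, non-interactively, move the backup GPT structures to the end of the disk
         sgdisk -e /dev/sdX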
  5. I do. Just yesterday I plugged in an 8 TB HFS+ external drive and was able to use it without problem.
  6. Great plugin! I use it all the time. Quick question: I bought a bunch of 18 TB external drives for shucking but thought I would try to use one on my Mac. I formatted it normally and verified it was a good drive. Unassigned Devices failed to work on such a large drive and would only show about 800 GB free (if I remember right). Is this a known problem, and is it a problem with Linux HFS+ support or something to do with Unassigned Devices?
  7. Hi all: Sorry for not paying attention to this thread. I really had no idea people were actually using magrack. Looking over the comments, I will get an update out in the next week or two to address the folder structuring and then take a stab at adding stack support where magazines are grouped by name. The image display bug is because of how I create the preview image: I name it preview.jpg when it should be the name of the mag. That being said, my weakness is UI design and frontend development. If anyone could help out with this aspect, please let me know and I can explain how the app works and point you to the git repo.
  8. Dude, settle down. I just found a random Docker image on the hub and made a template for it so more people could use Booksonic. How was I supposed to know you were an Unraid user who actually had an account and was the same person who created the image? I don't have a crystal ball. Sorry for trying to help. I'll remove this template and let you create one so you can get all the credit for creating the image.
  9. Try using the following to connect from the app:

     https://yourip:yourport/booksonic

     I have had to add booksonic to the end of the URL for some reason. I didn't build this Docker image, just the template for Unraid, but I may start building my own so I can help out more with issues that pop up.
  10. Application: Booksonic
      Docker Hub: https://hub.docker.com/r/ironicbadger/booksonic/
      GitHub: https://github.com/magmpzero/docker-templates

      Overview: Booksonic is a fork of the popular Subsonic streaming server, modified to focus 100% on audiobooks. I put this simple template together so others could enjoy using this software. For more information on Booksonic, visit the official website: http://booksonic.org/
  11. Thanks for the kind words. This is kind of an Unraid-exclusive app right now, as I haven't made it available anywhere else. It's pretty basic but has surely made my life easier for reading magazines. I suppose it will work with any PDF, including tech books. If you have any ideas for improvements, please let me know. I am thinking about automation but am not sure the market is large enough to justify the time it would require to implement. The one thing I would love to figure out is how to reduce duplicates in my automated RSS downloads. I generally get two versions of some issues by mistake, one TruePDF and one not. Not sure how to filter those out.
  12. Just following up on this. The issue has been resolved after formatting /md8 and copying the data back over.
  13. I think the failed drive is mostly readable for the data I care about saving, and most of it I can afford to lose. Does anyone have a link that explains the process to basically wipe this drive, clearing both the drive and the /md8 data on the parity drive, and then copy over what I want to keep? I am thinking just a format of /md8 would work, then rebuild parity, and finally copy over what I want to keep from the previous failed drive from another box (roughly as sketched below). -- gs
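      A rough sketch of what I am picturing (the device, host, and paths are placeholders from my setup, and I would obviously double-check everything before running it):

          # with the array in maintenance mode, reformat the replaced disk
          mkfs.xfs -f /dev/md8
          # then, with the array started normally, copy back whatever is worth keeping
          rsync -avP otherbox:/path/to/keep/ /mnt/disk8/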
  14. Attached is the output from the check. I am curious: is it possible that, because the drive failed while rebuilding my parity drive, bad data was written to parity, and the rebuild then restored that bad data, which makes it appear the FS is corrupted?

      ----
      node allocation btrees are too corrupted, skipping phases 6 and 7
      No modify flag set, skipping filesystem flush and exiting.
      ----

      xfs_check.txt
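      From what I have read, the usual next step here is an xfs_repair run with the array in maintenance mode (md8 is just the device from my logs; the -n pass is a dry run first):

          # dry run: report problems without changing anything
          xfs_repair -n /dev/md8
          # actual repair; it may insist on -L to zero the log if the filesystem
          # was shut down dirty, which can lose the most recent transactions
          xfs_repair /dev/md8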
  15. Hey folks: I really need some help here. I had a failed drive, so I bought a new one and replaced it. The drive rebuilt fine and was humming along fine for a day, and then I noticed some weird behavior in the logs:

      Mar 4 03:40:10 Hog logger: *** Skipping any contents from this failed directory ***
      Mar 4 03:40:10 Hog kernel: XFS (md8): Internal error XFS_WANT_CORRUPTED_RETURN at line 1137 of file fs/xfs/libxfs/xfs_ialloc.c. Caller xfs_dialloc_ag+0x195/0x248
      Mar 4 03:40:10 Hog kernel: CPU: 1 PID: 23476 Comm: shfs Not tainted 4.1.17-unRAID #1
      Mar 4 03:40:10 Hog kernel: Hardware name: Gigabyte Technology Co., Ltd. Z97X-UD5H-BK/Z97X-UD5H-BK, BIOS F7 04/21/2015
      Mar 4 03:40:10 Hog kernel: ffff8800086f3b88 ffff8800086f3ac8 ffffffff815f1df0 ffff88041fa50a01
      Mar 4 03:40:10 Hog kernel: ffff88040a53d790 ffff8800086f3ae8 ffffffff81260934 ffffffff81253028
      Mar 4 03:40:10 Hog kernel: ffffffff81251b13 ffff8800086f3b38 ffffffff8125207e ffff8800cb86b000
      Mar 4 03:40:10 Hog kernel: Call Trace:
      Mar 4 03:40:10 Hog kernel: [<ffffffff815f1df0>] dump_stack+0x4c/0x6e
      Mar 4 03:40:10 Hog kernel: [<ffffffff81260934>] xfs_error_report+0x38/0x3a
      Mar 4 03:40:10 Hog kernel: [<ffffffff81253028>] ? xfs_dialloc_ag+0x195/0x248
      Mar 4 03:40:10 Hog kernel: [<ffffffff81251b13>] ? xfs_inobt_lookup+0x22/0x24
      Mar 4 03:40:10 Hog kernel: [<ffffffff8125207e>] xfs_dialloc_ag_update_inobt+0xbd/0xdb
      Mar 4 03:40:10 Hog kernel: [<ffffffff81253028>] xfs_dialloc_ag+0x195/0x248
      Mar 4 03:40:10 Hog kernel: [<ffffffff81253d0d>] xfs_dialloc+0x1d6/0x1f5
      Mar 4 03:40:10 Hog kernel: [<ffffffff8126b564>] xfs_ialloc+0x4b/0x46f
      Mar 4 03:40:10 Hog kernel: [<ffffffff81275097>] ? xlog_grant_head_check+0x4b/0xc7
      Mar 4 03:40:10 Hog kernel: [<ffffffff8126b9e2>] xfs_dir_ialloc+0x5a/0x1fb
      Mar 4 03:40:10 Hog kernel: [<ffffffff8126be24>] xfs_create+0x261/0x485
      Mar 4 03:40:10 Hog kernel: [<ffffffff81269124>] xfs_generic_create+0xb2/0x237
      Mar 4 03:40:10 Hog kernel: [<ffffffff8113b2e8>] ? get_acl+0x12/0x4f
      Mar 4 03:40:10 Hog kernel: [<ffffffff812692ce>] xfs_vn_mknod+0xf/0x11
      Mar 4 03:40:10 Hog kernel: [<ffffffff812692e1>] xfs_vn_mkdir+0x11/0x13
      Mar 4 03:40:10 Hog kernel: [<ffffffff81105fd6>] vfs_mkdir+0x6e/0xa8
      Mar 4 03:40:10 Hog kernel: [<ffffffff8110a72f>] SyS_mkdirat+0x6d/0xab
      Mar 4 03:40:10 Hog kernel: [<ffffffff8110a781>] SyS_mkdir+0x14/0x16
      Mar 4 03:40:10 Hog kernel: [<ffffffff815f74ee>] system_call_fastpath+0x12/0x71
      Mar 4 03:40:10 Hog kernel: XFS (md8): Internal error xfs_trans_cancel at line 1007 of file fs/xfs/xfs_trans.c. Caller xfs_create+0x3de/0x485
      Mar 4 03:40:10 Hog kernel: CPU: 1 PID: 23476 Comm: shfs Not tainted 4.1.17-unRAID #1
      Mar 4 03:40:10 Hog kernel: Hardware name: Gigabyte Technology Co., Ltd. Z97X-UD5H-BK/Z97X-UD5H-BK, BIOS F7 04/21/2015
      Mar 4 03:40:10 Hog kernel: 000000000000000c ffff8800086f3cf8 ffffffff815f1df0 0000000000000000
      Mar 4 03:40:10 Hog kernel: ffff880098ce2cb0 ffff8800086f3d18 ffffffff81260934 ffffffff8126bfa1
      Mar 4 03:40:10 Hog kernel: 00ff8800cb86b000 ffff8800086f3d48 ffffffff812744a3 ffff8800cb86b001
      Mar 4 03:40:10 Hog kernel: Call Trace:
      Mar 4 03:40:10 Hog kernel: [<ffffffff815f1df0>] dump_stack+0x4c/0x6e
      Mar 4 03:40:10 Hog kernel: [<ffffffff81260934>] xfs_error_report+0x38/0x3a
      Mar 4 03:40:10 Hog kernel: [<ffffffff8126bfa1>] ? xfs_create+0x3de/0x485
      Mar 4 03:40:10 Hog kernel: [<ffffffff812744a3>] xfs_trans_cancel+0x5b/0xda
      Mar 4 03:40:10 Hog kernel: [<ffffffff8126bfa1>] xfs_create+0x3de/0x485
      Mar 4 03:40:10 Hog kernel: [<ffffffff81269124>] xfs_generic_create+0xb2/0x237
      Mar 4 03:40:10 Hog kernel: [<ffffffff8113b2e8>] ? get_acl+0x12/0x4f
      Mar 4 03:40:10 Hog kernel: [<ffffffff812692ce>] xfs_vn_mknod+0xf/0x11
      Mar 4 03:40:10 Hog kernel: [<ffffffff812692e1>] xfs_vn_mkdir+0x11/0x13
      Mar 4 03:40:10 Hog kernel: [<ffffffff81105fd6>] vfs_mkdir+0x6e/0xa8
      Mar 4 03:40:10 Hog kernel: [<ffffffff8110a72f>] SyS_mkdirat+0x6d/0xab
      Mar 4 03:40:10 Hog kernel: [<ffffffff8110a781>] SyS_mkdir+0x14/0x16
      Mar 4 03:40:10 Hog kernel: [<ffffffff815f74ee>] system_call_fastpath+0x12/0x71
      Mar 4 03:40:10 Hog kernel: XFS (md8): xfs_do_force_shutdown(0x8) called from line 1008 of file fs/xfs/xfs_trans.c. Return address = 0xffffffff812744bc
      Mar 4 03:40:10 Hog kernel: XFS (md8): Corruption of in-memory data detected. Shutting down filesystem
      Mar 4 03:40:10 Hog kernel: XFS (md8): Please umount the filesystem and rectify the problem(s)
      Mar 4 03:40:10 Hog logger: rsync error: some files/attrs were not transferred (see previous errors) (code 23) at main.c(1165) [sender=3.1.0]

      And then I see a ton of these when trying to access files on the restored drive:

      Mar 4 04:37:14 Hog kernel: XFS (md8): xfs_log_force: error -5 returned.
      Mar 4 04:37:44 Hog kernel: XFS (md8): xfs_log_force: error -5 returned.
      Mar 4 04:38:14 Hog kernel: XFS (md8): xfs_log_force: error -5 returned.
      Mar 4 04:38:44 Hog kernel: XFS (md8): xfs_log_force: error -5 returned.
      Mar 4 04:39:14 Hog kernel: XFS (md8): xfs_log_force: error -5 returned.
      Mar 4 04:39:44 Hog kernel: XFS (md8): xfs_log_force: error -5 returned.

      What's really weird is that if I reboot the box, it works again for a while. All Web UI elements report the drive is healthy with no issues. Full log attached. hog-diagnostics-20160304-0916.zip