Shunz


Posts posted by Shunz

  1. On 1/22/2022 at 6:44 AM, guy.davis said:

     

    Unfortunately, this has been reported by many regular Chia users recently (not specific to Machinaris).  Only solution seems to be a blockchain database reset which is a hassle. 

    Sorry for the trouble these Chia services are causing.  I'm actually looking into automated DB backups within Machinaris as a way to try to mitigate this unfortunate instability.

     

     

    Thanks!! I've had the same problem too, starting a few days ago over the weekend.

     

    Sadly, I spent a good deal of time figuring out how to delete the config and db files, since the appdata permissions (especially on the machinaris folders) were locked down by unraid (and I'm too lazy to figure out the commands). My binhex Krusader refused to rename or delete the files until I googled that I needed to change Krusader's Docker PGID and PUID values to 0 (zero = root) so that Krusader runs as root. Sheesh
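
    (For anyone else hitting the locked-permissions problem: I believe the usual fix from the unraid terminal is something along these lines - the appdata path below is my guess at the default Machinaris location, so check your own container mappings first.)

    # reset ownership/permissions on the stubborn appdata folder so it can be renamed or deleted
    # /mnt/user/appdata/machinaris is an assumed path - adjust to wherever your container's config lives
    chown -R nobody:users /mnt/user/appdata/machinaris
    chmod -R u+rwX,g+rwX /mnt/user/appdata/machinaris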

  2. Is it possible to use the /Library folder from another repository?

    - Can I even overwrite the entire library folder with another? (e.g. overwrite the entire /Library from my linuxserver.io installation over to binhex's folder)

     

    I messed up my Docker image (it became full, and I had to delete it), and I'm having difficulty trying to reinstall Plex from the linuxserver.io repository that I previously had. I had success copying the repository settings from SpaceInvader One's video, but I'm now considering switching to the binhex Plex Pass repository.
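
    (If anyone attempts this, here's a rough sketch of what I have in mind, run with both containers stopped - the appdata paths and internal folder layouts below are assumptions on my part, since each image keeps its "Plex Media Server" directory in a slightly different spot, so verify yours before copying.)

    # copy the Plex metadata/database tree from the old (linuxserver.io) appdata into the new (binhex) appdata
    rsync -a --progress "/mnt/user/appdata/plex/Library/Application Support/Plex Media Server/" \
                        "/mnt/user/appdata/binhex-plexpass/Plex Media Server/"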

  3. Docker service failed to start, even after reboot
    1) The Docker image could be full? (it shows 100% in the Dashboard's Memory panel)
    2) I suspect it could be due to my Machinaris Chia container - plotman was trying to move a 100GB plot file into a destination that was already full
     
    Posting my diagnostics file here.
     
    I believe the solution is to delete the docker vdisk file, and create a new one? But before that...
     
    I'm wondering if there's a way to get inside docker.img and manually delete the offending container or junk files, so as to make the Docker image "not full". I'm also concerned about whether I'd need to re-configure my Docker container mappings again if I re-create a new image (e.g. that setting in Plex that enables GPU transcoding).
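
    (A sketch of what I mean by "getting inside" docker.img - I haven't actually verified this on my box, the image path is just the usual default, and the Docker service has to be stopped first, so treat it as an assumption.)

    # loop-mount the docker image (it's btrfs inside) and see which container layers are eating the space
    mkdir -p /mnt/docker-img
    mount -o loop /mnt/user/system/docker/docker.img /mnt/docker-img
    du -h -d1 /mnt/docker-img/btrfs/subvolumes | sort -h
    umount /mnt/docker-img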

    I've tried setting the Docker service start to "no" and adjusting to a larger image size, but that didn't work.

    unraid-diagnostics-20211219-0535.zip



  4. Hi guy.davis!

     

    Plotman seems to have filled up the Docker image (the Docker service now fails to start, requiring a Docker image deletion and re-adding of the containers) when it tried to send a completed plot to a destination (defined under dst in the locations) that was already too full.

     

    Is there any way to prevent this, other than 1) un-selecting full drives from the dst list, or 2) archiving? I think another incident is bound to happen, and this feels easy enough to trigger that I doubt I'm the first to experience it...
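
    (For reference, this is the bit of plotman.yaml I mean - the paths here are placeholders; pruning nearly-full destinations from dst is basically option 1 above.)

    directories:
      tmp:
        - /plotting            # fast scratch space for plotting
      dst:
        - /plots/disk1         # keep only destinations with room for a ~100GB finished plot
        - /plots/disk2         # remove (or comment out) drives that are already nearly full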

     

    (edit - corrected my own drunk 5am writing)

  5. Thanks so much for this! I'm still a little hesitant about going full CLI, since I'll need to sit for a few hours at a go to experiment.

     

    Some questions during installation of the app:

     

    1) Plot path

    - On the Add Container settings page, there seems to be only one folder selection for the plots. Are there eventually more disk destination options via plotman (for a whole bunch of unassigned devices)?

    - I'm still wondering if I should place my plots in the protected array... Technically, plots aren't precious data (we can simply re-plot), so unassigned devices should be better from a performance point of view, both for the array and for the plots/farmer.

     

    2) Port Forwarding (router settings - see attached image)

    Noob question here, but I thought I should ask, to be sure...

    a) Protocol - TCP? (or udp/both)

    b) External Port - 8444

    c) Internal Port - leave blank?

    d) Internal IP Address - IP of unraid server

    e) Source IP - leave blank

     

    3) Farmer/Harvester

    I'm currently using my main Windows gaming PC as my farmer... I intend to eventually use the unraid system as the farmer (it makes more sense this way - it's permanently online and connected), while my PC becomes a harvester and plotter.

    I guess I should change the config settings of my Windows Chia install to turn it into a harvester?
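
    (My rough understanding of what that involves, stated as an assumption to double-check: the Windows machine keeps running only the harvester, gets the farmer's CA certificates imported via "chia init -c <path-to-copied-ca-folder>", and has its config.yaml pointed at the unraid farmer, roughly like this.)

    harvester:
      farmer_peer:
        host: 192.168.1.x      # placeholder - the unraid/farmer IP
        port: 8447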

     

    4) Add container settings

    Can we leave all the settings untouched, except for the following:

    - plots directory

    - plotting directory

    - mnemonic: no change needed, but I'm aware I do need to key my mnemonic phrase into that text file

    Untitled.png

    Untitled2.png

  6. 12 hours ago, TexasUnraid said:

    Just a little FYI from some testing I am doing.

     

    You can get some good plotting speeds with normal hard drives without the need to wear out SSD's.

     

    With a regular 7200rpm 4tb single drive I was able to get an effective speed of 6.5 hours by plotting 3 in parallel.

     

    By raiding 4x 4tb drives together I have gotten it down to 2.5 effective hours/plot doing 8 in parallel and trying 12 now.

     

    I have a bunch of old small drives I plan on raiding together and using for plotting.

     

    Just a heads up so people don't think SSD's are required to plot. That said I would still use hard drives you don't care about for plotting as it will put a lot of wear and tear on them.

     

    I am doing the tests on an ubuntu machine using BTRFS filesystems. You can do the same thing with pools in unraid easier. I plan to do that once I settle on a setup.

    Ah, so it is possible to have an amazing plotting speed (high terabytes per day) simply by using a whole bunch of cheap HDDs in parallel! (assuming a CPU with enough grunt)
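
    (For anyone wanting to replicate that on a bare Linux box, I believe the striped scratch volume described above looks roughly like this - the device names are placeholders and this wipes them; on unraid you'd build the equivalent as a multi-device pool in the GUI instead.)

    # stripe a few spare drives into one btrfs volume to use as plotting temp space
    mkfs.btrfs -f -d raid0 -m raid0 /dev/sdb /dev/sdc /dev/sdd /dev/sde
    mkdir -p /mnt/plot-tmp
    mount /dev/sdb /mnt/plot-tmp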

  7. I currently only have a few plots on the unraid array. It keeps that disk continuously spun up - which I'm not exactly excited about; I'll definitely have some drives dedicated to Chia plots, or keep Chia plots off the array.

     

    The other concern is whether having Chia plots on the unraid array causes timeouts. Chia requires the plots/proofs (sorry - I haven't gotten my terms right yet) to be verified within SECONDS, and there has been news of NAS storage causing verifications to time out.

  8. 1 hour ago, SavellM said:

    1) I use my array as if I lose my plots I'd cry its taken so long to make them.

    But you can use anything really...

    I had spare drives in my array, so I just set them to spin forever.

     

    2) I dont do this, I got 2 spare Intel SSD's that I use for plotting via unassigned devices

    I did do 1 or 2 plots via my Cache pool as a test. Just make sure you link it via the container otherwise you'll plot to your docker.img file and it'll blow.

     

    3) I use both unRAID and my gaming PC and just have my gaming PC transfer the plots to my array via a share. 

     

    Hope this helps.

     

    Regarding #2, have you tried creating a 2nd cache pool for Chia purposes? I wonder if a non-redundant pool would make for faster plotting speeds (or allow for more parallel plotting).

  9. Gonna be exploring Chia farming, which I believe unraid-dabbling folks are extremely well primed to explore!

     

    Several thoughts:

    1) Storage (Farm) Locations: Should the Chia farm plots be stored on the unraid array, or should they be on unassigned devices?

    Benefits of being on unassigned devices

    - Reduce spin-up and wear on the array drives

    - These farm plots aren't exactly critical data - if the drives are lost, just build those plots again

     

    Edit - Specific Chia-only shares can be set to include specific disks and exclude undesired drives. This makes the spin-up point above moot, though I'm still undecided between Chia storage on the array or on unassigned disks.

     

    2) Plotting Locations: Chia plotting should be done on fast SSDs with high endurance. What about plotting on unraid BTRFS pools? E.g. a 2nd speedier non-redundant cache pool.

     

    3) I'll probably plot on my Desktop PC, and store Farm Plots on unassigned devices. I have 2 high endurance SM863/SM963A SSDs as my cache pool, so, I hope to start farming on the unraid system as well. Waiting for a proper docker for unraid...!

  10. JorgeB, Tigerherz,

     

    Thanks so much! It took me a while to figure out what it meant by "unassign all cache devices", since yesterday I was panicking and it was just a few hours before I had to take an examination 😆

     

    It works now! I tried those steps - array down, disable dockers.

    But I had to make sure to unassign all cache devices. I forget exactly what happened, but after unassigning the remaining cache drive, the pool disappeared. I added a new pool, re-assigned the 2 drives, ensured that there was no "All existing data on this device will be OVERWRITTEN when array is Started" warning as mentioned, restarted the array, and woohoo! All is well!

     

    If I didn't remove and re-create the pool, adding the drive back to the existing pool did give the OVERWRITTEN warning.

     

    Side note: 4 months ago, my unraid system kept randomly rebooting by itself 2-3 times every day for about a week. Now, this suddenly happens... Time to migrate to new hardware!

    Untitled2.png

  11.  

    1) The share was suddenly inaccessible - I/O error encountered while transferring files over

    2) Rebooted the system, and the entire cache pool and both its drives seemed inaccessible

    3) Rebooted again; 1 cache drive seems connected to the pool (but "Unmountable: Too many missing/misplaced devices"), and the other cache drive now shows up under "unassigned devices"

     

    - I've tried remounting the unassigned drive as read only (roughly as sketched below) - the files, including those freshly transferred just before the error, seem to be intact.

    - I have yet to physically touch the system - no adjustment of cables, etc. throughout the process.
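
    (For reference, the read-only mount I attempted was along these lines - the device name is a placeholder and I'm assuming a btrfs pool, so please correct me if this is the wrong approach.)

    # mount the surviving cache device read-only and degraded, just to eyeball the data
    mkdir -p /mnt/cache-check
    mount -o ro,degraded /dev/sdX1 /mnt/cache-check    # /dev/sdX1 = placeholder for the pool member
    ls /mnt/cache-check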

     

    I've read through this https://forums.unraid.net/topic/46802-faq-for-unraid-v6/page/2/?tab=comments#comment-543490, but I'm still not very sure how I can restore things back to exactly how they were, at least until I have more time over the weekend. I really hope not to lose the data in the cache pool - years of maintaining my Plex library, and some precious documents.

     

    Any comments on what I should do next? Thanks so much! I've attached my current screenshot, as well as the diagnostics zip.

    Untitled.png

    unraid-diagnostics-20210422-0023.zip

  12. Just made an order for this sweet baby! It'll be in stock some time next week:

    https://www.amazon.com/QNAP-QSW-1105-5T-5-Port-Unmanaged-2-5GbE/dp/B08F9ZL9LY/

     

    Strange that I can't seem to find much mention of this switch on this forum.

     

    Anyway, my fibre ISP isn't 2.5GbE, but at least I'll finally break that 113MB/s barrier between my PC and the unRaid system.

     

    This availability comes at a perfect time - I'm in the process of switching out my 6+ year old quad-core Intel system for an AMD one; my ASRock X570 Phantom Gaming X just arrived! Though I'm a little worried the integrated Realtek/Dragon RTL8125AG 2.5GbE NIC may not yet be supported in the latest stable unRaid release.

     

     

  13. The 2 HPE Samsung SM863 SSDs are running as a RAID 1 cache pool on my Unraid. They work perfectly so far, though temps are wrongly reported, around 12 to 15 degrees too low - which, according to some reddit threads, can be a common issue for certain enterprise drives used outside the environments they were customized for.

     

    Still, cheaper than a QVO, but faster, more reliable, and with, what, 15x the endurance? (though I'll probably never even reach 10% of that endurance before it's time to change them again)

     

    Anyway, sharing the good deal!

     


  14. 13 hours ago, UhClem said:

    Well, that "Amazon page" is, of course, the responsibility of the seller, GoHardDrive.  Are they incompetent, or dishonest? [Remember, drives are their specialty--they should be held accountable for correctness.]

    Yes. From a 4 yrs ago press release [Link],

    [ That SM863 link on AMZN is also sold by GoHardDrive.] I have no evidence, or direct experience, but my gut tells me to question their integrity.  Keep in mind that, while I (and probably you) am (are) not able to modify/reset SMART data, it is definitely possible. A perusal of Google results for <<goharddrive honest>> is enlightening (though not ALL bad).

     

    Who did you buy from on ebay?

     

    Good luck with your new toys.

     

     

    I bought mine here.

    https://www.ebay.com/itm/113767323096

    Reviews of this seller look good (at least not many issues). The seller also sells the 5100 Max (among other server SSDs like the Intel DC 3520); I wonder if they are the same merchant as GoHardDrive.

     

    Bought 4 units, including 2 for my friends. They arrived in proper 5-unit carton packaging, and the serial numbers are very close to each other. The anti-static wrap looks great, the SSDs look brand new as far as I can tell, and maybe I should check for any traces of usage on the connector pins of that last unit when my friend opens his.

     

    Basically, at this moment everything looks legit, and both my drives have been working well and appear to perform better than the advertised speeds, at least based on CrystalDiskMark and some unraid situations.

     

    Again, the only problems are that I can't upgrade the firmware (these being HPE drives), and that the reported temperatures are a good 13-15 degrees lower than they should be. I even did a preclear on them (I know I should NOT do that to SSDs, heh) to make sure everything reads okay, before using them as my cache drives. They also do not support the low-power sleep states that consumer drives have.

     

    At this moment, at these prices, these feel like wonderful drives for cache pools, and they can support heavy write-intensive usage, dockers, or VMs. My hypothesis is that such enterprise drive names and specific capacities (e.g. 1.92TB) are not what most people search for, and being HPE re-brands, merchants find it worthwhile to sell them at a low price when they have ample surplus stock to clear.

    (heh, I shouldn't talk about this so much if I want things to stay this way)

     

    CrystalDiskMark shots below. They probably can't tell the whole story (e.g. no latency values, etc), but I ran these tests anyway for the sake of making sure they aren't lemons.

     

     

    CrystalDiskMarks for both my SM863 1.92TB

    20190610_204128_resize.jpg  20190610_203436_resize.jpg

     

    CrystalDiskMarks for 850 Pro 512GB, 860 EVO 4TB, and an Intel DC 3.84TB

    20190610_231044_resize.jpg  20190610_232635_resize.jpg  20190610_231859_resize.jpg

     

    The carton the bunch of SM863 drives arrived in

    20190614_213540_resize.jpg

  15. On 2/2/2019 at 3:31 AM, bman said:

    tbh I haven't given Intel SSDs a fair shake since premature failure on 6 of 8 purchased 520 series products left students without their days' worth of video footage upon returning.  Swore off them because even though the warranty was good, the reliability wasn't.

     

    Seeing the price of the D3-4510 960G versus the competition has me rethinking things, albeit for different use cases.  I think you're on the right track with that one.

     

    Don't mean to necro this thread, but there are some really good deals on HPE (HP Enterprise) drives - similar to the Intel D3-4510 SSDs you mentioned - available at a steal (actually, better performance in the Samsungs' case).

     

    These enterprise drives have crazy endurance (e.g. the Micron 5100 Max is even more over-provisioned than the 5100 Pro you were looking at).

     

    Posted these on the Good Deals forum.

     

     

    Micron 5100 Max 1.92TB - Around $200 to $220

    https://www.amazon.com/HP-Micron-2-5-inch-Internal-MTFDDAK1T9TCC-1AR1ZABHA/dp/B07R3BYPM6/

    17.6PB (17,600 TBW)

     

    Samsung SM863 1.92TB - Around $215-229

    https://www.amazon.com/HP-Samsung-MZ-7KM1T90-2-5-inch-Internal/dp/B07SNH1THV

    12.32PB (12,320TBW)

     

     

     

     

  16. 2 really great enterprise-grade SSDs going at what I feel is a steal. Both appear to be HPE (HP Enterprise) branded SSDs.

     

    They each have a ridonculous 2-digit Petabyte endurance!

     

    For comparison, at the time of writing, a 2TB Samsung 860 Pro and an 860 EVO go for $477 and $297 respectively (endurance 2,400TBW and 1,200TBW).

     

    Unfortunately, it is nearly impossible to find side-by-side reviews and benchmark comparisons of these types of drives against consumer SATA drives, but they are certainly more than capable (especially the Samsung) of handling read-intensive server/enterprise loads. I'm personally really curious how these would fare against consumer drives in a desktop PC environment. But being so heavily over-provisioned and having insane endurance, these should be perfect for heavy downloading/par/unrar, and for content creators (render videos without worrying about NAND wear).

     

     

    Micron 5100 Max 1.92TB - Around $200 to $220

    https://www.amazon.com/HP-Micron-2-5-inch-Internal-MTFDDAK1T9TCC-1AR1ZABHA/dp/B07R3BYPM6/

    17.6PB (17,600 TBW) endurance

     

    The Amazon page says it's MLC, though according to Micron brochures it is eTLC NAND.

    Reviews are decent, but the Sammys seem to perform better.

     

     

    Samsung SM863 1.92TB - Around $215-229

    https://www.amazon.com/HP-Samsung-MZ-7KM1T90-2-5-inch-Internal/dp/B07SNH1THV

    12.32PB (12,320TBW) endurance

    Probably a bona fide MLC NAND drive.

     

    I splurged on 2 of these SM863s a week ago for my cache pool (RAID 1), from eBay. They seem to work really well so far; it's just that, these being HPE drives, the model displayed in Unraid isn't Samsung SM863 but the HPE rebrand. Temperatures appear to be wrongly reported by the SSDs as 10+ degrees lower than ambient temp. Will post some pictures and CrystalDiskMark benchmarks if anyone is interested (summary - they perform roughly similar to my 850 Pro 512GB, 860 EVO 4TB, and 850 EVO 512GB).

     

    Am I missing something - are there problems with these HPE SSDs? (e.g. dated firmware that's difficult to upgrade)

  17. I've been getting one every 2 months... Now it's happening even on unraid v6.0.1.

     

    It happened live while I was looking at the main screen; the error count moved from 1 to 4 during a refresh. There were some noises from the disk.

     

    I didn't save the SMART short report that was run after the error, but it looked okay; doing a preclear cycle now. All other disks look good - no increase in CRC error counts, etc.

     

    Anything I might be missing?

     

    Aug 29 23:03:50 unraid kernel: mpt2sas0: log_info(0x31110d00): originator(PL), code(0x11), sub_code(0x0d00)
    Aug 29 23:03:50 unraid kernel: sd 1:0:0:0: [sdg] UNKNOWN(0x2003) Result: hostbyte=0x00 driverbyte=0x08
    Aug 29 23:03:50 unraid kernel: sd 1:0:0:0: [sdg] Sense Key : 0x2 [current]
    Aug 29 23:03:50 unraid kernel: sd 1:0:0:0: [sdg] ASC=0x4 ASCQ=0x0
    Aug 29 23:03:50 unraid kernel: sd 1:0:0:0: [sdg] CDB: opcode=0x88 88 00 00 00 00 02 81 56 1c f0 00 00 00 08 00 00
    Aug 29 23:03:50 unraid kernel: blk_update_request: I/O error, dev sdg, sector 10759838960
    Aug 29 23:03:50 unraid kernel: md: disk5 read error, sector=10759838896
    Aug 29 23:04:01 unraid kernel: sd 1:0:0:0: [sdg] UNKNOWN(0x2003) Result: hostbyte=0x00 driverbyte=0x08
    Aug 29 23:04:01 unraid kernel: sd 1:0:0:0: [sdg] Sense Key : 0x2 [current]
    Aug 29 23:04:01 unraid kernel: sd 1:0:0:0: [sdg] ASC=0x4 ASCQ=0x0
    Aug 29 23:04:01 unraid kernel: sd 1:0:0:0: [sdg] CDB: opcode=0x8a 8a 08 00 00 00 02 81 56 1c f0 00 00 00 08 00 00
    Aug 29 23:04:01 unraid kernel: blk_update_request: I/O error, dev sdg, sector 10759838960
    Aug 29 23:04:01 unraid kernel: md: disk5 write error, sector=10759838896
    Aug 29 23:04:01 unraid kernel: md: recovery thread woken up ...
    Aug 29 23:04:01 unraid kernel: sd 1:0:0:0: [sdg] UNKNOWN(0x2003) Result: hostbyte=0x00 driverbyte=0x08
    Aug 29 23:04:01 unraid kernel: sd 1:0:0:0: [sdg] Sense Key : 0x2 [current]
    Aug 29 23:04:01 unraid kernel: sd 1:0:0:0: [sdg] ASC=0x4 ASCQ=0x0
    Aug 29 23:04:01 unraid kernel: sd 1:0:0:0: [sdg] CDB: opcode=0x88 88 00 00 00 00 02 86 74 6c 28 00 00 00 08 00 00
    Aug 29 23:04:01 unraid kernel: blk_update_request: I/O error, dev sdg, sector 10845711400
    Aug 29 23:04:01 unraid kernel: md: disk5 read error, sector=10845711336
    Aug 29 23:04:01 unraid kernel: md: recovery thread has nothing to resync
    Aug 29 23:04:02 unraid kernel: sd 1:0:0:0: [sdg] UNKNOWN(0x2003) Result: hostbyte=0x00 driverbyte=0x08
    Aug 29 23:04:02 unraid kernel: sd 1:0:0:0: [sdg] Sense Key : 0x2 [current]
    Aug 29 23:04:02 unraid kernel: sd 1:0:0:0: [sdg] ASC=0x4 ASCQ=0x0
    Aug 29 23:04:02 unraid kernel: sd 1:0:0:0: [sdg] CDB: opcode=0x8a 8a 08 00 00 00 02 86 74 6c 28 00 00 00 08 00 00
    Aug 29 23:04:02 unraid kernel: blk_update_request: I/O error, dev sdg, sector 10845711400
    Aug 29 23:04:02 unraid kernel: md: disk5 write error, sector=10845711336

  18. Finally moved to unraid v6!! Setting up unraid and plugins was much easier than I expected!!

     

    Did a clean install (with several safe .cfg files from v5) on an 850 EVO.

     

    Not sure whether I've made the right choice in using LimeTech's Plex docker install rather than Needo's.

     

    It took a while to figure out the mapping of the Plex Library directories. I needed to install and run Plex once to see where the Library directory goes, then used the command line to copy the Plex libraries from the old cache drive over to the new SSD.

     

    Plex posters now load so much faster!

  19. Does migrating from the 32-bit to the 64-bit version of Plex work okay with the existing 32-bit library files?

     

    I don't want Plex to sift through the entire library again; and more importantly, I don't want to re-do all the manual edits to the Plex libraries all over again!!!

     

     

    https://forums.plex.tv/discussion/88712/migrating-plex-from-32-to-64-bit-on-ubuntu-should-i-expect-issues

    I'm replacing my file server's dusty Pentium-based innards with some beefier Core i5 hardware. I'm currently running Ubuntu x86, but plan on a clean install with the x64 version.

     

    My question: can I expect to install PMS x64 and simply plop my existing 32-bit "/var/lib/plexmediaserver/Library/Application Support/Plex Media Server" directory in the same spot? Will that work for maintaining my settings, preferences etc... or are there significant changes/issues I should be aware of?

     

    Thanks for any insight.