Everything posted by Shunz

  1. Thanks! I was hoping to avoid reinstalling the containers, since I'll need to dig through and make sure I have the paths and variables (e.g. Nvidia devices) added correctly. And the Plex docker looks a little different in the apps list. Oh well. Sent from my SM-N9860 using Tapatalk
  2. Is it possible to use the /Library folder from another repository? - Can I even overwrite the entire library folder with another? (e.g. overwrite the entire /Library from my installation over to binhex's folder) I messed up my Docker image (it became full, and I had to delete it), and I'm having difficulty trying to reinstall Plex from the repository that I previously had. I had success copying the repository settings from SpaceInvader One's video, but I'm now considering switching to the binhex Plex Pass repository.
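A sketch of how that library copy could work, assuming the old template's appdata lives under /mnt/user/appdata/plex and binhex's under /mnt/user/appdata/binhex-plexpass - both paths are guesses, and the exact Library sub-path varies per image, so check your own container templates. The demo below runs against /tmp so it is safe to try as-is:

```shell
#!/bin/sh
# Hypothetical appdata paths - adjust to your own templates.
# (Demo uses /tmp so this sketch is runnable end to end; on a real
# system you'd use e.g. /mnt/user/appdata/plex/Library instead.)
OLD="/tmp/appdata-demo/plex/Library"
NEW="/tmp/appdata-demo/binhex-plexpass/Library"

# Simulate an existing library for the demo.
mkdir -p "$OLD/Application Support/Plex Media Server"
echo demo > "$OLD/Application Support/Plex Media Server/Preferences.xml"

# Stop the target container first, e.g.: docker stop binhex-plexpass
mkdir -p "$NEW"

# cp -a preserves ownership, permissions, and timestamps;
# "$OLD/." copies the directory's contents rather than the dir itself.
cp -a "$OLD/." "$NEW/"

ls "$NEW/Application Support/Plex Media Server"
```

After the copy, the new container needs its /config (or equivalent) mapping pointed at the new folder before starting it.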
  3. Hi guy.davis! Plotman seemed to fill up my Docker image (the docker service now fails to start - requiring a docker image deletion and container re-adding) when it tried to send a completed plot to a destination (defined under dst in locations) that was already too full. Is there any way to prevent this, other than 1) un-selecting full drives from the dst list, or 2) archiving? I think another incident is bound to happen, and this feels easy enough to trigger that I don't think I'm the first to experience it... (edit - corrected my own drunk 5am writing)
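One stopgap (my own workaround, not a documented plotman feature) is to check free space before leaving a drive in the dst list - a k32 plot's final file is roughly 102 GiB, which I use as an approximate threshold here. A shell sketch that flags which candidate directories still have room:

```shell
#!/bin/sh
# Flag destination dirs that still have room for one k32 plot.
# ~102 GiB final size is an approximation; adjust as needed.
MIN_GB=102

has_space() {
    # df -P gives portable output; field 4 of line 2 = available 1K-blocks.
    avail_kb=$(df -P "$1" | awk 'NR==2 {print $4}')
    [ "$avail_kb" -ge $((MIN_GB * 1024 * 1024)) ]
}

# Replace /tmp with your real dst dirs, e.g. /mnt/disks/chia1 /mnt/disks/chia2
for d in /tmp; do
    if has_space "$d"; then
        echo "ok:   $d"
    else
        echo "full: $d"
    fi
done
```

Running this before editing the dst list at least avoids pointing plotman at an already-full drive; it does not stop a drive filling mid-plot.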
  4. Thanks so much for this! I'm still a little hesitant about going full CLI, since I'll need to sit for a few hours at a go to experiment. Some questions during installation of the app:
     1) Plot path - On the add-container settings page, there seems to be only one folder selection for the plots. Are there eventually more disk destination options via plotman (for a whole bunch of unassigned devices)? I'm still wondering if I should place my plots in the protected array... Technically, plots aren't precious data (we can simply re-plot), so unassigned devices should be better from a performance point of view, both for the array and for the plots/farmer.
     2) Port forwarding (router settings - see attached image). Noob question here, but I thought I should ask, to be sure... a) Protocol - TCP? (or UDP/both) b) External port - 8444 c) Internal port - leave blank? d) Internal IP address - IP of the unraid server e) Source IP - leave blank
     3) Farmer/harvester - I'm currently using my main Windows gaming PC as my farmer... I intend to eventually use the unraid system as the farmer (it makes more sense this way - it's permanently online and connected), while my PC becomes a harvester and plotter. I guess I should change the config settings of my Windows Chia install to make it into a harvester?
     4) Add-container settings - Can we leave all the settings untouched, except the following: plots directory, plotting directory, and mnemonic (no change needed there, but I'm aware I need to key my mnemonic phrase into that text file)?
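On question 3 above: as far as I understand, the Chia CLI can point a machine's harvester at a remote farmer, which is the usual way to turn a Windows plotting box into a harvester-only node. A sketch - the IP and cert path are placeholders, and my understanding is that 8447 is the farmer's default harvester-facing port while 8444/TCP is the full-node port from question 2; verify both against the official docs before relying on them:

```shell
# Run from the chia CLI on the Windows PC (IP/paths are placeholders).

# Point this machine's harvester at the farmer on the unraid server.
chia configure --set-farmer-peer 192.168.1.50:8447

# Import the farmer's CA certificates (copy the CA folder over first).
chia init -c /path/to/copied_farmer_ca

# Start only the harvester, not a full node/farmer (-r restarts if running).
chia start harvester -r
```

This is a configuration sketch, not something I have verified end to end on Windows; the GUI farmer on that machine would need to stay stopped for the harvester-only role to make sense.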
  5. Pointing to Machinaris's docker - for GUI users like myself who are a little lazy at the moment to experiment with CLI (command line interface)
  6. Ah, so it is possible to have an amazing plotting speed (high terabytes per day) simply by using a whole bunch of cheap HDDs in parallel! (assuming a CPU with enough grunt)
  7. Thanks! I'm glad there's now a Chia docker and healthy discourse at the Partition Pixel thread. Oh gosh, SSD and HDD supplies will be so wrecked these next 2 years.
  8. I currently only have a few plots on the unraid array. It keeps that disk continuously spun up - which I'm not exactly excited about; I'll definitely have some drives dedicated to Chia plots, or keep Chia plots off the array. The other concern is whether having Chia plots on the unraid array causes timeouts. Chia requires the plots/proofs (sorry - I haven't gotten my terms right yet) to be verified within SECONDS, and there has been news that NAS storage was causing verifications to time out.
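On the timeout point: a crude way to gauge whether a spun-down or slow disk risks blowing the lookup window is to time a cold small read from it. The real tool for this is `chia plots check`; this dd probe is just a rough proxy I use, and the demo file below is a stand-in for a real .plot file:

```shell
#!/bin/sh
# Crude latency probe: time one small read at an offset into the file.
# A real proof lookup does several seeks, so treat multi-second results
# as a warning sign rather than a precise measurement.
PLOT="/tmp/fake-plot.bin"   # point at a real .plot file in practice

# Demo stand-in only: create an 8 MiB file to read from.
dd if=/dev/zero of="$PLOT" bs=1M count=8 2>/dev/null

start=$(date +%s)
# Read one 4 KiB block at a 4 MiB offset (bs=4k, skip=1024 blocks).
dd if="$PLOT" of=/dev/null bs=4k count=1 skip=1024 2>/dev/null
end=$(date +%s)

echo "read took $((end - start))s (aim for well under 5s)"
```

On a drive that has to spin up first, the same probe can easily take 10+ seconds, which is exactly the scenario the NAS-timeout reports describe.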
  9. Regarding #2, have you tried creating a 2nd cache pool for Chia purposes? I wonder if a non-redundant pool makes for faster plotting speeds (or allows for more parallel plotting).
  10. Gonna be exploring Chia farming, which I believe unraid-dabbling folks are extremely well primed to explore! Several thoughts:
     1) Storage (farm) locations: Should the Chia farm plots be stored on the unraid array, or on unassigned devices? Benefits of unassigned devices: reduced spin-up and wear on the array drives, and these farm plots aren't exactly critical data - if the drives are lost, just build those plots again. Edit - Specific Chia-only shares can be set to include specific disks and exclude non-desired drives. This makes the spin-up point above moot, though I'm still undecided between Chia storage on the array or on unassigned disks.
     2) Plotting locations: Chia plotting should be done on fast SSDs with high endurance. What about plotting on unraid BTRFS pools? E.g. a 2nd, speedier, non-redundant cache pool.
     3) I'll probably plot on my desktop PC, and store farm plots on unassigned devices. I have 2 high-endurance SM863/SM963A SSDs as my cache pool, so I hope to start farming on the unraid system as well. Waiting for a proper docker for unraid...!
  11. JorgeB, Tigerherz, thanks so much! It took me a while to figure out what was meant by "unassign all cache devices", since yesterday I was panicking and it was just a few hours before I had to take an examination 😆 It works now! I tried those steps - array down, disable dockers - but I had to make sure to unassign all cache devices. I forget exactly what happened, but after unassigning the remaining cache drive, the pool disappeared. I added a new pool, re-assigned the 2 drives, ensured that there was no "All existing data on this device will be OVERWRITTEN when array is Started" warning as mentioned, restarted the array, and woohoo! All is well! If I didn't remove and re-create a new pool, adding the drive back to the pool does give the OVERWRITTEN warning. Side note: 4 months ago, my unraid system kept randomly rebooting by itself 2-3 times every day for about a week. Now this suddenly happens... Time to migrate to new hardware!
  12. 1) The share was suddenly inaccessible - an i/o error was encountered while transferring files over. 2) Rebooted the system, and the entire cache pool and both its drives seemed inaccessible. 3) Rebooted again; 1 cache drive seems connected to the pool (but "Unmountable: Too many missing/misplaced devices"), and the other cache drive is now in "unassigned devices". - I've tried remounting the unassigned drive as read-only - the files, and any freshly transferred files from just before the error, seem to be intact. - I have yet to physically touch the system - no adjustment of cables, etc, throughout the process. I've read through this, but I'm still not very sure how I can restore things back to exactly where they were, until I take more time over the weekend. I really hope not to lose the data in the cache pool - years of maintaining my Plex library, and some precious documents. Any comments on what I should do next? Thanks so much! I've attached my current screenshot, as well as the diagnostics zip.
  13. Just made an order for this sweet baby! It'll be in stock some time next week. Strange that I can't seem to find much mention of this switch in this forum. Anyway, my fibre ISP isn't 2.5GbE, but at least I'll finally break that 113MB/s barrier between my PC and the unRaid system. This availability comes at a perfect time - I'm in the process of switching out my 6+ year old quad-core Intel system for an AMD one; my ASRock X570 Phantom Gaming X just arrived! Though I'm a little worried the integrated 2.5GbE Realtek/Dragon RTL8125AG may not yet be supported in the latest stable unRaid release.
  14. I didn't put much effort into trying, and sold the Prolink UPS to a friend within a month, and have been using APC since. It's been 5 years, so I'm not sure if there's any advances made on the Prolink side of things...
  15. The 2 HPE Samsung SM863 SSDs running as a RAID 1 cache pool on my Unraid. They work perfectly so far, though temps are wrongly reported - around 12 to 15 degrees too low - which, according to some reddit threads, can be a common issue for certain enterprise drives not being used in the environments they were customized for. Still, cheaper than a QVO, but faster, more reliable, and with, what, 15x the endurance? (though I'll probably never even reach 10% of the endurance before it's time to change them again) Anyway, sharing the good deal!
  16. I bought mine here. Reviews of this seller look good (at least not many issues). The seller also sells the 5100 Max (among other server SSDs like the Intel DC 3520); I wonder if they are the same merchant as GoHardDrive. Bought 4 units, including 2 for my friends. They arrived in a proper 5-unit carton packaging, and the serial numbers are very close to each other. The anti-static wrap looks great, the SSDs look brand new as far as I can recall, and maybe I should take a look for any traces of usage on the connector pins of that last unit when my friend opens his. Basically, at this moment everything looks legit, and both my drives have been working well and appear to perform better than the advertised speeds, at least based on CrystalMark and some unraid situations. Again, the only problems are that I can't upgrade the firmware (these being HPE drives), and the temperatures reported are a good 13-15 degrees lower than what they should be. I even did a preclear on them (I know I should NOT do so to SSDs, heh) to make sure everything reads okay, before using them as my cache drives. They also do not support the low-power sleep states that consumer drives have. At this moment, at these prices, these feel like wonderful drives for cache pools, and can support high write-intensive usage, dockers, or VMs. My hypothesis is that such enterprise drive names and specific capacities (e.g. 1.92TB) are not what most people search for, and these being HPE re-brands, merchants find it worthwhile to sell at a low price when they have ample surplus stock to clear. (heh, I shouldn't talk about this so much, if I want things to keep this way) CrystalDiskMark shots below. They probably can't tell the whole story (e.g. no latency values, etc), but I ran these tests anyway for the sake of making sure they aren't lemons. CrystalDiskMarks for both my SM863 1.92TB. CrystalDiskMarks for 850 Pro 512GB, 860 EVO 4TB, and an Intel DC 3.84TB. The carton the bunch of SM863 drives arrived in.
  17. Don't mean to necro this thread, but some really good deals for HPE (HP Enterprise) drives - similar to your Intel D3-4510 SSDs available here at a steal. (actually, better performance, for the Samsungs) These enterprise drives have crazy endurance (e.g. the Micron 5100 Max is even more over-provisioned than the 5100 Pro you were looking at). Posted these on the Good Deals forum. Micron 5100 Max 1.92TB - Around $200 to $220 17.6PB (17,600 TBW) Samsung SM863 1.92TB - Around $215-229 12.32PB (12,320TBW)
  18. 2 really great enterprise-grade SSDs going at what I'd feel is a steal. Both appear to be HPE (HP Enterprise) branded SSDs. They each have a ridonculous 2-digit Petabyte endurance! For comparison, at time of writing, a 2TB Samsung 860 Pro and an 860 EVO go for $477 and $297 respectively (endurance 2400TBW and 1200TBW). Unfortunately, it is nearly impossible to find side-by-side reviews and benchmark comparisons of these types of drives against consumer SATA drives, but they are certainly more than capable (especially the Samsung) for read-intensive server/enterprise types of heavy loads. I'm personally really curious how these would fare against consumer drives in a PC desktop environment. But being so heavily over-provisioned and having insane endurance, these should be perfect for heavy downloading/par/unrar, and for content creators (render videos without worry of NAND wear). Micron 5100 Max 1.92TB - Around $200 to $220, 17.6PB (17,600 TBW) endurance. The Amazon page says it's MLC, though according to Micron brochures it is eTLC NAND. Reviews are decent, but the Sammys seem to perform better. Samsung SM863 1.92TB - Around $215-229, 12.32PB (12,320TBW) endurance. Probably a bona fide MLC NAND drive. I splurged on 2 of these SM863s a week ago for my cache pool (RAID 1), from eBay. They seem to work really well so far, just that these being HPE drives, the model displayed on Unraid isn't Samsung SM863, but the HPE rebrand. Temperatures appear to be wrongly reported by the SSDs as 10+ degrees lower than ambient temp. Will post some pictures and CrystalMark benchmarks if anyone is interested (summary - they perform roughly similarly to my 850 Pro 512GB, 860 EVO 4TB, 850 EVO 512GB). Am I missing something - are there problems with these HPE SSDs? (e.g. dated firmware that's difficult to upgrade)
  19. Oh crap, I've been planning to purchase extra RAM since last year so that I could comfortably transcode to RAM. So I've finally just purchased an extra 16GB of RAM - before reading this. Bummer
  20. I've been getting one every 2 months... Now it's happening even on unraid v6.0.1. It happened live while I was looking at the main screen - the error count moved from 1 to 4 during a refresh. There were some noises from the disk. I didn't save the SMART short report that was run after the error, but it looked okay; doing a preclear cycle now. All other disks look good - no increase in CRC error counts etc. Anything I might be missing?
Aug 29 23:03:50 unraid kernel: mpt2sas0: log_info(0x31110d00): originator(PL), code(0x11), sub_code(0x0d00)
Aug 29 23:03:50 unraid kernel: sd 1:0:0:0: [sdg] UNKNOWN(0x2003) Result: hostbyte=0x00 driverbyte=0x08
Aug 29 23:03:50 unraid kernel: sd 1:0:0:0: [sdg] Sense Key : 0x2 [current]
Aug 29 23:03:50 unraid kernel: sd 1:0:0:0: [sdg] ASC=0x4 ASCQ=0x0
Aug 29 23:03:50 unraid kernel: sd 1:0:0:0: [sdg] CDB: opcode=0x88 88 00 00 00 00 02 81 56 1c f0 00 00 00 08 00 00
Aug 29 23:03:50 unraid kernel: blk_update_request: I/O error, dev sdg, sector 10759838960
Aug 29 23:03:50 unraid kernel: md: disk5 read error, sector=10759838896
Aug 29 23:04:01 unraid kernel: sd 1:0:0:0: [sdg] UNKNOWN(0x2003) Result: hostbyte=0x00 driverbyte=0x08
Aug 29 23:04:01 unraid kernel: sd 1:0:0:0: [sdg] Sense Key : 0x2 [current]
Aug 29 23:04:01 unraid kernel: sd 1:0:0:0: [sdg] ASC=0x4 ASCQ=0x0
Aug 29 23:04:01 unraid kernel: sd 1:0:0:0: [sdg] CDB: opcode=0x8a 8a 08 00 00 00 02 81 56 1c f0 00 00 00 08 00 00
Aug 29 23:04:01 unraid kernel: blk_update_request: I/O error, dev sdg, sector 10759838960
Aug 29 23:04:01 unraid kernel: md: disk5 write error, sector=10759838896
Aug 29 23:04:01 unraid kernel: md: recovery thread woken up ...
Aug 29 23:04:01 unraid kernel: sd 1:0:0:0: [sdg] UNKNOWN(0x2003) Result: hostbyte=0x00 driverbyte=0x08
Aug 29 23:04:01 unraid kernel: sd 1:0:0:0: [sdg] Sense Key : 0x2 [current]
Aug 29 23:04:01 unraid kernel: sd 1:0:0:0: [sdg] ASC=0x4 ASCQ=0x0
Aug 29 23:04:01 unraid kernel: sd 1:0:0:0: [sdg] CDB: opcode=0x88 88 00 00 00 00 02 86 74 6c 28 00 00 00 08 00 00
Aug 29 23:04:01 unraid kernel: blk_update_request: I/O error, dev sdg, sector 10845711400
Aug 29 23:04:01 unraid kernel: md: disk5 read error, sector=10845711336
Aug 29 23:04:01 unraid kernel: md: recovery thread has nothing to resync
Aug 29 23:04:02 unraid kernel: sd 1:0:0:0: [sdg] UNKNOWN(0x2003) Result: hostbyte=0x00 driverbyte=0x08
Aug 29 23:04:02 unraid kernel: sd 1:0:0:0: [sdg] Sense Key : 0x2 [current]
Aug 29 23:04:02 unraid kernel: sd 1:0:0:0: [sdg] ASC=0x4 ASCQ=0x0
Aug 29 23:04:02 unraid kernel: sd 1:0:0:0: [sdg] CDB: opcode=0x8a 8a 08 00 00 00 02 86 74 6c 28 00 00 00 08 00 00
Aug 29 23:04:02 unraid kernel: blk_update_request: I/O error, dev sdg, sector 10845711400
Aug 29 23:04:02 unraid kernel: md: disk5 write error, sector=10845711336
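To tally which sectors are failing in a dump like the one above, a quick shell pass over the syslog works. This sketch embeds a few of the md error lines so it runs standalone; in practice you would point it at /var/log/syslog (or the copy inside a diagnostics zip):

```shell
#!/bin/sh
# Summarize md read/write errors per sector from a syslog extract.
# Real usage: run the awk pipeline below against /var/log/syslog.
cat > /tmp/syslog-sample <<'EOF'
Aug 29 23:03:50 unraid kernel: md: disk5 read error, sector=10759838896
Aug 29 23:04:01 unraid kernel: md: disk5 write error, sector=10759838896
Aug 29 23:04:01 unraid kernel: md: disk5 read error, sector=10845711336
Aug 29 23:04:02 unraid kernel: md: disk5 write error, sector=10845711336
EOF

# For each matching line, print "<sector> <read|write>", then count
# repeats - repeated sectors suggest a genuinely bad spot, while
# one-off scattered sectors point more toward cabling/controller.
awk '/md: disk5 (read|write) error/ {
    split($0, a, "sector=")
    print a[2], $(NF - 2)
}' /tmp/syslog-sample | sort | uniq -c
```

Here each sector appears once as a read error and once as a write error (the write is unraid attempting to correct the read from parity), which matches the log above.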
  21. Finally moved to unraid v6!! Setting up unraid and plugins was much easier than I expected!! Did a clean install (with several safe .cfg files from v5) on an 850 EVO. Not sure whether I've made the right choice in using LimeTech's Plex docker install rather than Needo's. Took a while to figure out the mapping of the Plex Library directories. Needed to install and run Plex once to see where the Library directory goes, then used the command line to copy the Plex libraries from the old cache drive over to the new SSD. Plex posters now load so much faster!
  22. Does migrating Plex from 32-bit to 64-bit work okay on the existing 32-bit library files? I don't want Plex to sift through the entire library again - and, more importantly, I don't want to re-do all the manual edits to the Plex libraries all over again!!!
  23. Thanks guys! Interesting stuff! I wonder if 16GB of RAM is enough - my Plex transcodes to the Roku are always done at 20Mbps. I would love to upgrade to 32GB, though at the moment purchasing another 16GB of ECC memory would hurt a little, haha.
  24. Any comments on which would be better? I'm about to upgrade from a v5 installation to v6, and this should be a great occasion to replace my existing cache drive (and especially the Plex Library and transcode folders) with an SSD.
     Usage patterns: Array of 6x WD Red + 1 HGST parity, Core i3-4150, 16GB, on an X10SL7-F. Currently mostly used as a media server running Plex, maximum 3 streams at a time. My transfers to the cache drive vary from about 50GB to 200+ GB at a time. Ideally one should now use a 2x SSD cache pool, running Plex off the pool...
     Existing setup, on unraid v5: Cache WD Black 750GB (2.5-inch, 1.5 years old) > includes Plex Library and transcode folders (all in a share-only App folder). Covers and menus currently take some time to load!!
     Upgrade option 1, on unraid v6 (benefit: no more spinners for the cache drive; straightforward clean upgrade from v5 to v6): Upgrade the cache drive to a Samsung 850 EVO 500GB SSD > includes Plex Library and transcode folders (all in a share-only App folder).
     Upgrade option 2, on unraid v6 (benefit: I get to upgrade my PC gaming drive from 256GB to 500GB): No change to the WD Black 750GB (2.5-inch) cache drive; add the 2-year-old SSD from my PC (about 6.5TBW), a Samsung 840 Pro 256GB, for the Plex Library and transcode folders.
     Upgrade option 3, on unraid v6 (benefit: same as 1 and 2, but with 1/2 the cache): Swap the cache drive for the 2-year-old Samsung 840 Pro 256GB (about 6.5TBW) > includes Plex Library and transcode folders (all in a share-only App folder).
  25. Did a quick read-through of the massive v6 upgrading guide and the v6 manual. Fwaaah! Some really amazing and long-awaited improvements - better file systems, streamlining (v5 feels generally patchwork by comparison), and dockers are probably my favorite ones right now.