Everything posted by Shunz

  1. Thanks!! I had the same problem, starting a few days ago over the weekend. Sadly, I spent a good deal of time figuring out how to delete the config and db files, since the appdata permissions (especially the machinaris folders) were locked by unraid (and I'm too lazy to figure out the commands). My binhex Krusader refused to rename or delete the files until I googled that I needed to set the PGID and PUID values in Krusader's Docker template to 0 (zero = root), so Krusader runs as root. Sheesh
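     For anyone who would rather fix the permissions from the unraid terminal than run Krusader as root, a rough sketch of what I believe should work (the appdata path is an assumption - adjust it to your setup):
     chown -R nobody:users /mnt/user/appdata/machinaris    # reset ownership to unraid's default user and group
     chmod -R u+rwX,g+rwX /mnt/user/appdata/machinaris     # make the config and db files writable again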
  2. Thanks! I was hoping to avoid reinstalling the containers, since I'll need to dig through and make sure I have the paths and variables (e.g. Nvidia devices) added back correctly. And the linuxserver.io plex docker looks a little different in the apps list. Oh well. Sent from my SM-N9860 using Tapatalk
  3. Is it possible to use the /Library folder from another repository? Can I even overwrite the entire library folder with another? (e.g. copy the whole /Library from my linuxserver.io installation over to binhex's folder) I messed up my Docker image (it became full, and I had to delete it), and I'm having difficulty reinstalling the Plex container from the linuxserver.io repository that I previously had. I had success copying the repository settings from SpaceInvader One's video, but I'm now considering switching to the binhex Plex Pass repository.
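     In case anyone tries the same migration, a rough sketch of copying the library between appdata folders with both Plex containers stopped (both paths below are assumptions - check where each template actually keeps its "Plex Media Server" folder before copying anything):
     rsync -a --progress "/mnt/user/appdata/plex/Library/Application Support/Plex Media Server/" \
           "/mnt/user/appdata/binhex-plexpass/Plex Media Server/"    # copy the library, preserving permissions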
  4. Docker service failed to start, even after a reboot. 1) The Docker image could be full? (it shows 100% in the Dashboard memory area) 2) I suspect it could be due to my Machinaris Chia container - plotman was trying to move a 100GB plot file into a destination that was already full. Posting my diagnostics file here. I believe the solution is to delete the docker vdisk file and create a new one? But before that... I'm wondering if there's a way to access the contents of docker.img and manually delete the container or junk files, so as to make the docker image "not full". I'm just concerned about whether I need to re-configure my docker container mappings again if I re-create a new image (e.g. the setting in Plex that enables GPU transcoding). I've tried setting the Docker service to "no" and adjusting to a larger image size, but that didn't work. unraid-diagnostics-20211219-0535.zip
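     If the Docker service can still be brought up at all, a quick sketch of checking what is filling the image before resorting to deleting the vdisk (standard docker CLI from the unraid terminal; a rough approach, not something I've confirmed on my own box yet):
     docker system df            # show how much space images, containers and volumes use inside docker.img
     docker system prune -f      # remove stopped containers, dangling images and unused networks
     docker image prune -a -f    # optionally also remove images not referenced by any container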
  5. Hi guy.davis! Plotman seemed to make the Docker image fill up (the docker service now fails to start - requiring a docker image deletion and container re-adding) when it tried to move a completed plot to a destination (one of the locations defined under dst) that was already full. Is there any way to prevent this, other than 1) un-selecting full drives from the dst list, or 2) archiving? I think another incident is bound to happen, and this feels easy enough to trigger that I don't think I'm the first to experience it... (edit - corrected my own drunk 5am writing)
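     A crude stopgap until there's a better answer - before (re)starting plots, check the free space on the drives listed under dst from the terminal and drop anything that can't comfortably hold another plot (roughly 101 GiB for a k=32 plot; the mount points below are assumptions for my setup):
     df -h /mnt/disks/chia_* 2>/dev/null    # free space on each unassigned-device chia drive
     df -h /mnt/user/chia 2>/dev/null       # free space if plots also live on an array share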
  6. Thanks so much for this! I'm still a little hesitant about going full CLI, since I'll need to sit down for a few hours at a go to experiment. Some questions during installation of the app:
     1) Plot path - On the add-container settings page, there seems to be only one folder selection for the plots. Are there eventually more disk destination options via plotman (for a whole bunch of unassigned devices)? I'm still wondering if I should place my plots on the protected array... Technically, plots aren't precious data (we can simply re-plot), so unassigned devices should be better from a performance point of view, both for the array and for the plots/farmer.
     2) Port forwarding (router settings - see attached image). Noob question here, but I thought I should ask, to be sure... a) Protocol - TCP? (or UDP/both) b) External port - 8444 c) Internal port - leave blank? d) Internal IP address - IP of the unraid server e) Source IP - leave blank
     3) Farmer/Harvester - I'm currently using my main Windows gaming PC as my farmer... I intend to eventually use the unraid system as the farmer (it makes more sense this way - it's permanently online and connected), while my PC becomes a harvester and plotter. I guess I should change the config of my Windows Chia install to make it a harvester? (rough sketch of what I think that looks like just below)
     4) Add-container settings - Can we leave all the settings untouched, except the following: plots directory, plotting directory, and the mnemonic (no change needed in the template, but I'm aware I need to key my mnemonic phrase into that text file)?
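     Regarding question 3, from what I've read, converting the Windows box into a harvester that reports to the unraid farmer looks roughly like this with the standard Chia CLI (the IP address and certificate path are placeholders, and the CA certificates have to be copied over from the farmer machine first):
     chia stop all -d                                     # stop the full node / farmer services on the PC
     chia init -c D:\chia_farmer_ca                       # import the CA certificates copied from the farmer
     chia configure --set-farmer-peer 192.168.1.10:8447   # point the harvester at the farmer (8447 is the default farmer port)
     chia start harvester -r                              # start (or restart) just the harvester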
  7. Pointing to Machinaris's docker - for GUI users like myself who are a little too lazy at the moment to experiment with the CLI (command line interface)
  8. Ah, so it is possible to have an amazing plotting speed (high terabytes per day) simply by using a whole bunch of cheap HDDs in parallel! (assuming a CPU with enough grunt)
  9. Thanks! I'm glad there's now a Chia docker and healthy discourse in the Partition Pixel thread. Oh gosh, SSD and HDD supplies will be so wrecked for the next 2 years
  10. I currently only have a few plots on the unraid array. It keeps that disk continuously spun up - which I'm not exactly excited about; I'll definitely have some drives dedicated to Chia plots, or keep chia plots off the array. The other concern is whether having Chia plots on the unraid array causes timeouts. Chia requires the plots/proofs (sorry - I haven't gotten my terms right yet) to be verified within SECONDS, and there has been news that NAS storage was causing verifications to time out.
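     For anyone with the same worry, one way I've seen suggested to check whether array-hosted plots respond fast enough is to pull the harvester's lookup times out of the Chia debug log (log level needs to be INFO; the path below assumes a standard install, and the commonly cited guidance is to keep these times well under 5 seconds):
     grep "plots were eligible" ~/.chia/mainnet/log/debug.log | tail -n 20    # each line ends with the lookup time, e.g. "Time: 0.45231 s."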
  11. Regarding #2, have you tried creating a 2nd cache pool for Chia purposes? I wonder if a non-redundant pool makes for faster plotting speeds (or allows for more parallel plotting)
  12. Gonna be exploring Chia farming, which I believe unraid-dabbling folks are extremely well primed to explore! Several thoughts:
     1) Storage (farm) locations: Should the Chia farm plots be stored on the unraid array, or on unassigned devices? Benefits of unassigned devices - reduced spin-up and wear on the array drives, and these farm plots aren't exactly critical data: if the drives are lost, just build the plots again. Edit - Chia-only shares can be set to include specific disks and exclude non-desired drives. This makes the spin-up point above moot, though I'm still undecided between chia storage on the array or on unassigned disks.
     2) Plotting locations: Chia plotting should be done on fast SSDs with high endurance. What about plotting on unraid BTRFS pools? E.g. a second, speedier, non-redundant cache pool.
     3) I'll probably plot on my desktop PC, and store farm plots on unassigned devices. I have 2 high-endurance SM863/SM963A SSDs as my cache pool, so I hope to start farming on the unraid system as well. Waiting for a proper docker for unraid...!
  13. JorgeB, Tigerherz, thanks so much! It took me a while to figure out what was meant by "unassign all cache devices", since yesterday I was panicking and it was just a few hours before I had to take an examination 😆 It works now! I tried those steps - array down, disable dockers - but I had to make sure to unassign all cache devices. I forget exactly what happened, but after unassigning the remaining cache drive, the pool disappeared. I added a new pool, re-assigned the 2 drives, ensured there was no "All existing data on this device will be OVERWRITTEN when array is Started" warning as mentioned, restarted the array, and woohoo! All is well! If I didn't remove and re-create a new pool, adding the drive back to the pool does give the OVERWRITTEN warning. Side note: 4 months ago, my unraid system kept randomly rebooting by itself 2-3 times a day for about a week. Now this suddenly happens... Time to migrate to new hardware!
  14. 1) A share was suddenly inaccessible - I/O error encountered while transferring files over. 2) Rebooted the system, and the entire cache pool and both its drives seemed inaccessible. 3) Rebooted again; one cache drive seems connected to the pool (but "Unmountable: Too many missing/misplaced devices"), and the other cache drive is now in "unassigned devices". I've tried remounting the unassigned drive as read only - the files, and any freshly transferred files (from just before the error), seem to be intact. I have yet to physically touch the system - no adjustment of cables, etc. throughout the process. I've read through this https://forums.unraid.net/topic/46802-faq-for-unraid-v6/page/2/?tab=comments#comment-543490, but I'm still not very sure how I can restore things back to exactly where they were, until I take more time over the weekend. I really hope not to lose the data in the cache pool - years of maintaining my plex library, and some precious documents. Any comments on what I should do next? Thanks so much! I've attached my current screenshot, as well as the diagnostics zip. unraid-diagnostics-20210422-0023.zip
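     (For reference, I believe mounting the dropped pool member read-only from the terminal would look roughly like the sketch below - the device name is a placeholder, and I used the Unassigned Devices read-only option, which I assume does something equivalent:)
     mkdir -p /mnt/recovery
     mount -o ro,degraded /dev/sdX1 /mnt/recovery    # read-only; "degraded" lets a btrfs raid1 member mount with its partner missing
     ls /mnt/recovery                                # check the data is visible before doing anything else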
  15. Just made an order for this sweet baby! It'll be in stock some time next week: https://www.amazon.com/QNAP-QSW-1105-5T-5-Port-Unmanaged-2-5GbE/dp/B08F9ZL9LY/ Strange that I can't seem to find much mention of this switch in this forum. Anyway, my fibre ISP isn't 2.5GbE, but at least I'll finally break that 113MB/s barrier between my PC and the unRaid system. This availability comes at a perfect time - I'm in the process of switching out my 6+ year old quad-core Intel system for an AMD one; my ASRock X570 Phantom Gaming X just arrived! Though I'm a little worried the integrated 2.5GbE Realtek/Dragon RTL8125AG may not yet be supported in the latest stable unRaid release.
  16. I didn't put much effort into trying - I sold the Prolink UPS to a friend within a month and have been using APC since. It's been 5 years, so I'm not sure if there have been any advances made on the Prolink side of things...
  17. The 2 HPE Samsung SM863 SSDs are running as a RAID 1 cache pool on my Unraid. Works perfectly so far, though temps are wrongly reported - around 12 to 15 degrees too low - which, according to some reddit threads, can be a common issue for certain enterprise drives not being used in the environments they were customized for. Still, cheaper than a QVO, but faster, more reliable, and with, what, 15x the endurance? (though I'll probably never even reach 10% of the endurance before it's time to change them again) Anyway, sharing the good deal!
  18. I bought mine here. https://www.ebay.com/itm/113767323096 Reviews of this seller look good (at least not many issues). The seller also sells the 5100 Max (among other server SSDs like the Intel DC 3520); I wonder if they are the same merchant as GoHardDrive. Bought 4 units, including 2 for my friends. They arrived in a proper 5-unit carton packaging, and the serial numbers are very close to each other. The anti-static wrap looks great, the SSDs look brand new as far as I can recall, and maybe I should check for any traces of usage on the connector pins of that last unit when my friend opens his. Basically, at this moment everything looks legit, both my drives have been working well, and they appear to perform better than the advertised speeds, at least based on CrystalDiskMark and some unraid situations. Again, the only problems are that I can't upgrade the firmware (these being HPE drives), and the temperatures reported are a good 13-15 degrees lower than what they should be. I even did a preclear on them (I know I should NOT do so to SSDs, heh) to make sure everything reads okay, before using them as my cache drives. They also do not support the low-power sleep states that consumer drives have. At this moment, at these prices, these feel like wonderful drives for cache pools, and can support high write-intensive usage, dockers or VMs. My hypothesis is that these enterprise drive names and specific capacities (e.g. 1.92TB) are not what most people search for, and being HPE re-brands, merchants find it worthwhile to sell them at a low price if they have ample surplus stock to clear. (heh, I shouldn't talk about this so much if I want things to stay this way) CrystalDiskMark shots below. They probably can't tell the whole story (e.g. no latency values, etc.), but I ran these tests anyway for the sake of making sure they aren't lemons. CrystalDiskMarks for both my SM863 1.92TB drives. CrystalDiskMarks for the 850 Pro 512GB, 860 EVO 4TB, and an Intel DC 3.84TB. The carton the bunch of SM863 drives arrived in.
  19. Don't mean to necro this thread, but some really good deals on HPE (HP Enterprise) drives - similar to your Intel D3-4510 SSDs - are available here at a steal (actually, better performance in the case of the Samsungs). These enterprise drives have crazy endurance (e.g. the Micron 5100 Max is even more over-provisioned than the 5100 Pro you were looking at). Posted these on the Good Deals forum. Micron 5100 Max 1.92TB - around $200 to $220 https://www.amazon.com/HP-Micron-2-5-inch-Internal-MTFDDAK1T9TCC-1AR1ZABHA/dp/B07R3BYPM6/ 17.6PB (17,600 TBW) Samsung SM863 1.92TB - around $215-229 https://www.amazon.com/HP-Samsung-MZ-7KM1T90-2-5-inch-Internal/dp/B07SNH1THV 12.32PB (12,320 TBW)
  20. 2 really great enterprise-grade SSDs going for what I feel is a steal. Both appear to be HPE (HP Enterprise) branded SSDs. They each have a ridonculous 2-digit petabyte endurance! For comparison, at the time of writing, a 2TB Samsung 860 Pro and 860 EVO go for $477 and $297 respectively (endurance 2400TBW and 1200TBW). Unfortunately, it is nearly impossible to find side-by-side reviews and benchmark comparisons of these types of drives against consumer SATA drives, but they are certainly more than capable (especially the Samsung) for read-intensive server/enterprise heavy loads. I'm personally really curious how these would fare against consumer drives in a PC desktop environment. But being so heavily over-provisioned and having insane endurance, these should be perfect for heavy downloading/par/unrar, and for content creators (render videos without worrying about NAND wear). Micron 5100 Max 1.92TB - around $200 to $220 https://www.amazon.com/HP-Micron-2-5-inch-Internal-MTFDDAK1T9TCC-1AR1ZABHA/dp/B07R3BYPM6/ 17.6PB (17,600 TBW) endurance. The Amazon page says it's MLC, though according to Micron brochures it is eTLC NAND. Reviews are decent, but the Sammys seem to perform better. Samsung SM863 1.92TB - around $215-229 https://www.amazon.com/HP-Samsung-MZ-7KM1T90-2-5-inch-Internal/dp/B07SNH1THV 12.32PB (12,320 TBW) endurance. Probably a bona fide MLC NAND drive. I splurged on 2 of these SM863s a week ago for my cache pool (RAID 1), from eBay. They seem to work really well so far, just that these being HPE drives, the model displayed on Unraid isn't Samsung SM863 but the HPE rebrand. Temperatures appear to be wrongly reported by the SSDs as 10+ degrees lower than ambient temp. Will post some pictures and CrystalDiskMark benchmarks if anyone is interested (summary - they perform roughly similarly to my 850 Pro 512GB, 860 EVO 4TB, 850 EVO 512GB). Am I missing something - are there problems with these HPE SSDs? (e.g. dated firmware that's difficult to upgrade)
  21. here Oh crap, I've been planning to purchase extra RAM since last year so that I could comfortably transcode to RAM. So I've finally just purchased an extra 16GB of RAM - before reading this. Bummer
  22. I took a gamble on the Prolink because it has universal sockets, which I needed for the 90-degree power cord connection, since the back of my unraid server is very close to a wall. Looks like I'll have to find a way to make do with an APC...
  23. I'm not sure if it is too early to say, but I don't think the Prolink is working, at least at the moment.
     On the Sysinfo UPS Status tab (from /sbin/apcaccess status): Error contacting apcupsd @ localhost:3551: Connection refused
     Syslog:
     Aug 26 02:38:17 unraid apcupsd[3286]: apcupsd FATAL ERROR in linux-usb.c at line 609 Cannot find UPS device -- For a link to detailed USB trouble shooting information, please see <http://www.apcupsd.com/support.html>. (Errors)
     Aug 26 02:38:17 unraid apcupsd[3286]: apcupsd error shutdown completed (Errors)
     Syslog after re-plugging the USB:
     Aug 26 02:30:22 unraid kernel: usb 3-3: new low-speed USB device number 7 using xhci_hcd (Drive related)
     Aug 26 02:30:22 unraid kernel: hid-generic 0003:0665:5161.0005: hiddev0,hidraw0: USB HID v1.00 Device [iNNO TECH USB to Serial] on usb-0000:00:14.0-3/input0 (Drive related)
     I've been looking around for information for the past 2 hours but can't find anything yet. The only thing I thought I could try - which didn't help - was to change the "use serial port?" setting to /dev/hiddev0.
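     For anyone landing here with a similar non-APC USB HID UPS, the apcupsd settings I've since seen suggested are roughly the following (the config path is an assumption for unraid's bundled apcupsd; treat this as a sketch rather than a confirmed fix for this particular Prolink):
     grep -E '^(UPSCABLE|UPSTYPE|DEVICE)' /etc/apcupsd/apcupsd.conf    # check what apcupsd is currently configured with
     # commonly suggested values for a generic USB HID UPS:
     #   UPSCABLE usb
     #   UPSTYPE  usb
     #   DEVICE              (left blank so apcupsd auto-detects the USB HID device)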
  24. Oh? It's a standard protocol? Ahhh, yeah, it should work then... I think I'm gonna give it a shot. Heh, thanks!!
  25. It seems that apcupsd (the APC UPS Daemon) may support UPSes from other brands. Can it? There is a description under Cable Type (smart/usb/dumb/ether): Use "smart" for APC UPS, "usb" for some other brands, "dumb" for others with appropriate cable to serial port, "ether" to slave off of apcupsd running on another server. I'm considering buying one of these to try out: http://www.prolink2u.com/new/products/index.php?cid=374 I know APC is the recommended brand, but because it forces me to use male/female PSU connectors instead of regular power plugs (I need a 90-degree one, as my server needs to be close to a wall), the Prolink seems more attractive.