SnowLprd


Everything posted by SnowLprd

  1. Hi folks. I’ve been an unRAID fan and user for over a decade, and now it’s time to build another system. In addition to storage, here’s what I want to use it for:
     * Plex Server / Radarr / Sonarr
     * Home Assistant / Pi-hole
     * Steam / PC games, presumably via a Windows 10 VM
     Regarding the latter, I have an Nvidia Shield Pro in the living room. I’d prefer to locate the unRAID tower in another room, so perhaps the Shield could be used via GameStream. I’ve never used unRAID to set up a gaming VM, so that’s all new to me, and I’m still trying to sort it out.
     I need to start by choosing a CPU + motherboard combo. I’d prefer AMD unless there are compelling reasons to choose Intel. I am leaning toward the AMD Ryzen 7 3700X because there aren’t many CPUs that have both:
     * 8 cores
     * a low TDP (65W)
     My main question is the motherboard. My current target criteria are:
     * at least 6-8 on-board SATA ports
     * on-motherboard USB-A (for the unRAID OS thumb drive) would be nice but not necessary
     * … what other criteria would you recommend?
     What CPU + motherboard combination would you choose if you were me? Any and all suggestions would be *greatly* appreciated! 😁
  2. Many thanks for the quick response. The most important data is indeed backed up off-site. While I would prefer to recover the non-critical data on disk6, I understand that may not be very likely. Given that I won't be in the vicinity of this machine for several months, I agree that it's unlikely I'll be able to fix this situation soon. What I would like to do now is determine the following:
     1. What can I do to increase the likelihood that the data on the remaining drives (1, 2, 3, 4, 5) will remain intact and uncorrupted? I suspect those drives are connected to the motherboard's disk controller, while the others are connected to a (possibly failing) Supermicro PCI card. For the non-critical data on those drives that isn't yet backed up off-site, I'll begin backing up as much as possible, but that will take time, and thus I'd like to do everything I can to protect this data in the interim. Anything in particular I should be doing or not doing?
     2. What do I do when I'm back in physical proximity of the machine? Presumably the Supermicro PCI card needs to be replaced, and the entire array needs to be rebuilt from scratch. Or is that not the case? Any suggestions on what steps I should take when I arrive?
  3. My unRAID 6.5.2 system reported that disk6 (sdi) had been disabled. I had a same-sized, unassigned hot spare already in the tower, so I followed the instructions below to replace it with the hot spare (sdl). (I did not remove the disabled disk, as I am currently thousands of miles away from the tower and won't be near it for several more months.)
     https://lime-technology.com/wiki/Replacing_a_Data_Drive
     When I started the array, unRAID began to rebuild disk6. Shortly thereafter, however, there was a notification that there was a problem with the hot spare disk as well. Shortly after that, another disk -- disk7 (sdj) -- showed up as unmountable.
     I have another disk in the array (disk8) that is empty, the same size, and could be used to rebuild disk6 (my first priority, if possible). That disk, however, was set up with encryption a while back, but it never seemed to work properly, always displaying "Unmountable: Unsupported partition layout" for unknown reasons. I didn't need the space on that drive at the time, so I figured I'd get around to reformatting it (unencrypted) at some point in the future.
     In short, it seems like three drives (two data drives, one hot spare) *might* have developed/exhibited problems all at once. I don't really understand what's going on. I should, in theory, have a working empty disk in the array (disk8) that could be used as a potential drive replacement. If at all possible, I would like to remove the empty disk8 from the array and use it to rebuild disk6. I don't know whether that's possible, and if so, how to proceed.
     More generally, I am looking for any and all recommendations regarding how best to move forward. I have attached a console screenshot and a diagnostics bundle. Any thoughts? Any other information I can provide that would be helpful?
     tower-diagnostics-20180818-0308.zip
  4. As a result of recovering files from a data loss event, I have thousands of video files with names like “file892.mkv”. Does anyone know if there’s a way to programmatically identify and rename them? Perhaps something like Chromaprint/AcoustID, but for video files (i.e., TV shows & movies). Does anyone know if such a solution — or any solution, for that matter — might exist?
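     Identifying the files is the hard part and needs some fingerprinting or metadata-matching tool; but once a mapping from recovered names to real names exists, applying it in bulk is easy to script. Below is a minimal, hypothetical sketch: it assumes a two-column, tab-separated mapping file (the name `rename_map.tsv` and the demo file it creates are fabricated for illustration) and applies it while refusing to clobber anything.

     ```shell
     #!/bin/sh
     # Hypothetical sketch: bulk-rename recovered files from a tab-separated
     # mapping (current name -> identified name). Building the mapping is
     # assumed to happen elsewhere; this script only applies it.
     set -eu

     # --- demo setup: stand-ins for a real recovered file and a real mapping ---
     touch file892.mkv
     printf 'file892.mkv\tShow.S01E01.mkv\n' > rename_map.tsv

     # --- apply the mapping, skipping missing sources and existing targets ---
     while IFS="$(printf '\t')" read -r old new; do
         [ -e "$old" ] || { echo "skip: $old not found" >&2; continue; }
         [ -e "$new" ] && { echo "skip: $new already exists" >&2; continue; }
         mv -- "$old" "$new"
     done < rename_map.tsv
     ```

     The skip-if-target-exists check matters when recovered names collide; with thousands of files, losing one to an accidental overwrite would be easy to miss.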
  5. I see. Thank you for clarifying. I think the following is what led me to wonder:
  6. In that scenario, how will the cache drive be treated? Will its data also be purged when doing Tools > New Config? If so, what should I do? I want to preserve the appdata on the cache drive, given that's the only part of the system that seems to have fully survived the data loss event.
  7. This seems like sensible advice. I am preparing to transfer the first drive's recovered data over the network to the unRAID tower. In an effort to ensure that no writes (or filesystem/directory modifications of any kind) are made to any of the other drives, I disconnected power to all drives except the parity drive and the drive to which I intend to copy the recovered data. Upon powering up the Tower, I see that I cannot start the array, presumably due to the "Too many wrong and/or missing disks!" message. I assumed unRAID would allow me to temporarily ignore the missing drives and let me copy the recovered data to the single data drive, but clearly that is not the case.
     What should I do here? Is my worry about having data drives (from which data has not yet been recovered) powered up and connected an unfounded concern? Is it safe to connect everything back the way it was and start the array, as long as I don't actively make any writes? I suppose my main concern was any low-level garbage collection or other behind-the-scenes filesystem manipulation, but I have no idea if that concern is valid. Or is there another way I should be handling this? Any and all suggestions would be most welcome.
  8. I have written up a data recovery plan and would deeply appreciate feedback so as to increase the odds of salvaging as much as I can.
     Done so far:
     1. Confirmed that AppData was restored to new cache drive
     2. Shut down unRAID Tower
     3. Set up another machine with a fresh installation of macOS 10.12.2 ("Sierra")
     Planned next steps:
     1. Remove 4TB disk10 from Tower (that disk was empty until used as the backup target for AppData; it should still contain that backup)
     2. Connect disk10 to Mac via SATA. Boot Mac.
     3. Test UFS Explorer on disk10. Can it copy the AppData backup to the Mac's internal SSD?
     4. Remove disk3 from Tower (that disk has the least important data, and the least of it) and connect it to the Mac.
     5. Use UFS Explorer to recover deleted files on disk3 and copy them to disk10. Do my best to verify the integrity of the recovered files.
     6. Copy recovered files from disk10 back to disk3 (assuming UFS Explorer supports copying un-deleted files)
     7. Put disk3 back in Tower. Remove files from disk10 in order to free up space for recovering the next disk's files.
     8. Repeat steps 4 through 7 until files from all disks have been recovered and copied back to their original locations.
     9. Remove disk10 from Mac and put it back in Tower.
     10. Boot Tower, start array, and cross fingers.
     Does this sound like the most sensible plan? Anything seem unwise? Anyone have suggestions for improvement?
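     For the "verify integrity" part of step 5, one simple approach is a checksum manifest: hash everything right after recovery, then re-check the hashes after each copy. A sketch, assuming `sha256sum` is available (the `recovered/` and `restored/` paths and the demo file are placeholders, not your real mount points):

     ```shell
     #!/bin/sh
     # Sketch of checksum-based verification for recovered files.
     set -eu

     # demo stand-ins for the recovery source and the copy-back target
     mkdir -p recovered restored
     echo "example payload" > recovered/episode01.mkv

     # 1. build a manifest of everything recovered
     ( cd recovered && find . -type f -exec sha256sum {} + > ../manifest.sha256 )

     # 2. copy back (stand-in for "copy recovered files from disk10 to disk3")
     cp recovered/episode01.mkv restored/

     # 3. verify the copies against the manifest; exits non-zero on mismatch
     ( cd restored && sha256sum -c ../manifest.sha256 )
     ```

     Keeping the manifest on a separate disk from the data it describes means a later read error on either copy shows up as a hash mismatch rather than silent corruption.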
  9. @Squid / johnnie.black: I was merely following what I believed to be the canonical instructions for replacing a cache drive. Step 4 reads:
  10. After the Appdata Backup/Restore process deleted all the files in my array, I am now trying to recover data from the XFS-formatted disks. Does anyone know how I might do this without using a tool that requires Windows?
  11. @johnnie.black: For what it's worth, I don't believe I selected /mnt/cache as the Destination Directory — I am fairly certain that field was populated that way by default.
  12. Thank you both for the advice and encouragement. I am sincerely grateful for any and all guidance. I was indeed in a panic and rebooted the machine, so I'm not sure logs were preserved. I don't see anything that looks like log files/directories on the flash drive. Beyond the reboot and running the diagnostics, I have not performed any other actions, in hopes of soliciting advice from those more experienced with these situations. As suggested, I have attached the resulting output of Tools > Diagnostics > Download. Is there any other information I can provide to help determine what I should do next? tower-diagnostics-20161226-1817.zip
  13. My goal was to upgrade my cache drive from an older 60 GB SSD to a larger 240 GB SSD via these docs. Following unRAID 6.2.4's Community Applications > Appdata Backup/Restore process as carefully as possible, I completed the appdata restore and immediately received notifications that multiple disks had returned to "normal" utilization levels. Puzzled, I looked through the disks in the array and noticed that they are 99% empty. Other than a few scattered files, it appears as though nearly everything in the array has been lost.
      I tried to re-trace my steps and see whether I did anything that could cause this. I performed the following before restoring the data:
      * The 6.1.9 --> 6.2.4 upgrade didn't seem to create a "system" share, so I created it manually with "Use cache disk" set to: "Prefer"
      * I created a "docker" subfolder within "system" and enabled Docker to (re-)create a 20 GB file at: /mnt/user/system/docker/docker.img
      * I created an "appdata" share with "Use cache disk" set to: "Prefer"
      * I set "Default appdata storage location" to: /mnt/user/appdata/
      * I used the default restore settings to restore:
        Source Directory: /mnt/disk10/CommunityApplicationsAppdataBackup
        Destination Directory: /mnt/cache
      My main question is... Is all my data truly lost? Is there nothing that I can do to recover it? Any and all assistance would be hugely appreciated. I feel sick. I thought I was being so cautious. Sincere thanks in advance for any help.
  14. Everything appears well-seated. Currently running SMP memtest, which will probably finish tomorrow morning. No errors so far. I recently upgraded the CPU. Given that the new CPU has been performing well over the last week, how likely is it that it might be the cause?
  15. Many thanks for the suggestion, mr-hexen. Unfortunately, that doesn't seem to have had any effect. System hangs at boot at the exact same place. What do you think might be the next step in troubleshooting this problem? (I really appreciate any and all help with this. Logically, I'm sure I'll get it resolved, but it's still causing a bit of anxiety for me.)
  16. I just used the web GUI's plugin manager to update from 6.1.8 to 6.1.9, stopped the array, and initiated a reboot. The system won't come back up after the reboot. The last line on the console is:
      `Freeing SMP alternatives memory: 28K (ffffffff81974000 - ffffffff8197b000)`
      I've never had any issues with unRAID upgrades before. Anyone have any suggestions? :'(
  17. Well, it was installed when I ran sensors-detect, but it seems Perl is no longer present after rebooting. I just ran `installpkg /boot/packages/perl-5.18.1-x86_64-1.txz` again, uninstalled and re-installed the plugin, and now the aforementioned Detect button behaves as expected. Other than putting the Perl archive at /boot/packages/perl-5.18.1-x86_64-1.txz, is there something else that I need to do to ensure Perl is always available after reboots? Side note: I wonder if this has something to do with the order of steps, specifically when the plugin is installed. Unless I'm missing something, there's no mention in these docs at which point the plugin should be installed: https://lime-technology.com/wiki/index.php/Setting_up_CPU_and_board_temperature_sensing
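      For what it's worth, my understanding of the situation described above: unRAID's root filesystem is rebuilt in RAM at every boot, so anything added with `installpkg` disappears on reboot. The usual way to make it persist is to re-run the install from the `/boot/config/go` script, which unRAID executes at startup. A sketch of the relevant fragment (the surrounding go-file contents shown are placeholders):

      ```shell
      #!/bin/bash
      # Fragment of /boot/config/go -- unRAID runs this script on every boot.
      # Re-installing Perl here makes it survive reboots, since the RAM-based
      # root filesystem is rebuilt from scratch at each startup.

      # ... existing go-file contents (e.g., the emhttp startup line) ...
      installpkg /boot/packages/perl-5.18.1-x86_64-1.txz
      ```

      Since the go file runs before plugins are loaded, this would also sidestep the install-order question raised above.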
  18. I have a SuperMicro C2SEE motherboard and have followed all the steps in the wiki, but when I go to Settings > TempSettings and tap the Detect button, nothing happens. The page reloads, but there is no visible change. When logged in via SSH, running the `sensors` command accurately shows the MB and CPU temps as `temp1` and `temp2`, respectively. So as far as I can tell, everything seems to be in order. Anyone have any suggestions for how I might troubleshoot this?
  19. That's great to hear. As one of those folks with a very vanilla 5.0 install, I would find said 5-to-6 upgrade plugin to be very useful and am glad to hear that its release will not be held up trying to support the wide variety of possible variations/customizations.
  20. I stand corrected. Doesn't much matter now: I tried to order one, and the price changed to $220 while I was in the process of buying it. Awesome work, folks. No thanks.
  21. You might want to read the reviews before buying these. The failure rate appears to be horrendous. http://www.newegg.com/Product/Product.aspx?Item=N82E16822152420 There is a reason NewEgg is using eBay to sell these: that way they don't have to accept returns or provide refunds when they fail.
  22. ... and yet you don't explain why, which to me sounds like hand-waving. I'm not close to my case capacity, and I just bought a 2TB drive. Even if 2.2TB+ support were available, I would still have bought the 2TB drive. Why? Because given my storage needs, it demonstrably costs me less than the 3TB drive.
      Huh? Your first sentence seems to prove my point, but then you take it in a different direction in the second sentence. If I only need 500GB today, and the price for that drive were significantly less per GB than the 2TB drive, why wouldn't I buy the 500GB? After all, by the time I need a 2TB drive, the price will have dropped significantly, and it will probably end up costing less to buy that one plus the original 500GB than if I had bought the 2TB drive from day one. And I end up with an extra 500GB drive that I can stick into a drive dock if I want it for non-essential spillover data/backup.
      "In that one paragraph you've suggested we will be getting 3TB support (I'm glossing over 2.5TB drives a little bit!) and assigned it a timeframe - something which others have also been arbitrarily doing. I'd hope to agree with you on your first point though on the latter, given release schedules I wouldn't like to suggest when. Either way it's pure speculation on your part. No fear mongering just fact based on current information."
      1. Read my comment again. I was only talking about odds, and I stand by my suggestion that the odds for "never" are lower than the odds for "someday."
      2. I imagine that 2.2TB+ support isn't too far off, but that is of course speculation.
      3. Saying "There is every chance we will never see 3TB support" is not fact. It's fear mongering. Saying "There is a chance..." would be factually correct, but then again so is my hypothesis that all of our unRAID machines are actually zombie botnet sleeper nodes.
  23. While I'm still seeing some arguments with questionable foundations, some good points were also raised.
      Blockhead: You are right about the poll totals. That said, 52% of folks would rather see 5.0 shipped soon with its current feature set, which still speaks volumes. Your assumption that prospective unRAID buyers would constitute a "landslide" for the other options, however, doesn't appear to be grounded in any material fact. It's a shaky assumption at best. And I don't buy the premise that the LimeTech business is going to suffer if 2.2TB+ support isn't added yesterday. The people who choose unRAID seem to do so because they understand its advantages over competing solutions, and I think it's highly unlikely that the LimeTech business will suffer to any significant degree if 2.2TB+ support appears in 5.1 instead of 5.0. I would argue that if you care about unRAID succeeding as an enterprise, vote for shipping early and often, which means getting 5.0 out the door so work can begin on 5.1.
      Boof: As I mentioned earlier, I question whether the majority of folks have so much data to store that they need to worry about running out of ports. But maybe I'm wrong... Maybe the vast majority of folks here max out 20+ ports and need all the storage density they can get. I'm not one of those people (clearly), and for folks in my situation, the cost per terabyte is much more significant than the other measurement you describe.
      Last but not least, let's cut the fear mongering: "There is every chance we will never see 3TB support." Yeah, and there's "every" chance that all of our unRAID machines are actually zombie botnet sleeper nodes that are waiting to be activated. The question isn't about what's possible — it's about what's probable, and given the track record (e.g., support for AF drives in the most recent 4.7 release), I'd argue there are much better odds that 2.2TB+ support will be added in the not-too-distant future than the odds for "never."
  24. I have a master's degree in finance, so suffice it to say that I understand the definition of ROI quite well, thank you. Your conflation of the term with the subjective notion of "value," while interesting, doesn't hold water.
      I don't know how much data you feel you need to store, so I'm not going to try to understand how you are doing your math. I will, however, say this... At $100 for an 8-port SATA card (that's $12.50 per port), I find it hard to believe that there are a significant number of people whose storage needs are so great that it's less expensive to (a) buy 3TB drives at today's cost than to (b) use 2TB drives with an expanded number of ports. But like I said, I don't know your situation. Maybe you've done the math, and maybe it still works out better for you with 3TB drives. In which case, like I said before, just do that -- the other 800GB/drive will be available to you soon enough.
      Well said. And as Squirrellydw has already pointed out, the poll clearly shows people want 5.0 released more than they want 2.2TB+ support — by a wide margin.
  25. Really? How does this contribute to the discussion? First of all, I chose my moniker well before Apple ever began working on that OS version, much less before they gave it a name. So your conspiracy theory is misplaced.
      Secondly, I never said it's essential that AFP be included in the 5.0 release, because AFP is already in the 5.0 branch. It's pretty much a fait accompli at this point. My point is that, other than bug fixes, no new features should be added to 5.0. Otherwise, it just delays what has already been a fairly long development cycle.
      Finally, what you're really missing is that I'd be saying the same thing if the situation were reversed. If 2.2TB+ support were already in place in the 5.0 branch, and the question were whether we should add AFP support now (delaying the 5.0 release) or defer it to 5.1, I'd be the first to suggest shipping 5.0 without AFP support. Release early, release often. Scope creep is usually what causes long development cycles. If there's a unifying theme in this thread, it's that everyone would like to see more frequent releases.