musicmann

Everything posted by musicmann

  1. Thanks for the steps. It worked perfectly. Disk has been rebuilt with no write errors!
  2. When I tried this, the disk was marked as disabled, so it wouldn't continue the rebuild. Is there a way to reset the disabled flag?
  3. Thursday night, I noticed a Red X on Disk 5. This disk was an old 2TB drive on an array otherwise filled with 5TB drives, so I assumed it was going bad and just needed to be replaced. On Friday, I replaced it with a new 5TB. I know it had completed at least 2 hours of rebuilding without any issues, but on Saturday, I looked and the new Disk 5 had a Red X for write errors. I use Rosewill 4-in-3 drive cages, so with errors on both an old and new disk in the same slot, I'm betting (hoping) that it's probably something like a loose cable versus the drives actually being bad. What are the recommended next steps from this point? Can I shut down, check for cable issues, and restart? Or is it more complicated than that? Do I need to change this disk again to try to force a new rebuild? Diagnostics are attached. Any advice is appreciated. tower-diagnostics-20170827-1434.zip
  4. Better luck this time, but still not all the way there. When I restarted the server, at least my dockers and VMs reappeared. The folder seems to have synced 54.6GB out of 60.0GB (about 35K of 56K files). The Sync application on all the machines indicates that Sync *thinks* this folder is complete, and the folder size hasn't changed in the hour since I restarted, so I think the setup is correctly syncing the files to the unRAID share. When I connect to the shared folders, I'm able to see all my unRAID shares, and I can go into the share I set up for this and create corresponding subfolders. I can browse from a different Windows machine and see the new subfolders and encrypted files. I've attached a screenshot of the docker settings, and the sync.conf contents are below.

     {
       "listening_port" : 55555,
       "storage_path" : "/config",
       "vendor" : "docker",
       "display_new_version" : false,
       "directory_root_policy" : "belowroot",
       "directory_root" : "/sync/",
       "webui" :
       {
         "listen" : "0.0.0.0:8888",
         "allow_empty_password" : false,
         "dir_whitelist" : [ "/sync/folders", "/sync/mounted_folders" ]
       }
     }
  5. Thanks for the clarification, @CHBMB. I'm running into an issue where the docker seems to be crashing and affecting other things.

     Background: I'm trying to sync my 3 production computers (work desktop, home desktop, and laptop), and I want a "backup" and always-on node on unRAID. The 3 production computers all have Windows BitLocker enabled, so those contents are encrypted at the disk level (though, of course, they look unencrypted to Resilio Sync). I'm using encrypted folders in Sync so that the unRAID node's contents are encrypted at rest.

     Last night, my initial test worked. I encrypted a small folder (less than 200MB) from the work desktop, created an encrypted node on unRAID, and synced. I then shut down the work desktop, and I was able to add the home desktop and have it successfully sync unencrypted. Overnight, I set the main folder (60GB) to sync. When I checked it in the morning, about 30GB had synced. However, when I went to the Docker tab, there were no dockers listed (previously, I also had Plex installed).

     Reading some of the other troubleshooting comments, I decided to 1) disable Docker in settings, 2) delete docker.img, and 3) re-enable Docker with a larger image size (now 30GB). I reinstalled the Plex and Resilio Sync dockers. Again, I was able to sync the smaller folder, but the larger folder created problems. The Docker page and VM page are now showing nothing installed. No shares are available either. A parity sync is still running from having to do an unclean shutdown after the overnight crash.

     Any help troubleshooting and resolving this will be greatly appreciated. I'll probably need to be pointed in the right direction for things like pulling log files and seeing run commands. Thanks
  6. Thanks for the docker! I was a user of Windows Live Mesh until it went EOL. Then I was a user of LogMeIn Cubby until it went EOL. I'm hoping Resilio Sync will be a horse I can ride for a very long time. I'm able to get the docker to work when I map /sync --> /mnt/user/. However, I really want to map /sync to a share I've created (called Resilio-Sync). Is there a way to do this? I've tried /sync --> /mnt/user/Resilio-Sync/ and got the error "your config file prevents you from accessing this directory" when I tried to set the default directory in the GUI. I also tried /sync --> /Resilio-Sync/, which seemed to give an error when I tried to connect to a folder synced to another machine. Am I missing something obvious? Thanks in advance
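For anyone hitting the same "config file prevents you" error: the sync.conf this docker ships with (shown in post 4 above) whitelists only /sync/folders and /sync/mounted_folders, so the share has to land under one of those paths rather than at /sync itself. A minimal sketch of such a mapping — the container name, image name, and appdata path below are assumptions for illustration, not taken from this thread:

```shell
# Hedged sketch: map the unRAID share into a whitelisted container path.
# /sync/folders comes from the dir_whitelist in the sync.conf posted above;
# the image name and appdata path are placeholders, not confirmed here.
docker run -d \
  --name=resilio-sync \
  -p 8888:8888 \
  -v /mnt/user/appdata/resilio-sync:/config \
  -v /mnt/user/Resilio-Sync:/sync/folders \
  resilio/sync
```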
  7. @johnnie.black, Thanks! Things are looking good. VMs are running and Plex seems to be good. Thanks!
  8. Here's the output. Should I do the -L option?

     Phase 1 - find and verify superblock...
         - block cache size set to 1529792 entries
     Phase 2 - using internal log
         - zero log...
     zero_log: head block 254699 tail block 252947
     ERROR: The filesystem has valuable metadata changes in a log which needs to
     be replayed.  Mount the filesystem to replay the log, and unmount it before
     re-running xfs_repair.  If you are unable to mount the filesystem, then use
     the -L option to destroy the log and attempt a repair.  Note that destroying
     the log may cause corruption -- please attempt a mount of the filesystem
     before doing this.
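The steps that error message recommends can be sketched as shell commands. This is only a sketch of xfs_repair's own advice, not a tested recipe for this particular array; the device and mount point are placeholders:

```shell
# Sketch of the sequence xfs_repair's message recommends.
# /dev/sdX1 is a placeholder for the actual cache partition.
mkdir -p /mnt/test
mount -t xfs /dev/sdX1 /mnt/test    # mounting replays the metadata log
umount /mnt/test
xfs_repair -n /dev/sdX1             # dry run; drop -n to actually repair

# Only if the mount itself fails: -L zeroes the log and may discard the
# in-flight metadata changes, so it is a last resort.
# xfs_repair -L /dev/sdX1
```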
  9. Thanks, @trurl and @johnnie.black. I've attached the output of the check (-nv). I'll admit, I'm a bit concerned by "Inode allocation btrees are too corrupted, skipping phases 6 and 7". check_results.txt
  10. My cache drive (an xfs-formatted ADATA SSD) is showing as unmountable. I'm wondering if there's a way to try to recover any data that was on it (including a couple of VMs and my Plex server data).

     Background: I just returned from a long trip. My array was up, but everything was spun down...even the cache drive. I was able to open a share from Windows and move some files to it, but when I tried to start one of the VMs (that I had shut down prior to my trip), I got an Execution Error stating that the VM file(s) could not be found. That's when I noticed that the cache was spun down. I went into the config to set its spin down to "never," hoping that would bring it back up, but that didn't work. I then stopped the array and started it again. All my shares appeared in Windows except cache. Then I noticed that cache was saying unmountable. I've tried rebooting the server, but the cache is still unmountable.

     I'm hoping there will be a way to try to save any data that was on the drive. Any help would be appreciated. The disk log info is attached. Note that lines 17 and 37 are marked as errors. Thanks in advance. disk_log_info.txt
  11. Wow! I think I found it via that AVSForum post way back in the day! Circa 2006, unRAID + Meedio was a kick-butt combo. Back then, I bought two of the CM Stackers that I still use to this day. I think they came with one 4-in-3 cage each. I remember emailing Tom to see if he had any extras from his builds, and he said he'd send me a couple for free. I think I had to argue with him to get him to just let me pay the shipping! Pretty nice having a storage device that just runs unattended in the background for years. Thanks, Tom!
  12. I apologize in advance because this topic has been discussed a number of times, but I couldn't really zero in on how I should proceed... After years and years of incremental upgrades to my unRAID server, I've finally decided to rebuild (mostly) from scratch to take advantage of v6 and VMs/Dockers. I'm pulling my server out of a hot closet and into a main living area, so I need to get it a lot quieter. I had been using the Norco SS-500 5-in-3 bays without a problem, but in my rebuild, I've replaced the noisy stock fan with the Noctua NF-R8. I'm using new Toshiba 5TB drives (7200 RPM). I thought it would be a good idea to start with a preclear of the drives for testing, and during the pre-read, after about 30 minutes, the drives were showing temps in the high 40s and low 50s (48C - 51C). Is this normal since the bay is full and I was attempting to preclear all the drives at the same time? Should I look at less dense bays, say a 4-in-3? Should I test something else? I know the answers may be buried somewhere in another thread, but any help would be appreciated.
  13. Can anyone recommend a half-height, air-conditioned server cabinet? Just moved, and I'd like to move my unRAID out of my guest closet. I have un-conditioned attic space that I would like to cable and move my unRAID to, but I'm having trouble finding a suitable server rack/cabinet to put my machine in. I've done a ton of searching, but am not finding good solutions. Any ideas? Thanks in advance.
  14. Thanks for the advice, Joe. I had let the parity check continue, and that disk eventually showed up as "disabled." This is the 2nd time recently that I've had a problem with a disk on this port. In fact, this drive was just added 2 or 3 weeks ago. Given this history, I think it might be the controller port and not the disk. Since this disk was new and not burned in, I'm not willing to trust a rebuild using this disk on a different port. Here's what I'm thinking of doing. I'll pull Disk 4, put it in another system, and run a long SMART test on it. If it passes without issues, I'll reinstall it into my unRAID on a different controller port and rebuild the data as if it's a new disk. If it has any issues, I'll replace it with a new drive and rebuild the data (again on a different port). What do you think? Additionally, I think I will test the old disk that was on port 4 to see if it shows any errors. In fact, I also had one "disappear" on port 5 previously, and maybe I should test it too. The replacement for the one on port 5 hasn't shown any issues though.
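The long SMART test described above can be run from any Linux box with smartmontools installed; a minimal sketch, where the device name is a placeholder for the pulled Disk 4:

```shell
# Hedged sketch: extended SMART self-test on the pulled drive.
# /dev/sdX is a placeholder -- substitute the actual device.
smartctl -t long /dev/sdX    # start the extended self-test (hours on a 5TB drive)
smartctl -c /dev/sdX         # shows the estimated completion time
smartctl -a /dev/sdX         # after it finishes: check the self-test log and
                             # attributes such as Reallocated_Sector_Ct and
                             # Current_Pending_Sector
```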
  15. It seems like I lost power at some point while I was away for the Christmas holidays, and when I returned, my machine was doing a parity sync. The speed was so slow, I decided to stop, reboot, and start another parity check. It's been running an hour, and the speed is about 142 KB/sec. Something's definitely wrong, and I see a lot of red lines in the unMenu syslog view. However, I'm lost reading it and can't tell if this is a single-disk issue, a multiple-disk issue, a controller issue, etc. Any help would be greatly appreciated. syslog-2011-01-02.zip
  16. I jumped on this deal too...and good thing I did. The day it arrived (yesterday) was the same day I experienced my first unRAID drive failure. Instead of adding 2TB of storage, I ended up replacing a failing 1TB.
  17. No. You just need a build environment. Statement. Not a question.
  18. This is very true. So getting back to the original purpose of this thread, there isn't anything additional needed kernel-wise to support VMware or VBox. <<If you don't care about VMs in unRAID, feel free to stop reading here>> However, though this would probably be more appropriate in a different thread, to address the point of those who would like to use VMs but aren't keen on all the work currently required, two things would make using VMs on a stock unRAID much more trivial:

     1) Scripting of the entire compiling/packaging process
     2) Building a development environment VM (including the script) that could be shared

     The goal would be that a user could fire up the VM dev environment in, say, a Windows environment, log in, download the VMware (or VBox) software, update a text file to enter their key code, run the script, and voila...an unRAID-installable VMware package. I wish I had thought of doing this way back when, when I first published my instructions. At this point, I don't know when/if I'll ever have the time to upgrade my current install (4.4.2) and/or learn how to do any semblance of scripting to help accomplish this, but it should be entirely possible. This is what's needed to make VMware/VBox easy to install. unRAID already has what we need, so let's take this on as a community and let Lime-Tech work on the stuff we can't do.
  19. I used both HDAT2 and SeaTools from time to time to get rid of my Gigabyte-bequeathed HPA. At least twice, I couldn't get rid of it with either tool while the drive was on the Gigabyte MB; I had to move it to another machine to get rid of it. Granted, I don't know if I tried ports other than the 1st port. I'd really love it if someone could confirm that it only happens on the first port AND that it doesn't happen if the drive is already partitioned. I never tested that far when I was experiencing it, but I may be rebuilding early next year and would love to know. I'll probably play around with it on another machine prior to my rebuild. I can post my results (though it will be a few months before this happens).
  20. @ftp222 - Does it confirm that your VMs are now using the designated processors? Are you running multiple VMs in a way that you're seeing performance improvements from editing the vmx files?
  21. @juliperman - Unfortunately, I really don't know. I assume the new package would just overwrite any files that already exist but leave intact files that do not conflict. I only assume this based on the fact that numerous packages install software into existing directories without affecting what was previously there. But because of this unknown, I have my VMs in a directory outside of my one-time install directory. That way, I can make an update, and pretty much the only thing I might lose is the power settings (like auto-startup, auto-shutdown) for my machines.
  22. Quote from dimes: "But must I modify my unRAID kernel and bzimage to install vmware (understanding that there is a known working solution)? Is there not some way to install vmware by only modifying my go script with the REALTIME.tgz and having other persistent changes in "/mnt/cache/.something", installing what else is needed (e.g. vmnet) as modules perhaps? In other words, is there some very good reason this can't happen in Linux that I just don't understand?"

     dimes, let me know if I'm misunderstanding your question. If you build the packages (on your dev system, set up to match the running unRAID kernel), then no, you will not need to modify the unRAID kernel and bzimage on your actual unRAID box. The packages built on the dev machine can be installed onto a stock unRAID. Granted, you'll need to rebuild these packages if you want to upgrade to subsequent unRAID kernels.
  23. I seem to remember errors like this when I didn't install an unRAID kernel onto the dev environment for creating the packages. I never got these issues once my dev environment kernel matched the 2.6.27.7-unRAID naming. You might have a point about other packages being able to run on various kernels, so there might be some kind of "force" option. However, it's safest to just go ahead and put unRAID's kernel onto your dev environment given that it's a known working configuration.
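A quick way to confirm the match described above before building anything — a sketch, assuming the kernel string named in this thread (2.6.27.7-unRAID):

```shell
# Sketch: verify the dev box's kernel string matches unRAID's before packaging.
# The expected string 2.6.27.7-unRAID comes from the post above.
expected="2.6.27.7-unRAID"
actual="$(uname -r)"
if [ "$actual" = "$expected" ]; then
  echo "kernel matches unRAID; safe to build packages"
else
  echo "kernel mismatch: $actual (expected $expected)"
fi
```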