JarDo

Everything posted by JarDo

  1. OK, now I've got a question for you all. I am currently following the procedure I posted above, and something occurred that has me puzzled. I have a drive umount'd and currently being zeroed; we'll call this drive disk1. Overnight, a cron job I forgot to disable performed a backup of my flash drive to disk1. Now disk1 is visible as a mapped drive with the contents of the backup job. When I discovered this I stopped and restarted the zeroing process. I'm not sure what else I can do. I'm assuming these files don't actually exist on disk1 but do exist in parity. Does this behavior make sense? I had assumed that any attempt to write to an unmounted drive would fail. Is this behavior normal, or is this an issue with unRAID v6? This is the first time I've tried to zero a drive with version 6. Another thing I noticed with v6: after unmounting the drive, the web GUI did not show the drive with zero free space. unMenu does show zero free space, but the stock web GUI does not. Any thoughts?
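     For what it's worth, these are the checks I'd run before trusting the restarted zeroing pass (a minimal sketch; /dev/md1 and /mnt/disk1 assume disk 1 is the drive in question):
        grep /mnt/disk1 /proc/mounts           # no output means the disk really is unmounted
        lsof /dev/md1                          # lists any process still holding the device open
        dd if=/dev/zero bs=2048k of=/dev/md1   # the zeroing command from the procedure above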
  2. So, I just finished preclearing 2 drives. In both cases, the result was something like "Disk /dev/sdk has NOT been successfully precleared". I don't fully understand why. I tried adding one of the precleared drives to my array and there was no issue. I've attached the preclear logs. I'm using preclear_disk.sh v1.15 on unRAID v6b12. If I'm able to add the drives to my array with no problem, should I be okay? PL1311LAG14UWA.zip WD-WMC5D0D0MDMK.zip
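     If it helps, the script can also just re-check the disk for a preclear signature without running another full cycle (the -t option is how I remember it being documented; treat that flag as an assumption and confirm against the script's usage notes):
        preclear_disk.sh -t /dev/sdk   # verify whether the disk carries a valid preclear signature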
  3. 1) I stitched this procedure together from various posts on this forum. I've performed it many times, and I actually plan to do it again tonight.
     2) Exactly as trurl posted... The purpose of writing zeros to the to-be-removed drive is to bring parity to a state that allows me to remove the drive from the array without breaking parity. The point is to perform the drive swap without losing access to my array for long periods of time OR breaking parity. Of course (I forgot to mention) this all assumes that the new drive is precleared prior to adding it to the array.
     As for retaining the data on the old drive, that concern is addressed in Step 2. Maybe the procedure should say 'Copy' instead of 'Move' the data. I usually copy the root of the drive to a directory on another drive (e.g. rsync -av --stats --progress /mnt/disk1/* /mnt/disk9/disk_1). There is very little need to retain data on the old drive after you've safely copied it to a fault-tolerant location elsewhere on your array. (It's time to move on. Really... let it go.)
     A couple of additional notes on this procedure I forgot to mention:
     - During Step 5 there is NO feedback. It will look as if nothing is happening until it is finished. And, as I already mentioned, this step will take a long time. Don't stare at the screen waiting for it to finish. It will win the staring contest.
     - If it is an option for you, perform Step 5 directly at your server (not in a telnet window from another PC). If you start the zeroing process in a telnet session from another PC and that telnet window is closed for any reason, you will have no way of knowing when the zeroing process completed. (A screen-based workaround is sketched below.)
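     For the remote case, a minimal sketch using screen (assuming disk 1 is the drive being zeroed; adjust the md device number to yours):
        screen -S zeroing                      # start a named screen session on the server
        dd if=/dev/zero bs=2048k of=/dev/md1   # run the zeroing inside that session
        # detach with Ctrl-A then D; dd keeps running even if the telnet window dies
        screen -r zeroing                      # reattach later to check whether it has finished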
  4. Maybe this will help you. Below is the procedure I follow to remove a drive from my server without losing parity or access to my array for long periods of time.
     Some notes about the procedure:
     - Be sure to record your drive assignments before performing this procedure (take a screen shot of the Main unRAID screen).
     - In step 2, make a subdirectory on the target drive to prevent creating a share with duplicates.
     - Steps 1-5 are done while the array is Started. You don't stop the array until step 6.
     - Step 5 can take a very long time, depending upon the size of the drive being zeroed, but all the while your array is available.
     - I have not yet used this procedure on v6, but I don't see why there would be any issues. (Update: this procedure has now been tested on v6b12.)
     - This is the procedure I use. I cannot guarantee results for others.
     - If you plan to write zeros to your drive from a remote console, I suggest using screen.
     UPDATE 1 - Added alternate dd syntax in Step 5 to show a progress bar, speed, total time and ETA.
     1) Start the array.
     2) Empty the drive of all contents (move all files to another drive).
     3) Telnet to the server.
     4) Unmount the drive to be removed with the following command (where ? is the unRAID disk number):
     4.1) umount /dev/md?
     4.1.1) If the command fails, issue 'lsof /dev/md?' to see which processes have a hold of the drive. Stop these processes.
     4.1.2) Some processes (e.g. smbd [Samba file sharing]) require a restart to stop.
     4.1.3) Try 'umount /dev/md?' again.
     4.2) At this point the drive will show 0 (zero) Free Space in the web GUI.
     5) Write zeros to the drive with the dd command.
     5.1) dd if=/dev/zero bs=2048k of=/dev/md?
     5.1.1) Alternatively, you can put a progress bar on your dd operation by piping into the pv command.
     5.1.1.1) dd if=/dev/zero bs=2048k | pv -s 4000G | dd of=/dev/md? bs=2048k
     5.1.1.1.1) The '4000G' in this example represents a target drive size of 4TB.
     5.2) Wait for a very long time until the process is finished.
     5.3) When finished, a report similar to the following will display in the console:
          dd: writing `/dev/md?': No space left on device
          476935+0 records in
          476934+0 records out
          1000204853248 bytes (1.0 TB) copied, 48128.1 s, 20.8 MB/s
     6) Stop the array.
     7) From the Devices tab, remove the device.
     7.1) At this point the drive will show as 'Missing' on the Main tab.
     8) From the 'Tools' menu choose the 'New Config' option in the 'UnRAID OS' section.
     8.1) This is equivalent to issuing an 'initconfig' command in the console.
     8.2) This will rename super.dat to super.bak, effectively clearing the array configuration.
     8.3) All drives will be unassigned.
     9) Reassign all drives to their original slots. Leave the drive to be removed unassigned.
     9.1) At this point all drives will be BLUE and unRAID is waiting for you to press the Start button.
     9.1.1) Assuming all is good, check the "Parity is valid" box and press the 'Start' button.
     10) At this point, provided all is OK, all drives should be GREEN and a parity check will start. If everything does appear to be OK (0 sync errors) then the parity check can be cancelled.
     11) Stop the array.
     12) Shut down the server.
     13) Remove the drive.
     14) Power up the server.
     15) Restart the array.
     16) Maybe a good idea to complete a parity check.
     Jarod
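     A small refinement to step 5.1.1.1 (a sketch of my own, not part of the original write-up): rather than hard-coding '4000G', you can let blockdev report the exact size of the md device so pv's ETA is accurate. Assuming disk 1 is the drive being zeroed:
        SIZE=$(blockdev --getsize64 /dev/md1)   # exact size of the device in bytes
        dd if=/dev/zero bs=2048k | pv -s "$SIZE" | dd of=/dev/md1 bs=2048k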
  5. Quoting the reply I received: "It's essentially a moment of communications corruption. When you say 'every half hour', is that exact? Normally I would point out the BadCRC flag and tell you to replace the SATA cable with a better one, as BadCRC means a packet was corrupted and didn't transmit across the cable correctly. But you have so many other indicators of momentary corrupted communications, plus an indication of repeatably timed occurrences, that I would look for a power glitch source, perhaps something that occurs every half hour that puts an abnormal demand on the circuit, or something that goes off that could be a source of interference or noise on the line. The ACPI errors do not normally ever repeat. Are you saying that they too occur every half hour? They typically occur as part of the initial setup, and are quite normal and generally harmless. Most setups have one or more issues like that, but they shouldn't repeat. A BIOS update *may* eliminate them. What is your spin-down timer set to? Is it 30 minutes? Do you have a media scanner running every 30 minutes, or is the drive trying to spin up every 30 minutes if its spin-down is set to 15?"
     Thank you all for the feedback. Yes, even the ACPI errors were happening every half hour. My spin-down timer is set for 3 hours on all drives. I rebooted and the pattern of errors every half hour has stopped for now. I will keep watch.
  6. Just decided to look at the syslog and noticed the following happen every half hour:
     Dec 12 00:38:26 UNRAID kernel: ata6.00: exception Emask 0x52 SAct 0x0 SErr 0x1280d00 action 0x6 frozen
     Dec 12 00:38:26 UNRAID kernel: ata6.00: irq_stat 0x09000000, interface fatal error
     Dec 12 00:38:26 UNRAID kernel: ata6: SError: { UnrecovData Proto HostInt 10B8B BadCRC TrStaTrns }
     Dec 12 00:38:26 UNRAID kernel: ata6.00: failed command: IDENTIFY DEVICE
     Dec 12 00:38:26 UNRAID kernel: ata6.00: cmd ec/00:01:00:00:00/00:00:00:00:00/00 tag 8 pio 512 in
     Dec 12 00:38:26 UNRAID kernel: res 50/00:ff:00:00:00/00:00:00:00:00/00 Emask 0x52 (ATA bus error)
     Dec 12 00:38:26 UNRAID kernel: ata6.00: status: { DRDY }
     Dec 12 00:38:26 UNRAID kernel: ata6: hard resetting link
     Dec 12 00:38:26 UNRAID kernel: ata6: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
     Dec 12 00:38:26 UNRAID kernel: ACPI Error: [DSSP] Namespace lookup failure, AE_NOT_FOUND (20140724/psargs-359)
     Dec 12 00:38:26 UNRAID kernel: ACPI Error: Method parse/execution failed [\_SB_.PCI0.SAT0.SPT5._GTF] (Node ffff880217073eb0), AE_NOT_FOUND (20140724/psparse-536)
     Dec 12 00:38:26 UNRAID kernel: ACPI Error: [DSSP] Namespace lookup failure, AE_NOT_FOUND (20140724/psargs-359)
     Dec 12 00:38:26 UNRAID kernel: ACPI Error: Method parse/execution failed [\_SB_.PCI0.SAT0.SPT5._GTF] (Node ffff880217073eb0), AE_NOT_FOUND (20140724/psparse-536)
     Dec 12 00:38:26 UNRAID kernel: ata6.00: configured for UDMA/133
     Dec 12 00:38:26 UNRAID kernel: ata6: EH complete
     Otherwise, everything appears normal. Does anyone recognize what this might mean?
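     To see the half-hour cadence from the syslog itself, a minimal sketch (assuming the log lives at /var/log/syslog and ata6 is the affected port):
        # print just the timestamps of the repeated bus errors so the interval is obvious
        grep 'ata6.00: exception' /var/log/syslog | awk '{print $1, $2, $3}'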
  7. So... Funny story... This weekend I was listening to some Taj Mahal MP3s, a 3-CD set, and I noticed that the latter half of 2-3 tracks per album changed to Elton John. I ripped these CDs in 2008. I know I've listened to the MP3s since then and I don't ever remember this strange behavior. I tried a different player... same result. I checked the file dates... still 2008. I'm very confused. And then, today, I see this. I'm very sad right now. I have so many files on my server, I think the question of what exactly is corrupt will outlive me. This is a fail.
  8. Is anyone else having difficulty with nzbget on beta 8? After the upgrade, I restored my templates according to the upgrade instructions, then I re-installed all my Dockers (couchpotato, madsonic, nzbdrone, nzbget & PlexMediaServer). All seem to function the same as before the upgrade, except that with nzbget I cannot browse to the user interface. I am using the plugin that came with beta 8 (I have not upgraded the plugin).
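     The basic checks I'd run against the container (a sketch; 'nzbget' is assumed to be the container name and 6789 its web UI port):
        docker ps            # is the nzbget container actually running?
        docker logs nzbget   # any startup errors from the app itself?
        docker port nzbget   # confirm the port mapping survived the upgrade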
  9. I installed the 2nd version that enables in-app updates. Everything appears to be working, but the headphones.log file is filling with the following error:
     08-Jul-2014 06:48:32 - INFO :: MainThread : Checking to see if the database has all tables....
     08-Jul-2014 06:48:32 - INFO :: MainThread : Retrieving latest version information from GitHub
     08-Jul-2014 06:48:32 - ERROR :: MainThread : Request raise HTTP error with status code 403 (local request error).
     08-Jul-2014 06:48:32 - WARNING :: MainThread : Could not get the latest version from GitHub. Are you running a local development version?
     08-Jul-2014 06:48:32 - INFO :: MainThread : Using forced port: 8181
     08-Jul-2014 06:48:32 - INFO :: MainThread : Starting Headphones on http://0.0.0.0:8181/
  10. I am having much difficulty getting nzbdrone or couchpotato to connect with nzbget. All are installed via Docker and appear to function properly. The error I get in nzbdrone is:
      NzbDrone.Core.Download.Clients.DownloadClientException: Unable to connect to NzbGet, please check your settings
        at NzbDrone.Core.Download.Clients.Nzbget.NzbgetProxy.CheckForError (IRestResponse response) [0x00000] in <filename unknown>:0
        at NzbDrone.Core.Download.Clients.Nzbget.NzbgetProxy.ProcessRequest (IRestRequest restRequest, NzbDrone.Core.Download.Clients.Nzbget.NzbgetSettings settings) [0x00000] in <filename unknown>:0
        at NzbDrone.Core.Download.Clients.Nzbget.NzbgetProxy.GetHistory (NzbDrone.Core.Download.Clients.Nzbget.NzbgetSettings settings) [0x00000] in <filename unknown>:0
        at NzbDrone.Core.Download.Clients.Nzbget.Nzbget.GetHistory () [0x00000] in <filename unknown>:0
      I've fiddled with this for hours. I have made sure that all file permissions for my Docker apps are set to nobody:users. And, in the apps themselves, I've run out of settings to modify. If anyone has had this issue could you give me some advice?
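      One way to test reachability outside the apps (a sketch; the IP address is a placeholder, and the nzbget/tegbzn6789 credentials are an assumption based on NZBGet's defaults — substitute your own):
         curl 'http://nzbget:tegbzn6789@192.168.1.100:6789/jsonrpc' \
              --data '{"method": "version", "params": [], "id": 1}'
      If that returns a JSON response with a version number, NZBGet itself is reachable and the problem is in the NzbDrone/CouchPotato download-client settings (host, port, username/password, or category).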
  11. I managed to get nzbdrone to install with the following command:
      docker run -d --name="nzbdrone" -v /mnt/cache/appdata/nzbdrone:/config -v /mnt/user:/mnt/user -v /etc/localtime:/etc/localtime:ro -p 8989:8989 needo/nzbdrone
      It shows as running on the Docker extension page, but I cannot browse to it on port 8989. I also noticed that there is no .conf file in /mnt/cache/appdata/nzbdrone, whereas nzbget (also installed) does have a config file in /mnt/cache/appdata/nzbget. Is there an issue with this container, or is my install command wrong? I tried to uninstall/reinstall nzbdrone, but the issue persists.
      EDIT: The files and directories were actually there in /mnt/cache/appdata/nzbdrone. Once I fixed permissions on the directory (according to Justin's instructions) everything showed up. Now I make a habit of recursively fixing the permissions on /mnt/cache/appdata after every new Docker container install.
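      The recursive fix I now run after each container install boils down to this (a sketch; nobody:users is the ownership my other containers expect, which I'm assuming is what Justin's instructions also set):
         chown -R nobody:users /mnt/cache/appdata   # reset ownership on all container config directories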
  12. Justin... I'm trying very hard to understand this method of yours: if you do this, wouldn't anything below /mnt/user become unavailable? If someone wanted to put downloads (for example) ONLY onto the cache drive, wouldn't this mapping make that option impossible? Could I bother you for a better example of how you would configure an app (like nzbget) to save downloaded files to the cache drive with -v /mnt/user:/mnt/user as the only mapping? Maybe my mistake is that I have unRAID configured to not 'use a cache disk'. I think if I enable that option, folders on the cache drive become available under /mnt/user. But if I do that, then unRAID will try to sync the contents of these folders with my array.
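      To make my question concrete, this is the kind of mapping I have in mind instead of the single /mnt/user mount (a sketch; /mnt/cache/downloads is just my example of a cache-only location):
         docker run -d --name="nzbget" \
             -v /mnt/cache/appdata/nzbget:/config \
             -v /mnt/cache/downloads:/downloads \
             -p 6789:6789 needo/nzbget
         # with the explicit -v mapping, anything nzbget writes to /downloads lands directly
         # on the cache drive, regardless of how /mnt/user is configured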
  13. I'm trying to install the nzbdrone container via the Extended Docker Configuration plugin and the following error is returned:
      Command: /usr/bin/docker run -d --name="nzbdrone" --net="bridge" -p 8989:8989/tcp -v "/mnt/cache/appdata/nzbdrone":"/config":rw -v "/mnt/cache/appdata/nzbdrone/downloads":"/downloads":rw -v "/etc/localtime":"/etc/localtime":ro needo/nzbdrone
      Unable to find image 'needo/nzbdrone' locally
      Pulling repository needo/nzbdrone
      2014/07/04 10:27:48 Could not find repository on any of the indexed registries.
      The command failed. I'm guessing that it is a permissions issue on my machine, so I tried to follow this advice:
      If I run either of these commands on my system, it gets stuck and outputs a non-stop stream of this error message:
      chmod: cannot access ‘docker/btrfs/subvolumes/5e66087f3ffe002664507d225d07b6929843c3f0299f5335a70c1727c8833737/bin/vdir’: Stale NFS file handle
      Any ideas?
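      One thing worth trying before involving the plugin at all (a sketch; whether the lookup succeeds depends on the registry, not on local permissions):
         docker pull needo/nzbdrone   # pull the image manually; if this also fails, the problem is the registry lookup, not the plugin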
  14. So... I'm trying to add a container via this plugin. I could get it to install via the command line with this:
      docker run -d --name="nzbget" -v /mnt/cache/appdata/nzbget:/config -v /mnt/cache/appdata/nzbget/downloads:/downloads -v /etc/localtime:/etc/localtime:ro -p 6789:6789 needo/nzbget
      How do I translate this command to the proper GUI settings? These are the settings I tried, but I got errors due to the repository not being found locally:
      Name: nzbget
      Repository: needo/nzbget
      Network type: Bridge
      Privileged: False
      Bind time: True
      Paths
        Host Path: /mnt/cache/appdata/nzbget
        Container volume: /config
        Host Path: /mnt/cache/appdata/nzbget/downloads
        Container volume: /downloads
      Ports
        Host port: 6789
        Container port: 6789
      Environment Variables
        Variable Name: /etc/localtime
        Variable Value: /etc/localtime:ro
      I know I am parsing the command incorrectly, but I hope someone can explain what I am doing wrong.
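      For reference, the translation as I understand it (this is my own assumption about how the plugin builds its command, not something from the plugin docs):
         # each -v host:container flag corresponds to a Paths entry (Host Path / Container volume)
         # each -p host:container flag corresponds to a Ports entry (Host port / Container port)
         # /etc/localtime is a volume mount (-v), so it belongs under Paths, not Environment Variables
         docker run -d --name="nzbget" \
             -v /mnt/cache/appdata/nzbget:/config \
             -v /mnt/cache/appdata/nzbget/downloads:/downloads \
             -v /etc/localtime:/etc/localtime:ro \
             -p 6789:6789 needo/nzbget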
  15. I noticed that packages I place in /boot/extra are not being installed at boot time. Could I possibly be doing something wrong, or did something change in v6 that I'm not aware of?
      EDIT: Syslog attached.
      EDIT 2: Never mind - I was wrong. My packages in /boot/extra are installing just fine. I had misdiagnosed my original issue. All is good now.
  16. Attached is a .conf to install the 64-bit version of htop. The information says that the package was made for Slackware v13.37; I believe unRAID v6 beta 6 installs Slackware v14.1. I tested on the current unRAID beta version and it appears to work as advertised. htop-unmenu-package-x86_64.conf
  17. I'm not sure if these libraries were cooked into v5 or if my install of one or more unmenu packages installed them, but I noticed when trying to get Subsonic running that the glib2 and libffi libraries are missing from v6 Beta 6. I had to install the following packages to get these libraries back: glib2-2.36.4-x86_64-1.txz libffi-3.0.13-x86_64-2.txz
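      For anyone hitting the same thing, once the two .txz files are downloaded (from a Slackware 14.1 mirror of your choice; exact URLs omitted) the install is just Slackware's installpkg:
         installpkg glib2-2.36.4-x86_64-1.txz
         installpkg libffi-3.0.13-x86_64-2.txz
         # copying the .txz files to /boot/extra makes them install automatically at boot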
  18. I upgraded to v6b6 today. The first thing I did was re-install unMenu! I noticed that there is no JRE package (and many others are missing too). Has anyone yet built a 64-bit JRE install package? I'd be interested if you have.
      EDIT: I figured out how to install the 64-bit JRE v1.7.0_60. It's the OpenJDK version. Looking at the unMenu .conf file I used to install the JRE when I was on unRAID v5, I think I can create a new .conf file for everyone else to use. To test my 64-bit JRE install I tried to run Subsonic, because I know it requires Java, and I'm getting an error. The error is that the file libgio-2.0.so.0 can not be found. I'm not sure if this is a problem with the JRE install or not. I figure if I can't get Subsonic to run, then I can't yet trust my install of Java.
      EDIT 2: Attached is a .conf to install the 64-bit version of the JRE. I figured out my Subsonic issue. After the JRE was installed there were still 2 dependencies missing in unRAID v6: glib2 and libffi. Neither of these dependencies is related to Java. I guess these two libraries are built into the 32-bit version of unRAID but missing from the 64-bit version. jre-unmenu-package-x86_64.conf
  19. So, kinda on the same subject... I'm planning my upgrade to v6. I want to use my spare USB key for the process. My plan is to format and upgrade my spare USB key and then bring the config directory from the old USB key to the new USB key (minus the .key file). My question: The config directory on my old USB key has junk in it (after 4+ years I'm pretty sure of it). Can someone tell me exactly which files in the config directory should be kept? My goal is to go back to 'stock' with my v6 upgrade.
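      The copy I'm planning looks something like this (a sketch of my own plan; /mnt/oldkey and /boot are stand-ins for wherever the old and new flash drives end up mounted):
         rsync -av --exclude='*.key' /mnt/oldkey/config/ /boot/config/   # bring over config, leaving the old .key file behind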
  20. Yes, when I said 'old flash' I was referring to the same drive before upgrade. I wasn't aware that most folks have their key file in the config directory. Mine has been installed at the root for so many years I couldn't tell you why it's there. I just thought that's where it belonged. I'll move it to my config folder, and test, before upgrade. Thanks for the quick replies!
  21. The upgrade from v5.0.x instructions say to copy the config folder back onto the upgraded flash drive, but say nothing about the .key file from the root of the old flash. Are there other steps missing from the instructions I should know about?
  22. Hmm. Strange. I am running the same version as you (5.0.4) and from what you showed about your system it would seem to me that your advice should work for me too. I don't know what to think at the moment.
  23. Thank you Peter. I gave it a shot, but unfortunately this is the response I got:
      root@UNRAID:~# modprobe w83627ehf
      FATAL: Error inserting w83627ehf (/lib/modules/3.9.11p-unRAID/kernel/drivers/hwmon/w83627ehf.ko): No such device
      It looks like the code for this driver is available: https://github.com/groeck/nct6775/. Is it possible to install this on my server, or does this need to be integrated by Tom?
  24. It appears that my motherboard (MSI Z87-G43) supports more advanced temperature monitoring. Unfortunately, following the instructions on the WIKI, I learned that my unRAID install (v5.0.4) doesn't include the required module. This is the result I see when following the WIKI instructions:
      Driver `nct6775':
        * ISA bus, address 0xa00
          Chip `Nuvoton NCT5532D/NCT6779D Super IO Sensors' (confidence: 9)
      Warning: the required module nct6775 is not currently installed on your system.
      If it is built into the kernel then it's OK.
      Otherwise, check http://www.lm-sensors.org/wiki/Devices for driver availability.
      Is there a way I can resolve this, or does this module need to be added by Tom? I'm assuming that this module is not built into the kernel because when I run the sensors command no temps from this device are shown. On my system, the only device showing temperatures is 'coretemp-isa-0000'.
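      If it helps anyone else check their own build, a minimal sketch (on a kernel that ships the nct6775 driver these two commands would load it and expose the extra readings):
         modprobe nct6775   # fails if the driver isn't present in this kernel build
         sensors            # would list the Nuvoton chip's temperatures and fans once the module loads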
  25. On the Disk Health page it tells me that my Disk 4 has an available firmware update. How the heck does it know this?