eweitzman

Members
  • Content Count: 48
  • Joined
  • Last visited

Community Reputation: 0 Neutral

About eweitzman
  • Rank: Newbie

  1. Hi. I'm a longtime developer, new to development on Slackware. I discovered that pkgtools is the package manager, but it seems the GUI program pkgtool is missing from unRAID 4.7.2. The other pkg commands ({remove, install, etc}pkg) are here and running. So I pulled pkgtool out of the latest package distribution at http://ftp.slackware.com/pub/slackware/slackware64-current/slackware64/a/pkgtools-15.0-noarch-28.txz, put it in /sbin with the other *pkg files, chmod +x, installed dialog, et voilà, I have pkgtool for the moment -- at least until the next reboot. pkgtools-15.0-no-arch-23 is
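The pull-one-file trick in that post can be sketched like this. Since the real download needs the network, a locally built archive stands in for the pkgtools .txz here so the commands are runnable anywhere; the layout with pkgtool under sbin/ inside the package is an assumption from the post.

```shell
set -e
work=$(mktemp -d); cd "$work"

# Stand-in for pkgtools-15.0-noarch-28.txz. A .txz is just an
# xz-compressed tar; on the server you would wget the real file instead.
mkdir -p sbin
printf '#!/bin/sh\necho pkgtool\n' > sbin/pkgtool
tar cJf pkgtools-15.0-noarch-28.txz sbin/pkgtool
rm -rf sbin

# Pull only the one member we want out of the package, then stage it
# as executable, just like copying it into /sbin on the server.
tar xJf pkgtools-15.0-noarch-28.txz sbin/pkgtool
chmod +x sbin/pkgtool
sbin/pkgtool            # prints: pkgtool
```

Note that anything placed in /sbin by hand this way lives on the unRAID ramdisk, which is why it disappears on reboot, as the post observes.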
  2. I skipped 6.4.1 and went to 6.5, and have two drives with CRC errors. Would you mind describing how to acknowledge them? I see the CRC counts highlighted in orange in the SMART listing on the affected drives' pages, but there are no buttons, etc., to acknowledge the error. Thanks, - Eric
  3. Will updating a drive's firmware cause any data loss? If not, is it safe to update the firmware if the drive is part of the array? That is, will any portions of the disk be changed (such as a signature) that would invalidate parity? In case the answer depends on the drive/manufacturer, I'd like to update a WD Red WD60EFRX from version 68MYMN0 to 68MYMN1. Thanks, - Eric
  4. Interesting advice, but has rfs really rotted such that it's no longer usable? This 6TB drive has 750GB free, so the "nearly full" theory doesn't apply. I have several other 6TB drives that *are* nearly full but have never had rfs problems. In fact, this is the only time I've seen this problem in seven years of using rfs with unRAID. The array has been up and moving data around, including from/to this disk, for about 12 hours now. Mystery.
  5. Update: The extended SMART test shows no errors.
  6. A 6TB Red Pro drive with ~1000 hours has twice had write failures and reiserfs problems while running mover and once for no reason that's apparent to me. SMART short test shows no problems. About 2 weeks ago while running overnight, mover caused several thousand reiserfs errors and unraid red-balled the drive. reiserfsck --check informed me to run --rebuild-tree. This took a very long time (10+ hours?) with the array in maintenance mode. All looked well afterwards, until a day later under normal use, unraid red-balled the drive again but without reiserfs errors. I replaced cables a
  7. I noticed that I missed a "5" in one of the port mappings. It's now "55555". I restarted the docker (it required that I re-enter my license), but I still can't access my /sync folder. This is the docker command now: root@localhost:# /usr/local/emhttp/plugins/dynamix.docker.manager/scripts/docker run -d --name="ResilioSync" --net="bridge" -e TZ="America/Los_Angeles" -e HOST_OS="unRAID" -e "PUID"="99" -e "PGID"="100" -p 8888:8888/tcp -p 55555:55555/tcp -v "/mnt/disk1/ResilioSync":"/sync":rw linuxserver/resilio-sync 681779083c4b4f12353f7b043d9c782d1c5790e22475ce44a948a35e
  8. I would like my /sync folder to be inside the array, not on the cache drive. To that end, I set /sync to /mnt/disk1/ResilioSync, created the directory, and made sure user:group was 99:100. When I try to create a manual folder connection, I enter a key pasted from Sync on another machine, set the path in the "Connect to folder" dialog (as dictated by the share/key) to /sync/docs (for example), and press "Connect"; the dialog says "Sync Home does not have permission to access this folder." What am I doing wrong? Thanks, - Eric
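For the permission error in that post, a minimal sketch of what the host-side folder usually needs, assuming the container maps /sync with PUID 99 / PGID 100 as in the docker command above. A scratch directory stands in for /mnt/disk1/ResilioSync so this runs anywhere; on the server you would use the real path and run as root.

```shell
set -e
sync_dir=$(mktemp -d)           # stand-in for /mnt/disk1/ResilioSync

mkdir -p "$sync_dir/docs"
# chown to the container's uid:gid (needs root on the server); chmod
# ensures the whole tree is traversable and writable for that user
chown -R 99:100 "$sync_dir" 2>/dev/null || echo "chown needs root"
chmod -R u+rwX,g+rwX "$sync_dir"
stat -c '%u:%g %A' "$sync_dir/docs"
```

A common gotcha is that every directory on the path down to the share must be traversable (execute bit) for the container user, not just the leaf folder itself.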
  9. When I first started using unraid, I bought into the notion that storage solutions like this could run on any ol' hardware you had lying around unused in a closet. unraid upped the ante by being able to use any ol' disks. While this was and still is all true, I discovered it also led to much frustration in terms of low speed, wildly varying transfer rates, and mysterious long random waits and freezes. In spite of these frustrations, it was perfect for what it did at a very low entry cost. I just had to temper my expectations and appreciate the job it did. A few months ago, I bought
  10. I just redid steps 10-13. I believe I must have pressed "Done" in the New Config page without pressing "Apply" first. Now, after pressing both buttons, every device on the main page has a blue square indicating "new device" in hover tips and the "Parity is already valid" checkbox is present. One suggestion for improvement: After checking the "Parity is already valid" checkbox, the warning up top beside the parity disk should change from "ALL DATA ON THIS DISK WILL BE ERASED WHEN ARRAY IS STARTED" to some other message indicating it will be preserved. The array has start
  11. I've almost completed removing a drive using the method described here: https://lime-technology.com/wiki/index.php/Shrink_array#Alternate_Procedure_for_Linux_proficient_users. I am absolutely certain that I have unmounted, cleared, and unassigned the correct drive. A problem occurs in step 14, "Click the check box for Parity is already valid, make sure it is checked!" There is no such checkbox on the main page. There is the following checkbox after I unassigned the zeroed out drive: The server is running version 6.3.2.
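The "clear the drive" step in that linked shrink-array procedure boils down to overwriting the whole device with zeros using dd. A small temp file stands in for the disk device here so the commands are safely runnable; on the server the target would be the drive's /dev/mdX device (triple-check which one before running anything like this for real).

```shell
set -e
disk=$(mktemp)                  # stand-in for the drive's /dev/mdX device

# Pretend the "disk" holds old data, then zero it end to end
dd if=/dev/urandom of="$disk" bs=1K count=64 2>/dev/null
dd if=/dev/zero    of="$disk" bs=1K count=64 conv=notrunc 2>/dev/null

# Verify: deleting every NUL byte should leave nothing behind
test -z "$(tr -d '\000' < "$disk")" && echo zeroed
```

The zeroing is what lets "Parity is already valid" be honest: a drive that is all zeros contributes nothing to the parity calculation, so removing it leaves parity intact.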
  12. This came to me by email with a promo code (ESCAKAK23). I don't believe the promo code is specific to me because it shows up in the listing for the drive. http://www.newegg.com/Product/Product.aspx?Item=N82E16822145973 - Eric
  13. I ordered this drive and got the $25 off for using PayPal. The cost was $235 delivered - no shipping, no tax. What a deal! The drive is in a retail blister pack, the blister pack is in a 12"x15"x6" box with air-filled bags that are completely flat. The bags have small tears in them from the sharp edges of the blister pack. The blister pack itself has bent corners and one part is compressed. The drive has undergone lots of impacts from UPS and USPS handling. It may have been a good drive when shipped, but probably has been subjected to G forces out of spec. It's going back. What a disappointment