JustinChase

Everything posted by JustinChase

  1. Some of the files actually failed to move, not just failed to be removed afterwards. I ended up rebooting the server, then ran newperms again, and this time it didn't visibly fail. I'm working on moving the files again now. Sadly, it looks like this could take a few days to finish.
  2. Right, and that is how I have it set up currently, and have had it set up since before the upgrade to 6beta9. It is set as cache-only, and the share "ball" is red, stating all files are on the cache drive. I just can't figure out why, when I change the host path to /mnt/user/appdata/plex, it works, but when it is /mnt/cache/appdata/plex, it does not, when the files are clearly there. If any of the files/folders got put on the array, it will screw things up. I had to check each drive manually using Midnight Commander, and move any errant folders back to the cache drive manually; then my 'weird' issues went away. I had this issue because I neglected to set the share as cache-only when I first installed dockers. Anyway, I hope that helps.
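A quick way to do the per-drive check described above, rather than browsing each disk in Midnight Commander, is a small loop over the array mount points (a sketch: /mnt/disk* is the usual unRAID layout, but adjust the glob to your system):

```shell
# Report any appdata directory that landed on an array disk
# instead of the cache drive.
for d in /mnt/disk*; do
  if [ -d "$d/appdata" ]; then
    echo "stray appdata found on $d"
  fi
done
```

Anything this prints would need to be moved back to /mnt/cache/appdata before the share behaves as cache-only.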
  3. So, how do I change the "read only" nature of the files on this disk? The newperms script doesn't do it, so I don't know how to 'free up' my files so I can move them off of this disk.
  4. So, rsync is moving very slowly, and now I've noticed some issues as it goes along. One example is this:

rsync: sender failed to remove Formula 1/Season 2010/Formula 1 - s2010e18 - PreRace.mp4: Read-only file system (30)

Also, after it's finished moving some directories, it leaves some behind. The files are gone, but there are still lots of empty directories in the folders that rsync has finished with. I then tried to 'move' the directories with MC, but it said "cannot move directory ... Read-only file system (30)". I've been getting similar errors when updating tags in my media player, so I suspect there are more issues here. So, how do I finish getting all my files moved, so I can get this drive cleared?
  5. Okay, I'll try that when this mc batch is done. Sadly, no single drive has enough room to hold everything, so I'm going to have to split it into more batches. I'll try rsync next. Thanks again.
  6. Thanks. I wonder: is Midnight Commander the 'best' way to move the files around? I've seen rsync mentioned, but am not sure how to access/use it, or whether it's worth my time to learn. MC is going pretty slowly (~15Mb/sec) right now, but I think that's likely a function of it actually using parity to move the data, since it thinks the drive is missing. Either way, I would like to use whatever tool is fastest, but safe. The sooner I get everything off the red-balled drive, the better I'll feel.
  7. I couldn't assign it. When I selected it, it just didn't 'accept' that. I finally decided that rebooting one more time couldn't hurt, and when I did, it found that disk and added it just fine. Now I just have the one red-balled drive, disk 9. That allowed the array to start, so now I'm moving all the data off of disk9 to the other disks, and will try to preclear disk9 once finished, and see if it 'passes' without error. If so, I'll format it to (probably) XFS, then begin transferring data onto that disk, and continue converting the formatting of the rest of my drives. Hopefully this is the end of the red-ball drives I'll need to deal with for quite some time.
  8. I'm looking at 2 red-balled drives; one shows a drive in the box, the other says missing disk. I've checked the cables twice, and the third time I moved the drives around to connect new/different SATA and power cables to both drives. The errors followed the drives, so I doubt it's a cable issue at this point.

How I got here: I had to hard boot the machine a few days ago, and just remembered last night to start the parity check I had cancelled, because I couldn't watch anything with a parity check running the night I had to hard boot. So I started it at about 10pm last night, and at 8am this morning, parity was 4% finished, which is crazy slow. I cancelled that parity check, then noticed a couple thousand sync errors (not happy). I rebooted again, then started the parity check again. It started off fine, but after about an hour it was going very slow again.

During that hour, I tried to launch my deluge docker, but it wouldn't load. The docker plugin said it was started, but I still couldn't connect. I tried to uninstall the docker, but that gave me an error. I grabbed a syslog, and it looks like disk10 (where the docker image resides) is having problems. I rebooted again, and the docker plugin won't even start now.

My son started watching a movie, and it was stuttering frequently (not an uncommon occurrence, unfortunately), but I suspected it might have to do with disk10 being sketchy, so I shut down the server and checked all the cables. I remembered I had a new drive sitting 'loose' in the server from when I had the last red ball, about 2 weeks ago. I removed the old red-balled drive from its caddy, put the new drive in its place, checked all SATA and power cables, and started the server. I ended up with 2 red-balled drives, one as missing and the other listing the drive. I shut down, moved the drives around, and started again. The red balls followed the drives. Before I make things any worse, what should I do at this point?

I don't currently have any blank drives to install, but I do have the old 2TB drive that was previously red-balled, most likely due to a cable issue. I had planned to preclear it to test it before using it in something else, but even if that worked, it's smaller than both current red-balled drives. So, what do I do now? syslog.zip
  9. I tried going to my nzbdrone docker, but it said it couldn't connect, or some such error. I tried to remove it, but it failed, and I didn't capture the error, sorry. I decided to just shut down and reboot the server. Now that it's booted back up, docker won't even start. I have it set to load the image here: /mnt/disk10/docker. I checked, and this still exists at that location, but when I click start, it tries, then comes back showing as stopped. I did another shutdown, confirmed the machine turned off, then turned it back on and tried again, with the same results. This is all with beta9. I've started a parity check, but the last one I stopped said it was going to take 22 DAYS to finish, which is absurd. Something is obviously wrong with my server, but I don't know what. syslog.zip
  10. Okay, thanks for the quick response. I've changed it back now. Now I just need to figure out how to set it up.
  11. Good news: even though I couldn't 'save' the change, it did install the config to /cache/. Bad news: I changed the container port instead of the host port when I installed it, which obviously didn't work. I realized what I did, then switched the port numbers to be correct: container 8000, host 9998. However, now when I enter media:9998, it automatically changes and takes me to 192.168.20.150:8000. If I manually change that from :8000 to :9998, it works fine, but it shouldn't be changing my address to :8000 automatically. I removed the container and reinstalled, but it's still happening. I'm going to remove the image and start over, but wanted to post my findings.
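For reference, the host/container distinction above corresponds to docker's -p flag, which is always -p HOST:CONTAINER. A sketch using the values from this post (the image name is a placeholder, not the actual template):

```shell
# -p HOST:CONTAINER — requests to port 9998 on the unRAID host are
# forwarded to port 8000 inside the container.
docker run -d --name owncloud -p 9998:8000 example/owncloud
# The web UI is then reachable at http://media:9998, while the service
# inside the container still listens on 8000.
```

Reversing the two sides of the colon produces exactly the symptom described: the service ends up published on the wrong host port.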
  12. I haven't installed anything new for a while now, but I just tried to install ownCloud, and noticed something weird, perhaps a 'bug'. I used the repository template, and it offered the path /mnt/user/appdata/owncloud. I prefer to keep it with my others at /mnt/cache/appdata/. There didn't seem to be a way to 'save' the change from /user/ to /cache/; I could make the change in the box, but it didn't show any way to save it. I'm sure I could have created a new path, but I'm not sure if I could have removed the /user/ path. I'm installing right now, so I'm not sure what the result will be: whether it will go to user, or cache. I just wanted to report this issue, in case it can be improved, or in case there is a reason it can't be. Ideally, it would be cool if the plugin could probe for the appdata folder, and default to using the location that already exists.
  13. http://lime-technology.com/forum/index.php?board=64.0 "I see this as new features and a wish list for new features. Will all of these features/requests make it to RC?" LimeTech will have to say for certain, but that board was created by them, and populated by them with the things they intended (at that time) to have in version 6.0. There are other boards for 6.1 and 6.2. If those intentions have changed internally, we don't know, but publicly, yes, all of that is supposed to make it into version 6.0.
  14. Normally, when I hold Ctrl and press refresh, that forces Firefox to re-pull the page (bypassing the cache), but that was not working. I did, just now, manually delete the 3 or 4 cookies for my server, and now it's working again. Weird, but fixed.
  15. So, it seems I can connect to the GUI from Internet Explorer, but still not from Firefox. I've restarted the server, and restarted the laptop, and still get this message in Firefox:

"The connection was reset. The connection to the server was reset while the page was loading. The site could be temporarily unavailable or too busy. Try again in a few moments. If you are unable to load any pages, check your computer's network connection. If your computer or network is protected by a firewall or proxy, make sure that Firefox is permitted to access the Web."

If I go to IE, it works instantly, and asks for my username and password, like it should, and like it always has in Firefox before beta9. How to fix?
  16. I'm going back to bed, will check on this again in a few hours. After a few hours, I checked the GUI from my phone, and it was working fine. I tried to update a docker from my phone, and it went really, really slowly, then failed. Now the GUI isn't working on the phone, and is not working on the laptop either. I tried to shut down the server from putty, but it had no effect:

root@media:~# ps aux | grep emhttp
root  2175  0.0  0.0  89052  3204  ?      Sl  05:57  0:01  /usr/local/sbin/emhttp
root  7550  0.0  0.0   5100  1728  pts/0  S+  09:50  0:00  grep emhttp
root@media:~# shutdown -r now

Broadcast message from root@media (pts/0) (Fri Sep 12 09:51:27 2014):
The system is going down for reboot NOW!

Okay, after several minutes, it seems it is now shutting down.
  17. I suspect this is a 'my' issue, and not an 'unRAID' issue, since I've not seen anyone else report it. When I tried restarting the server on beta8 (to install beta9), it took a long time to finally stop the array. I just now realized that was probably due to my Windows VM still running. I was finally able to get it restarted, and can confirm it's running beta9 now. However, any navigation in the GUI is really, really slow, like a minute to switch screens. I checked the dockerMan plugin, and none of the dockers show the update status, so I tried to update them anyway, which worked last time. It didn't work this time. I then tried to just install one from the green plus button, and it failed (on the second of the 3 things it does; I can't remember which). Anyway, now the GUI is totally gone. It just gives me a failed-to-load message. I've tried entering specific addresses (media/Extensions and others), but it just fails altogether now. The array is still up, dockers are still running, and I can navigate the folders just fine in Windows, but I have no GUI. Syslog attached. syslog.txt
  18. I'm thinking it's time to set up my own 'cloud' server, since I want to share some files with Family in another state, and just simply don't trust any cloud service to keep them safe/secure. I'm not sure I can keep them any more secure, but if I fail, I won't feel as bad as having my trust betrayed by someone else. So, I've just started considering this, and haven't looked very hard into my options yet, but I see there are Dropbox and OwnCloud dockers, so I thought I'd ask for opinions and suggestions before I get too far down any particular path. Thanks for any input you can provide!
  19. That was my first thought too... Hmm.... that voice/tone sounds awfully familiar! Their user name is "Truth Hurts". Not too hard to put it together. I must say I miss Grumpy. His style was too 'in-your-face', but his content was usually pretty good. Oh well.
  20. Grumpy, is that you? I could be mistaken, but I'm pretty sure you advocated to get on the latest kernel, because of the older kernels having known bugs.
  21. I don't disagree, and I'm not arguing, or even saying anything should change; I was just thinking out loud. However, this is more than 'one bug'. A bug is the GUI not updating, or something equally annoying; this is data loss, and it actually has nothing to do with the beta so much as being a result of the latest, less-tested Linux kernel. If unRAID was running 3-5 versions behind the latest Linux kernel, I think this situation would not have happened. Again, I'm not saying this 'should' happen, just pointing out that this potential data loss is a result of being on the very cutting edge of Linux, not the unRAID beta, per se.
  22. A few months ago, there were lots of (grumpy) posts stating that we should get and stay on the latest versions of Linux and other programs, because they contain bug fixes and are generally more secure than the old version of Linux unRAID had been using for years. The posts sounded convincing, and made some sense, but I always had this possibility in the back of my mind: that the latest versions aren't as well tested, and although they have bug fixes, they may also contain unknown bugs. This situation has shown that to be a real, and frustrating, possibility. Perhaps there needs to be some more thought about how 'bleeding edge' unRAID wants to run with regard to Linux and other critical programs. It's a balancing act, for sure, but when we were living with known bugs, I don't think we were subjected to data-destroying bugs, and now we've run into one. Obviously getting beta9 tested and out the door should remain the top priority, but once that's done, maybe a rethink about latest versioning is necessary. Just thinking out loud.
  23. perfect. I honestly didn't know about it when I wrote that. I just set mover to monthly, so you have 20 days before I write to the array again. Seriously though, fixing potentially serious issues obviously comes first, playing with passthru is a far off second. thanks again for keeping us updated.