doctor15


  1. Cool, I'm back up and running. Thanks for the help and explanations!

     ```
     # btrfs fi df /mnt/cache
     Data, single: total=26.00GiB, used=24.48GiB
     System, single: total=32.00MiB, used=16.00KiB
     Metadata, single: total=2.00GiB, used=93.14MiB
     GlobalReserve, single: total=16.00MiB, used=0.00B
     ```

     Per the recommendations, I've changed my Docker image file size to 20GB. I'll try to be more vigilant about keeping an eye on things.
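     For anyone comparing numbers later: a quick way to see both views — the filesystem inside the Docker image versus the chunk allocation on the cache pool — is something like the following (assuming unRaid's default loopback mount at /var/lib/docker):

     ```
     # How full the filesystem inside docker.img is
     df -h /var/lib/docker

     # Chunk-level allocation of the cache pool itself
     btrfs fi df /mnt/cache
     ```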
  2. Yup, all zeros. Ah, thanks for the help! Is there any obvious way to catch when this happens? I'm pretty surprised I could fill up a 500GB SSD when it's just used for cache plus a 100GB Docker image. Am I better off just using XFS for my cache drive? I don't plan on setting up a cache pool.
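     (If it helps anyone else watching for this: a minimal watchdog along these lines could be dropped into cron — the path, threshold, and notification target are all assumptions, not anything built into unRaid.)

     ```
     #!/bin/bash
     # Hypothetical cron job: warn when the cache drive crosses a usage threshold.
     THRESHOLD=90
     PCT=$(df --output=pcent /mnt/cache | tail -1 | tr -dc '0-9')
     if [ "$PCT" -ge "$THRESHOLD" ]; then
       echo "/mnt/cache is ${PCT}% full" | mail -s "unRaid cache space warning" root
     fi
     ```

     Note that `df` can under-report BTRFS exhaustion once the chunks are fully allocated, so checking `btrfs fi df /mnt/cache` as well is the safer habit.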
  3. Oh, right, I forgot. I believe the problem actually started when one of my Docker containers started logging like crazy and blew up to 50GB. My Docker image file was set to 100GB, but the actual disk had plenty of free space. I fixed it by moving the file to the array via the terminal. Are you saying the disk is full, or that the file system inside Docker is full?
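     For reference, the move itself was roughly the following — the paths and the Slackware-style service script are assumptions about a stock unRaid install:

     ```
     # Stop the Docker service so docker.img is no longer in use
     /etc/rc.d/rc.docker stop

     # Move the image off the cache drive onto an array disk
     mv /mnt/cache/docker.img /mnt/disk1/docker.img

     # Then point Settings -> Docker at the new location before re-enabling it
     ```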
  4. I started having issues with Docker containers failing to start, and recently Docker stopped working completely. I checked the logs and see a number of BTRFS errors on my cache drive. It is a single Samsung SSD that has been running for ~6 months. I've searched for my issue and see it's somewhat common, but I couldn't find anything specifically explaining what the log entries mean or how to interpret them. I ran a scrub, which returned 0 errors, and an extended SMART test, which seemed to pass. I'm attaching my diagnostics in the hope someone more knowledgeable can spare a few minutes to look at the logs and explain what the issue is, or point me towards appropriate resources. I don't mind reformatting the cache drive and starting over, but I would like to know what caused the issue so I can prevent it from happening again. unraid-diagnostics-20170630-1506.zip
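     For completeness, the checks mentioned above were along these lines (the device name is a placeholder):

     ```
     # Scrub the cache pool; -B waits and prints a summary
     btrfs scrub start -B /mnt/cache
     btrfs scrub status /mnt/cache

     # Extended SMART self-test on the SSD, then review the report
     smartctl -t long /dev/sdX
     smartctl -a /dev/sdX
     ```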
  5. I've been struggling for a while to optimize my shares setup with a Mac OS client (I have Windows and Linux clients too, but mostly use Mac OS), and need some help.

     If I use SMB with "Enhanced OS X interoperability", I can connect fine and it mostly works, but I get all sorts of errors when I try to use `git` on the share, like `error: unable to write sha1 filename .git/objects/65/551a27304525fe2635976fbe7898db1f6c300d: Operation not permitted`. If I disable "Enhanced OS X interoperability", everything works, but any file or terminal operation (including `ls`) is painfully slow, to the point it's unusable (several seconds per operation).

     I have attempted to use NFS, but can't figure out how to connect. I believe this may be because of a UID mismatch. Is there a way to create a user with a specific UID in unRaid? What would be the NFS path I connect to? nfs://<unraid ip>/share_name?

     How do you have shares set up with Mac OS? How can I make this usable? I can live without access from Windows clients if need be.
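     In case it helps the next person: on the Mac side, an NFS mount sketch that works around the usual macOS quirk (Linux servers reject macOS's non-reserved source port unless you pass `resvport`) would look like this — the share name and IP are placeholders, and the UID question still applies:

     ```
     # Check that the Mac UID matches the unRaid user's UID (run `id` on both ends)
     id

     # Mount the export; unRaid user shares live under /mnt/user
     sudo mkdir -p /private/nfs/share_name
     sudo mount -t nfs -o resvport,vers=3 <unraid-ip>:/mnt/user/share_name /private/nfs/share_name
     ```

     From Finder, the equivalent Cmd+K URL would be nfs://<unraid-ip>/mnt/user/share_name.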
  6. It looks like several of you are having the same problem I reported in this thread: https://lime-technology.com/forum/index.php?topic=43682.0 Did you find a workaround or figure out the problem? Any insight would be helpful.
  7. I'm running 6.1.3 and currently have a 60GB SSD in the array as a cache drive, but I really only use it for VMs and Docker (mover is disabled). I've been recording a lot of TV and wanted to keep my parity drive from constantly spinning, so I'd like to start using a 1TB HDD I had lying around as a "real" cache and leave the SSD outside of the array for VM/Docker use only. I couldn't find a definitive guide, but the best I could tell from forum posts led me to try the following:

     1) Install the Unassigned Devices plugin: http://lime-technology.com/forum/index.php?topic=38635.0
     2) Stop all Docker containers and VMs
     3) Disable Docker and VMs from the web GUI settings
     4) Stop the array
     5) Change the cache drive to the new 1TB drive via the web GUI dropdown menu
     6) Mount the 60GB SSD using the Unassigned Devices plugin
     7) Re-enable VMs and Docker from the settings menu and point them to the new path on the SSD

     Everything up through step 5 went as expected, and I can start the array with the new cache drive mounted and everything green. However, under "Unassigned Devices" in the web GUI, it still shows the 1TB drive even though it's already mounted as the cache drive. I thought maybe I needed to restart, but even odder, every time I restart, it boots back up with the SSD mounted in the array as the cache drive. I looked at the logs and don't see anything odd, but I've attached them in case I missed something. unraid-diagnostics-20151028-0925.zip
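     (A couple of terminal sanity checks after step 5, in case anyone hits the same thing — worth confirming the GUI's view against what's actually mounted:)

     ```
     # Confirm which physical device is actually mounted at /mnt/cache
     grep cache /proc/mounts

     # Match serial numbers to the devices shown in the GUI
     ls -l /dev/disk/by-id/
     ```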
  8. After LOTS of troubleshooting, it turned out it was the RAM. I'm now at 8+ days of uptime! It's odd that running memtest for 24 hours showed nothing, but the crashes stopped once I pulled one of the DIMMs. Thanks for the help!
  9. Oh, I should also add that I'm on 6rc2 now.
  10. No overclocking or anything fancy. I have 8GB of ECC RAM: a 4GB stick that came with the system and a 4GB stick that I bought. I should note it's been running stable in this configuration for 1+ years (I used to be on FreeNAS), but I know sticks can still go bad. Should I try running memtest for a few days? Or just pull out one of the sticks?
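      (Before pulling anything, a way to see exactly which DIMMs are installed and where — standard Linux tooling, nothing unRaid-specific:)

      ```
      # List each memory slot with size, location, and part number
      dmidecode --type memory | grep -E 'Size|Locator|Part Number'
      ```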
  11. So I'm still struggling with this issue, slowly removing one component at a time. This is time consuming since it doesn't crash for several days. I ran memtest overnight and had no issues. Every time after it crashes there is nothing in the syslog. Is there a way I can increase the log level, or find a more hardware-focused log to tail? One thing I did notice is that the screen has a few random colored pixels in different areas after crashing. Does this sound meaningful to anyone, or is it just a side effect of the crash?
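      (One thing worth trying in the meantime — raising the kernel console loglevel so more detail reaches the attached monitor before a hang; this resets on reboot:)

      ```
      # 7 = debug-level messages and below go to the console
      dmesg -n 7
      ```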
  12. I don't disagree that it might be a hardware issue, but I also have not had unRaid for that long, so I'm not sure it just started out of the blue. Regardless, I'm very unsure how to troubleshoot with the current log situation. Any suggestions? The parity check is running nightly with no issues, and I pre-cleared all drives before setting up the array.
  13. For plugins I have powerdown (although it still can't save the logs when the system freezes). I also have a few Docker containers and an Ubuntu VM running on KVM. Hardware is a Dell T20, Pentium G3220 3.0 GHz with 8GB of ECC RAM. For drives I have a 60GB SSD assigned as the cache drive for Docker/KVM, 2x3TB drives in the array, and a 3TB parity disk. I should also note I have e-mail alerts enabled and don't receive any warnings before it crashes.
  14. I've been experiencing an issue for several weeks now where unRaid crashes every 2-3 days. It usually happens while idle, and the server becomes completely unresponsive both locally and over the network, so I have to force a restart via the power button. The most frustrating part is that after the restart I can't view the logs to see what went wrong. I finally left the log tailing with the server hooked up to a monitor so I could watch it, and saw no sign of activity during the crash. Every ~20 minutes there was a log entry that the server could not communicate with the UPS (I have the server plugged in at my desk to troubleshoot, while the UPS is in the network closet). About 30 minutes before the last UPS entry there were entries like "spindown(0)", "spindown(1)", etc. I was initially on beta 14b when this started, but recently upgraded to 15 and the issue has not resolved. I know this is not much to go on... Any suggestions on how I should continue troubleshooting?
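      For anyone else stuck with logs that vanish on a hard reset: one workaround is to mirror the syslog to the flash drive as it's written (assuming /boot is the USB stick, as on a stock unRaid install):

      ```
      # Keep a live copy of the syslog that survives a forced power-off
      mkdir -p /boot/logs
      tail -f /var/log/syslog >> /boot/logs/syslog-live.txt &
      ```

      Writing continuously to flash wears the stick, so this is a troubleshooting measure, not something to leave running.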
  15. I'm running 6.0-14b and keep getting a red notification in the web GUI. Where do I find more information about what failed? All drives are showing green or grey, and the parity check completed with 0 errors. I also don't see any errors in the syslog. This may or may not be related, but my unRaid server has also been crashing and/or losing network connectivity on occasion.