jtech007

Everything posted by jtech007

  1. Docker is not enabled with a fresh install. You have to turn it on. Quit trolling.
  2. Yes and yes. It shouldn't create a HUGE file automatically. But if it auto-grows, it'll start out as only a few bytes, possibly even 0 bytes. That's fine. What's NOT fine is creating a 21.5GB file without permission. Imagine what a new user like myself thinks when seeing 22GB usage on a newly formatted drive? First "WTF? I formatted it, didn't I?", then after investigating, finding a humongous file: "WTF is that for??", and then finding out it's for Docker: "WTF, if it uses that much, why is it enabled without my permission in the first place??" You see, confusion. Nothing but confusion. I know this is a Linux product, but it's also an expensive product. I would expect a little thought to go into the initial setup. That space shouldn't have been taken from me in the first place; I never gave it permission to do this. unRAID left a bad first impression on me, that's for sure. Not only because of this, but also because of the other things I posted. Obviously you skipped over this page on the website: https://lime-technology.com/try-it/ They give you 30 days to sort out your issues with the product before you buy it. If you don't like it, you don't/didn't have to buy it.
  3. Stop Docker, delete the docker.img, and the 22GB will be returned to you. Simple as that.
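     For anyone who wants to do it from the command line, a minimal sketch of that process (the image path is an assumption from my own setup; check Settings > Docker for where yours actually lives, and note you can do the whole thing from the webGUI instead):

        # stop the Docker service before touching the image
        /etc/rc.d/rc.docker stop
        # remove the loopback image; adjust the path to match your Docker settings
        rm /mnt/user/system/docker/docker.img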
  4. I had a similar issue where no matter what I tried, I would get a kernel panic on boot. I tried a clean install on it and it still would not boot. I tried a *new* USB thumb drive with the backup of my flash drive, and it booted up without an issue. If you have another thumb drive laying around, try putting the backup of your flash drive on it and see if it will boot. unRAID will recognize that your new flash drive does not match your key file and will prompt you from there to get a new key.
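     In case it saves someone a search, the rough restore procedure I followed, written out as a sketch (the device name and zip name are assumptions from my setup; the make-bootable script that ships in the backup differs by OS, e.g. make_bootable.bat on Windows):

        # assuming the new stick is /dev/sdX and your backup zip is in ~/
        mkfs.vfat -F 32 -n UNRAID /dev/sdX1   # FAT32, volume label must be UNRAID
        mount /dev/sdX1 /mnt/usb
        unzip ~/unraid-flash-backup.zip -d /mnt/usb
        # then run the make_bootable script included in the backup for your OS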
  5. IIRC the colors are a warning tool to indicate the usage of a drive or device. The "yellow" drive has slightly less free space than the other drives, hence the color. I bet if you added 100GB to that drive it would turn red to warn you that it is very close to being full.
  6. Sorry for the slow reply, I was moving my server to a new case. This was the problem: in the container settings, the path was pointed to the wrong place and was filling up the docker image. As soon as I changed it and deleted the backup, unRAID sent a notification that the image utilization was normal. Thank you for your help!!!
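     In case it helps the next person: the root cause was a container path that wasn't mapped to host storage, so everything it wrote landed inside docker.img. A hypothetical example of the difference (the image name and paths are made up for illustration):

        # BAD: /backup is not mapped, so writes to it go into the
        # container layer and grow docker.img
        docker run -d --name myapp myimage

        # GOOD: /backup is mapped to a share, so writes land on the array
        docker run -d --name myapp -v /mnt/user/backups:/backup myimage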
  7. Oddly enough I don't have any more shares than I did before the re-install.
  8. Since I took the 6.2 update that required deletion of my docker.img file, the Fix Common Problems plugin has been nagging me that something is filling up my .img file. I have read several threads and changed a few things, to no avail. I have taken a snapshot of my Docker page and can upload reports if needed. From what I have read, I would assume Sabnzbd is the culprit, but no matter what I change, it still fills up. Ideas?
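     For reference, two commands that helped me narrow down which container was growing the image (assuming docker.img is mounted at the default /var/lib/docker):

        # per-container disk usage, including the writable layer size
        docker ps -s
        # largest directories inside the loopback mount
        du -h -d 1 /var/lib/docker | sort -h | tail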
  9. I keep pfSense on a standalone box so the rest of the family can use the WiFi while I am playing with the unRAID server, or in the event of a crash that I can't fix until I get home. I thought about including it, as I would love to have one less box, but the recycled security appliance is only 1U in the rack and has more than enough power for my home needs. I think I will end up with Cat6 extenders for the HDMI to my double/triple monitor setup. I would love to use one DisplayPort cable from the garage to an MST hub that would allow for up to three monitors, but I cannot find a cable that long that I would trust. Most reviews say the longer cables cannot handle the data correctly, and the monitors either do not respond or don't display the correct resolution. I currently have enough room in my rack to have a 19" monitor sitting on a shelf with an old keyboard to see the server boot. I might add a KVM setup down the road, but for now I don't need it and I have the room for a monitor to sit there. I have IPMI working as well. Now off to find some deals on GPUs and also pick up a USB card or two. Thanks again for the suggestions.
  10. Thank you for the reply! It seems like your setup is very similar to what I am trying to do so this info is a big help so I don't run in circles trying different parts that will not work in the first place.
  11. That is good news. Now I just need to figure out how to use all the slots when the newer video cards need two. Risers might be in order if I can find room in the case.
  12. I believe the HGST unit needs 220V to work as well, as they were imported from Europe. Funny thing is, I don't need either, as I have a 24-bay SM chassis about half full and a 16-bay SM chassis that is not being used. Just curious to see if others have used this setup and whether it's practical for unRAID, as others might have a need for it.
  13. Would these work well with unRAID? Not sure how old these are and whether the backplanes would support larger-capacity drives. It looks like there is only one RAID card, so there would have to be expanders to control that many drives. http://tinyurl.com/zfa9ovg
  14. Running dual 2670's on an ASRock EP2C602 motherboard (the one with only 8 memory slots, if that helps). I am trying to sort out what I can effectively use on this board and what the performance will be when I add VMs and pass through GPUs to them. Currently I do not have any VMs running that I don't use via VNC, but my goal is to pass through a Windows VM and possibly a MacOS VM as well, which will require two GPUs if I want to run them at the same time. I also have two M1015's and a 4-port Intel NIC, but I can lose the NIC if needed, drop one of the M1015's and add an expander (which I already have, but am not using), or use some of the SATA ports on the MB itself. I pasted the slot mapping from the manual to get advice on whether I can run all or some of this on the same box:

      PCI Express 3.0 x4 Slot (PCIE1, White) from CPU_BSP1
      PCI Slot (PCI2, White) from Intel C602 Chipset
      PCI Express 3.0 x16 Slot (PCIE3, Blue) from CPU_BSP1
      PCI Express 3.0 x16 Slot (PCIE4, Blue) from CPU_BSP1
      PCI Express 3.0 x16 Slot (PCIE5, Blue) from CPU_BSP1
      PCI Express 3.0 x16 Slot (PCIE6, Blue) from CPU_AP1
      PCI Express 3.0 x16 Slot (PCIE7, Blue) from CPU_AP1

      My concern is that adding too many cards on the lanes will nuke performance on either the data side or the GPU side. The end goal is to move away from multiple towers, have all my Windows and Mac instances on this server in my garage, and use extenders (Cat6) to send video to my desk on another floor of the home. Any advice on what I can do and which way to head would be a big help before I buy new GPUs for passthrough.
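     Follow-up for anyone planning the same: before buying GPUs for passthrough, it's worth dumping the IOMMU groups to see how those slots isolate, since a GPU that shares a group with other devices can't be passed through cleanly on its own. A generic sketch (not unRAID-specific; assumes intel_iommu=on is already on the kernel command line):

        # list every PCI device by IOMMU group
        for d in /sys/kernel/iommu_groups/*/devices/*; do
            g=${d#/sys/kernel/iommu_groups/}; g=${g%%/*}
            echo "group ${g}: $(lspci -nns ${d##*/})"
        done | sort -V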
  15. If you zoom in on the rear photo, the NIC is a management port, so you will need to add a NIC. Not a big issue though.
  16. Thank you, Gary, for the reply! I thought it was time for an upgrade, just wanted to make sure that I get one big enough to handle it all.
  17. Running the following hardware on a 650W Seasonic:

      Supermicro 24-bay case with backplane
      ASRock EP2C602
      Dual Xeon E5-2670's
      64GB ECC memory
      4-port Intel NIC
      3x 4TB HGST drives
      2x 1TB WD drives

      I am looking to add 2-4 more drives for more space plus dual parity. I am also going to set up 3-4 VMs with several dedicated video cards. What wattage PSU would work best moving forward? I don't mind buying a large one now and growing into it.
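     For what it's worth, the back-of-envelope math I'm working from (the CPU TDPs are from the spec sheets; the drive spin-up and GPU numbers are ballpark assumptions, since I haven't picked the cards yet):

        2 x E5-2670 @ 115W TDP           = 230W
        ~10 drives @ ~25W spin-up peak   = 250W
        2 x mid-range GPUs @ ~150W       = 300W
        board, RAM, HBAs, NIC, fans      = ~75W

      That's roughly 850W worst case, so with 20-30% headroom a quality 1000-1200W unit should cover the full build.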
  18. http://www.ebay.com/itm/SAMSUNG-Data-Center-Series-SV843-2-5-960GB-SATA-III-V1-MLC-VNAND-Enterprise-Sol-/302056009503?rmvSB=true Not as good as the Intel ones they offer from time to time, but still a good deal. Credit to "keyborded" on the STH forums for finding this. Just reposting here...
  19. I have three of the Xi4 (dual fan) running on a single-socket SM board and one of the dual-socket ASRock boards. They clear the RAM and are very quiet; installation was a breeze, and they come with everything you need to install on square or narrow ILM. I have another Noctua setup on an AMD rig that has been running solid for 5 years with zero issues.
  20. Thanks again for helping! The rebuild has begun, should be done by midnight!
  21. Forgot about that part! "Also, can you access the problem folder on the old disk?" Yes, the files are there now on the Photo share on the new disk and can also be seen on the old disk. I would assume that all of this is due to connectivity issues causing errors, so I will need to sort that out soon.
  22. Here is a copy of the repair session:

         TOWER login: root
         Linux 4.1.7-unRAID.

         root@TOWER:~# xfs_repair -v /dev/md3
         Phase 1 - find and verify superblock...
                 - block cache size set to 3019880 entries
         Phase 2 - using internal log
                 - zero log...
         zero_log: head block 162112 tail block 162108
         ERROR: The filesystem has valuable metadata changes in a log which
         needs to be replayed. Mount the filesystem to replay the log, and
         unmount it before re-running xfs_repair. If you are unable to mount
         the filesystem, then use the -L option to destroy the log and
         attempt a repair. Note that destroying the log may cause corruption --
         please attempt a mount of the filesystem before doing this.

      I mounted the file system and then put it back into maintenance mode:

         root@TOWER:~# xfs_repair -v /dev/md3
         Phase 1 - find and verify superblock...
                 - block cache size set to 3019880 entries
         Phase 2 - using internal log
                 - zero log...
         zero_log: head block 162116 tail block 162116
                 - scan filesystem freespace and inode maps...
                 - found root inode chunk
         Phase 3 - for each AG...
                 - scan and clear agi unlinked lists...
                 - process known inodes and perform inode discovery...
                 - agno = 0
                 - agno = 1
                 - agno = 2
                 - agno = 3
                 - process newly discovered inodes...
         Phase 4 - check for duplicate blocks...
                 - setting up duplicate extent list...
                 - check for inodes claiming duplicate blocks...
                 - agno = 0
                 - agno = 2
                 - agno = 1
                 - agno = 3
         Phase 5 - rebuild AG headers and trees...
                 - agno = 0
                 - agno = 1
                 - agno = 2
                 - agno = 3
                 - reset superblock...
         Phase 6 - check inode connectivity...
                 - resetting contents of realtime bitmap and summary inodes
                 - traversing filesystem ...
                 - agno = 0
                 - agno = 1
                 - agno = 2
                 - agno = 3
                 - traversal finished ...
                 - moving disconnected inodes to lost+found ...
         Phase 7 - verify and correct link counts...

                 XFS_REPAIR Summary    Sat Feb 27 11:32:24 2016

         Phase           Start           End             Duration
         Phase 1:        02/27 11:31:57  02/27 11:31:58  1 second
         Phase 2:        02/27 11:31:58  02/27 11:32:16  18 seconds
         Phase 3:        02/27 11:32:16  02/27 11:32:23  7 seconds
         Phase 4:        02/27 11:32:23  02/27 11:32:23
         Phase 5:        02/27 11:32:23  02/27 11:32:23
         Phase 6:        02/27 11:32:23  02/27 11:32:23
         Phase 7:        02/27 11:32:23  02/27 11:32:23

         Total run time: 26 seconds
         done
         root@TOWER:~#

      The drive still shows red at this point, and I'm not sure where to go from here. I ran the repair on /dev/md3, which is disk #3 in unRAID; would the file system see it as a different number since I have a parity drive? I am not sure how it orders them.
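     (Answering my own question for anyone who finds this later: as far as I understand it, /dev/mdX maps directly to array Disk X in unRAID, and parity is not an md device at all, so md3 was the right target for Disk 3. A quick way to double-check the mapping from the console, assuming the mdcmd tool that ships with unRAID:)

        # print which physical device is assigned to each md slot
        mdcmd status | grep rdevName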
  23. I put the old drive in my test server, alone, with no other drives. I assigned it as a new drive for Disk 1 and started in maintenance mode first to see what it would say. The drive shows green with normal operation. I mounted the disk, still have green/normal, and can see the drive as it was before the errors occurred. I now believe that whatever connectivity issue I have (still haven't 100% sorted it out) caused the old disk to red-ball, and the new one as well. I guess I will check the new disk for errors and see if the data shows up after repairs are made. Thank you again for your help, hopefully this will fix the issue!