jtech007

Members
  • Posts: 152

Posts posted by jtech007

  1. It shouldn't have been taken from me in the first place. I never gave it permission to do this.

     

unRAID left a bad first impression on me, that's for sure. Not only because of this, but also because of the other things I posted.

     

    Docker is not enabled with a fresh install. You have to turn it on. Quit trolling.

So now it's OK to automatically grow the image? Didn't you complain earlier about the image automatically getting created?

    Yes and yes.

     

It shouldn't create a HUGE file automatically. But if it auto-grows, it'll start out as only a few bytes, possibly even 0 bytes. That's fine. What's NOT fine is creating a 21.5GB file without permission. Imagine what a new user like myself thinks when seeing 22GB of usage on a newly formatted drive. First "WTF? I formatted it, didn't I?", then after investigating, finding a humongous file: "WTF is that for??", and then finding out it's for Docker: "WTF, if it uses that much, why is it enabled without my permission in the first place??"

     

    You see, confusion. Nothing but confusion.

     

I know this is a Linux product, but it's also an expensive product. I would expect a little thought to go into the initial setup.

     

Stop Docker, delete the docker.img, and the 22GB will be returned to you. Simple as that.

    It shouldn't have been taken from me in the first place. I never gave it permission to do this.

     

unRAID left a bad first impression on me, that's for sure. Not only because of this, but also because of the other things I posted.

     

Obviously you skipped over this page on the website: https://lime-technology.com/try-it/

     

    They give you 30 days to sort out your issues with the product before you buy it. If you don't like it, you don't/didn't have to buy it.

Okay, so the text should be changed to say that VMs must be DISABLED to change this. And why not make it a link that actually *does* what's required?

     

Heck, why not simply allow changing network settings while VMs are enabled? Applying network settings should just do "disable VMs -> apply network settings -> enable VMs -> done". Why bother the user with petty things like this?

     

Because the VMs and containers rely on those settings to operate properly. If you change the network settings without stopping those services first, they might not work correctly afterward.
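
For what it's worth, that disable-apply-enable sequence can also be scripted from the console. A minimal sketch, assuming the stock rc scripts unRAID ships under /etc/rc.d/ (verify the script names on your own box before relying on this):

# stop the services that depend on the network config
/etc/rc.d/rc.libvirt stop
/etc/rc.d/rc.docker stop

# apply the new network settings (edit them in the GUI or in /boot/config/network.cfg first)
/etc/rc.d/rc.inet1 restart

# bring everything back up
/etc/rc.d/rc.docker start
/etc/rc.d/rc.libvirt start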

  4. Isn't the whole point of Docker that it's incredibly lightweight? Why must docker images live inside a huge honking container file?

     

    I understand from a security standpoint, sort of, but why so huge? Why can't it auto-grow?

     

Stop Docker, delete the docker.img, and the 22GB will be returned to you. Simple as that.
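
For reference, a minimal sketch of that reclaim from the console, assuming the default image location under /mnt/user/system/docker/ (check Settings > Docker for the actual path on your install):

# disable Docker first (Settings > Docker > Enable Docker: No), or stop the service
/etc/rc.d/rc.docker stop

# remove the image; the space comes back immediately and the file is only
# recreated if Docker is enabled again
rm /mnt/user/system/docker/docker.img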

I put the files from the latest version on the flash drive, ran the makebootable script, deleted all files except for bzimage and bzroot, copied all the other files back over from the old flash drive backup, andddddd error.

     

    Is there another way that I could re-do the drive, and still get all of my shares, settings, and my docker container back?

     

    I had a similar issue where no matter what I tried, I would get a kernel panic on boot. I tried a clean install on it and it still would not boot. Tried a *new* USB thumb drive with the backup of my flash drive and it booted up without an issue.

     

If you have another thumb drive laying around, try putting the backup of your flash drive on it and see if it will boot. unRAID will recognize that your new flash drive does not match your key file and prompt you from there to get a new key if you can.
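
Roughly, the restore-to-a-new-stick route looks like the sketch below when done from a Linux box. The device name, mount point, and backup path are placeholders, and the make_bootable_linux script name may differ by release, so treat this as an outline rather than exact steps:

# format the new stick FAT32 with the volume label UNRAID (sdX = the new stick, double-check!)
mkfs.vfat -F 32 -n UNRAID /dev/sdX1

# mount it and copy the whole flash backup over, including config/ (key file, shares, docker templates)
mkdir -p /mnt/usb
mount /dev/sdX1 /mnt/usb
cp -r /path/to/flash-backup/* /mnt/usb/

# make it bootable, then move the stick to the server and boot
cd /mnt/usb && bash make_bootable_linux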

  6. Check to make sure your CrashPlan "Inbound backup from other computers" folder is set to /backup. You can set this location in CrashPlan under Settings, General tab. I believe the default folder is /data, which will be in the docker image.

     

    Sorry for the slow reply, I was moving my server to a new case. This was the problem. In the settings it was pointed to the wrong place and was filling up the docker image. As soon as I changed it and deleted the backup, unRAID sent a notification that the image utilization was normal.
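
For anyone else who hits this: the inbound-backup folder has to point at a path that is volume-mapped out of the container, otherwise the incoming data lands inside docker.img. A sketch of the idea, with a placeholder image name and host path rather than the exact template values:

# map a host share into the container at /backup, then point
# "Inbound backup from other computers" at /backup inside CrashPlan
docker run -d --name crashplan \
  -v /mnt/user/backups:/backup \
  some/crashplan-image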

     

    Thank you for your help!!!

Since I took the 6.2 update that required deletion of my docker.img file, the Fix Common Problems plugin has been nagging me that something is filling up my .img file.

     

I have read several threads and changed a few things to no avail. I have taken a snapshot of my Docker page and can upload reports if needed. From what I have read I would assume Sabnzbd is the culprit, but no matter what I change it still fills up.

     

    Ideas?

[Attached screenshot: Capture.JPG (Docker page)]
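
For reference, a couple of console checks that can show which container is doing the growing (a sketch, assuming docker.img is loop-mounted at the usual /var/lib/docker):

# per-container writable-layer sizes; a runaway container shows a huge SIZE value
docker ps -s

# rough breakdown of space use inside the image
du -h -d1 /var/lib/docker 2>/dev/null | sort -h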

  8.  

Regarding your pfSense:

You can run pfSense on unRAID, but you probably know that already?

     

Regarding placement/case: I use a Fractal Define XL 2, a tower case that fits SSI-EEB boards. It has 8 drive bays and 4 additional optical drive bays, so it is easy to cram a bunch of drives in it. It is super silent (0 dB, depending on what it is doing at the moment) and sits right in the living room. You seem to know a thing or two about wiring the inputs and outputs out over Cat6; do you want to share your options here or via PM? I might be interested and do not have experience with that.

     

     

Some more recommendations:

     

A modern GPU with 0 rpm fans at idle/sleep, such as a GTX 1060, if someone can confirm that series works well with passthrough.

Some cheap old screen with VGA to output unRAID's text console. I rarely use it, but it is nice to have just in case. You do have IPMI on your board, as do I, so that's not super important.

A set of input/output devices per VM, and additionally some cheapo keyboard and maybe a mouse for unRAID itself.

A WLAN router, so you can easily access your unRAID GUI from a phone. If I am done streaming stuff in bed on the phone, I do not have to get up to power down my PC and such.

Set static IP addresses: boot time will be cut immensely depending on your network. I save about 5-20 seconds on boot just with a static IP on a Fritz box 4790 router (maybe set a static IP for IPMI as well and configure both IPs in your router). If you run pfSense you probably have all that figured out anyway.

     

I keep pfSense on a standalone box so the rest of the family can use the WiFi while I am playing with the unRAID server, or in the event of a crash that I can't fix until I get home. I thought about including it, as I would love to have one less box, but the recycled security appliance is only 1U in the rack and has more than enough power for my home needs.

     

I think I will end up with Cat6 extenders for the HDMI to my double/triple monitor setup. I would love to use one DisplayPort cable from the garage to an MST hub that would allow for up to three monitors, but I cannot find a cable that long that I would trust. Most reviews say the longer cables cannot handle the data correctly, and the monitors either do not respond or don't display the correct resolution.

     

    I have enough room in my rack currently to have a 19" monitor sitting on a shelf with an old keyboard to see the server boot. I might add a KVM setup down the road, but for now, I don't need it and I have the room for a monitor to sit there. I have IPMI working as well.

     

Now off to find some deals on GPUs and also pick up a USB card or two. Thanks again for the suggestions.

Hey there, writing from a VM right now, running on an ASRock EP2C602 (4 LAN + 16 DIMM) model. I use two Xeon 2680s, which differ in clock speed but nothing else of importance compared with the 2670.

     

I use a GTX 770 and a USB 3.1 controller card per VM, with a total of 2 VMs. I run them at the same time with good gaming performance; I would describe the overall feel and appearance as 99% native. Each CPU provides 40 PCIe lanes for a total of 80, so you have some lanes to play with; in my case the user manual was not 100% accurate, so do not count on it and make your own observations. My two VMs are in daily use by multiple people at once and errors have yet to occur. Pushing data over the network sometimes reaches >100 MB/s to the array of 2x 3TB WD Reds + parity, but currently I only get around 60 MB/s (because the drive is >90% full). Caching is off for everything except the stuff that stays on the cache, so the write speeds are actual long-term values. The virtual disks of the VMs are stored on the unRAID cache, which in my case consists of 2 SSDs. Write and read speeds are very good, and boot time of a VM is faster than the boot time of a (whole) native system.

     

I plan on running two as well, one Windows 7 or 10 (can't decide which) and a MacOS VM. Do you use virtual NICs on your VMs? I have read they are faster as they do not have to pass through the card, cable, and switch, which makes sense. I assume there are no data collisions when you use virtual NICs?
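
For anyone reading along, you can check which NIC type a given VM currently uses from the console; a virtio model is the paravirtualised "virtual NIC" case. A quick sketch, with a placeholder VM name:

# dump the VM definition and look at the <interface> block;
# a model type='virtio' entry means the VM is using a paravirtualised virtual NIC
virsh dumpxml "Windows 10" | grep -A5 "<interface"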

     

    I am happy with my system and currently some IT people I work with want to follow my build.

     

    Considerations:

You probably want USB controllers passed through. I tested USB device-based passthrough on multiple motherboards with bad results, but whole PCIe controllers work fine. If you do use device-based passthrough, basic Microsoft stuff worked fine (for me) while Logitech crashed the machine.

     

So your USB devices are on a PCIe card? Do you have separate cards for each VM, or can you use one with a lot of ports and split them up? I also ran into the Logitech issue the other day while setting up my dad's VM.
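
On the controller question, the usual way to dedicate a whole USB controller to one VM is to bind it to vfio-pci at boot and then pass the controller through. A sketch, where 1912:0014 is just an example vendor:device ID (substitute the one lspci reports for your card):

# find the controller's vendor:device ID
lspci -nn | grep -i usb

# then add it to the append line in /boot/syslinux/syslinux.cfg, e.g.
#   append vfio-pci.ids=1912:0014 initrd=/bzroot
# and reboot; the controller can then be passed through to a single VM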

     

Sleep states might have to be turned off, or on, in the BIOS, depending on the hardware you use and whether you are going to stub controllers.

     

    I don't use sleep, but good to know.

     

A monitor with a headphone jack to output the GPU's sound would be good; there are external adapters from HDMI to HDMI + audio jack as well.

     

I have looked at those displays and will need to get something like that to get sound to my desk.

     

If there isn't a display connected to a VM with a passed-through GPU, it might not be able to start correctly. Some screens do not correctly listen on the ports and might cause issues; I had one such monitor.

I'd recommend UEFI GPUs only.

     

I will keep that in mind.

     

Fan speed is way off what I would like for a silent system, so I do not use the fan headers on the board and use an external fan controller instead (except for the CPU fans). Whatever I set in the BIOS, I always get 100% rpm on my fans.

Heat output of the Xeons is much less than I expected; this might be because I never really used even 1/3 of the horsepower, even while benchmarking. Idle temperature is pretty cool as well, even on semi-passive CPU heatsinks.

     

I have swapped out all the fans on my SM case and have dual-fan Noctuas on the 2670's. It's pretty quiet now compared to how it came. My pfSense box makes the most noise, as it is a repurposed internet security appliance; there are not a whole lot of options for quiet 1U cooling.

     

You need to check whether your CPU model is of the right stepping so that it supports VT-d. The easiest way to check for that support is to press the info button in unRAID itself and look for IOMMU = enabled. If it is not enabled yet, you might have to find that entry in the BIOS; if it is not possible to set it there, you might have an incompatible CPU. If it does not support VT-d, you cannot pass PCIe devices through correctly.

     

    Both of my 2670's have a SR0KX stepping and IOMMU is active and working.
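
For reference, the same check can be done from the console on a stock unRAID kernel:

# kernel messages confirming VT-d / IOMMU came up
dmesg | grep -e DMAR -e IOMMU

# if IOMMU groups exist, devices can be grouped for passthrough
ls /sys/kernel/iommu_groups/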

     

What is your main reason for having the server located in the garage? The cost of extenders might be so big that you could instead upgrade your system to Xeon 2687 v3s and passively cool them, or something like that. Can you explain in detail how you want to set up your "clients" and how you want to route the connections there? If you want to game on the machines, I would take latency into consideration: network latency to an internet server is a different thing from input latency on the local network.

     

I bought a Tam's 4U SM chassis years ago and picked up a 42U rack for free. I have always liked the rack setup, even though I know many have moved away from that now due to smaller MOBO's and larger-capacity drives. Plus my wife is a neat freak and I don't have a closet with good airflow near my desk where I could hide the rack (or build one) so she does not have to look at it. I know as long as I keep it out of sight, she doesn't say anything about the bill for server parts.  :P

     

Thank you for the reply! It seems like your setup is very similar to what I am trying to do, so this info is a big help; it keeps me from running in circles trying different parts that will not work in the first place.

     

     

With dual CPUs all slots can be used, and since all are CPU slots there shouldn't be any performance issues. Two slots share an x16 link, so it's x8/x8 when both are used; if possible, use those for the controllers. The quad-port NIC can be used in the x4 slot.

     

    That is good news. Now I just need to figure out how to use all the slots when the newer video cards need two. Risers might be in order if I can find room in the case.

I believe the HGST unit needs 220V to work as well, as they were imported from Europe. Funny thing is I don't need either, as I have a 24-bay SM chassis about half full and a 16-bay SM chassis that is not being used. Just curious to see if others have used this setup and whether it's practical for unRAID, as others might have a need for it.

  12. Running Dual 2670's on an ASRock EP2C602 Motherboard (The one with only 8 memory slots, if that helps)

     

I am trying to sort out what I can effectively use on this board and what the performance will be when I add VMs and pass GPUs through to them. Currently, every VM I run is accessed via VNC, but my goal is to pass through a Windows VM and possibly a MacOS VM as well, which will require two GPUs if I want to run them at the same time. I also have two M1015's and a 4-port Intel NIC, but I can lose the NIC if needed, and drop one of the M1015's and add an expander (which I already have but am not using) or use some of the SATA ports on the board itself. I pasted the slot mapping to get advice on whether I can run all or some of this on the same box:

     

     

PCI Express 3.0 x4 Slot (PCIE1, White) from CPU_BSP1
PCI Slot (PCI2, White)
Intel C602 Chipset
PCI Express 3.0 x16 Slot (PCIE3, Blue) from CPU_BSP1
PCI Express 3.0 x16 Slot (PCIE4, Blue) from CPU_BSP1
PCI Express 3.0 x16 Slot (PCIE5, Blue) from CPU_BSP1
PCI Express 3.0 x16 Slot (PCIE6, Blue) from CPU_AP1
PCI Express 3.0 x16 Slot (PCIE7, Blue) from CPU_AP1

     

My concern is that adding too many cards on the lanes will nuke performance, either on the data side or the GPU side. The end goal is to move away from multiple towers, have all my Windows and Mac instances on this server in my garage, and use Cat6 extenders to send video to my desk on another floor of the home. Any advice on what I can do and which way to head would be a big help before I buy new GPUs for passthrough.

  13. Running the following hardware on a 650W Seasonic:

     

    Supermicro 24 Bay Case with Backplane

ASRock EP2C602

    Dual Xeon E5-2670's

    64GB ECC Memory

    4 Port Intel NIC

3x 4TB HGST Drives

2x 1TB WD Drives

     

I am looking to add 2-4 more drives for more space plus dual parity. I am also going to set up 3-4 VMs with several dedicated video cards.

     

What wattage PSU would work best moving forward? I don't mind buying a large one now and growing into it.

Anybody have any thoughts on the two heatsinks below? I'll be using the ASRock EP2C602-4L/D16 SSI EEB server motherboard.

     

    http://m.newegg.com/Product/index?itemNumber=N82E16835608042

     

    http://www.amazon.com/Noctua-Cooler-LGA2011-Platforms-NH-U12DXi4/dp/B00DWFQ42I#featureBulletsAndDetailBullets_secondary_view_div_1456796362754

     

    I'm not sure that the dual fan heatsink isn't overkill. Any reason why I should need it?

I'm using the Define XL R2 case. I'll be running at most 2 VMs.

     

I have three of the Xi4 (dual fan) running, on a single-socket SM board and one of the dual-socket ASRock boards. They clear the RAM and are very quiet, installation was a breeze, and they come with everything you need to install on square or narrow ILM. I have another Noctua setup on an AMD rig and it's been running for 5 years solid with zero issues.

Repairing the filesystem won't enable a disabled (emulated) disk; you still have to rebuild it.

     

Can you access all the files on disk3 after running xfs_repair?

     

If yes, and you think you have sorted out your connectivity issues, you can rebuild onto the same disk. To do that:

     

    stop array

    unassign disk3 (select "no device")

    start array

    stop array

    re-assign disk3

    start array to begin rebuild

     

    Forgot about that part!

     

I put the old drive in my test server, alone, with no other drives. Assigned it as a new drive for Disk 1 and started in maintenance mode first to see what it would say. The drive shows green with normal operation. Mounted the disk and still have green/normal, and I can see the drive as it was before the errors occurred. I now believe that whatever connectivity issue I have (still haven't 100% sorted it out) caused the old disk to red-ball, and the new one as well. I guess I will check the new disk for errors and see if the data shows up after repairs are made. Thank you again for your help; hopefully this will fix the issue!

     

    Also, can you access the problem folder on the old disk?

     

Yes, the files are there now on the Photo share on the new disk and can also be seen on the old disk. I would assume that all of this is due to connectivity issues causing errors, so I will need to sort that out soon.

  16. Here is a copy of the repair session:

     

     

TOWER login: root
Linux 4.1.7-unRAID.
root@TOWER:~# xfs_repair -v /dev/md3
Phase 1 - find and verify superblock...
        - block cache size set to 3019880 entries
Phase 2 - using internal log
        - zero log...
zero_log: head block 162112 tail block 162108
ERROR: The filesystem has valuable metadata changes in a log which needs to
be replayed.  Mount the filesystem to replay the log, and unmount it before
re-running xfs_repair.  If you are unable to mount the filesystem, then use
the -L option to destroy the log and attempt a repair.
Note that destroying the log may cause corruption -- please attempt a mount
of the filesystem before doing this.

     

I mounted the filesystem and then put the array back into maintenance mode.

     

root@TOWER:~# xfs_repair -v /dev/md3
Phase 1 - find and verify superblock...
        - block cache size set to 3019880 entries
Phase 2 - using internal log
        - zero log...
zero_log: head block 162116 tail block 162116
        - scan filesystem freespace and inode maps...
        - found root inode chunk
Phase 3 - for each AG...
        - scan and clear agi unlinked lists...
        - process known inodes and perform inode discovery...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
        - process newly discovered inodes...
Phase 4 - check for duplicate blocks...
        - setting up duplicate extent list...
        - check for inodes claiming duplicate blocks...
        - agno = 0
        - agno = 2
        - agno = 1
        - agno = 3
Phase 5 - rebuild AG headers and trees...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
        - reset superblock...
Phase 6 - check inode connectivity...
        - resetting contents of realtime bitmap and summary inodes
        - traversing filesystem ...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
        - traversal finished ...
        - moving disconnected inodes to lost+found ...
Phase 7 - verify and correct link counts...

        XFS_REPAIR Summary    Sat Feb 27 11:32:24 2016

Phase          Start          End            Duration
Phase 1:        02/27 11:31:57  02/27 11:31:58  1 second
Phase 2:        02/27 11:31:58  02/27 11:32:16  18 seconds
Phase 3:        02/27 11:32:16  02/27 11:32:23  7 seconds
Phase 4:        02/27 11:32:23  02/27 11:32:23
Phase 5:        02/27 11:32:23  02/27 11:32:23
Phase 6:        02/27 11:32:23  02/27 11:32:23
Phase 7:        02/27 11:32:23  02/27 11:32:23

Total run time: 26 seconds
done
root@TOWER:~#

     

The drive still shows red at this point, and I am not sure where to go from here. I ran the repair on /dev/md3, which is disk #3 in unRAID; would the filesystem see it as a different number since I have a parity drive? I am not sure how it orders them.