
Posts posted by blade316

  1. It seems that because this issue is so inconsistent across Unraid Mac users, and because there are so many variables that could be involved, the Unraid team won't get around to addressing it. I'm not even sure it's Unraid-specific; it could be an underlying Linux issue, and perhaps that's why TrueNAS, being BSD-based, doesn't suffer from it.

     

    With Unraid 7 on its way, and the array now being optional, I'm going to see if just running ZFS pools yields a different result on Mac devices ... perhaps it's simply the way the array and FUSE layer work that's causing the problem.

     

    We'll see what happens once Unraid 7 lands.

  2. @johnwhicker

     

    So, just an update on my issue: I recently started migrating from XFS to ZFS. This didn't change anything with the Mac SMB issue; however, I am in a much better place using ZFS instead of XFS for my pools. I have only moved my general cache, docker pool, and VM pool, which are on SSD media, over to ZFS so far. Next is to work out how I am going to migrate my array to ZFS.

     

    Anyway, during this I also decided to test out Disk Shares. I had never used them before (except for the cache pools), and wouldn't you know it, accessing a disk share from a Mac over SMB is instant, same as Windows.

     

    What this tells us is that there is something in FUSE that is causing Mac SMB to bug out. 

     

    At the moment, because my array is still set up the old traditional XFS way (6.11.x and before, without pool functionality), I only get each disk surfaced as a share, not the pool of disks itself. So, once I work out how I want to migrate the data from my array, I will be creating a pool (maybe multiple) for my array disks, which will then be surfaced as a disk share. Accessing a disk share takes FUSE out of the equation, and should hopefully fix this problem moving forward. Now that I have discovered this, I'm wondering if it is also why Unraid NFS shares didn't work very well on macOS, as they would also have been going through FUSE.

     

    I encourage you to test this yourself, as it looks like that's the go.
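
    A minimal way to compare the two from a Mac's Terminal - hypothetical names here: server "tower", user share "media", disk share "disk1":

    mkdir -p ~/mnt/user-share ~/mnt/disk-share
    mount_smbfs //yourname@tower/media ~/mnt/user-share   # user share: served from /mnt/user (FUSE)
    mount_smbfs //yourname@tower/disk1 ~/mnt/disk-share   # disk share: direct disk export, no FUSE
    time ls -R ~/mnt/user-share > /dev/null               # compare directory listing times
    time ls -R ~/mnt/disk-share > /dev/null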

     

    Cheers!

     

     

     

  3. Hey peeps!

    Nowwwwww I already know there have been a number of threads about this, however the problem seems to have no solid answer or solution. So please don't lash out at a new post about it lol 🙂

     

    I'm going to break it down as simply as I can.

     

    Unraid 6.11.0

    Intel Core i7

    32GB RAM

    Array size: 58TB

    # of Drives: 8

    Parity drives: 1

    NIC: Intel Quad Port Gigabit NIC

    Average Overall Server Load: 10-15%

    Network

    Switch: Standard Gigabit Managed switch, no fancy config

    Wireless: Nothing fancy, standard N & AC wifi

     

    Dockers:

    Nothing fancy, not taxing the hardware at all

     

    Devices

    Mac Mini 2012 (10.15 Catalina)

    Mac Mini M1 2021 (13.2.1 Ventura)

    Macbook Pro M2 2022 (13.2.1 Ventura)

    Multiple Windows 10 machines

    All machines connected to network via Ethernet & Wireless

     

    Affected share type: SMB

    Is NFS also affected: Sort of, but nowhere near as bad as SMB


    Symptoms:

    Both the Mac Mini 2012 and the Mac Mini M1:

     - Loading any SMB share takes forever, regardless of how many folders and files are in the share

     - On average around 5-6 minutes

     - Copying any files, whether big single files or lots of small ones, runs at around 1MB/s when it should be around 80MB/s

     

    Troubleshooting:

     - Tested iperf from both Macs to Unraid - it maxes out the gigabit connection at around 975Mbps, so it's not a connection issue (see the iperf3 sketch after this list)

     - Changed out ethernet cables

     - Disabled wireless

     - Changed switch ports

     - Reset the NICs on the Mac Minis

     - Created brand new test shares with test data
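
    For reference, here's roughly what the iperf test looked like, assuming the server resolves as "tower" (iperf3 installed on Unraid via a plugin such as NerdTools, and on the Mac via Homebrew):

    iperf3 -s                  # on the Unraid server
    iperf3 -c tower -t 10      # on the Mac: Mac -> server throughput
    iperf3 -c tower -t 10 -R   # reverse direction: server -> Mac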

     

    Additional info:

    So, to make things more confusing, the MacBook M2 does not have this issue at all. I can access all of the same Unraid SMB shares via Ethernet or wireless and they load instantly. I can copy files to and from them, and it maxes out the connection. The Windows 10 machines also have no issues with the same SMB shares and max out the connection.

     

    If I use SSH on the Mac Mini devices, it's a lot faster, however still not like the M2 or the Windows machines, only achieving around 40MB/s transfer speed.

     

    I also have a TrueNAS box, so to throw another spanner in the works: the Mac Mini devices do not have any issues accessing SMB shares on the TrueNAS box. I have set up the TrueNAS SMB shares to match the Unraid SMB shares to make the troubleshooting comparison more accurate.

     

    This issue has been going on with Unraid for as long as I can remember, and until I got my MacBook M2 recently I just figured it was something I would have to deal with when using a Mac with Unraid. However, the MacBook M2 working properly blows that theory out of the water. I have tried all sorts of different SMB config options etc., and nothing works.
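
    As an example of the kind of client-side tweak I mean - this is a commonly suggested one for macOS, and it made no difference here - /etc/nsmb.conf on the Mac:

    [default]
    signing_required=no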

     

    The key here is that the MacBook M2 is not affected and the Mac Minis work fine with SMB on TrueNAS, so something in Unraid is not quite right.

     

    Keen to get some insight from people, and maybe even the devs, as to why SMB seems to be so troublesome on Unraid for macOS.

     

    TIA

     

    Daniel

     

     

     

     

     

     

     

  4. I have had a new error pop up this morning ... the trigger was when I logged back into my Mac Mini and it told me it couldn't reconnect to the NFS shares on my Unraid server. A quick check in Unraid showed all the shares are now missing, obviously due to the error. The only device in the house that uses NFS is my Mac, which I have been using with NFS on Unraid for a long time.

     

    I've not seen this error before; the only thing I can think of that may have led to it is leaving some Finder windows open on those NFS shares overnight.

     

    Here is the syslog portion:

     

    Mar 25 05:21:59 unRAID CA Backup/Restore: #######################
    Mar 25 05:21:59 unRAID CA Backup/Restore: appData Backup complete
    Mar 25 05:21:59 unRAID CA Backup/Restore: #######################
    Mar 25 05:21:59 unRAID CA Backup/Restore: Backup / Restore Completed
    Mar 25 07:00:01 unRAID root: mover: started
    Mar 25 07:00:02 unRAID root: mover: finished
    Mar 25 08:12:40 unRAID autofan: Highest disk temp is 40C, adjusting fan speed from: 112 (43% @ 771rpm) to: 135 (52% @ 713rpm)
    Mar 25 10:27:43 unRAID  shfs: shfs: ../lib/fuse.c:1450: unlink_node: Assertion `node->nlookup > 1' failed.
    Mar 25 10:27:44 unRAID kernel: ------------[ cut here ]------------
    Mar 25 10:27:44 unRAID kernel: nfsd: non-standard errno: -107
    Mar 25 10:27:44 unRAID kernel: WARNING: CPU: 6 PID: 7793 at fs/nfsd/nfsproc.c:889 nfserrno+0x45/0x51 [nfsd]
    Mar 25 10:27:44 unRAID kernel: Modules linked in: veth nvidia_uvm(PO) xt_nat macvlan xt_CHECKSUM ipt_REJECT nf_reject_ipv4 xt_tcpudp ip6table_mangle ip6table_nat iptable_mangle vhost_net tun vhost vhost_iotlb tap xt_conntrack xt_MASQUERADE nf_conntrack_netlink nfnetlink xfrm_user xfrm_algo xt_addrtype iptable_nat nf_nat nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 br_netfilter xfs nfsd auth_rpcgss oid_registry lockd grace sunrpc md_mod it87 hwmon_vid efivarfs ip6table_filter ip6_tables iptable_filter ip_tables x_tables bridge stp llc ipv6 nvidia_drm(PO) x86_pkg_temp_thermal intel_powerclamp nvidia_modeset(PO) coretemp i915 kvm_intel iosf_mbi drm_buddy ttm nvidia(PO) kvm drm_display_helper drm_kms_helper crct10dif_pclmul crc32_pclmul crc32c_intel ghash_clmulni_intel aesni_intel crypto_simd cryptd rapl intel_gtt mxm_wmi drm intel_cstate agpgart mpt3sas igb i2c_i801 ahci i2c_algo_bit i2c_smbus syscopyarea intel_uncore sysfillrect raid_class libahci sysimgblt atl1c i2c_core scsi_transport_sas fb_sys_fops video
    Mar 25 10:27:44 unRAID kernel: wmi thermal fan backlight button unix
    Mar 25 10:27:44 unRAID kernel: CPU: 6 PID: 7793 Comm: nfsd Tainted: P           O      5.19.9-Unraid #1
    Mar 25 10:27:44 unRAID kernel: Hardware name: Gigabyte Technology Co., Ltd. To be filled by O.E.M./Z77MX-D3H, BIOS F17b 01/06/2014
    Mar 25 10:27:44 unRAID kernel: RIP: 0010:nfserrno+0x45/0x51 [nfsd]
    Mar 25 10:27:44 unRAID kernel: Code: c3 cc cc cc cc 48 ff c0 48 83 f8 26 75 e0 80 3d cc 47 05 00 00 75 15 48 c7 c7 0f 34 ef a2 c6 05 bc 47 05 00 01 e8 06 99 90 de <0f> 0b b8 00 00 00 05 c3 cc cc cc cc 48 83 ec 18 31 c9 ba ff 07 00
    Mar 25 10:27:44 unRAID kernel: RSP: 0018:ffffc900009a3e30 EFLAGS: 00010286
    Mar 25 10:27:44 unRAID kernel: RAX: 0000000000000000 RBX: ffff8881c1490000 RCX: 0000000000000027
    Mar 25 10:27:44 unRAID kernel: RDX: 0000000000000001 RSI: ffffffff81ec827f RDI: 00000000ffffffff
    Mar 25 10:27:44 unRAID kernel: RBP: ffff888103d64000 R08: 0000000000000000 R09: ffffffff82044990
    Mar 25 10:27:44 unRAID kernel: R10: 0000000000000000 R11: ffffffff8264afbe R12: ffff8881c1490008
    Mar 25 10:27:44 unRAID kernel: R13: ffff8881c1488000 R14: ffffffffa2f12600 R15: ffffffffa2eeda98
    Mar 25 10:27:44 unRAID kernel: FS:  0000000000000000(0000) GS:ffff888810380000(0000) knlGS:0000000000000000
    Mar 25 10:27:44 unRAID kernel: CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
    Mar 25 10:27:44 unRAID kernel: CR2: 0000153ca414d638 CR3: 000000000500a002 CR4: 00000000001726e0
    Mar 25 10:27:44 unRAID kernel: Call Trace:
    Mar 25 10:27:44 unRAID kernel: <TASK>
    Mar 25 10:27:44 unRAID kernel: nfsd3_proc_getattr+0xaf/0xd7 [nfsd]
    Mar 25 10:27:44 unRAID kernel: nfsd_dispatch+0x194/0x24d [nfsd]
    Mar 25 10:27:44 unRAID kernel: svc_process+0x3f1/0x5d6 [sunrpc]
    Mar 25 10:27:44 unRAID kernel: ? nfsd_svc+0x2b6/0x2b6 [nfsd]
    Mar 25 10:27:44 unRAID kernel: ? nfsd_shutdown_threads+0x5b/0x5b [nfsd]
    Mar 25 10:27:44 unRAID kernel: nfsd+0xd5/0x155 [nfsd]
    Mar 25 10:27:44 unRAID kernel: kthread+0xe7/0xef
    Mar 25 10:27:44 unRAID kernel: ? kthread_complete_and_exit+0x1b/0x1b
    Mar 25 10:27:44 unRAID kernel: ret_from_fork+0x22/0x30
    Mar 25 10:27:44 unRAID kernel: </TASK>
    Mar 25 10:27:44 unRAID kernel: ---[ end trace 0000000000000000 ]---

     

    Diagnostic files attached:

     

    unraid-diagnostics-20230325-1311.zip

     

    Would be great if anyone could shed some light on this one, and where I can start looking.

     

    Also, a side note: other than a reboot or restarting the array, nothing seems to bring the shares back ... unless there is a command I can run that will export all the array shares again?
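
    For what it's worth, something along these lines might re-export them without a full reboot - untested on my side, and it probably won't help if shfs itself has crashed:

    exportfs -v                 # list what the kernel currently exports
    exportfs -ra                # re-read /etc/exports and re-export everything
    /etc/rc.d/rc.nfsd restart   # or bounce the NFS daemon entirely (Slackware-style init)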

     

     

  5. 5 minutes ago, JorgeB said:

    There's no SMART for that drive, check/replace cables and power cycle and post new diags.

     

    @JorgeB yeah I know. Unfortunately, since the pre-clear failed I can't see the drive in Unassigned Devices; I can only see it mounted at /dev/sdd ... but with the weird 16GB tmpfs, so I don't have any data .... That's why I grabbed what I could out of syslog just in case and put it on Pastebin ... I will have to wait until the other WD Red finishes its pre-clear before I reboot and try to get the SMART data for the failed one.
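
    Once I can reboot, I'll try pulling it with something like this (assuming the drive comes back as /dev/sdd):

    smartctl -H /dev/sdd   # quick health verdict, if the drive answers at all
    smartctl -x /dev/sdd   # full SMART attributes, error log and self-test log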

  6. Been using Unraid since around 2010, and the first time I ever pre-clear a drive, it fails! haha ... I am doing two WD 8TB drives at the same time, one a standard WD Red, the other a WD Red Plus ... and the Red Plus failed on the zeroing step; the pre-read seemed to be fine.

     

    I have created a log with just the pre-clear and disk error details.

    Logs link:

    https://pastebin.com/kG3fvXXW

     

    For some reason the drive is no longer showing in Unassigned Devices, however I can see it under /dev/sdd, but the weird thing is it's showing as a 16GB devtmpfs. I have both the new WD drives and my array drives connected via an LSI 9211-8i - the array drives are fine, and the other new WD Red is currently 50% through the zeroing stage.

     

    Looking at the logs, I am seeing stuff I do not like at all! ... I'm thinking either a bad cable or indeed a bad drive ... I am hesitant to blame the card, as the two new WD drives are connected to P3 and P4 on the same connector of the LSI card .... I actually have some spare breakout cables arriving this week, along with an additional LSI 9300-8i, so I can always swap the card out as well and test again, but I don't think it's the card.

     

    However as this is my first pre-clear, I thought I would throw it to the masses and get your thoughts.

     

    My other side question is regarding returning the drive ... I have never had to return a drive under warranty. I purchased it from Amazon, and I have checked the serial on the WD warranty site and it's in warranty ... would you recommend I return it via Amazon, or go directly through WD?

     

    Also, here are my Unraid diagnostics:

     

    Any advice would be appreciated :) 

    unraid-diagnostics-20210124-1200.zip

  7. Just a quick update: it was suggested elsewhere that I limit the memory usage of my docker containers. Most of mine are already limited to between 256MB and 4GB depending on their requirements, except for the Plex and Emby containers. So I have now limited those two to 4GB of memory as well, to see if that helps.
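
    For anyone wanting to do the same on Unraid: it's the Extra Parameters field on the container's edit page (advanced view), e.g.

    --memory=4g

    and you can sanity-check the limits afterwards from the console with:

    docker stats --no-stream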

  8. Hey guys,

     

    I'll keep this one short and to the point.

    Overnight I have received some kernel, PCI and memory errors.

    I had 24GB of RAM in the server: 2x8GB + 2x4GB.

    Last night I swapped the 2x4GB out for another two 8GB sticks (the exact same model as the 2x8GB already in the box) to bring it to 32GB.

    Nothing else has changed.

     

    Then this morning I woke up to these errors (disregard the data and downloads share and cache entries, I will fix them after - unrelated):

     

    [screenshot: the errors - Screen Shot 2021-01-09 at 12:03:32 pm]

     

    The box is still functioning fine, but obviously I would like to try and get a handle on where these errors are coming from.

     

    Here is my server diags:

    unraid-diagnostics-20210109-1317.zip

     

    I have not run a memtest yet; I just wanted to see if the community has some information on things I can check before taking my server offline for a memtest.
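
    In the meantime, this is the sort of thing I've been poking at from the console - note the EDAC counters only show anything if a memory-controller driver is loaded, and non-ECC desktop RAM often reports nothing:

    dmesg | grep -iE 'mce|machine check|edac|ecc'                # kernel messages about memory/MCE
    grep . /sys/devices/system/edac/mc/mc*/*_count 2>/dev/null   # corrected/uncorrected error counters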

     

    Any help would be great :)

     

     

  9. On 10/21/2020 at 1:32 AM, DontWorryScro said:

    I've got these exact same messages. Was there ever a solution?

    Nope, I haven't seen any resolution to this as of yet ... however, I ended up taking my server RAM from 24GB to 32GB, and it seems those errors have not returned for me ... so maybe I was running out of memory, like my video was leaning towards...

  10. I recently installed a new GPU in my Unraid server, and I have been getting hundreds of these errors in the system log:

     

    NVRM: GPU - RmInitAdapter failed!
    NVRM: GPU - rm_init_adapter failed, device minor number 0
    NVRM: GPU - Failed to copy vbios to system memory

     

    [screenshot: the system log showing the NVRM errors]

     

    The new GPU is a GTX 1660 Ti

    Running Unraid Nvidia 6.8.3 with the Nvidia 440.59 drivers

     

    I was not having any of these errors with my previous GPU, and I had no issues with the new GPU in two other machines where I tested it.

     

    From what I have been able to see during troubleshooting, the GPU keeps going online and offline, and as per the video it seems to happen when the cached memory in Unraid gets all used up due to file transfers. Perhaps there is a tunable of some sort to prevent all the memory from being consumed?
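
    One thing I plan to try - a commonly suggested workaround, not something I've verified fixes this - is enabling persistence mode so the driver keeps the card initialised between uses:

    nvidia-smi -pm 1         # enable persistence mode
    nvidia-smi -q -d POWER   # confirm the setting and check the card's power state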

     

    I did a screen capture when it was happening; it's boring AF, but it does show the device going offline and online as the cached memory goes up and down. Keep an eye on the memory usage and the green available sliver at the top.

     

    There seem to be a number of other posts about this, but I can't really see that it's been resolved.

     

    Any help or suggestions would be appreciated

  11. 5 hours ago, dst1995 said:

    I have the same issue with an older GTX 970.

     

    The card has always worked fine. The errors started this morning...

     

    I only got two of the errors though:

    NVRM: GPU 0000:09:00.0: RmInitAdapter failed! (0x26:0xffff:1227)

    NVRM: GPU 0000:09:00.0: rm_init_adapter failed, device minor number 0

     

    Yep, still the same here - I just logged this, with a video showing what I get on mine -

     

     


  12. On 3/4/2020 at 11:13 PM, Sarge888 said:

    Mine started doing the same thing with my new GTX 1660. I thought it was a corrupt BIOS, but a reflash didn't fix it. Wondering if it has something to do with CA Appdata Backup, as it's scheduled to run at 5am and my card always dies every other day at 5:30am.

    I just installed a GTX 1660 Ti and I am getting the same issue as well .... 

  13. On 2/27/2015 at 9:24 AM, ThatDude said:

    I had the same issue until I started using NFS instead of SMB/AFP to export my shares.

     

    It made a huge difference for me, also file copies from the unRAID server to my Mac increased from 40MB/s to over 100MB/s.

     

    To mount an exported share on OSX, from the Finder choose "Go/Connect to Server" then:

     

     

    
    nfs://tower/mnt/user/sharename
    
    e.g. nfs://tower/mnt/user/movies
     

     

     

    I have been suffering from slow transfers on my Mac devices for years, and I just found this post ... I changed my share mappings on my Macs to use NFS instead of SMB, and boom - now I am saturating my gigabit LAN at 110MB/s from my Unraid box to my Mac devices.
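
    For anyone who prefers Terminal over Finder, a minimal sketch of the same mount - the resvport option is the usual gotcha when hand-mounting a Linux NFS export on macOS; server and share names as per the example above:

    sudo mkdir -p /private/mnt/movies
    sudo mount -t nfs -o resvport,rw tower:/mnt/user/movies /private/mnt/movies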

     

    Thanks so much! :)

     

  14. 21 hours ago, binhex said:

    ahh that must be part of the fancy pants Let's Encrypt certificate they included with Unraid when switching to https (don't use this myself), ok well try my suggestion of whitelisting *.unraid.net as well as your LAN range then

     

    Well after hours of trying to solve this, I haven't had any success.

     

    I have enabled logging, but only to show me failures and blocks etc.

     

    When trying to access my UnRAID 192.168.2.195, the privoxy log shows this:

     

    2020-04-25 15:43:30.225 7f3711fdb700 Actions: +client-header-tagger{css-requests} +client-header-tagger{image-requests} +client-header-tagger{range-requests} +set-image-blocker{pattern} 
    2020-04-25 15:43:33.326 7f3711fdb700 Actions: +client-header-tagger{css-requests} +client-header-tagger{image-requests} +client-header-tagger{range-requests} +set-image-blocker{pattern} 
    2020-04-25 15:43:33.326 7f3711fdb700 Crunch: Connection failure: http://192.168.2.195/
    2020-04-25 15:43:33.516 7f3711fdb700 Actions: +change-x-forwarded-for{block} +client-header-tagger{css-requests} +client-header-tagger{image-requests} +client-header-tagger{range-requests} +hide-from-header{block} +set-image-blocker{pattern} 
    2020-04-25 15:43:33.516 7f3711fdb700 Crunch: CGI Call: config.privoxy.org:443
     

    So from what I have been able to teach myself, this indicates the blocks should be coming from +change-x-forwarded-for{block} and +hide-from-header{block}.

     

    So I added the following to my user.actions file:

     

    { \
    -change-x-forwarded-for{block} \
    -hide-from-header{block} \
    }
    192.168.2.195
    .unraid.net
     

    Still no good.

     

    Privoxy shows:

     

    [screenshot: Privoxy's view of the actions now applied]

     

    This would lead me to believe there should be no blocks happening - but it still doesn't work.

     

    So then I tried using the { fragile } action, which is meant to basically prevent any actions that would cause a site not to load, and I get:

    [screenshot: the page still failing to load with { fragile } enabled]

     

    Still can't load the site.

    So then I went to match-all.actions and basically disabled all the default actions:

    [screenshot: match-all.actions with the default actions disabled]

     

    When I do this, my log shows:

    2020-04-25 15:53:44.245 7f3829ffb700 Actions: 
    2020-04-25 15:53:47.341 7f3829ffb700 Actions: 
    2020-04-25 15:53:47.341 7f3829ffb700 Crunch: Connection failure: http://192.168.2.195/
    2020-04-25 15:53:47.507 7f3829ffb700 Actions: 

     

    ===================

     

    So I am really at a loss now; no idea where to go from here.
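
    The only idea I have left (untested) is bypassing the proxy engine entirely for these destinations with forward rules in Privoxy's main config, where "." as the parent means go direct:

    forward   192.168.2.195   .
    forward   .unraid.net     .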

     

    Any thoughts?

     

  15. 53 minutes ago, binhex said:

    ahh that must be part of the fancy pants Let's Encrypt certificate they included with Unraid when switching to https (don't use this myself), ok well try my suggestion of whitelisting *.unraid.net as well as your LAN range then

     


    no worries - I’ll try that and report back :)

     

  16. 10 minutes ago, binhex said:

    what is that address it's trying to go to? not sure where that is coming from, but in any case you could simply whitelist *.unraid.net as a workaround, as well as whitelisting your LAN range.


    So normally, when I access my Unraid webUI at 192.168.2.195, it just redirects to the https://xxxx.unraid.net address, as https is enabled on Unraid - no issues.

     

    However, when I set my devices to go through privoxyvpn, I can no longer access my Unraid webUI at 192.168.2.195.

     

    does that make sense? 

  17. @binhex - First off, great work on your dockers mate - really impressive :)

     

    So mine is probably a simple issue, but I can't quite work it out.

     

    Local LAN 192.168.2.0/24

    UnRAID IP: 192.168.2.195

    UnRAID https enabled, so accessing 192.168.2.195 redirects to https://xxxxxxxx.unraid.net

    UnRAID 6.8.3

    Docker 19.03.5

    Most of my dockers are using their own IPs (just how I like it)

     

    Scenario:

    I am simply using privoxyvpn to pipe my various device traffic through it and my PIA VPN. 

     

    - Installed privoxyvpn, configured, connected to VPN - no issues

    - Pointed my clients (desktop, phones etc) to use 192.168.2.16:8118 - can browse through privoxyvpn no issues.

    - I also have DelugeVPN installed, I am routing NZBget through that using the container:binhex-delugevpn option - no issues

    - Privoxy option on the DelugeVPN container is also enabled, so I can use proxy option in Sonarr, Radarr, Lidarr, Jackett etc to pipe through - no issues.

     

    Issue:

    I can't access my UnRAID webUI whilst using the privoxyvpn proxy, nor any docker that is using Bridge mode instead of its own IP (I only have one of those).

     

    Privoxyvpn config:

    IP: 192.168.2.16

    LAN_NETWORK: 192.168.2.0/24

    ADDITIONAL_PORTS: blank

    ENABLE_PRIVOXY: yes

     

    I can access any docker that is using its own IP address.

    I can't access 192.168.2.195 or anything related to it like my Diskspeed docker 192.168.2.195:18888

    I get this:

     

    [screenshots: Privoxy error pages]

     

    If I configure my browser to bypass proxies for local addresses and specify 192.168.2.0/24, I can load my Diskspeed container at 192.168.2.195:18888. However, I still cannot get to my UnRAID webUI - I get this:

     

    [screenshot: the error when loading the UnRAID webUI]

     

    I have read through all the posts in this topic so far; some kind of touch on it, but they are more about routing other containers through, say, your DelugeVPN container, which I am not having any issues with.

     

    I'm sure it's something simple, but I have not used Privoxy before, so I'm wondering whether it's got to do with the https option being enabled on UnRAID, and somehow Privoxy is blocking the redirect?
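
    A quick way to check that theory from any machine with curl (proxy address as per my config above) is to compare response headers through the proxy versus direct:

    curl -sv -o /dev/null -x http://192.168.2.16:8118 http://192.168.2.195/ 2>&1 | grep -iE 'HTTP/|Location'
    curl -sv -o /dev/null http://192.168.2.195/ 2>&1 | grep -iE 'HTTP/|Location'

    If the direct request shows a redirect to https://xxxxxxxx.unraid.net but the proxied one never gets that far, the redirect (or the proxy's DNS lookup for unraid.net) is the culprit.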

     

    Any help you can give would be great :)

     

  18. 9 minutes ago, steve1977 said:

    I am a "windows guy" and even adding a docker through CA took me a while to understand how all this path and port mapping works. You may be laughing, but I am quite illiterate when it comes to *nix.

     

    I downloaded the image in Windows (which I believe equates to what you call building the image). Now I need to find a way to turn it from zip to tar (which I am sure I can do). And then I copy the tar file into a new folder in my docker folder? And then select "add container" and create my own template? And then try to follow the github description as closely as possible?


    Why not just use the Linux subsystem that's built into Windows 10? Then you can follow the whole install guide as if you were on a Linux VM:

     

    https://docs.docker.com/docker-for-windows/wsl-tech-preview/
