Everything posted by ukkeman

  1. When I try to start the service, it sometimes does start, but only if no peer has been added. As soon as I add a peer, I always get this error:
     [#] ip -4 route flush table 202
     [#] ip -4 route add default via table 202
     Error: inet address is expected rather than "table".
     [#] ip link delete dev wg2
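The error suggests the gateway address that should follow `via` expanded to nothing, so `ip` reads `table` where it expects an address. A minimal sketch of what a working command could look like, assuming `wg2` is the tunnel interface and the default route is meant to go through the tunnel itself (as wg-quick normally generates for a peer with `AllowedIPs = 0.0.0.0/0`) rather than via a gateway:

```shell
# Broken, as logged: the gateway after "via" is empty, so "table" is
# misparsed as an address:
#   ip -4 route add default via table 202

# For a WireGuard tunnel, the route is normally a device route on the
# tunnel interface, not a gateway route (assumption: wg2 is the interface):
ip -4 route add default dev wg2 table 202
```

If the command comes from a PostUp script, it is worth checking whether a gateway variable in that script is expanding to an empty string.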
  2. Tested it over gigabit LAN now: SMB: 45 MB/s, SCP: 110 MB/s to the same share. In both cases I used the /mnt/user path and not a direct disk share path.
  3. I tried creating a new share set to cache-only and restarted the array, but got the same speed, around 30 MB/s.
  4. No, I did not; those settings have been in place for a very long time, and I have restarted the machine and the array several times since, so I don't think that is the issue. I can also see that SMB transfers write to the cache disks when I look at the dashboard and directly at the mounted disks. And I can't see anything significant like a maxed-out CPU or OOM kills in tools like netdata. This is driving me a bit crazy. The disks are not encrypted, but if that were the bottleneck I would also expect to see higher CPU load.
  5. I can see it's writing to the cache, but the way it writes looks odd in the dashboard: the speed shows as zero, then for a brief moment jumps to 300 MB/s, then drops back to zero again.
  6. Speed to Windows VMs is at full rate. Write speed to the cache array is at 500 MB/s.
  7. I'm also having this issue. It seems like having the tab open while running a parity check triggers it; at least that's what I've observed. I'm on 6.20.rc.2.
  8. I've got slow SMB speeds when writing to a cache-enabled share: SMB gets around 30 MB/s, while SCP maxes out my WiFi at around 90 MB/s. I only transfer one large file at a time. I'm on 6.10-rc.2. Added diagnostics: tron-diagnostics-20220109-2150.zip
  9. I'm losing one container after another to auto updates; I just found out about it today. /usr/bin/docker: links are only supported for user-defined networks. See '/usr/bin/docker run --help'. The command failed. I don't want to create a dedicated network for a MySQL machine when all the containers accessing it are already on the same network, and I also don't really like the CLI way of adding custom networks; that's why I liked linking. Is there another way, or am I missing something? Going back to 6.7.2 for now and will report back if this is only a 6.8.0 issue.
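For what it's worth, the usual replacement for `--link` is a user-defined bridge network: containers attached to the same user-defined network can resolve each other by container name, so no link alias is needed. A minimal sketch (the network name `appnet` and the container/image names are made up for illustration):

```shell
# Create a user-defined bridge network once (name "appnet" is hypothetical):
docker network create appnet

# Attach the MySQL container and its clients to it. On a user-defined
# network, Docker's embedded DNS lets containers resolve each other by
# container name, replacing the old --link alias:
docker run -d --name mysql --network appnet mysql:5.7
docker run -d --name webapp --network appnet my-web-image

# Inside "webapp", the database is now reachable at the hostname "mysql".
```

The one-time `docker network create` is the only extra CLI step; after that, templates only need the network name instead of per-container links.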
  10. OK, it seems I was too fast with my assumption when testing the LSI controller. Since I moved the drive from the mobo to the LSI controller, it has worked like a charm. I googled around a bit more and found a lot of people having issues with Samsung Evo drives and AMD 9xx chipsets.
  11. I've seen exactly the same. I added a new 860 Evo 500 GB SSD and pre-cleared it for fun. It started to accumulate more and more CRC errors; I'm now at around 100 after a few hours. My setup is close to yours: SABERTOOTH 990FX R2.0 with an AMD FX 8350 and 16 GB of RAM. The Intel 300 GB SSD I used before had not a single issue with the same data cables, power cables, and port on the mainboard. What I tried:
      - exchanged cables
      - used the LSI SATA controller instead of onboard
      - tried a different power rail
      This seems like a driver issue or a more general problem.
  12. Hey, I had similar issues when switching from AFP to SMB, and I just registered to let you know how I solved it here. First of all, the Console app and the log in there told me that authentication failed. So I went ahead and googled around and found this: https://apple.stackexchange.com/a/246244 It suggests that a "security.plist" may be broken, so I looked at that file, which was completely broken here: just a scramble of hex/ASCII, nowhere near valid XML. So I kept a copy and replaced it with the content from Stack Exchange:

      <?xml version="1.0" encoding="UTF-8"?>
      <!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
      <plist version="1.0">
      <dict>
          <key>SecItemSynchronizable</key>
          <true/>
      </dict>
      </plist>

      This alone did not fix the issue, but maybe together with the second step I took it did: I went to the Time Machine prefpane and removed the disk again, then opened a terminal and executed:

      sudo tmutil setdestination -ap smb://username@myserver/Backup

      This asked me for my local user password and the remote destination password, and it has been working here ever since. No changes were made to the smb conf files or extra confs, and the setup is exactly as the screenshots here have shown. Hope that helps some of you.