Ancan

Members
  • Posts: 55
  • Joined
  • Last visited

Ancan's Achievements: Rookie (2/14) · 6 Reputation

  1. Right, I checked around a bit myself, and one showstopper is that if a disk fails, there's a chance the entire SATA channel stops responding and takes the multiplexed drives down with it. That seems like a risk of data loss, so I'll be returning this card.
  2. After ordering a new controller, I think I did too little research first. I'm currently using an LSI 8-port HBA, which has started giving errors on my parity drive (even after replacing both the cable and the drive), so I thought I'd replace it and go for a plain SATA card instead, since those are supposed to have much lower power consumption. However, looking at the order now, I see that the 10-port card uses a JMB575 port multiplier, and I just read that those are not recommended for unRAID. So now I ask: what may go wrong with this card? Am I losing performance, or are there more serious downsides? (A console check for spotting a port multiplier is sketched after this post list.)
  3. Hi all, after swapping my cache drive for a larger one, I reinstalled the Deconz container from "old apps" and restored the appdata folder from my backup. Since then I've lost control of all my Zigbee devices. Checking the logs, there are lots of "error APSDE-DATA.confirm: 0xE1 on task". I've also noted that in the container, "DECONZ_DEVICE" is set to "0", but the config screen says to set it the same as the device, which for me is /dev/ttyACM0. Is this expected? It does seem to find the USB device, because it confirms that the firmware is up to date. (A docker-run sketch for the device mapping follows the post list.)
  4. The first Mac just joined the household, and I've been trying to connect to shares on the unRAID server from it. Everything works fine except hidden shares (i.e. share names ending with "$"). I have no problem mounting hidden shares on a Windows machine from the Mac, nor mounting the hidden unRAID shares from a Windows machine, which leads me to suspect something in the Samba setup on unRAID. I've monitored the traffic while connecting, and it seems I get STATUS_BAD_NETWORK_NAME (capture excerpt below; a hidden-share smb.conf sketch follows the post list).
     24 1.892387 192.168.y.y 192.168.x.x SMB2 414 Create Request File: ;GetInfo Request FILE_INFO/SMB2_FILE_ALL_INFO;Close Request
     25 1.894348 192.168.x.x 192.168.y.y SMB2 566 Create Response File: ;GetInfo Response;Close Response
     27 1.894602 192.168.y.y 192.168.x.x SMB2 414 Create Request File: ;GetInfo Request FILE_INFO/SMB2_FILE_ALL_INFO;Close Request
     28 1.896113 192.168.x.x 192.168.y.y SMB2 566 Create Response File: ;GetInfo Response;Close Response
     30 2.083146 192.168.y.y 192.168.x.x SMB2 208 Tree Connect Request Tree: \\<myNAS>._smb._tcp.local\<sharename>$
     31 2.085822 192.168.x.x 192.168.y.y SMB2 151 Tree Connect Response, Error: STATUS_BAD_NETWORK_NAME
     33 2.085957 192.168.y.y 192.168.x.x SMB2 190 Tree Connect Request Tree: \\192.168.x.x\<SHARENAME>$
     34 2.088408 192.168.x.x 192.168.y.y SMB2 151 Tree Connect Response, Error: STATUS_BAD_NETWORK_NAME
     36 2.089989 192.168.y.y 192.168.x.x SMB2 198 Tree Connect Request Tree: \\<myNAS>._smb._tcp.local\IPC$
     37 2.091087 192.168.x.x 192.168.y.y SMB2 158 Tree Connect Response
     39 2.091278 192.168.y.y 192.168.x.x SMB2 240 Ioctl Request FSCTL_DFS_GET_REFERRALS, File: \192.168.x.x\<sharename>$
     40 2.093056 192.168.x.x 192.168.y.y SMB2 151 Ioctl Response, Error: STATUS_NOT_FOUND
     42 2.093249 192.168.y.y 192.168.x.x SMB2 138 Tree Disconnect Request
     43 2.094102 192.168.x.x 192.168.y.y SMB2 146 Tree Disconnect Response
     45 2.095655 192.168.y.y 192.168.x.x SMB2 198 Tree Connect Request Tree: \\<myNAS>._smb._tcp.local\IPC$
     46 2.097853 192.168.x.x 192.168.y.y SMB2 158 Tree Connect Response
     48 2.098042 192.168.y.y 192.168.x.x SMB2 240 Ioctl Request FSCTL_DFS_GET_REFERRALS, File: \192.168.x.x\<sharename>$
     49 2.100083 192.168.x.x 192.168.y.y SMB2 151 Ioctl Response, Error: STATUS_NOT_FOUND
     51 2.100296 192.168.y.y 192.168.x.x SMB2 138 Tree Disconnect Request
     52 2.100979 192.168.x.x 192.168.y.y SMB2 146 Tree Disconnect Response
     54 2.102956 192.168.y.y 192.168.x.x SMB2 198 Tree Connect Request Tree: \\<myNAS>._smb._tcp.local\IPC$
     55 2.104394 192.168.x.x 192.168.y.y SMB2 158 Tree Connect Response
     57 2.104578 192.168.y.y 192.168.x.x SMB2 220 Ioctl Request FSCTL_DFS_GET_REFERRALS, File: \192.168.x.x
     58 2.105260 192.168.x.x 192.168.y.y SMB2 151 Ioctl Response, Error: STATUS_NOT_FOUND
     60 2.105402 192.168.y.y 192.168.x.x SMB2 138 Tree Disconnect Request
     61 2.106209 192.168.x.x 192.168.y.y SMB2 146 Tree Disconnect Response
     63 2.258212 192.168.y.y 192.168.x.x SMB2 446 Create Request File: :AFP_AfpInfo;Read Request Len:60 Off:0;Close Request
     64 2.260541 192.168.x.x 192.168.y.y SMB2 318 Create Response, Error: STATUS_OBJECT_NAME_NOT_FOUND;Read Response, Error: STATUS_FILE_CLOSED;Close Response, Error: STATUS_FILE_CLOSED
     66 2.260991 192.168.y.y 192.168.x.x SMB2 446 Create Request File: :AFP_AfpInfo;Read Request Len:60 Off:0;Close Request
     67 2.262606 192.168.x.x 192.168.y.y SMB2 318 Create Response, Error: STATUS_OBJECT_NAME_NOT_FOUND;Read Response, Error: STATUS_FILE_CLOSED;Close Response, Error: STATUS_FILE_CLOSED
     69 2.263692 192.168.y.y 192.168.x.x SMB2 414 Create Request File: ;GetInfo Request FS_INFO/FileFsSizeInformation;Close Request
     70 2.264944 192.168.x.x 192.168.y.y SMB2 486 Create Response File: ;GetInfo Response;Close Response
     72 2.265747 192.168.y.y 192.168.x.x SMB2 414 Create Request File: ;GetInfo Request FS_INFO/FileFsSizeInformation;Close Request
     73 2.267337 192.168.x.x 192.168.y.y SMB2 486 Create Response File: ;GetInfo Response;Close Response
  5. No, I've been running it as it is since my last post. No issues.
  6. Late reply, but I still can't get Flax to work. I've copied mnemonic.txt to the correct location, and also deleted the database file for good measure, but there are still zero plots. Running "flax plots check" from the console finds them all though, and "flax keys show --show-mnemonic-seed" shows the correct mnemonic. In the GUI under "workers" the proper plot directories are shown with the correct disk-space usage reported, but the summary still shows "0 Total Plots", plus lots of alerts about "Your harvester appears to be offline". init.log just shows this:
     Flax directory /root/.chia/flax/mainnet
     /root/.chia/flax/mainnet already exists, no migration action taken
     debug.log is attached.
  7. Can't seem to get Flax to find my plots after moving it to the separate docker. /plots1, /plots2 and so on are all mapped and listed under plots_dir, and I can see all the plots from the docker console. Farming is "active" but still zero plots, and nothing obvious in the logs. Not sure if it's relevant, but the harvester process is shown as defunct in ps: [flax_harvester] <defunct>. I believe I followed the guide properly, but must have missed something. (A plot-directory sketch follows the post list.)
  8. I just realized I've been running "latest" all the time since it's the default, but I'd rather be on LTS. What's the best way to move to the LTS branch while keeping the configuration? I tried reinstalling and restoring from backup, but that wasn't possible, since the backup was made with a newer version of the controller. (A tag-swap sketch follows the post list.)
  9. If it's still 4.1GB with only Chia, then I assume my usage is normal. It's been around 5.8-5.9GB since yesterday.
  10. Does anyone know how much RAM is needed for this if you're just running it as a farm? I've done all my plotting on other hardware, and am farming 930 plots. I thought 4GB would be well enough, since people are farming on Raspberry Pis, but it got unstable, and now with 8GB RAM it's at 5.8GB and rising. Is there a memory leak, or is this normal? (A memory-check sketch follows the post list.)
  11. Got some time for tinkering today, and after a few stable weeks on the beta, tried updating again. The problem actually seems to have *worsened* now for some strange reason, since it started acting up a couple of minutes after boot instead of after days. I do however suspect I might have found the culprit on my particular server. After replacing the flash device to make sure it hadn't gone bad, turning off the "tunable" in the Global Share Settings, and moving on to 6.9.2, I still had the server failing soon after boot, with "flash device error", no VMs/Dockers, lost shares, etc. Checking the syslogs I'd collected, the first errors in them seem to come from the eth1 NIC, which happens to be a USB3 "Type-C" NIC that I use in an LACP bond with the integrated GbE. This worked just fine before 6.9.1, but as a test I unplugged it, and the server has now been up an hour without problems. It looks like it takes the whole USB subsystem with it when it breaks, hence the inaccessible flash device. The NIC is a Realtek "RTL8153", and some research shows that the Realtek-provided driver only works properly up to kernel 5.6, but that it will be supported again in 5.13 (https://www.phoronix.com/scan.php?page=news_item&px=Realtek-RTL8153-RTL8156-Linux). (A driver-check sketch follows the post list.)
     Edit: Some more googling, and I'm not sure about the driver state on the different kernels anymore; some say it's been supported for a while already. It still seems to be what caused my problems, so I'll keep it unplugged meanwhile.
     Edit #2: Might have celebrated too early. Shares disappeared again, with lots of "Transport endpoint is not connected", but this time no "flash device error", and all VMs/Dockers were running fine. I'll post diagnostics if it happens again.
  12. I'm back and have upgraded to 6.9.1 again (the Reboot button did a hard, non-clean reboot, BTW). I'll keep an eye on it, but unfortunately I can't disable NFS, since I'd have no use for the server then.
  13. I'll be away for a week or two now, and would like to keep the server as stable as possible meanwhile. Sorry I can't be of more help. I'm also pretty dependent on NFS, since almost all of my VMs have mounted exports from the array. (An fstab sketch for such mounts follows the post list.)
  14. Right! Sorry, forgot the attachment. nasse-diagnostics-20210311-2127.zip
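For the port-multiplier question in posts 1 and 2: a rough console check, assuming shell access on the unRAID box. Linux's libata layer reports port multipliers as "PMP" at boot, and disks behind one share a single host link.

    # Port multipliers announce themselves as "PMP" in the libata boot log
    dmesg | grep -iE 'pmp|port multiplier'

    # With a JMB575-style multiplier, several disks hang off one host link,
    # showing up as ataN.00, ataN.01, ... instead of a single ataN.00
    dmesg | grep -oE 'ata[0-9]+\.[0-9]+' | sort -u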
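For the DECONZ_DEVICE question in post 3: a sketch of the container mapping with the device set explicitly rather than left at "0". The image name and host paths are assumptions to check against the actual unRAID template.

    # Rough docker-run equivalent of the unRAID template; image name and
    # host paths are placeholders, the device mapping is the point
    docker run -d --name=deconz \
      --device=/dev/ttyACM0 \
      -e DECONZ_DEVICE=/dev/ttyACM0 \
      -v /mnt/user/appdata/deconz:/opt/deCONZ \
      deconzcommunity/deconz:latest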
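For the hidden-share problem in post 4: STATUS_BAD_NETWORK_NAME means the server did not recognize the share name at all, so it's worth checking what Samba actually registered. A minimal sketch; share name, path, and user below are placeholders.

    # Hidden share definition (smb.conf / unRAID "SMB Extras" fragment)
    [backup$]
        path = /mnt/user/backup
        browseable = no
        valid users = myuser
        writeable = yes

    # From the Mac: list the visible shares, then mount the hidden one by
    # name (single quotes keep the shell from eating the trailing $)
    smbutil view //myuser@192.168.x.x
    mkdir -p ~/mnt && mount_smbfs '//myuser@192.168.x.x/backup$' ~/mnt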
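For the missing Flax plots in posts 6 and 7: a sketch assuming the flax CLI mirrors chia's subcommands (the posts already use "flax plots check" and "flax keys show"); the config path comes from the quoted init.log.

    # Register each mapped plot path with the harvester
    flax plots add -d /plots1
    flax plots add -d /plots2

    # Verify they ended up in the harvester's config
    grep -A 10 'plot_directories' /root/.chia/flax/mainnet/config/config.yaml

    # Restart the harvester so it rescans; a <defunct> entry in ps means
    # the harvester process died and only its zombie entry remains
    flax start harvester -r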
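For the LTS question in post 8: on unRAID the branch is normally switched by editing the Repository tag in the container template; the shell equivalent is below. The image name and tag are placeholders, since the post doesn't name the controller. The appdata volume is what carries the configuration across the swap, and a downgrade still needs a compatible config format, which is why restoring the newer backup failed.

    # Placeholder image/tag: switch the tag, keep the same appdata volume
    docker pull example/controller:LTS
    docker stop controller && docker rm controller
    docker run -d --name=controller \
      -v /mnt/user/appdata/controller:/config \
      example/controller:LTS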
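For the memory question in posts 9 and 10: a way to tell a leak apart from usage that merely grows until pressured, using plain docker commands; the container name is a placeholder.

    # One-shot snapshot of per-container CPU/memory use
    docker stats --no-stream

    # Cap the farmer at 4 GiB: a leak will eventually get the process
    # OOM-killed, while cache-like growth just flattens out at the cap
    docker update --memory=4g --memory-swap=4g flax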
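For the RTL8153 suspicion in post 11: which driver actually bound the stick can be checked from the console; the in-kernel driver is r8152, and binding to the generic cdc_ether/cdc_ncm driver instead is a commonly reported source of flakiness. eth1 is the interface name from the post.

    # Confirm the adapter is present and which driver claimed eth1
    lsusb | grep -i realtek
    ethtool -i eth1    # the "driver:" line shows r8152 vs cdc_ether/cdc_ncm

    # Look for USB resets or driver complaints around the failures
    dmesg | grep -iE 'r8152|cdc_ether|xhci'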
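For the NFS dependency in posts 12 and 13: if the VMs must keep their array mounts while the server is unstable, soft mounts trade hangs for I/O errors when an export drops. Export path and mount point below are placeholders.

    # /etc/fstab line inside a VM; "soft" plus short timeouts make NFS
    # return I/O errors instead of hanging the guest if the export drops
    192.168.x.x:/mnt/user/share  /mnt/share  nfs  soft,timeo=100,retrans=3,_netdev  0 0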