sonofdbn

Everything posted by sonofdbn

  1. A bit more background: the original flash drive (SanDisk Cruzer Blade) has been in use since about 2016, and over the last couple of months I've had problems with the flash drive not being found, but rebooting or moving it to another USB port solved the problem. This time, unRAID said it couldn't find the GUID, and after shutting down the server and trying to read the drive on my Win 10 PC, I couldn't read the drive at all. And of course, after upgrading a few disks, I didn't have a recent copy of the drive assignments or a recent backup of the flash drive.
     After panicking just a little, I tried another USB port and was able to read the drive. The first thing I did was copy the contents to the PC. Then I tried it on the server, which booted into the GUI but said there had been an unclean shutdown, and the array didn't start. Weirdly, the GUI also showed a missing drive, but after a minute or so the drive showed up (although the array still didn't start). So that was when I made the zip backup of the flash drive, and that's the zip I used for the new flash drive (also a SanDisk Cruzer Blade).
     Then I started wondering if there might have been a problem reading the original drive, because previously it did work after just moving it or rebooting, and maybe that's why it reported an unclean shutdown. So I shut down the server (which had the new flash drive where I couldn't replace the key) and changed back to the original key and rebooted. Now the server is running fine and there was no message about an unclean shutdown. So I've made another zip backup and overwritten the new flash drive with the new backup, in case the previous zip had some corrupted files. I'll test it tomorrow, because things will get very frosty at home if I interrupt someone's already delayed Plex watching tonight. If that fails, I'll contact support.
     I did have a bit of a connection issue because the manual (https://docs.unraid.net/unraid-os/manual/changing-the-flash-device/) doesn't mention unRAID Connect, which I didn't realise is more or less mandatory now and hadn't installed. Then I had a problem because I had three unRAID entries in my authorisation app from the time the unRAID.net accounts were re-done. Eventually got there, and it wasn't too difficult.
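     Side note for anyone else in this spot: besides the WebGUI backup, it should also be possible to grab a copy of the flash drive straight from the terminal, since the flash is mounted at /boot. A rough, untested sketch (the destination path is just an example, not a real share):
        # Untested sketch: archive the flash drive (mounted at /boot) to a share.
        # The destination share/path below is made up - adjust to suit.
        BACKUP=/mnt/user/backups/flash-$(date +%Y%m%d).tgz
        mkdir -p "$(dirname "$BACKUP")"
        tar -czf "$BACKUP" -C /boot .
        ls -lh "$BACKUP"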
  2. I've been having problems with my unRAID flash drive and tried to replace it. I followed the manual: made a backup to my Win 10 PC via the WebGUI, then used USB Flash Creator with the zip backup to make a new unRAID flash drive. Then I shut down the server and replaced the original flash drive with the new one. After restarting the server I got the expected message about the registration key, navigated to the Replace Key button (the manual is a bit out of date here) and selected it. Then I got the following error message after clicking Confirm in this window: What do I do now?
  3. For all those following this avidly, I finally solved this by contacting Supermicro support, who were very helpful. In the end it just required updating the BMC. After that I was able to connect to the server via the Web GUI (using the IPMI IP address) and use iKVM/HTML5. This avoided various Java complications which kicked out certificate errors.
  4. I often take a look at what's going on in my VMs using the VM Console option in the unRAID GUI, which uses noVNC. But I'm wondering how unRAID is able to do this when I never knowingly set up a VNC server in the VMs. Is unRAID doing something special/different? For example, I was able to use the VM Console to play around with a new Linux VM. But when I tried to VNC into the same VM using TightVNC on my Win 10 PC, I couldn't connect. I tried both the standard 5900 port and a few 57xx ports, which the VM Console seems to use. I don't know if using noVNC on the PC would have worked, but couldn't find any reasonably simple way of using noVNC on Windows.
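     A way to see which port and listen address a VM's VNC server is actually using, from the unRAID terminal ("Mint" is just a placeholder VM name, not from the original setup):
        # "Mint" is a placeholder VM name - substitute your own.
        virsh domdisplay Mint                  # e.g. vnc://127.0.0.1:0 -> VNC on port 5900
        virsh dumpxml Mint | grep -i graphics  # shows the configured listen address and ports
        ss -tlnp | grep ':59'                  # which 59xx ports are open, and on which address
     If the listen address turns out to be 127.0.0.1, that would explain why the browser console (served through the unRAID webGUI on the host itself) works while an external client like TightVNC can't connect.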
  5. I'm trying to run Roon Bridge on a Linux Mint VM and play music via a DAC that is connected to the server via USB. No problem with connecting to the DAC, which shows up as a USB Device in the VM Setup page, but the sound is very poor - a bit like bad reception on a radio. I don't know if I might be able to improve things by passing through the entire USB controller, but unfortunately the motherboard has only one USB controller, and the unRAID flash drive is obviously sitting on it. This is the IOMMU grouping:
        IOMMU group 3:
            [8086:7ae0] 00:14.0 USB controller: Intel Corporation Alder Lake-S PCH USB 3.2 Gen 2x2 XHCI Controller (rev 11)
                Bus 001 Device 001 Port 1-0   ID 1d6b:0002 Linux Foundation 2.0 root hub
                Bus 001 Device 002 Port 1-4   ID 0665:5161 Cypress Semiconductor USB to Serial
                Bus 001 Device 003 Port 1-6   ID 174c:2074 ASMedia Technology Inc. ASM1074 High-Speed hub
                Bus 001 Device 004 Port 1-13  ID 26ce:01a2 ASRock LED Controller
                Bus 001 Device 005 Port 1-6.2 ID 0781:5591 SanDisk Corp. Ultra Flair
                Bus 001 Device 006 Port 1-7   ID 0644:8043 TEAC Corp. TEAC UD-501
                Bus 002 Device 001 Port 2-0   ID 1d6b:0003 Linux Foundation 3.0 root hub
            [8086:7aa7] 00:14.2 RAM memory: Intel Corporation Alder Lake-S PCH Shared SRAM (rev 11)
     Is there anything that can be done? The DAC is the TEAC device. (I appreciate that this is not the best way to get music to the DAC, and I am using other alternatives, like Ropieee on a Raspberry Pi, but if I can get this VM approach to work, it would be a nice solution for my use case.)
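     (In case it's useful, the same IOMMU grouping information can also be pulled from the shell with a small generic loop - nothing unRAID-specific here:)
        # List every PCI device grouped by IOMMU group (generic Linux).
        for g in /sys/kernel/iommu_groups/*; do
            echo "IOMMU group ${g##*/}:"
            for d in "$g"/devices/*; do
                lspci -nns "${d##*/}"
            done
        done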
  6. I was going down an Internet rabbit hole looking at server motherboards and ended up at the CWWK website. (CWWK seems to manufacture most of the Topton and other-brand mini-PCs and routers that you see on AliExpress.) I saw this: https://cwwk.net/products/j6412-j6413-nas-6-sata-dual-m-2-itx-i226-v-network-card. The cheapest option is $376.70 with no RAM or SSD. It seems that you get an unRAID Plus licence with this, although I couldn't see any details about how this is delivered, how you choose it, or what the cost is. I don't suppose anyone has one of these? Looks interesting, and I think not too bad cost-wise, especially if it includes a real unRAID licence.
  7. Thanks; deleting the nested folder has solved the problem. (Just to add a data point: some shares with the Recycle Bin have these nested folders, others don't.)
  8. I'm on 6.11.5. I was trying to empty out the Recycle Bin for some shares and found that somehow there were lots of nested .Recycle.Bin folders within the first-level .Recycle.Bin folder. For example, I get \\TOWER\Movies\.Recycle.Bin\.Recycle.Bin\.Recycle.Bin\.Recycle.Bin\.Recycle.Bin\.Recycle.Bin\.Recycle.Bin\.Recycle.Bin\ I wasn't sure if clicking on each .Recycle.Bin folder created another nested one, so I stopped.
     If I try deleting the folder, Windows says items are being deleted (perhaps files within the folder), but I can't delete the nested folder - or any of the nested .Recycle.Bin sub-folders. I get a Windows error message saying "cannot find the path specified". But that's when I look on my Win 10 PC. When I go to the Web GUI and look at the Movies share, I have only one nested .Recycle.Bin folder. If I click on the blue .Recycle.Bin text, I just get the same page. I haven't tried deleting that nested folder yet, in case there's something I might inadvertently damage.
     Don't know if it's happening on every share where the Recycle Bin is applied, but I see the nested folders on all shares that I've looked at so far. Any idea what happened and how I can fix this? tower-diagnostics-20230924-1746.zip
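     For anyone hitting the same thing, a harmless way to see how deep the nesting goes and what's actually in there from the unRAID terminal, before deleting anything (Movies is just the example share from this post):
        # Read-only look at the nested .Recycle.Bin folders on the example "Movies" share.
        find /mnt/user/Movies/.Recycle.Bin -maxdepth 5 -type d -name '.Recycle.Bin'
        du -sh /mnt/user/Movies/.Recycle.Bin/.Recycle.Bin    # size of the first nested level
        # Once you're sure nothing in there is needed, the nested folder could be removed with:
        # rm -r /mnt/user/Movies/.Recycle.Bin/.Recycle.Bin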
  9. No. The drive that is added is cleared (written as all zeroes) if not pre-cleared before adding it. A drive that is all zeroes can then be added without affecting parity so no parity update is required. Sorry, didn't think that one through properly! Yes it does.
     And to your original question, I have wondered the same. It seems reasonable to think a parity check only needs to go as far as the largest data drive. There must be a sound reason. Can a rogue parity bit mysteriously flip from 0 to 1? 🤷‍♂️
     So what happens when I add a new parity drive that has surplus capacity? Does unRAID clear the entire drive? Because if it does, there's a case to be made that pre-clearing a drive that is to be used for parity could be helpful beyond just being a stress test, since it will be writing zeroes to the whole disk. Admittedly there is the other issue of whether unRAID recognises the pre-clear signature and acts accordingly when a parity drive is added. (Whereas I believe that with a new data drive, unRAID will indeed check if there's a signature and if there is, will assume the disk is clear.)
     There seems to be a common view that a drive that is going to be used for parity doesn't have to be cleared (stress-testing it by pre-clearing is a separate consideration). But if that view is correct, then it implies that unRAID doesn't care what is on the drive because the required parity bits are going to overwrite whatever is already there. Following on from that, wouldn't this mean that doing a parity check on the "surplus" part is unnecessary? Whether there are zeroes or something else is immaterial if it's going to be overwritten anyway when larger data disks get added.
     Should our understanding be: If we are replacing a parity drive that is as large as the largest data drive with a drive of the same size, then we don't have to care what is on the new drive; but if we replace that parity drive with a larger one, then pre-clearing might (will?) reduce the time that unRAID takes to add the new drive to the array. The pre-clearing will also address the bad sectors concerns.
  10. I'm obviously confused about parity. I had read that a parity drive doesn't have to be pre-cleared because parity is calculated, and then written to the parity drive. In which case, it doesn't matter what was on the parity drive in the first place. And it also means that if I had added a non-pre-cleared drive as parity, there would almost certainly not be zeroes in the rest of the drive. Or does adding the larger (than necessary) parity drive involve writing zeroes to the "unused" part of the drive? If I add a larger data drive later (assuming that I increase the size of my second parity drive as well), won't parity be calculated for the new drive and then written to the parity drive, overwriting whatever was in the corresponding position?
  11. I recently had to shut down my array, and when I rebooted I found there had been an unclean shutdown, so the server (6.11.5) started a parity check. I have dual parity drives, 16TB (very recently installed) and 8TB. Most of my eight data drives are 8TB, and none is bigger than that. Now I'm more than 50% of the way through the parity check (8.44 TB read), and looking at the Main tab on the GUI only the 16TB drive is being read from. There's no other read/write activity that I can see, and in fact currently only two of the other drives are still spun up. So I'm wondering what the parity check is doing. Simplistically, if the first 8TB of data/parity on the drives has been read and checked, I would have thought that there's no more parity to be checked. Isn't the extra 8TB of the larger parity drive immaterial to the parity integrity of the array? Asking this partly out of curiosity and partly hoping that I can just cancel the parity check now.
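     (As I understand it, single parity is just a bytewise XOR across all the data drives at the same offset, and beyond the end of the largest data drive there is nothing left to XOR, so that region of a bigger parity drive should in theory be all zeroes. A toy illustration of that reasoning, nothing more:)
        # Toy example: parity byte at an offset covered by three data drives.
        printf 'parity with data drives:   %02x\n' $(( 0xA5 ^ 0x3C ^ 0x0F ))
        # Same calculation at an offset beyond the end of every data drive:
        # nothing contributes, so the expected parity byte is zero.
        printf 'parity beyond data drives: %02x\n' 0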
  12. Thanks, I feel reassured. I just recently started using Firefox, so didn't come across the second issue before.
  13. I'm on 6.11.5, running dual parity. My disk 4 has generated a few errors, so I am trying to replace it with a new drive (same size). I assigned the new drive as Disk 4 and started the data rebuild. But there's an ominous message next to it saying "All existing data on this device will be OVERWRITTEN when array is started." If I take this literally, this means that after the data rebuild, when the new disk has had all the old data rebuilt on it, all this data will be overwritten when I start the array. Surely this is not what is going to happen? Also a bit confused about whether the array has actually started. Because this is the message I see in the GUI footer: But I'm still able to access files on the shares on the array, presumably through Disk 4 contents being emulated. I thought that if the array was stopped, the disk shares would not be accessible? tower-diagnostics-20230914-1144.zip
  14. That part I understand. My issue is that I have already completed a pre-clear cycle, and therefore already written zeroes to the entire 16TB drive; if unRAID doesn't recognise that, it will write 16TB of zeroes again, somewhat unnecessarily. Since I have completed one pre-clear cycle and only barely started the pre-read step of the second cycle before stopping, I am hoping that the disk has the two requirements for adding to the array as a cleared disk:
     - the drive is filled with zeroes
     - the drive has a signature that affirms this
     So if I add the drive in its current state to the array, will unRAID recognise it as a cleared drive and therefore allow me to format it as a new array drive? Or can I start a new pre-clear and select just the Verify Signature option? I'm trying to avoid doing an unnecessary re-zeroing of the drive if possible.
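     One crude way to reassure yourself from the terminal that a region of the disk really is zeroed - purely a spot-check, not a substitute for the preclear signature, and replace sdX with the right device:
        # Spot-check: read 16 MiB starting 1 GiB into the disk and list the distinct
        # lines of bytes seen. A fully zeroed region prints a single line of "00"s.
        dd if=/dev/sdX bs=1M skip=1024 count=16 2>/dev/null | od -An -v -tx1 | sort -u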
  15. I had originally set my brand new disk pre-clear to 2 cycles, but decided it wasn't worth taking the extra time. So I stopped the pre-clear after the post-read had completed (and the next cycle had started with pre-read). What do I need to do now to be able to add this disk to the array? I read through the various choices in the first post, but it's not clear to me what is required now.
  16. Amazing what some patience and reading can do. It turns out that there is a setting for Preclear Queue and the default is 2. So I've changed it to 3 and all is good. Also from this thread I see that it's not possible to change the number of cycles during the pre-clear.
  17. I started pre-clearing three 16TB disks; two started pre-read, but one pre-read started and then almost immediately paused. I tried restarting the pre-clear on that disk, but it went straight to the paused pre-read state. The log shows "Pause requested by queue manager". Should I worry about this? Dashboard shows CPU at under 25% and RAM usage is at 4%. So those should not be causing any problems. Also, having read a bit of this thread, can I set the number of cycles back to 1 during the pre-clear? I currently have the number set to 2, which I understand from an early posting in this thread is unnecessary.
  18. I don't have a definitive answer, but I think the problem might have been self-inflicted. I deleted the MACOSX folder on the server, rather than on the PC, and since the files were still on the PC, Nextcloud might have tried to sync them back before the rsync. Hence the files, or maybe temporary Nextcloud sync ones, were still there in the directory on the server, and some files there were still synced to the backup by rsync, resulting in the non-empty directories that couldn't be deleted.
  19. I'm trying to use rsync to back up from one unRAID server to another, just using the terminal. I've mounted the destination share using Unassigned Devices, and the command is basically:
        rsync -avP --delete /mnt/user/Nextcloud/username/files/ /mnt/remotes/Backup_server/Nextcloud/username/files
     It largely works, but I ran into a problem with an error message saying "cannot delete non-empty directory". Previously I had backed up a sub-directory a few levels down in a folder called custom_banners/__MACOSX (if this looks familiar, it's from SpaceInvader). I subsequently deleted this sub-directory on my main server, and now when I run the rsync command above I get the non-empty directory error message, saying it can't delete this directory and its sub-directories. I tried --delete-during as a parameter, but that didn't make a difference.
     I also get some error messages like this:
        rsync: [receiver] rename "/mnt/remotes/Backup_server/Nextcloud/username/files/Documents/2021/moroccan-flower/__MACOSX/._moroccan-flower.jM0wjF" -> "Documents/2021/moroccan-flower/__MACOSX/._moroccan-flower": No such file or directory (2)
     When I check the destination server, that directory does exist, so I'm not sure what's happening here. I don't know if it's just coincidental that both issues involve a "__MACOSX" directory. I think what I'm trying to do is quite simple; I'm not doing anything with Nextcloud when running rsync - and I've tried various times and always get the same issues with the same files and directories. How can I fix this?
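     Not a fix, but one way to see exactly what rsync intends to do with those __MACOSX paths, without touching either side, is a dry run with itemised output, e.g.:
        # Dry run: reports what would be transferred/deleted without changing anything.
        rsync -avP --delete --dry-run --itemize-changes \
            /mnt/user/Nextcloud/username/files/ \
            /mnt/remotes/Backup_server/Nextcloud/username/files \
            | grep -i macosx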
  20. My server seems to be a bit sensitive - meaning that cables seem intermittently to come a little loose (UDMA CRC errors). So generally I avoid opening up the case unless absolutely necessary. So here I just rebooted the server without touching anything inside, and everything has been working fine for a few weeks now (I've probably now jinxed the server). So I'll put it down to a connection problem.
  21. I'm on 6.11.5. A few days ago I found that overnight my disk 4 was in an error state "No device identification ()" and shortly after that there had been a write error and the disk had been disabled. So I rebuilt, without incident, and all seemed fine. But this morning, about a day after the rebuild, there was a read error. From my amateur's view of the diagnostics, the disk looks OK, but obviously I'm worried about the errors. Does the disk need to be replaced? tower-diagnostics-20230730-0935.zip
  22. After changing the docker network type to ipvlan, the same problem re-occurred. So (additionally) I switched to the binhex/arch-qbittorrentvpn:libtorrentv1 version of qbittorrent, and now everything is running fine.
  23. One day I'd like to be able to set up MC properly, but even using the barebones MC in a console window works very well for me. I've tried and still use Krusader occasionally, but it just acts weirdly sometimes. Recently I was trying to delete some files, but Krusader just didn't. No error message or anything. As usual, as a Linux amateur, I thought it was possibly a permissions problem, although I think it deleted other files in the folder which should have had the same permissions. Then I went into Dynamix File Manager and deleted the files with no problem at all. Maybe there was a trash-bin space problem in Krusader? But even so, there should have been a notification. (I use the binhex-Krusader docker container.)
     What I'd like to be able to do in MC is set up the starting panels to some other configuration than the default. Also, the folder/file size numbers are just weird. I read about some way of making them more human-readable, but again can't get that to stick for when I run MC the next time. Just can't find documentation about MC at the right level for me. If I was forced to use only one, I would go for MC - quirky, but works more reliably.
     I also use WinSCP from my Windows PC. It's great for getting a GUI into the unRAID array and easily changing permissions and viewing text files. Great for people like me who are not into "chmod 777" type commands. It can also move/copy files from PC to array - it's like a fancy FTP client. What it doesn't do is move/copy files within the array (afaik - if I'm wrong, please do tell me how - that would make it incredibly useful).
  24. Just to close this off: in the end I went with an Intel i5-13500 (and an ASRock Z690 Pro RS motherboard). ECC would have been nice, but just too difficult/expensive to find a reasonable hardware combination.
  25. On qbittorrent I had been using an older version of the container precisely because of the problem described. But recently that older version stopped working and I changed to the latest version, which seems to be running fine, unless it's causing these crashes. I'm not sure, but I think the first crash came before switching to the latest version. For the moment I'll stick to the latest version, but I have changed docker network type to ipvlan. Let's see how things go....