jkBuckethead

Members
  • Posts: 36
  • Joined
  • Last visited
  • Reputation: 12


  1. I have been working on the very same problem, but with a twist. I added a P2000 to add horsepower to TDARR, but doing so disabled my Plex hardware transcoding via QSV on my i5-12500. Interestingly, I don't even seem to have two cards to choose from. With both the iGPU and the P2000 installed, I only see card0 and renderD128:

        root@UNBUCKET:~# ls -l /dev/dri/by-path/
        total 0
        lrwxrwxrwx 1 root root  8 Apr 21 20:13 pci-0000:01:00.0-card -> ../card0
        lrwxrwxrwx 1 root root 13 Apr 21 20:13 pci-0000:01:00.0-render -> ../renderD128

     Both links point at the same PCI device (0000:01:00.0, which should be the P2000), so the iGPU isn't being enumerated at all. I'm wondering if this is a motherboard behavior that disables the iGPU when a discrete GPU is installed. I haven't yet attached a monitor to check the BIOS for a setting that might control this, if one exists. I might also try installing the P2000 in a slot other than the first and see what happens. If I can get the iGPU and the P2000 to coexist, I would be fine using the iGPU for my modest Plex transcoding needs while leaving the P2000 to TDARR. Down the road, when TDARR is finished with my existing files and only new files need processing, I might use the P2000 for Plex or simply remove it.
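     Before digging through the BIOS, here is a minimal sketch of how to check whether the iGPU is visible to the OS at all (standard Linux tools; loading i915 is what unraid needs for Intel QSV):

        # List every display controller the kernel can see; if the iGPU is
        # disabled by the BIOS/board, it will not appear here at all.
        lspci | grep -iE 'vga|display'

        # Check whether the Intel iGPU driver is loaded, and load it if not.
        lsmod | grep i915 || modprobe i915

        # Re-check the render devices once the driver is loaded.
        ls -l /dev/dri/by-path/

     If lspci shows only the P2000, no driver setting will help and the fix has to come from the BIOS (or the slot shuffle).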
  2. Diagnostics attached and thanks in advance. unbucket-diagnostics-20230731-1723.zip
  3. One of my cache SSDs has been showing some errors, so I want to replace it. Since SSD prices have come down, I also want to upgrade all the drives to larger ones, because I have a large Plex library in appdata that takes up nearly half my total cache.

     To prep for replacing the cache drives, I set all my cache shares to use the array, disabled Docker, and started the mover. Once the move was complete, most files had moved from the cache pool to the array, but a number of small files remain in the cache appdata folder. The share size data shows 11.8MB on the cache pool; most of this is from two SUPERVISORD.LOG files in my RADARR and KRUSADER folders. The other files are all really small log and config files. Unfortunately, I don't know a way to attach a complete listing of the files.

     I haven't checked every file, but I did check all the files in the top folders for Krusader and Radarr. In every case there are at least two versions of each file, one on cache and one on an array disk. The sizes and dates vary; sometimes they are only a few minutes apart in age. Many of these are files that haven't been modified in some time, while others were modified just a few days ago. Some of the files on cache are newer; for others, the copy on the array is newer. A lot of the files on the array have the same date, 1/3/23, which I think is the last time I cleared the cache to replace another cache drive. I wasn't changing the size of the cache pool then, so I just replaced the drive and let it rebuild the pool, but I moved the data out of cache as a precaution.

     It seems as if the duplicate files are preventing the mover from finishing the move from cache to array. Would it be safe to delete one copy so the mover can finish? If so, should I delete the older files, or the files in one location or the other? I'm also curious how unraid handles having more than one version of the same file. Does it use the file located in the primary location, or does it use the newer version?
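     For reference, here is a minimal sketch of the comparison I have been doing by hand, assuming the default unraid mount points (/mnt/cache and /mnt/disk*): it prints every file that exists both on the cache pool and on an array disk, with sizes and dates side by side.

        # For each file still on the cache, report any copy with the same
        # path on an array disk so sizes and dates can be compared before
        # anything is deleted.
        cd /mnt/cache/appdata
        find . -type f | while IFS= read -r f; do
            for d in /mnt/disk*/appdata; do
                if [ -e "$d/$f" ]; then
                    echo "DUPLICATE: $f"
                    ls -l "/mnt/cache/appdata/$f" "$d/$f"
                fi
            done
        done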
  4. Thanks, I found a similar comment about "EFI-" on linuxserver.tips shortly after posting here. Removing the hyphen allowed me to boot with both my current flash drive and the one created from a backup. Great to get over that hump, but that was just on a tabletop; I'm waiting for new M.2 drives to arrive before I put everything back together and see if the storage array is still intact. I also tried the CSM option but could not enable it on my motherboard. Searching that issue led to articles on the ASUS support site saying CSM was disabled or restricted on Intel 10th gen and 500 series boards. I say restricted because one of their proposed solutions was to use a discrete GPU instead of the iGPU, so in certain cases it is available, but in mine it was not. I didn't find a specific reference to 12th gen and 600 series chipsets, but it seems likely the feature was not added back.
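     For anyone else who lands here, the rename can also be done from a running unraid console (a minimal example, assuming the flash drive is at its default mount point, /boot):

        # The EFI- folder on the unraid flash drive disables UEFI booting;
        # renaming it to EFI (no hyphen) enables it.
        mv /boot/EFI- /boot/EFI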
  5. My unraid motherboard recently died. It was a 10-year-old LGA1155 board, so the only replacement boards I could find with enough PCIe slots were used, expensive, and often both. Instead of spending a bunch of money on an old fossil, I decided to modernize and got a new 12th gen Intel CPU and an ASUS Z690 motherboard. The motherboard is a little overkill, but it had the number of slots needed for my HBA cards and included 2.5G networking, which let me remove my existing NIC.

     Unfortunately, I had no luck booting the new board with my existing unraid flash drive. I could see the drive in the boot menu and in the BIOS, but after selecting it the computer would quickly return to the BIOS screen, without an error message or anything. I tried my current unraid drive and one created from a recent backup, and both failed to boot. I then tried an unraid drive made from a fresh download, plus a Windows install drive, and both worked fine.

     What could be preventing the existing unraid drives from booting when the fresh one works? I would definitely prefer to use my existing drive rather than start from scratch. Suggestions or ideas will be much appreciated.
  6. I recently upgraded a drive in my SSD cache pool. I probably would have been fine without doing so, but before replacing the drive I moved the shares that normally reside on the cache to the array; most importantly, I wanted to move my large Plex appdata folder. To move the shares, I changed their cache preference from PREFER to YES and ran the Mover. I also stopped all dockers and disabled auto-start so the files would not be in use. After the cache upgrade, which had no issues, I changed the shares back to PREFER and started the Mover again.

     Once it finished, I checked the array disks for any lingering folders or files, and I found some. There aren't many: just one .icons folder from Krusader that seems to be empty, and six Plex metadata .bundle folders. The Plex folders are spread across five folders in one library, so just one or two subfolders in folders that normally contain hundreds of these .bundle folders. When I explore the .bundle folders deeper, there are several more levels of subfolders, but ultimately they all appear empty. So I don't think there is any data to be worried about, just folders that need to be cleaned up.

     I didn't previously have mover logging enabled, but now I do. The most recent mover operation is very close to the end of the attached log. What seems interesting is that the mover is trying to move files (that do not exist) with the same paths as the few folders still remaining on the array. I guess instead of deleting the empty folder path, the Mover left the empty folders behind. Is there a way to force the Mover to clean up the folders? Can I safely remove them manually? Should I ignore them since they take up no space? unbucket-syslog-20210624-0045.zip
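     If manual cleanup turns out to be safe, here is a minimal sketch, assuming the default /mnt/disk* mount points (preview first, delete second):

        # Preview the empty directory trees left behind on the array disks.
        find /mnt/disk*/appdata -type d -empty

        # If the list looks right, delete them; -delete implies depth-first,
        # so nested empty folders are removed from the bottom up.
        find /mnt/disk*/appdata -type d -empty -delete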
  7. I have not shucked an 18TB drive, but I have shucked several 10 and 12TB Elements and MyBook drives. In all cases they were SATA drives. Once installed, they identify as Western Digital Ultrastar He10/He12 drives. I did have to tape off the 3rd power pin (the 3.3V power-disable pin) to make them work with SATA power connectors, but that was pretty easy.
  8. For a few weeks now, my server has been locking up every weekend. At first I didn't notice the regularity, but this week I noticed the uptime was 6 days and 20 hours when I was dealing with it. Considering I woke up around 3:30 AM and started messing with it, the uptime would have been right at 7 days if I had waited until morning to take care of it, like usual.

     The lockup first becomes apparent because the network shares become unavailable to file explorer and any other applications using them. While the shares are unavailable, the webGUI is still partially working. The exact state of the webGUI has been different each week; some pages load fully, while others only load partly. For example, the past few weeks the MAIN page would load, except the Array Operation section at the bottom would be blank. In each case I have been able to access the page to download diagnostics, but until this week the diagnostics never actually downloaded. This week they finally did, so I have something to upload.

     Recovering from the lockup always ends with me shutting down manually and restarting, which of course is followed by a parity check. I have tried shutting down via the webGUI and the terminal window without success. With a monitor connected to the server, when I try powering down from the terminal window I can see the process starting, but it never finishes and never actually shuts off the hardware. Since this week the webGUI was a little more complete (i.e. Array Operation was loading), I got to see a little more info than in past episodes. One interesting thing is that it indicated the Mover was running, but no actual disk activity was shown. I don't know if that is significant; it's just something I saw.

     The regularity of this happening every Saturday night/Sunday morning made me look for a corresponding scheduled event. I have a number of things that run overnight, such as application updates that check daily, but the only weekly item I found was SSD TRIM (enabled for my cache SSDs), set for Sunday at 2AM. I am going to disable TRIM for now and see if it solves the problem. Any thoughts on TRIM locking up the system? UNBUCKET Main 09052020c.pdf unbucket-diagnostics-20200906-0331.zip
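     One way to test the TRIM theory without waiting a week is to trigger the same operation manually from a console and watch it (a minimal sketch, assuming the cache pool is at the default /mnt/cache):

        # Run TRIM on the cache filesystem interactively; -v reports how
        # much was trimmed, and a hang here would point straight at TRIM.
        fstrim -v /mnt/cache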
  9. Thanks, you've confirmed my thoughts. I knew I could turn on the PSU with a jumper, I just didn't know if there was a reason I shouldn't.
  10. It seems like your response is closest to what I want to achieve. If nothing is connected to the MB, then you are controlling the PSU directly. It's not clear whether your remote switch is connected to the incoming power or the PSU output, so please clarify how you are turning the PSU on and keeping it on. Did you install a permanent jumper on the 24-pin connector, so you are now switching the power from the wall? Or is your electronic switch connected to the 24-pin connector? Either way would confirm my thought that using a jumper to activate the PSU, whether the jumper is solid or switched, is all I need to turn the PSU on.
  11. Thanks for the input, both of you. I'm not keen on the idea of splicing into my PSU cables. I'd need a pair of 24-pin extensions so I could splice into them without permanent damage to my PSU cables, plus some sort of connectors at the back of the machine for easy disconnection when I need to move them. Cheaper than the Supermicro widget, but still probably $15-20 in parts. Both of these solutions would require additional cabling between the two enclosures. I'm not aware of any off-the-shelf cables that would work for either option, so I would have to rig something up using adapters and old cable parts. I might even have to solder, which I suck at. I'm not concerned about keeping the PSUs in sync; except for upgrades and/or repairs, this server runs 24/7. I think I'll stick with a solution that doesn't involve extra connections between the two machines.
  12. Looking for a little room to grow. I'm planning on using one of the currently available mini-ITX enclosures with 8 hot-swap bays to house the drives, connected to an external SAS HBA in the main system via a pass-thru in the back of the case. I know that on a dollar-per-bay basis I would be better off with a used server chassis, but I don't need that much expansion and I don't have anyplace to mount a server chassis.

     My question is how to power the external chassis, since I won't have a motherboard. Do I really need something like the SuperMicro JBPWR2 power board, or can I simply turn on the PSU with a jumper? If all I need is a jumper for the PSU, I'm looking at this switch to make powering off and on easier. If I need something more, I also have an old ASUS AT5IONT-I board with an integrated Atom CPU lying around collecting dust. I'm thinking I could use it to control the PSU, and it would just sit in a constant state of failed boot without a boot drive. This would waste a bit of power, but with a 13W CPU, not too much.
  13. I have had no issues using the Aquantia AQtion 10G Pro NIC in my unraid machine. The card is multi-gig so it supports 1, 2.5, 5, and 10G depending on the connection at the other end and the length and quality of the cable. In my case it sits right next to my main switch with a CAT 7 patch cord connection, but is limited to 5G because it is connected to a 5G port on my switch. Still, with spinning hard drives this is more than enough speed.
  14. A couple of weeks ago, completely out of the blue, I saw I had errors on two storage drives, plus one of my two parity drives was offline. The first sign something was weird was that both storage drives had the exact same number of errors, which would be a huge coincidence if these were physical drive failures. It turned out that all three drives were connected to the same breakout cable (the 4th connector was unused) on my LSI 9207-8i HBA. Thinking it might be a bad cable, I swapped out the cable and rebooted. I rebuilt the 2nd parity drive and everything was fine for the past two weeks.

     Tonight I updated to version 6.8.1. Right after rebooting, I saw a strange warning message that one of my cache drives was unavailable. Oddly, when I checked the drive on the MAIN page, it said the drive was operating normally. A few minutes later, the same parity and two storage drives started having similar problems as before. While it is on a different breakout cable, the cache drive is connected to the same HBA as the other malfunctioning drives. I shut down and swapped the HBA for a spare I had just bought for another machine.

     It seems like the HBA may be sketchy. I would prefer not to put it back into service without confirming it is healthy, but I also don't want to buy another if it isn't necessary. Does anyone know of any software tools or other methods for testing an HBA? unbucket-diagnostics-20200113-2308.zip
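     For a first pass from the console, a minimal sketch (note: sas2flash is LSI's own flash utility and is not bundled with unraid by default, so that part is an assumption about what's installed):

        # Confirm the kernel still sees the controller at all.
        lspci | grep -i 'lsi\|sas'

        # With LSI's sas2flash utility installed, dump adapter, BIOS, and
        # firmware details; flaky or ancient firmware is worth ruling out.
        sas2flash -list

        # Long SMART self-tests on the attached drives at least exercise
        # every link through the HBA; replace sdX with each drive behind it.
        smartctl -t long /dev/sdX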
  15. Best Buy is currently offering $90 off on the 10TB Easystore, making it $159.99. This is $30 less than the 8TB at $199.99. They also have $100 off on the 14TB model, making it $209.99. If you can go big, $15/TB is not too shabby. No deal on the 12TB so it is $249.99.