oh-tomo

Everything posted by oh-tomo

  1. According to Seagate, Instant Secure Erase eliminates the need to physically destroy or overwrite the drive, saving time, money, and resources. Maybe @stereobastler has figured out how to get ISE to work on unRAID in the 3 years since starting this thread:
  2. I would like to use this quick erase functionality on an IronWolf Pro HDD. I downloaded the SeaChest utilities as described in this thread: But I get "RevertSP is not supported on this device" when using that option with Seagate's SeaChest_Erase tool. The SeaChest_Erase readme has a section "Enabling TCG Commands In Linux", and the thread below describes how to set libata.allow_tpm to 1 on unRAID, which I did before rebooting. Still "RevertSP is not supported on this device." Then I tried connecting the HDD to a SATA port on the motherboard instead of the LSI 9300-8i HBA to see if that made a difference with the libata change. Still "RevertSP is not supported on this device." I don't see anything on Seagate's website saying this model (ST16000NE000-2RW103) is *not* SED. Seagate chat support claimed it is SED and, after more questions, ended the chat with "The recommendation we could provide is to contact with unRAID to further support." There's a PSID on the label. Why would there be a PSID on the label if it wasn't SED capable?
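     (For anyone following along: the libata.allow_tpm change on unRAID is made by adding the parameter to the append line in /boot/syslinux/syslinux.cfg and rebooting. A rough sketch, assuming the default boot label; your append line may carry other parameters:)
     label Unraid OS
       menu default
       kernel /bzimage
       append libata.allow_tpm=1 initrd=/bzroot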
  3. The Data-Rebuild with the new drive finished, so I was able to shut down, connect the old drive directly to SATA data & power on the unRAID PC, and on power-up it appears in Unassigned Devices. The old drive is an IronWolf Pro and there's a PSID number on the label -- can I use this PSID to do an Instant Secure Erase and save time compared to what sounds like a lengthy preclear read/write/read/confirm operation for a 16TB HDD? It would be nice to know if a faster erase-for-RMA option is possible for this and any future RMAs.
  4. Thanks for your suggestion! I appreciate you taking the time to help. As you mentioned, preclear is a valuable tool for checking drive health. However, my primary focus is securely erasing the data on this drive before returning it for RMA. My understanding is that preclear is about read/write/verify passes to test drive health, while a secure erase either overwrites the entire drive or, on a self-encrypting drive, discards the encryption key so the data is unrecoverable. Perhaps there are specific preclear functions you're aware of that would be suitable for a secure erase in this case? I'd be happy to learn more if so.
  5. "In order to protect your privacy and other interests in data, you should delete all data, or as much as possible, prior to returning any product to Seagate." I know long SMART scans can be initiated as a background process from the unRAID page but can a secure erase be initiated for a drive I'm about to return for a Seagate RMA? Or do I open a ssh window and run "sudo hdparm --security-erase /dev/sdl" and leave the window open until it finishes? Oh wait that doesn't work: root@Server2018:~# sudo hdparm --security-erase /dev/sdl missing PASSWD root@Server2018:~# HDD is connected via USB dock to unRAID PC and shows up in Unassigned Devices as sdl.
  6. Event: Unraid Data-Rebuild
     Subject: Notice [SERVER2018] - Data-Rebuild finished (0 errors)
     Description: Duration: 1 day, 4 hours, 10 minutes, 16 seconds. Average speed: 157.8 MB/s
     It finally finished. I wasn't sure if the smaller drive would finish and re-enable first, but they both remained disabled until the 16TB finished too. Diagnostics attached. I was Resilio-syncing a big folder of unRAID stuff to a 10TB external USB HDD whose partition vanished, but I haven't gotten around to seeing if that's recoverable yet. server2018-diagnostics-20240124-0057.zip
  7. 16TB extended offline test completed without error. Diagnostics attached. server2018-diagnostics-20240122-2010.zip
  8. 10TB extended test completed without errors; the 16TB test is in progress, 80% complete. Can rebuilding of the 10TB start while the 16TB test is running? If I wait until the 16TB self-test is done, is rebuilding both at once possible/advisable, or is one at a time the preferred method? ST10000VN0004-1ZD101_ZA28Y9NQ-2021-04-07 disk6 (sdg) - DISK_DSBL.txt ST16000NE000-2RW103_ZL2P6V5T-2021-04-07 disk1 (sdj) - DISK_DSBL.txt
  9. Yes, there is a Molex to 2x SATA power splitter. The 5-bay drive cage has two SATA power inputs, so it uses both legs of the splitter. And between those there was a Molex-to-Molex adapter with a smaller cable powering a fan. Stupidly, I examined this setup just now while unRAID was running, and the other three drives in the drive cage got read errors, so I shut down. Now that it's shut down, I've been going over the power connections and have removed that Molex-to-Molex adapter from the SATA power chain.
  10. Hardware details which might not appear in the Diagnostics: I checked the SATA connections to the iStarUSA BPN-DE350SS 3x 5.25-inch to 5x 3.5-inch SAS/SATA trayless hot-swap cage (black) and all five are secure, not loose at all. Those five are among the eight SATA breakout connections (CableCreation Internal HD Mini SAS SFF-8643 Host to 4x SATA Target cables, 1.6 ft) coming off an LSI 9300-8i PCI-Express 3.0 SATA/SAS 8-port SAS3 12Gb/s HBA (Avago Technologies), which is connected to the motherboard listed in the Diagnostics and is running the latest BIOS.
  11. The disk used and free numbers weren't updating in the unRAID webUI, so I rebooted and saw updated used and free numbers, but also that Disks 1 and 6 had become disabled, contents emulated. I did a short SMART test on each and both passed. What is the next step? A long SMART test? The last three diagnostics (two from today after two reboots and one from 9 days ago) are attached. server2018-diagnostics-20240112-1230.zip server2018-diagnostics-20240121-1630.zip server2018-diagnostics-20240121-1656.zip
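     (If it helps, the extended test can also be started from the command line for each disabled disk; a rough sketch with /dev/sdX as a placeholder for disk1/disk6:)
     # kick off the extended (long) self-test in the background
     smartctl -t long /dev/sdX
     # check progress and results later
     smartctl -a /dev/sdX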
  12. I'm still figuring this out, but try changing the USB controller to xHCI, since the VIA VL805/806 is an xHCI controller? It's not really clear in the VM settings that "USB Controller" applies to passthrough. https://www.via-labs.com/product_show.php?id=48
  13. So what stopped TianoCore from hanging on boot was switching the VM USB controller settings from:
     <controller type='usb' index='0' model='ich9-ehci1'>
       <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x7'/>
     </controller>
     <controller type='usb' index='0' model='ich9-uhci1'>
       <master startport='0'/>
       <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0' multifunction='on'/>
     </controller>
     <controller type='usb' index='0' model='ich9-uhci2'>
       <master startport='2'/>
       <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x1'/>
     </controller>
     <controller type='usb' index='0' model='ich9-uhci3'>
       <master startport='4'/>
       <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x2'/>
     </controller>
     to:
     <controller type='usb' index='0' model='qemu-xhci' ports='15'>
       <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
     </controller>
  14. Ethernet speed seems good between the Nvidia Shield Pro 2019 and unRAID. What issues am I looking for, or should I just leave it alone? (Also, my receipt and BIOS say my motherboard is an H310M-A, but lspci.txt says B450M-A.)
     04:00.0 Ethernet controller [0200]: Realtek Semiconductor Co., Ltd. RTL8111/8168/8411 PCI Express Gigabit Ethernet Controller [10ec:8168] (rev 15)
       Subsystem: ASUSTeK Computer Inc. PRIME B450M-A Motherboard [1043:8677]
       Kernel driver in use: r8169
       Kernel modules: r8169
     [ 5] 501.00-502.00 sec 111 MBytes 933 Mbits/sec 0 277 KBytes
     [ 5] 502.00-503.00 sec 112 MBytes 944 Mbits/sec 0 305 KBytes
     - - - - - - - - - - - - - - - - - - - - - - - - -
     [ ID] Interval Transfer Bitrate Retr
     [ 5] 0.00-503.00 sec 54.1 GBytes 923 Mbits/sec 84 sender
     [ 5] 0.00-503.00 sec 0.00 Bytes 0.00 bits/sec receiver
     iperf3: error - the server has terminated
  15. Just upgraded the unRAID OS for the first time in ages (to 6.12.6 from 6.9.2) and now Activate Windows is nagging me in Windows 10 (it claims the product key has already been used). So I guess the upgrade affected Windows' perception of the VM hardware environment? Do I have to call Microsoft (it's New Year's Day, I don't think they're open), or is there a quick fix within the unRAID OS or the VM settings? Is the change of the network MAC address while trying out different VM settings the cause? I was trying to tweak the VM performance by creating new VM settings and I noticed that the MAC address changes each time a VM setting is added. I guess that's a warning for others, if that is the case.
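     (A hedged sketch: if the reactivation nag really is down to the MAC address changing, pinning the original VM's MAC in the new VM's XML keeps the virtual NIC stable. The MAC and bridge below are placeholders; reuse the values from the old VM's definition.)
     <interface type='bridge'>
       <!-- placeholder MAC: copy the address from the original VM's XML -->
       <mac address='52:54:00:xx:xx:xx'/>
       <source bridge='br0'/>
       <model type='virtio-net'/>
     </interface>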
  16. Also, when I shut down I do hear a click in the speakers connected to the HDMI GPU audio device, but still a black screen. The old VM setting gives me an UNRAID boot-up screen instead of the old TianoCore one, so the passthrough GPU is working. Audio works as well, but it was never 100% before; I hear occasional audio ticks, so I have to work out my latency issues there. The main difference between the new VM setting and this old working one was Q35-7.2 vs Q35-5.1. Anyway, the old VM setting works, so I guess I'll stick with that.
  17. Just upgraded the unRAID OS for the first time in ages (to 6.12.6 from 6.9.2) and thought I'd refresh the VM settings for Windows 10 with "Add VM", and I'm seeing some settings I've never seen previously. Do I just leave them blank or as they autofill when a new VM is added, or is there documentation for them? I noticed on first boot that the passthrough GPU just displays black, so I'll restart and see if I at least get the TianoCore screen. (I was able to remote in with Chrome Remote Desktop.) No TianoCore screen on reboot, but Device Manager sees the NVIDIA GeForce GT 710 and Windows reports it is using the best drivers for it. Maybe I'll try my old VM settings.
  18. Darn, I asked too many questions. I was hoping the core question (how to expedite the process) would garner answers. Or, more specifically: if I want to run ddrescue to clone from drive to drive and I have only one spare SATA connection available (because the array is taking up all the other SATA connections in the unRAID PC), how do I free up a SATA port and avoid paying the cost of a long parity rebuild afterwards? (Background: ddrescue cloning requires a source and a destination -- that's two SATA ports!) Instead of parity 2 (which I removed because I thought it was redundant), should I swap out the fullest data drive (because it would be the least likely to be written to) and then hope that restoring the full data drive back to the array won't initiate a full parity rebuild? (Ohhhhh, second-guessing how unRAID is gonna react is so fraught with doubt... I just wanna clone a drive.)
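     (For reference, the clone itself is the simple part; a minimal ddrescue sketch with placeholder device names. Double-check source and destination before running, since the destination is overwritten, and the mapfile lets an interrupted run resume.)
     # pass 1: copy everything readable, skip the slow scraping phase
     ddrescue -f -n /dev/sdSOURCE /dev/sdDEST /boot/ddrescue.map
     # pass 2: go back and retry the bad areas up to three times
     ddrescue -f -r3 /dev/sdSOURCE /dev/sdDEST /boot/ddrescue.map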
  19. Hello Unraid community, I hope you're all doing well. I recently encountered a situation with my Unraid server that has left me seeking some guidance. Here's the scenario: I needed to perform a drive clone operation (ddrescue) using a spare SATA port, so I temporarily removed my Parity 2 drive to free up the port. After completing the clone operation, I reconnected the same Parity 2 drive to its original SATA port. Upon powering up my Unraid server with the Parity 2 drive restored, I noticed that it appeared with a blue icon in the Unraid webGUI. The blue icon seems to indicate that the drive is being emulated or that some sort of background operation is taking place. However, I'm not entirely sure why this is happening, especially since I'm using the same drive that was previously part of my array. My questions are as follows: Is it normal for a Parity drive to go through an emulated rebuild process when it's temporarily removed and then reconnected to the same port? The estimated time for this emulated rebuild is quite long, around 19 hours. Is such a lengthy rebuild necessary for the same drive, or is there a way to expedite this process? Should I be concerned about data integrity during this process, or is there anything specific I should check or monitor? I'd greatly appreciate any insights or advice from the community regarding this situation. I want to ensure that my server is operating smoothly and that my data remains safe. Thank you in advance for your help!
  20. The VM wasn't starting, so I:
     - disconnected the USB connections to the passthrough card
     - shut down unRAID
     - started unRAID
     - started the VM
     - re-connected all the USB connections
     - VM froze
     - shut down unRAID again
     - restarted unRAID again
     - started the VM and waited for the login screen
     - re-connected just one of the USB connections (the hub to which the mouse & keyboard are attached)
     So is this an issue with the passthrough card? Too hot? Incompatible? Is the card dying? The USB 3.0 passthrough device in the VM settings is listed as:
     VIA Technologies VL805/806 xHCI USB 3.0 Controller | USB controller (02:00.0)
     which is either (I haven't looked inside to check which yet):
     - Inateck KT5001 PCI-E to USB 3.0 5-Port PCI Express Card with 15-Pin Power Connector (Red), or
     - Rivo PCI Express Riser USB 3.0 Card 5-Port PCI Extender Card with 4-Pin Power Connector
     The VM is working now, but for how long before another freeze during a reboot? Please advise next steps.
  21. I restart the Windows VM (because it's getting sluggish at 99% RAM utilization), I go out to do some errands, and I come back two hours later to find the VM still offline. This is why I'm reluctant to restart the VM: it hangs for some mysterious reason while restarting. Is there another way to reset the VM and get the RAM back without restarting? Why is it stuck? Is it an attached USB peripheral? Does it just hate me? Why does this keep happening randomly, without reason?
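     (One way to recover a hung guest from the unRAID terminal without rebooting the host, sketched with a placeholder VM name; virsh destroy is a hard power-off, so anything unsaved in the guest is lost.)
     # see which VMs libvirt knows about and their current state
     virsh list --all
     # hard power-off the hung guest (equivalent to pulling the plug)
     virsh destroy "Windows 10"
     # start it again
     virsh start "Windows 10"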
  22. I set the Win10 VM to restart and left the site where the unRAID lives, and from my remote location I'm not seeing any sign of it having come back online 6h 50m later. I only restarted because RAM usage in the Win10 VM was at 100%, and restarting seems to be the only way to get it out of that sluggish state.
  23. I don't know. The Win10 VM was the only one I was using (why would the size suddenly change?), so I deleted the others and eventually was able to boot back in. I also had to disable the GPU passthrough and boot in using VNC. Once the Windows repairs were done and I got back in, I shut down the Win10 VM, re-added the GPU, and booted in again. It seemed faster once I was back in, but now after a few days Win10 seems less responsive. I wonder if the downloading of Win10 updates is slowing things down. Space used is still only around 437 GB out of 899 GB.
  24. Booted into Windows and there's 437 GB of free space out of 899 GB. Why was VM boot up pausing if there's all this space?
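     (One thing worth checking when a VM pauses even though Windows shows plenty of free space: the host side can run out of room for a sparse vdisk image, which makes QEMU pause the guest. A rough sketch, assuming the default domains share path; adjust to wherever the vdisk actually lives.)
     # virtual size vs. space the image actually occupies on the host
     qemu-img info "/mnt/user/domains/Windows 10/vdisk1.img"
     # free space on the share/pool holding the image
     df -h /mnt/user/domains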