Leaderboard

Popular Content

Showing content with the highest reputation on 04/08/21 in all areas

  1. This release contains bug fixes and minor improvements.
     To upgrade:
     - First create a backup of your USB flash boot device: Main/Flash/Flash Backup.
     - If you are running any 6.4 or later release, click 'Check for Updates' on the Tools/Update OS page.
     - If you are running a pre-6.4 release, click 'Check for Updates' on the Plugins page.
     - If the above doesn't work, navigate to Plugins/Install Plugin, select/copy/paste this plugin URL and click Install: https://s3.amazonaws.com/dnld.lime-technology.com/stable/unRAIDServer.plg
     Bugs: if you discover a bug or other issue in this release, please open a Stable Releases Bug Report. Thank you to all Moderators, Community Developers and Community Members for reporting bugs, providing information and posting workarounds. Please remember to make a flash backup!
     Edit: FYI - we included some code to further limit brute-force login attempts; however, fundamental changes to certain default settings will be made starting with the 6.10 release. Unraid OS has come a long way since it was originally conceived as a simple home NAS on a trusted LAN. It used to be that all protocols/shares/etc. were "open", "enabled" or "public" by default, and anyone interested in locking things down would do so on a case-by-case basis. In addition, it wasn't so hard to tell users what to do, because there weren't that many things that had to be done. Let's call this approach convenience over security. Now we are a more sophisticated NAS, application and VM platform, and I think it's obvious we need to take the opposite approach: security over convenience. What we have to do is lock everything down by default, and then instruct users how to unlock things. For example:
     - Force the user to define a root password upon first webGUI access.
     - Make all shares not exported by default.
     - Disable SMBv1, ssh, telnet, ftp and nfs by default (some are already disabled by default).
     - Provide a UI for ssh that lets users upload a public key, plus a checkbox to enable keyboard password authentication.
     - etc.
     We have already begun the 6.10 cycle and should have a -beta1 available early next week (hopefully).
    12 points
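A hedged sketch of one of the lock-downs mentioned in item 1 (disabling SMBv1): with Samba this comes down to raising the minimum protocol. On Unraid, custom Samba settings conventionally go in /boot/config/smb-extra.conf; the example below writes to a local file so it is self-contained.

```shell
# Sketch: disable SMBv1 by raising Samba's minimum protocol to SMB2.
# On an Unraid server the target would be /boot/config/smb-extra.conf;
# a local file is used here so the example runs anywhere.
conf=./smb-extra.conf
cat > "$conf" <<'EOF'
[global]
   server min protocol = SMB2
EOF
grep 'server min protocol' "$conf"
```

After a Samba restart, clients that only speak SMB1 can no longer connect.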
  2. It's hard to release it in the USA and around the world at the same time. Someone is always sleeping. Also, @limetech got us the latest kernel the same day it was released. It's hard to give the plugin devs a heads up when the Linux kernel wasn't even released before they went to bed.
    7 points
  3. @limetech Upgraded and no problems here; checked the version of runc and it's all good. Thanks for the inclusion of the latest version of Docker, you guys have saved me a lot of additional support, much appreciated!
    4 points
  4. The local syslog server relies on the server itself working properly and may miss things when the system hangs unexpectedly. The mirror function simply copies everything simultaneously to syslog and flash, and can catch more in case the system hangs. Of course, I am expecting everything to work and no more call traces.
    2 points
  5. You can also add this, as said, to your syslinux.cfg file so that you don't have to do anything or create a file if you installed the Intel-GPU-TOP plugin. But I was wrong above: you have to do it in this format: i915.force_probe=4c8b Simply append this to your syslinux.cfg file (Main -> click on 'Flash', append it like that, and click Apply at the bottom); it is then picked up when the Intel-GPU-TOP plugin loads the drivers, so there is no need to create the file with the contents. From what I know, this is a problem with Plex, and you can only solve it if you run a custom script; but you have to run it on every update of the container, otherwise it will stop working again. I would post an issue on the Plex forums about that.
    2 points
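The append-to-syslinux step from item 5 can be sketched in shell. A hedged example on a local sample of the file; on Unraid the real file is /boot/syslinux/syslinux.cfg (edited via Main -> Flash), and the device id 4c8b is specific to that poster's GPU.

```shell
# Sketch: add i915.force_probe=4c8b to the kernel 'append' line.
# A local sample file stands in for /boot/syslinux/syslinux.cfg.
cfg=./syslinux.cfg
cat > "$cfg" <<'EOF'
label Unraid OS
  menu default
  kernel /bzimage
  append initrd=/bzroot
EOF
# Only append the parameter if it is not already there
grep -q 'i915.force_probe' "$cfg" || \
  sed -i 's/^\(  append .*\)/\1 i915.force_probe=4c8b/' "$cfg"
grep 'append' "$cfg"   # -> "  append initrd=/bzroot i915.force_probe=4c8b"
```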
  6. Hi Newbie, for me it runs as stably as it possibly can. I myself run a 5700 XT in the reference design. Negative experiences always get highlighted the most; your best bet is to form your own picture with your own setup and simply give it a try. I personally haven't heard anything bad from the unraid forum so far. In some cases the vendor-reset really was the long-awaited solution, since for some people the by now rather old navi-patch didn't fix all problems. By the way, my name is one line up 😜
    2 points
  7. That will be your solution. For that you will have to compile a custom kernel with @ich777's "unraid kernel helper" to integrate the vendor-reset. If you have installed other additional drivers via the community apps, those have to be compiled in as well, and the corresponding plugins have to be uninstalled before the reboot. For a how-to, please check the corresponding thread. Should you run into problems, he will surely be glad to help you! I am currently very busy at work, so further replies from me may take a while 😜 Have fun once everything is set up.
    2 points
  8. Big Navi cards should work out of the box, as they fully support FLR as specified in the PCIe specifications. Strange... But as I read here, there seems to be an issue with the vBIOS. @trig229 you can also try to download your vBIOS from TechPowerUp and use that instead.
    2 points
  9. We're working on a design that lets driver plugins be automatically updated when we issue a release.
    2 points
  10. You should disable this behavior via Clover or OpenCore. It is enough if the screen turns off. Is your card supported in macOS? Is anyone running a Mac with exactly your card? Are you using the vendor-reset? If not, you should!! Correct. I have always configured via the XML editor, since otherwise relevant parameters get lost on saving.
    2 points
  11. Done, container is already rebuilt and uploaded to Docker Hub.
    2 points
  12. Please try to append 'i915.force_probe=4c8a' to your syslinux.cfg and reboot (if you do it like that, then please remove the contents of the i915.conf file). Please don't double post; you can also mention me here. Your i915.conf file has to be empty, or at least have only the middle line in it; the first and the last lines are wrong.
    2 points
  13. No problem, here to help... I will keep you updated when everything is sorted out and the source is available.
    2 points
  14. The specific macvlan issue is discussed here. The specific kernel fix is described here; it comes down to broadcast messages not being handled properly.
    2 points
  15. Builds for 6.9.2 have been added (2.0.0, and 2.0.4 if you have enabled "unstable" builds). Thanks to @ich777 the process is now automated: when a new unRAID version is released, ZFS is built and uploaded automatically. Thanks a lot to @ich777 for this awesome addition!
    2 points
  16. Keep in mind he's probably sleeping.
    2 points
  17. Not to be the one to complain, but we need to turn from reactive to proactive. I genuinely appreciate the support and the dev work put in here, but couldn't this have been anticipated and communicated to the developer ahead of time? If we are trying to bridge the gap between core product devs and community devs, this could be avoided. In either case, no harm no foul. The system is running and we can wait for the fix.
    2 points
  18. @ich777 will update them when he wakes up. He is on the other side of the world.
    2 points
  19. Unraid does a nice job of controlling HDDs' energy consumption (and probably longevity) by spinning down (mechanical) hard drives when idle for a set period of time. Unfortunately, the technique Unraid uses to spin down an HDD works, as of the time of writing, only for ATA drives. If you have SCSI/SAS hard drives, these drives do not spin down (although the UI will indicate they do). The drives continue spinning 24x7, expanding the energy footprint of your Unraid server.
     Following a long and fruitful discussion here, a solution is provided via this plugin. This is hopefully a temporary stopgap until Limetech includes this functionality in mainline Unraid, at which time this plugin will walk into the sunset. Essentially, this plugin complements the Unraid SATA spindown functionality with SAS-specific handling. In version 6.9 and upwards, it enhances the "sdspin" function (the focal point for drive spin up/down) with support for SAS drives. In prior versions (up until 6.8.x) it does the following:
     1. Install a script that spins down a SAS drive. The script is triggered by the Unraid syslog message reporting this drive's (intended) spin down, and actually spins it down.
     2. Install an rsyslog filter that mobilizes the script in #1.
     3. Monitor the rsyslog configuration for changes, to make sure the filter in #2 stays put across changes of settings.
     In addition, the plugin installs a wrapper for "smartctl", which works around smartctl's deficiency (in versions up to 7.1) of not supporting the "-n standby" flag for non-ATA devices (which leads to many unsolicited spin-ups for SAS drives). When this flag is detected, if the target device is SAS and is in standby (i.e. spun down), smartctl is bypassed.
     You can install this plugin via Community Applications (the recommended way), or by pasting this URL into the "Install Plugin" dialog: https://raw.githubusercontent.com/doron1/unraid-sas-spindown/master/sas-spindown.plg
     When you remove the plugin, the original "vanilla" Unraid behavior is reinstated. As always, there is absolutely no warranty; use at your own risk. It works for me. With that said, please report any issues (or success stories...) here. Thanks and credit go to this great community, with special mention to @SimonF and @Cilusse.
     EDIT: It appears that some combinations of SAS drives / controllers are not compatible with temporary spin-down. We've seen reports specifically re the Seagate Constellation ES.3 and the Hitachi 10K RPM 600GB, but there are probably others. The plugin has been updated to exclude combinations known to misbehave, and to use a dynamic exclusion table so that other combinations can be added from time to time. 19-Nov-2020
    1 point
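The smartctl-wrapper idea from item 19 can be sketched as follows. This is a hedged illustration, not the plugin's actual code: the two probe functions are stubs (the real wrapper queries the device itself), and the wrapper mimics smartctl's behavior of exiting with status 2 when it skips a device in a low-power mode.

```shell
# Sketch of the wrapper logic: when smartctl is called with '-n standby'
# on a SAS drive that is spun down, skip the real call so the drive is
# not woken up. Stubs stand in for real device probing.
is_sas()     { [ "$1" = "/dev/sdx" ]; }   # stub: pretend /dev/sdx is SAS
in_standby() { [ "$1" = "/dev/sdx" ]; }   # stub: pretend /dev/sdx is spun down

smartctl_wrapper() {
  for arg in "$@"; do dev=$arg; done      # device is the last argument
  case "$*" in
    *"-n standby"*)
      if is_sas "$dev" && in_standby "$dev"; then
        echo "$dev is in standby; skipping smartctl"
        return 2                          # mirror smartctl's low-power skip status
      fi ;;
  esac
  echo "would run: smartctl $*"
}

smartctl_wrapper -n standby -A /dev/sdx   # prints: /dev/sdx is in standby; skipping smartctl
smartctl_wrapper -n standby -A /dev/sda   # prints: would run: smartctl -n standby -A /dev/sda
```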
  20. Thanks for the quick reply. A side note: host access is only applicable to custom (macvlan) networks, not bridge networks. In your case enabling it has no use and it can stay disabled, it should however not cause the call traces (still investigating)!
    1 point
  21. I found the problem. As I said, a beginner mistake, or a lack of warnings in CA/unraid. Under Settings/Management Access, the ports need to be changed from 80/443 to anything else... A simple port conflict between the main system and the CA docker.
    1 point
  22. It is for sure. Ya know, I did a little googling, and nothing applicable came up right away. Weird. This video from Shockbyte has the meat and potatoes of the backup process using an FTP browser; the link starts at 0:54. You didn't mention what sort of server or service you're coming from, but the steps are generally:
     1. shut down the old server
     2. (optional) compress the server files to a single zip or tar.gz file
     3. download world, world_nether, and world_the_end (where server.properties "level-name" is "world")
     4. create the new server
     5. upload the files (un-compress if necessary)
     6. be sure the prefix "world" matches server.properties "level-name"
     7. start the server
     This doesn't include the plugin folder or other weird stuff: only the backed-up world files, which include player names, stats and inventories, I think. I've had great luck with DrivebackupV2 backing up my server files to a Google Drive account on a periodic rolling basis. https://dev.bukkit.org/projects/drivebackupv2 My binhex server files are actually located in /mnt/user/appdata/binhex-mineos-node/mineos/games/servers/neverland where my server.properties level-name is "neverland", so the three folders I have are neverland/, neverland_nether/, and neverland_the_end/.
    1 point
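The compress/transfer steps in item 22 can be sketched with tar. Directory names here are illustrative stand-ins for the old and new server paths; the world folder names must match "level-name" in server.properties.

```shell
# Sketch of steps 2-5 above: archive the three world folders on the old
# server, then unpack the archive on the new one (server stopped first).
mkdir -p old-server/world old-server/world_nether old-server/world_the_end new-server

# 2. compress the world folders into one archive
tar -C old-server -czf worlds.tar.gz world world_nether world_the_end

# 3/5. move worlds.tar.gz to the new host, then unpack on the new server
tar -C new-server -xzf worlds.tar.gz
ls new-server   # lists world, world_nether, world_the_end
```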
  23. Yeah, just create new paths. All the container needs to see is /library; then you can sub-folder that out.
    1 point
  24. It's already installed:
     sh-5.1# sqlite3
     SQLite version 3.34.1 2021-01-20 14:10:07
     Enter ".help" for usage hints.
     Connected to a transient in-memory database.
     Use ".open FILENAME" to reopen on a persistent database.
    1 point
  25. It should always be safe to upgrade. If I haven't built the packages yet, it will always grab the latest driver that was built by LT itself (I added a fallback after the release of 6.9.1), regardless of what version is set on the plugin page. The only thing is that the plugin page looks a little weird until I've built the packages, but that does not affect the function of the driver itself. Hope that answers your question.
    1 point
  26. You need to install the SAS Helper plugin to spin down SAS drives at this time. Review the support page before loading.
    1 point
  27. Please discuss here https://github.com/rix1337/docker-ripper/issues/64
    1 point
  28. No open slot? Because even if the slot is only x1/x4/whatever, "bigger" cards may very well still fit in there.
    1 point
  29. Simply add the following to the end of the go file (from the Powertop thread):
    1 point
  30. The problem was a malfunction of the DNS resolver. /etc/resolv.conf is empty, and it is almost always managed by programs such as dnsmasq, resolvconf or openresolv. After entering my local DNS server in /etc/resolv.conf, I was able to run the update for the installation. It would be nice, though, if the resolving issue were fixed.
    1 point
  31. Click on the flash drive on the Main tab. I've often thought to myself that it should also be listed under the Disk Shares part of the Shares tab, as that is a more 'discoverable' location.
    1 point
  32. Yes, just note you use a SilverStone CS381. The HBA/cable price is a bit expensive; btw, different markets/sources have different prices, and I got those at 1/3 🙂 FYR, after adding the LSI HBA, expect parity sync/check to start at 180MB/s and end at 100MB/s, averaging 140MB/s.
    1 point
  33. Ah OK, that makes more sense then. So I guess when a new version of unraid appears and the kernel version has changed (which is normally the case), it means you will need to re-compile all the nvidia driver versions again? If so, that could be a rather large task as the list of available driver versions grows. I'm just thinking of the case where a user needs a specific version of the driver for compatibility reasons with their GPU, then a new version of unraid comes out and that driver version is no longer available for the newer kernel. Is this a possible scenario?
    1 point
  34. Thanks for the quick reply! I see it's fixed up, which is great, thanks! However, I've got a couple more questions, I hope you don't mind. My understanding (and here is where I think I am wrong) is that the nvidia driver downloaded via this plugin is not tied to a specific kernel version; is this not correct? If it IS correct, then why not display all driver versions irrespective of the version of unraid running on the host? As long as the host is at least the supported version, i.e. 6.9.1 or greater, we could always display all the available versions, right? The other question is more about exactly what this plugin does: on the face of it, it simply downloads the nvidia driver from somewhere (your repo?) and places it in a specific folder on the flash drive. Other than checking whether a new version is available on restart of the server, does it do anything else?
    1 point
  35. I thought I had posted here about the update a few days ago, but it seems I didn't! Well, I pushed out a fix for Macinabox over Easter and now it will pull Big Sur correctly, so if you update the container all should be good. Now both download methods in the docker template will pull Big Sur. Method 1 is quicker, as it downloads the base image, as opposed to method 2, which pulls the InstallAssistant.pkg and extracts that.
    1 point
  36. This card has a MegaRaid LSI chip which cannot be flashed with IT mode firmware. It is not a good choice for unRAID. The card in the 2nd link will work well. Are you planning to connect SATA drives directly to the controller? If so, you need an SFF-8087 to 4 SATA forward breakout cable like this one. Make sure you get a forward breakout cable and not reverse. The direction matters. To connect 8 drives to the second controller listed, you would need two cables.
    1 point
  37. No stress man! Your work is greatly appreciated! We are back in business: --
    1 point
  38. The packages are all up now. They were built automatically but weren't uploaded automatically; I have to investigate why... Please also look at the second post of this thread; there will be updates on which versions the drivers have already been built for.
    1 point
  39. What did you have on cache? appdata I assume, did you have it backed up? Anything else important?
    1 point
  40. Yes, but you will have to, or it will still think the disk is parity2.
    1 point
  41. Because the bug in unRAID was that changes to smartctl made it so drives would not spin down when smartctl is called as it is in the telegraf.conf file. Telegraf is an integral part of the UUD setup. If you disable UUD, you remove Telegraf and its calls to smartctl, so you no longer see the problem. Alternatively, you could keep UUD active and just comment out this line in telegraf.conf:
     [[inputs.smart]]
       # ## Optionally specify the path to the smartctl executable
       path = "/usr/sbin/smartctl"
     However, as far as I know, that issue was fixed in unRAID 6.9.1? Perhaps someone else can confirm this.
    1 point
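The comment-out route from item 41 can be scripted. A hedged sketch on a local sample of telegraf.conf (TOML uses # for comments); on a real setup you would edit the telegraf.conf used by the container instead.

```shell
# Sketch: comment out telegraf's smartctl path so the input stops calling it.
# A local sample file stands in for the container's telegraf.conf.
conf=./telegraf.conf
cat > "$conf" <<'EOF'
[[inputs.smart]]
  path = "/usr/sbin/smartctl"
EOF
# Prefix any line mentioning smartctl with a TOML comment marker
sed -i '/smartctl/ s/^/# /' "$conf"
cat "$conf"
```

Restart the Telegraf container afterwards so the change takes effect.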
  42. Evening @ich777, a quick heads up: I just upgraded to 6.9.2 (stable) and it looks like it's causing some UI issues and problems with the preferred driver selection for your plugin. Namely, the available versions are not shown correctly, and the preferred version I had selected was not honoured on reboot, thus a long reboot time and a forced upgrade to the latest version now available (465.19.01). Screenshot below; note the preferred version top right. Cheers!
    1 point
  43. Yes, I thought it would also somehow be possible to hand the audio over to a device that is passed through, but after thinking about it again, that can't work. I will look into this ASAP.
    1 point
  44. You still need to have something automatically checking for updates. I would definitely suggest using CA Auto Update for this. Yes, I am on Current Version: 0.15.1. There have been updates over the past 2 days.
    1 point
  45. Sorry for restarting an old topic, but I don't know if you ever got this solved. I just ran into the same issue as you (Unraid Nvidia 6.9-beta 30); ultimately it comes down to the successful recognition of hardware. For me the solution was this VM configuration: Machine: Q35-3.1, BIOS: SeaBIOS. Full XML below:
     <?xml version='1.0' encoding='UTF-8'?>
     <domain type='kvm' id='6'>
       <name>pfSense</name>
       <uuid>f212c21d-5961-e131-3058-d5c9bb38b256</uuid>
       <metadata>
         <vmtemplate xmlns="unraid" name="FreeBSD" icon="freebsd.png" os="freebsd"/>
       </metadata>
       <memory unit='KiB'>2097152</memory>
       <currentMemory unit='KiB'>2097152</currentMemory>
       <memoryBacking>
         <nosharepages/>
       </memoryBacking>
       <vcpu placement='static'>4</vcpu>
       <cputune>
         <vcpupin vcpu='0' cpuset='5'/>
         <vcpupin vcpu='1' cpuset='13'/>
         <vcpupin vcpu='2' cpuset='7'/>
         <vcpupin vcpu='3' cpuset='15'/>
       </cputune>
       <resource>
         <partition>/machine</partition>
       </resource>
       <os>
         <type arch='x86_64' machine='pc-q35-3.1'>hvm</type>
       </os>
       <features>
         <acpi/>
         <apic/>
       </features>
       <cpu mode='host-passthrough' check='none' migratable='on'>
         <topology sockets='1' dies='1' cores='2' threads='2'/>
         <cache mode='passthrough'/>
         <feature policy='require' name='topoext'/>
       </cpu>
       <clock offset='utc'>
         <timer name='rtc' tickpolicy='catchup'/>
         <timer name='pit' tickpolicy='delay'/>
         <timer name='hpet' present='no'/>
       </clock>
       <on_poweroff>destroy</on_poweroff>
       <on_reboot>restart</on_reboot>
       <on_crash>restart</on_crash>
       <devices>
         <emulator>/usr/local/sbin/qemu</emulator>
         <disk type='file' device='cdrom'>
           <driver name='qemu' type='raw'/>
           <source file='/mnt/user/isos/pfSense-CE-2.4.5-RELEASE-p1-amd64.iso' index='2'/>
           <backingStore/>
           <target dev='hda' bus='sata'/>
           <readonly/>
           <boot order='2'/>
           <alias name='sata0-0-0'/>
           <address type='drive' controller='0' bus='0' target='0' unit='0'/>
         </disk>
         <disk type='file' device='disk'>
           <driver name='qemu' type='raw' cache='writeback'/>
           <source file='/mnt/user/domains/pfSense/vdisk1.img' index='1'/>
           <backingStore/>
           <target dev='hdc' bus='sata'/>
           <boot order='1'/>
           <alias name='sata0-0-2'/>
           <address type='drive' controller='0' bus='0' target='0' unit='2'/>
         </disk>
         <controller type='usb' index='0' model='ich9-ehci1'>
           <alias name='usb'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x7'/>
         </controller>
         <controller type='usb' index='0' model='ich9-uhci1'>
           <alias name='usb'/>
           <master startport='0'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0' multifunction='on'/>
         </controller>
         <controller type='usb' index='0' model='ich9-uhci2'>
           <alias name='usb'/>
           <master startport='2'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x1'/>
         </controller>
         <controller type='usb' index='0' model='ich9-uhci3'>
           <alias name='usb'/>
           <master startport='4'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x2'/>
         </controller>
         <controller type='sata' index='0'>
           <alias name='ide'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
         </controller>
         <controller type='pci' index='0' model='pcie-root'>
           <alias name='pcie.0'/>
         </controller>
         <controller type='pci' index='1' model='pcie-root-port'>
           <model name='pcie-root-port'/>
           <target chassis='1' port='0x10'/>
           <alias name='pci.1'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
         </controller>
         <controller type='pci' index='2' model='pcie-root-port'>
           <model name='pcie-root-port'/>
           <target chassis='2' port='0x11'/>
           <alias name='pci.2'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
         </controller>
         <controller type='pci' index='3' model='pcie-root-port'>
           <model name='pcie-root-port'/>
           <target chassis='3' port='0x12'/>
           <alias name='pci.3'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
         </controller>
         <controller type='pci' index='4' model='pcie-root-port'>
           <model name='pcie-root-port'/>
           <target chassis='4' port='0x13'/>
           <alias name='pci.4'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
         </controller>
         <controller type='pci' index='5' model='pcie-root-port'>
           <model name='pcie-root-port'/>
           <target chassis='5' port='0x14'/>
           <alias name='pci.5'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
         </controller>
         <controller type='virtio-serial' index='0'>
           <alias name='virtio-serial0'/>
           <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
         </controller>
         <serial type='pty'>
           <source path='/dev/pts/1'/>
           <target type='isa-serial' port='0'>
             <model name='isa-serial'/>
           </target>
           <alias name='serial0'/>
         </serial>
         <console type='pty' tty='/dev/pts/1'>
           <source path='/dev/pts/1'/>
           <target type='serial' port='0'/>
           <alias name='serial0'/>
         </console>
         <channel type='unix'>
           <source mode='bind' path='/var/lib/libvirt/qemu/channel/target/domain-6-pfSense/org.qemu.guest_agent.0'/>
           <target type='virtio' name='org.qemu.guest_agent.0' state='disconnected'/>
           <alias name='channel0'/>
           <address type='virtio-serial' controller='0' bus='0' port='1'/>
         </channel>
         <input type='tablet' bus='usb'>
           <alias name='input0'/>
           <address type='usb' bus='0' port='1'/>
         </input>
         <input type='mouse' bus='ps2'>
           <alias name='input1'/>
         </input>
         <input type='keyboard' bus='ps2'>
           <alias name='input2'/>
         </input>
         <graphics type='vnc' port='5901' autoport='yes' websocket='5701' listen='0.0.0.0' keymap='en-gb'>
           <listen type='address' address='0.0.0.0'/>
         </graphics>
         <video>
           <model type='qxl' ram='65536' vram='65536' vgamem='16384' heads='1' primary='yes'/>
           <alias name='video0'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
         </video>
         <hostdev mode='subsystem' type='pci' managed='yes'>
           <driver name='vfio'/>
           <source>
             <address domain='0x0000' bus='0x24' slot='0x00' function='0x0'/>
           </source>
           <alias name='hostdev0'/>
           <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
         </hostdev>
         <hostdev mode='subsystem' type='pci' managed='yes'>
           <driver name='vfio'/>
           <source>
             <address domain='0x0000' bus='0x24' slot='0x00' function='0x1'/>
           </source>
           <alias name='hostdev1'/>
           <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
         </hostdev>
         <memballoon model='none'/>
       </devices>
       <seclabel type='dynamic' model='dac' relabel='yes'>
         <label>+0:+100</label>
         <imagelabel>+0:+100</imagelabel>
       </seclabel>
     </domain>
     Hopefully this helps someone :)
    1 point
  46. Hi - had the same problem. I did solve it with this command in ssh: So - a reboot isn't necessary.
    1 point
  47. Hi, I have the same problem with that on Unraid 6.8.3. All VMs that switched from GPU passthrough to VNC and back have the problem. I found out that after switching the graphics output, the "bus" changed from 0x00 to 0x07, and with these settings the VMs don't work. After changing the bus back to 0x00, the VM works fine. If bus 0x00 is occupied, change the slot to 2 or 3. Sorry for my bad english :-)
     <model type='qxl' ram='65536' vram='65536' vgamem='16384' heads='1' primary='yes'/>
     <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
    1 point
  48. Do the following to let docker rebuild the networks:
     rm /var/lib/docker/network/files/local-kv.db
     /etc/rc.d/rc.docker restart
    1 point
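A slightly more cautious variant of the commands in item 48: keep a backup of the network database instead of deleting it outright, so the step can be undone. A sketch using a local stand-in file so it runs anywhere; the restart command (shown as a comment) is Unraid's docker rc script.

```shell
# Sketch: move docker's network DB aside (instead of rm) so it can be restored.
# On Unraid the real path is /var/lib/docker/network/files/local-kv.db;
# a local stand-in file is used here.
db=./local-kv.db
touch "$db"                           # stand-in for the real database file
[ -f "$db" ] && mv "$db" "$db.bak"    # keep a copy instead of deleting
echo "backed up to $db.bak"
# then restart docker so it rebuilds its networks:
#   /etc/rc.d/rc.docker restart
```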
  49. Agree you should always test a new drive to help eliminate infant-mortality issues. Specifically, "preclear" is no longer an important function: the primary motivation for JoeL when he wrote that utility was to eliminate the long downtime in an array while it was clearing a new drive. The PreClear utility provided a way to clear a drive BEFORE adding it to the array; and in conjunction with LimeTech, a special "cleared" signature on the drive allowed it to not require clearing when added, since UnRAID "knew" that had already been done.
     As trurl noted, the newest version of UnRAID no longer disables array access when you're adding a new drive: it will clear the drive BEFORE incorporating it into the array, and then automatically add it. So the PreClear function is no longer needed. But Joe included a fairly thorough bit of testing in the process: reading every bit to confirm all sectors could be successfully read; zeroing (clearing) the drive; and then post-reading to confirm everything had been written correctly and could be successfully read back; with a good bit of seek testing along the way. Running a few cycles of this became somewhat of a de facto "test" for new drives to confirm all was good before adding them to the array.
     This testing, however, can just as easily be done using various 3rd-party disk utilities or the manufacturer's diagnostics, so if you'd prefer to test new drives on another system (e.g. Windows, Mac, or another Linux box), that is just as good. The important thing is that you DO test your new drives before using them. Personally, I test all new drives using WD's Data Lifeguard: I run a short test, a long test, then a full write of zeroes, and then repeat the short and long tests. I do this regardless of which system the drive is destined for -- one of my desktops, an HTPC, or one of my UnRAID servers.
    1 point