bidmead

Members
  • Posts: 120
  • Joined
  • Last visited

About bidmead

  • Birthday 01/18/1941


bidmead's Achievements

Apprentice (3/14)

Reputation: 7

  1. Tested Technology has posted part 1 of its review of Unraid on the LincStation N1. Comments, here or on the Tested Technology Web site, would be very welcome indeed. -- Chris
  2. Thanks for the steer, @JonathanM. Categorising the parity-protected array as another kind of user-defined pool makes a lot of sense. Meanwhile I'm following your advice (and Spencer's) with a dummy 4GB USB stick standing in for the conventional disk1 array. -- Chris
  3. I need to correct myself here. I've returned to Unraid after an absence of a couple of years following a hardware failure, and haven't yet got my head around some of the newer developments. I had assumed that ZFS, XFS and BTRFS pools, not forming part of the main Unraid array, would be categorised as "unassigned devices". I've since learned that this binary categorisation is wrong: there's a third category of "user-defined pools", to which these newer storage formats belong. This makes nonsense of my response to @JonathanM, so my apologies are due. Let me rally my thoughts and see if I can straighten out what I was trying to say, which I think remains valid:
     The core idea of a parity-protected array depends entirely on the ability of a dedicated parity-maintaining device (comprising one or two drives) to track bit switching in close to real time as new data are written to any of the several other drives comprising the array. Hard drives are well designed to do this, at their own pace. Solid-state devices using NAND can emulate this, and probably with impressive speed. However, although the SSD controller can comfortably write a zero bit into a cell that previously stored a one, it has to copy an entire block of data to a new set of cells if it's required to overwrite a zero with a one. (The first sketch appended after this list is a toy illustration of that copy-on-bit-flip cost.) This elaborate choreography leaves wastelands of outdated data that the operating system will need to TRIM from time to time. Copying an entire block that subsequently has to be erased by a high voltage before it can be reused, just to accommodate a single bit-flip, accelerates wear and significantly shortens the life of the device.
     So while SSDs are valuable for storing chunks of data in normal use, they are not a worthy technology for bit-by-bit parity matching. This suggests that Unraid implemented on an all-solid-state NAS should not employ the traditional parity-checked main array. Instead, its data will optimally be stored in one or more user-defined pools, very likely in RAID configurations.
     That's my thinking. In attempting to express it as clearly as possible I notice I've fallen into a tone that sounds authoritative. This is spurious. As I say, I'm still trying to get to grips with all this and there's lots of room for me to be wrong. Please straighten me out as appropriate. -- Chris
  4. Thanks, @Kilrah. But I'd suggest that the "most people" argument here demonstrates its usual weakness. UnRAID itself isn't designed for "most people". And the LincStation N1 is certainly not designed for spinning-rust users. Lime Technology choosing to tie in with this device appears to signal that it's looking to evolve UnRAID to properly include all-SSD NASes. That's not to say there's any thought of abandoning HDs and the parity-protected array for which HDs are well suited. But with the N1, Lime is taking on the challenge of SSDs. These storage devices can, as you suggest, be kluged into a conventional UnRAID main array, but they don't bit-flip at all well and engineers wince at the idea. -- Chris
  5. Apologies for not being clear. I was suggesting that the parity solution for which spinning rust is well-suited won't be useful for all-solid-state NAS devices, which will instead have to rely on RAID-based storage-pool solutions on unassigned devices. The recommendation about unassigned drives you mention seems to me to refer to their use in current conventional HD-based UnRAID configurations. It appears that solid-state storage will change that, particularly in future versions of the operating system that make the main (optionally parity-protected) array itself an option.
     > Why should the name Unraid be an issue?
     In which case the name "UnRAID" might be thought to be inappropriate. -- Chris
  6. @JonathanM, this question seems to me to be crucial. If I understand the problem correctly, there's an essential mismatch between the (great) idea of a parity-checking hard drive (or drive pair) that is responsible for looking after changes of data on an array of drives and the way an SSD works. While spinning rust can without too much contortion be thought of as capable of (re)writing single parity bits one at a time (the XOR arithmetic behind this is sketched after this list), the same process on an SSD is much more of a palaver. The picture I'm getting is of a future all-SSD NAS that uses only unassigned drives pooled using conventional Linux storage techniques (which are, of course, RAID-based).* All the other good UnRAID stuff (readily installable docker apps and so on, maintained by a vibrant community) should continue to thrive. But what happens to the name "UnRAID"? -- Chris
     * LATER: Foolish error on my part. These pools are categorised as "user-defined pools", not "unassigned drives".
  7. Thanks, trurl. No, it's unpingable and its address is unknown on the network. I've rebooted it several times, with and without the UnRAID boot USB. Without the UnRAID USB it should boot into QTS as the original DOM is still in place. However, it doesn't – all eight drive lights are red, I'm getting nothing over HDMI and the built-in display is stuck at SYSTEM BOOTING… As it won't boot into either UnRAID or QTS, it's looking very much as if we have a QNAP hardware failure. But I'd welcome any further suggestions. -- Chris
  8. I tried to log in today to my UnRAID 8-bay server based on a QNAP TS-853 Pro which has been running flawlessly for several months. I was unable to find it at its usual IP address even after a reboot. As far as I can make out, the Ethernet port is flashing normally and the connection to my switch is secure. I've tested the UnRAID boot USB in a second machine and it boots apparently correctly into UnRAID with the static IP address I'd allocated. I'd welcome any suggestion on how to go about diagnosing this further. Please let me know if any further details would be useful here. -- Chris
  9. My problem with calibre on UnRAID is that the webGUI is completely intractable when accessed from a phone. While the regular UnRAID webGUI can be zoomed and panned, calibre's simply fails to shift, making it impossible to access necessary parts of the screen. Has anyone found a workaround for this? -- Chris
  10. Good question, @John_M. It's all set out in the UnRAID story. The SSD in question uses RAISE, and I did ask the forum earlier whether this made an external TRIM utility redundant, but the response seemed to be to go ahead with TRIM anyway. I still have my doubts about whether the combination of btrfs, TRIM and RAISE is asking for trouble. I understand that OWC, the manufacturer, is currently investigating this. Meanwhile I'm running the SSD, now happily formatted as xfs and TRIMmed weekly, uneventfully. -- Chris
  11. I've raised a query with the vendor about this, @JorgeB. I'm continuing to run the cache xfs-formatted with a weekly TRIM. I'll report back here if I find out more. -- Chris
  12. Thanks for that, @JorgeB. This was the other error being thrown up. Can we be sure it wasn't the result of a TRIM conflict? -- Chris
  13. Not only is the TRIM utility not useful for btrfs-formatted SSDs (because TRIM is built into btrfs) but it appears to be positively dangerous. My recently destroyed btrfs SSD cache drive was using the UnRAID TRIM utility (on advice from other UnRAID users) and succumbed after a couple of months, turning read-only with the error:
      cache and super generation don't match, space cache will be invalidated
      If I'm reading this aright (always a big IF) and the current UnRAID implementation of TRIM is destructive on btrfs-formatted SSDs, shouldn't the utility be updated to detect btrfs, warn the user, and render itself inactive? (A rough sketch of what that check might look like is appended after this list.) The good news is that only the btrfs formatting was destroyed. Reformatted as xfs, the SSD lives on. But if I can get confirmation of my assertions here, I'm inclined to return the SSD to btrfs. -- Chris
  14. Thanks for that, @ich777. My jDownloader directory is on a single device, which happens to be the cache drive. I'll investigate deleting the .jar file from the appdata directory and report back.
      LATER THAT SAME EVENING: I seem to be in serious trouble. I tried to get to appdata using krusader but its WebUI is kaput too, so I'm using the krusader console. All I can find in /unraid_shares/appdata/ is syncthing. No sign of any other docker data there. -- Chris
  15. Damn, jDownloader is defeating me again. I got it working when I set the port to one not previously assigned to another docker (always a good idea) and it was running fine. But now VNC is telling me it can't connect to the server, and I gather from the logs that I have an invalid or corrupt jdownloader.jar. The only two changes I've made to the config files are to set the port to 18080 and to change the local download path to mnt/user/jDownloader/. -- Chris
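
A few sketches to accompany the posts above. First, a toy Python model of the copy-on-bit-flip cost described in item 3. The block geometry is invented, and real SSD controllers do a great deal more (wear levelling, over-provisioning, caching), so treat this as an illustration of the asymmetry only, not a description of any actual device:

```python
# Toy model of why a single bit-flip can be cheap or expensive on NAND flash.
# Purely illustrative; real controllers are far more sophisticated.

PAGES_PER_BLOCK = 64  # hypothetical geometry


def cost_of_bit_flip(old_bit: int, new_bit: int) -> tuple[int, int]:
    """Return (pages_written, blocks_erased) for updating one stored bit."""
    if old_bit == 1 and new_bit == 0:
        # NAND can program a 1 down to a 0 in place: one page write, no erase.
        return 1, 0
    if old_bit == 0 and new_bit == 1:
        # Raising a 0 back to 1 needs an erase, and erases work on whole
        # blocks, so the controller copies the block's still-valid pages
        # elsewhere and queues the old block for erasure.
        return PAGES_PER_BLOCK, 1
    return 0, 0  # no change, nothing to do


# A parity device flips bits in both directions all day long, so roughly
# half of its single-bit updates land on the expensive path:
for old, new in [(1, 0), (0, 1)]:
    writes, erases = cost_of_bit_flip(old, new)
    print(f"{old}->{new}: {writes} page write(s), {erases} block erase(s) queued")
```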
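
Second, the parity arithmetic behind item 6. As I understand it, Unraid's single parity is a plain XOR across the data drives (RAID 4 style), so updating a byte on one data drive only requires flipping the corresponding changed bits on the parity device. The byte values here are invented for illustration:

```python
# Single-parity update rule: new_parity = old_parity XOR old_data XOR new_data.

old_data   = 0b10110010   # byte being overwritten on one data drive
new_data   = 0b10100011   # its replacement
old_parity = 0b01011100   # corresponding byte on the parity drive (made up)

new_parity = old_parity ^ old_data ^ new_data
flipped    = bin(old_parity ^ new_parity).count("1")

print(f"parity byte: {old_parity:08b} -> {new_parity:08b}")
print(f"bits flipped on the parity device: {flipped}")  # 2: the same bits that changed on the data drive
```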
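
Finally, a rough sketch of the btrfs guard proposed in item 13. This is emphatically not the Unraid TRIM plugin's own code, and whether a manual TRIM really is harmful on btrfs remains the open question above; the snippet only shows how a filesystem-type check could gate the fstrim call. The /mnt/cache mount point is an assumption:

```python
# Sketch only: skip the scheduled fstrim when the target turns out to be btrfs.
import subprocess

MOUNTPOINT = "/mnt/cache"  # hypothetical cache mount point


def fs_type(mountpoint: str) -> str:
    """Ask findmnt what filesystem is mounted at `mountpoint`."""
    result = subprocess.run(
        ["findmnt", "-n", "-o", "FSTYPE", mountpoint],
        capture_output=True, text=True, check=True,
    )
    return result.stdout.strip()


def maybe_trim(mountpoint: str) -> None:
    if fs_type(mountpoint) == "btrfs":
        print(f"{mountpoint} is btrfs: warn the user and skip the manual TRIM.")
        return
    subprocess.run(["fstrim", "-v", mountpoint], check=True)


if __name__ == "__main__":
    maybe_trim(MOUNTPOINT)
```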