ds123

Posts posted by ds123

  1. 18 hours ago, whipdancer said:


    IMO WD <Color> <anything> is currently overpriced. The exception is probably the Blue drives, which have slower RPM and smaller cache and don't come in a size I'd consider for data storage - but otherwise those are attributes that make very little difference in Unraid, in my limited, strictly anecdotal experience.

    I'm curious if those price trends are recent and/or more indicative of Toshiba than of general pricing strategies. I know that 12TB IronWolf, Red Pro, Exos, IronWolf Pro, Red Plus, and Toshiba NAS were all over $360-ish when I was looking last summer. Technically, each of those models is targeted toward a different market, but that does not factor into my purchases (which is why I bought the enterprise drives I did, when I did - strictly $/TB).

    Nostalgically speaking, what I wouldn't give for some WD Green 18TB drives. My Green drives all gave me better than 40k hours of service before I retired them. Four of them now live in a friend's QNAP (or whatever) NAS.

     

    So if pricing is not a factor, are you saying it doesn't really matter whether it's an enterprise drive or not?

    I'm mainly concerned about the noise level and temperatures, because enterprise drives are designed for use in data centers, where noise is less important and there are massive cooling systems.

  2. 19 minutes ago, whipdancer said:

    I'm using WD Ultrastars which are enterprise drives.  No issues so far.  I got them because of the deal at the time, not because I care that they are enterprise drives.

    The Backblaze data is rather eye-opening if you've never seen it. There does not appear to be a compelling reason to use enterprise drives when focused purely on costs (warranty/support associated with enterprise relationships are an entirely different consideration).

    They actually seem to be sold at a much lower price. For example:

    WD Red Plus 14TB - $410: https://www.amazon.com/Western-Digital-14TB-Internal-Drive/dp/B08V13TGP4

    Toshiba MG Series Enterprise 14TB - $330: https://www.amazon.com/Toshiba-14TB-SATA-7200RPM-Enterprise/dp/B07DHY61JP
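
    At those listed prices the per-terabyte difference is roughly $410 / 14 TB ≈ $29/TB for the Red Plus versus $330 / 14 TB ≈ $24/TB for the Toshiba MG, i.e. the enterprise drive comes out about 20% cheaper per terabyte.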

  3. Hi,

    Toshiba Enterprise hard drives seem to be sold at a much lower price than traditional NAS hard drives like WD Red.

     

    Are they suitable for home use in an Unraid system, in terms of reliability, noise, and temperatures? I'm specifically asking about the helium-filled MG08 14TB: https://www.newegg.com/toshiba-mg08aca14te-14tb/p/N82E16822149785

     

    My current setup:

    [screenshot of current setup]

     

    I want to replace the 6TB data HDD (which is more than 5 years old) with a 14TB drive.

     

    Thanks

  4. 13 hours ago, ich777 said:

    Well that's the Integrated Memory Controller... Hope that makes it a little bit clearer what it is and what it does.

     

    That's a bug in the intel_gpu_top executable and nothing that can be easily solved for now...

    Can you share your diagnostics so that I can send them to the intel_gpu_tools developers so that they can take a look?

     

    Sure, thanks.

     

    tower-diagnostics-20210821-1129.zip

  5. I think there is a bug in the latest release.

    Sometimes when clicking on "reload all concerts", the container crashes and stops. This didn't happen in the previous version.

     

    My settings:

    [screenshot of settings]

    Logs:

    MediaElch 2021-05-03 19:07:43.092 DEBUG : [ConcertFileSearcher] Adding concert directory "/concerts"
    MediaElch 2021-05-03 19:07:43.094 DEBUG : Index is invalid
    [services.d] stopping services
    [services.d] stopping app...
    [services.d] stopping x11vnc...
    caught signal: 15
    03/05/2021 19:07:43 deleted 50 tile_row polling images.
    03/05/2021 19:07:43 Restored X server key autorepeat to: 1
    [services.d] stopping openbox...
    [services.d] stopping statusmonitor...
    [services.d] stopping logmonitor...
    [services.d] stopping xvfb...
    [services.d] stopping nginx...
    [services.d] stopping certsmonitor...
    [services.d] stopping s6-fdholderd...
    [cont-finish.d] executing container finish scripts...
    [cont-finish.d] done.
    [s6-finish] syncing disks.
    [s6-finish] sending all processes the TERM signal.
    [s6-finish] sending all processes the KILL signal and exiting.

  6. 9 hours ago, mDrewitt said:

     

    I'm seeing the same thing on a new install. I can't figure out what's going on at all; no other logs seem to show any errors.

     

    https://github.com/qbittorrent/qBittorrent/issues/11150

    It fails because qBittorrent doesn't have access to private trackers.

     

    I found a workaround: install the Jackett docker container, configure an indexer for the private tracker, enable the Jackett search plugin in qBittorrent, and then search and download from there.

  7. I can't download torrents from their URLs. Nothing happens when I try to do that.

    I looked at the log file, and all I see is the following line:

    "Downloading '{someTorrentUrl}', please wait..."

     

    It also happens with the linuxserver docker image.

     

    Any ideas? I'm using the default settings; I only disabled the VPN and changed the port via the web UI.

  8. On 4/4/2021 at 12:56 AM, itimpi said:


    In principle that plan should work. One step you have omitted is a step 5a where you use Tools->New Config to reset the array to an uninitialised state so you can then assign the 14TB as parity and the data drives as you want them. When you start the array the data drives will be left with their contents intact and unRaid will start building parity on the 14TB drive. Just make sure that you have cleared all data off the 14TB drive, as building parity will completely overwrite its contents.

     

    When assigning the 14TB as parity, is it safe to reorder the data disks and change their current assignments?

  9. On 1/30/2021 at 2:47 AM, LushFire said:

    I am on version 6.9.0 RC2

     

    I have been trying to find a way to get control of my fans through unraid.

    So far I have found one method, which I do not particularly like, found here.

     

    I have also found a discussion regarding this issue here; it seems like a better solution than changing the above-mentioned parameter.

     

    This is the potential risk with using "acpi_enforce_resources=lax"

     

     

    I have been trying to follow the install instructions for the driver, but I run into this error when running the make command.

    
    make[1]: *** No rule to make target 'modules'.  Stop.
    make: *** [Makefile:73: modules] Error 2

     

    Any help or guidance is appreciated.

     

    Have you tried this?

    https://github.com/t-8ch/linux-gigabyte-wmi-driver

     

    I have the same model and am considering whether to give it a try.
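
    For reference, the "No rule to make target 'modules'" error quoted above usually means make cannot find a kernel build tree to hand the module build off to (stock Unraid does not ship kernel headers by default). A rough sketch of the standard out-of-tree module build, assuming a matching kernel build tree is actually present at the usual location:

    # sketch only - assumes the headers/build tree for the running kernel exist
    cd linux-gigabyte-wmi-driver
    make -C /lib/modules/$(uname -r)/build M=$(pwd) modules
    # then load the resulting .ko with insmod for a quick test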

  10. 22 minutes ago, itimpi said:

    Common ways are:

    • from the command line using Midnight Commander (the 'mc' command) or some other tool such as rsync. This is probably the fastest.
    • from a docker container such as Krusader. This gives a good graphical interface.
    • over the network from your favorite client OS.

     

    If I use option #1, which tool is the fastest? And is it possible to track the progress from another session (web GUI session / SSH session)?
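
    For reference, a minimal sketch of option #1 using rsync, with hypothetical paths (it assumes the old QNAP disk is mounted by Unassigned Devices at /mnt/disks/qnap1 and the files should land in a Media folder on disk1):

    # -a preserves permissions and timestamps, --info=progress2 prints overall progress
    rsync -a --info=progress2 /mnt/disks/qnap1/ /mnt/disk1/Media/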

  11. 11 minutes ago, itimpi said:


    You can create the share either via the GUI (which will then create the folder to hold the share's data) or manually by copying explicitly to a disk, using the folder name which you want the share to have.

     

    When you later reset the array to use the 14TB drive as parity, any data on the remaining array data drives will be left intact.

     

    Thanks.

    One last question before starting the process: I see there is no built-in file manager in Unraid (I know that I can install one).

    What is the best option for copying the data?

    As I wrote before, I will mount the QNAP disks as unassigned devices, so the files are copied directly rather than over the LAN.

    It should be as fast as possible, and I also need a way to see the progress, via the GUI if possible.
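
    One way to watch progress from a second session, sketched here under the assumption that the copy is started over SSH and logged to a file (all paths hypothetical):

    # session 1: run the copy detached from the terminal and log its output
    nohup rsync -a --info=progress2 /mnt/disks/qnap1/ /mnt/disk1/Media/ > /tmp/copy.log 2>&1 &

    # session 2 (another SSH login): follow the log, or compare sizes periodically
    tail -f /tmp/copy.log
    watch -n 60 du -sh /mnt/disk1/Media

    The web GUI does not show progress for a command-line copy directly, but the used-space figures on the Main page will climb as the transfer proceeds, which gives a rough indication.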

  12. 7 minutes ago, itimpi said:

    As was mentioned, if you simply create a top-level folder on one of the array drives you intend to keep in the array, it will automatically be treated as a User Share with the same name as the folder.

     

    Cool. I can also create this share via the web GUI, right?

    And once I finish copying all the content to this share and reset the array (to assign the parity disk), will the share remain intact?

    Sorry for all the questions; this is my first experience with Unraid.

  13. 2 minutes ago, itimpi said:


    Why bother to even assign the 14TB drive to the unRaid array?   You could simply use it as an Unassigned Device (mounted using the UD plugin) as it is only a temporary home for files. 

     

    Good question. I already assigned the 14TB drive, so I guess I need to proceed this way.

    Anyway, the question is how to create the share that will eventually contain all the data from the disks.

  14. 9 minutes ago, itimpi said:

    Since the total space on your current QNAP NAS is 36TB and you say there is only 23TB of data on them, you might be able to minimise the amount of copying involved by looking at the free space on each drive and working out if you can get the drives into unRaid in an order that means only some of the data needs to go via the 14TB drive.

    If I move the two larger disks first, I will have enough space to migrate one more disk without its data needing to go via the 14TB drive.

    But the fourth drive's data will need to go via the 14TB drive.

  15. 12 minutes ago, jonathanm said:

    No.

    If you want the files to end up in a user share named Media, just copy them to /mnt/disk1/Media/

    All folders in the root of the disks ARE user shares already. Creating folders directly on the disk automatically creates a user share with that name.

     

    Parity doesn't have a filesystem or a format.

     

    But disk1 will eventually be assigned as parity, and building parity will completely overwrite its contents.

    So I can't create this share on disk1.

    Disk 1 (the 14TB drive) is currently used just to hold the files from an existing QNAP disk temporarily; they are then copied back to that same disk once it has been added to the array and formatted as a data disk.

    After all the disks have been migrated, the 14TB drive will be assigned as parity.
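
    For context, a minimal sketch of the folder-equals-share behaviour jonathanm describes, using a hypothetical array data disk and source path:

    # creating a top-level folder on any array data disk automatically makes it
    # (and anything copied into it) part of the user share with that name
    mkdir -p /mnt/disk2/Media
    rsync -a --info=progress2 /mnt/disks/qnap1/ /mnt/disk2/Media/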

    What would be the best way to copy the files?

    Eventually I want a media share with all the files, so I will create a Media share and exclude the 14TB drive from it (because that drive will be used for parity once the transfer is complete).

    When I copy the data to the 14TB drive (assigned as disk1), it will of course be copied directly to /mnt/disk1. But after each transfer from an existing disk to the 14TB disk, I will copy the files from /mnt/disk1 to /mnt/user/Media (the Media user share).

    Is that okay?