Everything posted by dandirk

  1. So I am somewhat confused... I am a legacy Pro owner, never received an email, and do not seem to have any license upgrade option. I assume that since Pro = Unleashed, it is more a waiting game for the new licenses? Or is a version upgrade required? I wouldn't think so, since they suggest using Connect to upgrade.
  2. Ah, welcome! If you are looking for a rabbit hole, Unraid and media is a great one to fall into 😀. In my opinion the 1TB NVMe is going to be plenty for a cache drive as you get comfortable with Unraid. Unraid is pretty forgiving; hardware and setup are easy, and I haven't had any serious software/config issues in years (well, none that weren't my own fault). Installing and setting up plugins or apps (Dockers) is about as easy as it gets. Plex is a pretty standard, simple Docker to learn with (at least to get installed and running).

     Some personal tips: Use shares! Read up on the share "split level" and "allocation method" options. These determine where and how files fill the drives in your array: do you spread files out, or fill up one drive at a time? (A toy sketch of the allocation methods follows this post.) Think about your media/file organization and use multiple shares (there are lots of guides and discussions of different methods), for example Movies, TV, Appdata (for Dockers), and Files.

     So get your server up and running, set up an array with a parity disk and your cache drive. Play around with copies of some of your media to see how things work and what you like. Test out the cache and mover features. Get the "file" stuff working, then move on to Docker and apps like Plex.

     Sonarr (TV) and Radarr (Movies) are part of a family of apps people refer to as "starr" or *arr apps; these are the apps people are talking about that watch your media collection. Like Plex, they are Dockers you install on Unraid, though I would suggest getting your collection and Plex running first. Those are a whole other rabbit hole.
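     A toy sketch of the two simplest allocation methods ("fill-up" and "most-free") to show the difference in behavior; this is my own illustration, not Unraid's actual code, and all disk names and sizes are made up:

     # Toy sketch: how two share allocation methods might pick a
     # destination disk for a new file. Not Unraid's real logic.

     def pick_fill_up(disks, file_size, min_free=0):
         """Fill-up: use the lowest-numbered disk that still has room."""
         for name in sorted(disks):
             if disks[name] - file_size >= min_free:
                 return name
         return None  # no disk has room

     def pick_most_free(disks, file_size, min_free=0):
         """Most-free: use whichever disk currently has the most free space."""
         name = max(disks, key=disks.get)
         return name if disks[name] - file_size >= min_free else None

     # Hypothetical free space per disk, in GB.
     disks = {"disk1": 500, "disk2": 1200, "disk3": 900}

     for picker in (pick_fill_up, pick_most_free):
         free = dict(disks)
         placement = []
         for _ in range(4):                 # write four 100 GB files
             chosen = picker(free, 100)
             free[chosen] -= 100
             placement.append(chosen)
         print(picker.__name__, "->", placement)
     # fill-up stacks every file on disk1; most-free keeps choosing the
     # emptiest disk (disk2 here, until another disk overtakes it).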
  3. Short answer: since you are starting with Unraid, you will be fine with one 1TB cache drive.

     Longer answer: it's a loaded question, as you didn't give enough info to answer properly. How do you plan to use Unraid? I'll give my personal usage/thoughts and what I have seen others do in the forum. Bias warning: I am what I would consider a "power user", someone who knows/desires enough to do some limited tweaking and config, with ease of use and simplicity as priorities. That means sticking to GUI configuration and to plugins and Dockers available from the app store, and avoiding manual cfg-file editing.

     I would guess most people use one cache drive; I do. There are two major uses for it: (1) speeding up uploads to the server, with the Mover then copying them to the protected array at a time/interval you set; and (2) storing Docker/VM files. Cache is faster at writes, so if you are running apps/VMs you don't want them running directly on your protected shares, and cache is the perfect "home". I highly suggest backing up these files; CA Appdata Backup is a great plugin!

     Cache size? Completely dependent on what and how you plan to use your cache. I run a few of the major starr-app Dockers (Sonarr, Radarr, etc.) and Plex server, so my Docker appdata and any "working"/temp download directories live on the cache. It is common for me to get ~1TB of content a month. My cache usage fluctuates; right now it's ~50GB, but it can get into the 100-200GB range. I have yet to turn on advanced Plex metadata, but I plan to, and that will use more of my cache. The rest is purely temporary storage for new files I have uploaded. I have my Mover running nightly, so I really only need enough space for 24 hours of expected new content. So this completely depends on how much you will upload to Unraid and how often you want the Mover to run. (Even then, Unraid has the option to write directly to the protected array if your cache gets full, so things won't fail, just slow down.) A back-of-the-envelope sizing sketch follows this post.

     Cache in RAID/redundancy? Again, completely dependent on your usage, needs, and desires. Cache content is usually fairly temporary, or small enough to back up easily, so a RAID 1 setup really depends on how comfortable you are with failure risk. While RAID 1 will give you redundancy for your cache, do you really want/need it? Appdata backup for your Dockers is most likely enough, though if you are running Windows VMs, where backup/recovery is more resource- and time-consuming, maybe RAID is for you. And keep in mind you do risk any files sitting on cache before the Mover runs. RAID 0 is a valid option if you are running VMs or other disk-performance-sensitive apps, or if you want to combine two smaller NVMe drives.

     Number of cache drives/configs? I would consider this advanced usage; it's not required and, if you are starting out, probably not suggested until you know you need it. I have seen people use multiple cache drives, and even the Unassigned Devices plugin, to segment and tweak usage: some have a dedicated cache for upload files and another for their Dockers/appdata, and some use three disks, keeping VMs on a separate drive entirely for performance.
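     A back-of-the-envelope sketch of the sizing logic above. The figures (~50GB appdata, ~1TB/month, nightly Mover) come from my own usage; the 2x safety margin is just an assumption, not a rule:

     # Rough cache sizing: steady appdata usage plus enough headroom
     # for whatever lands between Mover runs. All figures are estimates.
     appdata_gb        = 50        # Docker appdata, temp dirs, etc.
     monthly_ingest_gb = 1000      # ~1 TB of new content per month
     mover_interval_h  = 24        # Mover runs nightly

     ingest_between_moves = monthly_ingest_gb / 30 / 24 * mover_interval_h
     suggested_min = appdata_gb + 2 * ingest_between_moves   # 2x margin

     print(f"ingest between Mover runs: ~{ingest_between_moves:.0f} GB")
     print(f"suggested minimum cache:   ~{suggested_min:.0f} GB")
     # ~33 GB/day of ingest -> a 1 TB cache leaves plenty of headroom.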
  4. So I'm trying to learn the ins and outs of PCIe generations/lanes with regard to drive counts and bandwidth. Sanity check, please: is my research and understanding correct? My drives are 7200rpm 3.5" drives rated at ~270 MB/s (I am seeing this real-world on my existing LSI 2308 HBA), so 8 drives would need ~2160 MB/s. Does this mean that at Gen3 x8 the 2308 is complete overkill, since that supports up to 7880 MB/s? And that even at Gen3 x4, 3940 MB/s would allow max drive throughput? (The arithmetic is spelled out after this post.) Thanks for confirming (or correcting :)) what I assume is a simple question... I'm thinking of adding another 2308, but I only have an x4 slot left in my config.
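     The arithmetic from the post, spelled out. The ~985 MB/s per lane figure is the usual usable Gen3 number after 128b/130b encoding overhead:

     # PCIe Gen3: 8 GT/s per lane with 128b/130b encoding ~= 985 MB/s usable.
     GEN3_MBPS_PER_LANE = 985

     drive_mbps  = 270          # measured per-drive sequential throughput
     drive_count = 8
     needed = drive_mbps * drive_count          # 2160 MB/s for 8 drives

     for lanes in (8, 4):
         available = GEN3_MBPS_PER_LANE * lanes
         print(f"Gen3 x{lanes}: {available} MB/s available, "
               f"{needed} MB/s needed -> "
               f"{'OK' if available >= needed else 'bottleneck'}")
     # Gen3 x8: 7880 MB/s, Gen3 x4: 3940 MB/s -- both comfortably above 2160.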
  5. Yeah, both are on the same onboard controller (the only 2 ports on that controller). Not a big deal; the question was more general, i.e., is this unsurprising? I certainly need more confirmation: is it the port, the cable, the 4-in-3 dock, etc.? And at the end of the day, it's only 4%... first-world problems.
  6. So I'm running preclear on 2 identical drives... 1 is running consistently 10MB/s slower than the other. Is this common, or a sign of an issue? I still have to test cables/connections, etc...
  7. No idea if related (same ballpark)... I went from 6.7.0 to 6.8.3. While my parity checks appear to run at a similar speed (85-100MB/s), video playback via Kodi was horrible during a parity check and also during a rebuild; it would buffer every minute or so, over both SMB and NFS. I reverted to 6.7.0 and am running a parity check now, and playback appears fine. I will be running a rebuild after the check and will confirm that as well. Update: confirmed, no playback issues during a rebuild on 6.7.0.
  8. Thanks, Squid; the secondary flash drive is a good idea. Yeah, cache is only for the Mover and Dockers.
  9. Noob question... I have a pretty simple Unraid setup: 1 massive user share and a couple of Dockers, with appdata on cache. What would be considered standard practice for the location of backups? I was thinking the appdata backup in a backup dir on my user share, and the USB (flash) backup on the cache drive?
  10. Plain-Jane user here... file server, Sonarr/NzbGet Dockers, and preclear/USB-backup Dockers. That's about it. At one point I used a Windows VM to run apps like Sonarr/NzbGet, just due to confusion about how to use Dockers; I've been fairly happy moving to Dockers for my small needs. I'm considering a DB Docker to share Kodi DBs, but quite honestly I don't mind the simplicity and flexibility of just using the standard Kodi DBs, even if the data isn't "synced".
  11. I was able to stop by 2 Best Buys to get 2... for $160 each. They passed preclear with flying colors... what an awesome deal. I wasn't even looking to upgrade, ha ha ha.
  12. I would guess you should post that question in the respective plugin threads/forums. I'm willing to bet sickbeard has a 5.0 version (if it isn't one already), and you may not require the safe power one (don't know what it is, sorry). The discussion here is just about how to manage and deploy plugins vs. the manual method most are using now. I don't think it will change plugins too much in the short term.
  13. I would rather see 1 repository for addons, with the ability for the dev to host the content any way they wish (wget from http or something) if Tom has issues hosting content due to liability or resource cost. It would be a simple, easy-to-manage system. Heck, you could even link to the official dev thread and do a hash on the downloaded file: if it doesn't match the hash on the dev's post/forum, don't install. (A rough sketch of that check follows this post.) A hacker would have to hack the forums AND the dev-hosted location (or the plugin manager, which should be separate). In my opinion that would be enough security, and simple/flexible enough to handle a lot of situations.

      XBMC has multiple-repository support, and it is cool for branched projects like OpenElec, but as a user it just ticks me off for XBMC proper. I go to the XBMC forum, read all the cool skin/app posts, then go home and try to find them... "wtf, it's not here... oh, it must be alpha or not supported on this version." So I go check the forum again, and I have to set up some stupid repository for one app/skin.

      I also think a review system is a bit over the top; I think it would hinder development more than help it. Personally, I think ensuring the plugin system cannot touch the core server services is a more practical method: prevent plugins from doing damage to the important part, the server. The worst-case scenario for a user who installed a bad plugin should be wiping their flash and re-installing Unraid. This also assumes the Unraid OS can reasonably be wiped and brought back to stock, like it pretty much can today.

      As for dep packages... I would say either keep them current (or at some baseline agreed to by Tom for stability and compatibility) and force devs to keep up, or completely isolate the addon software + deps so they can run any deps they want.
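      A rough sketch of the hash check I mean; the URL, filename, and expected hash below are placeholders, not real values, and the function is my own illustration:

      # Sketch: download a plugin package and refuse to install it unless
      # its SHA-256 matches the hash published on the dev's forum post.
      import hashlib
      import urllib.request

      PLUGIN_URL    = "http://example.com/plugins/someplugin.tgz"  # dev-hosted
      PUBLISHED_SHA = "0" * 64                                     # from the forum post

      def fetch_and_verify(url: str, expected_sha256: str) -> bytes:
          """Download the package and verify it against the published hash."""
          with urllib.request.urlopen(url) as resp:
              data = resp.read()
          actual = hashlib.sha256(data).hexdigest()
          if actual != expected_sha256:
              raise ValueError(f"hash mismatch: got {actual}, refusing to install")
          return data

      # package = fetch_and_verify(PLUGIN_URL, PUBLISHED_SHA)
      # ...hand `package` off to the installer only after the check passes.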
  14. Thanks for all the info and replies, guys... Figured I would update this post for anyone else searching in the future. A little recap: I was messing around, trying to clean up my Unraid (old shares for plugins no longer in use, etc.). I had to run the permission script, but it hung on a drive, locking the server. Hard shutdown, reboot... the drive was disabled. Data was accessible, but some SMART results showed odd, infrequent errors; a short SMART test was fine, but the long test would interrupt/stop after a minute or so. I ended up copying the data off the disk in question, removing it, and then using the new-config utility to rebuild parity. It took a while, but it worked to get the array protected again. Since tests like SeaTools said the drive was OK, I tried to preclear it and re-add it to the array. Preclear also stalled/froze, along with continued SMART test failures... As per the convo here, the drive was probably going down the tubes, or risky enough to ditch. Got a new drive... preclear still hung/stalled at 0%, but didn't hang the server. Swapped out the SAS breakout cable and boom, much better results. Looks like I had the beginnings of a failing drive AND a spotty cable. Pretty sure the drive was going, because though the symptoms were similar, they were slightly better with the new drive (it failed with more responsiveness, lol).
  15. I am using PuTTY to telnet in and run the script... Is it supposed to update the completion %, elapsed time, temp, bytes, etc. in real time while running? Thanks. Well, I'm pretty sure the drive is not liking Unraid anymore... it completely locks up when any sort of constant read/write is done (perm script, preclear, etc.). I'd still like to know if the status updates in real time... though I guess I will find out when I get my replacement.
  16. lol, most of my drives are that old... it's taken a while to fill them up. I actually have a few 750GB drives in there that are older. Thanks for the other evaluation, Gary; most of it was over my head. As for the clearing time, I have since read it can take days, so an estimated 50 hours for a 1.6TB drive could be normal? I am not sure what to think: I saw the speed was 80 MB/s (when it was still reporting), which shouldn't take that long... at those speeds I can't see any size disk taking 2+ days (quick math after this post). I still have plenty of space, and as I have been reminded, my disks are getting pretty old, so I probably should just play it safe and leave the drive out... I am running the SeaTools long test now... will probably find out more in a few hours, but it is pretty much moot.
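      The quick math behind that gut feeling; the three-pass figure assumes preclear's usual pre-read/zero/post-read cycle:

      # How long should clearing a drive take at a steady 80 MB/s?
      drive_tb   = 1.6
      speed_mbps = 80                      # MB/s, as reported

      seconds_per_pass = drive_tb * 1e6 / speed_mbps   # TB -> MB
      hours_per_pass   = seconds_per_pass / 3600

      print(f"one full pass: ~{hours_per_pass:.1f} h")
      print(f"three passes:  ~{3 * hours_per_pass:.1f} h")
      # ~5.6 h per pass, ~17 h for a full preclear -- nowhere near 50 h,
      # so a 50 h estimate suggests the drive (or cable) is struggling.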
  17. So I had a drive go disabled... I was able to move all the data off the drive and rebuild parity, and the array is up and running without the drive. The initial short smartctl test was OK, so I assumed something had just gotten messed up, and I attempted to add the drive back to the array. The normal Unraid clearing seemed to be taking a very long time: 7% in about 5 hours. Speeds were being reported at first, but by the 5-hour mark they were not, so I cancelled the process and brought the array back online. I then attempted to run a couple of SMART long tests... both seemed to stop with an "Interrupted (host reset)" message with 90% remaining. No errors, but the long test being interrupted is making me wonder about the health of the drive. I am going to look for additional HDD tests (a sketch of kicking one off from the console follows this post); just wondering if anyone else has an opinion.
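      For anyone searching later, a minimal sketch of driving a long test from a script, assuming smartmontools is installed; /dev/sde and the polling interval are just examples, and drives vary in how they report an in-progress test:

      # Start a SMART extended self-test and poll the self-test log until
      # it finishes or reports "Interrupted".
      import subprocess
      import time

      DEV = "/dev/sde"   # hypothetical device node

      subprocess.run(["smartctl", "-t", "long", DEV], check=True)

      while True:
          time.sleep(600)   # extended tests take hours; poll every 10 min
          log = subprocess.run(["smartctl", "-l", "selftest", DEV],
                               capture_output=True, text=True).stdout
          newest = next((l for l in log.splitlines()
                         if l.lstrip().startswith("# 1")), "")
          print(newest)
          if "in progress" not in newest.lower():
              break         # completed, interrupted, or failed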
  18. Just FYI... not a big deal... drives not attached to the array could use their own section; right now they appear below the total. Now granted, I was clearing the drive, but it only got to 7% in 5 hours, so I cancelled it to run more tests, and I restarted the array while I was testing, etc... Snapshot attached.
  19. OK, I admit it, I was messing around with my array... I was trying to clean up some things (like old shares/files for the slimserver pkg), Unraid looked locked up, and I did a hard shutdown... I powered back up and one of my disks is disabled. When I run smartctl on it, it looks OK? (I don't really know how to read it.) Not really sure how to correct this, so I am copying data off the disk right now (I know it's using parity data) to other disks. I have plenty of space, but I wasn't sure how to remove the disk and let Unraid do its thing automatically; most people replace the disk, which I may do, but I want to get protection up and running first, since I have the space. Currently my plan is to move the data off the disk, then remove the disk from the array (though this looks old: http://lime-technology.com/wiki/index.php/Shrink_array), then possibly re-add it after I upgrade to beta 16 (on b6 right now). Any thoughts, tips, etc.?

      smartctl -a -A results:

      Tower login: root
      Password:
      Linux 3.4.11-unRAID.
      root@Tower:~# smartctl -a -A /dev/sde
      smartctl 5.40 2010-10-16 r3189 [i486-slackware-linux-gnu] (local build)
      Copyright (C) 2002-10 by Bruce Allen, http://smartmontools.sourceforge.net

      === START OF INFORMATION SECTION ===
      Model Family:     Seagate Barracuda 7200.11 family
      Device Model:     ST31500341AS
      Serial Number:    9VS1G1VS
      Firmware Version: CC1H
      User Capacity:    1,500,301,910,016 bytes
      Device is:        In smartctl database [for details use: -P show]
      ATA Version is:   8
      ATA Standard is:  ATA-8-ACS revision 4
      Local Time is:    Sun Aug 4 18:36:50 2013 CDT
      SMART support is: Available - device has SMART capability.
      SMART support is: Enabled

      === START OF READ SMART DATA SECTION ===
      SMART overall-health self-assessment test result: PASSED
      See vendor-specific Attribute list for marginal Attributes.

      General SMART Values:
      Offline data collection status:  (0x82) Offline data collection activity
                                              was completed without error.
                                              Auto Offline Data Collection: Enabled.
      Self-test execution status:      (   0) The previous self-test routine completed
                                              without error or no self-test has ever
                                              been run.
      Total time to complete Offline
      data collection:                 ( 617) seconds.
      Offline data collection
      capabilities:                    (0x7b) SMART execute Offline immediate.
                                              Auto Offline data collection on/off support.
                                              Suspend Offline collection upon new command.
                                              Offline surface scan supported.
                                              Self-test supported.
                                              Conveyance Self-test supported.
                                              Selective Self-test supported.
      SMART capabilities:            (0x0003) Saves SMART data before entering
                                              power-saving mode.
                                              Supports SMART auto save timer.
      Error logging capability:        (0x01) Error logging supported.
                                              General Purpose Logging supported.
      Short self-test routine
      recommended polling time:        (   1) minutes.
      Extended self-test routine
      recommended polling time:        ( 255) minutes.
      Conveyance self-test routine
      recommended polling time:        (   2) minutes.
      SCT capabilities:              (0x103f) SCT Status supported.
                                              SCT Error Recovery Control supported.
                                              SCT Feature Control supported.
                                              SCT Data Table supported.

      SMART Attributes Data Structure revision number: 10
      Vendor Specific SMART Attributes with Thresholds:
      ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE     UPDATED WHEN_FAILED RAW_VALUE
        1 Raw_Read_Error_Rate     0x000f   120   099   006    Pre-fail Always      -      241079702
        3 Spin_Up_Time            0x0003   097   092   000    Pre-fail Always      -      0
        4 Start_Stop_Count        0x0032   099   099   020    Old_age  Always      -      1245
        5 Reallocated_Sector_Ct   0x0033   100   100   036    Pre-fail Always      -      0
        7 Seek_Error_Rate         0x000f   071   060   030    Pre-fail Always      -      14239774
        9 Power_On_Hours          0x0032   058   058   000    Old_age  Always      -      37542
       10 Spin_Retry_Count        0x0013   100   100   097    Pre-fail Always      -      2
       12 Power_Cycle_Count       0x0032   100   037   020    Old_age  Always      -      152
      184 End-to-End_Error        0x0032   100   100   099    Old_age  Always      -      0
      187 Reported_Uncorrect      0x0032   100   100   000    Old_age  Always      -      0
      188 Command_Timeout         0x0032   093   093   000    Old_age  Always      -      11
      189 High_Fly_Writes         0x003a   067   067   000    Old_age  Always      -      33
      190 Airflow_Temperature_Cel 0x0022   057   042   045    Old_age  Always  In_the_past 43 (0 127 44 42)
      194 Temperature_Celsius     0x0022   043   058   000    Old_age  Always      -      43 (0 14 0 0)
      195 Hardware_ECC_Recovered  0x001a   044   024   000    Old_age  Always      -      241079702
      197 Current_Pending_Sector  0x0012   100   100   000    Old_age  Always      -      0
      198 Offline_Uncorrectable   0x0010   100   100   000    Old_age  Offline     -      0
      199 UDMA_CRC_Error_Count    0x003e   194   194   000    Old_age  Always      -      1687
      240 Head_Flying_Hours       0x0000   100   253   000    Old_age  Offline     -      162066295954594
      241 Total_LBAs_Written      0x0000   100   253   000    Old_age  Offline     -      1937291412
      242 Total_LBAs_Read         0x0000   100   253   000    Old_age  Offline     -      421008021

      SMART Error Log Version: 1
      No Errors Logged

      SMART Self-test log structure revision number 1
      Num  Test_Description    Status                  Remaining  LifeTime(hours)  LBA_of_first_error
      # 1  Short offline       Completed without error       00%        37542     -

      SMART Selective self-test log data structure revision number 1
       SPAN  MIN_LBA  MAX_LBA  CURRENT_TEST_STATUS
          1        0        0  Not_testing
          2        0        0  Not_testing
          3        0        0  Not_testing
          4        0        0  Not_testing
          5        0        0  Not_testing
      Selective self-test flags (0x0):
        After scanning selected spans, do NOT read-scan remainder of disk.
      If Selective self-test is pending on power-up, resume after 0 minute delay.

      root@Tower:~#
  20. Thanks, it does seem to know what it's doing, though I probably won't find out until later; waaay too much stuff :)
  21. I just received two 3TB WD Reds... The plan was to replace the parity drive, then add the second as a data drive. When I checked the server, a data drive (750GB) was red-balled. SMART tests passed, and I can still directly access the disk (tower\disk#), which I thought odd. I rebooted the server and it's still red-balled. I could replace the red-balled drive with one of the 3TB drives, but that would make it the largest drive in the array, larger than parity, which I assume would mess things up. And if I replace parity first, the red-balled drive might keep parity from rebuilding properly. What can I do if I don't have another smaller drive available?
  22. "You do need to verify the controller supports >2TB (the M1015 does). UnRAID 5 does support >2TB. So you are good for 3 and 4TB drives." Cool, thanks... Now I only have to decide what HDDs to get; it seems like there are sooo few choices compared to the past.
  23. So I have been looking into adding a few drives... I currently run 8 assorted drives, all 2TB or smaller. While reading customer reviews of the 3/4TB drives, I saw a lot of talk about GPT partitions and UEFI support on motherboards being required to use more than 2TB... Are these concerns for Unraid servers? I am running a SuperMicro C2SEE board with a single M1015 8-port card... would I have to upgrade anything for >2TB drives?
  24. Case has been picked up. Thanks, everyone.