Everything posted by danioj

  1. I'd be interested in this too. The GUI "not responding" issue plagues users in various situations and has done so across multiple versions of unRAID. I got around it in the past by installing unMENU: when the GUI went down - which unfortunately was often - I was still able to pull a syslog and see what was going on! I was trying to stay away from the command line and use unRAID the way it was intended, but I have resigned myself to the fact that using the command line with unRAID is pretty much compulsory *shrugs*. Not sure if unMENU works with v6 - as I said, I just use the command line now. Anyway, to help you: I find that identifying the process that is hanging the GUI is the best way to solve these issues. Find it and decide - kill it or wait for it to complete. Usually (if you're patient) the process will finish and the GUI will become responsive again. But if you're not patient, telnet into your system and run ps -ef. This will bring back a list of running processes; the last few listed are usually where the culprit is, and I find it is often easy to figure out which one it is. For example, the last few times for me it has been "Unassigned Devices" doing a mount or unmount that hung things. Kill that process and boom, the GUI is back and responsive again. I have also killed docker or KVM in the past when I felt one of those was the culprit. If you're brave, decide which process you think is hanging the GUI and use kill <process_id> to kill it (a short sketch of these commands follows this post). I do find, however, that just waiting - going to grab a cuppa and coming back later - is often the best method. I only use the above as a last resort or if I am in a hurry. Usually things work out and the GUI comes back if you just let things play out.
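
Roughly the commands I mean - just a sketch; the grep term is only an example and <process_id> is whatever PID the ps output shows for the offending process:

# list every running process; the one hanging the GUI is usually near the bottom of the list
ps -ef

# optionally narrow it down to a suspect (the search term here is only an example)
ps -ef | grep -i mount

# kill the offending process by its PID, taken from the ps output
kill <process_id>

# absolute last resort: force-kill it if it ignores the normal signal
kill -9 <process_id>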
  2. I've got to ask: what is the third one that wifey uses? I'd like the code if that's cool!
  3. The way I did it was to add it as a cache drive, format it as XFS, remove it so it becomes unassigned, and then mount it via Unassigned Devices. (For a command-line alternative, see the sketch below.)
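
For anyone who prefers the console, roughly the equivalent without the cache-drive detour - a sketch only. The device name is an example and these commands wipe the disk, so double-check it first:

# identify the target disk before doing anything destructive
lsblk

# create a fresh GPT label and one partition spanning the whole disk (example device name)
parted --script /dev/sdX mklabel gpt mkpart primary 0% 100%

# format the new partition as XFS, then let Unassigned Devices mount it
mkfs.xfs /dev/sdX1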
  4. I have noticed that this plugin doesn't do "great" with Samba shares where the network is flaky or the connection is poor. I cross-mount shares between my two servers: I mount the flash share, the app share and a data share of the Main Server on the Backup Server and vice versa. When trying to shut down one server there is some ... lag ... while it tries to unmount. Be patient and the plugin does its job. Yes, the GUI does hang, but as long as you're patient things work out. If you're not patient, turn off the switch your servers are connected to (assuming of course you have a dedicated switch) and things move quicker!
  5. I used CHBMB's help as documented above and it works great! While I am at it - @CHBMB - have you noticed that ownCloud has stopped "recommending" their mail application? When I updated I noticed that it had been removed (which is probably because it was "saved" to the Docker file system itself and was not persisted), but when I went to add it again I noticed it had stopped being recommended. It certainly doesn't seem like they have developed it for some time.
  6. Just updated and all seemingly went well. I did need to use the FAQ section of the original post: Q: Whenever I try to do an update to the ownCloud docker, it tries to update my database and just gets stuck on the page that says: "This ownCloud instance is currently being updated, which may take a while. This page will refresh itself when the ownCloud instance is available again." A: Please try to upgrade from the command line: docker exec ownCloud sudo -u nobody -s /bin/bash -c "php /var/www/owncloud/occ upgrade" The only difference in this update was the page: the text was a little different and it just asked me to click here to update. When I clicked on it nothing happened. No matter - once I dropped to the command line and executed the above command, it worked great. Thank you!
  7. Hi All,

From around November 2014 I was thinking of building a new system suite because my requirements had outstripped the system I had. For detailed reference please go here: http://lime-technology.com/forum/index.php?topic=37567.0

Part of the consideration for the new system was the selection of new hard disk drives. I had decided to go with the largest capacity drive I could get on my budget - which happened to be WD Reds/Greens - but then talk of the new Seagate 8TB drives was increasing. The introduction of these drives would get me a larger capacity drive at a lower price point than the WDs. There was a lot of speculation as to whether these drives would be suitable in an Unraid setup / environment. It all boiled down to this for me: would there be a significant slowdown in write speed while using these drives, and how often would I experience it? I decided to take a punt on the drives and bought myself 3 initially to form the array in my new Backup Server. Before I deployed the drives I did some testing to determine what I should expect from them once deployed. Here is a record of that testing and my findings.

Note: I have not posted screenshots and evidence of tests again as I did that a lot in this thread: http://lime-technology.com/forum/index.php?topic=36749.0 I will however if people ask me to. I hope this is good enough for all.

Systems Used (configuration at the time of testing):

Main Server (Source): Case: Antec P183 V3 | Motherboard: ASUSTeK COMPUTER INC. - P8B75-M LX | Memory: G.Skill 1666 Ripjaws 4096 MB | CPU: Intel® Celeron® CPU G550 @ 2.60GHz | Power Supply: Antec Neo Eco 620 | Flash: Kingston DT_101_G2 7.91 GB | Parity: WDC_WD30EFRX 3TB | Data: Disks 1-4: 4 x WDC_WD30EFRX 3TB | Cache: WDC_WD20EARS 2TB | Spare: None | App-Drive: None (Using Cache)
Totals: Array Size: 15TB - Available Size: 12TB

Backup Server (Destination): Case: SilverStone Black DS380 Hot Swap SFF Chassis | Motherboard: ASRock C2550D4I Mini ITX Motherboard | Memory: Kingston KVR16LE11S8/4I 1666 ECC Unbuffered DDR3L, ValueRAM 4096 MB | CPU: Intel® Atom™ Processor C2550 (2M Cache, 2.40 GHz) | Power Supply: Silverstone ST45SF-G 450W SFX Form Factor | Flash: Kingston DT_101_G2 7.91 GB | Parity: Seagate ST8000AS0002 8TB | Data: Disks 1-2: 2 x Seagate ST8000AS0002 8TB | Cache: None | Spare: None | App-Drive: None (No Apps)
Totals: Array Size: 25TB - Available Size: 16TB

Transfer Manager (Medium): Host: 27" iMac Core i5 3.2GHz, 8GB RAM, 1TB HDD. Guest: 2 CPU cores, 2GB RAM, 100GB hard disk space, bridge mode (no audio or device passthrough).

Network: TP-LINK TL-SG1005D 5 Port Gigabit Switch plugged into each machine via quality Cat6a cables.

Software:
Main Server: Unraid 5.04*
Backup Server: Unraid 6.14b*
*All plugins and virtualisation were disabled on both machines.
Preclear Script: bjp999's unofficial faster preclear script v1.15b
Transfer Manager: Windows 10 (December '14 Developer Preview with all latest patches installed as of April 2015) virtual machine running on the VirtualBox hypervisor 4.3.26 on OS X Yosemite. TeraCopy v2.3 Stable was installed on the Windows 10 guest to manage the copy operations.

Unraid Array Configuration: Hard disk drives on each server were deployed in a parity protected array as follows:
Main Server: 5 x Western Digital WDC_WD30EFRX 3TB. Drive 1: Parity. Drives 2 to 5: Data. 12TB of protected usable space.
Backup Server: 3 x Seagate ST8000AS0002 8TB. Drive 1: Parity. Drives 2 to 3: Data. 16TB of protected usable space.

Share Configuration: 1 user share configured identically on both the Main (Source) Server and the Backup (Destination) Server as follows:
Share Name: nas | Allocation Method: Most Free | Minimum Free Space: 40GB | Split Level: Automatically split any directory as required | Excluded disk(s): None | Share protocol: SMB | Export: Yes | Security: Public

Tests
- Preclear: 3 preclear cycles run independently (i.e. not using the -c x switch in the preclear script).
- S.M.A.R.T Test: 1 long S.M.A.R.T test.
- Small Files benchmark test: 46.56 GB made up of 390,688 files of exactly 125KB.
- Medium Files benchmark test: 3.7 TB made up of 22,526 files ranging between 400MB and 4GB.
- Large Files benchmark test: 4.7 TB made up of 1,472 files ranging between 5GB and 40GB.

Method
I ran 3 preclear cycles and then a long S.M.A.R.T test per disk (Seagate ST8000AS0002 8TB drives only).
- preclear_bjp.sh -f -A -c 3 /dev/sdx
EDIT: I have since bought more of these drives and did manage to complete a full 3 cycle preclear using the above command. I have added those results too.
**I was unfortunate enough to have a power outage in the middle of cycle 2, so I had to restart the test from cycle 2. Due to the power cut I decided to run both remaining cycles separately. This is reflected clearly in the results.**
- preclear_bjp.sh -f -A /dev/sdx
- smartctl -t long /dev/sdx
Once the drives were cleared I deployed them as per the configuration above. I created a mapped network drive on the Transfer Medium to the 'nas' share on the Main and Backup Server. I then configured TeraCopy to point to the Main Server as source and the Backup Server as destination, with verify copy selected. Each test was run independently.

Results

Preclear
(Cycle 1, before power interruption)
- Initial Pre Read speed at the commencement of the preclear was ~170MB/s for ALL 3 drives.
- All three completed their Pre Read with a FINAL speed at 100% of ~80MB/s, taking ~20 hours.
- Initial zeroing speed of cycle 1 of 3 was ~200MB/s for ALL 3 drives.
- All three completed their Zeroing with a FINAL speed at 100% of ~138 MB/s, taking ~16 hours.
- Total time for cycle 1 at this point was ~36 hours.
- Initial Post Read speed was ~200MB/s for ALL 3 drives.
- All three completed their Post Read with a FINAL speed at 100% of ~115 MB/s, taking ~16 hours.
Total time for cycle 1 was ~52 hours.
**Note: between now and somewhere in the Pre Read of cycle 2 I had a power outage, and as such had no reports to interpret, so I started again as detailed in the Method section above.**

(Cycle 2, after power interruption)
- Initial Pre Read speed at the commencement of the preclear was ~174MB/s for ALL 3 drives.
- All three completed their Pre Read with a FINAL speed at 100% of ~110MB/s, taking ~20 hours.
- Initial zeroing speed of cycle 2 of 3 was ~200MB/s for ALL 3 drives.
- All three completed their Zeroing with a FINAL speed at 100% of ~136 MB/s, taking ~16 hours.
- Total time for cycle 2 at this point was ~37 hours.
- Initial Post Read speed was ~200MB/s for ALL 3 drives.
- All three completed their Post Read with a FINAL speed at 100% of ~110 MB/s, taking ~21 hours.
Total time for cycle 2 was ~58 hours.

(Cycle 3)
- Initial Pre Read speed at the commencement of the preclear was ~170MB/s for ALL 3 drives.
- All three completed their Pre Read with a FINAL speed at 100% of ~110MB/s, taking ~20 hours.
- Initial zeroing speed of cycle 3 of 3 was ~200MB/s for ALL 3 drives.
- All three completed their Zeroing with a FINAL speed at 100% of ~136 MB/s, taking ~16 hours.
- Total time for cycle 3 at this point was ~36 hours.
- Initial Post Read speed was ~200MB/s for ALL 3 drives.
- All three completed their Post Read with a FINAL speed at 100% of ~105 MB/s, taking ~21 hours.
Total time for cycle 3 was ~57 hours.

EDIT: As noted in the Method section, I have since bought more of these drives and did manage to complete a full 3 cycle preclear using the above command. The summary results were:
== invoked as: ./preclear_bjp.sh -f -A -c 3 /dev/sde
== ST8000AS0002-1NA17Z Z8404KRE
== Disk /dev/sde has been successfully precleared
== with a starting sector of 1
== Ran 3 cycles
==
== Using :Read block size = 1000448 Bytes
== Last Cycle's Pre Read Time : 19:42:48 (112 MB/s)
== Last Cycle's Zeroing time : 17:07:40 (129 MB/s)
== Last Cycle's Post Read Time : 20:59:15 (105 MB/s)
== Last Cycle's Total Time : 38:07:58
==
== Total Elapsed Time 133:19:52

Long S.M.A.R.T Test
The test ran on each disk without error and took ~940 minutes to complete.

Small Files
Benchmark test indicated an expected speed of ~1.2MB/s.
Random observations from the test:
13% (813KB/s after 53,505 of 390,688 files totalling 6.38GB of 46.56GB)
30% (563KB/s after 117,051 of 390,688 files totalling 13.95GB of 46.56GB)
42% (500KB/s after 165,674 of 390,688 files totalling 19.75GB of 46.56GB)
58% (438KB/s after 228,366 of 390,688 files totalling 27.22GB of 46.56GB)
85% (875KB/s after 334,593 of 390,688 files totalling 39.88GB of 46.56GB)
89% (575KB/s after 347,650 of 390,688 files totalling 41.44GB of 46.56GB)
100% (938KB/s after 387,723 of 390,688 files totalling 46.22GB of 46.56GB)
At non-recorded intervals the test was monitored and there were no noticeable deviations from the speeds reported above.
Average observed speed was ~671KB/s.
Note: I did not take much notice of the post-copy verification beyond that I know it sustained ~5.5MB/s start to finish.

Medium Files
Benchmark test indicated an expected speed of ~40MB/s.
Random observations from the test:
19% (40MB/s after 4,994 of 22,526 files totalling 846MB of 2.79TB)
41% (41MB/s after 10,442 of 22,526 files totalling 1.56TB of 3.79TB)
60% (41MB/s after 13,251 of 22,526 files totalling 2.3TB of 3.79TB)
80% (40MB/s after 19,000 of 22,526 files totalling 3.05TB of 3.79TB)
100% (41MB/s after 22,526 files totalling 3.79TB of 3.79TB)
At non-recorded intervals the test was monitored and there were no noticeable deviations from the speeds reported above.
Average observed speed was ~40.6MB/s.
Note: I did not take much notice of the post-copy verification beyond that I know it sustained ~46MB/s start to finish.

Large Files
Benchmark test indicated an expected speed of ~42MB/s.
25% (38MB/s after 264 of 1,472 files totalling 1.16TB of 4.7TB)
39% (41MB/s after 528 of 1,472 files totalling 1.86TB of 4.7TB)
58% (38MB/s after 786 of 1,472 files totalling 2.68TB of 4.7TB)
82% (40MB/s after 1,023 of 1,472 files totalling 3.76TB of 4.7TB)
100% (38MB/s after 1,472 of 1,472 files totalling 4.7TB of 4.7TB)
At non-recorded intervals the test was monitored and there were no noticeable deviations from the speeds reported above.
Average observed speed was ~39MB/s.
Note: I did not take much notice of the post-copy verification beyond that I know it sustained ~51MB/s start to finish.

My Conclusion
The speeds I saw from the Seagate drives were exactly as I had hoped. Personally I saw no difference between the speeds I obtained using these drives with Unraid and the speeds I have seen using WD Reds with Unraid. This makes me very happy. I think that whatever Seagate has done to mitigate the SMR technology has been done excellently, and it mitigates any observable write penalty we have been discussing and speculating about. All in all, whether you've got your Unraid array filled with Large (~5GB to ~40GB), Medium (~400MB to ~4GB) or Small (<=215KB) files, I have observed and now reasonably expect this Seagate 8TB SMR drive to perform on par with the WD Red PMR drives I have in my Main Server.

Based on my testing and observations I believe these are excellent drives, which I would recommend for anyone using Unraid in the way that I (and I believe a lot of others in the community) do. That goes for use as a Data or Parity drive. I will certainly be putting these drives in my Main Server as well as my Backup Server now, and will do so without fear.

It had been suggested that, to see the impact of a full persistent cache, disabling the drives' write cache with hdparm would be worthwhile (a sketch of the suggested command follows this post), but I don't see the need to do that now. What I wanted to see is whether during real-world use I would experience any degraded performance compared to my WD Red PMR drives in normal use in an Unraid environment, and I clearly have NOT experienced any of that.

Noted concerns with the drives: Users have reported that the Seagate Archive 8TB drive's lack of centre mounting holes means it won't mount in the drive cages of a Fractal Design Node 804 case: http://www.fractal-design.com/home/product/cases/node-series/node-804 See here for specific details: http://lime-technology.com/forum/index.php?topic=47061.0;topicseen
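
For completeness, roughly what that hdparm suggestion would look like - I have not run this, and the device name is only an example:

# show the drive's current write-cache setting (example device name)
hdparm -W /dev/sdX

# disable the drive's on-board write cache for the duration of a test
hdparm -W 0 /dev/sdX

# re-enable it afterwards
hdparm -W 1 /dev/sdX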
  8. Look at this: Would this work for you? Is this a link to the pre_clear script? This would be AWESOME! Man you are ploughing through features on the Roadmap faster than LT are! They should hire you! OoooOooo ... 2 links, one for Official and one for Brian's "Faster ...."
  9. Quote: "You want the Open Files field (streams are relevant only if it's serving a share, not mounting it). There's space, so I can add it. This is not feasible for drives, since some drives have multiple partitions, all hidden until you click on the serial number. If mounted, the mountpoint will open the file browser."
Thanks for the open files feature. Ahhh - makes sense about the drives. I was just thinking out loud anyway!
  10. Just had an idea - and by no means is this a criticism, because this is just great - but I was looking at the GUI, and in the same way that the Unassigned Devices section has "Open Files", maybe the SMB share section could integrate with the part of Unraid / the plugin that shows "Active Streams"? Just a thought. EDIT: OoooOoooo: maybe also browse buttons at the end of the row on both the Unassigned Devices and SMB Shares sections too. Make it uniform with the rest. Let the user quickly browse what's on each.
  11. Also, love the fact that even though I have both servers mapped to each other - Main mapped to \\backup\nas and Backup mapped to \\main\nas - this didn't stop either server unmounting all drives and stopping the array! ALSO - when my Backup Server started before my Main Server (the stupid Supermicro board takes forever to POST and boot) there was no noticeable lag in startup time when I started the array, even though it clearly couldn't find the Main share to mount. Happy so far!
  12. Quote: "Using user/password? The Add SMB Mount popup worked fine?"
Nah - I don't use user IDs and passwords for shares on my servers, so I didn't enter anything into the credentials fields. But it worked fine without. EDIT: Here is a screenshot of the popup working great (without user IDs and passwords) on Safari Version 8.0.6 (10600.6.3) on Yosemite.
  13. Quote: "It will only appear when you mount it."
What a knob. I forgot that the first time I have to click Mount. Working perfectly.
  14. Quote: "I need feedback on this new feature. Please let me know if you encounter any bug."
First bit of feedback: from my Backup Server I mounted the "nas" share on my Main Server. Notice that it did not grab the Size, Used or Free values. See screenshot.
  15. Quote: "I need feedback on this new feature. Please let me know if you encounter any bug."
I am happy to contribute. This feature has made me think that I don't need to be using a Windows VM running Sync-Back as my backup solution. I am going to try using the mount created by this feature as part of an rsync backup solution (roughly the sort of command I have in mind is sketched below). Going to give it a go this weekend! Love it!
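
Just a sketch of what I mean - the mountpoint shown is a made-up example of wherever the plugin mounts the Main Server's 'nas' share on the Backup Server, and /mnt/user/nas is the local destination share:

# run on the Backup Server: copy everything from the mounted Main share into the local nas share
# -a preserves permissions/timestamps, -v lists files, --stats prints a summary at the end
rsync -av --stats /mnt/disks/MAIN_nas/ /mnt/user/nas/

# add --delete only if you want a strict mirror (it removes files from the destination
# that no longer exist on the source - use with care)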
  16. Woah - just upgraded my Backup Server to the new v6 stable and updated my plugins as well. Then I noticed the SMB mount section - which clearly is a result of this wonderful plugin! Wanted to say what a great job is being done by the community to enhance this product .... http://lime-technology.com/forum/index.php?topic=40690.msg384504#msg384504
  17. Interesting - I just noticed that same line (different image ID though) when I updated smdion's Reverse-Proxy Docker just minutes ago. What do they have in common - Apache?
  18. Just noticed something interesting with the ownCloud Docker. I get this message in the ownCloud GUI: "ownCloud 8.0.4 is available. Get more information on how to update." I was surprised to see this as I have EDGE set to 1. I thought, OK, I'll force the issue and hit save on the Docker settings page to make it look again. Still nothing - it doesn't seem to be updating? So far CHBMB has confirmed this behaviour too. Can anyone else?
  19. I just read this thread as my weekend tinkering begins after a shocking week at work. Sounds great - I am going to play around. @jimbobulator: re your statement about Unraid maintaining enough resources for web GUI / NAS functionality - Jonp said this in post #6
  20. I tried to install this plugin today. Looks good - and I wanted to have a play. I am running v6.0 RC4. Unfortunately I get an error that I am not able to diagnose. I wonder if someone can help.

Plugin Install Log:
plugin: installing: https://raw.githubusercontent.com/theone11/serverlayout_plugin/master/serverlayout.plg
plugin: downloading https://raw.githubusercontent.com/theone11/serverlayout_plugin/master/serverlayout.plg
plugin: downloading: https://raw.githubusercontent.com/theone11/serverlayout_plugin/master/serverlayout.plg ... done
Installing plugin...
Plugin folder /boot/config/plugins/serverlayout already exists
Checking existing package /boot/config/plugins/serverlayout/serverlayout-package-2015.06.12.tar.gz...
Latest package does not exist /boot/config/plugins/serverlayout/serverlayout-package-2015.06.12.tar.gz
Saving any previous packages from /boot/config/plugins/serverlayout
mv: cannot stat '/boot/config/plugins/serverlayout/serverlayout-package-*': No such file or directory
Attempting to download plugin package https://raw.githubusercontent.com/theone11/serverlayout_plugin/master/serverlayout-package-2015.06.12.tar.gz...
Package server down https://raw.githubusercontent.com/theone11/serverlayout_plugin/master/serverlayout-package-2015.06.12.tar.gz - Plugin cannot install
Reverting back to previously saved packages...
No previous packages to restored
Plugin install failed
plugin: run failed: /bin/bash retval: 1

Syslog:
Jun 12 20:05:11 main emhttp: /usr/local/sbin/plugin install https://raw.githubusercontent.com/theone11/serverlayout_plugin/master/serverlayout.plg 2>&1
Jun 12 20:05:26 main logger: plugin: creating: /tmp/serverlayout-install - from INLINE content
Jun 12 20:05:26 main logger: plugin: running: /tmp/serverlayout-install

It is saying that the package server is down, BUT when I paste the link into a web browser I can get the file and download it. I also did a wget of the file link on the unRAID machine itself and that worked too (along the lines sketched below). Stumped.
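
The manual check was roughly this - the exact flags I used aren't important, the point is that the same URL the plugin reports as unreachable downloads fine from the same box:

# fetch the package the plugin claims the server is "down" for, saving it to /tmp
wget -O /tmp/serverlayout-package-2015.06.12.tar.gz https://raw.githubusercontent.com/theone11/serverlayout_plugin/master/serverlayout-package-2015.06.12.tar.gz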
  21. Yep, got it now. I was clicking the icon and bringing up that menu. I didn't pick up (despite clearly reading it several times) that the "name" of the VM was a link to another menu too, which of course you describe. Just tried it out and it works as advertised. Still needed the partition tool within Windows though - that stupid recovery partition they put directly after the system partition is a pain in the bum: it prevents resizing into the available space, so it needs to be deleted, and (wonderfully) they remove the ability to delete it in the Windows disk utility. But that isn't an Unraid issue. Thank you
  22. Quote: "Are you sure it is attached to a controller that supports disks > 2.2TB?"
I can also confirm this issue. See attached - my 8TB Seagate drive is showing as 2.2TB, and that is sitting on an ASRock C2550 board. In fact it has 3 of these plugged into the array, and Unraid - both the core system and the pre_clear script - recognises the drives as 8TB and runs fine with them. So it must be a little issue with the plugin. I have also noticed that the plugin does not interact with the GUI perfectly (I have checked this on Chrome, IE and Safari, on iOS and OS X): the columns extend beyond the other columns, which makes the page look weird. We don't have to make this an issue - I'd accept this behaviour for the functionality of this great plugin any day! EDIT: v6.0 RC4