bubbaQ

Everything posted by bubbaQ

  1. Small bug in the network settings GUI: if you select "static" for the IP address, it greys out the static/automatic choice for the DNS server.... so if both were on Auto and you changed the IP address to static, you could not then change the DNS server to static as well. You have to change the DNS server to static FIRST, before you change the IP address to static.
  2. Some points:
     - Be sure you start the script with the "#!/bin/bash" line.
     - Be sure you are using Linux line endings, not Windows ones.
     - Use apt-get install -y -q <package name>, or else it will fail waiting on you to approve the install.
     Also check the log from the unRAID Docker GUI to see the progress of the script (a minimal skeleton of such a script is sketched below).
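     Something like this, with the package name just an example:

         #!/bin/bash
         # save this file with Unix (LF) line endings, not Windows (CRLF)
         # -q keeps the output terse, -y answers the install prompt automatically
         apt-get update -q
         apt-get install -y -q htop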
  3. One note.... the later versions of smartctl will tell you the spin state of a drive if you use the -n standby option: if the drive is spun down, you get a message that the drive is in standby mode, and if it is not, you get the data. This has two advantages over checking spin state via hdparm:
     - hdparm doesn't work properly on some controllers.
     - you have to do two steps (check with hdparm, then run smartctl).
     And one more future advantage.... if you get to the point where you can properly query Areca or other HW RAID controllers via smartctl -d custom parameters, you can also get spin status from them too, even though hdparm won't work. Plus, if you had a process that checked spin status every so often to resync the GUI indicators, it would help the situation where the controllers are doing the spindown themselves w/o unRAID knowing it (i.e. Areca, Adaptec, etc.). A checkbox in the drive config page, "Drive returns SMART data when spun down", would be nice too.
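     Rough sketch of what that check looks like (the device path is just an example):

         # -n standby makes smartctl bail out instead of waking a spun-down drive
         if smartctl -n standby -i /dev/sdb | grep -qi standby; then
             echo "/dev/sdb is spun down; leaving it alone"
         else
             smartctl -a /dev/sdb
         fi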
  4. Discovered a little trick today if you are using Adaptec RAID cards. If you expose RAW drives in the controller config, you can use hdparm and smartctl on them. But if you use Adaptec Storage Manager to create logical drives so you get the advanced features, then the drives don't get a /dev/sdX assigned to them and you can't do smartctl or hdparm. (Well, you can do smartctl, but it's a pain, and you are still SOL with hdparm.) But generic SCSI devices (/dev/sgX) will be assigned, and you can use them to access the drives without a /dev/sdX id. Run lsscsi -g to get the SCSI generic ids for the drives, and you are good to go. This may work for some other advanced controllers too.
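     Something like this (the sg node and the -d type are examples; you may need a different device type for your card):

         # list SCSI devices along with their generic /dev/sgX nodes
         lsscsi -g

         # then query a drive behind the logical volume via its sg node;
         # for SATA drives behind the controller, -d sat is often the right passthrough
         smartctl -a -d sat /dev/sg3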
  5. FWIW, I compared the sector counts to a real "WD60EFRX WD Red" and they are the same, so no issues there. I'm going back through invoices and trying to match which drives I got from which vendors to do returns.... it's a pain, but since I staggered purchases, I could tell pretty well from power-on hours. From now on, I put a label on each drive with source and date, and I check the serial with the vendor to clear warranty info and put that on the drive label too..... BEFORE I put the drive into service.
  6. One nagging PITA for me with unRAID has always been the fact that once you click the stop-array button, you are blocked from having any further interaction with the UI. Plus, you get no feedback as to what the #$#%@#& is taking so long. Sure, I can log in and apply Linux-fu to kill processes, force unmounts, etc., but I feel like this is one area of unRAID that needs some improvement. Some suggestions as far as the UI:
     - a better interface (a status bar at the bottom of the browser ain't cutting it)
     - a more verbose/comprehensive list of the steps being taken and their status
     - display errors that are logged
     - some rudimentary feedback if issues are encountered (such as, after trying to unmount a drive 3 times, report what files are open by what processes; see the sketch below)
     I realize it's a synchronous process and trying to make it fully async is not possible, but perhaps you can put in some kind of auxiliary listener (like the awk listener that was shared here a few years ago) that would be able to give some HTTP access to the server status.
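     That last point is trivial to do from the shell; something along these lines (the mount point is just an example):

         MNT=/mnt/disk1
         # if the unmount fails, show what is holding the disk open
         if ! umount "$MNT" 2>/dev/null; then
             echo "umount of $MNT failed; processes with open files:"
             fuser -vm "$MNT"
         fi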
  7. It's due to the udev device-naming rules either not running, or sometimes running too soon while something else that's needed is still waking up. You can re-trigger udev or just reboot and it should be OK.
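     Re-triggering udev is just (run as root):

         # replay the device events so the naming rules get another pass
         udevadm trigger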
  8. dd is intended to be used on drives in the same system. You can use it over the wire, but doing dd across the wire is slow... you have to pipe it through netcat or ssh (netcat is unencrypted and faster, and you can pipe it through bzip2; ssh is secure, but slow).
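     Rough sketch of the netcat route (devices, IP, and port are examples; netcat flags vary a bit between versions):

         # on the receiving box: listen, decompress, write to the target drive
         nc -l -p 1234 | bzip2 -d | dd of=/dev/sdX bs=1M

         # on the sending box: read the source drive, compress, ship it over
         dd if=/dev/sdY bs=1M | bzip2 -c | nc 192.168.1.50 1234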
  9. I just did a check on the serial numbers on my WD Reds (http://support.wdc.com/Warranty/warrantyStatus.aspx) and they all come back with the right warranty periods, but the model number is being reported as "WDBYCC0060HBK" and not "WD60EFRX"; however, the label says WD60EFRX and the drive reports "WD60EFRX" via smartctl. Plus *all* of them report that way with WD.... despite the fact I got them from 2 different sources. Before I start writing letters to the vendors I got the drives from, can someone else check their WD Red 6TB drives and see what they report on the WD website?
  10. Here is the problem: A better approach is to create "canary" files that you will never modify, but will be certain targets of ransomware. Set a process to monitor the canary files and to fire off notifications/alarms/fireworks if they are touched or deleted.
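      A rough sketch of such a canary watcher, assuming inotify-tools is installed and the canary files live under /mnt/user/canary (both are assumptions):

          # watch the canary directory and raise an alarm on any write/delete/rename
          inotifywait -m -e modify,delete,move,attrib /mnt/user/canary |
          while read -r dir event file; do
              logger -t canary "ALERT: $event on $dir$file, possible ransomware activity"
              # hook your notification of choice in here (email, unRAID notification, etc.)
          done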
  11. lightberry: Sorry, that was my fault... The post was screaming spam from the way it read. I didn't notice your post history in the side panel until after it was deleted. I was in the process of recovering it when you reposted it.
  12. All you need is a few lines in your go script, and a config file. Delete /etc/ntp.conf, and put this in the file /etc/default/ntpdate:

          NTPDATE_USE_NTP_CONF=no
          NTPSERVERS="pool.ntp.org"
          NTPOPTIONS="-u"

      then run /etc/rc.d/rc.ntpd start. As Squid noted though, you will have to have Internet access.
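      Those "few lines" in the go script could look something like this (a sketch, not tested on every release):

          # recreate the ntpdate config at boot and start ntpd
          rm -f /etc/ntp.conf
          {
            echo 'NTPDATE_USE_NTP_CONF=no'
            echo 'NTPSERVERS="pool.ntp.org"'
            echo 'NTPOPTIONS="-u"'
          } > /etc/default/ntpdate
          /etc/rc.d/rc.ntpd start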
  13. This is discussed as implemented via user shares, which are significantly slower than direct disk shares. So while you will accelerate some accesses, you will slow down accesses to files that are not on the "accelerator" drive versus accessing them via disk shares.
  14. Depends on the drives and just how quiet you want to be. SSDs are silent. Seagate 8TB Archive drives are not, regardless of vibration-insulating mounts. If you want to silence a noisy drive, baffles on the air-flow channels are the way to go. The Antec SOLO *is* a baffled case.... the fans for the hard drives are not facing a grill for air intake.... you'd get more drive noise with the front bezel open. The bezel is the baffle.
      BTW, some drive manufacturers recommend AGAINST rubber-grommet mounting because it allows the drives to oscillate more, increasing the likelihood of damage when the head moves. They should be mounted tightly to the chassis (indeed, some rack-mount server chassis come with thread-lock on the drive screws to secure them to the trays). That is also part of drive cooling, i.e. the conduction to the case from the mount. Vibration protection is one of the enhancements the NAS drives (WD Reds) feature. Here's a link to a real-world demonstration of why you want drives secured as firmly as possible:
      You are going to be very limited in rack-mount choices. You might consider using a rack shelf instead, and use a small case that can sit on the shelf.
  15. No. You are running the equivalent of a 60 to 150 Watt (or more) light bulb inside a closed box. The system generates heat, and that heat has to be removed: Q = U·A·ΔT. Heat removal is based on heat sinks with limited area and airflow, so increase U, A, or ΔT. U you can't change much at all... it is the coefficient of heat transfer of airflow over metal fins. A: you can use larger heat sinks. ΔT you can increase with more airflow, better sinks (more heatpipes), lower ambient air temperature, or a higher sink temperature (run the CPU hotter). Or reduce Q, the amount of heat you have to remove: pick low-power components.
      Air has to get into the case, which means sound escapes through the same holes. Plus the air makes noise entering and exiting the case. The quietest you can get is to try to go passive cooling, but you can only do that with a case large enough to hold a LARGE heat sink.... and that is not 2U or 3U. So you are left with watercooling. See if you can find one of these: http://www.newegg.com/Product/Product.aspx?Item=N82E16835118111 http://techgage.com/article/zalman_reserator_2_fanless_water_cooling/ These are 10 years old, so you will have to get your own water blocks for modern CPU sockets and other parts to go with the rads. Warning... external radiators accumulate dust and need to be regularly cleaned to maintain peak performance. Warning 2... external radiators make moving your rig a bitch.
      As for hard drive noise, a case-inside-a-case with baffles is the only way to deal with that effectively. But that kills any chance of using hot-swap cages/bays. Arrange all air intake and exhaust so they point down to carpet or rug. There are some waterblocks for hard drives out there... Koolance made several. I personally never tried them. Hell, there are even water-cooled PSUs... a new one was at CES this year.
      Suggestion #2: Don't have unrealistic expectations. Learn to accept an appropriate amount of noise, and concentrate on making it an acceptable tone and volume. I built a silent system with a Zalman external rad many years ago. It was so quiet it magnified the sound of the heads seeking on the drives, so it was MORE annoying than a little fan white-noise on the CPU cooler that pleasantly masked the sound of the hard drive seeks.
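      To put rough (made-up) numbers on Q = U·A·ΔT: if the box dumps Q = 100 W and the sink-plus-airflow manages a U·A of about 2 W/°C, you need a ΔT of roughly 50 °C between sink and air; double the U·A (bigger sink, more airflow) and the same 100 W only needs about 25 °C.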
  16. LOL.... I completed the changes to the Syncrify docker to delay downloading the app tarball until the docker was run, and got it all working with the install going to the local filesystem. Did an install test on a clean unRAID test system and it worked. Then on a whim, I closed out docker, tried running the application start from native unRAID, and voila..... it runs native now. So I went back to the application install script, and it turns out the whole reason it was failing to install on native unRAID was that it was looking for /etc/init.d/ to put its startup script in, or for systemd to set up its service. So all the work trying to dockerize it was unnecessary. DOH!
  17. Where are you finding 6TB Reds and Red Pros within a few dollars of each other? I see more like $40 difference.
  18. Since the Seagate 8TB SMR drives are a lower price than the WD 6TB Reds, I use all 8TB SMR in the server that holds backups of my main server, and the 6TB Reds in the main server. That way, I get more backup storage for the $$$, and I get some future-proofing if I upgrade the main server to 8TB drives later.
  19. I needed to tweak the php.ini file, so I copied your /etc/php5/apache2/php.ini file to the local appdata/Apache config and added a path in the template. This might be useful to others.
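      For anyone doing the same, the idea is just a path mapping that overlays your edited copy onto the container's php.ini; purely as an illustration (the host path and image name are assumptions, not the actual template):

          docker run -d --name Apache \
            -v /mnt/user/appdata/Apache/php.ini:/etc/php5/apache2/php.ini:ro \
            my-apache-image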
  20. Had a nasty hang with 6.2B21.... I could not get a diagnostic dump, but I was able to pull off the log file (sdc is cache drive):
  21. Except when you have a program that has data files co-mingled with its code. Doing such an update loses your data; plus, you get data stored in the docker image, which blows up its size. This is particularly true of Syncrify, which stores its databases with its code. It's a good application, just not docker-friendly.
  22. I have done this, but you have to install the program to a mount point on the local filesystem for it to be persistent.... otherwise you'd have to download it every time you started the container. This is actually necessary with Syncrify because it auto-updates itself and keeps its own live database in its install home, so my container startup script enforces that the Syncrify home (/opt/Syncrify) has to be mounted from the host filesystem, and installs it there if it doesn't already exist. If it does exist (i.e. the second time you start the container), it just starts the application.
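      The gist of that startup script, as a sketch (the download step and the start-script name are placeholders, not the real ones):

          SYNCRIFY_HOME=/opt/Syncrify

          # refuse to run unless the install home is a bind mount from the host,
          # otherwise the install and its live database vanish with the container
          if ! mountpoint -q "$SYNCRIFY_HOME"; then
              echo "ERROR: $SYNCRIFY_HOME must be mapped to the host filesystem" >&2
              exit 1
          fi

          # first run: install into the persistent mount; later runs: just start it
          if [ ! -e "$SYNCRIFY_HOME/start.sh" ]; then
              echo "Installing Syncrify into $SYNCRIFY_HOME ..."
              # download and unpack the installer here (details omitted)
          fi

          exec "$SYNCRIFY_HOME/start.sh"    # placeholder name for the app's start script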
  23. I get why pulling prebuilt docker containers *is* supported, but simplicity does not negate copyrights. Suppose I created a dockerfile that, when built, created a docker container full of Madonna music. The fact that it gets pre-built automatically on Docker Hub before you pull it doesn't make it legal. The analysis doesn't change if "Madonna music" is replaced with proprietary software. There is plenty of proprietary software out there that is free to download and use, but NOT free to redistribute (such as Adaptec maxView Storage Manager). Downloading it in a pre-built container is a problem, whereas building the container from a dockerfile (which pulls the software from a proper distribution source) is OK.
  24. Is there a reason the unRAID Docker templating system only supports prebuilt Docker containers, and not using dockerfiles themselves to build the containers locally? The reason I ask is that I have built some docker containers that contain free-but-proprietary software (such as Syncrify and Adaptec maxView Storage Manager), and it would not be appropriate to include their proprietary software inside a downloadable Docker container... but it is perfectly OK to distribute a dockerfile that, when built, downloads the software and installs/configures it into a new docker container on the destination system.
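      The pattern is nothing exotic; an illustrative dockerfile (the URL is a placeholder, not a real vendor link):

          FROM debian:stable-slim
          # the proprietary tarball is fetched at build time on the user's own box,
          # so nothing proprietary ever ships inside a redistributed image
          RUN apt-get update && apt-get install -y -q wget \
           && wget -O /tmp/app.tar.gz https://vendor.example.com/app.tar.gz \
           && tar -xzf /tmp/app.tar.gz -C /opt \
           && rm -f /tmp/app.tar.gz

      Built locally with "docker build -t myapp-local .", the resulting image never leaves the user's machine.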