S80_UK

Everything posted by S80_UK

  1. What I do (and I am sure there are many ways of doing it) is create the rsyncd.conf file and place it in the config folder on the USB stick. Mine looks like this...

     uid = nobody
     gid = users
     use chroot = no
     max connections = 20
     pid file = /var/run/rsyncd.pid
     timeout = 3600
     log file = /var/log/rsyncd.log
     incoming chmod = Dug=rwx,Do=rx,Fug=rw,Fo=r

     [mnt]
         path = /mnt
         comment = /mnt files
         read only = FALSE

     Then, in the go file in the config folder (which is what Unraid uses when starting up), I have the following...

     #!/bin/bash
     # Start the Management Utility
     /usr/local/sbin/emhttp &
     # rsync daemon
     rsync --daemon --config=/boot/config/rsyncd.conf

     So when the rsync daemon is invoked it starts up using the parameters in the file on the stick. There is no need to copy the file anywhere else. Note: This is only as secure as your local network. A bad actor on your network could use this to delete files etc. on the target machine. In my case, that machine is powered off 99.9% of the time, and only powered on when I occasionally need to access it. Other users may have better suggestions.
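     As an illustration of how a client might then write to that daemon (a sketch only - the hostname "tower" and the paths here are placeholders, not something from my setup):

     # Push a local folder to the [mnt] module defined in rsyncd.conf above.
     # "tower" is a placeholder for the server's hostname or IP address.
     rsync -av --progress /path/to/local/folder/ tower::mnt/backups/folder/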
  2. Mostly Toshiba right now; the ones that I have are based on designs from Hitachi / HGST before it was split between WD and Toshiba.
  3. APC/Schneider appear to have removed much of the useful information about firmware / hardware version compatibility from their website (at least, it is not where it was before and I cannot now find it). The latest firmware appears to be in this recent update to the Firmware Update Wizard - here... https://www.apc.com/shop/us/en/products/Smart-UPS-Firmware-Upgrade-Wizard-v4-3/P-SFSUWIZ43 Note: I have not tried this update yet so I cannot vouch for its suitability. As discussed earlier in the thread, whether or not you are able to update to newer firmware can be a function of the level of the current firmware in your UPS. If it is too old then the update may not be possible. You should check the current version of the firmware in your UPS. If you have an option for the Modbus protocol in your UPS you should try to select it and then also select Modbus in the UPS options in Unraid. (Hint - it is in the Advanced options menu; it does not appear in the regular menu.) If you don't have the option, then you can try the update, but there can be no guarantees. Unless you are confident (or can tolerate a failure) I would not try it. While it's nice to have the power consumption reported, it is not essential for reliable operation.
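     One way to check what the UPS is currently reporting, including its firmware version, from an Unraid command line (a sketch assuming the built-in apcupsd is running; the exact fields shown depend on the UPS model and the protocol selected):

     # Query the running apcupsd daemon for the UPS status report.
     # FIRMWARE shows the version; with Modbus selected you should also
     # see power reporting fields such as NOMPOWER.
     apcaccess status

     # Or filter just the lines of interest:
     apcaccess status | grep -E 'MODEL|FIRMWARE|NOMPOWER'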
  4. I bought my first license in February 2011. One of the best value software purchases I have ever made. Recommended. @HeroRareheart I recommend a trial license before you take the plunge to give yourself a little time to find your way around. However, when you have issues, people here are generally very helpful. I have never noticed any of the "gatekeeping" that you have found to be a problem elsewhere. Note that a trial license does require your server to have an on-line connection to Lime Technology (the developers) as part of the license validation process; a paid-for license does not require this, however.
  5. That is also what I do. I colour coded the multi-way block so that I can easily see that it has a different purpose (it's under my desk). It handles the cable modem router, a couple of switches, and a KVM switch. The servers and my main PC go direct to the UPS. You just have to be careful that someone else can't come along and plug in a vacuum cleaner...
  6. Details of the case and how the drives are mounted are on the manufacturer's web site here... https://www.fractal-design.com/products/cases/meshify/meshify-2/black/ You can also download and study the user manual. According to the specs, the case comes with mounting hardware for up to six drives. If you want to max out the capacity (up to 14 drives, apparently), you can buy additional mounting brackets from sellers of Fractal hardware. The manual shows (on page eight) where they go in the case and which additional hardware you would need to buy.
  7. Not sure where you heard that. In silicon-based semiconductors, leakage current increases with temperature. Also, transistor speed tends to reduce with temperature, slowing things down. While flash memory cells (NAND or NOR) are designed to operate over a range of temperatures, there is no benefit in running them hot. You will run the risk of slightly shortening their working life. The best thing for semiconductors is always moderate temperatures with good long-term stability to prevent stresses caused by thermal cycling and differential expansion. The latter is not a big issue, but is best minimised. These devices do not as a rule come with heatsinks because the manufacturers expect that appropriate cooling methods are already in place (adequate airflow, motherboard heat spreaders, etc.) and because it adds cost. Where heatsinks are provided it's either because the devices run abnormally hot, or in many cases I suspect it's a marketing gimmick (based on the advertising that tends to accompany such devices).
  8. I am OK, but for others it would be good if you could let people know where you're located (in which country at least) due to possible shipping constraints. Thanks.
  9. Hi @itimpi, a Happy New Year to you. Here's a strange one... EDIT: IGNORE THIS....

     Parity check started (as scheduled) at 02:30 this morning (reported as 02:31 in Unraid and email)
     Paused at 09:00 this morning (as scheduled) and reported correctly in Unraid and email (so 6 hours 30 minutes roughly)
     Unraid reports Parity History with a runtime of:
     Elapsed time: 15 hours, 44 minutes (paused) <---- seems very strange
     Current position: 3.52 TB (58.7 %) <---- seems completely normal
     The plugin reports "Version: 2021.10.10"

     Is there anything that you would like to see? I have not seen this strange elapsed time report before. I shall allow the second part of the check to run tonight and let you know what happens. I am not too worried, if I am honest, but I thought you would like to have the possible bug report. Take care, Les.

     EDIT: So Elapsed time includes the run time and the paused time. I had not appreciated that, and had thought it only included the active time spent checking parity. My apologies.
  10. I think you are forgetting the time taken to handle the parity generation. My guess is that you have the Disk Settings parameter "Tuneable (md write method)" set to auto. That means that the destination drive and the parity drive have to be read and then written with data and new parity. The read allows the new parity to be calculated using the previous data and parity along with the new data. That will restrict speed to, at best, one half of what the drives are capable of. If you change that setting to "reconstruct write", all of the drives will be spun up, but the new parity will be recalculated based on the new data in conjunction with the existing data on the other drives. That will eliminate the need to pre-read the destination and parity drives and will allow a much higher average write speed to be maintained. So, to answer your question, what you are seeing is due to the way that Unraid works, but the "reconstruct write" setting allows you to overcome that barrier provided that you don't mind the overhead of spinning up all of your other drives. And whether that works well for you may also depend on whether there is any other disk activity at the same time, but that's down to individual use cases.
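      To illustrate the difference (a rough sketch with made-up one-byte values; this is not something Unraid exposes, just the arithmetic behind single parity):

      # Hypothetical one-byte values for illustration only.
      old_data=0x5A; new_data=0xA7; old_parity=0x66
      other1=0x11; other2=0x2D          # data on the other (untouched) drives

      # "auto" (read/modify/write): read the old data and old parity first, then
      #   new_parity = old_parity XOR old_data XOR new_data
      rmw=$(( old_parity ^ old_data ^ new_data ))

      # "reconstruct write": spin up and read the other data drives instead, then
      #   new_parity = new_data XOR (data on every other drive)
      rcw=$(( new_data ^ other1 ^ other2 ))

      printf 'rmw parity: 0x%02X   rcw parity: 0x%02X\n' "$rmw" "$rcw"

      Both calculations give the same parity byte; the only difference is which drives have to be read (and therefore spun up) before the write can complete.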
  11. Again, I have never seen this, and I use the standard Mover almost daily (today I am ripping Christmas present CDs and Blu-rays, so plenty going on). When you copy a new file to a share, it will go to the cache if that share is set to use the cache - "Cache = Yes" in the share settings. (Do not confuse that with "Cache = Prefer", which will try to keep files on the cache.) The Mover will then move the file to the array according to its scheduler settings and parity will be updated as the move operation progresses, or you can run the Mover manually of course. If you are applying updates to an existing file then the updates will be applied directly to the disk in the array that holds the file and parity will be updated at the same time as the file. I would double check that none of your plugins or dockers, etc. are specifying /mnt/diskx/sharename instead of /mnt/user/sharename - in particular, it is very important never to move or copy between one path structure and the other.
  12. Like others in the thread that you linked to, as a long time user of Unraid (almost 11 years) I have never seen unexpected duplicates. Given that you are certain that you have not been manipulating files at the /mnt/disk level, this points to problems in the use or application of some other tool. I have no experience of Musicbrainz Picard so cannot comment on whether that may or may not have been responsible. As for which copy of a file you choose to delete, are the duplicates that you see on multiple drives in the same share and in the same folder locations? I would check there before determining which copy to remove. If the paths to the files are the same on multiple disks then I would still do binary comparisons before deleting, although it should not then matter which is deleted. One other question - do you have a backup on a separate device that you can go back to if you delete a duplicated file and for some reason its copy also then disappears (not sure how that would happen, but just in case...)?
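      For what it's worth, a rough way to see which data disks hold a copy of the same file within a share, and to binary-compare two copies before deleting either (a sketch only; the share and file names are placeholders):

      # List every disk that holds a file at the same path within the share.
      ls -l /mnt/disk*/MyShare/Some/Folder/track01.flac 2>/dev/null

      # Binary-compare two of the copies; no output from cmp means identical.
      cmp /mnt/disk1/MyShare/Some/Folder/track01.flac /mnt/disk3/MyShare/Some/Folder/track01.flac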
  13. Yeah... Higher end server fans are a breed apart; they can certainly scream and take significant power. They can also damage fingers if one is not careful.
  14. Just my two cents... If the data is that sensitive and valuable, this should not be done as a training exercise. Mistakes will be made (that's what happens when training) and data may be lost as a consequence. I agree with others, there are simpler and safer ways to manage this task. And in any case, what provisions would you be making for secure backups while this work is being done?
  15. My take... You can do this as described above, but personally I would not do this if I could avoid it. The reason? One of the biggest causes of data loss, whether with Unraid or any other storage system, is human error. Unless you are disciplined in your management of the configuration files on the USB stick, you will at some point make a mistake. And while the server will not start if the drives in the system do not match the currently active configuration, as soon as you need to do some other maintenance such as replacing a drive or upgrading capacity, there is perhaps another opportunity for things to go wrong. Another consideration is how often you might want to switch between the two servers. At some point it may just be simpler to have two licenses. You can try it and see how you get on, but I would recommend getting a separate license for the second machine in the longer term.
  16. As @JorgeB says - your CPU will be fine for basic file storage. I started out with much less. And my daughter's Unraid NAS runs perfectly fine on about the same processing power that you have there. Also, well done on testing for yourself with a test-bench setup. It's the best way to learn how this stuff works.
  17. Updated from 6.9.2 to 6.10.0-rc2 without issues. Thanks for this update. Hardware is IvyBridge generation, so relatively old, as per my sig. Despite this, compatibility seems improved vs 6.9.2 with noticeably better display support (no longer restricted to 80x25 on local monitor).
  18. The poll definitely needs a "None of the above" option. Without that I cannot vote. Although I am unlikely to be adding to my existing licenses anytime soon, so perhaps it's irrelevant.
  19. The issue is caused by an incompatibility between the HBA card's BIOS and the motherboard's BIOS. That is why cards may work fine in some motherboards but not in others. I have had both scenarios in my setups at different times.
  20. Sorry - but that's not true as far as I am concerned. I have never seen the drives attached to an H310 (or H200) appear in the motherboard's BIOS, but Unraid has always seen them just fine. Drives attached to a motherboard should be visible in the BIOS, of course. And yes, I have also had to do the tape mod on my H310.
  21. Which it would do if we did not then type two and a half messages debating it... It would be nice to have the option to turn it off though.
  22. Diagnostics would help. With adapters like the H310 you won't normally see the drives at the PC's BIOS level. You may see them in the card's BIOS if you specifically enter it. The diagnostics would tell us whether Unraid can see the card and would also tell us what the card can see as far as drives are concerned. (Really annoying that I can't seem to stop the forum s/w highlighting the word "diagnostics" like that.)
  23. Just a comment as an Unraid user for over 10 years... A few times over the years I have had issues with drives glitching or dropping off the array unexpectedly. However, in every single case this was down to poor connections between Molex style plugs and sockets, mostly related to the use of splitters. What may happen is that the "tube" part of the connector opens up slightly, and makes an intermittent connection on the "pin" part of the connector. I have started replacing these, but a temporary fix has sometimes been to try closing up the tube a little with a fine-nosed pair of pliers.
  24. I posted about this a while ago... Have a read of that thread. The suggested method that I give there for recalibrating the UPS settings for a replaced battery is probably what you need. This can all be done from Unraid via a command line window or ssh session for example.
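      If it helps, the general shape of doing it from the Unraid command line is something like this (a sketch only - apcupsd has to be stopped before apctest can talk to the UPS, the rc.d path is my assumption for a stock install, and the menu options apctest offers depend on the UPS model):

      # Stop the UPS monitoring daemon so apctest can open the port (path assumed).
      /etc/rc.d/rc.apcupsd stop

      # Interactive tool from the apcupsd package; its menus include options
      # such as updating the stored battery date and running a calibration.
      apctest

      # Restart monitoring when finished.
      /etc/rc.d/rc.apcupsd start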
  25. Which is best for you? I honestly cannot say. So much is down to your library, any other tools that you use for control or media management, personal preference, etc. It's also possible that the media player (your Bluesound Node) might expose strengths or weaknesses in different server programs. I am more familiar with Twonky Server than the others, I suppose, but that might require you to buy a license after a period of time. minidlna is completely free as far as I know. As said before, the best way is to try. At least being able to do that using Docker makes it relatively easy.
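      As a flavour of how easy the Docker route can be, a minimal command-line sketch for running a DLNA server container might look like the following (the image name is just a placeholder; in practice you would normally install one from Community Applications rather than typing this by hand):

      # Placeholder image name; host networking is typical so DLNA discovery works.
      docker run -d --name=minidlna --net=host \
        -v /mnt/user/Music:/media/music:ro \
        some-minidlna-image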