jaybee

Members
  • Posts

    249
Everything posted by jaybee

  1. Just thought I would respond to this topic to clarify some things after I hit the exact same issues as the OP. Many people will want to automatically shut down their server late at night and have it power on early in the morning to save power, noise and light. The shutdown part is easy using unraid plugins and is out of scope here. For unattended power on from full shutdown there are really only two options: Wake on LAN (WOL) and power on by Real Time Clock (RTC, the computer's BIOS clock on the motherboard). For most people in home server environments WOL is not an option, since they will not have any other network equipment online that could send the magic packet. It is possible to do this over WAN, but let's not get into that discussion here, since this thread is about using the RTC to power on from full shutdown.
Let it be understood firstly that this is not a motherboard or BIOS issue. It is expected behaviour for Linux operating systems to write the clock back to the BIOS Real Time Clock (RTC) in UTC during shutdown. Assuming the OP is trying to shut down his server at midnight Australian Eastern Time (AET, UTC+10), that is 14:00 UTC. What he is saying is that when the server shuts down at midnight AET, it sets the BIOS RTC back to UTC, and at the same time the date shifts back a day, since the reset takes it back across midnight. It then comes to 07:00 AET and the server does not start up, because the RTC now reads 21:00 with the previous day's date (UTC has not yet passed midnight for that day). To get the server back in sync, both the time and the date need resetting. Depending on the time of day that the OP performs any shutdown or reboot, the date shift can happen again, i.e. whenever it occurs between 00:00:00 AET and 09:59:59 AET.
In any case, even if the OP sets shutdown to 23:59:59 and avoids the date-shift issue, the BIOS RTC will still be set back to UTC, which will throw off the RTC power-on timing. So how do we fix the issue? As already suggested above, you simply set your BIOS RTC to the exact current time in UTC, then set the startup time in the power-on-by-RTC options to 21:00 hrs. The only "issue" as such is that your BIOS clock and date will sometimes read an alien time/date to you. But really this is a set-and-forget option: once configured, unraid will always show your correct local time for everything, including logs. We face a similar issue in the UK, since for 6 months of every year we have daylight saving time: either BST (British Summer Time), which is UTC+1, or GMT (Greenwich Mean Time), which is essentially the same as UTC (UTC+0). So right now I have to set my server to turn on at 06:00 hrs for it to come on at 07:00 hrs. When the clocks change again in October, I will have to set it back to 07:00 hrs. I hope this clarifies the situation.
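The local-to-UTC arithmetic above is easy to get wrong by hand, especially when the conversion crosses midnight. A minimal sketch of the calculation (not from the original post; just an illustration using Python's standard zoneinfo module) that turns a desired local wake-up time into the UTC time to enter in the BIOS power-on-by-RTC field:

```python
# Sketch: convert a desired local power-on time into the UTC time to set
# in the BIOS "power on by RTC" field. Assumes Python 3.9+ (zoneinfo).
from datetime import datetime
from zoneinfo import ZoneInfo

def rtc_alarm_utc(local_dt_str: str, tz_name: str) -> str:
    """Return the HH:MM UTC alarm for a local wake time like '2019-07-01 07:00'."""
    local = datetime.strptime(local_dt_str, "%Y-%m-%d %H:%M").replace(
        tzinfo=ZoneInfo(tz_name))
    return local.astimezone(ZoneInfo("UTC")).strftime("%H:%M")

# 07:00 BST (UTC+1) -> BIOS alarm 06:00, matching the UK example above
print(rtc_alarm_utc("2019-07-01 07:00", "Europe/London"))
# 07:00 AEST (UTC+10) -> BIOS alarm 21:00, on the *previous* UTC day
print(rtc_alarm_utc("2019-07-01 07:00", "Australia/Sydney"))
```

Note that for the Australian case the UTC alarm lands on the previous calendar day, which is exactly why the date as well as the time can look "wrong" in the BIOS.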
  2. I assume Nvidia hardware decoding/encoding are both working for me, based on the images below?
  3. Hi all, I recently installed plex LSIO and nvidia unraid, both on the latest versions. I've been reading up the best I can on all the info and was wondering if someone can confirm that I'm correct on the below at the time of writing:
1: The official plex apps/dockers did not support Nvidia hardware decode specifically (hardware encode was enabled and had been working for some time) until it was officially added around June 2019.
2: Before that, there was a workaround which mainly involved a "hack" or wrapper script of some kind that altered the ffmpeg included with plex to flag that it can do Nvidia hardware decoding. (I'm yet to find the separate thread where this is supposedly discussed and often referred to in this very thread, so I have only been able to take snippets from various places.)
3: Soon after (1) occurred, the docker plex images were updated (including this one) to include Nvidia hardware decode as well as encode.
4: Therefore simply installing the latest version of LSIO plex does indeed work out of the box with Nvidia hardware decoding and encoding, so long as it is bundled with nvidia unraid to allow pass-through of the nvidia gpu to the plex container. (I'm assuming the bolded part is required and it would not work with official unraid.)
5: Nvidia hardware decode and encode have worked for ages on windows, hence people have previously worked around the linux issues by using windows VMs.
6: Updating to the latest official unraid releases could break nvidia hardware decoding, since nvidia unraid could break underneath, so it's best to wait for that to be updated first.
Sorry if I got any of the above wrong.
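For context on point 4, the GPU is exposed to the container through the NVIDIA container runtime that the nvidia unraid build provides. A minimal sketch of the equivalent docker invocation, assuming that runtime is installed; the GPU UUID and the host paths are placeholders (the real UUID comes from `nvidia-smi -L`, and paths depend on your shares):

```shell
# Sketch of NVIDIA GPU pass-through to the linuxserver Plex container.
# --runtime=nvidia selects the NVIDIA container runtime; the two NVIDIA_*
# variables tell it which GPU to expose and with what driver capabilities.
docker run -d --name=plex \
  --runtime=nvidia \
  -e NVIDIA_VISIBLE_DEVICES=GPU-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx \
  -e NVIDIA_DRIVER_CAPABILITIES=all \
  -e VERSION=docker \
  --net=host \
  -v /mnt/user/appdata/plex:/config \
  -v /mnt/user/media:/media \
  linuxserver/plex
```

On unraid these map onto the docker template as "Extra Parameters" (`--runtime=nvidia`) plus two added container variables.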
  4. Hi guys, looking for any advice around using an older Ryzen 1700. I'm yet to purchase anything but need to upgrade my server. I have seen Ryzen 1700 CPU can now be obtained in the UK second hand for about £85. I want to pair one with 32GB of unregistered ECC (2 x 16gb modules ideally, or if I need to, 4 x 8gb). It seems a bit of a minefield out there with compatibility around ECC ram on the motherboards. From some reading online, it seems that the best choice for ECC compatibility is firstly Asrock, closely followed by Asus. Does anyone know of a solid motherboard and ram combo that will work? I find the amount of Ryzen boards confusing and wonder whether I should go for a more modern one to hopefully avoid issues, but I don't really need overclocking or high end features, just 4 ram slots and if possible 2 high speed PCI-e slots.
  5. Can you tell me what this does as opposed to just clearing? Is it a more secure method of erasing a disk because it firstly overwrites all the sectors before setting all bits to zero?
  6. There are 16 disks in there and they all appear in the logs in the same way, so I think it complains about all disks. They can't all be failing, so is this some problem with the RAID card they all connect to? In the BIOS the card functions fine and I can go in and view disk info and perform actions as normal. Could it be that it is not compatible with unraid? The motherboard is a supermicro and the RAID card is an Adaptec PCI-X one, I believe. The disks are not in a RAID array and have just been configured as JBOD. EDIT: Not in the log I attached, actually. Hmm... yes, it does only complain about /dev/sdn. Would a bad disk cause the entire GUI to lock up though?
  7. I just installed the latest Unraid 6.1.3 onto a server and I've noticed:
1: It takes a long time to start up (5-10 minutes)
2: Sometimes it is impossible to navigate to the webgui. When I can, navigation around the pages is very slow or it never completes.
3: Within a few hours the syslog grew to 130MB
4: There are many errors and a lot of repeated logs
So far all I have managed to do is install 1 x plugin (preclear http) and the preclear script. I just wanted to preclear some disks and I have not done any other config yet or additions. I went to do a preclear but the server hung up. I won't attach the 130MB syslog, but towards the end of the period the server was on, just before it locked up, the last message logged was as below, the same line repeated hundreds of times:
Oct 16 15:16:58 Tower kernel: sd 0:0:0:0: rejecting I/O to offline device
And at the time I got a top output of:
0k total, 0k used, 0k free, 515164k cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
13647 root 20 0 9796 1628 1380 R 100 0.1 26:37.11 dd
13648 root 20 0 8760 1608 1372 S 38 0.1 11:44.55 sed
1178 root 20 0 228m 3160 2016 S 25 0.2 6:05.31 rsyslogd
874 root 20 0 0 0 0 S 0 0.0 0:03.87 kworker/3:2
1377 root 20 0 188m 4756 3452 S 0 0.2 0:03.36 nmbd
19895 root 20 0 13284 2220 1912 R 0 0.1 0:00.10 top
1 root 20 0 4368 1552 1456 S 0 0.1 0:11.50 init
2 root 20 0 0 0 0 S 0 0.0 0:00.01 kthreadd
3 root 20 0 0 0 0 S 0 0.0 0:00.24 ksoftirqd/0
5 root 0 -20 0 0 0 S 0 0.0 0:00.00 kworker/0:0H
7 root 20 0 0 0 0 S 0 0.0 0:06.94 rcu_preempt
8 root 20 0 0 0 0 S 0 0.0 0:00.00 rcu_sched
9 root 20 0 0 0 0 S 0 0.0 0:00.00 rcu_bh
10 root RT 0 0 0 0 S 0 0.0 0:00.35 migration/0
11 root RT 0 0 0 0 S 0 0.0 0:00.34 migration/1
12 root 20 0 0 0 0 S 0 0.0 0:00.21 ksoftirqd/1
13 root 20 0 0 0 0 S 0 0.0 0:00.00 kworker/1:0
14 root 0 -20 0 0 0 S 0 0.0 0:00.00 kworker/1:0H
15 root RT 0 0 0 0 S 0 0.0 0:00.35 migration/2
16 root 20 0 0 0 0 S 0 0.0 0:00.28 ksoftirqd/2
17 root 20 0 0 0 0 S 0 0.0 0:00.00 kworker/2:0
18 root 0 -20 0 0 0 S 0 0.0 0:00.00 kworker/2:0H
19 root RT 0 0 0 0 S 0 0.0 0:00.34 migration/3
20 root 20 0 0 0 0 S 0 0.0 0:00.28 ksoftirqd/3
22 root 0 -20 0 0 0 S 0 0.0 0:00.00 kworker/3:0H
23 root 0 -20 0 0 0 S 0 0.0 0:00.00 khelper
24 root 20 0 0 0 0 S 0 0.0 0:00.00 kdevtmpfs
25 root 0 -20 0 0 0 S 0 0.0 0:00.00 netns
28 root 0 -20 0 0 0 S 0 0.0 0:00.00 perf
262 root 0 -20 0 0 0 S 0 0.0 0:00.00 writeback
264 root 25 5 0 0 0 S 0 0.0 0:00.00 ksmd
265 root 39 19 0 0 0 S 0 0.0 0:00.11 khugepaged
266 root 0 -20 0 0 0 S 0 0.0 0:00.00 crypto
267 root 0 -20 0 0 0 S 0 0.0 0:00.00 kintegrityd
268 root 0 -20 0 0 0 S 0 0.0 0:00.00 bioset
270 root 0 -20 0 0 0 S 0 0.0 0:00.00 kblockd
Attached should be the smaller syslog (inside a diagnostics dump zip) where the server was started up for a shorter time. I managed to get onto the webgui once and change the page once, then I gave up. Notice the same log repeating a lot:
Oct 16 16:14:06 Tower kernel: blk_update_request: I/O error, dev sdn, sector 0
Oct 16 16:14:06 Tower kernel: Buffer I/O error on dev sdn, logical block 0, async page read
Oct 16 16:14:10 Tower kernel: sd 0:0:13:0: [sdn] UNKNOWN(0x2003) Result: hostbyte=0x00 driverbyte=0x08
Oct 16 16:14:10 Tower kernel: sd 0:0:13:0: [sdn] Sense Key : 0x3 [current]
Oct 16 16:14:10 Tower kernel: sd 0:0:13:0: [sdn] ASC=0x11 ASCQ=0x0
Oct 16 16:14:10 Tower kernel: sd 0:0:13:0: [sdn] CDB: opcode=0x28 28 00 00 00 00 00 00 00 08 00
tower-diagnostics-20151016-1614.zip
  8. Frank, it's the same in UK. Amazon returns are very good.
  9. These drives are currently on offer for £109.99 on amazon UK. I'm immune to the general Seagate bashing as I know it's completely false, but with regard to this thread it seems there are fairly specific accusations about this particular product, and from what I can see it's all coming from one guy whose post is on various internet sites and quoted (as above), but I cannot see where it originated from. So the statements are: 1: The firmware is "crippled" 2: The drive will not run in AHCI mode 3: It runs hot 4: It parks "too aggressively" [too often] 5: It does not work internally. 5 is simply not true: many people have this drive working inside NAS appliances and so on. On 2, 3 and 4 I can't comment, as I don't own the drive. I suspect 2 is a symptom of the originator not being able to get it running as an internal? 1 is a little generic and not very specific; I think this is speculation. Seagate would not intentionally cripple a drive for it to fail just outside of the warranty period. For them to intentionally add things into the firmware to stop the drive being used as an internal... well... it does work as an internal, so...? My suggestion to the OP (of this thread here on the unraid forums, I mean) is that you simply have a bad drive if the performance appears to be erratic and sometimes slow? Anyone else care to comment? The £109.99 price is rumoured to end at midnight tonight according to the HUKD website.
  10. I have 4gb of ram. I have an Intel G6950 dual core CPU that states: " - Intel® Virtualization Technology (VT-x)." I have an Asus P7H55-M/USB3 motherboard. I could host unraid as a VM but...to be honest...not sure I would want the added complexity. Features I would find nice in addition to "just" an unraid server are below, but maybe these can all be done with unraid plugins already quite well: - Torrent downloads - Possibly access some files on the server from the web - UPS plugins for my APC UPS But, I would also like the server to be able to shutdown/sleep to save power which I expect would be more complex with VMs added to the mix as opposed to unraid out of the box. I think the best of both worlds would be to just trust in the betas (since historically they have been fairly sound) and just go with version 6 64bit and use unraid out of the box. However, not sure the above stuff works on 64 bit yet. Need to read up. Thanks
  11. Thanks vl1969. Makes sense. I think my hardware is virtualization compatible. Will check. My heart says go 64bit. I have an issue with 32bit, mostly psychological I'm sure.
  12. Thanks for the responses. Interesting. I did not realize the unraid OS could be the HOST for virtual machines as well as a guest. I'm not sure which would be better. What would be the pros and cons of either situation? In fact, why would you run it as host? Surely it is more limiting than more mature and fully featured Linux distros designed for VM hosting?
  13. I was waiting out on version 5 final. Finally it came, and then my life got very busy and I've been out of the loop for a while. A quick skim of the forums leaves me confused. Do we now have version 5 32bit AND version 5 64bit AND a new version 6 (64bit also, presumably)? Can someone recap and give me a quick update? I also keep reading about people talking about Xen and KVM, both of which I have googled and still don't understand what benefits they bring. In fact, I don't understand why so many people want to run virtual machines so much. What benefits do these bring? How do you run virtual machines in Linux anyway without a GUI to manage them? I'm tech savvy and know a bit of Linux, but have only used virtual machines in Windows VMware/vSphere. What would you recommend I do in terms of selecting an option to install? I want the minimum fuss going forward, i.e. I don't want to install 5 now if the future is 6 and 64bit etc. Regards, confused of unraid forums
  14. Are the above bold bits right? We went from B to C to B to stable? Or is that a typo?
  15. Tom, why do we see no emotion from you in this announcement? Do you not see this as a big milestone? You just pumped out the announcement like another RC release. Are you happy? How do you feel now that 5 is at stable/final status? Is there anything you would like to say to us? More specifically and getting back to this release, is there a reason it still has changes between the last RC and this, yet has become the new 5 final/stable? Is that not a bit risky?
  16. So since most other threads seem to have got locked, this is the most recent one I can find talking about status and where things are at. Can we have an update Tom1 or Tom2 please? ETA on 5 final? Do we need more RCs? What are we waiting on? etc. Nobody seems to know anything and it has all gone too quiet.
  17. I paid my licence for a pro version of the current production release, which at the time was 4.7. Yes, it caters for future releases too... so? 4.7 is still to this day the current production release. Yes, of course software has bugs. Software also has people responsible for fixing those bugs. Do I need to keep giving history lessons? To summarise: 4.7 has major bugs and you can suffer data loss. These are major. These needed to be fixed. Tom said he would. Two years later, not fixed. That's not acceptable. I'm not sure what you are even trying to point out here? It's acceptable for software to have bugs? Some, yes. But for a product that, as you yourself claim, is essentially "just a NAS", to fail at even this is quite major and certainly not acceptable. 4.7 was promised to be fixed. To this day it has not been. This is not fair on people that have purchased this. I already stated that the bugs were not made very public and I only found out about them months after purchasing a pro licence. What is your point? Yes, you can test products. So? Are you saying that because I did not spot or experience the bug that can cause data loss, I should just go on using the product? I don't think so. I don't care, and neither do a lot of people, how long person X has been running version Y for Z years with "no issues". My definition of a stable product would not be 4.7, which has bugs that can cause data loss. I don't know about you, but I deem that not a feature I would want to see in a storage solution. Yes. No current production release of unraid does this in a safe manner without major bugs. So what? Should I be happy that I have a flawed product just because I will get beta and RC releases of future versions forever, but no commitment to a final and (hopefully) stable and polished version? Yes, I'm very happy to sit here waiting... and waiting... and waiting... It depends how you define "it works".
I probably would too, but I do not wish to trust my data to RCs that come willy-nilly with drivers missing, kernel changes every time, feature creep and random changes. I also do not have the time to keep testing them all, only for another to come out a day later.
  18. It's like a broken record in here and on these forums generally, about the status of final being "just a name". I don't have the time nor energy to state why you are so wrong, but it's so frustrating to hear you repeat yourselves like we are in the wrong and don't understand that software has bugs. It's actually offensive that you deem us so naive, when the irony is that it's actually yourselves that are not understanding so many things about why 5 final is so important.
  19. Still no post from Tom. Just give us an update on where things are. 5 minutes of your time.
  20. Good post. The problem is it goes deeper than this for me. You say you want to give your money to him for 5 when it is stable and given "final" status. The problem is, a lot of users (like myself) have already paid for a final-version pro licence and are not even getting the product we paid for. 4.7 is not a final and stable product, and has known bugs. Tom promised to fix this and he never did, as he wanted to "fix it" by basically just carrying on with version 5 to supersede 4.7. This has still not happened over 2 years later. I bought the product on the basis that it worked. It does not. This was not made public to people that bought it. I discovered the bugs months later in a forum post on here, despite the post having existed for some time.
  21. No they are not. You obviously do not understand the terms beta and Release Candidate. By your logic, every release would just be an untested production release. There are a few people on here who like to go on about how "it's just a name". You are completely wrong on so many levels. How about we don't do that, so that we can prevent any further feature creep. The irony is that you state this happens every 6 months. That's an absolute joke of a time period. I would expect people to complain about when the release is coming every 6 days, but you are not happy with them doing it every 6 months?! That says a lot about what we are dealing with here in terms of timelines, which is a real shame.
  22. When? Can we have some commitment to a date? Don't take offence at this, just man up and give us a date, or at least an ETA.
  23. When is 5 going final? Getting bored now.
  24. The SMB slowness is a major issue in my opinion. NFS has been buggy before and SMB has always been reliable and fast as a fall back option. Now with SMB performance halving, that's unacceptable. I also want a 5 final, but it will never happen if we keep changing the kernel and moving goal posts.