Pixel5

Members
  • Posts: 99
  • Joined
  • Last visited

Everything posted by Pixel5

  1. This problem still exists and happens on a regular basis, sadly.
  2. Hello people, I'm on 6.12.2 right now, and along with the problem I posted about here, I also have the problem that any time I restart my server it starts a parity check because of an unclean shutdown. This never happened on previous Unraid versions, and the server is in fact restarting without any problems at all except for that parity check starting.
  3. So I guess nobody has any idea what's going on here?
  4. Hello people, I have a problem with my Unraid setup where some of my disks suddenly don't want to spin down and only a reboot fixes the issue. This happens randomly with 2-4 of my 6 disks, and there is no logic as to which one does it at which time; they are all busy with something at one point, and some of them just refuse to spin down afterwards. This is NOT related to anything accessing the disks: they are completely idle with nothing going on, and clicking spin down and checking the logs shows that the disks that stay active are not even getting a spin-down command at all. Right at this moment I have 4 of my disks spun up that cannot be spun down no matter what, and I have no idea why this keeps happening. thevault-diagnostics-20230801-1157.zip
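     As a hedged troubleshooting sketch (not an official Unraid procedure), it can help to check whether the drives themselves respond to a standby command issued directly, bypassing the web UI; the device names below are placeholders and hdparm is assumed to be available:

         # Minimal sketch: query and (optionally) force the power state of each drive.
         # If a drive spins down via hdparm but not via the GUI, the issue is likely
         # on the Unraid side rather than with the drive or controller.
         import subprocess

         DISKS = ["/dev/sdd", "/dev/sde", "/dev/sdf", "/dev/sdg"]  # placeholders, adjust to your system

         for disk in DISKS:
             # hdparm -C reports the current power state (active/idle or standby)
             state = subprocess.run(["hdparm", "-C", disk], capture_output=True, text=True)
             print(state.stdout.strip())
             # hdparm -y asks the drive to enter standby immediately; uncomment to test
             # subprocess.run(["hdparm", "-y", disk], check=True)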
  5. Does anyone else have the issue that FileBot will always physically move files when it renames them via the AMC script? If I use the UI it just does its actions instantly, but when the AMC script runs I can see it reading and writing the file to the same SSD, which obviously takes forever.
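     A hedged guess at the cause: on Unraid the user share path (/mnt/user/...) and the underlying cache/disk mounts are different filesystems, and a rename across filesystems cannot be done in place, so it degrades into a copy followed by a delete. The paths in the sketch below are hypothetical and only illustrate that mechanism:

         # Illustration of why a "rename" can turn into a full copy:
         # os.rename() only works within one filesystem; across mount points it fails
         # with EXDEV, and tools then fall back to copy + delete (what shutil.move does).
         import errno
         import os
         import shutil

         src = "/mnt/cache/downloads/movie.mkv"  # hypothetical source path
         dst = "/mnt/user/media/movie.mkv"       # hypothetical destination on another mount

         try:
             os.rename(src, dst)        # instant if src and dst are on the same filesystem
         except OSError as e:
             if e.errno == errno.EXDEV:
                 shutil.move(src, dst)  # cross-device: reads and rewrites the whole file
             else:
                 raise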
  6. Can confirm the last update broke everything.
     [amc ] #
     [amc ] # A fatal error has been detected by the Java Runtime Environment:
     [amc ] #
     [amc ] # SIGSEGV (0xb) at pc=0x000014a1e2331030, pid=842, tid=844
     [amc ] #
     [amc ] # JRE version: OpenJDK Runtime Environment (17.0.6+10) (build 17.0.6+10-alpine-r0)
     [amc ] # Java VM: OpenJDK 64-Bit Server VM (17.0.6+10-alpine-r0, mixed mode, sharing, tiered, compressed oops, compressed class ptrs, g1 gc, linux-amd64)
     [amc ] # Problematic frame:
     [amc ] # C  [libmediainfo.so+0x131030]  ZenLib::BitStream_LE::Get(unsigned long)+0x50
     [amc ] #
     [amc ] # No core dump will be written. Core dumps have been disabled. To enable core dumping, try "ulimit -c unlimited" before starting Java again
     [amc ] #
     [amc ] # An error report file with more information is saved as:
     [amc ] # /tmp/hs_err_pid842.log
     [amc ] #
     [amc ] # If you would like to submit a bug report, please visit:
     [amc ] #   https://gitlab.alpinelinux.org/alpine/aports/issues
     [amc ] # The crash happened outside the Java Virtual Machine in native code.
     [amc ] # See problematic frame for where to report the bug.
     [amc ] #
  7. None of that is really an issue though, as long as you follow the basic security measure of not exposing your Unraid server to the internet. There are millions of systems out there running Windows 98 or XP that haven't been patched in decades simply because they will never be exposed to the internet.
  8. My MariaDB is stuck in a boot loop now; is anyone else having this problem after the latest update?
     211026 11:00:22 mysqld_safe Starting mariadbd daemon with databases from /config/databases
  9. The problem is already evident in your comment: you say 'cryptocurrency' because you don't want to imply using a specific one. The problem is that there are a bazillion cryptocurrencies now, and whenever you start accepting one, people will complain about why Dogecoin is accepted but Shiba Inu coin is not. You then also need a service in between that keeps the pricing updated, as cryptocurrencies are far from stable, so you usually end up with a payment provider, which defeats the purpose of using crypto in the first place. You also need to convert that crypto back to fiat currency, as crypto usually isn't accepted for paying the bills, so a tax person has to handle this on top of the usual work. The same goes for customers: depending on where you are, buying something with your crypto could be a realized gain, which is then taxed, so you need documentation to track all of this for tax season. Overall it's just one giant effort for a relatively small group of people who, if they practiced what they preach, wouldn't use crypto anyway because they would HODL forever.
  10. Hello people, I have a problem where my shares become very unresponsive and slow completely randomly, which can only be fixed via a restart or by just waiting for it to finish whatever it is doing. I have yet to find out what is going on here, but it is always the same process that hammers the CPU, even though generally there is almost nothing going on on the server while this problem persists. As you can see, htop shows the process that pegs the server, and iotop shows basically nothing. The Docker tab shows 0.3% CPU load on the most demanding container, and stopping all Docker containers does not fix the problem. I already tried removing the folder caching plugin, but the problem still happened when the plugin was gone. All disks are spun down, and nothing is going on except for tiny amounts of data being moved on the cache drives. The logs are clean as well. Does anyone have any idea what is going on here? thevault-diagnostics-20210731-1607.zip
  11. Is there any chance to get a Wreckfest server running via SteamCMD?
  12. I have solved this problem now with help from some people on the Unraid subreddit. The problem was that the allowed IPs needed to contain 0.0.0.0/0 in order to route all traffic through the VPN (see the example excerpt below).
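     For reference, a minimal excerpt of what the client-side peer section ends up containing (keys and endpoint omitted; add ::/0 as well if IPv6 should also go through the tunnel):

         [Peer]
         # PublicKey and Endpoint omitted
         AllowedIPs = 0.0.0.0/0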
  13. I'm having some issues getting all my traffic routed through my Unraid server. I can connect both from my laptop and my phone via VPN to the Unraid server without any issues when it is set to remote tunneled access, but it seems like not all my traffic is routed through the server, as the IP address on my phone and my laptop does not change at all. This was very annoying over the last two weeks, as I wanted to use my VPN to make Netflix think I'm still in my home country, but it never worked. Does anyone have any idea what's going on here?
  14. It took about a day of 100% CPU usage and now it is synced, so that part is working fine. The problem is that anything I plot on my NAS using the same mnemonic seed does not show up in the GUI on my main PC; it is all listed under 'plots not found' for some reason. Someone said it is related to my Chia instance in Docker having a different private key than my main rig, but I thought the mnemonic was supposed to make sure that doesn't happen. The big question now is what I can do to make everything use the same private keys, and preferably do so without losing any plots. The Chia install on the Unraid system can see all plots just fine for some reason, but there also seem to be two wallets here.
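     A hedged way to confirm whether the two machines really farm with the same keys, assuming the standard chia CLI is available both inside the container and on the main PC: run the snippet in both places and compare the printed fingerprints; if they differ, the "plots not found" state is expected.

         # Minimal sketch: list the key fingerprints known to this Chia install.
         import subprocess

         out = subprocess.run(["chia", "keys", "show"], capture_output=True, text=True)
         print(out.stdout)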
  15. I'm currently doing nothing with my Chia install on my Unraid system and it's basically pegging the CPU at 100%. Is this because it's syncing, or is something wrong here?
  16. People should probably look into this video to see which SSD they want to assign to this and whether they are willing to basically destroy it within a few weeks. TL;DW: Chia takes considerably more power to run than the marketing makes it seem, so it is by no means a "green" crypto, and within just one week of farming it wrote 50 TB to the SSD, which for his higher-end SSD was 3% of the rated TBW, so within less than a year that SSD will hit its TBW limit (see the quick calculation below).
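     A quick back-of-the-envelope check of the numbers quoted from the video (50 TB written per week, stated to be 3% of the drive's rated TBW):

         # Rough endurance estimate based on the figures from the video.
         writes_per_week_tb = 50
         fraction_of_tbw_per_week = 0.03

         rated_tbw = writes_per_week_tb / fraction_of_tbw_per_week  # ~1667 TB rated endurance
         weeks_to_exhaust = 1 / fraction_of_tbw_per_week            # ~33 weeks
         print(f"rated TBW ~ {rated_tbw:.0f} TB, exhausted in ~ {weeks_to_exhaust:.0f} weeks "
               f"(~ {weeks_to_exhaust / 52:.1f} years)")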
  17. How many coins does one get when you "find" whatever it is they are doing here and get the reward? Some online calculators say it takes months to get a reward with something like 40 TB of space assigned, so you would have to get thousands of Chia coins to ever recover even the electricity cost. Edit: just checked, there is a reward of 2 XCH per block you find, so if you are the chosen one, after a few months you get 2 XCH, which are currently worth nothing and which are expected to be worth maybe 20 bucks if anything they say holds true, and even that is an insanely high estimate. It is basically not worth doing any of this unless you enjoy burning money.
  18. The link you posted above has some numbers for the power consumption. Are you getting a really good price for it, or are you buying it for the form factor? Because a simple i3-10100 has about 10% more performance while consuming less power and supporting hyper-threading. This would also give you NVMe support if you need that.
  19. Honestly, the bigger question is whether you have 10G LAN or anything else that would require very fast array write speeds. Most people with 1G LAN will never really feel the difference between Unraid and ZFS because their network is their bottleneck (see the rough numbers below).
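     Rough numbers behind that point (the typical HDD figure is an assumption for illustration):

         # 1 Gbit/s tops out at 125 MB/s before protocol overhead, which is below the
         # sequential speed of a single modern HDD, so the array is rarely the bottleneck.
         link_gbit = 1
         max_throughput_mb_s = link_gbit * 1000 / 8  # 125 MB/s theoretical ceiling
         typical_hdd_mb_s = 150                      # assumed rough figure for a modern 7200 RPM drive
         print(f"1 GbE ceiling: {max_throughput_mb_s:.0f} MB/s, "
               f"typical HDD sequential: ~{typical_hdd_mb_s} MB/s")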
  20. After upgrading to 6.9 (previously I was running 6.9 RC2) I noticed that spinning up the array often leads to error messages in the log, and some disks randomly don't spin up. Sometimes it all works fine and other times 2-3 disks don't spin up. All disks are connected to the same backplane, which is connected to one port of the HBA; clicking spin up again usually works and spins up all the remaining disks. I have noticed that my 3TB WD Red 5400RPM drive seems to spin up most of the time, while my other three 8TB Seagate 7200RPM drives randomly don't spin up on the first try. thevault-diagnostics-20210302-1247.zip
      Mar 2 12:42:16 TheVault emhttpd: Spinning up all drives...
      Mar 2 12:42:16 TheVault emhttpd: spinning up /dev/sdg
      Mar 2 12:42:16 TheVault emhttpd: spinning up /dev/sdd
      Mar 2 12:42:16 TheVault emhttpd: spinning up /dev/sde
      Mar 2 12:42:16 TheVault emhttpd: spinning up /dev/sdf
      Mar 2 12:42:31 TheVault kernel: sd 1:0:3:0: attempting task abort!scmd(0x000000009c45948a), outstanding for 15091 ms & timeout 15000 ms
      Mar 2 12:42:31 TheVault kernel: sd 1:0:3:0: [sdg] tag#608 CDB: opcode=0x85 85 06 20 00 00 00 00 00 00 00 00 00 00 40 e3 00
      Mar 2 12:42:31 TheVault kernel: scsi target1:0:3: handle(0x000c), sas_address(0x4433221103000000), phy(3)
      Mar 2 12:42:31 TheVault kernel: scsi target1:0:3: enclosure logical id(0x500605b00daaefd0), slot(1)
      Mar 2 12:42:31 TheVault kernel: scsi target1:0:3: enclosure level(0x0000), connector name( )
      Mar 2 12:42:33 TheVault kernel: sd 1:0:3:0: task abort: SUCCESS scmd(0x000000009c45948a)
      Mar 2 12:42:33 TheVault kernel: sd 1:0:0:0: attempting task abort!scmd(0x000000002ae4c915), outstanding for 16610 ms & timeout 15000 ms
      Mar 2 12:42:33 TheVault kernel: sd 1:0:0:0: [sdd] tag#6911 CDB: opcode=0x85 85 06 20 00 00 00 00 00 00 00 00 00 00 40 e3 00
      Mar 2 12:42:33 TheVault kernel: scsi target1:0:0: handle(0x0009), sas_address(0x4433221100000000), phy(0)
      Mar 2 12:42:33 TheVault kernel: scsi target1:0:0: enclosure logical id(0x500605b00daaefd0), slot(3)
      Mar 2 12:42:33 TheVault kernel: scsi target1:0:0: enclosure level(0x0000), connector name( )
      Mar 2 12:42:35 TheVault emhttpd: read SMART /dev/nvme1n1
      Mar 2 12:42:35 TheVault emhttpd: read SMART /dev/sdb
      Mar 2 12:42:35 TheVault emhttpd: read SMART /dev/sdf
      Mar 2 12:42:35 TheVault emhttpd: read SMART /dev/sdc
      Mar 2 12:42:35 TheVault emhttpd: read SMART /dev/nvme0n1
      Mar 2 12:42:35 TheVault emhttpd: read SMART /dev/sda
      Mar 2 12:42:35 TheVault kernel: sd 1:0:0:0: task abort: SUCCESS scmd(0x000000002ae4c915)
      Mar 2 12:42:35 TheVault kernel: sd 1:0:1:0: attempting task abort!scmd(0x00000000a8c75da9), outstanding for 18420 ms & timeout 15000 ms
      Mar 2 12:42:35 TheVault kernel: sd 1:0:1:0: [sde] tag#306 CDB: opcode=0x85 85 06 20 00 00 00 00 00 00 00 00 00 00 40 e3 00
      Mar 2 12:42:35 TheVault kernel: scsi target1:0:1: handle(0x000a), sas_address(0x4433221101000000), phy(1)
      Mar 2 12:42:35 TheVault kernel: scsi target1:0:1: enclosure logical id(0x500605b00daaefd0), slot(2)
      Mar 2 12:42:35 TheVault kernel: scsi target1:0:1: enclosure level(0x0000), connector name( )
      Mar 2 12:42:35 TheVault kernel: sd 1:0:1:0: No reference found at driver, assuming scmd(0x00000000a8c75da9) might have completed
      Mar 2 12:42:35 TheVault kernel: sd 1:0:1:0: task abort: SUCCESS scmd(0x00000000a8c75da9)
  21. When I spin up all drives I get this error under 6.9 stable. Only two drives spin up; the others stay spun down. All drives are connected to the same backplane going to an HBA.
      Mar 2 12:10:32 TheVault emhttpd: Spinning up all drives...
      Mar 2 12:10:32 TheVault emhttpd: spinning up /dev/sdg
      Mar 2 12:10:32 TheVault emhttpd: spinning up /dev/sdd
      Mar 2 12:10:32 TheVault emhttpd: spinning up /dev/sde
      Mar 2 12:10:32 TheVault emhttpd: spinning up /dev/sdf
      Mar 2 12:10:48 TheVault kernel: sd 1:0:3:0: attempting task abort!scmd(0x00000000d857461f), outstanding for 15523 ms & timeout 15000 ms
      Mar 2 12:10:48 TheVault kernel: sd 1:0:3:0: [sdg] tag#1783 CDB: opcode=0x85 85 06 20 00 00 00 00 00 00 00 00 00 00 40 e3 00
      Mar 2 12:10:48 TheVault kernel: scsi target1:0:3: handle(0x000c), sas_address(0x4433221103000000), phy(3)
      Mar 2 12:10:48 TheVault kernel: scsi target1:0:3: enclosure logical id(0x500605b00daaefd0), slot(1)
      Mar 2 12:10:48 TheVault kernel: scsi target1:0:3: enclosure level(0x0000), connector name( )
      Mar 2 12:10:50 TheVault kernel: sd 1:0:3:0: task abort: SUCCESS scmd(0x00000000d857461f)
      Mar 2 12:10:50 TheVault kernel: sd 1:0:0:0: attempting task abort!scmd(0x0000000054158672), outstanding for 17520 ms & timeout 15000 ms
      Mar 2 12:10:50 TheVault kernel: sd 1:0:0:0: [sdd] tag#283 CDB: opcode=0x85 85 06 20 00 00 00 00 00 00 00 00 00 00 40 e3 00
      Mar 2 12:10:50 TheVault kernel: scsi target1:0:0: handle(0x0009), sas_address(0x4433221100000000), phy(0)
      Mar 2 12:10:50 TheVault kernel: scsi target1:0:0: enclosure logical id(0x500605b00daaefd0), slot(3)
      Mar 2 12:10:50 TheVault kernel: scsi target1:0:0: enclosure level(0x0000), connector name( )
      Mar 2 12:10:51 TheVault emhttpd: read SMART /dev/nvme1n1
      Mar 2 12:10:51 TheVault emhttpd: read SMART /dev/sde
      Mar 2 12:10:51 TheVault emhttpd: read SMART /dev/sdb
      Mar 2 12:10:51 TheVault emhttpd: read SMART /dev/sdf
      Mar 2 12:10:51 TheVault emhttpd: read SMART /dev/sdc
      Mar 2 12:10:51 TheVault kernel: sd 1:0:0:0: task abort: SUCCESS scmd(0x0000000054158672)
      Mar 2 12:10:51 TheVault emhttpd: read SMART /dev/nvme0n1
      Mar 2 12:10:51 TheVault emhttpd: read SMART /dev/sda
  22. Back when I had 1G LAN I edited everything locally and then moved it over. Now I have 10G LAN and work with the data wherever it happens to be stored, because my bottleneck is now always the performance of the SSD or HDD, just as if I were working with it locally.
  23. Well, you don't necessarily need ECC; under Unraid there is no native ZFS anyway, and that would also go completely against what Unraid actually is.
  24. If you later need more SATA ports and want 10G LAN, you will run out of PCIe slots here. I would strongly recommend building an Intel system; then you can skip the GPU entirely, which saves not only power but also a PCIe slot.
  25. Well, it seems like nobody knows that UPS. Is there any special standard for how a UPS communicates with Unraid that I should look for?