Everything posted by hawihoney

  1. Sorry, what is a myth? That must be my bad English, sorry. Here's a screenshot of an E-2288G as an example. The same is true for the 9900K - just to mention two:
  2. I'm currently in the same boat - actually I came here to ask a similar question. Thanks for your detailed answer, ramblinreck47. I've been racking my brain over the details for two days, and all I got was that current Intel CPU/GPU combinations seem to be limited to 16 PCIe lanes. Is this correct? And if it is, are there Intel APUs with more lanes? Consider 3x PCIe x8 plus 2x M.2 x4 - that's 32 lanes, twice the 16 these CPUs provide. I can't build that with an Intel APU? I really would like to move from "Unraid NVIDIA" to plain Unraid, but the current Intel APUs seem to be too limited, IMHO.
  3. No. All USB devices are plugged into the motherboard (Supermicro X9DRi-F). As I said, it worked until last week. The server runs 24/7, 365 days a year.
  4. Thanks for looking into it. These two USB devices are passed through to two Unraid VMs. They used to show up neither on the Dashboard nor on the Main page, and it had worked that way for a year. Since a few days ago (under a week) they are shown on the Dashboard. The only things that changed in that week are the plugins UD, WireGuard and CA. It's no big deal, and I thought that new passthrough switch might be the reason. Hence I haven't used that switch until now.
  5. Sorry, my fault (sda was a typo). sdb and sdc are shown as Unassigned Devices on the Dashboard but not on the Main page. Both are USB devices passed through to two VMs. The uploaded diagnostics still apply.
  6. Followed it to the letter. It looks exactly like before: sda/sdb are shown on the Dashboard but not on the Main page. There's no way to mark these two devices as passed through.
  7. Diagnostics attached. sda and sdb are two USB sticks that are passed through to two Unraid VMs. Since the latest UD update today they are shown on the Dashboard, while correctly missing on the Main page, IMHO. Thanks for looking into it. tower-diagnostics-20200131-1420.zip
  8. Never seen that before - I think it came with the UD update today. The Main page shows no Unassigned Devices, while the Dashboard shows two Unassigned Devices - at the same time. These two devices are passed through to two VMs. Bug or feature? Thanks.
  9. Here's a picture of that power board in a Supermicro SC846E16 JBOD case, connected to a BPN-SAS2-EL1 backplane. It's the last picture: https://forums.unraid.net/topic/78165-how-many-tbs-is-your-unraid-server/?do=findComment&comment=797795
  10. The name of the power board is "CSE-PTJBOD-CB2". It works perfectly with "BPN-SAS2-846EL" backplanes. There's a newer power board, the "CB3", but I don't have any experience with it.
  11. Today I had to replace the second parity disk in an array of 21 data disks on 6.8.1. Both parity disks are 6TB; the largest data disk is 3TB. Something weird is going on during the still-running rebuild. As long as data disks were involved, performance was over 100 MB/s. After 3TB (the size of the largest data disk) performance immediately dropped to under 80 MB/s. From that point on, the first parity disk is only being read and the second parity disk is only being written - that's all, at 80 MB/s. I don't get it: reading 21 data disks plus one parity disk and writing one parity disk in parallel is much faster than reading just one disk and writing just one? Both parity disks are in perfect shape. Who can explain that? I've never seen that before. The last time I had to do this was with 6.7.*, and it was way faster - in fact, when the rebuild crossed the size of the largest data disk, the parity rebuild got a huge boost with 6.7.*. Many thanks in advance.
  12. What JBOD enclosure? For my CSE846 JBODs I left out the two rear fans and replaced the three front fans with quieter drop-in replacements (the green ones). But you mention drives in the back of your enclosure, so I think you have a bigger one ...
  13. I already do a manual backup of /boot (rsync). What I'm after is that shiny ZIP - ready for the Unraid Creator tool. So my idea and question was: if there's a diagnostics creation executable, there must be a flash ZIP creation executable as well. Is there?
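      In the meantime, here's the minimal console-side stopgap I'd settle for, assuming the GUI flash backup is essentially just a ZIP of the /boot contents (an assumption - I haven't checked whether the GUI excludes anything, and the target path is only an example):

      # build a flash-backup-style ZIP from the console
      # assumes the zip binary is available and /mnt/user/Backup/ exists
      cd /boot
      zip -r "/mnt/user/Backup/flash_$(date +%Y%m%d).zip" .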
  14. Sorry, I don't understand. /boot and what command? The question was: can I trigger the flash ZIP creation from the console?
  15. Thanks. I don't use this backup plugin. I have a self-written, fairly complex backup environment (three different internal stores, two different external stores, four different timed roles) and would like to use it for automatic flash backup creation as well. Is that possible without the plugin - from the console, or with a wget/curl call to the web GUI?
  16. I can get diagnostics from the console. Is it possible to get a flash backup from the console as well? I mean the same archive I get when I trigger the flash backup from the GUI. Currently this is a manual step here, and I would like to add it to my backup job. Thanks in advance.
  17. I don't know if this is important or already known, but with a little change my problems went away this night. I have a lengthy backup job that runs every night. This job calls a lot of cp/rsync/stop docker/start docker/... commands. It usually took ~2 hours on weekdays, ~4 hours on Sundays and ~6 hours on the first day of a month. With 6.8 this changed heavily: the weekday job now took around ~10 hours, and the weekly job never came back - I had to kill the running rsyncs and the surrounding job. I could see transfer rates of just KBit/s, and the hard disks in use showed only sporadic blips of the activity LED. Here's a snippet of what the weekday job does - the Plex part only:

      ...
      docker stop plex
      rsync -avPX --delete-during --exclude Cache/ "/mnt/cache/system/appdata/plex/" "/mnt/user/Daten/unRAID/Tower/appdata_backup/plex/"
      cp /mnt/cache/system/appdata/plex/Library/Application\ Support/Plex\ Media\ Server/Preferences.xml /mnt/user/Daten/unRAID/Tower/plex_backup/
      cp /mnt/cache/system/appdata/plex/Library/Application\ Support/Plex\ Media\ Server/Plug-in\ Support/Databases/com.plexapp.plugins.library.db /mnt/cache/system/appdata/SQLite/Plex/
      echo ".dump metadata_item_settings" | sqlite3 /mnt/cache/system/appdata/plex/Library/Application\ Support/Plex\ Media\ Server/Plug-in\ Support/Databases/com.plexapp.plugins.library.db | grep -v TABLE | grep -v INDEX > /mnt/user/Daten/unRAID/Tower/appdata_backup/plex/settings.sql
      echo ".dump metadata_item_settings" | sqlite3 /mnt/cache/system/appdata/plex/Library/Application\ Support/Plex\ Media\ Server/Plug-in\ Support/Databases/com.plexapp.plugins.library.db | grep -v TABLE | grep -v INDEX > /mnt/user/Daten/unRAID/Tower/plex_backup/settings.sql
      docker start plex
      ...

      The weekly job transfers backup data to another server. This, for example, is the command that never came back:

      ...
      rsync -avPX --delete-during --protect-args -e ssh "/mnt/user/Daten/" "[email protected]:/mnt/user/Backup/Daten/"
      ...

      Yesterday I changed all occurrences of /mnt/user to /mnt/disk17 (source and target of the rsync) and boom - this night I was back to the old performance. This is funny, because in the past I never used user shares; disk17 was the backup disk on all servers. Recently one backup folder on a remote server had to span two disks, so I switched all backup jobs from disk shares to user shares. That worked fine with 6.7.2 and became a problem with 6.8. Just my 0.02 USD.
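      To make the fix concrete, here's the shape of the change as a before/after sketch (the remote host name below is a placeholder - the real command targets my LAN address):

      # before: both ends go through the shfs user-share layer - crawled at KBit/s on 6.8
      rsync -avPX --delete-during --protect-args -e ssh "/mnt/user/Daten/" "root@backupserver:/mnt/user/Backup/Daten/"
      # after: both ends address the disk share directly - back to the old performance
      rsync -avPX --delete-during --protect-args -e ssh "/mnt/disk17/Daten/" "root@backupserver:/mnt/disk17/Backup/Daten/"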
  18. Thanks for your answer. Two profiles - one for the router and one for Unraid/WireGuard - is a good idea, really.
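      For anyone setting up the same thing, a rough sketch of what one of the two client profiles could look like (standard WireGuard config syntax; every key, address, port and host name here is a placeholder, the second profile would differ only in its [Peer] section, and this assumes the router end speaks WireGuard as well):

      [Interface]
      # profile "Unraid" on the remote device
      PrivateKey = <device-private-key>
      Address = 10.253.0.2/32

      [Peer]
      # this tunnel terminates on Unraid's built-in WireGuard
      PublicKey = <unraid-server-public-key>
      Endpoint = my-home.ddns.example:51820
      AllowedIPs = 0.0.0.0/0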
  19. I'm more than impressed. I've never had a VPN tunnel configured that fast and easily on all our devices. Respect. May I ask two questions? 1.) I'm not quite sure whether I want Unraid/WireGuard or our router to be the endpoint in our house. What do you think? 2.) Do you use the WireGuard app on your remote devices, or the VPN settings of your mobile OS? I ask because 100,000 downloads for this app are not that many. Is there a reason to avoid the app? Thanks again for a thrilling addition to Unraid. I've been a customer for ~11 years and never thought that this was a bad decision ...
  20. Ah, thanks. This red error was too much for me.
  21. It was "Cache disk missing", or "Disk missing" in red within the cache slot. The array was not started. I had to put the cache disk back in. What should have happened? Should Unraid apply the new config itself, or do I have to do that? AFAIK I have never removed a disk (array or cache) before, so I'm a little bit nervous. Any help is highly appreciated.
  22. Tried again today. I can't figure out how to seamlessly remove the cache disk I no longer need in a particular server. I verified all the steps in the original post: 1.) Remove the cache disk and reboot --> error. 2.) Keep the cache disk in place --> Unraid puts it back into the first cache slot. Hmm?
  23. @je82: It's a standard 4U. I'm not good at photography ... The last picture is one of the two JBODs; the picture before that is the server. @ghoule: Yes, one VM with one HBA passed through for each attached JBOD. Previously these were three complete servers, but that was a waste of material. A single HBA is more than enough for one JBOD. To be honest, even the HBAs for the JBODs aren't strictly necessary; it's only the drive limit and the lack of multiple array pools that drove my decision. The day multiple array pools are implemented in Unraid, I will throw out the two 9300-8e HBAs, use the expander feature of the backplanes, and make that beast one single shiny server. The main server does see all the individual disks of the JBODs. I don't use user shares on that server, because all disks are usually spun down, and spinning up a whole user share for a single file was a show stopper. I know about the disk cache plugin, but reading 60 disks regularly is no option for me. So all applications work with individual disks and build their own pools/shares. BTW, this combination has been running without any problems for over a year. It's cool to see three parity checks running in parallel on one server ...
  24. Hmm, must be something different then. I can see the problems you describe for small folders too.