From 2bd18a97614dc161300188f67c73398cd63e1992 Mon Sep 17 00:00:00 2001 From: mouayed-nafeh <149155077+mouayed-nafeh@users.noreply.github.com> Date: Thu, 6 Mar 2025 11:30:23 +0100 Subject: [PATCH 01/16] Update README.md --- README.md | 56 +++++++++++++++++++++++++++++++++++++++++++++++++++++++ 1 file changed, 56 insertions(+) diff --git a/README.md b/README.md index b7c2128..a27b3f1 100644 --- a/README.md +++ b/README.md @@ -50,6 +50,62 @@ * Fragility Functions * Vulnerability Functions +# ๐Ÿ‘ฉโ€๐Ÿ’ป๐Ÿง‘โ€๐Ÿ’ป Installation + +Follow these steps to install the required tools and set up the development environment. It is highly recommended to use a **virtual environment** to install this tool. A virtual environment is an isolated Python environment that allows you to manage dependencies for this project separately from your systemโ€™s Python installation. This ensures that the required dependencies for the OpenQuake engine do not interfere with other Python projects or system packages, which could lead to version conflicts. + +1. Open a terminal and navigate to the folder where you intend to install the virtual environment using the "cd" command. + + ```bash + cd {virtual_environment_diretory} + ``` + +2. Create a virtual environment using the following command: + + ```bash + python3 -m venv {virtual_environment_name} + ``` + +3. Activate the virtual environment: +* On Linux: + + ```bash + source {virtual_environment_name}/bin/activate + ``` + +* On Windows: + + ```bash + .\{virtual_environment_name}\Scripts\activate + ``` + +4. Enter (while on virtual environment) the preferred directory for "oq-vmtk" using the "cd" command + + ```bash + cd {preferred_directory} + ``` + +5. Clone the "oq-vmtk" repository + + ```bash + git clone https://github.com/GEMScienceTools/oq-vmtk.git + ``` + +6. Complete the development installation by running the following commands depending on your python version {py-version} (e.g., 310, 311 or 312): +* On Linux + + ```bash + pip install -r {preferred_directory}/requirements-py{py-version}-linux.txt + pip install -e . + ``` + +* On Windows + + ```bash + pip install -r {preferred_directory}/requirements-py{py-version}-win64.txt + pip install -e . + ``` + # ๐Ÿ“š Documentation TBD From a886740b7deca275f435d6fb206d18cf8aa47a52 Mon Sep 17 00:00:00 2001 From: mouayed-nafeh <149155077+mouayed-nafeh@users.noreply.github.com> Date: Thu, 6 Mar 2025 11:33:35 +0100 Subject: [PATCH 02/16] Update README --- README.md | 13 +++++++------ 1 file changed, 7 insertions(+), 6 deletions(-) diff --git a/README.md b/README.md index a27b3f1..9e20e7b 100644 --- a/README.md +++ b/README.md @@ -52,12 +52,13 @@ # ๐Ÿ‘ฉโ€๐Ÿ’ป๐Ÿง‘โ€๐Ÿ’ป Installation -Follow these steps to install the required tools and set up the development environment. It is highly recommended to use a **virtual environment** to install this tool. A virtual environment is an isolated Python environment that allows you to manage dependencies for this project separately from your systemโ€™s Python installation. This ensures that the required dependencies for the OpenQuake engine do not interfere with other Python projects or system packages, which could lead to version conflicts. +Follow these steps to install the required tools and set up the development environment. Note that this procedure implies the installation of the OpenQuake engine dependencies. This procedure was tested on Mac and Linux OS. +It is highly recommended to use a **virtual environment** to install this tool. 
A virtual environment is an isolated Python environment that allows you to manage dependencies for this project separately from your systemโ€™s Python installation. This ensures that the required dependencies for the OpenQuake engine do not interfere with other Python projects or system packages, which could lead to version conflicts. 1. Open a terminal and navigate to the folder where you intend to install the virtual environment using the "cd" command. ```bash - cd {virtual_environment_diretory} + cd {virtual_environment_directory} ``` 2. Create a virtual environment using the following command: @@ -83,13 +84,13 @@ Follow these steps to install the required tools and set up the development envi ```bash cd {preferred_directory} - ``` + ``` 5. Clone the "oq-vmtk" repository - ```bash - git clone https://github.com/GEMScienceTools/oq-vmtk.git - ``` + ```bash + git clone https://github.com/GEMScienceTools/oq-vmtk.git + ``` 6. Complete the development installation by running the following commands depending on your python version {py-version} (e.g., 310, 311 or 312): * On Linux From 137a5132ac4b80b1a0619d16470fdce4e0113382 Mon Sep 17 00:00:00 2001 From: mouayed-nafeh <149155077+mouayed-nafeh@users.noreply.github.com> Date: Thu, 6 Mar 2025 12:01:06 +0100 Subject: [PATCH 03/16] Update README --- README.md | 29 ++++++++++++++++++++++++++++- 1 file changed, 28 insertions(+), 1 deletion(-) diff --git a/README.md b/README.md index 9e20e7b..b9daa4c 100644 --- a/README.md +++ b/README.md @@ -53,7 +53,7 @@ # ๐Ÿ‘ฉโ€๐Ÿ’ป๐Ÿง‘โ€๐Ÿ’ป Installation Follow these steps to install the required tools and set up the development environment. Note that this procedure implies the installation of the OpenQuake engine dependencies. This procedure was tested on Mac and Linux OS. -It is highly recommended to use a **virtual environment** to install this tool. A virtual environment is an isolated Python environment that allows you to manage dependencies for this project separately from your systemโ€™s Python installation. This ensures that the required dependencies for the OpenQuake engine do not interfere with other Python projects or system packages, which could lead to version conflicts. +It is highly recommended to use a **virtual environment** to install this tool. A virtual environment is an isolated Python environment that allows you to manage dependencies for this project separately from your systemโ€™s Python installation. This ensures that the required dependencies for the OpenQuake engine do not interfere with other Python projects or system packages, which could lead to version conflicts. 1. Open a terminal and navigate to the folder where you intend to install the virtual environment using the "cd" command. @@ -111,6 +111,33 @@ It is highly recommended to use a **virtual environment** to install this tool. TBD +# ๐Ÿ“ผ Demos + +The repository includes demo scripts that showcase the functionality of the vulnerability-modellers-toolkit (oq-vmtk). You can find them in the demos folder of the repository. + +To run a demo, simply navigate to the demos directory and execute the relevant demo script in Jupyter Lab. Jupyter Lab is automatically installed with oq-vmtk. + +1. Open a terminal and activate the virtual environment: +* On Linux: + + ```bash + source {virtual_environment_name}/bin/activate + ``` + +* On Windows: + + ```bash + .\{virtual_environment_name}\Scripts\activate + ``` +2. Open Jupyter Lab from the terminal: + + ```bash + jupyter-lab + ``` + +3. Navigate to the "demos" folder +4. 
Run the examples + # ๐ŸŒŸ Contributors Contributors are gratefully acknowledged and listed in CONTRIBUTORS.txt. From 5437d079a02efcb7a262958e0f431ec113cf3009 Mon Sep 17 00:00:00 2001 From: mouayed-nafeh <149155077+mouayed-nafeh@users.noreply.github.com> Date: Thu, 6 Mar 2025 12:02:18 +0100 Subject: [PATCH 04/16] Update README --- README.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/README.md b/README.md index b9daa4c..184dbfa 100644 --- a/README.md +++ b/README.md @@ -121,13 +121,13 @@ To run a demo, simply navigate to the demos directory and execute the relevant d * On Linux: ```bash - source {virtual_environment_name}/bin/activate + source {virtual_environment_directory}/bin/activate ``` * On Windows: ```bash - .\{virtual_environment_name}\Scripts\activate + {virtual_environment_directory}\Scripts\activate ``` 2. Open Jupyter Lab from the terminal: From c37d9ee43d2430d65d3bd3b15c1995882e0620bf Mon Sep 17 00:00:00 2001 From: mouayed-nafeh <149155077+mouayed-nafeh@users.noreply.github.com> Date: Thu, 6 Mar 2025 12:03:38 +0100 Subject: [PATCH 05/16] Update README --- README.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/README.md b/README.md index 184dbfa..59c376a 100644 --- a/README.md +++ b/README.md @@ -71,13 +71,13 @@ It is highly recommended to use a **virtual environment** to install this tool. * On Linux: ```bash - source {virtual_environment_name}/bin/activate + source {virtual_environment_directory}/bin/activate ``` * On Windows: ```bash - .\{virtual_environment_name}\Scripts\activate + {virtual_environment_directory}\Scripts\activate ``` 4. Enter (while on virtual environment) the preferred directory for "oq-vmtk" using the "cd" command From 04da3c158b8d08dc01264d7a6bd7dddc7f86eea7 Mon Sep 17 00:00:00 2001 From: mouayed-nafeh <149155077+mouayed-nafeh@users.noreply.github.com> Date: Thu, 6 Mar 2025 12:11:10 +0100 Subject: [PATCH 06/16] Update contribute_guidelines --- contribute_guidelines.md | 11 ++++++++--- 1 file changed, 8 insertions(+), 3 deletions(-) diff --git a/contribute_guidelines.md b/contribute_guidelines.md index ad993f6..8a1508b 100644 --- a/contribute_guidelines.md +++ b/contribute_guidelines.md @@ -1,7 +1,12 @@ # ๐Ÿค“ CONTRIBUTE TO THE VULNERABILITY TOOLKIT -You can contribute by improving the code available, addressing issues and bugs or include additional functionalities. +We welcome contributions to improve the vulnerability-modellers-toolkit (oq-vmtk) and this repository! If you'd like to contribute, follow the steps below. Otherwise, you can email your information to _mouayed.nafeh@globalquakemodel.org_. -If you are familiar working with `git` repositories, open a pull request with the new information, and follow the standards and recommendations in the sections below. Otherwise, you can email your information to _mouayed.nafeh@globalquakemodel.org_. +1. Fork the repository on GitHub. +2. Clone your fork to your local machine. +3. Create a new branch for your feature or fix (git checkout -b feature-branch). +4. Make your changes and commit them (git commit -am 'Add new feature'). +5. Push your changes to your fork (git push origin feature-branch). +6. Open a pull request on GitHub. -Contribution guidelines to be available soon. +Please ensure that your code follows the existing style and includes relevant tests and documentation. 
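For reference, the numbered contribution steps above map to roughly the following shell commands; this is a minimal sketch, where `your-username` and `feature-branch` are placeholders:

```bash
# Clone your fork (created with the "Fork" button on GitHub) and enter it
git clone https://github.com/your-username/oq-vmtk.git
cd oq-vmtk

# Create a branch for the feature or fix
git checkout -b feature-branch

# Commit your changes and push the branch to your fork
git commit -am 'Add new feature'
git push origin feature-branch

# Finally, open a pull request on GitHub from feature-branch against the upstream repository
```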
From b3805657600798a6263ff6f298b11cf594511aba Mon Sep 17 00:00:00 2001 From: mouayed-nafeh <149155077+mouayed-nafeh@users.noreply.github.com> Date: Thu, 6 Mar 2025 12:11:57 +0100 Subject: [PATCH 07/16] Update README --- README.md | 12 +++--------- 1 file changed, 3 insertions(+), 9 deletions(-) diff --git a/README.md b/README.md index 59c376a..a5e614d 100644 --- a/README.md +++ b/README.md @@ -109,7 +109,7 @@ It is highly recommended to use a **virtual environment** to install this tool. # ๐Ÿ“š Documentation -TBD +[WIP] # ๐Ÿ“ผ Demos @@ -158,17 +158,11 @@ This work is licensed under an AGPL v3 license (https://www.gnu.org/licenses/agp ### How to contribute? -You can follow the instructions indicated in the [contributing guidelines](./contribute_guidelines.md). (Work-In-Progress) - -### Which version am I seeing? How to change the version? - -By default, you will see the files in the repository in the `main` branch. Each version of the model that is released can be accessed is marked with a `tag`. By changing the tag version at the top of the repository, you can change see the files for a given version. - -Note that the `main` branch could contain the work-in-progress of the next version of the model. +You can follow the instructions indicated in the [contributing guidelines](./contribute_guidelines.md) # ๐Ÿ“‘ References -TBD +[WIP] From 1da24f30988c4e2d29d03e428585ecbca42cc56d Mon Sep 17 00:00:00 2001 From: mouayed-nafeh <149155077+mouayed-nafeh@users.noreply.github.com> Date: Thu, 6 Mar 2025 12:15:00 +0100 Subject: [PATCH 08/16] Update README --- README.md | 10 +++++++++- 1 file changed, 9 insertions(+), 1 deletion(-) diff --git a/README.md b/README.md index a5e614d..df506fb 100644 --- a/README.md +++ b/README.md @@ -127,8 +127,16 @@ To run a demo, simply navigate to the demos directory and execute the relevant d * On Windows: ```bash - {virtual_environment_directory}\Scripts\activate + {virtual_environment_directory}\Scripts + activate + ``` + +* To deactivate virtual environment: + + ```bash + deactivate ``` + 2. 
Open Jupyter Lab from the terminal: ```bash From 72d2067316967688e711f9a6b55aa593e7cdd307 Mon Sep 17 00:00:00 2001 From: mouayed-nafeh <149155077+mouayed-nafeh@users.noreply.github.com> Date: Thu, 6 Mar 2025 12:17:08 +0100 Subject: [PATCH 09/16] Fix bug in example_2 --- demos/example_2.ipynb | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/demos/example_2.ipynb b/demos/example_2.ipynb index 799695b..a3b878c 100644 --- a/demos/example_2.ipynb +++ b/demos/example_2.ipynb @@ -313,7 +313,7 @@ " sf, \n", " t_max, \n", " dt_ansys,\n", - " temp_nrha_outdir,\n", + " temp_nrha_directory,\n", " pflag=False,\n", " xi = mdof_damping)\n", "\n", From 02788de18149161a3edd809b9dc069f1b81bbeb7 Mon Sep 17 00:00:00 2001 From: mouayed-nafeh <149155077+mouayed-nafeh@users.noreply.github.com> Date: Thu, 6 Mar 2025 12:37:57 +0100 Subject: [PATCH 10/16] Fix bug in example_3 --- demos/example_1.ipynb | 16 ++++++++++++++-- demos/example_2.ipynb | 8 ++++++++ demos/out/nltha/ansys_out.pkl | Bin 66561 -> 66561 bytes 3 files changed, 22 insertions(+), 2 deletions(-) diff --git a/demos/example_1.ipynb b/demos/example_1.ipynb index 844e328..c527581 100644 --- a/demos/example_1.ipynb +++ b/demos/example_1.ipynb @@ -46,10 +46,22 @@ }, { "cell_type": "code", - "execution_count": null, + "execution_count": 1, "id": "3aa0f9f2-68c4-4201-b9ef-5d90031e4477", "metadata": {}, - "outputs": [], + "outputs": [ + { + "ename": "ModuleNotFoundError", + "evalue": "No module named 'openquake.vmtk'", + "output_type": "error", + "traceback": [ + "\u001b[31m---------------------------------------------------------------------------\u001b[39m", + "\u001b[31mModuleNotFoundError\u001b[39m Traceback (most recent call last)", + "\u001b[36mCell\u001b[39m\u001b[36m \u001b[39m\u001b[32mIn[1]\u001b[39m\u001b[32m, line 8\u001b[39m\n\u001b[32m 5\u001b[39m \u001b[38;5;28;01mimport\u001b[39;00m\u001b[38;5;250m \u001b[39m\u001b[34;01mmatplotlib\u001b[39;00m\u001b[34;01m.\u001b[39;00m\u001b[34;01mpyplot\u001b[39;00m\u001b[38;5;250m \u001b[39m\u001b[38;5;28;01mas\u001b[39;00m\u001b[38;5;250m \u001b[39m\u001b[34;01mplt\u001b[39;00m\n\u001b[32m 7\u001b[39m \u001b[38;5;66;03m# Import the IMCalculator class\u001b[39;00m\n\u001b[32m----> \u001b[39m\u001b[32m8\u001b[39m \u001b[38;5;28;01mfrom\u001b[39;00m\u001b[38;5;250m \u001b[39m\u001b[34;01mopenquake\u001b[39;00m\u001b[34;01m.\u001b[39;00m\u001b[34;01mvmtk\u001b[39;00m\u001b[34;01m.\u001b[39;00m\u001b[34;01mim_calculator\u001b[39;00m\u001b[38;5;250m \u001b[39m\u001b[38;5;28;01mimport\u001b[39;00m IMCalculator \n\u001b[32m 9\u001b[39m \u001b[38;5;28;01mfrom\u001b[39;00m\u001b[38;5;250m \u001b[39m\u001b[34;01mopenquake\u001b[39;00m\u001b[34;01m.\u001b[39;00m\u001b[34;01mvmtk\u001b[39;00m\u001b[34;01m.\u001b[39;00m\u001b[34;01mutilities\u001b[39;00m\u001b[38;5;250m \u001b[39m\u001b[38;5;28;01mimport\u001b[39;00m sorted_alphanumeric, export_to_pkl\n", + "\u001b[31mModuleNotFoundError\u001b[39m: No module named 'openquake.vmtk'" + ] + } + ], "source": [ "import os\n", "import sys\n", diff --git a/demos/example_2.ipynb b/demos/example_2.ipynb index a3b878c..ac99cfc 100644 --- a/demos/example_2.ipynb +++ b/demos/example_2.ipynb @@ -363,6 +363,14 @@ "# Export the analysis output variable to a pickle file using the \"export_to_pkl\" function from \"utilities\"\n", "export_to_pkl(os.path.join(nrha_directory,'ansys_out.pkl'), ansys_dict) " ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "6d41ae04-db7a-4d79-b7d9-bfc4aff65bdd", + "metadata": {}, + "outputs": [], + "source": [] } ], 
"metadata": { diff --git a/demos/out/nltha/ansys_out.pkl b/demos/out/nltha/ansys_out.pkl index 5bd754f7a24ebaee889deed4737f9e014f9c41ba..df55d95c16712209886d7894d343862a5189bad1 100644 GIT binary patch delta 10250 zcmZ8nc_38X8+NVP4Q6g-DOxZSQ;3i)m59$ELb6o$HCuKjel5s$P||`@(O1+IS|n1^ zf|6D#5~W06`bs4~bLYF~y3L=@^SBg47q+Q+o;L%0LTnTC4ebGO_VMiEztJi9>m?95eKu^Te!KX z5V;%>N^P7K5euz+g_qVzJ4XMJaZ}Yuvw#7S*VxTw-TRuD+%+E~pX0g`hzs zOg5lZ`d3v*14O|-U@|L|^1UrNB(s|hVHwaci-|@p|H9{VHxZ(>9Y?M1J=@I9Oo{0K z=XV8(J6V;PyP?073p2fe6b}|72mhs z40-~@ofq#B`3cO$7--gxdOCQJ0huGCWi%tE>mOMJJk!zNrc78B9SaAIVw!07;-M9< ztJXko`)xwMDh5rs-U>t?elaCAMpy5KR&~Hwd{nlxd$rlSgG3LSw*V6fEoz@dOFr|% zF=#ydp#J0*z?NW8RD5r@Z>bK2OdV*DV9N3j4mXVE?1E?%1N@c zJHT%#E%c~9{rd%GAvBKYs&`ac5X23}*&;25P%iZxzkWxh1RD5&BpD{UU-9GVSC*v^ z6@R)(W@S>~AA^3oV#PxR>pRfut>sZLC8LSX&dM#-ovMI%DqyF0UyOZg-53 zD6t`T$sJx2n-GbmR!*@-sD5ii{@u^4g<%ZmehHEXh#Z4j6=-Zo_s)k5|FYow;lNz( z14XB|<5S93;=GXo#_~*5dHlhGt-sDg`{pJ?kKNr^cV;AXdW~9d#C~GFkqV!7D>n~r zLW;y%Nf#e=>Ha`WvVtu5Ew6>n@>!WUZbpn)_!`OPLpNQx&fPsU zCZ+J%z3vqNp)q95@5^s~#ZN?e`5Ktew5VY%=O0h_5a;D5r-A05fK6jiyWj7q8!C>5 zJ=@SGae4YwqcJEB)IjmH0b)-d<4n?+{IVkBb5d3Uu&vIG<<(z z_IVbkX$}KHA8mOvq~CiNdN}QVZJoW^j@z>tgV$PW6?brPCA6xsc^IP@oxrsMTNLid z#xSh!(|?(q04%>iiFz8BnfhO)=gbAZT}GE#%$7q#~2D zOI`nr-RmHTE&#?#Oloi6EQ!xiS0JjD!!MB!*h&oQ^DseI$M3{SN>0q)+zFbL80f*V zkVXrgBnFClz+9PuDr((-*$_mW>*a!IvE~(T ztlQJnnVd=-yG0^i_USN|3K;|(h>$-I9=2jD)p}=@vwYO*|?! zRD&o_z6m$#%Uy*`G%#MkMD#$&0#nrA^mvCwix=eX|23qtsl$=m_KeeBN_Ua!c57m1 zECrJbnCN{9;xWZfoDpeX4t9U`boj$p;zG#P0=f%qK@TJ?WKzdI*Jpgobbu(IaJsh` zr)eRBA1&UDtj$CUAmuVg=d?or(H;sd$A_h@;r%SFGmG5NzCL%vnV3F+Y6)I~Y-Epf&KT$!cyoGtS5BMo(^w zI>$k)Z*kz7h8E?9uilf-wK@(IMA}6%fAD-kP@cyxJe# za#QL z5vOO|HtGCH@xXj3gGx`ndMaP<0Bqs(PU#UPz+TEgLcp1&rgFyKv8S5%5Igegx!+)F zX$aMq`muPqLOJx)e`wP+2e4-|z~9SckQmUgj7d@094;YyfyjO(@c5_7aNNfa5N4Vx3C+}2*p@E_KIv+JlX_lMotAT2p>HH1zd-x%{(V4E-v})}Qqv+}Ee)BC zg-II)8+o4l^}T3+t)XXW3d%!1;LVjXT6$mY{exTm*hXmx8+MEzuyNEMlaG9U_%rnMLnMS0w zxb^VECeGEB2?)APc_NK{QVhEN?}L54@=adGJ4|GRUoI487YcxVCI?Z@LNi-U1wLr1 z8T(*k?%?m~#j3&aGp#{HK1?YN-VLH~Q{q^fmm+B2r@vpus1O4cc4RoWq)Y;2ucmX< zE$k39N!2d@tbr^HIW@G@(}Pc%lfAkPp(Vcw>R9rgAS>lMS3j)*A!{?xz2;tTJ5`Bs z6u)>Pu2lmY44`i*gPs`ibPae&@QRD7+n0G^JZHD%bgF&v@@FlZod@YVV)xG zbBH-8`!e^+>S%00fWA!)`r%aI&Qn*2!JeL!K-)+uAmyyZSz;R`hz`grwY{C9!nSLE zcX(fO=Z)wSdlo`vHQT&s)F#G+kNX2F9gWa@PR&WBBXZCcTeA;jvB{`&WFfRi7P^&@jO@@)SVC?z8SCY=q@@8NCRPhKE0tnih-z<)y+3%z5bjF9-ltTkOAYhwwT z`>m=_)Rb72EuLWca2aOF@pCCfXovr2JTit7%SV4YDfv90kS+p7bm`#9rujm(G6SXQ;MXQ2 zl({RSeDh6W+Y3gHe{Z<|U*_&!wDG+wtu8MnCSE6G>W+06XT+V3ps!zy`>l){lSn z{ke{~1m>*oBZGd~u)9Y>e+nMA8fWQ0&#ptT)_41u@uOi+Yt=byV%NJJ5!vB)h95H! z@n<1Il@*t$%^ddtBLod^-}A87Ya9lfST=Ze(YEi{3GpP5g`fdSPPQ`9U!g%&op7!7 zQl5cd@N_Vd#N^p3oU0+32zB|^2T?Kq5X%EQuQ~2vQv!$uaF1jNVL}qfid0j;g`H!Icy;%0CF+#Lxw%l@lPdLqnJ+ zlmOEmE46V=glvy%B4m^&iGVXZ{5^2-gKU6{A7m64JIE+5b&ye>v;iVxlNE3ggKU5a z7(gP<5zzr&aa31apdee}f&>}G1qd>V>knj93be;DP+V^ybLK!Mo~nsU3uF$L709Rq zaEQN&h=AD%3_-4704x&%6>)ihY|4`Z!2JY==_}y;M5-{a&<93|i&OBLp6r3w^kfvT z>B%Tw&y!KSmM5cl?G8~fY=vQkI~Yz>L-7(F8esf7pq^wd$Sub~NRs+Gy#6K|;q^8d z#j9;H%B!xy%cRBHcp*(Tzzb(G$}5b)^5n(ByqXwzCofLHi(s;?UR+T{wAn#ZGKv?) 
From 7ae8686400b6f7383638c334f35f08b8219877f5 Mon Sep 17 00:00:00 2001
From: mouayed-nafeh <149155077+mouayed-nafeh@users.noreply.github.com>
Date: Thu, 6 Mar 2025 12:39:18 +0100
Subject: [PATCH 11/16] Update README

---
 README.md | 19 ++++++++++---------
 1 file changed, 10 insertions(+), 9 deletions(-)

diff --git a/README.md b/README.md
index df506fb..bddeeb0 100644
--- a/README.md
+++ b/README.md
@@ -58,32 +58,33 @@ It is highly recommended to use a **virtual environment** to install this tool.
 1. Open a terminal and navigate to the folder where you intend to install the virtual environment using the "cd" command.
 
    ```bash
-   cd {virtual_environment_directory}
+   cd <virtual_environment_directory>
    ```
 
 2. Create a virtual environment using the following command:
 
    ```bash
-   python3 -m venv {virtual_environment_name}
+   python3 -m venv <virtual_environment_name>
    ```
 
 3. Activate the virtual environment:
 * On Linux:
 
    ```bash
-   source {virtual_environment_directory}/bin/activate
+   source <virtual_environment_directory>/bin/activate
    ```
 
 * On Windows:
 
    ```bash
-   {virtual_environment_directory}\Scripts\activate
+   <virtual_environment_directory>\Scripts
+   activate
    ```
 
 4. Enter (while on virtual environment) the preferred directory for "oq-vmtk" using the "cd" command
 
    ```bash
-   cd {preferred_directory}
+   cd <preferred_directory>
    ```
 
 5. Clone the "oq-vmtk" repository
@@ -96,14 +97,14 @@ It is highly recommended to use a **virtual environment** to install this tool.
 * On Linux
 
    ```bash
-   pip install -r {preferred_directory}/requirements-py{py-version}-linux.txt
+   pip install -r <preferred_directory>/requirements-py<py-version>-linux.txt
    pip install -e .
    ```
 
 * On Windows
 
    ```bash
-   pip install -r {preferred_directory}/requirements-py{py-version}-win64.txt
+   pip install -r <preferred_directory>/requirements-py<py-version>-win64.txt
    pip install -e .
``` @@ -121,13 +122,13 @@ To run a demo, simply navigate to the demos directory and execute the relevant d * On Linux: ```bash - source {virtual_environment_directory}/bin/activate + source /bin/activate ``` * On Windows: ```bash - {virtual_environment_directory}\Scripts + \Scripts activate ``` From 019eb7b34d667ccb34b7766c98411f36ab85ccc0 Mon Sep 17 00:00:00 2001 From: Al Mouayed Bellah Nafeh <149155077+mouayed-nafeh@users.noreply.github.com> Date: Thu, 6 Mar 2025 12:48:07 +0100 Subject: [PATCH 12/16] Update example_2.ipynb From dd160559f1ffa149cd2a1c9dca03ddce7c89a51e Mon Sep 17 00:00:00 2001 From: mouayed-nafeh <149155077+mouayed-nafeh@users.noreply.github.com> Date: Thu, 6 Mar 2025 16:57:10 +0100 Subject: [PATCH 13/16] Update plotter class --- openquake/vmtk/plotter.py | 843 ++++++++++++-------------------------- 1 file changed, 271 insertions(+), 572 deletions(-) diff --git a/openquake/vmtk/plotter.py b/openquake/vmtk/plotter.py index 0ba054b..c2a0611 100644 --- a/openquake/vmtk/plotter.py +++ b/openquake/vmtk/plotter.py @@ -3,226 +3,140 @@ import seaborn as sns from scipy import stats import matplotlib.pyplot as plt -import matplotlib.patches as mpatches from matplotlib.lines import Line2D +import matplotlib.patches as mpatches import matplotlib.gridspec as gridspec from matplotlib.ticker import MultipleLocator from matplotlib.animation import FuncAnimation -## Define plot style -HFONT = {'fontname':'Helvetica'} - -FONTSIZE_1 = 16 -FONTSIZE_2 = 14 -FONTSIZE_3 = 12 - -LINEWIDTH_1= 3 -LINEWIDTH_2= 2 -LINEWIDTH_3 = 1 - -RESOLUTION = 500 -MARKER_SIZE_1 = 100 -MARKER_SIZE_2 = 60 -MARKER_SIZE_3 = 10 - -FRAG_COLORS = ['green', 'yellow', 'orange', 'red'] -DS_COLORS = ['blue','green', 'yellow', 'orange', 'red'] # For animation -DS_LABELS = ['No Damage','Slight Damage','Moderate Damage','Extensive Damage','Complete Damage'] -GEM_COLORS = ["#0A4F4E","#0A4F5E","#54D7EB","#54D6EB","#399283","#399264","#399296"] +class plotter: + def __init__(self): + # Define default styles + self.font_sizes = { + 'title': 16, + 'labels': 14, + 'ticks': 12, + 'legend': 14 + } + self.line_widths = { + 'thick': 3, + 'medium': 2, + 'thin': 1 + } + self.marker_sizes = { + 'large': 100, + 'medium': 60, + 'small': 10 + } + self.colors = { + 'fragility': ['green', 'yellow', 'orange', 'red'], + 'damage_states': ['blue', 'green', 'yellow', 'orange', 'red'], + 'gem': ["#0A4F4E", "#0A4F5E", "#54D7EB", "#54D6EB", "#399283", "#399264", "#399296"] + } + self.resolution = 500 + self.font_name = 'Helvetica' + def _set_plot_style(self, ax, title=None, xlabel=None, ylabel=None, grid=True): + """Set consistent plot style for all plots.""" + if title: + ax.set_title(title, fontsize=self.font_sizes['title'], fontname=self.font_name) + if xlabel: + ax.set_xlabel(xlabel, fontsize=self.font_sizes['labels'], fontname=self.font_name) + if ylabel: + ax.set_ylabel(ylabel, fontsize=self.font_sizes['labels'], fontname=self.font_name) + ax.tick_params(axis='both', labelsize=self.font_sizes['ticks']) + if grid: + ax.grid(visible=True, which='major') + ax.grid(visible=True, which='minor') -class plotter(): + def _save_plot(self, output_directory, plot_label): + """Save the plot if output_directory is provided.""" + if output_directory: + plt.savefig(f'{output_directory}/{plot_label}.png', dpi=self.resolution, format='png') + plt.show() - def __init__(self): - pass - def duplicate_for_drift(self, peak_drift_list, control_nodes): - """ - Creates data to process box plots for peak storey drifts - ----- - Input - ----- - :param peak_drift_list: list 
Peak Storey Drifts - :param control_nodes: list Nodes of the MDOF oscillator - - ------ - Output - ------ - x: list Box plot-ready drift values - y: list Box plot-ready control nodes values - """ - - x = []; y = [] - for i in range(len(control_nodes)-1): - y.extend((float(control_nodes[i]),float(control_nodes[i+1]))) - x.extend((peak_drift_list[i],peak_drift_list[i])) - y.append(float(control_nodes[i+1])) - x.append(0.0) - + """Creates data to process box plots for peak storey drifts.""" + y = [float(node) for node in control_nodes] # Convert all control nodes to float + x = [peak_drift_list[i // 2] if i < 2 * (len(control_nodes) - 1) else 0.0 + for i in range(2 * (len(control_nodes) - 1) + 1)] + return x, y - def plot_cloud_analysis(self, - cloud_dict, - output_directory, - plot_label = 'cloud_analysis_plot', - xlabel = 'Peak Ground Acceleration, PGA [g]', - ylabel = r'Maximum Peak Storey Drift, $\theta_{max}$ [%]'): - - """ - Plots the cloud analysis results - - Parameters - ---------- - cloud_dict: dict Direct output from do_cloud_analysis function - output_directory: string Output directory path - plot_label: string Designated filename for plot (default set to "cloud_analysis_plot") - xlabel: string X-axis label (default set to mpsd) - ylabel: string Y-axis label (default set to pga) - - Returns - ------- - None. - - """ - - ### Initialise the figure - plt.rcParams['figure.figsize'] = [6, 6] - fig, ax = plt.subplots() - - plt.scatter(cloud_dict['imls'], cloud_dict['edps'], color = GEM_COLORS[2], s=MARKER_SIZE_2, alpha = 0.5, label = 'Cloud Data',zorder=0) # Plot the cloud scatter + cloud_dict, + output_directory=None, + plot_label='cloud_analysis_plot', + xlabel='Peak Ground Acceleration, PGA [g]', + ylabel=r'Maximum Peak Storey Drift, $\theta_{max}$ [%]'): + + """Plot the cloud analysis results.""" + fig, ax = plt.subplots(figsize=(6, 6)) + self._set_plot_style(ax, xlabel=xlabel, ylabel=ylabel) + + ax.scatter(cloud_dict['imls'], cloud_dict['edps'], color=self.colors['gem'][2], s=self.marker_sizes['medium'], alpha=0.5, label='Cloud Data', zorder=0) for i in range(len(cloud_dict['damage_thresholds'])): - plt.scatter(cloud_dict['medians'][i], cloud_dict['damage_thresholds'][i], color = FRAG_COLORS[i], s = MARKER_SIZE_1, alpha=1.0, zorder=2) - - plt.plot(cloud_dict['fitted_x'], cloud_dict['fitted_y'], linestyle = 'solid', color = GEM_COLORS[1], lw=LINEWIDTH_1, label = 'Cloud Regression', zorder=1) # Plot the regressed fit - - plt.plot([min(cloud_dict['imls']), max(cloud_dict['imls'])],[cloud_dict['upper_limit'],cloud_dict['upper_limit']],'--',color=GEM_COLORS[-1], label = 'Upper Censoring Limit') # Plot the upper limit - plt.plot([min(cloud_dict['imls']), max(cloud_dict['imls'])],[cloud_dict['lower_limit'],cloud_dict['lower_limit']],'-.',color=GEM_COLORS[-1], label = 'Lower Censoring Limit') # Plot the lower limit - - plt.xlabel(xlabel, fontsize = FONTSIZE_1, **HFONT) - plt.ylabel(ylabel, fontsize = FONTSIZE_1, **HFONT) - - plt.xticks(fontsize=FONTSIZE_2, rotation=0) - plt.yticks(fontsize=FONTSIZE_2, rotation=0) - - plt.grid(visible=True, which='major') - plt.grid(visible=True, which='minor') - - plt.xscale('log') - plt.yscale('log') - - plt.xlim([min(cloud_dict['imls']), max(cloud_dict['imls'])]) - plt.ylim([min(cloud_dict['edps']), max(cloud_dict['edps'])]) - - plt.legend() - plt.savefig(f'{output_directory}/{plot_label}.png', dpi=RESOLUTION, format='png') - plt.show() + ax.scatter(cloud_dict['medians'][i], cloud_dict['damage_thresholds'][i], color=self.colors['fragility'][i], 
s=self.marker_sizes['large'], alpha=1.0, zorder=2) + ax.plot(cloud_dict['fitted_x'], cloud_dict['fitted_y'], linestyle='solid', color=self.colors['gem'][1], lw=self.line_widths['thick'], label='Cloud Regression', zorder=1) + ax.plot([min(cloud_dict['imls']), max(cloud_dict['imls'])], [cloud_dict['upper_limit'], cloud_dict['upper_limit']], '--', color=self.colors['gem'][-1], label='Upper Censoring Limit') + ax.plot([min(cloud_dict['imls']), max(cloud_dict['imls'])], [cloud_dict['lower_limit'], cloud_dict['lower_limit']], '-.', color=self.colors['gem'][-1], label='Lower Censoring Limit') + ax.set_xscale('log') + ax.set_yscale('log') + ax.set_xlim([min(cloud_dict['imls']), max(cloud_dict['imls'])]) + ax.set_ylim([min(cloud_dict['edps']), max(cloud_dict['edps'])]) + ax.legend(fontsize=self.font_sizes['legend']) - def plot_demand_profiles(self, - peak_drift_list, - peak_accel_list, - control_nodes, - output_directory, - plot_label): - """ - Plots the demand profiles associated with each record of cloud analysis - - Parameters - ---------- - peak_drift_list: list Peak storey drifts quantities from analysis - peak_accel_list: list Peak floor acceleration quantities from analysis - control_nodes: list Nodes of the MDOF system - output_directory: string Output directory path - Returns - ------- - None. - - """ - - ### Initialise the figure - plt.figure(figsize=(12, 6)) - plt.rcParams['axes.axisbelow'] = True - ax1 = plt.subplot(1,2,1) - ax2 = plt.subplot(1,2,2) - - ### get number of storeys - nst = len(control_nodes)-1 - - ### plot the results - for i in range(len(peak_drift_list)): - - x,y = self.duplicate_for_drift(peak_drift_list[i][:,0],control_nodes) - ax1.plot([float(i)*100 for i in x], y, linewidth=LINEWIDTH_2, linestyle = 'solid', color = GEM_COLORS[1], alpha = 0.7) - ax1.set_xlabel(r'Peak Storey Drift, $\theta_{max}$ [%]',fontsize = FONTSIZE_2, **HFONT) - ax1.set_ylabel('Floor No.', fontsize = FONTSIZE_2, **HFONT) - ax1.grid(visible=True, which='major') - ax1.grid(visible=True, which='minor') - ax1.set_yticks(np.linspace(0,nst,nst+1), labels = np.linspace(0,nst,nst+1), minor = False, fontsize=FONTSIZE_3) - xticks = np.linspace(0,5,11) - ax1.set_xticks(xticks, labels=xticks, minor=False, fontsize=FONTSIZE_3) - ax1.set_xlim([0, 5.0]) - - ax2.plot([float(x)/9.81 for x in peak_accel_list[i][:,0]], control_nodes, linewidth=LINEWIDTH_2, linestyle = 'solid', color = GEM_COLORS[0], alpha=0.7) - ax2.set_xlabel(r'Peak Floor Acceleration, $a_{max}$ [g]', fontsize = FONTSIZE_2, **HFONT) - ax2.set_ylabel('Floor No.', fontsize = FONTSIZE_2, **HFONT) - ax2.grid(visible=True, which='major') - ax2.grid(visible=True, which='minor') - ax2.set_yticks(np.linspace(0,nst,nst+1), labels = np.linspace(0,nst,nst+1), minor = False, fontsize=FONTSIZE_3) - xticks = np.linspace(0,5,11) - ax2.set_xticks(xticks, labels=xticks, minor=False, fontsize=FONTSIZE_3) - ax2.set_xlim([0, 5.0]) - - plt.savefig(f'{output_directory}/{plot_label}.png', dpi=RESOLUTION, format='png') - plt.show() + self._save_plot(output_directory, plot_label) - def plot_fragility_analysis(self, - cloud_dict, - output_directory, - plot_label = 'fragility_plot', - xlabel = 'Peak Ground Acceleration, PGA [g]'): - - """ - Plots the cloud analysis results - - Parameters - ---------- - cloud_dict: dict Direct output from do_cloud_analysis function - output_directory: string Output directory path - plot_label: string Designated filename for plot (default set to "cloud_analysis_plot") - xlabel: string X-axis label (default set to pga) - - Returns - ------- - 
None. - - """ - - ### Plot the cloud - plt.rcParams['figure.figsize'] = [6, 6] - fig, ax = plt.subplots() - + cloud_dict, + output_directory=None, + plot_label='fragility_plot', + xlabel='Peak Ground Acceleration, PGA [g]'): + + """Plot the fragility analysis results.""" + fig, ax = plt.subplots(figsize=(6, 6)) + self._set_plot_style(ax, xlabel=xlabel, ylabel='Probability of Exceedance') + for i in range(len(cloud_dict['medians'])): - plt.plot(cloud_dict['intensities'], cloud_dict['poes'][:,i], linestyle = 'solid', color = FRAG_COLORS[i], lw=LINEWIDTH_1, label = f'DS{i+1}') # Plot the regressed fit - - plt.xlabel(xlabel, fontsize = FONTSIZE_1, **HFONT) - plt.ylabel('Probability of Exceedance', fontsize = FONTSIZE_1, **HFONT) - - plt.xticks(fontsize=FONTSIZE_2, rotation=0) - plt.yticks(fontsize=FONTSIZE_2, rotation=0) - - plt.grid(visible=True, which='major') - plt.grid(visible=True, which='minor') - plt.xlim([0,5]) - plt.ylim([0,1]) - - plt.legend() - plt.savefig(f'{output_directory}/{plot_label}.png', dpi=RESOLUTION, format='png') - plt.show() + ax.plot(cloud_dict['intensities'], cloud_dict['poes'][:, i], linestyle='solid', color=self.colors['fragility'][i], lw=self.line_widths['thick'], label=f'DS{i+1}') + + ax.set_xlim([0, 5]) + ax.set_ylim([0, 1]) + ax.legend(fontsize=self.font_sizes['legend']) + + self._save_plot(output_directory, plot_label) + + def plot_demand_profiles(self, + peak_drift_list, + peak_accel_list, + control_nodes, + output_directory=None, + plot_label='demand_profiles'): + + """Plot the demand profiles for peak drifts and accelerations.""" + fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(12, 6)) + self._set_plot_style(ax1, xlabel=r'Peak Storey Drift, $\theta_{max}$ [%]', ylabel='Floor No.') + self._set_plot_style(ax2, xlabel=r'Peak Floor Acceleration, $a_{max}$ [g]', ylabel='Floor No.') + + nst = len(control_nodes) - 1 + for i in range(len(peak_drift_list)): + x, y = self.duplicate_for_drift(peak_drift_list[i][:, 0], control_nodes) + ax1.plot([float(i) * 100 for i in x], y, linewidth=self.line_widths['medium'], linestyle='solid', color=self.colors['gem'][1], alpha=0.7) + ax2.plot([float(x) / 9.81 for x in peak_accel_list[i][:, 0]], control_nodes, linewidth=self.line_widths['medium'], linestyle='solid', color=self.colors['gem'][0], alpha=0.7) + + ax1.set_yticks(np.linspace(0, nst, nst + 1), labels=np.linspace(0, nst, nst + 1), minor=False) + ax2.set_yticks(np.linspace(0, nst, nst + 1), labels=np.linspace(0, nst, nst + 1), minor=False) + ax1.set_xticks(np.linspace(0, 5, 11), labels=np.linspace(0, 5, 11), minor=False) + ax2.set_xticks(np.linspace(0, 5, 11), labels=np.linspace(0, 5, 11), minor=False) + ax1.set_xlim([0, 5.0]) + ax2.set_xlim([0, 5.0]) + + self._save_plot(output_directory, plot_label) def plot_ansys_results(self, @@ -230,406 +144,191 @@ def plot_ansys_results(self, peak_drift_list, peak_accel_list, control_nodes, - output_directory, - plot_label, - cloud_xlabel = 'PGA', - cloud_ylabel = 'MPSD'): - - ### Initialise the figure - plt.figure(figsize=(10, 10)) + output_directory=None, + plot_label='ansys_results', + cloud_xlabel='PGA', + cloud_ylabel='MPSD'): + """Plot analysis results including cloud analysis, fragility, and demand profiles.""" + fig, ((ax1, ax2), (ax3, ax4)) = plt.subplots(2, 2, figsize=(10, 10)) plt.rcParams['axes.axisbelow'] = True - ax1 = plt.subplot(2,2,1) - ax2 = plt.subplot(2,2,2) - ax3 = plt.subplot(2,2,3) - ax4 = plt.subplot(2,2,4) - - # First: Cloud - ax1.scatter(cloud_dict['imls'], cloud_dict['edps'], color = GEM_COLORS[2], 
s=MARKER_SIZE_2, alpha = 0.5, label = 'Cloud Data',zorder=0) # Plot the cloud scatter + + # Cloud Analysis + self._set_plot_style(ax1, xlabel=cloud_xlabel, ylabel=cloud_ylabel) + ax1.scatter(cloud_dict['imls'], cloud_dict['edps'], color=self.colors['gem'][2], s=self.marker_sizes['medium'], alpha=0.5, label='Cloud Data', zorder=0) for i in range(len(cloud_dict['damage_thresholds'])): - ax1.scatter(cloud_dict['medians'][i], cloud_dict['damage_thresholds'][i], color = FRAG_COLORS[i], s = MARKER_SIZE_1, alpha=1.0, zorder=2) - ax1.plot(cloud_dict['fitted_x'], cloud_dict['fitted_y'], linestyle = 'solid', color = GEM_COLORS[1], lw=LINEWIDTH_1, label = 'Cloud Regression', zorder=1) # Plot the regressed fit - ax1.plot([min(cloud_dict['imls']), max(cloud_dict['imls'])],[cloud_dict['upper_limit'],cloud_dict['upper_limit']],'--',color=GEM_COLORS[-1], label = 'Upper Censoring Limit') # Plot the upper limit - ax1.plot([min(cloud_dict['imls']), max(cloud_dict['imls'])],[cloud_dict['lower_limit'],cloud_dict['lower_limit']],'-.',color=GEM_COLORS[-1], label = 'Lower Censoring Limit') # Plot the lower limit - ax1.set_xlabel(cloud_xlabel, fontsize = FONTSIZE_1, **HFONT) - ax1.set_ylabel(cloud_ylabel, fontsize = FONTSIZE_1, **HFONT) - ax1.set_xticks(np.linspace(np.log(min(cloud_dict['imls'])),np.log(max(cloud_dict['imls']))), labels = np.linspace(np.log(min(cloud_dict['imls'])),np.log(max(cloud_dict['imls']))), minor = False, fontsize=FONTSIZE_3) - ax1.set_yticks(np.linspace(np.log(min(cloud_dict['edps'])),np.log(max(cloud_dict['edps']))), labels = np.linspace(np.log(min(cloud_dict['edps'])),np.log(max(cloud_dict['edps']))), minor = False, fontsize=FONTSIZE_3) - ax1.grid(visible=True, which='major') - ax1.grid(visible=True, which='minor') + ax1.scatter(cloud_dict['medians'][i], cloud_dict['damage_thresholds'][i], color=self.colors['fragility'][i], s=self.marker_sizes['large'], alpha=1.0, zorder=2) + ax1.plot(cloud_dict['fitted_x'], cloud_dict['fitted_y'], linestyle='solid', color=self.colors['gem'][1], lw=self.line_widths['thick'], label='Cloud Regression', zorder=1) + ax1.plot([min(cloud_dict['imls']), max(cloud_dict['imls'])], [cloud_dict['upper_limit'], cloud_dict['upper_limit']], '--', color=self.colors['gem'][-1], label='Upper Censoring Limit') + ax1.plot([min(cloud_dict['imls']), max(cloud_dict['imls'])], [cloud_dict['lower_limit'], cloud_dict['lower_limit']], '-.', color=self.colors['gem'][-1], label='Lower Censoring Limit') ax1.set_xscale('log') ax1.set_yscale('log') - plt.legend() - - # Second: Fragility + ax1.legend(fontsize=self.font_sizes['legend']) + + # Fragility Analysis + self._set_plot_style(ax2, xlabel=cloud_xlabel, ylabel='Probability of Exceedance') for i in range(len(cloud_dict['medians'])): - ax2.plot(cloud_dict['intensities'], cloud_dict['poes'][:,i], linestyle = 'solid', color = FRAG_COLORS[i], lw=LINEWIDTH_1, label = f'{DS_LABELS[i+1]}') # Plot the regressed fit - - ax2.set_xlabel(cloud_xlabel, fontsize = FONTSIZE_1, **HFONT) - ax2.set_ylabel('Probability of Exceedance', fontsize = FONTSIZE_1, **HFONT) - - ax2.set_xticks(np.linspace(0,5,6), labels = np.round(np.linspace(0,5,6),2), minor = False, fontsize=FONTSIZE_3) - ax2.set_yticks(np.linspace(0,1,11), labels =np.round(np.linspace(0,1,11),2), minor = False, fontsize=FONTSIZE_3) - - ax2.grid(visible=True, which='major') - ax2.grid(visible=True, which='minor') - ax2.set_xlim([0,5]) - ax2.set_ylim([0,1]) - ax2.legend() - - # Third: Demands - nst = len(control_nodes)-1 - for i in range(len(peak_drift_list)): - x,y = 
self.duplicate_for_drift(peak_drift_list[i][:,0],control_nodes) - ax3.plot([float(i)*100 for i in x], y, linewidth=LINEWIDTH_2, linestyle = 'solid', color = GEM_COLORS[1], alpha = 0.7) - ax3.set_xlabel(r'Peak Storey Drift, $\theta_{max}$ [%]',fontsize = FONTSIZE_2, **HFONT) - ax3.set_ylabel('Floor No.', fontsize = FONTSIZE_2, **HFONT) - ax3.grid(visible=True, which='major') - ax3.grid(visible=True, which='minor') - ax3.set_yticks(np.linspace(0,nst,nst+1), labels = np.linspace(0,nst,nst+1), minor = False, fontsize=FONTSIZE_3) - xticks = np.linspace(0,5,11) - ax3.set_xticks(xticks, labels=xticks, minor=False, fontsize=FONTSIZE_3) - ax3.set_xlim([0, 5.0]) - - ax4.plot([float(x) for x in peak_accel_list[i][:,0]], control_nodes, linewidth=LINEWIDTH_2, linestyle = 'solid', color = GEM_COLORS[0], alpha=0.3) - ax4.set_xlabel(r'Peak Floor Acceleration, $a_{max}$ [g]', fontsize = FONTSIZE_2, **HFONT) - ax4.set_ylabel('Floor No.', fontsize = FONTSIZE_2, **HFONT) - ax4.grid(visible=True, which='major') - ax4.grid(visible=True, which='minor') - ax4.set_yticks(np.linspace(0,nst,nst+1), labels = np.linspace(0,nst,nst+1), minor = False, fontsize=FONTSIZE_3) - xticks = np.linspace(0,5,11) - ax4.set_xticks(xticks, labels=xticks, minor=False, fontsize=FONTSIZE_3) - ax4.set_xlim([0, 5.0]) - - plt.tight_layout() - plt.savefig(f'{output_directory}/{plot_label}.png', dpi=RESOLUTION, format='png') - plt.show() + ax2.plot(cloud_dict['intensities'], cloud_dict['poes'][:, i], linestyle='solid', color=self.colors['fragility'][i], lw=self.line_widths['thick'], label=f'DS{i+1}') + ax2.set_xlim([0, 5]) + ax2.set_ylim([0, 1]) + ax2.legend(fontsize=self.font_sizes['legend']) - def plot_multiple_stripe_analysis(msa_dict, - output_directory, - plot_label = 'multiple_stripe_analysis_plot', - xlabel = r'Maximum Peak Storey Drift, $\theta_{max}$ [%]', - ylabel = 'Peak Ground Acceleration, PGA [g]'): - - """ - Creates a combined subplot of two figures for multiple stripe analysis: - - First figure: Stripe analysis (IMLs vs EDPs) - - Second figure: Fitted fragilities (Exceedance probabilities for different thresholds) - - Parameters - ---------- - msa_dict: dict Direct output from do_multiple_stripe_analysis function - output_directory: string Output directory path - plot_label: string Designated filename for plot (default set to "cloud_analysis_plot") - xlabel: string X-axis label (default set to mpsd) - ylabel: string Y-axis label (default set to pga) - - Returns - ------- - None. 
- - """ - - def plot_stripe_analysis(imls, - edps, - damage_thresholds, - xlabel, - ylabel, - ax): - - """Plots the stripe analysis (IMLs vs EDPs) on a given axis""" - for i, threshold in enumerate(damage_thresholds): - for j, im in enumerate(imls): - ax.scatter(edps[j, :], [im] * len(edps[j, :]), color = GEM_COLORS[1], s=MARKER_SIZE_2, alpha = 0.5, label = 'MSA Data',zorder=0) - - # Add vertical lines for the damage thresholds - for i, threshold in enumerate(damage_thresholds): - ax.axvline(x=threshold, color=FRAG_COLORS[i], linestyle='--', label=f'Threshold {threshold}') - - ax.set_xlabel(xlabel,fontsize = FONTSIZE_2, **HFONT) - ax.set_ylabel(ylabel, fontsize = FONTSIZE_2, **HFONT) - ax.grid(visible=True, which='major') - ax.grid(visible=True, which='minor') - ax.set_xlim([0, np.max(edps)]) - - def plot_exceedance_fit(imls, - num_exc, - num_gmr, - eta, - beta, - threshold, - xlabel, - color, - ax): - - """Plot the exceedance fit for the fragility curve on a given axis""" - fitted_exceedance = stats.norm.cdf(np.log(imls / eta) / beta) - ax.plot(imls, fitted_exceedance, label=f"Fitted Lognormal (Threshold {threshold})", color=color) - ax.scatter(imls, num_exc / num_gmr, color = color, s=MARKER_SIZE_2, alpha = 0.5, label = 'Observed Exceedances',zorder=0) - ax.set_xlabel(xlabel, fontsize = FONTSIZE_1, **HFONT) - ax.set_ylabel('Probability of Exceedance', fontsize = FONTSIZE_1, **HFONT) - ax.legend() - ax.grid(visible=True, which='major') - ax.grid(visible=True, which='minor') - - - # Extract values from msa_dict - imls = msa_dict['imls'] - edps = msa_dict['edps'] - damage_thresholds = msa_dict['damage_thresholds'] - - ### Initialise the figure - plt.figure(figsize=(12, 6)) - plt.rcParams['axes.axisbelow'] = True - ax1 = plt.subplot(1,2,1) - ax2 = plt.subplot(1,2,2) - - # Plot the stripe analysis on the first axis - plot_stripe_analysis(imls, - edps, - damage_thresholds, - xlabel, - ylabel, - ax1) - - # Loop over all damage thresholds to plot the fragility fits - for i, threshold in enumerate(damage_thresholds): - eta = msa_dict['medians'][i] - beta = msa_dict['betas_total'][i] - color = FRAG_COLORS[i] - num_exc = np.array([np.sum(edp >= threshold) for edp in edps]) - num_gmr = np.full(len(imls), len(edps[0])) # Number of ground motions at each IM level - - # Plot the exceedance fit for the current threshold on the second axis - plot_exceedance_fit(imls, num_exc, num_gmr, eta, beta, threshold, xlabel, color, ax2) - - # Adjust layout for better readability - plt.tight_layout() - plt.savefig(f'{output_directory}/{plot_label}.png', dpi=RESOLUTION, format='png') - plt.show() + # Demand Profiles: Drifts + self._set_plot_style(ax3, xlabel=r'Peak Storey Drift, $\theta_{max}$ [%]', ylabel='Floor No.') + nst = len(control_nodes) - 1 + for i in range(len(peak_drift_list)): + x, y = self.duplicate_for_drift(peak_drift_list[i][:, 0], control_nodes) + ax3.plot([float(i) * 100 for i in x], y, linewidth=self.line_widths['medium'], linestyle='solid', color=self.colors['gem'][1], alpha=0.7) + ax3.set_yticks(np.linspace(0, nst, nst + 1), labels=np.linspace(0, nst, nst + 1), minor=False) + ax3.set_xticks(np.linspace(0, 5, 11), labels=np.linspace(0, 5, 11), minor=False) + ax3.set_xlim([0, 5.0]) + # Demand Profiles: Accelerations + self._set_plot_style(ax4, xlabel=r'Peak Floor Acceleration, $a_{max}$ [g]', ylabel='Floor No.') + for i in range(len(peak_accel_list)): + ax4.plot([float(x) for x in peak_accel_list[i][:, 0]], control_nodes, linewidth=self.line_widths['medium'], linestyle='solid', 
color=self.colors['gem'][0], alpha=0.3) + ax4.set_yticks(np.linspace(0, nst, nst + 1), labels=np.linspace(0, nst, nst + 1), minor=False) + ax4.set_xticks(np.linspace(0, 5, 11), labels=np.linspace(0, 5, 11), minor=False) + ax4.set_xlim([0, 5.0]) - - def animate_model_run(self,control_nodes, acc, dts, nrha_disps, nrha_accels, drift_thresholds, pflag=True): - """ - Animates the seismic demands for a single nonlinear time-history analysis run - Parameters - ---------- - control_nodes: list Control nodes of the MDOF system - acc: array Acceleration values of the applied time-history - dts: array Pseudo-time values of the applied time-history - nrha_disps: array Nodal displacement values, output from do_nrha_analysis method - nrha_accels: array Relative nodal acceleration values, output from do_nrha_analysis method - drift_thresholds: list Drift-based damage thresholds - - Returns - ------- - None. - - """ - - # Set up the figure and the GridSpec layout - fig = plt.figure(figsize=(8, 8)) - gs = gridspec.GridSpec(2, 2, height_ratios=[1, 0.5]) - - # Create square subplots for the first row - ax1 = fig.add_subplot(gs[0, 0]) - ax2 = fig.add_subplot(gs[0, 1]) - - # Create a horizontal subplot that spans the bottom row - ax3 = fig.add_subplot(gs[1, :]) - - # Initial plots for each subplot - line1, = ax1.plot([], [], color="blue", linewidth=LINEWIDTH_2, marker='o', markersize=MARKER_SIZE_3) - line2, = ax2.plot([], [], color="red", linewidth=LINEWIDTH_2, marker='o', markersize=MARKER_SIZE_3) - line3, = ax3.plot([], [], color="green", linewidth=LINEWIDTH_2) - - # Set up each subplot - ax1.set_title("Floor Displacement (in m)", **HFONT) - ax2.set_title("Floor Acceleration (in g)", **HFONT) - ax3.set_title("Acceleration Time-History", **HFONT) - ax1.set_ylim(0.0, len(control_nodes)) - ax2.set_ylim(0.0, len(control_nodes)) - ax3.set_xlim(0, dts[-1]) - ax3.set_ylim(np.floor(acc.min()), np.ceil(acc.max())) - - # Set up ticks - ax1.set_yticks(range(len(control_nodes))) - ax1.set_yticklabels([f"Floor {i}" for i in range(len(control_nodes))]) - - ax2.set_yticks(range(len(control_nodes))) - ax2.set_yticklabels([f"Floor {i}" for i in range(len(control_nodes))]) - - # --- Enable and customize the grid --- - # Enable minor ticks for both axes - ax1.minorticks_on() - ax2.minorticks_on() - ax3.minorticks_on() - - # Set the major grid locator (spacing of major grid lines) - ax1.xaxis.set_major_locator(MultipleLocator(1)) # Major grid line every 1 unit on x-axis - ax1.yaxis.set_major_locator(MultipleLocator(0.5)) # Major grid line every 0.5 unit on y-axis - - # Set the minor grid locator (spacing of minor grid lines) - ax1.xaxis.set_minor_locator(MultipleLocator(0.2)) # Minor grid lines every 0.2 units on x-axis - ax1.yaxis.set_minor_locator(MultipleLocator(0.1)) # Minor grid lines every 0.1 units on y-axis - - # Customize the appearance of the grid lines (major and minor) - ax1.grid(which='major', color='gray', linestyle='-', linewidth=0.8) - ax1.grid(which='minor', color='gray', linestyle=':', linewidth=0.5) - - ax2.xaxis.set_major_locator(MultipleLocator(1)) # Major grid line every 1 unit on x-axis - ax2.yaxis.set_major_locator(MultipleLocator(0.5)) # Major grid line every 0.5 unit on y-axis - ax2.xaxis.set_minor_locator(MultipleLocator(0.2)) # Minor grid lines every 0.2 units on x-axis - ax2.yaxis.set_minor_locator(MultipleLocator(0.1)) # Minor grid lines every 0.1 units on y-axis - - ax2.grid(which='major', color='gray', linestyle='-', linewidth=0.8) - ax2.grid(which='minor', color='gray', linestyle=':', 
linewidth=0.5) - - ax3.xaxis.set_major_locator(MultipleLocator(2)) # Major grid line every 2 units on x-axis - ax3.yaxis.set_major_locator(MultipleLocator(0.5)) # Major grid line every 0.5 unit on y-axis - ax3.xaxis.set_minor_locator(MultipleLocator(0.5)) # Minor grid lines every 0.5 units on x-axis - ax3.yaxis.set_minor_locator(MultipleLocator(0.1)) # Minor grid lines every 0.1 units on y-axis - - ax3.grid(which='major', color='gray', linestyle='-', linewidth=0.8) - ax3.grid(which='minor', color='gray', linestyle=':', linewidth=0.5) - - # Initialize the third line - line1.set_data([], []) - line2.set_data([], []) - line3.set_data([], []) - - # Add a static legend for damage states in ax1 (floor drift subplot) - legend_elements = [Line2D([0], [0], color=c, lw=3, label=state) for c, state in zip(DS_COLORS, DS_LABELS)] - ax1.legend(handles=legend_elements, loc="upper right", fontsize=FONTSIZE_3) - - # Initialize tracking variables to remember the maximum threshold exceeded - max_drift_threshold_index = 0 # Track max threshold index for drift - - # Animation update function - def update(frame): - - nonlocal max_drift_threshold_index - - # Get current displacements and accelerations for each control node at the current time frame - disp_values = nrha_disps[frame, :] - accel_values = nrha_accels[frame, :] - - # Calculate drift as the difference in displacement between consecutive floors - drift_values = np.abs(np.diff(disp_values)) # Absolute drift between consecutive floors - - # Determine maximum threshold level exceeded by drift for this frame - current_drift_threshold_index = max_drift_threshold_index # Start with the current maximum threshold - - for i, threshold in enumerate(drift_thresholds): - if np.max(drift_values) > threshold: - current_drift_threshold_index = max(current_drift_threshold_index, i) - - # Update the maximum drift threshold index reached so far - max_drift_threshold_index = current_drift_threshold_index - - # Set line1 color based on the highest drift threshold reached - line1.set_color(DS_COLORS[max_drift_threshold_index]) - - # Update line data - line1.set_data(disp_values, range(len(control_nodes))) - line2.set_data(accel_values, range(len(control_nodes))) - - # Time-history plot for acceleration data up to the current frame - line3.set_data(dts[:frame], acc[:frame]) - - return line1, line2, line3 - - # Create the animation - ani = FuncAnimation(fig, update, frames=len(dts), interval=1, blit=True, repeat=False) - - # Show the animation plt.tight_layout() - plt.show() # block=True ensures the animation is displayed in a blocking way - plt.pause(0.1) - - return ani - + self._save_plot(output_directory, plot_label) - def plot_vulnerability_analysis(self, + def plot_vulnerability_analysis(self, intensities, loss, cov, xlabel, ylabel, - output_directory, - plot_label): - - - # Simulating Beta distributions for each intensity measure + output_directory=None, + plot_label='vulnerability_plot'): + """Plot the vulnerability analysis results.""" + fig, ax1 = plt.subplots(figsize=(14, 8)) + self._set_plot_style(ax1, xlabel=xlabel, ylabel='Simulated Loss Ratio') + + # Simulate Beta distributions simulated_data = [] intensity_labels = [] - for j, mean_loss in enumerate(loss): - variance = (cov[j] * mean_loss) ** 2 # Calculate variance using CoV + variance = (cov[j] * mean_loss) ** 2 alpha = mean_loss * (mean_loss * (1 - mean_loss) / variance - 1) beta_param = (1 - mean_loss) * (mean_loss * (1 - mean_loss) / variance - 1) - - # Generate samples from the Beta distribution data = 
np.random.beta(alpha, beta_param, 10000) simulated_data.append(data) - intensity_labels.extend([intensities[j]] * len(data)) # Repeat intensity measures for each sample - - # Convert to DataFrame for seaborn visualization + intensity_labels.extend([intensities[j]] * len(data)) + + # Create DataFrame for seaborn df_sns = pd.DataFrame({ 'Intensity Measure': intensity_labels, 'Simulated Data': np.concatenate(simulated_data) }) - - # Create a figure and a set of axes for the violin plot - fig, ax1 = plt.subplots(figsize=(14, 8)) - - # --- Violin plot for Beta distributions --- - violin=sns.violinplot( - x='Intensity Measure', y='Simulated Data', data=df_sns, - scale='width', bw=0.2, inner=None, ax=ax1, zorder=1 - ) - - # Overlay a strip plot for better visualization of individual samples - sns.stripplot( - x='Intensity Measure', y='Simulated Data', data=df_sns, - color='k', size=1, alpha=0.5, ax=ax1, zorder=3 - ) - - # Customize the first y-axis (for the violin plot) - ax1.set_ylabel("Simulated Loss Ratio", fontsize=FONTSIZE_1, color='blue') - ax1.set_xlabel(f"{xlabel}", fontsize=FONTSIZE_1) - ax1.tick_params(axis='y', labelcolor='blue') - ax1.grid(True, which='both', linestyle='--', linewidth=0.5) - ax1.set_ylim(-0.1, 1.2) # Adjust y-axis range for the violin plot - - # Add the legend for the violin plots (Beta distribution) - # Create a dummy plot handle for the legend, since the violins are not directly plotted as lines - beta_patch = mpatches.Patch(color=violin.collections[0].get_facecolor()[0], label="Beta Distribution") - ax1.legend(handles=[beta_patch], loc='upper left', fontsize=FONTSIZE_1, bbox_to_anchor=(0, 1), ncol=1) - - - # --- Add a second set of x and y axes for the Loss Curve --- - ax2 = ax1.twinx() # Create a shared y-axis for the loss curve - - # Plot the loss curve on ax2 (now in blue) - ax2.plot( - range(len(intensities)), loss, marker='o', linestyle='-', color='blue', - label="Loss Curve", zorder=2 - ) - - # Customize the second y-axis (for the loss curve) - ax2.set_ylabel(f"{ylabel}", fontsize=FONTSIZE_1, color='blue', rotation = 270, labelpad=20) + + # Violin plot + violin = sns.violinplot(x='Intensity Measure', y='Simulated Data', data=df_sns, scale='width', bw=0.2, inner=None, ax=ax1, zorder=1) + sns.stripplot(x='Intensity Measure', y='Simulated Data', data=df_sns, color='k', size=1, alpha=0.5, ax=ax1, zorder=3) + + # Loss curve on secondary axis + ax2 = ax1.twinx() + ax2.plot(range(len(intensities)), loss, marker='o', linestyle='-', color='blue', label="Loss Curve", zorder=2) + ax2.set_ylabel(ylabel, fontsize=self.font_sizes['labels'], color='blue', rotation=270, labelpad=20) ax2.tick_params(axis='y', labelcolor='blue') - ax2.set_ylim(-0.1, 1.2) # Adjust y-axis range for the loss curve if needed - - # Customize both x-axes to match + + # Customize x-axis ax1.set_xticks(range(len(intensities))) - ax1.set_xticklabels([f"{x:.3f}" for x in intensities], rotation=45, ha='right', fontsize= FONTSIZE_3) - - # Add a legend for the loss curve - ax2.legend(loc='upper left', fontsize=FONTSIZE_1, bbox_to_anchor=(0, 0.95), ncol=1) - - # Tight layout and show the combined plot + ax1.set_xticklabels([f"{x:.3f}" for x in intensities], rotation=45, ha='right', fontsize=self.font_sizes['ticks']) + + # Add legends + beta_patch = mpatches.Patch(color=violin.collections[0].get_facecolor()[0], label="Beta Distribution") + ax1.legend(handles=[beta_patch], loc='upper left', fontsize=self.font_sizes['legend'], bbox_to_anchor=(0, 1), ncol=1) + ax2.legend(loc='upper left', 
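Both the refactored and the restored versions of `plot_vulnerability_analysis` obtain Beta-distribution parameters by moment matching from the mean loss ratio and its coefficient of variation before sampling. A minimal, self-contained sketch of that step — with made-up mean losses and CoVs, and assuming each mean lies strictly between 0 and 1 with a variance small enough to keep the Beta parameters positive — could look like:

```python
import numpy as np

# Hypothetical mean loss ratios and coefficients of variation, one per intensity level
mean_loss = np.array([0.05, 0.20, 0.55])
cov = np.array([0.8, 0.5, 0.3])

variance = (cov * mean_loss) ** 2                         # Var = (CoV * mean)^2
common = mean_loss * (1.0 - mean_loss) / variance - 1.0   # must stay positive for a valid Beta
alpha = mean_loss * common
beta_param = (1.0 - mean_loss) * common

# 10,000 simulated loss ratios per intensity level, mirroring the plotting routine
samples = [np.random.beta(a, b, 10_000) for a, b in zip(alpha, beta_param)]
for m, s in zip(mean_loss, samples):
    print(f"target mean {m:.2f} -> sample mean {s.mean():.3f}")
```

By construction the sample mean of each Beta draw recovers the target mean loss ratio, which is why the violin plots line up with the overlaid loss curve.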
fontsize=self.font_sizes['legend'], bbox_to_anchor=(0, 0.95), ncol=1) + + self._save_plot(output_directory, plot_label) + + def plot_slf_model(self, + out, + cache, + xlabel, + output_directory=None, + plot_label='slf'): + + """Plot the storey loss function generator output.""" + keys_list = list(cache.keys()) + for i, current_key in enumerate(keys_list): + rlz = len(cache[current_key]['total_loss_storey']) + total_loss_storey_array = np.array([cache[current_key]['total_loss_storey'][i] for i in range(rlz)]) + + fig, ax = plt.subplots(figsize=(8, 6)) + self._set_plot_style(ax, xlabel=xlabel, ylabel='Storey Loss') + + for i in range(rlz): + ax.scatter(out[current_key]['edp_range'], total_loss_storey_array[i, :], color=self.colors['gem'][3], s=self.marker_sizes['small'], alpha=0.5) + + ax.fill_between(out[current_key]['edp_range'], cache[current_key]['empirical_16th'], cache[current_key]['empirical_84th'], color='gray', alpha=0.3, label=r'16$^{\text{th}}$-84$^{\text{th}}$ Percentile') + ax.plot(out[current_key]['edp_range'], cache[current_key]['empirical_median'], lw=self.line_widths['medium'], color='blue', label='Median') + ax.plot(out[current_key]['edp_range'], out[current_key]['slf'], color='black', lw=self.line_widths['medium'], label='Storey Loss') + + ax.legend(fontsize=self.font_sizes['legend']) + self._save_plot(output_directory, f"{plot_label}_{current_key}") + + def animate_model_run(self, + control_nodes, + acc, + dts, + nrha_disps, + nrha_accels, + drift_thresholds, + output_directory=None, + plot_label='animation'): + """Animate the seismic demands for a single nonlinear time-history analysis run.""" + fig = plt.figure(figsize=(8, 8)) + gs = gridspec.GridSpec(2, 2, height_ratios=[1, 0.5]) + + # Create subplots + ax1 = fig.add_subplot(gs[0, 0]) # Floor displacement + ax2 = fig.add_subplot(gs[0, 1]) # Floor acceleration + ax3 = fig.add_subplot(gs[1, :]) # Acceleration time-history + + # Initialize lines + line1, = ax1.plot([], [], color="blue", linewidth=self.line_widths['medium'], marker='o', markersize=self.marker_sizes['small']) + line2, = ax2.plot([], [], color="red", linewidth=self.line_widths['medium'], marker='o', markersize=self.marker_sizes['small']) + line3, = ax3.plot([], [], color="green", linewidth=self.line_widths['medium']) + + # Set up subplots + self._set_plot_style(ax1, title="Floor Displacement (in m)", ylabel='Floor No.') + self._set_plot_style(ax2, title="Floor Acceleration (in g)", ylabel='Floor No.') + self._set_plot_style(ax3, title="Acceleration Time-History", xlabel='Time (s)', ylabel='Acceleration (g)') + + ax1.set_ylim(0.0, len(control_nodes)) + ax2.set_ylim(0.0, len(control_nodes)) + ax3.set_xlim(0, dts[-1]) + ax3.set_ylim(np.floor(acc.min()), np.ceil(acc.max())) + + # Add damage state legend + legend_elements = [Line2D([0], [0], color=c, lw=3, label=state) for c, state in zip(self.colors['damage_states'], ['No Damage', 'Slight Damage', 'Moderate Damage', 'Extensive Damage', 'Complete Damage'])] + ax1.legend(handles=legend_elements, loc="upper right", fontsize=self.font_sizes['legend']) + + # Animation update function + def update(frame): + disp_values = nrha_disps[frame, :] + accel_values = nrha_accels[frame, :] + drift_values = np.abs(np.diff(disp_values)) + + # Update line data + line1.set_data(disp_values, range(len(control_nodes))) + line2.set_data(accel_values, range(len(control_nodes))) + line3.set_data(dts[:frame], acc[:frame]) + + # Update line color based on maximum drift threshold exceeded + max_drift_threshold_index = 
np.max(np.where(np.max(drift_values) > drift_thresholds)[0]) if np.any(drift_values > drift_thresholds) else 0 + line1.set_color(self.colors['damage_states'][max_drift_threshold_index]) + + return line1, line2, line3 + + # Create animation + ani = FuncAnimation(fig, update, frames=len(dts), interval=1, blit=True, repeat=False) + + # Save animation if output_directory is provided + if output_directory: + ani.save(f'{output_directory}/{plot_label}.mp4', writer='ffmpeg', fps=30, dpi=self.resolution) + plt.tight_layout() - plt.savefig(f'{output_directory}/{plot_label}.png', dpi=RESOLUTION, format='png') plt.show() From ece9278aeb1eabdfc4f5c44959ba50467d57c77d Mon Sep 17 00:00:00 2001 From: mouayed-nafeh <149155077+mouayed-nafeh@users.noreply.github.com> Date: Thu, 6 Mar 2025 17:02:16 +0100 Subject: [PATCH 14/16] Revert "Update plotter class" This reverts commit dd160559f1ffa149cd2a1c9dca03ddce7c89a51e. --- openquake/vmtk/plotter.py | 843 ++++++++++++++++++++++++++------------ 1 file changed, 572 insertions(+), 271 deletions(-) diff --git a/openquake/vmtk/plotter.py b/openquake/vmtk/plotter.py index c2a0611..0ba054b 100644 --- a/openquake/vmtk/plotter.py +++ b/openquake/vmtk/plotter.py @@ -3,140 +3,226 @@ import seaborn as sns from scipy import stats import matplotlib.pyplot as plt -from matplotlib.lines import Line2D import matplotlib.patches as mpatches +from matplotlib.lines import Line2D import matplotlib.gridspec as gridspec from matplotlib.ticker import MultipleLocator from matplotlib.animation import FuncAnimation -class plotter: - def __init__(self): - # Define default styles - self.font_sizes = { - 'title': 16, - 'labels': 14, - 'ticks': 12, - 'legend': 14 - } - self.line_widths = { - 'thick': 3, - 'medium': 2, - 'thin': 1 - } - self.marker_sizes = { - 'large': 100, - 'medium': 60, - 'small': 10 - } - self.colors = { - 'fragility': ['green', 'yellow', 'orange', 'red'], - 'damage_states': ['blue', 'green', 'yellow', 'orange', 'red'], - 'gem': ["#0A4F4E", "#0A4F5E", "#54D7EB", "#54D6EB", "#399283", "#399264", "#399296"] - } - self.resolution = 500 - self.font_name = 'Helvetica' +## Define plot style +HFONT = {'fontname':'Helvetica'} - def _set_plot_style(self, ax, title=None, xlabel=None, ylabel=None, grid=True): - """Set consistent plot style for all plots.""" - if title: - ax.set_title(title, fontsize=self.font_sizes['title'], fontname=self.font_name) - if xlabel: - ax.set_xlabel(xlabel, fontsize=self.font_sizes['labels'], fontname=self.font_name) - if ylabel: - ax.set_ylabel(ylabel, fontsize=self.font_sizes['labels'], fontname=self.font_name) - ax.tick_params(axis='both', labelsize=self.font_sizes['ticks']) - if grid: - ax.grid(visible=True, which='major') - ax.grid(visible=True, which='minor') +FONTSIZE_1 = 16 +FONTSIZE_2 = 14 +FONTSIZE_3 = 12 - def _save_plot(self, output_directory, plot_label): - """Save the plot if output_directory is provided.""" - if output_directory: - plt.savefig(f'{output_directory}/{plot_label}.png', dpi=self.resolution, format='png') - plt.show() +LINEWIDTH_1= 3 +LINEWIDTH_2= 2 +LINEWIDTH_3 = 1 + +RESOLUTION = 500 +MARKER_SIZE_1 = 100 +MARKER_SIZE_2 = 60 +MARKER_SIZE_3 = 10 +FRAG_COLORS = ['green', 'yellow', 'orange', 'red'] +DS_COLORS = ['blue','green', 'yellow', 'orange', 'red'] # For animation +DS_LABELS = ['No Damage','Slight Damage','Moderate Damage','Extensive Damage','Complete Damage'] +GEM_COLORS = ["#0A4F4E","#0A4F5E","#54D7EB","#54D6EB","#399283","#399264","#399296"] + + +class plotter(): + + def __init__(self): + pass + def 
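In both versions of `animate_model_run`, the displaced-shape line is recoloured according to the highest drift threshold exceeded so far, using the damage-state colour list defined above. A hedged standalone sketch of that lookup — the threshold values, colour ordering, and the `damage_state_index` helper are illustrative assumptions, not the toolkit's own convention — might be:

```python
import numpy as np

drift_thresholds = np.array([0.001, 0.0025, 0.005, 0.010])  # illustrative drift thresholds (assumed)
ds_colors = ['blue', 'green', 'yellow', 'orange', 'red']     # index 0 taken here as "no damage"

def damage_state_index(storey_drifts, thresholds):
    """Return 0 if no threshold is exceeded, else 1 + index of the highest threshold exceeded."""
    peak = np.max(np.abs(storey_drifts))
    exceeded = np.flatnonzero(peak > thresholds)
    return 0 if exceeded.size == 0 else int(exceeded[-1]) + 1

storey_drifts = [0.0004, 0.0031, 0.0012]     # inter-storey drifts at one animation frame
idx = damage_state_index(storey_drifts, drift_thresholds)
print(idx, ds_colors[idx])                   # -> 2 yellow
```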
duplicate_for_drift(self, peak_drift_list, control_nodes): - """Creates data to process box plots for peak storey drifts.""" - y = [float(node) for node in control_nodes] # Convert all control nodes to float - x = [peak_drift_list[i // 2] if i < 2 * (len(control_nodes) - 1) else 0.0 - for i in range(2 * (len(control_nodes) - 1) + 1)] - + """ + Creates data to process box plots for peak storey drifts + ----- + Input + ----- + :param peak_drift_list: list Peak Storey Drifts + :param control_nodes: list Nodes of the MDOF oscillator + + ------ + Output + ------ + x: list Box plot-ready drift values + y: list Box plot-ready control nodes values + """ + + x = []; y = [] + for i in range(len(control_nodes)-1): + y.extend((float(control_nodes[i]),float(control_nodes[i+1]))) + x.extend((peak_drift_list[i],peak_drift_list[i])) + y.append(float(control_nodes[i+1])) + x.append(0.0) + return x, y + def plot_cloud_analysis(self, - cloud_dict, - output_directory=None, - plot_label='cloud_analysis_plot', - xlabel='Peak Ground Acceleration, PGA [g]', - ylabel=r'Maximum Peak Storey Drift, $\theta_{max}$ [%]'): - - """Plot the cloud analysis results.""" - fig, ax = plt.subplots(figsize=(6, 6)) - self._set_plot_style(ax, xlabel=xlabel, ylabel=ylabel) - - ax.scatter(cloud_dict['imls'], cloud_dict['edps'], color=self.colors['gem'][2], s=self.marker_sizes['medium'], alpha=0.5, label='Cloud Data', zorder=0) + cloud_dict, + output_directory, + plot_label = 'cloud_analysis_plot', + xlabel = 'Peak Ground Acceleration, PGA [g]', + ylabel = r'Maximum Peak Storey Drift, $\theta_{max}$ [%]'): + + """ + Plots the cloud analysis results + + Parameters + ---------- + cloud_dict: dict Direct output from do_cloud_analysis function + output_directory: string Output directory path + plot_label: string Designated filename for plot (default set to "cloud_analysis_plot") + xlabel: string X-axis label (default set to mpsd) + ylabel: string Y-axis label (default set to pga) + + Returns + ------- + None. 
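The restored `duplicate_for_drift` helper turns one peak drift value per storey into a step-shaped (x, y) profile — each drift is repeated for the storey's bottom and top nodes, and the profile is closed with a zero at the roof — so the demand plots show constant drift over each storey height. A small usage sketch with made-up drift values (not toolkit output) could be:

```python
import matplotlib.pyplot as plt

control_nodes = [0, 1, 2, 3]             # base node plus three floor nodes
peak_drifts = [0.004, 0.007, 0.003]      # one peak drift per storey (illustrative)

# Same stepping idea as duplicate_for_drift: repeat each drift for the storey's
# bottom and top nodes, then close the profile with a zero value at the roof
x, y = [], []
for i in range(len(control_nodes) - 1):
    y.extend((float(control_nodes[i]), float(control_nodes[i + 1])))
    x.extend((peak_drifts[i], peak_drifts[i]))
y.append(float(control_nodes[-1]))
x.append(0.0)

plt.plot([v * 100 for v in x], y)        # drifts in %, floor numbers on the y-axis
plt.xlabel('Peak Storey Drift [%]')
plt.ylabel('Floor No.')
plt.show()
```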
+ + """ + + ### Initialise the figure + plt.rcParams['figure.figsize'] = [6, 6] + fig, ax = plt.subplots() + + plt.scatter(cloud_dict['imls'], cloud_dict['edps'], color = GEM_COLORS[2], s=MARKER_SIZE_2, alpha = 0.5, label = 'Cloud Data',zorder=0) # Plot the cloud scatter for i in range(len(cloud_dict['damage_thresholds'])): - ax.scatter(cloud_dict['medians'][i], cloud_dict['damage_thresholds'][i], color=self.colors['fragility'][i], s=self.marker_sizes['large'], alpha=1.0, zorder=2) - - ax.plot(cloud_dict['fitted_x'], cloud_dict['fitted_y'], linestyle='solid', color=self.colors['gem'][1], lw=self.line_widths['thick'], label='Cloud Regression', zorder=1) - ax.plot([min(cloud_dict['imls']), max(cloud_dict['imls'])], [cloud_dict['upper_limit'], cloud_dict['upper_limit']], '--', color=self.colors['gem'][-1], label='Upper Censoring Limit') - ax.plot([min(cloud_dict['imls']), max(cloud_dict['imls'])], [cloud_dict['lower_limit'], cloud_dict['lower_limit']], '-.', color=self.colors['gem'][-1], label='Lower Censoring Limit') - - ax.set_xscale('log') - ax.set_yscale('log') - ax.set_xlim([min(cloud_dict['imls']), max(cloud_dict['imls'])]) - ax.set_ylim([min(cloud_dict['edps']), max(cloud_dict['edps'])]) - ax.legend(fontsize=self.font_sizes['legend']) - - self._save_plot(output_directory, plot_label) - - def plot_fragility_analysis(self, - cloud_dict, - output_directory=None, - plot_label='fragility_plot', - xlabel='Peak Ground Acceleration, PGA [g]'): - - """Plot the fragility analysis results.""" - fig, ax = plt.subplots(figsize=(6, 6)) - self._set_plot_style(ax, xlabel=xlabel, ylabel='Probability of Exceedance') - - for i in range(len(cloud_dict['medians'])): - ax.plot(cloud_dict['intensities'], cloud_dict['poes'][:, i], linestyle='solid', color=self.colors['fragility'][i], lw=self.line_widths['thick'], label=f'DS{i+1}') + plt.scatter(cloud_dict['medians'][i], cloud_dict['damage_thresholds'][i], color = FRAG_COLORS[i], s = MARKER_SIZE_1, alpha=1.0, zorder=2) + + plt.plot(cloud_dict['fitted_x'], cloud_dict['fitted_y'], linestyle = 'solid', color = GEM_COLORS[1], lw=LINEWIDTH_1, label = 'Cloud Regression', zorder=1) # Plot the regressed fit + + plt.plot([min(cloud_dict['imls']), max(cloud_dict['imls'])],[cloud_dict['upper_limit'],cloud_dict['upper_limit']],'--',color=GEM_COLORS[-1], label = 'Upper Censoring Limit') # Plot the upper limit + plt.plot([min(cloud_dict['imls']), max(cloud_dict['imls'])],[cloud_dict['lower_limit'],cloud_dict['lower_limit']],'-.',color=GEM_COLORS[-1], label = 'Lower Censoring Limit') # Plot the lower limit + + plt.xlabel(xlabel, fontsize = FONTSIZE_1, **HFONT) + plt.ylabel(ylabel, fontsize = FONTSIZE_1, **HFONT) + + plt.xticks(fontsize=FONTSIZE_2, rotation=0) + plt.yticks(fontsize=FONTSIZE_2, rotation=0) + + plt.grid(visible=True, which='major') + plt.grid(visible=True, which='minor') + + plt.xscale('log') + plt.yscale('log') + + plt.xlim([min(cloud_dict['imls']), max(cloud_dict['imls'])]) + plt.ylim([min(cloud_dict['edps']), max(cloud_dict['edps'])]) + + plt.legend() + plt.savefig(f'{output_directory}/{plot_label}.png', dpi=RESOLUTION, format='png') + plt.show() - ax.set_xlim([0, 5]) - ax.set_ylim([0, 1]) - ax.legend(fontsize=self.font_sizes['legend']) - self._save_plot(output_directory, plot_label) def plot_demand_profiles(self, - peak_drift_list, - peak_accel_list, - control_nodes, - output_directory=None, - plot_label='demand_profiles'): - - """Plot the demand profiles for peak drifts and accelerations.""" - fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(12, 6)) - 
self._set_plot_style(ax1, xlabel=r'Peak Storey Drift, $\theta_{max}$ [%]', ylabel='Floor No.') - self._set_plot_style(ax2, xlabel=r'Peak Floor Acceleration, $a_{max}$ [g]', ylabel='Floor No.') - - nst = len(control_nodes) - 1 + peak_drift_list, + peak_accel_list, + control_nodes, + output_directory, + plot_label): + """ + Plots the demand profiles associated with each record of cloud analysis + + Parameters + ---------- + peak_drift_list: list Peak storey drifts quantities from analysis + peak_accel_list: list Peak floor acceleration quantities from analysis + control_nodes: list Nodes of the MDOF system + output_directory: string Output directory path + Returns + ------- + None. + + """ + + ### Initialise the figure + plt.figure(figsize=(12, 6)) + plt.rcParams['axes.axisbelow'] = True + ax1 = plt.subplot(1,2,1) + ax2 = plt.subplot(1,2,2) + + ### get number of storeys + nst = len(control_nodes)-1 + + ### plot the results for i in range(len(peak_drift_list)): - x, y = self.duplicate_for_drift(peak_drift_list[i][:, 0], control_nodes) - ax1.plot([float(i) * 100 for i in x], y, linewidth=self.line_widths['medium'], linestyle='solid', color=self.colors['gem'][1], alpha=0.7) - ax2.plot([float(x) / 9.81 for x in peak_accel_list[i][:, 0]], control_nodes, linewidth=self.line_widths['medium'], linestyle='solid', color=self.colors['gem'][0], alpha=0.7) - - ax1.set_yticks(np.linspace(0, nst, nst + 1), labels=np.linspace(0, nst, nst + 1), minor=False) - ax2.set_yticks(np.linspace(0, nst, nst + 1), labels=np.linspace(0, nst, nst + 1), minor=False) - ax1.set_xticks(np.linspace(0, 5, 11), labels=np.linspace(0, 5, 11), minor=False) - ax2.set_xticks(np.linspace(0, 5, 11), labels=np.linspace(0, 5, 11), minor=False) - ax1.set_xlim([0, 5.0]) - ax2.set_xlim([0, 5.0]) + + x,y = self.duplicate_for_drift(peak_drift_list[i][:,0],control_nodes) + ax1.plot([float(i)*100 for i in x], y, linewidth=LINEWIDTH_2, linestyle = 'solid', color = GEM_COLORS[1], alpha = 0.7) + ax1.set_xlabel(r'Peak Storey Drift, $\theta_{max}$ [%]',fontsize = FONTSIZE_2, **HFONT) + ax1.set_ylabel('Floor No.', fontsize = FONTSIZE_2, **HFONT) + ax1.grid(visible=True, which='major') + ax1.grid(visible=True, which='minor') + ax1.set_yticks(np.linspace(0,nst,nst+1), labels = np.linspace(0,nst,nst+1), minor = False, fontsize=FONTSIZE_3) + xticks = np.linspace(0,5,11) + ax1.set_xticks(xticks, labels=xticks, minor=False, fontsize=FONTSIZE_3) + ax1.set_xlim([0, 5.0]) + + ax2.plot([float(x)/9.81 for x in peak_accel_list[i][:,0]], control_nodes, linewidth=LINEWIDTH_2, linestyle = 'solid', color = GEM_COLORS[0], alpha=0.7) + ax2.set_xlabel(r'Peak Floor Acceleration, $a_{max}$ [g]', fontsize = FONTSIZE_2, **HFONT) + ax2.set_ylabel('Floor No.', fontsize = FONTSIZE_2, **HFONT) + ax2.grid(visible=True, which='major') + ax2.grid(visible=True, which='minor') + ax2.set_yticks(np.linspace(0,nst,nst+1), labels = np.linspace(0,nst,nst+1), minor = False, fontsize=FONTSIZE_3) + xticks = np.linspace(0,5,11) + ax2.set_xticks(xticks, labels=xticks, minor=False, fontsize=FONTSIZE_3) + ax2.set_xlim([0, 5.0]) + + plt.savefig(f'{output_directory}/{plot_label}.png', dpi=RESOLUTION, format='png') + plt.show() - self._save_plot(output_directory, plot_label) + + def plot_fragility_analysis(self, + cloud_dict, + output_directory, + plot_label = 'fragility_plot', + xlabel = 'Peak Ground Acceleration, PGA [g]'): + + """ + Plots the cloud analysis results + + Parameters + ---------- + cloud_dict: dict Direct output from do_cloud_analysis function + output_directory: string Output 
directory path + plot_label: string Designated filename for plot (default set to "cloud_analysis_plot") + xlabel: string X-axis label (default set to pga) + + Returns + ------- + None. + + """ + + ### Plot the cloud + plt.rcParams['figure.figsize'] = [6, 6] + fig, ax = plt.subplots() + + for i in range(len(cloud_dict['medians'])): + plt.plot(cloud_dict['intensities'], cloud_dict['poes'][:,i], linestyle = 'solid', color = FRAG_COLORS[i], lw=LINEWIDTH_1, label = f'DS{i+1}') # Plot the regressed fit + + plt.xlabel(xlabel, fontsize = FONTSIZE_1, **HFONT) + plt.ylabel('Probability of Exceedance', fontsize = FONTSIZE_1, **HFONT) + + plt.xticks(fontsize=FONTSIZE_2, rotation=0) + plt.yticks(fontsize=FONTSIZE_2, rotation=0) + + plt.grid(visible=True, which='major') + plt.grid(visible=True, which='minor') + plt.xlim([0,5]) + plt.ylim([0,1]) + + plt.legend() + plt.savefig(f'{output_directory}/{plot_label}.png', dpi=RESOLUTION, format='png') + plt.show() def plot_ansys_results(self, @@ -144,191 +230,406 @@ def plot_ansys_results(self, peak_drift_list, peak_accel_list, control_nodes, - output_directory=None, - plot_label='ansys_results', - cloud_xlabel='PGA', - cloud_ylabel='MPSD'): - """Plot analysis results including cloud analysis, fragility, and demand profiles.""" - fig, ((ax1, ax2), (ax3, ax4)) = plt.subplots(2, 2, figsize=(10, 10)) + output_directory, + plot_label, + cloud_xlabel = 'PGA', + cloud_ylabel = 'MPSD'): + + ### Initialise the figure + plt.figure(figsize=(10, 10)) plt.rcParams['axes.axisbelow'] = True - - # Cloud Analysis - self._set_plot_style(ax1, xlabel=cloud_xlabel, ylabel=cloud_ylabel) - ax1.scatter(cloud_dict['imls'], cloud_dict['edps'], color=self.colors['gem'][2], s=self.marker_sizes['medium'], alpha=0.5, label='Cloud Data', zorder=0) + ax1 = plt.subplot(2,2,1) + ax2 = plt.subplot(2,2,2) + ax3 = plt.subplot(2,2,3) + ax4 = plt.subplot(2,2,4) + + # First: Cloud + ax1.scatter(cloud_dict['imls'], cloud_dict['edps'], color = GEM_COLORS[2], s=MARKER_SIZE_2, alpha = 0.5, label = 'Cloud Data',zorder=0) # Plot the cloud scatter for i in range(len(cloud_dict['damage_thresholds'])): - ax1.scatter(cloud_dict['medians'][i], cloud_dict['damage_thresholds'][i], color=self.colors['fragility'][i], s=self.marker_sizes['large'], alpha=1.0, zorder=2) - ax1.plot(cloud_dict['fitted_x'], cloud_dict['fitted_y'], linestyle='solid', color=self.colors['gem'][1], lw=self.line_widths['thick'], label='Cloud Regression', zorder=1) - ax1.plot([min(cloud_dict['imls']), max(cloud_dict['imls'])], [cloud_dict['upper_limit'], cloud_dict['upper_limit']], '--', color=self.colors['gem'][-1], label='Upper Censoring Limit') - ax1.plot([min(cloud_dict['imls']), max(cloud_dict['imls'])], [cloud_dict['lower_limit'], cloud_dict['lower_limit']], '-.', color=self.colors['gem'][-1], label='Lower Censoring Limit') + ax1.scatter(cloud_dict['medians'][i], cloud_dict['damage_thresholds'][i], color = FRAG_COLORS[i], s = MARKER_SIZE_1, alpha=1.0, zorder=2) + ax1.plot(cloud_dict['fitted_x'], cloud_dict['fitted_y'], linestyle = 'solid', color = GEM_COLORS[1], lw=LINEWIDTH_1, label = 'Cloud Regression', zorder=1) # Plot the regressed fit + ax1.plot([min(cloud_dict['imls']), max(cloud_dict['imls'])],[cloud_dict['upper_limit'],cloud_dict['upper_limit']],'--',color=GEM_COLORS[-1], label = 'Upper Censoring Limit') # Plot the upper limit + ax1.plot([min(cloud_dict['imls']), max(cloud_dict['imls'])],[cloud_dict['lower_limit'],cloud_dict['lower_limit']],'-.',color=GEM_COLORS[-1], label = 'Lower Censoring Limit') # Plot the lower limit + 
ax1.set_xlabel(cloud_xlabel, fontsize = FONTSIZE_1, **HFONT) + ax1.set_ylabel(cloud_ylabel, fontsize = FONTSIZE_1, **HFONT) + ax1.set_xticks(np.linspace(np.log(min(cloud_dict['imls'])),np.log(max(cloud_dict['imls']))), labels = np.linspace(np.log(min(cloud_dict['imls'])),np.log(max(cloud_dict['imls']))), minor = False, fontsize=FONTSIZE_3) + ax1.set_yticks(np.linspace(np.log(min(cloud_dict['edps'])),np.log(max(cloud_dict['edps']))), labels = np.linspace(np.log(min(cloud_dict['edps'])),np.log(max(cloud_dict['edps']))), minor = False, fontsize=FONTSIZE_3) + ax1.grid(visible=True, which='major') + ax1.grid(visible=True, which='minor') ax1.set_xscale('log') ax1.set_yscale('log') - ax1.legend(fontsize=self.font_sizes['legend']) - - # Fragility Analysis - self._set_plot_style(ax2, xlabel=cloud_xlabel, ylabel='Probability of Exceedance') + plt.legend() + + # Second: Fragility for i in range(len(cloud_dict['medians'])): - ax2.plot(cloud_dict['intensities'], cloud_dict['poes'][:, i], linestyle='solid', color=self.colors['fragility'][i], lw=self.line_widths['thick'], label=f'DS{i+1}') - ax2.set_xlim([0, 5]) - ax2.set_ylim([0, 1]) - ax2.legend(fontsize=self.font_sizes['legend']) - - # Demand Profiles: Drifts - self._set_plot_style(ax3, xlabel=r'Peak Storey Drift, $\theta_{max}$ [%]', ylabel='Floor No.') - nst = len(control_nodes) - 1 + ax2.plot(cloud_dict['intensities'], cloud_dict['poes'][:,i], linestyle = 'solid', color = FRAG_COLORS[i], lw=LINEWIDTH_1, label = f'{DS_LABELS[i+1]}') # Plot the regressed fit + + ax2.set_xlabel(cloud_xlabel, fontsize = FONTSIZE_1, **HFONT) + ax2.set_ylabel('Probability of Exceedance', fontsize = FONTSIZE_1, **HFONT) + + ax2.set_xticks(np.linspace(0,5,6), labels = np.round(np.linspace(0,5,6),2), minor = False, fontsize=FONTSIZE_3) + ax2.set_yticks(np.linspace(0,1,11), labels =np.round(np.linspace(0,1,11),2), minor = False, fontsize=FONTSIZE_3) + + ax2.grid(visible=True, which='major') + ax2.grid(visible=True, which='minor') + ax2.set_xlim([0,5]) + ax2.set_ylim([0,1]) + ax2.legend() + + # Third: Demands + nst = len(control_nodes)-1 for i in range(len(peak_drift_list)): - x, y = self.duplicate_for_drift(peak_drift_list[i][:, 0], control_nodes) - ax3.plot([float(i) * 100 for i in x], y, linewidth=self.line_widths['medium'], linestyle='solid', color=self.colors['gem'][1], alpha=0.7) - ax3.set_yticks(np.linspace(0, nst, nst + 1), labels=np.linspace(0, nst, nst + 1), minor=False) - ax3.set_xticks(np.linspace(0, 5, 11), labels=np.linspace(0, 5, 11), minor=False) - ax3.set_xlim([0, 5.0]) - - # Demand Profiles: Accelerations - self._set_plot_style(ax4, xlabel=r'Peak Floor Acceleration, $a_{max}$ [g]', ylabel='Floor No.') - for i in range(len(peak_accel_list)): - ax4.plot([float(x) for x in peak_accel_list[i][:, 0]], control_nodes, linewidth=self.line_widths['medium'], linestyle='solid', color=self.colors['gem'][0], alpha=0.3) - ax4.set_yticks(np.linspace(0, nst, nst + 1), labels=np.linspace(0, nst, nst + 1), minor=False) - ax4.set_xticks(np.linspace(0, 5, 11), labels=np.linspace(0, 5, 11), minor=False) - ax4.set_xlim([0, 5.0]) - + x,y = self.duplicate_for_drift(peak_drift_list[i][:,0],control_nodes) + ax3.plot([float(i)*100 for i in x], y, linewidth=LINEWIDTH_2, linestyle = 'solid', color = GEM_COLORS[1], alpha = 0.7) + ax3.set_xlabel(r'Peak Storey Drift, $\theta_{max}$ [%]',fontsize = FONTSIZE_2, **HFONT) + ax3.set_ylabel('Floor No.', fontsize = FONTSIZE_2, **HFONT) + ax3.grid(visible=True, which='major') + ax3.grid(visible=True, which='minor') + 
ax3.set_yticks(np.linspace(0,nst,nst+1), labels = np.linspace(0,nst,nst+1), minor = False, fontsize=FONTSIZE_3) + xticks = np.linspace(0,5,11) + ax3.set_xticks(xticks, labels=xticks, minor=False, fontsize=FONTSIZE_3) + ax3.set_xlim([0, 5.0]) + + ax4.plot([float(x) for x in peak_accel_list[i][:,0]], control_nodes, linewidth=LINEWIDTH_2, linestyle = 'solid', color = GEM_COLORS[0], alpha=0.3) + ax4.set_xlabel(r'Peak Floor Acceleration, $a_{max}$ [g]', fontsize = FONTSIZE_2, **HFONT) + ax4.set_ylabel('Floor No.', fontsize = FONTSIZE_2, **HFONT) + ax4.grid(visible=True, which='major') + ax4.grid(visible=True, which='minor') + ax4.set_yticks(np.linspace(0,nst,nst+1), labels = np.linspace(0,nst,nst+1), minor = False, fontsize=FONTSIZE_3) + xticks = np.linspace(0,5,11) + ax4.set_xticks(xticks, labels=xticks, minor=False, fontsize=FONTSIZE_3) + ax4.set_xlim([0, 5.0]) + plt.tight_layout() - self._save_plot(output_directory, plot_label) - - def plot_vulnerability_analysis(self, - intensities, - loss, - cov, - xlabel, - ylabel, - output_directory=None, - plot_label='vulnerability_plot'): - """Plot the vulnerability analysis results.""" - fig, ax1 = plt.subplots(figsize=(14, 8)) - self._set_plot_style(ax1, xlabel=xlabel, ylabel='Simulated Loss Ratio') - - # Simulate Beta distributions - simulated_data = [] - intensity_labels = [] - for j, mean_loss in enumerate(loss): - variance = (cov[j] * mean_loss) ** 2 - alpha = mean_loss * (mean_loss * (1 - mean_loss) / variance - 1) - beta_param = (1 - mean_loss) * (mean_loss * (1 - mean_loss) / variance - 1) - data = np.random.beta(alpha, beta_param, 10000) - simulated_data.append(data) - intensity_labels.extend([intensities[j]] * len(data)) - - # Create DataFrame for seaborn - df_sns = pd.DataFrame({ - 'Intensity Measure': intensity_labels, - 'Simulated Data': np.concatenate(simulated_data) - }) - - # Violin plot - violin = sns.violinplot(x='Intensity Measure', y='Simulated Data', data=df_sns, scale='width', bw=0.2, inner=None, ax=ax1, zorder=1) - sns.stripplot(x='Intensity Measure', y='Simulated Data', data=df_sns, color='k', size=1, alpha=0.5, ax=ax1, zorder=3) - - # Loss curve on secondary axis - ax2 = ax1.twinx() - ax2.plot(range(len(intensities)), loss, marker='o', linestyle='-', color='blue', label="Loss Curve", zorder=2) - ax2.set_ylabel(ylabel, fontsize=self.font_sizes['labels'], color='blue', rotation=270, labelpad=20) - ax2.tick_params(axis='y', labelcolor='blue') - - # Customize x-axis - ax1.set_xticks(range(len(intensities))) - ax1.set_xticklabels([f"{x:.3f}" for x in intensities], rotation=45, ha='right', fontsize=self.font_sizes['ticks']) - - # Add legends - beta_patch = mpatches.Patch(color=violin.collections[0].get_facecolor()[0], label="Beta Distribution") - ax1.legend(handles=[beta_patch], loc='upper left', fontsize=self.font_sizes['legend'], bbox_to_anchor=(0, 1), ncol=1) - ax2.legend(loc='upper left', fontsize=self.font_sizes['legend'], bbox_to_anchor=(0, 0.95), ncol=1) - - self._save_plot(output_directory, plot_label) - - def plot_slf_model(self, - out, - cache, - xlabel, - output_directory=None, - plot_label='slf'): - - """Plot the storey loss function generator output.""" - keys_list = list(cache.keys()) - for i, current_key in enumerate(keys_list): - rlz = len(cache[current_key]['total_loss_storey']) - total_loss_storey_array = np.array([cache[current_key]['total_loss_storey'][i] for i in range(rlz)]) - - fig, ax = plt.subplots(figsize=(8, 6)) - self._set_plot_style(ax, xlabel=xlabel, ylabel='Storey Loss') - - for i in range(rlz): - 
ax.scatter(out[current_key]['edp_range'], total_loss_storey_array[i, :], color=self.colors['gem'][3], s=self.marker_sizes['small'], alpha=0.5) + plt.savefig(f'{output_directory}/{plot_label}.png', dpi=RESOLUTION, format='png') + plt.show() - ax.fill_between(out[current_key]['edp_range'], cache[current_key]['empirical_16th'], cache[current_key]['empirical_84th'], color='gray', alpha=0.3, label=r'16$^{\text{th}}$-84$^{\text{th}}$ Percentile') - ax.plot(out[current_key]['edp_range'], cache[current_key]['empirical_median'], lw=self.line_widths['medium'], color='blue', label='Median') - ax.plot(out[current_key]['edp_range'], out[current_key]['slf'], color='black', lw=self.line_widths['medium'], label='Storey Loss') + def plot_multiple_stripe_analysis(msa_dict, + output_directory, + plot_label = 'multiple_stripe_analysis_plot', + xlabel = r'Maximum Peak Storey Drift, $\theta_{max}$ [%]', + ylabel = 'Peak Ground Acceleration, PGA [g]'): + + """ + Creates a combined subplot of two figures for multiple stripe analysis: + - First figure: Stripe analysis (IMLs vs EDPs) + - Second figure: Fitted fragilities (Exceedance probabilities for different thresholds) + + Parameters + ---------- + msa_dict: dict Direct output from do_multiple_stripe_analysis function + output_directory: string Output directory path + plot_label: string Designated filename for plot (default set to "cloud_analysis_plot") + xlabel: string X-axis label (default set to mpsd) + ylabel: string Y-axis label (default set to pga) + + Returns + ------- + None. + + """ + + def plot_stripe_analysis(imls, + edps, + damage_thresholds, + xlabel, + ylabel, + ax): + + """Plots the stripe analysis (IMLs vs EDPs) on a given axis""" + for i, threshold in enumerate(damage_thresholds): + for j, im in enumerate(imls): + ax.scatter(edps[j, :], [im] * len(edps[j, :]), color = GEM_COLORS[1], s=MARKER_SIZE_2, alpha = 0.5, label = 'MSA Data',zorder=0) + + # Add vertical lines for the damage thresholds + for i, threshold in enumerate(damage_thresholds): + ax.axvline(x=threshold, color=FRAG_COLORS[i], linestyle='--', label=f'Threshold {threshold}') + + ax.set_xlabel(xlabel,fontsize = FONTSIZE_2, **HFONT) + ax.set_ylabel(ylabel, fontsize = FONTSIZE_2, **HFONT) + ax.grid(visible=True, which='major') + ax.grid(visible=True, which='minor') + ax.set_xlim([0, np.max(edps)]) + + def plot_exceedance_fit(imls, + num_exc, + num_gmr, + eta, + beta, + threshold, + xlabel, + color, + ax): + + """Plot the exceedance fit for the fragility curve on a given axis""" + fitted_exceedance = stats.norm.cdf(np.log(imls / eta) / beta) + ax.plot(imls, fitted_exceedance, label=f"Fitted Lognormal (Threshold {threshold})", color=color) + ax.scatter(imls, num_exc / num_gmr, color = color, s=MARKER_SIZE_2, alpha = 0.5, label = 'Observed Exceedances',zorder=0) + ax.set_xlabel(xlabel, fontsize = FONTSIZE_1, **HFONT) + ax.set_ylabel('Probability of Exceedance', fontsize = FONTSIZE_1, **HFONT) + ax.legend() + ax.grid(visible=True, which='major') + ax.grid(visible=True, which='minor') + + + # Extract values from msa_dict + imls = msa_dict['imls'] + edps = msa_dict['edps'] + damage_thresholds = msa_dict['damage_thresholds'] + + ### Initialise the figure + plt.figure(figsize=(12, 6)) + plt.rcParams['axes.axisbelow'] = True + ax1 = plt.subplot(1,2,1) + ax2 = plt.subplot(1,2,2) + + # Plot the stripe analysis on the first axis + plot_stripe_analysis(imls, + edps, + damage_thresholds, + xlabel, + ylabel, + ax1) + + # Loop over all damage thresholds to plot the fragility fits + for i, threshold in 
enumerate(damage_thresholds): + eta = msa_dict['medians'][i] + beta = msa_dict['betas_total'][i] + color = FRAG_COLORS[i] + num_exc = np.array([np.sum(edp >= threshold) for edp in edps]) + num_gmr = np.full(len(imls), len(edps[0])) # Number of ground motions at each IM level + + # Plot the exceedance fit for the current threshold on the second axis + plot_exceedance_fit(imls, num_exc, num_gmr, eta, beta, threshold, xlabel, color, ax2) + + # Adjust layout for better readability + plt.tight_layout() + plt.savefig(f'{output_directory}/{plot_label}.png', dpi=RESOLUTION, format='png') + plt.show() - ax.legend(fontsize=self.font_sizes['legend']) - self._save_plot(output_directory, f"{plot_label}_{current_key}") - def animate_model_run(self, - control_nodes, - acc, - dts, - nrha_disps, - nrha_accels, - drift_thresholds, - output_directory=None, - plot_label='animation'): - """Animate the seismic demands for a single nonlinear time-history analysis run.""" + + def animate_model_run(self,control_nodes, acc, dts, nrha_disps, nrha_accels, drift_thresholds, pflag=True): + """ + Animates the seismic demands for a single nonlinear time-history analysis run + Parameters + ---------- + control_nodes: list Control nodes of the MDOF system + acc: array Acceleration values of the applied time-history + dts: array Pseudo-time values of the applied time-history + nrha_disps: array Nodal displacement values, output from do_nrha_analysis method + nrha_accels: array Relative nodal acceleration values, output from do_nrha_analysis method + drift_thresholds: list Drift-based damage thresholds + + Returns + ------- + None. + + """ + + # Set up the figure and the GridSpec layout fig = plt.figure(figsize=(8, 8)) gs = gridspec.GridSpec(2, 2, height_ratios=[1, 0.5]) - - # Create subplots - ax1 = fig.add_subplot(gs[0, 0]) # Floor displacement - ax2 = fig.add_subplot(gs[0, 1]) # Floor acceleration - ax3 = fig.add_subplot(gs[1, :]) # Acceleration time-history - - # Initialize lines - line1, = ax1.plot([], [], color="blue", linewidth=self.line_widths['medium'], marker='o', markersize=self.marker_sizes['small']) - line2, = ax2.plot([], [], color="red", linewidth=self.line_widths['medium'], marker='o', markersize=self.marker_sizes['small']) - line3, = ax3.plot([], [], color="green", linewidth=self.line_widths['medium']) - - # Set up subplots - self._set_plot_style(ax1, title="Floor Displacement (in m)", ylabel='Floor No.') - self._set_plot_style(ax2, title="Floor Acceleration (in g)", ylabel='Floor No.') - self._set_plot_style(ax3, title="Acceleration Time-History", xlabel='Time (s)', ylabel='Acceleration (g)') - + + # Create square subplots for the first row + ax1 = fig.add_subplot(gs[0, 0]) + ax2 = fig.add_subplot(gs[0, 1]) + + # Create a horizontal subplot that spans the bottom row + ax3 = fig.add_subplot(gs[1, :]) + + # Initial plots for each subplot + line1, = ax1.plot([], [], color="blue", linewidth=LINEWIDTH_2, marker='o', markersize=MARKER_SIZE_3) + line2, = ax2.plot([], [], color="red", linewidth=LINEWIDTH_2, marker='o', markersize=MARKER_SIZE_3) + line3, = ax3.plot([], [], color="green", linewidth=LINEWIDTH_2) + + # Set up each subplot + ax1.set_title("Floor Displacement (in m)", **HFONT) + ax2.set_title("Floor Acceleration (in g)", **HFONT) + ax3.set_title("Acceleration Time-History", **HFONT) ax1.set_ylim(0.0, len(control_nodes)) ax2.set_ylim(0.0, len(control_nodes)) ax3.set_xlim(0, dts[-1]) ax3.set_ylim(np.floor(acc.min()), np.ceil(acc.max())) - - # Add damage state legend - legend_elements = [Line2D([0], [0], 
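The restored multiple-stripe helpers above evaluate a lognormal fragility curve from a median `eta` and a dispersion `beta`, overlaying the observed fraction of exceedances at each intensity stripe. A compact sketch of that evaluation — the median, dispersion, and stripe counts below are illustrative stand-ins, not values from any calibrated model — might be:

```python
import numpy as np
from scipy import stats

imls = np.linspace(0.05, 2.0, 40)    # intensity measure levels, e.g. PGA in g
eta, beta = 0.45, 0.40               # illustrative lognormal median and dispersion

# Lognormal fragility: P(exceedance | IM) = Phi( ln(im / eta) / beta )
poe = stats.norm.cdf(np.log(imls / eta) / beta)

# Observed exceedance fraction per stripe (counts here are synthetic stand-ins;
# in the toolkit they come from the multiple stripe analysis results)
num_gmr = 20
num_exc = np.random.binomial(num_gmr, poe)
observed = num_exc / num_gmr
print(np.column_stack((imls, poe, observed))[:5])
```

Plotting `poe` against `imls` together with `observed` reproduces the same comparison shown in the second MSA panel.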
color=c, lw=3, label=state) for c, state in zip(self.colors['damage_states'], ['No Damage', 'Slight Damage', 'Moderate Damage', 'Extensive Damage', 'Complete Damage'])] - ax1.legend(handles=legend_elements, loc="upper right", fontsize=self.font_sizes['legend']) - + + # Set up ticks + ax1.set_yticks(range(len(control_nodes))) + ax1.set_yticklabels([f"Floor {i}" for i in range(len(control_nodes))]) + + ax2.set_yticks(range(len(control_nodes))) + ax2.set_yticklabels([f"Floor {i}" for i in range(len(control_nodes))]) + + # --- Enable and customize the grid --- + # Enable minor ticks for both axes + ax1.minorticks_on() + ax2.minorticks_on() + ax3.minorticks_on() + + # Set the major grid locator (spacing of major grid lines) + ax1.xaxis.set_major_locator(MultipleLocator(1)) # Major grid line every 1 unit on x-axis + ax1.yaxis.set_major_locator(MultipleLocator(0.5)) # Major grid line every 0.5 unit on y-axis + + # Set the minor grid locator (spacing of minor grid lines) + ax1.xaxis.set_minor_locator(MultipleLocator(0.2)) # Minor grid lines every 0.2 units on x-axis + ax1.yaxis.set_minor_locator(MultipleLocator(0.1)) # Minor grid lines every 0.1 units on y-axis + + # Customize the appearance of the grid lines (major and minor) + ax1.grid(which='major', color='gray', linestyle='-', linewidth=0.8) + ax1.grid(which='minor', color='gray', linestyle=':', linewidth=0.5) + + ax2.xaxis.set_major_locator(MultipleLocator(1)) # Major grid line every 1 unit on x-axis + ax2.yaxis.set_major_locator(MultipleLocator(0.5)) # Major grid line every 0.5 unit on y-axis + ax2.xaxis.set_minor_locator(MultipleLocator(0.2)) # Minor grid lines every 0.2 units on x-axis + ax2.yaxis.set_minor_locator(MultipleLocator(0.1)) # Minor grid lines every 0.1 units on y-axis + + ax2.grid(which='major', color='gray', linestyle='-', linewidth=0.8) + ax2.grid(which='minor', color='gray', linestyle=':', linewidth=0.5) + + ax3.xaxis.set_major_locator(MultipleLocator(2)) # Major grid line every 2 units on x-axis + ax3.yaxis.set_major_locator(MultipleLocator(0.5)) # Major grid line every 0.5 unit on y-axis + ax3.xaxis.set_minor_locator(MultipleLocator(0.5)) # Minor grid lines every 0.5 units on x-axis + ax3.yaxis.set_minor_locator(MultipleLocator(0.1)) # Minor grid lines every 0.1 units on y-axis + + ax3.grid(which='major', color='gray', linestyle='-', linewidth=0.8) + ax3.grid(which='minor', color='gray', linestyle=':', linewidth=0.5) + + # Initialize the third line + line1.set_data([], []) + line2.set_data([], []) + line3.set_data([], []) + + # Add a static legend for damage states in ax1 (floor drift subplot) + legend_elements = [Line2D([0], [0], color=c, lw=3, label=state) for c, state in zip(DS_COLORS, DS_LABELS)] + ax1.legend(handles=legend_elements, loc="upper right", fontsize=FONTSIZE_3) + + # Initialize tracking variables to remember the maximum threshold exceeded + max_drift_threshold_index = 0 # Track max threshold index for drift + # Animation update function def update(frame): + + nonlocal max_drift_threshold_index + + # Get current displacements and accelerations for each control node at the current time frame disp_values = nrha_disps[frame, :] accel_values = nrha_accels[frame, :] - drift_values = np.abs(np.diff(disp_values)) - + + # Calculate drift as the difference in displacement between consecutive floors + drift_values = np.abs(np.diff(disp_values)) # Absolute drift between consecutive floors + + # Determine maximum threshold level exceeded by drift for this frame + current_drift_threshold_index = 
max_drift_threshold_index # Start with the current maximum threshold + + for i, threshold in enumerate(drift_thresholds): + if np.max(drift_values) > threshold: + current_drift_threshold_index = max(current_drift_threshold_index, i) + + # Update the maximum drift threshold index reached so far + max_drift_threshold_index = current_drift_threshold_index + + # Set line1 color based on the highest drift threshold reached + line1.set_color(DS_COLORS[max_drift_threshold_index]) + # Update line data line1.set_data(disp_values, range(len(control_nodes))) line2.set_data(accel_values, range(len(control_nodes))) + + # Time-history plot for acceleration data up to the current frame line3.set_data(dts[:frame], acc[:frame]) - - # Update line color based on maximum drift threshold exceeded - max_drift_threshold_index = np.max(np.where(np.max(drift_values) > drift_thresholds)[0]) if np.any(drift_values > drift_thresholds) else 0 - line1.set_color(self.colors['damage_states'][max_drift_threshold_index]) - + return line1, line2, line3 - - # Create animation + + # Create the animation ani = FuncAnimation(fig, update, frames=len(dts), interval=1, blit=True, repeat=False) + + # Show the animation + plt.tight_layout() + plt.show() # block=True ensures the animation is displayed in a blocking way + plt.pause(0.1) + + return ani - # Save animation if output_directory is provided - if output_directory: - ani.save(f'{output_directory}/{plot_label}.mp4', writer='ffmpeg', fps=30, dpi=self.resolution) + def plot_vulnerability_analysis(self, + intensities, + loss, + cov, + xlabel, + ylabel, + output_directory, + plot_label): + + + # Simulating Beta distributions for each intensity measure + simulated_data = [] + intensity_labels = [] + + for j, mean_loss in enumerate(loss): + variance = (cov[j] * mean_loss) ** 2 # Calculate variance using CoV + alpha = mean_loss * (mean_loss * (1 - mean_loss) / variance - 1) + beta_param = (1 - mean_loss) * (mean_loss * (1 - mean_loss) / variance - 1) + + # Generate samples from the Beta distribution + data = np.random.beta(alpha, beta_param, 10000) + simulated_data.append(data) + intensity_labels.extend([intensities[j]] * len(data)) # Repeat intensity measures for each sample + + # Convert to DataFrame for seaborn visualization + df_sns = pd.DataFrame({ + 'Intensity Measure': intensity_labels, + 'Simulated Data': np.concatenate(simulated_data) + }) + + # Create a figure and a set of axes for the violin plot + fig, ax1 = plt.subplots(figsize=(14, 8)) + + # --- Violin plot for Beta distributions --- + violin=sns.violinplot( + x='Intensity Measure', y='Simulated Data', data=df_sns, + scale='width', bw=0.2, inner=None, ax=ax1, zorder=1 + ) + + # Overlay a strip plot for better visualization of individual samples + sns.stripplot( + x='Intensity Measure', y='Simulated Data', data=df_sns, + color='k', size=1, alpha=0.5, ax=ax1, zorder=3 + ) + + # Customize the first y-axis (for the violin plot) + ax1.set_ylabel("Simulated Loss Ratio", fontsize=FONTSIZE_1, color='blue') + ax1.set_xlabel(f"{xlabel}", fontsize=FONTSIZE_1) + ax1.tick_params(axis='y', labelcolor='blue') + ax1.grid(True, which='both', linestyle='--', linewidth=0.5) + ax1.set_ylim(-0.1, 1.2) # Adjust y-axis range for the violin plot + + # Add the legend for the violin plots (Beta distribution) + # Create a dummy plot handle for the legend, since the violins are not directly plotted as lines + beta_patch = mpatches.Patch(color=violin.collections[0].get_facecolor()[0], label="Beta Distribution") + ax1.legend(handles=[beta_patch], 
loc='upper left', fontsize=FONTSIZE_1, bbox_to_anchor=(0, 1), ncol=1) + + + # --- Add a second set of x and y axes for the Loss Curve --- + ax2 = ax1.twinx() # Create a shared y-axis for the loss curve + + # Plot the loss curve on ax2 (now in blue) + ax2.plot( + range(len(intensities)), loss, marker='o', linestyle='-', color='blue', + label="Loss Curve", zorder=2 + ) + + # Customize the second y-axis (for the loss curve) + ax2.set_ylabel(f"{ylabel}", fontsize=FONTSIZE_1, color='blue', rotation = 270, labelpad=20) + ax2.tick_params(axis='y', labelcolor='blue') + ax2.set_ylim(-0.1, 1.2) # Adjust y-axis range for the loss curve if needed + + # Customize both x-axes to match + ax1.set_xticks(range(len(intensities))) + ax1.set_xticklabels([f"{x:.3f}" for x in intensities], rotation=45, ha='right', fontsize= FONTSIZE_3) + + # Add a legend for the loss curve + ax2.legend(loc='upper left', fontsize=FONTSIZE_1, bbox_to_anchor=(0, 0.95), ncol=1) + + # Tight layout and show the combined plot plt.tight_layout() + plt.savefig(f'{output_directory}/{plot_label}.png', dpi=RESOLUTION, format='png') plt.show() From cf4352764842ed4bf9364857f5513dc2bc53d90e Mon Sep 17 00:00:00 2001 From: Antonio Ettorre Date: Fri, 7 Mar 2025 08:55:05 +0100 Subject: [PATCH 15/16] set correct version of wheel for python3.10 --- requirements-py310-win64.txt | 14 +++++++------- 1 file changed, 7 insertions(+), 7 deletions(-) diff --git a/requirements-py310-win64.txt b/requirements-py310-win64.txt index c7c9758..8198539 100644 --- a/requirements-py310-win64.txt +++ b/requirements-py310-win64.txt @@ -1,13 +1,13 @@ # vmtk requirements # From OQ wheels # -https://wheelhouse.openquake.org/v3/windows/py311/pandas-2.0.3-cp311-cp311-win_amd64.whl -https://wheelhouse.openquake.org/v3/windows/py311/numpy-1.26.2-cp311-cp311-win_amd64.whl -https://wheelhouse.openquake.org/v3/windows/py311/matplotlib-3.8.2-cp311-cp311-win_amd64.whl -https://wheelhouse.openquake.org/v3/windows/py311/scipy-1.11.4-cp311-cp311-win_amd64.whl -https://wheelhouse.openquake.org/v3/windows/py311/fiona-1.9.5-cp311-cp311-win_amd64.whl -https://wheelhouse.openquake.org/v3/windows/py311/GDAL-3.7.3-cp311-cp311-win_amd64.whl -https://wheelhouse.openquake.org/v3/windows/py311/pyproj-3.6.1-cp311-cp311-win_amd64.whl +https://wheelhouse.openquake.org/v3/windows/py310/pandas-2.0.3-cp310-cp310-win_amd64.whl +https://wheelhouse.openquake.org/v3/windows/py310/fiona-1.9.5-cp310-cp310-win_amd64.whl +https://wheelhouse.openquake.org/v3/windows/py310/GDAL-3.7.3-cp310-cp310-win_amd64.whl +https://wheelhouse.openquake.org/v3/windows/py310/pyproj-3.6.1-cp310-cp310-win_amd64.whl +https://wheelhouse.openquake.org/v3/windows/py310/scipy-1.11.4-cp310-cp310-win_amd64.whl +https://wheelhouse.openquake.org/v3/windows/py310/numpy-1.26.2-cp310-cp310-win_amd64.whl +https://wheelhouse.openquake.org/v3/windows/py310/matplotlib-3.8.2-cp310-cp310-win_amd64.whl https://wheelhouse.openquake.org/v3/windows/py310/h5py-3.10.0-cp310-cp310-win_amd64.whl https://wheelhouse.openquake.org/v3/windows/py310/numba-0.58.1-cp310-cp310-win_amd64.whl https://wheelhouse.openquake.org/v3/windows/py310/llvmlite-0.41.1-cp310-cp310-win_amd64.whl From 7d457149a818c0a5712c0e4a7b3dfd309b00ab4e Mon Sep 17 00:00:00 2001 From: mouayed-nafeh <149155077+mouayed-nafeh@users.noreply.github.com> Date: Fri, 7 Mar 2025 09:13:59 +0100 Subject: [PATCH 16/16] Corrects instructions in README --- README.md | 12 +++++------- 1 file changed, 5 insertions(+), 7 deletions(-) diff --git a/README.md b/README.md index bddeeb0..cb6d46c 100644 --- 
a/README.md +++ b/README.md @@ -52,7 +52,7 @@ # ๐Ÿ‘ฉโ€๐Ÿ’ป๐Ÿง‘โ€๐Ÿ’ป Installation -Follow these steps to install the required tools and set up the development environment. Note that this procedure implies the installation of the OpenQuake engine dependencies. This procedure was tested on Mac and Linux OS. +Follow these steps to install the required tools and set up the development environment. Note that this procedure implies the installation of the OpenQuake engine dependencies. This procedure was tested on Windows and Linux OS. It is highly recommended to use a **virtual environment** to install this tool. A virtual environment is an isolated Python environment that allows you to manage dependencies for this project separately from your systemโ€™s Python installation. This ensures that the required dependencies for the OpenQuake engine do not interfere with other Python projects or system packages, which could lead to version conflicts. 1. Open a terminal and navigate to the folder where you intend to install the virtual environment using the "cd" command. @@ -77,8 +77,7 @@ It is highly recommended to use a **virtual environment** to install this tool. * On Windows: ```bash - \Scripts - activate + \Scripts\Activate.ps1 ``` 4. Enter (while on virtual environment) the preferred directory for "oq-vmtk" using the "cd" command @@ -97,14 +96,14 @@ It is highly recommended to use a **virtual environment** to install this tool. * On Linux ```bash - pip install -r /requirements-py-linux.txt + pip install -r requirements-py-linux.txt pip install -e . ``` * On Windows ```bash - pip install -r /requirements-py-win64.txt + pip install -r requirements-py-win64.txt pip install -e . ``` @@ -128,8 +127,7 @@ To run a demo, simply navigate to the demos directory and execute the relevant d * On Windows: ```bash - \Scripts - activate + \Scripts\Activate.ps1 ``` * To deactivate virtual environment: