plot_u_parameter_space.ipynb
{
"cells": [
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"%matplotlib inline"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\n# Parameter space\n\nIn this example,\nwe will see the basics of :class:`.ParameterSpace`.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"from matplotlib import pyplot as plt\n\nfrom gemseo.algos.parameter_space import ParameterSpace\nfrom gemseo.api import configure_logger, create_discipline, create_scenario\n\nconfigure_logger()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Create a parameter space\nFirstly,\nthe creation of a :class:`.ParameterSpace` does not require any mandatory argument:\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"parameter_space = ParameterSpace()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Then, we can add either deterministic variables\nfrom their lower and upper bounds\n(use :meth:`.ParameterSpace.add_variable`)\nor uncertain variables from their distribution names and parameters\n(use :meth:`.ParameterSpace.add_random_variable`)\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"parameter_space.add_variable(\"x\", l_b=-2.0, u_b=2.0)\nparameter_space.add_random_variable(\"y\", \"SPNormalDistribution\", mu=0.0, sigma=1.0)\nprint(parameter_space)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We can check that the variables \"x\" and \"y\" are respectively handled\nas a deterministic variable and an uncertain variable:\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"print(\"x is deterministic: \", parameter_space.is_deterministic(\"x\"))\nprint(\"y is deterministic: \", parameter_space.is_deterministic(\"y\"))\nprint(\"x is uncertain: \", parameter_space.is_uncertain(\"x\"))\nprint(\"y is uncertain: \", parameter_space.is_uncertain(\"y\"))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Sample from the parameter space\nWe can sample the uncertain variables from the :class:`.ParameterSpace`\nand get values either as a NumPy array (by default)\nor as a dictionary of NumPy arrays indexed by the names of the variables:\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"sample = parameter_space.compute_samples(n_samples=2, as_dict=True)\nprint(sample)\nsample = parameter_space.compute_samples(n_samples=4)\nprint(sample)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Sample a discipline over the parameter space\nWe can also sample a discipline over the parameter space.\nFor simplicity,\nwe instantiate an :class:`.AnalyticDiscipline` from a dictionary of expressions:\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"discipline = create_discipline(\"AnalyticDiscipline\", expressions_dict={\"z\": \"x+y\"})"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"From these parameter space and discipline,\nwe build a :class:`.DOEScenario`\nand execute it with a Latin Hypercube Sampling algorithm and 100 samples.\n\n<div class=\"alert alert-danger\"><h4>Warning</h4><p>A :class:`.DOEScenario` considers all the variables\n available in its :class:`.DesignSpace`.\n By inheritance,\n in the special case of a :class:`.ParameterSpace`,\n a :class:`.DOEScenario` considers all the variables\n available in this :class:`.ParameterSpace`.\n Thus,\n if we do not filter the uncertain variables,\n the :class:`.DOEScenario` will consider\n both the deterministic variables as uniformly distributed variables\n and the uncertain variables with their specified probability distributions.</p></div>\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"scenario = create_scenario(\n [discipline], \"DisciplinaryOpt\", \"z\", parameter_space, scenario_type=\"DOE\"\n)\nscenario.execute({\"algo\": \"lhs\", \"n_samples\": 100})"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We can export the optimization problem to a :class:`.Dataset`:\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"dataset = scenario.formulation.opt_problem.export_to_dataset(name=\"samples\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"and visualize it in a tabular way:\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"print(dataset.export_to_dataframe())"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"or with a graphical post-processing,\ne.g. a scatter plot matrix:\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"dataset.plot(\"ScatterMatrix\", show=False)\n# Workaround for HTML rendering, instead of ``show=True``\nplt.show()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Sample a discipline over the uncertain space\nIf we want to sample a discipline over the uncertain space,\nwe need to extract it:\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"uncertain_space = parameter_space.extract_uncertain_space()"
]
},
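{
"cell_type": "markdown",
"metadata": {},
"source": [
"Symmetrically,\nwe could keep only the deterministic variables.\nThe method name below is an assumption based on the naming of\n:meth:`.ParameterSpace.extract_uncertain_space` and may differ in your version:\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"# Hypothetical counterpart of ``extract_uncertain_space`` (check your GEMSEO version)\ndeterministic_space = parameter_space.extract_deterministic_space()\nprint(deterministic_space)"
]
},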
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Then, we clear the cache, create a new scenario from this parameter space\ncontaining only the uncertain variables and execute it.\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"discipline.cache.clear()\nscenario = create_scenario(\n [discipline], \"DisciplinaryOpt\", \"z\", uncertain_space, scenario_type=\"DOE\"\n)\nscenario.execute({\"algo\": \"lhs\", \"n_samples\": 100})"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Finally,\nwe build a dataset from the disciplinary cache and visualize it.\nWe can see that the deterministic variable 'x' is set to its default value\nfor all evaluations,\ncontrary to the previous case where we were considering the whole parameter space:\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"dataset = scenario.formulation.opt_problem.export_to_dataset(name=\"samples\")\nprint(dataset.export_to_dataframe())"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.10"
}
},
"nbformat": 4,
"nbformat_minor": 0
}

distributions/plot_sp_distribution.ipynb
{
"cells": [
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"%matplotlib inline"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\n# Probability distributions based on SciPy\n\nIn this example,\nwe seek to create a probability distribution based on the SciPy library.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"from matplotlib import pyplot as plt\n\nfrom gemseo.api import configure_logger\nfrom gemseo.uncertainty.api import create_distribution, get_available_distributions\n\nconfigure_logger()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"First of all,\nwe can access the names of the available probability distributions from the API:\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"all_distributions = get_available_distributions()\nprint(all_distributions)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"and filter the ones based on the SciPy library\n(their names start with the acronym 'SP'):\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"sp_distributions = [dist for dist in all_distributions if dist.startswith(\"SP\")]\nprint(sp_distributions)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Create a distribution\nThen,\nwe can create a probability distribution for a two-dimensional random variable\nwhose components are independent and distributed\nas the standard normal distribution (mean = 0 and standard deviation = 1):\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"distribution_0_1 = create_distribution(\"x\", \"SPNormalDistribution\", 2)\nprint(distribution_0_1)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"or create another distribution with mean = 1 and standard deviation = 2\nfor the marginal distributions:\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"distribution_1_2 = create_distribution(\n \"x\", \"SPNormalDistribution\", 2, mu=1.0, sigma=2.0\n)\nprint(distribution_1_2)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We could also use the generic :class:`.SPDistribution`\nwhich allows access to all the SciPy distributions\nbut this requires to know the signature of the methods of this library:\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"distribution_1_2 = create_distribution(\n \"x\",\n \"SPDistribution\",\n 2,\n interfaced_distribution=\"norm\",\n parameters={\"loc\": 1.0, \"scale\": 2.0},\n)\nprint(distribution_1_2)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Plot the distribution\nWe can plot both the cumulative distribution and probability density functions\nfor the first marginal:\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"distribution_0_1.plot(show=False)\n# Workaround for HTML rendering, instead of ``show=True``\nplt.show()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<div class=\"alert alert-info\"><h4>Note</h4><p>We can provide a marginal index\n as first argument of the :meth:`.Distribution.plot` method\n but in the current version of |g|,\n all components have the same distributions and so the plot will be the same.</p></div>\n\n"
]
},
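{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a sketch of the note above,\nthe first marginal could be plotted explicitly by passing its index,\nassuming the index is accepted as first positional argument:\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"# Plot the first marginal explicitly (index 0); with identical marginals,\n# the figure is the same as without the index\ndistribution_0_1.plot(0, show=False)\nplt.show()"
]
},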
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Get mean\nWe can access the mean of the distribution:\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"print(distribution_0_1.mean)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Get standard deviation\nWe can access the standard deviation of the distribution:\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"print(distribution_0_1.standard_deviation)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Get numerical range\nWe can access the numerical range of the distribution,\ni.e. the interval between its numerical minimum and maximum:\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"print(distribution_0_1.range)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Get mathematical support\nWe can access the mathematical support of the distribution,\ni.e. the interval between its theoretical minimum and maximum:\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"print(distribution_0_1.support)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Generate samples\nWe can generate 10 samples of the distribution:\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"print(distribution_0_1.compute_samples(10))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Compute CDF\nWe can compute the cumulative distribution function component per component\n(here the probability that the first component is lower than 0.0\nand that the second one is lower than 1.0):\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"print(distribution_0_1.compute_cdf([0.0, 1.0]))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Compute inverse CDF\nWe can compute the inverse cumulative distribution function\ncomponent per component\n(here the quantile at 50% for the first component\nand the quantile at 97.5% for the second one):\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"print(distribution_0_1.compute_inverse_cdf([0.5, 0.975]))"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.10"
}
},
"nbformat": 4,
"nbformat_minor": 0
}

distributions/plot_ot_distribution.ipynb
{
"cells": [
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"%matplotlib inline"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\n# Probability distributions based on OpenTURNS\n\nIn this example,\nwe seek to create a probability distribution based on the OpenTURNS library.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"from matplotlib import pyplot as plt\n\nfrom gemseo.api import configure_logger\nfrom gemseo.uncertainty.api import create_distribution, get_available_distributions\n\nconfigure_logger()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"First of all,\nwe can access the names of the available probability distributions from the API:\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"all_distributions = get_available_distributions()\nprint(all_distributions)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"and filter the ones based on the OpenTURNS library\n(their names start with the acronym 'OT'):\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"ot_distributions = [dist for dist in all_distributions if dist.startswith(\"OT\")]\nprint(ot_distributions)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Create a distribution\nThen,\nwe can create a probability distribution for a two-dimensional random variable\nwhose components are independent and distributed\nas the standard normal distribution (mean = 0 and standard deviation = 1):\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"distribution_0_1 = create_distribution(\"x\", \"OTNormalDistribution\", 2)\nprint(distribution_0_1)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"or create another distribution with mean = 1 and standard deviation = 2\nfor the marginal distributions:\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"distribution_1_2 = create_distribution(\n \"x\", \"OTNormalDistribution\", 2, mu=1.0, sigma=2.0\n)\nprint(distribution_1_2)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We could also use the generic :class:`.OTDistribution`\nwhich allows access to all the OpenTURNS distributions\nbut this requires to know the signature of the methods of this library:\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"distribution_1_2 = create_distribution(\n \"x\", \"OTDistribution\", 2, interfaced_distribution=\"Normal\", parameters=(1.0, 2.0)\n)\nprint(distribution_1_2)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Plot the distribution\nWe can plot both the cumulative distribution and probability density functions\nfor the first marginal:\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"distribution_0_1.plot(show=False)\n# Workaround for HTML rendering, instead of ``show=True``\nplt.show()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<div class=\"alert alert-info\"><h4>Note</h4><p>We can provide a marginal index\n as first argument of the :meth:`.Distribution.plot` method\n but in the current version of |g|,\n all components have the same distributions and so the plot will be the same.</p></div>\n\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Get mean\nWe can access the mean of the distribution:\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"print(distribution_0_1.mean)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Get standard deviation\nWe can access the standard deviation of the distribution:\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"print(distribution_0_1.standard_deviation)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Get numerical range\nWe can access the numerical range of the distribution,\ni.e. the interval between its numerical minimum and maximum:\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"print(distribution_0_1.range)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Get mathematical support\nWe can access the mathematical support of the distribution,\ni.e. the interval between its theoretical minimum and maximum:\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"print(distribution_0_1.support)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Generate samples\nWe can generate 10 samples of the distribution:\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"print(distribution_0_1.compute_samples(10))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Compute CDF\nWe can compute the cumulative distribution function component per component\n(here the probability that the first component is lower than 0.0\nand that the second one is lower than 1.0):\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"print(distribution_0_1.compute_cdf([0.0, 1.0]))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Compute inverse CDF\nWe can compute the inverse cumulative distribution function\ncomponent per component\n(here the quantile at 50% for the first component\nand the quantile at 97.5% for the second one):\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"print(distribution_0_1.compute_inverse_cdf([0.5, 0.975]))"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.10"
}
},
"nbformat": 4,
"nbformat_minor": 0
}

distributions/plot_ot_distfactory.ipynb
{
"cells": [
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"%matplotlib inline"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\n# Fitting a distribution from data based on OpenTURNS\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"from matplotlib import pyplot as plt\nfrom numpy.random import randn, seed\n\nfrom gemseo.api import configure_logger\nfrom gemseo.uncertainty.distributions.openturns.fitting import OTDistributionFitter\n\nconfigure_logger()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"In this example,\nwe will see how to fit a distribution from data.\nFor a purely pedagogical reason,\nwe consider a synthetic dataset made of 100 realizations of *'X'*,\na random variable distributed according to the standard normal distribution.\nThese samples are generated from the NumPy library.\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"seed(1)\ndata = randn(100)\nvariable_name = \"X\""
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Create a distribution fitter\nThen,\nwe create an :class:`.OTDistributionFitter` from these data and this variable name:\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"fitter = OTDistributionFitter(variable_name, data)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Fit a distribution\nFrom this distribution fitter,\nwe can easily fit any distribution available in the OpenTURNS library:\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"print(fitter.available_distributions)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"For example,\nwe can fit a normal distribution:\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"norm_dist = fitter.fit(\"Normal\")\nprint(norm_dist)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"or an exponential one:\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"exp_dist = fitter.fit(\"Exponential\")\nprint(exp_dist)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The returned object is an :class:`.OTDistribution`\nthat we can represent graphically\nin terms of probability density and cumulative distribution functions:\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"norm_dist.plot(show=False)\n# Workaround for HTML rendering, instead of ``show=True``\nplt.show()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Measure the goodness-of-fit\nWe can also measure the goodness-of-fit of a distribution\nby means of a fitting criterion.\nSome fitting criteria are based on significance tests\nmade of a test statistic, a p-value and a significance level.\nWe can access the names of the available fitting criteria:\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"print(fitter.available_criteria)\nprint(fitter.available_significance_tests)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"For example,\nwe can measure the goodness-of-fit of the previous distributions\nby considering the Bayesian information criterion (BIC):\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"quality_measure = fitter.compute_measure(norm_dist, \"BIC\")\nprint(\"Normal: \", quality_measure)\n\nquality_measure = fitter.compute_measure(exp_dist, \"BIC\")\nprint(\"Exponential: \", quality_measure)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Here,\nthe fitted normal distribution is better than the fitted exponential one\nin terms of BIC.\nWe can also use the Kolmogorov fitting criterion\nwhich is based on the Kolmogorov significance test:\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"acceptable, details = fitter.compute_measure(norm_dist, \"Kolmogorov\")\nprint(\"Normal: \", acceptable, details)\nacceptable, details = fitter.compute_measure(exp_dist, \"Kolmogorov\")\nprint(\"Exponential: \", acceptable, details)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"In this case,\nthe :meth:`.OTDistributionFitter.compute_measure` method returns a tuple with two values:\n\n1. a boolean\n   indicating if the measured distribution is acceptable to model the data,\n2. a dictionary containing the test statistic,\n   the p-value and the significance level.\n\n<div class=\"alert alert-info\"><h4>Note</h4><p>We can also change the significance level for significance tests,\n whose default value is 0.05.\n For that, use the :code:`level` argument.</p></div>\n\n"
]
},
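{
"cell_type": "markdown",
"metadata": {},
"source": [
"For instance,\nwe could rerun the Kolmogorov test with a stricter significance level;\nthe :code:`level` keyword is taken from the note above\nand its exact name should be checked against your GEMSEO version:\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"# Stricter significance level than the default 0.05 (see the note above)\nacceptable, details = fitter.compute_measure(norm_dist, \"Kolmogorov\", level=0.01)\nprint(acceptable, details)"
]
},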
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Select an optimal distribution\nLastly,\nwe can also select an optimal :class:`.OTDistribution`\nbased on a collection of distribution names,\na fitting criterion,\na significance level\nand a selection criterion:\n\n- 'best': select the distribution\n  minimizing (or maximizing, depending on the criterion) the criterion,\n- 'first': select the first distribution\n  for which the criterion is greater (or lower, depending on the criterion)\n  than the level.\n\nBy default,\nthe :meth:`.OTDistributionFitter.select` method uses a significance level equal to 0.05\nand the 'best' selection criterion.\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"selected_distribution = fitter.select([\"Exponential\", \"Normal\"], \"Kolmogorov\")\nprint(selected_distribution)"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.10"
}
},
"nbformat": 4,
"nbformat_minor": 0
}

sensitivity/plot_sobol.ipynb
{
"cells": [
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"%matplotlib inline"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\n# Sobol' analysis\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"import pprint\n\nfrom matplotlib import pyplot as plt\nfrom numpy import pi\n\nfrom gemseo.algos.parameter_space import ParameterSpace\nfrom gemseo.api import create_discipline\nfrom gemseo.uncertainty.sensitivity.sobol.analysis import SobolAnalysis"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"In this example,\nwe consider a function from $[-\\pi,\\pi]^3$ to $\\mathbb{R}^2$:\n\n\\begin{align}(y_1,y_2)=\\left(f(x_1,x_2,x_3),f(x_2,x_1,x_3)\\right)\\end{align}\n\nwhere $f(a,b,c)=\\sin(a)+7\\sin(b)^2+0.1c^4\\sin(a)$ is the Ishigami function:\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"expressions = {\n \"y1\": \"sin(x1)+7*sin(x2)**2+0.1*x3**4*sin(x1)\",\n \"y2\": \"sin(x2)+7*sin(x1)**2+0.1*x3**4*sin(x2)\",\n}\ndiscipline = create_discipline(\n \"AnalyticDiscipline\", expressions_dict=expressions, name=\"Ishigami2\"\n)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Then,\nwe consider the case where\nthe deterministic variables $x_1$, $x_2$ and $x_3$ are replaced\nwith the uncertain variables $X_1$, $X_2$ and $X_3$.\nThe latter are independent and identically distributed\naccording to a uniform distribution between $-\\pi$ and $\\pi$:\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"space = ParameterSpace()\nfor variable in [\"x1\", \"x2\", \"x3\"]:\n space.add_random_variable(\n variable, \"OTUniformDistribution\", minimum=-pi, maximum=pi\n )"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"From that,\nwe would like to carry out a sensitivity analysis with the random outputs\n$Y_1=f(X_1,X_2,X_3)$ and $Y_2=f(X_2,X_1,X_3)$.\nFor that,\nwe can compute the Sobol' indices with a :class:`.SobolAnalysis`:\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"sobol = SobolAnalysis(discipline, space, 100)\nsobol.main_method = \"total\"\nsobol.compute_indices()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The resulting indices are the first and total order Sobol' indices:\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"pprint.pprint(sobol.indices)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"They can also be accessed separately:\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"pprint.pprint(sobol.first_order_indices)\npprint.pprint(sobol.total_order_indices)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The main indices correspond to the total order Sobol' indices,\nas set above through :attr:`.SobolAnalysis.main_method`,\nand their confidence intervals can be obtained with :meth:`.SobolAnalysis.get_intervals`:\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"pprint.pprint(sobol.main_indices)\n\npprint.pprint(sobol.get_intervals())"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We can also sort the input parameters by decreasing order of influence\nand observe that this ranking is not the same for both outputs:\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"print(sobol.sort_parameters(\"y1\"))\nprint(sobol.sort_parameters(\"y2\"))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Lastly,\nwe can use the method :meth:`.SobolAnalysis.plot`\nto visualize both first and total order Sobol' indices:\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"sobol.plot(\"y1\", save=False, show=False)\nsobol.plot(\"y2\", save=False, show=False)\n# Workaround for HTML rendering, instead of ``show=True``\nplt.show()"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.10"
}
},
"nbformat": 4,
"nbformat_minor": 0
}

sensitivity/plot_morris.ipynb
{
"cells": [
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"%matplotlib inline"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\n# Morris analysis\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"import pprint\n\nfrom matplotlib import pyplot as plt\nfrom numpy import pi\n\nfrom gemseo.algos.parameter_space import ParameterSpace\nfrom gemseo.api import create_discipline\nfrom gemseo.uncertainty.sensitivity.morris.analysis import MorrisAnalysis"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"In this example,\nwe consider a function from $[-\\pi,\\pi]^3$ to $\\mathbb{R}^2$:\n\n\\begin{align}(y_1,y_2)=\\left(f(x_1,x_2,x_3),f(x_2,x_1,x_3)\\right)\\end{align}\n\nwhere $f(a,b,c)=\\sin(a)+7\\sin(b)^2+0.1c^4\\sin(a)$ is the Ishigami function:\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"expressions = {\n \"y1\": \"sin(x1)+7*sin(x2)**2+0.1*x3**4*sin(x1)\",\n \"y2\": \"sin(x2)+7*sin(x1)**2+0.1*x3**4*sin(x2)\",\n}\ndiscipline = create_discipline(\n \"AnalyticDiscipline\", expressions_dict=expressions, name=\"Ishigami2\"\n)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Then,\nwe consider the case where\nthe deterministic variables $x_1$, $x_2$ and $x_3$ are replaced\nwith the uncertain variables $X_1$, $X_2$ and $X_3$.\nThe latter are independent and identically distributed\naccording to an uniform distribution between $-\\pi$ and $\\pi$:\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"space = ParameterSpace()\nfor variable in [\"x1\", \"x2\", \"x3\"]:\n space.add_random_variable(\n variable, \"OTUniformDistribution\", minimum=-pi, maximum=pi\n )"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"From that,\nwe would like to carry out a sensitivity analysis with the random outputs\n$Y_1=f(X_1,X_2,X_3)$ and $Y_2=f(X_2,X_1,X_3)$.\nFor that,\nwe can compute the correlation coefficients from a :class:`.MorrisAnalysis`:\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"morris = MorrisAnalysis(discipline, space, 10)\nmorris.compute_indices()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The resulting indices are the empirical means and the standard deviations\nof the absolute output variations due to input changes.\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"pprint.pprint(morris.indices)"
]
},
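{
"cell_type": "markdown",
"metadata": {},
"source": [
"To make these indices concrete,\na single elementary effect can be sketched in plain NumPy.\nThis is only an illustration with a hypothetical one-at-a-time step ``delta``,\nnot the algorithm implemented by :class:`.MorrisAnalysis`:\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"from numpy import pi, sin\nfrom numpy.random import default_rng\n\n\ndef ishigami(x1, x2, x3):\n    return sin(x1) + 7 * sin(x2) ** 2 + 0.1 * x3**4 * sin(x1)\n\n\nrng = default_rng(1)\ndelta = 0.1 * 2 * pi  # illustrative one-at-a-time step\n# Absolute output variations when perturbing x1 only\neffects = []\nfor _ in range(10):\n    x1, x2, x3 = rng.uniform(-pi, pi, 3)\n    effects.append(abs(ishigami(x1 + delta, x2, x3) - ishigami(x1, x2, x3)) / delta)\nprint(sum(effects) / len(effects))  # empirical mean of the absolute variations"
]
},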
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The main indices corresponds to the Spearman correlation indices\n(this main method can be changed with :attr:`.MorrisAnalysis.main_method`):\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"pprint.pprint(morris.main_indices)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We can also sort the input parameters by decreasing order of influence\nand observe that this ranking is not the same for both outputs:\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"print(morris.sort_parameters(\"y1\"))\nprint(morris.sort_parameters(\"y2\"))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Lastly,\nwe can use the method :meth:`.MorrisAnalysis.plot`\nto visualize the different series of indices:\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"morris.plot(\"y1\", save=False, show=False, lower_mu=0, lower_sigma=0)\nmorris.plot(\"y2\", save=False, show=False, lower_mu=0, lower_sigma=0)\n# Workaround for HTML rendering, instead of ``show=True``\nplt.show()"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.10"
}
},
"nbformat": 4,
"nbformat_minor": 0
}
sensitivity/plot_sensitivity_comparison.ipynb
{
"cells": [
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"%matplotlib inline"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\n# Comparing sensitivity indices\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"from matplotlib import pyplot as plt\nfrom numpy import pi\n\nfrom gemseo.algos.parameter_space import ParameterSpace\nfrom gemseo.api import create_discipline\nfrom gemseo.uncertainty.sensitivity.correlation.analysis import CorrelationAnalysis\nfrom gemseo.uncertainty.sensitivity.morris.analysis import MorrisAnalysis"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"In this example,\nwe consider the Ishigami function:\n\n\\begin{align}Y=\\sin(X_1)+7\\sin(X_2)^2+0.1*X_3^4\\sin(X_1)\\end{align}\n\nwhich is well-known in the uncertainty domain:\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"expressions = {\"y\": \"sin(x1)+7*sin(x2)**2+0.1*x3**4*sin(x1)\"}\ndiscipline = create_discipline(\n \"AnalyticDiscipline\", expressions_dict=expressions, name=\"Ishigami\"\n)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The different uncertain variables $X_1$ , $X_2$ and $X_3$\nare independent and identically distributed\naccording to an uniform distribution between $-\\pi$ and $\\pi$:\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"space = ParameterSpace()\nfor variable in [\"x1\", \"x2\", \"x3\"]:\n space.add_random_variable(\n variable, \"OTUniformDistribution\", minimum=-pi, maximum=pi\n )"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We would like to carry out two sensitivity analyses,\ne.g. a first one based on correlation coefficients\nand a second one based on the Morris methodology,\nand compare the results,\n\nFirstly,\nwe create a :class:`.CorrelationAnalysis`\nand compute the sensitivity indices:\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"correlation = CorrelationAnalysis(discipline, space, 10)\ncorrelation.compute_indices()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Then,\nwe create a :class:`.MorrisAnalysis`\nand compute the sensitivity indices:\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"morris = MorrisAnalysis(discipline, space, 10)\nmorris.compute_indices()"
]
},
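{
"cell_type": "markdown",
"metadata": {},
"source": [
"Before plotting,\nnote that comparing two analyses often comes down to comparing input rankings.\nWith hypothetical (made-up) main index values,\nsuch a comparison can be sketched in plain Python:\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"# Hypothetical main indices, for illustration only (not GEMSEO results)\ncorrelation_like = {\"x1\": 0.44, \"x2\": 0.29, \"x3\": -0.02}\nmorris_like = {\"x1\": 1.9, \"x2\": 1.2, \"x3\": 0.6}\n\n\ndef ranking(indices):\n    # Sort the input names by decreasing absolute index value\n    return sorted(indices, key=lambda name: abs(indices[name]), reverse=True)\n\n\nprint(ranking(correlation_like))\nprint(ranking(morris_like))"
]
},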
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Lastly,\nwe compare these analyses\nwith the graphical method :meth:`.SensitivityAnalysis.plot_comparison`,\neither using a bar chart:\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"morris.plot_comparison(correlation, \"y\", use_bar_plot=True, save=False, show=False)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"or a radar plot:\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"morris.plot_comparison(correlation, \"y\", use_bar_plot=False, save=False, show=False)\n# Workaround for HTML rendering, instead of ``show=True``\nplt.show()"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.10"
}
},
"nbformat": 4,
"nbformat_minor": 0
}
sensitivity/plot_correlation.ipynb
{
"cells": [
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"%matplotlib inline"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\n# Correlation analysis\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"import pprint\n\nfrom matplotlib import pyplot as plt\nfrom numpy import pi\n\nfrom gemseo.algos.parameter_space import ParameterSpace\nfrom gemseo.api import create_discipline\nfrom gemseo.uncertainty.sensitivity.correlation.analysis import CorrelationAnalysis"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"In this example,\nwe consider a function from $[-\\pi,\\pi]^3$ to $\\mathbb{R}^3$:\n\n\\begin{align}(y_1,y_2)=\\left(f(x_1,x_2,x_3),f(x_2,x_1,x_3)\\right)\\end{align}\n\nwhere $f(a,b,c)=\\sin(a)+7\\sin(b)^2+0.1*c^4\\sin(a)$ is the Ishigami function:\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"expressions = {\n \"y1\": \"sin(x1)+7*sin(x2)**2+0.1*x3**4*sin(x1)\",\n \"y2\": \"sin(x2)+7*sin(x1)**2+0.1*x3**4*sin(x2)\",\n}\ndiscipline = create_discipline(\n \"AnalyticDiscipline\", expressions_dict=expressions, name=\"Ishigami2\"\n)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Then,\nwe consider the case where\nthe deterministic variables $x_1$, $x_2$ and $x_3$ are replaced\nwith the uncertain variables $X_1$, $X_2$ and $X_3$.\nThe latter are independent and identically distributed\naccording to an uniform distribution between $-\\pi$ and $\\pi$:\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"space = ParameterSpace()\nfor variable in [\"x1\", \"x2\", \"x3\"]:\n space.add_random_variable(\n variable, \"OTUniformDistribution\", minimum=-pi, maximum=pi\n )"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"From that,\nwe would like to carry out a sensitivity analysis with the random outputs\n$Y_1=f(X_1,X_2,X_3)$ and $Y_2=f(X_2,X_1,X_3)$.\nFor that,\nwe can compute the correlation coefficients from a :class:`.CorrelationAnalysis`:\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"correlation = CorrelationAnalysis(discipline, space, 1000)\ncorrelation.compute_indices()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The resulting indices are\nthe Pearson correlation coefficients,\nthe Spearman correlation coefficients,\nthe Partial Correlation Coefficients (PCC),\nthe Partial Rank Correlation Coefficients (PRCC),\nthe Standard Regression Coefficients (SRC),\nthe Standard Rank Regression Coefficient (SRRC)\nand the Signed Standard Rank Regression Coefficient (SSRRC):\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"pprint.pprint(correlation.indices)"
]
},
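{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a cross-check,\nthe first two families of coefficients can be recomputed by hand\nfrom a Monte Carlo sample of the Ishigami function.\nThis sketch calls SciPy directly and is independent of :class:`.CorrelationAnalysis`:\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"from numpy import pi, sin\nfrom numpy.random import default_rng\nfrom scipy.stats import pearsonr, spearmanr\n\nrng = default_rng(0)\nx1, x2, x3 = rng.uniform(-pi, pi, (3, 1000))\ny1 = sin(x1) + 7 * sin(x2) ** 2 + 0.1 * x3**4 * sin(x1)\n# Pearson measures linear correlation, Spearman rank (monotonic) correlation\nprint(pearsonr(x1, y1)[0], spearmanr(x1, y1)[0])"
]
},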
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The main indices corresponds to the Spearman correlation indices\n(this main method can be changed with :attr:`.CorrelationAnalysis.main_method`):\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"pprint.pprint(correlation.main_indices)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We can also sort the input parameters by decreasing order of influence\nand observe that this ranking is not the same for both outputs:\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"print(correlation.sort_parameters(\"y1\"))\nprint(correlation.sort_parameters(\"y2\"))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Lastly,\nwe can use the method :meth:`.CorrelationAnalysis.plot`\nto visualize the different correlation coefficients:\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"correlation.plot(\"y1\", save=False, show=False)\ncorrelation.plot(\"y2\", save=False, show=False)\n# Workaround for HTML rendering, instead of ``show=True``\nplt.show()"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.10"
}
},
"nbformat": 4,
"nbformat_minor": 0
}