+
+
+*attrs* is the Python package that will bring back the **joy** of **writing classes** by relieving you from the drudgery of implementing object protocols (aka [dunder methods](https://www.attrs.org/en/latest/glossary.html#term-dunder-methods)).
+[Trusted by NASA](https://docs.github.com/en/account-and-profile/setting-up-and-managing-your-github-profile/customizing-your-profile/personalizing-your-profile#list-of-qualifying-repositories-for-mars-2020-helicopter-contributor-achievement) for Mars missions since 2020!
+
+Its main goal is to help you to write **concise** and **correct** software without slowing down your code.
+
+
+## Sponsors
+
+*attrs* would not be possible without our [amazing sponsors](https://github.com/sponsors/hynek).
+Especially those generously supporting us at the *The Organization* tier and higher:
+
+
+
+
+
+
+
+
+ Please consider joining them to help make attrs’s maintenance more sustainable!
+
+
+
+
+## Example
+
+*attrs* gives you a class decorator and a way to declaratively define the attributes on that class:
+
+
+
+```pycon
+>>> from attrs import asdict, define, make_class, Factory
+
+>>> @define
+... class SomeClass:
+...     a_number: int = 42
+...     list_of_numbers: list[int] = Factory(list)
+...
+...     def hard_math(self, another_number):
+...         return self.a_number + sum(self.list_of_numbers) * another_number
+
+
+>>> sc = SomeClass(1, [1, 2, 3])
+>>> sc
+SomeClass(a_number=1, list_of_numbers=[1, 2, 3])
+
+>>> sc.hard_math(3)
+19
+>>> sc == SomeClass(1, [1, 2, 3])
+True
+>>> sc != SomeClass(2, [3, 2, 1])
+True
+
+>>> asdict(sc)
+{'a_number': 1, 'list_of_numbers': [1, 2, 3]}
+
+>>> SomeClass()
+SomeClass(a_number=42, list_of_numbers=[])
+
+>>> C = make_class("C", ["a", "b"])
+>>> C("foo", "bar")
+C(a='foo', b='bar')
+```
+
+After *declaring* your attributes, *attrs* gives you:
+
+- a concise and explicit overview of the class's attributes,
+- a nice human-readable `__repr__`,
+- equality-checking methods,
+- an initializer,
+- and much more,
+
+*without* writing dull boilerplate code again and again and *without* runtime performance penalties.
+
+**Hate type annotations**!?
+No problem!
+Types are entirely **optional** with *attrs*.
+Simply assign `attrs.field()` to the attributes instead of annotating them with types.
+
+---
+
+This example uses *attrs*'s modern APIs that have been introduced in version 20.1.0, and the *attrs* package import name that has been added in version 21.3.0.
+The classic APIs (`@attr.s`, `attr.ib`, plus their serious-business aliases) and the `attr` package import name will remain **indefinitely**.
+
+Please check out [*On The Core API Names*](https://www.attrs.org/en/latest/names.html) for a more in-depth explanation.
+
+
+## Data Classes
+
+On the tin, *attrs* might remind you of `dataclasses` (and indeed, `dataclasses` [are a descendant](https://hynek.me/articles/import-attrs/) of *attrs*).
+In practice it does a lot more and is more flexible.
+For instance it allows you to define [special handling of NumPy arrays for equality checks](https://www.attrs.org/en/stable/comparison.html#customization), allows more ways to [plug into the initialization process](https://www.attrs.org/en/stable/init.html#hooking-yourself-into-initialization), and allows for stepping through the generated methods using a debugger.
+
+For more details, please refer to our [comparison page](https://www.attrs.org/en/stable/why.html#data-classes).
+
+
+## Project Information
+
+- [**Changelog**](https://www.attrs.org/en/stable/changelog.html)
+- [**Documentation**](https://www.attrs.org/)
+- [**PyPI**](https://pypi.org/project/attrs/)
+- [**Source Code**](https://github.com/python-attrs/attrs)
+- [**Contributing**](https://github.com/python-attrs/attrs/blob/main/.github/CONTRIBUTING.md)
+- [**Third-party Extensions**](https://github.com/python-attrs/attrs/wiki/Extensions-to-attrs)
+- **Get Help**: please use the `python-attrs` tag on [Stack Overflow](https://stackoverflow.com/questions/tagged/python-attrs)
+
+
+### *attrs* for Enterprise
+
+Available as part of the Tidelift Subscription.
+
+The maintainers of *attrs* and thousands of other packages are working with Tidelift to deliver commercial support and maintenance for the open source packages you use to build your applications.
+Save time, reduce risk, and improve code health, while paying the maintainers of the exact packages you use.
+[Learn more.](https://tidelift.com/subscription/pkg/pypi-attrs?utm_source=pypi-attrs&utm_medium=referral&utm_campaign=enterprise&utm_term=repo)
+
+## Release Information
+
+### Changes
+
+- The type annotation for `attrs.resolve_types()` is now correct.
+ [#1141](https://github.com/python-attrs/attrs/issues/1141)
+- Type stubs now use `typing.dataclass_transform` to decorate dataclass-like decorators, instead of the non-standard `__dataclass_transform__` special form, which is only supported by Pyright.
+ [#1158](https://github.com/python-attrs/attrs/issues/1158)
+- Fixed serialization of namedtuple fields using `attrs.asdict/astuple()` with `retain_collection_types=True`.
+ [#1165](https://github.com/python-attrs/attrs/issues/1165)
+- `attrs.AttrsInstance` is now a `typing.Protocol` in both type hints and code.
+ This allows you to subclass it along with another `Protocol`.
+ [#1172](https://github.com/python-attrs/attrs/issues/1172)
+- If *attrs* detects that `__attrs_pre_init__` accepts more than just `self`, it will call it with the same arguments that `__init__` was called with.
+ This allows you to, for example, pass arguments to `super().__init__()`.
+ [#1187](https://github.com/python-attrs/attrs/issues/1187)
+- Slotted classes now transform `functools.cached_property` decorated methods to support equivalent semantics.
+ [#1200](https://github.com/python-attrs/attrs/issues/1200)
+- Added *class_body* argument to `attrs.make_class()` to provide additional attributes for newly created classes.
+ It is, for example, now possible to attach methods.
+ [#1203](https://github.com/python-attrs/attrs/issues/1203)
+
+
+---
+
+[Full changelog](https://www.attrs.org/en/stable/changelog.html)
diff --git a/allAutomation/pytest-env/Lib/site-packages/attrs-23.2.0.dist-info/RECORD b/allAutomation/pytest-env/Lib/site-packages/attrs-23.2.0.dist-info/RECORD
new file mode 100644
index 00000000..9708b064
--- /dev/null
+++ b/allAutomation/pytest-env/Lib/site-packages/attrs-23.2.0.dist-info/RECORD
@@ -0,0 +1,55 @@
+attr/__init__.py,sha256=WlXJN6ICB0Y_HZ0lmuTUgia0kuSdn2p67d4N6cYxNZM,3307
+attr/__init__.pyi,sha256=u08EujYHy_rSyebNn-I9Xv2S_cXmtA9xWGc0cBsyl18,16976
+attr/__pycache__/__init__.cpython-312.pyc,,
+attr/__pycache__/_cmp.cpython-312.pyc,,
+attr/__pycache__/_compat.cpython-312.pyc,,
+attr/__pycache__/_config.cpython-312.pyc,,
+attr/__pycache__/_funcs.cpython-312.pyc,,
+attr/__pycache__/_make.cpython-312.pyc,,
+attr/__pycache__/_next_gen.cpython-312.pyc,,
+attr/__pycache__/_version_info.cpython-312.pyc,,
+attr/__pycache__/converters.cpython-312.pyc,,
+attr/__pycache__/exceptions.cpython-312.pyc,,
+attr/__pycache__/filters.cpython-312.pyc,,
+attr/__pycache__/setters.cpython-312.pyc,,
+attr/__pycache__/validators.cpython-312.pyc,,
+attr/_cmp.py,sha256=OQZlWdFX74z18adGEUp40Ojqm0NNu1Flqnv2JE8B2ng,4025
+attr/_cmp.pyi,sha256=sGQmOM0w3_K4-X8cTXR7g0Hqr290E8PTObA9JQxWQqc,399
+attr/_compat.py,sha256=QmRyxii295wcQfaugWqxuIumAPsNQ2-RUF82QZPqMKw,2540
+attr/_config.py,sha256=z81Vt-GeT_2taxs1XZfmHx9TWlSxjPb6eZH1LTGsS54,843
+attr/_funcs.py,sha256=VBTUFKLklsmqxys3qWSTK_Ac9Z4s0mAJWwgW9nA7Llk,17173
+attr/_make.py,sha256=LnVy2e0HygoqaZknhC19z7JmOt7qGkAadf2LZgWVJWI,101923
+attr/_next_gen.py,sha256=as1voi8siAI_o2OQG8YIiZvmn0G7-S3_j_774rnoZ_g,6203
+attr/_typing_compat.pyi,sha256=XDP54TUn-ZKhD62TOQebmzrwFyomhUCoGRpclb6alRA,469
+attr/_version_info.py,sha256=exSqb3b5E-fMSsgZAlEw9XcLpEgobPORCZpcaEglAM4,2121
+attr/_version_info.pyi,sha256=x_M3L3WuB7r_ULXAWjx959udKQ4HLB8l-hsc1FDGNvk,209
+attr/converters.py,sha256=Kyw5MY0yfnUR_RwN1Vydf0EiE---htDxOgSc_-NYL6A,3622
+attr/converters.pyi,sha256=jKlpHBEt6HVKJvgrMFJRrHq8p61GXg4-Nd5RZWKJX7M,406
+attr/exceptions.py,sha256=HRFq4iybmv7-DcZwyjl6M1euM2YeJVK_hFxuaBGAngI,1977
+attr/exceptions.pyi,sha256=zZq8bCUnKAy9mDtBEw42ZhPhAUIHoTKedDQInJD883M,539
+attr/filters.py,sha256=9pYvXqdg6mtLvKIIb56oALRMoHFnQTcGCO4EXTc1qyM,1470
+attr/filters.pyi,sha256=0mRCjLKxdcvAo0vD-Cr81HfRXXCp9j_cAXjOoAHtPGM,225
+attr/py.typed,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0
+attr/setters.py,sha256=pbCZQ-pE6ZxjDqZfWWUhUFefXtpekIU4qS_YDMLPQ50,1400
+attr/setters.pyi,sha256=pyY8TVNBu8TWhOldv_RxHzmGvdgFQH981db70r0fn5I,567
+attr/validators.py,sha256=LGVpbiNg_KGzYrKUD5JPiZkx8TMfynDZGoQoLJNCIMo,19676
+attr/validators.pyi,sha256=167Dl9nt7NUhE9wht1I-buo039qyUT1nEUT_nKjSWr4,2580
+attrs-23.2.0.dist-info/INSTALLER,sha256=zuuue4knoyJ-UwPPXg8fezS7VCrXJQrAP7zeNuwvFQg,4
+attrs-23.2.0.dist-info/METADATA,sha256=WwvG7OHyKjEPpyFUZCCYt1n0E_CcqdRb7bliGEdcm-A,9531
+attrs-23.2.0.dist-info/RECORD,,
+attrs-23.2.0.dist-info/WHEEL,sha256=mRYSEL3Ih6g5a_CVMIcwiF__0Ae4_gLYh01YFNwiq1k,87
+attrs-23.2.0.dist-info/licenses/LICENSE,sha256=iCEVyV38KvHutnFPjsbVy8q_Znyv-HKfQkINpj9xTp8,1109
+attrs/__init__.py,sha256=9_5waVbFs7rLqtXZ73tNDrxhezyZ8VZeX4BbvQ3EeJw,1039
+attrs/__init__.pyi,sha256=s_ajQ_U14DOsOz0JbmAKDOi46B3v2PcdO0UAV1MY6Ek,2168
+attrs/__pycache__/__init__.cpython-312.pyc,,
+attrs/__pycache__/converters.cpython-312.pyc,,
+attrs/__pycache__/exceptions.cpython-312.pyc,,
+attrs/__pycache__/filters.cpython-312.pyc,,
+attrs/__pycache__/setters.cpython-312.pyc,,
+attrs/__pycache__/validators.cpython-312.pyc,,
+attrs/converters.py,sha256=8kQljrVwfSTRu8INwEk8SI0eGrzmWftsT7rM0EqyohM,76
+attrs/exceptions.py,sha256=ACCCmg19-vDFaDPY9vFl199SPXCQMN_bENs4DALjzms,76
+attrs/filters.py,sha256=VOUMZug9uEU6dUuA0dF1jInUK0PL3fLgP0VBS5d-CDE,73
+attrs/py.typed,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0
+attrs/setters.py,sha256=eL1YidYQV3T2h9_SYIZSZR1FAcHGb1TuCTy0E0Lv2SU,73
+attrs/validators.py,sha256=xcy6wD5TtTkdCG1f4XWbocPSO0faBjk5IfVJfP6SUj0,76
diff --git a/allAutomation/pytest-env/Lib/site-packages/attrs-23.2.0.dist-info/WHEEL b/allAutomation/pytest-env/Lib/site-packages/attrs-23.2.0.dist-info/WHEEL
new file mode 100644
index 00000000..2860816a
--- /dev/null
+++ b/allAutomation/pytest-env/Lib/site-packages/attrs-23.2.0.dist-info/WHEEL
@@ -0,0 +1,4 @@
+Wheel-Version: 1.0
+Generator: hatchling 1.21.0
+Root-Is-Purelib: true
+Tag: py3-none-any
diff --git a/allAutomation/pytest-env/Lib/site-packages/attrs-23.2.0.dist-info/licenses/LICENSE b/allAutomation/pytest-env/Lib/site-packages/attrs-23.2.0.dist-info/licenses/LICENSE
new file mode 100644
index 00000000..2bd6453d
--- /dev/null
+++ b/allAutomation/pytest-env/Lib/site-packages/attrs-23.2.0.dist-info/licenses/LICENSE
@@ -0,0 +1,21 @@
+The MIT License (MIT)
+
+Copyright (c) 2015 Hynek Schlawack and the attrs contributors
+
+Permission is hereby granted, free of charge, to any person obtaining a copy
+of this software and associated documentation files (the "Software"), to deal
+in the Software without restriction, including without limitation the rights
+to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+copies of the Software, and to permit persons to whom the Software is
+furnished to do so, subject to the following conditions:
+
+The above copyright notice and this permission notice shall be included in all
+copies or substantial portions of the Software.
+
+THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+SOFTWARE.
diff --git a/allAutomation/pytest-env/Lib/site-packages/attrs/__init__.py b/allAutomation/pytest-env/Lib/site-packages/attrs/__init__.py
new file mode 100644
index 00000000..0c248156
--- /dev/null
+++ b/allAutomation/pytest-env/Lib/site-packages/attrs/__init__.py
@@ -0,0 +1,65 @@
+# SPDX-License-Identifier: MIT
+
+from attr import (
+ NOTHING,
+ Attribute,
+ AttrsInstance,
+ Factory,
+ _make_getattr,
+ assoc,
+ cmp_using,
+ define,
+ evolve,
+ field,
+ fields,
+ fields_dict,
+ frozen,
+ has,
+ make_class,
+ mutable,
+ resolve_types,
+ validate,
+)
+from attr._next_gen import asdict, astuple
+
+from . import converters, exceptions, filters, setters, validators
+
+
+__all__ = [
+ "__author__",
+ "__copyright__",
+ "__description__",
+ "__doc__",
+ "__email__",
+ "__license__",
+ "__title__",
+ "__url__",
+ "__version__",
+ "__version_info__",
+ "asdict",
+ "assoc",
+ "astuple",
+ "Attribute",
+ "AttrsInstance",
+ "cmp_using",
+ "converters",
+ "define",
+ "evolve",
+ "exceptions",
+ "Factory",
+ "field",
+ "fields_dict",
+ "fields",
+ "filters",
+ "frozen",
+ "has",
+ "make_class",
+ "mutable",
+ "NOTHING",
+ "resolve_types",
+ "setters",
+ "validate",
+ "validators",
+]
+
+__getattr__ = _make_getattr(__name__)
diff --git a/allAutomation/pytest-env/Lib/site-packages/attrs/__init__.pyi b/allAutomation/pytest-env/Lib/site-packages/attrs/__init__.pyi
new file mode 100644
index 00000000..9372cfea
--- /dev/null
+++ b/allAutomation/pytest-env/Lib/site-packages/attrs/__init__.pyi
@@ -0,0 +1,67 @@
+from typing import (
+ Any,
+ Callable,
+ Dict,
+ Mapping,
+ Optional,
+ Sequence,
+ Tuple,
+ Type,
+)
+
+# Because we need to type our own stuff, we have to make everything from
+# attr explicitly public too.
+from attr import __author__ as __author__
+from attr import __copyright__ as __copyright__
+from attr import __description__ as __description__
+from attr import __email__ as __email__
+from attr import __license__ as __license__
+from attr import __title__ as __title__
+from attr import __url__ as __url__
+from attr import __version__ as __version__
+from attr import __version_info__ as __version_info__
+from attr import _FilterType
+from attr import assoc as assoc
+from attr import Attribute as Attribute
+from attr import AttrsInstance as AttrsInstance
+from attr import cmp_using as cmp_using
+from attr import converters as converters
+from attr import define as define
+from attr import evolve as evolve
+from attr import exceptions as exceptions
+from attr import Factory as Factory
+from attr import field as field
+from attr import fields as fields
+from attr import fields_dict as fields_dict
+from attr import filters as filters
+from attr import frozen as frozen
+from attr import has as has
+from attr import make_class as make_class
+from attr import mutable as mutable
+from attr import NOTHING as NOTHING
+from attr import resolve_types as resolve_types
+from attr import setters as setters
+from attr import validate as validate
+from attr import validators as validators
+
+# TODO: see definition of attr.asdict/astuple
+def asdict(
+ inst: AttrsInstance,
+ recurse: bool = ...,
+ filter: Optional[_FilterType[Any]] = ...,
+ dict_factory: Type[Mapping[Any, Any]] = ...,
+ retain_collection_types: bool = ...,
+ value_serializer: Optional[
+ Callable[[type, Attribute[Any], Any], Any]
+ ] = ...,
+ tuple_keys: bool = ...,
+) -> Dict[str, Any]: ...
+
+# TODO: add support for returning NamedTuple from the mypy plugin
+def astuple(
+ inst: AttrsInstance,
+ recurse: bool = ...,
+ filter: Optional[_FilterType[Any]] = ...,
+ tuple_factory: Type[Sequence[Any]] = ...,
+ retain_collection_types: bool = ...,
+) -> Tuple[Any, ...]: ...
diff --git a/allAutomation/pytest-env/Lib/site-packages/attrs/__pycache__/__init__.cpython-312.pyc b/allAutomation/pytest-env/Lib/site-packages/attrs/__pycache__/__init__.cpython-312.pyc
new file mode 100644
index 00000000..829ac51e
Binary files /dev/null and b/allAutomation/pytest-env/Lib/site-packages/attrs/__pycache__/__init__.cpython-312.pyc differ
diff --git a/allAutomation/pytest-env/Lib/site-packages/attrs/__pycache__/converters.cpython-312.pyc b/allAutomation/pytest-env/Lib/site-packages/attrs/__pycache__/converters.cpython-312.pyc
new file mode 100644
index 00000000..94f23248
Binary files /dev/null and b/allAutomation/pytest-env/Lib/site-packages/attrs/__pycache__/converters.cpython-312.pyc differ
diff --git a/allAutomation/pytest-env/Lib/site-packages/attrs/__pycache__/exceptions.cpython-312.pyc b/allAutomation/pytest-env/Lib/site-packages/attrs/__pycache__/exceptions.cpython-312.pyc
new file mode 100644
index 00000000..1b20a6ce
Binary files /dev/null and b/allAutomation/pytest-env/Lib/site-packages/attrs/__pycache__/exceptions.cpython-312.pyc differ
diff --git a/allAutomation/pytest-env/Lib/site-packages/attrs/__pycache__/filters.cpython-312.pyc b/allAutomation/pytest-env/Lib/site-packages/attrs/__pycache__/filters.cpython-312.pyc
new file mode 100644
index 00000000..1c995d5f
Binary files /dev/null and b/allAutomation/pytest-env/Lib/site-packages/attrs/__pycache__/filters.cpython-312.pyc differ
diff --git a/allAutomation/pytest-env/Lib/site-packages/attrs/__pycache__/setters.cpython-312.pyc b/allAutomation/pytest-env/Lib/site-packages/attrs/__pycache__/setters.cpython-312.pyc
new file mode 100644
index 00000000..ec3030d6
Binary files /dev/null and b/allAutomation/pytest-env/Lib/site-packages/attrs/__pycache__/setters.cpython-312.pyc differ
diff --git a/allAutomation/pytest-env/Lib/site-packages/attrs/__pycache__/validators.cpython-312.pyc b/allAutomation/pytest-env/Lib/site-packages/attrs/__pycache__/validators.cpython-312.pyc
new file mode 100644
index 00000000..0436d43a
Binary files /dev/null and b/allAutomation/pytest-env/Lib/site-packages/attrs/__pycache__/validators.cpython-312.pyc differ
diff --git a/allAutomation/pytest-env/Lib/site-packages/attrs/converters.py b/allAutomation/pytest-env/Lib/site-packages/attrs/converters.py
new file mode 100644
index 00000000..7821f6c0
--- /dev/null
+++ b/allAutomation/pytest-env/Lib/site-packages/attrs/converters.py
@@ -0,0 +1,3 @@
+# SPDX-License-Identifier: MIT
+
+from attr.converters import * # noqa: F403
diff --git a/allAutomation/pytest-env/Lib/site-packages/attrs/exceptions.py b/allAutomation/pytest-env/Lib/site-packages/attrs/exceptions.py
new file mode 100644
index 00000000..3323f9d2
--- /dev/null
+++ b/allAutomation/pytest-env/Lib/site-packages/attrs/exceptions.py
@@ -0,0 +1,3 @@
+# SPDX-License-Identifier: MIT
+
+from attr.exceptions import * # noqa: F403
diff --git a/allAutomation/pytest-env/Lib/site-packages/attrs/filters.py b/allAutomation/pytest-env/Lib/site-packages/attrs/filters.py
new file mode 100644
index 00000000..3080f483
--- /dev/null
+++ b/allAutomation/pytest-env/Lib/site-packages/attrs/filters.py
@@ -0,0 +1,3 @@
+# SPDX-License-Identifier: MIT
+
+from attr.filters import * # noqa: F403
diff --git a/allAutomation/pytest-env/Lib/site-packages/attrs/py.typed b/allAutomation/pytest-env/Lib/site-packages/attrs/py.typed
new file mode 100644
index 00000000..e69de29b
diff --git a/allAutomation/pytest-env/Lib/site-packages/attrs/setters.py b/allAutomation/pytest-env/Lib/site-packages/attrs/setters.py
new file mode 100644
index 00000000..f3d73bb7
--- /dev/null
+++ b/allAutomation/pytest-env/Lib/site-packages/attrs/setters.py
@@ -0,0 +1,3 @@
+# SPDX-License-Identifier: MIT
+
+from attr.setters import * # noqa: F403
diff --git a/allAutomation/pytest-env/Lib/site-packages/attrs/validators.py b/allAutomation/pytest-env/Lib/site-packages/attrs/validators.py
new file mode 100644
index 00000000..037e124f
--- /dev/null
+++ b/allAutomation/pytest-env/Lib/site-packages/attrs/validators.py
@@ -0,0 +1,3 @@
+# SPDX-License-Identifier: MIT
+
+from attr.validators import * # noqa: F403
diff --git a/allAutomation/pytest-env/Lib/site-packages/beautifulsoup4-4.12.3.dist-info/INSTALLER b/allAutomation/pytest-env/Lib/site-packages/beautifulsoup4-4.12.3.dist-info/INSTALLER
new file mode 100644
index 00000000..a1b589e3
--- /dev/null
+++ b/allAutomation/pytest-env/Lib/site-packages/beautifulsoup4-4.12.3.dist-info/INSTALLER
@@ -0,0 +1 @@
+pip
diff --git a/allAutomation/pytest-env/Lib/site-packages/beautifulsoup4-4.12.3.dist-info/METADATA b/allAutomation/pytest-env/Lib/site-packages/beautifulsoup4-4.12.3.dist-info/METADATA
new file mode 100644
index 00000000..a2681d72
--- /dev/null
+++ b/allAutomation/pytest-env/Lib/site-packages/beautifulsoup4-4.12.3.dist-info/METADATA
@@ -0,0 +1,122 @@
+Metadata-Version: 2.1
+Name: beautifulsoup4
+Version: 4.12.3
+Summary: Screen-scraping library
+Project-URL: Download, https://www.crummy.com/software/BeautifulSoup/bs4/download/
+Project-URL: Homepage, https://www.crummy.com/software/BeautifulSoup/bs4/
+Author-email: Leonard Richardson
+License: MIT License
+License-File: AUTHORS
+License-File: LICENSE
+Keywords: HTML,XML,parse,soup
+Classifier: Development Status :: 5 - Production/Stable
+Classifier: Intended Audience :: Developers
+Classifier: License :: OSI Approved :: MIT License
+Classifier: Programming Language :: Python
+Classifier: Programming Language :: Python :: 3
+Classifier: Topic :: Software Development :: Libraries :: Python Modules
+Classifier: Topic :: Text Processing :: Markup :: HTML
+Classifier: Topic :: Text Processing :: Markup :: SGML
+Classifier: Topic :: Text Processing :: Markup :: XML
+Requires-Python: >=3.6.0
+Requires-Dist: soupsieve>1.2
+Provides-Extra: cchardet
+Requires-Dist: cchardet; extra == 'cchardet'
+Provides-Extra: chardet
+Requires-Dist: chardet; extra == 'chardet'
+Provides-Extra: charset-normalizer
+Requires-Dist: charset-normalizer; extra == 'charset-normalizer'
+Provides-Extra: html5lib
+Requires-Dist: html5lib; extra == 'html5lib'
+Provides-Extra: lxml
+Requires-Dist: lxml; extra == 'lxml'
+Description-Content-Type: text/markdown
+
+Beautiful Soup is a library that makes it easy to scrape information
+from web pages. It sits atop an HTML or XML parser, providing Pythonic
+idioms for iterating, searching, and modifying the parse tree.
+
+# Quick start
+
+```pycon
+>>> from bs4 import BeautifulSoup
+>>> soup = BeautifulSoup("<p>Some<b>bad<i>HTML")
+>>> print(soup.prettify())
+<html>
+ <body>
+  <p>
+   Some
+   <b>
+    bad
+    <i>
+     HTML
+    </i>
+   </b>
+  </p>
+ </body>
+</html>
+>>> soup.find(text="bad")
+'bad'
+>>> soup.i
+<i>HTML</i>
+#
+>>> soup = BeautifulSoup("<tag1>Some<tag2/>bad<tag3>XML", "xml")
+#
+>>> print(soup.prettify())
+<?xml version="1.0" encoding="utf-8"?>
+<tag1>
+ Some
+ <tag2/>
+ bad
+ <tag3>
+  XML
+ </tag3>
+</tag1>
+```
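Beyond parsing, the tree can be searched and edited in place; a small follow-on sketch using the same markup:

```python
from bs4 import BeautifulSoup

soup = BeautifulSoup("<p>Some<b>bad<i>HTML", "html.parser")

# Searching: find_all() returns every tag matching any of the given names,
# in document order.
tags = [t.name for t in soup.find_all(["b", "i"])]

# Modifying: unwrap() removes a tag but keeps its contents in the tree.
soup.i.unwrap()
print(soup.p.get_text())  # SomebadHTML
```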
+
+To go beyond the basics, [comprehensive documentation is available](https://www.crummy.com/software/BeautifulSoup/bs4/doc/).
+
+# Links
+
+* [Homepage](https://www.crummy.com/software/BeautifulSoup/bs4/)
+* [Documentation](https://www.crummy.com/software/BeautifulSoup/bs4/doc/)
+* [Discussion group](https://groups.google.com/group/beautifulsoup/)
+* [Development](https://code.launchpad.net/beautifulsoup/)
+* [Bug tracker](https://bugs.launchpad.net/beautifulsoup/)
+* [Complete changelog](https://bazaar.launchpad.net/~leonardr/beautifulsoup/bs4/view/head:/CHANGELOG)
+
+# Note on Python 2 sunsetting
+
+Beautiful Soup's support for Python 2 was discontinued on December 31,
+2020: one year after the sunset date for Python 2 itself. From this
+point onward, new Beautiful Soup development will exclusively target
+Python 3. The final release of Beautiful Soup 4 to support Python 2
+was 4.9.3.
+
+# Supporting the project
+
+If you use Beautiful Soup as part of your professional work, please consider a
+[Tidelift subscription](https://tidelift.com/subscription/pkg/pypi-beautifulsoup4?utm_source=pypi-beautifulsoup4&utm_medium=referral&utm_campaign=readme).
+This will support many of the free software projects your organization
+depends on, not just Beautiful Soup.
+
+If you use Beautiful Soup for personal projects, the best way to say
+thank you is to read
+[Tool Safety](https://www.crummy.com/software/BeautifulSoup/zine/), a zine I
+wrote about what Beautiful Soup has taught me about software
+development.
+
+# Building the documentation
+
+The bs4/doc/ directory contains full documentation in Sphinx
+format. Run `make html` in that directory to create HTML
+documentation.
+
+# Running the unit tests
+
+Beautiful Soup supports unit test discovery using Pytest:
+
+```
+$ pytest
+```
+
diff --git a/allAutomation/pytest-env/Lib/site-packages/beautifulsoup4-4.12.3.dist-info/RECORD b/allAutomation/pytest-env/Lib/site-packages/beautifulsoup4-4.12.3.dist-info/RECORD
new file mode 100644
index 00000000..adf87a27
--- /dev/null
+++ b/allAutomation/pytest-env/Lib/site-packages/beautifulsoup4-4.12.3.dist-info/RECORD
@@ -0,0 +1,78 @@
+beautifulsoup4-4.12.3.dist-info/INSTALLER,sha256=zuuue4knoyJ-UwPPXg8fezS7VCrXJQrAP7zeNuwvFQg,4
+beautifulsoup4-4.12.3.dist-info/METADATA,sha256=UkOS1koIjlakIy9Q1u2yCNwDEFOUZSrLcsbV-mTInz4,3790
+beautifulsoup4-4.12.3.dist-info/RECORD,,
+beautifulsoup4-4.12.3.dist-info/WHEEL,sha256=mRYSEL3Ih6g5a_CVMIcwiF__0Ae4_gLYh01YFNwiq1k,87
+beautifulsoup4-4.12.3.dist-info/licenses/AUTHORS,sha256=uSIdbrBb1sobdXl7VrlUvuvim2dN9kF3MH4Edn0WKGE,2176
+beautifulsoup4-4.12.3.dist-info/licenses/LICENSE,sha256=VbTY1LHlvIbRDvrJG3TIe8t3UmsPW57a-LnNKtxzl7I,1441
+bs4/__init__.py,sha256=kq32cCtQiNjjU9XwjD0b1jdXN5WEC87nJqSSW3PhVkM,33822
+bs4/__pycache__/__init__.cpython-312.pyc,,
+bs4/__pycache__/css.cpython-312.pyc,,
+bs4/__pycache__/dammit.cpython-312.pyc,,
+bs4/__pycache__/diagnose.cpython-312.pyc,,
+bs4/__pycache__/element.cpython-312.pyc,,
+bs4/__pycache__/formatter.cpython-312.pyc,,
+bs4/builder/__init__.py,sha256=nwb35ftjcwzOs2WkjVm1zvfi7FxSyJP-nN1YheIVT14,24566
+bs4/builder/__pycache__/__init__.cpython-312.pyc,,
+bs4/builder/__pycache__/_html5lib.cpython-312.pyc,,
+bs4/builder/__pycache__/_htmlparser.cpython-312.pyc,,
+bs4/builder/__pycache__/_lxml.cpython-312.pyc,,
+bs4/builder/_html5lib.py,sha256=0w-hmPM5wWR2iDuRCR6MvY6ZPXbg_hgddym-YWqj03s,19114
+bs4/builder/_htmlparser.py,sha256=_VD5Z08j6A9YYMR4y7ZTfdMzwiCBsSUQAPuHiYB-WZI,14923
+bs4/builder/_lxml.py,sha256=yKdMx1kdX7H2CopwSWEYm4Sgrfkd-WDj8HbskcaLauU,14948
+bs4/css.py,sha256=gqGaHRrKeCRF3gDqxzeU0uclOCeSsTpuW9gUaSnJeWc,10077
+bs4/dammit.py,sha256=G0cQfsEqfwJ-FIQMkXgCJwSHMn7t9vPepCrud6fZEKk,41158
+bs4/diagnose.py,sha256=uAwdDugL_67tB-BIwDIFLFbiuzGxP2wQzJJ4_bGYUrA,7195
+bs4/element.py,sha256=Dsol2iehkSjk10GzYgwFyjUEgpqmYZpyaAmbL0rWM2w,92845
+bs4/formatter.py,sha256=Bu4utAQYT9XDJaPPpTRM-dyxJDVLdxf_as-IU5gSY8A,7188
+bs4/tests/__init__.py,sha256=NydTegds_r7MoOEuQLS6TFmTA9TwK3KxJhwEkqjCGTQ,48392
+bs4/tests/__pycache__/__init__.cpython-312.pyc,,
+bs4/tests/__pycache__/test_builder.cpython-312.pyc,,
+bs4/tests/__pycache__/test_builder_registry.cpython-312.pyc,,
+bs4/tests/__pycache__/test_css.cpython-312.pyc,,
+bs4/tests/__pycache__/test_dammit.cpython-312.pyc,,
+bs4/tests/__pycache__/test_docs.cpython-312.pyc,,
+bs4/tests/__pycache__/test_element.cpython-312.pyc,,
+bs4/tests/__pycache__/test_formatter.cpython-312.pyc,,
+bs4/tests/__pycache__/test_fuzz.cpython-312.pyc,,
+bs4/tests/__pycache__/test_html5lib.cpython-312.pyc,,
+bs4/tests/__pycache__/test_htmlparser.cpython-312.pyc,,
+bs4/tests/__pycache__/test_lxml.cpython-312.pyc,,
+bs4/tests/__pycache__/test_navigablestring.cpython-312.pyc,,
+bs4/tests/__pycache__/test_pageelement.cpython-312.pyc,,
+bs4/tests/__pycache__/test_soup.cpython-312.pyc,,
+bs4/tests/__pycache__/test_tag.cpython-312.pyc,,
+bs4/tests/__pycache__/test_tree.cpython-312.pyc,,
+bs4/tests/fuzz/clusterfuzz-testcase-minimized-bs4_fuzzer-4670634698080256.testcase,sha256=yUdXkbpNK7LVOQ0LBHMoqZ1rWaBfSXWytoO_xdSm7Ho,15
+bs4/tests/fuzz/clusterfuzz-testcase-minimized-bs4_fuzzer-4818336571064320.testcase,sha256=Uv_dx4a43TSfoNkjU-jHW2nSXkqHFg4XdAw7SWVObUk,23
+bs4/tests/fuzz/clusterfuzz-testcase-minimized-bs4_fuzzer-4999465949331456.testcase,sha256=OEyVA0Ej4FxswOElrUNt0In4s4YhrmtaxE_NHGZvGtg,30
+bs4/tests/fuzz/clusterfuzz-testcase-minimized-bs4_fuzzer-5000587759190016.testcase,sha256=G4vpNBOz-RwMpi6ewEgNEa13zX0sXhmL7VHOyIcdKVQ,15347
+bs4/tests/fuzz/clusterfuzz-testcase-minimized-bs4_fuzzer-5167584867909632.testcase,sha256=3d8z65o4p7Rur-RmCHoOjzqaYQ8EAtjmiBYTHNyAdl4,19469
+bs4/tests/fuzz/clusterfuzz-testcase-minimized-bs4_fuzzer-5270998950477824.testcase,sha256=NfGIlit1k40Ip3mlnBkYOkIDJX6gHtjlErwl7gsBjAQ,12
+bs4/tests/fuzz/clusterfuzz-testcase-minimized-bs4_fuzzer-5375146639360000.testcase,sha256=xy4i1U0nhFHcnyc5pRKS6JRMvuoCNUur-Scor6UxIGw,4317
+bs4/tests/fuzz/clusterfuzz-testcase-minimized-bs4_fuzzer-5492400320282624.testcase,sha256=Q-UTYpQBUsWoMgIUspUlzveSI-41s4ABC3jajRb-K0o,11502
+bs4/tests/fuzz/clusterfuzz-testcase-minimized-bs4_fuzzer-5703933063462912.testcase,sha256=2bq3S8KxZgk8EajLReHD8m4_0Lj_nrkyJAxB_z_U0D0,5
+bs4/tests/fuzz/clusterfuzz-testcase-minimized-bs4_fuzzer-5843991618256896.testcase,sha256=MZDu31LPLfgu6jP9IZkrlwNes3f_sL8WFP5BChkUKdY,35
+bs4/tests/fuzz/clusterfuzz-testcase-minimized-bs4_fuzzer-5984173902397440.testcase,sha256=w58r-s6besG5JwPXpnz37W2YTj9-_qxFbk6hiEnKeIQ,51495
+bs4/tests/fuzz/clusterfuzz-testcase-minimized-bs4_fuzzer-6124268085182464.testcase,sha256=q8rkdMECEXKcqVhOf5zWHkSBTQeOPt0JiLg2TZiPCuk,10380
+bs4/tests/fuzz/clusterfuzz-testcase-minimized-bs4_fuzzer-6241471367348224.testcase,sha256=QfzoOxKwNuqG-4xIrea6MOQLXhfAAOQJ0r9u-J6kSNs,19
+bs4/tests/fuzz/clusterfuzz-testcase-minimized-bs4_fuzzer-6306874195312640.testcase,sha256=MJ2pHFuuCQUiQz1Kor2sof7LWeRERQ6QK43YNqQHg9o,47
+bs4/tests/fuzz/clusterfuzz-testcase-minimized-bs4_fuzzer-6450958476902400.testcase,sha256=EItOpSdeD4ewK-qgJ9vtxennwn_huguzXgctrUT7fqE,3546
+bs4/tests/fuzz/clusterfuzz-testcase-minimized-bs4_fuzzer-6600557255327744.testcase,sha256=a2aJTG4FceGSJXsjtxoS8S4jk_8rZsS3aznLkeO2_dY,124
+bs4/tests/fuzz/crash-0d306a50c8ed8bcd0785b67000fcd5dea1d33f08.testcase,sha256=jRFRtCKlP3-3EDLc_iVRTcE6JNymv0rYcVM6qRaPrxI,2607
+bs4/tests/fuzz/crash-ffbdfa8a2b26f13537b68d3794b0478a4090ee4a.testcase,sha256=7NsdCiXWAhNkmoW1pvF7rbZExyLAQIWtDtSHXIsH6YU,103
+bs4/tests/test_builder.py,sha256=nc2JE5EMrEf-p24qhf2R8qAV5PpFiOuNpYCmtmCjlTI,1115
+bs4/tests/test_builder_registry.py,sha256=7WLj2prjSHGphebnrjQuI6JYr03Uy_c9_CkaFSQ9HRo,5114
+bs4/tests/test_css.py,sha256=jCcgIWem3lyPa5AjhAk9S6fWI07hk1rg0v8coD7bEtI,17279
+bs4/tests/test_dammit.py,sha256=MbSmRN6VEP0Rm56-w6Ja0TW8eC-8ZxOJ-wXWVf_hRi8,15451
+bs4/tests/test_docs.py,sha256=xoAxnUfoQ7aRqGImwW_9BJDU8WNMZHIuvWqVepvWXt8,1127
+bs4/tests/test_element.py,sha256=92oRSRoGk8gIXAbAGHErKzocx2MK32TqcQdUJ-dGQMo,2377
+bs4/tests/test_formatter.py,sha256=eTzj91Lmhv90z-WiHjK3sBJZm0hRk0crFY1TZaXstCY,4148
+bs4/tests/test_fuzz.py,sha256=_K2utiYVkZ22mvh03g8CBioFU1QDJaff1vTaDyXhxNk,6972
+bs4/tests/test_html5lib.py,sha256=2-ipm-_MaPt37WTxEd5DodUTNhS4EbLFKPRaO6XSCW4,8322
+bs4/tests/test_htmlparser.py,sha256=wnngcIlzjEwH21JFfu_mgt6JdpLt0ncJfLcGT7HeGw0,6256
+bs4/tests/test_lxml.py,sha256=nQCmLt7bWk0id7xMumZw--PzEe1xF9PTQn3lvHyNC6I,7635
+bs4/tests/test_navigablestring.py,sha256=RGSgziNf7cZnYdEPsoqL1B2I68TUJp1JmEQVxbh_ryA,5081
+bs4/tests/test_pageelement.py,sha256=VdGjUxx3RhjqmNsJ92ao6VZC_YD7T8mdLkDZjosOYeE,14274
+bs4/tests/test_soup.py,sha256=JmnAPLE1_GXm0wmwEUN7icdvBz9HDch-qoU2mT_TDrs,19877
+bs4/tests/test_tag.py,sha256=FBPDUisDCbFmvl5HmTtN49CGo3YoUXh5Wiuw5FMLS5E,9616
+bs4/tests/test_tree.py,sha256=n9nTQOzJb3-ZnZ6AkmMdZQ5TYcTUPnqHoVgal0mYXfg,48129
diff --git a/allAutomation/pytest-env/Lib/site-packages/beautifulsoup4-4.12.3.dist-info/WHEEL b/allAutomation/pytest-env/Lib/site-packages/beautifulsoup4-4.12.3.dist-info/WHEEL
new file mode 100644
index 00000000..2860816a
--- /dev/null
+++ b/allAutomation/pytest-env/Lib/site-packages/beautifulsoup4-4.12.3.dist-info/WHEEL
@@ -0,0 +1,4 @@
+Wheel-Version: 1.0
+Generator: hatchling 1.21.0
+Root-Is-Purelib: true
+Tag: py3-none-any
diff --git a/allAutomation/pytest-env/Lib/site-packages/beautifulsoup4-4.12.3.dist-info/licenses/AUTHORS b/allAutomation/pytest-env/Lib/site-packages/beautifulsoup4-4.12.3.dist-info/licenses/AUTHORS
new file mode 100644
index 00000000..1f14fe07
--- /dev/null
+++ b/allAutomation/pytest-env/Lib/site-packages/beautifulsoup4-4.12.3.dist-info/licenses/AUTHORS
@@ -0,0 +1,49 @@
+Behold, mortal, the origins of Beautiful Soup...
+================================================
+
+Leonard Richardson is the primary maintainer.
+
+Aaron DeVore and Isaac Muse have made significant contributions to the
+code base.
+
+Mark Pilgrim provided the encoding detection code that forms the base
+of UnicodeDammit.
+
+Thomas Kluyver and Ezio Melotti finished the work of getting Beautiful
+Soup 4 working under Python 3.
+
+Simon Willison wrote soupselect, which was used to make Beautiful Soup
+support CSS selectors. Isaac Muse wrote SoupSieve, which made it
+possible to _remove_ the CSS selector code from Beautiful Soup.
+
+Sam Ruby helped with a lot of edge cases.
+
+Jonathan Ellis was awarded the prestigious Beau Potage D'Or for his
+work in solving the nestable tags conundrum.
+
+An incomplete list of people have contributed patches to Beautiful
+Soup:
+
+ Istvan Albert, Andrew Lin, Anthony Baxter, Oliver Beattie, Andrew
+Boyko, Tony Chang, Francisco Canas, "Delong", Zephyr Fang, Fuzzy,
+Roman Gaufman, Yoni Gilad, Richie Hindle, Toshihiro Kamiya, Peteris
+Krumins, Kent Johnson, Marek Kapolka, Andreas Kostyrka, Roel Kramer,
+Ben Last, Robert Leftwich, Stefaan Lippens, "liquider", Staffan
+Malmgren, Ksenia Marasanova, JP Moins, Adam Monsen, John Nagle, "Jon",
+Ed Oskiewicz, Martijn Peters, Greg Phillips, Giles Radford, Stefano
+Revera, Arthur Rudolph, Marko Samastur, James Salter, Jouni Seppänen,
+Alexander Schmolck, Tim Shirley, Geoffrey Sneddon, Ville Skyttä,
+"Vikas", Jens Svalgaard, Andy Theyers, Eric Weiser, Glyn Webster, John
+Wiseman, Paul Wright, Danny Yoo
+
+An incomplete list of people who made suggestions or found bugs or
+found ways to break Beautiful Soup:
+
+ Hanno Böck, Matteo Bertini, Chris Curvey, Simon Cusack, Bruce Eckel,
+ Matt Ernst, Michael Foord, Tom Harris, Bill de hOra, Donald Howes,
+ Matt Patterson, Scott Roberts, Steve Strassmann, Mike Williams,
+ warchild at redho dot com, Sami Kuisma, Carlos Rocha, Bob Hutchison,
+ Joren Mc, Michal Migurski, John Kleven, Tim Heaney, Tripp Lilley, Ed
+ Summers, Dennis Sutch, Chris Smith, Aaron Swartz, Stuart
+ Turner, Greg Edwards, Kevin J Kalupson, Nikos Kouremenos, Artur de
+ Sousa Rocha, Yichun Wei, Per Vognsen
diff --git a/allAutomation/pytest-env/Lib/site-packages/beautifulsoup4-4.12.3.dist-info/licenses/LICENSE b/allAutomation/pytest-env/Lib/site-packages/beautifulsoup4-4.12.3.dist-info/licenses/LICENSE
new file mode 100644
index 00000000..08e3a9cf
--- /dev/null
+++ b/allAutomation/pytest-env/Lib/site-packages/beautifulsoup4-4.12.3.dist-info/licenses/LICENSE
@@ -0,0 +1,31 @@
+Beautiful Soup is made available under the MIT license:
+
+ Copyright (c) Leonard Richardson
+
+ Permission is hereby granted, free of charge, to any person obtaining
+ a copy of this software and associated documentation files (the
+ "Software"), to deal in the Software without restriction, including
+ without limitation the rights to use, copy, modify, merge, publish,
+ distribute, sublicense, and/or sell copies of the Software, and to
+ permit persons to whom the Software is furnished to do so, subject to
+ the following conditions:
+
+ The above copyright notice and this permission notice shall be
+ included in all copies or substantial portions of the Software.
+
+ THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+ MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
+ BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
+ ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
+ CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ SOFTWARE.
+
+Beautiful Soup incorporates code from the html5lib library, which is
+also made available under the MIT license. Copyright (c) James Graham
+and other contributors
+
+Beautiful Soup has an optional dependency on the soupsieve library,
+which is also made available under the MIT license. Copyright (c)
+Isaac Muse
diff --git a/allAutomation/pytest-env/Lib/site-packages/bs4/__init__.py b/allAutomation/pytest-env/Lib/site-packages/bs4/__init__.py
new file mode 100644
index 00000000..d8ad5e1d
--- /dev/null
+++ b/allAutomation/pytest-env/Lib/site-packages/bs4/__init__.py
@@ -0,0 +1,840 @@
+"""Beautiful Soup Elixir and Tonic - "The Screen-Scraper's Friend".
+
+http://www.crummy.com/software/BeautifulSoup/
+
+Beautiful Soup uses a pluggable XML or HTML parser to parse a
+(possibly invalid) document into a tree representation. Beautiful Soup
+provides methods and Pythonic idioms that make it easy to navigate,
+search, and modify the parse tree.
+
+Beautiful Soup works with Python 3.6 and up. It works better if lxml
+and/or html5lib is installed.
+
+For more than you ever wanted to know about Beautiful Soup, see the
+documentation: http://www.crummy.com/software/BeautifulSoup/bs4/doc/
+"""
+
+__author__ = "Leonard Richardson (leonardr@segfault.org)"
+__version__ = "4.12.3"
+__copyright__ = "Copyright (c) 2004-2024 Leonard Richardson"
+# Use of this source code is governed by the MIT license.
+__license__ = "MIT"
+
+__all__ = ['BeautifulSoup']
+
+from collections import Counter
+import os
+import re
+import sys
+import traceback
+import warnings
+
+# The very first thing we do is give a useful error if someone is
+# running this code under Python 2.
+if sys.version_info.major < 3:
+ raise ImportError('You are trying to use a Python 3-specific version of Beautiful Soup under Python 2. This will not work. The final version of Beautiful Soup to support Python 2 was 4.9.3.')
+
+from .builder import (
+ builder_registry,
+ ParserRejectedMarkup,
+ XMLParsedAsHTMLWarning,
+ HTMLParserTreeBuilder
+)
+from .dammit import UnicodeDammit
+from .element import (
+ CData,
+ Comment,
+ CSS,
+ DEFAULT_OUTPUT_ENCODING,
+ Declaration,
+ Doctype,
+ NavigableString,
+ PageElement,
+ ProcessingInstruction,
+ PYTHON_SPECIFIC_ENCODINGS,
+ ResultSet,
+ Script,
+ Stylesheet,
+ SoupStrainer,
+ Tag,
+ TemplateString,
+ )
+
+# Define some custom warnings.
+class GuessedAtParserWarning(UserWarning):
+ """The warning issued when BeautifulSoup has to guess what parser to
+ use -- probably because no parser was specified in the constructor.
+ """
+
+class MarkupResemblesLocatorWarning(UserWarning):
+ """The warning issued when BeautifulSoup is given 'markup' that
+ actually looks like a resource locator -- a URL or a path to a file
+ on disk.
+ """
+
+
+class BeautifulSoup(Tag):
+ """A data structure representing a parsed HTML or XML document.
+
+ Most of the methods you'll call on a BeautifulSoup object are inherited from
+ PageElement or Tag.
+
+ Internally, this class defines the basic interface called by the
+ tree builders when converting an HTML/XML document into a data
+ structure. The interface abstracts away the differences between
+ parsers. To write a new tree builder, you'll need to understand
+ these methods as a whole.
+
+ These methods will be called by the BeautifulSoup constructor:
+ * reset()
+ * feed(markup)
+
+ The tree builder may call these methods from its feed() implementation:
+ * handle_starttag(name, attrs) # See note about return value
+ * handle_endtag(name)
+ * handle_data(data) # Appends to the current data node
+ * endData(containerClass) # Ends the current data node
+
+ No matter how complicated the underlying parser is, you should be
+ able to build a tree using 'start tag' events, 'end tag' events,
+ 'data' events, and "done with data" events.
+
+ If you encounter an empty-element tag (aka a self-closing tag,
+ like HTML's <br> tag), call handle_starttag and then
+ handle_endtag.
+ """
+
+ # Since BeautifulSoup subclasses Tag, it's possible to treat it as
+ # a Tag with a .name. This name makes it clear the BeautifulSoup
+ # object isn't a real markup tag.
+ ROOT_TAG_NAME = '[document]'
+
+ # If the end-user gives no indication which tree builder they
+ # want, look for one with these features.
+ DEFAULT_BUILDER_FEATURES = ['html', 'fast']
+
+ # A string containing all ASCII whitespace characters, used in
+ # endData() to detect data chunks that seem 'empty'.
+ ASCII_SPACES = '\x20\x0a\x09\x0c\x0d'
+
+ NO_PARSER_SPECIFIED_WARNING = "No parser was explicitly specified, so I'm using the best available %(markup_type)s parser for this system (\"%(parser)s\"). This usually isn't a problem, but if you run this code on another system, or in a different virtual environment, it may use a different parser and behave differently.\n\nThe code that caused this warning is on line %(line_number)s of the file %(filename)s. To get rid of this warning, pass the additional argument 'features=\"%(parser)s\"' to the BeautifulSoup constructor.\n"
+
+ def __init__(self, markup="", features=None, builder=None,
+ parse_only=None, from_encoding=None, exclude_encodings=None,
+ element_classes=None, **kwargs):
+ """Constructor.
+
+ :param markup: A string or a file-like object representing
+ markup to be parsed.
+
+ :param features: Desirable features of the parser to be
+ used. This may be the name of a specific parser ("lxml",
+ "lxml-xml", "html.parser", or "html5lib") or it may be the
+ type of markup to be used ("html", "html5", "xml"). It's
+ recommended that you name a specific parser, so that
+ Beautiful Soup gives you the same results across platforms
+ and virtual environments.
+
+ :param builder: A TreeBuilder subclass to instantiate (or
+ instance to use) instead of looking one up based on
+ `features`. You only need to use this if you've implemented a
+ custom TreeBuilder.
+
+ :param parse_only: A SoupStrainer. Only parts of the document
+ matching the SoupStrainer will be considered. This is useful
+ when parsing part of a document that would otherwise be too
+ large to fit into memory.
+
+ :param from_encoding: A string indicating the encoding of the
+ document to be parsed. Pass this in if Beautiful Soup is
+ guessing wrongly about the document's encoding.
+
+ :param exclude_encodings: A list of strings indicating
+ encodings known to be wrong. Pass this in if you don't know
+ the document's encoding but you know Beautiful Soup's guess is
+ wrong.
+
+ :param element_classes: A dictionary mapping BeautifulSoup
+ classes like Tag and NavigableString, to other classes you'd
+ like to be instantiated instead as the parse tree is
+ built. This is useful for subclassing Tag or NavigableString
+ to modify default behavior.
+
+ :param kwargs: For backwards compatibility purposes, the
+ constructor accepts certain keyword arguments used in
+ Beautiful Soup 3. None of these arguments do anything in
+ Beautiful Soup 4; they will result in a warning and then be
+ ignored.
+
+ Apart from this, any keyword arguments passed into the
+ BeautifulSoup constructor are propagated to the TreeBuilder
+ constructor. This makes it possible to configure a
+ TreeBuilder by passing in arguments, not just by saying which
+ one to use.
+ """
+ if 'convertEntities' in kwargs:
+ del kwargs['convertEntities']
+ warnings.warn(
+ "BS4 does not respect the convertEntities argument to the "
+ "BeautifulSoup constructor. Entities are always converted "
+ "to Unicode characters.")
+
+ if 'markupMassage' in kwargs:
+ del kwargs['markupMassage']
+ warnings.warn(
+ "BS4 does not respect the markupMassage argument to the "
+ "BeautifulSoup constructor. The tree builder is responsible "
+ "for any necessary markup massage.")
+
+ if 'smartQuotesTo' in kwargs:
+ del kwargs['smartQuotesTo']
+ warnings.warn(
+ "BS4 does not respect the smartQuotesTo argument to the "
+ "BeautifulSoup constructor. Smart quotes are always converted "
+ "to Unicode characters.")
+
+ if 'selfClosingTags' in kwargs:
+ del kwargs['selfClosingTags']
+ warnings.warn(
+ "BS4 does not respect the selfClosingTags argument to the "
+ "BeautifulSoup constructor. The tree builder is responsible "
+ "for understanding self-closing tags.")
+
+ if 'isHTML' in kwargs:
+ del kwargs['isHTML']
+ warnings.warn(
+ "BS4 does not respect the isHTML argument to the "
+ "BeautifulSoup constructor. Suggest you use "
+ "features='lxml' for HTML and features='lxml-xml' for "
+ "XML.")
+
+ def deprecated_argument(old_name, new_name):
+ if old_name in kwargs:
+ warnings.warn(
+ 'The "%s" argument to the BeautifulSoup constructor '
+ 'has been renamed to "%s."' % (old_name, new_name),
+ DeprecationWarning, stacklevel=3
+ )
+ return kwargs.pop(old_name)
+ return None
+
+ parse_only = parse_only or deprecated_argument(
+ "parseOnlyThese", "parse_only")
+
+ from_encoding = from_encoding or deprecated_argument(
+ "fromEncoding", "from_encoding")
+
+ if from_encoding and isinstance(markup, str):
+ warnings.warn("You provided Unicode markup but also provided a value for from_encoding. Your from_encoding will be ignored.")
+ from_encoding = None
+
+ self.element_classes = element_classes or dict()
+
+ # We need this information to track whether or not the builder
+ # was specified well enough that we can omit the 'you need to
+ # specify a parser' warning.
+ original_builder = builder
+ original_features = features
+
+ if isinstance(builder, type):
+ # A builder class was passed in; it needs to be instantiated.
+ builder_class = builder
+ builder = None
+ elif builder is None:
+ if isinstance(features, str):
+ features = [features]
+ if features is None or len(features) == 0:
+ features = self.DEFAULT_BUILDER_FEATURES
+ builder_class = builder_registry.lookup(*features)
+ if builder_class is None:
+ raise FeatureNotFound(
+ "Couldn't find a tree builder with the features you "
+ "requested: %s. Do you need to install a parser library?"
+ % ",".join(features))
+
+ # At this point either we have a TreeBuilder instance in
+ # builder, or we have a builder_class that we can instantiate
+ # with the remaining **kwargs.
+ if builder is None:
+ builder = builder_class(**kwargs)
+ if not original_builder and not (
+ original_features == builder.NAME or
+ original_features in builder.ALTERNATE_NAMES
+ ) and markup:
+ # The user did not tell us which TreeBuilder to use,
+ # and we had to guess. Issue a warning.
+ if builder.is_xml:
+ markup_type = "XML"
+ else:
+ markup_type = "HTML"
+
+ # This code adapted from warnings.py so that we get the same line
+ # of code as our warnings.warn() call gets, even if the answer is wrong
+ # (as it may be in a multithreading situation).
+ caller = None
+ try:
+ caller = sys._getframe(1)
+ except ValueError:
+ pass
+ if caller:
+ globals = caller.f_globals
+ line_number = caller.f_lineno
+ else:
+ globals = sys.__dict__
+ line_number = 1
+ filename = globals.get('__file__')
+ if filename:
+ fnl = filename.lower()
+ if fnl.endswith((".pyc", ".pyo")):
+ filename = filename[:-1]
+ if filename:
+ # If there is no filename at all, the user is most likely in a REPL,
+ # and the warning is not necessary.
+ values = dict(
+ filename=filename,
+ line_number=line_number,
+ parser=builder.NAME,
+ markup_type=markup_type
+ )
+ warnings.warn(
+ self.NO_PARSER_SPECIFIED_WARNING % values,
+ GuessedAtParserWarning, stacklevel=2
+ )
+ else:
+ if kwargs:
+ warnings.warn("Keyword arguments to the BeautifulSoup constructor will be ignored. These would normally be passed into the TreeBuilder constructor, but a TreeBuilder instance was passed in as `builder`.")
+
+ self.builder = builder
+ self.is_xml = builder.is_xml
+ self.known_xml = self.is_xml
+ self._namespaces = dict()
+ self.parse_only = parse_only
+
+ if hasattr(markup, 'read'): # It's a file-type object.
+ markup = markup.read()
+ elif len(markup) <= 256 and (
+ (isinstance(markup, bytes) and not b'<' in markup)
+ or (isinstance(markup, str) and not '<' in markup)
+ ):
+ # Issue warnings for a couple beginner problems
+ # involving passing non-markup to Beautiful Soup.
+ # Beautiful Soup will still parse the input as markup,
+ # since that is sometimes the intended behavior.
+ if not self._markup_is_url(markup):
+ self._markup_resembles_filename(markup)
+
+ rejections = []
+ success = False
+ for (self.markup, self.original_encoding, self.declared_html_encoding,
+ self.contains_replacement_characters) in (
+ self.builder.prepare_markup(
+ markup, from_encoding, exclude_encodings=exclude_encodings)):
+ self.reset()
+ self.builder.initialize_soup(self)
+ try:
+ self._feed()
+ success = True
+ break
+ except ParserRejectedMarkup as e:
+ rejections.append(e)
+ pass
+
+ if not success:
+ other_exceptions = [str(e) for e in rejections]
+ raise ParserRejectedMarkup(
+ "The markup you provided was rejected by the parser. Trying a different parser or a different encoding may help.\n\nOriginal exception(s) from parser:\n " + "\n ".join(other_exceptions)
+ )
+
+ # Clear out the markup and remove the builder's circular
+ # reference to this object.
+ self.markup = None
+ self.builder.soup = None
+
+ def _clone(self):
+ """Create a new BeautifulSoup object with the same TreeBuilder,
+ but not associated with any markup.
+
+ This is the first step of the deepcopy process.
+ """
+ clone = type(self)("", None, self.builder)
+
+ # Keep track of the encoding of the original document,
+ # since we won't be parsing it again.
+ clone.original_encoding = self.original_encoding
+ return clone
+
+ def __getstate__(self):
+ # Frequently a tree builder can't be pickled.
+ d = dict(self.__dict__)
+ if 'builder' in d and d['builder'] is not None and not self.builder.picklable:
+ d['builder'] = type(self.builder)
+ # Store the contents as a Unicode string.
+ d['contents'] = []
+ d['markup'] = self.decode()
+
+ # If _most_recent_element is present, it's a Tag object left
+ # over from initial parse. It might not be picklable and we
+ # don't need it.
+ if '_most_recent_element' in d:
+ del d['_most_recent_element']
+ return d
+
+ def __setstate__(self, state):
+ # If necessary, restore the TreeBuilder by looking it up.
+ self.__dict__ = state
+ if isinstance(self.builder, type):
+ self.builder = self.builder()
+ elif not self.builder:
+ # We don't know which builder was used to build this
+ # parse tree, so use a default we know is always available.
+ self.builder = HTMLParserTreeBuilder()
+ self.builder.soup = self
+ self.reset()
+ self._feed()
+ return state
+
+
+ @classmethod
+ def _decode_markup(cls, markup):
+ """Ensure `markup` is bytes so it's safe to send into warnings.warn.
+
+ TODO: warnings.warn had this problem back in 2010 but it might not
+ anymore.
+ """
+ if isinstance(markup, bytes):
+ decoded = markup.decode('utf-8', 'replace')
+ else:
+ decoded = markup
+ return decoded
+
+ @classmethod
+ def _markup_is_url(cls, markup):
+ """Error-handling method to raise a warning if incoming markup looks
+ like a URL.
+
+ :param markup: A string.
+ :return: Whether or not the markup resembles a URL
+ closely enough to justify a warning.
+ """
+ if isinstance(markup, bytes):
+ space = b' '
+ cant_start_with = (b"http:", b"https:")
+ elif isinstance(markup, str):
+ space = ' '
+ cant_start_with = ("http:", "https:")
+ else:
+ return False
+
+ if any(markup.startswith(prefix) for prefix in cant_start_with):
+ if not space in markup:
+ warnings.warn(
+ 'The input looks more like a URL than markup. You may want to use'
+ ' an HTTP client like requests to get the document behind'
+ ' the URL, and feed that document to Beautiful Soup.',
+ MarkupResemblesLocatorWarning,
+ stacklevel=3
+ )
+ return True
+ return False
+
+ @classmethod
+ def _markup_resembles_filename(cls, markup):
+ """Error-handling method to raise a warning if incoming markup
+ resembles a filename.
+
+ :param markup: A bytestring or string.
+ :return: Whether or not the markup resembles a filename
+ closely enough to justify a warning.
+ """
+ path_characters = '/\\'
+ extensions = ['.html', '.htm', '.xml', '.xhtml', '.txt']
+ if isinstance(markup, bytes):
+ path_characters = path_characters.encode("utf8")
+ extensions = [x.encode('utf8') for x in extensions]
+ filelike = False
+ if any(x in markup for x in path_characters):
+ filelike = True
+ else:
+ lower = markup.lower()
+ if any(lower.endswith(ext) for ext in extensions):
+ filelike = True
+ if filelike:
+ warnings.warn(
+ 'The input looks more like a filename than markup. You may'
+ ' want to open this file and pass the filehandle into'
+ ' Beautiful Soup.',
+ MarkupResemblesLocatorWarning, stacklevel=3
+ )
+ return True
+ return False
+
+ def _feed(self):
+ """Internal method that parses previously set markup, creating a large
+ number of Tag and NavigableString objects.
+ """
+ # Convert the document to Unicode.
+ self.builder.reset()
+
+ self.builder.feed(self.markup)
+ # Close out any unfinished strings and close all the open tags.
+ self.endData()
+ while self.currentTag.name != self.ROOT_TAG_NAME:
+ self.popTag()
+
+ def reset(self):
+ """Reset this object to a state as though it had never parsed any
+ markup.
+ """
+ Tag.__init__(self, self, self.builder, self.ROOT_TAG_NAME)
+ self.hidden = 1
+ self.builder.reset()
+ self.current_data = []
+ self.currentTag = None
+ self.tagStack = []
+ self.open_tag_counter = Counter()
+ self.preserve_whitespace_tag_stack = []
+ self.string_container_stack = []
+ self._most_recent_element = None
+ self.pushTag(self)
+
+ def new_tag(self, name, namespace=None, nsprefix=None, attrs={},
+ sourceline=None, sourcepos=None, **kwattrs):
+ """Create a new Tag associated with this BeautifulSoup object.
+
+ :param name: The name of the new Tag.
+ :param namespace: The URI of the new Tag's XML namespace, if any.
+ :param prefix: The prefix for the new Tag's XML namespace, if any.
+ :param attrs: A dictionary of this Tag's attribute values; can
+ be used instead of `kwattrs` for attributes like 'class'
+ that are reserved words in Python.
+ :param sourceline: The line number where this tag was
+ (purportedly) found in its source document.
+ :param sourcepos: The character position within `sourceline` where this
+ tag was (purportedly) found.
+ :param kwattrs: Keyword arguments for the new Tag's attribute values.
+
+ """
+ kwattrs.update(attrs)
+ return self.element_classes.get(Tag, Tag)(
+ None, self.builder, name, namespace, nsprefix, kwattrs,
+ sourceline=sourceline, sourcepos=sourcepos
+ )
+
+ def string_container(self, base_class=None):
+ container = base_class or NavigableString
+
+ # There may be a general override of NavigableString.
+ container = self.element_classes.get(
+ container, container
+ )
+
+ # On top of that, we may be inside a tag that needs a special
+ # container class.
+ if self.string_container_stack and container is NavigableString:
+ container = self.builder.string_containers.get(
+ self.string_container_stack[-1].name, container
+ )
+ return container
+
+ def new_string(self, s, subclass=None):
+ """Create a new NavigableString associated with this BeautifulSoup
+ object.
+ """
+ container = self.string_container(subclass)
+ return container(s)
+
+ def insert_before(self, *args):
+ """This method is part of the PageElement API, but `BeautifulSoup` doesn't implement
+ it because there is nothing before or after it in the parse tree.
+ """
+ raise NotImplementedError("BeautifulSoup objects don't support insert_before().")
+
+ def insert_after(self, *args):
+ """This method is part of the PageElement API, but `BeautifulSoup` doesn't implement
+ it because there is nothing before or after it in the parse tree.
+ """
+ raise NotImplementedError("BeautifulSoup objects don't support insert_after().")
+
+ def popTag(self):
+ """Internal method called by _popToTag when a tag is closed."""
+ tag = self.tagStack.pop()
+ if tag.name in self.open_tag_counter:
+ self.open_tag_counter[tag.name] -= 1
+ if self.preserve_whitespace_tag_stack and tag == self.preserve_whitespace_tag_stack[-1]:
+ self.preserve_whitespace_tag_stack.pop()
+ if self.string_container_stack and tag == self.string_container_stack[-1]:
+ self.string_container_stack.pop()
+ #print("Pop", tag.name)
+ if self.tagStack:
+ self.currentTag = self.tagStack[-1]
+ return self.currentTag
+
+ def pushTag(self, tag):
+ """Internal method called by handle_starttag when a tag is opened."""
+ #print("Push", tag.name)
+ if self.currentTag is not None:
+ self.currentTag.contents.append(tag)
+ self.tagStack.append(tag)
+ self.currentTag = self.tagStack[-1]
+ if tag.name != self.ROOT_TAG_NAME:
+ self.open_tag_counter[tag.name] += 1
+ if tag.name in self.builder.preserve_whitespace_tags:
+ self.preserve_whitespace_tag_stack.append(tag)
+ if tag.name in self.builder.string_containers:
+ self.string_container_stack.append(tag)
+
+ def endData(self, containerClass=None):
+ """Method called by the TreeBuilder when the end of a data segment
+ occurs.
+ """
+ if self.current_data:
+ current_data = ''.join(self.current_data)
+ # If whitespace is not preserved, and this string contains
+ # nothing but ASCII spaces, replace it with a single space
+ # or newline.
+ if not self.preserve_whitespace_tag_stack:
+ strippable = True
+ for i in current_data:
+ if i not in self.ASCII_SPACES:
+ strippable = False
+ break
+ if strippable:
+ if '\n' in current_data:
+ current_data = '\n'
+ else:
+ current_data = ' '
+
+ # Reset the data collector.
+ self.current_data = []
+
+ # Should we add this string to the tree at all?
+ if self.parse_only and len(self.tagStack) <= 1 and \
+ (not self.parse_only.text or \
+ not self.parse_only.search(current_data)):
+ return
+
+ containerClass = self.string_container(containerClass)
+ o = containerClass(current_data)
+ self.object_was_parsed(o)
+
+ def object_was_parsed(self, o, parent=None, most_recent_element=None):
+ """Method called by the TreeBuilder to integrate an object into the parse tree."""
+ if parent is None:
+ parent = self.currentTag
+ if most_recent_element is not None:
+ previous_element = most_recent_element
+ else:
+ previous_element = self._most_recent_element
+
+ next_element = previous_sibling = next_sibling = None
+ if isinstance(o, Tag):
+ next_element = o.next_element
+ next_sibling = o.next_sibling
+ previous_sibling = o.previous_sibling
+ if previous_element is None:
+ previous_element = o.previous_element
+
+ fix = parent.next_element is not None
+
+ o.setup(parent, previous_element, next_element, previous_sibling, next_sibling)
+
+ self._most_recent_element = o
+ parent.contents.append(o)
+
+ # Check if we are inserting into an already parsed node.
+ if fix:
+ self._linkage_fixer(parent)
+
+ def _linkage_fixer(self, el):
+ """Make sure linkage of this fragment is sound."""
+
+ first = el.contents[0]
+ child = el.contents[-1]
+ descendant = child
+
+ if child is first and el.parent is not None:
+ # Parent should be linked to first child
+ el.next_element = child
+ # We are no longer linked to whatever this element is
+ prev_el = child.previous_element
+ if prev_el is not None and prev_el is not el:
+ prev_el.next_element = None
+ # First child should be linked to the parent, and no previous siblings.
+ child.previous_element = el
+ child.previous_sibling = None
+
+ # We have no sibling as we've been appended as the last.
+ child.next_sibling = None
+
+ # This index is a tag, dig deeper for a "last descendant"
+ if isinstance(child, Tag) and child.contents:
+ descendant = child._last_descendant(False)
+
+ # As the final step, link last descendant. It should be linked
+ # to the parent's next sibling (if found), else walk up the chain
+ # and find a parent with a sibling. It should have no next sibling.
+ descendant.next_element = None
+ descendant.next_sibling = None
+ target = el
+ while True:
+ if target is None:
+ break
+ elif target.next_sibling is not None:
+ descendant.next_element = target.next_sibling
+ target.next_sibling.previous_element = child
+ break
+ target = target.parent
+
+ def _popToTag(self, name, nsprefix=None, inclusivePop=True):
+ """Pops the tag stack up to and including the most recent
+ instance of the given tag.
+
+ If there are no open tags with the given name, nothing will be
+ popped.
+
+ :param name: Pop up to the most recent tag with this name.
+ :param nsprefix: The namespace prefix that goes with `name`.
+ :param inclusivePop: If this is false, pops the tag stack up
+ to but *not* including the most recent instance of the
+ given tag.
+
+ """
+ #print("Popping to %s" % name)
+ if name == self.ROOT_TAG_NAME:
+ # The BeautifulSoup object itself can never be popped.
+ return
+
+ most_recently_popped = None
+
+ stack_size = len(self.tagStack)
+ for i in range(stack_size - 1, 0, -1):
+ if not self.open_tag_counter.get(name):
+ break
+ t = self.tagStack[i]
+ if (name == t.name and nsprefix == t.prefix):
+ if inclusivePop:
+ most_recently_popped = self.popTag()
+ break
+ most_recently_popped = self.popTag()
+
+ return most_recently_popped
+
+ def handle_starttag(self, name, namespace, nsprefix, attrs, sourceline=None,
+ sourcepos=None, namespaces=None):
+ """Called by the tree builder when a new tag is encountered.
+
+ :param name: Name of the tag.
+ :param nsprefix: Namespace prefix for the tag.
+ :param attrs: A dictionary of attribute values.
+ :param sourceline: The line number where this tag was found in its
+ source document.
+ :param sourcepos: The character position within `sourceline` where this
+ tag was found.
+ :param namespaces: A dictionary of all namespace prefix mappings
+ currently in scope in the document.
+
+ If this method returns None, the tag was rejected by an active
+ SoupStrainer. You should proceed as if the tag had not occurred
+ in the document. For instance, if this was a self-closing tag,
+ don't call handle_endtag.
+ """
+ # print("Start tag %s: %s" % (name, attrs))
+ self.endData()
+
+ if (self.parse_only and len(self.tagStack) <= 1
+ and (self.parse_only.text
+ or not self.parse_only.search_tag(name, attrs))):
+ return None
+
+ tag = self.element_classes.get(Tag, Tag)(
+ self, self.builder, name, namespace, nsprefix, attrs,
+ self.currentTag, self._most_recent_element,
+ sourceline=sourceline, sourcepos=sourcepos,
+ namespaces=namespaces
+ )
+ if tag is None:
+ return tag
+ if self._most_recent_element is not None:
+ self._most_recent_element.next_element = tag
+ self._most_recent_element = tag
+ self.pushTag(tag)
+ return tag
+
+ def handle_endtag(self, name, nsprefix=None):
+ """Called by the tree builder when an ending tag is encountered.
+
+ :param name: Name of the tag.
+ :param nsprefix: Namespace prefix for the tag.
+ """
+ #print("End tag: " + name)
+ self.endData()
+ self._popToTag(name, nsprefix)
+
+ def handle_data(self, data):
+ """Called by the tree builder when a chunk of textual data is encountered."""
+ self.current_data.append(data)
+
+ def decode(self, pretty_print=False,
+ eventual_encoding=DEFAULT_OUTPUT_ENCODING,
+ formatter="minimal", iterator=None):
+ """Returns a string or Unicode representation of the parse tree
+ as an HTML or XML document.
+
+ :param pretty_print: If this is True, indentation will be used to
+ make the document more readable.
+ :param eventual_encoding: The encoding of the final document.
+ If this is None, the document will be a Unicode string.
+ """
+ if self.is_xml:
+ # Print the XML declaration
+ encoding_part = ''
+ if eventual_encoding in PYTHON_SPECIFIC_ENCODINGS:
+ # This is a special Python encoding; it can't actually
+ # go into an XML document because it means nothing
+ # outside of Python.
+ eventual_encoding = None
+ if eventual_encoding is not None:
+ encoding_part = ' encoding="%s"' % eventual_encoding
+ prefix = '<?xml version="1.0"%s?>\n' % encoding_part
+ else:
+ prefix = ''
+ if not pretty_print:
+ indent_level = None
+ else:
+ indent_level = 0
+ return prefix + super(BeautifulSoup, self).decode(
+ indent_level, eventual_encoding, formatter, iterator)
+
+# Aliases to make it easier to get started quickly, e.g. 'from bs4 import _soup'
+_s = BeautifulSoup
+_soup = BeautifulSoup
+
+class BeautifulStoneSoup(BeautifulSoup):
+ """Deprecated interface to an XML parser."""
+
+ def __init__(self, *args, **kwargs):
+ kwargs['features'] = 'xml'
+ warnings.warn(
+ 'The BeautifulStoneSoup class is deprecated. Instead of using '
+ 'it, pass features="xml" into the BeautifulSoup constructor.',
+ DeprecationWarning, stacklevel=2
+ )
+ super(BeautifulStoneSoup, self).__init__(*args, **kwargs)
+
+
+class StopParsing(Exception):
+ """Exception raised by a TreeBuilder if it's unable to continue parsing."""
+ pass
+
+class FeatureNotFound(ValueError):
+ """Exception raised by the BeautifulSoup constructor if no parser with the
+ requested features is found.
+ """
+ pass
+
+
+# If this file is run as a script, act as an HTML pretty-printer.
+if __name__ == '__main__':
+ import sys
+ soup = BeautifulSoup(sys.stdin)
+ print((soup.prettify()))
diff --git a/allAutomation/pytest-env/Lib/site-packages/bs4/__pycache__/__init__.cpython-312.pyc b/allAutomation/pytest-env/Lib/site-packages/bs4/__pycache__/__init__.cpython-312.pyc
new file mode 100644
index 00000000..f9385a88
Binary files /dev/null and b/allAutomation/pytest-env/Lib/site-packages/bs4/__pycache__/__init__.cpython-312.pyc differ
diff --git a/allAutomation/pytest-env/Lib/site-packages/bs4/__pycache__/css.cpython-312.pyc b/allAutomation/pytest-env/Lib/site-packages/bs4/__pycache__/css.cpython-312.pyc
new file mode 100644
index 00000000..c2891590
Binary files /dev/null and b/allAutomation/pytest-env/Lib/site-packages/bs4/__pycache__/css.cpython-312.pyc differ
diff --git a/allAutomation/pytest-env/Lib/site-packages/bs4/__pycache__/dammit.cpython-312.pyc b/allAutomation/pytest-env/Lib/site-packages/bs4/__pycache__/dammit.cpython-312.pyc
new file mode 100644
index 00000000..ef73d8af
Binary files /dev/null and b/allAutomation/pytest-env/Lib/site-packages/bs4/__pycache__/dammit.cpython-312.pyc differ
diff --git a/allAutomation/pytest-env/Lib/site-packages/bs4/__pycache__/diagnose.cpython-312.pyc b/allAutomation/pytest-env/Lib/site-packages/bs4/__pycache__/diagnose.cpython-312.pyc
new file mode 100644
index 00000000..112e2676
Binary files /dev/null and b/allAutomation/pytest-env/Lib/site-packages/bs4/__pycache__/diagnose.cpython-312.pyc differ
diff --git a/allAutomation/pytest-env/Lib/site-packages/bs4/__pycache__/element.cpython-312.pyc b/allAutomation/pytest-env/Lib/site-packages/bs4/__pycache__/element.cpython-312.pyc
new file mode 100644
index 00000000..0189dcd7
Binary files /dev/null and b/allAutomation/pytest-env/Lib/site-packages/bs4/__pycache__/element.cpython-312.pyc differ
diff --git a/allAutomation/pytest-env/Lib/site-packages/bs4/__pycache__/formatter.cpython-312.pyc b/allAutomation/pytest-env/Lib/site-packages/bs4/__pycache__/formatter.cpython-312.pyc
new file mode 100644
index 00000000..05be2441
Binary files /dev/null and b/allAutomation/pytest-env/Lib/site-packages/bs4/__pycache__/formatter.cpython-312.pyc differ
diff --git a/allAutomation/pytest-env/Lib/site-packages/bs4/builder/__init__.py b/allAutomation/pytest-env/Lib/site-packages/bs4/builder/__init__.py
new file mode 100644
index 00000000..ffb31fc2
--- /dev/null
+++ b/allAutomation/pytest-env/Lib/site-packages/bs4/builder/__init__.py
@@ -0,0 +1,636 @@
+# Use of this source code is governed by the MIT license.
+__license__ = "MIT"
+
+from collections import defaultdict
+import itertools
+import re
+import warnings
+import sys
+from bs4.element import (
+ CharsetMetaAttributeValue,
+ ContentMetaAttributeValue,
+ RubyParenthesisString,
+ RubyTextString,
+ Stylesheet,
+ Script,
+ TemplateString,
+ nonwhitespace_re
+)
+
+__all__ = [
+ 'HTMLTreeBuilder',
+ 'SAXTreeBuilder',
+ 'TreeBuilder',
+ 'TreeBuilderRegistry',
+ ]
+
+# Some useful features for a TreeBuilder to have.
+FAST = 'fast'
+PERMISSIVE = 'permissive'
+STRICT = 'strict'
+XML = 'xml'
+HTML = 'html'
+HTML_5 = 'html5'
+
+class XMLParsedAsHTMLWarning(UserWarning):
+ """The warning issued when an HTML parser is used to parse
+ XML that is not XHTML.
+ """
+ MESSAGE = """It looks like you're parsing an XML document using an HTML parser. If this really is an HTML document (maybe it's XHTML?), you can ignore or filter this warning. If it's XML, you should know that using an XML parser will be more reliable. To parse this document as XML, make sure you have the lxml package installed, and pass the keyword argument `features="xml"` into the BeautifulSoup constructor."""
+
+
+class TreeBuilderRegistry(object):
+ """A way of looking up TreeBuilder subclasses by their name or by desired
+ features.
+ """
+
+ def __init__(self):
+ self.builders_for_feature = defaultdict(list)
+ self.builders = []
+
+ def register(self, treebuilder_class):
+ """Register a treebuilder based on its advertised features.
+
+ :param treebuilder_class: A subclass of TreeBuilder. Its .features
+ attribute should list its features.
+ """
+ for feature in treebuilder_class.features:
+ self.builders_for_feature[feature].insert(0, treebuilder_class)
+ self.builders.insert(0, treebuilder_class)
+
+ def lookup(self, *features):
+ """Look up a TreeBuilder subclass with the desired features.
+
+ :param features: A list of features to look for. If none are
+ provided, the most recently registered TreeBuilder subclass
+ will be used.
+ :return: A TreeBuilder subclass, or None if there's no
+ registered subclass with all the requested features.
+ """
+ if len(self.builders) == 0:
+ # There are no builders at all.
+ return None
+
+ if len(features) == 0:
+ # They didn't ask for any features. Give them the most
+ # recently registered builder.
+ return self.builders[0]
+
+ # Go down the list of features in order, and eliminate any builders
+ # that don't match every feature.
+ features = list(features)
+ features.reverse()
+ candidates = None
+ candidate_set = None
+ while len(features) > 0:
+ feature = features.pop()
+ we_have_the_feature = self.builders_for_feature.get(feature, [])
+ if len(we_have_the_feature) > 0:
+ if candidates is None:
+ candidates = we_have_the_feature
+ candidate_set = set(candidates)
+ else:
+ # Eliminate any candidates that don't have this feature.
+ candidate_set = candidate_set.intersection(
+ set(we_have_the_feature))
+
+ # The only valid candidates are the ones in candidate_set.
+ # Go through the original list of candidates and pick the first one
+ # that's in candidate_set.
+ if candidate_set is None:
+ return None
+ for candidate in candidates:
+ if candidate in candidate_set:
+ return candidate
+ return None
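The lookup logic above can be exercised with a simplified registry. This sketch mirrors the algorithm (index builders per feature, intersect the candidate sets, prefer a recently registered survivor) but uses plain strings instead of TreeBuilder subclasses, and picks the survivor from the overall registration order rather than the first feature's list:

```python
from collections import defaultdict

class MiniRegistry:
    """Simplified sketch of TreeBuilderRegistry's feature lookup."""
    def __init__(self):
        self.builders_for_feature = defaultdict(list)
        self.builders = []

    def register(self, name, features):
        # Newest registrations go to the front, so they win ties.
        for feature in features:
            self.builders_for_feature[feature].insert(0, name)
        self.builders.insert(0, name)

    def lookup(self, *features):
        if not self.builders:
            return None
        if not features:
            # No features requested: most recently registered wins.
            return self.builders[0]
        candidate_set = None
        for feature in features:
            have = self.builders_for_feature.get(feature, [])
            if have:
                # Intersect with builders that have this feature too.
                if candidate_set is None:
                    candidate_set = set(have)
                else:
                    candidate_set &= set(have)
        if not candidate_set:
            return None
        # Prefer the most recently registered survivor.
        for candidate in self.builders:
            if candidate in candidate_set:
                return candidate
        return None

reg = MiniRegistry()
reg.register('lxml', ['fast', 'xml'])
reg.register('html5lib', ['html5', 'permissive'])
reg.lookup('xml')             # -> 'lxml'
reg.lookup('fast', 'html5')   # -> None (no builder has both)
```

As in the real registry, a feature no builder advertises is silently skipped rather than failing the whole lookup.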
+
+# The BeautifulSoup class will take feature lists from developers and use them
+# to look up builders in this registry.
+builder_registry = TreeBuilderRegistry()
+
+class TreeBuilder(object):
+ """Turn a textual document into a Beautiful Soup object tree."""
+
+ NAME = "[Unknown tree builder]"
+ ALTERNATE_NAMES = []
+ features = []
+
+ is_xml = False
+ picklable = False
+ empty_element_tags = None # A tag will be considered an empty-element
+ # tag when and only when it has no contents.
+
+ # A value for these tag/attribute combinations is a space- or
+ # comma-separated list of CDATA, rather than a single CDATA.
+ DEFAULT_CDATA_LIST_ATTRIBUTES = defaultdict(list)
+
+ # Whitespace should be preserved inside these tags.
+ DEFAULT_PRESERVE_WHITESPACE_TAGS = set()
+
+ # The textual contents of tags with these names should be
+ # instantiated with some class other than NavigableString.
+ DEFAULT_STRING_CONTAINERS = {}
+
+ USE_DEFAULT = object()
+
+ # Most parsers don't keep track of line numbers.
+ TRACKS_LINE_NUMBERS = False
+
+ def __init__(self, multi_valued_attributes=USE_DEFAULT,
+ preserve_whitespace_tags=USE_DEFAULT,
+ store_line_numbers=USE_DEFAULT,
+ string_containers=USE_DEFAULT,
+ ):
+ """Constructor.
+
+ :param multi_valued_attributes: If this is set to None, the
+ TreeBuilder will not turn any values for attributes like
+ 'class' into lists. Setting this to a dictionary will
+ customize this behavior; look at DEFAULT_CDATA_LIST_ATTRIBUTES
+ for an example.
+
+ Internally, these are called "CDATA list attributes", but that
+ probably doesn't make sense to an end-user, so the argument name
+ is `multi_valued_attributes`.
+
+ :param preserve_whitespace_tags: A list of tags to treat
+ the way <pre> tags are treated in HTML. Tags in this list
+ are immune from pretty-printing; their contents will always be
+ output as-is.
+
+ :param string_containers: A dictionary mapping tag names to
+ the classes that should be instantiated to contain the textual
+ contents of those tags. The default is to use NavigableString
+ for every tag, no matter what the name. You can override the
+ default by changing DEFAULT_STRING_CONTAINERS.
+
+ :param store_line_numbers: If the parser keeps track of the
+ line numbers and positions of the original markup, that
+ information will, by default, be stored in each corresponding
+ `Tag` object. You can turn this off by passing
+ store_line_numbers=False. If the parser you're using doesn't
+ keep track of this information, then setting store_line_numbers=True
+ will do nothing.
+ """
+ self.soup = None
+ if multi_valued_attributes is self.USE_DEFAULT:
+ multi_valued_attributes = self.DEFAULT_CDATA_LIST_ATTRIBUTES
+ self.cdata_list_attributes = multi_valued_attributes
+ if preserve_whitespace_tags is self.USE_DEFAULT:
+ preserve_whitespace_tags = self.DEFAULT_PRESERVE_WHITESPACE_TAGS
+ self.preserve_whitespace_tags = preserve_whitespace_tags
+ if store_line_numbers is self.USE_DEFAULT:
+ store_line_numbers = self.TRACKS_LINE_NUMBERS
+ self.store_line_numbers = store_line_numbers
+ if string_containers is self.USE_DEFAULT:
+ string_containers = self.DEFAULT_STRING_CONTAINERS
+ self.string_containers = string_containers
+
+ def initialize_soup(self, soup):
+ """The BeautifulSoup object has been initialized and is now
+ being associated with the TreeBuilder.
+
+ :param soup: A BeautifulSoup object.
+ """
+ self.soup = soup
+
+ def reset(self):
+ """Do any work necessary to reset the underlying parser
+ for a new document.
+
+ By default, this does nothing.
+ """
+ pass
+
+ def can_be_empty_element(self, tag_name):
+ """Might a tag with this name be an empty-element tag?
+
+ The final markup may or may not actually present this tag as
+ self-closing.
+
+ For instance: an HTMLBuilder does not consider a <p> tag to be
+ an empty-element tag (it's not in
+ HTMLBuilder.empty_element_tags). This means an empty <p> tag
+ will be presented as "<p></p>", not "<p/>" or "<p>".
+
+ The default implementation has no opinion about which tags are
+ empty-element tags, so a tag will be presented as an
+ empty-element tag if and only if it has no children.
+ "<foo></foo>" will become "<foo/>", and "<foo>bar</foo>" will
+ be left alone.
+
+ :param tag_name: The name of a markup tag.
+ """
+ if self.empty_element_tags is None:
+ return True
+ return tag_name in self.empty_element_tags
+
+ def feed(self, markup):
+ """Run some incoming markup through some parsing process,
+ populating the `BeautifulSoup` object in self.soup.
+
+ This method is not implemented in TreeBuilder; it must be
+ implemented in subclasses.
+
+ :return: None.
+ """
+ raise NotImplementedError()
+
+ def prepare_markup(self, markup, user_specified_encoding=None,
+ document_declared_encoding=None, exclude_encodings=None):
+ """Run any preliminary steps necessary to make incoming markup
+ acceptable to the parser.
+
+ :param markup: Some markup -- probably a bytestring.
+ :param user_specified_encoding: The user asked to try this encoding.
+ :param document_declared_encoding: The markup itself claims to be
+ in this encoding. NOTE: This argument is not used by the
+ calling code and can probably be removed.
+ :param exclude_encodings: The user asked _not_ to try any of
+ these encodings.
+
+ :yield: A series of 4-tuples:
+ (markup, encoding, declared encoding,
+ has undergone character replacement)
+
+ Each 4-tuple represents a strategy for converting the
+ document to Unicode and parsing it. Each strategy will be tried
+ in turn.
+
+ By default, the only strategy is to parse the markup
+ as-is. See `LXMLTreeBuilderForXML` and
+ `HTMLParserTreeBuilder` for implementations that take into
+ account the quirks of particular parsers.
+ """
+ yield markup, None, None, False
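`prepare_markup` is a generator of conversion strategies: the caller tries each yielded `(markup, encoding, declared_encoding, replaced)` tuple in turn until one parses. A minimal sketch of that consumer loop, with the default one-strategy generator mirrored and a `try_parse` callback standing in for an actual parser (both names are illustrative):

```python
def prepare_markup(markup):
    """Default strategy: hand the markup over unchanged (mirrors
    TreeBuilder.prepare_markup)."""
    yield markup, None, None, False

def parse_with_fallback(markup, try_parse):
    """Try each (markup, encoding, declared, replaced) strategy in
    turn, the way BeautifulSoup consumes prepare_markup()."""
    last_error = None
    for candidate, encoding, declared, replaced in prepare_markup(markup):
        try:
            return try_parse(candidate, encoding)
        except ValueError as exc:
            # This strategy failed; remember why and try the next one.
            last_error = exc
    raise last_error

result = parse_with_fallback('<p>hi</p>', lambda m, enc: m.upper())
# result == '<P>HI</P>'
```

Subclasses that override `prepare_markup` to yield several encoding guesses get retry behavior for free, because the consumer only moves on when a strategy raises.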
+
+ def test_fragment_to_document(self, fragment):
+ """Wrap an HTML fragment to make it look like a document.
+
+ Different parsers do this differently. For instance, lxml
+ introduces an empty <head> tag, and html5lib
+ doesn't. Abstracting this away lets us write simple tests
+ which run HTML fragments through the parser and compare the
+ results against other HTML fragments.
+
+ This method should not be used outside of tests.
+
+ :param fragment: A string -- fragment of HTML.
+ :return: A string -- a full HTML document.
+ """
+ return fragment
+
+ def set_up_substitutions(self, tag):
+ """Set up any substitutions that will need to be performed on
+ a `Tag` when it's output as a string.
+
+ By default, this does nothing. See `HTMLTreeBuilder` for a
+ case where this is used.
+
+ :param tag: A `Tag`
+ :return: Whether or not a substitution was performed.
+ """
+ return False
+
+ def _replace_cdata_list_attribute_values(self, tag_name, attrs):
+ """When an attribute value is associated with a tag that can
+ have multiple values for that attribute, convert the string
+ value to a list of strings.
+
+ Basically, replaces class="foo bar" with class=["foo", "bar"]
+
+ NOTE: This method modifies its input in place.
+
+ :param tag_name: The name of a tag.
+ :param attrs: A dictionary containing the tag's attributes.
+ Any appropriate attribute values will be modified in place.
+ """
+ if not attrs:
+ return attrs
+ if self.cdata_list_attributes:
+ universal = self.cdata_list_attributes.get('*', [])
+ tag_specific = self.cdata_list_attributes.get(
+ tag_name.lower(), None)
+ for attr in list(attrs.keys()):
+ if attr in universal or (tag_specific and attr in tag_specific):
+ # We have a "class"-type attribute whose string
+ # value is a whitespace-separated list of
+ # values. Split it into a list.
+ value = attrs[attr]
+ if isinstance(value, str):
+ values = nonwhitespace_re.findall(value)
+ else:
+ # html5lib sometimes calls setAttributes twice
+ # for the same tag when rearranging the parse
+ # tree. On the second call the attribute value
+ # here is already a list. If this happens,
+ # leave the value alone rather than trying to
+ # split it again.
+ values = value
+ attrs[attr] = values
+ return attrs
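The splitting done by `_replace_cdata_list_attribute_values` can be sketched on its own. Here `nonwhitespace_re` is reproduced as a plain `\S+` pattern (its actual definition lives in `bs4.element`), and the helper name is illustrative:

```python
import re

nonwhitespace_re = re.compile(r"\S+")  # mirrors bs4.element.nonwhitespace_re

def split_multi_valued(tag_name, attrs, cdata_list_attributes):
    """Split whitespace-separated attribute values into lists, the way
    TreeBuilder._replace_cdata_list_attribute_values does (in place)."""
    universal = cdata_list_attributes.get('*', [])
    tag_specific = cdata_list_attributes.get(tag_name.lower(), [])
    for attr, value in list(attrs.items()):
        if attr in universal or attr in tag_specific:
            # Only split strings; an already-split list is left alone.
            if isinstance(value, str):
                attrs[attr] = nonwhitespace_re.findall(value)
    return attrs

attrs = split_multi_valued('div', {'class': 'foo  bar', 'id': 'x'},
                           {'*': ['class']})
# attrs['class'] -> ['foo', 'bar']; 'id' is untouched
```

The `isinstance` guard matters for the html5lib case noted above: when the value arrives already split, re-splitting would mangle it.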
+
+class SAXTreeBuilder(TreeBuilder):
+ """A Beautiful Soup treebuilder that listens for SAX events.
+
+ This is not currently used for anything, but it demonstrates
+ how a simple TreeBuilder would work.
+ """
+
+ def feed(self, markup):
+ raise NotImplementedError()
+
+ def close(self):
+ pass
+
+ def startElement(self, name, attrs):
+ attrs = dict((key[1], value) for key, value in list(attrs.items()))
+ #print("Start %s, %r" % (name, attrs))
+ self.soup.handle_starttag(name, attrs)
+
+ def endElement(self, name):
+ #print("End %s" % name)
+ self.soup.handle_endtag(name)
+
+ def startElementNS(self, nsTuple, nodeName, attrs):
+ # Throw away (ns, nodeName) for now.
+ self.startElement(nodeName, attrs)
+
+ def endElementNS(self, nsTuple, nodeName):
+ # Throw away (ns, nodeName) for now.
+ self.endElement(nodeName)
+ #handler.endElementNS((ns, node.nodeName), node.nodeName)
+
+ def startPrefixMapping(self, prefix, nodeValue):
+ # Ignore the prefix for now.
+ pass
+
+ def endPrefixMapping(self, prefix):
+ # Ignore the prefix for now.
+ # handler.endPrefixMapping(prefix)
+ pass
+
+ def characters(self, content):
+ self.soup.handle_data(content)
+
+ def startDocument(self):
+ pass
+
+ def endDocument(self):
+ pass
+
+
+class HTMLTreeBuilder(TreeBuilder):
+ """This TreeBuilder knows facts about HTML.
+
+ Such as which tags are empty-element tags.
+ """
+
+ empty_element_tags = set([
+ # These are from HTML5.
+ 'area', 'base', 'br', 'col', 'embed', 'hr', 'img', 'input', 'keygen', 'link', 'menuitem', 'meta', 'param', 'source', 'track', 'wbr',
+
+ # These are from earlier versions of HTML and are removed in HTML5.
+ 'basefont', 'bgsound', 'command', 'frame', 'image', 'isindex', 'nextid', 'spacer'
+ ])
+
+ # The HTML standard defines these as block-level elements. Beautiful
+ # Soup does not treat these elements differently from other elements,
+ # but it may do so eventually, and this information is available if
+ # you need to use it.
+ block_elements = set(["address", "article", "aside", "blockquote", "canvas", "dd", "div", "dl", "dt", "fieldset", "figcaption", "figure", "footer", "form", "h1", "h2", "h3", "h4", "h5", "h6", "header", "hr", "li", "main", "nav", "noscript", "ol", "output", "p", "pre", "section", "table", "tfoot", "ul", "video"])
+
+ # These HTML tags need special treatment so they can be
+ # represented by a string class other than NavigableString.
+ #
+ # For some of these tags, it's because the HTML standard defines
+ # an unusual content model for them. I made this list by going
+ # through the HTML spec
+ # (https://html.spec.whatwg.org/#metadata-content) and looking for
+ # "metadata content" elements that can contain strings.
+ #
+ # The Ruby tags (