diff --git a/docs/api/geo_feature_aggregation.md b/docs/api/geo_feature_aggregation.md new file mode 100644 index 0000000..2fa7b7e --- /dev/null +++ b/docs/api/geo_feature_aggregation.md @@ -0,0 +1 @@ +:::hsclient.hydroshare.GeoFeatureAggregation diff --git a/docs/api/geo_raster_aggregation.md b/docs/api/geo_raster_aggregation.md new file mode 100644 index 0000000..2916e39 --- /dev/null +++ b/docs/api/geo_raster_aggregation.md @@ -0,0 +1 @@ +:::hsclient.hydroshare.GeoRasterAggregation diff --git a/docs/api/netcdf_aggregation.md b/docs/api/netcdf_aggregation.md new file mode 100644 index 0000000..dad057b --- /dev/null +++ b/docs/api/netcdf_aggregation.md @@ -0,0 +1 @@ +:::hsclient.hydroshare.NetCDFAggregation diff --git a/docs/api/time_series_aggregation.md b/docs/api/time_series_aggregation.md new file mode 100644 index 0000000..da0dec1 --- /dev/null +++ b/docs/api/time_series_aggregation.md @@ -0,0 +1 @@ +:::hsclient.hydroshare.TimeseriesAggregation diff --git a/docs/examples/Aggregation_Data_Object_Operations.ipynb b/docs/examples/Aggregation_Data_Object_Operations.ipynb index a1f2111..4b3dc05 100644 --- a/docs/examples/Aggregation_Data_Object_Operations.ipynb +++ b/docs/examples/Aggregation_Data_Object_Operations.ipynb @@ -10,6 +10,7 @@ "\n", "\n", "The following code snippets show examples for how to use the hsclient HydroShare Python Client to load certain aggregation data types to relevant data processing objects to view data properties as well as be able to modify the data. The aggregation data object feature is available for the following HydroShare's content type aggregations:\n", + "\n", " * Time series\n", " * Geographic feature\n", " * Geographic raster\n", @@ -24,7 +25,12 @@ "source": [ "## Install the hsclient Python Client\n", "\n", - "The hsclient Python Client for HydroShare may not be installed by default in your Python environment, so it has to be installed first before you can work with it. 
Use the following command to install hsclient via the Python Package Index (PyPi)." + "The hsclient Python Client for HydroShare may not be installed by default in your Python environment, so it has to be installed first before you can work with it. Use the following command to install hsclient via the Python Package Index (PyPI). This will install hsclient as well as all of the Python packages needed to work with aggregation data as data processing objects. The following packages will be installed in addition to hsclient:\n", "\n", "* pandas\n", "* fiona\n", "* rasterio\n", "* xarray" ], "metadata": { "collapsed": false diff --git a/docs/examples/Aggregation_Operations.ipynb index d4629f9..d27f5a1 100644 --- a/docs/examples/Aggregation_Operations.ipynb +++ b/docs/examples/Aggregation_Operations.ipynb @@ -2,9 +2,6 @@ "cells": [ { "cell_type": "markdown", - "metadata": { - "id": "HHsuQMMJyms4" - }, "source": [ "# hsclient HydroShare Python Client Resource Aggregation Operation Examples\n", "\n", "\n", "The following code snippets show examples for how to use the hsclient HydroShare Python Client to manipulate aggregations of known content types in HydroShare. HydroShare's content type aggregations include individual file, fileset, time series, geographic feature, geographic raster, and multidimensional NetCDF." - ] + ], + "metadata": { + "collapsed": false + } }, { "cell_type": "markdown", - "metadata": { - "id": "b_Tj5gJx0fRj" - }, "source": [ "## Install the hsclient Python Client\n", "\n", "The hsclient Python Client for HydroShare may not be installed by default in your Python environment, so it has to be installed first before you can work with it. Use the following command to install hsclient via the Python Package Index (PyPi)."
- ] + ], + "metadata": { + "collapsed": false + } }, { "cell_type": "code", @@ -39,14 +39,14 @@ }, { "cell_type": "markdown", - "metadata": { - "id": "CZNOazcn9-23" - }, "source": [ "## Authenticating with HydroShare\n", "\n", "Before you start interacting with resources in HydroShare you will need to authenticate." - ] + ], + "metadata": { + "collapsed": false + } }, { "cell_type": "code", @@ -64,16 +64,16 @@ }, { "cell_type": "markdown", - "metadata": { - "id": "TH3UUihSojIb" - }, "source": [ "## Create a New Empty Resource\n", "\n", "A \"resource\" is a container for your content in HydroShare. Think of it as a \"working directory\" into which you are going to organize the code and/or data you are using and want to share. The following code can be used to create a new, empty resource within which you can create content and metadata.\n", "\n", "This code creates a new resource in HydroShare. It also creates an in-memory object representation of that resource in your local environment that you can then manipulate with further code." - ] + ], + "metadata": { + "collapsed": false + } }, { "cell_type": "code", @@ -96,9 +96,6 @@ }, { "cell_type": "markdown", - "metadata": { - "id": "rcrEJDQkOtI8" - }, "source": [ "## Resource Aggregation Handling\n", "\n", @@ -112,18 +109,21 @@ "* File set\n", "\n", "The general process for creating an aggregation within a resource requires adding files to the resource and then applying the appropriate aggregation type. For some of the aggregation types, some of the aggregation metadata fields will be automatically extracted from the files you upload. You can then set the values of the other aggregation-level metadata elements. " - ] + ], + "metadata": { + "collapsed": false + } }, { "cell_type": "markdown", - "metadata": { - "id": "7yUSEF_tOySg" - }, "source": [ "## Create a Single File Aggregation\n", "\n", "A single file aggregation in a HydroShare is any individual file to which you want to add extra metadata. 
" - ] + ], + "metadata": { + "collapsed": false + } }, { "cell_type": "code", @@ -153,23 +153,27 @@ }, { "cell_type": "markdown", - "metadata": {}, "source": [ "### Add Metadata to the Aggregation\n", "\n", "Once you have created an aggregation, you can edit and add metadata elements. For a single file aggregation, you can add a title, subject keywords, extended metadata as key-value pairs, and spatial and temporal coverage. \n", "\n", "All of the metadata edits are stored locally until you call the `save()` function on the aggregation to write the edits you have made to HydroShare." - ] + ], + "metadata": { + "collapsed": false + } }, { "cell_type": "markdown", - "metadata": {}, "source": [ "#### Title and Keywords\n", "\n", "The title of an aggregation is a string. Subject keywords are handled as a list of strings." - ] + ], + "metadata": { + "collapsed": false + } }, { "cell_type": "code", @@ -191,12 +195,14 @@ }, { "cell_type": "markdown", - "metadata": {}, "source": [ "#### Extended Metadata Elements\n", "\n", "Extended metadata elements for an aggregation are handled using a Python dictionary. You can add new elements using key-value pairs." - ] + ], + "metadata": { + "collapsed": false + } }, { "cell_type": "code", @@ -230,12 +236,14 @@ }, { "cell_type": "markdown", - "metadata": {}, "source": [ "#### Spatial and Temporal Coverage\n", "\n", "Spatial and temporal coverage for an aggregation are handled in the same way they are handled for resource level metadata. Initially the spatial and temporal coverage for an aggregation are empty. To set them, you have to create a coverage object of the right type and set the spatial or temporal coverage to that object." 
- ] + ], + "metadata": { + "collapsed": false + } }, { "cell_type": "code", @@ -289,19 +297,23 @@ }, { "cell_type": "markdown", - "metadata": {}, "source": [ "## Creating Other Aggregation Types" - ] + ], + "metadata": { + "collapsed": false + } }, { "cell_type": "markdown", - "metadata": {}, "source": [ "### Geographic Feature Aggregation\n", "\n", "Geographic feature aggregations are created in HydroShare from the set of files that make up an ESRI Shapefile. You need to upload the shapefile and then HydroShare will automatically set the aggregation on the set of files you upload. You can then retrieve the aggregation using its title or by searching for one of the files it contains." - ] + ], + "metadata": { + "collapsed": false + } }, { "cell_type": "code", @@ -323,12 +335,14 @@ }, { "cell_type": "markdown", - "metadata": {}, "source": [ "If you upload all of the files of a shapefile together as shown above, HydroShare automatically recognizes the files as a shapefile and auto-aggregates the files into a geographic feature aggregation for you. So, you then just need to get the aggregation that was created if you want to further operate on it - e.g., to modify the aggregation-level metadata.\n", "\n", "Metadata for a geographic feature aggregation includes a title, subject keywords, extended key-value pairs, temporal coverage, spatial coverage, geometry information, spatial reference, and attribute field information. When HydroShare creates the aggregation on the shapefile, the spatial coverage, geometry information, spatial reference, and attribute field information metadata will be automatically set for you. You can then set all of the other metadata elements as shown above for the single file aggregation if you need to." 
- ] + ], + "metadata": { + "collapsed": false + } }, { "cell_type": "code", @@ -352,14 +366,16 @@ }, { "cell_type": "markdown", - "metadata": {}, "source": [ "### Geographic Raster Aggregation\n", "\n", "Geographic raster aggregations are created in HydroShare from one or more raster data files that make up a raster dataset. HydroShare uses GeoTiff files for raster datasets. Like the geographic feature aggregation, when you upload all of the files for a geographic raster dataset (all .tif and a .vrt file) at once, HydroShare will automatically create the aggregation for you. You can then get the aggregation and set the other metadata elements as shown above for the single file aggregation.\n", "\n", "HydroShare initially sets the title of the geographic raster aggregation to the first .tif file that appears in the .vrt file. The spatial coverage, spatial reference, and cell information are set automatically based on information extracted from the dataset. " - ] + ], + "metadata": { + "collapsed": false + } }, { "cell_type": "code", @@ -382,12 +398,14 @@ }, { "cell_type": "markdown", - "metadata": {}, "source": [ "### Multidimensional NetCDF Aggregation\n", "\n", "Multidimensional aggregations are created in HydroShare from a NetCDF file. Like the other aggregation types, you can upload the NetCDF file and HydroShare will automatically create the aggregation for you. HydroShare also automatically extracts metadata from the NetCDF file to populate the aggregation metadata. Some of this metadata may get propagated to the resource level if you haven't set things like the title and keywords. You can then get the aggregation and set the other metadata elements as shown above for the single file aggregation." 
- ] + ], + "metadata": { + "collapsed": false + } }, { "cell_type": "code", @@ -408,12 +426,14 @@ }, { "cell_type": "markdown", - "metadata": {}, "source": [ "### Time Series Aggregation\n", "\n", "Time series aggregations are created in HydroShare from an ODM2 SQLite database file. The ODM2 SQLite database contains one or more time series." - ] + ], + "metadata": { + "collapsed": false + } }, { "cell_type": "code", @@ -434,12 +454,14 @@ }, { "cell_type": "markdown", - "metadata": {}, "source": [ "### File Set Aggregation\n", "\n", "A file set aggregation is any folder within a resource to which you want to add metadata. If you want to create a file set aggregation, you first have to create a folder, then upload files to it. After that, you can set the aggregation on the folder." - ] + ], + "metadata": { + "collapsed": false + } }, { "cell_type": "code", @@ -459,12 +481,14 @@ }, { "cell_type": "markdown", - "metadata": {}, "source": [ "## Get Aggregation Properties\n", "\n", "Each aggregation in a resource has metadata properties associated with it. You can query/retrieve those properties for display. The following shows an example for the time series aggregation that was created above."
- ] + ], + "metadata": { + "collapsed": false + } }, { "cell_type": "code", @@ -524,8 +548,8 @@ "# Get a list of aggregations searching by a nested metadata attribute (__)\n", "aggregations = new_resource.aggregations(period_coverage__name=\"period_coverage name\")\n", " \n", - "# Get a list of aggregations by combining field searching, filtered with “AND”\n", - "aggrregations = new_resource.aggregations(period_coverage__name=\"period_coverage name\", title=\"watersheds\")" + "# Get a list of aggregations by combining field searching, filtered with \"AND\"\n", + "aggregations = new_resource.aggregations(period_coverage__name=\"period_coverage name\", title=\"watersheds\")" ] }, { diff --git a/hsclient/hydroshare.py b/hsclient/hydroshare.py index 15335b6..9ec007d 100644 --- a/hsclient/hydroshare.py +++ b/hsclient/hydroshare.py @@ -393,7 +393,7 @@ class DataObjectSupportingAggregation(Aggregation): @staticmethod def create(aggr_cls, base_aggr): - """creates a type specific aggregation object from an instance of Aggregation""" + """Creates a type specific aggregation object from an instance of Aggregation""" aggr = aggr_cls(base_aggr._map_path, base_aggr._hs_session, base_aggr._parsed_checksums) aggr._retrieved_map = base_aggr._retrieved_map aggr._retrieved_metadata = base_aggr._retrieved_metadata @@ -410,6 +410,8 @@ def refresh(self) -> None: @property def data_object(self) -> \ Union['pandas.DataFrame', 'fiona.Collection', 'rasterio.DatasetReader', 'xarray.Dataset', None]: + """Returns the data object for the aggregation if the aggregation has been loaded as + a data object, otherwise None""" return self._data_object def _get_file_path(self, agg_path): @@ -478,18 +480,32 @@ def _update_aggregation(self, resource, *files): class NetCDFAggregation(DataObjectSupportingAggregation): + """Represents a Multidimensional Aggregation in HydroShare""" @classmethod def create(cls, base_aggr): return super().create(aggr_cls=cls, base_aggr=base_aggr) def as_data_object(self, 
agg_path: str) -> 'xarray.Dataset': + """ + Loads the Multidimensional aggregation to an xarray Dataset object + :param agg_path: the path to the Multidimensional aggregation + :return: the Multidimensional aggregation as an xarray Dataset object + """ if xarray is None: raise Exception("xarray package was not found") return self._get_data_object(agg_path=agg_path, func=xarray.open_dataset) def save_data_object(self, resource: 'Resource', agg_path: str, as_new_aggr: bool = False, destination_path: str = "") -> 'Aggregation': + """ + Saves the xarray Dataset object to the Multidimensional aggregation + :param resource: the resource containing the aggregation + :param agg_path: the path to the Multidimensional aggregation + :param as_new_aggr: Defaults to False; set to True to create a new Multidimensional aggregation + :param destination_path: the destination path in HydroShare to save the new aggregation + :return: the updated or new Multidimensional aggregation + """ self._validate_aggregation_for_update(resource, AggregationType.MultidimensionalAggregation) file_path = self._validate_aggregation_path(agg_path, for_save_data=True) @@ -526,12 +542,18 @@ def save_data_object(self, resource: 'Resource', agg_path: str, as_new_aggr: boo class TimeseriesAggregation(DataObjectSupportingAggregation): - + """Represents a Time Series Aggregation in HydroShare""" @classmethod def create(cls, base_aggr): return super().create(aggr_cls=cls, base_aggr=base_aggr) def as_data_object(self, agg_path: str, series_id: str = "") -> 'pandas.DataFrame': + """ + Loads the Time Series aggregation to a pandas DataFrame object + :param agg_path: the path to the Time Series aggregation + :param series_id: the series id of the time series to retrieve + :return: the Time Series aggregation as a pandas DataFrame object + """ if pandas is None: raise Exception("pandas package was not found") @@ -548,6 +570,14 @@ def to_series(timeseries_file: str): def save_data_object(self, resource: 'Resource', 
agg_path: str, as_new_aggr: bool = False, destination_path: str = "") -> 'Aggregation': + """ + Saves the pandas DataFrame object to the Time Series aggregation + :param resource: the resource containing the aggregation + :param agg_path: the path to the Time Series aggregation + :param as_new_aggr: Defaults to False; set to True to create a new Time Series aggregation + :param destination_path: the destination path in HydroShare to save the new aggregation + :return: the updated or new Time Series aggregation + """ self._validate_aggregation_for_update(resource, AggregationType.TimeSeriesAggregation) file_path = self._validate_aggregation_path(agg_path, for_save_data=True) with closing(sqlite3.connect(file_path)) as conn: @@ -597,7 +627,7 @@ def save_data_object(self, resource: 'Resource', agg_path: str, as_new_aggr: boo class GeoFeatureAggregation(DataObjectSupportingAggregation): - + """Represents a Geo Feature Aggregation in HydroShare""" @classmethod def create(cls, base_aggr): return super().create(aggr_cls=cls, base_aggr=base_aggr) @@ -616,12 +646,25 @@ def _validate_aggregation_path(self, agg_path: str, for_save_data: bool = False) return file_path def as_data_object(self, agg_path: str) -> 'fiona.Collection': + """ + Loads the Geo Feature aggregation to a fiona Collection object + :param agg_path: the path to the Geo Feature aggregation + :return: the Geo Feature aggregation as a fiona Collection object + """ if fiona is None: raise Exception("fiona package was not found") return self._get_data_object(agg_path=agg_path, func=fiona.open) def save_data_object(self, resource: 'Resource', agg_path: str, as_new_aggr: bool = False, destination_path: str = "") -> 'Aggregation': + """ + Saves the fiona Collection object to the Geo Feature aggregation + :param resource: the resource containing the aggregation + :param agg_path: the path to the Geo Feature aggregation + :param as_new_aggr: Defaults to False; set to True to create a new Geo Feature aggregation + :param 
destination_path: the destination path in HydroShare to save the new aggregation + :return: the updated or new Geo Feature aggregation + """ def upload_shape_files(main_file_path, dst_path=""): shp_file_dir_path = os.path.dirname(main_file_path) filename_starts_with = f"{pathlib.Path(main_file_path).stem}." @@ -685,7 +728,7 @@ def upload_shape_files(main_file_path, dst_path=""): class GeoRasterAggregation(DataObjectSupportingAggregation): - + """Represents a Geo Raster Aggregation in HydroShare""" @classmethod def create(cls, base_aggr): return super().create(aggr_cls=cls, base_aggr=base_aggr) @@ -741,12 +784,25 @@ def _validate_aggregation_path(self, agg_path: str, for_save_data: bool = False) return file_path def as_data_object(self, agg_path: str) -> 'rasterio.DatasetReader': + """ + Loads the Geo Raster aggregation to a rasterio DatasetReader object + :param agg_path: the path to the Geo Raster aggregation + :return: the Geo Raster aggregation as a rasterio DatasetReader object + """ if rasterio is None: raise Exception("rasterio package was not found") return self._get_data_object(agg_path=agg_path, func=rasterio.open) def save_data_object(self, resource: 'Resource', agg_path: str, as_new_aggr: bool = False, destination_path: str = "") -> 'Aggregation': + """ + Saves the rasterio DatasetReader object to the Geo Raster aggregation + :param resource: the resource containing the aggregation + :param agg_path: the path to the Geo Raster aggregation + :param as_new_aggr: Defaults to False; set to True to create a new Geo Raster aggregation + :param destination_path: the destination path in HydroShare to save the new aggregation + :return: the updated or new Geo Raster aggregation + """ def upload_raster_files(dst_path=""): raster_files = [] for item in os.listdir(agg_path): @@ -808,7 +864,7 @@ class Resource(Aggregation): @property def _hsapi_path(self): - path = urlparse(self.metadata.identifier).path + path = urlparse(str(self.metadata.identifier)).path return 
'/hsapi' + path def _upload(self, file, destination_path): @@ -1308,9 +1364,9 @@ def _validate_oauth2_token(token: Union[Token, Dict[str, str]]) -> dict: of OAuth2 token dropping optional fields that are None.""" if isinstance(token, dict) or isinstance(token, Token): # try to coerce into Token model - o = Token.parse_obj(token) + o = Token.model_validate(token) # drop None fields from output - return o.dict(exclude_none=True) + return o.model_dump(exclude_none=True) else: error_message = "token must be hsclient.Token or dictionary following schema:\n" "{}".format( pformat(Token.__annotations__, sort_dicts=False) diff --git a/hsclient/json_models.py b/hsclient/json_models.py index d8454bd..c3d089a 100644 --- a/hsclient/json_models.py +++ b/hsclient/json_models.py @@ -2,7 +2,7 @@ from typing import Dict, List, Tuple from hsmodels.schemas.enums import UserIdentifierType -from pydantic import AnyUrl, BaseModel, validator +from pydantic import AnyUrl, BaseModel, HttpUrl, field_validator class User(BaseModel): @@ -12,13 +12,13 @@ class User(BaseModel): phone: str = None address: str = None organization: str = None - website: str = None - identifiers: Dict[UserIdentifierType, str] = {} + website: HttpUrl = None + identifiers: Dict[UserIdentifierType, AnyUrl] = {} type: str = None subject_areas: List[str] = [] date_joined: datetime = None - @validator("subject_areas", pre=True) + @field_validator("subject_areas", mode='before') def split_subject_areas(cls, value): return value.split(", ") if value else [] @@ -43,7 +43,7 @@ class ResourcePreview(BaseModel): resource_map_url: str = None resource_metadata_url: str = None - @validator("authors", pre=True) + @field_validator("authors", mode='before') def handle_null_author(cls, v): # return empty list when supplied authors field is None. 
if v is None: diff --git a/mkdocs.yml b/mkdocs.yml index 6d9cacd..79cea3d 100644 --- a/mkdocs.yml +++ b/mkdocs.yml @@ -42,4 +42,7 @@ nav: Resource: api/resource.md File: api/file.md Aggregation: api/aggregation.md - + Multidimensional Aggregation: api/netcdf_aggregation.md + Geographic Feature Aggregation: api/geo_feature_aggregation.md + Geographic Raster Aggregation: api/geo_raster_aggregation.md + Time Series Aggregation: api/time_series_aggregation.md diff --git a/requirements.txt b/requirements.txt index 375d7b8..1e36a6b 100644 --- a/requirements.txt +++ b/requirements.txt @@ -1,4 +1,4 @@ -hsmodels >= 0.5.5 +hsmodels >= 1.0.0 pytest == 6.0.2 requests == 2.24.0 email-validator diff --git a/setup.py b/setup.py index 8338a40..578e7b2 100644 --- a/setup.py +++ b/setup.py @@ -5,11 +5,11 @@ setup( name='hsclient', - version='0.3.4', + version='1.0.0', packages=find_packages(include=['hsclient', 'hsclient.*'], exclude=("tests",)), install_requires=[ - 'hsmodels>=0.5.5', + 'hsmodels>=1.0.0', 'requests', 'requests_oauthlib', ], diff --git a/tests/data/user.json b/tests/data/user.json index cee3adc..ad4afa2 100644 --- a/tests/data/user.json +++ b/tests/data/user.json @@ -5,7 +5,7 @@ "phone": "3399334127", "address": "MA, US", "organization": "CUAHSI", - "website": "http://anthonycastronova.com", + "website": "http://anthonycastronova.com/", "identifiers": { "ORCID": "https://orcid.org/0000-0002-1341-5681", "ResearchGateID": "https://www.researchgate.net/profile/Anthony_Castronova", diff --git a/tests/test_functional.py b/tests/test_functional.py index 3013c2e..39e4d24 100644 --- a/tests/test_functional.py +++ b/tests/test_functional.py @@ -125,13 +125,21 @@ def test_filtering_files(resource): def test_creator_order(new_resource): res = new_resource # hydroshare.resource("1248abc1afc6454199e65c8f642b99a0") + assert len(res.metadata.creators) == 1 res.metadata.creators.append(Creator(name="Testing")) res.save() + assert len(res.metadata.creators) == 2 + for cr in 
res.metadata.creators: + assert cr.creator_order in (1, 2) + assert res.metadata.creators[0].creator_order != res.metadata.creators[1].creator_order assert res.metadata.creators[1].name == "Testing" + assert res.metadata.creators[1].creator_order == 2 reversed = [res.metadata.creators[1], res.metadata.creators[0]] res.metadata.creators = reversed res.save() - assert res.metadata.creators[0].name == "Testing" + # check creator_order does not change + assert res.metadata.creators[1].name == "Testing" + assert res.metadata.creators[1].creator_order == 2 def test_resource_metadata_updating(new_resource): diff --git a/tests/test_json_models.py b/tests/test_json_models.py index 65dc62b..c4855ec 100644 --- a/tests/test_json_models.py +++ b/tests/test_json_models.py @@ -3,6 +3,7 @@ import pytest from dateutil import parser +from hsmodels.schemas.enums import UserIdentifierType from hsmodels.schemas.fields import Contributor, Creator from hsclient.json_models import ResourcePreview, User @@ -47,7 +48,7 @@ def test_resource_preview_authors_field_handles_none_cases(test_data): [None, ""] """ - from_json = ResourcePreview.parse_raw(test_data) + from_json = ResourcePreview.model_validate_json(test_data) assert from_json.authors == [] @@ -59,22 +60,21 @@ def test_resource_preview_authors_raises_validation_error_on_string_input(): data = json.dumps({"authors": "should_fail"}) with pytest.raises(ValidationError): - ResourcePreview.parse_raw(data) + ResourcePreview.model_validate_json(data) def test_user_info(user): assert user.name == "Castronova, Anthony M." 
assert user.email == "castronova.anthony@gmail.com" - assert user.url == "http://beta.hydroshare.org/user/11/" + assert str(user.url) == "http://beta.hydroshare.org/user/11/" assert user.phone == "3399334127" assert user.address == "MA, US" assert user.organization == "CUAHSI" - assert user.website == "http://anthonycastronova.com" - assert user.identifiers == { - "ORCID": "https://orcid.org/0000-0002-1341-5681", - "ResearchGateID": "https://www.researchgate.net/profile/Anthony_Castronova", - "GoogleScholarID": "https://scholar.google.com/citations?user=ScWTFoQAAAAJ&hl=en", - } + assert str(user.website) == "http://anthonycastronova.com/" + assert str(user.identifiers[UserIdentifierType.ORCID]) == "https://orcid.org/0000-0002-1341-5681" + assert str(user.identifiers[UserIdentifierType.research_gate_id]) == "https://www.researchgate.net/profile/Anthony_Castronova" + assert str(user.identifiers[UserIdentifierType.google_scholar_id]) == "https://scholar.google.com/citations?user=ScWTFoQAAAAJ&hl=en" + assert user.type == "Commercial/Professional" assert user.date_joined == parser.parse("2015-06-03T16:09:31.636Z") assert user.subject_areas == [ @@ -93,7 +93,7 @@ def test_user_info(user): assert creator.email == user.email assert creator.homepage == user.website assert creator.identifiers == user.identifiers - assert creator.hydroshare_user_id == int(user.url.split("/")[-2]) + assert creator.hydroshare_user_id == int(user.url.path.split("/")[-2]) contributor = Contributor.from_user(user) assert contributor.name == user.name @@ -103,4 +103,4 @@ def test_user_info(user): assert contributor.email == user.email assert contributor.homepage == user.website assert contributor.identifiers == user.identifiers - assert contributor.hydroshare_user_id == int(user.url.split("/")[-2]) + assert contributor.hydroshare_user_id == int(user.url.path.split("/")[-2])
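Reviewer note: the notebook changes above document the Django-style double-underscore filter syntax for `Resource.aggregations()` (e.g. `period_coverage__name=...` combined with `title=...` as an AND filter). The snippet below is an illustrative, self-contained sketch of how that kind of nested-attribute filtering can be resolved in plain Python; `filter_by` and the `SimpleNamespace` stand-ins are hypothetical and are not hsclient's actual implementation.

```python
from types import SimpleNamespace

def filter_by(items, **kwargs):
    """Resolve double-underscore keyword filters (e.g. period_coverage__name)
    against nested attributes, AND-ing multiple filters together."""
    def matches(item):
        for key, expected in kwargs.items():
            value = item
            # Walk the attribute chain: "period_coverage__name" ->
            # item.period_coverage.name
            for part in key.split("__"):
                value = getattr(value, part, None)
                if value is None:
                    break
            if value != expected:
                return False
        return True
    return [item for item in items if matches(item)]

# Hypothetical stand-ins for aggregations and their metadata
watersheds = SimpleNamespace(
    title="watersheds",
    period_coverage=SimpleNamespace(name="period_coverage name"),
)
rainfall = SimpleNamespace(
    title="rainfall",
    period_coverage=SimpleNamespace(name="another name"),
)

# AND of a nested filter and a plain field filter
hits = filter_by([watersheds, rainfall],
                 period_coverage__name="period_coverage name",
                 title="watersheds")
print([a.title for a in hits])  # ['watersheds']
```

The same shape of helper explains why combining several keyword arguments narrows the result set: every filter must match for an item to be kept.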