Commit 06c7187

snazy, renovate-bot, flyrain, exceptionfactory, HonahX authored
Dremio merge 2025 05 21 09 39 (apache#72)
* main: Update docker.io/prom/prometheus Docker tag to v3.4.0 (apache#1602)

* Site: Update production configuration page (apache#1606)

* main: Update dependency com.google.cloud:google-cloud-storage-bom to v2.52.3 (apache#1623)

* main: Update dependency boto3 to v1.38.19 (apache#1622)

* Remove Bouncy Castle dependency usage from PemUtils (apache#1318)

  - Added PEM format parsing in PemUtils
  - Added unit test for PemUtils for empty file and multiple PEM objects
  - Removed Bouncy Castle Provider dependency from service common module
  - Removed Bouncy Castle Provider dependency from quarkus service module

* [Policy Store | Management Spec] Add policy privileges to spec and update admin service impl (apache#1529)

  This PR adds new policy-related privileges to polaris-management-api.yml and updates PolarisAdminService to allow granting the new privileges.

* Site: Add a page for policy management (apache#1600)

* Spec: Add SigV4 Auth Support for Catalog Federation (apache#1506)

  * Spec changes for SigV4 Auth Support for Catalog Federation
  * Extract service identity info as a nested object

* nit: fix admin tool log level and comments (apache#1626)

  The previous WARNING log level seems to work, but WARN aligns better with standard Quarkus log levels.

  Fixes apache#1612

* Doc: switch to use iceberg-aws-bundle jar (apache#1609)

* main: Update dependency org.mockito:mockito-core to v5.18.0 (apache#1630)

* main: Update dependency boto3 to v1.38.20 (apache#1631)

* Require explicit user-consent to enable HadoopFileIO (apache#1532)

  Using `HadoopFileIO` in Polaris can enable "hidden features" that users are likely not aware of. This change requires users to manually update the configuration to be able to use `HadoopFileIO`, in a way that highlights the consequences of enabling it.

  This PR updates Polaris in multiple ways:
  * The default of `SUPPORTED_CATALOG_STORAGE_TYPES` is changed to not include the `FILE` storage type.
  * Respect the `ALLOW_SPECIFYING_FILE_IO_IMPL` configuration on namespaces, tables and views to prevent setting an `io-impl` value for anything but one of the configured, supported storage types.
  * Unify validation code in a new class `IcebergPropertiesValidation`.
  * Using `FILE` or `HadoopFileIO` now _also_ requires the explicit configuration `ALLOW_INSECURE_STORAGE_TYPES_ACCEPTING_SECURITY_RISKS=true`.
  * Added production readiness checks that trigger when `ALLOW_INSECURE_STORAGE_TYPES_ACCEPTING_SECURITY_RISKS` is `true` or `SUPPORTED_CATALOG_STORAGE_TYPES` contains `FILE` (defaults and per-realm).
  * The two new readiness checks are considered _severe_. Severe readiness errors prevent the server from starting up, unless the user explicitly configured `polaris.readiness.ignore-security-issues=true`.

  Log messages and configuration options explicitly use "clear" phrases highlighting the consequences. With these changes it is intentionally extremely difficult to start Polaris with HadoopFileIO. People who work around all these safety nets must have realized what they are doing.

  A lot of the test code relies on `FILE`/`HadoopFileIO`; those tests got all the configurations needed to continue working as they are, bypassing the added security safeguards.

  ---------

  Co-authored-by: Dmitri Bourlatchkov <[email protected]>

---------

Co-authored-by: Mend Renovate <[email protected]>
Co-authored-by: Yufei Gu <[email protected]>
Co-authored-by: David Handermann <[email protected]>
Co-authored-by: Honah (Jonas) J. <[email protected]>
Co-authored-by: Rulin Xing <[email protected]>
Co-authored-by: Dmitri Bourlatchkov <[email protected]>
Co-authored-by: MonkeyCanCode <[email protected]>
1 parent 70592b8 commit 06c7187
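The `HadoopFileIO` change described in the commit message above turns several feature flags into a deliberate, multi-step opt-in. As a rough illustration, the sketch below collects the flags named in the commit message into one place. This is a minimal sketch, not the project's actual test fixture or configuration API: only the flag names come from the commit message; the Java `Map` form and the Quarkus-style `polaris.features.defaults."…"` key format are assumptions for illustration.

```java
import java.util.Map;

// A minimal sketch, assuming Quarkus-style feature-flag keys; only the flag
// names come from the commit message, the key format is a guess.
public class InsecureFileIoOptIn {
  // Every entry must be set deliberately; omitting any of them keeps the new
  // defaults, which refuse FILE/HadoopFileIO outright.
  static final Map<String, String> SERVER_OVERRIDES =
      Map.of(
          // FILE is no longer in the default set of supported storage types.
          "polaris.features.defaults.\"SUPPORTED_CATALOG_STORAGE_TYPES\"", "[\"FILE\"]",
          // Required before an io-impl outside the supported storage types is accepted.
          "polaris.features.defaults.\"ALLOW_SPECIFYING_FILE_IO_IMPL\"", "true",
          // The explicit "I accept the risk" switch for FILE/HadoopFileIO.
          "polaris.features.defaults.\"ALLOW_INSECURE_STORAGE_TYPES_ACCEPTING_SECURITY_RISKS\"",
          "true",
          // Severe readiness errors would otherwise prevent server startup.
          "polaris.readiness.ignore-security-issues", "true");

  public static void main(String[] args) {
    SERVER_OVERRIDES.forEach((key, value) -> System.out.println(key + "=" + value));
  }
}
```

Even in this sketch the intent of the change is visible: three separate switches, each with an alarming name, stand between a default install and HadoopFileIO.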

File tree

50 files changed: 1080 additions, 183 deletions


client/python/pyproject.toml

Lines changed: 1 addition & 1 deletion

@@ -34,7 +34,7 @@ dependencies = [
     "python-dateutil>=2.8.2",
     "pydantic>=2.0.0",
     "typing-extensions>=4.7.1",
-    "boto3==1.38.18",
+    "boto3==1.38.20",
 ]

 [project.urls]

getting-started/eclipselink/docker-compose.yml

Lines changed: 1 addition & 1 deletion

@@ -76,7 +76,7 @@ services:
       retries: 15
     command: [
       /opt/spark/bin/spark-sql,
-      --packages, "org.apache.iceberg:iceberg-spark-runtime-3.5_2.12:1.9.0,software.amazon.awssdk:bundle:2.28.17,software.amazon.awssdk:url-connection-client:2.28.17,org.apache.iceberg:iceberg-gcp-bundle:1.9.0,org.apache.iceberg:iceberg-azure-bundle:1.9.0",
+      --packages, "org.apache.iceberg:iceberg-spark-runtime-3.5_2.12:1.9.0,org.apache.iceberg:iceberg-aws-bundle:1.9.0,org.apache.iceberg:iceberg-gcp-bundle:1.9.0,org.apache.iceberg:iceberg-azure-bundle:1.9.0",
       --conf, "spark.sql.extensions=org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions",
       --conf, "spark.sql.catalog.quickstart_catalog=org.apache.iceberg.spark.SparkCatalog",
       --conf, "spark.sql.catalog.quickstart_catalog.type=rest",
getting-started/jdbc/docker-compose.yml

Lines changed: 1 addition & 1 deletion

@@ -76,7 +76,7 @@ services:
       retries: 15
     command: [
       /opt/spark/bin/spark-sql,
-      --packages, "org.apache.iceberg:iceberg-spark-runtime-3.5_2.12:1.9.0,software.amazon.awssdk:bundle:2.28.17,software.amazon.awssdk:url-connection-client:2.28.17,org.apache.iceberg:iceberg-gcp-bundle:1.9.0,org.apache.iceberg:iceberg-azure-bundle:1.9.0",
+      --packages, "org.apache.iceberg:iceberg-spark-runtime-3.5_2.12:1.9.0,org.apache.iceberg:iceberg-aws-bundle:1.9.0,org.apache.iceberg:iceberg-gcp-bundle:1.9.0,org.apache.iceberg:iceberg-azure-bundle:1.9.0",
       --conf, "spark.sql.extensions=org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions",
       --conf, "spark.sql.catalog.polaris=org.apache.iceberg.spark.SparkCatalog",
       --conf, "spark.sql.catalog.polaris.type=rest",

getting-started/spark/notebooks/SparkPolaris.ipynb

Lines changed: 1 addition & 1 deletion

@@ -256,7 +256,7 @@
     "\n",
     "spark = (SparkSession.builder\n",
     "  .config(\"spark.sql.catalog.spark_catalog\", \"org.apache.iceberg.spark.SparkSessionCatalog\")\n",
-    "  .config(\"spark.jars.packages\", \"org.apache.iceberg:iceberg-spark-runtime-3.5_2.12:1.9.0,org.apache.hadoop:hadoop-aws:3.4.0,software.amazon.awssdk:bundle:2.23.19,software.amazon.awssdk:url-connection-client:2.23.19\")\n",
+    "  .config(\"spark.jars.packages\", \"org.apache.iceberg:iceberg-spark-runtime-3.5_2.12:1.9.0,org.apache.iceberg:iceberg-aws-bundle:1.9.0\")\n",
     "  .config('spark.sql.iceberg.vectorization.enabled', 'false')\n",
     "  \n",
     "  # Configure the 'polaris' catalog as an Iceberg rest catalog\n",

getting-started/telemetry/docker-compose.yml

Lines changed: 1 addition & 1 deletion

@@ -60,7 +60,7 @@ services:
     entrypoint: '/bin/sh -c "chmod +x /polaris/create-catalog.sh && /polaris/create-catalog.sh"'

   prometheus:
-    image: docker.io/prom/prometheus:v3.3.1
+    image: docker.io/prom/prometheus:v3.4.0
     ports:
       - "9093:9090"
     depends_on:

gradle/libs.versions.toml

Lines changed: 2 additions & 3 deletions

@@ -43,7 +43,6 @@ assertj-core = { module = "org.assertj:assertj-core", version = "3.27.3" }
 auth0-jwt = { module = "com.auth0:java-jwt", version = "4.5.0" }
 awssdk-bom = { module = "software.amazon.awssdk:bom", version = "2.31.45" }
 azuresdk-bom = { module = "com.azure:azure-sdk-bom", version = "1.2.34" }
-bouncycastle-bcprov = { module = "org.bouncycastle:bcprov-jdk18on", version = "1.80" }
 caffeine = { module = "com.github.ben-manes.caffeine:caffeine", version = "3.2.0" }
 cel-bom = { module = "org.projectnessie.cel:cel-bom", version = "0.5.3" }
 commons-codec1 = { module = "commons-codec:commons-codec", version = "1.18.0" }
@@ -52,7 +51,7 @@ commons-text = { module = "org.apache.commons:commons-text", version = "1.13.1"
 docker-java-api = { module = "com.github.docker-java:docker-java-api", version = "3.5.0" }
 eclipselink = { module = "org.eclipse.persistence:eclipselink", version = "4.0.6" }
 errorprone = { module = "com.google.errorprone:error_prone_core", version = "2.38.0" }
-google-cloud-storage-bom = { module = "com.google.cloud:google-cloud-storage-bom", version = "2.52.2" }
+google-cloud-storage-bom = { module = "com.google.cloud:google-cloud-storage-bom", version = "2.52.3" }
 guava = { module = "com.google.guava:guava", version = "33.4.8-jre" }
 h2 = { module = "com.h2database:h2", version = "2.3.232" }
 dnsjava = { module = "dnsjava:dnsjava", version = "3.6.3" }
@@ -81,7 +80,7 @@ junit-bom = { module = "org.junit:junit-bom", version = "5.12.2" }
 logback-classic = { module = "ch.qos.logback:logback-classic", version = "1.5.18" }
 micrometer-bom = { module = "io.micrometer:micrometer-bom", version = "1.15.0" }
 microprofile-fault-tolerance-api = { module = "org.eclipse.microprofile.fault-tolerance:microprofile-fault-tolerance-api", version = "4.1.2" }
-mockito-core = { module = "org.mockito:mockito-core", version = "5.17.0" }
+mockito-core = { module = "org.mockito:mockito-core", version = "5.18.0" }
 mockito-junit-jupiter = { module = "org.mockito:mockito-junit-jupiter", version = "5.17.0" }
 mongodb-driver-sync = { module = "org.mongodb:mongodb-driver-sync", version = "5.5.0" }
 opentelemetry-bom = { module = "io.opentelemetry:opentelemetry-bom", version = "1.50.0" }

integration-tests/src/main/java/org/apache/polaris/service/it/env/ManagementApi.java

Lines changed: 10 additions & 0 deletions

@@ -115,6 +115,16 @@ public void addGrant(String catalogName, String catalogRoleName, GrantResource g
     }
   }

+  public void revokeGrant(String catalogName, String catalogRoleName, GrantResource grant) {
+    try (Response response =
+        request(
+                "v1/catalogs/{cat}/catalog-roles/{role}/grants",
+                Map.of("cat", catalogName, "role", catalogRoleName))
+            .post(Entity.json(grant))) {
+      assertThat(response).returns(CREATED.getStatusCode(), Response::getStatus);
+    }
+  }
+
   public void grantCatalogRoleToPrincipalRole(
       String principalRoleName, String catalogName, CatalogRole catalogRole) {
     try (Response response =
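Since the diff only shows the helper itself, here is a hedged usage sketch of the new `revokeGrant` next to the existing `addGrant`. The catalog, role, and policy names are hypothetical, and a live test environment behind `ManagementApi` is assumed. One detail worth noting from the diff: adds go through PUT on the grants resource (see the test below that PUTs and expects 404 for a missing policy), while revokes go through POST on the same resource, and both expect 201 Created.

```java
import java.util.List;
import org.apache.polaris.core.admin.model.GrantResource;
import org.apache.polaris.core.admin.model.PolicyGrant;
import org.apache.polaris.core.admin.model.PolicyPrivilege;
import org.apache.polaris.service.it.env.ManagementApi;

class RevokeGrantSketch {
  // Grant POLICY_READ on policy "P1" under namespace "NS1" to a catalog role,
  // then take the grant back. All names are placeholders.
  static void grantAndRevoke(ManagementApi managementApi, String catalog, String role) {
    PolicyGrant grant =
        new PolicyGrant(
            List.of("NS1"), // namespace path holding the policy
            "P1", // policy name (hypothetical)
            PolicyPrivilege.POLICY_READ,
            GrantResource.TypeEnum.POLICY);
    managementApi.addGrant(catalog, role, grant); // PUT  .../grants -> 201
    managementApi.revokeGrant(catalog, role, grant); // POST .../grants -> 201
  }
}
```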

integration-tests/src/main/java/org/apache/polaris/service/it/test/PolarisPolicyServiceIntegrationTest.java

Lines changed: 186 additions & 3 deletions

@@ -18,7 +18,9 @@
  */
 package org.apache.polaris.service.it.test;

+import static jakarta.ws.rs.core.Response.Status.NOT_FOUND;
 import static org.apache.polaris.service.it.env.PolarisClient.polarisClient;
+import static org.assertj.core.api.Assertions.assertThat;

 import com.google.common.collect.ImmutableMap;
 import jakarta.ws.rs.client.Entity;
@@ -33,6 +35,7 @@
 import java.util.Map;
 import java.util.Optional;
 import java.util.UUID;
+import java.util.stream.Stream;
 import org.apache.iceberg.Schema;
 import org.apache.iceberg.catalog.Namespace;
 import org.apache.iceberg.catalog.TableIdentifier;
@@ -47,9 +50,16 @@
 import org.apache.polaris.core.admin.model.CatalogRole;
 import org.apache.polaris.core.admin.model.FileStorageConfigInfo;
 import org.apache.polaris.core.admin.model.GrantResource;
+import org.apache.polaris.core.admin.model.GrantResources;
+import org.apache.polaris.core.admin.model.NamespaceGrant;
+import org.apache.polaris.core.admin.model.NamespacePrivilege;
 import org.apache.polaris.core.admin.model.PolarisCatalog;
+import org.apache.polaris.core.admin.model.PolicyGrant;
+import org.apache.polaris.core.admin.model.PolicyPrivilege;
 import org.apache.polaris.core.admin.model.PrincipalWithCredentials;
 import org.apache.polaris.core.admin.model.StorageConfigInfo;
+import org.apache.polaris.core.admin.model.TableGrant;
+import org.apache.polaris.core.admin.model.TablePrivilege;
 import org.apache.polaris.core.catalog.PolarisCatalogHelpers;
 import org.apache.polaris.core.entity.CatalogEntity;
 import org.apache.polaris.core.policy.PredefinedPolicyTypes;
@@ -68,6 +78,7 @@
 import org.apache.polaris.service.types.PolicyAttachmentTarget;
 import org.apache.polaris.service.types.PolicyIdentifier;
 import org.assertj.core.api.Assertions;
+import org.assertj.core.api.InstanceOfAssertFactories;
 import org.junit.jupiter.api.AfterAll;
 import org.junit.jupiter.api.AfterEach;
 import org.junit.jupiter.api.BeforeAll;
@@ -86,6 +97,8 @@ public class PolarisPolicyServiceIntegrationTest {
       Optional.ofNullable(System.getenv("INTEGRATION_TEST_ROLE_ARN"))
           .orElse("arn:aws:iam::123456789012:role/my-role");

+  private static final String CATALOG_ROLE_1 = "catalogrole1";
+  private static final String CATALOG_ROLE_2 = "catalogrole2";
   private static final String EXAMPLE_TABLE_MAINTENANCE_POLICY_CONTENT = "{\"enable\":true}";
   private static final Namespace NS1 = Namespace.of("NS1");
   private static final Namespace NS2 = Namespace.of("NS2");
@@ -225,9 +238,9 @@ public void before(TestInfo testInfo) {
             extraPropertiesBuilder.build());
     CatalogGrant catalogGrant =
         new CatalogGrant(CatalogPrivilege.CATALOG_MANAGE_CONTENT, GrantResource.TypeEnum.CATALOG);
-    managementApi.createCatalogRole(currentCatalogName, "catalogrole1");
-    managementApi.addGrant(currentCatalogName, "catalogrole1", catalogGrant);
-    CatalogRole catalogRole = managementApi.getCatalogRole(currentCatalogName, "catalogrole1");
+    managementApi.createCatalogRole(currentCatalogName, CATALOG_ROLE_1);
+    managementApi.addGrant(currentCatalogName, CATALOG_ROLE_1, catalogGrant);
+    CatalogRole catalogRole = managementApi.getCatalogRole(currentCatalogName, CATALOG_ROLE_1);
     managementApi.grantCatalogRoleToPrincipalRole(
         principalRoleName, currentCatalogName, catalogRole);

@@ -487,6 +500,176 @@ NS2_T1, new Schema(Types.NestedField.optional(1, "string", Types.StringType.get(
     restCatalog.dropTable(NS2_T1);
   }

+  @Test
+  public void testGrantsOnPolicy() {
+    restCatalog.createNamespace(NS1);
+    try {
+      policyApi.createPolicy(
+          currentCatalogName,
+          NS1_P1,
+          PredefinedPolicyTypes.DATA_COMPACTION,
+          EXAMPLE_TABLE_MAINTENANCE_POLICY_CONTENT,
+          "test policy");
+      managementApi.createCatalogRole(currentCatalogName, CATALOG_ROLE_2);
+      Stream<PolicyGrant> policyGrants =
+          Arrays.stream(PolicyPrivilege.values())
+              .map(
+                  p ->
+                      new PolicyGrant(
+                          Arrays.asList(NS1.levels()),
+                          NS1_P1.getName(),
+                          p,
+                          GrantResource.TypeEnum.POLICY));
+      policyGrants.forEach(g -> managementApi.addGrant(currentCatalogName, CATALOG_ROLE_2, g));
+
+      Assertions.assertThat(managementApi.listGrants(currentCatalogName, CATALOG_ROLE_2))
+          .extracting(GrantResources::getGrants)
+          .asInstanceOf(InstanceOfAssertFactories.list(GrantResource.class))
+          .map(gr -> ((PolicyGrant) gr).getPrivilege())
+          .containsExactlyInAnyOrder(PolicyPrivilege.values());
+
+      PolicyGrant policyReadGrant =
+          new PolicyGrant(
+              Arrays.asList(NS1.levels()),
+              NS1_P1.getName(),
+              PolicyPrivilege.POLICY_READ,
+              GrantResource.TypeEnum.POLICY);
+      managementApi.revokeGrant(currentCatalogName, CATALOG_ROLE_2, policyReadGrant);
+
+      Assertions.assertThat(managementApi.listGrants(currentCatalogName, CATALOG_ROLE_2))
+          .extracting(GrantResources::getGrants)
+          .asInstanceOf(InstanceOfAssertFactories.list(GrantResource.class))
+          .map(gr -> ((PolicyGrant) gr).getPrivilege())
+          .doesNotContain(PolicyPrivilege.POLICY_READ);
+    } finally {
+      policyApi.purge(currentCatalogName, NS1);
+    }
+  }
+
+  @Test
+  public void testGrantsOnNonExistingPolicy() {
+    restCatalog.createNamespace(NS1);
+
+    try {
+      managementApi.createCatalogRole(currentCatalogName, CATALOG_ROLE_2);
+      Stream<PolicyGrant> policyGrants =
+          Arrays.stream(PolicyPrivilege.values())
+              .map(
+                  p ->
+                      new PolicyGrant(
+                          Arrays.asList(NS1.levels()),
+                          NS1_P1.getName(),
+                          p,
+                          GrantResource.TypeEnum.POLICY));
+      policyGrants.forEach(
+          g -> {
+            try (Response response =
+                managementApi
+                    .request(
+                        "v1/catalogs/{cat}/catalog-roles/{role}/grants",
+                        Map.of("cat", currentCatalogName, "role", "catalogrole2"))
+                    .put(Entity.json(g))) {
+
+              assertThat(response.getStatus()).isEqualTo(NOT_FOUND.getStatusCode());
+            }
+          });
+    } finally {
+      policyApi.purge(currentCatalogName, NS1);
+    }
+  }
+
+  @Test
+  public void testGrantsOnNamespace() {
+    restCatalog.createNamespace(NS1);
+    try {
+      managementApi.createCatalogRole(currentCatalogName, CATALOG_ROLE_2);
+      List<NamespacePrivilege> policyPrivilegesOnNamespace =
+          List.of(
+              NamespacePrivilege.POLICY_LIST,
+              NamespacePrivilege.POLICY_CREATE,
+              NamespacePrivilege.POLICY_DROP,
+              NamespacePrivilege.POLICY_WRITE,
+              NamespacePrivilege.POLICY_READ,
+              NamespacePrivilege.POLICY_FULL_METADATA,
+              NamespacePrivilege.NAMESPACE_ATTACH_POLICY,
+              NamespacePrivilege.NAMESPACE_DETACH_POLICY);
+      Stream<NamespaceGrant> namespaceGrants =
+          policyPrivilegesOnNamespace.stream()
+              .map(
+                  p ->
+                      new NamespaceGrant(
+                          Arrays.asList(NS1.levels()), p, GrantResource.TypeEnum.NAMESPACE));
+      namespaceGrants.forEach(g -> managementApi.addGrant(currentCatalogName, CATALOG_ROLE_2, g));
+
+      Assertions.assertThat(managementApi.listGrants(currentCatalogName, CATALOG_ROLE_2))
+          .extracting(GrantResources::getGrants)
+          .asInstanceOf(InstanceOfAssertFactories.list(GrantResource.class))
+          .map(gr -> ((NamespaceGrant) gr).getPrivilege())
+          .containsExactlyInAnyOrderElementsOf(policyPrivilegesOnNamespace);
+    } finally {
+      policyApi.purge(currentCatalogName, NS1);
+    }
+  }
+
+  @Test
+  public void testGrantsOnCatalog() {
+    managementApi.createCatalogRole(currentCatalogName, CATALOG_ROLE_2);
+    List<CatalogPrivilege> policyPrivilegesOnCatalog =
+        List.of(
+            CatalogPrivilege.POLICY_LIST,
+            CatalogPrivilege.POLICY_CREATE,
+            CatalogPrivilege.POLICY_DROP,
+            CatalogPrivilege.POLICY_WRITE,
+            CatalogPrivilege.POLICY_READ,
+            CatalogPrivilege.POLICY_FULL_METADATA,
+            CatalogPrivilege.CATALOG_ATTACH_POLICY,
+            CatalogPrivilege.CATALOG_DETACH_POLICY);
+    Stream<CatalogGrant> catalogGrants =
+        policyPrivilegesOnCatalog.stream()
+            .map(p -> new CatalogGrant(p, GrantResource.TypeEnum.CATALOG));
+    catalogGrants.forEach(g -> managementApi.addGrant(currentCatalogName, CATALOG_ROLE_2, g));
+
+    Assertions.assertThat(managementApi.listGrants(currentCatalogName, CATALOG_ROLE_2))
+        .extracting(GrantResources::getGrants)
+        .asInstanceOf(InstanceOfAssertFactories.list(GrantResource.class))
+        .map(gr -> ((CatalogGrant) gr).getPrivilege())
+        .containsExactlyInAnyOrderElementsOf(policyPrivilegesOnCatalog);
+  }
+
+  @Test
+  public void testGrantsOnTable() {
+    restCatalog.createNamespace(NS2);
+    try {
+      managementApi.createCatalogRole(currentCatalogName, CATALOG_ROLE_2);
+      restCatalog
+          .buildTable(
+              NS2_T1, new Schema(Types.NestedField.optional(1, "string", Types.StringType.get())))
+          .create();
+
+      List<TablePrivilege> policyPrivilegesOnTable =
+          List.of(TablePrivilege.TABLE_ATTACH_POLICY, TablePrivilege.TABLE_DETACH_POLICY);
+
+      Stream<TableGrant> tableGrants =
+          policyPrivilegesOnTable.stream()
+              .map(
+                  p ->
+                      new TableGrant(
+                          Arrays.asList(NS2.levels()),
+                          NS2_T1.name(),
+                          p,
+                          GrantResource.TypeEnum.TABLE));
+      tableGrants.forEach(g -> managementApi.addGrant(currentCatalogName, CATALOG_ROLE_2, g));
+
+      Assertions.assertThat(managementApi.listGrants(currentCatalogName, CATALOG_ROLE_2))
+          .extracting(GrantResources::getGrants)
+          .asInstanceOf(InstanceOfAssertFactories.list(GrantResource.class))
+          .map(gr -> ((TableGrant) gr).getPrivilege())
+          .containsExactlyInAnyOrderElementsOf(policyPrivilegesOnTable);
+    } finally {
+      policyApi.purge(currentCatalogName, NS2);
+    }
+  }
+
   private static ApplicablePolicy policyToApplicablePolicy(
       Policy policy, boolean inherited, Namespace parent) {
     return new ApplicablePolicy(

plugins/spark/README.md

Lines changed: 5 additions & 5 deletions

@@ -31,7 +31,7 @@ and depends on iceberg-spark-runtime 1.9.0.
 # Build Plugin Jar
 A task createPolarisSparkJar is added to build a jar for the Polaris Spark plugin, the jar is named as:
 `polaris-iceberg-<icebergVersion>-spark-runtime-<sparkVersion>_<scalaVersion>-<polarisVersion>.jar`. For example:
-`polaris-iceberg-1.8.1-spark-runtime-3.5_2.12-0.10.0-beta-incubating-SNAPSHOT.jar`.
+`polaris-iceberg-1.9.0-spark-runtime-3.5_2.12-0.10.0-beta-incubating-SNAPSHOT.jar`.

 - `./gradlew :polaris-spark-3.5_2.12:createPolarisSparkJar` -- build jar for Spark 3.5 with Scala version 2.12.
 - `./gradlew :polaris-spark-3.5_2.13:createPolarisSparkJar` -- build jar for Spark 3.5 with Scala version 2.13.
@@ -53,7 +53,7 @@ jar, and to use the local Polaris server as a Catalog.
 ```shell
 bin/spark-shell \
 --jars <path-to-spark-client-jar> \
---packages org.apache.hadoop:hadoop-aws:3.4.0,io.delta:delta-spark_2.12:3.3.1 \
+--packages org.apache.iceberg:iceberg-aws-bundle:1.9.0,io.delta:delta-spark_2.12:3.3.1 \
 --conf spark.sql.extensions=org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions,io.delta.sql.DeltaSparkSessionExtension \
 --conf spark.sql.catalog.spark_catalog=org.apache.spark.sql.delta.catalog.DeltaCatalog \
 --conf spark.sql.catalog.<catalog-name>.warehouse=<catalog-name> \
@@ -67,13 +67,13 @@ bin/spark-shell \
 ```

 Assume the path to the built Spark client jar is
-`/polaris/plugins/spark/v3.5/spark/build/2.12/libs/polaris-iceberg-1.8.1-spark-runtime-3.5_2.12-0.10.0-beta-incubating-SNAPSHOT.jar`
+`/polaris/plugins/spark/v3.5/spark/build/2.12/libs/polaris-iceberg-1.9.0-spark-runtime-3.5_2.12-0.10.0-beta-incubating-SNAPSHOT.jar`
 and the name of the catalog is `polaris`. The cli command will look like following:

 ```shell
 bin/spark-shell \
---jars /polaris/plugins/spark/v3.5/spark/build/2.12/libs/polaris-iceberg-1.8.1-spark-runtime-3.5_2.12-0.10.0-beta-incubating-SNAPSHOT.jar \
---packages org.apache.hadoop:hadoop-aws:3.4.0,io.delta:delta-spark_2.12:3.3.1 \
+--jars /polaris/plugins/spark/v3.5/spark/build/2.12/libs/polaris-iceberg-1.9.0-spark-runtime-3.5_2.12-0.10.0-beta-incubating-SNAPSHOT.jar \
+--packages org.apache.iceberg:iceberg-aws-bundle:1.9.0,io.delta:delta-spark_2.12:3.3.1 \
 --conf spark.sql.extensions=org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions,io.delta.sql.DeltaSparkSessionExtension \
 --conf spark.sql.catalog.spark_catalog=org.apache.spark.sql.delta.catalog.DeltaCatalog \
 --conf spark.sql.catalog.polaris.warehouse=<catalog-name> \

plugins/spark/v3.5/getting-started/notebooks/SparkPolaris.ipynb

Lines changed: 2 additions & 2 deletions

@@ -266,8 +266,8 @@
     "from pyspark.sql import SparkSession\n",
     "\n",
     "spark = (SparkSession.builder\n",
-    "  .config(\"spark.jars\", \"../polaris_libs/polaris-iceberg-1.8.1-spark-runtime-3.5_2.12-0.11.0-beta-incubating-SNAPSHOT.jar\")\n",
-    "  .config(\"spark.jars.packages\", \"org.apache.hadoop:hadoop-aws:3.3.4,io.delta:delta-spark_2.12:3.2.1\")\n",
+    "  .config(\"spark.jars\", \"../polaris_libs/polaris-iceberg-1.9.0-spark-runtime-3.5_2.12-0.11.0-beta-incubating-SNAPSHOT.jar\")\n",
+    "  .config(\"spark.jars.packages\", \"org.apache.iceberg:iceberg-aws-bundle:1.9.0,io.delta:delta-spark_2.12:3.2.1\")\n",
     "  .config(\"spark.sql.catalog.spark_catalog\", \"org.apache.spark.sql.delta.catalog.DeltaCatalog\")\n",
     "  .config('spark.sql.iceberg.vectorization.enabled', 'false')\n",
     "\n",
