
refactor(spark): Re-use Spark Python models from PyPI instead of custom dataclasses #271

@Shekharrajak

Description


Ref #225

Currently, the SparkClient implementation uses custom dataclasses (Driver, Executor, SparkConnectInfo) defined in kubeflow/spark/types/types.py. Once the official Spark Python models are published in the spark-operator PyPI package, we should migrate to them instead of maintaining our own copies.

  • Update imports and type hints throughout the codebase
  • Ensure backward compatibility during migration
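One way to satisfy both bullets is an import shim: prefer the official models when the package is installed, and fall back to the existing dataclasses otherwise. The sketch below is illustrative only; the `spark_operator.models` import path and the field names on `Driver`/`Executor` are assumptions, since the PyPI package is not yet available.

```python
# Hypothetical migration shim. The import path `spark_operator.models`
# is an assumption; adjust it once the real package is published.
from dataclasses import dataclass, field
from typing import Dict, Optional

try:
    # Preferred: official models from the spark-operator PyPI package.
    from spark_operator.models import Driver, Executor  # type: ignore
except ImportError:
    # Fallback: keep the current custom dataclasses so existing callers
    # continue to work during the migration (field names illustrative).
    @dataclass
    class Driver:
        cores: Optional[int] = None
        memory: Optional[str] = None
        labels: Dict[str, str] = field(default_factory=dict)

    @dataclass
    class Executor:
        instances: Optional[int] = None
        cores: Optional[int] = None
        memory: Optional[str] = None

# Callers import Driver/Executor from this module and are unaffected
# by which implementation was selected above.
driver = Driver(cores=1, memory="512m")
executor = Executor(instances=2, cores=1, memory="1g")
print(driver.cores, executor.instances)
```

Type hints elsewhere in the codebase can then reference this module's re-exports, so switching to the official models becomes a single-file change.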
