# Spark connector for Azure SQL Databases and SQL Server

The Spark connector for [Azure SQL Database](https://azure.microsoft.com/en-us/services/sql-database/) and [SQL Server](https://www.microsoft.com/en-us/sql-server/default.aspx) enables SQL databases, including Azure SQL Databases and SQL Server, to act as an input data source or output data sink for Spark jobs. It allows you to use real-time transactional data in big data analytics and to persist results for ad-hoc queries or reporting.

Compared to the built-in Spark JDBC connector, this connector can bulk insert data into SQL databases, outperforming row-by-row insertion by 10x to 20x. It also supports AAD authentication, allowing you to connect securely to your Azure SQL databases from Azure Databricks with your AAD account. Because its interfaces are similar to the built-in JDBC connector's, migrating existing Spark jobs to this connector is straightforward.

## How to connect to Spark using this library
This connector uses the Microsoft SQL Server JDBC driver to move data between Spark and Azure SQL Database or SQL Server.
Results are returned as `DataFrame`s.

All connection properties described in [Microsoft JDBC Driver for SQL Server](https://docs.microsoft.com/en-us/sql/connect/jdbc/setting-the-connection-properties) are supported by this connector. Add connection properties as fields in the `com.microsoft.azure.sqldb.spark.config.Config` object.
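Any other driver property can be supplied the same way. As a minimal sketch (the `encrypt` and `hostNameInCertificate` properties come from the JDBC driver's documented connection properties; the values shown are illustrative):

```scala
// Extra JDBC driver properties sit alongside the basic connection
// fields in the same map; pass the map to
// com.microsoft.azure.sqldb.spark.config.Config(...) as usual.
val settings = Map(
  "url" -> "mysqlserver.database.windows.net",
  "databaseName" -> "MyDatabase",
  "dbTable" -> "dbo.Clients",
  "user" -> "username",
  "password" -> "*********",
  "encrypt" -> "true", // driver-level TLS setting
  "hostNameInCertificate" -> "*.database.windows.net"
)
```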


### Reading from Azure SQL Database or SQL Server
```scala
import com.microsoft.azure.sqldb.spark.config.Config
import com.microsoft.azure.sqldb.spark.connect._

val config = Config(Map(
  "url" -> "mysqlserver.database.windows.net",
  "databaseName" -> "MyDatabase",
  "dbTable" -> "dbo.Clients",
  "user" -> "username",
  "password" -> "*********",
  "connectTimeout" -> "5", // seconds
  "queryTimeout" -> "5" // seconds
))

val collection = sqlContext.read.sqlDB(config)
collection.show()
```

### Writing to Azure SQL Database or SQL Server
```scala
import com.microsoft.azure.sqldb.spark.config.Config
import com.microsoft.azure.sqldb.spark.connect._

// Acquire a DataFrame collection (val collection)

val config = Config(Map(
  "url" -> "mysqlserver.database.windows.net",
  "databaseName" -> "MyDatabase",
  "dbTable" -> "dbo.Clients",
  "user" -> "username",
  "password" -> "*********"
))

import org.apache.spark.sql.SaveMode
collection.write.mode(SaveMode.Append).sqlDB(config)
```
### Pushdown query to Azure SQL Database or SQL Server
For SELECT queries with expected return results, use
[Reading from Azure SQL Database or SQL Server](#reading-from-azure-sql-database-or-sql-server) instead.
```scala
import com.microsoft.azure.sqldb.spark.config.Config
import com.microsoft.azure.sqldb.spark.query._

val query = """
  |UPDATE Customers
  |SET ContactName = 'Alfred Schmidt', City = 'Frankfurt'
  |WHERE CustomerID = 1;
""".stripMargin

val config = Config(Map(
  "url" -> "mysqlserver.database.windows.net",
  "databaseName" -> "MyDatabase",
  "user" -> "username",
  "password" -> "*********",
  "queryCustom" -> query
))

sqlContext.azurePushdownQuery(config)
```
### Bulk Copy to Azure SQL Database or SQL Server
```scala
import com.microsoft.azure.sqldb.spark.bulkcopy.BulkCopyMetadata
import com.microsoft.azure.sqldb.spark.config.Config
import com.microsoft.azure.sqldb.spark.connect._

/**
  Add column metadata.
  If not specified, metadata is determined automatically
  from the destination table, which may hurt performance.
*/
var bulkCopyMetadata = new BulkCopyMetadata
bulkCopyMetadata.addColumnMetadata(1, "Title", java.sql.Types.NVARCHAR, 128, 0)
bulkCopyMetadata.addColumnMetadata(2, "FirstName", java.sql.Types.NVARCHAR, 50, 0)
bulkCopyMetadata.addColumnMetadata(3, "LastName", java.sql.Types.NVARCHAR, 50, 0)

val bulkCopyConfig = Config(Map(
  "url" -> "mysqlserver.database.windows.net",
  "databaseName" -> "MyDatabase",
  "user" -> "username",
  "password" -> "*********",
  "dbTable" -> "dbo.Clients",
  "bulkCopyBatchSize" -> "2500",
  "bulkCopyTableLock" -> "true",
  "bulkCopyTimeout" -> "600"
))

df.bulkCopyToSqlDB(bulkCopyConfig, bulkCopyMetadata)
// df.bulkCopyToSqlDB(bulkCopyConfig) if no metadata is specified.
```
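For a wide destination table, the `addColumnMetadata` calls can be generated rather than hand-written. The sketch below assumes a simple mapping from Spark SQL type names (as produced by `DataType.simpleString` when iterating `df.schema.fields`) to `java.sql.Types` constants; `toJdbcType` is a hypothetical helper for illustration, not part of the connector's API, and precision/scale would still need to be chosen per column:

```scala
import java.sql.Types

// Map Spark SQL simple type names to java.sql.Types constants.
// Only a few common types are covered here; extend as needed.
def toJdbcType(sparkTypeName: String): Int = sparkTypeName match {
  case "string"    => Types.NVARCHAR
  case "int"       => Types.INTEGER
  case "bigint"    => Types.BIGINT
  case "double"    => Types.DOUBLE
  case "boolean"   => Types.BIT
  case "timestamp" => Types.TIMESTAMP
  case other       => sys.error(s"no JDBC mapping for Spark type: $other")
}

// Build (ordinal, columnName, jdbcType) entries for a schema.
// With a real DataFrame you would derive these pairs from df.schema.fields.
val schema = Seq("Title" -> "string", "Age" -> "int")
val metadataEntries = schema.zipWithIndex.map { case ((name, tpe), i) =>
  (i + 1, name, toJdbcType(tpe))
}
```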

## Requirements
Officially supported versions:

| Component | Versions Supported |
| --------- | ------------------ |
| Apache Spark | 2.0.2 or later |
| Scala | 2.10 or later |
| Microsoft JDBC Driver for SQL Server | 6.2 or later |
| Microsoft SQL Server | SQL Server 2008 or later |
| Azure SQL Databases | Supported |

## Download
### Download from Maven
*TBD*

### Build this project
Currently, the connector project uses Maven. To build the connector without dependencies, run:
```sh
mvn clean package
```
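The resulting jar can then be attached to a Spark session. A hypothetical example (the actual jar file name depends on the version in `pom.xml`):

```sh
# Launch a local Spark shell with the freshly built connector on the classpath.
spark-shell --jars target/azure-sqldb-spark-<version>.jar
```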

## Contributing & Feedback

This project has adopted the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/). For more information see the [Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/) or contact [[email protected]](mailto:[email protected]) with any additional questions or comments.

To give feedback and/or report an issue, open a [GitHub Issue](https://help.github.com/articles/creating-an-issue/).

*Apache®, Apache Spark, and Spark® are either registered trademarks or trademarks of the Apache Software Foundation in the United States and/or other countries.*