Releases · snowflakedb/spark-snowflake
v2.3.1
v2.3.0
v2.2.8
v2.2.7
v2.2.6
v2.2.4
v2.2.3
v2.2.2
v2.2.1
v2.2.0 - External S3 No Longer Needed!
Use of an external S3 bucket location for data staging is deprecated. As of version 2.2.0, users no longer need to supply an external S3 bucket for staging data moved between Spark and Snowflake. This means that the tempdir, awsAccessKey, and awsSecretAccessKey parameters no longer need to be provided in the connector options. This brings the minimum parameter set down to the following (although you may still need to specify one or more additional options depending on your use case):
import org.apache.spark.sql.DataFrame

// Minimum connector options: account URL, user, and password.
val sfOptions = Map(
  "sfURL" -> "<account_name>.snowflakecomputing.com",
  "sfUser" -> "<user_name>",
  "sfPassword" -> "<password>"
)

// Read a Snowflake table into a DataFrame.
val df: DataFrame = sqlContext.read
  .format("net.snowflake.spark.snowflake")
  .options(sfOptions)
  .option("dbtable", "<table>")
  .load()
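Movement in the other direction uses the same option set through the standard Spark DataSource write API. The sketch below assumes the df and sfOptions values defined above; the output table name and save mode are illustrative placeholders, not part of this release:

import org.apache.spark.sql.SaveMode

// Write the DataFrame back to a Snowflake table with the same connector options.
df.write
  .format("net.snowflake.spark.snowflake")
  .options(sfOptions)
  .option("dbtable", "<output_table>")
  .mode(SaveMode.Overwrite)
  .save()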
If you still want to stage data in an S3 location of your choice, the deprecated parameters remain functional (see the sketch below), but they may be removed in a later version.
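For reference, a configuration that keeps an external staging bucket could combine the minimal options with the deprecated parameters named above. This is a sketch only; the bucket URL scheme and credential placeholders are assumptions, not values from the release notes:

// Deprecated: stage data through an external S3 bucket of your choice.
// These three parameters may be removed in a later version.
val sfOptionsWithS3 = sfOptions ++ Map(
  "tempdir" -> "s3n://<bucket>/<path>",
  "awsAccessKey" -> "<aws_access_key_id>",
  "awsSecretAccessKey" -> "<aws_secret_access_key>"
)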