pyodbc, azure-synapse, azure-sql-managed-instance, spark-jdbc

Connecting from Azure Synapse Analytics Spark Pool to Azure MI


I need to connect (read/write) to Azure SQL Managed Instance (MI) from a Synapse notebook. Part of the data lives in ADLS Gen2 and needs to be migrated to MI. I want to schedule this notebook to execute daily.

I am currently looking to use the pyodbc and JDBC libraries. I have tried both of the commands below, and they fail with the same error message. I could not find any relevant Microsoft documentation on this.

token = TokenLibrary.getConnectionString(MILinkedServiceName)
token = mssparkutils.credentials.getConnectionStringOrCreds(MILinkedServiceName)

Error:

"errorMessage":"[Code=LinkedServiceTypeNotSupported, Target=AzureSqlMI1, Message=Linked Service Type 'AzureSqlMI' not supported]"

I checked the list of supported linked services, but Azure SQL Managed Instance is not mentioned there.

I cannot use a Synapse pipeline Copy activity/Data Flow for this requirement, because I need to write queries (joins/subqueries) on Delta tables to fetch the relevant data for migration to MI. Any help is appreciated.


Solution

  • "errorMessage":"[Code=LinkedServiceTypeNotSupported, Target=AzureSqlMI1, Message=Linked Service Type 'AzureSqlMI' not supported]"

    As per the documentation on supported linked services, Azure SQL Managed Instance is not in the list.


    That may be the reason for the above error. As a workaround, you can configure a public endpoint on the managed instance (see Configure public endpoint - Azure SQL Managed Instance) and use the code below in a Spark notebook:

    jdbcHostname = "<MIName>.public.da292b1d138f.database.windows.net"
    jdbcDatabase = "<databasename>"
    jdbcPort = "3342"
    username = "<userName>"
    password = "<password>"
    jdbcUrl = "jdbc:sqlserver://{0}:{1};database={2}".format(jdbcHostname, jdbcPort, jdbcDatabase)
    connectionProperties = {
        "user": username,
        "password": password,
        "driver": "com.microsoft.sqlserver.jdbc.SQLServerDriver"
    }
    # The table argument takes either a table name or a parenthesized, aliased subquery
    pushdown_query = "(SELECT * FROM <tablename>) AS src"
    df = spark.read.jdbc(url=jdbcUrl, table=pushdown_query, properties=connectionProperties)
    display(df)
    
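    For larger tables, spark.read.jdbc can also parallelize the read by range-partitioning on a numeric column. A minimal sketch, assuming a hypothetical table name, key column, and bounds:

    # Split the read into 8 partitions on a numeric key so executors read in parallel
    df = spark.read.jdbc(
        url=jdbcUrl,
        table="dbo.MyTable",      # hypothetical table name
        column="id",              # hypothetical numeric partition column
        lowerBound=1,
        upperBound=1000000,
        numPartitions=8,
        properties=connectionProperties
    )
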

    This connects and reads the data from the SQL MI database successfully from the Spark notebook.
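
    Since the requirement is to run joins/subqueries on Delta tables and migrate the result into MI, the same connection can be reused for writes. A minimal sketch, assuming hypothetical ADLS Gen2 paths, table, and column names:

    # Read the Delta tables from ADLS Gen2 (paths are placeholders)
    orders = spark.read.format("delta").load("abfss://<container>@<storageaccount>.dfs.core.windows.net/delta/orders")
    customers = spark.read.format("delta").load("abfss://<container>@<storageaccount>.dfs.core.windows.net/delta/customers")

    orders.createOrReplaceTempView("orders")
    customers.createOrReplaceTempView("customers")

    # Joins/subqueries run in Spark before anything reaches MI
    result = spark.sql("""
        SELECT o.order_id, o.amount, c.customer_name
        FROM orders o
        JOIN customers c ON o.customer_id = c.customer_id
        WHERE o.amount > (SELECT AVG(amount) FROM orders)
    """)

    # Append the result into the target MI table; mode("overwrite") would replace it
    result.write.jdbc(url=jdbcUrl, table="dbo.OrdersEnriched", mode="append", properties=connectionProperties)

    The daily schedule can then be handled by attaching the notebook to a Synapse pipeline with a schedule trigger.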