Snowflake Integration

How to ingest Snowflake data into Kapiche

Written by Cameron Parry
Updated over a week ago

A Kapiche source integration into Snowflake requires:

  • Host

  • Role

  • Warehouse

  • Database

  • Schema

  • Username

  • Password

  • Creation of a dedicated, read-only Kapiche user and role with access to all schemas needed for the integration (determining which schemas we need access to is usually an iterative process with you). A sketch of how these details fit together follows this list.
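
For illustration only, here is a minimal sketch of how the details above map onto a Snowflake connection, using the snowflake-connector-python package. Every value below (account, user, role, warehouse, database and schema names) is a placeholder for this sketch, not a value Kapiche prescribes.

```python
# Hedged sketch: placeholder values showing how the required connection
# details fit together. This is not Kapiche's internal implementation.
import snowflake.connector

conn = snowflake.connector.connect(
    account="your_account",       # Host / account identifier, e.g. xy12345.us-east-1
    user="KAPICHE_USER",          # dedicated read-only user created for Kapiche
    password="********",          # password for that user
    role="KAPICHE_READONLY",      # dedicated read-only role
    warehouse="ANALYTICS_WH",     # warehouse used for sync queries
    database="CUSTOMER_DB",       # database containing the source data
    schema="FEEDBACK",            # a schema the role has been granted access to
)

# Quick check that the role can see the expected tables.
with conn.cursor() as cur:
    cur.execute("SHOW TABLES")
    for name in (row[1] for row in cur.fetchall()):
        print(name)

conn.close()
```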

Kapiche integrates directly with Snowflake using incremental syncs on a configurable schedule. We will need to know the cursor field, i.e. the field used to determine which records have changed since the last sync. This is usually a date or a modification timestamp.
Kapiche then ingests the data it has been given access to.
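
As a hedged illustration of what an incremental sync looks like, the sketch below assumes a table named FEEDBACK with a LAST_MODIFIED timestamp as the cursor field. The table, column and connection values are placeholders chosen for this example, not Kapiche's actual pipeline code.

```python
# Hedged sketch of an incremental sync using a cursor field (LAST_MODIFIED).
import snowflake.connector

conn = snowflake.connector.connect(
    account="your_account", user="KAPICHE_USER", password="********",
    role="KAPICHE_READONLY", warehouse="ANALYTICS_WH",
    database="CUSTOMER_DB", schema="FEEDBACK",
)

last_cursor = "2024-01-01 00:00:00"  # cursor value saved from the previous sync run

with conn.cursor() as cur:
    # Only records modified since the last sync are fetched.
    cur.execute(
        "SELECT ID, TEXT, LAST_MODIFIED FROM FEEDBACK "
        "WHERE LAST_MODIFIED > %s ORDER BY LAST_MODIFIED",
        (last_cursor,),
    )
    rows = cur.fetchall()

# The largest LAST_MODIFIED value seen becomes the cursor for the next run.
if rows:
    last_cursor = max(row[2] for row in rows)

conn.close()
```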

The data is then available for the customer to use in projects.

NOTE:

  • If, for extra security, you want to whitelist the Kapiche IP addresses used for the connection, these can be provided. This is a fixed set of IP addresses that we will use to connect to your Snowflake instance (see the network policy sketch after these notes).

  • All sensitive data in transit and at rest must be encrypted using strong, industry-recognized algorithms.

  • Kapiche maintains approved encryption algorithm standards. These internal standards are reviewed and updated when significant changes to encryption standards occur within the security industry.

  • All Kapiche public web properties, applicable infrastructure components and applications use SSL/TLS, IPsec and SSH to encrypt data in transit over open, public networks, and must have certificates signed by a known, trusted provider.

  • Data must adhere to customer data sovereignty laws and guidelines. If a customer has specific requirements around data sovereignty, these will be strictly adhered to. This also applies to data redaction and any laws/guidelines around the length of time data must be stored.
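
If you choose to whitelist, the usual Snowflake mechanism is a network policy attached to the dedicated Kapiche user. The sketch below is illustrative only: the IP addresses, user and policy names are placeholders, and the real addresses are supplied by Kapiche on request.

```python
# Hedged sketch: restrict the dedicated Kapiche user to a whitelist via a
# Snowflake network policy, run by an admin user through the Python connector.
import snowflake.connector

admin = snowflake.connector.connect(
    account="your_account", user="ADMIN_USER", password="********",
    role="SECURITYADMIN",
)

with admin.cursor() as cur:
    # Allow connections only from the provided Kapiche IP addresses (placeholders here).
    cur.execute(
        "CREATE OR REPLACE NETWORK POLICY kapiche_policy "
        "ALLOWED_IP_LIST = ('203.0.113.10', '203.0.113.11')"
    )
    cur.execute("ALTER USER KAPICHE_USER SET NETWORK_POLICY = kapiche_policy")

admin.close()
```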
