Working with different marketing platforms can be challenging and disorienting when it comes to data management. By unifying your data in a single destination, you can get a clear picture of how your marketing campaigns and business are performing. One of the best ways to store marketing data is in cloud storage or a data warehouse platform.
Luckily, Supermetrics integrates with several cloud storage and data warehouse platforms, such as Snowflake, Amazon S3, Google Cloud Storage, and Redshift. Collecting your data on a cloud platform helps you take control of all your historical data, and you can use cloud computing resources to analyze it.
In this post, I will guide you step by step through the popular data warehouse and cloud storage integrations that Supermetrics has to offer.
Snowflake is a cloud data warehousing platform that helps organizations mobilize data with near-unlimited scale, performance, and concurrency. Let’s look at how to use Snowflake with Supermetrics. Before starting, make sure that you have signed up for both platforms.
Follow these steps to create a destination.
Creating a transfer migrates data from a data source to the destination you have just created. You can also schedule automatic transfers at this step, running them daily, weekly, or monthly. Note that a single transfer has a maximum processing time of 6 hours.
After creating a destination, it’s time to initiate data transfer into the warehouse.
After setting up the destination and transfer, you need to add historical data to your destination by using a data backfill.
Amazon S3 is a simple cloud data storage platform that stores data as objects. It uses the same infrastructure as Amazon.com for scalability and can host different data types, such as applications, data archives, backup and recovery, hybrid cloud storage, and data lakes for analytics. Here’s how to use Supermetrics with Amazon S3 cloud storage.
Before you begin configuring the Amazon S3 destination, make sure you meet the following prerequisites. Otherwise, the connection will not work.
Amazon S3 Bucket
The bucket will host data from Supermetrics.
Permissions
The Amazon S3 subfolder used as the destination needs the putObject permission. In addition, you can grant the optional deleteObject permission, which is not mandatory but is useful: it lets Supermetrics automatically remove the empty setup file generated during testing.
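As a sketch, an IAM policy statement granting both permissions looks like this (the bucket name and the "supermetrics" prefix are placeholders for your own):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "SupermetricsUploads",
      "Effect": "Allow",
      "Action": ["s3:PutObject", "s3:DeleteObject"],
      "Resource": "arn:aws:s3:::your-bucket-name/supermetrics/*"
    }
  ]
}
```

Scoping the `Resource` to the destination subfolder keeps the rest of the bucket off-limits to the integration.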
IP Allowlisting
If you have set up an IAM policy that allows only certain IPs, IP allowlisting is required. Make sure to add the following IP addresses to your AWS IAM policy; Supermetrics will use these addresses to transfer data to your Amazon S3 account.
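In IAM policy terms, the allowlist takes the form of a source-IP condition like the sketch below. The addresses shown are documentation placeholders, not the actual Supermetrics IPs; substitute the ones from the Supermetrics documentation:

```json
{
  "Effect": "Allow",
  "Action": "s3:PutObject",
  "Resource": "arn:aws:s3:::your-bucket-name/supermetrics/*",
  "Condition": {
    "IpAddress": {
      "aws:SourceIp": ["203.0.113.10/32", "203.0.113.11/32"]
    }
  }
}
```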
Whenever you wish to transfer data to a data warehouse, data lake, or cloud storage, you have to create a destination in the Supermetrics dashboard.
The process of creating a destination is similar across the various data lakes, data warehouses, and cloud storage platforms. However, the destination configuration is unique, as each platform has a different infrastructure. Take a look at the steps for configuring Amazon S3 storage.
Fill in the fields with accurate information, including:
Display name: Enter a unique name that will help you distinguish this destination from other destinations in the Supermetrics dashboard.
Bucket name: Choose the Amazon S3 bucket you wish to use as your destination. Amazon S3 uses buckets as storage containers, and you can store any type of data in a bucket. Each Amazon S3 account has a limit of 100 buckets. If you are not sure of the bucket name, open the Amazon S3 dashboard and check under Services > S3.
Upload path: Select a folder structure inside your Amazon S3 destination bucket. If the path you specify exists, Supermetrics will transfer data to that folder; otherwise, it will create a new folder based on the path you provide.
Output format: Select your preferred output format for the data you transfer to the Amazon S3 bucket.
Access key: This verifies ownership of the Amazon S3 account. You can find the access key in the AWS dashboard under My Security Credentials.
Access secret: This is the secret access key for your access key ID. A corresponding secret access key is generated for every access key you create.
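Taken together, the five fields above form a small configuration record. As an illustrative sketch (the field names mirror the form above and are not a Supermetrics API), a quick sanity check that nothing was left blank before saving might look like:

```python
# Illustrative only: these field names mirror the configuration form,
# not an actual Supermetrics API. All values are placeholders.
destination = {
    "display_name": "S3 - Marketing Data",
    "bucket_name": "my-marketing-bucket",
    "upload_path": "supermetrics/facebook/",
    "output_format": "CSV",
    "access_key": "AKIA-PLACEHOLDER",
    "access_secret": "PLACEHOLDER",  # never hard-code a real secret
}

# Collect any fields that were left empty.
missing = [field for field, value in destination.items() if not value]
if missing:
    raise ValueError(f"Fill in these fields before saving: {missing}")
print("All destination fields are filled in.")
```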
Supermetrics comes with a Backfill feature that allows you to add historical data to the destination.
Google Cloud Storage allows users to store data as objects, which means you can store data in any format you like. Most people already use Google products, and Google Cloud is no exception. Luckily, Supermetrics allows users to connect and transfer data to Google Cloud Storage. Here are the steps to use Supermetrics with Google Cloud Storage.
You need to meet the following prerequisites for a successful configuration. These details can be obtained from the Google Cloud dashboard.
Display name
Choose any name you wish to help you distinguish this transfer from others.
Bucket name
Just like Amazon S3, Google Cloud uses buckets to store data. Look for your preferred bucket name in the dashboard under Cloud Storage > Browser.
Output format
This option lets you choose your preferred output format.
Upload path
This refers to the location or folder inside the Google Cloud Storage bucket where you wish to store the data. If you choose an existing folder, Supermetrics will transfer the data into that folder. If the folder does not exist, Supermetrics will create a new folder with the name specified.
Auth Key
You need to enter an auth key in Supermetrics for a successful data transfer to your Google Cloud account. The auth key is located under IAM & Admin > Service Accounts and takes the form of a JSON file. Make sure to copy and paste the entire JSON contents into the configuration field.
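The service account key is a small JSON document, so a quick way to confirm you copied the whole thing is to parse it and check for its standard fields. The key below is a fake, truncated placeholder for illustration only:

```python
import json

# A fake, truncated service-account key used only for illustration.
auth_key = """{
  "type": "service_account",
  "project_id": "my-project",
  "private_key_id": "abc123",
  "private_key": "-----BEGIN PRIVATE KEY-----\\nFAKE\\n-----END PRIVATE KEY-----\\n",
  "client_email": "supermetrics@my-project.iam.gserviceaccount.com"
}"""

key = json.loads(auth_key)

# These fields appear in every Google service-account key file.
required = {"type", "project_id", "private_key", "client_email"}
missing = required - key.keys()

assert key["type"] == "service_account", "Not a service-account key"
assert not missing, f"Key is incomplete, missing: {missing}"
print("Auth key parses and looks complete.")
```

If the paste was cut off mid-file, `json.loads` fails immediately, which is a much clearer signal than a silent transfer failure later.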
Log in to your Supermetrics dashboard and create a destination. Remember to choose Google Cloud Storage as your destination.
It is important to configure the destination using accurate information from your Google Cloud Storage account.
The Google Cloud platform has another product that integrates seamlessly with Supermetrics called Google BigQuery. If you want to learn more about it, you can take a look at “The Beginner’s Guide to Using BigQuery with Google Data Studio” that we released a while back.
Redshift is an Amazon service that offers a petabyte-scale data warehouse. It allows users to store huge datasets, from gigabytes to petabytes and beyond. After uploading the data sets, you can use SQL-based tools to run queries and analyze the data. You can link Supermetrics to Redshift by following these steps.
You will need to fill in the following details during configuration. All these prerequisites have to be met to use Redshift as a Supermetrics destination; they ensure that a connection between the two platforms can be established.
Database and Schema
Before creating a Redshift destination, you need to ensure that you have a Redshift cluster, database, and schema.
Managing Schema changes
Supermetrics can make schema changes in Redshift. You or someone on your team can initiate such a change using the query manager. However, you need to ensure that Redshift has everything it needs to perform the changes. Follow these guidelines:
Use views with NO SCHEMA BINDING
Redshift enables schema binding for views by default, which means the tables referenced by a view cannot undergo schema changes or be dropped. For this reason, use views with no schema binding, especially ones that reference Supermetrics tables.
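In Redshift SQL, a late-binding view is created by appending the NO SCHEMA BINDING clause. The schema, table, and column names in this sketch are hypothetical:

```sql
-- Hypothetical names: a "supermetrics" schema holding a "facebook_ads" table.
CREATE VIEW reporting.facebook_spend AS
SELECT date, campaign_name, SUM(spend) AS total_spend
FROM supermetrics.facebook_ads
GROUP BY date, campaign_name
WITH NO SCHEMA BINDING;
```

Because the view is not bound, Supermetrics can later alter or recreate the underlying table without the view blocking the change.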
Remove Key constraints
Key constraints can sometimes hinder schema changes. That’s why users are encouraged not to use them on tables managed by Supermetrics.
Permissions
You need CREATE and USAGE permissions on the schema, or user credentials that carry these permissions. Only Redshift cluster admins can grant users the required permissions.
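An admin can grant both permissions with standard SQL. The schema and user names here are placeholders:

```sql
-- Run as a cluster admin; "supermetrics" and "supermetrics_user" are placeholders.
GRANT USAGE ON SCHEMA supermetrics TO supermetrics_user;
GRANT CREATE ON SCHEMA supermetrics TO supermetrics_user;
```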
IP Allowlisting
If your organization allows only selected networks, you will have to allowlist the Supermetrics IP addresses. Add the following IP addresses, and include your own so you don’t get locked out.
The settings are located under Network and security settings > the listed VPC security group > Edit inbound rules.
Use the prerequisites that we have discussed above to configure the destination.
A backfill helps users add historical data to Redshift and other preferred storage. Before using this feature, you need to ensure that you have created a destination and data transfer. Here are the steps to run a backfill to Redshift.
Note that there are limitations to the backfill feature. Some data sources limit the range of historical data that you can fetch, and backfilling data can take several weeks or months. If you are using a trial version, data will be backfilled for 14 days only.
Azure is a cloud storage platform powered by Microsoft. It allows users to store various data objects on highly reliable, durable, scalable, and secure storage. Just like the other cloud storage platforms mentioned above, you can use Microsoft Azure Storage with Supermetrics by following these steps:
The following prerequisites have to be met to use Supermetrics with Microsoft Azure storage.
Permissions
A minimum role of Reader and Data Access is needed for Supermetrics to have enough permissions to complete the transfer. The permissions should be applied to the destination container in Azure Storage.
Access Key
Supermetrics requires an access key to authenticate transfers.
IP Allowlisting
Make sure the following IP addresses can access your Microsoft Azure Storage. Note that you need at least the Storage Account Contributor role to modify firewall settings in Azure Storage. If you do not have this role, ask your admin to add the Supermetrics IP addresses. And don’t forget to include your own IP address.
Similar to the steps described for the previous platforms, you need to create a destination, then initiate a transfer and backfill the data you are planning to connect.
Storing your data in cloud storage, data lakes, and warehouses is always a good idea, but doing it manually can be time-consuming and tiresome. This is where Supermetrics comes in!
You can add various data sources and automate the process, saving you hours of manual labor. Supermetrics works with all the popular cloud storage platforms. All you need is the right permissions and the necessary details to configure a connection for data transfer.
I hope this was helpful! If you have any questions on advanced data blending, feel free to subscribe to my newsletter or take a look at one of our courses.
If you want to learn how to automate your reporting process and dashboards with Supermetrics for Data Studio, Excel, Google Sheets, and BigQuery, take a look at my full course on Udemy.