Developing a bare-minimum REST API for an Azure PostgreSQL database using Azure Functions with Node.js in the Azure Portal

Photo by Sincerely Media on Unsplash

Azure Functions is a serverless compute service running on Azure’s cloud-hosted execution environment. It lets us run auto-scaled, event-triggered code without managing the underlying hosting infrastructure.

A common use case for Azure Functions is performing work in response to an event, where that work can be completed quickly, typically within seconds. This on-demand execution model makes Azure Functions a suitable candidate for building REST APIs.

In this article, we will create a stateless, HTTP-triggered RESTful Azure Functions app with Node.js to perform read operations on an Azure PostgreSQL database and return the results as a JSON object. …
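The article builds the function in Node.js; purely as a sketch of the same pattern, here is an HTTP-triggered function on the Python runtime that queries PostgreSQL and returns JSON. The connection-string setting PG_CONNECTION and the notes table are hypothetical.

import json
import os

import azure.functions as func
import psycopg2

app = func.FunctionApp()

@app.route(route="notes", auth_level=func.AuthLevel.FUNCTION)
def get_notes(req: func.HttpRequest) -> func.HttpResponse:
    # The connection string comes from an application setting (environment variable).
    conn = psycopg2.connect(os.environ["PG_CONNECTION"])
    try:
        with conn.cursor() as cur:
            cur.execute("SELECT id, title FROM notes;")
            rows = [{"id": r[0], "title": r[1]} for r in cur.fetchall()]
    finally:
        conn.close()
    # Serialize the result set as the JSON response body.
    return func.HttpResponse(json.dumps(rows), mimetype="application/json")

Being stateless, the function opens and closes its resources within a single invocation, which suits the quick, on-demand execution model described above.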


A step-by-step guide to turning COVID-19 data into stunning Power BI visuals using Microsoft Azure offerings.

Photo by Matthew Henry on Burst

Cloud, Big Data, and Business Intelligence are the three buzzwords of the decade. Everyone is talking about them. Everyone wants in on them. But no one tells you how to do it. How do you use the cloud to process your big data and build intelligence around it to make business decisions?

There are multiple answers to that question. In this guide, we try to answer it using Microsoft’s cloud solution (Azure) and Microsoft’s BI tool (Power BI) to get you started in the right direction.

Microsoft Azure is one of the leading cloud solution providers, offering an end-to-end set of tools and techniques to ingest, analyze, and consume vast sources and formats of data. …


Making Sense of Big Data

A step-by-step guide to importing CSV data from ADLS Gen2 to Azure Synapse Analytics by using PolyBase

Photo by Markus Winkler on Unsplash

Azure Synapse Analytics SQL pool supports various data loading methods. The fastest and most scalable way to load data is through PolyBase. PolyBase is a data virtualization technology that can access external data stored in Hadoop or Azure Data Lake Storage via the T-SQL language.

PolyBase shifts the data-loading paradigm from ETL to ELT: the data is first loaded into a staging table, then transformed, and finally loaded into the production tables.

In this article, we load a CSV file from an Azure Data Lake Storage Gen2 account to an Azure Synapse Analytics data warehouse by using PolyBase. …
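The article runs these steps as T-SQL against the Synapse SQL pool; the sketch below drives the same four PolyBase steps from Python with pyodbc, only to make the ELT sequence concrete. Every object name, the storage account, and the connection details are hypothetical, and the database-scoped credential needed for storage authentication is omitted for brevity.

import pyodbc

POLYBASE_STEPS = [
    # 1. External data source pointing at the ADLS Gen2 account.
    """CREATE EXTERNAL DATA SOURCE LakeSource
       WITH (TYPE = HADOOP,
             LOCATION = 'abfss://data@mystorageaccount.dfs.core.windows.net');""",
    # 2. File format describing the CSV layout.
    """CREATE EXTERNAL FILE FORMAT CsvFormat
       WITH (FORMAT_TYPE = DELIMITEDTEXT,
             FORMAT_OPTIONS (FIELD_TERMINATOR = ',', FIRST_ROW = 2));""",
    # 3. External (staging) table over the raw files.
    """CREATE EXTERNAL TABLE ext.SalesStaging (Id INT, Amount DECIMAL(18, 2))
       WITH (LOCATION = '/sales/', DATA_SOURCE = LakeSource, FILE_FORMAT = CsvFormat);""",
    # 4. ELT: CTAS the external data into a distributed production table.
    """CREATE TABLE dbo.Sales WITH (DISTRIBUTION = ROUND_ROBIN)
       AS SELECT * FROM ext.SalesStaging;""",
]

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=myworkspace.sql.azuresynapse.net;DATABASE=mydw;UID=sqladmin;PWD=<password>;"
)
conn.autocommit = True  # these DDL statements cannot run inside a transaction
for stmt in POLYBASE_STEPS:
    conn.execute(stmt)
conn.close()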


Referencing and Accessing Azure Key Vault Secrets in Azure Functions as Environment Variables

Photo by Annie Spratt on Unsplash

Azure Functions offers a convenient way to store and reference credentials, keys, and other secrets as application settings. Application settings are exposed as environment variables during execution; they are encrypted at rest and transmitted over an encrypted channel. However, you still run the risk of inadvertently exposing these secrets to unauthorized users.

A safer approach is to store the credentials or keys as secrets in Azure Key Vault and reference those secrets as environment variables in our Azure Functions apps.

In this article, we will set up an Azure Functions app to access secrets stored in Azure Key Vault. We will also demonstrate how to access environment variables in a JavaScript function. …
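As a minimal sketch of the mechanism, assume an application setting named PG_PASSWORD whose value is a Key Vault reference rather than the secret itself (the function app’s managed identity must be granted access to the vault); the vault and secret names here are hypothetical:

@Microsoft.KeyVault(SecretUri=https://my-vault.vault.azure.net/secrets/pg-password/)

The platform resolves the reference before the code runs, so the function reads it like any other environment variable. In JavaScript that is process.env.PG_PASSWORD; the Python equivalent is:

import os

password = os.environ["PG_PASSWORD"]  # the resolved secret value, not the URI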


A guide to adding and executing an Azure Databricks notebook in an Azure Data Factory pipeline, with the access token kept safe in Azure Key Vault.

Photo by ZSun Fu on Unsplash

Azure Data Factory is a great tool for creating and orchestrating ETL and ELT pipelines. Data Factory’s power lies in seamlessly integrating vast sources of data with various compute and storage components.

This article looks at how to add a Notebook activity to an Azure Data Factory pipeline to perform data transformations. We will execute a PySpark notebook on an Azure Databricks cluster from a Data Factory pipeline while safeguarding the access token as a secret in Azure Key Vault.
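As a hedged illustration of what the Notebook activity executes, here is the shape of a small PySpark notebook; the widget name, mount paths, and table are hypothetical, with run_date passed in by the pipeline as a base parameter.

# Parameter handed in by the Data Factory pipeline via base parameters.
dbutils.widgets.text("run_date", "")
run_date = dbutils.widgets.get("run_date")

# Read the day's raw files, apply the transformations, persist the result.
raw = spark.read.option("header", True).csv(f"/mnt/raw/sales/{run_date}/")
cleaned = raw.dropDuplicates().na.drop(subset=["order_id"])
cleaned.write.mode("append").saveAsTable("analytics.sales_clean")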

Caution: Microsoft Azure is a paid service, and following this article can cause financial liability to you or your organization.

Please read our terms of use before proceeding with this article…


Using PySpark to incrementally process and load schema-drifted files into an Azure Synapse Analytics data warehouse in Azure Databricks

Photo by Håkon Grimstad on Unsplash

Data is the lifeblood of a business. It comes in varying shapes and sizes, making it a constant challenge to find the means to process and consume it; without those means, data holds no value whatsoever.

This article looks at how to leverage Apache Spark’s parallel analytics capabilities to iteratively cleanse and transform schema-drifted CSV files into queryable relational data for storage in a data warehouse. We will work in a Spark environment and write code in PySpark to achieve our transformation goal.
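As a minimal sketch of one way to reconcile drifted schemas, assuming Spark 3.1+ (for unionByName with allowMissingColumns) and hypothetical paths:

from functools import reduce

# List the incoming CSV files from the mounted storage.
files = [f.path for f in dbutils.fs.ls("/mnt/raw/drifted/") if f.path.endswith(".csv")]

# Read each file with its own inferred schema, then align frames by column
# name, filling columns a given file lacks with nulls.
frames = [spark.read.option("header", True).option("inferSchema", True).csv(p)
          for p in files]
combined = reduce(lambda a, b: a.unionByName(b, allowMissingColumns=True), frames)

The combined, relationally consistent DataFrame can then be written to the Synapse data warehouse in the usual batch fashion.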

Caution: Microsoft Azure is a paid service, and following this article can cause financial liability to you or your organization.


A guide on accessing Azure Data Lake Storage Gen2 from Databricks in Python with Azure Key Vault-backed Secret Scopes and Service Principal.

Photo by Markus Winkler on Unsplash

Azure Data Lake Storage and Azure Databricks are unarguably the backbones of the Azure cloud-based data analytics systems. Azure Data Lake Storage provides scalable and cost-effective storage, whereas Azure Databricks provides the means to build analytics on that storage.

The analytics procedure begins with mounting the storage to the Databricks File System (DBFS). There are several ways to mount Azure Data Lake Storage Gen2 to Databricks; perhaps one of the most secure is to delegate identity and access management to Azure AD.

This article looks at how to mount Azure Data Lake Storage to Databricks, authenticated with a Service Principal and OAuth 2.0. …
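A minimal sketch of such a mount, with the secret scope, secret names, storage account, and container all hypothetical:

# Service Principal credentials are fetched from a Key Vault-backed secret scope.
tenant_id = dbutils.secrets.get(scope="kv-scope", key="tenant-id")
client_id = dbutils.secrets.get(scope="kv-scope", key="sp-client-id")
client_secret = dbutils.secrets.get(scope="kv-scope", key="sp-client-secret")

# OAuth 2.0 client-credentials configuration for the ABFS driver.
configs = {
    "fs.azure.account.auth.type": "OAuth",
    "fs.azure.account.oauth.provider.type":
        "org.apache.hadoop.fs.azurebfs.oauth2.ClientCredsTokenProvider",
    "fs.azure.account.oauth2.client.id": client_id,
    "fs.azure.account.oauth2.client.secret": client_secret,
    "fs.azure.account.oauth2.client.endpoint":
        f"https://login.microsoftonline.com/{tenant_id}/oauth2/token",
}

dbutils.fs.mount(
    source="abfss://data@mystorageaccount.dfs.core.windows.net/",
    mount_point="/mnt/datalake",
    extra_configs=configs,
)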


A guide on how to set up the SQL Server firewall and connect from Databricks using Secret Scopes in PySpark.

Photo by Lucian Alexe on Unsplash

Azure SQL Server comes packed with benefits, and with a twist when you need to access it from outside the Azure network: with default settings, the Azure SQL Server firewall denies all access to the server.

This article looks at how to access Azure Synapse Analytics data warehouse from our client computer using SSMS and Databricks without revealing and storing our credentials by leveraging Azure Key Vault-backed Secret Scopes.
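A minimal sketch of the Databricks side, assuming a secret scope kv-scope holding the SQL credentials, a hypothetical server and table, and a firewall rule already in place for the Databricks-side traffic:

user = dbutils.secrets.get(scope="kv-scope", key="sql-user")
password = dbutils.secrets.get(scope="kv-scope", key="sql-password")

jdbc_url = ("jdbc:sqlserver://myserver.database.windows.net:1433;"
            "database=mydw;encrypt=true;loginTimeout=30;")

# Credentials never appear in the notebook; they are resolved at run time.
df = (spark.read.format("jdbc")
      .option("url", jdbc_url)
      .option("dbtable", "dbo.FactSales")
      .option("user", user)
      .option("password", password)
      .load())
df.show(5)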

Caution: Microsoft Azure is a paid service, and following this article can cause financial liability to you or your organization.

At the time of writing, Azure Key Vault-backed Secret Scopes are in ‘Public Preview.’ It is recommended not to use any ‘Preview’ feature in production or critical systems.


An innovative Azure Data Factory pipeline to copy multiple files, incrementally, over HTTP from a third-party webserver

Photo by tian kuan on Unsplash

Copying files using Azure Data Factory is straightforward; however, it gets tricky when the files are hosted on a third-party web server and the only way to copy them is via their URLs.

In this article, we look at an innovative use of Data Factory activities to generate the URLs on the fly, fetch the content over HTTP, and store it in our storage account for further processing.
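The article implements this with Data Factory activities (a ForEach loop feeding a Copy activity with a dynamic URL expression); the Python sketch below only illustrates the underlying idea of generating date-stamped URLs incrementally. The URL pattern, container, and connection string are hypothetical.

from datetime import date, timedelta

import requests
from azure.storage.blob import BlobServiceClient

service = BlobServiceClient.from_connection_string("<storage-connection-string>")
container = service.get_container_client("raw")

day = date(2020, 3, 1)
while day <= date(2020, 3, 31):
    # Build the day's URL on the fly, as the pipeline expression would.
    url = f"https://example.com/reports/{day:%m-%d-%Y}.csv"
    resp = requests.get(url)
    if resp.ok:  # skip days whose file has not been published
        container.upload_blob(name=f"reports/{day:%m-%d-%Y}.csv",
                              data=resp.content, overwrite=True)
    day += timedelta(days=1)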

Caution: Microsoft Azure is a paid service, and following this article can cause financial liability to you or your organization.

Please read our terms of use before proceeding with this article…


These terms and conditions apply to all articles authored and published by https://dhyanintech.medium.com/

Photo by Matthew Henry on Burst

The article that brought you here is governed by, and should only be used if you agree to and accept, the non-exhaustive terms and conditions set out in this document.

  • The views and opinions expressed in this article are purely the author’s own. They do not reflect the ideas, ideologies, or points of view of any organization the author is affiliated with.
  • All product names, logos, and brands are the property of their respective owners. All company, product, and service names used on this website are for identification purposes only. The use of these names, logos, and brands does not imply endorsement.
  • All content provided in this article is for informational purposes only. …

About

Dhyanendra Singh Rathore

A data and BI professional certified in Microsoft Azure. Passionate about solving problems; currently playing with big data in Spark.
