Azure is constantly adding new tools to its arsenal, and although Azure Data Factory (ADF) is not new, I have recently been working with it a lot to manage and transform data. In the process I learned a great deal about its capabilities and picked up a few tricks. In this series I will share what I have learned so far to help anyone starting a similar journey.

Azure Data Factory lets us manage the ETL lifecycle for big data with the flexibility of serverless scaling. Let’s create a Data Factory, and I will explain more as we proceed. Navigate to the Azure Portal if you are not already there and search for Data Factories in the search bar. Select Data Factories from the results.

You will be taken to the Data Factories blade. If you have not created any yet, you should see a screen like the one below.

Click on Create Data Factory and you will be taken to a blade like the one below, where you enter details such as subscription, resource group, region, name, and version. As of today, V2 is the latest version.

If you like, you can set up a Git repository for the Data Factory. I created a repo and added the details.

The next option is to add tags. I am going to skip that as I will not be using them in this demo. Once you click Next, you will see the Review + Create section. If all the validations pass, you will be able to create the Data Factory.

Once you click Create, you will see the deployment start, and once the resource is created you should see something like below.

Click on Go to resource and you will be taken to the Data Factory. You will see a lot of details about the Data Factory here, just like any other Azure resource. The couple of things that I would like to point out are:

  • Author and Monitor – this takes you to another interface where we will be able to create the pipelines (scheduled, data-driven workflows) that will do the ETL for us.
  • Documentation – as you would expect, this gives fairly detailed information about the different aspects of Data Factory.

I will leave the documentation for you to browse through. Let’s navigate to Author and Monitor.

If you added GitHub details while creating the Data Factory, you will be asked to log in to GitHub to grant ADF the necessary permissions.

Once you are done with that, you will see a screen like below. The main navigation is in the top-left corner of the page. Click on the Author (pencil) icon.

This will take you to the screen below. Since this is our first time here, we do not have anything at the moment. Let’s go ahead and click the + in the top-left corner and select Pipeline.

This will create a pipeline and show the Activities toolbar, which we will use to drag and drop new activities into the pipeline.

You will also notice a yellow indicator with the number 1 below the Author icon. This is because we now have unpublished changes.

* Your top menu might be a little different if you did not select GitHub for source control.

Let’s add a Copy data activity to the pipeline. We can do this by dragging and dropping the Copy data activity from Move & transform onto the central screen.

We will copy data from a SQL Server database to blob storage. I am not going to cover the creation of the SQL Server database; if you do not have one, you could take this as an assignment to create one. Now let’s link to the SQL database under Source. Click on New.

This will open a flyout on the side, where we can select SQL as the new dataset.

We do not have a linked service yet, so let’s go ahead and create one; the linked service is what lets us connect to the SQL database we have.
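For reference, a linked service is stored behind the scenes as a small JSON document. Below is a minimal sketch, assuming an Azure SQL Database (for an on-premises SQL Server the type would be SqlServer and you would also need a self-hosted integration runtime); the name and connection string values are only placeholders.

{
  "name": "AzureSqlLinkedService",
  "properties": {
    "type": "AzureSqlDatabase",
    "typeProperties": {
      "connectionString": "Server=tcp:<server>.database.windows.net,1433;Database=<database>;User ID=<user>;Password=<password>"
    }
  }
}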

Once we have saved the dataset and linked service information, we should see something like below in the Source section.

We want to copy the data to blob storage. Just like the SQL database, I am not going to go into the details of creating a blob storage account; I hope you are able to set that up on your own. If you are unable to do so, please comment on the post and I will add the details. Now, under Sink, let’s select New to create a new dataset and choose Azure Blob Storage.

Select DelimitedText as the format and select New to create the linked service.

Add the required details for the Azure Blob Storage account and make sure you test the connection.
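As an aside, everything we author through this UI is saved as JSON behind the scenes (and, if you connected a Git repository, that JSON is what gets committed). A rough sketch of what our copy pipeline could look like is below; the pipeline and dataset names are illustrative placeholders, and the source and sink types shown here assume an Azure SQL source and a delimited-text blob sink.

{
  "name": "CopySqlToBlobPipeline",
  "properties": {
    "activities": [
      {
        "name": "CopySqlToBlob",
        "type": "Copy",
        "inputs": [
          { "referenceName": "SqlTableDataset", "type": "DatasetReference" }
        ],
        "outputs": [
          { "referenceName": "BlobDelimitedTextDataset", "type": "DatasetReference" }
        ],
        "typeProperties": {
          "source": { "type": "AzureSqlSource" },
          "sink": { "type": "DelimitedTextSink" }
        }
      }
    ]
  }
}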

Once we have set up the copy data activity, click on Debug to run it.

This will run the pipeline in debug mode and show the run details in the Output tab at the bottom. You can see more details about the run by clicking the glasses icon, and clicking the two arrow icons shows the input and output of the activity.

When I browse to the container in the blob storage, I can see the data from the SQL database there.

Hope you found this helpful. Feel free to leave feedback and questions.

The WHO has declared the coronavirus outbreak a global pandemic, and almost every country in the world is impacted by it. A lot of people are trying to track the state of the pandemic and its impact, and many people and organizations have come forward to help in this global crisis. But just like everything else, where there is good there is evil: malicious apps, websites, scams, and ransomware have sprung up to take advantage of the situation.

One such ransomware app is the Covid 19 Tracker, which promises real-time tracking of the spreading virus near you.

In the background, however, it changes the password of your Android phone and locks it. It then demands $100 in Bitcoin to unlock your phone.

Be safe and, as always, visit only the sites you trust and please refrain from installing untrusted apps on your mobile phone.

Microsoft has delivered the fastest project of this size that I know of to provide accurate and up to date information on the coronavirus (COVID-19). You can visit the live tracker at https://bing.com/covid. The website is mobile friendly as well.

We all understand the concept of boxing and unboxing in C#, but type casting is a bit more complicated and we have more than one option to accomplish it.

The two options we have are 1) an explicit cast to the specific type, or 2) the as keyword. Let’s look at each of these in a little detail in code.

 

I am going to refer to type casting without any keyword as standard type casting. Below is a code example of one way of doing it correctly, by handling the possible failure.

object obj1 = new object();

try
{
    Person person1 = (Person)obj1;
    Console.WriteLine(person1);
}
catch(InvalidCastException castException)
{
    Console.WriteLine(castException.Message);
}

It is apparent from the code that the object we are trying to cast to Person is not actually a Person, so the cast will fail; we are prepared for that by catching the InvalidCastException. This is exactly the problem with standard type casting: a failed cast throws an exception.

We can avoid getting an exception during type casting by using the as keyword. Below is a code example of one way of doing that:

object obj1 = new object();
Person person2 = obj1 as Person;
if (person2 != null)
{
    Console.WriteLine(person2);
}
else
{
    Console.WriteLine("The person was not the original type of the object");
}

As we can see from the code above, we used the as keyword to type cast, and since obj1 is not of type Person, we get back null as the result of the cast.

 

I know. The next question one would ask is why this is better than catching an exception, since it is almost the same amount of code.

 

The answer to that question is performance. Throwing an exception is always expensive because of the additional work the runtime has to do, such as collecting the stack trace, the increase in memory pressure due to page faults, etc.
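If you want to see the difference for yourself, a rough sketch like the one below can compare the two approaches. This is illustrative only, not a rigorous benchmark; it reuses the Person class from the earlier examples, and the exact numbers will vary from machine to machine, but the exception path is typically far slower when the cast keeps failing.

using System.Diagnostics;

// Rough, illustrative timing only; Person is the same class used earlier.
object obj = new object();
const int iterations = 100_000;

var watch = Stopwatch.StartNew();
for (int i = 0; i < iterations; i++)
{
    try
    {
        Person person = (Person)obj;   // always fails, so an exception is thrown every time
    }
    catch (InvalidCastException)
    {
        // swallowed purely for timing purposes
    }
}
watch.Stop();
Console.WriteLine($"Standard cast with catch: {watch.ElapsedMilliseconds} ms");

watch.Restart();
for (int i = 0; i < iterations; i++)
{
    Person person = obj as Person;     // always null, no exception is thrown
    if (person == null)
    {
        // handle the failed cast
    }
}
watch.Stop();
Console.WriteLine($"as with a null check: {watch.ElapsedMilliseconds} ms");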

 

There is one catch while using the as keyword, though: it only works for reference types and nullable types. This is understandable, since as either returns the cast object or null, and there is no way to return null for a non-nullable value type, so it cannot be used there.
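A quick sketch of what that looks like in practice (the variable names here are just for illustration):

object boxedNumber = 42;                 // a boxed int

// 'as' is allowed with nullable value types and simply returns null when the cast fails.
int? maybeInt = boxedNumber as int?;     // 42
long? maybeLong = boxedNumber as long?;  // null, because a boxed int can only be unboxed as an int

Console.WriteLine(maybeInt);
Console.WriteLine(maybeLong.HasValue ? maybeLong.Value.ToString() : "null");

// The line below would not compile, because int is neither a reference type nor nullable:
// int notAllowed = boxedNumber as int;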

 

In summary, we should use as wherever possible, since checking for null is definitely preferable to throwing an exception. If we must use standard type casting, we should at least catch the specific InvalidCastException rather than a general Exception.

 

Although there is not much to the source code, it can be found on GitHub.

 

Any questions, suggestions, or feedback are always welcome.

Jigsaw Ransomware

A brand new breed of ransomware has upped the game in an evil way by threatening to delete user files if the victims refuse to pay the ransom.

The malware, dubbed Jigsaw, is one of the newest entries in the ransomware family discovered by researchers.

Jigsaw, at one time branded as BitcoinBlackmailer.exe, was built on March 23rd, 2016 and was released into the wild only a week later. Once a victim downloads the malware, the malicious code encrypts user data and locks the screen of the PC, in the typical manner of ransomware. Users are then held to ransom and asked to pay in virtual currency to retrieve their content.

However, according to Forcepoint researchers, this ransomware not only encrypts files but also threatens users with a countdown, displaying the face of Billy the Puppet from the horror film Saw; victims are told that files will be chosen for deletion every hour the ransom isn’t paid.

The threatening notice says that during the first day only a couple of files will be erased, but after that several thousand will be removed each day that payment is missed. If users try to close the program or shut down the PC, Jigsaw tells them a thousand files will be deleted on startup “as a punishment.”

Jigsaw Countdown


 

Yet the code isn’t particularly sophisticated. As Jigsaw is written in .NET, the team was able to reverse engineer the malware’s code and pull out the encryption key Jigsaw uses to lock away user files, as well as find each of the one hundred Bitcoin addresses used to store ransomware payments.

In the video below, you can observe how the ransomware behaves once a system is compromised, and the creepy message victims are given to force them to pay.

 

The infection rates are tiny and the returns look to be poor. However, the functionality of this new variety of ransomware is still worth noting. As cybercrime becomes more sophisticated and tools are developed, even those with a lack of skill can take advantage, and Jigsaw is a prime example of how ransomware could end up evolving on a wider scale in the future.

 

This new ransomware, OphionLocker, was first discovered by @Trojan7Sec. Once it encrypts all the data on your system, you will see the following message:

OphionLocker Screen Message


 

It also adds a text file to your desktop with the details of how to make the payment and collect the decryption key.

OphionLocker Text


 

The payment website looks like below

 

Ransom Page


 

 

Fake Ransom


This ransomware does not securely delete your files or remove the shadow volume copies, so it is still possible to recover your files using a file recovery tool or a program like Shadow Explorer.

 

More information on this can be found at trojan7malware.blogspot.co.uk.