All of us need to capture a screenshot of the browser at one time or another. I have used many third-party freemium extensions over the years, but nothing beats something built into the browser itself.
Chrome now includes a way to capture screenshots through Dev Tools. To open Dev Tools, press F12 or Ctrl + Shift + I, or from the hamburger menu in Chrome select More Tools –> Developer Tools
Developer Tools
Click on the device toggle to switch to device mode
Device Toggle
Select the device for which to capture the screenshot
Device Select
Go to the 3 dots menu on the right and select Capture Screenshot.
Capture Screenshot
And voila, there you have your screenshot.
Screenshot iPhone
Note – If you do not want the device border, you can disable it from the 3 dots menu on the right
Hide Device Frame
I know what you are thinking: what if I want the screenshot in a desktop/laptop browser size and not a device? There is a simple way to do that. In the device selection menu, select Responsive
Responsive Device
Now drag to resize the area of the responsive device and then click on Capture Screenshot.
Resize Device
And here is your screenshot.
Screenshot Responsive
Hope this helps you. Any feedback, questions and comments are welcome.
Jigsaw ransomware – want to play a game (deletes your files as you wait)
by Abhi
A brand new breed of ransomware has upped the game in an evil way by threatening to delete user files if victims refuse to pay the ransom.
The malware, dubbed Jigsaw, is one of the newest entries into the ransomware family discovered by researchers.
Jigsaw, previously branded BitcoinBlackmailer.exe, was built on March 23rd 2016 and was released into the wild only a week later. Once a victim downloads the malware, the malicious code encrypts user data and locks the screen of the personal computer, in the typical manner of ransomware. Users are then held to ransom and asked to pay in virtual currency to retrieve their content.
However, according to Forcepoint researchers, this ransomware not only encrypts files but also threatens users with a countdown by displaying the face of Billy the Puppet from the horror film Saw; victims are told files will be chosen every hour for deletion if the ransom isn’t paid.
The threatening notice says that on the first day only a couple of files are erased, but after that, several thousand are removed each day the payment is missed. If users try to reboot the system or shut down the PC, Jigsaw warns that one thousand files will be deleted on startup “as a punishment.”
Jigsaw Countdown
Yet the code isn’t especially sophisticated. As Jigsaw is written in .NET, the team were able to reverse engineer the malware’s code and extract the encryption key used by Jigsaw to lock away user files, as well as find each of the 100 Bitcoin addresses used to store ransomware payments.
In the video below, you can observe how the ransomware behaves once a system is compromised, and the creepy message victims are given to force them to pay.
The infection rates are tiny and the returns seem to be poor. However, the functionality of this new variety of ransomware is still worth noting. As cybercrime becomes more sophisticated and its tools more accessible, even those with little skill can take advantage, and Jigsaw is a prime example of how ransomware could end up evolving on a wider scale in the future.
Introduction to the new ASP.NET 5 framework with Visual Studio 2015
by Abhishek Shukla
In this blog post we are going to talk about the basics and new features in ASP.NET 5 and Visual Studio 2015 by creating our favourite Hello World program.
Visual Studio 2015 uses a file-based project system now, which means that when we add a file in the file system it is automatically picked up and dynamic compilation happens, resulting in faster refreshes and faster builds behind the scenes. This is possible because of the Roslyn compiler https://github.com/dotnet/roslyn. In the newer version of Visual Studio we can develop applications against the full featured .NET Framework as well as the cloud-optimized Core CLR. The idea behind this new Core CLR is to give applications fast startup, low memory usage and high throughput (I wonder why this is not the default framework always). Core CLR is designed to work on environments other than Windows as well, like Linux and OS X. Core CLR is available as a NuGet package and hence can be deployed as part of the application itself, but keep in mind that, being an optimized CLR, it might be missing types available in the full featured .NET Framework.
The newer version of ASP.NET has taken steps towards unification by combining ASP.NET MVC and Web API into one framework, which means that there is only one base controller class for both.
MVC6 WebPage MVC WebAPI
MVC 6 does not rely on System.Web anymore, and the minimum size of the HttpContext is reduced from 30 KB to 2 KB. Let’s create our favourite project, HelloWorld, in ASP.NET 5.
New ASP.NET 5 Project
Make sure you select the Web Application under ASP.NET 5 Templates, and uncheck the Host in Cloud checkbox for now.
Hello World ASP.NET 5 Project
Now let’s navigate to the Controllers folder of the project in the File Explorer and create a new file named HelloWorldController.cs
New Controller
You will notice that this file automatically shows up in the solution
File Explorer Sync
Now let’s add some code to this controller to see it in action.
using Microsoft.AspNet.Mvc;
namespace HelloASPNET5.Controllers
{
public class HelloWorldController : Controller
{
public string Index()
{
return "Hello World! Have we achieved World Peace Yet?";
}
}
}
Build and run the project by pressing Ctrl + F5 and change the URL to call the controller that we just created (with the default routing convention, /HelloWorld maps to the Index action of HelloWorldController).
Run ASP.NET 5
This was certainly less painful, as we do not have to remember what files we created in the file system (not that I do; Git takes care of all that), since Visual Studio automatically takes care of it.
Now if we change the controller, save the file and then refresh the browser, the changes are reflected, which was not the case before when working with C# files.
Run ASP.NET 5 After File Save
The root of the website is no longer the root of the project. By default the root of the website is the wwwroot folder, which contains all the static resources of the website (including the bin folder for dlls). We can add more static resources here as well. This allows a clear separation between the files that need to be deployed to the webserver and the configuration files.
WebRoot wwwroot
By default the views also reference resources from the webroot (~) of the website.
Reference Webroot wwwroot
We could change it by modifying the webroot property in the project.json file. We will also notice the new way of declaring dependencies.
Change Webroot wwwroot
However, the dependencies that can be added in project.json have to be 100% .NET dependencies. We can include these dependencies using the good old Manage NuGet Packages option.
Manage Nuget Packages
Or we could hand-type the dependencies in the project.json file itself, and IntelliSense will help us out there. We can specify the version we want to target, or use an empty string (“”) to always use the latest.
Add nuget reference manually
Once we do that and save the project.json file, we will see that the References section in the Solution Explorer shows the progress of downloading and mapping the package that we just added, and finally the project is built.
Auto Build
If you look further down in the project.json file, you will see in the frameworks section that both the full featured .NET Framework and the .NET Core framework have been included. This makes Visual Studio build the solution against both frameworks, which is beneficial when we are supporting multiple frameworks.
Target Multiple Frameworks
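Putting these pieces together, a minimal project.json for this walkthrough might look like the sketch below. The webroot and frameworks sections match what we discussed above; the package names and beta version numbers are illustrative examples, not prescriptive, so use whatever versions your template generated.

```
{
  "webroot": "wwwroot",
  "dependencies": {
    "Microsoft.AspNet.Mvc": "6.0.0-beta4",
    "Microsoft.AspNet.Server.IIS": "1.0.0-beta4"
  },
  "frameworks": {
    "dnx451": { },
    "dnxcore50": { }
  }
}
```

Saving this file is enough to trigger the package restore and rebuild described above; no explicit build step is needed.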
When we run the project on the local system, it runs against only one of the frameworks. To check which framework we are running against, we can go to the project properties and select the framework.
Select One Framework
When we are working in the solution, say in the HelloWorldController where I want to return the current date and time using the time zone, we can see the framework availability information in IntelliSense.
Package Availability
We can use the #if preprocessor directives with the predefined symbols to write code against specific frameworks: dnx451 for the full .NET Framework and dnxcore50 for the Core framework.
namespace HelloASPNET5.Controllers
{
public class HelloWorldController : Controller
{
public string Index()
{
#if dnx451
return "Hello World, it's " + System.TimeZone.CurrentTimeZone.ToLocalTime(System.DateTime.Now).ToString() + " here.";
#else
return "I don't know what time it is.";
#endif
}
}
}
Use # directive
Now if you look in the current solution folder of the project, you will not find the packages that are listed as dependencies. The reason is that they are present in the .dnx folder under the user profile folder.
Framework Location
When you go inside the .dnx folder, you will see the packages folder, where all the packages referenced by the current project and all other projects are kept, so that all of them can be reused.
Packages Location
You will also see a runtimes folder containing the available runtimes: x64 and x86 versions of both the full .NET Framework and the Core framework.
Runtime Location
Any questions, comments and feedback are always welcome.
Selenium Automation to verify if an email address is valid or not
by Abhishek Shukla
Hey guys,
Today I am going to share a program that I wrote some time back when I was learning to work with Selenium. I had a lot of people commenting and subscribing on my blog, and I wanted to figure out whether those email addresses were valid or not. I found a bunch of websites that could do that for you, but I did not want to go through the exercise of copy-pasting and verifying one email at a time. So I wrote a Selenium program to do it for me.
The website did have a bulk purchase program, but the minimum one could purchase was 3000 emails and I needed to verify about a hundred, hence automation to the rescue.
To write a program similar to this you would need to know C#, a little bit of HTML/CSS, and have working knowledge of Selenium.
I tried this program recently and found that it needed a fix, so I have fixed it and it is available on my GitHub page. It will give you the basic infrastructure and knowledge to write a Selenium automation. This program is for educational purposes only; I request you not to use it for any other reason.
Here are a few screenshots from the program
Start Screen
Select Emails To Verify
See App In Action
Feel free to provide your feedback, comments and suggestions.
I recently came across a great tool for benchmarking your APIs: api-benchmark, a Node.js tool written by Matteo Figus. Complete documentation for the tool can be found here.
In this post I will provide a simple tutorial for anyone to use this tool with their APIs.
Create a folder and navigate to it using your tool of choice for running node commands; I use Git Bash. Run the following command to install the api-benchmark package. This requires Node to be installed beforehand.
$ npm install api-benchmark
Now let’s add a new JavaScript file and name it mybenchmark.js. We will require the benchmark tool:
var apiBenchmark = require('api-benchmark');
In this example we will use the Giphy API. Giphy is a GIF search engine. Let’s define a few variables that we will use.
var service = {
server1: "http://api.giphy.com/"
};
var apiKey = 'dc6zaTOxFJmzC'; // public beta key
Let’s add the routes which we want to test. In this example we will get the trending GIFs.
Here is the complete code of mybenchmark.js for your convenience:
var apiBenchmark = require('api-benchmark');
var service = {
server1: "http://api.giphy.com/"
};
var apiKey = 'dc6zaTOxFJmzC'; // public beta key
var routes = {
trending: {
method: 'get',
route: 'v1/gifs/trending?api_key=' + apiKey,
headers: {
'Accept': 'application/json'
}
}};
apiBenchmark.measure(service, routes, function(err, results){
console.log(results);
});
To see this in action we will run the benchmark by running the following command in your console.
$ node mybenchmark.js
And you should see something like below.
Api-Benchmark 1
This shows that our benchmark ran, but we cannot interpret the results from here. To see them, we will have to use the getHtml method available on api-benchmark.
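Here is how mybenchmark.js could be updated for that. This is a sketch based on how I remember the getHtml callback signature, so double-check it against the api-benchmark documentation:

```javascript
var fs = require('fs');
var apiBenchmark = require('api-benchmark');

var service = {
  server1: "http://api.giphy.com/"
};
var apiKey = 'dc6zaTOxFJmzC'; // public beta key
var routes = {
  trending: {
    method: 'get',
    route: 'v1/gifs/trending?api_key=' + apiKey,
    headers: {
      'Accept': 'application/json'
    }
  }
};

apiBenchmark.measure(service, routes, function(err, results){
  // Convert the raw results into a self-contained HTML report
  apiBenchmark.getHtml(results, function(err, html){
    fs.writeFileSync('benchmarks.html', html);
  });
});
```

The only changes from the earlier version are requiring fs, calling getHtml on the results, and writing the generated report to disk.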
Now run the benchmark again with the same command in your console.
$ node mybenchmark.js
This will create a new HTML file (benchmarks.html) in your current folder with the results. It will look something like below; you can see the details of your requests and how your API is performing.
API Benchmark Stats
It also has two more tabs which show the Request Details and Response Details. All of this provides great insight into your APIs.
API Benchmark Request Response
However, I felt that getting the distribution of the API calls would provide deeper insight into my APIs, so I added a new tab to the report to showcase the distribution of API calls over time. The pull request has been merged, so you will notice an additional Distribution tab in the report, and you should see something like below.
API Benchmark Distribution
We could also specify the available options to benchmark the APIs more deeply. Let’s try out a few.
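As a starting point, a sketch like the one below passes an options object as the third argument to measure. The option names here (debug, runMode, minSamples, maxTime) are my recollection of the api-benchmark options, so verify them against the documentation before relying on them:

```javascript
var apiBenchmark = require('api-benchmark');

var service = {
  server1: "http://api.giphy.com/"
};
var routes = {
  trending: {
    method: 'get',
    route: 'v1/gifs/trending?api_key=dc6zaTOxFJmzC', // public beta key
    headers: { 'Accept': 'application/json' }
  }
};

var options = {
  debug: true,         // log progress to the console
  runMode: 'parallel', // fire requests concurrently instead of one by one
  minSamples: 100,     // collect at least 100 samples per route
  maxTime: 10          // stop sampling a route after 10 seconds
};

apiBenchmark.measure(service, routes, options, function(err, results){
  console.log(results);
});
```

Tuning minSamples and maxTime trades benchmark runtime against statistical confidence in the measured latencies.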