Monday, 23 December 2019

Azure Service Bus Messages Duplicate Detection

In this blog post, I will demonstrate how duplicate detection works in Azure Service Bus queues.

Pre-Requisites
____________
  • Azure Subscription
  • Azure Service Bus namespace created in the Standard pricing tier or above. Duplicate detection doesn't work with the Basic pricing tier.
  • Azure Service Bus Queue with Duplicate Detection enabled
1. Create an Azure Service Bus namespace in the Azure portal.

Click Create. Make sure you select the Standard pricing tier or above. The Service Bus namespace is now created.

2. Create a Queue under the Service Bus namespace created in step 1.

Select the "Enable Duplicate Detection" checkbox and specify the duplicate detection history window. (Duplicate incoming messages arriving within this window will be rejected.)



I have specified a duplicate detection window of 5 minutes. We will send multiple messages with the same MessageId property and verify how many messages actually get pushed to the Service Bus queue.



Click Create. 
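
As an aside, the same queue can also be created programmatically with the ManagementClient used later in this post; a minimal sketch (the connection string and queue name are placeholders):

        // Create the queue with duplicate detection enabled and a 5-minute
        // history window, matching the settings chosen in the portal above.
        var managementClient = new ManagementClient("<Service Bus connection string>");
        var queueDescription = new QueueDescription("<queue name>")
        {
            RequiresDuplicateDetection = true,
            DuplicateDetectionHistoryTimeWindow = TimeSpan.FromMinutes(5)
        };
        await managementClient.CreateQueueAsync(queueDescription);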

3. Now we will create a console application that pushes 10 messages at a 30-second interval, i.e. 10 messages will be sent to the queue within 5 minutes. All messages will be assigned the same MessageId property.

4. Add the Microsoft.Azure.ServiceBus NuGet package to your console app and import the following namespaces.

using Microsoft.Azure.ServiceBus;
using Microsoft.Azure.ServiceBus.Management;

5. Add a method to count the messages in the queue.
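The original code appears as an image in the post; here is a minimal sketch of such a method (the method name and parameters are my own), using the ManagementClient from the namespaces imported above (also add using System.Threading.Tasks):

        // Returns the number of messages currently in the queue.
        private static async Task<long> GetMessageCountAsync(string connectionString, string queueName)
        {
            var managementClient = new ManagementClient(connectionString);
            QueueRuntimeInfo runtimeInfo = await managementClient.GetQueueRuntimeInfoAsync(queueName);
            await managementClient.CloseAsync();
            return runtimeInfo.MessageCount;
        }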
6. Add the following to the Main method.
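The original snippet is likewise an image; a sketch of the sending loop, assuming an async Main (C# 7.1+), using System.Text and System.Threading.Tasks, placeholder connection details, and the GetMessageCountAsync helper above:

        // Send 10 messages 30 seconds apart, all with the same MessageId,
        // then print how many messages the queue actually holds.
        const string ConnectionString = "<your Service Bus connection string>";
        const string QueueName = "<your queue name>";

        var queueClient = new QueueClient(ConnectionString, QueueName);
        for (int i = 1; i <= 10; i++)
        {
            var message = new Message(Encoding.UTF8.GetBytes($"Message {i}"))
            {
                MessageId = "duplicate-demo-id" // identical for every message
            };
            await queueClient.SendAsync(message);
            Console.WriteLine($"Sent message {i}");
            await Task.Delay(TimeSpan.FromSeconds(30));
        }
        await queueClient.CloseAsync();

        Console.WriteLine($"Messages in queue: {await GetMessageCountAsync(ConnectionString, QueueName)}");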
7. Press F5 to run the application and see how many messages actually get inserted into the queue.

8. We can see that even though we sent ten messages to the queue, the message count is 1. Since all messages carried the same MessageId, the other 9 messages were rejected by the Service Bus queue.

We can also verify from Service Bus Explorer that only one message was received in the queue.



Conclusion

A Service Bus queue with duplicate detection enabled will reject any message that arrives within the specified duplicate detection time window carrying a MessageId it has already seen. Only messages with a unique MessageId are retained in the queue. The code can be found at https://github.com/yogeetayadav/SbDuplicateDetection.

Wednesday, 12 December 2018

Azure SQL Pricing pattern

Azure pricing can be quite complex to understand at times. This blog focuses specifically on Azure SQL pricing models.

Azure SQL Pricing Models

vCore Purchase Model - The vCore purchasing model lets you trade off compute performance and storage independently. It has two performance tiers: General Purpose (GP) and Business Critical (BC).

DTU Purchase Model - This model offers Azure SQL in three tiers: Basic, Standard, and Premium.

DTU Pricing Model Standard Tier

(The Standard-tier table of included and max storage per performance level appears as an image in the original post.)

Premium service tier - The following table illustrates the included storage and max storage options.
Suppose an S3 database has provisioned 1 TB. The amount of storage included for S3 is 250 GB, and so the extra storage amount is 1024 GB – 250 GB = 774 GB. The unit price for extra storage in the Standard tier is approximately ₹16.86/GB/month, and so the extra storage price is 774 GB * ₹16.86/GB/month = ₹13,045.42/month. Therefore, the total price for the S3 database is ₹10,116.88/month for DTUs + ₹13,045.42/month for extra storage = ₹23,162.30/month.
Suppose a 125 eDTU Premium pool has provisioned 1 TB. The amount of storage included for a 125 eDTU Premium pool is 250 GB, and so the extra storage amount is 1024 GB – 250 GB = 774 GB. The unit price for extra storage in the Premium tier is approximately ₹33.71/GB/month, and so the extra storage price is 774 GB * ₹33.71/GB/month = ₹26,090.84/month. Therefore, the total price for the pool is ₹51,743.39/month for pool eDTUs + ₹26,090.84/month for extra storage = ₹77,834.22/month.
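Expressed generically, the calculation above is just the DTU price plus any extra storage beyond the included amount; a small sketch (the rates are the approximate per-GB/month prices quoted above):

        // Monthly cost = DTU/eDTU price + (provisioned - included) * price per GB.
        static decimal MonthlyCost(decimal dtuPrice, decimal provisionedGb,
                                   decimal includedGb, decimal pricePerGb)
        {
            decimal extraGb = Math.Max(0m, provisionedGb - includedGb);
            return dtuPrice + extraGb * pricePerGb;
        }

        // S3 example from above: MonthlyCost(10116.88m, 1024m, 250m, 16.8546m) ≈ ₹23,162/month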

One should always understand the workload expected during a given period of time and accordingly choose the purchase option that will suit their application best.



Implement Swagger in API Definitions

APIs are a crucial part of any project nowadays. Swagger is a visualization tool for understanding an API's signature and functionality in real time. Swagger is a language-agnostic specification for describing REST APIs. It allows both computers and humans to understand the capabilities of a service without any direct access to the implementation (source code, network access, documentation). One goal is to minimize the amount of work needed to connect disassociated services. Another goal is to reduce the amount of time needed to accurately document a service.

Steps to implement Swagger in .NET Core

  • From the Manage NuGet Packages dialog:
    • Right-click the project in Solution Explorer > Manage NuGet Packages
    • Set the Package source to "nuget.org"
    • Enter "Swashbuckle.AspNetCore" in the search box
    • Select the "Swashbuckle.AspNetCore" package from the Browse tab and click Install


  • Add the Swagger generator to the services collection in the Startup.ConfigureServices method.
  • Import the Swashbuckle.AspNetCore.Swagger namespace to use the Info class.
  • In the Startup.Configure method, enable the middleware for serving the generated JSON document and the Swagger UI. (A sketch of these three steps follows this list.)
  • Build and run the solution. Browse to the Swagger URL, which will be http://localhost:<port>/swagger, e.g. http://localhost:60212/swagger/
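
The original post shows these snippets as images; a minimal sketch of the two Startup methods, assuming Swashbuckle.AspNetCore 4.x (where the Info class lives in the Swashbuckle.AspNetCore.Swagger namespace) and a made-up API title:

    // In Startup.cs (add "using Swashbuckle.AspNetCore.Swagger;" at the top)
    public void ConfigureServices(IServiceCollection services)
    {
        services.AddMvc();
        // Register the Swagger generator, defining one Swagger document.
        services.AddSwaggerGen(c =>
        {
            c.SwaggerDoc("v1", new Info { Title = "My API", Version = "v1" });
        });
    }

    public void Configure(IApplicationBuilder app)
    {
        // Serve the generated Swagger JSON and the Swagger UI at /swagger.
        app.UseSwagger();
        app.UseSwaggerUI(c =>
        {
            c.SwaggerEndpoint("/swagger/v1/swagger.json", "My API V1");
        });
        app.UseMvc();
    }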




  • You can dig into an API definition by clicking on it and try it out by providing the required parameters. E.g. in the example below we can see what must be provided in a POST call to create a dealer.
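The POST example itself is a screenshot in the original post; purely as a hypothetical illustration, a dealer model and POST action of the kind Swagger would render could look like this (all names are invented; requires Microsoft.AspNetCore.Mvc):

    // Hypothetical model and controller, for illustration only.
    public class Dealer
    {
        public int Id { get; set; }
        public string Name { get; set; }
        public string City { get; set; }
    }

    [Route("api/[controller]")]
    public class DealersController : Controller
    {
        // Swagger UI shows this POST operation along with a sample Dealer body.
        [HttpPost]
        public IActionResult Post([FromBody] Dealer dealer)
        {
            return Ok(dealer);
        }
    }
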
    Wednesday, 7 June 2017

    Message pick up strategies for Stream Analytics jobs

    What happens to messages when a Stream Analytics job is stopped and started again after some time?
    What decisions impact this? Here are a few options that are necessary to consider.



    Refer to the diagram below (shown as an image in the original post).

    1. Now - When this option is chosen, the job will pick up messages ingested from now onwards. Messages that were ingested into the system while the job was stopped will not be picked up.

    2. Custom - You can specify the time from which you want to pick up messages. Say the job was stopped for 4 hours and you want to select the messages from that period; in that case you can choose this option.

    3. When last stopped - This option enables the job to pick up messages that were ingested even while the job was stopped. The job keeps a checkpoint from its last run and picks up messages from that checkpoint onwards.

    The usual recommendation is "When last stopped", but your choice may vary as per your requirements.


    Different ways to automate Stream Analytics jobs on Azure

    A Stream Analytics job on Azure can be created from the Azure portal using a wizard and a step-by-step graphical process. However, consider the scenario where we need to create the same jobs in different environments: testing, UAT, pre-prod, prod, a regression test environment, etc. It would be a highly tedious job to create the jobs manually for each environment, and that is when one may feel the need to automate the process.

    Following are some of the ways in which Stream Analytics jobs on Azure can be automated and reused in different places. The pros and cons of each approach are compared below.

    Using ARM template
    Pros
          Deployment of Stream Analytics jobs can be automated, and the solution can be deployed to different environments in minimal time without repeating steps.
          Deploying to 10 different environments manually through the portal would be a tiresome and error-prone job.
    Cons
          Jobs need to be stopped during deployment and users need to be informed. **However, the same applies even when deploying through the portal.
          For specifying a Power BI output in Stream Analytics, it is necessary to perform a logon to the Power BI service, something that is not possible during ARM template deployment. (https://github.com/vtex/VtexInsights/wiki/Stream-Analytics)

    Using Stream Analytics Powershell cmdlets

    Pros
          Jobs can be created with a simple PowerShell command.


    Cons
          Separate commands are available for creating the job, its inputs, outputs, and transformations. You need to combine them in one place and write custom scripts to deploy one job including all inputs, transformations, and outputs.

    Using Stream Analytics REST API References

    Pros
          There are separate APIs for inputs, outputs, and transformations, so we need to put the pieces together and customize them to deploy all parts of an SA job at once (see the sketch after this list).
    Cons
          We will need to authenticate the API requests.
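
    For illustration, a hedged sketch of calling the job-creation REST endpoint with HttpClient (the subscription, resource group, job name, and api-version are placeholders, the job-definition JSON is elided, and token acquisition is not shown; requires System.Net.Http, System.Net.Http.Headers, and System.Text):

          // PUT a Stream Analytics job definition via the Azure Resource Manager REST API.
          static async Task CreateJobAsync(string bearerToken, string jobDefinitionJson)
          {
              var url = "https://management.azure.com/subscriptions/<subscription-id>" +
                        "/resourceGroups/<resource-group>/providers/Microsoft.StreamAnalytics" +
                        "/streamingjobs/<job-name>?api-version=2016-03-01";

              using (var client = new HttpClient())
              {
                  // Every ARM request must carry an OAuth bearer token.
                  client.DefaultRequestHeaders.Authorization =
                      new AuthenticationHeaderValue("Bearer", bearerToken);

                  var content = new StringContent(jobDefinitionJson, Encoding.UTF8, "application/json");
                  HttpResponseMessage response = await client.PutAsync(url, content);
                  Console.WriteLine(response.StatusCode);
              }
          }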


    Using stream-analytics-dotnet-management-sdk

    Pros
          Gives us more control over Stream Analytics jobs, such as monitoring SA jobs programmatically. Stream Analytics jobs created via REST APIs, the Azure SDK, or PowerShell do not have monitoring enabled by default, so this capability comes in handy.

    Cons
          Use it only if you need more control over jobs and none of the above options satisfies your needs.



    Recommendations

    Use ARM templates; they serve the following two purposes:

          We can configure the tumbling window of SA jobs dynamically.
          It helps automate the deployment of SA jobs to multiple environments. If there are multiple environments for the application, once development is complete we don't want to spend as much time as development moving the solution to the other environments. So it's better to parameterize variables per environment and deploy the SA jobs.

    Thursday, 27 March 2014

    Dependency Injection in MVC4

    Dependency Injection is a design pattern that helps reduce tight coupling between software components.
    With loosely coupled code, we can develop maintainable software systems. The Dependency Injection pattern uses an external object to initialize objects and provide their required dependencies; it allows you to inject a dependency from outside the class.



    DI provides the objects that an object needs. Rather than dependencies constructing themselves, they are injected by some external means. For instance, say we have the following class "Customer" that uses a "Logger" class to log errors. Rather than creating the "Logger" from within the class, you can inject it via the constructor, as shown in the code snippet below.

     public class Customer
        {
            // The Logger dependency is supplied from outside rather than
            // being constructed inside the class.
            public Logger Log;

            public Customer(Logger obj)
            {
                Log = obj;
            }
        }

    Now we can invoke the customer object with any kind of logger.

    Customer obj = new Customer(new EmailLogger());
    Customer obj1 = new Customer(new EventViewerLogger());
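
    Here, EmailLogger and EventViewerLogger are assumed to derive from a common Logger base (not shown in the original post); a minimal sketch:

        // Base logger with two interchangeable implementations.
        public abstract class Logger
        {
            public abstract void Write(string message);
        }

        public class EmailLogger : Logger
        {
            public override void Write(string message) { /* send an email */ }
        }

        public class EventViewerLogger : Logger
        {
            public override void Write(string message) { /* write to the event log */ }
        }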

     This approach helps in decoupling. Hence dependency injection helps us develop testable, maintainable, and decoupled software components. There are various tools available for implementing dependency injection, like Ninject, Unity, etc.

    
    
    
    

    MVC4 Web API - Using Custom Action Names

    By default, in MVC Web API, the route configuration follows RESTful conventions, meaning that it will accept only the Get, Post, Put, and Delete action names. However, for better readability, a developer will definitely feel the need for their own meaningful action names. In this blog, I will explain how to implement custom action names.

    1. Let's first create a simple Web API. Create an ASP.NET MVC4 project by selecting the Web API project template.

    2. By default, a ValuesController and a HomeController are added as part of the solution.
    3. The following methods are added to the ValuesController.
             // GET api/values
            public IEnumerable<string> Get()
            {
                return new string[] { "value1", "value2" };
            }

            // GET api/values/5
            public string Get(int id)
            {
                return "helloss yogita";
            }

                
    4. Run the project and type the URL "http://localhost:56355/api/values/" in the address bar. Here is the response.
                                

    5. Similarly, now type the URL "http://localhost:56355/api/values/5" and observe the response.


    6. Now, these are the standard methods. But usually a developer would like to give more logical and meaningful method names. For example, the action name Get doesn't reveal much about what kind of information is returned from a particular action. Let's say I want to call an action GetContactInfo(string id), where id can be an EmployeeId.

    7. Let's write a method now in the ValuesController which accepts an id and returns contact information.
            public string GetContactInformation(string id)
            {
                return "Contact number is 1234567890";
            }
                
    8. Now run the project to see what is returned as the output of this method. Type this URL: "http://localhost:56355/api/values/GetContactInfo/5"


    Oops!!! This happened because Web API routing is not aware of this route.

    9. Let's fix it. Open App_Start -> WebApiConfig.cs and provide the route information.

     //declare the route handler for contacts api to handle CustomActionMethod GetContactInfo
                config.Routes.MapHttpRoute(
                    name: "ContactsApi",
                    routeTemplate: "api/{controller}/{action}/{id}",
                    defaults: new { id = RouteParameter.Optional }
                );

    10. Now we have mapped the route handler to the controller and action methods. Run the project and try the URL again.

    Oops!! It's not finding the resource again.

    11. Let's look more closely. The {action} segment in the route template must match the action name, but our URL used GetContactInfo while the method is named GetContactInformation. We can map the shorter name to the method with the [ActionName] attribute:

           //map the custom action name GetContactInfo to the GetContactInformation method
           [ActionName("GetContactInfo")]
           public string GetContactInformation(string id)
           {
               return "Contact number is 1234567890";
           }

               
    12. Let's run the URL again and observe the output.

    We have seen how to use meaningful action names and map them using the Web API routing template and the [ActionName] attribute.