
C# 8.0 New Feature–Interface Default Implementation for Methods

December 1, 2018 .NET, .NET 4.8, .NET Core, .NET Core 3.0, ASP.NET, Microsoft, Visual Studio 2017, VisualStudio, VS2017

With the upcoming C# 8.0 there is an interesting feature called default implementation bodies for methods within an interface definition. If you have a few method signatures defined and you want implementing classes to implement some of them only optionally (remember, previously all interface methods had to be implemented by the implementing classes), C# 8.0 lets you give those methods a default implementation body, which is used whenever an implementing class does not explicitly implement them.

When will we get C# 8.0?

C# 8.0 will be released alongside .NET Core 3.0 in the coming months. Currently the Preview 1 version is available to try out.

Get Started:

1.) First of all, download and install Preview 1 of .NET Core 3.0 and Preview 1 of Visual Studio 2019.


2.) Launch Visual Studio 2019 Preview, create a new project, and select “Console App (.NET Core)” as the project type.


3.) Once the project is up and running, change its target framework to .NET Core 3.0 (right click the project in Solution Explorer, select Properties and use the drop down menu on the Application tab).

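Alternatively, you can set the target framework by editing the .csproj directly. A minimal sketch of the relevant lines (depending on the preview build, you may also need to opt in to the preview language version explicitly):

   <TargetFramework>netcoreapp3.0</TargetFramework>
   <LangVersion>preview</LangVersion>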

Here is how it can be implemented:

using System;

namespace CSharp8Demo
{
    class Program
    {
        static void Main(string[] args)
        {
            Console.WriteLine("Hello World!");

            IVehicle bmw = new Bmw();
            bmw.DefaultMessage();   // no override in Bmw – the interface's default body runs

            IVehicle audi = new Audi();
            audi.DefaultMessage();  // Audi provides its own implementation
        }
    }


    interface IVehicle
    {
        // classic interface member – every implementing class must provide a body
        void DisplayMessage();

        // default implementation – used unless an implementing class provides its own
        void DefaultMessage() { Console.WriteLine("I am inside the default method in the interface!"); }
    }

    public class Bmw : IVehicle
    {
        public void DisplayMessage()
        {
            Console.WriteLine("I am BMW!!!");
        }
    }

    public class Audi : IVehicle
    {
        public void DisplayMessage()
        {
            Console.WriteLine("I am AUDI!!!");
        }
        public void DefaultMessage() => Console.WriteLine("I am inside the Audi class!");
    }
}
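When run, the program should produce the output below: bmw picks up the interface's default body, while audi supplies its own.

   Hello World!
   I am inside the default method in the interface!
   I am inside the Audi class!

Note that a default member is only reachable through the interface type; new Bmw().DefaultMessage() would not compile, because the Bmw class itself does not declare such a member.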

Global Office 365 Developer Bootcamp–Letterkenny-Nov’10 2018–Register Now

October 19, 2018 Boot Camp, Dev Community, Global Office 365 Developer Bootcamp, Microsoft, Office 365, SharePoint, VisualStudio, Windows

We have got the opportunity to host a Global Office 365 Developer Bootcamp in Letterkenny as part of the Letterkenny DotNet Azure User Group (LK-MUG).

Global Office 365 Developer Bootcamp – Overview?

Following the success of last year, the Global Office 365 Developer Bootcamp has now become an annual event.

  • It is a free, one-day, hands-on training event led by Microsoft MVPs with support from Microsoft and local community leaders.
  • Developers worldwide are invited to attend the bootcamp to learn the latest on the Office 365 platform, including Microsoft Graph, SharePoint Framework, Microsoft Teams, Office Add-ins, Connectors and Actionable Messages, and to apply what they learn to their future projects.
  • Watch the video to hear from Jeff Teper and Microsoft MVPs on 2018 Global Office 365 Developer Bootcamp.
  • Global Office 365 Developer Bootcamp will take place between October 1 and November 30, 2018.

[Quoting from: Official Site]

Letterkenny is the first venue in Ireland and is among the 59 venues announced so far.

You can get a glance at all the venues at: http://aka.ms/O365DevBootcamp

Letterkenny – Global Office 365 Developer Bootcamp – November 10th 2018

Below is the event announcement artwork and agenda. Looking forward to the event.


Seats are limited. If you would like to join us, please register using the link below:

REGISTER & RSVP: https://www.meetup.com/lk-mug/events/255066993/ 

Event Website: http://lk-mug.org/wp-event/global-office-365-developer-bootcamp-letterkennyireland/ 

About LK-MUG?

LK-MUG is a Microsoft-recognized user community established in Letterkenny, Donegal, Ireland under the full name “Letterkenny DotNet Azure User Group”. We currently operate with the support of the .NET Foundation and Microsoft. This community is for everyone interested in Microsoft .NET, Office 365, SharePoint, the Azure cloud platform and other Microsoft open-source initiatives.

  • All skill levels are welcome.
  • We are committed to helping you learn and share things about .Net, Office-365, SharePoint and Azure Cloud.
  • Our community brings together students, enthusiasts, experts and professionals working in and around County Donegal, Ireland.
  • From time to time we host events organized by Microsoft and MVPs to provide a best-in-class learning experience for Microsoft technology enthusiasts.

User Group URLs:

  Meetup: https://www.meetup.com/lk-mug/
  Facebook: https://www.facebook.com/mugdonegal
  Twitter: https://twitter.com/lkmsug
  LinkedIn: https://www.linkedin.com/groups/12121376/
  Website: http://www.lk-mug.org
  Email: lk-mug@outlook.com

Azure Cosmos DB – TTL (Time to Live) – Reference Use Case

October 9, 2018 .NET, .NET Core, .NET Framework, Analytics, Architecture, Azure, Azure Cosmos DB, Azure Functions, Azure IoT Suite, Cloud Computing, Cold Path Analytics, CosmosDB, Emerging Technologies, Hot Path Analytics, Intelligent Cloud, Intelligent Edge, IoT Edge, IoT Hub, Microsoft, Realtime Analytics, Visual Studio 2017, VisualStudio, VS2017, Windows

The TTL capability within Azure Cosmos DB is a lifesaver, as it takes the necessary steps to purge redundant data based on the configuration you set.

Think of an industrial IoT scenario: devices produce vast amounts of telemetry, logs and user-session information that is only useful until we operate on it and take action, that is, for a finite period of time. Once that data becomes surplus, we need application logic that purges these old records.

With “Time to Live”, or TTL, Azure Cosmos DB provides the ability to have your documents automatically purged from database storage after a certain period of time (which you configure).

  • TTL is set by default at the document-collection level and can later be overridden on a per-document basis.
  • Once TTL is set, the Cosmos DB service automatically removes documents when their lifetime is over.
  • In order to track TTL, Cosmos DB uses an offset field to check when a document was last modified. This field, “_ts”, exists in every document you create; it is a UNIX epoch timestamp and is updated every time the document is modified. [Ref: Picture1]

[Picture1]
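Because _ts is expressed in seconds since the UNIX epoch, converting it to a readable timestamp in C# is straightforward. A quick sketch (the _ts value here is a made-up sample):

   long ts = 1538984367; // sample _ts value read from a document
   DateTimeOffset lastModified = DateTimeOffset.FromUnixTimeSeconds(ts);
   Console.WriteLine(lastModified.UtcDateTime); // prints the UTC time of the last write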

Enabling TTL on Cosmos DB Collection:

You can enable TTL on a Cosmos DB collection simply from the Azure Portal (in the Cosmos DB collection settings for an existing collection, or while creating a new one).

The TTL value needs to be set in seconds. If you need 90 days: 60 sec × 60 min × 24 hr × 90 days = 7,776,000 seconds.

[Picture2]
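The same default can also be set from code. Below is a minimal sketch using the .NET DocumentDB SDK (Microsoft.Azure.DocumentDB); the account endpoint, key and database/collection names are placeholders, and the per-document override uses the documented convention of a “ttl” property on the document:

using System;
using System.Threading.Tasks;
using Microsoft.Azure.Documents;
using Microsoft.Azure.Documents.Client;
using Newtonsoft.Json;

class TtlSetup
{
    // A telemetry document with an optional per-document TTL override.
    class TelemetryReading
    {
        [JsonProperty("id")] public string Id { get; set; }
        [JsonProperty("ttl")] public int? TimeToLive { get; set; } // null = inherit collection default
    }

    static async Task Main()
    {
        var client = new DocumentClient(
            new Uri("https://<your-account>.documents.azure.com:443/"), "<your-key>");

        // Collection-level default: purge documents 90 days after their last write.
        var collection = new DocumentCollection
        {
            Id = "telemetry",
            DefaultTimeToLive = 90 * 24 * 60 * 60 // 7,776,000 seconds
        };
        await client.CreateDocumentCollectionIfNotExistsAsync(
            UriFactory.CreateDatabaseUri("iotdb"), collection);

        // Per-document override: this reading expires after one hour instead.
        await client.CreateDocumentAsync(
            UriFactory.CreateDocumentCollectionUri("iotdb", "telemetry"),
            new TelemetryReading { Id = "reading-001", TimeToLive = 3600 });
    }
}

Setting DefaultTimeToLive to -1 keeps TTL enabled with no default expiry, so only documents carrying their own ttl value are purged.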

Below is one reference architecture in which Cosmos DB TTL is essentially useful and viable for an IoT business case:

[Picture3]

Hope that was helpful in getting some understanding. For more, visit the Cosmos DB Documentation.

Azure Cosmos DB – 429 Too Many Requests

October 6, 2018 .NET, Azure, CosmosDB, Document DB, Microsoft, Performance, Reliability, Resiliency, Scalability, Visual Studio 2017, VisualStudio, VS2017

Recently, while doing performance testing on one of the APIs interacting with Cosmos DB, I encountered a problem: the Azure Cosmos DB APIs started returning HTTP status code 429. HTTP 429 indicates that too many requests have been received, i.e. the request rate is too large. This error happens when concurrent users try to write to or read from the same Cosmos DB collection.

The following diagram covers the architecture of the performance test I was performing:


On analysis it turned out to be throttling by Azure Cosmos DB, as we were issuing requests that consumed more than the provisioned Request Units (RU) per second. We were using the default Cosmos DB configuration for a fixed collection of 1,000 RU/s, which is sufficient for roughly 500 reads and 100 writes per second of 1 KB documents. You can read more about Request Units in the Azure Docs.


Solution(s):

1. The first logical step is to get rid of this error by increasing the throughput of the collection. I increased it to 10,000 RU/s, the maximum allocatable for Storage Capacity: Fixed. This should comfortably handle 250 or more virtual users hitting the API.


2. The second logical step is to improve the code: tune the connection parameters of the DocumentDB SDK's DocumentClient. For this I referred to the Microsoft docs: Performance tips for Azure Cosmos DB and .NET.

Provide suitable values for the following properties of the RetryOptions class, which is passed in via the ConnectionPolicy.


In my case a value of 30 gave the best results:

new RetryOptions() { MaxRetryAttemptsOnThrottledRequests = 30, MaxRetryWaitTimeInSeconds = 30 }
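For context, here is a minimal sketch of how those retry options plug into the client; the endpoint and key are placeholders, and Direct/TCP mode is one of the performance tips from the same docs:

using System;
using Microsoft.Azure.Documents.Client;

static class CosmosClientFactory
{
    public static DocumentClient Create()
    {
        var connectionPolicy = new ConnectionPolicy
        {
            ConnectionMode = ConnectionMode.Direct,   // direct TCP connectivity, per the performance tips
            ConnectionProtocol = Protocol.Tcp,
            RetryOptions = new RetryOptions
            {
                MaxRetryAttemptsOnThrottledRequests = 30, // retry up to 30 times on 429s
                MaxRetryWaitTimeInSeconds = 30            // cap cumulative retry wait at 30 seconds
            }
        };

        return new DocumentClient(
            new Uri("https://<your-account>.documents.azure.com:443/"),
            "<your-key>",
            connectionPolicy);
    }
}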

That should resolve most of the 429 issues when dealing with the Cosmos DB SDK.

Introduction to NDepend : Static Code Analysis Tool

June 16, 2018 .NET, .NET Core, .NET Framework, ASP.NET, Best Practices, C#.NET, Code Analysis, Code Quality, Dynamic Analysis, Emerging Technologies, Help Articles, Microsoft, Static Analysis, Tech-Trends, Tools, Visual Studio 2017, VisualStudio, Windows

As a developer, you always take pains to adapt to the best practices and coding guidelines required by organizational or industry standards. An easy way to ensure your code follows a given standard is to analyze it manually, or to use a static code analyzer like FxCop, StyleCop etc. In earlier days I was a fan of FxCop, as it was free and gave me all the general guidance I needed to improve my solutions.

In the modern world of programming everything needs to be automated, as automating repetitive tasks saves time and money and improves efficiency. This is where static code analyzers become effective.

What is Static Code Analysis?

Static program analysis is the analysis of computer software performed without actually executing the program, usually on some version of the source code, or in other cases on some form of object code or intermediate compiled code.

The sophistication of static program analysis varies with how deeply it analyzes the code, from the behavior of individual statements and declarations up to the entire source code.

PS: Analysis performed on executing programs is known as dynamic analysis.

In this article I will give you an overview of one such premier static code analysis tool, which you can use in your daily development routine and also integrate into CI for DevOps efficiency.

NDepend:

NDepend is a static analysis tool for .NET, specifically for managed code. NDepend supports a large number of code metrics and allows you to visualize dependencies using directed graphs and a dependency matrix. It also performs code-base snapshot comparisons and validates architectural and quality rules.

The important capabilities of NDepend are:

  • Dependency visualization through a dependency matrix and graphs.
  • Analyse and generate software quality metrics – as per the documentation it supports 82 quality metrics.
  • Declarative rule support through LINQ queries, called CQLinq, with a large number of predefined CQLinq rules (see the sketch after this list).
  • Integration support for CruiseControl.NET, SonarQube and TeamCity. Code rules can be checked automatically in Visual Studio or during continuous integration (CI).
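To give a flavour of CQLinq, here is a minimal rule sketch written in the documented query style; the 30-line threshold is just an illustrative number:

// <Name>Methods too big</Name>
warnif count > 0
from m in Application.Methods
where m.NbLinesOfCode > 30
orderby m.NbLinesOfCode descending
select new { m, m.NbLinesOfCode }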

License: NDepend is a commercial tool with licensing options as below:

  1. Developer seats – $477 approx. / per seat.
  2. Build Machine seats  – $955 approx. / per seat.

** You could get a volume discount if you procure your licenses in bulk.

Installation: 

Once you obtain a license you will be able to download NDepend_2018.1.1.9041.zip, the latest version available while writing this article. Extract the zip file into a local folder and you will see the different packages/executables within it.


1.) NDepend.Console – Command-line program to execute NDepend analysis. You would mostly use this component on a CI build server.
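A typical invocation just points the console runner at an NDepend project file (the path below is a placeholder):

   NDepend.Console.exe "C:\Projects\MyApp\MyApp.ndproj"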

2.) NDepend.PowerTools – Helps you write your own static analyzer based on NDepend.API, or tweak the existing open-source Power Tools.


3.) NDepend.VisualStudioExtension.Installer – Installs the NDepend extension into Visual Studio.


4.) VisualNDepend – A standalone visual environment for managing your NDepend tasks.


The visual tool gives you different options to choose from:

  • You can analyse a Visual Studio Solution or project.
  • Analyse .NET assemblies in a folder.


For demo purposes our analysis target is one of the starter projects from GitHub: ContosoUniversity by @alimon808.


Demo: Summary Report

Demo: Application Metrics

Demo: Dependency Dashboard

Demo: Interactive Graph

Demo: Code Matrix View

Demo: Quality Gates Summary

Demo: Rules Summary

Conclusion:

NDepend is one of the best enterprise-grade commercial static analyzers I have seen so far. Visual Studio Code Analysis, FxCop and StyleCop analyzer tools are available, but they do not provide the extensive analysis reports NDepend does. Being a commercial tool, it gives customers value for money for what they need. In a day-to-day developer or DevOps lifecycle, you can integrate NDepend into your build process, which can be as simple as executing NDepend.Console and reviewing the output. With NDepend's API it is easy to develop your own custom analysis tools based on CQLinq and NDepend.PowerTools (which is open source). You can find all the detailed help in the NDepend documentation.


Azure Cosmos DB – Programmatically Connect to a Preferred Location Using the SQL API

May 29, 2018 .NET, Azure, CosmosDB, Microsoft, VisualStudio, Windows, Windows Azure Development

Cosmos DB is a multi-region-scalable, globally distributed database solution that is part of the Microsoft Azure platform. With a button click, Azure Cosmos DB enables you to elastically and independently scale throughput and storage across any number of Azure's geographic regions. It offers throughput, latency, availability, and consistency guarantees with comprehensive service level agreements (SLAs) that no other database service can offer. [REF]

What is multi-region scalability or global distribution?

What this means is that once you select this option, the underlying platform ensures that your main database is replicated across the other global regions you have defined.

So when a customer/application requests the data from a certain geo location:

  1. Cosmos DB serves the data from the nearest available regional copy to provide low-latency access to the database. In order to achieve this, it is recommended to deploy both the application and Azure Cosmos DB in corresponding regions.
  2. In case a nearest region is not defined, it serves from the nearest available region or the main copy. This could be East US or West US depending on your deployment decisions.
  3. As a BCDR (Business Continuity and Disaster Recovery) measure, in case the main copy is not available, it fails over to serve requests from a backup region.

Benefits?

  • Ensured AVAILABILITY @ 99.99% – Azure Cosmos DB offers low-latency reads and writes at the 99th percentile worldwide.
  • Faster READS: all reads are served from the closest (local) region; to serve a read request, the quorum local to the region in which the read is issued is used.
  • Reliable WRITES: the same applies to writes. A write is acknowledged only after a majority of replicas has durably committed it locally, without being gated on remote replicas acknowledging the write.

PS: The replication protocol of Azure Cosmos DB operates under the assumption that the read and write quorums are always local to the region where the request has been issued.

How to turn on multi-region replication in Cosmos DB?

In the Cosmos DB instance settings, open the Replicate data globally page, then add or remove regions by clicking them on the map.

Azure Cosmos DB enables you to configure the regions (associated with the database) as “read”, “write” or “read/write” regions.


Then configure the Manual/Automatic failover options as well; I will cover these in later articles.

All that said, as a Cosmos DB customer or user you are in the good hands of the Azure platform.

NB: For the purposes of this article, I have configured my instance to run in different regions, with the write region as East US and read regions as West Europe, North Europe and West US.


Programmatically Connect to a Preferred Location Using the SQL API:

Now, coming to the context of this blog: as an application developer you sometimes want to programmatically control access to these regions while using the Cosmos DB .NET SQL API.

In the Cosmos DB .NET SDK version 1.8 and later, the ConnectionPolicy parameter of the DocumentClient constructor has a property called Microsoft.Azure.Documents.ConnectionPolicy.PreferredLocations.

  • All reads are sent to the first available region in the PreferredLocations list. If a request fails, the client falls back to the next region in the list, and so on.
  • The SDK automatically sends all writes to the current write region.
  • The SDK only attempts to read from the regions specified in PreferredLocations.
  • For example: if you have four read regions defined in your Cosmos DB instance but only two regions listed in PreferredLocations in the ConnectionPolicy, the SDK will never serve reads from the other two regions.

NB: The client application can verify the current write endpoint and read endpoint chosen by the SDK by checking two properties, WriteEndpoint and ReadEndpoint (SDK version 1.8+).

The following code snippet makes this easier to implement:

 
   // assumes: using Microsoft.Azure.Documents; using Microsoft.Azure.Documents.Client;
   var connectionPolicy = new ConnectionPolicy();

   // Setting read region selection preference.
   connectionPolicy.PreferredLocations.Add(LocationNames.EastUS);     // application's first preference
   connectionPolicy.PreferredLocations.Add(LocationNames.WestEurope); // application's second preference
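Putting it together, here is a minimal sketch of creating the client with a preferred-location policy and verifying the endpoints the SDK picked (the account endpoint and key are placeholders):

using System;
using Microsoft.Azure.Documents;
using Microsoft.Azure.Documents.Client;

class PreferredLocationDemo
{
    static void Main()
    {
        var connectionPolicy = new ConnectionPolicy();
        connectionPolicy.PreferredLocations.Add(LocationNames.EastUS);     // first preference
        connectionPolicy.PreferredLocations.Add(LocationNames.WestEurope); // second preference

        var client = new DocumentClient(
            new Uri("https://<your-account>.documents.azure.com:443/"),
            "<your-key>",
            connectionPolicy);

        // SDK 1.8+ exposes the endpoints it actually chose.
        Console.WriteLine("Write endpoint: " + client.WriteEndpoint);
        Console.WriteLine("Read endpoint:  " + client.ReadEndpoint);
    }
}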

Full Source Code: https://github.com/AzureContrib/CosmosDB-DotNet-Quickstart-Preferred-Location 
