Visitor Pattern

When we look at design patterns, we find simple ones like the Singleton pattern and more involved ones like the Mediator pattern, and most of them are explained everywhere. Today we will discuss the Visitor pattern, which is one of the more poorly explained patterns yet brings a lot of benefit to our daily work.

To explain this pattern and how it can help us, I'll use a small and simple example to demonstrate how useful it is.

Assume you have these requirements:

Write a library that helps users work with shapes like Square and Circle. The library should be able to calculate the area of each shape, but keep in mind while writing your code that we may want to add new operations for these shapes later.


👌 This looks simple. At first glance we can do the following:

public interface IShape
{
    double CalculateArea();
}

public class Square : IShape
{
    public double Side { get; init; }

    public double CalculateArea()
    {
        return Side * Side;
    }
}

public class Circle : IShape
{
    public double Radius { get; init; }

    public double CalculateArea()
    {
        return Math.PI * Radius * Radius;
    }
}

and we can use it like this:

public static void Main(string[] args)
{
    IShape shape = new Square()
    {
        Side = 16,
    };
    double area = shape.CalculateArea();

    Console.WriteLine($"Area of {shape.GetType().Name} = {area}");
}

How Can the Visitor Pattern Help You?

Imagine your manager comes back a few days later and tells you he wants a function that calculates the perimeter. Yes, I know, it is getting difficult 🤦‍♂️: you need to change the interface to add the new function, then visit every concrete class to add an implementation for it, and it gets more complex every time you add a new shape. By using this beautiful little pattern we can extend our code without worrying about such changes. It will look like this:

public interface IShapeVisitor
{
    double Visit(Square square);
    double Visit(Circle circle);
}
public interface IShape
{
    double Accept(IShapeVisitor visitor);
}
 
public class Square : IShape
{
    public double Side { get; init; }

    public double Accept(IShapeVisitor visitor)
    {
        return visitor.Visit(this);
    }
}

public class Circle : IShape
{
    public double Radius { get; init; }

    public double Accept(IShapeVisitor visitor)
    {
        return visitor.Visit(this);
    }
}

public class AreaVisitor : IShapeVisitor
{
    public double Visit(Square square)
    {
        return square.Side * square.Side;
    }

    public double Visit(Circle circle)
    {
        return Math.PI * circle.Radius * circle.Radius;
    }
}

If you look carefully at the new implementation, you can see that our concrete classes no longer know how to calculate the area; they just accept a visitor, and the implementation has moved into the visitor. You can then use it like this:

public static void Main(string[] args)
{
    IShape shape = new Square()
    {
        Side = 16,
    };
    double area = shape.Accept(new AreaVisitor());  // this is where the magic happens

    Console.WriteLine($"The Area of {shape.GetType().Name} equals {area}");
}

Now, like a boss, you can add the new operation in one simple step:

public class  PerimeterVisitor : IShapeVisitor
{
    public double Visit(Square square)
    {
        return 4 * square.Side;
    }

    public double Visit(Circle circle)
    {
        return 2 * Math.PI * circle.Radius;
    }
}
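And the client code barely changes; here is a small usage sketch built only from the classes above (the radius value is arbitrary):

IShape shape = new Circle { Radius = 3 };

// Same shape hierarchy, different visitor: no existing class changes.
double area = shape.Accept(new AreaVisitor());
double perimeter = shape.Accept(new PerimeterVisitor());

Console.WriteLine($"Area = {area}, Perimeter = {perimeter}");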

Each operation's logic is now encapsulated in a single class. This also satisfies the Open/Closed principle: we don't edit any existing code, we just add a new visitor.

Amazing, isn't it? 😎

What we learned:

When the Visitor is a good fit:

  • Visitor lets you define a new operation over a class hierarchy without changing the classes themselves.
  • It is beneficial when the class hierarchy is known and is not expected to change, while new operations must be added on a regular basis.

When the Visitor is a bad fit:

  • New concrete classes must be added on a regular basis, since every new class forces a change in every visitor.
  • The class hierarchy is not stable and is expected to change.

Anemic Domain Model VS Rich Domain Model

In the last few years we have noticed a lot of companies adopting DDD in their application architecture, which has raised many discussions between teams about ideas and designs. One of these discussions is the difference between the Anemic Domain Model and the Rich Domain Model.

In this article I will try to highlight the main difference in a few words and with a simple example.

Anemic Domain Model

The simple idea behind the Anemic Domain Model is to have a model with a set of getters and setters that only contains data and no methods reflecting business rules or calculations. You then create another class, named something like Helper, Service, or Manager, which contains all the business rules and calculations, taking the object as an argument in order to change the model's state or apply whatever other business rules are needed.

As an example:

static void Main()
{
    Employee employee = new Employee()
    {
        Age = 35,
        BaseSalary = 1000,
        JobPosition = Position.RegularEmployee,
        Name = "Test Employee"
    };

    EmployeeService employeeService = new EmployeeService();
    Console.WriteLine(employeeService.CalculateSalaryIncrease(employee));
}

public enum Position
{
    RegularEmployee,
    Senior,
    Manager
}

public class Employee
{
    public string Name { get; set; }
    public int Age { get; set; }
    public decimal BaseSalary { get; set; }
    public Position JobPosition { get; set; }
}

public class EmployeeService
{
    // Example increase rates; the values are only placeholders for the demo.
    private const decimal BaseIncrease = 0.05m;
    private const decimal SeniorBonus = 0.10m;
    private const decimal ManagerBonus = 0.15m;

    public decimal CalculateSalaryIncrease(Employee employee)
    {
        if (employee == null)
        {
            throw new ArgumentNullException(nameof(employee));
        }

        decimal totalIncrease = employee.BaseSalary;

        switch (employee.JobPosition)
        {
            case Position.RegularEmployee:
                totalIncrease = employee.BaseSalary * BaseIncrease;
                break;
            case Position.Senior:
                totalIncrease = employee.BaseSalary * SeniorBonus;
                break;
            case Position.Manager:
                totalIncrease = employee.BaseSalary * ManagerBonus;
                break;
        }

        return totalIncrease;
    }
}

“This is one of those anti-patterns that’s been around for quite a long time. The fundamental horror of this anti-pattern is that it’s so contrary to the basic idea of object-oriented design, which is to combine data and process together. The anemic domain model is really just a procedural style design.”

— Martin Fowler

Rich Domain Model

The idea in the Rich Domain Model is to have the data and the behavior in the same place, expressing the behavior through public methods on the object. By doing that, we give the responsibility for keeping the object's state to the object itself.

static void Main()
{
    Employee employee = new Employee()
    {
        Age = 35,
        BaseSalary = 1000,
        JobPosition = Position.RegularEmployee,
        Name = "Test Employee"
    };

    Console.WriteLine(employee.CalculateSalaryIncrease());
}

public class Employee
{
    // Example increase rates; the values are only placeholders for the demo.
    private const decimal BaseIncrease = 0.05m;
    private const decimal SeniorBonus = 0.10m;
    private const decimal ManagerBonus = 0.15m;

    public string Name { get; set; }
    public int Age { get; set; }
    public decimal BaseSalary { get; set; }
    public Position JobPosition { get; set; }

    public decimal CalculateSalaryIncrease()
    {
        decimal totalIncrease = BaseSalary;

        switch (JobPosition)
        {
            case Position.RegularEmployee:
                totalIncrease = BaseSalary * BaseIncrease;
                break;
            case Position.Senior:
                totalIncrease = BaseSalary * SeniorBonus;
                break;
            case Position.Manager:
                totalIncrease = BaseSalary * ManagerBonus;
                break;
        }

        return totalIncrease;
    }
}

Conclusion

Martin Fowler, in his article, defined the Anemic Model as an anti-pattern. I think it depends on the type of the application and its complexity. I hope this small article gives you at least a basic idea of the difference between these two approaches to development. For more information you can refer to:

https://martinfowler.com/bliki/AnemicDomainModel.html

https://enterprisecraftsmanship.com/

Health Checks for Microservices applications

In recent years we have seen huge adoption of the Microservices Architecture; it has become increasingly popular, more so than the Monolithic Architecture, and it has convinced big names to adopt it (Amazon, Netflix, Airbnb, and more).
In this article we will not talk too much about this architecture, but I will list some of its advantages:
1. High availability
2. Flexibility
3. Better scaling
4. Rapid growth facilitation
and more…

Drawbacks

Besides its advantages, you have to know there are a few drawbacks, the main one being complexity. The complexity of a microservices architecture is related to the number of services involved. For example, if I have a monolithic project with about 5 services and each service contains 4-5 actions, we end up with roughly 25 endpoints, all centralized and managed in one place such as a single API project. With a microservices architecture I instead have to create 5 distributed web API projects, each service with its own boundary, and I think the complexity of managing and maintaining these 5 distributed applications is now clear to you.

Also, what if one of the services is down? What if it loses its connection to the database, or the server memory is full, or the service simply cannot handle the request?

Solution

To solve the above issues we need to check the services' health. Every one of the 5 services must have its own health checks, such as storage, memory, and database connectivity checks. Failure alerts should then be collected and sent to the development or operations teams so they can investigate and fix the issues.

Implementation

At first you might think there is a lot of work to do to reach that level of health checking for your microservices application.

But luckily, when developing an ASP.NET Core microservice or web application, you can use the built-in health checks feature that was released in ASP.NET Core 2.2, like many other ASP.NET Core built-in features.

Let's code now 😎

1. Create your API project

2. Implement the ASP.NET Core built-in feature

Now go to Startup.cs and wire up the built-in feature.
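A minimal sketch of what that Startup.cs wiring can look like; the "/health" route is just an assumption, pick whatever path you prefer:

public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        // Registers the built-in health check services (available since ASP.NET Core 2.2).
        services.AddHealthChecks();
    }

    public void Configure(IApplicationBuilder app)
    {
        // Exposes the health endpoint; "/health" is only a conventional path.
        app.UseHealthChecks("/health");
    }
}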

Let's run the application to test the result.

3. Implement a custom SQL Server health check
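Here is a hedged sketch of what a custom checker implementing IHealthCheck could look like; the class name SQLServerHealthCheckService follows the article, while the connection handling and the "SELECT 1" probe query are assumptions:

using System;
using System.Data.SqlClient;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Extensions.Diagnostics.HealthChecks;

public class SQLServerHealthCheckService : IHealthCheck
{
    private readonly string _connectionString;

    public SQLServerHealthCheckService(string connectionString)
    {
        _connectionString = connectionString;
    }

    public async Task<HealthCheckResult> CheckHealthAsync(
        HealthCheckContext context, CancellationToken cancellationToken = default)
    {
        try
        {
            // A successful connection (and a trivial query) means SQL Server is reachable.
            using (var connection = new SqlConnection(_connectionString))
            {
                await connection.OpenAsync(cancellationToken);

                using (var command = connection.CreateCommand())
                {
                    command.CommandText = "SELECT 1";
                    await command.ExecuteScalarAsync(cancellationToken);
                }
            }

            return HealthCheckResult.Healthy("SQL Server is reachable.");
        }
        catch (Exception ex)
        {
            return HealthCheckResult.Unhealthy("SQL Server check failed.", ex);
        }
    }
}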

4. Register the SQL checker class

Now, in Startup.cs, register your SQLServerHealthCheckService class in the ConfigureServices method.
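A sketch of that registration, assuming the usual Configuration property on Startup; the check name and the "DefaultConnection" connection string key are placeholders:

public void ConfigureServices(IServiceCollection services)
{
    services.AddHealthChecks()
        .AddCheck("SQL Server", new SQLServerHealthCheckService(
            Configuration.GetConnectionString("DefaultConnection")));
}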

If you don't want to create SQLServerHealthCheckService yourself, you can download the AspNetCore.HealthChecks.SqlServer package from NuGet and use it like below.
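With that package the same check collapses to a single registration call; a sketch, again assuming a "DefaultConnection" connection string:

services.AddHealthChecks()
    .AddSqlServer(Configuration.GetConnectionString("DefaultConnection"), name: "SQL Server");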

Now, if you run the application and browse to the health URL while any of the checks is not working correctly, you will get an Unhealthy result like below.

Now, to get more information and see the status of all the checks, we need to do a bit more configuration.

Install the AspNetCore.HealthChecks.UI and AspNetCore.HealthChecks.UI.Client packages from NuGet,

then do the following configuration:
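A hedged sketch of the UI wiring for the package versions of that era; the UI also needs a HealthChecksUI section in appsettings.json pointing at the /health endpoint, which is omitted here:

public void ConfigureServices(IServiceCollection services)
{
    services.AddHealthChecks()
        .AddSqlServer(Configuration.GetConnectionString("DefaultConnection"));

    // Adds the dashboard services that poll and store health results.
    services.AddHealthChecksUI();
}

public void Configure(IApplicationBuilder app)
{
    // HealthCheckOptions lives in Microsoft.AspNetCore.Diagnostics.HealthChecks,
    // UIResponseWriter in the HealthChecks.UI.Client package.
    app.UseHealthChecks("/health", new HealthCheckOptions
    {
        Predicate = _ => true,
        // Produces the detailed JSON payload the UI understands.
        ResponseWriter = UIResponseWriter.WriteHealthCheckUIResponse
    });

    // Serves the dashboard (at /healthchecks-ui by default).
    app.UseHealthChecksUI();
}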

Now run your application to test the result when the connection string is wrong.

After we fix the SQL connection we will get a result like below.

Now we need to add a checker for the disk storage so we get an alert if the free disk space drops below a specific size.

To get there we need to install the AspNetCore.HealthChecks.System NuGet package.

After installing the package, do the following configuration:
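A sketch of the disk-storage registration; the C: drive and the 1024 MB threshold are arbitrary example values:

services.AddHealthChecks()
    .AddDiskStorageHealthCheck(options =>
    {
        // Report Unhealthy when drive C: has less than 1024 MB of free space.
        options.AddDrive(@"C:\", 1024);
    }, name: "Disk storage");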

Now run your application to test the result.

Basically, HealthChecks packages include health checks for:

  • SQL Server (AspNetCore.HealthChecks.SqlServer)
  • MySql (AspNetCore.HealthChecks.MySql)
  • Oracle (AspNetCore.HealthChecks.Oracle)
  • Sqlite (AspNetCore.HealthChecks.SqLite)
  • RavenDB (AspNetCore.HealthChecks.RavenDB)
  • Postgres (AspNetCore.HealthChecks.Npgsql)
  • EventStore (AspNetCore.HealthChecks.EventStore)
  • RabbitMQ (AspNetCore.HealthChecks.RabbitMQ)
  • Elasticsearch (AspNetCore.HealthChecks.Elasticsearch)
  • Redis (AspNetCore.HealthChecks.Redis)
  • System: Disk Storage, Private Memory, Virtual Memory (AspNetCore.HealthChecks.System)
  • Azure Service Bus: EventHub, Queue and Topics (AspNetCore.HealthChecks.AzureServiceBus)
  • Azure Storage: Blob, Queue and Table (AspNetCore.HealthChecks.AzureStorage)
  • Azure Key Vault (AspNetCore.HealthChecks.AzureKeyVault)
  • Azure DocumentDb (AspNetCore.HealthChecks.DocumentDb)
  • Amazon DynamoDb (AspNetCore.HealthChecks.DynamoDB)
  • Amazon S3 (AspNetCore.HealthChecks.Aws.S3)
  • Network: Ftp, SFtp, Dns, TCP port, Smtp, Imap (AspNetCore.HealthChecks.Network)
  • MongoDB (AspNetCore.HealthChecks.MongoDb)
  • Kafka (AspNetCore.HealthChecks.Kafka)
  • Identity Server (AspNetCore.HealthChecks.OpenIdConnectServer)
  • Uri: single Uri and Uri groups (AspNetCore.HealthChecks.Uris)
  • Consul (AspNetCore.HealthChecks.Consul)
  • Hangfire (AspNetCore.HealthChecks.Hangfire)
  • SignalR (AspNetCore.HealthChecks.SignalR)
  • Kubernetes (AspNetCore.HealthChecks.Kubernetes)

Summary

In this article we implemented a Health Checks API using built-in features of ASP.NET Core as well as some NuGet packages. There is much more to say about monitoring and microservices; I hope this article is a first step into this field and gives you an idea of some key concepts and challenges that face anyone dealing with large-scale applications.

You can also find the source code of the project at the link:

https://drive.google.com/file/d/1nG5AyT0KcGlcCePH61pzWgh5nFdwmI1H/view?usp=sharing

GraphQL – Core concepts

In the previous article we learned about GraphQL and how much it changes the way we integrate systems and exchange data. In this article we will talk about some fundamental language constructs of GraphQL, such as the GraphQL schema, queries, mutations, and subscriptions. GraphQL has its own type system that is used to define the schema of an API; the syntax for writing a schema is called the SDL (Schema Definition Language). GraphQL services can be written in any language, so since we can't rely on the syntax of a specific programming language like C#, we'll use the GraphQL schema language. It's similar to the query language and lets us talk about GraphQL schemas.

Object types

An object type represents an entity of your project, like a person, a school, or a blog. The examples below show how to define two object types, one called Author and the other called Post.

type Author {
  name: String!
  age: Int!
}

type Post {
  name: String!
  createDate: Int!
  description: String
}

Note that the exclamation mark after a field type means the field is required. It's also possible to create relationships between GraphQL types; in GraphQL that's simply called a relation. To add a relation between the Author and Post types, expressing that one Author can have many posts, we just need to add an author field to the Post type, like below:

type Post {
  name: String!
  createDate: Int!
  description: String
  author: Author!
}

Then we add a posts field to the Author type to express that an author can have multiple posts. As in many other languages, square brackets specify that posts is a list:

type Author {
  name: String!
  age: Int!
  posts: [Post!]!
}

Arguments

Every field on a GraphQL object type can have zero or more arguments, and when working with arguments you have to consider the following:

  1. All arguments are named. Unlike languages such as JavaScript and Python, where functions take a list of ordered arguments, all arguments in GraphQL are passed by name.
  2. Arguments can be required or optional. When an argument is optional, you can define a default value.

For example, an allPosts field can take an optional rowCount argument; if the client does not supply a value, it takes the default value of 10:

type Query {
  allPosts(rowCount: Int = 10): [Post!]!
}

Mutations

Most applications need to make changes to the data currently stored in the backend. These changes are made using mutations. A mutation follows the same structure as a query but always needs to start with the mutation keyword. Mutations come in three flavors:

  1. Creating new data
  2. Updating existing data
  3. Deleting data

type Mutation {
  createPost(name: String!, description: String): Post
}

As you noticed, the mutation also has a root field (createPost). We have already seen the concept of arguments for fields; in this case the createPost field takes two arguments that specify the new post's name and description.

Scalar types

GraphQL comes with a set of default scalar types out of the box:

  1. Int: A signed 32-bit integer.
  2. Float: A signed double-precision floating-point value.
  3. String: A UTF-8 character sequence.
  4. Boolean: true or false.
  5. ID: The ID scalar type represents a unique identifier, often used to refetch an object or as the key for a cache. The ID type is serialized in the same way as a String; however, defining it as an ID signifies that it is not intended to be human-readable.

In most GraphQL service implementations there is also a way to specify custom scalar types. For example, we could define a Date type by declaring scalar Date and then deciding in our implementation how it is serialized and validated.

Enumeration types

Also called enums, these types are a special kind of scalar restricted to a particular set of allowed values:

enum Gender {
  Male
  Female
  Other
}

Interfaces

An interface is an abstract type that includes a certain set of fields that a type must include in order to implement the interface.

interface Blog {
  id: ID!
  name: String!
  posts: [Post!]
  authors: [Author!]!
}

These are some of GraphQL's main schema types. You can find other types, such as union types and the Query type, on the GraphQL documentation page; you can also check the references below.

https://graphql.org/learn

https://www.howtographql.com/graphql-js/3-a-simple-mutation/

https://channel9.msdn.com/Series/GraphQL

Exploring GraphQL

So what’s this GraphQL Technology ?

In 2012 Facebook invented GraphQL, then released it publicly in 2015. Since that time the GraphQL community has grown exponentially; companies and individuals from all over the world have joined it. Airbnb, Spotify, Facebook, Walmart, GitHub and more are adopting it in different ways. GraphQL, in simple words, is a query language that gives API clients the power to specify exactly what data they need by building a query that contains the definition of the requested data. It also provides a complete description of our data.

Why did Facebook develop GraphQL?

In the past, REST was the popular way to get data from the server, but when REST was introduced to the world, applications were simple and the pace of development wasn't what it is today.
Now everything has changed: applications have become more massive and the transferred data has grown more complex.
The main factors that have been challenging the way APIs are designed (increased mobile usage, low-power devices, sloppy networks, poor performance, and APIs that are hard to understand) were the initial reasons why Facebook developed GraphQL.

GraphQL minimizes the amount of data that needs to be transferred over the network. With GraphQL there is no need to develop a huge number of endpoints to fit the requirements of all the different clients; every client can request exactly the data it needs. This allows continuous deployment to become the standard for companies with rapid development iterations and product updates.

It's interesting to note that other companies, like Netflix and Coursera, were working on comparable ideas to make API interactions more efficient.
Coursera envisioned a similar technology that lets a client specify its data requirements, and Netflix even open-sourced their solution, called Falcor.

What problems does GraphQL solve?

Over the past years REST has become the standard for designing web APIs, and it came with several great ideas, such as stateless servers and structured access to resources. But these conventions have become a restriction on how the server exposes its data to the client, so GraphQL was developed to address the need for more flexibility and efficiency in client-server interaction. To get a better view of the differences between GraphQL and REST, let's look at the issues below.

1) Over-fetching and under-fetching of data

The real problem in REST that annoys every developer, in my opinion, is over-fetching and under-fetching of data, because REST always returns a fixed data structure and you can't get that data unless you create an endpoint for it. For example, if we need a small piece of employee information like FirstName, LastName, and DepartmentId, we can't get it without calling an endpoint that returns the whole object for us,

 for example https://domain.com/api/Person/512

Under-fetching is the opposite problem. If we want data mixed from two or more resources (post, comment, blog), we need to make separate calls to all the different endpoints. In a huge application this doesn't help, since some clients don't need all your modules, and it's not a good idea to request several endpoints just to get what I need. To see the issue, imagine an API with hundreds of endpoints: how easy will it be to get data from that API, and how hard will it be for the developer to maintain and update all those endpoints?

2) A lot of API endpoints

As we mentioned in the previous point, if you need to access some resource you have to implement an endpoint for it. If you want to make a GET request you need an endpoint specifically for that request, and the same applies to all the other HTTP verbs (POST, PUT, DELETE, ...). In a real-world application we end up with a huge number of endpoints across many resources, which means more bugs, more developer time, and less flexibility and maintainability.

3) API Versioning

One of the painful points of REST services, in my opinion, is versioning. It's common to see several versions of an API: v1, v2, v3, and so on. In GraphQL there is no need for this versioning at all; you just write new types, queries, and mutations without shipping another version of your API. That keeps your clients happy: new requirements are served to those who need them, while clients that rely on the old data keep working with the same schema with zero changes on their side, and there is no need to maintain several API versions.

So, you won’t see GraphQL APIs with endpoints like the following:

https://domainname/api/v1/post/512
https://domainname/api/v2/post/512

Is GraphQL The Future ??

As we said, since Facebook announced this great technology it has been adopted by many companies and individual developers. GraphQL has been growing rapidly, and more and more developers are starting to build APIs with it.
The fact that GraphQL is an open-source query language also means the community can contribute to it and keep improving it.

(Chart: GraphQL in Google Trends)

I hope this post gives you a good feel for GraphQL and its benefits. In the next articles we will talk more about its schema and structure, and we will walk through creating our own API that supports GraphQL.

Caching in C# .NET – Quick Start

These days, with the big amount of data transferred between systems, or within one system between requests, you will find yourself searching for a way to increase the performance of your application and minimize processing time. This is where caching comes in: it is one of the most commonly used patterns in application development, a simple and very effective concept.

The idea of caching is to reuse the result of an operation by saving it in a caching container; the next time a request needs that result we get it from our cache instead of performing a heavy operation such as reading from the database or asking another system for it.

But keep in mind that caching works best for data that changes infrequently, or not at all.

But what are the caching types?

You need to know that there are three different types of caching:

  • In-Memory Cache: this type of caching stores data for a single process; when the process is killed the cache is flushed from memory, and if processes run on different machines each process gets its own cache. In other words, this type of cache is not shared between processes.
  • Persistent in-process Cache: here you store your cached data outside the process memory, so if the process is killed you can still get the cached data back when it restarts by reading the cache container (a file, a database, etc.). This is heavier, so we use it when producing the data item is more expensive than reading it from the cache file or database, for example a yearly reporting collection that clients access frequently.
  • Distributed Cache: this type gives you a shared cache container across several servers, meaning that if one process adds a cache item it is accessible from the other processes. There are great services to help you implement this type, like Redis or Memcached, and I will talk about it later in a separate article.

In-process cache

Today we will cover the in-process cache type with several examples, and later we will cover each of the caching types in a separate article 👌

Let’s create a simple in-process cache implementation in C#:

public class MyCacheRepo<TData>
{
    // Not thread-safe: a plain dictionary used as the cache container.
    private readonly Dictionary<object, TData> _cache = new Dictionary<object, TData>();

    public TData Get(object key)
    {
        // Return the cached item if it exists, otherwise the default value.
        if (_cache.TryGetValue(key, out TData data))
        {
            return data;
        }
        return default;
    }

    public TData Set(object key, TData data)
    {
        // Only add the item if the key is not already cached.
        if (!_cache.ContainsKey(key))
        {
            _cache[key] = data;
        }
        return data;
    }
}

Usage:

var _documentCache = new MyCacheRepo<byte[]>();

// mydocumentObject: the bytes you loaded from the file or the database
var myDocument = _documentCache.Set("documentId", mydocumentObject);

This is a very simple example of how we can save and get an item from the cache for a frequently used document: we read the document from the file or the database the first time, save it to the cache, and every following request gets the data directly from the cache.

But this approach is not good for caching items in a real application. Exceptions can occur when it is used from multiple threads, because it is not thread-safe. The cached objects also stay in memory for a long time, which is a very bad idea: it may lead to out-of-memory exceptions, and high memory consumption leads to GC pressure (garbage-collector pressure). When your application spends more time garbage collecting, it spends less time executing code, which directly hurts performance. The data also never gets refreshed, so you may keep serving a stale version of it.

Fortunately, the .NET Framework provides several policies to remove items from the cache depending on different scenarios:

  • Absolute Expiration: removes an item from the cache after a fixed amount of time, no matter how often it changes or how many times it is accessed.
  • Sliding Expiration: removes an item from the cache if it wasn't accessed within a fixed amount of time. For example, if you set the expiration to 1 minute, the item stays in the cache as long as it keeps being used; once it is not used for longer than a minute, it is evicted.
  • Size Limit: limits the cache memory size.

For caching data, Microsoft provides two different packages, and both are great (for newer .NET Core applications Microsoft recommends the second one):

1. System.Runtime.Caching.MemoryCache
2. Microsoft.Extensions.Caching.Memory

System.Runtime.Caching.MemoryCache Example :

public class CachingHandler
{
    protected MemoryCache cache = new MemoryCache("CachingProviderName");

    private static readonly object padlock = new object();

    protected void Remove(string key)
    {
        lock (padlock)
        {
            cache.Remove(key);
        }
    }

    protected void AddItem(string key, object value)
    {
        lock (padlock)
        {
            // NotRemovable: indicates that a cache entry should never be removed from the cache.
            var cachePolicy = new CacheItemPolicy() { Priority = CacheItemPriority.NotRemovable };

            // SlidingExpiration = 1 minute removes the item if it is not used for more than 1 minute:
            // var cachePolicy = new CacheItemPolicy() { SlidingExpiration = new TimeSpan(0, 1, 0) };

            // Absolute expiration variant:
            // var cachePolicy = new CacheItemPolicy() { AbsoluteExpiration = DateTime.MaxValue };

            cache.Add(key, value, cachePolicy);
        }
    }
}

Microsoft.Extensions.Caching.Memory Example :

public class CachingHandler<TData>
{
    private MemoryCache _cache = new MemoryCache(new MemoryCacheOptions());

    public TData Get(object key)
    {
        // Check whether the cache contains the key and return the entry if it does.
        if (_cache.TryGetValue(key, out TData cacheEntry))
        {
            return cacheEntry;
        }
        return default;
    }
}

At first glance you might feel these two implementations are the same as the first one,

but to remove the confusion, remember that this approach:

1- is thread-safe, so you can call it from multiple threads without anything crashing, unlike the plain dictionary;

2- has the eviction policies we talked about before;

3- lets you set a RegisterPostEvictionCallback delegate, which will be called when an item is evicted.

See the example below; a sketch of the eviction callback itself follows it.

public class CachingHandler<TData>
{
    private MemoryCache _cache = new MemoryCache(new MemoryCacheOptions()
    {
        // This adds a size-based policy to our cache container.
        SizeLimit = 1024,
    });

    public TData GetOrCreate(object key, Func<TData> createItem)
    {
        // Look for the item in the cache; only build and store it when it is missing.
        if (!_cache.TryGetValue(key, out TData cacheEntry))
        {
            cacheEntry = createItem();

            var cacheEntryOptions = new MemoryCacheEntryOptions()
                // Size "units" this entry consumes against SizeLimit.
                .SetSize(1)
                // Priority on removing when reaching the size limit (memory pressure).
                .SetPriority(CacheItemPriority.High)
                // Keep in cache for this time; the timer resets on each access.
                .SetSlidingExpiration(TimeSpan.FromSeconds(2))
                // Remove from cache after this time, regardless of sliding expiration.
                .SetAbsoluteExpiration(TimeSpan.FromSeconds(10));

            // Save data in cache.
            _cache.Set(key, cacheEntry, cacheEntryOptions);
        }
        return cacheEntry;
    }
}
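A small hedged sketch of attaching RegisterPostEvictionCallback to an entry; the key and the myDocument value are placeholders:

var options = new MemoryCacheEntryOptions()
    .SetSlidingExpiration(TimeSpan.FromSeconds(2))
    .RegisterPostEvictionCallback((key, value, reason, state) =>
    {
        // Runs after the entry is removed; 'reason' tells you why (Expired, Capacity, Removed, ...).
        Console.WriteLine($"Entry {key} was evicted: {reason}");
    });

_cache.Set("documentId", myDocument, options);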

WEB API over WCF

About two years ago I met someone working in development management, and we had a good conversation about the differences between Web API and WCF services. Two or three days ago I met one of his team members at one of the big organizations we deal with as a client, and by coincidence the same conversation came up between us.

The surprising thing is that he held exactly the same view as his manager.

Because of that, I wrote this small article to identify the differences between RESTful WCF and Web API.

 

We have to know WCF and Web API better before we go to the comparison section.

When the WCF service was introduced in .NET 3.0, its main goal was to support SOAP over a wide variety of transports. Over time it became clear that SOAP is not the only way to go when creating services, and the need to create non-SOAP services increased. Besides that, we can take advantage of the power of HTTP for creating simple GET requests, or for passing plain XML over POST, and responding with non-SOAP content such as plain XML or JSON. So in WCF 3.5 we got WebHttpBinding, a new binding that helped developers create non-SOAP services over HTTP, better known as RESTful services. But WebHttpBinding was not enough, and a lot of tooling was introduced to enrich the HTTP support, such as multiple content types, request/response caching, and other things.

SOAP and HTTP services are very different. SOAP lets us place all the knowledge required by our service in the message itself, disregarding its transport protocol. HTTP, on the other hand, is more than a transport protocol: it is an application-level protocol that offers a wide variety of features and stateless interaction between clients and services. And however many platforms support SOAP, many more know how to use HTTP.

As time passed, the WCF Web APIs had a lot of trouble adapting WCF to the "native" HTTP world. WCF was primarily designed for SOAP-based XML messages, and the open-heart surgery required to make Web API work as part of WCF was a bit too much.

There are a lot of articles about the strengths of Web API and the differences between Web API and WCF; I wrote some in the past, and here I take some points from other articles and give them to you as short notes.

WCF Rest

  • To use WCF as WCF Rest service you have to enable webHttpBindings.
  • It supports HTTP GET and POST verbs by [WebGet] and [WebInvoke] attributes respectively.
  • To enable other HTTP verbs you have to do some configuration in IIS to accept the request of that particular verb on .svc files
  • Passing data through parameters using a WebGet needs configuration: the UriTemplate must be specified (see the sketch after this list).
  • It supports XML, JSON, and ATOM data formats.
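To make the WebGet/WebInvoke and UriTemplate points concrete, here is a minimal sketch of a WCF REST service contract; the contract, operation, and type names are made up for illustration:

using System.ServiceModel;
using System.ServiceModel.Web;

[ServiceContract]
public interface IProductService
{
    // GET /products/{id} — UriTemplate maps the URL segment to the parameter.
    [OperationContract]
    [WebGet(UriTemplate = "products/{id}", ResponseFormat = WebMessageFormat.Json)]
    Product GetProduct(string id);

    // POST /products — WebInvoke covers the non-GET verbs.
    [OperationContract]
    [WebInvoke(Method = "POST", UriTemplate = "products")]
    Product AddProduct(Product product);
}

public class Product
{
    public string Id { get; set; }
    public string Name { get; set; }
}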

Web API

  • This is the newer framework for building HTTP services in an easy and simple way (see the sketch after this list).
  • Web API is open source and an ideal platform for building RESTful services on the .NET Framework.
  • Unlike WCF Rest service, it uses the full feature of HTTP (like URIs, request/response headers, caching, versioning, various content formats)
  • It also supports the MVC features such as routing, controllers, action results, filter, model binders, IOC container or dependency injection, unit testing that makes it more simple and robust.
  • It can be hosted within the application or on IIS.
  • It has a lightweight architecture and is good for devices with limited bandwidth, like smartphones.
  • Responses are formatted by Web API’s MediaTypeFormatter into JSON, XML or whatever format you want to add as a MediaTypeFormatter.
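For comparison, a hedged sketch of the same kind of resource in ASP.NET Web API, where a plain controller plus routing is enough; the class, route, and sample data are illustrative:

using System.Web.Http;

public class ProductsController : ApiController
{
    // GET api/products/5 — the verb is selected via the attribute and default routing.
    [HttpGet]
    public IHttpActionResult Get(int id)
    {
        return Ok(new { Id = id, Name = "Sample product" });
    }

    // POST api/products
    [HttpPost]
    public IHttpActionResult Post([FromBody] string name)
    {
        return Created("api/products/1", new { Id = 1, Name = name });
    }
}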

 

 

Is There Still Use for WCF? When Should I Choose Web APIs Over WCF?

Recall my points from before: HTTP is a lot more than a transport protocol; if you use SOAP across the board, you treat HTTP as no more than another way to pass messages.

  • If your intention is to create services that support special scenarios – one-way messaging, message queues, duplex communication, etc. – then you're better off picking WCF.
  • If you want to create services that can use fast transport channels when available, such as TCP, Named Pipes, or maybe even UDP (in WCF 4.5), and you also want to support HTTP when all other transports are unavailable, then you’re better off with WCF and using both SOAP-based bindings and the WebHttp binding.
  • If you want to create resource-oriented services over HTTP that can use the full features of HTTP – define cache control for browsers, versioning and concurrency using ETags, pass various content types such as images, documents, HTML pages, etc., use URI templates to include Task URIs in your responses, then the new Web APIs are the best choice for you.
  • If you want to create a multi-target service that can be used as both resource-oriented service over HTTP and as RPC-style SOAP service over TCP – talk to me first, so I’ll give you some pointers.

 

For more info you can refer to:

https://wordpress.com/post/salemalbadawi.wordpress.com/104

https://martinfowler.com/articles/richardsonMaturityModel.html

https://docs.microsoft.com/en-us/dotnet/api/system.servicemodel.webhttpbinding?redirectedfrom=MSDN&view=netframework-4.8

How to install Kafka and use it with .Net in details

Today we will talk about how to use Apache Kafka messaging in .NET.

But first, let's take a look at what Kafka is, its main terminology, and why to use it.

Kafka is an open-source stream-processing platform developed by the Apache Software Foundation, written in Scala and Java (for more details go to https://kafka.apache.org/). It provides a unified, high-throughput, low-latency platform for handling real-time data messages.

Kafka depends on the following components:

  • Kafka Cluster: a collection of one or more servers known as brokers
  • Producer – the component that publishes the messages
  • Consumer – the component that retrieves and consumes messages
  • ZooKeeper – software developed by Apache that acts as a centralized service to maintain the configuration information across cluster nodes -in a distributed environment –

How to install Kafka on Windows 10:

1- Go to the Kafka downloads page and download the binary package, then unzip it into a particular path, let's say D:\kafka_2.11-2.2.0.


In this article we will use the ZooKeeper that is included in the package itself, so there is no need to install a separate ZooKeeper package.

Now go to the Kafka config folder, D:\Kafka\kafka_2.11-2.2.0\config, and open the server.properties file with any text editor such as TextPad or Notepad++, which I prefer.

Then change the Kafka log path to something like log.dirs=D:\Kafka\logs, or any other directory you want; this is the path where Kafka writes its event logs and other log information.

You can change the ZooKeeper port Kafka connects to by changing the value zookeeper.connect=localhost:2181, and note that Kafka itself will run on port 9092 by default.

Another thing to know is that a Kafka message is represented as a key-value pair, and Kafka converts all messages into byte arrays.

2- Now let's start ZooKeeper by writing the following in a command window:

D:\Kafka\\kafka_2.11-2.2.0>.\bin\windows\zookeeper-server-start.bat .\config\zookeeper.properties


3- Now let's start Kafka by writing the following in another new command window:

D:\Kafka\kafka_2.11-2.2.0>.\bin\windows\kafka-server-start.bat .\config\server.properties


4- After we start Kafka we need to create a new topic to use for sending and consuming messages,

so open a new CMD window and write the following:

D:\Kafka\kafka_2.11-2.2.0>.\bin\windows\kafka-topics.bat --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic Hello_Salem_Topic


Now we are ready to create the consumer and the producer. We could create them from the command line, but in this article we will create them in C#, so we need to create two console applications (consumer and producer) and install the kafka-net package from the NuGet Package Manager. For more details about the package you can refer to https://github.com/Jroland/kafka-net.

5- Go and create a new console application and name it KafkaProducer,

and then in the Package Manager Console write the following:

Install-Package kafka-net


Now in the Main method write the following:

static void Main(string[] args)
{
    // The second broker URI (SERVER2) is only an example of adding more brokers.
    var options = new KafkaOptions(new Uri("http://localhost:9092"), new Uri("http://SERVER2:9092"));
    var router = new BrokerRouter(options);

    using (var client = new Producer(router))
    {
        // The topic name must match the topic created earlier.
        client.SendMessageAsync("HelloWorldTopic", new[] { new Message("My Kafka first Message") }).Wait();
    }

    Console.ReadLine();
}

 

Now the same thing for the consumer: create a console application, install kafka-net, and then in the Main method write the following:

static void Main(string[] args)
{
    var options = new KafkaOptions(new Uri("http://localhost:9092"), new Uri("http://SERVER2:9092"));
    var router = new BrokerRouter(options);
    var consumer = new Consumer(new ConsumerOptions("HelloWorldTopic", router));

    // Consume() blocks and yields messages as they arrive on the topic.
    foreach (var message in consumer.Consume())
    {
        Console.WriteLine("PartitionId={0}, Offset={1}, Message={2}",
            message.Meta.PartitionId, message.Meta.Offset, message.Value);
    }

    Console.ReadLine();
}

I hope this is a good introduction to one of the most powerful, fast, and scalable open-source message brokers, and that it gives you the push to dive deeper into this topic.

✌ have Fun

Source Code can be found here

 

https://drive.google.com/file/d/1jiDUncJCLpSiN3-wwKcdzwJFlnRYSKBa/view?usp=sharing

How to secure an ASP.NET Web API with JWT

Securing your API application is one of the most popular topics nowadays, and there are a lot of ways to do it, but today I will try to explain how to use JWT in the simplest and most basic way I can, so you won't get lost in a jungle of OWIN, OAuth2, ASP.NET Identity…

But first, let's take a look at what every single term means, from their official sites:

OWIN

defines a standard interface between .NET web servers and web applications. The goal of the OWIN interface is to decouple server and application, encourage the development of simple modules for .NET web development, and, by being an open standard, stimulate the open source ecosystem of .NET web development tools.

Katana

OWIN implementations for Microsoft servers and frameworks; a flexible set of components for building and hosting OWIN-based web applications on the .NET Framework.

OAuth 2

OAuth 2 is an authorization framework that enables applications to obtain limited access to user accounts on an HTTP service, such as Facebook, GitHub, and DigitalOcean. It works by delegating user authentication to the service that hosts the user account.

JWT

JSON Web Token (JWT) is a compact, URL-safe means of representing claims to be transferred between two parties. The claims in a JWT are encoded as a JSON object that is used as the payload of a JSON Web Signature (JWS) structure or as the plaintext of a JSON Web Encryption (JWE) structure, enabling the claims to be digitally signed or integrity protected with a Message Authentication Code (MAC) and/or encrypted.

A JWT consists of three sections, each encoded in Base64: Header, Claims (payload), and Signature.

JWT uses a signature computed over the header and claims with the security algorithm specified in the header (for example HMACSHA256). Therefore, JWTs must be transferred over HTTPS if you store any sensitive information in the claims.

From my experience there are two ways to generate a JWT token: one using the OWIN middleware and one without OWIN, using an action in your controller. Today we will talk about the second one because it is the simplest; in another article we can grow our example to use OWIN middleware and add other authorization types such as role-based authorization, claims, or external authorization.

Now:

To create a JWT token endpoint using a controller action you need to add a NuGet package called System.IdentityModel.Tokens.Jwt from Microsoft.

Now go to Visual Studio and create a new empty project of type Web API.


In the Package Manager Console window, type this and press Enter:

Install-Package System.IdentityModel.Tokens.Jwt

Go to the Controllers folder, add a new empty API controller, and name it TokenController.

In this controller create a new method like the one below:

private const string Secret = "aGs4andKSWZUaFVlN2dzSVhPWTJicHpoMzljUndXR1I0Zm5tS2NXb29CWUZs";

[HttpGet]
public string Token(string username, string userEmail)
{
    var symmetricKey = Convert.FromBase64String(Secret);
    var tokenHandler = new JwtSecurityTokenHandler();

    var tokenDescriptor = new SecurityTokenDescriptor
    {
        Subject = new ClaimsIdentity(new[]
        {
            new Claim(ClaimTypes.Name, username),
            new Claim(ClaimTypes.Email, userEmail)
        }),
        // The expiration value should be stored in the web config to make it easy to change.
        Expires = DateTime.Now.AddMinutes(20),
        SigningCredentials = new SigningCredentials(
            new SymmetricSecurityKey(symmetricKey),
            SecurityAlgorithms.HmacSha256Signature),
        Issuer = "T2 - Business Research & Development",
        IssuedAt = DateTime.Now,
    };

    var secureToken = tokenHandler.CreateToken(tokenDescriptor);
    var token = tokenHandler.WriteToken(secureToken);

    return token;
}

Now if you test your method in Postman you can get your first JWT token.


Copy your token and paste it at https://jwt.io/.


Now that we have finished the JWT producer, we need to implement the way we authorize the user.

We need to create a new custom attribute that inherits from AuthorizationFilterAttribute.

Let's create a new class, CustomAttribute, and override the OnAuthorizationAsync method like this:

public class CustomAttribute : AuthorizationFilterAttribute
{
    private const string Secret = "aGs4andKSWZUaFVlN2dzSVhPWTJicHpoMzljUndXR1I0Zm5tS2NXb29CWUZs";

    public override Task OnAuthorizationAsync(System.Web.Http.Controllers.HttpActionContext actionContext, System.Threading.CancellationToken cancellationToken)
    {
        if (actionContext.Request.Headers.Authorization == null || string.IsNullOrEmpty(actionContext.Request.Headers.Authorization.Parameter))
        {
            actionContext.Response = actionContext.Request.CreateResponse(HttpStatusCode.Unauthorized, "You Are not Authorized to use this Resource");
            return Task.FromResult<object>(null);
        }

        var principal = GetUserPrincipal(actionContext.Request.Headers.Authorization.Parameter);

        if (principal == null || !principal.Identity.IsAuthenticated)
        {
            actionContext.Response = actionContext.Request.CreateResponse(HttpStatusCode.Unauthorized, "You Are not Authorized to use this Resource");
            return Task.FromResult<object>(null);
        }

        var identity = new ClaimsIdentity(principal.Claims, "Jwt");
        IPrincipal user = new ClaimsPrincipal(identity);

        // User is authorized, complete execution.
        return Task.FromResult<object>(user);
    }

    public static ClaimsPrincipal GetUserPrincipal(string token)
    {
        try
        {
            var tokenHandler = new JwtSecurityTokenHandler();
            var jwtToken = tokenHandler.ReadToken(token) as JwtSecurityToken;

            if (jwtToken == null)
                return null;

            var symmetricKey = Convert.FromBase64String(Secret);

            var validationParameters = new TokenValidationParameters()
            {
                RequireExpirationTime = true,
                ValidateIssuer = false,
                ValidateAudience = false,
                IssuerSigningKey = new SymmetricSecurityKey(symmetricKey)
            };

            SecurityToken securityToken;
            var principal = tokenHandler.ValidateToken(token, validationParameters, out securityToken);
            return principal;
        }
        catch (Exception)
        {
            return null;
        }
    }
}

Now create a new controller and call it ValueController, then put CustomAttribute on the controller. This attribute will check whether the request is authorized to execute the method; if it is not, it returns an Unauthorized HTTP status code with a custom message, otherwise it allows the request to access the resource.

The controller will look like this:

[CustomAttribute]
public class ValueController : ApiController
{
    [HttpGet]
    public string Get()
    {
        return "Helloo";
    }
}
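To see the attribute in action from the client side, here is a hedged sketch of calling the protected resource with the token in the Authorization header; the port and routes are placeholders that depend on your project's routing configuration:

using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

class Program
{
    static async Task Main()
    {
        using (var client = new HttpClient())
        {
            // Token obtained from the TokenController endpoint created earlier.
            string token = await client.GetStringAsync(
                "http://localhost:5000/api/token?username=test&userEmail=test@test.com");

            // Attach the JWT as a Bearer token; without it the custom attribute returns 401.
            client.DefaultRequestHeaders.Authorization =
                new AuthenticationHeaderValue("Bearer", token);

            string result = await client.GetStringAsync("http://localhost:5000/api/value");
            Console.WriteLine(result);
        }
    }
}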

Test screenshots below.


Design Pattern:: 1- Strategy

Today we'll talk about design patterns, specifically about one of the most used design patterns in the programming world: the Strategy design pattern.

But first, why talk about this topic, and why do we need to know these things?

The dofactory website defines design patterns as follows:

Design patterns are solutions to software design problems you find again and again in real-world application development. Patterns are about reusable designs and interactions of objects.

Strategy Design pattern :

It is a behavioral design pattern that enables an algorithm's behavior to change at runtime.

Examples of that:

Sorting with a custom comparer, Log4Net.

And now to the code: 👏🐱‍👤🐱‍👤

Now imagine we have a game with different levels. While playing, the player can pick up extra tools and his power levels up, so we need to change the power of the hero at runtime. For that we use the Strategy pattern.

We need a base interface to define the signature of the hero's functionality; let's name it IHero:

public  interface IHero
  {
      string Fight();
      void ChangeMyPower(ISuperPower superPower);
  }

After that we think about the superpowers and the capabilities of each one, and because we have several superpowers we need to create another interface for the power. Let's name it ISuperPower, and it will look like this:

public interface ISuperPower
    {
        string ShowPower();
    }

Now we implement the superpowers we need in our application. To keep it simple we will implement just four classes: Weapons, Jumb, FlyPower, and Disappearance.

public class Disappearance : ISuperPower
  {
      public string ShowPower()
      {
          return "My power now is Disappearance i can walk and no one can see me  ";
      }
  }
public class FlyPower : ISuperPower
    {
 
        public string ShowPower()
        {
            return "My power now is flying    ";
        }
    }
public class Jumb : ISuperPower
   {
       public string ShowPower()
       {
           return "My power now is Spider Strings and Jumbing ";
       }
   }
public class Weapons : ISuperPower
   {
       public string ShowPower()
       {
           return "My power now is Weapons i can fight with Weapons ";
       }
   }

Now that our superpowers are ready to use, let's create our hero class:

public class Hero : IHero
   {
       //instance of the super power 
       ISuperPower _currentPower;
 
       // we construct  it with default power here
       public Hero() : this(new Jumb())
       {
 
       }
 
       public Hero(ISuperPower superPower)
       {
           _currentPower = superPower;
       }
 
 
       // implement the Ihero interface method to change the super power when we need 
       public void ChangeMyPower(ISuperPower superPower) => _currentPower = superPower;
 
       // let our hero fight using his current power
       public string Fight() => _currentPower.ShowPower();
 
   }

Now we have two choices; the best one is to create specific hero classes (SpiderMan, BatMan, SuperMan…) that inherit from Hero to make our hero functionality extendable, like this:

public class SpiderMan : Hero
{
    public SpiderMan() : base(new Jumb())
    {
    }
}

public class BatMan : Hero
{
    public BatMan() : base(new Weapons())
    {
    }
}

public class SuperMan : Hero
{
    public SuperMan() : base(new FlyPower())
    {
    }
}

Now our functionality is ready to exercise:

[TestMethod]
public void TestMethod()
{
    IHero spiderman = new SpiderMan();
    // SpiderMan starts with the Jumb power by default.
    Assert.AreEqual(new Jumb().ShowPower(), spiderman.Fight());

    spiderman.ChangeMyPower(new FlyPower());
    Assert.AreEqual(new FlyPower().ShowPower(), spiderman.Fight());

    spiderman.ChangeMyPower(new Weapons());
    Assert.AreEqual(new Weapons().ShowPower(), spiderman.Fight());
}

 

This is a small demo of the Strategy pattern and how we can implement it. I hope it is useful for you; for any question or further feedback, please don't hesitate to contact me.

 

👏🐱‍👤 Thank you and best wishes