
Follow the Sun Model - Global Workflow


Recently I had an opportunity to do an architectural analysis for enabling the “follow the sun” business model for an Enterprise. Since the application is based on Microsoft SharePoint 2010, the analysis included evaluating the various deployment options available in the SharePoint platform, the different WAN optimization solutions and their constraints, and performing a cost-benefit exercise.

The scenario involved users accessing a web-based system from different geographical locations and time zones, each carrying out different parts of the same business process.

The primary options available for this are:

  • Option #1 - Central farm with users around the world
  • Option #2 - Central farm with regional SharePoint deployments

Each of these options carries its own merits & demerits.

Option #1

In this option, all the layers of the application (Web & App) are deployed in the central location, including the DB layer (which holds the databases and data pertaining to users in all regional locations). The central location is chosen as the place where the Enterprise has its IT support hub.

This option is suited to scenarios where the bandwidth and latency of the WAN connections provide a reasonable user experience.
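One way to validate that assumption before committing is to sample page response times from each regional office to the proposed central farm. A minimal sketch in Python (the farm URL is a hypothetical placeholder):

    # Rough WAN latency probe: time a few page requests from a regional
    # office to the central farm and report the average.
    import time
    import urllib.request

    FARM_URL = "https://central-farm.example.com/sites/home"  # hypothetical
    SAMPLES = 5

    timings = []
    for _ in range(SAMPLES):
        start = time.perf_counter()
        try:
            urllib.request.urlopen(FARM_URL, timeout=30).read()
        except OSError as err:
            print("request failed:", err)
            continue
        timings.append(time.perf_counter() - start)

    if timings:
        print("average page response: %.0f ms"
              % (1000 * sum(timings) / len(timings)))

Running this from each region over a business day gives a rough picture of whether the WAN links deliver a reasonable experience.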

Merits
·         Minimal total cost of operations
·         Optimized IT operations & maintenance
·         Zero data / content latency, which enables an efficient load-sharing model
Demerits
·         There could be an impact on page response / performance for users in the other regions in scenarios of high network latency & an underperforming WAN



Option #2

In this option, the Web & App layers are deployed in the central location, with regional deployments of the Data layer. To enable users in the different regional locations to carry out workflow activities, the regional databases must be kept mutually synchronized.

This option is suited to scenarios where regional users mostly access local data & content on a day-to-day basis.

Merits
·         Optimized performance for regional editors when they access the data & content of their local region
Demerits
·         There could be latency in the availability of data & content to users in other regions
·         Increased complexity and operations cost of the overall solution
·         Greater organizational coordination required to build effective governance of content that is authored in multiple geographic locations
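To make the mutual synchronization requirement concrete, here is a minimal one-way sync sketch in Python, with SQLite standing in for the regional SQL Server databases. The workflow_items table, its columns, and the last-writer-wins policy are all illustrative assumptions; a real SharePoint 2010 deployment would more likely rely on SQL Server replication or log shipping.

    # Minimal one-way sync sketch: pull rows changed at a remote regional
    # database since the last sync and upsert them locally.
    # Assumptions: id is the primary key, and modified_utc is an ISO-8601
    # timestamp string (so plain string comparison orders correctly).
    import sqlite3

    def sync_from_remote(local: sqlite3.Connection,
                         remote: sqlite3.Connection,
                         last_sync: str) -> str:
        rows = remote.execute(
            "SELECT id, payload, modified_utc FROM workflow_items "
            "WHERE modified_utc > ?", (last_sync,)).fetchall()
        newest = last_sync
        for item_id, payload, modified in rows:
            # Last-writer-wins upsert: keep whichever copy changed latest.
            local.execute(
                "INSERT INTO workflow_items (id, payload, modified_utc) "
                "VALUES (?, ?, ?) "
                "ON CONFLICT(id) DO UPDATE SET "
                "payload = excluded.payload, "
                "modified_utc = excluded.modified_utc "
                "WHERE excluded.modified_utc > workflow_items.modified_utc",
                (item_id, payload, modified))
            newest = max(newest, modified)
        local.commit()
        return newest  # persist and pass back in on the next sync cycle

Each pair of regions would have to run this in both directions on a schedule, which is exactly where the extra complexity, operations cost, and data latency listed above come from.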



Let us see what could be the best option and why

In the case of option #2, keeping the regional databases mutually synchronized may require real-time replication, which can significantly affect OLTP performance. This becomes even more complicated when the volume of data is huge.

In the case of option #1, even though network & bandwidth constraints may affect performance when users in the different geographical locations access the system (since they are accessing data from a remote central location), this can be addressed through various WAN optimization solutions. From the cost perspective, investing in WAN optimization solutions costs less than investing in server farms across different geographical locations.
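As a back-of-the-envelope illustration of that cost argument, the figures below are entirely hypothetical placeholders to be replaced with real quotes:

    # Toy cost comparison; every number here is a made-up placeholder.
    regions = 4
    wan_optimizer_per_region = 25_000        # hypothetical appliance cost
    regional_farm_servers = 3 * 15_000       # hypothetical web/app/db servers
    regional_it_support_per_year = 80_000    # hypothetical staffing cost

    option1 = regions * wan_optimizer_per_region
    option2 = regions * (regional_farm_servers + regional_it_support_per_year)
    print("Option #1 (central farm + WAN optimization): $%d" % option1)
    print("Option #2 (regional farms, first year):      $%d" % option2)

With these toy numbers the recurring regional staffing cost dominates, which is the same point the next paragraph makes about TCO.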

Also, the TCO of option #2 will be higher than that of option #1 because of the need to deploy IT support professionals across the different regions in the former case.

From the security perspective, regional deployments will increase complexity.

This qualitative analysis helps us to conclude that option #1 looks more suitable than option #2.

Still, this conclusion needs to be backed by metrics from actual POC results.

When it comes to option #1, the Data layer can become a bottleneck. Since the Data layer holds the data for all the regions, a request from a particular region may have to traverse the data of all the other regions. Unless a strategy is adopted for scalability and efficient data retrieval & updates, this will affect the performance of the entire system.

On performing a detailed analysis and evaluating various options, Data Dependent Routing emerged as the most suitable option.
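In the meantime, here is a minimal sketch of the idea in Python (region codes and connection strings are illustrative assumptions): each request carries a key identifying whose data it touches, and a routing map sends the query straight to the partition that owns that data instead of scanning the whole central store.

    # Minimal data-dependent routing sketch: pick the database partition
    # that owns a region's data rather than funnelling every query
    # through one monolithic database. All names are illustrative.
    CONNECTION_MAP = {
        "AMER": "Server=db-amer;Database=Workflow_AMER;",
        "EMEA": "Server=db-emea;Database=Workflow_EMEA;",
        "APAC": "Server=db-apac;Database=Workflow_APAC;",
    }

    def route(region_code: str) -> str:
        """Return the connection string for the partition owning this region."""
        try:
            return CONNECTION_MAP[region_code]
        except KeyError:
            raise ValueError("no partition registered for region %r"
                             % region_code) from None

    # Usage: a request tagged with its region touches only that region's data.
    print(route("EMEA"))  # -> Server=db-emea;Database=Workflow_EMEA;

The data access layer resolves the partition once per request, so queries from one region never have to traverse the other regions' data.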

We will see more on Data Dependent Routing.



