Thursday, October 27, 2011

Microsoft SharePoint and Agility

SharePoint is becoming a natural choice for enabling key capabilities of an Enterprise such as Collaboration, Social Computing, BI, Web Content Management, Document Management, and Portals / web applications targeting both intranet & internet users. The perception of SharePoint is gradually changing from that of a simple COTS product to that of a very flexible platform. In particular, the custom development & integration capabilities of the SharePoint platform open the door to a lot of opportunities. In fact, in some Enterprises, SharePoint is becoming the default development platform.

A recent research report from Forrester highlights that SharePoint is gaining very wide adoption in large organizations. Given that the capabilities of SharePoint can address most of the needs of an Enterprise, Enterprises planning to leverage the full potential of the SharePoint platform may end up with an IT portfolio in which a major portion consists of SharePoint-based solutions. Hence, the Enterprise Agility of those Enterprises will depend heavily on SharePoint.

The agility of an Enterprise depends on its IT portfolio. If the IT portfolio is based on home-grown solutions and COTS products which do not support agility, then Enterprise Agility will suffer. Investments in COTS products can be maximized only if those products are customizable / support wider custom development. How "testable" the customizable parts of the COTS products in an IT portfolio are has a direct impact on maintainability, TCO and agility.

True agility is achieved only when a change in a business / functional requirement can be delivered in minimal time – when a piece of code is developed and automated unit testing can be leveraged to test that piece of code, the overall time for development, testing & deployment decreases.

Testability & loose coupling are 2 key paths to Agility. In SharePoint, when it comes to custom development, Web Parts are predominantly used to address business / functional needs. These Web Parts are "testable" when they are built on the MVP (Model-View-Presenter) pattern and can then leverage automated unit testing capabilities. MVP-based web parts are de-coupled from the SharePoint environment, the data source & the UI, and hence can be unit tested through mock objects.
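To make this concrete, here is a minimal MVP sketch (the names IApprovalsView, IApprovalsRepository and ApprovalsPresenter are hypothetical, not part of any SharePoint API). The web part's user control would implement the view interface and simply delegate to the presenter; all the logic sits in the presenter, which depends only on interfaces:

```csharp
// Minimal MVP sketch – the presenter holds the logic and talks only to
// interfaces, so it can be tested without a SharePoint environment.
public interface IApprovalsView
{
    void ShowPendingCount(int count);   // implemented by the web part's user control
}

public interface IApprovalsRepository
{
    int GetPendingApprovalCount(string userLogin);   // wraps SPList access in production
}

public class ApprovalsPresenter
{
    private readonly IApprovalsView view;
    private readonly IApprovalsRepository repository;

    public ApprovalsPresenter(IApprovalsView view, IApprovalsRepository repository)
    {
        this.view = view;
        this.repository = repository;
    }

    // Pure logic: no SPContext, no SPList – fully testable with mocks.
    public void LoadPendingApprovals(string userLogin)
    {
        int count = repository.GetPendingApprovalCount(userLogin);
        view.ShowPendingCount(count);
    }
}
```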

A combination of Microsoft’s Pex & Moles framework and VSTS 2010 can be used to perform automated unit tests on such MVP-based (testable) web parts.
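Independently of Pex & Moles, the sketch below shows what such a unit test looks like, with hand-rolled fakes standing in for the mocks and building on the hypothetical interfaces above; Moles is typically brought in to detour calls into the SharePoint object model itself where no interface exists:

```csharp
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class ApprovalsPresenterTests
{
    // Hand-rolled fakes for the hypothetical interfaces; Pex & Moles could
    // generate stubs/moles for these (or for SharePoint types) instead.
    private class FakeView : IApprovalsView
    {
        public int DisplayedCount;
        public void ShowPendingCount(int count) { DisplayedCount = count; }
    }

    private class FakeRepository : IApprovalsRepository
    {
        public int GetPendingApprovalCount(string userLogin) { return 7; }
    }

    [TestMethod]
    public void LoadPendingApprovals_PushesRepositoryCountToView()
    {
        var view = new FakeView();
        var presenter = new ApprovalsPresenter(view, new FakeRepository());

        presenter.LoadPendingApprovals("contoso\\jdoe");

        Assert.AreEqual(7, view.DisplayedCount);   // no SharePoint farm needed
    }
}
```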

Loose coupling can be achieved through the Service Locator pattern provided by the SharePoint Guidance from Microsoft patterns & practices – a compass for best-practice adoption in SharePoint-based development. When the actual service provider changes, the new implementation can be deployed as a separate assembly and the change can be enabled in the main application without re-compilation.
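As a rough sketch (assuming the entry points documented in the 2010 SharePoint Guidance – exact namespaces and the registration API vary by guidance version – and reusing the hypothetical IApprovalsRepository from above), consuming code resolves the service through the locator instead of instantiating a concrete type:

```csharp
// Sketch only: resolving the hypothetical IApprovalsRepository through the
// service locator shipped with the P&P SharePoint Guidance.
using Microsoft.Practices.ServiceLocation;
using Microsoft.Practices.SharePoint.Common.ServiceLocation;

public static class ApprovalsRepositoryFactory
{
    public static IApprovalsRepository GetRepository()
    {
        IServiceLocator locator = SharePointServiceLocator.GetCurrent();
        // The concrete implementation is registered separately (e.g. in a
        // feature receiver); swapping it means deploying a new assembly and
        // updating the type mapping – no recompilation of this caller.
        return locator.GetInstance<IApprovalsRepository>();
    }
}
```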

So, the conclusion here is: once SharePoint-based development is planned with these 2 things in mind – testability through MVP-pattern web parts that leverage an automated unit testing framework, and loose coupling through the Service Locator pattern – the time required to respond to changing business requirements will decrease, and that has a positive impact on Enterprise Agility!

Sunday, April 17, 2011

CEP - CPU of an Enterprise?

Can CEP become the CPU of an Enterprise?

Let us first see what CEP is.


With increasing competitiveness and changing market dynamics, Enterprises are looking for more innovative ways to do business. Lots of emerging capabilities are available to an Enterprise that could help it keep an edge over its competitors through efficient operation of its business. One such capability is Complex Event Processing.

Even though an "Agility" capability will help an Enterprise drive its business in accordance with changing market demands, "Real-time Analytics" will help an Enterprise plan its business more accurately.

Complex Event Processing (CEP) gives an Enterprise this real-time analysis capability. CEP is suitable in scenarios where a very high volume of events must be captured in real time in order to perform analysis and take proactive measures / make informed decisions. It involves capturing more than one event, correlating them and extracting a "meaning" out of them.

Let’s look at this with an example from a layman's point of view. When a plane flies from a source to a destination, lots of its parameters such as latitude, longitude, altitude and speed will change continuously at a rapid rate.

Here, assume the plane is configured to emit an event when its ground speed changes from X miles / hour to X+200 miles / hour within a time period of 1 second; it is also configured to emit an event when its altitude drops from some X meters to X-100 meters. These are 2 separate events. When you are able to correlate these 2 events and extract a "meaning" out of them, that meaning can be used to make a decision.

Say, for example, the altitude drops from 100 meters to 0 meters and the speed shoots up from X to X+200 miles / hour. Correlating these 2 events gives a meaning – something is wrong. A plane's speed should not shoot up when its altitude drops to zero, so a "red alert" alarm can be triggered and the pilot can be warned. A CEP capability is required to perform this type of processing.
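Just to illustrate the correlation itself: a CEP engine such as Microsoft StreamInsight would express this as a declarative standing query over event streams rather than hand-written code, but the hand-rolled sketch below (with a made-up event type and a one-second window) shows the idea of deriving one "complex" event from two simple ones:

```csharp
// Illustrative only: correlate two independent events (altitude drop and
// speed spike) that occur inside a short time window.
using System;
using System.Collections.Generic;
using System.Linq;

public class FlightEvent
{
    public string Type { get; set; }       // "AltitudeDrop" or "SpeedSpike"
    public DateTime Timestamp { get; set; }
}

public static class RedAlertDetector
{
    // Raise an alert when both event types occur within the same 1-second window.
    public static bool ShouldRaiseRedAlert(IEnumerable<FlightEvent> recentEvents)
    {
        var window = TimeSpan.FromSeconds(1);
        var drops  = recentEvents.Where(e => e.Type == "AltitudeDrop");
        var spikes = recentEvents.Where(e => e.Type == "SpeedSpike");

        return drops.Any(d => spikes.Any(s =>
            (s.Timestamp - d.Timestamp).Duration() <= window));
    }
}
```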

The same type of scenario applies to the business / operational data of an Enterprise. As an Enterprise moves through the days, weeks, months and quarters of its business, lots of its parameters will change and lots of events can be published. Correlating a number of individual, isolated events published across an Enterprise will help in understanding what is actually happening in the system and in planning accordingly.

Just as an ESB has its place in the Enterprise IT ecosystem, CEP could also find an equal position. When it comes to enabling an Enterprise with a CEP capability, multiple options are available from platform / product vendors such as Microsoft, IBM, Oracle etc.

Let's see more on the technology options for implementing CEP in an Enterprise in the coming posts.

Wednesday, January 26, 2011

Follow the SUN Model - Global Workflow


Recently I had an opportunity to do an architectural analysis for enabling the “follow the SUN” business model for an Enterprise. Since the application is based on Microsoft SharePoint 2010, the analysis included evaluating the various deployment options available in the SharePoint platform and the different WAN optimization solutions & constraints, and also performing a cost-benefit exercise.

The scenario involved users accessing a web-based system from different geographical locations & in different time zones, carrying out various parts of the same business process.

The primary options available for this are:

  • Option #1 - Central Farm with users around the world
  • Option #2 - Central farm with regional SharePoint deployments

Each of these options carries its own merits & demerits.

Option #1

In this option, all the layers of the application (Web & App) will be deployed in the central location, including the DB layer (which holds all the databases and data pertaining to users of all regional locations). The central location is chosen where the Enterprise has its IT support hub.

This is suited to scenarios where the bandwidth & latency of the WAN connections provide a reasonable experience.

Merits
  • Minimal Total Cost of Operations
  • Optimized IT operations maintenance
  • Zero data / content latency, which enables an efficient load-sharing model
Demerits
  • There could be an impact on page response / performance for the other regional users in scenarios of high network latency & an underperforming WAN



Option #2

In this option, the Web & App layers will be deployed in the central location with regional deployments of the Data layer. To enable the users of different regional locations to carry out workflow activities, the regional databases should be kept mutually synchronized.

This is suited to scenarios where regional users are most likely to access local data & content on a day-to-day basis.

Merits
  • Optimized performance for regional editors when they access data & content of their local regional enquiries
Demerits
  • There could be latency in the availability of data & content to other regional users.
  • Increases the complexity and operations cost of the overall solution
  • Requires greater organizational coordination to build effective governance of content that is authored in multiple geographic locations



Let us see what could be the best option and why.

In the case of option #2, real-time replication may sometimes be required to keep the regional databases synchronized, which will significantly affect OLTP performance. This also becomes more complicated when the volume of data is huge.

In the case of option #1, even though constraints related to network & bandwidth may affect performance when the system is accessed by users in different geographical locations, especially when they access data from remote locations, this can be addressed through various WAN optimization solutions. From the cost perspective, investment in WAN optimization solutions is less than investing in server farms across different geographical locations.

Also, TCO will be higher in option #2 than in option #1 because of the need to deploy IT support professionals across the different regions in option #2.

From the security perspective, regional deployments will increase complexity.

This qualitative analysis helps us to conclude that option #1 looks more suitable than option #2.

But it still has to be backed by metrics from actual POC results.

When it comes to option #1, the Data layer will become a bottleneck. Since the Data layer includes data for all the regions, a request from a particular region has to traverse the data of all the other regions. Unless a strategy is adopted for its scalability and for efficient data retrieval & update, it will affect the performance of the entire system.

On performing a detailed analysis and evaluating various options, Data Dependent Routing emerged as the most suitable option.
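As a rough illustration of the idea (the region codes, server names and connection strings below are hypothetical), each request carries a region key that is used to pick the database holding that region's data within the central data tier, so a query never has to wade through other regions' data:

```csharp
// Minimal Data Dependent Routing sketch: route by region key to the
// database that holds that region's data.
using System.Collections.Generic;
using System.Data.SqlClient;

public static class RegionalDataRouter
{
    private static readonly IDictionary<string, string> RoutingMap =
        new Dictionary<string, string>
        {
            { "EMEA", "Data Source=sql-emea;Initial Catalog=Workflow;Integrated Security=SSPI" },
            { "APAC", "Data Source=sql-apac;Initial Catalog=Workflow;Integrated Security=SSPI" },
            { "AMER", "Data Source=sql-amer;Initial Catalog=Workflow;Integrated Security=SSPI" }
        };

    public static SqlConnection OpenConnectionFor(string regionCode)
    {
        // Look up the region's database and open a connection to it only.
        var connection = new SqlConnection(RoutingMap[regionCode]);
        connection.Open();
        return connection;
    }
}
```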

We will see more on Data Dependent Routing.