Sunday, December 12, 2010

Way to Mobile!

Mobility has an important place in the enterprise architecture roadmap of every enterprise. Organizations are seriously looking at the mobility space and defining strategies and business plans around it. Demand is rapidly growing from both workers and consumers to access corporate and business applications from their mobile devices.

In this article, I will highlight trends and analyst predictions in the mobility space, study the various options available to enterprises, and define a method for strategically building mobility initiatives, based on experience gained from rolling out mobile applications.

Imperative for Mobility


What analysts have to say on mobility:


Gartner
  • By 2013, mobile phones will overtake PCs as the most common web access device worldwide.
  • By 2014, more than three billion of the world's adult population will be able to transact electronically via mobile and Internet technology.
  • By 2015, context will be as influential to mobile consumer services and relationships as search engines are to the web.
These highlights indicate how important it is for every enterprise to have a strategy in place for getting its enterprise/consumer-facing applications onto mobile devices. Even though there is a debate on whether native mobile apps or mobile web apps are best, enterprises cannot ignore the need for existing applications to be web enabled, as it ensures a wider reach.

Monday, November 1, 2010

Architectural documentation: Views make things clear

When architects define an architecture, it is also their responsibility to represent it in such a way that all audiences understand it well. This is even more important when architects are involved in selling the proposed architecture of a solution as part of a pre-sales process: they have to present concrete justification points to potential customers to convince them why the proposed solution architecture is the best fit. A solution presentation may involve various high-level stakeholders from business, IT, technology and so on. The same is true when the IT function of an enterprise is pitching for an initiative or a new project; unless business is convinced, IT will not get the “go” signal and budget allocation.

The target audience of architecture can be categorized into two:

1. High-level stakeholders such as the CIO, CEO, CFO and enterprise architects from business and IT, who influence decisions significantly
2. Medium/low-level stakeholders such as tech leads and developers, who will actually help in realizing the architecture. This may also include fellow architects from technology domains such as security and performance.

Unless the architect responsible for an application’s architecture can document it in a way that is clear to both categories of audience, it will not move anywhere. Depending on the target audience, the architecture needs to be documented either with high-level details only or in an accurate, very detailed manner. High-level stakeholders (as classified above) may not be interested in implementation details of the architecture, whereas developers and tech leads will be very interested in them. So, based on the target audience, it is the duty of the architect to use appropriate views and models.

There are various models and views used across the industry to help architects document architecture effectively. Commonly adopted views/viewpoints are the conceptual view, logical view, technical view and physical view. There are no industry-wide naming standards or taxonomy for documenting architecture. Some enterprises use names like logical architecture, deployment architecture, conceptual architecture and so on; some even use business architecture or functional architecture without knowing what content to put under such headings. The point is that, irrespective of the views or naming standards used, the architect should know the target audience and their expectations before planning the architectural documentation. Unless an architecture is flavored with appropriate views/viewpoints in its documentation, it will not be clearly understood by its audience. It is like blind men touching a huge banyan tree and each forming his own impression of it.

In addition to the 4+1 view model recommended by Rational (which is generally followed by practicing architects), architects should use additional views/viewpoints based on the target audience. For example, if an architect is defining the architecture of a complex system where security is a highly important quality, that project may have a dedicated security architect. Even then, business and IT may demand that the architecture be evaluated by security domain experts. In that case, the architect should use a security view in addition to the other standard views.
The point is that the architect should use appropriate views/viewpoints/models for documenting the architecture so that each target audience gets a clearer understanding of it from their domain perspective. If not, it will lead to more ambiguity and misinterpretation, and more room for conflict.

Wednesday, September 1, 2010

Will WCF fade out ESBs?

As Microsoft’s WCF progresses through new versions, it is gradually becoming a replacement for many existing products and architecture styles. In an earlier post, I briefly described how it can be seen as a replacement for the hub & spoke architecture style. In this post, let us see how it can replace an ESB.

If you look at the key capabilities of a typical ESB product, they include intelligent routing, service aggregation and load balancing.

If you spend some time on WCF’s routing capability, you can see how it supports “content-based routing”: incoming messages/requests are routed based on the content of the message body or its headers. This capability can be leveraged to address typical techno-business requirements such as endpoint virtualization, load balancing, service versioning, priority routing, service aggregation and protocol bridging.

Endpoint virtualization: A single router service can be exposed to end clients and configured to route messages to several internal service endpoints; in that way it acts as a façade service (also called a “composite service”).

Load balancing: The same concept can be applied to route requests based on load, by applying rules as filters to the incoming messages.

If your solution needs only these capabilities, you may be better off with WCF (free!) rather than investing in an ESB product or spending effort on an open-source ESB.
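
To make the routing idea concrete, here is a minimal sketch of hosting the .NET 4.0 RoutingService with a programmatic filter table. The endpoint addresses and the XPath expression are illustrative assumptions, not a recommended production configuration.

using System;
using System.Collections.Generic;
using System.ServiceModel;
using System.ServiceModel.Description;
using System.ServiceModel.Dispatcher;
using System.ServiceModel.Routing;

// Minimal sketch of a WCF router doing content-based routing.
// Addresses, bindings and the XPath expression are illustrative only.
class RouterHost
{
    static void Main()
    {
        var host = new ServiceHost(typeof(RoutingService),
            new Uri("http://localhost:8000/router"));
        host.AddServiceEndpoint(typeof(IRequestReplyRouter),
            new BasicHttpBinding(), string.Empty);

        var contract = ContractDescription.GetContract(typeof(IRequestReplyRouter));
        var invoiceService = new ServiceEndpoint(contract, new BasicHttpBinding(),
            new EndpointAddress("http://localhost:8001/InvoiceService"));
        var defaultService = new ServiceEndpoint(contract, new BasicHttpBinding(),
            new EndpointAddress("http://localhost:8002/GeneralService"));

        var filterTable = new MessageFilterTable<IEnumerable<ServiceEndpoint>>();

        // Messages whose body contains an <Invoice> element go to the invoice service...
        filterTable.Add(new XPathMessageFilter("boolean(//*[local-name()='Invoice'])"),
            new List<ServiceEndpoint> { invoiceService }, 1);

        // ...and everything else falls through to a catch-all endpoint.
        filterTable.Add(new MatchAllMessageFilter(),
            new List<ServiceEndpoint> { defaultService }, 0);

        // routeOnHeadersOnly = false so the XPath filter can inspect the body.
        host.Description.Behaviors.Add(
            new RoutingBehavior(new RoutingConfiguration(filterTable, false)));

        host.Open();
        Console.WriteLine("Router listening. Press Enter to stop.");
        Console.ReadLine();
        host.Close();
    }
}

Additional filters can be registered in the same table for service versioning or priority routing; the router itself stays a thin, configuration-driven façade in front of the real services.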

Thursday, August 12, 2010

Hub & Spoke Vs WCF Routing

Hub & spoke architecture style – When it comes to integration in a heterogeneous environment, the hub & spoke architecture style is a well-proven option. It can be realized through queues, messages and a set of processing routines, with the different applications of an enterprise communicating with each other through message exchanges.

Sometimes the same architecture style is used purely for content-based message routing; some people call this “message-based integration”. Messages carry different values in their headers indicating their type and other details, and based on the header content each message is routed along a different path. Message processing is handled by routines or batch processes.

With options like the WCF Router available in .NET 4.0, it has become essential to decide when exactly to use hub & spoke. Using it in simple cases such as content-based routing only adds complexity. While options like WCF Routing are available, hub & spoke should not be employed unless there is a definite need.

Let us see the candidate scenarios where hub & spoke could be the best solution:

  1. When there is a need for asynchronous, content-based routing. Here, queues are used to persist the messages so that they can be processed later by the routines asynchronously
  2. When reliability and transactional guarantees take high priority, a message-based solution scores more points (by saying that, I do not mean that WCF Routing is unreliable)

WCF Routing scores high when it comes to synchronous content-based routing. Why? Because the effort required to implement a solution is small, and it keeps things simple compared to hub & spoke.

When it comes to implementation, following are the building blocks that will be used in hub & spoke:

Queue – used as a place to persist messages temporarily; for example, MSMQ or MQ Series.

Messages – XML messages in pre-defined formats; header attributes indicate what the message is meant for, say an invoice, order or delivery note.

Routines / Listeners – Windows services configured to watch various queues and process messages based on header values. For example, an invoice-processing Windows service will look for messages with “Invoice” in the “Transaction” attribute of the header and process them.
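
To illustrate the “routines/listeners” building block, here is a minimal sketch of such a listener using MSMQ (System.Messaging). The queue path and the use of the message label in place of the “Transaction” header attribute are assumptions for illustration; a production version would run as a Windows service with transactional receives and error handling.

using System;
using System.Messaging;

// Sketch of a listener routine in a hub & spoke setup: it watches a queue
// and dispatches each message based on a header value (here, the label).
class InvoiceListener
{
    const string QueuePath = @".\private$\erp.inbound";

    static void Main()
    {
        if (!MessageQueue.Exists(QueuePath))
            MessageQueue.Create(QueuePath);

        using (var queue = new MessageQueue(QueuePath))
        {
            queue.Formatter = new XmlMessageFormatter(new[] { typeof(string) });

            while (true)
            {
                // Blocks until a message arrives. A real Windows service would
                // do this on a worker thread started from OnStart().
                Message message = queue.Receive();

                if (message.Label == "Invoice")
                {
                    ProcessInvoice((string)message.Body);
                }
                else
                {
                    Console.WriteLine("Ignoring message of type: " + message.Label);
                }
            }
        }
    }

    static void ProcessInvoice(string payload)
    {
        Console.WriteLine("Processing invoice: " + payload);
    }
}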

In some cases, even databases are used instead of queues, as they offer similar benefits in terms of transactions, security, durability and so on.

Even though the hub & spoke architecture style enables heterogeneous integration with loose coupling, using components like queues and Windows services makes the implementation more complex. Also, as the number of transaction types increases, more routines have to be developed and deployed to do the processing.

Even though the WCF Router does not have any persistence capability, it can contribute to reliability to a certain extent through its support for increasing fault tolerance by routing traffic away from unavailable services.

You may wonder why I am comparing apples with oranges: I have compared a technology option with an architecture style. But I have done so from the perspective of the problem – “content-based routing”.

Saturday, July 10, 2010

TOGAF - ADM - Snapshot

Here is a snapshot of TOGAF - ADM Phases


• Preliminary Phase
  - Defining the enterprise
  - Identifying key drivers and elements in the organizational context
  - Defining the requirements for architecture work
  - Defining the architecture principles that will inform any architecture work
  - Defining the framework to be used
  - Defining the relationships between management frameworks
  - Evaluating the enterprise architecture’s maturity

• Phase A – Architecture Vision
  - Starts with a “Request for Architecture Work” from the sponsoring organization to the architecture organization
  - Key activities: Architecture Vision and business scenarios
  - Development of business scenarios
  - Define scope, validate business principles, goals and drivers, and establish EA KPIs
  - Stakeholder analysis should be used to identify key players
  - Identification of business transformation risks

• Phase B – Business Architecture
  - Develop the business architecture needed to support the agreed architecture vision
  - Describe the baseline and target business architectures and analyze the gaps between them
  - Select and develop relevant architecture viewpoints

• Phase C – Information Systems Architecture
  - Document the fundamental organization of an enterprise’s IT systems, embodied in the major types of information and application systems
  - Two key artifacts can be developed either sequentially or concurrently:
    - Data Architecture
    - Application Architecture

• Phase D – Technology Architecture
  - Map application components to software and hardware
  - The objective is to develop the target technical architecture
  - Define the physical realization of the architectural solution

• Phase E – Opportunities & Solutions
  - Review target business objectives and capabilities
  - Consolidate the gaps from Phases B to D
  - The objective is identification of the target delivery vehicles
  - Concentrates on how to deliver the architecture
  - Derive a series of Transition Architectures that deliver continuous business value
  - Generate and gain consensus on an outline implementation and migration strategy
  - Actual solutions (COTS, packages, etc.) are selected

• Phase F – Migration Planning
  - Cost-benefit analysis and risk assessment of the projects identified in Phase E
  - Prioritization of the work packages, projects and building blocks
  - Finalize the Architecture Vision and Architecture Definition documents in line with the agreed implementation approach
  - Confirm the Transition Architectures with stakeholders
  - Finalize a detailed implementation and migration plan

• Phase G – Implementation Governance
  - Objectives are:
    - To formulate recommendations for each implementation project
    - To govern and manage an Architecture Contract
    - To ensure conformance of the deployed solution with the Target Architecture
  - Defines an operations framework to ensure the long life of the deployed solution
  - Outputs: Architecture Contract, compliance assessments, change requests, Implementation Governance Model

• Phase H – Architecture Change Management
  - Ensure baseline architectures continue to be fit for purpose
  - Maximize the business value derived from the architecture
  - Ensure that the architecture achieves its original target business value
  - Relates to the management of Architecture Contracts between the architecture function and the business users of the enterprise
  - Rule of thumb to identify the extent of change:
    - If a change impacts more than two stakeholders, it may require architecture redesign and re-entry into the ADM
    - If it impacts only one stakeholder, it may be a candidate for change management
    - If the change can be allowed under dispensation, it is a candidate for change management

• Requirements Management
  - The objective is to define a process to manage requirements for the enterprise architecture and its subsequent implementation

Let us see more on TOGAF ADM in the coming days.

Thursday, July 1, 2010

Web to Mobile #1

Some highlights from Gartner’s 2010 “End User Predictions” report:

  • By 2014, more than three billion of the world’s adult population will be able to transact electronically via mobile and Internet technology.

  • By 2015, context will be as influential to mobile consumer services and relationships as search engines are to the Web.

  • By 2013, mobile phones will overtake PCs as the most common Web access device worldwide.

These highlights indicate how important it is for every enterprise to have a strategy in place for enabling its enterprise/consumer-facing applications on mobile devices. Even though there is a debate going on over whether native mobile apps or mobile web apps are best, enterprises cannot ignore the path of web-enabling existing applications, as it ensures a wider reach.

Any initiative on mobile enabling web applications will revolve around 2 key concepts:

  • Markups

  • Style sheets

Both of these ensure an optimized/tailored delivery of a web application’s content on mobile browsers. Without understanding the fundamentals behind these two concepts, no team can succeed in mobile-enabling web applications.

In addition to these, two key building blocks of any mobile web application are:

  • Content Adaptation Facility

  • Device Database

Both are available as open-source software and as part of the COTS products used for developing mobile web applications.
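
As a rough sketch of how these two building blocks fit together, the following code resolves a device profile from a purely illustrative, in-memory device database and picks the markup and style sheet to serve. Real solutions would use a full device description repository such as WURFL; all names and entries below are assumptions.

using System;
using System.Collections.Generic;

// Minimal sketch: a "device database" keyed on user-agent fragments and a
// content-adaptation step that chooses markup and style sheet per device.
class ContentAdaptation
{
    class DeviceProfile
    {
        public string Markup;      // e.g. "XHTML-MP" or "HTML5"
        public string StyleSheet;  // style sheet tailored to the device class
    }

    static readonly Dictionary<string, DeviceProfile> DeviceDatabase =
        new Dictionary<string, DeviceProfile>
        {
            { "iPhone",     new DeviceProfile { Markup = "HTML5",    StyleSheet = "smartphone.css" } },
            { "BlackBerry", new DeviceProfile { Markup = "XHTML-MP", StyleSheet = "feature-phone.css" } }
        };

    static DeviceProfile Resolve(string userAgent)
    {
        foreach (var entry in DeviceDatabase)
        {
            if (userAgent.Contains(entry.Key))
                return entry.Value;
        }
        // Unknown device: fall back to the desktop experience.
        return new DeviceProfile { Markup = "HTML", StyleSheet = "desktop.css" };
    }

    static void Main()
    {
        DeviceProfile profile = Resolve("Mozilla/5.0 (iPhone; CPU iPhone OS ...)");
        Console.WriteLine("Serve {0} markup with {1}", profile.Markup, profile.StyleSheet);
    }
}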

Let us see more on “web to mobile” in the coming days :)

Saturday, May 1, 2010

Your Architecture diagram now has a life!

When an architect drafts the Architecture Specification or System Architecture Definition document of a system to be built, one of the key requirements is to include the necessary views of the architecture, so that those views convey how the concerns of the different stakeholders of the system are addressed. Usually the documentation includes various views of the architecture, such as the conceptual architecture, logical architecture and technical architecture, each targeting different stakeholders.

Tools like MS Visio and MS Word have typically been used for drafting the various views of an application architecture. With the release of .NET 4.0 and Visual Studio 2010, the logical architecture diagram can be drawn in Visual Studio itself! The beauty is not just the simple capability to draw some boxes, but the ability to link the logical architecture (called a “layer diagram” in the Visual Studio 2010 context) to the actual implementation, so that if the system is developed out of line with the application’s architecture, it can be easily identified. So, now your architecture diagram has a life! It is no longer just a fancy diagram in a Word document: it is directly linked to the application implementation, and any violation can be identified immediately. I think this is an amazing feature not available in even the most sophisticated architecture modeling tools and products. It also has the capability to forward- and reverse-engineer the models. With all this, Visual Studio now becomes a more powerful tool not only for the developer community but for the architect community as well.

Saturday, April 17, 2010

Follow-up: Evaluating Application Architecture, Quantitatively





Since the publication of my article “Evaluating Application Architecture, Quantitatively” in the 23rd issue of Microsoft’s The Architecture Journal, I have been receiving lots of questions, encouraging comments, wishes and suggestions. I never expected such a response from the architects’ community around the world, and the result is this follow-up.

The article, which outlines a framework for evaluating application architectures quantitatively, specifies that ‘1’ is assigned for a positive response to each question/statement in the questionnaire/checklist and ‘0’ for a negative response. When a set of questions is used for an application architecture evaluation, however, some of them may not be suitable for a particular context.

Say, for example, you are evaluating the architecture of an application that is meant for intranet use only, and the repository of questions you are evaluating against contains a question like this:

“Are the web servers placed in a DMZ?”

In this context, the question is not applicable: for an intranet application, it is not a must to place the web servers in a DMZ. If the response is “No”, zero is assigned against the question, but the question itself is invalid, or “Not Applicable (NA)”. If the repository has many such “NA” questions, the resulting “Architecture Index” will be misleading.

Although a larger number of questions makes your repository richer and increases the chance of being able to evaluate a wider variety of applications, the nature of some contexts will render some questions invalid or “Not Applicable”.

So, when you are building a tool, you should always have a provision that allows the reviewing architect to mark a question as “Not Applicable”, so that the question is excluded from the Architecture Index calculation.
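
As a simple sketch of how such a tool could treat “Not Applicable” responses, assuming a Yes/No/NA answer model (the scoring below is illustrative, not the exact formula from the article):

using System;
using System.Collections.Generic;
using System.Linq;

// Illustrative only: a checklist response can be Yes (1), No (0) or
// Not Applicable; NA responses are excluded from the index altogether.
enum Answer { Yes, No, NotApplicable }

class ArchitectureIndexCalculator
{
    // Percentage of applicable questions answered positively.
    static double Calculate(IEnumerable<Answer> answers)
    {
        var applicable = answers.Where(a => a != Answer.NotApplicable).ToList();
        if (applicable.Count == 0)
            return 0.0;
        return 100.0 * applicable.Count(a => a == Answer.Yes) / applicable.Count;
    }

    static void Main()
    {
        var responses = new[] { Answer.Yes, Answer.Yes, Answer.No, Answer.NotApplicable };

        // 2 positive answers out of 3 applicable questions => 66.7,
        // instead of the misleading 50.0 we would get by scoring the NA item as zero.
        Console.WriteLine(Calculate(responses).ToString("F1"));
    }
}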

Thursday, March 25, 2010

Evaluating Application Architecture, Quantitatively



Evaluation of an application architecture is an important step in any architecture-definition process. Its level of significance varies from organization to organization, based on a variety of factors (such as application size and business criticality). In some IT organizations, it is a part of a formal process; in others, it is performed only upon special requests that stakeholders might raise. Enterprises sometimes have a dedicated “Architectural Review Board” (or ARB) that is made up of a team of experienced architects who are earmarked for performing periodic architectural evaluations.
Scenarios that drive the architecture-evaluation process include:
  • When a business must validate an application architecture to see whether it can support new business models.
  • An expansion to new geographies and regions—resulting in the need to check whether an existing application architecture can scale to new levels.
  • Impaired application performance and user concerns that lead to an assessment, to see whether it can be reengineered with minimal effort to ensure optimum performance.
  • Stakeholders having to ensure that a proposed application architecture will meet all technical and business goals—ensuring that key architectural decisions were made with key use cases/architectural scenarios in mind and will meet the nonfunctional requirements of the application.

In the context of the new application development, the key objectives of carrying out an architecture-evaluation process are:
  • Avoiding costly redevelopment later in the software-development life-cycle (SDLC) process by detecting and correcting architectural flaws earlier.
  • Eliminating surprises and last-minute rework that is due to the suboptimal usage of technology options that are provided by platform vendors such as Microsoft.

Architectural reviews are also performed based on a particular quality-of-service attribute, such as “Performance” or “Security”: for example, how secure the architecture is, whether it has the potential to support a certain number of transactions per second, or whether it will meet a specified response time.
The application architectural-evaluation process involves a preliminary review, based on a checklist that is provided by the platform vendor and subsequent presentations, debates, brainstorming sessions, and whiteboard discussions among the architects. Key aspects of brainstorming sessions also include the outputs of the scenario-based evaluation exercises that are performed by using industry-standard methods such as the Architecture Trade-Off Analysis Method (ATAM), Software Architecture Analysis Method (SAAM), and Architecture Reviews for Intermediate Designs (ARID). There are also different methods that are available in the industry to assess the architectures, based exclusively on factors such as cost, modifiability, and interoperability.
The checklist that is provided by a platform vendor ensures the adoption of the right architectural patterns and appropriate design patterns. With its patterns & practices initiative, Microsoft provides a set of checklists/questionnaires across various crosscutting concerns for the evaluation of application architectures that are built on Microsoft’s platform and products. An architecture-evaluation process usually results in an evaluation report that contains qualitative statements such as, “The application has too many layers” or “The application cannot be scaled out, because the layers are tightly coupled.”
Instead of having qualitative statements, if the evaluation process ends up providing some metrics—such as a kidney-diagnosis process that ends with a “kidney number” or a lipid-profile analysis that ends with numerical figures for HDL and LDL—it will be easier for stakeholders to get a clear picture of the quality of the architecture.
This article outlines a framework for applying quantitative treatment to the architecture-evaluation process that results in more intuitive and quantitative output. This output will throw more light on areas of the application architecture that need refactoring or reengineering and will be more useful for further discussions and strategic decision making.

Wednesday, March 3, 2010

Factors that may impact your estimation, Significantly

Listed below are the key factors to consider when starting an estimation for an application, as they may have a significant impact on cost as well as overall schedule. The level of impact will vary based on the nature of the team that is going to carry out the development. Even though some of these factors are implicitly considered during estimation anyway, they are listed here explicitly to ensure they are not missed. So when we get an RFP to respond to, keeping a special eye on these factors will help us arrive at a reasonable ball-park figure.

We can even allocate a percentage against each of these factors to indicate the level of impact it has on the overall estimation (a minimal sketch of this adjustment follows the list below). After the project has been executed, these percentages can be verified against the actual figures and used as benchmarks in subsequent estimations for similar types of application development. Once the factors behind these percentages are well documented, it will help in arriving at reasonable percentages up front, and in making reasonable assumptions when estimation is carried out with limited information. It will also help us put the required, relevant questions to clients before starting the estimation, so that the ball-park figure is close to the actual one.

  1. Nature of integration with legacy systems, other home-grown internal applications, COTS applications and partners
Some COTS products are service-enabled, so integration with them may be comparatively easy; the same applies to SOA-based applications. But when you have to deal with proprietary APIs, or with other types of integration such as watching a particular folder for files that follow naming conventions and pre-defined templates, it will impact the schedule. Similarly, where there is a need to integrate with partner systems (consuming partner web services, generating data in a pre-defined format, interacting with partners’ proprietary systems and so on), additional time will be required because of the learning effort involved.

  2. Proposed architecture style of the application
When the proposed architecture of the application to be built is based on a client-server style, it will take comparatively less time than one where a multi-layered, multi-tiered architecture is proposed. Development has to happen across multiple layers, and so does testing; in addition, the system has to be tested in a distributed environment. All of this will impact the schedule.

  3. Nature of the development platform
If the application development is proposed on a homogeneous platform, it will need less time than development that has to be carried out across heterogeneous platforms. Testing for interoperability issues will add time on top of the other normal tests.

  4. Significant non-functional requirements
Usually, non-functional requirements are specified as part of the system requirement specification. In some scenarios there is a special demand on a particular QoS attribute. For example, in the banking domain, applications are subjected to many security vulnerability tests (litmus testing, smoke testing, sanity testing, penetration testing, etc.) to ensure data, application, infrastructure and communication security. This is not a common requirement in general application development, so such special non-functional requirements need extra time for the additional tasks. Clients may not specify these things explicitly; based on the nature of the application, the estimating professional has to identify the need for such special tasks as part of the development cycle and budget for them accordingly. They also have to educate the customer to avoid any miscommunication.

  5. Life of the application / product
If the application is going to have a long life, a lot of effort will go into good design so that the application does not need to undergo frequent fixes and changes. Similarly, if the final product is an application development framework on which further applications will be built in the enterprise, a lot of time has to be spent to ensure that the framework includes all the common functional and infrastructure components.

  6. Compliance with regulatory requirements
The additional learning curve and development effort needed to deal with industry-standard data formats such as FIX/SWIFT, and to design the system to comply with regulatory requirements such as SOX, impact both cost and schedule.

  7. Multilingual capability
If an application needs to be developed targeting multiple languages, the testing effort will increase with the number of target languages. Moreover, if data needs to be stored in the database in different languages, it will add more cost than scenarios where only a multilingual display is required.
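
As referenced above, here is a minimal sketch of adjusting a base ball-park estimate by the impact percentages allocated to these factors. The factor names and percentages are purely illustrative, not benchmarks.

using System;
using System.Collections.Generic;

// Hypothetical sketch: adjust a base ball-park estimate by the impact
// percentage allocated to each applicable factor. Numbers are made up.
class BallParkAdjustment
{
    static void Main()
    {
        double baseEstimatePersonDays = 500;

        var impactFactors = new Dictionary<string, double>
        {
            { "Legacy / COTS / partner integration", 0.15 },
            { "Multi-layered, multi-tiered architecture", 0.10 },
            { "Heterogeneous development platform", 0.08 },
            { "Special non-functional requirements", 0.12 },
            { "Multilingual capability", 0.05 }
        };

        double adjusted = baseEstimatePersonDays;
        foreach (var factor in impactFactors)
        {
            adjusted += baseEstimatePersonDays * factor.Value;
            Console.WriteLine("{0}: +{1:P0}", factor.Key, factor.Value);
        }

        Console.WriteLine("Base estimate      : {0} person-days", baseEstimatePersonDays);
        Console.WriteLine("Adjusted ball-park : {0} person-days", adjusted);
        // After the project, compare these percentages with actuals and
        // feed them back as benchmarks for similar estimations.
    }
}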

Wednesday, February 10, 2010

‘Customers & Vendors’ Maturity Level

These days we live in a world of maturity models that measure the maturity of SOA, software capability and so on. Here, I am looking at maturity from a different perspective.

As the need to build highly available and scalable systems grows day by day, it is tough to meet those goals unless a reasonable level of maturity exists on both the customer side and the vendor side.

By “maturity” I mean awareness and knowledge of factors such as best architectural and design practices, non-functional requirements, QoS requirements and industry standards, and of their direct impact on the business, given the business domain, industry and operational nature of the system.

Some customers are mature enough to specify their scalability and high-availability requirements precisely; others specify them only for the sake of it. If a vendor asks “Do you want high availability?”, such a customer will simply reply “Yes, we want it”, while in reality they may not be aware of the different levels of availability that exist (99.9, 99.99, 99.999 and so on) and of the implications each one has on infrastructure planning, the development and testing life cycles, cost and so on. A mature customer will also know what level of availability would suffice for their business and how much it would cost.
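
For illustration, these availability levels translate into roughly the following downtime per year, a back-of-the-envelope calculation that ignores planned maintenance and how the downtime is distributed.

using System;

// Back-of-the-envelope downtime per year implied by common availability levels.
class AvailabilityDowntime
{
    static void Main()
    {
        double hoursPerYear = 365 * 24;
        foreach (double level in new[] { 99.0, 99.9, 99.99, 99.999 })
        {
            double downtimeHours = (1 - level / 100.0) * hoursPerYear;
            Console.WriteLine("{0,7}% availability => ~{1:F1} hours of downtime per year",
                level, downtimeHours);
        }
    }
}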

A mature vendor, in turn, will know what impact a customer’s demand will have on the project schedule and budget. A vendor who lacks maturity will simply accept whatever QoS requirements the customer demands, just to win the project!

Lack of maturity on both sides will result in a disaster!

Mature customers are more precise in all their requirements. Depending on their level of maturity, they will document their non-functional requirements in addition to their functional requirements. Some customers even specify their requirements on cross-cutting concerns very clearly: they are specific about logging, instrumentation, exception handling and so on. For these customers, the final system will meet its objectives irrespective of the type of vendor they are dealing with. Even if the vendor lacks knowledge of best practices, industry standards or architecture, the development will be driven in the right direction because the customer is very specific about functional, non-functional and cross-cutting requirements.

Now imagine an immature customer dealing with an immature vendor: it is like the blind leading the blind. The customer may have documented the functional requirements very well but will not have specified anything else explicitly. If, on the other hand, the vendor is mature, the vendor will educate the customer on the significance of non-functional/QoS requirements and cross-cutting concerns, and make the customer aware of the consequences of building a system without giving due weight to those factors. Take, for example, an immature customer ‘A’ who wants an e-commerce portal built by a mature vendor ‘B’. Even if the customer has not said anything explicit about high availability, the vendor will convey the importance of high availability, its different levels and its impact on the business: if the web site is not built for at least 99% availability, it could be unavailable for a significant amount of time, during which online buyers cannot place orders. The customer will immediately realize the importance of high availability and opt for it. So, the outcome of a system depends on the maturity of both the customer and the vendor; unless at least one of them has a reasonable level of maturity, their partnership will not result in a system that meets its objectives.

Wednesday, January 20, 2010

Why Windows Azure is on Cloud 9?

The cloud computing market is getting crowded with more and more vendors, as cloud is seen at the top of the “what’s next” list among CEOs and CIOs. Google, Amazon and Microsoft are pioneers in this arena. Based on my observations, I see more traction for Windows Azure-based enterprise apps. There may be many reasons, but one key reason that has made Windows Azure a leading choice may be the data storage facilities it offers.

If you look at the key players in cloud computing, viz. Amazon, Google and Microsoft, they all offer data stores based on the entity-attribute-value (EAV) model:

• Amazon – SimpleDB

• Google – BigTable

• Microsoft – SQL Data Services

EAV models are suited only to scenarios where a vast number of attributes is used to describe an entity. From a business domain perspective, health care is a good candidate for such scenarios: when data models are designed for databases that store information on clinical findings, we end up with a huge number of attributes. If we model such information on relational database principles, we end up with tables of thousands of columns, which leads to a lot of overhead and underperforming systems. For such scenarios, EAV-based data stores are appropriate. So, when we design and develop a cloud application for health care, it is best to leverage the EAV-based storage systems provided by the cloud vendors, whether that is SimpleDB, BigTable or SQL Data Services.
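
As a rough illustration of the difference, the sketch below contrasts a fixed relational-style record with EAV-style (entity, attribute, value) triples of the kind cloud table stores hold. The entities and attributes are hypothetical.

using System;
using System.Collections.Generic;

// Illustrative contrast: a fixed relational-style record versus EAV-style
// (entity, attribute, value) triples. All names and values are made up.
class EavIllustration
{
    // Relational-style: every order row has exactly these columns.
    class OrderRow
    {
        public int OrderId;
        public decimal Amount;
        public DateTime PlacedOn;
    }

    static void Main()
    {
        // EAV-style: each clinical finding carries only the attributes it
        // needs, stored as (entity, attribute, value) triples.
        var findings = new List<Tuple<string, string, string>>
        {
            Tuple.Create("Finding-001", "BloodPressureSystolic", "128"),
            Tuple.Create("Finding-001", "Allergy", "Penicillin"),
            Tuple.Create("Finding-002", "Temperature", "38.2")
        };

        foreach (var t in findings)
            Console.WriteLine("{0} | {1} = {2}", t.Item1, t.Item2, t.Item3);

        var order = new OrderRow { OrderId = 42, Amount = 120.50m, PlacedOn = DateTime.UtcNow };
        Console.WriteLine("Order {0}: {1} placed on {2:d}", order.OrderId, order.Amount, order.PlacedOn);
    }
}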

For applications such as e-commerce or banking, where we do not see much variation in the attributes of entities, EAV is not a good fit; RDBMS-based storage is the better option for such applications.

Unlike the other vendors, Microsoft is the first to provide relational-model cloud storage, called ‘SQL Azure’, in addition to its EAV-model storage, ‘SQL Data Services’. Developers can leverage their existing skills in T-SQL, SQL queries, stored procedures and so on.

So, for applications like e-commerce or banking where EAV may not be appropriate, Windows Azure stands out as the first choice because of its support for the relational database model.

With the opportunity to leverage developers’ skills in Visual Studio and conventional SQL-based data access techniques, cloud computing may not look much different to developers, and a lack of knowledge of EAV concepts will not become a stumbling block in their development journey. In addition, SQL Azure allows SQL developers to migrate data from their existing RDBMS databases without any major effort.

Given that not all applications being made cloud-aware should be based on EAV models, I think we have justification for seeing Windows Azure as a front runner among cloud platforms.

Let us wait & watch :)