Wednesday, October 28, 2009

Economics and Enterprise Architecture

I thought this quite a good item (http://virtualization.sys-con.com/node/1159476).

It touches indirectly on a couple of points e.g.
- Getting the right answer requires that you pose the right question.
- You need to ensure the cause/effect relationships are valid if you are to know your conclusions are correct.
- Assumptions and theories need to be tested by identifying the variables that need to remain static and then testing changes.

Having observed much EA/Strategy work I think this rests on some fundamental assumptions that are typically wrong e.g.
- that cause/effect relationships are made explicit and testable
- that assumptions and theories (or beliefs) are stated as such and their rationale made explicit.

See: http://ict-tech-and-industry.blogspot.com/2009/03/business-analysis-and-visioning.html

It also says:
"Unfortunately, you cannot demonstrate the value of Enterprise Architecture if you cannot monetize or enumerate the value of all possible choices relative to the choices that are being recommended or those that have been made. "

To me this is kind of ducking the issue - i.e. one can never know all possible choices that could have been made (let alone monetize them) - this is the intrinsic problem of counterfactual reasoning I wrote about a few years ago (http://ea-in-anz.blogspot.com/2007/08/justifying-architecture-and-strategy.html), summarised below:

There are always challenges in valuing and justifying good (or often any) designs, strategies and plans. The problem lies in a number of areas e.g.:

- Challenges with Counterfactual reasoning - In most situations one can't easily compare the result of what was done (designs/strategies/plans) with what could have been done.

- Self-interest: this militates against sharing knowledge. Knowledge is power, and sharing knowledge (e.g. the basis of a decision) can diminish one's power and open one up to criticism (e.g. regarding the veracity of the data or the soundness of the analysis).

- Secrecy: sometimes one does not wish to disclose a strategy/plan. Often the downside of secrecy is that people on one's own team or side don't understand what should be done, why, when etc. (i.e. the benefit of secrecy is outweighed by its impact on efficacy).

- Analysis and design is intrinsically difficult - strategy, design, planning and modelling are often quite difficult to do well. To make matters worse the result can be seen, with hindsight, as trivial and not justifying the effort required. It is always quicker just to act without thinking, and it may even be cheaper in the very short term.

- Future is someone else's problem - if what is done doesn't last, is expensive to operate, expensive to change etc. it is seldom the problem of the initiator, the initial designer, builder or user. It is a problem inherited by others.

While these issues can be seen in all areas of life, ICT is particularly bad as it has not yet matured into a profession (where people have the requisite learning, training and discipline; and profess anything approaching a code of ethics).

People wishing to ensure better strategies, designs and plans, need to look for others who:
- Intrinsically value better strategies, designs and plans, and can consider the future.
- Can overcome self-interest of knowledge holders and know what must be kept secret
- Know thinking is difficult, takes time and money and are prepared for the effort.

Monday, October 26, 2009

Industry Reference Model framework

INTRODUCTION

Systems need to be established and to evolve over time. Therefore an understanding of what led to the current state is critical. These endeavours include a gamut of challenges: regulatory, governance, policy, institutional arrangements and agreements, organisation structure and roles, skills and capabilities, technologies and technical standards, transitional and phasing, etc. Too often this knowledge departs with people, or lies in thousands of documents and files.

Best practices are emerging in how to establish industry frameworks. Their use addresses the recognised issues associated with the implementation of complex information systems by government; assists users in capturing knowledge in a semantically precise way; and allows specific local implementations to be supported that can be mapped to an external reference set (reflecting common best practice).

In implementing a system one needs to ensure it can evolve. This allows it to be extended to address future needs. The evolution of systems requires that the raison d’ĂȘtre for the design of the systems and organisations is understood i.e. so that experience is institutionalised and retrograde “enhancements” can be avoided.

This framework must ensure semantic precision and allow easy case-by-case instantiation. It relates patterns, principles, standards, modules (or building blocks), reference models and maturity levels. It allows the adoption, emphasis or abnegation of the elements of the framework in each specific implementation to be made explicit. This allows the framework to be common – while each implementation is different, with a mapping between the common framework and the specific implementation.

ANALYSIS OF PROBLEM

Complicated systems

Complicated systems relate to complex organizational structures. They are typically networks of systems, distributed and loosely coupled, in federated or discrete organizations, serving a multitude of purposes and audiences, supporting transactional and archival functions. To address the problems effectively we need to learn better from other complex disciplines and recognize that many of the best practices that apply to them apply to our systems.

Specifically we could start by looking at the generic problems associated with the implementation of complex IT systems (especially in government).

The Royal Academy of Engineering and The British Computer Society observed that

"A significant percentage of IT project failures, perhaps most, could have been avoided using techniques we already know how to apply. For shame, we can do better than this."

They go on to say:

"It is alarming that significant numbers of complex software and IT projects still fail to deliver key benefits on time and to target cost and specification. Whilst complex IT project success rates may be improving, the challenges associated with such projects are also increasing rapidly. These are fuelled in large part by the growth ... in the capability of hardware and communications technology, and the corresponding inflation in people’s expectations and ambition.".

They examine how complex IT projects differ from other engineering projects, with a view to identifying ways to augment the successful delivery of IT projects.

Amongst their findings and recommendations are that:

“A striking proportion of project difficulties stem from people in both customer and supplier organisations failing to implement known best practice. This can be ascribed to the general absence of collective professionalism in the IT industry, as well as inadequacies in the education and training of customer and supplier staff at all levels”

The significance of systems architecture is not appreciated.

Further developments in methods and tools to support the design and delivery of such projects could also help to raise success rates. In particular, basic research into complexity is required to facilitate more effective management of the increasingly complex IT projects being undertaken.

There is an urgent need to promote the adoption of best practice amongst IT practitioners and their customers.

They also identify some things that we think most people have known for some time e.g. the need for good project management and risk analysis. However both of these tasks are significantly impeded if the underlying knowledge required for analysis is not available.

What issues does a framework address

A framework must help improve:

collective professionalism – by providing all parties a way of undertaking analysis, design and planning in an effective and professional manner.

education and training – by providing an explicit relationships between the outcomes, the procedures and systems, the organizations and roles, and the skills required.

architecture – by providing a template for defining sector or industry (NSDI) architectures, enterprise architectures and systems architectures

strategies for dealing with complexity – by providing methods and tools that support the analysis, design and delivery of systems

adoption of best practice – it is all too easy to speak of adopting best practice, everyone does, but in order to do this we really need to define what the elements of best practice are, how this knowledge is managed, and provide a strategy for its adoption.

What capabilities does a framework need to provide

It needs to allow a federated group of participants to do a number of things, with greater transparency and more objectivity. Transparency helps people understand what they agree or disagree on, and why, in an objective and unemotional manner. Reaching consensus is therefore easier. So it must be possible to:

Capture drivers and requirements – these are the things that determine what a system should do. Each and every element of a solution (roles, skills, technologies etc.) must derive from these.

Undertake analysis – a simple structured approach to analysis is required. Analysis can be organised around a simple canonical model (Goals, Facts, Beliefs and Recommendations).

Design and decide – designs are assemblages of elements, so we need to be able to record these things (and relate them to externalities e.g. technologies). Design and decision making is based on analysis of alternatives, so once we have the information on drivers and requirements and are able to undertake analysis, we can explain the basis of decisions and designs.

Plan, Programme & Phase – these require us to understand sequencing, prerequisites and co-requisites (implementation constraints). Intrinsic to this is the relationship between the requirements and the designs (i.e. the set of design constraints).

Promulgate, educate, communicate and socialize – we need to be able to very selectively extract information from the framework that is suited to a particular audience, purpose or interest. What is key is that we don’t need to manually re-craft communications artifacts for each different purpose or audience (and ideally we want information to be discoverable).

Estimate the effort, costs, risk and timeframes associated with people, technology, procedures – in practice costs can only be effectively estimated by examining the proposed implementation, i.e. the designs. But decisions need to be made relative to the requirements and outcomes; therefore we need to understand how the elements of the implementation relate to the requirements (and the marginal economic impact of each requirement).

Support the validation, assessment, quality assurance and review – by making the above relationships explicit and transparent we provide a mechanism for doing this in a clear, simple and objective way.
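The sequencing capability above (prerequisites as implementation constraints) can be sketched with a standard topological sort; the activity names below are invented purely for illustration:

```python
# Phasing as a topological sort over captured prerequisites
# (illustrative activity names only).
from graphlib import TopologicalSorter  # Python 3.9+ standard library

prereqs = {
    "Deploy portal": {"Build registry", "Define standards"},
    "Build registry": {"Define standards"},
    "Define standards": set(),
}

# static_order yields each activity only after all its prerequisites.
order = list(TopologicalSorter(prereqs).static_order())
print(order)
```

Co-requisites and resource levelling need a richer model, but the point stands: once prerequisites are captured as data rather than prose, the phasing falls out mechanically.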

What is best practice (what can we learn from)

We can learn from a number of standards and approaches that are applied elsewhere by examining some existing methods e.g. ValIT (Val IT - a framework addressing the governance of IT-enabled business investments), COBIT (Control Objectives for Information and related Technology), OSIMM (Open Group SOA Integration Maturity Model), CMMI (Capability Maturity Model Integration), FEAF (Federal Enterprise Architecture Framework), DODAF (Department of Defense Architecture Framework), TOGAF (The Open Group Architecture Framework), Zachman Framework, ITIL (Information Technology Infrastructure Library), IFW (Information FrameWork), DSM (Design Structure Matrix), Pattern Language (i.e. Alexander’s seminal work), TMForum.

These are from many disciplines e.g. engineering, architecture, portfolio analysis, defense, IT, telecommunications, etc.

Our approach to a framework is informed by these sources and others. The effectiveness of these approaches has in the past been significantly impaired, in most cases by their means of implementation (usually many documents and consultants). We need an approach that minimizes the need for both.

What does best practice suggest a framework needs to encompass

Space does not permit a full review of all of these, but we believe there is general consensus that the following make good sense:

Business case and investment models – FEAF, ValIT

Reference Models – FEAF, OSIMM

Principles, patterns and anti-patterns – DODAF, Pattern Language, TOGAF

Principles – Pattern Language, TOGAF,

Standards and technologies – TOGAF, FEAF, ITIL, OSIMM

Modules (modularity, building blocks, e.g. service and components models) and collaboration models: indicating interfaces between roles and systems – IFW, DSM

Taxonomies – DODAF, Zachman,

Maturity models – CMMI, OSIMM, TOGAF (implied)

Compliance mechanisms – COBIT, FEAF, OSIMM, CMMI

Different levels (of detail, of technicality) – Zachman, Pattern Language

Instantiation – almost all of these frameworks allow instantiation

A FRAMEWORK BASED ON BEST PRACTICE

What analysis is enabled by semantic precision of the framework

In general we seek to provide a mechanism for better “analysis”. We propose a simple model based on facts, goals, beliefs and recommendations. Where: goals are things you are trying to achieve, sometimes expressed as principles, issues (goals stated in reverse), visions, measures, objectives or KPIs; facts are not disputable and include laws, regulations, social factors and technical constraints; beliefs are based on facts, relate to goals, and include causes, findings and implications; and recommendations are based on beliefs, achieve goals, and include strategies, plans etc. In addition we would want some grouping concepts (classification systems) for: terms, patterns, principles, technologies, standards etc. By supporting business analysis with this paradigm we move from persuasive narrative to structured reasoning.
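As a sketch only (the class and function names below are my own illustration, not part of any published model), the canonical model lends itself to a simple structure in which an unsupported recommendation becomes mechanically detectable:

```python
# A minimal sketch of the Goals/Facts/Beliefs/Recommendations model.
from dataclasses import dataclass

@dataclass
class Goal:
    name: str

@dataclass
class Fact:
    statement: str

@dataclass
class Belief:
    statement: str
    based_on: list    # Facts supporting this belief
    relates_to: list  # Goals this belief bears on

@dataclass
class Recommendation:
    statement: str
    based_on: list  # Beliefs justifying the recommendation
    achieves: list  # Goals the recommendation serves

def unsupported(recommendations):
    """Recommendations with no supporting beliefs - i.e. persuasive
    narrative rather than structured reasoning."""
    return [r for r in recommendations if not r.based_on]

g = Goal("Reduce integration cost")
f = Fact("There are 14 point-to-point interfaces")
b = Belief("Point-to-point interfaces drive integration cost", [f], [g])
r1 = Recommendation("Introduce a shared interface layer", [b], [g])
r2 = Recommendation("Adopt product Y", [], [g])
print([r.statement for r in unsupported([r1, r2])])  # ['Adopt product Y']
```

The value is not the code but the discipline: a recommendation with no chain back to beliefs and facts is immediately visible.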

There are two types of specific analysis that we want to be able to do. I call them referential and inferential.

Referential analysis allows us to confirm that the relationships between elements are correct. It allows us to follow a path of relationships, e.g. if this skill is unavailable what is affected; if this goal is to be achieved what is required; what elements are affected by this project.

Inferential analysis is useful when we have compositions or reference models - where we can relate our implementation to the reference models. It allows us to infer what relationships should exist, i.e. are implied to exist but do not. It gives us a check on correctness.

Reference models would usually be instantiated, e.g. a functional reference model indicates a function is performed; our instantiation would indicate how we perform it. A data reference model indicates data we need; our instantiation would indicate exactly how we manage it.

While this seems obvious in this example, when the relationships above are all described textually (e.g. in a document) inconsistencies are not so easy to see. Even when they are dealt with graphically, inconsistencies are hard to see once there are large amounts of information (or multiple levels of decomposition). In both cases best practice would be to have systems do these checks (rather than checking for them manually).
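To make the two analyses concrete, here is a toy sketch (all element names are invented): referential analysis follows the relationship graph, while inferential analysis flags relationships the reference model implies but the instantiation lacks.

```python
# Toy relationship set: (source, target) pairs between named elements.
relations = {
    ("Skill:GIS", "Role:Analyst"),         # skill enables role
    ("Role:Analyst", "Function:Mapping"),  # role performs function
    ("Function:Mapping", "Goal:Coverage"), # function achieves goal
    ("Function:Billing", "Goal:Revenue"),  # function with no performing role
}

def affected(element, rels):
    """Referential analysis: everything transitively dependent on element,
    e.g. 'if this skill is unavailable, what is affected?'"""
    out, frontier = set(), {element}
    while frontier:
        frontier = {b for (a, b) in rels if a in frontier} - out
        out |= frontier
    return out

def functions_without_roles(rels):
    """Inferential analysis: the reference model implies every instantiated
    function has a performing role; flag functions that do not."""
    funcs = {e for pair in rels for e in pair if e.startswith("Function:")}
    performed = {b for (a, b) in rels if a.startswith("Role:")}
    return funcs - performed

print(affected("Skill:GIS", relations))
print(functions_without_roles(relations))  # {'Function:Billing'}
```

Trivial at this scale, but once there are thousands of elements across domains this is exactly the kind of check a system should run automatically.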

What is the scope of a framework

Therefore we propose a set of reference models which capture the fundamental issues:

Determinants

Externalities that determine what a business does, how it operates etc. These are external factors that include: cultural and social, market and competitive, legal and regulatory, etc.

Environmental, legislative and regulatory reference model (ERM): describes the sets of laws, regulations and compliance issues (national and international), and local factors.

Strategy

Enterprise strategy, vision, mission, goals, strategies and plans (product, market, organisation), cases, governance and compliance, measures etc.

Strategy reference models (XRM) – which relates to the Determinants and covers the vision, goals, strategies etc.

Performance reference models (PRM) – which outlines the performance goals, measures etc. (Cf. FEAF’s PRM)

Operations

Enterprise operations - business: services, processes, rules, information (objects, documents, data etc.), organisation, facilities etc.

Offerings, services and product reference model (OSM) – which outlines offerings, services and products which need to be provided to achieve the strategy

Functional reference model – which outlines the capabilities, functions and steps, that must be performed to achieve the strategy (Cf. FEAF’s BRM).

Rules and policy reference model – which outlines the rules and policies that are required by the strategy and determinants and to support operations.

Information reference model – which outlines information, metadata and data required by the strategy and determinants and to support operations.

Organisational reference models – which outlines the organizational networks, units, roles, techniques, skills that are required by the strategy and determinants and to support operations.

Systems and facilities

Systems and facilities including: applications and services, technologies and system services, facilities and internal utilities. The systems area could be further divided into business services and systems and technology services and systems (though the boundaries can become blurred)

Suppliers are externalities that determine the products, services, standards a business can use e.g. technology products, services and standards; product channels; real estate and facilities.

Interface reference models – describes the interfaces, services and by implication the applications required for a system to operate.

Technical reference models – describes the technologies, standards required to supports the interfaces.

Vendor reference model – describes the products, agreements etc. required

Patterns and maturity models

Sitting beside the reference models are some knowledge bases:

Patterns and template implementation plans – which indicate exemplar implementations and relate these to the reference models

Maturity models – which outline stages of maturity and relate these to the other aspects of the reference models.

The first three can be considered to represent the requirements and the last two are in the solution domain.

The Environmental, Technical, Vendor are effectively the external constraints.

These reference models are related to allow different types of analysis (referential and inferential) to be performed. They are most useful when they are related so as to allow impact analysis (both between domains, and within domains). Unfortunately many are often presented diagrammatically (great for communication, hopeless for analysis) or represented using inconsistent semantics, modelling paradigms that can't be related, or using arcane technology terms that are not meaningful to 99% of the world (e.g. use case, actor, cardinality etc.).

They need to be analysable from the top (and bottom) - this is why the absence of Determinants is so limiting, as ideally these would allow one to relate the circumstances to the strategy, then the strategy to the operations, and the suppliers to the systems and facilities. It is why compliance issues seem so hard (and produce such a good feeding frenzy for consultants).

To support modularity they can connect within each level and between each touching level (in defined ways).

They need to be tailored and used carefully - as with all things that represent purported best practice they need to be instantiated in situ (using experience and judgement) to produce something that is useful in the specific set of circumstances. This needs to be done in a way that allows the variations to be explicitly understood, and keeps the source reference models constant while connecting them to detailed views of the enterprise, e.g. the FRM may connect to existing detailed process models, the IRM may connect to data models, the PRM may connect to existing KPIs, etc.

Key characteristics of the framework

We can see a number of other key characteristics we require

Implementation technology neutral and non-aligned – our framework must be intrinsically technology and vendor neutral, i.e. having no affiliation or alignment, and no preferred technologies or products. This allows a clearer focus on the real needs and on standards.

Accessible by anyone from anywhere – this effectively means Web accessible, and is key so that knowledge can be accessed where, when and by whom it is required, and so that knowledge can be captured as a natural by-product of field work.

Supporting different roles, scenarios of use, and levels of control – that is, with role-based access and presentation, so that people can see what they are interested in, in a way that makes sense to them, and can change information that is in their domain of control.

Ensure semantic precision - which ensures it is analyzable and fully auditable, and that the basis of decisions is explicit, objective and transparent. The lack of semantic precision is one of the key problems with most, if not all, approaches based on documents.

Represent idealised models – having a holistic, coherent and complete set of well-structured, unambiguous, well-partitioned and categorised business definitions (roles, functions, interfaces etc.), and an explicit conceptual model of how the systems and organizations are structured and operate (e.g. reporting, controls, data flows etc.).

Divides the generic framework from the specific implementation - so the generic framework is reusable and extensible. This allows nations to maintain their know-how, i.e. how they do things and why they do things - rather than this knowledge being in the hands of third parties with vested interests, e.g. consultants and vendors.

Allows relationships and concepts to be visualised, analysed and reported on.

As simple as possible – to reduce complexity we limit connections within each level and between each touching levels.

CONCLUSIONS

We seek to make a non-incremental step change in the way systems are implemented. We believe a framework needs specific characteristics, capabilities and structure in order to allow best practice to be captured and applied. It is only by establishing this that we will significantly improve the efficacy of system implementations.

We have identified many best practice methods we can learn from, and we believe there are now better ways available to implement a framework that will enable it to operate effectively.

At present many parties are focused on capturing knowledge that should reside in such a framework. Sadly much of this knowledge still resides in documents or in people’s heads, where it is not particularly useful (accessible, integratable, analyzable etc.) and frequently “leaves the building”.

Thursday, October 22, 2009

Facing some facts about EA

Recent feedback from EA conferences - http://visualstudiomagazine.com/articles/2009/10/15/downturn-places-focus-on-role-of-software-architects.aspx

Zachman says:

  • "... we're not meeting expectations and we're not getting things aligned, we're not accommodating a lot of whatever the demands are from general management perspectives, their frustrations continue to escalate"
  • "...faced with the problem of addressing the conflicts between addressing short-term demand and keeping them in line with long-term architectural...if you don’t produce things in the short term you probably lose the opportunity to work on architecture...if you are not paying any attention to the long term you know you are going to end up with more of the same."

MJE - I believe the answer lies in seeing EA as an enterprise activity, and one that needs to evolve over time. This requires progressive changes to how the work is undertaken.

IASA president Timothy Leonard says

  • "...The biggest call to action is to make sure people understand that information is key, that organizations are trying to build applications but build architectural designs,"

MJE - Well yes, information is the key to management. I would suggest most enterprises are not actually seeking to build applications - they are seeking to build a business (where development is usually a necessary evil).

Grady Booch says

  • "You've got to have architects that have skin in the game, in the ideal I prefer my architects also cut code,"

MJE - Yes, and I prefer my bricklayers to lay bricks. But I don't want them doing town planning (or designing a building for that matter).

Wednesday, September 2, 2009

Gartner has identified 10 pitfalls of EA

I see Gartner has identified 10 pitfalls of EA - http://www.tradingmarkets.com/.site/news/Stock%20News/2507730/.

It is quite useful (and better than their effort last month), however I think it is wrongly factored - there are really three things (like most areas it requires the right people, process and technology), based on the right focus.
a) The Wrong Lead Architect i.e. right person
b) Insufficient Stakeholder Understanding and Support (Technical and Business) i.e. right focus
c) Not Establishing Effective EA Governance Early i.e. right process
d) Right technology [which Gartner fail to identify - presumably for fear of alienating some of their major sponsors]


The rest are simply manifestations of these things i.e.

1. The Wrong Lead Architect - too often "Architects" have primarily self-selected to deal with technology, not people, and as the role of EA moves from applying knowledge to managing knowledge across the enterprise, stylistic changes are required.

See: http://ict-tech-and-industry.blogspot.com/2009/01/12-step-program-for-enterprise.html

2. Insufficient Stakeholder Understanding and Support: when people outside the EA team don't use knowledge organised by the EA team in projects (and they don't gather data from projects). This requires clarity regarding the roles of EA and SA (and other elements in the enterprise) and for the EA team to realise they are NOT the primary audience of EA (and in fact can't do EA - alone).

http://enterprisesto.blogspot.com/2009/02/enterprise-architecture-cant-be-done-by.html
http://enterprisesto.blogspot.com/2009/03/i-will-be-your-crmist-and-he-will-be.html

3. Not Engaging the Business People: this is an extension of the problem above.

4. Doing Only Technical Domain-Level Architecture: this is usually caused by 1.

5. Doing Current-State EA First: This is usually because people don't realise that most knowledge in an EA should accrete as a by-product of business as usual.

http://ea-in-anz.blogspot.com/2007/08/myth-of-heavy-weight-ea.html

6. The EA Group Does Most of the Architecting: This is result of 1 and 2.

7. Not Measuring and Not Communicating the Impact: This is a failure to look at the communications and governance issues (i.e. it is a by-product of 9).

8. Architecting the 'Boxes' Only: This really comes from 1 & 6.

9. Not Establishing Effective EA Governance Early: yes

10. Not Spending Enough Time on Communications: Communications is really part of the governance process

Sunday, August 30, 2009

What price SITP data quality

More lessons from CRM - CRM systems, like SITP systems, are largely oriented at making decisions (rather than supporting transactions) i.e. more shameless plagiarism (or, as I prefer to think of it, learning from more mature disciplines).

See the source reference here: http://www.computerworld.com/s/article/9135688/What_Price_CRM_Data_Quality_?source=CTWNLE_nlt_entsoft_2009-07-23

In an SITP the data does not have to be perfect to be useful. Many things can be just an approximation or can be missing altogether. So how do you decide where to make your data quality investment?

The first step is to separate the data elements (objects and relationships and their properties) into three categories - the ones that:
(1) must be there and must be correct to prevent corruption in external systems or misrepresentation of the business.
(2) should be correct for the SITP to work at all.
(3) people have asked for to make decisions or record the state of the enterprise.

Each data element in the three categories can then be scored based on questions such as:
- Ownership - Does it have an undisputed owner? Is it updated by specific roles as part of a formal business process, or can nearly anyone update it in an ad-hoc fashion?
- Validity and auditability - Does it have validation on entry, and does an audit trail track changes?
- Completeness and correctness - How complete is the source data? How much is missing, clearly incorrect, or duplicated?
Developing an understanding of this metadata is a first step that needs to be taken when looking at data preparation and loading.
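A hedged illustration of such scoring (the weights, fields and element names are invented for the example; a real assessment would calibrate them per organisation):

```python
def quality_score(element):
    """Score a data element from 0 (poor) to 7 (good).
    Weights are arbitrary, purely for illustration."""
    score = 2 if element["owned"] else 0       # undisputed owner?
    score += 2 if element["validated"] else 0  # validation + audit trail?
    score += int(3 * element["completeness"])  # fraction present and correct
    return score

elements = [
    {"name": "application.cost", "owned": False, "validated": False,
     "completeness": 0.4},
    {"name": "standard.lifecycle", "owned": True, "validated": True,
     "completeness": 0.9},
]

# Lowest-scoring elements first: candidates for remediation investment.
for e in sorted(elements, key=quality_score):
    print(e["name"], quality_score(e))
```

The ranking, not the absolute numbers, is what guides where to spend the data quality budget.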

A data element is in an SITP system because it is required to answer specific business questions and/or because the SITP system is the source of record for that data.

Analysis will discover which data elements are often missing or wrong. The data typically found missing varies with the concepts being dealt with e.g.
- Business goals and strategies - will often lack: weighting and explicit relationships
- Applications - will often lack: a range of cost information, related standards and business processes
- Standards - will often lack: lifecycles, basis of current preferences, current and planned usage

Historically, where there is no SITP solution, it is difficult to collect some of the data in the first place, there are many ways for the meaning of the data to be misinterpreted or misrepresented, and there is often no easy way of usefully applying it (i.e. it is recorded as an academic exercise and not tested through the fire of use).

In many cases, you can't afford to spend too much on data quality before the system is implemented. What you need to understand is the metadata. If the business question needing this data is materially affected by its quality, you will need to carefully weigh the cost of remediating the data against the impact the improvement in its quality will have on the decisions being made based on it.

You can set business rules that allow you to determine when it's worth chasing a data element's quality and when it isn't.

It gets exponentially more expensive to improve data quality. If it costs $X to get solid data quality on two-thirds of your records, it will probably cost $2X to get the data quality right on half of the remaining records, and $4X for half of what is then left.
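Taking those figures at face value, the arithmetic is easy to sketch: each pass fixes half the remaining records at double the previous cost, so coverage converges while cumulative cost keeps climbing.

```python
# Cumulative cost of successive remediation passes, assuming the figures
# above: the first 2/3 of records cost X, then each pass fixes half of
# the remainder at double the previous pass's cost.
def remediation(x, passes):
    fixed, remaining = 2 / 3, 1 / 3
    cost = pass_cost = x
    for _ in range(passes):
        pass_cost *= 2
        cost += pass_cost
        fixed += remaining / 2
        remaining /= 2
    return fixed, cost

fixed, cost = remediation(1.0, 2)
print(round(fixed, 3), cost)  # ~92% of records for 7x the initial spend
```

Two extra passes buy roughly another quarter of the records for seven times the original outlay, which is why remediating historical data rarely pays off.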

For most purposes it is hard to justify a lot of remediation on historical data, and one is better off investing the time in improving the way data is captured on an ongoing basis. In many cases it is sufficient to start capturing data going forward on a JIT basis, or as data is used. Over a short space of time, i.e. a year or two, most of the data needed for SITP will accrete as a natural by-product of improved processes (of course, adapting the processes to capitalise on the SITP is a key step to enabling this).

Wednesday, August 26, 2009

EA Frameworks

Every time one looks at implementing solutions to support strategic IT planning, transformation, optimisation and governance, one is asked whether one supports one or more of the common EA frameworks.

The use of the word "frameworks" gives a good insight into EA as a practice. Major EA frameworks (FEAF, DODAF, TOGAF, Zachman) mean different things by their terms (taxonomy, method, reference models, architectural styles) and have different roles.

In my experience people who ask about this seldom have a good grasp of the real questions - so I usually ask them to describe what they are trying to achieve. This usually provokes frustrated responses because they don't know what they want to achieve; they just want to follow the framework du jour.

It would perhaps be more acceptable to blithely follow these frameworks if, over more than a decade, they had proved widely successful.

[WIP]

Tuesday, August 11, 2009

EA-Emperor's new clothes - from Gartner

According to this emergent architecture item, Gartner now claims EAs must adopt a new style of enterprise architecture - 'emergent architecture', also known as middle-out EA and light EA - and sets out definitions of the new approach. I see this as Gartner starting to slowly realise that they have long confused Enterprise and Solutions Architecture (doing neither constituency any good).

I would argue that what is now being advocated has always been best practice i.e.
  • Architect the lines, not the boxes i.e. the connections between different parts of the business rather than the actual parts of the business themselves. [this is why the focus on SW-development-oriented modelling and complex BP modelling/simulation has seldom made sense]
  • Model all relationships as interactions via some set of interfaces [obviously - which is why I have argued for a decade that the venerated Zachman model needs another column i.e. on presentation/interfaces (user, system)]

And regarding the 7 "new" principles:
  1. Non-deterministic - "they instead must decentralise decision-making". The term non-deterministic is a misnomer. What you really want is deterministic decision making carried out in a decentralised, objective and transparent way.
  2. Autonomous actors - "EAs must now recognise the broader business ecosystem and devolve control to constituents". EAs have NEVER controlled all aspects of architecture. One is tempted to suggest EAs should also consider how they work with the IT ecosystem.
  3. Rule-bound actors - "EAs must now define a minimal set of rules and enable choice". EAs have NEVER provided detailed design specifications for all aspects of the EA (this is confusing Enterprise and Solution Architecture)
  4. Goal-oriented actors - "Each constituent acting in their own best interests". It is just silly to suggest anything else has ever really happened in the real world.
  5. Local Influences: "People are influenced by local interactions and limited information. Feedback within their sphere of communication alters the behaviour of individuals. No one has data about all of an emergent system. EA must increasingly coordinate". [Putting aside the absurd use of the word "Actor" in the original] Yawn - nothing new here. However one wonders if the use of the word "system" belies a recidivist tendency towards again confusing Enterprise and Solutions Architecture.
  6. Dynamic or Adaptive Systems: "System changes over time. EA must design emergent systems [that] sense and respond to changes in their environment." Again nothing new.
  7. Resource-Constrained Environment: "emergence; rather, the scarcity of resources drives emergence". Necessity is the mother of invention [an observation over two thousand years old].
Nevertheless it is helpful to recognise that five of Gartner's "new" points, if restated, are worth reiterating:
  1. Decentralised decision making needs to be facilitated (and therefore access to information) - so we need to make information available to people in many places (in a way that suits them)
  2. Engaging with all constituencies is key - including the broader business ecosystem, devolving control to constituents (and of course the IT ecosystem)
  3. Publishing rules (standards, patterns etc.) is critical - so we can let solutions architects, and specialised technical architects do their job.
  4. Making decisions objectively and transparently is critical - as constituencies will always act in their own best interest - so we need to understand, and be able to examine/question, why different conclusions are reached.
  5. Facilitating KM and collaboration is fundamental - rather than trying to control all the information, and create all the artefacts (or pretend that the knowledge can "live" in documentary artefacts).

Sunday, August 9, 2009

Reference models - what examples exist in mature domains

To gain an insight into reference models we can examine how more mature industries deal with them. I am looking at this through a very narrow lens, focusing on patterns for the built environment.

In Alexander's seminal work on patterns he describes a set of patterns, some of which relate to each other. The patterns are ordered by the physical size of what they apply to i.e. first come those applying to large areas (e.g. towns), then come patterns relating to buildings, then to spaces in buildings (e.g. rooms), then patterns relating to elements within buildings (e.g. column details).

We could regard these as a reference model (RM) for design.

These patterns as a whole sit beside an extant set of reference models some of which are so well known that they hardly need articulation e.g. lists of:
  • types of space - rooms - a dwelling could have;
  • functions a dwelling could facilitate;
  • measures: structure, construction, lighting, acoustics etc.;
As well as beside reference models that are encapsulated within various regulations and laws, or by technologies
  • building best practice (codes)
  • products
  • safety and efficiency
  • etc.
Lastly we could consider the set of psychological and physiological requirements people have for buildings:
  • warmth, quiet
  • peaceful, welcoming
The original work, while seminal, has several limitations resulting from a lack of semantic precision and the form of the work (a document). These are that:
  • it does not relate the patterns RM to the other RMs (requirements, space-types, functions, products etc.) which as a set are the constraints reflecting requirements and technologies.
  • the mode of presentation does not allow one to automate the analysis of the use of patterns for referential or inferential integrity.
These limitations can now be easily addressed with modern methods of managing reference models.

Perhaps in the architectural domain they can be effectively dealt with by people (though looking at some buildings one sometimes wonders). But I don't think this is true in some other increasingly complex domains, where technology and expectations change too rapidly to allow best practice to be encapsulated in documents, taught in a professional school, and reflected in regulations.

Sunday, July 26, 2009

Common frameworks and reference models

I believe what is needed is a consistent meta-framework for describing how information about an industry/sector and enterprise (federated or stand-alone) is organised.

There have been a number of frameworks which may provide insights into what is required in a meta-framework - some are generic and some are industry specific.

Generic
  • Zachman framework - has a two-dimensional taxonomy with rows indicating the views of different roles (business to technical) and columns indicating data, function, network, organisation, time and motivation
  • IFW - has a taxonomy and reference models. It is for analysing and structuring information. It is portrayed with 10 columns representing types of information (e.g. strategy, organisation, data, skills, functions, interfaces, network and platform - and either "workflow & solution" or "involved parties, products/ arrangements") and rows based on types of analysis (conceptual categories for analysing information, terms and terminology, principles for structuring different types of information, detailed designs that use information, how to implement these designs).
Industry specific frameworks (which could be called Industry Architectures - Cf. "Enterprise Architecture") include:
  • Government agencies - FEAF, which has a set of reference models and some methods for applying them (particularly around alignment, investment planning and business case creation). The reference model set includes: Performance, Functions (BRM), Services (Applications), Data and Technology
  • Telecommunications - TM Forum's reference models for: process (eTOM), information (SID), and applications/services (TAM).
  • Hotel Industry - which I have not yet seen a published structure for.
  • Insurance - IAA which has business processes and activities, and ACORD's eMerge (information exchange). IAA (IBM) is a set of models that trace out information, data and component-architectural infrastructures and describe business objects and the components owning these objects. IAA contains a Business Model, Interface Design Model (components, interfaces and messages), Specification Framework (product definition and agreement administration) and IIW (models for creating data warehouses)
  • Health - Healix established a framework consisting of a set of reference models oriented at: strategy; policies and rules; function and process; information; components (applications and services).
The purpose of this entry is not to define the answer - but to outline some things to consider. See also:
  • http://enterprisesto.blogspot.com/2009/01/structure-for-thinking-about.html-
  • http://enterprisesto.blogspot.com/2009/01/introduction-to-reference-models.html
It is important to realise that the IT industry as a whole has a lot to lose if a consistent way of dealing with these issues arises - so one can expect opposition from the major incumbent/dominant vendors of IT products and IT services. This opposition is most likely to come from active participation in any groups attempting to establish standards i.e. embrace and smother (rather than the old-fashioned FUD strategy, which people are becoming resistant to).

The reason for the opposition is that standardised ways of doing things will allow many things to be demystified and commoditised i.e. reducing the amount that can be charged for products or services as informed, contestable selection becomes viable, and best practice becomes common and public.

Wednesday, July 22, 2009

Semantic precision and documents

Modelling semantics provides all of the following: glossary, controlled vocabulary, data dictionary (objects, properties, visualisations), data model (relationships, cardinality, layouts), taxonomy (inheritance hierarchy, and domains), ontology (where URLs relate data).

When determining semantic precision one would assess whether the representation is
  • correct: consistency of syntax and semantics
  • translatable: can it be transformed while ensuring semantic equivalence
  • analysable: can it be searched, queried and reasoned on. Can logic be applied, can it be "measured" directly or does the representation require interpretation.
  • integratable: can it be related to other knowledge in other forms
  • complete: adequacy, expressivity, scope
  • extendable: can new concepts be defined without difficulty
  • concise: does it record things as efficiently as possible, with no unintended duplication.
In these definitions we don't mean what a person can do through interpretation or inference i.e. perhaps someone looking at a picture can analyse it, integrate with other concepts they have etc., but a system or application can't.

When we are examining the utility of documents (e.g. Word, Powerpoint, Visio diagrams) - we can see that they are not a very good way of ensuring semantic precision. They don't ensure correctness, they are not easily translatable or integratable, and they cannot be analysed or reasoned on. They can be complete, though seldom are, and the form doesn't require completeness (e.g. there is no validation of cardinality). Likewise they are seldom concise. They are extendable.

Diagrams present special problems. In some mature domains semiotics and iconography have matured so that images can convey semantics with some precision. Sadly when one looks at many Visio diagrams one finds that they don't follow any precise semantics e.g. they don't have a controlled vocabulary, and spatial concepts e.g. alignment, containment, proximity and connecting lines don't have explicit meaning and are used inconsistently. This lack of precision means they cannot easily be analysed, integrated, and often, importantly, translated.

People often seek to reuse the information in documents. This means they need to be able to be translated and integrated. It is often suggested that with some remediation (i.e. by a person) these documents can be made "correct", "complete" and "concise" so that this is viable. In practice it is almost always more efficient for a person to examine them and model them directly in some form that ensures semantic precision - rather than attempt to correct them, then translate them and then deal with any translation errors (or remaining incorrectness in the source). The very act of fixing them is more effort than the modelling.

The metamodel provides the modelling semantics in a central repository. A metamodel can consist of a set of metamodels each covering a different domain of interest. Processes are required to manage change (approval processes associated with the creation or changes) and ensure duplicate or redundant data elements are not introduced. This is more difficult than it appears as often multiple mechanisms are available for recording information e.g. a property of a person could be their last name, or their last name could be recorded as a relationship to a family.
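To make that last-name example concrete, here is a hypothetical sketch (the classes and resolver are mine, invented for illustration - not from any real metamodel) of the two competing recording mechanisms and the kind of reconciliation a repository's change processes would need:

```python
# Two mechanisms for recording the same fact: a last name held directly as a
# property of Person, or derived from a relationship to a Family entity.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Family:
    name: str

@dataclass
class Person:
    first_name: str
    last_name: Optional[str] = None      # mechanism 1: direct property
    family: Optional[Family] = None      # mechanism 2: relationship

def effective_last_name(p: Person) -> Optional[str]:
    """Resolve the duplicate mechanisms, flagging contradictions between them."""
    if p.last_name and p.family and p.last_name != p.family.name:
        raise ValueError(f"conflicting last names recorded for {p.first_name}")
    return p.last_name or (p.family.name if p.family else None)

smiths = Family("Smith")
print(effective_last_name(Person("Ann", family=smiths)))   # Smith
print(effective_last_name(Person("Bob", "Jones")))         # Jones
```

Without a resolver (or better, a metamodel rule forbidding one of the two mechanisms), two tools querying the same repository can disagree about something as simple as a name.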

I encounter problems with these issues every day when looking at business or technology strategy, enterprise architecture, business analysis etc. The authors of the artefacts - and the small clique the authors have communicated with directly to explain things - don't see the problem, but everyone else who encounters the documents and tries to work with them finds problems resulting from semantic imprecision i.e. they are incorrect, incomplete, untranslatable, unanalysable, unintegratable and not concise.

I would argue that this is a major problem with current approaches to the adoption of business plans and strategies in particular i.e. when the foundation artefacts are semantically imprecise.

Tuesday, July 21, 2009

Lessons from the melt down

I thought this a really useful real world example: http://www.infoworld.com/d/architecture/lesson-meltdown-listen-your-architects-806?source=IFWNLE_nlt_blogs_2009-07-20

Fighting the good fight, and losing the war - because internal competition militates against success.

Internecine inertia and the desire to avoid the identification of common-denominators - because the silos of activity are happy being silos.

Planners and architects see the benefit, but what did those within each department have to gain - compensation models don't reward enterprise players.

Large enterprise software vendors (IBM, Oracle, HP) are complicit i.e. they see little benefit in having things commoditised (anything really). They want to lock you into a proprietary stack, even though it's impossible to adopt the same homogeneous stack across a very large enterprise.

Monday, July 13, 2009

8 dirty little secrets of Enterprise Architecture and Strategic IT Planning

When I looked at these 9 dirty little secrets of CRM (http://www.cio.com/article/496616/The_Dirty_Little_Secrets_of_CRM) I could not resist drawing the parallels with Enterprise Architecture and Strategic IT Planning (SITP)

One reason I like to draw analogies with CRM is that while most people now realise that CRM is an enterprise activity, once upon a time - and not that long ago - it was dealt with much as enterprise architecture is dealt with today i.e. with lots of people holding their own siloed views of the data in whatever form happened to suit them. So it is useful to learn from a more mature discipline. Sadly many people still think they can get success through the isolated ivory-tower work of a small set of architects (usually focused on modelling).

So here are the 8 dirty little secrets of Enterprise Architecture and Strategic IT Planning:

1. Scope of data and users is key - An SITP system without real users and real data is just an empty shell. This means that mechanisms for gathering data from many systems and people are critical.

2. Widespread adoption is key - User adoption and percentage-of-business represented are the key metrics of an SITP system's success. The virtuous cycle in SITP systems: the more users adopt the system, the more data will be entered. The more credible and meaningful the SITP data, the more valuable an asset it is for all users. The more valuable the asset, the easier it is to get more users leveraging, and contributing to, the system. Even if some users are spectacularly effective thanks to SITP usage, if you only have pockets of usage, most situations are not represented in the database. Broad usage is more valuable to overall collaboration than deep but spotty use of the system. This means that you must plan to have a large user base and users of many types (i.e. not just a small number of modelling users). This requires role-based interfaces and sophisticated security and data administration.

3. Data quality needs to be improved through adoption - You will discover data quality problems that are irritants to every user and poisonous to the system's overall credibility. Data quality needs to be attacked at three levels: clean data as it is being loaded; identify sources of data pollution and systematically correct them (you need self-healing data); and identify business processes that corrupt the semantics of SITP data. This means that you must have mechanisms for managing policies associated with data, have data quality reporting, and support ETL processes that effectively transform data on loading.
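As a minimal sketch of the first of those three levels - flagging or rejecting dirty data as it is loaded - consider the following (the rule names and record fields are invented for illustration, not taken from any SITP product):

```python
# Apply simple quality rules per record on load: clean records pass through,
# rule violations are counted so sources of pollution can be traced upstream.
from collections import Counter

RULES = [
    ("missing owner",     lambda r: not r.get("owner")),
    ("unknown lifecycle", lambda r: r.get("lifecycle") not in {"plan", "live", "retire"}),
    ("blank name",        lambda r: not r.get("name", "").strip()),
]

def load(records):
    report = Counter()
    clean = []
    for rec in records:
        failures = [name for name, broken in RULES if broken(rec)]
        if failures:
            report.update(failures)   # feeds the data quality report
        else:
            clean.append(rec)
    return clean, report

apps = [
    {"name": "CRM", "owner": "Sales", "lifecycle": "live"},
    {"name": "   ", "owner": "Ops",   "lifecycle": "live"},
    {"name": "ERP", "owner": "",      "lifecycle": "old"},
]
clean, report = load(apps)
print(len(clean), sum(report.values()))   # prints: 1 3
```

The per-rule counts are what make the pollution traceable: a spike against one rule usually points at one upstream process that needs fixing, which is the "self-healing" part.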

4. Integration is key - There's no such thing as a siloed SITP system. Any useful SITP system must give users access to data that's beyond the purview of the SITP database. So integration will be essential, and it won't be as easy or inexpensive as the initial SITP project. Integration almost always exposes data problems that were hidden or tolerable in siloed system operation. This means you need various mechanisms for integration e.g. ETL, APIs etc.

5. Improving touch point processes and governance is critical - Most of the time, an "SITP problem" is really a disjointed process, a policy conflict, or goofed data. Sometimes, an SITP system is just inadequate to the task - and you really do have an "SITP problem." But the most visible and important SITP problems are the ones resulting from holes or redundancies in business processes, contradictory business policies or rules, or hopelessly polluted data. You'll need to solve these other problems to have a chance of SITP success. This is why an SITP methodology that addresses touch point processes is critical.

6. Better performance is the goal (through transformation and optimisation) - The benefits of SITP really come from improvements to process enabled by, and in conjunction with, the system - not from the SITP system itself. The twin purposes of SITP are: enterprise intelligence (knowing what you do and how you do it) and improving your ability to operate profitably. While SITP functionality plays a role in achieving both these purposes, it's really about enabling your people to see better and react sooner. If you don't change your business processes to take advantage of SITP, your workers will just be doing dumb things faster and with less waste. Said another way, you'll probably need to change some processes and business rules to leverage SITP for maximum advantage. This is why we need to focus implementations around desired business outcomes and results (not on abstract frameworks or methods) and recognise that we need to change how things are actually done (and often by whom).

7. Sponsorship at the right level is key - Making an SITP system truly successful is a highly political act. Any time business processes, policies, and rules get changed, somebody's job, objectives, and even budget may change as well. This means politics at every level, and change management will be important for worker-bees ("will my job be automated?") and executives ("will my metrics and bonus change?") alike. If for no other reason than this, we recommend a phased, incremental approach to SITP deployment and expansion (based on a CMMI model that makes progressive improvements across the enterprise).

8. Incremental and progressive adoption is the only practical approach - The benefits of SITP grow with the more users you have - but you can never afford to bring everyone on the system at once. Even if system extension, integration, and data quality issues weren't relevant, even if you had perfect execution of a "big bang" system deployment, and even if you had the budget for all the user licenses on day one, you shouldn't do things that way. There are too many process issues to discover, too many political speed-bumps. Since leveraging SITP is a multi-year process, you need to plan for it that way. This is why each phase is carefully scoped.

Sadly what one hears from these people is: "I don't see how adopting SITP would help me with having conversations with my business stakeholders" - they fail to recognise that it is not about "me", "my conversations" or "my stakeholders" - it is about how the enterprise communicates as a whole. Once one would have heard the same kind of solipsism from the sales team, the customer service team etc. ("I don't see how adopting CRM would help me with having conversations with my customers"); now, at least with CRM, people recognise the enterprise nature of the activities and data.

See also http://enterprisesto.blogspot.com/2009/03/i-will-be-your-crmist-and-he-will-be.html

Tuesday, June 30, 2009

An IBM view on EA

The following is based on my analysis of:
- http://www.it-director.com/technology/sys_mgmt/content.php?cid=11379&page=1
- http://download.boulder.ibm.com/ibmdl/pub/software/dw/wes/bpmjournal/0812_jensen/SOA_BPM_EA.pdf

The following is what I understand IBM's position to be. Comments in square brackets [] are mine (not IBM's). It is amazing that when one looks at IBM's position one can only conclude that the tooling they offer is unsuited to the purpose.

EA models define a context which may be elaborated in more detailed models e.g. ER diagramming, UML, BP modelling etc. EA needs to understand processes [and presumably rules/policies, information, services, etc.] irrespective of how they are implemented (by what, who, where and how). EA information needs to be able to be used to lead to technology implementations. In the integrated scenario "Architecture Plans (in EA) are handed downstream, either between companies in the supply chain or within a single entity, to the Build team that continues putting meat on the bones with an eye towards product development" [and presumably we must allow that these implementations will be done in technologies from a variety of vendors i.e. we will need an open way of transitioning from EA to technology-specific tools].

• EA is about "Architecture for Planning"
• EA provides the ability to clearly communicate strategies and objectives to the entire organization, and records how technology assets are used to run an enterprise.
• EA's real value comes with the ability to investigate multiple future architecture scenarios and understanding the impacts to assets, organization and process before making the changes.
[so an ability to deal with Initiatives, Scenarios and multiple possible futures is key].
• Enterprise Architects deal in abstractions and things that are always changing, use generalised tools and are comfortable with "knowing something about everything", vs. Solution Architects (designers) who deal in specialist areas (BPM, SOA etc.), use specialised tools oriented at engineering design (UML modellers, ER modellers, etc.) and are only happy when knowing "everything about something".
• EA is about "doing the right things", BPM is about "doing things right".

EA, SOA and BPM are all of value; using all three together can produce the most value. To succeed you must
• 1st and foremost: establish a collaborative platform supporting the critical dynamic interaction across the enterprise. [presumably this requires many different ways of communicating with many people in the enterprise]
• have a holistic approach shared by all involved roles, tailored to the culture and environment
• establish awareness amongst key stakeholders across the organization, and particularly look at how to engage with the business people [which typically does not mean forcing them to look at technically oriented abstractions of complex models. Rather they will want interfaces, views, reports and visualisations that address their issues and interests]
• think through the lifecycle and how to execute your architectural framework
• be specific about your objectives and have an explicit and accepted governance model, recognise this will often require significant cultural shifts to ensure buy-in to a shared effective process of decision making [so focusing on decision making early would make sense e.g. what questions to be answered]
• enable collaboration through good communication and a sharing of work products, based on a common language, with an understanding of and empathy for roles other than your own
[create artefacts,interfaces, views, reports and visualisations suited to the audience]
• enable integration between enterprise planning and solution delivery across all planes of architecture [using open standards to avoid lock-ins and ensure transparency]

IBM now has one real EA modelling tool - System Architect. Rhapsody is about "Architecture for Building" i.e. model-based systems and software design and development. [There are many other tools with more detailed orientations (e.g. SPARX) and one should not see these as EA tools]

On EA in Government - also see:
http://www.govtech.com/gt/articles/698252?id=698252&full=1&story_pg=2

CIOs perform a balancing act between the need for new technology versus the need to contain costs, security versus collaborative processes, and accessible data versus the need to balance business with IT strategies. To achieve this they must close the gap between business and IT by becoming a business executive first and a technologist second. They need to build a hybrid skill set that enables IT professionals to understand the business's needs [and, one would have thought, to communicate effectively with the business i.e. not using abstract technical languages and frameworks].

EA addresses this balancing of business and IT strategies when properly implemented:
- achieve stronger alignment between IT strategy and business goals;
- align various platforms and technologies that have resulted in excessive complexity and cost;
- implement IT standards and governance that enable greater technology efficiencies;
- improve performance, availability, scalability and management of existing architectures and applications;
- support new business processes with new technologies; and
- adopt reusable assets to drive greater efficiencies and faster time to market.

Wednesday, June 24, 2009

What is the cost of maintaining the data we need for informed decision making

I am often asked about the cost of maintaining the information needed to make informed decisions.

I have recently got some facts from two different sources - one a bank in Australia and the second a large utility in Europe. Both are of similar size.

It is difficult to get validated real-world data on the broad range of information required (from business strategies, through business operations, through to technology). It may be that some things, e.g. business goals and strategies, take very little effort to maintain, while other things have more properties, characteristics or relationships and may take more effort. Of course many things can be recorded at different levels of detail and from different perspectives (determined by the focus of our interest).

Applications are potentially one of the more complicated things that data is captured about. Applications, or more accurately the services and user interfaces that applications provide, are also useful to consider as they are of interest to several parties e.g. business (whose function it is to achieve business results with the services and user interfaces) and technology (whose function it is to provide the services and user interfaces).

The two organisations both capture a few dozen key properties and relationships about their applications and both have similar intended uses.

One of the organisations uses a strategic IT planning solution that acts as a single source of truth for a wide range of business and technical stakeholders. They have calculated that maintaining the set of information they need about an application takes about 90 minutes a year.

The other organisation has calculated that manually creating an application inventory costs them 1-2 hours per application, each time it is done (in documents e.g. Word, Excel and Powerpoint). They also say that it is done many times a year, to support many initiatives and assessments (risk, regulatory, cost etc.) and by different parties. If we said that such an inventory is created just 3-4 times a year, that would put the cost at 6-8 hours per year.

This reinforces my view that maintaining an enterprise architecture using the right class of solution has a negative net cost i.e. 1.5 hours per year vs 6-8 hours per year in the case of the applications inventory.
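The back-of-envelope arithmetic behind that comparison is worth making explicit (the hour figures are the ones quoted above; note that the full 1-2 hour x 3-4 pass range is actually 3-8 hours per year, so even the most favourable manual case costs double the 1.5 hours of the maintained approach):

```python
# Compare maintaining application data in an SITP solution with repeatedly
# rebuilding document-based inventories, using the figures quoted above.
maintained_hours_per_app_year = 1.5       # single source of truth, per application

hours_per_inventory = (1, 2)              # manual effort per application, per pass
passes_per_year = (3, 4)                  # inventories rebuilt each year

lo = hours_per_inventory[0] * passes_per_year[0]
hi = hours_per_inventory[1] * passes_per_year[1]
print(f"documents: {lo}-{hi} h/app/year  vs  maintained: {maintained_hours_per_app_year} h/app/year")
print(f"net saving: {lo - maintained_hours_per_app_year}-{hi - maintained_hours_per_app_year} h/app/year")
# prints: documents: 3-8 h/app/year  vs  maintained: 1.5 h/app/year
# prints: net saving: 1.5-6.5 h/app/year
```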

Of course another strategy could be not to gather the information necessary for informed decision making - but few people openly advocate this.

This also ignores the fact that when this information is kept in documents it is difficult or impossible to relate it to other data, to analyse it, or to render it in different visualisations.

Monday, June 22, 2009

When individualism becomes solipsism

I have just listened, for the umpteenth time, to a person saying that the way they currently deal with information (in some ratty little document or model that sits in isolation from every other scrap of knowledge about the enterprise) suits them well.

Of course it suits them quite well

They created the document for a purpose. If it didn't suit even them, for that purpose, that would be really sad. The fact that it doesn't suit anyone else or any other purpose (and is in fact probably useless to most other people most of the time) doesn't seem to have dawned on them.

It is unclear to me if they are unaware that other people exist, or if the idea that some of the knowledge in their documents could in fact be of use to someone else hasn't occurred to them.

What disturbs me most is that these people claim to be "information" technology "professionals". If they don't understand the need to manage information effectively, what chance have we with others?

I hear:
  • IT people say it about their: interfaces, applications, servers, data etc.
  • Business people say it about their: capabilities, functions, policies
  • Strategists and planners say it about their: plans
Yet they all realise that no one can answer a simple question like: what projects affect this interface?

Changing the ways people manage and communicate

Fortunately in my calmer moments I recall the challenges I encountered in the past when trying to introduce new technologies e.g.
- email & word processing - I heard the same things in the 1970s
- computer mapping and CAD systems - I heard the same things in the 1980s
- IM and document management (Cf. Sharepoint, Wiki, Blogs etc.) - I heard the same thing in the 1990s
And now I hear it again.

Michael

Sunday, June 21, 2009

Challenges to objectivity (getting typing pool to like email)

I recall many years ago advocating the introduction of word processors and email in place of the old typing pool and mail delivery trolleys. Unsurprisingly a range of people were not too impressed e.g.:
  • those who sold contract typists or provided typing as a service
  • those who supplied contractors/staff to push the mail delivery trolleys around (or the trolleys or typewriters themselves)
  • the typing pool itself (usually an internal function), which wasn't pleased
  • the senior executives, who saw little benefit because they could justify having someone type their letters and read their mail (and many middle managers aspired to be senior managers, so by aspirational affinity said they could see no benefit).

Many of these people genuinely, and understandably, struggled to be objective. Every conceivable excuse was postulated. As there was little practical differentiation in many of the products and services (contract typists, typewriters etc.), the vendors of these things were often successful due to the strength of their relationships with senior management. So their interests, combined with the disinterest of senior management, presented powerful opposition.

Fortunately the CFO usually saw the benefit. Sadly the proliferation of these document creation tools didn't do much to keep down paper consumption.

I see something similar occurring now with the introduction of solutions that seek to provide true enterprise solutions to strategic IT planning, governance, architecture etc. Like email and word processing, these systems involve a form of disintermediation and democratisation i.e. the people who know record things themselves (rather than having things recorded for them), and communication is by the people for the people.

These systems are often described as BI for IT, or ERP for IT. Fortunately the implementation of this next generation of tools should eliminate the need for many documents (and emails) and go some way to reduce the amount of paper that needs to be consumed.

Unsurprisingly some people are not too impressed e.g. getting "traditional" strategy and architecture consultants or teams interested in these solutions is like trying to get the head of the typing pool (or the person who sells typists) to buy into word processors and email. Ironically, the reason many of these people got into strategy and architecture was not to produce artefacts per se - but rather to think, to imagine, to synthesise, to conceive - and these solutions would actually free them from the drudgery of data collection and collation.

Many consultants working in this area are sadly conflicted and struggle to objectively see that while their self-interest is threatened the paradigm shift is essential.

Sunday, June 7, 2009

EA covers a multitude of sins

I am often asked to suggest the best approach (or solutions) for implementing enterprise architecture. The challenge with this is that people mean so many different things by it that it is hard to know what to suggest e.g.

Strategists - usually seek a strategic IT planning solution - they seek solutions that can engage with all constituencies and are focused on establishing a single source of truth. They seek the ability to integrate data from many sources (people and systems) and communicate with many people (visualisations, reports, models etc.). The challenge with this approach is that it can take a long time for the value of the knowledge base created to be visible.

Business oriented architects - usually seek solutions that manage the technology assets (costs, value, risk etc.) and enable a wide group of people to objectively participate in the management of the existing portfolios (services, applications, infrastructure, skills) and plan future investment. The risk with these approaches is undertaking each optimization activity as a stand alone or point in time exercise rather than establishing a systemic solution.

Governance oriented architects - usually seek solutions oriented at compliance with internal and external standards. They also realise that there are many roles and constituencies that need to be engaged (some technical and some not - in any area being considered). They may focus on the management of technical standards, preferred products, patterns etc. A key to success is that the touch-point processes are adapted so that governance is built into them, and feedback loops occur naturally i.e. that the approach is inclusive. The risk with this approach, if it is implemented by itself, is that it is seen just as a barrier to change and initiative.

Framework oriented architects - seek solutions that implement the framework they have been educated in or sold on. If the framework is oriented around investment planning and business cases, they seek business analysis and business case management for the enterprise (usually portal-oriented interfaces with document production); if it is oriented around a set of reference models, they seek support for the reference models and the ability to align their enterprise with them; if it is oriented around a taxonomy, they seek a predefined set of semantics and views; if it is oriented around a method (a set of steps), they seek a model or portal solution that steps them through the method with wizard-like simplicity. The risk is that framework zealots focus on the frameworks for their own sake and don't ground their efforts in terms of visible business value.

Aspect oriented architects - these people focus on one aspect e.g. data oriented; service oriented; process oriented; object oriented; package oriented etc. and tend to see the needs as extensions to that domain e.g. oriented at data modelling; SOA; process modelling; UML modelling; modelling the capabilities of their preferred package (ERP, CRM etc.), etc.

Solutions oriented architects - often seek extended solutions architecture - what they seek are usually modelling tools for a group of architects to use to communicate principally amongst themselves - with some views created for other constituencies. They will often seek solutions based on engineering-oriented languages and views. Experience shows that these modelling-oriented approaches seldom produce sustainable value.

Value or mission oriented architects - are usually driven directly by project or business imperatives e.g. reduce the cost of X by ...; reduce the time it takes us to do Y by ...; reduce the risks of Z by... . They probably have the best chance of concrete success as they by definition have SMART goals.

When pressed, many will say they want all these things - but the solutions they select and how they go about the implementation reflect one or more of these orientations. The emphasis on tools depends on how much weight people place on:
- establishing a sustainable enterprise asset (vs meeting the needs of a single constituency), which involves strategies for ensuring data consistency and quality
- communicating with all key stakeholders e.g. engaging with them interactively and allowing them to enquire
- integrating data from many sources (which requires support for semantics and integration mechanisms)
- modelling and presenting the various views and diagrams that have been produced (often manually) in the past.

Thursday, June 4, 2009

Justifying the cost of solutions for strategic IT planning

There are 4 common ways that investment decisions are made.

1/ Common sense e.g. this is how we justify the use of Office, email, accounting systems etc. - no one in their right mind would not use them. And this is in fact probably the soundest way to justify them i.e. connecting them to a specific ROI or short-term business outcomes is problematic and not that sensible, since the benefits are pervasive and manifold.
2/ Associate the business value with some current initiatives e.g. we want to save $X in IT cost/risk, and to do that we need to: do some things (run some programmes); have some people (staff, consultants etc.); and use some suitable tooling.
3/ Try to create specific metrics that allow an ROI to be calculated. The real challenge is usually that the quality of the current-state analysis is so poor that accurately slicing and dicing the costs is challenging (hence the reason for a better approach). And IT is good at burying the dead and excusing under-delivery i.e. coming within 20% of budget, within 50% of time, and achieving 50% of original scope is regarded as a success.
4/ We have people who do this job - they need solutions that enable them to do it e.g. we write documents - we need a word processor (not a typewriter); we send messages - we need email (not a faster mail trolley); we do strategic IT planning - we need a strategic IT planning solution (not more Visio, Excel, Word and Powerpoint documents). See: Providing professional tools of trade.

Wednesday, April 1, 2009

ICT needs to get its own house in order

ICT really needs to get its own house in order before suggesting to the business how it can help the business on business things (strategies, plans etc.).

I have just sat in a room of 50 EAs saying how EA needs to be driven by the business and how they need to engage with the business i.e. go and talk to them. It is a sentiment I agree with in principle. Clearly all ICT strategy and architecture is for and about the business - and should be driven by the business.

However usually these people have no credibility with the business - because the business doesn't see them as business people i.e. as people who understand fundamentally what business is about.

Like me, the business people suspect that these EA people can't usually answer fundamental questions about ICT (i.e. their own business domain). They think it a little precipitous of the EA people to suggest to the business that they jump up and help the business proper - when getting their own house in order would usually be a better starting point (i.e. leading by example, and from a position of strength and knowledge).

Most large ICT organisations can't produce, maintain and analyse the basic information associated with the ICT domain they are meant to be managing (i.e. they don't manage their business). So what the business really thinks is
"how are they so brave - having failed in the business management of their domain - to suggest they can engage with the business about another domain (which they are not the domain experts in)?"

Ultimately managing things in any business is about managing things like people, assets, money and risks.

Usually EAs can't report on what is actually spent on ICT, how it is spent, why it is spent; how the spend can be reduced; how asset utilisation can be optimised etc. They can't demonstrate an understanding of ICT costs based on: asset/product type, location, vendor, class (product, people, service) etc. or related to the services/offerings their business provides (except at the very highest and vaguest level). They can't link their ICT spend to the enterprise's income and mandated compliance needs at a level of detail that allows management to be effective i.e. what is the impact of this project not proceeding or this asset failing. They can't describe the major business functions, information, rules etc. their ICT systems implement (where and how).
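The reporting described above is, at bottom, simple aggregation once the spend data is captured with consistent dimensions. A hedged sketch of the idea (the records, fields and figures are invented purely for illustration):

```python
# Sketch: rolling up ICT spend by dimensions such as asset type, vendor
# and class (product/people/service). All data here is invented.
from collections import defaultdict

spend = [
    {"asset_type": "server",      "vendor": "A", "class": "product", "cost": 120},
    {"asset_type": "server",      "vendor": "B", "class": "service", "cost": 80},
    {"asset_type": "application", "vendor": "A", "class": "product", "cost": 200},
]

def rollup(records, dimension):
    """Total cost grouped by the given dimension (e.g. 'vendor', 'class')."""
    totals = defaultdict(int)
    for record in records:
        totals[record[dimension]] += record["cost"]
    return dict(totals)

print(rollup(spend, "vendor"))  # -> {'A': 320, 'B': 80}
print(rollup(spend, "class"))   # -> {'product': 320, 'service': 80}
```

The hard part is not the arithmetic but the discipline of capturing the data once, consistently, in a shared system - which is exactly what the next paragraph argues for.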

This all requires a system (accessible to all, supporting reporting and data gathering), and some hard work and thinking.

So having failed in one domain they are in fact responsible for - rather than doing the hard work to manage that domain (ICT, as it gets ever more complex and less well understood) - they seem more inclined to redefine the domain than to actually address the problems.

I can't help thinking it would make more sense to demonstrate excellence in the management of ICT before postulating the ability to engage with the business.