Sunday, July 26, 2009

Common frameworks and reference models

I believe what is needed is a consistent meta-framework for describing how information about an industry/sector and an enterprise (federated or stand-alone) is organised.

There have been a number of frameworks that may provide insights into what is required in a meta-framework; some are generic and some are industry-specific.

Generic
  • Zachman framework - a two-dimensional taxonomy with rows indicating the views of different roles (business to technical) and columns indicating data, function, network, organisation, time and motivation.
  • IFW - has a taxonomy and reference models for analysing and structuring information. It is portrayed with 10 columns representing types of information (e.g. strategy, organisation, data, skills, functions, interfaces, network and platform - and either "workflow & solution" or "involved parties, products/arrangements") and rows based on types of analysis (conceptual categories for analysing information, terms and terminology, principles for structuring different types of information, detailed designs that use information, and how to implement these designs).
Industry-specific frameworks (which could be called "Industry Architectures", cf. "Enterprise Architecture") include:
  • Government agencies - FEAF, which has a set of reference models and some methods for applying them (particularly around alignment, investment planning and business case creation). The reference model set includes: Performance, Functions (BRM), Services (Applications), Data and Technology.
  • Telecommunications - TM Forum's reference models for: process (eTOM), information (SID), and applications/services (TAM).
  • Hotel industry - for which I have not yet seen a published structure.
  • Insurance - IAA, which has business processes and activities, and ACORD's eMerge (information exchange). IAA (from IBM) is a set of models that trace out information, data and component-architectural infrastructures, and describe business objects and the components owning these objects. IAA contains a Business Model, an Interface Design Model (components, interfaces and messages), a Specification Framework (product definition and agreement administration) and the IIW (models for creating data warehouses).
  • Health - Healix established a framework consisting of a set of reference models oriented at: strategy; policies and rules; function and process; information; components (applications and services).
The purpose of this entry is not to define the answer - but to outline some things to consider. See also:
  • http://enterprisesto.blogspot.com/2009/01/structure-for-thinking-about.html
  • http://enterprisesto.blogspot.com/2009/01/introduction-to-reference-models.html
It is important to realise that the IT industry as a whole has a lot to lose if a consistent way of dealing with these issues arises - so one can expect opposition from the major incumbent/dominant vendors of IT products and IT services. This opposition is most likely to come from active participation in any groups attempting to establish standards, i.e. embrace and smother (rather than the old-fashioned FUD strategy, which people are becoming resistant to).

The reason for the opposition is that standardised ways of doing things will allow many things to be demystified and commoditised, i.e. reducing the amount that can be charged for products or services as informed, contestable selection becomes viable, and best practice becomes common and public.

Wednesday, July 22, 2009

Semantic precision and documents

Modelling semantics provides all of the following: a glossary, a controlled vocabulary, a data dictionary (objects, properties, visualisations), a data model (relationships, cardinality, layouts), a taxonomy (inheritance hierarchies and domains) and an ontology (where URIs relate data).
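These levels can be pictured as increasingly machine-usable representations of the same term. The sketch below is purely illustrative (the concept names and relations are invented, not drawn from any standard), but it shows why the later levels permit analysis while the earlier ones do not:

```python
# A sketch (all names illustrative) of the same concept, "Customer",
# captured at increasing levels of semantic precision.

# Glossary: free text -- readable by people, opaque to machines.
glossary = {"Customer": "A party that purchases products or services."}

# Controlled vocabulary: the only terms allowed to appear.
vocabulary = {"Customer", "Product", "Order"}

# Data model: properties, relationships and cardinality are explicit.
data_model = {
    "Customer": {
        "properties": {"last_name": "string"},
        "relationships": {"places": ("Order", "0..*")},
    }
}

# Ontology-style triples: every statement is a queryable fact.
triples = [
    ("Customer", "is_a", "Party"),
    ("Customer", "places", "Order"),
]

# Because the representation is explicit, a machine can reason on it:
def supertypes(term):
    return [o for (s, p, o) in triples if s == term and p == "is_a"]

print(supertypes("Customer"))  # ['Party']
```

Only the last two forms can be searched, queried and reasoned on without human interpretation, which is exactly the distinction drawn below.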

When determining semantic precision one would assess whether the representation is
  • correct: consistency of syntax and semantics
  • translatable: can it be transformed while ensuring semantic equivalence
  • analysable: can it be searched, queried and reasoned on. Can logic be applied? Can it be "measured" directly, or does the representation require interpretation?
  • integratable: can it be related to other knowledge in other forms
  • complete: adequacy, expressivity, scope
  • extendable: how difficult is it to define new concepts
  • concise: does it record things as efficiently as possible, with no unintended duplication.
In these definitions we don't mean what a person can do through interpretation or inference - i.e. perhaps someone looking at a picture can analyse it and integrate it with other concepts they have, but a system or application cannot.

When we examine the utility of documents (e.g. Word, PowerPoint, Visio diagrams) we can see that they are not a very good way of ensuring semantic precision. They don't ensure correctness; they are not easily translatable or integratable; and they cannot be analysed or reasoned on. They can be complete, though seldom are, and the form doesn't require completeness (e.g. there is no validation of cardinality). Likewise they are seldom concise. They are extendable.
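The cardinality point can be made concrete: a Word document cannot enforce that every order names a customer, but even a trivial model checker can. A minimal sketch (the metamodel rule and instance data below are invented for illustration):

```python
# Minimal sketch: a model enforces cardinality; a document cannot.
# The rule and instance data are invented for illustration.

rules = {"Order": {"placed_by": (1, 1)}}  # every Order needs exactly one Customer

instances = [
    {"type": "Order", "id": "o1", "placed_by": ["c1"]},
    {"type": "Order", "id": "o2", "placed_by": []},  # incomplete!
]

def validate(instances, rules):
    """Report every relationship whose cardinality falls outside its bounds."""
    errors = []
    for inst in instances:
        for rel, (lo, hi) in rules.get(inst["type"], {}).items():
            n = len(inst.get(rel, []))
            if not (lo <= n <= hi):
                errors.append(f"{inst['id']}: {rel} has {n} values, expected {lo}..{hi}")
    return errors

print(validate(instances, rules))  # flags o2 as incomplete
```

A document format simply has no hook where such a check could run; a modelled representation validates on entry.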

Diagrams present special problems. In some mature domains semiotics and iconography have matured so that images can convey semantics with some precision. Sadly, when one looks at many Visio diagrams one finds that they don't follow any precise semantics: they don't have a controlled vocabulary, and spatial concepts (e.g. alignment, containment, proximity and connecting lines) don't have explicit meaning and are used inconsistently. This lack of precision means they cannot easily be analysed, integrated or, often most importantly, translated.

People often seek to reuse the information in documents. This means the documents need to be translatable and integratable. It is often suggested that with some remediation (i.e. by a person) these documents can be made "correct", "complete" and "concise" so that this is viable. In practice it is almost always more efficient for a person to examine them and model them directly in some form that ensures semantic precision, rather than attempting to correct them, then translate them, and then deal with any translation errors (or remaining incorrectness in the source). The very act of fixing them is more effort than the modelling.

The metamodel provides the modelling semantics in a central repository. A metamodel can consist of a set of sub-metamodels, each covering a different domain of interest. Processes are required to manage change (approval processes associated with creation or changes) and to ensure duplicate or redundant data elements are not introduced. This is more difficult than it appears, as often multiple mechanisms are available for recording the same information: e.g. a property of a person could be their last name, or their last name could be recorded via a relationship to a family.
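The last-name example can be sketched as two mechanisms recording the same fact - exactly the kind of redundancy a repository's approval process has to detect before both mechanisms enter the metamodel. All names below are illustrative:

```python
# Illustrative sketch of the duplication problem: the same fact recorded
# via two different mechanisms, which repository governance must reconcile.

# Mechanism 1: last name as a direct property of the person.
person_a = {"id": "p1", "first_name": "Ada", "last_name": "Lovelace"}

# Mechanism 2: last name derived from a relationship to a family entity.
families = {"f1": {"family_name": "Lovelace"}}
person_b = {"id": "p1", "first_name": "Ada", "family": "f1"}

def last_name(person):
    """Resolve the last name regardless of which mechanism recorded it."""
    if "last_name" in person:
        return person["last_name"]
    return families[person["family"]]["family_name"]

# Both mechanisms yield the same answer -- redundant representations.
print(last_name(person_a), last_name(person_b))  # Lovelace Lovelace
```

Once both mechanisms exist, every consumer of the repository needs resolution logic like `last_name()`, which is why preventing the duplication in the first place is worth the governance overhead.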

I encounter problems with these issues every day when looking at business or technology strategy, enterprise architecture, business analysis etc. The authors of the artefacts, and the small clique the authors have usually communicated with directly to explain things, don't see the problem; but everyone else who encounters the documents and tries to work with them finds problems resulting from semantic imprecision, i.e. the artefacts are incorrect, incomplete, untranslatable, unanalysable, unintegratable and not concise.

I would argue that this is a major problem with current approaches to the adoption of business plans and strategies in particular, i.e. when the foundation artefacts are semantically imprecise.

Tuesday, July 21, 2009

Lessons from the melt down

I thought this a really useful real world example: http://www.infoworld.com/d/architecture/lesson-meltdown-listen-your-architects-806?source=IFWNLE_nlt_blogs_2009-07-20

Fighting the good fight, and losing the war - because internal competition militates against success.

Internecine inertia and the desire to avoid the identification of common-denominators - because the silos of activity are happy being silos.

Planners and architects see the benefit, but what did those within each department have to gain? Compensation models don't reward enterprise players.

Large enterprise software vendors are complicit (IBM, Oracle, HP), i.e. they see no benefit in having things commoditised (anything, really). They want to lock you into a proprietary stack, even though it's impossible to adopt the same homogeneous stack across a very large enterprise.

Monday, July 13, 2009

8 dirty little secrets of Enterprise Architecture and Strategic IT Planning

When I looked at these 9 little secrets of CRM (http://www.cio.com/article/496616/The_Dirty_Little_Secrets_of_CRM) I could not resist drawing the parallels with Enterprise Architecture and Strategic IT Planning (SITP).

One reason I like to draw analogies with CRM is that while most people now realise that CRM is an enterprise activity, once upon a time - and not that long ago - it was dealt with much as enterprise architecture is dealt with today, i.e. with lots of people holding their own siloed views of the data in whatever form happened to suit them. So it is useful to learn from a more mature discipline. Sadly many people still think they can achieve success through the isolated ivory-tower work of a small set of architects (usually focused on modelling).

So here are the 8 dirty little secrets of Enterprise Architecture and Strategic IT Planning:

1. Scope of data and users is key - An SITP system without real users and real customer-facing data is just an empty shell. This means that mechanisms for gathering data from many systems and people are critical.

2. Widespread adoption is key - User adoption and percentage-of-business represented are the key metrics of an SITP system's success. The virtuous cycle in SITP systems: the more users adopt the system, the more data will be entered. The more credible and meaningful the SITP data, the more valuable an asset it is for all users. The more valuable the asset, the easier it is to get more users leveraging, and contributing to, the system. Even if some users are spectacularly effective thanks to SITP usage, if you only have pockets of usage, most situations are not represented in the database. Broad usage is more valuable to overall collaboration than deep but spotty use of the system. This means that you must plan for a large user base and users of many types (i.e. not just a small number of modelling users). This requires role-based interfaces and sophisticated security and data administration.

3. Data quality needs to be improved through adoption - You will discover data quality problems that are irritants to every user and poisonous to the system's overall credibility. Data quality needs to be attacked at three levels: clean data as it is being loaded; identify sources of data pollution and systematically correct them (you need self-healing data); and identify business processes that corrupt the semantics of SITP data. This means that you must have mechanisms for managing policies associated with data, have data quality reporting, and support ETL processes that effectively transform data on loading.

4. Integration is key - There's no such thing as a siloed SITP system. Any useful SITP system must give users access to data that's beyond the purview of the SITP database. So integration will be essential, and it won't be as easy or inexpensive as the initial SITP project. Integration almost always exposes data problems that were hidden or tolerable in siloed system operation. This means you need various mechanisms for integration e.g. ETL, APIs etc.

5. Improving touch point process and governance is critical - Most of the time, an "SITP problem" is really a disjointed process, a policy conflict, or goofed data. Sometimes, an SITP system is just inadequate to the task - and you really do have an "SITP problem." But the most visible and important SITP problems are the ones resulting from holes or redundancies in business processes, contradictory business policies or rules, or hopelessly polluted data. You'll need to solve these other problems to have a chance of SITP success. This is why an SITP methodology that addresses touch point processes is critical.

6. Better performance is the goal (through transformation and optimisation) - The benefits of SITP really come from improvements to process enabled by, and in conjunction with, the system - not from the SITP system itself. The twin purposes of SITP are to improve enterprise intelligence (knowing what people do and how they do it) and to improve your ability to operate profitably. While SITP functionality plays a role in achieving both these purposes, it's really about enabling your people to see better and react sooner. If you don't change your business processes to take advantage of SITP, your workers will just be doing dumb things faster and with less waste. Said another way, you'll probably need to change some processes and business rules to leverage SITP for maximum advantage. This is why we need to focus implementations around desired business outcomes and the resulting changes (not on abstract frameworks or methods) and recognise that we need to change how things are actually done (and often by whom).

7. Sponsorship at the right level is key - Making an SITP system truly successful is a highly political act. Any time business processes, policies, and rules get changed, somebody's job, objectives, and even budget may change as well. This means politics at every level, and change management will be important for worker-bees ("will my job be automated?") and executives ("will my metrics and bonus change?") alike. If for no other reason than this, we recommend a phased, incremental approach to SITP deployment and expansion (based on a CMMI-style model that makes progressive improvements across the enterprise).

8. Incremental and progressive adoption is the only practical approach - The benefits of SITP grow with the more users you have - but you can never afford to bring everyone onto the system at once. Even if system extension, integration, and data quality issues weren't relevant, even if you had perfect execution of a "big bang" system deployment, and even if you had the budget for all the user licenses on day one, you shouldn't do things that way. There are too many process issues to discover, too many political speed-bumps. Since leveraging SITP is a multi-year process, you need to plan for it that way. This is why each phase is carefully scoped.
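Secret 3's advice to "clean data as it is being loaded" can be sketched in miniature. The field names, validation rules and sample rows below are invented purely for illustration - they show one hedged reading of what an on-load cleaning step looks like:

```python
# Sketch of cleaning SITP data as it is loaded.
# Field names, rules and rows are invented for illustration.

RAW = [
    {"system": " CRM ",   "owner": "jane.doe", "status": "live"},
    {"system": "crm",     "owner": "",         "status": "LIVE"},     # duplicate, missing owner
    {"system": "Billing", "owner": "j.smith",  "status": "retired"},
]

VALID_STATUSES = {"live", "retired", "planned"}

def load(rows):
    """Normalise rows on the way in; quarantine anything that fails."""
    seen, clean, rejects = set(), [], []
    for row in rows:
        system = row["system"].strip().lower()
        status = row["status"].strip().lower()
        if status not in VALID_STATUSES or not row["owner"]:
            rejects.append(row)   # report back for source-system correction
        elif system in seen:
            rejects.append(row)   # duplicate introduced upstream
        else:
            seen.add(system)
            clean.append({"system": system, "owner": row["owner"], "status": status})
    return clean, rejects

clean, rejects = load(RAW)
print(len(clean), len(rejects))  # 2 1
```

The reject list matters as much as the clean list: it identifies the upstream sources of pollution (secret 3's second level) rather than silently discarding them.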

Sadly what one hears from these people is: "I don't see how adopting SITP would help me with having conversations with my business stakeholders." They fail to recognise that it is not about "me", "my conversations" or "my stakeholders" - it is about how the enterprise communicates as a whole. Once one would have heard the same kind of solipsism from the sales team, the customer service team etc.: "I don't see how adopting CRM would help me with having conversations with my customers." Now, at least with CRM, people recognise the enterprise nature of the activities and data.

See also http://enterprisesto.blogspot.com/2009/03/i-will-be-your-crmist-and-he-will-be.html