Wednesday, December 3, 2014

Improving solution architecture by reducing the need for archaeology and being clear on requirements


I want to start with a simple analogy, to avoid the scope for the erudite excuses I hear so often from practitioners in IT. The reality is that with new classes of solutions a very small change by each party can produce a dramatically different outcome.


Building - something we all know and understand
When a change is required to a building (or any complex technology) there are 4 things to consider when we are doing the high-level design (e.g. the level of design done by an architect):
  1. how things should be built. For a building, the codes and regulations indicate things such as: what technologies (products, materials) can be used for what purposes and in what way (patterns, best practice, standards, etc.); and various other constraints on how the new building should relate to other existing, or future, elements in the environment (how it should connect, what it should not obstruct, etc.)
  2. what exists. For a building, what existing buildings, infrastructure, common services, etc. exist (in just enough detail to know how they will be impacted, which is typically far less detail than is required to construct them, i.e. we don't need to know where every nail or bolt is placed, or necessarily where every detailed element exists or why).
  3. what the requirements are. For the new building: what products and capabilities it should enable, what it should allow to be done (e.g. functions and processes it should support), what it should store, who should be able to use it (roles, positions, types of people), what characteristics it should have (secure, efficient, safe, etc.), budgets, key dates, etc.
  4. what will be delivered. For a building, a high-level design can be created that determines what is to be delivered (built or bought) and how. It doesn't need to determine the placement of every building element - just enough to be clear what is to be delivered, how it relates to what exists, what requirements it meets (for the client) and that it will comply with key codes (for the town planner). Most of the detailed engineering design necessary for construction is done at later stages by specialised engineers, designers and trades.
With buildings:
The town planning function aims to ensure:
   a) how things should be built is clear
   b) what exists is known - so archaeology for new projects is minimised
   c) what is built is recorded - so future archaeology is not required.
Architects aim to convey:
   d) what they understand the requirements to be
   e) what is to be delivered (for the client)
   f) how what is to be delivered meets the requirements
   g) how what is to be delivered relates to what exists
   h) that what is to be delivered is compliant with standards and codes.

Fortunately with buildings, few town planners were once carpenters who became architects and then became town planners - and each level is relatively clear on its responsibilities, methods and tools.

IT - something we all know and understand doesn't deliver well.
When we look at changes in IT systems or businesses themselves we need to know the same things.
  1. how things should be built, e.g. what technologies can be used for what purposes and in what way (patterns, reference architectures); other constraints on how the new elements should relate to other existing, or future, elements
  2. what exists, e.g. what existing infrastructure or common services exist (in just enough detail to know how they will be impacted, which is typically far less detail than is required to construct them - i.e. we don't need to know where every nail or bolt is placed, or necessarily where every detailed element exists or why).
  3. what the requirements are, e.g. products and capabilities enabled, functions and processes supported, information stores, users (roles, positions), characteristics it should have (secure, efficient, safe, etc.), budgets and key dates, etc.
  4. what will be delivered - a high-level design that shows what is to be delivered (built or bought) and how. It doesn't need to determine the details of every element - just enough to be clear what is to be delivered, how it relates to what exists, what requirements it meets, and that it will comply with key codes. Most of the detailed engineering design necessary for construction is done at later stages by specialised engineers, designers and implementation specialists.
With IT:
Enterprise Architecture aims to ensure:
   a) how things should be built is clear
   b) what exists is known - so archaeology for new projects is minimised
   c) what is built is recorded - so future archaeology is not required.
Solution Architecture aims to convey:
   d) what they understand the requirements to be
   e) what is to be delivered (for the client)
   f) how what is to be delivered meets the requirements
   g) how what is to be delivered relates to what exists
   h) that what is to be delivered is compliant with standards and codes.

Unfortunately with IT, many enterprise architects were once software engineers, who became solution designers and then became enterprise architects - and each level is not very clear on its responsibilities, methods and tools.

To complicate things further, in a complex enterprise there are many initiatives operating in parallel, and what is required to ensure all the changes fit together is a canonical view (common to all) of the things the requirements relate to: products, capabilities, functions, information, roles, positions, etc. So steps d and e need to allow all to understand overlaps and possible conflicts as early as possible.

How it can be done better
With modern EA/EPM solutions the basis is provided for recording:
  a) how things should be built - standards and patterns
  b) what exists - so archaeology for new projects is minimised
  c) what is actually built - so future archaeology is not required
  d) what the requirements are -
       by all projects and by a specific project (explicitly related to a canonical view of the enterprise)
  e) what is to be delivered (solution architecture) - clearly seen as a set of elements,
      by all projects and by a specific project (explicitly related to current and future assets)
  f) how requirements map to solution elements
  g) how what is to be delivered relates to what exists
  h) how what is to be delivered is compliant with standards and codes.
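
To make a to h concrete, here is a minimal sketch (illustrative only - the names are hypothetical, not any particular EA/EPM product's schema) of the record types and relationships involved:

```python
# Illustrative sketch only: the kinds of records a-h imply, and how they relate.
from dataclasses import dataclass, field

@dataclass
class Standard:                 # (a) how things should be built
    name: str
    description: str

@dataclass
class Asset:                    # (b) what exists / (c) what is actually built
    name: str
    status: str                 # e.g. "current", "planned", "delivered"

@dataclass
class Requirement:              # (d) requirements, tied to a canonical view
    text: str
    canonical_refs: list[str] = field(default_factory=list)  # products, capabilities, roles, ...

@dataclass
class SolutionElement:          # (e) what is to be delivered
    name: str
    meets: list[Requirement] = field(default_factory=list)       # (f) requirements -> elements
    relates_to: list[Asset] = field(default_factory=list)        # (g) relationship to what exists
    complies_with: list[Standard] = field(default_factory=list)  # (h) standards and codes
```

The point is not the code but the explicit links: every requirement refers to the canonical view, and every solution element is traceable to requirements, existing assets and standards.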

One can see from this that critical to improving how solution architecture is done - and to minimising the need for archaeology in this work - is improving the recording of a to h. It requires a holistic view that sits above several streams of activity that are frequently siloed. Traditionally, EA has been strong on a and b but relatively weak on c; Solution Architecture has been weak on d, e and f; and requirements teams have failed to relate their requirements to either a canonical view of the business or its assets.

A very small change by each party can produce a dramatically different outcome.



Sunday, October 12, 2014

Six keys to EA success


I recently read the six heresies for EA (www.eight2late.wordpress.com/2014/10/08/six-heresies-for-enterprise-architecture/). My thoughts follow.

Distinguish between planning and design - The "self-referential" problems of EA extend past the frameworks to the practice and practitioners. This is because most of them have a background in engineering rather than planning (or architecture, for that matter). By and large engineers create designs for engineers, i.e. design practice is, to an extent, intrinsically self-referential. Plans are created for a broader audience.

Only consumption of a plan leads to value - The reason EA "rarely leads to anything of genuine business value" is that too little attention is paid to how the information can be consumed, acted on, etc. - a failure to realise that the artefacts are not where the value lies.

Accretional rather than agile - What is required is an accretional approach, that is to say an approach where knowledge accretes. This is incremental, but has a different ethos to agile as applied in SW engineering, where knowledge is transactional, transitory and related to a specific individual or outcome.

Virtuous feedback cycles - Planning assumptions and decisions need constant testing and refinement. EA at present is too much like riding a bike, in a busy city, with your eyes closed. You know where you want to get to and think you know the direction. You close your eyes and take off. What you need to do is constantly assess new information and adjust. The only way to get new information is to apply the information you have, and in that way have it tested and corrected. (see: www.enterprisesto.blogspot.com/2010/05/virtuous-circles-of-strategy.html)

Problem finding rather than problem solving - is great advice, but it is a challenge for many who have a legacy as engineers or designers and secretly yearn to return to the comforting and satisfying realm of design, i.e. solving problems.

Understand the social implications - is good advice, because most of the impediments to EA are organisational and structural. (see www.ea-in-anz.blogspot.com/2008/07/focus-on-ea-is-inversing-proportional.html)

Thursday, October 2, 2014

Demand and Requirements Management for Business Transformation


  • the information relating to the light blue things, at the top, is basically recorded in business architecture and asset portfolio solutions (i.e. they are the canonical, enterprise-wide source)
  • most of the money is spent on the green stuff - with the hope that it positively affects the light blue stuff
  • sadly there is usually a disconnect between the light blue and the green
    • what purport to be the connections are created by lots of "consultants", "business analysts", etc. who create a plethora of persuasive, convenient documents and presentations (essentially disconnected from the facts)
    • this disconnect militates against the real-world value of business architectures and asset portfolios (which is why they are often, rightly, seen as ivory tower or academic exercises)
    • that is to say, strangely, in most organisations there is no source of record (canonical, complete or connected) for the darker blue boxes
  • for most businesses the orange stuff is a necessary evil, yet
    • the approach to requirements is driven from engineering needs in the bottom right-hand corner (on essentially a waterfall model)
  • in an agile approach, which is the only thing that can work, the information flows both ways
    • (which is why the blue arrows go both ways) i.e. as you define requirements in the context of existing views of capabilities and systems (i.e. business operational procedures, business assets, etc.)
    • you will discover things that are missing or need to be updated - which is good. When you use data you find issues with it and you improve it (this is what makes it non-ivory-tower)
    • if you don't use the data in the business architectures and asset portfolios to actually make the changes happen, who really cares about it - other than a few planners or strategists (when in reality it needs to be at the heart of business transformation)





Monday, September 15, 2014

BTM Decision Support - Agile Solutions




BTM Decision support solution - requirements

Agility is key and there is a need to iterate rapidly through questions, answers ("art of the possible") and refinement cycles. 

Support the 4 common scenarios (see Agile Decision Making for BTM):


  1. BTM tool experts use tools to give the answers. In BTM such questions arise often, from many different people. They need to be answered quickly (within an hour) with little effort (5-10 mins work)
  2. Expert end users need to get good-enough answers for themselves (without recourse to a BTM tool expert). Of questions arising, perhaps 20% will require this. Solutions need to be able to be put in place quickly (within an hour) with little effort (30 mins work)
  3. Non-expert end users may want simple portal solutions with on-line analytics. Of questions arising, perhaps 10% will require this. Solutions need to be put in place fairly quickly (a day or so) and with relatively little work (a few hours).
  4. Non-expert end users may want simple task-specific tablet/phone applications. Of questions arising, perhaps 5% will require this. Solutions need to be put in place moderately quickly (a week or so) and with only moderate effort (a few days)


In all cases:

  • the exact path through the labyrinth of data may be different for every question (and in some cases changes to the data model may be required on the fly) - see the sketch after this list
  • the data may be presented in many different ways (report, diagram, chart, an interactive graphic, etc.)
  • the data may be captured and aggregated (from systems, people, models, etc.) and a mechanism may be needed by which this data can be validated, maintained, interrogated, made to trigger reviews, etc.
  • data access rules need to be applied to constrain who can see and change what
  • the solution used in an early scenario provides a prototype for the solution to the next scenario
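
By way of illustration, a minimal sketch of the first two bullets - a question expressed as a path through connected data, with an access rule applied, producing a result set that can feed a report, chart or diagram. All names are hypothetical, not any particular BTM platform's API:

```python
# Illustrative sketch only: a "question" as a declarative path through the data.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Entity:
    kind: str                        # e.g. "Capability", "Application", "Server"
    name: str
    links: list["Entity"] = field(default_factory=list)
    owner: str = ""

def follow_path(roots: list[Entity], path: list[str],
                can_see: Callable[[Entity], bool]) -> list[list[Entity]]:
    """Walk kind-by-kind along `path`, keeping only entities the user may see."""
    trails = [[r] for r in roots if r.kind == path[0] and can_see(r)]
    for kind in path[1:]:
        trails = [t + [n] for t in trails
                  for n in t[-1].links if n.kind == kind and can_see(n)]
    return trails

# The same trails can feed a report, a chart or an interactive diagram:
def as_report(trails: list[list[Entity]]) -> list[str]:
    return [" -> ".join(e.name for e in t) for t in trails]
```

A different question is then just a different `path` (and perhaps a new `kind`, i.e. a data model change on the fly), not a new solution.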


Our approach - we build our solutions on the world's best BTM/EPM platform and extend that so that:

  1. BTM tool experts give the answers: we use dynamic reporting (which defines paths through the data), dynamic model templates and dynamic visualizations to support this.
  2. Expert end users get answers for themselves: we place the dynamic reports/analytics in a user-accessible information portal to support this.
  3. Non-expert end users may want simple portal solutions with on-line analytics: we use a mix of reporting/visualization tools (using the data paths defined in earlier steps) and dynamic portal configurations (navigators, layouts, workflows, etc.).
  4. Non-expert end users may want simple task-specific tablet/phone applications: we do this using a simple framework that re-uses the paths defined in earlier steps and some common extendable display templates.

Usually the data is reasonably well defined in the first two scenarios; the latter two scenarios are principally focused on supporting better interactions with the data and providing clearer insights.

Why many fixed OTB solutions fail:

  • they make it too hard to change the data model
  • they make it too hard to change the specific path through the data
  • they don't provide interfaces for the preferred device mix (PC, tablet, phone)
Why many flexible solutions fail:
  • they don't provide the full mix of output types (report, chart, diagram, interactive)
  • they don't provide access control
  • they don't provide mechanisms for data aggregation, maintenance and integration







Thursday, September 11, 2014

Agile Decision Making for Business Transformation Management

BTM methods are immature and evolving

The fact is that BTM remains today as much an art as it is a science and consequently the methods used to support it are still evolving and maturing. 

Agility is key

People often have known unknowns and unknown unknowns, i.e. things that, until they know more, they don't know they need to know.

They will say things like: "We need to achieve this and I need to know more about that. I don't know exactly what I want, but if you show me something I will be able to tell you if that is what I want, or if it is a bit different".

This is why the "art of the possible" is important and the process to support decision making (and many other things) needs to be agile.

People need to rapidly iterate, through trial and error, until what is needed emerges. In fact the process may even be serendipitous. 

The real world isn't perfect but it is the one we live in


In the real world many ad-hoc questions arise and need to be answered in minutes (or within an hour). If stakeholders get frustrated and can't get answers based on facts in these timeframes, there is a risk they will just guess (or go to someone who can make up an answer). These questions may be only partially thought through, may never recur, and may change rapidly and iteratively as answers are provided.

Learning is about questions that lead to questions

You can see this even when a small child is trying to understand something. They ask a question, which leads to another question, and another, and another. This is the process of learning, i.e. intelligence requires iteration and agility.

Out of the box solutions provide a starting point - not an end point

Out-of-the-box solutions for BTM decision support can, for the foreseeable future, only provide a starting point. They can't encapsulate definitive best practice, because best practice is inchoate and changing quickly as the industry as a whole learns (unlike in relatively mature disciplines, e.g. accounting, architecture, personnel management, etc.)

OTB solutions provide an idea of the "art of the possible" - but it is critical that they are not seen as the way things must be done, i.e. cast in stone.

As it is inevitable that the questions asked will evolve, it follows that the data required (to answer the questions) must evolve, and the ways in which information is presented (and captured) will evolve. It is enabling this evolution that is critical, and it must start with the data (if one thinks of a model, it is the metamodel).
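
A minimal sketch of what "starting with the data" can mean in practice - a metamodel held as data, so entity types and attributes can be added as the questions evolve rather than being fixed in code (illustrative only; names are hypothetical):

```python
# Illustrative sketch only: an evolvable metamodel.
class Metamodel:
    def __init__(self) -> None:
        self.types: dict[str, set[str]] = {}          # type name -> attribute names

    def add_type(self, name: str) -> None:
        self.types.setdefault(name, set())

    def add_attribute(self, type_name: str, attr: str) -> None:
        self.types.setdefault(type_name, set()).add(attr)

    def validate(self, type_name: str, record: dict) -> bool:
        return set(record) <= self.types.get(type_name, set())

mm = Metamodel()
mm.add_type("Application")
mm.add_attribute("Application", "owner")
# A new question needs cost data - evolve the metamodel, not the software:
mm.add_attribute("Application", "annual_cost")
assert mm.validate("Application", {"owner": "Finance", "annual_cost": 120000})
```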

Four BTM Decision support scenarios

1. Expert answers specific questions for specific users - Ad hoc questions, needing quick answers, arise all the time. They may be the wrong questions (they may never be asked again). They need to be answered quickly, and people will come to an expert if necessary to get an answer, e.g. a specific report, number, chart, diagram, etc. In answering these questions it may be necessary to analyse or present data that is not currently recorded (so data model/metamodel changes are needed).

2. Expert user can get answers themselves - a small percentage of the questions that arise will need to be asked and answered fairly often. People will want to be able to produce the answer themselves, e.g. a specific report, number, chart, diagram, etc. It may require some expertise, but it doesn't require an expert.

A small percentage of the questions from scenarios 1 and 2 will be of frequent use to many users (users with no specific expertise or knowledge).
3. Expert solution with generic tools - In some cases generic ad-hoc analytical tools, dashboards, etc. may be able to be used to create answers.
4. Expert customised solution - In other cases highly customised and optimised user interfaces and analytics may be required, e.g. for tablet, phone and watch interactions (with voice-activated interfaces, continuous data collection, etc.)

Let us take a non-BTM-related everyday example - how is my heart health?

1. Expert answers specific questions - I go to an expert to get my blood pressure checked, my blood tested, etc. These tests may lead to others depending on what the answers are. In the following I just focus on one path (blood pressure).
2. Expert user can get answers - I want to monitor my blood pressure daily and track it
3. Expert solution with generic tools - I can dump my blood pressure results into a spreadsheet and chart/analyse them

4. Expert customised solution - I have a phone-based application that monitors and records my blood pressure, sends alerts, and analyses the results (comparing them with my diary and my diet record), presenting results on my phone or tablet - when I ask for them

BTM Decision support solutions.

They need to support all 4 scenarios and allow rapid evolution from one stage to the next, ideally reusing information and solution elements from earlier stages.
1. Expert answers specific questions - solution needs 5-10 mins work with an hour's turnaround
2. Expert user can get answers - solution needs 10-30 mins work and 1-2 hours turnaround
3. Expert solution with generic tools - solution needs less than an hour's work and 1-2 days turnaround
4. Expert customised solution - solution needs 4-24 hours work and 5-10 days turnaround (and results in an asset)






Tuesday, August 19, 2014

Absurd use of the term "Business Use Case"

Firstly, in my own defense, I should say that I first worked with objects in a computer system in 1980, adopted OO analytic approaches for business analysis in the early 1990s, and first did OO development in 1994 (no doubt poorly).

The term "Use case" seems to me to have been imported from another language by non-English speakers.  It is an arcane short hand of "a situation or scenario or use" and often in vernacular usage as "use".

It is often used by business people or executives to try to sound knowledgeable or technically savvy. Usually its inclusion results in unclear communication.

A use case is a list of steps, usually defining interactions between things (people or systems) and a system (usually the one being designed) to achieve a goal.

The abstraction of the thing doing the interaction makes sense if you are the designer of the thing, i.e. conceptually you don't care what does the interacting (so you call that thing an "actor", meaning that which "acts" - an atypical use of the word). In most business, and in fact most technical, contexts what you start off knowing is that it will be a person (in fact a specific role, or even a specific person) or a system (in fact a specific class of system, or even a specific system); recording the interaction as if this knowledge isn't known isn't helpful, i.e. use of the term "actor" just removes knowledge.

Business people would be better off saying: situation, scenario, scenario of use, business demand, business situation, business scenario, process, etc. - unless they actually intend to articulate quite precisely a list of steps, usually defining interactions of things with a system to achieve a goal. In my experience most executives certainly can't be bothered with doing this - there are no steps, there is often no goal - it isn't a use case.

If what you are describing is a specific instance (usually a set of instances) relating to an interaction with an element of technology, e.g. a UI button, a mouse button, a brake button, a brake lever, a brake disc, then "use case" makes perfect sense. It allows us to specify the expected/desired behaviour of the technology based on certain actions (where we may not know what is undertaking the action).





Saturday, August 9, 2014

TOGAF - it doesn't really work, most of the time for most organizations


Someone other than me pointing out some issues with TOGAF. See: http://www.forbes.com/sites/jasonbloomberg/2014/08/07/enterprise-architecture-dont-be-a-fool-with-a-tool/


I know Vish quite well and have a lot of time for him, and a great deal of respect for his experience. Sadly I think he is flogging a dead, though popular and profitable, horse. Even the most experienced advocates of TOGAF say it:
a) doesn't have a great track record of success
b) can fail if you take it "literally"
c) can't be applied by someone who doesn't know what they are doing
d) isn't a cookbook
e) needs to be adapted to the organization (and if following a method needs expertise, adapting a method surely requires more expertise).

Or to extract some quotes:
  • "TOGAF's popularity ... doesn’t have much to do with how well it helps ..."
  • "many TOGAF-driven initiatives have failed, although not necessarily because of problems with TOGAF..."
  • "TOGAF is not a cookbook .. it consists of: best practices, processes, principles, rules, guidelines, techniques ... a toolkit,” 
  • “TOGAF can fail is when people take it too literally ... ”
  • "If people are struggling with TOGAF, either they’re not adapting it for their organization, or they’re not getting people who’ve ‘been there, done that, got the T-shirt, have the scratch marks’ to help with the initiative”
  • “A fool with a tool is still a fool. ”

Vish suggests 3 approaches: 1/ "...baseline ... because it’s good for cleaning up messes"; 2/ "... target [business outcomes] ..."; 3/ "some baseline, then target ... an iterative approach ... take a pain point, create that slice of EA ...". I would suggest that only the last of these can really work. And even then it won't work if done using the wrong tooling - because the data can't be maintained and re-purposed for subsequent iterations.

Vish does point out that information needs to be managed. But sadly TOGAF isn't an information management solution, so doesn't help that much. In other areas people are smart enough to realise that they need good tools for managing information - yet EA teams (and TOGAF people in particular) persist with SW-engineering-oriented tools, languages, methods and modelling completely ill-suited to the task (aided and abetted by the OMG, with its SW engineering legacy and prejudices).

Jason Bloomberg is right to point to the failure of traditional EA, and TOGAF (he should look at traditional Project Management in IT, which is even more of a farce). But he seems to have difficulty with concepts of classification and with understanding what the issues really are.

1. Buckets that are not buckets - he suggests TOGAF users fall into four buckets: those that achieve very little (because they apply TOGAF incorrectly); those that achieve a baseline (that helps to resolve legacy issues); those that get help addressing specific business outcomes; and those that want to deal better with change overall, and look to EA to help them become more agile. In reality many users achieve very little, but do establish a baseline (that helps a bit with legacy issues) and do address a few specific business outcomes; and most want to deal better with change overall. So it just doesn't make sense to say most fall into one of these four buckets - most don't fall into any one, but sit across many.

Bloomberg fails to identify the real issue in making decisions and changes (agility). There is always:
a) a relationship between the quality of the information available and the quality of the decision
b) usually a relationship between the ease with which information can be analysed and interpreted and the quality of the decision, i.e. if analysis is just too difficult it often won't be done
c) a trade-off between speed and quality of decision
d) the risk that a poor decision made in haste, with poor information, will have unforeseen long-term adverse outcomes that affect others, i.e. not the immediate decision makers or the people who implement the specifics of the decision, but those who live with the consequences (a number of wars, and social experiments, come to mind)

The challenge is finding the sweet spot, i.e. the point of diminishing returns. The real failure of EA is that few people address these issues. How do you:
a) establish a lightweight, effective way to maintain the base set of information needed for good decisions
b) use that information in decision making, rather than pulling the answers out of the air (which is how most IT decisions are really made)
c) apply that information effectively to downstream initiatives and, as a by-product of that application, improve the quality of the information available to all
d) avoid decisions that suit an immediate political agenda but have disastrous downstream consequences for a broader community.
Most of these things come down to establishing a virtuous cycle of information use and reuse, combined with governance that balances short-term and longer-term outcomes.

TOGAF's problem is that it is like a condom with a hole in it - it gives the illusion of making something safe without actually making it safe. It is the EA equivalent of the old medical practice of bloodletting.

See:
http://enterprisesto.blogspot.co.nz/2010/05/virtuous-circles-of-strategy.html
http://methods-and-tools.blogspot.co.nz/2013/11/what-is-wrong-with-togaf-in-practice.html




Sunday, February 2, 2014

Why don't IT organizations maintain, reference and continuously improve the simplest of inventories?

Ask large organizations that have fleets of cars:
a) does each car cost less than $50k?
b) can they essentially continue to operate if a car fails?
c) do they have an inventory of the 100s or 1000s of cars they have?
- the answers are usually: yes, yes and yes.


Ask the same organizations, which have multitudes of people:
a) does each person cost them less than $50k to onboard and make productive?
b) can they essentially continue to operate if one of their people fails catastrophically?
c) do they have an inventory of the 100s or 1000s of people they have - knowing who manages them, where they are, what they do, etc.?
- the answers are usually: yes, yes and yes.

Ask the same questions about the interfaces between systems and the answers are usually: no, no and no. And often they don't even have canonical inventories of the systems.
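
For contrast, the "simplest of inventories" for interfaces need capture very little - the analogue of a fleet register. A minimal sketch (illustrative only; field names are hypothetical):

```python
# Illustrative sketch only: a canonical inventory of interfaces between systems.
from dataclasses import dataclass

@dataclass
class Interface:
    source_system: str
    target_system: str
    mechanism: str        # e.g. "file transfer", "API", "message queue"
    data_exchanged: str
    owner: str            # who manages it

inventory = [
    Interface("CRM", "Billing", "API", "customer accounts", "J. Smith"),
    Interface("Billing", "GL", "file transfer", "journal postings", "A. Jones"),
]

# Even this much answers "what is impacted if Billing fails?" without archaeology:
impacted = [i for i in inventory
            if "Billing" in (i.source_system, i.target_system)]
```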

Tuesday, January 21, 2014

Why classical Project Management fails to deliver for IT


Classical Project Management methods and systems fail to deliver what is needed for IT strategic planning because they are oriented at a different problem (not strategic planning or management) in a different domain (not IT). Consequently classic PPM solutions, and the PMOs applying them, fail to provide key strategic insights required in an IT and business transformation context.

Classic project management has its roots in the built environment (construction and engineering). The classical paradigm assumes a well-defined solution scope and well-defined solution impacts, and focuses on defining and managing the plan, i.e. what is required for execution and delivery: when, in what sequence, by whom, etc. (tasks, resources). It also assumes well-defined methods for detailed delivery, clear boundaries between teams, and an implicit and detailed understanding of the behaviour and characteristics of materials and technologies. So essentially it focuses on tracking and predicting costs, dates, overruns, overloads, etc. at a task and resource level.

To give a simple example, when planning the delivery of a building it is assumed that:

  • the design of the building being delivered is fairly well defined (solution scope),
  • where that building is, and how it relates to other buildings, services, etc., is fairly clear,
  • how bricks are laid (methods for detailed delivery) and how brick laying relates to wall plastering are known (boundaries between teams), and we know what bricks do (behaviour and characteristics of materials and technologies).

The reality is that in IT, when considering a portfolio of transformation initiatives, the key things we are trying to understand are:

  • how the transformation portfolio produces the desired business outcome
  • what the portfolio of solutions (and therefore projects) looks like and what the scope of each is exactly
  • how each solution (and therefore project, and sub-project) relates to other solutions and projects.
  • what new technologies do, how new technologies are best applied, and how this affects boundaries between delivery roles.
Strategic planning operates at a different level of detail than execution planning. Strategic planning of the portfolio is like a property investor working out which buildings to build, retire, etc. It doesn't require the task-level detail of resources that would be required for execution - it only requires a project-level allocation of classes of resources, and scheduling at this higher level.
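
To illustrate the difference in granularity, a minimal sketch (illustrative only; names are hypothetical) contrasting an execution-level task record with a strategic, project-level allocation of resource classes:

```python
# Illustrative sketch only: execution-level vs portfolio-level planning records.
from dataclasses import dataclass

@dataclass
class Task:                  # execution planning: who does what, when
    project: str
    name: str
    assignee: str
    hours: float

@dataclass
class Allocation:            # strategic planning: classes of resource, per period
    project: str
    resource_class: str      # e.g. "integration developer", "business analyst"
    period: str              # e.g. "2015-Q1"
    fte: float

portfolio = [
    Allocation("CRM replacement", "integration developer", "2015-Q1", 3.0),
    Allocation("CRM replacement", "business analyst", "2015-Q1", 1.5),
]
```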

Even detailed IT project management focuses more on managing scope within an, ideally fixed, time and budget than it does on managing time and budget to an explicitly defined scope.

The things PPM systems excel at when managing execution are often just not applicable, e.g. task sheet production, time sheet recording, tracking detailed actuals of execution tasks, and estimating cost and time to complete based on task-level critical paths. That is not to say they are not useful - they are just not useful for strategic planning.

What is required are multi-dimensional views of various impacts, i.e. on the many aspects of the business plan, policies, architecture, assets and agreements (and related IT policies, architectures and assets).

More thoughts on the underlying issues are here:

Projects are artificial constructs and often managed in ways unsuited to enterprise change initiatives