Meaningful software. Modelling bias

Vsevolod Vlaskine
24 min read · Jul 23, 2019

By definition, a model is an abstract representation of a certain aspect of the actual thing. Say, queueing theory models how items move through a system when its throughput is limited. The abstract queue can represent cars on the road, a production line, etc. A good model predicts well how a system structured in a certain way would behave, simply because the model reproduces the system in its significant parts, but in a simpler way.

Models are picked for a reason. Unlike the models of queueing or complexity theory, say, the models of the human cardiovascular system are not routinely applied to the software process (apart from maybe ergonomics), although they accurately represent human functioning, which is an integral part of software production. You apply a model not just because it is “correct”, but because it demonstrably predicts production outcomes.

Say, one can look at a specific software project in terms of queueing theory, as a collection of queues of items waiting to be processed, in order to assess and improve time to market. Time to market is used as a quantitative metric: longer is worse, shorter is better. Reduced waiting time in the project queues will reduce the calculated time to market: the metric, or utility function, of the model can be directly calculated.
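
As a minimal sketch of such a direct calculation (the stage names and numbers are hypothetical, and Little’s law from queueing theory, W = L/λ, is my choice of illustration, not something the methodology prescribes): the time an item spends in each queue follows from the queue’s backlog and throughput, so the time to market can be computed directly from the model:

    #include <iostream>
    #include <vector>

    // each project stage is a queue characterised by its average backlog and throughput;
    // by Little's law, the average time an item spends in a stage is W = L / lambda
    struct stage
    {
        const char* name;
        double backlog;    // average number of items waiting (L)
        double throughput; // items completed per week (lambda)
        double waiting_time() const { return backlog / throughput; } // weeks
    };

    int main()
    {
        std::vector< stage > stages = { { "analysis", 12, 4 }, { "development", 20, 5 }, { "testing", 9, 3 } };
        double time_to_market = 0;
        for( const auto& s: stages ) { time_to_market += s.waiting_time(); }
        std::cout << "estimated time to market: " << time_to_market << " weeks" << std::endl; // 10 weeks
        // reducing any backlog or raising any throughput shortens time to market directly
        return 0;
    }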

Obviously, in many cases, one cannot directly calculate the utility of a model. Say, a psychological model may suggest that higher salaries might lead to higher productivity, but such a conclusion would not follow directly from a model-based calculation. It can be established in an indirect way and then measured in experiments. Say, one can find that, statistically, comfortable chairs improve development speed logarithmically. No doubt such experimental results are useful (e.g. Google does this sort of data mining to improve the conditions and productivity of its teams), but the advantage of the direct calculation of the utility function is that it shows you the exact mechanism of how to achieve better outcomes, while the experimental, empirical cause-effect relationships look more like soft recommendations, which can be misinterpreted by those who apply them.

The “direct” models are very attractive, since their utility follows from their mechanics. This leads to the temptation of devising “direct” models not to represent the mechanics of the modelled system, but to extract from it a desired utility function. Take, for example, the old but still widespread “mythical man-month” illusion: if the project is planned down to a day, it is sufficient to find the sum total of all the planned man-days to predict the project delivery date.

It is worth dwelling a bit on the latter example, since it will later help to question even the “direct” models whose mechanics define how the cause-effect relation is produced. The source of the mythical man-month illusion is a flavour of the Laplace’s Demon fallacy, coming from classical science, which suggests that if we know everything about a system, we can perfectly predict its behaviour; or, in a weaker form: the more we know about a system, the better we can predict its behaviour. However, it is not the case with software development: the management or analysts go deeper and deeper into implementation details, requesting more and more accountability for every hour spent, sowing anxiety, and stifling the creativity and productivity of the project team.

The level of detail of project planning looks like a continuum from coarse to infinitely refined. Say, what would be the best level of detail for the initial upfront project planning? Roughly, two common answers might correspond to the “traditional” and agile approaches. The “traditional” way would be based on a time-to-market utility function derived from a faulty assumption that project work can be modelled as the sum total of the work on each individual item. The agile approach would suggest best practices of managing backlogs at several scales, which works empirically, however it is justified only indirectly. For example, why is the time horizon for a sprint two to four weeks? Because it has been found empirically that detailed planning for periods longer than a month leads the development to lose focus and digress from value-driven goals.

Various models, e.g. from queueing theory, sociology, etc., only partially explain such time scales, but do not give technical criteria for what the right time scale is. Instead, best practices, techniques, and heuristics like organisational patterns or formulae like “go see”, “andon cord”, etc. are offered. The common justification of the best practices falls into two parts. Firstly, some aspects of those best practices can be modelled (e.g. in queueing theory). Others do not lend themselves to easy modelling. For example, pair programming looks like it takes double the amount of engineers’ time. How could we show it is more productive? Pair programming as an intellectual effort, human communication, etc. does not lend itself to a simple model that would behave like pair programming and allow prediction. Instead of a model, we run an experiment on the system itself. Instead of assessing the model’s behaviour (we don’t have one), we assess the behaviour of the system itself: say, pair programming productivity.

There are several consequences:

  • Although the experiment gives us the utility function (e.g. tells us how long the task took), it does not reveal why and how the system works at all: in pair programming, was it improved socialization, more focussed effort, increased supervision, or more fun that led to better performance?
  • Such experiments at a larger scale fall into the Laplace’s Demon trap: to prove the point and run such an experiment on a project scale, comparing apples to apples, we would effectively need to run the whole project twice, say once as “waterfall” and once as “agile”. Obviously, such experiments are rarely, if ever, staged, and rightly so; the result is endless culture wars.
  • Because this sort of experiment does not reveal the mechanics of success or failure, the only proof of the pudding is the measured utility function (time-to-complete, salary costs, etc). Since entering the mainstream, agile thinking has been somewhat too anxious to justify itself through “better value for customer”, and therefore practices have to graduate as “value-adding” in order to qualify as “best”. As one result, the “inner” value is seen as “inventory” that needs to be reduced as much as possible, rather than as a capability that has high intrinsic value to the software team and the whole company, but little value to the customer or marketing department.
  • Empirical results may be perfectly scientific, however they are a much harder sell to the unconvinced; the missing direct causal explanation, exacerbated by delayed and interpretation-prone effects, provokes the common reactions to “yet another agile technique”, like “it is all religious wars” or “it’s our word against yours”.
  • The empirical approach in software engineering is biased towards producing thousands of concrete techniques on one hand and very high-level principles like 5S (seiri, seiton, seiso, seiketsu, shitsuke) on the other, which (usefully) express the spirit rather than a specific method. Such an approach requires a long apprenticeship to pick up the many techniques and absorb the spirit first-hand, rather than giving the engineer a compact conceptual framework that would allow her or him to generate and assess techniques and designs on their own. Apprenticeship is vital, but it may be long and expensive, and speeding it up pays.

There are essential things in agile that are hard to “model”. Instead, they are taken from and confirmed by experience as best practices, recipes, or patterns coming from books, from reflecting on one’s experience, or from common sense. How can their “mechanics” be expressed in more formal terms, so that one does not just soak them in after years of mentoring and practice, but can actually assess a specific organisational or design practice or generate a new one?

Moreover, empirical models eventually stop at questions like: what is the right level of detail for a component architecture? how much documentation does a product need? how fine-grained should time planning be? The naive approach often falls into the Laplace’s Demon trap, e.g. document “everything”. The empirical modelling cautiously suggests: “A better way to frame all these issues [level of detail, documentation, etc] is along continuums. Appropriate behaviour varies along a continuum for each discipline, and this may evolve iteration by iteration … This is the view in Scrum: Practices adjust along continuums according to context.” [Larman, Vodde]*, p.130

Apart from the empirical iterative approach, can we offer structural criteria or decision mechanisms that would help to identify the right scale in design, planning, etc. on a continuum spanning from small to large, or from fine-grained to high-level, without falling into the trap of simplistic models meant to optimise “customer value”? And is a “continuum” the best way to think of those things: are they really continuous functions of size or time?

Software possesses one quality to a much larger degree than most industries: its main instrument is language, and its products are the written word. Software systems do something to the material world: they control machines, provide communication, etc.; these actions are performed through executing sentences in artificial languages. The connection between a user story in natural language and the code as a text in an artificial language, as well as the connection between saying (code) and doing (its execution), is fundamental to software engineering, but software methodologies rarely (if ever) look into its structure.

Sentences in a programming language could be classified into two large groups:

  • descriptive: data structure definitions
  • performative: the expressions that say something and do something by the merit of saying it.

There are no other types of expressions in the code. The definitions of passive data structures may be only descriptive, but they inevitably are meant to be used in performative expressions somewhere in the code: something will be done with them, otherwise they are useless.

The expressions written in programming languages are performative exactly as J. L. Austin defined them (for natural languages) in the 1950s: by saying “I pronounce you husband and wife”, the priest actually makes a couple husband and wife. By saying v+=2;, I actually increment the value of v by 2 when my code is executed; and if, say, v is passed as velocity to the vehicle control, my saying actually does make the vehicle go faster.
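
A minimal sketch of the two groups in C++ (the names are invented for the example): the struct definition merely describes, while the statements in the function do something by the merit of being said, once executed:

    #include <iostream>

    // descriptive: this definition says what a velocity command is, but does nothing
    struct velocity_command { double value; };

    // performative: each statement below does something by the merit of being "said"
    void speed_up( velocity_command& v )
    {
        v.value += 2;                      // saying it actually increments the value
        std::cout << v.value << std::endl; // saying it actually emits the value
    }

    int main()
    {
        velocity_command v{ 10 };
        speed_up( v ); // if v were wired to vehicle control, the vehicle would actually go faster
        return 0;
    }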

Why does this obvious fact matter? Because computer programs are language artefacts of a special (performative) sort, and therefore linguistic reasoning applies to them. Most engineers, analysts, managers, and customers are used to thinking about a software product as a “model” of some aspect of reality, and about software design itself as a sufficient amount of the right type of modelling (hence object-oriented models, relational models, software ontologies, etc). However, in terms of Hjelmslev, a key figure in 20th-century linguistics, models are essentially symbolic systems: they substitute reality with matching symbols, where symbols correspond to things, and therefore the “model” behaves similarly to the real thing, which gives it its predictive power.

However, computer programs are systems of signs, semiotic systems (see Code as two texts for more). Hjelmslev showed that symbolic and semiotic systems cannot be reduced to each other, therefore “modelling” cannot possibly sufficiently cover software engineering design and practices, since those are semiotic systems. Instead, they would benefit from the application of the linguistic apparatus and, specifically for software, from the analysis of the performative structure of the code and of the software process.

Each executable portion of code is two things at the same time, expressive and performative: it is up to the programmer to make sure the code says what it does in a concise manner.

Moreover, the only reason a class, function, or utility is written is to use it for something else, to do something with it. Its meaning is in how it can be used. To be sure, it is not just the class interface, it is actually the class usage that represents the meaning of a software artefact. For example, the meaning of the C++ STL containers consists not just in their types and methods like begin(), end(), iterator increment, etc., but in the patterns of their usage: the iteration concepts and so on. It was Wittgenstein who revolutionized the philosophy of language by suggesting that the meaning of words or sentences is their use. We don’t need to go into general philosophical discussions, since Wittgenstein’s definition of meaning simply works for programming languages in practice: only programming artefacts that are used make sense.
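
For example, a sketch of the familiar usage patterns: the meaning of an STL container is carried less by its method list than by the iteration idioms it is deployed in:

    #include <algorithm>
    #include <iostream>
    #include <vector>

    int main()
    {
        std::vector< int > v = { 3, 1, 2 };
        // the meaning of begin()/end() is not in their signatures, but in the
        // patterns of their use: iteration and algorithms over iterator ranges
        for( auto it = v.begin(); it != v.end(); ++it ) { std::cout << *it << " "; }
        std::cout << std::endl;
        std::sort( v.begin(), v.end() ); // the same pair of iterators deployed in another usage pattern
        for( int i: v ) { std::cout << i << " "; }
        std::cout << std::endl;
        return 0;
    }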

Therefore, each software artefact has two sides: performative (what it does) and expressive (what it says). What it says needs to be meaningful in terms of its use. Once used, it becomes a performative implementation detail of another series of expressive statements in the code; for example, on a small scale, from low-level comms through database query primitives to sets of complex specialized database queries to the end customer-facing semantics. The use (and therefore the meaning) of each of those steps should not be coupled with the implementation details of its executable side.
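
A hypothetical sketch of such a chain (all the names are invented): each layer is used, and therefore means something, in the vocabulary of the layer above it, while its executable side stays an implementation detail:

    #include <string>
    #include <vector>

    // low-level comms: a performative detail for the layer above (stubbed out here)
    std::string send_request( const std::string& query ) { return query; }

    // database query primitive: expressed in terms of comms, used by specialised queries
    std::string select( const std::string& table, const std::string& where )
    {
        return send_request( "select * from " + table + " where " + where );
    }

    // specialised query: expressed in terms of primitives, used by customer-facing code
    std::vector< std::string > overdue_accounts() { return { select( "accounts", "overdue" ) }; }

    // customer-facing semantics: what the user can "say" with the product;
    // its use does not depend on how the layers below execute
    void send_payment_reminders()
    {
        for( const auto& account: overdue_accounts() ) { (void)account; /* notify the account holder */ }
    }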

(One may think: is this not all just about encapsulation? Encapsulation is only a part of the story: making a hundred-line method private is hiding a blob of code that lacks the expressive-performative quality. Feature-hiding, often mistaken for encapsulation, is a last resort and typically a quick fix; good code is exposed as much as possible. E.g. if a class has to have a lot of complexity, the latter should not simply be hidden behind private methods, but decomposed into more classes, which are then instantiated as private members of the class. If this is not done religiously, poor unit test coverage for the class is almost guaranteed.)
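
A sketch of the difference (the class names are hypothetical): instead of one private hundred-line method, the complexity is decomposed into small classes, each exposed and testable on its own, and only their composition remains private:

    #include <string>
    #include <vector>

    // feature-hiding: the blob is merely made private and cannot be unit-tested
    class report_blob
    {
        public:
            void run(); // calls the hundred-line private method below
        private:
            void parse_and_aggregate_and_format_();
    };

    // decomposition: each collaborator is an exposed, testable artefact
    class parser { public: int parse( const std::string& line ) const { return std::stoi( line ); } };

    class aggregator { public: int sum( const std::vector< int >& v ) const { int s = 0; for( int i: v ) { s += i; } return s; } };

    class report
    {
        public:
            int run( const std::vector< std::string >& lines ) const
            {
                std::vector< int > values;
                for( const auto& line: lines ) { values.push_back( parser_.parse( line ) ); }
                return aggregator_.sum( values );
            }
        private:
            parser parser_;         // the complexity is still hidden as private members...
            aggregator aggregator_; // ...but each piece remains a first-class, testable artefact
    };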

This all may be commonplace and obvious. I would like to emphasise that the structure described above does not model any kind of “reality” outside of the code, unlike, say, object-oriented design. Instead, it describes the foundational elements of code. Just like object-oriented design articulates cleaner models for some aspects of the “real world”, maintaining a clean-cut execution↔meaning↔use structure throughout the code leads to naturally better software (i.e. code that is closer to its semiotic nature). The main thing about this structure is that it represents sign and language qualities and therefore is essentially linguistic and particularly related to Austin’s theory of performative utterances.

One property of performative utterances is that they do not have a truth value, i.e. they are not true or false [Austin]. The correctness of the code is strangely less relevant than one might think, given that there is a strong inclination to think about software code as a chain of logical conclusions, and in classical logic statements are either true or false. Rather, the “correctness”, or better “consistency”, of the code is really assessed by checking whether it says what it really does. Does it say one thing, but do something else, meaning that artificial languages seem to have a very human propensity to lie? Or is the code written in such a way that one has to read each line to understand what it does (which is a flavour of the software Laplace’s Demon)? But then how can the code be “wrong” or “false”?
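
A trivial illustration (invented for the example) of code that “lies”: the function below is not false in any logical sense, it is semantically inconsistent, because what it says diverges from what it does:

    #include <algorithm>
    #include <vector>

    // what it says: a query about the state of v; what it does: a mutation of v
    bool is_sorted( std::vector< int >& v )
    {
        std::sort( v.begin(), v.end() ); // quietly modifies the caller's data
        return true;                     // the answer is now vacuously "true"
    }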

Unlike in logic, there is no inherent need for, or even possibility of, being true or false in the code. The code may only be semantically consistent: when you use it, it does what you think it does. Hence the relentless semantic decomposition along the execution↔meaning↔use line. One common design problem, which I think comes from perceiving software as something “logical”, is that the participants see design considerations as conditions that are either true or false. This problem has been solved (or rather never existed) in Pattern Languages: design considerations are forces that need to be balanced. The solutions are not true or false; instead, the interacting forces are in balance or they are not. A path to a good design is to articulate use cases as constellations of forces. The resulting solution balances the forces for the given use cases and therefore is meaningful, because its meaning is in its use. (“Language” is often omitted in “Software Pattern Languages”. A pattern language allows its patterns to be put together into “usage” sentences, but its underlying semantic layer is the language of forces.)

It is essentially insufficient to assess whether a piece of software is meaningful purely in the logical terms of being true/false or correct/incorrect, which unfortunately still happens a lot in the “non-agile” world. The following citation is an example of a well-meaning proposition: “It is this semiformal structuring that liberates the creativity of people. Rigid formal requirement models can be stifling, and are unusable by most people because they have not been expertly trained in the appropriate modelling technique.” [Adolph et al] It is followed by a great text, however its reasoning is built on the empirical evidence and common sense of software experts and misses a chance to drill down toward firmer conclusions on why formal specifications do not work well: formal specifications are inefficient not because “most people … have not been expertly trained in the appropriate modelling technique”, but because “modelling techniques” are symbolic systems and therefore inevitably become inadequate when applied to a software project, which is an essentially semiotic system.

Another common critique of the “waterfall” goes: “Its great strength is that it is supremely logical … It has just one great weakness: humans are involved.” [Deemer, Benefield] The discussion that follows is great and convincing, but despite that, I take the quote out of context to demonstrate how the reasoning in that specific sentence, although well-meaning, falls into the same Laplace’s Demon trap. Criticising the “waterfall” for the wrong reasons makes it hard, if not impossible, to fix certain problems of heavy-weight methodologies. The “waterfall” is expensive to the point of intractability not just because of irrational humans and the necessity of a more empirical approach. Its costs would remain prohibitive even with the most compliant engineers, and the problems may not disappear with an adaptive approach. It is so expensive and inadequate exactly because it is “supremely logical” in its attempt to build a symbolic model of the software design, which is a semiotic system.

In the same way, software is often seen as meaningful if it brings “value to customer”, and books on agile are permeated by statements like that. Whatever does not create “value” is seen as “waste”, at best “necessary waste”. The problem is that once the value is expressed quantitatively, in terms of money, time to market, or productivity metrics, the meaning in software gets trapped in a symbolic system representing the circulation of that value. That is why customers or the marketing department do not like it when programmers try to do things properly, with crisp semantic-pragmatic relationships in place. It is not because the customers or managers are greedy or impatient, but because semiotic aspects are in a blind spot of the “value”-driven symbolic system.

The expression “customer value” can be a misnomer, since it has the connotation of measurability. What really is delivered to the customer is a software capability to do something that brings the customer benefits, hopefully beyond the costs of developing that capability. Those benefits are not necessarily immediately measurable; e.g. it could be a system used in public education or fundamental science with no immediately obvious quantitative value. Thus, the delivered capabilities of a software product have the same execution↔meaning↔use structure as any intermediate software artefacts. From this point of view, let us take a closer look at a popular format of user stories: “As a <customer/user role> I want <goal> so that <reason>” (C-Style User Story, see e.g. [Larman, Vodde]**, p.271).

Expressing the customer’s goal and reason (rightly) focusses the meaning of the user story on the “customer value”. However, the eventual purpose of the user story is to be projected into its technical design and implementation (the user story is useless unless [eventually] implemented). Thus, I will try to rephrase the C-Style User Story format according to the semantic/pragmatic structure of software:

  • Role emphasizes the concrete product user or actor. It is the right thing to do. The story tells us that the user does something by utilizing the product capabilities. That action makes sense not on its own, but in relation to <reason>. The <reason> expresses the meaning of the user’s action. Where I am going with this is rephrasing the C-Style in meaning-use (expressive/performative) terms. Let us formulate Austin’s classical example of a performative utterance as a user story: “As a priest, I want to be able to perform the wedding ceremony so that I can pronounce you husband and wife.” Clearly, “priest” is the <role> here. However, the priest will be able to effectively perform the required ceremony only if people actually understand and accept what “priest”, “church”, or “wedding” are. His pronouncement would be cargo cult in a place where this vocabulary is not effectual, e.g. in a Buddhist village. If you feel this example is too remote from software engineering, think for instance of financial software products: they would be meaningless in moneyless societies. Therefore, <role> defines not just an actor, but a namespace in a vocabulary in which this role makes sense. Without constantly keeping this in mind, emphasising the actor leads to the personality bias and subjectivism of “customer satisfaction”.
  • Goal is really an action, i.e. what the actor is going to do, given some (required) product capabilities. This action is a deployment of capabilities from the product capability vocabulary, a performative step just like the execution of a statement in the code, except that it is performed by the user story actor, not a CPU.
  • Reason is somewhat of a misnomer. Although <reason> answers the question “why?”, it potentially (and easily) invites one to think of user stories as cause-effect sentences or logical statements, whereas the <reason> clause should formulate not the purpose, cause, or reason of a requirement, but express the actor’s action in the user-level vocabulary, i.e. the vocabulary or semantic namespace implied in <role>.

Let us go through one more iteration of reasoning. (As a potentially dry side note: while many thinkers of the 20th century, such as Wittgenstein, Austin, Hjelmslev, or even Deleuze, to name a few, have done work on the expressive/performative aspects of semiotic systems, the relatively recent book by Robert Brandom [Brandom] is condensed research on what Brandom calls meaning-use relationships, which is just another way to represent the execution↔meaning↔use structure. If we have two vocabularies or mini-languages A and B, the meaning-use relationship comes as an answer to the question: “what should I do with vocabulary A to be able to say something in vocabulary B?” Brandom looks into much more generic vocabularies like modal logics, whereas in the applied field the vocabularies are much “smaller” languages like Pattern Languages or Wittgenstein’s language games, e.g. his classical example of a builder’s language that happens to open his Philosophical Investigations.)

  • Role (user): Suppose we write a user story on a product for trading. Something like: “As a trader, I want … so that I …” The story effectively says: “As a trader in an investment firm, I need to be able to …” The namespace or user-level vocabulary/language here is one of the vocabularies that express the operation of an investment firm. Analysed as an execution↔meaning↔use structure, the trader plays the executing role in the firm, just like the CPU executes code. We are speaking from the structural point of view, with no intention of comparing people to mechanical cogs in an organization.
  • Action (goal) has been sufficiently analyzed by now: it is an action of capability deployment or, more formally, the pragmatic step of mapping the software product capability vocabulary/language into the user’s vocabulary/language.
  • Reason of the trader’s user story in its turn represents a capability for an “organizational” story like: “As a risk management specialist, I want to get traders to … so that I can …” Note that the latter story is not necessarily of a “higher” or more “strategic” level than the trader’s; the risk specialist is not the trader’s manager, but rather the trader’s customer, working with traders peer-to-peer. The trader’s language/vocabulary (in terms of execution/meaning/use) is different from the risk analyst’s language. Both of them may make use of each other: the trader, too, may express something in his language by deploying capabilities of the risk analysis language. Although this part of the user story seems to answer the question “why?”, in a good user story the relation is not really causal, but pragmatic: performing the action (based on the [desired] software system capabilities) by the user is the same as saying what it means to the user in his language. Stretching it somewhat, it would be better to use meaning instead of reason.

Seen this way, the action and meaning of the user story correspond directly to execution and meaning in the execution↔meaning↔use structure, whereas the user specifies:

  • the namespace/vocabulary/language in which the meaning is expressed (e.g. the language of accounting operations)
  • who executes the operations, in the same way as the system capabilities listed as required for the action identify which part of the software product gets executed to bring those capabilities to life

The rephrasing above matters because it highlights a more basic structure behind the user story. The user story is the point of contact between the customers and the developers. Earlier, we identified the execution↔meaning↔use structure at the core of the software code. Above, we have just seen how the very same structure applies to the customer’s side of the user story: the actor executes (by utilizing the capabilities of the software product) actions to express something in one of the (human) languages of the customer’s domain. Rigorously defining user stories with the execution↔meaning↔use structure in mind not only leads to better software requirements, but helps to better analyse the business on the customer’s side. Effective user requirements in the form of user stories and software code actually have the same semiotic structure, execution↔meaning↔use, i.e. we have shown that the semiotic structure is not just an idiosyncrasy of the code, but a feature of the development process and indeed of the functioning of an organisation.
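
As a sketch only (the field names and the example values are mine, not a standard format), the rephrased user story can be written down as a data structure that makes the execution↔meaning↔use structure explicit:

    #include <string>
    #include <vector>

    // the C-Style user story rephrased along the execution<->meaning<->use structure
    struct user_story
    {
        std::string role;                        // who executes, and the namespace in which the story makes sense
        std::vector< std::string > capabilities; // product capabilities the action deploys (the execution side)
        std::string action;                      // what the actor does with those capabilities
        std::string meaning;                     // what the action says in the user-level vocabulary
    };

    user_story trader_story = { "trader in an investment firm"
                              , { "request trading data", "filter by portfolio", "compute value at risk" }
                              , "monitor intraday portfolio risk"
                              , "portfolio risk is continuously assessed" };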

Therefore, we can analyse from the semiotic point of view not only the code or software design, but also the organisational processes in the software team, in the communication with the customer, and in the business analysis on the customer’s side: expressing something in a certain organisational domain has its action counterpart that deploys a bunch of capabilities making it possible. E.g. the complementary side of the ability to say “Portfolio risk is continuously assessed” is the risk specialist’s action of updating the trading data and running the necessary computations, which is based on the vocabulary of capabilities of trading data interactions (request, filter, etc) and computations (e.g. value at risk, greeks, etc). The ability to say “Plan next sprint” is an action of practical deployment of the vocabulary of the “product owner refining the product backlog”, “engineers doing planning poker”, etc.

This execution↔meaning↔use may sound trivial in the case of well-known practices. What is the use of looking into the ability to “walk for 100m” as the capability deployment action of “making a step with one’s left foot” and “making a step with one’s right foot”? Let us look at two typical decision-making scenarios:

  • The room is full of subject-matter specialists. They expertly speak in the domain language about a specific need in the organisation (e.g. a requirement for the software product they commission). However, exactly because they are (hopefully) experts wielding the domain language so well, the conversation either wildly veers in all possible directions or stays on point, but forks into endless what-ifs. In other words, it falls into the expensive trap of the speculation bias.
  • To be productive, such a meeting needs to be moderated in two ways. As a technicality, constantly asking the question “What are we trying to do here?” helps stay focused on the point. Anyone who has been in such meetings knows how hard and tedious it may be. The more creative part, though, is to spot every time the discussion moves into speculative forking, which can be endless, but most importantly indicates where speaking should give way to doing; the question to solve becomes: “What can we do that would resolve or reduce the number of the speculative variants and assumptions?” The answer would come in the form of an experiment, test, prototype, etc. that should reduce the number of possibilities.
  • Spot speculation, once it starts, and turn the conversation from exploring possibilities to devising a decisive experiment, meaning: move from saying to doing. Once the experiment is complete, reconvene and move back to the forking point of the discussion. It is important that, as often as possible, the experiment should not be a feasibility study, because the latter simply moves the forking from speculation into action, which is even more expensive. Speculation and feasibility studies are not useless, they are just very expensive. Instead, the experiment should reduce the number or complexity of the solutions under dispute.
  • The action or implementation bias is just the opposite: actions are taken without expressing them in the user’s language, in the tacit conviction that the actors “understand” the purpose of what they are doing. The result is wildly different from what the user actually wanted, or the job is perennially “almost done”. The moderator needs to continuously spot this kind of situation, making everyone complete the statement: the purpose of my actions is to eventually allow the user to be able to say such-and-such.

I call this continuous shaping of decision-making as an alternation between saying and doing “performative negotiation”: carefully structured negotiating in the meeting, in the group, or with the customer, always keeping in mind what we are trying to say and to do. If the speaking becomes fuzzy or the doing aspect is not clearly articulated, the performative step (an experiment, a test, a prototype, a missing piece of work that would bring back clarity) needs to be identified. If the action loses the view of the usage scenario, it is time to speak.

Performative negotiation has the now-familiar two-sided semiotic structure of execution vs meaning. It goes beyond software technicalities and really applies to any business interaction (leaving politics, ambitions, and emotional aspects aside for now). It does not impose any specific ways of doing things. Instead, it is a tool to tell whether a proposed solution makes sense or whether a demanded requirement can be translated into action.

(It is very hard to maintain the effort of continuous performative negotiation. It requires a certain endorsement in the organisation or team, otherwise people at all levels may see it as intrusive, threatening, confrontational, or taxing, similar to the Five-Whys analysis. However, after a short time, the key stakeholders start seeing the direct benefits and improvements: the meetings are more organised and efficient; there are fewer empty promises and more material benefits delivered on time, etc., all of this coming at the price of accepting the new form of interaction, which they perhaps may find quirky. My own planning conversations were called by friendly stakeholders, hopefully as a joke, “interrogations”. Although I think a better word would be “interviews”, with a degree of persistence similar to the Five-Whys analysis.)

Seeing not only the software design, but all aspects of an organisation as this semiotic structure is closely related to Pattern Languages. Christopher Alexander [Alexander 1977] introduced a Pattern Language as a vocabulary (e.g. in architecture: rooms, windows, latches) plus a syntax (how the vocabulary items can be combined into “sentences” and other semantic entities that become meaningful as they solve a given problem). This and other pattern language definitions rephrase the execution↔meaning↔use, or capability deployment, relationship I tried to describe above. However, in software engineering the emphasis has been on the generative aspect of pattern languages: spot a recurring situation, represent it as a bunch of forces and capabilities, formulate a resolution pattern that balances the forces well, give it a good name in a corresponding pattern language, and reuse it. It is totally the right thing to do (as long as those patterns are not perceived as cookbook recipes).

The emphasis of continuous performative negotiation (and continuous software design) is the same as for patterns, except that its goal is not just resolving recurring situations, but constantly resolving the forces, no matter whether the solution is going to be reusable or not. It addresses the fact that the nature of many problems to solve in an organisation producing value also has the same semiotic structure: some problems are about building organisational capabilities (tools, expertise acquisition, knowledge packaging, etc) and others are about capability deployment (making products, providing concrete services, etc). These problem spaces have the same structure of a vocabulary A (capabilities, e.g. investment specialists) that is being deployed to achieve a vocabulary B (e.g. a range of financial advice). A capability is an ability to repeatably accomplish something in various circumstances, and therefore capabilities and their basic use may very well be expressed by boiler-plate patterns. On the other hand, deployment may be extremely specific, and therefore the forces may need to be resolved differently every time. Thus, while learning established pattern languages may be a great thing, ultimately it requires continuous performative negotiation as a semantic/pragmatic analysis of the forces.

The (good) books on pattern languages and agile methodologies often consist of multiple volumes of hundreds of pages each: the closer we get to the business of producing final products, the larger the proportion of capability deployment in the projects, as opposed to capability building, and therefore the more specific the problems that need to be solved, turning the agile textbooks into colossal nomenclatures of what is better practice and what is not. They tend to constantly refer back to the principles of agile and the need to efficiently produce value; however, I have rarely, if ever, seen an emphasis on the immediate underlying semiotic execution↔meaning↔use structure of most, if not all, of those patterns and practices. If, through performative negotiation, any practice, decision, or solution is distilled as an action of deployment of a capability language A into the target language B, and its value is seen in the crisp meaningful expression in the target language, it gives a concise conceptual tool that allows one to generate practical solutions rather than just refer to models or experiments.

Commonly, one hears from analysts, management, or programmers that until one builds a model to solve a design, architectural, or process problem, the problem has not been definitively addressed yet. On the other hand, agilists may often say that until a real-life experiment is set up, the justification of the proposed solution or improvement is not complete. Both modelling and experiment are yardsticks that certainly should be kept at the back of one’s mind and used. However, I have tried to show above that both approaches are essentially limited and leave in a blind spot probably the most essential property of software development: its linguistic nature. Therefore, I would say, one needs to consciously apply meaning-use analysis (execution↔meaning↔use) at every scale and in every aspect of software development to assess, design, and organize.

It has been common to offer “software metaphors”: “try to think of software development as …” to understand its nature through similarities, with software “as architecture” (making blueprints, building, etc.) or “as gardening” (grooming, pruning, evolving) being the most well-known. On one hand, the semiotic approach is not a model. It does not say: “software development is accurately modelled by language structures”; semiotic systems cannot be reduced to symbolic systems, and models are the latter. On the other hand, the semiotic approach is not a metaphor either. It does not say: “try to think of software development as a language activity”. It says: making a software product, at all its levels, is a language activity at its core and needs to be analysed, driven, and executed as such.

References

[Adolph et al] Steve Adolph, Paul Bramble, Alistair Cockburn, Andy Pols, What Is a Quality Use Case?

[Alexander 1977] Christopher Alexander, A Pattern Language: Towns, Buildings, Construction

[Austin] J. L. Austin, How to Do Things with Words

[Brandom] Robert Brandom, Between Saying and Doing: Towards an Analytic Pragmatism

[Deemer, Benefield] Pete Deemer & Gabrielle Benefield, Scrum Primer; cit. from [Larman, Vodde], p.306

[Larman, Vodde]* Craig Larman, Bas Vodde, Scaling Lean & Agile Development

[Larman, Vodde]** Craig Larman, Bas Vodde, Practices for Scaling Lean & Agile Development
