meaningful software: metaphor and software development
* * *
Much has been said about metaphors in software development, but I want to say more. My main point is that metaphors are a powerful but also dangerous thinking aid in software design. I will briefly outline what metaphor is and give some examples of software metaphors. Next, I will explore the strengths and risks associated with metaphorical thinking, give examples of possible misuses of software metaphors, their consequences, and ways to mitigate them. Finally, I’ll argue that software design is inherently rooted in language and, based on that, will try to establish a more specific role for metaphor in software engineering.
garden with bugs and weeds. software metaphors
Very crudely put, metaphor is a way to speak about one thing in terms of another. It is one of the most powerful and complex figures of speech.
When American politicians such as JFK, Reagan, Obama, or Mitt Romney call the U.S. the “shining city on a hill” in their speeches, even outside the historical connotations of the phrase, it immediately evokes and projects onto one’s image of the U.S. — rightly or wrongly — the political aspirations of strength, plenitude, organization, culture, and moral high ground. This image sets the terms in which one could choose to think about the U.S. (As an aside, “shining city on a hill” is a metaphor within a metaphor: the “shining city” is not just reflecting rays of the sun, but metaphorically emitting the light of its virtues like a beacon.)
That’s what metaphor is in software development: it is about explaining or illustrating concepts by comparing them through something relatable.
software development as gardening
For example, it is common to compare software development with gardening: you lay out a garden (product design); prepare soil and plant the seeds (implementation); control the weeds and pests (debugging and maintenance); prune and replant to shape the garden (maintenance and next iterations of design and implementation). You cannot just abandon your garden even for a few months without it starting to fall apart.
So, speaking of software as gardening evokes an image of constant hands-on care about your software code base, which is a reaction desirable both technically and emotionally.
software bugs
As another trivial example, the software bug is a metaphor: there are no actual insects lurking in the code. The image suggests hidden but harmful pests that need to be hunted down and squashed. However, no-one imagines a cockroach when hearing about a software bug anymore. ‘Software bug’ is what is called a dead metaphor. (‘Dead metaphor’ is a metaphor in itself, since a metaphor has never been alive or dead; so, while we are at it, ‘dead metaphor’ is a dead metaphor in itself.)
When you hear a news anchor talking about ‘bugs’ in political or business governance, she is not likening governance problems to a pest infestation (there are other metaphors for that, with different meanings, like ‘white-anting’), but metaphorically comparing them to software bugs, and everyone immediately understands what she means.
software weeds
I came up with this one. If I tell you that there is a weed in a piece of code, what do you think I mean?
Once, I wrote a simple plain-old-data class that was used for some parameter configuration. My class had a boolean member, which I set to false by default. A few years later, my class had been used in a bunch of software components across a few products in a couple of companies. However, it became clear that false was an unfortunate default: 90% of the time, true was the required setting, and the users of the class had to remember to set the flag to true by hand. More often than not, they would forget, their code would misbehave, and they had to debug and fix the problem. So, true was a much better default. However, to switch to the better default, I had to spend a week making the changes everywhere the class was used, re-testing, re-releasing, etc., since its usage had proliferated across a number of components, repositories, and products. Everyone involved was very uneasy about it, and we considered leaving things as they were to avoid failures in production.
Had I invested fifteen minutes upfront in a more thoughtful design decision, none of this would have happened; but by then the weed had spread.
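To make the pattern concrete, here is a minimal sketch of what such a weed looks like; the names and the domain are hypothetical, not the original code:

```cpp
#include <iostream>

// a hypothetical reconstruction of a 'weed': a plain-old-data
// configuration class with an unfortunate default
struct filter_config
{
    bool normalize{ false }; // the weed: 90% of call sites actually need true
    double radius{ 1.0 };
};

void apply_filter( const filter_config& config )
{
    if( !config.normalize ) { std::cerr << "filter: warning: output not normalized" << std::endl; }
    // ... filtering logic ...
}

int main()
{
    filter_config config; // the user forgets to set config.normalize = true
    apply_filter( config ); // compiles and runs, but silently misbehaves
    return 0;
}
```

Flipping the default is a one-character change in the struct; the expensive part is that, years later, every component that silently relied on the old default has to be found, re-tested, and re-released.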
So, the software weed usually is not a bug or a functional defect. It starts very small and thus easily gets overlooked as insignificant (since we have so many more important things to do), but once its usage spreads throughout the code, getting rid of it becomes fiendishly fiddly, time-consuming, and error-prone.
It is common in software development for such minor design decisions to be systematically ignored in a team, which severely cripples the code base: it becomes hard to use, hard to change, and hard to grow — whatever weeds do to your garden, software weeds do to your code.
why software metaphors?
Metaphor is a powerful thinking aid. Just like charts help to make sense of a large amount of numerical data at a glance, metaphors help to grasp complex properties and relationships: bugs are small, they hide, they are nasty, they cause damage, they spread — we can express and grasp all of it at once.
Software development is richer in metaphors than other industries because programming and code operate almost entirely through language, which is very different from the industries preoccupied with physical objects and processes. Language is permeated by metaphors, which are not just optional extras or embellishments, but an integral part of language and, by extension, of software development.
I will try to demonstrate it in more detail in the last section.
software bugs are not brown. where does the metaphor break?
Software bugs are not brown, they do not have six legs, and they cannot fly. It is easy to draw the line between where the ‘bug’ metaphor holds and where it breaks. (And while we are at it, ‘software’ is not ‘soft’, either.)
However, what’s called metaphor validity is not always so obvious. Say, it is clear that ‘the company as a family’ mostly is a [corny] stretch, at least in the West.
This metaphor may help to nurture the need for mutual care and support in a business that otherwise would be seen as cold, purely transactional, and driven by self-interest for everyone involved. However, at which point does it become too much? When does the supposed belonging become overblown and cause real harm?
One may unconsciously cross the metaphor’s validity boundary and continue drawing metaphor-induced conclusions: if the company is a family, then employees are, well, children — the metaphor does not leave much choice especially if it is deployed unconsciously. Who are the adults then? Whose contribution matters and whose does not? This line of thought is incredibly damaging for the individuals and toxic for the business.
Even worse, when metaphorical thinking is done unconsciously, one does metaphorical hopping whenever it suits the goal one would like to achieve: the employees are children, therefore — just like children — they do not have agency in the course the company is taking. But wait, if they are children, they need to be taken care of. That’s inconvenient. Hence, we switch to the metaphor of ‘software engineering as competition’, where engineers are athletes who perform or fail on their own and the managers are the judges who decide who is the winner and who is the loser.
When the policy-makers hop between those two metaphors (employees as children and employees as athletes) at their convenience, the company ends up with demotivated staff left to their own devices and unwilling to come up with initiatives. This happens much more often than one may want to think. The leader may not feel it or care; however — even apart from the human element — if you take agency and team bonds away from people, money, total surveillance, and fear remain among the very few levers left to enforce their productivity.
As I try to show in the following sections, when it comes to software development, the unconscious use of metaphors, the crossing of metaphor validity boundaries, and metaphor hopping not just harm morale and motivation, but cause structural damage to the business and its products.
Since metaphors are dangerous and informal, why not get rid of them altogether and use precise direct language instead? We can’t.
Firstly, as the cognitive linguist and philosopher George Lakoff [Lakoff, Johnson] demonstrates, metaphor permeates the ways humans think and act and the way human knowledge is organized.
Secondly, metaphors make communication expressive and concise. Losing those qualities would inevitably lead to the loss of technical focus, which is too high a price to pay.
Thirdly, many metaphors appeal to strong mental imagery and emotions and thus they are inspirational by their nature. It is impossible to eliminate metaphors, but if one tried, it would hurt team cohesion and morale, business vision, communication with clients, etc.
So, it is important to be aware of where metaphors turn into wishful thinking and empty cheer-leading and stop contributing to good technical fundamentals.
dangerous software metaphors
Metaphors are dangerous because they are powerful.
Software metaphors break:
- When they are taken too far, crossing their validity boundary: software bugs are not brown
- When unwarranted metaphor hopping occurs: engineers as children (family metaphor) vs engineers as athletes (competition metaphor)
- When metaphors are used unconsciously (e.g. ‘hiring as dating’, see below): the lack of awareness makes boundary-crossing and metaphor-hopping too easy. Also, metaphorical thinking and decision-making become a substitute for a proper skill or best practice
Below are some common examples of how it might happen in software teams and projects. Many of those metaphors relate not to software design, but to processes, communication, and teamwork; however, they have direct consequences for team productivity and software design quality.
software as a material thing/machine
what is it?
This metaphor was almost inevitable in the early years of industrial software, which was commonly seen as similar to the products of other industries: from the beginning of the industrial age, production and much of its output had been machines, material processes, and physical objects.
Many software metaphors cluster around ‘software as a material thing or a machine’: ‘software system as architecture’, ‘software development as a production line’, ‘software as components’, etc.
This made it possible to transfer and project the vocabulary, approaches, and lines of thought from the well-established industries onto the then-novel software processes.
Software was new once, and applying the known to the unknown was natural. Many excellent best practices in software development come almost without modification from Japanese car manufacturing, e.g. from the Toyota Way.
when does it cross the line?
In one of the project kickoff meetings sometime in the mid-2010s, we were discussing how we would design our system, which included lots of mechanics, electronics, and software. Our hardware team lead dropped: “Well, in hardware, we just break down the system into components, design them, and put them together. I assume it’s the same with software, isn’t it?”
It isn’t. Software — and software development by extension — is almost by definition language-bound. The way language operates quickly becomes — or rather is from the outset — very dissimilar from the physical world and the objects in it. This is the boundary of the ‘software as material thing’ metaphor. The last section will deal with why and how this happens in more detail.
Violating the boundary of this metaphor has a long-lasting negative impact on software design quality and the development cycle: it leads to design methods and solutions that look sensible (precisely because making sense is the purpose and the outcome of metaphorical thinking), and so they proliferate throughout the industry and the education system and spawn multi-million-dollar cottage industries that dominate the software product landscape, affecting products, timelines, and jobs, just to disappear without much trace a few years later.
Object-oriented programming probably is the best example of the ‘software as a material thing’ metaphor and its limitations. Its very name is literally (see what I’ve done here?) a variant of the ‘material’ metaphor. The objects, which are representatives or instances of classes, have properties and interact with each other. It looked naturally suited to modelling the real world filled with objects of all sorts. And indeed it worked (and still does).
OOP came to dominate software development in the 1990s and noughties. It was hard to get a software job without OOP on your resume. New OOP-heavy languages like Java appeared and became widespread.
The question was how to take vague and squishy client/user requirements, identify entities and relationships in them, and turn those into a coded object-oriented model. Since object-oriented design seemed to represent the real world so well, the next, more human-friendly, language was designed and heavily promoted as an intermediary between clients and programmers: UML, the Unified Modelling Language, an internationally standardised all-you-can-think-of way to design.
This spawned a flurry of cottage industries of UML training, UML certification programs, expensive graphic UML design tools, UML-to-code converters, UML consultants, etc.
By now, it is clear that designing software in UML is clunky, inflexible, restrictive, and slow. Moreover, there is no way to express many ‘modelling’ concepts in UML, because of the limits of the object-oriented model (more on it below).
UML is an example of how the metaphor validity boundary was entirely missed — without even being noticed — by a large and powerful part of the software industry, resulting in multi-billion-dollar losses in productivity and engineering education.
But don’t worry: UML is on its expensive deathbed, and now we have formal ontologies, a sprawling enterprise with international standardisation bodies, university courses, consultants, and job descriptions — all ignoring the fact that software is not a ‘material’ thing.
how to guard against it?
We are not trying to guard against the metaphor itself since it is a powerful thinking aid.
For example, one way to see the whole of C++ is as a representation of your computer: its memory, peripherals, CPU registers, cache, etc. C++ is the only high-abstraction language that maintains this connection to the lowest of low-level stuff. (Removing this connection, e.g. getting rid of pointers and introducing garbage collection instead, was one of the motivations behind Java.)
From this angle, always ‘feeling’ your C++ program as doing something ‘physical’ to your computer, i.e. applying the ‘materiality’ metaphor, leads to better and safer code, and to its author feeling in the driver’s seat.
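For a taste of that ‘physical’ feel, here is a minimal bare-metal-style sketch; the register address is made up for illustration, but the pattern is standard in embedded C++:

```cpp
#include <cstdint>

// a hypothetical memory-mapped device register; the address is invented
// for illustration
constexpr std::uintptr_t status_register_address = 0x40021000;

void wait_until_device_ready()
{
    // 'volatile' tells the compiler that every read touches real hardware
    // and must not be optimised away: the program is doing something
    // 'physical' to the machine, and the code says so explicitly
    volatile std::uint32_t* status = reinterpret_cast< volatile std::uint32_t* >( status_register_address );
    while( ( *status & 0x1 ) == 0 ) {} // spin until the device raises its ready bit
}
```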
However, we are trying to guard against unconscious metaphor use, crossing the metaphor validity boundary, and metaphor-hopping. Unfortunately, there is no quick recipe for this one: designing software systems as a sort of material thing is entrenched and widespread.
I try to give some brief answers in the last section of this chapter. I elaborate more on it in [Vlaskine].
software development as production line
what is it?
This metaphor is closely related to seeing software as a physical thing. Industrial software development was new once and it was easier to think about the novel thing in terms of the well-known ones: the industrial era’s staged and streamlined production.
There are good reasons to use the best practices of the industrial methods in software. The ideas of Lean Development originated on physical production lines. The Kanban method, which is a simple and excellent organising fixture in software even on its own, is named after an actual wooden board with chalk notes on it (‘kanban’: a board to look at).
when does it break?
Firstly, software is not a physical thing or machine, and unlike the uniform specialized skills and defined operations required on material assembly lines, there is vast variation in software engineers’ skills, roles, and responsibilities.
The danger here is taking the industrial production metaphor too far and ending up with a battery farm-style company with the engineers pigeon-holed into years of doing black-box testing or mundane GUI widgets. Many such companies exist and are successful — at the price of colossal human capital waste, which they may be comfortable with paying.
It is not a good idea to ignore the production line metaphor, throwing away a wealth of highly developed and time-tested best practices. So, for better results, it pays to be selective with this metaphor. For instance, ‘production line’ has a strong connotation of ‘uniformity’. As I just said, software engineers and the nature, quality, and amount of their output vary vastly. Is ‘uniformity’ a breach of the metaphor boundary? Any experienced engineer knows the importance of design and code uniformity (in a specific and carefully crafted software sense). Every departure from uniformity at any scale carries a high implementation and maintenance price tag. ‘Almost the same’ is a curse in software: almost the same class methods, almost the same endianness in protocol packets, etc., since every tiny difference and irregularity has to be serviced. Non-uniformity also makes automation (e.g. test automation) and reuse more difficult. Since non-uniformity is so expensive, it needs to be justified in software designs rather than seen as a sign of creativity or ignored altogether.
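A hypothetical micro-example of the ‘almost the same’ curse next to its uniform counterpart:

```cpp
namespace almost_the_same // hypothetical: each irregularity must be remembered
{                         // and serviced at every call site and in every test
    double get_width();                         // returns metres
    double height_in_mm();                      // returns millimetres; different units and naming
    void resize( double height, double width ); // argument order reversed
}

namespace uniform // the same functionality made regular: trivially
{                 // scriptable, testable, and reusable
    double width();  // metres
    double height(); // metres
    void resize( double width, double height );
}
```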
Lastly, this metaphor encourages assessing performance and productivity as one would assess a physical assembly line’s output, where ‘tangibility’ is conflated with ‘materiality’: e.g. counting the number of output units. That’s how this metaphor gave birth to lines of code (LOC), one of the most absurd metrics of project cost and complexity. This metric was used in all seriousness for decades, starting in the 1960s: engineers were actually required to report their daily or weekly LOC counts. Thankfully, it is a thing of the past.
how to guard against it?
It is expensive and unwise to just drop the production line metaphor. On the other hand, every time we adopt an industrial concept like ‘uniformity’, we need to rethink what it means in terms of software design and trial each industrial practice to see whether it works or not. So, the danger of this metaphor is that we need to stay near its validity boundary all the time and always know on which side of it we are.
software implementation as wrapping
what is it?
This is a pretty specific, yet very influential metaphor. ‘Wrapping’ is a shortcut to the ‘fundamental theorem of software engineering’ (‘all problems in computer science can be solved by another level of indirection’). It is a metaphor: you cannot wrap software, not in paper, not in a piece of cloth.
when does it break?
When ‘wrapping’ is (unconsciously) taken as full encapsulation, no access may be left to the ‘wrapped’ functionality. Firstly, the wrapped functionality may become a private implementation detail not visible from outside. Secondly, the naming and usage semantics of the wrapper interface may not be kept consistent with the wrapped functionality, thus obfuscating the connection between the API and the internals. Often, both happen. The code becomes opaque and much harder to test and debug.
how to guard against it?
Ideally, by making the wrapper semantically transparent and orthogonal to the wrapped functionality, as, for example, Python decorators or telecom protocols do.
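For illustration, here is a minimal C++ sketch of such a decorator-style wrapper, assuming the wrapped callable returns a non-void result: it adds timing, but leaves the wrapped callable’s arguments, return value, and semantics untouched, so the connection between the interface and the internals stays visible.

```cpp
#include <chrono>
#include <iostream>
#include <utility>

// a semantically transparent wrapper: adds timing without changing
// the wrapped callable's interface or behaviour
template < typename F >
auto timed( F f )
{
    return [f]( auto&&... args )
    {
        auto start = std::chrono::steady_clock::now();
        auto result = f( std::forward< decltype( args ) >( args )... );
        auto elapsed = std::chrono::steady_clock::now() - start;
        std::cerr << "elapsed: " << std::chrono::duration_cast< std::chrono::microseconds >( elapsed ).count() << "us" << std::endl;
        return result;
    };
}

int main()
{
    auto add = []( int a, int b ) { return a + b; };
    auto timed_add = timed( add ); // same arguments, same result as add
    std::cout << timed_add( 2, 3 ) << std::endl; // prints 5; timing goes to stderr
    return 0;
}
```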
management as leadership
what is it?
This metaphor has been used across industries for decades. Leadership is vision- and purpose-driven, geared toward novelty, initiative, and strategic thinking, and assumes the ability to motivate, influence, and lead others — who follow. Compared to it, management is down-to-earth: it is about coordination, communication, structure, and due diligence. Many executive and even mildly senior roles have been rebranded from management to leadership positions.
when does it cross the line?
‘Management’ and ‘leadership’ are really two different job descriptions. The recent management-to-leadership rebranding is a metaphor boundary violation: half of the time, what is required is coordination, structure, attention to detail, and due diligence — none of it being a strong or necessary connotation of ‘leadership’.
When one hires a leader, and especially when a leader hires leaders who hire leaders, it makes the company structure morph into a leadership hierarchy that lacks coordination throughout.
coordination neglect
The first way to breach the metaphor validity boundary is to neglect the ‘management’ connotations in ‘leadership’.
It is particularly bad in software development. Many software specialists aspire to leading, defining direction, and doing their own things their way rather than to meticulous coordination or being responsible for others’ work.
Moreover, leadership requires drive, ingenuity, passion, charisma, and technical curiosity. All those qualities are emotionally charged, attractive, supposedly ‘naturally’ occurring as ‘a talent’, and hard to quantify.
Management requires discipline, structure, method, and care — qualities that are acquired through education, experience, and a somewhat obsessive-compulsive attitude to structure and quality. They may not look so exciting on a job description.
Given that, and also the intangible nature of software, the lack of structure, focus, and quality way too often ends up in a massive blind spot of those in positions of leadership.
The team members may come up with observations about messy code, lack of testing, broken backward compatibility, inconvenient tools, work preferences, etc. Those fall under coordination and due diligence and systematically get ignored if the leader ‘leads’ but neglects ‘managing’.
This lack of focus and capability-building leads to products delayed by years or never released, and often to the business’s demise.
Last but not least, when ‘leading’ equals ‘looking forward’, it is terrible for the business and morale if the leader does not regularly check in with her/his team. She/he may transactionally work with those she/he needs right now, but the other team members may not hear from anyone and be on their own literally for months. As weird and damaging as it is, it is pretty common. The metaphor of the ‘absent father’ is suitable here, and the ‘military’ metaphor might be the remedy (see below).
subordination neglect
The second common way the metaphor validity boundary is violated is implying that the ‘leader’ is in some sense ‘higher up’ than those who follow her/him: the leader leads, the rest follow. As a result, it is really common that the leader does not have a ‘coordinator’ ‘above’ her/him, since all everyone sees is the leader’s back: her/his initiatives and bright solutions will die in the dark if the ‘coordinator’ above does not exist or is not listening (because the ‘coordination’ role was branded and filled as ‘leadership’).
capability neglect
A number of metaphors mentioned in this section seem to refer to organisational and social aspects of software engineering and to have little to do with software design per se. Yet ‘management as leadership’ produces not just ‘soft’ consequences like weakened team cohesion or communication, which impact design only indirectly; it hurts design decisions directly.
The drive to identify direction and achieve certain goals is in the definition of leadership: leadership without direction is a contradiction in terms, sounding almost like a Zen riddle (and yet, unfortunately, it happens commonly).
Goal-driven leadership prioritises achieving the goals. If unchecked, it discards the ‘managing’ and ‘coordination’ connotations. It heavily skews the software design decisions toward whatever is expedient for the ‘product’ or ‘what the client wants’ (supposedly, and at the given moment), abandoning any attempt to account for technical consequences, build capability, or manage technical debt.
It comes in a few flavours, which amplify each other:
- The technical leaders (CTOs, team leads, or senior engineers) design hastily to satisfy project needs without attention to reuse across projects. This leads to overfit and duplicated solutions with, correspondingly, duplicated defects, inconvenient usage, and high maintenance costs.
- The business development and marketing leaders refuse to allocate resources to anything that cannot be put on a project’s tab (more often than not, there are not even any accounts for maintenance and code base improvement). This leads to engineers burying or abandoning their capability-building efforts, as well as to hidden maintenance costs that are much higher than proper upfront development would have been.
- Lastly, the more junior engineers naturally are not well-situated to realise the technical consequences of their design decisions: they may not have the bigger picture yet, may not have sufficient design skills to foresee the impact of their decisions, and may be less inclined to make that effort due to their youthful temperament, incurring painful and expensive cleanup in the future.
(The young engineers grow into the future leaders. Growing professionally in the atmosphere of ‘management as leadership’, they may not get exposed to the methods and best practices of planning, coordination, and dealing with design concerns, and so they perpetuate the problem.)
how to guard against it?
When thinking of a software project, the ‘team as a platoon’ metaphor may help here (see the ‘military’ metaphor below). It goes without saying that it is very counterproductive to use this analogy for drawing conclusions about what the discipline or relationships among the engineers should be.
However, it is a good thinking aid to see the ‘leadership’ and ‘management’ in the team as the roles of the platoon lieutenant and sergeant:
The lieutenant has access to the bigger picture, defines his platoon’s goals, and sets direction and strategy. The sergeant looks after order execution, coordination, and the soldiers’ welfare.
Does your team have a ‘lieutenant’ (the leader that knows where things are going and decides in which direction to move)?
Does your team have a ‘sergeant’ (someone who takes care of planning, structure, and having things done)?
Or (very rarely) is there a single person who does both?
In many dozens of software teams I have seen (as it happens, in most of them), the team/project lead may be a very good ‘lieutenant’, but there is no-one at all in the ‘sergeant’ role: no-one is really looking back into how things are done in the team. The lead looks forward. The team members also look forward. So, although the team seems to be marching in the same direction, all the team members see is the leader’s back. And by the way, without continuous coordination and staying on the same page, the leader’s ‘forward’ and each engineer’s ‘forward’ are almost guaranteed to be different directions.
So, who is the sergeant in your platoon?
software project as a military operation
This metaphor may not be used explicitly, but companies, product/project teams, or their leaders may see themselves on a mission that requires group work with a strong sense of purpose, difficult goals, discipline, and sometimes self-sacrifice.
Peter Drucker, the author of many books on the principles of management, used to say: “The Army trains and develops more leaders than all other institutions together — and with a lower casualty rate.” [Cohen] So, the positive and negative connotations of the ‘management as leadership’ metaphor and the military experience are closely related.
Discipline is another strong connotation of the military metaphor, which helps to herd cats in the diverse groups of software engineers.
And so is the duty of care: the Western military model helps us to think of the leadership vs management in the terms of the roles of lieutenant and sergeant.
Having each other’s back in the team, team work as opposed to individualist competition, clear definition of who is in charge, structured planning, strong emphasis on communication, feeling of purpose and serving others (and each other) — all these qualities and associated techniques and practices are worth borrowing.
For example, software businesses and projects, large and small, are plagued by atrociously bad communication: it hurts productivity and quality, misses deadlines by miles, breaks expectations, delivers wrong features, leaves the ‘leadership’ uninformed, and makes engineers feel abandoned and in the dark. There is much to learn in this sense from the military and, while we are at it, from hospital operations and their best practices.
when does it break?
It breaks very quickly: leadership turns into dictatorship, discipline into vacuous and demotivated obedience, duty of care into micromanagement, communication into paper-pushing, etc.
Also, seeing a project as a military campaign easily leads to just ‘taking that hill’: it all turns into forever living in tents instead of building a town, and the software project as military campaign turns into one of the scenarios Yourdon writes about in his classic Death March.
how to guard against it?
The military (and hospital) metaphor is highly valuable:
As mentioned before, two of the many major deficiencies of software teams are:
- Lack of lieutenant-sergeant or doctor-nurse-like relationships with their well-established, time-proven practices of communication, leadership, coordination, initiative, and personal growth.
- Inter-related with the former, the lack of a person in the driver’s seat: the extremely pervasive and absurd situation when a number of engineers working on the same project have zero coordination and very little communication for weeks or months, which has nothing to do with the supposed ‘self-organizing software teams’ — one of the romantic notions of the early agile movement.
Thus, as for many other metaphors, it is counter-productive to just abandon this one. Unfortunately, because the military metaphor is highly charged with the notions of power and control, it is very hard to guard its validity boundary.
leadership as popularity
what is it?
A company or a team needs leaders who come up with the big picture, goals, direction, and inspiration at various scales and in various dimensions. Business leaders carry the company to its market goals. Tech leaders aspire to ingenuity, quality, and professional excellence.
By definition, the leaders have to be popular in some sense: the population of, say, engineers has to follow the leaders for the latter to be leaders at all.
Nevertheless, ‘leadership as popularity’ is not literal: it is a metaphor when it is about ‘popularity’ in a group, on social media, etc. Also, the transitivity of metaphor hopping makes its indirectness doubly dangerous:
Both metaphors, ‘management as leadership’ and ‘leadership as popularity’, make sense to an extent. However, the (likely unconscious) metaphor-hopping between them produces quite an absurd conclusion: ‘management as popularity’.
when does it cross the line?
‘Leadership as popularity’ crosses its validity boundary when its equation is inverted, either unconsciously or deliberately, becoming ‘popularity as leadership’.
It may sound like a stretch: after all, ‘a nice guy’ obviously is not a sufficient qualification for a leader.
But that’s the problem of the unconscious metaphorical thinking: ‘leadership as popularity’ should be a tentative ‘leadership as if it were popularity’, but it takes a shortcut to ‘leadership is popularity’ and hence ‘popularity is leadership’.
‘Popularity as leadership’ entails a team structured by preferential attachment rather than by professional goals, and even if one is aware of it, it is hard to resist.
how to guard against it?
It actually is not problematic, but rather desirable, for all team members to be leaders in various ways. Also, those are great teams where everyone is popular for her or his skills and just for being a good guy, so the hierarchy of more popular vs less popular does not even form.
The imbalance that needs fixing is the same as for the ‘management as leadership’ metaphor: for each leader, one should ask: who is his sergeant? Who looks not just in the direction where the leader leads, but inward to maintain the due diligence, coordination, and structured communication? Sometimes, albeit rarely, it is the leader her/himself, but if not, then which person plays this role?
These roles don’t have to be fixed: this ‘forward’/‘inward’-looking pairing can be predefined (e.g. product owner vs software team lead) or happen ad hoc. For example, face-to-face code reviews have the aspect of the user-implementer pairing, but they also have the aspect of the ‘leader’ (reviewer) vs ‘sergeant’ (implementer) pairing.
Looking at it from a different angle, the leader has a strong drive: wishes, aspirations, or ideas. Then, her/his sergeant-like counterpart’s job is to accommodate those wishes. In any specific interaction, there can be a leader — or even a switching of the roles in the process.
For example, in a learning situation, even an engineering intern can take the leading role in the sense that the learner may be strongly driven by the wish to learn, say, a specific skill. Then, their counterpart’s role is to dish out the knowledge in a structured, expert way. Such active learning can be much more efficient than the other way around, when, for example, a senior engineer ‘fills in’ the student.
So, in each class of large and small situations — from company’s business development and broad tech lead to design discussions, code reviews, or learning moments — it is insufficient if one plays the leader’s role, but no-one plays the ‘sergeant’s’ role.
software development as research
what is it?
This metaphor is more common in R&D companies, e.g. startups, or companies working on novel products. They may treat their product as an unknown: something that may take a bunch of trials and failures, something ‘novel’ and therefore of an as yet unknown structure.
It does not have to be rocket science, though. Even in garden-variety businesses, many software engineers genuinely dislike giving estimates of how long a specific piece of work might take. Surely, some companies have the toxic culture of holding engineers to account for wrong time estimates. Even more often, those high up hate realistic time estimates and thus the engineers prefer to steer clear of the stressful interactions of this sort. But beyond all that, the engineers still tend to say that it is impossible to pin down how long a task might take because it is somehow an open-ended problem.
when does it break?
It breaks when novelty, open-ended problems, and experimental features of a product are lumped together, making it impossible to tell how long the project will take, whether it will succeed, and most importantly, where it is right now.
Many research-heavy startups and industry-funded university projects are plagued by this haze, distort the funding priorities, gaslight the investors, stagger through years of delays, and often run out of money and come to nothing.
how to guard against it?
Even in the most research-rich projects, it is entirely possible to separate by design the open-ended bits from the well-defined elements. This separation makes it easier and much more comfortable for researchers to focus on the actual problems rather than wrestle with data conversion, communication protocols, or databases — all the well-understood stuff. The well-understood stuff should end up in well-defined and well-designed components and repositories rather than be lumped into code hastily cobbled together by the researchers.
We reduce uncertainty by partitioning and isolating the open-ended bits. We also keep chipping off them anything that can help reduce the uncertainty and complexity of the problem: mundane features get cleanly implemented and pushed to generic repositories; the problem gets decomposed into parts; sensible constraints may reduce a theoretically difficult problem to a much simpler one that is sufficient for the project at hand, etc.
As for the time estimates, I find them a very meaningful metric, since they reflect the budgetary costs (in terms of engineers’ salaries). It is a simple metric, too (I find it simpler than e.g. the point system):
If it is hard to tell even roughly how long it would take to implement a software artefact (one month? one year? five years?), then the first deliverables will be not the artefact or its pieces, but uncertainty reduction: ‘3 months to 10 years’ is not a great estimate to base funding or budget decisions upon. What is it that we need to do this month to be able to reduce the uncertainty? What could be a decisive experiment, or concept proof, or system partitioning that would give us a tighter range? It is very rare that absolutely nothing can be done and that one month later we will still dwell in the same haze. I (e.g. as a venture capitalist) want a tighter estimate so that I can plan my investment, not just gamble on it.
Assume we have reduced the uncertainty to 4 components or functional areas:
- artificial general intelligence overlord module: 1–3 years
- shoe-string theory quantum supercomputer: 2–4 years
- communication middleware: 3–6 months
- GUI: 3–6 months
Our overall range, very crudely and pessimistically, becomes: 3.5–8 years, much narrower than the original 0.25–10 years.
Moreover, once we start designing, breaking down, and implementing the middleware and GUI:
- the uncertainty of those estimates will quickly contract
- more importantly, if we do a good design job, we will get out of it reusable components that we can utilise in other projects
If the uncertainty and risk still look uncomfortable, then in the next months a part of our effort should go into reducing the uncertainty and risks of our AGI and supercomputer modules. We will continue to chip away at the clear and certain bits and devise decisive experiments that target two questions: where are we right now? and how can we further reduce the risk and the uncertainty spread?
There is no need to give all the fine-grained time estimates upfront. We just choose, every month (well, every sprint), to attack the most uncertain, most risky, and most urgent areas.
Once it comes to the artefacts with estimates of a couple of months, it becomes much easier to devise, say, six-month milestones and to decompose them further. Lastly comes the scale of bite-size pieces: from a couple of hours to a couple of days (typically not longer than 3–4 days). 2 hours to 4 days sounds like a huge spread, but it is not a problem, since dozens or hundreds of such pieces will balance each other out, statistically summing up to a square-root uncertainty: e.g. 100 pieces of work with normally-distributed estimates of 2–4 days would give us a total estimate of 290–310 days rather than the atrocious-looking naive 200–400 days.
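A quick back-of-the-envelope check of that square-root effect; the sketch below treats the 2–4 day spread as a one-day uncertainty half-width around a 3-day mean and assumes the individual estimates are independent:

```cpp
#include <cmath>
#include <iostream>

int main()
{
    const double n = 100;        // number of bite-size pieces of work
    const double mean = 3;       // days: the middle of the 2-4 day estimate
    const double half_width = 1; // days: half of the 2-4 day spread

    // naive worst case: individual spreads add up linearly
    std::cout << "naive: " << n * ( mean - half_width ) << " to " << n * ( mean + half_width ) << " days" << std::endl; // 200 to 400

    // independent estimates: spreads add in quadrature, i.e. grow only as sqrt(n)
    double spread = std::sqrt( n ) * half_width;
    std::cout << "statistical: " << n * mean - spread << " to " << n * mean + spread << " days" << std::endl; // 290 to 310
    return 0;
}
```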
software development as competition
what is it?
Software engineers have vastly different sets of skills. Their level of education, competence, productivity, and motivation can vary wildly with very direct impact on their output. There are few other industries of such scale with the workforce of such diversity at roughly the same seniority levels.
So, commonly, the job ads ‘seek best of the best’, ‘cream of the crop’, and ‘top talent’. The employers may have very protracted multi-stage technical testing (see ‘hiring as exams’ below) and generally see the hiring process and the job itself as an athletic competition. The software engineers also feel strongly about their professional capabilities, so this mindset may look natural to them.
when does it break?
First, this metaphor is a conflation of athletic and capitalist competition.
The competition may be appropriate in the job market. However, the engineering ‘athletes’ are much more atomized than many other professions and thus each of them has quite low market power. Moreover, ‘hiring as competition’ (or ‘hiring as exams’, see below) actually is likely to make you lose the top percentile of the candidates.
When the competition metaphor makes its way into the company operation itself, it loses much of its ‘market’ flavour. What remains is the internal one-on-one competition of personal professional prowess instead of team work.
how to guard against it?
The team work and duty of care aspects of the ‘military’ metaphor are a good counter-balance to seeing ‘engineers as athletes’.
company as a family
what is it?
We won’t repeat what already was said above about the benefits and dangers of this metaphor.
This metaphor is extremely loaded with one’s personal experience with her or his family.
As was said, this metaphor is very prone to abuse through crossing metaphor boundaries and metaphor-hopping.
Also, it has very different implications in different cultures.
For example, in the cultures founded on Confucianism — Chinese, Korean, Japanese, Singaporean, and others — work relationships are very strongly organised around this metaphor. It would be little exaggeration to say that the whole of Confucianism is one big, elaborate, and explicit ‘group as a family’ metaphor. It has been strongly cultivated in the corporate culture, tech companies included, and in many ways has contributed to the cohesion, focus, and success of East Asian businesses in ways different from the West.
In the West, self-interest is central to the business goals and industrial relationships. The employer contractually buys the employee’s time and skills. While the company’s goals, the nature of the job, or the sense of belonging it gives may motivate the employee and contractually-stipulated compliance is expected, the employee nevertheless is expected to pursue their self-interest while the employer follows theirs.
In the East, group cohesion and working toward a common goal usually are put above self-interest. (It varies in the Western work culture: e.g. Scandinavia places team and societal cohesion very high, which originates from the highly original Scandinavian Lutheran communal ethics expressed in the 18th–19th centuries by such figures as N.F.S. Grundtvig.) This has resulted in highly successful methods such as the Toyota Way or Lean Development.
Another noticeable difference is product price-making. In (South-)East Asia, price-making is seen as a bunch of trade-offs, as the buyer is seen through a familial lens. In the West, price-making commonly is adversarial: the producer/seller and the buyer do not belong to the same group, and thus the seller is expected to charge as much as possible.
when does it break?
The family metaphor is not as cheesy as it may sound. It can be really powerful not only in team bonding, but in purely technical terms, as many excellent Toyota Way techniques and methods show.
There are many useful metaphors in this space. For example, ‘the leader or supervisor as an absent father’ aptly captures the issue of coordination neglect mentioned earlier. Similarly, ‘a teenager’s bedroom’ is all you need to know about a code base that is in dire need of ‘adults in the room’, i.e. ownership and maintenance.
However, this metaphor is very complex, especially in culturally mixed teams, and easily can lead to misunderstandings and broken expectations (people either feeling abandoned or their space invaded). It also can turn very toxic as in the example of employees as children above.
how to guard against it?
It is a very valuable, but also a very vague and emotionally loaded metaphor. So, there are no hard and fast solutions. In the West, a lot of it happens in the ‘company culture’ and ‘cultural fit’ space and unfortunately often degrades into the company’s idiosyncrasies, cheer-leading and power play in the teams.
Many of the useful developments come from the East-Asian cultures with strong emphasis on family. In the Toyota Way and Lean Development, the piecemeal problem-solving incorporates preserving and cultivating the elements of (somewhat) family-like ties and structures.
deliverables as school assignments
what is it?
When a young engineer gets a task, they may have no idea what accomplishing it would entail, due to lack of experience or of the bigger picture. In other words, they cannot yet put together a definition of done.
Such a definition may include certain design outcomes, implementation, testing, code reviews, documentation, handovers, etc. However, if they have not had adequate training or have had too little interaction with their supervisor or mentor, they still need to formulate their own idea of what a successfully completed task should look like.
It is possible that they will use their analytical judgement, but more often than not, this pushes them toward thinking about the unknown in terms of what they already know, i.e. toward metaphorical thinking. What young engineers have vivid knowledge of is their schooling, where executing a task meant completing an assignment: a term paper, a school project, a thesis, etc.
This is the ‘deliverables as school assignments’ metaphor — a common trait of software artefacts that extends far beyond the young engineers’ output: product pitches and acceptance processes on both sides, the quality of concept proofs, and the output of many startups are all often marked by it.
when does it break?
Many metaphors are useful in a number of ways, and getting rid of them is not a good idea. Not this one. ‘Deliverables as school assignments’ is a pervasive anti-pattern. It breaks where deliverables are produced just to meet immediate acceptance criteria rather than crafted as solutions of long-term value or practical application.
The school assignments are done and forgotten. Their ownership ceases the moment they are accepted. The students don’t have to deploy, fix, maintain, or expand them. It is as transactional for the teachers, too: once the assignment is marked, it’s the end of the story.
This often becomes the culture of a growing company, especially when the founders themselves come from a research background and the once-young engineers move up the ladder without learning better practices.
Externally, the company may look very successful at impressive demos, compelling grant applications, or comprehensive reports, but internally those are backed up just enough to cross the line to move to the next round of business pitches or funding applications.
Internally, software odds and ends follow the same pattern: they are thrown over the fence, rarely tested at all, and seldom put into properly structured code base repositories (or you wish they had not been, since they trash the code base quickly and irrevocably).
how to guard against it?
This metaphor is applied unconsciously most of the time and is rationalised as ‘solving the problem at hand’ to the exclusion of other design considerations, of considering consequences, of applying any methodology, or of matching against better practices from the broader industry.
The advice to be self-aware and avoid this metaphor in one’s decision-making may be well-meaning, but it holds only limited value, as any reliance on people to ‘do the right thing’, especially without articulating what those right things are, leads right back to where the trouble with this metaphor first started.
Two core differences in the life cycle of a software artefact and a school assignment are its ownership and user base.
The school assignment is transactional: neither the student nor the supervisor cares about it once it is submitted and accepted. The student answers to their supervisor and to those who assess their assignments, and it ends there. That’s how the young engineers, and the less experienced supervisors, often see it: answering to their boss, and that’s it.
In software, an artefact without an owner is flying junk: the ownership either stays with the code author or is handed over, but it never ceases. Even more importantly, an artefact without users is literally useless. As far as the artefact in question is concerned, the owner answers to the users.
It is a common mistake to assume that as long as the author of the code is around, the ownership of the code is in place. That is a slightly veiled version of the same ‘deliverable as assignment’ metaphor: instead of having an owner, the artefact comes with a person attached and will not work without the engineer in charge. It is a lynch-pin anti-pattern: the engineer guards the artefact as her/his personal asset assuring their position in the company and does not try to make it handover-ready, which inevitably leads to knowledge and ownership loss when the engineer moves on.
The proper ownership calls for a very different definition of done including testing, reviews, delivery, maintenance, knowledge distribution, disciplined handovers, etc.
Building good ownership of software artefacts across a team or a project is a slow meticulous process. Having a healthy definition of done at any step of this process is a bit like ‘not drinking specifically today’ in the 12-step program.
A part of such a definition of done is continuously asking how your product/demo/concept proof contributes to the internal products or internal capability, so that even if the demo or pitch does not get accepted or the project fails, the artefacts produced in its course do not die with it, but contribute to the company’s capability.
hiring as dating
what is it?
Hiring software engineers rarely can be left to job agents or HR: the specifics of software roles are too technical for a non-specialist to assess, and it is easy to hire an absolutely irrelevant person who would become a drain on the company resources and others’ time. So, usually a few team members run a bunch of technical interviews with the candidates. Those might include informal conversations and formal tests.
The software engineers may not have training in, experience of, or a structured approach to judging the candidates. They would vaguely talk about the candidates in terms of ‘cultural fit’, which really means ‘we like the guy’. Because ‘liking someone’ is essentially emotional and rarely includes any conscious or structured analysis, it’s really just the vibe that matters.
Assume A meets B once or twice, for an hour or two each time. They have a relatively general conversation. Based on that, A decides whether she/he likes B or not and whether there will be a follow-up. Even if it is a job interview, it sounds like a date. In the absence of a decision-making method, the engineers may — unconsciously — follow along the lines of their knowledge of dating to arrive at the we-like-the-guy/we-don’t-like-the-guy conclusion. That’s what the ‘hiring as dating’ metaphor is.
when does it cross the line?
Very quickly.
I have conducted hundreds of interviews and candidate reviews for software engineering roles, and it happened so often that after the first conversation I would feel: such a great guy! We would love to have him on the team! But, OK, let us still follow our due process and take him through the technical tests — and the person would perform really poorly. Or: Hm… I’m not sure it would be good for the team to have this person around (the cultural fit, you know)… And the engineer would shine during the tech test and do a really good job when hired.
The common problem follows the lines of the dating metaphor:
It’s either: “There will be no second date.” (I.e. the candidate does not even get a chance of sitting a tech test.)
Or: “We still don’t like you anyway.” (Based on the early impression, the interviewers already have made up their mind. The candidate’s performance in the tech tests cannot change it — and given the abundance of technicalities, one always can find tricky questions to make the candidate look weak to justify the no answer that formed in the interviewer’s mind from the outset.)
As Kahneman et al put it: “There is strong evidence … that hiring recommendations are linked to impressions formed in the informal rapport-building phase of an interview, those first two or three minutes …” [Kahneman, Sibony, Sunstein]
Generally, a number of cognitive biases originate in evoking metaphors as stand-ins for missing knowledge of actual things. (For instance, a young person may know little about hiring, but knows dating.)
how to guard against it?
I do not devise any interviewing strategy here. The point of this section is to show how the unconscious ‘hiring as dating’ metaphor is highly likely to put the whole process on the wrong rails.
The ‘hiring as dating’ metaphor has a very limited validity, so admitting that the interviewers (fellow engineers) are very likely to lack knowledge and experience of the hiring process is the first step.
When putting together a hiring sequence, it is good to keep in mind three questions, the answers to which are very specific to the company size, time and workforce budget, etc.:
- What is the price of hiring a wrong candidate? For example, is it OK to hire the candidate and then fire them during the probation period if they do not perform?
- What is the price of missing a good one? For example, Google is huge, rich, and hiring internationally. They don’t care if out of 1000 candidates they mistakenly miss 50 great engineers and eventually hire only 10. Can your company afford it?
- What is the price of continuing the hiring sequence for a candidate? Technical interviews take many hours and usually involve two to three interviewers, so the full interview process is going to cost a couple of man-days and thousands of dollars per candidate. This cost does not matter for Google. Does it matter for your company? If so, what is your mechanism for filtering out unsuitable candidates as early as possible without missing good ones?
There are many simple methods to structure the decision process: filling in a questionnaire for each candidate and grading each entry, methodically comparing candidates pairwise, etc. See the chapter on hiring in Noise: A Flaw in Human Judgement [Kahneman, Sibony, Sunstein] for analysis and many good recommendations.
hiring as exams
what is it?
Looking for a software job has turned into a full-time job as it becomes more and more common that software companies drag their job candidates through a series of technical interviews, online tests, and take-home assignments, which may take many days or even weeks to complete (e.g. Google has a system of seven involved technical interviews).
It is not unusual at all for a candidate to spend a sum total of some ten hours or even more in direct contact with various interviewing teams, being repeatedly put under time pressure or taken through timed IQ and personality tests. Sometimes, engineers are even asked to provide portfolios of their work for relatively run-of-the-mill positions.
Thus, hiring is seen as a series of exams and stress tests.
when does it cross the line?
The employers may emphasize the tech tests with some pass/fail criteria. However, the software roles often are very business-specific, so it is hard to have mechanical ‘objective’ tests. To compensate for that, the employers come up with a longer and longer obstacle course, usually under time pressure, in the hope that the harder and longer the tests, the easier the eventual choice will become; and that’s how the ‘hiring as exams’ metaphor breaks.
The purpose of exams is to pick good candidates. It seems logical that the longer and more involved the exams, leaving no corner of the candidate’s knowledge unchecked, the better the hiring choice we will make.
Firstly, if the longer interview process does not have decisive quality at each step, it may only reduce the noise in your choice a bit.
Besides, the better candidates may be less inclined to sit through the endless testing and are more likely to find another job in the process. So, the longer tests often leave you with better applicants gone and a more mediocre talent pool remaining.
Also, not all good engineers deal well with the stress of endurance and time pressure. For example, many software professionals are on the high-functioning end of the autism spectrum. Their condition often helps them become outstanding professionals. However, it also is well established that stress and anxiety greatly reduce their productivity. So, does the role you advertise really require daily resilience to stress? Is it essential for your engineers to come up with a solution in fifteen minutes, an hour’s wait being unacceptable? Sometimes, the answer may be yes — although it is hard for me to picture such a business. But if it is not, you may be rejecting lots of good candidates for no good reason.
how to guard against it?
My consistent impression has been that in many of those cases the tech hiring team simply is not sure how to select a candidate. They might have got burnt in the past with the ‘hiring as dating’ approach, as it tends to select for ‘being a good guy’, which correlates only poorly with an engineer’s tech abilities and potential. So, now they may be trying to protect themselves with a trickier and trickier labyrinth for the candidates to navigate. The other problem may be that they still form an opinion of the candidate in the first minutes of the interview (i.e., unconsciously, it still is ‘hiring as dating’), and if that first-glance opinion was negative, they use more and more involved tests to have a reason to say ‘no’.
As I pointed out earlier, answering two questions may improve the process and its outcomes:
- What is the price of hiring a wrong engineer?
- What is the price of missing a good engineer?
The answers will be different for different companies. For example, Google (probably) don’t care if they miss some perfectly good candidates, while a company on a healthy but tight budget may have no capacity for hiring a passenger but, on the other hand, may find conducting too many interviews expensive.
The answers will help to design the hiring ‘funnel’:
For example, I ran many hiring rounds in projects on very tight time budgets and with hundreds of applications. Hiring the wrong person would create an unaffordable overhead for us. We also were very short of time, so we wanted to spend as little time on each candidate as possible.
We devised a pretty generic three-stage process, but we made sure that each of the stages was geared for brevity. We also kept notes and a simple bullet-point record on each candidate’s performance on each task. We used the candidates’ records to do pairwise comparison, rather than relying on impressions and vague memories.
So, we did almost no resume reviews and sent a brief tech screening assignment to most of the applicants — many dozens of them in a single hiring round for two or three positions. Prior to that, we sat through the screening assignment ourselves to make sure that it would not take more than half an hour of the right candidate’s time. This step would take just a minute or two per candidate.
We had an automated test suite for the assignment. We would let candidates ask questions over email and go through two or three rounds of attempts. We were very conscious of the candidates’ and our own time: a couple of brief email exchanges about bug fixes and code improvements took 10–15 minutes per candidate altogether. At the end, we would filter out those who had not performed at all. For those who remained, we had a very good idea of how they react to feedback, how they take requests for cleaner code, what their take on corner cases is, etc.
It was easy to select a limited number of candidates for the in-person tech interviews. We would have a hands-on coding interview and a design+culture interview, each of them lasting two hours — usual stuff.
We had very good hiring outcomes, with only 5–10% of our hires (both junior and senior) not performing and the rest doing a great job.
We have talked about a number of metaphors common in the software industry. In the last section, let us have a closer look at the reasons why metaphor holds a special place in software engineering.
software engineering and language
In this section, I first outline software engineering as work with language. Then, I will demonstrate similarities between the conceptual structure of software artefacts (classes, libraries, utilities, components, etc) and that of metaphoric statements. Next, I will describe a design method that fits well with this generic conceptual structure. Lastly, I will show how the metaphorical mindset comes into play in the design process.
coding is writing
coding is a language activity
Software development is writing, just like writing papers, fiction, or poetry. Unlike the metaphors I mentioned earlier, ‘coding is writing’ is not a metaphor but a literal statement, even if writing is not all that coding is. Language literally is the tool, or material, or medium of programming, and therefore many laws, rules, and constraints that apply to language in general apply to programming, too.
It may seem that programming is all about artificial languages. However, the code written in an artificial language is meant for humans to understand: we can explain the meaning of the code in our conversational language, and vice versa, we formulate our design in natural language and then implement the respective code. So, natural language also is a medium of software: software development is writing in both natural and artificial languages.
This interconnection between natural and formalised expression is deeply embedded in the nature of language and goes both ways: on the one hand, the artificial languages were inspired by and designed after natural language. On the other, familiarity with formal logic makes natural language statements more consistent and meaningful. For example, it has been shown that adding code to LLM pretraining improves the model’s natural language reasoning and world knowledge [Aryabumi et al].
Code writing is a language activity. The difference from natural language is that code not only says things, but also does things. (Natural language can do things, too. Said by the right person, the expression ‘I pronounce you husband and wife’ actually makes the couple husband and wife, with vast material consequences for their lives. Such expressions are called ‘performative’. The same is true for code, provided it runs on the right computer: it processes data, shows us something, controls physical devices, etc.) I talk about it in more detail in [Vlaskine].
language and mirror of nature
Human language was once perceived as a reflection of thought, which, in turn, was seen as a reflection of the world. Without delving into too much detail, I will briefly mention a few such key concepts, figures, and ideas in very simplified terms.
Until the mid-20th century, the Western understanding of human thought was deeply shaped by Plato’s theory of ideas. According to Plato’s views, any thing had an ideal prototype (an idea): the idea of a door, the idea of a cat, and so forth. Humans were believed to possess innate knowledge of these ideas, embedded within the soul, and learning was seen as a process of recollecting them.
Since Plato, there has been a lot of effort to connect those three: things in the world, human thought, and human language. In the late 19th and early 20th century, thinkers such as Russell and the early Wittgenstein aimed to formalise the view of language as a reflection or mirror of ‘reality’. They conceived of words and sentences as references to things, facts, and states of affairs in the world: ‘dog’ referring to a dog (or also to an idea of a dog); ‘Britain does not have a king’ being true in 1990, but false in 2024, etc.
These views were corroborated by scientific, societal, and industrial developments. However, they also led to many difficulties and paradoxes.
In the 20th century, the linguistic turn initiated by philosophers, logicians, and linguists such as Frege, Peirce, Austin, Hjelmslev, and the later Wittgenstein, to name only a few, challenged the idea that language is a reflection of the world: language just does not function merely as a reference to ‘reality’. Clinging to that view leads to endless confusion and errors and severely limits what we can do with language.
For example, to mention one such problem out of many: the earlier schools of thought assumed that only sentences that can be true or false belong to rational language. Sentences like ‘I declare war on France’ or ‘Please lift your right arm’ cannot be true or false (they are performative and imperative respectively). Yet, they are perfectly rational and pragmatic and can be used to execute or enable very real actions with a clear material impact. Given such limitations of what once was seen as ‘rationality’, what can we do to still be able to have a rational dialogue, build coherent theories, or run social enterprises? In his famous series of lectures, the British philosopher and linguist J. L. Austin showed the meaning and place of such phrases in everyday and rational discourse [Austin].
Definitions offer another example. For centuries, the definition had been a description of a class of objects or facts. Definitions seemed to apply to the actual things: Does it quack like a duck? Does it walk like a duck? Then it fits the definition of a duck and thus it is a duck. However, what is the definition of ‘game’? Or ‘well-being’? Or ‘life’? Or ‘mind’? Or ‘human’? It was assumed that definitions would be the way to accurately describe things. This approach guided scientists, philosophers, and public servants: define the problem first and then solve it. Research, technology, and public policies were based on those definitions, with horrible consequences: the definition of ‘human’ did not quite extend to other races and thus led to the death and suffering of hundreds of millions of people. Similarly, the notion of ‘consciousness’ was not extended even to higher animals, leading to animal abuse and holding back research in animal cognition.
It might be tempting to try to solve the predicament of definitions by improving, overturning, or expanding them, but that just postpones the problems or creates new ones.
Instead of defining things or concepts, Charles Sanders Peirce — an American philosopher, scientist, and one of the founders of pragmatism as a school of thought — suggested that the “…entire purport of any concept lies in the character of the actions or external effects which it is calculated to produce or bring about.” [Peirce] To rephrase it somewhat liberally, an object simply is the sum total of what it can do to the rest of the world. The object or concept is defined by its consequences and nothing else.
Ludwig Wittgenstein, an Austrian philosopher, offered a different approach, arguing that things may be connected by a series of partially overlapping similarities — he called them ‘family resemblance’ — rather than by a single definition covering a class of things [Wittgenstein]. For example, it is easy to see that chess, cricket, betting, a boy kicking a ball against a wall, or two kittens playfully tumbling are all games resembling each other. Yet, it would be hard, and unnecessary, to come up with a blanket definition covering them all.
Meaning as use is the other major theme in Wittgenstein’s thought: the meaning of words or sentences is not defined by their dictionary entries. The only meaning words have comes from how we use them in the various contexts of our communication, i.e. from what we do or effect by using them.
The Wittgensteinian meaning-as-use and the Peircean concept-as-its-consequences complement each other: the concepts we use in our communication are circumscribed by their context and their consequences. The pragmatic approach to language has since revolutionised philosophy and produced powerful theoretical and practical results.
Which leads to the question:
how does it matter for software development?
Writing code is a language activity. Therefore, whatever rules, methods, or constraints apply to language in general — and whatever problems and solutions arise from them — can be meaningful and useful in software development.
Let us start with the example of definitions from the previous section.
In the past, a rigorous study of any kind of problem would either start or end with definitions of terms. A biologist would observe, describe, and then define a new species. A public policy white paper on well-being would, from the outset, come up with a collection of criteria characterising well-being. A book on mathematics would start with definitions (and yet mention that there are some fundamental terms like ‘point’ or ‘line’ that we don’t define at all). Requiring such definitions often still is commonplace.
The definitions would describe, and thus refer to, classes of things. That also is at the core of Object-Oriented Design: class hierarchies define exactly that: instances of class Color refer to colours, instances of class Employee to employees, etc. Classes possess properties, and we can define relations between them. The idea was that this would be a universal way to model anything: thus the Unified Modeling Language or, no more, no less, Ontology Engineering.
As I already mentioned above, OOD had a massive overreach and thus created an enormous amount of waste and unwarranted expectations around the globe. OOD was just another incarnation of the view of language as a mirror of reality.
The conceptual, methodological, and linguistic tools to overcome the mirror-of-reality approach had been worked out decades before the OOD boom that started sometime in the 1980s, but they were largely ignored by the software development community. This neglect led to multi-billion-dollar waste, lower design quality, ugly code, useless but powerful standardization bodies, cottage industries producing tools, and less-than-perfect engineering education programs still affecting generations of engineers.
Much has changed since then: templating, generic programming, and elements of functional programming have transformed C++ and its design methods; OOD is only a fairly minor part of python; pattern languages offer a design paradigm that does not require thinking in terms of classes and objects; etc.
For example, let us compare the object-oriented approach with pattern languages. (By no means are they incompatible. Also, nothing is wrong with OOD per se, as long as it stays inside its limitations.)
In OOD, we model a bunch of classes (of objects) in our problem space and then beef them up with properties and relations between them. E.g. that’s more or less all that the UML tools allow us to do.
On the other hand, the concept of the pattern, rather than the class or object, is central to pattern languages. The pattern is not something you define, design, or model. Instead, you describe your problem(s) and its context(s) in terms of the forces that are at play. Depending on the problem, these can be market forces, various user- or use-base-related forces, technical forces, etc. We do not invent the pattern. Instead, we observe what kind of pattern or patterns the interaction of those forces forms and how those forces can be balanced by the resulting patterns. Once we work out the pattern of interest, we decide upon its implementation, whatever language mechanisms — classes and objects, traits, policies, functional components, templates, concepts, closures, distributed computing, microservices, or what not — may fit the bill.
Forces inform the semantic aspects of the pattern: in which contexts does the pattern interact with each force? What kind of context-specific vocabulary and expressions do we use to describe the problem in the context of user experience? Or performance? Or connectivity? This will outline the respective semantic spaces (namespaces) and ‘mini-languages’ describing the pattern’s ‘behaviour’. There may be multiple ‘mini-languages’ acting in the same context. Say, in the following pseudo-code in the trading namespace, we may have trading::instrument and trading::portfolio classes with a mini-language of addition, subtraction, and multiplication, so that we can say something like:
trading::instrument i0, i1, i2;
trading::portfolio p1 = i0 + 5 * i1;
p1 += i2;
p1 -= i1 * 2;
trading::portfolio p2 = 3 * i0;
trading::portfolio p3 = p1 + p2;
// etc
In addition to that, we may have a ‘mini-language’ for risk management, something like:
trading::portfolio p = i0 + 5 * i1;
p.risk.max(5000);
std::cerr << "current risk: " << p.risk() << std::endl;
p += 3 * i2; // may throw an exception if portfolio max risk gets exceeded
trading::order order = p.risk.balance_using( i1, i2 );
When we speak about a software artefact — from a minor function to a product — semantic contexts and mini-languages help us map and partition the artefact’s semantic terrain, both from the artefact user’s and the implementation points of view. The expressions in those mini-languages (Wittgenstein would call them language games) do not just describe states of affairs; they are not true or false; rather, they do things (e.g. form the portfolio or balance its risk) and at the same time say what they do (pragmatics and semantics in them are the same thing; in other words, their meaning is their use).
These basic examples give a glimpse of how software design is in the business of doing things with language.
Whatever applies to language in general is relevant to software design, and scale is one of those things. Physical things are characterized and constrained by their size, by their spatial relations, by the whole being a sum of its parts, etc. Extending the ‘physical thing’ metaphor to software artefacts is therefore a source of confusion and major errors. How major? Take time estimates in software: notoriously, software engineers routinely give time estimates that are off by an order of magnitude. Very ‘small’ templated classes can have a broad, overarching effect and design leverage. Semantic relationships bear constraints entirely different from spatial or physical ones. Lastly, it is notorious how quickly lack of focus and coordination makes software projects go astray and fall apart: their deterioration is a matter of weeks and often days, much faster than with physical things. Semantic partitioning, as well as clarity of intention and usage semantics, is necessary to render focus to the design and collaboration for software projects to stay intact.
With that in mind, let us return to metaphor, which, as mentioned above, is one of the pillars of language.
namespaces and metaphor
Here is another way to look at it: most of the time, if not always, unlike a physical object, a software artefact or product not just ‘is’, but ‘is thought as’.
Say, there is no sensible single software representation of an airplane. It’s rather an airplane for a certain purpose: airplane as a machine, airplane as an inventory unit, airplane as an airborne object, airplane as an entity obeying the air traffic rules, etc. Each of those representations describes an aspect of an airplane, so ‘being thought as’ is very close to belonging to a certain namespace (in C++ terminology): vehicles::airplane, which has its geometry, engine characteristics, weight, etc; assets::airplane, which has an id, price, etc; or airtraffic::airplane, which has a flight plan, is meant to follow rules depending on its size or altitude, and so on.
Each of those classes is an aspect of the airplane; however, for most if not all practical purposes, you don’t ever need a ‘master’ software representation of the Airplane that combines all the possible aspects and is the Airplane (not airplane-as-xyz). Such a ‘master’ class, if implemented, often (always?) becomes a trash bin that mixes unrelated concepts, breaks separation of concerns, and, most importantly, has little use in actual products: if you design the airplane wing, you may not care about air traffic regulations; if you work on a database of the cabin fish-or-chicken logistics, you don’t care about the engine mechanics, etc.
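To make it concrete, here is a minimal python sketch of the idea (the module and attribute names are hypothetical): each aspect lives in its own namespace, and there is no ‘master’ class:
# vehicles/airplane.py: the airplane as a machine
class airplane:
    def __init__( self, wingspan, engines ): self.wingspan, self.engines = wingspan, engines

# assets/airplane.py: the airplane as an inventory unit
class airplane:
    def __init__( self, asset_id, price ): self.asset_id, self.price = asset_id, price

# airtraffic/airplane.py: the airplane as an entity obeying air traffic rules
class airplane:
    def __init__( self, flight_plan, altitude ): self.flight_plan, self.altitude = flight_plan, altitude

# a wing-design tool imports vehicles.airplane and never hears about prices;
# an accounting tool imports assets.airplane and never hears about flight plans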
‘Airplane as a vehicle’ or ‘airplane as an asset’ are not metaphors, since an airplane literally is a vehicle, and it literally is an asset, in the respective contexts. Metaphor is never literal by definition; still, in some context, we very well can see the ‘airplane as an iron bird’.
The metaphorical contexts have the same ‘being as’ structure as the classes and other entities in the programmatic namespaces. How does this structure operate?
When we are dealing with an airplane as a vehicle, we, for instance, can fly it. It is natural to say and do so in the conceptual namespace of vehicles.
When we are dealing with an airplane as an asset, we, for instance, can sell it. It is natural to say and do so in the conceptual namespace of assets.
When we are dealing with an airplane as an iron bird, what can we do with it? Seemingly not much, since an airplane as a bird does not correspond to anything real. Yet, we can name our airplane model after a bird: a good name signals the intent behind the product and makes for punchy branding — exactly what the Germans have been doing with their tanks for almost a century. Also, we can work on something like a bird, something that feels alive, free, and powerful (the military refer to helicopters as ‘birds’). Last but not least, naming conventions are important in engineering for focused and concise communication: ‘wings’ and ‘tail’ are not literal; they are dead metaphors.
For now, let us limit ourselves to saying only that the airplane can be seen as an asset because it has a family resemblance with assets. It can be seen as a transportation vehicle because it has a family resemblance with other means of transportation. Same with the airplane as a bird.
two design questions
Let us talk about class design. Later, I will mention how the same reasoning works for libraries, utilities, components, systems, or products.
When I start to design a new class, I first need an idea of what that class is. One of the engineering biases is the implementation bias: rushing to write code without formulating a concept of what that code is meant to represent. The engineer gets overconfident, maybe sees the class as a counterpart (i.e. a reference) of a ‘real’ thing (e.g. of an aircraft), and goes ahead without a pause or consultation.
A principled way to formulate what the class is would be to answer two questions:
- How will this class be used? (I.e. its usage semantics)
- Where is it going to sit? (I.e. its namespace, meaning the conceptual context, programmatic namespace, and/or directory containing the class code)
Note that working out the ‘real-world’ counterpart of the class (its reference, in semiotic terms) is not really required. We just need to figure out how the class will be used and in which context (or contexts). There is literally no use in thinking about the parts of the class that won’t be used, or about its use outside of the relevant context: it adds nothing to the meaning of the class.
It may sound trivial to you, but do you trivially agree or trivially disagree? If you agree, do you answer those two questions every time you do your design?
In all my experience, most engineers don’t. Their design process is heavily skewed by the implementation bias: too quickly, they slide into thinking about the design in terms of technical capabilities and implementation details, i.e. in terms of what libraries, components, databases, or frameworks they have at their disposal. Thus, their design reflects the technicalities rather than the users’ concerns, and they end up with artefacts (e.g. classes) that are hard to use due to clunky usage semantics; hard or impossible to reuse due to overfitting and semantic coupling; hard to maintain, since they are not test- or change-friendly; and hard to find or make sense of, since they are scattered across locations and contexts that have not been thought through.
intermission: on triviality
Some (or many) agree with my point but find it trivial, which leads to the triviality bias. This bias is surprisingly and incredibly pervasive and goes as follows, usually unconsciously: trivial things don’t require attention — that’s what makes them trivial. Therefore, we don’t need to attend to them. Meaning that we ignore them. Meaning they don’t get done.
Some good practices might look trivial, but they become ‘good practices’ only if they are practised.
Washing hands before an operation may look trivial to a brilliant brain surgeon, but she still washes them. It was not the case some 200 years ago at all: handwashing before surgery was considered too trivial to bother with. Ignaz Semmelweis promoted handwashing in the mid-19th century and thus revolutionised healthcare. And yet, “initial responses to Semmelweis’s findings tended to downplay their significance by arguing that he had said nothing new” [link]. Semmelweis’s efforts were ignored; he grew more insistent, eventually was declared insane due to his growing militancy, was institutionalised, and died shortly after that — of sepsis, of all things. Just years after his death, his views and methods eventually found acceptance in medical circles and have since saved innumerable lives.
So, it’s the practice that matters.
The problem is that good practices may look trivial when one sees no meaning or value in them due to a combination of reasons:
- Overconfidence: E.g. in Semmelweis’s case “some doctors, for instance, were offended at the suggestion that they should wash their hands, feeling that their social status as gentlemen was inconsistent with the idea that their hands could be unclean” [ibid].
- No skin in the game: The stakeholders (upper management, investors, etc) do not see the value in a practice because they are not the ones who feel the pain (even if they are the ones who reap the consequences in an indirect way). That’s how clunky software frameworks and processes get parachuted upon the teams despite objections, while ‘trivial’ practices never get support or traction.
- Lack of experience: It is hard to truly appreciate the value of things if you have not lived through them. Introducing practices new to a team requires endless convincing and often turns, not so much into religious wars, but into religious scuffles. Even after the practice is introduced, it needs to be maintained for a while, because it takes time for it to show added value. Moreover, even after it is clear that the practice (e.g. continuous testing) is really valuable, not everyone will develop the habit of practising it, and you still may need to constantly promote, endorse, and enforce it.
On a greater scale, e.g. in a larger organisation or even in society as a whole, the picture is even more difficult, since the connection between cause and effect is not so obvious. Assume we have managed to make engineers write tests for the python software they develop. Writing pytest tests is easy and, frankly, fun. As the test coverage grows, the engineers discover that it becomes much less stressful to develop in small, controlled steps without fear of breaking the existing functionality. Even young, less patient engineers often love it, since it really makes their life easier. After a year, the management observes better productivity and quality of code. Will they attribute the improvements to the testing? Lots of other things might have happened that year: new contracts signed, new employees joined, etc. The management may miss or ignore the causal link between testing and productivity, so they still may see no value in it.
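To illustrate how easy such tests are to write, here is a minimal, self-contained example (the average function is purely illustrative): pytest simply discovers and runs every function named test_*:
# test_average.py: run with `pytest test_average.py`
import pytest

def average( values ):
    if not values: raise ValueError( 'average of an empty sequence' )
    return sum( values ) / len( values )

def test_average(): assert average( [ 1, 2, 3 ] ) == 2

def test_average_of_empty_sequence():
    with pytest.raises( ValueError ): average( [] )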
One typical answer is commissioning research that would statistically demonstrate the correlation between testing and quality. That’s how the tide eventually turned in Semmelweis’s case: the mortality statistics in the hospitals where handwashing was (eventually) stipulated were an order of magnitude better than in those where doctors did not wash their hands. Staging such research is expensive, complicated, and often inconclusive, but still possible, of course. The other way is to regularly and carefully listen to those who feel the pain: to the engineers on the ground, including those who bring ideas that may sound to you like criticism of the status quo; to the actual users of your product, who may sound to you needy and ‘trivial’; etc. Each of the small (and not so small) problems needs to be solved by improving the practices around it. Lean Development is built upon this philosophy. Pattern languages are a natural and excellent tool for that, too.
Now, I will elaborate on both design questions and demonstrate that they are not as trivial as they may seem and require a specific type of effort, one which might be well known, but is much less articulated and practised in the software development community than it should be.
After that, I will return to the place metaphor holds in all that.
usage semantics
To work out the usage semantics of a class (and similarly of a library, component, GUI, communication protocol, etc) it is not enough to design its interface. We need to express how an object of the class will be used.
A good first step is working out the users of the class (both human and programmatic) and articulating multiple use cases. Here is a recipe: if there are no use cases, don’t design the class. If there is only one use case, consider at least an automated test suite as the second use case. If there are two or more dissimilar use cases, we are good (still add a test suite!) — this is something I call ‘semantic triangulation’: the class usage has at least two legs to stand on. The spread between the use cases exposes the required degrees of freedom of the usage semantics. Much more often than not, a single use case leads to overfit usage semantics: inconvenient, inflexible, and not reusable.
User stories are an excellent method to capture the class usage and avoid the implementation bias. They can be written in their standard form: ‘As XXX, I want functionality YYY so that I am able to do ZZZ.’
Here is a crude example of higher-level user stories:
As a trading portfolio manager
- I want to combine trading instruments so that I can build my portfolio
- I want to manipulate the risk of my portfolio so that I can manage risk
- I want to save current state of my portfolio so that I can track its state in time
We are not done yet, though: the next step is to write the code that expresses the use cases or user stories. Assume we decided that the thing we need is a bunch of classes. Then we write down, in the programming language of choice, whatever we need to do with the class to express each of our user stories.
Rather than starting with formal specifications or definitions, it is much more efficient to work from examples, so that the concepts arise from the need and the actual use cases, as well as from the expressive capabilities grounded in the language of implementation itself.
For functions or classes, we may sketch the use of yet non-existent calls and methods. For command-line utilities, we may start with writing their --help with command-line usage examples. We could express user stories with non-throwaway empty-box or stub interfaces for GUI components, microservices, etc. (TDD, Test-Driven Development, is one of the more formal ways of doing just that: write the usage scenarios as runnable tests. Surely, they fail, since they don’t do anything yet, but they do express the usage semantics. Such tests are an extra use case.)
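For illustration, here is a minimal sketch of the first user story above written as a runnable test; the trading API is the hypothetical one sketched in the snippets below, so the test fails until the module is implemented, but it already expresses the usage semantics and doubles as a second use case:
# test_portfolio_usage.py: a usage scenario written as a runnable test
import trading # the hypothetical module whose usage we are designing

def test_portfolio_is_composed_from_instruments():
    i0 = trading.instruments.share( 'NVDA' )
    i1 = trading.instruments.options.put( i0, expiry='20250101' )
    p = i0 + 5 * i1 # the usage semantics we want: portfolio as an algebra
    p += i1
    assert isinstance( p, trading.portfolio )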
So, let us write our code/pseudo-code snippets for our user stories in python:
We already started sketching the usage semantics for the trading portfolio manipulation in the previous section. We observe that a portfolio is a collection of discrete items and so may lend itself well to algebra-like operations, i.e. we see the portfolio manipulations as something like an algebra, which can be expressed by defining the standard python operators in our classes:
import functools
import trading # the hypothetical module whose usage we are sketching

i0 = trading.instruments.share('NVDA')
i1 = trading.instruments.options.put(i0, expiry='20250101')
i2 = trading.instruments.options.call(i0, expiry='20250301')
p1 = trading.portfolio() # an empty portfolio
p1 = i0 + 5 * i1 # compose a portfolio from instruments
p1 += i2
p1 -= i1 * 2
p2 = 3 * i0
p3 = p1 + p2 * 3
i = [ trading.instruments.options.call(i0, expiry=expiry) for expiry in all_expiry_dates ]
functools.reduce( lambda a, b: a + b, i, trading.portfolio() ) # fold instruments into a portfolio
...
Now, before starting implementation, we can review our sketch with the subject-matter specialists (portfolio managers). We show them what they could do with the portfolio and ask whether we missed or misunderstood anything.
Such usage semantics examples become a blueprint for the implementation without the need for extra documentation.
The second user story could result in a sketch like this:
p0 = trading.portfolio(...)
p1 = trading.portfolio(...)
r = trading.risk( p0 )
...
if trading.risk( p0 ) < trading.risk( p1 ): ...
...
total_risk = trading.risk( p0 + p1 )
...
# or
r = p0.risk()
...
This would lead to a design question for the portfolio manager: does a portfolio have risk, or is risk evaluation something we apply to a portfolio? Depending on the outcome, we may proceed with more sketches.
Finally for this example, we want to log the portfolio state, which comes naturally in python (e.g. if we use dataclasses_json or pickle). However, as a bonus from seeing portfolio operations as an algebra, we can add the respective serialization/deserialization almost for free. E.g. deserialization from a string (which can come from a GUI, a file, etc):
p = eval( s ) # assuming share, options, etc are in scope, e.g. via eval( s, vars( trading.instruments ) )
# where s is something like
"share('NVDA') * 50 + 30 * options.put( share('NVDA'), expiry='20250101' )"
...
Vice versa, serialization as a formula might look like:
p.to_formula()
...
write_to_log( time.time(), p.to_json() ) # naturally, for e.g. json serialization
...
namespacing
Usage semantics depends on the context (‘namespacing’) of the class. In turn, the class’s context depends on its usage. So, both need to be designed together.
Continuing the example above, we risk quickly turning our trading namespace into a trash bin, with trading instruments, risk management, order management, exchange connectivity, etc all in one place.
We may have base classes like portfolio or instrument in the trading namespace; however, beyond that, we should have more structured namespaces; to name a few:
- trading.instruments, trading.instruments.derivatives, trading.instruments.derivatives.options, etc
- trading.risk, trading.risk.models, etc
The namespace hierarchy starts growing like a tree structure; however, we will quickly find that it is insufficient. For example, both trading.pricing.options and trading.options.pricing make sense, but it is inconvenient to have both. Or, as in our earlier aircraft example, is it aircraft.assets and aircraft.aerodynamics, or assets.aircraft and aerodynamics.aircraft? This ‘namespace inversion’ is not a deficiency of our design, but a feature of any language or, generally, any semiotic system: the namespaces interact and permeate each other. Christopher Alexander, the Austrian-American architect who first came up with the concept of the design pattern, showed that contexts (namespaces) cannot be reduced to a neat tree-like hierarchy, but rather form a semilattice [Alexander], overlapping and inter-related in a non-linear, non-hierarchical way. Alexander’s work began in urban planning. Later, his ideas were widely adopted in software engineering, especially in pattern languages, which in itself was a cross-domain insight.
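In python, for example, one inexpensive way to live with namespace inversion without duplicating functionality is to implement it once and alias it in the other namespace; a sketch with hypothetical module names:
# trading/pricing/options.py: the actual implementation lives here
def price( option ): ...

# trading/options/pricing.py: the inverted spelling is a mere alias
from trading.pricing.options import *

# both spellings now work:
#     from trading.pricing.options import price
#     from trading.options.pricing import price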
metaphor in software design
Let’s say we have come up with a class design and put it in a good namespace (e.g. trading.instruments, trading.risk, and others in the example above). As we step back and have a look at the result from a distance, at some point it may occur to us that our class, or a part of it, is something else. As a simple example, our portfolio class and the operations on it, seen as something like an algebra, open the door to using any algebra-compatible manipulations on it, such as sum or functools.reduce, without writing any extra code. It is not always a generalisation: more often, we realise that our class has a family resemblance with something else in a different context.
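For example, once the operators are defined, the standard python machinery applies to our hypothetical classes with no extra code:
import functools
import trading # the hypothetical module sketched above

instruments = [ trading.instruments.share( name ) for name in ( 'NVDA', 'AAPL', 'MSFT' ) ]
p = sum( instruments, trading.portfolio() ) # works via the already defined __add__
p = functools.reduce( lambda a, b: a + b, instruments, trading.portfolio() ) # same result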
This realisation is a powerful design moment: an opportunity to move things around, making them more modular and easier to use. It lets us notice an open door for reusing existing functionality, or for capturing a new capability in a different, typically more generic, code repository.
Even when engineers are too busy or impatient, once they see the open door, the least they can do is keep that door open and uncluttered: add empty placeholders or — as a less preferable option — to-do comments that indicate the future intention.
(For instance, in our earlier example, only trading.instruments.derivatives.options.call and trading.instruments.derivatives.options.put may be among our immediate needs. However, it is a good idea to drop in even an empty class placeholder for trading.instruments.derivatives.future, even though we might not need it right now. Moreover, having such a placeholder may push us to implement at least a basic common class for a derivative instrument, trading.instruments.derivative, from which call, put, and future would inherit. If we did not have the future class, we might go lax on the class hierarchy for call and put; future serves as a second use case and thus provides ‘semantic triangulation’ for our design. On top of that, isn’t our trading.portfolio also a kind of generic inventory that could be composed of any kinds of assets? If so, its implementation really should not sit in the trading namespace, but elsewhere, e.g. in a generic inventory class, and rather be aliased, instantiated, or specialized in trading.)
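A possible shape of such a move, as a sketch (the inventory module here is hypothetical): the generic implementation leaves trading, which keeps only a thin specialisation:
# inventory/__init__.py: a generic inventory of discrete items with algebra-like composition
class inventory:
    def __init__( self ): self.items = {}
    def __add__( self, item ): ... # the generic composition semantics live here

# trading/__init__.py: portfolio is a thin specialisation of the generic inventory
import inventory

class portfolio( inventory.inventory ):
    def risk( self ): ... # only the trading-specific parts are added here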
It is interesting (and important) to note that the skill of seeing a piece of software as something else relates to the validity boundary of the metaphor of software development as competition, both in the athletic and the market sense. We discussed this metaphor earlier. When the engineers — especially unconsciously — ‘keep eyes on the prize’ and focus on the competition goal, they take the competition metaphor too far, with too much ‘technical self-interest’ and too little ‘technical empathy’. ‘Technical self-interest’ is concerned only with the problem at hand: the specific project, the specific deliverable, or the specific task. ‘Technical empathy’ is concerned with ‘others’: with other use cases, other users, other engineers, and with always being on the lookout for our design as something else. The latter is the definition of metaphor — generally in language and specifically in software design.
Although such vision — of what else our design is, or of where else our design fits — requires practice and a very specific mental effort, it has very little time overhead. Even in the short term, it boosts implementation, since we can reuse existing components rather than re-implement them over and over. More importantly, it gives a substantial boost to design: as we leave behind placeholders uncluttered by overfit implementation, those placeholders and breadcrumbs fulfil a large portion of the design effort upfront. It grows the company’s capability to undertake its software projects at much lower cost and higher speed.
I am glad if you think that this conclusion is a commonplace and that all the previous sections bring us to the well-known conclusions about reuse and separation of concerns. What I am trying to do, though, is to show what process of thought leads to good design. Doing it ‘naturally’ leads to a high spread in software skills, productivity, and quality. Instead, I suggest that the mechanism of such a design process is grounded in metaphorical thinking: in seeing the thing as something else.
Commonplace or not, this sort of conscious and intentional design is more often than not absent in software teams, and is commonly ignored, if not actively disliked, by the management, since it often contributes to specific products only indirectly (albeit powerfully) and thus remains in the managerial blind spot.
As I tried to demonstrate above, the unconscious use of metaphor — of thinking about the less familiar in terms of the familiar (e.g. ‘software products as physical things’ or ‘hiring as dating’) — is a common go-to human heuristic. It helps, but it also leads to strong damaging biases and noisy judgement.
On the other hand, seeing something else in the familiar is a mode of creativity and innovation. The conscious and structured effort of attention to, and the ability to see, a software artefact as something else is technical metaphoric thinking. If metaphoric traversal, i.e. meaningful and careful cross-domain metaphor hopping, is cultivated as a skill and a day-to-day practice rather than a talent or innate ability, it will become a powerful and essential tool of software development.
references
[Alexander] C. Alexander, A City is Not a Tree
[Aryabumi et al] V. Aryabumi et al, To Code or Not To Code? Exploring Impact of Code in Pre-training
[Austin] J. L. Austin, How to Do Things with Words
[Cohen] W. A. Cohen, Drucker on Leadership: New Lessons from the Father of Modern Management
[Kahneman, Sibony, Sunstein] D. Kahneman, O. Sibony, C. R. Sunstein, Noise: A Flaw in Human Judgement
[Lakoff, Johnson] G. Lakoff, M. Johnson, Metaphors We Live By
[Peirce] C. S. Peirce, The Collected Papers of Charles Sanders Peirce, Vol. VIII: Reviews, Correspondence, and Bibliography (191–193)
[Vlaskine] V. Vlaskine, Code as two texts, first published in ACCU CVu as Two sides of the code
[Wittgenstein] L. Wittgenstein, Philosophical Investigations
[Yourdon] E. Yourdon, Death March