When someone starts programming today, they often deploy and wrangle other people's code. As they begin that undertaking, they're also often met with a culture that values code over end-user value. These early experiences shape a person's understanding of what programming is and how it is done.
Beginners are provided instant gratification. They are rewarded for skipping the fundamentals of programming - most notably the ability to understand and follow program state during execution. There is little incentive to ever take a step back and learn those fundamentals.
This has led many programming subcultures to believe going low level and/or rolling your own is a mistake. It's easy to understand why: starting from scratch takes considerable skill and knowledge. Your first few attempts will inevitably be failures, and shine a light on a long path ahead. It's considered a fool's errand. Why continue on that path when using code someone else has written offers an easier route?
This lack of fundamentals has much longer term consequences than a Fizz Buzz test in a job interview. In my view, someone who primarily plumbs together unknown code will introduce unnecessary complexity, because they don't understand what the underlying computer is actually doing, and therefore will judge their solution on misguided metrics, such as the look or the cleverness of the code. The tool is no longer used to achieve an end, but becomes the primary consideration. The actual functionality of their programs, the end user experience, is secondary.
Code plumbing rewards hacking over understanding. The code being used is unknown, complex, voluminous. Instead of understanding the code, which would take years, people are incentivised to use trial and error until they get the result they want. Even as a programmer improves, no matter how intelligent they may be, their instinctive habit will be to hack.
Another factor is that they won't develop good metrics for software quality. When plumbing, you don't need to learn to build good, maintainable abstractions, since any abstractions you do create will tend to be fairly shallow. This means the full catastrophe of a bad abstraction isn't revealed. And then the wider culture starts celebrating particular abstractions in books, and the authority held by those abstractions ends up trumping concerns about whether they suit the particular context in which they are being used.
One personal anecdote of this is a software architect who complained that the developers were "ruining the purity of [their] vision". The architect's vision hadn't survived contact with reality, but they refused to reassess whether their suggested abstraction was appropriate, despite being shown the problems with it.
This boils down to having a bad measure of software quality, which results in programmers chasing metrics that are counterproductive:
"Programmers are bright people who are (often justly) proud of their ability to handle complexity and juggle abstractions. Often they compete with their peers to see who can build the most intricate and beautiful complexities. Just as often, their ability to design outstrips their ability to implement and debug, and the result is expensive failure." - Douglas McIlroy quoted in The Art of Unix Programming
This state of affairs is living on borrowed time, because ideas formed on a disconnection from what the code actually does, or on bad metrics for software quality, will ultimately lose the battle of ideas as the projects built on them fail.
How long it will take for them to lose that battle is another question. In medicine, bloodletting was misused for (according to Wikipedia) over 2,000 years. So for all that time, a practice widely considered beneficial was actually harming patients. I don't know what the solution is, but if we don't want the same for programming, we need to develop the programming equivalent of double-blind trials.
For particular problems, you can hit the ground running by using a framework. They allow programmers who aren't domain experts to produce working solutions to common yet difficult problems. You can feel safe that many others are using the same code, providing some assurance around things like security and scalability.
When you first start a project using a framework, it can feel like it's giving you many things for free. And it is. But even with a good framework, there is a longer-term cost to be paid:
I think the difficulty of building from scratch is generally overestimated and the on-going cost of using a framework is generally underestimated. If a good custom foundation is established, I think programmer productivity and the quality of the solution can skyrocket:
All that said, the early stages of a project will need much more care. The quality of the foundation will be determined by your skill and knowledge. It requires knowledge of the problem domain you're working in and/or the skill to quickly adapt to new problems you're unfamiliar with. It requires research of possible solutions, and then careful selection of the right approach for your project. Even if the difficulty is generally overestimated, it's still far from trivial.
If that gauntlet can be navigated, the rewards are potentially huge. But if things go wrong, it could be catastrophic. This risk is unacceptable for many projects.
I think frameworks are the right tool for many projects, particularly those involving a team of programmers. But even then, if your team lacks knowledge of the problem domain you're tackling or a deep knowledge of the framework, mistakes will be made. However, with a few experienced programmers, I think the chances of delivering a working solution are increased dramatically, although the quality of that solution may be compromised - the sluggish responsiveness of most mobile apps comes to mind.
Frameworks also provide a tried-and-tested solution as many issues will have already been ironed out. From scratch, unless you're deeply familiar with a particular problem space, you will need many iterations to reach a similarly usable and reliable solution. And if you do introduce issues, they may only become apparent later, when it has become costly to fix them.
There are a few reasons to not use a framework:
We may be moving towards a future where a generalist programmer who can tackle any problem is increasingly rare or even unfeasible. With quantum programming, maybe it's already impossible.
But even in an x64/arm world, despite any commonalities, 3D rendering and creating a website, for example, are unique problems with their own quirks. Our understanding of how to best tackle such problems has been improved by past failures. And maybe in the future the best way to solve such problems won't be to study that history in detail yourself, but to use a framework that has collected proven solutions.
Maybe some frameworks will reach a level of maturity - not just be this decade's fashionable trend - such that the benefits of avoiding them, in most cases, won't be worth the cost.
That said, I currently think that the quintessential skill of a programmer is the ability to successfully navigate a new problem domain.
There are mistakes that can be made with frameworks, like not following a framework's established way of handling a particular problem. But putting such mistakes aside, even in a world where frameworks are used by most projects, all projects have their idiosyncrasies. Programmers earn their pay when tackling a problem that the framework or a dependency doesn't solve. That's when months or even years of developer time is lost or won.
Personally, I have no intention of using frameworks for my own projects, even if a framework exists for the problem I'm trying to solve. It does increase the difficulty, but I think this is the right decision for someone primarily interested in learning their craft, and who wants to reap the longer-term rewards in solution quality and productivity.
My experiences online suggest that two people from separate programming subcultures, each with multiple decades of experience, who articulately defend themselves, can disagree on fundamental issues. Both are convinced that their way is unquestionably correct. Their group's ideas are generally viewed as sacred, and yet they often contradict those of other groups.
Why do they disagree? And how do we decide who is right? I don't think there are any clear answers, but it's interesting to think about.
I think experienced programmers disagree because programming is (a) still in its infancy and (b) hard.
A programmer can only work on a handful of large projects in their lifetime, and it can take years before they receive feedback from their choices. This means that even a veteran programmer has little evidence to determine what's best -- and ambiguous evidence at that. Furthermore, since programmers tend to follow a specific culture, it's likely that the projects they've worked on all followed a similar methodology, and so they have limited experience of other approaches.
I think in most instances, programmers don't know much beyond the culture they were raised in. An embedded systems programmer is unlikely to understand the problems of a web programmer and vice versa. One is working in a jungle, and the other is working in the desert. One gets incredulous that the other doesn't wear suncream.
A better question might be "Who shouldn't we trust?" Teaching is a big industry: books, conferences, bootcamps, university degrees. Anyone can give a talk or write a book. "Those who can, do; those who can't, teach". And the teachers themselves disagree with each other.
If you dig into the CVs of well-regarded programming teachers, they often don't provide strong evidence for why you should trust them.
I have more respect for those that survey across projects, but that's still weak data and anecdotes. I think there are too many variables between different teams and projects to isolate what works. There are no double-blind studies here, and I think if we're going to reach sound conclusions, that's the rigour required.
What qualities should a trusted authority have? Maybe they've programmed much more than others, completed multiple large, complex projects to a high standard? How high is writing lots of books and giving lots of talks on that list?
If we trust people with a long, proven track record, that is no guarantee in itself. Even if correct, they will tell you what worked for them when solving their problems, when working within their environment.
An approach that has served me well when learning a new subject is to start at the beginning. When the foundations are dodgy, the elaborate structure on top is irrelevant.
Currently, my personal conclusion, rightly or wrongly, is that programming is primarily about solving data transformation problems. I find the more I focus on the data, the clearer things are. Code is easier to read and maintain if the transformation performed naturally arises out of the problem being solved, when self-expression is minimised.
Programming is an engineering discipline, not an art. Any solution should say more about the problem than it does about the person who solved it.
I like people who are ready to admit that they don't know. This subject is so large that nobody can know it all, and so varied that there are few absolutes. Like with anything else, an ego prevents learning, causes overconfidence, and leads to ostentation.
With so many experienced programmers disagreeing, I think the only sensible approach is to be slow to judge, to avoid dogma, to expand your horizons, and to always doubt your own approach.
While working on a test framework a while back, a particular piece of functionality inspired some thought about the relationship between the flow of data and code readability. I found that iterating towards less code meant the code, as written, no longer represented the flow of data. It was therefore left to the reader to work out the flow themselves.
You can see the code at the time of writing here.
The test framework allows users to create test suites by subclassing the TestSuite class. They can then add tests to that subclass by creating a new method and prefixing test__ to its name.
To implement this in Python, we need to: get the names of all the attributes of the current class; filter those names down to the ones prefixed with test__; and look up each matching method by name so it can be run.
It turns out this can be achieved concisely:
for test_name in filter(lambda name: name.startswith('test__'), dir(self)):
After starting with something much longer, it's satisfying code to iterate to, but is it good?
If the reader has knowledge of Python -- understands filter, dir, etc. -- they may not need a search engine. But even so, they still have unpacking to do.
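To make the discussion concrete, here is a minimal, runnable sketch of how such a suite might look. The TestSuite name and the test__ prefix come from the framework described above, but the run method and the example suite are illustrative assumptions of mine, not the framework's actual code:

```python
class TestSuite:
    def run(self):
        """Discover and invoke every method whose name starts with 'test__'."""
        executed = []
        for test_name in filter(lambda name: name.startswith('test__'), dir(self)):
            getattr(self, test_name)()  # look the method up by name and call it
            executed.append(test_name)
        return executed

class ExampleSuite(TestSuite):
    def test__addition(self):
        assert 1 + 1 == 2

    def helper(self):  # not discovered: lacks the 'test__' prefix
        raise RuntimeError("never run")

print(ExampleSuite().run())
```

Note that dir(self) returns all attribute names of the instance, which is why the filter on the prefix is needed to separate tests from helpers.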
If we put the code aside and consider how the data is processed, it starts at the dir(self) call, which returns a list of method names for the current class. It's the first thing we need to progress towards what we want: a list of test methods to run. But this isn't the first thing a reader would see. Instead, dir(self) is at the end of the first line, nested inside a function call.
This means that to understand this code, the reader is forced to read the code multiple times, jump back and forwards, to identify the flow of data for themselves.
Even for an experienced Python developer it would require some mental overhead, and that all adds to the overhead of the code's larger context. People can only hold so many things in their head at once.
This touches on a larger topic: who is the target audience, and what does the coder wish to communicate to them? I believe the same issue is found in writing.
Consider The School by Donald Barthelme. On the surface the writing seems simple, but if you try to emulate his style it becomes apparent that behind that simplicity is great skill.
Compare Barthelme with Irene Iddesleigh by Amanda McKittrick Ros. Ros is out to impress, to the comical detriment of the writing itself.
Writing often wants to be stylish, to entertain, but I think code should be workman's prose. It shouldn't draw attention to itself.
For me, the problem with the above code is that it prioritises conciseness over accurately representing the flow of data. The reader wants to understand the narrative of the data -- its journey from start to end.
Here is the code I ended up with:
method_name_list = dir(self)
is_test = lambda name: name.startswith('test__')
test_name_list = filter(is_test, method_name_list)
for name in test_name_list:
    method = getattr(self, name)
The transformation of data can be followed in order by reading from top to bottom -- it matches the steps I gave earlier. And with the additional variable names, it's possible for a reader unfamiliar with the in-built functions called to infer what the code does.
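Pulled into a runnable whole, the data-flow version performs the same discovery as the concise one-liner. As before, the run method and the example suite are illustrative assumptions of mine, added so the snippet is self-contained:

```python
class TestSuite:
    def run(self):
        executed = []
        # Each step progresses the data towards the goal: a list of tests to run
        method_name_list = dir(self)
        is_test = lambda name: name.startswith('test__')
        test_name_list = filter(is_test, method_name_list)
        for name in test_name_list:
            method = getattr(self, name)
            method()
            executed.append(name)
        return executed

class ExampleSuite(TestSuite):
    def test__numbers(self):
        assert 2 + 2 == 4

print(ExampleSuite().run())
```

The two versions behave identically; the difference is purely in how much reconstruction work is left to the reader.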
I like the idea that code should be accessible to the inexperienced. I'm going to try and stick to this when coding in the future.
Like most developers, rightly or wrongly, the most I pause before creating a pull request is to add a cleanup commit. PRs are often littered with unnecessary commits, and the commits themselves don't intentionally demonstrate the thought process behind the changes made.
There is much scope to improve. In a time when documentation is frequently trumpeted, I'm inclined to think that logically grouping, ordering, and labelling the commits in a PR can help more. And since it can be part of a workflow, it may be easier to stick to than an additional task that is easily forgotten, like documentation.
Commit messages are the obvious starting point to improve a PR, but you can only write a good commit message if the changes in the commit are themselves coherent -- understandable even with no message.
That said, it is the lowest-hanging fruit -- particularly if, like me, you have created many a commit like
git commit -m "TKT-999 Fixes the thing"
There are already good guides for writing commit messages -- and interactive rebasing can help reword commit messages before creating a PR.
There are other ways to improve PRs that aren't often discussed:
Interactive rebase can be used to manipulate commits on your branch before creating a PR. A commit that fixes a typo, for example, can be squashed. Consider this example:
$ git log
commit f001584020e873aa89a3063c29c7f29fcf87317a (HEAD -> master)
Author: Mark Youngman
Date:   Sat Apr 18 20:04:57 2020 +0100

    Add even more text to test.txt

commit 5ad7fb8…
Author: Mark Youngman
Date:   Sat Apr 18 20:04:23 2020 +0100

    Fix typo in test.txt

commit 42c9d47…
Author: Mark Youngman
Date:   Sat Apr 18 20:01:42 2020 +0100

    Added more text to test.txt

commit …
Author: Mark Youngman
Date:   Sat Apr 18 19:40:04 2020 +0100
This can be fixed as follows:
git rebase -i HEAD~3
On starting the interactive rebase, instructions are provided:
# Commands:
# p, pick <commit> = use commit
# r, reword <commit> = use commit, but edit the commit message
# e, edit <commit> = use commit, but stop for amending
# s, squash <commit> = use commit, but meld into previous commit
# f, fixup <commit> = like "squash", but discard this commit's log message
# x, exec <command> = run command (the rest of the line) using shell
# b, break = stop here (continue rebase later with 'git rebase --continue')
# d, drop <commit> = remove commit
# l, label <label> = label current HEAD with a name
# t, reset <label> = reset HEAD to a label
# m, merge [-C <commit> | -c <commit>] <label> [# <oneline>]
# .       create a merge commit using the original merge commit's
# .       message (or the oneline, if no original merge commit was
# .       specified). Use -c <commit> to reword the commit message.
#
# These lines can be re-ordered; they are executed from top to bottom.
#
# If you remove a line here THAT COMMIT WILL BE LOST.
#
# However, if you remove everything, the rebase will be aborted.
In this instance, you change pick on the relevant commit to f for fixup:
pick 42c9d47 Added more text to test.txt
f 5ad7fb8 Fix typo in test.txt
pick f001584 Add even more text to test.txt
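The whole fixup flow can be exercised non-interactively in a throwaway repository. Setting GIT_SEQUENCE_EDITOR replaces the interactive editor with a sed one-liner that performs the pick-to-fixup change shown above; the repository, file names, and identity below are illustrative:

```shell
set -e
git init -q fixup-demo && cd fixup-demo
git config user.email you@example.com && git config user.name "Mark"
git commit -q --allow-empty -m "Initial commit"            # base for the rebase
echo "some textt" > test.txt
git add test.txt && git commit -qm "Added more text to test.txt"
sed -i 's/textt/text/' test.txt
git add test.txt && git commit -qm "Fix typo in test.txt"
echo "even more" >> test.txt
git add test.txt && git commit -qm "Add even more text to test.txt"
# Change 'pick' to 'fixup' on the second listed commit, as in the todo list above:
GIT_SEQUENCE_EDITOR="sed -i '2s/^pick/fixup/'" git rebase -i HEAD~3
git log --oneline    # the typo-fix commit has been melded into its predecessor
```

After the rebase, the typo fix no longer appears in the history; its changes live inside "Added more text to test.txt".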
In a different situation, evidence of the typo may still exist in your PR. To erase all evidence, you can use rebase -i to amend the appropriate commit:
git rebase -i master
[replace 'pick' on the appropriate commit with 'e']
[fix the typo]
git add [file]
git commit --amend
git rebase --continue
After squashing unnecessary commits, a PR can be further improved by having each commit be a logical step towards the completed feature. This means the PR can be read commit by commit, showing why the feature was developed as it was.
This ordered grouping of changes often isn't the case with PRs. The steps towards the completed feature are spread amongst different commits, so rather than
Commit 1: Finish step A
Commit 2: Finish step B
Commit 3: Finish step C
Commit 4: Finish step D
you get something like:
Commit 1: A little bit of A and C
Commit 2: Finish A and start B
Commit 3: Finish B and C and D
Ideally, you make your PRs look like the former by having the discipline to identify the steps before starting work on a feature. Back in the real world, some cleanup will be required before creating your PR.
With rebase -i, this cleanup can largely be done by rewording, squashing, and editing commits. However, in some instances you may want to split a commit.
Changes split across two commits are probably simpler to follow than both combined into one. For that reason -- alongside padding your commit stats -- splitting a commit can make sense.
A commit can be split by following these steps:
[make a note of the hash of the commit you wish to split]
git rebase -i master
[replace 'pick' with 'e' on the commit immediately _before_ the commit you wish to split]
git checkout [commit_hash] -- . # Assuming you're in your project's root dir
git restore --staged .
[restage the files with the changes you want to appear in the 1st of the two commits]
git commit -m"Your commit message" # Note we create a new commit -- not amend!
git rebase --continue
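The same outcome can be sanity-checked in a throwaway repository. Note this sketch uses a common variant of the steps above: it marks the commit to split itself with 'e' and then uses git reset HEAD^, rather than editing the preceding commit and checking out files. The repository and file names are illustrative:

```shell
set -e
git init -q split-demo && cd split-demo
git config user.email you@example.com && git config user.name "Mark"
echo base > base.txt && git add base.txt && git commit -qm "Initial commit"
echo one > a.txt && echo two > b.txt
git add a.txt b.txt && git commit -qm "Add a.txt and b.txt"   # the commit to split
# Stop the rebase at the commit we want to split:
GIT_SEQUENCE_EDITOR="sed -i '1s/^pick/edit/'" git rebase -i HEAD~1
git reset -q HEAD^            # undo the commit, keeping its changes in the worktree
git add a.txt && git commit -qm "Add a.txt"
git add b.txt && git commit -qm "Add b.txt"
git rebase --continue
```

The single commit is now two, each a self-contained step, and the rest of the branch is replayed on top unchanged.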
If you're making such changes before creating a PR, it's probably best to clone your branch before making them, so you can check the two branches for unintentional differences.
I'm only starting to think about this, and I'm uncertain whether the time spent is worth it. But I can imagine several benefits: