Core Concepts: Conceptual Models

Time to embark on a not-very-ambitious project: I'm going to try to define, explain and justify the merits of some core concepts I wish everybody knew. These are ideas that people have come up with a word for; having that word lets you manipulate the idea in your head more easily.

I am going to start with the conceptual model. I think I first encountered the term in Donald Norman's The Design of Everyday Things, and while I thought it was useful at the time, I've come to find it an indispensable idea.


The Design of Everyday Things is about interaction design. Its core thesis goes something like this: human beings have a lot of physiological and psychological similarities, and we can use those similarities when designing objects for human use. The very shape of a well-designed artefact should tell its user how to use it.

An everyday example is doors. Do you push or pull on a door to open it? Norman firmly insists that you should never need "PUSH" or "PULL" signs on a door. If the door has a plate, the only option you have is to push it. If the door has a handle, it is affording you the option to pull it. A door built this way tells you how to use it (( This book was published in 1988, and yet my workplace, built just a couple of years ago, has ambiguously-handled doors that catch people out all the time )). Its shape gives you a conceptual model of how the door operates: you've got a pushable thing on this door, and if you push it, the door will open. If it has pull handles on both sides but only opens in one direction, it's giving you a bad conceptual model that will lead you to misuse it.

Doors are quite a simple artefact. A more sophisticated example would be a thermostat. The typical thermostat works to known principles: you set it to a temperature, and heating elements will turn on or off to regulate the room to that temperature. There generally isn't any gradation in the heating elements; they're either on or off. In spite of this, thermostat users still commonly turn the thermostat up to a higher-than-desired temperature in the belief that it will heat the room faster.

What's gone wrong here is that the user has a bad conceptual model. They think the temperature of the thermostat is the temperature of the heating element. If this were the case, turning the thermostat to a higher temperature would indeed heat the room faster, but it's not the case, so they get undesired and confusing results from their thermostat.
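The real mechanism is simple enough to sketch in a few lines of code. Here's a toy bang-bang (on/off) thermostat simulation — all the numbers are made up for illustration — showing that a higher setpoint doesn't make the room heat any faster:

```python
def room_temp_after(setpoint, start_temp=15.0, heat_rate=0.5, minutes=10):
    """Toy bang-bang thermostat: the heating element is fully on or fully off.

    heat_rate is the fixed warming per minute while the heater runs; it
    doesn't depend on the setpoint (all figures here are illustrative).
    """
    temp = start_temp
    for _ in range(minutes):
        if temp < setpoint:    # the only decision the thermostat ever makes
            temp += heat_rate  # same rate no matter where the dial is set
    return temp

# After ten minutes, a dial set to 20°C and one cranked up to 30°C have
# warmed the room by exactly the same amount.
print(room_temp_after(setpoint=20))  # 20.0
print(room_temp_after(setpoint=30))  # 20.0
```

The only thing a higher setpoint changes is how long the heater stays on, not how hard it works while it's on.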


Conceptual models play an important part in design. Today millions of people carry around incredibly sophisticated computing devices in their pockets, but the principles of interaction design make their use feel intuitive. Almost nobody using Twitter has any idea of how it works, but it's built with a very straightforward conceptual model that people can grasp without difficulty.

It's worth mentioning that conceptual models don't have to accurately reflect what's going on "under the hood" in order to be useful. Your computer, for example, doesn't store its data in any way that resembles actual files and folders, but the intermediary file system provides a conceptual model that allows you to abstract away all the ones and zeroes.

Design is not the only reason for thinking in terms of conceptual models. In the broadest sense, any model of how a concept works is a conceptual model ((the Wikipedia page linked above lists a variety of other disciplines that make extensive use of it)), but the design context is a very apposite one. When a user has a bad conceptual model of how a piece of technology works, they struggle to use it. They bang their head against it and call it all manner of nasty names. In my experience, this is also what people do when they have a bad conceptual model of more abstract concepts, like currency or calculus.


We're now getting to my motivation for wanting "conceptual model" as a thing inside everyone's head. I would love to be able to ask "can you give me a conceptual model for how this works?" and expect a functional but not necessarily factually-correct story or metaphor that equips me to use it. I'd love to be able to say "I'm giving you a conceptual model for this", and have the person I'm talking to realise that I'm giving them a tinker-toy explanation that works for most practical purposes, but shouldn't be thought of as "true".

Also, bad conceptual models are everywhere, and I would like to be able to identify them as such. When someone's labouring under a false assumption, you can say "you're labouring under a false assumption" and they will probably understand what you mean. I'd like that level of conciseness whenever someone (especially myself) is labouring under a bad conceptual model.

Conceptual models are a highly useful concept: download this app to your necktop.

Don't Microwave the Hamster

[I originally posted this on Facebook, but enough people seemed interested enough for me to share it somewhere more visible. At some point I will get around to adding appropriate citations and footnotes to this.]

One of these days I am going to learn to keep my big mouth shut. Today is not that day.

So Tom Napper is a guy I know, and he did the animation for this video about income inequality. I wish I could make things that look like this, and perhaps you should hire him for your assorted graphical needs.


Oh sweet Jesus, listening to this made my eye sockets bleed. For the record, I am in favour of progressive taxation. I'm in favour of redistributive fiscal policy that enables the disadvantaged. I'm in favour of gargantuan social safety nets. We do these things badly, and we should do them better. If watching this video dings a little lefty light in your brain, I am on your side. I want the same outcomes as you do.

That said, this video does some bad things. It puts wrong models in people's heads.

Good models are important. Very few people understand on a fundamental level how a microwave works, so instead they have a model of how it works in their head, involving timers and potatoes and bingy noises and metal spoons we shouldn't leave in there. If people think with a wrong model, at some point that wrong model is going to make them microwave the hamster.

I am going to try and highlight a couple of the wrong models I think this video perpetuates. You are, of course, at liberty to disagree with me, but it's worth mentioning that agreeing with me won't make you a Thatcherite. It won't even necessarily change what you think. But I believe that it will make you a better thinker who is less likely to stick the hamster in the microwave in future.


When you were a child, everything in the world probably came from your parents. If you had any siblings, your parents decided how much pocket money you each got at what age. There was a fixed amount of stuff in your household, an authority figure had control of all of it, and that authority figure divided it up as they saw fit. If you felt that division was unfair, you might complain to your mum and dad.

I'll call this the Pocket Money Model. It's highly pervasive, and the video is inviting you to use it. The reasoning goes "there's a fixed amount of wealth in this country, and mum and dad are dividing it up unfairly". In this case, it is a Wrong Model. In the national (and international, and indeed any) economy, not only is there not a fixed amount of wealth, but there's no mum and dad either.

People who like talking about arguments would call this illusory mum-and-dad an "agency fiction": the idea that wealth is being unfairly divided up as a result of the active, conscious choice of someone with an agenda, an agent. That agent doesn't exist. There is no individual or coherent consortium of individuals deciding what both Brian and Brenda should get paid. That is determined, to a greater or lesser degree, by the spending and working habits of every single person in the world. If you care about wealth inequality, it's important to have this in your mental model, not least because you can focus your energy on things other than going "but muuuuuuum, that's unfair!"

(Even if you disagree in this case, I urge you to hold onto the idea of agency fictions. Many seemingly-organised things, both beneficial and harmful, are the result of the interplay of lots of different uncoordinated factors under nobody's control, but people insist on trying to paint a human face on them. It gives them someone to insult or discredit or swing their ideological fists at. But in cases such as wealth inequality, there isn't an actual person on the other side. So next time you hear someone ascribing personal characteristics to a complex social problem, you know what to call it.)

The lack of mum-and-dad has some important consequences. When the video speculates about what the UK would be like if our incomes were distributed similarly to Scandinavia or the Netherlands, well...that's very nice, but it's not like we can go to mum-and-dad and renegotiate. Those countries may very well have interesting and beneficial policies that contribute to this state of affairs, and it's worth looking at those policies and asking whether we should adopt similar ones in this country, but we can't just do it. Rather than thinking of this as an easy problem that specific people are too stupid to fix, it's much more accurate to think of it as a fantastically difficult problem that no-one is smart enough to solve.


The second part of the Pocket Money Model that's worth questioning is the "fixed amount of wealth" assumption. This is best captured in the video with the line "higher incomes for some means lower incomes for others". I would like to examine this sentence and its implications.

There are a few interpretations of this sentence, but I think the one it's trying to get at is something along the lines of "one person's gain necessarily has to be another person's loss". This is sometimes termed the Zero Sum Fallacy. It's called that because it's a fallacy. It's fallacious. It's wrong. Wrong, wrong, wrong.

I would like to reiterate that you don't have to think banking bosses deserve all their earnings in order to see the wrongness of this. You just have to agree that by working together, people can achieve more mutually positive outcomes than by working against each other.

Six decades ago, South Korea was a decimated war zone. Today it's one of the fastest-growing economies. Even if you don't care how fast its economy is growing, look at photos of 1960s South Korea and compare them to contemporary South Korea. Living in 2014 South Korea is infinitely preferable to living in 1960s South Korea. The people are healthier, more highly-educated, and substantially less likely to die in armed conflict. They have a staggering selection of capabilities and options available to them. This is what wealth is. Contemporary South Koreans are wealthier than their 1960s counterparts, and if you said you'd rather live in 1960s South Korea than in 2014 South Korea, I would say you are either lying or there's something wrong with you.

They didn't take this wealth from anyone. They generated it. Conditions came about that allowed people in South Korea to cooperate and work productively and make stuff that they could trade for other stuff that made their country a nicer place to be. If you dispute this position, I challenge you to show who South Korea took its wealth from. Who is poorer because South Korea is richer?

(As an interesting aside, there were an estimated 231 million people alive in 1 AD, with an estimated world average GDP per capita of $467 in 1990 US dollars. That's a Gross World Product of $108 billion. Current GWP is somewhere in the region of $50 trillion. Who did we take that from?)
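For anyone who wants to check the arithmetic in that aside, it's a back-of-the-envelope calculation using only the figures quoted above:

```python
population_1ad = 231_000_000      # estimated people alive in 1 AD
gdp_per_head_1ad = 467            # estimated GDP per capita, in 1990 US dollars

gwp_1ad = population_1ad * gdp_per_head_1ad
print(f"GWP in 1 AD: roughly ${gwp_1ad / 1e9:.0f} billion")

# Against the ballpark current GWP of $50 trillion quoted in the text:
gwp_now = 50e12
print(f"That's a growth factor of about {gwp_now / gwp_1ad:.0f}x")
```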

This was described as a Zero Sum Fallacy because it assumes all events are zero-sum; that one person's gain has to be someone else's loss. This can be the case (when someone steals your phone, that's a zero-sum event: their gain and your loss sum to zero), but we also have positive-sum and negative-sum events. If you go and have a picnic with your friends, that's a positive-sum event, because you all get more out of it than you would if you'd individually sat in the park on your own. Global armed conflicts are typically extremely negative-sum; we'd all be better off if we had the option of not entering into them.

(Since I am appealing to a left-leaning audience with this little essay, it's worth pointing out that when people complain about immigrants coming over here and taking our jobs, they are making the same type of error. This is generally termed the lump-of-labour fallacy: the idea that there is a fixed number of jobs in the country, and some foreigner is taking a job that could be occupied by a native. This argument is false for similar reasons to the one in the video. Also it's sort of horrible, and while it's not exactly racist, it's powered by the same sort of horrible mental circuitry racism uses. I invite you to consider why it's better for a stranger born on one side of a border to hold a job than it is for a stranger born on the other side of that border.)

I think it's important to internalise, to properly feel it in your bones, that one person's gain does not have to be another person's loss. The only alternative seems to involve resenting other people's gains as theft. I would rather live in a world where people realise that cooperation makes their lives better than one in which people think the only way they can better their lives is by taking stuff from someone else.


One last quibble with the video. Let's look at Brian. The banking sector has some pretty messed-up incentive structures. These played a significant role in the 2007 recession. We have very good reason to believe that certain parts of the banking sector are doing socially-undesirable things that can cause big problems. As such, we might imagine that Brian the Banker with his enormous salary does not generate as much wealth as his salary suggests.

Now let's look at Brenda. She's a nurse. I wouldn't want to be a nurse. Not just because it involves various icky and emotionally-draining activities, but because the care sector is a mess. Care is expensive and as a society we have no idea how to pay for it. Care also doesn't get cheaper while iPhones and broadband and plane travel get cheaper, because it doesn't benefit from economies of scale in the same way as manufacturing or information services or transport. The problem is also only going to get worse as the population ages. As a result, nurses do highly-skilled work we think of as very important, but they get paid pretty badly for it, especially in the public sector. We would be quite right to imagine that Brenda does a lot more good work than her salary suggests.

You know what these two examples have in common? They're highly atypical examples of what they're supposed to represent. The majority of one-percenters (or even nought-point-one-percenters) aren't the boss of the largest bank in Britain. (In the UK you can currently join the 1% club with an income of significantly less than £200,000 a year, incidentally). The median earner isn't in a high-skill, perversely-underpaid public-funded career.

This doesn't have to change your mind about any substantive issue. You are welcome to think that we should use greater redistributive taxation than we already do to shift wealth from the top 1% to the bottom 50%, or whatever. I personally am very much on board with this. What I am inviting you to do, when deciding whether this is the case, is to not use Brenda and Brian as your reference classes for who the money goes to and from. They are especially bad examples for doing this with. The most egalitarian society in the world will have underpaid nurses and overpaid bankers, because these are problems baked into nursing and banking. We don't have to like this fact, but we shouldn't reason about the general case from its consequences.

More broadly, I am inviting you to be sceptical of exotic, extreme and emotionally-charged examples that are used to sway your beliefs, or to reinforce the ones you already have. If you can spot these things in appeals for positions you agree with, you can spot them anywhere.


When it comes down to it, I don't really care what you believe (within reason; I do care very much if you believe I should be set on fire and fed to hungry lions that are also on fire. Please don't believe that). However, I would like those beliefs to be arrived at for good reasons, with the finest thinking tools we have at our disposal. I want everyone's map of the world to be made of good models.

I especially want people to not stick their hamster in the microwave. No matter what their model tells them, they probably don't want to do that.

The Grease that Oils the Wheels

The first book I ever read on body language was Allan Pease's Body Language: How to Read Others' Thoughts by Their Gestures. I must've been about twelve years old, and I got it out of the library thinking it would give me super powers. Instead, I'm pretty sure it messed me up for life.

In the intervening years I've read a fair few books on body language, interpersonal skills and the like, ranging in quality from spurious hokum to solid gold. In spite of this, I have yet to develop super powers. This post is about what I've developed in their stead.

Tongue-Awareness Month
Once you notice that you point your feet at people you fancy, you'll never un-notice it. It's like becoming aware of your own breathing. When an unconscious action is brought to your attention, you'll second-guess how much of it you're supposed to do and lose all sense of what's natural. Where's your gaze going? How much are your hands moving? You're totally mirroring that guy. How obvious would it look if you shifted your position?

I think I've internalised quite a few of these cases of awkward posture-awareness, and in some cases developed some odd physical habits as a result. Still, every now and again I'll notice myself entering some textbook body language position, and then instinctively exiting it. This must look bizarre to observers.

Everybody Knows!
I once read a very simple magic trick in a best-selling book. In spite of the fact that its explanation is on a modest fraction of bookshelves up and down the country, I have regularly performed this trick for friends, on dates, or while trying to demonstrate a point, and not once has anyone said "you got that from [this pretty well-known book]". In spite of this experience, in the back of my mind I still expect everyone to have read it and know how it's done. Prescriptive people-skills advice feels quite similar.

Much like the above posture-awareness example, I've probably internalised any such advice to the point where it's become habit, but when you're first starting to apply something, it's easy to feel as if what you're doing is glaringly obvious. That book you read it in is freely available in shops. Anyone can pick it up and read it. Why don't they all know?

On the other hand...
I've had quite modest gains from my reading on this subject (by which I mean I'm disappointed that I'm not a combination of Derren Brown and Professor X by now), but one advantage of covering an assortment of material is that I am usually aware of when it's being used on me. With moderate regularity, I'll be talking with someone and realise they're trying to handle me with something they read in a book.

It's worth mentioning that the most naked attempts are the ones using methods from the most questionable sources ((neurolinguistic programming, I'm looking at you.)), but even adept people-handling has a certain feel to it. I am never entirely sure of the etiquette of saying "you learned that on a management course, didn't you?"

Just the right amount of sexism
At the beginning of this post I mentioned Allan Pease's book on body language. In collaboration with his wife, Barbara, he has written several books with titles along the lines of Why Men Eat Monster Trucks, and Women Can't Resist Ponies and Hair Brushes. ((While these books might not be totally devoid of valid observations (I've never actually read any of them), in the absence of Allan and Barbara Pease having any substantial research credentials in this area, I'm going to suggest they most reliably offer insights into the domestic lives of Allan and Barbara Pease and their friends.))

Gender politics are a real, complex and salient feature of the social landscape, but even a "factual" approach to them involves walking a line between moronic overgeneralisation and cautious neutering into non-existence. Many of the people recommending Dale Carnegie's How to Win Friends and Influence People ((which is a lot of people, incidentally)) suggest getting a copy printed before 1960, as more recent updates to the book allegedly haven't fared well through various periods of heightened political sensitivity. My personal approach has been to include questionable material in my reading and trust my better judgement. That said, I'm not sure I'd trust the better judgement of a general readership (cf. the pick-up artist community ((It's worth mentioning that while I would draw into question the basic human decency of many PUA proponents, I wouldn't be as critical of their scholarship. If you can overlook the outrageously shitty attitude towards women, they have quite a few genuine insights about patterns of human behaviour.))).

Exciting new frontiers of awkwardness
I'm fairly sure I am never going to run out of awkward physical habits to gradually become aware of. One of my more recent endeavours in a similar vein is public speaking. Recording yourself perform and then watching it back is an exquisite form of torture that will never stop horrifying you. Once you start training yourself to notice your disfluencies ("um", "uh", and every other gormless-sounding noise that can come out of your mouth) or semantically null words ("like", "well", "basically", "right"), you craft this amazingly versatile stick to beat yourself with whenever you open your mouth. While you're struggling to keep them under wraps, you end up leaving what feel like enormous pauses when you talk, just begging to be filled with whatever stupid words pop into your head.

(I have fairly recently discovered I possess a very odd habit of speech: when I talk without any good idea of where I'm going, I tend to end phrases with a high-rising terminal. This creates an expectation [not least in my own brain] that I'm going to keep on talking, so I say something else. Before I know it, I'm locked in a huge, rambling sentence that I feel aesthetically compelled to prolong. When I realised why this was happening, and that I could just stop talking by saying something in a concluding tone of voice, it was like magic. I'm convinced there's some area of presentation skills or media training or speech therapy that deals with how patterns of speech impact the content in this way, but I'm yet to stumble across it.)

And yet...
For all my misgivings on this cluster of subjects, I can't really dismiss its value. It would be difficult to describe how spectacularly socially-awkward I was in my young adulthood. While it's hard to say how much of my current relative social competence is simply maturity and further life experience, theory is a core part of how I operate. Having explicit functional models of how people work is undeniably useful if you're that sort of person. You obviously can't develop social skills just from reading books, but the books can definitely help.

I should probably end this with some book recommendations for the socially apposite and destitute alike. I would heartily recommend the recent What Every Body Is Saying by Joe Navarro as an introduction to body language. The author's credentials are quite solid, and he's satisfyingly conservative about the realistic applications of body language. It's consistent with my other reading on the subject, but self-contained and sensibly broken-down by topic.

I would also recommend the aforementioned How to Win Friends and Influence People, though I am not currently in a position to comment on whether you should read a version printed prior to 1960. I listened to the Andrew MacMillan audiobook (currently available in full on YouTube), which I would recommend as an agreeable medium for the material. Off the back of this post, I've ordered a 1953 copy off AbeBooks, so I may revise this recommendation in a few months' time. Especially if it gives me super powers.

Triptych: a Theory of History

What did I know about history before reading a trio of books on the subject? This is a hard question to answer, but it's worth taking a crack at in order to figure out what I've gained from the experience.

A Personal History of History
History and I officially parted ways in secondary school, where a clash on the GCSE timetable robbed me of any opportunity to study a Humanities subject. This didn't seem like much of a sacrifice at the time. What is "geography" anyway? Who are the world's foremost geographers? What Nobel Prize category do they fall into? Once you siphon off geology and environmental sciences and anthropology and sociology and politics and economics, what's left? Why not just study those subjects instead, or at least sort them into more natural categories?

I digress. History. History, history, history. So what did I learn in History at secondary school? William the Conqueror didn't just wake up one morning and decide to invade England; historical events have causes, but they're manifold and difficult to tease out after the fact. There's a finite amount of evidence about the past, and as more time elapses between now and then, that amount gets smaller and smaller, so it becomes harder to piece together what happened. Recountings of historical events, whether made in the past or the present, introduce bias into the record, because the people doing the recording are distanced from those events, whether through time or culture or their own understanding. The closer a piece of evidence is to the event it pertains to, the stronger it is as evidence, but it is never free from scrutiny. Monarchs and priests argue a lot. Don't try to cross the Alps with elephants.

Add to this an arbitrary selection of historical knowledge about Medieval England, Ancient Rome, the rise of Islam, the Reformation, Tudors, Stuarts, and a couple of World Wars. That's not a terrible takeaway, to be honest. I don't have a massive amount of faith in the way contemporary schooling works, but considering I stopped officially learning history at the age of 13, the above paragraph doesn't seem to reflect too badly on the process. That said, I was one of the nerdy kids who joined Archaeology Club in Year 8 ((Also Sign Language Club, Astronomy Club and the choir. Make of this what you will.)), and some elements of this may have come from there, which was tailored to a smaller and more interested group.

Quite a bit of time has passed between now and then, and in the intervening years I've taken up further interest in a lot of stuff that is undeniably "history". In addition, other disciplines that have my attention benefit from historical perspectives, either in the sense of that discipline having its own history that is worth investigating (history of philosophy, for example), or in the sense of pertaining to historical circumstances, events or data (such as economics).

"This stuff", I reasoned, "happened in the past. We only have historical records of it. It turns out there's a whole super-field dedicated to the study of history. It's called 'history'. Are there fundamental principles of the study of history which I could learn from reading some undergrad texts on the subject, which would help me in understanding it? Specifically, is there a general theory of history that I can learn?"

Long story short, there isn't. Unless I was staggeringly unlucky in my selection of books, there is no royal road to understanding past events and the remnants they've left for us. In mathematics, some things are described as holding "by convention": they are the way they are because everyone in mathematics has agreed to do it that way for convenience. "We do it this way by convention," you may hear a lecturer say, "while this has to happen the way it does because of inexorable fundamental facts about the universe, and if we did it any other way, black would be white and up would be down and nothing would make sense any more." History as a discipline seems to be made entirely out of convention (to my newbie eyes a pretty robust set of conventions that broadly make a lot of sense), but the inexorable facts about how to interpret the past don't seem to exist.

What do exist, however, are a variety of competing schools of thought around how it does and doesn't make sense to interpret history, and what it even means to do so. While all three of the books I read go over this to some extent, it is the primary basis for the first, The Houses of History, by Anna Green and Kathleen Troup. The book is divided into twelve chapters, each covering a broad "house" of historical inquiry and explaining its origin, proponents and motivations. It is billed as a reader, and as such each chapter concludes with a sample text from a historian exemplifying those ideas. It seems the houses of history aren't quite so well-demarcated as Hogwarts; some chapters address fairly well-defined schools of thought (Marxism, for example), while others address much more general ideas, such as the use of sociological theory in historical analysis. Nonetheless, from my outside view, Houses of History does an admirable job of providing descriptions and examples of contemporary approaches to historical investigation.

Trivial Pursuits
Next on the list was The Pursuit of History by John Tosh. While it lacks the chapter-by-chapter structure of Houses of History, The Pursuit of History has a similar goal: to move through different approaches to the titular pursuit, explaining what they're about and where they came from. It covers many of the same areas but with different emphases, elaborating on some of the more practical elements of history as a discipline. Tosh has a subtle sense of humour that I found quite pleasing, and goes to greater lengths to expound upon what historians are actually doing, what the value of their work is to society, and what sort of role they should hold in broader discourse.

"Oh, but we've already met..."
History: A Very Short Introduction, by John Arnold, is obviously aimed at a more popular audience than the previous two books in this triptych, and this showed in a number of ways. For a start, it begins with a murder. While the previous two books were quite dry, this one is very much about getting the reader excited by the very notion of history. Something I found especially pleasing was an example of novel historical research carried out specifically for this book, in which the author walks us through the process of investigating the events surrounding an innocuous entry in a 17th Century ledger, following the life events of the individuals it mentions and putting them in a broader historical context. It also devotes a couple of chapters to a condensed history-of-history, going over historians featured in the previous two books in less of a distributed, topical meander.

Oh, the Humanities!
As hinted earlier, I find myself in foreign territory as far as the humanities are concerned. They have mildly askew alien values, and reading them is like driving along the edge of an uncanny valley. You never know when you'll tumble over the edge into a world where people take Sigmund Freud and Karl Marx seriously. Houses of History dedicates an entire chapter to psychohistorical analysis, which is apparently totally a thing that people still do, in spite of Freudian psychoanalysis being widely discredited by the overwhelming majority of contemporary psychologists, and the past few decades seeing phenomenal insights from the cognitive sciences ((Somewhat analogously, in a recent Scott Aaronson interview he mentions how when reading books on the philosophy of physics or computing, it's like they're several decades behind the times, talking about the implications of problems that have been long resolved in their parent field.)). That said, the sample text from this chapter had the spectacular name "The Legend of Hitler's Childhood", which I can't help but applaud.

Also Marx. So much Marx. Coming from an economics background, I've always found it mystifying that Karl Marx holds such sway in the humanities. Marx basically doesn't exist in any modern economics curriculum, because the theoretical accounts of economic activity he provides are essentially wrong. No biggie. Lots of philosophers who thought about economic behaviour around this time were equally wrong; however, relatively few of them have been canonised among the greatest thinkers of the 19th Century on the other side of the fence. Having now delved (very shallowly) into History's take on Marx, I'm perhaps a little less mystified, though I now have a few more Marxism Mysteries in my collection. It's hard to say whether I've come out ahead or not.

At some point, I will write the post on Talking Heads from History which I have been meaning to write pretty much forever. Now is not that point.

I started this triptych with the goal of excavating the tip of some general theory of history I was previously unaware of; one that might yield insights I can use when considering other ideas and disciplines in historical contexts. I wasn't successful in this regard, but I do feel like I'm walking away from it with a selection of intangible benefits. Rather than picking up some useful tools of thinking, I feel like I've gained broader appreciation and perspective, though only time (and presumably reading more history) will tell whether this is actually the case.

I am still quite confident that I shouldn't try to cross the Alps with elephants.

Coming Up...

Mostly for my own benefit, here's a run-down of subjects and books I plan on reading in the near future.

History

I've been meaning to delve into this for a while, but I think watching CrashCourse World History crystallised it for me. Some subjects can be more amicably divorced from their histories than others. You don't need to know about Francis Galton to understand statistics, but if you want to make sense of the Business Cycle, there's an implicit need for knowledge of the historical events that pertain to it. It seems to me that a basic introduction to the theory of historical inquiry may be useful. As such, I've put together a triptych introduction to the subject.

The three books I've opted for are The Houses of History (which I'm about halfway through at time of writing), The Pursuit of History, and History: A Very Short Introduction ((I've heard a few good things about the Oxford University Press Very Short Introductions series, and I expect to sample quite a few in the near future. Not only are they in broad alignment with my goals, but they're also ridiculously cheap on AbeBooks, and having a ~150-page volume I can polish off in an evening makes rounding off a triptych a lot more straightforward.)). I've also picked up a copy of Jared Diamond's Guns, Germs and Steel, which I've been meaning to read for a while. The plan is to see what I make of it once I'm historied up to the eyeballs.

Mathematical Proofs
I have something of an embarrassing secret: I am terrible at mathematical proofs. I'm conversant with common methods and techniques, and I can follow some pretty hairy ones, but ask me to personally prove something, and most of the time I'll spend half an hour pushing vacuous algebra around a page before skulking off in failure. I am led to believe this isn't an uncommon problem for people in my position. I have a lot of "methods" under my belt, but haven't really done much by way of analysis. While this doesn't matter quite so much right now in my mathematical career, it will probably become a lot more important later on, so I suspect it'd be a good idea to tackle this one now.

With that in mind, I'm awaiting the arrival of How to Read and Do Proofs, which came recommended by the four corners of the internet, along with How to Solve It, which came similarly recommended, and which satisfies my weakness for quaint books from decades ago that are presumably still in print because they're of genuine value. While I will likely review these books, I won't form them into a triptych. This is a case of reading as many books on the subject as it takes to not need to keep reading books on the subject any more.


A couple of weeks ago I finished reading Daniel Dennett's Intuition Pumps and Other Tools for Thinking. I'd recommend it, though it's not the focus of this post.

Early on in the book, two of the "tools" established by Dennett are certain words that he claims are strongly indicative of weak points in arguments. Specifically, these words are "surely" (which he claims is indicative of the author pleading a point he or she is in fact not entirely sure of) and "rather" (which he claims is often used to conceal a false dichotomy). I believe these claims have some weight to them, though they're also not the focus of this post.

"What is the focus of this post?" I hear no-one cry. Well, having established these two words early on in the book, whenever they subsequently appear, Dennett suffixes them with "(ding!)". Sometimes the word appears in one of his own arguments and sometimes in those he is critiquing, but the purpose is to effectively draw your attention to their use. So effective, in fact, that when I was reading an unrelated piece of text last week, I hit the word "surely" and ding!ed.

I'm not sure whether this would be so effective for everyone, but I have the opportunity here to train myself to notice specific word usage. I've installed a regex browser extension (this one, for Chrome, is my weapon of choice, but I'm sure suitable alternatives exist for Firefox), and set it to append "(ding!)" to a set of words I want to notice. For the time being I'm working with "surely", which I'm finding to have quite a bit of mileage.
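The substitution itself is trivial. Here's a minimal sketch in TypeScript of the kind of rule the extension applies (the word list and the `ding` function are my own illustration, not the extension's actual internals):

```typescript
// Words I want to be alerted to, matched on word boundaries so that
// "surely" doesn't fire inside e.g. "leisurely".
const words: string[] = ["surely"];
const pattern = new RegExp(`\\b(${words.join("|")})\\b`, "gi");

// Append "(ding!)" after each matched word, preserving its original casing.
function ding(text: string): string {
  return text.replace(pattern, "$1 (ding!)");
}

console.log(ding("Surely this argument is sound."));
// → "Surely (ding!) this argument is sound."
```

Adding new tells later is just a matter of extending the array; the alternation and word boundaries take care of the rest.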

Consider how hard this post is for me to proof-read while this regex rule is in place. If I make a follow-up post in six months listing all the words I've added, that's going to be even more fun.

EDIT: Surely enough, a (ding!) crept into the draft of this post, and I spent about ten minutes wondering what was wrong with my incredibly simple regex that was causing two (ding!)s to appear.

Of Siblings and Sea Sponges

Until our early twenties, my sister and I had almost identical academic histories, right up to and including dropping out of a physics degree. The only significant substitution was that she took an A-Level ((For non-Brits, A-Levels are among the qualifications typically obtained between the ages of 16 and 18 before going to university.)) in electronics while I took one in chemistry. This had an interesting side-effect: I have a paired control case for studying A-Level chemistry. It's especially interesting as I'm fairly sure that A-Level chemistry was one of the more practically useful academic endeavours of that period, even though I never went on to study it further.

There are so many common domestic activities where a working knowledge of chemistry is useful. Thickening a soup, thinning some paint, picking a suitable cleaning product or using the right glue all become a lot easier to do on the fly when you understand the principles behind them. I have learned a lot of useless stuff in my life ((I possess an alarming number of "facts" about starships, supernatural creatures and the metaphysics of fictional TV shows, which I'm sure will serve me well if I'm ever stuck in a piece of Star Trek/Buffy/Quantum Leap crossover fanfiction.)), but I've never regretted understanding what emulsification is, or how detergents work, or why acids are corrosive. I've never wished that the knowledge of why glass and metal and rubber behave the way they do be replaced with something more useful. I also have a reasonable test for whether any given piece of knowledge is dependent on me having studied A-Level chemistry: I can just ask my sister.

You know what I didn't study at A-Level? Biology. I did one chemistry module in biochem, and I've picked bits up as an interested observer over the intervening years, but there's some alternative version of me in some Bizarro Biology A-Level world, wandering around with all sorts of knowledge of metabolic processes and enzyme production and protein synthesis, using the crap out of it in assorted everyday ways that I can't imagine. It's tantalising to think about Bizarro Biology A-Level me. So tantalising, in fact, that I've taken a few small steps towards becoming him.

You may already be familiar with brothers John and Hank Green as YouTube Internet Celebrities, who started vlogging to each other in 2007, and ended up with a committed internet following. They are both eloquent and diversely well-educated, with a broadly-appealing nerdy charisma and sense of humour. A year or so ago they started expressing frustration that while the internet is very informative, it's not necessarily as educational as they might like. There are many ways of learning a lot of atomic facts about a subject, but it takes a certain amount of effort to put those facts into a broader context where you actually start to appreciate what they mean. With this in mind, they started CrashCourse.

CrashCourse consists of playlists of 10-12 minute videos, with each playlist intending to provide a broad introductory overview of a subject. At present those subjects include Biology, Chemistry, Ecology, World History, US History and Literature. Over the past couple of weeks, I've worked my way through Hank's Biology playlist. It clocks in at about seven hours, and while I doubt I'm equipped for a Biology A-Level exam after that, I have more substantial foundations in place for further inquiry. I'm not very well-equipped for Chemistry A-Level right now either, but the useful concepts are still there.

The videos are produced with laudably high production values, and while they are watchable and entertaining, I believe they also succeed at the broader goal of being genuinely educational. "OK," they'll sometimes say, "this one is going to be pretty involved, but please bear with me; it's kind of important". I have a massive amount of respect for this approach, and feel it adds to the credibility of an educator if they have some faith in your motivations for learning.

While my biology appetite has been whetted, I'm not sure what to follow it up with. It is a massive subject, and yet it doesn't intersect too neatly with anything else I'm studying at the moment ((In actual expensive-piece-of-paper education, I've just finished a unit on medical statistics, and the pharmacology/mechanism-of-action/chemistry crossover is something that piques my interest when reading Derek Lowe's blog, but this would presumably require some pretty heavy and well-directed study before I have any appreciable understanding, which I then probably wouldn't have much use for.)). I may just let it brew for a while. There's also an eight-hour CrashCourse World History playlist sitting there, winking at me, and if I'm honest, I think I prefer John's delivery to Hank's.

Triptych: Foundations of Logic

Part of what I want to accomplish with this blog, and my mission to read an introductory textbook in every subject I claim to be interested in, is to map out my own ignorance. It's easy to convince yourself of your own imaginary expertise in a subject if you've never been forced outside of your comfort zone within that subject. I notice that a lot of people haven't been forced outside of their comfort zone on the subject of logic.

Many people who have done a bit of programming, a bit of maths and read a few Wikipedia articles on logical fallacies seem to fancy themselves experts on logic. A lot of these people are very vocal on the internet. I used to be one of them, and I'm very, very sorry. In more recent years I've found these people frustrating to look at, mostly because I'd done a bit more maths and read a bit beyond the Wikipedia articles, but I didn't have a particularly high horse to climb on.

By accident rather than design, over the past few months I've wound up completing something of a foundations-of-logic triptych, studying a coherent body of knowledge on three different fronts. It started with Lepore's Meaning and Argument, which I've mentioned previously. Then for unrelated reasons I ended up working through the Coursera Think Again: How to Reason and Argue MOOC, and Doug Walton's Informal Logic: A Pragmatic Approach.

Meaning and Argument

This is something of a misnomer as far as book titles go, as it doesn't really cover anything about meaning in the sense of semantics. I got hold of it on the recommendation of the Less Wrong Best Textbooks list. It is primarily an introductory text on formal logic, covering propositional, categorical and first-order logic, as well as use of logic-tree techniques to validate deductive arguments. It doesn't cover inductive logic. It excels at presenting a large number of exercises for drilling oneself in translating natural language into logical notation, as well as manipulating that notation and evaluating it for deductive validity. A very strong emphasis of the book is demonstrating the resistance natural language exhibits to being systematically translated in this manner.
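To give a flavour of the translation-and-validation exercises involved (the example argument below is my own, not one of Lepore's):

```latex
% "If it rained, the match was cancelled. The match was not cancelled.
%  Therefore, it did not rain."
% Letting $R$ stand for "it rained" and $C$ for "the match was cancelled",
% the argument translates to:
\[
  R \to C,\ \neg C \;\vdash\; \neg R
\]
% A truth table (or a closed logic tree) confirms that the argument is
% deductively valid: this is modus tollens.
```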

Informal Logic: A Pragmatic Approach

Whereas the previous text dealt with formal logic as an abstract set of relationships between propositions, this text, as its title suggests, covers informal logic in the familiar environment of human discourse. It primarily concerns itself with the purpose of different types of dialogue, pragmatics, different categories and subcategories of informal argument, and in particular distinguishing cases where these categories of argument are and are not fallacious. This book strikes me as a very pleasing antidote for anyone who first sees a list of logical fallacies and thinks "woah! I'm totally going to win me some arguments with these!" In fact, it almost seems to be written for this specific purpose. It is not the easiest book to read from cover to cover. Previous versions had the subtitle "a handbook for logical argumentation", and the reference/handbookiness of it is very apparent. While it is very fit for purpose in terms of ironing out one's understanding of informal fallacies, I would recommend it only until I find something more readable.

Think Again: How to Reason and Argue

The goals of this course are wide but not lofty. It is a very broad introduction to a selection of topics surrounding reasoning and the formation and evaluation of arguments. I can't even remember why I started watching the video lectures, but I very quickly found myself charmed by the course instructors Walter Sinnott-Armstrong and Ram Neta. Their deliveries were warm and entertaining, and I found myself watching video after video. I skipped quite a few of the videos on subjects I'd covered in detail elsewhere, and watched most of them at 1.5x speed. In my case this was very much a gap-filling exercise, but over such a broad area there were quite a lot of gaps.

Think Again covers the linguistic foundations of arguments, formal logic up to first-order (along with a lot of drilling exercises, which I didn't really bother with off the back of Meaning and Argument, but which strike me as potentially useful), inductive arguments, causal and probabilistic reasoning, various categories of fallacious reasoning and processes of refutation. This was my first introduction to Walter Sinnott-Armstrong, whose academic work involves practical ethics and the evolutionary basis of morality. Off the back of this course I obtained his textbook, and while I've only flipped through it, the first half seems structurally similar to the Think Again course, only a lot less MOOCy and in much greater depth.

This triptych has definitely given me a much stronger position of meta-knowledge on various concepts and activities that get labelled "logic", though at some point I should take a more comprehensive introduction to mathematical logic and make it a tetraptych ((This is totally a real word. I just looked it up.)). Off the back of it, I'm probably going to investigate Walter Sinnott-Armstrong's other philosophical work, and I'm motivated to investigate the linguistics/pragmatics angle in more depth; I've been sitting on O'Grady's Contemporary Linguistics for about a year.

The triptych format (approaching a subject from three different sources) seems like a good format for building a solid subject foundation, so I may very well employ it again in future.

Precarious Armchairs and Filling the Gaps

I have a new rule: books I read for explicitly edifying purposes should be textbooks. I am allowed to read other books, but they go against entertainment in the ledger*, and don't count as learning any more than Terry Pratchett, Dilbert or Project Runway. **

I'm doing this so I can't convince myself that I understand a field of study when I don't. This can happen for a variety of reasons, but I think an especially common vector is being a part of the skeptic / pro-science / atheist crowd on the internet. As part of this crowd, you're introduced to a very shallow reading of a variety of enormous subjects (evolutionary biology, statistics, cosmology, logic, philosophy...), and especially if you take up skepticism / pro-science / atheism as political positions to be argued at all costs, you're going to abuse what little knowledge you have in defence of your protected beliefs and say a lot of stupid things. I shamefacedly hold my hand up and admit to doing this.

(A note to people who persist in doing this: it's piss-annoying, and if you ever grow out of it you will regret it more than any youthful fashion-blunder you can imagine. If you stop now, you will regret it a little less.)

There are slightly less embarrassing ways to scratch the surface of a subject and think you've exhausted it. A lot of popular science literature seems to do a good job of making the reader feel they've learned something, and a bad job of imbuing the reader with a sense there's more to learn. When you finish a pop-sci book on a subject you're unfamiliar with, you feel like your mind has been expanded, and compared to your still-unfamiliar peers, you can feel like a resident expert. It's as if all the condensed insight from the field has been given to you, and the messy underpinnings are something you're fortunate enough to not have to worry about.

It's easy to have an armchair understanding of a subject. You might know considerably more about that subject than the average person on the street, but that is a ridiculously low bar. There's a sense, I think, that moving from armchair understanding of a subject to pursuing it as part of a course of formal study is simply "filling in the blanks". My experience of diverse formal study is that while this is sometimes the case, on many occasions, those "blanks" turn out to collapse into vast crevices, and the fall can be very uncomfortable if you're not prepared.

As a result, I've decided the armchair is an unsafe place to theorise from. If a subject is important enough for me to be interested in, it's important enough for me to work through an undergrad textbook on that subject. This won't make me an expert in all these subjects, but that's not the point of the exercise; the point is to learn how much ignorance I have.

The first area where I've decided I don't know enough is philosophy. Courtesy of AbeBooks, I've obtained Ernest Lepore's Meaning and Argument, Norman Melchert's The Great Conversation, and for good measure, Bertrand Russell's History of Western Philosophy. This seems enough to be going on with. As I finish each, I plan to report on what I feel I've learned from them.

* I don't actually have a ledger where I document all my media intake and meticulously label how worthy it is, though now the idea's been floated, I'm seriously considering it.

** I don't actually watch Project Runway. There is not enough helium in the world to float this idea high enough to make me consider it.