Agile Methodologies: a Retrospective

Disclaimer – While this may appear to be an article about a software development methodology, I've been using many components that are considered "agile" in non-software projects for over a decade. For the software folks out there, this is an article on the pieces of "Agile" that I find value in – your mileage may vary. After exceeding 2,000 words, I decided that I would need to kick out separate posts outlining the details of the individual Agile components.

At a recent meeting of the folks involved in the PMI Southern Alberta Chapter (PMI-SAC) mentorship program, a discussion started around "Agile" since one of the people involved had been informed that their team would be using agile. A number of side discussions arose around this, including questions like:

  • What kind of Agile will you be using?
  • That seems like a project better suited to a traditional methodology – why are you using Agile?
  • What sort of projects are suited for Agile methodologies?
  • Why do IT projects continually fail / over-scope / finish late / …

I ended up having an interesting conversation around what I consider to be the "core" of agile, since I was involved in bleeding edge software development in the early 2000s, which is when a lot of what is now considered "Agile" was just starting out. This was a time when programmers were talking about "patterns" and "anti-patterns" and really trying to get a handle on why software projects were invariably different than other types of projects. Some of the "manifesto" materials included "The Mythical Man Month" and "The Pragmatic Programmer", and one of the first "methodologies" that I was aware of which had developed enough that it had a name was "Extreme Programming" (or XP; see Note 1). I found it interesting (and a bit worrying) that a lot of the discussion (and confusion) around "Agile" seems to be very similar to the discussions we had almost 20 years ago.

Agile before “Agile” was a word

As far as I know, our team (at Facet Decision Systems) was one of the first in Canada to try and formally codify some of the things that are now considered "agile" and didn't (then) have a name. We spent a lot of time trying to figure out what was actually effective, and what just seemed to be "faddish", trying things like pair programming, rapid iterative development and unstructured "code and fix" sessions. I think that one of the big takeaways from our experience is that some things will work really well, and some will be horrible – and it will all depend on how your team works together. The iterative development components of it got wrapped up into a system called "FastTracks" – but this was largely used to explain to traditional companies that we were really using a traditional "waterfall" development approach, but were breaking things up into little tiny projects. Instead of trying to explain the entirety of Agile (again) I'll suggest that folks who are interested take a look at Martin Fowler's insightful summary (which also includes a lot of the history I wasn't aware of at the time) HERE. I'll now provide the 10-minute overview of the pieces of Agile that we found useful (and why I think that they got introduced in the order that they did).

Useful pieces of Agile – Scott’s Opinion

This is my distillation of the pieces of what has become known as “Agile” that I have found work well for me in almost any circumstance. Since I’ve spent most of the last decade working on “traditional” projects like permitting and building power plants and chemical facilities rather than software development, these are the pieces that I think have fairly universal application.

Iterative Development

For all of the folks in software who are painfully familiar with the phrase "the spec is always wrong", you can take consolation in the fact that in the rest of the world it's almost always incomplete or out of date.

There are two primary reasons to do iterative development:

  • Breaking your project into easier-to-manage pieces
  • Figuring out what you don’t know early in the process, while you still have time (and budget) to deal with uncertainties

The first reason is almost always an acceptable reason to do iterative development. Many managers (and executive sponsors) don’t want to consider the second, but in my opinion this is the best reason to use iterative development because it very quickly exposes assumptions that may not be on completely solid ground. I’ll start this with an apparent digression and outline how we built the “FastTracks” system, and the explicit reasons that we configured it the way we did, as well as the “official” and “unofficial” reasons for each step. Note that this is definitely “stone soup” territory because our experience demonstrated that if we told our clients the real reasons for the system our projects would get rejected. Discussions with several of our repeat clients demonstrated that our “duplicity” was appreciated because the reason that previous projects had failed was because the initial success criteria were the wrong criteria, and (people being people) many project sponsors are unwilling to admit that they don’t really know what their project will look like if it is “successful”.

Test Driven Development:

TestYou won’t find this phrase (or the acronym TDD) before the mid-2000’s. I was introduced to this concept through the groundbreaking book “The Pragmatic Programmer” and it has proven to be a pretty useful trick: build your test before you build your code. This is particularly useful if you are trying to replicate the activity of “”legacy” code (AKA “it works, but this programming language hasn’t been used for a couple of decades and we can’t port or maintain it anymore). Instead of writing a detailed specification, build yourself a sample input file (or files), and sample output. Your code is successful when you feed the input file(s) into your new code and the output matches your output test. This seems easy, but can be surprisingly difficult.

The core of this little technique is that if your test passes, your code is compliant (by definition). A subtle "gotcha" is that sometimes your test passes because it's incomplete, so you may need to add pieces to your test so that it reflects what you actually meant it to do. Typical test pieces will include known boundary conditions – for example, sometimes the software behaves differently if you feed it one thing instead of a list of things. To steal a line from Ralph Vaughan Williams: "I don't know whether I like it, but it is what I meant."
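The input/output-file approach described above can be sketched as a "golden file" test. This is a minimal illustration, not code from any real project – the function names and the toy "legacy" behaviour (summing each record) are invented for the example:

```python
# Minimal "test-first" harness for replicating legacy behaviour.
# The legacy system's recorded input/output pair acts as the spec:
# the new code is "done" only when its output matches the recording.

def legacy_replacement(records):
    """New implementation under test (toy behaviour: sum each record)."""
    # Boundary condition from the text: the software may be fed one
    # thing instead of a list of things, so normalize to a list first.
    if not isinstance(records, list):
        records = [records]
    return [sum(r) for r in records]

def run_golden_test(func, sample_input, expected_output):
    """Written BEFORE the implementation; passes only on an exact match."""
    return func(sample_input) == expected_output

# Sample "files" captured from the legacy system (in-memory here):
sample_input = [(1, 2), (3, 4)]
expected_output = [3, 7]

assert run_golden_test(legacy_replacement, sample_input, expected_output)
# The single-item boundary case, added after the test proved incomplete:
assert run_golden_test(legacy_replacement, (1, 2), [3])
```

In practice the sample input and expected output would live in files checked in next to the code, so the "spec" travels with the project.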

While everyone thinks that test driven development is a software thing, I have found that it is also very useful for doing process improvement. In most cases your final product should be the same after a process improvement project as before so you can use the same sort of methodology, and if your process improvement also provides a product improvement, you can then change your test to include that enhancement.

Sprints (Scrum and a few others)


There are a lot of methodologies that use “sprints” – short time periods (usually a week to a month) to build the next set of features. Project managers are tempted to think that this is just so that you can have consistent sized chunks in your work breakdown structure, and some Agile practitioners think that this is to keep the size of each development step small enough that you can dispense with those pesky project managers completely.

I’ll propose another reason: momentum. If you are doing iterative development and building your testing before you write code and a hundred other possible things – all of which occur before you start – having the entire team bog down and spend forever on things that aren’t part of your project becomes a real concern.

The sprint forces you to do something and actually deliver, then look at what you built and then deliver again. Because the “sprints” are short, there is less pressure around delivering perfection and more of a focus on delivering something incrementally better. Without this step, projects can get trapped in the vicious cycle where your planning is making something even better than the original, so it’s worth waiting for. But because it’s late, you need to over-deliver. And that improvement you need to add to over-deliver will make it later…

There are three big risks that I see with the “sprint” methodology instead of a more conventional “waterfall” approach:

  • Sprints can deliver something other than what is expected, which can be less than ideal if you are sprinting to a defined deliverable. Divergence from expectations is generally seen as a bad thing, so formal change management needs to be solid and outline the approved (and thus "expected") changes.
  • Because the focus of sprint development is on the next iteration, there is a tendency to cut corners on each sprint. If this isn't accounted for (and fixed) in the next iteration, then these shortcuts can build up, with the potential to cause cascading failures (see Note 2).
  • Larger projects are more like a marathon than a sprint. While some teams have figured out how to pace themselves, I know a lot of teams that started fast and then burned out. Some of the best “sprinters” I know are actually marathon runners who are neurotic about having a milestone every hundred meters. They suck at races under 800 meters, but for longer distances they leave their competition in the dust.

Other Pieces

There are a number of other pieces that are considered "Agile" that I tend to use, but the three above are the ones I use in most of my projects – whether or not they are software based. Other components are much more context sensitive. Standing meetings may not be a good idea when you are reporting to VPs, and I note that the fable of the Chicken and the Pig has been conspicuously absent from Scrum's core precepts since 2011 or so. While I understand the change, the whimsy of this fable was (in my opinion) helpful in building a team dynamic that let people laugh at themselves and then move on.

What’s with the Agile Elephant?

If you are wondering why this article started with the picture of an elephant, I chose it for the parable of the elephant and the blind men. What is Agile really, and does Agile mean different things to different people?

If you aren’t familiar with the parable look HERE. For a criticism of the parable (as it applies to the search for philosophic understanding) you can find an interesting counterpoint HERE.

Note 1: This was always a bit confusing since this is the time period when Windows XP was launched, so you needed to figure out if programmers were talking about the operating system or the philosophy, and it got exciting when you were talking to Extreme Programmers who were coding for Windows XP…

Note 2: Iteration failure is much less of an issue if you are using test driven development because it’s not “done” until it passes the test, so shortcuts that fail testing don’t count as progress.

Elephant Photo Credit: Coco Parisienne (Pixabay)


Using the Right Metrics for Project Profitability – an Optimization Fable

The Challenge:

In 2015 I was asked to help with a root cause analysis to investigate the poor performance of some projects in an engineering consulting firm's portfolio. This article uses "fake" numbers, but they are similar (in the aggregate) to some extracted from an actual accounting database. When I entered the discussion the cause was assumed to be percentage cost of sales. Digging into the data resulted in different conclusions (and remedies) and underscored why using the right metrics for project profitability is critical.

These projects had been flagged as having very high sales costs as a percentage of revenue. Since there is a minimum amount of work required to win any project (even if it's nothing more than writing an email proposal) a commonly used practice is to look at the cost of sales (the effort required to win a contract) as a percentage of the value of the project. This metric is the percent cost of sales. While larger projects tend to require more effort (a bigger proposal) to win than smaller projects, in general larger projects have a smaller percent cost of sales, since a $12,000 proposal for a $400,000 project is much smaller (as a percentage) than a $200 proposal (an email) for a $5,000 project.
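The metric is simple arithmetic; a quick sketch using the two examples from the paragraph above:

```python
def percent_cost_of_sales(proposal_cost, contract_value):
    """Cost of winning the work, as a percentage of the contract value."""
    return 100.0 * proposal_cost / contract_value

# The two examples from the text:
large = percent_cost_of_sales(12_000, 400_000)  # big proposal, big project
small = percent_cost_of_sales(200, 5_000)       # email proposal, tiny project

assert large == 3.0  # 3% - under a typical 5% "no bid" threshold
assert small == 4.0  # 4% - a larger share, despite the tiny dollar cost
```

Note that the $200 email proposal "costs" more, as a percentage, than the $12,000 formal proposal – which is exactly the pattern the firm expected to see, and didn't.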

There was a "healthy debate" in the boardroom when I arrived, and one group was suggesting that the company avoid these large expensive projects and focus exclusively on smaller projects (which had been the core business of the company for much of the last decade). Another group was pushing back, pointing out that while the larger projects were not as profitable, their size and length made it possible to hire additional staff and ramp up the size of the company. After some discussion we all sat down and the team presented a graph showing the size of each project compared with its cost of sales, very similar to the one below.

(Graph) Contract Value versus Percent Cost of Sales

There were a number of mid-sized projects that looked quite attractive (green circle) but more than half of the small projects for the year had a lower cost of sales than either of the two large projects. Perhaps more striking is that many companies will simply not bid on projects where the expected cost of preparing a proposal is more than 5% of the contract value, and roughly half of the projects were more than the 5% maximum cost of sales that *should* have been applied. Something was clearly odd, and the cost of sales seemed to be out of control! Given that most companies of this type tend to have a 10% (or less) profit margin that would suggest that one of the two large projects was actually costing the company money, and the second wouldn’t be adding much to the bottom line.

Down the Rabbit Hole

Asking how the company was doing in terms of overall finances provided an interesting answer – the previous year had been one of their highest revenue years, and it had been slightly more profitable than an average year – almost 18% profit before taxes. While the team was good, I didn't think that they were more than twice as good as the average company in their industry – which they would have to be if they were making profits at that level after incurring massive sales costs!

To see if we could get a better handle on what was actually happening, I asked if we could get a graph showing project size against profitability, as a percentage of contract size. Crunching the data in this way, suddenly the "high cost" projects (in blue) started looking much more attractive: the reason that cost of sales is an important metric is because if it "costs" too much to get a sale then your overall business is unprofitable. These numbers suggested that the large projects were carrying their own weight. But this plot looked a little too pretty – at least some projects should be expected to lose money, and according to this data, only one project was unprofitable in the last year!

(Graph) Contract Value versus Percent Profit (with Overheads Prorated)

Looking into the cluster of mid-sized projects (in the green circle around $150-$250k) there was general agreement that these were not "outliers", but projects that had resulted from the natural progression of some medium-sized ($25-$50k) projects. Proposal costs were invariably low, and effectiveness was very high, since the project teams already knew what needed to be done and were already familiar with the projects (and the clients). If there was only one kind of project to do, these would be the projects – but there was no way of knowing which medium opportunities would lead to this kind of follow-on work.

At this point one of the people involved in the less profitable of the large projects asked how profitability was calculated – he knew that the proposal costs for his project had been moderate (considerably less than the cost of sales percentage had indicated) and the project metrics were comparable (although larger in scale) to some of the extremely profitable mid-sized projects. After some discussion, it was discovered that the accounting system did not track “overheads” (like invoicing) to a specific project, but instead applied them pro-rated to the projects based on contract value – assuming that a $400k project would require 100x the administrative overhead of a $4k project.

So after this discussion, we were left with two questions: where were the "unprofitable" projects, and exactly what costs should be applied to each project?

A Reality Check

Because of the way that overheads were calculated, there was no way to determine what the historic overhead costs were, but it was clear that the large projects were carrying a disproportionate share of the overhead burden. Instead of spending several months (and increasing the overhead burden) collecting this information, a rough breakdown of costs was determined based on the typical length of projects and their accounting complexity. Regardless of the size of the project, a certain amount of time was needed by the accounting team to set up a project, create appropriate project codes and close the project. The other major factor was monthly billing, with large projects taking up to 2x the amount of time to issue invoices every month as the smaller projects, and their typical duration was 5-9 months, while a small project was often created and closed in a single month. Reallocating on this basis provided a very different graph, which had many of the "large project" proponents feeling vindicated, and the "small project" group scratching their heads.
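The two allocation schemes can be sketched like this. All of the dollar figures here (setup cost, billing cost, totals) are illustrative assumptions for the sketch, not the firm's actual numbers:

```python
def prorated_overhead(contract_value, total_overhead, total_contract_value):
    """The accounting system's original approach:
    overhead is assumed to scale with contract value."""
    return total_overhead * contract_value / total_contract_value

def activity_based_overhead(duration_months, is_large,
                            setup_cost=1_500, monthly_billing=500):
    """The rough reallocation: a fixed setup/close cost for every project,
    plus monthly invoicing, with large projects taking ~2x as long
    to invoice each month."""
    billing = monthly_billing * (2 if is_large else 1)
    return setup_cost + billing * duration_months

# A $400k, 8-month project vs a $4k, 1-month project, with $80k of
# total overhead spread across an (assumed) $800k of contracts:
big_prorated = prorated_overhead(400_000, 80_000, 800_000)  # $40,000
big_actual = activity_based_overhead(8, is_large=True)      # $9,500
small_prorated = prorated_overhead(4_000, 80_000, 800_000)  # $400
small_actual = activity_based_overhead(1, is_large=False)   # $2,000
```

Under pro-rating the big project absorbs $40,000 of overhead against an activity-based estimate of $9,500, while the small project's actual overhead ($2,000) is five times its pro-rated share – which is precisely the "subsidy" from large projects to small ones that the reallocated graph exposed.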

(Graph) Contract Value versus Percent Profit

Notice that we now have a “tail” of low value projects that are unprofitable, and the large projects are almost as profitable as the mid-value projects that were acknowledged as the best projects that the company could pursue.

All Projects Are Not Created Equal

At this point we had a reversal of what we thought we were looking at: the larger projects were uniformly profitable (although those mid-sized projects were awesome) and we had a mix of projects that were profitable, and those that were extremely unprofitable. So what was the difference between them?

By looking at the specific projects associated with each data point, it is possible to look into what they were. Small projects can become unprofitable if there is any scope or schedule risk associated with them. On a small project (less than $5k) with expected profit margins in the 10% range almost any overrun will put them in the red – a $3,000 project is typically completed in under a week, so a 1/2 day overage is going to happen before many folks do their timesheet and realize that their budget is blown! On the other side of the coin, profitable small projects are often projects with a defined scope and a work product that is well refined: two classic examples are small change orders (which are treated as separate "projects") and "projects" that are really best thought of as part of a portfolio, such as GIS mapping, desktop analysis or environmental site assessments. Companies may deliberately let these cluster in the "unprofitable" zone to be used as loss leaders to get further work. Environmental Site Assessments are often done at a loss by companies that use site remediation as their main revenue source.

Looking into the specifics identified a couple of the projects in the "tail" as being projects that had a very high proposal cost, which then set up very inexpensive project wins for larger follow-on projects. There was also a cluster of projects in the $10-$25k range which averaged about 15% profitability associated with an Environmental Site Assessment (ESA) business unit – which was very well characterized and extremely cost competitive.

What Does This Mean For You?

One of the general trends that is being applied across many consulting firms is a minimum contract size threshold. While this article uses synthetic data it was configured so that it would reflect the trends we see with actual accounting data. The typical lower threshold for smaller consulting firms is generally in the $10-$15k range, and some firms have trouble with anything under $25k being profitable. This threshold gets murkier if you can't pull costs of sales and project-specific overhead costs out of your accounting systems.
As we saw earlier, if you track overhead as a “bucket” that gets pro-rated back to projects the picture *looks* better – often to the point that you will go “it looks like there might be a problem with little projects, but we don’t really need to take corrective action…” As soon as you are tracking actual costs against the projects that incurred them (even if you have to fudge the numbers and set a minimum cutoff) the smaller projects lose some of their “subsidy” from the larger projects and become more marginal.

The immediate thing that most folks concentrate on is "how do I optimize the projects that are unprofitable?" – and the answer is almost invariably "it depends". If you have a lot of projects that are very similar then handling them as a portfolio generally works – overheads are consolidated, execution on all of the "sub-projects" can be optimized, and these often become very profitable (and incidentally start to look like $60K+ projects).

The usual cause for unprofitable small projects is that they aren't similar, and it takes a chunk of admin overhead for ALL projects, big or small (usually on the order of $1,000-$5,000 regardless of the size of the project) to enter the project in your internal systems, do the monthly accounting, and so on. A lot of that "tail" is usually hidden because folks don't track overhead by project, but just pro-rate the overhead based on a percentage of invoicing – so the $200k project absorbs more of the "overhead" budget than the $10k project.

Instead of trying to "fix" issues with these small projects, choosing to not do projects below a certain threshold improves profitability: net profit for this sample portfolio is 12.8% on $4.785M, but if you simply choose not to do any projects under $15k this improves to 13.3% on $4.165M. This is obviously an oversimplification, since you still need to pay rent and the salaries of all of your staff, but dropping those less profitable projects and replacing them with projects similar to the rest of your portfolio will have the same effect on your (percent) profitability. Of course if you don't have enough work, then you probably won't choose to be this draconian, but the effort spent chasing those small projects (in this example 20% of projects were under $15k) could be spent chasing large projects. Alternatively, aggregating the projects into portfolios (as discussed earlier) would also improve their profitability, and improve both the top and bottom lines.
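The threshold calculation is easy to sketch. The portfolio below is a made-up stand-in (a handful of invented value/profit pairs), not the article's synthetic dataset, but it shows the same effect: cutting the sub-$15k tail shrinks revenue while raising the percent margin:

```python
def portfolio_margin(projects, min_size=0):
    """Total revenue and percent profit, keeping only projects >= min_size.
    `projects` is a list of (contract_value, profit) pairs."""
    kept = [(value, profit) for value, profit in projects if value >= min_size]
    revenue = sum(value for value, _ in kept)
    profit = sum(profit for _, profit in kept)
    return revenue, 100.0 * profit / revenue

# Illustrative (contract value, profit) pairs - not the article's data:
projects = [(400_000, 40_000), (200_000, 36_000), (50_000, 7_000),
            (12_000, -1_000), (5_000, -1_500), (3_000, -800)]

all_rev, all_margin = portfolio_margin(projects)          # every project
cut_rev, cut_margin = portfolio_margin(projects, 15_000)  # drop sub-$15k

# Dropping the unprofitable tail raises the margin while shrinking revenue,
# mirroring the 12.8% -> 13.3% shift described in the text.
assert cut_margin > all_margin and cut_rev < all_rev
```

As the article notes, this only translates into real dollars if the freed-up effort is redeployed into work that resembles the rest of the portfolio.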

Another thing to be aware of before you decide that the only thing you should chase is large projects – larger projects take more time (and thus resources) to win, and generally spend more time in the "sales funnel" before you can start working on them. This means that there is a longer delay between investing the money to win them and being able to see the results, and if you don't win them it will have a significant effect on your bottom line.

So What Happened?

Before issuing decrees like “We aren’t doing any project under $15k!” it is important to recognize that many “big” projects (often with tiny proposal costs associated) are often landed as a result of doing strategic small projects, or small projects which grow into big projects.

The recommendation provided to handle these competing demands (while following the PM mantra "no surprises") was a new policy flagging projects under a set threshold (in this case $15k) as "no bid" projects. To pursue a strategic proposal, a gatekeeper (a VP in this case, although this could be a business unit lead in a larger company) needs to provide approval. This approval process doesn't need to be anything complex: an email stating why this project should be pursued, followed by an email response to approve the project is usually enough, but you want something that you can save with the initiation documentation to understand why this project was pursued. At the end of the year when reviewing these (unprofitable) projects, the cost can be lumped where it belongs – as a cost of business development.

To assist in tracking what is going on with these strategic projects you need a way to flag the connection between “loss leader” small projects and their larger follow-on projects. This will let you associate the total proposal costs with the aggregate project value so you can track the information needed to steer your company’s profitability.

Scott Martin


Photo Credit: Benjamin Child

2017 Reading List

Last updated March 25, 2017 – "Confessions of an Advertising Man" by Ogilvy and "The Art of Profitability" by Slywotzky should be written up and on the list by early April. Both are worth a read, and both are fairly "light" reads.

Upon reflection as 2016 wound down, I realized that I hadn’t been reading anywhere near as much as I had in the past. While I could put this down to many factors (work, kids, commute time) if I am being honest with myself it was really because I had not made reading a high enough priority. To try and catch up with everything I have missed (since I used to read at least a book a month) I’m trying to read a book a week.

This section will outline my 2017 reading list (as it proceeds) with quick reviews similar to those in the mentorship reading list, but with a much wider focus. Categories will appear as I build a system, and books that I think are appropriate for the project management Mentorship program will also appear there. Since I don’t think that anyone will care what my chronological reading list is, I won’t record it, but you can expect book commentary to update at least twice a month (and weekly if I can).

I have now read a book in 2017 that hasn't made me wish I had read it earlier. As a result, there is now an "I will not be re-reading books below this line" entry – with the better material on top. The vast majority of this list is (in my opinion) exceptionally good.

The Books:


Linchpin

This is the first book I have read written by Seth Godin (go ahead and read his blog here – if it is interesting you will be back sometime next week) and I will probably read at least one more this year. I got two useful chunks out of Seth's book:

  1. Instead of trying to figure out how to profit from what you know, help people with it, and you will see benefits as a result. I have no idea where in the "chicken and egg" spectrum this falls, but this list is largely being published (instead of just compiled) because of this book – I write notes anyway, so an extra 15 minutes a week that might help someone with a reading suggestion is time I think I can afford to spend.
  2. Everyone is unique, and instead of trying to become a cog in the machine, you should try and become a Linchpin – an indispensable part. This builds more value for you, and increases the value of the organization you are in – even if that is an organization of one.

Breaking the Time Barrier

This is an interesting book, offered "by donation" by Mike McDerment, who created FreshBooks (a cloud based accounting service). It looks at how to approach charging for value, instead of hours. This is actually a very slippery concept, and in my opinion the main reason that hourly rates remain in use is because they are easy to track, not because they have anything to do with value.

Mike McDerment and Donald Cowper do a good job at identifying why you would want to charge (and pay) higher rates and  why it is important to understand your goals (and the goals of your clients) before you start looking at pricing. One of the obvious conclusions is that regardless of the price, if a product or service provides value it is likely to be successful in the marketplace – as long as all parties understand that value.



Willpower

The two traits that consistently predict "positive outcomes" in life are intelligence and self control. No one has been able to consistently increase the first, and the second has been denigrated (or ignored as an artifact) for much of the last century. The book includes some interesting anecdotes about how in the age of Freud, the challenge was to "break through" barriers to get people to realize their limitations, after which people managed to correct them fairly easily. This had apparently changed by the second half of the 20th century, so that people were fairly quick to recognize their issues, but just appeared to be powerless to make a change.

One of the largest (unintentional) takeaways I got from this book is that psychological researchers are very strange. A more useful takeaway is that people with high willpower tend to make fewer decisions, and have systems that they follow to preemptively make decisions for them (if this, then that).

This book is about Willpower, both from a historic perspective as well as some fairly recent biochemical research. More importantly from my point of view, it includes pointers for improving willpower, in much the same way that one would do resistance training for strength gains (and apparently following a similar mechanism). Other useful, albeit unsurprising information includes how willpower is depleted, how to “recharge” your store of willpower, and how “decision fatigue” can cause bigger problems than you think it does.

It also includes some notes around why it is so challenging to diet (since your “willpower muscle” is powered by glucose – so your body says “give me that sugary snack so I can resist the sugary snack”) and provided me with additional reasons to avoid sugar in my diet.


Antifragile

This book may be overly intellectual for some, and I suspect that I will take some time (and likely re-reads) to wrap my head around the book. The central tenet is that there are three (not two) kinds of things: those that are Fragile (susceptible to failure with change), Robust (indifferent to change) and Antifragile (get stronger with change). Our language apparently doesn't have a word for "the opposite of fragile" and many have been using "robust" or "strong" as the antonym. As Taleb points out, neutral is not the opposite of negative, and he proposes "Antifragile" as the word for this concept.

The summary is effective as a summary once you have absorbed at least the gist of the argument, although it probably isn't as effective without the background. "Everything gains or loses from volatility; Fragility is what loses from volatility and uncertainty." Robustness is indifferent, and antifragility gains from volatility and uncertainty.

Antifragility benefits from "positive" Black Swan events, and a large part of the book deals with strategies (natural and man-made) for insulating yourself from negative events, and being positioned to benefit from positive ones.

Since I read this book almost immediately after “Willpower” I was struck by the antifragile aspects of Willpower with stress causing improvement – to a point.

Great by Choice

This is a book that looks into how some companies were able to take advantage of good fortune, and how others tried and failed. While there is a lot of good information in this book, it is notable that they stopped collecting data in 2002, and the last decade and a half have had considerably more uncertainty. I found the book a bit hampered by cute naming conventions – "10x-ers" instead of "companies with large sustained growth", "Bullets then cannonballs" instead of "effective prototyping", and "20 mile march" instead of "consistent execution plans". There is lots of solid information backing up the basic concept that successful companies are the ones that weather the bad times and don't overextend in the good times, rather than those that grow madly when they can and then try to consolidate when that growth stops.
While counterintuitive, I’d suggest starting with the chapter “Leading above the Death Line”, since most of the rest of the book follows from it, then going back and reading the whole book.
In retrospect, I note that most of the books written by Jim Collins feature companies that either falter or become more “mediocre” within a decade of being featured – perhaps this is a viable investment strategy, or perhaps (as Daniel Kahneman muses in “Thinking Fast and Slow”) it is simply a reversion back to the mean.


Blink

This is a book about decision making. More specifically, it outlines areas where snap judgements (or “thin slicing”) can provide decisions that are as good as or better than deliberative ones. There are a number of good examples of decision-making processes where more information was actually harmful (such as determining who is at risk for an immediate heart attack) and of how a checklist can be used to “retrain” people to make better decisions. There are also a number of examples of how snap judgements can result in bad decisions.
An interesting book, worth a read. If you have already read “Antifragile”, you may find yourself noting that several of the decision-making methodologies were constructed around how to defend yourself from what you don’t know.
My core take-aways from this book:

  • Some decisions are better made with less, rather than more information
  • Many decisions can be optimized by forcing yourself out of your current decision making process
  • Controlling an opponent’s decision-making process can lead you to control their decision (illustrated in Blink by the chapter on Van Riper, but more generally practiced by advertisers and used car salesmen)
  • Understanding that you are subject to decision manipulation is the best way to counteract it

Note that this book (and many others) reference “Thinking Fast and Slow”.

Thinking Fast and Slow

If you only read one book this year, it will be a hard choice between this book, which goes into a LOT of detail on how we perceive the world, and “Decisive”, which covers some of the same ground but focuses on strategies to rectify our cognitive blind spots. This book has three big sections which interact:

  • System 1 versus System 2. This section outlines how our brains are wired, and how our “thinking self” tends to hang out waiting for our “autopilot” to run into trouble. Our thinking self is pretty lazy, so it generally gives our autopilot free rein. System 1 is great at averaging and not so good at adding, which becomes an issue when we get around to how we remember things.
  • What is “Rationality”? This section outlines how (most) humans actually make decisions. Note that impulsive System 1 makes the decision, and System 2 can override it if it seems crazy. Unfortunately System 2 tends to be pretty lazy and misses a lot, which explains the failure of many large corporate acquisitions. There is lots of good material here about how people are risk averse and feel losses more than gains. A (mathematically) identical question framed in a different way will produce different answers, which explains a lot about advertising.
  • The “remembering self” versus the “experiencing self”. We are who we remember ourselves to be, not the sum of our experiences. A critical point in this section is that when we are “remembering” how good a time we had, we are really ranking it on the average experience and the END of the experience. This (sadly) explains my performance on a job interview that I thought I had knocked out of the park. Time pressure caused a fumble at the end, with no time to recover, and that is the largest “weight” remembered in the interview.

The Geography of Genius

“Talent hits a target no one else can hit; genius hits a target no one else can see.” This is a book about several nodes of genius, and it helps debunk the western ideal of the “isolated genius”, showing that the place where creativity happens can matter as much as (or perhaps more than) the people who inhabit it. Of the places that he outlines (Athens, Florence, Hangzhou, Calcutta, Edinburgh and Silicon Valley), in my opinion it remains to be seen whether the valley is truly a node of genius or simply an effective marketplace: a Milan rather than a Florence. Mr. Weiner does a good job of pulling together some of the commonalities of these places, including the blend of chaos, poverty and opportunity tied to the perceived fall of an old order and the rise of an as-yet-undefined new one. There are some interesting thoughts hidden in this book:

  • Are you a genius if no one knows who you are?
  • Are there different kinds of genius?
  • Does genius “spread”, with breakthroughs in one field helping “cascade” discoveries in others?

One of the particularly interesting points is the lack of academic institutions in these places – as if established education were anathema to “new” discoveries. While many institutions followed these outbreaks of genius, most of the cities referenced were more “working class”, and necessity is often the mother of invention.


Decisive

This is an interesting book about how we make decisions, and it covers some of the same ground as “Thinking Fast and Slow”. Instead of focusing on how we perceive reality, this book focuses on how to make decisions that systematically cover those blind spots. The pragmatic advice on project pre-mortems, triplines, and our inability to recognize when we should be unsure (an artifact of System 1 from “Thinking Fast and Slow”) is alone worth the time to read. A couple of other “minor” pieces that I found useful were the “what would your friend tell you to do?” question and the “10/10/10 rule”: how will you feel about this in 10 minutes / 10 months / 10 years? If it’s a positive outcome for all of them, then the decision should be a no-brainer.
For some additional detail:

  • Premortem – It’s a year after the project was authorized, and it was a colossal failure. Why did it fail? What could we have done about it if we had foreseen it as a possibility?
  • Triplines – If “X” happens then we need to reevaluate our decision. If the following conditions are true then we should choose alternative “A” instead of alternative “B”. What can we do to make these conditions true? What can we do to make these conditions false?
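The tripline idea above is mechanical enough to sketch in code. This is just an illustration of the concept – the condition names, thresholds and numbers below are hypothetical, not from the book:

```python
# Hypothetical sketch: encode "triplines" as explicit, pre-agreed conditions
# that, when tripped, force a re-evaluation of the decision instead of
# letting the project drift forward on autopilot.

def check_triplines(metrics, triplines):
    """Return the names of every tripline whose condition has fired."""
    return [name for name, condition in triplines.items() if condition(metrics)]

# Illustrative triplines for a project decision (made-up thresholds).
triplines = {
    "budget_overrun": lambda m: m["spend"] > 1.2 * m["budget"],
    "schedule_slip": lambda m: m["weeks_late"] >= 4,
    "team_churn": lambda m: m["turnover"] > 0.25,
}

# Current project status (also made up).
status = {"spend": 130, "budget": 100, "weeks_late": 2, "turnover": 0.1}

tripped = check_triplines(status, triplines)
if tripped:
    print("Triplines fired - re-evaluate the decision:", tripped)
```

The value of writing the conditions down in advance is that System 1 can’t quietly rationalize past them later: either the condition fired or it didn’t.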

I will not be rereading books below this line

The Wisdom of Crowds

This is the first book I have read this year that I won’t be reading again. Mr. Surowiecki starts the book with some confusion between “crowd wisdom” and the properties of networks. Specifically, he uses an example where following the turns taken by the majority of people in a maze yields the shortest path out; unfortunately, that is a side effect of graph theory (how things connect), not “crowd wisdom”. This really bogged me down and made me scrutinize his examples to decide whether they provided useful information or were “red herrings”, wondering how many were things that seem connected but likely aren’t (“Fooled by Randomness”).
Useful items that I got from this book included:

  • How to structure groups to make better decisions
  • Information Cascades and the “first mover” effect (and how to force them – or avoid them)
  • ways to improve coordination
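For what it’s worth, the core “crowd wisdom” effect – independent, unbiased errors tending to cancel when averaged – is easy to demonstrate. This is my own minimal sketch, with made-up numbers, not an example from the book:

```python
import random

random.seed(42)
true_value = 1000  # e.g., the number of jellybeans in a jar

# Each person's guess is the true value plus independent, unbiased noise.
guesses = [true_value + random.gauss(0, 200) for _ in range(500)]

# The crowd's estimate is the simple average of all guesses.
crowd_estimate = sum(guesses) / len(guesses)

crowd_error = abs(crowd_estimate - true_value)
avg_individual_error = sum(abs(g - true_value) for g in guesses) / len(guesses)

print(f"crowd error:              {crowd_error:.1f}")
print(f"typical individual error: {avg_individual_error:.1f}")
```

The averaging only works when the errors are independent – which is exactly why information cascades and “first mover” effects (where later guessers copy earlier ones) destroy crowd wisdom.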

The Best Time to Start

I keep having conversations that boil down to variations of “When is the best time to start?”. Whether it is starting a fitness program or diet, kicking off a project, or finally fixing that dripping faucet, these conversations generally take one of two forms:

  1. This is really late, and we probably can’t finish in time, so it’s pointless and demoralizing and…
  2. We need to catch up, because we were supposed to start (a while ago) and now (another team) is way ahead of us and catching up is going to be really hard or impossible and why didn’t…

There is a lovely anecdote in Waltzing with Bears where a client is explaining to the project manager that the project must be executed on time. After a few iterations, the client agrees that if the project could complete the day it starts, the company would recognize immediate benefits. The conclusion: the project is starting too late! While starting “late” is not an ideal circumstance, it is important to remember that “late” is almost always better than “never”.

“The best time to plant a tree was 20 years ago. The second best time is now.” – Chinese Proverb

There is no way for you to go back in time and start earlier, but you can get in gear and start as quickly as possible. If you are waiting for the perfect time to start, you should either resign yourself to never starting, or realize that a good solution now probably beats a perfect one later.

We seem to have developed a cultural fear of failure so pervasive that many people conclude it is better not to start: you can’t fail at anything you aren’t doing. This idea that doing nothing is better than failure can lead to a creeping feedback loop: you see the world passing you by and want to do something with your life, but inertia added to the fear of failure overcomes many of the best intentions. I wonder how many people wake up on a “milestone” birthday (40, 50…) and see what their life could have been. The best time to take a chance was probably yesterday – or twenty years ago – but the second best time is now.

So I think that I will start a few more things, and accept the fact that some of them are going to get dropped. I will wear (a bit more) egg on my face when I don’t succeed at everything. But rather than seeing this as a failure, I will try to remember that life is a learning experiment. By the beginning of 2018, I hope to know if increased failure is an indication of success. While I haven’t taken this road before, there are a number of successful folks who claim that, while treacherous, it’s the best path forward.

“To double your success rate, double your rate of failure.” – Thomas Watson (former leader of IBM)


Now if you will excuse me, I need to do a bunch of push-ups: I started my exercise program too late to finish in 2016. On the upside, if I had waited to start now I wouldn’t be finished until spring!

Scott Martin

Mercury 3 photo courtesy of NASA – link to original image Here
Highway photo courtesy of

Required Legalese:
Scott is a participant in the Amazon Services LLC Associates Program, an affiliate advertising program designed to provide a means for sites to earn advertising fees by advertising and linking to Amazon. Revenues go to supporting mentorship programs.

The Power of Mentorship

You’ve heard about Mentoring and don’t understand what all of the hype is about – it’s just another way to say “have a conversation with people who know stuff you don’t”, right? On the surface that may be all there is to it, but the power of mentoring goes a lot farther than those meetings may indicate, and a successful mentoring relationship can often lead to decades of collaboration.

The Benefits

Mentoring has a lot of benefits, and not just for the person who is looking for a mentor (called the protégé or “mentee”). A few of the benefits include:

  • Better engagement
  • More job satisfaction
  • Higher retention (less employee turnover) for both partners in the mentoring relationship
  • Faster promotions
  • Facilitated learning – for both parties
  • Better succession planning
  • Higher overall knowledge base for the business
  • Staff that are more skilled, more engaged and better at working together.

Most of these benefits are difficult to quantify, so many of the companies that specialize in supporting mentorship programs tend to focus on retention.

Mentorship Studies

One of the most comprehensive studies of a deliberate mentorship program was undertaken by Sun Microsystems, the developers of the Java programming language and one of the technology greats of the 1990’s and 2000’s (eventually swallowed by Oracle). You can find that study, “Sun Mentoring: 1996-2009”, HERE1. Katy Dickinson and Tanya Jankot are still quite active in mentoring, and are worth looking into, particularly if you are a woman in a STEM (Science, Technology, Engineering and Math) field.

There are also a number of studies in academia and government.

You can even find a methodology for evaluating protégés’ satisfaction with mentoring relationships in medical education (Munich). This study concluded that “Satisfaction seems to be the most reliable predictor for the success of mentoring relationships” – so if the partners are happy with the relationship, they will meet their goals.

So Now What?

There are a number of mentoring programs that you can get involved in. Some places to look include:

  • Your workplace
  • Professional associations
  • Universities and colleges
  • Business and Entrepreneurial associations

If you don’t have a mentorship program to get involved in, then maybe you should create your own: Mentorship Materials are available here so you can get started right away!

1 Note that “Sun Mentoring: 1996-2009” SMLI TR-2009-18, by Katy Dickinson, Tanya Jankot, and Helen Gracon is Copyright 2009, Sun Microsystems, Inc. All rights reserved. Unlimited copying without fee is permitted provided that the copies are not made nor distributed for direct commercial advantage, and credit to the source is given.

Photos by Jack Moreh