Blog 18: Navigating Rugged Landscapes – searching for global maxima

May 31st, 2008

Well, here we are again with a tangent on a tangent.  I had meant to follow up the Edison/Archimedes dialogs (last seen in Blog 11) with a more theoretical examination of the continuum represented by those two distinct archetypes, but I got distracted: by metrics, evolution, and now… mountaineering.  This blog ain’t so unlike my actual life as you might think.

Way back there in Blog 1, I mentioned that the focus would be toward an audience of innovation leadership and not necessarily toward the innovators themselves (though you ARE the ones that make this all happen).  The topic at hand will likely have something for everyone.  Experimenters navigate rugged landscapes, and so do strategists.

A great many problems are conceptualized as axes or parameters of design coupled with a final axis of response.  If we find ourselves in the Alps, we can mark the parameters of longitude and latitude with a result of altitude.  A great many problems collapse to the question, “what are the values of longitude and latitude that result in the greatest altitude?”  Oh, it’s true, the axes are rarely limited to three dimensions and they have obscure units of measurement, but the problems are mathematically equivalent and we can handle 27-dimensional mountain ranges with almost the same ease that we handle three-dimensional mountains.

Not surprisingly, this same navigation problem is expressed for organizations, molecular structures, evolution and production line optimization.

Some years back, a small group of us sat down with Stuart Kauffman for his views on this topic.  You can see his work in the several citations in this blog, to which I might add just one more.  That evening he offered some rules for navigating rugged landscapes — for finding higher and higher peaks when the terrain is hardly predictable.  As he spoke, it occurred to me that in many cases the EXACT OPPOSITE of what he recommended could easily be packaged to sound like a pretty good idea, and I could imagine almost any of them surfacing as organizational objectives.

Being given to sarcasm (usually more self-amusing than career-enhancing I must admit), I have reconstructed some of those navigation thoughts as rules to avoid the unpleasantness of adaptation and organizational change.  When taken all at once, the sarcasm is obvious and the leaders I know wouldn’t fall for it… but one at a time, these rules might just have a chance.  

Lest I leave any ambiguity, I will slip into a mythbuster voice (noted in red) at the end of each rule and its justification.  There I am being more true to the thoughts shared by Stuart and the intent for navigating these landscapes. 

(Any license I have taken that leads to errors or misrepresentations is entirely my own fault.)

Seven Rules to Avoid the Unpleasantness of Organizational Change

1. Each subset (division) of an organization should work for the good of the whole.
2. Make deliberate course corrections ensuring that each step is upward.
3. Maximize productivity by focusing on value extraction.
4. Streamline environmental analysis by assuming no coevolution.
5. Simplify complex decisions (in NK landscapes by setting K=0) so the problem becomes tractable, communicable and theoretically solvable with straightforward tools understood by all users.
6. Insist that lessons from the past be thoroughly learned, established as best practices, and applied in all similar future tasks.
7. Maximize order through deterministic solutions that can be effectively planned and executed according to plan.

1. Each subset (division) of an organization should work for the good of the whole: All successful organizations presently exist on a peak which represents — at least — a local maximum. By working for the good of the whole, it is expected that subunits will move the organization along this existing peak (because landscapes evolve). However, even though the peak represents the “good of the whole,” it is entirely possible that single units would selfishly benefit from moving ‘downward’ or away from this preferred geography. Rigorous application of this rule will prevent organizational subunits from taking steps that might move the organization to entirely new locations on the performance landscape: locations that carry the risk of trading the current peak for a new one which, while it may be higher (an act of adaptation), would be sufficiently distant as to redefine the organization and thus demand change.

This point argues in favor of thoughtful decentralization.  Purposely providing some groups “latitude for increased altitude” is imperative in a dynamic landscape/market and should be part of the deliberate design of an adaptive organization.

2. Make deliberate course corrections ensuring that each step is upward: By carefully examining the local terrain it is often possible to distinguish upward steps from downward ones. Thus, by using small steps, the organization can move with the comfort that each agreed-upon move is an actual improvement (more fit, better results).  As one nears the peak in the present locale of the landscape, it is true that upward change becomes more challenging as more directions (choices) point downward.  However, such difficulty is offset by the confidence that one is nearing the top and thus the pinnacle of success.  Larger corrections serve only to risk “leaping” to surrounding peaks, and the real possibility exists that one would move to a peak that is even shorter than the current residence.

Fundamentally we are talking about embracing risk – “risk” being defined as “beta.” Allowing for local variation (occasionally moving down to move up even higher) is not currently part of most formal corporate proclamations of “take more risks.”  I could see an adaptive organization spending meaningful time training its leaders on risk: what it is, how to recognize it, when/how to manage it, how to distribute it, etc.  At present, most risk management exercises are either fault tree analyses or contingency measures.  The busted myth would tell leaders to occasionally ‘leap’ with the deliberate intent of getting OFF the present peak.
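The tradeoff is easy to see in a toy simulation.  Below is a minimal Python sketch (my own construction, not Stuart's): a two-peak terrain on which a strictly-uphill climber stalls at the nearer, shorter peak, while a climber permitted the occasional deliberate leap finds the taller one.  The terrain, step sizes and leap probability are all invented for illustration.

```python
import math
import random

def altitude(x):
    # Invented terrain: a modest local peak near x=25, the global peak near x=75.
    return 50 * math.exp(-((x - 25) ** 2) / 50) + 80 * math.exp(-((x - 75) ** 2) / 50)

def greedy_climb(x, steps=200):
    # Rule 2 taken literally: accept a move only if it is strictly uphill.
    for _ in range(steps):
        best = max((x - 1.0, x, x + 1.0), key=altitude)
        if altitude(best) <= altitude(x):
            break  # every neighbor is downhill: we are on (some) peak
        x = best
    return x

def leaping_climb(x, steps=200, seed=0):
    # The busted myth: mostly climb, but occasionally leap somewhere new
    # and keep the leap only if it eventually pays off.
    rng = random.Random(seed)
    best_x = x
    for _ in range(steps):
        if rng.random() < 0.1:
            x = rng.uniform(0, 100)  # a deliberate leap OFF the current peak
        x = greedy_climb(x)
        if altitude(x) > altitude(best_x):
            best_x = x
    return best_x

print(round(altitude(greedy_climb(20.0))))   # 50: stalled on the local peak
print(round(altitude(leaping_climb(20.0))))  # 80: a leap reached the taller peak
```

The leap probability plays the role of the risk appetite discussed above: set it to zero and the climber is a ruthless incrementalist.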

3. Maximize productivity by focusing on value extraction: The case for “exploitation” is the case for value extraction; the case for exploration is the case for adaptation and change. Exploration flies in the face of productivity. How can one justify efforts that MAY or MAY NOT produce better results? The peak in the hand is better than two in the bush. We know what direction ‘up’ is. We can exploit that knowledge in an incremental way to ensure continued improvements in productivity and better fitness. When we reach the very top of the peak, we certainly can’t be criticized for failing to improve on what is already optimal.

As we have been stating for some time now, the balance between exploration and exploitation is a “magical” balance in adaptive organizations.  It is “magical” in the sense that it defies a precise, reductionistic definition but provides spectacular results when achieved.  Leaders have a role here in choosing which groups will focus on exploring vs. exploiting.  Also, they must decide when and how to mix the two.  Stories told by David Stark at the Santa Fe Institute about the Naskapi tribes and Nova Scotian fishing villages gave me some real grounding in these notions.

4. Streamline environmental analysis by assuming no coevolution: Coevolution is a nasty situation. In essence, it means that due to the actions of others, the peak I am standing on may change in height or shift along the axes. Most reasonable people would agree that none of us should be held accountable for the consequences of others’ actions, only our own. Thus, the most logical and desirable course is to disregard these aberrations produced by the actions of others. The secret is to stay focused only on the consequences of our own actions, the ones we can control and be held directly accountable for.

A truly adaptive organization would clearly have an “external focus,” actively watching other groups in the ecosystem. However, that external focus would not be merely to “benchmark” internal activities against taller or shorter local peaks. Rather, it would look externally to discern the dynamics of the environment – surveying attractive non-local peaks as well as nearby players who may have similar aspirations (an opportunity for collaboration) or who might block the ascent.

5. Simplify complex decisions (in NK landscapes, by setting K=0) so the problem becomes tractable, communicable and theoretically solvable with straightforward tools: This sounds so mathematical, it probably ought to be rejected on that basis alone. After all, a corporation engages in the serious business of management, selling, research, product development, manufacturing, etc. It isn’t some esoteric thought experiment for modelers. Complex landscapes represent confusion and ambiguity in decision making. And the complexity of the landscape is related to the coupling between the various elements of the organizational design and choices. However, it can easily be shown that by setting the coupling factor to zero (i.e., assuming that individual elements in a portfolio are distinctly separate, or focusing on each organizational design element one at a time), we get what is known as the Fujiyama landscape. There is one peak. It is both the local and the global maximum, and by examining the surrounding terrain and always moving upward we are guaranteed to reach it. The analytic tools used for describing this situation are familiar and well understood. The use of such tools and the description of the goals in unambiguous terms are exemplary of the kind of clear leadership thinking that all people will respond to.

Adaptive organizations build in time (they evolve to do so) to reflect and set up the problem – understanding the variables and their weights prior to execution.  When they are totally befuddled, they may jump into execution, but they consciously observe themselves so as to “learn by doing.”  (This may involve some course correction, which need not always be seen as non-adaptive; see point #2.)
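For readers who want to poke at the NK intuition directly, here is a bare-bones toy rendering of Kauffman's NK model (my own implementation; the function names and parameters are invented).  Each of N genes contributes a random fitness value that depends on its own state and the states of K neighbors; counting local optima shows why K=0 yields the single-peaked Fujiyama landscape while larger K makes the terrain rugged.

```python
import itertools
import random

def make_nk_fitness(N, K, rng):
    # Each gene i contributes a random value depending on its own state and
    # the states of its K neighbors (wrapping around the genome).
    tables = [{} for _ in range(N)]
    def fitness(genome):
        total = 0.0
        for i in range(N):
            key = tuple(genome[(i + j) % N] for j in range(K + 1))
            if key not in tables[i]:
                tables[i][key] = rng.random()  # lazily drawn, then memoized
            total += tables[i][key]
        return total / N
    return fitness

def count_local_optima(N, K, seed=0):
    fitness = make_nk_fitness(N, K, random.Random(seed))
    optima = 0
    for genome in itertools.product((0, 1), repeat=N):
        f = fitness(genome)
        neighbors = (genome[:i] + (1 - genome[i],) + genome[i + 1:]
                     for i in range(N))
        if all(f >= fitness(n) for n in neighbors):  # beats every one-bit flip
            optima += 1
    return optima

print(count_local_optima(N=10, K=0))  # 1: the single-peaked Fujiyama landscape
print(count_local_optima(N=10, K=4))  # many: coupling makes the terrain rugged
```

With K=0 every gene can be optimized one at a time, which is exactly the “focus on each design element separately” fantasy the rule promotes.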

6. Insist that lessons from the past be thoroughly learned and applied in all similar future tasks: One way of assuring constant upward progress on our peak of choice is to avoid the unsavory possibility that an already learned response would be mutated by the uninformed in some future application. Thus, we should carefully and rigorously study each action taken in response to some circumstance, identify the best course, and insist that it be followed without fail in all future circumstances. This won’t always work, as the environment may have shifted, but inasmuch as we are not responsible for those shifts, refer to rule number 4 and hold all deviants accountable for violating best practices.

Of all the rules, this seems the most related to our recent foray into Lamarckian evolution.  Adaptive organizations avoid layering on groups whose job is to watch other groups and ensure that things are done precisely the way they have always been done.  Information content is greatest when the signal varies.  A monotonous repetition can be compressed into very little space.  Why?  Because there isn’t much there to begin with.  If you want to learn new stuff, repetition isn’t the path.
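The compression claim is easy to verify for yourself.  A quick sketch using Python's standard zlib (the exact byte counts will vary a little between zlib versions, but the contrast will not):

```python
import random
import zlib

# Rule 6 in miniature: a signal that never varies carries little information,
# so it compresses to almost nothing; a varied signal resists compression.
monotone = b"best practice " * 1000          # 14,000 bytes, one lesson repeated
varied = random.Random(0).randbytes(14_000)  # 14,000 bytes of constant novelty

print(len(zlib.compress(monotone)))  # a few dozen bytes
print(len(zlib.compress(varied)))    # essentially incompressible
```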

7. Maximize order through deterministic solutions that can be completely planned and executed according to plan: Single-minded execution makes no sense without a plan. A plan is the very guide one needs when confronted with the unexpected. A planless team may improvise in those situations. And if we are tolerant of such improvisational behaviors, how can we set bonus targets well in advance, how can we evaluate performance, and how can we know that the failure to meet Wall Street expectations wasn’t our own fault instead of the fault of “an unforeseen downturn in economic indicators”?

Adaptive organizations still set clear goals, because it is important to have something to shoot for.  However, true to their name, they adapt when the environment calls for it.  This requires excellent skills in diagnosis — in “why” a system/target/environment has changed — and the humility and courage to admit it.  As is often the case… back to leadership.

There you have it.  Seven solid rules for organizational design and behavior.  While these alone may not ensure the ability of an organization to behave in a perfectly consistent manner, they will go a long way toward guaranteeing that ‘ruthless incrementalism’ isn’t compromised by unexpected jumping to new peaks in the fitness landscape.


Blog 17: Darwin vs. Lamarck – A brief summary for organisms, organizations and innovation

May 30th, 2008

In the prior blog, I made reference to the applicability of evolutionary theories (Darwinism and Lamarckism) to organizations as well as organisms.  I also made passing reference, on two or three occasions, to their applicability to innovation.  This notion was much less developed and will probably be left more to the reader than anything else.  However, a few tables and anecdotes may help foster that line of thinking.  Without great precedent, let’s declare Table I and support a few of the notions later.

                                    TABLE I
Modes of Evolution as Applied to Different Systems


As previously suggested, Lamarckian evolution may be conceptualized as “learning” evolution.  And learning plays a critical role in many systems of human design, from organizations to culture to the processes of invention, discovery and innovation.  However, as we have come to appreciate the significant power embedded in the Darwinian processes of mutation/selection/replication, we have also either observed such modalities in our human systems or even designed them in where appropriate.

Corporate change agents are disposed to look for points of organizational thawing – optimum times to introduce radically new directions.  (Though I remain convinced that, in these human systems, too few acts of mutation, with their attendant randomness, are either engineered or encouraged.)

There are certainly times when radical hypotheses have been demanded by the data and though they may be avoided by the establishment, the superior hypotheses among them (fitness producing mutations) eventually find themselves to be accepted (grudgingly at first, then as if obvious all along).  Coming quickly to mind would be examples such as relativity and quantum mechanics.  

In less dramatic ways, many “lesser” scientific and innovation endeavors remain hampered by Lamarckism: from architectural concepts of “what is a house” to the “best” synthetic routes to polyols.  These conceptual ruts aren’t necessarily hampering our fundamental understanding of the universe, but they keep us hiking around local maxima and completely ignoring greater peaks within our grasp.  Navigating a rugged landscape remains one of our challenges, and whether we are innovating for profit, social well-being or personal fulfillment, it is almost always true that we are victims of the local maximum’s inherent stickiness (and – oh, no – we might have to go down to go up!).

In the prior blog, we saw graphics showing the results of an artificial life simulation.  Go back and re-examine the first graph.  You saw that Lamarckism scaled the peak much more quickly, but found itself oscillating between two local maxima and could never find the yet higher peak within reach.  To get there, Darwinism wasn’t just an alternative; it was a necessity.

In the language of innovation, this search for a global, not just a local, maximum is a huge argument in favor of openness. Fresh perspectives, even when initially unsatisfactory, mean fresh topography to explore and, in some cases, higher peaks to scale.  An open innovation practice may well be the shot of Darwinism your Lamarckian research program needs.
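A sketch of why fresh perspectives pay, using an invented terrain (x·sin(x), chosen only because it has one modest peak and one high one): a lone climber from the habitual starting point stalls low, while an open call of climbers from random starting points reaches the high peak.  Everything here is illustrative, not a model of any real program.

```python
import math
import random

def altitude(x):
    # Invented terrain on [0, 10]: a modest peak near x=2, the high peak near x=8.
    return x * math.sin(x)

def climb(x, step=0.01):
    # Simple local search: nudge uphill (in either direction) until no nudge helps.
    while altitude(x + step) > altitude(x):
        x += step
    while altitude(x - step) > altitude(x):
        x -= step
    return x

rng = random.Random(0)
insider = climb(1.0)  # one climber, one habitual starting point
crowd = max((climb(rng.uniform(0, 10)) for _ in range(20)), key=altitude)

print(round(altitude(insider), 1))  # 1.8: stalled on the nearby peak
print(round(altitude(crowd), 1))    # 7.9: a fresh start found the high peak
```

The twenty random starts are the open call: most land on the same modest peak the insider found, but it only takes one fresh perspective in new topography to change the answer.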


Blog 16: Darwin vs. Lamarck – The need for mutation in rapidly changing environments

May 30th, 2008

Still at the Valencian seaside.  This is good for blogging.  I suspect my posts would be more frequent and more regular without the interruptions of travel and myriad engagements.

We left with a promise to dig more deeply into the paper by Sasaki and Tokoro.  The full text of their title is informative in itself:  “Evolving learnable neural networks under changing environments with various rates of inheritance of acquired characters: comparison between Darwinian and Lamarckian evolution”

They basically created an artificial lifeform (as a neural network) in their computer simulation and placed a conceptual slider bar in the genetic machinery.  That slider bar could be set such that: 1) genes/traits are passed on from the learned survival behaviors of the parents (the Lamarckian extreme); or 2) past traits are conserved and subjected to random mutations, mutations that may help or hurt survival but which earn preferential replication when they are beneficial (the Darwinian extreme).  Plus, the slider bar can be set anywhere in between, combining both inheritance mechanisms.  (You can read the paper to mull over these “partial inheritance” scenarios and feel free to post a bit of that learning to the blog comments.)

We’ll fill in a few details for the interested reader but let’s jump straight to the punchline for the impatient among us.  To quote the authors:

Paying particular attention to behaviors under changing environments, we show the following results. Populations with lower rates of heritability (less Lamarckism) not only show more stable behavior against environmental changes, but also maintain greater adaptability with respect to such changing environments. Consequently, the population with zero heritability, that is, the Darwinian population, attains the highest level of adaptation to dynamic environments.  (Italics added)

This observation does not fundamentally differ from quotes such as “the war of Jena was lost by Frederick the Great,” or, more generically, the ubiquitous position statement summed up as “fighting the last war.”  The moral of such quotations is that doing what worked brilliantly for one generation fails for the next when the external environment has changed.  Probably a far from shocking conclusion – though the simulation makes the case with lots of fun graphs, charts, equations and neural net diagrams, and hence with a quantitativeness and finality often lacking in pithy sayings.  (I, for one, would not wish to place my future in the hands of pithiness alone.)

Seen in this way, learning may well be, at times, at the root of failure.  It is precisely because we learned our lessons well that we subsequently failed: that learning, applied in a new environment, resulted in our undoing.  In fact, this is just the iterative loop that the authors’ neural nets find themselves in — learning, behavior, success (zig up), behavior, failure (zag down), new learning, new behavior, new success (zig up), new behavior, new failure (zag down).

Two graphics from the study, with the slider bar set at pure Lamarckism for one run and pure Darwinism for another, are shown in Figures 1 and 2.  The difference is that in Figure 1 the environment changes accommodatingly slowly, while in Figure 2 it has the nasty habit of changing rapidly.  Now, recall that these are simulated life forms — and as such, may be no more simulations of real organisms than they are of real organizations.  Their learned pursuit of “food” and avoidance of “poison” could just as likely have been stated as a simulation of their learned pursuit of “profit” and avoidance of “loss,” or even their pursuit of “progress” and avoidance of “regress,” had the authors proposed that the simulation was meant to mimic scientific progress, not a life form.  Stripped of the reality (life form) for which the simulation is intended, this simulacrum is a study in heritability and fitness.  Period.


Figure 1.  Lamarckism (tau = 1) vs. Darwinism (tau = 0)  Slowly changing environment.

It is worth noting that, consistent with expectations, the simulated life forms (SLFs) do appear to approach an optimal fitness much more rapidly when Lamarckism is fully on (tau = 1).  This is seen by the steep fitness climb over very few generations.  That is, naive early generations learn to be more than adequate survivors based on what their parents told them.  The environmental change per generation is insufficient to render all those lessons moot and in fact leaves most of them beneficial.  Surely an organization would benefit similarly from training, lore and SOPs in such an environment.  Note, too, that even though the SLF achieves a respectable level of fitness, it still falls short of optimization as it cycles through learnings that are less beneficial, and even harmful, at the margins of high fitness (a pretty good rule in your neighborhood, “look both ways before crossing the street,” isn’t as efficient when applied to the Dan Ryan Expressway at rush hour).  It’s hard to close that final gap.

This same shortcoming in ultimate optimization is not seen in the Darwinian colonies of SLFs, though they do take many (over 500) generations to get that mutation/selection/replication ball rolling up to high fitness.


Figure 2.  Lamarckism (tau = 1) vs. Darwinism (tau = 0)  Rapidly changing environment.

With highly dynamic environments, as in Figure 2, the case for Lamarckism (learning) is devastated in these studies.  Not only is it unclear whether the Lamarckian SLFs even optimize more rapidly at first, but it is very clear that their mean fitness level at equilibrium oscillates rapidly, and does so well below the fitness level of the Darwinian SLFs.

What to do with titles like “The Learning Organization”?  Well, we should probably keep them.  The real questions here concern what is baby and what is bathwater, and then acting accordingly.  The message I take from Sasaki and Tokoro is: “fine, IF the environment is changing slowly; but in times of rapid change, don’t be too comfortable.  Excessive focus on learning may be just what you DON’T need.  Why not keep a few mutations alive, encourage them and reward them for the adaptive qualities they possess?”  These are hedging strategies at worst and almost assuredly a source of future traits at best.

As I said, the article is undercited.  My experience is that learning is valued over mutation, and when the environment changes, there may well be a push to LEARN BETTER, train harder, tighten procedures.  The practice of innovation has parallels: hypotheses that have long correlated with findings fail under new environments (e.g., classical mechanics at small scales).  More experiments are conducted to find the sources of error when, in fact, a mutation is called for.  Is the system open to mutations?  Is the process (the scientific method) best at finding them, or would alternatives better facilitate Darwinian evolution when it is needed?

Where does your organization stand?  Are you nurturing your mutants (not coddling them; they need to show fitness to earn survival), or are you extinguishing them before they are tested — extinguishing them because they can’t belong in an organism that has learned to respond consistently and effectively to its environment?  Is the change rate of the environment accelerating?  Is the clock ticking on your Lamarckian company — is it about to fight the previous war?  If so, we’ll change the context, but say again what we said before:  “Go Darwin, Beat Lamarck!”


Blog 15: Darwin vs. Lamarck – Lessons for Evolving Organizations and Innovation

May 29th, 2008

A bit of a hiatus if you are checking the posting dates.  I’ve been traveling a lot lately.  Since the last post, I made a looping tour of the U.S., visiting each advisory board member for InnoCentive: a truly smart bunch.  Then I attended the European Patent Office meetings in Ljubljana, Slovenia, including the awards ceremony for European Inventor of the Year.  Pretty amazing stuff, once again laying to rest the 1899 urban legend that the U.S. Patent Office should close because everything had been invented.  (See the Skeptical Inquirer for more details on the legend.)  And finally, after some board meetings in San Francisco, I headed for Spain to help celebrate innovation at the 40th anniversary of the Universitat Politècnica de València.  I am writing this from a seaside room because, well, someone has to do it!

In 1999 Sasaki and Tokoro published a paper in the MIT Press journal, Artificial Life. I read it when the journal arrived and thought, “wow, quite a lesson for life.” It strikes me that this paper has been too infrequently cited since.  Now, to be fair, I don’t have adequate expertise or familiarity with the somewhat obscure discipline of artificial life — and for all I know, within that domain, the paper may be cited quite extensively.  No, it seems that what has been overlooked is the relevance of the paper as an allegory for organizational behavior, business strategy and innovation.  It is there that I fear the citations are far too infrequent.

More specifically this cogent report of simulation studies explores the modes by which the simulated organisms efficiently evolved — and by extension, some valuable lessons for the evolution of either business models or innovation (or undoubtedly a host of other systems).  Though the debate on evolutionary mechanism seems well settled for both the plant and animal kingdoms (Go Darwin! Beat Lamarck!), the theories of Lamarck live strongly on when we look at organizations, families, businesses, human processes, cultures, and assorted manmade systems.  (In technical discourse, some memetic theorists will object to the use of Lamarckism to explain any of their work, others will be less rabid.  Just try the metaphor on for yourself and see if it works.)

To a first approximation, “Lamarckian” can be read “learning” — and who in their right mind would dispute the criticality of learning to efficient systems evolution.  By studying and hypothesizing about the past we more reliably move into the future… usually… sometimes… Well at least it feels like it should be right.

Let’s back up and remind ourselves of a few more details in these two vying hypotheses for evolution. You certainly remember the name Darwin (history is written by the winners). And you recall his postulates on the origin of species. His publication, “On the Origin of Species,” is about to celebrate its 150th birthday (published 1859).

The basic tenets of Darwinian evolution are: mutation, selection, replication. Assembled in paragraph form: nature samples/creates many variations (mutations) on a theme. Most are uninteresting. Those that are interesting are selected as “good” or “bad” with these terms being the non-value-laden description of how fit they are for survival. “Bad” forms (unfit) die and “good” variants are rewarded with survival and reproduction and increasing representation in the gene pool. (Way oversimplified but we’re headed for metaphor not biology).
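Assembled in code rather than paragraph form, the three tenets make a complete (if cartoonish) evolutionary loop.  Everything here (the target, mutation rate and population size) is invented for illustration; it is a sketch of the metaphor, not of biology.

```python
import random

rng = random.Random(0)
TARGET = [1] * 20  # an arbitrary "fit" form, invented for the toy

def fitness(genome):
    # how many genes match the form the environment currently rewards
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.05):
    # mutation: each gene flips with small probability
    return [1 - g if rng.random() < rate else g for g in genome]

# a random founding population of 30 bit-string genomes
population = [[rng.randint(0, 1) for _ in range(20)] for _ in range(30)]

for generation in range(60):
    population.sort(key=fitness, reverse=True)
    survivors = population[:15]                              # selection
    population = survivors + [mutate(g) for g in survivors]  # replication

print(fitness(max(population, key=fitness)))  # close to the perfect score of 20
```

Nothing in the loop knows where the target is; variation plus differential survival finds it anyway, which is the whole trick.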

You probably can’t recall the details of Lamarck’s evolutionary hypothesis, at least not until I remind you that it had to do with the blacksmith’s sons. Aha, that rang a bell. In Lamarckian evolution the blacksmith built his muscular arms through his trade, he trained (taught) his arms to possess strength above the norm and that change in his body was then incorporated into what was passed, through reproduction, to his sons. As said earlier, we can metaphorically equate Lamarckian evolution with learning processes:  a deliberate training that affects further generations.

Though discredited as an explanation for the musculature of blacksmith sons, Lamarck lives on each time we pass knowledge to succeeding generations.  This happens all the time in organizations and the processes those organizations employ.

We had scientific reasons for rejecting Lamarckian mechanisms in species development, but we seem to have just as surely deleted Darwin from our organizational and process evolution — and frankly, without sound experimentation and analysis.  Don’t mistake where I seem to be heading.  Learning IS important.  I’m in favor of it.  But our discomfort with randomness surely keeps us from appreciating the role that Darwinism can and should play in organizational design and scientific advancement.

That’s not to say that we don’t see these things when they happen.  We recognize that both scientific hypotheses and organizational cultures get moved by accidents (mutations).  But, we don’t set out to create accidents in order to achieve that movement, not as regularly as, say, we exploit irradiation in experiments designed to mutate and alter plant and animal species.

We’ll come back to this paper in the next post and see what insights the authors found.  Meanwhile, think about when and why you’d deliberately (and randomly) mutate organizations for survival — and have some fun with artificial life forms.


Blog 14: R&D Productivity Metrics and Ohm’s Law Part II

April 14th, 2008

Where were we?  Organizational conductance, innovation productivity.  Our parallels to Ohm’s Law.  Let’s see, that equation was C = I / V: conductance equals current (flow) divided by voltage (pressure).

What’s the equivalent, and the units, of I (amps, current) in that equation?  It is your “flow of value” through the pipeline and into the marketplace.  The units I’d use: dollars/hour (or minutes or seconds or years or whatever).  Basically I need flow, so I need the dollars to MOVE, unlike static portfolio NPVs.

If you approach the finance department with this, it’s going to be a long conversation.  Here is how to make it simple.  Pick a means of calculating value.  You’ve got lots of choices.  You could use option theory; you could use NPV calculated via discounted cash flow or, my preference, via decision tree procedures (see Faulkner, T. W., “Applying ‘Options Thinking’ to R&D Valuation,” Research-Technology Management, Vol. 39, No. 3, May-June 1996, pp. 50-56).  You could even use peak sales if the projects have similar market growth/decline profiles.  Don’t make it so hard that no one wants to do these calculations on an ongoing basis.  Pick one that finance will support, as they will be called on to attest to the integrity of the data.  The magic here is the movement (while maintaining consistency on the valuation).  So use a familiar tool to get “dollars.”  (Blogs later, we’ll talk about converting the point values of metrics to distributions.  But that’s a whole ‘nother approach.)

So, how do we calculate the velocity (I’m assuming there is a clear vector toward the marketplace)?  Velocity has to have the units of distance/time, and the distance to be traversed is one pipeline.  Given this, we COULD use the inverse of cycle time (actual cycle time, not proposed cycle time), and that’d work.  In cases where projects have widely varying cycle times and where the valuation is of a non-discounted type, this is probably the best idea.  If we are discounting for time and/or the projects have somewhat consistent timelines, then we can use “relative velocity.”  As easily said as done: if the standard time from prototype completion to final test results is 90 days — and we’re not scheduled for completion until 120 days have passed — then our velocity is 90/120 or 0.75.  On the other hand, if it looks like we’ll wrap up final testing 30 days earlier than standard, then the speed is 90/60 or 1.5.  We’ll use this relative velocity for examples later in the post.
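The relative-velocity arithmetic fits in one tiny function (the function and argument names are mine, chosen for illustration):

```python
def relative_velocity(standard_days, expected_days):
    # "relative velocity": the standard cycle time divided by the time
    # we now expect the same stretch of pipeline to actually take
    return standard_days / expected_days

print(relative_velocity(90, 120))  # running 30 days late: 0.75
print(relative_velocity(90, 60))   # wrapping 30 days early: 1.5
```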

So, “I” is done (not bad grammar).  My “amps” are dollars × speed.  (Some of you realize that this is momentum in other analogies; you can work with that.)

Now, if we get “V,” we’re done.  And V is easy.  V is the pressure driving the system, and R&D expenses act as a great surrogate for it.  They include the purchase of materials, the production of prototypes, the salaries of project team members, etc.  In other words, all the resources that we throw at a project to make it move fast and to generate data to raise its market value.  That’s a pretty good description of voltage or pressure.

Some of you are saying, “Wait a minute, we ALREADY have metrics for project NPV, budget control and cycle time.  This has been a lot of work for nothing.”  BUT, neglecting the relationship BETWEEN these prevents managers and teams from making the tradeoff calls.  And there WILL BE tradeoffs.  “We can slow the project down and get some more data that’ll make it more valuable when we launch;” “we can spend more on this project and speed up the production of the prototypes;” “we can save some money by giving up a feature that was part of the total market value.”

Example 1:  A project has an NPV of $300 million.  It was originally to be launched in two years but looks like it has been delayed about 6 months.  It will consume about $60M in R&D expenses.  The conductance is therefore 300 × (24/30) (the speed ratio, in months) divided by 60, which equals 4.

Example 2:  We can save $10 million, but it’ll slow the project down another 6 months.  What’s the conductance?  It’s 300 × (24/36) / 50 = 4.  A fair trade.  But then again, nothing GAINED.  Make the call based on how cash-strapped you are vs. the need to build near-term innovation momentum in the marketplace.

Example 3:  The original project could have its value raised to $400M, but it’d cost $25M and add another 12 months to the delivery time.  Conductance?  400 × (24/42) / 85 = 2.7.  I’d have to say no.

(You’ll note that using relative velocity vs. 1/cycle time has the effect of multiplying the whole conductance by 24 — just a linear transformation, with no real impact on the conclusions.  In fact, I already made a transformation and expressed dollars in units of megadollars (divided by one million) and you hardly noticed.)
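For the curious, the three examples reduce to one small formula. Here is a sketch in Python (the function name is mine, not a standard; value and expense are in megadollars, times in months, as in the examples):

```python
def conductance(value_m: float, standard_months: float,
                expected_months: float, expense_m: float) -> float:
    """R&D 'conductance' = value * relative velocity / expense."""
    return value_m * (standard_months / expected_months) / expense_m

# Example 1: $300M NPV, 24-month plan slipping to 30 months, $60M of expense
print(round(conductance(300, 24, 30, 60), 1))  # 4.0
# Example 2: save $10M but slip to 36 months
print(round(conductance(300, 24, 36, 50), 1))  # 4.0
# Example 3: raise value to $400M for +$25M and +12 months
print(round(conductance(400, 24, 42, 85), 1))  # 2.7
```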

There remains one last modification.  Thus far, we have spoken of a project’s value assuming it succeeds.  As we back up earlier in the pipeline, projects are risky.  They fail.  Sometimes often.  In that case, the correction is to multiply the “value” by the probability of success — and our Innovation Productivity equation becomes P*V*S/E, where P=probability, V=value, S=speed and E=expense.
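The full risk-adjusted equation in a minimal Python sketch (the function name and the 50% figure are illustrative assumptions of mine, not from the post):

```python
def innovation_productivity(p_success: float, value_m: float,
                            speed: float, expense_m: float) -> float:
    """P*V*S/E: probability-weighted value, times relative velocity,
    divided by R&D expense (value and expense in megadollars)."""
    return p_success * value_m * speed / expense_m

# Example 1 again, but treated as an early-pipeline project with a
# hypothetical 50% chance of success: the conductance is simply halved.
print(round(innovation_productivity(0.5, 300, 24 / 30, 60), 1))  # 2.0
```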

So, I don’t know how this lands with your organization.  Is it too simple?  Trivial?  You need something more sophisticated?  Or is it confusing?  Obtuse?  What’s all this about amps and conductance?  We need something more straightforward.  I can easily imagine both reactions.  Maybe that means we got it right; maybe it means we missed it both ways.  I can say two things:

1) You can’t get much simpler without making the metric easy to game or misread.  Your metric has to have momentum, it has to move (your current ones probably don’t, from my experience).  Static metrics lead to static pipelines, and your ability to create is moot if you can’t move it into the market.  You have to have a denominator — productivity has to be about “bang” FOR “buck,” and measuring “bang” has to be relevant to the business.  Yeah, it can be real work getting it right — but remember, there’s no “royal road to geometry,” either.

2) You can get a lot more complicated.  Resist.  Sooner or later, care and feeding of the metrics starts to rival the innovation efforts.  That can’t happen.  This needs to be simple enough that the organization stays focused on improving productivity, not simply measuring it.


PS: There is extra credit for anyone who wants to define the R&D pipeline equivalence of inductance or capacitance as well as the significance of those properties.

NOTE: Many (most) financial value calculations are going to have a component of time built in.  Projects that move slowly are going to be more heavily discounted than projects that move fast.  So, why not just let the NPV stand?  Good question.  And IF the launch dates are reconsidered on a frequent basis, that might work.  In most cases they aren’t.  And projects get delayed without getting formally revalued.  Putting in the velocity term compensates for this.  In other instances (non-discounted value measurements), the velocity is crucial to getting realistic project comparisons and measuring real value to the business.

Blog 13: R&D Productivity Metrics and Ohm’s Law

April 11th, 2008

Offline, I’ve been engaged in a conversation about R&D Productivity (Innovation Productivity for all practical purposes).  I had put this on a list of possible blog topics and was probably going to get to it in a year or so.  But the conversation provokes me to do something — not for the first time and certainly not for the last time:  run down a tangent.  Not so bad, really, as the blog format isn’t even expected to flow along smoothly and this one certainly won’t.

Now the problem with R&D productivity metrics isn’t that there aren’t any out there.  Oh, there are plenty.  And to be perfectly fair, it’s hard to understand why anyone needs or wants another one.  And yet, I find each a bit wanting and wonder why the metrics, presumably created by R&D folks themselves, don’t look a little more scientific, a little more technical; they’re fuzzy.  After all, we have nature around us to learn from, and she has provided all kinds of metaphors for productivity and efficiency.  I could easily imagine a metric built around the laws associated with viscosity — viscous R&D operations failing in comparison to fluid, rapidly flowing ones.  Or I could imagine the use of friction as a metaphor for the organizational and executional resistance to advancing products through a pipeline, or maybe even some bizarre forays into quantum tunneling (later… much later).

The one I’m going to elaborate on here is an analogy to Ohm’s Law.  It’s simple.  It’s pretty well known, and it seems to capture most of what we want to know about how well an R&D operation is functioning.  Ohm’s Law can be stated in many ways (all algebraically equivalent), but we’ll go with the Wikipedia entry because that is easy to look up while I sit here at the keyboard — and nobody is going to get this simple little equation wrong.  Wikipedia says Ohm’s Law is I=V/R: Current is equal to Voltage divided by Resistance.  Recalling that voltage is “pressure,” it all makes perfectly good sense.  The harder I push (more voltage), the more current will flow; and, inversely, the greater the resistance (down there in the denominator), the less current will flow.

Well, put that way, the parallel for R&D is pretty transparent.  Flow (or current) through the pipeline and to the market is the whole point of commercial innovation — so we want that number high.  The pressure we apply is a cost, literally the cost of R&D, which subsumes the quantity of resources thrown at a project.  We obviously want enough pressure to achieve a desired flow but not more than necessary.  How much is that?  Well, it all depends on the resistance.  Low-resistance processes, systems and bureaucracies allow us to achieve a given flow at reduced pressure.  Obviously there’s a productivity metric in there somewhere.  I’ll do the algebra.

If I=V/R, then multiplying both sides by R and dividing both sides by I results in R=V/I.  The organizational resistance to moving products through the pipeline and into the market could be quantified by measuring the resources invested and dividing by the flow.  Makes sense.  In electrical circuits this resistance is expressed in units of “ohms” when flow is in “amps” (short for amperes) and voltage is in “volts.”  (Yep, we’ll get to translating all of that into units your organization is probably already familiar with.)  We could leave it there, minimize resistance (or ohms) in your organization, and that would represent maximal R&D productivity.  But what is it about humans — and executive humans in particular — that makes them want to maximize stuff, not minimize it?  So, there is a term known at engineers’ cocktail parties as mhos (yes, ohm(s) spelled backwards… and who says engineers lack whimsy).

Mhos are a measure of conductance, the inverse of resistance, expressed mathematically as 1/R.  Setting this equal to C (for conductance) and plugging it back into Ohm’s Law we get: C=I/V.  The conductance (or productivity) of our organization is equal to current divided by pressure.  It is equal to output divided by input.  It is equal to bang divided by buck.  WOW!  What a long way around to define the obvious — that productivity is bang divided by buck.  Everyone knows that.  It’s true.  Which is why it’s all the more alarming that they can screw it up so many different ways.

The first (way to screw up R&D productivity metrics) is to start well but abandon the denominator.  After all, you can’t do much about it.  You’re going to pay that depreciation charge on the lab facility no matter what.  Convincing ourselves that the costs are fixed allows us to set the denominator as a fixed number and concentrate only on the numerator while still calling that “productivity.”  Let’s just measure the output.  More is good — and away we go.  Meanwhile, fixed costs creep up, REAL “productivity” gets hammered, and we continue under the illusion that it’s “fixed” as opposed to “fixable.”  It isn’t fixed.  It needs fixing.  Fixing in the form of making it more and more variable so we can pull the levers in the denominator as well as those in the numerator when improving R&D productivity:  organizational conductance!

The second way to mess this up (and a good post facto rationalization for plowing through all that Ohm’s Law stuff) is to forget what amps really are.  They are not a bunch of things — like you put in a basket.  One amp is equal to one coulomb (a whole lot) of electrons passing a point every second.  It isn’t just coulombs.  It IS FLOW.  (Physicists like to say “flux” here, but this is a site for CEOs and not just physicist CEOs. I’ll deal with “coulombs” (and even amps) in a NOTE at the bottom of the post). 

We VERY OFTEN measure R&D output in static terms:  from the number of patents to the number of project teams to the NPV of the portfolio.  Ohm’s Law tells us we have to make those things MOVE, they have to flow, and sticking to amp-like units forces that correct measurement.

And third (and I’ll stop at only three ways to screw up R&D productivity metrics) is to forget that, if it is all going to make sense, it matters what units we use.  Great!  We had 100 patent applications this year vs. 70 last year!  Better?  R&D productivity on the rise?  Maybe!  Were those 100 applications as useful, as complex, whatever, in comparison with the prior 70?  Did everyone figure out that patent filings get you promoted, not issued patents — and so file on every silly derivative of your project just short of getting outright laughed at by the IP legal staff?

Is a project a project?  Don’t some generate more value than others?  Wouldn’t you like to keep the number at 17 if you threw out the bottom 5 and replaced them with 5 projects BETTER than the remaining 12?  Or is that going to hurt the R&D metric your bonus is based on?

I’m stopping at three ways to screw up R&D metrics and moving on.  But beware, all metrics are gameable.  And if you’ve hired smart people in your R&D organization they’ll figure it out before the first reporting cycle.  But, get the metrics as right as possible and you CAN manage the gaming of the exercise.  It’s no excuse for walking away from quantifying and improving innovation productivity.

Matters have gotten a little lengthy here.  I’m going to edit and post — and then write the remainder and post another day.  Some of you will have already been given enough clues.  But, we’ll get back to units, measuring tools, leading vs. lagging, and more good stuff.


NOTE:  Rigorous-definition-wise, it’s all a tiny bit uglier than I presented, but not wrong.  The ampere is a base SI unit, like the second or the kilogram, and technically stands without reference to time.  Defining the coulomb as a quantity of charge equal to one amp-second (or amps = coulombs per second, by simple algebra) allows us to do the Ohm’s Law math as I’ve done it without deceit.

But, what’s a “coulomb?”  Hardly ever comes up in daily conversation; not like “moles.”  Moles are always coming up in daily conversation and not just the kind you and your neighbor cuss about while leaning over the fence.  Saying “a mole” is like saying “a dozen.”  It’s a shorthand for a number, a name for that number.  “Dozen” is the name of 12 and “mole” is the name of 6.02 × 10^23.  It’s just bigger than 12 and so needs a different name.  We use “mole” generally when we talk about atoms and molecules because there are always a lot of them.  Electrons are like that, too.  Each of them is small and so we have numbers much bigger than 12 when talking about a practical number of electrons.  A coulomb is a practical number in the sense that a mole is — but reserved for quantifying elementary charge.  So a coulomb is the charge on 6.24150948 × 10^18 electrons.  One amp is thus the current created when 6.24150948 × 10^18 electrons pass a point in one second.
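A quick sanity check on that electron count, using the currently accepted value of the elementary charge (the digits below are my own arithmetic; they differ from the note’s figure only in the last couple of decimal places, reflecting refinements to the constant):

```python
# Charge carried by one electron, in coulombs (exact in the current SI).
ELEMENTARY_CHARGE = 1.602176634e-19

# Electrons per coulomb is just the reciprocal.
electrons_per_coulomb = 1 / ELEMENTARY_CHARGE
print(f"{electrons_per_coulomb:.6e} electrons per coulomb")  # ~6.241509e+18
```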

MUCH More importantly — for those other moles, we’d plug all their holes but one, shove a garden hose down it and duct tape the other end to the exhaust pipe of our 1959 Chevy.  Good luck with those things.

Blog 12: Organizational Design for Innovation

April 1st, 2008

As discussed in Blogs #9 and #10 (and the comments and other blog discussions they spawned), organizations have historically been built on the Edisonian approach for all the right reasons.  But just suppose we wanted to build an R&D organization based on Archimedes and more specifically on the effort to find “Archimedes” just as he lowers himself into the tub.

First let’s look at what some of the requirements might be for an Edisonian organization within the commercial world.  This strategy is essentially based on “organizing for expertise.”

1. Identify surrogates for future problem-solving skills: We’ve got to recruit talent (our Edisons), and yet we can’t select those who have solved the exact problems they’ll be assigned in the future. So we need some other “marker” to look at. Maybe it’s intelligence, maybe it’s caliber of schooling or maybe it’s prior experiences. We’ve all done recruiting, and we know it’s all of the above, as imperfect as that is.

2. Creativity contracts: Since we’ve invested the effort to find these Edisons, we need to be sure they’ll be there when the problem-solving commences. If we are confident that we can predict with better-than-random results who the future Edisons are, then we need to also ‘own their creativity’ — we need to keep that skill from benefiting the competition. We’re “entitled.” After all, we’re paying them even when they’re failing on our nickel. This’ll take the form of employment contracts in which we lay claim to their creativity within the domains of commercial interest to us.

3. “Lockup” and retain: The contract is going to need to incentivize retention. Perhaps via non-compete language or longer-term financial incentives.

4. Build physical attractors: By creating desirable environments in which to work, we can preferentially retain these Edisons. We all want attractive surroundings, and better labs, better offices and better industrial parks are all part of a worthwhile investment in giving us preferential access to the Edisons.

**Betting on Edison is betting on the best qualified experts.**

In contrast, how might organizing for Archimedes differ?  Remember that in this case we are organizing for serendipity.  We are not betting that Edison will work his analytical method in starting from an initial hypothesis and failing toward the answer.  Rather we are counting on a sudden unpredictable AHA! — a Eureka moment.  And while that may occur in person X today, it’s probably gonna happen to person Y tomorrow.  So we aren’t interested in retaining our “Archimedi” but in finding them when serendipity strikes.  To organize for serendipity we should:

1. Broadcast challenges: Not knowing when or who might take the all-important bath and merge expertise and experience with a catalyst, we should broadcast our challenges widely in order to reach numerous potential solvers and let the “happily prepared mind” find us.

2. Lightweight contracts: We are not interested in long-term retention, so we need a lightweight agreement.

3. Problem specific intellectual property agreements: Long-term retention or not, we are going to want fair access and rights to intellectual property created in response to the challenges. We may not want to “lockup” creativity of a one-time Archimedes but we do want the freedom to exploit the solution prepared on our behalf.

4. Build intellectual, not physical, attractors: Since these are problem-specific engagements, physical facilities probably have little meaning. Intellectual attractors, on the other hand, are key — obviously key to attraction and, in some way we can’t quite define, they may well be the key to enhancing the odds of the mental catalyst doing its work. (For instance: do they communicate across a wide variety of skills and cognitive types?) And they can’t be diluted by the accessory duties all too often coupled with the very nature of employment. (Applying for a parking permit, submitting equipment requisitions on appropriate forms, attesting to compliance criteria, vying for cubicles with windows, trying to avoid the most damaged set of furniture, getting an extra bookcase for your larger-than-average library… (wow, how does anything get done?))

5. Minimize core with reconfigurable problem-solver population: Transient Archimedi alone aren’t going to get the job done. So a minimum core of problem-definers, program-dissectors, question-askers and solution-integrators is going to be needed, along with a reconfigurable problem-solver network from which Archimedes can be drawn just as he lowers himself into the tub. (Inside this minimized core, we may want to organize for Edison. Ultimately, our two heroes have to learn to play well together.)

**Betting on Archimedes is betting on the happily prepared mind — that finds YOU.**

As quoted earlier:  “Genius is 1% inspiration and 99% perspiration.”  Well, the facts of life are that we’re going to have to SWEAT out our next innovation, but if we tap an open system along with the effort we put in, maybe those “Edison ratios” get higher and the quality of our products and services gets better.  NO, we’re not ready — not this decade — to entirely tear down the bricks and mortar and the organigrams of Edisonian structures, but we ARE ready to start building the Archimedean ones and seeking balance to tackle and solve more of the world’s needs.

Blog 11: Edison and Archimedes – A Final Telling

March 17th, 2008

Readership has been patient.  We’ve spent a couple blogs arguing the lessons of Edison vs. Archimedes.  And yet, never told those stories.  It’s worked.  Many readers already knew these two tales.  BUT, not all.  And so, as preface to the topic of organizational design, I’d like to re-tell the stories of Edison and Archimedes in my own words (“What, not again?”  Hey!  I heard that!) — as it isn’t their life’s story but only life segments for which I am using them as poster children.  And maybe not even ACTUAL segments of life, rather metaphoric retellings of selected experiences shaped to emphasize a point… mingled with truth… a little… probably.  But the result is a mythological variant we can springboard from as we launch into organizational designs.

Edison:  When Thomas Alva Edison set out to invent the light bulb, he admittedly had a few known facts going for him.  It was known that electric currents can generate heat — particularly in the presence of resistance — and it had been known for millennia that “hot things glow.”  It seemed there had to be an electric light bulb in there somewhere.  And yet, the known combinations of materials and design had not yet solved the problem in any practical way.  Edison set out with an initial hypothesis and carried out the experiments.  They kinda worked… they HAD TO — based on what was known — but alas, no good light bulb.  I don’t recall what those first experiments were; I have a vague recollection from history lessons that they may have involved cellulose substrates with ionic salts and/or carbon powder impregnated in them.  They pretty much burned up (out?) within seconds.

Other experiments were conducted — varying the composition of the filament, the current, and even the enclosing atmosphere.  Today the answer is pretty much:  “fine tungsten wire in a vacuum.”  But that wasn’t the second, third or even the FOURTH thing that Edison tried.  Each experiment taught him something and could be argued to have gotten him “closer” to the answer. 

When challenged on the numerous failures, Edison himself is reputed to have defended this form of progress by saying, “I now know 1000 ways NOT to make a light bulb.”  (Actually, I can find this quote in various references with just about any number you choose in place of 1000.  I’ll leave it to others to track down the precise quote, but whether it’s 9 or 9999, the point is unchanged.)  At the end of it all, the light bulb was ‘invented.’  The essence of that search for an effective combination of materials and design has been replicated billions of times in human endeavors in innovation, each time accompanied by the quote, “I now know X ways not to make a Y.”  But contrast that methodology with the story of Archimedes.
Archimedes:  Hiero, the Tyrant of Syracuse (this was apparently his formal title and not just commentary on his leadership style) had decided he was in need of a new crown.  He contacted a local artisan and gave him a quantity of pure gold out of which the crown was to be fashioned (we’ll just say, “a pound.”)  Hiero received the crown and was pleased.  Of course, he promptly weighed it and found that he’d been returned the same mass as he had given for the task.  (The artisan wasn’t THAT foolish after all.) 

But as Hiero considered matters, he realized that he could not be sure he hadn’t received a pound of gold-looking alloy in return and that part of his gold was, even now, hidden under the artisan’s mattress.  The matter could be put to rest IF the golden crown was of the same density as pure gold.  Since the weight was easily determined, all Hiero needed to know was the volume of the crown and he’d know the density.  He approached Archimedes, who considered the problem as a geometer and, realizing that — while he had formulas for calculating the volume of cubes, spheres, cylinders and even cones — he couldn’t handle a crown, informed his Tyrant that “there is no known way to measure the volume of an irregular object.”

That evening, Archimedes lowered himself into the tub and made an observation most of us have made:  the water rose up alongside the edge as he wondered if he’d overfilled it and whether the last few inches of lowering would flood the bathroom.  BUT, Archimedes came to a realization most of us haven’t.  He realized that the displacement was occurring in direct proportion to the volume of his irregularly shaped body, and since it was occurring along one axis of a regularly shaped tub (cylinder or rectangle), he could calculate that exact volume.  What would work for an irregularly shaped human body would work for an irregularly shaped crown.  Archimedes was supposedly so excited that he didn’t bother to dress but ran naked through the streets shouting, “Eureka, Eureka, I have found it, I have found it.”

I will post, in the next day or two, the organizational designs suggested by these two parables.  In closing, though, I have to confess that my telling of these stories has always been a little awkward.  The Archimedes version seemed a bit too pat, and I knew that Archimedes’ Principle wasn’t so much a statement about volume displacement as it was about buoyancy.  I usually got away with this until I had the audacity to tell this story (in the above too-folksy way) to a group of scientists at the Santa Fe Institute (actually to their scientific advisory board — even worse).  Murray Gell-Mann politely waited until the Q&A session and then pointed out the precise principle of Archimedes in all its buoyant glory.  Busted!  Murray was a good sport; he acknowledged some mathematical equivalency and allowed that the argument of volume displacement is just as well inferred from his bath and probably more “storyable” as well.  Thanks, Murray, for both keeping me honest and letting me off the hook.  You’ll find (at LEAST) both versions in published literature.  But given the unambiguous description of “Archimedes’ Principle” in Archimedes’ own writings, Murray is probably right (no surprise).

Blog 10: A Conversation re: Edison and Archimedes – with Chris Flanagan

February 22nd, 2008

The original blog planned for this entry (#10) was going to be an exercise in contrasting how to design an organization for Edisonian vs. Archimedean approaches to innovation.  Along the way, a conversation broke out, and the comments and subsequent blog of Chris Flanagan at the Business Innovation Factory have led me to raise that dialog out of the comments section (under Blog 9) and from her blog as further introduction to the topic.

Her additional metaphors and her perspective, I think, will help clarify (and even guide) where I am ultimately going. In the end I agree that abandoning what has worked would be short-sighted. At the same time, relying on it and it alone to bring us in to the future is equally short-sighted. One problem I frequently experience as a blogger or writer (or even speaker/presenter for that matter) is that in order to clearly make my point regarding change I am left with too little attention span to point out that it is more often in the shifting of a balance and not an abandonment of the past.

MAYBE this approach will help make that clear while not losing the point that sticking only to the old just ain’t gonna cut it!

First, let’s quickly recap what sparked our conversation. In a nutshell, I wrote in Blog #9 that the distinction between Edison and Archimedes is one between the thoughtful application of trial and error and that of serendipity or the “aha” in which novel breakthroughs present themselves not in analytic response to prior experimental results but as sudden flashes of insight.

Some may argue that “Eureka” has ALWAYS been a part of our scientific endeavors and hence our scientific institutions, whether the NIH, Bell Labs, university research, DARPA or the garage work preceding Kitty Hawk. True. I don’t disagree. But our organizational practices couldn’t be built on the Eurekas – we loved it when they happened but we ORGANIZED around a solid cycle of experimentation.

    Christine Flanagan Says:

The complexity of innovation has given me a new found appreciation for the use of metaphor. (I’ve used your Archimedes/Edison example quite a bit over the past few years.) Blink author Malcolm Gladwell lays out another trippy analogy of innovation archetypes to rock and roll music.

In the 1960s and 80s, Fleetwood Mac went through a dizzying array of lineup changes, relationship issues and drug problems. If you listened now to their first few albums, you would have no idea who they were – they sounded nothing like the band we’d come to know.

Then there’s The Eagles, who charted a different course: Their very first album, Desperado, sounds unmistakably like the Eagles of later years.

So as a business innovator, is it better to be experimental (a la Fleetwood Mac) or conceptual (like The Eagles)?

Like everything in life, balance is required. But it certainly made us think about where we are on the spectrum and what opportunities might present themselves if we’re willing to change our perspective and bring other problem-solvers into our equation.

    -alph- Says:

Chris, Many thanks for the comments, both kind and informative. I appreciate you adding the story of the Eagles and Fleetwood Mac to the lore. And I’d like to react to that tale, even if a bit clumsily.

As a music company, I probably don’t look forward to funding that evolutionary process of Fleetwood Mac. Perhaps at the end they DO become the creators of hits like “Dreams” and “Landslide.” Or maybe they never do (not all bands that suffer “lineup changes, relationship issues and drug problems” are doing it on the path to greatness.)

Perhaps I like to see MANY bands and MANY tunes and place my bets on “Desperado.” I can always bet on “Rumours” later — when and if it comes into being. It sounds cold, and I’m a scientist by training yet not quite that unwilling to struggle with the evolutionary process. It’s just a question of who funds it and what the alternatives are.

Now, on the other hand, as a ‘creator’ which model do I prefer? Clearly the evolutionary story, the quest story. Aren’t they glad they didn’t give up (and aren’t we)? As creators (scientists, musicians or whatever), we need to continually evolve and challenge and tack toward the solutions. (With the occasional AHA!)

So how does that get funded in a solid company strategy of selecting the current best from a market pool? Some of those thoughts are hinted at in the blog on risk, but your ideas would always be welcome.

I looked back through your blogs to make sure you hadn’t already answered that question, but you’ve had over 50! since November (that’s 50-wow, not 50 factorial).

Maybe we’ll have a future cup of coffee and can get this issue figured out.

    Christine Flanagan Says:

I recently read The Global Brain by Satish Nambisan and Mohanbir Sawhney. It was quite good in showcasing the variety of ways that external communities can help facilitate the innovation process. (Innocentive is mentioned in it a few times.) You know better than anyone that a community’s power can translate into orders of magnitude improvements in innovation speed, cost and quality.

And while I agree that no company looks forward to funding the evolutionary process of a Fleetwood Mac, the fleeting nature of employee retention rates today offers little chance that an organization at one point or another won’t have to deal with it.

I have one more analogy to throw your way which, in retrospect, may be better than the music example. There is an industry out there that best exemplifies the hypothesis that diversity of exposure creates novel solutions – and that’s the food industry. I’m actually a chef – Johnson & Wales is what brought me to Rhode Island so many moons ago – and I was trained to experiment with combining and recombining different flavors, textures, profiles from a variety of cultures and cuisines. It’s an open, shared environment where experimentation rules. There’s a quote by Thomas Keller of French Laundry that says a good cook is about four things: awareness, inspiration, intellect, evolution. To your point, you just can’t get there without engaging with the MANY.

(p.s. I’d welcome that cup of coffee anytime!)

    -alph- Says:

Chris, As you said, “…little chance that an organization at one point or another won’t have to deal with it.” I agree and one thing I hope doesn’t get read into my blog is that it’s about X “or” Y. It’s usually about X “and” Y — tapping in to the MANY ways we tap in to the MANY minds. I actually remain a huge fan of “CLOSED innovation.” Only INSIDE can they understand the problems and programs to the extent that they can wisely dissect and reassemble pieces of solutions. And some of those pieces should be sought inwardly and some outwardly. And in that they will often have to come to grips as an organization learns and evolves.

Thanks for telling me the cuisine story. It actually captures pretty well the concept of solution-space with axes for flavor, texture, profiles and even, appearance — and I’m sure there are lots of dark, cold, empty regions. I know, I’ve made that stuff.

    Christine Flanagan says:

What’s the right way to organize around problem-solving and experimentation? No one can corner the market on smarts and diversity of exposure does create novel solutions. Innocentive is certainly testament to that. Thomas Edison said that genius is 1 percent inspiration and 99 percent perspiration. You once told me that you thought it was time to update that equation. At our summit a couple of years ago you said, “There will always be a lot of perspiration involved, but should the inspiration be 1 percent, or should we make it 10 percent by opening it up to a diverse net of human beings before you put the perspiration in?”

I’d forgotten about the Edison quote and my quip on it. Glad Chris reminded me. It helps illustrate the very point I want the reader to leave with. There’s been lots of sweat in the past and there’ll be lots in the future, but we can start changing the mix and bring more “aha’s” to our business and our customers if we open things up. And an open R&D operation won’t look just like a closed one — so in Blog #11, we’ll start defining that difference.


Blog 9: Edison, Archimedes and Solution Space

February 12th, 2008

We’ll keep revising the taxonomy document; at the same time we’ll return to the regular blog with some new topics and follow-ups. It also occurs to me that I am going to want to refer back to prior blogs — so with this one, I’ll go back and number them all. We plowed headlong through blogs 5-7 in order to keep a reasonably tight conversational thread on the rationale for wide OPEN innovation (the “crowdsourcing” systems described in our taxonomy.)

During those blogs, several tangents suggested themselves and we’ll take time now to go back and tidy them up. In Blog 5, we discussed Diversity and I made a passing reference to Archimedes: “What IF someone other than Archimedes is taking that all critical “bath” just as we are seeking to measure the volume of irregular objects?”

I left it assumed that the relevant details of that reference would be accessible by the reader (memory or a little Wikipedia). I’d like to flesh out that story a bit as I think it underpins a point worth making: all R&D institutions to date are ORGANIZED around the principles of Edisonian problem-solving while the future holds the potential for Archimedean organizations. (and yes, I will try to personally respond to all the naysayers that add their comments to this blog).

In a nutshell, the distinction is one between the thoughtful application of trial and error (admittedly, informed to varying degrees) and that of serendipity or the “aha” in which novel breakthroughs present themselves – not in analytic response to prior experimental results, but – as sudden flashes of insight. In fact, in the most common tale of Archimedes he is presented as having given both types of responses. The first, to Hiero, the Tyrant of Syracuse, being, “there is no known way to measure the volume of an irregularly shaped object.” And the second, being, “Eureka, Eureka, I have found it. I have found it!” as he ran naked through the streets.

Some may argue that “Eureka” has ALWAYS been a part of our scientific endeavors and hence part of our scientific institutions, whether the NIH, Bell Labs, university research, DARPA or the garage work preceding Kitty Hawk. True. I don’t disagree.

But our organizational practices couldn’t be BUILT on the Eurekas – we loved it when they happened but we ORGANIZED around a solid cycle of experimentation. We couldn’t very well assign the next research project to the person who was going to have the Eureka experience. We couldn’t stick all employees in bath water hoping it’d provoke the right effect. No, of course not. Instead, we hired smart people, we gave them tools to experiment with, we improved their experimental designs, we gathered and recorded data, we learned 1000 ways NOT to make a light bulb and we justified the cost of failure as, “well that’s R&D for you.” Really — quite Edisonian.

One conceptual way to think of problem-solving is the search of solution-space. Assuming a given problem is tractable, we can imagine axes of solution-space as representing each of the variables contributing (or imagined to possibly contribute) to a solution.

Let’s oversimplify. I want to assemble a simple flat puzzle on my table top. Each piece needs to be in some specific location relative to its fellow pieces. Now, there are lots and lots of non-solutions, including the one where we just dump all the pieces out of the box. For an N-piece puzzle, my solution space has 2N dimensions, where each dimension is either the X or Y coordinate of a puzzle piece (see phase space). There is a point (literally) in this solution space where the puzzle is correctly assembled. In fact, there are many such points if we use an absolute set of coordinates anchored, say, by putting (0,0) at the lower left corner of the table (this is simply to say that the puzzle could be assembled anywhere on the table, and even sideways).
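The emptiness of this puzzle-space can be made concrete with a minimal Python sketch. All the numbers here are invented for illustration: a 3-piece puzzle on a 10×10 table, with a generous half-unit tolerance for what counts as “assembled.” The sketch samples random points in the 2N-dimensional space and counts how often one happens to be a solution:

```python
import random

# Invented toy setup: pieces 1 and 2 must sit at these offsets from piece 0
# (anywhere on the table) for the puzzle to count as assembled.
TARGET_OFFSETS = [(1.0, 0.0), (0.0, 1.0)]
TABLE = 10.0        # table is a 10 x 10 surface
TOLERANCE = 0.5     # generous: within half a unit counts as "in place"

def random_placement(n_pieces=3):
    """One random point in the 2N-dimensional puzzle-space."""
    return [(random.uniform(0, TABLE), random.uniform(0, TABLE))
            for _ in range(n_pieces)]

def is_solution(placement):
    """True if every piece sits at its target offset relative to piece 0."""
    (x0, y0), *rest = placement
    return all(abs(x - x0 - dx) < TOLERANCE and abs(y - y0 - dy) < TOLERANCE
               for (x, y), (dx, dy) in zip(rest, TARGET_OFFSETS))

def estimate_density(trials=100_000):
    """Fraction of random placements that happen to assemble the puzzle."""
    hits = sum(is_solution(random_placement()) for _ in range(trials))
    return hits / trials
```

Even at this toy scale, random entry points almost never land on a star, and every added piece multiplies the emptiness.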

But although there are MANY solutions, there are many more “non-solutions” and puzzle-space is pretty empty (sparsely populated). Our task of assembling the puzzle could be described as maneuvering through this empty space looking for a point of solution. (The imagery of navigating through real space looking for a star works OK even if we only have three dimensions to deal with in that case). So one way to describe problem-solving is the act of entering that space with an initial hypothesis and navigating our way to a star. The clues following each set of experiments are what guide us in (hopefully) the right direction until the solution is arrived at. If that is an Edisonian description, how would an Archimedean one differ? Well, Archimedes was relaxing in his tub when the space suddenly became discernible to him and he stepped in – landing straightaway upon a star.

Now, in some ways, this sounds like a non-distinction. Is it just an accident of how close to a star that initial step into solution space is? There is a simple continuum from landing on a star to finding oneself in the deepest, least well-lighted regions. OK – granted the metaphor probably isn’t a very good one, but let’s accept that nearly all persons have had these distinct experiences in problem-solving and can accept that there is a difference between “1000 ways not to make a light bulb” and “Eureka, Eureka.” That experience — and permitting the metaphor to persist — at least allows us to think rationally about Archimedean organizational designs.

We’ve already acknowledged that the design can’t be trivialized by saying simply that we’ll assign problems to our own “Archimedi” or implement a series of random lateral exercises (i.e., bathing) to stimulate the experience. But thinking of the Archimedean model as “stepping on a star,” what can we do?  My simple-minded answer is to take a LOT of steps “into space” and see if one of them doesn’t qualify. Don’t start with one or even a few hypotheses and “tack to the light,” but start with dozens, hundreds, maybe even thousands of initial hypotheses and judge them based on the brightness of the original point of entry rather than the subsequent efficiency of navigation.  (Of course, maybe the future will hold more clever approaches; I actually hope so.)

(It’s gotten pretty long-winded already — so, in a later blog, we’ll ask, “what are the features of an Archimedean organization and how might they be expected to differ from an Edisonian one?”)


Blog 8: Innovation Taxonomy – v1.3

January 29th, 2008

(Last revision: 021108)  The gap in posting dates has a two-fold cause.  First, I was out on the road — driving from California to Indiana and taking a more scenic route up the Columbia River Gorge and through the Tetons.  I am more than guilty of multitasking while driving, but adding blogging to the mix isn’t likely to happen soon.  Still, I thought about this topic while driving and even discussed it over the phone with some colleagues, particularly Dwayne Spradlin, the current CEO of InnoCentive (he really deserves and gets co-credit on this blog.)

Second, it’s more text than normal, but that’s the nature of the beast.  Unlike past blogs that get posted after they’re “finished,” this one is going up quickly, in draft state, and will be modified over time.  I will tweak the revision number for anyone trying to keep track.  Other voices may want to suggest changes, and since I won’t have completed all my literature reviews, those voices, both in print and otherwise, will likely prompt modifications to this first text.

Let’s begin by proposing a picture that highlights the relationship between various innovation practices and we’ll conclude with a proposed glossary of terms.   Steve Somermeyer has pointed out some possible places where definitions might be found and over time, we’ll get those tracked down and incorporated.  I have also wondered if this calls for an innovation wiki (found one quickly, though it is in development and hasn’t tackled any of these particular terms)… but there again, I’ve got some research work to do before I either find it or figure out how to make it.  In the near term, I will probably try to add a page to this blog site that’ll make the definitions and relationships available just a click away.

Figure 1 shows a variety of terms in current use and defines the relationship between them. 


Figure 1.  Innovation taxonomy based on relationship between terms.

Our primary focus here is on the actual work (technical, scientific, design, engineering, etc.) from which the innovation derives. There are obviously many related processes: for example, how is the effort funded? how is a partner found? how is the project selected? how is the portfolio of projects managed? and what infrastructure is necessary and distinct for both internal and external innovation work? Since recent conversations in innovation have spawned relatively new tools, approaches and terminology, a few of these attendant processes are picked off for completeness’ sake — even if they don’t rigorously fit within the framework of defining modes of innovation. (And to be honest, some REALLY DO and it’s the frame that falls short).

Technology Scouting - surveying the existing marketplace for applicable technologies and products that may very likely NOT be on the “wish list.”  Procter & Gamble tells the story of finding candies on a stick with a cheap disposable rotating motor that spun the candy.  This discovery led to inexpensive motorized toothbrushes.  This may be done by small groups of geographically dispersed employees, contractors or even open solicitations.  It has been placed “off the chart” here due to the mixed nature of internal and external work and the fact that it is more “search” than “execution” in the innovation space.

Venture Capital Partnering – Regardless of the source of innovation or its boundaries, many companies are opting to share the risk and cost associated with research exploration that may or may not lead to a marketed product (and a return).  When successful, the returns are likewise shared with the venture partner.  Again, this is “off-chart” as it is a funding strategy.

Alliance Management – A final “off-figure” process is Alliance Management.  In many ways, this is less “off” than the others just covered.  Rarely do companies manage external or open innovation by simply granting the effort out of company and awaiting results.  More often there is a constant need to assess and tune the relationship, to iterate on plans and goals and to adapt to new learnings.  The enhanced focus on this management effort has resulted in dedicated roles within companies for specific duties of alliance management.  And, in many cases, these roles are played by dedicated employees comprising an alliance management office.

In Licensing – I am taking this out of order to set up for the definition of out-licensing. It fits on the chart under External innovation.  It is a case where the technology or product is purchased, either exclusively or non-exclusively, and marketed by the acquiring company.  It may, in fact, already be a proven market performer and yet a transfer of rights was deemed beneficial by both companies.  BUT, is it innovation? Probably not, but it is a way to access innovation and MAY be attractive as an alternative to building and sustaining an internal capability.

Out Licensing – Pretty obviously the flip side of the above.  While licensing plays an increasingly important role in nearly every company’s portfolio and strategy, we don’t intend to spend a lot of time on it here.  It has the virtue that execution risk has almost always been “sunk” and in some cases market risk has been as well.  It allows licensees to balance their product portfolios or offer broader ranges of customer service than they are equipped to achieve internally.  For the licensor it provides a source of revenue that might not be realized if an undersupported product were weakly marketed or even just abandoned.  But, it is accomplished by a primarily legal execution and not what you’d define as an “innovation process” per se.  So we will continue to focus on the act of innovation and leave some of this to business development teams.

Turning now to the body of our figure, we will begin with the top box and work our way across and downward.

Innovation - Though it was explicitly defined in early drafts of the introductory blogs, it seems to have been edited out.  I trust it wasn’t too confusing for anyone up until this point that we left the term innovation undefined.  That may be OK as we all pack some operating definition and it isn’t clear that differences in our definitions would bear too adversely on the dialog.  For the record, we are going to call innovation “the introduction of something new that is favorably received by the intended (or maybe the unintended) customer” — “an invention that succeeds.”  That’s hardly a unique definition and it embodies several others.  A more complete list of innovation definitions can be found in Wikipedia (I’ll link there often; I’m surely not going to compete with the “crowd.”)

Closed innovation – This is innovation practiced within the walls of the innovating company, usually the same corporate entity planning to generate value in the marketplace.  It has a long tradition and is often justified on the basis of confidentiality and intellectual property protection.  It has recently been challenged as falling short in diversity and amplifying the cost and risk of business without commensurate return.  In spite of this, it remains the dominant modality for innovation and research.

Open innovation – This is pretty much definable as the complement of “closed innovation.”  It is work that is done outside the organization with the objective of commercializing the innovation.  Of course, there’s outside and there’s OUTSIDE which leads to the rest of the table.  “Outside” can be as “inside” as work done by an exclusive vendor or it can be as “outside” as freely contracted to the planet at large (we’ve been using OPEN and crowdsourcing in other blogs to characterize this “outside-outside” scenario.)  If you want to find an outstanding analysis and case for “open innovation” a great place to start is Henry Chesbrough’s book by the same name.

Contract Research – The most familiar form of external innovation.  In essence the work of design and/or development of a new offering is granted to an external organization.  This may be motivated by a desire to find greater expertise, unique skills, specialized facilities or even simply capacity when workload exceeds resources.  Contractors may be academic labs, government research institutions, contract research organizations (CROs) that have come into existence simply to absorb extra work capacity and focus in areas of specialization (fluorine chemistry as one example), or even retirees.  We won’t define each of these individually as their definitions are probably pretty well inferred from the one or two word descriptors.

External Collaborations – OK, I admit that just about anything defined here could have been under contract research and vice versa.  But I wanted to open it up to some specialized contracting circumstances.  Let’s assume for this exercise that one characteristic of this category vs. the prior one is that the contracting organization stays more intimately involved and that the innovation process is iterative and shared. It could include joint ventures where both parties share in costs and returns. It could include vendor relationships where the vendor pays the innovation cost and takes the risk in return for a guaranteed market channel and perhaps a contracted return. It could include the growing public-private partnerships where partnering with government organizations allows one to tap into wider resources and where a case for “public good” can be legitimately made. Etc.

Both of these prior categories share the trait that the work is defined up front, the terms agreed to and a contract put in place BEFORE the innovation occurs. Only in rare instances is the expense solely a function of results. In most cases, whether contracting with government, academic labs or private ventures, the contracting organization pays for “time and materials.” There MAY be a success fee but that is usually a bonus above and beyond what is necessary to keep the doors open. Deviations from that arrangement probably occur most frequently when contracting with vendors who weigh the total business relationship into the contracting terms for a specific project.

Crowdsourcing - The third type of external research is brought under the broad category of crowdsourcing (a term coined by Jeff Howe). Here the innovation need is opened up to a much larger population of potential innovators.  Here, it is also more common for this larger market to assume some or all of the risk.  After all, they “opt in,” and do so only when they feel that the risk can be adequately managed without an up-front (or in-progress) payment. To highlight the risk distribution, we have subdivided this into two broader categories: simple cases where the risk-sharing is nearly irrelevant and those where it is central to the strategy.

Answers - This first subcategory of crowdsourcing is one where the responses sought are almost solely “top of mind.” There is no risk assumed on the part of the solution provider as they have invested little if anything in arriving at the answer. We see this type of innovation in computer user bulletin boards where one user poses a problem requiring some clever arrangement of hardware and software and another supplies the fix. Admittedly these aren’t often huge innovations. In another example, we saw Amazon’s Mechanical Turk as a way of tapping the crowd and getting specific answers to wide-ranging questions.

Risk Sharing – This family of crowdsourcing is for those projects in which the mental or physical effort required to respond to a challenge is significant enough that, under traditional outsourcing terms, payments would be expected a priori or agreed as a condition of attempting the work. This feature is common to all the subcategories that will be defined.

Mass Collaboration - This is a system which allows many individuals to make partial contributions, and it is the aggregation of these fractional solutions that sums to a viable whole.  It is distinguished from other forms of collective activity in two ways:  1) the individual contributions cannot be simply the following of independent recipes — there must be some form of adaptation based on the work of others.  And, 2) the mechanism for that adaptation lies not in an external social mechanism (i.e., teamwork) but in the content or product itself (think a wiki, for example).

Broadcast Search – A term defined by Lakhani et al. in a paper examining the opening up of innovation needs to very large populations and scientific communities as a problem-solving system. In these types of crowdsourcing, the challenge is very widely exposed to many, many minds and solvers choose to tackle a challenge based on prior experience, unique insight and even “aha!” experiences when reading the problem definition. Though the internet allows for unique models, the use of such a mechanism can be traced back to the longitude problem. It is also becoming increasingly common.

Lead User Innovation - This mechanism, defined by von Hippel, is one wherein the problems themselves along with the solutions are defined by a select customer population. Nearly every advanced technology has a fraction of its customers that push the boundaries and make new discoveries in the process. Those advances can be recaptured by the suppliers and used to iterate product lines and improve existing ones. (Good summary on Patty Seybold’s blog)

Public e-RFP – Though listed under crowdsourcing, this domain (a widespread – typically internet based – request for proposal) has been shown in Figure 1 as transparent inasmuch as the crowdsourcing component has been targeted primarily at FINDING new contract partners who may be uniquely capable of delivering innovation. AFTER they have been found, the innovation work itself proceeds in a non-crowdsourcing, more traditional, contract research fashion.

As I said at the beginning of this lengthy blog, we’ll handle this one a bit differently. I will update periodically and adjust revision number accordingly. There may be suggestions from others that’ll be incorporated; I want to hyperlink several references in the text; and I may even try to improve the figure (it looks great when transported between MS files but loses a lot when moved into WordPress which I use for editing the blog.) I will also see if I can create a new page and make this a living document.   (DONE!)

Dwayne Spradlin and

Blog 7: Risk Sharing – What you offload is NOT what they take on

January 13th, 2008

We have recently covered two of the three rationales put forward for OPEN innovation. The two already covered are 1) diversity and 2) spot capacity. The final one is “risk-sharing” and will be our topic for the next few minutes.

Sharing the risk of research, the uncertainty that a given hypothesis will prove valid, is actually a key part of tapping diversity.  One very simple reason that closed innovation — or even open innovation of the external contracting variety — must constrain diversity is cost and risk.  A company (or not-for-profit, or government agency…) limits the diversity of approaches employed in solving a problem for the simple reason that they can’t afford it.  It would be untenable to look at a technical or scientific challenge, identify the myriad approaches to it and try them ALL, either sequentially or in parallel.  At the end of the day, some would succeed, some would fail and you’d pay for them all.

Better to try them sequentially, take the first one that’s “good enough,” or abandon the project after some reasonable number of attempts — and move on.  The consequences?  Competitors succeed where you fail, projects are revisited with different results, a sub-optimal solution is brought to the market, you pay for a lot of false positives (failed attempts) and you create a lot of false negatives (abandoned potential successes).  (We will definitely return to a look at these alpha and beta type errors and the implications for better innovation strategies.)
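A back-of-envelope simulation makes the false-negative side of this visible. All the numbers below are invented for illustration: if each attempted approach succeeds with probability 0.1 and the project is abandoned after three failed attempts, then 0.9 × 0.9 × 0.9, roughly 73%, of solvable projects get walked away from:

```python
import random

# All numbers invented for illustration.
P_SUCCESS = 0.1   # chance that any single attempted approach solves the problem
MAX_TRIES = 3     # the "reasonable number of attempts" before moving on

def sequential_run(rng):
    """Try approaches one at a time; stop at the first success or at the budget."""
    for attempt in range(1, MAX_TRIES + 1):
        if rng.random() < P_SUCCESS:
            return True, attempt      # solved: we paid for `attempt` tries
    return False, MAX_TRIES           # abandoned: a potential false negative

def abandonment_rate(trials=100_000):
    """Fraction of solvable projects abandoned under the sequential policy."""
    rng = random.Random(0)
    return sum(not sequential_run(rng)[0] for _ in range(trials)) / trials
```

With these made-up numbers, nearly three in four solvable projects get shelved, and every run, success or failure, is paid for.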

Clearly, we’d be better off to run a more diverse exploration of solution space… but we just can’t afford it.  This is where a risk-sharing model comes in.  It’s surprisingly simple: pay for the solutions AFTER the problem is solved and don’t pay for the failures. (Entice bounty-hunters, don’t hire the Pinkertons).  At first blush, this can’t possibly work (though bounty-hunting has survived economic analysis, historic experience and even reality TV!).  In fact, when designing the business model for InnoCentive, this was our primary unknown – would scientists assume the risk for failure on behalf of a large entity looking for the solution?  It seems that the answer must be, “NO.”  Why should they?  One learning (at least for us) was that risk is not symmetrical.  That is, the risk a company offloads is not the same as the risk an individual assumes.

There are at least two reasons for this asymmetry.  First, individuals that choose to work on a problem self-select based on their belief in their ability to efficiently solve the problem (and who knows the contents of their minds better than they do?).  A research organization must often assign the task based on availability of time or balancing workloads.  Though the allocation of resources is far from random, it isn’t going to match talent and problem as efficiently as “opting in.”

And second, individuals may often have other reasons for working on a given challenge.  As a professor at Stanford said to me, “Alph, I’m always going to be looking for novel ways of making new dehydroamino acids; I’d love to put your compound in MY table.” In essence, if what you need done fits what I want to do, I can do it with no direct expectation that I’m investing all this time and effort into YOUR problem, I’m investing it in mine.

Simply “outsourcing” a technical challenge isn’t going to tap enough minds to find this risk asymmetry, but OPEN innovation, the crowd-sourcing type we’ve been discussing just might. It’s a big planet and somewhere on it, someone may well WANT to do just exactly what you need done and have a novel hypothesis to boot. In fact, given larger and larger candidate populations one can meet multiple filtering criteria and find (actually enable them to find you) the qualified mind that a) opts in, b) has a novel idea or even an “aha!” and c) can and will accept the risk in your stead.


Blog 6: Spot Capacity – Getting Capacity Transaction Costs to Zero

January 5th, 2008

The last blog argued for wide OPEN innovation to truly tap the potential of diversity in problem solving. This had been suggested as one of THREE arguments in favor of a crowd-sourcing type approach going even beyond the idea of opening up your R&D to external contracted labs. The other two propositions were: 2) spot capacity and 3) risk-sharing

Tackling the next, “spot capacity,” let’s look at what a crowd-sourcing-esque model offers. Now we all understand “capacity” pretty well. So the key word here is “spot.” What all does this suggest and can we agree that some types of capacity are more “spot” than others?

But first, what does capacity look like in a PERFECT world? Capacity (the resources to do work) is always matched exactly to demand (the work needing to be done).  All managerial experience declares that the perfect world is realized for fleeting moments at best.  All too often we are dogged by excess capacity and its drain on remaining resources or inadequate capacity and the resultant project delays and launch misses — or, most often, the WRONG capacity in which over and under capacity simultaneously exist:  too few 4000 gallon production tanks and too many endocrinologists.  It seems that, at our very best performance level, all we can do is be in-the-process-of-creating-the-right-capacity-for-yesterday’s-needs.  Ouch.

Correcting capacity mis-matches is an expensive proposition.  Expensive in both time AND money.  It’s going to cost you to add new resources and, if in the future they are no longer needed, it’s going to cost you to discard them.  The recruiting of even a single, skilled researcher will often take as much as a year and create a direct expense on the same order of magnitude as a year’s salary (that doesn’t even count the lost hours of productivity.)  Very few operations, including R&D operations, haven’t evolved to having some sort of flexible capacity at their disposal.  In our nomenclature, the more “flexible,” the more “spot” that additional capacity is.

In many cases, “flexible capacity” translates operationally into the use of contract research organizations (CROs) — labs with scientists and equipment almost exactly mirroring those within the contracting organization.  But before that external capacity can be utilized, contracts must be put in place.  These will describe the work to be done and the terms of compensation.  The crafting of those agreements alone is expensive and time-consuming (lawyers like THEIR job security as much as scientists do).   A few trips down this road and most organizations start to adopt the use of the term “preferred partner.”  This reflects the fact that the second, third and fourth agreements between any two given organizations ought to get a bit easier as trust and experience are established.

Add to this “preferred partnership” the desire to be “first in line” when tapping these resources and sometimes we find that retainers are paid to keep the pilot light lit or projects are rotated into the CRO to keep them busy with YOUR needs as opposed to your competitors’ (who else would they sell capacity to in order to achieve their own financial goals?).  All this makes sense… or does it?  The more preferred a partner becomes — and the more they are kept “ready to go” — the more they look like internal capacity without an employee ID badge.  At the extreme, they are virtually indistinguishable from an economic and organizational perspective.  (Though given corporate value statements they are probably easier to “fire” as they are seldom the “people” referred to in those declarations.)

Finally, add in the significant expenses associated with managing the external relationship as well as the overhead (the CRO doesn’t sell to you at cost) and you get some idea of why most R&D organizations were recently outsourcing only about 10% of their capacity.

A crowd-like model is designed from the get-go to have minimal contracting demands.  The solution providers often do so at their own risk and expense, counting on post-facto compensation (awards, prizes, bounties, etc) to balance the books. (Note that some of this same advantage is achievable in online RFP structures where the terms are pre-written and the contracting organization is looking for any contractee who finds them acceptable).  This starts to create a REAL flexible capacity, true SPOT capacity.  The need to simplify the contracting is baked in and the need to stay “first in line” is obviated by the fact that “all the world” acts as if it were essentially infinite capacity so everyone is “first in line.”

By reducing the transaction cost to near zero for tapping external research capacity, the financial resources can be focused on the actual asset value (the research) creating a win for both parties. Capacity mismatches have been historically tolerated primarily because THOSE inefficiencies are generally no worse than the inefficiencies associated with external capacity transaction costs.  BUT, that is changing.  The creation of numerous CROs over the last two decades and the ease with which they can be “found” (i.e., internet search, portals and even better physical directories) is changing the relative costs associated with these two inefficiencies.

But of even greater ultimate impact is the use of internet-based innovation models that result in near-zero transaction cost for accessing global talent through OPEN innovation.  These models may very well, over time, begin to reverse the integration trends that led to our current corporate model (See the Theory of the Firm and transaction cost theories of Coase).  The resultant post-corporate organizational design will be characterized primarily by a strategic intent overseen by a core of leadership and orchestrating talent to manage the ad hoc networks in which the work is accomplished… but more on this later.


Blog 5: Diversity – Critical Not Only to Innovation, But to Survival As Well.

December 27th, 2007

We are in the middle of a specific conversation — making the case for “OPEN innovation,” that crowd-sourcing type approach where many minds are exposed and the best ideas bubble forth. We’d posited three key principles in the prior blog:  
  1) diversity,
  2) spot capacity and
  3) risk-sharing.

Almost all cases for open innovation include the notion of diversity. Even when it is simply the “open innovation” process where an external lab (academic, CRO, supplier, etc) is contracted for a specific project or task, the suggestion is that they will bring independent and new ways of thinking.  But, the very act of partner selection based on expertise, familiarity and whatever other parameters are rationally used, seems to risk downgrading the diversity being sought. 

What IF the best approach currently resides in another discipline?
What IF the best idea is held by a young researcher not yet recognized for their expertise?
What IF someone other than Archimedes is taking that all critical “bath” just as we are seeking to measure the volume of irregular objects?

A crowd-sourcing approach – the exposure of the problem to MANY minds – brings a level of problem-solving diversity that soundly trumps simply contracting an external lab. (For more on Archimedes, see Blogs 9 and 10.)

I am here reminded of an experience in graduate school. Complex challenges in organic synthesis were given to a roomful of reasonably bright students from diverse prior academic backgrounds. In almost all instances a unique solution was submitted by each student; maybe there would be occasional overlaps or even the odd duplication. But the lesson here was that if you had assigned the very same problem to one chemist in your company, you got only one of those solutions. If you collaborated with two chemists you might get two solutions, and if you hired a CRO (contract research organization), you might get a third idea. But repeatedly I saw 20-25 unique solutions for each problem! Why don’t we work like that more often in the commercial world? (Oh, I can answer that. Perfectly rational NOT to work that way. But let’s wait until we get to the third of the three points mentioned above.)

Wouldn’t it be better to start with 25 possibilities than only one or two? And if we want those 25 potential solutions to be viable ones and yet radically diverse, we want many smart minds, we want diverse sources of training, we want diverse experiences and we want diverse “ahas” — those brainstorms that most scientists have experienced when they suddenly saw deeper into a problem and its solution than at other times. We want to broadly expose our research and innovation challenges, not just to one carefully selected and managed CRO, but to THE WORLD: to bright students in Moldova, to trained professionals in India, to retirees in Poland and to associate professors in Chile.

Oh, I admit that this notion comes with plenty of challenges:  Confidentiality, Intellectual Property, Competitive Intelligence Leaks — interestingly, all UNrelated to the base issue of innovation and ingenuity — but real business issues for sure.   And so, an approach of this type will require careful crafting and stewardship as challenging as the management of closed innovation has ever been.

But avoid dealing with these issues at your own innovation peril. A recent (Nov 1-3) Santa Fe Institute symposium dealt with diversity and the adverse consequences when it is present in too little measure. The summary of that meeting was sent to those of us affiliated with the institute and opened with these words: “when the going gets tough, the tough choose diversity.”   Speakers at the institute were, of course, wise enough to weigh the arguments made contrary to diversity:  focus, efficiency, resource conservation, etc.  But it is indeed quite possible to focus yourself right out of business.  (We have to come back to the whole concept of “exploration vs. exploitation,” but that’ll be several blogs into the future.)

We’ll close this blog on that note. (As you’ve seen in the past, each new concept introduced branches into four or five areas that will give us fodder for future topics.)

And a Happy New Year to all,


NOTES from SFI symposium: “When things get tough, the tough choose diversity.”

The overarching theme struck by the sharpest minds on Wall Street, in emergency medicine, genetics, language, computer science, neuroscience and biology — even the honeybee expert — was that loss of diversity almost always leads to negative consequences. In short, diversity makes a system more robust in the face of an uncertain future and more adaptable when big changes come.

Arguably, many speakers pointed out, reducing diversity and making particular strategies as efficient as possible has economic advantages. So, you might say, there’s a tradeoff between diversity and optimization.

Finding that balance depended on the speaker’s view of the future. Variables such as economic climate, climate change, social networks, species endangerment, language reduction, hedge fund health, trucking regulations, the Federal Reserve, malaria, and ocean circulation come into play. If there was a lesson to be gleaned from the two-day conference, it’s that the cost of maintaining diversity for these systems is probably small, in the long run, compared to the cost of not maintaining diversity and being wrong about the future.” — Aaron Clauset, SFI Postdoctoral Fellow

Blog 4: What Do We Call This?

December 23rd, 2007

Having introduced the notions of “open” (outside the walls of the company) and “OPEN” (broadly sourced) innovation in the most recent blog, let’s begin examining some of the rationale for adopting the latter. But before getting into that, let’s go back and acknowledge the awkwardness of this nomenclature as a means of capturing its history.

The 90’s saw many varieties of “open innovation” — emerging or on the increase — in R&D business practices, for example: contract research, technology scouting, outsourcing, in-licensing, etc. We also had a growing movement in what was called “open source software.” While open source software development was indeed “open to the source of the solutions,” the term itself referred to the underlying code, the “source code,” and its intent to be “open to the public,” i.e., not copyrighted but placed in the public domain. Thus, the novel development practice of being “open to the source of ideas” didn’t really have a name of its own. There were a few examples near the turn of the century, very few, like: Hello Brain, InnoCentive, TopCoder, BountyQuest, and I’d also put the X-Prize in this category.  As this approach was replicated at varying levels of complexity, we saw rapid-fire problem solving and consulting in models like e-Lance, Gerson-Lehrman, and then Amazon’s Mechanical Turk and Google Answers (these lists are hardly exhaustive – of either historical references or present implementations.  Some survived, some didn’t, some morphed… but you get the idea).

In this climate, Jeff Howe’s Wired article in 2006 introduced the term “crowdsourcing” — a good descriptor that has gained considerable traction. The term has been comfortably applied both to the quick-response “answers” systems and to more complex endeavors like InnoCentive, and I recently participated in a podcast (on WeAreSmarter) on just this topic.

At the other end of the “answers” spectrum we have growing use of the phrase “Prize Philanthropy,” where we see diversification of the X-Prize efforts and the addition of the Earth Prize, the Prize4Life and other examples as well. What characterizes this end of the spectrum is that the qualifying submissions are often heroic in execution and require considerable investment, and likely a coalition of talents and disciplines, to pull off. We are, however, seeing Prize Philanthropy rapidly moving “down-scale” to seek modular, key solutions that are part of a bigger ecology of global problem-solving. These include many independent efforts as well as collaborative efforts in which existing platforms like InnoCentive serve the needs of foundations, obviating massive duplication of effort in platform construction and providing leverage by tapping communities with the right skills for a more diverse set of problems.

Given the history and evolution of these descriptors we may well need no others. However, it has been my experience that when speaking to R&D Directors, Business Innovation Champions and Chief Scientific Officers within the commercial world, neither “crowdsourcing” nor “Prize Philanthropy” has entered their vocabulary — and, more importantly, neither has its applicability to their “tough-minded, serious business of commercial R&D.” For that reason alone, any ideas for a more tailored nomenclature would be welcome. (See also Blog 8 on taxonomy.)

But, I digress. We said we’d make a case for “OPENNESS.” Let’s at least get a START on that and continue for a blog or two. For me the case seems to come down to three key principles: 1) diversity, 2) spot capacity and 3) risk-sharing. Stay tuned. I’ll hit the first of these in a few days. Meanwhile, Happy Holidays!


Blog 3: Competing on Question Asking

December 16th, 2007

Intros over.  We’ll dive in and start with what’ll probably be several forking blogs on “open innovation.”  Many companies – and even whole industry sectors — compete primarily on the basis of innovation. This is as opposed to competing on added services, price, convenience or some other aspect of business that allows for an “edge.”

Historically such competition has revolved around each company’s ability to assemble research departments — and most importantly, teams of talent that strive to out-innovate their competition. This was accomplished by smart people, with excellent equipment and laboratories, inventing new products – and even new technologies – and oftimes, making fundamental advances in science. Think Bell Labs as a prototype.

Of course, one realizes that though Bell Labs continues as a distinct entity, it has not fully survived in the form that characterized its heyday (spinoffs, layoffs, mergers (and recent growth), mission changes, etc.). No doubt many factors contributed to the transformation of the central lab (with a wide remit for science), and it is not my intent to thoroughly analyze it or even to propose a scholarly hypothesis. Surely some of those factors must include the more widespread access to knowledge brought about in the “information age.” Coupled with knowing what is out there, business also became more sophisticated in its ability to license. This decreased the need to invent it all in-house. Satchel Paige’s comment “none of us is as smart as all of us” has been scaled up and globalized.

Even so, an enormous percentage of the applied science and technology and “reduction to practice” remained an internal skill. Responding to this reality, a significant number of graduating scientists, engineers and technologists historically took employment in large corporations. The shift to “distributed innovation” has taken place over decades – until today, when many sectors can point to significant fractions of their new product introductions and underlying technologies as originating external to their corporate labs. Some, like P&G, have even declared this as a strategic intent (C&D, or Connect and Develop) and set quantitative goals to increase licensing as the primary mode of innovative growth while holding internal resources more constant.

Now is the era of “open innovation” and even “OPEN innovation.” What’s the distinction? The shift to contract labs and licensed technologies is surely the major part of the open innovation movement, but recent broadband internet access and leaps in communication allow us to imagine a future in which technical problem solving, on-the-spot invention, and on-demand innovation can be realized through open communities of scientists (e.g., InnoCentive, TopCoder, Prize4Life, X-Prize and Earth Prize). The enabling platforms where the network is managed on behalf of other institutions (e.g., InnoCentive) have been dubbed “innomediaries” by Professor Mohanbir Sawhney of Northwestern’s Kellogg School. This ushers in OPEN innovation:  marketplaces where intellectual property (with or without its legal appendages; a topic for a future blog) is exchanged as readily as Hummels on eBay. (Well… maybe that’s a bit of an exaggeration in 2007… but stay tuned.)

So, if all the world becomes the innovation lab how do innovation competitors compete? As pointed out by Raynor and Panetta, the new basis for innovation competition shifts from the “problem-solvers” to the “question-askers.” Even in OPEN innovation — where resources well beyond any imaginable corporate lab are available to solve problems – those minds principally respond to the innovation challenges defined by the question-asker’s commitment to manufacture, market and make available to the public.
This “question-based” competition clearly defines an evolving role for internal staff even when open innovation (including “OPEN”) is fully embraced. Better defining this role is a topic for another day.


Blog 2: Introduction – Part II

December 10th, 2007

Got a bit of housekeeping out of the way in the first blog.  Going to stay with that for one more round just to finish setting the stage… then off we go. 

Here are some future titles to give you an idea of where we might be going, at least in the nearer term:

Networks vs. Employees
Risk Structure: why hold it, why share it
Theory of the Firm: drivers for firm integration and disintegration
Collating non-expert knowledge and making sense of the collective
Organizations that attract ideas vs. those that order them
Hierarchy vs. Adhocracy
Simulations and models
Combinatorial processing
Prediction markets
Long tails of knowledge
Scientific method vs. serendipity (can you organize for serendipity?)

Of course, my greatest desire is that 90% of what will be said couldn’t have been scripted ahead of time. I look forward to that union of our shared perceptions and experiences — your insights, your questions, your critiques.

Along the way, we are probably going to have to refine our language a little. We have a few recently introduced terms (in general lit, if not in this blog…yet), like, “Open Innovation,” ”Innomediation,” “Democratizing Innovation” that may be overly broad for some of our distinctions and we have some new concepts simply unnamed. Maybe we could flag a few of these nomenclature requests and get the collective wisdom of our readers engaged?

In the coming months, your reactions and comments will create new titles and distractions… productive distractions. I really look forward to playing jazz where ideas get extracted and tossed up for comment and fruitful scrutiny. My goal is to learn something here; that goal will be hard to achieve if the script remains unassaulted.


Blog 1: Introduction

December 5th, 2007

InnoBlogger is sponsored by InnoCentive, Inc., an innomediation company managing the skills of a large global network of solvers to tackle innovation challenges. The blog however is authored INDEPENDENTLY and may oftimes not reflect the practices of InnoCentive or the opinions of its management. (It’s not gonna surprise me when it happens).

Specifically, as the tagline implies, InnoBlogger is directed to institutional leadership: to CEOs, company founders, chief scientific officers, foundation directors, government agency heads, policy makers, etc…. in other words, to the decision-makers that are ultimately held accountable for the innovation performance of their organizations — whether a not-for-profit foundation seeking cures for orphan diseases, a corporate business unit bringing a new product to market, or a government agency seeking breakthroughs to better provide security — or even food — for its citizens.

I have selected this audience because innovation leadership is crucial to the performance of a great many institutions and certainly to the quality of life of every person on earth. It was innovation (creativity + implementation) that brought us efficient farm practices to better feed ourselves and our families, water purification to minimize disease, educational practices to better our understanding of the world around us, the printing press, the steam engine, the internet… a near endless list of human advances.

Within the walls of various departments charged with innovation are “the geese that have laid the golden eggs” — the products and ideas on which the company was built and thrived. Organizational culture and lore sometimes add… “and if leadership doesn’t understand that world, they’re as likely to kill those geese as to nourish them.” Seems risky for a non-R&D leader to monkey with that part of the organization — let’s just leave it to others.

These attitudes would rarely be tolerated in leaders if applied to operating divisions like marketing or manufacturing — and yet, for R&D, somehow it seems OK. And, truth be known, it may well be that, often, the scientists and technical leaders LIKE this status quo.

Of course, few things don’t benefit from the occasional airing, and “black boxes” are no exception. Innovation — its what, who, how, where, when and why — should be a deliberate and well-debated plank in most institutional strategic platforms.

To facilitate that debate, we’ll tackle a few key elements of innovation in this column – topics like: open innovation, competitiveness, risk, portfolio management, organizational innovation structures and even the extent to which R&D skills can be effectively “rented” rather than “owned.”

Over the next few blogs, we will embark on a discussion of meta-innovation, innovative new ways to innovate, how one structures such an organization, the skills needed, what could/should be done in 2008 that wouldn’t have made sense in 1988.   We’ll strive to avoid trivial solutions: be smarter, have more money, be bigger, rashly parallel process, cheat, etc.

But for now, mull this over, look at the site:  
Does it look blogworthy? (color/layout/elements)
Is the text in a human voice?
Help me make the message impactful. I believe in it.

I’m new to this, but still not such an old dog…. (but coming on fast, which is why I gotta get in this one last trick).