The meaning of Minimum Viable Product (MVP) as part of the product release cycle

The notion of building an MVP or minimum viable product often pops up when planning software implementation or development projects. Although it is not an inherent part of agile approaches to software development, it is often cited as a means to achieve quick delivery.
In my opinion the idea is rarely fully embraced or followed, perhaps because of the connotations of the word 'minimal' and its echo of insufficiency, shoddiness, or a poor excuse for not wanting to invest sufficient time and means in a full-featured product. The majority of project owners would rather see it as a first release of the product they originally had in mind, fairly full-featured and complete.
Is it possible to apply an MVP approach in all circumstances, or should we reconsider the positioning of the MVP as part of a project approach?
An MVP can be produced for any type of product, be it a physical product or a service provided to a customer base. The MVP is "that version of the product that enables a full turn of the Build-Measure-Learn loop with a minimum amount of effort and the least amount of development time. The minimum viable product lacks many features that may prove essential later on." (Eric Ries, The Lean Startup, p. 77) It thus serves the needs of the users or customers and is the first step in a journey of learning and fine-tuning the product. In this way it certainly fits the design thinking approach, where it can prove to be of better service than the prototype, as the MVP is put to the test in a real productive environment, interacting with the intended user or customer.
Based on this description, the MVP is ideally suited to a green-field situation: developing a new product or service. Starting from a conceptual understanding of the needs of the intended customer, a first basic version of the product can be put to the test, seeking feedback while already earning some income through the new service or product, which is then ready for a ramp-up based on the feedback provided by user interaction and interaction measurement. The learning from this first use of the product can and should then be transformed into new product features for which demand is clearly demonstrated, or into new product 'behaviour' or configuration conforming to the wishes and needs tracked through use. Producing MVPs therefore requires the courage to put one's assumptions to the test.
The MVP is often positioned in larger organisations. However, it frequently does not succeed in delivering on expectations. Are there obvious reasons why the MVP approach is not taken up, or is even met with resistance, by project or product owners in organisations?
An MVP is often seen as a phase of the product development and release life cycle, when developing new solutions or replacing existing ones. The expectation level in these situations, and certainly in the latter case, is not the same as in a green-field development, and maybe not even similar. In the context of the succession of an existing product or process, one is often confronted with an extensive set of requirements and a high level of detail in the requests describing the needs for the product to be developed. Unless the product to be developed embodies a complete change of concept, the set of requirements is often so large and detailed that the initial release can, in my opinion, hardly be considered an MVP.
In a large number of enterprise projects the so-called MVP is not really an MVP. In the context of the succession of an existing application, the product to be delivered is a new release, not a new product, even though the technology or system basis changes. Current users expect the new product to perform better than the one they are currently using. It should contain all current features, be modern, and offer increased performance and improved usability compared to the current product. The product to be delivered is not a subject of discovery and learning; it is expected to be an improvement on features that are well known and, with luck, well documented.
It is debatable whether providing automated support to a running process can result in the release of an MVP. Although the (software) product may be new and its usage and usability subject to testing and feedback, process and information support requirements are sufficiently known to be explicitly discussed and documented. The processes to be supported may well be fraught with bureaucratic controls that need to be cut down, but the concept as such is not up for testing.
Even when we are talking about a real MVP, potential reasons for reluctance to adopt an MVP as a step towards the implementation of a solution or product are to be found in relationships, and in the disruptive nature of the approach in settings where the majority of interactions are contractually defined.
Relationship with the IT solution supplier – an external contractor is expected to work for a limited time and to deliver the product as initially defined when the project was granted. Even when working with the company's or organisation's own internal IT department, relations are often not as smooth as they could be. In some cases the department is seen as an enemy: it purports to know the needs of the business better than the business departments themselves, and it behaves in a contractual relationship similar to that of an external supplier. Challenged by a lack of resources, it tends to block extra requests or changes in scope or requirements, increasing resentment while taking a reactive stance rather than proactively participating in the definition of the product alongside the business department that is the product owner.
A 'we want everything to go to plan' attitude is part of the overall culture we live in. This need to be in control and to act predictably leads to a lack of open communication, where mistakes, misunderstandings and planning deviations are tucked away by not communicating, or communicating only vaguely, about progress, results and problems while proceeding with the development of the product. An efficiency culture that starts from the premise that everything is to be done based on past experience, known facts and precedents or similar activities executed in the past raises expectations and decreases the perceived need to be explicit and clear when expressing requirements and desires. 'It has been done before, relax' is the standard (requested) attitude. The focus is not on a discovery of the real need and the real added value, as design thinking promotes.
To profit fully from agile and lean innovation concepts, including the use of MVPs, a cultural turnaround is needed for departments and enterprises that want to thrive on innovation and adaptability. They should move from a command-and-control view, where the few plan and think out products and the majority execute without question what they are ordered to do, to a learning and improvement attitude, where everything is in flux, knowledge is gained, and all participants in the organisation take part in creating it and its products. All concerned have to work with the same overall purpose in mind, bringing down the walls between departments and teams that were introduced with an eye on efficiency and task specialisation.

Search – Find – Retrieve

All we do is talk about searching; finding is what we should be talking about. Hence the interest in the findability idea among content publishers on the world wide web, as it leads people to their web sites. Authors motivated by earnings, as in a content marketing environment, will adopt a specific writing style to influence search engine algorithms and gain a higher ranking in the search results. Inside the company, a lot of content is written in the context of a specific business process or to support the execution of a task, and when it does not concern marketing, the author does not tweak his texts for easy finding, discovery and retrieval. The information contained in that content, however, can have extremely high knowledge value in times that stress efficiency and the innovative capacity of organisations.
The search and search experience
However, not all searches are equal. We often refer to Google as the search experience of reference. You often hear: "Why can't we have a Google at the office? Then we would find the information we need." There might even be a full-text engine already available in the organisation. What makes us so happy with the finding capability of Google?
We list some elements:
The boundaries of the 'collection' – in the case of Google, with which most of us are familiar, we search the internet, a vague concept when looked at from the point of view of a library or repository. The internet as such has vague boundaries; we have no idea of the content available. The amount of information is huge, and as a consequence the statistical chance of finding something interesting and/or relevant is high. In a corporate environment we are talking about a well-circumscribed library of content with a relatively small number of information artefacts (at least compared to the world wide web).
Related to the boundaries of the collection is the group of contributors, which is fairly limited in an organisation, whilst virtually endless on the internet, not even counting the free services provided by volunteer groups, as in the case of Wikipedia, which summarises information about most general topics.
Search coverage – often not all information sources in an organisation are indexed or accessible using the same search engine, whilst this is largely irrelevant for information available on the internet, given its vague boundaries and the sheer volume of information available.
The context of the search performed – generally speaking, internet searches serve to get acquainted with or informed about a certain topic. In a business context, searches are much more targeted, often driven by the execution and deadline of a specific task, or case oriented.
This leads us to the goal of the information search – in general, when searching the internet, our attitude is of the nature of 'I want to know something about …'. A number of internet sources are specifically geared toward these 'what about' questions. In contrast, the majority of requests in the corporate environment are very targeted, aimed at confirming factual data. The find expectation is aimed at one specific result. In a number of cases we are looking for confirmation or proof, having the document or information artefact in mind but having forgotten the specific wording or document reference.
The feedback model, which is strongly related to the earning model in the case of Google, pushes them to deliver. The more targeted the search result, the more likely one is to click on it and visit the site, and Google gets paid for the associated advertising. Internal search engines are generally sold on a server-based licence; there is no incentive tied to the quality of the service provided.
Optimising search and discovery is a challenge that is not really taken up by search technology providers. The majority of solutions are driven by an inverted file index: a list of all keywords used in the indexed content with a reference to the source, as you will find in the back of a book, with a search interface running against it. Basically, the search request is mapped onto the entries available in the index.
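As a minimal sketch, assuming a toy corpus of three invented documents, the mechanism looks roughly like this in Python: each term points to the documents containing it, and a query is answered by intersecting those postings.

```python
from collections import defaultdict

# Toy corpus; in a real engine these would be ingested documents.
documents = {
    1: "quarterly sales report for the northern region",
    2: "travel expense policy and approval process",
    3: "sales forecast and quarterly targets",
}

# Build the inverted index: term -> set of document ids (the postings list).
index = defaultdict(set)
for doc_id, text in documents.items():
    for term in text.lower().split():
        index[term].add(doc_id)

def search(query):
    """Map the query terms onto the index entries and intersect the postings."""
    postings = [index.get(term, set()) for term in query.lower().split()]
    return set.intersection(*postings) if postings else set()

print(search("quarterly sales"))  # -> {1, 3}
```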
Although precision and high relevance of search results are highly appreciated by people searching for information, there is a balance with volume. The higher the volume of information, the larger the result set, and the more a high level of precision in ranking relevance is required. In smaller collections the diversity of the result set may be higher, but the decrease in precision while going down the result list is more visible to the searcher.
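For reference, the two measures at play in this balance, precision and recall, in a minimal sketch with invented counts:

```python
retrieved = 50           # hits returned for a query (invented count)
relevant = 40            # relevant documents in the collection (invented count)
relevant_retrieved = 30  # documents that are both (invented count)

precision = relevant_retrieved / retrieved  # share of the result list that is relevant: 0.6
recall = relevant_retrieved / relevant      # share of the relevant material actually found: 0.75
print(precision, recall)
```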
In a context of limited content volume, introducing synonym rings (lists of terms with similar meaning) may improve recall and produce a somewhat richer result list, adding value to searches in a multi-disciplinary environment that uses differing terminology, or in a multilingual environment; see the sketch below. Setting up the synonym lists requires significant effort. In a similar vein, the introduction of semantic web applications that control vocabulary has certainly helped search result quality: it makes navigating the collection possible through a controlled vocabulary while not requiring extensive human indexing effort. Alternatively, upstream tweaks at the intake of content, such as automatic classification, try to take over the work of the human indexer by automating it after a training period.
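Continuing the sketch above, a synonym ring can be applied as query expansion: each query term is replaced by the union of the postings of every term in its ring. The rings below are invented for illustration.

```python
from collections import defaultdict

# Same toy corpus and index construction as in the earlier sketch.
documents = {1: "quarterly sales report", 2: "travel expense policy", 3: "sales forecast and targets"}
index = defaultdict(set)
for doc_id, text in documents.items():
    for term in text.lower().split():
        index[term].add(doc_id)

# Hypothetical synonym rings: every term in a ring is treated as equivalent.
rings = [{"sales", "turnover", "revenue"}, {"forecast", "projection", "outlook"}]
ring_of = {term: ring for ring in rings for term in ring}

def expanded_search(query):
    """Expand each query term to its full ring, union the postings, then intersect."""
    result_sets = []
    for term in query.lower().split():
        ring = ring_of.get(term, {term})
        result_sets.append(set().union(*(index.get(t, set()) for t in ring)))
    return set.intersection(*result_sets) if result_sets else set()

print(expanded_search("revenue forecast"))  # -> {3}, although doc 3 says "sales"
```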
Clustering at the level of search result presentation helps searchers target the information they need. While the overall result list may be long, topical grouping will guide them to the required information faster. Think of requesting information on "Milan": the result clusters will show whether they cover tourist information on the Italian city and capital of Lombardy, people having Milan as a first name (with their last name as a lower-level sorting order), or AC Milan, the football club.
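A minimal sketch of the presentation side, assuming each hit already carries a cluster label (hard-coded here; a real engine would derive it from metadata or an unsupervised clustering step):

```python
from itertools import groupby

# Invented hits for the query "Milan"; the cluster labels would normally be computed.
hits = [
    {"title": "Milan travel guide", "cluster": "City of Milan"},
    {"title": "AC Milan season review", "cluster": "AC Milan (football)"},
    {"title": "Milan Kundera biography", "cluster": "People named Milan"},
    {"title": "Duomo di Milano opening hours", "cluster": "City of Milan"},
]

# Group hits by cluster label for presentation; the sort is stable,
# so the ranking order within each cluster is preserved.
hits.sort(key=lambda h: h["cluster"])
for cluster, group in groupby(hits, key=lambda h: h["cluster"]):
    print(cluster)
    for hit in group:
        print("  -", hit["title"])
```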
Characteristics of corporate context
This focus on the content index for searching ignores characteristics of the corporate context in which the employee is looking for information. Contrary to generic web search, we know a lot about the person launching the search. Like Google and public social platforms such as Facebook, we can work with the search and response history when tuning search results. Which items of the result list were visited? Is there a pattern in the visits? Coupling search results to content ratings that evaluate information found on corporate networks can help define the domain of interest.
Unlike on the internet, content types with associated metadata can be identified. In the corporate environment, adding metadata can be much more controlled, supported by value lists established in a specific business or process context by trained business analysts. This does not prevent additional free tags from being added to content items. Belonging to the same organisation and to a specific department will by definition strengthen external qualifiers or content attributes. All people live in the same terminology cloud defined by their practice, corporate culture and corporate speak. Although slightly modulated by unit or team adherence, vocabulary coherence is much higher than among the widely scattered internet population.
Completely unlike the internet, organisational and functional data is available on the user launching the search. We know in which department, team and project the person works, and what the main focus of their activities is. For each organisational unit it is possible to indicate the semantic field of activity, linking the corporate directory to the semantic map declared and maintained in the corporate triple store.
By building these elements into the relevance ranking algorithm, combined with the capacities of big data and AI, the use of probabilistic reasoning, and learning from previous search and retrieval behaviour and occasional feedback on content, search results can be targeted much better, reducing searcher frustration even with a smaller library and the fundamentally different search motivation in corporate contexts. Combined with already existing technological solutions, this can lead to a superior search and retrieval experience.
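One way to picture such a ranking function, as a simplified sketch in which the weights, profile fields and scores are all invented: a base text-match score boosted by the overlap between a document's metadata and the searcher's organisational profile and history.

```python
# Invented user profile, e.g. drawn from the corporate directory and search history.
user = {
    "department_topics": {"procurement", "contracts"},
    "recently_visited": {"doc-17", "doc-42"},
}

def rank(hits, user):
    """Re-rank hits: base text score plus context boosts (weights are arbitrary)."""
    def score(hit):
        s = hit["text_score"]  # score coming from the plain content index
        s += 0.5 * len(hit["topics"] & user["department_topics"])  # topical overlap
        if hit["id"] in user["recently_visited"]:  # behavioural signal
            s += 0.3
        return s
    return sorted(hits, key=score, reverse=True)

hits = [
    {"id": "doc-42", "text_score": 1.0, "topics": {"contracts"}},
    {"id": "doc-99", "text_score": 1.2, "topics": {"marketing"}},
]
print([h["id"] for h in rank(hits, user)])  # doc-42 overtakes doc-99
```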
Thesauri and ontologies can both standardise language while also adding flexibility to searches. Relationships built into the model can take into account variations created on the author side or on the side of the searcher, who may use sub-vocabularies as created in sub-domains or teams. Some discipline-oriented thesauri are available, but customising them to in-house dialects is time-consuming and costly; the same goes for building specific ones when no material is available. Feedback on the search queries used and the interaction with the result lists built by the search engine, combined with big data and AI, may kick-start this process.
[Figure: km-search]
Example implementations, not necessarily covering all elements of the model, are:
Open Semantic Search (www.opensemanticsearch.org)
Nuance in the medical sector (https://www.nuance.com/healthcare.html)

Deep content is what will remain in a digital world

Currently there is a lot of discussion going on about the digitalisation of our lives. Even though we already find it disruptive, we are still in the early stages of the fundamental digitalisation of products, work processes and services.
As with other newly introduced technology, it takes a while to understand its full capacity and potential. It is already clear that in the evolution to a truly digital society and business model we should leverage technology better. An increasing volume of data will be captured in originally electronic formats, ready for direct consumption by back-end applications. This will certainly have an impact on what documentary information is created and how it is edited. Today a lot of data is still collected through paper forms, a relic of traditional administrative processes and bureaucratic tradition. Often scanning solutions with extensive form recognition capabilities are used to digitalise the capture of data filled out by customers, consumers and citizens. This, however, does not change processes, services or user experience fundamentally.
Citizens are increasingly invited to launch transactions and interact via the internet. Although not everybody is sufficiently computer literate yet, we are all taking up more of the administrative processes ourselves, ousting administrative clerks. Nevertheless, filling in forms is a burden. The device we use for subscribing to a service or launching a business transaction such as an online sale should be able to help us. Smartphones in particular contain a lot of data on the user and the situational environment he or she is in. That data can be used for analysis, but it should also support us when filling out the umpteenth form: identification and address data could be filled in automatically, saving us time and letting us focus on verifying and releasing the data. Together with the increasing spread of easy-to-use portable digital devices, this will lead to a decline of the traditional paper documents used to capture structured data, since computer apps or forms can guarantee more structure and quality control than the lenient paper form.
The shift to digital will be enabled at the level of the user device, which is increasingly compact, light and cheap. Tablets and smartphones not only have sleek, tactile and intuitive user interfaces, they are also attaining the processing power of standard desktops or laptops. Interaction with applications is increasingly supported by apps best adapted to the device used, encouraging the use of digitally native data and information formats. Also important is increased connectivity, permitting us to be online at all times and nearly everywhere. Increased screen quality and size, enhanced settings and flexible positioning invite more on-screen reading, increasing the number of people, currently mainly younger generations, consuming information on screen. From a content provisioning point of view, a fluent transition to the content made available needs to be provided. The traditional document paradigm will make room for content that is richer yet easier to access.
Documents, as currently supported by document management and file-sharing solutions, are individual files made up in their own specific binary format, accessible only with specific editing or viewing software, which makes opening a document cumbersome and incurs long waiting times. In their formatting they inherit directly from the paper age, adapting the text presentation to the borders laid out by paper formats such as A4, folio or other printing standards. Digital screens are not confined by these measures; they permit continuous scrolling. Hypertext features, available in software and supported by globally accepted standards as applied in browsers, permit hopping between content parts. This supports a more liberal consumption of textual material than the written cross-references used in printed material. The use of smaller components will of course have an impact on the writing styles used in non-fictional prose.
This will most certainly lead to a broader adoption of wiki-like technology, letting us focus on the actual content or text rather than formatting, and inviting us to collaborate even more intensely in the creative process of writing.
A similar evolution can be recognised in the context of structured data and reporting. The usability and intuitiveness of data analysis tools will push us towards more active interaction with data sources, building our own reports and dashboards, based initially on internal data but increasingly enhanced with external sources: customer tracking, public sources provided by governments in open data formats, or commercial partners collecting data through users interacting with their systems (think of the data Google has available). This will permit us not only to analyse the current population, but also to extrapolate to larger demographics, exhibiting trends and highlighting fundamental needs.
Further enrichment of information, and explicit or implicit interaction with information and data, will reveal further information and insights.
The use of multimedia content mixed with more traditional textual content will certainly impact education. Enriching the learning experience with richer media appeals to all learners: it permits interaction that calls upon various learning styles and predispositions. Digital interfaces make it possible to mix audio, video and gaming elements into educational and informative content. Why not include interactive graphics with 3D aspects, permitting the information consumer to navigate through the elements in any chosen way?
On a lower, more granular level, working with truly digital content also permits adding value, for instance through index elements. In a document environment, indexes or metadata were added externally to the documents submitted. A dedicated document index could be added, but it would only function within the context of that document, as we are used to in books. Natively digital content, text larded with index and index-boundary markers, can be exposed across larger collections and used as a navigation instrument, adding an extra reading experience. Enriched with search and semantic web possibilities, it will increase usability and potentially interact with personal context, terminological preferences and frames of reference. The same can be applied to classification engines, working at paragraph level rather than at document level.
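A minimal sketch of that last idea, reusing the inverted-index mechanism from the search section but with (document, paragraph) pairs as postings; the content is invented:

```python
from collections import defaultdict

# Invented documents; paragraphs are separated by blank lines.
documents = {
    "handbook": "Travel must be approved in advance.\n\nExpenses are reimbursed monthly.",
    "faq": "Reimbursement requires original receipts.",
}

# Index at paragraph level: term -> set of (doc_id, paragraph_number).
index = defaultdict(set)
for doc_id, text in documents.items():
    for par_no, paragraph in enumerate(text.split("\n\n")):
        for term in paragraph.lower().split():
            index[term.strip(".,")].add((doc_id, par_no))

# A hit now points straight at the relevant paragraph, not just the file.
print(sorted(index["reimbursed"] | index["reimbursement"]))
# -> [('faq', 0), ('handbook', 1)]
```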
Traditional documents will disappear, maybe not in the short term, but certainly those designed to support administrative processes by capturing data. Personal data will almost certainly become a system commodity, provided by the individual devices we use daily to communicate, to go online and to interact as social beings regardless of where others are located.