# Weaving Analytics for Effective Decision Making

## Dedication

Dedicated to our Parents

## Foreword

With more than three decades of experience in academia, along with work for large corporations on developing data-driven corporate decisions, Dr Arindam and Dr Tanushri have brought together the unique combination of academics and practice in this guidebook, which will help corporate decision makers navigate the complex and ever-evolving maze of analytics. While much of what is today called data science perhaps germinated in academia, the last decade has seen an explosion of its use in business decision making. Equally, there has been significant growth in ‘techniques’ and technology for accessing and analysing data, pushing businesses to commit scarce corporate resources, both money and people, in the pursuit of better decisions.

Today's business leaders have multiple approaches and sources to provide them with the insights needed to improve decision making; however, there is a high probability of getting trapped in the ‘how’, or the technique of developing the insights, rather than the ‘what’, or prioritizing the insight that drives the right decision. This guidebook clarifies the landscape and provides a good control-tower view of business decision making using data-driven insights.

By arranging the chapters to replicate typical decision processes in organizations, the authors have used many live examples to illustrate and bring to life the practical aspects and issues that need to be addressed to integrate a data-led decision-making process with ‘gut feel’ in the board room. The authors have equally focused on one-time projects versus building long-term skill and sustaining capability in the organization to help harness a robust decision-making process.

Finally, this guidebook is for decision makers looking to build a sustainable capability in data science to complement their decision-making process. It will also be very helpful for those who have already invested in analytical ability and wish to revisit and reprioritize their future investments to help make key decisions that drive business.

I invite the reader (be it a business leader or an analyst) to actively refer to this book while grappling with the challenges of data-driven business problem-solving initiatives. While it serves as a good complement to manuscripts that focus essentially on analytic techniques, it also buttresses the efficacy of analytic initiatives by remaining primarily focused on business objectives.

Partner, A.T. Kearney, Singapore

## Preface

At the outset, let us state that this book is written for decision makers who would like to invest in Analytics as a support for their work. We are not going to explain the virtues of Analytics. It is assumed that the reader is already convinced about its worth.

The book will also help data scientists and other analysts who work with large databases, but would like to reorient their work to better serve business decision-making activities. This is an important dimension of Analytics that does not get much prominence and therefore is the primary motivation for us to write this book. Based on our own experience, let us also try to describe why this theme is important in the current context in India.

Our ‘brush’ with Analytics now spans over 25 years. It began with model building using large-scale data in the retail space in the United States in the early nineties, using SAS and other contemporary software tools. At that time, no one really called this function Analytics; the nomenclature came much later.

While we thrived on the opportunities in information sciences and analytics that came our way, little did we realize that the skills we learnt back then to solve pertinent business problems would someday be hailed as the ‘sexiest’ profession (a quote from the Harvard Business Review, October 2012 issue). At least, when some of us went through the grind many years ago, it did not look so exciting. It was more like an essential component of managerial decision making. What was never envisaged, back then, was that it had the potential to become the next hot spot in career options for a youngster (as it is currently pitched). In fact, at the turn of the century, India as a market was still not ready for this type of expertise. We have personally faced situations where industry leaders frowned upon this function as something not too critical for Indian business operations. What, then, created this ‘brouhaha’ in the corporate world that led to a dramatic change in the fortunes of professionals in this trade?

For one, at the turn of the century, enterprises in the developed economies started selectively shifting processes to the developing world to overcome the disadvantage of operating in a high labour-cost environment. As a result, technologically sophisticated processes moved to geographies which provided abundant ‘techno-trained’ and inexpensive resources, and analytics was one such process.

The influx of new, technologically advanced processes naturally led to heightened interest in this new function across the domestic corporate landscape. At the same time, the entry of analytically skilled labour and talent from across borders led to an overall escalation in curiosity about its advantages for businesses. From the middle of the first decade of the 21st century, Analytics took rapid steps to tease the minds of decision makers in India. Increased competition in the domestic market further helped spur interest in Analytics.

Interestingly, Analytics has taken a skewed turn in India, given that parts of the function were selectively imported with the focus on the technological aspects of the domain. Data science was the centrepiece, although the original business model of organizations in the Western world did not make these sub-parts of Analytics separable entities. In India, however, Analytics processes quickly focused on the technical elements of analytics, namely, modelling and predictive power.

This trend should naturally perplex business decision makers, who manage businesses and do not necessarily want to manage specialized processes without seeing a connection between the process output and its impact on business. A professional acquaintance who has managed the marketing intelligence function for large consumer organizations in the United States for years quipped:

> It is the political divide between the East Coast-based consumer products companies and the West Coast-based technology companies. Analytics, in the East Coast (companies) is more evolutionary and centred around business problem-solving using data. However, the technology companies on the West Coast have focused more on the technology platforms and data science and touted them as the drivers of business Analytics.

India may have been exposed more to the ideas emanating from the technology-based companies in recent times. While this may be a sweeping generalization, it is still worth a thought.

The focus of this book, as stated earlier, is squarely on the dilemma faced by the decision maker of an Indian business. It steers clear of the objective of promoting data science; instead, it provides guidance on how business leaders should use analytics as a catalyst to solve business problems. The emphasis is on solving business problems, not on the technology.

Hopefully, it will provide guidance to decision makers on how to evaluate investments in an analytics project (or process) for their organization and what may be an effective way to adopt such expertise over time, with an unflinching focus on business impact. At the same time, it will complement the role of data science.

This unique positioning of the book should add value to learning about the practice of Analytics and hopefully be a useful reference to decision makers as well as analysts. For the latter, it should provide helpful tips to fine-tune their analytical output to make it more user-friendly for businesses.

The book is organized into two distinct parts. Part I (Introduction to Chapter 5) provides an approach to decision makers on how to build an effective analytics process. It also provides an exposure to the various kinds of Analytical methods and infrastructure that are used in practice.

The Introduction specifically highlights the dilemma of many business leaders about how they should initiate and direct their organization's analytic capabilities. Chapters 1–4 provide some guidance on how to resolve such problems.

Chapter 5 provides a view on how to evaluate infrastructure to support the analytics process. While the theme of the book remains focussed on providing business leaders with an approach to making their analyses more meaningful, no one can shy away from the challenge of upscaling this initiative, once the concept is proven. We discuss the challenge of scaling up analytics through building appropriate infrastructure and its impact on the organization culture in this chapter.

Part II is a commentary on the analytics landscape as it stands today, with a focus on India. Chapter 6 provides insights into the real challenges that organizations are facing in ramping up productivity. This is based on our research study, the perspectives of various analytics managers and our findings from in-depth studies of a few organizations on the state of development of analytics practices in India. It provides insights on the evolution, potential challenges and opportunities that Indian organizations face as they develop their internal analytics prowess for sustaining competitive advantage.

We hope you will find the book useful.

## Acknowledgements

Both of us would like to acknowledge the immense contribution made by our respective employers (IIM Ahmedabad and Pandit Deendayal Petroleum University) in providing a very productive academic environment for us, to (a) interact with enthusiastic students and business professionals, (b) research on topical issues faced in the Analytics domain and (c) facilitate documenting our ideas and thoughts on developing new approaches to the practice. It would not have been possible to get this manuscript ready without the support of this healthy collegial environment.

We would also like to thank our academic colleagues for their constant support and feedback in numerous internal forums which have sharpened our thinking around the content of this book. To the participants of various executive programs on Analytics that we have conducted, a special thank you for the diverse ideas that sprang up from them during the course of some very spirited discussions.

Finally, we appreciate the encouragement provided by our parents and siblings. Without their support and motivation over time, this project could not have been accomplished. The enthusiasm and affection of our children, Antara and Dhruv, led us to stay focused and devote time to conceive this manuscript.

This book is partially supported by a grant from the Research and Publication Office of IIM Ahmedabad.

## Postscript

Our objective in writing this book was primarily to address issues faced by decision makers (mainly in emerging economies like India) while dealing with a relatively new organizational process called Analytics. While the topic has created enormous interest in the practicing world, we firmly believe that most of the insights being developed in the field are focussed on the role of the analysts, and little, if anything, is being developed that would be insightful to (a) a decision maker, (b) a consumer of insights and (c) an investor in the analytics process.

Our attempt was to fill this gap in the literature by identifying the focus areas for an organizational leader in driving effectiveness in an analytics function. The first five chapters and the Introduction deal with these issues and with approaches to address some of them. The sixth chapter is solely focussed on compiling the voice of industry experts, their priorities and concerns.

In conclusion, we want to bust some common myths regarding Analytics:

Myth # 1: Analytics is about a Technology (Platform)-Based Capability in Organizations

Nothing can be farther from the truth. Technology is at best a facilitator of effective Analytics in organizations. The process has to be guided, driven and evaluated by business managers continuously. The larger role of technology (automation) comes much later, when the process becomes standardized and need not depend as much upon human intervention.

Myth # 2: Analytics is about Data Science

Here's another fable worth dismissing at the earliest. True, data science (computer-based modelling and statistical analysis) is a facilitator, and a good one in certain domains with large databases; however, like automation, data science expertise is not the uber-solution for all the Analytics requirements of organizations.

Myth # 3: Analytics should be a Specialized, Stand-Alone Capability in Organizations

This is, again, not generalizable across organizations. At different stages of evolution, Analytics may be ‘embedded’ in a business or a ‘specialized’ stand-alone function. For instance, in the early stage of evolution, embedded functions may lead to better appreciation of the value of Analytics by the ultimate user community in the organization.

Also, the nature of some businesses may require continuous interfacing with analytic teams; in such cases, embedded structures are more effective. The pursuit of new innovations and ‘better’ models, in which technology plays a greater role, may trigger the formation of specialized Analytic units manned by qualified data scientists.

Myth # 4: Analytics ‘Substitutes’ for Business Acumen

Surely it does not. However, it may help better decision making by providing consistent findings from data. It can help in validating hunches and refuting subjective claims with evidence from data. The necessary condition is that appropriate data should be available to support such processes.

We hope the contents of this book have accomplished the above. To summarize, leadership in organizations involved with developing internal capabilities in Analytics may focus squarely on the following:

• Mapping available data resources in the organization to their potential utility in supporting key business decisions.
• Developing the analysis framework (plan) that is needed to convert these important data into useful information that feeds into decision making.
• Creating an evaluation criterion to measure the benefit of the information (analysis) on business performance/decision making.
• Building effective communication skills in its analytics professionals to project the benefits of the process output in a form relevant for decision makers.
• Assessing the appropriateness of the existing (prospective) analytic infrastructure to facilitate the above.
• Steering clear of infrastructure investment decisions that bypass the planning steps mentioned above. Many organizations have made this error in judgement.

We wanted to emphasize the difference between the process-management approach to Analytics and the outcome/objective-management approach. Our book focuses on the issues related to monitoring the outcomes and benefits obtained from running an Analytics operation with an eye to improving business performance. This distinction has been brought out repeatedly in various parts of the book.

Lastly, we hope that the reader is able to get a mix of both directive-oriented knowledge and the perspectives of varied stakeholders in this domain. Hopefully, in the process, we are able to achieve a unique position for this book amid the slew of technical books available on this subject matter.

## Appendix 1

One summer afternoon in July 1995, Terry West sat in his small office in suburban Rye, New York, thinking about his financial success in the four years since he had launched Railroad Cleaning Service (RCS). Business looked good, with sales touching almost $2 million annually. With no signs of significant direct competition in the market, Terry was optimistic of yet another year of good fortune, with a high growth rate of around 45 percent. He wondered why no one else was venturing into his business model, given that it had made such an impact in the marketplace. He just could not believe his good luck.

The Commuting New Yorker

Terry finished his bachelor's in Actuarial Sciences from Tulane University in New Orleans and headed to New York City (NYC) with the dream of striking gold in the field of business and commerce in the big city. It was the fall of 1988 when, armed with a $32,000 per annum job in NYC, Terry took up quarters in the city's northern suburbs in Greenwich, CT.

Each workday Terry would be up by about 6:30 am, and would barely have time to make a cup of coffee, grab his business suit and somehow find his way into it, only to hurriedly head out of the front door of his small apartment to catch the 7:10 Stamford local train on NYC's Metro-North rail line. The Greenwich train station was thankfully just around the corner from his apartment block and reaching it just in time to catch the train was usually not a problem. The Stamford local train would disgorge its passengers at NYC's main terminus, the Grand Central at 8:25 am, just enough time for Terry to hail a taxi to his place of work by 8:45 am. Terry worked at a small insurance firm in mid-town Manhattan, and his boss did not like him being late to work. This was, after all, Terry's first job after school and he meant to keep it to save enough money for Graduate School.

Work and commuting to and from office consumed most of Terry's life for the next 4–5 years. Most work days, Terry would return home at about 9:00 pm, spent by the day's commute and would have just enough stamina to grab a frozen dinner from the refrigerator and put it on the gas grill to broil. That would be his dinner.

Weekends were more relaxed and primarily meant for catching up on sleep. Although the week had a punishing schedule, weekends too would be consumed in getting the laundry done, fixing the house and yard and attending to the countless chores that suburban living imposed upon New Yorkers.

To sum up, the life of a typical suburban New Yorker did not exactly sound very exciting. More so for a young, single male like Terry West trying to make a future in the ‘Big Apple’.

The Idea

Over the years that Terry used the Metro train service to commute to New York, he had lots of time to ponder during the hour-and-fifteen-minute ride to and from NYC. Like him, countless New Yorkers living in suburban Westchester County, the eastern counties of New Jersey bordering Manhattan and Long Island travelled to and from work, spending on average 3 hours travelling each day. That amounted to an average of 15 hours of travel time each week. With over 4.5 million commuters travelling to Manhattan from various locations in Long Island (Long Island Railroad), Westchester County (Metro-North Railroad) and New Jersey (NJ Transit), there were many whose life was short of one critical thing: time. Like Terry, they spent way too much time travelling and hence, once back home, had too little time for their daily personal chores.

One day in June 1991, while travelling to work on the Stamford Local, Terry finally made up his mind. He decided to quit his job in the insurance firm as well as his dream of becoming a successful business executive and launched his company—Railroad Cleaning Service.

With a $2500 investment from his bank savings account, Terry rented space at the Greenwich railroad station from the Metro-North Company and set up a kiosk. Every morning from 6:45 am, Terry would man his kiosk, which served as a drop-off counter for dirty shirts. Commuters travelling to New York would drop off their shirts to be taken care of by Terry's laundry service. The kiosk would remain open until 9:30 am for people to drop off their laundry. By mid-morning, Terry would cart the entire lot to the local laundry facility in Greenwich to be washed, cleaned and ironed. The lot would be ready by 4:00 pm for Terry to return to his kiosk at the station, just in time for the start of the evening rush-hour traffic from NYC. Normally the kiosk would remain open for laundry pick-up until 9:00 pm to cater to the late commuters from the city. Business was uncertain at first, but picked up quite steadily after the first 15 days. In fact, Terry had to hire a temporary helper within the first month to cope with the growing demand. From a regular customer: ‘This is very much wanted around here… I mean,… who's got the time to get the laundry done at the end of the day… and I don't have enough shirts to get me through the week,… this is what was required… just great!!!’ The premium charged by Terry for his service ranged from 10 percent to as much as 40 percent over the charges at a regular Laundromat; prices were higher for delicate items. Commuters did not mind paying up for the service, especially since it reduced the aggravation of getting one's cleaning done at the end of the day or on weekends. Within a few months of operations, Terry was doing about $1500 of business a day. Initially, he accepted men's shirts for laundry, but eventually he began accepting women's dresses too.
Soon he realized that his beat-up Chevrolet sedan was becoming too small to cart the laundry over to the local Laundromat, so he leased a pick-up truck to take the pressure off his sedan.

A year from the start of operations, Terry had bought out the Laundromat, by way of backward integration of his business. He had hired three permanent helpers to man his kiosks and had also opened two new pick-up kiosks at Stamford, CT and Rye, NY to cater to additional customers. He was also eyeing the Long Island Railroad system to expand his business into other routes in the New York suburban transport system. He realized that the potential of his idea was enormous and that he had to move quickly to capture the potential market before anyone else could copy his idea.

By March 1995, Railroad Cleaning Service was a $3 million turnover company, employing 45 full-time employees and operating at 15 railroad stations across Westchester County, Long Island and New Jersey. Competition had been slow to get in, and Terry had been effectively deterring entry by opening kiosks at high-density stations that were at least 30 miles from Grand Central station. In July 1994, a similar service sprang up at Trenton station on the NJ Transit route.

Terry could not fulfil his dream of becoming a business whiz kid, but he surely had the business acumen to generate profits out of unusual business ideas.

Problem to be addressed in this situation:

If you are a potential competitor of Terry West, how would you plan a research project to evaluate the option of entering the market?

## Appendix 2

ABV Tyre Company (Case)

ABV is a tyre manufacturing company primarily selling two-wheeler tyres to industrial buyers (like Bajaj, LML, Hero Honda, etc.). The company has a non-existent presence in the replacement market, with minimal brand recall in the marketplace. It also wanted to protect its domination of the industrial market, fearing that low brand recall in the replacement market would someday erode its industrial market share. The company wanted to develop a growth strategy to expand its replacement market share.

The company's previous experiences with vehicle intermediates (components of two-wheelers) indicated that pull strategies to develop markets would be difficult because of low levels of customer involvement. But tyres, being more visible, could be among the ‘higher’ involvement intermediates and hence subject to some evaluation of quality by the buyer. The types of influences (and influencers) on decision making, as well as involvement levels, may also differ across two-wheeler segments (bikes and scooters). See Figure A2.1.

A structure of the customer decision-making process is given in Figure A2.1.

Tyre manufacturers have used various strategic initiatives to reach the target groups. Dealers are targeted through direct communication, the sales process, service and policy schemes. Mechanics/influencers are reached through media positioning and direct schemes. Positioning, mass media and PR are used to interact with end users.

Figure A2.1 Influencers in the Buying Process

In the overall tyre market, ABV has a minuscule share, since it specializes in two-wheeler tyres only (and concentrates on the industrial market). The big players in the market, like MRF and Ceat, draw their strength from being multi-vehicle tyre manufacturers spanning truck, car and two-wheeler tyres. ABV is at a disadvantage retailing two-wheeler tyres in the replacement market, where its formidable competitors use the leverage of a complete product line. ABV's strength lies in its leading presence in the industrial buyer segment of the two-wheeler tyre market; hence, it has an opportunity to cater to a significant number of buyers of new two-wheelers.

Past Attempts at Understanding Customers

Past research by ABV on consumer preferences has been rather sketchy, and market information has been collected on an ad hoc basis. There is no evidence of any extensive study conducted on consumers, and most of the theories going around the company are based on hunches and ‘gut feel’.

In summary, the marketing department did not have a consistent, researched understanding of what drove tyre sales and how customers went about buying two-wheeler tyres. Some probing by a set of external consultants revealed that marketing and sales had different versions of which product attributes were important to the customer.

It should also be noted that ABV has a state-of-the-art manufacturing facility, which includes a modern tyre-testing lab. Most of its tyres compare well vis-à-vis competitors' tyres in lab tests and road tests conducted by ABV. The quality of a tyre is judged on its wear rate, which is measured with a calibration device after certain prespecified usage.

Market Coverage

ABV Tyres has sales operations in the North, West and South zones. Export sales are close to 10 percent of total sales. The central office and factory are located in Coimbatore, Tamil Nadu. ABV is affiliated to a large south-based auto-ancillary distribution network. This affiliation provides the advantage of cross-selling with other ancillaries as well as the potential for better utilization of the common distribution network.

To formulate a growth strategy, ABV felt the need to understand:

• the relative impact of customers and influencers on sales, that is, what drives each type of customer to buy a brand and each type of influencer to push a brand,
• what cues signal a good tyre and a good tyre company, and the gaps in the strengths/weaknesses of its brand vis-à-vis competition, both in the OE (original equipment) vehicle-buying segment and the replacement segment.

Problem for resolution: Identify the ‘true’ problem that needs resolution and prepare an approach to solve it.

## Appendix 3

Marketing Mix Modelling

Customer tracking services, such as those maintained by large research agencies like AC Nielsen and IRI in the United States, provide information not only about consumer attitudes but also about actual behaviour on an ongoing basis. Behavioural data are considered more useful for developing strategy, since they reflect directly on business performance, unlike conventional slice-in-time customer surveys. This has led to the emergence of marketing research techniques designed for planning future marketing initiatives, as against the more traditional role of merely reporting ‘nice-to-know’ customer reactions after the fact.

Prediction modelling became a widely used methodology to calibrate the marketing mix to positively impact customer response. This trend became popular specifically in the consumer packaged goods (CPG) industry in the United States, with large organisations such as Philip Morris, Coke and Pepsi adopting these models for resolving both strategic and more tactical decisions.

A very widely used managerial decision support system (DSS) based on prediction modelling of customers' transaction-level data is the volume (market share) decomposition analysis. It has evolved, over a period, into a standard diagnostic tool for marketing managers in developed markets to assess the effectiveness of their marketing programmes. It also helps the manager evaluate the attractiveness of alternative marketing strategies and is therefore an effective aid in decision making. The analysis provides a logical basis for the manager to compute the differential impact of a firm's advertising and sales strategies, vis-à-vis its competitors, on the market share of its brands. The total sales of the firm's brand are decomposed into base sales (shown as the grey area in Figure A3.1) and incremental sales (shown as the dotted area in Figure A3.1). Base sales are driven by the long-term equity of the firm and reflect its past decisions vis-à-vis its competitors. Incremental sales, in contrast, are influenced by the short-term marketing activity (tactics) of the firm as well as of its competitors, and help managers evaluate the effectiveness of various tactics.

Figure A3.1 Decomposition of Volume into Base and Incremental Components
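The base-versus-incremental split can be sketched numerically. The snippet below is a minimal illustration, not the actual DSS: it assumes a simple multiplicative (log-linear) sales model with made-up coefficients, and backs out base sales as what the model would predict with all short-term tactics switched off.

```python
import math

# Illustrative coefficients, assumed already estimated from a log-linear
# regression of sales on marketing variables (made-up values, not from the book).
BETA_DISCOUNT = 0.04   # sales lift per percentage point of price discount
BETA_DISPLAY = 0.30    # sales lift when an in-store display is running

def decompose_sales(actual_sales, discount_pct, display_on):
    """Split observed sales into base and incremental components.

    Base = what the model predicts with all short-term tactics switched off;
    incremental = the remainder, attributable to the current tactics.
    """
    lift = math.exp(BETA_DISCOUNT * discount_pct + BETA_DISPLAY * int(display_on))
    base = actual_sales / lift            # back out the no-tactics level
    incremental = actual_sales - base     # volume driven by current tactics
    return base, incremental

# A promoted week: 10 percent discount plus a display.
base, incremental = decompose_sales(1400.0, discount_pct=10, display_on=True)
```

By construction, base plus incremental reproduces the observed volume; the real modelling effort lies entirely in estimating credible coefficients from transaction data.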

The DSS uses the β-coefficients (average impact on sales) of each marketing programme, estimated from a sophisticated choice model using statistical techniques such as multinomial logit and regression, to decompose the total brand market share into individual components that are directly attributable to specific marketing activities (price reductions, promotion packs, freebies, etc.). The consumer choice model is an integral part of developing such a DSS and is extensively used in marketing research in developed markets to identify the real drivers of market performance. Ideally, a choice model requires as input customer databases covering demographic attributes (age, education, etc.), psychometric attributes (attitudes, etc.) and marketing mix variables (price, promotion, advertising, display, etc.) of all competing brands in a product category, over a large number of purchase occasions, from a representative customer sample. Customer tracking services in the developed markets have developed expertise in collecting and managing these types of data on a continuous basis. Depending on the richness of the available data (measured in terms of the number of customer-related and market-related variables collected), the incremental sales in Figure A3.1 can be further decomposed to find the incremental sales due to price, promotion, trade discounts and short-term advertising or image-building efforts of the firm (refer to Figure A3.2). Such decomposition of volume/share into component shares can help managers objectively identify the causes of gains/losses in market share and segregate successful strategies from the rest.

Figure A3.2 ‘Due to’ Analysis using Marketing Mix Modelling
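The ‘due to’ logic can be illustrated with a toy multinomial logit model. The brands, coefficients and utility values below are invented for illustration; the idea is simply that removing one activity from a brand's utility and recomputing the softmax shares isolates the share attributable to that activity.

```python
import math

def logit_shares(utilities):
    """Multinomial-logit (softmax) market shares from brand utilities."""
    expu = {brand: math.exp(u) for brand, u in utilities.items()}
    total = sum(expu.values())
    return {brand: v / total for brand, v in expu.items()}

def utility(equity, discount_pct, on_display, b_disc=0.05, b_disp=0.4):
    """Brand utility: long-term equity plus current-tactic effects (toy values)."""
    return equity + b_disc * discount_pct + b_disp * int(on_display)

# Scenario: brand A runs a 10 percent discount and a display; B and C do not.
full = logit_shares({
    "A": utility(1.0, 10, True),
    "B": utility(1.2, 0, False),
    "C": utility(0.8, 0, False),
})

# Share 'due to' the display: recompute with only A's display removed.
no_display = logit_shares({
    "A": utility(1.0, 10, False),
    "B": utility(1.2, 0, False),
    "C": utility(0.8, 0, False),
})
share_due_to_display = full["A"] - no_display["A"]
```

Repeating the counterfactual for each tactic in turn yields the component shares pictured in Figure A3.2.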

Moreover, it helps in diagnosing the effect of actions taken by competitors in the same period. The development of such planning tools has had an enormous impact in terms of fine-tuning strategic and tactical planning activities in the CPG sector in the United States.

A conceptually similar DSS has been developed recently at the Indian Institute of Management (IIM), Ahmedabad. The system is unique since it is built on consumer panel data available in the Indian market environment. The inputs into this DSS are information from a consumer panel maintained by a large research agency which tracks consumer purchases, retail audit information from Nielsen which provides pricing and promotional data, and consumers' attitudinal information regarding brands collected from an ongoing survey-based panel. The data was made available for this pilot project by a marketing organisation in the FMCG sector. The objective of the pilot was to develop a decision model that would help marketing managers diagnose the impact of brand-building initiatives vis-à-vis field-level selling initiatives on the overall performance of the brand (market share). It is purported that this is the first step towards building a powerful diagnostic as well as prognostic tool for Indian managers.

The project replicated the steps described earlier with regards to the estimation of a consumer choice model and subsequent decomposition of the share into parts attributable to each marketing mix element. At a preliminary step, this system can help managers decompose the total market share to base level share and incremental share (refer Figure A3.3).

The incremental share is largely driven by the short-term sales efforts of the firm. The ability to decompose total share into components that can be specifically attributed to every marketing tactic used is directly related to the amount of detail captured in the input data regarding the specific marketing activities initiated in the market place. Given the limitations in the scope of data collection in the Indian market, it is not possible to execute the volume decomposition at the level of granularity obtained with similar types of data collected from the United States. It is anticipated that with improvements in collection techniques and an increase in demand for more detailed records of market-level activities, the outputs can be significantly improved.

Figure A3.3 Decomposition of Market Share in Indian Context Based on Model
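The decomposition into base and incremental share can be sketched numerically. A minimal example follows; all figures are hypothetical and the activity labels are illustrative, not taken from the study described above.

```python
# A minimal numeric sketch (all figures hypothetical) of decomposing a
# brand's total market share into a base share and increments attributed
# to specific marketing activities, as in the 'due to' analysis above.
total_share = 24.0  # total market share (%)

# increments that a choice model's beta-coefficients might attribute
# to each activity (hypothetical values)
increments = {
    "price reduction": 2.5,
    "promotion packs": 1.5,
    "trade discounts": 1.0,
    "short-term advertising": 1.0,
}

incremental_share = sum(increments.values())
base_share = total_share - incremental_share

print(f"Base share: {base_share:.1f}%")
for activity, share in increments.items():
    print(f"  due to {activity}: {share:.1f}%")
print(f"Total: {base_share + incremental_share:.1f}%")
```

The component shares always add back to the total, which is what makes the 'due to' view auditable for managers.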

Managerial Uses of Volume Decomposition Methodology

As a diagnostic tool, the DSS can help managers evaluate their periodic investments in various marketing activities vis-à-vis the corresponding performance (volume or market share) attributable to them. Specifically, it helps identify the relative importance of advertising and sales efforts in achieving the ultimate sales. Table A3.1 illustrates an example in which annual expenses incurred in various types of marketing activities are compared with the volume/market share of the brand attributable to the specific activity. This provides a powerful basis to evaluate marketing activities on an efficiency measure.

Table A3.1 Contribution Versus Efficiency

It can also foster more efficient resource allocation for the future by identifying effective versus ineffective market development initiatives. As competitive marketing environments become too complex for the manager to evaluate at a holistic level, the model output can provide enough flexibility for the manager to compare alternative strategies after accounting for the complexities in market dynamics. The scenario builder (Figure A3.4) developed on the basis of the model output provides insights about the probable outcomes of alternative marketing initiatives, helping managers get a ‘feel’ for what may drive improved performance of their brands. This tool can also evaluate possible competitive reactions to changes in one's own marketing policies. The net result of all changes is portrayed in terms of the likely market share of various brands.

Volume (share) decomposition models and their applications have fairly widespread use in the brand and sales management activities of CPG companies in the United States. At a generic level, these applications help managers improve the consistency of their decisions by acting as a ‘bouncing board’ to validate their own subjective assessments based on their field experience. More specifically, they assist in developing insights about market dynamics in managerial environments where such intuition is lacking. There is, however, a danger of relying too much on the market share forecasts computed by the models when making decisions for the future. Managers must exercise caution while drawing inferences based on the model output, since the constraints imposed by the standard techniques of developing mathematical models limit, to some extent, their ‘representativeness’ of real market conditions.

Figure A3.4 Scenario Builder

The simplicity of interpretation of volume/share decomposition is the primary reason for its widespread use as a standard tool for making brand and sales management decisions in the United States. Our experience at IIM Ahmedabad in building such models using Indian data has provided some very encouraging results towards developing user-friendly decision support tools.

The relevance of these models in the Indian context is beyond doubt. A significant contribution in marketing diagnostics is possible using these models to attribute good/bad performance to specific functional initiatives such as brand-building programmes or ground-level sales development activities. While it is easy to point out some negative ramifications of such management reporting, such as it being used to identify ‘scapegoats’ in the organisation to account for poor performance, it is obvious that the utility of this tool is enormous if used meaningfully to invest in the appropriate market-building activities in the long run. Figure A3.3 clearly depicts a real-life example of the sales function acting as support to the more significant brand-building activities (viewed in terms of the proportion of share attributable to each activity group). Our experience shows that this would be true for many product categories in the Indian market where opportunities for market growth and product differentiation are still significant. In developed markets such as the United States, many product categories exhibit just the opposite characteristic. Market maturity reduces differentiation across brands, and the larger proportion of the market share of leading brands is vulnerable to competitive selling pressures. This fact highlights the predominant role of brand equity building in the Indian context compared to more myopic sales promotional strategies.

Market share decomposition models can be developed for any appropriate geographic market definition—at the country level, a region-specific or city-specific level, or even a city-part specific level. The market definition for model development is primarily driven by appropriateness (accounting for the heterogeneity of consumers across various markets, and also the varied marketing programmes run in various geographic territories) and the availability of adequate consumer data at the various defined market levels. If the richness of the data source is adequate, such models can be developed for various customer segment levels (as opposed to geographic markets) to evaluate the effectiveness of marketing programmes across various demographic and psychographic segments.

A critical barrier to large-scale usage of these models in India is the non-availability of adequate resources, such as a detailed customer database. Managers have expressed interest in such analytical processes and confirm the usefulness of the outputs emanating from such models. However, most organisations lack the resources to build large-scale customer databases on their own to initiate such modelling ventures. This is an opportunity for a consortium of like-minded managers across firms to organise and develop a syndicated customer data service for initiating such prognostic marketing research activities.

## Appendix 3A

Note on Regression Models

Ordinary Least Squares (OLS) regression models are among the most widely used prediction models in practice. The basic requirement for developing these models is an outcome variable measured on a continuous scale (interval or ratio), along with some relevant predictor (exogenous) variables, also usually measured on a continuous scale.

The model-building process attempts to fit a linear additive function (we assume this mathematical form) of the predictor variables to the outcome variable. In the process, the weights (coefficients) of the predictor variables are determined in such a way that the summation of the variables (adjusted by their weights) has a value closest to the value of the outcome variable (the best fit equation).

The closer the fit to the actual data (the higher the r-square), the better the chance that the equation will predict outcomes from the values of the input (predictor) variables. However, there is no guarantee that the models will continue to predict well, unless the nature of the data remains largely the same.

The standardized coefficient (the magnitude) signifies the importance of the variable in determining the value of the outcome. The sign (+ve or –ve) determines the nature of the relationship between the outcome and the variable. For instance, the weight associated with the price variable in determining sales will normally have a negative value, signifying an opposite relationship between price and sales (see Figure A3A.1).

Figure A3A.1 Associations in Regression
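A small sketch, on entirely synthetic data, of fitting such a linear additive function and reading off the coefficient signs; the variable names and figures are our own illustrative assumptions.

```python
# An illustrative OLS fit on synthetic data: sales explained by price
# and advertising. The recovered price coefficient is negative,
# mirroring the inverse price-sales relationship discussed above.
import numpy as np

rng = np.random.default_rng(0)
n = 200
price = rng.uniform(20, 40, n)
advertising = rng.uniform(0, 10, n)
sales = 500 - 8.0 * price + 12.0 * advertising + rng.normal(0, 10, n)

# Design matrix with an intercept column; least squares finds the weights
X = np.column_stack([np.ones(n), price, advertising])
coef, *_ = np.linalg.lstsq(X, sales, rcond=None)
intercept, b_price, b_adv = coef

fitted = X @ coef
r_square = 1 - np.sum((sales - fitted) ** 2) / np.sum((sales - sales.mean()) ** 2)

print(f"price coefficient:       {b_price:.2f} (negative, as expected)")
print(f"advertising coefficient: {b_adv:.2f}")
print(f"r-square:                {r_square:.3f}")
```

The fitted weights recover the opposite relationship between price and sales, and the r-square summarizes how close the fit is to the actual data.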

The model, which is developed on a training sample, is tested across multiple validation samples to check the reliability of its predictive power before it is approved for use.

Predictive power can be increased in the training sample by using more complex mathematical functions instead of a simple additive linear regression model. Non-linear functions, piece-wise functional forms of different orders and functions incorporating discontinuities are various ways to customize the prediction model given the nature of the distribution of the outcome variable in the training sample. However, as with separation models (logistic models), too much customization may not be useful from a reliability-of-prediction perspective, since many such models are not seen to perform up to the mark in independent validation exercises. Hence, the seasoned practitioner's tussle is to match the customization of the model to the need for reliable predictability. This exercise is usually context driven and depends on the nature of the data that is used to build and validate models.
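This tussle can be illustrated with a toy comparison, on synthetic data of our own invention, between a simple linear fit and a more flexible polynomial: the flexible model always fits the training sample at least as well, but that gain need not carry over to an independent validation sample.

```python
# A sketch of the customization-versus-reliability tussle: a flexible
# polynomial fits the training sample better than a straight line, but
# the gain need not carry over to a validation sample. Synthetic data.
import numpy as np

rng = np.random.default_rng(1)

def make_sample(n):
    x = rng.uniform(0, 10, n)
    y = 3.0 * x + 5.0 + rng.normal(0, 4.0, n)  # the true process is linear
    return x, y

x_train, y_train = make_sample(30)
x_valid, y_valid = make_sample(30)

def fit_and_score(degree):
    """Fit a polynomial on the training sample; return train/validation MSE."""
    coef = np.polyfit(x_train, y_train, degree)
    mse_train = np.mean((np.polyval(coef, x_train) - y_train) ** 2)
    mse_valid = np.mean((np.polyval(coef, x_valid) - y_valid) ** 2)
    return mse_train, mse_valid

results = {degree: fit_and_score(degree) for degree in (1, 6)}
for degree, (mse_tr, mse_va) in results.items():
    print(f"degree {degree}: train MSE {mse_tr:.2f}, validation MSE {mse_va:.2f}")
```

Comparing the two rows of output makes the point: the in-sample advantage of the customized model is not, by itself, evidence of reliable predictability.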

Predictive models are relatively easy to specify compared to diagnostic models that identify associations between profile variables and the outcome. In the latter, the identification of a logical association is more important (without dismissing the importance of predictability altogether). Reliable prediction is not the main aim of developing these models so much as being able to identify the causes of a certain behavioural outcome.

Developing diagnostic regression models requires a curious mix of technical acumen and contextual familiarity to pick the right profile variables and to ensure that the impact of each variable is measured exactly, without the confounding due to collinearity1 in the profile variables. This is as much an art as it is a science, and it is left to the experienced analyst to take the right calls.

Linear regression models are widely discussed and literature is openly available. We would encourage the reader to seek appropriate references for a more detailed exposition of these models.

## Appendix 4

Logit Modelling: A Note

We just touched upon Regression Analysis and OLS in the previous section. Just to reiterate, regression analysis is used to build a causal relationship between a dependent variable, say sales volume, and a set of independent variables, say price, discount, advertising and store display. A causal relationship is more like a one-way influencer relationship—price or advertising changes have an impact on the changes in sales volume in the retail store. However, the reverse is not proven. Significantly, the business impact is estimated as changes in a continuous variable, either in monetary terms (total volume worth in rupees) or in the actual number of units or net weight sold (total number of kg). The dependent variable is interval scaled (loosely described as a continuous variable).

The problem becomes somewhat intriguing when the dependent variable does not have the elements of an interval scale. For instance, if instead of sales volume of a particular brand of soap we had to examine the effect of competitive measures on the consumer's proclivity towards a brand, exhibited in terms of her choice; say, if the price differential (difference in price of the two brands of the same pack size) is changed from ₹50 to ₹30, what impact does it have on the consumer's inclination to buy Dove over a competing brand, say, Fiama? (To keep the problem manageable, in our stylized environment, let us assume for now that the market has only these two brands to offer.) Of course, we can certainly incorporate other significant effects of the market that influence the choice decision, but the point that we are making here is that there is a structural change in the model from the one we were describing for volume changes. Instead of sales volume, we have a nominal variable— choice between Dove and Fiama as a dependent variable. The independent variable is also modified from price to price differential. Philosophically, the problem remains what we have solved before in regression analysis, except that the structure has changed.

How do We Build a Model to Predict Brand Choice?

While the actual outcome in our stylized scenario is either choosing Dove or Fiama (one cannot choose both in our example), the model we use to represent this zero-sum game is stochastic. The choice of Dove is denoted by the following probability:

$\mathrm{Pr}\left(Dove\right)=\left\{\mathrm{exp}\left({\text{U}}_{Dove}\right)/\left(\mathrm{exp}\left({\text{U}}_{Dove}\right)+\mathrm{exp}\left({\text{U}}_{Fiama}\right)\right)\right\}$

(exp denotes the exponential function).

‘Pr (Dove)’ is the estimated probability that the Dove brand will be chosen by the customer on a particular shopping occasion. The probability depends on the exp (Utility) of both brands. It is not very difficult to infer that the probability of choosing Dove is dependent not only on its own utility, but also on the utility provided by the competitive brand. Hence, the final outcome is a derivative of the relative perceived strength of the brands on a particular shopping occasion.

Similarly, the probability of choosing Fiama is

$\mathrm{Pr}\left(Fiama\right)=\left\{\mathrm{exp}\left({\text{U}}_{Fiama}\right)/\left(\mathrm{exp}\left({\text{U}}_{Dove}\right)+\mathrm{exp}\left({\text{U}}_{Fiama}\right)\right)\right\}$

The two probabilities add up to one, indicating that these are the only two options that the customer can execute. However, it is easy to incorporate the possibility of many other options, and also the possibility of ‘non-purchase’ in the model to make it more realistic and generalizable.
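A minimal numeric sketch of these two choice probabilities, with hypothetical utility values of our own choosing, shows that they indeed add up to one:

```python
# A minimal sketch of the binary choice probabilities defined above,
# using hypothetical utility values for the two brands.
import math

def choice_probability(u_own, u_other):
    """Pr(own brand) = exp(U_own) / (exp(U_own) + exp(U_other))."""
    return math.exp(u_own) / (math.exp(u_own) + math.exp(u_other))

u_dove, u_fiama = 1.2, 0.5  # hypothetical utilities on one shopping occasion
p_dove = choice_probability(u_dove, u_fiama)
p_fiama = choice_probability(u_fiama, u_dove)

print(f"Pr(Dove)  = {p_dove:.3f}")
print(f"Pr(Fiama) = {p_fiama:.3f}")
print(f"sum       = {p_dove + p_fiama:.3f}")  # the two probabilities add to one
```

Raising either brand's utility (for example, through a price cut) raises its probability at the direct expense of the other, which is the zero-sum structure described above.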

Now let us turn our attention to the utility function. Suffice it to say that the utility for each brand can be represented as (illustrative):

${\text{U}}_{Dove}={a}_{Dove}+{b}_{1}\times {\text{Price}}_{Dove}+{b}_{2}\times {\text{Discount}}_{Dove}+\cdots$

The utility is composed of some specific Dove-related dimensions, say product attributes (we refer to them as ‘a’), and some marketing mix variables specifically attributable to the brand. The ‘b’s are similar to the regression coefficients that one measures in OLS; to us they are more like the impact of the particular parameter on the utility, while economists would casually call it elasticity.

To complete the argument, Fiama will have its own utility function, with parameters that pertain to the brand (‘a’, price, discount and other marketing mix variables, etc.). The impact coefficients ‘b's are the same as the ones in the Dove utility function. Like in regression models, the purpose of this statistical model (also called the Logit model) is to estimate the ‘b's or the impact coefficients.

It is not very difficult to perceive that choice probability (Pr) of a particular brand is not only dependent upon its own set of parameters (marketing mix or others), but also on the alternative's (competitive) values for the same parameters. Mathematically-tuned readers will appreciate our attempt to transform the original equation (1) into the following:

$\mathrm{Pr}\left(Dove\right)=\left\{\mathrm{exp}\left({\text{U}}_{Dove}-{\text{U}}_{Fiama}\right)/\left(\mathrm{exp}\left({\text{U}}_{Dove}-{\text{U}}_{Fiama}\right)+1\right)\right\},$

where

$\left({\text{U}}_{Dove}-{\text{U}}_{Fiama}\right)=\left({a}_{Dove}-{a}_{Fiama}\right)+\sum {b}_{i}\times \left({X}_{i,Dove}-{X}_{i,Fiama}\right)$
Similarly,

$\mathrm{Pr}\left(Fiama\right)=\left\{1/\left(\mathrm{exp}\left({\text{U}}_{Dove}-{\text{U}}_{Fiama}\right)+1\right)\right\}$

Hopefully, this transformation will help appreciate the fact that choice of brand is dependent on not only what a particular brand's marketing mix is, but also on what the competition is up to (hence, the predictors are transformed as differences). After estimating the coefficients, one can use this mathematical formulation to simulate what the choice probabilities may be if one changes some parameter (within reasonable ranges) with respect to competition.
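The transformation can be verified numerically; with any hypothetical pair of utilities, the difference-based formula gives exactly the same probability as the original two-brand formulation:

```python
# A numeric check (with hypothetical utilities) that the transformed
# equation above agrees with the original two-brand formulation.
import math

u_dove, u_fiama = 1.2, 0.5  # hypothetical utilities

original = math.exp(u_dove) / (math.exp(u_dove) + math.exp(u_fiama))
diff = u_dove - u_fiama
transformed = math.exp(diff) / (math.exp(diff) + 1.0)

print(f"original:    {original:.6f}")
print(f"transformed: {transformed:.6f}")  # identical, as the algebra implies
```

This is simply dividing numerator and denominator by exp(U_Fiama), which is why only the difference in utilities matters for the choice probability.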

Recapitulating the significance of the impact coefficients (‘b's): The magnitude and direction (positive or negative) of the coefficients will determine which parameter (for example, marketing mix or brand specific), rather the difference in the parameter values of the two brands, will impact the relative share of Dove versus Fiama.

Estimating Logit Models

Recalling the normal regression case, the OLS technique uses the principle of minimizing the variance (the squared difference between the actual volume and that obtained by the predictor function). In a brand choice example, or in the general case whenever the dependent variable is categorical, the data comes in the form of the category selected (for example, the person chose Dove or Fiama on a certain purchase occasion). This is akin to assigning a probability of choice of 1 for the chosen brand and zero for the brand not chosen. Logit estimation (the maximum likelihood method) tries to estimate coefficients that will maximize the probability of choosing the brand that was actually chosen on every purchase occasion (note the stress on the word ‘every’).

For a live example, suppose one is given a string of purchases and the corresponding marketing mix differentials for each purchase occasion:

• Occasion 1, Brand chosen: Fiama
• Occasion 2, Brand chosen: Dove
• Occasion 3, Brand chosen: Dove
• Occasion 4, Brand chosen: Fiama
• Occasion 5, Brand chosen: Fiama
• Occasion 6, Brand chosen: Dove

In this example, the Logit model will try to estimate the coefficients such that:

The likelihood function, or the joint probability of all six purchases, is maximized: Pr(Fiama) * Pr(Dove) * Pr(Dove) * Pr(Fiama) * Pr(Fiama) * Pr(Dove).
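The estimation can be sketched in a few lines. We invent a single predictor (the Dove-minus-Fiama price differential on each occasion) and estimate its impact coefficient by gradient ascent on the log-likelihood; because this tiny data set is perfectly separated, the estimate would drift indefinitely, so we simply stop after a fixed number of steps.

```python
# A toy maximum-likelihood estimation for the six purchase occasions
# listed above, with an invented price-differential predictor.
import math

# (price differential, brand chosen) per occasion; figures are invented
occasions = [(+8.0, "Fiama"), (-5.0, "Dove"), (-7.0, "Dove"),
             (+6.0, "Fiama"), (+9.0, "Fiama"), (-4.0, "Dove")]

def pr_dove(b, diff):
    """Binary logit: Pr(Dove) = exp(b*diff) / (exp(b*diff) + 1)."""
    return 1.0 / (1.0 + math.exp(-b * diff))

def log_likelihood(b):
    """Log of the joint probability of the six observed choices."""
    ll = 0.0
    for diff, chosen in occasions:
        p = pr_dove(b, diff)
        ll += math.log(p if chosen == "Dove" else 1.0 - p)
    return ll

# Gradient ascent on the log-likelihood
b, step = 0.0, 0.01
for _ in range(2000):
    grad = sum(((chosen == "Dove") - pr_dove(b, diff)) * diff
               for diff, chosen in occasions)
    b += step * grad

print(f"estimated impact coefficient b: {b:.3f}")
print(f"log-likelihood at the estimate: {log_likelihood(b):.3f}")
```

The estimated coefficient comes out negative, since in this invented data Dove is chosen precisely when its price differential is negative; production software (for example, statsmodels' `Logit`) performs the same maximization with far better numerics.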

It is now worth talking about applications of this model. Obviously, it does not take much thinking to anticipate that the entire brand choice prediction is done using this technique. Market research organizations such as Information Resources Inc. (IRI) and AC Nielsen Corporation (ACN) collect individual household level data (panel data) from a large representative sample of households (about 40,000 each) in the United States. ACN has a similar panel in India as well. Data such as brands bought, amount bought, price paid, any discounts obtained, coupons used and any store displays seen by the customer are recorded for every purchase occasion. This database provides a rich source of information to the CPG industry in the United States on consumer price elasticity, brand loyalty and product assortment issues. Logit modelling is a common statistical technique used to develop choice prediction models and build further complex choice-quantity models (what brands do consumers buy and how much do they buy based on the marketing mix variables as well as product and consumer attributes such as loyalty and value).

The example that we discussed above is the case of two choices. Two-choice logit models are appropriately called binary logit or logistic regression. Usually, in situations where one brand is a market leader and one would like to study the effect of competitive pricing on the market share of the leader, binary logit is the way to go. (Note that the estimated probability obtained from the logit model can be construed as estimated market share too.)

The world is far more complex than what can be accommodated in a binary logit model. Most choice situations have multiple alternatives or brands. In situations like these, an extension of the binary logit is used, which is widely known as the multinomial logit (MNL). Even more complicated models, such as hierarchical (nested) logits, attempt to model both ‘direct’ and ‘indirect’ competitors and are closer to reality. However, more realistic models are also more complicated and harder to estimate, and additionally do not have the intuitive ‘feel’ that simpler models provide. Hence, in practice, most applications of logit modelling are reduced to a two-state case (also popularly termed logistic regression).

The Polytomous Logit Model

A close variant of the logit model described above (which is also called the conditional logit) is the polytomous logit model, presented here in its binary form. Instead of option (brand) specific predictor variables, the respondent characteristics (demographics, attitude, etc.) are used to predict the choice outcome. For instance, the probability of choosing Dove will be,

$\mathrm{Pr}\left(Dove\right)=\left\{\mathrm{exp}\left({\text{U}}_{Respondent}\right)/\left(\mathrm{exp}\left({\text{U}}_{Respondent}\right)+1\right)\right\},$

where URespondent = ∑ [β × (respondent characteristics such as age, income, attitude, etc.)]

Similarly,

$\mathrm{Pr}\left(Fiama\right)=\left\{1/\left(\mathrm{exp}\left({\text{U}}_{Respondent}\right)+1\right)\right\}.$

Note that the ‘β’ coefficients determine the strength and direction of influence of the respondent characteristic on the propensity to choose the alternative (Dove). As earlier, the probabilities of choice for the two options add up to one. The direction of influence that the characteristics have on the propensity to choose the other brand (Fiama) will all be opposite to their influence on Dove.
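A minimal sketch of this respondent-characteristics formulation follows; the β coefficients and the respondent profile are entirely invented for illustration.

```python
# A sketch of the respondent-characteristics logit described above.
# All coefficients and the respondent profile are hypothetical.
import math

betas = {"age": -0.02, "income": 0.00004, "brand_attitude": 0.9}
respondent = {"age": 34, "income": 50000, "brand_attitude": 1.5}

# U_Respondent = sum of (beta x characteristic)
u = sum(betas[k] * respondent[k] for k in betas)

p_dove = math.exp(u) / (math.exp(u) + 1.0)
p_fiama = 1.0 / (math.exp(u) + 1.0)

print(f"U_Respondent = {u:.3f}")
print(f"Pr(Dove)     = {p_dove:.3f}")
print(f"Pr(Fiama)    = {p_fiama:.3f}")  # the two probabilities add to one
```

A positive β (brand attitude here) pushes the respondent towards Dove, while a negative β pushes towards Fiama, which is exactly the opposite-influence property noted above.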

Polytomous logits have widespread application in the banking and insurance sectors to model consumer attrition, profitability and risk. Retailing and telecom are other sectors where applications of such models have been widely reported.

Other ‘Separation’ Models

Multiple discriminant analysis is an alternative technique when the predicted variable is a nominal (classification) variable. For example, if one were to identify the type of individuals who buy mutual funds, as against the ones who do not consider mutual funds as an investment option, based on household/demographic characteristics and other relevant behavioural traits of the respondent, multiple discriminant analysis would be a good approach to develop a classification routine that decides group identities. Note that the logit model is also used for classification purposes in such cross-sectional analysis. The only alleged advantage of discriminant analysis, with its ability to estimate multiple discriminant functions (the number of functions being related to the number of categories), is its potential to do a far better job at classification than the simple utility formulation used in most MNL models.

However, it is important to note that the construction of the discriminant function is significantly different from the logit model and makes different assumptions regarding the nature of the dependent variable. Logits, on the other hand, are very powerful tools for developing prediction models, given their ability to estimate elasticities, while discriminant analysis oftentimes yields good classification models.

We hope this brief description of logit models has helped the reader appreciate the key model differences in comparison with regression models. While the objectives are similar, regression and logits differ significantly in the approach to constructing the model.

## Appendix 5

Interview Guide for Our Industry Research

The basis for having a discussion with Analytics practitioners is given below. These are broad guidelines that were used to initiate the conversation; thereafter, flexibility was maintained to ensure that newer issues that emerged during the course of the conversation were explored further.

• Broad categories of expectations from the ‘Analytics’ function in your organization.
• What is accomplished, what is desirable in terms of output, what should be the areas to improve in the next 5 years?
• What are the immediate constraints in improving productivity of the Analytics function and, the causes of the same—why are they hard to remove?
• In the long run, what should the industry do to overcome these constraints?
• Describe the specialist resource available/required in this function—where is it sourced from, their experience profile, skill mix and their future progression—any possible constraints?
• Describe the leadership role for this function—profile, capabilities, long-term orientation, possible gaps in future leadership.
• What causes or defeats a thriving Analytics function/ practice in an organization – roles and responsibilities of Analytics functionaries?
• Why Analytics is important today and why it was not so earlier?
• What to avoid, unreasonableness in expectations?

## Appendix 6

Select Cases of Analytics Adoption in Indian Organizations

We followed up our business executive study (described in the last chapter) with a more detailed study of five organizations (that we were allowed to visit). The purpose was to investigate, in some detail, how data is currently used and processed for business monitoring and decision making. Additionally, we wanted to gain a better understanding of the influencers of adoption of knowledge processes (Analytics). Our focus was on India-based (indigenous) organizations.

An Upcoming Hospitality Chain
Focus on Managing Business Data for Enhancing the Quality of Insights

The hospitality chain maintains two prominent databases of its customers that are used for decision support: (a) transaction (bookings) data for revenue management, accounting and operational monitoring and (b) loyalty database that is used for tracking repeat customers for generating marketing, promotions and customer retention initiatives.

Both these initiatives are managed separately and there is no data integration across these two platforms to manage the information holistically.

Ideally, it may require someone to utilize their planning processes to connect the disparate pieces of information and build a roadmap to leverage the information source for greater organizational use. However, this organization does not have personnel to spare for conceptualizing such a solution. Most executives are too preoccupied with their near-term operational responsibilities to apply their minds to longer-term value additions. The feeling is that ‘we shall cross the hurdle when we encounter it’. It was obvious to us that there is no felt need for such an initiative and, if required, a technology consultant would be hired at a later date to build the infrastructure.

An Oil Refinery
Building a More Effective Monitoring System for the Decision Maker: Throughput Dashboard

While this case does not pertain to the domain of Marketing Analytics, it does provide interesting insights beyond operations (which is the focus in this case).

This refinery (like many) has invested in a distributed control system (DCS) to monitor crude oil refining process operations. This system monitors, displays and stores various stage-wise process parameters along with the plant throughput. A template for monitoring the throughput is given below (Figure A6.1). This kind of monitoring report can be created at an appropriate periodicity based on the requirements of the decision makers.

However, the DCS also stores various process parameters (pressure, temperature, etc.) on a continuous basis. In most cases, the data is used in real time to monitor the refining process. However, what is important is that the process data are stored as a time series which can be used to create appropriate aggregate diagnostics. Changes in process parameters (on a periodic basis) can be correlated with variance in output over equivalent time intervals to develop plausible associations (maybe even causality). Of course, our survey rarely came across such diagnostics.

Instead, most process data from the DCS are used by operations managers to monitor plant productivity (Figure A6.2). For productivity review sessions, the data are not looked into formally, but we understand that the personnel responsible for managing operations provide a view based on their experience to senior management. A plausible reason for this ‘reluctance’ to create a diagnostic report which associates variance in throughput with variance in refining process parameters could be that such reports are not demanded by ‘higher authority’. Asymmetry in information availability (and knowledge of information processes) may also create differential power equations within organizations, which many would not like to disrupt without a necessary demand for the same.

Our study also revealed that the idea of attribution modelling (causality) using the DCS data would be of immense help to many plant/operations heads. However, as stated earlier, there are signs of reluctance to change age old procedures and ‘ruffle feathers’ in the process.

A Process Industry Manufacturing Industrial Chemicals
Operational Monitoring versus Futuristic Planning

A similar situation prevails at another process plant that was surveyed. This plant produces industrial chemicals. A DCS is available to monitor and record process parameters continuously. Daily and fortnightly throughput and variance from plan is reported to senior management similar to the one described in the earlier case. However, if a variance report has to be discussed for causation, it still has to be at a review meeting with operations personnel providing a perspective rather than an automated process generating a report on variance in process parameters (causal factors) from the DCS.

Figure A6.1 Representative Dashboard for Throughput Monitoring: For Senior Management

Figure A6.2 Real Time Monitoring of Plant/Process Parameters: Dashboard

Why then is the investment in a process control and monitoring system not used optimally? The response is that ‘the time is not ripe to question age-old practices’. Why rock the boat when no one is questioning current practices?

It must be pointed out that we also found evidence of employees getting trained in quality management issues, and some are skilled enough to develop reports associating process variance with throughput variances for leakage monitoring in the system. Most of these attempts are, however, still sporadic and the information system is not yet being utilized to build a culture that demands higher-level diagnostics (Figure A6.3).

Figure A6.3 Illustrative Water Fall Chart Depicting Attribution (Due to) Analysis

Actual = Planned + (due to Reason A) – (due to Reason B) – (due to Reason C) – (due to Reason D)
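The attribution identity above can be sketched in a few lines; the throughput figures and reason labels here are hypothetical, with gains carrying a positive sign and losses a negative one.

```python
# A sketch of the waterfall ('due to') attribution identity above,
# with hypothetical throughput figures for one reporting period.
planned = 1000.0
due_to = {
    "Reason A": +40.0,  # gain
    "Reason B": -25.0,  # loss
    "Reason C": -15.0,  # loss
    "Reason D": -10.0,  # loss
}

actual = planned + sum(due_to.values())
print(f"Planned: {planned:.0f}")
for reason, delta in due_to.items():
    print(f"  due to {reason}: {delta:+.0f}")
print(f"Actual:  {actual:.0f}")
```

Each signed component becomes one bar of the waterfall chart, so the variance between planned and actual throughput is fully accounted for by the listed reasons.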

A Regional Dairy Products Marketing Organization

A full-portfolio regional dairy product company has product variants that run up to more than a hundred stock keeping units, including milk, processed milk products and tertiary items like chocolates. Milk being a perishable item requires immaculate planning to ensure that wastage is minimized. Such precise planning would actually require a very sophisticated forecasting/planning model.

However, much to our surprise, such concerns about the need for precise forecasting were not found to be of material significance by the management. The historical basis of planning was past consumption behaviour. Our hunch (though not confirmed by the management) is that, in spite of a large operation, given that the per capita consumption of milk in India is still very low, production never exceeds the intrinsic consumption capacity. Hence, planning rarely leads to overproduction since supply is almost always lower than market demand. Additionally, the production cycle being daily, production planning can be recalibrated at a very high frequency to address stockpiling issues, especially for highly perishable items.

Hence, from a market planning perspective, this dairy organization still relies on experience, hunches and the collective wisdom of its dairy union members, and it has worked fine for them so far. Analytics may not have a very important role in such environments for now.

A Stock Trading Platform

Useful information regarding stock brokers and the types of transactions they effect is recorded daily, providing insights about broker activity and the quantity/quality/nature of transactions. Currently, most of the analysis of this data is about monitoring trade and attrition (loss of volume, if any). Proactive analysis for providing consultative services to brokers on transaction quality and optimal business is not part of the deliverables for the trading platform managers.

Again, performance monitoring of trading activity using the generated data is not considered a primary activity. Many potentially interesting and creative ideas for optimal utilization of the data were discussed, but were summarily dismissed on implementation grounds due to ‘legal constraints’.

So What Did We Conclude?

The organizations studied have significant (though not comprehensive) information resources with which to align their analytical processes with business decision support. This contradicts, to some extent, the perception created by the business executive survey. We note that the adoption of analytics practices across industries is varied in spite of similar resource availability and potential. Based on our discussions, we feel that this is largely attributable to the lack of exogenous imperatives that would spur such activities. The perceived need is still largely dormant.

Be that as it may, it may just take one foresighted organization to break the ‘inertia’ barrier for such a practice to become an industry standard.

Implications of the Findings

There is seemingly a lack of motivation (rightly or otherwise) among many organization leaders to look beyond the normal operational usage of business databases. Our numerous enquiries about how analytical prowess could be improved were met with guarded optimism and, oftentimes, seemingly little enthusiasm. The lack of ‘self-criticism’ observed in many domestic organizations is naturally bewildering and goes against the popular sentiment expressed in trade publications on the potential of Analytics in business organizations.

We propose a few postulates for this rather subdued adoption of analytics in Indian businesses. Our strong prior is that:

• Analytics has not caught the senior management's imagination and hence remains a low priority in many organizations. Nowhere did we find senior executives deeply involved in discussions about their internal analytics capabilities. This could be a reason for the lack of an appetite (culture) for analytics.
• Decision makers find it difficult to discern the value of analytics. Hence, their ability to provide direction in developing this capability is limited.
• Some organizations may expect turnkey solutions to their problems and have no internal resources to spare for development initiatives. They depend upon vendors who largely sell ‘product solutions’ rather than ‘solutions to problems’, and hence the issues remain only partially addressed.
• No one in the industry has taken significant steps to improve analytical capabilities in their respective organization, and hence it is not a priority (there is no competitive benchmark).
• There is confusion over the true scope of Analytics. Is it a tool box, or a process for generating insights from data to support decision making? There is more than one explanation of what Analytics stands for, and hence little clarity on what needs to be done.
• Analytics dilutes the information control enjoyed by a few savvy employees and thereby disrupts the traditional operating style of the management. Hence, it is not favoured by some employees unless imposed from the top.
• There is a significant disconnect between Analytics-competent employees (centred around the IS function) and the business decision makers. Hence, there is no common platform for developing new and useful data-driven decision support applications.

It appears that asymmetric information control, an inadequately ‘felt’ need, top management's unfamiliarity with the potential, near-term exigent requirements and unrealistic expectations of turnkey solutions are some of the reasons impeding the adoption of an appropriate analytics culture in some of these organizations.

This trend may continue for a while, as mentioned earlier, until market parameters in India change radically enough for business organizations to feel the ‘heat’ of Analytics. This is corroborated by Germann et al. (2012), who find that the level of competition is positively correlated with the depth of Marketing Analytics used in firms.