Wednesday, October 25, 2017

We have moved


Dear Clients, Valued Partners and Associates,

Please note my new digital coordinates:
sudhir@fitforpurposecontent.com

I have tried to inform each one of you individually,
but this broadcast is for those I may have
inadvertently missed, and
who may be wondering
whatever happened to sudhir@studiorainbow.in,
most likely those with whom
I have worked on one-off projects.

The rechristening follows my resolve to
rightsize my offerings and focus solely on
fit for purpose content development and
thought leadership communication.

After great deliberation, I have decided to minimize,
if not shun,
the orientation workshops and role-play sessions,
as they demand substantial time and travel,
leaving little bandwidth for
fit for purpose content development.

I will still take up select assignments
for those retainers where I have already committed
to a long-term plan.

Looking forward to taking our relationship
to the next level and collectively
moving up the value chain.

Let's dive deeper to soar higher...

cheers



Friday, October 20, 2017

Happy Birthday Shammi Saab


Tumsaa Nahi Dekha



India Infoline News Service | Mumbai |

All his life, he defied age and ailment with the same rebellious spirit that shaped his melodic frolics on the silver screen. Today, he is no more but the legend called Shammi Kapoor lives on. Sudhir Raikar recalls one hallowed evening of sparkling interaction with the eternal style icon.

It was a seemingly desolate evening of June 2007. The downpour was steady if not savage. I reached his apartment in Mumbai’s plush Malabar Hill, a stone’s throw from the CM’s bungalow ‘Varsha’. The security at the entrance was astonishingly helpful, not the usual tight-lipped folks who terrorise you with a barrage of questions. I was quickly escorted to the modest living room, which I presumed would stage a long wait before the man appeared.

No way, he was before me within minutes. Wheelchair-bound and weary from the thrice-a-week dialysis regime that followed his renal failure, he still spared a twinkle or two for the non-entity guest representing a Mumbai tabloid. He took me to the adjacent room which he fondly called his ‘den’. In the two and a half hours that followed, he recalled the strife and success, grief and gratification of his life and times with remarkably detached introspection... whether the formative lessons in theatre and classical music or the early rejection and dejection playing second fiddle to Bharat Bhushan and Pradeep Kumar, the turning point in the form of ‘Tumsa Nahi Dekha’ or the ensuing golden period of name and fame, the tryst with personal tragedy or the gradual fade-out of the popular image... The only instance where the emotions flowed unchecked was when he spoke of his celestial chemistry with singer Mohammed Rafi. “Rafi was my soul,” he exclaimed with moist eyes.

The eternal style icon relished talking about his past but refused to cling to it - precisely why he insisted that I use his present-day snap for the story, not the usual photographs from his yesteryear hits.

“Duniya ko pataa to chale mein aaj kaisa dikhtaa hun” (Let the world know what I look like today)

Shammi Kapoor won worldwide attention as a maverick star, but the fact remains that he was also an actor par excellence - an aspect rarely acknowledged by filmmakers and audiences alike. Precisely why acting legend Naseeruddin Shah considers him one of the most underestimated actors of Hindi cinema. But Shammi Kapoor shrugged off the accolades with characteristic humility.

“Bhai Mere, acting is the forte of doyens like Dilip Kumar and Balraj Sahni. If I should get credit of any sort, it’s only as an inventive expressionist of music,” he pointed out with child-like fervour.

A compulsive ‘Apple’ user for years, he was well known for his obsession with the internet and the ‘Safari’ browser. Even his ailment could not keep him inactive. During the later years, he took it upon himself to spread awareness on kidney ailments at forums organized by nephrologists and kidney foundations. And once in a while, he got rid of the wheelchair to speed his Merc to Lonavala for a rejuvenating drive. “I’ll do everything that’s possible within the given constraints,” he said of his passion for driving.

After the interview, he generously kept in touch through occasional phone calls, sporadic emails and Twitter DMs, but of late, even the public posts had stopped. He did mention the desire to pen his autobiography, for which we were to meet at leisure some day. But destiny had other plans.

How I wish someone like Ranbir Kapoor - his grand-nephew - digs the memoirs out of his den and publishes them as they are. Given the prolific speaker that he was, his written expression would surely be worthy of print and, more importantly, of universal significance.

Every time I think of that wonderful evening, I remember the fag-end chat on his illness. When I mentioned Pranayam’s healing powers as a possible antidote - Kapaal Bhaati and Anulom Vilom in particular - he did a little boogie woogie in his wheelchair, mocking the breathing exercises with a smile.

“Sab kar ke dekh liya yaar, Koi farak naahi padtaa” (I have tried everything but to no avail).

That little dance act epitomises the indefatigable spirit of Shamsher Raj Kapoor - the musical non-conformist of timeless charisma. May his soul rest in peace!

Friday, September 29, 2017

A Macro View of Microservices - A Plain Vanilla Primer for Pragmatic Practitioners



The Litmus Test of Enterprise Tech: Change-readiness


Business change is constant and instant; tech teams need to be in start-up mode

Markets are getting wider; scaling out is the new norm, as is the adoption of emerging tech

Shrinking Time to Market calls for rapid development in distributed environments backed by continuous deployment

Microservices key value prop: Change-friendliness

Their contexts and boundaries are defined in terms of distinct business capabilities

They are right-sized, invariably small (think scope of service, not lines of code)

They are independently deployable in line with business needs: for instance, a new feature or a bug fix can be deployed immediately and then tracked for performance and behavior.

Deployment decisions are choreographed in collaboration with service owners, eliminating the need for orchestration across multiple teams, which is invariably arduous.

Service owners are free to choose the technologies and persistence mechanisms for building and operating individual services, with consensus on cross-team concerns like log aggregation, monitoring and error diagnosis.

Services collaborate with each other using technology-agnostic network calls

They offer a cost-effective and low-risk test bed for evaluating the effectiveness of new technologies in production environments


What does that mean for business?

Scaling is ‘on-demand’ and cost-effective, in line with business needs

Independent deployment, with quick failure isolation and rollbacks, ensures quick value delivery to the market

Ready-to-deploy business capabilities make customer engagement more holistic across different channels

Smaller codebase means significantly lower risk and cost of replacing or rewiring software (vis-à-vis the typical monolith compromise of coping with core modules running on redundant technologies)

Microservices: Here’s the deal

How they deal with change in business requirements


Unlike in monoliths, responsibilities are decomposed into discrete services defined by business capabilities, so a change affects only the given service. Data segments are API-encapsulated, while overlaps between services are mapped through higher-order services or hypermedia.


How they deal with business capability enhancements or modifications


Bounded contexts enable independent deployment of the impacted service(s) without disturbing business capabilities residing in other services. This eliminates the need for time-consuming and costly regression tests of the monolith universe.


How they deal with situations where business abstractions are dependent on low-level services outside their bounded contexts


The ‘API gateway’ inverts the dependencies between clients and microservices in this scenario. The secondary abstraction is declared by the high-level abstraction within its service interface, and is implemented by the dependent collaborating services through several means - reducing network chattiness, performing protocol translations, concurrently aggregating service responses and transforming service responses into specific end-user formats.


A closer look at API gateways


In the microservice universe, a client making RESTful HTTP requests to individual services can suffer a painful user experience, given the plethora of requests to different services.

Enter the API gateway, which tailors APIs to each client’s network capabilities.

For instance, a desktop client may make multiple calls, while a mobile client would make a single request. The API gateway proxies fine-grained desktop requests to the corresponding services while handling coarse-grained mobile requests by aggregating the results of multiple service calls.

Outcome: optimized communication between clients and applications, along with encapsulation of microservice details.

API gateways also ease the evolution of microservices: whether two microservices merge or one is partitioned into two, the update is made at the gateway level, and clients on the other side of the gateway remain impervious to the change.
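A minimal sketch of the aggregation idea, assuming three hypothetical backend services (orders, profile, recommendations); the service names and stubbed responses are illustrative, not from the original text. In a real gateway the coroutines would be HTTP calls to independently deployed services; here they are stubs so the snippet runs standalone.

# Illustrative API-gateway aggregation sketch (asyncio only).
# The three "services" are stubbed with coroutines; in a real setup they
# would be network calls to independently deployed microservices.
import asyncio

async def fetch_orders(customer_id: str) -> dict:
    await asyncio.sleep(0.05)          # stand-in for a network call
    return {"orders": [{"id": "o-1", "status": "shipped"}]}

async def fetch_profile(customer_id: str) -> dict:
    await asyncio.sleep(0.05)
    return {"profile": {"id": customer_id, "tier": "gold"}}

async def fetch_recommendations(customer_id: str) -> dict:
    await asyncio.sleep(0.05)
    return {"recommendations": ["sku-42", "sku-77"]}

async def mobile_dashboard(customer_id: str) -> dict:
    # Coarse-grained gateway endpoint: one client request, three
    # concurrent service calls, one aggregated response.
    orders, profile, recos = await asyncio.gather(
        fetch_orders(customer_id),
        fetch_profile(customer_id),
        fetch_recommendations(customer_id),
    )
    return {**profile, **orders, **recos}

if __name__ == "__main__":
    print(asyncio.run(mobile_dashboard("c-123")))

A desktop client could still hit the fine-grained routes directly; only the mobile path pays for the fan-out inside the gateway.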


Why Microservices call for Continuous Deployment

Microservices are highly amenable to change, and continuous deployment makes that change rapid and reliable.

Microservices make deployment easier, so it becomes faster and more frequent.

Faster deployment ensures faster feedback from the market.

Faster feedback ensures timely improvements, making the system more responsive and secure.


Why Microservices and Polyglot Persistence go together

The microservice approach of multiple teams managing discrete services naturally implies database capability within each service, else coupling at the data level would defeat the very purpose of autonomy.

Using multiple data stores invites eventual consistency, which is an accepted compromise in most businesses. Even relational databases settle for eventual consistency when data is sent to or read from remote systems like value chain databases.

Just as an RDBMS uses event streams to construct reliable views, the microservice world uses event sourcing, triggering service updates from logged events.

The trade-off in favor of high availability becomes even more acceptable when compared to the Multi-Version Concurrency Control issues of the relational world.
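A minimal event-sourcing sketch under assumed names: an append-only log of events is treated as the source of truth, and a service rebuilds its own read view by replaying them. In practice the log would live in a broker or event store rather than an in-memory list.

# Illustrative event-sourcing sketch: an append-only event log drives
# a service-local view (here, per-account balances).
from dataclasses import dataclass

@dataclass
class Event:
    kind: str        # e.g. "deposited" / "withdrawn"
    account: str
    amount: int

event_log: list[Event] = []          # stand-in for a durable event store

def append(event: Event) -> None:
    event_log.append(event)          # the log, not the view, is the truth

def project_balances(events: list[Event]) -> dict[str, int]:
    # Rebuild the read view by replaying events in order.
    balances: dict[str, int] = {}
    for e in events:
        delta = e.amount if e.kind == "deposited" else -e.amount
        balances[e.account] = balances.get(e.account, 0) + delta
    return balances

append(Event("deposited", "acc-1", 100))
append(Event("withdrawn", "acc-1", 30))
print(project_balances(event_log))   # {'acc-1': 70}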


Embracing Microservices: No cookbook in the kitchen


Every organization is different – you can’t mirror success stories, or even failures for that matter

How to assess whether microservices are fit for purpose – the adoption challenge

Microservices demand a paradigm shift - Cultural, Structural and Functional

Compelling benefits come bundled with significant complexities


The Adoption challenge


Greenfield projects

When businesses need to evolve rapidly, the monolith environment may work best for managing a small number of applications that deliver the firm’s competitive edge. Microservices, however, can help startups build a minimum viable product.


Brownfield projects

When established businesses need to scale rapidly across large, complex applications, microservices become fit for purpose, but given the tangled dependencies between applications, an incremental approach to the evolution is highly advisable:

Re-engineer applications based on business priority in phase one

Build API gateways to interface with monolith applications that talk to inefficient database models

Perform minimal database modifications to maintain stateful connections

Re-engineer the remaining applications in fast-track mode using phase one templates and components

Split monolith services into microservices

Normalize relational data models and embrace efficient database models using polyglot persistence


Getting in microservice mode


Pre-adoption Diagnostics


Defining core business capabilities for decomposition into services

Dissecting services in terms of business, processes, constraints, channels and data behaviors to be able to ‘group those things that change for the same reason’

Identifying capabilities to bridge the technical team’s skill gaps on emerging technologies and best practices


Building the Microservices organization


Aligning the technical architecture to the business organization

Defining service boundaries, appointing service owners and custodians based on optimal team size and maximum productivity.

Promoting autonomy of service owners while enforcing rules for and monitoring implementations of ‘what the services should expose and what they should hide’

Complexities can be overwhelming

Issues of an expanding estate

Host of services

Scores of processes once resilience is factored in

Several Interfaces

Network latency

Network failures


Need for value-added solutions


Versioning and Message serialization

Load balancers and messaging layers

Remote procedure calls

Backward compatibility and functional degradation

Use of home-grown code and off-the-shelf products for high-level automation, roll-outs based on service dependencies

Asynchronicity

Need for solution-minded teams

Visionary technical architects, Competent Custodians, Highly adept DevOps team

Database professionals conversant with polyglot persistence scenarios

Intelligent co-ordination between teams

Democratic governance

Questions that demand credible answers… a partial list

How does one move forward in the absence of standards?

How does one form the microservice team – domain experts, technical architects, emerging tech champions, networking wizards… How does one bridge the skill gap?

How does one choose technologies, techniques, protocols, frameworks, and tools?

How does one approach design, development and deployment?

How does one identify services?
How does one define service boundaries?
How does one chip off services from the monolith structure?
Why and when does one split services, merge them, or create new services?
How does one deal with service interfaces?
Is there a way to ensure a uniform way to call services?
How does one ensure the quality of codebases? Small doesn’t automatically guarantee quality.
Can one build coherence and reusability across different components?
How does one tackle the accidents post the ‘first version’ release?
How does one avoid versioning nightmares?
How does one adopt a culture of individual deployments before evolving into continuous deployment mode, given the monolith legacy of single-unit deployments and regression tests?


Summing up

Microservices are an evolutionary phenomenon, not a ready-to-deploy solution

Microservices will ensure measurable value to your organization only if they are Fit for Purpose – irrespective of whether the project is Greenfield or Brownfield.

Microservices vs. monolith is not a black-and-white David vs. Goliath scenario; both have distinct value props.

Microservices are naturally attuned to the virtues of heterogeneous data stores and Continuous Deployment

Microservice trade-offs should be guided by respective trade realities, not by the experiences of other organizations

Thursday, September 28, 2017

NoSQL in perspective: Biz above Buzz, Needs above Names


Random notes based on the seminal book "NoSQL Distilled" by Pramod J. Sadalage and Martin Fowler, aimed at enabling faster comprehension



Business needs of the Modern Enterprise

Real-time capture and analysis of big data – coming from multiple sources and formats and spread across multiple locations

Better customer engagement through personalization, content management and 360-degree views in a smartphone era

Ability and agility in proactively responding to new markets and channels


Constraints of the RDBMS environment

Frequent database design & schema revisions in response to fast-changing data needs have serious application-wide ramifications, as the RDBMS is the point of business integration

Growing data storage needs call for more computing resources but RDBMS ‘scale up’ is prohibitively expensive

Clustering is an effective solution, but cluster-aware relational DBs can’t escape the ‘single point of failure’ trap of making all writes to a highly available shared disk.

Sharding in RDBMS puts unsustainable loads on applications



NoSQL in perspective

Over time, enterprises with complex and concurrent data needs have created tailored non-relational solutions specific to their respective business environments.

They are a natural fit for the clustering environment and fulfill the two principal needs of the modern enterprise, viz,

Cost-effective data storage ensuring fit-for-purpose resilience and several options for data consistency and distribution

Optimal and efficient database-application interactions


It would be appropriate to name this ever-expanding universe NoSQL, which, contrary to what the name implies, is ‘non-relational’ rather than ‘non-SQL’, since many RDBMS systems come with custom extensions. (NewSQL hybrid databases are likely to open new doors of possibilities.)

Each data model of the NoSQL universe has a value prop that needs to be considered in the light of the given business case, including the required querying type and data access patterns. There’s nothing prescriptive about their adoption. And they are not a replacement for SQL, only smart alternatives.

NoSQL data models

A closer look at two common features:

Concept of Aggregates

Group all related data into ‘aggregates’, or collections of discrete data values (think rows in an RDBMS table)

Operations updating multiple fields within an aggregate are atomic; operations across aggregates generally don’t provide the same level of consistency

In column-oriented models, the unit of aggregation is the column family, so updates to different column families for the same row may not be atomic

Graph-oriented models use aggregates differently – writes on a single node or edge are generally atomic, while some graph DBs support ACID transactions across nodes and edges
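A small illustration of aggregate-oriented thinking, with hypothetical field names: an order and its line items are stored as one aggregate, so a single update to it is atomic, rather than being normalized across several tables.

# One aggregate = one unit of storage, distribution and atomic update.
order_aggregate = {
    "order_id": "o-1001",
    "customer_id": "c-123",
    "status": "paid",
    "line_items": [                      # related data kept inside the aggregate
        {"sku": "sku-42", "qty": 2, "price": 250},
        {"sku": "sku-77", "qty": 1, "price": 900},
    ],
    "shipping_address": {"city": "Mumbai", "pin": "400001"},
}
# Updating several fields of this one aggregate is atomic in most
# aggregate-oriented stores; touching two different orders is not.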


Materialized views

To enable data combinations and summarization, NoSQL DBs offer pre-computed and cached queries – their version of RDBMS materialized views – for read-intensive data that can afford to be stale for a while. This can be done in two ways:


Overhead approach

Update materialized views whenever you update base data, so each entry updates the history aggregates as well

Recommended when materialized view reads are more frequent than their writes, and hence views need to be as fresh as possible

This is best handled at the application end, as it’s easier to ensure the dual updates – of base data and materialized views.

For updates with incremental map-reduce, providing the computation to the database works best; the database then executes it based on configured parameters.

Batch approach

Update materialized views in batches of regular intervals depending on how ‘stale’ your business can afford them to be
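A minimal sketch of the batch approach, with assumed names: a pre-computed summary ('sales per product') is rebuilt on a schedule from the base aggregates, so reads stay cheap and the view is only as stale as the refresh interval.

# Batch-style materialized view: recompute a read-optimized summary
# from base data at a fixed interval (the acceptable staleness window).
from collections import defaultdict

base_orders = [
    {"sku": "sku-42", "qty": 2},
    {"sku": "sku-77", "qty": 1},
    {"sku": "sku-42", "qty": 5},
]

sales_per_sku: dict[str, int] = {}        # the materialized view

def refresh_view() -> None:
    totals: dict[str, int] = defaultdict(int)
    for order in base_orders:
        totals[order["sku"]] += order["qty"]
    sales_per_sku.clear()
    sales_per_sku.update(totals)

refresh_view()                            # run from a scheduler, e.g. every N minutes
print(sales_per_sku)                      # {'sku-42': 7, 'sku-77': 1}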



Domain-specific compromises on consistency to achieve:

a. High availability through Replication: Master-slave & peer-to-peer clusters

b. Scalability of performance through Sharding

In each case, the domain realities matter more than developmental possibilities – what level and form of compromise is acceptable in the given business case helps arrive at a fit for purpose solution.

Many NoSQL models offer a blended solution to ensure both High Availability and High Scalability - where sharding is replicated using either Master-slave or peer-to-peer methods.

Replication

Master-slave cluster:

Works best for read-intensive operations

Database copies are maintained on each server.

One server is appointed master: all applications send write requests to the master, which updates its local copy. Only the requesting application is informed of the change, which, at some point, is broadcast to the slave servers by the master.

At all times, all servers – master or slaves – respond to read requests to ensure high availability. Consistency is compromised as the cluster is only ‘eventually consistent’: an application may see an older version of the data if the change has not yet reached the server it reads from.

Fail scenarios in Master-slave cluster and their possible mitigation:

Master fails: promote a slave as the new master. On recovery, the original master applies the changes that the new master conveys.

Slave fails: read requests can be routed to any operational slave. On recovery, the slave is updated with any pending changes.

Network connecting the master and (one or more) slaves fails: the affected slaves are isolated and live with stale data till connectivity is restored. In the interim, applications accessing the isolated slaves will see outdated versions of the data.
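A toy sketch of master-slave routing, with invented class and variable names: writes go only to the master, reads are spread across all nodes, and a replica that has not yet received the broadcast serves stale data, which is exactly the eventual-consistency compromise described above.

# Toy master-slave cluster: writes hit the master, reads round-robin over
# all nodes; replication is deliberately delayed to show stale reads.
import itertools

class Node:
    def __init__(self, name: str):
        self.name = name
        self.data: dict[str, str] = {}

master = Node("master")
slaves = [Node("slave-1"), Node("slave-2")]
read_cycle = itertools.cycle([master] + slaves)

def write(key: str, value: str) -> None:
    master.data[key] = value              # only the master accepts writes

def replicate() -> None:
    for s in slaves:                      # broadcast happens "at some point"
        s.data.update(master.data)

def read(key: str):
    node = next(read_cycle)               # any node may serve the read
    return node.data.get(key)

write("greeting", "hello")
print([read("greeting") for _ in range(3)])   # some reads return None until...
replicate()
print([read("greeting") for _ in range(3)])   # ...replication catches up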

Peer-to-peer cluster:

Works best for write-intensive operations.

All servers support read and write operations.

A write request can be made to any peer, which saves the changes locally and confirms them to the requesting application. The other peers are subsequently updated.

This approach evenly spreads the load, but if two concurrent applications change the same data simultaneously at different servers, conflicts occur and have to be resolved through quorums. If there’s only a thin chance of two applications updating the same data at almost the same time, a quorum rule can state that data values be returned as long as two servers in the cluster agree on them.

Sharding

Evenly partition data on separate databases, store each database on a separate server. If and when workload increases, add more servers and repartition data across new servers.

To make the most of sharding, data accessed together is ideally kept in the same shard. It’s hence recommended to proactively define aggregates and their relationships in a manner that enables effective sharding.

In the case of global enterprises with widely-dispersed user locations, the choice of sites for hosting shards should be based on user proximity, apart from the most-accessed data. Here again, aggregates should be designed in a manner that supports such geography-led partitioning.

Sharding largely comes in two flavors:

Application-managed shards that function like autonomous databases, with the sharding logic implemented at the application end.
Auto-shards, where the sharding logic is implemented at the database end.

Sharding doesn’t work well for graph-oriented data models due to the intricately connected nodes and edges which make partitioning a huge challenge.
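A small sketch of the first flavour above, application-side shard routing, with hypothetical shard names: the shard key is hashed to pick a database, and everything belonging to one aggregate uses the same key so related data lands on the same shard.

# Application-side sharding: hash the aggregate's key to pick a shard.
import hashlib

SHARDS = ["orders_db_0", "orders_db_1", "orders_db_2"]   # illustrative names

def shard_for(customer_id: str) -> str:
    digest = hashlib.md5(customer_id.encode()).hexdigest()
    return SHARDS[int(digest, 16) % len(SHARDS)]

# All aggregates keyed by the same customer land on the same shard,
# so data that is accessed together stays together.
for cid in ("c-123", "c-456", "c-789"):
    print(cid, "->", shard_for(cid))

Note that a plain modulo scheme reshuffles most keys when servers are added; consistent hashing or directory-based routing is the usual way to keep repartitioning cheap.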

Ways to improve ‘eventual consistency’: quorums and version stamps

Read and Write Quorums

Quorums help consistency by establishing read and write quorums amongst the servers in a cluster. In the case of reads, the data values agreed upon by the read quorum are returned. In the case of writes, a write is approved by a write quorum of servers in the cluster.

Applications read and write data with no knowledge of quorum arrangements which happen in the background.

The number of servers in a quorum – read or write – has a direct bearing on database performance and application latency. The more servers involved, the longer read and write quorum approvals take.
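A minimal illustration of the quorum arithmetic, with assumed parameters: with N replicas, choosing a write quorum W and a read quorum R such that R + W > N guarantees that every read quorum overlaps the latest write quorum.

# Quorum arithmetic: R + W > N means reads and writes must overlap,
# so at least one node in any read quorum holds the latest write.
N = 3            # replicas holding a copy of the data
W = 2            # nodes that must acknowledge a write
R = 2            # nodes that must agree on a read

def strongly_consistent(n: int, w: int, r: int) -> bool:
    return r + w > n

print(strongly_consistent(N, W, R))      # True: 2 + 2 > 3
print(strongly_consistent(3, 1, 1))      # False: fast, but reads may be stale
# Larger quorums improve consistency but add latency, since more servers
# must respond before a read or write completes.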


Version Stamps

Consistency problems can arise in relational and non-relational databases alike, despite ACIDity or quorum rules. A case in point is lost updates from concurrent access to the same data, where one modification overwrites the changes made by another. In business cases which can’t afford pessimistic locking, version stamps are a way out:

An application reading a data item also retrieves its version information. While updating, it re-reads the version info; if it is unchanged, it saves the modified data to the database with a new version. If not, it retrieves the latest value, probably changed by another application, and re-reads the version stamp before modifying the data.

In the time between re-reading the version info and changing the values, an update can still be lost to a change made by another application. To prevent this, the data can be locked for that window, in the hope that it will be minuscule.
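A sketch of the version-stamp (compare-and-set) idea, with an in-memory dictionary standing in for the database and invented key names: an update succeeds only if the version it read is still current, otherwise the application re-reads and retries.

# Optimistic concurrency with version stamps (compare-and-set).
store = {"cart-1": {"value": {"items": 1}, "version": 7}}   # toy "database"

class StaleVersion(Exception):
    pass

def compare_and_set(key: str, new_value: dict, expected_version: int) -> int:
    record = store[key]
    if record["version"] != expected_version:     # someone updated it first
        raise StaleVersion(record["version"])
    record["value"] = new_value
    record["version"] += 1
    return record["version"]

def update_with_retry(key: str, mutate) -> None:
    while True:
        record = store[key]
        value, version = dict(record["value"]), record["version"]
        try:
            compare_and_set(key, mutate(value), version)
            return
        except StaleVersion:
            continue                               # re-read and retry

update_with_retry("cart-1", lambda v: {**v, "items": v["items"] + 1})
print(store["cart-1"])    # version bumped to 8, items now 2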

A few NoSQL models, like column-oriented DBs, enable storing multiple versions of the same data in an aggregate along with a version timestamp. As and when needed, an application can consult the history to determine the latest modification.

When synchronicity between servers in a cluster is in question due to network constraints, vector clocks are seen as a way out. Each server in the cluster maintains a count of the updates it has applied, which other servers can refer to, thereby avoiding conflicts.

What ‘schema-less’ actually means

Onus shifts to Application-side

In NoSQL databases, data structures and aggregates are created by applications. If an application is unable to parse data from the database, there is a schema mismatch all the same; only that it is encountered at the application end.

So, contrary to popular perception, the schema of the data still needs to be considered when refactoring applications.

That applications have the freedom to modify data structures does not obviate the need for a disciplined approach. Undisciplined changes in structure invite undesirable situations: they can complicate data access logic and leave a lot of non-uniform data in the database.
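A small sketch of what 'schema-less' means in practice, with invented document fields: the store happily returns whatever structure was written, and it is the application's parsing code that discovers, and must defend against, a missing or renamed field.

# The database enforces no schema; the application's read path does.
customer_docs = [
    {"id": "c-1", "name": "Asha", "phone": "98200-00000"},
    {"id": "c-2", "full_name": "Ravi"},          # older/renamed field, no phone
]

def display_name(doc: dict) -> str:
    # The 'implicit schema' lives here, in application code.
    name = doc.get("name") or doc.get("full_name")
    if name is None:
        raise ValueError(f"unparseable customer document: {doc['id']}")
    return name

for doc in customer_docs:
    print(display_name(doc), doc.get("phone", "no phone on record"))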

Wednesday, September 27, 2017

Big Data, Bigger Names

Sudhir Raikar, IIFL | Mumbai | March 08, 2016 09:15 IST

Even as companies big and small are busy declaring their Big Data initiatives ahead of driving them, IIFL pays tribute to a few nonconformist champions – some central, others tangential – who together lend meaning and substance to the over-chewed buzzword, thanks to their intuition, insights and inquisitiveness.





In a world of unabashed corporate antagonism, replete with umpteen “founding” and first-mover claims to breakthrough ideas, concepts or methodologies, certain mavericks stand out for their quiet authority.

Like computer scientist John Mashey, creator of the ASSIST assembler-language teaching software and author of the PWB Unix shell, or "Mashey shell". He is widely believed to be the father of the term Big Data, having christened it in 1994 in a remarkably matter-of-fact fashion while he was chief scientist with Silicon Graphics, then a hot and happening Valley player working on Hollywood special effects and spy surveillance systems, and hence playing with a lot of data.

Devoid of any academic attribution save for numerous technical talks, thankfully available on websites devoted to technical research, Mashey has only his unflinching conviction to fall back on. He doesn’t need to, simply because he’s not staking any claim. Instead, he selflessly right-sizes the imagination of people keen to confer the founding title on him, humbly summarising the coinage as only an attempt to settle on an all-inclusive phrase to convey the explosive growth and advancement in computing. This hiking, biking and skiing enthusiast is too busy with his intellectual and creative pursuits to seek reverence for his prescience. The introduction slide from one of his technical presentations (http://www.slideshare.net/amhey/big-data-yesterday-today-and-tomorrow-by-john-mashey-techviser) is a good window into his talent and temperament.




Like Gartner data analyst Douglas Laney, who first recalled Mashey’s name - in the context of big data - in a media correspondence. Laney is the author of the pioneering 2001 research note 3-D Data Management: Controlling Data Volume, Velocity and Variety, and among the earliest to discern that, more than growing volumes, it was the data flow speeds, thanks to the collective handiwork of e-commerce and the post-Y2K ERP application boom, that posed a real challenge to data management teams worldwide. As expected, several vultures from the unabashedly ambitious marketplace claimed Laney’s research as their own, peddling muddled replications and variations of his 3-V (Volume, Velocity and Variety) framework. Laney’s retort befits his nonconformist nature: he has posted the contents of his original paper (sadly no longer available in Gartner archives) “for anyone to reference and attribute”. Here it is: http://blogs.gartner.com/doug-laney/deja-vvvue-others-claiming-gartners-volume-velocity-variety-construct-for-big-data/

Like etymologist, editor and Yale researcher Fred Shapiro who traces the origin, development and spread of words as a means to study intellectual evolution, not for academic posterity.

Like University of Pennsylvania economist Francis X. Diebold, who initially claimed to have coined the term in his paper “Big Data Dynamic Factor Models for Macroeconomic Measurement and Forecasting,” but later wrote another research paper to humbly reverse the claim, circuitously acknowledging Mashey’s contribution. To quote him, “The term “Big Data,” which spans computer science and statistics/econometrics, probably originated in lunch-table conversations at Silicon Graphics Inc. (SGI) in the mid 1990s, in which John Mashey figured prominently.”

And last but not the least, like award-winning journalist Steve Lohr, author of the definitive software chronicle “Go To: The Story of the Math Majors, Bridge Players, Engineers, Chess Wizards, Maverick Scientists and Iconoclasts — The Programmers Who Created the Software Revolution” and “Data-ism: The Revolution Transforming Decision Making, Consumer Behavior, and Almost Everything Else.”

Mashey’s deep connect with Big Data came to light through Lohr’s perceptive 2012 search for the term’s origins in loads and loads of digital archives. It was at Lohr’s behest that Shapiro dug out several digital references to trace the origin of Big Data. When he could not come up with anything conclusive, Lohr approached people with knowledge of the subject matter, and Diebold and Laney were among the many who responded.

Unfazed by the inconclusive results of his hunt, Lohr kept it going, looking for the two words, not merely used as a pair, but used in a manner that would connote the essence as we know it today: massive volumes of structured and unstructured data that move too fast and call for new ways of management. Such usage, Lohr believed, could only be steered by someone with a computing context. Precisely why he zeroed in on Mashey, not on other intriguing but out-of-context references like these two lines from bestseller author Erik Larson’s Harper’s Magazine piece on mailbox junk spread by the direct-marketing industry: “The keepers of big data say they do it for the consumer’s benefit. But data have a way of being used for purposes other than originally intended.”

Hats off to Lohr for his inquisitive and informed search for the name of a phenomenon that’s now a household name across spheres. Companies flaunting their smallest of Big Data initiatives would do well to learn from Mashey’s prolific nonchalance and Laney’s altruistic activism. Armed with the duo’s frame of mind, they would be in a better position to lock horns with the multihued Big Data challenges including curation, updation and integration. Read all about Lohr’s account in this dated but delightful piece: http://bits.blogs.nytimes.com/2013/02/01/the-origins-of-big-data-an-etymological-detective-story/?_r=0

“To me, bancassurance is more about going deep rather than going wide”

courtesy: http://www.indiainfoline.com/article/editorial-interviews-leader-speak/vighnesh-shahane-ceo-wholetime-director-idbi-federal-life-insurance-117092600279_1.html



A former cricketer known for his Jeff Thomsonesque sling-arm bowling action, the 48-year-old CEO of IDBI Federal Life Insurance, Vighnesh Shahane, seems to have maintained a good line and length in managing the affairs of a joint venture - between IDBI Bank, Federal Bank and Belgian insurance major Ageas, three entities with diametrically diverse legacies bound by an intrinsically common goal - which is also believed to be contemplating a stake sale. Shahane’s clarity of conviction is evident in the way he pinpoints key challenges and opportunities of the insurance space, and outlines his corporate plans and priorities in this interaction with Sudhir Raikar. Edited excerpts...

Do you feel your unflinching bancassurance conviction has stood you in good stead, at a time when most insurers have gone the ULIP way?

I will answer this question in two parts. On the Bancassurance channel, established infrastructure and loyal customer base obviously make it a preferred choice of life insurance companies. In our case, being a joint venture of two prominent banks ‐ IDBI Bank and Federal Bank ‐ and Ageas, we had ample room for growth amid high competition and our performance bears testimony. We were among the few life insurance companies to break even in a mere five years of operations. Our gross written premium (GWP) has almost doubled in the last three years. In fact, for the quarter ending June FY18, we recorded a 65 per cent year‐on‐ year increase in new business premium (NBP) in the individual insurance segment.

Talking about ULIPs, we truly believe that it’s a great product in its current avatar. However, customers still lack a good understanding of the ULIP nitty-gritty. There have been instances when they came back with grievances. We don’t intend to chase ULIPs at the cost of jeopardizing our bancassurance relationship and hence, we prefer to sell them to discerning customers who understand the product well and are not swayed by transient market volatility. During Q1 FY18, ULIPs comprised 17% of our total New Business Premium.

Does the cocoon of high persistence, low-cost banca channel unknowingly restrict the scope for growing the digital and agency channels?

High persistency and Banca channel are our strategic growth drivers, not really a cocoon limiting our creative thinking. We continue to leverage other growth channels. We have a growing digital channel and an agency channel, both of which we are nurturing in a calibrated and profitable manner. Besides, we also have other channels like Group, Broking and NR, and we were one of the earliest entrants into the newly introduced POS channel. We are also on the lookout for new bancassurance tie ups to strengthen our distribution.

Do you expect the online route to fetch better outcomes in the time to come, more so if customers demand simple, transparent and fast interactions?

Life insurance is essentially a push product, requiring a face‐to‐face interaction. Products require long‐term commitment and hence assistance from some expert acts like an assurance for the buyer. I don’t see human intervention and dependency reducing immediately, unless AI or any other technological innovation suddenly transforms the purchase experience. Having said that, there’s great potential for simpler and pure term products in the online space. This is because of the increasing awareness about the need to safeguard the financial future of one’s loved ones as also the low‐cost awareness surrounding online products. I strongly feel there’s immense scope for product innovation through digital.

How is IDBI Federal Insurance placed on the tech innovation front?

For us technology is driven by purpose. Being a medium‐sized company, it is pertinent we drive tech innovation only after thorough evaluation. To avoid needless rollout delays, we have a dedicated team focused on ushering in new ideas and innovation. However, every implementation is subject to scrupulous evaluation.

This year we have launched two new IT initiatives. The first is our mobility platform whereby we have developed a tablet application to enable our sales teams in selling our products on the digital platform. We have named this “On the Go” keeping in mind the real benefit it brings to the salespeople and their customers. We have conducted a pilot of this tablet‐based sales model and the results are very encouraging. Interestingly, our late entry into this platform helped us learn from others which in turn helped us launch it in record time. Where other companies have spent years in solution development, we launched the pilot in just 7 months. We shall go live with 100 users in Banca and gradually cover the entire sales force.

The second initiative is the new IDBI Federal website. Our new website is built for the digital consumer and addresses all kinds of visitors – explorers, buyers and on‐boarded customers. The new front end (that the customers see) is supported by a completely new backend. Not only is the look and feel new‐age, the engine driving it is also state‐of‐the‐art technology, developed keeping the future in mind. Besides these, we have also implemented a Workflow Management System which has reduced our turnaround time for issuance by 75%.

What are your thoughts on the fate of open architecture in India?

Open architecture has not really taken off in India. To me, bancassurance is more about getting deep rather than going wide. Banks have their own products too; life insurance is not their core offering. So, it requires lot of time and effort to egg them on to sell life insurance products. It is essential for a bank to tie up with an insurance company with which it has strategic and cultural alignment, and build on the relationship thereafter.

What’s the biggest distribution-related challenge in your reckoning?

The biggest distribution challenge for IDBI Federal is how to enhance the productivity of the existing distribution network. As I mentioned earlier, though bancassurance is the largest contributor to our business, we are still scratching the surface. To realise the full potential of this channel as also other channels, it’s imperative to enhance their productivity.

You seem to have steered clear of the Health & Pension space.

There is no plan as of now. Medical costs are increasing and there are good opportunities. But it’s not our playing field as of now. Likewise, we don’t have immediate plans to go deeper in rural areas.

What was the underlying thought behind your claims guarantee scheme? How has it fared?

One of the biggest challenges that the industry faces is how to woo the customer back. For the customer, the most critical part of their relationship with their insurer is at the point of making a claim. Through our claims guarantee scheme, we aimed to settle claims in just eight working days. In case we fail, we would pay an interest of eight per cent per annum on the death claim amount for each day of delay beyond eight working days. We have not had to pay a single penny as interest since the launch of this initiative in 2014. On its positive impact on the brand, IDBI Federal was declared one of top ten most trusted life insurance companies of India as per Economic Times Brand Equity Survey.

How do you see life insurance products evolving over time? How disruptive would the waves of Big data, Machine learning, IoT and value-added analytics prove for the sector?

Digital is not just the purview of the handful of people in the digital team. It is a culture that needs to be spread across the organisation. We may introduce many initiatives in the ‘digital’ space but very few would succeed unless we have digital embedded within the organisation’s culture.

Digital and technology are double edged swords. With every step in the positive direction, there is a potential downside that you need to shield yourself against. In the case of IT, the downside comes in the form of cyber risk and the threat on data security. One, therefore, must tread the digital path with care and after thorough evaluation. It’s not an ‘either or’ business case, a careful balance needs to be maintained.

What’s also important in the race to get more tech oriented and digitally savvy is the relentless focus on the consumer. Else, a business may get overawed by technology and could adopt it without a strong consumer benefit attached to it. We are wary of this and always keep the consumer filter on while evaluating tech.

Where do you see IDBI Federal in three years from now, vis-a-vis competition from public and private players?

Nobody can predict what the future holds despite having ambitious growth plans for the company. Our gross written premium (GWP) has almost doubled in the last 3 years. In 2013‐14 our Total Premium was Rs. 826 crore, and in 2016‐17 we closed the year at Rs. 1565 crore. Two years ago, this seemed impossible, but we did it. Our basic goal is to keep performing better than the industry average.

Do you expect steady growth both in NBPs and Renewals going forward or would it be skewed in favour of one of these?

NBP grows faster than renewals in early years but as a company gets tenured, the renewal growth becomes significant. Our persistency across buckets is one of our strengths. Our surrender ratio is also one of the lowest. We have an equal focus on new business and the business staying in our books.

What are your views on the possibility of consolidation in your sector in the coming time? What's your take on the inorganic route to growth?

Consolidation and listing are the new normal for the insurance industry. Speaking of inorganic growth, we have no plans as of now. Our immediate priority is to fortify Banca, re‐energize Agency as well as incubate new channels. We are unflinchingly focused on profitable, all‐round growth and value addition for our shareholders, customers and employees.

Friday, September 22, 2017

Dear Pradyuman


We know you now keep vigil from up above. We know you have had to take matters into your own hands, going by the sorry state of affairs down below. Things seem to be falling in place only because you are now in charge of your own case. With the CBI now in the picture, we hope justice is on its way. But the larger truth continues to haunt us.





As a nation of rogues across spheres - polity, education, medicine, health care, judiciary, police, business, art, leisure, media, sport, science, technology, religion and spirituality included - and of helpless bystanders like me who can do nothing more than pay condolences, we have collectively failed you. All our progress since independence - strides in outer space, laughable IT superpower claims, rich heritage & culture ball talk - has come to naught. Please forgive us. And please forgive our talk show specialists (especially school principals, educationists, columnists and tinsel town celebrities) for their politically correct media bytes, dramatic posturing and even pseudo poetry.

We were introduced to you only after your demise, but it didn't take long to realize that you are very special, just like your name Pradyumna, arguably the only three-lettered Sanskrit word with all letters conjunct (जोडाक्षर). Please give us the strength to come to terms with the fact that you are no longer with us.


Dear Bloomberg Businessweek

On September 8, India woke up to one of its worst tragedies in recent times when seven-year-old Pradyuman Thakur was found brutally murdered in the toilet of his school, Ryan International in Gurugram. I feel Bloomberg Businessweek should do an in-depth story on this national disaster as also on the money-minting business of education in India. No wonder, several fraudsters thrive on their nation-wide school and college chains, largely helped by scheming minds, political clout as also unsuspecting (read unmindful) parents who take the 'international' tag at way more than face value.

Your Indian coverage is extremely sketchy anyway (compared to your China bytes) and some of the reports on India's IT challenges in the Trump era and demonetization were pretty mediocre. The best Bloomberg piece on India was Ben Crair's report titled "Maniac Killers of the Bangalore IT Department." It would be great if Ben covers the Ryan episode as well.

Little Pradyuman awaits justice, albeit posthumously. Falling standards of journalism, particularly in India, have made us highly cynical about our expectations from the media. Bloomberg is one sweet exception. Your reportage goes way beyond business matters and seems to trigger actionable insights, unlike many Indian publications and even a few reputed global names. A Bloomberg story could go a long way in making the world aware of Pradyuman's tragedy which is, and should be, ours in the same breath.


Regards