Showing posts with label risk management.

Thursday, February 23, 2017

The Many Ways to Transfer Risk

There are four legitimate ways to treat risk: avoid it, accept it, mitigate it, and transfer it.  If “transferring risk” is thought of at all in cyber security, it is usually about buying an insurance policy.  And in fact cyber insurance is a rapidly growing market, although one with teething problems.  Exactly what losses will be covered, and how will the extent of loss be determined?  Will there be favorable pricing for firms that have a good security program in place, and if so who will determine the effectiveness of the program a firm claims to have?  What about the moral hazard problem:  will insured parties have incentive to be lax about or misrepresent their security programs?  How will rates be determined, given the carriers’ relative lack of loss data, compared to other insured hazards?

Nevertheless, insurance carriers are keen on the opportunity and are developing packages of services that bundle legal advice and incident response with traditional insurance.

There are other ways to transfer risk, some of which look like “buying insurance” in a different guise, and some that look totally different.  Financial institutions and other investors can hedge their investment positions by buying options or other derivative instruments.  Credit default swaps (CDS) can insure a lender against default by a borrower – assuming the seller of the swap has the financial capacity to cover the default.  (Overuse of CDSs, and underestimation of their risk, contributed significantly to the 2008 financial crisis.)

A firm can also transfer risk, either partially or totally, to other firms through normal commercial contracts – other than insurance policies.  Many business-to-business contracts include service level agreements or other assurances of a minimum level of quality, sometimes with financial penalties for non-performance.  The seller may have some ability to negotiate service level terms, depending on its market power relative to the buyer.  I will likely not be successful in demanding a 99% on-time delivery guarantee from Amazon, but Amazon may get one from UPS.

Commercial contracts commonly have disclaimers, representations and warranties that protect suppliers from claims by customers.  Whether such clauses can be used to protect a firm from cyber security risks depends on who has the market power, but also on what is customary and reasonable.  A service provider may get a customer to agree that the customer is responsible for protecting its users’ passwords and network connection points.  More generally, SSAE16 audit reports contain a section on the controls that the service provider relies on the customer to implement.  In other words, “don’t blame me if the controls fail because of something you did.”

Transferring risk using contracts has its limits.  The extent of risk transfer is often limited, either in scope (kind of risk or conditions) or in amount (amount of loss, number of occurrences).  Even if the risk is legally transferred, it may not be practically transferred.  The other party may not have the capacity, financial or otherwise, to absorb the risk.  And even if it does, your firm may still experience some degree of loss.  We may agree that you are responsible for protecting your passwords, but if an attacker penetrates my network due to your negligence, I still have an incident to manage.  Finally, recognize the difference between the probability that a loss will occur and the amount of loss if it does occur.  A conventional insurance policy protects the holder against some portion of the loss amount, whereas a supplier’s commitment to a robust security program should reduce the likelihood that a loss will occur at all.
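The probability/magnitude distinction lends itself to a quick back-of-the-envelope calculation.  A minimal sketch in Python, with purely illustrative figures (none of these numbers come from real loss data):

```python
# Annualized expected loss = probability of a loss event x expected magnitude.
# All figures below are illustrative, not real loss data.

def expected_loss(probability: float, magnitude: float) -> float:
    """Single-scenario annualized expected loss."""
    return probability * magnitude

# Untreated risk: say a 10% annual chance of a $2M incident.
baseline = expected_loss(0.10, 2_000_000)

# Insurance transfers part of the *magnitude*; the probability is unchanged.
insured = expected_loss(0.10, 2_000_000 - 1_500_000)   # $1.5M covered

# A supplier's security commitments reduce the *probability*; magnitude unchanged.
hardened = expected_loss(0.04, 2_000_000)

print(baseline, insured, hardened)
```

Both treatments lower the expected loss, but only the second makes the incident itself less likely; with insurance, you still have an incident to manage.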

Among the four recognized types of risk treatment, transferring the risk to a counterparty is one that is often overlooked as a management option.  Transferring risk is the sibling of avoiding risk, and a strategy well worth considering.  It is easy to fall into the trap of ignoring these two options if cybersecurity is over-delegated to IT engineers.

Saturday, February 18, 2017

Of Clocks and Systems and Risk Decisions


You probably have had the somewhat jarring experience of glancing at a digital clock and a clock with hands one after another.  The feeling can be a little unsettling, if not mildly irritating.  There’s a good reason why, and it tells us something important about how we make decisions.  What’s going on here?

Suppose a digital clock says the time is 2:42.  You probably do a quick mental calculation and think “OK, I have about 20 minutes until my 3 o’clock appointment.”  But if you look at an analog clock, you probably don’t even bother with minute-level precision because you immediately have an intuition of how much time is left until 3.  The digital readout demands just a little bit of cognitive effort, while the analog readout is immediately intuitive.  Some analog clocks don’t even have numbers.

Psychologists have discovered that people have two ways of making decisions, called System 1 and System 2.  System 1 depends on experience and intuition.  It is relatively fast, comfortable, and effortless.  System 2 is more like the scientific method. It relies on data gathering, logic, analysis, and cognitive work.  A lot of people do not like System 2 thinking because it is more work.  “I’m not a math person; I go with my gut.”

There is a time for System 1 and a time for System 2.  System 1 is what you want if you are being chased by a bear. You don’t have time for analysis and you have plenty of hormonal intuition about fight or flight. Forget the analysis, run! 

But System 1 can get you into a lot of trouble.  It is bad for investment decisions and bad for deciding when to go to war.  That’s when you need System 2: facts, data, analysis, logic, formal models.

In making risk decisions, when should we use System 1 vs System 2?  If the consequences of being wrong are small, and we have good intuition, or we must make an immediate decision, System 1 is probably the ticket.  Otherwise, the effort of System 2 will likely have a good payoff. 

But using System 2 is not necessarily hugely burdensome.  Sometimes a quick back-of-the-envelope analysis, or a moment of reflection, is all you need.  After all, that is what you did in reading the digital clock. You can train for it.


For more on Systems 1 and 2, there is no better source than Thinking, Fast and Slow, by Daniel Kahneman.

Thursday, February 2, 2017

Ignorance of the Risk Is No Excuse


A previous note offered a quarterly executive risk review as a simple and pragmatic way to start a risk management program.  A risk review fits naturally into the agenda of the quarterly business review, and it lays a good foundation from which to evolve a risk management program of whatever sophistication and at whatever pace is desired.


The first thing that will come out of the risk review is, “What do we do now to manage our top risks?”  A future note will explore the four general methods of treating risk.  But first we’ll look at the pros and cons of willful ignorance.

There may be a strong inclination to turn a blind eye to some risks.  You may feel that there are some things you do not want to “know” – in quotes because of course you are aware, but you do not want evidence to be created that could come back to haunt you.  Somebody could find that document and require you to address the risk, or worse, accuse you of negligence, because there is evidence that you knew of a risk, or should have known, and did nothing about it.

Management can take a willful-ignorance approach.  But let’s look at the balance sheet. 


There are a few points on the plus side. The executive may have plausible deniability for a time, and gain some time to address many other pressing issues first.  She or he may even get away with doing nothing indefinitely.  In a fledgling enterprise, the executive may calculate that it is more important to establish that the business is viable than to manage certain risks.  If there is no business, risk doesn’t matter.

There are more points on the minus side.  The trend in the investment, risk management, and regulatory environments is toward less patience with ignorance of risk.  All risk management frameworks require regular executive review of risk.  It is an important part of corporate governance.  Big customers and regulators will demand a risk management program.  Investors too want to understand their risk before committing funds to your enterprise, and cyber risk is now prominent in everybody’s awareness.  Especially bankers!

Furthermore, it may not make good management sense to ignore a risk.  Most risks do not get better with time, and some can blow up to jeopardize the very existence of the company.  Imagine a breach of confidential data just when you are trying to sign that first marquee customer.  Finally, there is value in being able to sleep at night, and knowing what your problems are is better than worrying about what they may be.

Turning a willful blind eye to a risk -- “rejecting” it -- is not the same as knowingly accepting a risk, which may be the best way to treat it.  It is management’s decision whether to treat or reject a risk, but rejecting is not a winning strategy in the long run.

Thursday, July 30, 2015

Managing the Inevitable Cyber Losses

There have been many recent breaches, in each of which tens of millions of Americans have had their personal information compromised.  New ones are all too frequent.  And in several of the most notorious recent cases, months and even years have elapsed before the breach was discovered and dealt with.  The attackers are evolving new threats faster than defenders are reacting – and to some extent faster than they can react in today’s world.

We can draw three conclusions from this trend. 

First, since it appears highly likely that this tide of personal-information disclosures will continue, organizations must become much better at incident response.  It seems that the information security profession has put higher priority and more resources into prevention and detection – especially technology – than into incident response.  One can speculate why this would be so, but the fact remains that the magnitude of loss increases with the time taken to respond effectively.  Therefore, since breaches will remain inevitable for the foreseeable future, the key to managing the magnitude of the losses is to respond rapidly and effectively.  If you can’t prevent holes in the boat, at least plug them fast.

Second, we have to become much better at detection.  This means not only detecting a breach or intrusion that has already occurred, but also detecting its precursor events.  Thanks to Lockheed-Martin’s application of kill-chain analysis to cyber breaches, we now understand more clearly that exfiltration of data is a multi-phase affair that requires success in several steps in sequence, and therefore may take weeks or months to pull off.  This gives the defender two advantages: multiple ways to defend against and defeat an attack, and time to do so.  But that depends on an ability to detect anomalous events when they occur, as well as on an effective capacity to respond quickly.  So organizations must become better at detecting anomalous events.  But crucially, “detection” does not stop with some piece of technology logging an event or even firing off an alert.  A person has to make a determination that “We got a problem here, Houston,” and get resources dispatched to deal with it.
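The multi-phase structure of an attack is exactly what makes correlation across events valuable: any single stage alert is weak evidence, but one host progressing through several stages is a strong signal.  A minimal sketch, assuming a hypothetical (host, stage) event stream; the stage names loosely follow the Lockheed Martin kill chain, and the threshold is arbitrary:

```python
# Sketch of multi-stage (kill-chain) correlation: flag a host once it has
# been observed in several distinct kill-chain phases. Event format and
# threshold are hypothetical, for illustration only.

KILL_CHAIN = ["recon", "delivery", "exploitation",
              "installation", "c2", "exfiltration"]
STAGE_INDEX = {name: i for i, name in enumerate(KILL_CHAIN)}

def correlate(events, threshold=3):
    """events: iterable of (host, stage) tuples in time order.
    Returns hosts seen in at least `threshold` distinct phases."""
    seen = {}        # host -> set of observed stage indices
    flagged = set()
    for host, stage in events:
        seen.setdefault(host, set()).add(STAGE_INDEX[stage])
        if len(seen[host]) >= threshold:
            flagged.add(host)
    return flagged

events = [("10.0.0.5", "recon"), ("10.0.0.9", "recon"),
          ("10.0.0.5", "delivery"), ("10.0.0.5", "c2")]
print(correlate(events))  # {'10.0.0.5'}
```

A real correlation engine would also enforce phase ordering and time windows; this sketch only counts distinct phases per host.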

This brings us to the final point.  Detection and logging systems are famously inundated with thousands or millions of false positives and irrelevant alerts for low-level threats.  This situation is made to order for susceptibility to well-known human failings of inattention and fatigue.  It is a mystery why, in a profession and an industry so imbued with technology, better technology is not available to dramatically increase the signal-to-noise ratio, and do it cheaply (which means it cannot depend on having expensive security engineers continuously tweaking rules). 
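Even without better vendor technology, a cheap triage pass can raise the signal-to-noise ratio somewhat.  A minimal sketch, with hypothetical alert fields and arbitrary thresholds (not modeled on any real SIEM):

```python
# Cheap noise reduction: collapse repeated alerts and surface only rules
# that are high-severity or firing in unusual volume. Field names and
# thresholds are illustrative.

from collections import Counter

def triage(alerts, severity_floor=7, burst_threshold=50):
    """alerts: list of dicts with 'rule' and 'severity' (0-10).
    Returns the set of rules worth a human's attention."""
    counts = Counter(a["rule"] for a in alerts)
    worth_attention = set()
    for a in alerts:
        if a["severity"] >= severity_floor:
            worth_attention.add(a["rule"])   # always escalate high severity
    for rule, n in counts.items():
        if n >= burst_threshold:
            worth_attention.add(rule)        # bursts of low-level noise
    return worth_attention
```

The point is not the thresholds but the shape: a small, automatic filter in front of the human, so that attention and fatigue are spent on a short list rather than the raw stream.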

Here are some action take-aways:
1. CISOs:  Review and test your incident response plans.  Does your IR plan address the highest-priority threat scenarios?  Try exercising it on “garden-variety” incidents, like lost laptops, to see if it works and how it can be improved.  Hold at least a table-top test once a year.

2. CIOs and CISOs:  Review the balance between investments in security technology (SIEMs and IDPSs, for example) and the funding for their effective use once they are installed.  Do not fall victim to the set-and-forget fallacy in which, once a system is installed, one thinks “well, that problem is solved now.”  Do you have capable staff assigned to manage the technology, and do they have the training, the time, and the management expectation to do the job?

3. Security technology suppliers:  Create products and services that help your customers initially configure their detection devices with good starter sets of filtering rules, and that keep those rules updated frequently.  IDPS operators should be able to get at least daily updates of threat signatures discovered by other owners of similar equipment, and ideally by the entire security community.

4. Legislators, staff aides, and policy analysts:  Give us laws that protect organizations, especially corporations, from liability if they contribute threat signatures to a common repository.  The low-bandwidth, high-latency sharing of information security knowledge that occurs in conferences and white papers is fine, but it needs to be complemented with daily operational updates.  If a small but critical mass of organizations contributed in near-real-time to a common repository of threat signatures that was available to all, the time from threat discovery to effective defense could be dramatically reduced.  This is one way to turn the asymmetry of the threat against the attackers.
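The contribute-and-pull cycle described above can be sketched in a few lines.  This assumes a hypothetical repository keyed by signature hash; real exchange formats (STIX/TAXII, for instance) are far richer:

```python
# Sketch of the near-real-time sharing cycle: each participant contributes
# newly discovered signatures to a common pool and pulls back everyone
# else's. The repository is a plain dict keyed by signature hash, purely
# for illustration.

def sync(local_sigs: dict, shared_pool: dict, newly_discovered: dict):
    """One sync cycle: contribute local discoveries, then pull the rest."""
    shared_pool = {**shared_pool, **newly_discovered}        # contribute
    pulled = {k: v for k, v in shared_pool.items() if k not in local_sigs}
    return {**local_sigs, **pulled}, shared_pool             # consume

local = {"sig-a": "rule A"}
pool = {"sig-a": "rule A", "sig-b": "rule B"}   # someone else contributed sig-b
local, pool = sync(local, pool, {"sig-c": "rule C"})
print(sorted(local))  # ['sig-a', 'sig-b', 'sig-c']
```

Run on a short cycle, every participant's defenses converge on the union of everyone's discoveries, which is precisely the latency reduction the paragraph argues for.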