Tuesday, October 10, 2017

Social Contract Beats Regulation for Cybersecurity

A Review of The Cybersecurity Social Contract: Implementing a Market-Based Model for Cybersecurity, Internet Security Alliance, 2016

This may be the most important book on cybersecurity ever written.  It echoes many a truth that the risk manager on the front lines experiences daily.  It not only resonates, it recommends a better way.

Whatever we are doing now in cybersecurity, it is plainly not working.  In just a few days we learn that Equifax lost control of 145 million consumer credit records, systems of the Securities and Exchange Commission were breached, and NSA secrets were lost.  Almost unnoticed, Yahoo announced that a 2013 attack took data on not 1 billion but 3 billion accounts. 

As Equifax ex-CEO Richard Smith endured a multi-day grilling in Congress, the calls for more government regulation and jail time were both loud and inevitable.  Meanwhile, calls for jail time for senior government officials whose systems were also hacked are, to put it mildly, mild.  A public debate on the data collection practices of the credit reporting industry may be due, but so is scrutiny of the notion that more government regulation is the way to fix cybersecurity.

The Structure of the Argument

The Internet Security Alliance is an international multi-sector trade association that promotes new thinking about public policy toward cybersecurity.  The Cybersecurity Social Contract is the ISA’s manifesto for a reformulation of public policy on internet security based on the idea of a social contract, as opposed to the regulate-audit-penalize framework that prevails in the United States.

The book has both the strengths and limitations of being written by a committee.  The best is the first section, two chapters written by Larry Clinton, ISA’s President, making the case that the current regulatory approach, rooted in 19th-century technology, does not and cannot work for cybersecurity, and that a new approach based on the idea of a social contract can.  The second section is a series of chapters devoted to key industries, including defense, finance, electric power, health, telecommunications, IT, manufacturing, and the overlooked food and agriculture sector.  Section II also has a strong chapter addressing the critical shortage of cybersecurity talent.  Chapters in the third section cut across industries along such lines as corporate governance, compliance audits, and cyber insurance.  One of its best chapters examines what works and what doesn’t in public-private partnerships.  The appendices take the form of briefing memos to the new President (this was in 2016) on these topics.

Regulation Does Not Work

That the current approach of regulate-audit-penalize does not work is manifest from the headlines.  The current model of regulation is backward-looking and moves too slowly to keep up with cyber threat technology.  It is based on the implicit but false premise that government experts know best.  But they cannot secure their own systems, even with the clear and detailed mandates of the National Institute of Standards and Technology.  Industry executives are rightly jaundiced when regulators come to them with dirty hands and say, “do what we say, not what we do.”

A central problem of the current model is its economic irrationality.  NIST and other standards bodies issue lists of hundreds of controls, and audits drive firms toward compliance with all of them.  In practice the only passing grade is 100% compliance, regardless of the stated intent that companies should manage to the risk, not to a checklist.  “Guidance” is merely the iron fist of the regulator wrapped in a velvet veneer of PR-speak.  The FFIEC issued guidance to financial institutions for the “voluntary” adoption of the NIST Cyber Security Framework, and not long after, the FDIC and OCC made its use effectively mandatory in bank examinations.  NIST has not assessed the cost-effectiveness of the CSF practices, as required by the executive order that created the framework.  In fact, tests of economic rationality are generally absent from cyber regulations; the very idea seems inimical to regulation as practiced.  Instead, frameworks like Open FAIR, which quantify the economics of security investments, should be broadly adopted.
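
To make that concrete, here is a minimal sketch (mine, not the book's) of the kind of quantified analysis Open FAIR encourages: estimate loss event frequency and loss magnitude as ranges, simulate annualized loss exposure, and compare it with and without a proposed control.  The numbers and the uniform draws are simplifying assumptions; FAIR practitioners typically use calibrated ranges and skewed distributions.

```python
import random

def simulate_ale(freq_lo, freq_hi, loss_lo, loss_hi, trials=100_000):
    """FAIR-style Monte Carlo: annualized loss exposure (ALE) from
    ranged estimates of loss event frequency and loss magnitude."""
    total = 0.0
    for _ in range(trials):
        events = round(random.uniform(freq_lo, freq_hi))  # loss events this year
        total += sum(random.uniform(loss_lo, loss_hi) for _ in range(events))
    return total / trials

# Hypothetical inputs: 0-4 incidents/year, $50k-$500k per incident.
baseline = simulate_ale(0, 4, 50_000, 500_000)
# A control believed to halve incident frequency.
mitigated = simulate_ale(0, 2, 50_000, 500_000)

print(f"ALE baseline:  ${baseline:,.0f}")
print(f"ALE mitigated: ${mitigated:,.0f}")
print(f"The control is economically rational if it costs less than "
      f"${baseline - mitigated:,.0f} per year.")
```

That last line is the test regulators rarely apply: a control earns its keep only if it costs less than the loss exposure it removes.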

The result, in case after case, is a compliance structure that is unaffordable to all but the biggest companies, and that therefore erects yet more barriers to entry for the smallest, most innovative firms that create most of the new jobs and new products in our economy.  The effect is manifest in defense, finance, health, and electric power.  This regulatory burden creates a two-tier model in which only “the bigs” have the money and lobbying power to win.  It’s a prescription for economic stagnation, and we’ve written it for ourselves.

A New Social Contract

The doer is hectored by the nattering critic who finds fault with everything.  It’s easy to criticize, but what’s better?  The ISA’s answer is a new social contract, which means arrangements between government and industry founded more on incentives and less on mandates, such as rate-of-return regulation that gave us universal and reliable telephone services and electric power.  The ISA offers many suggestions, some specific, others vague, for what this new social contract might entail.  A few:

  • The government, particularly the Defense Department, could require suppliers to have cyber insurance, just as it requires other forms of insurance.  This could stimulate the development of the nascent cyber insurance market, and be a model to the broader economy.
  • Regulatory audits could incorporate a maturity model in which companies that show a serious commitment and a record of improvement on security would be rewarded, in a sense, by a lighter-weight audit in the next year. 
  • Rules and regulations should be made practical for small and medium-sized businesses.
  • Let economic rationality prevail.  Practices of the NIST Cyber Security Framework should be assessed for cost-effectiveness, though the results may depend on circumstances and change with time.  Regulators and other stakeholders should accept competent situation-specific cost-effectiveness analyses of controls.
  • The Department of Energy should expedite renewal of security clearances for senior power-utility executives changing companies.  Long delays hamper their effectiveness in their new companies.
  • The federal government must accept the role that only it can play in protecting the portions of the nation’s critical infrastructure that are beyond the power of the private sector to defend.  Only the federal government has the resources, expertise, and legal and political power to defend the public networks used by banks, utilities, health care providers, and defense contractors against nation-state attacks that implant malware and steal intellectual property and personal information.
  • Shockingly inadequate federal resources devoted to cybersecurity must be ramped up.  Total private sector spending on cyber security is estimated at $120 billion a year, compared to just $13 billion for the federal government, most of which is for cyberwar-fighting.  DHS spends only $1.3 billion on protecting government systems and national infrastructure combined.  By comparison, just two banks spend that much between them.  Agency IT budgets must include maintenance funds so that upgrading from Windows XP does not literally require an act of Congress.


Government, Clean Up Your Act

And the government has to get its act together. Congress has seventy-eight committees and subcommittees that have some jurisdiction over cybersecurity.  There are about as many government agencies when you add 50 states to a dozen or so federal departments and agencies. New laws are needed to unify the patchwork of dozens of state disclosure requirements and offer some liability protection for sharing threat intelligence. 

The US approach to regulation has usually been sector-specific, so a company may be subject to multiple costly, overlapping, and sometimes conflicting rules.  Every agency seemingly wants its own patch of cybersecurity turf, regardless of its competence to manage it.  How much of the Sturm und Drang of the Equifax episode would be taken care of by a cross-industry approach to privacy like the EU’s General Data Protection Regulation?  The GDPR may go too far, but at least it is the same for all industries.

A Few Criticisms

This is where The Cybersecurity Social Contract has some weaknesses.  Several of the chapters recommend tax incentives and other kinds of inducements without offering specifics.  It’s too easy to ask for a special tax break.

The chapter on electric power utilities pleads for the Department of Energy to remain the main point of contact for security.  This seems self-serving and destined to take us right back to the welter of sector-specific regulations we have now. 

The chapter on auditing cyber controls was also written by a committee and reads like it.  It is filled with impenetrable audit-speak.  It calls for cybersecurity examination reports (which generate fee income) to be voluntary.  But what’s voluntary now becomes a de facto mandate later, and the audit firms know it.  The examinations would be based on an evolution of “trust services criteria” defined by the American Institute of Certified Public Accountants and on the longstanding framework for internal control of the Committee of Sponsoring Organizations (COSO) – but that’s just what SOC 2, and SSAE16 before it, did, so what’s new here?  Trust us; just wait a bit for the next version.

Some Cause for Optimism

The book and this review end on a high note, that being the chapter on best practices in public-private partnerships.  If we take the main message to heart, that we should develop the pieces of a new social contract, the question is how to do it.  Larry Clinton once again comes to the rescue by extracting lessons for what worked and what did not from a surprising variety of past efforts.  What does not work: keeping participants compartmented from each other, unclear or unstated selection and decision criteria, lack of access to contributed information, lack of openness to discussion.  DHS looms large here.  What does work:  joint drafting of language by industry and government officials, personal commitment, consensus decision-making, early engagement with industry, starting without ideological preconceptions, soliciting written input, collaboration in developing objectives and priorities, building on past efforts, following through, having adequate support.  NIST is an exemplar.  In short, commitment and open collaboration work, hidden agendas and secrecy don’t.  The only surprise is that we need to be told this. 


If you despair of progress in cyber security, read this for a solid dose of reason for optimism based on fact and logic.

Wednesday, April 5, 2017

The Three Musketeers of Risk Mitigation

Once you have identified your risks, determined which cannot be accepted, and decided that avoiding them or transferring some of them to someone else is not possible or desirable, the last step in the risk decision process is deciding how to mitigate them.  Mitigating a risk means reducing the probable frequency of the loss event (don’t store gasoline in the file room), reducing the likely amount of damage if the loss event occurs (have fire extinguishers), or both.
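
A back-of-the-envelope illustration (my numbers, purely hypothetical): expected annual loss is roughly frequency times magnitude, and each kind of mitigation pushes on one of those two factors.

```python
# Expected annual loss = event frequency x damage per event (hypothetical figures).
frequency = 0.5        # fire-loss events per year (one every two years)
magnitude = 200_000    # dollars of damage per event

baseline           = frequency * magnitude         # $100,000/year
no_gasoline_stored = (frequency / 5) * magnitude   # fewer fires start:  $20,000/year
extinguishers      = frequency * (magnitude / 4)   # fires do less harm: $25,000/year

print(baseline, no_gasoline_stored, extinguishers)
```

Either lever lowers the expected loss; many mitigations, like sprinkler systems, pull on both at once.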

A control is something you do to mitigate a risk.  The problem for the risk manager is that there are so many controls to choose from – hundreds – and they all have their advocates.  Sometimes the advocates are people in your own organization, maybe engineers enamored of one defensive technology or another.  Sometimes they are regulators, because regulations may encourage or mandate specific controls.  Often they are vendors of technologies or services that claim to have certain risk mitigation benefits.  But before getting into the minutiae of controls, it’s well to survey the landscape.
 
Although there are various taxonomies for controls, the most useful one describes what the organization does to implement the control.  Seen this way, controls fall into three groups:  administrative, physical, and technical.  Like the Three Musketeers, they are all needed and they reinforce each other.  All for one, and one for all.

Administrative

Administrative controls are ones you implement by taking administrative actions.  (That was helpful, wasn’t it?)  They are best described by examples.  Policies, procedures, standards, and guidelines are issued by management, usually and best in writing, to guide or constrain the behavior of workers.  They often cover background checks, disciplinary and termination procedures, confidentiality agreements, and acceptable use rules.  Training is another good example.

Advantages of administrative controls are that they are often relatively quick, easy, and inexpensive to implement.  Need a policy on background checks?  Just write it, get the VP of HR and CEO to agree, and tell the HR staff to do it.  Written administrative controls can also be very useful to demonstrate management commitment, which is important to auditors and regulators, so long as there is evidence that management is truly supportive.

The main disadvantage of administrative controls is that they are often hard to implement completely and effectively.  You can have a policy that all passwords be at least 8 characters long, but making sure this is actually done without some technological enforcement is quite a different matter.  So the main criticism of administrative controls is that they are unreliable, and so require constant follow-up, which can be expensive and annoying to all concerned.
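
To make the password example concrete (my sketch, not from the article): the administrative control is the written policy; the technical control that makes it reliable is an enforcement check like the one below, run wherever passwords are set.  The letter-and-digit rules are hypothetical extras a policy might add.

```python
import re

MIN_LENGTH = 8  # the number the written policy specifies

def password_meets_policy(password: str) -> bool:
    """Technical enforcement of an administrative password policy."""
    if len(password) < MIN_LENGTH:
        return False
    if not re.search(r"[A-Za-z]", password):  # at least one letter (hypothetical rule)
        return False
    if not re.search(r"\d", password):        # at least one digit (hypothetical rule)
        return False
    return True

assert not password_meets_policy("short1")
assert password_meets_policy("longenough1")
```

With the check in place, the policy enforces itself; without it, you are back to constant follow-up.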

Nevertheless, administrative controls have their place, and in fact are essential since all controls must be supported by management to be effective, and controls derive from policies issued by management.

Physical

Physical controls are generally about restricting physical access to, or physically protecting, personnel, facilities, and physical assets (as opposed to logical assets like information).  They include locks and keys, guards, CCTV cameras, fences, burglar alarms, and fire escapes.

Physical controls are commonplace so people are used to them and accept them.  Many kinds of physical controls are either customary (fences), mandated by regulation (fire extinguishers) or required for insurance (proximity to a fire station), so budgeting and implementing them usually do not meet much resistance.

But physical controls can be expensive (seismic reinforcement in earthquake country) and tend to be inflexible.  Once you build a masonry wall, you can’t move it.  Physical controls also often have an administrative element (security guards have to be managed), and may not work without it (CCTV cameras are no good if nobody’s watching).

Physical controls are just as essential as administrative ones.  There is no logical security without physical security.  If anyone could get into your data center, your days would be numbered.

Technical

Technical controls, or what are sometimes called logical controls, constitute the vast majority of what people normally think of as information security, and they are what makes people think that cyber security is only an “IT thing.”  There is a vast and thriving industry of technical control vendors:  firewalls, intrusion detection systems, identity and access management systems, monitoring, logging – the list is endless.  The profusion of options is both a blessing and a curse.  There are many to choose from, but qualifying solutions and coming up with an efficient, synergistic combination can be challenging.

Technical controls tend to be strong where administrative and physical controls are weak.  Properly implemented and administered, they are much more consistent and reliable than administrative controls.  And since they are inevitably software-controlled, they are more flexible and adaptable than many physical controls. 

Then again, technical controls have their weak points.  They need to be implemented correctly and appropriately to the local situation.  They need to be managed, updated, maintained, and watched.  That requires trained staff.  You are not protected just by having a firewall.  It needs to have a good set of rules.  So the costs of implementation, management, and maintenance need to be considered along with the purchase price.
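
As a toy illustration of that last point (mine, not the author's): a packet filter walks an ordered rule list, and a sound rule set ends in default deny, so anything not explicitly allowed is blocked.  The addresses and ports are made up.

```python
# Toy packet filter: first matching rule wins; the default is deny.
RULES = [
    # (source prefix, destination port, action) - hypothetical policy
    ("10.0.0.",  22,  "allow"),   # SSH from the admin subnet only
    ("0.0.0.0", 443,  "allow"),   # HTTPS from anywhere ("0.0.0.0" = any source)
]

def decide(src_ip: str, dst_port: int) -> str:
    for prefix, port, action in RULES:
        if dst_port == port and (prefix == "0.0.0.0" or src_ip.startswith(prefix)):
            return action
    return "deny"  # default deny: nothing else gets through

print(decide("10.0.0.5", 22))      # allow - admin SSH
print(decide("203.0.113.9", 22))   # deny  - SSH from the internet
print(decide("203.0.113.9", 443))  # allow - public HTTPS
```

Buy the same firewall and ship it with "allow everything," and you have spent the money for nothing.  The control is the rule set, not the box.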

As with administrative and physical controls, there is no getting around the need for technical controls either.  After all, information security is all about computers and networks.

All for One, One for All


It is hard to imagine an environment, even the simplest, in which all three types of controls are not needed.  There is no logical security without physical security.  There are no technical, and few physical, controls that do not depend on administrative controls.  And there is no dependable security at all unless management has communicated its expectations in policies.  Each type of control is essential, and each depends on the others.  All for one, one for all.

Tuesday, March 14, 2017

Before We Freak Out on Controls


In previous articles in this series, we’ve talked about three of the four legitimate ways of treating risk – avoiding, transferring, and accepting it – and one illegitimate way, ignoring it.  By “legitimate” I mean acceptable to a regulator or auditor under the right circumstances. I’ve purposely put off discussing strategies for mitigating risk, the fourth risk treatment, for two reasons.

First, most of the security industry – that pretty much means vendors – focuses on controls, that is, ways to spend money to reduce risk.  That is understandable, since vendors are in the business of transferring money from your pocket to theirs.  You might never get the idea from a vendor that there is any way to treat a risk but by spending some money.  But as we have seen, there are other ways.

Second, risk mitigation is a huge topic. It immediately leads us into a welter of risk management frameworks, standards, and control sets.  Among them are the ISO 27000 series, the U.S. National Institute of Standards and Technology special publications (the NIST SP series), the SSAE16 standard of the American Institute of Certified Public Accountants, the Payment Card Industry Data Security Standard (PCI DSS), the Control Objectives for Information and Technologies (COBIT) of the Information Systems Audit and Control Association (ISACA), and others.   All of them are quite happy to help you understand and implement their standards, always with hundreds of pages of documents and usually for a fee.

And that’s just the tip of the iceberg.  Unpack any one of them and you can easily find dozens or hundreds of what they call “controls.”  A control is simply something you do to reduce or limit risk.  Sometimes a single control leads to many individual practices or detailed specifications. Setting standards for security risk assessment is an industry unto itself.

As if the proliferation of standards were not enough, large heavily-regulated enterprises like financial institutions and healthcare providers are wont to visit on their suppliers customized risk-assessment questionnaires and processes, and these questionnaires can easily have hundreds of items.  The Standardized Information Gathering (SIG) questionnaire of the Shared Assessments Program has over 1,000 items at last count.

The language of the standards and questionnaires often conveys the distinct impression that every item is mandatory, despite statements to the contrary.  And of course they are all different enough to preclude a standardized response, but similar enough to offer a glimmer of hope for economies.

It can be a daunting challenge for the executive of a small- or medium-sized company who wants to win the business of “the bigs” in the industry.  How much of this stuff do I really have to do?  How do I even get my arms around the overlapping and seemingly conflicting demands of multiple customers and regulators?  Will I really lose the business unless every employee uses a different 15-character password for every system, changed every month, among scores of other requirements?


For the sake of innovation, entrepreneurship, and the competitiveness of the American economy, it is our mission to help the SMB executive navigate a path through this morass of standards.  Future articles will attempt to contribute to this mission.

Thursday, February 23, 2017

The Many Ways to Transfer Risk

There are four legitimate ways to treat risk: avoid it, accept it, mitigate it, and transfer it.  If “transferring risk” is thought of at all in cyber security, it is usually about buying an insurance policy.  And in fact cyber insurance is a rapidly growing market, although one with teething problems.  Exactly what losses will be covered, and how will the extent of loss be determined?  Will there be favorable pricing for firms that have a good security program in place, and if so who will determine the effectiveness of the program a firm claims to have?  What about the moral hazard problem:  will insured parties have incentive to be lax about or misrepresent their security programs?  How will rates be determined, given the carriers’ relative lack of loss data, compared to other insured hazards?

Nevertheless, insurance carriers are alive to the opportunity and are developing service packages that bundle legal advice and incident response with traditional insurance.

There are other ways to transfer risk, some of which look like “buying insurance” in a different guise, and some that look totally different.  Financial institutions and other investors can hedge their investment positions by buying options or other derivative instruments.  Credit default swaps (CDS) can insure a lender against default by a borrower – assuming the seller of the swap has the financial capacity to cover the default.  (Overuse of CDSs, and underestimation of their risk, contributed significantly to the 2008 financial crisis.)

A firm can also transfer risk, either partially or totally, to other firms through normal commercial contracts – other than insurance policies.  Many business-to-business contracts include service level agreements or other assurances of a minimum level of quality, sometimes with financial penalties for non-performance.  The seller may have some ability to negotiate service level terms, depending on its market power relative to the buyer.  I will likely not be successful in demanding a 99% on-time delivery guarantee from Amazon, but Amazon may get one with UPS.

Commercial contracts commonly have disclaimers, representations and warranties that protect suppliers from claims by customers.  Whether such clauses can be used to protect a firm from cyber security risks depends on who has the market power, but also on what is customary and reasonable.  A service provider may get a customer to agree that the customer is responsible for protecting its users’ passwords and network connection points.  More generally, SSAE16 audit reports contain a section on the controls that the service provider relies on the customer to implement.  In other words, “don’t blame me if the controls fail because of something you did.”

Transferring risk using contracts has its limits.  The extent of risk transfer is often limited, either in scope (kind of risk or conditions) or in amount (amount of loss, number of occurrences).  Even if the risk is legally transferred, it may not be practically transferred.  The other party may not have the capacity, financial or otherwise, to absorb the risk.  And even if it does, your firm may experience some degree of loss.  We may agree that you are responsible to protect your passwords, but if an attacker penetrates my network due to your negligence, I still have an incident to manage.  Finally, recognize the difference between the probability that a loss may occur, and the amount of loss if it does occur.  A conventional insurance policy protects the holder against some portion of the loss amount, whereas a supplier’s commitment to a robust security program should reduce the likelihood that a loss will occur at all.
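
A rough worked example of those limits (my own hypothetical figures): even a well-drafted contractual transfer leaves an expected residual cost, because liability is capped and some losses are simply not transferable.

```python
# Hypothetical figures for one contractually transferred risk.
p             = 0.05     # annual chance the counterparty's failure causes a breach
direct_loss   = 800_000  # damages the contract makes the counterparty liable for
liability_cap = 500_000  # but the contract caps its liability
my_residual   = 150_000  # my own incident response and downtime - never transferable

recovered = min(direct_loss, liability_cap)
expected_residual_cost = p * (direct_loss - recovered + my_residual)
print(f"${expected_residual_cost:,.0f} expected annual cost despite the transfer")
# -> $22,500: the transfer is limited in amount and does not cover my residual loss.
```

The arithmetic also shows the probability-versus-magnitude distinction: the cap and the recovery act on the loss amount, while only the counterparty's actual security program can shrink p.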

Among the four recognized types of risk treatment, transferring the risk to a counterparty is one that is often overlooked as a management option.  Transferring risk is the sibling of avoiding risk, and a strategy well worth considering.  It is easy to fall into the trap of ignoring these two options if cybersecurity is over-delegated to IT engineers.

Saturday, February 18, 2017

Of Clocks and Systems and Risk Decisions


You probably have had the somewhat jarring experience of glancing at a digital clock and a clock with hands one after another.  The feeling can be a little unsettling, if not mildly irritating.  There’s a good reason why, and it tells us something important about how we make decisions.  What’s going on here?

Suppose a digital clock says the time is 2:42. You probably do a quick mental calculation and think “OK, I have about 20 minutes until my 3 o’clock appointment.”  But if you look at an analog clock, you probably don’t even bother with the minute level of precision, because you immediately have an intuition of how much time is left until 3.   The digital readout demands just a little bit of cognitive effort, while the analog readout is immediately intuitive.  Some analog clocks don’t even have numbers.

Psychologists have discovered that people have two ways of making decisions, called System 1 and System 2.  System 1 depends on experience and intuition.  It is relatively fast, comfortable, and effortless.  System 2 is more like the scientific method. It relies on data gathering, logic, analysis, and cognitive work.  A lot of people do not like System 2 thinking because it is more work.  “I’m not a math person; I go with my gut.”

There is a time for System 1 and a time for System 2.  System 1 is what you want if you are being chased by a bear. You don’t have time for analysis and you have plenty of hormonal intuition about fight or flight. Forget the analysis, run! 

But System 1 can get you into a lot of trouble.  It is bad for investment decisions and bad for deciding when to go to war.  That’s when you need System 2:  facts, data, analysis, logic, formal models.

In making risk decisions, when should we use System 1 vs System 2?  If the consequences of being wrong are small, and we have good intuition, or we must make an immediate decision, System 1 is probably the ticket.  Otherwise, the effort of System 2 will likely have a good payoff. 

But using System 2 is not necessarily hugely burdensome.  Sometimes a quick back-of-the-envelope analysis, or a moment of reflection, is all you need.  After all, that is what you did in reading the digital clock. You can train for it.


For more on Systems 1 and 2, there is no better source than Thinking, Fast and Slow, by Daniel Kahneman.

Friday, February 10, 2017

Don't Do That!

My CFO’s words still echo after 15 years.  I’ve long forgotten what prompted them on any particular occasion.  But with reflection and more experience, it’s become clear that he was managing risk.

Of the four common ways to treat risk – mitigating, transferring, accepting, and avoiding – avoiding is often the most neglected.  Yet it may be the simplest, fastest, and cheapest, and it is undoubtedly the safest.

There are a few ways to avoid risk.  One is to decide not to engage at all in some activity that exposes you (your critical assets, that is) to risk, especially if there is no upside.  Workplace safety rules are full of risk-avoidance ideas.  Management should consider carefully whether the potential returns of a new venture or strategy are worth the risks.  That requires having a deep and clear understanding of what those risks are.  Many financial institutions that over-invested in credit default swaps learned that lesson the hard way in 2008.  In the field of information security, if your business does not need, or benefit from having, personally identifiable information, don’t collect it.

Other ways to avoid risk are to limit the scope or the time duration of the exposure to the threat.  If you must have PII, or there is a big benefit to it, minimize the amount you have.  Minimize the number and diversity of environments in which you keep it.  Keep it out of development and test networks.  Get rid of it as soon as you can. 

Another way to avoid risk sometimes looks like transferring it to another party.  Risk transfer usually takes the form of insurance or other contractual arrangements, in which there is often a clear price for the transfer of risk.  But it is also possible to avoid a risk entirely by defining your business process so that specialists handle certain parts of it.  You avoid the risk of holding credit card data by integrating your e-commerce site with a payments processor, like PayPal.  That’s their business.  As a consumer, you avoid some kinds of identity theft risk by using a credit card or cash instead of a debit card.
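
A minimal sketch of that pattern (mine; the processor client and its create_hosted_payment call are hypothetical, not PayPal's actual API): the shopper's card number goes directly to the processor, and the merchant stores only an opaque token.

```python
from dataclasses import dataclass

@dataclass
class Order:
    order_id: str
    payment_token: str   # opaque reference from the processor - never card data

def checkout(order_id: str, processor) -> Order:
    """Delegate the payment step to a specialist processor so card
    numbers never touch our systems and the risk is avoided outright."""
    # Hypothetical API: the processor hosts the card-entry page and
    # returns a token we can charge against later.
    token = processor.create_hosted_payment(order_id)
    return Order(order_id=order_id, payment_token=token)
```

Because no card number ever enters your environment, there is nothing for an attacker to steal there, and your PCI DSS compliance scope shrinks accordingly.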

A great way to start the risk decision-making process is to ask, Do I need to take that risk at all?  The answer may well be, Don’t do that!

Thursday, February 2, 2017

Ignorance of the Risk Is No Excuse


A previous note offered a quarterly executive risk review as a simple and pragmatic way to start a risk management program.  A risk review fits naturally into the agenda of the quarterly business review, and it lays a good foundation from which to evolve a risk management program of whatever sophistication and at whatever pace is desired.


The first thing that will come out of the risk review is, “What do we do now to manage our top risks?”  A future note will explore the four general methods of treating risk.  But first we’ll look at the pros and cons of willful ignorance.

There may be a strong inclination to turn a blind eye to some risks.  You may feel that there are some things you do not want to “know” – in quotes because of course you are aware, but you do not want evidence to be created that could come back to haunt you.  Somebody could find that document and require you to address the risk, or worse, accuse you of negligence, because there is evidence that you knew of a risk, or should have known, and did nothing about it.

Management can take a willful-ignorance approach.  But let’s look at the balance sheet. 


There are a few points on the plus side. The executive may have plausible deniability for a time, and gain some time to address many other pressing issues first.  She or he may even get away with doing nothing indefinitely.  In a fledgling enterprise, the executive may calculate that it is more important to establish that the business is viable than to manage certain risks.  If there is no business, risk doesn’t matter.

There are more points on the minus side.  The trend in the investment, risk management, and regulatory environments is toward less patience with ignorance of risk.  All risk management frameworks require regular executive review of risk.  It is an important part of corporate governance.  Big customers and regulators will demand a risk management program.  Investors too want to understand their risk before committing funds to your enterprise, and cyber risk is now prominent in everybody’s awareness.  Especially bankers!

Furthermore, it may not make good management sense to ignore a risk.  Most risks do not get better with time, and some can blow up to jeopardize the very existence of the company.  Imagine a breach of confidential data just when you are trying to sign that first marquee customer.  Finally, there is value in being able to sleep at night, and knowing what your problems are is better than worrying about what they may be.

Turning a willful blind eye to a risk -- “rejecting” it -- is not the same as knowingly accepting a risk, which may be the best way to treat it.  It is management’s decision whether to treat or reject a risk, but rejecting is not a winning strategy in the long run.