from AI Trends https://ift.tt/31uanCT Allison Proffitt https://ift.tt/35slnl9
Understanding and Advising on Cyber and Physical Risks to the Nation’s Critical Infrastructure
Brian R. Gattoni is the Chief Technology Officer for the Cybersecurity and Infrastructure Security Agency (CISA) of the Department of Homeland Security. CISA is the nation’s risk advisor, working with partners to defend against today’s threats and collaborating to build a secure and resilient infrastructure for the future. Gattoni sets the technical vision and strategic alignment of CISA data and mission services. Previously, he was the Chief of Mission Engineering & Technology, developing analytic techniques and new approaches to increase the value of DHS cyber mission capabilities. Prior to joining DHS in 2010, Gattoni served in various positions at the Defense Information Systems Agency and the United States Army Test & Evaluation Command. He holds a Master of Science Degree in Cyber Systems & Operations from the Naval Postgraduate School in Monterey, California, and is a Certified Information Systems Security Professional (CISSP).
AI Trends: What is the technical vision for CISA to manage risk to federal networks and critical infrastructure?
Brian Gattoni: Our technology vision is built in support of our overall strategy. We are the nation’s risk advisor. It’s our job to stay abreast of incoming threats and opportunities for general risk to the nation. Our efforts are to understand and advise on cyber and physical risks to the nation’s critical infrastructure.
It’s all about bringing in the data, understanding what decisions need to be made and can be made from the data, and what insights are useful to our stakeholders. The potential of AI and machine learning is to expand on operational insights with additional data sets to make better use of the information we have.
What are the most prominent threats?
The sources of threats we frequently discuss are the adversarial actions of nation-state actors and those aligned with nation-state actors and their interests, in disrupting national critical functions here in the U.S. Just in the past month, we’ve seen increased activity from elements supporting what we refer to in the government as Hidden Cobra [malicious cyber activity by the North Korean government]. We’ve issued joint alerts with our partners overseas and the FBI and the DoD, highlighting activity associated with Chinese actors. On CISA.gov people can find CISA Insights, which are documents that provide background information on particular cyber threats and the vulnerabilities they exploit, as well as a ready-made set of mitigation activities that non-federal partners can implement.
What role does AI play in the plan?
Artificial intelligence has a great role to play in the support of the decisions we make as an agency. Fundamentally, AI is going to allow us to apply our decision processes to a scale of data that humans just cannot keep up with. And that’s especially prevalent in the cyber mission. We remain cognizant of how we make decisions in the first place and target artificial intelligence and machine learning algorithms that augment and support that decision-making process. We’ll be able to use AI to provide operational insights at a greater scale or across a greater breadth of our mission space.
How far along are you in the implementation of AI at the CISA?
Implementing AI is not as simple as putting in a new business intelligence tool or a new email capability. Really augmenting your current operations with artificial intelligence is a mix of culture change, for humans to understand how the AI is supposed to augment their operations; technology change, to make sure you have the scalable compute and the right tools in place to do the math you’re talking about implementing; and process change. We want to deliver artificial intelligence algorithms that augment our operators’ decisions as a support mechanism.
Where we are in the implementation is closer to understanding those three things. We’re working with partners in federally funded research and development centers, national labs and the department’s own Science and Technology Data Analytics Tech Center to develop capability in this area. We’ve developed an analytics meta-process, which helps us systematize the way we take in data and puts us in a position to apply artificial intelligence to expand our use of that data.
Do you have any interesting examples of how AI is being applied in CISA and the federal government today? Or what you are working toward, if that’s more appropriate.
I have a recent use case. We’ve been working with some partners over the past couple of months to apply AI to a humanitarian assistance and disaster relief type of mission. Within CISA, we also have responsibilities for critical infrastructure. During hurricane season, we always have a role to play in helping advise on the potential impacts to critical infrastructure sites in the affected path of a hurricane.
We prepared to conduct an experiment leveraging AI algorithms and overhead imagery to figure out if we could analyze the data from a National Oceanic and Atmospheric Administration flight over the affected area. We compared that imagery with the base imagery from Google Earth or ArcGIS and used AI to identify any affected critical infrastructure. We could see the extent to which certain assets, such as oil refineries, were physically flooded. We could make an assessment as to whether they hit a threshold of damage that would warrant additional scrutiny, or we didn’t have to apply resources because their resilience was intact, and their functions could continue.
That is a nice use case, a simple example of letting a computer do the comparisons and make a recommendation to our human operators. We found that it was very good at telling us which critical infrastructure sites did not need any additional intervention. To use a needle-in-a-haystack analogy, one of the useful things AI can help us do is blow hay off the stack in pursuit of the needle. And that’s a win also. The experiment was very promising in that sense.
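The triage logic in this use case can be sketched as a toy change-detection routine. Everything below is illustrative: the random arrays stand in for NOAA overhead imagery and baseline tiles, and both thresholds are invented for the example, not taken from CISA’s actual pipeline.

```python
import numpy as np

def flood_triage(baseline, post_storm, flagged_fraction=0.25):
    """Toy change detection for one site: compare baseline imagery with
    post-storm imagery and recommend whether an analyst should look."""
    diff = np.abs(post_storm.astype(float) - baseline.astype(float)) / 255.0
    change_score = float((diff > 0.3).mean())  # fraction of strongly changed pixels
    # Low-change sites are "hay blown off the stack": deprioritized so
    # analysts can focus on sites that may have crossed a damage threshold.
    return "flag for analyst" if change_score >= flagged_fraction else "deprioritize"

rng = np.random.default_rng(0)
baseline = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)  # stand-in base tile
unchanged = baseline.copy()   # site untouched by the storm
flooded = 255 - baseline      # heavily altered scene after the storm

print(flood_triage(baseline, unchanged))  # deprioritize
print(flood_triage(baseline, flooded))    # flag for analyst
```

The payoff the interview describes is exactly this negative filter: confidently clearing unaffected sites so human attention goes to the ambiguous ones.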
How does CISA work with private industry, and do you have any examples of that?
We have an entire division dedicated to stakeholder engagement. Private industry owns over 80% of the critical infrastructure in the nation. So CISA sits at the intersection of the private sector and the government to share information, to ensure we have resilience in place for both the government entities and the private entities, in the pursuit of resilience for those national critical functions. Over the past year we’ve defined a set of 55 functions that are critical for the nation.
When we work with private industry in those areas we try to share the best insights and make decisions to ensure those function areas will continue unabated in the face of a physical or cyber threat.
Cloud computing is growing rapidly. We see different strategies, including using multiple vendors of the public cloud, and a mix of private and public cloud in a hybrid strategy. What do you see is the best approach for the federal government?
In my experience the best approach is to provide guidance to the CIOs and CISOs across the federal government and allow them the flexibility to make risk-based determinations on their own computing infrastructure, as opposed to a one-size-fits-all approach.
We issue a series of use cases that describe, at a very high level, a reference architecture for a type of cloud implementation: where security controls should be implemented, and where telemetry and instrumentation should be applied. You have departments and agencies with a very forward-facing public citizen services portfolio, which means access to information is one of their primary responsibilities. Public clouds and ease of access are most appropriate for those. And then there are agencies with more sensitive missions. Those have critical high-value data assets that need to be protected in a specific way. Giving each the guidance they need to handle all of their use cases is what we’re focused on here.
I wanted to talk a little bit about job roles. How are you defining the job roles around AI in CISA, as in data scientists, data engineers, and other important job titles and new job titles?
I could spend the remainder of our time on this concept of job roles for artificial intelligence; it’s a favorite topic for me. I am a big proponent of the discipline of data science being a team sport. We currently have our engineers and our analysts and our operators. And the roles around data science and data engineering have been morphing out of an additional duty on analysts and engineers into their own subsector, their own discipline. We’re looking at a cadre of data professionals that serve almost as a logistics function to our operators who are doing the mission-level analysis. If you treat data as an asset that has to be moved and prepared and cleaned and readied, all terms in the data science and data engineering world now, you start to realize that it requires logistics functions similar to any other asset that has to be moved.
If you get professionals dedicated to that end, you will be able to scale to the data problems you have without overburdening your current engineers who are building the compute platforms, or your current mission analysts who are trying to interpret the data and apply the insights to your stakeholders. You will have more team members moving data to the right places, making data-driven decisions.
Are you able to hire the help you need to do the job? Are you able to find qualified people? Where are the gaps?
As the domain continues to mature, as we understand more about the different roles, we begin to see gaps: education and training programs that need to be developed. Maybe three to five years ago, you would see certificates from higher education in data science. Now we’re starting to see full-fledged degrees as concentrations out of computer science or mathematics. Those graduates are the pipeline to help us fill the gaps we currently have. As far as our current problems, there are never enough people. It’s always hard to get the good ones and then keep them because the competition is so high.
Here at CISA, we continue to invest not only in our own folks that are re-training, but in the development of a cyber education and training group, which is looking at the partnerships with academia to help shore up that pipeline. It continually improves.
Do you have a message for high school or college students interested in pursuing a career in AI, either in the government or in business, as to what they should study?
Yes, and it’s similar to the message I give to the high schoolers who live in my house. That is, don’t give up on math so easily. Math and science, the STEM subjects, have foundational skills that may be applicable to your future career. That is not to discount the diversity and variety of thought processes that come from other disciplines. I tell my kids they need the mathematical foundation to be able to apply the thought processes you learn from studying music or studying art or studying literature, and the different ways that those disciplines help you make connections. But have the mathematical foundation to represent those connections to a computer.
One of the fallacies around machine learning is that it will just learn [by itself]. That’s not true. You have to be able to teach it, and you can only talk to computers with math, at the base level.
So if you have the mathematical skills to relay your complicated human thought processes to the computer, and now it can replicate those patterns and identify what you’re asking it to do, you will have success in this field. But if you give up on the math part too early—it’s a progressive discipline—if you give up on algebra two and then come back years later and jump straight into calculus, success is going to be difficult, but not impossible.
You sound like a math teacher.
A simpler way to say it is: if you say no to math now, it’s harder to say yes later. But if you say yes now, you can always say no later, if data science ends up not being your thing.
Are there any incentives for young people, let’s say a student just out of college, to go to work for the government? Is there any kind of loan forgiveness for instance?
We have a variety of programs. The one that I really like, that I have had a lot of success with as a hiring manager in the federal government, especially here at DHS over the past 10 years, is a program called Scholarship for Service. It’s a CyberCorps program where interested students who pass the acceptance process can get a degree in exchange for some service time. It used to be two years; it might be more now, but they owe some time in service to the federal government after the completion of their degree.
I have seen many successful candidates come out of that program and go on to fantastic careers, contributing in cyberspace all over. I have interns that I hired nine years ago that are now senior leaders in this organization or have departed for private industry and are making their difference out there. It’s a fantastic program for young folks to know about.
What advice do you have for other government agencies just getting started in pursuing AI to help them meet their goals?
My advice for my peers and partners and anybody who’s willing to listen to it is, when you’re pursuing AI, be very specific about what it can do for you.
I go back to the decisions you make, what people are counting on you to do. You bear some responsibility to know how you make those decisions if you’re really going to leverage AI and machine learning to make them faster or better or improved in some other way. The speed at which you make decisions cuts both ways. You have to identify the benefit if a decision is made and the outcome is positive, and define your regret if it’s made and the outcome is negative. Then do yourself a simple high/low matrix; the quadrant of high-benefit, low-regret decisions is the target. Those are the ones I would like to automate as much as possible. And if artificial intelligence and machine learning can help, that would be great. If not, that’s a decision you have to make.
I have two examples I use in our cyber mission to illustrate the extremes here. One is for incident triage. If a cyber incident is detected, we have a triage process to make sure that it’s real. That presents information to an analyst. If that’s done correctly, it has a high benefit because it can take a lot of work off our analysts. It has low-to-medium regret if it’s done incorrectly, because the decision is to present information to an analyst who can then provide that additional filter. So that’s a high benefit, low regret. That’s a no-brainer for automating as much as possible.
On the other side of the spectrum is protecting next generation 911 call centers from a potential telephony denial of service attack. One of the potential automated responses could be to cut off the incoming traffic to the 911 call center to stunt the attack. Benefit: you may have prevented the attack. Regret: potentially you’re cutting off legitimate traffic to a 911 call center, and that has life and safety implications. And that is unacceptable. That’s an area where automation is probably not the right approach. Those are two extreme examples, which are easy for people to understand, and it helps illustrate how the benefit regret matrix can work. How you make decisions is really the key to understanding whether to implement AI and machine learning to help automate those decisions using the full breadth of data.
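The benefit/regret matrix behind these two examples can be written down as a minimal decision rule. The function and labels below are my own illustration of the interview’s logic, not an actual CISA process:

```python
def automation_call(benefit, regret):
    """Classify a decision against a coarse high/low benefit-regret matrix.
    benefit and regret are the labels "high" or "low"."""
    if regret == "high":
        # Unacceptable downside (e.g. cutting legitimate 911 traffic):
        # keep a human in the loop regardless of the upside.
        return "keep human in the loop"
    if benefit == "high":
        # Target quadrant: high benefit, low regret (e.g. incident triage
        # that merely surfaces information to an analyst).
        return "automate as much as possible"
    return "review case by case"

print(automation_call("high", "low"))   # incident triage
print(automation_call("high", "high"))  # cutting off 911 call-center traffic
```

Treating high regret as an override, rather than weighing it against benefit, mirrors the point in the interview: when the downside carries life-and-safety implications, no upside makes automation acceptable.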