This week on NVTC’s blog, NVTC member LMI shares how emerging technology is making it easier for agencies and their partners to share essential data, even when the organizations have different security policies and protocols.


Last week’s Virtualization in a Collaborative Information Sharing Environment Forum, sponsored by the Intelligence and National Security Alliance (INSA), shed light on how emerging technology is making it easier for agencies and their partners to share essential data, even when the organizations have different security policies and protocols.

Network virtualization, also known as software-defined networking, uses cloud-based principles and technology to provide a more efficient IT infrastructure while opening the door for different types of users to seamlessly access information for which they are authorized by law and policy.

Kshemendra Paul, who oversees the Information Sharing Environment (ISE), noted that the original vision of a single, universal cloud providing services to all federal agencies has changed. Today, ISE’s emphasis is to establish common policy to “federate trust.”

Groups with different security and access controls share many common elements around trust (e.g., business rules for issuing credentials, individual attributes, and data retention), so there is a framework for a diverse range of professionals to come together and share data. Paul noted that Alabama has already developed a trust framework that enables the medical and law enforcement communities to share casework data.

To move agencies to a state where users share information without being hampered by technology, the panel discussed the following.

Network virtualization

  • Is gaining momentum—already, the National Geospatial-Intelligence Agency is fast-tracking implementation of network virtualization and wants other agencies to join.
  • Could automate security policy—by using the National Institute of Standards and Technology (NIST) framework for trusted identities in cyberspace, XML could be used to translate thousands of access control policies into machine-executable code (a simple sketch of this idea follows this list).
  • Offers flexibility and immediacy—agencies will be able to expand and contract networks, as needed, as well as create them and move them around rapidly.
  • Creates efficiencies—alongside enhancing mission capabilities, virtualization lowers costs and improves end-user service through faster configuration and instant upgrades.
  • Tightens security—patches are quickly applied, since IT departments know all the users and applications for a given network.
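
As a rough illustration of the "machine-executable code" idea above, the sketch below expresses a written access-control rule as a small Python function that can be evaluated automatically. The roles, attributes, and sensitivity labels are hypothetical and are not drawn from any specific agency policy or from the NIST framework itself.

```python
# Minimal illustrative sketch: a written access-control rule expressed as
# machine-executable logic. Attribute names and values are hypothetical.

from dataclasses import dataclass

@dataclass
class AccessRequest:
    role: str            # credentialed role of the requester
    agency: str          # agency that issued the credential
    clearance: str       # e.g., "public", "sensitive", "secret"
    resource_label: str  # sensitivity label on the requested data

# Ordering of sensitivity labels, lowest to highest.
LEVELS = ["public", "sensitive", "secret"]

def is_authorized(req: AccessRequest) -> bool:
    """Return True if the request satisfies a simple attribute-based rule:
    the requester's clearance must meet or exceed the resource label,
    and only named roles may see non-public data."""
    if LEVELS.index(req.clearance) < LEVELS.index(req.resource_label):
        return False
    if req.resource_label != "public" and req.role not in {"investigator", "analyst"}:
        return False
    return True

# Example: a credentialed analyst requesting sensitive casework data.
print(is_authorized(AccessRequest("analyst", "StateAgency", "secret", "sensitive")))  # True
```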

Key challenges for implementing virtualization include change management and security. Seamlessly sharing sensitive information between organizations often goes against the grain of agency culture. Making virtualization scalable requires a culture change.

Security remains a constant challenge. As data volumes grow, IT departments will need to analyze ever-larger data sets to find insiders behaving badly, and the right security funding needs to be set aside for virtualization projects.
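
As a hedged sketch of the kind of analysis this implies, the snippet below flags users whose latest daily record-access count deviates sharply from their own historical baseline. The log structure, field names, and threshold are hypothetical; real insider-threat analytics are far more sophisticated.

```python
# Hypothetical sketch: flag users whose most recent daily record-access count
# deviates sharply from their own baseline. Field names and the threshold are
# illustrative, not a specific product's method.

from statistics import mean, stdev

# access_log: user -> list of daily record-access counts (oldest first)
access_log = {
    "user_a": [40, 35, 42, 38, 41, 39, 300],   # sudden spike on the last day
    "user_b": [12, 15, 11, 14, 13, 12, 14],
}

def flag_anomalies(history, z_threshold=3.0):
    """Return users whose latest count is more than z_threshold
    standard deviations above their prior average."""
    flagged = []
    for user, counts in history.items():
        baseline, latest = counts[:-1], counts[-1]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and (latest - mu) / sigma > z_threshold:
            flagged.append(user)
    return flagged

print(flag_anomalies(access_log))  # ['user_a']
```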

Keith Nelson is a member of LMI’s Organizational and Human Capital Solutions group, supporting human resources IT, workforce management, succession planning, and performance management for the State Department, the Department of Homeland Security, and the General Services Administration. Mr. Nelson holds an MBA from UCLA and a Master of Journalism from UC Berkeley.

 


The best relationships are built on great communication and mutual understanding – which is why the relationship between federal CIOs and the applications that drive their agencies’ performance is getting more complicated. This week on NVTC’s blog, Davis Johnson, the vice president of public sector at NVTC member company Riverbed Technology explains why it’s important to improve your network visibility.


The best relationships are built on great communication and mutual understanding – which is why the relationship between federal CIOs and the applications that drive their agencies’ performance is getting more complicated. Federal leaders are too often in the dark about which applications are delivering value, which personnel are using them, and how those applications are performing. Agencies simply don’t know their apps very well, and understanding applications begins with gaining visibility into the networks they run on.

The network visibility crisis is getting even more serious as agencies move to the cloud and consolidate data centers. The result is that applications are traveling farther distances across agency networks to reach the defense and civilian workers who rely on them every day. Agencies need visibility into the new network paths, and roadblocks, that their applications navigate, or they risk negative impacts to performance and budgets.

In a Riverbed-commissioned survey conducted by Market Connections, over 50 percent of federal IT respondents reported that it takes a day or more to detect and fix application performance issues. Furthermore, only 17 percent reported being able to address and fix such issues within minutes.

The costs associated with network outages can be staggering. Today, the average cost of an enterprise application failure is $500,000 to $1 million per hour. This is why it is so important to have good network visibility to identify and fix network and application performance problems as they occur.

Many federal IT executives lack the manpower, budget and tools necessary to find and fix performance issues quickly and efficiently. Without the right tools to monitor network and application performance, federal IT professionals cannot pinpoint problems that directly lessen agency or mission effectiveness. This can mean supply chain delays of materiel to warfighters in the field or lack of access to critical defense and global security applications.

Networks need to perform quickly and seamlessly in order to fulfill mission requirements. Performance monitoring tools provide the broadest, most comprehensive view into network activity, helping to ensure fast performance, high security and rapid recovery.
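
As a minimal sketch of what such monitoring can look like, assuming a set of internal application health endpoints (the URLs below are hypothetical), the snippet probes each application and raises an alert when a response is slow or fails. A production monitoring suite would collect far richer network and application metrics.

```python
# Minimal sketch of an application-response probe. Endpoint URLs and the
# threshold are hypothetical, for illustration only.

import time
import urllib.request

ENDPOINTS = {
    "hr_portal": "https://apps.example.gov/hr/health",
    "logistics": "https://apps.example.gov/logistics/health",
}
THRESHOLD_SECONDS = 2.0  # alert if a response takes longer than this

def probe(name, url):
    """Time one request to the endpoint and report success or failure."""
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=THRESHOLD_SECONDS) as resp:
            ok = resp.status == 200
    except Exception:
        ok = False
    elapsed = time.monotonic() - start
    return name, ok, elapsed

for name, url in ENDPOINTS.items():
    name, ok, elapsed = probe(name, url)
    if not ok or elapsed > THRESHOLD_SECONDS:
        print(f"ALERT: {name} slow or unreachable ({elapsed:.2f}s)")
    else:
        print(f"OK: {name} responded in {elapsed:.2f}s")
```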

With visibility across the entire network and its applications, IT departments can identify and fix problems in minutes—before end users notice, and before productivity and citizen services suffer. More than two-thirds (68%) of respondents see improved network reliability as a key value of monitoring tools, and more than three-quarters (77%) said automated investigation and diagnosis is an important feature in a network monitoring solution.

Survey respondents shared which features are important in network monitoring, providing a window into their thoughts about current issues. Those features, listed in order of importance, are capacity planning (79%), automated investigation (77%), application-aware visibility (65%), and predictive modeling (58%).

By improving network visibility, an agency will have improved network reliability, know about problems before end users do, experience improved network speed, maximize employee productivity, and gain insight into risk management and cyber threats. Because IT executives will be able to see an agency’s whole network, they can become proactive in not only fixing issues but avoiding them as well.

With today’s globally distributed federal workforce, network visibility is critical to monitoring performance and to identifying and quickly fixing problems.

Using network monitoring tools is a critical step toward managing the complex network environment and ensuring transfers to the cloud are effective and beneficial experiences for the agency, the end users and, ultimately, the constituents.
