This week’s guest blog post is by Norm Snyder, partner at Aronson, LLC. Snyder is also chair of NVTC’s Small Business and Entrepreneur Committee. Snyder shares highlights and lessons learned from the Committee’s All Star Seed/Early Stage Investor Panel that took place on Nov. 15.

Can seed, early stage and angel investment capital be found in the D.C. metro area? This question and others were explored at NVTC's Small Business and Entrepreneur Committee's engaging All Star Seed/Early Stage Investor Panel on Nov. 15, which featured some of the area's most active early stage investors.

Moderated by Aronson partner Norm Snyder, the panel included Ed Barrientos, "super-angel" investor and entrepreneur CEO of Brazen; Steve Graubart, CFO of 1776; John May, founding partner of New Dominion Angels; Liz Sara, angel investor, entrepreneur and chair of the Dingman Center; and Tom Weithman, managing director of CIT GAP Funds and CIO of Mach37.

During the event, panelists discussed their recent experiences and desired investee profiles, and offered practical advice to an audience of start-up entrepreneurs navigating the challenging early stage investment world. While the general consensus was that early stage capital is available in the D.C. metro area, it takes persistence and hard work for entrepreneurs to attract sufficient investment from the right investors.

According to Weithman, CIT has funded over 100 companies in Virginia, with a focus on tech, fintech, cyber and life sciences; however, he noted a general dearth of seed funds available for cyber. Sara stated that her Dingman angel group funded approximately 15 deals in the last year. Barrientos has made significant angel investments in a number of companies and has raised venture capital funds for Brazen. May stated that almost every deal that should be funded is funded, but it is rare for one angel to fund an entire deal. Graubart said 1776 has made 30 investments to date, with a focus on regulated industries such as ed-tech, health IT, fintech, smart cities and transportation.

So how does an entrepreneur stand out from the crowd of early stage companies competing for investment?

Panelists offered a range of suggestions. Research potential investors – plenty of information is available on what they are interested in, so don't waste your time and theirs chasing investors who aren't interested in your company's profile. At the early stage, investors are betting first on the entrepreneur and their team, not on a single idea or concept, which is likely to evolve several times before it reaches the market. Put together a passionate team with strong domain experience and the ability to sell themselves to attract investors, customers and future team members. Remember, the team should include an experienced advisory board with strengths and experiences that complement and extend the abilities of the entrepreneurs. Finally, put together well-thought-out, concise pitches and applications.

Be persistent – get in front of groups of investors. Warm referrals tend to get looked at first, so use your advisors to help you get noticed and invest time building relationships. Be able to demonstrate market acceptance and traction. Be coachable; you may be the "master" of your technology, but each successful start-up faces different challenges and there's a lot to learn. Early stage entrepreneurs shouldn't focus on chasing the "highest" valuation – a high valuation can scare away well-qualified investors and may lead to disastrous down rounds later. Convertible debt, instead of preferred stock, can help take the focus off the subjective valuation issue for early stage companies.

Most importantly, the closing advice to attendees: be passionate and persistent and make sure you enjoy what you do!


Our latest NVTC member guest blog post is by ePlus Chief Security Strategist Tom Bowers. Bowers discusses the latest advancements in machine learning and its impact on cybersecurity.

According to a Ponemon Institute study released in March, 63% of survey respondents said their companies had been hit by "advanced (cyber) attacks" within the last year. Only 39% felt their company was highly effective at detecting cyber attacks. And worse, only 30% considered their organizations highly effective at preventing them.

A few weeks ago, I moderated a panel discussion at the ePlus/EC-Council Foundation CISO Security Symposium in National Harbor, Md. Our purpose was to gather leading security experts for their insights on the latest security threats and to discuss ideas and strategies. CISOs from many different industries were there. And as you might imagine, given the importance of cybersecurity today, the event was well-attended.

During the session, we covered various pressing topics in the realm of cybersecurity. But the most intriguing “future-looking” trend we discussed was machine learning.

That's not a surprise, because machine learning is a hot topic in tech circles. But it's more than just the latest industry buzzword, and vendors are responding accordingly. In March, Hewlett Packard Enterprise (HPE) announced the availability of HPE Haven OnDemand, its cloud-platform "machine-learning-as-a-service" offering. In October, IBM, whose Watson system is known as a leader in artificial intelligence (AI), changed the name of its predictive analytics service to "IBM Watson Machine Learning" to emphasize its direction "to provide deeper and more sophisticated self-learning capabilities as well as enhanced model management and deployment functionality within the service."

Simply put, machine learning refers to the ability of computers to, in effect, "learn and grow in knowledge" based on past experience. Machine learning begins with a base set of teaching material, and through subsequent experiences (i.e., the processing of more and more data sets and responses), the algorithm adds to that base material – its body of knowledge, so to speak – and the program becomes more intelligent. As a result, machine learning programs are able to answer questions and make predictions with increasing accuracy.
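As a rough illustration of that idea – and not a description of any product mentioned in this post – the short Python sketch below trains a scikit-learn classifier incrementally on batches of synthetic data; its accuracy on held-out data tends to improve as it "experiences" more examples.

```python
# Minimal sketch of "learning from experience": a classifier whose accuracy
# improves as it sees more labeled examples. Synthetic data, illustrative only.
from sklearn.datasets import make_classification
from sklearn.linear_model import SGDClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = SGDClassifier(random_state=0)
classes = [0, 1]

# Feed the model data in batches; each batch is "more experience."
for start in range(0, len(X_train), 500):
    batch_X = X_train[start:start + 500]
    batch_y = y_train[start:start + 500]
    model.partial_fit(batch_X, batch_y, classes=classes)
    print(f"after {start + len(batch_X):>4} examples: "
          f"held-out accuracy = {model.score(X_test, y_test):.3f}")
```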

What are the implications for security operations?

Machine learning has made tremendous strides in the last few years. From self-driving vehicles to medical research to marketing personalization to data security, machine learning algorithms are being used to churn through huge stores of data to identify patterns and anomalies, enabling data-driven decisions and automation. And that capability continues to mature and extend into the area of cybersecurity.

For years, those of us in IT security have worked tirelessly to increase the maturity of security operations in our companies. We’ve strived—in the face of increasing complexity and rising threats—to advance our information security capabilities beyond simple “detect and respond” reactive methods to risk-based “anticipate and prevent” proactive approaches. Machine learning is playing a role in that mission today and will play an even larger part in the years to come.

As more security vendors incorporate machine learning engines into their solutions, security operations will change. For example, log scanning – a tedious, labor-intensive effort – will become automated. Instead of a security analyst scrolling through SIEM output, scrutinizing correlated events and analyzing their meaning, machine learning engines will parse huge log files, identify anomalies and make decisions in near real time.
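To make that concrete, here is a small, hypothetical Python sketch of unsupervised anomaly detection over log-derived features. The feature names, values and contamination setting are invented for illustration; a real deployment would extract features from parsed SIEM events and tune the model carefully.

```python
# Hypothetical sketch: flag unusual per-host activity with an unsupervised model.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Toy features per host per hour: [login_failures, MB_sent_out, distinct_ports]
normal = rng.normal(loc=[3, 50, 5], scale=[2, 15, 2], size=(1000, 3))
suspicious = np.array([[40, 900, 60],    # brute-force plus exfiltration-like pattern
                       [2, 45, 300]])    # port-scan-like behavior
events = np.vstack([normal, suspicious])

detector = IsolationForest(contamination=0.01, random_state=0).fit(normal)
flags = detector.predict(events)         # -1 = anomaly, 1 = normal

for row, flag in zip(events, flags):
    if flag == -1:
        print("anomaly:", np.round(row, 1))
```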

In addition, machine learning engines will identify trends, threats, and incidents much faster. Instead of waiting on a security analyst to conclude their analysis, machine learning engines will parse reams of security data collected from enterprise machines, such as servers, smartphones, tablets, network devices, applications, and others. Through big data analytics and machine learning, this machine data will be searched and analyzed to gain insight into what is happening inside corporate networks, enabling trends to be exposed and incidents to be identified much faster than they are today.

But more importantly, machine learning engines will be able to "hunt" for exploits. By combining input from learned behaviors, known indicators of compromise (IOCs) and external threat intelligence feeds, machine learning engines will be able to predict malicious events with a high degree of accuracy, preventing major incidents before they materialize or become widespread problems. And we are seeing examples of this capability today. For instance, the cyber solution Endgame operates at the microprocessor level, analyzing the prefetch instruction cache in search of zero-day exploits so they can be detected and eliminated long before an incident occurs.

Not to be overlooked is the ability of machine learning to enable automated responses. Machine learning engines not only can detect malicious behavior faster, based on IOCs and “experience,” but also can take action to eliminate the threat early in the kill chain without requiring human involvement. This enables incidents to be avoided proactively and lessens the workload on short-handed staff.
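As a simplified stand-in for this automated-response idea, the sketch below matches incoming events against a static IOC list and triggers a block action with no human in the loop. A real machine learning engine would score behaviors rather than match a fixed list, and the IOC values, event format and "firewall" call here are all placeholders.

```python
# Hypothetical sketch of automated response driven by indicators of compromise.
KNOWN_BAD_IPS = {"203.0.113.45", "198.51.100.7"}      # example IOC feed entries
KNOWN_BAD_HASHES = {"44d88612fea8a8f36de82e1278abb02f"}

def block_ip(ip: str) -> None:
    # Placeholder for a real firewall or EDR API call.
    print(f"[action] blocking outbound traffic to {ip}")

def handle_event(event: dict) -> None:
    """Compare an event against known IOCs and respond early in the kill chain."""
    if event.get("dst_ip") in KNOWN_BAD_IPS:
        block_ip(event["dst_ip"])
    if event.get("file_hash") in KNOWN_BAD_HASHES:
        print(f"[action] quarantining file on host {event.get('host')}")

handle_event({"host": "ws-042", "dst_ip": "203.0.113.45"})
handle_event({"host": "ws-099", "file_hash": "44d88612fea8a8f36de82e1278abb02f"})
```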

The benefits of machine learning are clear and compelling. But many security professionals are asking, "Is the technology really ready?" There are valid concerns – such as the validity of the data fed into machine learning engines from external threat intelligence feeds, and the potential for machine learning algorithms to be attacked and fed false models – but vendors and academia alike continue working to sort out those questions. In fact, the Georgia Institute of Technology just launched a new research project to study the security of machine learning systems.

Like most technology, machine learning will continue to evolve. But if expectations prove out, machine learning will transform how CISOs manage security operations within the next three years.


Did you know?

  • Only 7% of federal employees today are age 30 or under – the lowest percentage in the last ten years
  • By 2017, 31% of federal workers will be eligible to retire
  • The government loses about 5,000 information technology employees each year

In a recent Government Executive blog post, NVTC member Susan Fallon Brown of Monster Government Solutions shared these astounding statistics and highlighted the growing opportunity for the federal government to bolster its millennial workforce and reduce overall hiring gaps with millennial talent. Here are some of the key themes she shared in the blog:

  • The importance of federal agencies being able to articulate their missions – millennials want to be a part of organizations that serve the greater good; an agency’s mission statement, often the first point of entry into an organization for a candidate, must clearly express the positive impact the agency is making
  • Digital channels are key to millennial recruitment – millennials are using social networks and digital channels in their job search more than ever before; agencies should leverage their digital channels as an extension of their recruitment efforts, utilizing clear and enticing messaging
  • Transparency and engagement are a must in the recruitment process – millennials want to be continually engaged in the hiring process. They want feedback from recruiters at all stages of the hiring process – and to hear from recruiters after the interview process, even if they didn’t get the job

Millennials make up about one-third of the workforce in Fairfax and Arlington Counties according to a 2016 Millennial Research report conducted by NVTC’s NextGen Leaders Committee. The report explored what attracts and retains millennials in organizations in Northern Virginia.

The notion of connection – millennials' desire to feel connected to the community they live in, to their employer's mission and charitable efforts, and to their colleagues – emerged throughout the report. Here are some interesting points from the research:

  • Millennials place strong emphasis on flexibility in their positions – in their schedule, in the physical location of their job and in their responsibilities. Instead of the number of hours they work, millennials want to be evaluated on the quality of their output.
  • Millennials place strong value on ongoing learning and development opportunities; career progression and mentorship are highly important, even though company loyalty isn't always a driving career factor for millennials.
  • Millennials highly value employee recognition in a variety of forms, including constructive feedback, awards, perks and promotions.
  • A company's social responsibility efforts and commitment to ethics are critical for millennials and a driving recruitment factor; millennials place strong value on trust in their employer and on its transparency and commitment to bettering the world.

Interested in learning more about recruiting and retaining millennials in our region? Read the full NextGen Leaders Millennial Research report.

Check out Government Executive’s blog here.

[Infographic: millennial workforce data – one of several infographics you'll find in the NextGen Leaders Millennial Research report]


This NVTC guest blog post is written by Marc Burkels, manager of dedicated servers at LeaseWeb. LeaseWeb, an NVTC member company, is an Infrastructure-as-a-Service (IaaS) provider offering dedicated servers, CDN and cloud hosting on a global network. LeaseWeb recently exhibited at the Capital Cybersecurity Summit on Nov. 2-3, 2016.

Let’s say you want to become the new Facebook. Believe it or not, I regularly run into people who have this ambition. The number one question these new Mark Zuckerbergs ask me is which server they need.

It is always a challenge to convince them not to rush into anything. Instead, I have them sit down and tell me what they really want. Many companies switch servers within a few months of buying, and switching is always time consuming (not to mention costly), so it is certainly worth your while to think things through before you decide. What is the service you want to deliver? What is your workload? Does it involve large databases?

I always discuss the following 8 things to help people decide on the right hosting provider and hardware configuration of a dedicated server:

1. Business impact of downtime

What is the business impact of a potential failure of your hosting environment? One of the first things to consider when selecting a dedicated server is how to deal with potential downtime. In a cloud environment, the setup of the cloud protects you against hardware failures. With a dedicated server, you know you are not sharing resources with anyone else, but a single server is also a single point of failure, so you need to decide whether you can accept potential downtime if you do not have the option to scale to multiple dedicated servers.

2. Scalability of your application

Scalability is another important issue when choosing a dedicated server. How well does your application scale? Is it easy to add more servers and will that increase the amount of end users you can service?

If it is easy for you to scale, it doesn't matter whether you use a dedicated server or a virtual solution. However, some applications are difficult to scale across multiple devices. Running a database on multiple servers is a challenge, since it needs to be synchronized across all database servers. It might even be easier to move the database to a server that has more processing capacity, RAM and storage. Moving to a cloud environment – where you can clone a server, have a copy running in production and add a load balancer to redirect traffic to multiple servers – could also be a good option for you.

3. Performance requirements of your server

What are your performance requirements? How many users do you expect and how many servers do you potentially need? Several hardware choices influence server performance:

Processor/CPU

Generally, you can choose the number of processors and cores in a server. Whether you will benefit from more cores depends on the application you are running; multi-threaded applications such as web servers or database servers generally will. Also consider per-core performance, defined by clock speed: some processors deliver better turnaround times with fewer cores running at higher clock speeds. Advice on which processors and how many cores to choose should ideally come from whoever manages the application or from the software vendor, and it should take into account the expected number of users.

RAM

The faster the CPU and the more cores it has, the more RAM options are available to you. If you are unsure about your RAM needs, choose a server that allows you to add RAM if needed since this is relatively easy. The ranges of RAM choices, especially with double processors, are enormous.

The size of your server is important when choosing RAM, as is the latest technology. Current-generation servers use DDR4 technology, which can have a positive effect on database performance. DDR4 is also attractively priced nowadays, since it is the standard.

Hard Drives

Choose a RAID set-up for your hard drives, so you are well protected against the failure of a single hard drive. Your system will still be up and running – with some performance loss – until the hard drive is replaced.

The larger the server, the more hard drive options you have. SATA drives offer high capacity but relatively low performance. SAS performs roughly twice as well as SATA, but comes at a higher price and lower capacity. SAS in turn has been succeeded by SSD, which is 50 to 100 times faster than SATA.

4. Load balancing across multiple dedicated servers

If your application can scale across multiple dedicated servers, a form of load balancing – where end users are split across all available servers – is necessary. If you are running a website and traffic is rising, at some point you will need multiple web servers serving a multitude of users for the same website. With a load balancing solution, every incoming request is directed to a different server. Before doing this, the load balancer checks whether a server is up and running; if it is down, it redirects traffic to another server.
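As a rough sketch of that idea (not a description of any particular provider's load-balancing product), the Python snippet below picks backends in round-robin order and health-checks each one before routing to it. The backend addresses and the /health endpoint are assumptions for illustration.

```python
# Minimal round-robin load-balancer sketch with health checks.
import itertools
import urllib.request

BACKENDS = ["http://10.0.0.11:8080", "http://10.0.0.12:8080", "http://10.0.0.13:8080"]
_rotation = itertools.cycle(BACKENDS)

def is_healthy(backend: str, timeout: float = 1.0) -> bool:
    """Treat a backend as healthy if its /health endpoint answers HTTP 200."""
    try:
        with urllib.request.urlopen(f"{backend}/health", timeout=timeout) as resp:
            return resp.status == 200
    except OSError:
        return False

def pick_backend() -> str:
    """Return the next healthy backend in round-robin order."""
    for _ in range(len(BACKENDS)):
        candidate = next(_rotation)
        if is_healthy(candidate):
            return candidate
    raise RuntimeError("no healthy backends available")

# Each incoming request would be forwarded to whatever pick_backend() returns.
```

In practice this job is handled by dedicated load-balancing software or appliances rather than hand-rolled code, but the routine above captures the routing-plus-health-check logic described here.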

5. Predictability of bandwidth usage

Your bandwidth requirements naturally relate to the predictability of your data traffic. If you are going to consume a lot of bandwidth but predictability is low, you could choose a package with your dedicated server that includes a large amount of data traffic, or even unmetered billing. This is an easy way of knowing exactly how much you will be spending on the hosting of your dedicated server.

6. Network quality

As a customer, you can choose where a dedicated server is placed physically. It is important to consider the location of your end user. For instance, if your customers are in the APAC region, hosting in Europe might not be a sensible choice since data delivery will be slow. Data delivery also depends on the quality of the network of the hosting provider. To find out more about network quality, check a provider’s NOC (Network Operation Center) pages and test the network. Most hosting providers will allow you to do this.

7. Self-service and remote management

To what degree are you allowed to manage your server yourself? If you are running an application on a dedicated server, you probably have the technical skills and knowledge to maintain the server. But do you have access to a remote management module? Most A-brand servers are equipped with remote management modules, and providers can allow you secure access to them.

A remote management module can also help if you are in a transition from IT on premise to a hosted solution (perhaps even a private cloud solution). It can be an in-between step that will leave existing work structures intact and ease the transition for IT personnel, since they will still be able to manage their own software deployments and the customized installation of an operating system.

8. Knowledge partner

And last but definitely not least: make sure your hosting provider involves its engineers and specialists when trying to find a solution tailored to your needs. A true knowledge partner advises on best practices and different solutions. This may involve combining different products into a hybrid solution.

The above will probably give you a good idea of what to consider before renting a dedicated server. If you are looking for specific advice or need assistance, please feel free to contact the LeaseWeb team. They can help you find the solution that is right for you.


Doug Logan, chief technologist at US Cyber Challenge and CEO of Cyber Ninjas, is the author of our latest cybersecurity guest blog post on new approaches to cybersecurity hiring and retaining top cybersecurity talent. US Cyber Challenge’s National Director, Karen Evans, will be speaking on the Force Multipliers to Future Cybersecurity panel at the 2016 Capital Cybersecurity Summit on Nov. 2-3, 2016.


With over 209,000 vacant cybersecurity jobs in the U.S. and job postings up 74% over the last five years, it is an understatement to say that cybersecurity is a growth field. Yet in my work with the US Cyber Challenge, I am routinely told by some of America's best and brightest that they're having difficulty finding a job. Once a person reaches the six-month mark in a cybersecurity role, recruiters will call like crazy. Getting that initial experience is another story. If we're going to secure our companies and our country, this is a problem we need to solve.

Traditional hiring practices suggest that we find people who have performed the job function in the past. By this measure, studies have shown that fewer than 25% of cybersecurity applicants are qualified to perform the job functions. I've actually seen even less optimistic results, with fewer than 10% of candidates qualified, in many cases despite certifications or even similar past job experience. The resource pool is simply not large enough to readily find skilled candidates, and those who are skilled are extremely expensive. I'd like to suggest a different approach: hire the inexperienced and train them.

Time and time again I've been surprised at how quickly smart, passionate but inexperienced individuals out-perform more experienced but "normal" candidates. On average, I find that the right candidates learn about twice as fast as the typical candidate. This means that at six months in, my passionate candidate is functioning at the one-year experience level, and at one year in, they already function at the equivalent of two years of experience. At this pace it does not take long before they surpass those with more experience; and best of all, home-grown talent is more loyal and won't typically jump ship. But how do you find this talent?

The best way I've found to identify smart, passionate individuals who are interested in cybersecurity is to look at candidates who find the time to learn cybersecurity topics even though they are not required to. This often shows up in resumes littered with self-study topics related to the field, or in participation in one of the many cybersecurity competitions available, including Cyber Aces, Cyber Patriot, the US Cyber Challenge and the National Collegiate Cyber Defense Competition. The site CyberCompEx was created specifically to showcase this type of talent.

Unlike experienced cybersecurity professionals with their inflated price tags, truly entry-level candidates can typically be picked up at a fraction of the cost. However, with this discount in salary you should plan on investing a good $5,000-$10,000 in their training during the first year. In addition, be sure to review their performance at the six-month mark and bump their pay appropriately at that time. While home-grown talent is less likely to jump ship, you always need to be in the ballpark of their current worth.


Jack Huffard, president, COO and co-founder of Tenable Network Security, discusses the latest legislation on legacy IT in the federal government in his NVTC guest blog post. Huffard will be participating on the Collaborating for Cyber Success Panel at NVTC’s Capital Cybersecurity Summit on November 2-3, 2016.


In government IT, the old adage "if it works, don't fix it" no longer applies. While legacy systems may still technically be working, they can harbor risky vulnerabilities without vendor support, regular security updates or patch management. This point hit home for many in May when a report from the Government Accountability Office revealed that the country's nuclear arsenal was still controlled by a system with an 8-inch floppy disk.

More recently, the House Oversight and Government Reform Committee released its report analyzing the OPM data breach, in which personally identifiable information (PII) of over 4 million government employees and over 21 million more cleared individuals was exfiltrated. One of the report's key recommendations was to modernize existing legacy federal information technology assets to help prevent another such egregious attack.

The Modernizing Government Technology Act of 2016

Earlier this year, to address this urgent situation, two bills were introduced in Congress to help modernize government IT systems – the MOVE IT Act and the IT Modernization Fund. Both bills have since been combined into the Modernizing Government Technology Act of 2016 (the MGT Act). This Act would create individual funds for government agencies and a broader centralized fund to which agencies could apply for financing modernization efforts. The bill states that the funds could be used “for technology related activities, to improve information technology, to enhance cybersecurity across the Federal Government.”

Details of the MGT Act

More specifically, MGT stipulates several areas in which modernization funds can be used, including:

  • Replacing existing systems that are outdated and inefficient
  • Transitioning to cloud computing (using the private sector as a model)
  • Enhancing information security technologies

The Act states that the government currently spends almost 75% of its IT budget (which now totals over $80 billion) on operating and maintaining legacy systems, leaving little left over for modernization efforts. Not only are these systems subject to failure, but as they age they present ever greater security risks. So it is good to see that the Act encourages not only the simple replacement of agencies' IT systems, but also the addition of cybersecurity technology. Regardless of which new technology is chosen – on-premises, virtual or cloud-based – there is also a pressing need for better information security solutions for government infrastructures, as evidenced by recent agency breaches.

MGT is unique and different than previous proposals because it does not appropriate funds. Rather, it enables agencies to transfer monies – that they have saved by retiring legacy systems and moving to newer technologies – into individual IT working capital funds. They could then reinvest those funds over the next three years for other modernization initiatives, avoiding the “use it or lose it” cycle.

The Act also calls for a general government-wide IT Modernization Fund. This centralized fund would be overseen by the General Services Administration (GSA) and an IT Modernization Board in accordance with guidance from the Office of Management and Budget. Agencies would apply and present business cases for access to the funds to modernize their legacy IT infrastructures. The centralized fund would then be replenished with savings from those modernization initiatives.

The 8-member IT Modernization Board would include the Administrator of the Office of Electronic Government, a GSA official, a NIST employee, a DoD employee, a DHS employee, and three tech-savvy federal employees.

Moving forward in the 21st century

The MGT Act was introduced by Rep. Will Hurd (R-Tx.) who is one of the few members of Congress with a computer science degree. It was co-sponsored by Rep. Gerry Connolly (D-Va.) in a welcome display of bipartisan collaboration. The House passed the bill at the end of September 2016. It is now up to the Senate to act on the bill. Prospects for passage are encouraging, and this bill would be a good step towards updating legacy IT systems, strengthening cybersecurity and embracing 21st century technologies.


We’re thrilled to share our latest cybersecurity guest blog post written by Rick Howard, chief security officer at Palo Alto Networks. Howard will be sharing his expertise at the Capital Cybersecurity Summit on November 2-3, 2016 on the CISO Sidebar panel.


In today's cybersecurity landscape, where attacks are increasing in number and sophistication, the network defense model developed over the past 20 years has become overwhelmed. Commonly referred to in cybersecurity circles as the "Cyber Kill Chain," the model uses what was originally a military concept to help network defenders find a cyber attack and fix any damage it caused and then track, target and engage with the cyber attacker.

Over time, cyber adversaries’ capabilities grew. Soon, they were routinely finding ways to circumvent the Cyber Kill Chain model. This happened for several reasons:

  • Too many tools for defenders to manage. As network defenders struggled to keep up with evolving cyber attackers, more security tools were implemented on the network, and the man-hours spent ensuring those tools were operating correctly and analyzing the data they provided quickly became a burden with which most network defense teams couldn’t keep up.
  • Too much complexity for security. As new security tools were added, the complexity of the network grew. The more complex the network, the easier it is for network defenders to make a mistake that can expose the network to cyber attacks.
  • Too much wasted time. As vendors launched new security tools, customers entered into a kind of arms race in which they were constantly evaluating new “best of breed” security products against each other to determine which was the most effective. These evaluations could take months, with more time and money wasted after a decision was made in order to remove legacy security tools and replace them with new ones, and then train teams on how to use them effectively. It was a process that became more complex – and expensive – every year as cyber threats evolved and new tools were developed to address them.
  • Too inefficient at crossing the last mile. Cyber attackers often leave clues when they penetrate a network's defenses, which are called "indicators of compromise." Once an indicator is found, network security vendors develop prevention and detection controls that address the indicator and deploy them to customers – a process the industry has referred to as "crossing the last mile." But when an indicator affects multiple products from different vendors, or a new indicator of compromise is discovered, keeping track of the status of each tool and whether or not that tool has the most updated controls installed becomes a logistical nightmare.

Much of the complexity that currently overwhelms the Cyber Kill Chain model can be solved with an integrated security platform. "Platform" is a buzzword many vendors use, but I define it as a way to combine tools that network defenders have previously implemented as point solutions from different vendors into a platform built and maintained by one vendor. The "secret sauce" is integration: when the platform components work together, each component becomes more effective, and the network becomes easier to defend because there are fewer tools to manage.

More advanced security platforms have the additional ability to automate the deployment of prevention and detection controls, making the process of crossing the last mile much less labor-intensive. By replacing an ad hoc collection of independent, patched-together tools with a well-integrated, automated security platform, the problems described above become much simpler to resolve or disappear altogether. Partnering with one vendor also gives network defenders leverage in contract negotiations: they can use longer-term contracts to get significant discounts from the vendor and, because of that, insist on creative fulfillment models that help them defend their networks.

The challenge for automated security platform adoption is primarily cultural. Network defenders are familiar with the best-of-breed security tool model, and many see the constant evaluation of new tools as a sort of "survival of the fittest" contest that ensures they'll find the best tool for their network. It will take a lot of education and mind-changing, a process that may require support from an organization's board of directors or C-suite, to ensure the shift happens. But it's a change that needs to happen in order to protect our way of life in this digital age more effectively and efficiently in the future.



This week's blog is written by Connie Pilot, executive vice president and chief information officer at Inova Health System. Pilot will be sharing her expertise on "The Coming Storm from IoT" panel at the Capital Cybersecurity Summit on November 2-3, 2016.


With billions of data-generating devices connected to the Web, the Internet of Things (IoT) is changing the way we do business. No industry is immune, including healthcare. The Food and Drug Administration estimates that 500 million people around the world use some sort of mobile health app on their smartphones, and millions more have embraced wearable health technology. Inside the hospital, Internet-connected medical devices such as MRI machines, CT scanners and dialysis pumps provide critical patient monitoring and support. As wireless technology proliferates in healthcare, so too does risk. The Web is fertile ground for stolen medical records, which are now more valuable to hackers than credit cards. Providers must find new ways to secure private data in an ultra-connected world.

The IoT offers important benefits for healthcare delivery and efficiency. It provides new avenues for patient communication, improves patient engagement and compliance, and enhances value-based care and service. At Inova, we use it in many ways: to monitor fragile newborns in the neonatal intensive care unit, control temperature and humidity in the operating room, deliver pain medication post-operatively and measure heart rhythm in cardiac patients, to name just a few. Medical data tracking enables us to intervene when necessary to provide preventive care, promptly diagnose acute disorders or deliver life-saving medical treatment. The benefits extend beyond our hospital walls into the community, where the IoT drives telehealth advancements that improve access for patients, such as virtual visits, eCheck-In, patient portals and electronic health records.

Balancing the benefits of greater connectivity with the need to protect critical data is a growing priority for healthcare providers. Opportunities exist for instilling interoperability and security standards that will seamlessly facilitate the sharing of necessary patient care information, while continuing to safeguard it from cyber-attacks.

Enabling connection and communication among different information technology systems and software applications can be daunting. While healthcare organizations can use proven security protocols in other domains, differences between IoT devices and traditional computing systems pose significant challenges. The IoT introduces innovative technology that requires emergent, often untested, software and hardware. Wearables, such as consumer fitness trackers and smartwatches, are a case in point. They present non-traditional access into the technology environment. While they use existing communication protocols that can be secured, there are challenges with multi-factor authentication and control of the devices in case of loss or theft.

Additionally, with millions of people using wearables, the volume of data generated can easily overwhelm an organization’s network, leaving it vulnerable to a potential denial of service attack. In this scenario, hackers attempt to prevent legitimate users from accessing information or services. Methods must be developed to limit data transmitted from wearables solely to those devices that should be transmitting and solely to information that is required for patient care.

Clearly, developing new methods of securing devices and the information they generate is a formidable task. We are fortunate to do business in an area that is well positioned to tackle this growing cybersecurity threat. With one of the most sophisticated technology workforces in the country, pioneering start-ups, world-class educational resources and a large government infrastructure, the National Capital region stands at the epicenter of innovation, policy and research. Our collective expertise can help us meet healthcare privacy and security challenges, and keep our patients and community safe.

 

Connie Pilot is executive vice president and chief information officer at Inova Health System. As the leader of Inova’s technology services division, she oversees all aspects of technology, including IT applications, change and quality management, information security, enterprise architecture, service delivery and informatics. 


This week on NVTC’s blog, Gabriela Coman, partner and co-chair of Rubin and Rudman’s Intellectual Property Practice in Washington, D.C., discusses the ever-expanding field of medical device wearable technology and the important intellectual property implications around these devices.



Wearable devices such as personal health monitoring, prevention and management devices, as well as methods of using such wearable devices, have become part of our everyday life and essential tools of modern medicine. From head-mounted display devices such as Google Glass or Oculus Rift to bracelets such as Fitbit or Garmin, wearable devices have also become part of an increasingly competitive and litigious environment, especially when competitors enter the market.

To become successful in the marketplace, a wearable device company needs a superior product and patent protection for its wearable device and related methods of use, both in the United States and abroad.

Patents are critical. A patent is a legal right that excludes others from practicing, manufacturing and selling the technology claimed in the patent (the wearable device and/or method of use of the wearable device). To obtain such patent protection, a wearable device company must submit a separate patent application for each country (or region, in the case of a European patent application) in which it wishes to protect its investment and invention. The time, money and effort required to obtain U.S. and international patents are important considerations, because the process requires a significant investment after filing the application.

Without patent protection, the costly product development for wearable devices may easily be copied by competitors. However, if the wearable device is patentable (and once it has been patented), the company will be able to (i) create legal barriers to entry for competing devices by preventing others from copying, selling or manufacturing the patented device; (ii) license the patented device to generate revenue; and (iii) enhance the value of the wearable device company by building equity in the company and creating assets that may attract other investments.

Before a wearable device company invests time and money to develop a wearable device and bring it to market (particularly for medical devices in the U.S. market that require FDA approval and clearance), the wearable device company should consider the following:

1.    What Are Wearable Devices?

Wearable devices encompass various technologies and systems that span numerous lifestyle applications including health and wellness, sports and fitness, home diagnostics, childcare, pet care, fashion and continuous lifestyle monitoring, among many others. These wearable, portable medical devices make it easier for people to assess their wellness, adopt better lifestyles and prevent the majority of diseases with early diagnosis and treatment. These wearable devices (when connected to a hospital or doctor) can also alert health professionals to various problems regardless of where the patient is located.

For example, a personal heart monitor like the AliveCor Heart Monitor (FDA-approved for detection of atrial fibrillation) allows patients to monitor their heartbeat using an iPhone and provide the information to their doctors. The AliveCor Heart Monitor may be combined with its AliveECG app to provide a 30-second, single-lead electrocardiogram in addition to recording heart rate per minute. In just 30 seconds, a patient can capture a medical-grade electrocardiogram and know instantly whether the heart rhythm is normal or atrial fibrillation is detected. The AliveCor Heart Monitor operates remotely and includes a control unit wirelessly connected to a transmitter that can relay heart rate signals and the electrical profile of the heartbeats, skin temperature and other measurements from a chest band or patch, for example.

Google Glass is another exemplary wearable device. As a head-mounted display device in the shape of a pair of eyeglasses, Google Glass allows medical personnel (e.g., a surgeon) to view information relevant to a patient during surgery without having to turn away from the patient. Because the projector display sits next to the user's right eye, the surgeon can see all medical information without looking across the room and away from the patient. The Glass projector can also display a patient's vital signs, urgent lab results and surgical checklists, along with relevant information on the specific surgical procedure. The doctor can control the device through voice commands and a touchpad located on its frame.

2.    Impact Of Wearable Devices On Health Information Technology

With the 2014 launch of the Apple Watch and its related Apple Health app (a health and fitness data dashboard) and HealthKit platform, many have predicted the beginning of a digital healthcare revolution. Indeed, wearable technology devices have impacted our personal lives in many ways providing insight into our health and diet regimen, blood pressure, sleep pattern, heart rate and many other life aspects. Wearable devices in the form of sport watches track steps and amount of calories burned; Doctor on Demand facilitates video conferences and live discussions with remote physicians; Google Glass facilitates surgery by offering surgeons information relevant to the patient without having to turn away from the patient; mobile health apps help patients stop smoking or lose weight (and can be installed either on a mobile phone or tablet).

Recently, medical device companies have promoted the use of biometric technology within people/patients. The idea is that sensors within the body could be used to call the healthcare provider if the person is sick. These sensors could be swallowed and placed in the blood or injected or inserted directly under the skin. The sensor can report when a patient ingested a prescription drug, as well as a patient’s vital signs. For example, a digital sensor recently approved by the FDA can be placed inside a pill and swallowed by a patient. Once the patient swallows the tiny digital device, the sensor transmits the identity of the medication and timing of ingestion to a mesh worn on the patient’s skin. The mesh then transmits the received information to a mobile phone app that can also provide physicians with vital signs such as heart rate, body temperature and various rest patterns.

Data from biometric digital sensors can be integrated with wearable devices to create new age health monitors that are further integrated with smartphone apps. Conventional health parameters such as glucose, blood pressure and heart rate can now be combined with environmental data to provide predictive as well as preventative information. In this manner, the emphasis is shifted from treatment to prevention of illnesses and diseases.

3.    Wearable Devices And Types Of Intellectual Property

Wearable devices in the medical field could be protected by various types of intellectual property including patents, copyrights and trademarks.

Utility patent applications may be filed to encompass various aspects of the device per se, such as components and specific structures of the wearable device, as well as designs of various components of the wearable device (through design patent applications).

Patent applications may also be filed to cover other aspects of the wearable device, such as its software, interface, or the materials and specialized particulates employed in the wearable technology.

A wearable device company may also seek copyright protection for the software that operates the wearable technology and device, and/or trademarks directed to branding. Consideration may also be given to protecting the packaging of the device as trade dress.

4.    Patent Protection For Wearable Devices

Wearable devices are protected and patented in the U.S. and other countries. However, methods of surgery and medical treatment methods are protected and patentable in the U.S. and Australia but typically not in Europe and other countries such as Canada, South Korea or Japan.

Utility patent applications may be directed to various aspects of the device per se, such as systems, sensors (electrical, optical or chemical sensors that monitor patient parameters), servers, accelerometers, actuators, materials, controls, kits or specific mechanical components of the wearable device, while the designs of various structural components of the wearable device may be protected through design patent applications.

Patent applications may also be directed to software, interface (iconic, graphical or numeric user interface with monochrome or color LCD display) or controller (high speed microprocessors or microcontrollers for analysis and data control) of the wearable device.

For example, US 8,764,651 entitled “Fitness Monitoring” discloses and claims inter alia a monitoring system with a portable device, one or more sensors and a processor; a system with a cellular telephone, an accelerometer and one or more sensors; and a system with a server, a portable appliance with a heart sensor and a processor. US 8,108,036 entitled “Mesh network stroke monitoring appliance” discloses and claims inter alia a monitoring system that includes one or more wireless nodes and a sensor coupled to a person to determine a stroke attack; as well as a heart monitoring system that includes one or more wireless nodes, a wearable appliance and a statistical analyzer. Similarly, USD 737159 and USD 764346 are examples of design patents that depict and claim ornamental design for wearable devices.

Medical device companies in the wearable technology field should protect all novel aspects of the wearable device including structural attributes and methods of use, as well as the ornamental look and design of the product. When possible, medical device companies should include claims that cover not only the product per se but also software that is within the app and the wearable device, without referring to the device, to preserve the right of the patent owner to sue the manufacturer of the software for direct infringement of the patent.

5.    Wearable Devices And Privacy Concerns

While wearable devices and biometric technology are redefining the information landscape and offering many opportunities, they also pose several challenges.

One important challenge is protecting personal data and ensuring that the policies protecting the privacy and confidentiality of patients evolve at the same pace as the expanding use of new technologies. Concerns are being raised as to where this personal data is stored and how it is being protected. Highly sensitive personal data is constantly entered into smartphones running health apps, which monitor an individual based on the data provided. The more data that is entered, the more vulnerable the individual/patient becomes.

The digital format of data from wearable devices and biometric records opens a world of opportunities for hacking and data breaches, especially when the wearable device is linked with a smartphone, tablet and computer.

 

Gabriela I. Coman is partner and co-chair of Rubin and Rudman’s Intellectual Property Practice in Washington, D.C. Coman practices primarily in the intellectual property area, concentrating in the fields of medical, biotechnology, pharmaceuticals, chemical, semiconductors and design patents. Contact Gabriela Coman by email at gcoman@rubinrudman.com or by phone at 202.794.6300.

 


This week on NVTC’s blog, Marty Herbert of NeoSystems Corp. shares the second in a series of tips for workflow and process automation.


In Part 1 of our Workflow and Process Automation Series, Re-evaluating Your Processes, we looked at a few steps your organization can take toward drastically simplifying your billing process. Keep in mind that throughout this series, I will highlight solutions that produce time-saving, compliance-driven processes, integrate with business systems like Deltek Costpoint, NetSuite, SAP and others, and create an enhanced workflow automation framework. In today's post, Part 2 of our series, we'll address vendor invoice processing.

A few years back, while performing an audit as part of a series of consulting projects, I looked at a client's AP department and noted several variations they employed to process their vendor invoices. Some invoices came in via email, others via snail mail. Some came in to the attention of the company's AP department; others came in via the project manager. Some were based on a PO and others were one-off 'bills that needed to be paid.' Determining the appropriate approver could be multi-faceted and involve the receipt of goods (or services). Like many larger government contractors, our client used Deltek Costpoint for vendor invoice processing, so I will use that system as an example of a well-known business system that will be familiar to much of our audience.

This business system has a great mechanism for capturing data and information related to accounts payable, but it can’t necessarily control how invoices are delivered, who approves them, and how that approval is captured for compliance purposes.

Our client's overarching goal (beyond employing processes that increased efficiency and effectiveness) was to find a way to electronically interface an APPROVED invoice for vouchering in Costpoint. That sounds like a simple objective, but there are nuances that might not be immediately obvious. The "approved" aspect implies that there needs to be a process for obtaining a valid, recognized approval. The "electronic" aspect implies that the entry into the ERP system should be automated, without the need for manual data entry. Automated workflow tools make the design and controlled execution of such a process possible, while Costpoint Web Services enables an electronic interface.

But, let’s slow down. Before we send data along, we have to gather the data. In this case the data comes from a vendor’s invoice, but we want to make sure the vendor’s invoice has been reviewed and approved before we send it into the system of record. The first step in automating this process is to gather the data input (the invoices). There are multiple ways we could approach this:

  • We can give vendors access to a "portal" through which they upload the invoice directly into a workflow,
  • Vendors can email the invoice to a specific address that automatically kicks off the process and moves it into a queue for AP servicing, or
  • We can receive a vendor invoice and initiate the process by loading it to the AP queue (potentially after scanning it in if it is received in hard copy).

Then it is time to route the invoice to the proper "approver." If companies are already connected to an ERP application that supports project management data, they can use the data inherent to any given project to pull the relevant approvers for PO-based invoices. AP clerks then match the invoice to a PO (unless the vendor did that already) and choose the lines from the PO to which the invoice applies, and… well, that is all they have had to do so far.

Off to the approver(s) the invoice goes. The approver gets the invoice that has been submitted as well as details added by the AP department. The approver can decide to reject it or send it to another approver, or sit on it a while. Any (or all) of these tasks can be built into the process. The end result is (hopefully) an approved invoice.

At this point, the system should validate the invoice information and manage the voucher process through creation, voucher number generation, accept or reject status and check generation. It is critical and most efficient to have a complete trail of activity from submission to payment.
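As a minimal, hypothetical sketch of that flow – submission, routing, approval, validation and vouchering, with an audit trail at every step – the Python below models the state transitions. The status names, approver lookup and voucher-number format are invented; a real implementation would live inside the workflow tool and call the ERP's web services.

```python
# Hypothetical sketch of an automated invoice-approval workflow with an audit trail.
from dataclasses import dataclass, field

@dataclass
class Invoice:
    vendor: str
    po_number: str
    amount: float
    status: str = "SUBMITTED"
    history: list = field(default_factory=list)

    def log(self, event: str) -> None:
        """Record every step, giving a complete trail from submission to payment."""
        self.history.append(event)

def route_for_approval(inv: Invoice, approver: str) -> None:
    inv.status = "PENDING_APPROVAL"
    inv.log(f"routed to {approver}")

def approve(inv: Invoice, approver: str) -> None:
    inv.status = "APPROVED"
    inv.log(f"approved by {approver}")

def create_voucher(inv: Invoice) -> str:
    if inv.status != "APPROVED":
        raise ValueError("only approved invoices can be vouchered")
    voucher_id = f"VCH-{inv.po_number}"       # placeholder for an ERP-generated number
    inv.status = "VOUCHERED"
    inv.log(f"voucher {voucher_id} created")  # in practice, a web-services call to the ERP
    return voucher_id

inv = Invoice(vendor="Acme Corp", po_number="PO-1001", amount=12500.00)
route_for_approval(inv, "project_manager")
approve(inv, "project_manager")
print(create_voucher(inv), inv.history)
```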

This process, when automated, is extremely easy to follow, saves time and money, and is easier to implement than one might think. Unfortunately, most government contractors don't realize how quickly and effectively automation software can handle this and many other processes.

There are numerous effective workflow management software systems on the market today. Integrify, a workflow management application used to automate a myriad of processes across a variety of platforms, is one tool we use at NeoSystems to automate vendor invoice processing within the business systems we use.

Our next blog will focus on the delightful automation of purchase requisition. If you have any burning questions about this or other processes (even those we haven’t gotten to yet!) using web services and workflow management software for your business system, please feel free to contact me.
