This NVTC guest blog post is written by Marc Burkels, manager of dedicated servers at LeaseWeb. LeaseWeb, an NVTC member company, is an Infrastructure-as-a-Service (IaaS) provider offering dedicated servers, CDN and cloud hosting on a global network. LeaseWeb recently exhibited at the Capital Cybersecurity Summit on Nov. 2-3, 2016.

Let’s say you want to become the new Facebook. Believe it or not, I regularly run into people who have this ambition. The number one question these new Mark Zuckerbergs ask me is which server they need.

It is always a challenge to convince them not to rush into anything. Instead, I have them sit down and tell me what they really want. Since many companies switch servers within a few months of buying, and switching is always time consuming (not to mention costly), it is certainly worth your while to think carefully before you decide. What is the service you want to deliver? What is your workload? Does it involve large databases?

I always discuss the following 8 things to help people decide on the right hosting provider and hardware configuration of a dedicated server:

1. Business impact of downtime

What is the business impact of a potential failure of your hosting environment? One of the first things to consider when selecting a dedicated server is how to deal with potential downtime. In a cloud environment, the setup of the cloud protects you against hardware failures. With a dedicated server, you know you are not sharing resources with anyone else. But since a single server is always a single point of failure, you need to decide whether you can accept potential downtime if you do not have the option to scale to multiple dedicated servers.

2. Scalability of your application

Scalability is another important issue when choosing a dedicated server. How well does your application scale? Is it easy to add more servers and will that increase the amount of end users you can service?

If it is easy for you to scale, it doesn’t matter whether you use a dedicated server or a virtual solution. However, some applications are difficult to scale to multiple devices. Running a database on multiple servers is a challenge, since it needs to be synchronized across all database servers. It might even be easier to move the database to a server that has more processing capacity, RAM and storage. Moving to a cloud environment – where you can clone a server, have a copy running in production and can add a load balancer to redirect traffic to multiple servers – could also be a good option for you.

3. Performance requirements of your server

What are your performance requirements? How many users do you expect and how many servers do you potentially need? Several hardware choices influence server performance:

Processor/CPU

Generally, you can choose the number of processors and cores in a server. Whether you will benefit from more cores depends on the application you are running, although any multi-threaded application, such as a web server or database server, will benefit. Also consider the per-core performance, defined by clock speed: some processors deliver better turnaround times with fewer cores running at a higher clock speed (GHz). Advice on which processors and how many cores to choose should ideally come from whoever manages the application, or from the software vendor, and should take the expected number of users into account.

RAM

The faster the CPU and the more cores it has, the more RAM options are available to you. If you are unsure about your RAM needs, choose a server that allows you to add RAM later, since this is relatively easy. The range of RAM choices, especially with dual-processor configurations, is enormous.

The size of your server matters when choosing RAM, and so does the generation of the technology. Current-generation servers use DDR4, which can have a positive effect on database performance. DDR4 is also attractively priced now that it is the standard.

Hard Drives

Choose a RAID set-up for your hard drives, so you are well protected against the failure of a single hard drive. Your system will still be up and running – with some performance loss – until the hard drive is replaced.

The larger the server, the more hard drive options you have. SATA drives offer high capacity but relatively low performance. SAS performs roughly twice as well as SATA, but at a higher price and lower capacity. SAS in turn has been overtaken by SSD, which is 50 to 100 times faster than SATA.

4. Load balancing across multiple dedicated servers

If your application can scale across multiple dedicated servers, a form of load balancing – where end users are distributed across all available servers – is necessary. If you are running a website and traffic is rising, at some point you will need multiple web servers serving the same website to a multitude of users. With a load balancing solution, every incoming request is directed to a different server. Before forwarding a request, the load balancer checks whether a server is up and running; if it is down, traffic is redirected to another server.

5. Predictability of bandwidth usage

Your bandwidth requirements naturally relate to how predictable your data traffic is. If you are going to consume a lot of bandwidth but predictability is low, you can choose a package for your dedicated server that includes a large amount of data traffic, or even unmetered billing. This way you know exactly how much you will be spending on hosting your dedicated server.
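A quick back-of-the-envelope calculation shows when an included-traffic package beats unmetered billing. The prices below are purely hypothetical, for illustration only:

```python
def metered_cost(traffic_tb, included_tb, base_fee, overage_per_tb):
    """Monthly cost on a metered plan: the base fee covers `included_tb`
    of traffic; anything above that is billed per extra TB."""
    overage = max(0.0, traffic_tb - included_tb)
    return base_fee + overage * overage_per_tb

UNMETERED_FLAT = 150  # hypothetical flat monthly fee for an unmetered port

# Hypothetical metered plan: $100/month with 10 TB included, $5 per extra TB.
for traffic_tb in (8, 20, 40):
    print(traffic_tb, "TB:", metered_cost(traffic_tb, 10, 100, 5), "vs", UNMETERED_FLAT)
```

With these example numbers, the break-even point is 20 TB per month: below it the metered plan is cheaper, above it the unmetered flat fee wins, which is exactly why unpredictable traffic pushes you toward unmetered billing.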

6. Network quality

As a customer, you can choose where your dedicated server is physically located. It is important to consider the location of your end users. For instance, if your customers are in the APAC region, hosting in Europe may not be a sensible choice, since data delivery will be slow. Data delivery also depends on the quality of the hosting provider’s network. To find out more about network quality, check a provider’s NOC (Network Operations Center) pages and test the network; most hosting providers will let you do this.
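Distance matters because physics sets a hard floor on latency: light in fiber travels at roughly two-thirds the speed of light in vacuum, about 200,000 km/s, and a round trip covers the distance twice. A rough sketch (the Amsterdam-to-Singapore great-circle distance used here is approximate):

```python
def min_rtt_ms(distance_km, fiber_speed_km_s=200_000):
    """Theoretical lower bound on round-trip time over fiber.
    Real routes add detours, routing hops and queuing delay on top."""
    return 2 * distance_km * 1000 / fiber_speed_km_s

# ~10,500 km great-circle distance Amsterdam -> Singapore (approximate)
print(round(min_rtt_ms(10_500)), "ms")  # prints: 105 ms
```

So a European server can never serve Singapore users in under ~100 ms round trip, no matter how good the provider's network is, which is why server location comes before network quality in this decision.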

7. Self-service and remote management

To what degree are you allowed to manage your server yourself? If you are running an application on a dedicated server, you probably have the technical skills and the knowledge to maintain the server. But do you have access to a remote management module? Most A-brand servers are equipped with remote management modules, and providers can give you secure access to them.

A remote management module can also help if you are in a transition from IT on premise to a hosted solution (perhaps even a private cloud solution). It can be an in-between step that will leave existing work structures intact and ease the transition for IT personnel, since they will still be able to manage their own software deployments and the customized installation of an operating system.

8. Knowledge partner

And last but definitely not least: make sure your hosting provider involves its engineers and specialists in finding a solution tailored to your needs. A true knowledge partner advises on best practices and different solutions. This may involve combining different products into a hybrid solution.

The above will probably give you a good idea of what to consider before renting a dedicated server. If you are looking for specific advice or need assistance, please feel free to contact the LeaseWeb team. They can help you find the solution that is right for you.


Doug Logan, chief technologist at US Cyber Challenge and CEO of Cyber Ninjas, is the author of our latest cybersecurity guest blog post on new approaches to cybersecurity hiring and retaining top cybersecurity talent. US Cyber Challenge’s National Director, Karen Evans, will be speaking on the Force Multipliers to Future Cybersecurity panel at the 2016 Capital Cybersecurity Summit on Nov. 2-3, 2016.


With over 209,000 vacant cybersecurity jobs in the U.S. and job postings up 74 percent over the last five years, it is an understatement to say that cybersecurity is a growth field. Yet in my work with the US Cyber Challenge, I am routinely told by some of America’s best and brightest that they’re having difficulty finding a job. Once a person reaches the six-month mark in a cybersecurity role, recruiters will call like crazy. Getting that initial experience is another story. If we’re going to secure our companies and our country, this is a problem we need to solve.

Traditional hiring practices suggest that we find people who have performed the job function in the past. By this measure, studies have shown that fewer than 25% of cybersecurity applicants are qualified to perform the job functions. I have had even less optimistic results, with fewer than 10% of candidates qualified, in many cases despite certifications or even similar past job experience. The resource pool is simply not large enough to readily find skilled candidates, and those who are skilled are extremely expensive. I’d like to suggest a different approach: hire the inexperienced and train them.

Time and time again I’ve been surprised at how quickly smart, passionate, but inexperienced individuals out-perform more experienced but “normal” candidates. On average I find that the right candidates learn about twice as fast as your typical candidate. This means that at six months in, my passionate candidate is functioning at the one-year experience level, and at one year in, they already function at the equivalent of two years of experience. At this pace it does not take long before they surpass those with more experience. Best of all, home-grown talent is more loyal and won’t typically jump ship. But how do you find this talent?

The best way I’ve found to identify smart, passionate individuals who are interested in cybersecurity is to look for candidates who find the time to learn cybersecurity topics even though they are not required to. This often shows up in resumes littered with self-study topics related to the field, or in participation in one of the many cybersecurity competitions available, including Cyber Aces, CyberPatriot, the US Cyber Challenge and the National Collegiate Cyber Defense Competition. The site CyberCompEx was created specifically to showcase this type of talent.

Unlike experienced cybersecurity professionals, whose prices are inflated, truly entry-level candidates can typically be hired at a fraction of the cost. However, with this discount in salary you should plan on investing a good $5,000-$10,000 in their training during the first year. In addition, be sure to review their performance at the six-month mark and bump their pay appropriately at that time. While home-grown talent is less likely to jump ship, you always need to stay in the ballpark of their current worth.


Jack Huffard, president, COO and co-founder of Tenable Network Security, discusses the latest legislation on legacy IT in the federal government in his NVTC guest blog post. Huffard will be participating on the Collaborating for Cyber Success Panel at NVTC’s Capital Cybersecurity Summit on November 2-3, 2016.


In government IT, the old adage “if it works, don’t fix it” no longer applies. While legacy systems may still technically be working, they can harbor risky vulnerabilities without vendor support, regular security updates or patch management. This point hit home for many in May when a report from the Government Accountability Office revealed that the country’s nuclear arsenal was still controlled by a system with an 8-inch floppy disk.

More recently, the House Oversight and Government Reform Committee released its report analyzing the OPM data breach that exfiltrated personally identifiable information (PII) of over 4 million government employees and over 21 million more cleared individuals. One of the report’s key recommendations was to modernize existing legacy federal information technology assets to help prevent another such egregious attack.

The Modernizing Government Technology Act of 2016

Earlier this year, to address this urgent situation, two bills were introduced in Congress to help modernize government IT systems – the MOVE IT Act and the IT Modernization Fund. Both bills have since been combined into the Modernizing Government Technology Act of 2016 (the MGT Act). This Act would create individual funds for government agencies and a broader centralized fund to which agencies could apply for financing modernization efforts. The bill states that the funds could be used “for technology related activities, to improve information technology, to enhance cybersecurity across the Federal Government.”

Details of the MGT Act

More specifically, MGT stipulates several areas in which modernization funds can be used, including:

  • Replacing existing systems that are outdated and inefficient
  • Transitioning to cloud computing (using the private sector as a model)
  • Enhancing information security technologies

The Act states that the government currently spends almost 75% of its IT budget (which now totals over $80 billion) on operating and maintaining legacy systems, leaving little left over for modernization efforts. Not only are these systems subject to failure, but as they get older and older, they present greater and greater security risks as well. So it is good to see that the Act encourages not only the simple replacement of agencies’ IT systems, but the addition of cybersecurity technology. Regardless of which new technology is chosen – on-premises, virtual, or cloud-based – there is also a pressing need for better information security solutions for government infrastructures, as evidenced by recent agency breaches.
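The arithmetic behind that claim is simple, and it shows how little room legacy maintenance leaves for anything else:

```python
total_it_budget = 80e9   # federal IT budget cited in the Act: over $80 billion
legacy_share = 0.75      # ~75% spent operating and maintaining legacy systems

legacy_spend = total_it_budget * legacy_share          # roughly $60 billion
modernization_left = total_it_budget - legacy_spend    # roughly $20 billion
print(f"legacy: ${legacy_spend/1e9:.0f}B, remaining: ${modernization_left/1e9:.0f}B")
```

In other words, of an $80 billion budget, about $60 billion goes to keeping old systems running, leaving only around $20 billion for everything new, including security.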

MGT is unique and different than previous proposals because it does not appropriate funds. Rather, it enables agencies to transfer monies – that they have saved by retiring legacy systems and moving to newer technologies – into individual IT working capital funds. They could then reinvest those funds over the next three years for other modernization initiatives, avoiding the “use it or lose it” cycle.

The Act also calls for a general government-wide IT Modernization Fund. This centralized fund would be overseen by the General Services Administration (GSA) and an IT Modernization Board, in accordance with guidance from the Office of Management and Budget. Agencies would apply and present business cases for access to the funds to modernize their legacy IT infrastructures. The centralized fund would then be replenished with savings from those modernization initiatives.

The 8-member IT Modernization Board would include the Administrator of the Office of Electronic Government, a GSA official, a NIST employee, a DoD employee, a DHS employee, and three tech-savvy federal employees.

Moving forward in the 21st century

The MGT Act was introduced by Rep. Will Hurd (R-Texas), one of the few members of Congress with a computer science degree. It was co-sponsored by Rep. Gerry Connolly (D-Va.) in a welcome display of bipartisan collaboration. The House passed the bill at the end of September 2016. It is now up to the Senate to act on the bill. Prospects for passage are encouraging, and this bill would be a good step towards updating legacy IT systems, strengthening cybersecurity and embracing 21st century technologies.


We’re thrilled to share our latest cybersecurity guest blog post written by Rick Howard, chief security officer at Palo Alto Networks. Howard will be sharing his expertise at the Capital Cybersecurity Summit on November 2-3, 2016 on the CISO Sidebar panel.


In today’s cybersecurity landscape, where attacks are increasing in number and sophistication, the network defense model developed over the past 20 years has become overwhelmed. Commonly referred to in cybersecurity circles as the “Cyber Kill Chain,” the model uses what was originally a military concept to help network defenders find a cyber attack and fix any damage it caused and then track, target and engage with the cyber attacker.

Over time, cyber adversaries’ capabilities grew. Soon, they were routinely finding ways to circumvent the Cyber Kill Chain model. This happened for several reasons:

  • Too many tools for defenders to manage. As network defenders struggled to keep up with evolving cyber attackers, more security tools were implemented on the network, and the man-hours spent ensuring those tools were operating correctly and analyzing the data they provided quickly became a burden with which most network defense teams couldn’t keep up.
  • Too much complexity for security. As new security tools were added, the complexity of the network grew. The more complex the network, the easier it is for network defenders to make a mistake that can expose the network to cyber attacks.
  • Too much wasted time. As vendors launched new security tools, customers entered into a kind of arms race in which they were constantly evaluating new “best of breed” security products against each other to determine which was the most effective. These evaluations could take months, with more time and money wasted after a decision was made in order to remove legacy security tools and replace them with new ones, and then train teams on how to use them effectively. It was a process that became more complex – and expensive – every year as cyber threats evolved and new tools were developed to address them.
  • Too inefficient at crossing the last mile. Cyber attackers often leave clues, called “indicators of compromise,” when they penetrate a network’s defenses. Once an indicator is found, network security vendors develop prevention and detection controls that address the indicator and deploy them to customers, a process the industry has referred to as “crossing the last mile.” But when an indicator affects multiple products from different vendors, or a new indicator of compromise is discovered, keeping track of the status of each tool and whether or not that tool has the most updated controls installed becomes a logistical nightmare.
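At its core, the "last mile" step is matching known indicators against your own telemetry. A toy illustration in Python, treating indicators as plain substrings such as bad IP addresses (real platforms use structured indicator feeds of hashes, domains and IPs, and automate control deployment on top of the matching):

```python
def find_ioc_hits(log_lines, indicators):
    """Scan log lines for known indicators of compromise (IOCs).
    Returns (line number, indicator) pairs for every match found."""
    hits = []
    for lineno, line in enumerate(log_lines, start=1):
        for ioc in indicators:
            if ioc in line:
                hits.append((lineno, ioc))
    return hits
```

Doing this by hand across dozens of tools, each with its own indicator format and update cadence, is exactly the logistical nightmare described above; a platform centralizes it.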

Much of the complexity that currently overwhelms the Cyber Kill Chain model can be solved with an integrated security platform. “Platform” is a buzzword many vendors use, but I define it as a way to combine tools that network defenders have previously implemented as point solutions from different vendors into a platform built and maintained by one vendor. The “secret sauce” is integration: when the platform components work together, each component becomes more effective, and the network becomes easier to defend because there are fewer tools to manage.

More advanced security platforms have the additional ability to automate the deployment of prevention and detection controls, making the process to cross the last mile much less labor-intensive. By replacing an ad hoc collection of independent, patched-together tools with a well-integrated, automated security platform, the problems described above become much simpler to resolve or disappear altogether. Partnering with one vendor gives network defenders leverage in terms of contract negotiations. They can use longer term contracts to get significant discounts from the vendor and, because of that, they can insist on creative fulfillment models that are advantageous to themselves in defending their networks.

The challenge for automated security platform adoption is primarily cultural. Network defenders are familiar with the best-of-breed security tool model, and many see the constant evaluation of new tools as a sort of “survival of the fittest” contest that ensures they’ll find the best tool for their network. Changing that will take a lot of education and persuasion, a process that may require support from an organization’s board of directors or C-suite. But it’s a change that needs to happen in order to protect our way of life in this digital age more effectively and efficiently in the future.



This week’s blog is written by Connie Pilot, executive vice president and chief information officer at Inova Health System. Pilot will be sharing her expertise on the “The Coming Storm from IoT” panel at the Capital Cybersecurity Summit on November 2-3, 2016


With billions of data-generating devices connected to the Web, the Internet of Things (IoT) is changing the way we do business. No industry is immune, including healthcare. The Food and Drug Administration estimates that 500 million people around the world use some sort of mobile health app on their smartphones, and millions more have embraced wearable health technology. Inside the hospital, Internet-connected medical devices such as MRI machines, CT scanners and dialysis pumps provide critical patient monitoring and support. As wireless technology proliferates in healthcare, so too does risk. The Web is fertile ground for stolen medical records, which are now more valuable to hackers than credit cards. Providers must find new ways to secure private data in an ultra-connected world.

The IoT offers important benefits for healthcare delivery and efficiency. It provides new avenues for patient communication, improves patient engagement and compliance, and enhances value-based care and service. At Inova, we use it in many ways: to monitor fragile newborns in the neonatal intensive care unit, control temperature and humidity in the operating room, deliver pain medication post-operatively and measure heart rhythm in cardiac patients, to name just a few. Medical data tracking enables us to intervene when necessary to provide preventive care, promptly diagnose acute disorders or deliver life-saving medical treatment. The benefits extend beyond our hospital walls into the community, where the IoT drives telehealth advancements that improve access for patients, such as virtual visits, eCheck-In, patient portals and electronic health records.

Balancing the benefits of greater connectivity with the need to protect critical data is a growing priority for healthcare providers. Opportunities exist for establishing interoperability and security standards that will seamlessly facilitate the sharing of necessary patient care information while continuing to safeguard it from cyber-attacks.

Enabling connection and communication among different information technology systems and software applications can be daunting. While healthcare organizations can use proven security protocols in other domains, differences between IoT devices and traditional computing systems pose significant challenges. The IoT introduces innovative technology that requires emergent, often untested, software and hardware. Wearables, such as consumer fitness trackers and smartwatches, are a case in point. They present non-traditional access into the technology environment. While they use existing communication protocols that can be secured, there are challenges with multi-factor authentication and control of the devices in case of loss or theft.

Additionally, with millions of people using wearables, the volume of data generated can easily overwhelm an organization’s network, leaving it vulnerable to a potential denial of service attack. In this scenario, hackers attempt to prevent legitimate users from accessing information or services. Methods must be developed to limit data transmitted from wearables solely to those devices that should be transmitting and solely to information that is required for patient care.
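One common way to enforce such per-device limits is a token-bucket rate limiter: each device may send a short burst of readings, then is throttled to a steady refill rate. A minimal sketch, with hypothetical capacity and rate values and a caller-supplied clock:

```python
class TokenBucket:
    """Token-bucket rate limiter: a device may burst up to `capacity`
    transmissions, replenished at `rate` tokens per second. Excess
    transmissions are rejected instead of flooding the network."""

    def __init__(self, capacity, rate):
        self.capacity = capacity
        self.rate = rate
        self.tokens = float(capacity)
        self.last = 0.0

    def allow(self, now):
        # Refill tokens for the elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```

Keeping one bucket per registered device also addresses the "only devices that should be transmitting" requirement: an unknown device simply has no bucket and its traffic is dropped outright.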

Clearly, developing new methods of securing devices and the information they generate is a formidable task. We are fortunate to do business in an area that is well positioned to tackle this growing cybersecurity threat. With one of the most sophisticated technology workforces in the country, pioneering start-ups, world-class educational resources and a large government infrastructure, the National Capital region stands at the epicenter of innovation, policy and research. Our collective expertise can help us meet healthcare privacy and security challenges, and keep our patients and community safe.

 

Connie Pilot is executive vice president and chief information officer at Inova Health System. As the leader of Inova’s technology services division, she oversees all aspects of technology, including IT applications, change and quality management, information security, enterprise architecture, service delivery and informatics. 


This week on NVTC’s blog, Gabriela Coman, partner and co-chair of Rubin and Rudman’s Intellectual Property Practice in Washington, D.C., discusses the ever-expanding field of medical device wearable technology and the important intellectual property implications around these devices.



Wearable devices such as personal health monitoring, prevention and management devices, as well as methods of using such wearable devices, have become part of our everyday life and essential tools of modern medicine. From head-mounted display devices such as Google Glass or Oculus Rift to bracelets such as Fitbit or Garmin, wearable devices have also become part of an increasingly competitive and litigious environment, especially when competitors enter the market.

To become successful in the marketplace, a wearable device company needs a superior product and patent protection for its wearable device and related methods of use, both in the United States and abroad.

Patents are critical. A patent is a legal right that excludes others from making, using and selling the technology claimed in the patent (the wearable device and/or method of using the wearable device). To obtain such patent protection, a wearable device company must file a separate patent application in each country (or region, in the case of a European patent application) in which it wishes to protect its investment and invention. The time, money and effort required to obtain U.S. and international patents are important considerations, because the process requires a significant investment after filing the application.

Without patent protection, the costly product development for wearable devices may easily be copied by competitors. However, if the wearable device is patentable (and once it has been patented), the company will be able to (i) create legal barriers to entry for competing devices by preventing others from copying, selling or manufacturing the patented device; (ii) license the patented device to generate revenue; and (iii) enhance the value of the wearable device company by building equity in the company and creating assets that may attract other investments.

Before a wearable device company invests time and money to develop a wearable device and bring it to market (particularly for medical devices in the U.S. market that require FDA approval and clearance), the wearable device company should consider the following:

1.    What Are Wearable Devices?

Wearable devices encompass various technologies and systems that span numerous lifestyle applications including health and wellness, sports and fitness, home diagnostics, childcare, pet care, fashion and continuous lifestyle monitoring, among many others. These wearable, portable medical devices make it easier for people to assess their wellness, adopt better lifestyles and prevent many diseases through early diagnosis and treatment. When connected to a hospital or doctor, these devices can also alert health professionals to various problems regardless of where the patient is located.

For example, a personal heart monitor like the AliveCor Heart Monitor (FDA-approved for detection of atrial fibrillation) allows patients to monitor their heartbeat using an iPhone and share the information with their doctors. The AliveCor Heart Monitor may be combined with its AliveECG app to provide a 30-second, single-lead electrocardiogram in addition to recording heart rate per minute. In just 30 seconds, a patient can capture a medical-grade electrocardiogram and know instantly whether the heart rhythm is normal or atrial fibrillation has been detected. The AliveCor Heart Monitor operates remotely and includes a control unit wirelessly connected to a transmitter that can relay heart rate signals, the electrical profile of the heartbeats, skin temperature and other measurements from a chest band or patch, for example.

Google Glass is another exemplary wearable device. As a head-mounted display device in the shape of a pair of eyeglasses, Google Glass allows medical personnel, such as surgeons, to view information relevant to a patient during surgery without having to turn away from the patient. With the projector display next to the user’s right eye, the surgeon can see all medical information without looking across the room and away from the patient. The Glass projector can also display the patient’s vital signs, urgent lab results and surgical checklists, along with relevant information on the specific surgical procedure. The doctor can control the device through voice commands and a touchpad located on its frame.

2.    Impact Of Wearable Devices On Health Information Technology

With the 2014 unveiling of the Apple Watch and its related Apple Health app (a health and fitness data dashboard) and HealthKit platform, many have predicted the beginning of a digital healthcare revolution. Indeed, wearable technology devices have impacted our personal lives in many ways, providing insight into our health and diet regimen, blood pressure, sleep patterns, heart rate and many other aspects of life. Wearable devices in the form of sport watches track steps and calories burned; Doctor on Demand facilitates video conferences and live discussions with remote physicians; Google Glass facilitates surgery by offering surgeons information relevant to the patient without having to turn away from the patient; and mobile health apps, installed on a phone or tablet, help patients stop smoking or lose weight.

Recently, medical device companies have promoted the use of biometric technology within the patient’s body. The idea is that sensors inside the body could call the healthcare provider if the person is sick. These sensors can be swallowed, placed in the bloodstream, or injected or inserted directly under the skin. A sensor can report when a patient has ingested a prescription drug, as well as the patient’s vital signs. For example, a digital sensor recently approved by the FDA can be placed inside a pill and swallowed by a patient. Once the patient swallows the tiny digital device, the sensor transmits the identity of the medication and the timing of ingestion to a mesh worn on the patient’s skin. The mesh then transmits the received information to a mobile phone app that can also provide physicians with vital signs such as heart rate, body temperature and rest patterns.

Data from biometric digital sensors can be integrated with wearable devices to create new age health monitors that are further integrated with smartphone apps. Conventional health parameters such as glucose, blood pressure and heart rate can now be combined with environmental data to provide predictive as well as preventative information. In this manner, the emphasis is shifted from treatment to prevention of illnesses and diseases.

3.    Wearable Devices And Types Of Intellectual Property

Wearable devices in the medical field could be protected by various types of intellectual property including patents, copyrights and trademarks.

Utility patent applications may be filed to cover various aspects of the device itself, such as components and specific structures of the wearable device, while design patent applications may be filed to protect the designs of its various components.

Patent applications may also be filed to cover the software, interface, or materials and specialized particulates employed in the wearable technology.

A wearable device company may also register copyrights in the software that operates the wearable technology and device, and/or trademarks directed to its branding. Consideration may also be given to protecting the packaging of the device as trade dress.

4.    Patent Protection For Wearable Devices

Wearable devices are protectable and patentable in the U.S. and other countries. However, methods of surgery and medical treatment are patentable in the U.S. and Australia but typically not in Europe or in countries such as Canada, South Korea and Japan.

Utility patent applications may be directed to various aspects of the device itself, such as systems, sensors (electrical, optical or chemical sensors that monitor patient parameters), servers, accelerometers, actuators, materials, controls, kits or specific mechanical components of the wearable device, while design patent applications may protect the designs of its various structural components.

Patent applications may also be directed to software, interface (iconic, graphical or numeric user interface with monochrome or color LCD display) or controller (high speed microprocessors or microcontrollers for analysis and data control) of the wearable device.

For example, US 8,764,651 entitled “Fitness Monitoring” discloses and claims inter alia a monitoring system with a portable device, one or more sensors and a processor; a system with a cellular telephone, an accelerometer and one or more sensors; and a system with a server, a portable appliance with a heart sensor and a processor. US 8,108,036 entitled “Mesh network stroke monitoring appliance” discloses and claims inter alia a monitoring system that includes one or more wireless nodes and a sensor coupled to a person to determine a stroke attack; as well as a heart monitoring system that includes one or more wireless nodes, a wearable appliance and a statistical analyzer. Similarly, USD 737159 and USD 764346 are examples of design patents that depict and claim ornamental design for wearable devices.

Medical device companies in the wearable technology field should protect all novel aspects of a wearable device, including its structural attributes and methods of use as well as the ornamental look and design of the product. When possible, they should include claims that cover not only the product itself but also the software within the app and the wearable device, without referring to the device, to preserve the patent owner’s right to sue the manufacturer of the software for direct infringement.

5.    Wearable Devices And Privacy Concerns

While wearable devices and biometric technology are redefining the information landscape and offering many opportunities, they also pose several challenges.

One important challenge is protecting personal data and ensuring that the policies safeguarding patient privacy and confidentiality evolve at the same pace as the use of these new technologies. Concerns are being raised about where this personal data is stored and how it is protected. Highly sensitive personal data is constantly entered into smartphone health apps that monitor the individual based on that data, and the more data that is entered, the more vulnerable the individual or patient becomes.

The digital format of data from wearable devices and biometric records opens a world of opportunities for hacking and data breaches, especially when the wearable device is linked with a smartphone, tablet and computer.

 

Gabriela I. Coman is partner and co-chair of Rubin and Rudman’s Intellectual Property Practice in Washington, D.C. Coman practices primarily in the intellectual property area, concentrating in the fields of medical, biotechnology, pharmaceuticals, chemical, semiconductors and design patents. Contact Gabriela Coman by email at gcoman@rubinrudman.com or by phone at 202.794.6300.

 


This week on NVTC’s blog, Marty Herbert of NeoSystems Corp. shares the second in a series of tips for workflow and process automation.


In Part 1 of our Workflow and Process Automation Series, Re-evaluating Your Processes, we looked at a few steps your organization can take toward drastically simplifying your billing process. Keep in mind that throughout this series, I will highlight solutions that produce time-saving, compliance-driven processes that integrate with business systems such as Deltek Costpoint, NetSuite and SAP to create an enhanced workflow automation framework. In today’s post, Part 2 of our series, we’ll address vendor invoice processing.

A few years back, while working on a series of consulting projects, I audited a client’s AP department and noted several variations they employed to process their vendor invoices. Some invoices came in via email, others via snail mail. Some came in to the attention of the company’s AP department; others came in via the project manager. Some were based on a PO and others were one-off ‘bills that needed to be paid.’ Determining the appropriate approver could be multi-faceted and involve the receipt of goods (or services). Like many larger government contractors, our client used Deltek Costpoint for vendor invoice processing, so I will use that system as an example of a well-known business system widely recognizable to our audience.

This business system has a great mechanism for capturing data and information related to accounts payable, but it can’t necessarily control how invoices are delivered, who approves them, and how that approval is captured for compliance purposes.

Our client’s overarching goal (beyond employing processes that increased efficiency and effectiveness) was to find a way to electronically interface an APPROVED invoice for vouchering in Costpoint. That sounds like a simple objective, but there are nuances that might not be immediately obvious. The “approved” aspect implies that a process must be followed to obtain a valid, recognized approval. The “electronic” aspect implies that the entry into the ERP system should be automated, without the need for manual data entry. Automated workflow tools make the design and controlled execution of a process possible, while Costpoint Web Services enables an electronic interface.

But, let’s slow down. Before we send data along, we have to gather the data. In this case the data comes from a vendor’s invoice, but we want to make sure the vendor’s invoice has been reviewed and approved before we send it into the system of record. The first step in automating this process is to gather the data input (the invoices). There are multiple ways we could approach this:

  • We can give vendors access to a “portal” where they upload the invoice directly into a workflow,
  • Vendors can email the invoice to a specific address that automates process kick-off and moves it into a queue for AP servicing, or
  • We can receive a vendor invoice and initiate the process by loading it to the AP queue (scanning it first if it is received in hard copy).

Then it is time to route the invoice to the proper “approver.” If companies are already connected to an ERP application that supports project management data, they can use the data inherent to any given project to pull the relevant approvers for PO-based invoices. AP clerks will then have matched the invoice to a PO (unless the vendor did that already) and chosen the lines from the PO to which the invoice applies, and… well, that is all they have had to do so far.

Off to the approver(s) the invoice goes. The approver receives the invoice that has been submitted, along with the details added by the AP department. The approver can approve it, reject it, send it to another approver, or sit on it a while. Any (or all) of these tasks can be built into the process. The end result is (hopefully) an approved invoice.

At this point, the system should validate the invoice information and manage the voucher process through creation, voucher number generation, accept or reject status and check generation. It is critical and most efficient to have a complete trail of activity from submission to payment.
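The flow described above – intake, routing, approval and vouchering with a complete activity trail – can be sketched as a minimal state machine. This is an illustrative sketch only; the names (`Invoice`, `route`, `voucher`) are hypothetical and not part of Costpoint, Costpoint Web Services or any workflow product:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Invoice:
    vendor: str
    amount: float
    state: str = "received"   # received -> routed -> approved/rejected -> vouchered
    trail: list = field(default_factory=list)

    def _log(self, event):
        # Every transition is time-stamped, preserving a submission-to-payment trail.
        self.trail.append((datetime.now(timezone.utc).isoformat(), event))

    def route(self, approver):
        assert self.state == "received"
        self.state = "routed"
        self._log(f"routed to {approver}")

    def decide(self, approver, approved):
        assert self.state == "routed"
        self.state = "approved" if approved else "rejected"
        self._log(f"{self.state} by {approver}")

    def voucher(self, number):
        # Only an approved invoice may be interfaced into the ERP for vouchering.
        assert self.state == "approved"
        self.state = "vouchered"
        self._log(f"voucher {number} created")

inv = Invoice("Acme Supplies", 1250.00)
inv.route("janet")
inv.decide("janet", approved=True)
inv.voucher("V-2042")
print(inv.state)   # vouchered
```

The point of the sketch is the assertions: an invoice physically cannot reach the voucher step without passing through an approval, and every step leaves an audit record.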

This process, when automated, is extremely easy to follow, saves time and money and is easier to implement than one might think. Unfortunately, most government contractors don’t know the ease with which automation software can achieve this and many other processes quickly and effectively.

There are numerous effective workflow management software systems in the market today. Integrify, a workflow management software used to automate a myriad of processes within a variety of platforms, is one tool we use at NeoSystems to automate vendor invoice processing within the business systems we use.

Our next blog will focus on the delightful automation of purchase requisition. If you have any burning questions about this or other processes (even those we haven’t gotten to yet!) using web services and workflow management software for your business system, please feel free to contact me.


This week on NVTC’s blog, Marty Herbert of NeoSystems Corp. shares the first in a series of tips for workflow and process automation.


If you are an ERP user, you likely know that most applications are rich with features that address the nuances of running projects, especially if you are a government contractor. However, no application can address the many steps an organization must go through to accomplish what might look on the surface like a simple task.

Take ‘billing’ for example. I was asked a while back to determine how to route a bill for approval, and I thought it would be a “piece of cake.” Create bill. Send to approver. Get approval. Bill is right – send to customer. Bill is wrong – rinse and repeat. For this article, we’ll use the commonly known GovCon ERP, Deltek Costpoint, as an example. This system is very good at the first part: if you need to create a bill, you can create one replete with support for hours worked and costs incurred. The problem, however, is that there is no simple way of implementing a workflow process within the ERP framework that will accommodate most organizations’ review and approval routines. That’s not a knock against Costpoint; no ERP system on the market adequately addresses this issue, especially when you magnify it by the many, many other processes an organization has in place to accomplish its back-office routines.

Over the next six weeks we will look at several areas where workflow plays a big role and at how to leverage workflow automation through integration with your ERP. Companies that don’t know how to automate in these areas waste precious time determining the process, miss steps and ultimately never streamline the efficiencies that will save them money down the road.

In our first post for “Evaluating Your Process for Users of Deltek Costpoint or a Similar System,” I’ll examine the role of an AR clerk with my ‘piece of cake’ attempt at automating bill routing.

I had bills created from our ERP and I had Outlook, so I sent two bills to their respective approvers to verify hours were correct so we could bill the services to the client. Then I waited and waited and waited and waited… you get the picture. I followed up via email at least three times over the next week and finally, a week later, I knocked on their doors to see if they had time to review the email I sent.

‘Approver 1’ called me to his desk and had me look at the count of emails in his inbox. Until then, I was unaware that this number could go over 9,999, but there it was. I apologized and helped him find my email. Five minutes later he reviewed it and sent me an email saying we could bill it. Finally, the bill was out the door. I don’t remember whether I had to mail it or email it, but that is of no consequence. Oh, and of course, I forgot to tell my supervisor that I got the bill out the door, so she was unnecessarily on my case the next morning. I’ll try not to make that mistake again.

‘Approver 2’ (let’s call her Amy) asked if I had received her email. She said she had responded immediately to each of the messages I sent, so I crept back to my cube and found her responses. Suddenly I was the culprit in slowing down my own process! “Sorry, this Acme project isn’t mine,” she said. “These should go to Janet; she runs the Acme project.” Ugh! Wouldn’t you know she didn’t even have the courtesy to copy Janet on her response to me. So I trudged down the hall to Janet’s office and had her review the paper copy. She looked at it briefly and said, “Yep, looks fine.” Great – I was out her door and happy to get the bill out the door. Never mind that I forgot to have Janet initial the invoice to indicate she had approved it and, of course, I forgot to tell my supervisor I sent the bill. But, hey… the bill is out the door, case closed.

Actually, the case was just getting started. The following week, in walks my supervisor. “I got a call from Acme Company’s CFO.  She asked me who Francis Miller was and why we were billing Acme for her travel to Las Vegas.  When I look in our system, this bill isn’t even posted, when did you send it out? Did you get Amy to review and approve this before you sent it out?” Sorry, I said, I forgot to post the bill in the system, and Amy said the project really belongs to Janet, so I got her to review and approve it…..see (as I pulled my copy from the file drawer). But, of course, Janet’s initials weren’t there.  Now my boss is mad at me for sending out an invoice that she thinks I didn’t get reviewed AND I forgot to post it. Swell.

I realized there was A LOT of room for improvement in this process. Problem #1: people are swamped with email. Problem #2: people change roles and responsibilities a lot. Problem #3: no coordination between the ERP and the approval activities. Problem #4: I can be my own worst enemy. Why couldn’t all this stuff be linked together somehow, and why isn’t there a way to get things posted in the system without me having to remember every little thing? I’m only human, after all. And this was a simple bill. I could only imagine – or rather didn’t want to in this case – what would have happened if there had been revisions.

From experience I’ve gathered intelligence on how to sidestep these common pitfalls. Apart from working together as a team, companies always think in terms of making changes to their IT infrastructure. What I believe needs to happen is to approach these pitfalls in terms of changing the process infrastructure. There are no short-term ‘quick fix’ changes, but rather logical steps toward automating the manual processes that run at the heart of the business.

Step 1

Get people out of email and into a single system for approvals. This will help solve problem #1 and 3. By logging in to a single system for approvals, the approver should be able to get to a “To Do” list that helps them focus on the task(s) at hand. A system that alerts ONLY when an approval is required, and only when this task is “past due,” can assist in decreasing problem #1.

Step 2

Link your system to Deltek Costpoint or a similar platform! Not only does this save the time spent transferring information into Outlook, it also ensures that information is not entered incorrectly – or not entered at all. Additionally, users can maintain project leads in Costpoint and link them to users in the system to automatically assign the approver to the person(s) involved in any given approval process. Problems #2 and #3 solved.

Step 3

Create a workflow that allows for rework, rejection, and handles the issues and items that may need to be addressed when something is “wrong.” That way, the stakeholders that need to be involved can be included automatically based on roles, or by selecting a user from a list of possible issues/departments involved. This decreases the amount of emails sent out for approvals. Assigning a task and automating reminders in the system accomplishes all these things.
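Steps 2 and 3 can be sketched in a few lines: the approver is looked up from project data maintained in the ERP, and a rejection sends the bill back through a rework loop instead of dying in someone’s inbox. This is a hypothetical illustration; `PROJECT_LEADS` and `route_bill` stand in for data and logic that a real workflow tool would pull from the ERP:

```python
# Mirrors project-lead assignments maintained in the ERP (Step 2).
PROJECT_LEADS = {"ACME-001": "janet", "BETA-002": "amy"}

def route_bill(project_id, review):
    """Route a bill to the project lead; loop on rework until approved."""
    approver = PROJECT_LEADS[project_id]      # approver assigned from ERP data, not guessed
    attempts = 0
    while True:
        attempts += 1
        verdict = review(approver)            # Step 1: a single to-do system, not email
        if verdict == "approved":
            return approver, attempts
        # Step 3: rejection triggers rework and an automated reminder.
        print(f"bill returned for rework by {approver} (attempt {attempts})")

# First review finds an error; the rework loop re-queues the bill automatically.
verdicts = iter(["rework", "approved"])
approver, attempts = route_bill("ACME-001", lambda a: next(verdicts))
print(approver, attempts)   # janet 2
```

Had I had something like this, the Acme bill would have gone straight to Janet, and the rejection would have come back to my queue rather than vanishing into an inbox with 9,999+ unread messages.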

Step 4

Solve Problem #4.  Remove yourself from your enemy list.  Relax. Stay out of email. Work on other things. Seriously. At a recent conference I attended, it was estimated that we spend around 28 percent of our work time sending or reading emails. What happens when you remove a single work stream worth of emails from your list of things to do? You can get back a piece of that time to work on other more pressing issues.

If it sounds like I’ve been through this process at least a few times, it’s because I have. Using the power of a business process management tool called Integrify, NeoSystems has automated this and other processes and tied those processes to Costpoint and similar platforms. Throughout this series, I will highlight the ways we have implemented, envisioned, and produced time-saving, compliance-driven processes that integrate with your ERP to create an Enhanced Workflow Automation Framework.

Have burning questions about Process Automation? Feel free to contact me ahead of next week’s blog post.


Collecting Big Data Footprints

May 23rd, 2016 | Posted by Sarah Jones in Guest Blogs | Member Blog Posts

This week on NVTC’s blog, the Virginia Commonwealth University School of Engineering shares research on Big Data footprints that the Electrical and Computer Engineering Department is working on with the Huazhong University of Science and Technology.


vcublogXubin He, Ph.D., professor and graduate program director of the Virginia Commonwealth University School of Engineering Electrical and Computer Engineering department, is working with Huazhong University of Science and Technology (HUST) to establish an international research institute focused on creating design techniques to improve data reliability and performance. Coordination efforts are currently underway to create rotation periods for students from VCU and HUST to conduct research within each university’s state-of-the art laboratories.

“This next step in our partnership with VCU helps both universities attract more high-quality research students, while enhancing the breadth and depth of our research,” said Dan Feng, Ph.D. and dean of the School of Computer Science and Technology at HUST. Feng also serves as director of the Data Storage and Application lab at HUST.

Managing big data

Data storage is a booming industry, with lots of opportunities. Just a decade ago, computational speed dominated research efforts and water cooler conversations. According to He, data is more important now. “Data empowers decision-making and drives business progress. No one can tolerate data loss, whether that data represents favorite photos or industry trends and analytics,” added He. And yet, trying to increase data capacity or replace obsolete data systems can shut down vital data centers for days.

Research teams from both universities find creative solutions to global data pain points. For example, these collaborative research teams reduced overhead costs associated with data failures by up to 30 percent. Their algorithms allow businesses to encode data that can be easily retrieved, instead of having to rely on costly data copies or redundant data centers.

Currently, in addition to HUST, He’s team also works with top data storage companies such as EMC, which ranked 128 on the Fortune 500 and reported revenues of $24.4 billion in 2014.

The network effect

He has a simple philosophy to gauge the success of university research efforts — he looks at who else is there. “At top data storage and systems events such as USENIX’s File and Storage Technologies conference and USENIX’s Annual Technical conference, we’re presenting with peers from Harvard, MIT, Princeton and other premier universities we admire,” said He. These conferences typically accept about 30 presentation papers — that’s less than 20 percent of the global submissions they receive.

“Professor He’s leadership represents one of many efforts to build our international reputation in industry and academia,” said Erdem Topsakal, Ph.D., chair of the Department of Electrical and Computer Engineering. “HUST is ranked 19 on the U.S. News & World Report Best Global Universities for Engineering list. When leading universities like HUST want to work closely with you, you know you’re doing something right.”

For more news from the Virginia Commonwealth University School of Engineering, click here.


Protecting Data at Its Core

May 20th, 2016 | Posted by Sarah Jones in Guest Blogs | Member Blog Posts

This week on NVTC’s blog, Richard Detore of GreenTec-USA discusses the deep concern over recent cyber-attacks and offers a solution to prevent data damage.


Everyone in the cybersecurity field – both inside and outside of government – is deeply concerned about the kind of cyber-attacks that hit federal agencies such as the Office of Personnel Management (OPM) and private companies such as Sony. Rightly so, government agencies and private companies continue to make large investments in cybersecurity.

This sense of urgency extends to America’s key infrastructure, as underscored last October when President Obama issued a Presidential Proclamation on Critical Infrastructure and Resilience. In that proclamation, the president noted that

“Our Nation’s critical infrastructure is central to our security and essential to our economy. Technology, energy and information systems play a pivotal role in our lives today, and people continue to rely on the physical structures that surround us. From roadways and tunnels, to power grids and energy systems, to cybersecurity networks and other digital landscapes, it is crucial that we stay prepared to confront any threats to America’s infrastructure.”

Last year, in testimony before the Senate Armed Services Committee, Director of National Intelligence, James Clapper, noted how cyber-attacks threaten public and private sector interests:

“Most of the public discussion regarding cyber threats has focused on the confidentiality and availability of information; cyber espionage undermines confidentiality, whereas denial-of-service operations and data-deletion attacks undermine availability. In the future, however, we might also see more cyber operations that will change or manipulate electronic information in order to compromise its integrity…instead of deleting it or disrupting access to it. Decision making by senior government officials (civilian and military), corporate executives, investors, or others will be impaired if they cannot trust the information they are receiving.”

And in his most recent appearance before the Senate Armed Services Committee, Clapper stated that “Cyber threats to U.S. national and economic security are increasing in frequency, scale, sophistication and severity of impact.”

According to a recent study published by the cybersecurity firm Tripwire, 82 percent of the oil and gas companies surveyed said they saw an increase in successful cyberattacks over the past year. More than half of the same respondents said the number of cyberattacks increased between 50 and 100 percent over the past month.

Last year, federal investigators uncovered the fact that Russian hackers had penetrated the U.S. State Department in a major cybersecurity breach that also gave them access to the White House – including the President’s schedule.

Other threats, such as ransomware, are now on the radar screen of key policy makers in Congress, as well as the U.S. Departments of Justice and Homeland Security. Ransomware encrypts a computer user’s information, and hackers then demand payment – usually in the form of crypto-currency such as Bitcoin (which is extremely difficult to trace) – to unlock the information.

In fact, in recent years several police departments have fallen victim to ransomware and have had to make payments to the hackers. One typical example happened in Maine when two police departments were hacked into. To date, the perpetrators in these cases have not been apprehended.

Obviously, protecting and securing data at its core is a key component of cybersecurity efforts for both the public and private sectors. Yet while much attention goes to improving detection and enhancing firewalls, protecting data at its core is an approach that is often overlooked.

Until recently, it was not possible to fully protect data at its core – the hard drive. In 2013, Write-Once-Read-Many (WORM) disk technology was developed and successfully deployed, allowing government agencies and private companies, for the first time, to secure and protect data at the physical level of the disk. Any data stored on a WORM disk cannot be altered, overwritten, reformatted, deleted or otherwise compromised within a computer or data center. The WORM disk functions as a normal hard disk drive, with zero performance degradation from its additional built-in capabilities – capabilities that prevent data damage from any form of cyberattack.
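The write-once-read-many property can be modeled in a few lines of code. To be clear, this is a toy sketch of the semantics only – real WORM disks enforce the rule in drive hardware, where no software (including ransomware running with full privileges) can bypass it:

```python
class WormStore:
    """Toy model of write-once-read-many semantics: a block, once written,
    can never be overwritten or deleted."""

    def __init__(self):
        self._blocks = {}

    def write(self, key, data):
        if key in self._blocks:
            # A real WORM drive refuses the overwrite at the physical level.
            raise PermissionError(f"block {key!r} is write-once; overwrite refused")
        self._blocks[key] = data

    def read(self, key):
        return self._blocks[key]

store = WormStore()
store.write("records/2016-05", b"original audit log")
try:
    store.write("records/2016-05", b"ransomware ciphertext")  # simulated attack
except PermissionError:
    pass
print(store.read("records/2016-05"))   # b'original audit log'
```

This is why ransomware-style encryption of stored data fails against WORM storage: the malicious overwrite is simply rejected, and the original data remains readable.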

This new breakthrough combined with encryption makes it impossible for hackers to steal data or render it useless by attacking the stored data, or disks.

In addition to advances in malware and firewall enhancements, comprehensive cybersecurity efforts should take a close look at technologies that protect data at its core. Such efforts will impact the public and private sectors in profound ways.

Richard Detore is a NVTC member and CEO of GreenTec-USA, a technology company based in Reston, VA.
