This week’s NVTC member guest blog post is by Kelly Yates, vice president of Service Operations at Insperity. Yates shares five key priorities outlined in the Equal Employment Opportunity Commission’s (EEOC) recently released Strategic Enforcement Plan.

The Equal Employment Opportunity Commission (EEOC) recently released its Strategic Enforcement Plan through the year 2021, which outlines its priorities for the coming years.

In this plan, one area of focus is a grouping the agency refers to as “emerging and developing workplace issues” – an ever-evolving subject that can be challenging to navigate. From qualification standards and inflexible leave to discriminatory practices, these complaints typically parallel societal issues that are also gaining a bigger spotlight in the media.

Here are five areas employers should watch carefully as they start their new year:

1. Discrimination against those with disabilities

The EEOC continues to broaden its definition of what constitutes a disability. Under its new plan, the agency is giving priority to job qualification standards and inflexible leave policies to better protect employees with disabilities.

Qualification standards

A qualification standard is how the employer defines who is qualified to be hired for a particular job. This means that job descriptions could fall under scrutiny if there are unnecessary physical qualifications placed on applicants.

For example, if your job description says that applicants must be able to lift 20 pounds, but there is an accommodation that could be provided where lifting wouldn’t be required, it may not be reasonable to use this as a qualifier when making hiring decisions.

Inflexible leave policies

There is no clear-cut definition of whether a leave policy is flexible or inflexible. But typically, any policy that takes a definitive stance on the time limits of a leave will be labeled inflexible by the EEOC.

For example, an inflexible policy might state: “You must be 100 percent healed to return to work when a 12-week FMLA leave ends, or you will be terminated.”

Leave policies require careful attention to the language used to ensure that they cannot be interpreted as inflexible and have a one-size-fits-all approach. And, these situations require an interactive dialogue between employers and employees to understand what return-to-work accommodations may need to be made, such as granting additional time off work or the ability to telecommute for a specified time period.

Even a policy that says that employees aren’t allowed to work remotely could be deemed as inflexible, if, based on their disabilities and the nature of their job duties, it would be a reasonable accommodation to allow them to work from home.

This is one of the biggest risk areas for employers. It’s one of the most challenging to work through because there is not a definitive manual on the topic. Each situation has to be handled individually and based upon its own merits – not based upon how the last case was treated.

2. Accommodating pregnancy-related limitations

Gender equality in the workplace has been in the spotlight in recent years. One area that the EEOC has weighed in on heavily has been removing pregnancy as a barrier for equal treatment in the workplace.

Employers are advised to treat pregnant employees as they would treat any other employee who has any other temporary medical condition. Consider all accommodation requests by engaging in an interactive dialogue with the employee and make necessary job modifications as specified by the employee’s medical provider.

For example, accommodations may include altered breaks and work schedules (e.g., breaks to rest or use the restroom), permission to sit or stand, ergonomic office furniture, shift changes, elimination of marginal job functions and permission to work from home for a specified time period.

If a woman isn’t able to work during her pregnancy, she may be eligible for time off not only under FMLA, but also under the Americans with Disabilities Act (ADA). Pregnancy complications may be covered under the ADA as a temporary disability.

These types of accommodations help to keep barriers neutralized so that women can continue to progress in their careers while pregnant. And, these accommodations can also help ward off potential discrimination claims.

3. Protecting LGBT employees from discrimination based on sex

If you look up the Civil Rights Act of 1964, you won’t see LGBT status listed as a protected category. But the EEOC’s recent interpretation has been that it falls into the broad category of discrimination based on sex.

To avoid EEOC discrimination charges, you must treat LGBT employees as you would all other employees. If employees come to you with complaints, make sure to treat the complaint seriously, and ensure that a prompt and thorough investigation is undertaken.

Share the outcome of the investigation with the employee, and discuss resolution options. Then, be sure to follow up with the employee regularly after the issue is resolved to ensure that no new concerns have arisen. This also conveys to your employee that you have an open door to ongoing communication.

Work toward making sure that your workplace is a comfortable environment for all employees.

4. Clarifying the employment relationship and workplace rights

Temporary workers, staffing agencies, independent contractors and the on-demand economy are all changing the dynamic of the employer/employee relationship.

For example, let’s say you hire five temporary workers from a staffing agency. The agency pays these workers, so they aren’t your “employees of record” for purposes of your payroll taxes, etc. However, because these workers are conducting work on your premises and for your business, including engaging with your employees, you may be deemed to be controlling the “terms and conditions” of their work environment. Therefore, you must ensure that these workers are treated in compliance with EEO laws, just as you would for all employees.

Say, for instance, one of your employees or managers harasses a temporary agency worker. The temporary worker could file an EEOC claim against your business, even though you’re not his or her employer of record. And if the EEOC deems that the worker’s complaint has merit, you could be liable for financial settlements to resolve the complaint.

Just because workers aren’t on your payroll as full-time, permanent employees, it doesn’t mean you are absolved of responsibility for ensuring that they are treated fairly under the law.

5. Addressing discriminatory practices against Muslims

In the midst of global terrorist attacks, the EEOC believes there may be an increase in discriminatory actions against people who are Muslim, Sikh, or of Arab, Middle Eastern or South Asian descent (or those perceived to be a part of those groups).

In an effort to get out in front of this, the EEOC is giving priority consideration to these cases.

First and foremost, hiring managers must understand that national origin or religion should never be used as a factor in any hiring or employment decision.

You may think that everything is fine because you aren’t hearing any complaints or concerns from employees. But often, employees are hesitant to file complaints because they think doing so will jeopardize their jobs. In these cases, they may wait to bring forward a concern until it has escalated to a point where they have already decided to involve an outside agency or attorney.

Your managers should take extra steps to vocalize and demonstrate their commitment to the company’s open door policy so that employees aren’t afraid to come to them with their concerns.

Also, regularly conducting discrimination and harassment prevention training for all employees, and having clear zero-tolerance policies for discrimination and harassment in place, are important preventive steps for employers to take.

Understand the risks and move forward thoughtfully

While the EEOC provides information for employers on its website, employers are often confused about how each law and regulation should be interpreted and applied to their specific workplace. As you navigate issues in these areas, it is advisable to work closely with HR experts, legal counsel or a professional employer organization (PEO) that is well-versed and experienced in preventing and resolving workplace complaints.

Keep in mind that if an EEOC charge is filed and the agency begins an investigation of your company, it could quickly become a huge disruption to your employees and to your business overall.

Here are a few possible outcomes:

  • On average, it takes at least a year for a charge to be resolved.
  • There can be disruptions if EEOC investigators come on-site to interview employees and review your company’s documents.
  • The EEOC doesn’t have to notify your business when it contacts your employees or former employees for interviews.
  • The investigation can impact the morale and productivity of your workforce.
  • There are huge monetary impacts for having to defend against and resolve the complaint.
  • A “cause finding” by the EEOC can become a matter of public record, which impacts your reputation and credibility as an employer.
  • If the EEOC finds cause for a complaint, typically, your business will have to routinely provide reports to the agency and comply with ongoing requirements, such as employee training – usually for three to five years.
  • If the EEOC finds cause for a complaint, you may be required to post notices throughout the workplace to inform employees that your company was found to have engaged in a discriminatory practice.

If your business is charged in one of the areas from the strategic enforcement plan, it’s important to take it seriously and respond appropriately. Consider consulting with an HR expert or employment counsel on how to proceed.

Learn more about how to protect your business by downloading Insperity’s complimentary e-book, Employment Law: Are You Putting Your Business at Risk?


This week’s guest blog post is by Norm Snyder, partner at Aronson, LLC. Snyder is also chair of NVTC’s Small Business and Entrepreneur Committee. Snyder shares highlights and lessons learned from the Committee’s All Star Seed/Early Stage Investor Panel that took place on Nov. 15.

Can seed, early stage and angel investment capital be found in the D.C. metro area? This question and others were discussed at NVTC’s Small Business and Entrepreneur Committee’s engaging All Star Seed/Early Stage Investor Panel on Nov. 15, which featured some of the area’s most active early stage investors.

Moderated by Aronson Partner Norm Snyder, the panel included Ed Barrientos, “super-angel” investor and entrepreneur CEO of Brazen; Steve Graubart, CFO of 1776; John May, founding partner of New Dominion Angels; Liz Sara, angel investor, entrepreneur and chair of the Dingman Center; and Tom Weithman, managing director of CIT GAP Funds and CIO of Mach37.

During the event, panelists discussed their recent experiences, desired investee profiles and offered practical advice to an audience of start-up entrepreneurs engaged in navigating the challenging early stage investment world. While the general consensus is that early stage capital is available in the D.C. metro area, it takes persistence and hard work for entrepreneurs to successfully attract sufficient investment from the right investors.

According to Weithman, over 100 companies have been funded in Virginia by CIT with a focus on tech, fin-tech, cyber and life sciences. However, he stated there is a dearth of seed funds generally available for cyber. Sara stated that approximately 15 deals were funded in the last year by her Dingman Angel group. Barrientos has made significant angel investments in a number of companies and has raised venture capital funds for Brazen. May stated that almost every deal that should be funded is funded, but it is rare for one angel to fund the entire deal. Graubart said 1776 has made 30 investments to-date with a focus on regulated industries such as ed-tech, health IT, fintech, smart cities and transportation.

So how does an early stage company stand out to investors in a crowded field?

Panelists offered a range of suggestions. Research potential investors – plenty of information is available on what they are interested in. Don’t waste your time and theirs chasing investors who aren’t interested in your company’s profile. At the early stage, investors are betting first on the entrepreneur and their team, not on a single idea or concept, which is likely to evolve several times before it goes to market. Put together a passionate team with strong domain experience and the ability to sell themselves to attract investors, customers and future team members. Remember, the team should include an experienced advisory board with strengths and experiences that complement and extend the abilities of the entrepreneurs. Put together well-thought-out and concise pitches and applications.

Be persistent – get in front of groups of investors. Warm referrals tend to get looked at first, so use your advisors to help you get noticed, and invest time in building relationships. Be able to demonstrate market acceptance and traction. Be coachable; you may be the “master” of your technology, but each successful start-up faces different challenges and there’s a lot to learn. Early stage entrepreneurs shouldn’t focus on getting the “highest” valuation – high valuations can scare away well-qualified investors and may lead to disastrous down rounds later. Convertible debt, instead of preferred stock, can help take the focus off the subjective valuation issue for early stage companies.

Most importantly, the closing advice to attendees: be passionate and persistent and make sure you enjoy what you do!


Our latest NVTC member guest blog post is by ePlus Chief Security Strategist Tom Bowers. Bowers discusses the latest advancements in machine learning and its impact on cybersecurity.

According to a Ponemon Institute study released in March, 63% of survey respondents said their companies had been hit by “advanced (cyber) attacks” within the last year. Only 39% felt their company was highly effective at detecting cyber attacks. And worse, only 30% considered their organizations highly effective at preventing them.

A few weeks ago, I moderated a panel discussion at the ePlus/EC-Council Foundation CISO Security Symposium in National Harbor, Md. Our purpose was to gather together leading security experts to get their insights on the latest security threats and to discuss ideas and strategies. CISOs from many different industries were there. And as you might imagine, given the importance of cybersecurity today, the event was well-attended.

During the session, we covered various pressing topics in the realm of cybersecurity. But the most intriguing “future-looking” trend we discussed was machine learning.

That’s not a surprise, because machine learning is a hot topic in tech circles. But it’s more than just the latest industry buzzword, and vendors are responding accordingly. In March, Hewlett Packard Enterprise (HPE) announced the availability of HPE Haven OnDemand, its cloud-platform “machine-learning-as-a-service” offering. In October, IBM, whose Watson system is known as a leader in artificial intelligence (AI), changed the name of its predictive analytics service to “IBM Watson Machine Learning” to emphasize its direction “to provide deeper and more sophisticated self-learning capabilities as well as enhanced model management and deployment functionality within the service.”

Simply speaking, machine learning refers to the ability of computers to, in effect, “learn and grow in knowledge” based on past experience. Machine learning begins with a base set of teaching material, and through subsequent experiences (i.e., the processing of more and more data sets and responses), the machine learning algorithm adds to that base material – its body of knowledge, so to speak – and the program becomes more intelligent. As a result, machine learning programs are able to answer questions and make predictions with increasing accuracy.
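The “learn and grow” loop can be sketched in a few lines. The toy nearest-centroid classifier below is an illustrative stand-in (not any vendor’s algorithm): each new labeled example is folded into its running body of knowledge, so its answers improve as it processes more data.

```python
# Toy illustration of the machine learning loop: a nearest-centroid
# classifier that updates its "body of knowledge" with each example.

class NearestCentroid:
    def __init__(self):
        self.sums = {}    # label -> running sum of each feature
        self.counts = {}  # label -> number of examples seen

    def learn(self, features, label):
        """Fold one labeled example into the model."""
        s = self.sums.setdefault(label, [0.0] * len(features))
        for i, x in enumerate(features):
            s[i] += x
        self.counts[label] = self.counts.get(label, 0) + 1

    def predict(self, features):
        """Answer with the label whose centroid is closest."""
        def sq_dist(label):
            n = self.counts[label]
            return sum((x - s / n) ** 2
                       for x, s in zip(features, self.sums[label]))
        return min(self.counts, key=sq_dist)

model = NearestCentroid()
model.learn([0.0, 0.0], "benign")     # invented training points
model.learn([9.0, 9.0], "malicious")
print(model.predict([8.0, 8.5]))      # closer to the "malicious" centroid
```

Real machine learning engines use far richer models, but the shape is the same: more data in, better predictions out.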

What are the implications for security operations?

Machine learning has made tremendous strides in the last few years. From self-driving vehicles to medical research to marketing personalization to data security, machine learning algorithms are being used to churn through huge stores of data to identify patterns and anomalies, enabling data-driven decisions and automation. And that capability continues to mature and extend into the area of cybersecurity.

For years, those of us in IT security have worked tirelessly to increase the maturity of security operations in our companies. We’ve strived—in the face of increasing complexity and rising threats—to advance our information security capabilities beyond simple “detect and respond” reactive methods to risk-based “anticipate and prevent” proactive approaches. Machine learning is playing a role in that mission today and will play an even larger part in the years to come.

As more security vendors incorporate machine learning engines into their solutions, security operations will change. For example, log scanning—a tedious, labor-intensive effort—will become automated. Instead of a security analyst scrolling SIEM output, scrutinizing correlated events and analyzing their meaning, machine learning engines will parse huge log files, identify anomalies, and make decisions in near real-time.
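A minimal sketch of that kind of automated log triage, using an invented log format and threshold: count failed-login events per source and flag any source whose volume is a statistical outlier. A real SIEM or machine learning engine is far more sophisticated; this only illustrates the pattern-versus-anomaly idea.

```python
# Illustrative anomaly detection over fabricated log lines:
# ten sources fail to log in once each; one source fails 50 times.
from collections import Counter
from statistics import mean, stdev

logs = [f"10.0.0.{i} LOGIN_FAIL" for i in range(10)]
logs += ["10.0.0.99 LOGIN_FAIL"] * 50

# Tally failed logins per source address.
counts = Counter(line.split()[0] for line in logs if "LOGIN_FAIL" in line)
mu, sigma = mean(counts.values()), stdev(counts.values())

# Flag sources more than two standard deviations above the mean.
anomalies = [src for src, n in counts.items() if n > mu + 2 * sigma]
print(anomalies)  # ['10.0.0.99']
```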

In addition, machine learning engines will identify trends, threats, and incidents much faster. Instead of waiting on a security analyst to conclude their analysis, machine learning engines will parse reams of security data collected from enterprise machines, such as servers, smartphones, tablets, network devices, applications, and others. Through big data analytics and machine learning, this machine data will be searched and analyzed to gain insight into what is happening inside corporate networks, enabling trends to be exposed and incidents to be identified much faster than they are today.

But more importantly, machine learning engines will be able to “hunt” for exploits. By combining input from learned behaviors, known indicators of compromise (IOCs), and external threat intelligence feeds, machine learning engines will be able to predict malicious events with a high degree of accuracy, preventing major incidents before they materialize or become widespread problems. And we are seeing examples of this capability today. For instance, the cyber solution Endgame operates at the microprocessor level, analyzing the prefetch instruction cache in search of zero-day exploits so they can be detected and eliminated long before an incident occurs.
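At its simplest, IOC-driven hunting reduces to matching observed events against a feed of known-bad indicators; the sketch below shows that core step. The indicators, hosts, and event records are all invented for illustration.

```python
# Invented indicators standing in for an external threat intel feed.
known_iocs = {
    "bad-domain.example",                 # hypothetical C2 domain
    "198.51.100.23",                      # hypothetical malicious IP
    "0123456789abcdef0123456789abcdef",   # hypothetical file hash
}

# Invented network events standing in for collected machine data.
events = [
    {"host": "ws-101", "indicator": "good-site.example"},
    {"host": "ws-102", "indicator": "198.51.100.23"},
]

# The hunt: surface every event that touches a known IOC.
hits = [e for e in events if e["indicator"] in known_iocs]
for e in hits:
    print(f"ALERT: {e['host']} contacted known IOC {e['indicator']}")
```

The “learned behaviors” part of the quote is the hard, probabilistic half; this exact-match half is what threat feeds contribute.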

Not to be overlooked is the ability of machine learning to enable automated responses. Machine learning engines not only can detect malicious behavior faster, based on IOCs and “experience,” but also can take action to eliminate the threat early in the kill chain without requiring human involvement. This enables incidents to be avoided proactively and lessens the workload on short-handed staff.

The benefits of machine learning are clear and compelling. But many security professionals are asking, “Is the technology really ready?” There are valid concerns, such as the validity of the data that external threat intelligence feeds supply to machine learning engines and the potential for machine learning algorithms to be attacked and fed false models, but vendors and academia alike continue working to sort out those questions. In fact, the Georgia Institute of Technology just launched a new research project to study the security of machine learning systems.

Like most technology, machine learning will continue to evolve. But if expectations prove out, machine learning will transform how CISOs manage security operations within the next three years.


Did you know?

  • Only 7% of federal employees today are age 30 or under – the lowest percentage in the last ten years
  • By 2017, 31% of federal workers will be eligible to retire
  • The government loses about 5,000 information technology employees each year

In a recent Government Executive blog post, NVTC member Susan Fallon Brown of Monster Government Solutions shared these astounding statistics and highlighted the growing opportunity for the federal government to bolster its millennial workforce and reduce overall hiring gaps with millennial talent. Here are some of the key themes she shared in the blog:

  • The importance of federal agencies being able to articulate their missions – millennials want to be a part of organizations that serve the greater good; an agency’s mission statement, often the first point of entry into an organization for a candidate, must clearly express the positive impact the agency is making
  • Digital channels are key to millennial recruitment – millennials are using social networks and digital channels in their job search more than ever before; agencies should leverage their digital channels as an extension of their recruitment efforts, utilizing clear and enticing messaging
  • Transparency and engagement are a must in the recruitment process – millennials want to be continually engaged in the hiring process. They want feedback from recruiters at all stages of the hiring process – and to hear from recruiters after the interview process, even if they didn’t get the job

Millennials make up about one-third of the workforce in Fairfax and Arlington Counties according to a 2016 Millennial Research report conducted by NVTC’s NextGen Leaders Committee. The report explored what attracts and retains millennials in organizations in Northern Virginia.

The notion of connection – millennials’ desire to feel connected to the community they live in, to their employer’s mission and charitable efforts, and to their colleagues, emerged throughout the report. Here are some interesting points from the research:

  • Millennials place strong emphasis on flexibility in their positions – in their schedule, in the physical location of their job and in their responsibilities. Instead of the number of hours they work, millennials want to be evaluated on the quality of their output.
  • Millennials place strong value on ongoing learning and development opportunities; career progression and mentorship are highly important, even though company loyalty isn’t always a driving career factor for millennials.
  • Millennials highly value employee recognition in a variety of forms, including constructive feedback, awards, perks and promotions.
  • A company’s social responsibility efforts and commitment to ethics are critical for millennials and a driving recruitment factor; millennials place strong value on trusting their employer and on its transparency and commitment to bettering the world.

Interested in learning more about recruiting and retaining millennials in our region? Read the full NextGen Leaders Millennial Research report.

Check out Government Executive’s blog here.

NextGen Leaders Millennial Graphic – just one of the interesting infographics you’ll find in the NextGen Leaders Millennial Research report


This NVTC guest blog post is written by Marc Burkels, manager of dedicated servers at LeaseWeb. LeaseWeb, an NVTC member company, is an Infrastructure-as-a-Service (IaaS) provider offering dedicated servers, CDN and cloud hosting on a global network. LeaseWeb recently exhibited at the Capital Cybersecurity Summit on Nov. 2-3, 2016.

Let’s say you want to become the new Facebook. Believe it or not, I regularly run into people who have this ambition. The number one question these new Mark Zuckerbergs ask me is which server they need.

It is always a challenge to convince them not to rush into anything. Instead, I have them sit down and tell me what they really want. Many companies switch servers within a few months of buying, which is always time consuming (not to mention costly), so it is certainly worth your while to think carefully before you decide. What is the service you want to deliver? What is your workload? Does it involve large databases?

I always discuss the following eight things to help people decide on the right hosting provider and hardware configuration for a dedicated server:

1. Business impact of downtime

What is the business impact of potential failure of your hosting environment? One of the first things to consider when selecting a dedicated server is how to deal with potential downtime. In a cloud environment, the setup of the cloud protects you against hardware failures. With a dedicated server, you know you are not sharing resources with anyone else. But since there is always a single point of failure in one server, you need to decide whether you are able to accept potential downtime – if you do not have the option to scale to multiple dedicated servers.

2. Scalability of your application

Scalability is another important issue when choosing a dedicated server. How well does your application scale? Is it easy to add more servers and will that increase the amount of end users you can service?

If it is easy for you to scale, it doesn’t matter whether you use a dedicated server or a virtual solution. However, some applications are difficult to scale to multiple devices. Making sure a database is running on multiple servers is a challenge since it needs to be synchronized over all database servers. It might even be easier to move the database to a server that has more processing capacity, RAM and storage. Moving to a cloud environment – where you can clone a server, have a copy running in production and can add a load balancer to redirect traffic to multiple servers – could also be a good option for you.

3. Performance requirements of your server

What are your performance requirements? How many users do you expect and how many servers do you potentially need? Several hardware choices influence server performance:

Processor/CPU

Generally, you can choose the number of processors and cores in a server. Whether you will benefit from more cores depends on the application you are running; multi-threaded applications, such as web servers and database servers, generally will. Also consider the performance of each core, defined by its clock speed: some processors deliver better turnaround times with fewer cores and a higher clock speed per core. The advice on which processors and how many cores to choose will ideally come from someone who manages the application, or from the software vendor. Of course, they also need to take into account the expected number of users.

RAM

The faster the CPU and the more cores it has, the more RAM options are available to you. If you are unsure about your RAM needs, choose a server that allows you to add RAM if needed since this is relatively easy. The ranges of RAM choices, especially with double processors, are enormous.

Server size matters when choosing RAM, as does the generation of the technology. Current-generation servers use DDR4 memory, which can have a positive effect on database performance. And since DDR4 is now the standard, it is attractively priced.

Hard Drives

Choose a RAID setup for your hard drives so you are well protected against the failure of a single drive. Your system will still be up and running – with some performance loss – until the failed drive is replaced.

The larger the server, the more hard drive options you have. SATA drives offer high capacity but relatively low performance. SAS performs roughly twice as well as SATA, but at a higher price and lower capacity. SAS in turn has been succeeded by SSD, which is 50 to 100 times faster than SATA.

4. Load balancing across multiple dedicated servers

If your application can scale across multiple dedicated servers, a form of load balancing, where end users are split across all available servers, is necessary. If you are running a website and traffic is rising, at some point you will need multiple web servers serving a multitude of users for the same website. With a load balancing solution, every incoming request is directed to a different server. Before doing so, the load balancer checks whether a server is up and running; if it is down, the load balancer redirects traffic to another server.
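The check-then-dispatch behavior described above can be sketched as a round-robin loop that skips unhealthy backends. The server names and the stubbed health probe below are placeholders; real load balancers (HAProxy, nginx, cloud LBs) add timeouts, retries and weighting on top of this core idea.

```python
from itertools import cycle

# Placeholder backend pool; the bool stands in for a real health
# probe (e.g. an HTTP request to each server's health endpoint).
servers = {"web1": True, "web2": False, "web3": True}

def is_up(name):
    return servers[name]  # stub for the real health check

rotation = cycle(servers)

def route():
    """Return the next healthy server in round-robin order."""
    for _ in range(len(servers)):   # try each backend at most once
        candidate = next(rotation)
        if is_up(candidate):
            return candidate
    raise RuntimeError("no healthy backends")

print([route() for _ in range(4)])  # ['web1', 'web3', 'web1', 'web3']
```

Note how the down server (web2) never receives a request, while traffic still alternates across the healthy ones.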

5. Predictability of bandwidth usage

Your bandwidth requirements naturally relate to the predictability of your data traffic. If you are going to consume a lot of bandwidth but predictability is low, you could choose a package for your dedicated server that includes a large amount of data traffic, or even unmetered billing. That way you know exactly how much you will be spending on the hosting of your dedicated server.

6. Network quality

As a customer, you can choose where a dedicated server is placed physically. It is important to consider the location of your end user. For instance, if your customers are in the APAC region, hosting in Europe might not be a sensible choice since data delivery will be slow. Data delivery also depends on the quality of the network of the hosting provider. To find out more about network quality, check a provider’s NOC (Network Operation Center) pages and test the network. Most hosting providers will allow you to do this.
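One concrete way to test a provider's network yourself, sketched below: time a few TCP handshakes to one of their servers and take the median. The host and port in the usage comment are placeholders for whatever test endpoint the provider's NOC pages list.

```python
# Rough network-quality probe: median TCP connect time to a host.
import socket
import time
from statistics import median

def tcp_latency_ms(host, port, attempts=5, timeout=2.0):
    """Median time, in milliseconds, to complete a TCP connect."""
    samples = []
    for _ in range(attempts):
        start = time.perf_counter()
        # Each connection is opened, timed, and immediately closed.
        with socket.create_connection((host, port), timeout=timeout):
            samples.append((time.perf_counter() - start) * 1000)
    return median(samples)

# Usage (placeholder host from your provider's NOC pages):
#   tcp_latency_ms("mirror.yourprovider.example", 80)
```

A handshake timing is only a proxy for throughput, but it quickly exposes the geographic-distance penalty discussed above.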

7. Self-service and remote management

To which degree are you allowed to manage your server yourself? If you are running an application on a dedicated server, you probably have the technical skills and the knowledge to maintain the server. But do you have access to a remote management module? Most A-brand servers are equipped with remote management modules. Providers can allow you secure access to that module.

A remote management module can also help if you are in a transition from IT on premise to a hosted solution (perhaps even a private cloud solution). It can be an in-between step that will leave existing work structures intact and ease the transition for IT personnel, since they will still be able to manage their own software deployments and the customized installation of an operating system.

8. Knowledge partner

And last but definitely not least: make sure your hosting provider involves its engineers and specialists in finding a solution tailored to your needs. A true knowledge partner advises on best practices and different solutions. This may involve combining different products into a hybrid solution.

The above will probably give you a good idea of what to consider before renting a dedicated server. If you are looking for specific advice or need assistance, please feel free to contact the LeaseWeb team. They can help you find the solution that is right for you.


This week’s blog is written by Connie Pilot, executive vice president and chief information officer at Inova Health System. Pilot will share her expertise on “The Coming Storm from IoT” panel at the Capital Cybersecurity Summit on November 2-3, 2016.


With billions of data-generating devices connected to the Web, the Internet of Things (IoT) is changing the way we do business. No industry is immune, including healthcare. The Food and Drug Administration estimates that 500 million people around the world use some sort of mobile health app on their smartphones, and millions more have embraced wearable health technology. Inside the hospital, Internet-connected medical devices such as MRI machines, CT scanners and dialysis pumps provide critical patient monitoring and support. And as wireless technology proliferates in healthcare, so too does risk. The Web is fertile ground for stolen medical records, which are now more valuable to hackers than credit cards. Providers must find new ways to secure private data in an ultra-connected world.

The IoT offers important benefits for healthcare delivery and efficiency. It provides new avenues for patient communication, improves patient engagement and compliance, and enhances value-based care and service. At Inova, we use it in many ways: to monitor fragile newborns in the neonatal intensive care unit, control temperature and humidity in the operating room, deliver pain medication post-operatively and measure heart rhythm in cardiac patients, to name just a few. Medical data tracking enables us to intervene when necessary to provide preventive care, promptly diagnose acute disorders or deliver life-saving medical treatment. The benefits extend beyond our hospital walls into the community, where the IoT drives telehealth advancements that improve access for patients, such as virtual visits, eCheck-In, patient portals and electronic health records.

Balancing the benefits of greater connectivity with the need to protect critical data is a growing priority for healthcare providers. Opportunities exist for instilling interoperability and security standards that will seamlessly facilitate the sharing of necessary patient care information, while continuing to safeguard it from cyber-attacks.

Enabling connection and communication among different information technology systems and software applications can be daunting. While healthcare organizations can use proven security protocols in other domains, differences between IoT devices and traditional computing systems pose significant challenges. The IoT introduces innovative technology that requires emergent, often untested, software and hardware. Wearables, such as consumer fitness trackers and smartwatches, are a case in point. They present non-traditional access into the technology environment. While they use existing communication protocols that can be secured, there are challenges with multi-factor authentication and control of the devices in case of loss or theft.

Additionally, with millions of people using wearables, the volume of data generated can easily overwhelm an organization’s network, leaving it vulnerable to a potential denial of service attack. In this scenario, hackers attempt to prevent legitimate users from accessing information or services. Methods must be developed to limit data transmitted from wearables solely to those devices that should be transmitting and solely to information that is required for patient care.
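To make that concrete, here is a minimal sketch of the kind of gating such methods imply: a gateway that accepts data only from registered devices and throttles each device with a token bucket. The class and parameter names are hypothetical, purely illustrative of the principle, not any hospital system's actual implementation.

```python
import time

class DeviceGateway:
    """Sketch: accept data only from registered devices and
    rate-limit each one so wearable traffic cannot flood the network."""

    def __init__(self, rate_per_sec=10, burst=20):
        self.rate = rate_per_sec
        self.burst = burst
        self.buckets = {}        # device_id -> (tokens, last timestamp)
        self.registered = set()  # devices allowed to transmit at all

    def register(self, device_id):
        self.registered.add(device_id)

    def allow(self, device_id, now=None):
        if device_id not in self.registered:
            return False  # unknown device: drop outright
        now = time.monotonic() if now is None else now
        tokens, last = self.buckets.get(device_id, (self.burst, now))
        # Refill tokens in proportion to elapsed time, capped at burst
        tokens = min(self.burst, tokens + (now - last) * self.rate)
        if tokens < 1:
            self.buckets[device_id] = (tokens, now)
            return False  # over the rate limit: drop
        self.buckets[device_id] = (tokens - 1, now)
        return True
```

Limiting transmission to registered devices addresses the "only those devices that should be transmitting" requirement; the per-device bucket addresses the volume problem.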

Clearly, developing new methods of securing devices and the information they generate is a formidable task. We are fortunate to do business in an area that is well positioned to tackle this growing cybersecurity threat. With one of the most sophisticated technology workforces in the country, pioneering start-ups, world-class educational resources and a large government infrastructure, the National Capital region stands at the epicenter of innovation, policy and research. Our collective expertise can help us meet healthcare privacy and security challenges, and keep our patients and community safe.

 

Connie Pilot is executive vice president and chief information officer at Inova Health System. As the leader of Inova’s technology services division, she oversees all aspects of technology, including IT applications, change and quality management, information security, enterprise architecture, service delivery and informatics. 

This week on NVTC’s blog, Marty Herbert of NeoSystems Corp. shares the second in a series of tips for workflow and process automation.


In Part 1 of our Workflow and Process Automation Series, Re-evaluating Your Processes, we looked at a few steps your organization can take towards drastically simplifying your billing process. Keep in mind that throughout this series, I will highlight solutions which produce time saving, compliance-driven processes that integrate with business systems, like Deltek Costpoint, NetSuite, SAP or others and create an enhanced workflow automation framework. In today’s post, Part 2 of our series, we’ll address vendor invoice processing.

A few years back, while working on a series of consulting projects, I audited a client’s AP department and noted several variations in how they processed vendor invoices. Some invoices came in via email, others via snail mail. Some came to the attention of the company’s AP department; others came in via the project manager. Some were based on a PO, and others were one-off bills that simply needed to be paid. Determining the appropriate approver could be multi-faceted and involve the receipt of goods (or services). Like many larger government contractors, our client used Deltek Costpoint for vendor invoice processing, so I will use that system as an example of a well-known business system our audience will recognize.

This business system has a great mechanism for capturing data and information related to accounts payable, but it can’t necessarily control how invoices are delivered, who approves them, and how that approval is captured for compliance purposes.

Our client’s overarching goal (outside of employing processes that increased efficiency and effectiveness) was to find a way to electronically interface an APPROVED invoice for vouchering in Costpoint. That sounds like a simple objective, but there are nuances that might not be immediately obvious. The “approved” aspect implies that there needs to be a process followed to obtain a valid, recognized approval. The “electronic” aspect implies that the entry into the ERP system should be automated without the need for manual data entry. Automated work flow tools make the design and controlled execution of a process possible, while Costpoint Web Services enables an electronic interface.

But, let’s slow down. Before we send data along, we have to gather the data. In this case the data comes from a vendor’s invoice, but we want to make sure the vendor’s invoice has been reviewed and approved before we send it into the system of record. The first step in automating this process is to gather the data input (the invoices). There are multiple ways we could approach this:

  • We can give vendors access to a “portal” whereby they upload the invoice directly into a workflow,
  • Vendors can email the invoice to a specific address that automates process kick-off and moves it into a queue for AP servicing, or
  • We can receive a vendor invoice and initiate the process by loading it into the AP queue (scanning it first if it arrives in hard copy).

Then it is time to route the invoice to the proper ‘approver.’ If companies are already connected to an ERP application that supports project management data, they can use the data inherent to any given project to pull the relevant approvers for PO-based invoices. By this point, AP clerks will have matched the invoice to a PO (unless the vendor did that already) and chosen the PO lines to which the invoice applies, and… well, that is all they have had to do so far.

Off to the approver(s) the invoice goes. The approver gets the invoice that was submitted, along with the details added by the AP department, and can approve it, reject it, send it to another approver, or sit on it a while. Any (or all) of these tasks can be built into the process. The end result is (hopefully) an approved invoice.

At this point, the system should validate the invoice information and manage the voucher process through creation, voucher number generation, accept or reject status and check generation. It is critical and most efficient to have a complete trail of activity from submission to payment.
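As a rough illustration of the flow described above, the approval path can be modeled as a small state machine that keeps the complete trail of activity from submission onward. The state names and transitions below are hypothetical, not any vendor's actual API:

```python
# Hypothetical states for the vendor invoice process sketched above.
VALID_TRANSITIONS = {
    "received":   {"matched"},                            # AP matches invoice to PO lines
    "matched":    {"approved", "rejected", "reassigned"}, # approver acts (or forwards)
    "reassigned": {"approved", "rejected", "reassigned"},
    "approved":   {"vouchered"},                          # interfaced electronically to the ERP
    "rejected":   {"received"},                           # back to AP for correction
}

class Invoice:
    def __init__(self, invoice_id):
        self.invoice_id = invoice_id
        self.state = "received"
        self.history = [("received", None)]  # audit trail for compliance

    def transition(self, new_state, actor):
        if new_state not in VALID_TRANSITIONS.get(self.state, set()):
            raise ValueError(f"cannot go from {self.state} to {new_state}")
        self.state = new_state
        self.history.append((new_state, actor))
```

Because every transition is validated and recorded, the "complete trail of activity" requirement falls out of the design rather than being bolted on afterward.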

This process, when automated, is extremely easy to follow, saves time and money, and is easier to implement than one might think. Unfortunately, most government contractors don’t realize how easily automation software can handle this and many other processes.

There are numerous effective workflow management software systems on the market today. Integrify, workflow management software used to automate a myriad of processes across a variety of platforms, is one tool we use at NeoSystems to automate vendor invoice processing within the business systems we use.

Our next blog will focus on the delightful automation of purchase requisition. If you have any burning questions about this or other processes (even those we haven’t gotten to yet!) using web services and workflow management software for your business system, please feel free to contact me.

This week on NVTC’s blog, Marty Herbert of NeoSystems Corp. shares the first in a series of tips for workflow and process automation.


If you are an ERP user, you likely know that most applications are rich with features that address the nuances of running projects, especially if you are a government contractor. However, no application can address the many steps an organization must go through to accomplish what might appear on the surface to be a simple task.

Take ‘billing’ for example. I was asked a while back to determine how to route a bill for approval, and I thought it would be a “piece of cake.” Create bill. Send to approver. Get approval. Bill is right: send to customer. Bill is wrong: rinse and repeat. For this article, we’ll use a commonly known GovCon ERP, Deltek Costpoint, as an example. This system is very good at the first part: if you need to create a bill, you can create one replete with support for hours worked and costs incurred. The problem, however, is that there is no simple way to implement a workflow process that accommodates most organizations’ review and approval routines within the ERP framework. That’s not a knock against Costpoint; no ERP system on the market adequately addresses this issue, especially when you magnify it by the many, many other processes an organization has in place to run its back office.

Over the next six weeks, we will look at several areas where workflow plays a big role and at how to leverage workflow automation via integration with your ERP. Companies unaware of how to automate in these areas waste precious time determining the process, miss steps, and ultimately never learn to streamline for efficiencies that would save them money down the road.

In our first post for “Evaluating Your Process for Users of Deltek Costpoint or a Similar System,” I’ll examine the role of an AR clerk with my ‘piece of cake’ attempt at automating bill routing.

I had bills created from our ERP and I had Outlook, so I sent two bills to their respective approvers to verify hours were correct so we could bill the services to the client. Then I waited and waited and waited and waited… you get the picture. I followed up via email at least three times over the next week and finally, a week later, I knocked on their doors to see if they had time to review the email I sent.

‘Approver 1’ called me to his desk and had me look at the count of emails in his inbox. Until then, I was unaware that this number could go over 9,999, but there it was. I apologized and helped him find my email. Five minutes later he reviewed it and sent me an email saying we could bill it. Finally, the bill was out the door. I don’t remember whether I had to mail it or email it, but that is of no consequence. Oh, and of course, I forgot to tell my supervisor that I got the bill out the door so she was unnecessarily on my case the next morning. I’ll try not to make that mistake again.

‘Approver 2’ (let’s call her Amy) asked if I had received her email. She said she responded immediately to each of the messages I sent, so I crept back to my cube and found her responses. Suddenly I was the culprit in slowing down my own process! “Sorry, this Acme project isn’t mine,” she said. “These should go to Janet, she runs the Acme project.” Ugh! Wouldn’t you know she didn’t even have the courtesy to copy Janet on her response to me. So I just trudged down the hall to Janet’s office and had her review the paper copy. She looked at it briefly and said “yep, looks fine.” Great, I was out her door and happy to get the bill out the door. Never mind that I forgot to get Janet to initial the invoice to indicate she had approved it and, of course, I forgot to tell my supervisor I sent the bill. But, hey… bill is out the door, case closed.

Actually, the case was just getting started. The following week, in walks my supervisor. “I got a call from Acme Company’s CFO. She asked me who Francis Miller was and why we were billing Acme for her travel to Las Vegas. When I look in our system, this bill isn’t even posted. When did you send it out? Did you get Amy to review and approve this before you sent it out?” Sorry, I said, I forgot to post the bill in the system, and Amy said the project really belongs to Janet, so I got her to review and approve it… see? (as I pulled my copy from the file drawer). But, of course, Janet’s initials weren’t there. Now my boss is mad at me for sending out an invoice that she thinks I didn’t get reviewed AND I forgot to post. Swell.

I realized there was A LOT of room for improvement in this process. Problem #1: people are swamped with email. Problem #2: people change roles and responsibilities a lot. Problem #3: no coordination between the ERP and the approval activities. Problem #4: I can be my own worst enemy. Why couldn’t all this stuff be linked together somehow, and why isn’t there a way to get things posted in the system without me having to remember every little thing? I’m only human, after all. And this was a simple bill. I could only imagine – or rather didn’t want to in this case – what would have happened if there had been revisions.

From experience, I’ve gathered intelligence on how to sidestep these common pitfalls. Apart from working together as a team, companies tend to think in terms of changing their IT infrastructure; what I believe needs to happen is to approach these pitfalls in terms of changing the process infrastructure. There are no short-term ‘quick fix’ changes, but rather logical steps toward automating the manual processes that run at the heart of the business.

Step 1

Get people out of email and into a single system for approvals. This will help solve Problems #1 and #3. By logging in to a single system for approvals, the approver gets a “To Do” list that helps them focus on the task(s) at hand. A system that alerts ONLY when an approval is required, and only when that task is past due, further reduces Problem #1.

Step 2

Link your system to Deltek Costpoint or a similar platform! Not only does this save the time spent shuttling information through Outlook, it also ensures that information is not entered incorrectly or omitted entirely. Additionally, users can maintain project leads in Costpoint and link them to users in the workflow system to automatically assign approvals to the person(s) involved in any given process. Problems #2 and #3 solved.

Step 3

Create a workflow that allows for rework and rejection, and that handles the issues that need to be addressed when something is “wrong.” That way, the stakeholders who need to be involved can be included automatically based on roles, or by selecting a user from a list of possible issues/departments. This decreases the number of emails sent out for approvals; assigning a task and automating reminders in the system accomplishes all of these things.

Step 4

Solve Problem #4: remove yourself from your own enemy list. Relax. Stay out of email. Work on other things. Seriously. At a recent conference I attended, it was estimated that we spend around 28 percent of our work time sending or reading email. What happens when you remove an entire work stream’s worth of email from your list of things to do? You get back a piece of that time for more pressing issues.
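To tie Steps 1 through 3 together, here is a minimal sketch of a single approval queue with approvers resolved from project data and reminders raised only when a task is past due. All names and fields are hypothetical; a real tool such as Integrify exposes its own configuration for this.

```python
import datetime

# Step 2: project leads maintained in the ERP (illustrative data)
project_leads = {"ACME-001": "janet"}

class ApprovalQueue:
    """Sketch of a single 'To Do' system replacing email approvals."""

    def __init__(self, due_days=3):
        self.due_days = due_days
        self.tasks = []  # Step 1: one queue instead of 9,999 emails

    def submit(self, bill_id, project_id, submitted=None):
        submitted = submitted or datetime.date.today()
        self.tasks.append({
            "bill": bill_id,
            # Route by role, not by memory; fall back if unassigned
            "approver": project_leads.get(project_id, "ap_supervisor"),
            "due": submitted + datetime.timedelta(days=self.due_days),
            "status": "pending",
        })

    def past_due(self, today):
        # Alert only when a task is actually late (Step 1's rule)
        return [t for t in self.tasks
                if t["status"] == "pending" and today > t["due"]]

    def reject(self, bill_id, reason):
        # Step 3: rejection becomes a rework loop, not a dead end
        for t in self.tasks:
            if t["bill"] == bill_id:
                t["status"] = "rework"
                t["reason"] = reason
```

The point of the sketch: routing, reminders and rework all live in one system that the ERP feeds, so nobody has to remember every little thing.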

If it sounds like I’ve been through this process at least a few times, it’s because I have. Using the power of a business process management tool called Integrify, NeoSystems has automated this and other processes and tied those processes to Costpoint and similar platforms. Throughout this series, I will highlight the ways we have implemented, envisioned, and produced time-saving, compliance-driven processes that integrate with your ERP to create an Enhanced Workflow Automation Framework.

Have burning questions about Process Automation? Feel free to contact me ahead of next week’s blog post.

Collecting Big Data Footprints

May 23rd, 2016 | Posted by Sarah Jones in Guest Blogs | Member Blog Posts

This week on NVTC’s blog, the Virginia Commonwealth University School of Engineering shares research on Big Data footprints that the Electrical and Computer Engineering Department is working on with the Huazhong University of Science and Technology.


Xubin He, Ph.D., professor and graduate program director of the Virginia Commonwealth University School of Engineering Electrical and Computer Engineering department, is working with Huazhong University of Science and Technology (HUST) to establish an international research institute focused on creating design techniques to improve data reliability and performance. Coordination efforts are currently underway to create rotation periods for students from VCU and HUST to conduct research within each university’s state-of-the-art laboratories.

“This next step in our partnership with VCU helps both universities attract more high-quality research students, while enhancing the breadth and depth of our research,” said Dan Feng, Ph.D. and dean of the School of Computer Science and Technology at HUST. Feng also serves as director of the Data Storage and Application lab at HUST.

Managing big data

Data storage is a booming industry, with lots of opportunities. Just a decade ago, computational speed dominated research efforts and water cooler conversations. According to He, data is more important now. “Data empowers decision-making and drives business progress. No one can tolerate data loss, whether that data represents favorite photos or industry trends and analytics,” added He. And yet, trying to increase data capacity or replace obsolete data systems can shut down vital data centers for days.

Research teams from both universities find creative solutions to global data pain points. For example, these collaborative research teams reduced overhead costs associated with data failures by up to 30 percent. Their algorithms allow businesses to encode data that can be easily retrieved, instead of having to rely on costly data copies or redundant data centers.
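The idea behind such encoding can be illustrated with the simplest possible case: a single XOR parity block that lets you rebuild one lost data block instead of keeping a full copy of everything. This toy example is only meant to convey the principle; production systems use more general erasure codes such as Reed-Solomon, and nothing here reflects the teams' actual algorithms.

```python
from functools import reduce

def make_parity(blocks):
    """XOR equal-length data blocks byte-by-byte into one parity block."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

def recover(surviving_blocks, parity):
    """Rebuild the single missing block: XOR the survivors with parity."""
    return make_parity(surviving_blocks + [parity])
```

With three data blocks and one parity block, any single lost block is recoverable, at a storage overhead of one extra block rather than a full second copy of all three.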

Currently, in addition to HUST, He’s team also works with top data storage companies such as EMC, which ranked 128th in the Fortune 500 and reported revenues of $24.4 billion in 2014.

The network effect

He has a simple philosophy to gauge the success of university research efforts — he looks at who else is there. “At top data storage and systems events such as USENIX’s File and Storage Technologies conference and USENIX’s Annual Technical conference, we’re presenting with peers from Harvard, MIT, Princeton and other premier universities we admire,” said He. These conferences typically accept about 30 presentation papers — that’s less than 20 percent of the global submissions they receive.

“Professor He’s leadership represents one of many efforts to build our international reputation in industry and academia,” said Erdem Topsakal, Ph.D. and chair of the Department of Electrical and Computer Engineering. “HUST is ranked 19th on the U.S. News & World Report’s Best Global Universities for Engineering list. When leading universities like HUST want to work closely with you, you know you’re doing something right.”

For more news from the Virginia Commonwealth University School of Engineering, click here.

Protecting Data at Its Core

May 20th, 2016 | Posted by Sarah Jones in Guest Blogs | Member Blog Posts

This week on NVTC’s blog, Richard Detore of GreenTec-USA discusses deep concerns over recent cyber-attacks and offers a solution to prevent data damage.


Everyone in the cybersecurity field – both inside and outside of government – is deeply concerned over the kind of cyber-attacks that hit federal agencies such as the Office of Personnel Management (OPM) and private companies such as Sony. Rightly so, government agencies and private companies continue to make large investments in cybersecurity.

This sense of urgency extends to America’s key infrastructure, as underscored last October when President Obama issued a Presidential Proclamation on Critical Infrastructure and Resilience. In that proclamation, the president noted that

“Our Nation’s critical infrastructure is central to our security and essential to our economy. Technology, energy and information systems play a pivotal role in our lives today, and people continue to rely on the physical structures that surround us. From roadways and tunnels, to power grids and energy systems, to cybersecurity networks and other digital landscapes, it is crucial that we stay prepared to confront any threats to America’s infrastructure.”

Last year, in testimony before the Senate Armed Services Committee, Director of National Intelligence, James Clapper, noted how cyber-attacks threaten public and private sector interests:

“Most of the public discussion regarding cyber threats has focused on the confidentiality and availability of information; cyber espionage undermines confidentiality, whereas denial-of-service operations and data-deletion attacks undermine availability. In the future, however, we might also see more cyber operations that will change or manipulate electronic information in order to compromise its integrity…instead of deleting it or disrupting access to it. Decision making by senior government officials (civilian and military), corporate executives, investors, or others will be impaired if they cannot trust the information they are receiving.”

And in his most recent appearance before the Senate Armed Services Committee, Clapper stated that “Cyber threats to U.S. national and economic security are increasing in frequency, scale, sophistication and severity of impact.”

According to a recent study published by the cybersecurity firm Tripwire, 82 percent of the oil and gas companies surveyed said they saw an increase in successful cyberattacks over the past year. More than half of the same respondents said the number of cyberattacks increased by 50 to 100 percent over the past month.

Last year, federal investigators uncovered the fact that Russian hackers had penetrated the U.S. State Department in a major cybersecurity breach that gave them access to the White House – including the President’s schedule.

Other threats, such as ransomware, are now on the radar screen of key policy makers in Congress, as well as the U.S. Departments of Justice and Homeland Security. Ransomware encrypts a computer user’s information, and hackers then demand payment – usually in the form of crypto-currency such as Bitcoin (which is extremely difficult to trace) – to unlock the information.

In fact, in recent years several police departments have fallen victim to ransomware and have had to make payments to the hackers. One typical example happened in Maine, where two police departments were hacked. To date, the perpetrators in these cases have not been apprehended.

Obviously, protecting and securing data at its core is a key component of cybersecurity efforts for both the public and private sectors. Yet while it is important for those efforts to focus on improving detection and enhancing firewalls, protecting data at its core is an approach that is often overlooked.

Until recently, it was not possible to fully protect data at its core – the hard drive. In 2013, Write-Once-Read-Many (WORM) disk technology was developed and successfully deployed; for the first time, it allows government agencies and private companies to safely secure and protect data at the physical level of the disk. Any and all data stored on a WORM disk cannot be altered, overwritten, reformatted, deleted or otherwise compromised within a computer or data center. The WORM disk functions as a normal hard disk drive, with zero performance degradation from its additional built-in capabilities, which prevent data damage from any form of cyberattack.
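As a conceptual model only (GreenTec's product enforces this at the disk level, not in software like this), WORM semantics amount to a storage contract: once written, a block can be read forever but never rewritten or removed.

```python
class WormStore:
    """Software model of the write-once-read-many contract.
    Purely illustrative; a real WORM disk enforces this in firmware."""

    def __init__(self):
        self._blocks = {}

    def write(self, block_id, data):
        if block_id in self._blocks:
            raise PermissionError(f"block {block_id} is write-once")
        self._blocks[block_id] = bytes(data)

    def read(self, block_id):
        return self._blocks[block_id]

    def delete(self, block_id):
        raise PermissionError("WORM media cannot be deleted or reformatted")
```

Under this contract, ransomware can still write new (encrypted) blocks, but it cannot overwrite or destroy the originals, which is what makes the stored data recoverable after an attack.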

This new breakthrough combined with encryption makes it impossible for hackers to steal data or render it useless by attacking the stored data, or disks.

In addition to advances in malware and firewall enhancements, comprehensive cybersecurity efforts should take a close look at technologies that protect data at its core. Such efforts will impact the public and private sectors in profound ways.

Richard Detore is an NVTC member and CEO of GreenTec-USA, a technology company based in Reston, VA.
