In Asurion’s new member blog post, Senior Vice President of Retail Application Delivery and Voice Services Sean Nass discusses embracing new product development models rooted in collaboration and focused on outcomes. A global leader in connected life services, Asurion partners with leading wireless carriers, retailers and pay-TV providers to provide consumers with protection and premium tech help supporting mobile phones, consumer electronics and home appliances.


There is a big shift in how technology companies are going about business.

Many are pivoting from a process- and project-based strategy to one that is more forward-thinking. Team members are crossing boundaries, blurring the lines of previous structures and coming together in the spirit of collaboration to deliver outcomes that exceed customer expectations.

In the old model of business, the technology focus was on delivering large projects using project managers and a pool of resources, defining and limiting capacity. Instead of focusing on an outcome, teams would get together and create big requirement documents with minute details that would bog down capacity, forcing a project through months of work and still frequently achieving a result that was somehow different from how it was initially envisioned. This old model of product development often carried steep opportunity costs.

Now businesses are pivoting, with a more forward-thinking attitude in mind. At Asurion, we recently built what we call journey teams, individuals across key sectors of our business who come together to optimize the speed with which a project achieves its desired outcome and experience for consumers. As part of this shift, we merged product, design, technology and customer experience teams to optimize the process with the project’s outcome in mind rather than focus on the process itself. The days of separate “product” and “IT” silos are behind us. We’ve combined product, design and technology teams and have empowered them to ask the question “How do we focus on what’s best for the consumer experience?”

Take the claim process as an example. Under previous models, a customer’s claim would pass through various workflows, often with redundant or unnecessary steps that may not have been a great experience for the customer. Under our journey team model, we dedicated product, technology and design leads to focus on an outcome that equates to a positive customer experience. This mentality leads to faster time to market and less waste in resource capacity, and frees our team members to innovate rapidly. More importantly, the customer has a really positive experience.

We don’t tell our journey teams what to do or how to do it – instead, they innovate and test ideas and are empowered to make decisions on their own, all with this singular goal of improving consumer experience. The journey teams put together a vision based on a desired outcome, a vision that nails down what is going to work and what isn’t to drive improvements in speed, reliability and efficiency of a product’s delivery.

The shift has opened up new channels of communication and new ways of interacting across teams, even to the point of how we co-locate in our workspace. We have seen a radical improvement in the quality of communication across teams because people are developing prototypes and conducting tests rather than working off huge requirement documents.

Our goal is to create a seamless integration of product, technology and design that optimizes the experience for our customers.

It hasn’t all been easy, but progress doesn’t occur without change. We certainly can’t transform all teams into this model at once. However, with patience and modeling teams’ successes, we are seeing increases in quality and speed, and the enthusiasm of the teams is amazing. They are so engaged because they see a direct correlation between their work and dramatic improvements in customer interactions. There’s more alignment among product, technology and client services than we’ve ever seen before. If you are thinking of trying something similar in your offices, I recommend establishing a shared goal and shared alignment across all teams, and placing the focus on future growth. Your efficiency in product development and delivery will improve, and that’s what everyone is looking for, after all.

 


NVTC’s latest guest blog post is by Stefano Migliorisi, CEO and founder of swyMed, a provider of patented technology that expands telemedicine care to places where it was previously unavailable, powering truly mobile, exceptional-quality live video encounters, even at the lowest bandwidths. Migliorisi participated in the Telemedicine and Remote Patient Monitoring Panel at the Capital Health Tech Summit on June 15.


Telemedicine allows patients and doctors to connect over a distance. The industry has been growing tremendously, with the U.S. market valued at $4.9 billion in 2015 and projected to reach $6.7 billion in 2020. Others estimate the sector is significantly larger, depending on which technologies are included and how much of the infrastructure and staffing/services are included in the definition of telehealth, but the one constant is the trend of explosive growth.

In many rural and underserved areas, specialists are in short supply; just getting to a distant doctor’s office in a larger city and back home again can take an entire day. Telemedicine instead allows a patient to remain at a local clinic for a virtual visit with the same physician, which is considerably less time-consuming and more economical for both patients and healthcare providers.

Telemedicine enables specialists to “beam” themselves into underserved communities via a broadband Internet connection, and can have a transformative impact on reducing cost, improving patient compliance and improving outcomes. However, many regions across the country still lack broadband Internet connections sufficient to deliver quality care virtually.

Regardless of whether patients are logging in on their home computers, data-enabled smartphones, or wireless tablet devices, one thing is generally needed: a reliable high-speed Internet connection. We typically think of this as a challenge in rural areas, but a similar dynamic plays out in urban environments, where networks can become congested.

Telemedicine Uses

Telemedicine has been effective in helping patients to better manage chronic conditions. Advances in home health care have made it possible for patients to connect with their healthcare providers from the comfort of their living rooms, improving rates for follow-up visits and treatment regimen compliance. Digital devices such as pulse oximeters, blood pressure monitors and scales can automatically send data from a patient’s home to a care manager’s desktop, where he or she can monitor the patient’s status, prioritize interventions and initiate an audio-video call where follow-up is warranted. This technology has proven particularly successful for treating chronic diseases such as diabetes.
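
To make that data flow concrete, here is a minimal sketch of the triage step, where a care manager’s system prioritizes interventions from incoming device readings. This is not swyMed’s software; the field names and clinical thresholds are illustrative assumptions only:

```python
# A minimal sketch of the home-monitoring data flow described above.
# Not swyMed's software: field names and thresholds are illustrative
# assumptions; real platforms use clinician-configured care plans.

SPO2_FLOOR = 92          # percent, from a pulse oximeter
SYSTOLIC_CEILING = 160   # mmHg, from a blood pressure monitor

def triage(readings):
    """Flag home-device readings that may warrant a follow-up video call."""
    alerts = []
    for r in readings:
        if r["spo2"] < SPO2_FLOOR:
            alerts.append((r["patient_id"], "low SpO2", r["spo2"]))
        if r["systolic"] > SYSTOLIC_CEILING:
            alerts.append((r["patient_id"], "high systolic BP", r["systolic"]))
    return alerts

# A day's readings arriving on a care manager's desktop:
readings = [
    {"patient_id": "A-102", "spo2": 97, "systolic": 128},
    {"patient_id": "B-205", "spo2": 89, "systolic": 142},  # flagged: low SpO2
]
for patient, reason, value in triage(readings):
    print(f"Prioritize intervention for {patient}: {reason} ({value})")
```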

Similar data has emerged from skilled nursing facilities (SNFs), where establishing a telemedicine link between the SNF and the emergency room (ER) has been shown to improve patient well-being and outcomes, reduce transports to the ER and dramatically lower total system costs. This is particularly valuable during nights and weekends, when the SNF is less likely to have a staff doctor available. Click-to-call access to a local ER allows the patient to receive a consultation, gives the SNF staff confidence that the patient is getting the appropriate level of care, and more often than not avoids a transfer to the ER, which has significant cost and staffing implications for the SNF, the ER and the healthcare system as a whole.

Connectivity Remains a Challenge to Telemedicine

It has been estimated that 70% of face-to-face medical encounters could be delivered as telemedicine encounters. As more and more potential applications of the technology are piloted and evaluated, connectivity and reliability are emerging as critical factors to overcome before virtual video care can be relied upon as a primary channel to deliver care.

* Connectivity – Telemedicine relies on Internet connectivity to function, but the rural regions with the largest physician shortages also tend to have the most barriers to Internet access. The lack of broadband infrastructure impedes both real-time services such as video visits and store-and-forward technologies, leaving a gap in coverage for patients in these areas.

* Reliability – Telemedicine requires uninterrupted connections to prevent missed instructions or possible patient mismanagement. If systems are not reliable, or need to be restarted several times in the course of a patient encounter, trust in the system on the part of providers, patients and healthcare workers will be diminished. When telemedicine is used for acute conditions such as stroke (telestroke diagnosis and treatment), every minute of delay can negatively impact patient outcomes, with significant long-term consequences.

Numerous approaches, from established video communication companies as well as newer market entrants, have arisen to address these connectivity and reliability challenges. They range from reducing the size and quality of transmitted images, to new compression algorithms, to enhancing broadband signal availability with mobile communications hotspots, to data transport mechanisms that can operate over low bandwidth while still delivering high-quality imagery.



In their new NVTC member blog post, Alvarez & Marsal Taxand discusses how companies in the tech industry can prepare for proposed tax reforms that may be implemented in the near future. Alvarez & Marsal Taxand, an affiliate of Alvarez & Marsal (A&M), a leading global professional services firm, is an independent tax group made up of experienced tax professionals dedicated to providing customized tax advice to clients and investors across a broad range of industries. Alvarez & Marsal Taxand is a founder of Taxand, the world’s largest independent tax organization, which provides high quality, integrated tax advice worldwide. 


Under the House Republican “Blueprint for Tax Reform” (the Blueprint), companies would be able to deduct interest expense against interest income, but no current deduction would be allowed for net interest expense. Any net interest expense would be carried forward indefinitely and allowed as a deduction against net interest income in future years. In addition, the proposed reduction of U.S. tax rates may also reduce the value of U.S. interest deductions. These proposals should impact decisions right now around multinational intercompany financing structures for tech companies, as well as other aspects of their intragroup contractual arrangements.

Until now, the high U.S. corporate income tax rate of 35 percent has created an environment that favors foreign related-party lending to U.S. affiliates, particularly when the loan is advanced from a low-tax jurisdiction. U.S. taxable income may be reduced via an interest deduction and the corresponding interest income may be captured in the lower tax jurisdiction. Alternatively, tax considerations may have made it desirable to incur third-party debt in a U.S. group company, rather than in lower-taxed group companies. The feasibility and/or desirability of these sorts of “earnings stripping” benefits would be greatly diminished by the Blueprint.
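
A rough worked example clarifies the stakes. Only the 35 percent U.S. rate comes from the discussion above; the loan size and the 12.5 percent foreign rate are illustrative assumptions:

```python
# Illustrative arithmetic for the "earnings stripping" benefit described
# above. The 35% U.S. rate is from the text; the loan size and the 12.5%
# low-tax-jurisdiction rate are assumptions for the sake of the example.
US_RATE = 0.35
FOREIGN_RATE = 0.125

interest_paid = 1_000_000  # annual intercompany interest paid by the U.S. affiliate

us_tax_saved = interest_paid * US_RATE           # value of the U.S. deduction
foreign_tax_owed = interest_paid * FOREIGN_RATE  # tax on the interest income abroad

print(f"Net group-wide benefit: ${us_tax_saved - foreign_tax_owed:,.0f}")  # $225,000
```

Under the Blueprint, the deduction for net interest expense disappears and the proposed lower U.S. rates would shrink the deduction’s value besides, so the benefit in this sketch collapses, which is exactly why these structures are being revisited now.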

So, how are forward-looking companies, particularly in the tech industry, preparing for these potentially dramatic changes? We are seeing a number of them explore the following questions:

1. Should the debt level of U.S. group companies be reduced and, if so, how?
2. Should the interest rate be reduced on intragroup debt financing of U.S. group companies?
3. Can we replace debt financing with other forms of financing arrangements that may yield deductions other than interest expense for the U.S. company (e.g., rent expense on sale-leaseback transactions, royalty expense on intellectual property (IP) licensing transactions)?
4. Should U.S. group companies make interest-bearing loans to other group companies that can benefit from interest deductions in their countries, thereby creating interest income in the U.S., against which the U.S. company could then deduct its own interest expense (e.g., should a U.S. company be a group finance company)?
5. Can lost interest deductions be replaced by more aggressive transfer pricing for other intragroup transactions (e.g., the intragroup purchase and/or sale of goods or services)?

All of these questions regarding intragroup transactions have important transfer pricing implications. For most intragroup transactions (other than those rare instances when the comparable uncontrolled price method is the best method), the prevailing transfer pricing theory permits a range of choices for the intercompany transfer price. So, whether the decision relates to the level of U.S. indebtedness, the substitution of interest expense with other types of deductions, or the creation of interest income in the U.S., the after-tax impact of those decisions can be significantly enhanced by proactive transfer pricing planning. This is true regardless of whether the objective is the more traditional one of minimizing taxable income in the U.S., or a new one of increasing taxable income in the U.S. (with the offsetting decrease in other countries with higher tax rates) in light of dramatically changed U.S. tax rules. Our international tax and transfer pricing specialists can help your company to determine the most desirable course of action and to substantiate an appropriate, defensible range of choices for intercompany prices that will yield the optimal results.

Visit A&M Taxand’s Tax Advisor Minute for more helpful insights for executives in the technology sector.


This week’s member blog post is from Tangible Security Executive Chairman and CEO Dr. Mark Mykityshyn. Tangible Security employs the most sophisticated cybersecurity tools and techniques available to protect clients’ sensitive data, infrastructure and competitive advantage. Dr. Mykityshyn discusses the current regulatory climate around drones and unmanned aircraft systems and the need for new policies to fuel market growth in the industry.


Undoubtedly, drones and unmanned aircraft systems (UAS) are a very hot topic these days and their technology, business, policy and cybersecurity implications continue to rapidly expand and evolve.

Tangible Security recently participated in a roundtable meeting in Washington, D.C. that engaged thought leaders and stakeholders from aerospace and aviation, academia, Congress, government and related industry organizations. The group shared ideas, explored and challenged assumptions, and discussed policy positions and current practices in the drone/UAS sector.

The roundtable was organized by ADS Infrastructure Partners (ADS) as part of a national campaign to help fund and establish the Drone/Unmanned Aircraft Systems Regulatory Association (DURA), the first step in unlocking the full economic value of the sector.

Roundtable conferees widely acknowledged that development of the drone/UAS commercial market is constrained, in large part, by the existing FAA regulatory environment and the slow pace of rulemaking and certification. The group recognized that drone/UAS sector regulation requires urgent streamlining to realize full market potential, economic growth and jobs.

According to FAA’s recent market forecast, sales of UAS for commercial purposes are expected to grow from 600,000 in 2016 to 2.7 million by 2020. Industry experts have recognized that this growth, and the billions of dollars at stake, may not materialize without overhauling the current regulatory model.

Conferees also agreed that the immediate next step is to explore the pros and cons of drone industry regulation through delegation of FAA authority mandated by Congressional legislation, and to develop a blueprint for the new organization. The creation of DURA, an archetype of an industry-led public-private partnership, is an idea whose “time has come,” according to many roundtable attendees.

According to Jim Williams, head of JHW Unmanned Solutions, and most recently the Manager responsible for the FAA’s Unmanned Aircraft Systems Integration Office, “The future of unmanned aircraft operations depends on finding new ways to manage the airspace and regulate the operators. Forming a delegated organization to manage the airspace, approve the vehicles, and oversee the operators is the key to opening up this extremely valuable new segment of aviation.”

To expand this dialog nationwide, ADS will hold a National Summit in Washington, D.C. in September 2017 where leaders who represent more than five hundred businesses, agencies, associations, customers and stakeholders will assemble.

If you or your organization is interested in participating in DURA or attending the National Summit in September, please don’t hesitate to email me. All members of the technology, aviation and business community are invited to attend.


Is your organization DFARS cybersecurity compliant? Read on for more information in CohnReznick’s new member blog on how your organization can stay compliant and be ready to handle cyber-attacks. CohnReznick provides clients with forward-thinking advice that helps them navigate complex business and financial issues.


Cyber-attacks on organizations, including government contractors and federal agencies, have been rapidly increasing over time. With a lack of defined security policies, processes and controls, many government contractors are ill-equipped to effectively handle potential cyber-attacks that could severely undermine business operations and swiftly lead to insurmountable damages as data and records are destroyed.

To mitigate the risk that businesses face, cybersecurity standards are becoming more prevalent. In particular, organizations with government contracts need to demonstrate compliance with cybersecurity standards as specified in contract requirements and regulations. For example, defense contractors that provide services to Department of Defense (DoD) agencies related to building, maintaining and managing DoD systems, networks, programs, or data may be required to demonstrate compliance with Defense Federal Acquisition Regulation Supplement (DFARS) Safeguarding rules and clauses.

In 2015, the DoD issued a ruling that requires defense contractors and subcontractors to demonstrate cybersecurity compliance with regard to the protection of Covered Defense Information (CDI), also known as Controlled Unclassified Information (CUI), or Unclassified Controlled Technical Information (UCTI).

How Can A Defense Contractor Demonstrate DFARS Clause Compliance?

Defense contractors and sub-contractors must implement and continuously assess security requirements, thereby demonstrating that adequate cybersecurity measures are in place to safeguard CDI from unauthorized access and disclosure. Additionally, such security measures can help identify, prevent, detect and report cyber-related intrusion events that affect defense contractors’ unclassified information systems. The security requirements are specified in National Institute of Standards and Technology (NIST) Special Publication (SP) 800-171, “Protecting Controlled Unclassified Information in Nonfederal Information Systems and Organizations.”

Security requirements are categorized into 14 control families. In addition to implementing the requirements in these 14 families, defense contractors and sub-contractors must have processes in place to identify a cybersecurity incident and report it within 72 hours of discovery. Reporting the incident requires addressing the elements outlined on the cyber incident reporting form and providing the necessary supporting documentation and evidence related to the incident. The incident can only be reported using a DoD-approved medium assurance certificate.

Pair DFARS Compliance Assessment With Advanced Breach Detection Solutions

A critical component of the DFARS regulation, and an area where we have found contractors consistently lack capabilities, is breach detection. That is why it is important to have advanced solutions combined with appropriate governance and mature processes to enable contractors to rapidly detect devices of interest and indicators of compromise (IOC).

CohnReznick utilizes a holistic solution designed explicitly to fill this gap with clients. Our solutions can analyze thousands of protocols and hundreds of new attack vectors each day to find breaches and anomalous behavior on the defense contractor network. X-ray visibility into your environment is achieved by continuously analyzing application-based metadata, combined with user information and the latest threat intelligence, against past, current and future network activity to detect any previously unidentified breaches. Defense contractors and sub-contractors can be assured of accelerated compliance with DFARS requirements for incident response, risk assessment, and system and communications protection.

Moreover, IOCs and compromised device behavior can be pinpointed through behavioral analysis conducted on network communications. Such IOCs and compromised device behavior could include the following (a simple sketch of two of these checks appears after the list):

  • Anomalous internal file transfers
  • Unexpected protocols
  • Suspicious or illegitimate connections
  • Encrypted communications
  • Unauthorized credential usage
  • Use of anonymizing applications
  • Risks from bring your own device (BYOD) policies
  • Beaconing
  • Exfiltration
  • Non-standard ports
  • Remote access tools
  • Suspicious downloads
  • File type mismatches
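
As a minimal sketch of this kind of behavioral analysis (not CohnReznick’s solution; the flow records, baseline port set and jitter threshold are assumptions for illustration), here is how two of the indicators above, non-standard ports and beaconing, might be flagged from network flow records:

```python
from statistics import pstdev

# Illustrative flow records: (timestamp_sec, src, dst, dst_port)
flows = [
    (0,   "10.0.0.5", "203.0.113.9",  8443),
    (300, "10.0.0.5", "203.0.113.9",  8443),
    (600, "10.0.0.5", "203.0.113.9",  8443),  # metronome-regular contacts
    (42,  "10.0.0.7", "198.51.100.2", 6667),
]

EXPECTED_PORTS = {25, 53, 80, 443}  # assumed baseline for this network

def nonstandard_ports(flows):
    """Flag flows to ports outside the network's expected baseline."""
    return [(src, dst, port) for _, src, dst, port in flows
            if port not in EXPECTED_PORTS]

def beaconing(flows, max_jitter=5.0):
    """Flag src/dst pairs whose contact intervals are suspiciously regular."""
    times = {}
    for ts, src, dst, _ in flows:
        times.setdefault((src, dst), []).append(ts)
    hits = []
    for pair, stamps in times.items():
        stamps.sort()
        gaps = [b - a for a, b in zip(stamps, stamps[1:])]
        if len(gaps) >= 2 and pstdev(gaps) <= max_jitter:
            hits.append(pair)
    return hits

print(nonstandard_ports(flows))  # every flow here uses a non-baseline port
print(beaconing(flows))          # [('10.0.0.5', '203.0.113.9')]
```

A production system would correlate many more signals (credential usage, file type mismatches, download reputation) over far larger data volumes; the point is simply that each indicator reduces to a testable pattern over network metadata.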

What If I Can’t Demonstrate DFARS Clause Compliance?

The defense contractor is required to notify the DoD CIO within 30 days of contract award if it or its sub-contractors are not in compliance with all of the security requirements. Contractors have until December 2017 to attain compliance with all of the security requirements in NIST SP 800-171. Non-compliance can lead to cure notices, adverse past performance, fee reduction penalties, and possibly civil False Claims Act (FCA) implications, as well as reputational risk and responsibility issues, which could lead to loss of awards.


About CohnReznick’s Technology Risk and Cybersecurity Services

CohnReznick provides cybersecurity solutions that are dynamic, scalable, and tailored for growth companies. CohnReznick’s security professionals average more than 15 years in the field and hold key certifications. Our professionals have deep experience assisting organizations in implementing and complying with information and cybersecurity requirements using NIST 800-53, DIACAP, ISO 27001, COBIT and other industry-leading standards and frameworks. Learn more.


NVTC’s Spring 2017 The Voice of Technology magazine cover story, “Past is Prologue,” highlighted the Internet2 networking consortium and its role in supporting the early stages of the Internet, as well as its continued impact in connecting universities, government agencies, libraries, healthcare organizations and corporations today.

As a follow-up to the article, University of Maryland Associate Vice President for Corporate and Foundation Relations Brian Darmody discusses the university’s role in early Internet development below in their new member blog post.


Did you know the nation’s first Internet exchange point was established at the University of Maryland (UMD)?

UMD and UMD Professor Glenn Ricart played a strong role in the start of the interconnected Internet that we know today. Prof. Ricart developed the nation’s first Internet exchange point at UMD in 1988, which connected the original federal TCP/IP networks and the first U.S. commercial and non-commercial networks. Arguably, this was the world’s first ISP, as a commercial vendor joined the previously university-only network. This exchange point was called the Federal Internet Exchange (FIX), then FIX-East and then MAE-East.

Later, Prof. Ricart would go on to help UMD establish the nation’s first TCP/IP university campus-wide network. For these and other accomplishments, Prof. Ricart was inducted into the Internet Hall of Fame in 2013.

Prof. Ricart’s early work in laying the foundation for the Internet continues today in the Mid-Atlantic Crossroads in the UMD Research Park. It is one of the nation’s most robust regional high-speed connectivity networks, serving K-12 schools, universities, nonprofits, federal research agencies and the private sector, including counties in Virginia, companies in D.C. and federal agencies in Maryland.

In 1994, UMD’s alumni magazine published an article on the university’s early computer networking work in the 1980s, including one of the first computer messages delivered from UMD to George Washington University. It is interesting to read the article now, given the ubiquity of computer networking today, and it proudly illustrates our region’s role in pioneering the early computer communications infrastructure. Check out the article below:

 

Internet Network Is Born (Fall 1994, UMD Alumni Magazine)

At the annual Computer Science Center Christmas party in 1986, the champagne glasses were clinking, the holiday music was humming and Jack Hahn, project director for the newly formed Southeastern University Research Association network (SURAnet), was “walking on air.” On that day, an electronic message was sent from the University of Maryland at College Park to George Washington University — the first on a network whose technology would become the model for what Hahn calls “one of the most powerful intellectual tools that mankind has ever had at its fingertips.”

Although no one seems to recall just what that historic message was (“probably, something like ‘hey, is this thing working?’” says Hahn), the first few keystrokes were the culmination of years of work initiated by Glenn Ricart, director of the university’s Computer Science Center.

The idea was to link the 14 SURA institutions into a communications network so that information could be transferred between academic departments on each campus. It was such a novel idea at the time that, when Ricart brought his proposal to the National Science Foundation, they couldn’t tell him which office to send it to. “Nobody had ever done a network like this before, and it wasn’t clear that this was science and how this would help science, so NSF really didn’t know what to do with it,” he says (the NSF ended up establishing an entire division for networking and computing and solicited similar proposals).

In the meantime, Ricart, Hahn, Mark Oros, network operations supervisor, and Mike Petry, manager of communication software, retreated to the nondescript basement of the Computer and Space Sciences building and began wiring the circuits that would link an entire region.

By late spring of 1987, connections to the original SURAnet universities were up and running. Colleges and universities from other regions recognized a good thing and began flocking to College Park to see the new technology. The National Science Foundation then decided to link all the regional networks using something called “fuzzball technology” developed by Dave Mills, an adjunct professor at College Park, and the humble beginnings of what would become known as the present-day Internet were formed.

Hahn originally monitored the fledgling network from his basement. “I used to say SURAnet has a network information center and a network operations center — a nic and noc — and you’re talking to both of them,” he says.

As more universities, federal institutions and commercial networks were added, SURAnet grew too large to remain on campus and now employs 40 people in a “somewhat secret” location on Route 1 in College Park. Over 400 organizations across 13 states and the District of Columbia are supported by the network, ranging from the Enoch Pratt Free Library in Maryland to the U.S. Department of Natural Resources and state and local governments in the region.

 


This week’s NVTC member guest blog is by Telos Corporation CEO and Chairman and NVTC Board Member John B. Wood. Telos Corporation is an information technology leader that offers solutions to empower and protect the world’s most security-conscious enterprises.


With the May 11 signing of the “Presidential Executive Order on Strengthening the Cybersecurity of Federal Networks and Critical Infrastructure,” our nation took a major step forward in improving our overall cyber posture.

As I said in the hours after the President signed the order, even the most rigorous processes for managing modern cyber threats require a foundation of modern technology. That’s why I was encouraged to see that the executive order specifically instructed federal agencies to show preference in their procurement for shared IT services, including the cloud. A growing number of federal agencies have realized that the cloud offers them secure and cost-efficient computing capabilities, but many others have been hesitant to make the move. This executive order provides the needed boost for all agencies to look towards the cloud.

With this executive order and the latest version of the Modernizing Government Technology Act (MGT) legislation moving through Congress, I believe we have reached a tipping point where the federal government will have the White House support and the financial means to truly tackle IT modernization and make it a top area of focus for every agency. In unveiling the order, the White House also showed vision by saying that planned federal IT modernization will include transitioning agencies to one or more consolidated networks, with the goal being to view “our IT as one federal enterprise network.”

Another very interesting aspect of the order, which I was likewise encouraged to see, was the direction for all federal agencies to immediately begin to use the NIST Cybersecurity Framework (CSF) to manage their cybersecurity risk. At Telos, we have long advocated for a common language when it comes to cybersecurity so stakeholders in all areas of the organization can communicate about cyber risk, which ultimately leads to more informed decisions about what security investments need to be made. The CSF is a powerful framework for enabling improved risk management throughout the government enterprise. Replacing outdated legacy systems, and making adoption of the framework more efficient with automation, will only strengthen our government’s cybersecurity defenses.

In the near-term, I will be paying close attention as agencies work to provide their own 90-day plans for implementing the NIST CSF, as required by this executive order.

Locally, this order should be welcome news to the vast number of technology and cybersecurity companies in Northern Virginia that work with the federal government. For those of us in this field, the executive order is exactly the type of nudge that federal agencies have needed to make the necessary improvements to their IT infrastructure and cybersecurity posture. However, for this executive order to truly deliver value, industry and government must work together. I have no doubt that industry will step up to ensure success.

Overall, the cybersecurity executive order constitutes a long-overdue move by the federal government to take the steps necessary to better protect its networks and data. Moreover, the order sends a powerful message that our nation’s cyber defenses must continuously be monitored, evaluated and improved, and that this effort will be a key priority for this administration over the coming months and years.


Does your organization have a mentoring program? Having a well-structured employee mentoring program in place is a vital component of the mentoring experience. Read on for important tips from Insperity for shaping your organization’s mentoring program.


Mentorship can play a critical role in the successful onboarding of new employees and the long-term development of existing team members. But how do you determine the right mentor for a particular mentee?

Should they be like-minded or in similar roles? Or, should the mentor be strong in the skills that the employee needs the most growth in? What role does personality fit play?

First, a definition: A mentor is not another boss, but a helpful confidant who gives relevant, occasional feedback and guidance that helps the employee gain needed skills.

Mentoring is different from performance management. A mentor program targets those employees who are already performing well and need extra input to grow and reach their full potential.

Mentoring is not remedial learning. If an employee is underperforming or has some other workplace problem, their manager must tackle the issue through coaching and other performance management techniques, not by selecting a mentor.

Know what you want to accomplish

The type of mentor you choose for an employee depends on your business goals. Does the employee in question need help with technical skills or leadership skills? Is this a new employee or a long-term employee?

You first need to know what you want to accomplish to successfully pair a mentor and mentee.

For instance, a new employee will probably benefit from a mentor who helps them learn about your business’s cultural norms and processes. This mentor should have an open mind and an open ear, and speak candidly about processes and the best ways to navigate the environment.

They should also be experienced and organized enough to explain key procedures, and communicate clearly and consistently.

On the other hand, if you’ve identified a junior machinist who needs to learn a particular technical skill, you’ll want to pick a mentor who has that skill and who also communicates well.

If a junior executive wants to become a senior executive, the mentor should be able to offer guidance on cultural norms and processes, look for ways the mentee’s potential can benefit the organization and facilitate getting the mentee connected to these new opportunities.

A mentor should have the necessary communication skills and desire to be a continual learner, not someone with a tired or know-it-all attitude. Mentors should also be willing to share ownership and accountability for the work, giving the mentee credit when it’s due. Remember, mentoring is a two-way street, so pick a mentor who is willing to listen, give good counsel and learn from their mentee.

Another aspect of that two-way street: Not all mentors have to be older, long-time employees. Maybe one of your younger employees can help an older one gain confidence in using new software or social media for work or offer up-to-date information on the latest business technologies and workplace trends.

Yes, pairing employees with similar viewpoints, life experiences and work styles may help the relationship succeed, but ultimately the match should be determined by your organization’s needs.

Success requires structure

Larger companies often build significant structure around their mentor programs, with formal pairings, training and reporting required. That sort of structure may not be practical for a smaller business, but to be successful your mentor program will still need some definition.

What that structure looks like will be determined by the business goals you identified earlier. But you still need to define goals, expectations and schedules. You also need to make sure both the mentor and mentee have time to accomplish the goals you set.

For example, if the mentee needs to gain technical expertise, the mentorship may consist of the mentor teaching specific skills and the mentee practicing at consistent times followed by question-and-answer periods. A mentor-mentee pairing like this may only last a few weeks or months, with a clearly defined goal that technical expertise will be attained by a certain date.

Follow-up is important too. Ask questions such as:

  • Did the mentorship help you learn that new skill or refine an existing skill?
  • Did the program help you get more comfortable in your new job?
  • Was it a good use of your time?
  • Do you feel better prepared to handle the work ahead?

Answers to these questions will help you determine whether your mentor-mentee pairs are a good fit. If they’re not, don’t hesitate to break up a pair and reassign them to other people. Mentor pairs are as individual as the people involved, and not everyone will be compatible.


Interested in transitioning to the cloud? Wondering where to start? Then you’ll want to read this NVTC member guest blog from LeaseWeb’s Julia Gortinskaya first to get prepared for your cloud transition.


From both a business and an IT perspective, migrating to the cloud can be a good option for many businesses. But it’s not something that can be done without the right research and preparation. If you want to be successful when migrating to the cloud, you need open communication with both your own team and your hosting provider, as well as a clearly defined cloud migration strategy that is connected to your business needs. Here are five tips to help you get started:

1. Share your roadmap

Setting goals is everything. Your goals for migrating to the cloud should be closely connected to your business goals. How fast do you want to grow (i.e. how scalable does your technology need to be)? Who in your organization needs what functionality in order to reach which goal?

Select a cloud partner who is open to discussion about your roadmap and its implementation. Together you can create a technology roadmap that best supports your ambitions. Ideally, your cloud partner is a trusted advisor who shares his or her expertise with you. Keeping in close contact with your partner and sharing the load will also enable you to divide tasks between you: while your cloud provider focuses on hosting a cloud platform and making sure your servers are up and running, you can concentrate on creating more value for your customers.

The value of leveraging a third party can only be achieved when both sides understand their responsibilities and expectations. This means communication between you and your partner should be one of your top priorities.

2. Check certifications and compliance statements

Security and compliance are enablers, not obstacles. When migrating to the cloud, it is important to know in advance which certifications your cloud partner holds, what exactly they cover and how compliance is monitored by independent auditors. For instance, privacy and compliance certifications are necessary for organizations supporting compliant workloads.

Since security and compliance are shared responsibilities between you and your cloud provider, and perhaps other third parties as well, you’ll likely be able to benefit from the certifications your cloud provider already has in place. If your enterprise data is stored on servers in a datacenter owned by your cloud provider, the physical security of the datacenter is the cloud partner’s responsibility.

Make sure to find answers to questions such as ‘who has access to my data?’, ‘where is my data stored geographically?’ and ‘what are the export restrictions?’ You may prefer to store data in a specific region, but may also be bound to a location by customer contracts and/or privacy laws.

And don’t forget, certifications and regulations evolve over time. Cloud providers should follow developments closely and advise on any action you need to take. While you may not want to come across as suspicious, you should ask your partner to deliver proof of any certifications.

3. Look for a partner who can scale quickly

When migrating to the cloud, there are different options and delivery models for specific workloads: private, public, hybrid, hyper-scale, on-premises and off-premises. New ones are developed at a rapid pace. Explore the options (and the degree of service, the security and the expected costs) that are available for your needs.

Whichever partner you choose, select one that can act the moment you need to scale quickly. If your business requires you to add server capacity either temporarily or for a longer period, your partner should be able to provide the flexibility and speed that you need.

4. Train your people before, during and after

Most cloud projects require a different set of skills from your IT staff to implement and manage workloads (e.g. APIs, open source platforms). Traditional skill sets in server, network and desktop administration are not needed in a cloud environment, as they are embedded in the service. In most instances, re-skilling employees in more DevOps-centric areas can be wise.

Instead of acquiring engineering skills, your IT staff will have to learn to think more like cloud architects (which will probably be more challenging than being an administrator anyway). And since tactical day-to-day support is managed by your cloud partner, IT staff can spend more time developing and delivering services and applications that demonstrate direct value to the business.

5. Consider changes in architecture

We have come a long way from ‘one server for one service.’ Cloud computing changes the way applications are deployed and resources are delivered. Your current architecture might work in the cloud, but may also need some changes. Some applications can be migrated to the cloud as-is, while others might require adaptation, such as the decoupling of data. You might also benefit from taking a more service-oriented approach, consuming cloud services delivered through APIs. Try to design an architecture that takes full advantage of native cloud features.
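
As a minimal illustration of decoupling data from a single server (the bucket name and file paths are placeholders, and the AWS boto3 SDK is just one of many possible object-storage APIs), an application can stop writing files to local disk and instead push them to cloud object storage, so compute and storage scale independently:

```python
import boto3  # AWS SDK for Python; any S3-compatible storage works similarly

s3 = boto3.client("s3")

# Before: tightly coupled -- the file lives on one server's disk, so only
# that server can serve it, and its disk limits how much you can store.
#   with open("/var/app/reports/q2.pdf", "wb") as f:
#       f.write(data)

# After: decoupled -- any app instance can read or write the object, and
# storage capacity no longer depends on the servers running the app.
s3.upload_file(
    Filename="q2.pdf",
    Bucket="example-app-reports",  # placeholder bucket name
    Key="reports/q2.pdf",
)
```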

You can download the full checklist “10 Do’s and Don’ts When Migrating to the cloud” here.


This week’s guest blog post is by Qlarion. Qlarion helps the public sector use business intelligence (BI) to effectively manage, access and understand information in order to make better business decisions. In their blog, Qlarion provides a wrap-up of NVTC’s Big Data and Analytics Committee Meeting that took place in March.


On March 7, Qlarion’s CEO, Jake Bittner, moderated a discussion at Northern Virginia Technology Council’s (NVTC) Big Data and Analytics Committee Meeting, where Anthony (Tony) Fung, Virginia Deputy Secretary of Technology; Ernie Steidle, COO/CIO, Virginia Department for Aging and Rehabilitative Services; and Anthony Wood, Program Manager, Virginia Information Technologies Agency’s Innovation Center of Excellence, shared thoughts on the challenges and opportunities related to big data and analytics in Virginia.

Deputy Secretary of Technology Tony Fung kicked off the event with a discussion of the landscape of data analytics across the Commonwealth. Deputy Secretary Fung emphasized that there have been individual analytics success stories among several state agencies and that the opportunity for real progress through analytics has never been better.

Virginia now has the resources in place to implement big data and analytics programs on a large scale.

In 2016, Governor McAuliffe issued Executive Directive 7, which mandates data sharing across state agencies. Deputy Secretary Fung announced that a final report, which will provide agencies with more detail about how to comply with the directive, will be released in a few weeks.

Virginia Department for Aging and Rehabilitative Services’ Ernie Steidle reported that, in addition to the executive directive, the state legislature recently passed HB 2457, which enables data sharing across Health and Human Resources agencies. The law dictates that all HHR agencies and departments, for the purpose of data sharing, be considered a single organization. Eliminating barriers between the agencies will increase efficiency and streamline services for constituents. Based on the results of the initiative, other agency groups, such as Public Safety, could adopt the same model.

The state has solidified its commitment to modernizing its technology programs by forming the Virginia Information Technologies Agency’s Innovation Center of Excellence (VITA ICE). VITA ICE’s Anthony Wood explained that the center’s primary goal is to evaluate and implement new technologies by leveraging the capabilities of Virginia’s technology companies, and that it has developed a number of resources to establish relationships with Virginia tech companies.

State leadership is also intent on securing internal buy-in and educating government decision makers on the value of big data and analytics. The upcoming Governor’s Data Analytics Summit, an event exclusively for state and local government employees, will feature a lineup of speakers and panelists who will discuss how agencies can overcome challenges and achieve their goals through analytics. The event will offer actionable strategies for scoping and launching analytics projects.

This blog post originally appeared on Qlarion’s website.

Click here to learn more about the Big Data and Analytics Committee.
