Artificial Intelligence in the Benefits & HR Space: Protecting Private Data When Moving Toward AI in the Workplace
Published by Manage HR and HR Tech Outlook
Dorothy Cociu, RHU, REBC, GBA, RPA, President, Advanced Benefit Consulting
Artificial Intelligence is everywhere, growing more rapidly than anyone ever anticipated. It has become a part of our daily lives, whether in the predictive text on our computers and phones, the chat features that pop up while we shop for products or services online, or the automated phone responses we get when we’re stuck talking to a machine at the very moment we need a human being to talk to. Whether you view it as scary or innovative, frightening or fascinating, annoying or thought-provoking, it’s here, and the possibilities of AI seem endless.
I recently sat in a Professionals in Human Resources Association (PIHRA) South Orange County meeting on AI, with multiple speakers and a packed room. One of the speakers, giving a live ChatGPT demonstration, asked the audience of HR professionals to respond to a multiple-choice question on whether they or their organization were using some sort of AI program like ChatGPT. An overwhelming majority responded with “yes, but my boss doesn’t know it!” That really got me thinking about how many HR and benefits professionals are actually using an AI program to help them with their jobs without going through the proper channels at work to verify they can use it on company systems. Yes, AI can work really well for a number of functions: narrowing job applicants to a manageable number, evaluating performance without the human emotions, streamlining claims and enrollment functions, designing health and other plans based on data analytics, and seriously reducing administrative burdens. Chatbots and virtual assistants can provide measurable real-time support to plan participants, and AI algorithms can analyze data to tailor plans to meet the specific needs and preferences of a very diverse workforce. In production facilities, there is no doubt that AI can help with many routine and automated functions, but that is not the same as using AI in the areas of the workplace that handle confidential data, such as payroll, human resources, and benefits. Two questions /always/ need to be asked up front.
1) Are those AI programs crossing privacy lines?
2) Do you need proper firewalls put in place /before/ you use AI?
We’ll examine these questions throughout this article and provide some real-world solutions to use AI effectively, but safely.
Let’s back up for a moment and start from the beginning, to assist those perhaps not as familiar with artificial intelligence and how it can help (or hurt) in the workplace.
What exactly is Artificial Intelligence? It is the intelligence of machines or software, as opposed to the intelligence of humans or animals. It is also the field of study in computer science that develops and studies intelligent machines. AI is a tool that humans can use to improve workflow and efficiency. One thing to keep in mind is that AI doesn’t automatically replace people. But, as IBM recently put it, “AI won’t replace people, but people who use AI will replace people who don’t,” so that can be a concern for some.
Along the same definitional lines, Hybrid Intelligence combines human intelligence (experience, flexibility, creativity, empathy, instinct, common sense) with machine intelligence (fast, efficient, cheap, scalable, consistent), using data and algorithms to assist humans in making improvements and gaining efficiencies.
You may have heard of Generative AI and wondered what that means. A Generative AI system, such as ChatGPT, is one that can independently “create” new and unique content, based on the prompts given to it, from massive datasets.
Strictly speaking, Generative AI only generates or produces; it does not create. It can only generate content based on information fed to it by humans. Humans are the creative ones; programs like ChatGPT use machine-learning tools and computer programming to respond to queries and prompts and produce responses without human involvement. One of the most important components of this is “Prompt Engineering”: what are you asking, or prompting, the program to do? Prompts can be good or bad, depending on how specific the human writing them has been. For example, ask a chatbot to write you a resume with no real job specifics; then ask it again after sending it the job description and your former resume, with instructions to optimize the resume, include bullet points for the specific job you’re applying for, and add metric-based achievements. You will see wildly different results. That is Prompt Engineering within Generative AI.
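To make that contrast concrete, here is a minimal sketch in Python of the two resume prompts described above. The variable names and placeholder text are illustrative only, not part of any particular AI product:

    # A vague prompt gives the model almost nothing to work with,
    # so the output will be generic boilerplate.
    vague_prompt = "Write me a resume."

    # An engineered prompt supplies source material, the target job,
    # and the desired output format, so the results are far more useful.
    job_description = "Benefits Analyst: administer self-funded health plans..."  # placeholder
    current_resume = "Jane Doe. Benefits coordinator, 5 years..."                 # placeholder

    engineered_prompt = (
        "Rewrite the resume below so it is optimized for the job description "
        "that follows. Use bullet points tailored to the posting and include "
        "metric-based achievements for each role.\n\n"
        f"JOB DESCRIPTION:\n{job_description}\n\n"
        f"CURRENT RESUME:\n{current_resume}"
    )

    print(engineered_prompt)  # send this, not the vague version, to your approved AI tool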
Generative AI works in phases. First is the input stage, which trains the AI algorithm by feeding it good and relevant data and/or content; from that training, the system can produce anything from text to images to code, and can even compose music in a specific style or genre. In the input stage, you need to be very specific, as in the resume example above. Remember, as with any software program, “garbage in” means “garbage out.”
Next is the processing stage, where the algorithm identifies and replicates patterns in the data or content that align with the user’s prompts; this is where the human factor comes in. It is also where we’ve seen some highly controversial “garbage out,” such as a recent legal case in which the attorneys used AI for their research and the output was not factual. The attorneys were badly embarrassed when they discovered, during the trial, that the AI tool had “hallucinated,” or made things up, including the precedents it cited in the AI-produced research. /When it couldn’t find a supporting case that the attorney needed, it basically made one up./ This is why it is imperative that anything produced in the processing stage be reviewed and verified before being used, particularly if you’re relying on it for an actual situation; in this example, a court case in front of a judge and jury! The output stage is the result generated from the input, user prompts, and processing stages, so always check what comes out and verify the facts before using it. /It has been documented that AI can produce very convincing gibberish!/
Putting aside the “garbage out” for a moment, it is true that generative AI can have a tremendous impact on workflow and production. Its output speed is growing at an exponential rate: from ChatGPT generating 3,000 words per minute in 2022 to GPT-4 achieving 25,000 words per minute in 2023. It is now documented that Anthropic’s Claude produces 75,000 words per minute, far faster than any human could conceive of producing.
I asked Ted Flittner, Principal of Aditi Group, our technology, HIPAA security, and cybersecurity consulting partner, how careful the user should be when using Generative AI, to be sure that it is generating factual information and not hallucinating or making things up. “This hits on a major issue with AI right now. AI in general, and popular ones like ChatGPT, are designed to give answers,” replied Ted. “And they can provide a detailed answer even if there really isn’t one to give, or if it is not true. Users need to take output not as fact, but as something to consider and evaluate. We want to test answers in the real world and understand what inputs led to the answers, and consider whether the inputs are true.” There are things, of course, that can be done to improve the use of AI in your organization. “Training is a major issue,” Ted stated. “AI has a natural tendency to amplify inequities and incorrect patterns. It tends to give answers that match the training data, even if the data is NOT a fair sample of the real world.”
AI can help employees such as HR managers (who frequently use it for assistance in narrowing down job candidates, performance evaluations, and the like) and risk managers (who use it for examining trends and making better risk decisions) be better at their jobs, by letting them focus on analysis, strategic thinking, and problem solving rather than crunching data and performing other time-consuming, mechanical tasks. However, tools such as AI can be risky, and if you use them, you need to be safe. So I asked Ted: what are some basic safety/security protocols employers should implement when using AI?
“Clearly define who can access AI programs and output data,” Ted responded. “And define, in writing, what data fields AI programs can use. Ideally, don’t use any private personal data. Don’t let AI results and answers get released to the public unless we’re /absolutely sure/ that private info is not exposed. A growing number of computer services are being offered to automate the process of evaluating the data that AI systems can see and output. They’re akin to email security/encryption programs that try to prevent users from accidentally sending out emails with Social Security numbers, for example. The best approach is to use all of the tools: make good policies, train people, and use software to watch for problems automatically.”
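As a minimal sketch of the kind of automated check Ted describes, assuming a simple regex-based scan in Python (real data-loss-prevention products are far more sophisticated), an employer could screen AI output for patterns like Social Security numbers before anything is released:

    import re

    # Pattern that suggests private data has leaked into AI output.
    # Real DLP tools use many more patterns plus context; this is illustrative only.
    SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

    def safe_to_release(ai_output: str) -> bool:
        """Return False if the text appears to contain a Social Security number."""
        return SSN_PATTERN.search(ai_output) is None

    draft = "Summary: employee 123-45-6789 requested a plan change."
    if not safe_to_release(draft):
        print("BLOCKED: output appears to contain private data; route to human review.")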
*Use of Proper Security and Approval Channels*
Going back to that PIHRA meeting I recently attended, that attendee response really concerned me. It prompted me to include an AI class in our September 2023 Lunch & Learn educational series for our clients and guests, and to follow up by writing an article to assist users.
As I said in that September Lunch & Learn, you shouldn’t be downloading AI programs such as ChatGPT without going through the proper security and approval channels at your office. I asked Ted whether he would recommend that employers be sure there are policies in place BEFORE their employees (in the HR department, for example) start using AI such as ChatGPT. Ted’s response didn’t surprise me at all (I had assumed it back in September at our Lunch & Learn program). “Absolutely,” replied Ted. “You need to set the rules before it comes back to bite you. Good policies and controls can prevent privacy breaches or AI output from being used in the wrong way. Check points on data input, data output, and what people do with the ‘answers’ are a MUST. We have too many examples of groups rushing to implement AI and realizing later that personal data is or was wrongly shared, or that wrong assumptions were made from data output from [one or more] AI programs. Too many people are feeling the ‘AI Burn.’”
*Dangers of Using ChatGPT and AI Programs*
What are the dangers of using ChatGPT or similar software without the proper firewalls in place first? Is there a danger of ChatGPT crossing over into confidential databases or other proprietary or trade-secret information? “The dangers are both human and software-system ones,” stated Ted. “First, people can choose to share data that they shouldn’t with ChatGPT, for example. ChatGPT does use data to help improve the model; it uses the data we enter unless we take certain steps to block or minimize it. OpenAI does not use API data to train. So we can choose which tools to use, depending on how secure we need to be.
“The software dangers really show up when AI results drive automated actions. For example, when facial recognition company Clearview AI’s software was quickly embraced by police departments, it led to arrests of innocent people. The software marked people as suspects, and police put too much trust in the technology. As technology investigative journalist Kashmir Hill said, ‘It wasn’t a simple matter of an algorithm making a mistake; it was a series of human beings making bad decisions, aided by fallible technology.’”
Another concern of mine, which has been a high security concern since 2020’s massive COVID-driven move to remote work, is people continuing to work at home without regular supervisory oversight. I asked Ted whether there are additional policies (or perhaps the same as in-office policies, in some circumstances) that should be in place for employees working from home. Ted responded: “Match the privacy and security policies relating to other sensitive data for your company. Keep data private. Keep answers private. Send information securely. Work on secure computer devices and networks. Keep it focused on business.”
Today, organizations are realizing the vast potential of harnessing AI technology to enhance productivity, augment intelligence, and gain a competitive edge with tools like data analytics. You can greatly improve productivity by automating repetitive, time-consuming (and often hated) tasks, so humans can focus on the more strategic and creative functions on their desks or in their workplace. You can unlock innovation by using algorithms to power your applications and business models, and you can improve your data analytics with more accurate predictions and better data-backed decisions.
These tools, already common in the benefits and human resources space, are continually improving and helping us mere mortal humans predict future claims patterns, project long-term costs based on past group behavior, and more. Self-funded health plans (as well as insured health plans, of course) have used data analytics for decades to help predict costs and design benefits to meet the specific needs of a particular employer’s population. But now, with AI tools, the future looks even brighter when it comes to providing valuable insight into future costs and patterns, and to the benefits industry, AI looks like a highly valuable tool for helping contain costs.
*Potential Future Uses of AI in the Workplace*
Potential future uses of AI in the workplace, particularly in HR and benefits, include (but are not limited to) advanced analytics applications, process changes and reorganization, Employee Value Proposition assistance (common in HR now), and ways to measure performance with AI (without the emotions). I asked Ted to comment on the internal data that AI may need access to in order to perform these functions, how an employer can be protected from AI technology accessing confidential information within their systems, and why that requires more privacy/security protections. Ted responded: “We’re talking about the kinds of data that HR managers have access to every day, at both the individual staff member and /family member/ level. We really want to de-identify personal info and aggregate data so that personal data is not used directly. That could be done manually or by other software systems that ‘cleanse’ data before it’s analyzed.”
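A minimal sketch of that “cleansing” step in Python might look like the following; the field names are illustrative assumptions, not a standard HR schema:

    # De-identify records before analysis: drop direct identifiers and keep
    # only aggregate-friendly fields. Field names are illustrative only.
    IDENTIFIERS = {"name", "ssn", "email", "home_address", "dob"}

    def de_identify(record: dict) -> dict:
        return {k: v for k, v in record.items() if k not in IDENTIFIERS}

    employee = {
        "name": "Jane Doe",
        "ssn": "123-45-6789",
        "department": "Claims",
        "age_band": "40-49",
        "plan_tier": "employee+family",
    }
    print(de_identify(employee))
    # -> {'department': 'Claims', 'age_band': '40-49', 'plan_tier': 'employee+family'}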
*AI in Benefits Administration & Legal Implications*
I asked our benefits attorney, Marilyn Monahan, whether there are specific cautions or concerns she has about using AI in benefits administration. Her response: “While AI could be a great help in streamlining benefits enrollment and administration—making the process easier and more useful for both employees and employers—the human touch is still necessary. No system is turnkey, and work will have to be done—both to set it up and as part of an on-going monitoring process—to ensure the system is accurate and effective. Further, from the point of view of employee relations, employees will continue to have questions for HR on the enrollment process and benefit options, and HR needs to be available to answer those questions. From the point of view of benefit administration, the data produced by the system will need to be reviewed and analyzed to ensure benefits administration is going smoothly. ‘The computer did it’ is not a very compelling defense when a mistake is made.”
I also asked Marilyn what some potential drawbacks, limitations, and legal risks of using AI in benefits administration may be. She replied: “There are several issues that could arise. For example, an AI program used to translate an SPD might translate plan language incorrectly, or the translation might not satisfy the ERISA standard that the SPD be written in a manner calculated to be understood by the average plan participant. Or, when AI is used for enrollment, if the system has built-in biases, it might, for example, steer applicants in a protected class to benefit options that are not best for them personally.”
There are concerns about intellectual property, trade secrets, and privacy when you use AI programs. I asked Marilyn to comment on her primary concerns. “These issues—and problems—could arise in various contexts,” stated Marilyn. “For example, employers using an AI system to draft written communications should be concerned about the system incorporating copyrighted material without attribution. As another example, companies should recognize that materials created by AI will probably /not/ be protected by copyright laws.”
The legal implications of using AI continue to be a major concern, of course. States like California and cities like New York have passed or are considering laws on automated decision tools and AI. I asked Marilyn if she could tell us a little bit about these. “AI is the hot topic these days, not only within industry but among legislators as well. The City of New York has already passed a law regulating employer use of automated employment decision tools (the AEDT Law). It has been reported that the California legislature intends, when it returns from recess in January, to look into whether it should pass legislation to address AI in the workplace and beyond.”
Besides lawsuits and penalties, I asked Marilyn what some other potential consequences of improperly using AI are. Marilyn replied: “Do not overlook the damage to the company’s reputation, and the impact news of the misuse could have on client relationships, employee morale, and more.”
I asked Marilyn, in general, what her primary privacy and security concerns are about HR using AI. She stated: “Any data being input into an AI system must be adequately protected to ensure it is not accessible by those who are not entitled to access it, and that it is not vulnerable to a cyber-attack. Information that needs to be protected includes private employee data (such as personnel and medical data), customer data, and proprietary data. Before utilizing an AI system, employers need to ensure that the system is secure, that access is limited, and that any necessary contractual agreements (such as business associate agreements) are in place. In addition, employers should be concerned about employees using AI systems on their own, without the employer’s permission. Employers should put employees on notice that such actions are prohibited and will result in discipline.”
*Types of Employees/Departments & Safety Precautions*
A question often asked today is which types of employees and departments are, in general, “safer” to use AI, and which should be more cautious overall. Once again, I asked for the opinion of Ted Flittner.
“Personally, I think everyone needs to follow the same cautions,” Ted responded. “HR, Accounting, and Finance generally have access to sensitive staff data. Sales and Customer Service may see end-customer private info as well. All departments should understand the priority of keeping data private and follow the HIPAA ‘Minimum Necessary’ guideline of giving access only to the data that people need to get the job done.”
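One way to express that “Minimum Necessary” idea in software is a role-to-fields map that every data request passes through before anything reaches an AI tool. This Python sketch uses illustrative roles and field names, not a prescribed standard:

    # "Minimum Necessary": each role sees only the fields it needs.
    # Roles and field names below are illustrative assumptions.
    ROLE_FIELDS = {
        "hr":         {"name", "department", "salary", "review_score"},
        "sales":      {"name", "department"},
        "accounting": {"name", "salary"},
    }

    def fields_for(role: str, record: dict) -> dict:
        allowed = ROLE_FIELDS.get(role, set())
        return {k: v for k, v in record.items() if k in allowed}

    record = {"name": "J. Smith", "department": "Benefits",
              "salary": 82000, "review_score": 4.2, "ssn": "123-45-6789"}
    print(fields_for("sales", record))  # -> {'name': 'J. Smith', 'department': 'Benefits'}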
Certain job functions in the workplace have seen major headlines over the past several months. Replacing writers with ChatGPT-like programs has been big in the news lately, particularly given the recent Hollywood writers’ strike. I asked Marilyn to comment on the possible trade-secret and copyright implications of using ChatGPT or another AI program in general, and whether she had any warnings and suggestions for employers. “Two concerns that come to mind are accuracy and copyright. You cannot assume that the information generated by AI is true or accurate—it must be verified. Also, if the AI system copies copyrighted material without proper attribution, the employer could violate copyright laws when it uses the material.”
Should employers be worried that anything generated by a program such as ChatGPT could end up being put out on the internet for the public? Those are certainly concerns that I have, given my background in HIPAA Privacy & Security, so once again I asked Ted for his opinion. “Yes, for sure,” Ted said in response to whether an employer should worry about these programs resulting in data going public on the internet. “Employers don’t want payroll, personnel reviews, or confidential company plans to be broadcast. They don’t want it done by computers OR by people making bad decisions.”
I asked Ted what his biggest concerns related to privacy and security are when using any type of AI. “Systems that collect data, analyze it, share it, or take action on it without our knowledge or consent,” Ted responded. “Again, facial recognition is a great example. More and more places and groups are using it, with photos from all parts of the internet and everywhere we go. We’re not asked for our OK with all of that. And so more cities and states are enacting bans on using facial recognition in public.”
I then asked Ted if there are other general comments, concerns, warnings, or cautions he’d like to share with employers using or contemplating using AI for HR/benefit functions. Ted responded: “Don’t rush to use AI just because it’s new and it’s cool. Any process or tool you use must provide real value. Ask yourself, ‘How does it add value to our customers?’” I would echo those cautions provided by Ted.
As an employer, I’d want to know whether AI tools are prone to cyber-attacks, and if so, whether they are more or less vulnerable than any other programs or systems in employer offices. Ted shared these thoughts: “AI is not inherently more prone to attack than other IT. The Internet is a two-way superhighway. If a computer system can reach the internet, or if it’s in the cloud, it is at risk. Use the same precautions.”
*Artificial Intelligence Policy Concerns & Considerations*
Marilyn Monahan and I discussed the policy concerns that employers should address before using AI in the workplace, as they relate to privacy and security. These included:

- Be sure the policies suit your needs and priorities, and customize them to your workplace.
- Outline the purpose and scope of the policy.
- Create policies to maintain data privacy and security: put safeguards in place to protect data input into any GenAI technology, and address data collection, storage, and sharing.
- Prohibit employees from entering private or personal information into *any* GenAI platform.
- Uphold company confidentiality: trade secrets, private information, PII and PHI of employees and third parties, and confidential or sensitive data.
- Protect your commitment to diversity and anti-discrimination standards.
- Prevent copyright or theft concerns: double-check sources, and use AI as an idea-generator, not as a replacement for content creation.
- Prohibit employment-based decisions aided by GenAI: do not use AI to help you make employment decisions about applicants or employees (recruitment, hiring, retention, promotions, transfers, performance monitoring, discipline, demotions, terminations, etc.), and uphold legal principles.
- Outline best practices: have workers confirm information before relying on it (to avoid hallucinations or outdated answers).
- Understand the risks of data breaches in AI: /treat questions as if they will go viral on the internet/.
- Recommend that employees disclose when they are using AI and the extent to which it aided in the creation of any content developed.
- Be clear about the consequences if violations occur, and include a disclaimer.
- Use multi-disciplinary input from the organization’s stakeholders.

I asked Ted if he had any additions to this list. He added: “Employers should focus on value.”