Even when I’m not looking for a job, I browse job listings. It’s a compulsive habit, a superstitious ritual. As if perusing could protect me from losing my job and being unable to find another decent one.
Lately I have been doing my looking on LinkedIn. As I proceed to the search bar to type in my keywords, a post at the top of my feed sometimes catches my eye. Often, it has been written by a recruiter giving advice on résumés. A common tip is to stop worrying about the ATS—the applicant tracking systems that companies use to process job candidates.
The earliest versions of these systems are older than job listing websites. Reporting in 1989 on how Universal Studios dealt with all the résumés it received, the Orlando Sentinel explained that the company “enters the information into its ‘applicant tracking system’ computer and then mails postcards with E.T.’s picture, thanking applicants for their interest.” In the 1990s, companies like Monster and CareerBuilder promised to speed up such routines by allowing job seekers to post their résumés online so that employers could find and process them more easily. Nearly all of the world’s largest companies were recruiting online by the early 2000s, and by the middle of that decade most required online applications. If you’ve applied for a job recently, there’s a good chance you had to do it through an ATS made by a company like Oracle, Workday, or UKG.
Over the years, job seekers have developed fears that software might filter them out for unforeseeable reasons. It doesn’t help that today well-paying jobs can seem elusive even in a supposedly healthy economy. Reddit is full of pain for which applicant tracking systems catch the blame, posters livid over getting an apparently automated rejection shortly before a scheduled interview. Or getting an apparently automated rejection for a job at the company where you already work, after your manager referred you. “Fuck ATS,” somebody titled their post. “How does ATS work?” wondered someone else. A cottage industry has arisen in response to the uncertainty. Companies offer to ATS-optimize your résumé. Newspapers give advice about how to get it to a person.
Of course, a person who works in HR might not seem much better if you’re among the legions of HR objectors: leftists who recognize HR as an emissary of management and an enemy of labor organizing, reactionaries who oppose HR’s DEI programs, and all kinds of employees who think HR professionals are simply bad at their jobs. But while objections to HR people are pervasive, objections to HR technology have siphoned off some of that energy, to the extent that recruiters have tried to mitigate concerns by downplaying the role of ATS in hiring. Some go as far as to claim their profession’s norm is still to read “every résumé.” LinkedIn even published an article in 2023 “debunking” the “misinformation” that applicant tracking systems filter out a significant number of applications. And yet, a 2021 Harvard Business School report found that most large U.S. employers were using their hiring software to rank or filter candidates. Across the United States, the UK, and Germany, for example, 48 percent of companies surveyed for the report chose to filter out some candidates with employment gaps longer than six months.
I began to develop suspicions about the comforting claims of recruiters. An ATS is just one of the inscrutable ways employers sort, filter, and monitor people, and the makers and deployers of all kinds of HR software have a history of minimizing its real repercussions, from discrimination on the basis of gender, race, and disability, to privacy invasion, inaccuracy, and bewildering hurdles. Now, like many in the tech industry, HR software companies are investing heavily in generative AI and other machine-learning tools, which, with their black boxes and proprietary models, tend to further obscure who is doing what to whom. I wanted to see the process more clearly or at least closer up. What were these people and their software up to, and what did they have in store for us?
Ranked and Filed
Over three days last September, more than seven thousand HR professionals and vendors gathered at the Mandalay Bay Convention Center in Las Vegas for the annual HR Technology Conference & Exposition. From its beginnings in the late 1990s, HR Tech has pitched itself as a place where executives can hear “major new product announcements.” There were around 115 exhibitors in 1999, the year the New York Times reported that job candidates were starting to keyword-optimize their résumés, though many companies still took paper applications “more seriously.” The exhibitors back then used modifiers like “web-based” and “e-” to differentiate products from their analog predecessors. A quarter century later, there were more than four hundred exhibitors, and “AI,” variously defined, was now their preferred differentiator.
My plane landed the night before the conference began. At baggage claim, there were ads for Workday, which makes one of the most-used ATS in the country. In my suitcase there were office clothes from before I started working remotely. By the time I got to my hotel, it was midnight. I had worked all day before leaving for the airport and then in the car on the way. My backpack, heavy with both of my computers, was starting to strain my shoulders. The line for the check-in desk, I noticed with dejection, looked to be twenty-some people long. In a prelude to the days to come, signs said we could download an app and check in online instead.
The next morning, a white badge hanging from my neck, I was ready to blend into the crowds flowing around several hallways of conference rooms. Downstairs was an exhibit hall where companies hawked applicant tracking systems and software for everything that comes after them: video interviews, background checks, drug tests, time tracking, payment processing, wellness programs, trainings, performance evaluations, surveys, chatbots, and the data mining that goes along with all of it.
Workers were stationed outside most conference rooms with portable devices they used to scan people’s badges before allowing us to enter. I scanned into a room where the president of an ATS maker was going to talk about AI’s effects on job candidates. The few conference sessions I’d attended already hadn’t been very revealing. A panel of “women in HR tech” talked about their careers, and a DEI consultant shared suggestions for getting along with bigots. I thought the ATS guy might finally give me some insight into the inventions of his heedless ilk, the ones causing all that distress back out in the world.
I was taken aback when he started speaking, because he did not actually seem heedless at all. Unlike most of the speakers I heard before and after him, his descriptions of workers’ concerns seemed rooted in reality, rather than the wishful projections of technology companies. “People are saying there’s more scams, there’s more spam, they’re not hearing [back] from anybody,” he said, citing a candidate survey his company, Greenhouse, had run. AI had made it easier for scammers and spammers to generate fake jobs and fake applications. At the same time, he added, real job seekers felt pressure to keep up with the application numbers they saw others posting on social media. Some of them were using AI themselves to send more applications more easily. But it’s not as though HR had infinite staff to deal with the influx. (In fact, big HR departments have had recent layoffs.) He didn’t have to say structural problem to bring me back to the reality that this was one.
For him, of course, it was a business opportunity. He was part of what I would come to see as a savvy minority of people and companies capitalizing on AI fatigue. I was beginning to feel it myself: the conference was oversaturated with more than forty AI sessions. Greenhouse was introducing features designed to help with problems AI had supposedly exacerbated. If generative AI had led to more applications, and in turn to more overwhelmed recruiters ghosting people, Greenhouse was available to step in. Their new features included ways to screen out redundant applications and a kind of anti-ghosting badge that companies could earn for sending timely rejections.
Greenhouse also announced new filtering options for recruiters. The pitch was that they would leave hiring “in the hands of humans.” Their foil was the machine learning many other companies were using to prioritize applicants. Unlike these companies’ potentially biased black-box algorithms, the new options would leave ranking and sorting to recruiters, based, for example, on keywords they selected. My former confusion about recruiters’ advice—did we really not have to worry about computerized ranking and filtering?—started to feel quaint. A more pertinent question seemed to be: By whose logic would we be ranked and filtered?
Slog in the Machine
The exhibition hall offered many possibilities, too many to take in during the first evening’s “grand opening and pub crawl.” Wandering around, I recognized company names like Equifax and ADP and marveled at some that were new to me, like Happl and Survale. The biggest booths were bigger than four-car garages, and “football field” was the most obvious unit of measure for the floor. (It was about two fields, I learned afterward.) The “pub crawl” meant some booths had drinks, and whether thanks to alcohol or the energy of beginning, the hall stirred more than it would in the days afterward. I smiled at other faces out of habit, quickly finding that the reps flanking the booths used my eye contact as a conversation hook. Before I learned to direct my eyes with greater intention, I was reeled into a virtual-reality game that involved shooting HR-related words as they raced toward my face.
After a few minutes, I handed back the headset and controllers. But keywords continued to jump out at me from tchotchkes, signs, and literature. The word people appeared in countless slogans: “our purpose is people,” “always designing for people.” Here was skills, a longstanding catchphrase with conveniently blurry connotations: while “skills-based hiring” is said to give people a fairer chance by deprioritizing formal credentials, it also justifies work’s gigification or fragmentation into “skills” instead of jobs. Here was the future of work, described by a promotional tote bag as “a world where HR leaders use AI-driven platforms to hire the right people with the right skills into best-fit roles so they can grow, develop and stay forever and ever.”
I went to a demo of one such platform, HiredScore, recently acquired by Workday, which uses machine learning to grade job candidates on a scale of A to D. The grade reflects the “relevance” of their application materials to the job description. From a row of low stools in front of a big screen, I watched the demonstrator click through a table of sample candidates. To the left of each name was a big, color-coded letter: as A was to green, so D was to red. HiredScore says, in apparent anticipation of black-box accusations, that these grades are “explainable.” The interface even offers some explanations; in the demo, one reason a candidate got an A was “relevant industry experience.” These explanations reminded me of the old joke: terrible food—and such small portions! They seemed so basic, but were they more robust, things could be better for applicants. The scholars Ifeoma Ajunwa and Daniel Greene argue that “information asymmetry” is one of the main ways hiring platforms tilt the balance of power toward companies and away from job seekers.
HiredScore also promises fairness. Not only is the tool itself “unbiased,” Workday says, it can reduce bias in the hiring process. Hiring platforms have made such claims for years, and it’s true that the human bar doesn’t seem as though it should be hard to clear. Why couldn’t a machine-learning algorithm be less biased than we are or have its bias corrected as we might correct our own? This argument permeated the exhibition hall, now implicitly, now explicitly. Based on the confident pitches, you’d be forgiven for not realizing that experts in multiple fields had weighed in with great caution, if not outright skepticism. Even those who believe fair algorithms are possible note that the technical challenge is steep and the counterexamples plentiful. Most famously, Amazon abandoned an experimental hiring tool in 2017 because it learned to prefer male over female candidates.
HiredScore tries to prevent this kind of thing by, among other measures, “balancing” training data so that historical discrimination isn’t reinforced. A Workday senior director promised in a recent webinar that even if a company had hired white men at a higher rate, the software wouldn’t learn white maleness was a preferred trait. “That’s a classic mistake,” he said, citing the Amazon debacle. Whether HiredScore avoids similar errors is impossible to confirm independently. In September, in accordance with New York City law, it released a third-party bias audit, which found no racial or gender bias. But the New York Civil Liberties Union has maintained that such audits prove little. They do not account for age, disability, or pregnancy bias, nor do they evaluate whether a tool accurately assesses candidates’ ability, a detail that is “critical” to determining whether they have been victims of unfair assessment.
Either way, the very question of whether a hiring tool is biased can serve to obscure the power dynamics at play in the hiring process. As Julia Powles wrote six years ago, “The reality that bias is primarily a social problem and cannot be fully solved technically” might actually put AI in a more powerful position. This reality, she argued, allows “relative increases in ‘fairness’ to be claimed as victories—even if all that is being done is to slice, dice, and redistribute the makeup of those negatively affected by actuarial decision-making.”
Actuarial decision-making was the real star of the exhibition hall. “Filter the crème in seconds,” one booth called out. The acting CEO of yet another ATS company described the benefits of such technology by analogy to her personal tracking device. She was wearing a Whoop, a wristband that collects your biometric data with the apparent goal of relieving you of the burden of self-knowledge. “When I wake up in the morning, it tells me if I am ready—recovered enough—to go on a ten-mile run or if I need to go do yoga,” she explained. “And based on what this little app says, I make a choice. I adapt. That is what technology at the B2B level is starting to lean into.”
It was easy to imagine what a relief it might be, as an overwhelmed recruiter, to abdicate decision-making in this way. Look at the As, ignore the Ds, filter the crème. Let your own knowledge and experience be subsumed like a piece of driftwood in a wave of data. Or maybe it would be frightening, a threat to your self-image and livelihood. When a product promised to free recruiters from tedium so they could focus on meaningful work, did they fear they might be freed from work entirely and thrust into the mass of job seekers themselves? Was that a reason for the strenuous insistence, in the face of so much contrary evidence, that résumés are destined for human hands?
Workplace Panopticon
It was hard to gauge how recruiters, not to mention other attendees, felt about everything at the conference. While people with something to sell were usually happy to speak to me, those I approached in audiences and hallways were not. Two women, when I asked them to talk, told me they would have to check with their employer to see if it was allowed. Another woman took my email, saying she would write, but I never heard from her. A man agreed to meet with me one afternoon, then failed to show. Others gave me their emails but didn’t respond after I wrote.
I did interview two people who weren’t actively promoting something. One, a team leader at a five-thousand-person company, was a lover of innovation, with a genius for noncommittal vagueness. (Even when she brought up her mother’s age, she would only give a range.) The other, a soft-spoken man who mentioned he was not a manager, got so anxious after expressing mild AI skepticism that he refused to give me his email, even for fact-checking. I promised I had no interest in using his name, but he still covered his badge with both hands as he walked away.
I couldn’t blame him; every badge bore a QR code that anyone could scan. Mine went to the results of a Google search for my Baffler articles. Companies must also have been able to access contact details; over a month later, I was still getting sales emails from those that scanned me, and I got scanned more times than I could count, including during interactions as small as taking a chocolate from a candy bowl.
This was in keeping with the general ethos of HR technology, not only during, but beyond, the hiring process. HR’s adoption of AI, and the data collection AI requires, have made workplace surveillance and monitoring “ubiquitous” and “perpetual,” as two legal scholars recently wrote. In their 2020 paper, “The Invisible Web at Work,” Richard A. Bales and Katherine V.W. Stone compile pages and pages of examples: sensors, microphones, and cameras on chairs, badges, and equipment. Products that track time on task or frequency of multitasking. Programs that infer an employee’s engagement level from the language they use in emails or their likelihood of success from, among other things, the length of their résumés. (Shorter is better.) The HR term for all this data collection is “people analytics,” a subject on which most major business schools have recently created classes or held conferences.
At least for now, data about people is hard for companies to use in all the ways they might wish to. Spread over many poorly integrated systems, it’s too fragmented to cohere into an all-purpose surveillance tool. At least, that’s the impression I got. People at the conference complained, or empathized with others’ complaints, about how annoying it was to compile data from different systems or to enter data from one system into another. A lot of drudgery appears to stand between workers and the most fully realized iteration of a workplace panopticon.
Several companies promised to reduce such drudgery, but none stuck with me as much as one called Rippling. Rippling’s session, on the afternoon of the second day, was called “Can AI Predict Employee Performance? Come Find Out.” When I entered the room, the company’s chief operating officer was standing at the front in khakis and a dark jacket, his hair parted to the side. “The question that we were asking ourselves,” he said, was “what would it look like if you could actually pull a chair up next to an employee over the course of their first ninety days on the job and literally look at absolutely everything they had done?” This is what Rippling’s new product, Talent Signal, had been designed to do.
The Josh Bersin Company, which analyzes HR tech trends, called Talent Signal “the first product to use AI to leverage work-data to show employees and managers who is performing at the highest levels, stack rank employees, and give workers and managers detailed development feedback.” What this means is that, having trained on the performance data of similar employees, it evaluates all of a customer service rep’s calls, or all of an engineer’s code, from their first three months of work. Talent Signal accomplishes this through integration with the platforms where that work is done: Salesforce, Zendesk, and GitHub. Each employee is ranked “high potential,” “typical,” or “pay attention,” according to how promising they are determined to be. Accompanying the ranking is an AI-generated explanation, with examples to illustrate the AI’s rationale.
I had arrived at HR Tech most anxious about how a person is processed when they apply for a job, but it was more disquieting to learn how a person might be processed after getting one. In a demo of Talent Signal, one of the examples from a customer service rep evaluation was:
[Their] understanding and responsiveness contributed to a timely and satisfactory resolution, as reflected in the customer’s eventual appreciation, stating, “Thank you for going above and beyond to help, I really appreciate it.”
I asked from the audience whether an employee evaluated by a model that rewards them for customers’ praise might get de facto punished if assigned to a region where people are less praiseful. An SVP of sales said, yes, this was possible. He anticipated that such a situation could lead to a positive outcome, wherein the manager would review the AI assessment, realize what was going on, and ultimately be impressed with how well the rep dealt with brusque customers. (He envisioned these customers as New Yorkers.)
This was a constant, at Rippling and beyond: AI and the surveillance it requires were presented as a net positive, not only for companies but for employees. Potential downsides for workers were not denied so much as euphemistically minimized. (Bales and Stone are less sanguine, writing that workplace AI “threatens to invade worker privacy, deter unionization, enable subtle forms of employer blackballing, exacerbate employment discrimination, render unions ineffective, and obliterate the protections of the labor laws.”) Rippling stressed the upsides: workers would receive “high-quality,” “independent” assessment. Under-recognized stars would finally get their due. Anomalies and mistakes would be caught by a human fail-safe. The point was to “help new hires succeed.”
Josh Bersin himself, invited onstage by the COO to lend an independent perspective, belied these enlightened goals, however inadvertently. “I know it’s a little bit scary; I know it’s a little bit creepy,” Bersin began. “But . . . I’ve seen these kinds of things before.” Big companies, he said, were already doing similar AI evaluations with technology they had built themselves: “I know for a fact, for example, that when Meta did the big layoff about a year and a half ago, they did this under the covers.” Wondering exactly what “this” and “under the covers” meant, I called Bersin afterward to ask for more details on how Meta had used AI evaluations to inform their layoffs. He didn’t want to talk about it and correctly suspected that Meta wouldn’t either, but he did point out that the tool hadn’t caused the layoffs. The cause of the layoffs was Mark Zuckerberg’s “year of efficiency.” Employee evaluation was going to happen with these tools or without them, just as it always had.
I wanted to object: any tool that gives bosses so much information about you, which they may or may not choose to share, tilts the balance of power further toward them. (And who knew whether the information would be accurate? And accurate according to whom?) At the same time, I had to grant Bersin’s point, if not quite in the way he intended it. Focusing on the addition of AI to old practices—like focusing on whether a hiring algorithm is biased—can sometimes distract from the bigger picture.
Quiet Desperation
It is one thing to be laid off from a software engineering job at Meta. It is another not to be able to make a living even though you are currently working. Each time I visited the exhibition hall, I noticed a new booth promoting “earned wage access,” an industry that roughly doubled between 2021 and 2022, when seven million Americans used it, according to a Consumer Financial Protection Bureau report from last summer. Earned wage access apps allow employees to get a percentage of their paycheck early. This access might be free if you can wait a day or two, but if you need the money right away—and many people do—you often have to pay a fee (the CFPB found that typical fees add up to an annual percentage rate of 109.5 percent). Earned wage access providers present themselves as a less expensive, and more benevolent, alternative to payday loans. But their business model similarly depends on enough people making so little that they are regularly desperate for emergency cash. Reps at the booths, as always, managed to find the bright side. “You realize how desperate these people are,” one said, describing typical expenditures (food, gas, bills, a daughter’s prom dress). “It’s very sad, but that’s the reality. So I feel really good about myself at this company.”
Mass desperation and inequality are hard to face squarely, especially if you are their beneficiary. But you don’t have to face them if you imagine the subjugation is natural, akin to a parent’s dominion over their children. A man described his company’s decision to offer earned wage access as “almost . . . parental.” A woman wanted to “promote good financial behavior.” One booth had a wall where conference goers affixed sticky notes naming their “very first hourly/frontline job,” as though such jobs were juvenile, the type of thing from which graduation was assumed, though members of the catering, security, and A/V teams on the floor varied widely in age. I asked an A/V guy who appeared to be in his sixties what he thought of what he was hearing. He said he heard only when the sound stopped, because it meant something was wrong; otherwise, he tuned it out.
His approach was tempting; the things I heard were spiking my anxiety. The opening keynote speaker had described copywriters like me as “at the front lines” of AI’s labor market transformation, alongside paralegals and “call center people.” “Frontline,” in HR terms, is a euphemism for lower-status workers, one that casts their slim economic chances as unfortunate but inevitable. By invoking it, the speaker suggested, with or without intention, that frontline ranks were expanding as AI threatened more jobs. There are many fronts in “the war on talent,” as one HR leader’s parapraxis had it (“war for talent” is the typical expression). And while conditions differ significantly depending on what type of “talent” one is deemed to be, the tools on display at the Mandalay Bay Convention Center could facilitate the diminishment of almost any livelihood, whether by redundancy, gigification, profiteering, or surveillance.
My hotel was connected to the convention center by a series of covered walkways, and each night I joined the badged throngs milling through them, past restaurants, shops, and so many slot machines. At first I thought I felt immaterial, like a ghost, because I was an outsider who didn’t know anyone. As the conference progressed, the feeling both grew stronger and attached to a different source. While the word people was plastered everywhere as both a noun and an adjective, the workers of the exhibit hall’s collective imagination were not real, three-dimensional people. They were shadows without substantive interests or worries beyond the success of their companies. That was the only way these products could be pitched as win-wins. But, come on. We were in Las Vegas—everyone here knew the real money comes from making sure enough people are losing.