David Platzer, 2012 Dissertation Proposal Development Fellowship fellow in the Mediated Futures: Globalization and Historical Territories research field, is currently in the final stages of completing his PhD in medical anthropology at Johns Hopkins University (JHU). At JHU, Platzer conducted his dissertation research on autism, technology, and social design, focusing specifically on autism hiring programs at major technology companies like Microsoft, HP, and SAP, among others. For his research, Platzer traveled throughout the United States, India, and Europe and conducted over 150 interviews with adults on the autism spectrum. His research leads an emerging anthropology of autism and neurodiversity employment and contributes to existing research on the anthropology of work, corporate/organizational ethnography, and the study of neoliberalism. In May 2017, Platzer’s work was highlighted in a piece in Forbes, “Increasing Autism Employment: An Anthropologist’s Perspective,” which discusses the growing awareness of neurodiversity employment within corporate culture and the changing workplace norms this shift brings about. More recently, his research was featured in Entrepreneur magazine and in the upcoming documentary film This Business of Autism, which will be released in 2018.

David Platzer speaking on a “Thinking Positive” Panel with the College Internship Program earlier this year in Berkeley, CA

Rather than dive headfirst into academia, while finishing his dissertation Platzer accepted a job as a user experience (UX) researcher with Adobe Systems, a multinational computer software company. UX research has grown substantially in the technology industry as a way for companies to improve the quality of their customers’ interactions with products and to gain insights into users at every stage of the design process. Because many UX research methodologies are similar to those used in the social sciences, the field has seen a rise in the number of social scientists it employs.

As supporters of Platzer’s research project in its earlier stages, we were interested in learning how his project evolved during his time at JHU and how his training as a medical anthropologist contributes to his current work as a UX researcher.

Part 1: Graduate School Experience

Please provide a brief synopsis of both the theoretical and practical findings of your dissertation research.

My research on autism had two primary foci, both of which looked at forms of entrepreneurship braided to practices of care. That is, from adjacent though not quite overlapping angles, my dissertation investigated the way technical labor was being used to create a place in society for folks whose disability is defined precisely by an impairment in their ability to relate socially.

The bulk of my dissertation research looked at a global social entrepreneurial movement that seeks to create jobs for autistics in the tech industry. This movement began in Denmark with a small social enterprise called Specialisterne (“the specialists” in Danish) but now extends to autism hiring programs at major multinationals like Microsoft, HP, SAP, and Ford, as well as a network of smaller companies that extend from Silicon Valley to Munich to South India. Most are spearheaded by executives and entrepreneurs who are also parents of youth on the autism spectrum. These corporate programs and small enterprises take one clinical criterion of autism, perseveration (fixed, restricted, and repetitive behaviors and interests), and aim to turn this erstwhile pathological trait into a productive form of labor. They do so by training autistic employees to “use” their perseverative tendencies to look for bugs in prerelease software. In teaching autistic adults to become software testers, or QA (“quality assurance”) specialists, they hope to transform a cognitive “deficit” into something economically valuable: efficient labor. And thus, in a logic that mirrors one of the industry’s favorite tropes, a “bug” (autism) is turned into a desirable “feature” (high-performing employees).

This movement has been heralded by the global business press as innovative, representing a transformation in management strategy that simultaneously paves the way for a more inclusive workforce. And many of its leading figures, such as José Velasco of SAP and Thorkil Sonne of Specialisterne, both parents of young adults on the autism spectrum, frequently point out that this entrepreneurial movement represents “good business” and not a form of charity. Sonne and others argue that since autistic employees are highly loyal—grateful as they are for the opportunity to work—and, as a consequence of their perseveration, detail-oriented, they are both immune to the boredom that can lead to mistakes among non-autistic workers and easier to retain. In short, the perfect employees. And thus, these programs benefit neurodiverse employees while serving the bottom line.

This idea—elegant in its simplicity and exemplifying the industry’s ethos of innovation—has captivated the global business press, in turn inspiring a range of start-ups throughout the world: small businesses like ASPertise in Montreal, Prayas Lab in Bangalore, ULTRA Testing, Coding Autism, the Specialists Guild, AspireTech, and many others. Implicit in the public speech of its leading figures, like Sonne and Velasco, and more explicitly in much of this glowing press coverage, there is the suggestion that governments and families alike may be inessential to the care of the disabled. Indeed, one step further, that disability—a term that emerged in the early twentieth century to refer to those unable to work as a consequence of injuries sustained either in battle or in factories—might not in fact entail a loss of one’s productive capacities. That is to say, disability here is conceptualized as latent ability, in turn drawing on the notion of neurodiversity, albeit toward utilitarian, and nakedly capitalistic, ends.

But what I discovered in conducting this research is that the “successes” of Specialisterne, SAP, and others are in fact far more modest than the steady stream of media stories might suggest. Only a very small percentage of autistic adults are eligible for such positions (maybe 5–10 percent of the population by my rough estimate), and these are the folks who were often identified by my interlocutors as “the cream of the crop,” that is, those who are the least disabled, or the least autistic (at least if measured by conventional standards). Those hired often have considerable work experience and, in most cases, at the very least a college education. To even be considered for Microsoft’s program, for instance, one must be proficient in C++, a coding language that is increasingly antiquated in contemporary workflows. One interlocutor was screened out because he knew only JavaScript. Another was rejected after flying to Redmond for a week-long screening because he only passed two out of the three formal interviews that are part of the hiring process. But beyond such stringent screening, the ideas that autistic people do not get bored by repetitive tasks or that their only interests are technological are simply not true. In fact, while many autistics are technical, many more are not. And the pervasive stereotypes about autism that media representations of this movement inevitably affirm and reify fail to do justice to the vast, vast diversity of those on the spectrum. Not to mention that this is all unfolding at a moment in which the field of quality assurance is increasingly being automated. So, as with many other kinds of work, the need for (and availability of) this expertise is rapidly diminishing.

More immediately troubling, as I learned in conducting this research, the ubiquity of press coverage about this movement gives many adults on the autism spectrum and their families elusive hope for a brighter future: the dream of a more inclusive, forgiving society. Given that something like 85–90 percent of autistic adults are unemployed, that depression and anxiety are so pervasively co-occurring, and that many live in poverty, the idea that Microsoft, for example, would be interested in supporting autistic adults with well-paying jobs and robust support services is thrilling. And while these programs have certainly benefited the small number of those who have been accepted into them (in fact one interlocutor had been homeless and suicidal before being hired by SAP; today he is a manager and mentor for the program), at the same time they can generate considerable suffering for the majority who are excluded, filtered out in the extremely restrictive screening processes I discussed a moment ago. Thus, many who “fail” wind up feeling as if they have “the wrong kind of autism,” as one interlocutor put it to me. In short, although this is a somewhat reductive summary, what we see here is largely the triumph of a kind of media discourse about technology, technical labor, and the ethical potentiality of markets thereof, and not the emergence of a more truly inclusive society or workforce through them.

The other, ultimately smaller, part of my research focused on similar themes, from a different vantage. I also looked at tech entrepreneurs developing autism interventions, largely iPhone applications, toward behavioral/social therapeutics and remote diagnostics. I was interested in the way ubiquitous consumer technologies like the iPhone were being refashioned into novel “medical” devices, into machines that could diagnose a complex disorder like autism or be used as a therapeutic instrument to ameliorate the social deficits it can incur. That is, I was interested, for example, in the strange, and to me fascinating, notion that a smartphone could help “socialize” youth with developmental delays or correctly determine the nature of their disorder. In pursuing this line of inquiry, I soon found myself encountering a wide range of entrepreneurs, most of them parents of children and young adults on the autism spectrum, who saw in their entrepreneurship a “solution” to the problem of care (in the sense that a commercially successful app might not only aid their child, but also generate a source of income to support them). What I discovered, apart from the enormous creativity and anxiety that goes into entrepreneurship in general, is that nearly all such “autism entrepreneurs” are unsuccessful, at least if measured exclusively by financial rubrics. Because autism is so heterogeneous, so diverse, an app that might “work” for one autistic would likely do quite little for another. That is, the “market,” as such, is much smaller than the high prevalence rates of autism might otherwise suggest. And thus, for some of my interlocutors, failing in a commercial enterprise can lead to a sense of having failed as a parent and caregiver.

Stepping back from autism, and my immediate object domain, both pieces of my research investigated the tremendous hopes and aspirations human beings in today’s world place on technology’s ability to “solve” fundamental aspects of the human condition. That is to say, technology is often seen as a solution to the fact that we are limited beings (that we die, that our bodies fail us, that we may be born with certain differences—like autism—or that our children might be disabled). In the contemporary world, it is easy to imagine that technology in one form or another might allow us to transcend such limits. In the case of my research, whether it be therapeutic iPhone apps or much-publicized tech hiring programs at conglomerates like Microsoft, we find that failure (or highly constricted successes) can reveal to us not only something specific about autism but, more generally, cautionary tales about the blind faith we might place in technologies and markets predicated upon them. In other words, like their inventors, technologies too have limits.

In different ways, I have been particularly inspired in this project by two mentors with whom I have had the extraordinary good fortune of working quite closely: Paul Rabinow, of the University of California, Berkeley, and Veena Das, a faculty advisor at Johns Hopkins University. Rabinow’s pioneering work on biosociality and the biotech industry set in motion the field of inquiry in which my own study is situated, and he has been gracious enough to serve on my dissertation committee. Similarly, Das’s mentorship over many years at Hopkins has profoundly shaped the way I approach anthropology and ethnographic research more broadly: as an attentive attunement to the fragility of the human and linguistic forms in which its relations unfold.

At JHU, were you part of a minority of anthropologists using digital technology in your research? If so, how did you manage tensions between traditional anthropological research methodologies and the digital methods that ultimately led to your project’s findings?

Actually, despite the technological themes of my research, I myself was probably far more “analog” than many peers and even faculty. I know that several faculty in my department have taken considerable video while conducting fieldwork and have used software that can sync field notes with audio of interviews. Others have experimented with different technically assisted ways of conducting research. In particular, Anand Pandian, another inspiring advisor at JHU, has tirelessly experimented with different ways of using new media to conduct ethnographic fieldwork and to teach the practice to undergraduates. Graduate student peers have used other digital techniques to conduct and record research, the details of which I only vaguely understand.

In conducting my own fieldwork, I was actually somewhat of a traditionalist. And what findings I ultimately arrived at depended more or less on the same tricks of the trade that anthropologists have been using for over a century now: getting to know some people quite intimately and interviewing others as a supplement. In other words, participant-observation. More concretely, I collected audio of conversations and interviews with my iPhone and organized my field notes, audio, and photos (also taken with my iPhone) in a Dropbox folder. The only “digital” aspect was that I was sometimes obligated to conduct interviews by Skype or Zoom, since this is common practice in the business world. Several of my autistic interlocutors also found face-to-face conversations overwhelming, so a couple of my longest and most substantial “informant” relationships were primarily conducted via email.

That said, at an earlier phase of this research, in fact the piece funded by my DPDF, I did a form of “digital ethnography” by following message boards and engaging with an online community of autistics. In the end, I found this approach fairly limited, even though it provided some context that later became quite useful. Overall, I would caution against assuming that “traditional” and “digital” methodologies are all that different. The mediums and milieus might change—or might not—but anthropology will always depend on a combination of interviews, observation, and archival work. Such work might take a different form in new media ecologies, but we should not assume that all such changes are revolutionary or that they demand fundamental modifications to our research practices. Moreover, it is important to be reflexive and critical about what we mean when we use terms like “digital.” Technically, it means something very specific (Boolean algebra and its applications), but today we use it as shorthand for a vast array of mediums and mediated social relations. The term itself can often connote a radical break with the past that, to my mind at least, is never quite as radical as it might seem. Without being too pedantic, I am always a bit wary of calling things “digital” for precisely this reason.

In other words, and now risking hopeless pedantry, “traditional” and “digital” are not mutually exclusive, since neither term really means much. From my perspective, it would be more fruitful to think about how the constituent elements of anthropological fieldwork (observation, interviews, archival work, and various forms of participation) are either useful or not useful for the specific questions we are looking to engage in our research. Rather than lead with technique or method, I would argue that we should start with problems and then see what kinds of tools (whether technical, conceptual, methodological, or otherwise) we might leverage to address them. This perspective is indebted to the work of Rabinow, in particular Anthropos Today (2003).

As a graduate student, what training and resources did you need and have available to enable you to use digital technology as an anthropological tool?

This is a broader question about training, which I do believe needs to be updated to reflect current economic realities. In fact, it is in part the subject of an article I recently co-wrote with Anne Allison of Duke University (which will be published on the website of Cultural Anthropology in early 2018 with a series of responses). In the article we looked at the job market in anthropology in relation to the way PhD students are trained and their often quite tortured experiences seeking employment once they graduate. Most graduate programs in anthropology still assume that a vast majority of their students will become tenured or tenure-track professors and thus, training is overwhelmingly oriented to such jobs. The simple fact of the matter is that less than one-third of PhDs in anthropology will attain such positions. So, graduate programs in anthropology are effectively training a majority of their students for a role they will never occupy. To my mind, beyond the question of digital technologies and their application in the context of training and fieldwork, and the analysis and presentation thereof, there is a bigger question about pedagogy. In other words, how can graduate programs in anthropology train their students to flourish as anthropologists both within and outside the academy (as ultimately a majority will have to)? Digital technologies may play some role in such pedagogy, but my own perspective is that the bigger question is how we can remediate the split between “applied” and academic anthropology. This split is by no means an easy one to address, as the objectives of each form of anthropology, as well as their respective research protocols, are similar but by no means identical.

To be somewhat polemical, I think that graduate programs need to do a better job of training students to have some familiarity with the skills and methods that they will eventually need to master if they find themselves working outside the academy. For instance, in addition to a robust knowledge of field languages, fieldwork technique, and the canon (or, rather, one or another conception thereof), graduate students might also learn how to translate fieldwork into a set of personas—Weberian ideal-types that are used in many design processes to help ensure that a product, service, or system addresses real-world needs. From the vantage of academic anthropology, personas may be considered reductive simplifications of our interlocutors’ individuality and unique subjectivity, since they are by definition utilitarian and typical. But knowing how to track between, say, a finely textured and analytically astute article about fieldwork and a set of personas will ultimately help graduate students flourish in a broader set of professional contexts. It is not that personas are intrinsically superior to a richly described and ethnographically incisive article (or vice versa), but rather that each represents a different form of collecting and representing fieldwork findings. They each have their time and place, but learning how to move from one to another while still in graduate school will allow for more flexibility (and thus employability) once finished. The challenge, then, will be to find a form of pedagogy that can sustain what is best about anthropology as it is practiced in academic contexts, while also preparing graduate students for the wider variety of jobs and professional contexts with which many, in fact a majority, will have to reckon. Certainly digital technologies will play a role here, but I would wager that they will serve as tools and not as solutions in and of themselves.

Given the increasing number of early-career social science and humanities students seeking to research and understand the roles of technology and science in our everyday lives, what advice can you offer those who are just beginning exploratory field research?

Do not be afraid to get creative in how you approach the field and do not personalize rejection from potential interlocutors. Often, research projects on science, technology, and the corporate world will involve “studying up,” to use Laura Nader’s well-known term for researching social and economic elites. That means it can often involve a lot of indifference or even hostility on the part of potential interlocutors. And as I learned in conducting this project, this indifference often means no more than the fact that the people you want to interview are extremely busy and cannot see how talking with a graduate student in anthropology would benefit them. In my own experience, periodically pestering such people often paid off. In one case, it took two years of occasional emails before I eventually got a phone interview with one key informant. That phone interview led to an in-person interview, which led to longer meetings and, eventually, a robust email and phone correspondence. Now I speak with this interlocutor a couple times a month and see him whenever he is in San Francisco. A bit of persistence, that is, and acquired immunity to rejection can be necessary when studying up.

Also, some degree of research diversification can be helpful. While I had originally wanted to write my dissertation about the autism employment movement that became the eventual focus of my thesis, early on I found it extremely difficult to get access. Thus, parent-entrepreneurs developing iPhone applications, many of whom were far more eager to speak with me, presented another way to engage some of my core thematic interests as I was very, very slowly building up a network of informants on the employment side. Diversifying my object domain (and thus expanding my potential pool of interlocutors) allowed me to keep up a certain amount of momentum, continue to develop my field research skills, and, in the end, also allowed me to shed light on my interests from another angle. In my final dissertation, this part of my research amounts to a single, fairly short chapter (one that will likely be cut from the eventual monograph and reworked as a standalone article), but it was still invaluable in terms of my trajectory.

Finally, do not fetishize technology as fundamentally “new” or imagine that since you are studying techno-scientific milieus, “old” anthropological methods like face-to-face interviews, longitudinal informant relationships, and a pad of paper on which to take notes are now shamefully inadequate. While there is much to be said for experimenting with new forms of participant-observational fieldwork and data collection, I would wager that the core elements of fieldwork will remain fairly consistent with those of the earliest days of the discipline. Speaking conceptually, sometimes a technological innovation will actually transform the texture and timbre of human relations. But just as often it will not. Part of our job as researchers of science and technology is to pursue rigorous, clear-headed inquiry into the social situated-ness and consequences of technology. We have to resist the temptation to get seduced by/into narratives of progress and “transformation” that so often attend these fields. Technology is nothing new, even if there are new devices, mediums, and so on.

Part 2: Postdoctoral Experience

How has your dissertation research on the role of technology within the global autism community contributed to your current role as a UX researcher at a top tech company?

On one level, the fact that much of my research was conducted in the business world certainly helped me with things like corporate lingo and the social protocols of meetings, interviews, etc.—all of this likely helped me land the job at Adobe. On another level, early struggles I had in conducting fieldwork (namely in securing “access”—meaning, getting people to talk to me) meant that I conducted a large number of interviews, basically with anyone who would give me the time of day as long as they were at least tangentially related to the core themes of my project. Conducting something like 150 interviews for my fieldwork, if not more, taught me how to interview people. And because many of my informants were busy entrepreneurs or executives at corporations like SAP, I also had to learn to conduct different kinds of interviews. If I only had thirty minutes to talk with someone, I could not realistically conduct a life history or develop the kind of relational intimacy that I had with other informants. So, I learned not only how to conduct interviews, but how to do so under different kinds of temporal constraints.

Much of what I do today is interview-based, and so honing this craft while conducting doctoral fieldwork was probably the biggest thing that my dissertation research has contributed to my current role. Beyond interviewing technique, the “participant” aspect of my participant-observational research required me to become something like an expert/activist in diversity/nontraditional employment and work more generally. Such attention to what work means to people and how it impacts their lives is something that all of my current projects depend on. While my research at Adobe now focuses largely on creative workflows (how designers design things like websites, how YouTube stars film and edit their vlogs, how restaurateurs design menus—in short, how and why visual culture is created), my dissertation research, and my training at JHU more broadly, taught me to be attentive to the experiences of others. The cultivation of such attentiveness and the concrete forms it takes—for instance, how to ask the right questions—was invaluable preparation. And my fieldwork in particular taught me much about the personal and social values implicated in one’s work. So while my current research does not focus on neurodiversity employment, both my training at JHU and my dissertation fieldwork still inform nearly every aspect of what I do as a UX researcher.

Do you think UX research needs social scientists? How do your contributions differ from the work of those who have similar roles but do not have an advanced social science degree?

My team is largely composed of PhDs in psychology and anthropology, with some folks who have design and design research backgrounds. In some ways, we are all social scientists, though the meaning of “science” here is variable. The psychologists I work with tend to look more at cognitive and perceptual processes/operations and are more comfortable with typologies and models that anthropologists might perceive as overly schematic and cognitivist. Many are quite fluent in computational forms of behavioral modeling, drawing on data science to understand users’ behavior. In contrast, the anthropologists with whom I work bring an ethnographic, conversational sensibility to the field and typically eschew the more structured discussion guides and research protocols that our peers from psychology, design research, and HCI rely upon. Even when conducting usability testing, we bring an anthropological spirit to our practice, which I understand as a holistic attention to how particular behaviors or motivations are embedded in broader webs of meaning and of social relations. I take this holism as our unique contribution to UX research and it is the reason why Sharma Hendel, a PhD in cognitive science who is a senior researcher at Adobe, recruited me and other anthropologists to the organization. Despite our different approaches, there is a great spirit of collegiality and I have learned much about psychology from Hendel and other colleagues since working at Adobe. There is true interdisciplinarity, if also a degree of healthy agonism (psychologists might see us anthropologists as overly concerned with individuals and we might see their models as de-subjectivized).

That said, we are all interested in the relationship between behavior and motivation, and the role that both socioeconomic factors, like income, and more individual things, like attitude, preference, and skill, play in shaping them. Much of what we all do involves interviewing and it can also involve observing users interacting with an application or prototype (this is what we call usability testing). From these techniques, we develop one or another generalizable, actionable model that can help guide designers to create applications that are more useful and intuitive for the target user. These models are not scientific, in the sense that they only have to be “good enough” to guide design decisions. That is, in contrast to academia, there is a different standard of truth, one conditioned by the much more rapid temporality and more immediately pragmatic ends. One form these models often take is the set of personas I discussed earlier. But there are also other forms, such as journey maps, and these forms are always evolving, with new ones emerging. Sometimes my team also does more market-oriented research, to broadly understand a particular population (say YouTubers) or a kind of workflow (say collaboration) independent of a particular application or piece of software. For example, how and why do people take photos on their phones? What apps do they use? What are the problems they encounter? What do they hope to accomplish? These are the kinds of questions we engage as UX researchers and we use personas to present our findings to designers in a way that will be useful for the development and refinement of products.

I do not think that one must have a PhD or advanced social science degree to flourish in this field, though I have seen that those who do have them tend to do quite well. Often, they are less wedded to particular techniques or forms of presentation (like using personas) and more adaptable to new ones. And much to my colleagues’ perpetual amusement and occasional irritation, like other anthropologists, I myself often bring in ethnographic and philosophical sources to help interpret findings. In the last three months, for example, I have drawn on Gregory Bateson, Veena Das, and Hegel to help explain patterns of behavior that I was observing in my research and underlying motivations about which I was looking to make inferences. While my peers who have a deeper knowledge of design have much to teach me, my unique contributions largely lie in my skill as an interviewer and in the way I can bring a robust, eclectic, and unfamiliar set of conceptual frames to bear. Where I struggle, and where others fresh from academia struggle, is in translating our research into actionable findings, something few of us have been trained to do. Designers creating a new application are ultimately looking for something very concrete. So while Hegel or Geertz can help in UX work, such illuminations must be translated into something quite specific and concise, ultimately stripped of the scaffolding of such references (citation means something very different). That is, the reference to Bateson is ultimately meaningless unless it gets at something quite immediate and pragmatic and, in this case, when I presented my conclusions I had to condense the “double-bind” into a pithy sentence. Designers want to know what to create, not who to cite.

Flourishing as a UX researcher ultimately depends upon one’s skill and agility as a researcher, the ability to translate findings into the concrete and actionable language I have been discussing, and also the capacity to work in a highly, highly collaborative environment. Unlike in academia, where we spend a good deal of time pursuing our work in isolation, in the world of product design, our endeavors are thoroughly collaborative. To work collaboratively, you must be able to speak in a way that people with different kinds of expertise will understand and also put the good of the project over any individual desire for recognition. That is, you must master a clear, direct, and concrete idiom, one free of jargon or esoteric references, and with an eye to the immediate ramifications of the findings. This is a skill set that one does not typically learn in academic contexts, and the attention to a collective goal (a great product) over an individual one (the brilliant article) is also at odds with one’s academic training. In other words, advanced degrees in social science can serve as both an advantage and a liability to those working in UX.

Besides anthropology, what other social science and humanities disciplines do you think are best attuned to high-level UX research?

Psychology is the other academic discipline that I often see represented in UX research, and psychologists bring a slightly different approach to the field, one that is often more quantitative, cognitive, and experimental (in the technical sense) than that of us anthropologists. Yet, while discipline, academic background, and even specific research methods can be important preparation, they are less important than one’s orientation to other people. That is to say, the core “skill” of anthropology and UX research alike is a curiosity about others and something like “empathy,” or the ability and desire to attune oneself to the experiences of others. Whether you are using statistical measures and models or more “qualitative” ones, that curiosity and empathy are going to be the core of what you must bring to the field in order to be successful. Otherwise, the work will start to feel tedious. I would say that anyone who has that curiosity (which of course can take innumerable forms, including an interest in nineteenth-century novels), independent of academic training, could very well become an excellent UX researcher when it is matched with a strong desire to work in highly collaborative contexts.

Do you plan to return to academia at any point in your career and, if so, what areas of academic research do you think you might pursue given your experiences with UX research at Adobe?

I might be open to returning to academia at some point in the future. But whether I do or do not return to academic anthropology in some capacity myself, I do hope that in coming years there is increased fluidity between industry and academia. I believe this would benefit the discipline by increasing undergraduate interest and enrollment in anthropology, and at the same time relieve some of the economic anxieties that graduate students today face. Such systemic changes could very well encourage greater investment in anthropology programs by university administrators and thus lead to more new lines in anthropology departments. One could imagine an undergraduate course like “UX and Anthropology: An Introduction to Ethnography.” I could see such a course attracting much interest. Given the paucity of tenure-track jobs in the discipline (by my count, fewer than forty were available this year), such fluidity might soon become a necessity.

So if I do return to academia in some capacity, I would be interested in researching anthropological pedagogy and would like to use this research to develop the new curricula and forms of training that I have been discussing: teaching that prepares students (graduate and undergraduate alike) to practice anthropology (or some version thereof) in a wider range of professional contexts. Less pragmatically, I also have several other research interests. While I cannot talk much about it here, at Adobe I am currently working on a project that explores machine learning, automation, and the self-perceptions of professional designers. I would be interested in pursuing this topic further and with reference to a broader history of design practices stretching back to the Bauhaus. I am also interested in conducting research on bio-hackers and their data-driven bodily practices, which I suspect represent a new form of what the philosopher Peter Sloterdijk has called anthropotechnics: late capitalist, and often libertarian, forms of spiritual acrobatics that seem in equal measure entrepreneurial, scientific, and highly ascetic.

With all that said, I am extremely happy at Adobe and see no reason that I could not pursue any of the academic projects I just mentioned while continuing to work here. I greatly enjoy the collaborative spirit and pragmatic orientation of my work as a UX researcher and appreciate Adobe’s famously collegial atmosphere. In short, I suspect I will stay put for a good, long while. My manager, the incomparable Sheryl Ehrlich, who has a PhD in psychology from UC Berkeley, has been at Adobe for seventeen years, where she has worked on a vast array of research projects that extend beyond her dissertation research on vision. Sheryl encourages our team to publish and to stay active as academic researchers, and Adobe also supports us in things like attending conferences and taking time off to work on research. In fact, my psychologist colleagues are frequently traveling to academic conferences and I myself have participated in several in the short time I have been at Adobe. Relieved of any teaching obligations, I have found that I actually have more time to write as a UX researcher, not less.

In short, I do not see UX research (or applied work more broadly) and academia as mutually exclusive career options. One can be a successful applied researcher and still participate in academic life. I would encourage social scientists interested in learning more about UX research, especially PhD students in anthropology, to reach out to me directly (platzer@adobe.com). If you find yourself in San Francisco, I would be happy to host you for lunch or coffee to talk more.