Researchers at Google DeepMind and Stanford University have created highly accurate AI replicas of more than 1,000 people based on two-hour interviews.
A two-hour conversation with an AI model is enough to create a fairly accurate replica of a real person’s personality, according to researchers from Google and Stanford University.
As part of a recent study, the researchers were able to generate “simulation agents” — essentially AI replicas — of 1,052 people based on two-hour interviews with each participant. The interviews followed a protocol developed by the American Voices Project that covers a range of topics of interest to social scientists, including life stories and views on current societal issues. The resulting transcripts were used to train a generative AI model designed to mimic human behavior.
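The article does not detail the training setup, but one straightforward way to build such an agent is to condition a large language model on a participant’s full interview transcript and then pose test questions to the conditioned model. Here is a minimal sketch of that approach in Python, assuming a generic chat-completion client; `query_llm`, the function names, and the prompt wording are illustrative placeholders, not the researchers’ actual implementation:

```python
# Illustrative sketch: condition a language model on one participant's
# interview transcript so it answers later questions "in character".
# `query_llm` stands in for any chat-completion API call; the prompt
# wording is an assumption, not the study's published code.

def build_agent_prompt(transcript: str) -> str:
    """Wrap the raw interview transcript in role-playing instructions."""
    return (
        "Below is a two-hour interview with a study participant.\n"
        "Answer every subsequent question exactly as this person would, "
        "drawing on their stated life history, values, and opinions.\n\n"
        f"INTERVIEW TRANSCRIPT:\n{transcript}"
    )

def simulate_answer(query_llm, transcript: str, question: str) -> str:
    """Ask the simulated agent one survey or test question."""
    return query_llm(system=build_agent_prompt(transcript), user=question)
```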
To evaluate the accuracy of the AI replicas, each participant then completed two rounds of personality tests, social surveys, and logic games. When the AI replicas took the same tests, their results matched their human counterparts’ answers with 85% accuracy.
The AI clones performed particularly well at reproducing answers to personality questionnaires and at matching participants’ social attitudes, but they were less accurate at predicting behavior in interactive games involving economic decisions.
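The two rounds of human testing are what make the 85% figure meaningful: in the underlying study, an agent’s accuracy is normalized against how consistently each participant reproduces their own answers across rounds. A minimal sketch of one way to compute such a normalization for categorical survey responses (the function and variable names are illustrative):

```python
def agreement(a: list, b: list) -> float:
    """Fraction of questions on which two response sets match."""
    assert a and len(a) == len(b), "response sets must be same, nonzero length"
    return sum(x == y for x, y in zip(a, b)) / len(a)

def normalized_accuracy(agent: list, round1: list, round2: list) -> float:
    # Raw agent-vs-human agreement, divided by the participant's own
    # test-retest consistency across the two rounds: a score of 1.0
    # means the agent predicts the person as well as the person
    # predicts themselves.
    return agreement(agent, round1) / agreement(round1, round2)
```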
A question of purpose
The impetus for developing the simulation agents, the scientists explain, was the possibility of using them to conduct studies that would be expensive, impractical, or unethical with real human subjects. For example, the AI models could help evaluate the effectiveness of public health measures or improve understanding of reactions to product launches. Even modeling responses to major societal events would be conceivable, according to the researchers.
“General-purpose simulation of human attitudes and behavior—where each simulated person can engage across a range of social, political, or informational contexts—could enable a laboratory for researchers to test a broad set of interventions and theories,” the researchers write.
However, the scientists also acknowledge that the technology could be misused. For example, the simulation agents could be used to deceive other people online with deepfake attacks.
Security experts already see deepfake technology advancing rapidly and believe it is only a matter of time before cybercriminals find a business model they can use against companies.
Many executives have already said their companies have been targeted by deepfake scams of late, in particular scams aimed at financial data. Security company Exabeam recently discussed an incident in which a deepfake was used in a job interview, part of the rising North Korean fake IT worker scam.
The Google and Stanford researchers propose the creation of an “agent bank” of the 1,000-plus simulation agents they have generated. The bank, hosted at Stanford University, would “provide controlled research-only API access to agent behaviors,” according to the researchers.
While the research does not expressly advance any deepfake-creation capabilities, it does show how quickly the creation of simulated human personalities is becoming possible in advanced research today.