Insight is our reward

Publications in Artificial Intelligence & Image Processing by NOMIS researchers

1 - 10 of 10 results

February 19, 2025

Recent advances in stem cell-derived embryo models have transformed developmental biology, offering insights into embryogenesis without the constraints of natural embryos. However, variability in their development challenges research standardization. To address this, we use deep learning to enhance the reproducibility of selecting stem cell-derived embryo models. Through live imaging and AI-based models, we classify 900 mouse post-implantation stem cell-derived embryo-like structures (ETiX-embryos) into normal and abnormal categories. Our best-performing model achieves 88% accuracy at 90 h post-cell seeding and 65% accuracy at the initial cell-seeding stage, forecasting developmental trajectories. Our analysis reveals that normally developed ETiX-embryos have higher cell counts and distinct morphological features such as larger size and more compact shape. Perturbation experiments increasing initial cell numbers further supported this finding by improving normal development outcomes. This study demonstrates deep learning’s utility in improving embryo model selection and reveals critical features of ETiX-embryo self-organization, advancing consistency in this evolving field.
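
As a rough illustration of the classification step described in this abstract, the sketch below sets up a binary "normal vs. abnormal" image classifier in PyTorch. The architecture, input resolution, and training step are illustrative assumptions, not the model used in the study.

```python
# Minimal sketch of a binary "normal vs. abnormal" image classifier.
# Layer sizes, input resolution and the random batch are illustrative
# assumptions, not the architecture or data used in the ETiX-embryo study.
import torch
import torch.nn as nn

class EmbryoClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 2)  # two classes: normal / abnormal

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

model = EmbryoClassifier()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One illustrative training step on a random batch standing in for
# live-imaging frames (1 channel, 128x128 pixels).
images = torch.randn(8, 1, 128, 128)
labels = torch.randint(0, 2, (8,))
loss = loss_fn(model(images), labels)
loss.backward()
optimizer.step()
```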

Research field(s)
Bioinformatics, Artificial Intelligence & Image Processing, Biophysics, Developmental Biology

How can we build AI systems that can learn any set of individual human values both quickly and safely, avoiding causing harm or violating societal standards for acceptable behavior during the learning process? We explore the effects of representational alignment between humans and AI agents on learning human values. Making AI systems learn human-like representations of the world has many known benefits, including improving generalization, robustness to domain shifts, and few-shot learning performance. We demonstrate that this kind of representational alignment can also support safely learning and exploring human values in the context of personalization. We begin with a theoretical prediction, show that it applies to learning human morality judgments, then show that our results generalize to ten different aspects of human values — including ethics, honesty, and fairness — training AI agents on each set of values in a multi-armed bandit setting, where rewards reflect human value judgments over the chosen action. Using a set of textual action descriptions, we collect value judgments from humans, as well as similarity judgments from both humans and multiple language models, and demonstrate that representational alignment enables both safe exploration and improved generalization when learning human values.
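
The learning setting described here can be pictured as a bandit whose reward estimates generalize across actions through a similarity matrix. The sketch below is a minimal, assumption-laden version of that idea: the similarity values, the UCB-style choice rule, and the similarity-weighted update are stand-ins, not the paper's model.

```python
# Minimal sketch of value learning in a multi-armed bandit where the agent
# generalizes across arms (text-described actions) via a similarity matrix.
# All quantities below are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
n_arms = 5
true_values = rng.uniform(-1, 1, n_arms)        # human value judgments per action
similarity = np.exp(-rng.uniform(0, 2, (n_arms, n_arms)))
similarity = (similarity + similarity.T) / 2    # symmetric "representational" similarity
np.fill_diagonal(similarity, 1.0)

estimates = np.zeros(n_arms)
counts = np.zeros(n_arms)

for t in range(200):
    # Optimism in the face of uncertainty: value estimate plus an exploration bonus.
    ucb = estimates + 1.0 / np.sqrt(counts + 1)
    arm = int(np.argmax(ucb))
    reward = true_values[arm] + 0.1 * rng.standard_normal()
    counts[arm] += 1
    # Propagate the observed reward to similar arms, weighted by similarity:
    # this is where an agent benefits from human-aligned representations.
    estimates += similarity[arm] * (reward - estimates) / (counts + 1)

print(np.round(estimates, 2), np.round(true_values, 2))
```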

Research field(s)
Artificial Intelligence & Image Processing, Psychology & Cognitive Sciences

September 21, 2024

The success of methods based on artificial neural networks in creating intelligent machines seems like it might pose a challenge to explanations of human cognition in terms of Bayesian inference. We argue that this is not the case and that these systems in fact offer new opportunities for Bayesian modeling. Specifically, we argue that artificial neural networks and Bayesian models of cognition lie at different levels of analysis and are complementary modeling approaches, together offering a way to understand human cognition that spans these levels. We also argue that the same perspective can be applied to intelligent machines, in which a Bayesian approach may be uniquely valuable in understanding the behavior of large, opaque artificial neural networks that are trained on proprietary data.

Research field(s)
Artificial Intelligence & Image Processing, Psychology & Cognitive Sciences

July 24, 2024

The capacity to leverage information from others’ opinions is a hallmark of human cognition. Consequently, past research has investigated how we learn from others’ testimony. Yet a distinct form of social information—aggregated opinion—increasingly guides our judgments and decisions. We investigated how people learn from such information by conducting three experiments with participants recruited online within the United States (N = 886) comparing the predictions of three computational models: a Bayesian solution to this problem that can be implemented by a simple strategy for combining proportions with prior beliefs, and two alternatives from epistemology and economics. Across all studies, we found the strongest concordance between participants’ judgments and the predictions of the Bayesian model, though some participants’ judgments were better captured by alternative strategies. These findings lay the groundwork for future research and show that people draw systematic inferences from aggregated opinion, often in line with a Bayesian solution.
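
A minimal way to picture the "simple strategy for combining proportions with prior beliefs" is a Beta-Binomial update; the prior and the poll numbers below are invented for illustration and are not taken from the study.

```python
# Illustrative Beta-Binomial update for learning from aggregated opinion.
# Prior belief about a claim is Beta(a, b); observing that k of n people
# endorse the claim yields the posterior Beta(a + k, b + n - k).
a, b = 2.0, 2.0          # prior: weakly favors neither outcome
k, n = 14, 20            # aggregated opinion: 14 of 20 people endorse the claim

posterior_mean = (a + k) / (a + b + n)
print(f"prior mean = {a / (a + b):.2f}, posterior mean = {posterior_mean:.2f}")
# prior mean = 0.50, posterior mean = 0.67
```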

Research field(s)
Artificial Intelligence & Image Processing, Psychology & Cognitive Sciences

Typical models of learning assume incremental estimation of continuously-varying decision variables like expected rewards. However, this class of models fails to capture more idiosyncratic, discrete heuristics and strategies that people and animals appear to exhibit. Despite recent advances in strategy discovery using tools like recurrent networks that generalize the classic models, the resulting strategies are often onerous to interpret, making connections to cognition difficult to establish. We use Bayesian program induction to discover strategies implemented by programs, letting the simplicity of strategies trade off against their effectiveness. Focusing on bandit tasks, we find strategies that are difficult to express in, or unexpected under, classical incremental learning, such as asymmetric learning from rewarded and unrewarded trials, adaptive horizon-dependent random exploration, and discrete state switching.
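
The simplicity-versus-effectiveness trade-off at the heart of this approach can be sketched by scoring candidate strategies on a bandit task with a penalty for description length. The two candidate "programs", the complexity proxy, and the weighting below are illustrative assumptions, not the paper's program space or scoring rule.

```python
# Minimal sketch of trading off strategy simplicity against effectiveness:
# each candidate "program" gets a score combining its average reward on a
# two-armed bandit with a penalty for its (crudely measured) complexity.
import numpy as np

rng = np.random.default_rng(1)

def run(strategy, p=(0.3, 0.7), trials=500):
    total, state = 0, strategy.init()
    for _ in range(trials):
        arm = strategy.act(state)
        reward = int(rng.random() < p[arm])
        state = strategy.update(state, arm, reward)
        total += reward
    return total / trials

class WinStayLoseShift:
    complexity = 3                       # crude proxy for description length
    def init(self): return 0             # state = current arm
    def act(self, state): return state
    def update(self, state, arm, reward):
        return arm if reward else 1 - arm

class IncrementalQ:
    complexity = 6
    def init(self): return np.zeros(2)   # state = value estimates
    def act(self, state):
        return int(np.argmax(state)) if rng.random() > 0.1 else int(rng.integers(2))
    def update(self, state, arm, reward):
        state[arm] += 0.1 * (reward - state[arm])
        return state

lam = 0.02                               # weight of the simplicity prior
for s in (WinStayLoseShift(), IncrementalQ()):
    score = run(s) - lam * s.complexity
    print(type(s).__name__, round(score, 3))
```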

Research field(s)
Artificial Intelligence & Image Processing, Psychology & Cognitive Sciences

Autoregressive Large Language Models (LLMs) trained for next-word prediction have demonstrated remarkable proficiency at producing coherent text. But are they equally adept at forming coherent probability judgments? We use probabilistic identities and repeated judgments to assess the coherence of probability judgments made by LLMs. Our results show that the judgments produced by these models are often incoherent, displaying human-like systematic deviations from the rules of probability theory. Moreover, when prompted to judge the same event, the mean-variance relationship of probability judgments produced by LLMs shows an inverted-U shape like that seen in humans. We propose that these deviations from rationality can be explained by linking autoregressive LLMs to implicit Bayesian inference and drawing parallels with the Bayesian Sampler model of human probability judgments.
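
The coherence checks referred to here amount to eliciting judgments for related events and measuring deviations from identities that any coherent distribution must satisfy. In the sketch below, judge() is a hypothetical stand-in for querying a model, and the numbers are made up.

```python
# Minimal sketch of coherence checks on probability judgments: elicit
# judgments for related events and measure how far they deviate from
# identities that any coherent probability distribution must satisfy.
def judge(event: str) -> float:
    # Placeholder numbers; in the study these would be elicited from an LLM.
    return {"A": 0.7, "not A": 0.4, "A and B": 0.5, "A and not B": 0.35}[event]

# Identity 1: P(A) + P(not A) should equal 1.
complement_gap = judge("A") + judge("not A") - 1.0

# Identity 2: P(A) should equal P(A and B) + P(A and not B).
partition_gap = judge("A") - (judge("A and B") + judge("A and not B"))

print(f"complement deviation: {complement_gap:+.2f}")   # +0.10 -> incoherent
print(f"partition deviation:  {partition_gap:+.2f}")    # -0.15 -> incoherent
```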

Research field(s)
Artificial Intelligence & Image Processing, Psychology & Cognitive Sciences

January 1, 2024

The rapid development of machine learning has led to new opportunities for applying these methods to the study of human decision making. We highlight some of these opportunities and discuss some of the issues that arise when using machine learning to model the decisions people make. We first elaborate on the relationship between predicting decisions and explaining them, leveraging findings from computational learning theory to argue that, in some cases, the conversion of predictive models to interpretable ones with comparable accuracy is an intractable problem. We then identify an important bottleneck in using machine learning to study human cognition—data scarcity—and highlight active learning and optimal experimental design as a way to move forward. Finally, we touch on additional topics such as machine learning methods for combining multiple predictors arising from known theories and specific machine learning architectures that could prove useful for the study of judgment and decision making. In doing so, we point out connections to behavioral economics, computer science, cognitive science, and psychology.
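
One simple form of the active learning mentioned above is uncertainty sampling: fit a model to the responses collected so far and present next the stimulus the model is least certain about. The logistic model and candidate stimuli below are illustrative assumptions, not a method from the article.

```python
# Minimal sketch of uncertainty sampling as a response to data scarcity:
# fit a choice model to the data collected so far, then pick the candidate
# stimulus whose predicted choice probability is closest to 0.5.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
X_seen = rng.normal(size=(30, 2))                 # stimuli already shown
y_seen = (X_seen @ np.array([1.5, -1.0]) + 0.3 * rng.normal(size=30)) > 0

model = LogisticRegression().fit(X_seen, y_seen)

candidates = rng.normal(size=(200, 2))            # pool of possible next stimuli
p = model.predict_proba(candidates)[:, 1]
next_stimulus = candidates[np.argmin(np.abs(p - 0.5))]  # most uncertain case
print(next_stimulus)
```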

Research field(s)
Artificial Intelligence & Image Processing, Psychology & Cognitive Sciences

January 10, 2023

In this article, we develop two new and independent approaches to modeling epidemic spread in a network. Unlike the most commonly studied models, those developed here allow for contacts with different probabilities of transmitting the disease (transmissibilities). We then examine each of these models using mean-field-type approximations. The first model looks at the late-stage effects of an epidemic outbreak and allows for the computation of the probability that a given vertex was infected. This computation is based on a mean-field approximation and depends only on the number of contacts and their transmissibilities. This approach shares many similarities with percolation models in networks. The second model is a dynamic model, which we analyze using a mean-field approximation that greatly reduces the dimensionality of the system. In particular, the original system, which tracks each vertex of the network individually, is reduced to one with as many equations as there are distinct transmissibilities. Perhaps the greatest contribution of this article is the observation that, in both these models, the existence and size of an epidemic outbreak are linked to the properties of a matrix which we call the R-matrix. This is a generalization of the basic reproduction number which more precisely characterizes the main routes of infection. © 2023, The Author(s).
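
The kind of threshold condition described here can be illustrated by checking the spectral radius of a small matrix indexed by transmissibility classes. The entries below are invented, and the actual definition of the R-matrix is the one given in the article.

```python
# Illustrative outbreak-threshold check: a matrix indexed by transmissibility
# classes, with an outbreak possible when its spectral radius exceeds 1.
# The entries are invented; see the article for the R-matrix's definition.
import numpy as np

# Rows/columns: two classes of contacts with low and high transmissibility.
# Entry (i, j): expected number of class-i infections produced by one class-j case.
R = np.array([
    [0.4, 0.3],
    [0.8, 0.9],
])

spectral_radius = float(np.abs(np.linalg.eigvals(R)).max())
print(f"spectral radius = {spectral_radius:.2f}")
print("epidemic outbreak possible" if spectral_radius > 1 else "outbreak dies out")
```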

Research field(s)
Applied Sciences, Information & Communication Technologies, Artificial Intelligence & Image Processing

January 1, 2023

Showing or telling others that we are committed to cooperate with them can boost social cooperation. But what makes us willing to signal our cooperativeness when it is costly to do so? In two experiments, we tested the hypothesis that agents engage in social commitments if their subjective confidence in predicting the interaction partner’s behavior is low. In Experiment 1 (preregistered), 48 participants played a prisoner’s dilemma game where they could signal their intentions to their co-player by enduring a monetary cost. As hypothesized, low confidence in one’s prediction of the co-player’s intentions was associated with a higher willingness to engage in costly commitment. In Experiment 2 (31 participants), we replicated these findings and, moreover, provided causal evidence that experimentally lowering the predictability of others’ actions (and thereby confidence in these predictions) motivates commitment decisions. Finally, across both experiments, we show that participants possess and demonstrate metacognitive access to the accuracy of their mentalizing processes. Taken together, our findings shed light on the importance of confidence representations and metacognitive processes in social interactions. © 2023 American Psychological Association

Research field(s)
Applied Sciences, Information & Communication Technologies, Artificial Intelligence & Image Processing

November 1, 2019

The average judgment of large numbers of people has been found to be consistently better than the best individual response. But what motivates individuals when they make collective decisions? While it is a popular belief that individual incentives promote out-of-the-box thinking and diverse solutions, the exact role of motivation and reward in collective intelligence remains unclear. Here we examined collective intelligence in an interactive group estimation task where participants were rewarded for their individual or group’s performance. In addition to examining individual versus collective incentive structures, we controlled whether participants could see social information about the others’ responses. We found that knowledge about others’ responses reduced the wisdom of the crowd and, crucially, this effect depended on how people were rewarded. When rewarded for the accuracy of their individual responses, participants converged to the group mean, increasing social conformity, reducing diversity and thereby diminishing their group wisdom. When rewarded for their collective performance, diversity of opinions and the group wisdom increased. We conclude that the intuitive association between individual incentives and individualist opinion needs revising.
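
The mechanism described here (later respondents converging toward earlier answers, correlating errors and diminishing group wisdom) can be illustrated with a small simulation. The group size, noise level, and conformity weight below are invented for illustration, not parameters from the study.

```python
# Minimal simulation of the diversity effect: when each respondent shifts
# toward the answers already given, errors become correlated and the crowd
# average gets worse across repeated groups. All numbers are invented.
import numpy as np

rng = np.random.default_rng(3)
truth, noise_sd, group_size, n_groups, conformity = 100.0, 20.0, 30, 2000, 0.7

def crowd_error(social_info: bool) -> float:
    errors = []
    for _ in range(n_groups):
        responses = []
        for _ in range(group_size):
            own = truth + noise_sd * rng.standard_normal()
            if social_info and responses:
                own = (1 - conformity) * own + conformity * np.mean(responses)
            responses.append(own)
        errors.append(np.mean(responses) - truth)
    return float(np.sqrt(np.mean(np.square(errors))))

print("RMSE of crowd mean, independent responses:", round(crowd_error(False), 2))
print("RMSE of crowd mean, with social conformity:", round(crowd_error(True), 2))
```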

Research field(s)
Applied Sciences, Information & Communication Technologies, Artificial Intelligence & Image Processing