Student researchers

We currently have several openings for Master's or (senior) Bachelor's students. Each position is paid and can be adapted into either a thesis or a research assistant project. Applications are considered until the positions are filled. Please reach out to the listed contacts. Physical presence in Munich is required.


Human decision-making in the wild
Contact: marcel.binz@helmholtz-munich.de
Tags: experimental study, decision-making, cognitive modeling
Description: How people make decisions is still heavily debated in psychology, neuroscience, and economics. In a recent paper [1], we showed that certain heuristics are resource-rational strategies in specific situations. Further experimental studies confirmed that it is precisely in these situations that people rely on heuristic decision-making. However, our study relied on a rather artificial experimental design that directly provided people with side information about the ranking or direction of features. In the real world, by contrast, people often rely on implicit, semantic knowledge to estimate feature rankings and directions. The goal of the present project is to address this shortcoming and investigate human decision-making on problems with semantic information. First, you will measure people's intuitions about relationships between different features. Then, you will construct problems in which people have strong intuitions about either feature rankings or directions and test them on human subjects. Finally, you will model human behavior using cognitive models.
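To make the role of feature rankings and directions concrete, here is a minimal, hypothetical sketch (the setup, names, and values are our own illustration, not the design used in [1]) contrasting a ranking-based heuristic with a weighted strategy on a paired-comparison problem:

# Illustrative sketch: two decision strategies for choosing between two
# options described by binary features, assuming a known feature ranking
# (order of assumed validity) and feature directions (sign of each cue).
import numpy as np

def take_the_best(option_a, option_b, ranking, directions):
    """Decide using only the highest-ranked feature that discriminates."""
    for f in ranking:                         # features ordered by assumed validity
        diff = option_a[f] - option_b[f]
        if diff != 0:
            return 0 if diff * directions[f] > 0 else 1
    return np.random.choice([0, 1])           # guess if no feature discriminates

def weighted_sum(option_a, option_b, weights):
    """Decide by comparing weighted sums over all features."""
    return 0 if weights @ (option_a - option_b) > 0 else 1

a = np.array([1, 0, 1])
b = np.array([0, 1, 1])
print(take_the_best(a, b, ranking=[0, 1, 2], directions=np.array([1, 1, -1])))  # -> 0
print(weighted_sum(a, b, weights=np.array([0.6, 0.3, 0.1])))                    # -> 0

The interesting question in this project is where such rankings and directions come from when they are not given explicitly but have to be inferred from semantic knowledge.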

[1] Binz, M., Gershman, S. J., Schulz, E., & Endres, D. (2022). Heuristics from bounded meta-learned inference. Psychological Review, 129(5), 1042.


Semantic label smoothing
Contact: luca.schulze-buschoff@helmholtz-munich.de, can.demircan@helmholtz-munich.de
Tags: machine learning, computer vision, object recognition, representation learning
Description: Computer vision models are often trained on large-scale visual object recognition data sets such as ImageNet. These data sets rely on a one-hot encoding approach for the target labels: the correct image class is set to 1, while all other classes are assigned 0. Therefore, if a model assigns the label “Dalmatian” to a “Hungarian Vizsla”, it is penalized in the same way as if it had assigned the label “House” instead. This ignores semantic structure entirely, even though semantic information can improve model performance and robustness if used correctly [1].
In this project, you will combine semantic information with an engineering trick called label smoothing [2], which is commonly used to train image models. You will design a loss function that combines the two concepts and train computer vision models on object recognition data sets (such as CIFAR). Finally, you will evaluate the models' performances and representations.
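As a rough starting point (the names, the similarity matrix S, and the mixing weight alpha below are placeholder assumptions; designing the actual loss is part of the project), one way to combine the two ideas is to replace the uniform smoothing distribution of standard label smoothing with one derived from class similarities:

# Minimal sketch: cross-entropy against targets softened by class similarities.
import torch
import torch.nn.functional as F

def semantic_label_smoothing_loss(logits, targets, S, alpha=0.1):
    """logits:  (batch, num_classes) model outputs
    targets: (batch,) integer class labels
    S:       (num_classes, num_classes) nonnegative class-similarity matrix
    alpha:   fraction of probability mass moved off the true class
    """
    num_classes = logits.shape[-1]
    one_hot = F.one_hot(targets, num_classes).float()
    # Normalize each row of S into a distribution over classes.
    sem = S[targets] / S[targets].sum(dim=-1, keepdim=True)
    soft_targets = (1 - alpha) * one_hot + alpha * sem
    log_probs = F.log_softmax(logits, dim=-1)
    return -(soft_targets * log_probs).sum(dim=-1).mean()

# Example call with a uniform similarity matrix.
logits = torch.randn(4, 10)
targets = torch.tensor([1, 3, 3, 7])
loss = semantic_label_smoothing_loss(logits, targets, S=torch.ones(10, 10))

With a uniform similarity matrix this reduces to standard label smoothing [2]; the project explores what happens when S encodes semantic structure instead.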

[1] Muttenthaler, L., Linhardt, L., Dippel, J., Vandermeulen, R. A., Hermann, K., Lampinen, A., & Kornblith, S. (2024). Improving neural network representations using human similarity judgments. Advances in Neural Information Processing Systems, 36.
[2] Müller, R., Kornblith, S., & Hinton, G. E. (2019). When does label smoothing help? Advances in Neural Information Processing Systems, 32.


The effects of repetition on set size effects in long-term memory recall
Contact: susanne.haridi@helmholtz-munich.de, mirko.thalmann@helmholtz-munich.de
Tags: experimental study, memory, recall, learning, set-size effects
Description: It has long been known that repetition benefits memory recall. In a recent study, we have also shown that the number of memories affects the time it takes a person to recall a specific memory. In this project, we want to investigate how these two factors interact. Based on our current modeling approaches, we expect that the slope of the set-size effect will decrease with repeated presentations. Furthermore, we are interested in whether repeating only some stimuli in a set harms or benefits the recall of the others. We plan to test this via an experimental study.
In this project, you will refine the experimental question, design an online study to answer it, program the study (we have lots of code already to help you out), collect the data, analyze the data, and write up the results. Accordingly, you will get the chance to experience most stages of experimental research in cognitive science.
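Purely as illustration (all numbers and the attenuation rule are invented assumptions, not findings), a few lines of simulation show the kind of interaction the study is designed to detect, namely a set-size slope that flattens with repetition:

# Toy simulation: recall time grows linearly with set size, and the
# slope is assumed to shrink with each additional repetition.
import numpy as np

rng = np.random.default_rng(0)
set_sizes = np.arange(2, 9)
repetitions = np.array([1, 2, 4])
base_slope, intercept, noise_sd = 120.0, 800.0, 50.0   # ms, made-up values

for r in repetitions:
    slope = base_slope / r                              # hypothesized attenuation
    rt = intercept + slope * set_sizes + rng.normal(0, noise_sd, set_sizes.size)
    fitted_slope, _ = np.polyfit(set_sizes, rt, deg=1)
    print(f"repetitions={r}: fitted set-size slope ~ {fitted_slope:.1f} ms/item")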


Reconsidering the evidence for fixed effects in cognitive science
Contact: mirko.thalmann@helmholtz-munich.de
Tags: statistical modeling, experimental data
Description: Cognitive science and cognitive psychology are mostly sciences of the average. That is, they postulate the existence or absence of an effect and then test, for example in a sample of participants, whether the effect is observable on average. In statistical terms, the average effect is referred to as the fixed effect or the group effect. Researchers often assume that people differ in how strongly they show a certain effect. Statistically, individual differences in the strength of an effect are referred to as random effects. Still, these models assume that there is one effect on average, and researchers are most often interested in showing that there is evidence in favor of or against this average, fixed effect.
In this thesis, you are going to develop an approach to test whether previously published results can be re-interpreted in terms of qualitatively different groups. For example, even though an effect may be positive on average in a large sample, some participants may not show the effect at all. The thesis will involve simulation and recovery studies to show whether the developed approach can detect qualitatively different groups in principle. Finally, you will apply this approach to empirical data, for example to data from the Horizon task [1].
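As a rough, hypothetical sketch of the simulation-and-recovery step (effect sizes, group proportions, and the model-comparison criterion are arbitrary assumptions), one could simulate a population in which only some participants show an effect and check whether a mixture model picks up the two groups:

# Simulate a population with a "no effect" subgroup and an "effect" subgroup,
# then ask whether a two-component mixture beats a single-group model.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
n_participants = 200
has_effect = rng.random(n_participants) < 0.6             # 60% show the effect
effects = np.where(has_effect,
                   rng.normal(0.5, 0.1, n_participants),  # effect group
                   rng.normal(0.0, 0.1, n_participants))  # null group
print("average (fixed) effect:", effects.mean())

X = effects.reshape(-1, 1)
bic_1 = GaussianMixture(n_components=1, random_state=0).fit(X).bic(X)
bic_2 = GaussianMixture(n_components=2, random_state=0).fit(X).bic(X)
print("BIC one group:", bic_1, " BIC two groups:", bic_2)  # lower is better

In the simulated data, the average effect is clearly positive even though a sizeable subgroup shows no effect at all, which is exactly the situation the project aims to detect in published data sets.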

[1] Wilson, R. C., Geana, A., White, J. M., Ludvig, E. A., & Cohen, J. D. (2014). Humans use directed and random exploration to solve the explore–exploit dilemma. Journal of Experimental Psychology: General, 143(6), 2074. https://doi.org/10.1037/a0038199