Projects
Find information about my current and past projects below.
Where applicable, collaborators leading those projects are highlighted.
My peer-reviewed and preprint publications can be found on Google Scholar.
All my project reports and presentations are hosted on figshare.
Ongoing
Shape/texture bias in brains and machines
With: Zejin Lu,
Radoslaw Cichy,
Tim Kietzmann
Summary: Building on
Geirhos et al. 2018, which showed that CNNs are texture-biased compared to humans, we redefine the shape bias metric and assess how recurrent processing and developmental trajectories influence the shape bias in RNNs and humans.
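For reference, the shape bias metric of Geirhos et al. is the fraction of shape-consistent decisions among all cue-consistent decisions on shape/texture cue-conflict images. A minimal sketch (the function name and input format are illustrative, not taken from the project):

```python
def shape_bias(decisions):
    """Shape bias in the style of Geirhos et al.: the fraction of
    shape-consistent decisions among all cue-consistent decisions
    on shape/texture cue-conflict images.

    decisions: one label per trial, each 'shape' (classified by the
    shape cue), 'texture' (classified by the texture cue), or 'other'.
    """
    shape = decisions.count("shape")
    texture = decisions.count("texture")
    if shape + texture == 0:
        return float("nan")  # no cue-consistent decisions to score
    return shape / (shape + texture)
```

A value near 1.0 indicates shape-driven decisions; in Geirhos et al.'s experiments, humans sit near the shape end of this scale while standard CNNs sit near the texture end.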
Multi-area readouts in brains and machines
With: Johannes Singer,
Radoslaw Cichy,
Tim Kietzmann
Summary: Given observations in
Birman & Gardner 2019 and
Yeh et al. 2023, we study whether reading out from multiple areas, in the human visual cortex and in ANNs, is useful for performing classification tasks.
Perception of rare inverted letters among upright ones
With: Jochem Koopmans,
Genevieve Quek,
Marius Peelen
Summary: In a Sperling-like task where the letters are mostly upright, observers tend to report inverted letters as upright, and do so to the same extent whether the inverted letters are occasionally present or absent altogether. Previously reported expectation-driven illusions might therefore be post-perceptual.
Comments: Jochem's master's thesis. Paper in preparation.
Flexible rule learning in brains and machines
With: Rowan Sommers,
Daniel Anthes,
Tim Kietzmann
Summary: Inspired by the Wisconsin Card Sorting Task, we study the priors needed for neural networks to learn object/scene-specific rules continually, and relate their behaviors to human behavior on the same tasks.
Decision-making RNNs
With: Jonas Bieber,
Tim Kietzmann
Summary: Human categorisation reaction times can be
inferred from RNN representations/outputs (
Spoerer et al. 2020,
Goetschalckx et al. 2023). Going further, we explore whether RNNs can
themselves generate RTs that are comparable to human RTs.
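One way a generative RT readout could work, sketched here purely as an illustration (not the project's actual method), is to let the RNN respond at the first timestep at which its output entropy drops below a confidence threshold, and take that timestep as the model's RT:

```python
import numpy as np

def entropy_threshold_rt(logits_over_time, threshold=0.5):
    """Hypothetical RT readout for a recurrent classifier: the first
    timestep at which the entropy of the softmax output falls below
    `threshold` is taken as the model's reaction time (in timesteps).

    logits_over_time: array of shape (T, n_classes), one row of
    classifier logits per recurrent timestep.
    """
    for t, logits in enumerate(logits_over_time):
        p = np.exp(logits - logits.max())  # stable softmax
        p /= p.sum()
        entropy = -(p * np.log(p + 1e-12)).sum()
        if entropy < threshold:
            return t + 1  # respond after this timestep
    return len(logits_over_time)  # never confident: maximum RT
```

Harder stimuli keep the output distribution flat for longer, yielding longer RTs, which is the qualitative pattern one would want to compare against human data.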
Representational drift in macaque visual cortex
With: Daniel Anthes,
Peter König,
Tim Kietzmann
Summary: Employing tools developed during our investigations into continual learning, we study whether representational drift occurs in the macaque visual cortex and how that multi-area system copes with changing representations.
Brain reading with a Transformer
With: Victoria Bosch,
Tim Kietzmann, et al.
Summary: Using fMRI responses to natural scenes to condition the sentence generation in a Transformer, we study the neural underpinnings of scene semantics (objects and their relationships) encoded in natural language.
2024
Assessing the emergence of an attention schema in object tracking
With: Lotta Piefke,
Adrien Doerig,
Tim Kietzmann
Summary: When tracking an object through clutter with spatial attention, an agent trained with reinforcement learning learns to create an explicit encoding of its attentional state: an attention schema. This schema is most useful when the attentional state cannot be inferred from the stimulus.
Comments: Preprint, under review
Structured representational drift aids continual learning
With: Daniel Anthes,
Peter König,
Tim Kietzmann
Summary: In contemporary continual learning, readout misalignment caused by learning-induced representational drift is a major problem. Constraining this drift to the null space of the readout, however, helps networks remain both stable and plastic.
Publication: CCN'23 paper
Comments: CCN paper in brief,
RDAC Preprint
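The core idea can be sketched as projecting representational updates into the null space of a linear readout, so that representations may drift while readout decisions are preserved. A toy illustration, not the paper's implementation:

```python
import numpy as np

def project_to_readout_nullspace(delta, W):
    """Project a representational update `delta` (n_features,) onto
    the null space of a linear readout W (n_classes, n_features),
    so that W @ (h + delta_proj) == W @ h for any representation h:
    the representation drifts, but the readout is unaffected.
    """
    # Remove the component of delta that lies in the row space of W.
    pinv = np.linalg.pinv(W)              # (n_features, n_classes)
    delta_proj = delta - pinv @ (W @ delta)
    return delta_proj
```

Because the readout only "sees" the row space of W, any drift confined to the complementary null space is invisible to it, leaving room for new learning without disturbing old decisions.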
Task-dependent characteristics of neural multi-object processing
With: Lu-Chun Yeh,
Marius Peelen
Summary: The association between the neural processing of multi-object displays and the representations of those objects presented in isolation is task-dependent: same/different judgements relate to earlier stages, and object search to later stages, of the MEG/fMRI signals.
Publication: JN'24 paper
Comments: JN paper in brief
Size-dependence of object search templates in natural scenes
With: Surya Gayet,
Marius Peelen, et al.
Summary: Object size varies with the object's location in a scene. During search for an object, the attentional template contains, in addition to the object's identity, information about its size, entangled with its identity and inferred from its location in the scene.
Publication: JEP:HPP'24 paper
Comments: JEP:HPP paper in brief
2023
How does recurrence interact with feedforward processing in RNNs?
With: Adrien Doerig,
Tim Kietzmann
Summary: In RNNs performing image classification, the feedforward sweep instantiates a representational arrangement that dovetails with the recurrence-induced "equal movement for all representations" prior, allowing classifications to be corrected.
Publication: CCN'23 paper
Comments: CCN paper in brief
2022
Statistical learning of distractor co-occurrences facilitates visual search
With: Genevieve Quek,
Marius Peelen
Summary: Efficient visual search relies on the co-occurrence statistics of distractor shapes. Increased search efficiency among co-occurring distractors is probably driven by faster and/or more accurate rejection of a distractor's partner as a possible target.
Publication: JOV'22 paper
Comments: JOV paper in brief
Bodies as features in visual search
With: Marius Peelen
Summary: Are high-level visual features prioritised spatially-globally, via feature-based attention? We found attentional gain modulation of the fMRI representations of body silhouettes, presented at task-irrelevant locations, in high-level visual cortex.
Publication: NeuroImage'22 paper
Comments: NeuroImage paper in brief,
Code + Data
2021
Recurrent operations in neural networks trained to recognise objects
With: Giacomo Aldegheri,
Tim Kietzmann
Summary: In a recurrent neural network trained for object categorization, the recurrent flow carries category-orthogonal object information (e.g. object location), which is used iteratively to constrain subsequent inferences about the object's category.
Publication: SVRHM'21 paper
Comments: SVRHM paper in brief
2019
The function of early task-based modulations in object detection
With: Giacomo Aldegheri,
Marcel van Gerven,
Marius Peelen
Summary: Task-based modulation of early visual processing in neural networks alleviates subsequent capacity limits caused by task and neural constraints. Bias/gain modulation of neural activations can be linked to tapping into a superposition of networks. Optimised neural modulations are
not feature-similarity gain modulations.
Publications: CCN'18 paper,
CCN'19 paper
The influence of scene information on object processing
With: Ilze Thoonen,
Sjoerd Meijer,
Marius Peelen
Summary: Task-irrelevant scene information biases categorization responses towards co-varying objects (e.g. cars on roads). However, across 4 experiments, we found no evidence that task-irrelevant scene information boosts the sensitivity of detecting co-varying objects. Further experiments are required to validate these observations.
Comments: Summary presentation
The nature of the animacy organization in human ventral temporal cortex
With: Daria Proklova,
Daniel Kaiser,
Marius Peelen
Summary: The animacy organisation in the ventral temporal cortex is not fully driven by visual feature differences (modelled with a CNN). It also depends on non-visual (inferred) factors such as agency (quantified through a behavioural task).
Publications: eLife'19 paper,
Masters Thesis
Comments: Masters thesis in brief,
eLife paper in brief
2016
Reverse dictionary using a word-definition based graph search
With: Varad Choudhari
Summary: A method to process any forward word dictionary into a reverse dictionary, using an n-hop reverse search through word definitions on a graph. Performs as well as the state of the art on a 3k-word lexicon, but does not scale well to 80k words.
Publication: COLING'16 Paper
Comments: COLING paper in brief
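A toy version of the idea (names and scoring are illustrative, not from the paper): invert the word-to-definition graph so that definition words point back to the headwords they define, then expand the query a few hops through that reversed graph:

```python
from collections import defaultdict

def build_reverse_index(dictionary):
    """dictionary: {headword: definition string}. Returns a mapping
    from each definition word to the set of headwords it defines."""
    index = defaultdict(set)
    for word, definition in dictionary.items():
        for token in definition.lower().split():
            index[token].add(word)
    return index

def reverse_lookup(index, query_words, hops=2):
    """Score headwords by how often the query words reach them
    within `hops` steps through reversed definition links."""
    frontier = set(w.lower() for w in query_words)
    scores = defaultdict(int)
    for _ in range(hops):
        next_frontier = set()
        for token in frontier:
            for head in index.get(token, ()):
                scores[head] += 1
                next_frontier.add(head)
        frontier = next_frontier  # reached headwords become new query tokens
    return sorted(scores, key=scores.get, reverse=True)
```

The n-hop expansion is what lets a query phrase reach headwords whose definitions share no words with it directly; it is also why the search blows up on large lexicons, since each hop multiplies the frontier.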
2015
A Spiking Neural Network as a Quadcopter Flight Controller
With: Sukanya Patil,
Bipin Rajendran
Summary: a. A model-based control system for quadcopter velocity-waypoint navigation.
b. Modular SNNs for real-time arithmetic operations, using plastic synapses. SNNs are hard to tame!
Publications: IJCNN'15 paper,
B.Tech. Thesis
Comments: Thesis rumination,
IJCNN paper in brief