The work spans machine learning, neuroscience, and scientific modeling, with a consistent technical agenda: build systems that generalize reliably and quantify what they do not know.

Working Statement

I am an independent researcher with training in computational neuroscience, machine learning theory, and scientific modeling. Much of my recent work concerns uncertainty, regularization, and the geometry of learning; older but still active threads run through plasticity, motor control, colour, and representation.

uncertainty-aware ML · learning theory · computational neuroscience · adaptive control
Research Areas

A compact map of the work.

These threads fit together around a shared set of questions: how learning proceeds under limited evidence, how uncertainty can track reality, and how structured systems adapt without collapsing into brittle heuristics. The same lens also extends to scientific discovery itself when agents become collaborators in reasoning, search, and experiment design.

Uncertainty and calibration

Methods for making models express what is genuinely supported by data. This includes calibration-aware objectives, Bayesian neural networks, and uncertainty for neural PDE surrogates.
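As a concrete illustration of what "calibration-aware" means here, a minimal sketch (a textbook objective, not the specific method from these papers): the heteroscedastic Gaussian negative log-likelihood penalizes a model both for being wrong and for misstating how wrong it expects to be.

```python
import numpy as np

def gaussian_nll(y, mu, sigma):
    """Heteroscedastic Gaussian negative log-likelihood.

    A standard calibration-aware objective: the model is rewarded for
    predicting a variance that matches its actual squared error, not
    just for being accurate. Generic illustration only.
    """
    return np.mean(0.5 * np.log(2 * np.pi * sigma**2)
                   + 0.5 * (y - mu)**2 / sigma**2)

# Over-confident (sigma too small) and under-confident (sigma too large)
# predictions both pay a penalty relative to a well-matched one.
y = np.array([1.0, 2.0, 3.0])
mu = np.array([1.1, 1.9, 3.2])
err = np.abs(y - mu)                     # true per-point error scale
well = gaussian_nll(y, mu, err)          # sigma matched to actual error
over = gaussian_nll(y, mu, 0.1 * err)    # over-confident
under = gaussian_nll(y, mu, 10.0 * err)  # under-confident
assert well < over and well < under
```

Per point, the objective is minimized exactly when the predicted sigma equals the actual error magnitude, which is what makes it a calibration signal rather than a pure accuracy signal.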

Learning theory and geometry

A theory of learning time, finite-data thresholds, signal-to-noise, curvature, and high-dimensional effects in optimization and generalization.

Motor control and world models

Cerebellar-style controllers, embodied reinforcement learning, and reference-trajectory world models for fast adaptation under changing dynamics.

Plasticity, perception, and representation

Local learning rules, cortical representation, correlation-invariant synaptic plasticity, and a more speculative line of work on colour and associative structure.

AI science and scientific discovery

Research on how agent systems can support inquiry itself: literature synthesis, hypothesis generation, uncertainty-aware reasoning, and the structure of human-AI scientific collaboration.

Selected Highlights

A few anchor points.

Reliable AI

Cross-regularization, Twin-Boot, and Precise Bayesian Neural Networks all address the same practical issue: models should know when they are guessing.
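"Knowing when you are guessing" has a standard quantitative form: expected calibration error (ECE). The sketch below is a generic binary-classification version, not taken from any of the papers above.

```python
import numpy as np

def expected_calibration_error(probs, labels, n_bins=10):
    """Expected calibration error for binary predictions.

    Bins predictions by confidence and compares, within each bin, the
    mean predicted probability to the observed frequency of the
    positive class. A well-calibrated model has a small gap.
    """
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (probs > lo) & (probs <= hi)
        if mask.any():
            gap = abs(probs[mask].mean() - labels[mask].mean())
            ece += mask.mean() * gap  # weight bins by occupancy
    return ece

# A 0.5 predictor on balanced labels is perfectly calibrated (ECE ~ 0),
# even though it is maximally uncertain.
probs = np.full(1000, 0.5)
labels = np.array([0, 1] * 500)
assert expected_calibration_error(probs, labels) < 1e-9
```

The example makes the distinction explicit: the constant 0.5 predictor is useless for accuracy but honest about its uncertainty, which is exactly the property ECE isolates.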

Brains and control

Recent work uses motor adaptation as a meeting point for reinforcement learning, control theory, and cerebellar computation.

Technical depth

Background spans theoretical neuroscience, deep learning, scientific computing, competitive programming, and software engineering.

Other Work

Installations, colour, and consciousness.

Another part of the work centers on multi-agent public installations, colour perception, qualia, and the structure of experience.

Chatsubo: AI bar

A live multi-agent social simulation built as an AI bar for the 2024 Metamersion: Healing Algorithms exhibition in Lisbon. Autonomous bartenders, ghost patrons, memory, rumor, and human visitors all share the same evolving social environment.

Colour theory and consciousness

A research line on colour qualia as learned associative structure, including The Blue is Sky, work on empiricist theories of consciousness, and experiments on qualia drift under altered spectral environments.

Research Program

Selected papers and active threads.

Selected publications across learning theory, reliable AI, motor control, and computational neuroscience. Links point to conference, journal, or arXiv records.

2026
Scaling of learning time for high dimensional inputs
arXiv. Learning theory, geometry, signal-to-noise, and training-time structure.
2026
Direct Learning of Calibration-Aware Uncertainty for Neural PDE Surrogates
ICLR 2026 AI and PDE Workshop. Calibration-aware uncertainty in scientific machine learning.
2026
Thinking About Thinking With Machines That Think
ICLR 2026 Post-AGI Science and Society Workshop. Human-AI scientific reasoning, process flattening, and the decomposition principle.
2026
Directly Optimizing Calibrated Test-Time Uncertainty
ICLR 2026 TTU Workshop. Test-time uncertainty and calibrated prediction for unseen data.
2025
Precise Bayesian Neural Networks
arXiv. Bayesian deep learning with geometry-aware uncertainty.
2025
Twin-Boot: Uncertainty-Aware Optimization via Online Two-Sample Bootstrapping
arXiv. Bootstrap-inspired training signals for epistemic uncertainty.
2025
Cross-regularization: Adaptive Model Complexity through Validation Gradients
ICML 2025. Adaptive regularization and generalization control in large networks.
2025
World Models as Reference Trajectories for Rapid Motor Adaptation
NeurIPS 2025. Embodied RL, latent reference trajectories, and adaptive control.
2025
World Models as Reference Trajectories for Rapid Motor Adaptation
ICLR 2025 Robot Learning Workshop. Workshop version on rapid motor adaptation and cerebellar-style control.
2024
Learning what matters: Synaptic plasticity with invariance to second-order input correlations
PLOS Computational Biology. Normative neuroscience and representation learning with local rules.
2016
Nonlinear Hebbian learning as universal principle in receptive field development
PLOS Computational Biology. A unifying view of receptive field development and unsupervised feature learning.
Research Themes

How the pieces fit together.

Generalization without bluffing

The uncertainty work is not cosmetic calibration. It treats uncertainty, regularization, model size, and robustness as parts of the same generalization problem.

Learning as a geometric process

A central question is why some structures are learned quickly while others remain slow or unreachable, and how that depends on dimension, curvature, and finite data.
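A minimal instance of that question, in the textbook setting of gradient flow on a quadratic loss (illustrative, not a result from this work): each direction in parameter space is learned at a rate set by the local curvature along it.

```latex
\dot{w} = -\nabla L(w), \qquad
L(w) = \tfrac{1}{2}\,(w - w^{*})^{\top} H \,(w - w^{*})
\;\Longrightarrow\;
w(t) - w^{*} = \sum_{i} c_i\, e^{-\lambda_i t}\, v_i ,
\qquad \tau_i = \frac{1}{\lambda_i}
```

Here $v_i, \lambda_i$ are the eigenvectors and eigenvalues of the Hessian $H$: high-curvature modes are learned on short timescales $\tau_i$, while in high dimension a broad eigenvalue spectrum leaves some directions effectively unreachable at finite training time and finite data.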

Controllers inside learned systems

Motor-control work separates long-horizon policy learning from fast corrective control, both in robots and as a theoretical picture of cerebellar function.
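The separation of timescales can be sketched schematically: a slow process supplies a reference trajectory, and a fast inner loop corrects deviations from it. The PD structure and gains below are illustrative placeholders, not the published architecture.

```python
import numpy as np

def corrective_controller(x, x_ref, v, v_ref, kp=8.0, kd=2.0):
    """Fast corrective control around a slowly learned reference.

    The long-horizon policy supplies the reference state (x_ref, v_ref);
    this inner loop cancels tracking error at a much faster timescale.
    Gains kp, kd are illustrative placeholders.
    """
    return kp * (x_ref - x) + kd * (v_ref - v)

# Track a fixed reference from a perturbed start: the error decays
# without any change to the slow policy that set the reference.
dt, x, v = 0.01, 1.0, 0.0
for _ in range(1000):
    u = corrective_controller(x, 0.0, v, 0.0)
    v += dt * u
    x += dt * v
assert abs(x) < 1e-2
```

The design point is that perturbations are absorbed by the cheap inner loop, so the expensive long-horizon learner only needs to update when the reference itself is wrong.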

Brains as learning algorithms

The neuroscience line asks which local plasticity rules can plausibly learn structure from raw sensory inputs, and what that says about cortex and representation.
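A classic example of a local rule that learns structure from raw inputs is Oja's rule, used here as a generic illustration rather than as the specific plasticity models studied in this work: using only pre-synaptic activity, post-synaptic activity, and the current weight, it converges to the leading principal component of the input.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic inputs with one dominant correlated direction plus noise.
n, dim = 5000, 2
direction = np.array([1.0, 1.0]) / np.sqrt(2)
x = rng.normal(size=(n, 1)) * 3.0 * direction \
    + 0.3 * rng.normal(size=(n, dim))

# Oja's rule: dw = eta * y * (x - y * w).
# Local: each update uses only pre (x), post (y), and the weight itself.
w = rng.normal(size=dim)
eta = 0.005
for xi in x:
    y = w @ xi
    w += eta * y * (xi - y * w)

# The weight vector aligns with the leading principal component
# and self-normalizes to unit length (sign is arbitrary).
assert abs(abs(w @ direction) - 1.0) < 0.1
```

The `-y * w` term is the key: it keeps the weight norm bounded without any global normalization step, which is what makes the rule biologically plausible as stated.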

Science as a learning system

Another active thread asks how discovery changes when agents can read, compare, critique, and help structure scientific reasoning without flattening the human part of the process.

Consciousness and Public Work

Colour, qualia, and live systems in public.

Another active branch of the work connects philosophy of mind, colour perception, and public-facing agent systems. It sits closer to consciousness research and experimental aesthetics than to standard ML.

Curriculum Vitae


Academic history across research, service, outreach, and technical work.

Positions

2025-present

Independent researcher, Lisbon

Self-directed independent lab (NightCity Labs) researching trustworthy AI, uncertainty-aware machine learning, AI for science, computational neuroscience, and adaptive systems.

2022-2025

Research Scientist, Champalimaud Research

Natural Intelligence Lab, Champalimaud Centre for the Unknown, Lisbon.

2020-2021

Machine Learning Expert, Cambridge Spark

Course design, large-scale machine learning training material, and the G-Research Kaggle competition.

2020-2021

Visiting Scientist, EPFL

Laboratory of Computational Neuroscience, Lausanne.

2016-2019

Postdoctoral Researcher, Gatsby Computational Neuroscience Unit, UCL

Postdoctoral work in theoretical neuroscience and machine learning.

2006 / 2003

Engineering internships at Google and IAE/CTA

Software systems, optimization, and computational engineering.

Education

PhD in Computational Neuroscience, EPFL, 2010-2016.
MSc in Neuroscience, University of São Paulo, 2008-2009.
BSc in Computer Engineering, ITA, 2002-2007.

Awards

Silver Medal, International Mathematical Olympiad (IMO), 2001.
World Finalist, International Collegiate Programming Contest (ICPC), 2005 and 2006.
Gold Medal, Brazilian Mathematical Olympiad, 2000, 2001, 2003, and 2004.
Second Prize, International Mathematics Competition, 2004 and 2005.
First Prize, Brazilian Physics Olympiad, 2001.

Selected talks

Generalization, regularization and Bayesian neural networks. Gatsby UCL Transfer Talk, 2019.
Adaptive regularization by noise in deep neural networks. Albert Einstein Hospital Big Data Initiative, 2019.
Five ways to regularize deep neural networks. LCN EPFL Seminars, 2018.
Theory of synaptic plasticity for receptive field development. Oxford Neurotheory Seminars, 2015.

Service

Conference Media Chair, Cosyne, 2018-2025.
Seminar Organiser, Gatsby UCL Weekly External Seminars, 2017-2019.
External Seminar Organiser, Champalimaud Research, 2022-2024.
Founder and organiser, Theory Mini-Symposium series, 2022-2024.
Referee for Cosyne, NeurIPS, ICML, and ICLR.
Guest editor, PLOS Computational Biology.

Outreach

Science on the Walls | Ciência nas Paredes, Lisbon, 2024.
Artificial Intelligence and Medicine, Itaú Cultural, São Paulo, 2018.
How the brain represents the world, Arte della tavola, Lausanne, 2013.

Languages and tools

Portuguese, English, Spanish, French, and German. Technical stack includes Python, C++, Julia, Matlab, PyTorch, and agentic systems tooling.